https://en.wikipedia.org/wiki/Extreme%20value%20theory
Extreme value theory or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. It seeks to assess, from a given ordered sample of a given random variable, the probability of events that are more extreme than any previously observed. Extreme value analysis is widely used in many disciplines, such as structural engineering, finance, economics, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly. Data analysis Two main approaches exist for practical extreme value analysis. The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract the annual maxima (minima), generating an "Annual Maxima Series" (AMS). The second method relies on extracting, from a continuous record, the peak values reached during any period in which values exceed a certain threshold (or fall below a certain threshold, for minima). This method is generally referred to as the "Peak Over Threshold" method (POT). For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem, leading to the generalized extreme value distribution being selected for fitting. However, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. Given that the number of relevant random events within a year may be rather limited, it is unsurprising that analyses of observed AMS data often lead to distributions other than the generalized extreme value distribution.
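In the AMS approach, a return level such as the 100-year flood is simply a high quantile of the fitted generalized extreme value (GEV) distribution. A minimal sketch of that workflow using SciPy's genextreme; all data, sample sizes, and parameter values here are synthetic illustrations, not real hydrological figures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Pretend we have 50 years of daily observations; reduce to annual maxima (AMS).
daily = rng.gumbel(loc=100.0, scale=15.0, size=(50, 365))
annual_maxima = daily.max(axis=1)

# Fit a generalized extreme value distribution to the annual maxima.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# The "100-year flood" is the level exceeded with probability 1/100 in a year.
flood_100yr = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year flood level: {flood_100yr:.1f}")
```

The POT method would instead fit a generalized Pareto distribution to the exceedances over the chosen threshold.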
https://en.wikipedia.org/wiki/OBO%20Foundry
The Open Biological and Biomedical Ontologies (OBO) Foundry is a group of people dedicated to building and maintaining ontologies related to the life sciences. The OBO Foundry establishes a set of principles for ontology development for creating a suite of interoperable reference ontologies in the biomedical domain. Currently, more than a hundred ontologies follow the OBO Foundry principles. The OBO Foundry effort makes it easier to integrate biomedical results and carry out analyses in bioinformatics. It does so by offering a structured reference for terms of different research fields and their interconnections (e.g., a phenotype in a mouse model and its related phenotype in zebrafish). Introduction The Foundry initiative aims at improving the integration of data in the life sciences. One approach to integration is the annotation of data from different sources using controlled vocabularies. Ideally, such controlled vocabularies take the form of ontologies, which support logical reasoning over the data annotated using the terms in the vocabulary. The formalization of concepts in the biomedical domain is especially known via the work of the Gene Ontology Consortium, a part of the OBO Foundry. This has led to the development of certain proposed principles of good practice in ontology development, which are now being put into practice within the framework of the Open Biomedical Ontologies consortium through its OBO Foundry initiative. OBO ontologies form part of the resources of the National Center for Biomedical Ontology, where they form a central component of the NCBO's BioPortal. Open Biological and Biomedical Ontologies The Open Biological and Biomedical Ontologies (OBO; formerly Open Biomedical Ontologies) is an effort to create ontologies (controlled vocabularies) for use across biological and medical domains. A subset of the original OBO ontologies started the OBO Foundry, which has led the OBO efforts since 2007. The creation of OBO in 2001 w
https://en.wikipedia.org/wiki/Astrolabe
An astrolabe is an astronomical instrument dating to ancient times. It serves as a star chart and physical model of visible heavenly bodies. Its various functions also make it an elaborate inclinometer and an analog calculation device capable of working out several kinds of problems in astronomy. In its simplest form it is a metal disc with a pattern of wires, cutouts, and perforations that allows a user to calculate astronomical positions precisely. Historically used by astronomers, it is able to measure the altitude above the horizon of a celestial body, day or night; it can be used to identify stars or planets, to determine local latitude given local time (and vice versa), to survey, or to triangulate. It was used in classical antiquity, the Islamic Golden Age, the European Middle Ages and the Age of Discovery for all these purposes. The astrolabe is effective for determining latitude on land or calm seas. Although it is less reliable on the heaving deck of a ship in rough seas, the mariner's astrolabe was developed to solve that problem. Applications A 10th-century astronomer deduced that there were around 1000 applications for the astrolabe's various functions, and these ranged from the astrological, the astronomical and the religious, to seasonal and daily time-keeping and tide tables. At the time of their use, astrology was widely considered as serious a science as astronomy, and study of the two went hand-in-hand. The astronomical interest varied between folk astronomy (of the pre-Islamic tradition in Arabia), which was concerned with celestial and seasonal observations, and mathematical astronomy, which would inform intellectual practices and precise calculations based on astronomical observations. In regard to the astrolabe's religious function, the demands of Islamic prayer times were to be astronomically determined to ensure precise daily timings, and the qibla, the direction of Mecca towards which Muslims must pray, could also be determined.
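Several of these uses, finding the time from a star's altitude or one's latitude from a culminating star, rest on one spherical-astronomy relation between altitude h, latitude φ, declination δ, and hour angle H: sin h = sin φ sin δ + cos φ cos δ cos H. A small sketch of that relation; the star coordinates and latitude are illustrative values of my choosing:

```python
import math

def altitude_deg(lat_deg, dec_deg, hour_angle_deg):
    """Altitude of a body above the horizon for a given observer latitude,
    body declination, and local hour angle (all in degrees)."""
    lat, dec, H = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    sin_h = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(H)
    return math.degrees(math.asin(sin_h))

# At upper culmination (H = 0) the relation inverts, so latitude can be read
# off a culminating star: lat = dec + (90 - altitude) for a star crossing
# the meridian south of the zenith.
alt = altitude_deg(51.5, 10.0, 0.0)      # a London-like latitude, dec = 10 deg
lat_recovered = 10.0 + (90.0 - alt)
print(alt, lat_recovered)                # ~48.5 deg altitude, ~51.5 deg latitude
```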
https://en.wikipedia.org/wiki/IBM%20System/7
The IBM System/7 was a computer system designed for industrial control, announced on October 28, 1970 and first shipped in 1971. It was a 16-bit machine and one of the first made by IBM to use novel semiconductor memory instead of the magnetic core memory conventional at that date. IBM had earlier products in the industrial control market, notably the IBM 1800 which appeared in 1964. However, there was minimal resemblance in architecture or software between the 1800 series and the System/7. System/7 was designed and assembled in Boca Raton, Florida. Hardware architecture The processor designation for the system was IBM 5010. There were 8 registers which were mostly general purpose (capable of being used equally in instructions), although R0 had some extra capabilities for indexed memory access or system I/O. Later models may have been faster, but the versions existing in 1973 had register-to-register operation times of 400 ns, memory read operations at 800 ns, memory write operations at 1.2 µs, and direct I/O operations of generally 2.2 μs. The instruction set would be familiar to a modern RISC programmer, with the emphasis on register operations and few memory operations or fancy addressing modes. For example, the multiply and divide instructions were implemented in software and needed to be specifically built into the operating system to be used. The machine was physically compact for its day, designed around chassis/gate configurations shared with other IBM machines such as the 3705 communications controller; a typical configuration would take up one or two racks, and the smallest System/7s were shorter still. The usual console device was a Teletype Model 33 ASR (designated as the IBM 5028), which was also how the machine would generally read its boot loader sequence. Since the semiconductor memory emptied when it lost power (in those days, losing memory when you switched off the power was regarded as a novelty) and the S/7 didn't have ROM, the machi
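Because multiply and divide were software routines rather than hardware instructions, system code had to synthesize them from the register, shift, and add operations the hardware did provide. An illustrative shift-and-add multiply, written in Python for readability; this is a generic sketch of the technique, not actual System/7 code:

```python
def mul16(a: int, b: int) -> int:
    """Unsigned 16-bit shift-and-add multiply, returning the low 16 bits,
    as a machine without a hardware multiply instruction would compute it."""
    result = 0
    for bit in range(16):
        if (b >> bit) & 1:           # if this bit of b is set...
            result += a << bit       # ...add a shifted into position
    return result & 0xFFFF           # keep the low 16 bits, as on a 16-bit CPU

assert mul16(300, 7) == (300 * 7) & 0xFFFF
```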
https://en.wikipedia.org/wiki/H4R3me2
H4R3me2 is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the di-methylation at the 3rd arginine residue of the histone H4 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial. Nomenclature The name of this modification indicates dimethylation of arginine 3 on the histone H4 protein subunit. Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases. Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation. Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. Mechanism and function of modification JMJD6, a Jumonji domain-containing protein, was reported to demethylate H4R3me2. H4R3me2 is a major mark deposited by Prmt5. H4R8me2s is linked to transcriptional repression and is tightly linked with H4R3me2s methylation. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between th
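The nomenclature is regular enough to parse mechanically: histone, residue letter, position, modification, multiplicity, plus an optional s/a suffix for symmetric or asymmetric di-methylation. A small illustrative parser; the regex and field names are my own for this sketch, not a community standard:

```python
import re

MARK = re.compile(r"^(H[A-Z0-9.]+?)([A-Z])(\d+)(me|ac|ph|ub)(\d*)([as]?)$")

def parse_mark(name: str) -> dict:
    m = MARK.match(name)
    if not m:
        raise ValueError(f"unrecognized mark: {name}")
    histone, residue, pos, mod, count, sym = m.groups()
    return {
        "histone": histone,           # e.g. H4
        "residue": residue,           # R = arginine, K = lysine, ...
        "position": int(pos),         # 3rd residue from the N-terminus
        "modification": mod,          # me = methylation, ac = acetylation, ...
        "count": int(count or 1),     # 2 = di-methylation
        "symmetry": sym or None,      # s/a = symmetric/asymmetric di-methyl
    }

print(parse_mark("H4R3me2"))
print(parse_mark("H4R3me2s"))
```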
https://en.wikipedia.org/wiki/Eric%20Bradlow
Eric Thomas Bradlow is K.P. Chao Professor, Professor of Marketing, Statistics, Education and Economics, Chairperson of the Wharton Marketing Department, and Vice-Dean of Analytics at the Wharton School of the University of Pennsylvania. He is known for his work on marketing research methods, missing data problems, and psychometrics. He is a fellow of the American Statistical Association and a fellow of the American Educational Research Association. Bradlow is also co-founder of GBH Insights, a data-focused marketing strategy and insights firm that caters to Fortune 500 companies. Awards American Marketing Association, EXPLOR Award (2007) 2006 Research Committee of the Society of General Internal Medicine Best Paper Award Inaugural Fellow of the University of Pennsylvania (2009) Finalist, John D.C. Little Award (2008) for best paper in Marketing Science or Management Science Finalist, Paul E. Green Award for the best paper in Journal of Marketing Research (2004) Finalist, 1997 American Statistical Association Savage Award Dissertation Prize Books Wainer, H., Bradlow, E.T., and Wang, X. (2007), "Testlet Response Theory and Its Applications", Cambridge University Press. Bradlow, E.T., Niedermeier, K., Williams, P. (2009), "Marketing in the Financial Services Industry", McGraw-Hill, New York. Selected publications Chandon, P., Hutchinson, J. W., Bradlow, E. T., & Young, S. H. (2009). Does in-store marketing work? Effects of the number and position of shelf facings on brand attention and evaluation at the point of purchase. Journal of Marketing, 73(6), 1-17. Hui, Sam K., Eric T. Bradlow, and Peter S. Fader. "Testing behavioral hypotheses using an integrated model of grocery store shopping path and purchase behavior." Journal of Consumer Research 36, no. 3 (2009): 478-493. Werner, Rachel M., and Eric T. Bradlow. "Relationship between Medicare's hospital compare performance measures and mortality rates." JAMA 296, no. 22 (2006): 2694-2702. Park, Young-Hoon, a
https://en.wikipedia.org/wiki/Divergence%20%28computer%20science%29
In computer science, a computation is said to diverge if it does not terminate or terminates in an exceptional state. Otherwise it is said to converge. In domains where computations are expected to be infinite, such as process calculi, a computation is said to diverge if it fails to be productive (i.e. to continue producing an action within a finite amount of time). Definitions Various subfields of computer science use varying, but mathematically precise, definitions of what it means for a computation to converge or diverge. Rewriting In abstract rewriting, an abstract rewriting system is called convergent if it is both confluent and terminating. The notation t ↓ n means that t reduces to normal form n in zero or more reductions, t↓ means t reduces to some normal form in zero or more reductions, and t↑ means t does not reduce to a normal form; the latter is impossible in a terminating rewriting system. In the lambda calculus an expression is divergent if it has no normal form. Denotational semantics In denotational semantics an object function f : A → B can be modelled as a mathematical function f⊥ : A ∪ {⊥} → B ∪ {⊥}, where ⊥ (bottom) indicates that the object function or its argument diverges. Concurrency theory In the calculus of communicating sequential processes (CSP), divergence is a drastic situation where a process performs an endless series of hidden actions. For example, consider the following process, defined by CSP notation: Clock = tick → Clock. The traces of this process are defined as: traces(Clock) = {⟨⟩, ⟨tick⟩, ⟨tick, tick⟩, …} = {tick}*. Now, consider the following process, which conceals the tick event of the Clock process: P = Clock \ tick. By definition, P is called a divergent process. See also Infinite loop Termination analysis Notes
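A concrete illustration of the first definition; the function names are mine and the encoding of the lambda-calculus term is a sketch:

```python
# A diverging computation in the ordinary sense: `loop` never terminates,
# so a call like loop(0) denotes bottom (⊥) in a denotational semantics.
def loop(x):
    while True:
        pass

# The classic divergent lambda-calculus term Omega = (λx. x x)(λx. x x)
# reduces to itself forever and has no normal form. Encoded in Python, an
# attempt to evaluate it overflows the call stack, i.e. it terminates in an
# exceptional state, which the definition above also counts as divergence.
def omega():
    return (lambda x: x(x))(lambda x: x(x))
```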
https://en.wikipedia.org/wiki/European%20Culture%20Collections%27%20Organisation
The European Culture Collections' Organisation (ECCO) is a European non-profit organisation which promotes the collaboration and exchange of ideas and information on all aspects of culture collection activity. Corporate members of ECCO are microbial resource centres of countries with microbiological societies affiliated to the Federation of European Microbiological Societies (FEMS). History The European Culture Collections' Organisation (ECCO) was established in 1981. See also American Type Culture Collection (ATCC) World Federation for Culture Collections External links ECCO
https://en.wikipedia.org/wiki/Simon%20Memorial%20Prize
The Simon Memorial Prize is an award that honors 'distinguished work in experimental or theoretical low temperature physics'. The prize is awarded by the Institute of Physics and is presented at the International Conference on Low Temperature Physics, which takes place every three years. The prize is named after Francis Simon, who contributed eminently to the field of low-temperature physics. The first prize was awarded in 1959 to Heinz London. Not to be confused with the Robert Simon Memorial Prize awarded for dissertations from the Department of Applied Physics and Applied Mathematics of Columbia University. Winners The following have won this prize: See also Institute of Physics Awards List of physics awards List of awards named after people
https://en.wikipedia.org/wiki/European%20Conference%20on%20Computational%20Biology
The European Conference on Computational Biology (ECCB) is a scientific meeting on the subjects of bioinformatics and computational biology. It covers a wide spectrum of disciplines, including bioinformatics, computational biology, genomics, computational structural biology, and systems biology. ECCB is organized annually in different European cities. Since 2007, the conference has been held jointly with Intelligent Systems for Molecular Biology (ISMB) every second year. The conference also hosts the European ISCB Student Council Symposium. The proceedings of the conference are published by the journal Bioinformatics. History Formation ECCB was formed with the intent of providing a European conference focusing on advances in computational biology and their application to problems in molecular biology. The conference was initially to be held on a rotating basis, with the idea that previously successful regional conferences (for instance, the German Conference on Bioinformatics (GCB), the French Journées Ouvertes Biologie, Informatique et Mathématiques (JOBIM) conference and the British Genes, Proteins & Computers (GPC) conference) would be jointly held with ECCB if that region was hosting ECCB in that particular year. The first ECCB conference was held in October 2002 in Saarbrücken, Germany and was chaired by Hans-Peter Lenhof. 69 scientific papers were submitted to the conference. Partnership with ISMB In 2004, ECCB was jointly held with the Intelligent Systems for Molecular Biology (ISMB) conference for the first time. It was also co-located with the Genes, Proteins & Computers conference. This meeting, held in Glasgow, UK, was the largest bioinformatics conference ever held, attended by 2,136 delegates, submitting 496 scientific papers. ISCB Board member and Director of the Spanish National Bioinformatics Institute Alfonso Valencia considers ISMB/ECCB 2004 to be an important milestone in the history of ISMB: "it was the first one where the balance between Euro
https://en.wikipedia.org/wiki/Accelerator%20physics%20codes
A charged particle accelerator is a complex machine that takes elementary charged particles and accelerates them to very high energies. Accelerator physics is a field of physics encompassing all the aspects required to design and operate the equipment and to understand the resulting dynamics of the charged particles. There are software packages associated with each such domain, and a large number of such codes exist. The 1990 edition of the Los Alamos Accelerator Code Group's compendium provides summaries of more than 200 codes. Certain of those codes are still in use today, although many are obsolete. Another index of existing and historical accelerator simulation codes is located at Single particle dynamics codes For many applications it is sufficient to track a single particle through the relevant electric and magnetic fields. Old codes no longer maintained by their original authors or home institutions include: BETA, AGS, ALIGN, COMFORT, DESIGN, DIMAD, HARMON, LEGO, LIAR, MAGIC, MARYLIE, PATRICIA, PETROS, RACETRACK, SYNCH, TRANSPORT, TURTLE, and UAL. Some legacy codes are maintained by commercial organizations for academic, industrial and medical accelerator facilities that continue to use those codes. TRANSPORT, TRACE 3-D and TURTLE are among the historic codes that are commercially maintained. Major maintained codes are compared in a table whose columns are defined as follows. Spin Tracking: tracking of a particle's spin. Taylor Maps: construction of Taylor series maps to high order that can be used for simulating particle motion and also for such things as extracting single-particle resonance strengths. Weak-Strong Beam-Beam Interaction: can simulate the beam-beam interaction with the simplification that one beam is essentially fixed in size (see below for a list of strong-strong interaction codes). Electromagnetic Field Tracking: can track (ray trace) a particle through arbitrary electromagnetic fields. Higher Energy Collective effects The interactions between the particles in the beam
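As a flavor of what single-particle dynamics codes do, here is a minimal linear-optics sketch using 2×2 transfer matrices for drifts and thin-lens quadrupoles (a textbook FODO cell); the element lengths and focal lengths are illustrative and not tied to any code listed above:

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix for a field-free drift of length L (one plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """2x2 thin-lens quadrupole of focal length f (focusing if f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# One FODO cell: focusing quad, drift, defocusing quad, drift.
cell = drift(1.0) @ thin_quad(-2.0) @ drift(1.0) @ thin_quad(2.0)

# Track a particle with initial offset x = 1 mm, angle x' = 0 through 100 cells.
state = np.array([1e-3, 0.0])
for _ in range(100):
    state = cell @ state
print(state)   # remains bounded because the cell is stable (|trace| < 2)
```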
https://en.wikipedia.org/wiki/Piczo
Piczo was a social networking and blogging website for teens. It was founded in 2003 by Jim Conning in San Francisco, California. Early investors included Catamount, Sierra Ventures, U.S. Venture Partners, and Mangrove Capital Partners. In March 2009, Piczo was acquired by Stardoll (Stardoll AB, with CEO Mattias Miksche). After the acquisition Piczo was led by Stardoll's CEO Mattias Miksche and his Stardoll team. In September 2012, Piczo.com was acquired by Posh Media Group (PMGcom Publishing AB, with CEO Christofer Båge). In November 2012, Piczo.com shut down. The company's eponymous service, Piczo (Piczo.com), was an online photo website builder and community offering free advertising-supported websites. Launched in 2005, Piczo allowed users to add images, text, guestbooks, message boxes, videos, music and other content to their site using plain text and HTML. Partners included YouTube, VideoEgg, Photobucket, Flock, Yahoo & PollDaddy. When it began, the company's focus was individual web-page design, and blogs were not included as a feature. In addition to the website development aspect of the site, Piczo once had a user-generated content repository (the Piczo Zone) where users could browse, post, and consume content that they or others had used on their sites. Later on, Piczo remodelled the entire site, and the Piczo Zone, along with many other features, was no longer available. One of the features that stayed was "The Board", where Piczo informed users about HTML and Internet safety, though most of its pages were designed for the old Piczo. In August 2010 Piczo announced "Piczo Plus", a feature that allowed users to buy an "ad-free" site; it is no longer available to purchase. Popularity Piczo saw around 10 million unique visitors a month. While primarily offering services in English and German, Piczo was also available in French, Spanish, Romanian, Russian, Japanese, and Korean BETA versions. The service was very popular with a teenage audience in B
https://en.wikipedia.org/wiki/Cryptozoa
Cryptozoa is the collective name for small animals that live in darkness and under conditions of high relative humidity, as in the wet soil underneath rocks, decomposing tree bark, etc. Examples include pseudoscorpions, slugs, centipedes and earwigs. The habitat of the cryptozoa allows avoidance of fluctuations in temperature and humidity, which makes the range of considerably different species it contains quite remarkable. Moreover, the cryptozoa are notable for including many often unnamed varieties of organisms. The term "cryptozoic fauna" was originally coined by Arthur Dendy. Habitat Sometimes referred to as the cryptozoic niche, the habitat allowing for cryptozoic life is characterized by a shielding from exterior light sources, with stable and cool temperature and high humidity. Forest humus and leaf litter can provide the necessary conditions for cryptozoic life in part because of the shielding from surrounding trees. Nonetheless, temperate woodlands are not the only ground for such a habitat: the tropics and the desert are often suitable for cryptozoa, such as scorpions or Solifugae. Examples Examples of the cryptozoa include land-planarians, amphipods, pill-woodlice, centipedes, pill-millipedes, thysanurans, false-scorpions and oribatid mites.
https://en.wikipedia.org/wiki/Rayleigh%20theorem%20for%20eigenvalues
In mathematics, the Rayleigh theorem for eigenvalues pertains to the behavior of the solutions of an eigenvalue equation as the number of basis functions employed in its resolution increases. Rayleigh, Lord Rayleigh, and 3rd Baron Rayleigh are the titles of John William Strutt, after the death of his father, the 2nd Baron Rayleigh. Lord Rayleigh made contributions not just to both theoretical and experimental physics, but also to applied mathematics. The Rayleigh theorem for eigenvalues, as discussed below, enables the energy minimization that is required in many self-consistent calculations of electronic and related properties of materials, from atoms, molecules, and nanostructures to semiconductors, insulators, and metals. Except for metals, most of these other materials have an energy or a band gap, i.e., the difference between the lowest, unoccupied energy and the highest, occupied energy. For crystals, the energy spectrum is in bands and there is a band gap, if any, as opposed to an energy gap. Given the diverse contributions of Lord Rayleigh, his name is associated with other theorems, including Parseval's theorem. For this reason, keeping the full name of "Rayleigh Theorem for Eigenvalues" avoids confusion. Statement of the theorem The theorem, as indicated above, applies to the resolution of equations called eigenvalue equations, i.e., the ones of the form Hψ = λψ, where H is an operator, ψ is a function and λ is a number called the eigenvalue. To solve problems of this type, we expand the unknown function ψ in terms of known functions. The number of these known functions is the size of the basis set. The expansion coefficients are also numbers. The number of known functions included in the expansion, the same as that of coefficients, is the dimension of the Hamiltonian matrix that will be generated. The statement of the theorem follows. Let an eigenvalue equation be solved by linearly expanding the unknown function in terms of N known functions. Let the resulting ordered eigenvalues be λ1 ≤ λ2 ≤ … ≤ λN. If the same equation is then solved with a basis of N + 1 functions that contains the previous N, the resulting ordered eigenvalues λ'1 ≤ λ'2 ≤ … ≤ λ'N+1 satisfy λ'i ≤ λi for i = 1, …, N: every eigenvalue decreases or stays the same as the basis set is enlarged.
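A small numerical illustration of the theorem: solving in a nested basis corresponds to diagonalizing nested principal submatrices of a Hermitian matrix, so each eigenvalue can only decrease (or stay put) as N grows. The matrix here is random, purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2                             # a symmetric "Hamiltonian" matrix

prev = None
for N in range(2, 9):
    evals = np.linalg.eigvalsh(H[:N, :N])     # solve in an N-function basis
    if prev is not None:
        # each of the first N-1 eigenvalues is <= its value in the smaller basis
        assert np.all(evals[:N - 1] <= prev + 1e-12)
    prev = evals
    print(N, np.round(evals, 3))
```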
https://en.wikipedia.org/wiki/Trellis%20quantization
Trellis quantization is an algorithm that can improve data compression in DCT-based encoding methods. It is used to optimize residual DCT coefficients after motion estimation in lossy video compression encoders such as Xvid and x264. Trellis quantization reduces the size of some DCT coefficients while recovering others to take their place. This process can increase quality, because the coefficients chosen by trellis quantization have the lowest rate-distortion cost. Trellis quantization effectively finds the optimal quantization for each block to maximize the PSNR relative to bitrate. Its effectiveness varies with the input data and compression method.
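The underlying idea is Lagrangian rate-distortion optimization: among candidate quantized levels (in a real encoder, among paths through a trellis of coefficient states searched with Viterbi-style dynamic programming), pick the choice minimizing D + λR. A deliberately simplified per-coefficient sketch with a toy rate model; all constants and the rate formula are illustrative assumptions:

```python
import math

def rd_quantize(coeffs, qstep, lam=4.0):
    """Choose each quantized level to minimize distortion + lam * rate."""
    out = []
    for c in coeffs:
        best_level, best_cost = 0, None
        base = int(abs(c) / qstep)
        for level in {0, base, base + 1}:            # candidate levels
            recon = level * qstep * (1 if c >= 0 else -1)
            dist = (c - recon) ** 2                  # squared-error distortion
            rate = 0 if level == 0 else 1 + 2 * math.log2(1 + level)  # toy bits
            cost = dist + lam * rate                 # Lagrangian cost
            if best_cost is None or cost < best_cost:
                best_level, best_cost = level, cost
        out.append(best_level * (1 if c >= 0 else -1))
    return out

# Small coefficients tend to be zeroed out, trading distortion for rate.
print(rd_quantize([12.4, -3.1, 0.7, 25.0], qstep=4.0))
```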
https://en.wikipedia.org/wiki/Lock-in%20amplifier
A lock-in amplifier is a type of amplifier that can extract a signal with a known carrier wave from an extremely noisy environment. Depending on the dynamic reserve of the instrument, signals up to a million times smaller than noise components, potentially fairly close by in frequency, can still be reliably detected. It is essentially a homodyne detector followed by a low-pass filter that is often adjustable in cut-off frequency and filter order. The device is often used to measure phase shift, even when the signals are large, have a high signal-to-noise ratio and do not need further improvement. Recovering signals at low signal-to-noise ratios requires a strong, clean reference signal with the same frequency as the received signal. This is not the case in many experiments, so the instrument can recover signals buried in the noise only in a limited set of circumstances. The lock-in amplifier is commonly believed to have been invented by Princeton University physicist Robert H. Dicke, who founded the company Princeton Applied Research (PAR) to market the product. However, in an interview with Martin Harwit, Dicke claims that even though he is often credited with the invention of the device, he believes that he read about it in a review of scientific equipment written by Walter C. Michels, a professor at Bryn Mawr College. This could have been a 1941 article by Michels and Curtis, which in turn cites a 1934 article by C. R. Cosens, while a later article was written by C. A. Stutt in 1949. Whereas traditional lock-in amplifiers use analog frequency mixers and RC filters for the demodulation, state-of-the-art instruments have both steps implemented by fast digital signal processing, for example, on an FPGA. Usually sine and cosine demodulation is performed simultaneously, which is sometimes also referred to as dual-phase demodulation. This allows the extraction of the in-phase and the quadrature component that can then be transferred into polar coordinates, i.e., the signal's amplitude and phase.
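A minimal numerical sketch of dual-phase demodulation, assuming a known reference frequency; all amplitudes, frequencies, and noise levels are illustrative:

```python
import numpy as np

fs, f_ref, T = 100_000.0, 1_000.0, 10.0    # sample rate (Hz), reference (Hz), duration (s)
t = np.arange(int(fs * T)) / fs

rng = np.random.default_rng(0)
signal = 1e-3 * np.sin(2 * np.pi * f_ref * t + 0.3)     # tiny signal, 0.3 rad phase
noisy = signal + rng.normal(scale=0.05, size=t.size)    # buried well under the noise

# Multiply by sine and cosine at the reference, then low-pass by averaging
# (the crudest possible low-pass filter; real instruments use adjustable
# filter stages). The factor 2 undoes the 1/2 from the product-to-sum identity.
X = 2 * np.mean(noisy * np.sin(2 * np.pi * f_ref * t))  # in-phase component
Y = 2 * np.mean(noisy * np.cos(2 * np.pi * f_ref * t))  # quadrature component

R, phase = np.hypot(X, Y), np.arctan2(Y, X)
print(R, phase)   # ~1e-3 amplitude and ~0.3 rad, recovered to within a few percent
```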
https://en.wikipedia.org/wiki/Activity-regulated%20cytoskeleton-associated%20protein
Activity-regulated cytoskeleton-associated protein is a plasticity protein that in humans is encoded by the ARC gene. The gene is believed to derive from a retrotransposon. The protein is found in the neurons of tetrapods and other animals, where it can form virus-like capsids that transport RNA between neurons. ARC mRNA is localized to activated synaptic sites in an NMDA receptor-dependent manner, where the newly translated protein is believed to play a critical role in learning and memory-related molecular processes. Arc protein is widely considered to be important in neurobiology because of its activity regulation, localization, and utility as a marker for plastic changes in the brain. Dysfunction in the production of Arc protein has been implicated as an important factor in understanding various neurological conditions, including amnesia, Alzheimer's disease, autism spectrum disorders, and fragile X syndrome. ARC was first characterized in 1995 and is a member of the immediate-early gene (IEG) family, a rapidly activated class of genes functionally defined by their ability to be transcribed in the presence of protein synthesis inhibitors. Along with other IEGs such as ZNF268 and HOMER1, ARC is a significant tool for systems neuroscience, as illustrated by the development of the cellular compartment analysis of temporal activity by fluorescence in situ hybridization, or catFISH technique (see fluorescent in situ hybridization). Gene The ARC gene, located on chromosome 15 in the mouse, chromosome 7 in the rat, and chromosome 8 in the human, is conserved across vertebrate species and has low sequence homology to spectrin, a cytoskeletal protein involved in forming the actin cellular cortex. A number of promoter and enhancer regions have been identified that mediate activity-dependent Arc transcription: a serum response element (SRE; see serum response factor) at ~1.5 kb upstream of the initiation site; a second SRE at ~6.5 kb; and a synaptic activity response element
https://en.wikipedia.org/wiki/Gregori%20Aminoff%20Prize
The Gregori Aminoff Prize is an international prize awarded since 1979 by the Royal Swedish Academy of Sciences in the field of crystallography, rewarding "a documented, individual contribution in the field of crystallography, including areas concerned with the dynamics of the formation and dissolution of crystal structures. Some preference should be shown for work evincing elegance in the approach to the problem." The prize, which is named in memory of the Swedish scientist and artist Gregori Aminoff (1883–1947), Professor of Mineralogy at the Swedish Museum of Natural History from 1923, was endowed through a bequest by his widow Birgit Broomé-Aminoff. The prize can be shared by several winners. It has been described as the Nobel Prize of crystallography. Recipients of the Prize Source: Royal Swedish Academy of Sciences See also List of chemistry awards List of physics awards List of awards named after people
https://en.wikipedia.org/wiki/Escape%20response
Escape response, escape reaction, or escape behavior is a mechanism by which animals avoid potential predation. It consists of a rapid sequence of movements, or lack of movement, that positions the animal in such a way that it can hide, freeze, or flee from the supposed predator. Often, an animal's escape response is representative of an instinctual defensive mechanism, though there is evidence that these escape responses may be learned or influenced by experience. The classical escape response follows this generalized, conceptual timeline: threat detection, escape initiation, escape execution, and escape termination or conclusion. Threat detection alerts an animal to a potential predator or otherwise dangerous stimulus, which provokes escape initiation through neural reflexes or more coordinated cognitive processes. Escape execution refers to the movement or series of movements that will hide the animal from the threat or will allow the animal to flee. Once the animal has effectively avoided the predator or threat, the escape response is terminated. Upon completion of the escape behavior or response, the animal may integrate the experience with its memory, allowing it to learn and adapt its escape response. Escape responses are a form of anti-predator behaviour that can vary from species to species. The behaviors themselves differ depending upon the species, but may include camouflaging techniques, freezing, or some form of fleeing (jumping, flying, withdrawal, etc.). In fact, variation between individuals is linked to increased survival. In addition, it is not merely increased speed that contributes to the success of the escape response; other factors, including reaction time and the individual's context, can play a role. The individual escape response of a particular animal can vary based on an animal's previous experiences and its current state. Evolutionary importance The ability to perform an effective escape maneuver directly affects the fitness of the
https://en.wikipedia.org/wiki/Level-spacing%20distribution
In mathematical physics, level spacing is the difference between consecutive elements in some set of real numbers. In particular, it is the difference between consecutive energy levels or eigenvalues of a matrix or linear operator.
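A quick numerical illustration of level spacings, assuming a random symmetric matrix as the operator (the matrix size and seed are arbitrary choices): the spacings of such matrices famously show level repulsion, with far fewer near-zero spacings than an uncorrelated sequence would have.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2)              # symmetric (GOE-like) random matrix

levels = np.sort(np.linalg.eigvalsh(H))
bulk = levels[n // 4 : 3 * n // 4]      # stay in the bulk of the spectrum
spacings = np.diff(bulk)
s = spacings / spacings.mean()          # normalize to unit mean spacing

# For an uncorrelated (Poisson) sequence ~9.5% of spacings fall below 0.1;
# here level repulsion pushes that fraction down to roughly 1%.
print(np.mean(s < 0.1))
```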
https://en.wikipedia.org/wiki/Motor%20coordination
In physiology, motor coordination is the orchestrated movement of multiple body parts as required to accomplish intended actions, like walking. This coordination is achieved by adjusting kinematic and kinetic parameters associated with each body part involved in the intended movement. The modification of these parameters typically relies on sensory feedback from one or more sensory modalities (see multisensory integration), such as proprioception and vision. Properties Large Degrees of Freedom Goal-directed and coordinated movement of body parts is inherently variable because there are many ways of coordinating body parts to achieve the intended movement goal. This is because the number of degrees of freedom (DOF) is large for most movements due to the many associated neuro-musculoskeletal elements. Some examples of non-repeatable movements are pointing or standing up from sitting. Actions and movements can be executed in multiple ways because synergies (as described below) can vary without changing the outcome. Early work by Nikolai Bernstein sought to understand how coordination is developed in executing a skilled movement. In this work, he remarked that there was no one-to-one relationship between the desired movement and the coordination pattern used to execute that movement. This equivalence suggests that any desired action does not have a particular coordination of neurons, muscles, and kinematics. Complexity The complexity of motor coordination goes unnoticed in everyday tasks, such as the task of picking up a bottle of water and pouring it into a glass. This seemingly simple task is actually composed of multiple complex tasks. For instance, this task requires the following: (1) properly reaching for the water bottle and then configuring the hand in a way that enables grasping the bottle; (2) applying the correct amount of grip force to grasp the bottle without crushing it; (3) coordinating the muscles required for lifting and articulating the bottle so that
https://en.wikipedia.org/wiki/Gadget
A gadget is a mechanical device or any ingenious article. Gadgets are sometimes referred to as gizmos. History The etymology of the word is disputed. The word first appears as a reference to an 18th-century tool in glassmaking that was developed as a spring pontil. As stated in the glass dictionary published by the Corning Museum of Glass, a gadget is a "metal rod with a spring clip that grips the foot of a vessel and so avoids the use of a pontil". Gadgets were first used in the late 18th century. According to the Oxford English Dictionary, there is anecdotal evidence for the use of "gadget" as a placeholder name for a technical item whose precise name one can't remember since the 1850s, with Robert Brown's 1886 book Spunyarn and Spindrift, A sailor boy's log of a voyage out and home in a China tea-clipper containing the earliest known usage in print. A widely circulated story holds that the word gadget was "invented" when Gaget, Gauthier & Cie, the company behind the repoussé construction of the Statue of Liberty (1886), made a small-scale version of the monument and named it after their firm; however, this contradicts the evidence that the word was already used before in nautical circles, and the fact that it did not become popular, at least in the USA, until after World War I. Other sources cite a derivation from the French gâchette, which has been applied to various pieces of a firing mechanism, or the French gagée, a small tool or accessory. The October 1918 issue of Notes and Queries contains a multi-article entry on the word "gadget" (12 S. iv. 187). H. Tapley-Soper of The City Library, Exeter, writes: A discussion arose at the Plymouth meeting of the Devonshire Association in 1916 when it was suggested that this word should be recorded in the list of local verbal provincialisms. Several members dissented from its inclusion on the ground that it is in common use throughout the country; and a naval officer who was present said that it has for years been a po
https://en.wikipedia.org/wiki/Aleph%20number
In mathematics, particularly in set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets that can be well-ordered. They were introduced by the mathematician Georg Cantor and are named after the symbol he used to denote them, the Hebrew letter aleph (ℵ). The cardinality of the natural numbers is ℵ0 (read aleph-nought or aleph-zero; the term aleph-null is also sometimes used), the next larger cardinality of a well-ordered set is aleph-one ℵ1, then ℵ2, and so on. Continuing in this manner, it is possible to define a cardinal number ℵα for every ordinal number α, as described below. The concept and notation are due to Georg Cantor, who defined the notion of cardinality and realized that infinite sets can have different cardinalities. The aleph numbers differ from the infinity (∞) commonly found in algebra and calculus, in that the alephs measure the sizes of sets, while infinity is commonly defined either as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or as an extreme point of the extended real number line. Aleph-zero ℵ0 (aleph-zero, also aleph-nought or aleph-null) is the cardinality of the set of all natural numbers, and is an infinite cardinal. The set of all finite ordinals, called ω or ω0 (where ω is the lowercase Greek letter omega), has cardinality ℵ0. A set has cardinality ℵ0 if and only if it is countably infinite, that is, there is a bijection (one-to-one correspondence) between it and the natural numbers. Examples of such sets are the set of all integers, any infinite subset of the integers, such as the set of all square numbers or the set of all prime numbers, the set of all rational numbers, the set of all constructible numbers (in the geometric sense), the set of all algebraic numbers, the set of all computable numbers, the set of all computable functions, the set of all binary strings of finite length, and the
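The first of those examples can be made completely explicit: the integers are countably infinite because a bijection with the natural numbers can be written down. A small sketch:

```python
# An explicit bijection between the natural numbers 0, 1, 2, ... and all
# integers, showing the integers have cardinality aleph-null.
def nat_to_int(n: int) -> int:
    """0, 1, 2, 3, 4, ... -> 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

def int_to_nat(z: int) -> int:
    """Inverse map, so the pairing really is one-to-one and onto."""
    return 2 * z - 1 if z > 0 else -2 * z

for n in range(7):
    z = nat_to_int(n)
    assert int_to_nat(z) == n
    print(n, "<->", z)
```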
https://en.wikipedia.org/wiki/Schlessinger%27s%20theorem
In algebra, Schlessinger's theorem is a theorem in deformation theory introduced by Michael Schlessinger that gives conditions for a functor of artinian local rings to be pro-representable, refining an earlier theorem of Grothendieck. Definitions Λ is a complete Noetherian local ring with residue field k, and C is the category of local Artinian Λ-algebras (meaning in particular that as modules over Λ they are finitely generated and Artinian) with residue field k. A small extension in C is a morphism Y→Z in C that is surjective with kernel a 1-dimensional vector space over k. A functor is called representable if it is of the form hX where hX(Y)=hom(X,Y) for some X, and is called pro-representable if it is of the form Y→lim hom(Xi,Y) for a filtered direct limit over i in some filtered ordered set. A morphism of functors F→G from C to sets is called smooth if whenever Y→Z is an epimorphism of C, the map from F(Y) to the fiber product F(Z) ×_{G(Z)} G(Y) is surjective. This definition is closely related to the notion of a formally smooth morphism of schemes. If in addition the map between the tangent spaces of F and G is an isomorphism, then F is called a hull of G. Grothendieck's theorem showed that a functor from the category C of Artinian algebras to sets is pro-representable if and only if it preserves all finite limits. This condition is equivalent to asking that the functor preserves pullbacks and the final object. In fact Grothendieck's theorem applies not only to the category C of Artinian algebras, but to any category with finite limits whose objects are Artinian. By taking the projective limit of the pro-representable functor in the larger category of linearly topologized local rings, one obtains a complete linearly topologized local ring representing the functor. Schlessinger's representation theorem One difficulty in applying Grothendieck's theorem is that it can be hard to check that a functor preserves all pullbacks. Schlessinger showed that it is sufficient to check that the functor preserves pullbacks of small extensions
https://en.wikipedia.org/wiki/MK484
The MK484 AM radio IC is a fully functional AM radio detector on a chip. It is constructed in a TO-92 case, resembling a small transistor. It replaces the similar ZN414 AM radio IC from the 1970s. The MK484 is favored by many hobbyists. It is advantageous in that it performs well with minimal discrete components, and can run from a single 1.5-volt cell. The MK484 has now in turn been replaced by the TA7642. Standard operation The simplest circuit employing the MK484 can be constructed using only a battery, an earphone (or high-impedance speaker), a coil and a variable capacitor. Extended operation The output of the MK484 can be fed into the base of a transistor to provide greater amplification as a class-A amplifier; however, this is often an inefficient design. Alternatively, the LM386 audio amplifier may be used to drive a small speaker. Note that a higher supply voltage is required if the LM386 is to be used. Therefore, small-signal diodes (such as the 1N4148) are recommended to create a voltage drop, or a Zener diode or a forward-biased red LED (which can double as a power indicator) may be used with a resistor (several hundred ohms for 9 V operation), to avoid overvolting the MK484. Advantages Compact size Low power consumption Low cost: Rs 40 to 60 in Indian electronic markets; retails from 65 cents each on various websites
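A rough sizing sketch for the 9 V arrangement described above, on the assumption (mine, not a datasheet value) that the forward-biased red LED acts as a ~1.8 V shunt regulator across the detector and the series resistor carries about 20 mA:

```python
supply_v = 9.0
led_forward_v = 1.8       # typical red LED forward voltage (assumed)
shunt_current_a = 0.020   # assumed current through the resistor and LED

r = (supply_v - led_forward_v) / shunt_current_a
print(f"series resistor ~ {r:.0f} ohms")   # ~360 ohms: "several hundred ohms"
```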
https://en.wikipedia.org/wiki/Multibody%20simulation
Multibody simulation (MBS) is a method of numerical simulation in which multibody systems are composed of various rigid or elastic bodies. Connections between the bodies can be modeled with kinematic constraints (such as joints) or force elements (such as spring dampers). Unilateral constraints and Coulomb friction can also be used to model frictional contacts between bodies. Multibody simulation is a useful tool for conducting motion analysis. It is often used during product development to evaluate characteristics of comfort, safety, and performance. For example, multibody simulation has been widely used since the 1990s as a component of automotive suspension design. It can also be used to study issues of biomechanics, with applications including sports medicine, osteopathy, and human-machine interaction. The heart of any multibody simulation software program is the solver, a set of computation algorithms that solve the equations of motion. Types of components that can be studied through multibody simulation range from electronic control systems to noise, vibration and harshness. Complex models such as engines are composed of individually designed components, e.g. pistons/crankshafts. The MBS process can often be divided into five main activities. The first activity of the MBS process chain is the "3D CAD master model", in which product developers, designers and engineers use the CAD system to generate a CAD model and its assembly structure related to given specifications. This 3D CAD master model is converted during the "Data transfer" activity into MBS input data formats, e.g. STEP. "MBS Modeling" is the most complex activity in the process chain. Following rules and experience, the 3D model in MBS format, together with boundaries, kinematics, forces, moments and degrees of freedom, is used as input to generate the MBS model. Engineers have to use MBS software and their knowledge and skills in the field of engineering mechanics and machine dynamics
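At its core the solver integrates the system's equations of motion. A deliberately minimal sketch, assuming the simplest possible multibody system (a single rigid pendulum with one revolute joint) and using a general-purpose ODE solver in place of a dedicated MBS solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0   # gravity (m/s^2) and pendulum length (m), illustrative

def equations_of_motion(t, y):
    """State y = [joint angle, angular velocity]; the revolute-joint
    constraint is folded into the single generalized coordinate."""
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

sol = solve_ivp(equations_of_motion, (0.0, 10.0), [0.5, 0.0], max_step=0.01)
print(sol.y[0, -1])   # joint angle after 10 s of simulated motion
```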
https://en.wikipedia.org/wiki/Demographic%20trap
According to the Encyclopedia of International Development, the term demographic trap is used by demographers "to describe the combination of high fertility (birth rates) and declining mortality (death rates) in developing countries, resulting in a period of high population growth rate (PGR)." High fertility combined with declining mortality happens when a developing country moves through the demographic transition of becoming developed. During "stage 2" of the demographic transition, quality of health care improves and death rates fall, but birth rates still remain high, resulting in a period of high population growth. The term "demographic trap" is used by some demographers to describe a situation where stage 2 persists because "falling living standards reinforce the prevailing high fertility, which in turn reinforces the decline in living standards." This results in more poverty, where people rely on more children to provide them with economic security. Social scientist John Avery explains that this occurs because the high birth rates and low death rates "lead to population growth so rapid that the development that could have slowed population is impossible." Results One of the significant outcomes of the "demographic trap" is explosive population growth. This is currently seen throughout Asia, Africa and Latin America, where death rates have dropped during the last half of the 20th century due to advanced health care. However, in subsequent decades most of those countries were unable to improve economic development quickly enough to match their populations' growth: meeting the education needs of more school-age children, creating more jobs for the expanding workforce, and providing basic infrastructure and services, such as sewage, roads, bridges, water supplies, electricity, and stable food supplies. A possible result of a country remaining trapped in stage 2 is that its government may reach a state of "demographic fatigue," writes Donald Kaufman. In this condit
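The arithmetic behind the trap is compounding: the population growth rate is roughly the birth rate minus the death rate, and a persistent gap compounds quickly. A toy illustration with made-up but stage-2-like rates:

```python
# Crude birth rate stays high while the crude death rate has fallen, so the
# growth rate compounds. All figures here are illustrative, not real data.
pop = 10_000_000
birth_rate, death_rate = 0.040, 0.015     # per person per year (assumed)
for year in range(0, 51, 10):
    print(year, f"{pop:,.0f}")
    pop *= (1 + birth_rate - death_rate) ** 10   # advance a decade
# A sustained 2.5%/year gap more than triples the population in 50 years.
```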
https://en.wikipedia.org/wiki/Nuclear%20interaction%20length
Nuclear interaction length is the mean distance travelled by a hadronic particle before undergoing an inelastic nuclear interaction. See also Nuclear collision length Radiation length External links Particle Data Group site
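As a mean free path, the nuclear interaction length implies the usual exponential survival law: the fraction of hadrons traversing a depth x without an inelastic interaction is exp(−x/λI). A quick numerical note; the λI value used for iron is an order-of-magnitude assumption, not a quoted constant:

```python
import math

lambda_I = 17.0            # assumed nuclear interaction length in iron, cm
for x_cm in (17.0, 34.0, 100.0):
    survive = math.exp(-x_cm / lambda_I)
    print(f"{x_cm:5.1f} cm: {survive:.3f} survive without an inelastic interaction")
```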
https://en.wikipedia.org/wiki/62%20%28number%29
62 (sixty-two) is the natural number following 61 and preceding 63. In mathematics 62 is: the eighteenth discrete semiprime (2 × 31) and the tenth of the form 2 × q, where q is a higher prime. with an aliquot sum of 34, itself a semiprime, within an aliquot sequence of seven composite numbers (62, 34, 20, 22, 14, 10, 8, 7, 1, 0) to the prime 7 in the 7-aliquot tree. This is the longest aliquot sequence for a semiprime up to 118, which has one more sequence member. 62 is the tenth member of the 7-aliquot tree (7, 8, 10, 14, 20, 22, 34, 38, 49, 62, 75, 118, 148, etc.). a nontotient. palindromic and a repdigit in bases 5 (222₅) and 30 (22₃₀). the sum of the numbers of faces, edges and vertices of an icosahedron or a dodecahedron. the number of faces of two of the Archimedean solids, the rhombicosidodecahedron and truncated icosidodecahedron. the smallest number that is the sum of three distinct positive squares in two (or more) ways. the only number whose cube in base 10 (238328) consists of 3 digits each occurring 2 times. the 20th & 21st, 72nd & 73rd, and 75th & 76th digits of pi. In science Sixty-two is the atomic number of samarium, a lanthanide. In other fields 62 is the code for international direct dial calls to Indonesia. In the 1998 home run race, Mark McGwire hit his 62nd home run on September 8, breaking the single-season record. Sammy Sosa hit his 62nd home run just days later, on September 13. Under Social Security (United States), 62 is the earliest age at which a person may begin receiving retirement benefits (other than disability).
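The aliquot claims above are easy to check directly:

```python
# Compute the aliquot sum (sum of proper divisors) and the aliquot sequence
# starting from 62, confirming the chain quoted above.
def aliquot_sum(n: int) -> int:
    return sum(d for d in range(1, n) if n % d == 0)

seq, n = [62], 62
while n != 0:
    n = aliquot_sum(n)
    seq.append(n)
print(seq)   # [62, 34, 20, 22, 14, 10, 8, 7, 1, 0]
```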
https://en.wikipedia.org/wiki/Class%20I%20PI%203-kinases
Class I PI 3-kinases are a subgroup of the phosphoinositide 3-kinase enzyme family that share a common protein domain structure, substrate specificity, and method of activation. Class I PI 3-kinases are further divided into two subclasses, class IA PI 3-kinases and class IB PI 3-kinases. Class IA PI 3-kinases Class IA PI 3-kinases are activated by receptor tyrosine kinases (RTKs). There are three catalytic subunits that are classified as class IA PI 3-kinases: p110α p110β p110δ There are currently five regulatory subunits that are known to associate with class IA PI 3-kinase catalytic subunits: p85α and p85β p55α and p55γ p50α Class IB PI 3-kinases Class IB PI 3-kinases are activated by G-protein-coupled receptors (GPCRs). The only known class IB PI 3-kinase catalytic subunit is p110γ. There are two known regulatory subunits for p110γ: p101 p84/p87PIKAP. See also Phosphoinositide 3-kinase#Class I Phosphoinositide 3-kinase inhibitor
https://en.wikipedia.org/wiki/Afocal%20system
In optics, an afocal system (a system without focus) is an optical system that produces no net convergence or divergence of the beam, i.e., it has an infinite effective focal length. This type of system can be created with a pair of optical elements where the physical distance d between the elements is equal to the sum of each element's focal length fi (d = f1 + f2). A simple example of an afocal optical system is an optical telescope imaging a star: the light entering the system is from the star at infinity (to the left) and the image it forms is at infinity (to the right); i.e., collimated light entering the afocal system leaves it collimated. Although the system does not alter the divergence of a collimated beam, it does alter the width of the beam, thereby providing magnification. The angular magnification of such a telescope is given by the ratio of the focal lengths, M = f1/f2. Afocal systems are used in laser optics, for instance as beam expanders; in infrared and forward-looking infrared systems; in camera zoom lenses and telescopic lens attachments such as teleside converters; and in photography setups combining cameras and telescopes (afocal photography). See also Lens (optics) Teleside converter Galilean telescope, which uses this design
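A small worked sketch of the afocal condition as a laser beam expander, assuming ideal thin lenses (the focal lengths are illustrative): the lenses sit f1 + f2 apart, and the beam width is scaled by |f2/f1|.

```python
# Keplerian beam expander built from two thin lenses.
f1, f2 = 50.0, 150.0          # focal lengths in mm (illustrative)
d = f1 + f2                   # afocal spacing: 200 mm
m = abs(f2 / f1)              # beam-width magnification: 3x

beam_in_mm = 1.0
print(d, m, beam_in_mm * m)   # a 1 mm beam leaves 3 mm wide, still collimated

# A Galilean variant uses f1 = -50.0: the same 3x expansion with d = 100 mm
# and no internal focus (useful for high-power beams).
```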
https://en.wikipedia.org/wiki/Storiform%20pattern
A woven or storiform pattern is a histopathologic architectural pattern. The name "storiform" originates from the Latin storea (woven mat), as storiform tissue tends to resemble woven fabric on microscopy. Storiform fibrosis is a histologic sign of IgG4-related disease, accompanied by a dense lymphoplasmacytic infiltrate, often a partially eosinophilic infiltrate, and obliterative phlebitis. See also Histopathology, for additional patterns
https://en.wikipedia.org/wiki/Residual%20carrier
In analogue TV technology, residual carrier is the ratio of the carrier level when modulated by the maximum video signal to the unmodulated carrier level. Video signal Video signal (VF) or, more formally, composite video signal (CVS) is the signal which carries the video information as well as some auxiliary signals for synchronizing. In all systems the 0–300 mV level is reserved for auxiliary signals and 300–1000 mV is reserved for video information. In monochrome TV, 300 mV (or sometimes 350 mV) corresponds to black and 1000 mV corresponds to white. (In color TV, depending on the system used, the superimposed color subcarrier may have slightly higher values.) Modulation The signal modulates a carrier by vestigial sideband modulation (a version of amplitude modulation), in which an increase in the video modulating signal produces a decrease in the carrier amplitude, called "negative modulation". When there is no modulating signal, the carrier has its full level, and when there is a modulating video frequency (VF) signal the carrier level is lower. Since the video level for a black scene is lower than the white level, the carrier level when transmitting black is higher than the carrier level when transmitting white. The modulators are set to yield 10% or 12.5% of the carrier level when modulated by a 1 V signal (which corresponds to a white scene). This percentage is known as the residual carrier. An example of the modulated carrier is shown in the accompanying figure. In this figure, the level of the modulating VF signal is 1 volt (white level), which yields the residual carrier (10%). The short-duration full carrier (100%) is the sync pulse, and the 73% level is the so-called back porch (black level). The duration of the signal is approximately 100 μs. See also Carrier wave White clipper Zero reference pulse
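A quick worked example of the levels described above, assuming a 10% residual carrier and linear negative modulation between 0 V and 1 V of video:

```python
full_carrier = 1.0            # sync tip: 100% carrier
residual = 0.10               # carrier remaining at white (1 V video)

def carrier_level(video_v):
    """Negative modulation: 0 V video -> 100% carrier, 1 V video -> residual."""
    return full_carrier - (full_carrier - residual) * video_v

print(carrier_level(0.0))     # 1.00: full carrier
print(carrier_level(0.3))     # 0.73: black level / back porch, as quoted above
print(carrier_level(1.0))     # 0.10: white, the 10% residual carrier
```

Note that the 73% figure in the text falls straight out of the 300 mV black level under this linear mapping.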
https://en.wikipedia.org/wiki/Gelsemine
Gelsemine (C20H22N2O2) is an indole alkaloid isolated from flowering plants of the genus Gelsemium, a plant genus native to the subtropical and tropical Americas and southeast Asia; it is a highly toxic compound that acts as a paralytic, exposure to which can result in death. It generally has potent activity as an agonist of the mammalian glycine receptor, the activation of which leads to an inhibitory postsynaptic potential in neurons following chloride ion influx, and systemically, to muscle relaxation of varying intensity and deleterious effect. Despite its danger and toxicity, recent pharmacological research has suggested that the biological activities of this compound may offer opportunities for developing treatments related to xenobiotic or diet-induced oxidative stress, and of anxiety and other conditions, with ongoing research including attempts to identify safer derivatives and analogs to make use of gelsemine's beneficial effects. Natural sources Gelsemine is found in, and can be isolated from, the subtropical to tropical flowering plant genus Gelsemium, family Loganiaceae, which as of 2014 included five species, where G. sempervirens Ait., the type species, is prevalent in the Americas and G. elegans Benth. in China and East Asia. The species in the Americas, G. sempervirens, has a number of common names that include yellow or Carolina jasmine (or jessamine), gelsemium, evening trumpetflower, and woodbine. The plant genus is native to the subtropical and tropical Americas, e.g., in Mexico, Honduras, Guatemala, and Belize, as well as to China and southeast Asia. The species is prized for its "heavily fragrant yellow flowers," and has been cultivated since the mid-seventeenth century (in Europe). It is found in southeastern and south-central states of the U.S., and as a garden plant in warmer areas where it can be trained to grow over arbors or to cover walls (see image). All plant parts of the herbage and exudates of this genus, including its sap and nectar, ap
https://en.wikipedia.org/wiki/Chloroflexota
The Chloroflexota are a phylum of bacteria containing isolates with a diversity of phenotypes, including members that are aerobic thermophiles, which use oxygen and grow well in high temperatures; anoxygenic phototrophs, which use light for photosynthesis (green non-sulfur bacteria); and anaerobic halorespirers, which use halogenated organics (such as the toxic chlorinated ethenes and polychlorinated biphenyls) as electron acceptors. The members of the phylum Chloroflexota are monoderms (that is, they have one cell membrane with no outer membrane), but they stain mostly gram-negative. Many well-studied phyla of bacteria are diderms and stain gram-negative, whereas well-known monoderms that stain gram-positive include the Firmicutes (or Bacillota) (low-G+C gram-positives), Actinomycetota (high-G+C gram-positives) and Deinococcota (gram-positive diderms with thick peptidoglycan). History The taxon name was created in the 2001 edition of Volume 1 of Bergey's Manual of Systematic Bacteriology and is the Latin plural of the name Chloroflexus, the name of the type genus of the phylum, a common practice. In 1987, Carl Woese, regarded as one of the forerunners of the molecular phylogeny revolution, divided Eubacteria into 11 divisions based on 16S ribosomal RNA (SSU) sequences and grouped the genera Chloroflexus, Herpetosiphon and Thermomicrobium into the "green non-sulfur bacteria and relatives", which was temporarily renamed "Chloroflexi" in Volume One of Bergey's Manual of Systematic Bacteriology. Chloroflexota being a deep-branching phylum (see Bacterial phyla), it was considered in Volume One of Bergey's Manual of Systematic Bacteriology to include a single class with the same name. Since 2001, however, new classes have been created thanks to newly discovered species, and the phylum Chloroflexi is now divided into several classes. "Dehalococcoidetes" is a placeholder name given by Hugenholtz & Stackebrandt, 2004, after "Dehalococcoides ethenogenes", a species partially
https://en.wikipedia.org/wiki/Transmon
In quantum computing, and more specifically in superconducting quantum computing, a transmon is a type of superconducting charge qubit that was designed to have reduced sensitivity to charge noise. The transmon was developed by Robert J. Schoelkopf, Michel Devoret, Steven M. Girvin, and their colleagues at Yale University in 2007. Its name is an abbreviation of the term transmission line shunted plasma oscillation qubit; one which consists of a Cooper-pair box "where the two superconductors are also capacitively shunted in order to decrease the sensitivity to charge noise, while maintaining a sufficient anharmonicity for selective qubit control". The transmon achieves its reduced sensitivity to charge noise by significantly increasing the ratio of the Josephson energy to the charging energy. This is accomplished through the use of a large shunting capacitor. The result is energy level spacings that are approximately independent of offset charge. Planar on-chip transmon qubits have T1 coherence times of approximately 30–40 μs. Recent work has shown significantly improved T1 times, as long as 95 μs, obtained by replacing the superconducting transmission line cavity with a three-dimensional superconducting cavity; by replacing niobium with tantalum in the transmon device, T1 is further improved, up to 0.3 ms. These results demonstrate that previous T1 times were not limited by Josephson junction losses. Understanding the fundamental limits on the coherence time in superconducting qubits such as the transmon is an active area of research. Comparison to Cooper-pair box The transmon design is similar to the first design of the charge qubit known as a "Cooper-pair box"; both are described by the same Hamiltonian, with the only difference being the ratio E_J/E_C. Here E_J is the Josephson energy of the junction, and E_C is the charging energy, inversely proportional to the total capacitance of the qubit circuit. Transmons typically have E_J/E_C ≫ 1 (while E_J/E_C ≲ 1 for typical Cooper-pair-box qubits), which
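The shared Hamiltonian referred to above is the standard charge-qubit Hamiltonian. Written out in the conventional notation of the field (a reconstruction; the formula is not spelled out in the text itself):

    \hat{H} = 4 E_C (\hat{n} - n_g)^2 - E_J \cos\hat{\varphi}

where \hat{n} counts the Cooper pairs transferred onto the island, n_g is the dimensionless offset charge, and \hat{\varphi} is the superconducting phase difference across the junction. In the transmon limit E_J/E_C ≫ 1, the dependence of the energy levels on n_g is exponentially suppressed, which is what makes the spacings approximately independent of offset charge.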
https://en.wikipedia.org/wiki/IOS%2011
iOS 11 is the eleventh major release of the iOS mobile operating system developed by Apple Inc., being the successor to iOS 10. It was announced at the company's Worldwide Developers Conference on June 5, 2017, and released on September 19, 2017. It was succeeded by iOS 12 on September 17, 2018. Overview iOS 11 was introduced at the Apple Worldwide Developers Conference keynote address on June 5, 2017. The first developer beta version was released after the keynote presentation, with the first public beta released on June 26, 2017. iOS 11 was officially released by Apple on September 19, 2017. It brought many changes to iOS. Some major highlights were: The lock screen and Notification Center were combined, allowing all notifications to be displayed directly on the lock screen. The control center was completely redesigned, combining all pages into a single unified page. It also brought the ability to rearrange the position of the controls, some of which could be used with 3D Touch for quick access to additional options. The App Store received its first major design overhaul since iOS 7 to focus on editorial content and daily highlights. A "Files" file manager app allowed access to files stored locally on-device and in iCloud and other cloud services. With this addition, users could also for the first time save files downloaded using Safari right on their iPhone without any third-party apps. Siri was updated to translate between languages and use a privacy-minded "on-device learning" technique to better understand a user's interests and offer suggestions. The camera introduced new settings for improved portrait-mode photos and utilized new encoding technologies to reduce file sizes on newer devices. In a later release of iOS 11, Messages was integrated with iCloud to better synchronize messages across iOS and macOS devices. A previous point release also added support for person-to-person Apple Pay payments. It introduced the ability to record the screen,
https://en.wikipedia.org/wiki/Kleene%E2%80%93Rosser%20paradox
In mathematics, the Kleene–Rosser paradox is a paradox that shows that certain systems of formal logic are inconsistent, in particular the version of Haskell Curry's combinatory logic introduced in 1930, and Alonzo Church's original lambda calculus, introduced in 1932–1933, both originally intended as systems of formal logic. The paradox was exhibited by Stephen Kleene and J. B. Rosser in 1935. The paradox Kleene and Rosser were able to show that both systems are able to characterize and enumerate their provably total, definable number-theoretic functions, which enabled them to construct a term that essentially replicates Richard's paradox in formal language. Curry later managed to identify the crucial ingredients of the calculi that allowed the construction of this paradox, and used this to construct a much simpler paradox, now known as Curry's paradox. See also List of paradoxes
https://en.wikipedia.org/wiki/Stream%20%28computing%29
In computer science, a stream is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches. Streams are processed differently from batch data – normal functions cannot operate on streams as a whole, as they have potentially unlimited data, and formally, streams are codata (potentially unlimited), not data (which is finite). Functions that operate on a stream, producing another stream, are known as filters, and can be connected in pipelines, analogously to function composition. Filters may operate on one item of a stream at a time, or may base an item of output on multiple items of input, such as a moving average. Examples The term "stream" is used in a number of similar ways: "Stream editing", as with sed, awk, and perl. Stream editing processes a file or files, in-place, without having to load the file(s) into a user interface. One example of such use is to do a search and replace on all the files in a directory, from the command line. On Unix and related systems based on the C language, a stream is a source or sink of data, usually individual bytes or characters. Streams are an abstraction used when reading or writing files, or communicating over network sockets. The standard streams are three streams made available to all programs. I/O devices can be interpreted as streams, as they produce or consume potentially unlimited data over time. In object-oriented programming, input streams are generally implemented as iterators. In the Scheme language and some others, a stream is a lazily evaluated or delayed sequence of data elements. A stream can be used similarly to a list, but later elements are only calculated when needed. Streams can therefore represent infinite sequences and series. In the Smalltalk standard library and in other programming languages as well, a stream is an external iterator. As in Scheme, streams can represent finite or infinite
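A minimal sketch in Python (illustrative, not from the article) of the ideas above: a conceptually unbounded stream modelled as a generator, and a moving-average filter that consumes one stream and lazily produces another, so only the demanded items are ever computed.

    from collections import deque
    from itertools import count, islice

    def moving_average(stream, window):
        # Filter: consume one stream, emit the mean of the last `window` items.
        buf = deque(maxlen=window)
        for item in stream:
            buf.append(item)
            if len(buf) == window:
                yield sum(buf) / window

    readings = count()                        # unbounded stream: 0, 1, 2, ...
    smoothed = moving_average(readings, 3)    # a filter producing another stream
    print(list(islice(smoothed, 5)))          # [1.0, 2.0, 3.0, 4.0, 5.0]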
https://en.wikipedia.org/wiki/Solar%20panel
A solar panel is a device that converts sunlight into electricity by using photovoltaic (PV) cells. PV cells are made of materials that generate electrons when exposed to light. The electrons flow through a circuit and produce direct current (DC) electricity, which can be used to power various devices or be stored in batteries. Solar panels are also known as solar cell panels, solar electric panels, or PV modules. Solar panels are usually arranged in groups called arrays or systems. A photovoltaic system consists of one or more solar panels, an inverter that converts DC electricity to alternating current (AC) electricity, and sometimes other components such as controllers, meters, and trackers. A photovoltaic system can be used to provide electricity for off-grid applications, such as remote homes or cabins, or to feed electricity into the grid and earn credits or payments from the utility company. This is called a grid-connected photovoltaic system. Some advantages of solar panels are that they use a renewable and clean source of energy, reduce greenhouse gas emissions, and lower electricity bills. Some disadvantages are that they depend on the availability and intensity of sunlight, require cleaning, and have high initial costs. Solar panels are widely used for residential, commercial, and industrial purposes, as well as for space and transportation applications. History In 1839, the ability of some materials to create an electrical charge from light exposure was first observed by the French physicist Edmond Becquerel. Though these initial solar panels were too inefficient for even simple electric devices, they were used as an instrument to measure light. The observation by Becquerel was not replicated again until 1873, when the English electrical engineer Willoughby Smith discovered that the charge could be caused by light hitting selenium. After this discovery, William Grylls Adams and Richard Evans Day published "The action of light on selenium" in 1876,
https://en.wikipedia.org/wiki/WebMaker%20CMS
WebMaker CMS is a UK-based company which markets an online website builder package of the same name. The product provides free hosting for the first year, renewable thereafter for an annual fee. WebMaker has a number of add-ons or plug-ins/widgets, enabling customers to add extra features as and when they are needed by selecting them from within the WebMaker product. WebMaker is sold by the UK marketing agency, Insight Group, based in Bracknell, Berkshire, UK. History Insight Group launched WebMaker CMS in 2011, having already been a designer and builder of websites using higher-level website building solutions such as the Kameleon content management system.
https://en.wikipedia.org/wiki/Balasubramanian%20Gopal
Balasubramanian Gopal (born 1970) is an Indian structural biologist, molecular biophysicist and a professor at the Molecular Biophysics Unit of the Indian Institute of Science. He is known for his studies on cell wall synthesis in Staphylococcus aureus and is an elected fellow of the National Academy of Sciences, India, Indian National Science Academy and the Indian Academy of Sciences. He received the National Bioscience Award for Career Development of the Department of Biotechnology in 2010. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2015, for his contributions to biological sciences. Biography Balasubramanian Gopal, born on 31 August 1970, completed his master's degree at Indian Institute of Technology, Kanpur and started his career as a biochemist by joining Torrent Pharmaceuticals at their Ahmedabad facility. Later, he took a break from his job and joined the Indian Institute of Science (IISc), where he secured a PhD. Moving to the UK, he did his post-doctoral studies in crystallography at the National Institute for Medical Research. Returning to India, he joined the Molecular Biophysics Unit of IISc as a member of Lab 301, where he and his colleagues are engaged in research on structural and mechanistic aspects of membrane-associated proteins involved in inter-cell communication, transcriptional regulation and antimicrobial resistance. Gopal is known to have done considerable research in molecular biophysics and has contributed to widening our understanding of cell wall synthesis in Staphylococcus aureus, a common gram-positive bacterium found in the human respiratory tract and on the skin. He has published his research findings as articles in peer-reviewed journals; Google Scholar, an online article repository, lists 77 of them. He has delivered keynote
https://en.wikipedia.org/wiki/Aortopulmonary%20space
The aortopulmonary space is a small space between the aortic arch and the pulmonary artery. It contains the ligamentum arteriosum, the recurrent laryngeal nerve, lymph nodes, and fatty tissue. The space is bounded anteriorly by the ascending aorta, posteriorly by the descending aorta, medially by the left main bronchus, and laterally by mediastinal pleura. The presence of radiodensity in this space on radiography may indicate lymphadenopathy.
https://en.wikipedia.org/wiki/Mathletics%20%28educational%20software%29
Mathletics is an online educational website which launched in 2005. The website places an emphasis upon Web 2.0 technologies to teach an interactive learning style which is designed to replicate the use of a personal tutor, so as to "address the balance between teacher-led instruction and independent, student-driven learning". Mathletics operates through a subscription-based system, offering access at an individual level as well as collectively as a school. Online users, acknowledged by the website as 'Mathletes', have access to math quizzes and challenges, and can participate in a real-time networked competition known as 'Live Mathletics'. Mathletics provides a customisable avatar for each individual user, which visually represents the player in the 'Live Mathletics' competitions. Alongside these learning interfaces, Mathletics grants individual users the capacity to customise their avatar's clothing and general aesthetics, fueled by credits awarded to the user through the completion of quizzes and tasks. In 2007, Mathletics started World Maths Day to make maths exciting and engaging for all school-aged children. In 2010, World Maths Day set a Guinness World Record for the Largest Online Maths Competition. The upcoming World Maths Day will take place on March 23, 2024. As of 2023, Mathletics caters to 3.2 million users worldwide and 14,000 schools. History Mathletics was established as a Personal Learning Environment (PLE) application in 2005 by 3P Learning, catering for Australian schools. The website is structured to facilitate an engagement with students from K-12 educational level, and offers various visual resources in their interactive and online Web 2.0 appropriation of the Australian Curriculum. Though initially based around this curriculum, Mathletics broadened its offices as well as its student and teacher audiences to various other countries in North America, Europe, Asia and the Middle East, adapting to those regions' various s
https://en.wikipedia.org/wiki/Carboxysome
Carboxysomes are bacterial microcompartments (BMCs) consisting of polyhedral protein shells filled with the enzymes ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO)—the predominant enzyme in carbon fixation and the rate limiting enzyme in the Calvin cycle—and carbonic anhydrase. Carboxysomes are thought to have evolved as a consequence of the increase in oxygen concentration in the ancient atmosphere; this is because oxygen is a competing substrate to carbon dioxide in the RuBisCO reaction. To overcome the inefficiency of RuBisCO, carboxysomes concentrate carbon dioxide inside the shell by means of co-localized carbonic anhydrase activity, which produces carbon dioxide from the bicarbonate that diffuses into the carboxysome. The resulting concentration of carbon dioxide near RuBisCO decreases the proportion of ribulose-1,5-bisphosphate oxygenation and thereby avoids costly photorespiratory reactions. The surrounding shell provides a barrier to carbon dioxide loss, helping to increase its concentration around RuBisCO. Carboxysomes are an essential part of the broader metabolic network called the Carbon dioxide-Concentrating Mechanism (CCM), which functions in two parts: (1) Membrane transporters concentrate inorganic carbon (Ci) in the cell cytosol which is devoid of carbonic anhydrases. Carbon is primarily stored in the form of HCO3- which cannot re-cross the lipid membrane, as opposed to neutral CO2 which can easily escape the cell. This stockpiles carbon in the cell, creating a disequilibrium between the intracellular and extracellular environments of about 30x the Ci concentration in water. (2) Cytosolic HCO3- diffuses into the carboxysome, where carboxysomal carbonic anhydrases dehydrate it back to CO2 in the vicinity of Rubisco, allowing Rubisco to operate at its maximal rate. Carboxysomes are the best studied example of bacterial microcompartments, the term for functionally diverse organelles that are alike in having a protein shell. Discovery Pol
https://en.wikipedia.org/wiki/Spring%20%28operating%20system%29
Spring is a discontinued project in building an experimental microkernel-based object-oriented operating system (OS) developed at Sun Microsystems in the early 1990s. Using technology substantially similar to concepts developed in the Mach kernel, Spring concentrated on providing a richer programming environment supporting multiple inheritance and other features. Spring was also more cleanly separated from the operating systems it would host, divorcing it from its Unix roots and even allowing several OSes to be run at the same time. Development faded out in the mid-1990s, but several ideas and some code from the project were later re-used in the Java programming language libraries and the Solaris operating system. History Spring started in a roundabout fashion in 1987, as part of Sun and AT&T's collaboration to create a merged UNIX. Both companies decided it was also a good opportunity to "reimplement UNIX in an object-oriented fashion". However, after only a few meetings, this part of the project died. Sun decided to keep their team together and instead explore a system on the leading edge. Along with combining Unix flavours, the new system would also be able to run almost any other system, and in a distributed fashion. The system was first running in a "complete" fashion in 1993, and produced a series of research papers. In 1994, a "research quality" release was made under a non-commercial license, but it is unclear how widely this was used. Described as a "clean slate" intended to help Sun improve its existing Unix products, the software was made available at a cost of $75, with Sun targeting universities and computer scientists. Commercial research institutions could obtain the software at a cost of $750. The team broke up and moved to other projects within Sun, using some of the Spring concepts on a variety of other projects. Background The Spring project began soon after the release of Mach 3. In earlier versions Mach was simply a modified version of existi
https://en.wikipedia.org/wiki/Comparison%20of%20open-source%20operating%20systems
These tables compare free software / open-source operating systems. Where not all of the versions support a feature, the first version which supports it is listed. General information Supported architectures Supported hardware General Networking Network technologies Supported file systems Supported file system features Security features See also Berkeley Software Distribution Comparison of operating systems Comparison of Linux distributions Comparison of BSD operating systems Comparison of kernels Comparison of file systems Comparison of platform virtualization software Comparison of DOS operating systems List of operating systems Live CD Microsoft Windows RTEMS Unix Unix-like
https://en.wikipedia.org/wiki/Digital%20signal%20conditioning
In digital instrumentation systems, and in digital electronics generally, digital computers have taken a major role in nearly every aspect of life in our modern world. Digital electronics is at the heart of computers, but there are also many direct applications of digital electronics. All of these digital devices need data to be presented to them in a digital format (i.e. the data have to be digitally conditioned). This is called digital conditioning. Since computers are electronic devices, all the information they work with has to be digitally formatted. Therefore, if they are used to control a variable such as temperature, then the temperature has to be represented digitally. This is why digital signal conditioning is needed: to convert process-control signals into an approximate digital format. Introduction and digital fundamentals Digital signal conditioning in process control means finding a way to represent analog process information in digital format. The use of computers in control systems is particularly valuable for a number of other reasons as well: A computer can control a multivariable process-control system. Nonlinearities in sensor output can be linearized by the computer. Complicated control equations can be solved quickly and modified as needed. Networking of control computers allows a large process-control complex to operate in a fully integrated fashion. Digital information The use of digital techniques in process control requires that process-variable measurements and control information be encoded into a digital form. Digital signals themselves are simply two-state (binary) levels. These levels may be represented in many ways. For example, two voltages, two currents, two frequencies, etc. Digital words Given the simple binary information that is carried by a digital signal, it is clear that multiple signals must be used to describe analog information. Generally, this is done by using an assemblage of digital levels to construct a binary number, often called a word. The individual digital
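As a sketch of the encoding step just described (illustrative Python, assuming an ideal converter; the function name and voltage range are invented for the example), an analog level can be approximated by an n-bit digital word:

    def to_digital_word(voltage, v_min=0.0, v_max=5.0, bits=8):
        # Map a (clamped) analog level onto one of 2**bits discrete codes.
        clamped = min(max(voltage, v_min), v_max)
        code = int((clamped - v_min) / (v_max - v_min) * (2 ** bits - 1))
        return format(code, f'0{bits}b')

    print(to_digital_word(3.3))   # '10101000' -- code 168 of 255

Each bit of the word is one of the two-state signals described above; the word as a whole carries the analog information.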
https://en.wikipedia.org/wiki/PDZ%20domain
The PDZ domain is a common structural domain of 80–90 amino acids found in the signaling proteins of bacteria, yeast, plants, viruses and animals. Proteins containing PDZ domains play a key role in anchoring receptor proteins in the membrane to cytoskeletal components; proteins with these domains help hold together and organize signaling complexes at cellular membranes, and so are central to the formation and function of signal transduction complexes. PDZ domains also play a highly significant role in the anchoring of cell surface receptors (such as Cftr and FZD7) to the actin cytoskeleton via mediators like NHERF and ezrin. PDZ is an initialism combining the first letters of the first three proteins discovered to share the domain — postsynaptic density protein (PSD95), Drosophila disc large tumor suppressor (Dlg1), and zonula occludens-1 protein (ZO-1). PDZ domains have previously been referred to as DHR (Dlg homologous region) or GLGF (glycine-leucine-glycine-phenylalanine) domains. In general, PDZ domains bind to a short region of the C-terminus of other specific proteins. These short regions bind to the PDZ domain by beta sheet augmentation. This means that the beta sheet in the PDZ domain is extended by the addition of a further beta strand from the tail of the binding partner protein. The C-terminal carboxylate group is bound by a nest (protein structural motif) in the PDZ domain, i.e. a PDZ-binding motif. Origins of discovery PDZ is an acronym derived from the names of the first proteins in which the domain was observed. Post-synaptic density protein 95 (PSD-95) is a synaptic protein found only in the brain. Drosophila disc large tumor suppressor (Dlg1) and zona occludens 1 (ZO-1) both play an important role at cell junctions and in cell signaling complexes. Since the discovery of PDZ domains more than 20 years ago, hundreds of additional PDZ domains have been identified. The first published use of the phrase “PDZ domain” was not in a paper, b
https://en.wikipedia.org/wiki/Known%20%28software%29
Known is an open source publishing tool designed to provide a way of more easily publishing status updates, blog posts, and photos to a wide range of social media services. It also allows you to keep a copy of the content you publish and post on your own site. Known is available as installable open source software, similar to WordPress. It is a part of the IndieWeb movement, and is used as a teaching tool in higher education. It also supports multi-user use, and is sometimes used as an intranet platform. Known supports the W3C Recommendations Micropub and Webmention, among others. Known has been supported since 2019 by Open Collective, which serves as a fiscal sponsor for many FLOSS projects.
https://en.wikipedia.org/wiki/Thermostad
A thermostad is a layer of oceanic water that is homogeneous in temperature; it is defined as a relative minimum of the vertical temperature gradient.
https://en.wikipedia.org/wiki/Protofour
Protofour or P4 is a set of standards for model railways allowing construction of models to a scale of 4 mm to one foot (1:76.2), the predominant scale of model railways of the British prototype. For historical reasons almost all manufacturers of British prototype models use 00 gauge (1:76.2 models running on 16.5 mm gauge track). There are several finescale standards which have been developed to enable more accurate models than 00, and P4 is the most accurate in common use. The P4 standards specify a scale model track gauge of 18.83 mm for standard gauge railways. Joe Brook Smith was the first to propose use of an exact scale track gauge, in July 1964, when the term “Protofour” was also coined by Malcolm Cross. The standards were later published in Model Railway News by the Model Railway Study Group in August 1966. Just as in the prototype railway, on a model the wheel-rail interface is the fundamental aspect of reliable operation. So as well as a track gauge, P4 also specifies the wheel profile and track parameters to use, which are largely a scaled-down version of real-life standards with some allowances for practical manufacturing tolerances. P4 standards have been extended to several other prototypes. Broader-than-standard gauges have been modelled using P4 standards, including Brunel's broad gauge, and Irish P4, the Irish broad gauge modelled in 4 mm scale with a correspondingly wider track gauge. Several successful models of narrow gauge prototypes with correspondingly accurate track gauges have also been produced to P4 standards. P4 standards are promoted worldwide by the Scalefour Society, which is based in the United Kingdom. The EM Gauge Society also provides support for modelling to P4 standards: many P4 modellers belong to both societies. The standards document is hosted by the Scalefour Society and the society's Central London Area Group (CLAG) makes an HTML version available. S4 Standard The S4 Standard is maintained as part of the P4 standards. The S4 Stand
https://en.wikipedia.org/wiki/Daylight%20factor
In architecture, a daylight factor (DF) is the ratio of the light level inside a structure to the light level outside the structure. It is defined as: DF = (Ei / Eo) x 100%, where Ei = illuminance due to daylight at a point on the indoors working plane, and Eo = simultaneous outdoor illuminance on a horizontal plane from an unobstructed hemisphere of overcast sky. Calculating Ei requires knowing how much outside light is received inside the building. Light can reach a room through a glazed window, rooflight, or other aperture via three paths: Direct light from a patch of sky visible at the point considered, known as the sky component (SC); Light reflected from an exterior surface and then reaching the point considered, known as the externally reflected component (ERC); Light entering through the window but reaching the point only after reflection from an internal surface, known as the internally reflected component (IRC). The sum of the three components gives the illuminance level (typically measured in lux) at the point considered: Illuminance = SC + ERC + IRC. The daylight factor can be improved by increasing SC (for example, placing a window so it "sees" more of the sky rather than adjacent buildings), increasing ERC (for example, by painting surrounding buildings white), or increasing IRC (for example, by using light colours for room surfaces). In most rooms, the ceiling and floor are a fixed colour, and much of the walls are covered by furnishings. This gives less flexibility in changing the daylight factor by using different wall colours than might be expected, meaning that changing SC is often the key to good daylight design. Architects and engineers use daylight factors in architecture and building design to assess the internal natural lighting levels as perceived on working planes or surfaces. They use this information to determine if light is sufficient for occupants to carry out normal activities. The design day for daylight factor calculations is based
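A short Python sketch of the calculation above (the component values and the 10,000 lux design sky are illustrative only):

    def daylight_factor(sc, erc, irc, e_outdoor):
        # DF (%) = indoor illuminance (SC + ERC + IRC) over outdoor illuminance.
        return (sc + erc + irc) / e_outdoor * 100.0

    # e.g. under an overcast design sky of 10,000 lux:
    print(daylight_factor(sc=150.0, erc=30.0, irc=20.0, e_outdoor=10_000.0))  # 2.0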
https://en.wikipedia.org/wiki/Zoomracks
Zoomracks was a shareware database management system for the Atari ST and IBM PC that used a card-file metaphor for displaying and manipulating data. Its main claim to fame was an early and somewhat contentious software patent lawsuit filed against Apple Computer's HyperCard and similar products. Zoomracks, introduced in 1985, represented data in a form that was visually represented by filing cards, known as "QUICKCARD"s. Cards could be designed within the program as "templates", using general-purpose data fields known as "FIELDSCROLL"s, which could hold up to 250 lines of 80 characters. Cards were collected into a "RACK", which was essentially a single database file. The display was character-based and did not make use of the Atari's GEM interface even though this was the primary platform for the product. Unlike similar database programs of the era, Zoomracks did not support different types of data internally; everything was represented as text. When a rack was opened the cards were displayed as if they were in a sort of linear rolodex, and the user could "zoom in", non-graphically, on any particular area to see more details of the cards in that area, and then zoom in again to see all of the fields on a particular card. The racks could display their cards sorted in a variety of ways, making navigation much easier than with a real-world rolodex, which is sorted only by a single pre-defined index (normally last name). Data could be moved from database to database simply by cutting a card out of one stack and pasting it into another. Up to nine racks could be opened at one time. Zoomracks II, introduced in 1987, added support for report generation and some basic mathematics. In order to extract a certain subset of the information in a rack and lay it out for printing, the original Zoomracks required the user to cut and paste the desired cards into a new rack. In Zoomracks II a report (possibly only one per rack?) could be defined, laid out as needed complete with h
https://en.wikipedia.org/wiki/Zone-H
Zone-H is an archive of defaced websites. It was established in Estonia on March 2, 2002. A whois request on the domain shows that it was created on February 14, 2002. Product Once a defaced website is submitted to Zone-H, it is mirrored on the Zone-H servers. The website is then moderated by the Zone-H staff to check whether the defacement was fake. Sometimes, the hackers themselves submit their hacked pages to the site. It is an Internet security portal containing original IT security news, digital warfare news, geopolitics, proprietary and general advisories, analyses, forums, and research. Zone-H is the largest archive of web intrusions. It is published in several languages. Zone-H has recently been banned by some Indian ISPs owing to legal action by the Indian government. Zone-H is popular in Iran and Turkey. See also WabiSabiLabi, online marketplace also created by Roberto Preatoni
https://en.wikipedia.org/wiki/Oxymonad
The Oxymonads (or Oxymonadida) are a group of flagellated protozoa found exclusively in the intestines of termites and other wood-eating insects. Along with the similar parabasalid flagellates, they harbor the symbiotic bacteria that are responsible for breaking down cellulose. It includes Dinenympha, Pyrsonympha, and Oxymonas. Characteristics Most Oxymonads are around 50 μm in size and have a single nucleus, associated with four flagella. Their basal bodies give rise to several long sheets of microtubules, which form an organelle called an axostyle, but different in structure from the axostyles of parabasalids. The cell may use the axostyle to swim, as the sheets slide past one another and cause it to undulate. An associated fiber called the preaxostyle separates the flagella into two pairs. A few oxymonads have multiple nuclei, flagella, and axostyles. Relationship to Trimastix The free-living flagellate Trimastix is closely related to the oxymonads. It lacks mitochondria and has four flagella separated by a preaxostyle, but unlike the oxymonads has a feeding groove. This character places the Oxymonads and Trimastix among the Excavata, and in particular they may belong to the metamonads. Taxonomy Order Oxymonadida Grassé 1952 emend. Cavalier-Smith 2003 Family Oxymonadidae Kirby 1928 [Oxymonadaceae] Genus ?Barroella Zeliff 1944 [Kirbyella Zeliff 1930 non Kirkaldy 1906 non Bolivar 1909] Genus ?Metasaccinobaculus Freitas 1945 Genus ?Tubulimonoides Krishnamurthy & Sultana 1976 Genus Microrhopalodina Grassé & Foa 1911 [Proboscidiella Kofoid & Swezy 1926; Opisthomitus Grassé 1952 non Duboscq & Grassé 1934] Genus Oxymonas Janicki 1915 Genus Sauromonas Grassé & Hollande 1952 Family Polymastigidae Bütschli 1884 [Polymastigaceae] Genus ?Brachymonas Grassé 1952 non Hiraishi et al. 1995 Genus ?Paranotila Cleveland 1966 Genus Monocercomonoides Travis 1932 Genus Polymastix Bütschli 1884 non Gruber 1884 Family Pyrsonymphidae Grassé 1892 [Pyrsonymphaceae; Di
https://en.wikipedia.org/wiki/Davicil
Davicil is a chlorinated pyridine derivative with antimicrobial properties, which is used as a fungicide. It can be allergenic in humans and produce contact dermatitis.
https://en.wikipedia.org/wiki/Mid-range
In statistics, the mid-range or mid-extreme is a measure of central tendency of a sample, defined as the arithmetic mean of the maximum and minimum values of the data set: M = (max x_i + min x_i) / 2. The mid-range is closely related to the range, a measure of statistical dispersion defined as the difference between maximum and minimum values. The two measures are complementary in the sense that if one knows the mid-range and the range, one can find the sample maximum and minimum values. The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, for many distributions it is one of the least efficient and least robust statistics. However, it finds some use in special cases: it is the maximally efficient estimator for the center of a uniform distribution, trimmed mid-ranges address robustness, and as an L-estimator, it is simple to understand and compute. Robustness The mid-range is highly sensitive to outliers and ignores all but two data points. It is therefore a very non-robust statistic, having a breakdown point of 0, meaning that a single observation can change it arbitrarily. Further, it is highly influenced by outliers: increasing the sample maximum or decreasing the sample minimum by x changes the mid-range by x/2, while it changes the sample mean, which also has a breakdown point of 0, by only x/n. It is thus of little use in practical statistics, unless outliers are already handled. A trimmed mid-range is known as a midsummary – the n% trimmed mid-range is the average of the n% and (100−n)% percentiles, and is more robust, having a breakdown point of n%. In the middle of these is the midhinge, which is the 25% midsummary. The median can be interpreted as the fully trimmed (50%) mid-range; this accords with the convention that the median of an even number of points is the mean of the two middle points. These trimmed mid
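A small Python illustration of these definitions (the sample data are invented; statistics.quantiles is used for the percentiles):

    import statistics

    def midrange(xs):
        # Mean of the two sample extremes.
        return (max(xs) + min(xs)) / 2

    def midsummary(xs, p):
        # p% trimmed mid-range: mean of the p-th and (100-p)-th percentiles.
        qs = statistics.quantiles(xs, n=100, method='inclusive')
        return (qs[p - 1] + qs[100 - p - 1]) / 2

    data = [2, 3, 4, 5, 6, 7, 8, 100]   # one large outlier
    print(midrange(data))               # 51.0 -- dragged far up by the outlier
    print(midsummary(data, 25))         # 5.5  -- the midhinge, much less affected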
https://en.wikipedia.org/wiki/Constant%20%28mathematics%29
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as: ax² + bx + c, where a, b and c are constants (coefficients or parameters), and x a variable—a placeholder for the argument of the function being studied. A more explicit way to denote this function is x ↦ ax² + bx + c, which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x⁰. More generally, any polynomial term or expression of degree zero (no variable) is a constant. Constant function A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as f(x) = 5, has a graph of a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. Context-dependence The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: d/dx 2^x = lim_{h→0} (2^{x+h} − 2^x)/h = lim_{h→0} 2^x ((2^h − 1)/h) = 2^x lim_{h→0} ((2^h − 1)/h), where the factor 2^x may be taken out of the limit because it does not depend on h. "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context. No
https://en.wikipedia.org/wiki/Function%20composition
In mathematics, function composition is an operation that takes two functions f and g, and produces a function h = g ∘ f such that h(x) = g(f(x)). In this operation, the function g is applied to the result of applying the function f to x. That is, the functions f : X → Y and g : Y → Z are composed to yield a function that maps x in domain X to g(f(x)) in codomain Z. Intuitively, if z is a function of y, and y is a function of x, then z is a function of x. The resulting composite function is denoted g ∘ f : X → Z, defined by (g ∘ f)(x) = g(f(x)) for all x in X. The notation g ∘ f is read as "g of f", "g after f", "g circle f", "g round f", "g about f", "g composed with f", "g following f", "f then g", or "g on f", or "the composition of g and f". Intuitively, composing functions is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘. As a result, all properties of composition of relations are true of composition of functions, such as the property of associativity. Composition of functions is different from multiplication of functions (if defined at all), and has some quite different properties; in particular, composition of functions is not commutative. Examples Composition of functions on a finite set: If f and g are given by listing their values on each element, then g ∘ f is obtained by applying f and then g to each element, as shown in the figure. Composition of functions on an infinite set: If f : R → R (where R is the set of all real numbers) and g : R → R are each given by a formula, then g ∘ f is obtained by substituting f(x) for the argument of g. If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t. Properties The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted. In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter. Moreover, it is oft
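A tiny Python sketch of composition and its non-commutativity (the function names are illustrative):

    def compose(g, f):
        # Return g ∘ f, the function x -> g(f(x)).
        return lambda x: g(f(x))

    double = lambda x: 2 * x
    inc = lambda x: x + 1

    print(compose(inc, double)(10))   # inc(double(10)) = 21
    print(compose(double, inc)(10))   # double(inc(10)) = 22, so g∘f != f∘g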
https://en.wikipedia.org/wiki/Angular%20distance
Angular distance or angular separation, also known as apparent distance or apparent separation, denoted θ, is the angle between two sightlines, or between two point objects as viewed from an observer. Angular distance appears in mathematics (in particular geometry and trigonometry) and all natural sciences (e.g., kinematics, astronomy, and geophysics). In the classical mechanics of rotating objects, it appears alongside angular velocity, angular acceleration, angular momentum, moment of inertia and torque. Use The term angular distance (or separation) is technically synonymous with angle itself, but is meant to suggest the linear distance between objects (for instance, a couple of stars observed from Earth). Measurement Since the angular distance (or separation) is conceptually identical to an angle, it is measured in the same units, such as degrees or radians, using instruments such as goniometers or optical instruments specially designed to point in well-defined directions and record the corresponding angles (such as telescopes). Formulation To derive the equation that describes the angular separation of two points located on the surface of a sphere as seen from the center of the sphere, we use the example of two astronomical objects A and B observed from the Earth. The objects A and B are defined by their celestial coordinates, namely their right ascensions (RA) α_A and α_B, and declinations (dec) δ_A and δ_B. Let O indicate the observer on Earth, assumed to be located at the center of the celestial sphere. The dot product of the unit vectors OA and OB equals cos θ; decomposing the two unit vectors into their Cartesian components in the equatorial frame and expanding the product gives cos θ = sin δ_A sin δ_B + cos δ_A cos δ_B cos(α_A − α_B). Small angular distance approximation The above expression is valid for any position of A and B on the sphere. In astronomy, it often happens that the considered objects are really close in the sky: stars in a telescope field of view, binary stars, the satellites of the giant planets of the solar system, etc. In the case
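A direct Python transcription of the formula above (coordinates in degrees; the example values are arbitrary):

    from math import sin, cos, acos, radians, degrees

    def angular_separation(ra_a, dec_a, ra_b, dec_b):
        # cos(theta) = sin(d1)sin(d2) + cos(d1)cos(d2)cos(a1 - a2)
        a1, d1, a2, d2 = map(radians, (ra_a, dec_a, ra_b, dec_b))
        return degrees(acos(sin(d1) * sin(d2) + cos(d1) * cos(d2) * cos(a1 - a2)))

    # The celestial pole is 90 degrees from every point on the celestial equator:
    print(angular_separation(0.0, 90.0, 123.0, 0.0))   # ~90.0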
https://en.wikipedia.org/wiki/Iprodione
Iprodione is a hydantoin fungicide and nematicide. Application Iprodione is used on crops affected by Botrytis bunch rot, Brown rot, Sclerotinia and other fungal diseases in plants. It is currently applied in a variety of crops: fruit, vegetables, ornamental trees and shrubs and on lawns. It is a contact fungicide that inhibits the germination of fungal spores and it blocks the growth of the fungal mycelium. It has been marketed under the brand name "Rovral" and "Chipco green" (both brands of Bayer CropScience). This chemical was developed originally by Rhône-Poulenc Agrochimie (later Aventis CropScience and in 2002 acquired by Bayer). As of 2004 there were no composition patents on iprodione. DevGen NV (Now part of Syngenta) discovered that iprodione kills nematodes and filed for patent protection for those uses. Iprodione was approved in the Turkish market under the brand name Devguard for use on tomatoes and cucumbers in 2009, and was approved in the US as Enclosure for use in commercial peanut production in May 2010. Iprodione was approved in Europe in 2010, but approval was not renewed in 2017.
https://en.wikipedia.org/wiki/Physics%20of%20the%20Solid%20State
Physics of the Solid State is a peer-reviewed scientific journal of solid state physics that publishes articles from researchers based at the Russian Academy of Sciences and other leading institutions in Russia. The journal is published by Pleiades Publishing and the online version is provided by the publisher Springer. It is edited by Alexander A. Kaplyanskii. Abstracting and indexing Physics of the Solid State is abstracted and indexed in a number of bibliographic databases. According to the publisher, the journal has an impact factor of 0.895.
https://en.wikipedia.org/wiki/Foodomics
Foodomics was defined in 2009 as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge". Foodomics requires the combination of food chemistry, biological sciences, and data analysis. The study of foodomics came under the spotlight after it was introduced at the first international conference in 2009 at Cesena, Italy. Many experts in the fields of omics and nutrition were invited to this event in order to explore new approaches and possibilities in the area of food science and technology. However, research and development in foodomics today are still limited due to the high-throughput analysis required. The American Chemical Society journal Analytical Chemistry dedicated its cover to foodomics in December 2012. Foodomics involves four main areas of omics: Genomics, which involves investigation of the genome and its patterns. Transcriptomics, which explores sets of genes and identifies differences among various conditions, organisms, and circumstances, using several techniques including microarray analysis; Proteomics, which studies every kind of protein produced from the genes, covering how a protein functions in a particular place, its structure, its interactions with other proteins, etc.; Metabolomics, which includes the chemical diversity in cells and how it affects cell behavior. Advantages of foodomics Foodomics greatly helps scientists in the areas of food science and nutrition to gain better access to data, which is used to analyze the effects of food on human health, etc. It is believed to be another step towards a better understanding of the development and application of technology and food. Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics, which is the integration of the study of nutrition, genes and omics. Colon cancer The foodomics approach is used to analyze and establish the links betwee
https://en.wikipedia.org/wiki/2015%20in%20paleobotany
This article records new taxa of fossil plants that are scheduled to be described during the year 2015, as well as other significant discoveries and events related to paleobotany that are scheduled to occur in the year 2015. Ferns and fern allies Marchantiophyta Bennettitales Czekanowskiales Ginkgophytes Conifers Araucariaceae Cupressaceae Pinaceae Other conifers Conifer research A study on the morphology, identity and affinity of the purported Cretaceous pitcher plant Archaeamphora longicervia is published by Wong et al. (2015), who interpret the supposed pitchers as insect-induced leaf galls, and consider A. longicervia to be insect-galled leaves of the gymnosperm species Liaoningocladus boii. Other seed plants Flowering plants Basal angiosperms Unplaced non-eudicots Magnoliids Monocots Basal eudicots Superasterids Angiosperm research Pollen grains representing the oldest fossils of members of the family Asteraceae discovered so far are described from the Late Cretaceous of Antarctica by Barreda et al. (2015). Superrosids Saxifragales Vitales Fabids Malvids Other angiosperms Other plants
https://en.wikipedia.org/wiki/Reaction%20coordinate
In chemistry, a reaction coordinate is an abstract one-dimensional coordinate chosen to represent progress along a reaction pathway. Where possible it is usually a geometric parameter that changes during the conversion of one or more molecular entities, such as bond length or bond angle. For example, in the homolytic dissociation of molecular hydrogen, an apt choice would be the coordinate corresponding to the bond length. Non-geometric parameters such as bond order are also used, but such direct representation of the reaction process can be difficult, especially for more complex reactions. In molecular dynamics simulations, a reaction coordinate is called a collective variable. A reaction coordinate parametrises the reaction process at the level of the molecular entities involved. It differs from the extent of reaction, which measures reaction progress in terms of the composition of the reaction system. (Free) energy is often plotted against reaction coordinate(s) to demonstrate in schematic form the potential energy profile (an intersection of a potential energy surface) associated with the reaction. In the formalism of transition-state theory the reaction coordinate for each reaction step is one of a set of curvilinear coordinates obtained from the conventional coordinates for the reactants, and leads smoothly among configurations, from reactants to products via the transition state. It is typically chosen to follow the path defined by the potential energy gradient – shallowest ascent/steepest descent – from reactants to products.
https://en.wikipedia.org/wiki/Gene%20duplication
Gene duplication (or chromosomal duplication or gene amplification) is a major mechanism through which new genetic material is generated during molecular evolution. It can be defined as any duplication of a region of DNA that contains a gene. Gene duplications can arise as products of several types of errors in DNA replication and repair machinery as well as through fortuitous capture by selfish genetic elements. Common sources of gene duplications include ectopic recombination, retrotransposition events, aneuploidy, polyploidy, and replication slippage. Mechanisms of duplication Ectopic recombination Duplications arise from an event termed unequal crossing-over that occurs during meiosis between misaligned homologous chromosomes. The chance of it happening is a function of the degree of sharing of repetitive elements between two chromosomes. The products of this recombination are a duplication at the site of the exchange and a reciprocal deletion. Ectopic recombination is typically mediated by sequence similarity at the duplicate breakpoints, which form direct repeats. Repetitive genetic elements such as transposable elements offer one source of repetitive DNA that can facilitate recombination, and they are often found at duplication breakpoints in plants and mammals. Replication slippage Replication slippage is an error in DNA replication that can produce duplications of short genetic sequences. During replication DNA polymerase begins to copy the DNA. At some point during the replication process, the polymerase dissociates from the DNA and replication stalls. When the polymerase reattaches to the DNA strand, it aligns the replicating strand to an incorrect position and incidentally copies the same section more than once. Replication slippage is also often facilitated by repetitive sequences, but requires only a few bases of similarity. Retrotransposition Retrotransposons, mainly L1, can occasionally act on cellular mRNA. Transcripts are reverse tr
https://en.wikipedia.org/wiki/Kapteyn%20series
A Kapteyn series is a series expansion of an analytic function on a domain in terms of Bessel functions of the first kind, of the general form f(z) = α₀ + 2 Σ_{n=1}^∞ α_n J_n(nz). Kapteyn series are named after Willem Kapteyn, who first studied such series in 1893. Let f be a function analytic on the domain of convergence D_a, with a < 1. Then f can be expanded in the form f(z) = α₀ + 2 Σ_{n=1}^∞ α_n J_n(nz) for z in D_a, where α_n = (1/2πi) ∮ f(z) Θ_n(z) dz. The path of the integration is the boundary of D_a. Here Θ₀(z) = 1/z, and for n > 0 the kernels Θ_n(z) (sometimes called Kapteyn polynomials) are given by an explicit formula. Kapteyn series are important in physical problems. Among other applications, the solution E of Kepler's equation M = E − e sin E can be expressed via a Kapteyn series: E = M + 2 Σ_{n=1}^∞ (J_n(ne)/n) sin(nM). Relation between the Taylor coefficients and the coefficients α_n of a function Let us suppose that the Taylor series of f reads as f(z) = Σ_{n=0}^∞ a_n zⁿ. Then the coefficients α_n in the Kapteyn expansion of f can be determined as follows. Examples Kapteyn series for the powers of z were found by Kapteyn himself; a classical example, valid inside the region of convergence, is Σ_{n=1}^∞ J_n(nz) = z/(2(1 − z)). See also Schlömilch's series
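A numerical check of the Kepler-equation series in Python (assuming SciPy for the Bessel function J_n; the names and sample values are illustrative):

    from math import sin
    from scipy.special import jv   # J_n(x), Bessel function of the first kind

    def eccentric_anomaly(M, e, terms=50):
        # Kapteyn series: E = M + 2 * sum_n J_n(n e) sin(n M) / n
        return M + 2 * sum(jv(n, n * e) * sin(n * M) / n
                           for n in range(1, terms + 1))

    M, e = 1.0, 0.3
    E = eccentric_anomaly(M, e)
    print(E - e * sin(E))   # ~1.0, i.e. E solves Kepler's equation for M = 1.0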
https://en.wikipedia.org/wiki/Magic%20circle%20%28mathematics%29
Magic circles were invented by the Song dynasty (960–1279) Chinese mathematician Yang Hui (c. 1238–1298). It is the arrangement of natural numbers on circles where the sum of the numbers on each circle and the sum of numbers on diameters are identical. One of his magic circles was constructed from the natural numbers from 1 to 33 arranged on four concentric circles, with 9 at the center. Yang Hui magic circles Yang Hui's magic circle series was published in his Xugu Zhaiqi Suanfa《續古摘奇算法》(Sequel to Excerpts of Mathematical Wonders) of 1275. His magic circle series includes: magic 5 circles in square, 6 circles in ring, magic eight circle in square magic concentric circles, magic 9 circles in square. Yang Hui magic concentric circle Yang Hui's magic concentric circle has the following properties The sum of the numbers on four diameters = 147, 28 + 5 + 11 + 25 + 9 + 7 + 19 + 31 + 12 = 147 The sum of 8 numbers plus 9 at the center = 147; 28 + 27 + 20 + 33 + 12 + 4 + 6 + 8 + 9 = 147 The sum of eight radius without 9 = magic number 69: such as 27 + 15 + 3 + 24 = 69 The sum of all numbers on each circle (not including 9) = 2 × 69 There exist 8 semicircles, where the sum of numbers = magic number 69; there are 16 line segments (semicircles and radii) with magic number 69, more than a 6 order magic square with only 12 magic numbers. Yang Hui magic eight circles in a square 64 numbers arrange in circles of eight numbers, total sum 2080, horizontal / vertical sum = 260. From NW corner clockwise direction, the sum of 8-number circles are: 40 + 24 + 9 + 56 + 41 + 25 + 8 + 57 = 260 14 + 51 + 46 + 30 + 3 + 62 + 35 + 19 = 260 45 + 29 + 4 + 61 + 36 + 20 + 13 + 52 = 260 37 + 21 + 12 + 53 + 44 + 28 + 5 + 60 = 260 47 + 31 + 2 + 63 + 34 + 18 + 15 + 50 = 260 7 + 58 + 39 + 23 + 10 + 55 + 42 + 26 = 260 38 + 22 + 11 + 54 + 43 + 27 + 6 + 59 = 260 48 + 32 + 1 + 64 + 33 + 17 + 16 + 49 = 260 Also the sum of the eight numbers along the WE/NS axis 14 + 51 + 62 +
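The quoted sums are easy to verify mechanically; a two-line Python check of one diameter and one radius of the concentric-circle arrangement:

    diameter = [28, 5, 11, 25, 9, 7, 19, 31, 12]
    radius = [27, 15, 3, 24]
    print(sum(diameter), sum(radius))   # 147 69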
https://en.wikipedia.org/wiki/Standard%20state
In chemistry, the standard state of a material (pure substance, mixture or solution) is a reference point used to calculate its properties under different conditions. A superscript circle ° (degree symbol) or a Plimsoll (⦵) character is used to designate a thermodynamic quantity in the standard state, such as change in enthalpy (ΔH°), change in entropy (ΔS°), or change in Gibbs free energy (ΔG°). The degree symbol has become widespread, although the Plimsoll is recommended in standards, see discussion about typesetting below. In principle, the choice of standard state is arbitrary, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a conventional set of standard states for general use. The standard state should not be confused with standard temperature and pressure (STP) for gases, nor with the standard solutions used in analytical chemistry. STP is commonly used for calculations involving gases that approximate an ideal gas, whereas standard state conditions are used for thermodynamic calculations. For a given material or substance, the standard state is the reference state for the material's thermodynamic state properties such as enthalpy, entropy, Gibbs free energy, and for many other material standards. The standard enthalpy change of formation for an element in its standard state is zero, and this convention allows a wide range of other thermodynamic quantities to be calculated and tabulated. The standard state of a substance does not have to exist in nature: for example, it is possible to calculate values for steam at 298.15 K and 1 bar, although steam does not exist (as a gas) under these conditions. The advantage of this practice is that tables of thermodynamic properties prepared in this way are self-consistent. Conventional standard states Many standard states are non-physical states, often referred to as "hypothetical states". Nevertheless, their thermodynamic properties are well-defined, usually by an extrapolation from some
https://en.wikipedia.org/wiki/Lynne%20McClure
Catherine Lynne McClure (born 1952) is a British mathematics educator. In 2014 she was appointed as director of Cambridge Mathematics, a program at the University of Cambridge that spans the university's mathematics and education faculties, Cambridge Assessment, and the Cambridge University Press, and is aimed at developing a flexible tool to inform new mathematics curricula for primary and secondary mathematics education. Education and career McClure was born in 1952. She has a degree from University College London and a Postgraduate Certificate in Education from the University of Oxford, and master's degrees from both the Open University and the University of Cambridge. After working as a primary and secondary mathematics teacher, she became principal lecturer in education at Oxford Brookes University and then at the University of Edinburgh. At Cambridge, she directed the NRICH and Underground Maths Projects before being appointed to direct Cambridge Mathematics. McClure was appointed Officer of the Order of the British Empire (OBE) in the 2022 New Year Honours for services to education. Service McClure was president of the Mathematical Association for 2014–2015, and executive chair of the International Society for Design and Development in Education for 2017–2019. She has served twice on the Advisory Committee on Mathematics Education, chairs the Strategic Board of the Cambridgeshire Maths Hub and is a trustee of National Numeracy. Publications Meeting the needs of your most able pupils : mathematics, 2006 Maths problem solving, 2008
https://en.wikipedia.org/wiki/X86%20memory%20models
In computing, the x86 memory models are a set of six different memory models of the x86 CPU operating in real mode which control how the segment registers are used and the default size of pointers. Memory segmentation Four registers are used to refer to four segments on the 16-bit x86 segmented memory architecture: DS (data segment), CS (code segment), SS (stack segment), and ES (extra segment). Another 16-bit register can act as an offset into a given segment, and so a logical address on this platform is written segment:offset, typically in hexadecimal notation. In real mode, in order to calculate the physical address of a byte of memory, the hardware shifts the contents of the appropriate segment register 4 bits left (effectively multiplying by 16), and then adds the offset. For example, the logical address 7522:F139 yields the 20-bit physical address 75220h + F139h = 84359h. Note that this process leads to aliasing of memory, such that any given physical address has up to 4096 corresponding logical addresses. This complicates the comparison of pointers to different segments. Pointer sizes Pointer formats are known as near, far, or huge. Near pointers are 16-bit offsets within the reference segment, i.e. DS for data and CS for code. They are the fastest pointers, but are limited to addressing 64 KB of memory (the associated segment of the data type). Near pointers can be held in registers (typically SI and DI). mov bx, word [reg] mov ax, word [bx] mov dx, word [bx+2] Far pointers are 32-bit pointers containing a segment and an offset. To use them, the segment register ES is loaded by means of the instruction les [reg]|[mem],dword [mem]|[reg]. They may reference up to 1024 KiB of memory. Note that pointer arithmetic (addition and subtraction) does not modify the segment portion of the pointer, only its offset. Operations which exceed the bounds of zero or 65535 (0xFFFF) will wrap around modulo 64K, just as any normal 16-bit operation does. For example, if the segment regi
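The shift-and-add rule, and the aliasing it causes, can be checked with a few lines of Python; the function name is my own, and the 20-bit mask models the 8086's address-bus wrap-around.

def phys(segment, offset):
    # Real mode: physical = (segment << 4) + offset, truncated to 20 bits.
    return ((segment << 4) + offset) & 0xFFFFF

assert phys(0x7522, 0xF139) == 0x84359   # the example above
# Aliasing: a different segment:offset pair reaches the same byte.
assert phys(0x8435, 0x0009) == 0x84359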
https://en.wikipedia.org/wiki/List%20of%20rules%20of%20inference
This is a list of rules of inference, logical laws that relate to mathematical formulae. Introduction Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules is complete if it can be used to infer every valid conclusion, and sound if it never infers an invalid conclusion. A sound and complete set of rules need not include every rule in the following list, as many of the rules are redundant and can be proven from the other rules. Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation φ ⊢ ψ indicates such a subderivation from the temporary assumption φ to ψ. Rules for propositional calculus Rules for negations Reductio ad absurdum (or Negation Introduction) Reductio ad absurdum (related to the law of excluded middle) Ex contradictione quodlibet Rules for conditionals Deduction theorem (or Conditional Introduction) Modus ponens (or Conditional Elimination) Modus tollens Rules for conjunctions Adjunction (or Conjunction Introduction) Simplification (or Conjunction Elimination) Rules for disjunctions Addition (or Disjunction Introduction) Case analysis (or Proof by Cases or Argument by Cases or Disjunction elimination) Disjunctive syllogism Constructive dilemma Rules for biconditionals Biconditional introduction Biconditional elimination Rules of classical predicate calculus In the following rules, φ(β/α) is exactly like φ except for having the term β wherever φ has the free variable α. Universal Generalization (or Universal Introduction): from φ(β/α), infer ∀α φ. Restriction 1: β is a variable which does not occur in φ. Restriction 2: β is not mentioned in any hypothesis or undischarged assumptions. Universal Instantiation (or Universal Elimination): from ∀α φ, infer φ(β/α). Restriction: No free occurrence of α in φ falls within the scope of a quantifier quantifying a variable occurring in β.
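As a quick illustration of soundness for one rule, the following Python sketch checks by brute force that modus ponens is truth-preserving, i.e. that ((p → q) ∧ p) → q is a tautology; the helper name is my own.

from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

# Modus ponens is sound iff ((p -> q) and p) -> q holds on every valuation.
assert all(implies(implies(p, q) and p, q)
           for p, q in product([False, True], repeat=2))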
https://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner%20evolution
In probability theory, the Schramm–Loewner evolution with parameter κ, also known as stochastic Loewner evolution (SLEκ), is a family of random planar curves that have been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Given a parameter κ and a domain U in the complex plane, it gives a family of random curves in U, with κ controlling how much the curve turns. There are two main variants of SLE, chordal SLE, which gives a family of random curves from two fixed boundary points, and radial SLE, which gives a family of random curves from a fixed boundary point to a fixed interior point. These curves are defined to satisfy conformal invariance and a domain Markov property. It was discovered by Oded Schramm in 2000 as a conjectured scaling limit of the planar uniform spanning tree (UST) and the planar loop-erased random walk (LERW) probabilistic processes, and developed by him together with Greg Lawler and Wendelin Werner in a series of joint papers. Besides UST and LERW, the Schramm–Loewner evolution is conjectured or proven to describe the scaling limit of various stochastic processes in the plane, such as critical percolation, the critical Ising model, the double-dimer model, self-avoiding walks, and other critical statistical mechanics models that exhibit conformal invariance. The SLE curves are the scaling limits of interfaces and other non-self-intersecting random curves in these models. The main idea is that the conformal invariance and a certain Markov property inherent in such stochastic processes together make it possible to encode these planar curves into a one-dimensional Brownian motion running on the boundary of the domain (the driving function in Loewner's differential equation). This way, many important questions about the planar models can be translated into exercises in Itô calculus. Indeed, several mathematically non-rigorous predictions made by physicists using conformal field theory have been proven using this st
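A common way to make the driving-function idea concrete is a standard discretization: approximate W_t = √κ B_t as piecewise constant, so each time step contributes an explicit conformal slit map, and recover trace points by composing the inverse maps. The Python sketch below follows that scheme under my own naming; it is an O(n²) illustration, not an efficient or rigorous implementation.

import numpy as np

def sle_trace(kappa, n_steps=1000, dt=1e-4, seed=0):
    # Driving function W_t = sqrt(kappa) * B_t sampled on a uniform grid.
    rng = np.random.default_rng(seed)
    W = np.concatenate([[0.0], np.cumsum(
        np.sqrt(kappa * dt) * rng.standard_normal(n_steps))])
    trace = []
    for n in range(1, n_steps + 1):
        z = complex(W[n], 0.0)          # gamma(t_n) = f_{t_n}(W_{t_n})
        for k in range(n, 0, -1):       # compose inverse elementary slit maps
            s = np.sqrt((z - W[k - 1]) ** 2 - 4 * dt)
            if s.imag < 0:              # pick the branch mapping into the half-plane
                s = -s
            z = W[k - 1] + s
        trace.append(z)
    return np.array(trace)              # curve points in the upper half-plane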
https://en.wikipedia.org/wiki/Certainty
Certainty (also known as epistemic certainty or objective certainty) is the epistemic property of beliefs which a person has no rational grounds for doubting. One standard way of defining epistemic certainty is that a belief is certain if and only if the person holding that belief could not be mistaken in holding that belief. Other common definitions of certainty involve the indubitable nature of such beliefs or define certainty as a property of those beliefs with the greatest possible justification. Certainty is closely related to knowledge, although contemporary philosophers tend to treat knowledge as having lower requirements than certainty. Importantly, epistemic certainty is not the same thing as psychological certainty (also known as subjective certainty or certitude), which describes the highest degree to which a person could be convinced that something is true. While a person may be completely convinced that a particular belief is true, and might even be psychologically incapable of entertaining its falsity, this does not entail that the belief is itself beyond rational doubt or incapable of being false. While the word "certainty" is sometimes used to refer to a person's subjective certainty about the truth of a belief, philosophers are primarily interested in the question of whether any beliefs ever attain objective certainty. The philosophical question of whether one can ever be truly certain about anything has been widely debated for centuries. Many proponents of philosophical skepticism deny that certainty is possible, or claim that it is only possible in a priori domains such as logic or mathematics. Historically, many philosophers have held that knowledge requires epistemic certainty, and therefore that one must have infallible justification in order to count as knowing the truth of a proposition. However, many philosophers such as René Descartes were troubled by the resulting skeptical implications, since all of our experiences at least seem to be c
https://en.wikipedia.org/wiki/Tahini
Tahini () or tahina (, ) is a Middle Eastern condiment made from toasted ground hulled sesame. It is served by itself (as a dip) or as a major ingredient in hummus, baba ghanoush, and halva. Tahini is used in the cuisines of the Levant and Eastern Mediterranean, the South Caucasus, the Balkans, South Asia, Central Asia, and amongst Ashkenazi Jews, as well as in parts of Russia and North Africa. Sesame paste (though not called tahini) is also used in some East Asian cuisines. Etymology The word tahini is of Arabic origin and comes from a colloquial Levantine Arabic pronunciation of (), or more accurately (), whence also English tahina and Hebrew t'china. It is derived from the root , which as a verb means "to grind", and also produces the word , "flour" in some dialects. The word tahini appeared in English by the late 1930s. History The oldest mention of sesame is in a cuneiform document written 4000 years ago that describes the custom of serving the gods sesame wine. The historian Herodotus writes about the cultivation of sesame 3500 years ago in the region of the Tigris and Euphrates in Mesopotamia. It was mainly used as a source of oil. Tahini is mentioned as an ingredient of hummus kasa, a recipe transcribed in an anonymous 13th-century Arabic cookbook, Kitab Wasf al-Atima al-Mutada. Sesame paste is an ingredient in some Chinese and Japanese dishes; Sichuan cuisine uses it in some recipes for dandan noodles. Sesame paste is also used in Indian cuisine. In North America, sesame tahini, along with other raw nut butters, was available by 1940 in health food stores. Preparation and storage Tahini is made from sesame seeds that are soaked in water and then crushed to separate the bran from the kernels. The crushed seeds are soaked in salt water, causing the bran to sink. The floating kernels are skimmed off the surface, toasted, and ground to produce an oily paste. It can also be prepared with untoasted seeds and called "raw tahini". Because of tahini's high oil
https://en.wikipedia.org/wiki/List%20of%20self-intersecting%20polygons
Self-intersecting polygons, crossed polygons, or self-crossing polygons are polygons some of whose edges cross each other. They contrast with simple polygons, whose edges never cross. Some types of self-intersecting polygons are: the crossed quadrilateral, with four edges the antiparallelogram, a crossed quadrilateral with alternate edges of equal length the crossed rectangle, an antiparallelogram whose edges are two opposite sides and the two diagonals of a rectangle, hence having two edges parallel Star polygons pentagram, with five edges hexagram, with six edges heptagram, with seven edges octagram, with eight edges enneagram or nonagram, with nine edges decagram, with ten edges hendecagram, with eleven edges dodecagram, with twelve edges icositetragram, with twenty-four edges 257-gram, with two hundred and fifty-seven edges See also Complex polygon Geometric shapes Mathematics-related lists
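These star polygons are straightforward to construct: {n/k} joins every k-th of n equally spaced points on a circle. A short Python sketch follows; the function name is my own.

import math

def star_polygon(n, k, radius=1.0):
    # Vertices of {n/k}: visit every k-th of n points on a circle,
    # closing the path (a single circuit when gcd(n, k) == 1).
    pts = [(radius * math.cos(2 * math.pi * i / n),
            radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    return [pts[(i * k) % n] for i in range(n + 1)]

pentagram = star_polygon(5, 2)   # the five-pointed star {5/2}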
https://en.wikipedia.org/wiki/Origin%20PC
Origin PC Corp. is a custom personal computer manufacturing company located in Miami, Florida. Founded by former Alienware employees in 2009, Origin PC assembles high-performance gaming and professional-use desktop and laptop computers from third-party components. History Soon after the acquisition of Alienware by Dell, former executives Kevin Wasielewski, Richard Cary, and Hector Penton formed Origin PC in Miami, Florida. The company states that the name Origin came from the company's intention to get back to the roots of building custom, high-performance computers for gamers and hardware enthusiasts. Origin PC's first products were the GENESIS desktop and the EON18 laptop. In 2014, Origin PC announced a new line of EVO series laptops. On January 7, 2014, at CES, Origin PC announced and launched Genesis (Full-Tower) and Millennium (Mid-Tower) desktop case. In July 2019, Corsair Components, Inc. announced its acquisition of Origin PC Corp. Hardware Origin gaming laptops are based upon the Clevo whitebox notebook chassis. See also List of computer system manufacturers
https://en.wikipedia.org/wiki/Signal-to-interference-plus-noise%20ratio
In information theory and telecommunication engineering, the signal-to-interference-plus-noise ratio (SINR) (also known as the signal-to-noise-plus-interference ratio (SNIR)) is a quantity used to give theoretical upper bounds on channel capacity (or the rate of information transfer) in wireless communication systems such as networks. Analogous to the signal-to-noise ratio (SNR) used often in wired communications systems, the SINR is defined as the power of a certain signal of interest divided by the sum of the interference power (from all the other interfering signals) and the power of some background noise. If the power of the noise term is zero, then the SINR reduces to the signal-to-interference ratio (SIR). Conversely, zero interference reduces the SINR to the SNR, which is used less often when developing mathematical models of wireless networks such as cellular networks. The complexity and randomness of certain types of wireless networks and signal propagation has motivated the use of stochastic geometry models in order to model the SINR, particularly for cellular or mobile phone networks. Description SINR is commonly used in wireless communication as a way to measure the quality of wireless connections. Typically, the energy of a signal fades with distance, which is referred to as path loss in wireless networks. Conversely, in wired networks the existence of a wired path between the sender or transmitter and the receiver determines the correct reception of data. In a wireless network one has to take other factors into account (e.g. the background noise, the interfering strength of other simultaneous transmissions). The concept of SINR attempts to create a representation of this aspect. Mathematical definition The SINR is usually defined for a particular receiver (or user). In particular, for a receiver located at some point x in space (usually, on the plane), its corresponding SINR is given by SINR(x) = P/(I + N), where P is the power of the incoming signal of int
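Under a simple power-law path-loss model ℓ(d) = d^(−α), the definition can be computed directly; all numbers below are illustrative assumptions, not values from the text.

def sinr(p_signal, d_signal, interferers, noise, alpha=4.0):
    # interferers: iterable of (power, distance) pairs.
    s = p_signal * d_signal ** (-alpha)            # received power of interest
    i = sum(p * d ** (-alpha) for p, d in interferers)
    return s / (i + noise)

# One desired transmitter at unit distance, two interferers, background noise.
print(sinr(1.0, 1.0, [(1.0, 2.0), (1.0, 4.0)], noise=1e-4))  # about 15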
https://en.wikipedia.org/wiki/Warped%20Passages
Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions is the debut non-fiction book by Lisa Randall, published in 2005, about particle physics in general and additional dimensions of space (cf. Kaluza–Klein theory) in particular. The book reached the top 50 at amazon.com, making it the world's first successful book on theoretical physics by a female author. She herself characterizes the book as being about physics and the multi-dimensional universe. The book describes, at a non-technical level, theoretical models Professor Randall developed with the physicist Raman Sundrum, in which various aspects of particle physics (e.g. supersymmetry) are explained in a higher-dimensional braneworld scenario. These models have since generated thousands of citations. Overview She comments that her motivation for writing this book was her "thinking that there were people who wanted a more complete and balanced vision of the current state of physics." She has noticed there is a large audience that thinks physics is about the bizarre or exotic. She observes that when people develop an understanding of the science of particle physics and the experiments that produce the science, people get excited. "The upcoming experiments at the Large Hadron Collider (LHC) at CERN near Geneva will test many ideas, including some of the warped extra-dimensional theories I talk about." Another motivation was that she "gambled that there are people who really want to understand the physics and how the many ideas connect." Background Randall is currently a professor at Harvard University in Cambridge, Massachusetts, focusing on particle physics and cosmology. She stays current through her research into the nature of matter's most basic elements, and the forces that govern these most basic elements. Randall's experiences, which qualify her as an authority on the subject of the book, are her original "contributions in a wide variety of physics studies, including cosmological
https://en.wikipedia.org/wiki/Eevee
is a Pokémon species in the Pokémon franchise. Created by Motofumi Fujiwara, it first appeared in the video games Pokémon Red and Blue. It has since appeared in various merchandise and spinoff titles, as well as in animated and printed adaptations of the franchise. It is also the game mascot and starter Pokémon for Pokémon: Let's Go, Eevee! Known as the Evolution Pokémon in the games and the anime, Eevee has an unstable genetic code, which allows it to evolve into one of eight different Pokémon, known as Eeveelutions, depending on the situation. The first three of these evolutions, Vaporeon, Jolteon, and Flareon, were introduced alongside Eevee in Pokémon Red and Blue. Five more evolutions have since been introduced in Pokémon games: Espeon, Umbreon, Leafeon, Glaceon, and Sylveon. Conception and characteristics The designs for Eevee and its initial evolutions Jolteon and Flareon were provided by Japanese graphic designer Motofumi Fujiwara, while fellow graphic designer Atsuko Nishida designed Vaporeon. Ken Sugimori, an illustrator and friend of the Pokémon franchise's creator, Satoshi Tajiri, provided illustrations of Eevee and its evolutions after seeing Fujiwara and Nishida's sprites. According to Fujiwara, he "wanted to create a blank slate Pokémon." Drawing upon his vague childhood memories, including an instance where he became lost in a forest and "encountered an undefinable creature," Fujiwara would go on to create the early design for Eevee, which he likened to "a fluffy cat or dog-like creature one would see in the country." In the original Japanese games, the Pokémon was called Eievui, a name which has similar prefixes to its current English name. However, before the English versions of the games were released, Eevee was originally going to be named Eon rather than Eevee. It was renamed to "Eevee" shortly before the English releases of Pokémon Red and Blue. According to the Pokémon video games, Eevee is a mammalian creature with brown fur, a bushy tail that
https://en.wikipedia.org/wiki/Bioregionalism
Bioregionalism is a philosophy that suggests that political, cultural, and economic systems are more sustainable and just if they are organized around naturally defined areas called bioregions, similar to ecoregions. Bioregions are defined through physical and environmental features, including watershed boundaries and soil and terrain characteristics. Bioregionalism stresses that the determination of a bioregion is also a cultural phenomenon, and emphasizes local populations, knowledge, and solutions. Bioregionalism asserts "that a bioregion's environmental components (geography, climate, plant life, animal life, etc.) directly influence ways for human communities to act and interact with each other which are, in turn, optimal for those communities to thrive in their environment. As such, those ways to thrive in their totality—be they economic, cultural, spiritual, or political—will be distinctive in some capacity as being a product of their bioregional environment." Bioregionalism is a concept that goes beyond national boundaries—an example is the concept of Cascadia, a region that is sometimes considered to consist of most of Oregon and Washington, the Alaska Panhandle, the far north of California and the West Coast of Canada, sometimes also including some or all of Idaho and western Montana. Another example of a bioregion, which does not cross national boundaries but does overlap state lines, is the Ozarks, a bioregion also referred to as the Ozarks Plateau, which consists of southern Missouri, northwest Arkansas, the northeast corner of Oklahoma, and the southeast corner of Kansas. Bioregions are not synonymous with ecoregions as defined by bodies such as the World Wildlife Fund or the Commission for Environmental Cooperation; the latter are scientifically based and focused on wildlife and vegetation. Bioregions, by contrast, are human regions, informed by nature but with a social and political element. In this way bioregionalism is simply political localism with an
https://en.wikipedia.org/wiki/Find%20%28Windows%29
In computing, find is a command in the command-line interpreters (shells) of a number of operating systems. It is used to search for a specific text string in a file or files. The command sends the specified lines to the standard output device. Overview The find command is a filter to find lines in the input data stream that contain or don't contain a specified string and send these to the output data stream. It does not support wildcard characters. The command is available in DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows, and ReactOS. On MS-DOS, the command is available in versions 2 and later. DR DOS 6.0 and Datalight ROM-DOS include an implementation of the command. The FreeDOS version was developed by Jim Hall and is licensed under the GPL. The Unix command find performs an entirely different function, analogous to forfiles on Windows. The rough equivalent to the Windows find is the Unix grep. Syntax FIND [/V] [/C] [/N] [/I] "string" [[drive:][path]filename[...]] Arguments: "string" This command-line argument specifies the text string to find. [drive:][path]filename Specifies a file or files in which to search the specified string. Flags: /V Displays all lines NOT containing the specified string. /C Displays only the count of lines containing the string. /N Displays line numbers with the displayed lines. /I Ignores the case of characters when searching for the string. Note: If a pathname is not specified, FIND searches the text typed at the prompt or piped from another command. Examples C:\>find "keyword" < inputfilename > outputfilename C:\>find /V "any string" FileName See also Findstr, Windows and ReactOS command-line tool to search for patterns of text in files. find (Unix), a Unix command that finds files by attribute, very different from Windows find grep, a Unix command that finds text matching a pattern, similar to Windows find forfiles, a Windows command that finds files by attribute, similar to Unix find
https://en.wikipedia.org/wiki/Clostridium%20sporogenes
Clostridium sporogenes is a species of Gram-positive bacteria that belongs to the genus Clostridium. Like other strains of Clostridium, it is an anaerobic, rod-shaped bacterium that produces oval, subterminal endospores and is commonly found in soil. Unlike Clostridium botulinum, it does not produce the botulinum neurotoxins. In colonized animals, it has a mutualistic rather than pathogenic interaction with the host. It is being investigated as a way to deliver cancer-treating drugs to tumours in patients. C. sporogenes is often used as a surrogate for C. botulinum when testing the efficacy of commercial sterilisation. Clostridium sporogenes colonizes the human gastrointestinal tract, but is only present in a subset of the population; in the intestine, it uses tryptophan to synthesize indole and subsequently 3-indolepropionic acid (IPA) – a type of auxin (plant hormone) – which serves as a potent neuroprotective antioxidant within the human body and brain. IPA is an even more potent scavenger of hydroxyl radicals than melatonin. Similar to melatonin but unlike other antioxidants, it scavenges radicals without subsequently generating reactive and pro-oxidant intermediate compounds. C. sporogenes is the only species of bacteria known to synthesize 3-indolepropionic acid in vivo at levels which are subsequently detectable in the blood stream of the host.
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Loeser
François Loeser (born August 25, 1958) is a French mathematician. He is Professor of Mathematics at the Pierre-and-Marie-Curie University in Paris. From 2000 to 2010 he was Professor at École Normale Supérieure. Since 2015, he has been a senior member of the Institut Universitaire de France. He was awarded the CNRS Silver Medal in 2011 and the Charles-Louis de Saulces de Freycinet Prize of the French Academy of Sciences in 2007. He was awarded an ERC Advanced Investigator Grant in 2010 and was a Plenary Speaker at the European Congress of Mathematics in Amsterdam in 2008. In 2014 Loeser was an Invited Speaker at the International Congress of Mathematicians in Seoul. In 2015 he was elected as a fellow of the American Mathematical Society "for contributions to algebraic and arithmetic geometry and to model theory". He was elected a member of Academia Europaea in 2019. He is a specialist in algebraic geometry and is best known for his work on motivic integration, part of it in collaboration with Jan Denef.
https://en.wikipedia.org/wiki/Visarga
Visarga () means "sending forth, discharge". In Sanskrit phonology (), visarga (also called, equivalently, visarjanīya by earlier grammarians) is the name of the voiceless glottal fricative, [h], written in romanization as 'ḥ'. Visarga is an allophone of /r/ and /s/ in pausa (at the end of an utterance). Since /-s/ is a common inflectional suffix (of nominative singular, second person singular, etc.), visarga appears frequently in Sanskrit texts. In the traditional order of Sanskrit sounds, visarga and anusvāra appear between vowels and stop consonants. The precise pronunciation of visarga in Vedic texts may vary between Śākhās. Some pronounce a slight echo of the preceding vowel after the aspiration: for example, aḥ will be pronounced with a faint echo of a, and iḥ with a faint echo of i. Visarga is not to be confused with the colon. Types The visarga is commonly written as two small circles one above the other, resembling the punctuation mark of a colon. This form is retained by most Indian scripts. According to Sanskrit phonologists, the visarga has two optional allophones, namely (jihvāmūlīya or the guttural visarga) and (upadhmānīya or the fricative visarga). The former may be pronounced before k and kh, and the latter before p and ph, as in (tava pitāmahaḥ kaḥ?, 'who is your grandfather?'), (pakṣiṇaḥ khe uḍḍayante, 'birds fly in the sky'), (bhoḥ pāhi, 'sir, save me'), and (tapaḥphalam, 'result of penances'). They were written with various symbols, e.g. an X-like symbol versus a sideways 3-like symbol above a flipped sideways one, or both as two crescent-shaped semi-circles one above the other, facing the top and bottom respectively. Distinct signs for jihvāmūlīya and upadhmānīya exist in the Kannada, Tibetan, Sharada, Brahmi and Lantsa scripts. Other Brahmic scripts Burmese In the Burmese script, the visarga (variously called shay ga pauk, wizza nalone pauk, or shay zi and represented with two dots to the right of the letter as ), when joined to a letter, creates the high tone. Japanese Motoori Norinaga invented a mark for visarga wh
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Schmidt%20theorem
In mathematical analysis, the Hilbert–Schmidt theorem, also known as the eigenfunction expansion theorem, is a fundamental result concerning compact, self-adjoint operators on Hilbert spaces. In the theory of partial differential equations, it is very useful in solving elliptic boundary value problems. Statement of the theorem Let (H, ⟨ , ⟩) be a real or complex Hilbert space and let A : H → H be a bounded, compact, self-adjoint operator. Then there is a sequence of non-zero real eigenvalues λi, i = 1, …, N, with N equal to the rank of A, such that |λi| is monotonically non-increasing and, if N = +∞, λi → 0 as i → ∞. Furthermore, if each eigenvalue of A is repeated in the sequence according to its multiplicity, then there exists an orthonormal set φi, i = 1, …, N, of corresponding eigenfunctions, i.e., Aφi = λiφi for each i. Moreover, the functions φi form an orthonormal basis for the range of A, and A can be written as Ax = ∑i λi⟨φi, x⟩φi for all x ∈ H, the sum running over i = 1, …, N.
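In finite dimensions every symmetric matrix is a compact self-adjoint operator, so the expansion can be verified directly; this numpy check is an illustration of the statement, not part of the proof.

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # symmetric, hence self-adjoint on R^5
lam, Phi = np.linalg.eigh(A)           # eigenvalues, orthonormal eigenvectors

x = rng.standard_normal(5)
expansion = sum(l * (phi @ x) * phi for l, phi in zip(lam, Phi.T))
assert np.allclose(A @ x, expansion)   # Ax = sum_i lambda_i <phi_i, x> phi_i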
https://en.wikipedia.org/wiki/Waveplate
A waveplate or retarder is an optical device that alters the polarization state of a light wave travelling through it. Two common types of waveplates are the half-wave plate, which shifts the polarization direction of linearly polarized light, and the quarter-wave plate, which converts linearly polarized light into circularly polarized light and vice versa. A quarter-wave plate can be used to produce elliptical polarization as well. Waveplates are constructed out of a birefringent material (such as quartz or mica, or even plastic), for which the index of refraction is different for light linearly polarized along one or the other of two certain perpendicular crystal axes. The behavior of a waveplate (that is, whether it is a half-wave plate, a quarter-wave plate, etc.) depends on the thickness of the crystal, the wavelength of light, and the variation of the index of refraction. By appropriate choice of the relationship between these parameters, it is possible to introduce a controlled phase shift between the two polarization components of a light wave, thereby altering its polarization. A common use of waveplates—particularly the sensitive-tint (full-wave) and quarter-wave plates—is in optical mineralogy. Addition of plates between the polarizers of a petrographic microscope makes the optical identification of minerals in thin sections of rocks easier, in particular by allowing deduction of the shape and orientation of the optical indicatrices within the visible crystal sections. This alignment can allow discrimination between minerals which otherwise appear very similar in plane polarized and cross polarized light. Principles of operation A waveplate works by shifting the phase between two perpendicular polarization components of the light wave. A typical waveplate is simply a birefringent crystal with a carefully chosen orientation and thickness. The crystal is cut into a plate, with the orientation of the cut chosen so that the optic axis of the crystal is p
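In Jones-calculus terms (a formalism the article does not spell out at this point), a waveplate with its fast axis along x simply delays one polarization component by the retardance Γ; Γ = π/2 gives a quarter-wave plate. A minimal numpy sketch, with my own naming:

import numpy as np

def waveplate(gamma):
    # Jones matrix for fast axis along x, up to an overall phase.
    return np.array([[1.0, 0.0], [0.0, np.exp(1j * gamma)]])

linear_45 = np.array([1.0, 1.0]) / np.sqrt(2)    # linear polarization at 45°
out = waveplate(np.pi / 2) @ linear_45
# Equal amplitudes with a 90° relative phase: circular polarization.
assert np.allclose(out, [1 / np.sqrt(2), 1j / np.sqrt(2)])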
https://en.wikipedia.org/wiki/Western%20Indo-Pacific
The Western Indo-Pacific is a biogeographic region of the Earth's seas, comprising the tropical waters of the eastern and central Indian Ocean. It is part of the larger Indo-Pacific, which includes the tropical Indian Ocean, the western and central Pacific Ocean, and the seas connecting the two in the general area of Indonesia. The Western Indo-Pacific may be classified as a marine realm, one of the great biogeographic divisions of the world's ocean basins, or as a subrealm of the Indo-Pacific. The Western Indo-Pacific realm covers the western and central portion of the Indian Ocean, including Africa's east coast, the Red Sea, Gulf of Aden, Persian Gulf, Arabian Sea, Bay of Bengal, and Andaman Sea, as well as the coastal waters surrounding Madagascar, the Seychelles, Comoros, Mascarene Islands, Maldives, and Chagos Archipelago. The transition between the Western Indo-Pacific and Central Indo-Pacific occurs at the Strait of Malacca and in southern Sumatra. The Western Indo-Pacific does not include the temperate and polar waters of the Indian Ocean, which are part of separate marine realms. The boundary between the Western Indo-Pacific and Temperate Southern Africa marine realms lies in South Africa near the border with Mozambique, where the southernmost mangroves and tropical corals are found. Subdivisions The Western Indo-Pacific is further subdivided into marine provinces, and the marine provinces divided into marine ecoregions: Red Sea and Gulf of Aden Northern and Central Red Sea Southern Red Sea Gulf of Aden Somali/Arabian Persian Gulf Gulf of Oman Western Arabian Sea Central Somali Coast Western Indian Ocean Northern Monsoon Current Coast East African Coral Coast Seychelles Cargados Carajos/Tromelin Island Mascarene Islands Southeast Madagascar Western and Northern Madagascar Bight of Sofala/Swamp Coast Delagoa West and South Indian Shelf Western India South India and Sri Lanka Central Indian Ocean Islands Maldives Chagos Bay of Bengal E
https://en.wikipedia.org/wiki/Systematic%20code
In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols. Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if received correctly – this is useful for example if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors take the form of erasures, so that a received symbol is always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonably good estimates of the received source symbols without going through the lengthy decoding process, which may be carried out at a remote site at a later time. Properties Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance). Because of the advantages cited above, linear error-correcting codes are generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimum free distance of the code is larger. For a systematic linear code, the generator matrix, G, can always be written as G = [Ik | P], where Ik is the identity matrix of size k. Examples Checksums and hash functions, combined with the input data, can be viewed as systematic error-detecting codes. Linear codes are usually implemented as systematic error-correcting codes (e.g., Reed–Solomon codes in CDs). Convolutional codes are implemented as either systematic or non-systematic codes. Non-systematic convolutional codes can provide better performance under maximum-likelihood (Viterbi) decoding. In
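As a concrete instance, the sketch below encodes with G = [I4 | P] over GF(2), using one standard choice of parity submatrix for the [7,4] Hamming code; the first four codeword bits reproduce the message verbatim, which is exactly the systematic property.

import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])   # generator matrix [I_4 | P]

def encode(msg4):
    return (np.array(msg4) @ G) % 2

cw = encode([1, 0, 1, 1])
assert list(cw[:4]) == [1, 0, 1, 1]        # systematic: data embedded as-is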
https://en.wikipedia.org/wiki/Psophometric%20voltage
Psophometric voltage is a circuit noise voltage measured with a psophometer that includes a CCIF-1951 weighting network. "Psophometric voltage" should not be confused with "psophometric emf," i.e., the emf in a generator or line with 600 Ω internal resistance. For practical purposes, the psophometric emf is twice the corresponding psophometric voltage. Psophometric voltage readings, V, in millivolts, are commonly converted to dBm(psoph) by dBm(psoph) = 20 log10V – 57.78.
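The stated conversion is direct to implement; the sanity check below reflects that 775 mV across 600 Ω is 1 mW, i.e. approximately 0 dBm.

import math

def dbm_psoph(v_millivolts):
    # dBm(psoph) = 20 log10(V) - 57.78, with V in millivolts.
    return 20 * math.log10(v_millivolts) - 57.78

print(dbm_psoph(1.0))                  # -57.78 dBm for a 1 mV reading
print(round(dbm_psoph(775.0), 2))      # ~0 dBm: 775 mV into 600 ohms is 1 mW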
https://en.wikipedia.org/wiki/Laghava
The laghava ( ; from the ) is the Devanagari abbreviation sign, comparable to the full stop or ellipsis as used in the Latin alphabet. It is encoded in Unicode at . It is used as an abbreviation sign in Hindi and other Devanagari-script-based languages. For example, "Dr." is written as "", "M.Sc." as "", etc. See also 。: CJK full stop ° : degree symbol
https://en.wikipedia.org/wiki/Neutron%20poison
In applications such as nuclear reactors, a neutron poison (also called a neutron absorber or a nuclear poison) is a substance with a large neutron absorption cross-section. In such applications, absorbing neutrons is normally an undesirable effect. However, neutron-absorbing materials, also called poisons, are intentionally inserted into some types of reactors in order to lower the high reactivity of their initial fresh fuel load. Some of these poisons deplete as they absorb neutrons during reactor operation, while others remain relatively constant. The capture of neutrons by short half-life fission products is known as reactor poisoning; neutron capture by long-lived or stable fission products is called reactor slagging. Transient fission product poisons Some of the fission products generated during nuclear reactions have a high neutron absorption capacity, such as xenon-135 (microscopic cross-section σ = 2,000,000 barns (b); up to 3 million barns in reactor conditions) and samarium-149 (σ = 74,500 b). Because these two fission-product poisons remove neutrons from the reactor, they will affect the thermal utilization factor and, thus, the reactivity. The poisoning of a reactor core by these fission products may become so serious that the chain reaction comes to a standstill. Xenon-135 in particular tremendously affects the operation of a nuclear reactor because it is the most powerful known neutron poison. The inability of a reactor to be restarted due to the buildup of xenon-135 (which reaches a maximum about 10 hours after shutdown) is sometimes referred to as xenon-precluded start-up. The period of time in which the reactor is unable to override the effects of xenon-135 is called the xenon dead time or poison outage. During periods of steady state operation, at a constant neutron flux level, the xenon-135 concentration builds up to its equilibrium value for that reactor power in about 40 to 50 hours. When the reactor power is increased, xenon-135 concentration initially
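The buildup-and-peak behaviour follows from the coupled iodine-135/xenon-135 balance equations, which a short Euler integration makes visible. The decay constants below come from the well-known half-lives (I-135: 6.57 h, Xe-135: 9.14 h); the yields, cross-section, flux, and fission rate are assumed representative values, not figures from this article.

import numpy as np

lam_I = np.log(2) / (6.57 * 3600)      # I-135 decay constant, 1/s
lam_X = np.log(2) / (9.14 * 3600)      # Xe-135 decay constant, 1/s
gamma_I, gamma_X = 0.064, 0.003        # assumed fission yields
sigma_X = 2.6e6 * 1e-24                # assumed Xe-135 cross-section, cm^2
Sigma_f = 0.1                          # assumed macroscopic fission XS, 1/cm

def step(I, X, phi, dt=60.0):
    fis = Sigma_f * phi                # fission rate density
    dI = gamma_I * fis - lam_I * I
    dX = gamma_X * fis + lam_I * I - (lam_X + sigma_X * phi) * X
    return I + dI * dt, X + dX * dt

I = X = 0.0
phi = 3e13                             # flux at power, n/cm^2/s
for _ in range(50 * 60):               # ~50 h at power: xenon nears equilibrium
    I, X = step(I, X, phi)
X_eq = X
peak = X_eq
for _ in range(20 * 60):               # after shutdown (phi = 0) xenon keeps rising
    I, X = step(I, X, 0.0)
    peak = max(peak, X)                # maximum reached roughly 10 h after shutdown
print(peak / X_eq)                     # ratio > 1: the post-shutdown xenon peak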
https://en.wikipedia.org/wiki/Weakly%20o-minimal%20structure
In model theory, a weakly o-minimal structure is a model-theoretic structure whose definable sets in the domain are just finite unions of convex sets. Definition A linearly ordered structure, M, with language L including an ordering relation <, is called weakly o-minimal if every parametrically definable subset of M is a finite union of convex (definable) subsets. A theory is weakly o-minimal if all its models are weakly o-minimal. Note that, in contrast to o-minimality, it is possible for a theory to have models that are weakly o-minimal and to have other models that are not weakly o-minimal. Difference from o-minimality In an o-minimal structure the definable sets in M are finite unions of points and intervals, where "interval" stands for a set of the form (a, b), for some a and b in M. For weakly o-minimal structures this is relaxed so that the definable sets in M are finite unions of convex definable sets. A set C is convex if whenever a and b are in C, a < b, and c satisfies a < c < b, then c is in C. Points and intervals are of course convex sets, but there are convex sets that are not either points or intervals, as explained below. If we have a weakly o-minimal structure expanding (R,<), the real ordered field, then the structure will be o-minimal. The two notions are different in other settings though. For example, let R be the ordered field of real algebraic numbers with the usual ordering < inherited from R. Take a transcendental number, say π, and add a unary relation S to the structure given by the subset (−π,π) ∩ R. Now consider the subset A of R defined by the formula 0 < x ∧ S(x), so that the set consists of all strictly positive real algebraic numbers that are less than π. The set is clearly convex, but cannot be written as a finite union of points and intervals whose endpoints are in R. To write it as an interval one would either have to include the endpoint π, which isn't in R, or one would require infinitely many intervals, such as the union of the intervals (0, a) over all positive algebraic numbers a < π.
https://en.wikipedia.org/wiki/Stress%20%28journal%29
Stress is a bimonthly peer-reviewed medical journal covering research on stress in terms of: the mechanisms of stressful stimulation, the physiological and behavioural responses to stress, and their regulation, in both the short and long term; adaptive mechanisms, and the pathological consequences of stress. This includes research in physiology, neuroscience, molecular biology, genetics, immunology, and behaviour. The journal is published by Taylor & Francis and the editor-in-chief is James Herman (University of Cincinnati). It was established in 1996 and according to the Journal Citation Reports it has a 2020 impact factor of 3.493.
https://en.wikipedia.org/wiki/Fractional%20Fourier%20transform
In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the n-th power, where n need not be an integer — thus, it can transform a function to any intermediate domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition. The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon, by solving for the Green's function for phase-space rotations, and also by Namias, generalizing work of Wiener on Hermite polynomials. However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups. Since then, there has been a surge of interest in extending Shannon's sampling theorem for signals which are band-limited in the fractional Fourier domain. A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) This terminology has fallen out of use in most of the technical literature, however, in preference to the FRFT. The remainder of this article describes the FRFT. Introduction The continuous Fourier transform of a function is a unitary operator of L2 space that maps the function to its frequential version (all expressions are taken in the L2 sense, rather than pointwise)
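One well-known discrete approach (not the only definition in the literature) realizes F^a as a fractional power of the unitary DFT matrix, so that a = 1 recovers the ordinary transform and a = 4 the identity. A hedged numpy/scipy sketch follows; because the DFT's eigenvalues {1, −i, −1, i} are highly repeated, a generic matrix power picks only one of many valid fractionalizations.

import numpy as np
from scipy.linalg import dft, fractional_matrix_power

N = 64
F = dft(N) / np.sqrt(N)                # unitary DFT matrix

def frft(x, a):
    return fractional_matrix_power(F, a) @ x

x = np.random.default_rng(0).standard_normal(N)
assert np.allclose(frft(x, 1.0), F @ x)   # a = 1: the ordinary DFT
assert np.allclose(frft(x, 4.0), x)       # a = 4: back to the identity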
https://en.wikipedia.org/wiki/Precursor%20%28chemistry%29
In chemistry, a precursor is a compound that participates in a chemical reaction that produces another compound. In biochemistry, the term "precursor" often refers more specifically to a chemical compound preceding another in a metabolic pathway, such as a protein precursor. Illicit drug precursors In 1988, the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances introduced detailed provisions and requirements relating to the control of precursors used to produce drugs of abuse. In Europe, Regulation (EC) No. 273/2004 of the European Parliament and of the Council on drug precursors was adopted on 11 February 2004. (European law on drug precursors) Illicit explosives precursors On January 15, 2013, Regulation (EU) No. 98/2013 of the European Parliament and of the Council on the marketing and use of explosives precursors was adopted. The Regulation harmonises rules across Europe on the making available, introduction, possession and use of certain substances or mixtures that could be misused for the illicit manufacture of explosives. Detection A portable, advanced sensor based on infrared spectroscopy in a hollow fiber matched to a silicon-micromachined fast gas chromatography column can analyze illegal stimulants and precursors with nanogram-level sensitivity. Raman spectroscopy has been successfully tested to detect explosives and their precursors. Technologies able to detect precursors in the environment could contribute to an early location of sites where illegal substances (both explosives and drugs of abuse) are produced. See also Binary chemical weapon Chemical synthesis DEA list of chemicals Derivative (chemistry) Educt, a reagent or reactant Metabolism#Anabolism Monoamine precursor Prodrug Protein precursor
https://en.wikipedia.org/wiki/Propylene%20glycol%20alginate
Propylene glycol alginate (PGA) is an emulsifier, stabilizer, and thickener used in food products. It is a food additive with E number E405. Chemically, propylene glycol alginate is an ester of alginic acid, which is derived from kelp. Some of the carboxyl groups are esterified with propylene glycol, some are neutralized with an appropriate alkali, and some remain free. See also List of food additives, Codex Alimentarius
https://en.wikipedia.org/wiki/Green%20Dot%20%28symbol%29
The Green Dot () is the financing symbol of a European network of industry-funded systems for recycling the packaging materials of consumer goods. The logo is a trademark protected worldwide - it is not a recycling logo. Background The German "Der Grüne Punkt" is considered the forerunner of the European scheme. It was originally introduced by Der Grüne Punkt - Duales System Deutschland GmbH (DSD) in 1990 before the introduction of a Packaging Ordinance under the Waste Act. Since the successful introduction of the German industry-funded dual system, similar Green Dot systems have been introduced in most other European countries. The Green Dot scheme is covered under the European "Packaging and packaging waste directive - 94/62/EC", which is binding on all companies if their products use packaging and requires manufacturers to recover their own packaging. According to the directive, companies are held responsible for the end-of-life management of their packaging - either through self-compliance or by joining a so-called producer responsibility organization (PRO). Environmentalists claim that some countries deliberately turn a blind eye to the European directive. Since its European introduction, the scheme has been rolled out to 31 European countries. Use of the symbol on packaging is voluntary, but if it is used, producers need to ensure a valid contract with the respective organizations. The Green Dot is used by more than 130,000 companies, encompassing more than 200 billion packages globally. Concept The Green Dot system was conceived by Klaus Töpfer, Germany's environment minister in the early 1990s. The aim of the Green Dot is to indicate to consumers who see the logo that the manufacturer of the product contributes to the cost of recovery and recycling. This can be with household waste collected by the authorities (e.g. in special bags - in Germany these are yellow), or in containers in public places such as car parks and outside supermarkets. The system is finan
https://en.wikipedia.org/wiki/Claude%20Berge
Claude Jacques Berge (5 June 1926 – 30 June 2002) was a French mathematician, recognized as one of the modern founders of combinatorics and graph theory. Biography and professional history Claude Berge's parents were André Berge and Geneviève Fourcade. André Berge (1902–1995) was a physician and psychoanalyst who, in addition to his professional work, had published several novels. He was the son of René Berge, a mining engineer, and Antoinette Faure. Félix François Faure (1841–1899) was Antoinette Faure's father; he was President of France from 1895 to 1899. André Berge married Geneviève in 1924, and Claude was the second of their six children. His five siblings were Nicole (the eldest), Antoine, Philippe, Edith, and Patrick. Claude attended the École des Roches near Verneuil-sur-Avre, west of Paris. This famous private school, founded by the sociologist Edmond Demolins in 1899, attracted students from all over France to its innovative educational program. At this stage in his life, Claude was unsure about the topic in which he should specialize. He said in later life: "I wasn't quite sure that I wanted to do mathematics. There was often a greater urge to study literature." His love of literature and other non-mathematical subjects never left him, and we shall discuss below how they played a large role in his life. However, he decided to study mathematics at the University of Paris. After the award of his first degree, he continued to undertake research for his doctorate, advised by André Lichnerowicz. He began publishing mathematics papers in 1950. In that year two of his papers appeared, the short paper Sur l'isovalence et la régularité des transformateurs and the major, 30-page paper Sur un nouveau calcul symbolique et ses applications. The symbolic calculus which he discussed in this major paper is a combination of generating functions and Laplace transforms. He then applied this symbolic calculus to combinatorial analysis, Bernoulli numbers, difference equations,