https://en.wikipedia.org/wiki/Mass%20matrix
In analytical mechanics, the mass matrix M is a symmetric matrix that expresses the connection between the time derivative q̇ of the generalized coordinate vector q of a system and the kinetic energy T of that system, by the equation T = ½ q̇ᵀ M q̇, where q̇ᵀ denotes the transpose of the vector q̇. This equation is analogous to the formula for the kinetic energy of a particle with mass m and velocity v, namely T = ½ m |v|², and can be derived from it, by expressing the position of each particle of the system in terms of q. In general, the mass matrix M depends on the state q, and therefore varies with time. Lagrangian mechanics yields an ordinary differential equation (actually, a system of coupled differential equations) that describes the evolution of a system in terms of an arbitrary vector q of generalized coordinates that completely defines the position of every particle in the system. The kinetic energy formula above is one term of that equation, representing the total kinetic energy of all the particles. Examples Two-body unidimensional system For example, consider a system consisting of two point-like masses confined to a straight track. The state of that system can be described by a vector x = (x₁, x₂) of two generalized coordinates, namely the positions of the two particles along the track. Supposing the particles have masses m₁ and m₂, the kinetic energy of the system is T = ½ m₁ ẋ₁² + ½ m₂ ẋ₂². This formula can also be written as T = ½ ẋᵀ M ẋ, where M = diag(m₁, m₂). N-body system More generally, consider a system of P particles labelled by an index i (where i = 1, …, P), where the position of particle number i is defined by nᵢ free Cartesian coordinates. Let x be the column vector comprising all those coordinates. The mass matrix M is the diagonal block matrix where in each block the diagonal elements are the mass of the corresponding particle: M = diag(m₁ I₁, …, m_P I_P), where Iᵢ is the nᵢ × nᵢ identity matrix; more fully, M is a diagonal matrix in which each mass mᵢ is repeated nᵢ times on the diagonal. Rotating dumbbell For a less trivial example, consider two point-like objects with masses m₁ and m₂, attached to the ends of a rigid massless bar with length R, the assembly being free to ro
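The matrix form of the kinetic energy can be checked numerically. A minimal sketch for the two-body system above, assuming NumPy and illustrative (hypothetical) values for the masses and velocities:

```python
import numpy as np

# Two-body unidimensional system: T = 1/2 xdot^T M xdot.
# m1, m2 and the velocities are hypothetical values chosen for illustration.
m1, m2 = 2.0, 3.0
M = np.diag([m1, m2])          # diagonal mass matrix for two free particles
xdot = np.array([1.0, -0.5])   # time derivatives of the generalized coordinates

T_matrix = 0.5 * xdot @ M @ xdot                            # matrix form
T_direct = 0.5 * m1 * xdot[0]**2 + 0.5 * m2 * xdot[1]**2    # per-particle sum

print(T_matrix, T_direct)  # both evaluate to 1.375
```

For free particles the two expressions agree exactly; the matrix form becomes essential when the generalized coordinates are not Cartesian and M acquires off-diagonal, state-dependent entries.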
https://en.wikipedia.org/wiki/CIMD
Computer Interface to Message Distribution (CIMD) is a proprietary short message service centre protocol developed by Nokia for their SMSC (now: Nokia Networks). Syntax An example CIMD exchange looks like the following: <STX>03:007<TAB>021:12345678<TAB>033:hello<TAB><ETX> <STX>53:007<TAB>021:12345678<TAB>060:971107131212<TAB><ETX> Each packet starts with STX (hex 02) and ends with ETX (hex 03). The content of the packet consists of fields separated by TAB (hex 09). Each field, in turn, consists of a parameter type, a colon (:), and the parameter value. Note that the last field must also be terminated with a TAB before the ETX. Two-digit parameter types are operation codes, and each message must have exactly one. The number after the operation code is the sequence number, used to match an operation to its response. The response code (acknowledgement) of a message is equal to its operation code plus 50. In the example above, the operation code 03 means submit message. Field 021 defines the destination address (telephone number), and field 033 is the user data (content) of the message. Response code 53 with a field 060 time stamp indicates that the message was accepted; if the message had failed, the SMSC would reply with field 900 (error code) instead. Supporting software for building CIMD clients is available from Nokia's website, and such client tools can be used to send SMS messages through the message centre. See also Universal Computer Protocol/External Machine Interface (UCP/EMI) Short message peer-to-peer protocol (SMPP) External links Nokia: CIMD specification for SC v7.0 Nokia: CIMD specification for SC v8.0 Software Kannel, Open-Source WAP and SMS Gateway with CIMD 1.3 and CIMD 2.0 support. Ixonos MISP CIMD simulator, Open-Source CIMD v2 compliant server for testing CIMD client applications GSM standard Mobile technology Network protocols
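The packet framing described above (STX, TAB-separated `type:value` fields, trailing TAB, ETX) is simple enough to sketch directly. A minimal, assumption-laden illustration in Python (the function names are hypothetical, not part of any Nokia toolkit):

```python
STX, ETX, TAB = "\x02", "\x03", "\x09"

def build_packet(op_code: str, seq: str, fields: dict) -> str:
    """Assemble a CIMD packet: <STX>op:seq<TAB>type:value<TAB>...<TAB><ETX>."""
    parts = [f"{op_code}:{seq}"] + [f"{t}:{v}" for t, v in fields.items()]
    # Per the spec, the last field is also TAB-terminated before ETX.
    return STX + TAB.join(parts) + TAB + ETX

def parse_packet(packet: str):
    """Split a packet into (operation code, sequence number, {type: value})."""
    body = packet.strip(STX + ETX).rstrip(TAB)
    head, *rest = body.split(TAB)
    op_code, seq = head.split(":", 1)
    fields = dict(f.split(":", 1) for f in rest)
    return op_code, seq, fields

# The submit-message example from above: op 03, destination 021, user data 033.
pkt = build_packet("03", "007", {"021": "12345678", "033": "hello"})
op, seq, fields = parse_packet(pkt)
```

Note how the acknowledgement rule falls out of the numbering: a response to operation `03` would carry code `03 + 50 = 53` with the same sequence number `007`.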
https://en.wikipedia.org/wiki/Oligolecty
The term oligolecty is used in pollination ecology to refer to bees that exhibit a narrow, specialized preference for pollen sources, typically to a single family or genus of flowering plants. The preference may occasionally extend broadly to multiple genera within a single plant family, or be as narrow as a single plant species. When the choice is very narrow, the term monolecty is sometimes used, originally meaning a single plant species but recently broadened to include examples where the host plants are related members of a single genus. The opposite term is polylectic and refers to species that collect pollen from a wide range of species. The most familiar example of a polylectic species is the domestic honey bee. Oligolectic pollinators are often called oligoleges or simply specialist pollinators, and this behavior is especially common in the bee families Andrenidae and Halictidae, though there are thousands of species in hundreds of genera, in essentially all known bee families; in certain areas of the world, such as deserts, oligoleges may represent half or more of all the resident bee species. Attempts have been made to determine whether a narrow host preference is due to an inability of the bee larvae to digest and develop on a variety of pollen types, or a limitation of the adult bee's learning and perception (i.e., they simply do not recognize other flowers as potential food sources), and most of the available evidence suggests the latter. However, a few plants whose pollen contains toxic substances (e.g., Toxicoscordion and related genera in the Melanthieae) are visited by oligolectic bees, and these may fall into the former category. The evidence from large-scale phylogenetic analyses of bee evolution suggests that, for most groups of bees, oligolecty is the ancestral condition and polylectic lineages arose from among those ancestral specialists. There are some cases where oligoleges collect their host plant's pollen as larval food but, for various r
https://en.wikipedia.org/wiki/Valentin%20Goranko
Valentin Feodorov Goranko (born 22 September 1959 in Sofia, Bulgaria) is a Bulgarian-Swedish logician, Professor of Logic and Theoretical Philosophy at the Department of Philosophy, Stockholm University. Education and academic career Goranko studied mathematics (M.Sc. 1984) and obtained his Ph.D. in Mathematical Logic at the Faculty of Mathematics and Informatics of the Sofia University "St. Kliment Ohridski" in 1988. Before joining Stockholm University in 2014, he held several academic positions at universities in Bulgaria (until 1992), South Africa (1992–2009), Denmark (2009–2014) and Sweden (since 2014), and has taught a wide variety of courses in Mathematics, Computer Science, and Logic. Research fields Goranko has a broad range of research interests in the theory and applications of Logic to artificial intelligence, multi-agent systems, philosophy, computer science, and game theory, where he has published 4 books and over 140 research papers and chapters in handbooks and other research collections. Professional service President-elect (with mandate 2024–2027) of the Division of Logic, Methodology and Philosophy of Science and Technology (DLMPST) of the International Union of History and Philosophy of Science and Technology (IUHPST) President (since 2018) of the Scandinavian Logic Society Senior member and past president (2016–2020) of the management board of the Association for Logic, Language and Information (FoLLI) Editor-in-chief (Logic) of the FoLLI Publications series on Logic, Language and Information, a sub-series of Springer LNCS. Executive member of the Board of the European Association for Computer Science Logic EACSL Editor-in-chief of the journal Logics Associate Editor of the ACM Transactions on Computational Logic and member of the editorial boards of several other scientific journals. Published books 2015 Logic and Discrete Mathematics: A Concise Introduction 2016 Temporal Logics in Computer Science 2016 Logic as a Tool: A Gu
https://en.wikipedia.org/wiki/Hausdorff%20moment%20problem
In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m₀, m₁, m₂, …) be the sequence of moments of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m₀ = 1, this is equivalent to the existence of a random variable X supported on [0, 1], such that E[Xⁿ] = mₙ. The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line [0, ∞), and in the Hamburger moment problem one considers the whole line (−∞, ∞). The Stieltjes and Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem), whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. The set of polynomials may or may not be dense in the associated Hilbert spaces if the moment problem is indeterminate, and it depends on whether the measure is extremal or not. But in the determinate moment problem case, the set of polynomials is dense in the associated Hilbert space. Completely monotonic sequences In 1921, Hausdorff showed that (m₀, m₁, m₂, …) is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy (−1)ᵏ (Δᵏ m)ₙ ≥ 0 for all n, k ≥ 0. Here, Δ is the difference operator given by (Δm)ₙ = m_{n+1} − mₙ. The necessity of this condition is easily seen by the identity (−1)ᵏ (Δᵏ m)ₙ = ∫₀¹ xⁿ (1 − x)ᵏ dμ(x), which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have (Δ²m)₀ = m₂ − 2m₁ + m₀ ≥ 0. See also Total monotonicity
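Complete monotonicity of a finite prefix of a sequence can be checked directly from the definition by taking repeated differences. A sketch assuming NumPy, tested on the moments of Lebesgue measure on [0, 1] (for which mₙ = ∫₀¹ xⁿ dx = 1/(n+1)):

```python
import numpy as np

def is_completely_monotonic(m, tol=1e-12):
    """Check (-1)^k (Δ^k m)_n >= 0 for all differences available in the
    finite sequence m; a small tolerance absorbs floating-point error."""
    m = np.asarray(m, dtype=float)
    sign = 1.0
    while m.size:
        if np.any(sign * m < -tol):   # some (-1)^k (Δ^k m)_n is negative
            return False
        m = np.diff(m)                # (Δm)_n = m_{n+1} - m_n
        sign = -sign
    return True

# Moments of Lebesgue measure on [0, 1]: m_n = 1/(n+1).
moments = [1.0 / (n + 1) for n in range(12)]
# An increasing sequence cannot be a Hausdorff moment sequence
# (moments on [0, 1] are non-increasing):
not_moments = [1.0, 1.1, 1.2]
```

Of course a finite check is only a necessary-condition filter: Hausdorff's theorem concerns the full infinite sequence.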
https://en.wikipedia.org/wiki/Staggered%20extension%20process
The staggered extension process (also referred to as StEP) is a common technique used in biotechnology and molecular biology to create new, mutated genes with qualities of one or more initial genes. The technique itself is a modified polymerase chain reaction with very short (approximately 10 seconds) cycles. In these cycles the elongation of DNA is very quick (only a few hundred base pairs) and synthesized fragments anneal with complementary fragments of other strands. In this way, mutations of the initial genes are shuffled and in the end genes with new combinations of mutations are amplified. The StEP protocol has been found to be useful as a method of directed evolution for the discovery of enzymes useful to industry.
https://en.wikipedia.org/wiki/Turk%27s%20solution
In hemocytometry, Türk's solution (or Türk's fluid) is a hematological stain (either crystal violet or aqueous methylene blue) prepared in 99% acetic acid (glacial) and distilled water. The solution destroys the red blood cells and platelets within a blood sample (acetic acid being the main lysing agent), and stains the nuclei of the white blood cells, making them easier to see and count. Türk's solution is intended for use in determining total leukocyte count in a defined volume of blood. Erythrocytes are hemolyzed while leukocytes are stained for easy visualization. Composition of Türk's solution is as follows:
https://en.wikipedia.org/wiki/Permutation%20model
In mathematical set theory, a permutation model is a model of set theory with atoms (ZFA) constructed using a group of permutations of the atoms. A symmetric model is similar except that it is a model of ZF (without atoms) and is constructed using a group of permutations of a forcing poset. One application is to show the independence of the axiom of choice from the other axioms of ZFA or ZF. Permutation models were introduced by Fraenkel and developed further by Mostowski. Symmetric models were introduced by Paul Cohen. Construction of permutation models Suppose that A is a set of atoms, and G is a group of permutations of A. A normal filter of G is a collection F of subgroups of G such that: G is in F; the intersection of two elements of F is in F; any subgroup containing an element of F is in F; any conjugate of an element of F is in F; and the subgroup fixing any element of A is in F. If V is a model of ZFA with A the set of atoms, then an element of V is called symmetric if the subgroup fixing it is in F, and is called hereditarily symmetric if it and all elements of its transitive closure are symmetric. The permutation model consists of all hereditarily symmetric elements, and is a model of ZFA. Construction of filters on a group A filter on a group can be constructed from an invariant ideal of the Boolean algebra of subsets of A containing all elements of A. Here an ideal is a collection I of subsets of A closed under taking finite unions and subsets, and is called invariant if it is invariant under the action of the group G. For each element S of the ideal one can take the subgroup of G consisting of all elements fixing every element of S. These subgroups generate a normal filter of G.
https://en.wikipedia.org/wiki/Mathematical%20modelling%20competition
Mathematical modelling competitions are team competitions for students that aim to promote mathematical modelling to solve problems of real-world importance. Several types of math contests exist. Contests are held at all levels, from grade school to undergraduate college students. See also Mathematical Contest in Modeling MathWorks Math Modeling Challenge International Mathematical Olympiad List of mathematics competitions
https://en.wikipedia.org/wiki/Diphenylamine
Diphenylamine is an organic compound with the formula (C6H5)2NH. The compound is a derivative of aniline, consisting of an amine bound to two phenyl groups. The compound is a colorless solid, but commercial samples are often yellow due to oxidized impurities. Diphenylamine dissolves well in many common organic solvents, and is moderately soluble in water. It is used mainly for its antioxidant properties. Diphenylamine is widely used as an industrial antioxidant, dye mordant and reagent, and is also employed in agriculture as a fungicide and anthelmintic. Preparation and reactivity Diphenylamine is manufactured by the thermal deamination of aniline over oxide catalysts: 2 C6H5NH2 → (C6H5)2NH + NH3 It is a weak base, with a Kb of 10⁻¹⁴. With strong acids, it forms salts. For example, treatment with sulfuric acid gives the bisulfate [(C6H5)2NH2]+[HSO4]− as a white or yellowish powder with m.p. 123–125 °C. Diphenylamine undergoes various cyclisation reactions. With sulfur, it gives phenothiazine, a precursor to pharmaceuticals. (C6H5)2NH + 2 S → S(C6H4)2NH + H2S With iodine, it undergoes dehydrogenation to give carbazole, with release of hydrogen iodide: (C6H5)2NH + I2 → (C6H4)2NH + 2 HI Arylation with iodobenzene gives triphenylamine. It is also used as a reagent in the Dische test. Applications Testing for DNA The Dische test uses diphenylamine to test for DNA, and can be used to distinguish DNA from RNA. Apple scald inhibitor Diphenylamine is used as a pre- or postharvest scald inhibitor for apples, applied as an indoor drench treatment. Its anti-scald activity is the result of its antioxidant properties, which protect the apple skin from the oxidation products of α-farnesene during storage. Apple scald is physical injury that manifests in brown spots after fruit is removed from cold storage. Stabilizer for smokeless powder In the manufacture of smokeless powder, diphenylamine is commonly used as a stabilizer, s
https://en.wikipedia.org/wiki/MIMO-OFDM
Multiple-input, multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) is the dominant air interface for 4G and 5G broadband wireless communications. It combines multiple-input, multiple-output (MIMO) technology, which multiplies capacity by transmitting different signals over multiple antennas, and orthogonal frequency-division multiplexing (OFDM), which divides a radio channel into a large number of closely spaced subchannels to provide more reliable communications at high speeds. Research conducted during the mid-1990s showed that while MIMO can be used with other popular air interfaces such as time-division multiple access (TDMA) and code-division multiple access (CDMA), the combination of MIMO and OFDM is most practical at higher data rates. MIMO-OFDM is the foundation for most advanced wireless local area network (wireless LAN) and mobile broadband network standards because it achieves the greatest spectral efficiency and, therefore, delivers the highest capacity and data throughput. Greg Raleigh invented MIMO in 1996 when he showed that different data streams could be transmitted at the same time on the same frequency by taking advantage of the fact that signals transmitted through space bounce off objects (such as the ground) and take multiple paths to the receiver. That is, by using multiple antennas and precoding the data, different data streams could be sent over different paths. Raleigh suggested and later proved that the processing required by MIMO at higher speeds would be most manageable using OFDM modulation, because OFDM converts a high-speed data channel into a number of parallel lower-speed channels. Operation In modern usage, the term "MIMO" indicates more than just the presence of multiple transmit antennas (multiple input) and multiple receive antennas (multiple output). While multiple transmit antennas can be used for beamforming, and multiple receive antennas can be used for diversity, the word "MIMO" refers to the simultane
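The claim that OFDM converts one high-speed channel into many parallel lower-speed subchannels can be made concrete: modulation is an inverse FFT over the subcarrier symbols, and demodulation is the forward FFT. A minimal single-antenna sketch assuming NumPy and an ideal (distortion-free) channel; the parameter choices (64 subcarriers, QPSK, 16-sample cyclic prefix) are illustrative, not drawn from any particular standard:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 64   # number of subcarriers (parallel low-rate channels)

# Map a high-rate bit stream onto the subcarriers, one QPSK symbol each.
bits = rng.integers(0, 2, size=2 * n_sub)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# OFDM modulation: inverse FFT turns the parallel symbols into a time signal;
# a cyclic prefix (copy of the tail) guards against multipath delay spread.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-16:], time_signal])

# Receiver over an ideal channel: drop the prefix, FFT recovers the symbols.
rx_symbols = np.fft.fft(tx[16:])
```

In a real MIMO-OFDM link each antenna runs this chain, and per-subcarrier equalization undoes the channel; the ideal channel here isolates just the FFT/IFFT duality.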
https://en.wikipedia.org/wiki/Mar%C3%ADa%20Emilia%20Caballero
María Emilia Caballero Acosta is a Mexican mathematician specializing in probability theory, including Lévy processes, branching processes, Markov processes, and Lamperti representations (an exponential relation between Markov processes and Lévy processes). She is a professor in the Faculty of Sciences and Researcher in the Institute of Sciences of the National Autonomous University of Mexico (UNAM). Education and career After doing her undergraduate studies at the National Autonomous University of Mexico, Caballero went to Pierre and Marie Curie University in France for graduate study in mathematics. She completed a doctorat de troisième cycle in 1973, with the dissertation Quelques proprietes en theorie du potentiel in potential theory, jointly supervised by Marcel Brelot and Paul Malliavin. Her interest in probability theory developed out of this work and the probabilistic theory of potential. Already in 1964, she had begun working as an adjunct professor at the Escuela Nacional Preparatoria and as an assistant in the Faculty of Sciences at UNAM. On completing her doctorate in 1973, she took her present position at the Institute of Mathematics. Recognition Caballero is a member of the Mexican Academy of Sciences. She won UNAM's Juana de Asbaje Medal in 2004. In 2012 she won UNAM's National University Award, the first woman in the Institute of Mathematics to win this award.
https://en.wikipedia.org/wiki/Thailand%20Center%20of%20Excellence%20for%20Life%20Sciences
The Thailand Center of Excellence for Life Sciences (TCELS) was founded in 2004 by the government of Thailand. TCELS is a public organization under the auspices of the Ministry of Higher Education, Science, Research and Innovation. TCELS has the responsibility of providing a link between innovation in life sciences and investment, and spurring domestic and international partnership in the life science business in Thailand. History TCELS was founded in 2004. Initially, TCELS was established as an organization under the umbrella of the Office of Knowledge Management and Development (OKMD), which houses a group of public organizations under the supervision of the Office of the Prime Minister. On 27 May 2011, TCELS was elevated to a public organization under the Ministry of Science and Technology (MST). Mission Support and develop the life sciences business and industry Promote and support innovations, research, and knowledge related to the commercialization of life sciences products and services Develop and support the necessary infrastructure and human capacity for life sciences business and industry Create a strategic plan for developing life sciences business and industry Serve as the coordination center for facilitating cooperation among domestic and international organizations for life sciences business and industry Serve as Thailand’s life sciences business information and knowledge center. Focus areas Pharmaceuticals and biotechnology A key project is pharmacogenomics. TCELS supports the Medical Genomic Center to promote awareness of this diagnostic tool. Natural products TCELS has supported a project with the aim of developing new products from Hevea brasiliensis. The research is conducted by a team from Prince of Songkhla University. Biomedical engineering TCELS conducts various projects in medical robotics, medical devices, and operates the Advanced Dental Technology Center (ADTEC). Medical services Advanced Cell and Gene Therapies Program Automated Ce
https://en.wikipedia.org/wiki/LOBPCG
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem Ax = λBx for a given pair (A, B) of complex Hermitian or real symmetric matrices, where the matrix B is also assumed positive-definite. Background Kantorovich in 1948 proposed calculating the smallest eigenvalue of a symmetric matrix A by steepest descent using a direction r = Ax − λ(x)x of a scaled gradient of a Rayleigh quotient λ(x) = (x, Ax)/(x, x) in a scalar product (x, y) = xᵀy, with the step size computed by minimizing the Rayleigh quotient in the linear span of the vectors x and r, i.e. in a locally optimal manner. Samokish proposed applying a preconditioner T to the residual vector r to generate the preconditioned direction w = Tr and derived asymptotic, as x approaches the eigenvector, convergence rate bounds. D'yakonov suggested spectrally equivalent preconditioning and derived non-asymptotic convergence rate bounds. Block locally optimal multi-step steepest descent for eigenvalue problems was described in subsequent work. Local minimization of the Rayleigh quotient on the subspace spanned by the current approximation, the current residual and the previous approximation, as well as its block version, appeared later, and the preconditioned version was subsequently analyzed. Main features Matrix-free, i.e. does not require storing the coefficient matrix explicitly, but can access the matrix by evaluating matrix-vector products. Factorization-free, i.e. does not require any matrix decomposition even for a generalized eigenvalue problem. The costs per iteration and the memory use are competitive with those of the Lanczos method, computing a single extreme eigenpair of a symmetric matrix. Linear convergence is theoretically guaranteed and practically observed. Accelerated convergence due to direct preconditioning, in contrast to the Lanczos method, including variable and non-symmetric as well as fixed and positive definite precondition
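The locally optimal recurrence is short enough to sketch. The following is a single-vector, unpreconditioned toy version (not the full block algorithm, for which SciPy's `scipy.sparse.linalg.lobpcg` is a production implementation): at each step it performs a Rayleigh–Ritz projection of A onto the three-dimensional subspace spanned by the current approximation, the current residual, and the previous approximation, exactly as described above. NumPy is assumed; the diagonal test matrix is illustrative.

```python
import numpy as np

def lobpcg_smallest(A, x, maxiter=200, tol=1e-10):
    """Single-vector, unpreconditioned LOBPCG sketch for the smallest
    eigenpair of a symmetric matrix A: Rayleigh-Ritz on span{x, r, prev}."""
    x = x / np.linalg.norm(x)
    prev = None
    for _ in range(maxiter):
        r = A @ x - (x @ A @ x) * x           # residual = scaled gradient of λ(x)
        if np.linalg.norm(r) < tol:
            break
        basis = [x, r] if prev is None else [x, r, prev]
        Q, _ = np.linalg.qr(np.stack(basis, axis=1))  # orthonormal trial subspace
        _, Y = np.linalg.eigh(Q.T @ A @ Q)            # Rayleigh-Ritz step
        x_new = Q @ Y[:, 0]                           # smallest Ritz vector
        prev, x = x, x_new / np.linalg.norm(x_new)
    return x @ A @ x, x

A = np.diag(np.arange(1.0, 101.0))    # symmetric test matrix, eigenvalues 1..100
rng = np.random.default_rng(0)
lam, v = lobpcg_smallest(A, rng.standard_normal(100))
```

Note the method touches A only through matrix-vector products, which is the matrix-free property listed below; the block version iterates several vectors at once with the same three-term subspace per vector.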
https://en.wikipedia.org/wiki/Codewars
Codewars is an educational community for computer programming. On the platform, software developers train on programming challenges known as kata. These discrete programming exercises train a range of skills in a variety of programming languages, and are completed within an online integrated development environment. On Codewars the community and challenge progression is gamified, with users earning ranks and honor for completing kata, contributing kata, and writing quality solutions. The platform is owned and operated by Qualified, a technology company that provides a platform for assessing and training software engineering skills. History Founded by Nathan Doctor and Jake Hoffner in November 2012, the project initially began at a Startup Weekend competition that year, where it was prototyped. It was awarded first place in that competition, drawing the attention of engineers, and funding interest from two of the judges, Paige Craig (angel investor) and Brian Lee (entrepreneur). After building the first production iteration of the platform, it was launched to the Hacker News community, receiving significant attention for its challenge format and signing up approximately 10,000 users within that weekend. See also CodeFights CodinGame Competitive programming HackerRank External links AngelList profile Programming contests Computer programming American educational websites
https://en.wikipedia.org/wiki/K-space%20%28functional%20analysis%29
In mathematics, more specifically in functional analysis, a K-space is an F-space F such that every extension of F-spaces (or twisted sum) of the form 0 → ℝ → X → F → 0 is equivalent to the trivial one 0 → ℝ → ℝ ⊕ F → F → 0, where ℝ is the real line. Examples The spaces ℓp for 0 < p < 1 are K-spaces, as are all finite-dimensional Banach spaces. N. J. Kalton and N. P. Roberts proved that the Banach space ℓ₁ is not a K-space.
https://en.wikipedia.org/wiki/Moder%20humus
Moder is a forest floor type formed under mixed-wood and pure deciduous forests. Moder is a kind of humus whose properties are transitional between the mor and mull humus types. Moders are similar to mors in that they are made up of partially to fully humified organic components accumulated on the mineral soil. Compared to mors, moders are more zoologically active. Moders thus sit between mors and mulls, with a higher decomposition capacity than mor but lower than mull. Moders are characterized by a slow rate of litter decomposition by litter-dwelling organisms and fungi, leading to the accumulation of organic residues. Moder humus forms share the features of the mull and mor humus forms. Properties Moders develop in semiarid, temperate, and Mediterranean climates. Chemically, moders show low acidity, total carbon, carbon-nitrogen ratio, and cation exchange capacity, and high total nitrogen and base saturation. Moders have a higher availability of nutrients than mors. Formation of moder humus forms Moders form in deciduous forest situations when the soil has few micro-organisms, bacteria, and invertebrates, such as earthworms, to decompose the organic matter on the soil surface. The organic matter accumulation horizons are identified by capital letters. It is generally possible to observe three distinct "sub-layers" or horizons, designated the litter (L), fermentation (F), and humus (H) layers. Identifying the different layers of a moder "L" Litter: a horizon defined by the accumulation of primary leaves (and needles), twigs, and woody materials, with the original structures visible. "F" Fermentation: a horizon defined by the buildup of partially decomposed organic matter generated primarily from leaves, twigs, and woody materials. Some of the original structures are difficult to identify, and materials may have been broken into small pieces or particles in part by soil fauna, as in a "MODER". "H" Humus: A horizon defined by th
https://en.wikipedia.org/wiki/Linnett%20double-quartet%20theory
Linnett double-quartet theory (LDQ) is a method of describing the bonding in molecules which involves separating the electrons depending on their spin, placing them into separate 'spin tetrahedra' to minimise the Pauli repulsions between electrons of the same spin. Introduced by J. W. Linnett in his 1961 monograph and 1964 book, this method expands on the electron dot structures pioneered by G. N. Lewis. While the theory retains the requirement for fulfilling the octet rule, it dispenses with the need to force electrons into coincident pairs. Instead, the theory stipulates that the four electrons of a given spin should maximise the distances between each other, resulting in a net tetrahedral electronic arrangement that is the fundamental molecular building block of the theory. By taking cognisance of both the charge and the spin of the electrons, the theory can describe bonding situations beyond those invoking electron pairs, for example two-centre one-electron bonds. This approach thus facilitates the generation of molecular structures which accurately reflect the physical properties of the corresponding molecules, for example molecular oxygen, benzene, nitric oxide or diborane. Additionally, the method has enjoyed some success for generating the molecular structures of excited states, radicals, and reaction intermediates. The theory has also facilitated a more complete understanding of chemical reactivity, hypervalent bonding and three-centre bonding. Historical background The cornerstone of classical bonding theories is the Lewis structure, published by G. N. Lewis in 1916 and continuing to be widely taught and disseminated to this day. In this theory, the electrons in bonds are believed to pair up, forming electron pairs which result in the binding of nuclei. While Lewis’ model could explain the structures of many molecules, Lewis himself could not rationalise why electrons, negatively-charged particles which should repel, were able to form electron pairs in
https://en.wikipedia.org/wiki/Fire%20sprinkler%20system
A fire sprinkler system is an active fire protection method, consisting of a water supply system providing adequate pressure and flowrate to a water distribution piping system, to which fire sprinklers are connected. Although initially used only in factories and large commercial buildings, systems for homes and small buildings are now available at an affordable cost. Fire sprinkler systems are extensively used worldwide, with over 40 million sprinkler heads fitted each year. Fire sprinkler systems are generally designed as a life saving system, but are not necessarily designed to protect the building. In buildings completely protected by fire sprinkler systems, fires that did break out were controlled by the sprinklers alone in 96% of cases. History Leonardo da Vinci designed a sprinkler system in the 15th century. Leonardo automated his patron's kitchen with a super-oven and a system of conveyor belts. In a comedy of errors, everything went wrong during a huge banquet, and a fire broke out. "The sprinkler system worked all too well, causing a flood that washed away all the food and a good part of the kitchen." Ambrose Godfrey created the first successful automated sprinkler system in 1723. He used gunpowder to release a tank of extinguishing fluid. The world's first modern recognizable sprinkler system was installed in the Theatre Royal, Drury Lane in the United Kingdom in 1812 by its architect, William Congreve, and was covered by patent No. 3606 dated the same year. The apparatus consisted of a cylindrical airtight reservoir of 400 hogsheads (c. 95,000 litres) fed by a water main which branched to all parts of the theatre. A series of smaller pipes fed from the distribution pipe were pierced with a series of holes which would pour water in the event of a fire. Frederick Grinnell improved Henry S. Parmalee's design and in 1881 patented the automatic sprinkler that bears his name. He continued to improve the device and in 1890 invented t
https://en.wikipedia.org/wiki/Agroecology%20and%20Sustainable%20Food%20Systems
Agroecology and Sustainable Food Systems is a peer-reviewed scientific journal covering sustainable agriculture. It was established in 1990 as the Journal of Sustainable Agriculture, obtaining its current title in 2013. It is published by Taylor & Francis and the editor-in-chief is Stephen R. Gliessman (University of California, Santa Cruz). Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded and Scopus.
https://en.wikipedia.org/wiki/Background%20extinction%20rate
Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rate during periods in between major extinction events. Five mass extinctions have occurred over the course of Earth's history, resulting from a variety of causes. Overview Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it. Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years. Measurement Background extinction rates are typically measured in order to give a specific classification to a species, and this is obtained over a certain period of time. There are three different ways to express the background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on the planet earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct. Lifespan estimates Some species lifespan es
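The MSY arithmetic above can be captured in a one-line conversion. A minimal sketch (the function name is ours, for illustration):

```python
def expected_extinctions_per_year(n_species: int, rate_msy: float = 1.0) -> float:
    """Extinctions per year implied by a rate given in extinctions
    per million species-years (MSY)."""
    return n_species * rate_msy / 1_000_000

# One extinction per million species-years:
a = expected_extinctions_per_year(1_000_000)  # a million species -> 1 per year
b = expected_extinctions_per_year(1)          # one species -> 1e-6 per year,
                                              # i.e. once in a million years
```

The reciprocal of the per-species rate (here 10⁶ years) is what the third formulation, species survival time, reports directly.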
https://en.wikipedia.org/wiki/Botanical%20illustration
Botanical illustration is the art of depicting the form, color, and details of plant species. Such illustrations are generally meant to be scientifically descriptive of the subjects depicted and are often found printed alongside a botanical description in books, magazines, and other media. Some are sold as artworks. Often composed by a botanical illustrator in consultation with a scientific author, their creation requires an understanding of plant morphology and access to specimens and references. Many illustrations are in watercolour, but may also be in oils, ink, or pencil, or a combination of these and other media. The image may be life-size or not, though at times a scale is shown, and may show the life cycle and/or habitat of the plant and its neighbors, the upper and reverse sides of leaves, and details of flowers, bud, seed and root system. The fragility of dried or otherwise preserved specimens, and restrictions or impracticalities of transport, saw illustrations used as valuable visual references for taxonomists. In particular, minute plants or other botanical specimens only visible under a microscope were often identified through illustrations. To that end, botanical illustrations used to be generally accepted as types for attribution of a botanical name to a taxon. However, current guidelines state that on or after 1 January 2007, the type must be a specimen 'except where there are technical difficulties of specimen preservation or if it is impossible to preserve a specimen that would show the features attributed to the taxon by the author of the name.' (Arts 40.4 and 40.5 of the Shenzhen Code, 2018). History Early herbals and pharmacopoeia of many cultures include illustrations of plants. Botanical illustrations in such texts were often created to assist with identification of a species for some medicinal purpose. The earliest surviving illustrated botanical work is the Codex vindobonensis. It is a copy of Dioscorides's De materia medica, and was made in the year 512 for Juliana A
https://en.wikipedia.org/wiki/Braids%2C%20Links%2C%20and%20Mapping%20Class%20Groups
Braids, Links, and Mapping Class Groups is a mathematical monograph on braid groups and their applications in low-dimensional topology. It was written by Joan Birman, based on lecture notes by James W. Cannon, and published in 1974 by the Princeton University Press and University of Tokyo Press, as volume 82 of the book series Annals of Mathematics Studies. Although braid groups had been introduced in 1891 by Adolf Hurwitz and formalized in 1925 by Emil Artin, this was the first book devoted to them. It has been described as a "seminal work", one that "laid the foundations for several new subfields in topology". Topics Braids, Links, and Mapping Class Groups is organized into five chapters and an appendix. The first introductory chapter defines braid groups, configuration spaces, and the use of configuration spaces to define braid groups on arbitrary two-dimensional manifolds. It provides a solution to the word problem for braids, the question of determining whether two different-looking braid presentations really describe the same group element. It also describes the braid groups as automorphism groups of free groups and of multiply-punctured disks. The next three chapters present connections of braid groups to three different areas of mathematics. Chapter 2 concerns applications to knot theory, via Alexander's theorem that every knot or link can be formed by closing off a braid, and provides the first complete proof of the Markov theorem on equivalence of links formed in this way. It also includes material on the conjugacy problem, important in this area because conjugate braids close off to form the same link, and on the "algebraic link problem" (not to be confused with algebraic links) in which one must determine whether two links can be related to each other by finitely many moves of a certain type, equivalent to the homeomorphism of link complements. Chapter 3 concerns representation theory, and includes Fox derivatives and Fox's free differential calculus,
https://en.wikipedia.org/wiki/Reciprocal%20innervation
René Descartes (1596–1650) was one of the first to conceive a model of reciprocal innervation (in 1626) as the principle that provides for the control of agonist and antagonist muscles. Reciprocal innervation describes skeletal muscles as existing in antagonistic pairs, with contraction of one muscle producing forces opposite to those generated by contraction of the other. For example, in the human arm, the triceps acts to extend the lower arm outward while the biceps acts to flex the lower arm inward. To reach optimum efficiency, contraction of opposing muscles must be inhibited while muscles with the desired action are excited. This reciprocal innervation occurs so that the contraction of a muscle results in the simultaneous relaxation of its corresponding antagonist. A common example of reciprocal innervation is the nociceptive (or nocifensive) reflex, or defensive response to pain, commonly known as the withdrawal reflex: an involuntary action of the body that removes a body part from the vicinity of an offending object by contracting the appropriate muscles (usually flexor muscles) while relaxing the extensor muscles, allowing smooth movement. The concept of reciprocal innervation as applicable to the eye is also known as Sherrington's law (after Charles Scott Sherrington), wherein increased innervation to an extraocular muscle is accompanied by a simultaneous decrease in innervation to its specific antagonist, such as the medial rectus and the lateral rectus in the case of an eye looking to one side of the midline. When looking outward or laterally, the lateral rectus of one eye must contract via increased innervation, while its antagonist, the medial rectus of the same eye, must relax. The converse occurs in the other eye, both eyes demonstrating the law of reciprocal innervation. The significance of Descartes' law of reciprocal innervation has been additionally highlighted by recent research and applications of bioe
https://en.wikipedia.org/wiki/William%20Ayshford%20Sanford
William Ayshford Sanford, DL (1818– 28 October 1902) was a landowner, naturalist and Liberal Party politician, who served as Colonial Secretary of Western Australia from 1852 to 1855. Sanford was born in 1818, the son of Edward Ayshford Sanford, a Member of Parliament for Somerset, by his first wife Henrietta Langham, daughter of Sir William Langham, 8th Baronet. The family had owned Nynehead Court in Somerset since 1599, and William Ayshford Sanford succeeded to the estate on the death of his father in 1871. He served as Colonial Secretary from 1852 to 1855, and in this position Sanford asked the assistant Surveyor of the state, Robert Austin, to make observations and collections of birds while exploring inland regions. In the report of the Austin Expedition of 1854 is a note on a "Ground Parrot", following Sanford's labelling of what is assumed to be the type specimen of Pezoporus occidentalis, the cryptic "night parrot". After returning to England, Sanford is noted for his interest in natural history, especially the paleontology of the Somerset area. As a large landowner, he was a prominent public man in Somerset, was chairman of the Wellington bench of magistrates, and a deputy lieutenant of the county. He had been a strong supporter of the Liberals, but in his late years became a Liberal Unionist. Sanford died at his residence Nynehead Court, Somerset, on 28 October 1902. He was twice married, first in 1857 to Sarah Ellen Seymour (d.1867), daughter of Henry Seymour, of Knoyle House, Wiltshire, a male-line descendant of the Seymour baronets; and secondly in 1874 to Sarah Elizabeth Harriet Hervey (d.1877), daughter of Lord Arthur Hervey, Bishop of Bath and Wells, by his wife Patience Singleton. By his first wife, he left children: Colonel Edward Charles Ayshford Sanford (1859–1923) Mary Ethel Ayshford Sanford (1861–1941), who married her cousin Field Marshal Paul Methuen, 3rd Baron Methuen (1845–1932), and left children Henry Seymour John Ayshford Sanford
https://en.wikipedia.org/wiki/Piano%20tuning
Piano tuning is the act of adjusting the tension of the strings of an acoustic piano so that the musical intervals between strings are in tune. The meaning of the term 'in tune', in the context of piano tuning, is not simply a particular fixed set of pitches. Fine piano tuning requires an assessment of the vibration interaction among notes, which is different for every piano, thus in practice requiring slightly different pitches from any theoretical standard. Pianos are usually tuned to a modified version of the system called equal temperament. (See Piano key frequencies for the theoretical piano tuning.) In all systems of tuning, every pitch may be derived from its relationship to a chosen fixed pitch, which is usually A440 (440 Hz), the note A above middle C. In classical piano and music theory, middle C is usually labelled C4 (as in scientific pitch notation); in the MIDI standard definition, however, this middle C (261.626 Hz) is labelled C3. In practice, MIDI software may label middle C anywhere from C3 to C5, which can cause confusion, especially for beginners. Piano tuning is done by a wide range of independent piano technicians, piano rebuilders, piano-store technical personnel, and hobbyists. Professional training and certification are available from organizations or guilds, such as the Piano Technicians Guild. Many piano manufacturers recommend that pianos be tuned twice a year. Background Many factors cause pianos to go out of tune, particularly atmospheric changes. For instance, changes in humidity will affect the pitch of a piano; high humidity causes the sound board to swell, stretching the strings and causing the pitch to go sharp, while low humidity has the opposite effect. Changes in temperature can also affect the overall pitch of a piano. In newer pianos the strings gradually stretch and wooden parts compress, causing the piano to go flat, while in older pianos the tuning pins (that hold the strings in tune) can become loose and not hold the pia
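The theoretical equal-tempered pitches referred to above follow a simple rule: each semitone multiplies frequency by the twelfth root of two. A minimal sketch, assuming A440 and the standard 88-key numbering in which A4 is key 49 and middle C is key 40; as the article notes, fine tuning of an actual piano deviates slightly from these ideal values:

```python
# Twelve-tone equal temperament: each semitone step multiplies the
# frequency by 2**(1/12). With A4 = 440 Hz fixed as key number 49 on a
# standard 88-key piano, the ideal frequency of key n is derived from
# its distance (in semitones) from that reference pitch.

def key_frequency(n: int, a4_hz: float = 440.0) -> float:
    """Theoretical equal-tempered frequency of piano key n (1..88)."""
    return a4_hz * 2 ** ((n - 49) / 12)

print(round(key_frequency(49), 3))  # 440.0   (A4, the reference)
print(round(key_frequency(40), 3))  # 261.626 (middle C)
```

The 261.626 Hz value for middle C matches the figure quoted in the article's discussion of C4/C3 labelling.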
https://en.wikipedia.org/wiki/Sperm%20guidance
Sperm guidance is the process by which sperm cells (spermatozoa) are directed to the oocyte (egg) for the purpose of fertilization. In the case of marine invertebrates the guidance is done by chemotaxis. In the case of mammals, it appears to be done by chemotaxis, thermotaxis and rheotaxis. Background Since the discovery of sperm attraction to the female gametes in ferns over a century ago, sperm guidance in the form of sperm chemotaxis has been established in a large variety of species. Although sperm chemotaxis is prevalent throughout the Metazoa, from marine species with external fertilization such as sea urchins and corals, to humans, much of the current information on sperm chemotaxis is derived from studies of marine invertebrates, primarily sea urchin and starfish. Until relatively recently, the dogma was that, in mammals, guidance of spermatozoa to the oocyte was unnecessary. This was due to the common belief that, following ejaculation into the female genital tract, large numbers of spermatozoa 'race' towards the oocyte and compete to fertilize it. This belief was overturned when it became clear that only a few of the ejaculated spermatozoa — in humans, only ~1 of every million spermatozoa — succeed in entering the oviducts (Fallopian tubes) and when more recent studies showed that mammalian spermatozoa employ at least three different mechanisms, each of which can potentially serve as a guidance mechanism: chemotaxis, thermotaxis and rheotaxis. Sperm guidance in non-mammalian species Sperm guidance in non-mammalian species is performed by chemotaxis. The oocyte secretes a chemoattractant, which, as it diffuses away, forms a concentration gradient: a high concentration close to the egg, and a gradually lower concentration as the distance from the oocyte increases. Spermatozoa can sense this chemoattractant and orient their swimming direction up the concentration gradient towards the oocyte. Sperm chemotaxis was demonstrated in a l
https://en.wikipedia.org/wiki/Linear%20induction%20accelerator
Linear induction accelerators utilize ferrite-loaded, non-resonant magnetic induction cavities. Each cavity can be thought of as two large washer-shaped disks connected by an outer cylindrical tube. Between the disks is a ferrite toroid. A voltage pulse applied between the two disks causes an increasing magnetic field which inductively couples power into the charged particle beam. The linear induction accelerator was invented by Christofilos in the 1960s. Linear induction accelerators are capable of accelerating very high beam currents (>1000 A) in a single short pulse. They have been used to generate X-rays for flash radiography (e.g. DARHT at LANL), and have been considered as particle injectors for magnetic confinement fusion and as drivers for free electron lasers. A compact version of a linear induction accelerator, the dielectric wall accelerator, has been proposed as a proton accelerator for medical proton therapy.
https://en.wikipedia.org/wiki/Sequential%20quadratic%20programming
Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable. SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem. Algorithm basics Consider a nonlinear programming problem of the form: minimize f(x) over x, subject to b(x) ≥ 0 and c(x) = 0. The Lagrangian for this problem is L(x, λ, σ) = f(x) − λ^T b(x) − σ^T c(x), where λ and σ are Lagrange multipliers. The standard Newton's method searches for the solution by iterating (x_{k+1}, λ_{k+1}, σ_{k+1}) = (x_k, λ_k, σ_k) − [∇²L(x_k, λ_k, σ_k)]^{−1} ∇L(x_k, λ_k, σ_k), where ∇²L denotes the Hessian matrix of the Lagrangian. However, because the matrix ∇²L is generally singular (and therefore non-invertible), the Newton step cannot be calculated directly. Instead the basic sequential quadratic programming algorithm defines an appropriate search direction d_k at an iterate (x_k, λ_k, σ_k) as a solution to the quadratic programming subproblem: minimize over d the quadratic model f(x_k) + ∇f(x_k)^T d + (1/2) d^T ∇²_{xx}L(x_k, λ_k, σ_k) d, subject to b(x_k) + ∇b(x_k)^T d ≥ 0 and c(x_k) + ∇c(x_k)^T d = 0. Note that the term f(x_k) in the expression above may be left out for the minimization problem, since it is constant under the min-over-d operator. Together, the SQP algorithm starts by first choosing the initial iterate (x_0, λ_0, σ_0), then calculating ∇²L(x_0, λ_0, σ_0) and ∇L(x_0, λ_0, σ_0). Then the QP subproblem is built and solved to find the Newton step direction d_0, which is used to update the parent problem iterate using (x_{k+1}, λ_{k+1}, σ_{k+1}) = (x_k, λ_k, σ_k) + d_k. This process is repeated for k = 0, 1, 2, … until the parent problem satisfies a convergence test. Alternative approaches Sequential linear programming Sequential linear-quadratic programming Augmented Lagrangian method Implementations SQP methods have been impl
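As an illustration, a small problem of this form can be solved with SciPy's SLSQP solver, a member of the SQP family. The example problem and starting point below are arbitrary, chosen so that the analytic solution is easy to verify by hand with a Lagrange multiplier:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x, y) = (x - 1)^2 + (y - 2)^2 subject to the equality
# constraint x + y = 2. The Lagrange conditions give the unique
# solution (x, y) = (0.5, 1.5).

objective = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 2}

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method="SLSQP", constraints=[constraint])

print(result.x)  # close to [0.5, 1.5]
```

SLSQP solves a quadratic subproblem with linearized constraints at each iterate, which is exactly the scheme outlined above (with a least-squares treatment of the QP step).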
https://en.wikipedia.org/wiki/Graphics%20pipeline
The computer graphics pipeline, also known as the rendering pipeline or graphics pipeline, is a framework within computer graphics that outlines the necessary procedures for transforming a three-dimensional (3D) scene into a two-dimensional (2D) representation on a screen. Once a 3D model is generated, whether for a video game or any other form of 3D computer animation, the graphics pipeline converts the model into a visually perceivable format on the computer display. Due to the dependence on specific software, hardware configurations, and desired display attributes, a universally applicable graphics pipeline does not exist. Nevertheless, graphics application programming interfaces (APIs), such as Direct3D and OpenGL, were developed to standardize common procedures and oversee the graphics pipeline of a given hardware accelerator. These APIs provide an abstraction layer over the underlying hardware, relieving programmers from the need to write code explicitly targeting various graphics hardware accelerators like AMD, Intel, Nvidia, and others. The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense for the pipeline in processors: the individual steps of the pipeline run in parallel as long as any given step has what it needs. Concept The 3D pipeline usually refers to the most common form of computer 3D rendering called 3D polygon rendering, distinct from raytracing and raycasting. In raycasting, a ray originates at the point where the camera resides, and if that ray hits a surface, the color and lighting of the point on the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens: the area that is in view of the camera is calculated, and then rays are created from every part of every surface in view of the camera and traced back to the camera. Structure A graphic
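The geometry stage of such a pipeline can be sketched as a chain of matrix transforms. A minimal NumPy illustration with placeholder model, view, and projection matrices; the matrix conventions here follow the common OpenGL-style layout and are an assumption for illustration, not a fixed part of every pipeline:

```python
import numpy as np

# A point in homogeneous coordinates is carried through model, view,
# and projection matrices; the perspective divide then maps the clip-
# space result to normalized device coordinates (NDC).

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style symmetric perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

model = np.eye(4)                    # object already placed in world space
view = np.eye(4); view[2, 3] = -5.0  # camera looks down -z, 5 units away
proj = perspective(90.0, 1.0, 0.1, 100.0)

p_world = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point
clip = proj @ view @ model @ p_world       # vertex transform chain
ndc = clip[:3] / clip[3]                   # perspective divide
print(ndc)
```

Real pipelines run this transform per vertex in hardware, followed by clipping, rasterization, and per-fragment shading.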
https://en.wikipedia.org/wiki/ISO/IEC%2027017
ISO/IEC 27017 is a security standard developed for cloud service providers and users to make a safer cloud-based environment and reduce the risk of security problems. It was published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) under the joint ISO and IEC subcommittee, ISO/IEC JTC 1/SC 27. It is part of the ISO/IEC 27000 family of standards, which provide best practice recommendations on information security management. This standard was built from ISO/IEC 27002, suggesting additional security controls for the cloud which were not completely defined in ISO/IEC 27002. This International Standard provides guidelines supporting the implementation of information security controls for cloud service customers, who implement the controls, and for cloud service providers, who support those implementations. The selection of appropriate information security controls, and the application of the implementation guidance provided, will depend on a risk assessment and any legal, contractual, regulatory or other cloud-sector-specific information security requirements. What does the standard provide? ISO/IEC 27017 provides guidelines for information security controls applicable to the use of cloud services by providing additional implementation guidance for 37 controls specified in ISO/IEC 27002 and 7 additional controls related to cloud services which address the following: Who is responsible for what between the cloud service provider and the cloud customer. The removal or return of assets at the end of a contract. Protection and separation of the customer's virtual environment. Virtual machine configuration. Administrative operations and procedures associated with the cloud environment. Cloud customer monitoring of activity. Virtual and cloud network environment alignment. Structure of the standard The official title of the standard is "Information technology — Security tech
https://en.wikipedia.org/wiki/Biorthogonal%20system
In mathematics, a biorthogonal system is a pair of indexed families of vectors (v_i) in E and (u_i) in F such that ⟨v_i, u_j⟩ = δ_{i,j}, where E and F form a pair of topological vector spaces that are in duality, ⟨·,·⟩ is a bilinear mapping and δ_{i,j} is the Kronecker delta. An example is the pair of sets of respectively left and right eigenvectors of a matrix, indexed by eigenvalue, if the eigenvalues are distinct. A biorthogonal system in which E = F and v_i = u_i for all i is an orthonormal system. Projection Related to a biorthogonal system is the projection P := Σ_{i ∈ I} ⟨·, u_i⟩ v_i, whose image is the linear span of {v_i : i ∈ I} and whose kernel is {x : ⟨x, u_i⟩ = 0 for all i ∈ I}. Construction Given a possibly non-orthogonal set of vectors (v_i) and (u_i), the related projection is P := Σ_{i,j} ⟨·, u_j⟩ (M^{−1})_{j,i} v_i, where M is the matrix with entries M_{i,j} = ⟨v_i, u_j⟩; then (v_i) together with (ũ_i), where ũ_i := Σ_j (M^{−1})_{j,i} u_j, is a biorthogonal system. See also
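The left/right-eigenvector example above can be checked numerically with SciPy's eig, which returns both families when called with left=True, right=True. The matrix below is arbitrary, chosen only to have distinct eigenvalues:

```python
import numpy as np
from scipy.linalg import eig

# For a matrix with distinct eigenvalues, its left and right eigenvectors,
# indexed by eigenvalue, form a biorthogonal system once suitably scaled.

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

w, vl, vr = eig(A, left=True, right=True)

# Rescale the right eigenvectors so that <u_i, v_j> = delta_ij.
# For distinct eigenvalues this pairing matrix is already diagonal.
G = vl.conj().T @ vr
vr = vr / np.diag(G)

print(np.round(vl.conj().T @ vr, 10))  # approximately the identity matrix
```

The off-diagonal entries vanish because a left eigenvector and a right eigenvector belonging to different eigenvalues are automatically orthogonal under the pairing.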
https://en.wikipedia.org/wiki/Katalin%20Vesztergombi
Katalin L. Vesztergombi (born July 17, 1948) is a Hungarian mathematician known for her contributions to graph theory and discrete geometry. A student of Vera T. Sós and a co-author of Paul Erdős, she is an emeritus associate professor at Eötvös Loránd University and a member of the Hungarian Academy of Sciences. Education As a high-school student in the 1960s, Vesztergombi became part of a special class for gifted mathematics students at Fazekas Mihály Gimnázium with her future collaborators László Lovász, József Pelikán, and others. She completed her Ph.D. in 1987 at Eötvös Loránd University. Her dissertation, Distribution of Distances in Finite Point Sets, is connected to the Erdős distinct distances problem and was supervised by Vera Sós. Contributions Vesztergombi's research contributions include works on permutations, graph coloring and graph products, combinatorial discrepancy theory, distance problems in discrete geometry, geometric graph theory, the rectilinear crossing number of the complete graph, and graphons. With László Lovász and József Pelikán, she is the author of the textbook Discrete Mathematics: Elementary and Beyond. Personal Vesztergombi is married to László Lovász, with whom she is also a frequent research collaborator. Selected publications Books Research articles
https://en.wikipedia.org/wiki/EIDORS
EIDORS is an open-source software tool box written mainly in MATLAB/GNU Octave designed primarily for image reconstruction from electrical impedance tomography (EIT) data, in a biomedical, industrial or geophysical setting. The name was originally an acronym for Electrical Impedance Tomography and Diffuse Optical Reconstruction Software. While the name reflects the original intention to cover image reconstruction of data from the mathematically similar near infra red diffuse optical imaging, to date there has been little development in that area. The project was launched in 1999 with a Matlab code for 2D EIT reconstruction which had its origin in the PhD thesis of Marko Vauhkonen and the work of his supervisor Jari Kaipio at the University of Kuopio. While Kuopio also developed a three dimensional EIT code this was not released as open-source. Instead the three dimensional version of EIDORS was developed from work done at UMIST (now University of Manchester) by Nick Polydorides and William Lionheart. Methods and models The forward models in EIDORS use the finite element method and this requires mesh generation for sometimes irregular objects (such as human bodies), and the meshing needs to reflect the electrodes used to drive and measure current in EIT. For this purpose an interface was developed to the Netgen Mesh Generator. History As the project grew there was a desire to incorporate forward modelling and reconstruction code from a variety of groups and Andy Adler and Lionheart developed a more extensible software system. The most recent version is 3.10, released in Dec, 2019. The EIDORS project also includes a repository of EIT data distributed under open-source licenses. Applications EIDORS has been extensively used in biomedical applications of EIT, including lung imaging, measuring cardiac output. It has been used for investigation of imaging electrical activity in the brain, and monitoring conductivity changes during radio-frequency ablation. Outsi
https://en.wikipedia.org/wiki/NAB2
NGFI-A-binding protein 2, also known as EGR-1-binding protein 2 or melanoma-associated delayed early response protein (MADER), is a protein that in humans is encoded by the NAB2 gene. Function This gene encodes a member of the family of NGFI-A-binding (NAB) proteins, which function in the nucleus to repress or activate transcription induced by some members of the EGR (early growth response) family of transactivators. NAB proteins can homo- or hetero-multimerize with other EGR or NAB proteins through a conserved N-terminal domain, and repress transcription through two partially redundant C-terminal domains. Transcriptional repression by the encoded protein is mediated in part by interactions with the nucleosome remodeling and deacetylase (NuRD) complex. Alternatively spliced transcript variants have been described, but their biological validity has not been determined. Pathology Recurrent somatic fusions of the two genes, NGFI-A-binding protein 2 (NAB2) and STAT6, located at chromosomal region 12q13, have been identified in solitary fibrous tumors.
https://en.wikipedia.org/wiki/Diagonal%20morphism%20%28algebraic%20geometry%29
In algebraic geometry, given a morphism of schemes p: X → S, the diagonal morphism δ: X → X ×_S X is the morphism determined by the universal property of the fiber product X ×_S X of p and p, applied to the identity 1_X and the identity 1_X. It is a special case of a graph morphism: given a morphism f: X → Y over S, the graph morphism of it is the morphism X → X ×_S Y induced by f and the identity 1_X. The diagonal embedding is the graph morphism of 1_X. By definition, X is a separated scheme over S (p is a separated morphism) if the diagonal morphism is a closed immersion. Also, a morphism locally of finite presentation is an unramified morphism if and only if the diagonal embedding is an open immersion. Explanation As an example, consider an algebraic variety X over an algebraically closed field k and p: X → Spec k the structure map. Then, identifying X with the set of its k-rational points, the diagonal δ: X → X ×_k X is given as x ↦ (x, x); whence the name diagonal morphism. Separated morphism A separated morphism is a morphism f such that the fiber product of f with itself along f has its diagonal as a closed subscheme; in other words, the diagonal morphism is a closed immersion. As a consequence, a scheme X is separated when the diagonal of X within the scheme product of X with itself is a closed immersion. Emphasizing the relative point of view, one might equivalently define a scheme to be separated if the unique morphism X → Spec ℤ is separated. Notice that a topological space Y is Hausdorff iff the diagonal embedding Y → Y × Y, y ↦ (y, y), is closed. In algebraic geometry, the above formulation is used because a scheme which is a Hausdorff space is necessarily empty or zero-dimensional. The difference between the topological and algebro-geometric context comes from the topological structure of the fiber product (in the category of schemes) X ×_S X, which is different from the product of topological spaces. Any affine scheme Spec A is separated, because the diagonal corresponds to the surjective map of rings (hence is a closed immersion of schemes): A ⊗ A → A, x ⊗ y ↦ xy. Let X be a scheme obtained by identifying two affine lines thr
https://en.wikipedia.org/wiki/Self-Protecting%20Digital%20Content
Self Protecting Digital Content (SPDC), is a copy protection (digital rights management) architecture which allows restriction of access to, and copying of, the next generation of optical discs and streaming/downloadable content. Overview Designed by Cryptography Research, Inc. of San Francisco, SPDC executes code from the encrypted content on the DVD player, enabling the content providers to change DRM systems in case an existing system is compromised. It adds functionality to make the system "dynamic", as opposed to "static" systems in which the system and keys for encryption and decryption do not change, thus enabling one compromised key to decode all content released using that encryption system. "Dynamic" systems attempt to make future content released immune to existing methods of circumvention. Playback method If a method of playback used in previously released content is revealed to have a weakness, either by review or because it has already been exploited, code embedded into content released in the future will change the method, and any attackers will have to start over and attack it again. Targeting compromised players If a certain model of players are compromised, code specific to the model can be activated to verify that the particular player has not been compromised. The player can be "fingerprinted" if found to be compromised and the information can be used later. Forensic marking Code inserted into content can add information to the output that specifically identifies the player, and in a large-scale distribution of the content, can be used to trace the player. This may include the fingerprint of a specific player. Weaknesses If an entire class of players is compromised, it is infeasible to revoke the ability to use the content on the entire class because many customers may have purchased players in the class. A fingerprint may be used to try to work around this limitation, but an attacker with access to multiple sources of video may "s
https://en.wikipedia.org/wiki/McGee%20graph
In the mathematical field of graph theory, the McGee graph or the (3-7)-cage is a 3-regular graph with 24 vertices and 36 edges. The McGee graph is the unique (3,7)-cage (the smallest cubic graph of girth 7). It is also the smallest cubic cage that is not a Moore graph. First discovered by Sachs but unpublished, the graph is named after McGee who published the result in 1960. Then, the McGee graph was proven the unique (3,7)-cage by Tutte in 1966. The McGee graph requires at least eight crossings in any drawing of it in the plane. It is one of three non-isomorphic graphs tied for being the smallest cubic graph that requires eight crossings. Another of these three graphs is the generalized Petersen graph , also known as the Nauru graph. The McGee graph has radius 4, diameter 4, chromatic number 3 and chromatic index 3. It is also a 3-vertex-connected and a 3-edge-connected graph. It has book thickness 3 and queue number 2. Algebraic properties The characteristic polynomial of the McGee graph is . The automorphism group of the McGee graph is of order 32 and doesn't act transitively upon its vertices: there are two vertex orbits, of lengths 8 and 16. The McGee graph is the smallest cubic cage that is not a vertex-transitive graph. Gallery
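The basic parameters quoted above can be checked computationally. A sketch assuming NetworkX and the graph's standard LCF notation [12, 7, −7]^8, from which the McGee graph can be constructed:

```python
import networkx as nx

# Build the McGee graph from its LCF notation [12, 7, -7]^8: a 24-cycle
# plus chords given by the repeating shift pattern. Then verify the
# parameters stated in the text: 24 vertices, 36 edges, 3-regular.

G = nx.LCF_graph(24, [12, 7, -7], 8)

print(G.number_of_nodes())                 # 24
print(G.number_of_edges())                 # 36
print(all(d == 3 for _, d in G.degree()))  # True (cubic)
```

Checking the girth-7 cage property as well would require a shortest-cycle search, omitted here for brevity.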
https://en.wikipedia.org/wiki/HAND%20domain
In molecular biology, the HAND domain is a protein domain which adopts a secondary structure consisting of four alpha helices, three of which (H2, H3, H4) form an L-like configuration. Helix H2 runs antiparallel to helices H3 and H4, packing closely against helix H4, whilst helix H1 reposes in the concave surface formed by these three helices and runs perpendicular to them. This domain confers DNA and nucleosome binding properties to the proteins in which it occurs. It is named the HAND domain because its 4-helical structure resembles an open hand. HAND domain-containing proteins include proteins involved in nucleosome remodelling, an energy-dependent process that alters histone-DNA interactions within nucleosomes, thereby rendering nucleosomal DNA accessible to regulatory factors. The ATPases involved in nucleosome remodelling belong to the SWI2/SNF2 subfamily of DEAD/H-helicases, which contain a conserved ATPase domain characterised by seven motifs. Proteins within this family differ with regard to domain organisation, their associated proteins and the remodelling complex in which they reside. The ATPase ISWI is a member of this family. ISWI can be divided into two regions: an N-terminal region that contains the SWI2/SNF2 ATPase domain, and a C-terminal region that is responsible for substrate recognition. The C-terminal region contains 12 alpha-helices and can be divided into three domains and a spacer region: a HAND domain, a SANT domain (c-Myb DNA-binding like), a spacer helix, and a SLIDE domain (SANT-like but with several insertions).
https://en.wikipedia.org/wiki/True%20arithmetic
In mathematical logic, true arithmetic is the set of all true first-order statements about the arithmetic of natural numbers. This is the theory associated with the standard model of the Peano axioms in the language of the first-order Peano axioms. True arithmetic is occasionally called Skolem arithmetic, though this term usually refers to the different theory of natural numbers with multiplication. Definition The signature of Peano arithmetic includes the addition, multiplication, and successor function symbols, the equality and less-than relation symbols, and a constant symbol for 0. The (well-formed) formulas of the language of first-order arithmetic are built up from these symbols together with the logical symbols in the usual manner of first-order logic. The structure 𝒩 is defined to be a model of Peano arithmetic as follows. The domain of discourse is the set ℕ of natural numbers, The symbol 0 is interpreted as the number 0, The function symbols are interpreted as the usual arithmetical operations on ℕ, The equality and less-than relation symbols are interpreted as the usual equality and order relation on ℕ. This structure is known as the standard model or intended interpretation of first-order arithmetic. A sentence in the language of first-order arithmetic is said to be true in 𝒩 if it is true in the structure just defined. The notation 𝒩 ⊨ φ is used to indicate that the sentence φ is true in 𝒩. True arithmetic is defined to be the set of all sentences in the language of first-order arithmetic that are true in 𝒩, written Th(𝒩). This set is, equivalently, the (complete) theory of the structure 𝒩. Arithmetic undefinability The central result on true arithmetic is the undefinability theorem of Alfred Tarski (1936). It states that the set Th(𝒩) is not arithmetically definable. This means that there is no formula φ(x) in the language of first-order arithmetic such that, for every sentence θ in this language, 𝒩 ⊨ θ if and only if 𝒩 ⊨ φ(⌜θ⌝). Here ⌜θ⌝ is the numeral of the canonical Gödel number of the sentence θ
https://en.wikipedia.org/wiki/Mathematical%20sciences
The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper. Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences. Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island). See also
https://en.wikipedia.org/wiki/NucleaRDB
The NucleaRDB is a database of nuclear receptors. It contains data about the sequences, ligand binding constants and mutations of those proteins. See also Nuclear receptor
https://en.wikipedia.org/wiki/Paracytophagy
Paracytophagy () is the cellular process whereby a cell engulfs a protrusion which extends from a neighboring cell. This protrusion may contain material which is actively transferred between the cells. The process of paracytophagy was first described as a crucial step during cell-to-cell spread of the intracellular bacterial pathogen Listeria monocytogenes, and is also commonly observed in Shigella flexneri. Paracytophagy allows these intracellular pathogens to spread directly from cell to cell, thus escaping immune detection and destruction. Studies of this process have contributed significantly to our understanding of the role of the actin cytoskeleton in eukaryotic cells. Actin cytoskeleton Actin is one of the main cytoskeletal proteins in eukaryotic cells. The polymerization of actin filaments is responsible for the formation of pseudopods, filopodia and lamellipodia during cell motility. Cells actively build actin microfilaments that push the cell membrane towards the direction of advance. Nucleation factors and the Arp2/3 complex Nucleation factors are enhancers of actin polymerization and contribute to the formation of the trimeric polymerization nucleus. This is a structure required to initiate the process of actin filament polymerization in a stable and efficient way. Nucleation factors such as WASP (Wiskott-Aldrich syndrome protein) help to form the seven-protein Arp2/3 nucleation complex, which resembles two actin monomers and therefore allows for easier formation of the polymerization nucleus. Arp2/3 is able to cap the trailing ("minus") end of the actin filament, allowing for faster polymerization at the "plus" end. It can also bind to the side of existing filaments to promote filament branching. WASP analogs used by pathogens for intracellular motility Certain intracellular pathogens such as the bacterial species Listeria monocytogenes and Shigella flexneri can manipulate host cell actin polymerization to move through the cytosol and spread to
https://en.wikipedia.org/wiki/Plant%20nutrition
Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, plant metabolism and their external supply. An element is essential if, in its absence, the plant is unable to complete a normal life cycle, or if the element is part of some essential plant constituent or metabolite. This is in accordance with Justus von Liebig's law of the minimum. Seventeen different elements are essential plant nutrients: carbon, oxygen and hydrogen are absorbed from the air, whereas the other nutrients, including nitrogen, are typically obtained from the soil (exceptions include some parasitic or carnivorous plants). Plants must obtain the following mineral nutrients from their growing medium: the macronutrients: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg); the micronutrients (or trace minerals): iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni). These elements remain in the soil as salts, so plants absorb them as ions. The macronutrients are taken up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute to over 95% of a plant's entire biomass on a dry matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 to 200 ppm, or less than 0.02% dry weight. Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer. However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield. History Carbon, hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 tha
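The micronutrient figures quoted above can be verified with a quick unit conversion (this snippet is our own illustration, not from the article):

```python
# Parts per million converted to percent of dry weight, confirming that
# the 0.1-200 ppm micronutrient range corresponds to at most 0.02%.

def ppm_to_percent(ppm):
    # 1 ppm = 1 part in 10^6; percent = parts in 10^2
    return ppm / 1_000_000 * 100

upper = ppm_to_percent(200)   # upper end of the micronutrient range
lower = ppm_to_percent(0.1)   # lower end of the range
```

At 200 ppm the concentration is 0.02% of dry weight, matching the article's "less than 0.02%" bound.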
https://en.wikipedia.org/wiki/Robert%20Goldblatt
__notoc__ Robert Ian Goldblatt (born 1949) is a mathematical logician who is Emeritus Professor in the School of Mathematics and Statistics at Victoria University of Wellington, New Zealand. His doctoral advisor was Max Cresswell. His most popular books are Logics of Time and Computation and Topoi: the Categorial Analysis of Logic. He has also written a graduate-level textbook on hyperreal numbers which is an introduction to nonstandard analysis. He has been Coordinating Editor of The Journal of Symbolic Logic and a Managing Editor of Studia Logica. He was elected Fellow and Councillor of the Royal Society of New Zealand, served as President of the New Zealand Mathematical Society, and represented New Zealand in the International Mathematical Union. In 2012 he was awarded the Jones Medal for lifetime achievement in mathematics. Books and handbook chapters 1979: Topoi: The Categorial Analysis of Logic, North-Holland. Revised edition 1984. Dover Publications edition 2006. Internet edition, Project Euclid. Benjamin C. Pierce recommends it as an "excellent beginner book", praising it for the use of simple set-theoretic examples and motivating intuitions, but noted that it "is sometimes criticized by category theorists for being misleading on some aspects of the subject, and for presenting long and difficult proofs where simple ones are available." But the preface of the Dover edition observes (p. xv) that "This is a book about logic, rather than category theory per se. It aims to explain, in an introductory way, how certain logical ideas are illuminated by a category-theoretic perspective." 1982: Axiomatising the Logic of Computer Programming, Lecture Notes in Computer Science 130, Springer-Verlag. 1987: Orthogonality and Spacetime Geometry, Universitext Springer-Verlag 1987: Logics of Time and Computation. CSLI Lecture Notes, 7. Stanford University, Center for the Study of Language and Information . Second edition 1992. 1993: Mathematics of Modality, CSLI Publications
https://en.wikipedia.org/wiki/RAB1
Rab GTPases are molecular switches that regulate membrane traffic. They are active in their GTP-bound form and inactive when bound to GDP. The GTPase YPT1, and its mammalian homologue Rab1, regulate membrane-tethering events on three different pathways: autophagy, ER-Golgi, and intra-Golgi traffic. In the yeast Saccharomyces cerevisiae, many of the ATG proteins needed for macroautophagy are shared with the biosynthetic cytoplasm-to-vacuole targeting (Cvt) pathway that transports certain hydrolases into the vacuole. Both pathways require YPT1; however, only the macroautophagy pathway is conserved in higher eukaryotes. In the macroautophagy pathway, Rab1 mediates the recruitment of Atg1 to the phagophore assembly site (PAS). Rab1 regulates macroautophagy by recruiting its effector, Atg1, to the PAS to tether Atg9 vesicles to each other or to other membranes.
https://en.wikipedia.org/wiki/Temperate%20Northern%20Atlantic
The Temperate Northern Atlantic is a biogeographic region of the Earth's seas, comprising the temperate and subtropical waters of the North Atlantic Ocean and connecting seas, including the Mediterranean Sea, Black Sea, and northern Gulf of Mexico. The Temperate Northern Atlantic is a marine realm, one of the great biogeographic divisions of the world's ocean basins. The tropical waters of the Atlantic Ocean, Gulf of Mexico, and Caribbean Sea are part of the Tropical Atlantic marine realm. To the north, the Temperate Northern Atlantic transitions to the Arctic realm. Subdivisions The Temperate Northern Atlantic realm is divided into six marine provinces. Five of the provinces are further divided into marine ecoregions. The Black Sea is both a province and an ecoregion.
Northern European Seas: South and West Iceland; Faroe Plateau; Southern Norway; Northern Norway and Finnmark; Baltic Sea; North Sea; Celtic Seas
Lusitanian: South European Atlantic Shelf; Saharan Upwelling; Azores Canaries Madeira
Mediterranean Sea: Adriatic Sea; Aegean Sea; Levantine Sea; Tunisian Plateau-Gulf of Sidra; Ionian Sea; Western Mediterranean; Alboran Sea
Black Sea: Black Sea
Cold Temperate Northwest Atlantic: Gulf of Saint Lawrence-Eastern Scotian Shelf; Southern Grand Banks-South Newfoundland; Scotian Shelf; Gulf of Maine-Bay of Fundy; Virginian
Warm Temperate Northwest Atlantic: Carolinian; Northern Gulf of Mexico
https://en.wikipedia.org/wiki/Journal%20of%20Mathematical%20Logic
The Journal of Mathematical Logic was established in 2001 and is published by World Scientific. It covers the field of mathematical logic and its applications. Abstracting and indexing The journal is abstracted and indexed in: Current Mathematical Publications Mathematical Reviews MathSciNet Zentralblatt MATH Science Citation Index Expanded Current Contents/Physical, Chemical and Earth Sciences Journal Citation Reports/Science Edition
https://en.wikipedia.org/wiki/Laboratory%20of%20Tree-Ring%20Research
The Laboratory of Tree-Ring Research (LTRR) was established in 1937 by A.E. Douglass, founder of the modern science of dendrochronology. The LTRR is a research unit in the College of Science at the University of Arizona in Tucson. Since its founding, visiting scholars and faculty at the lab have done notable work in the areas of climate change, fire history, ecology, archeology and hydrology.
https://en.wikipedia.org/wiki/Konzo
Konzo is an epidemic paralytic disease occurring among hunger-stricken rural populations in Africa where a diet dominated by insufficiently processed cassava results in simultaneous malnutrition and high dietary cyanide intake. Konzo was first described by Giovanni Trolli in 1938 who compiled the observations from eight doctors working in the Kwango area of the Belgian Congo (now Democratic Republic of the Congo). Signs and symptoms The onset of paralysis (spastic paraparesis) is sudden and symmetrical and affects the legs more than the arms. The resulting disability is permanent but does not progress. Typically, a patient is standing and walking on the balls of the feet with rigid legs and often with ankle clonus. Initially, most patients experience generalized weakness during the first days and are bedridden for some days or weeks before trying to walk. Occasional blurred vision and/or speech difficulties typically clear during the first month, except in severely affected patients. Spasticity is present from the first day, without any initial phase of flaccidity. After the initial weeks of functional improvement, the spastic paraparesis remains stable for the rest of life. Some patients may experience an abrupt aggravating episode, e.g. a sudden and permanent worsening of the spastic paraparesis. Such episodes are identical to the initial onset and can therefore be interpreted as a second onset. The severity of konzo varies; cases range from only hyperreflexia in the lower limbs to a severely disabled patient with spastic paraparesis, associated weakness of the trunk and arms, impaired eye movements, speech and possibly visual impairment. Although the severity varies from patient to patient, the longest upper motor neurons are invariably more affected than the shorter ones. Thus, a konzo patient with speech impairment always shows severe symptoms in the legs and arms. Recently, neuropsychological effects of konzo have been described from DR Congo. Cause The
https://en.wikipedia.org/wiki/Doomguy
The Doomguy (also spelled Doom Guy, as well as referred to as the Doom Marine, Doom Slayer or just the Slayer in 2016's Doom and Doom Eternal) is a fictional character and the protagonist of the Doom video game franchise of first-person shooters created by id Software. He was created by American video game designer John Romero. He was introduced as the player character in the original 1993 video game Doom. Within the Doom series, Doomguy is a demon-hunting space marine dressed in green combat armor who rarely speaks onscreen, and his personality and backstory were intentionally vague to reinforce his role as a player avatar. In Doom Eternal, he is voiced by American voice actor Matthew Waterson, while Jason Kelley voices the character in that game's downloadable content The Ancient Gods: Part Two. He has appeared in several other games developed by id Software, including Quake Champions and Quake III Arena. He has been featured in several other game franchises, including his likeness as a customizable skin for the Mii Gunner character in Super Smash Bros. Ultimate, being added as an outfit in Fall Guys, and an outfit in Fortnite. He received mainly positive reviews, with some critics praising him for being a competent protagonist. Concept and creation The Marine is not referred to by name in the original game. Romero described this choice as increasing player immersion: "There was never a name for the [Doom] marine because it's supposed to be YOU [the player]". The character sprites were created by Adrian Carmack, based on an initial sketch and clay model he made. In 2017, John Romero stated that he was the original model of the character for the cover box art. In 2020, Romero revealed that the real name of the character is Doomguy. In 2021, Doom Eternal director Hugo Martin revealed that a female version of Doomguy was nearly added, but was scrapped because of how large an endeavor it would have been. Tom Hall's original design draft, "The Doom Bible", described several planned
https://en.wikipedia.org/wiki/Intorel
Visionic is a network management computer system and network monitoring software application produced by Intorel. In 2002, Intorel launched the first version of what would later become its flagship product, Visionic.
https://en.wikipedia.org/wiki/StatPlus
StatPlus is a software product developed by AnalystSoft for basic univariate and multivariate statistical analysis (MANOVA, GLM, Latin squares), as well as time series analysis, nonparametric statistics, survival analysis and statistical charts including control charts. It was originally developed for use in biomedical sciences and known as BioStat. It is nowadays mostly used in biomedicine and the natural sciences. The software has a version for Mac OS X known as StatPlus:mac. This version may also be used as an add-on to Microsoft Excel, similar to Microsoft's Analysis ToolPak on Windows.
https://en.wikipedia.org/wiki/T-norm
In mathematics, a t-norm (also T-norm or, unabbreviated, triangular norm) is a kind of binary operation used in the framework of probabilistic metric spaces and in multi-valued logic, specifically in fuzzy logic. A t-norm generalizes intersection in a lattice and conjunction in logic. The name triangular norm refers to the fact that in the framework of probabilistic metric spaces t-norms are used to generalize the triangle inequality of ordinary metric spaces. Definition A t-norm is a function T: [0, 1] × [0, 1] → [0, 1] that satisfies the following properties: Commutativity: T(a, b) = T(b, a) Monotonicity: T(a, b) ≤ T(c, d) if a ≤ c and b ≤ d Associativity: T(a, T(b, c)) = T(T(a, b), c) The number 1 acts as identity element: T(a, 1) = a Since a t-norm is a binary algebraic operation on the interval [0, 1], infix algebraic notation is also common, with the t-norm usually denoted by ∗. The defining conditions of the t-norm are exactly those of a partially ordered abelian monoid on the real unit interval [0, 1]. (Cf. ordered group.) The monoidal operation of any partially ordered abelian monoid L is therefore by some authors called a triangular norm on L. Classification of t-norms A t-norm is called continuous if it is continuous as a function, in the usual interval topology on [0, 1]2. (Similarly for left- and right-continuity.) A t-norm is called strict if it is continuous and strictly monotone. A t-norm is called nilpotent if it is continuous and each x in the open interval (0, 1) is nilpotent, that is, there is a natural number n such that x ∗ x ∗ ⋯ ∗ x (n times) equals 0. A t-norm is called Archimedean if it has the Archimedean property, that is, if for each x, y in the open interval (0, 1) there is a natural number n such that x ∗ x ∗ ⋯ ∗ x (n times) is less than or equal to y. The usual partial ordering of t-norms is pointwise, that is, T1 ≤ T2   if   T1(a, b) ≤ T2(a, b) for all a, b in [0, 1]. As functions, pointwise larger t-norms are sometimes call
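The definition above is concrete enough to check numerically. The following sketch, not part of the article, implements three standard t-norms (minimum, product, and Łukasiewicz) and brute-forces the four defining axioms on a grid of the unit interval; all function names are our own.

```python
# Three classical t-norms on [0, 1] and a grid-based check of the axioms:
# commutativity, monotonicity, associativity, and 1 as identity element.

def t_min(a, b):          # Gödel / minimum t-norm
    return min(a, b)

def t_prod(a, b):         # product t-norm
    return a * b

def t_luk(a, b):          # Łukasiewicz t-norm (nilpotent)
    return max(0.0, a + b - 1.0)

def satisfies_tnorm_axioms(T, steps=10):
    grid = [i / steps for i in range(steps + 1)]
    for a in grid:
        if abs(T(a, 1.0) - a) > 1e-9:              # 1 is the identity
            return False
        for b in grid:
            if abs(T(a, b) - T(b, a)) > 1e-9:      # commutativity
                return False
            for c in grid:
                if abs(T(a, T(b, c)) - T(T(a, b), c)) > 1e-9:  # associativity
                    return False
                if b <= c and T(a, b) > T(a, c) + 1e-9:        # monotonicity
                    return False
    return True
```

A grid check of course only falsifies, never proves, the axioms; for these three operations the properties hold exactly and can be verified algebraically.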
https://en.wikipedia.org/wiki/NEK6
Serine/threonine-protein kinase Nek6 is an enzyme that in humans is encoded by the NEK6 gene. Function The Aspergillus nidulans 'never in mitosis A' (NIMA) gene encodes a serine/threonine kinase that controls initiation of mitosis. NIMA-related kinases (NEKs) are a group of protein kinases that are homologous to NIMA. Evidence suggests that NEKs perform functions similar to those of NIMA. NEK6 is a protein kinase which plays an important role in mitotic cell cycle progression. It is required for chromosome segregation at the metaphase-anaphase transition, robust mitotic spindle formation and cytokinesis. It phosphorylates ATF4, CIR1, PTN, RAD26L, RBBP6, RPS7, RPS6KB1, TRIP4, STAT3 and histones H1 and H3, and phosphorylates KIF11 to promote mitotic spindle formation. It is involved in G2/M phase cell cycle arrest induced by DNA damage, and inhibition of its activity results in apoptosis. It may contribute to tumorigenesis by suppressing p53/TP53-induced cancer cell senescence. Interactions NEK6 has been shown to interact with NEK9.
https://en.wikipedia.org/wiki/Laser-induced%20incandescence
Laser-induced incandescence (LII) is an in situ method of measuring aerosol particle volume fraction, primary particle sizes, and other thermophysical properties in flames, during gas-phase nanoparticle synthesis, and in aerosol streams more broadly. The technique is prominently used to characterize soot. The technique can broadly be separated into applications involving continuous or pulsed laser sources, with the former implemented in the Single Particle Soot Photometer (SP2) and the latter used in time-resolved laser-induced incandescence (TiRe-LII) analyses.
https://en.wikipedia.org/wiki/Bernoulli%27s%20triangle
Bernoulli's triangle is an array of partial sums of the binomial coefficients. For any non-negative integer n and for any integer k between 0 and n inclusive, the component in row n and column k is given by B(n, k) = Σ_{p=0}^{k} C(n, p), i.e., the sum of the first k + 1 nth-order binomial coefficients. The first rows of Bernoulli's triangle are: 1; 1, 2; 1, 3, 4; 1, 4, 7, 8; 1, 5, 11, 15, 16. Similarly to Pascal's triangle, each component of Bernoulli's triangle is the sum of two components of the previous row, except for the last number of each row, which is double the last number of the previous row. For example, if B(n, k) denotes the component in row n and column k, then: B(n, k) = B(n − 1, k − 1) + B(n − 1, k) for 0 < k < n, while B(n, n) = 2 B(n − 1, n − 1). Sequences derived from the Bernoulli triangle As in Pascal's triangle and other similarly constructed triangles, sums of components along diagonal paths in Bernoulli's triangle result in the Fibonacci numbers. As the third column of Bernoulli's triangle (k = 2) is a triangular number plus one, it forms the lazy caterer's sequence for n cuts, where n ≥ 2. The fourth column (k = 3) is the three-dimensional analogue, known as the cake numbers, for n cuts, where n ≥ 3. The fifth column (k = 4) gives the maximum number of regions in the problem of dividing a circle into areas for n + 1 points, where n ≥ 4. In general, the (k + 1)th column gives the maximum number of regions in k-dimensional space formed by n hyperplanes, for n ≥ k. It also gives the number of compositions (ordered partitions) of n + 1 into k + 1 or fewer parts.
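The triangle and its recurrences are easy to compute directly; the following sketch (our own illustration, not from the article) builds the rows as partial sums of binomial coefficients.

```python
# Bernoulli's triangle: entry (n, k) is the partial sum of binomial
# coefficients C(n, 0) + C(n, 1) + ... + C(n, k).
from math import comb

def bernoulli_triangle(rows):
    return [[sum(comb(n, p) for p in range(k + 1)) for k in range(n + 1)]
            for n in range(rows)]

B = bernoulli_triangle(8)

# Row 4 is 1, 5, 11, 15, 16: each inner entry is the sum of the two
# entries above it, and the last entry doubles the previous row's last.
```

The same data also reproduces the lazy caterer's sequence in column k = 2, since B(n, 2) = n(n + 1)/2 + 1.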
https://en.wikipedia.org/wiki/Philip%20Leder
Philip Leder (November 19, 1934 – February 2, 2020) was an American geneticist. Early life and education Leder was born in Washington, D.C., and studied at Harvard University, graduating in 1956. In 1960, he graduated from Harvard Medical School and completed his medical residency at the University of Minnesota. Scientific accomplishments Leder made several contributions in each decade of the modern genetics era from the 1960s through the 1990s. He may be best known for his early work with Marshall Nirenberg in the elucidation of the genetic code and the Nirenberg and Leder experiment. Since then, he has made several contributions in the fields of molecular genetics, immunology and the genetics of cancer. His group defined the base sequence of a complete mammalian gene (the gene for beta globin), which enabled him to determine its organization in detail, including its associated control signals. His research into the structure of genes which carry the code for antibody molecules was of major significance. The main focus of this inquiry was the question of how the vast diversity of antibody molecules is formed by a limited number of encoded genes. Leder's work on antibody genes was later extended to research into Burkitt's lymphoma, a tumour of antibody-producing cells, which involves the oncogene c-myc. This was crucial in understanding the origin of this type of tumor. In 1988, Leder and Timothy Stewart were granted the first patent on a genetically engineered animal. This animal, a mouse which had genes injected into its embryo to increase susceptibility to cancer, became known as the "oncomouse" and has been used in the laboratory study of cancer therapies. Positions In 1968, Leder headed the Biochemistry Department of the Graduate Program of the Foundation for Advanced Education in the Sciences at the National Institute of Health. In 1972 he was appointed director of the Laboratory for Molecular Genetics at the same institution and remained in that post u
https://en.wikipedia.org/wiki/Display%20motion%20blur
Display motion blur, also called HDTV blur and LCD motion blur, refers to several visual artifacts (anomalies or unintended effects affecting still or moving images) that are frequently found on modern consumer high-definition television sets and flat panel displays for computers. Causes Many motion blur factors have existed for a long time in film and video (e.g. slow camera shutter speed). The emergence of digital video, and HDTV display technologies, introduced many additional factors that now contribute to motion blur. The following factors are generally the primary or secondary causes of perceived motion blur in video. In many cases, multiple factors can occur at the same time within the entire chain, from the original media or broadcast, all the way to the receiver end. Pixel response time on LCD displays (motion blur caused by slow pixel response time) Lower camera shutter speeds common in Hollywood production films (blur in the content of the film), and common in miniaturized camera sensors that require more light. Blur from eye tracking fast-moving objects on sample-and-hold LCD, plasma, or microdisplay. Resolution resampling (blur due to resizing image to fit the native resolution of the HDTV); not a motion blur. Deinterlacing by the display, and telecine processing by studios. These processes can soften images, and/or introduce motion-speed irregularities. Compression artifacts, present in digital video streams, can contribute additional blur during fast motion. Motion blur has been a more severe problem for LCD displays, due to their sample-and-hold nature. Even in situations when pixel response time is very short, motion blur remains a problem because their pixels remain lit, unlike CRT phosphors that merely flash briefly. Reducing the time an LCD pixel is lit can be accomplished via turning off the backlight for part of a refresh. This reduces motion blur due to eye tracking by decreasing the time the backlight is on. In addition, strobed back
https://en.wikipedia.org/wiki/Sumihiro%27s%20theorem
In algebraic geometry, Sumihiro's theorem, introduced by Hideyasu Sumihiro, states that a normal algebraic variety with an action of a torus can be covered by torus-invariant affine open subsets. The "normality" in the hypothesis cannot be relaxed. The hypothesis that the group acting on the variety is a torus can also not be relaxed.
https://en.wikipedia.org/wiki/Sex%20differences%20in%20intelligence
Sex differences in human intelligence have long been a topic of debate among researchers and scholars. It is now recognized that there are no significant sex differences in general intelligence, though particular subtypes of intelligence vary somewhat between sexes. While some test batteries show slightly greater intelligence in males, others show slightly greater intelligence in females. In particular, studies have shown female subjects performing better on tasks related to verbal ability, and males performing better on tasks related to rotation of objects in space, often categorized as spatial ability. Some research indicates that male advantages on some cognitive tests are minimized when controlling for socioeconomic factors. It has also been hypothesized that there is slightly higher variability in male scores in certain areas compared to female scores, leading to males' being over-represented at the top and bottom extremes of the distribution, though the evidence for this hypothesis is inconclusive. IQ research Background There is no statistically significant difference between the average IQ scores of men and women. Average differences have been reported, however, on some tests of mathematics and verbal ability in certain contexts. Some studies have suggested that there may be more variability in cognitive ability among males than among females, but others have contradicted this, or presented evidence that differential variability is culturally rather than biologically determined. According to psychologist Diane Halpern, "there are both differences and similarities in the cognitive abilities of women and men, but there is no data-based rationale to support the idea that either is the smarter or superior sex." Findings Although most tests show no sex difference, there are some that do. For example, it has been found that female subjects tend to perform better on tests of verbal abilities and processing speed while males tend to perform better on tests o
https://en.wikipedia.org/wiki/Programmed%20cell%20death%20protein%201
Programmed cell death protein 1, also known as PD-1 and CD279 (cluster of differentiation 279), is a protein on the surface of T and B cells that has a role in regulating the immune system's response to the cells of the human body by down-regulating the immune system and promoting self-tolerance by suppressing T cell inflammatory activity. This prevents autoimmune diseases, but it can also prevent the immune system from killing cancer cells. PD-1 is an immune checkpoint and guards against autoimmunity through two mechanisms. First, it promotes apoptosis (programmed cell death) of antigen-specific T-cells in lymph nodes. Second, it reduces apoptosis in regulatory T cells (anti-inflammatory, suppressive T cells). PD-1 inhibitors, a new class of drugs that block PD-1, activate the immune system to attack tumors and are used to treat certain types of cancer. The PD-1 protein in humans is encoded by the PDCD1 gene. PD-1 is a cell surface receptor that belongs to the immunoglobulin superfamily and is expressed on T cells and pro-B cells. PD-1 binds two ligands, PD-L1 and PD-L2. Discovery In a screen for genes involved in apoptosis, Yasumasa Ishida, Tasuku Honjo and colleagues at Kyoto University in 1992 discovered and named PD-1. In 1999, the same group demonstrated that mice in which PD-1 was knocked out were prone to autoimmune disease and hence concluded that PD-1 was a negative regulator of immune responses. Structure PD-1 is a type I membrane protein of 288 amino acids. PD-1 is a member of the extended CD28/CTLA-4 family of T cell regulators. The protein's structure includes an extracellular IgV domain followed by a transmembrane region and an intracellular tail. The intracellular tail contains two phosphorylation sites located in an immunoreceptor tyrosine-based inhibitory motif and an immunoreceptor tyrosine-based switch motif, which suggests that PD-1 negatively regulates T-cell receptor TCR signals. This is consistent with binding of SHP-1 and SHP-2 phos
https://en.wikipedia.org/wiki/Heathkit%20H11
The Heathkit H11 Computer is an early kit-format personal computer introduced in 1978. It is essentially a Digital Equipment PDP-11 in a small-form-factor case, designed by Heathkit. The H11 is one of the first 16-bit personal computers, at a list price of US$1,295, but it also requires at least a computer terminal and some form of storage to make it useful. It was too expensive for most Heathkit customers, and was discontinued in 1982. Specifications The H11 featured: Processor — LSI-11 (KD11-HA half-size or "double-height" card) Speed — 2.5 MHz ROM — 8 kWords (16 kBytes) (max) RAM — 32 kWords (64 kBytes) (max) Slots — 7 Q-bus slots Storage — H27 8-inch floppy drive (two 256 kB single-sided 8-inch drives) or paper tape I/O — serial (RS-232) or parallel ports Operating system — HT-11 (a simplified version of RT-11) Instruction set — PDP-11/40 instruction set Languages — BASIC, Focal and others Initial memory limitations restrict the selection of system software, but the system RAM can be expanded to 32 kWords × 16 bits. Many PDP-11 operating systems and programs run without trouble. The system will also work with most DEC PDP-11 equipment, including many Q-bus compatible peripherals. See also Elektronika BK Heathkit H8
https://en.wikipedia.org/wiki/Annulus%20%28botany%29
An annulus in botany is an arc or a ring of specialized cells on the sporangium. These cells are arranged in a single row, and are associated with the release or dispersal of spores. Ferns In leptosporangiate ferns, the annulus is located on the outer rim of the sporangium and serves in spore dispersal. It consists typically of a ring or belt of dead water-filled cells with differentially thickened cell walls that stretches about two-thirds around each sporangium in leptosporangiate ferns. The thinner walls on the outside allow water to evaporate quickly under dry conditions. This water loss causes the cells to shrink, contracting and straightening the annulus ring and eventually rupturing the sporangial wall by ripping apart thin-walled lip cells on the opposite side of the sporangium. As more water evaporates, air bubbles form in the cells, causing the contracted annulus to snap forward again, thus dislodging and launching the spores away from the plant. The type and position of the annulus is variable (e.g. patch, apical, oblique, or vertical) and can be used to distinguish major groups of leptosporangiate ferns. Mosses In mosses, an annulus is a complete ring of cells around the tip of the sporangium, which dissolve to allow the cap to fall off and the spores to be released.
https://en.wikipedia.org/wiki/Transcendental%20number%20theory
Transcendental number theory is a branch of number theory that investigates transcendental numbers (numbers that are not solutions of any polynomial equation with rational coefficients), in both qualitative and quantitative ways. Transcendence The fundamental theorem of algebra tells us that if we have a non-constant polynomial with rational coefficients (or equivalently, by clearing denominators, with integer coefficients) then that polynomial will have a root in the complex numbers. That is, for any non-constant polynomial with rational coefficients there will be a complex number such that . Transcendence theory is concerned with the converse question: given a complex number , is there a polynomial with rational coefficients such that If no such polynomial exists then the number is called transcendental. More generally the theory deals with algebraic independence of numbers. A set of numbers {α1, α2, …, αn} is called algebraically independent over a field K if there is no non-zero polynomial P in n variables with coefficients in K such that P(α1, α2, …, αn) = 0. So working out if a given number is transcendental is really a special case of algebraic independence where n = 1 and the field K is the field of rational numbers. A related notion is whether there is a closed-form expression for a number, including exponentials and logarithms as well as algebraic operations. There are various definitions of "closed-form", and questions about closed-form can often be reduced to questions about transcendence. History Approximation by rational numbers: Liouville to Roth Use of the term transcendental to refer to an object that is not algebraic dates back to the seventeenth century, when Gottfried Leibniz proved that the sine function was not an algebraic function. The question of whether certain classes of numbers could be transcendental dates back to 1748 when Euler asserted that the number logab was not algebraic for rational numbers a and b provided b is n
https://en.wikipedia.org/wiki/Supercompact%20cardinal
In set theory, a supercompact cardinal is a type of large cardinal independently introduced by Solovay and Reinhardt. They display a variety of reflection properties. Formal definition If is any ordinal, is -supercompact means that there exists an elementary embedding from the universe into a transitive inner model with critical point , and That is, contains all of its -sequences. Then is supercompact means that it is -supercompact for all ordinals . Alternatively, an uncountable cardinal is supercompact if for every such that there exists a normal measure over , in the following sense. is defined as follows: . An ultrafilter over is fine if it is -complete and , for every . A normal measure over is a fine ultrafilter over with the additional property that every function such that is constant on a set in . Here "constant on a set in " means that there is such that . Properties Supercompact cardinals have reflection properties. If a cardinal with some property (say a 3-huge cardinal) that is witnessed by a structure of limited rank exists above a supercompact cardinal , then a cardinal with that property exists below . For example, if is supercompact and the generalized continuum hypothesis (GCH) holds below then it holds everywhere because a bijection between the powerset of and a cardinal at least would be a witness of limited rank for the failure of GCH at so it would also have to exist below . Finding a canonical inner model for supercompact cardinals is one of the major problems of inner model theory. The least supercompact cardinal is the least such that for every structure with cardinality of the domain , and for every sentence such that , there exists a substructure with smaller domain (i.e. ) that satisfies . Supercompactness has a combinatorial characterization similar to the property of being ineffable. Let be the set of all nonempty subsets of which have cardinality . A cardinal is supercompact iff for every s
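Several formulas in the passage above were lost in extraction. As a point of reference, the standard embedding definition can be restated as follows; this is a reconstruction from the standard set-theoretic literature, not the article's original markup:

```latex
% \kappa is \lambda-supercompact iff there is an elementary embedding
% j : V \to M into a transitive inner model M such that
\mathrm{crit}(j) = \kappa, \qquad j(\kappa) > \lambda, \qquad {}^{\lambda}M \subseteq M .
% \kappa is supercompact iff it is \lambda-supercompact for every ordinal \lambda.
```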
https://en.wikipedia.org/wiki/HOTHEAD%20%28gene%29
HOTHEAD is an Arabidopsis thaliana gene that encodes a flavin adenine dinucleotide-containing oxidoreductase. This gene has a role in the creation of the carpel during the formation of flowers through the fusion of epidermal cells. Observations of reversion of the hothead phenotype and genotype led to the suggestion that the plants were able to "remember" the sequences of genes present in their ancestors, possibly through a cache of complementary RNA. This report attracted broad attention, and alternative explanations were suggested. Later research suggested that the supposed reversion phenomenon was due to the plants having a pronounced bias towards outcrossing (because of their floral defects), rather than self-fertilizing at high rates, as is typical for A. thaliana.
https://en.wikipedia.org/wiki/List%20of%20largest%20video%20screens
This is a list of the largest video-capable screens in the world. See also Jumbotron
https://en.wikipedia.org/wiki/Teletraffic%20engineering
Teletraffic engineering, telecommunications traffic engineering, or just traffic engineering when in context, is the application of transportation traffic engineering theory to telecommunications. Teletraffic engineers use knowledge of statistics, including queueing theory, the nature of traffic, practical models, measurements, and simulations to make predictions and to plan telecommunication networks such as a telephone network or the Internet. These tools and knowledge help provide reliable service at lower cost. The field was created by the work of A. K. Erlang for circuit-switched networks but is applicable to packet-switched networks, as they both exhibit Markovian properties, and can hence be modeled by e.g. a Poisson arrival process. The crucial observation in traffic engineering is that in large systems the law of large numbers can be used to make the aggregate properties of a system over a long period of time much more predictable than the behaviour of individual parts of the system. In PSTN architectures The measurement of traffic in a public switched telephone network (PSTN) allows network operators to determine and maintain the quality of service (QoS) and in particular the grade of service (GoS) that they promise their subscribers. The performance of a network depends on whether all origin-destination pairs are receiving a satisfactory service. Networks are handled as: loss systems, where calls that cannot be handled are given an equipment busy tone, or queuing systems, where calls that cannot be handled immediately are queued. Congestion is defined as the situation when exchanges or circuit groups are inundated with calls and are unable to serve all the subscribers. Special attention must be given to ensure that such high loss situations do not arise. To help determine the probability of congestion occurring, operators should use the Erlang formulas or the Engset calculation. Exchanges in the PSTN make use of trunking concepts to he
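The Erlang B formula mentioned above gives the blocking probability of a loss system offered E erlangs of traffic on m circuits. A minimal sketch using the standard numerically stable recurrence (a generic illustration, not tied to any particular planning tool):

```python
def erlang_b(traffic: float, circuits: int) -> float:
    """Blocking probability B(E, m) via the recurrence
    B(E, 0) = 1;  B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
    b = 1.0
    for k in range(1, circuits + 1):
        b = traffic * b / (k + traffic * b)
    return b

# 2 erlangs offered to 2 circuits: about 40% of call attempts are blocked.
print(erlang_b(2.0, 2))  # ~0.4
```

The recurrence avoids the large factorials of the closed-form expression, so it is usable for hundreds of circuits without overflow.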
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20592
Zinc finger protein 592 is a protein that in humans is encoded by the ZNF592 gene. Function This gene is thought to play a role in a complex developmental pathway and the regulation of genes involved in cerebellar development. Mutations in this gene have been associated with autosomal recessive spinocerebellar ataxia.
https://en.wikipedia.org/wiki/TX-0
The TX-0, for Transistorized Experimental computer zero, but affectionately referred to as tixo (pronounced "tix oh"), was an early fully transistorized computer that contained a then-huge 64K of 18-bit words of magnetic-core memory. Construction of the TX-0 began in 1955 and ended in 1956. It was used continually through the 1960s at MIT. The TX-0 incorporated around 3,600 Philco high-frequency surface-barrier transistors, the first transistor suitable for high-speed computers. The TX-0 and its direct descendant, the original PDP-1, were platforms for pioneering computer research and the development of what would later be called computer "hacker" culture. For MIT, this was the first computer to provide a system console which allowed for direct interaction, as opposed to previous computers, which required the use of punched cards as the primary interface for programmers debugging their programs. Members of MIT's Tech Model Railroad Club, "the very first hackers at MIT", reveled in the interactivity afforded by the console, and were recruited by Marvin Minsky to work on this and other systems used by Minsky's AI group. Background Designed at the MIT Lincoln Laboratory largely as an experiment in transistorized design and the construction of very large core memory systems, the TX-0 was essentially a transistorized version of the equally famous Whirlwind, also built at Lincoln Lab. While the Whirlwind filled an entire floor of a large building, the TX-0 fit in a single reasonably sized room and yet was somewhat faster. Like the Whirlwind, the TX-0 was equipped with a vector display system, consisting of a 12-inch oscilloscope with a working area of 7 by 7 inches connected to the 18-bit output register of the computer, allowing it to display points and vectors with a resolution of up to 512×512 screen locations. The TX-0 was an 18-bit computer with a 16-bit address range. The first two bits of the machine word designate the instruction, and the remaining 16 bits specify the memory location
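The 2-bit opcode / 16-bit address split described above can be made concrete with a few lines of bit arithmetic (a hypothetical decoder written for illustration, not TX-0 software):

```python
def decode_tx0_word(word: int) -> tuple[int, int]:
    """Split an 18-bit TX-0 machine word into its 2-bit instruction
    field (bits 17-16) and 16-bit memory address field (bits 15-0)."""
    assert 0 <= word < 2**18, "TX-0 words are 18 bits wide"
    opcode = word >> 16        # top two bits select one of four instructions
    address = word & 0xFFFF    # remaining 16 bits address up to 64K words
    return opcode, address

# A word with opcode 2 and address 0o1234 (octal):
print(decode_tx0_word((2 << 16) | 0o1234))  # (2, 668)
```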
https://en.wikipedia.org/wiki/Nimrod%20%28computer%29
The Nimrod, built in the United Kingdom by Ferranti for the 1951 Festival of Britain, was an early computer custom-built to play Nim, inspired by the earlier Nimatron. The twelve-by-nine-by-five-foot (3.7-by-2.7-by-1.5-meter) computer, designed by John Makepeace Bennett and built by engineer Raymond Stuart-Williams, allowed exhibition attendees to play a game of Nim against an artificial intelligence. The player pressed buttons on a raised panel corresponding with lights on the machine to select their moves, and the Nimrod moved afterward, with its calculations represented by more lights. The speed of the Nimrod's calculations could be reduced to allow the presenter to demonstrate exactly what the computer was doing, with more lights showing the state of the calculations. The Nimrod was intended to demonstrate Ferranti's computer design and programming skills rather than to entertain, though Festival attendees were more interested in playing the game than the logic behind it. After its initial exhibition in May, the Nimrod was shown for three weeks in October 1951 at the Berlin Industrial Show before being dismantled. The game of Nim running on the Nimrod is a candidate for one of the first video games, as it was one of the first computer games to have any sort of visual display of the game. It appeared only four years after the 1947 invention of the cathode-ray tube amusement device, the earliest known interactive electronic game to use an electronic display, and one year after Bertie the Brain, a computer similar to the Nimrod which played tic-tac-toe at the 1950 Canadian National Exhibition. However, because the Nimrod used light bulbs rather than a screen with real-time visual graphics, much less moving graphics, it does not meet some definitions of a video game. Development In the summer of 1951, the United Kingdom held the Festival of Britain, a national exhibition held throughout the UK to promote the British contribution to science, technology, industrial design,
https://en.wikipedia.org/wiki/POW-R
POW-R (Psychoacoustically Optimized Wordlength Reduction) is a set of commercial dithering and noise shaping algorithms used in digital audio bit-depth reduction. Developed by a consortium of four companies – The POW-R Consortium – the algorithms were first made available in 1999 in digital audio hardware products. POW-R is now licensed for use by many companies, particularly those in the digital audio workstation (DAW) arena, where it currently has significant market share. History POW-R was developed between 1997 and 1998 after an unfavorable change in the licensing terms of a leading bit-depth reduction algorithm of the time prompted some of its licensees to put together a consortium to develop a viable alternative algorithm. Formed by four audio engineering companies: Lake Technology (Dolby Labs), Weiss Engineering, Millennia Media and Z-Systems, the consortium set out with the goal of creating 'the most sonically transparent dithering algorithm possible'. In 1999, the first products containing POW-R were released by consortium companies. Other companies became interested in using POW-R in their products, and the algorithms are now licensed to a number of leading DAW vendors including Apple, Avid (Digidesign), Sonic Studio, Ableton, Magix / Sequoia / Samplitude, and others. Reception One of the first products to include POW-R was a hardware dithering unit from Weiss Engineering; in a review of this product in 1999, mastering engineer Bob Katz spoke highly of the new algorithm, declaring it 'an incredible achievement'. Technical details Technically, the entire POW-R suite is not noise shaping; rather, the original POW-R algorithm is based on narrow-band Nyquist dither, while other POW-R algorithms include noise shaping and white noise. Unlike noise-shaping algorithms based on an 'absolute threshold of hearing' model (i.e. the quietest sound that can be heard in otherwise silent conditions), POW-R has been designed to give optimal performance at normal listening
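The POW-R algorithms themselves are proprietary, but the basic operation they refine — dithered wordlength reduction — can be sketched with plain TPDF (triangular probability density function) dither. This is a generic illustration of the technique, not the POW-R algorithm:

```python
import random

def dither_quantize(sample: float, bits: int) -> int:
    """Reduce a sample in [-1.0, 1.0] to a signed integer of the given
    bit depth, adding +/-1 LSB triangular (TPDF) dither before rounding
    so that quantization error is decorrelated from the signal."""
    scale = 2 ** (bits - 1)
    tpdf = random.random() - random.random()  # triangular PDF in (-1, 1) LSB
    q = round(sample * scale + tpdf)
    return max(-scale, min(scale - 1, q))     # clamp to the integer range

# Quantize a tiny ramp to 16 bits:
print([dither_quantize(s, 16) for s in (0.0, 0.25, 0.5)])
```

Noise-shaping schemes then filter the resulting error to push it toward frequencies where the ear is least sensitive; plain TPDF dither, as here, leaves the error spectrally white.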
https://en.wikipedia.org/wiki/McDonald%E2%80%93Kreitman%20test
The McDonald–Kreitman test is a statistical test often used by evolutionary and population biologists to detect and measure the amount of adaptive evolution within a species by determining whether adaptive evolution has occurred, and the proportion of substitutions that resulted from positive selection (also known as directional selection). To do this, the McDonald–Kreitman test compares the amount of variation within a species (polymorphism) to the divergence between species (substitutions) at two types of sites, neutral and nonneutral. A substitution refers to a nucleotide that is fixed within one species, but a different nucleotide is fixed within a second species at the same base pair of homologous DNA sequences. A site is nonneutral if it is either advantageous or deleterious. The two types of sites can be either synonymous or nonsynonymous within a protein-coding region. In a protein-coding sequence of DNA, a site is synonymous if a point mutation at that site would not change the amino acid, also known as a silent mutation. Because the mutation did not result in a change in the amino acid that was originally coded for by the protein-coding sequence, the phenotype, or the observable trait, of the organism is generally unchanged by the silent mutation. A site in a protein-coding sequence of DNA is nonsynonymous if a point mutation at that site results in a change in the amino acid, resulting in a change in the organism's phenotype. Typically, silent mutations in protein-coding regions are used as the "control" in the McDonald–Kreitman test. In 1991, John H. McDonald and Martin Kreitman derived the McDonald–Kreitman test while performing an experiment with Drosophila (fruit flies) and their differences in amino acid sequence of the alcohol dehydrogenase gene. McDonald and Kreitman proposed this method to estimate the proportion of substitutions that are fixed by positive selection rather than by genetic drift. In order to set up the McDonald–Kreitman test, we
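The counts the test compares reduce to a 2×2 table of divergence and polymorphism at nonsynonymous and synonymous sites. A small sketch computing the neutrality index and the derived estimate α = 1 − NI (the proportion of substitutions attributable to positive selection), using the widely cited Adh counts from McDonald and Kreitman's 1991 study:

```python
def mk_alpha(dn: int, ds: int, pn: int, ps: int) -> tuple[float, float]:
    """Neutrality index NI = (Pn/Ps)/(Dn/Ds) and alpha = 1 - NI,
    where Dn/Ds are nonsynonymous/synonymous fixed differences
    (divergence) and Pn/Ps are the corresponding polymorphism counts."""
    ni = (pn / ps) / (dn / ds)
    return ni, 1.0 - ni

# Drosophila Adh (McDonald & Kreitman 1991): 7 replacement and 17 silent
# fixed differences; 2 replacement and 42 silent polymorphisms.
ni, alpha = mk_alpha(dn=7, ds=17, pn=2, ps=42)
print(round(ni, 3), round(alpha, 3))  # 0.116 0.884
```

NI below 1 (equivalently α above 0) indicates an excess of nonsynonymous divergence relative to the neutral expectation, the signature of positive selection.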
https://en.wikipedia.org/wiki/Field%20%28physics%29
In physics, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. For example, on a weather map, the surface temperature is described by assigning a number to each point on the map; the temperature can be considered at a certain point in time or over some interval of time, to study the dynamics of temperature change. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single rank-2 tensor field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. "The fact that the electromagnetic field can possess momentum and energy makes it very real ... a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance, the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e., they follow Gauss's law). A field can be classified as a scalar field, a vector field, a spinor f
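The inverse-square behaviour described above is easy to make concrete. A toy sketch of the Newtonian gravitational field strength of a point mass (the constants and symbols are the standard ones, nothing here is specific to the article):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_field(mass_kg: float, r_m: float) -> float:
    """Field strength g = G*M / r**2 of a point mass at distance r."""
    return G * mass_kg / r_m**2

# Doubling the distance from the source quarters the field:
g1 = gravity_field(5.97e24, 6.371e6)      # at Earth's mean radius, ~9.8 m/s^2
g2 = gravity_field(5.97e24, 2 * 6.371e6)  # at twice that distance
print(round(g1 / g2, 6))  # 4.0
```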
https://en.wikipedia.org/wiki/Health%20technology
Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, as well as computer-supported information systems. In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients. Development Pre-digital Era During the pre-digital era, patients suffered from inefficient and faulty clinical systems, processes, and conditions. Many medical errors occurred because of underdeveloped health technologies; examples include adverse drug events and alarm fatigue. Alarm fatigue occurs when an alarm is triggered so often that staff become desensitized to it. Because alarms were sometimes triggered by unimportant events, nurses came to regard them as insignificant, a dangerous situation that can lead to patient deaths. With technological development, intelligent alarm integration and physiologic sense-making programs were developed and helped reduce the number of false alarms, and greater investment in health technologies reduced medical errors more broadly. Outdated paper records were replaced in many healthcare organizations by electronic health records (EHR). According to studies, this change has improved drug administration, let healthcare providers access medical information more easily, enabled better treatments and faster results, and reduced costs. Improvement To help promote and expand the adoption of health information technology, Congress passed the HITECH act as part of the American Recovery and Reinvestment Act of 2009. HITEC
https://en.wikipedia.org/wiki/Balancer%20chromosome
Balancer chromosomes (or simply balancers) are a type of genetically engineered chromosome used in laboratory biology for the maintenance of recessive lethal (or sterile) mutations within living organisms without interference from natural selection. Since such mutations are viable only in heterozygotes, they cannot be stably maintained through successive generations and therefore continually lead to production of wild-type organisms, which can be prevented by replacing the homologous wild-type chromosome with a balancer. In this capacity, balancers are crucial for genetics research on model organisms such as Drosophila melanogaster, the common fruit fly, for which stocks cannot be archived (e.g. frozen). They can also be used in forward genetics screens to specifically identify recessive lethal (or sterile) mutations. For that reason, balancers are also used in other model organisms, most notably the nematode worm Caenorhabditis elegans and the mouse. Typical balancer chromosomes are designed to (1) carry recessive lethal mutations themselves, eliminating homozygotes which do not carry the desired mutation; (2) suppress meiotic recombination with their homologs, which prevents de novo creation of wild-type chromosomes; and (3) carry dominant genetic markers, which can help identify rare recombinants and are useful for screening purposes. History Balancer chromosomes were first used in the fruit fly by Hermann Muller, who pioneered the use of radiation for organismal mutagenesis. In the modern usage of balancer chromosomes, random mutations are first induced by exposing living organisms with otherwise normal chromosomes to substances which cause DNA damage; in flies and nematodes, this usually occurs by feeding larvae ethyl methanesulfonate (EMS). The DNA-damaged larvae (or the adults into which they develop) are then screened for mutations. When a phenotype of interest is observed, the line expressing the mutation is crossed with another line containing balancer
https://en.wikipedia.org/wiki/Vehicular%20metrics
There are a broad range of metrics that denote the relative capabilities of various vehicles. Most of them apply to all vehicles while others are type-specific. See also
https://en.wikipedia.org/wiki/StatsDirect
StatsDirect is a statistical software package designed for biomedical, public health, and general health science uses. The second generation of the software was reviewed in general medical and public health journals. Features and use StatsDirect's interface is menu driven and has editors for spreadsheet-like data and reports. The function library includes common medical statistical methods that can be extended by users via an XML-based description that can embed calls to native StatsDirect numerical libraries, R scripts, or algorithms in any of the .NET languages (such as C#, VB.Net, J#, or F#). Common statistical misconceptions are challenged by the interface. For example, users can perform a chi-square test on a two-by-two table, but they are asked whether the data are from a cohort (prospective) or case-control (retrospective) study before delivering the result. Both processes produce a chi-square test result but more emphasis is put on the appropriate statistic for the inference, which is the odds ratio for retrospective studies and relative risk for prospective studies. Origins Professor Iain Buchan, formerly of the University of Manchester, wrote a doctoral thesis on the foundational work and is credited as the creator of the software. Buchan said he wished to address the problem of clinicians lacking the statistical knowledge to select and interpret statistical functions correctly, and often misusing software written by and for statisticians as a result. The software debuted in 1989 as Arcus, then Arcus ProStat in 1993, both written for the DOS platform. Arcus Quickstat for Windows followed in 1999. In 2000, an expanded version, StatsDirect, was released for Microsoft Windows. In 2013, the third generation of this software was released, written in C# for the .NET platform. StatsDirect reports embed the metadata necessary to replay calculations, which may be needed if the original data is ever updated. The reproducible report technology follows the rese
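The odds-ratio versus relative-risk distinction the interface enforces is simple to state in code. A generic sketch for a 2×2 table of exposure by outcome (this is textbook epidemiology, not StatsDirect's implementation):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR = (a*d)/(b*c) for the table [[a, b], [c, d]] --
    the appropriate measure for case-control (retrospective) data."""
    return (a * d) / (b * c)

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    """RR = risk in the exposed group / risk in the unexposed group --
    the appropriate measure for cohort (prospective) data."""
    return (a / (a + b)) / (c / (c + d))

# 10 of 100 exposed and 30 of 100 unexposed subjects develop the outcome:
print(round(odds_ratio(10, 90, 30, 70), 3))     # 0.259
print(round(relative_risk(10, 90, 30, 70), 3))  # 0.333
```

The two measures diverge whenever the outcome is common, which is why asking about study design before reporting matters.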
https://en.wikipedia.org/wiki/Unnatural%20Selection%20%28TV%20series%29
Unnatural Selection (stylized as unnatural selection) is a 2019 TV documentary series that presents an overview of genetic engineering and particularly the DNA-editing technology of CRISPR, from the perspective of scientists, corporations and biohackers working from their homes. It was released by Netflix on October 18, 2019. Episodes Unnatural Selection is a documentary series. Season 1 The first season consists of 4 episodes. It became available for streaming on October 18, 2019. Participants The documentary TV series includes the following notable participants (alphabetized by last name): Andrea Crisanti – Italian microbiologist Nelson Dellis – American memory athlete Jennifer Doudna – American biochemist and Nobel laureate for CRISPR Victor Dzau – President, U.S. National Academy of Medicine Preston Estep – American geneticist and CSO of Veritas Genetics Kevin M. Esvelt – American biologist Katherine A. High – American doctor and CSO of Spark Therapeutics Juan Carlos Izpisúa Belmonte – Spanish geneticist Jeffrey Kahn – American professor of bioethics James Russell – New Zealand ecologist Aaron Traywick – American life extension activist Josiah Zayner – American biohacker, artist, and scientist John J. Zhang – fertility specialist Reception According to reviewer Megan Molteni, writing for Wired Magazine, "Unnatural Selection chronicles the ambitions and struggles of scientists, doctors, patients, conservationists, and biohackers as they seek to wrest control of evolution from nature itself. They are all navigating the profound ethical dilemmas of a world where it’s possible to rewrite the code of life inside any organism, including human ... If you were looking for a Schoolhouse Rock! explanation of how Crispr works or a deep dive on the history of its discovery, Unnatural Selection won’t deliver ... All the requisite references will be made [in the series]—to Gattaca, to Huxley, to “life, uh, finds a way.” ... After watching Unnatural Sel
https://en.wikipedia.org/wiki/Rader%27s%20FFT%20algorithm
Rader's algorithm (1968), named for Charles M. Rader of MIT Lincoln Laboratory, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of prime sizes by re-expressing the DFT as a cyclic convolution (the other algorithm for FFTs of prime sizes, Bluestein's algorithm, also works by rewriting the DFT as a convolution). Since Rader's algorithm only depends upon the periodicity of the DFT kernel, it is directly applicable to any other transform (of prime order) with a similar property, such as a number-theoretic transform or the discrete Hartley transform. The algorithm can be modified to gain a factor of two savings for the case of DFTs of real data, using a slightly modified re-indexing/permutation to obtain two half-size cyclic convolutions of real data; an alternative adaptation for DFTs of real data uses the discrete Hartley transform. Winograd extended Rader's algorithm to include prime-power DFT sizes , and today Rader's algorithm is sometimes described as a special case of Winograd's FFT algorithm, also called the multiplicative Fourier transform algorithm (Tolimieri et al., 1997), which applies to an even larger class of sizes. However, for composite sizes such as prime powers, the Cooley–Tukey FFT algorithm is much simpler and more practical to implement, so Rader's algorithm is typically only used for large-prime base cases of Cooley–Tukey's recursive decomposition of the DFT. Algorithm Begin with the definition of the discrete Fourier transform: If N is a prime number, then the set of non-zero indices forms a group under multiplication modulo N. One consequence of the number theory of such groups is that there exists a generator of the group (sometimes called a primitive root, which can be found by exhaustive search or slightly better algorithms). This generator is an integer g such that for any non-zero index n and for a unique (forming a bijection from q to non-zero n). Similarly, for any non-zero index
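The re-indexing described above can be demonstrated end-to-end in a few dozen lines. This sketch performs the length-(N−1) cyclic convolution directly, so it shows the structure of Rader's algorithm rather than its speed; a practical implementation would compute that convolution with an FFT:

```python
import cmath

def naive_dft(x):
    """Direct O(N^2) DFT, used here only as a reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def primitive_root(N):
    """Smallest generator of the multiplicative group mod prime N
    (found by exhaustive search, as the article notes is possible)."""
    for g in range(2, N):
        v, seen = 1, set()
        for _ in range(N - 1):
            v = v * g % N
            seen.add(v)
        if len(seen) == N - 1:
            return g
    raise ValueError("N must be an odd prime")

def rader_dft(x):
    """DFT of prime length N via Rader's cyclic-convolution re-indexing."""
    N = len(x)
    g = primitive_root(N)
    ginv = pow(g, N - 2, N)                  # g^-1 mod N, by Fermat's little theorem
    M = N - 1
    a = [x[pow(g, q, N)] for q in range(M)]  # input permuted by powers of g
    b = [cmath.exp(-2j * cmath.pi * pow(ginv, q, N) / N) for q in range(M)]
    conv = [sum(a[q] * b[(p - q) % M] for q in range(M)) for p in range(M)]
    X = [sum(x)] + [0j] * M                  # X[0] is just the sum of the inputs
    for p in range(M):
        X[pow(ginv, p, N)] = x[0] + conv[p]  # un-permute the convolution output
    return X
```

For a prime length such as N = 7 this agrees with the direct DFT to machine precision.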
https://en.wikipedia.org/wiki/Two-dimensionalism
Two-dimensionalism is an approach to semantics in analytic philosophy. It is a theory of how to determine the sense and reference of a word and the truth-value of a sentence. It is intended to resolve the puzzle: How is it possible to discover empirically that a necessary truth is true? Two-dimensionalism provides an analysis of the semantics of words and sentences that makes sense of this possibility. The theory was first developed by Robert Stalnaker, but it has been advocated by numerous philosophers since, including David Chalmers. Two-dimensional semantic analysis Any given sentence, for example "Water is H2O", is taken to express two distinct propositions, often referred to as a primary intension and a secondary intension, which together compose its meaning. The primary intension of a word or sentence is its sense, i.e., the idea or method by which we find its referent. The primary intension of "water" might be a description, such as watery stuff. The thing picked out by the primary intension of "water" could have been otherwise. For example, on some other world where the inhabitants take "water" to mean watery stuff, but where the chemical make-up of watery stuff is not H2O, it is not the case that water is H2O for that world. The secondary intension of "water" is whatever thing "water" happens to pick out in this world, whatever that world happens to be. So, if we assign "water" the primary intension watery stuff, then the secondary intension of "water" is H2O, since H2O is watery stuff in this world. The secondary intension of "water" in our world is H2O, which is H2O in every world because unlike watery stuff it is impossible for H2O to be other than H2O. When considered according to its secondary intension, "Water is H2O" is true in every world. Impact If two-dimensionalism is workable it solves some very important problems in the philosophy of language. Saul Kripke has argued that "Water is H2O" is an example of a necessary truth whi
https://en.wikipedia.org/wiki/Three-dimensional%20edge-matching%20puzzle
A three-dimensional edge-matching puzzle is a type of edge-matching puzzle or tiling puzzle involving tiling a three-dimensional area with (typically regular) polygonal pieces whose edges are distinguished with colors or patterns, in such a way that the edges of adjacent pieces match. Edge-matching puzzles are known to be NP-complete, and capable of conversion to and from equivalent jigsaw puzzles and polyomino packing puzzles. Three-dimensional edge-matching puzzles are not currently under direct U.S. patent protection, since the 1892 patent by E. L. Thurston has expired. Current examples of commercial three-dimensional edge-matching puzzles include the Dodek Duo, The Enigma, Mental Misery, and Kadon Enterprises' range of three-dimensional edge-matching puzzles. See also Edge-matching puzzle Domino tiling
https://en.wikipedia.org/wiki/The%20Man%20Who%20Wasn%27t%20There%20%281983%20film%29
The Man Who Wasn't There is a 1983 American 3-D comedy film directed by Bruce Malmuth and starring Steve Guttenberg. Plot When he accidentally takes possession of a top-secret invisibility potion while en route to his wedding, government bureaucrat Sam Cooper finds himself engulfed in a madcap free-for-all as Russians and other bad guys try to get the substance. To elude the Reds, his own State Department bosses and his livid fiancée, Cooper takes the vanishing juice himself—which only makes matters worse. Cast Steve Guttenberg as Sam Cooper Jeffrey Tambor as Boris Potemkin Art Hindle as Ted Durand Morgan Hart as Amanda Lisa Langlois as Cindy Worth William Forsythe as Pug Face Crusher Bruce Malmuth as Fireplug Crusher Ron Canada as Barker Michael Ensign as Assistant Secretary Richard Paul as Pudgy Aide Miguel Ferrer as A Waiter Production The project began when Paramount executives were inspired by the success of Friday the 13th Part III and commissioned Friday series producer Frank Mancuso Jr. to produce another 3D film. Despite not even having a script or team ready for such a project, Paramount announced an untitled 3-D film for Summer of 1983, with the production thrown together very quickly. The concept was to remake something from the Paramount film library, or some widely known subject, and add in 3-D effects. Following rejected pitches that included a 3-D remake of Rosemary's Baby, Mancuso ultimately decided to do a mixture of Foul Play and North by Northwest with added invisibility elements. Critical reception Movie historian Leonard Maltin declared the picture a "BOMB" (his lowest possible rating) and noted that "...Better writing, directing, and acting can be found at your average nursery-school pageant."
https://en.wikipedia.org/wiki/Osmoregulation%20in%20rock%20doves
The rock dove, Columba livia, has a number of special adaptations for regulating water uptake and loss. Challenges C. livia pigeons drink directly from a water source or obtain water indirectly from the food they ingest, drinking through a double-suction mechanism. The pigeon's daily diet poses many physiological challenges that it must overcome through osmoregulation. Protein intake, for example, produces an excess of toxic amine groups when protein is broken down for energy. To regulate this excess and excrete these unwanted toxins, C. livia must remove the amine groups as uric acid. Nitrogen excretion through uric acid can be considered an advantage because it does not require much water, but producing uric acid takes more energy because of its complex molecular composition. Pigeons adjust their drinking rates and food intake in parallel, and when adequate water is unavailable for excretion, food intake is limited to maintain water balance. Although this species inhabits arid environments, research attributes its success there to its strong flying capabilities, which allow it to reach the available water sources, rather than to an exceptional capacity for water conservation. C. livia kidneys, like mammalian kidneys, are capable of producing urine hyperosmotic to the plasma using the processes of filtration, reabsorption, and secretion. The medullary cones function as countercurrent units that achieve the production of hyperosmotic urine. Hyperosmotic urine can be understood in light of the laws of diffusion and osmolarity. Organ of osmoregulation Unlike a number of other bird species, which have the salt gland as the primary osmoregulatory organ, C. livia does not use its salt gland. It instead uses its kidneys to maintain homeostatic balance of ions such as sodium and potassium while preserving the quantity of water in the body. Filtration of the blood, reabsorption of ions and water, and secretion of uric acid are all components of the kidneys' process. Columba livia has two kidneys th
https://en.wikipedia.org/wiki/Burr%20puzzle
A burr puzzle is an interlocking puzzle consisting of notched sticks, combined to make one three-dimensional, usually symmetrical unit. These puzzles are traditionally made of wood, but versions made of plastic or metal can also be found. Quality burr puzzles are usually precision-made for easy sliding and accurate fitting of the pieces. In recent years the definition of "burr" has been expanding, as puzzle designers use this name for puzzles not necessarily made of stick-based pieces. History The term "burr" is first mentioned in a 1928 book by Edwin Wyatt, but the text implies that it was commonly used before. The term is attributed to the finished shape of many of these puzzles, resembling a seed burr. The origin of burr puzzles is unknown. The first known record appears in a 1698 engraving used as a frontispiece page of Chambers's Cyclopaedia. Later records can be found in German catalogs from the late 18th century and early 19th century. There are claims of the burr being a Chinese invention, like other classic puzzles such as the Tangram. In Kerala, India, these wooden puzzles are called edakoodam (ഏടാകൂടം). Six-piece burr The six-piece burr, also called "Puzzle Knot" or "Chinese Cross", is the most well-known and presumably the oldest of the burr puzzles. This is actually a family of puzzles, all sharing the same finished shape and basic shape of the pieces. The earliest US patent for a puzzle of this kind dates back to 1917. For many years, the six-piece burr was very common and popular, but was considered trite and uninteresting by enthusiasts. Most of the puzzles made and sold were very similar to one another and most of them included a "key" piece, an unnotched stick that slides easily out. In the late 1970s, however, the six-piece burr regained the attention of inventors and collectors, thanks largely to a computer analysis conducted by the mathematically trained puzzle designer Bill Cutler which was published by Martin Gardner in his Mathematical Games column
https://en.wikipedia.org/wiki/Kolakoski%20sequence
In mathematics, the Kolakoski sequence, sometimes also known as the Oldenburger–Kolakoski sequence, is an infinite sequence of symbols {1,2} that is the sequence of run lengths in its own run-length encoding. It is named after the recreational mathematician William Kolakoski (1944–97), who described it in 1965; it had previously been discussed by Rufus Oldenburger in 1939. Definition The initial terms of the Kolakoski sequence are: 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1,2,2,1,2,1,1,2,1,2,2,1,1,... Each symbol occurs in a "run" (a sequence of equal elements) of either one or two consecutive terms, and writing down the lengths of these runs gives exactly the same sequence: 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1,2,2,1,2,1,1,2,1,2,2,1,1,2,1,1,2,1,2,2,1,2,2,1,1,2,1,2,2,... 1, 2 , 2 ,1,1, 2 ,1, 2 , 2 ,1, 2 , 2 ,1,1, 2 ,1,1, 2 , 2 ,1, 2 ,1,1, 2 ,1, 2 , 2 ,1,1, 2 ,... The description of the Kolakoski sequence is therefore reversible. If K stands for "the Kolakoski sequence", description #1 logically implies description #2 (and vice versa): 1. The terms of K are generated by the runs (i.e., run-lengths) of K 2. The runs of K are generated by the terms of K Accordingly, one can say that each term of the Kolakoski sequence generates a run of one or two future terms. The first 1 of the sequence generates a run of "1", i.e. itself; the first 2 generates a run of "22", which includes itself; the second 2 generates a run of "11"; and so on. Each number in the sequence is the length of the next run to be generated, and the element to be generated alternates between 1 and 2: 1,2 (length of sequence l = 2; sum of terms s = 3) 1,2,2 (l = 3, s = 5) 1,2,2,1,1 (l = 5, s = 7) 1,2,2,1,1,2,1 (l = 7, s = 10) 1,2,2,1,1,2,1,2,2,1 (l = 10, s = 15) 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2 (l = 15, s = 23) As can be seen, the length of the sequence at each stage is equal to the sum of the terms in the previous stage. These self-generating properties, which remain if the
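The generation process described above (each term gives the length of the next run, and the run values alternate between 1 and 2) can be sketched in Python. Seeding with the three terms 1,2,2 is an implementation convenience, not part of the formal definition:

```python
def kolakoski(n):
    """Return the first n terms of the Kolakoski sequence over {1, 2}.

    Each term of the sequence is read as a run length; the values of
    successive runs alternate between 1 and 2.
    """
    seq = [1, 2, 2]
    i = 2  # index of the term whose run is generated next
    while len(seq) < n:
        next_value = 3 - seq[-1]          # alternate 1 <-> 2
        seq.extend([next_value] * seq[i])  # seq[i] is the next run length
        i += 1
    return seq[:n]

print(kolakoski(29))
# [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1]
```

Running the run-length encoding of the output back through `itertools.groupby` reproduces the sequence itself, which is exactly the defining property.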
https://en.wikipedia.org/wiki/Total%20institution
A total institution or residential institution is a place of work and residence where a great number of similarly situated people, cut off from the wider community for a considerable time, together lead an enclosed, formally administered round of life. Privacy is limited in total institutions, as all aspects of life including sleep, play, and work, are conducted in the same place. The concept is mostly associated with the work of sociologist Erving Goffman. Etymology The term is sometimes credited as having been coined and defined by Canadian sociologist Erving Goffman in his paper "On the Characteristics of Total Institutions", presented in April 1957 at the Walter Reed Institute's Symposium on Preventive and Social Psychiatry. An expanded version appeared in Donald Cressey's collection, The Prison, and was reprinted in Goffman's 1961 collection, Asylums. Fine and Manning, however, note that Goffman heard the term in lectures by Everett Hughes (likely during the late-1940s seminar, "Work and Occupations"). Regardless of whether Goffman coined the term, he can be credited with popularizing it. Typology Total institutions are divided by Goffman into five different types: institutions established to care for people felt to be both harmless and incapable: orphanages, poor houses, group homes and nursing homes. places established to care for people felt to be incapable of looking after themselves and a threat to the community, albeit an unintended one: leprosariums, mental hospitals, certain types of group homes, and tuberculosis sanitariums. institutions organised to protect the community against what are felt to be intentional dangers to it, with the welfare of the people thus sequestered not the immediate issue: concentration camps, P.O.W. camps, penitentiaries, and jails. institutions purportedly established to better pursue some worklike tasks and justifying themselves only on these instrumental grounds: colonial compounds, work camps, boarding schools, sh
https://en.wikipedia.org/wiki/Dale%20Erdahl
Dale Emmons Erdahl (November 1, 1931 – May 16, 2005) was an American businessman, farmer, and politician. Erdahl was born in Frost, Minnesota. He attended the Blue Earth County Public Schools and graduated from Blue Earth Area High School in Blue Earth, Minnesota, in 1949. He went to Augsburg University and the University of Minnesota, and received his bachelor's degree in human services from Metropolitan State University in 1984. Erdahl lived in Blue Earth, Minnesota. He worked as an insurance underwriter and was a farmer. Erdahl served in the Minnesota House of Representatives from 1971 to 1974 and was a Republican. His cousin was Arlen Erdahl, who also served in the Minnesota Legislature. He moved to Sioux Falls, South Dakota, when he retired, and died there. Notes 1931 births 2005 deaths People from Faribault County, Minnesota Politicians from Sioux Falls, South Dakota Augsburg University alumni University of Minnesota alumni Metropolitan State University alumni Insurance underwriters Businesspeople from Minnesota Farmers from Minnesota Republican Party members of the Minnesota House of Representatives 20th-century American politicians 20th-century American businesspeople
https://en.wikipedia.org/wiki/CDC%206000%20series
The CDC 6000 series is a discontinued family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600 and CDC 6700 computers, all of which were extremely fast and efficient for their time. Each is a large, solid-state, general-purpose, digital computer that performs scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there was also a time-sharing-oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series. Overview The CDC 6000 series computers are composed of four main functional devices: the central memory one or two high-speed central processors ten peripheral processors (Peripheral Processing Unit, or PPU) and a display console. The 6000 series has a distributed architecture. The family's members differ primarily by the number and kind of central processor(s): The CDC 6600 is a single CPU with 10 functional units that can operate in parallel, each working on an instruction at the same time. The CDC 6400 is a single CPU with an identical instruction set, but with a single unified arithmetic function unit that can only do one instruction at a time. The CDC 6500 is a dual-CPU system with two 6400 central processors. The CDC 6700 is also a dual-CPU system, with a 6600 and a 6400 central processor. Certain features and nomenclature had also been used in the earlier CDC 3000 series: Arithmetic was ones' complement. The name COMPASS was used by CDC for the assembly languages on both families. The name SCOPE was used for its operating system implementations on the 3000 and 6000 series. The only currently (as of 2018) running CDC 6000 series machine, a 6500, has been restored by Liv
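The ones'-complement arithmetic mentioned above can be illustrated with a short sketch. This is a generic illustration of the representation, not CDC-specific code; the 6000 series used 60-bit words, while the word size below is a parameter:

```python
def ones_complement_negate(x, bits=60):
    """Negate x in ones'-complement representation with the given word size.

    In ones' complement, -x is the bitwise complement of x, which gives
    two representations of zero: all-zeros (+0) and all-ones (-0).
    """
    mask = (1 << bits) - 1
    return ~x & mask

# Negating 5 in an 8-bit word flips every bit: 00000101 -> 11111010
print(bin(ones_complement_negate(5, bits=8)))   # 0b11111010
# Negating zero yields "negative zero", the all-ones word
print(bin(ones_complement_negate(0, bits=8)))   # 0b11111111
```

The existence of negative zero is one of the practical quirks that distinguished ones'-complement machines like the 6000 series from later two's-complement designs.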
https://en.wikipedia.org/wiki/Czech%20Brain%20Ageing%20Study
The Czech Brain Ageing Study (CBAS) is a longitudinal, observational study of aging and dementia run from two large centers in the Czech Republic, combining clinical care and clinical research. The pilot project that produced the pilot data for CBAS was established in 2005 as a longitudinal follow-up of subjects at risk of Alzheimer disease dementia (AD). A major step forward occurred in 2011 after receiving substantial funding from the European Regional Development Fund and the Czech Ministry of Health. This funding made it possible to establish the International Clinical Research Center (FNUSA-ICRC) in Brno and to synchronize the efforts in clinical and translational research between the ICRC and the Cognitive Center at the Department of Neurology of the Motol University Hospital in Prague. CBAS now recruits a large number of participants across the two sites and is the only longitudinal study of its kind in the Czech Republic. It is also on course to become the largest study of risk factors for AD in Central and Eastern Europe. Study design Participants undergo annual follow-up with clinical evaluations, multimodality brain MRI, comprehensive standardized neuropsychological testing including memory tests aimed at detecting early cognitive decline, and laboratory examination. APOE, TOMM40 and BDNF genotyping is done at baseline. In a subset of participants, CSF analysis and/or amyloid PET is performed. The majority of participants also undergo a unique translational, experimental neuropsychological protocol inspired by animal research. This protocol consists of various tasks aiming to detect early cognitive and clinical impairment in the AD spectrum. The strong translational aspect of this approach, which includes the use of the human analogue of the Morris Water Maze, is aimed at the identification of individuals at the preclinical and prodromal stage of AD. A biological sample bank (DNA, CSF and plasma) matched with the CBAS clinical data is also a part of CBAS. From
https://en.wikipedia.org/wiki/Bounded%20set%20%28topological%20vector%20space%29
In functional analysis and related areas of mathematics, a set in a topological vector space is called bounded or von Neumann bounded, if every neighborhood of the zero vector can be inflated to include the set. A set that is not bounded is called unbounded. Bounded sets are a natural way to define locally convex polar topologies on the vector spaces in a dual pair, as the polar set of a bounded set is an absolutely convex and absorbing set. The concept was first introduced by John von Neumann and Andrey Kolmogorov in 1935. Definition Suppose X is a topological vector space (TVS) over a field K. A subset B of X is called von Neumann bounded or just bounded in X if any of the following equivalent conditions are satisfied: Definition: For every neighborhood V of the origin there exists a real r > 0 such that B ⊆ sV for all scalars s satisfying |s| ≥ r. This was the definition introduced by John von Neumann in 1935. B is absorbed by every neighborhood of the origin. For every neighborhood V of the origin there exists a scalar s such that B ⊆ sV. For every neighborhood V of the origin there exists a real r > 0 such that sB ⊆ V for all scalars s satisfying |s| ≤ r. For every neighborhood V of the origin there exists a real r > 0 such that tB ⊆ V for all real 0 < t ≤ r. Any one of statements (1) through (5) above, but with the word "neighborhood" replaced by any of the following: "balanced neighborhood," "open balanced neighborhood," "closed balanced neighborhood," "open neighborhood," "closed neighborhood". E.g. statement (2) may become: B is bounded if and only if B is absorbed by every balanced neighborhood of the origin. If X is locally convex then the adjective "convex" may also be added to any of these 5 replacements. For every sequence of scalars (s_i) that converges to 0 and every sequence (b_i) in B, the sequence (s_i b_i) converges to 0 in X. This was the definition of "bounded" that Andrey Kolmogorov used in 1934, which is the same as the definition introduced by Stanisław Mazur and Władysław Orlicz in 1933 for metrizable TVS. Kolmogorov used this definition to prove that a TVS is seminor
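In the familiar special case of a normed space, von Neumann boundedness reduces to boundedness in norm. A short derivation (a standard textbook fact, stated here under the assumption that the open unit ball B_1 is taken as the inflated neighborhood):

```latex
% In a normed space (X, \|\cdot\|), the scalar multiples of the open
% unit ball B_1 form a neighborhood basis at the origin, so condition
% (3) above ("there exists a scalar s with B \subseteq s B_1") becomes:
B \text{ is von Neumann bounded}
  \iff \exists\, r > 0 : B \subseteq r B_1
  \iff \sup_{b \in B} \|b\| < \infty .
```

This is why the general TVS definition is a faithful generalization of the elementary notion of a bounded set in a normed or metric space.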
https://en.wikipedia.org/wiki/Mount%20Airy%20Forest
The Mount Airy Forest, in Cincinnati, Ohio, was established in 1911. It was one of the earliest, if not the first, urban reforestation projects in the United States. At nearly 1,500 acres, it is the largest park in Cincinnati's park system. History The originally forested land was cleared for agricultural use in the 19th century, but years of poor grazing and agricultural practices led to severe erosion and poor soil composition. As quoted in a 1914 Cincinnati Times-Star editorial, a farmer facetiously remarked that his farm (in Westwood) "was a good one when he first took it up but that since he had cleared off all the trees it had slid down the creek and was to be found somewhere in the neighborhood of New Orleans." According to the National Park Service: Established in 1911, the Mount Airy Forest covers an impressive 1459 acres and includes natural areas, planned landscapes, buildings, structures, and landscape features. The numerous hiking trails, bridle paths, walls, gardens, pedestrian bridges, and various other improvements within Mount Airy Forest reflect the ambitious park planning and development that took place in Cincinnati in the early-to-mid-20th century. Conceived as the nation's first urban reforestation project, the park has developed over the years—especially during the Depression and post-World War II period—into a park with a variety of areas, spaces and structures designed to accommodate recreational, social, and educational activities. Today it continues to offer a large expanse of protected land within the city limits where the public can enjoy the richness and diversity of nature. In the largest reforestation program undertaken by a city seen until that time, the barren land was restored to a park largely in the 1930s by the Civilian Conservation Corps (CCC). The rustic CCC structures are still standing and are listed on the National Register of Historic Places. The park now includes 700 acres of reforested hardwoods, 200 acres of forested everg
https://en.wikipedia.org/wiki/Egg%20allergy
Egg allergy is an immune hypersensitivity to proteins found in chicken eggs, and possibly goose, duck, or turkey eggs. Symptoms can be either rapid or gradual in onset. The latter can take hours to days to appear. The former may include anaphylaxis, a potentially life-threatening condition which requires treatment with epinephrine. Other presentations may include atopic dermatitis or inflammation of the esophagus. In the United States, 90% of allergic responses to foods are caused by cow's milk, eggs, wheat, shellfish, peanuts, tree nuts, fish, and soybeans. The declaration of the presence of trace amounts of allergens in foods is not mandatory in any country, with the exception of Brazil. Prevention is by avoiding eating eggs and foods that may contain eggs, such as cake or cookies. It is unclear if the early introduction of eggs to the diet of babies aged 4–6 months decreases the risk of egg allergies. Egg allergy appears mainly in children but can persist into adulthood. In the United States, it is the second most common food allergy in children after cow's milk. Most children outgrow egg allergy by the age of five, but some people remain allergic for a lifetime. In North America and Western Europe, egg allergy occurs in 0.5% to 2.5% of children under the age of five years. The majority grow out of it by school age, but for roughly one-third, the allergy persists into adulthood. Strong predictors for adult persistence are anaphylaxis, high egg-specific serum immunoglobulin E (IgE), a robust response to the skin prick test and absence of tolerance to egg-containing baked foods. Signs and symptoms Food allergies usually have an onset ranging from minutes to one or two hours. Symptoms may include: rash, hives, itching of mouth, lips, tongue, throat, eyes, skin, or other areas, swelling of lips, tongue, eyelids, or the whole face, difficulty swallowing, runny or congested nose, hoarse voice, wheezing, shortness of breath, diarrhea, abdominal pain, lightheadedness, fa
https://en.wikipedia.org/wiki/Balloon%20help
Balloon help is a help system introduced by Apple Computer in their 1991 release of System 7.0. The name referred to the way the help text was displayed, in "speech balloons", like those containing words in a comic strip. The name has since been used by many to refer to any sort of pop-up help text. The problem During the leadup to System 7, Apple studied the problem of getting help in depth. They identified a number of common questions, such as Where am I?, How do I get to...?, or worse, Why is that item "grayed out"?. In the context of computer use they identified two main types of questions users asked: What is this thing? and How do I accomplish...?. Existing help systems typically didn't provide useful information on either of these topics, and were often nothing more than the paper manual copied into an electronic form. One of the particularly thorny problems was the What is this thing? question. In an interface that often included non-standard widgets or buttons labeled with an indecipherable icon, many functions required the end user to refer to the manual. Users generally refused to do this, and ended up not using the full power of their applications since many of their functions were "hidden". It was this problem that Apple decided to attack, and after extensive testing, settled on Balloon Help as the solution. Apple's solution for How do I accomplish...? was Apple Guide, which would be added to System 7.5 in 1994. Mechanism Balloon help was activated by choosing Show Balloon Help from System 7's new Help menu (labelled with a Balloon Help icon in System 7, the Apple Guide icon in System 7.5, and the word Help in Mac OS 8). While balloon help was active, moving the mouse over an item would display help for that item. Balloon help was deactivated by choosing Hide Balloon Help from the same menu. The underlying system was based on a set of resources included in application software, holding text that would appear in the balloons. The balloon graphics
https://en.wikipedia.org/wiki/FragAttacks
FragAttacks, or fragmentation and aggregation attacks, are a group of Wi-Fi vulnerabilities discovered by security researcher Mathy Vanhoef. Since the vulnerabilities are design flaws in the Wi-Fi standard, any device released after 1997 could be vulnerable. The attack can be executed without special privileges. The attack was detailed on August 5, 2021 at Black Hat Briefings USA and later at the USENIX 30th Security Symposium, where recordings were shared publicly. The attack does not leave any trace in the network logs. Patches Vanhoef worked with the Wi-Fi Alliance to help vendors issue patches. Microsoft started issuing patches for Windows 7 through Windows 10 on May 11, 2021.
https://en.wikipedia.org/wiki/Automatic%20transmission
An automatic transmission (sometimes abbreviated AT) is a multi-speed transmission used in motor vehicles that does not require any input from the driver to change forward gears under normal driving conditions. Vehicles with internal combustion engines, unlike electric vehicles, require the engine to operate in a narrow range of rates of rotation, requiring a gearbox, operated manually or automatically, to drive the wheels over a wide range of speeds. The most common type of automatic transmission is the hydraulic automatic, which uses a planetary gearset, hydraulic controls, and a torque converter. Other types of automatic transmissions include continuously variable transmissions (CVT), automated manual transmissions (AMT), and dual-clutch transmissions (DCT). The 1904 Sturtevant "horseless carriage gearbox" is often considered to be the first true automatic transmission. The first mass-produced automatic transmission is the General Motors Hydramatic four-speed hydraulic automatic, which was introduced in 1939. Prevalence Globally, 43% of new cars produced in 2015 were manual transmissions, falling to 37% by 2020. Automatic transmissions have long been prevalent in the United States, but only started to become common in Europe much later. In Europe in 1997, only 10–12% of cars had automatic transmission. In 1957, over 80% of new cars in the United States had automatic transmission. Automatic transmission has been standard in large cars since at least 1974. By 2020, only 2.4% of new cars had manual transmission. Historically, automatic transmissions were less efficient, but lower fuel prices in the US made this less of a problem than in Europe. In the United Kingdom, a majority of new cars have had automatic transmission
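The narrow engine-speed range versus wide road-speed range trade-off described above can be made concrete with a rough calculation. The gear ratios, final-drive ratio, and tire circumference below are illustrative values, not taken from any particular vehicle:

```python
# Road speed for a given engine speed depends on the overall reduction:
#   wheel_rpm = engine_rpm / (gear_ratio * final_drive)
#   speed_kmh = wheel_rpm * tire_circumference_m * 60 / 1000

GEAR_RATIOS = [3.5, 2.0, 1.4, 1.0, 0.8]   # illustrative 5-speed gearbox
FINAL_DRIVE = 3.7                          # illustrative final-drive ratio
TIRE_CIRCUMFERENCE_M = 2.0                 # illustrative rolling circumference

def road_speed_kmh(engine_rpm, gear_ratio):
    wheel_rpm = engine_rpm / (gear_ratio * FINAL_DRIVE)
    return wheel_rpm * TIRE_CIRCUMFERENCE_M * 60 / 1000

# Holding the engine at 3000 rpm, each successive gear covers a higher
# band of road speeds, so a narrow rpm band spans a wide speed range.
for i, ratio in enumerate(GEAR_RATIOS, start=1):
    print(f"gear {i}: {road_speed_kmh(3000, ratio):.0f} km/h")
```

With these numbers, 3000 rpm corresponds to roughly 28 km/h in first gear and over 120 km/h in fifth, which is why a single fixed ratio cannot serve an internal combustion engine.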
https://en.wikipedia.org/wiki/FastCode
FastCode is an open source programming project that aims to provide optimized runtime library routines for Embarcadero Delphi and C++ Builder. This community-driven project was started in 2003 by Dennis Kjaer Christensen and has since contributed optimized functionality to the 32-bit Delphi runtime library (RTL). Organized as a competition divided into challenges, FastCode focuses on optimizing specific functions against multiple targets. The project offers benchmarking tools and validation processes for each function contribution. Contributions are scored, with points awarded based on performance against the targets. Embarcadero recognizes and incorporates the code created by the FastCode team into their Delphi codebase. Most participants in this project are assembler developers who utilize processor-specific code. The list of challenges tackled by the FastCode project is extensive; it covers diverse areas ranging from string manipulation functions like PosEx or CompareText to mathematical operations such as Power or Int64Mul. Structure The project is organized as a competition divided into challenges. Each challenge takes one function and optimizes it against a number of targets. The project provides tools for benchmarking and validating each function contribution. One point is given per contribution (at most one function per target is awarded points) and ten points are awarded for a target winner. A list of all contributors and their scores is maintained, and at the end of each year, until 2008, a winner was celebrated. Borland, CodeGear and Embarcadero, the owners of Delphi and C++ Builder, have historically sponsored prizes. The majority of participants in the competition are assembler developers who often utilize processor-specific 32-bit code and extra instruction sets, such as MMX, SSE, SSE2, SSE3, SSSE3 and SSE4. The project enjoys the support of Embarcadero, which recognizes the contributions of the FastCode team and incorporates their code into the co
https://en.wikipedia.org/wiki/Excess%20noise%20ratio
In electronics, excess noise ratio is a characteristic of a noise generator, such as a "noise diode", that is used to measure the noise performance of amplifiers. The Y-factor method is a common measurement technique for this purpose. By using a noise diode, the output noise of an amplifier is measured at two input noise levels, and from the ratio of the two measurements (referred to as Y) the noise figure of the amplifier can be determined without having to measure the amplifier gain. Background Any amplifier generates noise. In a radio receiver the first stage dominates the overall noise of the receiver, and in most cases thermal, or Johnson, noise determines the overall noise performance of a receiver. As radio signals decrease in size, the noise at the input of the receiver will determine a lower threshold of what can be received. The level of noise is determined by calculating the noise power in a 50 ohm resistor at the input of the receiver as follows: P = kTB where: k = Boltzmann's constant = 1.38 × 10−23 J/K T = temperature in kelvins B = bandwidth in hertz Thus, receivers with a narrow bandwidth have a higher sensitivity than receivers with a large bandwidth, and input noise can be decreased by cooling the receiver input stage. A noise diode is a device which has a defined excess noise ratio (ENR). When the diode is off (unpowered) the noise from it will be the thermal noise defined by the above formula. The bandwidth to be used is the bandwidth of the receiver. When the diode is on (powered) the noise from it is increased above the thermal noise by the diode's excess noise ratio. This figure could be 6 dB for testing an amplifier with 40 dB gain, or 16 dB for an amplifier with less gain or higher noise. To determine the noise figure of an amplifier, one places a noise diode at the input of the amplifier and determines the output noise ratio Y with the diode switched on and off. Knowing both Y and the ENR, one can then determine the amount of noise contributed by the amplifier and henc
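The Y-factor calculation described above can be sketched numerically. This is the generic textbook relation F = ENR / (Y − 1), stated under the assumption that the diode's "off" (cold) temperature equals the 290 K reference temperature; it is not a vendor-specific procedure:

```python
import math

T0 = 290.0  # reference temperature in kelvins

def noise_figure_db(enr_db, y_factor):
    """Noise figure of the device under test via the Y-factor method.

    enr_db   : excess noise ratio of the noise diode, in dB
    y_factor : linear ratio of output noise power with the diode on
               versus off; assumes the diode's "off" temperature
               equals the 290 K reference.
    """
    enr_lin = 10 ** (enr_db / 10)
    f = enr_lin / (y_factor - 1)      # noise factor (linear)
    return 10 * math.log10(f)

# Example: a 15 dB ENR source and a measured Y factor of 10 (10 dB)
# give a noise factor of 31.62 / 9, i.e. about 5.46 dB.
print(round(noise_figure_db(15.0, 10.0), 2))  # 5.46
```

Note that the amplifier gain never appears in the formula, which is exactly the advantage of the Y-factor method mentioned in the lead.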
https://en.wikipedia.org/wiki/Ectopic%20hormone
An ectopic hormone is a hormone produced by a tumor derived from tissue that is not typically associated with that hormone's production. By contrast, the term entopic refers to hormones produced by tumors arising from tissue that normally produces that hormone. The excess hormone secretion is detrimental to normal body homeostasis. This hormone production typically results in a set of signs and symptoms called a paraneoplastic syndrome. Some clinical syndromes caused by ectopic hormone production include: