https://en.wikipedia.org/wiki/3G%20MIMO
3G MIMO describes MIMO techniques which have been considered as 3G standard techniques. MIMO, as the state of the art of intelligent antenna (IA), improves the performance of radio systems by embedding electronic intelligence into the spatial processing unit. Spatial processing includes spatial precoding at the transmitter and spatial postcoding at the receiver, which are dual to each other from an information- and signal-processing theoretic point of view. Intelligent antenna is an umbrella term covering smart antennas, multiple antennas (MIMO), self-tracking directional antennas, cooperative virtual antennas and so on. Technology Spatial precoding in intelligent antennas includes spatial beamforming and spatial coding. In wireless communications, spatial precoding has been developed for higher reliability, higher rate and lower interference, as shown in the following table. Summary of 3G MIMO The table summarizes the history of the MIMO techniques that were candidates for 3G standards. The table also contains a section on the future, but its contents are not filled out in detail since the future cannot be predicted precisely. IA in ad hoc networking IA technology enables client terminals, which have either multiple antennas or a self-tracking directional antenna, to communicate with each other with as high a signal-to-interference-and-noise ratio (SINR) as possible. Assume that there is a source terminal, a destination terminal, and some candidate interference terminals. Compared to conventional approaches, an advanced IA-based terminal will perform spatial precoding (spatial beamforming and/or spatial coding) not only to enhance the signal power at the destination terminal but also to diminish the interfering power at the interference terminals. Like a human operator, the advanced IA terminal is designed with the knowledge that causing high interference to other terminals will eventually degrade the performance of the associated wireless network. Principal issues of research The following items list
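To make the precoding idea concrete, here is a minimal numerical sketch (not from the article) of a zero-forcing flavor of the spatial beamforming described above: transmit weights are chosen to boost power toward the destination's channel while nulling the channel of an interference victim. The 4-antenna setup and random channel vectors are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a 4-antenna transmitter chooses a precoding vector w that
# keeps power on the destination channel h_d while forcing zero power onto
# the interference-victim channel h_i (zero-forcing beamforming).
rng = np.random.default_rng(0)
h_d = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # hypothetical channels
h_i = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Project h_d onto the orthogonal complement of h_i, then normalize.
proj = (np.vdot(h_i, h_d) / np.vdot(h_i, h_i)) * h_i
w = h_d - proj
w /= np.linalg.norm(w)

print("power at destination:", abs(np.vdot(h_d, w)) ** 2)
print("power at interference victim:", abs(np.vdot(h_i, w)) ** 2)  # ~0
```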
https://en.wikipedia.org/wiki/Wallace%20Arthur
Wallace Arthur (born 30 March 1952) is an evolutionary biologist and science writer. He is Emeritus Professor of Zoology at the University of Galway. His most recent book is Understanding Life in the Universe, published by Cambridge University Press, which focuses on the likely extent (how many planets?) and nature (how much like us?) of extraterrestrial life. He was one of the founding editors of the journal Evolution & Development, serving as an editor for nearly 20 years. He has held visiting positions at Harvard University, Darwin College Cambridge, and the University of Warmia and Mazury in Olsztyn, Poland. Early life and education Wallace Arthur was born in Belfast, Northern Ireland, in 1952. He attended Friends School Lisburn and Campbell College Belfast. He received a BSc in biology from the University of Ulster in 1973 and a PhD in evolutionary biology from the University of Nottingham in 1977. Scientific work Arthur describes himself as "a bit of a maverick" who likes "making connections across disciplinary boundaries". His early work was at the interface between evolution and ecology; his later work has been at the interface between evolution and development, or 'evo-devo'. His main contributions have been on the origin of animal body plans, the role of developmental bias in evolution, and the evolution of arthropod segmentation. His most recent book explores the interface between biology and astronomy, with two key themes: the likelihood of life having evolved on multiple exoplanets, and the nature of that life being probably not too different to life on Earth. Arthur is a proponent of a more comprehensive evolutionary synthesis that takes into account progress in the field of evo-devo. Books Mechanisms of Morphological Evolution: 1984, Wiley Theories of Life: Darwin, Mendel and Beyond: 1987, Penguin The Niche in Competition and Evolution: 1987, Wiley A Theory of the Evolution of Development: 1988, Wiley The Green Machine: Ecology and the Balance of Nature
https://en.wikipedia.org/wiki/Eduard%20Ritter%20von%20Weber
Eduard Ritter von Weber (May 12, 1870 in Munich – June 20, 1934 in Würzburg) was a German mathematician. After his schooling, von Weber pursued studies in mathematics in Munich, Göttingen, and Paris from 1888 to 1894. In 1893 he was awarded the Ph.D. from the University of Munich (his dissertation being titled Studien zur Theorie der infinitesimalen Transformationen; Gustav C. Bauer, advisor). He completed his habilitation at the University of Munich in 1895 and became a full professor there in 1903. He moved to the University of Würzburg in 1907. Von Weber concerned himself particularly with partial differential equations, in particular the Pfaff problem, and wrote the article "Partial Differential Equations" in the Enzyklopädie der mathematischen Wissenschaften (Encyclopedia of the Mathematical Sciences). Von Weber had versatile interests and spoke numerous languages, including Russian, Portuguese, Spanish, Norwegian, Persian, Arabic, Hebrew, and Irish.
https://en.wikipedia.org/wiki/Kismet%20%28software%29
Kismet is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic. The program runs under Linux, FreeBSD, NetBSD, OpenBSD, and macOS. The client can also run on Microsoft Windows, although, aside from external drones (see below), only one wireless card is supported as a packet source. Distributed under the GNU General Public License, Kismet is free software. Features Kismet differs from other wireless network detectors in working passively. Namely, without sending any loggable packets, it is able to detect the presence of both wireless access points and wireless clients, and to associate them with each other. It is also the most widely used and up-to-date open-source wireless monitoring tool. Kismet also includes basic wireless IDS features such as detecting active wireless sniffing programs including NetStumbler, as well as a number of wireless network attacks. Kismet features the ability to log all sniffed packets and save them in a tcpdump/Wireshark- or Airsnort-compatible file format. Kismet can also capture "Per-Packet Information" headers. Kismet also features the ability to detect default or "not configured" networks, probe requests, and determine what level of wireless encryption is used on a given access point. In order to find as many networks as possible, Kismet supports channel hopping. This means that it constantly changes from channel to channel non-sequentially, in a user-defined sequence with a default value that leaves big holes between channels (for example, 1-6-11-2-7-12-3-8-13-4-9-14-5-10). The advantage of this method is that it will capture more packets because adjacent channels overlap. Kismet also supports logging of the geographical coordinates of the network if the input from a GPS receiver is additionally available. Server / Drone / Client
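The default hop pattern quoted above can be reproduced by stepping through the 14 channels of the 2.4 GHz band with a stride of 5. This little sketch only illustrates the pattern; it is not Kismet's actual implementation or configuration syntax.

```python
# Generate the non-sequential hop order 1-6-11-2-7-12-3-8-13-4-9-14-5-10
# by stepping through 14 channels with a stride of 5 (wrapping around).
def hop_sequence(num_channels=14, stride=5, start=1):
    return [(start - 1 + stride * i) % num_channels + 1 for i in range(num_channels)]

print(hop_sequence())  # [1, 6, 11, 2, 7, 12, 3, 8, 13, 4, 9, 14, 5, 10]
```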
https://en.wikipedia.org/wiki/Q-LAN
Q-LAN is the audio-over-IP networking technology component of the Q-Sys platform from QSC Audio Products.
https://en.wikipedia.org/wiki/Moving%20magnet%20actuator
A moving magnet actuator is a type of electromagnetic linear actuator. It typically consists of an arrangement of a mobile permanent magnet and a fixed coil, arranged so that currents in the coil generate a pair of equal and opposite forces between the coil and magnet. A voice coil actuator, also called a voice coil motor (VCM), is an electromagnetic linear actuator where the magnet is fixed and the coil is mobile. In this configuration the coil is commonly called a voice coil. See also Tubular linear motor
https://en.wikipedia.org/wiki/TB11Cs3H1%20snoRNA
TB11Cs3H1 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the sites of modification of uridines to pseudouridines in substrate RNAs. It is known as a small nucleolar RNA (snoRNA), so named because of its cellular localization in the nucleolus of the eukaryotic cell. TB11Cs3H1 is predicted to guide the pseudouridylation of the LSU3 ribosomal RNA (rRNA) at residue Ψ1308.
https://en.wikipedia.org/wiki/University%20of%20Kentucky%20Research%20and%20Education%20Center%20Botanical%20Garden
The University of Kentucky Research and Education Center Botanical Garden, also known as the UK REC Botanical Garden, is a research farm and botanical garden for the University of Kentucky in Princeton, Kentucky. The University's Agricultural Experiment Station was established in 1885, with the West Kentucky Substation at Princeton founded in 1925. Today the Experiment Station Farm consists of almost 1,300 acres (520 hectares) where crops such as corn, wheat, soybeans, tobacco, fruits, vegetables and ornamentals are studied. The Princeton site also includes a 10-acre (40,000 m²) orchard/vineyard, plus 2 acres (8,000 m²) of grapes, and 1.5 acres (6,000 m²) for research in small fruit trees and ornamentals. See also List of botanical gardens in the United States Botanical gardens in Kentucky Botanical research institutes Research institutes in Kentucky University of Kentucky Protected areas of Caldwell County, Kentucky Princeton, Kentucky Education in Caldwell County, Kentucky
https://en.wikipedia.org/wiki/Amaranthus%20hybridus
Amaranthus hybridus, commonly called green amaranth, slim amaranth, smooth amaranth, smooth pigweed, or red amaranth, is a species of annual flowering plant. It is a weedy species now found over much of North America and introduced into Europe and Eurasia. Description Amaranthus hybridus grows from a short taproot and can be up to 2.5 m in height. It is a glabrous or glabrescent plant. Distribution Amaranthus hybridus was originally a pioneer plant in eastern North America. It has been reported to have been found in every state except Wyoming, Utah, and Alaska. It is also found in many provinces of Canada, and in parts of Mexico, the West Indies, Central America, and South America. It has been naturalized in many places with warmer climates. It grows in many different places, including disturbed habitats. Taxonomy It is extremely variable, and many other Amaranthus species are believed to be natural hybridizations or to derive from A. hybridus. As a weed Although easily controlled and not particularly competitive, it is recognized as a harmful weed of North American crops. Uses The seeds and cooked leaves are edible. The plant was used for food and medicine by several Native American groups and in traditional African medicine. It is among the species consumed as quelite quintonilli in Mexican food markets.
https://en.wikipedia.org/wiki/Artificially%20Expanded%20Genetic%20Information%20System
Artificially Expanded Genetic Information System (AEGIS) is a synthetic DNA analog experiment that uses some unnatural base pairs from the laboratories of the Foundation for Applied Molecular Evolution in Gainesville, Florida. AEGIS is a NASA-funded project to try to understand how extraterrestrial life may have developed. The system uses twelve different nucleobases in its genetic code. These include the four canonical nucleobases found in DNA (adenine, cytosine, guanine and thymine) plus eight synthetic nucleobases. AEGIS includes the S:B, Z:P, V:J and K:X base pairs. See also Abiogenesis Astrobiology Hachimoji DNA xDNA Hypothetical types of biochemistry Xeno nucleic acid
https://en.wikipedia.org/wiki/Parabolic%20geometry%20%28differential%20geometry%29
In differential geometry and the study of Lie groups, a parabolic geometry is a homogeneous space G/P which is the quotient of a semisimple Lie group G by a parabolic subgroup P. More generally, the curved analog of a parabolic geometry in this sense is also called a parabolic geometry: any geometry that is modeled on such a space by means of a Cartan connection. Examples The projective space Pn is an example. It is the homogeneous space PGL(n+1)/H where H is the isotropy group of a line. In this geometrical space, the notion of a straight line is meaningful, but there is no preferred ("affine") parameter along the lines. The curved analog of projective space is a manifold in which the notion of a geodesic makes sense, but for which there are no preferred parametrizations on those geodesics. A projective connection is the relevant Cartan connection that gives a means for describing a projective geometry by gluing copies of the projective space to the tangent spaces of the base manifold. Broadly speaking, projective geometry refers to the study of manifolds with this kind of connection. Another example is the conformal sphere. Topologically, it is the n-sphere, but there is no notion of length defined on it, just of angle between curves. Equivalently, this geometry is described as an equivalence class of Riemannian metrics on the sphere (called a conformal class). The group of transformations that preserve angles on the sphere is the Lorentz group O(n+1,1), and so Sn = O(n+1,1)/P. Conformal geometry is, more broadly, the study of manifolds with a conformal equivalence class of Riemannian metrics, i.e., manifolds modeled on the conformal sphere. Here the associated Cartan connection is the conformal connection. Other examples include: CR geometry, the study of manifolds modeled on a real hyperquadric in complex projective space, the quotient of the unitary group of a Hermitian form of signature (n+1,1) by the stabilizer of an isotropic line (see CR manifold); contact projective geometry, the study of manifolds modeled on an odd-dimensional projective space, the quotient of the symplectic group by the subgroup stabilizing a line.
https://en.wikipedia.org/wiki/Progress%20testing
Progress tests are longitudinal, feedback-oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually two to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. The resulting scores thus provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. Progress tests are well established and increasingly used in both undergraduate and postgraduate medical education. They are used formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medical schools
https://en.wikipedia.org/wiki/CGMS-A
Copy Generation Management System – Analog (CGMS-A) is a copy protection mechanism for analog television signals. It consists of a waveform inserted into the non-picture vertical blanking interval (VBI) of an analogue video signal. If a compatible recording device (for example, a DVD recorder) detects this waveform, it may block or restrict recording of the video content. It is not the same as the broadcast flag, which is designed for use in digital television signals, although the concept is the same. There is a digital form of CGMS specified as CGMS-D, which is required by the DTCP ("5C") protection standard. History CGMS-A has been in existence since 1995, and has been standardized by various organizations including the IEC and EIA/CEA. It is used in devices such as PVRs/DVRs, DVD players and recorders, D-VHS, and Blu-ray recorders, as well as in certain television broadcasts. More recent TiVo firmware releases comply with CGMS-A signals. Applications Implementation of CGMS-A is required for certain applications by the DVD CCA license. D-VHS and some DVD recorders comply with the CGMS-A signal on analog inputs. The technology requires minimal signal processing. Where the source signal is analogue (e.g. VHS, analogue broadcast), the CGMS-A signalling may be present in that source. Where the source signal is digital (e.g. DVD, digital broadcast), the Copy Control Information (CCI) is carried in metadata in the digital transport or program stream, and a compliant hardware device (e.g. a DVD player) will read that data and encode it into the analogue video signal generated within the device itself. There is no blanket legal requirement for devices which record video to detect or act upon the CGMS-A information. For example, the DMCA "does not require manufacturers of consumer electronics, telecommunications or computing equipment to design their products affirmatively to respond to any particular technological measure." Standardization CGMS-A is standardized through publications of bodies including the IEC and EIA/CEA
https://en.wikipedia.org/wiki/List%20of%20works%20featuring%20killer%20toys
Killer toys are fictional characters based on toys, dolls or puppets that come alive and commit violent or scary acts. Reasons for these actions have included possession by demons, devils, monsters, ghosts, supernatural creatures, dark magic, and malevolent or malfunctioning technology. List of films The films that feature killer toys are listed as follows: In television Evil dolls are antagonists in episodes of The Twilight Zone, both the 1959–1964 series and the 2002 version: 1962: "The Dummy" (3.33) with a ventriloquist dummy that attempts to exact revenge when he is replaced 1963: "Living Doll" (5.6) features Talky Tina (voiced by June Foray), a doll belonging to the stepdaughter of Erich Streator (played by Telly Savalas) 1964: "Caesar and Me" (5.28) features a ventriloquist dummy that goads his owner into committing robberies and deserts him when the police come for him 2002: "The Collection" features a young girl's strange collection of dolls which were made from her past babysitters The theme of evil toys has also been used in Doctor Who episodes: 1966: "The Celestial Toymaker" 2011: "Night Terrors" 1986: In the Smurfs episode "Gargamel's Dummy", the series' antagonist Gargamel casts an evil spell on a ventriloquist's dummy Jokey Smurf had created for a talent show, hoping to use it to help him destroy the Smurf village and ultimately, the Smurfs. The dummy was a parody of Gargamel, and the real Gargamel, angered upon learning of it, casts the spell to begin exacting his revenge. 1992: In The Simpsons episode "Treehouse of Horror III", the segment "Clown Without Pity" features a Krusty doll that tries to kill Homer. The segment borrows elements from the Twilight Zone episode "Living Doll", the Child's Play films, Gremlins, and the 1975 TV film Trilogy of Terror segment "Amelia" about a killer Zuni fetish doll, as well as its 1996 cinematic sequel Trilogy of Terror II segment "He Who Kills", which are both in turn adaptations of Richard Matheson's 1969 short story
https://en.wikipedia.org/wiki/Not%20Another%20Completely%20Heuristic%20Operating%20System
Not Another Completely Heuristic Operating System, or Nachos, is instructional software for teaching undergraduate, and potentially graduate-level, operating systems courses. It was developed at the University of California, Berkeley, designed by Thomas Anderson, and is used by numerous schools around the world. Originally written in C++ for MIPS, Nachos runs as a user process on a host operating system. A MIPS simulator executes the code for any user programs running on top of the Nachos operating system. Ports of the Nachos code exist for a variety of architectures. In addition to the Nachos code, a number of assignments are provided with the Nachos system. The goal of Nachos is to introduce students to concepts in operating system design and implementation by requiring them to implement significant pieces of functionality within the Nachos system. In Nachos's case, "operating system simulator" simply means that you can run an OS (a guest OS) on top of another one (the host OS), similar to Bochs/VMware. It features emulation for: A CPU (a MIPS CPU) A hard drive An interrupt controller, timer, and miscellaneous other components, which are there to run the Nachos user-space applications. That means that you can write programs for Nachos, compile them with a real compiler (an old gcc compiler that produces code for MIPS) and run them. The Nachos kernel, in contrast, is compiled to the platform of the host OS and thus runs natively on the host OS's CPU. Nachos version 3.4 has been the stable, commonly used version of Nachos for many years. Nachos version 4.0 has existed as a beta since approximately 1996. Implementation Nachos has various modules implementing the functionality of a basic operating system. The wrapper functions for various system calls of the OS kernel are generally implemented in a manner similar to that of the UNIX system calls. Various parts of the OS are instantiated as objects using the native code. For example, a class Machine is used as the master class
https://en.wikipedia.org/wiki/Curtido
Curtido () is a type of lightly fermented cabbage relish. It is typical in Salvadoran cuisine and that of other Central American countries, and is usually made with cabbage, onions, carrots, oregano, and sometimes lime juice; it resembles sauerkraut, kimchi, or tart coleslaw. It is commonly served alongside pupusas, the national delicacy. Fellow Central American country Belize has a similar recipe called "curtido" by its Spanish speakers; however, it is a spicy, fermented relish made with onions, habaneros, and vinegar. It is used to top salbutes, garnaches, and other common dishes in Belizean cuisine. See also Encurtido – a pickled vegetable appetizer, side dish and condiment in the Mesoamerican region
https://en.wikipedia.org/wiki/Sliding%20%28motion%29
Sliding is a type of motion between two surfaces in contact. This can be contrasted to rolling motion. Both types of motion may occur in bearings. The relative motion or tendency toward such motion between two surfaces is resisted by friction. Friction may damage or "wear" the surfaces in contact. However, wear can be reduced by lubrication. The science and technology of friction, lubrication, and wear is known as tribology. Sliding may occur between two objects of arbitrary shape, whereas rolling friction is the frictional force associated with the rotational movement of a somewhat disclike or other circular object along a surface. Generally, the frictional force of rolling friction is less than that associated with sliding kinetic friction. Typical values for the coefficient of rolling friction are less than those for sliding friction. Correspondingly, sliding friction typically produces greater sound and thermal by-products. One of the most common examples of sliding friction is the movement of braking motor vehicle tires on a roadway, a process which generates considerable heat and sound, and is typically taken into account in assessing the magnitude of roadway noise pollution. Sliding friction Sliding friction (also called kinetic friction) is a contact force that resists the sliding motion of two objects or an object and a surface. Sliding friction is almost always less than static friction; this is why it is easier to keep an object moving than to get it to begin moving from a rest position. The force of kinetic friction is given by Fk = μkN, where Fk is the force of kinetic friction, μk is the coefficient of kinetic friction, and N is the normal force. Examples of sliding friction Sledding Pushing an object across a surface Rubbing one's hands together (the friction force generates heat) A car sliding on ice A car skidding as it turns a corner Opening a window Almost any motion where there is contact between an object and a surface Falling down a bowling lane
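A quick worked example of the kinetic-friction formula, with illustrative numbers that are not from the article:

```latex
% A 20 kg crate pushed across a floor with \mu_k = 0.4:
\[
F_k = \mu_k N = \mu_k m g
    = 0.4 \times (20\,\mathrm{kg}) \times (9.8\,\mathrm{m/s^2})
    = 78.4\,\mathrm{N}.
\]
```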
https://en.wikipedia.org/wiki/Q%20%28number%20format%29
The Q notation is a way to specify the parameters of a binary fixed-point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed-point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part. A number of other notations have been used for the same purpose. Definition Texas Instruments version The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits. By default, the notation describes a signed binary fixed-point format, with the unscaled integer being stored in two's complement format, used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus the total number w of bits used is 1 + m + n. For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2^−12. In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1.0 (exclusive). The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored. Thus Q12 means a signed integer with any number of bits, that is implicitly multiplied by 2^−12. The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16−1)/2^15 = +1.999969482421875. ARM version A variant of the Q notation has been in use by ARM.
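The Q3.12 example above can be made concrete with a small sketch. The helper names are hypothetical, and the rounding and overflow policies are illustrative choices rather than part of the TI definition.

```python
# Sketch of TI-style Q3.12: a 16-bit signed (two's complement) integer
# implicitly scaled by 2**-12. Rounding/overflow handling is illustrative.
def to_q(value, m=3, n=12):
    word = 1 + m + n                      # sign bit + integer bits + fraction bits
    raw = round(value * (1 << n))         # scale by 2**n and round to an integer
    lo, hi = -(1 << (word - 1)), (1 << (word - 1)) - 1
    if not lo <= raw <= hi:
        raise OverflowError(f"{value} does not fit in Q{m}.{n}")
    return raw & ((1 << word) - 1)        # two's complement bit pattern

def from_q(raw, m=3, n=12):
    word = 1 + m + n
    if raw >= 1 << (word - 1):            # undo two's complement for negatives
        raw -= 1 << word
    return raw / (1 << n)

x = to_q(-2.718)
print(hex(x), from_q(x))   # Q3.12 spans [-8.0, 8.0 - 2**-12] in steps of 2**-12
```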
https://en.wikipedia.org/wiki/Variable%20%28computer%20science%29
In computer programming, a variable is an abstract storage location paired with an associated symbolic name, which contains some known or unknown quantity of data or object referred to as a value; or, in simpler terms, a variable is a named container for a particular set of bits or type of data (like integer, float, string, etc.). A variable can eventually be associated with or identified by a memory address. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computer source code can be bound to a value during run time, and the value of the variable may thus change during the course of program execution. Variables in programming may not directly correspond to the concept of variables in mathematics. The latter is abstract, having no reference to a physical object such as a storage location. The value of a computing variable is not necessarily part of an equation or formula as in mathematics. Variables in computer programming are frequently given long names to make them relatively descriptive of their use, whereas variables in mathematics often have terse, one- or two-character names for brevity in transcription and manipulation. A variable's storage location may be referenced by several different identifiers, a situation known as aliasing. Assigning a value to the variable using one of the identifiers will change the value that can be accessed through the other identifiers. Compilers have to replace variables' symbolic names with the actual locations of the data. While a variable's name, type, and location often remain fixed, the data stored in the location may be changed during program execution. Actions on a variable In imperative programming languages, values can generally be accessed or changed at any time. In pure functional and logic languages, variables are bound to expressions and keep a single value during their entire lifetime due to the requirements of referential transparency
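A short illustration of the aliasing behavior just described, using two Python names bound to one shared storage location:

```python
# Two identifiers referencing the same storage: mutating through one name
# is visible through the other, exactly as the aliasing paragraph describes.
a = [1, 2, 3]
b = a                # b aliases a: both names refer to the same list object
b.append(4)
print(a)             # [1, 2, 3, 4] - the change shows through the other name
print(a is b)        # True: one storage location, two identifiers
```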
https://en.wikipedia.org/wiki/Tribenuron
Tribenuron in the form of tribenuron-methyl is a sulfonylurea herbicide. Its mode of action is the inhibition of acetolactate synthase, group 2 of the Herbicide Resistance Action Committee's classification scheme. Chemistry In the 1970s, chemists at DuPont worked extensively on sulfonylurea herbicides, following the invention of this class of herbicides by George Levitt which had led to the commercialisation of chlorsulfuron. Tribenuron (the carboxylic acid) and its methyl ester were first disclosed in general terms in one of Levitt's patents and subsequently the ester was subject to further patenting and selected for development under the code name DPX L5300. In the final step of its synthesis, 2-methoxycarbonylbenzenesulfonyl isocyanate was condensed with 2-methylamino-4-methoxy-6-methyl-1,3,5-triazine to form the sulfonylurea product. Mode of action Tribenuron is an herbicide that acts as an acetolactate synthase inhibitor. For the purposes of herbicide resistance management, the Herbicide Resistance Action Committee has placed it in group 2 (legacy HRAC Group B). Applications Tribenuron has a broad spectrum of activity on commercially important broadleaf weeds and grasses but at the recommended use rate it is safe to important crops such as wheat. When introduced by DuPont, its recommended application rate was . The estimated use in US agriculture is mapped by the US Geological Survey and shows that from 1992 to 2018, up to were applied each year. The compound is used mainly in wheat but also in pasture. Physicochemistry In a clay-water suspension, tribenuron has increased sorption with decreasing pH and even more so with suspended load. Resistant crops A tribenuron-resistance transformation has been achieved in watermelon and validated by survival of the als mutants but not the controls, under tribenuron treatment. Two oilseed type sunflower cultivars have been produced by USDA-ARS by conventional breeding.
https://en.wikipedia.org/wiki/Totally%20bounded%20space
In topology and related branches of mathematics, total-boundedness is a generalization of compactness for circumstances in which a set is not necessarily closed. A totally bounded set can be covered by finitely many subsets of every fixed “size” (where the meaning of “size” depends on the structure of the ambient space). The term precompact (or pre-compact) is sometimes used with the same meaning, but precompact is also used to mean relatively compact. These definitions coincide for subsets of a complete metric space, but not in general. In metric spaces A metric space M is totally bounded if and only if for every real number ε > 0, there exists a finite collection of open balls of radius ε whose centers lie in M and whose union contains M. Equivalently, the metric space M is totally bounded if and only if for every ε > 0, there exists a finite cover such that the radius of each element of the cover is at most ε. This is equivalent to the existence of a finite ε-net. A metric space is totally bounded if and only if every sequence admits a Cauchy subsequence; in complete metric spaces, a set is compact if and only if it is closed and totally bounded. Each totally bounded space is bounded (as the union of finitely many bounded sets is bounded). The converse is true for subsets of Euclidean space (with the subspace topology), but not in general. For example, an infinite set equipped with the discrete metric is bounded but not totally bounded: every ball of radius 1/2 or less is a singleton, and no finite union of singletons can cover an infinite set. Uniform (topological) spaces A metric appears in the definition of total boundedness only to ensure that each element of the finite cover is of comparable size, and this notion can be weakened to that of a uniform structure. A subset S of a uniform space X is totally bounded if and only if, for any entourage E, there exists a finite cover of S by subsets of X each of whose Cartesian squares is a subset of E. (In other words, the entourage E replaces the ε of the metric definition.)
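In symbols, the metric-space definition reads as follows; the worked example with the open interval (0, 1) is our own illustration, not from the article:

```latex
% Total boundedness of a metric space (M, d):
\[
M \text{ is totally bounded} \iff
\forall \varepsilon > 0 \;\exists\, x_1, \dots, x_k \in M :\;
M \subseteq \bigcup_{i=1}^{k} B(x_i, \varepsilon).
\]
% Example: (0,1) is totally bounded, since for any eps > 0 the finitely
% many balls centered at eps, 2*eps, 3*eps, ... (while < 1) cover it.
```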
https://en.wikipedia.org/wiki/Biclique%20attack
A biclique attack is a variant of the meet-in-the-middle (MITM) method of cryptanalysis. It utilizes a biclique structure to extend the number of rounds that can be attacked by the MITM method. Since biclique cryptanalysis is based on MITM attacks, it is applicable to both block ciphers and (iterated) hash functions. Biclique attacks are known for having weakened both full AES and full IDEA, though only with slight advantage over brute force. It has also been applied to the KASUMI cipher and to the preimage resistance of the Skein-512 and SHA-2 hash functions. The biclique attack is still the best publicly known single-key attack on AES. The computational complexity of the attack is 2^126.1, 2^189.7 and 2^254.4 for AES128, AES192 and AES256, respectively. It is the only publicly known single-key attack on AES that attacks the full number of rounds. Previous attacks have attacked round-reduced variants (typically variants reduced to 7 or 8 rounds). As the computational complexity of the attack is 2^126.1, it is a theoretical attack, which means the security of AES has not been broken, and the use of AES remains relatively secure. The biclique attack is nevertheless an interesting attack, which suggests a new approach to performing cryptanalysis on block ciphers. The attack has also rendered more information about AES, as it has brought into question the safety-margin in the number of rounds used therein. History The original MITM attack was first suggested by Diffie and Hellman in 1977, when they discussed the cryptanalytic properties of DES. They argued that the key-size was too small, and that reapplying DES multiple times with different keys could be a solution to the key-size; however, they advised against using double-DES and suggested triple-DES as a minimum, due to MITM attacks (MITM attacks can easily be applied to double-DES to reduce the security from 2^112 to just 2^57, since one can independently bruteforce the first and the second DES-encryption if they have the plain- and ciphertext). Since
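The double-DES observation in the history paragraph is easy to demonstrate on a toy scale. The sketch below uses a deliberately trivial 8-bit "cipher" (not DES or AES) with two 8-bit keys: the meet-in-the-middle table brings the work down from 2^16 double-key trials to roughly 2 × 2^8 single-key evaluations, at the cost of false-positive key pairs that a second plaintext/ciphertext pair would eliminate.

```python
# Toy meet-in-the-middle against "double encryption" with two 8-bit keys.
def enc(k, m):                            # trivial toy round: XOR then rotate left
    x = (m ^ k) & 0xFF
    return ((x << 1) | (x >> 7)) & 0xFF

def dec(k, c):                            # inverse: rotate right then XOR
    x = ((c >> 1) | (c << 7)) & 0xFF
    return (x ^ k) & 0xFF

k1, k2 = 0x3A, 0xC5                       # the secret key pair
m = 0x42
c = enc(k2, enc(k1, m))                   # double encryption of one known plaintext

# Tabulate the forward half-encryptions, then match the backward half.
forward = {}
for g1 in range(256):
    forward.setdefault(enc(g1, m), []).append(g1)
candidates = [(g1, g2) for g2 in range(256)
              for g1 in forward.get(dec(g2, c), [])]
print((k1, k2) in candidates, len(candidates))   # True, plus false positives
```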
https://en.wikipedia.org/wiki/Fixed-asset%20turnover
Fixed-asset turnover is the ratio of sales (on the profit and loss account) to the value of fixed assets (on the balance sheet). It indicates how well the business is using its fixed assets to generate sales. Generally speaking, the higher the ratio, the better, because a high ratio indicates the business has less money tied up in fixed assets for each unit of currency of sales revenue. A declining ratio may indicate that the business is over-invested in plant, equipment, or other fixed assets. In A.A.T. assessments this financial measure is calculated in two different ways. 1. Total Asset Turnover Ratio = Revenue / Total Assets 2. Net Asset Turnover Ratio = Revenue / (Total Assets - Current Liabilities)
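A worked example with illustrative figures (not from the article):

```latex
% A firm with revenue of 600,000 and fixed assets of 150,000 turns its
% fixed assets over four times per period:
\[
\text{Fixed-asset turnover} = \frac{\text{Sales}}{\text{Fixed assets}}
= \frac{600{,}000}{150{,}000} = 4.0
\]
```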
https://en.wikipedia.org/wiki/Optical%20burst%20switching
Optical burst switching (OBS) is an optical networking technique that allows dynamic sub-wavelength switching of data. OBS is viewed as a compromise between the as-yet unfeasible full optical packet switching (OPS) and the mostly static optical circuit switching (OCS). It differs from these paradigms because OBS control information is sent separately, in a reserved optical channel and in advance of the data payload. These control signals can then be processed electronically to allow the timely setup of an optical light path to transport the soon-to-arrive payload. This is known as delayed reservation. Purpose The purpose of optical burst switching (OBS) is to dynamically provision sub-wavelength granularity by optimally combining electronics and optics. OBS considers sets of packets with similar properties called bursts; OBS granularity is therefore finer than that of optical circuit switching (OCS). OBS provides more bandwidth flexibility than wavelength routing but requires faster switching and control technology. OBS can be used for realizing dynamic end-to-end all-optical communications. Method In OBS, packets are aggregated into data bursts at the edge of the network to form the data payload. Various assembling schemes based on time and/or size exist (see burst switching). Edge router architectures have been proposed. OBS features the separation between the control plane and the data plane. A control signal (also termed burst header or control packet) is associated with each data burst. The control signal is transmitted in optical form in a separate wavelength termed the control channel, signaled out of band and processed electronically at each OBS router, whereas the data burst is transmitted in all-optical form from one end to the other end of the network. The data burst can cut through intermediate nodes, and data buffers such as fiber delay lines may be used. In OBS, data is transmitted with full transparency to the intermediate nodes in the network. After
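As a rough sketch of the time/size-based burst assembly mentioned above (the threshold values and class names are our own illustrative assumptions, not from any OBS standard):

```python
import time

# Hybrid time/size burst assembly at an OBS edge node: packets queue until
# either max_bytes accumulate or max_delay elapses, then the burst is released
# (and its control packet would be sent ahead on the control channel).
class BurstAssembler:
    def __init__(self, max_bytes=64_000, max_delay=0.005):
        self.max_bytes, self.max_delay = max_bytes, max_delay
        self.buf, self.size, self.first_arrival = [], 0, None

    def add(self, packet: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.buf.append(packet)
        self.size += len(packet)
        if self.size >= self.max_bytes or \
           time.monotonic() - self.first_arrival >= self.max_delay:
            return self.flush()           # burst ready for transmission
        return None                       # keep aggregating

    def flush(self):
        burst, self.buf, self.size, self.first_arrival = self.buf, [], 0, None
        return burst
```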
https://en.wikipedia.org/wiki/Compulsive%20talking
Compulsive talking (or talkaholism) is talking that goes beyond the bounds of what is considered to be socially acceptable. The main criteria for determining if someone is a compulsive talker are talking in a continuous manner or stopping only when the other person starts talking, and others perceiving their talking as a problem. Personality traits that have been positively linked to this compulsion include assertiveness, willingness to communicate, self-perceived communication competence, and neuroticism. Studies have shown that most people who are talkaholics are aware of the amount of talking they do, are unable to stop, or do not see it as a problem. Characteristics It has been suggested, through research done by James C. McCroskey and Virginia P. Richmond, that United States society finds talkativeness attractive. It is something which is rewarded and positively correlated with leadership and influence. However, those who compulsively talk are not to be confused with those who are simply highly verbal and vary their quantity of talk. Compulsive talkers are those who are highly verbal in a manner that differs greatly from the norm and is not in the person's best interest. Those who have been characterized as compulsive talkers talk with a greater frequency, dominate conversations, and are less inhibited than others. They have also been found to be more argumentative and to have a positive attitude regarding communication. Tendencies towards compulsive talking are also more frequently seen in the personality structure of neurotic psychotic extraverts. It has also been found that talkaholics are never behaviorally shy. Talkaholic scale In 1993 James C. McCroskey and Virginia P. Richmond constructed the Talkaholic Scale, a Likert-type scale, to help identify those who are compulsive talkers. A score of 40 or above, which indicates two standard deviations above the norm, signals that someone is a true talkaholic. Cultural similarities A study of 811 university students
https://en.wikipedia.org/wiki/Calcipressin
In molecular biology, the calcipressin family of proteins negatively regulates calcineurin by direct binding. They are essential for the survival of T helper type 1 cells. Calcipressin 1 is a phosphoprotein that increases its capacity to inhibit calcineurin when phosphorylated at the conserved FLISPP motif; this phosphorylation also controls the half-life of calcipressin 1 by accelerating its degradation. In humans, the calcipressin family of proteins is derived from three genes: Calcipressin 1 (encoded by RCAN1) is also known as modulatory calcineurin-interacting protein 1 (MCIP1), Adapt78 and Down syndrome critical region 1 (DSCR1). Calcipressin 2 (encoded by RCAN2) is variously known as MCIP2, ZAKI-4 and DSCR1-like 1. Calcipressin 3 (encoded by RCAN3) is also called MCIP3 and DSCR1-like 2.
https://en.wikipedia.org/wiki/Entomostracites
Entomostracites is a scientific name for several trilobites, now assigned to various other genera. E. bucephalus = Paradoxides paradoxissimus E. crassicauda = Illaenus crassicauda E. expansus = Asaphus expansus E. gibbosus = Olenus gibbosus E. granulatus = Nankinolithus granulatus E. laciniatus = Lichas laciniatus E. laticauda = Eobronteus laticauda E. paradoxissimus = Paradoxides paradoxissimus E. pisiformis = Agnostus pisiformis E. punctatus = Encrinurus punctatus E. scarabaeoides = Peltura scarabaeoides E. spinulosus = Parabolina spinulosa
https://en.wikipedia.org/wiki/Envelope%20%28waves%29
In physics and engineering, the envelope of an oscillating signal is a smooth curve outlining its extremes. The envelope thus generalizes the concept of a constant amplitude into an instantaneous amplitude. The figure illustrates a modulated sine wave varying between an upper envelope and a lower envelope. The envelope function may be a function of time, space, angle, or indeed of any variable. In beating waves A common situation resulting in an envelope function in both space x and time t is the superposition of two waves of almost the same wavelength and frequency: F(x, t) = sin[2π(x/(λ − Δλ) − (f + Δf)t)] + sin[2π(x/(λ + Δλ) − (f − Δf)t)] ≈ 2 cos[2π(x Δλ/λ² − Δf t)] sin[2π(x/λ − f t)], which uses the trigonometric formula for the addition of two sine waves, and the approximation Δλ ≪ λ. Here the modulation wavelength λmod is given by: λmod = λ²/Δλ. The modulation wavelength is double that of the envelope itself because each half-wavelength of the modulating cosine wave governs both positive and negative values of the modulated sine wave. Likewise the beat frequency is that of the envelope, twice that of the modulating wave, or 2Δf. If this wave is a sound wave, the ear hears the frequency associated with f and the amplitude of this sound varies with the beat frequency. Phase and group velocity The arguments of the sinusoids above, apart from a factor 2π, are: ξC = x/λ − f t and ξE = x Δλ/λ² − Δf t, with subscripts C and E referring to the carrier and the envelope. The same amplitude F of the wave results from the same values of ξC and ξE, each of which may itself return to the same value over different but properly related choices of x and t. This invariance means that one can trace these waveforms in space to find the speed of a position of fixed amplitude as it propagates in time; for the argument of the carrier wave to stay the same, the condition is: Δx/λ − f Δt = 0, which shows that to keep a constant amplitude the distance Δx is related to the time interval Δt by the so-called phase velocity vp = Δx/Δt = λf. On the other hand, the same considerations show the envelope propagates at the so-called group velocity vg = Δx/Δt = λ² Δf/Δλ. A more common expression for the group velocity is obtained by introducing the wavenumber k = 2π/λ and angular frequency ω = 2πf, in terms of which vg = dω/dk
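Numerically, the envelope of a beat signal can be recovered as the magnitude of the analytic signal. This short sketch (our own illustration) superposes tones at f ± Δf = 100 ± 4 Hz, so the beat frequency is 2Δf = 8 Hz:

```python
import numpy as np
from scipy.signal import hilbert

# Two unit-amplitude tones at 100 +/- 4 Hz produce a 2*cos(2*pi*4*t)
# modulating wave, i.e. an 8 Hz beat in the envelope.
fs = 10_000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 104 * t) + np.sin(2 * np.pi * 96 * t)

envelope = np.abs(hilbert(x))     # magnitude of the analytic signal
print(envelope.max())             # ~2.0, the peak of the beat envelope
```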
https://en.wikipedia.org/wiki/Multiply%20perfect%20number
In mathematics, a multiply perfect number (also called multiperfect number or pluperfect number) is a generalization of a perfect number. For a given natural number k, a number n is called k-perfect (or k-fold perfect) if the sum of all positive divisors of n (the divisor function, σ(n)) is equal to kn; a number is thus perfect if and only if it is 2-perfect. A number that is k-perfect for a certain k is called a multiply perfect number. As of 2014, k-perfect numbers are known for each value of k up to 11. It is unknown whether there are any odd multiply perfect numbers other than 1. The first few multiply perfect numbers are: 1, 6, 28, 120, 496, 672, 8128, 30240, 32760, 523776, 2178540, 23569920, 33550336, 45532800, 142990848, 459818240, ... . Example The sum of the divisors of 120 is 1 + 2 + 3 + 4 + 5 + 6 + 8 + 10 + 12 + 15 + 20 + 24 + 30 + 40 + 60 + 120 = 360, which is 3 × 120. Therefore 120 is a 3-perfect number. Smallest known k-perfect numbers The following table gives an overview of the smallest known k-perfect numbers for k ≤ 11: Properties It can be proven that: For a given prime number p, if n is p-perfect and p does not divide n, then pn is (p + 1)-perfect. This implies that an integer n is a 3-perfect number divisible by 2 but not by 4, if and only if n/2 is an odd perfect number, of which none are known. If 3n is 4k-perfect and 3 does not divide n, then n is 3k-perfect. Odd multiply perfect numbers It is unknown whether there are any odd multiply perfect numbers other than 1. However, if an odd k-perfect number n exists where k > 2, then it must satisfy the following conditions: The largest prime factor is ≥ 100129; the second largest prime factor is ≥ 1009; the third largest prime factor is ≥ 101. Bounds In little-o notation, the number of multiply perfect numbers less than x is o(x^ε) for all ε > 0. The number of k-perfect numbers n for n ≤ x is less than cx^(c′ log log log x / log log x), where c and c′ are constants independent of k. Under the assumption of the Riemann hypothesis, the following inequality is true for all k-perfect numbers n, where k > 3: k < e^γ · log log n, where γ is Euler's gamma constant. This can be proven using Robin's theorem.
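The definition is easy to check by brute force; this small sketch reproduces the first terms of the sequence above by testing whether σ(n) is an integer multiple of n:

```python
# Brute-force check of the k-perfect condition sigma(n) == k*n.
def sigma(n):
    total, d = 0, 1
    while d * d <= n:                     # sum divisors in O(sqrt(n)) pairs
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

for n in range(1, 40_000):
    s = sigma(n)
    if s % n == 0:
        print(n, "is", s // n, "-perfect")   # 1, 6, 28, 120, 496, 672, 8128, 30240, 32760
```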
https://en.wikipedia.org/wiki/Halorubrum%20orientale
Halorubrum orientale is a halophilic archaeon in the family Halorubraceae.
https://en.wikipedia.org/wiki/Ordbog%20over%20det%20danske%20Sprog
Ordbog over det danske Sprog, or ODS, is a comprehensive dictionary of the Danish language, describing its usage from c. 1700 to 1955 in great detail. The ODS was published in 28 volumes between 1919 and 1956 by the Society for Danish Language and Literature (Det Danske Sprog- og Litteraturselskab). Five supplementary volumes were published between 1992 and 2005. The project was begun by the Danish linguist Verner Dahlerup. From 1915, the project was led by the linguist Lis Jacobsen. A digitized version of the ODS has been maintained by the Society for Danish Language and Literature since November 2005. This organization also maintains a sister dictionary, Den Danske Ordbog, covering Danish language use since 1950. In addition, the society maintains Holbergordbogen, named for the 18th-century playwright Ludvig Holberg and covering language use between 1700 and 1750, and a digital version of Moths Ordbog, a Danish–Latin dictionary from around 1700. Ordbog over det danske Sprog covers approximately 225,000 entries. See also Dansk Sprognævn Oxford English Dictionary Svenska Akademiens ordbok Woordenboek der Nederlandsche Taal External links Online version Online dictionaries Danish dictionaries
https://en.wikipedia.org/wiki/Sequential%20minimal%20optimization
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt in 1998 at Microsoft Research. SMO is widely used for training support vector machines and is implemented by the popular LIBSVM tool. The publication of the SMO algorithm in 1998 generated a lot of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers. Optimization problem Consider a binary classification problem with a dataset (x1, y1), ..., (xn, yn), where xi is an input vector and yi ∈ {−1, +1} is a binary label corresponding to it. A soft-margin support vector machine is trained by solving a quadratic programming problem, which is expressed in the dual form as follows: maximize over α the objective Σi αi − (1/2) Σi Σj yi yj K(xi, xj) αi αj, subject to: 0 ≤ αi ≤ C for i = 1, ..., n, and Σi yi αi = 0, where C is an SVM hyperparameter and K(xi, xj) is the kernel function, both supplied by the user; and the variables αi are Lagrange multipliers. Algorithm SMO is an iterative algorithm for solving the optimization problem described above. SMO breaks this problem into a series of smallest possible sub-problems, which are then solved analytically. Because of the linear equality constraint involving the Lagrange multipliers αi, the smallest possible problem involves two such multipliers. Then, for any two multipliers α1 and α2, the constraints are reduced to: 0 ≤ α1, α2 ≤ C and y1 α1 + y2 α2 = k, and this reduced problem can be solved analytically: one needs to find a minimum of a one-dimensional quadratic function. Here k is the negative of the sum over the rest of the terms in the equality constraint, which is fixed in each iteration. The algorithm proceeds as follows: Find a Lagrange multiplier α1 that violates the Karush–Kuhn–Tucker (KKT) conditions for the optimization problem. Pick a second multiplier α2 and optimize the pair (α1, α2). Repeat steps 1 and 2 until convergence. When all the Lagrange multipliers satisfy the KKT conditions (within a user-defined tolerance), the problem has been solved.
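The analytic solution of the two-multiplier sub-problem is the core of the algorithm. The sketch below follows the standard clipped update from Platt's paper, with the multiplier-selection heuristics omitted and degenerate pairs simply skipped for brevity:

```python
# One analytic SMO step on a pair of multipliers (after Platt, 1998).
# E1, E2 are prediction errors f(x_i) - y_i; K11, K12, K22 are kernel values;
# C is the box constraint.
def smo_pair_step(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    if y1 != y2:                          # feasibility segment for a2
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    eta = K11 + K22 - 2.0 * K12           # curvature of the 1-D quadratic
    if eta <= 0 or L == H:
        return a1, a2                     # skip degenerate pairs in this sketch
    a2_new = a2 + y2 * (E1 - E2) / eta    # unconstrained minimizer...
    a2_new = min(H, max(L, a2_new))       # ...clipped to the constraint segment
    a1_new = a1 + y1 * y2 * (a2 - a2_new) # keep sum(y_i * a_i) fixed
    return a1_new, a2_new
```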
https://en.wikipedia.org/wiki/Perinephritis
Perinephritis is an infection of the tissues surrounding the kidney, on either the right or left side. It can result from extravasation of bacteria out of the renal pelvis (pyelonephritis) or from another kidney infection. Consequences include infection of neighbouring organs (for example, the transverse colon) or of the retroperitoneum, and/or hypertension. A perirenal abscess may also occur.
https://en.wikipedia.org/wiki/Krakout
Krakout is a Breakout clone that was released for the ZX Spectrum, Amstrad CPC, BBC Micro, Commodore 64, Thomson computers and MSX platforms in 1987. One of the wave of enhanced Breakout variants to emerge in the wake of Arkanoid, its key distinctions are that gameplay is horizontal in layout, and that it allows the player to select the acceleration characteristics of the bat before playing. It was written by Andy Green and Rob Toone and published by Gremlin Graphics. The music was composed by Ben Daglish. Reception In 1990, Dragon gave the game 4 out of 5 stars, calling it "one of our favorites, this is Breakout with a different flavor". Reviews Computer Gamer (Jun, 1987) Tilt (May, 1987) Happy Computer (1987) ASM (Aktueller Software Markt) (Mar, 1987) Tilt (Jul, 1987) Computer Gamer (Apr, 1987) Commodore User (Apr, 1987) Your Sinclair (Feb, 1989) Zzap! (Apr, 1987) Crash! (Feb, 1989) Jeux & Stratégie #45
https://en.wikipedia.org/wiki/Self-reconfiguring%20modular%20robot
Modular self-reconfiguring robotic systems or self-reconfigurable modular robots are autonomous kinematic machines with variable morphology. Beyond conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage. For example, a robot made of such components could assume a worm-like shape to move through a narrow pipe, reassemble into something with spider-like legs to cross uneven terrain, then form a third arbitrary object (like a ball or wheel that can spin itself) to move quickly over a fairly flat terrain; it can also be used for making "fixed" objects, such as walls, shelters, or buildings. In some cases this involves each module having 2 or more connectors for connecting several together. They can contain electronics, sensors, computer processors, memory and power supplies; they can also contain actuators that are used for manipulating their location in the environment and in relation with each other. A feature found in some cases is the ability of the modules to automatically connect and disconnect themselves to and from each other, and to form into many objects or perform many tasks moving or manipulating the environment. "Self-reconfiguring" or "self-reconfigurable" means that the mechanism or device is capable of utilizing its own system of control, such as actuators or stochastic means, to change its overall structural shape. Being "modular" in "self-reconfiguring modular robotics" means that the same module or set of modules can be added to or removed from the system, as opposed to being generically "modularized" in the broader sense. The underlying intent is to have an indefinite number of identical modules, or a finite and relatively small set of identical modules, in a mesh or matrix structure
https://en.wikipedia.org/wiki/Selectable%20marker
A selectable marker is a gene introduced into a cell, especially a bacterium or cells in culture, that confers a trait suitable for artificial selection. They are a type of reporter gene used in laboratory microbiology, molecular biology, and genetic engineering to indicate the success of a transfection or other procedure meant to introduce foreign DNA into a cell. Selectable markers are often antibiotic resistance genes (an antibiotic resistance marker is a gene that produces a protein that provides cells expressing it with resistance to an antibiotic). Bacteria that have been subjected to a procedure to introduce foreign DNA are grown on a medium containing an antibiotic, and those bacterial colonies that can grow have successfully taken up and expressed the introduced genetic material. Normally the genes encoding resistance to antibiotics such as ampicillin, chloramphenicol, tetracycline or kanamycin, etc., are considered useful selectable markers for E. coli. Modus operandi The non-recombinants are separated from the recombinants; i.e., when recombinant DNA is introduced into bacteria, some bacteria are successfully transformed and some remain non-transformed. When grown on a medium containing ampicillin, the non-transformed bacteria die due to their lack of ampicillin resistance. The position of the surviving colonies is noted on nitrocellulose paper, and they are separated out and moved to a nutrient medium for mass production of the required product. An alternative to a selectable marker is a screenable marker, which can also be denoted a reporter gene, which allows the researcher to distinguish between wanted and unwanted cells, e.g. between blue and white colonies. The unwanted cells are simply un-transformed cells that were unable to take up the gene during the experiment. Positive and Negative For molecular biology research different types of markers may be used based on the selection sought. These include: Positive or selection markers are selectable markers that confer selective advantage to the host organism
https://en.wikipedia.org/wiki/Roger%20Cotes
Roger Cotes (10 July 1682 – 5 June 1716) was an English mathematician, known for working closely with Isaac Newton by proofreading the second edition of his famous book, the Principia, before publication. He also invented the quadrature formulas known as Newton–Cotes formulas, and made a geometric argument that can be interpreted as a logarithmic version of Euler's formula. He was the first Plumian Professor at Cambridge University from 1707 until his death. Early life Cotes was born in Burbage, Leicestershire. His parents were Robert, the rector of Burbage, and his wife, Grace, née Farmer. Roger had an elder brother, Anthony (born 1681), and a younger sister, Susanna (born 1683), both of whom died young. At first Roger attended Leicester School, where his mathematical talent was recognised. His aunt Hannah had married Rev. John Smith, and Smith took on the role of tutor to encourage Roger's talent. The Smiths' son, Robert Smith, became a close associate of Roger Cotes throughout his life. Cotes later studied at St Paul's School in London and entered Trinity College, Cambridge, in 1699. He graduated BA in 1702 and MA in 1706. Astronomy Roger Cotes's contributions to modern computational methods lie heavily in the fields of astronomy and mathematics. Cotes began his educational career with a focus on astronomy. He became a fellow of Trinity College in 1707, and at age 26 he became the first Plumian Professor of Astronomy and Experimental Philosophy. On his appointment as professor, he opened a subscription list in an effort to provide an observatory for Trinity. Unfortunately, the observatory was still unfinished when Cotes died, and was demolished in 1797. In correspondence with Isaac Newton, Cotes designed a heliostat telescope with a mirror revolving by clockwork. He recomputed the solar and planetary tables of Giovanni Domenico Cassini and John Flamsteed, and he intended to create tables of the moon's motion, based on Newtonian principles. Finally, in 1707 he
https://en.wikipedia.org/wiki/Schoof%27s%20algorithm
Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography, where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve. The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time. This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm. Introduction Let E be an elliptic curve defined over the finite field Fq, where q = p^n for a prime p and an integer n ≥ 1. Over a field of characteristic different from 2 and 3, an elliptic curve can be given by a (short) Weierstrass equation y² = x³ + Ax + B with A, B ∈ Fq. The set of points defined over Fq consists of the solutions (a, b) ∈ Fq × Fq satisfying the curve equation, together with a point at infinity O. Using the group law on elliptic curves restricted to this set, one can see that this set E(Fq) forms an abelian group, with O acting as the zero element. In order to count points on an elliptic curve, we compute the cardinality of E(Fq). Schoof's approach to computing the cardinality #E(Fq) makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials. Hasse's theorem Hasse's theorem states that if E is an elliptic curve over the finite field Fq, then #E(Fq) satisfies |q + 1 − #E(Fq)| ≤ 2√q. This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(Fq) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(Fq), and making use of this result, we now have that computing the value of t modulo N, where N > 4√q, is sufficient for determining t, and thus #E(Fq). While there is no efficient way to compute t modulo such a large N directly, it can be computed modulo many small primes and recovered via the Chinese remainder theorem
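For contrast with Schoof's polynomial-time method, here is the naive O(p) count the article alludes to, implemented for a prime field via Euler's criterion; the curve parameters are illustrative:

```python
# Naive point count over F_p for y^2 = x^3 + A*x + B: for each x, the RHS
# contributes 2 points if it is a nonzero quadratic residue, 1 if zero,
# 0 otherwise; plus the point at infinity.
def count_points(A, B, p):
    count = 1                                  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + A * x + B) % p
        ls = pow(rhs, (p - 1) // 2, p)         # Legendre symbol via Euler's criterion
        count += 1 if rhs == 0 else (2 if ls == 1 else 0)
    return count

p, A, B = 97, 2, 3
N = count_points(A, B, p)
print(N, abs(N - (p + 1)) <= 2 * p ** 0.5)     # Hasse: |N - (p+1)| <= 2*sqrt(p)
```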
https://en.wikipedia.org/wiki/Polyhedral%20combinatorics
Polyhedral combinatorics is a branch of mathematics, within combinatorics and discrete geometry, that studies the problems of counting and describing the faces of convex polyhedra and higher-dimensional convex polytopes. Research in polyhedral combinatorics falls into two distinct areas. Mathematicians in this area study the combinatorics of polytopes; for instance, they seek inequalities that describe the relations between the numbers of vertices, edges, and faces of higher dimensions in arbitrary polytopes or in certain important subclasses of polytopes, and study other combinatorial properties of polytopes such as their connectivity and diameter (number of steps needed to reach any vertex from any other vertex). Additionally, many computer scientists use the phrase “polyhedral combinatorics” to describe research into precise descriptions of the faces of certain specific polytopes (especially 0-1 polytopes, whose vertices are subsets of a hypercube) arising from integer programming problems. Faces and face-counting vectors A face of a convex polytope P may be defined as the intersection of P and a closed halfspace H such that the boundary of H contains no interior point of P. The dimension of a face is the dimension of this intersection. The 0-dimensional faces are the vertices themselves, and the 1-dimensional faces (called edges) are line segments connecting pairs of vertices. Note that this definition also includes as faces the empty set and the whole polytope P. If P itself has dimension d, the faces of P with dimension d − 1 are called facets of P and the faces with dimension d − 2 are called ridges. The faces of P may be partially ordered by inclusion, forming a face lattice that has as its top element P itself and as its bottom element the empty set. A key tool in polyhedral combinatorics is the ƒ-vector of a polytope, the vector (f0, f1, ..., fd − 1) where fi is the number of i-dimensional faces of the polytope. For instance, a cube has eight vertices, twel
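To make the f-vector concrete, here is a brief illustrative Python check (not from the article): the d-dimensional hypercube has C(d, i)·2^(d−i) faces of dimension i, and its f-vector satisfies the Euler relation f0 − f1 + ... ± fd−1 = 1 − (−1)^d.

```python
# f-vector of the d-dimensional hypercube: f_i = C(d, i) * 2**(d - i).
# Checks the Euler relation: f_0 - f_1 + ... +- f_{d-1} = 1 - (-1)**d.
from math import comb

def hypercube_f_vector(d):
    return [comb(d, i) * 2 ** (d - i) for i in range(d)]

for d in range(1, 7):
    f = hypercube_f_vector(d)
    euler = sum((-1) ** i * fi for i, fi in enumerate(f))
    assert euler == 1 - (-1) ** d
    print(d, f)  # d=3 prints [8, 12, 6]: eight vertices, twelve edges, six squares
```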
https://en.wikipedia.org/wiki/Raphopoda
Raphopoda is a grouping of heterotrophic protists. It contains the heterotrophic organisms within the class Raphidomonadea, classified as two orders: Commatiida, comprising the single flagellate genus Commation, and Actinophryida, an order of heliozoa, amoebae with stiff specialized pseudopodia called axopodia.
https://en.wikipedia.org/wiki/Glucose%20phosphate%20broth
Glucose phosphate broth is used to perform the methyl red (MR) test and the Voges–Proskauer (VP) test. Contents Glucose – 5 g/L Dipotassium phosphate – 5 g/L Proteose Peptone – 5 g/L Distilled water – 1000 mL pH – 6.9 Methyl red test Principle It is used to determine the ability of an organism to produce mixed acids by fermentation of glucose and to overcome the buffering capacity of the medium. Procedure Inoculate glucose phosphate broth with a pure culture of the test organism. Incubate the broth at 35 °C for 48–72 hours. After incubation, add 5 drops of methyl red directly into the broth, along the sides of the tube. Interpretation The development of a stable red color at the surface of the medium indicates sufficient acid production to lower the pH to 4.4 and constitutes a positive test. Since other organisms may produce smaller quantities of acid from the test substrate, an intermediate orange color between yellow and red may develop. This does not indicate a positive test. Controls Positive and negative controls should be run after preparation of each lot of medium. Positive control: Escherichia coli Negative control: Klebsiella Voges–Proskauer test Principle It is used to determine the ability of some organisms to produce a neutral end product, acetyl methyl carbinol (acetoin), from glucose fermentation. In members such as Klebsiella and Enterobacter, acetoin, a neutral-reacting end product, is the chief end product of glucose metabolism, and these organisms form smaller quantities of mixed acids. In the presence of atmospheric oxygen and 40% KOH, acetoin is converted to diacetyl, and α-naphthol serves as a catalyst to bring out a red color complex. Media Glucose phosphate broth Reagents A: α-naphthol – 5 g in 100 mL absolute ethyl alcohol; add 0.6 mL (3 parts) B: KOH – 40 g in 100 mL distilled water; add 0.2 mL (1 part) Procedure Inoculate a tube of glucose phosphate broth with a pure inoculum of the test organism and incubate at 35 °C for 24 hours.
https://en.wikipedia.org/wiki/Undercut%20procedure
The undercut procedure is a procedure for fair item assignment between two people. It provably finds a complete envy-free item assignment whenever such an assignment exists. It was presented by Brams and Kilgour and Klamler and simplified and extended by Aziz. Assumptions The undercut procedure requires only the following weak assumptions on the people: Each person has a weak preference relation on subsets of items. Each preference relation is strictly monotonic: for every set $X$ and item $y \notin X$, the person strictly prefers $X \cup \{y\}$ to $X$. It is not assumed that agents have responsive preferences. Main idea The undercut procedure can be seen as a generalization of the divide and choose protocol from a divisible resource to a resource with indivisibilities. The divide-and-choose protocol requires one person to cut the resource into two equal pieces. But, if the resource contains indivisibilities, it may be impossible to make an exactly-equal cut. Accordingly, the undercut procedure works with almost-equal-cuts. An almost-equal-cut of a person is a partition of the set of items into two disjoint subsets (X,Y) such that: The person weakly prefers X to Y; If any single item is moved from X to Y, then the person strictly prefers Y to X (i.e., for all x in X, the person prefers $Y \cup \{x\}$ to $X \setminus \{x\}$). Procedure Each person reports all his almost-equal-cuts. There are two cases: Case 1: the reports are different, e.g., there is a partition (X,Y) that is an almost-equal-cut for Alice but not for George. Then, this partition is presented to George. George can either accept or reject it: George accepts the partition if he prefers Y to X. Then Alice receives X and George receives Y and the resulting allocation is envy-free. George rejects the partition if he prefers X to Y. By assumption, (X,Y) is not an almost-equal-cut for George. Therefore, there exists an item x in X such that George prefers $X \setminus \{x\}$ to $Y \cup \{x\}$. George reports $X \setminus \{x\}$; we say that George undercuts X. Since (X,Y) is an almost-equal-cut for Alice, Alic
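As an illustration of almost-equal-cuts, the sketch below enumerates them in Python under the simplifying assumption of additive valuations (the procedure itself needs only monotonic preferences); all names and numbers are illustrative.

```python
# Illustrative sketch: enumerate one person's almost-equal-cuts assuming
# additive valuations. A cut (X, Y) is almost-equal for a person if they
# weakly prefer X to Y, but moving any single item from X to Y flips this.
from itertools import combinations

def almost_equal_cuts(items, value):
    total = sum(value[i] for i in items)
    cuts = []
    for r in range(len(items) + 1):
        for X in combinations(items, r):
            vX = sum(value[i] for i in X)
            if vX < total - vX:
                continue  # must weakly prefer X to Y
            # Moving any x from X to Y must make Y-with-x strictly preferred.
            if all(vX - value[x] < total - vX + value[x] for x in X):
                Y = tuple(i for i in items if i not in X)
                cuts.append((X, Y))
    return cuts

alice = {"a": 4, "b": 3, "c": 2, "d": 1}
print(almost_equal_cuts(list(alice), alice))  # e.g. ({'a','b'}, {'c','d'})
```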
https://en.wikipedia.org/wiki/PatchMatch
The core PatchMatch algorithm quickly finds correspondences between small square regions (or patches) of an image. The algorithm can be used in various applications such as object removal from images, reshuffling or moving contents of images, retargeting or changing aspect ratios of images, optical flow estimation, and stereo correspondence. Algorithm The goal of the algorithm is to find the patch correspondence by defining a nearest-neighbor field (NNF) as a function $f : A \to \mathbb{R}^2$ of offsets, defined over all possible patch coordinates (locations of patch centers) in image A, for some distance function $D$ between two patches. So, for a given patch coordinate $a$ in image $A$ and its corresponding nearest neighbor $b$ in image $B$, $f(a)$ is simply $b - a$. However, searching every point in image $B$ for every patch in image $A$ would be prohibitively expensive. So the following algorithm takes a randomized approach in order to accelerate the calculation. The algorithm has three main components. Initially, the nearest-neighbor field is filled with either random offsets or some prior information. Next, an iterative update process is applied to the NNF, in which good patch offsets are propagated to adjacent pixels, followed by random search in the neighborhood of the best offset found so far. Independent of these three components, the algorithm also uses a coarse-to-fine approach by building an image pyramid to obtain a better result. Initialization When initializing with random offsets, we use independent uniform samples across the full range of image $B$. This algorithm avoids using an initial guess from the previous level of the pyramid because in this way the algorithm can avoid being trapped in local minima. Iteration After initialization, the algorithm performs an iterative process of improving the NNF. The iterations examine the offsets in scan order (from left to right, top to bottom), and each undergoes propagation followed by random search. Propagation We attempt to improve $f(x, y)$ using the known offsets o
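A compact illustrative Python sketch of the single-scale core (random initialization, propagation, random search) follows; real implementations add the image pyramid, alternate the scan direction between iterations, and use other refinements. The grayscale arrays and SSD patch distance here are simplifying assumptions, not the paper's exact formulation.

```python
# Minimal single-scale PatchMatch sketch: grayscale images as 2-D float
# NumPy arrays, patch distance = sum of squared differences.
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, r):
    pa = A[ax - r:ax + r + 1, ay - r:ay + r + 1]
    pb = B[bx - r:bx + r + 1, by - r:by + r + 1]
    return np.sum((pa - pb) ** 2)

def patchmatch(A, B, r=3, iters=5, rng=np.random.default_rng(0)):
    h, w = A.shape
    hb, wb = B.shape
    # Initialization: independent uniform random coordinates in B.
    nnf = np.stack([rng.integers(r, hb - r, (h, w)),
                    rng.integers(r, wb - r, (h, w))], axis=-1)
    best = np.full((h, w), np.inf)

    def try_match(x, y, bx, by):
        if r <= bx < hb - r and r <= by < wb - r:
            d = patch_dist(A, B, x, y, bx, by, r)
            if d < best[x, y]:
                best[x, y], nnf[x, y] = d, (bx, by)

    for x in range(r, h - r):          # score the initial random field
        for y in range(r, w - r):
            try_match(x, y, *nnf[x, y])
    for _ in range(iters):
        for x in range(r, h - r):      # scan order: left-to-right, top-to-bottom
            for y in range(r, w - r):
                # Propagation: adopt the (shifted) matches of up/left neighbors.
                try_match(x, y, nnf[x - 1, y][0] + 1, nnf[x - 1, y][1])
                try_match(x, y, nnf[x, y - 1][0], nnf[x, y - 1][1] + 1)
                # Random search: exponentially shrinking window around best match.
                radius = max(hb, wb)
                while radius >= 1:
                    bx = nnf[x, y][0] + rng.integers(-radius, radius + 1)
                    by = nnf[x, y][1] + rng.integers(-radius, radius + 1)
                    try_match(x, y, bx, by)
                    radius //= 2
    return nnf
```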
https://en.wikipedia.org/wiki/Arie%20Bialostocki
Arie Bialostocki is an Israeli American mathematician with expertise and contributions in discrete mathematics and finite groups. Education and career Arie received his BSc, MSc, and PhD (1984) degrees from Tel-Aviv University in Israel. His dissertation was done under the supervision of Marcel Herzog. After a year of postdoc at the University of Calgary, Canada, he took a faculty position at the University of Idaho, became a professor in 1992, and continued to work there until he retired at the end of 2011. At Idaho, Arie maintained correspondence and collaborations with researchers from around the world who shared similar interests in mathematics. His Erdős number is 1. He has supervised seven PhD students and numerous undergraduate students who enjoyed his colorful anecdotes and advice. He organized the Research Experience for Undergraduates (REU) program at the University of Idaho from 1999 to 2003, attracting many promising undergraduates who have themselves gone on to outstanding research careers. Mathematics research Arie has published more than 50 papers in reputable mathematics journals. The following are some of Arie's most important contributions: Bialostocki redefined a -injector in a finite group G to be any maximal nilpotent subgroup  of  satisfying , where  is the largest cardinality of a subgroup of  which is nilpotent of class at most . Using his definition, it was proved by several authors that in many non-solvable groups the nilpotent injectors form a unique conjugacy class. Bialostocki contributed to the generalization of the Erdős–Ginzburg–Ziv theorem (also known as the EGZ theorem). He conjectured: if  is a sequence of elements of , then  contains at least  zero sums of length . The EGZ theorem is a special case where . The conjecture was partially confirmed by Kisin, Füredi and Kleitman, and Grynkiewicz. Bialostocki introduced the EGZ polynomials and contributed to generalizing the EGZ theorem for higher degree polynomial
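For context, the EGZ theorem itself states that any sequence of 2n − 1 integers contains n elements whose sum is divisible by n, and that 2n − 2 elements do not always suffice. A brute-force Python check for small n (illustrative only):

```python
# Brute-force verification of the Erdos-Ginzburg-Ziv theorem for small n:
# every sequence of 2n - 1 residues mod n has n elements summing to 0 mod n.
from itertools import combinations, product

def has_zero_sum_subsequence(seq, n):
    return any(sum(c) % n == 0 for c in combinations(seq, n))

for n in range(2, 5):
    for seq in product(range(n), repeat=2 * n - 1):
        assert has_zero_sum_subsequence(seq, n)
    # And 2n - 2 elements are not always enough: n - 1 zeros and n - 1 ones.
    witness = (0,) * (n - 1) + (1,) * (n - 1)
    assert not has_zero_sum_subsequence(witness, n)
print("verified for n = 2, 3, 4")
```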
https://en.wikipedia.org/wiki/Museum%20with%20No%20Frontiers
Museum With No Frontiers (MWNF) is an international non-profit organisation founded on the initiative of Eva Schubert in 1995, in the context of the Barcelona Process (the Euro-Mediterranean Partnership, relaunched as the Union for the Mediterranean). MWNF provides a platform that enables all partners to interact productively and contribute to a transnational presentation of history, art and culture based on equal voices and the equal visibility of all concerned. For that purpose, MWNF develops exhibition formats that do not require moving the artworks; instead, artefacts in museums, monuments and archaeological sites are presented in situ (Exhibition Trails) or in a virtual environment (the MWNF Virtual Museum). History The MWNF Virtual Museum, so far the largest online museum, was launched in 2005. It enables partners from different countries to present a joint theme taking into consideration the perspectives of all concerned and to create virtual ensembles that otherwise could not exist. The first thematic section, www.discoverislamicart.org, was completed in cooperation with partners from 14 countries. Discover Islamic Art presents the heritage of Islam not only in southern Mediterranean countries but also in Europe. Its Database comprises 850 artefacts and 385 monuments and archaeological sites relating to almost 1,300 years of history, from the Umayyad caliphate (AH 41–132 / AD 661–750) until the end of the Ottoman Empire (AH 1340 / AD 1922). Eighteen Virtual Exhibitions present the history, art and cultural legacy of the great Islamic dynasties of the Mediterranean. Descriptions are available in Arabic, English, French and Spanish; for the Virtual Exhibitions, also in German, Italian, Portuguese, Turkish and Swedish. The Virtual Museum's second thematic section, www.discoverbaroqueart.org, was inaugurated in 2010. The newest section, sharinghistory.org, has been online since April 2015. Exhibition Trail is the name of another exhibition format set up by MWNF
https://en.wikipedia.org/wiki/Star%20of%20David
The Star of David is a generally recognized symbol of both Jewish identity and Judaism. Its shape is that of a hexagram: the compound of two equilateral triangles. A derivation of the seal of Solomon was used for decorative and mystical purposes by Muslims and Kabbalistic Jews. The hexagram has appeared occasionally in Jewish contexts since antiquity as a decorative motif, such as a stone bearing a hexagram from the arch of the 3rd–4th century Khirbet Shura synagogue. A hexagram in a religious context can be seen in a manuscript of the Hebrew Bible from 11th-century Cairo. Its association as a distinctive symbol for the Jewish people and their religion dates back to 17th-century Prague. In the 19th century, the symbol began to be widely used among the Jewish communities of Eastern Europe, ultimately coming to be used to represent Jewish identity or religious beliefs. It became representative of Zionism after it was chosen as the central symbol for a Jewish national flag at the First Zionist Congress in 1897. By the end of World War I, it had become an internationally accepted symbol for the Jewish people, being used on the gravestones of fallen Jewish soldiers. Today, the star is used as the central symbol on the national flag of the State of Israel. Roots Unlike the menorah, the Lion of Judah, the shofar and the lulav, the hexagram was not originally a uniquely Jewish symbol. The hexagram, being an inherently simple geometric construction, has been used in various motifs throughout human history, not all of them religious. It appeared as a decorative motif in both 4th-century synagogues and Christian churches in the Galilee region. Gershom Scholem writes that the term "seal of Solomon" was adopted by Jews from Islamic magic literature, while he could not assert with certainty whether the term "shield of David" originated in Islamic or Jewish mysticism. Leonora Leet argues, though, that not just the terminology, but the esoteric philos
https://en.wikipedia.org/wiki/Charles%20William%20Clenshaw
Charles William Clenshaw (15 March 1926, Southend-on-Sea, Essex – 23 September 2004) was an English mathematician, specializing in numerical analysis. He is known for the Clenshaw algorithm (1955) and Clenshaw–Curtis quadrature (1960). In a 1984 paper, Beyond Floating Point, Clenshaw and Frank W. J. Olver introduced symmetric level-index arithmetic. Biography Charles William Clenshaw attended the local high school in Southend-on-Sea from 1937 to 1943. In 1946 he graduated with a degree in mathematics and physics from King's College London, where he went on to receive a PhD in mathematics in 1948. From 1945 to 1969 he was a mathematician at the UK's National Physical Laboratory (NPL) in Bushy Park, Teddington. There, from 1961 to 1969, he was a senior principal scientific officer and headed the numerical methods group in NPL's mathematics division. In 1969 he resigned from NPL and accepted an appointment as professor of numerical analysis at Lancaster University. He and Emlyn Howard Lloyd (1918–2008), professor of statistics, strengthened the mathematics department, and the department's numerical analysis group became one of the best in the UK. The mathematics department hosted the first four summer schools in numerical analysis sponsored by the UK's Engineering and Physical Sciences Research Council. Clenshaw did research in approximation theory based on Chebyshev polynomials, software development supporting trigonometric functions, Bessel functions, etc., and computer arithmetic systems. His PhD students include William Allan Light (1950–2002). Upon his death, Clenshaw was survived by his wife, three sons, a daughter, and ten grandchildren. Sgt. Ian Charles Cooper Clenshaw (1918–1940), one of Charles William Clenshaw's brothers, was officially the first RAF pilot to be killed in the Battle of Britain. Selected publications
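The Clenshaw algorithm evaluates a finite Chebyshev series through a single backward three-term recurrence, never forming the individual polynomials. A minimal illustrative Python sketch:

```python
# Clenshaw's algorithm for S(x) = sum_{k=0}^{N} c[k] * T_k(x),
# where T_k are Chebyshev polynomials of the first kind.
def clenshaw_chebyshev(c, x):
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):           # backward recurrence b_k = c_k + 2x*b_{k+1} - b_{k+2}
        b1, b2 = ck + 2 * x * b1 - b2, b1
    return c[0] + x * b1 - b2            # special final step for k = 0

# Example: T_0 + 2*T_1 + 3*T_2 at x = 0.5; T_2(0.5) = 2*0.25 - 1 = -0.5.
print(clenshaw_chebyshev([1.0, 2.0, 3.0], 0.5))  # 1 + 1 - 1.5 = 0.5
```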
https://en.wikipedia.org/wiki/Metaphyseal%20chondrodysplasia%20Schmid%20type
Metaphyseal chondrodysplasia Schmid type is a type of chondrodysplasia associated with a deficiency of collagen, type X, alpha 1. Unlike other "rickets syndromes", affected individuals have normal serum calcium, phosphorus, and urinary amino acid levels. Long bones are short and curved, with widened growth plates and metaphyses. It is named for the German researcher F. Schmid, who characterized it in 1949.
https://en.wikipedia.org/wiki/Statistical%20semantics
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval. History The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation. He argued that word sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that "a word is characterized by the company it keeps" was advocated by J.R. Firth. This assumption is known in linguistics as the distributional hypothesis. Emile Delavenay defined statistical semantics as the "statistical study of the meanings of words and their frequency and order of recurrence". "Furnas et al. 1983" is frequently cited as a foundational contribution to statistical semantics. An early success in the field was latent semantic analysis. Applications Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics, by applying statistical techniques to large corpora: Measuring the similarity in word meanings Measuring the similarity in word relations Modeling similarity-based generalization Discovering words with a given relation Classifying relations between words Extracting keywords from documents Measuring the cohesiveness of text Discovering the different senses of words Distinguishing the different senses of words Subcognitive aspects of words Distinguishing praise from criticism Related fields Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of compu
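As a toy illustration of the distributional hypothesis (an editorial sketch, not any particular system's method), the following Python snippet builds co-occurrence vectors from a tiny corpus and compares words by cosine similarity; real statistical-semantics systems do this over large corpora, often with dimensionality reduction as in latent semantic analysis.

```python
# Toy distributional semantics: words that occur in similar contexts get
# similar co-occurrence vectors, compared here by cosine similarity.
from collections import Counter
import math

corpus = "the cat drinks milk . the dog drinks water . the cat chases the dog".split()
window = 2

vectors = {w: Counter() for w in set(corpus)}
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[w][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

print(cosine(vectors["cat"], vectors["dog"]))   # relatively high (~0.88)
print(cosine(vectors["cat"], vectors["milk"]))  # lower (~0.69)
```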
https://en.wikipedia.org/wiki/Schouten%E2%80%93Nijenhuis%20bracket
In differential geometry, the Schouten–Nijenhuis bracket, also known as the Schouten bracket, is a type of graded Lie bracket defined on multivector fields on a smooth manifold extending the Lie bracket of vector fields. There are two different versions, both rather confusingly called by the same name. The most common version is defined on alternating multivector fields and makes them into a Gerstenhaber algebra, but there is also another version defined on symmetric multivector fields, which is more or less the same as the Poisson bracket on the cotangent bundle. It was invented by Jan Arnoldus Schouten (1940, 1953) and its properties were investigated by his student Albert Nijenhuis (1955). It is related to but not the same as the Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket. Definition and properties An alternating multivector field is a section of the exterior algebra ∧∗TM over the tangent bundle of a manifold M. The alternating multivector fields form a graded supercommutative ring with the product of a and b written as ab (some authors use a∧b). This is dual to the usual algebra of differential forms Ω∗M by the pairing on homogeneous elements: $\langle \omega, a_1 a_2 \cdots a_p \rangle = \omega(a_1, \ldots, a_p)$ for $\omega \in \Omega^p M$ (and 0 otherwise). The degree of a multivector A in $\wedge^p TM$ is defined to be |A| = p. The skew symmetric Schouten–Nijenhuis bracket is the unique extension of the Lie bracket of vector fields to a graded bracket on the space of alternating multivector fields that makes the alternating multivector fields into a Gerstenhaber algebra. It is given in terms of the Lie bracket of vector fields by $[a_1 \wedge \cdots \wedge a_m, b_1 \wedge \cdots \wedge b_n] = \sum_{i,j} (-1)^{i+j} [a_i, b_j] \wedge a_1 \wedge \cdots \wedge \hat{a}_i \wedge \cdots \wedge a_m \wedge b_1 \wedge \cdots \wedge \hat{b}_j \wedge \cdots \wedge b_n$ for vector fields $a_i$, $b_j$, and $[a_1 \wedge \cdots \wedge a_m, f] = \iota_{df}(a_1 \wedge \cdots \wedge a_m)$ for vector fields $a_i$ and smooth function $f$, where $\iota_{df}$ is the common interior product operator. It has the following properties. $|ab| = |a| + |b|$ (the product has degree 0) $|[a,b]| = |a| + |b| - 1$ (the Schouten–Nijenhuis bracket has degree −1) $(ab)c = a(bc)$, $ab = (-1)^{|a||b|} ba$ (the product is associative and (super)commutative) $[a, bc] = [a, b]c + (-1)^{(|a|-1)|b|} b[a, c]$ (Poisson identity) [a,
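As a small worked instance of the defining formula (an illustration under the conventions above, not a quotation from the source), taking m = 2 and n = 1 for vector fields X, Y, Z gives:

```latex
% Worked instance of the defining sum for m = 2, n = 1 (vector fields X, Y, Z):
% the terms (i,j) = (1,1) and (2,1) carry signs (-1)^2 and (-1)^3 respectively.
\[
  [X \wedge Y, Z] = [X, Z] \wedge Y - [Y, Z] \wedge X
                  = [X, Z] \wedge Y + X \wedge [Y, Z],
\]
% after using ab = (-1)^{|a||b|} ba to reorder the second term.
```

This is exactly the graded Leibniz behaviour that the Poisson identity prescribes.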
https://en.wikipedia.org/wiki/Advanced%20Fuel%20Cycle%20Initiative
The Advanced Fuel Cycle Initiative (AFCI) is an extensive research and development effort of the United States Department of Energy (DOE). The mission and focus of AFCI is to enable the safe, secure, economic and sustainable expansion of nuclear energy by conducting research, development, and demonstration focused on nuclear fuel recycling and waste management to meet U.S. needs. The program was absorbed into the GNEP project, which was renamed IFNEC. Focus Continue critical fuel cycle research, development and demonstration (RD&D) activities Pursue development of policy and regulatory framework to support fuel cycle closure Determine and develop RD&D infrastructure needed to mature technologies Establish advanced modeling and simulation program element Implement a science-based RD&D program Campaigns The AFCI is an extensive RD&D effort to close the fuel cycle. The different areas within the AFCI are separated into campaigns. The RD&D of each campaign is completed by the United States Department of Energy's national laboratories. Transmutation fuels Fast reactor development Separations Waste forms Grid Appropriate Reactor Campaign Safeguards Systems analysis Modeling and simulation Safety and regulatory Transmutation fuels The mission of the Transmutation Fuels Campaign is the generation of data, methods and models for fast reactor transmutation fuels and targets qualification by performing RD&D activities on fuel fabrication and performance. The campaign is led by Idaho National Laboratory. Reactor development The mission of the Reactor Campaign is to develop advanced recycling reactor technologies required for commercial deployment in a closed nuclear fuel cycle. The Reactor Campaign is led at Argonne National Laboratory. Separations The mission of the Separations Campaign is to develop and demonstrate industrially deployable and economically feasible technologies for the recycling of used nuclear fuel to provide improved safety,
https://en.wikipedia.org/wiki/James%20B.%20Anderson
James Bernhard Anderson (November 16, 1935 – January 14, 2021) was an American chemist and physicist. From 1995 to 2014 he was Evan Pugh Professor of Chemistry and Physics at the Pennsylvania State University. He specialized in Quantum Chemistry by Monte Carlo methods, molecular dynamics of reactive collisions, kinetics and mechanisms of gas phase reactions, and rare-event theory. Life James Anderson was born in 1935 in Cleveland, Ohio to American-born parents of Swedish descent, Bertil and Lorraine Anderson. He was raised in Morgantown, West Virginia and spent his childhood summers on the island of Put-in-Bay, Ohio. Anderson earned a B.S. in chemical engineering from the Pennsylvania State University, an M.S. from the University of Illinois, and an M.A. and Ph.D. from Princeton University. Anderson married his wife Nancy Anderson (née Trotter) in 1958. They have three children and six grandchildren. He died on January 14, 2021, in State College, Pennsylvania. Career Anderson began his professional career as an engineer in petrochemical research and development with Shell Chemical Company from 1958–60 in Deer Park, Texas. He began his academic career as a professor of chemical engineering at Princeton University in 1964 and continued as a professor of engineering at Yale University in 1968 before moving to the Pennsylvania State University in 1974. From 1995 until his retirement in 2014, he was Evan Pugh Professor of Chemistry and Physics at the Pennsylvania State University. Anderson also served as a visiting professor at Cambridge University, the University of Milan, the University of Kaiserslautern, the University of Göttingen, Free University of Berlin, and RWTH Aachen University. Research Anderson made key contributions in several areas of chemistry and physics. The main areas of impact are: reaction kinetics and molecular dynamics, the 'rare-event' approach to chemical reactions, Quantum Monte Carlo (QMC) methods, Monte Carlo simulation of radi
https://en.wikipedia.org/wiki/Elementary%20Number%20Theory%2C%20Group%20Theory%20and%20Ramanujan%20Graphs
Elementary Number Theory, Group Theory and Ramanujan Graphs is a book in mathematics whose goal is to make the construction of Ramanujan graphs accessible to undergraduate-level mathematics students. In order to do so, it covers several other significant topics in graph theory, number theory, and group theory. It was written by Giuliana Davidoff, Peter Sarnak, and Alain Valette, and published in 2003 by the Cambridge University Press, as volume 55 of the London Mathematical Society Student Texts book series. Background In graph theory, expander graphs are undirected graphs with high connectivity: every small-enough subset of vertices has many edges connecting it to the remaining parts of the graph. Sparse expander graphs have many important applications in computer science, including the development of error correcting codes, the design of sorting networks, and the derandomization of randomized algorithms. For these applications, the graph must be constructed explicitly, rather than merely having its existence proven. One way to show that a graph is an expander is to study the eigenvalues of its adjacency matrix. For a $d$-regular graph, these are real numbers in the interval $[-d, d]$, and the largest eigenvalue (corresponding to the all-1s eigenvector) is exactly $d$. The spectral expansion of the graph is defined from the difference between the largest and second-largest eigenvalues, the spectral gap, which controls how quickly a random walk on the graph settles to its stable distribution; this gap can be at most $d - 2\sqrt{d - 1}$. The Ramanujan graphs are defined as the graphs that are optimal from the point of view of spectral expansion: they are $d$-regular graphs whose spectral gap is exactly $d - 2\sqrt{d - 1}$. Although Ramanujan graphs with high degree, such as the complete graphs, are easy to construct, expander graphs of low degree are needed for the applications of these graphs. Several constructions of low-degree Ramanujan graphs are now known, the first of which were by Lubotzky, Phillips, and Sarnak and by Margulis. Reviewer Jürgen Elstro
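To see the definition in action, here is a short illustrative NumPy check (not from the book) that the 3-regular Petersen graph is Ramanujan: every eigenvalue other than d = 3 has absolute value at most 2√(d − 1) ≈ 2.83.

```python
# Check the Ramanujan property for the 3-regular Petersen graph:
# all eigenvalues except d itself must lie in [-2*sqrt(d-1), 2*sqrt(d-1)].
import numpy as np

d = 3
A = np.zeros((10, 10), dtype=int)
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
edges += [(i, i + 5) for i in range(5)]                 # spokes
edges += [(i + 5, (i + 2) % 5 + 5) for i in range(5)]   # inner pentagram
for u, v in edges:
    A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))
print(eig)  # [-2, -2, -2, -2, 1, 1, 1, 1, 1, 3]
assert np.isclose(eig[-1], d)
assert all(abs(x) <= 2 * np.sqrt(d - 1) + 1e-9 for x in eig[:-1])
```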
https://en.wikipedia.org/wiki/Secure%20cryptoprocessor
A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained. The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures. Examples A hardware security module (HSM) contains one or more secure cryptoprocessor chips. These devices are high-grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security, with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeroes keys upon attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering. Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment. Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary. Cryptoprocessors input program instructions
https://en.wikipedia.org/wiki/Loop%20quantum%20gravity
Loop quantum gravity (LQG) is a theory of quantum gravity that aims to reconcile quantum mechanics and general relativity, incorporating matter of the Standard Model into the framework established for the pure quantum gravity case. It is an attempt to develop a quantum theory of gravity based directly on Einstein's geometric formulation rather than the treatment of gravity as a mysterious mechanism (force). As a theory, LQG postulates that the structure of space and time is composed of finite loops woven into an extremely fine fabric or network. These networks of loops are called spin networks. The evolution of a spin network, or spin foam, has a scale on the order of a Planck length, approximately 10⁻³⁵ meters, and smaller scales are meaningless. Consequently, not just matter, but space itself, has an atomic structure. The areas of research, which involve about 30 research groups worldwide, share the basic physical assumptions and the mathematical description of quantum space. Research has evolved in two directions: the more traditional canonical loop quantum gravity, and the newer covariant loop quantum gravity, called spin foam theory. The most well-developed theory that has been advanced as a direct result of loop quantum gravity is called loop quantum cosmology (LQC). LQC advances the study of the early universe, incorporating the concept of the Big Bang into the broader theory of the Big Bounce, which envisions the Big Bang as the beginning of a period of expansion that follows a period of contraction, which has been described as the Big Crunch. History In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics, specifically Yang–Mills theory. Shortly after, Ted Jacobson and Lee Smolin realized that the formal equation of quantum gravity, called the Wheeler–DeWitt equation, admitted solutions labelled by loops when rewritten in the new Ashtekar variables. Carlo Rovelli an
https://en.wikipedia.org/wiki/Future%20value
Future value is the value of an asset at a specific date. It measures the nominal future sum of money that a given sum of money is "worth" at a specified time in the future assuming a certain interest rate, or more generally, rate of return; it is the present value multiplied by the accumulation function. The value does not include corrections for inflation or other factors that affect the true value of money in the future. This is used in time value of money calculations. Overview Money value fluctuates over time: $100 today has a different value than $100 in five years. This is because one can invest $100 today in an interest-bearing bank account or any other investment, and that money will grow or shrink due to the rate of return. Also, if $100 today allows the purchase of an item, it is possible that $100 will not be enough to purchase the same item in five years, because of inflation (increase in purchase price). An investor who has some money has two options: to spend it right now or to invest it. The financial compensation for saving it (and not spending it) is that the money value will accrue through the interest that he will receive from a borrower (the bank account on which he has the money deposited). Therefore, to evaluate the real worthiness of an amount of money today after a given period of time, economic agents compound the amount of money at a given interest rate. Most actuarial calculations use the risk-free interest rate, which corresponds to the minimum guaranteed rate provided by a bank's savings account, for example. If one wants to compare their change in purchasing power, then they should use the real interest rate (nominal interest rate minus inflation rate). The operation of evaluating a present value into the future value is called capitalization (how much will $100 today be worth in 5 years?). The reverse operation, which consists in evaluating the present value of a future amount of money, is called discounting (how much $100 that will be r
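In symbols, with present value PV, periodic rate i and n periods, simple interest gives FV = PV·(1 + i·n), while compounding gives FV = PV·(1 + i)^n. A minimal Python illustration:

```python
# Future value under simple and compound interest.
def fv_simple(pv, rate, periods):
    return pv * (1 + rate * periods)

def fv_compound(pv, rate, periods):
    return pv * (1 + rate) ** periods

# $100 for 5 years at 5% per year:
print(fv_simple(100, 0.05, 5))    # 125.0
print(fv_compound(100, 0.05, 5))  # ~127.63
```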
https://en.wikipedia.org/wiki/OOFEM
OOFEM is a free and open-source multi-physics finite element code with an object-oriented architecture. The aim of this project is to provide an efficient and robust tool for FEM computations as well as to offer a highly modular and extensible environment for development. Main features Solves various linear and nonlinear problems from structural, thermal and fluid mechanics. In particular, it includes many material models for nonlinear fracture mechanics of quasibrittle materials, such as concrete. Efficient parallel processing support based on domain decomposition and message passing paradigms. Direct as well as iterative solvers are available. Direct solvers include symmetric and unsymmetric skyline solvers and a sparse direct solver. Iterative solvers support many sparse storage formats and come with various preconditioners. Interfaces to third-party linear and eigenvalue solver libraries are available, including IML, PETSc, SLEPc, and SPOOLES. Support for eXtended Finite Elements (XFEM) and isogeometric analysis (IGA). License OOFEM is free, open-source software, released under the GNU Lesser General Public License, version 2.1 or any later version. See also List of numerical analysis software List of finite element software packages
https://en.wikipedia.org/wiki/Fast%20Pair
The Google Fast Pair Service, or simply Fast Pair, is Google's proprietary standard for quickly pairing Bluetooth devices when they come in close proximity for the first time using Bluetooth Low Energy (BLE). It was announced in October 2017 and initially designed for connecting audio devices such as speakers, headphones and car kits with the Android operating system. In 2018, Google added support for ChromeOS devices, and in 2019, Google announced that Fast Pair connections could now be synced with other Android devices with the same Google Account. Google has partnered with Bluetooth SoC designers including Qualcomm, Airoha Technology, and BES Technic to add Fast Pair support to their SDKs. In May 2019, Qualcomm announced their Smart Headset Reference Design, Qualcomm QCC5100, QCC3024 and QCC3034 SoC series with support for Fast Pair and Google Assistant. In July 2019, Google announced True Wireless Features (TWF), Find My Device and enhanced Connected Device Details. List of supported devices Earbuds 1More EVO 1More Dual Driver BT ANC Anker Soundcore Liberty 4 NC Anker Spirit Pro GVA Beats Studio Buds Cleer Ally Plus Dizo Wireless Dash Neckband Dizo Go pods D Jabra Elite 2 Jabra Elite 3 Jabra Elite 4 Active Jabra Elite 10 Jaybird Tarah Jaybird Vista 2 JBL Live Free NC+ TWS JBL Live Tune 225 TWS JBL Reflect Mini NC TWS JBL Peak II JBL Tour Pro+ JBL Club Pro+ LG Tone Free (All devices) Marshall Minor III Microsoft Surface Earbuds Moto Buds 600 ANC Nothing ear (1) Nothing ear (2) Nothing ear (stick) OnePlus Buds OnePlus Buds Z OnePlus Buds Z2 OnePlus Buds Pro 2 OnePlus Buds Pro 2R OPPO Enco Air3 OPPO Enco Air3 Pro Google Pixel Buds (First generation) Google Pixel Buds A-Series Google Pixel Buds (Second generation) Google Pixel Buds Pro Realme Buds Air Realme Buds Air 2 Realme Buds Air 3 Realme Buds Air 3s Realme Buds Air 5 Pro Realme Buds Air Neo Realme Buds Air Pro Realme Buds Q2 Realme Buds Q2S Realme Techlife Buds
https://en.wikipedia.org/wiki/Multilink%20striping
Multilink striping is a type of data striping used in telecommunications to achieve higher throughput or increase the resilience of a network connection by data aggregation over multiple network links simultaneously. Multipath routing and multilink striping are often used synonymously. However, there are some differences. When applied to end-hosts, multilink striping requires multiple physical interfaces and access to multiple networks at once. On the other hand, multiple routing paths can be obtained with a single end-host interface, either within the network, or, in case of a wireless interface and multiple neighboring nodes, at the end-host itself. See also RFC 1990, The PPP Multilink Protocol (MP) Link aggregation Computer networking
https://en.wikipedia.org/wiki/Dember%20effect
In physics, the Dember effect is when the electron current from a cathode subjected to both illumination and a simultaneous electron bombardment is greater than the sum of the photoelectric current and the secondary emission current. History Discovered by Harry Dember (1882–1943) in 1925, this effect is due to the sum of the excitations of an electron by two means: photonic illumination and electron bombardment (i.e. the sum of the two excitations extracts the electron). In Dember’s initial study, he referred only to metals; however, more complex materials have been analyzed since then. Photoelectric effect The photoelectric effect due to the illumination of the metallic surface extracts electrons (if the energy of the photon is greater than the work function) and excites the electrons which the photons don’t have the energy to extract. In a similar process, the electron bombardment of the metal both extracts and excites electrons inside the metal. If one considers a constant  and increases , it can be observed that  has a maximum of about 150 times . On the other hand, considering a constant  and increasing the intensity of the illumination, the supplementary current tends to saturate. This is due to the usage in the photoelectric effect of all the electrons excited (sufficiently) by the primary electrons of . See also Anomalous photovoltaic effect Photo-Dember
https://en.wikipedia.org/wiki/Ronald%20M.%20Foster
Ronald Martin Foster (3 October 1896 – 2 February 1998) was an American mathematician at Bell Labs whose work was of significance regarding electronic filters for use on telephone lines. He published an important paper, A Reactance Theorem (see Foster's reactance theorem), which quickly inspired Wilhelm Cauer to begin his program of network synthesis filters, which put the design of filters on a firm mathematical footing. He is also known for the Foster census of cubic, symmetric graphs and the 90-vertex cubic symmetric Foster graph. Education Foster was a Harvard College graduate, S.B. (Mathematics), summa cum laude, Class of 1917. He also received two honorary Sc.D.s. Professional career 1917 – 1943 Research & Development Department (later Bell Labs), American Telephone & Telegraph, as a Research Engineer (Applied Mathematician), New York City, New York. 1943 – 1963 Professor and Head of Department of Mathematics, Polytechnic Institute of Brooklyn, Brooklyn, New York City, New York. Publications Campbell, GA, Foster, RM, Fourier Integrals for Practical Applications, Bell System Technical Journal, pp 639–707, 1928. Pierce, BO, Foster, RM, A Short Table of Integrals, Fourth Edition, Ginn and Company, pp 1–189, 1956.
https://en.wikipedia.org/wiki/Global%20Historical%20Climatology%20Network
The Global Historical Climatology Network (GHCN) is a data set of temperature, precipitation and pressure records managed by the National Climatic Data Center (NCDC), Arizona State University and the Carbon Dioxide Information Analysis Center. The aggregate data are collected from many continuously reporting fixed stations at the Earth's surface. In 2012, there were 25,000 stations within 180 countries and territories. Some examples of monitored variables are the total daily precipitation and the maximum and minimum temperature. A caveat is that 66% of the stations report only the daily precipitation. The original idea for the application of the GHCN-M data was to provide climatic analysis for data sets that require daily monitoring. Its purpose is to create a global base-line data set that can be compiled from stations worldwide. This work has often been used as a foundation for reconstructing past global temperatures, and was used in previous versions of two of the best-known reconstructions: that prepared by the NCDC, and that prepared by NASA at its Goddard Institute for Space Studies (GISS) temperature set. The average temperature record is 60 years long, with ~1650 records greater than 100 years and ~220 greater than 150 years (based on GHCN v2 in 2006). The earliest data included in the database were collected in 1697. History The initial version of the Global Historical Climatology Network was developed in the summer of 1992. This first version, known as Version 1, was a collaboration between research stations and data sets similar to the World Weather Records program and the World Monthly Surface Station Climatology from the National Center for Atmospheric Research. All of the stations have at least 10 years of data, two-fifths have more than 50 years of data, and one-tenth have 100 years of data. Version 1, or more commonly notated as V1, was the collection of monthly mean temperatures from 6,000 stations. There were, as of 2022, 3 subsequent versions of the
https://en.wikipedia.org/wiki/Apelin
Apelin (also known as APLN) is a peptide that in humans is encoded by the APLN gene. Apelin is one of two endogenous ligands for the G-protein-coupled APJ receptor that is expressed at the surface of some cell types. It is widely expressed in various organs such as the heart, lung, kidney, liver, adipose tissue, gastrointestinal tract, brain, adrenal glands, endothelium, and human plasma. Discovery Apelin is a peptide hormone that was identified in 1998 by Masahiko Fujino and his colleagues at Gunma University and Takeda Pharmaceutical Company. In 2013, a second peptide hormone named Elabela was found by Bruno Reversade to also act as an endogenous ligand of the APLNR. Biosynthesis The apelin gene encodes a pre-proprotein of 77 amino acids, with a signal peptide in the N-terminal region. After translocation into the endoplasmic reticulum and cleavage of the signal peptide, the proprotein of 55 amino acids may generate several active fragments: a 36-amino-acid peptide corresponding to the sequence 42–77 (apelin 36), a 17-amino-acid peptide corresponding to the sequence 61–77 (apelin 17) and a 13-amino-acid peptide corresponding to the sequence 65–77 (apelin 13). This latter fragment may also undergo pyroglutamylation at its N-terminal glutamine residue. However, the presence and/or the concentrations of these peptides in human plasma have been questioned. Recently, 46 different apelin peptides ranging from apelin 55 to apelin 12 have been identified in bovine colostrum, including C-terminally truncated isoforms. Physiological functions The sites of receptor expression are linked to the different functions played by apelin in the organism. Vascular Vascular expression of the receptor participates in the control of blood pressure, and its activation promotes the formation of new blood vessels (angiogenesis). The blood pressure-lowering (hypotensive) effect of apelin results from the activation of receptors expressed at the surface of endothelial cel
https://en.wikipedia.org/wiki/Juvenile%20%28organism%29
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names). Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults. Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs. In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity. Examples For animal larval juveniles, see larva Juvenile birds or bats can be called fledglings For cat juveniles, see kitten For dog juveniles, see puppy For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood
https://en.wikipedia.org/wiki/Luck
Luck is the phenomenon and belief that defines the experience of improbable events, especially improbably positive or negative ones. The naturalistic interpretation is that positive and negative events may happen at any time, both due to random and non-random natural and artificial processes, and that even improbable events can happen by random chance. In this view, the epithet "lucky" or "unlucky" is a descriptive label that refers to an event's positivity, negativity, or improbability. Supernatural interpretations of luck consider it to be an attribute of a person or object, or the result of a favorable or unfavorable view of a deity upon a person. These interpretations often prescribe how luckiness or unluckiness can be obtained, such as by carrying a lucky charm or offering sacrifices or prayers to a deity. Saying someone is "born lucky" may hold different meanings, depending on the interpretation: it could simply mean that they have been born into a good family or circumstance; or that they habitually experience improbably positive events, due to some inherent property, or due to the lifelong favor of a god or goddess in a monotheistic or polytheistic religion. Many superstitions are related to luck, though these are often specific to a given culture or set of related cultures, and sometimes contradictory. For example, lucky symbols include the number 7 in Christian-influenced cultures and the number 8 in Chinese-influenced cultures. Unlucky symbols and events include entering and leaving a house by different doors or breaking a mirror in Greek culture, throwing rocks into the wind in Navajo culture, and ravens in Western culture. Some of these associations may derive from related facts or desires. For example, in Western culture opening an umbrella indoors might be considered unlucky partly because it could poke someone in the eye, whereas shaking hands with a chimney sweep might be considered lucky partly because it is a kind but unpleasant thing to do g
https://en.wikipedia.org/wiki/Gain%20compression
Gain compression is a reduction in differential or slope gain caused by nonlinearity of the transfer function of the amplifying device. This nonlinearity may be caused by heat due to power dissipation or by overdriving the active device beyond its linear region. It is a large-signal phenomenon of circuits. Relevance Gain compression is relevant in any system with a wide dynamic range, such as audio or RF. It is more common in tube circuits than transistor circuits, due to topology differences, possibly causing the differences in audio performance called "valve sound". The front-end RF amps of radio receivers are particularly susceptible to this phenomenon when overloaded by a strong unwanted signal. Audio effects A tube radio or tube amplifier will increase in volume to a point, and then, as the input signal extends beyond the linear range of the device, the effective gain is reduced, altering the shape of the waveform. The effect is also present in transistor circuits. The extent of the effect depends on the topology of the amplifier. Differences between clipping and compression Clipping, as a form of signal compression, differs from the operation of the typical studio audio level compressor, in which gain compression is not instantaneous (it is delayed in time via attack and release settings). Clipping destroys any audio information which is over a certain threshold. Compression and limiting change the shape of the entire waveform, not just the shape of the waveform above the threshold. This is why it is possible to limit and compress with very high ratios without causing distortion. Limiting or clipping Gain is a linear operation. Gain compression is not linear and, as such, its effect is one of distortion, due to the nonlinearity of the transfer characteristic, which also causes a loss of 'slope' or 'differential' gain. So the output is less than expected using the small-signal gain of the amplifier. In clipping, the signal is abruptly limited to a cert
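A brief numerical illustration (an editorial sketch, not from the article): a smooth compressive transfer curve such as tanh loses effective gain gradually as drive increases, while a hard clipper stays exactly linear below its threshold and abruptly truncates above it.

```python
# Gain compression demo: a tanh transfer characteristic reduces effective
# gain as the input level grows; np.clip models a hard clipper instead.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)          # unit-amplitude test tone

for level in (0.1, 0.5, 1.0, 2.0):
    soft = np.tanh(level * x)          # compressive transfer function
    hard = np.clip(level * x, -1, 1)   # hard clipper with threshold 1
    # Effective gain = peak output / input level: falls steadily for tanh,
    # but stays exactly 1 for the clipper until the threshold is reached.
    print(level, soft.max() / level, hard.max() / level)
```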
https://en.wikipedia.org/wiki/Inosinic%20acid
Inosinic acid or inosine monophosphate (IMP) is a nucleotide (that is, a nucleoside monophosphate). Widely used as a flavor enhancer, it is typically obtained from chicken byproducts or other meat industry waste. Inosinic acid is important in metabolism. It is the ribonucleotide of hypoxanthine and the first nucleotide formed during the synthesis of purine nucleotides. It can also be formed by the deamination of adenosine monophosphate by AMP deaminase. It can be hydrolysed to inosine. The enzyme deoxyribonucleoside triphosphate pyrophosphohydrolase, encoded by YJR069C in Saccharomyces cerevisiae and containing (d)ITPase and (d)XTPase activities, hydrolyzes inosine triphosphate (ITP), releasing pyrophosphate and IMP. Important derivatives of inosinic acid include the purine nucleotides found in nucleic acids and adenosine triphosphate, which is used to store chemical energy in muscle and other tissues. In the food industry, inosinic acid and its salts such as disodium inosinate are used as flavor enhancers. It is known as E number reference E630. Inosinate synthesis The inosinate synthesis is complex, beginning with 5-phosphoribosyl-1-pyrophosphate (PRPP). Enzymes taking part in IMP synthesis constitute a multienzyme complex in the cell. Evidence demonstrates that there are multifunctional enzymes, and some of them catalyze non-sequential steps in the pathway. Synthesis of other purine nucleotides Within a few steps, inosinate becomes AMP or GMP. Both compounds are RNA nucleotides. AMP differs from inosinate by the replacement of IMP's carbon-6 carbonyl with an amino group. The interconversion of AMP and IMP occurs as part of the purine nucleotide cycle. GMP is formed by the oxidation of inosinate to xanthylate (XMP), followed by the addition of an amino group at carbon 2. The hydrogen acceptor in the inosinate oxidation is NAD+. Finally, carbon 2 gains the amino group at the expense of an ATP molecule (which becomes AMP + 2 Pi). While AMP synthesis requires GTP, GMP synthesis uses ATP.
https://en.wikipedia.org/wiki/Breakage-fusion-bridge%20cycle
Breakage-fusion-bridge (BFB) cycle (also breakage-rejoining-bridge cycle) is a mechanism of chromosomal instability, discovered by Barbara McClintock in the late 1930s. Mechanism The BFB cycle begins when the end region of a chromosome, called its telomere, breaks off. When that chromosome subsequently replicates, it forms two sister chromatids which both lack a telomere. Since telomeres appear at the ends of chromatids, and function to prevent their ends from fusing with other chromatids, the lack of a telomere on these two sister chromatids causes them to fuse with one another. During anaphase the sister chromatids will form a bridge, where the centromere in one of the sister chromatids will be pulled in one direction of the dividing cell, while the centromere of the other will be pulled in the opposite direction. Being pulled in opposite directions will cause the two sister chromatids to break apart from each other, but not necessarily at the site where they fused. This results in the two daughter cells receiving unequal chromatids. Since the two resulting chromatids lack telomeres, when they replicate the BFB cycle will repeat, and will continue every subsequent cell division until those chromatids receive a telomere, usually from a different chromatid through the process of translocation. Implications in tumors The presence of chromosomal aberrations has been demonstrated in every type of malignant tumor. Although BFB cycles are a major source of genome instability, the rearrangement signature predicted by this model is not commonly present in cancer genomes without other chromosome alterations like chromothripsis. BFB cycles and chromothripsis might be mechanistically related. The chromosome bridge formation could trigger a mutational cascade through the accumulation of chromothripsis in each cell division. This mechanism could explain the evolution and subclonal heterogeneity of some human cancers. Detection Breakage-fusion-bridge creates several identifiable
https://en.wikipedia.org/wiki/Taxonomic%20database
A taxonomic database is a database created to hold information on biological taxa – for example groups of organisms organized by species name or other taxonomic identifier – for efficient data management and information retrieval. Taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online; to underpin the operation of web-based species information systems; as a part of biological collection management (for example in museums and herbaria); as well as providing, in some cases, the taxon management component of broader science or biology information systems. They are also a fundamental contribution to the discipline of biodiversity informatics. Goals Taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. Taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example: beetles in a defined region, mammals globally, or all described taxa in the tree of life. A taxonomic database may incorporate organism identifiers (scientific name, author, and – for zoological taxa – year of original publication), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon (such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc.). Some databases, such as the Global Biodiversity Information Facility (GBIF) database and the Barcode of Life Data System, store the DNA barcode of a taxon if one exists (also called the Barcode Index Number (BIN) which may be assigned, for example, by the International Barcode of Life project (iBOL) or UNITE, a database for fungal DNA barcoding). A taxonomic database aims to accurately model the characteristics of interest that are relevant to the organisms which are in scope for the intended coverage and usage of the system. For exam
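A minimal sketch of the kind of record such a system might store, written as an illustrative Python dataclass whose field names follow the attributes listed above (they are hypothetical, not any real system's schema):

```python
# Illustrative taxon record with the organism identifiers and attributes
# described above; field names are hypothetical, not a real system's schema.
from dataclasses import dataclass, field

@dataclass
class TaxonRecord:
    scientific_name: str
    author: str
    year: int | None = None             # year of original publication (zoological taxa)
    synonyms: list[str] = field(default_factory=list)
    distribution: str = ""
    conservation_status: str = ""
    dna_barcode_bin: str | None = None  # e.g. a Barcode Index Number, if one exists

rec = TaxonRecord("Pan troglodytes", "Blumenbach", 1775,
                  synonyms=["Simia troglodytes"],
                  distribution="Central and West Africa")
print(rec.scientific_name, rec.year)
```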
https://en.wikipedia.org/wiki/Pointwise%20convergence
In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function. It is weaker than uniform convergence, to which it is often compared. Definition Suppose that $X$ is a set and $Y$ is a topological space, such as the real or complex numbers or a metric space, for example. A net or sequence of functions $f_n : X \to Y$ all having the same domain and codomain is said to converge pointwise to a given function $f : X \to Y$, often written as $f_n \to f$ pointwise, if (and only if) $\lim_{n\to\infty} f_n(x) = f(x)$ for every $x$ in $X$. The function $f$ is said to be the pointwise limit function of the $f_n$. Sometimes, authors use the term bounded pointwise convergence when there is a constant $C$ such that $|f_n(x)| \leq C$ for all $n$ and $x$. Properties This concept is often contrasted with uniform convergence. To say that $f_n \to f$ uniformly means that $\lim_{n\to\infty} \sup \{ d(f_n(x), f(x)) : x \in X \} = 0$, where $X$ is the common domain of $f$ and the $f_n$, and $\sup$ stands for the supremum. That is a stronger statement than the assertion of pointwise convergence: every uniformly convergent sequence is pointwise convergent, to the same limiting function, but some pointwise convergent sequences are not uniformly convergent. For example, if $f_n$ is a sequence of functions defined by $f_n(x) = x^n$, then $f_n \to 0$ pointwise on the interval $[0, 1)$ but not uniformly. The pointwise limit of a sequence of continuous functions may be a discontinuous function, but only if the convergence is not uniform. For example, $f(x) = \lim_{n\to\infty} \cos(\pi x)^{2n}$ takes the value $1$ when $x$ is an integer and $0$ when $x$ is not an integer, and so is discontinuous at every integer. The values of the functions $f_n$ need not be real numbers, but may be in any topological space, in order that the concept of pointwise convergence make sense. Uniform convergence, on the other hand, does not make sense for functions taking values in topological spaces generally, but makes sense for functions taking values in metric spaces, and, more generally, in uniform spaces. Topology Let $Y^X$ denote the set of all functions from some given set $X$ into some topological space $Y$. As described in the article on characterizations of the category of topological spa
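A quick numerical illustration of the x^n example (illustrative Python): at any fixed x in [0, 1) the values tend to 0, yet for every n there are points close enough to 1 where x^n remains near 1, so the supremum distance does not shrink and the convergence is not uniform.

```python
# f_n(x) = x**n on [0, 1): at each fixed x the values tend to 0, but
# sup |f_n| stays near 1 for every n (take x close enough to 1), so the
# convergence is pointwise but not uniform.
for n in (10, 100, 1000):
    x_fixed = 0.5
    x_near_one = 1 - 1 / (10 * n)
    print(n, x_fixed ** n, x_near_one ** n)  # first column -> 0, second stays ~0.9
```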
https://en.wikipedia.org/wiki/Ehlers%E2%80%93Geren%E2%80%93Sachs%20theorem
The Ehlers–Geren–Sachs theorem, published in 1968 by Jürgen Ehlers, P. Geren and Rainer K. Sachs, shows that if, in a given universe, all freely falling observers measure the cosmic background radiation to have exactly the same properties in all directions (that is, they measure the background radiation to be isotropic), then that universe is an isotropic and homogeneous FLRW spacetime, provided one uses a kinetic picture and the collision term vanishes (the so-called Vlasov case), or if there is so-called detailed balance. This result was later extended to the full Boltzmann case by R. Treciokas and G.F.R. Ellis. Using the fact that, as measured from Earth, the cosmic microwave background is indeed highly isotropic—the temperature characterizing this thermal radiation varies only by about a ten-thousandth of a kelvin with the direction of observation—and making the Copernican assumption that Earth does not occupy a privileged cosmic position, this constitutes the strongest available evidence for our own universe's homogeneity and isotropy, and hence for the foundation of current standard cosmological models. Strictly speaking, this conclusion has a potential flaw. While the Ehlers–Geren–Sachs theorem concerns only exactly isotropic measurements, it is known that the background radiation does have minute irregularities. This was addressed by a generalization published in 1995 by W. R. Stoeger, Roy Maartens and George Ellis, which shows that an analogous result holds for observers who measure a nearly isotropic background radiation, and can justly infer to live in a nearly FLRW universe. However, the paper by Stoeger et al. assumes that derivatives of the cosmic background temperature multipoles are bounded in terms of the multipoles themselves. The derivatives of the multipoles are not directly accessible to us and would require observations over time and space intervals on cosmological scales. In 1999 John Wainwright, M. J. Hancock and Claes Uggla showed a counte
https://en.wikipedia.org/wiki/Point%20of%20subjective%20simultaneity
In multisensory integration research, Point of Subjective Simultaneity (PSS), typically measured in milliseconds, is defined as the stimulus onset asynchrony (SOA) at which a pair of signals from different sensory modalities is perceived as most simultaneous or synchronous. In other words, at the PSS, an individual is most likely to integrate information from a pair of signals in the two given modalities. PSS Computation In behavioral experiments, test individuals are usually presented with pairs of signals from different sensory modalities (such as visual and audio) at different SOAs and asked to make either synchrony judgements (i.e. whether the pair of signals appears to have arrived at the exact same time) or temporal order judgements (i.e. which signal appears to have arrived earlier than the other). Results from an individual's synchrony judgement tasks are typically fitted to a Gaussian curve, with the proportion of trials perceived as synchronous (between 0 and 1) on the y-axis and SOA (in milliseconds) on the x-axis, and the PSS of this individual is defined as the mean of the fitted Gaussian. Alternatively, results from an individual's temporal order judgement tasks are typically fitted to an S-shaped logistic psychometric curve, with the proportion of trials on which the subject responds that signals from one particular modality have arrived first on the y-axis and SOA (in ms) on the x-axis. In this setting, the PSS is defined as the SOA corresponding to the point at which the value on the y-axis is 50%, where this individual is most unsure about which signal arrived first. PSS in Autism Research Studies have suggested that individuals with Autism Spectrum Disorder (ASD) process sensory information differently than their typically developing (TD) peers. Specifically, research has found that the two groups exhibit different levels of multisensory temporal recalibration by comparing how much the PSS changes in individuals with ASD and TD individuals after presenting them with bia
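As an illustration of the logistic-fitting procedure just described, here is a minimal sketch in Python; the SOA grid, response proportions, and starting values are made-up assumptions, not data from any study.

```python
# A minimal sketch (illustrative data only): estimating PSS from temporal
# order judgement (TOJ) responses by fitting a logistic psychometric curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    """Proportion of 'audio came first' responses as a function of SOA (ms)."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])  # SOA in ms
p_audio_first = np.array([0.02, 0.08, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.99])

# The PSS is the SOA at which the fitted curve crosses 50%.
(pss, slope), _ = curve_fit(logistic, soas, p_audio_first, p0=(0.0, 50.0))
print(f"Estimated PSS: {pss:.1f} ms")
```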
https://en.wikipedia.org/wiki/Patrick%20C.%20Fischer
Patrick Carl Fischer (December 3, 1935 – August 26, 2011) was an American computer scientist, a noted researcher in computational complexity theory and database theory, and a target of the Unabomber. Biography Fischer was born December 3, 1935, in St. Louis, Missouri. His father, Carl H. Fischer, became a professor of actuarial mathematics at the University of Michigan in 1941, and the family moved to Ann Arbor, Michigan, where he grew up. Fischer himself went to the University of Michigan, receiving a bachelor's degree in 1957 and an MBA in 1958. He went on to graduate studies at the Massachusetts Institute of Technology, earning a Ph.D. in 1962 under the supervision of Hartley Rogers, Jr., with a thesis on the subject of recursion theory. After receiving his Ph.D., Fischer joined the faculty of Harvard University as an assistant professor of applied mathematics; his students at Harvard included Albert R. Meyer, through whom Fischer has over 250 academic descendants, as well as noted computer scientists Dennis Ritchie and Arnold L. Rosenberg. In 1965, he moved to a tenured position as associate professor of computer science at Cornell University. After teaching at the University of British Columbia from 1967 to 1968 (where he met his second wife Charlotte Froese) he moved to the University of Waterloo, where he became a professor of applied analysis and computer science. At Waterloo, he was department chair from 1972 to 1974. He then moved to Pennsylvania State University in 1974, where he headed the computer science department, and moved again to Vanderbilt University as department chair in 1980. He taught at Vanderbilt for 18 years, and was chair for 15 years. He retired in 1998, and died of stomach cancer on August 26, 2011, in Rockville, Maryland. Like his father, Fischer became a fellow of the Society of Actuaries. Fischer's second wife, Charlotte Froese Fischer, was also a computer science professor at Vanderbilt University and the University of Bri
https://en.wikipedia.org/wiki/Proximity%20ligation%20assay
Proximity ligation assay (in situ PLA) is a technology that extends the capabilities of traditional immunoassays to include direct detection of proteins, protein interactions, extracellular vesicles and post-translational modifications with high specificity and sensitivity. Protein targets can be readily detected and localized with single molecule resolution and objectively quantified in unmodified cells and tissues. Utilizing only a few cells, sub-cellular events, even transient or weak interactions, are revealed in situ and sub-populations of cells can be differentiated. Within hours, results from conventional co-immunoprecipitation and co-localization techniques can be confirmed. The PLA principle Two primary antibodies raised in different species recognize the target antigen on the proteins of interest (Figure 1). Secondary antibodies (2° Ab) directed against the constant regions of the different primary antibodies, called PLA probes, bind to the primary antibodies (Figure 2). Each of the PLA probes has a short sequence-specific DNA strand attached to it. If the PLA probes are in proximity (that is, if the two original proteins of interest are in proximity, or part of a protein complex, as shown in the figures), the DNA strands can participate in rolling circle DNA synthesis upon addition of two other sequence-specific DNA oligonucleotides together with appropriate substrates and enzymes (Figure 3). The DNA synthesis reaction results in several-hundredfold amplification of the DNA circle. Next, fluorescently labeled complementary oligonucleotide probes are added, and they bind to the amplified DNA (Figure 4). The resulting high concentration of fluorescence is easily visible as a distinct bright spot when viewed with a fluorescence microscope. In the specific case shown (Figure 5), the nucleus is enlarged because this is a B-cell lymphoma cell. The two proteins of interest are a B cell receptor and MYD88. The finding of interaction in the cytoplasm was inte
https://en.wikipedia.org/wiki/Marine%20heatwave
A marine heatwave (abbreviated as MHW) is a period of abnormally high ocean temperatures relative to the average seasonal temperature in a particular marine region. Marine heatwaves are caused by a variety of factors, including shorter-term weather phenomena such as fronts; intraseasonal, annual, or decadal modes like El Niño events; and longer-term changes like climate change. Marine heatwaves can have biological impacts on ecosystems at individual, population, and community levels. MHWs have led to severe biodiversity changes such as coral bleaching, sea star wasting disease, harmful algal blooms, and mass mortality of benthic communities. Unlike heatwaves on land, marine heatwaves can extend for millions of square kilometers, persist for weeks to months or even years, and occur at subsurface levels. Major marine heatwave events such as the Great Barrier Reef in 2002, the Mediterranean in 2003, the Northwest Atlantic in 2012, and the Northeast Pacific in 2013–2016 have had drastic and long-term impacts on the oceanographic and biological conditions in those areas. The IPCC Sixth Assessment Report stated in 2022 that "marine heatwaves are more frequent [...], more intense and longer [...] since the 1980s, and since at least 2006 very likely attributable to anthropogenic climate change". This confirms earlier findings, for example in the Special Report on the Ocean and Cryosphere in a Changing Climate from 2019, which stated that it is "virtually certain" that the global ocean has absorbed more than 90% of the excess heat in our climate system, the rate of ocean warming has doubled, and marine heatwave events have doubled in frequency since 1982. Definition The IPCC Sixth Assessment Report defines marine heatwave as follows: "A period during which water temperature is abnormally warm for the time of the year relative to historical temperatures, with that extreme warmth persisting for days to months. The phenomenon can manifest in any place in the ocean and at scales of up to thousands of
https://en.wikipedia.org/wiki/Lipid-gated%20ion%20channels
Lipid-gated ion channels are a class of ion channels whose conductance of ions through the membrane depends directly on lipids. Classically the lipids are membrane-resident anionic signaling lipids that bind to the transmembrane domain on the inner leaflet of the plasma membrane with properties of a classic ligand. Other classes of lipid-gated channels include the mechanosensitive ion channels that respond to lipid tension, thickness, and hydrophobic mismatch. A lipid ligand differs from a lipid cofactor in that a ligand derives its function by dissociating from the channel, while a cofactor typically derives its function by remaining bound. PIP2-gated channels Phosphatidylinositol 4,5-bisphosphate (PIP2) was the first and remains the best studied lipid to gate ion channels. PIP2 is a cell membrane lipid, and its role in gating ion channels represents a novel role for the molecule. Kir channels: PIP2 binds to and directly activates inwardly rectifying potassium channels (Kir). The lipid binds in a well-defined ligand binding site in the transmembrane domain and causes the helices to splay apart, opening the channel. All members of the Kir super-family of potassium channels are thought to be directly gated by PIP2. Kv7 channels: PIP2 binds to and directly activates Kv7.1. In the same study PIP2 was shown to function as a ligand: when the channel was reconstituted into lipid vesicles with PIP2 the channel opened; when PIP2 was omitted the channel was closed. TRP channels: TRP channels were perhaps the first class of channels recognized as lipid-gated. PIP2 regulates the conductance of most TRP channels either positively or negatively. For TRPV5, binding of PIP2 to a site in the transmembrane domain caused a conformational change that appeared to open the conduction pathway, suggesting the channel is classically lipid-gated. A PIP2-compatible site was found in TRPV1, but whether the lipid alone can gate the channel has not been shown. Other TRP channels that directly bin
https://en.wikipedia.org/wiki/American%20Journal%20of%20Human%20Biology
The American Journal of Human Biology is a peer-reviewed scientific journal covering human biology. It is the official publication of the Human Biology Association (formerly known as the Human Biology Council). The journal publishes original research, theoretical articles, reviews, and other communications connected to all aspects of human biology, health and disease. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.937, ranking it 32nd out of 93 journals in the category "Anthropology" and 55th out of 93 journals in the category "Biology".
https://en.wikipedia.org/wiki/Coelenteramide
Coelenteramide is the oxidized product, or oxyluciferin, of the bioluminescent reactions in many marine organisms that use coelenterazine. It was first isolated as a blue fluorescent protein from Aequorea victoria after the animals were stimulated to emit light. Under basic conditions, the compound will break down further into coelenteramine and 4-hydroxyphenylacetic acid. It is an aminopyrazine.
https://en.wikipedia.org/wiki/The%20Vital%20Question
The Vital Question is a book by the English biochemist Nick Lane about the way the evolution and origin of life on Earth was constrained by the provision of energy. The book was well received by critics; The New York Times, for example, found it "seductive and often convincing" though the reviewer considered much of it speculative beyond the evidence provided. The Guardian wrote that the book presented hard evidence and tightly interlocking theory on a question once thought inaccessible to science, the origin of life. New Scientist found the book's arguments powerful and persuasive with many testable ideas; that it was not easy to read was compensated by the "incredible, epic story" that it told. The Telegraph wrote that the book succeeded brilliantly as science writing, expanding the reader's horizons with a gripping narrative. Context Early theories of the origin of life included spontaneous generation from non-living matter and panspermia, the arrival of life on earth from other bodies in space. The question of how life originated became urgent when Charles Darwin's 1859 On the Origin of Species became widely accepted by biologists. The evolution of new species by splitting off from older ones implied that all life forms were derived from a few such forms, perhaps only one, as Darwin had suggested at the end of his book. Darwin suggested that life could have originated in some "warm little pond" containing a suitable mixture of chemical compounds. The question has continued to be debated into the 21st century. Nick Lane is a biochemist at University College London; he researches "evolutionary biochemistry and bioenergetics, focusing on the origin of life and the evolution of complex cells." He has become known as a science writer, having written four books about evolutionary biochemistry. Book Synopsis In the book, Lane discusses what he considers to be a major gap in biology: why life operates the way that it does, and how it began. In his view as a bio
https://en.wikipedia.org/wiki/Linamarin
Linamarin is a cyanogenic glucoside found in the leaves and roots of plants such as cassava, lima beans, and flax. It is a glucoside of acetone cyanohydrin. Upon exposure to enzymes and gut flora in the human intestine, linamarin and its methylated relative lotaustralin can decompose to the toxic chemical hydrogen cyanide; hence food uses of plants that contain significant quantities of linamarin require extensive preparation and detoxification. Ingested and absorbed linamarin is rapidly excreted in the urine and the glucoside itself does not appear to be acutely toxic. Consumption of cassava products with low levels of linamarin is widespread in the lowland tropics. Ingestion of food prepared from insufficiently processed cassava roots with high linamarin levels has been associated with dietary toxicity, particularly with the upper motor neuron disease known as konzo in the African populations in which it was first described by Trolli and later through the research network initiated by Hans Rosling. However, the toxicity is believed to be induced by ingestion of acetone cyanohydrin, the breakdown product of linamarin. Dietary exposure to linamarin has also been reported as a risk factor in developing glucose intolerance and diabetes, although studies in experimental animals have been inconsistent in reproducing this effect and may indicate that the primary effect is in aggravating existing conditions rather than inducing diabetes on its own. The generation of cyanide from linamarin is usually enzymatic and occurs when linamarin is exposed to linamarase, an enzyme normally expressed in the cell walls of cassava plants. Because the resulting cyanide derivatives are volatile, processing methods that induce such exposure are common traditional means of cassava preparation; foodstuffs are usually made from cassava after extended blanching, boiling, or fermentation. Food products made from cassava plants include garri (toasted cassava tubers), porridge-like fufu, the d
https://en.wikipedia.org/wiki/BSD%20domain
In molecular biology, the BSD domain is an approximately 60-amino-acid-long protein domain named after the BTF2-like transcription factors, synapse-associated proteins and DOS2-like proteins in which it is found. It is also found in several hypothetical proteins. It occurs in one or two copies in a variety of species ranging from primitive protozoa to humans, and can be found associated with other domains such as the BTB domain or the U-box in multidomain proteins. Its function is as yet unknown. Secondary structure prediction indicates the presence of three predicted alpha helices, which probably form a three-helical bundle in small domains. The third predicted helix contains neighbouring phenylalanine and tryptophan residues—less common amino acids that are invariant in all the BSD domains identified and that are the domain's most striking sequence features. Some proteins known to contain one or two BSD domains are: Mammalian TFIIH basal transcription factor complex p62 subunit (GTF2H1). Yeast RNA polymerase II transcription factor B 73 kDa subunit (TFB1), the homologue of BTF2. Yeast DOS2 protein, involved in single-copy DNA replication and ubiquitination. Drosophila synapse-associated protein SAP47. Mammalian SYAP1. Various Arabidopsis thaliana (mouse-ear cress) hypothetical proteins.
https://en.wikipedia.org/wiki/Quadrangular%20space
The quadrangular space, also known as the quadrilateral space [of Velpeau] and the foramen humerotricipitale, is one of the three spaces in the axillary space. The other two spaces are: triangular space and triangular interval. Structure The quadrangular space is one of the three spaces in the axillary space. Boundaries The quadrangular space is defined by: above/superior: teres minor muscle. below/inferior: teres major muscle. medially: long head of the triceps brachii muscle (lateral margin). laterally: surgical neck of the humerus. anteriorly: subscapularis muscle. Contents The quadrangular space transmits the axillary nerve, and the posterior humeral circumflex artery. Clinical significance The quadrangular space is a clinically important anatomic space in the arm as it provides the anterior regions of the axilla a passageway to the posterior regions. In the quadrangular space, the axillary nerve and the posterior humeral circumflex artery can be compressed or damaged due to space-occupying lesions or disruption in the anatomy due to trauma. Symptoms include axillary nerve related weakness of the deltoid muscle in the case of any significant mass lesions in the quadrangular space. History The quadrangular space is so named because the three skeletal muscles and one long bone that form its boundaries leave a space in the shape of a complete quadrangle. The quadrangular space is also known as the quadrilateral space, the quadrilateral space of Velpeau, and the foramen humerotricipitale. See also Quadrilateral space syndrome Triangular space Triangular interval Additional images
https://en.wikipedia.org/wiki/Functor%20represented%20by%20a%20scheme
In algebraic geometry, a functor represented by a scheme X is a set-valued contravariant functor on the category of schemes such that the value of the functor at each scheme S is (up to natural bijections) the set of all morphisms from S to X. The scheme X is then said to represent the functor F, and to classify the geometric objects over S given by F. The best known example is the Hilbert scheme of a scheme X (over some fixed base scheme), which, when it exists, represents a functor sending a scheme S to the set of closed subschemes of X × S that are flat over S. In some applications, it may not be possible to find a scheme that represents a given functor. This led to the notion of a stack, which is not quite a functor but can still be treated as if it were a geometric space. (A Hilbert scheme is a scheme, but not a stack because, very roughly speaking, deformation theory is simpler for closed schemes.) Some moduli problems are solved by giving formal solutions (as opposed to polynomial algebraic solutions) and in that case, the resulting functor is represented by a formal scheme. Such a formal scheme is then said to be algebraizable if there is another scheme that can represent the same functor, up to some isomorphisms. Motivation The notion is an analog of a classifying space in algebraic topology. In algebraic topology, the basic fact is that each principal G-bundle over a space S is (up to natural isomorphisms) the pullback of a universal bundle along some map from S to the classifying space BG. In other words, to give a principal G-bundle over a space S is the same as to give a map (called a classifying map) from a space S to the classifying space of G. A similar phenomenon in algebraic geometry is given by a linear system: to give a morphism from a projective variety to a projective space is (up to base loci) to give a linear system on the projective variety. Yoneda's lemma says that a scheme X determines and is determined by its points. Functor of points Let X be a scheme. Its functor of points is the fu
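The truncated "functor of points" passage can be summarized in symbols; the following display states the standard definitions (my summary, not wording from this article) of the functor of points and of representability via the Yoneda lemma.

```latex
% Standard definitions: the functor of points h_X of a scheme X, and what it
% means for a contravariant functor F to be represented by X.
\[
  h_X \colon \mathrm{Sch}^{\mathrm{op}} \longrightarrow \mathrm{Set},
  \qquad h_X(S) = \operatorname{Hom}_{\mathrm{Sch}}(S, X),
\]
\[
  \text{$F$ is represented by $X$} \iff F \cong h_X,
  \qquad \operatorname{Nat}(h_X, F) \cong F(X) \quad \text{(Yoneda lemma)}.
\]
```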
https://en.wikipedia.org/wiki/Toileting
In health care, toileting is the act of assisting a dependent patient with his/her elimination needs. Methods of toileting Depending on a patient's condition, his/her toileting needs may need to be met differently. This could be by assisting the patient to walk to a toilet, to a bedside commode chair, onto a bedpan, or to provide a male patient with a urinal. A more dependent or incontinent patient may have his/her toileting needs met solely through the use of adult diapers. Other options are incontinence pads and urinary catheters. Ambulatory assistance Some patients can walk with assistance from another person, usually a health care worker. Aside from the need for this help, they are capable of meeting their own elimination needs. Bedpan Patients who cannot get out of bed easily but who can control their bladder and bowels are able to request a bedpan. The bedpan is placed underneath the patient, who can urinate or defecate as needed. Some patients are able to place their own bedpans under themselves, and assistance is required only to empty them after the fact. Urinal A urinal is much like a bedpan but only for a male, the urinal is shaped in a way that the male may use it while still in bed and remain comfortable. The urinal is also often used when input and output (I & O) must be recorded. Briefs Incontinent patients often wear briefs to prevent their trousers from being stained by their elimination. Briefs must be checked and changed frequently. Catheter Catheters, in this sense, are tubes that drain urine from the body. A Foley catheter, used with men and women, is inserted into the bladder. An external catheter is attached to the penis of a male patient. In the US, while Foley catheters can only be applied by a nurse or physician, external catheters can be attached by a certified nurse assistant. Collection, measurement, and analysis Input and output Input and output (I & O) is the measure of food and fluids that enter and exit the body. Certain pat
https://en.wikipedia.org/wiki/Undercut%20%28manufacturing%29
In manufacturing, an undercut is a special type of recessed surface that is inaccessible using a straight tool. In turning, it refers to a recess in a diameter, generally on the inside diameter of the part. In milling, it refers to a feature which is not visible when the part is viewed from the spindle. In molding, it refers to a feature that cannot be molded using only a single pull mold. In printed circuit board construction, it refers to the portion of the copper that is etched away under the photoresist. Turning On turned parts an undercut is also known as a neck or "relief groove". They are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool. Molding Undercut - Any indentation or protrusion in a shape that will prevent its withdrawal from a one-piece mold. Milling In milling, the spindle is where a cutting tool is mounted. In some situations material must be cut from a direction where the feature cannot be seen from the perspective of the spindle, which requires special tooling to reach behind the visible material. The corners may be undercut to remove the radius that is usually left by the milling cutter; this is commonly referred to as a relief. Etching Undercuts from etching (microfabrication) are a side effect, not an intentional feature. Gears
https://en.wikipedia.org/wiki/ESP-r
ESP-r is a research-oriented open-source building performance simulation software. ESP-r can model heat flow in thermal zones, fluid flow using networks or CFD, electrical power flow, moisture flow, contaminant flow, hygrothermal and fluid flow in HVAC systems, as well as visual and acoustic performance aspects within a modeled energy system/building. It was initially developed in 1974, as Joe Clarke's PhD research at the University of Strathclyde, and has since been extended by researchers from several countries. ESP-r was made available in 2002 in the public domain subject to the GNU General Public License. ESP-r is designed to work on Unix, but it can run on Windows using Windows Subsystem for Linux (or in any other operating system using a virtual machine). The current ESP-r archivist is Professor Joseph Clarke, of the University of Strathclyde. ESP-r's holistic nature, flexibility, and range of features enable a well-informed user to optimize the energy and environmental performance of a building and/or associated energy systems. The user experience provided by ESP-r, however, cannot be compared to that provided by commercial software: ESP-r's learning curve is steep, but there is a growing amount of training material available online. ESP-r has been extensively validated. Among other projects, ESP-r was part of BESTEST, an IEA initiative that created a benchmark for quality assessment of energy simulation software. This benchmark was later incorporated into ASHRAE Standard 140 - Method of Test for Evaluating Building Performance Simulation Software.
https://en.wikipedia.org/wiki/Shear%20and%20moment%20diagram
Shear force and bending moment diagrams are analytical tools used in conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam. These diagrams can be used to easily determine the type, size, and material of a member in a structure so that a given set of loads can be supported without structural failure. Another application of shear and moment diagrams is that the deflection of a beam can be easily determined using either the moment area method or the conjugate beam method. Convention Although these conventions are relative and any convention can be used if stated explicitly, practicing engineers have adopted a standard convention used in design practices. Normal convention The normal convention used in most engineering applications is to label as positive a shear force that spins an element clockwise (up on the left, and down on the right). Likewise, the normal convention for a positive bending moment is one that warps the element in a "u"-shaped manner (clockwise on the left, and counterclockwise on the right). Another way to remember this is that if the moment is bending the beam into a "smile" then the moment is positive, with compression at the top of the beam and tension on the bottom. This convention was selected to simplify the analysis of beams. Since a horizontal member is usually analyzed from left to right, and positive in the vertical direction is normally taken to be up, the positive shear convention was chosen to be up from the left, and to make all drawings consistent, down from the right. The positive bending convention was chosen such that a positive shear force would tend to create a positive moment. Alternative drawing convention In structural engineering, and in particular concrete design, the positive moment is drawn on the tension side of the member. This convention puts the positive moment below the beam described above. A c
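As a small worked illustration of these diagrams, the following sketch computes the shear and moment distributions for a simply supported beam with a midspan point load; the span and load values are arbitrary assumptions.

```python
# A small illustrative calculation (assumed values, not from the article):
# shear force V(x) and bending moment M(x) for a simply supported beam of
# span L carrying a single point load P at midspan. Each reaction is P/2.
import numpy as np

L = 4.0   # span in metres (assumed)
P = 10.0  # point load in kN (assumed)

x = np.linspace(0.0, L, 401)
V = np.where(x < L / 2, P / 2, -P / 2)               # shear drops by P at the load
M = np.where(x < L / 2, P * x / 2, P * (L - x) / 2)  # moment peaks at PL/4 midspan

print(f"Maximum bending moment: {M.max():.2f} kN·m (theory: PL/4 = {P * L / 4:.2f})")
```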
https://en.wikipedia.org/wiki/MFEM
MFEM is an open-source C++ library for solving partial differential equations using the finite element method, developed and maintained by researchers at the Lawrence Livermore National Laboratory and the MFEM open-source community on GitHub. MFEM is free software released under a BSD license. The library consists of C++ classes that serve as building blocks for developing finite element solvers applicable to problems of fluid dynamics, structural mechanics, electromagnetics, radiative transfer and many others. Features Some of the features of MFEM include: Arbitrary high order finite elements with curved boundaries. H1, H(curl) and H(div) conforming, discontinuous (L2), and NURBS finite element spaces. Local mesh refinement, both conforming (simplex meshes) and non-conforming (quadrilateral/hexahedral meshes). Highly scalable MPI-based parallelism and GPU acceleration. Wide variety of finite element discretization approaches, including Galerkin, discontinuous Galerkin, mixed, high-order and isogeometric analysis methods. Tight integration with the Hypre parallel linear algebra library. Many built-in solvers and interfaces to external libraries such as PETSc, SuiteSparse, Gmsh, etc. Accurate and flexible visualization with VisIt and ParaView. Lightweight design and conservative use of C++ templating. Documentation in the form of examples and mini-applications. See also List of finite element software packages List of numerical analysis software List of numerical libraries
https://en.wikipedia.org/wiki/Z-Wave
Z-Wave is a wireless communications protocol used primarily for residential and commercial building automation. It is a mesh network using low-energy radio waves to communicate from device to device, allowing for wireless control of smart home devices, such as smart lights, security systems, thermostats, sensors, smart door locks, and garage door openers. The Z-Wave brand and technology are owned by Silicon Labs. Over 300 companies involved in this technology are gathered within the Z-Wave Alliance. Like other protocols and systems aimed at the residential, commercial, MDU and building markets, a Z-Wave system can be controlled from a smart phone, tablet, or computer, and locally through a smart speaker, wireless keyfob, or wall-mounted panel, with a Z-Wave gateway or central control device serving as the hub or controller. Z-Wave provides the application layer interoperability between home control systems of different manufacturers that are a part of its alliance. There is a growing number of interoperable Z-Wave products; over 1,700 in 2017, over 2,600 by 2019, and over 4,000 by 2022. History The Z-Wave protocol was developed by Zensys, a Danish company based in Copenhagen, in 1999. That year, Zensys introduced a consumer light-control system, which evolved into Z-Wave as a proprietary system on a chip (SoC) home automation protocol on an unlicensed frequency band in the 900 MHz range. Its 100 series chip set was released in 2003, and its 200 series was released in May 2005, with the ZW0201 chip offering high performance at a low cost. Its 500 series chip, also known as Z-Wave Plus, was released in March 2013, with four times the memory, improved wireless range, improved battery life, an enhanced S2 security framework, and the SmartStart setup feature. Its 700 series chip was released in 2019, with the ability to communicate up to 100 meters directly from point-to-point, or 800 meters across an entire Z-Wave network, an extended battery life of up to 10 year
https://en.wikipedia.org/wiki/Talocalcaneonavicular%20joint
The talocalcaneonavicular joint is a ball and socket joint; the rounded head of the talus is received into the concavity formed by the posterior surface of the navicular, the anterior articular surface of the calcaneus, and the upper surface of the plantar calcaneonavicular ligament. Structure As its shape suggests, this joint is a synovial ball-and-socket joint. It is composed of three articular surfaces: the articulation between the medial talar articular surface on the sustentaculum tali of the calcaneus and the corresponding medial facet found inferiorly on the neck of the talus; the articulation between the anterior talar articular surface of the calcaneus and the corresponding anterior facet found inferiorly on the talar head; and the articulation between the articular surface of the navicular and the head of the talus (the talonavicular joint). Ligaments The plantar calcaneonavicular ligament, also called the spring ligament, forms the whole floor of the joint as it extends inferior to the talus. It attaches to the anterior aspect of the sustentaculum tali, inserting into the plantar surface of the navicular. By beginning from the sustentaculum tali it covers the plantar surfaces of the middle and anterior articulations between the calcaneus and talus, and by attaching to the navicular it covers the articulation between the talus and navicular. That is a reason why the medial longitudinal arch of the foot is a bit higher than the lateral longitudinal arch, as this ligament is a main part of it. The calcaneonavicular part of the bifurcated ligament extends from the dorsolateral side of the calcaneus (near the tarsal sinus) to the lateral side of the navicular. It reinforces the joint, particularly laterally, where the talus articulates with the navicular. The dorsal talonavicular ligament extends on the dorsal aspect of the foot from the neck of the talus to the navicular. The socket of this joint is formed by the concave articular facets of the navicul
https://en.wikipedia.org/wiki/Biological%20hazard
A biological hazard, or biohazard, is a biological substance that poses a threat to the health of living organisms, primarily humans. This could include a sample of a microorganism, virus or toxin that can adversely affect human health. A biohazard could also be a substance harmful to other living beings. The term and its associated symbol are generally used as a warning, so that those potentially exposed to the substances will know to take precautions. The biohazard symbol was developed in 1966 by Charles Baldwin, an environmental-health engineer working for the Dow Chemical Company on their containment products. It is used in the labeling of biological materials that carry a significant health risk, including viral samples and used hypodermic needles. In Unicode, the biohazard symbol is U+2623 (☣). ANSI Z535/OSHA/ISO regulation Biohazardous safety issues are identified with specified labels, signs and paragraphs established by the American National Standards Institute (ANSI). Today, ANSI Z535 standards for biohazards are used worldwide and should always be used appropriately within ANSI Z535 Hazardous Communications (HazCom) signage, labeling and paragraphs. The goal is to help workers rapidly identify the severity of a biohazard from a distance and through colour and design standardization. Biological hazard symbol design: A red on white or white-coloured background is used behind a black biohazard symbol when integrated with a DANGER sign, label or paragraph. An orange on black or white-coloured background is used behind a black biohazard symbol when integrated with a WARNING sign, label or paragraph. A yellow on black or white-coloured background is used behind a black biohazard symbol when integrated with a CAUTION sign, label or paragraph. A green on white or white-coloured background is used behind a black biohazard symbol when integrated with a NOTICE sign, label or paragraph. DANGER is used to identify a biohazard that will cause death. WARNING
https://en.wikipedia.org/wiki/Information%20exchange
Information exchange or information sharing means that people or other entities pass information from one to another. This could be done electronically or through certain systems. These are terms that can either refer to bidirectional information transfer in telecommunications and computer science or communication seen from a system-theoretic or information-theoretic point of view. As "information" in this context invariably refers to (electronic) data that encodes and represents the information at hand, a broader treatment can be found under data exchange. Information exchange has a long history in information technology. Traditional information sharing referred to one-to-one exchanges of data between a sender and receiver. Online information sharing gives useful data to businesses for future strategies based on online sharing. These information exchanges are implemented via dozens of open and proprietary protocols, message, and file formats. Electronic data interchange (EDI) is a successful implementation of commercial data exchanges that began in the late 1970s and remains in use today. There is some controversy over regulations governing information exchange. Initiatives to standardize information sharing protocols include extensible markup language (XML), simple object access protocol (SOAP), and web services description language (WSDL). From the point of view of a computer scientist, the four primary information sharing design patterns are sharing information one-to-one, one-to-many, many-to-many, and many-to-one; a toy example of the one-to-many pattern is sketched below. Technologies to meet all four of these design patterns are evolving and include blogs, wikis, really simple syndication, tagging, and chat. One example of the United States government's attempt to implement one of these design patterns (one-to-one) is the National Information Exchange Model (NIEM). One-to-one exchange models fall short of supporting all of the required design patterns needed to fully implement data exploitation technology. A
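As promised above, here is a toy illustration of the one-to-many design pattern (my own sketch, with hypothetical class and method names): a single publisher pushes each message to every registered subscriber.

```python
# A toy one-to-many (publish/subscribe) sketch; names are illustrative only.
from typing import Callable, List

class Publisher:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        """Register one more receiver for future messages."""
        self._subscribers.append(handler)

    def publish(self, message: str) -> None:
        for handler in self._subscribers:  # one sender, many receivers
            handler(message)

feed = Publisher()
feed.subscribe(lambda msg: print(f"subscriber A got: {msg}"))
feed.subscribe(lambda msg: print(f"subscriber B got: {msg}"))
feed.publish("new record available")
```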
https://en.wikipedia.org/wiki/Cyril%20Hogarth
Cyril Alfred Hogarth (22 January 1924 – 6 November 2006) was a British physicist and chairman of South Bucks District Council. A pioneer in the field of oxide semiconductors, he was a professor, head of the physics department, and administrator at Brunel University London, where he worked for 31 years. Early life and education Hogarth was born in 1924, and grew up in Tottenham, north London. He was educated at Tottenham County School. Hogarth earned a degree from the University of London. In 1948, he received a PhD from Queen Mary University of London, studying with Professor J. P. Andrews. That year, Hogarth's theoretical solution for determining the dependence of thermoelectric power of cadmium oxide on ambient oxygen pressure was published in Nature and in Philosophical Magazine. He later received a doctor of science degree in 1977. Career From 1943 to 1946, Hogarth worked on naval radar and countermeasures in the UK, Canada, the US and the Arctic. After earning his PhD, he lectured at Chelsea College of Science and Technology and the University of Reading, before spending some years at the Royal Radar Establishment. Hogarth was "closely involved" in the founding of Brunel University London from 1958, its first professor of physics, head of its physics department, and its pro vice-chancellor for a year in 1980. In 1969, Hogarth was elected as vice-president of the Institute of Physics and the Physical Society. Hogarth retired from Brunel University in 1989, but continued his research and published articles regularly through the mid-1990s. In 1990 and 1991 alone, he published 17 and 13 articles, respectively, in the Journal of Materials Science. In addition to semiconductors, his research focused on materials and their properties. Personal life and death In 1951, Cyril married Dr Audrey Hogarth (1926 – 2010), who had a doctorate in dairy bacteriology from Reading University. The Hogarths lived in Gerrards Cross, where Audrey served as a magistrate for
https://en.wikipedia.org/wiki/B%C3%B2%20kho
Bò kho is a dish of South Vietnamese origin using the kho cooking method; it is a spicy dish made commonly with beef which is known throughout the country and beyond. In rural areas, the dish is described as being "extremely fiery." There are variants of the dish made with chicken, known as gà kho or gà kho gừng (gừng meaning "ginger"), and with fish, known as cá kho. Origin The taste of the dish is not in the typical Vietnamese style and is more reminiscent of Indian or Malaysian cuisine. The wide distribution of beef and slow-cooked stews in Vietnam is thanks to French culinary influence during colonial times, so the modern dish is considered to have a mixed origin. Cooking Modern Vietnamese cooks prepare bò kho using metal saucepans, but originally it was made by simmering the broth in clay pots. The ingredients of the dish can vary widely. The typical ingredients are beef, carrot, lemongrass, and garlic. Other ingredients that can be used include tomatoes, applesauce, star anise, and galangal. The ingredients are first marinated with Vietnamese spices and sauces (ginger, chili, Vietnamese-style fish sauce). An off-the-shelf bò kho powder is also available. The dish is then slowly stewed until cooked. It is usually served with rice, rice noodles, or bánh mì, and herbs (such as Thai basil and cilantro). See also Gà kho Cá kho Kho (cooking technique)
https://en.wikipedia.org/wiki/Bupirimate
Bupirimate (systematic name 5-butyl-2-ethylamino-6-methylpyrimidin-4-yl dimethylsulphamate; brand names Nimrod and Roseclear 2) is an active ingredient of plant protection products (pesticides) which acts as a fungicide. It belongs to the chemical family of pyrimidine sulfamates. Bupirimate has translaminar mobility and systemic translocation in the xylem. It acts mainly by inhibiting sporulation and is used for control of powdery mildew of apples, pears, stone fruit, cucurbits, roses and other ornamentals, strawberries, gooseberries, currants, raspberries, hops, beets and other crops. Bupirimate is not an insecticide. It is of low mammalian toxicity and is non-toxic to bees. However, it is used in many products which also contain insecticides. History A research programme at ICI's Jealott's Hill site during the 1960s had the objective of discovering fungicides which could penetrate into and move within plants and hence could cure established infections. The outcome of the research was three related compounds: dimethirimol, ethirimol and bupirimate, which were first marketed in 1968, 1970 and 1975 respectively. The key targets for these fungicides are the mildews, but each compound differs in its effect on individual mildew species. In particular, bupirimate is effective on apple powdery mildew caused by the fungus Podosphaera leucotricha, which the earlier materials were not. Regulation In terms of the regulation of plant protection products in the European Union, this active substance is under review for inclusion in Annex I of Directive 91/414/EEC. In France, the active substance is permitted in the composition of preparations holding marketing authorization.
https://en.wikipedia.org/wiki/Diploidization
Diploidization is the process of converting a polyploid genome back into a diploid one. Polyploidy is a product of whole genome duplication (WGD) and is followed by diploidization as a result of genome shock. The plant kingdom has undergone multiple events of polyploidization followed by diploidization in both ancient and recent lineages. It has also been hypothesized that vertebrate genomes have gone through two rounds of paleopolyploidy. The mechanisms of diploidization are poorly understood, but patterns of chromosomal loss and evolution of novel genes are observed in the process. Elimination of duplicated genes Upon the formation of new polyploids, large sections of DNA are rapidly lost from one genome. The loss of DNA effectively achieves two purposes. First, the eliminated copy restores the normal gene dosage in the diploid organism. Second, the changes in chromosomal genetic structure increase the divergence of the homoeologous chromosomes (similar chromosomes from an inter-species hybrid) and promote homologous chromosome pairing. Both are important in terms of adjusting to the induced genome shock. Evolution of genes to ensure correct chromosome pairing There have been rare events in which genes that ensure proper chromosome pairing have evolved shortly after polyploidization. One such gene, Ph1, exists in hexaploid wheat. These genes keep the two sets of genomes separate by either spatially separating them or giving them a unique chromatin identity to facilitate recognition by the homologous pair. This prevents the need for rapid gene loss to speed up homoeologous chromosome diversification. Drive for diploidization Coordinate inter-genomic gene expression Duplicated genes often result in increased dosage of gene products. Doubled dosages are sometimes lethal to the organism, thus the two genome copies must coordinate in a structured fashion to maintain normal nuclear activity. Many mechanisms of diploidization promote this coordination. Maintain
https://en.wikipedia.org/wiki/Regulation%20of%20genetic%20engineering
The regulation of genetic engineering varies widely by country. Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence as the starting point when assessing safety, while many countries such as those in the European Union, Brazil and China authorize GMO cultivation on a case-by-case basis. Many countries allow the import of GM food with authorization, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation, but no GM products are yet produced (Japan, South Korea). Most countries that do not allow for GMO cultivation do permit research. Most (85%) of the world's GMO crops are grown in the Americas (North and South). One of the key issues concerning regulators is whether GM products should be labeled. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. A study investigating voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%. In Canada and the USA labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. There is no evidence to support the idea that the consumption of approved GM food has a detrimental effect on human health. Some scientists and advocacy groups, such as Greenpeace and World Wi
https://en.wikipedia.org/wiki/Antidesma%20acidum
Antidesma acidum is a shrub or small tree that is native to an area from Jawa to south-central China and Pakistan. It is a long-lived, shade-tolerant species that is usually found under closed canopy. The fruit is eaten in many places, the leaves in some locations. In Luang Prabang (Laos) open-air markets, the leaves are only sold alongside Russula mushrooms, to give a sour flavour to soup made from the fungi. Description The species grows as a shrub or small tree, usually up to 6m tall, rarely up to 10m. The young twigs have fine hairs. Obovate to elliptic-oblong papery leaves; upper surface smooth (though rarely the surface is pilose), lower surface pubescent (rarely smooth); dull colour, drying to a yellowish-green; acute to obtuse base (rarely attenuate); rounded to acute to acuminate apex (sometimes mucronate); size usually 5–10 cm, rarely down to 2 cm and up to 21 cm. Terminal to axillary inflorescences. Ellipsoid smooth drupes, 4–6 by 3–4 mm, nearly terete to laterally compressed. Flowers in China from May to July, fruiting from June to November. Distinguishing characteristics of this species are: the papery dull leaves and their size; domatia are present; the male flowers, at least, have a pubescent disc; usually 2 (rarely 1 or 3) stamens; an absent or small rudimentary ovary; the size of the fruit; and female inflorescences and infructescences are usually 2–5 cm, rarely up to 9 cm. In the southern part of the distribution range, pistillodes are always absent from the male flowers. Wood anatomy The wood of A. acidum is diffuse-porous with occasional small vessels in solitary or radial multiple arrangements (up to 5 long). The rays are heterocellular, with simple and scalariform perforations, scalariform ray-vessel pits; silica bodies are present in some cells. Septate fibres are present. Distribution The species is native to an area of tropical and subtropical Asia from Jawa, Indonesia to south-central China to Pakistan and the Western Himalaya
https://en.wikipedia.org/wiki/Scudder%27s%20American%20Museum
Scudder's American Museum was a museum located in New York City from 1810 to 1841, when it was purchased by P.T. Barnum and transformed into the very successful Barnum's American Museum. Before Scudder The roots of the museum date back to 1791, when the "American Museum" was founded by John Pintard "under the patronage of the Tammany Society." It was located at 57 King Street, with Pintard serving as secretary and Gardner Baker (the more of a showman of the two) as keeper. The museum was moved by 1794 to a building at the intersection of Pearl and Broad streets called the "Exchange". It occupied a thirty-by-sixty-foot room with a high ceiling, and later opened a second room including a menagerie. It was called "Baker's American Museum" after Baker took control of it from the Tammany Society in 1795. Relying now only on ticket sales to finance operations, he raised admission prices and kept attempting to add new curiosities to draw visitors. After Baker died in 1798, and his widow died in 1800, the collection was purchased by William I. Waldron. It then came into the hands of painter Edward Savage, who opened the "Columbian Gallery of Painting and City Museum" in 1802, and hired John Scudder to oversee the museum collection. Scudder After he had earned money as a seaman, John Scudder acquired the collection in 1809, and he opened "Scudder's American Museum" in March 1810 at 21 Chatham Street. The museum moved into part of the City's former poor house in 1817, along with other civic institutions. Poet Fitz-Greene Halleck referenced this social experiment in an 1819 satirical piece which includes the lines: "Once the old alms house, now a school of wisdom, Sacred to Scudder's shells and Dr. Griscom." After Scudder died in August 1821, control of the museum fell to his heirs. Scudder's son (John Jr.) eventually became manager of the museum, and moved it into a five-story building on the corner of Broadway and Ann Street (across the street from St
https://en.wikipedia.org/wiki/The%20God%20Particle%20%28book%29
The God Particle: If the Universe Is the Answer, What Is the Question? is a 1993 popular science book by Nobel Prize-winning physicist Leon M. Lederman and science writer Dick Teresi. The book provides a brief history of particle physics, starting with the pre-Socratic Greek philosopher Democritus, and continuing through Isaac Newton, Roger J. Boscovich, Michael Faraday, and Ernest Rutherford and quantum physics in the 20th century. Lederman explains in the book why he gave the Higgs boson the nickname "The God Particle": In 2013, subsequent to the discovery of the Higgs boson, Lederman co-authored, with theoretical physicist Christopher T. Hill, a sequel: Beyond the God Particle which delves into the future of particle physics in the post-Higgs boson era. This book is part of a trilogy, with companions, Symmetry and the Beautiful Universe and Quantum Physics for Poets (see bibliography below). Historical context Fermilab director and subsequent Nobel physics prize winner Leon Lederman was a very prominent early supporter – some sources say the architect or proposer – of the Superconducting Super Collider project, which was endorsed around 1983, and was a major proponent and advocate throughout its lifetime. Lederman wrote his 1993 popular science book – which sought to promote awareness of the significance of such a project – in the context of the project's last years and the changing political climate of the 1990s. The increasingly moribund project was finally shelved that same year after some $2 billion of expenditure. The proximate causes of the closure were the rising US budget deficit, rising projected costs of the project, and the cessation of the Cold War, which reduced the perceived political pressure within the United States to undertake and complete high-profile science megaprojects. List of chapters Chapter 1: The Invisible Soccer Ball: This chapter uses a metaphor of a soccer game with an invisible ball to depict the process by which the existe