https://en.wikipedia.org/wiki/Minimum%20redundancy%20feature%20selection
Minimum redundancy feature selection is an algorithm frequently used to accurately identify characteristics of genes and phenotypes and to narrow down their relevance; it is usually described in its pairing with relevant feature selection as Minimum Redundancy Maximum Relevance (mRMR). Feature selection, one of the basic problems in pattern recognition and machine learning, identifies subsets of features that are relevant to the variable being predicted; selecting purely on this criterion is normally called maximum relevance. Such subsets often contain features that are relevant but mutually redundant, and mRMR addresses this problem by removing the redundant ones. mRMR has a variety of applications in many areas, such as cancer diagnosis and speech recognition. Features can be selected in many different ways. One scheme is to select the features that correlate most strongly with the classification variable; this has been called maximum-relevance selection, and many heuristic algorithms can be used for it, such as sequential forward, backward, or floating selection. On the other hand, features can be selected so as to be mutually far away from each other while still having "high" correlation to the classification variable. This scheme, termed Minimum Redundancy Maximum Relevance (mRMR) selection, has been found to be more powerful than maximum-relevance selection. As a special case, the "correlation" can be replaced by the statistical dependency between variables, and mutual information can be used to quantify the dependency. In this case, it can be shown that mRMR is an approximation to maximizing the dependency between the joint distribution of the selected features and the classification variable. Studies have tried different measures of redundancy and relevance. A recent study compared several such measures within the context of biomedical images.
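A minimal sketch of the greedy mRMR heuristic, using scikit-learn's mutual-information estimators (the function and its structure here are illustrative assumptions, not a canonical mRMR implementation):

```python
# Sketch: greedy mRMR feature selection with mutual information
# (MID criterion: relevance minus mean redundancy). Illustrative only.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedily pick k columns of X, maximizing relevance minus redundancy."""
    relevance = mutual_info_classif(X, y)        # MI of each feature with the class
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best, best_score = None, -np.inf
        for f in remaining:
            # mean MI between candidate f and the already-selected features
            redundancy = np.mean(
                [mutual_info_regression(X[:, [s]], X[:, f])[0] for s in selected]
            ) if selected else 0.0
            score = relevance[f] - redundancy
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
        remaining.remove(best)
    return selected
```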
https://en.wikipedia.org/wiki/Flux%20method
The flux method of crystal growth is a method where the components of the desired substance are dissolved in a solvent (flux). The method is particularly suitable for crystals needing to be free from thermal strain. It takes place in a crucible made of highly stable, non-reactive material. For production of oxide crystals, metals such as platinum, tantalum, and niobium are common. Production of metallic crystals generally uses crucibles made from ceramics such as alumina, zirconia, and boron nitride. The crucibles and their contents are often isolated from the air for reaction, either by sealing them in a quartz ampoule or by using a furnace with atmosphere control. A saturated solution is prepared by keeping the constituents of the desired crystal and the flux at a temperature slightly above the saturation temperature long enough to form a complete solution. Then the crucible is cooled in order to allow the desired material to precipitate. Crystal formation can begin by spontaneous nucleation or may be encouraged by the use of a seed. As material precipitates out of the solution, the amount of solute in the flux decreases and the temperature at which the solution is saturated lowers. This process repeats itself as the furnace continues to cool until the solution reaches its melting point or the reaction is stopped artificially. In flux method synthesis, divergent crystal growth kinetics may emerge, with a small number of crystallites growing at the expense of neighbouring ones, resulting in abnormal grain growth. One advantage of this method is that the crystals grown often display natural facets, which often makes preparing crystals for measurement significantly easier. A disadvantage is that most flux method syntheses produce relatively small crystals. However, some materials such as the "115" heavy fermion superconductors may grow up to a few centimeters. See also Chemical vapor deposition Crystal growth Crystallography Czochralski process Epitaxy Hy
https://en.wikipedia.org/wiki/Measuring%20rod
A measuring rod is a tool used to physically measure lengths and survey areas of various sizes. Most measuring rods are round or square sectioned; however, they can also be flat boards. Some have markings at regular intervals. It is likely that the measuring rod was used before the line, chain or steel tapes used in modern measurement. History Ancient Sumer The oldest preserved measuring rod is a copper-alloy bar which was found by the German Assyriologist Eckhard Unger while excavating at Nippur. The bar dates from c. 2650 BC, and Unger claimed it was used as a measurement standard. This irregularly formed and irregularly marked graduated rule supposedly defined the Sumerian cubit as about , although this does not agree with other evidence from the statues of Gudea from the same region, five centuries later. Ancient India Rulers made from ivory were in use by the Indus Valley Civilization in what today is Pakistan, and in some parts of Western India, prior to 1500 BCE. Excavations at Lothal dating to 2400 BCE have yielded one such ruler calibrated to about . Ian Whitelaw (2007) holds that 'The Mohenjo-Daro ruler is divided into units corresponding to and these are marked out in decimal subdivisions with remarkable accuracy—to within . Ancient bricks found throughout the region have dimensions that correspond to these units.' The sum total of ten graduations from Lothal is approximate to the angula in the Arthashastra. Ancient East Asia Measuring rods for different purposes and sizes (construction, tailoring and land survey) have been found from China and elsewhere dating to the early 2nd millennium BCE. Ancient Egypt Cubit rods of wood or stone were used in Ancient Egypt. Fourteen of these were described and compared by Lepsius in 1865. Flinders Petrie reported on a rod that shows a length of 520.5 mm, a few millimetres less than the Egyptian cubit. A slate measuring rod was also found, divided into fractions of a Royal Cubit and dating to
https://en.wikipedia.org/wiki/Geniculate%20fibers
The geniculate fibers are the fibers in the region of the genu of the internal capsule; they originate in the motor part of the cerebral cortex, and, after passing downward through the base of the cerebral peduncle with the cerebrospinal fibers, undergo decussation and end in the motor nuclei of the cranial nerves of the opposite side.
https://en.wikipedia.org/wiki/Sensory%20decussation
In neuroanatomy, the sensory decussation or decussation of the lemnisci is a decussation (i.e. crossover) of axons from the gracile nucleus and cuneate nucleus, which are responsible for fine touch, vibration, proprioception and two-point discrimination of the body. The fibres of this decussation are called the internal arcuate fibres and are found at the superior aspect of the closed medulla, superior to the motor decussation. It is part of the second neuron in the posterior column–medial lemniscus pathway. Structure At the level of the closed medulla in the posterior white column, two large nuclei, namely the gracile nucleus and the cuneate nucleus, can be found. The two nuclei receive impulses from the two ascending tracts: the fasciculus gracilis and the fasciculus cuneatus. After the two tracts terminate upon these nuclei, heavily myelinated fibres arise and ascend anteromedially around the periaqueductal gray as internal arcuate fibres. These fibres decussate (cross) to the contralateral (opposite) side, forming the so-called sensory decussation. The ascending bundle after the decussation is called the medial lemniscus. Unlike other ascending tracts of the brain, fibres of the medial lemniscus do not give off collateral branches as they travel along the brainstem. Function The fibres that make up the sensory decussation are responsible for fine touch, proprioception and two-point discrimination of the whole body excluding the head.
https://en.wikipedia.org/wiki/BIFF%20%28Usenet%29
BIFF, later sometimes B1FF, was a pseudonym on, and the prototypical newbie of, Usenet. BIFF was created, and taken up, as a satire of a partly amusing, partly annoying, mostly unwelcome intrusion into a then fairly rarefied community. BIFF had a VIC-20 at first and a Commodore 64 later (as these were both computers looked down upon as low-end by the majority of the veteran Usenet community). BIFF posts were limited to line lengths of either 22 characters (like the VIC-20) or 40 characters (like the Commodore 64) to look like they'd come from those machines. Origin BIFF was created by Joe Talmadge, who abandoned the character after just two postings. From then on, Richard Sexton took over and was credited by Talmadge as popularising BIFF. Richard Sexton: I make no claim to inventing BIFF. Blame Joe Talmadge. Joe wasn't too busy one year at HP and invented a whole cast of characters such as SYSTEMS ADMINISTRATOR MAN, Bobby Joe (Dedicated Wobegon listener), Joe Supportive (soc.singles reader for 1.5 years) and of course the big bad BIFFSTER. Joe Talmadge: Oh. Hey. Don't blame me. It's Webber's fault. Anyway, I may have invented BIFF, but Richard made him famous. Richard is the Roy Crock of BIFF; meanwhile, I fade quietly into obscurity. Versions have since been posted for the amusement of the Internet at large. Richard Sexton: I posted a few BIFF articles from my account at gryphon, but now there are about 20 people, well connected, posting BIFF forgeries. You probably don't want to hear that the people doing them had to find something to occupy themselves after they dissolved the backbone cabal. Cultural significance and later decline BIFF served as a satire of an exceptional behaviour in a fairly homogeneous environment. The explosive growth of the web led to the rapid decline of BIFF, as Biffisms became no longer exceptional and the Internet rapidly became a much larger and much more heterogeneous environment, leading both to newcomers not being aware of
https://en.wikipedia.org/wiki/Radeon%20R430
The Radeon R430 chip from ATI Technologies can be found in some models of the Radeon X800 GTO video card. The Radeon X800 "R430"-based 110 nanometer series was introduced at the end of 2004 along with ATI's new X850 cards. The X800 was designed to take the position the X700 XT had failed to secure, with 12 pipelines and a 256-bit RAM bus. The card comfortably outperformed the GeForce 6600 GT, offering performance similar to that of the GeForce 6800. A close relative, the new X800 XL, was positioned to dethrone Nvidia's GeForce 6800 GT with higher memory speeds and a full 16 pipelines to boost performance. R430 was unable to reach high clock speeds, being mainly designed to reduce the cost per graphics processing unit (GPU), and so a new top-of-the-line core was still needed. See also Radeon R420
https://en.wikipedia.org/wiki/European%20Federation%20for%20Primatology
The European Federation for Primatology (EFP) was founded on 17 December 1993. The seat of the Federation is Niederhausbergen (France). The EFP brings together national primatological societies, as well as groups of primatologists in those countries of Europe where such societies are not yet founded. The EFP members, who belong to societies or groups affiliated to the Federation, are involved in fundamental research, applied biomedical research and zoo management. This accounts for more than 1100 scientists, graduate students and zoo managers. More than 30 academic institutions are represented in the EFP through their membership. The colonies of primates which belong to these institutions account for about 2000 non-human primates. Purposes The purposes of the Federation are: To coordinate actions related to primatology between the different European societies. Such coordination includes: Circulation of information between the different national primatological societies and groups of primatologists. Meetings of the different national societies, specialist groups and other workshops. Scientific activities, research and educational projects relevant to primatology. To promote rational management of captive primates and to make primate subjects and study sites available to a maximum number of students and researchers. To provide the Council of Europe and other European institutions with experts on all issues related to primatology. To participate, through the Council of Europe, in decisions relevant to primate trade and primate captive breeding. To promote the establishment of national societies of primatologists, national groups and European specialist groups of primatologists. The aims of all affiliated societies or groups are identical to those of the International Primatological Society, IPS: To encourage all areas of non-human primatological scientific research. To facilitate cooperation among scientists of all nationalities engaged in primate research. To promot
https://en.wikipedia.org/wiki/Assortativity
Assortativity, or assortative mixing, is a preference for a network's nodes to attach to others that are similar in some way. Though the specific measure of similarity may vary, network theorists often examine assortativity in terms of a node's degree. The addition of this characteristic to network models more closely approximates the behaviors of many real-world networks. Correlations between nodes of similar degree are often found in the mixing patterns of many observable networks. For instance, in social networks, nodes tend to be connected with other nodes with similar degree values. This tendency is referred to as assortative mixing, or assortativity. On the other hand, technological and biological networks typically show disassortative mixing, or disassortativity, as high degree nodes tend to attach to low degree nodes. Measurement Assortativity is often operationalized as a correlation between two nodes. However, there are several ways to capture such a correlation. The two most prominent measures are the assortativity coefficient and the neighbor connectivity. These measures are outlined in more detail below. Assortativity coefficient The assortativity coefficient is the Pearson correlation coefficient of degree between pairs of linked nodes. Positive values of r indicate a correlation between nodes of similar degree, while negative values indicate relationships between nodes of different degree. In general, r lies between −1 and 1. When r = 1, the network is said to have perfect assortative mixing patterns, when r = 0 the network is non-assortative, while at r = −1 the network is completely disassortative. The assortativity coefficient is given by $r = \frac{\sum_{jk} jk (e_{jk} - q_j q_k)}{\sigma_q^2}$. The term $q_k$ is the distribution of the remaining degree. This captures the number of edges leaving the node, other than the one that connects the pair. The distribution of this term is derived from the degree distribution $p_k$ as $q_k = \frac{(k+1) p_{k+1}}{\sum_j j p_j}$. Finally, $e_{jk}$ refers to the joint probability distribution of the remaining degrees o
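In practice the coefficient can be computed directly; a short sketch with networkx (the example graph is an arbitrary choice):

```python
# Sketch: degree assortativity coefficient r of an example network.
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=42)   # illustrative scale-free graph
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity r = {r:.3f}")       # typically near zero or slightly negative here
```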
https://en.wikipedia.org/wiki/Calcar%20avis
The calcar avis, previously known as the hippocampus minor, is an involution of the wall of the lateral ventricle's posterior cornu produced by the calcarine fissure. It is sometimes visible on ultrasonogram and can resemble a clot. Name The ridge was originally described by anatomists as the calcar avis, while the ridge running along the floor of the temporal horn of the lateral ventricle was described by various names, in particular as the hippocampus. A classical allusion was introduced later with the term pes hippocampi, which may date back to Diemerbroeck in 1672, introducing a comparison with the shape of the folded back forelimbs and webbed feet of the Classical hippocampus (Greek: ἱππόκαμπος), a sea monster with a horse's forequarters and a fish's tail. At a subsequent stage the hippocampus was described as pes hippocampi major, with the calcar avis being named pes hippocampi minor. The renaming of the hippocampus as hippocampus major, and the calcar avis as hippocampus minor, has been attributed to Félix Vicq-d'Azyr systematising nomenclature of parts of the brain in 1786. While "hippocampus minor" was used interchangeably with "calcar avis" for much of the 19th century, for a few years after 1861 the former name was subjected to publicity and ridicule when the hippocampus minor became the centre of a dispute over human evolution between Thomas Henry Huxley and Richard Owen, satirised as the Great Hippocampus Question. The term hippocampus minor fell from use in anatomy textbooks, and was officially removed in the Nomina Anatomica of 1895, but still featured in the Encyclopædia Britannica of 1926, and appeared in general dictionaries as late as 1957.
https://en.wikipedia.org/wiki/Collateral%20eminence
The collateral eminence is an elongated swelling lying lateral to and parallel with the hippocampus. It corresponds with the medial part of the collateral fissure, and its size depends on the depth and direction of this fissure. It is continuous behind with a flattened triangular area, the trigone of the lateral ventricle, situated between the posterior and inferior horns. It is not always present.
https://en.wikipedia.org/wiki/Common%20disease-common%20variant
The common disease-common variant (often abbreviated CD-CV) hypothesis predicts that common disease-causing alleles, or variants, will be found in all human populations which manifest a given disease. Common variants (not necessarily disease-causing) are known to exist in coding and regulatory sequences of genes. According to the CD-CV hypothesis, some of those variants lead to susceptibility to complex polygenic diseases. Each variant at each gene influencing a complex disease will have a small additive or multiplicative effect on the disease phenotype. These diseases, or traits, are evolutionarily neutral in part because so many genes influence the traits. The hypothesis has held in the case of putative causal variants in apolipoprotein E, including APOE ε4, associated with Alzheimer's disease. IL23R has been found to be associated with Crohn's disease; the at-risk allele has a frequency of 93% in the general population. One common form of variation across human genomes is called a single nucleotide polymorphism (SNP). As indicated by the name, SNPs are single base changes in the DNA. SNP variants tend to be common in different human populations. These polymorphisms have been valuable as genomic signposts, or "markers", in the search for common variants that influence susceptibility to common diseases. Research has linked common SNPs to diseases such as type 2 diabetes, Alzheimer's, schizophrenia and hypertension. See also Rare functional variant
https://en.wikipedia.org/wiki/Collateral%20fissure
The collateral fissure (or sulcus) is on the tentorial surface of the hemisphere and extends from near the occipital pole to within a short distance of the temporal pole. Behind, it lies below and lateral to the calcarine fissure, from which it is separated by the lingual gyrus; in front, it is situated between the parahippocampal gyrus and the anterior part of the fusiform gyrus.
https://en.wikipedia.org/wiki/Multi-master%20bus
A multi-master bus is a computer bus in which there are multiple bus master nodes present on the bus. This is used when multiple nodes on the bus must be able to initiate transfers. For example, direct memory access (DMA) is used to transfer data between peripherals and memory without the need to use the central processing unit (CPU). Some buses like I²C use multi-mastering inherently to allow any node to initiate a transfer with another node.
https://en.wikipedia.org/wiki/Unicode%20equivalence
Unicode equivalence is the specification by the Unicode character encoding standard that some sequences of code points represent essentially the same character. This feature was introduced in the standard to allow compatibility with preexisting standard character sets, which often included similar or identical characters. Unicode provides two such notions, canonical equivalence and compatibility. Code point sequences that are defined as canonically equivalent are assumed to have the same appearance and meaning when printed or displayed. For example, the code point U+006E (the Latin lowercase "n") followed by U+0303 (the combining tilde "◌̃") is defined by Unicode to be canonically equivalent to the single code point U+00F1 (the lowercase letter "ñ" of the Spanish alphabet). Therefore, those sequences should be displayed in the same manner, should be treated in the same way by applications such as alphabetizing names or searching, and may be substituted for each other. Similarly, each Hangul syllable block that is encoded as a single character may be equivalently encoded as a combination of a leading conjoining jamo, a vowel conjoining jamo, and, if appropriate, a trailing conjoining jamo. Sequences that are defined as compatible are assumed to have possibly distinct appearances, but the same meaning in some contexts. Thus, for example, the code point U+FB00 (the typographic ligature "ff") is defined to be compatible—but not canonically equivalent—to the sequence U+0066 U+0066 (two Latin "f" letters). Compatible sequences may be treated the same way in some applications (such as sorting and indexing), but not in others; and may be substituted for each other in some situations, but not in others. Sequences that are canonically equivalent are also compatible, but the opposite is not necessarily true. The standard also defines a text normalization procedure, called Unicode normalization, that replaces equivalent sequences of characters so that any two texts tha
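Both notions can be observed with Python's standard unicodedata module; a brief sketch using the code points quoted above:

```python
# Sketch: canonical vs. compatibility equivalence via Unicode normalization.
import unicodedata

composed   = "\u00F1"    # "ñ" as one code point
decomposed = "n\u0303"   # "n" followed by the combining tilde

print(composed == decomposed)                                # False: different code point sequences
print(unicodedata.normalize("NFC", decomposed) == composed)  # True: canonically equivalent

ligature = "\uFB00"                                          # typographic ligature "ff"
print(unicodedata.normalize("NFC", ligature))                # 'ﬀ' (canonical forms preserve it)
print(unicodedata.normalize("NFKC", ligature))               # 'ff' (compatibility form decomposes it)
```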
https://en.wikipedia.org/wiki/Fractal%20dimension%20on%20networks
Fractal analysis is useful in the study of complex networks, present in both natural and artificial systems such as computer systems, brain and social networks, allowing further development of the field in network science. Self-similarity of complex networks Many real networks have two fundamental properties, the scale-free property and the small-world property. If the degree distribution of the network follows a power law, the network is scale-free; if any two arbitrary nodes in a network can be connected in a very small number of steps, the network is said to be small-world. The small-world properties can be mathematically expressed by the slow increase of the average diameter of the network with the total number of nodes $N$: $\langle l \rangle \sim \ln N$, where $l$ is the shortest distance between two nodes. Equivalently, we obtain: $N \sim e^{\langle l \rangle / l_0}$, where $l_0$ is a characteristic length. For a self-similar structure, a power-law relation is expected rather than the exponential relation above. From this fact, it would seem that small-world networks are not self-similar under a length-scale transformation. Self-similarity has been discovered in the solvent-accessible surface areas of proteins. Because proteins form globular folded chains, this discovery has important implications for protein evolution and protein dynamics, as it can be used to establish characteristic dynamic length scales for protein functionality. The methods for calculation of the dimension Generally we calculate the fractal dimension using either the box counting method or the cluster growing method. The box counting method Let $N_B$ be the number of boxes of linear size $l_B$ needed to cover the given network. The fractal dimension $d_B$ is then given by $N_B \sim l_B^{-d_B}$. This means that the average number of vertices $\langle M_B(l_B) \rangle$ within a box of size $l_B$ satisfies $\langle M_B(l_B) \rangle \sim l_B^{d_B}$. By measuring the distribution of $N_B$ for different box sizes or by measuring the distribution of $\langle M_B(l_B) \rangle$ for different box sizes, the fractal dimension $d_B$ can be obtained by a power-law fit of the distribution. The cluster growing met
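A rough sketch of the box-counting estimate using a random sequential ball covering (one common covering heuristic among several; the example graph and helper name are assumptions for illustration):

```python
# Sketch: estimate d_B by covering the network with balls of radius l_B - 1
# around random centers, then fitting log N_B against log l_B.
import random
import numpy as np
import networkx as nx

def count_boxes(G, l_B, seed=0):
    """Approximate number of boxes of linear size l_B needed to cover G."""
    rng = random.Random(seed)
    uncovered = set(G.nodes())
    n_boxes = 0
    while uncovered:
        center = rng.choice(list(uncovered))
        ball = nx.single_source_shortest_path_length(G, center, cutoff=l_B - 1)
        uncovered -= set(ball)
        n_boxes += 1
    return n_boxes

G = nx.connected_watts_strogatz_graph(500, 4, 0.05, seed=1)  # example graph
sizes = [2, 3, 4, 5, 6]
counts = [count_boxes(G, l) for l in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"estimated d_B ≈ {-slope:.2f}")   # from N_B ~ l_B^(-d_B)
```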
https://en.wikipedia.org/wiki/Mersenne%20conjectures
In mathematics, the Mersenne conjectures concern the characterization of a kind of prime numbers called Mersenne primes, meaning prime numbers that are a power of two minus one. Original Mersenne conjecture The original, called Mersenne's conjecture, was a statement by Marin Mersenne in his Cogitata Physico-Mathematica (1644; see e.g. Dickson 1919) that the numbers 2^n − 1 were prime for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and were composite for all other positive integers n ≤ 257. The first seven entries of his list (for n = 2, 3, 5, 7, 13, 17, 19) had already been proven to be primes by trial division before Mersenne's time; only the last four entries were new claims by Mersenne. Due to the size of those last numbers, Mersenne did not and could not test all of them, nor could his peers in the 17th century. It was eventually determined, after three centuries and the availability of new techniques such as the Lucas–Lehmer test, that Mersenne's conjecture contained five errors, namely two entries are composite (those corresponding to the primes n = 67, 257) and three primes are missing (those corresponding to the primes n = 61, 89, 107). The correct list for n ≤ 257 is: n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 and 127. While Mersenne's original conjecture is false, it may have led to the New Mersenne conjecture. New Mersenne conjecture The New Mersenne conjecture or Bateman, Selfridge and Wagstaff conjecture (Bateman et al. 1989) states that for any odd natural number p, if any two of the following conditions hold, then so does the third: p = 2^k ± 1 or p = 4^k ± 3 for some natural number k. 2^p − 1 is prime (a Mersenne prime). (2^p + 1)/3 is prime (a Wagstaff prime). If p is an odd composite number, then 2^p − 1 and (2^p + 1)/3 are both composite. Therefore it is only necessary to test primes to verify the truth of the conjecture. Currently, there are nine known numbers for which all three conditions hold: 3, 5, 7, 13, 17, 19, 31, 61, 127. Bate
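The Lucas–Lehmer test mentioned above is short enough to sketch (a standard algorithm; shown here in Python for odd prime exponents):

```python
# Sketch: Lucas–Lehmer test. For an odd prime p, M_p = 2^p - 1 is prime
# iff s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and s_{i+1} = s_i^2 - 2.
def lucas_lehmer(p: int) -> bool:
    m = (1 << p) - 1              # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Checking Mersenne's claims (and the corrections) for odd exponents:
claims = [3, 5, 7, 13, 17, 19, 31, 61, 67, 89, 107, 127, 257]
print([p for p in claims if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]   (67 and 257 fail)
```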
https://en.wikipedia.org/wiki/Midsternal%20line
The midsternal line is used to describe a part of the surface anatomy of the anterior thorax. The midsternal line runs vertically down the middle of the sternum. It can be interpreted as a component of the median plane. See also Midclavicular line
https://en.wikipedia.org/wiki/Spin%20qubit%20quantum%20computer
The spin qubit quantum computer is a quantum computer based on controlling the spin of charge carriers (electrons and electron holes) in semiconductor devices. The first spin qubit quantum computer was proposed by Daniel Loss and David P. DiVincenzo in 1997, and is also known as the Loss–DiVincenzo quantum computer. The proposal was to use the intrinsic spin-½ degree of freedom of individual electrons confined in quantum dots as qubits. This should not be confused with other proposals that use the nuclear spin as qubit, like the Kane quantum computer or the nuclear magnetic resonance quantum computer. Spin qubits so far have been implemented by locally depleting two-dimensional electron gases in semiconductors such as gallium arsenide, silicon and germanium. Spin qubits have also been implemented in graphene. Loss–DiVincenzo proposal The Loss–DiVincenzo quantum computer proposal tried to fulfill DiVincenzo's criteria for a scalable quantum computer, namely: identification of well-defined qubits; reliable state preparation; low decoherence; accurate quantum gate operations and strong quantum measurements. A candidate for such a quantum computer is a lateral quantum dot system. Earlier work on applications of quantum dots for quantum computing was done by Barenco et al. Implementation of the two-qubit gate The Loss–DiVincenzo quantum computer operates, basically, using inter-dot gate voltage for implementing swap operations and local magnetic fields (or any other local spin manipulation) for implementing the controlled NOT gate (CNOT gate). The swap operation is achieved by applying a pulsed inter-dot gate voltage, so the exchange constant in the Heisenberg Hamiltonian becomes time-dependent: $H(t) = J(t)\,\mathbf{S}_L \cdot \mathbf{S}_R$. This description is only valid if: the level spacing in the quantum dot $\Delta E$ is much greater than $k_B T$; the pulse time scale $\tau_s$ is greater than $\hbar / \Delta E$, so there is no time for transitions to higher orbital levels to happen; and the decoherence time $\Gamma^{-1}$ is longer than $\tau_s$. Here $k_B$ is the Boltzmann co
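The pulsed exchange can be checked numerically: evolving two spins under $H = J\,\mathbf{S}_L \cdot \mathbf{S}_R$ with an integrated pulse $\int J(t)\,dt = \pi\hbar$ yields a SWAP gate up to a global phase. A small sketch (an illustrative verification, not the original paper's notation):

```python
# Sketch: a pi exchange pulse implements SWAP up to a global phase (hbar = 1).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# S_L . S_R with spin operators S = sigma / 2
SdotS = sum(np.kron(s, s) for s in (sx, sy, sz)) / 4

U = expm(-1j * np.pi * SdotS)      # integrated pulse: ∫ J(t) dt = π

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

print(np.allclose(U, np.exp(-1j * np.pi / 4) * SWAP))   # True: SWAP up to phase
```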
https://en.wikipedia.org/wiki/Stjepan%20Mohorovi%C4%8Di%C4%87
Stjepan Mohorovičić (August 20, 1890 – February 13, 1980) was a Croatian physicist, geophysicist and meteorologist. Biography Mohorovičić was born in the town of Bakar. His father was the world-famous geophysicist Andrija Mohorovičić. He studied mathematics and physics at the University of Zagreb, where among others his professors were Vinko Dvořák and Andrija Mohorovičić, and later he studied at Göttingen, where some of his professors were Arnold Sommerfeld, Woldemar Voigt and David Hilbert. Later on he received a doctorate from the University of Zagreb. Mohorovičić was an opponent of Einstein's theory of relativity. Because of his longtime opposition to and criticism of the theory of relativity he remained a high school professor his whole life. His work went largely ignored, especially in Croatia. He died in Zagreb. Scientific work His scientific interests included seismology, meteorology, astrophysics and theoretical physics. He began his career in seismology with his father. In 1913 he developed a new method for locating the hypocenter of an earthquake and gave an independent verification of the discontinuity theory put forward by his father. In 1916 he published an idea of the existence of smaller discontinuities in Earth's crust and mantle. He put forward his own theory about the composition and the formation of the Moon, the explosive formation of lunar craters, and predicted the existence of a Moho layer on the Moon. The existence of the Moho layer on the Moon was confirmed in 1969 by seismic measurements done by the Apollo 11 crew. Mohorovičić is often called "the father of positronium" because his most significant work is the prediction of the existence of positronium. Positronium is the bound state of an electron and a positron and therefore the lightest atom. It was experimentally discovered in 1951 by Martin Deutsch and became known as positronium. In his paper, Mohorovičić also calculated the spectrum of positronium and predicted the existence of positronium in stars becau
https://en.wikipedia.org/wiki/Elazar%20ben%20Tsedaka%20ben%20Yitzhaq
Elazar ben Tsedaka ben Yitzhaq (Samaritan Hebrew: ʾElā̊ʿzår ban Ṣīdqåʿ ban Yēṣʿā̊q; January 16, 1927 – February 3, 2010) was the Samaritan High Priest from 2004 until his death. He was born in Nablus. He succeeded his cousin Saloum Cohen in 2004. According to tradition he is the 131st holder of this post since Aaron. During his time in office, he would lead the Samaritan community in their annual Passover ritual sacrifice of sheep. Before retirement he worked as a mathematics teacher. His funeral in February 2010 was attended by Israeli and Palestinian officials, who noted his major efforts in helping to guide his community, and to serve as a bridge between Israeli and Palestinian communities.
https://en.wikipedia.org/wiki/ISO/IEC%209995
ISO/IEC 9995 Information technology — Keyboard layouts for text and office systems is an ISO/IEC standard series defining layout principles for computer keyboards. It does not define specific layouts but provides the base for national and industry standards which define such layouts. The project of this standard was adopted at ISO in Berlin in 1985 under the proposition of Dr Yves Neuville. The ISO/IEC 9995 standard series dates to 1994 and has undergone several updates over the years. Parts The ISO/IEC 9995 standard series currently (as of September 2015) consists of the following parts: ISO/IEC 9995-1:2009 General principles governing keyboard layouts ISO/IEC 9995-2:2009 Alphanumeric section with Amendment 1 (2012) Numeric keypad emulation ISO/IEC 9995-3:2010 Complementary layouts of the alphanumeric zone of the alphanumeric section ISO/IEC 9995-4:2009 Numeric section ISO/IEC 9995-5:2009 Editing and function section ISO/IEC 9995-7:2009 Symbols used to represent functions with Amendment 1 (2012) ISO/IEC 9995-8:2009 Allocation of letters to the keys of a numeric keypad ISO/IEC 9995-9:2016 Multilingual-usage, multiscript keyboard group layouts ISO/IEC 9995-10:2013 Conventional symbols and methods to represent graphic characters not uniquely recognizable by their glyph on keyboards and in documentation ISO/IEC 9995-11:2015 Functionality of dead keys and repertoires of characters entered by dead keys (ISO 9995-6:2006 Function section was withdrawn 2009-10-08.) ISO/IEC 9995-1 ISO/IEC 9995-1 provides a fundamental description of keyboards suitable for text and office systems, and defines several terms which are used throughout the ISO/IEC 9995 standard series. Physical division and reference grid The figure shows the division of a keyboard into sections, which are subdivided into zones. alphanumeric section alphanumeric zone (indicated by green coloring) function zones (indicated by blue coloring) numeric section numeric zone (indicated by darker
https://en.wikipedia.org/wiki/Video%20processing
In electronics engineering, video processing is a particular case of signal processing, in particular image processing, which often employs video filters and where the input and output signals are video files or video streams. Video processing techniques are used in television sets, VCRs, DVDs, video codecs, video players, video scalers and other devices. For example, often only the design and the video processing differ among TV sets from different manufacturers. Video processor Video processing chips are often combined with video scalers to create a video processor that improves the apparent definition of video signals. They perform the following tasks: deinterlacing aspect ratio control digital zoom and pan brightness/contrast/hue/saturation/sharpness/gamma adjustments frame rate conversion and inverse-telecine color point conversion (601 to 709 or 709 to 601) color space conversion (YPBPR/YCBCR to RGB or RGB to YPBPR/YCBCR) mosquito noise reduction block noise reduction detail enhancement edge enhancement motion compensation primary and secondary color calibration (including hue/saturation/luminance controls independently for each) These can either be in chip form, or as a stand-alone unit to be placed between a source device (like a DVD player or set-top-box) and a display with less-capable processing. The most widely recognized video processor companies in the market are: Genesis Microchip (with the FLI chipset – was Genesis Microchip, STMicroelectronics completed its acquisition of Genesis Microchip on January 25, 2008) Sigma Designs (with the VXP chipset – was Gennum, Sigma Designs purchased the Image Processing group from Gennum on February 8, 2008, Sigma Designs is now part of Silicon Labs) Integrated Device Technology (with the HQV chipset and Teranex system products – was Silicon Optix, IDT purchased SO on October 21, 2008, IDT is now part of Renesas) Silicon Image (with the VRS chipset and DVDO system products – was Anchor Bay Technologies, Silico
https://en.wikipedia.org/wiki/Balancer%20chromosome
Balancer chromosomes (or simply balancers) are a type of genetically engineered chromosome used in laboratory biology for the maintenance of recessive lethal (or sterile) mutations within living organisms without interference from natural selection. Since such mutations are viable only in heterozygotes, they cannot be stably maintained through successive generations and therefore continually lead to production of wild-type organisms, which can be prevented by replacing the homologous wild-type chromosome with a balancer. In this capacity, balancers are crucial for genetics research on model organisms such as Drosophila melanogaster, the common fruit fly, for which stocks cannot be archived (e.g. frozen). They can also be used in forward genetics screens to specifically identify recessive lethal (or sterile) mutations. For that reason, balancers are also used in other model organisms, most notably the nematode worm Caenorhabditis elegans and the mouse. Typical balancer chromosomes are designed to (1) carry recessive lethal mutations themselves, eliminating homozygotes which do not carry the desired mutation; (2) suppress meiotic recombination with their homologs, which prevents de novo creation of wild-type chromosomes; and (3) carry dominant genetic markers, which can help identify rare recombinants and are useful for screening purposes. History Balancer chromosomes were first used in the fruit fly by Hermann Muller, who pioneered the use of radiation for organismal mutagenesis. In the modern usage of balancer chromosomes, random mutations are first induced by exposing living organisms with otherwise normal chromosomes to substances which cause DNA damage; in flies and nematodes, this usually occurs by feeding larvae ethyl methanesulfonate (EMS). The DNA-damaged larvae (or the adults into which they develop) are then screened for mutations. When a phenotype of interest is observed, the line expressing the mutation is crossed with another line containing balancer
https://en.wikipedia.org/wiki/Outernet%20%28network%29
Outernet is a wireless community network based in Poland, in which everyone owns their own node's hardware, configured in a mesh network managed by OLSR. Participants must permit routing of other nodes' data through their routers, which allows building a large, maintenance-free and low-cost network infrastructure that is the personal property of its users. See also B.A.T.M.A.N.
https://en.wikipedia.org/wiki/Formose%20reaction
The formose reaction, discovered by Aleksandr Butlerov in 1861, and hence also known as the Butlerov reaction, involves the formation of sugars from formaldehyde. The term formose is a portmanteau of formaldehyde and aldose. Reaction and mechanism The reaction is catalyzed by a base and a divalent metal such as calcium. The intermediary steps taking place are aldol reactions, reverse aldol reactions, and aldose-ketose isomerizations. Intermediates are glycolaldehyde, glyceraldehyde, dihydroxyacetone, and tetrose sugars. In 1959, Breslow proposed a mechanism for the reaction, consisting of the following steps: The reaction exhibits an induction period, during which only the nonproductive Cannizzaro disproportionation of formaldehyde (to methanol and formate) occurs. The initial dimerization of formaldehyde to give glycolaldehyde (1) occurs via an unknown mechanism, possibly promoted by light or through a free radical process and is very slow. However, the reaction is autocatalytic: 1 catalyzes the condensation of two molecules of formaldehyde to produce an additional molecule of 1. Hence, even a trace (as low as 3 ppm) of glycolaldehyde is enough to initiate the reaction. The autocatalytic cycle begins with the aldol reaction of 1 with formaldehyde to make glyceraldehyde (2). An aldose-ketose isomerization of 2 forms dihydroxyacetone (3). A further aldol reaction of 3 with formaldehyde produces tetrulose (6), which undergoes another ketose-aldose isomerization to form aldotetrose 7 (either threose or erythrose). The retro-aldol reaction of 7 generates two molecules of 1, resulting in the net production of a molecule of 1 from two molecules of formaldehyde, catalyzed by 1 itself (autocatalysis). During this process, 3 can also react with 1 to form ribulose (4), which can isomerize to give rise to ribose (5), an important building block of ribonucleic acid. The aldose-ketose isomerization steps are promoted by chelation to
https://en.wikipedia.org/wiki/Von%20Neumann%20neighborhood
In cellular automata, the von Neumann neighborhood (or 4-neighborhood) is classically defined on a two-dimensional square lattice and is composed of a central cell and its four adjacent cells. The neighborhood is named after John von Neumann, who used it to define the von Neumann cellular automaton and the von Neumann universal constructor within it. It is one of the two most commonly used neighborhood types for two-dimensional cellular automata, the other one being the Moore neighborhood. This neighbourhood can be used to define the notion of 4-connected pixels in computer graphics. The von Neumann neighbourhood of a cell is the cell itself and the cells at a Manhattan distance of 1. The concept can be extended to higher dimensions, for example forming a 6-cell octahedral neighborhood for a cubic cellular automaton in three dimensions. Von Neumann neighborhood of range r An extension of the simple von Neumann neighborhood described above is to take the set of points at a Manhattan distance of r > 1. This results in a diamond-shaped region (shown for r = 2 in the illustration). These are called von Neumann neighborhoods of range or extent r. The number of cells in a 2-dimensional von Neumann neighborhood of range r can be expressed as $2r(r+1)+1$. The number of cells in a d-dimensional von Neumann neighborhood of range r is the Delannoy number D(d,r). The number of cells on a surface of a d-dimensional von Neumann neighborhood of range r is the Zaitsev number. See also Moore neighborhood Neighbourhood (graph theory) Taxicab geometry Lattice graph Pixel connectivity Chain code
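A tiny enumeration sketch (an illustrative helper) confirms the cell-count formula:

```python
# Sketch: enumerate the 2-D von Neumann neighborhood of range r around (x, y).
def von_neumann_neighborhood(x: int, y: int, r: int):
    """All cells within Manhattan distance r of (x, y), the cell itself included."""
    return [(x + dx, y + dy)
            for dx in range(-r, r + 1)
            for dy in range(-(r - abs(dx)), r - abs(dx) + 1)]

for r in (1, 2, 3):
    n = len(von_neumann_neighborhood(0, 0, r))
    assert n == 2 * r * (r + 1) + 1     # matches the formula above
    print(r, n)                          # prints: 1 5, 2 13, 3 25
```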
https://en.wikipedia.org/wiki/Sweep%20and%20prune
In physical simulations, sweep and prune is a broad phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision, i.e. intersection. This is achieved by sorting the starts (lower bound) and ends (upper bound) of the bounding volume of each solid along a number of arbitrary axes. As the solids move, their starts and ends may overlap. When the bounding volumes of two solids overlap in all axes they are flagged to be tested by more precise and time-consuming algorithms. Sweep and prune exploits temporal coherence, as it is likely that solids do not move significantly between two simulation steps. Because of that, at each step, the sorted lists of bounding volume starts and ends can be updated with relatively few computational operations. Sorting algorithms which are fast at sorting almost-sorted lists, such as insertion sort, are particularly good for this purpose. Depending on the type of bounding volume used, it may be necessary to update the bounding volume dimensions every time a solid is reoriented. To circumvent this, temporal coherence can be used to compute the changes in bounding volume geometry with fewer operations. Another approach is to use bounding spheres or other orientation-independent bounding volumes. Sweep and prune is also known as sort and sweep, referred to this way in David Baraff's Ph.D. thesis in 1992. Later works like the 1995 paper about I-COLLIDE by Jonathan D. Cohen et al. refer to the algorithm as sweep and prune. See also Collision detection Bounding volume Physics engine Game physics
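A minimal one-axis sketch of the idea (illustrative; a real implementation keeps the lists incrementally sorted between frames, and pairs must overlap on every axis before the narrow phase runs):

```python
# Sketch: one-axis sweep and prune over axis-aligned intervals.
def sweep_and_prune(intervals):
    """Return index pairs whose intervals overlap on this axis (candidate collisions)."""
    events = sorted((lo, hi, i) for i, (lo, hi) in enumerate(intervals))
    active, candidates = [], []
    for lo, hi, i in events:
        # prune solids whose interval ended before this one starts
        active = [(ahi, a) for ahi, a in active if ahi >= lo]
        candidates.extend((a, i) for ahi, a in active)
        active.append((hi, i))
    return candidates

boxes = [(0.0, 2.0), (1.5, 3.0), (5.0, 6.0)]   # (start, end) per solid along x
print(sweep_and_prune(boxes))                   # [(0, 1)]: only solids 0 and 1 overlap
```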
https://en.wikipedia.org/wiki/Ventilation/perfusion%20ratio
In respiratory physiology, the ventilation/perfusion ratio (V/Q ratio) is a ratio used to assess the efficiency and adequacy of the matching of two variables: V – ventilation – the air that reaches the alveoli Q – perfusion – the blood that reaches the alveoli via the capillaries The V/Q ratio can therefore be defined as the ratio of the amount of air reaching the alveoli per minute to the amount of blood reaching the alveoli per minute—a ratio of volumetric flow rates. These two variables, V and Q, constitute the main determinants of the blood oxygen (O2) and carbon dioxide (CO2) concentration. The V/Q ratio can be measured with a ventilation/perfusion scan. A V/Q mismatch can cause Type 1 respiratory failure. Physiology Ideally, the oxygen provided via ventilation would be just enough to saturate the blood fully. In the typical adult, 1 litre of blood can hold about 200 mL of oxygen; 1 litre of dry air has about 210 mL of oxygen. Therefore, under these conditions, the ideal ventilation/perfusion ratio would be about 0.95. If one were to consider humidified air (with less oxygen), then the ideal V/Q ratio would be in the vicinity of 1.0, thus leading to the concept of ventilation–perfusion equality or ventilation–perfusion matching. This matching may be assessed in the lung as a whole, or in individual gas-exchanging units or sub-groups of them. Conversely, ventilation–perfusion mismatch is the term used when the ventilation and the perfusion of a gas-exchanging unit are not matched. The actual values in the lung vary depending on the position within the lung. If taken as a whole, the typical value is approximately 0.8. Because the lung is centered vertically around the heart, part of the lung is superior to the heart, and part is inferior. This has a major impact on the V/Q ratio: apex of lung – higher base of lung – lower In a subject standing in orthostatic position (upright) the apex of the lung shows a higher V/Q ratio, while at the bas
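Spelling out the arithmetic behind the quoted ideal ratio (a worked check using the figures above):

$$\left.\frac{\dot V}{\dot Q}\right|_{\text{ideal}} = \frac{200\ \text{mL O}_2 \text{ per litre of blood}}{210\ \text{mL O}_2 \text{ per litre of dry air}} \approx 0.95$$

That is, roughly 0.95 litres of dry air carry just enough oxygen to saturate 1 litre of blood.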
https://en.wikipedia.org/wiki/Mannan-binding%20lectin
Mannose-binding lectin (MBL), also called mannan-binding lectin or mannan-binding protein (MBP), is a lectin that is instrumental in innate immunity as an opsonin and via the lectin pathway. Structure MBL has an oligomeric structure (400-700 kDa), built of subunits that contain three presumably identical peptide chains of about 30 kDa each. Although MBL can form several oligomeric forms, there are indications that dimers and trimers are biologically inactive as an opsonin and at least a tetramer form is needed for activation of complement. Genes and polymorphisms The human MBL2 gene is located on chromosome 10q11.2-q21. Mice have two homologous genes, but in humans the first of them was lost. Low-level expression of the MBL1 pseudogene 1 (MBL1P1) has been detected in liver. The pseudogene encodes a truncated 51-amino acid protein that is homologous to the MBLA isoform in rodents and some primates. Structural mutations in exon 1 of the human MBL2 gene, at codon 52 (Arg to Cys, allele D), codon 54 (Gly to Asp, allele B) and codon 57 (Gly to Glu, allele C), also independently reduce the level of functional serum MBL by disrupting the collagenous structure of the protein. Furthermore, several nucleotide substitutions in the promoter region of the MBL2 gene at position −550 (H/L polymorphism), −221 (X/Y polymorphism) and −427, −349, −336, del (−324 to −329), −70 and +4 (P/Q polymorphisms) affect the MBL serum concentration. Both the frequency of structural mutations and the promoter polymorphisms that are in strong linkage disequilibrium vary among ethnic groups, resulting in seven major haplotypes: HYPA, LYQA, LYPA, LXPA, LYPB, LYQC and HYPD. Differences in the distribution of these haplotypes are the major cause of interracial variations in MBL serum levels. Both HYPA and LYQA are high-producing haplotypes, LYPA an intermediate-producing haplotype and LXPA a low-producing haplotype, whereas LYPB, LYQC and HYPD are defective haplotypes, which cause severe MBL deficiency. Both M
https://en.wikipedia.org/wiki/Infinitely%20near%20point
In algebraic geometry, an infinitely near point of an algebraic surface S is a point on a surface obtained from S by repeatedly blowing up points. Infinitely near points of algebraic surfaces were introduced by . There are some other meanings of "infinitely near point". Infinitely near points can also be defined for higher-dimensional varieties: there are several inequivalent ways to do this, depending on what one is allowed to blow up. Weil gave a definition of infinitely near points of smooth varieties, though these are not the same as infinitely near points in algebraic geometry. In the line of hyperreal numbers, an extension of the real number line, two points are called infinitely near if their difference is infinitesimal. Definition When blowing up is applied to a point P on a surface S, the new surface S* contains a whole curve C where P used to be. The points of C have the geometric interpretation as the tangent directions at P to S. They can be called infinitely near to P as a way of visualizing them on S, rather than S*. More generally this construction can be iterated by blowing up a point on the new curve C, and so on. An infinitely near point (of order n) Pn on a surface S0 is given by a sequence of points P0, P1,...,Pn on surfaces S0, S1,...,Sn such that Si is given by blowing up Si–1 at the point Pi–1 and Pi is a point of the surface Si with image Pi–1. In particular the points of the surface S are the infinitely near points on S of order 0. Infinitely near points correspond to 1-dimensional valuations of the function field of S with 0-dimensional center, and in particular correspond to some of the points of the Zariski–Riemann surface. (The 1-dimensional valuations with 1-dimensional center correspond to irreducible curves of S.) It is also possible to iterate the construction infinitely often, producing an infinite sequence P0, P1,... of infinitely near points. These infinite sequences correspond to the 0-dimensional valuations of the function
https://en.wikipedia.org/wiki/Pyrimidine%20dimer
Pyrimidine dimers are molecular lesions formed from thymine or cytosine bases in DNA via photochemical reactions, commonly associated with direct DNA damage. Ultraviolet light (UV; particularly UVC) induces the formation of covalent linkages between consecutive bases along the nucleotide chain in the vicinity of their carbon–carbon double bonds. The photo-coupled dimers are fluorescent. The dimerization reaction can also occur among pyrimidine bases in dsRNA (double-stranded RNA)—uracil or cytosine. Two common UV products are cyclobutane pyrimidine dimers (CPDs) and 6–4 photoproducts. These premutagenic lesions alter the structure of the DNA helix and cause non-canonical base pairing. Specifically, adjacent thymines or cytosines in DNA will form a cyclobutane ring when joined together and cause a distortion in the DNA. This distortion prevents replication and transcription machinery from proceeding beyond the site of the dimerization. Up to 50–100 such reactions per second might occur in a skin cell during exposure to sunlight, but they are usually corrected within seconds by photolyase reactivation or nucleotide excision repair. In humans, the most common form of DNA repair is nucleotide excision repair (NER). In contrast, organisms such as bacteria can counterintuitively harvest energy from the sun to fix DNA damage from pyrimidine dimers via photolyase activity. If these lesions are not fixed, polymerase machinery may misread or add in the incorrect nucleotide to the strand. If the damage to the DNA is overwhelming, mutations can arise within the genome of an organism and may lead to the production of cancer cells. Uncorrected lesions can inhibit polymerases, cause misreading during transcription or replication, or lead to arrest of replication. This damage causes sunburn and triggers the production of melanin. Pyrimidine dimers are the primary cause of melanomas in humans. Types of dimers A cyclobutane pyrimidine dimer (CPD) contains a four-membered ring arising from the coupling of the
https://en.wikipedia.org/wiki/Alexander%20Givental
Alexander Givental is a Russian-American mathematician working in symplectic topology and singularity theory, as well as their relation to topological string theories. He graduated from Moscow Phys-Math School Number 2 (later renamed Lyceum ) and then the Gubkin Russian State University of Oil and Gas, and he received his Ph.D. under the supervision of V. I. Arnold in 1987. He emigrated to the United States in 1990. He provided the first proof of the mirror conjecture for Calabi–Yau manifolds that are complete intersections in toric ambient spaces, in particular for quintic hypersurfaces in P^4. He is now Professor of Mathematics at the University of California, Berkeley. As an extracurricular activity, he translates Russian poetry into English and publishes books, including his own translation of a geometry textbook by Andrey Kiselyov and poetry of Marina Tsvetaeva. Givental is a father of two.
https://en.wikipedia.org/wiki/Joseph%20J.%20Kohn
Joseph John Kohn (May 18, 1932 – September 13, 2023) was a Czechoslovakian-born American academic and mathematician. He was professor of mathematics at Princeton University, where he researched partial differential operators and complex analysis. Life and work Kohn's father was the Czech-Jewish architect Otto Kohn. After Nazi Germany invaded Czechoslovakia, he and his family emigrated to Ecuador in 1939. There, Kohn attended Colegio Americano de Quito. In 1945, Joseph moved to the United States, where he attended Brooklyn Technical High School. He studied at MIT (B.S. 1953) and at Princeton University, where he earned his Ph.D. in 1956 under Donald Spencer ("A Non-Self-Adjoint Boundary Value Problem on Pseudo-Kähler Manifolds"). From 1956 to 1957, Kohn was an instructor at Princeton. In 1958, he became assistant professor, in 1962, associate professor and in 1964, professor at Brandeis University, where he also served as Chairman of the Mathematics Department (1963–66). From 1968, he was a professor at Princeton University, where he served as chairman from 1993 to 1996. He was a visiting professor at Harvard (1996–97), Prague, Florence, Mexico City (National Polytechnic Institute), Stanford, Berkeley, Scuola Normale Superiore (Pisa, Italy), and IHES (France). Kohn's work focused, among other things, on the use of partial differential operators in the theory of functions of several complex variables and microlocal analysis. He had at least 65 doctoral descendants. Kohn was a Sloan Fellow in 1963 and a Guggenheim Fellow in 1976–77. From 1976 to 1988, he was a member of the editorial board of the Annals of Mathematics. In 1966, he was an invited speaker at the International Congress of Mathematicians in Moscow ("Differential complexes"). Film director Miloš Forman was his half-brother through their father Otto Kohn. Kohn died in Plainsboro, New Jersey on September 13, 2023, at the age of 91. Awards and honors From 1966, Kohn was a member of the Am
https://en.wikipedia.org/wiki/Region%20connection%20calculus
The region connection calculus (RCC) is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions (in Euclidean space, or in a topological space) by their possible relations to each other. RCC8 consists of 8 basic relations that are possible between two regions: disconnected (DC) externally connected (EC) equal (EQ) partially overlapping (PO) tangential proper part (TPP) tangential proper part inverse (TPPi) non-tangential proper part (NTPP) non-tangential proper part inverse (NTPPi) From these basic relations, combinations can be built. For example, proper part (PP) is the union of TPP and NTPP. Axioms RCC is governed by two axioms. for any region x, x connects with itself for any region x, y, if x connects with y, y will connect with x Remark on the axioms The two axioms describe two features of the connection relation, but not the characteristic feature of the connection relation. For example, we can say that an object is less than 10 meters away from itself and that if object A is less than 10 meters away from object B, object B will be less than 10 meters away from object A. So, the relation 'less-than-10-meters' also satisfies the above two axioms, but does not talk about the connection relation in the intended sense of RCC. Composition table The composition table of RCC8 is as follows: "*" denotes the universal relation, meaning that no relation can be discarded. Usage example: if a TPP b and b EC c, then (row 4, column 2) of the table says that a DC c or a EC c. Examples The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on its own property. The first house possibly touches the boundary of the property; the second one surely does not. What can we infer about the relation of the second property to the road? The spatial configuration can be formalized in RCC8 as the following constraint network: house
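A toy encoding of such table lookups (only the composition quoted above is filled in; the full 8×8 table is assumed to come from the literature):

```python
# Sketch: composing RCC8 relations via a (partial) composition table.
RELATIONS = {"DC", "EC", "EQ", "PO", "TPP", "TPPi", "NTPP", "NTPPi"}

COMPOSE = {
    # from the usage example: a TPP b and b EC c  =>  a {DC, EC} c
    ("TPP", "EC"): {"DC", "EC"},
}

def compose(r1: str, r2: str) -> set:
    """Possible relations between x and z, given x r1 y and y r2 z."""
    # "*" (the universal relation) for entries not filled in: no constraint.
    return COMPOSE.get((r1, r2), RELATIONS)

print(compose("TPP", "EC"))   # {'DC', 'EC'}
```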
https://en.wikipedia.org/wiki/Crosslinking%20of%20DNA
In genetics, crosslinking of DNA occurs when various exogenous or endogenous agents react with two nucleotides of DNA, forming a covalent linkage between them. This crosslink can occur within the same strand (intrastrand) or between opposite strands of double-stranded DNA (interstrand). These adducts interfere with cellular metabolism, such as DNA replication and transcription, triggering cell death. These crosslinks can, however, be repaired through excision or recombination pathways. DNA crosslinking also has useful merit in chemotherapy and targeting cancerous cells for apoptosis, as well as in understanding how proteins interact with DNA. Crosslinking agents Many characterized crosslinking agents have two independently reactive groups within the same molecule, each of which is able to bind with a nucleotide residue of DNA. These agents are separated based upon their source of origin and labeled either as exogenous or endogenous. Exogenous crosslinking agents are chemicals and compounds, both natural and synthetic, that stem from environmental exposures such as pharmaceuticals and cigarette smoke or automotive exhaust. Endogenous crosslinking agents are compounds and metabolites that are introduced from cellular or biochemical pathways within a cell or organism. Exogenous agents Nitrogen mustards are exogenous alkylating agents which react with the N7 position of guanine. These compounds have a bis(2-chloroethyl)amine core structure with a variable R-group; the two reactive functional groups serve to alkylate nucleobases and form a crosslink lesion. These agents most preferentially form a 1,3 5'-d(GNC) interstrand crosslink. The introduction of this agent slightly bends the DNA duplex to accommodate the agent's presence within the helix. These agents are often introduced as pharmaceuticals and are used in cytotoxic chemotherapy. Cisplatin (cis-diamminedichloroplatinum(II)) and its derivatives mostly act on adjacent guanines at their N7 positi
https://en.wikipedia.org/wiki/331%20model
The 331 model in particle physics is an extension of the electroweak gauge symmetry which offers an explanation of why there must be three families of quarks and leptons. The name "331" comes from the full gauge symmetry group . Details The 331 model in particle physics is an extension of the electroweak gauge symmetry from to with . In the 331 model, hypercharge is given by and electric charge is given by where and are the Gell-Mann matrices of SU(3) and and are parameters of the model. Motivation The 331 model offers an explanation of why there must be three families of quarks and leptons. One curious feature of the Standard Model is that the gauge anomalies independently exactly cancel for each of the three known quark-lepton families. The Standard Model thus offers no explanation of why there are three families, or indeed why there is more than one family. The idea behind the 331 model is to extend the standard model such that all three families are required for anomaly cancellation. More specifically, in this model the three families transform differently under an extended gauge group. The perfect cancellation of the anomalies within each family is ruined, but the anomalies of the extended gauge group cancel when all three families are present. The cancellation will persist for 6, 9, ... families, so having only the three families observed in nature is the least possible matter content. Such a construction necessarily requires the addition of further gauge bosons and chiral fermions, which then provide testable predictions of the model in the form of elementary particles. These particles could be found experimentally at masses above the electroweak scale, which is on the order of 10²–10³ GeV. The minimal 331 model predicts singly and doubly charged spin-one bosons, bileptons, which could show up in electron-electron scattering when it is studied at TeV energy scales and may also be produced in multi-TeV proton–proton scattering at the Large Hadro
https://en.wikipedia.org/wiki/Alinea%20%28restaurant%29
Alinea is a restaurant in Chicago, Illinois, United States. In 2010, Alinea was awarded three stars by the Michelin Guide. Since the closing on December 20, 2017 of Grace, Alinea remains the only Chicago restaurant with three Michelin stars. History The restaurant opened on May 4, 2005, and takes its name from the symbol alinea, which is featured as a logo. Co-owner Nick Kokonas wrote of the restaurant's name, Alinea literally means "off the line". The restaurant's symbol, more commonly known as the pilcrow, indicates the beginning of a new train of thought, or a new paragraph. There's a double meaning: on one hand, Alinea claims to represent a new train of thought about food, but as a restaurant, everything still has to come "off the line". In October 2008, chef and owner Grant Achatz and co-author Kokonas published Alinea, a hardcover coffee-table book featuring more than 100 of the restaurant's recipes. In January 2016, the Alinea Group, the owner of Alinea, bought Moto restaurant in Chicago. On January 1, 2016, Alinea closed temporarily for renovations. The restaurant planned to operate pop-up restaurants worldwide before reopening on May 20, 2016 after an extensive remodel and overhaul of the menu. In May 2016, Alinea and its chef and owner Grant Achatz were featured in the Netflix show Chef's Table. The innovative food of Alinea and the journey of Achatz to renowned chef were highlighted. In 2020, Alinea served diners from a rooftop when all indoor dining was closed in Illinois during the coronavirus pandemic. The menu included a canapé shaped like the SARS‑CoV‑2 virus. Alinea Group co-owner Nick Kokonas stated the appetizer was "meant to provoke discomfort, conversation, and awareness" but some diners described it as "tacky", "disrespectful", and "insensitive". Awards and honors Alinea received the AAA Five Diamond Award, the highest level of recognition given by the AAA, from 2007 to 2017. It ranked ninth on the S. Pellegrino World's 50 Bes
https://en.wikipedia.org/wiki/Iliopectineal%20line
The iliopectineal line is the border of the iliopubic eminence. It can be defined as a compound structure of the arcuate line (from the ilium) and pectineal line (from the pubis). With the sacral promontory, it makes up the linea terminalis. The iliopectineal line divides the pelvis into the pelvis major (false pelvis) above and the pelvis minor (true pelvis) below.
https://en.wikipedia.org/wiki/China%20doll
A china doll is a doll made partially or wholly out of glazed porcelain. The name comes from china being used to refer to the material porcelain. Colloquially the term china doll is sometimes used to refer to any porcelain or bisque doll, but more specifically it describes only glazed dolls. A typical china doll has a glazed porcelain head with painted molded hair and a body made of cloth or leather. They range in size from more than 30" (76 cm) tall to 1 inch (2.5 cm). Antique china dolls were predominantly produced in Germany, with the peak of popularity between approximately 1850 and 1890. Rare and elaborately decorated antique china dolls can have value on the collectors market. Beginning in the mid-20th-century reproductions of china dolls of various quality were produced in Japan and the United States. History Antique china dolls were predominantly produced in Germany, from around 1840 into the 1930s with the peak in popularity between roughly 1850 and 1890. The earliest china dolls depicted grown women and were dressed in contemporary, fashionable clothes. These dolls display contemporary hairstyles: sausage curls, ribbons or headbands. From approximately the 1850s on, child-like china dolls became popular. Blonde-haired china dolls became more prevalent at the end of the 1800s. China doll heads were produced in large quantities, counting in the millions. Some of the most prolific manufacturers were companies like Kestner; Conta & Boehme; Alt, Beck and Gottschalck; and Hertwig. Other German companies include Kling, Kister, KPM, and Meissen. China dolls were also produced in Czechoslovakia (Schlaggenwald), Denmark (Royal Copenhagen), France (Barrois, Jacob Petit), Poland (Tielsch), and Sweden (Rörstrand.) The earliest known were made by Kestner, KPM, Meissen and Royal Copenhagen. Production of unglazed bisque dolls began in 1850 and they increased their market share towards the end of the 19th century. Harper's Bazaar referred to china dolls as "old fashi
https://en.wikipedia.org/wiki/Poynting%20effect
The Poynting effect may refer to two unrelated physical phenomena. Neither should be confused with the Poynting–Robertson effect. All of these effects are named after John Henry Poynting, an English physicist. Solid mechanics In solid mechanics, the Poynting effect is a finite-strain effect observed when an elastic cube is sheared between two plates and stress is developed in the direction normal to the sheared faces, or when a cylinder is subjected to torsion and the axial length changes. The Poynting phenomenon in torsion was noticed experimentally by J. H. Poynting. Chemistry and thermodynamics In thermodynamics, the Poynting effect generally refers to the change in the fugacity of a liquid when a non-condensable gas is mixed with the vapor at saturated conditions. Equivalently in terms of vapor pressure, if one assumes that the vapor and the non-condensable gas behave as ideal gases and an ideal mixture, it can be shown that the vapor pressure is multiplied by a correction factor (written out below) involving the unmodified vapor pressure, the liquid molar volume, the gas constant, the temperature, and the total pressure (vapor pressure + non-condensable gas). A common example is the production of the medicine Entonox, a high-pressure mixture of nitrous oxide and oxygen. The ability to combine nitrous oxide and oxygen at high pressure while remaining in the gaseous form is due to the Poynting effect.
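The following is a standard form of this Poynting correction, consistent with the quantities listed above; the symbols are chosen here for illustration and are not taken from the original text:

```latex
% Poynting correction to the vapor pressure (symbols chosen for illustration):
%   P_v^{mod}  - vapor pressure modified by the non-condensable gas
%   P_v^{sat}  - unmodified (pure) saturation vapor pressure
%   v_L        - liquid molar volume
%   R          - gas constant,  T - temperature
%   P_T        - total pressure (vapor pressure + non-condensable gas)
P_v^{\mathrm{mod}} \;=\; P_v^{\mathrm{sat}}
  \exp\!\left[\frac{v_L\,\bigl(P_T - P_v^{\mathrm{sat}}\bigr)}{R\,T}\right]
```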
https://en.wikipedia.org/wiki/Davydov%20soliton
In quantum biology, the Davydov soliton (after the Soviet Ukrainian physicist Alexander Davydov) is a quasiparticle representing an excitation propagating along the self-trapped amide I groups within the α-helices of proteins. It is a solution of the Davydov Hamiltonian. The Davydov model describes the interaction of the amide I vibrations with the hydrogen bonds that stabilize the α-helices of proteins. The elementary excitations within the α-helix are given by the phonons which correspond to the deformational oscillations of the lattice, and the excitons which describe the internal amide I excitations of the peptide groups. Referring to the atomic structure of an α-helix region of protein the mechanism that creates the Davydov soliton (polaron, exciton) can be described as follows: vibrational energy of the C=O stretching (or amide I) oscillators that is localized on the α-helix acts through a phonon coupling effect to distort the structure of the α-helix, while the helical distortion reacts again through phonon coupling to trap the amide I oscillation energy and prevent its dispersion. This effect is called self-localization or self-trapping. Solitons in which the energy is distributed in a fashion preserving the helical symmetry are dynamically unstable, and such symmetrical solitons once formed decay rapidly when they propagate. On the other hand, an asymmetric soliton which spontaneously breaks the local translational and helical symmetries possesses the lowest energy and is a robust localized entity. Davydov Hamiltonian Davydov Hamiltonian is formally similar to the Fröhlich-Holstein Hamiltonian for the interaction of electrons with a polarizable lattice. Thus the Hamiltonian of the energy operator is where is the exciton Hamiltonian, which describes the motion of the amide I excitations between adjacent sites; is the phonon Hamiltonian, which describes the vibrations of the lattice; and is the interaction Hamiltonian, which describes the interaction
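Written out, the decomposition described in the last sentence above is a sum of three operators; the hats and subscripts here are illustrative notation rather than a quotation of the stripped formula:

```latex
% Davydov Hamiltonian as a sum of three parts (notation illustrative):
%   \hat{H}_{ex}  - exciton part (amide I excitations hopping between adjacent sites)
%   \hat{H}_{ph}  - phonon part (vibrations of the hydrogen-bonded lattice)
%   \hat{H}_{int} - exciton-phonon interaction part
\hat{H} \;=\; \hat{H}_{\mathrm{ex}} + \hat{H}_{\mathrm{ph}} + \hat{H}_{\mathrm{int}}
```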
https://en.wikipedia.org/wiki/Chip%20select
Chip select (CS) or slave select (SS) is the name of a control line in digital electronics used to select one (or a set) of integrated circuits (commonly called "chips") out of several connected to the same computer bus, usually utilizing three-state logic. One bus that uses the chip/slave select is the Serial Peripheral Interface Bus (SPI bus). When an engineer needs to connect several devices to the same set of input wires (e.g., a computer bus), but retain the ability to send and receive data or commands to and from each device independently of the others on the bus, they can use a chip select. The chip select is a command pin on many integrated circuits which connects the I/O pins on the device to the internal circuitry of that device. When the chip select pin is held in the inactive state, the chip or device is "deaf", and pays no heed to changes in the state of its other input pins; it holds its outputs in the high impedance state, so other chips can drive those signals. When the chip select pin is held in the active state, the chip or device assumes that any input changes it "hears" are meant for it, and responds as if it is the only chip on the bus. Because the other chips have their chip select pins in the inactive state, their outputs are high impedance, allowing the single selected chip to drive its outputs. CS may also affect power consumption or serve as cycle control in certain circuits (such as SRAM or DRAM).
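As a rough illustration of the bus-sharing idea, the sketch below bit-bangs an SPI-style transfer in which the addressed device only participates while its chip-select line is asserted; the gpio_write/gpio_read helpers, the pin names, and the active-low polarity are assumptions for the example, not part of any particular library or device:

```python
# Illustrative bit-banged SPI-style transfer gated by an active-low chip select.
# gpio_write/gpio_read stand in for whatever GPIO API the platform provides
# (hypothetical helpers, not a specific library).

def spi_transfer_byte(gpio_write, gpio_read, cs, sck, mosi, miso, byte_out):
    gpio_write(cs, 0)                 # assert chip select (active low): device now listens
    byte_in = 0
    for bit in range(7, -1, -1):      # most significant bit first
        gpio_write(mosi, (byte_out >> bit) & 1)
        gpio_write(sck, 1)            # rising edge clocks the bit
        byte_in = (byte_in << 1) | gpio_read(miso)
        gpio_write(sck, 0)
    gpio_write(cs, 1)                 # deassert: device ignores the bus, outputs go high impedance
    return byte_in

# Tiny stand-in "GPIO" for demonstration: a loopback that echoes MOSI on MISO.
pins = {}
demo_write = lambda pin, v: pins.__setitem__(pin, v)
demo_read = lambda pin: pins.get("MOSI", 0)        # loopback
print(hex(spi_transfer_byte(demo_write, demo_read, "CS", "SCK", "MOSI", "MISO", 0xA5)))  # 0xa5
```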
https://en.wikipedia.org/wiki/Eratosthenes%20Seamount
The Eratosthenes Seamount or Eratosthenes Tablemount is a seamount in the Eastern Mediterranean, in the Levantine basin about south of western Cyprus. Unlike most seamounts, it is a carbonate platform not a volcano. It is a large, submerged massif, about . Its peak lies at the depth of and it rises above the surrounding seafloor, which is located at the depth of up to and is a part of the Herodotus Abyssal Plain. It is one of the largest features on the Eastern Mediterranean seafloor. In 2010 and 2012 the Ocean Exploration Trust's vessel EV Nautilus explored the seamount looking for shipwrecks. Three were found; two were Ottoman vessels from the 19th century and the third was from the 4th century BC. Such seamounts are considered to be ideal for the preservation of shipwrecks because at depths of around the areas are not disturbed by trawlers or by sediments coming off land. Oceanography The Cyprus eddy is a sustained mesoscale eddy with a diameter about , regularly appearing above Eratosthenes Seamount. It was surveyed by oceanographic cruises notably in 1995, 2000, 2001 and 2009. Geology During the Messinian crisis, as the sea level in the Mediterranean dropped by about , the seamount emerged. See also CenSeam Ferdinandea Eratosthenes (crater)
https://en.wikipedia.org/wiki/Occipital%20lymph%20nodes
The occipital lymph nodes, one to three in number, are located on the back of the head close to the margin of the trapezius and resting on the insertion of the . Their afferent vessels drain the occipital region of the scalp, while their efferents pass to the superior deep cervical glands. Additional images Etymology The word occipital comes from the ("the back of the head").
https://en.wikipedia.org/wiki/Mastoid%20lymph%20nodes
The mastoid lymph nodes (retroauricular lymph nodes or posterior auricular glands) are a small group of lymph nodes, usually two in number, located just beneath the ear, on the mastoid insertion of the sternocleidomastoideus muscle, beneath the posterior auricular muscle. The mastoid lymph nodes receive lymph from the posterior part of the temporoparietal region, the upper part of the cranial surface of the visible ear and the back of the ear canal. The lymph then passes to the superior deep cervical glands. Etymology The word mastoid comes from the (, "mouth, jaws, that with which one chews").
https://en.wikipedia.org/wiki/Mike%20Byster
Michael Byster (born March 5, 1959) is an American mental calculator, and math educator. He worked as a commodity trader until he quit his job to devote himself to teaching children his methods. He has spoken to over 10,000 classrooms for free and continues to mentor kids. Mike is able to do many arithmetic problems in his head at very fast speeds. During a study done years ago, Byster was claimed to have one of the fastest mathematical minds in the world. Biography Early life Byster was raised along with his older sister in the Chicago suburb of Skokie, Illinois. His parents Gloria and Dave encouraged his math shortcuts at a young age. He went to Niles North High School and then attended University of Illinois at Urbana–Champaign, graduating with a bachelor's degree in finance in 1981. Career Byster used to work as a floor trader at the Chicago Mercantile Exchange, but after his cousin, a math teacher in a Chicago area high school, invited him to show the class his shortcuts for doing base 10 arithmetic, Byster quit his job to devote himself to teaching children his methods. After that, he continued to do shows for free to schools across the United States. In December 2003, he released the website Mike's Math, but this was discontinued in 2007. In 2008, Byster produced the Brainetics math and memory system. Byster claims that Brainetics uses both sides of the brain to process and store information, allowing anyone to recall the information at a fast pace. Media appearances Mike appeared on ABC's 20/20 in 2007. From 2007 onwards, he has appeared on multiple television and radio shows. On January 21, 2010, Mike appeared on Oprah Radio's The Gayle King Show. Mike has appeared on The Shopping Channel in Canada and QVC in the United States multiple times. He appeared on Fox News June 8, 2011. He has also appeared on WGN in July 2011, Good Day New York in August 2011 and then Fox News Boston on October 27, 2011. He was on WBEZ's Afternoon Shift on September 27, 2013.
https://en.wikipedia.org/wiki/Priming%20%28agriculture%29
Nano seed priming in botany and agriculture is a form of seed planting preparation in which the seeds are pre-soaked in nanoparticle solution. Seeds are considered to be an important part of the crop life cycle, as they influence critical phases such as germination and dormancy. Seed priming before sowing is considered to be one of the promising ways to provide value-added solutions that maximize the natural potential of the seed and set the plant up for maximum yield with respect to both quality and quantity. A positive effect on the shoot and root growth of wheat (Triticum aestivum L.) seedlings has been observed when seeds are treated with iron-oxide nanoparticles. This innovative, cost-effective and user-friendly method of biofortification has proven to increase grain iron deposition upon harvesting. Hence, the intervention of nanotechnology in terms of seed priming could be an economical and user-friendly smart farming approach to increase the nutritive value of the grains in an eco-friendly manner. Priming is not yet a widely used method. In general, most kinds of seeds experimented with so far have shown an overall advantage over seeds that are not primed. Many have shown a faster emergence time (the time it takes for seeds to rise above the surface of the soil), a higher emergence rate (the number of seeds that make it to the surface), and better growth, suggesting that the head-start helps them get a good root system down early and grow faster. This method can be useful to farmers because it saves them the money and time spent on fertilizers, re-seeding, and weak plants.
https://en.wikipedia.org/wiki/Eckart%20conditions
The Eckart conditions, named after Carl Eckart, simplify the nuclear motion (rovibrational) Hamiltonian that arises in the second step of the Born–Oppenheimer approximation. They make it possible to approximately separate rotation from vibration. Although the rotational and vibrational motions of the nuclei in a molecule cannot be fully separated, the Eckart conditions minimize the coupling close to a reference (usually equilibrium) configuration. The Eckart conditions are explained by Louck and Galbraith and in Section 10.2 of the textbook by Bunker and Jensen, where a numerical example is given. Definition of Eckart conditions The Eckart conditions can only be formulated for a semi-rigid molecule, which is a molecule with a potential energy surface V(R1, R2,..RN) that has a well-defined minimum for RA0 (). These equilibrium coordinates of the nuclei—with masses MA—are expressed with respect to a fixed orthonormal principal axes frame and hence satisfy the relations Here λi0 is a principal inertia moment of the equilibrium molecule. The triplets RA0 = (RA10, RA20, RA30) satisfying these conditions, enter the theory as a given set of real constants. Following Biedenharn and Louck, we introduce an orthonormal body-fixed frame, the Eckart frame, . If we were tied to the Eckart frame, which—following the molecule—rotates and translates in space, we would observe the molecule in its equilibrium geometry when we would draw the nuclei at the points, . Let the elements of RA be the coordinates with respect to the Eckart frame of the position vector of nucleus A (). Since we take the origin of the Eckart frame in the instantaneous center of mass, the following relation holds. We define displacement coordinates . Clearly the displacement coordinates satisfy the translational Eckart conditions, The rotational Eckart conditions for the displacements are: where indicates a vector product. These rotational conditions follow from the specific construction of the
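For reference, the two sets of conditions described above can be written out explicitly. The following is the standard form, included here as a reconstruction rather than a quotation of the stripped formulas; the displacement of nucleus A is written d_A = R_A − R_A0, with M_A its mass:

```latex
% Translational Eckart conditions (center of mass fixed at the origin):
\sum_{A=1}^{N} M_A \, \mathbf{d}_A = \mathbf{0},
\qquad \mathbf{d}_A \equiv \mathbf{R}_A - \mathbf{R}_A^0 .
% Rotational Eckart conditions (no net angular displacement with respect to
% the equilibrium geometry; "x" denotes the vector product):
\sum_{A=1}^{N} M_A \, \mathbf{R}_A^0 \times \mathbf{d}_A = \mathbf{0}.
```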
https://en.wikipedia.org/wiki/De%20Morgan%20algebra
In mathematics, a De Morgan algebra (named after Augustus De Morgan, a British mathematician and logician) is a structure A = (A, ∨, ∧, 0, 1, ¬) such that: (A, ∨, ∧, 0, 1) is a bounded distributive lattice, and ¬ is a De Morgan involution: ¬(x ∧ y) = ¬x ∨ ¬y and ¬¬x = x. (i.e. an involution that additionally satisfies De Morgan's laws) In a De Morgan algebra, the laws ¬x ∨ x = 1 (law of the excluded middle), and ¬x ∧ x = 0 (law of noncontradiction) do not always hold. In the presence of the De Morgan laws, either law implies the other, and an algebra which satisfies them becomes a Boolean algebra. Remark: It follows that ¬(x ∨ y) = ¬x ∧ ¬y, ¬1 = 0 and ¬0 = 1 (e.g. ¬1 = ¬1 ∨ 0 = ¬1 ∨ ¬¬0 = ¬(1 ∧ ¬0) = ¬¬0 = 0). Thus ¬ is a dual automorphism of (A, ∨, ∧, 0, 1). If the lattice is defined in terms of the order instead, i.e. (A, ≤) is a bounded partial order with a least upper bound and greatest lower bound for every pair of elements, and the meet and join operations so defined satisfy the distributive law, then the complementation can also be defined as an involutive anti-automorphism, that is, a structure A = (A, ≤, ¬) such that: (A, ≤) is a bounded distributive lattice, and ¬¬x = x, and x ≤ y → ¬y ≤ ¬x. De Morgan algebras were introduced by Grigore Moisil around 1935, although without the restriction of having a 0 and a 1. They were then variously called quasi-boolean algebras in the Polish school, e.g. by Rasiowa and also distributive i-lattices by J. A. Kalman. (i-lattice being an abbreviation for lattice with involution.) They have been further studied in the Argentinian algebraic logic school of Antonio Monteiro. De Morgan algebras are important for the study of the mathematical aspects of fuzzy logic. The standard fuzzy algebra F = ([0, 1], max(x, y), min(x, y), 0, 1, 1 − x) is an example of a De Morgan algebra where the laws of excluded middle and noncontradiction do not hold. Another example is Dunn's four-valued semantics for De Morgan alg
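The standard fuzzy algebra mentioned above makes a convenient numerical check. The short sketch below (illustrative code, not from the article) verifies one De Morgan law and the involution on sample points of [0, 1], and shows that the laws of excluded middle and noncontradiction fail there:

```python
# The standard fuzzy algebra F = ([0,1], max, min, 0, 1, 1 - x) as a De Morgan algebra.
join = max
meet = min
def neg(x):          # the De Morgan involution
    return 1.0 - x

samples = [0.0, 0.25, 0.5, 0.75, 1.0]
for x in samples:
    for y in samples:
        # De Morgan law: not(x meet y) == not(x) join not(y)
        assert abs(neg(meet(x, y)) - join(neg(x), neg(y))) < 1e-12
        # involution: not(not(x)) == x
        assert abs(neg(neg(x)) - x) < 1e-12

# Excluded middle and noncontradiction fail: for x = 0.5, both give 0.5, not 1 or 0.
x = 0.5
print(join(x, neg(x)))   # 0.5, so x or not-x != 1
print(meet(x, neg(x)))   # 0.5, so x and not-x != 0
```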
https://en.wikipedia.org/wiki/Bleb%20%28cell%20biology%29
In cell biology, a bleb is a bulge of the plasma membrane of a cell, characterized by a spherical, "blister-like", bulky morphology. It results from the decoupling of the cytoskeleton from the plasma membrane, degrading the internal structure of the cell and allowing the flexibility required for the cell to separate into individual bulges or pockets of the intercellular matrix. Most commonly, blebs are seen in apoptosis (programmed cell death) but are also seen in other non-apoptotic functions. Blebbing, or zeiosis, is the formation of blebs. Formation Initiation and expansion Bleb growth is driven by intracellular pressure generated in the cytoplasm when the actin cortex undergoes actomyosin contractions. The disruption of the membrane-actin cortex interactions is dependent on the activity of myosin-ATPase. Bleb initiation is affected by three main factors: high intracellular pressure, decreased amounts of cortex-membrane linker proteins, and deterioration of the actin cortex. The integrity of the connection between the actin cortex and the membrane is dependent on how intact the cortex is and how many proteins link the two structures. When this integrity is compromised, the addition of pressure is able to make the membrane bulge out from the rest of the cell. The presence of only one or two of these factors is often not enough to drive bleb formation. Bleb formation has also been associated with increases in myosin contractility and in local myosin activity. Bleb formation can be initiated in two ways: 1) through local rupture of the cortex or 2) through local detachment of the cortex from the plasma membrane. This generates a weak spot through which the cytoplasm flows, leading to the expansion of the bulge of membrane by increasing the surface area through tearing of the membrane from the cortex, during which time actin levels decrease. The cytoplasmic flow is driven by hydrostatic pressure inside the cell. Before the bleb is a
https://en.wikipedia.org/wiki/Fragment%20crystallizable%20region
The fragment crystallizable region (Fc region) is the tail region of an antibody that interacts with cell surface receptors called Fc receptors and some proteins of the complement system. This region allows antibodies to activate the immune system, for example, through binding to Fc receptors. In IgG, IgA and IgD antibody isotypes, the Fc region is composed of two identical protein fragments, derived from the second and third constant domains of the antibody's two heavy chains; IgM and IgE Fc regions contain three heavy chain constant domains (CH domains 2–4) in each polypeptide chain. The Fc regions of IgGs bear a highly conserved N-glycosylation site. Glycosylation of the Fc fragment is essential for Fc receptor-mediated activity. The N-glycans attached to this site are predominantly core-fucosylated diantennary structures of the complex type. In addition, small amounts of these N-glycans also bear bisecting GlcNAc and α-2,6 linked sialic acid residues. The other part of an antibody, called the Fab region, contains variable sections that define the specific target that the antibody can bind. By contrast, the Fc region of all antibodies in a class are the same for each species; they are constant rather than variable. The Fc region is, therefore, sometimes incorrectly termed the "fragment constant region". Fc binds to various cell receptors and complement proteins. In this way, it mediates different physiological effects of antibodies (detection of opsonized particles; cell lysis; degranulation of mast cells, basophils, and eosinophils; and other processes). Engineered Fc fragments In a new development in the field of antibody-based therapeutics, the Fc region of immunoglobulins has been engineered to contain an antigen-binding site. This type of antigen-binding fragment is called Fcab. Fcab fragments can be inserted into a full immunoglobulin by swapping the Fc region, thus obtaining a bispecific antibody (with both Fab and Fcab regions containing distinct bin
https://en.wikipedia.org/wiki/MADS-box
The MADS box is a conserved sequence motif. The genes which contain this motif are called the MADS-box gene family. The MADS box encodes the DNA-binding MADS domain. The MADS domain binds to DNA sequences of high similarity to the motif CC[A/T]6GG termed the CArG-box. MADS-domain proteins are generally transcription factors. The length of the MADS-box reported by various researchers varies somewhat, but typical lengths are in the range of 168 to 180 base pairs, i.e. the encoded MADS domain has a length of 56 to 60 amino acids. There is evidence that the MADS domain evolved from a sequence stretch of a type II topoisomerase in a common ancestor of all extant eukaryotes. Origin of name and history of research The first MADS-box gene to be identified was ARG80 from budding yeast, Saccharomyces cerevisiae, but was at that time not recognized as a member of a large gene family. The MADS-box gene family got its name later as an acronym referring to the four founding members, ignoring ARG80: MCM1 from the budding yeast, Saccharomyces cerevisiae, AGAMOUS from the thale cress Arabidopsis thaliana, DEFICIENS from the snapdragon Antirrhinum majus, SRF from the human Homo sapiens. In A. thaliana, A. majus, and Zea mays this motif is involved in floral development. Early study in these model angiosperms was the beginning of research into the molecular evolution of floral structure in general, as well as their role in nonflowering plants. Diversity MADS-box genes were detected in nearly all eukaryotes studied. While the genomes of animals and fungi generally possess only around one to five MADS-box genes, genomes of flowering plants have around 100 MADS-box genes. Two types of MADS-domain proteins are distinguished; the SRF-like or Type I MADS-domain proteins and the MEF2-like (after MYOCYTE-ENHANCER-FACTOR2) or Type II MADS-domain proteins. SRF-like MADS-domain proteins in animals and fungi have a second conserved domain, the SAM (SRF, ARG80, MCM1) domain. MEF2-like MADS
https://en.wikipedia.org/wiki/MPEG-1%20Audio%20Layer%20I
MPEG-1 Audio Layer I, commonly abbreviated to MP1, is one of three audio formats included in the MPEG-1 standard. It is a deliberately simplified version of MPEG-1 Audio Layer II (MP2), created for applications where lower compression efficiency could be tolerated in return for a less complex algorithm that could be executed with simpler hardware requirements. While supported by most media players, the codec is considered largely obsolete, having been replaced by MP2 and MP3. For files only containing MP1 audio, the file extension .mp1 is used. A limited version of MPEG-1 layer I was also used by the Digital Compact Cassette format, in the form of the PASC (Precision Adaptive Subband Coding) audio compression codec. The bit rate of PASC was fixed at 384 kilobits per second, and when encoding audio at a sample frequency of 44.1 kHz, PASC regards the padding slots as 'dummy' and sets them to zero, whereas the ISO/IEC 11172-3 standard uses them to store data. Specification MPEG-1 Layer I is defined in ISO/IEC 11172-3, the first version of which was published in 1993. Sampling rates: 32, 44.1 and 48 kHz Bitrates: 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416 and 448 kbit/s An extension has been provided in MPEG-2 Layer I and is defined in ISO/IEC 13818-3, the first version of which was published in 1995. Additional sampling rates: 16, 22.05 and 24 kHz Additional bitrates: 48, 56, 80, 112, 144 and 176 kbit/s MP1 uses a comparatively simple sub-band coding, using 32 sub-bands. Licensing Sisvel S.p.A., a Luxembourg-based company, administered a licensing program for patents applying to MPEG Audio; as of around the first quarter of 2023, the program has become legacy.
https://en.wikipedia.org/wiki/R-matrix
The term R-matrix has several meanings, depending on the field of study. The term R-matrix is used in connection with the Yang–Baxter equation. This is an equation which was first introduced in the field of statistical mechanics, taking its name from independent work of C. N. Yang and R. J. Baxter. The classical R-matrix arises in the definition of the classical Yang–Baxter equation. In a quasitriangular Hopf algebra, the R-matrix is a solution of the Yang–Baxter equation. The numerical modeling of diffraction gratings in optical science can be performed using the R-matrix propagation algorithm. R-matrix method in quantum mechanics There is a method in computational quantum mechanics for studying scattering known as the R-matrix method. This method was originally formulated for studying resonances in nuclear scattering by Wigner and Eisenbud. Using that work as a basis, an R-matrix method was developed for electron, positron and photon scattering by atoms. This approach was later adapted for electron, positron and photon scattering by molecules. The R-matrix method is used in the UKRmol and UKRmol+ code suites. The user-friendly software Quantemol Electron Collisions (Quantemol-EC) and its predecessor Quantemol-N are based on UKRmol/UKRmol+ and employ the MOLPRO package for electron configuration calculations. See also UK Molecular R-matrix Codes
https://en.wikipedia.org/wiki/AlkB
AlkB (Alkylation B) is a protein found in E. coli, induced during an adaptive response and involved in the direct reversal of alkylation damage. AlkB specifically removes alkylation damage to single stranded (SS) DNA caused by SN2-type chemical agents. It efficiently removes methyl groups from 1-methyladenines and 3-methylcytosines in SS DNA. AlkB is an alpha-ketoglutarate-dependent hydroxylase, a superfamily of non-haem iron-containing proteins. It oxidatively demethylates the DNA substrate. Demethylation by AlkB is accompanied by the release of CO2, succinate, and formaldehyde. Human homologs There are nine human homologs of AlkB: the AlkB homologs 1 through 8 (ALKBH1, a histone H2A dioxygenase, through ALKBH8) and FTO; ALKBH5 is an RNA demethylase. ABH3, like E. coli AlkB, is specific for SS DNA and RNA whereas ABH2 has higher affinity for damages in double-stranded DNA. ALKBH8 has an RNA recognition motif, a methyltransferase domain, and an AlkB-like domain. The methyltransferase domain generates the wobble nucleoside 5-methoxycarbonylmethyluridine (mcm5U) from its precursor 5-carboxymethyluridine (cm5U). The AlkB-like domain generates (S)-5-methoxycarbonylhydroxymethyluridine (mchm5U) in Gly-tRNA-UCC. FTO, which is associated with obesity in humans, is the first identified RNA demethylase. It demethylates N6-methyladenosine in mRNA. There is also another very different protein called AlkB or alkane hydroxylase. It is the catalytic subunit of a non-heme diiron protein, catalyzing the hydroxylation of alkanes, in aerobic bacteria that are able to utilize alkanes as a carbon source. Virus homologs AlkB domains are present within viral replication-associated proteins of plant RNA viruses of the families Closteroviridae, Alphaflexiviridae, Betaflexiviridae, and Secoviridae. Potyviridae is the largest family of plant RNA viruses; among these the AlkB domain is embedded in P1 proteases of endive necrotic mosaic virus (ENMV) of genus Potyvirus, French endive necrotic mosaic virus (FEN
https://en.wikipedia.org/wiki/Negative%20probability
The probability of the outcome of an experiment is never negative, although a quasiprobability distribution allows a negative probability, or quasiprobability for some events. These distributions may apply to unobservable events or conditional probabilities. Physics and mathematics In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" where he introduced the concept of negative energies and negative probabilities: The idea of negative probabilities later received increased attention in physics and particularly in quantum mechanics. Richard Feynman argued that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is valid. Similarly he argued how negative probabilities as well as probabilities above unity possibly could be useful in probability calculations. Negative probabilities have later been suggested to solve several problems and paradoxes. Half-coins provide simple examples for negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely. Half-coins have infinitely many sides numbered with 0,1,2,... and the positive even numbers are taken with negative probabilities. Two half-coins make a complete coin in the sense that if we flip two half-coins then the sum of the outcomes is 0 or 1 with probability 1/2 as if we simply flipped a fair coin. In Convolution quotients of nonnegative definite functions and Algebraic Probability Theory Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed or quasi distribution where some of the probabilities are negative then one can always find two random variables, Y and Z, with ordinary (not signed / not quasi) distributions such that X, Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is
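A small numerical illustration of the half-coin construction (not taken from the original text): the outcome "probabilities" of a half-coin can be taken as the coefficients of the power series of sqrt((1+x)/2), which are negative for the positive even outcomes, and convolving two such sequences recovers the fair coin on {0, 1}:

```python
from math import sqrt

# "Probabilities" of a half-coin: coefficients of sqrt((1+x)/2) = (1/sqrt(2)) * (1+x)^(1/2).
def binom_half(k):
    """Generalized binomial coefficient C(1/2, k)."""
    c = 1.0
    for i in range(k):
        c *= (0.5 - i) / (i + 1)
    return c

K = 12
half = [binom_half(k) / sqrt(2) for k in range(K)]
print(half[:6])   # the positive even outcomes (k = 2, 4, ...) carry negative values

# Convolving two half-coins gives back an ordinary fair coin on {0, 1}:
# entries beyond index 1 vanish (up to truncation of the series).
full = [sum(half[i] * half[k - i] for i in range(k + 1)) for k in range(K)]
print([round(p, 6) for p in full[:6]])   # ~[0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
```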
https://en.wikipedia.org/wiki/Luteoma
A luteoma is a tumor that occurs in the ovaries during pregnancy. It is associated with an increase of sex hormones, primarily progesterone and testosterone. The size of the tumor can range from 1 to 25 cm in diameter, but is usually 6 to 10 cm in diameter and can grow throughout the duration of the pregnancy. However, luteomas are benign and resolve themselves after delivery. This type of tumor is rare with only about 200 documented cases; many of these cases were detected accidentally, so the actual rate of occurrence may be higher. The most obvious symptom of a luteoma is masculinization of the mother and the possible masculinization of the fetus. This occurs because of the release of testosterone by the luteoma. Testosterone is a sex hormone most abundant in men although small amounts are naturally present in women. Testosterone is responsible for the male characteristics such as deepening of the voice, growth of dark hair, and acne. While not life-threatening, the development of male characteristics associated with luteomas can cause visible changes in the mother and can have drastic effects on the formation of the fetus. Luteomas can cause the fetus to be born with an ambiguous sex, which, depending on how the parents prefer to raise the infant, may result in the parents choosing a sex for the fetus. Luteomas can be associated with disorders of sex development (formerly known as pseudohermaphroditism). Signs and symptoms Luteoma is frequently asymptomatic; only 36% of women actually show signs of masculinization. These signs include acne, the growth of dark hair (especially on the face), deepening of the voice, temporal balding, and clitoromegaly. An increase in testosterone levels in the mother does not necessarily mean masculinization will occur. During a normal pregnancy, the testosterone level will increase slightly in the first and second trimester, but doubles in the third trimester. The testosterone level also depends on the sex of the fetus; male fe
https://en.wikipedia.org/wiki/Infection%20ratio
In finance, the infection ratio describes the relationship between non-performing portfolios and the total loan portfolio. The infection ratio is used to work out the relationship between the non-performing part of the portfolio (i.e., loans not efficiently being recovered) and the total loan portfolio of a bank or other financial entity. The ratio is used to evaluate infection in the loan portfolio between two different time periods, or amongst various organizations, or against an industry standard. The users of this financial management technique are the central banks/regulators, credit rating agencies, and institutions in the business of giving credit lines/loans.
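Expressed as a formula, the definition above is simply the following ratio (the wording of the numerator and denominator is a restatement, not a quotation of the original):

```latex
\text{Infection ratio} \;=\; \frac{\text{non-performing loan portfolio}}{\text{total loan portfolio}}
```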
https://en.wikipedia.org/wiki/Complexity%20economics
Complexity economics is the application of complexity science to the problems of economics. It relaxes several common assumptions in economics, including general equilibrium theory. While it does not reject the existence of an equilibrium, it sees such equilibria as "a special case of nonequilibrium", and as an emergent property resulting from complex interactions between economic agents. The complexity science approach has also been applied to computational economics. Models The "nearly archetypal example" is an artificial stock market model created by the Santa Fe Institute in 1989. The model shows two different outcomes, one where "agents do not search much for predictors and there is convergence on a homogeneous rational expectations outcome" and another where "all kinds of technical trading strategies appearing and remaining and periods of bubbles and crashes occurring". Another area has studied the prisoner's dilemma, such as in a network where agents play amongst their nearest neighbors or a network where the agents can make mistakes from time to time and "evolve strategies". In these models, the results show a system which displays "a pattern of constantly changing distributions of the strategies". More generally, complexity economics models are often used to study how non-intuitive results at the macro-level of a system can emerge from simple interactions at the micro level. This avoids assumptions of the representative agent method, which attributes outcomes in collective systems as the simple sum of the rational actions of the individuals. It also takes into account the view of emergence in economics. Measures Economic complexity index Physicist César Hidalgo and Harvard economist Ricardo Hausmann introduced a spectral method to measure the complexity of a country's economy by inferring it from the structure of the network connecting countries to the products that they export. The measure combines information of a country's diversity, which is pos
https://en.wikipedia.org/wiki/Micromodel
Micromodels are a type of card model or paper model that was popular during the 1940s and 1950s in the United Kingdom. In 1941, Geoffrey Heighway invented and marketed a new concept in card models. He took the available concept of card models and miniaturized them so that an entire train or building could be wrapped in a packet of post cards. These packets usually sold for about a shilling, or pocket change. When he released his product in the 1940s it caught on and Micro-modeling became a national pastime. The slogan "Your Workshop in a Cigar Box" is still widely quoted today among paper modelers. Their other slogan was: Three-Dimensional, Volumetric. During the war years, the models were especially popular as they were extremely portable and builders were able to work on them anywhere. Anecdotes say people liked them because they were small enough to take with you, so if you got stuck in a bomb shelter during an air raid, you had something interesting to do. Several of the railway Micromodels were distributed in Australia in the form of small stapled books, rather than individual postcards. The subjects of original Micromodels included among other things, Trains, Planes, Ships, Boats, Buildings, Cars and a Dragon. There were 82 original Micromodel Packets. From these packets one could make up 121 separate models. Micromodels also released some items including how-to booklets and powdered glue. Heighway continued to release models from 1941 until he became seriously ill in 1956. He died in 1959. There were several models that Heighway had mentioned in his catalogs and advertising, that for one reason or another were never released. These were known as the "Might have beens." Micromodels are considered collectable, and some rare originals can only be had for thousands of dollars. Others are more easily obtainable. There have been other products similar to or based on Micromodels, including Modelcraft Ltd. and a set of books based on enlargements of some of He
https://en.wikipedia.org/wiki/Software%20modernization
Legacy modernization, also known as software modernization or platform modernization, refers to the conversion, rewriting or porting of a legacy system to modern computer programming languages, architectures (e.g. microservices), software libraries, protocols or hardware platforms. Legacy transformation aims to retain and extend the value of the legacy investment through migration to new platforms in order to benefit from new technologies. Knowledge of the system being modernized (what its functionalities are for and how it has been developed) is the basis and first step of any software modernization initiative, including its strategy, risk management, cost estimation, and implementation. Because the subject-matter experts (SMEs) who worked on the application at its inception and through all of its evolutions are no longer available or have only partial knowledge, and because proper, up-to-date documentation is often lacking, modernization initiatives start by assessing and discovering the application using software intelligence. Strategies Making software modernization decisions is a process within some organizational context. "Real world" decision making in business organizations often has to be made based on "bounded rationality". Besides that, there exist multiple (and possibly conflicting) decision criteria; the certainty, completeness, and availability of useful information (as a basis for the decision) is often limited. Legacy system modernization is often a large, multi-year project. Because these legacy systems are often critical in the operations of most enterprises, deploying the modernized system all at once introduces an unacceptable level of operational risk. As a result, legacy systems are typically modernized incrementally. Initially, the system consists completely of legacy code. As each increment is completed, the percentage of legacy code decreases. Eventually, the system is completely modernized. A m
https://en.wikipedia.org/wiki/Analog%20ear
An analog ear or analog cochlea is a model of the ear or of the cochlea (in the inner ear) based on an electrical, electronic or mechanical analog. An analog ear is commonly described as an interconnection of electrical elements such as resistors, capacitors, and inductors; sometimes transformers and active amplifiers are included. Ear background The ear of the typical mammal consists of three parts. The outer ear collects sounds like a horn and guides them to the eardrum. Vibrations of the drum are conveyed to the inner ear via a system of bones called ossicles. These leverage the larger motions of the eardrum to the smaller vibrations of the oval window. This window connects to the cochlea which is a long dual channel arrangement consisting of two channels separated by the basilar membrane. The structure, about 36 mm in length, is coiled to conserve space. The oval window introduces sounds to the upper channel. The lower channel has a round window but this is not driven by the bones of the middle ear. The far end of the structure has a hole between the two channels called the helicotrema that equalizes slowly varying pressures in the two channels. A series of sensory hair cells along the basilar membrane respond to send neural pulses towards the brain. Ear modeling Models for the ear of a direct kind have been created, most notably by Nobel Laureate Georg von Békésy. He used glass slides, razor blades, and an elastic membrane to represent the helicotrema. He could measure vibrations along the basilar membrane in response to different excitations frequencies. He found that the pattern of displacements for given frequency sine wave along the basilar membrane rose somewhat gradually to a peak and thereafter fell. High frequencies favored shorter distances from the oval window than did lower ones. Frequency values approximate a logarithmic distribution with distance. Mechanical and electrical analogs Early mechanical and electrical analog ears were
https://en.wikipedia.org/wiki/History%20of%20Tourette%20syndrome
Tourette syndrome is an inherited neurological disorder that begins in childhood or adolescence, characterized by the presence of multiple physical (motor) tics and at least one vocal (phonic) tic. The eponym was bestowed by Jean-Martin Charcot (1825–1893) on behalf of his intern, Georges Albert Édouard Brutus Gilles de la Tourette (1859–1904), a French physician and neurologist, who published an account of nine patients with Tourette's in 1885. The possibility that movement disorders, including Tourette syndrome, might have an organic origin was raised when an encephalitis epidemic from 1918 to 1926 led to a subsequent epidemic of tic disorders. Research in 1972 advanced the argument that Tourette's is a neurological, rather than psychological, disorder; since the 1990s, a more neutral view of Tourette's has emerged, in which biological vulnerability and adverse environmental events are seen to interact. Findings since 1999 have advanced TS science in the areas of genetics, neuroimaging, neurophysiology, and neuropathology. Questions remain regarding how best to classify Tourette syndrome, and how closely Tourette's is related to other movement disorders or psychiatric disorders. Good epidemiologic data is still lacking, and available treatments are not risk free and not always well tolerated. Fifteenth century The first presentation of Tourette syndrome is thought to be in the 15th-century book Malleus Maleficarum (Hammer of Witches), which describes a priest whose tics were "believed to be related to possession by the devil". Nineteenth century A French doctor, Jean Marc Gaspard Itard, reported the first case of Tourette syndrome in 1825, describing Mme de D (the Marquise de Dampierre) an important woman of nobility in her time, whose episodes later understood to be coprolalia "were obviously in stark contrast to the lady's background, intellect, and refined manners". Jean-Martin Charcot, an influential French physician, assigned his student and intern
https://en.wikipedia.org/wiki/CFD-ACE%2B
CFD-ACE+ is a commercial computational fluid dynamics solver developed by ESI Group. It solves the conservation equations of mass, momentum, energy, chemical species and other scalar transport equations using the finite volume method. These equations enable coupled simulations of fluid, thermal, chemical, biological, electrical and mechanical phenomena. CFD-ACE+ solver allows for coupled heat and mass transport along with complex multi-step gas-phase and surface reactions which makes it especially useful for designing and optimizing semiconductor equipment and processes such as chemical vapor deposition (CVD). Researchers at the Ecole Nationale Superieure d'Arts et Metiers used CFD-ACE+ to simulate the rapid thermal chemical vapor deposition (RTCVD) process. They predicted the deposition rate along the substrate diameter for silicon deposition from silane. They also used CFD-ACE+ to model transparent conductive oxide (TCO) thin film deposition with ultrasonic spray chemical vapor deposition (CVD). The University of Louisville and the Oak Ridge National Laboratory used CFD-ACE+ to develop the yttria-stabilized zirconia CVD process for application of thermal barrier coatings for fossil energy systems. CFD-ACE+ was used by the Indian Institute of Technology Bombay to model the interplay of multiphysics phenomena involved in microfluidic devices such as fluid flow, structure, surface and interfaces etc. Numerical simulation of electroosmotic effect on pressure-driven flows in the serpentine channel of a micro fuel cell with variable zeta potential on the side walls was investigated and reported. Based on their extensive study of CFD software tools for microfluidic applications, researchers at IMTEK, University of Freiburg concluded that generally CFD-ACE+ can be recommended for simulation of free surface flows involving capillary forces. CFD-ACE+ has also been used to design and optimize the various fuel cell components and stacks. Researchers at Ballard Power Syst
https://en.wikipedia.org/wiki/Time-resolved%20photon%20emission
Time-resolved photon emission (TRPE) is used to measure timing waveforms on semiconductor devices. TRPE measurements are performed on the back side of the semiconductor device. The substrate of the device-under-test (DUT) must first be thinned mechanically. The device is mounted on a movable X-Y stage in an enclosure which shields it from all sources of light. The DUT is connected to an active electrical stimulus. The stimulus pattern is continuously looped and a trigger signal is sent to the TRPE instrument in order to tell it when the pattern repeats. A TRPE prober operates in a manner similar to a sampling oscilloscope, and is used to perform semiconductor failure analysis. Theory of operation As the electrical stimulus pattern is repetitively applied to the DUT, internal transistors switch on and off. As pMOS and nMOS transistors switch on or off, they emit photons. These photon emissions are recorded by a sensitive photon detector. By counting the number of photons emitted for a specific transistor across a period of time, a photon histogram may be constructed. The photon histogram records an increase in photon emissions during times that the transistor switches on or off. By detecting the combined photon emissions of pairs of p- and n-channel transistors contained in logic gates, it is possible to use the resulting histogram to determine the locations in time of the rising and falling edges of the signal at that node. The waveform produced is not representative of a true voltage waveform, but more accurately represents the derivative of the waveform, with photon spikes being seen only at rising or falling edges.
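As a sketch of the histogram idea (illustrative only; the bin width, array names, and simulated data are assumptions, not taken from any TRPE instrument), photon arrival times recorded relative to the loop trigger can be binned so that spikes in the counts mark switching edges:

```python
import numpy as np

# Illustrative construction of a photon histogram from arrival timestamps.
# Each timestamp is a photon's arrival time relative to the loop trigger,
# accumulated over many repetitions of the stimulus pattern.
def photon_histogram(timestamps_ns, loop_period_ns, bin_ns=10.0):
    edges = np.arange(0.0, loop_period_ns + bin_ns, bin_ns)
    counts, _ = np.histogram(np.asarray(timestamps_ns) % loop_period_ns, bins=edges)
    return edges[:-1], counts   # spikes in 'counts' mark rising/falling edges

# Example: a 1000 ns loop with simulated photon bursts near 200 ns and 700 ns.
rng = np.random.default_rng(0)
fake = np.concatenate([rng.normal(200, 3, 500), rng.normal(700, 3, 500)])
t, c = photon_histogram(fake, 1000.0)
print(t[np.argmax(c)])   # prints a bin start time near one of the switching edges
```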
https://en.wikipedia.org/wiki/Mechanical%20probe%20station
A mechanical probe station is used to physically acquire signals from the internal nodes of a semiconductor device. The probe station utilizes manipulators which allow the precise positioning of thin needles on the surface of a semiconductor device. If the device is being electrically stimulated, the signal is acquired by the mechanical probe and is displayed on an oscilloscope or SMU. The mechanical probe station is often used in the failure analysis of semiconductor devices. There are two types of mechanical probes: active and passive. Passive probes usually consist of a thin tungsten needle. Active probes utilize a FET device on the probe tip in order to significantly reduce loading on the circuit. Research Mechanical probe stations are often used in academic research on electronics and materials science. It is often faster and more flexible to test a new electronic device or sample with a probe station than to wire bond and package the device before testing. Probe stations were initially designed to manage micron-level semiconductor wafer testing. Wafer testing is still a large part of their use, but as Moore's law has reduced the sizes of semiconductor devices over time, probe stations have evolved to better manage both wafer-level and device testing. An example of this is the VERSA probe system by Micromanipulator. Systems like the VERSA can visualize and probe all wafer sizes from 50 mm, 100 mm, 150 mm, 200 mm, 300 mm, and even 450 mm. Such systems can also manage testing of individual chips or die as well as large 24-inch-plus application boards with decapsulated parts. See also Semiconductor analysis
https://en.wikipedia.org/wiki/Standard%20RAID%20levels
In computer storage, the standard RAID levels comprise a basic set of RAID ("redundant array of independent disks" or "redundant array of inexpensive disks") configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 (mirroring) and its variants, RAID 5 (distributed parity), and RAID 6 (dual parity). Multiple RAID levels can also be combined or nested, for instance RAID 10 (striping of mirrors) or RAID 01 (mirroring stripe sets). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard. The numerical values only serve as identifiers and do not signify performance, reliability, generation, or any other metric. While most RAID levels can provide good protection against and recovery from hardware defects or defective sectors/read errors (hard errors), they do not provide any protection against data loss due to catastrophic failures (fire, water) or soft errors such as user error, software malfunction, or malware infection. For valuable data, RAID is only one building block of a larger data loss prevention and recovery scheme – it cannot replace a backup plan. RAID 0 RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail; as a result of having data striped across all disks, the failure will result in total data loss. This configuration is typically implemented having speed as the intended goal. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks. A RAID 0 s
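To make the striping idea concrete, the sketch below (illustrative code, not from any RAID implementation) splits a byte string into fixed-size chunks distributed round-robin across n "disks", the way RAID 0 distributes consecutive chunks; losing any one list loses a chunk from every stripe, mirroring the total-data-loss property described above:

```python
# Illustrative RAID 0 striping: consecutive fixed-size chunks go to disks in round-robin order.
def stripe(data: bytes, n_disks: int, chunk_size: int = 4):
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk_size):
        disks[(i // chunk_size) % n_disks] += data[i:i + chunk_size]
    return disks

def unstripe(disks, chunk_size: int = 4):
    """Reassemble the original data by reading chunks back in round-robin order."""
    out = bytearray()
    offsets = [0] * len(disks)
    d = 0
    while any(offsets[i] < len(disks[i]) for i in range(len(disks))):
        out += disks[d][offsets[d]:offsets[d] + chunk_size]
        offsets[d] += chunk_size
        d = (d + 1) % len(disks)
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog."
disks = stripe(data, 3)
assert unstripe(disks) == data   # with all disks present, the data is recoverable
# If any single "disk" is lost, every stripe is missing a chunk and the data cannot be rebuilt.
```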
https://en.wikipedia.org/wiki/11%3A11%20%28numerology%29
In numerology, 11:11 is considered to be a significant moment in time for an event to occur. It is seen as an example of synchronicity, as well as a favorable sign or a suggestion towards the presence of spiritual influence. It is additionally thought that the repetition of numbers in the sequence adds "intensity" to them and increases the numerological effect. Critics highlight the lack of substantial evidence for this assertion, and they gesture towards confirmation bias and post-hoc analysis as a scientific explanation for any claims related to the significance or importance of 11:11 and other such sequences. Through observations made in the study of statistics, specifically chaos theory and the law of truly large numbers, skeptics explain these anecdotal observations as a coincidence and an inevitability, rather than as any particular indication towards significance. Significance in Christianity Within Protestant Christianity, particularly in Pentecostal movements, the number 11 or 11:11 has been linked to the concept of transition. Significance in dates For various reasons, individuals are known to attribute significance to dates and numbers. One notable example is the significance given to "the eleventh hour of the eleventh day of the eleventh month," which corresponds to 11:00 a.m. (Paris time) on 11 November 1918. It marks the moment when the armistice ending World War I took effect. On 11 November 2011, also known as "11/11/11," there was an increase in the number of marriages occurring in various regions worldwide, including the United States and across the Asian continent. See also Law of Fives 23 enigma Apophenia
https://en.wikipedia.org/wiki/Linner%20hue%20index
The Linner hue index, HI, is used to describe the hues which a given caramel coloring may produce. In conjunction with tinctorial strength, or the depth of a caramel coloring's color, it describes the spectra which a solution of the coloring may produce at different dilutions and thicknesses. It also has applications in brewing. In his presentation at the Society of Soft Drink Technologists Annual Meeting in 1970, Robert T. Linner noted that most caramel colors have log-absorbance spectra which are essentially linear in the visible range. This means that such a spectrum can be characterized by a point (a log absorbance at some particular wavelength) and its slope. Because caramel colors have warm hues (i.e., greater absorbance for shorter wavelengths), the slopes of their log-absorbance spectra will be negative. The hue index is the negative of this slope, multiplied by a convenient factor. Definition The Linner hue index is defined as HI = 10 log10(A510/A610), where A510 and A610 are the absorbances of the solution at 510 nm and 610 nm. This is simply the negative of the slope of the log-absorbance spectrum between 510 and 610 nm wavelength, scaled by a convenient factor, for materials that obey Linner's hypothesis of linear log-absorbance spectra. Typical range Linner hue indices typically range from 3 (a greenish yellow or olive hue, depending on the depth of color) to 7.5 (yellow) for caramel colors and beers.
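As an illustration, a small Python sketch of the calculation based on the definition reconstructed above (ten times the base-10 logarithm of the ratio of the absorbances at 510 nm and 610 nm); the absorbance values are made-up numbers, not measurements of any real caramel color.

import math

def linner_hue_index(a510, a610):
    # Reconstructed definition: HI = 10 * log10(A510 / A610).
    return 10.0 * math.log10(a510 / a610)

# Hypothetical absorbances of a diluted caramel-color solution.
print(round(linner_hue_index(0.90, 0.30), 2))  # about 4.77, inside the typical 3-7.5 range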
https://en.wikipedia.org/wiki/Range%20%28computer%20programming%29
In computer science, the term range may refer to one of three things: The possible values that may be stored in a variable. The upper and lower bounds of an array. An alternative to iterator. Range of a variable The range of a variable is the set of possible values that the variable can hold. In the case of an integer, the variable is restricted to whole numbers, and its range covers every integer between its minimum and maximum (inclusive). For example, the range of a signed 16-bit integer variable is all the integers from −32,768 to +32,767. Range of an array When an array is numerically indexed, its range is the upper and lower bound of the array. Depending on the environment, a warning, a fatal exception, or unpredictable behavior will occur if the program attempts to access an array element that is outside the range. In some programming languages, such as C, arrays have a fixed lower bound (zero) and will contain data at each position up to the upper bound (so an array with 5 elements will have a range of 0 to 4). In others, such as PHP, an array may have holes where no element is defined, and therefore an array with a range of 0 to 4 will have up to 5 elements (and a minimum of 2). Range as an alternative to iterator Another meaning of range in computer science is an alternative to iterator. When used in this sense, a range is defined as "a pair of begin/end iterators packed together". It is argued that "Ranges are a superior abstraction" (compared to iterators) for several reasons, including better safety. In particular, such ranges are supported in C++20, the Boost C++ Libraries and the D standard library. See also Interval
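The three senses of the term can be illustrated in a few lines of Python; the C++20/Boost/D ranges mentioned above follow the same begin/end idea, and the Python range object is used here only as a stand-in for a reusable, sliceable range abstraction.

import array

# 1. Range of a variable: a signed 16-bit integer holds -32768..32767.
a = array.array('h', [0])      # 'h' = signed 16-bit element type
a[0] = 32767                   # in range
# a[0] = 32768                 # would raise OverflowError (out of range)

# 2. Range of an array: valid indices run from the lower to the upper bound.
data = [10, 20, 30, 40, 50]    # index range 0..4
try:
    data[5]
except IndexError as exc:
    print("out of range:", exc)

# 3. Range as an alternative to an iterator: one object standing for a
# begin/end pair, which can be re-used, sliced and tested for membership.
r = range(0, 10, 2)
print(list(r), r[2], 6 in r)   # [0, 2, 4, 6, 8] 4 True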
https://en.wikipedia.org/wiki/Odd%20greedy%20expansion
In number theory, the odd greedy expansion problem asks whether a greedy algorithm for finding Egyptian fractions with odd denominators always succeeds. The problem remains unsolved. Description An Egyptian fraction represents a given rational number x/y as a sum of distinct unit fractions. If a rational number x/y is a sum of unit fractions with odd denominators, then y must be odd. Conversely, every fraction x/y with y odd can be represented as a sum of distinct odd unit fractions. One method of finding such a representation replaces x/y by Ax/Ay for a suitably chosen multiplier A, and then expands Ax as a sum of distinct divisors of Ay. However, a simpler greedy algorithm has successfully found Egyptian fractions in which all denominators are odd for all instances x/y (with y odd) on which it has been tested: let u be the least odd number that is greater than or equal to y/x, include the fraction 1/u in the expansion, and continue in the same way (avoiding repeated uses of the same unit fraction) with the remaining fraction x/y − 1/u. This method is called the odd greedy algorithm and the expansions it creates are called odd greedy expansions. Stein, Selfridge, Graham, and others have posed the question of whether the odd greedy algorithm terminates with a finite expansion for every x/y with y odd. This question remains open. Example Let x/y = 4/23. 23/4 = 5.75; the next larger odd number is 7. So the first step expands 4/23 = 1/7 + 5/161. 161/5 = 32.2; the next larger odd number is 33. So the next step expands 5/161 = 1/33 + 4/5313. 5313/4 = 1328.25; the next larger odd number is 1329. So the third step expands 4/5313 = 1/1329 + 1/2353659. Since the final term in this expansion is a unit fraction, the process terminates with this expansion as its result, giving 4/23 = 1/7 + 1/33 + 1/1329 + 1/2353659. Fractions with long expansions It is possible for the odd greedy algorithm to produce expansions that are shorter than the usual greedy expansion, with smaller denominators, with the greedy expansion on one side and the odd greedy expansion on the other. However, the odd greedy expansion is more ty
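A minimal Python sketch of the odd greedy algorithm as described above; the repeated-denominator case is handled here by simply skipping to the next unused odd number, which is one reasonable reading of "avoiding repeated uses of the same unit fraction", and the max_terms cut-off is only a safeguard, since termination is exactly what is unproven.

from fractions import Fraction

def odd_greedy(x, y, max_terms=1000):
    # Expand x/y (with y odd) using the least odd u >= y/x at each step.
    remainder = Fraction(x, y)
    used = set()
    terms = []
    while remainder and len(terms) < max_terms:
        u = -(-remainder.denominator // remainder.numerator)  # ceil(y/x)
        if u % 2 == 0:
            u += 1
        while u in used:          # avoid repeating a unit fraction
            u += 2
        used.add(u)
        terms.append(Fraction(1, u))
        remainder -= Fraction(1, u)
    return terms

print([t.denominator for t in odd_greedy(4, 23)])  # [7, 33, 1329, 2353659]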
https://en.wikipedia.org/wiki/Bcl-2%20homologous%20antagonist%20killer
Bcl-2 homologous antagonist/killer is a protein that in humans is encoded by the BAK1 gene on chromosome 6. The protein encoded by this gene belongs to the BCL2 protein family. BCL2 family members form oligomers or heterodimers and act as anti- or pro-apoptotic regulators that are involved in a wide variety of cellular activities. This protein localizes to mitochondria, and functions to induce apoptosis. It interacts with and accelerates the opening of the mitochondrial voltage-dependent anion channel, which leads to a loss in membrane potential and the release of cytochrome c. This protein also interacts with the tumor suppressor P53 after exposure to cell stress. Structure BAK1 is a pro-apoptotic Bcl-2 protein containing four Bcl-2 homology (BH) domains: BH1, BH2, BH3, and BH4. These domains are composed of nine α-helices, with a hydrophobic α-helix core surrounded by amphipathic helices and a transmembrane C-terminal α-helix anchored to the mitochondrial outer membrane (MOM). A hydrophobic groove formed along the C-terminal of α2 to the N-terminal of α5, and some residues from α8, binds the BH3 domain of other BCL-2 proteins in its active form. Function As a member of the BCL2 protein family, BAK1 functions as a pro-apoptotic regulator involved in a wide variety of cellular activities. In healthy mammalian cells, BAK1 localizes primarily to the MOM, but remains in an inactive form until stimulated by apoptotic signaling. The inactive form of BAK1 is maintained by the protein’s interactions with VDAC2, Mtx2, and other anti-apoptotic members of the BCL2 protein family. Nonetheless, VDAC2 functions to recruit newly synthesized BAK1 to the mitochondria to carry out apoptosis. Moreover, BAK1 is believed to induce the opening of the mitochondrial voltage-dependent anion channel, leading to release of cytochrome c from the mitochondria. Alternatively, BAK1 itself forms an oligomeric pore, MAC, in the MOM, through which pro-apoptotic factors leak in a process called
https://en.wikipedia.org/wiki/Bcl-2-associated%20death%20promoter
The BCL2 associated agonist of cell death (BAD) protein is a pro-apoptotic member of the Bcl-2 gene family which is involved in initiating apoptosis. BAD is a member of the BH3-only family, a subfamily of the Bcl-2 family. It does not contain a C-terminal transmembrane domain for outer mitochondrial membrane and nuclear envelope targeting, unlike most other members of the Bcl-2 family. After activation, it is able to form a heterodimer with anti-apoptotic proteins and prevent them from stopping apoptosis. Mechanism of action Bax/Bak are believed to initiate apoptosis by forming a pore in the mitochondrial outer membrane that allows cytochrome c to escape into the cytoplasm and activate the pro-apoptotic caspase cascade. The anti-apoptotic Bcl-2 and Bcl-xL proteins inhibit cytochrome c release through the mitochondrial pore and also inhibit activation of the cytoplasmic caspase cascade by cytochrome c. Dephosphorylated BAD forms a heterodimer with Bcl-2 and Bcl-xL, inactivating them and thus allowing Bax/Bak-triggered apoptosis. When BAD is phosphorylated by Akt/protein kinase B (triggered by PIP3), it forms the BAD-(14-3-3) protein heterodimer. This leaves Bcl-2 free to inhibit Bax-triggered apoptosis. BAD phosphorylation is thus anti-apoptotic, and BAD dephosphorylation (e.g., by Ca2+-stimulated Calcineurin) is pro-apoptotic. The latter may be involved in neural diseases such as schizophrenia. Interactions Bcl-2-associated death promoter has been shown to interact with: BCL2L1 BCL2A1 BCL2L2 Bcl-2 MCL1 S100A10 YWHAQ and YWHAZ See also Programmed cell death
https://en.wikipedia.org/wiki/Liquid%20Fidelity
Liquid Fidelity is a "microdisplay" technology applied in high-definition televisions. It incorporates Liquid Crystal on Silicon technology capable of producing true 1080p resolution with two million pixels on a single display chip. Components of Liquid Fidelity technology were originally used in 720p HDTVs produced by Uneed Systems of Korea from 2004-2006. Technology Overview Liquid Crystal on Silicon in general is a sophisticated mix of optical and electrical technologies on one chip. The top layer of the chip is liquid crystal material, the bottom layer is an integrated circuit that drives the liquid crystal, and the surface between the layers is highly reflective. The circuit determines how much light passes through the liquid crystal layer, and the reflected light creates an image on a projection screen. LCOS chips with both 720p and 1080p resolution have been developed for HDTVs. Nearly all LCOS chips in mass production have been used in three-chip systems, with one LCOS chip each for red, green and blue light. Sony’s SXRD and JVC’s HD-ILA TVs create images this way. While three-chip systems can produce very good HDTV pictures, they are difficult to align precisely and are expensive. Misalignments can cause visible convergence errors between red, green and blue, particularly along the sides and in the corners of the screen. Liquid Fidelity addresses both the alignment and cost problems. Exclusive technology enables Liquid Fidelity to change its brightness much more quickly than ordinary LCOS chips can. This fast response allows the use of one chip and a color wheel, rather than three chips, so red, green and blue alignment is assured at all areas on the screen. Also, by eliminating two of the three LCOS chips and the additional optical components to support them, Liquid Fidelity HDTVs are generally less expensive to manufacture. Comparison to DLP technology DLP uses MEMS technology, which stands for Micro-Electro-Mechanical Systems. DLP HDT
https://en.wikipedia.org/wiki/Glandular%20branches%20of%20facial%20artery
The glandular branches of the facial artery (submaxillary branches) consist of three or four large vessels, which supply the submandibular gland, some being prolonged to the neighboring muscles, lymph glands, and integument.
https://en.wikipedia.org/wiki/Bus-holder
A bus-holder (or Bus-keeper) is a weak latch circuit which holds last value on a tri-state bus. The circuit is basically a delay element with the output connected back to the input through a relatively high impedance. This is usually achieved with two inverters connected in series, followed by a series resistor connected to the second inverter. The resistor drives the bus weakly; therefore other circuits can override the value of the bus when they are not in tri-state mode. Bus-holders are used to prevent CMOS gate inputs from getting floating values when they are connected to tri-stated nets. Otherwise both transistors in the gate could get turned partially on, increasing power consumption and noise. In severe cases, this increased power consumption can destroy the CMOS gate. This is prevented by the bus-holder pulling the input to the last valid logic level (0 or 1) on the net. The circuit is usually placed in parallel with the tri-stated net. Further reading A. P. Godse, D. A. Godse. Digital Logic Design and Application. Technical Publications, 2008. . Texas Instruments. Implications of slow or floating CMOS inputs. Texas Instruments, 2021. Electronic circuits
https://en.wikipedia.org/wiki/Pharyngeal%20artery
The pharyngeal artery is a branch of the ascending pharyngeal artery. The pharyngeal artery passes inferior-ward in between the superior margin of the superior pharyngeal constrictor muscle, and the levator veli palatini muscle. It issues branches to the constrictor muscles of the pharynx, the stylopharyngeus muscle, the pharyngotympanic tube, and palatine tonsil; a palatine branch may sometimes be present, replacing the ascending palatine branch of facial artery.
https://en.wikipedia.org/wiki/Pterygoid%20branches%20of%20maxillary%20artery
The pterygoid branches of the maxillary artery, irregular in their number and origin, supply the lateral pterygoid muscle and medial pterygoid muscle.
https://en.wikipedia.org/wiki/SmartSlab
SmartSlab is an LED technology invented in 1999 by Tom Barker. The SmartSlab technology uses a hexagonal pixel in a structural composite honeycomb, so that it is possible for displays to be integrated into architecture. SmartSlab Ltd. was a London-based company that developed and marketed SmartSlab-based solutions.
https://en.wikipedia.org/wiki/Complete%20linkage
In genetics, complete (or absolute) linkage is defined as the state in which two loci are so close together that alleles of these loci are virtually never separated by crossing over. The closer the physical location of two genes on the DNA, the less likely they are to be separated by a crossing-over event. In the case of male Drosophila there is a complete absence of recombinant types due to the absence of crossing over. This means that all of the genes that start out on a single chromosome will end up on that same chromosome in their original configuration. In the absence of recombination, only parental phenotypes are expected. Linkage Genetic linkage is the tendency of alleles that are located close together on a chromosome to be inherited together during the process of meiosis in sexually reproducing organisms. During meiosis, homologous chromosomes pair up and can exchange corresponding sections of DNA. As a result, alleles that were originally on the same chromosome can finish up on different chromosomes. This process is known as genetic recombination. The rate of recombination between two discrete loci corresponds to their physical proximity: alleles that are closer together have lower rates of recombination than those that are located far apart. The distance between two alleles on a chromosome can therefore be estimated by calculating the percentage of recombination between the two loci. These recombination frequencies can be used to construct a linkage map, a graphical representation of the location of genes with respect to one another. If linkage is complete, there should be no recombination events that separate the two alleles, and therefore only parental combinations of alleles should be observed in offspring. Linkage between two loci can have significant implications regarding the inheritance of certain types of diseases. Gene maps or quantitative trait locus (QTL) maps can be produced using two separate methods. One way uses the frequency
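A small Python sketch of the underlying calculation: the recombination frequency is the fraction of recombinant offspring, and under complete linkage it is essentially zero, so only the parental classes appear. The offspring counts below are invented for illustration.

# Recombination frequency from a hypothetical test cross.
offspring = {
    "AB (parental)": 480,
    "ab (parental)": 470,
    "Ab (recombinant)": 0,
    "aB (recombinant)": 0,
}
recombinants = sum(n for name, n in offspring.items() if "recombinant" in name)
total = sum(offspring.values())
print(f"recombination frequency = {recombinants / total:.3f}")  # 0.000 -> complete linkage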
https://en.wikipedia.org/wiki/Renewable%20security
Renewable Security was a concept that evolved after the repeated hacks of analogue TV encryption systems in the late 1980s. Simply stated, rather than completely replacing a hacked TV encryption system, only part of it would have to be replaced to make it secure again. Embedded secure processor The decoders at that time often contained all of the conditional access control data in a microcontroller. This data consisted generally of the decoder's identity, the subscriber's identity number and subscription data. When the decoder was hacked, the whole system was effectively compromised, as other subscriber identity data could be substituted and the hackers had control. This security model is more commonly known as the Embedded Secure Processor model, as the secure processor (the microcontroller) was embedded in the decoder itself. Detachable secure processor The systems manufacturers countered with the Detachable Secure Processor model. In this security model, the decoder itself would not be the critical part of the system. The subscriber identity data and subscription details would instead be stored in a smartcard - the Detachable Secure Processor. Any compromise of the smartcard could then be countered by issuing a new, more secure smartcard to subscribers. Advantages and disadvantages Renewable Security is good in theory: it provides hackers with a moving target rather than a stationary one. In the VideoCrypt system, the initial expectation was that the smartcards would be replaced every six months, thus making the emergence of a pirate smartcard less likely. In reality, changing or upgrading the smartcards of a widely used TV encryption system can be expensive and is done as infrequently as possible.
https://en.wikipedia.org/wiki/Olfactory%20fatigue
Olfactory fatigue, also known as odor fatigue, olfactory adaptation, and noseblindness, is the temporary, normal inability to distinguish a particular odor after a prolonged exposure to that airborne compound. For example, when entering a restaurant initially the odor of food is often perceived as being very strong, but after time the awareness of the odor normally fades to the point where the smell is not perceptible or is much weaker. After leaving the area of high odor, the sensitivity is restored with time. Anosmia is the permanent loss of the sense of smell, and is different from olfactory fatigue. It is a term commonly used in wine tasting, where one loses the ability to smell and distinguish wine bouquet after sniffing at wine(s) continuously for an extended period of time. The term is also used in the study of indoor air quality, for example, in the perception of odors from people, tobacco, and cleaning agents. Since odor detection may be an indicator that exposure to certain chemicals is occurring, olfactory fatigue can also reduce one's awareness about chemical hazard exposure. Olfactory fatigue is an example of neural adaptation. The body becomes desensitized to stimuli to prevent the overloading of the nervous system, thus allowing it to respond to new stimuli that are 'out of the ordinary'. Mechanism Olfactory fatigue is the result of a negative, stabilizing feedback loop which lowers the olfactory neuron's sensitivity the longer it is stimulated by an odorant. The increase of Ca2+ ions in the olfactory neuron in response to stimulus both charges the transfer of information to the brain and activates a limiting system to prevent overstimulation. After olfactory neurons depolarize in response to an odorant, the G-protein mediated second messenger response activates adenylyl cyclase, increasing cyclic AMP (cAMP) concentration inside a cell, which then opens a cyclic nucleotide gated cation channel. The influx of Ca2+ ions through this channel trigg
https://en.wikipedia.org/wiki/Host%20tropism
Host tropism is the infection specificity of certain pathogens to particular hosts and host tissues. This explains why most pathogens are only capable of infecting a limited range of host organisms. Researchers can classify pathogenic organisms by the range of species and cell types that they exhibit host tropism for. For instance, pathogens that are able to infect a wide range of hosts and tissues are said to be amphotropic. Ecotropic pathogens, on the other hand, are only capable of infecting a narrow range of hosts and host tissue. Knowledge of a pathogen's host specificity allows professionals in the research and medical industries to model pathogenesis and develop vaccines, medication, and preventive measures to fight against infection. Methods such as cell engineering, direct engineering and assisted evolution of host-adapted pathogens, and genome-wide genetic screens are currently being used by researchers to better understand the host range of a variety of different pathogenic organisms. Mechanisms A pathogen displays tropism for a specific host if it can interact with the host cells in a way that supports pathogenic growth and infection. Various factors affect the ability of a pathogen to infect a particular cell, including: the structure of the cell's surface receptors; the availability of transcription factors that can identify pathogenic DNA or RNA; the ability of the cells and tissue to support viral or bacterial replication; and the presence of physical or chemical barriers within the cells and throughout the surrounding tissue. Cell surface receptors Pathogens frequently enter or adhere to host cells or tissues before causing infection. For this connection to occur, the pathogen must recognize the cell's surface and then bind to it. Viruses, for example, must often bind to specific cell surface receptors to enter a cell. Many viral membranes contain virion surface proteins that are specific to particular host cell surface receptors. If a host cel
https://en.wikipedia.org/wiki/Session%20ID
In computer science, a session identifier, session ID or session token is a piece of data that is used in network communications (often over HTTPS) to identify a session, a series of related message exchanges. Session identifiers become necessary in cases where the communications infrastructure uses a stateless protocol such as HTTP. For example, a buyer who visits a seller's website wants to collect a number of articles in a virtual shopping cart and then finalize the shopping by going to the site's checkout page. This typically involves an ongoing communication where several webpages are requested by the client and sent back to them by the server. In such a situation, it is vital to keep track of the current state of the shopper's cart, and a session ID is one way to achieve that goal. A session ID is typically granted to a visitor on their first visit to a site. It is different from a user ID in that sessions are typically short-lived (they expire after a preset time of inactivity which may be minutes or hours) and may become invalid after a certain goal has been met (for example, once the buyer has finalized their order, they cannot use the same session ID to add more items). As session IDs are often used to identify a user that has logged into a website, they can be used by an attacker to hijack the session and obtain potential privileges. A session ID is usually a randomly generated string to decrease the probability of obtaining a valid one by means of a brute-force search. Many servers perform additional verification of the client, in case the attacker has obtained the session ID. Locking a session ID to the client's IP address is a simple and effective measure as long as the attacker cannot connect to the server from the same address, but can conversely cause problems for a client if the client has multiple routes to the server (e.g. redundant internet connections) and the client's IP address undergoes Network Address Translation. Examples of the names t
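As an illustration of the "randomly generated string" point, a minimal Python sketch using the standard library's cryptographically secure random source; the 32-byte length is a common but arbitrary choice, and real web frameworks normally generate and manage session IDs themselves.

import secrets

def new_session_id() -> str:
    # 32 random bytes (256 bits) make brute-force guessing of a live ID
    # infeasible; the URL-safe encoding suits cookies and query strings.
    return secrets.token_urlsafe(32)

sid = new_session_id()
print(sid)        # different on every call
print(len(sid))   # about 43 URL-safe characters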
https://en.wikipedia.org/wiki/Special%20ordered%20set
In discrete optimization, a special ordered set (SOS) is an ordered set of variables used as an additional way to specify integrality conditions in an optimization model. Special ordered sets are basically a device or tool used in branch and bound methods for branching on sets of variables, rather than on individual variables as in ordinary mixed integer programming. Knowing that a variable is part of a set and that it is ordered gives the branch and bound algorithm a more intelligent way to face the optimization problem, helping to speed up the search procedure. The members of a special ordered set individually may be continuous or discrete variables in any combination. However, even when all the members are themselves continuous, a model containing one or more special ordered sets becomes a discrete optimization problem requiring a mixed integer optimizer for its solution. The ‘only’ benefit of using special ordered sets compared with using only constraints is that the search procedure will generally be noticeably faster. According to J. A. Tomlin, special ordered sets provide a powerful means of modeling nonconvex functions and discrete requirements, though there has been a tendency to think of them only in terms of multiple-choice zero-one programming. Context of applications Multiple-choice programming Global optimization with continuous separable functions. History The origin of the concept was in the paper of Beale titled "Two transportation problems" (1963), where he presented a bid evaluation model. However, the term was first explicitly introduced by Beale and Tomlin (1970). Special ordered sets were first implemented in Scicon's UMPIRE mathematical programming system. Special ordered sets were an important and recurring theme in Martin Beale's work, and their value came to be recognized to the point where nearly all production mathematical programming systems (MPS's) implement some version, or subset, of SOS. Types There are two sorts of special ordered sets:
https://en.wikipedia.org/wiki/Multivariate%20optical%20computing
Multivariate optical computing, also known as molecular factor computing, is an approach to the development of compressed sensing spectroscopic instruments, particularly for industrial applications such as process analytical support. "Conventional" spectroscopic methods often employ multivariate and chemometric methods, such as multivariate calibration, pattern recognition, and classification, to extract analytical information (including concentration) from data collected at many different wavelengths. Multivariate optical computing uses an optical computer to analyze the data as it is collected. The goal of this approach is to produce instruments which are simple and rugged, yet retain the benefits of multivariate techniques for the accuracy and precision of the result. An instrument which implements this approach may be described as a multivariate optical computer. Since it describes an approach, rather than any specific wavelength range, multivariate optical computers may be built using a variety of different instruments (including Fourier Transform Infrared (FTIR) and Raman). The "software" in multivariate optical computing is encoded directly into an optical element spectral calculation engine such as an interference filter based multivariate optical element (MOE), holographic grating, liquid crystal tunable filter, spatial light modulator (SLM), or digital micromirror device (DMD) and is specific to the particular application. The optical pattern for the spectral calculation engine is designed for the specific purpose of measuring the magnitude of that multi-wavelength pattern in the spectrum of a sample, without actually measuring a spectrum. Multivariate optical computing allows instruments to be made with the mathematics of pattern recognition designed directly into an optical computer, which extracts information from light without recording a spectrum. This makes it possible to achieve the speed, dependability, and ruggedness necessary for real ti
https://en.wikipedia.org/wiki/Multivariate%20optical%20element
A multivariate optical element (MOE), is the key part of a multivariate optical computer; an alternative to conventional spectrometry for the chemical analysis of materials. It is helpful to understand how light is processed in a multivariate optical computer, as compared to how it is processed in a spectrometer. For example, if we are studying the composition of a powder mixture using diffuse reflectance, a suitable light source is directed at the powder mixture and light is collected, usually with a lens, after it has scattered from the powder surface. Light entering a spectrometer first strikes a device (either a grating or interferometer) that separates light of different wavelengths to be measured. A series of independent measurements is used to estimate the full spectrum of the mixture, and the spectrometer renders a measurement of the spectral intensity at many wavelengths. Multivariate statistics can then be applied to the spectrum produced. In contrast, when using multivariate optical computing, the light entering the instrument strikes an application specific multivariate optical element, which is uniquely tuned to the pattern that needs to be measured using multivariate analysis. This system can produce the same result that multivariate analysis of a spectrum would produce. Thus, it can generally produce the same accuracy as laboratory grade spectroscopic systems, but with the fast speed inherent with a pure, passive, optical computer. The multivariate optical computer makes use of optical computing to realize the performance of a full spectroscopic system using traditional multivariate analysis. A side benefit is that the throughput and efficiency of the system is higher than conventional spectrometers, which increases the speed of analysis by orders of magnitude. While each chemical problem presents its own unique challenges and opportunities, the design of a system for a specific analysis is complex and requires the assembly of several pieces
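A rough numerical sketch of the idea, with entirely synthetic spectra: a multivariate calibration reduces to a regression vector b, and the prediction is the inner product of a sample's spectrum with b. A spectrometer measures the spectrum and applies b in software, whereas an MOE encodes a pattern proportional to b in its transmission so that, up to a scale and offset, the detector reading is that inner product directly.

import numpy as np

rng = np.random.default_rng(0)
n_channels = 100                       # number of spectral channels (arbitrary)

# Synthetic training data: analyte spectrum plus an interfering background.
analyte = rng.random(n_channels)       # assumed pure-component spectrum
background = rng.random(n_channels)    # assumed interferent spectrum
conc = rng.random(20)                  # analyte concentrations
interf = rng.random(20)                # interferent levels
spectra = np.outer(conc, analyte) + np.outer(interf, background)

# Multivariate calibration: regression vector b with concentration ~ spectrum . b
b, *_ = np.linalg.lstsq(spectra, conc, rcond=None)

# The "measurement" is an inner product, which the MOE performs optically.
test_spectrum = 0.37 * analyte + 0.50 * background
print(round(float(test_spectrum @ b), 3))   # approximately 0.37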
https://en.wikipedia.org/wiki/Matrix-free%20methods
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products. Such methods can be preferable when the matrix is so big that storing and manipulating it would cost a lot of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including the power method, the Lanczos algorithm, the Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG), Wiedemann's coordinate recurrence algorithm, and the conjugate gradient method. Krylov subspace methods Distributed solutions have also been explored using coarse-grain parallel software systems to achieve homogeneous solutions of linear systems. Matrix-free approaches are also used to solve non-linear equations, such as the Euler equations in computational fluid dynamics, and the matrix-free conjugate gradient method has been applied in non-linear elasto-plastic finite element solvers. Solving these equations requires the calculation of the Jacobian, which is costly in terms of CPU time and storage. To avoid this expense, matrix-free methods are employed: rather than calculating the Jacobian explicitly, the Jacobian-vector product is formed, which is in fact a vector itself. Manipulating and calculating this vector is easier than working with a large matrix or linear system.
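A minimal Python sketch of a matrix-free conjugate gradient solve: the coefficient matrix (here a one-dimensional discrete Laplacian, chosen only as a convenient symmetric positive-definite example) is never stored as an array; it enters only through a function that applies it to a vector.

import numpy as np

def apply_A(v):
    # Matrix-vector product with the 1-D Laplacian (2 on the diagonal,
    # -1 on the off-diagonals), computed without ever forming the matrix.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    # Plain CG; the matrix is accessed only through apply_A.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(200)
x = conjugate_gradient(apply_A, b)
print(np.linalg.norm(apply_A(x) - b))   # small residual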
https://en.wikipedia.org/wiki/Table%20of%20thermodynamic%20equations
Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows: Definitions Many of the definitions below are also used in the thermodynamics of chemical reactions. General basic quantities General derived quantities Thermal properties of matter Thermal transfer Equations The equations in this article are classified by subject. Thermodynamic processes Kinetic theory Ideal gas Entropy S = kB ln Ω, where kB is the Boltzmann constant, and Ω denotes the volume of macrostate in the phase space or otherwise called thermodynamic probability. dS = δQ/T, for reversible processes only Statistical physics Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases. Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below. Quasi-static and reversible processes For quasi-static and reversible processes, the first law of thermodynamics is: dU = δQ − δW, where δQ is the heat supplied to the system and δW is the work done by the system. Thermodynamic potentials The following energies are called the thermodynamic potentials, and the corresponding fundamental thermodynamic relations or "master equations" are: Maxwell's relations The four most common Maxwell's relations are: More relations include the following. Other differential equations are: Quantum properties Indistinguishable Particles where N is number of particles, h is Planck's constant, I is moment of inertia, and Z is the partition function, in various forms: Thermal properties of matter Thermal transfer Thermal efficiencies See also List of thermodynamic properties Antoine equation Bejan number Bowen ratio Bridgman's equations Clausius–Clapeyron relation Departure functions Duhem–Margules equation Ehrenfest equations Gibbs–Helmholtz equation Phase rule Kopp's law Noro–Frenkel law of corresponding states Onsager reci
https://en.wikipedia.org/wiki/Hyena%20butter
Hyena butter is a secretion from the anal gland of hyenas used to mark territory and to identify individuals by odor. The gooey substance is spread onto objects within the territory of the hyena by rubbing their posterior against the object they mark. Folk beliefs in some regions of East Africa state that witches would ride hyenas and use a gourd full of hyena butter as fuel for the torches that they carried through the night. See also Deer musk Dog odor Territorial marking
https://en.wikipedia.org/wiki/Ionospheric%20absorption
Ionospheric absorption (ISAB) is the scientific name for absorption occurring as a result of the interaction between various types of electromagnetic waves and the free electrons in the ionosphere, which can interfere with radio transmissions. Description Ionospheric absorption is of critical importance when radio networks, telecommunication systems or interlinked radio systems are being planned, particularly when trying to determine propagation conditions. The ionosphere can be described as an area of the atmosphere in which radio waves on shortwave bands are refracted or reflected back to Earth. As a result of this reflection, which is often key in the long-distance propagation of radio waves, some of the shortwave signal strength is decreased. In this regard, ISAB is the primary limiting factor in radio propagation. Attenuation mechanics ISAB is only a factor during the part of the day when radio signals travel through the portion of the ionosphere facing the Sun. The solar wind and radiation cause the ionosphere to become charged with electrons in the first place. At night, the atmosphere becomes drained of its charge, and radio signals can travel much farther with less loss of signal. In particular, low frequency signals that would be attenuated to nothing during the day will be received much farther away at night. The amount of attenuation varies approximately with the inverse square of the signal frequency: the lower the frequency, the greater the attenuation. Relative ionospheric absorption can be measured using a riometer. See also Radio horizon Sudden Ionospheric Disturbance Resources Ionosphere Radio frequency propagation
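A toy Python sketch of the frequency dependence mentioned above, treating the absorption as roughly proportional to the inverse square of the frequency; the numbers are relative factors only, not decibel values for any real path or ionospheric condition.

def relative_absorption(freq_mhz, ref_mhz=10.0):
    # Rough absorption scaling assumed here: proportional to 1/f^2.
    return (ref_mhz / freq_mhz) ** 2

for f in (5.0, 10.0, 20.0):
    print(f, relative_absorption(f))   # 4.0, 1.0, 0.25: halving f roughly quadruples the loss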
https://en.wikipedia.org/wiki/Optical%20beam-induced%20current
Optical beam induced current (OBIC) is a semiconductor analysis technique performed using laser signal injection. The technique uses a scanning laser beam to create electron–hole pairs in a semiconductor sample. This induces a current which may be analyzed to determine the sample's properties, especially defects or anomalies. Conventional OBIC scans an ultrafast laser beam over the surface of the sample, exciting some electrons into the conduction band through what is known as 'single-photon absorption'. As its name implies, single-photon absorption involves just a single photon to excite the electron into conduction. This can only occur if that single photon carries enough energy to overcome the band gap of the semiconductor (1.12 eV for Si) and provide the electron with enough energy to make it jump into the conduction band. Uses The technique is used in semiconductor failure analysis to locate buried diffusion regions, damaged junctions and gate oxide shorts. The OBIC technique may be used to detect the point at which a focused ion beam (FIB) milling operation in bulk silicon of an IC must be terminated (also known as endpoint). This is accomplished by using a laser to induce a photocurrent in the silicon while simultaneously monitoring the magnitude of the photocurrent by connecting an ammeter to the device's power and ground. As the bulk silicon is thinned, the photocurrent is increased and reaches a peak as the depletion region of the well to substrate junction is reached. This way, endpoint can be achieved to just below the well depth and the device remains operational. See also List of laser articles Notes
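A small Python check of the single-photon condition described above: the photon energy E = hc/λ must exceed the band gap (about 1.12 eV for silicon). The listed wavelengths are common laser lines chosen for illustration.

H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    return H_C_EV_NM / wavelength_nm

for wl in (532, 1064, 1340):
    e = photon_energy_ev(wl)
    print(wl, round(e, 3), e > 1.12)
# 532 nm: 2.331 eV (absorbed); 1064 nm: 1.165 eV (just above the silicon gap);
# 1340 nm: 0.925 eV (below the gap, so only two-photon absorption can excite carriers)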
https://en.wikipedia.org/wiki/Comparison%20of%20spreadsheet%20software
A spreadsheet is a class of application software designed to analyze tabular data called "worksheets". A collection of worksheets is called a "workbook". Online spreadsheets do not depend on a particular operating system but require a standards-compliant web browser instead. One of the incentives for the creation of online spreadsheets was offering worksheet sharing and public sharing of workbooks as part of their features, which enables collaboration between multiple users. Some online spreadsheets provide remote data update, allowing data values to be extracted from other users' spreadsheets even though they may be inactive at the time. General Operating system support The operating systems the software can run on natively (without emulation). Android and iOS apps can be optimized for Chromebooks and iPads, which run the operating systems ChromeOS and iPadOS respectively; these optimizations include multitasking capabilities, large and multi-display support, and better keyboard and mouse support. Supported file formats This table gives a comparison of which file formats each spreadsheet can import and export. "Yes" means it can both import and export. Rows and Columns * 32-bit addressable memory on Microsoft Windows, i.e. ~2.5 GB. See also List of spreadsheet software Comparison of word processors Notes
https://en.wikipedia.org/wiki/Bryology
Bryology (from Greek bryon, a moss, a liverwort) is the branch of botany concerned with the scientific study of bryophytes (mosses, liverworts, and hornworts). Bryologists are people who have an active interest in observing, recording, classifying or researching bryophytes. The field is often studied along with lichenology due to the similar appearance and ecological niche of the two organisms, even though bryophytes and lichens are not classified in the same kingdom. History Bryophytes were first studied in detail in the 18th century. The German botanist Johann Jacob Dillenius (1687–1747) was a professor at Oxford and in 1717 produced the work "Reproduction of the ferns and mosses." The beginning of bryology really belongs to the work of Johannes Hedwig, who clarified the reproductive system of mosses (1792, Fundamentum historiae naturalis muscorum) and arranged a taxonomy. Research Areas of research include bryophyte taxonomy, bryophytes as bioindicators, DNA sequencing, and the interdependency of bryophytes and other plant, fungal and animal species. Among other things, scientists have discovered parasitic (mycoheterotrophic) bryophytes such as Aneura mirabilis (previously known as Cryptothallus mirabilis) and potentially carnivorous liverworts such as Colura zoophaga and Pleurozia. Centers of research in bryology include the University of Bonn in Germany, the University of Helsinki in Finland and the New York Botanical Garden. Journals The Bryologist, a scientific journal that began publication in 1898, includes articles on all aspects of the biology of mosses, hornworts, liverworts and lichens, as well as book reviews. It is published by The American Bryological and Lichenological Society. The scientific Journal of Bryology, renamed in 1972 from its original name of Transactions of the British Bryological Society that commenced in 1947, is published by the British Bryological Society. Notable bryologists Miles Joseph Berkeley (1803–1889) Elizabeth Gertrude Bri
https://en.wikipedia.org/wiki/Incomplete%20LU%20factorization
In numerical linear algebra, an incomplete LU factorization (abbreviated as ILU) of a matrix is a sparse approximation of the LU factorization often used as a preconditioner. Introduction Consider a sparse linear system Ax = b. These are often solved by computing the factorization A = LU, with L lower unitriangular and U upper triangular. One then solves Lz = b, then Ux = z, which can be done efficiently because the matrices are triangular. For a typical sparse matrix, the LU factors can be much less sparse than the original matrix — a phenomenon called fill-in. The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the minimum degree algorithm. An incomplete factorization instead seeks triangular matrices L, U such that LU ≈ A rather than LU = A. Solving LUx = b can be done quickly but does not yield the exact solution to Ax = b. So, we instead use the matrix M = LU as a preconditioner in another iterative solution algorithm such as the conjugate gradient method or GMRES. Definition For a given matrix A one defines its graph as G(A) = {(i, j) : A_ij ≠ 0}, which is used to define the conditions a sparsity pattern S needs to fulfill. A decomposition of the form A = LU − R, where L is a lower unitriangular matrix, U is an upper triangular matrix, L and U are zero outside of the sparsity pattern (L_ij = U_ij = 0 for (i, j) not in S), and R is zero within the sparsity pattern (R_ij = 0 for (i, j) in S), is called an incomplete LU decomposition (with respect to the sparsity pattern S). The sparsity pattern of L and U is often chosen to be the same as the sparsity pattern of the original matrix A. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of L and U. This preconditioner is called ILU(0). Stability Concerning the stability of the ILU the following theorem was proven by Meijerink and van der Vorst. Let A be an M-matrix, the (complete) LU decomposition given by A = LU, and the ILU
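A compact Python sketch of ILU(0), the variant described above in which L and U inherit the sparsity pattern of A; dense NumPy arrays are used only to keep the sketch short, whereas a real implementation would work on sparse storage.

import numpy as np

def ilu0(A):
    # Factor a working copy of A, keeping updates only where A itself is nonzero.
    A = A.astype(float)
    n = A.shape[0]
    pattern = A != 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            if pattern[i, k]:
                A[i, k] /= A[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:
                        A[i, j] -= A[i, k] * A[k, j]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U

A = np.array([[ 4., -1.,  0., -1.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [-1.,  0., -1.,  4.]])
L, U = ilu0(A)
print(np.round(L @ U - A, 3))   # nonzero only at the dropped fill-in positions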
https://en.wikipedia.org/wiki/Symmetric%20successive%20over-relaxation
In applied mathematics, symmetric successive over-relaxation (SSOR) is a preconditioner. If the original symmetric matrix can be split into diagonal, lower and upper triangular parts as A = D + L + L^T, then the SSOR preconditioner matrix is defined as M = (D + L) D^{-1} (D + L)^T. It can also be parametrised by ω as follows: M(ω) = (ω / (2 − ω)) (D/ω + L) D^{-1} (D/ω + L)^T. See also Successive over-relaxation
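A short Python sketch that builds the parametrised SSOR preconditioner from the splitting reconstructed above and inspects the preconditioned matrix; in practice one never forms or inverts M explicitly but solves with its two triangular factors, and the small test matrix here is arbitrary.

import numpy as np

def ssor_preconditioner(A, omega=1.0):
    # M(omega) = omega/(2-omega) * (D/omega + L) D^{-1} (D/omega + L)^T
    # for symmetric A = D + L + L^T; omega = 1 gives (D + L) D^{-1} (D + L)^T.
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    DL = D / omega + L
    return (omega / (2.0 - omega)) * DL @ np.linalg.inv(D) @ DL.T

A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])
M = ssor_preconditioner(A, omega=1.2)
print(np.round(np.linalg.inv(M) @ A, 3))   # the matrix an iterative solver effectively sees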
https://en.wikipedia.org/wiki/Elliott%20803
The Elliott 803 is a small, medium-speed transistor digital computer which was manufactured by the British company Elliott Brothers in the 1960s. About 211 were built. History The 800 series began with the 801, a one-off test machine built in 1957. The 802 was a production model but only seven were sold between 1958 and 1961. The short-lived 803A was built in 1959 and first delivered in 1960; the 803B was built in 1960 and first delivered in 1961. Over 200 Elliott 803 computers were delivered to customers, at a unit price of about £29,000 in 1960 (roughly ). Most sales were of the 803B version with more parallel paths internally, larger memory and hardware floating-point operations. The Elliott 803 was the computer used in the ISI-609, the world's first process or industrial control system, wherein the 803 was a data logger. It was used for this purpose at the US's first dual-purpose nuclear reactor, the N-Reactor. A significant number of British universities had an Elliott 803. Elliott subsequently developed (1963) the much faster, software compatible, Elliott 503. Two complete Elliott 803 computers survive. One is owned by the Science Museum in London but it is not on display to the public. The second is owned by The National Museum of Computing (TNMoC) at Bletchley Park, is fully functional, and can regularly be seen in operation by visitors to that museum. Hardware description The 803 is a transistorised, bit-serial machine; the 803B has more parallel paths internally. It uses ferrite magnetic-core memory in 4096 or 8192 words of 40 bits, comprising 39 bits of data with parity. The central processing unit (CPU) is housed in one cabinet with a height, width, and depth, of . Circuitry is based on printed circuit boards with the circuits being rather simple and most of the signalling carried on wires. There is a second cabinet about half the size used for the power supply, which is unusually based on a large nickel–cadmium battery with charger, an ear
https://en.wikipedia.org/wiki/War%20Research%20Service
The War Research Service (WRS) was a civilian agency of the United States government established during World War II to pursue research relating to biological warfare. Established in May 1942 by Secretary of War Henry L. Stimson, the WRS was embedded in the Federal Security Agency, the federal agency that administered Social Security and other New Deal programs in the administration of President Franklin D. Roosevelt. Headed by George W. Merck, president of the Merck & Co. pharmaceutical firm, the WRS was headquartered at Fort Detrick, Maryland. Being a civilian agency, the WRS was initially tasked to supervise the military Chemical Warfare Service's biological program. However, the WRS was disbanded in 1944, and the weapons research was continued under the exclusive oversight of the CWS.
https://en.wikipedia.org/wiki/Incomplete%20Cholesky%20factorization
In numerical analysis, an incomplete Cholesky factorization of a symmetric positive definite matrix is a sparse approximation of the Cholesky factorization. An incomplete Cholesky factorization is often used as a preconditioner for algorithms like the conjugate gradient method. The Cholesky factorization of a positive definite matrix A is A = LL* where L is a lower triangular matrix. An incomplete Cholesky factorization is given by a sparse lower triangular matrix K that is in some sense close to L. The corresponding preconditioner is KK*. One popular way to find such a matrix K is to use the algorithm for finding the exact Cholesky decomposition in which K has the same sparsity pattern as A (any entry of K is set to zero if the corresponding entry in A is also zero). This gives an incomplete Cholesky factorization which is as sparse as the matrix A. Algorithm For i from 1 to n: L_ii = sqrt(a_ii − Σ_{k<i} L_ik²); for j from i+1 to n: L_ji = (a_ji − Σ_{k<i} L_jk L_ik) / L_ii, with each entry computed only where the corresponding entry of A is nonzero. Implementation Implementation of the incomplete Cholesky factorization in the GNU Octave language. The factorization is stored as a lower triangular matrix, with the elements in the upper triangle set to zero.

function a = ichol(a)
    n = size(a,1);
    for k = 1:n
        a(k,k) = sqrt(a(k,k));
        for i = (k+1):n
            if (a(i,k) != 0)
                a(i,k) = a(i,k)/a(k,k);
            endif
        endfor
        for j = (k+1):n
            for i = j:n
                if (a(i,j) != 0)
                    a(i,j) = a(i,j) - a(i,k)*a(j,k);
                endif
            endfor
        endfor
    endfor
    for i = 1:n
        for j = i+1:n
            a(i,j) = 0;
        endfor
    endfor
endfunction