https://en.wikipedia.org/wiki/Edifenphos
Edifenphos (O-ethyl-S,S-diphenyldithiophosphate, EDDP) is a systemic fungicide that inhibits phosphatidylcholine biosynthesis. It was introduced in 1966 by Bayer to combat blast fungus and Pellicularia sasakii in rice cultivation. It was never authorized for use in the EU.
https://en.wikipedia.org/wiki/Winner-take-all%20%28computing%29
Winner-take-all is a computational principle applied in computational models of neural networks by which neurons compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down; however, other variations allow more than one neuron to be active, for example the soft winner take-all, by which a power function is applied to the neurons. Neural networks In the theory of artificial neural networks, winner-take-all networks are a case of competitive learning in recurrent neural networks. Output nodes in the network mutually inhibit each other, while simultaneously activating themselves through reflexive connections. After some time, only one node in the output layer will be active, namely the one corresponding to the strongest input. Thus the network uses nonlinear inhibition to pick out the largest of a set of inputs. Winner-take-all is a general computational primitive that can be implemented using different types of neural network models, including both continuous-time and spiking networks. Winner-take-all networks are commonly used in computational models of the brain, particularly for distributed decision-making or action selection in the cortex. Important examples include hierarchical models of vision, and models of selective attention and recognition. They are also common in artificial neural networks and neuromorphic analog VLSI circuits. It has been formally proven that the winner-take-all operation is computationally powerful compared to other nonlinear operations, such as thresholding. In many practical cases, there is not only one single neuron which becomes active but there are exactly k neurons which become active for a fixed number k. This principle is referred to as k-winners-take-all. Circuit example A simple, but popular CMOS winner-take-all circuit is shown on the right. This circuit was originally proposed by Lazzaro et al. (1989) using MOS transistors biased
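The hard, k-winner, and soft variants described above can be sketched in a few lines of Python (an illustrative sketch, not the Lazzaro et al. analog circuit; the function names and the choice of a cubic power function are this example's own):

```python
def winner_take_all(activations):
    """Hard WTA: only the single most active unit stays on."""
    winner = max(range(len(activations)), key=activations.__getitem__)
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]

def k_winners_take_all(activations, k):
    """k-WTA: exactly the k most active units stay on."""
    order = sorted(range(len(activations)),
                   key=activations.__getitem__, reverse=True)
    winners = set(order[:k])
    return [a if i in winners else 0.0 for i, a in enumerate(activations)]

def soft_winner_take_all(activations, p=3.0):
    """Soft WTA: a power function sharpens the competition but leaves
    every unit with some residual, normalized activation."""
    powered = [a ** p for a in activations]
    total = sum(powered)
    return [x / total for x in powered]

print(winner_take_all([0.2, 0.9, 0.5]))        # [0.0, 0.9, 0.0]
print(k_winners_take_all([0.2, 0.9, 0.5], 2))  # [0.0, 0.9, 0.5]
```

In a recurrent network the same selection emerges dynamically from mutual inhibition; the functions above compute only the fixed-point outcome.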
https://en.wikipedia.org/wiki/Magnetic%20immunoassay
Magnetic immunoassay (MIA) is a type of diagnostic immunoassay using magnetic beads as labels in lieu of conventional enzymes (ELISA), radioisotopes (RIA) or fluorescent moieties (fluorescent immunoassays) to detect a specified analyte. MIA involves the specific binding of an antibody to its antigen, where a magnetic label is conjugated to one element of the pair. The presence of magnetic beads is then detected by a magnetic reader (magnetometer) which measures the magnetic field change induced by the beads. The signal measured by the magnetometer is proportional to the analyte (virus, toxin, bacteria, cardiac marker, etc.) concentration in the initial sample. Magnetic labels Magnetic beads are made of nanometric-sized iron oxide particles encapsulated or glued together with polymers. These magnetic beads range from 35 nm up to 4.5 μm. The component magnetic nanoparticles range from 5 to 50 nm and exhibit a unique quality referred to as superparamagnetism in the presence of an externally applied magnetic field. First discovered by Frenchman Louis Néel, Nobel Physics Prize winner in 1970, this superparamagnetic quality has already been used for medical application in Magnetic Resonance Imaging (MRI) and in biological separations, but not yet for labeling in commercial diagnostic applications. Magnetic labels exhibit several features very well adapted for such applications: they are not affected by reagent chemistry or photo-bleaching and are therefore stable over time, the magnetic background in a biomolecular sample is usually insignificant, sample turbidity or staining have no impact on magnetic properties, magnetic beads can be manipulated remotely by magnetism. Detection Magnetic Immunoassay (MIA) is able to detect select molecules or pathogens through the use of a magnetically tagged antibody. Functioning in a way similar to that of an ELISA or Western Blot, a two-antibody binding process is used to determine concentrations of analytes. MIA uses antibodi
https://en.wikipedia.org/wiki/French%20video%20game%20policy
French video game policy refers to the strategy and set of measures laid out by France since 2002 to maintain and develop a local video game development industry in order to preserve European market diversity.

History

Proposals for government support

The French game developer trade group, known as Association des Producteurs d'Oeuvres Multimedia (APOM, now "Syndicat National du Jeu Video"), was founded in 2001 by Eden Studios' Stéphane Baudet, Kalisto's Nicolas Gaume, former cabinet member and author Alain Le Diberder, financier and former journalist Romain Poirot-Lellig and Darkworks' Antoine Villette. APOM was established for game developers only, since game publishers were already grouped under the umbrella of the Syndicat des Editeurs de Logiciels de Loisirs (SELL). In November 2002, the Prime Minister Jean-Pierre Raffarin visited Darkworks and formally asked game developers to submit a set of proposals to him, promising to meet again in spring 2003 to give his feedback. Confronted with the bankruptcies or difficulties of many studios such as Cryo, Kalisto and Arxel Tribe, APOM had to propose short-term solutions as well as long-term, growth-oriented measures to the French government. Video game professionals responded in March 2003 with a set of proposals, including several options to set up a long-term financing system to develop quality video games for the European and international market.

Era of government support

On April 19, 2003, the Prime Minister announced the creation of the Ecole Nationale du Jeu Video et des Medias Interactifs, a national school dedicated to the education of game development executives and project managers. He also announced the creation of a 4 million euro prototyping fund for games managed by the Centre National de la Cinematographie, the "Fonds d'Aide pour l'Edition Multimédia" ("FAEM"), and that he would order a report to be drafted in order to determine and answer the needs of the game development industry with regards to
https://en.wikipedia.org/wiki/Blasticidin%20S
Blasticidin S is an antibiotic that is used in biology research for selecting cells in cell culture. Cells of interest can express the blasticidin resistance genes BSD or bsr, and can then survive treatment with the antibiotic. Blasticidin S is a nucleoside analogue antibiotic, resembling the nucleoside cytidine. Blasticidin works against human cells, fungi, and bacteria, all by disrupting protein translation. It was originally described in the 1950s by Japanese researchers seeking antibiotics against rice blast fungus.

Chemistry

A nucleoside analog, blasticidin S resembles the nucleoside cytidine. The chemical structure consists of a cytosine molecule, linked to a glucuronic acid-derived ring, linked in turn to the peptide N-methyl β-arginine.

Uses

Blasticidin S is widely used in cell culture for selecting and maintaining genetically manipulated cells. Cells of interest express the blasticidin S resistance genes BSD or bsr, and can then survive blasticidin S being added to the culture media. Blasticidin S is typically used at 2–300 micrograms per milliliter of media, depending on the type of cell being grown.

Mechanism of action

Blasticidin prevents the growth of both eukaryotic and prokaryotic cells. It works by inhibiting the termination step of translation by the ribosome and, to a lesser extent, peptide bond formation. This means that cells can no longer produce new proteins through translation of mRNA. It is competitive with puromycin, suggesting a highly similar binding site.

Biosynthesis

The first step in blasticidin S biosynthesis is the combination of UDP-glucuronic acid with cytosine to form cytosylglucuronic acid (CGA). Given the product name, the enzyme that performs this combination is called CGA synthase. Cosmid cloning experiments from the blasticidin S producer Streptomyces griseochromogenes, followed by evaluation of the putative biosynthetic gene cluster via heterologous reconstitution of blasticidin S production in Streptomyces lividans, indicat
https://en.wikipedia.org/wiki/Oracle%20Enterprise%20Manager%20Ops%20Center
Oracle Enterprise Manager Ops Center (formerly Sun Ops Center) is a data center automation tool that simplifies discovery and management of physical and virtualized assets. Among its features it can:

Provide a single console for the management of both the physical and virtual infrastructure in a virtualized environment
Allow discovery of any existing infrastructure, including hardware that has just been unpacked and plugged in but has not been switched on
Power everything up and then provision this hardware with firmware, operating systems, hypervisors and other applications as required
Once operational, ensure that all the software on the servers, both physical and virtualized, can be automatically updated and patched
Enable custom reports to be generated for operational as well as compliance purposes.

Ops Center includes a browser-based, platform-independent interface that uses AJAX.

See also

Sun xVM VirtualBox
Sun VDI
https://en.wikipedia.org/wiki/Structural%20Ramsey%20theory
In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below). Structural Ramsey theory began in the 1970s with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics.

History

The idea of a Ramsey property was invented in the early 1970s. The first publication of this idea appears to be Graham, Leeb and Rothschild's 1972 paper on the subject. Key development of these ideas was done by Nešetřil and Rödl in their series of 1977 and 1983 papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington, and further generalised in later work. More recently, Mašulović and Solecki have done pioneering work in the field.

Motivation

This article uses the set-theoretic convention that each natural number n can be considered as the set of all natural numbers less than it: i.e. n = {0, 1, ..., n−1}. For any set A, an r-colouring of A is an assignment of one of r labels to each element of A. This can be represented as a function mapping each element to its label in r = {0, 1, ..., r−1} (which this article will use), or equivalently as a partition of A into r pieces. Here are some of the classic results of Ramsey theory:

(Finite) Ramsey's theorem: for every k, n, r, there exists N such that for every r-colouring of all the k-element subsets of N, there exists an n-element subset M of N all of whose k-element subsets are monochromatic.
(Finite) van der Waerden's theorem: for every m and r, there exists N such that for every r-colouring of N, there exists a monochromatic arithmetic
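For a concrete feel of Ramsey's theorem in the case of 2-colourings of pairs (k = 2, r = 2), N = 6 already suffices to force a monochromatic triangle, the classical bound R(3,3) = 6. A brute-force check over all 2^15 edge colourings of the complete graph K6 (an illustration added here, not part of the article):

```python
from itertools import combinations, product

edges = list(combinations(range(6), 2))  # the 15 edges of K6

def has_mono_triangle(colour):
    # colour maps each edge (a, b) with a < b to one of two labels
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in combinations(range(6), 3))

# Every 2-colouring of the edges of K6 contains a monochromatic triangle:
assert all(has_mono_triangle(dict(zip(edges, labels)))
           for labels in product((0, 1), repeat=len(edges)))
print("every 2-colouring of K6 has a monochromatic triangle")
```

On 5 vertices the assertion fails (the pentagon/pentagram colouring has no monochromatic triangle), which is why N = 6 is tight.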
https://en.wikipedia.org/wiki/Chemical%20specificity
Chemical specificity is the ability of the binding site of a macromolecule (such as a protein) to bind specific ligands. The fewer ligands a protein can bind, the greater its specificity. Specificity describes the strength of binding between a given protein and ligand. This relationship can be described by a dissociation constant, which characterizes the balance between bound and unbound states for the protein-ligand system. In the context of a single enzyme and a pair of binding molecules, the two ligands can be compared as stronger or weaker ligands (for the enzyme) on the basis of their dissociation constants. (A lower value corresponds to a stronger binding.) Specificity for a set of ligands is unrelated to the ability of an enzyme to catalyze a given reaction with the ligand as a substrate. If a given enzyme has a high chemical specificity, this means that the set of ligands to which it binds is limited, such that neither binding events nor catalysis can occur at an appreciable rate with additional molecules. An example of a protein-ligand pair whose binding activity can be highly specific is the antibody-antigen system. Affinity maturation typically leads to highly specific interactions, whereas naive antibodies are promiscuous and bind a larger number of ligands. Conversely, an example of a protein-ligand system that can bind substrates and catalyze multiple reactions effectively is the cytochrome P450 system, which can be considered a promiscuous enzyme due to its broad specificity for multiple ligands. Proteases are a group of enzymes that show a broad range of cleavage specificities. Promiscuous proteases, such as digestive enzymes, degrade peptides unspecifically, whereas highly specific proteases are involved in signaling cascades.

Basis

Binding

The interactions between the protein and ligand substantially affect the specificity between the two entities. Electrostatic interactions and hydrophobic interactions are known to be the most influential in regards
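The role of the dissociation constant can be made concrete with the standard single-site binding model, in which the equilibrium fraction of protein bound is θ = [L] / (Kd + [L]). The helper below is a sketch of that textbook relation; the numerical concentrations are arbitrary examples, not values from the article:

```python
def fraction_bound(ligand_conc, kd):
    """Equilibrium occupancy in a single-site binding model:
    theta = [L] / (Kd + [L]); both arguments in the same units (e.g. mol/L)."""
    return ligand_conc / (kd + ligand_conc)

# A lower Kd (stronger binder) gives higher occupancy at the same
# ligand concentration:
strong = fraction_bound(1e-6, 1e-8)   # ~0.99
weak = fraction_bound(1e-6, 1e-4)     # ~0.0099
assert strong > weak
```

Note that at [L] = Kd the site is exactly half occupied, which is one way to read the dissociation constant.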
https://en.wikipedia.org/wiki/Wayne%20Wesolowski
Wayne Wesolowski is a builder of miniature models. Wesolowski's models have been exhibited at the Chicago Museum of Science and Industry, the Springfield, Illinois, Lincoln Home Site, the West Chicago City Museum, RailAmerican, and the National Railroad Museum. One of his more noted works is a model of Abraham Lincoln's funeral train. This model took 4½ years to build and is 15 feet (4½ meters) long. Wesolowski appeared on an episode of Tracks Ahead featuring this train and his model of Lincoln's home. Wesolowski has written scores of articles and four books on model building. He has been featured in videos shown on PBS television. Good Morning America selected and showed part of one tape as an example of video education. Wesolowski holds a Ph.D. in chemistry from the University of Arizona and teaches chemistry there.

Publications

Notes

External links

Description of Lincoln funeral train project
Gallery of Wesolowski models
Introductory statement at University of Arizona

Rail transport modellers
21st-century American chemists
Model makers
Scale modeling
University of Arizona faculty
University of Arizona alumni
Living people
Year of birth missing (living people)
https://en.wikipedia.org/wiki/EyeOS
eyeOS is a web desktop following the cloud computing concept that seeks to enable collaboration and communication among users. It is mainly written in PHP, XML, and JavaScript. It is a private-cloud application platform with a web-based desktop interface. Commonly called a cloud desktop because of its unique user interface, eyeOS delivers a whole desktop from the cloud with file management, personal information management tools, collaborative tools and with the integration of the client’s applications.

History

The first publicly available eyeOS version was released on August 1, 2005, as eyeOS 0.6.0 in Olesa de Montserrat, Barcelona (Spain). Quickly, a worldwide community of developers took part in the project and helped improve it by translating, testing and developing it. After two years of development, the eyeOS Team published eyeOS 1.0 (on June 4, 2007). Compared with previous versions, eyeOS 1.0 introduced a complete reorganization of the code and some new web technologies, like eyeSoft, a portage-based web software installation system. Moreover, eyeOS also included the eyeOS Toolkit, a set of libraries allowing easy and fast development of new web applications. With the release of eyeOS 1.1 on July 2, 2007, eyeOS changed its license and migrated from GNU GPL Version 2 to Version 3. Version 1.2 was released just a few months after the 1.1 version and integrated full compatibility with Microsoft Word files. eyeOS 1.5 Gala was released on January 15, 2008. This version was the first to support both Microsoft Office and OpenOffice.org file formats for documents, presentations and spreadsheets. It also has the ability to import and export documents in both formats using server-side scripting. eyeOS 1.6 was released on April 25, 2008, and included many improvements such as synchronization with local computers, drag and drop, a mobile version and more. eyeOS 1.8 Lars was released on January 7, 2009 and featured a completely rewritten file manager and a new s
https://en.wikipedia.org/wiki/YbBiPt
YbBiPt (ytterbium-bismuth-platinum; also named YbPtBi) is an intermetallic material which at low temperatures exhibits an extremely high value of specific heat, which is a characteristic of heavy-fermion behavior. YbBiPt has a noncentrosymmetric cubic crystal structure; in particular it belongs to the ternary half-Heusler compounds.

Discovery

YbBiPt was discovered by Zachary Fisk (Los Alamos National Laboratory) and coworkers in 1991 in the context of materials research devoted to correlated electron systems such as heavy-fermion metals and Kondo insulators. The material was then studied in detail due to its unconventional properties at very low temperatures (below 1 K).

Material properties

YbBiPt crystallizes in the MgAgAs structure, which is also known as the half-Heusler structure. YbBiPt exhibits metallic behavior, e.g. continuously decreasing electrical resistivity upon cooling. The temperature dependence of the specific heat shows an anomaly at 0.4 K and linear behavior at yet lower temperatures, with an enormous Sommerfeld coefficient (which describes the linear-in-temperature contribution to the specific heat caused by metallic electrons) of 8 J/(mol Yb K²), which indicates an effective mass of the charge carriers that is extremely large even by heavy-fermion standards.

Larger context

The crystal structure of YbBiPt makes it an example of the Heusler compounds, more precisely of the so-called half-Heuslers which have XYZ composition. In recent years, there has been great interest in this material class due to the large variety of physical properties that can be found, and many new Heusler materials have been discovered.
https://en.wikipedia.org/wiki/Taste%20bud
Taste buds are clusters of taste receptor cells, which are also known as gustatory cells. The taste receptors are located around the small structures known as papillae found on the upper surface of the tongue, soft palate, upper esophagus, the cheek, and epiglottis. These structures are involved in detecting the five elements of taste perception: saltiness, sourness, bitterness, sweetness and savoriness (umami). A popular myth assigns these different tastes to different regions of the tongue; in fact, these tastes can be detected by any area of the tongue. Via small openings in the tongue epithelium, called taste pores, parts of the food dissolved in saliva come into contact with the taste receptors. These are located on top of the taste receptor cells that constitute the taste buds. The taste receptor cells send information detected by clusters of various receptors and ion channels to the gustatory areas of the brain via the seventh, ninth and tenth cranial nerves. On average, the human tongue has 2,000-8,000 taste buds. The average lifespan of these is estimated to be 10 days. Types of papillae The taste buds on the tongue sit on raised protrusions of the tongue surface called papillae. There are four types of lingual papillae; all except one contain taste buds: Fungiform papillae - as the name suggests, these are slightly mushroom-shaped if looked at in longitudinal section. These are present mostly at the dorsal surface of the tongue, as well as at the sides. Innervated by facial nerve. Foliate papillae - these are ridges and grooves towards the posterior part of the tongue found at the lateral borders. Innervated by facial nerve (anterior papillae) and glossopharyngeal nerve (posterior papillae). Circumvallate papillae - there are only about 10 to 14 of these papillae on most people, and they are present at the back of the oral part of the tongue. They are arranged in a circular-shaped row just in front of the sulcus terminalis of the tongue. They are ass
https://en.wikipedia.org/wiki/Software%20evolution
Software evolution is the continual development of a piece of software after its initial release to address changing stakeholder and/or market requirements. Software evolution is important because organizations invest large amounts of money in their software and are completely dependent on it. Software evolution helps software adapt to changing business requirements, fix defects, and integrate with other changing systems in a software system environment.

General introduction

Fred Brooks, in his key book The Mythical Man-Month, states that over 90% of the costs of a typical system arise in the maintenance phase, and that any successful piece of software will inevitably be maintained. In fact, Agile methods stem from maintenance-like activities in and around web-based technologies, where the bulk of the capability comes from frameworks and standards. Software maintenance addresses bug fixes and minor enhancements, while software evolution focuses on adaptation and migration. Software technologies will continue to develop; these changes will require new laws and theories to be created and justified, and some models will require additional aspects to describe future programs. Innovations and improvements can push software development in unexpected directions, and maintenance concerns will likewise have to change to keep pace with evolving software. Software processes are themselves evolving: through learning and refinement, their efficiency and effectiveness continually improve.

Basic concepts

The need for software evolution comes from the fact that no one is able to predict a priori how user requirements will evolve. In other words, the existing systems are never complete and continue to evolve. As they evolve, the complexity of the systems will grow unless a better solution is available to address these issues. The main objectives of software evolution are ensuring functional relevance, reliability and flexibility
https://en.wikipedia.org/wiki/Balloon%20popping
A balloon pops when the material that makes up its surface tears or shreds, creating a hole. Normally, there is a balance of the balloon skin's elastic tension in which every point on the balloon's surface is being pulled by the material surrounding it. However, if a hole is made on the balloon's surface, the force becomes imbalanced, since there is no longer any force exerted by the center of the hole on the material at its edge. As a result, the balloon's surface at the edge of the hole pulls away, making the hole bigger; the high-pressure air can then escape through the hole and the balloon pops. A balloon can be popped by either physical or chemical actions. Limpanuparb et al. use popping a balloon as a demonstration to teach about physical and chemical hazards in laboratory safety.

Physical

A pin or needle is frequently used to pop a balloon. As the needle or pin creates a hole on the balloon surface, the balloon pops. However, if tape is placed on the part where the hole is created, the balloon will not pop, since the tape helps reinforce the elastic tension in that area, preventing the edges of the hole from pulling away from the center. Likewise, the thick spots of the balloon at the top and the bottom can be pierced by a needle, pin, or even skewer without the balloon popping.

Chemical

Organic solvent

Applying an organic solvent such as toluene onto a balloon's surface can pop it, since the solvent can partially dissolve the material making up the balloon's surface.

cis-1,4-polyisoprene (solid) + organic solvent → cis-1,4-polyisoprene (partly dissolved)

Baby oil can also be applied to water balloons to pop them.

Orange peel

Orange peel contains a compound called limonene, which is a hydrocarbon similar to the rubber used to make balloons. Based on the "like dissolves like" principle, rubber balloons can be dissolved by limonene, popping the balloon. If the balloon is vulcanized (hardened with sulfur), the balloon will not pop.

Gallery See a
https://en.wikipedia.org/wiki/Psychological%20stress%20and%20sleep
Sleep is a naturally recurring state of mind and body, characterized by altered consciousness, relatively inhibited sensory activity, reduced muscle activity, and inhibition of nearly all voluntary muscles during rapid eye movement (REM) sleep, and reduced interactions with surroundings. An essential aspect of sleep is that it provides the human body with a period of reduced functioning that allows for the systems throughout the body to be repaired. This time allows the body to recharge and return to a phase of optimal functioning. It is recommended that adults get 7 to 9 hours of sleep each night. Sleep is regulated by an internal process known as the circadian rhythm. This 24-hour cycle regulates the periods of alertness and tiredness that an individual experiences. The correlation between psychological stress and sleep is complex and not fully understood. In fact, many studies have found a bidirectional relationship between stress and sleep. This means that sleep quality can affect stress levels, and stress levels can affect sleep quality. Sleep change depends on the type of stressor, sleep perception, related psychiatric conditions, environmental factors, and physiological limits.

Stress/sleep cycle

It is critical that we receive an adequate amount of sleep each night. According to the Centers for Disease Control and Prevention, people 18-60 years old need 7 or more hours of sleep per night. The majority of college students fall in this age range. While sleep is critical, many college students do not reach this threshold amount of sleep, and subsequently face detrimental effects. However, it is clear that stress and sleep in college students are interrelated, instead of one only affecting the other. "Stress and sleep affect each other. Poor sleep can increase stress, otherwise high-stress can also cause sleep disturbances". Stated differently, the relationship between stress and sleep is bidirectional in nature.

Types of stressors

Stressors can be cat
https://en.wikipedia.org/wiki/Food%20packaging
Food packaging is a packaging system specifically designed for food and represents one of the most important aspects among the processes involved in the food industry, as it provides protection from chemical, biological and physical alterations. The main goal of food packaging is to provide a practical means of protecting and delivering food goods at a reasonable cost while meeting the needs and expectations of both consumers and industries. Additionally, current trends like sustainability, environmental impact reduction, and shelf-life extension have gradually become among the most important aspects in designing a packaging system.

History

Packaging of food products has seen a vast transformation in technology usage and application from the stone age to the industrial revolution:

7000 BC: The adoption of pottery and glass, which saw industrialization around 1500 BC.
1700s: The first manufacturing production of tinplate was introduced in England (1699) and in France (1720). Afterwards, the Dutch navy started to use such packaging to prolong the preservation of food products.
1804: Nicolas Appert, in response to inquiries into extending the shelf life of food for the French Army, employed glass bottles along with thermal food treatment. Glass was later replaced by metal cans in this application. However, there is still an ongoing debate about who first introduced the use of tinplates as food packaging.
1870: The use of paper board was launched and corrugated materials patented.
1880s: First cereal packaged in a folding box by Quaker Oats.
1890s: The crown cap for glass bottles was patented by William Painter.
1960s: Development of the two-piece drawn and wall-ironed metal cans in the US, along with the ring-pull opener and the Tetra Brik Aseptic carton package.
1970s: The barcode system was introduced in the retail and manufacturing industry. PET plastic blow-mold bottle technology, which is widely used in the beverage industry, was introduced.
1990s: The app
https://en.wikipedia.org/wiki/Duck%20typing
Duck typing in computer programming is an application of the duck test—"If it walks like a duck and it quacks like a duck, then it must be a duck"—to determine whether an object can be used for a particular purpose. With nominative typing, an object is of a given type if it is declared as such (or if a type's association with the object is inferred through mechanisms such as object inheritance). With duck typing, an object is of a given type if it has all methods and properties required by that type. Duck typing may be viewed as a usage-based structural equivalence between a given object and the requirements of a type.

Example

This simple example in Python 3 demonstrates how any object may be used in any context until it is used in a way that it does not support.

class Duck:
    def swim(self):
        print("Duck swimming")

    def fly(self):
        print("Duck flying")

class Whale:
    def swim(self):
        print("Whale swimming")

for animal in [Duck(), Whale()]:
    animal.swim()
    animal.fly()

Output:

Duck swimming
Duck flying
Whale swimming
AttributeError: 'Whale' object has no attribute 'fly'

If it can be assumed that anything that can swim is a duck because ducks can swim, a whale could be considered a duck; however, if it is also assumed that a duck must be capable of flying, the whale will not be considered a duck.

In statically typed languages

In some statically typed languages such as Boo and D, class type checking can be specified to occur at runtime rather than at compile time.

Comparison with other type systems

Structural type systems

Duck typing is similar to, but distinct from, structural typing. Structural typing is a static typing system that determines type compatibility and equivalence by a type's structure, whereas duck typing is dynamic and determines type compatibility by only that part of a type's structure that is accessed during runtime. The TypeScript, Elm and Python languages support structural typing to varying degrees.
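In Python practice, duck typing is often paired with the "easier to ask forgiveness than permission" (EAFP) idiom: rather than checking an object's type up front, code simply calls the method and handles the failure. A small sketch reusing the Whale class from the example above (the make_it_fly helper is this sketch's own, not part of the article):

```python
class Whale:
    def swim(self):
        print("Whale swimming")

def make_it_fly(animal):
    # EAFP: attempt the call and recover from the AttributeError,
    # instead of testing the object's declared type beforehand.
    try:
        animal.fly()
    except AttributeError:
        print("This animal cannot fly")

make_it_fly(Whale())  # prints: This animal cannot fly
```

This keeps the caller dependent only on the behaviour it actually uses, which is the essence of duck typing.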
https://en.wikipedia.org/wiki/Merbecovirus
Merbecovirus is a subgenus of viruses in the genus Betacoronavirus, including the human pathogen Middle East respiratory syndrome–related coronavirus (MERS-CoV). The viruses in this subgenus were previously known as group 2c coronaviruses. Structure The viruses of this subgenus, like other coronaviruses, have a lipid bilayer envelope in which the membrane (M), envelope (E) and spike (S) structural proteins are anchored. See also Embecovirus (group 2a) Sarbecovirus (group 2b) Nobecovirus (group 2d)
https://en.wikipedia.org/wiki/Comparison%20of%20webmail%20providers
The following tables compare general and technical information for a number of notable webmail providers who offer a web interface in English. The list does not include web hosting providers who may offer email server and/or client software as a part of hosting package, or telecommunication providers (mobile network operators, internet service providers) who may offer mailboxes exclusively to their customers. General General information on webmail providers and products Digital rights Verification How much information users must provide to verify and complete the registration when opening an account (green means less personal information requested): Secure delivery Features to reduce the risk of third-party tracking and interception of the email content; measures to increase the deliverability of correct outbound messages. Other Unique features Features See also Comparison of web search engines - often merged with webmail by companies that host both services
https://en.wikipedia.org/wiki/Colorburst
Colorburst is an analog video, composite video signal generated by a video-signal generator used to keep the chrominance subcarrier synchronized in a color television signal. By synchronizing an oscillator with the colorburst at the back porch (beginning) of each scan line, a television receiver is able to restore the suppressed carrier of the chrominance (color) signals, and in turn decode the color information. The most common use of colorburst is to genlock equipment together as a common reference with a vision mixer in a television studio using a multi-camera setup. Explanation In NTSC, its frequency is exactly 315/88 MHz ≈ 3.579545 MHz, with a phase of 180°. PAL uses a frequency of exactly 4.43361875 MHz, with its phase alternating between 135° and 225° from line to line. Since the colorburst signal has a known amplitude, it is sometimes used as a reference level when compensating for amplitude variations in the overall signal. SECAM is unique in not having a colorburst signal, since the chrominance signals are encoded using FM rather than QAM, thus the signal phase is immaterial and no reference point is needed. Rationale for NTSC Color burst frequency The original black and white NTSC television standard specified a frame rate of 30 Hz and 525 lines per frame, or 15750 lines per second. The audio was frequency modulated 4.5 MHz above the video signal. Because this was black and white, the video consisted only of luminance (brightness) information. Although all of the space in between was occupied, the line-based nature of the video information meant that the luminance data was not spread uniformly across the frequency domain; it was concentrated at multiples of the line rate. Plotting the video signal on a spectrogram gave a signature that looked like the teeth of a comb or a gear, rather than smooth and uniform. RCA discovered that if the chrominance (color) information, which had a similar spectrum, was modulated on a carrier that was a half-integer multi
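The NTSC figures can be checked by direct arithmetic. A small sketch with exact rationals, assuming the standard NTSC relation that the color subcarrier is the 455th multiple of half the (color) line rate:

```python
from fractions import Fraction

# NTSC color subcarrier: exactly 315/88 MHz
f_sc = Fraction(315, 88) * 10**6              # Hz, exactly

# The subcarrier sits at 455/2 times the line rate, so the color
# line rate is slightly below the original 15750 lines per second.
line_rate = f_sc / Fraction(455, 2)           # Hz, exactly

assert f_sc == 3_579_545 + Fraction(5, 11)    # ≈ 3.579545 MHz
print(float(f_sc))                            # ≈ 3579545.45 Hz
print(float(line_rate))                       # ≈ 15734.27 Hz, vs. 15750 for black and white
```

The half-integer multiple places the chrominance "teeth" exactly between the luminance "teeth" in the frequency domain, which is the interleaving idea the rationale above describes.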
https://en.wikipedia.org/wiki/Expense%20ratio
The expense ratio of a stock or asset fund is the total percentage of fund assets used for administrative, management, advertising (12b-1), and all other expenses. An expense ratio of 1% per annum means that each year 1% of the fund's total assets will be used to cover expenses. The expense ratio does not include sales loads or brokerage commissions. Expense ratios are important to consider when choosing a fund, as they can significantly affect returns. Factors influencing the expense ratio include the size of the fund (small funds often have higher ratios as they spread expenses among a smaller number of investors), sales charges, and the management style of the fund. A typical annual expense ratio for a U.S. domestic stock fund is about 1%, although some passively managed funds (such as index funds) have significantly lower ratios. One notable component of the expense ratio of U.S. funds is the "12b-1 fee", which represents expenses used for advertising and promotion of the fund. 12b-1 fees are generally limited to a maximum of 1.00% per year (.75% distribution and .25% shareholder servicing) under Financial Industry Regulatory Authority Rules. The term "expense ratio" is also a key measure of performance for a nonprofit organization. The term is sometimes used in other contexts as well. Waivers, reimbursements and recoupments Some funds will execute "waiver or reimbursement agreements" with the fund's adviser or other service providers, especially when a fund is new and expenses tend to be higher (due to a small asset base). These agreements generally reduce expenses to some pre-determined level or by some pre-determined amount. Sometimes, these waiver/reimbursement amounts must be repaid by the fund during a period that generally cannot exceed 3 years from the year in which the original expense was incurred. If a recoupment plan is in effect, the effect may be to require future shareholders to absorb expenses of the fund incurred during prior years.
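The compounding effect of an expense ratio on long-run returns is easy to illustrate. A hypothetical sketch (the return, fee, and horizon figures are invented for the example):

```python
def final_value(initial, gross_return, expense_ratio, years):
    """Grow `initial` for `years`, deducting the expense ratio each year."""
    value = initial
    for _ in range(years):
        value *= (1 + gross_return) * (1 - expense_ratio)
    return value

# Same 7% gross return, 30 years, differing only in expenses:
cheap = final_value(10_000, 0.07, 0.0005, 30)   # 0.05% p.a., e.g. a low-cost index fund
costly = final_value(10_000, 0.07, 0.01, 30)    # 1% p.a., a typical active stock fund

print(round(cheap), round(costly))  # the low-cost fund ends substantially ahead
```

Even a one-percentage-point difference in annual expenses compounds into a gap of tens of percent of final wealth over multi-decade horizons, which is why the expense ratio is a key selection criterion.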
https://en.wikipedia.org/wiki/Sister%20chromatid%20exchange
Sister chromatid exchange (SCE) is the exchange of genetic material between two identical sister chromatids. It was first discovered by using the Giemsa staining method on one chromatid belonging to the sister chromatid complex before anaphase in mitosis. The staining revealed that a few segments which were not dyed had been passed to the sister chromatid. The Giemsa staining was possible due to the presence of the bromodeoxyuridine base analog, which had been introduced into the desired chromatid. The cause of SCE is not known, but it is used in mutagenicity testing of many products. Four to five sister chromatid exchanges per chromosome pair, per mitosis, are within the normal range, while 14–100 exchanges is not normal and presents a danger to the organism. SCE is elevated in pathologies including Bloom syndrome, having recombination rates ~10–100 times above normal, depending on cell type. Frequent SCEs may also be related to formation of tumors. Sister chromatid exchange has also been observed more frequently in B51(+) Behçet's disease. Mitosis Mitotic recombination in the budding yeast Saccharomyces cerevisiae is primarily a result of DNA repair processes responding to spontaneous or induced damage that occurs during vegetative growth. (Also reviewed in Bernstein and Bernstein, pp 220–221.) In order for yeast cells to repair damage by homologous recombination, there must be present, in the same nucleus, a second DNA molecule containing sequence homology with the region to be repaired. In a diploid cell in G1 phase of the cell cycle, such a molecule is present in the form of the homologous chromosome. However, in the G2 phase of the cell cycle (following DNA replication), a second homologous DNA molecule is also present: the sister chromatid. Evidence indicates that, due to the special nearby relationship they share, sister chromatids are not only preferred over distant homologous chromatids as substrates for recombinational repair, but have
https://en.wikipedia.org/wiki/Coombs%20test
The direct and indirect Coombs tests, also known as antiglobulin test (AGT), are blood tests used in immunohematology. The direct Coombs test detects antibodies that are stuck to the surface of the red blood cells. Since these antibodies sometimes destroy red blood cells they can cause anemia; this test can help clarify the condition. The indirect Coombs test detects antibodies that are floating freely in the blood. These antibodies could act against certain red blood cells; the test can be carried out to diagnose reactions to a blood transfusion. The direct Coombs test is used to test for autoimmune hemolytic anemia, a condition where the immune system breaks down red blood cells, leading to anemia. The direct Coombs test is used to detect antibodies or complement proteins attached to the surface of red blood cells. To perform the test, a blood sample is taken and the red blood cells are washed (removing the patient's plasma and unbound antibodies from the red blood cells) and then incubated with anti-human globulin ("Coombs reagent"). If the red cells then agglutinate, the test is positive, a visual indication that antibodies or complement proteins are bound to the surface of red blood cells and may be causing destruction of those cells. The indirect Coombs test is used in prenatal testing of pregnant women and in testing prior to a blood transfusion. The test detects antibodies against foreign red blood cells. In this case, serum is extracted from a blood sample taken from the patient. The serum is incubated with foreign red blood cells of known antigenicity. Finally, anti-human globulin is added. If agglutination occurs, the indirect Coombs test is positive. Mechanism The two Coombs tests are based on anti-human antibodies binding to human antibodies, commonly IgG or IgM. These anti-human antibodies are produced by plasma cells of non-human animals after immunizing them with human plasma. Additionally, these anti-human antibodies will also bind to human anti
https://en.wikipedia.org/wiki/Pockmark%20%28geology%29
Pockmarks are concave, crater-like depressions on seabeds that are caused by fluids (liquids and gases) escaping and erupting through the seafloor. They can vary in size and have been found worldwide. Pockmarks were discovered off the coast of Nova Scotia, Canada in the late 1960s by Lew King and Brian McLean of the Bedford Institute of Oceanography, using a new side scan sonar developed in the late 1960s by Kelvin Hughes. Before King and McLean used the side scan sonar, they had noticed 'notches' in the seafloor off Nova Scotia on echo sounder and shallow seismic records. They believed these notches represented gullies and curvilinear troughs in the muddy seafloor. However, they could never work out how to join these notches from one survey line to the next. It was therefore not until they surveyed with an area-coverage system, side scan sonar, that they realized the notches were in fact closed depressions (craters) and not curvilinear features. This was a great surprise, because there are very few craters on the Earth's surface. Although pockmarks were first documented and published 50 years ago, they are still being discovered on the ocean floor and in many lakes around the world. Spatial delineation and morphometric characterisation of pockmarks in the central North Sea seabed have been achieved by semi-automatic methods. The craters off Nova Scotia are up to in diameter and deep. Pockmarks have been found worldwide. Discovery was aided by the use of high-resolution multibeam acoustic systems for bathymetric mapping. In these cases, pockmarks have been interpreted as the morphological expression of gas or oil leakage from an active hydrocarbon system or a deep overpressured petroleum reservoir. Specifically, long-term deep fluid flow resulting in pockmarks is linked to undersea methane gas escape under pressure. See also Cold seeps Limnic eruption Demersal fish Bibliography
https://en.wikipedia.org/wiki/Multifunctional%20landscape
Multifunctional landscapes are composed of lands used for multiple different purposes, including agriculture, forestry, settlements, recreation, conservation and restoration. With different parts of the landscape sustaining people and other species, multifunctional landscapes are heterogenous mosaics of lands used for agriculture and settlements that also include significant areas of habitats and regenerating ecosystems. See also Landscape ecology Agroecology Landscape-scale conservation Land use Working landscape Anthropogenic biome
https://en.wikipedia.org/wiki/Tate%27s%20thesis
In number theory, Tate's thesis is the 1950 PhD thesis of John Tate, completed under the supervision of Emil Artin at Princeton University. In it, Tate used a translation-invariant integration on the locally compact group of ideles to lift the zeta function twisted by a Hecke character, i.e. a Hecke L-function, of a number field to a zeta integral and study its properties. Using harmonic analysis, more precisely the Poisson summation formula, he proved the functional equation and meromorphic continuation of the zeta integral and the Hecke L-function. He also located the poles of the twisted zeta function. His work can be viewed as an elegant and powerful reformulation of a work of Erich Hecke on the proof of the functional equation of the Hecke L-function. Hecke used a generalized theta series associated to an algebraic number field and a lattice in its ring of integers. Iwasawa–Tate theory Kenkichi Iwasawa independently discovered essentially the same method (without an analog of the local theory in Tate's thesis) during the Second World War and announced it in his 1950 International Congress of Mathematicians paper and his letter to Jean Dieudonné written in 1952. Hence this theory is often called Iwasawa–Tate theory. In his letter to Dieudonné, Iwasawa derived on several pages not only the meromorphic continuation and functional equation of the L-function, but also proved finiteness of the class number and Dirichlet's theorem on units as immediate byproducts of the main computation. The theory in positive characteristic was developed one decade earlier by Ernst Witt, Wilfried Schmid, and Oswald Teichmüller. Iwasawa–Tate theory uses several structures which come from class field theory; however, it does not use any deep result of class field theory. Generalisations Iwasawa–Tate theory was extended to the general linear group GL(n) over an algebraic number field and automorphic representations of its adelic group by Roger Godement and Hervé Jacquet in 1972 which f
https://en.wikipedia.org/wiki/Dasyatis%20ushiei
Dasyatis ushiei, the cow stingray or Ushi stingray, is a species of stingray known from a single specimen. Based on that specimen, its range includes at least Mikawa Bay, Aichi Prefecture, in middle Japan. Due to the limited knowledge of its biology and the extent of its capture in fisheries, this species was assessed as Data Deficient in 2007. Molecular data from 2012 confirmed that this species is a population of the broad stingray.
https://en.wikipedia.org/wiki/Bitsquatting
Bitsquatting is a form of cybersquatting which relies on bit-flip errors that occur during the process of making a DNS request. These bit-flips may occur due to factors such as faulty hardware or cosmic rays. When such an error occurs, the user requesting the domain may be directed to a website registered under a domain name similar to a legitimate domain, except with one bit flipped in their respective binary representations. A 2011 Black Hat paper detailed an analysis in which eight legitimate domains were targeted with thirty-one bitsquat domains. Over the course of about seven months, 52,317 requests were made to the bitsquat domains.
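The set of domains one bit-flip away from a given name is small enough to enumerate directly. A minimal sketch that keeps only flips producing valid hostname characters (the example domain is arbitrary):

```python
import string

# Characters permitted in a hostname label (lowercase form)
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquats(domain):
    """All domains exactly one bit-flip away that are still valid hostnames."""
    out = set()
    for i, ch in enumerate(domain):
        if ch == ".":          # leave the label separator alone
            continue
        for bit in range(8):   # flip each of the 8 bits of the ASCII byte
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in VALID and flipped != ch:
                out.add(domain[:i] + flipped + domain[i + 1:])
    return sorted(out)

print(bitsquats("cnn.com")[:5])  # e.g. 'ann.com' and 'bnn.com' appear
```

Registering names from such a list is exactly what the 2011 study did: each candidate differs from the legitimate domain by a single bit, so a memory or transmission error can silently redirect a request to it.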
https://en.wikipedia.org/wiki/Mihaela%20Ignatova
Mihaela Ignatova is a Bulgarian mathematician who won the 2020 Sadosky Prize of the Association for Women in Mathematics for her research in mathematical analysis, and in particular in partial differential equations and fluid dynamics. Education In 2004, Ignatova earned both a bachelor's degree from Sofia University and a master's degree from the University of Nantes. She earned a second master's degree from Sofia University in 2006, working under the supervision of mathematician Emil Horozov. She then completed her PhD at the University of Southern California in 2011 under the supervision of Igor Kukavica. Career After working as a visiting assistant professor at the University of California, Riverside, a postdoctoral researcher at Stanford University, and an instructor at Princeton University, she moved to Temple University as an assistant professor in 2018.
https://en.wikipedia.org/wiki/Cointegration
Cointegration is a statistical property of a collection of time series variables. First, all of the series must be integrated of order d (see Order of integration). Next, if a linear combination of this collection is integrated of order less than d, then the collection is said to be co-integrated. Formally, if (X,Y,Z) are each integrated of order d, and there exist coefficients a,b,c such that aX + bY + cZ is integrated of order less than d, then X, Y, and Z are cointegrated. Cointegration has become an important property in contemporary time series analysis. Time series often have trends—either deterministic or stochastic. In an influential 1982 paper, Charles Nelson and Charles Plosser provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends. Introduction If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1)) but some (cointegrating) vector of coefficients exists to form a stationary linear combination of them. For instance, a stock market index and the price of its associated futures contract move through time, each roughly following a random walk. Testing the hypothesis that there is a statistically significant connection between the futures price and the spot price could now be done by testing for the existence of a cointegrated combination of the two series. History The first to introduce and analyse the concept of spurious—or nonsense—regression was Udny Yule in 1926. Before the 1980s, many economists used linear regressions on non-stationary time series data, which Nobel laureate Clive Granger and Paul Newbold showed to be a dangerous approach that could produce spurious correlation, since standard detrending techniques can result in data that are still non-stationary. Granger's 19
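The idea of a stationary linear combination can be illustrated by simulation. A sketch with an invented data-generating process in which X and Y share a single random-walk trend z, so that each series is I(1) while the combination Y − 2X is stationary:

```python
import random

random.seed(42)
n = 10_000
z = 0.0                 # common stochastic trend (a random walk)
X, Y = [], []
for _ in range(n):
    z += random.gauss(0, 1)
    X.append(z + random.gauss(0, 1))        # X = z + noise, so X is I(1)
    Y.append(2 * z + random.gauss(0, 1))    # Y = 2z + noise, also I(1)

# Cointegrating vector (1, -2): the trend cancels, leaving only noise
spread = [y - 2 * x for x, y in zip(X, Y)]

def var(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

print(var(Y), var(spread))  # Y's variance grows with the trend; the spread's stays bounded
```

In practice the cointegrating vector is estimated rather than known, and formal tests (Engle–Granger, Johansen) check whether such a stationary combination exists; the simulation only shows what those tests are looking for.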
https://en.wikipedia.org/wiki/Glossary%20of%20entomology%20terms
This glossary of entomology describes terms used in the formal study of insect species by entomologists.

A–C

Aldrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Endrin, Isodrin

D–F

Dieldrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. cf. the related Aldrin, Endrin, Isodrin

Endrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Isodrin

G–L

Isodrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Endrin

M–O P–R S–Z Figures See also Anatomical terms of location Butterfly Caterpillar Comstock–Needham system External morphology of Lepidoptera Glossary of ant terms Glossary of spider terms Glossary of scientific names Insect wing Pupa
https://en.wikipedia.org/wiki/Urban%20ecology
Urban ecology is the scientific study of the relation of living organisms with each other and their surroundings in an urban environment. An urban environment refers to environments dominated by high-density residential and commercial buildings, paved surfaces, and other urban-related factors that create a unique landscape. The goal of urban ecology is to achieve a balance between human culture and the natural environment. Urban ecology is a recent field of study compared to ecology. The methods and studies of urban ecology are a subset of ecology. The study of urban ecology carries increasing importance because more than 50% of the world's population today lives in urban areas. It is also estimated that within the next 40 years, two-thirds of the world's population will be living in expanding urban centers. The ecological processes in the urban environment are comparable to those outside the urban context. However, the types of urban habitats and the species that inhabit them are poorly documented, which is why more research should be done in urban ecology. History Historically, ecology has focused on natural environments, but by the 1970s many ecologists began to turn their interest towards ecological interactions taking place in and caused by urban environments. In the nineteenth century, naturalists such as Malthus, De Candolle, Lyell, and Darwin found that competition for resources was crucial in controlling population growth and is a driver of extinction. This concept was the basis of evolutionary ecology. Jean-Marie Pelt's 1977 book The Re-Naturalized Human, Brian Davis' 1978 publication Urbanization and the diversity of insects, and Sukopp et al.'s 1979 article "The soil, flora and vegetation of Berlin's wastelands" are some of the first publications to recognize the importance of urban ecology as a separate and distinct form of ecology, in the same way one might see landscape ecology as different from population ecology. Forman and Godron's 1986 book Landscap
https://en.wikipedia.org/wiki/Cantor%27s%20first%20set%20theory%20article
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval. Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive. Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability. Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted — he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributio
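Cantor's nested-interval construction, the constructive core of the 1874 proof, can be sketched numerically: given an enumerated sequence of reals, each step picks the first two terms lying inside the current interval and shrinks the interval to them, so that an initial segment of the sequence is excluded. A toy illustration on an enumeration of rationals (finitely many steps only, not the full limiting argument):

```python
from fractions import Fraction

def exclude_terms(seq, a, b, steps):
    """Shrink the open interval (a, b) so it avoids an initial segment of seq."""
    used = -1
    for _ in range(steps):
        inside = [(k, x) for k, x in enumerate(seq) if a < x < b]
        if len(inside) < 2:
            break
        (i, xi), (j, xj) = inside[0], inside[1]   # first two terms, by index
        a, b = min(xi, xj), max(xi, xj)           # new, smaller open interval
        used = j                                   # all terms up to index j are now excluded
    return a, b, used

# An enumeration of some rationals in (0, 1)
seq = [Fraction(p, q) for q in range(2, 30) for p in range(1, q)]
a, b, used = exclude_terms(seq, Fraction(0), Fraction(1), 5)

# Every enumerated term up to index `used` lies outside the open interval (a, b)
assert all(not (a < x < b) for x in seq[:used + 1])
```

In Cantor's actual proof the intervals are nested ad infinitum and their common point cannot appear anywhere in the enumeration; applied to an enumeration of the algebraic numbers in an interval, that common point is transcendental.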
https://en.wikipedia.org/wiki/UTOPIA%20%28bioinformatics%20tools%29
UTOPIA (User-friendly Tools for Operating Informatics Applications) is a suite of free tools for visualising and analysing bioinformatics data. Based on an ontology-driven data model, it contains applications for viewing and aligning protein sequences, rendering complex molecular structures in 3D, and for finding and using resources such as web services and data objects. There are two major components, the protein analysis suite and UTOPIA documents. Utopia Protein Analysis suite The Utopia Protein Analysis suite is a collection of interactive tools for analysing protein sequence and protein structure. Up front are user-friendly and responsive visualisation applications, behind the scenes a sophisticated model that allows these to work together and hides much of the tedious work of dealing with file formats and web services. Utopia Documents Utopia Documents brings a fresh new perspective to reading the scientific literature, combining the convenience and reliability of the Portable Document Format (pdf) with the flexibility and power of the web. History Between 2003 and 2005 work on UTOPIA was funded via The e-Science North West Centre based at The University of Manchester by the Engineering and Physical Sciences Research Council, UK Department of Trade And Industry, and the European Molecular Biology Network (EMBnet). Since 2005 work continues under the EMBRACE European Network of Excellence. UTOPIA's CINEMA (Colour INteractive Editor for Multiple Alignments), a tool for Sequence Alignment, is the latest incarnation of software originally developed at The University of Leeds to aid the analysis of G protein-coupled receptors (GPCRs). SOMAP, a Screen Oriented Multiple Alignment Procedure was developed in the late 1980s on the VMS computer operating system, used a monochrome text-based VT100 video terminal, and featured context-sensitive help and pulldown menus some time before these were standard operating system features. SOMAP was followed by a Unix tool c
https://en.wikipedia.org/wiki/Mongolitubulus
Mongolitubulus is a form genus encapsulating a range of ornamented conical small shelly fossils of the Cambrian period. It is potentially synonymous with Rushtonites, Tubuterium and certain species of Rhombocorniculum, and owing to the similarity of the genera, they are all dealt with herein. Organisms that bore Mongolitubulus-like projections include trilobites, bradoriid arthropods and hallucigeniid lobopodians. Morphology The fossils consist of round, slender, pointed spines with a slight curvature, and are covered with short rhomboid processes that spiral around the spine surface, forming a regular mosaic with a 60° angle of intersection. Spines vary from sub-millimetric up to two centimetres in length, but do not show any growth lines, suggesting that they were moulted and replaced. Species are defined on the basis of the ornamentation, which may of course be convergent. Spines of Rhombocorniculum cancellatum have a similar surface ornamentation and are also curved, sometimes in two dimensions to form a 'screw'; they had an inner and outer organic layer that surrounded a layer of pillar-like apatite crystals; these enclosed a honeycomb-like structure of narrow edge-parallel chambers. This genus is a useful biostratigraphic marker of the Lower Cambrian. The rhomboid ornament uniformly covers all of the spine, with the exception (in some cases) of the smooth-surfaced tip. Mongolitubulus has a comparable structure; phosphatic fossils show that there was a smooth outer layer about 2–3.5 µm thick, a 10–15 µm-thick inner layer comprising axis-parallel fibres that are each ~1 µm wide, and a large cavity in the centre of the spine. Species Affinity M. henrikseni has been shown to be part of the carapace of a bivalved bradoriid arthropod. However, the affinity of M. squamifer is still unresolved; the genus may transpire to be a form taxon, which would require M. henrikseni to be re-classified into a new genus. Unlike the spines of M. henrikseni, which flare out at
https://en.wikipedia.org/wiki/Theory%20of%20Probability%20and%20Mathematical%20Statistics
Theory of Probability and Mathematical Statistics is a peer-reviewed international scientific journal published by Taras Shevchenko National University of Kyiv jointly with the American Mathematical Society two times per year in both print and electronic formats. The subjects covered by the journal are probability theory, mathematical statistics, random processes and fields, statistics of random processes and fields, random operators, stochastic differential equations, stochastic analysis, queuing theory, reliability theory, risk processes, financial and actuarial mathematics. The editor-in-chief is Yuliya Mishura (Ukraine). Abstracting and indexing The journal is abstracted and indexed in the Emerging Sources Citation Index, Mathematical Reviews, Scopus, and Zentralblatt MATH. Editorial Board Yu. Mishura (Editor-in-Chief) (Ukraine) M. Leonenko (Deputy Editor-in-Chief) (United Kingdom) K. Ralchenko (Managing Editor) (Ukraine) V. Anisimov (United Kingdom), A. Ayache (France), T. Bodnar (Sweden), M. Dozzi (France), M. Grothaus (Germany), A. Iksanov (Ukraine), A. Ivanov (Ukraine), R. Maiboroda (Ukraine), A. Malyarenko (Sweden), A. Marynych (Ukraine), L. Mattner (Germany), I. K. Matsak (Ukraine), I. Molchanov (Switzerland), A. Novikov (Australia), O. Okhrin (Germany), A. Olenko (Australia), E. Orsingher (Italy), F. Polito (Italy), D. Silvestrov (Sweden), G. Shevchenko (Ukraine), A. Swishchuk (Canada), A. Volodin (Canada), O. Zakusylo (Ukraine)
https://en.wikipedia.org/wiki/Central%20Drugs%20Standard%20Control%20Organisation
The Central Drugs Standard Control Organisation (CDSCO) is India's national regulatory body for cosmetics, pharmaceuticals and medical devices. It serves a similar function to the European Medicines Agency of the European Union, the PMDA of Japan, the Food and Drug Administration (FDA) of the United States, the Medicines and Healthcare products Regulatory Agency of the United Kingdom, and the State Administration for Market Regulation of China. The Indian government has announced its plan to bring all medical devices, including implants and contraceptives, under a review of the Central Drugs and Standard Control Organisation (CDSCO). Within the CDSCO, the Drug Controller General of India (DCGI) regulates pharmaceuticals and medical devices and is positioned within the Ministry of Health and Family Welfare. The DCGI is advised by the Drug Technical Advisory Board (DTAB) and the Drug Consultative Committee (DCC). The organisation is divided into zonal offices, each of which carries out pre-licensing and post-licensing inspections, post-market surveillance, and drug recalls (where necessary). Manufacturers who deal with the authority are required to name an Authorized Indian Representative (AIR) to represent them in all dealings with the CDSCO in India. Though the CDSCO has a good track record with the World Health Organization, it has also been accused of past collusion with independent medical experts and pharmaceutical companies. CDSCO plans to open an international office in Beijing, China. Divisions Central Drugs Standard Control Organization has 8 divisions: BA/BE New Drugs Medical Device & Diagnostics DCC-DTAB Import & Registration Biological Cosmetics Clinical Trials
https://en.wikipedia.org/wiki/Laminaria%20abyssalis
Laminaria abyssalis is a species of brown kelp, notable for its connection to rhodolith beds along the Brazilian coastline. Distribution and ecology Laminaria abyssalis is native to the Atlantic Ocean, off the coast of Brazil. It resides in a habitat spanning over , from upper Espírito Santo to mid Rio de Janeiro. It thrives in the waters of the continental shelf and intertidal zone, at depths of . The majority of these kelp take root in rhodolith beds: substrates formed by nodules of calcareous algae. Morphology The stipe of Laminaria abyssalis averages 14.3 centimeters in length, with an average width of 0.7 centimeters. The stipe supports an undivided blade averaging 241 centimeters in length, 68 centimeters in width, and 0.65 centimeters in thickness. The holdfast which attaches Laminaria abyssalis to the rhodolith bed it resides in has an average of 4 root-like extensions, each averaging 13.5 centimeters in length. History Laminaria abyssalis was first discovered in 1967 by A. B. Joly and E. C. Oliveira, who found it along the Brazilian coastline. A later expedition was made by Quége N in 1988 to determine the many locations in which Laminaria abyssalis grows along Brazil. In recent years the kelp has been in a semi-endangered state, as sand trawling has become more common in Brazilian waters. This has disrupted the rhodolith beds in which Laminaria abyssalis grows, causing the kelp to decline. The RESTORESEAS project has begun collecting data and running experiments to preserve these kelps.
https://en.wikipedia.org/wiki/Non-constructive%20algorithm%20existence%20proofs
The vast majority of positive results about computational problems are constructive proofs, i.e., a computational problem is proved to be solvable by showing an algorithm that solves it; a computational problem is shown to be in P (complexity) by showing an algorithm that solves it in time that is polynomial in the size of the input; etc. However, there are several non-constructive results, where an algorithm is proved to exist without showing the algorithm itself. Several techniques are used to provide such existence proofs. Using an unknown finite set In combinatorial game theory A simple example of a non-constructive algorithm was published in 1982 by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, in their book Winning Ways for Your Mathematical Plays. It concerns the game of Sylver Coinage, in which players take turns specifying a positive integer that cannot be expressed as a sum of previously specified values, with a player losing when they are forced to specify the number 1. There exists an algorithm (given in the book as a flow chart) for determining whether a given first move is winning or losing: if it is a prime number greater than three, or one of a finite set of 3-smooth numbers, then it is a winning first move, and otherwise it is losing. However, the finite set is not known. In graph theory Non-constructive algorithm proofs for problems in graph theory were studied beginning in 1988 by Michael Fellows and Michael Langston. A common question in graph theory is whether a certain input graph has a certain property. For example: Input: a graph G. Question: Can G be embedded in a 3-dimensional space, such that no two disjoint cycles of G are topologically linked (as in links of a chain)? There is a highly exponential algorithm that decides whether two cycles embedded in a 3d-space are linked, and one could test all pairs of cycles in the graph, but it is not obvious how to account for all possible embeddings in a 3d-space. Thus, it is a-pr
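The legality rule of Sylver Coinage, "cannot be expressed as a sum of previously specified values", is itself easy to compute with a standard coin-problem dynamic program; what is non-constructive is only the winning strategy. A sketch of the legality check (this enumerates legal moves, not winning ones):

```python
def representable(n, moves):
    """Can n be written as a nonnegative-integer combination of `moves`?"""
    reachable = [False] * (n + 1)
    reachable[0] = True
    for m in moves:
        for v in range(m, n + 1):       # unbounded-coin DP over totals
            if reachable[v - m]:
                reachable[v] = True
    return reachable[n]

def legal_moves(moves, limit):
    """Numbers up to `limit` that may still be named after `moves` were played."""
    return [n for n in range(1, limit + 1) if not representable(n, moves)]

print(legal_moves([5, 6], 20))  # → [1, 2, 3, 4, 7, 8, 9, 13, 14, 19]
```

The contrast with the Berlekamp–Conway–Guy result is the point: each position is fully computable, yet the finite exceptional set in their winning-first-move characterization is known to exist without being known.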
https://en.wikipedia.org/wiki/Reversible%20computing
Reversible computing is any model of computation where the computational process, to some extent, is time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the relation of the mapping from states to their successors must be one-to-one. Reversible computing is a form of unconventional computing. Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate. Reversibility There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit of energy dissipated per irreversible bit operation. Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s, proponents of reversible computing argue that this
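The one-to-one condition for logical reversibility can be checked directly on a small gate. The sketch below (illustrative only) uses the well-known Toffoli gate, a universal reversible gate, and verifies that its state-transition map is a bijection and is its own inverse:

```python
from itertools import product

def toffoli(a, b, c):
    """Controlled-controlled-NOT: flips the target bit c when both controls are 1."""
    return (a, b, c ^ (a & b))

states = list(product((0, 1), repeat=3))
images = [toffoli(*s) for s in states]

# One-to-one: every output state occurs exactly once among the 8 inputs.
assert len(set(images)) == len(states)

# Logically reversible: applying the gate twice restores the input.
assert all(toffoli(*toffoli(*s)) == s for s in states)
print("Toffoli is a bijection on the 8 three-bit states")
```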
https://en.wikipedia.org/wiki/Windows%20Error%20Reporting
Windows Error Reporting (WER) (codenamed Watson) is a crash reporting technology introduced by Microsoft with Windows XP and included in later Windows versions and Windows Mobile 5.0 and 6.0. Not to be confused with the Dr. Watson debugging tool which left the memory dump on the user's local machine, Windows Error Reporting collects and offers to send post-error debug information (a memory dump) using the Internet to Microsoft when an application crashes or stops responding on a user's desktop. No data is sent without the user's consent. When a crash dump (or other error signature information) reaches the Microsoft server, it is analyzed, and information about a solution is sent back to the user if available. Solutions are served using Windows Error Reporting Responses. Windows Error Reporting runs as a Windows service. Kinshuman Kinshumann is the original architect of WER. WER was also included in the Association for Computing Machinery (ACM) hall of fame for its impact on the computing industry. History Windows XP Microsoft first introduced Windows Error Reporting with Windows XP. Windows Vista Windows Error Reporting was improved significantly in Windows Vista, when public APIs were introduced for reporting failures other than application crashes and hangs. Using the new APIs, as documented on MSDN, developers can create custom reports and customize the reporting user interface. Windows Error Reporting was also revamped with a focus on reliability and user experience. For example, WER can now report errors even from processes in bad states such as stack exhaustions, PEB/TEB corruptions, and heap corruptions, conditions which in releases prior to Windows Vista would have resulted in silent program termination with no error report. A new Control Panel applet, "Problem Reports and Solutions" was also introduced, keeping a record of system and application errors and issues, as well as presenting probable solutions to problems. Windows 7 The Problem Reports and S
https://en.wikipedia.org/wiki/Librem
Librem is a line of computers manufactured by Purism, SPC featuring free (libre) software. The laptop line is designed to protect privacy and freedom by providing no non-free (proprietary) software in the operating system or kernel, avoiding the Intel Active Management Technology, and gradually freeing and securing firmware. Librem laptops feature hardware kill switches for the microphone, webcam, Bluetooth and Wi-Fi. Models Laptops Librem 13, Librem 15 and Librem 14 In 2014, Purism launched a crowdfunding campaign on Crowd Supply to fund the creation and production of the Librem 15 laptop, conceived as a modern alternative to existing open-source hardware laptops, all of which used older hardware. The 15 in the name refers to its 15-inch screen size. The campaign succeeded after extending the original campaign, and the laptops were shipped to backers. In a second revision of the laptop, hardware kill switches for the camera, microphone, Wi-Fi, and Bluetooth were added. After the successful launch of the Librem 15, Purism created another campaign on Crowd Supply for a 13-inch laptop called the Librem 13, which also came with hardware kill switches similar to those on the Librem 15v2. The campaign was again successful and the laptops were shipped to customers. Purism announced in December 2016 that it would start shipping from inventory rather than building to order with the new batches of Librem 15 and 13. Purism has one laptop model in production, the Librem 14 (version 1, US$1,370). Comparison of laptops Librem Mini The Librem Mini is a small form factor desktop computer, which began shipping in June 2020. Librem 5 On August 24, 2017, Purism started a crowdfunding campaign for the Librem 5, a smartphone aimed to run 100% free software, which would "[focus] on security by design and privacy protection by default". Purism claimed that the phone would become "the world's first ever IP-native mobile handset, using end-to-end encrypted decentralized communica
https://en.wikipedia.org/wiki/Scanning%20tunneling%20spectroscopy
Scanning tunneling spectroscopy (STS), an extension of scanning tunneling microscopy (STM), is used to provide information about the density of electrons in a sample as a function of their energy. In scanning tunneling microscopy, a metal tip is moved over a conducting sample without making physical contact. A bias voltage applied between the sample and tip allows a current to flow between the two. This is a result of quantum tunneling across a barrier; in this instance, the barrier is the physical distance between the tip and the sample. The scanning tunneling microscope is used to obtain "topographs" - topographic maps - of surfaces. The tip is rastered across a surface and (in constant current mode) a constant current is maintained between the tip and the sample by adjusting the height of the tip. A plot of the tip height at all measurement positions provides the topograph. These topographic images can provide atomically resolved information on metallic and semi-conducting surfaces. However, the scanning tunneling microscope does not measure the physical height of surface features. One example of this limitation is an atom adsorbed onto a surface, which produces some perturbation of the measured height at that point. A detailed analysis of the way in which an image is formed shows that the transmission of the electric current between the tip and the sample depends on two factors: (1) the geometry of the sample and (2) the arrangement of the electrons in the sample. The arrangement of the electrons in the sample is described quantum mechanically by an "electron density". The electron density is a function of both position and energy, and is formally described as the local density of electron states, abbreviated as local density of states (LDOS), which is a function of energy. Spectroscopy, in its most general sense, refers to a measurement of the number of something as a function of energy. For scanning tunneling spectroscopy the scanning tunneling microscope is use
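As an illustrative numerical sketch (not an actual instrument procedure): the differential conductance dI/dV, which under common assumptions is proportional to the LDOS, can be estimated from a sampled I-V curve by central differences.

```python
def didv(voltages, currents):
    """Central-difference estimate of dI/dV at the interior sample points."""
    out = []
    for k in range(1, len(voltages) - 1):
        out.append((currents[k + 1] - currents[k - 1])
                   / (voltages[k + 1] - voltages[k - 1]))
    return out

# Synthetic I(V) = V**3 as a stand-in for measured data:
# the estimated dI/dV should be close to 3*V**2.
vs = [i / 100.0 for i in range(-50, 51)]   # bias voltages from -0.5 to 0.5
Is = [v ** 3 for v in vs]
g = didv(vs, Is)                            # g[k-1] corresponds to vs[k]
```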
https://en.wikipedia.org/wiki/OnlineHPC
OnlineHPC was a free public web service that supplied tools for working with high performance computers, including an online workflow editor. OnlineHPC allowed users to design and execute workflows using the online workflow designer and to work with high performance computers – clusters and clouds. Access to high performance resources was available both directly from the service user interface and from workflow components. The workflow engine of the OnlineHPC service was Taverna, traditionally used for scientific workflow execution in domains such as bioinformatics, cheminformatics, medicine, astronomy, social science, music, and digital preservation. History OnlineHPC was started at the Institute for Information Transmission Problems in 2012 as a project for the institute's researchers whose work needs access to computer clusters but who are not professional programmers. The motivation for the project was the gap between typical researcher skills and the level of competence needed to run high performance computing. There are at least three barriers on the way to HPC: the researcher needs to find an HPC provider and go through procedures to get access; the researcher needs to install and configure numerous low-level software applications and deal with digital certificates; and the researcher needs to become familiar with technologies and tools such as MPI, batch task managers or even web services. The last requirement stops the majority of even the stoutest researchers who passed the first two. The service aimed to reduce these barriers by providing a complete pre-configured set of tools required for working with computer clusters: an in-browser terminal emulator, a file system browser, a credentials manager and a massive task tool. After a while, it became obvious that engineering and scientific tasks require a more elaborate tool suite that enables researchers to execute flows of tasks – workflows. Although there are a number of scientific workflow implementations, they are almost all desktop applicatio
https://en.wikipedia.org/wiki/Apple%20File%20Exchange
Apple File Exchange (AFE) is a utility program for Apple Macintosh computers. It was included on the Apple "Tidbits" or "Install 2" disk in system versions 7.0 through 7.1. In System 7.5 (released in 1994), it was replaced by PC Exchange. Apple File Exchange could read floppy disks from DOS/Windows and ProDOS (Apple II) systems, as well as disks from Macs. This utility enabled Macs to read PC disks, but only if they were inserted after launching Apple File Exchange. If Apple File Exchange was not running when a PC-formatted floppy was inserted, the Mac would complain that the disk inserted "was not a Macintosh disk" and request initialisation. Apple File Exchange was a file content translator, in contrast to the File System Translator of Apple GS/OS, which just translated the file system between different computers' storage formats. AFE could convert data files produced by one program for use in another, e.g. between AppleWorks and ClarisWorks. Bugs A high-density diskette in DOS format formatted as a 720K disk would not function correctly; the Mac assumed that any high-density disk had been formatted as an HD disk. To solve this problem, a user could cover the square hole with a piece of tape opposite the write-protect tab, and re-insert the disk. (This square hole identifies the disk as a high-density disk.) Covering the square hole makes the disk appear to the disk drive as a DD disk.
https://en.wikipedia.org/wiki/Physical%20object
In common usage and classical mechanics, a physical object or physical body (or simply an object or body) is a collection of matter within a defined contiguous boundary in three-dimensional space. The boundary surface must be defined and identified by the properties of the material, although it may change over time. The boundary is usually the visible or tangible surface of the object. The matter in the object is constrained (to a greater or lesser degree) to move as one object. The boundary may move in space relative to other objects that it is not attached to (through translation and rotation). An object's boundary may also deform and change over time in other ways. Also in common usage, an object is not constrained to consist of the same collection of matter. Atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However the laws of physics only apply directly to objects that consist of the same collection of matter. In physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space. Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted. Examples of models of physical bodies include, but are not limited to a particle, several interacting smaller bodies (particulate or otherwise), and continuous media. The common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. In modern physics, "extension" is understood in terms of the spacetime: roughly s
https://en.wikipedia.org/wiki/Double%20switching
Double switching, double cutting, or double breaking is the practice of using a multipole switch to close or open both the positive and negative sides of a DC electrical circuit, or both the hot and neutral sides of an AC circuit. This technique is used to prevent shock hazard in electric devices connected with unpolarised AC power plugs and sockets. Double switching is a crucial safety engineering practice in railway signalling, wherein it is used to ensure that a single false feed of current to a relay is unlikely to cause a wrong-side failure. It is an example of using redundancy to increase safety and reduce the likelihood of failure, analogous to double insulation. Double switching increases the cost and complexity of systems in which it is employed, for example by extra relay contacts and extra relays, so the technique is applied selectively where it can provide a cost-effective safety improvement. Examples Landslip and Washaway Detectors A landslip or washaway detector is buried in the earth embankment, and opens a circuit should a landslide occur. It is not possible to guarantee that the wet earth of the embankment will not complete the circuit which is supposed to break. If the circuit is double cut with positive and negative wires, any wet conductive earth is likely to blow a fuse on the one hand, and short the detecting relay on the other hand, either of which is almost certain to apply the correct warning signal. Accidents Clapham The Clapham Junction rail crash of 1988 was caused in part by the lack of double switching (known as "double cutting" in the British railway industry). The signal relay in question was switched only on the hot side, while the return current came back on an unswitched wire. A loose wire bypassed the contacts by which the train detection relays switched the signal, allowing the signal to show green when in fact there was a stationary train ahead. 35 people were killed in the resultant collision. United Flight 811 A sim
https://en.wikipedia.org/wiki/Metamictisation
Metamictisation (sometimes called metamictization or metamiction) is a natural process resulting in the gradual and ultimately complete destruction of a mineral's crystal structure, leaving the mineral amorphous. The affected material is therefore described as metamict. Certain minerals occasionally contain interstitial impurities of radioactive elements, and it is the alpha radiation emitted from those compounds that is responsible for degrading a mineral's crystal structure through internal bombardment. The effects of metamictisation are extensive: other than negating any birefringence previously present, the process also lowers a mineral's refractive index, hardness, and its specific gravity. The mineral's colour is also affected: metamict specimens are usually green, brown or blackish. Further, metamictisation diffuses the bands of a mineral's absorption spectrum. Curiously and inexplicably, the one attribute which metamictisation does not alter is dispersion. All metamict materials are themselves radioactive, some dangerously so. An example of a metamict mineral is zircon. The presence of uranium and thorium atoms substituting for zirconium in the crystal structure is responsible for the radiation damage in this case. Unaffected specimens are termed high zircon while metamict specimens are termed low zircon. Other minerals known to undergo metamictisation include allanite, gadolinite, ekanite, thorite and titanite. Ekanite is almost invariably found completely metamict as thorium and uranium are part of its essential chemical composition. Metamict minerals can have their crystallinity and properties restored through prolonged annealing. A related phenomenon is the formation of pleochroic halos surrounding minute zircon inclusions within a crystal of biotite or other mineral. The spherical halos are produced by alpha particle radiation from the included uranium- or thorium-bearing species. Such halos can also be found surrounding monazite and other radioacti
https://en.wikipedia.org/wiki/Journal%20of%20Experimental%20Psychopathology
The Journal of Experimental Psychopathology is a continuously published open access journal covering psychopathology. It was established in 2010 and is published by SAGE Publications. It was relaunched as an open access journal in 2018, after it was combined with the pre-existing journal Psychopathology Review. The editor-in-chief is Graham Davey (University of Sussex). According to the Journal Citation Reports, the journal has a 2018 impact factor of 0.812, ranking it 108th out of 130 journals in the category "Psychology, Clinical".
https://en.wikipedia.org/wiki/ModSecurity
ModSecurity, sometimes called Modsec, is an open-source web application firewall (WAF). Originally designed as a module for the Apache HTTP Server, it has evolved to provide an array of Hypertext Transfer Protocol request and response filtering capabilities along with other security features across a number of different platforms including Apache HTTP Server, Microsoft IIS and Nginx. It is free software released under the Apache license 2.0. The platform provides a rule configuration language known as 'SecRules' for real-time monitoring, logging, and filtering of Hypertext Transfer Protocol communications based on user-defined rules. Although not its only configuration, ModSecurity is most commonly deployed to provide protections against generic classes of vulnerabilities using the OWASP ModSecurity Core Rule Set (CRS). This is an open-source set of rules written in ModSecurity's SecRules language. The project is part of OWASP, the Open Web Application Security Project. Several other rule sets are also available. To detect threats, the ModSecurity engine is deployed embedded within the webserver or as a proxy server in front of a web application. This allows the engine to scan incoming and outgoing HTTP communications to the endpoint. Dependent on the rule configuration the engine will decide how communications should be handled which includes the capability to pass, drop, redirect, return a given status code, execute a script, and more. History ModSecurity was first developed by Ivan Ristić, who wrote the module with the end goal of monitoring application traffic on the Apache HTTP Server. The first version was released in November 2002 which supported Apache HTTP Server 1.3.x. Starting in 2004 Ivan created Thinking Stone to continue work on the project full-time. While working on the version 2.0 rewrite Thinking Stone was bought by Breach Security, an American-Israeli security company, in September 2006. Ivan stayed on continuing the development of version 2.
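A minimal, hypothetical SecRules fragment (illustrative only; the rule ID and the blocked pattern are invented for this example, and production deployments normally rely on the CRS rather than hand-written rules) shows the shape of the language: a variable to inspect, an operator, and a list of actions.

```
# Turn the rule engine on and allow inspection of request bodies.
SecRuleEngine On
SecRequestBodyAccess On

# Deny, in phase 2 (request body), any request whose arguments
# contain a classic SQL injection probe.
SecRule ARGS "@rx (?i)union\s+select" \
    "id:100001,phase:2,deny,status:403,log,msg:'SQL injection probe'"
```

Depending on the configured actions, the engine can instead pass, drop, or redirect the transaction, as described above.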
https://en.wikipedia.org/wiki/Motor%20control
Motor control is the regulation of movement in organisms that possess a nervous system. Motor control includes reflexes as well as directed movement. To control movement, the nervous system must integrate multimodal sensory information (both from the external world as well as proprioception) and elicit the necessary signals to recruit muscles to carry out a goal. This pathway spans many disciplines, including multisensory integration, signal processing, coordination, biomechanics, and cognition, and the computational challenges are often discussed under the term sensorimotor control. Successful motor control is crucial to interacting with the world to carry out goals as well as for posture, balance, and stability. Some researchers (mostly neuroscientists studying movement, such as Daniel Wolpert and Randy Flanagan) argue that motor control is the reason brains exist at all. Neural control of muscle force All movements, e.g. touching your nose, require motor neurons to fire action potentials that result in contraction of muscles. In humans, ~150,000 motor neurons control the contraction of ~600 muscles. To produce movements, a subset of 600 muscles must contract in a temporally precise pattern to produce the right force at the right time. Motor units and force production A single motor neuron and the muscle fibers it innervates are called a motor unit. For example, the rectus femoris contains approximately 1 million muscle fibers, which are controlled by around 1000 motor neurons. Activity in the motor neuron causes contraction in all of the innervated muscle fibers so that they function as a unit. Increasing action potential frequency (spike rate) in the motor neuron increases the muscle fiber contraction force, up to the maximal force. The maximal force depends on the contractile properties of the muscle fibers. Within a motor unit, all the muscle fibers are of the same type (e.g. type I (slow twitch) or type II (fast twitch) fibers), and motor units of mult
https://en.wikipedia.org/wiki/Fluhrer%2C%20Mantin%20and%20Shamir%20attack
In cryptography, the Fluhrer, Mantin and Shamir attack is a stream cipher attack on the widely used RC4 stream cipher. The attack allows an attacker to recover the key in an RC4 encrypted stream from a large number of messages in that stream. The Fluhrer, Mantin and Shamir attack applies to specific key derivation methods, but does not apply in general to RC4-based SSL (TLS), since SSL generates the encryption keys it uses for RC4 by hashing, meaning that different SSL sessions have unrelated keys. However, the closely related bar mitzvah attack, based on the same research and revealed in 2015, does exploit those cases where weak keys are generated by the SSL keying process. Background The Fluhrer, Mantin and Shamir (FMS) attack, published in their 2001 paper "Weaknesses in the Key Scheduling Algorithm of RC4", takes advantage of a weakness in the RC4 key scheduling algorithm to reconstruct the key from encrypted messages. The FMS attack gained popularity in network attack tools including AirSnort, weplab, and aircrack, which use it to recover the key used by WEP protected wireless networks. This discussion will use the below RC4 key scheduling algorithm (KSA).

begin ksa(with int keylength, with byte key[keylength])
    for i from 0 to 255
        S[i] := i
    endfor
    j := 0
    for i from 0 to 255
        j := (j + S[i] + key[i mod keylength]) mod 256
        swap(S[i], S[j])
    endfor
end

The following pseudo-random generation algorithm (PRGA) will also be used.

begin prga(with byte S[256])
    i := 0
    j := 0
    while GeneratingOutput:
        i := (i + 1) mod 256
        j := (j + S[i]) mod 256
        swap(S[i], S[j])
        output S[(S[i] + S[j]) mod 256]
    endwhile
end

The attack The basis of the FMS attack lies in the use of weak initialization vectors (IVs) used with RC4. RC4 encrypts one byte at a time with a keystream output from prga(); RC4 uses the key to initialize a state machine via ksa(), and then continuously
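The KSA and PRGA pseudocode translate directly into a short Python sketch (illustrative only; RC4 is broken and must not be used for real encryption):

```python
def ksa(key):
    """RC4 key scheduling: build the initial 256-byte permutation S from the key."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga(S, n):
    """Generate n keystream bytes from the permutation S."""
    S = S[:]          # work on a copy so S can be reused
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Classic test vector: key "Key" yields a keystream starting EB 9F 77.
stream = prga(ksa(b"Key"), 5)
print([hex(b) for b in stream])
```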
https://en.wikipedia.org/wiki/Introgressive%20hybridization%20in%20plants
Introgressive hybridization, also known as introgression, is the flow of genetic material between divergent lineages via repeated backcrossing. In plants, this backcrossing occurs when an F1 generation hybrid breeds with one or both of its parental species. Source of variation Although some genera of plants hybridize and introgress more easily than others, in certain scenarios, external factors may contribute to an increased rate of hybridization. The phenomenon known as Hybridization of the Habitat echoes this idea, explaining that disturbances in a natural habitat can lead species which typically do not hybridize and backcross to do so with relative ease. Plant breeders also manipulate their subjects to hybridize in order to optimize their hardiness, appearance, or whatever desired traits they want to select for. This type of hybridization has been particularly impactful for the production of many crop species, including but not limited to: certain types of rice, corn, wheat, barley, and rye. Natural introgression can occur with many genera and species, but manipulating the gene pool with artificial/forced introgression is useful for homing in on desired characteristics, such as drought tolerance or pest resistance. Background In the early days of hybrid research, it was commonly believed that there was insufficient evidence of hybridization in nature because hybridization would mostly produce sterile or unfit offspring. Through experimentation and improved phylogenetic testing capabilities, we now see that the ability to produce fertile hybrid offspring varies by genus within the plant kingdom. A few examples of species with the capacity to produce fertile hybrids are given below. Examples of natural introgression Irises One of the most significant early studies of plant hybridization involved three species of irises. Although they commonly form crosses where their natural habitats overlap, there is no evidence that Iris fulva, Iris hexagona, or Iris br
https://en.wikipedia.org/wiki/Kreiss%20matrix%20theorem
In matrix analysis, the Kreiss matrix theorem relates the so-called Kreiss constant of a matrix with the power iterates of this matrix. It was originally introduced by Heinz-Otto Kreiss to analyze the stability of finite difference methods for partial differential equations. Kreiss constant of a matrix Given a matrix A, the Kreiss constant 𝒦(A) of A (with respect to the closed unit circle) is defined as

𝒦(A) = sup_{|z| > 1} (|z| − 1) ‖(zI − A)^−1‖,

while the Kreiss constant 𝒦_lhp(A) with respect to the left-half plane is given by

𝒦_lhp(A) = sup_{Re(z) > 0} Re(z) ‖(zI − A)^−1‖.

Properties For any matrix A, one has that 𝒦(A) ≥ 1 and 𝒦_lhp(A) ≥ 1. In particular, 𝒦(A) (resp. 𝒦_lhp(A)) is finite only if the matrix A is Schur stable (resp. Hurwitz stable). The Kreiss constant can be interpreted as a measure of normality of a matrix. In particular, for normal matrices A with spectral radius less than 1, one has that 𝒦(A) = 1. Similarly, for normal matrices A that are Hurwitz stable, 𝒦_lhp(A) = 1. 𝒦(A) and 𝒦_lhp(A) have alternative definitions through the ε-pseudospectrum Λ_ε(A):

𝒦(A) = sup_{ε > 0} (ρ_ε(A) − 1)/ε, where ρ_ε(A) = max{|λ| : λ ∈ Λ_ε(A)},
𝒦_lhp(A) = sup_{ε > 0} α_ε(A)/ε, where α_ε(A) = max{Re λ : λ ∈ Λ_ε(A)}.

𝒦(A) can be computed through robust control methods. Statement of Kreiss matrix theorem Let A be a square matrix of order n and e be Euler's number. The modern and sharp version of the Kreiss matrix theorem states that

𝒦(A) ≤ sup_{k ≥ 0} ‖A^k‖ ≤ e·n·𝒦(A).

The inequality is tight and follows from the application of Spijker's lemma. There also exists an analogous result in terms of the Kreiss constant with respect to the left-half plane and the matrix exponential:

𝒦_lhp(A) ≤ sup_{t ≥ 0} ‖e^{tA}‖ ≤ e·n·𝒦_lhp(A).

Consequences and applications The value sup_{k ≥ 0} ‖A^k‖ (respectively, sup_{t ≥ 0} ‖e^{tA}‖) can be interpreted as the maximum transient growth of the discrete-time system x_{k+1} = A x_k (respectively, the continuous-time system x′(t) = A x(t)). Thus, the Kreiss matrix theorem gives both upper and lower bounds on the transient behavior of the system with dynamics given by the matrix A: a large (and finite) Kreiss constant indicates that the system will have an accentuated transient phase before decaying to zero.
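For a diagonal (hence normal) Schur-stable matrix, the Kreiss constant can be estimated by brute-force sampling of the resolvent, since for diagonal A the 2-norm of the resolvent is simply the maximum over eigenvalues of 1/|z − λ|. This is an illustrative sketch, not a robust-control computation; for this normal matrix the exact Kreiss constant is 1, and the sampled value approaches it from below.

```python
import math

def kreiss_diag(eigs, radii, angles):
    """Sample sup over |z| > 1 of (|z| - 1) * ||(zI - A)^-1|| for diagonal A."""
    best = 0.0
    for r in radii:
        for k in range(angles):
            theta = 2 * math.pi * k / angles
            z = complex(r * math.cos(theta), r * math.sin(theta))
            resolvent_norm = max(1.0 / abs(z - lam) for lam in eigs)
            best = max(best, (abs(z) - 1.0) * resolvent_norm)
    return best

eigs = [0.9, 0.5]                                 # spectral radius < 1: Schur stable
radii = [1.0 + 0.05 * j for j in range(1, 400)]   # sample |z| from 1.05 up to ~21
K = kreiss_diag(eigs, radii, 64)

# Normal and Schur stable, so the true Kreiss constant is 1; the sampled K
# is consistent with the theorem K <= sup_k ||A^k|| <= e * n * K, since
# sup_k ||A^k|| = 1 here (attained at k = 0).
```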
https://en.wikipedia.org/wiki/Severity%20of%20Alcohol%20Dependence%20Questionnaire
The Severity of Alcohol Dependence Questionnaire (SADQ or SAD-Q) is a 20 item clinical screening tool designed to measure the presence and level of alcohol dependence. It is divided into five sections: Physical withdrawal symptoms Affective withdrawal symptoms Craving and relief drinking Typical daily consumption Reinstatement of dependence after a period of abstinence. Each item is scored on a 4-point scale, giving a possible range of 0 to 60. A score of over 30 indicates severe alcohol dependence. Some local clinical guidelines use the SADQ to predict the levels of medication needed during alcohol detoxification. See also Alcoholism Substance abuse AUDIT Questionnaire CAGE Questionnaire CRAFFT Screening Test Paddington Alcohol Test List of diagnostic classification and rating scales used in psychiatry
https://en.wikipedia.org/wiki/Photofermentation
Photofermentation is the fermentative conversion of organic substrate to biohydrogen manifested by a diverse group of photosynthetic bacteria by a series of biochemical reactions involving three steps similar to anaerobic conversion. Photofermentation differs from dark fermentation because it only proceeds in the presence of light. For example, photo-fermentation with Rhodobacter sphaeroides SH2C (or many other purple non-sulfur bacteria) can be employed to convert small molecular fatty acids into hydrogen and other products. Light-dependent pathways Phototrophic bacteria Phototrophic bacteria produce hydrogen gas via photofermentation, where the hydrogen is sourced from organic compounds.

C6H12O6 + 6 H2O → 6 CO2 + 12 H2 (under light, hν)

Photolytic producers Photolytic producers are similar to phototrophs, but source hydrogen from water molecules that are broken down as the organism interacts with light. Photolytic producers consist of algae and certain photosynthetic bacteria.

12 H2O → 12 H2 + 6 O2 (algae, under light, hν)
CO + H2O → H2 + CO2 (photolytic bacteria, under light, hν)

Sustainable energy production Photofermentation via purple nonsulfur producing bacteria has been explored as a method for the production of biofuel. The natural fermentation product of these bacteria, hydrogen gas, can be harnessed as a natural gas energy source. Photofermentation via algae instead of bacteria is used for bioethanol production, among other liquid fuel alternatives. Mechanism The bacteria and their energy source are held in a bioreactor chamber that is impermeable to air and oxygen free. The proper temperature for the bacterial species is maintained in the bioreactor. The bacteria are sustained with a carbohydrate diet consisting of simple saccharide molecules. The carbohydrates are typically sourced from agricultural or forestry waste. Variations In addition to wild type forms of Rhodopseudomonas palustris, scientists have used genetically modified forms to produce hydrogen as well. Other explor
https://en.wikipedia.org/wiki/YARA
YARA is the name of a tool primarily used in malware research and detection. It provides a rule-based approach to create descriptions of malware families based on textual or binary patterns, including regular expressions. Each description is essentially a YARA rule, consisting of a set of strings and a boolean expression which determines its logic. History YARA was originally developed by Victor Alvarez of VirusTotal, and released on GitHub in 2013. The name is an abbreviation of YARA: Another Recursive Acronym or Yet Another Ridiculous Acronym. Design YARA by default comes with modules to process PE and ELF files, as well as support for the open-source Cuckoo sandbox. See also Sigma Snort
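A minimal, hypothetical rule (the name and all patterns are invented for illustration, not a real malware signature) shows the structure: an optional metadata section, a set of named strings, and a boolean condition over them.

```
rule Example_Dropper
{
    meta:
        description = "Illustrative rule, not a real signature"
    strings:
        $text  = "connect-back" ascii nocase     // text pattern
        $bytes = { 6A 40 68 00 30 00 00 }        // hex byte pattern
        $re    = /payload[0-9]{2}/               // regular expression
    condition:
        ($text and $bytes) or $re
}
```

A file matches the rule when its condition evaluates to true over the strings found in that file.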
https://en.wikipedia.org/wiki/Wildcard%20mask
A wildcard mask is a mask of bits that indicates which parts of an IP address are available for examination. In the Cisco IOS, they are used in several places, for example: To indicate the size of a network or subnet for some routing protocols, such as OSPF. To indicate what IP addresses should be permitted or denied in access control lists (ACLs). A wildcard mask can be thought of as an inverted subnet mask. For example, a subnet mask of 255.255.255.0 (11111111.11111111.11111111.00000000) inverts to a wildcard mask of 0.0.0.255 (00000000.00000000.00000000.11111111). A wildcard mask is a matching rule. The rule for a wildcard mask is: 0 means that the equivalent bit must match 1 means that the equivalent bit does not matter Any wildcard bit-pattern can be masked for examination. For example, a wildcard mask of 0.0.0.254 (00000000.00000000.00000000.11111110) applied to IP address 10.10.10.2 (00001010.00001010.00001010.00000010) will match even-numbered IP addresses 10.10.10.0, 10.10.10.2, 10.10.10.4, 10.10.10.6 etc. The same mask applied to 10.10.10.1 (00001010.00001010.00001010.00000001) will match odd-numbered IP addresses 10.10.10.1, 10.10.10.3, 10.10.10.5 etc. A network and wildcard mask combination of 1.1.1.1 0.0.0.0 would match an interface configured exactly with 1.1.1.1 only, and nothing else. Wildcard masks are used in situations where subnet masks may not apply. For example, when two affected hosts fall in different subnets, the use of a wildcard mask will group them together.
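The matching rule above ("0 must match, 1 does not matter") can be sketched as a bitwise test: an address matches when it agrees with the network address on every bit where the wildcard mask is 0. This is a minimal illustration, not Cisco's implementation; the function name is invented.

```python
def wildcard_match(address: str, network: str, wildcard: str) -> bool:
    """Return True if `address` matches `network` under `wildcard`.

    Rule: a 0 bit in the wildcard mask means the corresponding address
    bit must match the network; a 1 bit means the bit is ignored.
    """
    def to_int(dotted: str) -> int:
        a, b, c, d = (int(x) for x in dotted.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    ip, net, wild = to_int(address), to_int(network), to_int(wildcard)
    # Bits where the wildcard is 0 must be identical in ip and net.
    return (ip ^ net) & ~wild & 0xFFFFFFFF == 0

# Wildcard 0.0.0.254 matches the even-numbered addresses from the example:
assert wildcard_match("10.10.10.4", "10.10.10.0", "0.0.0.254")
assert not wildcard_match("10.10.10.5", "10.10.10.0", "0.0.0.254")
# Wildcard 0.0.0.0 matches exactly one address:
assert wildcard_match("1.1.1.1", "1.1.1.1", "0.0.0.0")
assert not wildcard_match("1.1.1.2", "1.1.1.1", "0.0.0.0")
```

A subnet mask inverts into a wildcard mask precisely because the two express the same "must match" bits with opposite polarity.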
https://en.wikipedia.org/wiki/Bryson%20of%20Heraclea
Bryson of Heraclea (Βρύσων, gen.: Βρύσωνος; fl. late 5th century BCE) was an ancient Greek mathematician and sophist who studied the problems of squaring the circle and calculating pi. Life and work Little is known about the life of Bryson; he came from Heraclea Pontica, and he may have been a pupil of Socrates. He is mentioned in the 13th Platonic Epistle, and Theopompus even claimed in his Attack upon Plato that Plato stole many ideas for his dialogues from Bryson of Heraclea. He is known principally from Aristotle, who criticizes his method of squaring the circle. He also upset Aristotle by asserting that obscene language does not exist. Diogenes Laërtius and the Suda refer several times to a Bryson as a teacher of various philosophers, but since some of the philosophers mentioned lived in the late 4th century BCE, it is possible that Bryson became confused with Bryson of Achaea, who may have lived around that time. Pi and squaring the circle Bryson, along with his contemporary Antiphon, was the first to inscribe a polygon inside a circle, find the polygon's area, double the number of sides of the polygon, and repeat the process, resulting in a lower-bound approximation of the area of a circle. "Sooner or later (they figured), ...[there would be] so many sides that the polygon ...[would] be a circle." Bryson later followed the same procedure for polygons circumscribing a circle, resulting in an upper-bound approximation of the area of a circle. With these calculations, Bryson was able to approximate π and further place lower and upper bounds on π's true value. But due to the complexity of the method, he appears to have made little progress. Aristotle criticized this method, but Archimedes would later use a method similar to that of Bryson and Antiphon to calculate π; however, Archimedes calculated the perimeter of a polygon instead of the area. Robert Kilwardby on Bryson's syllogism The 13th-century English philosopher Robert Kilwardby described Bryson'
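Bryson's two-sided squeeze can be sketched in modern terms: for a unit circle, the area of an inscribed regular n-gon is (n/2)·sin(2π/n) and that of a circumscribed one is n·tan(π/n), giving lower and upper bounds on π that tighten as the number of sides doubles. The function name below is our own; the closed-form area formulas are standard trigonometry, not anything Bryson could have stated.

```python
import math

def polygon_area_bounds(n: int) -> tuple[float, float]:
    """Areas of regular n-gons inscribed in / circumscribed about a unit circle.

    The circle's area (pi for radius 1) lies strictly between the two,
    so they bound pi from below and above.
    """
    inscribed = (n / 2) * math.sin(2 * math.pi / n)   # lower bound on pi
    circumscribed = n * math.tan(math.pi / n)         # upper bound on pi
    return inscribed, circumscribed

# Doubling the sides tightens the bounds, as in Bryson's procedure:
for sides in (6, 12, 24, 48, 96):
    lo, hi = polygon_area_bounds(sides)
    assert lo < math.pi < hi

lo, hi = polygon_area_bounds(96)   # the 96-gon Archimedes later used
assert hi - lo < 0.01              # pi is pinned down to within 0.01
```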
https://en.wikipedia.org/wiki/Transcription%20translation%20feedback%20loop
Transcription-translation feedback loop (TTFL) is a cellular model for explaining circadian rhythms in behavior and physiology. Widely conserved across species, the TTFL is auto-regulatory: transcription of clock genes is regulated by their own protein products. Discovery Circadian rhythms have been documented for centuries. For example, French astronomer Jean-Jacques d’Ortous de Mairan noted the periodic 24-hour movement of Mimosa plant leaves as early as 1729. However, science has only recently begun to uncover the cellular mechanisms responsible for driving observed circadian rhythms. The cellular basis of circadian rhythms is supported by the fact that rhythms have been observed in single-celled organisms. Beginning in the 1970s, experiments conducted by Ron Konopka and colleagues, in which forward genetic methods were used to induce mutation, revealed that Drosophila melanogaster specimens with altered period (Per) genes also demonstrated altered periodicity. As genetic and molecular biology experimental tools improved, researchers further identified genes involved in sustaining normal rhythmic behavior, giving rise to the concept that internal rhythms are modified by a small subset of core clock genes. Hardin and colleagues (1990) were the first to propose that the mechanism driving these rhythms was a negative feedback loop. Subsequent major discoveries confirmed this model; notably, experiments led by Thomas K. Darlington and Nicholas Gekakis in the late 1990s identified clock proteins and characterized their mechanisms in Drosophila and mice, respectively. These experiments gave rise to the transcription-translation feedback loop (TTFL) model that has now become the dominant paradigm for explaining circadian behavior in a wide array of species. General mechanisms of TTFL The TTFL is a negative feedback loop, in which clock genes are regulated by their protein products. Generally, the TTFL involves two main arms: positive regulatory elements th
https://en.wikipedia.org/wiki/Mason%E2%80%93Stothers%20theorem
The Mason–Stothers theorem, or simply Mason's theorem, is a mathematical theorem about polynomials, analogous to the abc conjecture for integers. It is named after Walter Wilson Stothers, who published it in 1981, and R. C. Mason, who rediscovered it shortly thereafter. The theorem states: Let , , and be relatively prime polynomials over a field such that and such that not all of them have vanishing derivative. Then Here is the product of the distinct irreducible factors of . For algebraically closed fields it is the polynomial of minimum degree that has the same roots as ; in this case gives the number of distinct roots of . Examples Over fields of characteristic 0 the condition that , , and do not all have vanishing derivative is equivalent to the condition that they are not all constant. Over fields of characteristic it is not enough to assume that they are not all constant. For example, considered as polynomials over some field of characteristic p, the identity gives an example where the maximum degree of the three polynomials ( and as the summands on the left hand side, and as the right hand side) is , but the degree of the radical is only . Taking and gives an example where equality holds in the Mason–Stothers theorem, showing that the inequality is in some sense the best possible. A corollary of the Mason–Stothers theorem is the analog of Fermat's Last Theorem for function fields: if for , , relatively prime polynomials over a field of characteristic not dividing and then either at least one of , , or is 0 or they are all constant. Proof gave the following elementary proof of the Mason–Stothers theorem. Step 1. The condition implies that the Wronskians , , and are all equal. Write for their common value. Step 2. The condition that at least one of the derivatives , , or is nonzero and that , , and are coprime is used to show that is nonzero. For example, if then so divides (as and are coprime) so (as unless is constant).
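The inline formulas in the passage above were lost in extraction. In its standard form (reconstructed here as an assumption, not recovered from this text), the statement and the equality example read:

```latex
% Mason--Stothers theorem (standard form): let a, b, c be relatively prime
% polynomials over a field with a + b + c = 0, not all of vanishing
% derivative. Then
\max\{\deg a,\ \deg b,\ \deg c\} \;\le\; \deg\bigl(\operatorname{rad}(abc)\bigr) - 1,
% where rad(f) is the product of the distinct irreducible factors of f.
%
% Equality example: take a = x^2, b = 1 - x^2, c = -1. Then a + b + c = 0,
% the polynomials are pairwise coprime, the maximum degree is 2, and
% rad(abc) = x(x - 1)(x + 1) has degree 3, so 2 = 3 - 1 with equality.
```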
https://en.wikipedia.org/wiki/End%20of%20interrupt
An end of interrupt (EOI) is a computing signal sent to a programmable interrupt controller (PIC) to indicate the completion of interrupt processing for a given interrupt. Interrupts are hardware signals sent to the processor that temporarily stop a running program and allow a special program, an interrupt handler, to run instead. An EOI is used to cause a PIC to clear the corresponding bit in the in-service register (ISR), and thus allow more interrupt requests (IRQs) of equal or lower priority to be generated by the PIC. EOIs may indicate the interrupt vector implicitly or explicitly. An explicit EOI names the vector to retire, whereas an implicit EOI will typically retire a vector as indicated by the PIC's priority scheme, for example the highest-priority vector in the ISR. Also, EOIs may be sent at the end of interrupt processing by an interrupt handler, or the PIC may be configured to auto-EOI at the start of the interrupt handler. See also Intel 8259 – notable PIC from Intel Advanced Programmable Interrupt Controller (APIC) OpenPIC and IBM MPIC Inter-processor interrupt (IPI) Interrupt latency Non-maskable interrupt (NMI) IRQL (Windows)
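The ISR-clearing behavior described above can be sketched with a toy software model. This is not real hardware I/O and not the 8259's register interface; it only models the abstract difference between an explicit (specific) EOI and an implicit (non-specific) one under a fixed-priority scheme where a lower IRQ number means higher priority.

```python
class ToyPIC:
    """Toy model of a PIC's in-service register (ISR).

    Bit n of `isr` is set while IRQ n is being serviced. Lower bit
    number = higher priority (a common fixed-priority scheme).
    Purely illustrative; no real port I/O is performed.
    """
    def __init__(self) -> None:
        self.isr = 0

    def begin_service(self, irq: int) -> None:
        self.isr |= 1 << irq

    def eoi_specific(self, irq: int) -> None:
        # Explicit EOI: the handler names the vector to retire.
        self.isr &= ~(1 << irq)

    def eoi_nonspecific(self) -> None:
        # Implicit EOI: retire the highest-priority in-service vector,
        # i.e. clear the lowest-numbered set bit.
        if self.isr:
            self.isr &= self.isr - 1  # clears the lowest set bit

pic = ToyPIC()
pic.begin_service(3)       # IRQ 3 handler preempted by nothing yet
pic.begin_service(5)
pic.eoi_nonspecific()      # retires IRQ 3, the higher-priority one
assert pic.isr == 1 << 5
pic.eoi_specific(5)        # explicit EOI for the remaining vector
assert pic.isr == 0
```

Until the EOI arrives, the set ISR bit blocks further interrupts of equal or lower priority, which is exactly why a handler that forgets its EOI hangs lower-priority devices.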
https://en.wikipedia.org/wiki/AppNexus
Xandr, formerly known as AppNexus, is an American multinational technology company operating a cloud-based software platform that enables and optimizes programmatic online advertising. Headquartered in the Flatiron District of New York City, the company has 23 offices in North America, Latin America, Europe, Asia and Australia. AppNexus offers online auction infrastructure and technology for data management, optimization, financial clearing and support for directly negotiated advertising campaigns. It has both demand-side platform (DSP), supply-side platform (SSP), and ad serving functionalities. It integrates with advertising sources including Google's "Authorized Buyers" ad exchange, Magnite, Pubmatic and other aggregators. It operates out of multiple data centers, including one in Amsterdam serving Europe and the Middle East, in a facility shared with Equinix. In 2016, AppNexus was ranked #21 on Forbes Magazine's "The Cloud 100" list. In June 2018, AT&T announced it was acquiring the company and putting it under its Xandr division as a subsidiary. AppNexus was reportedly sold for $1.6 billion, while most news outlets speculated the company did not sell for less than $2 billion. In October 2019, Xandr purchased Clypd, a privately-held technology company focused on enabling programmatic buying of linear television advertising. In December 2021, AT&T announced that they had agreed to sell Xandr to Microsoft for an undisclosed price, subject to customary closing conditions, including regulatory reviews. Founders and financing AppNexus was founded by former Right Media staff, CTO Brian O'Kelley, and Mike Nolet, product manager and director of analytics, with Michael Rubenstein, a former vice president and general manager at Google DoubleClick, who joined AppNexus as president in September 2009. The company was financially backed by Microsoft, Khosla Ventures, First Round Capital, Venrock, Kodiak Venture Partners, Marc Andreessen, Ben Horowitz, and Ron Conway; as
https://en.wikipedia.org/wiki/Face%20and%20neck%20development%20of%20the%20human%20embryo
The face and neck development of the human embryo refers to the development of the structures from the third to eighth week that give rise to the future head and neck. They consist of three layers, the ectoderm, mesoderm and endoderm, which form the mesenchyme (derived from the lateral plate mesoderm and paraxial mesoderm), neural crest and neural placodes (from the ectoderm). The paraxial mesoderm forms structures named somites and somitomeres that contribute to the development of the floor of the brain and voluntary muscles of the craniofacial region. The lateral plate mesoderm gives rise to the laryngeal cartilages (arytenoid and cricoid). The three tissue layers give rise to the pharyngeal apparatus, formed by six pairs of pharyngeal arches, a set of pharyngeal pouches and pharyngeal grooves, which are the most typical feature in development of the head and neck. The formation of each region of the face and neck is due to the migration of the neural crest cells which come from the ectoderm. These cells determine the future structure to develop in each pharyngeal arch. Eventually, they also form the neurectoderm, which forms the forebrain, midbrain and hindbrain, cartilage, bone, dentin, tendon, dermis, pia mater and arachnoid mater, sensory neurons, and glandular stroma. Pharyngeal arches Pharyngeal arches are formed during the fourth week. Each arch consists of mesenchymal tissue covered on the outside by ectoderm and on the inside by epithelium of endodermal origin. In human embryology, there are six arches which are separated by pharyngeal grooves externally and pharyngeal pouches internally. These arches contribute to the physical appearance of the embryo because they are the main components that build the face and neck. In addition, the muscular components of each arch have their own cranial nerve, and wherever the muscle cells migrate, they carry their nerve component with them. Plus, each arch has its own arterial component. When neural cells migrate to
https://en.wikipedia.org/wiki/Multichannel%20Television%20Sound
Multichannel Television Sound, better known as MTS, is the method of encoding three additional audio channels into analog 4.5 MHz audio carriers on System M and System N. It was developed by the Broadcast Television Systems Committee, an industry group, and sometimes known as BTSC as a result. MTS worked by adding additional audio signals in otherwise empty portions of the television signal. MTS allowed up to a total of four audio channels. Normally two were broadcast to produce the left and right stereo channels. An additional second audio program (SAP), could be used to broadcast other languages or entirely different audio like weather alerts that could be accessed by the user, typically through a button on their remote control. The fourth channel, PRO, was only used by the broadcasters. History Initial work on design and testing of a stereophonic audio system began in 1975 when Telesonics approached Chicago public television station WTTW. WTTW was producing a music show titled Soundstage at that time, and was simulcasting the stereo audio mix on local FM stations. Telesonics offered a way to send the same stereo audio over the existing television signals, thereby removing the need for the FM simulcast. Telesonics and WTTW formed a working relationship and began developing the system which was similar to FM stereo modulation. Twelve WTTW studio and transmitter engineers added the needed broadcast experience to the relationship. The Telesonics system was tested and refined using the WTTW transmitter facilities on the Sears Tower. In 1979, WTTW had installed a stereo Grass Valley master control switcher and had added a second audio channel to the microwave STL (Studio Transmitter Link). By that time, WTTW engineers had further developed stereo audio on videotape recorders in their plant, using split audio track heads manufactured to their specifications, outboard record electronics, and Dolby noise reduction that allowed Soundstage to be recorded and electronic
https://en.wikipedia.org/wiki/Dead%20on%20arrival
Dead on arrival (DOA), also dead in the field and brought in dead (BID), are terms which indicate that a patient was found to be already clinically dead upon the arrival of professional medical assistance, often in the form of first responders such as emergency medical technicians, paramedics, firefighters, or police. In some jurisdictions, first responders must consult verbally with a physician before officially pronouncing a patient deceased, but once cardiopulmonary resuscitation (CPR) is initiated, it must be continued until a physician can pronounce the patient dead. Dead on arrival can also mean that a person is said by a doctor to be dead upon their arrival at a hospital, emergency room, clinic, or ward. A person can be pronounced dead on arrival if cardiopulmonary resuscitation or mouth-to-mouth resuscitation is found to be futile. Medical DOA When presented with a patient, medical professionals are required to perform cardiopulmonary resuscitation (CPR) unless specific conditions are met that allow them to pronounce the patient as deceased. In most places, these are examples of such criteria: Injuries not compatible with life. These include but are not necessarily limited to decapitation, catastrophic brain trauma, incineration, severing of the body, or injuries that do not permit effective administration of CPR. If a patient has sustained such injuries, it should be intuitively obvious that the patient is non-viable. Rigor mortis, indicating that the patient has been dead for at least a few hours. Rigor mortis can sometimes be difficult to determine, so it is often reported along with other determining factors. Obvious decomposition Livor mortis (lividity), indicating that the body has been pulseless and in the same position long enough for blood to sink and collect within the body, creating purplish discolorations at the lowest points of the body (with respect to gravity) Stillbirth. If it can be determined without a doubt that an infant died pri
https://en.wikipedia.org/wiki/One-hot
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold. In statistics, dummy variables represent a similar technique for representing categorical data. Applications Digital circuitry One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder as the state machine is in the nth state if, and only if, the nth bit is high. A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip flops chained in series with the Q output of each flip flop connected to the D input of the next and the D input of the first flip flop connected to the Q output of the 15th flip flop. The first flip flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip flop, which represents the last state. Upon reset of the state machine all of the flip flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip flops advances the one 'hot' bit to the second flip flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state. An address decoder converts from binary to one-hot representation. A priority encoder converts from one-hot representation to binary. Comparison with other encoding methods Advantages Determining the state has a low and constant cost of accessing one flip-flop Changing the state has the constant cost of accessing two flip-flops Easy to design and modify Easy to detect illegal states Takes advantage of an FPGA's abundant flip-flops Using a one-hot implementation typically allows a sta
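The "state n iff bit n is high" property, and the illegal-state detection mentioned under advantages, can be sketched in a few lines. The function names are our own, for illustration:

```python
def one_hot(index: int, width: int) -> list[int]:
    """One-hot code word of `width` bits with only bit `index` high."""
    if not 0 <= index < width:
        raise ValueError("index out of range")
    return [1 if i == index else 0 for i in range(width)]

def one_hot_state(word: list[int]) -> int:
    """Decode: the machine is in state n iff bit n is the single high bit.

    Anything other than exactly one high bit is an illegal state, which
    one-hot encoding makes trivial to detect.
    """
    if word.count(1) != 1 or any(b not in (0, 1) for b in word):
        raise ValueError("illegal state: not exactly one hot bit")
    return word.index(1)

assert one_hot(2, 5) == [0, 0, 1, 0, 0]
assert one_hot_state([0, 0, 1, 0, 0]) == 2

# A ring counter advance is a one-bit rotation of the hot bit:
state = one_hot(0, 4)
state = state[-1:] + state[:-1]   # clock edge: hot bit moves to bit 1
assert one_hot_state(state) == 1
```

Compare this with a binary-encoded machine, where recovering the state index requires a decoder rather than a single-bit test.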
https://en.wikipedia.org/wiki/NetFPGA
The NetFPGA project is an effort to develop open-source hardware and software for rapid prototyping of computer network devices. The project targeted academic researchers, industry users, and students. It was not the first platform of its kind in the networking community. NetFPGA used an FPGA-based approach to prototyping networking devices. This allows users to develop designs that are able to process packets at line-rate, a capability generally unafforded by software based approaches. NetFPGA focused on supporting developers that can share and build on each other's projects and IP building blocks. History The project began in 2007 as a research project at Stanford University called the NetFPGA-1G. The 1G was originally designed as a tool to teach students about networking hardware architecture and design. The 1G platform consisted of a PCI board with a Xilinx Virtex-II pro FPGA and 4 x 1GigE interfaces feeding into it, along with a downloadable code repository containing an IP library and a few example designs. The project grew and by the end of 2010 more than 1,800 1G boards sold to over 150 educational institutions spanning 15 countries. During that growth the 1G not only gained popularity as a tool for education, but increasingly as a tool for research. By 2011 over 46 academic papers had been published regarding research that used the NetFPGA-1G platform. Additionally, over 40 projects were contributed to the 1G code repository by the end of 2010. In 2009 work began in secrecy on the NetFPGA-10G with 4 x 10 GigE interfaces. The 10G board was also designed with a much larger FPGA, more memory, and a number of other upgrades. The first release of the platform, codenamed “Howth”, was planned for December 24, 2010, and includes a repository similar to that of the 1G, containing a small IP library and two reference designs. From a platform design perspective, the 10G is diverging in a few significant ways from the 1G platform. For instance, the interface standa
https://en.wikipedia.org/wiki/Photosynthesis%20system
Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle. How photosynthesis systems function Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where CO2 is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of CO2 induces more atmospheric CO2 to diffuse through stomata into the air spaces of the leaf. While stomata are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of CO2 and water vapor that is measured as a proxy of photosynthetic rate. The basic components of a photosynthesis system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate a miniature disposable compressed gas cylinder and gas supply pipes. This is because external air has natural fluctuations in CO2 and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the CO2 and water vapour by passage over soda lime and Drierite, then add CO2 at a controlled rate to give a stable CO2 concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured. The leaf to be analysed is placed in the leaf chamber. The CO2 concentration is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. CO2 in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the CO2 concentration. Modern IRGAs take account of the fact that water vapour absorbs energy at similar wavelengths as CO2. Modern IRG
https://en.wikipedia.org/wiki/Q-Pochhammer%20symbol
In mathematical area of combinatorics, the q-Pochhammer symbol, also called the q-shifted factorial, is the product with It is a q-analog of the Pochhammer symbol , in the sense that The q-Pochhammer symbol is a major building block in the construction of q-analogs; for instance, in the theory of basic hypergeometric series, it plays the role that the ordinary Pochhammer symbol plays in the theory of generalized hypergeometric series. Unlike the ordinary Pochhammer symbol, the q-Pochhammer symbol can be extended to an infinite product: This is an analytic function of q in the interior of the unit disk, and can also be considered as a formal power series in q. The special case is known as Euler's function, and is important in combinatorics, number theory, and the theory of modular forms. Identities The finite product can be expressed in terms of the infinite product: which extends the definition to negative integers n. Thus, for nonnegative n, one has and Alternatively, which is useful for some of the generating functions of partition functions. The q-Pochhammer symbol is the subject of a number of q-series identities, particularly the infinite series expansions and which are both special cases of the q-binomial theorem: Fridrikh Karpelevich found the following identity (see for the proof): Combinatorial interpretation The q-Pochhammer symbol is closely related to the enumerative combinatorics of partitions. The coefficient of in is the number of partitions of m into at most n parts. Since, by conjugation of partitions, this is the same as the number of partitions of m into parts of size at most n, by identification of generating series we obtain the identity as in the above section. We also have that the coefficient of in is the number of partitions of m into n or n-1 distinct parts. By removing a triangular partition with n − 1 parts from such a partition, we are left with an arbitrary partition with at most n parts. This gives a weight-p
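The formulas in the passage above were stripped in extraction; the standard finite definition assumed here is (a; q)_n = ∏_{k=0}^{n-1} (1 − a·q^k), with the empty product (a; q)_0 = 1. A direct numerical sketch:

```python
def q_pochhammer(a: float, q: float, n: int) -> float:
    """Finite q-Pochhammer symbol (a; q)_n = prod_{k=0}^{n-1} (1 - a * q**k)."""
    result = 1.0
    for k in range(n):
        result *= 1.0 - a * q**k
    return result

# (q; q)_3 at q = 0.5: (1 - 0.5)(1 - 0.25)(1 - 0.125) = 0.328125
assert abs(q_pochhammer(0.5, 0.5, 3) - 0.328125) < 1e-12

# (a; q)_0 is the empty product:
assert q_pochhammer(0.3, 0.7, 0) == 1.0

# For |q| < 1 the products converge, approximating Euler's function
# (q; q)_infinity when a = q; 200 factors suffice at q = 0.5:
approx_inf = q_pochhammer(0.5, 0.5, 200)
assert abs(approx_inf - q_pochhammer(0.5, 0.5, 60)) < 1e-12
```

The truncation at 200 factors as a stand-in for the infinite product is a numerical convenience, valid only inside the unit disk where the product converges.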
https://en.wikipedia.org/wiki/Amylase
An amylase () is an enzyme that catalyses the hydrolysis of starch (Latin ) into sugars. Amylase is present in the saliva of humans and some other mammals, where it begins the chemical process of digestion. Foods that contain large amounts of starch but little sugar, such as rice and potatoes, may acquire a slightly sweet taste as they are chewed because amylase degrades some of their starch into sugar. The pancreas and salivary gland make amylase (alpha amylase) to hydrolyse dietary starch into disaccharides and trisaccharides which are converted by other enzymes to glucose to supply the body with energy. Plants and some bacteria also produce amylase. Specific amylase proteins are designated by different Greek letters. All amylases are glycoside hydrolases and act on α-1,4-glycosidic bonds. Classification α-Amylase The α-amylases () (CAS 9014-71-5) (alternative names: 1,4-α-D-glucan glucanohydrolase; glycogenase) are calcium metalloenzymes. By acting at random locations along the starch chain, α-amylase breaks down long-chain saccharides, ultimately yielding either maltotriose and maltose from amylose, or maltose, glucose and "limit dextrin" from amylopectin. They belong to glycoside hydrolase family 13 (https://www.cazypedia.org/index.php/Glycoside_Hydrolase_Family_13). Because it can act anywhere on the substrate, α-amylase tends to be faster-acting than β-amylase. In animals, it is a major digestive enzyme, and its optimum pH is 6.7–7.0. In human physiology, both the salivary and pancreatic amylases are α-amylases. The α-amylase form is also found in plants, fungi (ascomycetes and basidiomycetes) and bacteria (Bacillus). β-Amylase Another form of amylase, β-amylase () (alternative names: 1,4-α-D-glucan maltohydrolase; glycogenase; saccharogen amylase) is also synthesized by bacteria, fungi, and plants. Working from the non-reducing end, β-amylase catalyzes the hydrolysis of the second α-1,4 glycosidic bond, cleaving off two glucose units (maltose) at
https://en.wikipedia.org/wiki/Doob%20decomposition%20theorem
In the theory of stochastic processes in discrete time, a part of the mathematical theory of probability, the Doob decomposition theorem gives a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process (or "drift") starting at zero. The theorem was proved by and is named for Joseph L. Doob. The analogous theorem in the continuous-time case is the Doob–Meyer decomposition theorem. Statement Let be a probability space, with or a finite or an infinite index set, a filtration of , and an adapted stochastic process with for all . Then there exist a martingale and an integrable predictable process starting with such that for every . Here predictable means that is -measurable for every . This decomposition is almost surely unique. Remark The theorem is valid word by word also for stochastic processes taking values in the -dimensional Euclidean space or the complex vector space . This follows from the one-dimensional version by considering the components individually. Proof Existence Using conditional expectations, define the processes and , for every , explicitly by and where the sums for are empty and defined as zero. Here adds up the expected increments of , and adds up the surprises, i.e., the part of every that is not known one time step before. Due to these definitions, (if ) and are -measurable because the process is adapted, and because the process is integrable, and the decomposition is valid for every . The martingale property     a.s. also follows from the above definition (), for every }. Uniqueness To prove uniqueness, let be an additional decomposition. Then the process is a martingale, implying that     a.s., and also predictable, implying that     a.s. for any }. Since by the convention about the starting point of the predictable processes, this implies iteratively that almost surely for all , hence the decomposition is almost surely unique.
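The existence construction described above (A accumulates the expected increments, M the surprises) can be sketched for the simplest non-trivial case: a random walk with i.i.d. ±1 steps of known mean, for which the conditional expected increment is the constant step mean and the predictable part is deterministic. This toy choice of process is ours, not from the source.

```python
import random

def doob_decomposition(path, step_mean):
    """Doob decomposition X_n = X_0 + M_n + A_n for a walk with i.i.d. steps.

    A_n sums the predictable expected increments (here the constant
    `step_mean`, known one step ahead); M_n sums the surprises, i.e.
    actual increment minus its conditional expectation.
    """
    A, M = [0.0], [0.0]
    for n in range(1, len(path)):
        A.append(A[-1] + step_mean)                            # predictable part
        M.append(M[-1] + (path[n] - path[n - 1]) - step_mean)  # martingale part
    return M, A

rng = random.Random(42)
p, mean = 0.65, 0.3              # steps are +1 w.p. 0.65, -1 w.p. 0.35
final_M = []
for _ in range(2000):
    X = [0]
    for _ in range(50):
        X.append(X[-1] + (1 if rng.random() < p else -1))
    M, A = doob_decomposition(X, mean)
    # The decomposition reassembles the path exactly (up to rounding).
    assert all(abs(X[n] - (X[0] + M[n] + A[n])) < 1e-9 for n in range(len(X)))
    final_M.append(M[-1])

# Martingale property in aggregate: E[M_50] = 0, so the sample mean over
# 2000 paths should be near zero (loose statistical tolerance).
assert abs(sum(final_M) / len(final_M)) < 1.0
```

Here A_n = 0.3·n is deterministic, the extreme case of predictability; for a general adapted integrable process the same sums are taken with genuine conditional expectations given F_{n-1}.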
https://en.wikipedia.org/wiki/Dancing%20Dots
Dancing Dots Braille Music Technology is an American company based in Philadelphia that was founded in 1992 to develop and adapt music technology for the blind. Its founder, Bill McCann, is a blind musician. Among the products it offers are several programs that produce a musical version of Braille by converting print musical notation, allowing blind musicians access to the scores used by their sighted counterparts. The company also offers programs that aid blind musicians in transcribing their compositions to Braille. Dancing Dots created the latter products to help speed the process of Braille transcription for blind composers, who might otherwise have to wait between two weeks and six months to have their compositions transcribed by one of the less than one hundred certified Braille music transcribers in the United States. History The company was founded in 1992 by Bill McCann, a blind trumpeter. It struggled financially in its early years in the long lead between developing technology and releasing its first product in 1997, a difficult period assisted by federal contracts beginning in 1994. In 1997, the company released its GOODFEEL Braille Music Translator to positive reviews. The product was well received, and its company was a success. In 1999, the company, which was a recipient of a Small Business Innovation Research Grant, was part of a display of assistive technology at the White House. In 2000, Dancing Dots released CakeTalking for SONAR, JAWS scripts and tutorials that provide access to Cakewalk Sonar, a digital audio workstation, for blind or visually impaired users. Products and services Dancing Dots maintains a website at which it markets its products, as well as related and complementary products by other companies. With GOODFEEL combined with a few mainstream products, sighted musicians can prepare a Braille score with no knowledge of braille. Music scanning software can be used to speed up data entry. Blind users can make sound recordings and p
https://en.wikipedia.org/wiki/Drop%20attack
A drop attack is a sudden fall without loss of consciousness. Drop attacks stem from diverse mechanisms, including orthopedic causes (for example, leg weakness and knee instability), hemodynamic causes (for example, transient vertebrobasilar insufficiency, a type of interruption of blood flow to the brain), and neurologic causes (such as epileptic seizures or unstable vestibular function), among other reasons. Those affected typically experience abrupt leg weakness, sometimes after sudden movement of the head. The weakness may persist for hours. The term "drop attack" is used to categorize otherwise unexplained falls from a wide variety of causes and is considered ambiguous medical terminology; drop attacks are currently reported much less often than in the past, possibly as a result of better diagnostic precision. By definition, drop attacks exclude syncopal falls (fainting), which involve short loss of consciousness. In neurology, the term "drop attack" is used to describe certain types of seizure which occur in epilepsy. Drop attacks that have a vestibular origin within the inner ear may be experienced by some people in the later stages of Ménière's disease (these may be referred to as Tumarkin [drop] attacks, or as Tumarkin's otolithic crisis). Drop attacks often occur in elderly people. Falls in older adults happen for many reasons, and the goals of health care include preventing any preventable falls and correctly diagnosing any falls that do happen.
https://en.wikipedia.org/wiki/Souring%20bag
A souring bag (also called émesesley, agiwir, or tanwart) is a hide or leather bag used for fermenting milk. It is used by Berbers, especially Touareg. Nicolaisen (1963) describes the method of souring milk in the Ahaggar (Hoggar Mountains): "Milk of the morning yield is put into a skin bag ("émesesley")... to remain in this bag until the next morning when the churning takes place. During the hot season the "émesesley", filled with milk, is placed in the shade of the tent in daytime, while in winter it is placed in sunshine, or close to the fire. Some milk of the evening yield is sometimes added to the morning milk in the "émesesley". Before churning (into butter) the milk thus soured is poured into the proper churning bag ("agiwir", "tanwart") which is inflated and shaken". Such bags are described in Encyclopedie Berbere and can be found in some ethnographic museums.
https://en.wikipedia.org/wiki/Dopamine%20transporter
The dopamine transporter (DAT, also sodium-dependent dopamine transporter) is a membrane-spanning protein, coded for in humans by the SLC6A3 gene (also known as DAT1), that pumps the neurotransmitter dopamine out of the synaptic cleft back into the cytosol. In the cytosol, other transporters sequester the dopamine into vesicles for storage and later release. Dopamine reuptake via DAT provides the primary mechanism through which dopamine is cleared from synapses, although there may be an exception in the prefrontal cortex, where evidence points to a possibly larger role of the norepinephrine transporter. DAT is implicated in a number of dopamine-related disorders, including attention deficit hyperactivity disorder, bipolar disorder, clinical depression, eating disorders, and substance use disorders. The gene that encodes the DAT protein is located on chromosome 5, consists of 15 coding exons, and is roughly 64 kbp long. Evidence for the associations between DAT and dopamine related disorders has come from a type of genetic polymorphism, known as a variable number tandem repeat, in the SLC6A3 gene, which influences the amount of protein expressed. Function DAT is an integral membrane protein that removes dopamine from the synaptic cleft and deposits it into surrounding cells, thus terminating the signal of the neurotransmitter. Dopamine underlies several aspects of cognition, including reward, and DAT facilitates regulation of that signal. Mechanism DAT is a symporter that moves dopamine across the cell membrane by coupling the movement to the energetically favorable movement of sodium ions moving from high to low concentration into the cell. DAT function requires the sequential binding and co-transport of two Na+ ions and one Cl− ion with the dopamine substrate. The driving force for DAT-mediated dopamine reuptake is the ion concentration gradient generated by the plasma membrane Na+/K+ ATPase. In the most widely accepted model for monoamine transporter functio
https://en.wikipedia.org/wiki/Minkowski%20functional
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. If K is a subset of a real or complex vector space X, then the Minkowski functional or gauge of K is defined to be the function p_K : X → [0, ∞], valued in the extended real numbers, defined by p_K(x) = inf{r > 0 : x ∈ rK}, where the infimum of the empty set is defined to be positive infinity (which is not a real number, so that p_K(x) would then not be real-valued). The set K is often assumed/picked to have properties, such as being an absorbing disk in X, that guarantee that p_K will be a real-valued seminorm on X. In fact, every seminorm p on X is equal to the Minkowski functional (that is, p = p_K) of any subset K of X satisfying {x : p(x) < 1} ⊆ K ⊆ {x : p(x) ≤ 1} (where all three of these sets are necessarily absorbing in X and the first and last are also disks). Thus every seminorm (which is a function defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a set with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks are a major reason why Minkowski functionals are studied and used in functional analysis. In particular, through these relationships, Minkowski functionals allow one to "translate" certain geometric properties of a subset of X into certain algebraic properties of a function on X. The Minkowski function is always non-negative (meaning p_K ≥ 0). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, p_K might not be real-valued since for any given x ∈ X, the value p_K(x) is a real number if and only if {r > 0 : x ∈ rK} is not empty. Consequently, K is usually assumed to have properties (such as being absorbing in X, for instance) that will guarantee that p_K is real-valued. Definition Let K be a subset of a r
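As a concrete illustration: when K is the closed Euclidean unit ball in the plane, the Minkowski functional recovers the Euclidean norm, p_K(x) = ||x||₂. The sketch below (function names are illustrative, not from any library) approximates p_K(x) = inf{r > 0 : x ∈ rK} numerically from a membership oracle for K, by bisecting on r; this works when K is star-shaped about the origin, so that membership in rK is monotone in r.

```python
import math

def gauge(in_K, x, r_max=1e6, tol=1e-9):
    """Approximate the Minkowski functional p_K(x) = inf{r > 0 : x in r*K},
    given a membership oracle in_K for a set K that is absorbing and
    star-shaped about the origin (so membership in r*K is monotone in r)."""
    if not in_K([xi / r_max for xi in x]):
        return math.inf  # x is not absorbed within the search range
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # x is in mid*K  if and only if  x/mid is in K
        if mid > 0 and in_K([xi / mid for xi in x]):
            hi = mid
        else:
            lo = mid
    return hi

# For the Euclidean unit ball, the gauge is the Euclidean norm.
unit_ball = lambda v: sum(vi * vi for vi in v) <= 1.0
p = gauge(unit_ball, [3.0, 4.0])   # approximates ||(3, 4)|| = 5
```

The bisection converges because scaling is monotone: enlarging r can only make x easier to absorb.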
https://en.wikipedia.org/wiki/Ayttm
Ayttm (pronounced "item" or "A-Y-T-T-M") is a multi-protocol instant messaging client. It is the heir of the EveryBuddy project. Features Services Ayttm primarily supports one-to-one and group chatting on MSN, Yahoo!, ICQ, AIM, XMPP and IRC. It also has support for sending rudimentary emails via SMTP, which may be used to send SMS via email to SMS gateways. Ayttm also supports webcams on Yahoo! Messenger, and voice chatting over MSN using Ekiga (formerly GnomeMeeting). Service summary: OSCAR (AIM/ICQ) IRC XMPP SMTP (SMS via email to SMS gateway) MSNP (Microsoft Messenger service, commonly known as MSN, .NET, or Live) YMSG (YIM with webcam support) Fallback messaging When contacts belonging to the same person - but in different protocols - are grouped together, Ayttm can automatically continue the conversation using another protocol, when the original protocol connection fails. It is known as fallback messaging to its developers. Autotranslation When a contact is tied to a particular language, messages can be automatically translated using Babelfish. As with most electronic translators, its accuracy can be dubious. Aycryption Aycryption is a filter that facilitates encrypted chat using GPG keys. All outgoing text is encrypted using the remote contact's public key, and incoming encrypted text is decrypted using the local private key. Plugins Ayttm's plugin architecture makes it possible for new protocol support to be added without modifying the core application. Plugins must be compiled against a version of the core and will only work with core versions that are binary-compatible with the core version that the plugin was built against. Five types of plugins are supported: Service plugins - for protocol support. e.g.: MSN. Filter plugins - to modify incoming and outgoing messages. e.g.: Auto translation, aycryption Importers - to import contacts and accounts from other messengers. Smileys - a smiley pack Utility - to add functionality.
https://en.wikipedia.org/wiki/Constitution%20type
Constitution type or body type can refer to a number of attempts to classify human body shapes: Humours (Ayurveda) Somatotype of William Herbert Sheldon Paul Carus's character typology Ernst Kretschmer's character typology Elliot Abravanel's glandular metabolism typology Sasang typology by Je-Ma Lee Bertil Lundman's racial classification system See also Female body shape Enterotype Habitus (disambiguation) Phrenology Physiognomy Human physiology Anthropometry Body shape
https://en.wikipedia.org/wiki/Sergeant%20Stubby
Sergeant Stubby (1916 – March 16, 1926) was a dog and the unofficial mascot of the 102nd Infantry Regiment and was assigned to the 26th (Yankee) Division in World War I. He served for 18 months and participated in 17 battles and four offensives on the Western Front. He saved his regiment from surprise mustard gas attacks, found and comforted the wounded, and allegedly once caught a German soldier by the seat of his pants, holding him there until American soldiers found him. His actions were well-documented in contemporary American newspapers. Stubby has been called the most decorated war dog of the Great War and the only dog to be nominated and promoted to sergeant through combat. Stubby's remains are in the Smithsonian Institution. Stubby is the subject of the 2018 animated film Sgt. Stubby: An American Hero. Early life Stubby was described in contemporaneous news items as a Boston Terrier or "American bull terrier" mutt. Describing him as a dog of "uncertain breed," Ann Bausum wrote that: "The brindle-patterned pup probably owed at least some of his parentage to the evolving family of Boston Terriers, a breed so new that even its name was in flux: Boston Round Heads, American... and Boston Bull Terriers." Stubby was found wandering the grounds of the Yale University campus in New Haven, Connecticut, in July 1917, while members of the 102nd Infantry were training. He hung around as the men drilled and one soldier in particular, Corporal James Robert Conroy (1892–1987), developed a fondness for him. When it came time for the outfit to ship out, Conroy hid Stubby on board the troop ship. As they were getting off the ship in France, he hid Stubby under his overcoat without detection. Upon discovery by Conroy's commanding officer, Stubby saluted him as he had been trained to in camp, and the commanding officer allowed the dog to stay on board. Military service Stubby served with the 102nd Infantry Regiment in the trenches in France for 18 months and participated
https://en.wikipedia.org/wiki/Lee%20distance
In coding theory, the Lee distance is a distance between two strings x_1 x_2 … x_n and y_1 y_2 … y_n of equal length n over the q-ary alphabet {0, 1, …, q − 1} of size q ≥ 2. It is a metric defined as the sum over all positions i of min(|x_i − y_i|, q − |x_i − y_i|). If q = 2 or q = 3, the Lee distance coincides with the Hamming distance, because both distances are 0 for two single equal symbols and 1 for two single non-equal symbols. For q > 3 this is not the case anymore; the Lee distance between single letters can become bigger than 1. However, there exists a Gray isometry (weight-preserving bijection) between Z_4^m with the Lee weight and Z_2^(2m) with the Hamming weight. Considering the alphabet as the additive group Z_q, the Lee distance between two single letters x and y is the length of the shortest path in the Cayley graph (which is circular since the group is cyclic) between them. More generally, the Lee distance between two strings of length n is the length of the shortest path between them in the Cayley graph of Z_q^n. This can also be thought of as the quotient metric resulting from reducing Z^n with the Manhattan distance modulo the lattice qZ^n. The analogous quotient metric on a quotient of Z^n modulo an arbitrary lattice is known as a Mannheim metric or Mannheim distance. The metric space induced by the Lee distance is a discrete analog of the elliptic space. Example If q = 6, then the Lee distance between 3140 and 2543 is 1 + 2 + 0 + 3 = 6. History and application The Lee distance is named after William Chi Yuan Lee. It is applied for phase modulation while the Hamming distance is used in case of orthogonal modulation. The Berlekamp code is an example of a code in the Lee metric. Other significant examples are the Preparata code and Kerdock code; these codes are non-linear when considered over a field, but are linear over a ring.
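The definition translates directly into code: each position contributes the shorter way around the cycle Z_q, i.e. min(|a − b|, q − |a − b|). A minimal sketch:

```python
def lee_distance(x, y, q):
    """Lee distance between equal-length digit strings over the alphabet
    {0, ..., q-1}: sum of min(|a - b|, q - |a - b|) over positions."""
    if len(x) != len(y):
        raise ValueError("strings must have equal length")
    total = 0
    for a, b in zip(x, y):
        d = abs(int(a) - int(b)) % q   # per-symbol difference in Z_q
        total += min(d, q - d)         # shorter way around the cycle
    return total
```

For example, with q = 6 the distance between 3140 and 2543 is 1 + 2 + 0 + 3 = 6, and for q = 2 the function reduces to the Hamming distance.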
https://en.wikipedia.org/wiki/Digital%20lollipop
A digital lollipop is an electronic device that synthesizes virtual tastes by stimulating the human tongue with electric currents. The device can produce four primary tastes: sweet, sour, salty, and bitter. Digital lollipops were developed through research led by Nimesha Ranasinghe at the National University of Singapore. Design According to Ranasinghe, "The system can manipulate the properties of electric currents (magnitude, frequency, and polarity: inverse current) to formulate different stimuli. Currently, we are conducting experiments to analyze regional differences of the human tongue for electrical stimulation." The devices generate alternating current signals through a silver electrode, stimulating the tongue's taste receptors to emulate the major taste components. It also produces small, varying amounts of heat to simulate food. Eventually, the digital lollipop could aid Alzheimer's patients by helping them "either enhance or suppress certain senses". It may also allow people with diabetes to experience sweetness without increasing their blood sugar levels. The National University of Singapore research team is developing Taste Over Internet Protocol (TOIP) that would allow taste information to be communicated between locations. See also Virtual reality Gustatory technology
https://en.wikipedia.org/wiki/Groove%20for%20transverse%20sinus
The groove for transverse sinus is a groove on the internal surface of the occipital bone, running laterally between the superior and inferior fossae of the cruciform eminence. The transverse sinuses travel along this groove. A small or absent bony groove in the occiput, in conjunction with the compressible nature of the transverse sinus, makes this structure vulnerable to tapering with increased intracranial pressure (ICP). Additional images See also Internal occipital protuberance Occipital bone Transverse sinus
https://en.wikipedia.org/wiki/List%20of%20semiconductor%20materials
Semiconductor materials are nominally small band gap insulators. The defining property of a semiconductor material is that it can be doped with impurities that alter its electronic properties in a controllable way. Because of their application in the computer and photovoltaic industry—in devices such as transistors, lasers, and solar cells—the search for new semiconductor materials and the improvement of existing materials is an important field of study in materials science. Most commonly used semiconductor materials are crystalline inorganic solids. These materials are classified according to the periodic table groups of their constituent atoms. Different semiconductor materials differ in their properties. Thus, in comparison with silicon, compound semiconductors have both advantages and disadvantages. For example, gallium arsenide (GaAs) has six times higher electron mobility than silicon, which allows faster operation; wider band gap, which allows operation of power devices at higher temperatures, and gives lower thermal noise to low power devices at room temperature; its direct band gap gives it more favorable optoelectronic properties than the indirect band gap of silicon; it can be alloyed to ternary and quaternary compositions, with adjustable band gap width, allowing light emission at chosen wavelengths, which makes possible matching to the wavelengths most efficiently transmitted through optical fibers. GaAs can also be grown in a semi-insulating form, which is suitable as a lattice-matching insulating substrate for GaAs devices. Conversely, silicon is robust, cheap, and easy to process, whereas GaAs is brittle and expensive, and insulation layers can not be created by just growing an oxide layer; GaAs is therefore used only where silicon is not sufficient. By alloying multiple compounds, some semiconductor materials are tunable, e.g., in band gap or lattice constant. The result is ternary, quaternary, or even quinary compositions.
https://en.wikipedia.org/wiki/Reverse%20Mathematics%3A%20Proofs%20from%20the%20Inside%20Out
Reverse Mathematics: Proofs from the Inside Out is a book by John Stillwell on reverse mathematics, the process of examining proofs in mathematics to determine which axioms are required by the proof. It was published in 2018 by the Princeton University Press. Topics The book begins with a historical overview of the long struggles with the parallel postulate in Euclidean geometry, and of the foundational crisis of the late 19th and early 20th centuries. Then, after reviewing background material in real analysis and computability theory, the book concentrates on the reverse mathematics of theorems in real analysis, including the Bolzano–Weierstrass theorem, the Heine–Borel theorem, the intermediate value theorem and extreme value theorem, the Heine–Cantor theorem on uniform continuity, the Hahn–Banach theorem, and the Riemann mapping theorem. These theorems are analyzed with respect to three of the "big five" subsystems of second-order arithmetic, namely arithmetical comprehension, recursive comprehension, and the weak Kőnig's lemma. Audience The book is aimed at a "general mathematical audience" including undergraduate mathematics students with an introductory-level background in real analysis. It is intended both to excite mathematicians, physicists, and computer scientists about the foundational issues in their fields, and to provide an accessible introduction to the subject. However, it is not a textbook; for instance, it has no exercises. One theme of the book is that many theorems in this area require axioms in second-order arithmetic that encompass infinite processes and uncomputable functions. Reception and related reading Jeffry Hirst criticizes the book, writing that "if one is not too obsessive about the details, Proofs from the Inside Out is an interesting introduction," while finding details that he would prefer to be handled differently, in a topic for which details are important. In particular, in this area, there are multiple choices for how to b
https://en.wikipedia.org/wiki/Georgiy%20Shilov
Georgi Evgen'evich Shilov (3 February 1917 – 17 January 1975) was a Soviet mathematician and expert in the field of functional analysis, who contributed to the theory of normed rings and generalized functions. He was born in Ivanovo-Voznesensk. After graduating from Moscow State University in 1938, he served in the army during World War II. He earned a doctorate in physical-mathematical sciences in 1951, also at MSU, and briefly taught at Kyiv University until returning as a professor at MSU in 1954. There, he supervised over 40 graduate students, including Mikhail Agranovich, Valentina Borok, Gregory Eskin, and Arkadi Nemirovski. Shilov often collaborated with colleague Israel Gelfand on research that included generalized functions and partial differential equations.
https://en.wikipedia.org/wiki/269%20%28number%29
269 (two hundred [and] sixty-nine) is the natural number between 268 and 270. It is also a prime number. In mathematics 269 is a twin prime, and a Ramanujan prime. It is the largest prime factor of 9! + 1 = 362881, and the smallest natural number that cannot be represented as the determinant of a 10 × 10 (0,1)-matrix.
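The claim about 9! + 1 is easy to check mechanically: 362881 factors as 19 × 71 × 269, so 269 is indeed the largest prime factor. A minimal trial-division sketch:

```python
import math

def prime_factors(n):
    """Prime factorization of n by trial division, in ascending order."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # remaining cofactor is prime
    return factors

factors = prime_factors(math.factorial(9) + 1)   # 362881 = 19 * 71 * 269
```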
https://en.wikipedia.org/wiki/Gram%27s%20theorem
In mathematics, Gram's theorem states that an algebraic set in a finite-dimensional vector space invariant under some linear group can be defined by absolute invariants. It is named after J. P. Gram, who published it in 1874.
https://en.wikipedia.org/wiki/Society%20for%20Hematology%20and%20Stem%20Cells
The Society for Hematology and Stem Cells (formerly the International Society for Experimental Hematology) is a learned society which deals with hematology, the study of the blood system and its diseases, including those caused by exposure to nuclear radiation. It was founded in 1950, and held its first official meeting in Milwaukee in 1972. Its mission statement is: "To promote the scientific knowledge and clinical application of basic hematology, immunology, stem cell research, cell and gene therapy and related aspects of research through publications, discussions, scientific meetings and the support of young investigators." Dr. Margaret Goodell of the Baylor College of Medicine's Center for Cell and Gene Therapy is the current president. At the opening ceremony of the 30th annual meeting of ISEH, Emperor Akihito of Japan praised the "remarkable results obtained by the ISEH today in the treatment of radiation-related disorders", by contrast to the lack of any effective treatment for such disorders in 1945 when atomic bombs were dropped on Hiroshima and Nagasaki. The society has an official journal, Experimental Hematology, which has an impact factor of 2.907.
https://en.wikipedia.org/wiki/Ethmoidal%20labyrinth
The ethmoidal labyrinth or lateral mass of the ethmoid bone consists of a number of thin-walled cellular cavities, the ethmoid air cells, arranged in three groups, anterior, middle, and posterior, and interposed between two vertical plates of bone; the lateral plate forms part of the orbit, the medial plate forms part of the nasal cavity. In the disarticulated bone many of these cells are opened into, but when the bones are articulated, they are closed in at every part, except where they open into the nasal cavity. Surfaces The upper surface of the labyrinth presents a number of half-broken cells, the walls of which are completed, in the articulated skull, by the edges of the ethmoidal notch of the frontal bone. Crossing this surface are two grooves, converted into two openings by articulation with the frontal; they are the anterior and posterior ethmoidal foramina, and open on the inner wall of the orbit. The posterior surface presents large irregular cellular cavities, which are closed in by articulation with the sphenoidal concha and orbital process of palatine bone. The lateral surface is formed of a thin, smooth, oblong plate, the lamina papyracea (os planum), which covers in the middle and posterior ethmoidal cells and forms a large part of the medial wall of the orbit; it articulates above with the orbital plate of the frontal bone, below with the maxilla and orbital process of the palatine, in front with the lacrimal, and behind with the sphenoid. In front of the lamina papyracea are some broken air cells which are overlapped and completed by the lacrimal bone and the frontal process of the maxilla. A curved lamina, the uncinate process, projects downward and backward from this part of the labyrinth; it forms a small part of the medial wall of the maxillary sinus, and articulates with the ethmoidal process of the inferior nasal concha. The medial surface of the labyrinth forms part of the lateral wall of the corresponding nasal cavity. It consists of a t
https://en.wikipedia.org/wiki/Load-link/store-conditional
In computer science, load-linked/store-conditional (LL/SC), sometimes known as load-reserved/store-conditional (LR/SC), are a pair of instructions used in multithreading to achieve synchronization. Load-link returns the current value of a memory location, while a subsequent store-conditional to the same memory location will store a new value only if no updates have occurred to that location since the load-link. Together, this implements a lock-free, atomic, read-modify-write operation. "Load-linked" is also known as load-link, load-reserved, and load-locked. LL/SC was originally proposed by Jensen, Hagensen, and Broughton for the S-1 AAP multiprocessor at Lawrence Livermore National Laboratory. Comparison of LL/SC and compare-and-swap If any updates have occurred, the store-conditional is guaranteed to fail, even if the value read by the load-link has since been restored. As such, an LL/SC pair is stronger than a read followed by a compare-and-swap (CAS), which will not detect updates if the old value has been restored (see ABA problem). Real implementations of LL/SC do not always succeed even if there are no concurrent updates to the memory location in question. Any exceptional events between the two operations, such as a context switch, another load-link, or even (on many platforms) another load or store operation, will cause the store-conditional to spuriously fail. Older implementations will fail if there are any updates broadcast over the memory bus. This is called weak LL/SC by researchers, as it breaks many theoretical LL/SC algorithms. Weakness is relative, and some weak implementations can be used for some algorithms. LL/SC is more difficult to emulate than CAS. Additionally, stopping running code between paired LL/SC instructions, such as when single-stepping through code, can prevent forward progress, making debugging tricky. Nevertheless, LL/SC is equivalent to CAS in the sense that either primitive can be implemented in terms of the other, in O(1)
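The semantic difference from CAS can be made concrete with a toy, single-threaded simulation (the class and method names below are illustrative, not a real API; real LL/SC is a pair of CPU instructions and may also fail spuriously, which this model omits). The key point: a store-conditional fails whenever any store has touched the cell since the load-link, even if the old value was restored — exactly the ABA scenario where CAS is fooled.

```python
class LLSCCell:
    """Toy model of a memory cell with load-link/store-conditional.

    A version counter bumps on every successful store, so store-conditional
    detects *any* intervening write -- even one restoring the old value.
    """

    def __init__(self, value):
        self.value = value
        self.version = 0
        self._linked = None   # version observed by the last load-link

    def load_link(self):
        self._linked = self.version
        return self.value

    def store_conditional(self, new):
        if self._linked != self.version:
            return False      # some store intervened since load-link: fail
        self.value = new
        self.version += 1
        return True

    def compare_and_swap(self, expected, new):
        if self.value != expected:
            return False
        self.value = new
        self.version += 1
        return True


cell = LLSCCell("A")
old = cell.load_link()               # observer reads "A"
cell.compare_and_swap("A", "B")      # another writer: A -> B
cell.compare_and_swap("B", "A")      # ... and back:   B -> A  (ABA)
ok = cell.store_conditional("C")     # SC correctly fails: updates occurred
```

A CAS performed at the same point would succeed, because it only compares values and cannot see that the cell was modified and restored in between.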
https://en.wikipedia.org/wiki/In%20vivo%20bioreactor
The in vivo bioreactor is a tissue engineering paradigm that uses bioreactor methodology to grow neotissue in vivo that augments or replaces malfunctioning native tissue. Tissue engineering principles are used to construct a confined, artificial bioreactor space in vivo that hosts a tissue scaffold and key biomolecules necessary for neotissue growth. Said space often requires inoculation with pluripotent or specific stem cells to encourage initial growth, and access to a blood source. A blood source allows for recruitment of stem cells from the body alongside nutrient delivery for continual growth. This delivery of cells and nutrients to the bioreactor eventually results in the formation of a neotissue product. Overview Conceptually, the in vivo bioreactor arose from complications of bone grafting, a method of repairing bone fracture, bone loss, necrosis, and tumor reconstruction. Traditional bone grafting strategies require fresh, autologous bone harvested from the iliac crest; this harvest site is limited by the amount of bone that can safely be removed, as well as associated pain and morbidity. Other methods include cadaverous allografts and synthetic options (often made of hydroxyapatite) that have become available in recent years. In response to the question of limited bone sourcing, it has been posited that bone can be grown to fit a damaged region within the body through the application of tissue engineering principles. Tissue engineering is a biomedical engineering discipline that combines biology, chemistry, and engineering to design neotissue (newly formed tissue) on a scaffold. Tissue scaffolds are functionally analogous to the natural extracellular matrix, acting as a site upon which regenerative cellular components adsorb to encourage cellular growth. This cellular growth is then artificially stimulated by additive growth factors in the environment that encourage tissue formation. The scaffold is often seeded with stem cells and growth additi
https://en.wikipedia.org/wiki/Radula%20jonesii
Radula jonesii is a species of liverwort in the family Radulaceae. It is known from a few locations on Madeira and one location on Tenerife. The populations are small and vulnerable. This liverwort forms dark green to olive green mats on rocks or on trees such as Laurus novocanariensis and Ocotea foetens. It is part of the old growth laurel forest ecosystem on the islands. On Madeira this ecosystem is protected.
https://en.wikipedia.org/wiki/Loc%20Blocs
Loc Blocs was a plastic block construction toy set. Never reaching the popularity of Lego bricks, they were marketed in the 1970s and 1980s by Entex Industries and manufactured in Japan as Dia Block by Kawada Co. which still produces sets to this day. They were also sold by Sears under their house brand Brix Blox. Today, similar blocks are still manufactured in Japan as Diabloks and sold in the US under the name Disney Build-It. Compatibility The blocks were of a very similar grid pattern to the LEGO system, but due to existing LEGO patents, were slightly different. Rather than using a stud and tube system, Loc Blocs used tall studs and short channels on the bottom of bricks, with the studs just tall enough to engage the channels. The studs were too tall, and spaced slightly differently, to fit between LEGO tubes.
https://en.wikipedia.org/wiki/Index%20of%20anatomy%20articles
Articles related to anatomy include: A abdomen abdominal aorta abducens nerve abducens nucleus abducent abducent nerve abduction accessory bone accessory cuneate nucleus accessory nerve accessory olivary nucleus accommodation reflex acetabulum Achilles tendon acoustic nerve acromion adenohypophysis adenoids adipose aditus aditus ad antrum adrenal gland adrenergic afferent neuron agger nasi agnosia agonist alar ligament albuginea alimentary allantois allocortex alpha motor neurons alveolar artery alveolar process alveolus alveus of the hippocampus amatory anatomy amaurosis Ammon's horn ampulla Ampulla of Vater amygdala amygdalofugal pathway amygdaloid amylacea anaesthesia analgesia analogous anastomosis anatomical pathology anatomical position anatomical snuffbox anatomical terms of location anatomical terms of motion anatomy Anatomy of the human heart anconeus angiography angiology angular gyrus anhidrosis animal morphology anisocoria ankle ankle reflex annular ligament annulus of Zinn anomaly anomic aphasia anosognosia ansa cervicalis ansa lenticularis anterior cerebral artery Anterior chamber of eyeball anterior choroidal artery anterior commissure anterior communicating artery anterior corticospinal tract anterior cranial fossa anterior cruciate ligament anterior ethmoidal foramen anterior ethmoidal nerve anterior funiculus anterior horn cells anterior horn of the lateral ventricle anterior hypothalamus anterior inferior cerebellar artery anterior limb of the internal capsule anterior lobe of cerebellum anterior nucleus of the thalamus anterior perforated substance anterior pituitary anterior root anterior spinal artery anterior spinocerebellar tract anterior superior alveolar artery anterior tibial artery anterior vertebral muscle anterior white commissure anterolateral region of the neck anterolateral system antidromic antihelix antrum anulus fibrosus anulus tendineus anus aorta aortic body aponeurosis apophysis appendage appendicular skeleton appendix apros
https://en.wikipedia.org/wiki/Computer%20security%20software
Computer security software or cybersecurity software is any computer program designed to influence information security. This is often taken in the context of defending computer systems or data, yet can incorporate programs designed specifically for subverting computer systems due to their significant overlap, and the adage that the best defense is a good offense. The defense of computers against intrusion and unauthorized use of resources is called computer security. Similarly, the defense of computer networks is called network security. The subversion of computers or their unauthorized use is referred to using the terms cyberwarfare, cybercrime, or security hacking (later shortened to hacking for further references in this article due to issues with hacker, hacker culture and differences in white/grey/black 'hat' color identification). Types The following are various software implementations of cybersecurity patterns and groups, outlining ways a host system attempts to secure itself and its assets from malicious interactions; this includes tools to deter both passive and active security threats. Although both security and usability are desired, today it is widely considered in computer security software that with higher security comes decreased usability, and with higher usability comes decreased security. Prevent access The primary purpose of these types of systems is to restrict and often to completely prevent access to computers or data except to a very limited set of users. The theory is often that if a key, credential, or token is unavailable then access should be impossible. This often involves taking valuable information and then either reducing it to apparent noise or hiding it within another source of information in such a way that it is unrecoverable. Cryptography and Encryption software Steganography and Steganography tools A critical tool used in developing software that prevents malicious access is Threat Modeling. Threat modeling is the process of creati
https://en.wikipedia.org/wiki/Klavdija%20Kutnar
Klavdija Kutnar is a Slovene mathematician. She received her PhD at the University of Primorska (UP) in 2008. She is Rector of the University of Primorska. Biography Klavdija Kutnar was born 23 December 1980, in Ljubljana, Slovenia. She graduated from the Faculty of Education of the University of Ljubljana in 2003, and in 2008 received her PhD in mathematics at the University of Primorska under the supervision of Dragan Marušič. From 2010 to 2012 she was head of the Department of Mathematics at the University of Primorska Institute Andrej Marušič (UP IAM). In 2012, she was elected dean of the University of Primorska Faculty of Mathematics, Natural Sciences and Information Technology (UP FAMNIT) and since 2015 has concurrently been assistant director of UP IAM. In 2018 she was granted the titles of Research Counsellor and Full Professor in Mathematics at the University of Primorska. In 2019 she was elected the fourth rector of the University of Primorska. Research Kutnar's main research area is algebraic graph theory. At the beginning of her research career, she also worked in mathematical chemistry. She is noted for her contribution to the study of the structural properties of particular families of symmetric graphs and in particular, her role in developing the original method of embedding graphs on surfaces to solve special cases of the well-known Lovász conjecture: the problem of finding Hamiltonian paths and cycles in vertex-transitive and Cayley graphs. She is a member of the research program P1-0285, financed by the Slovenian Research and Innovation Agency (ARIS) and led by Dragan Marušič. In 2018, she was the first Slovenian female mathematician to obtain a basic research project financed by the ARIS (J1-9110). Selected publications H. H. Glover, K. Kutnar, A. Malnič and D. Marušič, Hamilton cycles in (2, odd, 3)-Cayley graphs, Proc. London Math. Soc., 104 (Issue 6) (2012), 1171–1197, doi:10.1112/plms/pdr042. K. Kutnar and D. Marušič, Hamilton paths
https://en.wikipedia.org/wiki/Short-rate%20model
A short-rate model, in the context of interest rate derivatives, is a mathematical model that describes the future evolution of interest rates by describing the future evolution of the short rate, usually written \(r_t\). The short rate Under a short rate model, the stochastic state variable is taken to be the instantaneous spot rate. The short rate, \(r_t\), then, is the (continuously compounded, annualized) interest rate at which an entity can borrow money for an infinitesimally short period of time from time \(t\). Specifying the current short rate does not specify the entire yield curve. However, no-arbitrage arguments show that, under some fairly relaxed technical conditions, if we model the evolution of \(r_t\) as a stochastic process under a risk-neutral measure \(Q\), then the price at time \(t\) of a zero-coupon bond maturing at time \(T\) with a payoff of 1 is given by \[ P(t,T) = \mathbb{E}^{Q}\!\left[\exp\left(-\int_t^T r_s\,ds\right)\,\middle|\,\mathcal{F}_t\right], \] where \(\mathcal{F}\) is the natural filtration for the process. The interest rates implied by the zero coupon bonds form a yield curve, or more precisely, a zero curve. Thus, specifying a model for the short rate specifies future bond prices. This means that instantaneous forward rates are also specified by the usual formula \[ f(t,T) = -\frac{\partial}{\partial T}\ln P(t,T). \] Short rate models are often classified as endogenous and exogenous. Endogenous short rate models are short rate models where the term structure of interest rates, or of zero-coupon bond prices \(P(t,T)\), is an output of the model, so it is "inside the model" (endogenous) and is determined by the model parameters. Exogenous short rate models are models where such term structure is an input, as the model involves some time-dependent functions or shifts that allow for inputting a given market term structure, so that the term structure comes from outside (exogenous). Particular short-rate models Throughout this section \(W_t\) represents a standard Brownian motion under a risk-neutral probability measure and \(dW_t\) its differential. Where the model is lognormal, a variable \(X_t\) is assumed to follow an Ornstein–Uhlenbeck process and \(r_t\) is assumed to foll
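The risk-neutral bond-pricing expectation described above can be checked numerically for a concrete short-rate model. The sketch below, assuming NumPy, uses the classical Vasicek model \(dr_t = a(b - r_t)\,dt + \sigma\,dW_t\) (a mean-reverting Ornstein–Uhlenbeck short rate) and prices a zero-coupon bond two ways: with the model's known closed-form solution, and by Monte Carlo simulation of the discounted payoff \(\exp(-\int_0^T r_s\,ds)\). The parameter values are illustrative, not taken from the article.

```python
import numpy as np

# Illustrative Vasicek parameters: mean-reversion speed a, long-run level b,
# volatility sigma, initial short rate r0, bond maturity T (years).
a, b, sigma, r0, T = 0.5, 0.04, 0.01, 0.03, 5.0

def vasicek_zcb_closed_form(a, b, sigma, r0, T):
    """Closed-form zero-coupon bond price P(0, T) under the Vasicek model."""
    B = (1.0 - np.exp(-a * T)) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r0)

def vasicek_zcb_monte_carlo(a, b, sigma, r0, T, n_paths=200_000, n_steps=500, seed=0):
    """Estimate P(0, T) = E[exp(-integral of r_s ds)] by Euler simulation."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt  # left-endpoint approximation of the rate integral
        r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(-integral).mean()

print("closed form:", vasicek_zcb_closed_form(a, b, sigma, r0, T))
print("monte carlo:", vasicek_zcb_monte_carlo(a, b, sigma, r0, T))
```

With these parameters the two prices agree closely, illustrating that specifying the short-rate dynamics under the risk-neutral measure is enough to pin down zero-coupon bond prices, and hence the whole zero curve.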
https://en.wikipedia.org/wiki/Expression%20vector
An expression vector, otherwise known as an expression construct, is usually a plasmid or virus designed for gene expression in cells. The vector is used to introduce a specific gene into a target cell, and can commandeer the cell's mechanism for protein synthesis to produce the protein encoded by the gene. Expression vectors are the basic tools in biotechnology for the production of proteins. The vector is engineered to contain regulatory sequences that act as enhancer and promoter regions and lead to efficient transcription of the gene carried on the expression vector. The goal of a well-designed expression vector is the efficient production of protein, and this may be achieved by the production of a significant amount of stable messenger RNA, which can then be translated into protein. The expression of a protein may be tightly controlled, with the protein produced in significant quantity only when necessary through the use of an inducer; in some systems, however, the protein may be expressed constitutively. Escherichia coli is commonly used as the host for protein production, but other cell types may also be used. An example of the use of an expression vector is the production of insulin, which is used for medical treatments of diabetes. Elements An expression vector has features that any vector may have, such as an origin of replication, a selectable marker, and a suitable site for the insertion of a gene like the multiple cloning site. The cloned gene may be transferred from a specialized cloning vector to an expression vector, although it is possible to clone directly into an expression vector. The cloning process is normally performed in Escherichia coli. Vectors used for protein production in organisms other than E. coli may have, in addition to a suitable origin of replication for their propagation in E. coli, elements that allow them to be maintained in another organism, and these vectors are called shuttle vectors. Elements for expression An expression vect
https://en.wikipedia.org/wiki/Nodule%20%28medicine%29
In medicine, nodules are small firm lumps, usually greater than 1 cm in diameter. If filled with fluid they are referred to as cysts. Smaller (less than 0.5 cm) raised soft tissue bumps may be termed papules. The evaluation of a skin nodule includes a description of its appearance, its location, how it feels to touch and any associated symptoms which may give clues to an underlying medical condition. Nodules in skin include dermatofibroma and pyogenic granuloma. Nodules may form on tendons and muscles in response to injury, and are frequently found on vocal cords. They may occur in organs such as the lung or thyroid, or be a sign in other medical conditions such as rheumatoid arthritis. Characteristics Nodules are small firm lumps usually greater than 1 cm in diameter, found in skin and other organs. If filled with fluid they are usually softer and referred to as cysts. Smaller (less than 0.5 cm) raised soft tissue bumps may be termed papules. Evaluation The evaluation of a skin nodule includes a description of its appearance, its location, how it feels to touch and any associated symptoms which may give clues to an underlying medical condition. Often discovered incidentally on a chest x-ray, a single nodule in the lung requires assessment to exclude cancer. Conditions Nodules may form on tendons and muscles in response to injury, and are frequently found on vocal cords. They occur in conditions including endometriosis, neurofibromatosis, and rheumatoid arthritis. They may also feature in Kaposi's sarcoma and gonorrhea. Other examples