https://en.wikipedia.org/wiki/Electrohydrogenesis
Electrohydrogenesis or biocatalyzed electrolysis is the name given to a process for generating hydrogen gas from organic matter being decomposed by bacteria. This process uses a modified fuel cell to contain the organic matter and water. A small voltage, 0.2–0.8 V, is applied; the original article reports that an overall energy efficiency of 288% can be achieved (this figure is computed relative to the amount of electricity used; waste heat lowers the overall efficiency). This work was reported by Cheng and Logan.
See also
Biohydrogen
Electrochemical reduction of carbon dioxide
Electromethanogenesis
Fermentative hydrogen production
Microbial fuel cell
https://en.wikipedia.org/wiki/Outline%20of%20computer%20engineering
The following outline is provided as an overview of and topical guide to computer engineering:
Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.
Main articles on computer engineering
Computer
Computer architecture
Computer hardware
Computer software
Computer science
Engineering
Electrical engineering
Software engineering
History of computer engineering
General
Timeline of computing: 2400 BC–1949, 1950–1979, 1980–1989, 1990–1999, 2000–2009
History of computing hardware up to third generation (1960s)
History of computing hardware from 1960s to current
History of computer hardware in Eastern Bloc countries
History of personal computers
History of laptops
History of software engineering
History of compiler writing
History of the Internet
History of the World Wide Web
History of video games
History of the graphical user interface
Timeline of computing
Timeline of operating systems
Timeline of programming languages
Timeline of artificial intelligence
Timeline of cryptography
Timeline of algorithms
Timeline of quantum computing
Product specific
Timeline of DOS operating systems
Classic Mac OS
History of macOS
History of Microsoft Windows
Timeline of the Apple II series
Timeline of Apple products
Timeline of file sharing
Timeline of OpenBSD
Hardware
Digital
https://en.wikipedia.org/wiki/Dough%20conditioner
A dough conditioner, flour treatment agent, improving agent or bread improver is any ingredient or chemical added to bread dough to strengthen its texture or otherwise improve it in some way. Dough conditioners may include enzymes, yeast nutrients, mineral salts, oxidants and reductants, bleaching agents and emulsifiers. They are food additives combined with flour to improve baking functionality. Flour treatment agents are used to increase the speed of dough rising and to improve the strength and workability of the dough.
Examples
Examples of dough conditioners include ascorbic acid, distilled monoglycerides, citrate ester of monoglycerides, diglycerides, ammonium chloride, enzymes, diacetyl tartaric acid ester of monoglycerides or DATEM, potassium bromate, calcium salts such as calcium iodate, L-cystine, L-cysteine HCl, glycerol monostearate, azodicarbonamide, sodium stearoyl lactylate, sucrose palmitate or sucrose ester, polyoxyethylene sorbitan monostearate or polysorbate, soybean lecithin, and soybean lecithin enriched with lysophospholipids. Less processed dough conditioners include sprouted- or malted-grain flours, soy, milk, wheat germ, eggs, potatoes, gluten, yeast, and extra kneading. Malted, diastatic flours are not typically added by manufacturers to whole-wheat flours.
History
In the early 1900s it was discovered that the use of calcium chloride, ammonium sulfate, and potassium bromate halved the amount of yeast needed to raise dough. These mixtures were generally known as mineral yeast foods or yeast nutrient salts. After they became popular among bakers, one patented yeast food was analyzed by Connecticut Agricultural Experiment Station chief chemist J.P. Street, who published in 1917 that it contained "calcium sulphate, 25; ammonium chlorid, 9.7; potassium bromate, 0.3; sodium chlorid, 25; patent wheat flour, 40." Such yeast foods contain water conditioners, yeast conditioners, and dough conditioners.
Yeast nutrients
Yeast requires water, carbon sources such as
https://en.wikipedia.org/wiki/Axillary%20joints
The axillary joints are two joints in the axillary region of the body: the shoulder joint and the acromioclavicular joint.
Shoulder joint
The shoulder joint, also known as the glenohumeral joint, is a synovial ball-and-socket joint. The shoulder joint involves articulation between the glenoid cavity of the scapula (shoulder blade) and the head of the upper arm bone (humerus) and functions as a diarthrosis and multiaxial joint. Due to the very loose joint capsule, which gives a limited interface of the humerus and scapula, it is the most mobile joint of the human body.
Acromioclavicular joint
The acromioclavicular joint is the joint at the top of the shoulder. It is the junction between the acromion (the part of the scapula that forms the highest point of the shoulder) and the clavicle. It is a plane synovial joint. The acromioclavicular joint allows the arm to be raised above the head. This joint functions as a pivot point (although technically it is a gliding synovial joint), acting like a strut to help with movement of the scapula, resulting in a greater degree of arm rotation.
https://en.wikipedia.org/wiki/Precision%20and%20recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: $\text{precision} = \frac{|\{\text{relevant instances}\} \cap \{\text{retrieved instances}\}|}{|\{\text{retrieved instances}\}|}$. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Written as a formula: $\text{recall} = \frac{|\{\text{relevant instances}\} \cap \{\text{retrieved instances}\}|}{|\{\text{relevant instances}\}|}$. Both precision and recall are therefore based on relevance. Consider a computer program for recognizing dogs (the relevant element) in a digital photograph. Upon processing a picture which contains ten cats and twelve dogs, the program identifies eight dogs. Of the eight elements identified as dogs, only five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting a hypothesis-testing approach from statistics, in which, in this case, the null hypothesis is that a given item is irrelevant (i.e., not a dog), absence of type I and type II errors (i.e., perfect specificity and sensitivity of 100% each) corresponds respectively to perfect precision (no false positives) and perfect recall (no false negatives). More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (actual non-dogs), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors (false negatives), for a type II error rate of 7/12.
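As a concrete illustration, here is a minimal Python sketch computing both metrics from the counts in the dog example (the function name is ours, not from the source):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from raw counts of true positives,
    false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Dog example: 5 true positives, 3 false positives (cats flagged
# as dogs) and 7 false negatives (missed dogs).
p, r = precision_recall(tp=5, fp=3, fn=7)
print(p)  # 0.625            -> 5/8
print(r)  # 0.41666666666... -> 5/12
```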
https://en.wikipedia.org/wiki/Mating%20disruption
Mating disruption (MD) is a pest management technique designed to control certain insect pests by introducing artificial stimuli that confuse the individuals and disrupt mate localization and/or courtship, thus preventing mating and blocking the reproductive cycle. It usually involves the use of synthetic sex pheromones, although other approaches, such as interfering with vibrational communication, are also being developed. History La confusion sexuelle or mating disruption, was first discussed by the Institut national de la recherche agronomique in 1974 in Bordeaux, France. Winemakers in France, Switzerland, Spain, Germany, and Italy were the first to use the method to treat vines against the larvae of the moth genus Cochylis. Mechanism In many insect species of interest to agriculture, such as those in the order Lepidoptera, females emit an airborne trail of a specific chemical blend constituting that species' sex pheromone. This aerial trail is referred to as a pheromone plume. Males of that species use the information contained in the pheromone plume to locate the emitting female (known as a “calling” female). Mating disruption exploits the male insects' natural response to follow the plume by introducing a synthetic pheromone into the insects’ habitat. The synthetic pheromone is a volatile organic chemical designed to mimic the species-specific sex pheromone produced by the female insect. The general effect of mating disruption is to confuse the male insects by masking the natural pheromone plumes, causing the males to follow “false pheromone trails” at the expense of finding mates, and affecting the males’ ability to respond to “calling" females. Consequently, the male population experiences a reduced probability of successfully locating and mating with females, which leads to the eventual cessation of breeding and collapse of the insect infestation. The California Department of Pesticide Regulation, the California Department of Food and Agriculture, and
https://en.wikipedia.org/wiki/Norman%20E.%20Gibbs
Norman E. Gibbs (November 27, 1941 – April 25, 2002) was an American software engineer, scholar and educational leader. He earned a B.Sc. in mathematics at Ursinus College (1964) and an M.Sc. (1966) and Ph.D. (1969) in computer science at Purdue University, advised by Robert R. Korfhage. His research area was cycle generation, an area in graph theory. Gibbs joined the faculty at Bowdoin College in Maine, Arizona State University and the College of William and Mary (mathematics) in Virginia before moving to Pittsburgh, joining Carnegie Mellon University as professor of computer science and becoming the first director of the educational program at the Software Engineering Institute (1987–97). He then served as chief information officer at Guilford College in Greensboro and at the University of Connecticut, jointly serving as professor of operations and information management. He eventually worked for Ball State University as chair of computer science (2000–02).
Articles
A cycle generation algorithm for finite undirected linear graphs, in Jnl. of the ACM, 16(4):564-68, 1969
Tridiagonalization by permutations, in Comm. of the ACM, 17(1):20-24, 1974 (with William G. Poole, Jr.)
Basic cycle generation, in Comm. of the ACM, 18(5):275-76, 1975
An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix, in SIAM Jnl. of Numerical Analysis, 13(2):236-250, 1976 (with W. G. Poole and Paul K. Stockmeyer)
A hybrid profile reduction algorithm, ACM Trans. on Math. Softw., 2(4):378-387, 1976
An introductory computer science course for all majors, ACM SIGCSE, 9(3):34-38, 1977
A model curriculum for a liberal arts degree in computer science, Comm. of the ACM, 29(3):202-210, 1986 (with Allen B. Tucker)
A Master of Software Engineering Curriculum: Recommendations from the Software Engineering Institute, IEEE Computer, 22(9):59-71, 1989 (with Gary A. Ford)
Software Engineering and Computer Science: The Impending Split?, in Educ. & Computing, 7(1-2):111-17, 1991
Books
Principles o
https://en.wikipedia.org/wiki/Embedded%20pushdown%20automaton
An embedded pushdown automaton or EPDA is a computational model for parsing languages generated by tree-adjoining grammars (TAGs). It is similar to the context-free grammar-parsing pushdown automaton, but instead of using a plain stack to store symbols, it has a stack of iterated stacks that store symbols, giving TAGs a generative capacity between context-free and context-sensitive grammars, or a subset of mildly context-sensitive grammars. Embedded pushdown automata should not be confused with nested stack automata, which have more computational power.
History and applications
EPDAs were first described by K. Vijay-Shanker in his 1988 doctoral thesis. They have since been applied to more complete descriptions of classes of mildly context-sensitive grammars and have had important roles in refining the Chomsky hierarchy. Various subgrammars, such as the linear indexed grammar, can thus be defined. While natural languages have traditionally been analyzed using context-free grammars (see transformational-generative grammar and computational linguistics), this model does not work well for languages with crossed dependencies, such as Dutch, situations for which an EPDA is well suited. A detailed linguistic analysis is available in Joshi, Schabes (1997).
Theory
An EPDA is a finite state machine with a set of stacks that can themselves be accessed through the embedded stack. Each stack contains elements of the stack alphabet $\Gamma$, and so we define an element of a stack by $\sigma \in \Gamma^*$, where the star is the Kleene closure of the alphabet. Each stack can then be defined in terms of its elements, so we denote the $j$th stack in the automaton using a double-dagger symbol: $\ddagger\sigma_j$, where the first symbol of $\sigma_j$ would be the next accessible symbol in the stack. The embedded stack of stacks can thus be denoted by $\ddagger\sigma_1 \ddagger\sigma_2 \ldots \ddagger\sigma_k$. We define an EPDA by the septuple (7-tuple) $M = (Q, \Sigma, \Gamma, \delta, q_0, Q_F, \sigma_0)$, where $Q$ is a finite set of states; $\Sigma$ is the finite set of the input alphabet; $\Gamma$ is the finite stack alphabet; $q_0 \in Q$ is the start state; $Q_F \subseteq Q$ is the set of final states;
https://en.wikipedia.org/wiki/TOXMAP
TOXMAP was a geographic information system (GIS) from the United States National Library of Medicine (NLM) that was deprecated on December 16, 2019. The application used maps of the United States to help users explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory (TRI) and Superfund programs with visual projections and maps.
Description
TOXMAP helped users create nationwide, regional, or local area maps showing where TRI chemicals were released on-site into the air, water, and ground, and by underground injection, as reported by industrial facilities in the United States. It also identified the releasing facilities, color-coded release amounts for a single year or year range, and provided multi-year aggregate chemical release data and trends over time, starting with 1988. Maps could also show locations of Superfund sites on the Agency for Toxic Substances and Disease Registry National Priorities List (NPL), which lists all chemical contaminants present at these sites. TOXMAP was an environmental health tool that made epidemiological and environmental information available to the public. There were two versions of TOXMAP available from its home page: the classic version released in 2004, and a newer version released in 2014 that was based on Adobe Flash/Apache Flex technology. In addition to many of the features of TOXMAP classic, the new version provided an improved map appearance and interactive capabilities as well as a more current GIS look-and-feel. This included seamless panning, immediate update of search results when zooming to a location, two collapsible side panels to maximize map size, and automatic size adjustment after a window resize. The new TOXMAP also improved U.S. Census layers and availability by Census Tract (2000 and 2010), added Canadian National Pollutant Release Inventory (NPRI) data and U.S. commercial nuclear power plants, and improved and updated congressional district boundaries. TOXM
https://en.wikipedia.org/wiki/Double-stranded%20RNA%20viruses
Double-stranded RNA viruses (dsRNA viruses) are a polyphyletic group of viruses that have double-stranded genomes made of ribonucleic acid. The double-stranded genome is used as a template by the viral RNA-dependent RNA polymerase (RdRp) to transcribe a positive-strand RNA functioning as messenger RNA (mRNA) for the host cell's ribosomes, which translate it into viral proteins. The positive-strand RNA can also be replicated by the RdRp to create a new double-stranded viral genome. A distinguishing feature of the dsRNA viruses is their ability to carry out transcription of the dsRNA segments within the capsid, and the required enzymes are part of the virion structure. Double-stranded RNA viruses are classified into two phyla, Duplornaviricota and Pisuviricota (specifically class Duplopiviricetes), in the kingdom Orthornavirae and realm Riboviria. The two phyla do not share a common dsRNA virus ancestor, but evolved their double strands two separate times from positive-strand RNA viruses. In the Baltimore classification system, dsRNA viruses belong to Group III. Virus group members vary widely in host range (animals, plants, fungi, and bacteria), genome segment number (one to twelve), and virion organization (T-number, capsid layers, or turrets). Double-stranded RNA viruses include the rotaviruses, known globally as a common cause of gastroenteritis in young children, and bluetongue virus, an economically significant pathogen of cattle and sheep. The family Reoviridae is the largest and most diverse dsRNA virus family in terms of host range. Classification Two clades of dsRNA viruses exist: the phylum Duplornaviricota and the class Duplopiviricetes, which is in the phylum Pisuviricota. Both are included in the kingdom Orthornavirae in the realm Riboviria. Based on phylogenetic analysis of RdRp, the two clades do not share a common dsRNA ancestor but are instead separately descended from different positive-sense, single-stranded RNA viruses. In the Baltimore class
https://en.wikipedia.org/wiki/Autoregressive%20conditional%20duration
In financial econometrics, an autoregressive conditional duration (ACD; Engle and Russell, 1998) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH. In a continuous double auction (a common trading mechanism in many financial markets) waiting times between two consecutive trades vary at random.
Definition
Let $\tau_i$ denote the duration (the waiting time between consecutive trades) and assume that $\tau_i = \psi_i \epsilon_i$, where the $\epsilon_i$ are independent and identically distributed random variables, positive and with $\operatorname{E}[\epsilon_i] = 1$, and where the series $\psi_i$ is given by
$\psi_i = \omega + \alpha \tau_{i-1} + \beta \psi_{i-1}$
and where $\omega > 0$, $\alpha \ge 0$, $\beta \ge 0$, $\alpha + \beta < 1$.
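A minimal simulation sketch of this recursion in Python, assuming the ACD(1,1) form reconstructed above with unit-mean exponential innovations; the parameter values are illustrative only:

```python
import random

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8):
    """Simulate n durations from an ACD(1,1) model with unit-mean
    exponential innovations (illustrative parameters)."""
    psi_prev = omega / (1.0 - alpha - beta)  # unconditional mean duration
    tau_prev = psi_prev
    durations = []
    for _ in range(n):
        psi = omega + alpha * tau_prev + beta * psi_prev  # conditional mean
        tau = psi * random.expovariate(1.0)               # tau_i = psi_i * eps_i
        durations.append(tau)
        psi_prev, tau_prev = psi, tau
    return durations

print(simulate_acd(5))
```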
https://en.wikipedia.org/wiki/Causality%20conditions
In the study of Lorentzian manifold spacetimes there exists a hierarchy of causality conditions which are important in proving mathematical theorems about the global structure of such manifolds. These conditions were collected during the late 1970s. The weaker the causality condition on a spacetime, the more unphysical the spacetime is. Spacetimes with closed timelike curves, for example, present severe interpretational difficulties; see the grandfather paradox. It is reasonable to believe that any physical spacetime will satisfy the strongest causality condition: global hyperbolicity. For such spacetimes the equations in general relativity can be posed as an initial value problem on a Cauchy surface.
The hierarchy
There is a hierarchy of causality conditions, each one of which is strictly stronger than the previous. This is sometimes called the causal ladder. The conditions, from weakest to strongest, are:
Non-totally vicious
Chronological
Causal
Distinguishing
Strongly causal
Stably causal
Causally continuous
Causally simple
Globally hyperbolic
Given below are the definitions of these causality conditions for a Lorentzian manifold $(M, g)$. Where two or more are given they are equivalent. Notation: $\ll$ denotes the chronological relation and $\prec$ denotes the causal relation (see causal structure for their definitions).
Non-totally vicious
For some point $p \in M$ we have $p \not\ll p$.
Chronological
There are no closed chronological (timelike) curves. The chronological relation is irreflexive: $p \not\ll p$ for all $p \in M$.
Causal
There are no closed causal (non-spacelike) curves. If both $p \prec q$ and $q \prec p$ then $p = q$.
Distinguishing
Past-distinguishing
Two points which share the same chronological past are the same point: $I^-(p) = I^-(q) \implies p = q$. Equivalently, for any neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$, $p \in V$, such that no past-directed non-spacelike curve from $p$ intersects $V$ more than once.
Future-distinguishing
Two points which share the same chronological future are the same point: $I^+(p) = I^+(q) \implies p = q$. Equivalently, for any neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$, $p \in V$, such that no future-directed non-spacelike curve from $p$ intersects $V$ more than once.
https://en.wikipedia.org/wiki/Relativistic%20speed
Relativistic speed refers to speed at which relativistic effects become significant to the desired accuracy of measurement of the phenomenon being observed. Relativistic effects are the discrepancies between values calculated by models that consider relativity and those that do not. Related terms are velocity, rapidity, and celerity, which is proper velocity. Speed is a scalar, being the magnitude of the velocity vector, which in relativity is the four-velocity and in three-dimensional Euclidean space a three-velocity. Speed is empirically measured as average speed, although current devices in common use can estimate speed over very small intervals and closely approximate instantaneous speed. Non-relativistic discrepancies include cosine error, which occurs in speed-detection devices when only one scalar component of the three-velocity is measured, and the Doppler effect, which may affect observations of wavelength and frequency. Relativistic effects are highly non-linear and for everyday purposes are insignificant, because the Newtonian model closely approximates the relativity model. In special relativity the Lorentz factor is a measure of time dilation, length contraction and the relativistic mass increase of a moving object.
See also
Lorentz factor
Relative velocity
Relativistic beaming
Relativistic jet
Relativistic mass
Relativistic particle
Relativistic plasma
Relativistic wave equations
Special relativity
Ultrarelativistic limit
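A short Python sketch of the Lorentz factor mentioned above, $\gamma = 1/\sqrt{1 - v^2/c^2}$, illustrating how sharply the relativistic correction grows with speed:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2) for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(30.0))      # prints 1.0: gamma - 1 is only ~5e-15
print(lorentz_factor(0.5 * C))   # ~1.155 at half the speed of light
print(lorentz_factor(0.99 * C))  # ~7.09 at 0.99c
```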
https://en.wikipedia.org/wiki/International%20Bibliography%20of%20the%20Social%20Sciences
The International Bibliography of the Social Sciences (IBSS) is a bibliography for social science and interdisciplinary research. The database focuses on the social science disciplines of anthropology, economics, politics and sociology, and related interdisciplinary subjects, such as development studies, human geography and environment and gender studies. It was established in 1951 and prepared by the Fondation Nationale des Sciences Politiques in Paris. Production was transferred to the London School of Economics in 1989, and then to ProQuest in 2010.
https://en.wikipedia.org/wiki/Rectovaginal%20fascia
The rectovaginal fascia (often called rectovaginal septum or sometimes fascia of Otto) is a thin structure separating the vagina and the rectum. This corresponds to the rectoprostatic fascia in the male. Clinical significance Perforations in it can lead to rectocele.
https://en.wikipedia.org/wiki/USP26
USP26 is a peptidase enzyme. The USP26 gene is an X-linked gene exclusively expressed in the testis, and it codes for ubiquitin-specific protease 26. The USP26 gene is found at Xq26.2 on the X chromosome as a single exon. The enzyme that this gene encodes comprises 913 amino acid residues and is 104 kilodaltons in size; it is transcribed from a sequence of 2794 nucleotide base pairs on the X chromosome. The USP26 enzyme is a deubiquitinating enzyme that plays a significant role in the regulation of protein turnover during spermatogenesis. It is a testis-specific enzyme that is solely expressed in spermatogonia and can prevent the degradation of ubiquitinated USP26 substrates. Recent research has suggested that defects in USP26 may be involved in some cases of male infertility, specifically Sertoli cell-only syndrome, and an absence of sperm in the ejaculate (azoospermia).
See also
Male infertility
https://en.wikipedia.org/wiki/Bayliss%20and%20Starling%20Society
The Bayliss and Starling Society was founded in 1979 as a forum for research scientists with specific interests in the chemistry, physiology and function of central and autonomic peptides. The society was named in honour of William Bayliss and Ernest Starling, who discovered the gastrointestinal peptide secretin in 1902 and coined the term hormone in 1905. The society's main objective was to "advance education and science by the promotion, for the benefit of the public, the study of the chemistry, physiology and disorders of central and peripheral regulating peptides and by the dissemination of the results of such study and research." In doing so, the Society promoted research into peptides and supported scientists with research interests in peptides by helping to organise symposia and relevant conferences. Additionally, the Society offered the John Calam Travelling Fellowship Award for members who wanted to attend national and international academic conferences or visit laboratories to gain experience in new techniques to facilitate their research. The Bayliss and Starling Society merged with The Physiological Society in 2014.
https://en.wikipedia.org/wiki/Through-silicon%20via
In electronic engineering, a through-silicon via (TSV) or through-chip via is a vertical electrical connection (via) that passes completely through a silicon wafer or die. TSVs are high-performance interconnect techniques used as an alternative to wire bonds and flip chips to create 3D packages and 3D integrated circuits. Compared to alternatives such as package-on-package, the interconnect and device density is substantially higher, and the length of the connections becomes shorter.
Classification
Dictated by the manufacturing process, there exist three different types of TSVs: via-first TSVs are fabricated before the individual components (transistors, capacitors, resistors, etc.) are patterned (front end of line, FEOL); via-middle TSVs are fabricated after the individual components are patterned but before the metal layers (back end of line, BEOL); and via-last TSVs are fabricated after (or during) the BEOL process. Via-middle TSVs are currently a popular option for advanced 3D ICs as well as for interposer stacks. TSVs through the front end of line (FEOL) have to be carefully accounted for during the EDA and manufacturing phases. That is because TSVs induce thermo-mechanical stress in the FEOL layer, thereby impacting transistor behaviour.
Applications
Image sensors
CMOS image sensors (CIS) were among the first applications to adopt TSVs in volume manufacturing. In initial CIS applications, TSVs were formed on the backside of the image sensor wafer to form interconnects, eliminate wire bonds, and allow for reduced form factor and higher-density interconnects. Chip stacking came about only with the advent of backside-illuminated (BSI) CIS, and involved reversing the order of the lens, circuitry, and photodiode from traditional front-side illumination so that the light coming through the lens first hits the photodiode and then the circuitry. This was accomplished by flipping the photodiode wafer, thinning the backside, and then bonding it on top of the re
https://en.wikipedia.org/wiki/IdMOC
Integrated discrete Multiple Organ Culture (IdMOC) is an in vitro, cell culture based experimental model for the study of intercellular communication. In conventional in vitro systems, each cell type is studied in isolation ignoring critical interactions between organs or cell types. IdMOC technology is based on the concept that multiple organs signal or communicate via the systemic circulation (i.e., blood). The IdMOC plate consists of multiple inner wells within a large interconnecting chamber. Multiple cell types are first individually seeded in the inner wells and, when required, are flooded with an overlying medium to facilitate well-to-well communication. Test material can be added to the overlying medium and both media and cells can be analyzed individually. Plating of hepatocytes with other organ-specific cells allows evaluation of drug metabolism and organotoxicity. The IdMOC system has numerous applications in drug development, such as the evaluation of drug metabolism and toxicity. It can simultaneously evaluate the toxic potential of a drug on cells from multiple organs and evaluate drug stability, distribution, metabolite formation, and efficacy. By modeling multiple-organ interactions, IdMOC can examine the pharmacological effects of a drug and its metabolites on target and off-target organs as well as evaluate drug-drug interactions by measuring cytochrome P450 (CYP) induction or inhibition in hepatocytes. IdMOC can also be used for routine and high throughput screening of drugs with desirable ADME or ADME-Tox properties. In vitro toxicity screening using hepatocytes in conjunction with other primary cells such as cardiomyocytes (cardiotoxicity model), kidney proximal tubule epithelial cells (nephrotoxicity model), astrocytes (neurotoxicity model), endothelial cells (vascular toxicity model), and airway epithelial cells (pulmonary toxicity model) is invaluable to the drug design and discovery process. The IdMOC was patented by Dr. Albert P. Li in
https://en.wikipedia.org/wiki/Prokaryotic%20cytoskeleton
The prokaryotic cytoskeleton is the collective name for all structural filaments in prokaryotes. It was once thought that prokaryotic cells did not possess cytoskeletons, but advances in visualization technology and structure determination led to the discovery of filaments in these cells in the early 1990s. Not only have analogues for all major cytoskeletal proteins in eukaryotes been found in prokaryotes, cytoskeletal proteins with no known eukaryotic homologues have also been discovered. Cytoskeletal elements play essential roles in cell division, protection, shape determination, and polarity determination in various prokaryotes. Tubulin superfamily FtsZ FtsZ, the first identified prokaryotic cytoskeletal element, forms a filamentous ring structure located in the middle of the cell called the Z-ring that constricts during cell division, similar to the actin-myosin contractile ring in eukaryotes. The Z-ring is a highly dynamic structure that consists of numerous bundles of protofilaments that extend and shrink, although the mechanism behind Z-ring contraction and the number of protofilaments involved are unclear. FtsZ acts as an organizer protein and is required for cell division. It is the first component of the septum during cytokinesis, and it recruits all other known cell division proteins to the division site. Despite this functional similarity to actin, FtsZ is homologous to eukaryal tubulin. Although comparison of the primary structures of FtsZ and tubulin reveal a weak relationship, their 3-dimensional structures are remarkably similar. Furthermore, like tubulin, monomeric FtsZ is bound to GTP and polymerizes with other FtsZ monomers with the hydrolysis of GTP in a mechanism similar to tubulin dimerization. Since FtsZ is essential for cell division in bacteria, this protein is a target for the design of new antibiotics. There currently exist several models and mechanisms that regulate Z-ring formation, but these mechanisms depend on the species. Sever
https://en.wikipedia.org/wiki/Halogen%20bond
In chemistry, a halogen bond occurs when there is evidence of a net attractive interaction between an electrophilic region associated with a halogen atom in a molecular entity and a nucleophilic region in another, or the same, molecular entity. Like a hydrogen bond, the result is not a formal chemical bond, but rather a strong electrostatic attraction. Mathematically, the interaction can be decomposed into two terms: one describing electrostatic, orbital-mixing charge transfer, and another describing electron-cloud dispersion. Halogen bonds find application in supramolecular chemistry; drug design and biochemistry; crystal engineering and liquid crystals; and organic catalysis.
Definition
Halogen bonds occur when a halogen atom is electrostatically attracted to a partial negative charge. Necessarily, the atom must be covalently bonded in an antipodal σ-bond; the electron concentration associated with that bond leaves a positively charged "hole" on the other side. Although all halogens can theoretically participate in halogen bonds, the σ-hole shrinks if the electron cloud in question polarizes poorly or the halogen is so electronegative as to polarize the associated σ-bond. Consequently, halogen-bond propensity follows the trend F < Cl < Br < I. There is no clear distinction between halogen bonds and expanded-octet partial bonds; what is superficially a halogen bond may well turn out to be a full bond in an unexpectedly relevant resonance structure.
Donor characteristics
A halogen bond is almost collinear with the halogen atom's other, conventional bond, but the geometry of the electron-charge donor may be much more complex. Multi-electron donors such as ethers and amines prefer halogen bonds collinear with the lone pair and donor nucleus. Pyridine derivatives tend to donate halogen bonds approximately coplanar with the ring, and the two C–N⋯X angles are about 120°. Carbonyl, thiocarbonyl, and selenocarbonyl groups, with a trigonal planar geometry
https://en.wikipedia.org/wiki/OpenBinder
OpenBinder is a system for inter-process communication. It was developed at Be Inc. and then Palm, Inc. and was the basis for the Binder framework now used in the Android operating system developed by Google. OpenBinder allows processes to present interfaces which may be called by other threads. Each process maintains a thread pool which may be used to service such requests. OpenBinder takes care of reference counting, recursion back into the original thread, and the inter-process communication itself. On the Linux version of OpenBinder, the communication is achieved using ioctls on a given file descriptor, communicating with a kernel driver. The kernel-side component of the Linux version of OpenBinder was merged into the Linux kernel mainline in kernel version 3.19, which was released on February 8, 2015.
https://en.wikipedia.org/wiki/Skybirds
Skybirds was a brand name for a series of 1:72 scale wood and metal aircraft model kits produced during the 1930s and 1940s, manufactured by A. J. Holladay & Co. Some of the Skybird-branded products were die-cast scale model cars, aircraft, military vehicles, and figurines, among others.
History
These kits were designed by pilot and aviation journalist James Hay Stevens and comprised shaped wooden blanks with cast metal detail parts. The kits were intended to educate their assemblers about the aircraft, and were designed to be built similarly to real aircraft construction. The kits were supposedly approved by "educational and air-minded organisations". These were the first model aircraft kits in the world made to a constant scale of 1:72. This scale was later adopted by many other model manufacturers, such as Frog and Airfix. Around 80 different Skybirds kits were released from 1932 onwards, marketed towards those aged 12 and over. Subjects ranged from First World War to Second World War military aircraft, plus a number of inter-war period civilian types. The company endorsed the foundation of clubs, specifically for model-making. Together, these formed the Skybird League, which had its own magazine, with new issues published four times a year. Photographs of completed models could be submitted to competitions in order to be displayed in the windows of the Hamleys toyshop in London. Customers marvelled that a photograph of a properly finished model looked identical to the original article; this was somewhat easier to achieve in the 1930s because all photography was monochromatic. The kits encouraged photographers to experiment with scale and trickery to make the model seem more like an actual vehicle. Magazine readers sent their photographs to Skybirds, hoping they would be published in an upcoming issue. Modellers were also encouraged to produce a diorama of their co
https://en.wikipedia.org/wiki/Jonathan%20S.%20Turner
Jonathan Shields Turner is a senior professor of Computer Science in the School of Engineering and Applied Science at Washington University in St. Louis. His research interests include the design and analysis of high-performance routers and switching systems, extensible communication networks via overlay networks, and the probabilistic performance of heuristic algorithms for NP-complete problems.
Biography
Jonathan Shields Turner was born on November 13, 1953, in Boston. Turner started his undergraduate studies at Oberlin College, and later enrolled in the undergraduate engineering program at Washington University. In doing so, he became one of the first dual-degree engineering graduates from Washington University. In 1975, he graduated with a B.A. in Theater from Oberlin College. Then, in 1977, he graduated with a B.S. in Computer Science and a B.S. in Electrical Engineering from Washington University. Once Turner graduated, he began attending Northwestern University for Computer Science graduate school, and simultaneously began working at Bell Labs as a member of their technical staff. In 1979, he received his M.S. in computer science from Northwestern, and continued on as a doctoral student under the supervision of Hal Sudborough. From 1981 to 1983, he was the principal system architect for the Fast Packet Switching project at Bell Labs, and he received eleven patents for his work on the project. In 1982 he published his doctoral dissertation, receiving his Ph.D. in computer science from Northwestern. Turner joined Washington University in 1983 as an assistant professor in the Computer Science and Electrical Engineering departments. In 1986, he published a paper titled "New Directions in Communications (or Which Way to the Information Age)", which forecast the convergence of data, voice, and video traffic on networks, and proposed scalable switching architectures to handle such a traffic load. This paper would later be reprinted in the 50th anniv
https://en.wikipedia.org/wiki/Marion%20J.%20Lamb
Marion Julia Lamb (29 July 1939 – 12 December 2021) was Senior Lecturer at Birkbeck, University of London, before her retirement. She studied the effect of environmental conditions such as heat, radiation and pollution on metabolic activity and genetic mutability in the fruit fly Drosophila. From the late 1980s, Lamb collaborated with Eva Jablonka, researching and writing on the inheritance of epigenetic variations, and in 2005 they co-authored the book Evolution in Four Dimensions, considered by some to be in the vanguard of an ongoing revolution within evolutionary biology. Work on evolutionary themes Building on the approach of evolutionary developmental biology, and recent findings of molecular and behavioral biology, they argue the case for the transmission of not just genes per se, but heritable variations transmitted from generation to generation by whatever means. They suggest that such variation can occur at four levels. Firstly, at the established physical level of genetics. Secondly, at the epigenetic level involving variation in the “meaning” of given DNA strands, in which variations in DNA translation during developmental processes are subsequently transmitted during reproduction, which can then feed back into sequence modification of DNA itself. These epigenetic changes - chemical modifications and markers that change the way enzymes and regulatory proteins have access to DNA - are currently being studied to explain many non-Mendelian patterns of inheritance. The best understood mechanism is nucleotide methylation that silences a gene. Methylation can be inherited during cell division, both asexually (mitotic) during development and wound healing, but in some instances also sexually (meiotic). Methylation is linked in some instances to RNA interference, the new and emerging science of RNA regulation of gene expression. The third dimension comprises the transmission of behavioural traditions. There are for example documented cases of food preferenc
https://en.wikipedia.org/wiki/Aztec%20C
Aztec C is a discontinued C programming language compiler for CP/M-80, MS-DOS, Apple II (both DOS 3.3 and ProDOS), Commodore 64, early Macintosh, Amiga, and Atari ST. It was sold commercially by Manx Software Systems.
History
Manx Software Systems of Shrewsbury, New Jersey produced C programming language compilers beginning in the 1980s for CP/M, Apple II, IBM PC compatibles, Macintosh, and other systems. Manx was started by Harry Suckow, with partners Thomas Fenwick and James Goodnow II, the two principal developers. They were all working together at another company at the time. Suckow had started several companies of his own, anticipating the impending growth of the personal computer market. Demand came first for compilers, and he disengaged himself from the other companies to pursue Manx and Aztec C. Another developer, Chris Macey, assisted them for a time with 80XX development, among other areas. One of the main reasons for Aztec C's early success was the floating-point support in the Z80 compiler, which was extended to the Apple II shortly after. During the move to ANSI C in 1989, Robert Sherry represented them on the ANSI committee but left shortly after. He also fixed numerous bugs in Aztec C after Chris Macey and Thomas Fenwick left the company. By this time Microsoft had targeted competitors for their C compiler, and Aztec C was being pushed out of the general IBM PC compatible compiler market, followed by competition with Apple's MPW C on the Macintosh side and Lattice C on the Amiga after SAS bought them. In 1989 Thomas Fenwick left to work for Microsoft, and James Goodnow worked on Aztec C occasionally but was pursuing other projects outside the company and eventually left altogether. Chris Macey returned as a consultant but eventually left to become chief scientist for another company. Throughout the 1990s they continued to make their Aztec C compiler. As their market share dropped, they tried to make the move to specializi
https://en.wikipedia.org/wiki/Lateral%20flow%20test
A lateral flow test (LFT), is an assay also known as a lateral flow device (LFD), lateral flow immunochromatographic assay, or rapid test. It is a simple device intended to detect the presence of a target substance in a liquid sample without the need for specialized and costly equipment. LFTs are widely used in medical diagnostics in the home, at the point of care, and in the laboratory. For instance, the home pregnancy test is an LFT that detects a specific hormone. These tests are simple and economical and generally show results in around five to thirty minutes. Many lab-based applications increase the sensitivity of simple LFTs by employing additional dedicated equipment. Because the target substance is often a biological antigen, many lateral flow tests are rapid antigen tests (RAT or ART). LFTs operate on the same principles of affinity chromatography as the enzyme-linked immunosorbent assays (ELISA). In essence, these tests run the liquid sample along the surface of a pad with reactive molecules that show a visual positive or negative result. The pads are based on a series of capillary beds, such as pieces of porous paper, microstructured polymer, or sintered polymer. Each of these pads has the capacity to transport fluid (e.g., urine, blood, saliva) spontaneously. The sample pad acts as a sponge and holds an excess of sample fluid. Once soaked, the fluid flows to the second conjugate pad in which the manufacturer has stored freeze dried bio-active particles called conjugates (see below) in a salt–sugar matrix. The conjugate pad contains all the reagents required for an optimized chemical reaction between the target molecule (e.g., an antigen) and its chemical partner (e.g., antibody) that has been immobilized on the particle's surface. This marks target particles as they pass through the pad and continue across to the test and control lines. The test line shows a signal, often a color as in pregnancy tests. The control line contains affinity ligands whic
https://en.wikipedia.org/wiki/Jugate
A jugate consists of two portraits side by side, suggesting to the viewer the closeness of each to the other. The word comes from the Latin jugatus, meaning joined. On coins, it is commonly used for married couples, brothers, or a father and son. On campaign items, these would often be the presidential and vice-presidential candidates, although sometimes a state or local candidate is included with a presidential candidate. Jugates may be seen on medals, pinbacks, buttons, posters or other campaign items. If a third figure appears on the item, it is called a trigate.
https://en.wikipedia.org/wiki/Viral%20shedding
Viral shedding is the expulsion and release of virus progeny following successful reproduction during a host cell infection. Once replication has been completed and the host cell is exhausted of all resources in making viral progeny, the viruses may begin to leave the cell by several methods. The term is variously used to refer to viral particles shedding from a single cell, from one part of the body into another, and from a body into the environment, where the virus may infect another. Vaccine shedding is a form of viral shedding which can occur in instances of infection caused by some attenuated (or "live virus") vaccines. Means Shedding from a cell into extracellular space Budding (through cell envelope) "Budding" through the cell envelope—in effect, borrowing from the cell membrane to create the virus' own viral envelope— into extracellular space is most effective for viruses that require their own envelope. These include such viruses as HIV, HSV, SARS or smallpox. When beginning the budding process, the viral nucleocapsid cooperates with a certain region of the host cell membrane. During this interaction, the glycosylated viral envelope protein inserts itself into the cell membrane. In order to successfully bud from the host cell, the nucleocapsid of the virus must form a connection with the cytoplasmic tails of envelope proteins. Though budding does not immediately destroy the host cell, this process will slowly use up the cell membrane and eventually lead to the cell's demise. This is also how antiviral responses are able to detect virus-infected cells. Budding has been most extensively studied for viruses of eukaryotes. However, it has been demonstrated that viruses infecting prokaryotes of the domain Archaea also employ this mechanism of virion release. Apoptosis (cell destruction) Animal cells are programmed to self-destruct when they are under viral attack or damaged in some other way. By forcing the cell to undergo apoptosis or cell suicide, rele
https://en.wikipedia.org/wiki/Display%20Serial%20Interface
The Display Serial Interface (DSI) is a specification by the Mobile Industry Processor Interface (MIPI) Alliance aimed at reducing the cost of display controllers in a mobile device. It is commonly targeted at LCDs and similar display technologies. It defines a serial bus and a communication protocol between the host (the source of the image data) and the device (the destination). The specification is closed, meaning it is not open to the public. The maintenance of the interface is the responsibility of the MIPI Alliance. Only legal entities (e.g., companies) can be members. These members, or persons commissioned and approved by them, have access to the specification in order to use it in their applications.
History
The MIPI Alliance was formed in 2003, aiming to establish standards for mobile industry components. The first version of MIPI DSI, version 1.0, was released in 2005. MIPI DSI v1.1 was released in 2007 and added features such as "Command Mode" for directly sending commands and data to display modules using the display controller. DSI v1.2 was released in 2011 and extended the video packet length and expanded the command mode. DSI v1.3 was released in 2013. DSI versions 1.4 and DSI-2 were released in 2016 and 2018, respectively.
Design
At the physical layer, DSI specifies a high-speed (e.g., 4.5 Gbit/s per lane for D-PHY 2.0) differential-signaling point-to-point serial bus. This bus includes one high-speed clock lane and one or more data lanes. Each lane is carried on two wires (due to differential signaling). All lanes travel from the DSI host to the DSI device, except for the first data lane (lane 0), which is capable of a bus turnaround (BTA) operation that allows it to reverse transmission direction. When more than one lane is used, they are used in parallel to transmit data, with each sequential bit in the stream traveling on the next lane. That is, if 4 lanes are being used, 4 bits are transmitted simultaneously, one bit per lane.
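A toy Python sketch of this round-robin bit distribution; it models only the striping rule described above, not the actual wire protocol:

```python
def stripe_bits(bits, lanes=4):
    """Distribute a bit stream across data lanes round-robin:
    sequential bits travel on successive lanes."""
    return [bits[i::lanes] for i in range(lanes)]

# Eight bits over four lanes: bit 0 on lane 0, bit 1 on lane 1, ...
print(stripe_bits([1, 0, 1, 1, 0, 0, 1, 0]))
# -> [[1, 0], [0, 0], [1, 1], [1, 0]]
```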
https://en.wikipedia.org/wiki/Fencing%20%28computing%29
Fencing is the process of isolating a node of a computer cluster or protecting shared resources when a node appears to be malfunctioning. As the number of nodes in a cluster increases, so does the likelihood that one of them may fail at some point. The failed node may have control over shared resources that need to be reclaimed, and if the node is acting erratically, the rest of the system needs to be protected. Fencing may thus either disable the node, or disallow shared storage access, thus ensuring data integrity.
Basic concepts
A node fence (or I/O fence) is a virtual "fence" that separates nodes which must not have access to a shared resource from that resource. It may separate an active node from its backup. If the backup crosses the fence and, for example, tries to control the same disk array as the primary, a data hazard may occur. Mechanisms such as STONITH are designed to prevent this condition. Isolating a node means ensuring that I/O can no longer be done from it. Fencing is typically done automatically, by cluster infrastructure such as shared disk file systems, in order to protect processes from other active nodes modifying the resources during node failures. Mechanisms to support fencing, such as the reserve/release mechanism of SCSI, have existed since at least 1985. Fencing is required because it is impossible to distinguish between a real failure and a temporary hang. If the malfunctioning node is really down, then it cannot do any damage, so theoretically no action would be required (it could simply be brought back into the cluster with the usual join process). However, because there is a possibility that a malfunctioning node could itself consider the rest of the cluster to be the one that is malfunctioning, a split-brain condition could ensue and cause data corruption. Instead, the system has to assume the worst scenario and always fence in case of problems.
Approaches to fencing
There are two classes of fencing methods: one which disables the node itself, and one which disallows access to resources such as shared disks.
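A minimal Python sketch illustrating both classes; the Node class and its methods are hypothetical stand-ins, not a real cluster API:

```python
class Node:
    """Hypothetical handle for a cluster node; real clusters use
    dedicated fence agents (power switches, SAN zoning, etc.)."""
    def __init__(self, name):
        self.name = name

    def power_off(self):
        print(f"STONITH: powering off {self.name}")

    def revoke_shared_storage(self):
        print(f"fencing {self.name} off the shared disks")

def fence(node, method="stonith"):
    # A dead node cannot be told apart from one that is merely hung,
    # so the cluster fences unconditionally before failing over.
    if method == "stonith":
        node.power_off()              # class 1: disable the node itself
    else:
        node.revoke_shared_storage()  # class 2: disallow resource access

fence(Node("node2"))                     # STONITH-style node fencing
fence(Node("node3"), method="resource")  # resource fencing
```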
https://en.wikipedia.org/wiki/Hanlin%20eReader
The Hanlin is an e-reader, an electronic book (e-book) reading device. The Hanlin v3 features a 6" (15 cm), 4-level grayscale electrophoretic display (E Ink material) with a resolution of 600×800 pixels (167 ppi), while the v3+ features a 16-level grayscale display. The Hanlin v5 Mini features a 5" (13 cm), 8-level grayscale electrophoretic display (E Ink material) with a resolution of 600×800 pixels (200 ppi). The device runs a Linux-based OS. The device is manufactured by the JinKe Electronic Company in China. It is rebranded by various OEMs and sold under the names BeBook, Walkbook, lBook, Iscriptum, Papyre, EZ Reader, Koobe and ECO Reader. The Hanlin eReader works best with EPUB, RTF, FB2, and Mobipocket documents because of their simplicity, interoperability, and low CPU processing requirements. These files also offer more zoom levels and more options, such as search, landscape mode, and text-to-speech, than PDF, DOC, HTML, or TXT. It also uses JinKe's proprietary WOLF format (file extension .wol).
Specifications of Hanlin models
Hardware
Size: 184 x 120.5 x 9.9 mm
Weight: 210 g, battery included (160 g for BeBook Mini)
Screen: 90 x 120 mm (3.54 x 4.72 inches), 600x800 pixels, black and white; 4/16 gray-scale, 166 dpi for Hanlin v3/v3+; 8 gray-scale, 200 dpi for Hanlin v5
Daylight readable; no backlight; portrait and landscape modes
SDRAM memory: 32 MB for the v3, 64 MB for the v3+/v6
SD card (supports SDIO): v3 supports up to 4 GB; v5 supports SDHC up to 16 GB (32 GB unofficially)
Connectivity: USB 1.1 (client only) for Hanlin v3, USB 2.0 for Hanlin v3+/v5
Software
Operating system: Linux
Document formats: PDF, TXT, RTF, DOC, HTML Help, FB2, HTML, WOL, DJVU, LIT, EPUB, PPT, Mobipocket
Archive support: ZIP, RAR
Supported image formats: TIFF, JPEG, GIF, BMP, PNG
Supported sound format: MP3
Product versions
V Series: V2, V3, V3+, V5, V60, V90
A Series: A6, A9, A90
See also
List of e-book readers
External links
http://www.bebook.net
https://en.wikipedia.org/wiki/Smallest-circle%20problem
The smallest-circle problem (also known as the minimum covering circle problem, bounding circle problem, least bounding circle problem, or smallest enclosing circle problem) is a mathematical problem of computing the smallest circle that contains all of a given set of points in the Euclidean plane. The corresponding problem in n-dimensional space, the smallest bounding sphere problem, is to compute the smallest n-sphere that contains all of a given set of points. The smallest-circle problem was initially proposed by the English mathematician James Joseph Sylvester in 1857. The smallest-circle problem in the plane is an example of a facility location problem (the 1-center problem) in which the location of a new facility must be chosen to provide service to a number of customers, minimizing the farthest distance that any customer must travel to reach the new facility. Both the smallest-circle problem in the plane and the smallest bounding sphere problem in any higher-dimensional space of bounded dimension are solvable in worst-case linear time.
Characterization
Most of the geometric approaches for the problem look for points that lie on the boundary of the minimum circle and are based on the following simple facts:
The minimum covering circle is unique.
The minimum covering circle of a set S can be determined by at most three points in S which lie on the boundary of the circle. If it is determined by only two points, then the line segment joining those two points must be a diameter of the minimum circle. If it is determined by three points, then the triangle consisting of those three points is not obtuse.
To see that the minimum covering circle is unique, let $S$ be any set of points in the plane, and suppose that there are two smallest enclosing disks of $S$, with centers at $c_1$ and $c_2$. Let $r$ be their shared radius, and let $d$ be the distance between their centers. Since $S$ is a subset of both disks, it is a subset of their intersection. However, their intersection is contained within the disk with center $\tfrac{1}{2}(c_1 + c_2)$ and radius $\sqrt{r^2 - d^2/4}$, which is smaller than $r$ whenever $d > 0$; this would contradict the minimality of $r$, so $c_1 = c_2$ and the minimum covering circle is unique.
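A popular practical method is Welzl's randomized algorithm, which runs in expected (rather than worst-case) linear time and leans directly on the boundary facts listed above. A minimal Python sketch, not tuned for numerical robustness:

```python
import math
import random

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _circle_two(a, b):
    # Smallest circle through two points: they span a diameter.
    c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return c, _dist(a, b) / 2

def _circle_three(a, b, c):
    # Circumcircle via the perpendicular-bisector determinant formula.
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None  # collinear points: no unique circumcircle
    a2, b2, c2 = (a[0]**2 + a[1]**2, b[0]**2 + b[1]**2, c[0]**2 + c[1]**2)
    ux = (a2 * (b[1] - c[1]) + b2 * (c[1] - a[1]) + c2 * (a[1] - b[1])) / d
    uy = (a2 * (c[0] - b[0]) + b2 * (a[0] - c[0]) + c2 * (b[0] - a[0])) / d
    return (ux, uy), _dist((ux, uy), a)

def _trivial(R):
    # Minimum circle determined by at most three boundary points.
    if not R:
        return (0.0, 0.0), 0.0
    if len(R) == 1:
        return R[0], 0.0
    if len(R) == 2:
        return _circle_two(R[0], R[1])
    for a, b in ((R[0], R[1]), (R[0], R[2]), (R[1], R[2])):
        c, r = _circle_two(a, b)
        if all(_dist(c, p) <= r + 1e-9 for p in R):
            return c, r
    return _circle_three(*R)

def welzl(points):
    """Smallest enclosing circle of a point set, expected O(n) time."""
    pts = list(points)
    random.shuffle(pts)

    def mec(n, R):
        if n == 0 or len(R) == 3:
            return _trivial(R)
        p = pts[n - 1]
        c, r = mec(n - 1, R)
        if _dist(c, p) <= r + 1e-9:
            return c, r
        return mec(n - 1, R + [p])  # p must lie on the boundary

    return mec(len(pts), [])

print(welzl([(0, 0), (1, 0), (0, 1), (0.4, 0.4)]))
# ((0.5, 0.5), 0.7071...) -- determined by the three outer points
```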
https://en.wikipedia.org/wiki/Nucleotide%20sugars%20metabolism
In nucleotide sugar metabolism a group of biochemicals known as nucleotide sugars act as donors for sugar residues in the glycosylation reactions that produce polysaccharides. They are substrates for glycosyltransferases. The nucleotide sugars are also intermediates in nucleotide sugar interconversions that produce some of the activated sugars needed for glycosylation reactions. Since most glycosylation takes place in the endoplasmic reticulum and golgi apparatus, there are a large family of nucleotide sugar transporters that allow nucleotide sugars to move from the cytoplasm, where they are produced, into the organelles where they are consumed. Nucleotide sugar metabolism is particularly well-studied in yeast, fungal pathogens, and bacterial pathogens, such as E. coli and Mycobacterium tuberculosis, since these molecules are required for the synthesis of glycoconjugates on the surfaces of these organisms. These glycoconjugates are virulence factors and components of the fungal and bacterial cell wall. These pathways are also studied in plants, but here the enzymes involved are less well understood.
https://en.wikipedia.org/wiki/Metal%E2%80%93semiconductor%20junction
In solid-state physics, a metal–semiconductor (M–S) junction is a type of electrical junction in which a metal comes in close contact with a semiconductor material. It is the oldest practical semiconductor device. M–S junctions can either be rectifying or non-rectifying. The rectifying metal–semiconductor junction forms a Schottky barrier, making a device known as a Schottky diode, while the non-rectifying junction is called an ohmic contact. (In contrast, a rectifying semiconductor–semiconductor junction, the most common semiconductor device today, is known as a p–n junction.) Metal–semiconductor junctions are crucial to the operation of all semiconductor devices. Usually an ohmic contact is desired, so that electrical charge can be conducted easily between the active region of a transistor and the external circuitry. Occasionally however a Schottky barrier is useful, as in Schottky diodes, Schottky transistors, and metal–semiconductor field effect transistors.
The critical parameter: Schottky barrier height
Whether a given metal–semiconductor junction is an ohmic contact or a Schottky barrier depends on the Schottky barrier height, ΦB, of the junction. For a sufficiently large Schottky barrier height, that is, ΦB significantly higher than the thermal energy kT, the semiconductor is depleted near the metal and behaves as a Schottky barrier. For lower Schottky barrier heights, the semiconductor is not depleted and instead forms an ohmic contact to the metal. The Schottky barrier height is defined differently for n-type and p-type semiconductors (being measured from the conduction band edge and valence band edge, respectively). The alignment of the semiconductor's bands near the junction is typically independent of the semiconductor's doping level, so the n-type and p-type Schottky barrier heights are ideally related to each other by
$\Phi_B^{(n)} + \Phi_B^{(p)} = E_g$
where $E_g$ is the semiconductor's band gap. In practice, the Schottky barrier height is not precisely constant across the
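A trivial Python helper expressing the ideal relation above; the numeric values are hypothetical, for illustration only:

```python
def p_type_barrier_eV(phi_b_n_eV, e_g_eV):
    """Ideal p-type barrier height from the n-type one: phi_n + phi_p = Eg."""
    return e_g_eV - phi_b_n_eV

# Hypothetical example values (not measured data): an n-type barrier
# of 0.67 eV on a semiconductor with a 1.12 eV band gap.
print(p_type_barrier_eV(0.67, 1.12))  # ~0.45 eV
```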
https://en.wikipedia.org/wiki/Hatta%20number
The Hatta number (Ha) was developed by Shirôji Hatta, who taught at Tohoku University. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second-order reaction (rA = k2CACB), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration CA,i; thus, the maximum rate of reaction is k2CA,iCB,bulkδL, and the Hatta number is
Ha = √(k2CB,bulkDA) ⁄ kL.
For a reaction of order m in A and order n in B:
Ha = √((2⁄(m + 1)) kmn CA,i^(m−1) CB,bulk^n DA) ⁄ kL.
It is an important parameter used in chemical reaction engineering.
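A small worked example of the second-order form (all numerical values, and the conventional regime cutoffs Ha < 0.3 and Ha > 3, are common textbook assumptions rather than figures from the excerpt):

```python
import math

# Hatta number for a second-order gas-liquid reaction (illustrative values)
k2 = 1.0          # second-order rate constant, m^3/(mol*s)
C_B = 1.0e2       # bulk liquid-phase reactant concentration, mol/m^3
D_A = 1.5e-9      # diffusivity of dissolved gas A in the liquid, m^2/s
k_L = 1.0e-4      # liquid-side mass-transfer coefficient, m/s

Ha = math.sqrt(k2 * C_B * D_A) / k_L
if Ha < 0.3:
    regime = "slow reaction: occurs mainly in the bulk liquid"
elif Ha > 3.0:
    regime = "fast reaction: consumed entirely within the film"
else:
    regime = "intermediate regime"
print(f"Ha = {Ha:.2f} -> {regime}")
```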
https://en.wikipedia.org/wiki/Gamow%20factor
The Gamow factor, Sommerfeld factor or Gamow–Sommerfeld factor, named after its discoverer George Gamow or after Arnold Sommerfeld, is a probability factor for two nuclear particles' chance of overcoming the Coulomb barrier in order to undergo nuclear reactions, for example in nuclear fusion. By classical physics, there is almost no possibility for protons to fuse by crossing each other's Coulomb barrier at temperatures commonly observed to cause fusion, such as those found in the Sun. When George Gamow instead applied quantum mechanics to the problem, he found that there was a significant chance for the fusion due to tunneling. The probability of two nuclear particles overcoming their electrostatic barriers is given by the following equation:
PG(E) = exp(−√(EG ⁄ E)),
where EG is the Gamow energy,
EG = 2mrc²(παZaZb)².
Here, mr is the reduced mass of the two particles. The constant α is the fine structure constant, c is the speed of light, and Za and Zb are the respective atomic numbers of each particle. While the probability of overcoming the Coulomb barrier increases rapidly with increasing particle energy, for a given temperature, the probability of a particle having such an energy falls off very fast, as described by the Maxwell–Boltzmann distribution. Gamow found that, taken together, these effects mean that for any given temperature, the particles that fuse are mostly in a temperature-dependent narrow range of energies known as the Gamow window. Derivation Gamow first solved the one-dimensional case of quantum tunneling using the WKB approximation. Considering a wave function of a particle of mass m, we take area 1 to be where a wave is emitted, area 2 the potential barrier which has height V and width l (at 0 ≤ x ≤ l), and area 3 its other side, where the wave is arriving, partly transmitted and partly reflected. For a wave number k and energy E we get:
ψ1 = Ae^(ikx) + A′e^(−ikx), ψ2 = Be^(αx) + B′e^(−αx), ψ3 = Ce^(ikx),
where ℏk = √(2mE) and ℏα = √(2m(V − E)). This is solved, for given A and α, by taking the boundary conditions at both barrier edges, at x = 0 and x = l, where both ψ and its derivative must be equal
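A short numerical check of the formula above for proton–proton fusion (the 1 keV collision energy is an arbitrary illustrative choice; the computation is mine, not from the article):

```python
import math

# Gamow factor P = exp(-sqrt(E_G / E)) for proton-proton collisions.
alpha = 1 / 137.036        # fine-structure constant
m_p_c2 = 938.272e3         # proton rest energy, keV
m_r_c2 = m_p_c2 / 2        # reduced mass of two protons, keV
Z_a = Z_b = 1

# Gamow energy E_G = 2 * m_r * c^2 * (pi * alpha * Z_a * Z_b)^2
E_G = 2 * m_r_c2 * (math.pi * alpha * Z_a * Z_b) ** 2
print(f"E_G ~ {E_G:.0f} keV")        # ~493 keV for p-p

E = 1.0                    # collision energy, keV (illustrative)
P = math.exp(-math.sqrt(E_G / E))
print(f"tunneling probability at {E} keV: {P:.2e}")
```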
https://en.wikipedia.org/wiki/Fides%20%28reliability%29
Fides (Latin: trust) is a guide allowing estimated reliability calculation for electronic components and systems. The reliability prediction is generally expressed in FIT (number of failures per 10⁹ hours) or MTBF (Mean Time Between Failures). This guide provides reliability data for RAMS (Reliability, Availability, Maintainability, Safety) studies. Purpose Fides is a DGA (French armament industry supervision agency) study conducted by a European consortium formed by eight industrial partners from the fields of aeronautics and defence: Airbus France Eurocopter Nexter Electronics MBDA Missiles Systems Thales Services Thales Airborne Systems Thales Avionics Thales Underwater Systems The first aim of the Fides project was to develop a new reliability assessment method for electronic components which takes into consideration COTS (commercial off-the-shelf) and specific parts and the new technologies. The global aim is to find a replacement for the worldwide reference MIL-HDBK-217F, which is old and has not been revised since 1995 (issue F, notice 2). Moreover, MIL-HDBK-217F is very pessimistic for COTS components, which are more and more widely used in military and aerospace systems. The second aim was to write a reliability engineering guide in order to provide engineering processes and tools to improve reliability in the development of new electronic systems. Method content The Fides guide is made of two distinct parts. The first is a reliability prediction calculation method covering the main electronic component families and complete subassemblies like hard disks or LCD displays. The second part is a process control and audit guide, a tool to assess the reliability quality and technical know-how over the operating life of the studied product, its operational specification and maintenance. Availability The Fides guide is freely available on the Fides reliability website. Standardization The French standardisation organisation UTE (Union Technique de l'Electricité) h
https://en.wikipedia.org/wiki/AT%26T%20DSP1
The AT&T DSP1 was a pioneering digital signal processor (DSP) created by Bell Labs. The DSP1 started in 1977 with a Bell Labs study that recommended creating a large-scale integrated circuit for digital signal processing. It described a basic DSP architecture with multiplier/accumulator, addressing unit, and control; the I/O, data, and control memories were planned to be off-chip until large-scale integration could make a single chip implementation feasible. The DSP1 specification was completed in 1978, with first samples tested in May 1979. This first implementation was a single-chip DSP, containing all functional elements found in today's DSPs including multiplier–accumulator (MAC), parallel addressing unit, control, control memory, data memory, and I/O. It was designed with a 20-bit fixed point data format, and 16-bit coefficients and instructions, implemented in a DRAM process technology. By October 1979 other Bell Labs groups began development using the DSP1, most notably as a key component in AT&T's 5ESS switch.
https://en.wikipedia.org/wiki/Carcinoembryonic%20antigen%20peptide-1
Carcinoembryonic antigen peptide-1 is a nine-amino-acid peptide fragment of carcinoembryonic antigen (CEA), a protein that is overexpressed in several cancer cell types, including gastrointestinal, breast, and non-small-cell lung cancers.
Synonyms:
CAP-1
Carcinoembryonic Antigen Peptide-1
Carcinoembryonic Peptide-1
CEA Peptide 1
CEA Peptide 9-mer
External links
National Cancer Institute Definition of carcinoembryonic antigen peptide 1
Tumor markers
Peptides
https://en.wikipedia.org/wiki/Nucleotide%20sugar
Nucleotide sugars are the activated forms of monosaccharides. Nucleotide sugars act as glycosyl donors in glycosylation reactions. Those reactions are catalyzed by a group of enzymes called glycosyltransferases. History The anabolism of oligosaccharides - and, hence, the role of nucleotide sugars - was not clear until the 1950s when Leloir and his coworkers found that the key enzymes in this process are the glycosyltransferases. These enzymes transfer a glycosyl group from a sugar nucleotide to an acceptor. Biological importance and energetics To act as glycosyl donors, those monosaccharides should exist in a highly energetic form. This occurs as a result of a reaction between nucleoside triphosphate (NTP) and glycosyl monophosphate (phosphate at anomeric carbon). The recent discovery of the reversibility of many glycosyltransferase-catalyzed reactions calls into question the designation of sugar nucleotides as 'activated' donors. Types There are nine sugar nucleotides in humans which act as glycosyl donors and they can be classified depending on the type of the nucleoside forming them: Uridine Diphosphate: UDP-α-D-Glc, UDP-α-D-Gal, UDP-α-D-GalNAc, UDP-α-D-GlcNAc, UDP-α-D-GlcA, UDP-α-D-Xyl Guanosine Diphosphate: GDP-α-D-Man, GDP-β-L-Fuc. Cytidine Monophosphate: CMP-β-D-Neu5Ac; in humans, it is the only nucleotide sugar in the form of nucleotide monophosphate. Cytidine Diphosphate: CDP-D-Ribitol (i.e. CMP-[ribitol phosphate]); though not a sugar, the phosphorylated sugar alcohol ribitol phosphate is incorporated into matriglycan as if it were a monosaccharide. In other forms of life many other sugars are used and various donors are utilized for them. All five of the common nucleosides are used as a base for a nucleotide sugar donor somewhere in nature. As examples, CDP-glucose and TDP-glucose give rise to various other forms of CDP and TDP-sugar donor nucleotides. Structures Listed below are the structures of some nucleotide sugars (one example from each type
https://en.wikipedia.org/wiki/Guanosine%20diphosphate%20mannose
Guanosine diphosphate mannose or GDP-mannose is a nucleotide sugar that is a substrate for glycosyltransferase reactions in metabolism. This compound is a substrate for enzymes called mannosyltransferases. As the donor of activated mannose in glycosylation reactions, GDP-mannose is essential in eukaryotes. Biosynthesis GDP-mannose is produced from GTP and mannose-1-phosphate by the enzyme mannose-1-phosphate guanylyltransferase. One of the enzymes from the family of nucleotidyltransferases, GDP-mannose pyrophosphorylase (GDP-MP) is a ubiquitous enzyme found in bacteria, fungi, plants, and animals.
https://en.wikipedia.org/wiki/Microbarom
In acoustics, microbaroms, also known as the "voice of the sea", are a class of atmospheric infrasonic waves generated in marine storms by a non-linear interaction of ocean surface waves with the atmosphere. They typically have narrow-band, nearly sinusoidal waveforms with amplitudes up to a few microbars, and wave periods near 5 seconds (0.2 hertz). Due to low atmospheric absorption at these low frequencies, microbaroms can propagate thousands of kilometers in the atmosphere, and can be readily detected by widely separated instruments on the Earth's surface. History The phenomenon was discovered by accident: aerologists working at marine hydrometeorological stations and aboard watercraft drew attention to the strange pain that a person experiences when approaching the surface of a standard meteorological probe (a balloon filled with hydrogen). During one of the expeditions, this effect was demonstrated to the Soviet academician V. V. Shuleikin by the chief meteorologist V. A. Berezkin. This phenomenon drew genuine interest among scientists; in order to study it, special equipment was designed to record powerful but low-frequency vibrations that are not audible to human ears. As a result of several series of experiments, the physical essence of this phenomenon was clarified, and in 1935 V. V. Shuleikin published his first work entirely devoted to the infrasonic nature of the "voice of the sea". Microbaroms were first described in the United States in 1939 by the American seismologists Hugo Benioff and Beno Gutenberg at the California Institute of Technology at Pasadena, based on observations from an electromagnetic microbarograph, consisting of a wooden box with a low-frequency loudspeaker mounted on top. They noted their similarity to microseisms observed on seismographs, and correctly hypothesized that these signals were the result of low-pressure systems in the Northeast Pacific Ocean. In 1945, the Swiss geoscientist L. Saxer showed the first re
https://en.wikipedia.org/wiki/Transmission%20control%20room
A transmission control room (TCR), transmission suite, Tx room, or presentation suite is a room at broadcast facilities and television stations around the world. Compared to a master control room, it is usually smaller in size and is a scaled-down version of centralcasting. A TCR or presentation suite will be staffed 24/7 by presentation coordinators and tape operators and will be fitted out with video playout systems, often using server-based broadcast automation. For operational and content-quality reasons, no more than two television channels are managed from one TCR. Channels with live content and production switching requirements, like sports channels, have their own dedicated TCRs. A television station may have several TCRs depending on the number of channels it broadcasts. Presentation suite The presentation suite is staffed 24/7 by on-air presentation coordinators who are responsible for the continuity and punctual playout of scheduled broadcast programming. Programming may be live from the television studio or played from video tape or from video server playout. When broadcast programming is live, the presentation coordinator will override the broadcast automation system and manually switch television programming. The presentation coordinator will directly coordinate live television programming going to air in consultation with master control and the production assistant (PA) or the director's assistant (DA). The presentation coordinator will arrange for a program source to be allocated by master control, advise the DA as to the start time, and count the production in from 10 seconds to first frame of picture; the DA will count the production out to the television commercial break, and so it continues to the end of the program. Live programming is unpredictable and will affect the timing of scheduled programming events; the presentation coordinator adjusts programming to bring the schedule back on time by adding or removing fill con
https://en.wikipedia.org/wiki/Candied%20fruit
Candied fruit, also known as glacé fruit, is whole fruit, smaller pieces of fruit, or pieces of peel, placed in heated sugar syrup, which absorbs the moisture from within the fruit and eventually preserves it. Depending on the size and type of fruit, this process of preservation can take from several days to several months. This process allows the fruit to remain edible for up to a year. It has existed since the 14th century. The continual process of drenching the fruit in syrup causes the fruit to become saturated with sugar, preventing the growth of spoilage microorganisms due to the resulting unfavourable osmotic pressure. Fruits which are commonly candied include cherries, pineapple, greengages, pears, peaches and melon, as well as ginger root. The principal candied peels are orange and citron; these, together with candied lemon peel, are the usual ingredients of mixed chopped peel. Candied vegetables are also made, from vegetables such as pumpkin, turnip and carrot. Recipes vary from region to region, but the general principle is to boil the fruit, steep it in increasingly stronger sugar solutions for a number of weeks, and then dry off any remaining water. Uses As well as being eaten as snacks, candied fruits such as cherries and candied peels are commonly used in fruitcakes or pancakes. See also
https://en.wikipedia.org/wiki/P1-derived%20artificial%20chromosome
A P1-derived artificial chromosome, or PAC, is a DNA construct derived from the DNA of P1 bacteriophages and bacterial artificial chromosomes. It can carry large amounts (about 100–300 kilobases) of other sequences for a variety of bioengineering purposes in bacteria. It is one type of efficient cloning vector used to clone DNA fragments (100- to 300-kb insert size; average, 150 kb) in Escherichia coli cells. History of PAC The bacteriophage P1 was first isolated by Giuseppe Bertani. In his study, he noticed that the lysogen produced abnormal non-continuous phages, and later found that phage P1 was produced from the Lisbonne lysogen strain, in addition to bacteriophages P2 and P3. P1 has the ability to package portions of a host bacterium's genome and transfer that DNA to other bacterial hosts, a process known as generalized transduction. Later on, P1 was developed as a cloning vector by Nat Sternberg and colleagues in the 1990s. It is capable of Cre-Lox recombination. The P1 vector system was first developed to carry relatively large DNA fragments in plasmids (95–100 kb). Construction PAC has 2 loxP sites, which can be used by phage recombinases to form the product from its cre-gene recognition during Cre-Lox recombination. This process circularizes the DNA strand, forming a plasmid, which can then be inserted into bacteria such as Escherichia coli. The transformation is usually done by electroporation, which uses electricity to allow the plasmids to permeate into the cells. If high expression levels are desired, the P1 lytic replicon can be used in constructs. Electroporation allows for lysogeny of PACs so that they can replicate within cells without disturbing other chromosomes. Comparison with other artificial chromosomes PAC is one of the artificial chromosome vectors. Some other artificial chromosomes include the bacterial artificial chromosome, the yeast artificial chromosome and the human artificial chromosome. Compared to other artificial chromosomes, it can carry rel
https://en.wikipedia.org/wiki/Immunocontraception
Immunocontraception is the use of an animal's immune system to prevent it from producing offspring. Contraceptives of this type are not currently approved for human use. Typically immunocontraception involves the administration of a vaccine that induces an adaptive immune response which causes an animal to become temporarily infertile. Contraceptive vaccines have been used in numerous settings for the control of wildlife populations. However, experts in the field believe that major innovations are required before immunocontraception can become a practical form of contraception for human beings. Thus far immunocontraception has focused exclusively on mammals. There are several targets in mammalian sexual reproduction for immune inhibition. They can be organized into three categories. Gamete production Organisms that undergo sexual reproduction must first produce gametes, cells which have half the typical number of chromosomes of the species. Often immunity that prevents gamete production also inhibits secondary sexual characteristics and so has effects similar to castration. Gamete function After gametes are produced in sexual reproduction, two gametes must combine during fertilization to form a zygote, which again has the full typical number of chromosomes of the species. Methods that target gamete function prevent this fertilization from occurring and are true contraceptives. Gamete outcome Shortly after fertilization a zygote develops into a multicellular embryo that in turn develops into a larger organism. In placental mammals this process of gestation occurs inside the reproductive system of the mother of the embryo. Immunity that targets gamete outcome induces abortion of an embryo while it is within its mother's reproductive system. Medical use Immunocontraception is not currently available but is under study. Obstacles Variability of immunogenicity In order for an immunocontraceptive to be acceptable for human use, it would need to meet or exceed t
https://en.wikipedia.org/wiki/Capacity%20of%20a%20set
In mathematics, the capacity of a set in Euclidean space is a measure of the "size" of that set. Unlike, say, Lebesgue measure, which measures a set's volume or physical extent, capacity is a mathematical analogue of a set's ability to hold electrical charge. More precisely, it is the capacitance of the set: the total charge a set can hold while maintaining a given potential energy. The potential energy is computed with respect to an idealized ground at infinity for the harmonic or Newtonian capacity, and with respect to a surface for the condenser capacity. Historical note The notion of capacity of a set and of "capacitable" set was introduced by Gustave Choquet in 1950. Definitions Condenser capacity Let Σ be a closed, smooth, (n − 1)-dimensional hypersurface in n-dimensional Euclidean space ℝn, n ≥ 3; K will denote the n-dimensional compact (i.e., closed and bounded) set of which Σ is the boundary. Let S be another (n − 1)-dimensional hypersurface that encloses Σ: in reference to its origins in electromagnetism, the pair (Σ, S) is known as a condenser. The condenser capacity of Σ relative to S, denoted C(Σ, S) or cap(Σ, S), is given by the surface integral
C(Σ, S) = −1⁄((n − 2)σn) ∫S′ (∂u⁄∂ν) dσ′,
where: u is the unique harmonic function defined on the region D between Σ and S with the boundary conditions u(x) = 1 on Σ and u(x) = 0 on S; S′ is any intermediate surface between Σ and S; ν is the outward unit normal field to S′ and ∂u⁄∂ν is the normal derivative of u across S′; and σn = 2π^(n⁄2) ⁄ Γ(n⁄2) is the surface area of the unit sphere in ℝn. C(Σ, S) can be equivalently defined by the volume integral
C(Σ, S) = 1⁄((n − 2)σn) ∫D |∇u|² dx.
The condenser capacity also has a variational characterization: C(Σ, S) is the infimum of the Dirichlet energy functional
I[v] = 1⁄((n − 2)σn) ∫D |∇v|² dx
over all continuously differentiable functions v on D with v(x) = 1 on Σ and v(x) = 0 on S. Harmonic/Newtonian capacity Heuristically, the harmonic capacity of K, the region bounded by Σ, can be found by taking the condenser capa
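A worked example under the definitions above (the computation is mine; the classical concentric-spheres condenser in ℝ³ is an assumed setting, not taken from the excerpt):

```latex
% Condenser capacity of two concentric spheres in R^3 (n = 3, (n-2)\sigma_3 = 4\pi).
% Harmonic potential between |x| = r (where u = 1) and |x| = R (where u = 0):
\[
  u(x) = \frac{|x|^{-1} - R^{-1}}{r^{-1} - R^{-1}}, \qquad r \le |x| \le R,
  \qquad |\nabla u(x)| = \frac{|x|^{-2}}{r^{-1} - R^{-1}} .
\]
% The volume-integral definition then gives
\[
  C(\Sigma, S)
  = \frac{1}{4\pi} \int_D |\nabla u|^2 \, dx
  = \frac{1}{4\pi} \cdot \frac{4\pi \left( r^{-1} - R^{-1} \right)}{\left( r^{-1} - R^{-1} \right)^2}
  = \frac{rR}{R - r} ,
\]
% and letting R tend to infinity recovers the harmonic (Newtonian) capacity
% of the ball of radius r, namely C(K) = r.
```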
https://en.wikipedia.org/wiki/Autologous%20lymphocyte
In transplantation, autologous lymphocytes refer to a person's own white blood cells. Lymphocytes have a number of roles in the immune system, including the production of antibodies and other substances that fight infections and other diseases.
https://en.wikipedia.org/wiki/Ideomotor%20apraxia
Ideomotor apraxia, often abbreviated as IMA, is a neurological disorder characterized by the inability to correctly imitate hand gestures and voluntarily mime tool use, e.g. pretending to brush one's hair. The ability to use tools spontaneously, such as brushing one's hair in the morning without being instructed to do so, may remain intact, but is often lost. The general concept of apraxia and the classification of ideomotor apraxia were developed in Germany in the late 19th and early 20th centuries through the work of Hugo Liepmann, Adolph Kussmaul, Arnold Pick, Paul Flechsig, Hermann Munk, Carl Nothnagel, Theodor Meynert, and the linguist Heymann Steinthal, among others. Ideomotor apraxia was classified as "ideo-kinetic apraxia" by Liepmann due to the apparent dissociation of the idea of the action from its execution. The classifications of the various subtypes are not well defined at present, however, owing to issues of diagnosis and pathophysiology. Ideomotor apraxia is hypothesized to result from a disruption of the system that relates stored tool-use and gesture information with the state of the body to produce the proper motor output. This system is thought to be related to the areas of the brain most often seen to be damaged when ideomotor apraxia is present: the left parietal lobe and the premotor cortex. Little can be done at present to reverse the motor deficit seen in ideomotor apraxia, although the extent of dysfunction it induces is not entirely clear. Signs and symptoms Ideomotor apraxia (IMA) impinges on one's ability to carry out common, familiar actions on command, such as waving goodbye. Persons with IMA exhibit a loss of ability to carry out motor movements, and may show errors in how they hold and move a tool when attempting its correct function. One of the defining symptoms of ideomotor apraxia is the inability to pantomime tool use. As an example, if a normal individual were handed a comb and instructed to pretend to brush his hair, he would grasp the comb proper
https://en.wikipedia.org/wiki/Vierordt%27s%20law
Karl von Vierordt in 1868 was the first to record a law of time perception which relates perceived duration to actual duration over different interval magnitudes, and according to task complexity. Vierordt's law is "a robust phenomenon in time estimation research that has been observed with different time estimation methods". It states that, retrospectively, "short" intervals of time tend to be overestimated, and "long" intervals of time tend to be underestimated. The other major paradigm of time estimation methodology measures time prospectively. See also
https://en.wikipedia.org/wiki/Strengthening%20mechanisms%20of%20materials
Methods have been devised to modify the yield strength, ductility, and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass, a binary alloy of copper and zinc, has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on an anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths. Basic description Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. If we want to enhance a material's mechanical properties (i.e. increase the yield and tensile strength), we simply need to introduce a mechanism which impedes the mobility of these dislocations. Whatever the mechanism may be (work hardening, grain size reduction, etc.), they all hinder dislocation motion and render the material stronger than before. The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned due to stress field interactions with other dislocations and solute particles, creating physical barriers from second phase p
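As one quantitative illustration of the grain-size-reduction mechanism mentioned above, the Hall–Petch relation (a standard model, not stated in the excerpt) ties yield strength to grain diameter; a minimal sketch with constants of the right order for a mild steel (assumed, not from the article):

```python
import math

# Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d)
sigma_0 = 70.0      # friction stress, MPa (illustrative)
k = 0.74            # Hall-Petch coefficient, MPa*m^0.5 (illustrative)

for d_um in [100, 25, 5, 1]:             # grain diameter in micrometres
    d = d_um * 1e-6                      # convert to metres
    sigma_y = sigma_0 + k / math.sqrt(d)
    print(f"d = {d_um:>3} um -> yield strength ~ {sigma_y:5.0f} MPa")
```

Finer grains mean more grain boundaries per unit volume, each acting as a barrier to dislocation motion, which is why the computed strength rises as d shrinks.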
https://en.wikipedia.org/wiki/Schottky%20anomaly
The Schottky anomaly is an effect observed in solid-state physics where the specific heat capacity of a solid at low temperature has a peak. It is called anomalous because the heat capacity usually increases with temperature, or stays constant. It occurs in systems with a limited number of energy levels, so that E(T) increases in sharp steps, one for each energy level that becomes available. Since Cv = dE⁄dT, the heat capacity will experience a large peak as the temperature crosses over from one step to the next. This effect can be explained by looking at the change in entropy of the system. At zero temperature only the lowest energy level is occupied, entropy is zero, and there is very little probability of a transition to a higher energy level. As the temperature increases, there is an increase in entropy and thus the probability of a transition goes up. As the temperature approaches the difference between the energy levels there is a broad peak in the specific heat corresponding to a large change in entropy for a small change in temperature. At high temperatures all of the levels are populated evenly, so there is again little change in entropy for small changes in temperature, and thus a lower specific heat capacity. For a two-level system the specific heat coming from the Schottky anomaly has the form
C = kB (Δ⁄kBT)² e^(Δ⁄kBT) ⁄ (1 + e^(Δ⁄kBT))²,
where Δ is the energy gap between the two levels. This anomaly is usually seen in paramagnetic salts or even ordinary glass (due to paramagnetic iron impurities) at low temperature. At high temperature the paramagnetic spins have many spin states available, but at low temperatures some of the spin states are "frozen out" (having too high energy due to crystal field splitting), and the entropy per paramagnetic atom is lowered. It was named after Walter H. Schottky. Details In a system where particles can have either a state of energy 0 or ε, the expected value of the energy of a particle in the canonical ensemble is
⟨E⟩ = ε e^(−βε) ⁄ (1 + e^(−βε)) = ε ⁄ (e^(βε) + 1),
with β = 1⁄(kBT) the inverse temperature and kB the Boltzman
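A quick numerical scan of the two-level expression above, locating the peak (the unit choice is arbitrary; a sketch, not from the article):

```python
import math

def schottky_c(T, delta, k_B=1.0):
    """Two-level Schottky specific heat per particle, in units of k_B."""
    x = delta / (k_B * T)
    return x**2 * math.exp(x) / (1.0 + math.exp(x)) ** 2

delta = 1.0  # level splitting in arbitrary energy units
# Scan temperatures and keep the one with maximal specific heat.
c_max, t_max = max((schottky_c(t / 100, delta), t / 100)
                   for t in range(5, 300))
print(f"peak C ~ {c_max:.2f} k_B at k_B*T ~ {t_max:.2f} * delta")  # ~0.44 k_B near 0.42*delta
```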
https://en.wikipedia.org/wiki/Schwarzschild%20criterion
Discovered by Karl Schwarzschild, the Schwarzschild criterion is a criterion in astrophysics according to which a stellar medium is stable against convection when the rate of change in temperature (T) with altitude (Z) satisfies
dT⁄dZ > −g⁄cp,
where g is the gravitational acceleration and cp is the heat capacity at constant pressure. If a gas is unstable against convection, then an element displaced upwards is buoyant and will keep rising, while an element displaced downwards is denser than its surroundings and will continue to sink. Therefore, the Schwarzschild criterion dictates whether an element of a star will rise or sink if displaced by random fluctuations within the star, or whether the forces the element experiences will return it to its original position. For the Schwarzschild criterion to hold, the displaced element must have a bulk velocity which is highly subsonic. If this is the case, then the time over which the pressure surrounding the element changes is much longer than the time it takes for a sound wave to travel through the element and smooth out pressure differences between the element and its surroundings. If this were not the case, the element would not hold together as it traveled through the star. In order to keep rising or sinking in the star, the displaced element must not be able to become the same density as the gas surrounding it. In other words, it must respond adiabatically to its surroundings. In order for this to be true, it must move fast enough that there is insufficient time for the element to exchange heat with its surroundings. See also Archimedes' principle Brunt–Väisälä frequency Convection
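A minimal check of the inequality with illustrative near-surface solar numbers (the values are assumptions for illustration, not from the article):

```python
# Schwarzschild stability check: stable if dT/dZ > -g / c_p
g = 274.0       # solar surface gravity, m/s^2 (illustrative)
c_p = 2.5e4     # specific heat at constant pressure, J/(kg*K) (illustrative)
adiabatic = -g / c_p                   # ~ -0.011 K/m

for dT_dZ in (-0.005, -0.02):          # candidate temperature gradients, K/m
    stable = dT_dZ > adiabatic
    print(f"dT/dZ = {dT_dZ} K/m -> {'stable' if stable else 'convective'}")
```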
https://en.wikipedia.org/wiki/Transumbilical%20plane
The transumbilical plane or umbilical plane, one of the transverse planes in human anatomy, is a horizontal line that passes through the abdomen at the level of the navel (or umbilicus). In physical examination, clinicians use the transumbilical plane and its intersection with the median plane to divide the abdomen into four quadrants.
https://en.wikipedia.org/wiki/Apply
In mathematics and computer science, apply is a function that applies a function to arguments. It is central to programming languages derived from lambda calculus, such as LISP and Scheme, and also to functional languages. It has a role in the study of the denotational semantics of computer programs, because it is a continuous function on complete partial orders. Apply is also a continuous function in homotopy theory, and, indeed, underpins the entire theory: it allows a homotopy deformation to be viewed as a continuous path in the space of functions. Likewise, valid mutations (refactorings) of computer programs can be seen as those that are "continuous" in the Scott topology. The most general setting for apply is in category theory, where it is right adjoint to currying in closed monoidal categories. A special case of this are the Cartesian closed categories, whose internal language is simply typed lambda calculus. Programming In computer programming, apply applies a function to a list of arguments. Eval and apply are the two interdependent components of the eval-apply cycle, which is the essence of evaluating Lisp, described in SICP. Function application corresponds to beta reduction in lambda calculus. Apply function Apply is also the name of a special function in many languages, which takes a function and a list, and uses the list as the function's own argument list, as if the function were called with the elements of the list as the arguments. This is important in languages with variadic functions, because this is the only way to call a function with an indeterminate (at compile time) number of arguments. Common Lisp and Scheme In Common Lisp apply is a function that applies a function to a list of arguments (note here that "+" is a variadic function that takes any number of arguments): (apply #'+ (list 1 2)) Similarly in Scheme: (apply + (list 1 2)) C++ In C++, bind is used either via the std namespace (std::bind) or via the boost namespace (boost::bind). C# and Java In C# an
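In Python (a language not covered in the excerpt), the same idea is expressed by argument unpacking, which replaced the historical apply() built-in:

```python
# Applying a function to a list of arguments in Python: f(*args) spreads the
# list into the call's argument positions, playing the role of apply.
def add(a, b, c):
    return a + b + c

args = [1, 2, 3]
print(add(*args))                        # 6

# The variadic case, mirroring (apply #'+ (list 1 2)) in Common Lisp:
print((lambda *xs: sum(xs))(*[1, 2]))    # 3
```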
https://en.wikipedia.org/wiki/Superior%20mesenteric%20lymph%20nodes
The superior mesenteric lymph nodes may be divided into three principal groups: mesenteric lymph nodes ileocolic lymph nodes mesocolic lymph nodes Structure Mesenteric lymph nodes The mesenteric lymph nodes or mesenteric glands are one of the three principal groups of superior mesenteric lymph nodes and lie between the layers of the mesentery. They number from one hundred to one hundred and fifty, and are sited as two main groups: one ileocolic group lying close to the wall of the small intestine, among the terminal twigs of the superior mesenteric artery; a second larger mesocolic group placed in relation to the loops and primary branches of the vessels. Ileocolic lymph nodes The ileocolic lymph nodes, from ten to twenty in number, form a chain around the ileocolic artery, but tend to subdivide into two groups, one near the duodenum and the other on the lower part of the trunk of the artery. Where the vessel divides into its terminal branches the chain is broken up into several groups: (a) ileal, in relation to the ileal branch of the artery; (b) anterior ileocolic, usually of three glands, in the ileocolic fold, near the wall of the cecum; (c) posterior ileocolic, mostly placed in the angle between the ileum and the colon, but partly lying behind the cecum at its junction with the ascending colon; (d) a single gland, between the layers of the mesenteriole of the appendix; (e) right colic, along the medial side of the ascending colon. Mesocolic lymph nodes The mesocolic lymph nodes are numerous, and lie between the layers of the transverse mesocolon, in close relation to the transverse colon; they are best developed in the neighborhood of the right and left colic flexures. One or two small glands are occasionally seen along the trunk of the right colic artery and others are found in relation to the trunk and branches of the middle colic artery. Function The superior mesenteric glands receive lymph from the jejunum, ileum, cecum, vermiform pr
https://en.wikipedia.org/wiki/Celiac%20lymph%20nodes
The celiac lymph nodes are associated with the branches of the celiac artery. Other lymph nodes in the abdomen are associated with the superior and inferior mesenteric arteries. The celiac lymph nodes are grouped into three sets: the gastric, hepatic and splenic lymph nodes. They receive lymph from the stomach, duodenum, pancreas, spleen, liver, and gall bladder. Additional images
https://en.wikipedia.org/wiki/National%20Institute%20of%20Astrophysics%2C%20Optics%20and%20Electronics
The National Institute of Astrophysics, Optics and Electronics (in Spanish: Instituto Nacional de Astrofísica, Óptica y Electrónica, INAOE) is a Mexican science research institute located in Tonantzintla, Puebla. Founded by presidential decree on November 12, 1971, it has over 100 researchers in astrophysics, optics, electronics and computer science, with postgraduate programs in these areas. INAOE is one of 30 public research centers sponsored by the National Council of Science and Technology of Mexico (CONACyT). The Institute, in partnership with the University of Massachusetts Amherst, developed the Large Millimeter Telescope / Gran Telescopio Milimétrico on the Puebla–Veracruz border. The asteroid 14674 INAOE was named after this institute.
Structure
There are four departments with a number of research groups and laboratories:
Astrophysics (José Ramón Valdés Parra): Visible and High Energies Astronomical Instrumentation Laboratory (Esperanza Carrasco-Licea), Millimeter Wavelength Instrumentation Laboratory, Fourier Spectroscopy Laboratory (Fabián Rosales), Photographic Plates Collection (Raquel Díaz Hernández)
Computer Sciences (Ariel Carrasco Ochoa): Machine Learning and Pattern Recognition, Natural Language Processing, Computer Perception, System Engineering
Electronics (Alfonso Torres Jacome): Microelectronics, Integrated Circuit Design, Electronic Instrumentation, Communications
Optics (Fermín Granados Agustín): Science Group and Optoelectronics Engineering (CIOE), Image-Science Group and Digital Color, Photonics, Optics Instrumentation, Quantum Optics, Diffractive Optics, Optoelectronics, Imaging Science, Biophotonics, Optical Communications and Optoelectronics, Optic Fibers, Holography, Imaging and Digital Color, Optical Instrumentation, Optical Microscopy and Dimensional Metrology, Diffractive Optics, Biomedical Optics, Thin-films
See also
University of California High-Performance AstroComputing Center
https://en.wikipedia.org/wiki/Sacral%20lymph%20nodes
The sacral lymph nodes are placed in the concavity of the sacrum, in relation to the middle and lateral sacral arteries; they receive lymphatics from the rectum and posterior wall of the pelvis.
https://en.wikipedia.org/wiki/Lateral%20shoot
A lateral shoot, commonly known as a branch, is a part of a plant's shoot system that develops from axillary buds on the stem's surface, extending laterally from the plant's stem. Importance to photosynthesis As a plant grows, it requires more energy; it must also out-compete nearby plants for this energy. One way a plant can compete for this energy is to increase its height; another is to increase its overall surface area. That is to say, the more lateral shoots a plant develops, the more foliage it can support, which increases how much photosynthesis the plant can perform, since it provides more area for the plant to take up carbon dioxide as well as sunlight. Genes, transcription factors, and growth Through testing with Arabidopsis thaliana (a plant considered a model organism for plant genetic studies), genes including MAX1 and MAX2 have been found to affect the growth of lateral shoots. Knockouts of these genes cause abnormal proliferation in the affected plants, implying that they are used for repressing such growth in wild-type plants. In another set of experiments with Arabidopsis thaliana testing genes involved in the plant hormone florigen, two genes, FT and TSF (abbreviations for Flowering Locus T and Twin Sister of FT), appear to affect lateral shoots negatively when knocked out. These mutants show slower growth and improper formation of lateral shoots, which could also mean that lateral shoots are important to florigen's function. Along with general growth, there are also transcription factors that directly affect the production of additional lateral shoots, such as the TCP family (also known as Teosinte branched 1/cycloidea/proliferating cell factor), plant-specific proteins that suppress lateral shoot branching. Additionally, the TCP family has been found to be partially responsible for inhibiting the cell's growth-regulating factors (GRFs), which means it also inhibits cell proliferation. See also Apical dominance Sho
https://en.wikipedia.org/wiki/Lajos%20Tak%C3%A1cs
Lajos Takács (August 21, 1924 (Maglód) – December 4, 2015) was a Hungarian mathematician, known for his contributions to probability theory and, in particular, queueing theory. He wrote over two hundred scientific papers and six books. He studied at the Technical University of Budapest (1943–1948), taking courses with Charles Jordan, and received an M.S. for his dissertation On a Probability-theoretical Investigation of Brownian Motion (1948). From 1945 to 1948 he was a student assistant to Professor Zoltán Bay and participated in his famous experiment of receiving microwave echoes from the Moon (1946). In 1957 he received the Academic Doctor's Degree in Mathematics for his thesis entitled "Stochastic processes arising in the theory of particle counters" (1957). He worked as a mathematician at the Tungsram Research Laboratory (1948–55) and the Research Institute for Mathematics of the Hungarian Academy of Sciences (1950–58), and was an associate professor in the Department of Mathematics of the L. Eötvös University (1953–58). He was the first to introduce semi-Markov processes in queueing theory. He took a lecturing appointment at Imperial College in London and the London School of Economics (1958), before moving to Columbia University in New York City (1959–66) and Case Western Reserve University in Cleveland (1966–87), advising over twenty Ph.D. theses. He also held visiting appointments at Bell Labs and IBM Research, and had a sabbatical at Stanford University (1966). He was a Professor of Statistics and Probability at Case Western Reserve University from 1966 until he retired as Professor Emeritus in 1987. Takács was married to Dalma Takács, author and professor of English literature at Notre Dame College of Ohio. He had two daughters: the contemporary figurative realist artist Judy Takács, and Susan, a legal assistant. Publications The following is a partial list of publications: Some Investigations Concerning Recurrent Stochastic Processes of a Certain Kind, Magyar Tud. Akad.
https://en.wikipedia.org/wiki/Gate%20oxide
The gate oxide is the dielectric layer that separates the gate terminal of a MOSFET (metal–oxide–semiconductor field-effect transistor) from the underlying source and drain terminals as well as the conductive channel that connects source and drain when the transistor is turned on. Gate oxide is formed by thermal oxidation of the silicon of the channel to form a thin (5–200 nm) insulating layer of silicon dioxide. The insulating silicon dioxide layer is formed through a process of self-limiting oxidation, which is described by the Deal–Grove model. A conductive gate material is subsequently deposited over the gate oxide to form the transistor. The gate oxide serves as the dielectric layer so that the gate can sustain a transverse electric field as high as 1 to 5 MV/cm in order to strongly modulate the conductance of the channel. Above the gate oxide is a thin electrode layer made of a conductor, which can be aluminium, highly doped silicon, a refractory metal such as tungsten, a silicide (TiSi2, MoSi2, TaSi2 or WSi2), or a sandwich of these layers. This gate electrode is often called the "gate metal" or "gate conductor". The geometrical width of the gate conductor electrode (the direction transverse to current flow) is called the physical gate width. The physical gate width may be slightly different from the electrical channel width used to model the transistor, as fringing electric fields can exert an influence on conductors that are not immediately below the gate. The electrical properties of the gate oxide are critical to the formation of the conductive channel region below the gate. In NMOS-type devices, the zone beneath the gate oxide is a thin n-type inversion layer on the surface of the p-type semiconductor substrate. It is induced by the oxide electric field from the applied gate voltage VG. This is known as the inversion channel. It is the conduction channel that allows the electrons to flow from the source to the drain. Overstressing the gate oxide layer, a
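A back-of-the-envelope check of the field figure quoted above (the gate voltage and thickness are illustrative assumptions, not values from the article):

```python
# Transverse oxide field E = V_G / t_ox and areal capacitance C_ox = eps_ox / t_ox
eps_0 = 8.854e-12          # vacuum permittivity, F/m
k_SiO2 = 3.9               # relative permittivity of silicon dioxide

V_G = 3.0                  # gate voltage, V (illustrative)
t_ox = 10e-9               # oxide thickness, m (10 nm, illustrative)

E = V_G / t_ox             # V/m; 1 MV/cm = 1e8 V/m
C_ox = k_SiO2 * eps_0 / t_ox
print(f"oxide field = {E / 1e8:.1f} MV/cm")       # 3.0 MV/cm, within 1-5 MV/cm
print(f"capacitance = {C_ox * 1e3:.2f} fF/um^2")  # ~3.45 fF/um^2
```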
https://en.wikipedia.org/wiki/Ioflupane%20%28123I%29
Ioflupane (123I) is the international nonproprietary name (INN) of a cocaine analogue which is a neuro-imaging radiopharmaceutical drug, used in nuclear medicine for the diagnosis of Parkinson's disease and the differential diagnosis of Parkinson's disease over other disorders presenting similar symptoms. During the DaT scan procedure it is injected into a patient and viewed with a gamma camera in order to acquire SPECT images of the brain, with particular respect to the striatum, a subcortical region of the basal ganglia. The drug is sold under the brand name Datscan and is manufactured by GE Healthcare, formerly Amersham plc. Pharmacology Datscan is a solution of ioflupane (123I) for injection into a living test subject. The iodine introduced during manufacture is a radioactive isotope, iodine-123, and it is the gamma decay of this isotope that is detectable to a gamma camera. 123I has a half-life of approximately 13 hours and a gamma photon energy of 159 keV, making it an appropriate radionuclide for medical imaging. The solution also contains 5% ethanol to aid solubility and is supplied sterile since it is intended for intravenous use. Ioflupane has a high binding affinity for presynaptic dopamine transporters (DAT) in the brains of mammals, in particular in the striatal region of the brain. A feature of Parkinson's disease is a marked reduction in dopaminergic neurons in the striatal region. By introducing an agent that binds to the dopamine transporters, a quantitative measure and the spatial distribution of the transporters can be obtained. Method of administration The Datscan solution is supplied ready to inject, with a certificate stating the calibration activity and time. The nominal injection activity is 185 MBq and a scan should not be performed with less than 111 MBq. Thyroid blocking via oral administration of 120 mg potassium iodide is recommended to minimize unnecessary excessive uptake of radioiodine. This is typically g
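Since the activity decays with iodine-123's roughly 13-hour half-life, the usable window after calibration can be estimated (the 185 MBq and 111 MBq figures come from the text; the arithmetic below is mine):

```python
import math

# Radioactive decay of the injected activity: A(t) = A0 * 2**(-t / t_half)
t_half = 13.0          # approximate half-life of I-123, hours
A0 = 185.0             # nominal calibration activity, MBq
A_min = 111.0          # minimum activity for a scan, MBq

# Hours until the dose decays below the scanning threshold
t_limit = t_half * math.log(A0 / A_min, 2)
print(f"usable for ~{t_limit:.1f} h after calibration")  # ~9.6 h

for t in (0, 4, 8, 12):
    print(f"t = {t:2d} h: A = {A0 * 2 ** (-t / t_half):5.1f} MBq")
```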
https://en.wikipedia.org/wiki/Juan%20Bimba
Juan Bimba is a fictitious character used in the past as the national personification of Venezuela, but is now regarded as obsolete. According to the local folklore of the region of Cumaná, the name comes from a mentally ill local inhabitant of the 1850s, but this version is doubtful. It was first used by Juan Vicente González, a Venezuelan columnist of the 19th century, as an example of the average Venezuelan peasant, the prototype of the common people. The cartoon was drawn by the cartoonist Mariano Medina Febres in the 1930s. Use in politics The name was used and preserved by Andrés Eloy Blanco in several poems and in the Fantoches magazine. A sociopolitical essay by the poet, in 1936, revolving particularly around socialism and communism in Venezuelan history, was entitled Carta a Juan Bimba («A Letter to Juan Bimba»). Acción Democrática, one of the two leading political parties of Venezuela in the 20th century, used and further popularized the name and created an image to accompany the symbolism of their 1963 electoral campaign's motto: El Partido del Pueblo («The People's Party»), especially since the country's supreme court had banned the use of their official flag. More recently, the former Venezuelan president Hugo Chávez was seen performing a humorous version of himself as Juan Bimba, particularly during political campaigns, portraying the image of a humble llanero. See also Brother Jonathan Doña Juanita Huaso
https://en.wikipedia.org/wiki/Dirichlet%20energy
In mathematics, the Dirichlet energy is a measure of how variable a function is. More abstractly, it is a quadratic functional on the Sobolev space H¹. The Dirichlet energy is intimately connected to Laplace's equation and is named after the German mathematician Peter Gustav Lejeune Dirichlet. Definition Given an open set Ω ⊆ ℝⁿ and a function u : Ω → ℝ, the Dirichlet energy of the function u is the real number
E[u] = ½ ∫Ω |∇u(x)|² dx,
where ∇u denotes the gradient vector field of the function u. Properties and applications Since it is the integral of a non-negative quantity, the Dirichlet energy is itself non-negative, i.e. E[u] ≥ 0 for every function u. Solving Laplace's equation −Δu(x) = 0 for all x ∈ Ω, subject to appropriate boundary conditions, is equivalent to solving the variational problem of finding a function u that satisfies the boundary conditions and has minimal Dirichlet energy. Such a solution is called a harmonic function and such solutions are the topic of study in potential theory. In a more general setting, where Ω ⊆ ℝⁿ is replaced by any Riemannian manifold M, and u : Ω → ℝ is replaced by u : M → Φ for another (different) Riemannian manifold Φ, the Dirichlet energy is given by the sigma model. The solutions to the Lagrange equations for the sigma model Lagrangian are those functions u that minimize/maximize the Dirichlet energy. Restricting this general case back to the specific case of u : Ω → ℝ just shows that the Lagrange equations (or, equivalently, the Hamilton–Jacobi equations) provide the basic tools for obtaining extremal solutions. See also Dirichlet's principle Dirichlet eigenvalue Total variation Oscillation Harmonic map
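A discrete illustration (my construction, not from the article): approximating E[u] on a grid by finite differences shows that a smooth function has lower Dirichlet energy than a noisy one.

```python
import numpy as np

def dirichlet_energy(u, h):
    """Finite-difference approximation of (1/2) * integral of |grad u|^2."""
    gy, gx = np.gradient(u, h)           # derivatives along the two axes
    return 0.5 * np.sum(gx**2 + gy**2) * h**2

n = 64
h = 1.0 / n
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))

smooth = np.sin(np.pi * x) * np.sin(np.pi * y)
noisy = smooth + 0.1 * np.random.default_rng(0).standard_normal(smooth.shape)

print(f"smooth: {dirichlet_energy(smooth, h):.3f}")  # ~ pi^2/4 ~ 2.47
print(f"noisy : {dirichlet_energy(noisy, h):.3f}")   # strictly larger
```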
https://en.wikipedia.org/wiki/Allotropes%20of%20sulfur
The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded S molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single allotropes or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of S8, which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to S8, sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and can be analysed by HPLC. Phase diagram The pressure–temperature (P–T) phase diagram for sulfur is complex. The region labeled I (a solid region) is α-sulfur. High pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV, V, have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the m
https://en.wikipedia.org/wiki/Strongly%20minimal%20theory
In model theory—a branch of mathematical logic—a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite. A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal. Thus a structure is minimal only if the parametrically definable subsets of its domain cannot be avoided, because they are already parametrically definable in the pure language of equality. Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures. The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces, and the theories ACFp of algebraically closed fields of characteristic p. As the example ACFp shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated ("curves"). More generally, a subset of a structure that is defined as the set of realizations of a formula φ(x) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions. A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry. A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by E
https://en.wikipedia.org/wiki/Searle%20Scholars%20Program
The Searle Scholars Program is a career development award made annually to support 15 young faculty in biomedical research and chemistry at US universities and research centers. The goal of the award is to support exceptional young scientists who are at the beginning of their independent research careers and are working in the fields of medicine, chemistry, and/or the biological sciences. Award history The award was established in 1980 by a donation from trusts established by John G. and Frances C. Searle. John Searle had served as President of G. D. Searle & Company, a pharmaceutical company known for developing the first female birth control pill. The program is funded through the Chicago Community Trust and administered by the Kinship Foundation. Award process Applicants must be pursuing independent research careers in biochemistry, cell biology, genetics, immunology, neuroscience, pharmacology, and related areas in chemistry, medicine, and the biological sciences, and must be in their first or second year of their first tenure-track assistant professor position. Applicants at 176 universities are eligible to be nominated. Grantees receive grants worth $300,000, paid out over the course of 3 years. Recipients As of 2022, 622 Searle Scholars had been selected. Since 1981, 85 recipients have been inducted into the US National Academy of Sciences, 20 recipients have received a MacArthur Fellowship, and 2 recipients have won the Nobel Prize. 1981 Dale L. Boger, University of Kansas Walter F. Boron, Yale University G. Charles Dismukes, Princeton University Elaine V. Fuchs, The University of Chicago Stanley M. Goldin, Harvard University and Harvard School of Public Health Leroy F. Liu, The Johns Hopkins University James E. Niedel, Duke University Harry T. Orr, Professor of Pathology, University of Minnesota Daniel K. Podolsky, Harvard University and Harvard School of Public Health Lee L. Rubin, The Rockefeller University Wesley J. Thompson, The Univers
https://en.wikipedia.org/wiki/Glyoxalase%20system
The glyoxalase system is a set of enzymes that carry out the detoxification of methylglyoxal and the other reactive aldehydes that are produced as a normal part of metabolism. This system has been studied in both bacteria and eukaryotes. This detoxification is accomplished by the sequential action of two thiol-dependent enzymes: first glyoxalase I, which catalyzes the isomerization of the spontaneously formed hemithioacetal adduct between glutathione and 2-oxoaldehydes (such as methylglyoxal) into S-2-hydroxyacylglutathione; second glyoxalase II, which hydrolyses these thioesters and, in the case of methylglyoxal catabolism, produces D-lactate and GSH from S-D-lactoylglutathione. This system shows many of the typical features of the enzymes that dispose of endogenous toxins. Firstly, in contrast to the remarkable substrate range of many of the enzymes involved in xenobiotic metabolism, it shows a narrow substrate specificity. Secondly, intracellular thiols are required as part of its enzymatic mechanism, and thirdly, the system acts to recycle reactive metabolites back to a form which may be useful to cellular metabolism. Overview of the glyoxalase pathway The pathway comprises glyoxalase I (GLO1), glyoxalase II (GLO2), and reduced glutathione (GSH). In bacteria, there is an additional enzyme that functions if there is no GSH; it is called the third glyoxalase protein, glyoxalase 3 (GLO3). GLO3 has not been found in humans yet. The pathway begins with methylglyoxal (MG), which is produced from non-enzymatic reactions with DHAP or G3P produced in glycolysis. Methylglyoxal is then converted into S-D-lactoylglutathione by the enzyme GLO1 with a catalytic amount of GSH, and this is hydrolyzed into non-toxic D-lactate via GLO2, during which GSH is re-formed to be consumed again by GLO1 with a new molecule of MG. D-lactate ultimately goes on to be metabolized into pyruvate. Regulation There are several small-molecule inducers that can induce the glyoxalase pathway by either promoting GLO1 function to
https://en.wikipedia.org/wiki/Ferranti%20effect
In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase. It was first observed during the installation of underground cables in Sebastian Ziani de Ferranti's 10,000-volt AC power distribution system in 1887. The capacitive line charging current produces a voltage drop across the line inductance that is in phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line where the source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer, but with smaller voltage transformation. The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. The relative voltage rise is proportional to the square of the line length and the square of frequency. The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length and lower electrical impedance. An equivalent to the Ferranti effect occurs when an inductive (lagging) current I flows through a series capacitance of impedance Z: the resulting voltage difference ΔV = I·Z raises the voltage on the receiving side. See also LC circuit "A series resonant circuit provides voltage magnification." Reflections of signals on conducting lines Failure of the first trans-Atlantic telegraph cable Telegrapher's equations Characteristic impedance#Transmission line model
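A rough estimate using the lossless-line approximation Vr/Vs = 1/cos(βl) for an open-circuited line (a standard transmission-line result; the line constants below are generic overhead-line figures assumed for illustration):

```python
import math

# Ferranti rise on an open-circuited lossless line: Vr/Vs = 1/cos(beta * l),
# with phase constant beta = omega * sqrt(L * C).
f = 50.0            # system frequency, Hz
L = 1.0e-3          # series inductance, H/km (typical overhead line)
C = 11.1e-9         # shunt capacitance, F/km (typical overhead line)

omega = 2 * math.pi * f
beta = omega * math.sqrt(L * C)        # rad/km, ~0.06 degrees/km

for l in (200, 400, 600):              # line length, km
    rise = 1 / math.cos(beta * l)
    print(f"{l} km: Vr/Vs = {rise:.3f} ({(rise - 1) * 100:.1f}% rise)")
```

For short electrical lengths the rise grows roughly as the square of the line length, consistent with the proportionality stated above.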
https://en.wikipedia.org/wiki/Logogen%20model
The logogen model of 1969 is a model of speech recognition that uses units called "logogens" to explain how humans comprehend spoken or written words. Logogens are a vast number of specialized recognition units, each able to recognize one specific word. This model provides for the effects of context on word recognition. Overview The word logogen can be traced back to the Greek-language word logos, which means "word", and genus, which means "birth". British scientist John Morton's logogen model was designed to explain word recognition using a new type of unit known as a logogen. A critical element of this theory is the involvement of lexicons, or specialized aspects of memory that include semantic and phonemic information about each item that is contained in memory. A given lexicon consists of many smaller, abstract items known as logogens. Logogens encode a variety of properties of a given word, such as its appearance, sound, and meaning. Logogens do not store words within themselves; rather, they store information that is specifically necessary for retrieval of whatever word is being searched for. A given logogen becomes activated by psychological stimuli or contextual information (words) consistent with the properties of that specific logogen, and when the logogen's activation level rises to or above its threshold level, the pronunciation of the given word is sent to the output system. Certain stimuli can affect the activation levels of more than one word at a time, usually involving words that are similar to one another. When this occurs, whichever word's activation level reaches the threshold level first is sent to the output system, with the subject remaining unaware of any partially excited logogens. This assumption was made by Marslen-Wilson and Welch (1978), who added to the model some assumptions of their own in order to account for their experimental results. They also assumed that the analysis of phonet
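A toy sketch of the threshold mechanism just described, with invented words, feature sets, and threshold values (an illustration of the idea only, not Morton's formal model):

```python
class Logogen:
    """Toy recognition unit: accumulates matching evidence, fires at threshold."""
    def __init__(self, word, features, threshold):
        self.word = word
        self.features = set(features)
        self.threshold = threshold
        self.activation = 0

    def stimulate(self, evidence):
        # Stimulus or context features consistent with this unit's stored
        # properties raise its activation level.
        self.activation += len(self.features & set(evidence))
        return self.activation >= self.threshold

lexicon = [
    Logogen("cat", {"c-onset", "short-vowel", "animal"}, 3),
    Logogen("cap", {"c-onset", "short-vowel", "headwear"}, 3),
]

evidence = {"c-onset", "short-vowel", "animal"}  # sensory input plus context
winners = [lg.word for lg in lexicon if lg.stimulate(evidence)]
print(winners)  # ['cat'] -- the partially excited unit ('cap') stays silent
```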
https://en.wikipedia.org/wiki/Analytic%20torsion
In mathematics, Reidemeister torsion (or R-torsion, or Reidemeister–Franz torsion) is a topological invariant of manifolds introduced by Kurt Reidemeister for 3-manifolds and generalized to higher dimensions by Wolfgang Franz and Georges de Rham. Analytic torsion (or Ray–Singer torsion) is an invariant of Riemannian manifolds defined by Daniel B. Ray and Isadore M. Singer as an analytic analogue of Reidemeister torsion. Jeff Cheeger and Werner Müller independently proved Ray and Singer's conjecture that Reidemeister torsion and analytic torsion are the same for compact Riemannian manifolds. Reidemeister torsion was the first invariant in algebraic topology that could distinguish between closed manifolds which are homotopy equivalent but not homeomorphic, and can thus be seen as the birth of geometric topology as a distinct field. It can be used to classify lens spaces. Reidemeister torsion is closely related to Whitehead torsion, and it has also given some important motivation to arithmetic topology. For more recent work on torsion see the books of Turaev and of Nicolaescu. Definition of analytic torsion If $M$ is a Riemannian manifold and $E$ a vector bundle over $M$, then there is a Laplacian operator acting on the $k$-forms with values in $E$. If the eigenvalues on $k$-forms are $\lambda_j$, then the zeta function $\zeta_k$ is defined to be $\zeta_k(s) = \sum_{\lambda_j > 0} \lambda_j^{-s}$ for $s$ large, and this is extended to all complex $s$ by analytic continuation. The zeta regularized determinant of the Laplacian acting on $k$-forms is $\Delta_k = \exp(-\zeta_k'(0))$, which is formally the product of the positive eigenvalues of the Laplacian acting on $k$-forms. The analytic torsion $T(M,E)$ is defined to be $T(M,E) = \exp\left(\tfrac{1}{2}\sum_k (-1)^k k\,\zeta_k'(0)\right)$. Definition of Reidemeister torsion Let $X$ be a finite connected CW-complex with fundamental group $\pi := \pi_1(X)$ and universal cover $\tilde{X}$, and let $U$ be an orthogonal finite-dimensional $\pi$-representation. Suppose that $H_n^\pi(X;U) := H_n(U \otimes_{\mathbf{Z}[\pi]} C_*(\tilde{X})) = 0$ for all $n$. If we fix a cellular basis for $C_*(\tilde{X})$ and an orthogonal basis for $U$, then $D_* := U \otimes_{\mathbf{Z}[\pi]} C_*(\tilde{X})$ is a contractible finite based free $\mathbf{R}$-chain complex. Let $\gamma_*: D_* \to D_{*+1}$ be any chain contraction of $D_*$, i.e. $d_{n+1} \circ \gamma_n + \gamma_{n-1} \circ d_n = \mathrm{id}_{D_n}$ for all $n$. We obtain an isomorphism $(d_* + \gamma_*)_{\mathrm{odd}}: D_{\mathrm{odd}} \to D_{\mathrm{even}}$ with $D_{\mathrm{odd}} := \bigoplus_{n\ \mathrm{odd}} D_n$ and $D_{\mathrm{even}} := \bigoplus_{n\ \mathrm{even}} D_n$. We define the Reidemeister torsion $\rho(X;U) := |\det(A)|^{-1} \in \mathbf{R}^{>0}$, where $A$ is the matrix of $(d_* + \gamma_*)_{\mathrm{odd}}$ with respect to the given bases.
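In the simplest nontrivial case, a contractible based complex concentrated in degrees 0 and 1, the boundary map $d_1$ is invertible, $\gamma = d_1^{-1}$ is a chain contraction, and $(d_* + \gamma_*)_{\mathrm{odd}}$ is just $d_1$, so the torsion reduces to $|\det d_1|^{-1}$. A small numerical sketch of this special case (the matrix is an arbitrary illustrative choice):

```python
import numpy as np

# Contractible based free chain complex 0 -> D_1 --d--> D_0 -> 0:
# exactness forces d to be invertible, and gamma = d^{-1} is a chain
# contraction.  (d + gamma)_odd : D_odd = D_1 -> D_even = D_0 equals d.
d = np.array([[2.0, 1.0],
              [0.0, 3.0]])

A = d                                   # matrix of (d + gamma)_odd
torsion = 1.0 / abs(np.linalg.det(A))   # rho = |det A|^{-1}
print(torsion)                          # 1/6 for this choice of d
```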
https://en.wikipedia.org/wiki/MT-ND5
MT-ND5 is a gene of the mitochondrial genome coding for the NADH-ubiquinone oxidoreductase chain 5 protein (ND5). The ND5 protein is a subunit of NADH dehydrogenase (ubiquinone), which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain. Variations in human MT-ND5 are associated with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) as well as some symptoms of Leigh's syndrome and Leber's hereditary optic neuropathy (LHON). Structure MT-ND5 is located in mitochondrial DNA from base pair 12,337 to 14,148. The MT-ND5 gene produces a 67 kDa protein composed of 603 amino acids. MT-ND5 is one of seven mitochondrial genes encoding subunits of the enzyme NADH dehydrogenase (ubiquinone), together with MT-ND1, MT-ND2, MT-ND3, MT-ND4, MT-ND4L, and MT-ND6. Also known as Complex I, this enzyme is the largest of the respiratory complexes. The structure is L-shaped with a long, hydrophobic transmembrane domain and a hydrophilic domain for the peripheral arm that includes all the known redox centres and the NADH binding site. MT-ND5 and the rest of the mitochondrially encoded subunits are the most hydrophobic of the subunits of Complex I and form the core of the transmembrane region. Function The MT-ND5 product is a subunit of the respiratory chain Complex I that is supposed to belong to the minimal assembly of core proteins required to catalyze NADH dehydrogenation and electron transfer to ubiquinone (coenzyme Q10). Initially, NADH binds to Complex I and transfers two electrons to the isoalloxazine ring of the flavin mononucleotide (FMN) prosthetic arm to form FMNH2. The electrons are transferred through a series of iron-sulfur (Fe-S) clusters in the prosthetic arm and finally to coenzyme Q10 (CoQ), which is reduced to ubiquinol (CoQH2). The flow of electrons changes the redox state of the protein, resulting in a conformational change and pK shift of the ionizable side chain,
https://en.wikipedia.org/wiki/MT-ND1
MT-ND1 is a gene of the mitochondrial genome coding for the NADH-ubiquinone oxidoreductase chain 1 (ND1) protein. The ND1 protein is a subunit of NADH dehydrogenase, which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain. Variants of the human MT-ND1 gene are associated with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS), Leigh's syndrome (LS), Leber's hereditary optic neuropathy (LHON) and increases in adult BMI. Structure MT-ND1 is located in mitochondrial DNA from base pair 3,307 to 4,262. The MT-ND1 gene produces a 36 kDa protein composed of 318 amino acids. MT-ND1 is one of seven mitochondrial genes encoding subunits of the enzyme NADH dehydrogenase (ubiquinone), together with MT-ND2, MT-ND3, MT-ND4, MT-ND4L, MT-ND5, and MT-ND6. Also known as Complex I, this enzyme is the largest of the respiratory complexes. The structure is L-shaped with a long, hydrophobic transmembrane domain and a hydrophilic domain for the peripheral arm that includes all the known redox centres and the NADH binding site. The MT-ND1 product and the rest of the mitochondrially encoded subunits are the most hydrophobic of the subunits of Complex I and form the core of the transmembrane region. Function MT-ND1-encoded NADH-ubiquinone oxidoreductase chain 1 is a subunit of the respiratory chain Complex I that is supposed to belong to the minimal assembly of core proteins required to catalyze NADH dehydrogenation and electron transfer to ubiquinone (coenzyme Q10). Initially, NADH binds to Complex I and transfers two electrons to the isoalloxazine ring of the flavin mononucleotide (FMN) prosthetic arm to form FMNH2. The electrons are transferred through a series of iron-sulfur (Fe-S) clusters in the prosthetic arm and finally to coenzyme Q10 (CoQ), which is reduced to ubiquinol (CoQH2). The flow of electrons changes the redox state of the protein, resulting in a conformational change
https://en.wikipedia.org/wiki/Privacy-enhancing%20technologies
Privacy-enhancing technologies (PET) are technologies that embody fundamental data protection principles by minimizing personal data use, maximizing data security, and empowering individuals. PETs allow online users to protect the privacy of their personally identifiable information (PII), which is often provided to and handled by services or applications. PETs use techniques to minimize an information system's possession of personal data without losing functionality. Generally speaking, PETs can be categorized as hard and soft privacy technologies. Goals of PETs The objective of PETs is to protect personal data and assure technology users of two key privacy points: their own information is kept confidential, and management of data protection is a priority to the organizations who hold responsibility for any PII. PETs allow users to take one or more of the following actions related to personal data that is sent to and used by online service providers, merchants or other users (this control is known as self-determination). PETs aim to minimize personal data collected and used by service providers and merchants, use pseudonyms or anonymous data credentials to provide anonymity, and strive to achieve informed consent about giving personal data to online service providers and merchants. In Privacy Negotiations, consumers and service providers establish, maintain, and refine privacy policies as individualized agreements through the ongoing choice among service alternatives, therefore providing the possibility to negotiate the terms and conditions of giving personal data to online service providers and merchants (data handling/privacy policy negotiation). Within private negotiations, the transaction partners may additionally bundle the personal information collection and processing schemes with monetary or non-monetary rewards. PETs provide the possibility to remotely audit the enforcement of these terms and conditions at the online service providers and merchants (assu
https://en.wikipedia.org/wiki/Proteinase%20K
In molecular biology, Proteinase K (protease K, endopeptidase K, Tritirachium alkaline proteinase, Tritirachium album serine proteinase, Tritirachium album proteinase K) is a broad-spectrum serine protease. The enzyme was discovered in 1974 in extracts of the fungus Parengyodontium album (formerly Engyodontium album or Tritirachium album). Proteinase K is able to digest hair (keratin), hence the name "Proteinase K". The predominant site of cleavage is the peptide bond adjacent to the carboxyl group of aliphatic and aromatic amino acids with blocked alpha amino groups. It is commonly used for its broad specificity. This enzyme belongs to Peptidase family S8 (subtilisin). The molecular weight of Proteinase K is 28,900 daltons (28.9 kDa). Enzyme activity Activated by calcium, the enzyme digests proteins preferentially after hydrophobic amino acids (aliphatic, aromatic and other hydrophobic amino acids). Although calcium ions do not affect the enzyme activity, they do contribute to its stability. Proteins will be completely digested if the incubation time is long enough and the protease concentration high enough. Upon removal of the calcium ions, the stability of the enzyme is reduced, but the proteolytic activity remains. Proteinase K has two binding sites for Ca2+, which are located close to the active center, but are not directly involved in the catalytic mechanism. The residual activity is sufficient to digest proteins, which usually contaminate nucleic acid preparations. Therefore, digestion with Proteinase K for the purification of nucleic acids is usually performed in the presence of EDTA (for inhibition of metal-ion dependent enzymes such as nucleases). Proteinase K is also stable over a wide pH range (4–12), with a pH optimum of 8.0. An elevation of the reaction temperature from 37 °C to 50–60 °C may increase the activity several times, as does the addition of 0.5–1% sodium dodecyl sulfate (SDS) or Guanidinium chloride (3 M), Guanidinium thiocyanate (1 M) and
https://en.wikipedia.org/wiki/Natural%20landscape
A natural landscape is the original landscape that exists before it is acted upon by human culture. The natural landscape and the cultural landscape are separate parts of the landscape. However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, so that reference is sometimes now made to degrees of naturalness within a landscape. In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: "Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler’s eye through much of the year" and then how it looks now following the use of herbicides: "The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire". Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context. The phrase "natural landscape" was first used in connection with landscape painting, and landscape gardening, to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769 – 1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape. Then in 1908 geographer Otto Schlüter developed the terms original landscape (Urlandschaft) and its opposite cultural landscape (Kulturlandschaft) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase "natural landscape" by a geographer can be found in Carl O. Sauer's paper "The Morphology of Landscape" (1925). Origins of the term The concept of a natural landscape was first developed in connection with landscape painting, though the actual term itself was first used in relation to landscape gardening. In both cases it was used to contrast a formal style with a more natural one, that is closer to nature. Chu
https://en.wikipedia.org/wiki/Hepatic%20lymph%20nodes
The hepatic lymph nodes consist of the following groups: (a) hepatic, on the stem of the hepatic artery, and extending upward along the common bile duct, between the two layers of the lesser omentum, as far as the porta hepatis; the cystic gland, a member of this group, is placed near the neck of the gall-bladder; (b) subpyloric, four or five in number, in close relation to the bifurcation of the gastroduodenal artery, in the angle between the superior and descending parts of the duodenum; an outlying member of this group is sometimes found above the duodenum on the right gastric (pyloric) artery. The lymph nodes of the hepatic chain receive afferents from the stomach, duodenum, liver, gall-bladder, and pancreas; their efferents join the celiac group of preaortic lymph nodes. Cancer prognosis and treatment Hepatic artery lymph nodes are commonly resected during a Whipple procedure. Among patients undergoing a Whipple procedure, outcomes were better for those without hepatic artery lymph node involvement. A particularly large hepatic artery lymph node, positioned on the anterior aspect of the common hepatic artery, is thought to play an important role in pancreatic cancer. When metastatic disease is identified in the hepatic artery lymph node during pancreatic cancer surgery, long-term outcomes are worse.
https://en.wikipedia.org/wiki/Gastric%20lymph%20nodes
The gastric lymph nodes are lymph nodes which drain the stomach and consist of two sets, superior and inferior: The superior gastric lymph nodes accompany the left gastric artery and are divisible into three groups: Upper, on the stem of the artery; Lower, accompanying the descending branches of the artery along the cardiac half of the lesser curvature of the stomach, between the two layers of the lesser omentum; Paracardial, outlying members of the gastric lymph nodes, disposed in a manner comparable to a chain of beads around the neck of the stomach. They receive their afferents from the stomach; their efferents pass to the celiac group of preaortic lymph nodes. The inferior gastric lymph nodes (right gastroepiploic lymph nodes), four to seven in number, lie between the two layers of the greater omentum along the pyloric half of the greater curvature of the stomach.
https://en.wikipedia.org/wiki/Wilberforce%20pendulum
A Wilberforce pendulum, invented by British physicist Lionel Robert Wilberforce around 1896, consists of a mass suspended by a long helical spring and free to turn on its vertical axis, twisting the spring. It is an example of a coupled mechanical oscillator, often used as a demonstration in physics education. The mass can both bob up and down on the spring, and rotate back and forth about its vertical axis with torsional vibrations. When correctly adjusted and set in motion, it exhibits a curious motion in which periods of purely rotational oscillation gradually alternate with periods of purely up and down oscillation. The energy stored in the device shifts slowly back and forth between the translational 'up and down' oscillation mode and the torsional 'clockwise and counterclockwise' oscillation mode, until the motion eventually dies away. Despite the name, in normal operation it does not swing back and forth as ordinary pendulums do. The mass usually has opposing pairs of radial 'arms' sticking out horizontally, threaded with small weights that can be screwed in or out to adjust the moment of inertia to 'tune' the torsional vibration period. Explanation The device's intriguing behavior is caused by a slight coupling between the two motions or degrees of freedom, due to the geometry of the spring. When the weight is moving up and down, each downward excursion of the spring causes it to unwind slightly, giving the weight a slight twist. When the weight moves up, it causes the spring to wind slightly tighter, giving the weight a slight twist in the other direction. So when the weight is moving up and down, each oscillation gives a slight alternating rotational torque to the weight. In other words, during each oscillation some of the energy in the translational mode leaks into the rotational mode. Slowly the up and down movement gets less, and the rotational movement gets greater, until the weight is just rotating and not bobbing. Similarly, when the
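A common linearized model of the device (used, for example, in Berg and Marshall's 1991 analysis) couples the vertical coordinate $z$ and the twist angle $\theta$ through a small constant $\epsilon$: $m\ddot z = -kz - \tfrac{\epsilon}{2}\theta$ and $I\ddot\theta = -\delta\theta - \tfrac{\epsilon}{2}z$. The sketch below integrates these equations with invented parameters tuned so the bounce and twist frequencies match, which is the condition for the complete energy exchange described above:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 0.5, 5.0           # hanging mass (kg) and spring constant (N/m)
I = 1.0e-4                # moment of inertia about the vertical axis (kg m^2)
delta = k * I / m         # torsional stiffness tuned so sqrt(k/m) == sqrt(delta/I)
eps = 9.0e-3              # small translation-rotation coupling constant

def rhs(t, y):
    z, zdot, th, thdot = y
    return [zdot, (-k * z - 0.5 * eps * th) / m,
            thdot, (-delta * th - 0.5 * eps * z) / I]

# Start with a purely vertical displacement; energy then beats into the twist
# mode and back, with full exchange because the two frequencies are matched.
sol = solve_ivp(rhs, (0, 120), [0.02, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0, 120, 6000), rtol=1e-8, atol=1e-10)
z, th = sol.y[0], sol.y[2]
bob = lambda a, b: np.abs(z[a:b]).max()      # sample windows: 50 samples/s
spin = lambda a, b: np.abs(th[a:b]).max()
print(f"t 0-5 s:   |z|max {bob(0, 250):.4f} m, |theta|max {spin(0, 250):.3f} rad")
print(f"t 13-18 s: |z|max {bob(650, 900):.4f} m, |theta|max {spin(650, 900):.3f} rad")
```

With these numbers the bounce amplitude collapses near a quarter of the beat period (about 15 s) while the twist amplitude peaks, illustrating the alternation described above.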
https://en.wikipedia.org/wiki/George%20Piranian
George Piranian (; May 2, 1914 – August 31, 2009) was a Swiss-American mathematician. Piranian was internationally known for his research in complex analysis, his association with Paul Erdős, and his editing of the Michigan Mathematical Journal. Early life and education Piranian was born in Thalwil outside Zürich, Switzerland. His father, Patvakan Piranian, was originally from Armenia. George and his brother David at home were called Gevorg and Davit, the Armenian versions of their names. His family immigrated to Logan, Utah, in 1929. Piranian received a B.Sc. in agriculture and M.Sc. in botany (1937) at Utah State University. As a Rhodes scholar, Piranian first "tasted blood" in mathematics at Hertford College, Oxford. After returning to the United States, Piranian earned his Ph.D. in mathematics under Szolem Mandelbrojt at Rice University (1943). Piranian's dissertation was entitled A Study of the Position and Nature of the Singularities of Functions Given by Their Taylor Series. Piranian joined the faculty at University of Michigan in 1945. Michigan Mathematical Journal In 1952, Piranian, along with Paul Erdős, Fritz Herzog and Arthur J. Lohwater, founded the Michigan Mathematical Journal; leadership in editing was assumed by Piranian in 1954. Piranian co-authored a research paper with Erdős and Herzog; as a consequence he has an Erdős number of one. Piranian's editing was renowned in mathematics. Teaching Piranian's teaching captivated several future research mathematicians. Piranian also was an advisor with the Honors Program at the College of Literature, Science and the Arts at the University of Michigan. Teaching of Theodore Kaczynski In the 1960s, Piranian taught and advised Theodore Kaczynski, who was a Ph.D. student in mathematics. In the 1990s, Kaczynski was convicted of the Unabomber crimes.
https://en.wikipedia.org/wiki/Information%20card
An information card (or i-card) is a personal digital identity that people can use online, and the key component of an identity metasystem. Visually, each i-card has a card-shaped picture and a card name associated with it that enable people to organize their digital identities and to easily select one they want to use for any given interaction. The information card metaphor has been implemented by identity selectors like Windows CardSpace, DigitalMe or Higgins Identity Selector. An identity metasystem is an interoperable architecture for digital identity that enables people to have and employ a collection of digital identities based on multiple underlying technologies, implementations, and providers. Using this approach, customers can continue to use their existing identity infrastructure investments, choose the identity technology that works best for them, and more easily migrate from old technologies to new technologies without sacrificing interoperability with others. The identity metasystem is based upon the principles in "The Laws of Identity". Overview There are three participants in digital identity interactions using information cards: Identity providers issue digital identities for you. For example, businesses might issue identities to their customers, governments might vouch for the identities of their citizens, credit card issuers might provide identities enabling payment, online services could provide verified data such as age, and individuals might use self-issued identities to log onto websites. Relying parties (RPs) accept identities for you. Online services that you use can accept digital identities that you choose and use the information provided by them on your behalf, with your consent. Subject is yourself, the party in control of all these interactions. The subject can choose which of its applicable digital identities to use with the relying party. Selectors An identity selector is an application that people use to store, manage, and use their digital identities.
https://en.wikipedia.org/wiki/Generalized%20Ozaki%20cost%20function
In economics the generalized-Ozaki cost is a general description of cost described by Shuichi Nakamura. For output $y$ at date $t$ and a vector of $m$ input prices $p$, the generalized-Ozaki cost is a function $c(y, p, t)$. Discussion In econometrics it is often desirable to have a model of the cost of production of a given output with given inputs—or in common terms, what it will cost to produce some number of goods at prevailing prices, or, given prevailing prices and a budget, how much can be made. Generally there are two parts to a cost function, the fixed and variable costs involved in production. The marginal cost is the change in the cost of production for a single unit. Most cost functions then take the price of the inputs and adjust for different factors of production, typically technology, economies of scale, and elasticities of inputs. Traditional cost functions include Cobb–Douglas and the constant elasticity of substitution models. These are still used because, for a wide variety of activities, effects such as the ability to substitute materials do not change. For example, for people running a bake sale, the ability to substitute one kind of chocolate chip for another will not vary over the number of cookies they can bake. However, as economies of scale and changes in substitution become important, models that handle these effects become more useful, such as the transcendental log cost function. The traditional forms are economically homothetic. This means they can be expressed as a function, and that function can be broken into an outer part and an inner part. The inner part will appear once as a term in the outer part, and the inner part will be monotonically increasing, or, to say it another way, it never goes down. However, empirically, in the areas of trade and production, homothetic and monotonic functional models do not accurately predict results. One example is in the gravity equation for trade, or how much two countries will trade with each other based on GDP a
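As a concrete contrast with the flexible form, here is a minimal sketch of one of the traditional homothetic forms mentioned above: the constant-returns Cobb–Douglas cost function $c(y,p) = y \prod_i (p_i/a_i)^{a_i}$ with cost shares $a_i$ summing to one (the share and price values below are invented for illustration). Homotheticity is visible directly: cost factors into the output level $y$ times a unit-cost function of prices alone.

```python
import numpy as np

def cobb_douglas_cost(y, prices, shares):
    """Cost dual of y = prod(x_i ** a_i) with sum(a_i) == 1 (constant returns).

    c(y, p) = y * prod((p_i / a_i) ** a_i): homothetic, since output and
    prices enter through separate multiplicative parts.
    """
    p, a = np.asarray(prices, float), np.asarray(shares, float)
    assert np.isclose(a.sum(), 1.0)
    return y * np.prod((p / a) ** a)

p = [4.0, 9.0]   # input prices
a = [0.5, 0.5]   # cost shares
print(cobb_douglas_cost(10, p, a))   # 10 * sqrt(4/0.5) * sqrt(9/0.5) = 120.0
```

Doubling both prices exactly doubles the cost (degree-one homogeneity in prices), while the substitution pattern stays fixed at every output level, which is precisely the rigidity that flexible forms are designed to relax.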
https://en.wikipedia.org/wiki/Katanosin
Katanosins are a group of antibiotics (also known as lysobactins). They are natural products with strong antibacterial potency. So far, katanosin A and katanosin B (lysobactin) have been described. Sources Katanosins have been isolated from the fermentation broth of microorganisms, such as Cytophaga or the Gram-negative bacterium Lysobacter sp. Structure Katanosins are cyclic depsipeptides (acylcyclodepsipeptides). These non-proteinogenic structures are not regular proteins from primary metabolism. Rather, they originate from bacterial secondary metabolism. Accordingly, various non-proteinogenic (non-ribosomal) amino acids are found in katanosins, such as 3-hydroxyleucine, 3-hydroxyasparagine, allothreonine and 3-hydroxyphenylalanine. All katanosins have a cyclic and a linear segment (“lariat structure”). The peptidic ring is closed with an ester bond (lactone). Katanosin A and B differ at amino acid position 7: the minor metabolite katanosin A has a valine in this position, whereas the main metabolite katanosin B carries an isoleucine. Biological activity Katanosin antibiotics target bacterial cell wall biosynthesis. They are highly potent against problematic Gram-positive hospital pathogens such as staphylococci and enterococci. Their promising biological activity has attracted various biological and chemical research groups. Their in-vitro potency is comparable with that of the current “last defence” antibiotic vancomycin. Chemical synthesis The first total syntheses of katanosin B (lysobactin) were described in 2007.
https://en.wikipedia.org/wiki/Allergic%20response
An allergic response is a hypersensitive immune reaction to a substance that normally is harmless or would not cause an immune response in everyone. An allergic response may cause harmful symptoms such as itching, inflammation, or tissue injury. Mechanism Allergies are an abnormal immune reaction. The human immune system is designed to protect the body from potential harm, and in people who have allergies the immune system will react to allergens (substances that trigger an immune response). The immune system will produce immunoglobulin E (IgE) antibodies for each allergen. The antibodies will cause cells in the body to produce histamine. This histamine will act on different areas of the body (eyes, throat, nose, gastrointestinal tract, skin or lungs) to produce symptoms of an allergic reaction. The allergic response is not limited to a certain amount of exposure: if the body is exposed to the allergen multiple times, the immune system will react every time the allergen is present. The reason why people get allergies is not known. The allergens themselves are not passed down through generations, but it is believed that if parents have allergies, the child is more likely to be allergic to the same allergens. Some common symptoms include itchiness, swelling, running nose, watery eyes, coughing, wheezing, trouble breathing, hives, rashes, mucus production, or a more severe reaction, anaphylaxis. Allergic responses and their severity vary from person to person. Many substances can trigger an allergic reaction. Common triggers of a reaction include foods, like nuts, eggs, milk, gluten, fruit and vegetables; insect stings from bees or wasps (often a severe response occurs); environmental factors such as pollen, dust, mold, and plants like grass or trees, as well as animal dander; and medications or chemicals. Some people experience an allergic response to cold or hot temperatures outside, jewelry or sunlight. There are daily treatments to reduce the severity of the allergic response. Often these treatments
https://en.wikipedia.org/wiki/Personal%20genomics
Personal genomics or consumer genetics is the branch of genomics concerned with the sequencing, analysis and interpretation of the genome of an individual. The genotyping stage employs different techniques, including single-nucleotide polymorphism (SNP) analysis chips (typically 0.02% of the genome), or partial or full genome sequencing. Once the genotypes are known, the individual's variations can be compared with the published literature to determine likelihood of trait expression, ancestry inference and disease risk. Automated high-throughput sequencers have increased the speed and reduced the cost of sequencing, making it possible to offer whole genome sequencing including interpretation to consumers since 2015 for less than $1,000. The emerging market of direct-to-consumer genome sequencing services has brought new questions about both the medical efficacy and the ethical dilemmas associated with widespread knowledge of individual genetic information. In personalized medicine Personalized medicine is a medical method that targets treatment structures and medicinal decisions based on a patient's predicted response or risk of disease. The National Cancer Institute or NCI, an arm of the National Institutes of Health, lists a patient's genes, proteins, and environment as the primary factors analyzed to prevent, diagnose, and treat disease through personalized medicine. There are various subcategories of the concept of personalized medicine such as predictive medicine, precision medicine and stratified medicine. Although these terms are used interchangeably to describe this practice, each carries individual nuances. Predictive medicine describes the field of medicine that utilizes information, often obtained through personal genomics techniques, to both predict the possibility of disease, and institute preventative measures for a particular individual. Precision medicine is a term very similar to personalized medicine in that it focuses on a patient's genes, env
https://en.wikipedia.org/wiki/Generalized%20Hebbian%20algorithm
The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons. Theory The GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form $\Delta w_{ij} = \eta \left( y_i x_j - y_i \sum_{k=1}^{i} w_{kj} y_k \right)$, where $w_{ij}$ defines the synaptic weight or connection strength between the $j$-th input and $i$-th output neurons, $\mathbf{x}$ and $\mathbf{y}$ are the input and output vectors, respectively, and $\eta$ is the learning rate parameter. Derivation In matrix form, Oja's rule can be written $\Delta w(t) = \eta(t)\left(\mathbf{y}(t)\mathbf{x}(t)^T - \mathrm{diag}[\mathbf{y}(t)\mathbf{y}(t)^T]\,w(t)\right)$, and the Gram-Schmidt algorithm is $\Delta w(t) = -\mathrm{lower}[\,w(t)\,Q\,w(t)^T\,]\,w(t)$, where $w(t)$ is any matrix, in this case representing synaptic weights, $Q = \mathbf{x}\mathbf{x}^T$ is the autocorrelation matrix, simply the outer product of inputs, $\mathrm{diag}$ is the function that diagonalizes a matrix, and $\mathrm{lower}$ is the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form, $\Delta w(t) = \eta(t)\left(\mathbf{y}(t)\mathbf{x}(t)^T - \mathrm{LT}[\mathbf{y}(t)\mathbf{y}(t)^T]\,w(t)\right)$, where the function $\mathrm{LT}$ sets all matrix elements above the diagonal equal to 0, and note that our output $\mathbf{y}(t) = w(t)\mathbf{x}(t)$ is a linear neuron. Stability and PCA Applications The GHA is used in applications where a self-organizing map is necessary, or where a feature or principal components analysis can be used. Examples of such cases include artificial intelligence and speech and image processing. Its importance comes from the fact that learning is a single-layer process—that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the
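A minimal numpy sketch of the rule above, run on synthetic correlated data (the dimensions, learning rate, and data distribution are illustrative choices): after training, the rows of $w$ should approximate the leading principal components of the input covariance, in order.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean inputs with anisotropic covariance.
C = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(3), C, size=10000)

n_out, eta = 2, 1e-3
w = 0.01 * rng.standard_normal((n_out, 3))

for x in X:
    y = w @ x
    # Sanger's rule: dw = eta * (outer(y, x) - LT[outer(y, y)] @ w),
    # where LT keeps the lower triangle *including* the diagonal.
    w += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ w)

# Compare with the true leading eigenvectors of C.
eigvals, eigvecs = np.linalg.eigh(C)
pcs = eigvecs[:, ::-1].T[:n_out]          # descending eigenvalue order
for i in range(n_out):
    cos = abs(w[i] @ pcs[i]) / np.linalg.norm(w[i])
    print(f"|cos angle| between row {i} and PC{i+1}: {cos:.3f}")
```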
https://en.wikipedia.org/wiki/Synaptic%20weight
In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research. Computation In a computational neural network, a vector or set of inputs $\mathbf{x}$ and outputs $\mathbf{y}$, or pre- and post-synaptic neurons respectively, are interconnected with synaptic weights represented by the matrix $w$, where for a linear neuron $y_j = \sum_i w_{ji} x_i$, and the rows of the synaptic matrix represent the vector of synaptic weights for the output indexed by $j$. The synaptic weight is changed by using a learning rule, the most basic of which is Hebb's rule, which is usually stated in biological terms as Neurons that fire together, wire together. Computationally, this means that if a large signal from one of the input neurons results in a large signal from one of the output neurons, then the synaptic weight between those two neurons will increase. The rule is unstable, however, and is typically modified using such variations as Oja's rule, radial basis functions or the backpropagation algorithm. Biology For biological networks, the effect of synaptic weights is not as simple as for linear neurons or Hebbian learning. However, biophysical models such as BCM theory have seen some success in mathematically describing these networks. In the mammalian central nervous system, signal transmission is carried out by interconnected networks of nerve cells, or neurons. For the basic pyramidal neuron, the input signal is carried by the axon, which releases neurotransmitter chemicals into the synapse which is picked up by the dendrites of the next neuron, which can then generate an action potential which is analogous to the output signal in the computational case. The synaptic weight in this process is determined by several variable factors: How well the input signal propagates through the axon (see
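A minimal sketch of the Hebbian update for the linear layer just described (the sizes, seed, and learning rate are arbitrary choices): the weight between an input and an output grows in proportion to the correlation of their activity, which is also why the raw rule is unstable and needs a modification such as Oja's rule.

```python
import numpy as np

rng = np.random.default_rng(1)
w = 0.1 * rng.standard_normal((2, 4))    # 4 inputs -> 2 linear outputs

eta = 0.01
for _ in range(100):
    x = rng.standard_normal(4)           # presynaptic activity
    y = w @ x                            # postsynaptic activity (linear neuron)
    w += eta * np.outer(y, x)            # Hebb: "fire together, wire together"

print(np.linalg.norm(w))  # keeps growing with more steps: the raw rule is unstable
```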
https://en.wikipedia.org/wiki/Speech%20science
Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics. Speech production The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation). Respiration Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by
https://en.wikipedia.org/wiki/Sextuple%20bond
A sextuple bond is a type of covalent bond involving 12 bonding electrons and in which the bond order is 6. The only known molecules with true sextuple bonds are the diatomic dimolybdenum (Mo2) and ditungsten (W2), which exist in the gaseous phase and have boiling points of 4,639 °C and 5,930 °C respectively. Theoretical analysis Roos et al. argue that no stable element can form bonds of higher order than a sextuple bond, because the latter corresponds to a hybrid of the s orbital and all five d orbitals, and f orbitals contract too close to the nucleus to bond in the lanthanides. Indeed, quantum mechanical calculations have revealed that the dimolybdenum bond is formed by a combination of two σ bonds, two π bonds and two δ bonds. (Also, the σ and π bonds contribute much more significantly to the sextuple bond than the δ bonds.) Although no φ bonding has been reported for transition metal dimers, it is predicted that if any sextuply-bonded actinides were to exist, at least one of the bonds would likely be a φ bond, as in quintuply-bonded diuranium and dineptunium. No sextuple bond has been observed in lanthanides or actinides. For the majority of elements, even the possibility of a sextuple bond is foreclosed, because the d electrons couple ferromagnetically instead of bonding. The only known exceptions are dimolybdenum and ditungsten. Quantum-mechanical treatment The formal bond order of a molecule is half the number of bonding electrons surplus to antibonding electrons; for a typical molecule, it attains exclusively integer values. A full quantum treatment requires a more nuanced picture, in which electrons may exist in a superposition, contributing fractionally to both bonding and antibonding orbitals. In a formal sextuple bond, there would be six different electron pairs; an effective sextuple bond would then have all six contributing almost entirely to bonding orbitals. In Roos et al.'s calculations, the effective bond order could be determined by the formula
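In Roos et al.'s analysis the effective bond order is computed from natural-orbital occupation numbers, summing $(\eta_b - \eta_{ab})/2$ over the bonding/antibonding orbital pairs. A sketch with invented occupation numbers (illustrative only, not computed data):

```python
def effective_bond_order(pairs):
    """EBO = sum over orbital pairs of (n_bonding - n_antibonding) / 2.

    `pairs` maps an orbital label to (bonding, antibonding) natural-orbital
    occupation numbers; the values used below are illustrative.
    """
    return sum((nb - na) / 2 for nb, na in pairs.values())

# Hypothetical occupations for a formally sextuple-bonded dimer: the sigma
# and pi pairs are occupied nearly cleanly, the weaker delta pairs much less
# so, pulling the effective order well below the formal value of 6.
pairs = {
    "sigma_s": (1.9, 0.1), "sigma_d": (1.9, 0.1),
    "pi_1":    (1.8, 0.2), "pi_2":    (1.8, 0.2),
    "delta_1": (1.3, 0.7), "delta_2": (1.3, 0.7),
}
print(effective_bond_order(pairs))  # 4.0 here, well below the formal order of 6
```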
https://en.wikipedia.org/wiki/National%20Health%20and%20Nutrition%20Examination%20Survey
The National Health and Nutrition Examination Survey (NHANES) is a survey research program conducted by the National Center for Health Statistics (NCHS) to assess the health and nutritional status of adults and children in the United States, and to track changes over time. The survey combines interviews, physical examinations and laboratory tests. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests administered by medical personnel. The first NHANES was conducted in 1971, and in 1999 the surveys became an annual event; the first report on the topic was published in 2001. NHANES findings are used to determine the prevalence of major diseases and risk factors for diseases. Information is used to assess nutritional status and its association with health promotion and disease prevention. NHANES findings are also the basis for national standards for such measurements as height, weight, and blood pressure. NHANES data are used in epidemiological studies and health sciences research (including biomarkers of aging), which help develop sound public health policy, direct and design health programs and services, expand health knowledge, extend healthspan and lifespan. Follow-up studies using NHANES data were made possible by creating linked mortality files and files based on Medicare and Medicaid data. See also National Archive of Computerized Data on Aging
https://en.wikipedia.org/wiki/Floral%20Genome%20Project
The Floral Genome Project is a collaborative research cooperation primarily between Penn State University, the University of Florida, and Cornell University. The initial funding came from a grant of $7.4 million from the National Science Foundation. The Floral Genome Project was initiated to bridge the genomic gap between the most broadly studied plant model systems.
https://en.wikipedia.org/wiki/Biological%20neuron%20model
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of neurons. In particular, these models describe how the voltage potential across the cell membrane changes over time. In an experimental setting, stimulating neurons with an electrical current generates an action potential (or spike), that propagates down the neuron's axon. This spike branches out to a large number of downstream neurons, where the signals terminate at synapses. As many as 85% of neurons in the neocortex, the outermost layer of the mammalian brain, are excitatory pyramidal neurons, and each pyramidal neuron receives tens of thousands of inputs from other neurons. Thus, spiking neurons are a major information processing unit of the nervous system. One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatial morphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of transmembrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion channel and membrane potential dynamics (initially studied by Lapicque in 1907). Introduction: Biological background, classification and aims of neuron models Non-spiking cells, spiking cells, and their measurement Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia. Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials. With extracellular measurement techniques an electrode (or array of se
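The integrate-and-fire idea mentioned above can be sketched in a few lines: the membrane voltage obeys a leaky RC equation driven by an input current, and a spike is recorded (and the voltage reset) whenever a threshold is crossed. The parameter values below are illustrative, not taken from any particular neuron:

```python
# Leaky integrate-and-fire: tau_m dV/dt = -(V - V_rest) + R * I(t),
# with an instantaneous reset when V crosses threshold.
tau_m, R = 10.0, 10.0                          # time constant (ms), resistance (MOhm)
V_rest, V_th, V_reset = -70.0, -55.0, -75.0    # rest, threshold, reset (mV)
dt, T = 0.1, 200.0                             # time step and duration (ms)

V, spikes = V_rest, []
for step in range(int(T / dt)):
    t = step * dt
    I = 2.0 if 50 <= t <= 150 else 0.0         # square current pulse (nA)
    V += dt / tau_m * (-(V - V_rest) + R * I)  # forward-Euler membrane update
    if V >= V_th:                              # threshold crossing: spike + reset
        spikes.append(t)
        V = V_reset

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```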
https://en.wikipedia.org/wiki/VoFR
Voice over Frame Relay (VoFR) is a protocol to transfer voice over Frame Relay networks. VoFR uses two sub-protocols, FRF.11 and FRF.12. FRF.11 defines the frame format of VoFR, and FRF.12 is used for packet fragmentation and reassembly.
https://en.wikipedia.org/wiki/Proximity%20communication
Proximity communication is a Sun Microsystems technology for wireless chip-to-chip communication, developed in part by Robert Drost and Ivan Sutherland, with research done as part of DARPA's High Productivity Computing Systems project. Proximity communication replaces wires with capacitive coupling and promises a significant increase in communication speed between chips in an electronic system, among other benefits. It was partially funded by a $50 million award from the Defense Advanced Research Projects Agency. Compared with traditional area ball bonding, proximity communication operates at one order of magnitude smaller feature scale, so its connections can be two orders of magnitude denser than ball bonding. The technique requires very good alignment between chips and very small gaps between the transmitting (Tx) and receiving (Rx) parts (2–3 micrometers), which can be disturbed by thermal expansion, vibration, dust, etc. According to a presentation slide, the chip transmitter consists of a large 32x32 array of very small Tx micropads, a 4x4 array of larger Rx micropads (each four times the size of a Tx micropad), and two linear arrays of 14 X-vernier and 14 Y-vernier pads. Proximity communication can be used with 3D packing of chips in a multi-chip module (MCM), allowing several MCMs to be connected without sockets and wires. Speed was up to 1.35 Gbit/s per channel in tests of 16-channel systems, with a bit error rate below 10^−12. Static power is 3.6 mW per channel; dynamic power is 3.9 pJ/bit.
https://en.wikipedia.org/wiki/Master%20of%20the%20Lamps
Master of the Lamps is a music video game published in 1985 by Activision. It was released for the Amstrad CPC, Apple II, Atari 8-bit family, Commodore 64, and MSX. Plot The death of an Arabian prince's father, the king, shatters three enchanted oil lamps, freeing the three genies trapped within. The genies overrun the palace; to contain them, the prince must reassemble the three broken lamps. The player, in the role of the prince wearing a white thawb and red keffiyeh, must journey into the seven dens of each genie, as each den contains one of the lamp pieces. Gameplay Gameplay alternates between two modes. In the first, the prince maneuvers a flying carpet through a winding tunnel to a genie's den. In practice, this requires the player to direct the carpet over diamond-shaped gates as they appear; failure to do so returns the prince to the beginning of the tunnel. Once in the den, the second mode, the player strikes a gong thrice to summon the genie. The genie draws from a hookah, and blows out a ball of smoke. From the smoke emerges a sequence of tones, which the player must repeat in a call-and-response pattern. In order to play a tone, the player must strike the corresponding gong. If the player strikes the incorrect gong, or strikes the correct gong too early, the genie's magic transports the prince to the beginning of the tunnel. In the seven dens of the first genie, each tone is audible, and manifests as a colored quaver (eighth note) that floats toward the ground. In the seven dens of the second genie, the tone is inaudible, so the player must match the color of the note to the color of the corresponding gong. In the seven dens of the third genie, the tone is audible, but no note appears; the player must recognize the note's pitch, and strike the correct gong. When the player passes the trial, a gateway to another tunnel opens. After passing the musical trials of the three genies, the player navigates one final tunnel to the palace. If the player succ
https://en.wikipedia.org/wiki/Critical%20state%20soil%20mechanics
Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models that represent the mechanical behavior of saturated remolded soils based on the Critical State concept. Formulation The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions occur without any further changes in mean effective stress $p'$, deviatoric stress $q$ (or yield stress, $\sigma_y$, in uniaxial tension according to the von Mises yielding criterion), or specific volume $\nu$: $\partial p'/\partial\varepsilon_s = 0$, $\partial q/\partial\varepsilon_s = 0$, $\partial\nu/\partial\varepsilon_s = 0$, where $\nu = 1 + e$ ($e$ being the void ratio), $p' = \tfrac{1}{3}(\sigma_1' + \sigma_2' + \sigma_3')$, and $q = \sqrt{\tfrac{1}{2}\left[(\sigma_1'-\sigma_2')^2 + (\sigma_2'-\sigma_3')^2 + (\sigma_3'-\sigma_1')^2\right]}$. However, for triaxial conditions $\sigma_2' = \sigma_3'$. Thus, $p' = \tfrac{1}{3}(\sigma_1' + 2\sigma_3')$ and $q = \sigma_1' - \sigma_3'$. All critical states, for a given soil, form a unique line called the Critical State Line (CSL) defined by the following equations in the space $(p', q, \nu)$: $q = M p'$ and $\nu = \Gamma - \lambda \ln p'$, where $M$, $\Gamma$, and $\lambda$ are soil constants. The first equation determines the magnitude of the deviatoric stress $q$ needed to keep the soil flowing continuously as the product of a frictional constant $M$ (capital $\mu$) and the mean effective stress $p'$. The second equation states that the specific volume $\nu$ occupied by unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases. History In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University, in the late forties and early fifties, developed a simple shear apparatus in which his successive students attempted to study the changes in conditions in the shear zone both in sand and in clay soils. In 1958 a study of the yielding of soil based on some Cambridge data of the simple shear apparatus tests, and on much more extensive data of triaxial tests at Imperial College London from research led by Professor Sir Alec Skempton, led to the publication of th
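A small sketch evaluating the two CSL equations for an assumed soil (the constants below are illustrative values of typical magnitude, not from the source; the logarithm is referenced to 1 kPa):

```python
import math

M, Gamma, lam = 0.95, 2.06, 0.093   # illustrative critical-state constants

def critical_state(p_eff_kpa: float):
    """Deviatoric stress and specific volume on the CSL at mean stress p'."""
    q = M * p_eff_kpa                       # q = M p'
    v = Gamma - lam * math.log(p_eff_kpa)   # v = Gamma - lambda * ln(p')
    return q, v

for p in (50, 100, 200, 400):
    q, v = critical_state(p)
    print(f"p' = {p:3d} kPa: q = {q:6.1f} kPa, v = {v:.3f}")
```

Doubling $p'$ always adds the same increment $M p'$ to the deviatoric stress while subtracting the same constant $\lambda \ln 2$ from the specific volume, which is the semi-logarithmic compression behavior the second equation encodes.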
https://en.wikipedia.org/wiki/Madelung%20equations
In theoretical physics, the Madelung equations, or the equations of quantum hydrodynamics, are Erwin Madelung's equivalent alternative formulation of the Schrödinger equation, written in terms of hydrodynamical variables, similar to the Navier–Stokes equations of fluid dynamics. The derivation of the Madelung equations is similar to the de Broglie–Bohm formulation, which represents the Schrödinger equation as a quantum Hamilton–Jacobi equation. Equations The Madelung equations are quantum Euler equations: $\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0$ and $\frac{d\mathbf{u}}{dt} = \partial_t\mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u} = -\frac{1}{m}\nabla(Q + V)$, where $\mathbf{u}$ is the flow velocity, $\rho = m|\psi|^2$ is the mass density, $Q = -\frac{\hbar^2}{2m}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}$ is the Bohm quantum potential, and $V$ is the potential from the Schrödinger equation. The circulation of the flow velocity field along any closed path obeys the auxiliary condition $\oint m\,\mathbf{u}\cdot d\mathbf{l} = 2\pi n\hbar$ for all integers $n$. Derivation The Madelung equations are derived by writing the wavefunction in polar form: $\psi = \sqrt{\rho/m}\,e^{iS/\hbar}$, and substituting this form into the Schrödinger equation $i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi$. The flow velocity is defined by $\mathbf{u} = \nabla S/m$, from which we also find that $\mathbf{j} = \rho\mathbf{u}/m$, where $\mathbf{j}$ is the probability current of standard quantum mechanics. The quantum force, which is the negative of the gradient of the quantum potential, can also be written in terms of the quantum pressure tensor: $\frac{\rho}{m}\nabla Q = \nabla\cdot\mathbf{p}_Q$, where $\mathbf{p}_Q = -\left(\frac{\hbar}{2m}\right)^2 \rho\,\nabla\otimes\nabla\ln\rho$. The integral energy stored in the quantum pressure tensor is proportional to the Fisher information, which accounts for the quality of measurements. Thus, according to the Cramér–Rao bound, the Heisenberg uncertainty principle is equivalent to a standard inequality for the efficiency of measurements. The thermodynamic definition of the quantum chemical potential $\mu = Q + V$ follows from the hydrostatic force balance above. According to thermodynamics, at equilibrium the chemical potential is constant everywhere, which corresponds straightforwardly to the stationary Schrödinger equation. Therefore, the eigenvalues of the Schrödinger equation are free energies, which differ from the internal energies of the system. The particle internal energy is calculated from the wavefunction and is related to the local Carl Friedrich von Weizsäcker correction.
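A numerical sketch of the Bohm quantum potential defined above, for a Gaussian density on a one-dimensional grid with $\hbar = m = 1$: the finite-difference result is checked against the closed form $Q(x) = \frac{1}{4\sigma^2}\left(1 - \frac{x^2}{2\sigma^2}\right)$, which follows from differentiating $\sqrt{\rho}$ by hand.

```python
import numpy as np

hbar = m = 1.0
sigma = 1.0
x = np.linspace(-4, 4, 2001)
rho = np.exp(-x**2 / (2 * sigma**2))   # Gaussian density (normalization cancels in Q)

# Q = -(hbar^2 / 2m) * (d^2/dx^2 sqrt(rho)) / sqrt(rho), by finite differences
sq = np.sqrt(rho)
d2sq = np.gradient(np.gradient(sq, x), x)
Q_num = -(hbar**2 / (2 * m)) * d2sq / sq

Q_exact = (1 / (4 * sigma**2)) * (1 - x**2 / (2 * sigma**2))
print(f"max interior error: {np.abs(Q_num - Q_exact)[50:-50].max():.2e}")
```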
https://en.wikipedia.org/wiki/Debtor%20days
The debtors days ratio measures how quickly cash is being collected from debtors. The longer it takes for a company to collect, the greater the number of debtor days. Debtor days can also be referred to as the debtor collection period. Another common ratio is the creditors days ratio. Definition Debtor days $= \frac{\text{trade debtors}}{\text{annual credit sales}} \times 365$, or Debtor days $= \frac{\text{trade debtors}}{\text{average daily sales}}$ when average daily sales are taken as annual credit sales divided by 365.
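A one-function computation of the ratio as defined above (the figures are invented for illustration):

```python
def debtor_days(trade_debtors: float, annual_credit_sales: float) -> float:
    """Average number of days taken to collect cash from debtors."""
    return trade_debtors / annual_credit_sales * 365

# e.g. 50,000 outstanding against 600,000 of annual credit sales
print(f"{debtor_days(50_000, 600_000):.1f} days")   # ~30.4 days
```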