https://en.wikipedia.org/wiki/1176%20Peak%20Limiter
The 1176 Peak Limiter is a dynamic range compressor designed by the American engineer Bill Putnam and introduced by UREI in 1967. Derived from the 175 and 176 tube compressors, it marked the transition from vacuum tubes to solid-state technology. With its distinctive tone and wide range of sounds, deriving from its Class A amplifiers, its input and output transformers, its uncommonly fast, program-dependent attack and release times, and its selectable compression ratios and modes, the 1176 was immediately appreciated by engineers and producers and became established as a studio standard over the years. At the time of its introduction, it was the first true peak limiter with all solid-state circuitry. The 1176LN was inducted into the TECnology Hall of Fame in 2008. History In 1966, the engineer Bill Putnam, founder of Universal Audio, began to employ the recently invented field-effect transistors (FETs), replacing vacuum tubes in his equipment designs. After successfully adapting the 108 tube microphone preamplifier into the new FET-based 1108, he redesigned the 175 and 176 variable-mu tube compressors into the new 1176 compressor. The initial units (A and AB revisions) were available in 1967 and were informally referred to as "blue stripe" units for their blue-colored meter section. Revision C, designed in 1970, brought one of the major design evolutions, with less noise and harmonic distortion. The unit was renamed the 1176LN and the face color changed to the now familiar solid black. Bill Putnam sold UREI in 1985, and Revision H was the last series produced by the original company. However, the company was re-established as Universal Audio in 1999 by his sons, Bill Putnam Jr. and Jim Putnam, who re-issued the 1176LN as the company's first product. The original design was reproduced and revised thanks to the extensive design notes left by Bill Putnam. Design The 1176 uses a field-effect transistor (FET), arranged in a feedback configuration, to obtain gain reduction. As its predecessor
https://en.wikipedia.org/wiki/Long-lived%20fission%20product
Long-lived fission products (LLFPs) are radioactive materials with a long half-life (more than 200,000 years) produced by nuclear fission of uranium and plutonium. Because of their persistent radiotoxicity, it is necessary to isolate them from humans and the biosphere and to confine them in nuclear waste repositories for geological periods of time. Evolution of radioactivity in nuclear waste Nuclear fission produces fission products, as well as actinides from nuclear fuel nuclei that capture neutrons but fail to fission, and activation products from neutron activation of reactor or environmental materials. Short-term The high short-term radioactivity of spent nuclear fuel is primarily from fission products with short half-lives. The radioactivity in the fission product mixture comes mostly from short-lived isotopes such as 131I and 140Ba; after about four months 141Ce, 95Zr/95Nb and 89Sr take the largest share, while after about two or three years the largest share is taken by 144Ce/144Pr, 106Ru/106Rh and 147Pm. Note that in the case of a release of radioactivity from a power reactor or used fuel, only some elements are released. As a result, the isotopic signature of the radioactivity is very different from that of an open-air nuclear detonation, where all the fission products are dispersed. Medium-lived fission products After several years of cooling, most radioactivity is from the fission products caesium-137 and strontium-90, which are each produced in about 6% of fissions and have half-lives of about 30 years. Other fission products with similar half-lives have much lower fission product yields and lower decay energy, and several (151Sm, 155Eu, 113mCd) are also quickly destroyed by neutron capture while still in the reactor, so they are not responsible for more than a tiny fraction of the radiation production at any time. Therefore, in the period from several years to several hundred years after use, the radioactivity of spent fuel can be modeled simply as exponential decay of the 137Cs
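The exponential-decay model of medium-lived fission products can be sketched in a few lines. The starting activities below are arbitrary illustrative values, not real inventory data; the half-lives (30.05 years for 137Cs, 28.8 years for 90Sr) are the commonly cited figures.

```python
def activity(a0, half_life_years, t_years):
    """Exponential decay: A(t) = A0 * 2^(-t / t_half)."""
    return a0 * 2.0 ** (-t_years / half_life_years)

# Illustrative starting activities (arbitrary units), not real inventory data.
cs137 = 1.0
sr90 = 1.0

for t in (0, 30, 100, 300):
    total = activity(cs137, 30.05, t) + activity(sr90, 28.8, t)
    print(f"t = {t:3d} y: relative activity = {total:.4f}")
```

After one half-life each isotope's contribution halves, which is why radioactivity in this regime falls by roughly a factor of two every thirty years.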
https://en.wikipedia.org/wiki/Configuration%20space%20%28physics%29
In classical mechanics, the parameters that define the configuration of a system are called generalized coordinates, and the space defined by these coordinates is called the configuration space of the physical system. It is often the case that these parameters satisfy mathematical constraints, such that the set of actual configurations of the system is a manifold in the space of generalized coordinates. This manifold is called the configuration manifold of the system. Notice that this is a notion of "unrestricted" configuration space, i.e. one in which different point particles may occupy the same position. In mathematics, in particular in topology, a notion of "restricted" configuration space is mostly used, in which the diagonals, representing "colliding" particles, are removed. Example: a particle in 3D space The position of a single particle moving in ordinary Euclidean 3-space is defined by the vector r = (x, y, z), and therefore its configuration space is R^3. It is conventional to use the symbol q for a point in configuration space; this is the convention in both the Hamiltonian formulation of classical mechanics and in Lagrangian mechanics. The symbol p is used to denote momenta; the symbol q̇ refers to velocities. A particle might be constrained to move on a specific manifold. For example, if the particle is attached to a rigid linkage, free to swing about the origin, it is effectively constrained to lie on a sphere. Its configuration space is the subset of coordinates in R^3 that define points on the sphere S^2. In this case, one says that the configuration manifold is the sphere, i.e. M = S^2. For n disconnected, non-interacting point particles, the configuration space is R^(3n). In general, however, one is interested in the case where the particles interact: for example, they are specific locations in some assembly of gears, pulleys, rolling balls, etc., often constrained to move without slipping. In this case, the configuration space is not all of R^(3n), but the subspace (submanifold) of allowable positions
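The sphere-constrained example can be made concrete with a short sketch: the generalized coordinates (theta, phi) parameterize the configuration manifold S^2, and every choice of them automatically satisfies the ambient constraint. The function name and coordinate convention below are illustrative, not taken from the article.

```python
import math

def pendulum_position(length, theta, phi):
    """Map generalized coordinates (theta, phi) on the configuration
    manifold S^2 to a point in ambient Euclidean 3-space."""
    x = length * math.sin(theta) * math.cos(phi)
    y = length * math.sin(theta) * math.sin(phi)
    z = -length * math.cos(theta)
    return (x, y, z)

# Any choice of generalized coordinates satisfies the constraint
# x^2 + y^2 + z^2 = L^2, i.e. the particle stays on the sphere.
L = 2.0
x, y, z = pendulum_position(L, theta=0.7, phi=1.9)
print(abs(x*x + y*y + z*z - L*L) < 1e-12)  # True
```

Only two numbers describe the configuration, even though the particle lives in three-dimensional space: the constraint has removed one degree of freedom.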
https://en.wikipedia.org/wiki/Anterior%20jugular%20vein
The anterior jugular vein is a vein in the neck. Structure The anterior jugular vein lies lateral to the cricothyroid ligament. It begins near the hyoid bone by the confluence of several superficial veins from the submandibular region. Its tributaries are some laryngeal veins, and occasionally a small thyroid vein. It descends between the median line and the anterior border of the sternocleidomastoid muscle, and, at the lower part of the neck, passes beneath that muscle to open into the termination of the external jugular vein, or, in some instances, into the subclavian vein. Just above the sternum the two anterior jugular veins communicate by a transverse trunk, the venous jugular arch, which receives tributaries from the inferior thyroid veins; each also communicates with the internal jugular. There are no valves in this vein. The pretracheal lymph nodes follow the anterior jugular vein on each side of the midline. Variation The anterior jugular vein varies considerably in size, usually bearing an inverse proportion to the external jugular. Most frequently, there are two anterior jugulars, a right and a left. However, there is sometimes only one. A duplicate anterior jugular vein may be present on one side, which may cross over the midline. Clinical significance Ultrasound The anterior jugular vein, if present, is easily identified using ultrasound of the neck. Tracheotomy The anterior jugular vein may be damaged during tracheotomy, causing significant bleeding. The significant variation in vein course, such as duplicate veins, creates this risk. Performing a midline incision helps to avoid the anterior jugular vein.
https://en.wikipedia.org/wiki/8-bit%20clean
8-bit clean is an attribute of computer systems, communication channels, and other devices and software, that process 8-bit character encodings without treating any byte as an in-band control code. History Until the early 1990s, many programs and data transmission channels were character-oriented and treated some characters, e.g., ETX, as control characters. Others assumed a stream of seven-bit characters, with values between 0 and 127; for example, the ASCII standard used only seven bits per character, avoiding an 8-bit representation in order to save on data transmission costs. On computers and data links using 8-bit bytes, this left the top bit of each byte free for use as a parity bit, flag bit, or metadata control bit. 7-bit systems and data links are unable to directly handle more complex character codes, which are commonplace in non-English-speaking countries with larger alphabets. Binary files of octets cannot be transmitted through 7-bit data channels directly. To work around this, binary-to-text encodings have been devised which use only 7-bit ASCII characters. Some of these encodings are uuencoding, Ascii85, SREC, BinHex, Kermit, and MIME's Base64. EBCDIC-based systems cannot handle all characters used in UUencoded data. However, the Base64 encoding does not have this problem. SMTP and NNTP 8-bit cleanness Historically, various media were used to transfer messages, some of them supporting only 7-bit data, so an 8-bit message had a high chance of being garbled during transmission in the 20th century. But some implementations ignored the formal discouragement of 8-bit data and allowed bytes with the high bit set to pass through. Such implementations are said to be 8-bit clean. In general, a communications protocol is said to be 8-bit clean if it correctly passes through the high bit of each byte in the communication process. Many early communications protocol standards, such as those for SMTP and NNTP, were designed to work over such "7-bit" communicatio
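The binary-to-text workaround can be demonstrated with Base64, which maps arbitrary octets onto a 7-bit-safe ASCII alphabet so the round trip through a non-8-bit-clean channel is lossless (shown here with Python's standard base64 module):

```python
import base64

# Arbitrary binary data, including bytes with the high bit set,
# which a 7-bit channel could strip or misinterpret.
payload = bytes([0x00, 0x7F, 0x80, 0xFF, 0xE9, 0x0A])

encoded = base64.b64encode(payload)      # only 7-bit-safe ASCII characters
decoded = base64.b64decode(encoded)

print(encoded.decode("ascii"))  # AH+A/+kK
print(decoded == payload)       # True: the round trip is lossless
```

Every byte of the encoded form is below 128, so even a channel that discards or repurposes the top bit transmits it intact, at the cost of a 4/3 size expansion.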
https://en.wikipedia.org/wiki/Intercarpal%20joints
The intercarpal joints (joints of the carpal bones of the wrist) can be subdivided into three sets of joints (also called articulations): those of the proximal row of carpal bones, those of the distal row of carpal bones, and those of the two rows with each other. Articulations The bones in each carpal row interlock with each other, and each row can therefore be considered a single joint. In the proximal row a limited degree of mobility is possible, but the bones of the distal row are connected to each other and to the metacarpal bones by strong ligaments that make this row and the metacarpus a functional entity. Proximal row The joints of the proximal row are arthrodial joints. The scaphoid, lunate, and triquetrum are connected by dorsal, volar, and interosseous ligaments. The dorsal intercarpal ligaments are two in number and placed transversely behind the bones of the first row; they connect the scaphoid and lunate, and the lunate and triquetrum. The palmar intercarpal ligaments are also two in number; they connect the scaphoid and lunate, and the lunate and triangular. They are less strong than the dorsal ligaments, and placed very deeply behind the flexor tendons and the volar radiocarpal ligament. The interosseous intercarpal ligaments are two narrow bundles, one connecting the lunate with the scaphoid, the other joining it to the triangular. They are on a level with the superior surfaces of these bones, and their upper surfaces are smooth and form part of the convex articular surface of the wrist-joint. The ligaments connecting the pisiform bone are the articular capsule and the two volar ligaments. The articular capsule is a thin membrane which connects the pisiform to the triangular; it is lined by synovial membrane. The two volar ligaments are strong fibrous bands; one, the pisohamate ligament, connects the pisiform to the hamate, the other, the pisometacarpal ligament, joins the pisiform to the base of the fifth metacarpal bone. These ligaments are, in reality, prolong
https://en.wikipedia.org/wiki/Viewing%20angle
In display technology parlance, viewing angle is the angle at which a display can be viewed with acceptable visual performance. In a technical context, the angular range is called the viewing cone, defined by a multitude of viewing directions. The viewing angle can be an angular range over which the display view is acceptable, or it can be the angle of generally acceptable viewing, such as a twelve o'clock viewing angle for a display optimized for viewing from the top. The image may seem garbled, poorly saturated, of poor contrast, blurry, or too faint outside the stated viewing angle range; the exact mode of "failure" depends on the display type in question. For example, some projection screens reflect more light perpendicular to the screen and less light to the sides, making the screen appear much darker (and sometimes with distorted colors) if the viewer is not in front of the screen. Many manufacturers of projection screens thus define the viewing angle as the angle at which the luminance of the image is exactly half of the maximum. With LCD screens, some manufacturers have opted to measure the contrast ratio and report the viewing angle as the angle where the contrast ratio exceeds 5:1 or 10:1, giving minimally acceptable viewing conditions. The viewing angle is measured from one direction to the opposite, giving a maximum of 180° for a flat, one-sided screen. A display may exhibit different behavior along its horizontal and vertical axes, requiring users and manufacturers to specify maximum usable viewing angles in both directions. Usually, screens are designed to facilitate greater viewing angles at the horizontal level and smaller angles at the vertical level, should the two differ in magnitude. The viewing angle for some displays is specified in only a general direction, such as 6 o'clock or 12 o'clock. Early LCDs had strikingly narrow viewing cones, a situation that has been improved with current technology. Narrow viewing cones of some types of display
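The contrast-ratio definition can be illustrated with a short sketch: find the angular range over which measured contrast stays above a threshold, then quote the span from one extreme to the opposite. The measurement values below are invented for illustration.

```python
# Hypothetical measurements: contrast ratio sampled at viewing angles
# (degrees from the screen normal; negative = one side of centre).
measurements = {-80: 3, -60: 8, -40: 25, -20: 90, 0: 120,
                20: 95, 40: 30, 60: 9, 80: 4}

THRESHOLD = 10  # 10:1 contrast ratio, a common acceptability cutoff

acceptable = [a for a, cr in measurements.items() if cr >= THRESHOLD]
# Viewing angle is quoted from one extreme direction to the opposite.
viewing_angle = max(acceptable) - min(acceptable)
print(f"usable from {min(acceptable)} deg to {max(acceptable)} deg "
      f"-> viewing angle {viewing_angle} deg")
```

A datasheet would report this panel as having an 80° viewing angle at the 10:1 criterion, even though it remains faintly visible well beyond that.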
https://en.wikipedia.org/wiki/Ecash
Ecash was conceived by David Chaum as an anonymous cryptographic electronic money or electronic cash system in 1982. It was realized through his corporation DigiCash and used as a micropayment system at one US bank from 1995 to 1998. Design Chaum published the idea of anonymous electronic money in a 1983 paper; the eCash software on the user's local computer stored money in a digital format, cryptographically signed by a bank. The user could spend the digital money at any shop accepting eCash, without having to open an account with the vendor first or transmit credit card numbers. Security was ensured by public key digital signature schemes. RSA blind signatures achieved unlinkability between withdrawal and spend transactions. Depending on the payment transactions, one distinguishes between on-line and off-line electronic cash: if the payee has to contact a third party (e.g., the bank or the credit-card company acting as an acquirer) before accepting a payment, the system is called an on-line system. In 1990, Chaum together with Moni Naor proposed the first off-line e-cash system, which was also based on blind signatures. History Chaum started the company DigiCash in 1989 with "ecash" as its trademark. He raised $10 million from David Marquardt, and by 1997 Nicholas Negroponte was its chairman. Yet, in the United States, only one bank, the Mark Twain Bank in Saint Louis, MO, implemented ecash, testing it as a micropayment system. Similar to credit cards, the system was free to purchasers, while merchants paid a transaction fee. After a three-year trial that signed up merely 5,000 customers, the system was dissolved in 1998, one year after the bank had been purchased by Mercantile Bank, a large issuer of credit cards. David Chaum opined then: “As the Web grew, the average level of sophistication of users dropped. It was hard to explain the importance of privacy to them”. In Europe, with fewer credit cards and more cash transactions, micropayment technologies made
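The unlinkability property rests on Chaum's RSA blind signature, which can be sketched with textbook toy parameters (tiny primes, no padding, wholly insecure; requires Python 3.8+ for the modular-inverse form of pow):

```python
# Toy RSA blind signature (textbook scheme, tiny insecure parameters).
p, q = 61, 53
n = p * q                            # public modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

m = 42                    # message representative (in practice, a hash)
r = 5                     # blinding factor, gcd(r, n) == 1

# User blinds the message; the bank never sees m itself.
blinded = (m * pow(r, e, n)) % n

# Bank signs the blinded value with its private key.
blind_sig = pow(blinded, d, n)

# User unblinds; the result is a valid ordinary signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

print(pow(sig, e, n) == m)  # True: the signature verifies, yet the
                            # bank cannot link sig back to blinded
```

Algebraically, (m · r^e)^d = m^d · r (mod n), so multiplying by r^(-1) leaves exactly m^d: the bank's withdrawal record and the later spend transaction share no common value.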
https://en.wikipedia.org/wiki/Everysight
Everysight Ltd. is an Israeli technology company established in 2014 as a spinoff of Elbit Systems. Everysight develops smartglasses based on augmented reality technology for the civilian market. The company's main product is Raptor smartglasses. History Early years Everysight's first generation projection system was developed in 2004. This version was a small micro-HUD, which used a staged beam combiner integrated within a panel window device located in front of the user's eye. This allowed the user to view the surroundings while looking at real-time information, projected from the combiners, and perceived as an augmented reality graphic layer floating in front of the user. An eMagin OLED display component enabled users to view the projected information even in full sunlight. The display system was connected by cable to a Sony U subnotebook running Windows XP, which interfaced wirelessly with a variety of ANT+ sensors while running custom software designed for cycling and skiing. 2006–2007 – 2nd generation The team developed the second generation of smartglasses based on the free space principle, with an off-axis non-forming exit pupil, whereby the glasses' lenses themselves are used as a beam combiner in such a way that the projected light can return to the user's eye. Other than the lens itself, there is no additional element in front of the eye, preventing sight obstruction and increasing eye safety. Additionally, the lenses featured a spherical structure delivering built-in correction for optical distortions. The optical solution combined the use of a mini OLED display with significantly low power consumption. The optical solution proved to perform with extremely high efficiency while enabling a high-contrast display that remained visible even in full sunlight. The model was powered by a specially adapted, PDA-like computer running Windows CE, which was connected to the glasses by cable and ran various software apps. The computer included a wireless int
https://en.wikipedia.org/wiki/%C3%89va%20Tardos
Éva Tardos (born 1 October 1957) is a Hungarian mathematician and the Jacob Gould Schurman Professor of Computer Science at Cornell University. Tardos's research interest is algorithms. Her work focuses on the design and analysis of efficient methods for combinatorial optimization problems on graphs or networks. She has worked on network flow algorithms, including approximation algorithms for network flow, cut, and clustering problems. Her recent work focuses on algorithmic game theory and simple auctions. Education and career Tardos received her Dipl. Math in 1981 and her Ph.D. in 1984 from the Faculty of Sciences of the Eötvös Loránd University under her advisor András Frank. She was the Chair of the Department of Computer Science at Cornell from 2006 to 2010, and she is currently serving as the Associate Dean of the College of Computing and Information Science. She was editor-in-chief of the SIAM Journal on Computing from 2004 to 2009, and is currently the Economics and Computation area editor of the Journal of the ACM, as well as on the Board of Editors of Theory of Computing. She has co-authored with Jon Kleinberg a textbook called Algorithm Design. Honors and awards Tardos has been elected to the National Academy of Engineering (2007), the American Academy of Arts and Sciences, the National Academy of Sciences (2013), and the American Philosophical Society (2020). She is also an ACM Fellow (since 1998), a Fellow of INFORMS, and a Fellow of the American Mathematical Society (2013). She is the recipient of Packard, Sloan Foundation, and Guggenheim fellowships. She is the winner of the Fulkerson Prize (1988), the George B. Dantzig Prize (2006), the Van Wijngaarden Award (2011), the Gödel Prize (2012), and the EATCS Award (2017). In 2018 the Association for Women in Mathematics and the Society for Industrial and Applied Mathematics selected her as their annual Sonia Kovalevsky Lecturer. In 2019 she was awarded the IEEE John von Neumann Medal. Personal Tardos is married
https://en.wikipedia.org/wiki/Molecular%20Systems%20Biology
Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics, proteomics, metabolomics, microbial systems, the integration of cell signaling and regulatory networks), synthetic biology, and systems medicine. It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization. As of December 2013, it is published by EMBO Press.
https://en.wikipedia.org/wiki/Derivative%20chromosome
A derivative chromosome (der) is a structurally rearranged chromosome generated either by a chromosome rearrangement involving two or more chromosomes or by multiple chromosome aberrations within a single chromosome (e.g. an inversion and a deletion of the same chromosome, or deletions in both arms of a single chromosome). The term always refers to the chromosome that has an intact centromere. Derivative chromosomes are designated by the abbreviation der when used to describe a karyotype. The derivative chromosome must be specified in parentheses, followed by all aberrations involved in this derivative chromosome. The aberrations must be listed from pter to qter and not be separated by a comma. For example, 46,XY,der(4)t(4;8)(p16;q22)t(4;9)(q31;q31) would refer to a derivative chromosome 4 which is the result of a translocation between the short arm of chromosome 4 at region 1, band 6 and the long arm of chromosome 8 at region 2, band 2, and a translocation between the long arm of chromosome 4 at region 3, band 1 and the long arm of chromosome 9 at region 3, band 1. As for the initial string "46,XY", it only signifies that this translocation is occurring in an organism which has this set of chromosomes, i.e. a human being.
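The designation format is regular enough to be picked apart mechanically. The following sketch uses a hypothetical regular-expression parser and handles only the der(...)t(...)(...) pattern from the example above, not general ISCN nomenclature:

```python
import re

# Hypothetical helper: pull the derivative chromosome and its listed
# aberrations out of an ISCN-style karyotype string.
KARYOTYPE = "46,XY,der(4)t(4;8)(p16;q22)t(4;9)(q31;q31)"

der = re.search(r"der\((\w+)\)", KARYOTYPE).group(1)
aberrations = re.findall(r"t\(([^)]*)\)\(([^)]*)\)", KARYOTYPE)

print(der)  # "4" -> derivative chromosome 4 (the one with the centromere)
for chroms, bands in aberrations:
    print(f"translocation between chromosomes {chroms} at bands {bands}")
```

The aberrations come out in the order written, which by the rule above is always pter to qter along the derivative chromosome.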
https://en.wikipedia.org/wiki/Alan%20Woodward%20%28computer%20scientist%29
Alan Woodward is a British computer scientist at the University of Surrey. He is a specialist in computer security and a core member of the Surrey Centre for Cyber Security. He studied physics as an undergraduate student and conducted research in signal processing as a postgraduate student, both at the University of Southampton. He has worked in government service, business, and academia. In addition to his academic qualifications, his practical accomplishments have resulted in his election as a Fellow of the Institute of Physics, a Chartered Physicist, a Chartered IT Practitioner, a Chartered Engineer, a Fellow of the British Computer Society, a EUR ING, and a Fellow of the Royal Statistical Society.
https://en.wikipedia.org/wiki/Over%20the%20Rainbow
"Over the Rainbow" is a ballad by Harold Arlen with lyrics by Yip Harburg. It was written for the 1939 film The Wizard of Oz, in which it was sung by actress Judy Garland in her starring role as Dorothy Gale. It won the Academy Award for Best Original Song and became Garland's signature song. About five minutes into the film, Dorothy sings the song after failing to get Aunt Em, Uncle Henry, and the farmhands to listen to her story of an unpleasant incident involving her dog, Toto, and the town spinster, Miss Gulch (Margaret Hamilton). Aunt Em tells her to "find yourself a place where you won't get into any trouble". This prompts her to walk off by herself, musing to Toto, "Someplace where there isn't any trouble. Do you suppose there is such a place, Toto? There must be. It's not a place you can get to by a boat, or a train. It's far, far away. Behind the moon, beyond the rain", at which point she begins singing. Background Composer Harold Arlen and lyricist Yip Harburg often worked in tandem, Harburg generally suggesting an idea or title for Arlen to set to music, before Harburg contributed the lyrics. For their work together on The Wizard of Oz, Harburg claimed his inspiration was "a ballad for a little girl who... was in trouble and... wanted to get away from... Kansas. A dry, arid, colorless place. She had never seen anything colorful in her life except the rainbow". Arlen decided the idea needed "a melody with a long broad line". By the time all the other songs for the film had been written, Arlen was feeling the pressure of not having the song for the Kansas scene. He often carried blank pieces of music manuscript in his pockets to jot down short melodic ideas. Arlen described how the inspiration for the melody to "Over the Rainbow" came to him suddenly while his wife Anya drove: "I said to Mrs. Arlen... 'let's go to Grauman's Chinese ... You drive the car, I don't feel too well right now.' I wasn't thinking of work. I wasn't consciously thinking of work,
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles%20%28%21%24%40%29
The index of physics articles is split into multiple pages due to its size. To navigate by individual letter use the table of contents below. !$@ 't Hooft loop 't Hooft symbol 't Hooft–Polyakov monopole (2+1)-dimensional topological gravity (n-p) reaction (−1)F ΔT Indexes of physics articles
https://en.wikipedia.org/wiki/Reinsurance
Reinsurance is insurance that an insurance company purchases from another insurance company to insulate itself (at least in part) from the risk of a major claims event. With reinsurance, the company passes on ("cedes") some part of its own insurance liabilities to the other insurance company. The company that purchases the reinsurance policy is referred to as the "ceding company" or "cedent". The company issuing the reinsurance policy is referred to as the "reinsurer". In the classic case, reinsurance allows insurance companies to remain solvent after major claims events, such as major disasters like hurricanes or wildfires. In addition to its basic role in risk management, reinsurance is sometimes used to reduce the ceding company's capital requirements, or for tax mitigation or other purposes. The reinsurer may be either a specialist reinsurance company, which only undertakes reinsurance business, or another insurance company. Insurance companies that accept reinsurance refer to the business as "assumed reinsurance". There are two basic methods of reinsurance: Facultative Reinsurance, which is negotiated separately for each insurance policy that is reinsured. Facultative reinsurance is normally purchased by ceding companies for individual risks not covered, or insufficiently covered, by their reinsurance treaties, for amounts in excess of the monetary limits of their reinsurance treaties and for unusual risks. Underwriting expenses, and in particular personnel costs, are higher for such business because each risk is individually underwritten and administered. However, as they can separately evaluate each risk reinsured, the reinsurer's underwriter can price the contract more accurately to reflect the risks involved. Ultimately, a facultative certificate is issued by the reinsurance company to the ceding company reinsuring that one policy, and is used for high-value or hazardous risks. Treaty Reinsurance means that the ceding company and the reinsurer negoti
https://en.wikipedia.org/wiki/Three-state%20logic
In digital electronics, a tri-state or three-state buffer is a type of digital buffer that has three stable states: a high output state, a low output state, and a high-impedance state. In the high-impedance state, the output of the buffer is disconnected from the bus, allowing other devices to drive the bus without interference from the tri-state buffer. This is useful in situations where multiple devices are connected to the same bus and need to take turns accessing it. Systems implementing three-state logic on their bus are known as a three-state bus or tri-state bus. For example, in a computer system, multiple devices such as the CPU, memory, and peripherals may be connected to the same data bus. To ensure that only one device can transmit data on the bus at a time, each device is equipped with a tri-state buffer. When a device wants to transmit data, it activates its tri-state buffer, which connects its output to the bus and allows it to transmit data. When the transmission is complete, the device deactivates its tri-state buffer, which disconnects its output from the bus and allows another device to access the bus. Tri-state buffers can be implemented using gates, flip-flops, or other digital logic circuits. They are useful for reducing crosstalk and noise on a bus, and for allowing multiple devices to share the same bus without interference. Uses The basic concept of the third state, high impedance (Hi-Z), is to effectively remove the device's influence from the rest of the circuit. If more than one device is electrically connected to another device, putting an output into the Hi-Z state is often used to prevent short circuits, or to prevent one device driving high (logical 1) against another device driving low (logical 0). Three-state buffers can also be used to implement efficient multiplexers, especially those with large
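The take-turns protocol can be sketched behaviorally, modeling the Hi-Z state as Python's None; the class and function names are illustrative, not drawn from any hardware description language.

```python
HI_Z = None  # high-impedance: driver effectively disconnected from the bus

class TriStateBuffer:
    """Drives its input onto the bus only while enabled; otherwise Hi-Z."""
    def __init__(self):
        self.data = 0
        self.enabled = False

    def output(self):
        return self.data if self.enabled else HI_Z

def resolve_bus(buffers):
    """A shared bus carries the single active driver's value.
    Two simultaneous active drivers would be bus contention."""
    driving = [b.output() for b in buffers if b.output() is not HI_Z]
    if len(driving) > 1:
        raise RuntimeError("bus contention: more than one active driver")
    return driving[0] if driving else HI_Z

cpu, memory = TriStateBuffer(), TriStateBuffer()
memory.data, memory.enabled = 1, True   # memory takes its turn on the bus
print(resolve_bus([cpu, memory]))       # 1
memory.enabled = False
print(resolve_bus([cpu, memory]))       # None: bus floating (Hi-Z)
```

The contention check mirrors what happens electrically when one buffer drives high against another driving low: in hardware it produces a short circuit rather than an exception.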
https://en.wikipedia.org/wiki/Path%20integral%20molecular%20dynamics
Path integral molecular dynamics (PIMD) is a method of incorporating quantum mechanics into molecular dynamics simulations using Feynman path integrals. In PIMD, one uses the Born–Oppenheimer approximation to separate the wavefunction into a nuclear part and an electronic part. The nuclei are treated quantum mechanically by mapping each quantum nucleus onto a classical system of several fictitious particles connected by springs (harmonic potentials) governed by an effective Hamiltonian, which is derived from Feynman's path integral. The resulting classical system, although complex, can be solved relatively quickly. There are now a number of commonly used condensed matter computer simulation techniques that make use of the path integral formulation including Centroid Molecular Dynamics (CMD), Ring Polymer Molecular Dynamics (RPMD), and the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method. The same techniques are also used in path integral Monte Carlo (PIMC). Combination with other simulation techniques Applications The technique has been used to calculate time correlation functions.
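The bead-and-spring mapping can be sketched for a single nucleus in one dimension. The effective ring-polymer potential below follows the standard primitive discretization, U = sum_i [mP/(2(beta*hbar)^2) (x_i - x_{i+1})^2 + V(x_i)/P] with cyclic indexing; the function names and parameter values are illustrative only.

```python
def ring_polymer_potential(beads, mass, beta, hbar, V):
    """Effective potential of the classical ring polymer that maps
    one quantum nucleus onto P fictitious beads joined by springs:
      U = sum_i [ m*P/(2*(beta*hbar)^2) * (x_i - x_{i+1})^2 + V(x_i)/P ]
    with bead P connecting back to bead 1 (a ring)."""
    P = len(beads)
    k = mass * P / (beta * hbar) ** 2   # harmonic spring constant
    U = 0.0
    for i in range(P):
        dx = beads[i] - beads[(i + 1) % P]
        U += 0.5 * k * dx * dx + V(beads[i]) / P
    return U

# Harmonic external potential as a test case (atomic-style units).
V = lambda x: 0.5 * x * x
beads = [0.1, -0.05, 0.2, 0.0]
print(ring_polymer_potential(beads, mass=1.0, beta=8.0, hbar=1.0, V=V))
```

When all beads coincide the spring terms vanish and U reduces to V(x), recovering the classical limit; spreading the beads out captures quantum delocalization at inverse temperature beta.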
https://en.wikipedia.org/wiki/List%20of%20symbols%20of%20Scientology
This is a list of symbols of Scientology, the Church of Scientology, and related organizations. List Trademarks All official symbols of Scientology are trademarks held by the Religious Technology Center (RTC). They are said by the center to be used "on Scientology religious materials to signify their authenticity ... and provide a legal mechanism to ensure the spiritual technologies are orthodox and ministered according to Mr. Hubbard's Scripture. These marks also provide the means to prevent anyone from engaging in some distorted use of Mr. Hubbard's writings, thereby ensuring the purity of the religion for all eternity."
https://en.wikipedia.org/wiki/Yusu%20Wang
Yusu Wang is a Chinese computer scientist and mathematician who works as a professor at the Halıcıoğlu Data Science Institute at the University of California, San Diego. Her research concerns computational geometry and computational topology, including results on discrete Laplace operators, curve simplification, and Fréchet distance. Education and career Wang graduated from Tsinghua University in 1998. She completed her Ph.D. in computer science at Duke University in 2004. Her dissertation, Geometric and Topological Methods in Protein Structure Analysis, was jointly supervised by Pankaj K. Agarwal and Herbert Edelsbrunner. After postdoctoral research with Leonidas J. Guibas at Stanford University, Wang joined the faculty of the Ohio State University in 2005, and she was promoted to the rank of full professor there in 2017. She moved to her current position at the University of California, San Diego in 2020. Service Wang is on the editorial boards of the SIAM Journal on Computing and Journal of Computational Geometry. With Gill Barequet, Wang was program co-chair of the 2019 Symposium on Computational Geometry.
https://en.wikipedia.org/wiki/Snapshot%20%28computer%20storage%29
In computer systems, a snapshot is the state of a system at a particular point in time. The term was coined as an analogy to that in photography. Rationale A full backup of a large data set may take a long time to complete. On multi-tasking or multi-user systems, there may be writes to that data while it is being backed up. This prevents the backup from being atomic and introduces a version skew that may result in data corruption. For example, if a user moves a file into a directory that has already been backed up, then that file would be completely missing on the backup media, since the backup operation had already taken place before the addition of the file. Version skew may also cause corruption with files which change their size or contents underfoot while being read. One approach to safely backing up live data is to temporarily disable write access to data during the backup, either by stopping the accessing applications or by using the locking API provided by the operating system to enforce exclusive read access. This is tolerable for low-availability systems (on desktop computers and small workgroup servers, on which regular downtime is acceptable). High-availability 24/7 systems, however, cannot bear service stoppages. To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. Most snapshot implementations are efficient and can create snapshots in O(1). In other words, the time and I/O needed to create the snapshot does not increase with the size of the data set; by contrast, the time and I/O required for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity tha
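The pointer-based scheme described above can be sketched in a few lines. This is an invented in-memory model, not the API of any real filesystem or volume manager; it shows why taking a snapshot is O(1) (the snapshot only records a reference to the current block table) while writes after a snapshot pay the copying cost.

```python
# Minimal sketch of copy-on-write snapshots: snapshot() copies nothing,
# it just saves a reference; the first write after a snapshot copies the
# block table so the frozen view stays intact. (A real implementation
# would copy at per-block granularity rather than the whole table.)

class Volume:
    def __init__(self):
        self.blocks = {}          # block number -> data
        self.snapshots = []

    def write(self, block_no, data):
        if self.snapshots and self.blocks is self.snapshots[-1]:
            self.blocks = dict(self.blocks)   # copy-on-first-write after a snapshot
        self.blocks[block_no] = data

    def snapshot(self):
        # O(1): no data is copied, only the reference to the block table.
        self.snapshots.append(self.blocks)
        return len(self.snapshots) - 1

    def read_snapshot(self, snap_id, block_no):
        return self.snapshots[snap_id].get(block_no)

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                 # live data changes...
print(vol.read_snapshot(snap, 0))  # ...but the snapshot still reads "v1"
```

A backup taken against the frozen `snap` view would be internally consistent even while writes continue against the live volume, which is exactly the high-availability use case described above.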
https://en.wikipedia.org/wiki/Septum
In biology, a septum (Latin for 'something that encloses'; pl.: septa) is a wall dividing a cavity or structure into smaller ones. A cavity or structure divided in this way may be referred to as septate. Examples Human anatomy Interatrial septum, the wall of tissue that is a sectional part of the left and right atria of the heart Interventricular septum, the wall separating the left and right ventricles of the heart Lingual septum, a vertical layer of fibrous tissue that separates the halves of the tongue Nasal septum, the cartilage wall separating the nostrils of the nose Alveolar septum, the thin wall which separates the alveoli from each other in the lungs Orbital septum, a palpebral ligament in the upper and lower eyelids Septum pellucidum or septum lucidum, a thin structure separating two fluid pockets in the brain Uterine septum, a malformation of the uterus Penile septum, a fibrous wall between the two corpora cavernosa penis Septum glandis, partition of the ventral aspect of the glans penis Scrotal septum, layer of tissue dividing the scrotum Vaginal septum, a lateral or transverse partition inside the vagina Intermuscular septa, separating the muscles of the arms and legs Histological septa are seen throughout most tissues of the body, particularly where they are needed to stiffen soft cellular tissue, and they also provide planes of ingress for small blood vessels. Because the dense collagen fibres of a septum usually extend out into the softer adjacent tissues, microscopic fibrous septa are less clearly defined than the macroscopic types of septa listed above. In rare instances, a septum is a cross-wall that divides a structure into smaller parts. Cell biology The septum (cell biology) is the boundary formed between dividing cells in the course of cell division. Fungus A partition dividing filamentous hyphae into discrete cells in fungi. Botany A partition that separates the locules of a fruit, anther, or sporangium. Zoology A cora
https://en.wikipedia.org/wiki/Urine%20flow%20rate
Urine flow rate or urinary flow rate is the volumetric flow rate of urine during urination. It is a measure of the quantity of urine excreted in a specified period of time (per second or per minute). It is measured with uroflowmetry, a type of flow measurement. The letters "V" (for volume) and "Q" (a conventional symbol for flow rate) are both used as a symbol for urine flow rate. The V often has a dot (overdot), that is, V̇ ("V-dot"). Qmax indicates the maximum flow rate. Qmax is used as an indicator for the diagnosis of enlarged prostate. A lower Qmax may indicate that the enlarged prostate puts pressure on the urethra, partially occluding it. Uroflowmetry is performed by urinating into a special urinal, toilet, or disposable device that has a measuring device built in. The average rate changes with age. Clinical usage Changes in the urine flow rate can be indicative of kidney, prostate or other renal disorders. Similarly, by measuring urine flow rate, it is possible to calculate the clearance of metabolites that are used as clinical markers for disease. The urinary flow rate in males with benign prostatic hyperplasia is influenced, although not statistically significantly, by voiding position. In a meta-analysis on the influence of voiding position in males on urodynamics, males with this condition showed an improvement of 1.23 ml/s in the sitting position. Healthy, young males were not influenced by changing voiding position. See also Urodynamics
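The quantities above follow directly from the definition: the flow rate is the derivative of voided volume with respect to time, and Qmax is its maximum. A small illustrative calculation, using made-up (time, cumulative volume) samples rather than clinical reference data:

```python
# Illustrative finite-difference computation of urine flow rate and Qmax
# from uroflowmetry samples of (time in s, cumulative volume in ml).

def flow_rates(samples):
    """Flow rate (ml/s) between successive samples: dV / dt."""
    return [(v2 - v1) / (t2 - t1)
            for (t1, v1), (t2, v2) in zip(samples, samples[1:])]

samples = [(0, 0), (5, 40), (10, 120), (15, 180), (20, 200)]  # invented data
rates = flow_rates(samples)
qmax = max(rates)
print(rates)   # [8.0, 16.0, 12.0, 4.0]
print(qmax)    # 16.0
```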
https://en.wikipedia.org/wiki/National%20Cyber%20Range
The National Cyber Range is a cyber range project being overseen by DARPA to build a scale model of the Internet that can be used to carry out cyber war games. The project serves as a test range where the military can create antivirus technologies to guard against cyberterrorism and attacks from hackers. Several organisations are involved in the development of the network, including Johns Hopkins University in Baltimore and Lockheed Martin. More than $500m has been allocated by the Department of Defense to develop "cyber technologies."
https://en.wikipedia.org/wiki/Oslo%20Analyzer
The Oslo Analyzer (1938–1954) was a mechanical analog differential analyzer, a type of computer, built in Norway from 1938 to 1942. It was the largest computer of its kind in the world when completed. The differential analyzer was based on the same principles as the pioneer machine developed by Vannevar Bush at MIT. It was designed and built by Svein Rosseland in cooperation with chief engineer Lie (1909-1983) of the Norwegian commercial instrument manufacturer Gundersen & Løken. The machine was installed on the first floor of the Institute for Theoretical Astrophysics at the University of Oslo. The building, as well as the machine, was financed in large part by grants from the Rockefeller Foundation. Rosseland visited MIT for several months in 1933, and studied Bush's work. Rosseland's design was a substantial development from Bush's machine, and much more compact. The machine had twelve integrators (compared to six of the original MIT machine) and could calculate differential equations of the twelfth order, or two simultaneous equations of the sixth order. When it was finished, the Oslo Analyzer was the most powerful of its kind in the world. Upon the German occupation of Norway on April 9, 1940, Rosseland realized that the machine might become a desirable research tool in the German war effort. So Rosseland personally removed all precision fabricated integration wheels and buried the wheels in sealed packages in the garden behind the institute. The machine contributed to a number of scientific projects, both domestic and international. When it was dismantled, sections of it were put on display at the Norwegian Museum of Science and Technology.
https://en.wikipedia.org/wiki/Bagpipe%20theorem
In mathematics, the bagpipe theorem of Peter Nyikos (1984) describes the structure of the connected (but possibly non-paracompact) ω-bounded surfaces by showing that they are "bagpipes": the connected sum of a compact "bag" with several "long pipes". Statement A space is called ω-bounded if the closure of every countable set is compact. For example, the long line and the closed long ray are ω-bounded but not compact. When restricted to a metric space ω-boundedness is equivalent to compactness. The bagpipe theorem states that every ω-bounded connected surface is the connected sum of a compact connected surface and a finite number of long pipes. A space P is called a long pipe if there exist subspaces P_α (for α < ω₁), each of which is homeomorphic to S¹ × ℝ, such that P is the union of the P_α, for α < β we have P_α ⊆ P_β, and the boundary of P_α in P_β is homeomorphic to S¹. The simplest example of a pipe is the product S¹ × L≥0 of the circle and the long closed ray L≥0, which is an increasing union of ω₁ copies of the half-open interval [0,1), pasted together with the lexicographic ordering. Here, ω₁ denotes the first uncountable ordinal number, which is the set of all countable ordinals. Another (non-isomorphic) example is given by removing a single point from the "long plane" L × L, where L is the long line, formed by gluing together two copies of the closed long ray at their endpoints to get a space which is "long at both ends". There are in fact 2^ℵ₁ different isomorphism classes of long pipes. The bagpipe theorem does not describe all surfaces since there are many examples of surfaces that are not ω-bounded, such as the Prüfer manifold.
https://en.wikipedia.org/wiki/Mean%20absolute%20difference
The mean absolute difference (univariate) is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean absolute difference, which is the mean absolute difference divided by the arithmetic mean, and equal to twice the Gini coefficient. The mean absolute difference is also known as the absolute mean difference (not to be confused with the absolute value of the mean signed difference) and the Gini mean difference (GMD). The mean absolute difference is sometimes denoted by Δ or as MD. Definition The mean absolute difference is defined as the "average" or "mean", formally the expected value, of the absolute difference of two random variables X and Y independently and identically distributed with the same (unknown) distribution, henceforth called Q: MD = E[|X − Y|]. Calculation For a random sample of size n of a population distributed according to Q, by the law of total expectation the (empirical) mean absolute difference of the sequence of sample values yi, i = 1 to n, can be calculated as the arithmetic mean of the absolute value of all possible differences: MD = (1/n²) Σi Σj |yi − yj|. If Q has a discrete probability function f(y), where yi, i = 1 to n, are the values with nonzero probabilities: MD = Σi Σj f(yi) f(yj) |yi − yj|. In the continuous case, if Q has a probability density function f(x): MD = ∫∫ f(x) f(y) |x − y| dx dy. An alternative form of the equation is given by: MD = ∫∫ |x − y| dF(x) dF(y). If Q has a cumulative distribution function F(x) with quantile function Q(F), then, since f(x) = dF(x)/dx and Q(F(x)) = x, it follows that: MD = ∫₀¹ ∫₀¹ |Q(F₁) − Q(F₂)| dF₁ dF₂. Relative mean absolute difference When the probability distribution has a finite and nonzero arithmetic mean AM, the relative mean absolute difference, sometimes denoted by Δ or RMD, is defined by RMD = MD / AM. The relative mean absolute difference quantifies the mean absolute difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean absolute difference is equal to twice the Gini coefficient.
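The quantities defined above are easy to compute for a small sample. The sketch below uses the plug-in estimator over all n² ordered pairs and the stated relation between the relative mean absolute difference and the Gini coefficient; the sample values are arbitrary example data.

```python
# Empirical mean absolute difference over all n^2 ordered pairs, the
# relative form (division by the arithmetic mean), and the Gini
# coefficient as half the relative mean absolute difference.

def mean_absolute_difference(xs):
    n = len(xs)
    return sum(abs(a - b) for a in xs for b in xs) / (n * n)

xs = [1.0, 2.0, 3.0, 4.0]
md = mean_absolute_difference(xs)
am = sum(xs) / len(xs)
rmd = md / am           # relative mean absolute difference
gini = rmd / 2          # Gini coefficient is half the RMD
print(md, rmd, gini)    # 1.25 0.5 0.25
```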
https://en.wikipedia.org/wiki/Slippery%20sequence
A slippery sequence is a small section of codon nucleotide sequence (usually UUUAAAC) that controls the rate and chance of ribosomal frameshifting. A slippery sequence causes a faster ribosomal transfer which in turn can cause the reading ribosome to "slip." This allows a tRNA to shift by 1 base (−1) after it has paired with its anticodon, changing the reading frame. A −1 frameshift triggered by such a sequence is a programmed −1 ribosomal frameshift. It is followed by a spacer region and an RNA secondary structure. Such sequences are common in virus polyproteins. The frameshift occurs due to wobble pairing. The Gibbs free energy of secondary structures downstream gives an indication of how often the frameshift happens. Tension on the mRNA molecule also plays a role. A list of slippery sequences found in animal viruses is available from Huang et al. Slippery sequences that cause a 2-base slip (−2 frameshift) have been constructed out of the HIV UUUUUUA sequence. See also Nucleic acid tertiary structure Open reading frame Ribosomal frameshifting Translational frameshift Transposable element
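The canonical −1 slippery heptamer has the form X XXY YYZ (three identical bases, then three identical bases, then any base), which UUUAAAC satisfies. A toy scanner for candidate sites of this form, on an invented example sequence:

```python
# Scan an RNA string for heptamers of the form X XXY YYZ: positions 1-3
# identical and positions 4-6 identical, with any base in position 7.

def slippery_sites(rna):
    """Return (position, heptamer) for every X XXY YYZ heptamer in rna."""
    hits = []
    for i in range(len(rna) - 6):
        h = rna[i:i + 7]
        if h[0] == h[1] == h[2] and h[3] == h[4] == h[5]:
            hits.append((i, h))
    return hits

seq = "GGCAUUUAAACGGAUCC"          # made-up sequence containing UUUAAAC
print(slippery_sites(seq))        # [(4, 'UUUAAAC')]
```

A real search pipeline would additionally require a downstream spacer and a predicted pseudoknot or hairpin, since the secondary structure is what stalls the ribosome over the heptamer.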
https://en.wikipedia.org/wiki/Chorology
Chorology (from Greek χῶρος, khōros, "place, space"; and -λογία, -logia) can mean: the study of the causal relations between geographical phenomena occurring within a particular region; or the study of the spatial distribution of organisms (biogeography). In geography, the term was first used by Strabo. In the twentieth century, Richard Hartshorne worked on that notion again. The term was popularized by Ferdinand von Richthofen. See also Chorography Khôra
https://en.wikipedia.org/wiki/Tricholoma%20portentosum
Tricholoma portentosum, commonly known as the charbonnier, streaked tricholoma, or, in North America, sooty head, is a grey-capped edible mushroom of the large genus Tricholoma. It is found in woodlands in Europe and North America. Taxonomy The species was originally described as Agaricus portentosus by Elias Magnus Fries in 1821, before being placed in the genus Tricholoma by Lucien Quélet in 1872. At least three varieties have been described: var. album has an all white cap, var. lugdunense has a paler cap, and var. boutevillei has a very dark cap and is the form which grows with oak and beech. The genus name Tricholoma comes from the Ancient Greek θρίξ (trix), τριχός (trichos), "hair", and λῶμα (lôma), "fringe", and refers to the fibrils on the caps of many species of the genus. The species epithet, portentosum, comes from the Latin portentosus, meaning marvellous or prodigious, and describes its taste. Description It is a large, imposing mushroom, with a convex cap in diameter with a boss. The cap is sticky when wet and has an irregularly lobed margin. It is dark grey in colour with darker grey to blackish streaks perpendicular to the margins. The grey colour fades towards the margins and may be tinged with yellow or purple. The crowded adnate gills are white, and the solid stipe is white with a yellow tinge at the top. It measures high and wide. The spore print is white. It has a farinaceous smell and taste. Older specimens are often eaten by slugs, and the stem is recommended to be removed before cooking. It can be pickled. Habitat and distribution The fruit bodies appear in late autumn in coniferous woodland in Europe and North America. Ectomycorrhizal, it is most commonly associated with Pinus sylvestris, but also sometimes oak (Quercus) or beech (Fagus) on sandy soils. It has been declining since the 1980s in the Netherlands and is now rare there, and uncommon in Britain, but is common in France, where it is sometimes seen in wild mushroom markets.
https://en.wikipedia.org/wiki/Satplan
Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT. Given a problem instance in planning, with a given initial state, a given set of actions, a goal, and a horizon length, a formula is generated so that the formula is satisfiable if and only if there is a plan with the given horizon length. This is similar to simulation of Turing machines with the satisfiability problem in the proof of Cook's theorem. A plan can be found by testing the satisfiability of the formulas for different horizon lengths. The simplest way of doing this is to go through horizon lengths sequentially, 0, 1, 2, and so on. See also Graphplan
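The encoding idea can be shown at miniature scale. The following invented one-step planning problem (turn a light on) is expressed as a propositional formula over variables for the state at each time step plus the action, and solved by brute-force enumeration; a real Satplan system would instead emit CNF clauses for a SAT solver such as a DPLL or WalkSAT implementation.

```python
from itertools import product

# Planning as satisfiability, horizon 1: variables on0 (light at t=0),
# switch0 (action at t=0), on1 (light at t=1). The formula conjoins the
# initial-state axiom, the goal axiom, and an effect/frame axiom; an
# assignment satisfies it iff it describes a valid one-step plan.

VARS = ["on0", "switch0", "on1"]

def formula(a):
    return (not a["on0"]                                  # initial: light off
            and a["on1"]                                  # goal: light on at t=1
            and a["on1"] == (a["on0"] != a["switch0"]))   # effect + frame axiom

def solve():
    for bits in product([False, True], repeat=len(VARS)):
        a = dict(zip(VARS, bits))
        if formula(a):
            return a
    return None            # unsatisfiable: no plan exists at this horizon

plan = solve()
print(plan)  # {'on0': False, 'switch0': True, 'on1': True}
```

Reading the action variables out of the satisfying assignment recovers the plan (here: perform `switch` at time 0); trying horizons 0, 1, 2, ... in sequence mirrors the search strategy described above.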
https://en.wikipedia.org/wiki/Cheekwood%20Botanical%20Garden%20and%20Museum%20of%20Art
Cheekwood is a historic estate on the western edge of Nashville, Tennessee that houses the Cheekwood Estate & Gardens. Formerly the residence of Nashville's Cheek family, the Georgian-style mansion was opened as a botanical garden and art museum in 1960. History Christopher Cheek founded a wholesale grocery business in Nashville in the 1880s. His son, Leslie Cheek, joined him as a partner, and by 1915 was president of the family-owned company. Leslie's wife, Mabel Wood, was a member of a prominent Clarksville, Tennessee, family. Meanwhile, Joel Owsley Cheek, Leslie's cousin, had developed an acclaimed blend of coffee that was marketed through Nashville's finest hotel, the Maxwell House Hotel. Cheek's extended family, including Leslie and Mabel Cheek, were investors. In 1928, the Postum Cereals Company (now General Foods) purchased Maxwell House's parent company, Cheek-Neal Coffee, for more than $40 million. After the sale of the family business, Leslie Cheek bought of woodland in West Nashville for a country estate. He hired New York residential and landscape architect Bryant Fleming to design the house and gardens, and gave him full control over every detail of the project, including interior furnishings. The resulting limestone mansion and extensive formal gardens were completed in 1932. The estate design was inspired by the grand English manors of the 18th century. Leslie Cheek died just two years after moving into the mansion. Mabel Cheek and their daughter, Huldah Cheek Sharp, lived at Cheekwood until the 1950s, when Huldah Sharp and her husband offered the property as a site for a botanical garden and art museum. The Exchange Club of Nashville, the Horticultural Society of Middle Tennessee and other civic groups led the redevelopment of the property aided by funds raised from the sale of the former building of the defunct Nashville Museum of Art. The new Cheekwood museum opened in 1960. Art museum Cheekwood's art collection was founded in 1959
https://en.wikipedia.org/wiki/Up%20%28film%20series%29
The Up series of documentary films follows the lives of ten males and four females in England beginning in 1964, when they were seven years old. The first film was titled Seven Up!, with later films adjusting the number in the title to match the age of the subjects at the time of filming. The documentary has had nine episodes—one every seven years—thus spanning 56 years. The series has been produced by Granada Television for ITV, which has broadcast all of them except 42 Up (1998), which was broadcast on BBC One. Individual films and the series as a whole have received numerous accolades; in 1991, the then-latest installment, 28 Up, was chosen for Roger Ebert's list of the ten greatest films of all time. The children were selected for the original programme to represent the range of socio-economic backgrounds in Britain at that time, with the expectation that each child's social class would determine their future. The first instalment was made as a one-off edition of Granada Television's series, World in Action, directed by Canadian Paul Almond, with involvement by "a fresh-faced young researcher, a middle-class Cambridge graduate", Michael Apted, whose role in the initial programme included "trawling the nation's schools for 14 suitable subjects". About the first programme, Apted has said: It was Paul's film ... but he was more interested in making a beautiful film about being seven, whereas I wanted to make a nasty piece of work about these kids who have it all, and these other kids who have nothing. After Almond's direction of the original programme, director Michael Apted continued the series with new instalments every seven years, filming material from those of the fourteen who chose to participate. The aim of the continuing series is stated at the beginning of 7 Up as: "Why did we bring these together? Because we wanted a glimpse of England in the year 2000. The union leader and the business executive of the year 2000 are now seven years old." The most rece
https://en.wikipedia.org/wiki/Internal%20border%20control
Internal border controls are measures imposed by governments within a single state or territory to monitor and regulate the movement of people, animals, and goods across internal land, air, and maritime boundaries. Background Internal border controls are measures implemented to control the flow of people or goods within a given country. Such measures take a variety of forms ranging from the imposition of border checkpoints to the issuance of internal travel documents and vary depending on the circumstances in which they are implemented. Circumstances resulting in internal border controls include increasing security around border areas (e.g. internal checkpoints in America or Bhutan near border regions), preserving the autonomy of autonomous or minority areas (e.g. border controls between Peninsular Malaysia, Sabah, and Sarawak; border controls between Hong Kong, Macau, and mainland China), preventing unrest between ethnic groups (e.g. Northern Ireland's peace walls, border controls in Tibet and Northeastern India), and disputes between rival governments (e.g. between the Republic of China and the People's Republic of China). During the COVID-19 pandemic, temporary internal border controls were introduced in jurisdictions across the globe. For instance, travel between Australian states and territories was prohibited or restricted by state governments at various points of the pandemic either in conjunction with sporadic lockdowns or as a stand-alone response to COVID-19 outbreaks in neighbouring states. Internal border controls were also introduced at various stages of Malaysia's Movement Control Order, per which interstate travel was restricted depending on the severity of ongoing outbreaks. Similarly, internal controls were introduced by national authorities within the Schengen Area, though the European Union ultimately rejected the idea of suspending the Schengen Agreement per se. Examples Asia Internal border controls exist in many p
https://en.wikipedia.org/wiki/Vai%C5%9Be%E1%B9%A3ika%20S%C5%ABtra
Vaiśeṣika Sūtra (Sanskrit: वैशेषिक सूत्र), also called Kanada sutra, is an ancient Sanskrit text at the foundation of the Vaisheshika school of Hindu philosophy. The sutra was authored by the Hindu sage Kanada, also known as Kashyapa. According to some scholars, he flourished before the advent of Buddhism because the Vaiśeṣika Sūtra makes no mention of Buddhism or Buddhist doctrines; however, the details of Kanada's life are uncertain, and the Vaiśeṣika Sūtra was likely compiled sometime between the 6th and 2nd century BCE, and finalized in the currently existing version before the start of the common era. A number of scholars have commented on it since the beginning of the common era; the earliest commentary known is the Padartha Dharma Sangraha of Prashastapada. Another important secondary work on the Vaiśeṣika Sūtra is Maticandra's Dasha padartha sastra, which exists both in Sanskrit and in a Chinese translation made in 648 CE by Yuanzhuang. The Vaiśeṣika Sūtra is written in the aphoristic sutra style, and presents its theories on the creation and existence of the universe using naturalistic atomism, applying logic and realism, and is one of the earliest known systematic realist ontologies in human history. The text discusses motions of different kinds and the laws that govern them, the meaning of dharma, a theory of epistemology, the basis of Atman (self, soul), and the nature of yoga and moksha. The explicit mention of motion as the cause of all phenomena in the world, and several propositions about it, make it one of the earliest texts on physics. Etymology The name Vaiśeṣika Sūtra (Sanskrit: वैशेषिक सूत्र) is derived from viśeṣa (विशेष), which means "particularity" and is to be contrasted with "universality". The classes of particularity and universality belong to different categories of experience. Manuscripts Until the 1950s, only one manuscript of the Vaiseshika sutra was known, and this manuscript was part of a bhasya by the 15th-century scholar Sankaramisra. Scholars had doubted its authenticity,
https://en.wikipedia.org/wiki/Mir-589%20microRNA%20precursor%20family
In molecular biology mir-589 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. Molecular targets miR-589 has been implicated in the regulation of antimicrobial targets in bovine lung alveolar macrophages, and alongside other miRNAs has been suggested to play a part in the immune response against Mycobacterium tuberculosis. It has additionally been linked to HLA-G expression, having been found to target the 3'UTR 14-base pair sequence region of the HLA-G gene. See also MicroRNA
https://en.wikipedia.org/wiki/Urine-diverting%20dry%20toilet
A urine-diverting dry toilet (UDDT) is a type of dry toilet with urine diversion that can be used to provide safe, affordable sanitation in a variety of contexts worldwide. The separate collection of feces and urine without any flush water has many advantages, such as odor-free operation and pathogen reduction by drying. While dried feces and urine harvested from UDDTs can be and routinely are used in agriculture (respectively, as a soil amendment and nutrient-rich fertilizer—this practice being known as reuse of excreta in agriculture), many UDDT installations do not apply any sort of recovery scheme. The UDDT is an example of a technology that can be used to achieve a sustainable sanitation system. This dry excreta management system (or "dry sanitation" system) is an alternative to pit latrines and flush toilets, especially where water is scarce, a connection to a sewer system and centralized wastewater treatment plant is not feasible or desired, fertilizer and soil conditioner are needed for agriculture, or groundwater pollution should be minimized. There are several types of UDDTs: the single vault type which has only one feces vault; the double vault type which has two feces vaults that are used alternately; and the mobile or portable UDDTs, which are a variation of the single vault type and are commercially manufactured or homemade from simple materials. A UDDT can be configured as a sitting toilet (with a urine diversion pedestal or bench) or as a squatting toilet (with a urine diversion squatting pan). The most important design elements of the UDDT are: source separation of urine and feces; waterless operation; and ventilated vaults (also called "chambers") or removable containers for feces storage and treatment. If anal cleansing takes place with water (i.e., the users are "washers" rather than "wipers"), then this anal cleansing water must be drained separately and not be allowed to enter the feces vault. Some type of dry cover material is usually added
https://en.wikipedia.org/wiki/The%20Machine%20%28computer%20architecture%29
The Machine is the name of an experimental computer made by Hewlett Packard Enterprise. It was created as part of a research project to develop a new type of computer architecture for servers. The design focused on a “memory centric computing” architecture, where NVRAM replaced traditional DRAM and disks in the memory hierarchy. The NVRAM was byte addressable and could be accessed from any CPU via a photonic interconnect. The aim of the project was to build and evaluate this new design. Hardware overview The Machine was a computer cluster with many individual nodes connected over a memory fabric. The fabric interconnect used VCSEL-based silicon photonics with a custom chip called the X1. Access to memory is non-uniform and may include multiple hops. The Machine was envisioned to be a rack-scale computer initially with 80 processors and 320 TB of fabric attached memory, with potential for scaling to more enclosures up to 32 ZB. The fabric attached memory is not cache coherent and requires software to be aware of this property. Since traditional locks need cache coherency, hardware was added to the bridges to do atomic operations at that level. Each node also has a limited amount of local private cache-coherent memory (256 GB). Storage and compute on each node had completely separate power domains. The whole fabric attached memory of The Machine is too large to be mapped into a processor's virtual address space (which was 48-bits wide). A way is needed to map windows of the fabric attached memory into processor memory. Therefore, communication between each node SoC and the memory pool goes through an FPGA-based “Z-bridge” component that manages memory mapping of the local SoC to the fabric attached memory. The Z-bridge deals with two different kinds of addresses: 53-bit logical Z addresses and 75-bit Z addresses, which allows addressing 8PB and 32ZB respectively. Each Z-bridge also contained a firewall to enforce access control. The interconnect protocol was develo
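The address-space figures quoted above can be checked directly: a 53-bit logical Z address covers 8 PB and a 75-bit Z address covers 32 ZB (taking the binary-prefix reading, 1 PB = 2^50 bytes and 1 ZB = 2^70 bytes), while the 48-bit processor virtual address space spans only 256 TB, which is why windows of fabric-attached memory must be mapped in.

```python
# Arithmetic check of the Z-bridge address widths quoted for The Machine,
# using binary prefixes (1 TB = 2**40 B, 1 PB = 2**50 B, 1 ZB = 2**70 B).

TB, PB, ZB = 2 ** 40, 2 ** 50, 2 ** 70

print(2 ** 53 // PB)   # 8   -> 53-bit logical Z addresses span 8 PB
print(2 ** 75 // ZB)   # 32  -> 75-bit Z addresses span 32 ZB
print(2 ** 48 // TB)   # 256 -> a 48-bit virtual address space is only 256 TB
```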
https://en.wikipedia.org/wiki/Kramers%27%20law
Kramers' law is a formula for the spectral distribution of X-rays produced by an electron hitting a solid target. The formula concerns only bremsstrahlung radiation, not the element-specific characteristic radiation. It is named after its discoverer, the Dutch physicist Hendrik Anthony Kramers. The formula for Kramers' law is usually given as the distribution of intensity (photon count) I against the wavelength λ of the emitted radiation: I(λ) dλ = K (λ/λ_min − 1) (1/λ²) dλ. The constant K is proportional to the atomic number of the target element, and λ_min is the minimum wavelength given by the Duane–Hunt law. The maximum intensity is at λ = 2 λ_min. The intensity described above is a particle flux and not an energy flux, as can be seen by the fact that the integral over λ values from λ_min to ∞ is infinite. However, the integral of the energy flux is finite. To obtain a simple expression for the energy flux, first change variables from λ (the wavelength) to ω (the angular frequency) using λ = 2πc/ω, and also define ω_max = 2πc/λ_min. Now I(ω) is that quantity which is integrated over ω from 0 to ω_max to get the total number (still infinite) of photons: I(ω) dω = (K/(2πc)) (ω_max/ω − 1) dω. The energy flux, which we will call I_E (but which may also be referred to as the "intensity", in conflict with the above name of I), is obtained by multiplying the above by the energy ħω: I_E(ω) dω = (Kħ/(2πc)) (ω_max − ω) dω for ω ≤ ω_max, and I_E = 0 for ω > ω_max. It is a linear function of ω that is zero at the maximum energy ħω_max.
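Kramers' photon-count distribution is conventionally written I(λ) ∝ (λ/λ_min − 1)/λ², which vanishes at λ_min and peaks at λ = 2λ_min. A quick numerical check of that peak position over a fine wavelength grid (units and constants arbitrary):

```python
# Numerically locate the maximum of the Kramers photon-count distribution
# I(lambda) = K * (lambda/lambda_min - 1) / lambda**2 and confirm that it
# sits at twice the Duane-Hunt cutoff wavelength.

def kramers_intensity(lam, lam_min, K=1.0):
    return K * (lam / lam_min - 1.0) / lam ** 2 if lam > lam_min else 0.0

lam_min = 0.5
grid = [lam_min * (1 + 0.001 * k) for k in range(1, 5000)]  # lam_min..~6*lam_min
peak = max(grid, key=lambda lam: kramers_intensity(lam, lam_min))
print(round(peak / lam_min, 2))  # -> 2.0
```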
https://en.wikipedia.org/wiki/Certificate%20revocation
In public key cryptography, a certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker could exploit such a compromised or misissued certificate until expiry. Hence, revocation is an important part of a public key infrastructure. Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation. For distributing revocation information to clients, the timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns. If revocation information is unavailable (either due to an accident or an attack), clients must decide whether to fail-hard and treat a certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation). Due to the cost of revocation checks and the availability impact from potentially-unreliable remote services, Web browsers limit the revocation checks they will perform, and will fail soft where they do. Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking. Glossary of acronyms History The Heartbleed vulnerability, which was disclosed in 2014, triggered a mass revocation of certificates, as their private keys may have been leaked. GlobalSign revoked over 50% of their issued certificates. StartCom was criticised for issuing free certificates but then charging for revocation. A 2015 study found an overall revocation rate of 8% for certificates used on the Web, though this may have been elevated due to Heartbleed. Despite Web security being a priority for most browsers, due to the latency and
https://en.wikipedia.org/wiki/Nonlinear%20system%20identification
System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured, such as industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more. A nonlinear system is defined as any system that is not linear, that is, any system that does not satisfy the superposition principle. This negative definition tends to obscure the fact that there are very many different types of nonlinear systems. Historically, system identification for nonlinear systems has developed by focusing on specific classes of systems and can be broadly categorized into five basic approaches, each defined by a model class: Volterra series models, block-structured models, neural network models, NARMAX models, and state-space models. There are four steps to be followed for system identification: data gathering, model postulation, parameter identification, and model validation. Data gathering is the first and an essential step: it provides the input data from which the model is later built. It consists of selecting an appropriate data set, pre-processing and processing, and involves implementing the relevant algorithms together with (in flight-test applications, for example) the transcription of flight tapes, data storage and data management, calibration, processing, analysis, and presentation. Moreover, model validation is necessary to gain confidence in, or reject, a particular model. Parameter estimation and model validation are thus integral parts of system identification. Validation refers to the process of confirming the conceptual model and demonstrating an adequate correspondence between the computational results of the model and the actual data. Volterra series methods The early work was dominated by methods based on the Volterra seri
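As a minimal illustration of the parameter-identification step, the following sketch fits a simple polynomial NARX-type model y[k] = a·y[k−1] + b·u[k−1] + c·u[k−1]² by linear least squares. The system, its coefficients, and the data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "true" nonlinear system to produce input/output measurements.
a_true, b_true, c_true = 0.5, 1.0, -0.3
u = rng.uniform(-1.0, 1.0, size=200)        # measured input
y = np.zeros_like(u)                         # measured output
for k in range(1, len(u)):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + c_true * u[k - 1] ** 2

# Model postulation: build a regressor matrix from lagged inputs/outputs.
Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 2])
target = y[1:]

# Parameter identification: ordinary least squares.
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(theta)  # with noise-free data, recovers ~[0.5, 1.0, -0.3]
```

With noisy data the estimates would only approximate the true coefficients, and a model-validation step (e.g. residual tests on held-out data) would follow.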
https://en.wikipedia.org/wiki/HMAC-based%20one-time%20password
HMAC-based one-time password (HOTP) is a one-time password (OTP) algorithm based on HMAC. It is a cornerstone of the Initiative for Open Authentication (OATH). HOTP was published as informational IETF RFC 4226 in December 2005, documenting the algorithm along with a Java implementation. Since then, the algorithm has been adopted by many companies worldwide (see below). The HOTP algorithm is a freely available open standard.

Algorithm

The HOTP algorithm provides a method of authentication by symmetric generation of human-readable passwords, or values, each used for only one authentication attempt. The one-time property follows directly from the single use of each counter value. Parties intending to use HOTP must establish some parameters; typically these are specified by the authenticator, and either accepted or not by the authenticated:

A cryptographic hash method H (default is SHA-1)
A secret key K, which is an arbitrary byte string and must remain private
A counter C, which counts the number of iterations
A HOTP value length d (6–10, default is 6, and 6–8 is recommended)

Both parties compute the HOTP value derived from the secret key K and the counter C. Then the authenticator checks its locally generated value against the value supplied by the authenticated. The authenticator and the authenticated increment the counter C independently of each other, and the latter may advance ahead of the former, so a resynchronisation protocol is wise. The RFC does not actually require any such protocol, but does make a recommendation. This simply has the authenticator repeatedly try verification ahead of its counter through a window of size s. The authenticator's counter then continues forward from the value at which verification succeeds, requiring no action by the authenticated. The recommendation is also made that persistent throttling of HOTP value verification take place, to address the relatively small size of HOTP values and their consequent vulnerability to brute-force attacks. It is suggested that verificatio
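The core computation from RFC 4226 (HMAC-SHA-1 over the big-endian counter, dynamic truncation, then reduction modulo 10^d) fits in a few lines of Python:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HOTP value per RFC 4226 (HMAC-SHA-1 by default)."""
    # The counter C is encoded as an 8-byte big-endian value.
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte select an offset...
    offset = mac[-1] & 0x0F
    # ...and 31 bits starting at that offset form the binary code.
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors, 20-byte ASCII key "12345678901234567890":
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

The authenticator's look-ahead window is then just a loop trying `hotp(secret, c)` for c in the next s counter values.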
https://en.wikipedia.org/wiki/Fine-tuned%20universe
The characterization of the universe as finely tuned suggests that the occurrence of life in the universe is very sensitive to the values of certain fundamental physical constants, and that the observed, life-permitting values are in some sense improbable. If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the universe would have proceeded very differently, and "life as we know it" might not have been possible.

History

In 1913, the chemist Lawrence Joseph Henderson wrote The Fitness of the Environment, one of the first books to explore fine tuning in the universe. Henderson discusses the importance of water and the environment to living things, pointing out that life as it exists on Earth depends entirely on Earth's very specific environmental conditions, especially the prevalence and properties of water. In 1961, the physicist Robert H. Dicke claimed that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1984 book The Intelligent Universe. "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive", Hoyle wrote. Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the Standard Model, such as supersymmetry, but by 2012 it had not produced evidence for supersymmetry at the energy scales it was able to probe.

Motivation

Physicist Paul Davies has said, "There is now broad agreement among physicists and cosmologists that the Universe is in several respects 'fine-tuned' for life". However, he continued, "the conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires." He has a
https://en.wikipedia.org/wiki/Standard%20tuning
In music, standard tuning refers to the typical tuning of a string instrument. This notion is contrary to that of scordatura, i.e. an alternate tuning designed to modify either the timbre or the technical capabilities of the instrument.

Violin family

The most popular bowed strings used nowadays belong to the violin family; together with their respective standard tunings, they are:

Violin – G3 D4 A4 E5 (ascending perfect fifths, starting from G below middle C)
Viola – C3 G3 D4 A4 (a perfect fifth below a violin's standard tuning)
Cello – C2 G2 D3 A3 (an octave lower than the viola)
Double bass – E1 A1 D2 G2 (ascending perfect fourths, where the highest sounding open string coincides with the G on a cello)
Double bass with a low C extension – C1 E1 A1 D2 G2 (the same, except for low C, which is a major third below the low E on a standard 4-string double bass)
5-stringed double bass – B0 E1 A1 D2 G2 (a low B is added, so the tuning remains in perfect fourths)

Viol family

The double bass is properly the contrabass member of the viol family. Its smaller members are tuned in ascending fourths, with a major third in the middle, as follows:

Treble viol – D3 G3 C4 E4 A4 D5 (ascending perfect fourths with the exception of a major third between strings 3 and 4)
Tenor viol – G2 C3 F3 A3 D4 G4 (a perfect fifth below the treble viol)
Bass viol – D2 G2 C3 E3 A3 D4 (an octave lower than the treble viol)
7-stringed bass viol – A1 D2 G2 C3 E3 A3 D4 (an extra low A is added)

A more recent family is the violin octet, which also features a standardized tuning system.

Guitar family

Guitars and bass guitars have more standard tunings, depending on the number of strings an instrument has.

Six-string guitar (the most common configuration) – E2 A2 D3 G3 B3 E4 (ascending perfect fourths, with an exception between G and B, which is a major third). Low E falls a major third above the C on a standard-tuned cello.
Renaissance lute – E2 A2 D3 F♯3 B3 E4 (used by
https://en.wikipedia.org/wiki/Photon%20epoch
In physical cosmology, the photon epoch was the period in the evolution of the early universe in which photons dominated the energy of the universe. The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis, which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch, the universe contained a hot dense plasma of nuclei, electrons and photons. At the start of this period, many photons had sufficient energy to photodissociate deuterium, so any atomic nuclei that formed were quickly separated back into protons and neutrons. After the ten-second mark, ever fewer high-energy photons were available to photodissociate deuterium, and thus the abundance of these nuclei began to increase. Heavier nuclei began to form through nuclear fusion processes: tritium, helium-3, and helium-4. Finally, trace amounts of lithium and beryllium began to appear. Once the thermal energy dropped below 0.03 MeV, nucleosynthesis effectively came to an end. The primordial abundances were now set, and their measured values in the modern epoch provide checks on physical models of this period. 370,000 years after the Big Bang, the temperature of the universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter, the universe became transparent, and the cosmic microwave background radiation was released; structure formation then took place. This moment is referred to as the surface of last scattering, as it corresponds to a virtual outer surface of the spherical observable universe.

See also
Timeline of the early universe
Big Bang nucleosynthesis
Timeline of the Big Bang
https://en.wikipedia.org/wiki/4D-RCS%20Reference%20Model%20Architecture
The 4D/RCS Reference Model Architecture is a reference model for military unmanned vehicles that specifies how their software components should be identified and organized. 4D/RCS has been developed by the Intelligent Systems Division (ISD) of the National Institute of Standards and Technology (NIST) since the 1980s. The reference model is based on the general Real-time Control System (RCS) Reference Model Architecture and has been applied to many kinds of robot control, including autonomous vehicle control.

Overview

4D/RCS is a reference model architecture that provides a theoretical foundation for designing, engineering, and integrating intelligent systems software for unmanned ground vehicles. According to Balakirsky (2003), 4D/RCS is an example of a deliberative agent architecture. These architectures "include all systems that plan to meet future goal or deadline. In general, these systems plan on a model of the world rather than planning directly on processed sensor output. This may be accomplished by real-time sensors, a priori information, or a combination of the two in order to create a picture or snapshot of the world that is used to update a world model". The course of action of a deliberative agent architecture is based on the world model and the commanded mission goal. This goal "may be a given system state or physical location. To meet the goal systems of this kind attempts to compute a path through a multi-dimensional space contained in the real world". The 4D/RCS is a hierarchical deliberative architecture that "plans up to the subsystem level to compute plans for an autonomous vehicle driving over rough terrain. In this system, the world model contains a pre-computed dictionary of possible vehicle trajectories known as an ego-graph as well as information from the real-time sensor processing. The trajectories are computed based on a discrete set of possible vehicle velocities and starting steering angles. All of the trajectories are guaranteed to
https://en.wikipedia.org/wiki/Homologous%20desensitization
Homologous desensitization occurs when a receptor decreases its response to an agonist at high concentration. It is a process through which, after prolonged agonist exposure, the receptor is uncoupled from its signaling cascade and thus the cellular effect of receptor activation is attenuated. Homologous desensitization is distinguished from heterologous desensitization, a process in which repeated stimulation of a receptor by an agonist results in desensitization of the stimulated receptor as well as of other, usually inactive, receptors on the same cell. They are sometimes denoted as agonist-dependent and agonist-independent desensitization, respectively. While heterologous desensitization occurs rapidly at low agonist concentrations, homologous desensitization shows a dose-dependent response and usually begins at significantly higher concentrations. Homologous desensitization serves as a mechanism for tachyphylaxis and helps organisms to maintain homeostasis. The process of homologous desensitization has been extensively studied using G protein–coupled receptors (GPCRs). While the different mechanisms for desensitization are still being characterized, there are currently four known mechanisms: uncoupling of receptors from associated G proteins, endocytosis, degradation, and downregulation. The degradation and downregulation of receptors is often also associated with drug tolerance, since it has a longer onset, from hours to days. It has been shown that these mechanisms can happen independently of one another, but that they also influence one another. In addition, the same receptor expressed in different cell types can be desensitized by different mechanisms. Mechanisms For GPCRs generally, each mechanism of homologous desensitization begins with receptor phosphorylation by an associated G protein-coupled receptor kinase (GRK). GRKs selectively modify activated receptors, such that no heterologous desensitization will occur. This phosphorylation then acts to re
https://en.wikipedia.org/wiki/OPTOS%20formalism
OPTOS (optical properties of textured optical sheets) is a simulation formalism for determining the optical properties of sheets with plane-parallel structured interfaces. The method is versatile, as interface structures belonging to different optical regimes, e.g. geometrical and wave optics, can be included. It is very efficient because the calculated light-redistribution properties of the individual interfaces can be re-used. It has so far mainly been used to model the optical properties of solar cells and solar modules, but it is also applicable, for example, to LEDs or OLEDs with light extraction structures.

History

The development of the OPTOS formalism started in 2015 at the Fraunhofer Institute for Solar Energy Systems ISE in Freiburg, Germany. The mathematical formulation has been described in detail in several open access publications. A basic version of the code, including documentation with function references, has been available since the end of 2015 on the homepage of Fraunhofer ISE. Continuous updates and a list of OPTOS-related publications can be found on ResearchGate.

OPTOS simulation procedure

One key aspect of OPTOS simulations is the division of the modeled system into interface and propagation regions. The light-redistribution properties are calculated with the most appropriate method for each interface individually, depending on the relevant structure dimension. Large-scale structures can, for example, be modeled via ray tracing, while for interfaces with structure dimensions in the range of the wavelength, wave-optical approaches like RCWA, FDTD or FEM can be used.

System description

The discretization of the complete angular space into a fixed number of angle channels, the second key aspect of the OPTOS formalism, allows the angular power distribution within the system to be represented by a vector v which consists of one entry for each angle channel. The value of each entry is the power fraction of the corresponding angle channel with respect to the total in
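The vector-and-matrix bookkeeping behind such a formalism can be sketched as follows. The two-channel system, its redistribution matrix, and the per-channel absorption factors below are invented purely to illustrate the idea, not taken from the OPTOS code:

```python
import numpy as np

# Power in each angle channel (here just 2 channels: near-normal, oblique).
v = np.array([1.0, 0.0])

# Redistribution matrix of a textured interface: entry (i, j) is the fraction
# of power arriving in channel j that is sent back into channel i.
R = np.array([[0.05, 0.02],
              [0.10, 0.08]])

# Fraction of power absorbed in the bulk per pass, per channel (oblique
# channels have a longer path and therefore absorb more).
absorb = np.array([0.30, 0.50])

total_absorbed = 0.0
for _ in range(100):  # trace power back and forth until it is negligible
    absorbed_this_pass = v * absorb
    total_absorbed += absorbed_this_pass.sum()
    v = R @ (v - absorbed_this_pass)  # remaining power redistributed at interface

print(round(total_absorbed, 4))
```

Because each interface's matrix is computed once and re-used every pass, many sheet stacks can be evaluated cheaply, which is the efficiency argument made above.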
https://en.wikipedia.org/wiki/Cobthorn%20Trust
The Cobthorn Trust is a private non-profit trust in the United Kingdom that is dedicated to furthering conservation and preserving rare domestic animal breeds. The Trust was formed in 1986 by its former Director, Andrew Sheppy. Until his death in 2017, the Trust was involved in the conservation of several rare breeds, initiation of the National Poultry Collection, genetic research on Dexter cattle, and the development of conservation grazing.
https://en.wikipedia.org/wiki/Hydrogenated%20starch%20hydrolysates
Hydrogenated starch hydrolysates (HSHs), also known as polyglycitol syrup (INS 964), are mixtures of several sugar alcohols (a type of sugar substitute). Hydrogenated starch hydrolysates were developed by the Swedish company Lyckeby Starch in the 1960s. The HSH family of polyols is an approved food ingredient in Canada, Japan, and Australia. HSH sweeteners provide 40 to 90% of the sweetness of table sugar. Hydrogenated starch hydrolysates are produced by the partial hydrolysis of starch – most often corn starch, but also potato starch or wheat starch. This creates dextrins (glucose and short glucose chains). The hydrolyzed starch (dextrin) then undergoes hydrogenation to convert the dextrins to sugar alcohols. Hydrogenated starch hydrolysates are similar to sorbitol: if the starch is completely hydrolyzed so that only single glucose molecules remain, then after hydrogenation the result is sorbitol. Because in HSHs the starch is not completely hydrolyzed, a mixture of sorbitol, maltitol, and longer-chain hydrogenated saccharides (such as maltotriitol) is produced. When no single polyol is dominant in the mix, the generic name hydrogenated starch hydrolysates is used. However, if 50% or more of the polyols in the mixture are of one type, it can be labeled as "sorbitol syrup", "maltitol syrup", etc. Uses Hydrogenated starch hydrolysates are used commercially in the same way as other common sugar alcohols. They are often used both as a sweetener and as a humectant (moisture-retaining ingredient). As a crystallization modifier, they can prevent syrups from forming sugar crystals. They are used to add bulk, body, texture, and viscosity to mixtures, and can protect against damage from freezing and drying. HSH products are generally blended with other sweeteners, both caloric and artificial. Health and safety Similar to xylitol, hydrogenated starch hydrolysates are not readily fermented by oral bacteria and are used to formulate sugarless products that do not
https://en.wikipedia.org/wiki/Shim%20%28magnetism%29
A shim is a device used to adjust the homogeneity of a magnetic field. Shims received their name from the purely mechanical shims used to adjust the position and parallelism of the pole faces of an electromagnet. Coils used to adjust the homogeneity of a magnetic field by changing the current flowing through them were called "electrical current shims" because of their similar function. Usage in magnetic resonance spectroscopy In NMR and MRI, shimming is used prior to the operation of the magnet to eliminate inhomogeneities in its field. Initially, the magnetic field inside an NMR spectrometer or MRI scanner will be far from homogeneous compared with the "ideal" field of the device. This is a result of production tolerances and of the magnetic field of the environment. Iron constructions in the walls and floor of the examination room become magnetized and disturb the field of the scanner. The probe and the sample or the patient become slightly magnetized when brought into the strong magnetic field and create additional inhomogeneous fields. The process of correcting for these inhomogeneities is called shimming the magnet, shimming the probe or shimming the sample, depending on the assumed source of the remaining inhomogeneity. Field homogeneity of the order of 1 ppm over a volume of several liters is needed in an MRI scanner. High-resolution NMR spectroscopy demands field homogeneity better than 1 ppb within a volume of a few milliliters. There are two types of shimming: active and passive. Active shimming uses coils with adjustable current. Passive shimming involves pieces of steel with good magnetic qualities. The steel pieces are placed near the permanent or superconducting magnet. They become magnetized and produce their own magnetic field. In both cases, the additional magnetic fields (produced by coils or steel) add to the overall magnetic field of the superconducting magnet in such a way as to increase the homogeneity of the total field. There are different ways t
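The effect of a shim can be illustrated numerically: a shim field is chosen to cancel the low-order (here, linear) component of an inhomogeneous field, shrinking the residual spread measured in ppm. The field model and the numbers are invented for the example:

```python
import numpy as np

z = np.linspace(-0.1, 0.1, 201)            # position along the bore, in metres
b0 = 3.0                                    # nominal field, tesla
field = b0 + 4e-6 * z + 2e-5 * z**2         # field with linear + quadratic error

def spread_ppm(b):
    """Peak-to-peak inhomogeneity in parts per million of the nominal field."""
    return (b.max() - b.min()) / b0 * 1e6

# A "first-order shim" adds a field with an adjustable linear gradient.
# Choosing the gradient by a least-squares fit cancels the linear error term,
# leaving only the (much smaller) quadratic residual.
gradient = np.polyfit(z, field - b0, 1)[0]
shimmed = field - gradient * z

print(spread_ppm(field), spread_ppm(shimmed))
```

Real systems use a whole set of shim terms (often spherical-harmonic orders), each cancelling one spatial component of the inhomogeneity in the same spirit.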
https://en.wikipedia.org/wiki/CYP12%20family
Cytochrome P450, family 12, also known as CYP12, is a cytochrome P450 family found in insect genomes. It belongs to the mitochondrial clan of CYPs, which are located in the inner mitochondrial membrane (IMM). The first gene identified in this family was CYP12A1 from Musca domestica (the house fly), which is involved in insecticide resistance. The CYP12A1 protein was shown to localize to mitochondria by immunohistochemistry and depends absolutely on the mitochondrial electron donors adrenodoxin reductase and adrenodoxin. The rabbit gene CYP8B1 was initially named CYP12 when it was first discovered, because it hydroxylates its sterol substrate at the 12 position. However, because CYP12 is a family of insect P450s found in mitochondria, the rabbit gene was renamed CYP8B1.
https://en.wikipedia.org/wiki/Jack%20o%27%20lantern%20mushroom
Jack o' lantern mushroom is a common name for several fungus species in the genus Omphalotus:

Omphalotus illudens of eastern North America
Omphalotus olearius of Europe and South Africa
Omphalotus olivascens of California and Mexico

See also
Swamp beacon
Swamp candle (disambiguation)
Swamp lantern
https://en.wikipedia.org/wiki/Proteobiotics
Proteobiotics are natural metabolites produced during the fermentation of specific probiotic strains. These small oligopeptides were originally discovered in, and isolated from, the culture media used to grow probiotic bacteria, and may account for some of the health benefits of probiotics. Several genera of probiotic bacteria are known to produce proteobiotics, including Lactococcus spp., Pediococcus spp., Lactobacillus spp. and Bifidobacterium spp. Mode of action Recent studies have explored the mode of action of proteobiotics and their potential benefits in maintaining the ratio of beneficial bacteria, lowering bacterial imbalance, and improving gut function. However, these research-based statements have not been evaluated by the US Food and Drug Administration. Unlike other molecules produced by probiotic bacteria, such as organic acids and bacteriocins, proteobiotics are natural metabolites which interfere with quorum sensing, the cell-to-cell communication that occurs between bacterial cells, mainly by interfering with the LuxS quorum-sensing system. These quorum-sensing systems allow bacteria to respond to changes in their environment and play a role in the ability of pathogens to evade host defence mechanisms. By interfering with quorum sensing, proteobiotics inhibit the cascade of events leading to adhesion to, and invasion of, host cells. This is achieved through reduced expression of specific virulence genes (typically found on pathogenicity islands) that facilitate the infection process. Specifically, proteobiotics inhibit virulence genes involved in toxin production, biofilm formation, cell adhesion and invasion. In enterohemorrhagic E. coli and Salmonella spp., genes associated with Type 3 Secretion Systems seem to be the main targets. The degree to which proteobiotics can reduce virulence-gene expression depends on the pathogen and the source of the proteobiotics. Lactobacillus acidophilus-derived proteobiotics down-regulate virulence ge
https://en.wikipedia.org/wiki/Ammonia%20fungi
Ammonia fungi are fungi that develop fruit bodies exclusively, or relatively abundantly, on soil to which ammonia or other nitrogen-containing materials have been added. The nitrogenous materials react as bases by themselves or after decomposition. The addition of ammonia or urea causes numerous chemical and biological changes; for example, the pH of the soil litter is raised to 8–10, and these highly alkaline conditions interrupt the process of nutrient recycling. The mechanisms of colonization, establishment, and occurrence of fruiting bodies of ammonia fungi have been researched in the field and the laboratory.

Species

Ascobolus denudatus
Calocybe leucocephala
Coprinopsis cinerea
Coprinopsis echinospora
Coprinopsis neolagopus
Coprinopsis neophlyctidospora
Coprinopsis phlyctidospora
Coprinopsis stercorea
Crucispora rhombisperma
Hebeloma luchuense
Hebeloma radicosoides
Hebeloma radicosum
Hebeloma spoliatum
Hebeloma vinosophyllum
Laccaria amethystina
Laccaria bicolor
Sagaranella tylicolor
https://en.wikipedia.org/wiki/Juan%20Fern%C3%A1ndez%20Ridge
The Juan Fernández Ridge is a volcanic island and seamount chain on the Nazca Plate. It runs in a west–east direction from the Juan Fernández hotspot to the Peru–Chile Trench at a latitude of 33° S near Valparaíso. The Juan Fernández Islands are the only seamounts that reach the surface. Subduction of the ridge beneath South America is thought to have caused the Pampean flat-slab and its associated inland tectonic deformation and reduced magmatic activity.
https://en.wikipedia.org/wiki/Shotgun%20email
Shotgun email refers to an email requesting information or action that only requires the effort of one person but is sent to multiple people in an effort to guarantee that at least one person will respond. A shotgun email often results in multiple people working on something already accomplished, and therefore in a loss of overall productivity. Shotgun emailing is considered poor internet etiquette. An example would be a person of authority in a business organization sending an email to five technicians in his company's information technology department to report that his printer is broken. One technician responds with an on-site call and fixes the problem; later in the day, other technicians follow up on a printer that is already back in order. Shotgun emails can also be requests for information or other tasks. A blind shotgun email occurs when the sender uses the blind carbon copy (BCC) feature of an email program to hide the fact that a shotgun email is in use; this is considered particularly deceitful. The term is also applied to shotgun email marketing, in which companies send newsletters or promotional messages (sometimes charity appeals) to large, untargeted recipient lists. Scam emails also commonly use the shotgun approach, for example messages about lottery winnings or free trips that the recipient never signed up for, sent in bulk in the hope that a few targets will respond.
https://en.wikipedia.org/wiki/Berkovich%20space
In mathematics, a Berkovich space, introduced by Vladimir Berkovich, is a version of an analytic space over a non-Archimedean field (e.g. a p-adic field), refining Tate's notion of a rigid analytic space.

Motivation

In the complex case, algebraic geometry begins by defining the complex affine space to be Cⁿ. For each open set U ⊆ Cⁿ we define the ring of analytic functions on U to be the ring of holomorphic functions, i.e. functions on U that can be written as a convergent power series in a neighborhood of each point. We then define a local model space to be the common zero locus of finitely many analytic functions, equipped with the corresponding quotient ring of analytic functions. A complex analytic space is a locally ringed C-space which is locally isomorphic to such a local model space. When k is a complete non-Archimedean field, k is totally disconnected. In such a case, if we continued with the same definition as in the complex case, we would not get a good analytic theory. Berkovich gave a definition which gives nice analytic spaces over such k, and also gives back the usual definition over C. In addition to defining analytic functions over non-Archimedean fields, Berkovich spaces also have a nice underlying topological space.

Berkovich spectrum

A seminorm on a ring A is a non-constant function |·| : A → ℝ≥0 such that |0| = 0, |1| = 1, |f + g| ≤ |f| + |g|, and |fg| ≤ |f| |g| for all f, g ∈ A. It is called multiplicative if |fg| = |f| |g|, and is called a norm if |f| = 0 implies f = 0. If A is a normed ring with norm ‖·‖, then the Berkovich spectrum of A, denoted M(A), is the set of multiplicative seminorms on A that are bounded by the norm of A. The Berkovich spectrum is equipped with the weakest topology such that, for any f in A, the map x ↦ |f|ₓ is continuous. The Berkovich spectrum of a normed ring A is non-empty if A is non-zero, and is compact if A is complete. If x is a point of the spectrum of A, then the elements f with |f|ₓ = 0 form a prime ideal of A. The field of fractions of the quotient by this prime ideal is a normed field, whose completion is a complete field with a multiplicative norm; this field is denoted H(x) and the image of an element f is denoted f(x). The field H(x) is generated by the image of A. Conversely, a bounded map from A to
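A concrete example of a multiplicative non-Archimedean norm is the p-adic absolute value on the rationals, |x|_p = p^(−v) where v is the exponent of p in the factorization of x. A small sketch (the helper name is ours):

```python
from fractions import Fraction

def padic_abs(x, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p**(-v_p(x)) of a rational number x."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    # Compute the p-adic valuation v_p(x) from numerator and denominator.
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p ** v) if v >= 0 else Fraction(p ** (-v))

print(padic_abs(12, 2))               # 1/4, since 12 = 2^2 * 3
print(padic_abs(Fraction(1, 2), 2))   # 2
```

Multiplicativity (|xy|_p = |x|_p |y|_p) and the ultrametric inequality |x + y|_p ≤ max(|x|_p, |y|_p) both hold, which is exactly the non-Archimedean behaviour ("totally disconnected" topology) that motivates the Berkovich construction.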
https://en.wikipedia.org/wiki/Rebus
A rebus () is a puzzle device that combines the use of illustrated pictures with individual letters to depict words or phrases. For example: the word "been" might be depicted by a rebus showing an illustrated bumblebee next to a plus sign (+) and the letter "n". It was a favourite form of heraldic expression used in the Middle Ages to denote surnames. For example, in its basic form, three salmon (fish) are used to denote the surname "Salmon". A more sophisticated example was the rebus of Bishop Walter Lyhart (d. 1472) of Norwich, consisting of a stag (or hart) lying down in a conventional representation of water. The composition alludes to the name, profession or personal characteristics of the bearer, and speaks to the beholder Non verbis, sed rebus, which Latin expression signifies "not by words but by things" (res, rei (f), a thing, object, matter; rebus being ablative plural). Rebuses within heraldry Rebuses are used extensively as a form of heraldic expression as a hint to the name of the bearer; they are not synonymous with canting arms. A man might have a rebus as a personal identification device entirely separate from his armorials, canting or otherwise. For example, Sir Richard Weston (d. 1541) bore as arms: Ermine, on a chief azure five bezants, whilst his rebus, displayed many times in terracotta plaques on the walls of his mansion Sutton Place, Surrey, was a "tun" or barrel, used to designate the last syllable of his surname. An example of canting arms proper are those of the Borough of Congleton in Cheshire consisting of a conger eel, a lion (in Latin, leo) and a tun (barrel). This word sequence "conger-leo-tun" enunciates the town's name. Similarly, the coat of arms of St. Ignatius Loyola contains wolves (in Spanish, lobo) and a kettle (olla), said by some (probably incorrectly) to be a rebus for "Loyola". The arms of Elizabeth Bowes-Lyon feature bows and lions. Modern rebuses, word plays A modern example of the rebus used as a form of word play
https://en.wikipedia.org/wiki/WorldGaming%20Network
WorldGaming Network (WGN), formerly Virgin Gaming, is an online video gaming platform that hosts head-to-head matches, tournaments and ladders for console and PC gamers. Over 3 million gamers worldwide have registered for the platform, making it one of the larger global eSports communities. There have been over 6.7 million matches played and over 20,000 tournaments held on WorldGaming.com since 2010. WorldGaming has traditionally focused on sports games, fighting games and driving games. It formed a partnership with EA Sports to be integrated into games and receive automatically verified results from the EA servers; these games included the FIFA, Madden and NHL franchises. A partnership was also in place with Take-Two Interactive to be featured and integrated in the NBA 2K series of games. WorldGaming has since relaunched a new platform with a larger number of games, supporting a wide range of publishers. It remains focused on gaming and tournaments but also incorporates streaming and live events, aiming to be a central community for gaming enthusiasts worldwide. On September 18, 2015, it was announced that WorldGaming had been acquired by theatre chain Cineplex Entertainment, which sold it in 2020 to an unnamed private equity firm.
https://en.wikipedia.org/wiki/Excess%20chemical%20potential
In thermodynamics, the excess chemical potential is defined as the difference between the chemical potential of a given species and that of an ideal gas under the same conditions (in particular, at the same pressure, temperature, and composition). The chemical potential of a particle species is therefore given by an ideal part and an excess part, μ = μ_id + μ_ex. The chemical potential of a pure fluid can be estimated by the Widom insertion method. Derivation and Measurement For a system of diameter L and volume V, at constant temperature T, the classical canonical partition function is Q(N,V,T) = (V^N / (Λ^{3N} N!)) ∫ ds^N exp(−βU(s^N)), with the scaled coordinates s = r/L; the free energy is given by F(N,V,T) = −k_B T ln Q(N,V,T). Combining the above equation with the definition of chemical potential, μ = (∂F/∂N)_{V,T}, and the fact that the smallest allowed change in the particle number is ΔN = 1, we get for a sufficiently large system μ = −k_B T ln(Q(N+1,V,T)/Q(N,V,T)) = μ_id + μ_ex, wherein the chemical potential of an ideal gas, μ_id, can be evaluated analytically. Now let's focus on μ_ex. Since the potential energy of an (N+1)-particle system can be separated into the potential energy of an N-particle system and the potential of the excess particle interacting with the N-particle system, that is, U(s^{N+1}) = U(s^N) + ΔU, we have μ_ex = −k_B T ln ∫ ⟨exp(−β ΔU)⟩_N ds_{N+1}. Thus far we have converted the excess chemical potential into an ensemble average, and the integral in the above equation can be sampled by the brute force Monte Carlo method. The calculation of the excess chemical potential is not limited to homogeneous systems, but has also been extended to inhomogeneous systems by the Widom insertion method, or to other ensembles such as NPT and NVE. See also Apparent molar property
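The ensemble average above can be sampled directly: repeatedly insert a ghost particle at a random position, record the Boltzmann factor of its interaction energy with the existing particles, and average. A minimal Python sketch under simplifying assumptions (a cubic periodic box and an arbitrary pair potential; the function names and parameters are illustrative, not from the source):

```python
import math
import random

def widom_mu_excess(config, box, u_pair, beta, n_insert=2000, seed=1):
    """Widom insertion estimate: mu_ex = -(1/beta) * ln< exp(-beta * dU) >,
    where dU is the interaction energy of a randomly inserted ghost particle."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_insert):
        trial = [rng.uniform(0.0, box) for _ in range(3)]  # ghost position
        du = 0.0
        for p in config:
            # minimum-image distance in a cubic periodic box of side `box`
            r2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                     for a, b in zip(trial, p))
            du += u_pair(math.sqrt(r2))
        acc += math.exp(-beta * du)
    return -math.log(acc / n_insert) / beta
```

As a sanity check, for a non-interacting system (u_pair identically zero) the estimator returns μ_ex = 0 exactly, as it should.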
https://en.wikipedia.org/wiki/Pipe%20organ%20tuning
This article describes the process and techniques involved in the tuning of a pipe organ. Electronic organs typically do not require tuning. A pipe organ produces sound via hundreds or thousands of organ pipes, each of which produces a single pitch and timbre. The goal of tuning a pipe organ is to adjust the pitch of each pipe so that they all sound in tune with each other. Pitch For many years, there was no pitch standard across Europe. The frequency of A (the standard note for tuning musical instruments), for example, could range from A = 392 Hz in parts of France to A = 465 Hz (Cornet-ton pitch) in parts of Germany. Organs were often tuned differently than ensembles, even within the same region or town. The modern tuning standard of A = 440 Hz (middle C ≈ 262 Hz) was proposed in 1939, and accepted by the International Organization for Standardization (as ISO 16) in 1955 and again in 1975. Process The first task of an organ tuner is to select a temperament. Generally speaking, the temperament of a pipe organ is part of its design, and is not lightly changed during its lifetime. Equal temperament is very common, but by no means universal. Along with the temperament goes the overall concert pitch of the instrument, often A = 440 Hz in modern instruments, but this also is far from universal. The pitch of an organ cannot be significantly changed without major work, as pipes need to be shortened or lengthened. Another important preparation step is to stabilize the temperature of the building in which the organ resides. Ideally, the temperature should be the same as that at which the organ will typically be used, and the temperature should have been stable for many hours before beginning the tuning. The reason for this is that the pitch of organ pipes varies significantly with temperature, and not all pipes vary at the same rate relative to temperature. The actual tuning process begins with the tuning of the "tuning stop", the stop to which most or all other stops will be tuned
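Once a temperament and concert pitch are chosen, every pipe's target frequency follows. For equal temperament at A = 440 Hz, each semitone is a factor of 2^(1/12); a short illustrative sketch (the note numbering relative to A4 is a convention chosen here, not from the article):

```python
def equal_temperament_freq(semitones_from_a4, a4=440.0):
    """Frequency of the pitch n semitones above (or below) A4 in equal temperament."""
    return a4 * 2.0 ** (semitones_from_a4 / 12.0)

equal_temperament_freq(-9)   # middle C, about 261.63 Hz
equal_temperament_freq(12)   # A5, exactly 880 Hz
```

The same function with a different `a4` value reproduces the historical pitches mentioned above (A = 392 Hz or A = 465 Hz).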
https://en.wikipedia.org/wiki/MIT%20NETRA
NETRA is a mobile eye diagnostic device developed at MIT Media Lab consisting of a clip-on eyepiece and a software app for smart phones. The co-inventors include Ramesh Raskar and Vitor Pamplona. It can be seen as the inverse of expensive Shack-Hartmann sensors. NETRA allows for the early, low-cost diagnosis of the most common refractive disorders. The subject looks into the device and aligns patterns on the display. By repeating this procedure for eight meridians, the required refractive correction is computed. NETRA exploits the fact that aberrations are expressed using only a few parameters (spherical power, cylindrical power and axis of astigmatism) to create an easier user interaction approach. Leveraging mobile connectivity, the system can transmit test data to appropriate facilities for immediate action, aggregate data for use in analysis, or instruct a separate machine for automatic dispensing of spectacles. The key enabling factor for this new technology is the resolution of modern LCDs, now about half the width of a human hair and within a factor of three of the working resolution of high-end ophthalmology instruments. At this level, LCDs available in modern mobile phones can be re-purposed to achieve performance that compares with the highest-end scientific instruments. NETRA can be thought of as a thermometer for visual performance. Just as a thermometer measures body temperature at one's convenience, the device provides a quantitative measurement of the refractive error without the need for a physician on site. The test can be performed anywhere, enabling people to know when they need to see a doctor. The need for accessible, low-cost eye diagnostics like NETRA is tremendous and global in scale—over half a billion people have uncorrected refractive errors.
https://en.wikipedia.org/wiki/Neumann%27s%20law
Neumann's law states that the molecular heat in compounds of analogous constitution is always the same. It is named after German mineralogist and physicist Franz Ernst Neumann.
https://en.wikipedia.org/wiki/Cyclic%20nucleotide-binding%20domain
Proteins that bind cyclic nucleotides (cAMP or cGMP) share a structural domain of about 120 residues. The best studied of these proteins is the prokaryotic catabolite gene activator (also known as the cAMP receptor protein) (gene crp), where such a domain is known to be composed of three alpha-helices and a distinctive eight-stranded, antiparallel beta-barrel structure. There are six invariant amino acids in this domain, three of which are glycine residues that are thought to be essential for maintenance of the structural integrity of the beta-barrel. cAMP- and cGMP-dependent protein kinases (cAPK and cGPK) contain two tandem copies of the cyclic nucleotide-binding domain. The cAPKs are composed of two different subunits, a catalytic chain and a regulatory chain, which contains both copies of the domain. The cGPKs are single-chain enzymes that include the two copies of the domain in their N-terminal section. Vertebrate cyclic nucleotide-gated ion channels also contain this domain. Two such cation channels have been fully characterized; one is found in rod cells, where it plays a role in visual signal transduction. Human proteins containing this domain CNBD1; CNGA1; CNGA2; CNGA3; CNGB1; CNGB3; HCN1; HCN2; HCN3; HCN4; KCNH1; KCNH2; KCNH3; KCNH4; KCNH5; KCNH6; KCNH7; KCNH8; PNPLA6; PNPLA7; PRKAR1A; PRKAR1B; PRKAR2A; PRKAR2B; PRKG1; PRKG2; RAPGEF2; RAPGEF3; RAPGEF4; RAPGEF6; RCNC2; SLC9A10; SLC9A11;
https://en.wikipedia.org/wiki/Eclipse%20process%20framework
The Eclipse process framework (EPF) is an open source project that is managed by the Eclipse Foundation. It lies under the top-level Eclipse Technology Project, and has two goals: To provide an extensible framework and exemplary tools for software process engineering - method and process authoring, library management, configuring and publishing a process. To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications. For instance, EPF provides the OpenUP, an agile software development process optimized for small projects. By using EPF Composer, engineers can create their own software development process by structuring it using a predefined schema. This schema is an evolution of the SPEM 1.1 OMG specification, referred to as the unified method architecture (UMA). Major parts of UMA went into the adopted revision of SPEM, SPEM 2.0. EPF is aiming to fully support SPEM 2.0 in the near future. The UMA and SPEM schemata support the organization of large amounts of descriptions for development methods and processes. Such method content and processes do not have to be limited to software engineering, but can also cover other design and engineering disciplines, such as mechanical engineering, business transformation, and sales cycles. IBM supplies a commercial version, IBM Rational Method Composer. Limitations The "content variability" capability severely limits users to one-to-one mappings. Processes trying to integrate various aspects may require block-copy-paste style clones to get around this limitation. This may be a limitation of the SPEM model, and might be based on the presumption that agile methods are being described, as these methods tend not to have deep dependencies. See also Meta-process modeling
https://en.wikipedia.org/wiki/Isotopic%20analysis%20by%20nuclear%20magnetic%20resonance
Isotopic analysis by nuclear magnetic resonance allows the user to quantify with great precision the differences in isotopic content at each site of a molecule and thus to measure the site-specific natural isotope fractionation of that molecule. The SNIF-NMR analytical method was developed to detect the (over)sugaring of wine and enrichment of grape musts, and is mainly used to check the authenticity of foodstuffs (such as wines, spirits, fruit juice, honey, sugar and vinegar) and to verify the natural origin of some aromatic molecules (such as vanillin, benzaldehyde, raspberry ketone and anethole). The SNIF-NMR method has been adopted by the International Organisation of Vine and Wine (OIV) and the European Union as an official method for wine analysis. It is also an official method adopted by the Association of Official Analytical Chemists (AOAC) for analysis of fruit juices, maple syrup and vanillin, and by the European Committee for Standardization (CEN) for vinegar. Background 1981: Invention of the SNIF-NMR method by Professor Gerard Martin, Maryvonne Martin and their team at the University of Nantes / CNRS 1987: Creation of Eurofins Nantes Laboratories - specializing in wine analysis - and purchase of the rights to operate the CNRS patent (this patent is now public and the name “SNIF-NMR” is now a registered trademark); the OIV adopts it as an official method 1987-1990: Eurofins Laboratories apply the SNIF-NMR method to the analysis of fruit juices and certain natural flavors 1990: The SNIF-NMR method is recognized by the European Union as an official method for the analysis of wines → Implementation of the SNIF-NMR method for official laboratories in Europe 1990-1992: the method is tested on aromatic molecules 1996: The SNIF-NMR method is recognized in the United States by the AOAC for fruit juices → Implementation of the SNIF-NMR method for official laboratories in the US 2001: The SNIF-NMR method is recognized by the AOAC for vanillin 2013: The SNIF-NMR m
https://en.wikipedia.org/wiki/Storage%20Management%20Initiative%20%E2%80%93%20Specification
The Storage Management Initiative Specification, commonly called SMI-S, is a computer data storage management standard developed and maintained by the Storage Networking Industry Association (SNIA). It has also been ratified as an ISO standard. SMI-S is based upon the Common Information Model and the Web-Based Enterprise Management standards defined by the Distributed Management Task Force, which define management functionality via HTTP. The most recent approved version of SMI-S is available on the SNIA website. The main objective of SMI-S is to enable broad interoperable management of heterogeneous storage vendor systems. The current version is SMI-S 1.8.0 Rev 5. Over 1,350 storage products are certified as conformant to SMI-S. Basic concepts SMI-S defines CIM management profiles for storage systems. The entire SMI Specification is categorized in profiles and subprofiles. A profile describes the behavioral aspects of an autonomous, self-contained management domain. SMI-S includes profiles for Arrays, Switches, Storage Virtualizers, Volume Management and several other management domains. In DMTF parlance, an SMI-S provider is an implementation for a specific profile or set of profiles. A subprofile describes a part of a management domain, and can be a common part in more than one profile. At a very basic level, SMI-S entities are divided into two categories: Clients are management software applications that can reside virtually anywhere within a network, provided they have a communications link (either within the data path or outside the data path) to providers. Servers are the devices under management. Servers can be disk arrays, virtualization engines, host bus adapters, switches, tape drives, etc. SMI-S timeline 2000 – Collection of computer storage industry leaders led by Roger Reich begins building an interoperable management backbone for storage and storage networks (named Bluefin) in a small consortia called the Partner Development Process. 2002 – B
https://en.wikipedia.org/wiki/Animal%20migration
Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year or for mating. To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern. Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles. Overview Concepts Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions was proposed by the zoologist J. S. Kennedy. Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Wi
https://en.wikipedia.org/wiki/White%20clothing%20in%20Korea
For over a thousand years, a significant proportion of Koreans wore white hanbok, sometimes called minbok, on a daily basis. From birth to burial, many Korean people across the social spectrum dressed in white. Many Koreans only wore color on special occasions or if their job required a certain uniform. Early evidence of the practice dates from around the 2nd century BCE. It continued until the 1950–1953 Korean War, after which the resulting extreme poverty caused the practice to end. It is not known when, how, or why the practice came about; it possibly arose due to the symbolism of the color white, which was associated with cleanliness and heaven. The Japanese colonial view controversially attributed the Korean penchant for white clothing to mourning rooted in historical suffering. The practice was persistently maintained and defended; it survived at least 25 attempted prohibitions before the colonial period, as well as over 100 regulations or prohibitions during the Japanese colonial period. It survived despite its inconvenience, as stains had to be painstakingly removed from the clothes. Westerners, who began visiting the peninsula in the 19th century, viewed the practice as a curiosity. Japanese people and a number of Korean intellectuals saw it as a frivolous and backward practice, partly because of the maintenance the practice demanded, partly because the maintenance largely encumbered women who did the laundry, and also because some of them believed that the clothes were indeed for mourning. This practice has developed a number of symbolic interpretations. The rigorous defense of the practice and effort needed to maintain it have been seen as symbolic of Korean stubbornness. The Korean ethnonationalist terms paegŭiminjok and paegŭidongpo, both roughly meaning white-clothed people, were coined to promote a distinct Korean identity, primarily as a reaction to Japanese assimilationist policies. Description The white hanbok is sometimes called min
https://en.wikipedia.org/wiki/Ferroelectric%20liquid%20crystal%20display
Ferroelectric Liquid Crystal Display (FLCD) is a display technology based on the ferroelectric properties of chiral smectic liquid crystals, as proposed in 1980 by Clark and Lagerwall. The effect was reportedly discovered in 1975, and several companies pursued the development of FLCD technologies, notably Canon and Central Research Laboratories (CRL), along with others including Seiko, Sharp, Mitsubishi and GEC. Canon and CRL pursued different technological approaches to the switching of display cells (which provide the individual pixels or subpixels) and to the production of intermediate pixel intensities between full transparency and full opacity; these differing approaches were adopted by other companies seeking to develop FLCD products. Development By 1985, Seiko had already demonstrated a colour FLCD panel able to display a 10-inch diagonal still image with a resolution of . By 1993, Canon had delivered the first commercial application of the technology in its EZPS Japanese-language desktop publishing system in the form of a 15-inch monochrome display with a reported cost of around £2,000, and the company demonstrated a 21-inch 64-colour display and a 24-inch 16-greyscale display, both with a resolution and able to show "GUI software with multiple windows". Other applications included projectors, viewfinders and printers. The FLCD did not make many inroads as a direct-view display device. Manufacturing of larger FLCDs was problematic, making them unable to compete against direct-view LCDs based on nematic liquid crystals using the twisted nematic field effect or in-plane switching. Today, the FLCD is used in reflective microdisplays based on liquid crystal on silicon technology. Using ferroelectric liquid crystal (FLC) in FLCoS technology allows a much smaller display area, which eliminates the problems of manufacturing larger-area FLC displays. Additionally, the dot pitch or pixel pitch of such displays can be as low as 6 μm, giving a very high resolution display in
https://en.wikipedia.org/wiki/Military%20Intelligence%20Service%20%28United%20States%29
The Military Intelligence Service (America Rikugun Jōhōbu) was a World War II U.S. military unit consisting of two branches, the Japanese American unit (described here) and the German-Austrian unit based at Camp Ritchie, best known as the "Ritchie Boys". The unit described here was primarily composed of Nisei (second-generation Japanese Americans) who were trained as linguists. Graduates of the MIS language school (MISLS) were attached to other military units to provide translation, interpretation, and interrogation services. Major General Charles Willoughby said, “The Nisei shortened the Pacific War by two years and saved possibly a million American lives.” They served with the United States Army, Navy, and Marine Corps, as well as with British, Australian, New Zealand, Canadian, Chinese, and Indian combat units fighting the Japanese. History The U.S. Army had long recognized the need for foreign language comprehension, going back to the establishment of West Point in 1802, which required its cadets to understand French, then considered the language of diplomacy as well as the source of the majority of military engineering texts of the time. Spanish was added to the curriculum following the Mexican-American War and German after World War I. George Strong and Joseph Stilwell, both West Point graduates of the class of 1904, served as military attachés to Japan and China respectively. They took the opportunity to study the local language there, and understood the need to provide language training for enlisted troops. They were among the first US commanders to establish language programs offering classes for both officers and interested enlisted soldiers, teaching rudimentary spoken Chinese and Japanese. As relations worsened with Japan in the buildup to the war, a group of officers with previous tours of duty in Japan, including Rufus S. Bratton and Sidney Mashbir, recognized the need for an intelligence unit able to comprehend not only the spoken Japanese languag
https://en.wikipedia.org/wiki/Covariant%20derivative
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component (dependent on the embedding) and the intrinsic covariant derivative component. The name is motivated by the importance of changes of coordinate in physics: the covariant derivative transforms covariantly under a general coordinate transformation, that is, linearly via the Jacobian matrix of the transformation. This article presents an introduction to the covariant derivative of a vector field with respect to a vector field, both in a coordinate-free language and using a local coordinate system and the traditional index notation. The covariant derivative of a tensor field is presented as an extension of the same concept. The covariant derivative generalizes straightforwardly to a notion of differentiation associated to a connection on a vector bundle, also known as a Koszul connection. History Historically, at the turn of the 20th century, the covariant derivative was introduced by Gregorio Ricci-Curbastro and Tullio Levi-Civita in the theory of Riemannian and pseudo-Riemannian geometry. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define the curvature could also provide a notion of differentiation which generalized the classical directional derivative of vector fields on a manifold. This new derivativ
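In a local coordinate system, the covariant derivative of a vector field takes the componentwise form (∇_i V)^k = ∂_i V^k + Γ^k_{ij} V^j, where the Christoffel symbols Γ^k_{ij} are computed from the metric. As an illustration (the unit 2-sphere is an example chosen here, not taken from the article), the symbols can be computed symbolically:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
coords = [theta, phi]
# Metric of the unit 2-sphere: ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])
g_inv = g.inv()
n = len(coords)

# Christoffel symbols of the Levi-Civita connection:
#   Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
              g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                             + sp.diff(g[i, l], coords[j])
                             - sp.diff(g[i, j], coords[l]))
              for l in range(n)))
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

# The covariant derivative of a vector field V then reads, componentwise:
#   (nabla_i V)^k = d_i V^k + Gamma^k_{ij} V^j
```

For the sphere this yields the familiar nonzero symbols Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ.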
https://en.wikipedia.org/wiki/Polly%20and%20Molly
Polly and Molly (born 1997), two ewes, were the first mammals to have been successfully cloned from an adult somatic cell and to be transgenic animals at the same time. This is not to be confused with Dolly the Sheep, the first animal to be successfully cloned from an adult somatic cell, for which no modification was carried out on the adult donor nucleus. Polly and Molly, like Dolly the Sheep, were cloned at the Roslin Institute in Edinburgh, Scotland. The creation of Polly and Molly built on the somatic nuclear transfer experiments that led to the cloning of Dolly the Sheep. The crucial difference was that in creating Polly and Molly, scientists used cells into which a new gene had been inserted. The gene chosen encoded a therapeutic protein, to demonstrate the potential of such recombinant DNA technology combined with animal cloning, which could then be used to produce pharmacological and therapeutic proteins to treat human diseases. The protein in question was the human blood clotting factor IX. Another difference from Dolly the Sheep was the source cell type of the nucleus that was transferred. Although Polly and Molly were nuclear clones, their mitochondrial DNA differed from that of the cells from which they received their nuclear DNA. Prior to the production of Polly and Molly, the only demonstrated way to make a transgenic animal was by microinjection of DNA into the pronuclei of fertilized oocytes (eggs). However, only a small proportion of the animals will integrate the injected DNA into their genome. In the rare cases that they do integrate this new genetic information, the pattern of expression of the injected transgene's protein is very variable because of the random integration. As the aim of such research is to produce an animal that expresses a particular protein at high levels in, for example, its milk, microinjection is a very costly procedure that does not usually produce the desired animal. In mice, there is an additional option for genetic transfe
https://en.wikipedia.org/wiki/Coupling%20from%20the%20past
Among Markov chain Monte Carlo (MCMC) algorithms, coupling from the past is a method for sampling from the stationary distribution of a Markov chain. Contrary to many MCMC algorithms, coupling from the past gives in principle a perfect sample from the stationary distribution. It was invented by James Propp and David Wilson in 1996. The basic idea Consider a finite state irreducible aperiodic Markov chain M with state space S and (unique) stationary distribution π (π is a probability vector). Suppose that we come up with a probability distribution μ on the set of maps f : S → S with the property that for every fixed s ∈ S, its image f(s) is distributed according to the transition probability of M from state s. An example of such a probability distribution is the one where f(s) is independent from f(s′) whenever s ≠ s′, but it is often worthwhile to consider other distributions. Now let f_j for j ∈ ℤ be independent samples from μ. Suppose that x is chosen randomly according to π and is independent from the sequence (f_j). (We do not worry for now where this x is coming from.) Then f_{−1}(x) is also distributed according to π, because π is M-stationary and our assumption on the law of f_{−1}. Define F_j := f_{−1} ∘ f_{−2} ∘ ⋯ ∘ f_{−j}. Then it follows by induction that F_j(x) is also distributed according to π for every j. However, it may happen that for some n the image of the map F_n is a single element of S. In other words, F_n(x) = F_n(y) for each y ∈ S. Therefore, we do not need to have access to x in order to compute F_n(x). The algorithm then involves finding some n such that the image of F_n is a singleton, and outputting the element of that singleton. The design of a good distribution μ for which the task of finding such an n and computing F_n is not too costly is not always obvious, but has been accomplished successfully in several important instances. The monotone case There is a special class of Markov chains in which there are particularly good choices for μ and a tool for determining if |F_n(S)| = 1. (Here |·| denotes cardinality.) Suppose that S is a partially ordered set with order ≤, which has a unique minimal element and a unique max
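The algorithm can be illustrated on a tiny monotone chain: a reflecting random walk on five states, where a single uniform variate drives every state's move so trajectories never cross. The chain, the choice of random maps, and the time-doubling schedule below are illustrative assumptions for the sketch, not part of the original formulation:

```python
import random

N_STATES = 5  # states 0..4 of a reflecting random walk (a monotone chain)

def apply_map(state, u, p=0.5):
    # One random map f: every state reads the SAME uniform variate u,
    # which preserves the order of trajectories (monotone coupling).
    if u < p:
        return min(state + 1, N_STATES - 1)
    return max(state - 1, 0)

def cftp(seed=0):
    rng = random.Random(seed)
    us = []          # us[k] drives the step from time -(k+1) to -k (fixed once drawn)
    t = 1
    while True:
        while len(us) < t:
            us.append(rng.random())
        states = list(range(N_STATES))   # start every state at time -t
        for k in range(t - 1, -1, -1):   # run forward to time 0
            states = [apply_map(s, us[k]) for s in states]
        if len(set(states)) == 1:        # all trajectories have coalesced:
            return states[0]             # a perfect sample from pi
        t *= 2                           # otherwise look further into the past
```

Note that the randomness near time 0 is reused when the starting time is pushed back, exactly as the F_n construction requires. For this particular chain the stationary distribution is uniform on the five states.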
https://en.wikipedia.org/wiki/McKelvey%E2%80%93Schofield%20chaos%20theorem
The McKelvey–Schofield chaos theorem is a result in social choice theory. It states that if preferences are defined over a multidimensional policy space, then majority rule is in general unstable: there is no Condorcet winner. Furthermore, any point in the space can be reached from any other point by a sequence of majority votes. The theorem can be thought of as showing that Arrow's impossibility theorem holds when preferences are restricted to be concave on a multidimensional policy space. The median voter theorem shows that when preferences are restricted to be single-peaked on the real line, Arrow's theorem does not hold, and the median voter's ideal point is a Condorcet winner. The chaos theorem shows that this good news does not continue in multiple dimensions. Richard McKelvey initially proved the theorem for Euclidean preferences. Norman Schofield extended the theorem to the more general class of concave preferences. The figure shows an example. There are three voters in the electorate, with ideal points A, B and C. Voters prefer policies that are closer to them, i.e. they have circular indifference curves. The circles show B's and C's indifference curves through a policy X. If a candidate were to propose X, then the other candidate could beat him by proposing any point in the yellow area, which would be preferred by both B and C. Any point in the plane will always have a set of points that are preferred by 2 out of 3 voters, so one can get from any point to any other point by a series of majority votes.
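The instability is easy to check numerically. With three Euclidean voters, a point X is beaten by any proposal lying inside the lens where two voters' indifference circles through X overlap. A small sketch (the ideal points and the challenger point are illustrative choices, not from the source):

```python
import math

def beats(y, x, ideals):
    """True if a strict majority of voters with Euclidean (distance-based)
    preferences prefer policy y to policy x."""
    closer = sum(1 for p in ideals if math.dist(y, p) < math.dist(x, p))
    return closer > len(ideals) / 2

ideals = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # ideal points A, B, C
x = (0.5, 0.5)
y = (0.5, 0.1)          # inside the lens preferred by voters A and B
beats(y, x, ideals)     # True: y defeats x under majority rule
beats(x, y, ideals)     # False: the relation is not symmetric
```

Chaining such pairwise defeats produces the majority-rule paths the theorem describes.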
https://en.wikipedia.org/wiki/Address%20generation%20unit
The address generation unit (AGU), sometimes also called address computation unit (ACU), is an execution unit inside central processing units (CPUs) that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements. While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle. Capabilities of an AGU depend on a particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Furthermore, some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, bringing further performance improvements by capitalizing on the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parall
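The arithmetic an AGU performs is the familiar base-plus-scaled-index calculation. A sketch of the computation (Python standing in for hardware; the row-major array layout and the numeric values are illustrative assumptions):

```python
def element_address(base, i, j, cols, elem_size):
    """Byte address of a[i][j] for a row-major 2-D array:
       base + (i * cols + j) * elem_size
    (the multiply-add pattern an AGU evaluates in hardware)."""
    return base + (i * cols + j) * elem_size

element_address(0x1000, 2, 3, 10, 4)  # 0x1000 + (2*10 + 3)*4 = 0x105C
```

On a CPU with a dedicated AGU, this whole multiply-shift-add can complete in a single cycle, in parallel with the arithmetic units.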
https://en.wikipedia.org/wiki/Dave%20Smith%20%28engineer%29
David Joseph Smith (April 2, 1950 – May 31, 2022) was an American engineer and founder of the synthesizer company Sequential. Smith created the first polyphonic synthesizer with fully programmable memory, the Prophet-5, which had a major impact on the music industry. He also led the development of MIDI, a standard interface protocol for synchronizing electronic instruments and audio equipment. In 2005, Smith was inducted into the Mix Foundation TECnology (Technical Excellence and Creativity) Hall of Fame for the MIDI specification. In 2013, he and the Japanese businessman Ikutaro Kakehashi received a Technical Grammy Award for their contributions to the development of MIDI. Career Smith was born on April 2, 1950, in San Francisco. He had degrees in both Computer Science and Electronic Engineering from UC Berkeley. Sequential Circuits and Prophet-5 He purchased a Minimoog in 1972 and later built his own analog sequencer, founding Sequential Circuits in 1974 and advertising his product for sale in Rolling Stone. By 1977 he was working at Sequential full-time, and later that year he designed the Prophet 5, the world's first microprocessor-based musical instrument and also the first programmable polyphonic synth, an innovation that marked a crucial step forward in synthesizer design and functionality. Sequential went on to become one of the most successful music synthesizer manufacturers of the time. MIDI In 1981 Smith set out to create a standard protocol for communication between electronic musical instruments from different manufacturers worldwide. He presented a paper outlining the idea of a Universal Synthesizer Interface (USI) to the Audio Engineering Society (AES) in 1981 after meetings with Tom Oberheim and Roland founder Ikutaro Kakehashi. After some enhancements and revisions, the new standard was introduced as "Musical Instrument Digital Interface" (MIDI) at the Winter NAMM Show in 1983, when a Sequential Circuits Prophet-600 was successfully connecte
https://en.wikipedia.org/wiki/Golden-section%20search
The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)). Basic idea The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated. The diagram above illustrates a single step in the techniqu
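The interval-narrowing procedure described above can be sketched in a short Python implementation (a minimal sketch with illustrative function name and tolerance, not code from the article):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a strictly unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)                 # interior probe points keep the
    d = a + invphi * (b - a)                 # widths in the ratio phi:1:phi
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum bracketed in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)                        # only one new evaluation per step
        else:                                # minimum bracketed in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# (x - 2)^2 is unimodal on [0, 5] with its minimum at x = 2.
print(round(golden_section_search(lambda x: (x - 2) ** 2, 0, 5), 6))  # prints 2.0
```

Because 1/φ squared equals 1 − 1/φ, each iteration reuses one of the two previous interior function values, so a single new evaluation shrinks the bracket by the factor 1/φ — the property behind the "maximally efficient" claim for a fixed reduction factor.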
https://en.wikipedia.org/wiki/Discovery%20of%20disease-causing%20pathogens
The discovery of disease-causing pathogens is an important activity in the field of medical science. Many viruses, bacteria, protozoa, fungi, helminths and prions have been identified as confirmed or potential pathogens. In the United States, a Centers for Disease Control program, begun in 1995, identified over a hundred patients with life-threatening illnesses that were considered to be of an infectious cause, but that could not be linked to a known pathogen. The association of pathogens with disease can be a complex and controversial process, in some cases requiring decades or even centuries to achieve. Factors impairing identification of pathogens Factors which have been identified as impeding the identification of pathogens include the following: 1. Lack of animal models: Experimental infection in animals has been used as a criterion to demonstrate a disease-causing ability, but for some pathogens (such as Vibrio cholerae, which causes disease only in humans) animal models do not exist. In cases where animal models were not available, scientists have sometimes infected themselves or others to determine an organism's disease-causing ability. 2. Pre-existing theories of disease: Before a pathogen is well-recognized, scientists may attribute the symptoms of infection to other causes, such as toxicological, psychological, or genetic causes. Once a pathogen has been associated with an illness, researchers have reported difficulty displacing these pre-existing theories. 3. Variable pathogenicity: Infection with pathogens can produce varying responses in hosts, complicating the process of showing a relationship between infection and the pathogen. In some infectious diseases, the severity of symptoms has been shown to be dependent on specific genetic traits of the host. 4. Organisms that look alike but behave differently: In some cases a harmless organism exists which looks identical to a disease-causing organism under a microscope, which complicates the discovery process
https://en.wikipedia.org/wiki/Demographic%20dividend
Demographic dividend, as defined by the United Nations Population Fund (UNFPA), is "the economic growth potential that can result from shifts in a population's age structure, mainly when the share of the working-age population (15 to 64) is larger than the non-working-age share of the population (14 and younger, and 65 and older)". In other words, it is "a boost in economic productivity that occurs when there are growing numbers of people in the workforce relative to the number of dependents". UNFPA stated that "A country with both increasing numbers of young people and declining fertility has the potential to reap a demographic dividend." A demographic dividend occurs when the proportion of working people in the total population is high, because this indicates that more people have the potential to be productive and contribute to growth of the economy. Due to the dividend between young and old, many argue that there is great potential for economic gains, which has been termed the "demographic gift". In order for economic growth to occur, the younger population must have access to quality education, adequate nutrition and health, including access to sexual and reproductive health. However, this drop in fertility rates is not immediate. The lag in between produces a generational population bulge that surges through society. For a period of time this "bulge" is a burden on society and increases the dependency ratio. Eventually this group begins to enter the productive labor force. With fertility rates continuing to fall and older generations having longer life expectancies, the dependency ratio declines dramatically. This demographic shift initiates the demographic dividend. With fewer younger dependents, due to declining fertility and child mortality rates, and fewer older dependents, due to the older generations having shorter life expectancies, and the largest segment of the population of productive working age, the dependency ratio declines dramatically, leading to the
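The dependency-ratio arithmetic behind this shift can be made concrete; the population figures below are hypothetical, chosen only to illustrate the calculation:

```python
def dependency_ratio(young, working, old):
    """Dependents (ages 0-14 and 65+) per 100 people of working age (15-64)."""
    return 100 * (young + old) / working

# Hypothetical country, populations in millions: as the youth bulge moves
# into working age, the dependency ratio falls and the dividend window opens.
before = dependency_ratio(young=40, working=50, old=10)  # 100.0 dependents per 100 workers
after = dependency_ratio(young=25, working=65, old=10)   # ~53.8 dependents per 100 workers
print(before, round(after, 1))
```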
https://en.wikipedia.org/wiki/Rs7997012
In genetics, rs7997012 is a gene variation—a single nucleotide polymorphism (SNP)—in intron 2 of the human HTR2A gene that codes for the 5-HT2A receptor. The SNP varies between adenine (A) and guanine (G) DNA bases with the G-allele being most frequent. A research study found it to be related to antidepressant treatment. The research group reported that a polymorphism (rs1954787) on another gene, the GRIK4, has also shown a treatment-response-association in this kind of treatment. In a Japanese study rs7997012 was not associated with either major depressive disorder or bipolar disorder. Rs6311, rs6313 and His452Tyr (rs6314) are other SNPs in the HTR2A gene. There are many more, even in intron 2 alone.
https://en.wikipedia.org/wiki/Cellular%20differentiation
Cellular differentiation is the process in which a stem cell changes from one type to a differentiated one. Usually, the cell changes to a more specialized type. Differentiation happens multiple times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. However, metabolic composition does get altered quite dramatically where stem cells are characterized by abundant metabolites with highly unsaturated structures whose levels decrease upon differentiation. Thus, different cells can have very different physical characteristics despite having the same genome. A specialized type of differentiation, known as terminal differentiation, is of importance in some tissues, including vertebrate nervous system, striated muscle, epidermis and gut. During terminal differentiation, a precursor cell formerly capable of cell division permanently leaves the cell cycle, dismantles the cell cycle machinery and often expresses a range of genes characteristic of the cell's final function (e.g. myosin and actin for a muscle cell). Differentiation may continue to occur after terminal differentiation if the capacity and functions of the cell undergo further changes. Among dividing cells, there are multiple levels of cell potency, which is the cell's ability to differentiate into other cell types. A greater potency indicates a larger n
https://en.wikipedia.org/wiki/The%20Talking%20Stone
"The Talking Stone" is a science fiction mystery short story by American writer Isaac Asimov, which first appeared in the October 1955 issue of The Magazine of Fantasy and Science Fiction and was reprinted in the 1968 collection Asimov's Mysteries. "The Talking Stone" was the second of Asimov's Wendell Urth stories. Plot summary Larry Verdansky, a repair technician assigned alone on Station Five, is interested in "siliconies", the silicon-based life forms found on some asteroids. The creatures typically grow to a maximum size of by absorbing gamma rays from radioactive ores. Some are telepathic. When the space freighter Robert Q appears at the station with a giant of a "silicony" in diameter, Verdansky deduces that the crew has found an incredibly rich source of uranium. Verdansky contacts the authorities, but before a patrol ship can reach her, the Robert Q is hit by a meteor, killing the three human crew members. The silicony itself is fatally injured from the explosive decompression. When questioned, the dying silicony states that the coordinates of its home are written on "the asteroid". Dr. Wendell Urth deduces that the silicony meant that the numbers were actually engraved on the hull of the Robert Q, disguised as serial and registration numbers, since the ship fit the definition of an asteroid (a small body orbiting the Sun) the ship's crew had read to it from an ancient astronomy book.
https://en.wikipedia.org/wiki/Red%20clump
The red clump is a clustering of red giants in the Hertzsprung–Russell diagram at around 5,000 K and absolute magnitude (MV) +0.5, slightly hotter than most red-giant-branch stars of the same luminosity. It is visible as a denser region of the red-giant branch or a bulge towards hotter temperatures. It is prominent in many galactic open clusters, and it is also noticeable in many intermediate-age globular clusters and in nearby field stars (e.g. the Hipparcos stars). The red clump giants are cool horizontal branch stars, stars originally similar to the Sun which have undergone a helium flash and are now fusing helium in their cores. Properties Red clump stellar properties vary depending on their origin, most notably on the metallicity of the stars, but typically they have early K spectral types and effective temperatures around 5,000 K. The absolute visual magnitude of red clump giants near the sun has been measured at an average of +0.81 with metallicities between −0.6 and +0.4 dex. There is a considerable spread in the properties of red clump stars even within a single population of similar stars such as an open cluster. This is partly due to the natural variation in temperatures and luminosities of horizontal branch stars when they form and as they evolve, and partly due to the presence of other stars with similar properties. Although red clump stars are generally hotter than red-giant-branch stars, the two regions overlap and the status of individual stars can only be assigned with a detailed chemical abundance study. Evolution Modelling of the horizontal branch has shown that stars have a strong tendency to cluster at the cool end of the zero age horizontal branch (ZAHB). This tendency is weaker in low metallicity stars, so the red clump is usually more prominent in metal-rich clusters. However, there are other effects, and there are well-populated red clumps in some metal-poor globular clusters. Stars with a similar mass to the sun evolve towards
https://en.wikipedia.org/wiki/List%20of%20Latin-script%20alphabets
The lists and tables below summarize and compare the letter inventories of some of the Latin-script alphabets. In this article, the scope of the word "alphabet" is broadened to include letters with tone marks, and other diacritics used to represent a wide range of orthographic traditions, without regard to whether or how they are sequenced in their alphabet or the table. Parentheses indicate characters not used in modern standard orthographies of the languages, but used in obsolete and/or dialectal forms. Letters contained in the ISO basic Latin alphabet Alphabets that contain only ISO basic Latin letters Among alphabets for natural languages the English,[36] Indonesian, and Malay alphabets only use the 26 letters in both cases. Among alphabets for constructed languages the Ido and Interlingua alphabets only use the 26 letters. Extended by ligatures German (ß), French (æ, œ) Extended by diacritical marks Spanish (ñ), German (ä, ö, and ü), Dutch (ij, and ë) Extended by multigraphs Filipino (ng) Alphabets that contain all ISO basic Latin letters Among alphabets for natural languages the Afrikaans,[54] Aromanian, Azerbaijani (some dialects)[53], Basque,[4], Celtic British, Catalan,[6] Cornish, Czech,[8] Danish,[9] Dutch,[10] Emilian-Romagnol, Filipino,[11] Finnish, French,[12], German,[13] Greenlandic, Hungarian,[15] Javanese, Karakalpak,[23] Kurdish, Modern Latin, Luxembourgish, Norwegian,[9] Oromo[65], Papiamento[63], Polish[22], Portuguese, Quechua, Rhaeto-Romance, Romanian, Slovak,[24] Spanish,[25] Sundanese, Swedish, Tswana,[52] Uyghur, Venda,[51] Võro, Walloon,[27] West Frisian, Xhosa, Zhuang, Zulu alphabets include all 26 letters, at least in their largest version. Among alphabets for constructed languages the Interglossa and Occidental alphabets include all 26 letters. The International Phonetic Alphabet (IPA) includes all 26 letters in their lowercase forms, although g is always single-storey (ɡ) in the IPA and never double-storey (). Alp
https://en.wikipedia.org/wiki/Anterior%20perforated%20substance
The anterior perforated substance is a part of the brain. It is bilateral. It is irregular and quadrilateral. It lies in front of the optic tract and behind the olfactory trigone. Structure The anterior perforated substance is bilateral. It lies in front of the optic tract. It lies behind the olfactory trigone, separated by the fissure prima. Medially and in front, it is continuous with the subcallosal gyrus. Laterally, it is bounded by the lateral stria of the olfactory tract, and is continued into the uncus. Its gray substance is confluent above with that of the corpus striatum, and is perforated anteriorly by numerous small blood vessels that supply such areas as the internal capsule. The anterior cerebral artery arises just below the anterior perforated substance. The middle cerebral artery passes through its lateral two thirds. Blood supply The anterior perforated substance is supplied by lenticulostriate arteries, which branch from the middle cerebral artery. It is also supplied by anterior choroidal artery. Small branches from these create holes, which give the anterior perforated substance its name. History The anterior perforated substance is named after the holes created by small blood vessels that supply it. Additional images See also Posterior perforated substance
https://en.wikipedia.org/wiki/Balanced%20salt%20solution
A balanced salt solution (BSS) is a solution made to a physiological pH and isotonic salt concentration. Solutions most commonly include sodium, potassium, calcium, magnesium, and chloride. Balanced salt solutions are used for washing tissues and cells and are usually combined with other agents to treat the tissues and cells. They provide the cells with water and inorganic ions, while maintaining a physiological pH and osmotic pressure. Sometimes glucose is added as an energy source and phenol red is used as a pH indicator. In medicine, balanced salt solutions can be used as an irrigation solution such as during intraocular surgery and to replace intraocular fluids. Balanced salt solutions Alsever's solution Earle's balanced salt solution (EBSS) Gey's balanced salt solution (GBSS) Hanks' balanced salt solution (HBSS) (Dulbecco's) Phosphate buffered saline (PBS) Puck's balanced salt solution Ringer's balanced salt solution (RBSS) Simm's balanced salt solution (SBSS) TRIS-buffered saline (TBS) Tyrode's balanced salt solution (TBSS) Surgical irrigation solutions BSS (ophthalmic irrigation solution) (produced by Alcon) Composition per 1 mL: sodium chloride (NaCl) 6.4 mg, potassium chloride (KCl) 0.75 mg, calcium chloride dihydrate (CaCl2·2H2O) 0.48 mg, magnesium chloride hexahydrate (MgCl2•6H2O) 0.3 mg, sodium acetate trihydrate (C2H3NaO2·3H2O) 3.9 mg, sodium citrate dihydrate (C6H5Na3O7·2H2O) 1.7 mg, sodium hydroxide and/or hydrochloric acid (to adjust pH), and water for injection. The pH is approximately 7.5. The osmolality is approximately 300 mOsm/Kg. BSS Plus (ophthalmic irrigation solution) (produced by Alcon) Composition per 1 mL (once preparation complete): sodium chloride 7.14 mg (122.17 mmol), potassium chloride 0.38 mg (5.097 mmol), calcium chloride dihydrate 0.154 mg (1.04754 mmol), magnesium chloride hexahydrate 0.2 mg (0.983767 mmol), dibasic sodium phosphate 0.42 mg (2.95858 mmol), sodium bicarbonate 2.1 mg (24.998 mmol), dextrose 0.92
https://en.wikipedia.org/wiki/Random%20search
Random search (RS) is a family of numerical optimization methods that do not require the gradient of the optimization problem, and RS can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods. In 1953, Anderson reviewed the progress of methods for finding a maximum or minimum using a series of guesses distributed with a certain order or pattern in the parameter search space, e.g. a confounded design with exponentially distributed spacings/steps. This search proceeds sequentially on each parameter and refines iteratively on the best guesses from the last sequence. The pattern can be a grid (factorial) search of all parameters, a sequential search on each parameter, or a combination of both. The method was developed to screen experimental conditions in chemical reactions by a number of scientists listed in Anderson's paper. A MATLAB code reproducing the sequential procedure for the general non-linear regression of an example mathematical model can be found here (FitNGuess @ GitHub). The name "random search" is attributed to Rastrigin, who made an early presentation of RS along with basic mathematical analysis. RS works by iteratively moving to better positions in the search space, which are sampled from a hypersphere surrounding the current position. The algorithm described herein is a type of local random search, where every iteration is dependent on the prior iteration's candidate solution. There are alternative random search methods that sample from the entirety of the search space (for example, pure random search or uniform global random search), but these are not described in this article. Random search has been used in artificial neural networks for hyper-parameter optimization. If good parts of the search space occupy 5% of the volume, the chances of hitting a good configuration in the search space are 5%. The probability of finding at
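The local random search loop described above — candidates drawn from a hypersphere around the current position, moving only on improvement — can be sketched as follows; the fixed sampling radius, iteration budget, and function names are illustrative assumptions rather than anything prescribed by the source:

```python
import math
import random

def local_random_search(f, x0, radius=0.5, iters=2000, seed=1):
    """Minimize f by sampling candidates uniformly on a hypersphere of the
    given radius around the current best point, moving only on improvement."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # A normalized vector of Gaussian samples is uniform on the sphere.
        d = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(di * di for di in d))
        cand = [xi + radius * di / norm for xi, di in zip(x, d)]
        fc = f(cand)
        if fc < fx:          # greedy acceptance: keep only improving moves
            x, fx = cand, fc
    return x, fx

# Sphere function: minimum value 0 at the origin.
best, val = local_random_search(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

With a fixed radius the search stalls once the distance to the optimum falls below about radius/2 (no point on the sphere improves), so practical variants shrink the radius over time.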
https://en.wikipedia.org/wiki/Coxeter%20notation
In geometry, Coxeter notation (also Coxeter symbol) is a system of classifying symmetry groups, describing the angles between fundamental reflections of a Coxeter group in a bracketed notation expressing the structure of a Coxeter-Dynkin diagram, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and has been more comprehensively defined by Norman Johnson. Reflectional groups For Coxeter groups, defined by pure reflections, there is a direct correspondence between the bracket notation and the Coxeter-Dynkin diagram. The numbers in the bracket notation represent the mirror reflection orders in the branches of the Coxeter diagram. It uses the same simplification, suppressing 2s between orthogonal mirrors. The Coxeter notation is simplified with exponents to represent the number of branches in a row for linear diagrams. So the An group is represented by [3^(n−1)], to imply n nodes connected by n−1 order-3 branches. Example A2 = [3,3] = [3^2] or [3^(1,1)] represents diagrams or . Coxeter initially represented bifurcating diagrams with vertical positioning of numbers, but later abbreviated with an exponent notation, like [...,3^(p,q)] or [3^(p,q,r)], starting with [3^(1,1,1)] or [3,3^(1,1)] = or as D4. Coxeter allowed for zeros as special cases to fit the An family, like A3 = [3,3,3,3] = [3^(4,0,0)] = [3^(4,0)] = [3^(3,1)] = [3^(2,2)], like = = . Coxeter groups formed by cyclic diagrams are represented by parentheses inside of brackets, like [(p,q,r)] = for the triangle group (p q r). If the branch orders are equal, they can be grouped as an exponent as the length of the cycle in brackets, like [(3,3,3,3)] = [3^[4]], representing Coxeter diagram or . can be represented as [3,(3,3,3)] or [3,3^[3]]. More complicated looping diagrams can also be expressed with care. The paracompact Coxeter group can be represented by Coxeter notation [(3,3,(3),3,3)], with nested/overlapping parentheses showing two adjacent [(3,3,3)] loops, and is also represented more com
https://en.wikipedia.org/wiki/Azimuthal%20quantum%20number
In quantum mechanics, the azimuthal quantum number is a quantum number for an atomic orbital that determines its orbital angular momentum and describes the shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number mℓ, and the spin quantum number ms). It is also known as the orbital angular momentum quantum number, orbital quantum number, subsidiary quantum number, or second quantum number, and is symbolized as ℓ (pronounced ell). Derivation Connected with the energy states of the atom's electrons are four quantum numbers: n, ℓ, mℓ, and ms. These specify the complete, unique quantum state of a single electron in an atom, and make up its wavefunction or orbital. When solving to obtain the wave function, the Schrödinger equation reduces to three equations that lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The azimuthal quantum number arose in the solution of the polar part of the wave equation as shown below, reliant on the spherical coordinate system, which generally works best with models having some glimpse of spherical symmetry. An atomic electron's angular momentum, L, is related to its quantum number ℓ by the following equation: L²Ψ = ħ²ℓ(ℓ + 1)Ψ, where ħ is the reduced Planck constant, L² is the orbital angular momentum operator and Ψ is the wavefunction of the electron. The quantum number ℓ is always a non-negative integer: 0, 1, 2, 3, etc. L has no real meaning except in its use as the angular momentum operator. When referring to angular momentum, it is better to simply use the quantum number ℓ. Atomic orbitals have distinctive shapes denoted by letters. In the illustration, the letters s, p, and d (a convention originating in spectroscopy) describe the shape of the atomic orbital. Their wavefunctions take the form of spherical harmonics, and
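The magnitude of the orbital angular momentum fixed by ℓ, |L| = ħ√(ℓ(ℓ + 1)), can be tabulated for the first few orbital letters in a short illustrative snippet (ħ is derived here from the exact SI value of the Planck constant):

```python
import math

HBAR = 6.62607015e-34 / (2 * math.pi)   # reduced Planck constant, J*s

def orbital_angular_momentum(l):
    """|L| = hbar * sqrt(l * (l + 1)) for azimuthal quantum number l."""
    if l != int(l) or l < 0:
        raise ValueError("l must be a non-negative integer")
    return HBAR * math.sqrt(l * (l + 1))

# s, p, d, f orbitals correspond to l = 0, 1, 2, 3.
for letter, l in zip("spdf", range(4)):
    print(f"{letter}: l={l}, |L| = {orbital_angular_momentum(l):.3e} J*s")
```

Note that |L| is never simply ℓħ: even a p electron (ℓ = 1) carries ħ√2 of angular momentum.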
https://en.wikipedia.org/wiki/Eudoxus%20of%20Cnidus
Eudoxus of Cnidus (Eúdoxos ho Knídios) was an ancient Greek astronomer, mathematician, doctor, and lawmaker. He was a student of Archytas and Plato. All of his original works are lost, though some fragments are preserved in Hipparchus' Commentaries on the Phenomena of Aratus and Eudoxus. Spherics by Theodosius of Bithynia may be based on a work by Eudoxus. Life Eudoxus, son of Aeschines, was born and died in Cnidus (also transliterated Knidos), a city on the southwest coast of Anatolia. The years of Eudoxus' birth and death are not fully known, but Diogenes Laërtius gave several biographical details, mentioned that Apollodorus said he reached his acme in the 103rd Olympiad (368–), and claimed he died in his 53rd year. From this, 19th-century mathematical historians reconstructed dates of 408–, but 20th-century scholars found their choices contradictory and prefer a birth year of . His name Eudoxus means "honored" or "of good repute" (from eu "good" and doxa "opinion, belief, fame", analogous to the Latin Benedictus). According to Diogenes Laërtius, crediting Callimachus' Pinakes, Eudoxus studied mathematics with Archytas (of Tarentum, Magna Graecia) and studied medicine with Philiston the Sicilian. At the age of 23, he traveled with the physician Theomedon—who was his patron and possibly his lover—to Athens to study with the followers of Socrates. He spent two months there—living in Piraeus and walking each way every day to attend the Sophists' lectures—then returned home to Cnidus. His friends then paid to send him to Heliopolis, Egypt for 16 months, to pursue his study of astronomy and mathematics. From Egypt, he then traveled north to Cyzicus, located on the south shore of the Sea of Marmara, the Propontis. He traveled south to the court of Mausolus. During his travels he gathered many students of his own. Around 368 BC, Eudoxus returned to Athens with his students. According to some sources, he assumed headship (scholarch) of the Academy during Plato'
https://en.wikipedia.org/wiki/Peter%20Keevash
Peter Keevash (born 30 November 1978) is a British mathematician, working in combinatorics. He is a professor of mathematics at the University of Oxford and a Fellow of Mansfield College. Early years Keevash was born in Brighton, England, but mostly grew up in Leeds. He competed in the International Mathematical Olympiad in 1995. He entered Trinity College, University of Cambridge, in 1995 and completed his B.A. in mathematics in 1998. He earned his doctorate from Princeton University with Benny Sudakov as advisor. He took a postdoctoral position at the California Institute of Technology before moving to Queen Mary, University of London as a lecturer, and subsequently professor, before his move to Oxford in September 2013. Mathematics Keevash has published many results in combinatorics, particularly in extremal graph and hypergraph theory and Ramsey Theory. In joint work with Tom Bohman he established the best-known lower bound for the off-diagonal Ramsey Number , namely (This result was obtained independently at the same time by Fiz Pontiveros, Griffiths and Morris.) On 15 January 2014, he released a preprint establishing the existence of block designs with arbitrary parameters, provided only that the underlying set is sufficiently large and satisfies certain obviously necessary divisibility conditions. In particular, his work provides the first examples of Steiner systems with parameter t ≥ 6 (and in fact provides such systems for all t). In 2018, he was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro.
https://en.wikipedia.org/wiki/Mannheimia%20virus%20PHL101
Mannheimia virus PHL101 is a virus of the family Myoviridae, genus Baylorvirus. As a member of group I of the Baltimore classification, Mannheimia virus PHL101 is a dsDNA virus. All members of the family Myoviridae share a nonenveloped morphology consisting of a head and a tail separated by a neck. Its genome is linear. Propagation of the virions involves attachment to a host cell (a bacterium, as Mannheimia virus PHL101 is a bacteriophage) and injection of the double-stranded DNA; the host transcribes and translates it to manufacture new particles. Replication of its genetic content requires host-cell DNA polymerases and, hence, the process is highly dependent on the cell cycle. Mannheimia virus PHL101 is a lysogenic phage. Its genome contains 34,525 base pairs and 50 open reading frames.
https://en.wikipedia.org/wiki/Sargan%E2%80%93Hansen%20test
The Sargan–Hansen test or Sargan's test is a statistical test used for testing over-identifying restrictions in a statistical model. It was proposed by John Denis Sargan in 1958, and several variants were derived by him in 1975. Lars Peter Hansen re-worked through the derivations and showed that it can be extended to general non-linear GMM in a time series context. The Sargan test is based on the assumption that model parameters are identified via a priori restrictions on the coefficients, and tests the validity of over-identifying restrictions. The test statistic can be computed from residuals from instrumental variables regression by constructing a quadratic form based on the cross-product of the residuals and exogenous variables. Under the null hypothesis that the over-identifying restrictions are valid, the statistic is asymptotically distributed as a chi-square variable with m − k degrees of freedom (where m is the number of instruments and k is the number of endogenous variables). See also Durbin–Wu–Hausman test
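The quadratic form mentioned above is, in its standard textbook version, J = n · u'Z(Z'Z)⁻¹Z'u / (u'u), where u is the vector of IV residuals and Z the instrument matrix. A NumPy sketch under that formula (the function name and simulated data are illustrative, and homoskedastic errors are assumed):

```python
import numpy as np

def sargan_statistic(u, Z):
    """Sargan J statistic from IV residuals u (n,) and instruments Z (n, m).

    Under valid over-identifying restrictions it is asymptotically
    chi-square with m - k degrees of freedom, k being the number of
    endogenous regressors (k enters only through the reference distribution).
    """
    n = len(u)
    # Projection of the residuals onto the column space of the instruments.
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u)
    return n * (u @ Pu) / (u @ u)

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 3))   # 3 instruments
u = rng.standard_normal(500)        # noise uncorrelated with Z, so here
J = sargan_statistic(u, Z)          # J behaves like a chi-square(3) draw
```

Since u'Pu is the squared norm of a projection, the statistic always lies between 0 and n; large values lead to rejection of the over-identifying restrictions.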
https://en.wikipedia.org/wiki/Spermatic%20plexus
The spermatic plexus (or testicular plexus) is derived from the renal plexus, receiving branches from the aortic plexus. It accompanies the internal spermatic artery to the testis. Additional images
https://en.wikipedia.org/wiki/Carbaminohemoglobin
Carbaminohemoglobin (carbaminohaemoglobin BrE) (CO2Hb, also known as carbhemoglobin and carbohemoglobin) is a compound of hemoglobin and carbon dioxide, and is one of the forms in which carbon dioxide exists in the blood. Twenty-three percent of carbon dioxide is carried in blood this way (70% is converted into bicarbonate by carbonic anhydrase and then carried in plasma, 7% carried as free CO2, dissolved in plasma). Synthesis When the tissues release carbon dioxide into the bloodstream, around 10% is dissolved into the plasma. The rest of the carbon dioxide is carried either directly or indirectly by hemoglobin. Approximately 10% of the carbon dioxide carried by hemoglobin is in the form of carbaminohemoglobin. This carbaminohemoglobin is formed by the reaction between carbon dioxide and an amino (-NH2) residue from the globin molecule, resulting in the formation of a carbamino residue (-NH.COO−). The rest of the carbon dioxide is transported in the plasma as bicarbonate anions. Mechanism When carbon dioxide binds to hemoglobin, carbaminohemoglobin is formed, lowering hemoglobin's affinity for oxygen via the Bohr effect. The reaction is formed between a carbon dioxide molecule and an amino residue. In the absence of oxygen, unbound hemoglobin molecules have a greater chance of becoming carbaminohemoglobin. The Haldane effect relates to the increased affinity of de-oxygenated hemoglobin for : offloading of oxygen to the tissues thus results in increased affinity of the hemoglobin for carbon dioxide, and , which the body needs to get rid of, which can then be transported to the lung for removal. Because the formation of this compound generates hydrogen ions, haemoglobin is needed to buffer it. Hemoglobin can bind to four molecules of carbon dioxide. The carbon dioxide molecules form a carbamate with the four terminal-amine groups of the four protein chains in the deoxy form of the molecule. Thus, one hemoglobin molecule can transport four carbon dioxide molec
https://en.wikipedia.org/wiki/Thymidylate%20synthase%20inhibitor
Thymidylate synthase inhibitors are chemical agents which inhibit the enzyme thymidylate synthase and have potential as anticancer chemotherapy. This inhibition prevents the methylation of C5 of deoxyuridine monophosphate (dUMP), thereby blocking the synthesis of deoxythymidine monophosphate (dTMP). The downstream effect is promotion of cell death, because cells lacking dTMP, a necessary precursor to dTTP, cannot properly carry out DNA synthesis. Five agents were in clinical trials in 2002: raltitrexed, pemetrexed, nolatrexed, ZD9331, and GS7904L. Examples include: raltitrexed, used for colorectal cancer since 1998; fluorouracil, used for colorectal cancer; BGC 945; and OSI-7904L.
https://en.wikipedia.org/wiki/Cellular%20cardiomyoplasty
Cellular cardiomyoplasty, or cell-based cardiac repair, is a new potential therapeutic modality in which progenitor cells are used to repair regions of damaged or necrotic myocardium. The ability of transplanted progenitor cells to improve function within the failing heart has been shown in experimental animal models and in some human clinical trials. In November 2011, a large group of collaborators at the Minneapolis Heart Institute Foundation at Abbott Northwestern found no significant difference in left ventricular ejection fraction (LVEF), or other markers, between a group of patients treated with cellular cardiomyoplasty and a group of control patients. In this study, all patients were post MI and post percutaneous coronary intervention (PCI), and infusion of progenitor cells occurred 2–3 weeks after intervention. In a study still underway as of February 2012, however, more positive results were reported: in the SCIPIO trial, patients treated with autologous cardiac stem cells post MI showed statistically significant increases in LVEF and reductions in infarct size over the control group at four months after implant. Positive results at the one-year mark were even more pronounced. Yet the SCIPIO trial "was recently called into question". Harvard University is "now investigating the integrity of some of the data". The Lancet recently published a non-specific 'Expression of concern' about the paper. Subsequently, another preclinical study also raised doubts about the rationale behind using this special kind of cell, as the cells were found to have only a minimal ability to generate new cardiomyocytes. Some specialists therefore now raise concerns about continuing this approach. Progenitor cell lines To date, the ideal progenitor cells have not been found or created. With the goal of recreating human tissue, the use of embryonic stem cells (ESC) was the initial logical choice. These pluripotent cells can conceptually give rise to any somatic cell type.
https://en.wikipedia.org/wiki/Centro%20Nacional%20de%20Aceleradores
The Centro Nacional de Aceleradores (CNA) is the centre for particle accelerators in Spain and is based in Seville. It was created in 1997. It combines the efforts of the University of Seville, the Regional Government of Andalusia and the Spanish Higher Council for Scientific Research. It is located in the Cartuja 93 Science and Technology Park. It has three different types of ion accelerators (a 3 MV Van de Graaff tandem, a cyclotron which provides 18 MeV protons and 9 MeV deuterons, and a 1 MV Cockcroft-Walton tandem used as a mass spectrometer) for studies in various fields. In addition, it features a PET/CT scanner for people, a new carbon-14 dating system (the MICADAS) and a 60Co irradiator.