https://en.wikipedia.org/wiki/Entomological%20Society%20of%20Japan
The Entomological Society of Japan was founded in 1917 for the purpose of improving and promoting entomology in Japan.
https://en.wikipedia.org/wiki/Ecological%20resilience
In ecology, resilience is the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and recovering quickly. Such perturbations and disturbances can include stochastic events such as fires, flooding, windstorms, and insect population explosions, and human activities such as deforestation, fracking for oil extraction, the spraying of pesticides into soil, and the introduction of exotic plant or animal species. Disturbances of sufficient magnitude or duration can profoundly affect an ecosystem and may force an ecosystem to reach a threshold beyond which a different regime of processes and structures predominates. When such thresholds are associated with a critical or bifurcation point, these regime shifts may also be referred to as critical transitions. Human activities that adversely affect ecological resilience, such as reduction of biodiversity, exploitation of natural resources, pollution, land use, and anthropogenic climate change, are increasingly causing regime shifts in ecosystems, often to less desirable and degraded conditions. Interdisciplinary discourse on resilience now includes consideration of the interactions of humans and ecosystems via socio-ecological systems, and the need for a shift from the maximum sustainable yield paradigm to environmental resource management and ecosystem management, which aim to build ecological resilience through "resilience analysis, adaptive resource management, and adaptive governance". Ecological resilience has inspired other fields and continues to challenge the way they interpret resilience, e.g. supply chain resilience. Definitions The IPCC Sixth Assessment Report defines resilience as “not just the ability to maintain essential function, identity and structure, but also the capacity for transformation.” The IPCC considers resilience both in terms of ecosystem recovery and in terms of the recovery and adaptation of human societies to natural disasters. The concept of resilience in ecolog
https://en.wikipedia.org/wiki/Private%20VLAN
Private VLAN, also known as port isolation, is a technique in computer networking where a VLAN contains switch ports that are restricted such that they can only communicate with a given uplink. The restricted ports are called private ports. Each private VLAN typically contains many private ports, and a single uplink. The uplink will typically be a port (or link aggregation group) connected to a router, firewall, server, provider network, or similar central resource. The concept was primarily introduced as a result of the limitation on the number of VLANs in network switches, a limit quickly exhausted in highly scaled scenarios. Hence, there was a need to create multiple segregated network segments using a minimal number of VLANs. The switch forwards all frames received from a private port to the uplink port, regardless of VLAN ID or destination MAC address. Frames received from an uplink port are forwarded in the normal way (i.e. to the port hosting the destination MAC address, or to all ports of the VLAN for broadcast frames or for unknown destination MAC addresses). As a result, direct traffic between peers through the switch is blocked, and any such communication must go through the uplink. While private VLANs provide isolation between peers at the data link layer, communication at higher layers may still be possible depending on further network configuration. A typical application for a private VLAN is a hotel or Ethernet to the home network where each room or apartment has a port for Internet access. Similar port isolation is used in Ethernet-based ADSL DSLAMs. Allowing direct data link layer communication between customer nodes would expose the local network to various security attacks, such as ARP spoofing, as well as increase the potential for damage due to misconfiguration. Another application of private VLANs is to simplify IP address assignment. Ports can be isolated from each other at the data link layer (for security, performance,
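The forwarding rule described above is simple enough to sketch in code. The following Python fragment is a toy model only: the port names, MAC table, and frame representation are invented for the example and are not part of any switch API. It shows that frames arriving on a private port always egress via the uplink, while frames from the uplink are bridged normally.

```python
# Toy model of the private-VLAN forwarding rule (illustrative names only).
UPLINK = "uplink0"
PRIVATE_PORTS = {"eth1", "eth2", "eth3"}
MAC_TABLE = {"aa:aa:aa:aa:aa:01": "eth1", "aa:aa:aa:aa:aa:02": "eth2"}
ALL_PORTS = PRIVATE_PORTS | {UPLINK}

def egress_ports(ingress_port, dst_mac):
    """Return the set of ports a frame is forwarded to."""
    if ingress_port in PRIVATE_PORTS:
        # Private ports may only talk to the uplink, regardless of the
        # destination MAC, so peer-to-peer frames never cross the switch.
        return {UPLINK}
    # Frames from the uplink are forwarded normally: to the learned port,
    # or flooded to the rest of the VLAN for broadcast/unknown destinations.
    learned = MAC_TABLE.get(dst_mac)
    return {learned} if learned else ALL_PORTS - {ingress_port}

print(egress_ports("eth1", "aa:aa:aa:aa:aa:02"))     # {'uplink0'}: peer blocked
print(egress_ports("uplink0", "aa:aa:aa:aa:aa:01"))  # {'eth1'}: normal bridging
```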
https://en.wikipedia.org/wiki/MAC-Forced%20Forwarding
MAC-Forced Forwarding (MACFF) is used to control unwanted broadcast traffic and host-to-host communication. This is achieved by directing network traffic from hosts located on the same subnet but at different locations to an upstream gateway device. This provides security at Layer 2 since no traffic is able to pass directly between the hosts. MACFF is suitable for Ethernet networks where a layer 2 bridging device, known as an Ethernet Access Node (EAN), connects Access Routers to their clients. MACFF is configured on the EANs. MACFF is described in RFC 4562, MAC-Forced Forwarding: A Method for Subscriber Separation on an Ethernet Access Network. Allied Telesis switches implement MACFF using DHCP snooping to maintain a database of the hosts that appear on each switch port. When a host tries to access the network through a switch port, DHCP snooping checks the host’s IP address against the database to ensure that the host is valid. MACFF then uses DHCP snooping to check whether the host has a gateway Access Router. If it does, MACFF uses a form of Proxy ARP to reply to any ARP requests, giving the router's MAC address. This forces the host to send all traffic to the router, even traffic destined to a host in the same subnet as the source. The router receives the traffic and makes forwarding decisions based on a set of forwarding rules, typically a QoS policy or a set of filters.
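As a sketch of the behaviour just described, the fragment below models the MACFF proxy-ARP decision in Python. The table layout and all names are illustrative assumptions, not structures defined by RFC 4562 or by the Allied Telesis implementation.

```python
# Toy model of the MACFF proxy-ARP decision (illustrative data layout).
DHCP_SNOOPING_DB = {
    # switch port -> (validated host IP, gateway Access Router IP)
    "port1": ("192.0.2.10", "192.0.2.1"),
    "port2": ("192.0.2.11", "192.0.2.1"),
}
ROUTER_MACS = {"192.0.2.1": "00:00:5e:00:53:01"}

def macff_arp_reply(ingress_port, sender_ip, target_ip):
    """Answer an ARP request from a validated host with its gateway's MAC,
    so even intra-subnet traffic is forced through the Access Router."""
    entry = DHCP_SNOOPING_DB.get(ingress_port)
    if entry is None or entry[0] != sender_ip:
        return None  # host not validated by DHCP snooping: no reply
    host_ip, gateway_ip = entry
    return ROUTER_MACS[gateway_ip]  # never the MAC of the queried peer

# An ARP request for a neighbour on the same subnet still yields the router MAC:
print(macff_arp_reply("port1", "192.0.2.10", target_ip="192.0.2.11"))
```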
https://en.wikipedia.org/wiki/Web%20of%20Science
The Web of Science (WoS; previously known as Web of Knowledge) is a paid-access platform that provides (typically via the internet) access to multiple databases that provide reference and citation data from academic journals, conference proceedings, and other documents in various academic disciplines. It was originally produced by the Institute for Scientific Information until 1997. It is currently owned by Clarivate. History A citation index is built on the fact that citations in science serve as linkages between similar research items, and lead to matching or related scientific literature, such as journal articles, conference proceedings, abstracts, etc. In addition, literature that shows the greatest impact in a particular field, or more than one discipline, can be easily located through a citation index. For example, a paper's influence can be determined by linking to all the papers that have cited it. In this way, current trends, patterns, and emerging fields of research can be assessed. Eugene Garfield, the "father of citation indexing of academic literature", launched the Science Citation Index, which in turn led to the Web of Science. Web of Science is described as a unifying research tool that enables the user to acquire, analyze, and disseminate database information in a timely manner. This is accomplished because of the creation of a common vocabulary, called ontology, for varied search terms and varied data. Moreover, search terms generate related information across categories. Acceptable content for Web of Science is determined by an evaluation and selection process based on the following criteria: impact, influence, timeliness, peer review, and geographic representation. Web of Science employs various search and analysis capabilities. First, citation indexing is employed, which is enhanced by the capability to search for results across disciplines. The influence, impact, history, and methodology of an idea can be
https://en.wikipedia.org/wiki/RA-1%20Enrico%20Fermi
RA-1 Enrico Fermi is a research reactor in Argentina. It was the first nuclear reactor to be built in that country and the first research reactor in the southern hemisphere. Construction started in April 1957, with first criticality on 20 January 1958. It produced the first medical and industrial radioisotopes made in Argentina, and was used to train staff for the first two nuclear power stations there. It is a pool-type reactor, with enriched uranium oxide fuel (20% U-235), light water coolant and moderator, and a graphite reflector. It produces 40 kilowatts of thermal power at full authorized power. It has been modernized on several occasions, and is currently used for research and teaching. External links Report of the National Atomic Energy Commission of Argentina (CNEA), November 2004 (PDF, 2353KB) El Reactor RA - 1, CNEA web page (in Spanish) El Reactor RA - 1 - Características, CNEA web page (in Spanish)
https://en.wikipedia.org/wiki/Bochner%20identity
In mathematics — specifically, differential geometry — the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner. Statement of the result Let M and N be Riemannian manifolds and let u : M → N be a harmonic map. Let du denote the derivative (pushforward) of u, ∇ the gradient, Δ the Laplace–Beltrami operator, RiemN the Riemann curvature tensor on N and RicM the Ricci curvature tensor on M. Then (1/2) Δ|du|² = |∇(du)|² + ⟨du · RicM, du⟩ − ⟨RiemN(du, du) du, du⟩. See also Bochner's formula
https://en.wikipedia.org/wiki/Time%20Sharing%20Operating%20System
Time Sharing Operating System, or TSOS, is a discontinued operating system for RCA mainframe computers of the Spectra 70 series. TSOS was originally designed in 1968 for the Spectra 70/46, a modified version of the 70/45. TSOS quickly evolved into the Virtual Memory Operating System (VMOS) by 1970. VMOS continued to be supported on the later RCA 3 and RCA 7 computer systems. RCA was in the computer business until 1971, when it sold its computer business to Sperry Corporation. Sperry renamed TSOS to VS/9 and continued to market it into the early 1980s. In the mid-1970s, an enhanced version of TSOS called BS2000 was offered by the German company Siemens. While Sperry – now Unisys – discontinued VS/9, the BS2000 variant, now called BS2000/OSD, is still offered by Fujitsu and used by their mainframe customers primarily in Germany and other European countries. As the name suggests, TSOS provided time sharing features. Similar to CTSS, it provided a common user interface for both time sharing and batch, which was a big advantage over IBM's OS/360 or its successors MVS, OS/390 and z/OS. See also Timeline of operating systems
https://en.wikipedia.org/wiki/Furstenberg%27s%20proof%20of%20the%20infinitude%20of%20primes
In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the American Mathematical Monthly while Furstenberg was still an undergraduate student at Yeshiva University. Furstenberg's proof Define a topology on the integers ℤ, called the evenly spaced integer topology, by declaring a subset U ⊆ ℤ to be an open set if and only if it is a union of arithmetic sequences S(a, b) for a ≠ 0, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where S(a, b) = {an + b : n ∈ ℤ}. Equivalently, U is open if and only if for every x in U there is some non-zero integer a such that S(a, x) ⊆ U. The axioms for a topology are easily verified: ∅ is open by definition, and ℤ is just the sequence S(1, 0), and so is open as well. Any union of open sets is open: for any collection of open sets Ui and x in their union U, any of the numbers ai for which S(ai, x) ⊆ Ui also shows that S(ai, x) ⊆ U. The intersection of two (and hence finitely many) open sets is open: let U1 and U2 be open sets and let x ∈ U1 ∩ U2 (with numbers a1 and a2 establishing membership). Set a to be the least common multiple of a1 and a2. Then S(a, x) ⊆ S(ai, x) ⊆ Ui. This topology has two notable properties: Since any non-empty open set contains an infinite sequence, a finite non-empty set cannot be open; put another way, the complement of a finite non-empty set cannot be a closed set. The basis sets S(a, b) are both open and closed: they are open by definition, and we can write S(a, b) as the complement of an open set as follows: S(a, b) = ℤ \ ⋃_{j=1}^{a−1} S(a, b + j). The only integers that are not integer multiples of prime numbers are −1 and +1, i.e. ℤ \ {−1, +1} = ⋃_{p prime} S(p, 0). Now, by
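The construction lends itself to a quick computational sanity check. The sketch below verifies, on a finite window of integers (an illustrative bound, since the actual sets are infinite), that each S(a, b) is the complement of a union of shifted sequences and that only −1 and +1 avoid every S(p, 0); it is a numerical companion, not part of the proof.

```python
# Finite-window check of the two facts used in Furstenberg's proof.
WINDOW = range(-1000, 1001)

def S(a, b):
    """Arithmetic sequence {a*n + b} restricted to the finite window."""
    return {x for x in WINDOW if (x - b) % a == 0}

# S(a, b) is closed: its complement is the open union of the shifted sequences.
a, b = 6, 1
complement = set().union(*(S(a, b + j) for j in range(1, a)))
assert S(a, b) == set(WINDOW) - complement

# Every integer other than -1 and +1 is a multiple of some prime.
primes = [p for p in range(2, 1001) if all(p % d for d in range(2, p))]
assert set().union(*(S(p, 0) for p in primes)) == set(WINDOW) - {-1, 1}
print("finite-window checks passed")
```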
https://en.wikipedia.org/wiki/High-redundancy%20actuation
High-redundancy actuation (HRA) is a new approach to fault-tolerant control in the area of mechanical actuation. Overview The basic idea is to use a large number of small actuation elements, so that a fault in one element has only a minor effect on the overall system. This way, a high-redundancy actuator can remain functional even after several elements are at fault. This property is also called graceful degradation. Fault-tolerant operation in the presence of actuator faults requires some form of redundancy. Actuators are essential, because they are used to keep the system stable and to bring it into the desired state. Both require a certain amount of power or force to be applied to the system. No control approach can work unless the actuators produce this necessary force. So the common solution is to err on the side of safety by over-actuation: much more control action than strictly necessary is built into the system. For critical systems, the normal approach involves straightforward replication of the actuators. Often three or four actuators are used in parallel for aircraft flight control systems, even if one would be sufficient from a control point of view. So if one actuator fails, the remaining actuators can still keep the system operational. While this approach is certainly successful, it also makes the system expensive, heavy and inefficient. Inspiration of high-redundancy actuation The idea of the high-redundancy actuation (HRA) is inspired by the human musculature. A muscle is composed of many individual muscle cells, each of which provides only a minute contribution to the force and the travel of the muscle. These properties allow the muscle as a whole to be highly resilient to damage of individual cells. Technical realisation The aim of high redundancy actuation is not to produce man-made muscles, but to use the same principle of cooperation in technical actuators to provide intrinsic fault tolerance. To achieve this, a high number of small actuator elemen
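A toy calculation makes the graceful-degradation claim concrete. The model below simply assumes that element forces add, which is an idealization; real HRA elements interact mechanically and may be arranged in series as well as in parallel.

```python
# Toy model of graceful degradation, assuming element forces simply add:
# with many small elements, losing one costs only a small fraction of the
# total capability, unlike a single monolithic actuator.
def available_force(n_elements, element_force, n_failed):
    return (n_elements - n_failed) * element_force

required = 1000.0  # required force in newtons (illustrative)
for n in (1, 4, 100):
    per_element = required / n
    after_fault = available_force(n, per_element, n_failed=1)
    print(f"{n:3d} elements: {after_fault / required:.0%} capability after one fault")
# 1 element -> 0%, 4 elements -> 75%, 100 elements -> 99%
```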
https://en.wikipedia.org/wiki/Energy%20transfer%20upconversion
Energy Transfer Upconversion or ETU is a physical principle (most commonly encountered in solid-state laser physics) that involves the excitation of a laser-active ion to a level above that which would be achieved by simple absorption of a pump photon, the required additional energy being transferred from another laser-active ion undergoing nonradiative deexcitation. ETU involves two fundamental ideas: energy transfer and upconversion. The analysis below will discuss ETU in the context of an optically pumped solid-state laser. A solid-state laser has laser-active ions embedded in a host medium. Energy may be transferred between these by dipole-dipole interaction (over short distances) or by fluorescence and reabsorption (over longer distances). In the case of ETU it is primarily dipole-dipole energy transfer that is of interest. If a laser-active ion is in an excited state, it can decay to a lower state either radiatively (i.e. energy is conserved by the emission of a photon, as required for laser operation) or nonradiatively. Nonradiative decay may be via Auger decay or via energy transfer to another laser-active ion. If this occurs, the ion receiving the energy will be excited to a higher energy state than that already achieved by absorption of a pump photon. This process of further exciting an already excited laser-active ion is known as photon upconversion. ETU is normally an unwanted effect when building lasers. Nonradiative decay is itself an inefficiency (in a perfect laser every downward transition would be a stimulated emission event), whilst the excitation of the energy-receiving ion can result in heating of the gain medium. When ETU occurs due to a clustering of ions within the host medium, it is sometimes termed concentration quenching.
https://en.wikipedia.org/wiki/Semiconductor%20Science%20and%20Technology
Semiconductor Science and Technology is a peer-reviewed scientific journal covering all applied or explicitly applicable experimental and theoretical studies of the properties of semiconductors and their interfaces, devices, and packaging. The journal publishes different article types including research papers, rapid communications, and topical reviews. The editor-in-chief is Koji Ishibashi (Advanced Device Laboratory, RIKEN, Japan). The previous editors-in-chief were Kornelius Nielsch (University of Hamburg) and Laurens Molenkamp (University of Würzburg). The journal is indexed in Inspec, Chemical Abstracts, Compendex, Applied Science and Technology Abstracts, Applied Science and Technology Index, PASCAL, VINITI Database RAS, and Science Citation Index Expanded.
https://en.wikipedia.org/wiki/Syllabogram
Syllabograms are signs used to write the syllables (or morae) of words. This term is most often used in the context of a writing system otherwise organized on different principles—an alphabet where most symbols represent phonemes, or a logographic script where most symbols represent morphemes—but a system based mostly on syllabograms is a syllabary. Syllabograms in the Maya script most frequently take the form of V (vowel) or CV (consonant-vowel) syllables, of which approximately 83 are known. CVC signs are present as well. Two modern, well-known examples of syllabaries consisting mostly of CV syllabograms are the two Japanese kana scripts, hiragana and katakana, which represent the same sounds but are used on different occasions. Syllabograms tend not to be used for languages with more complicated syllables: for example, English phonotactics allows syllables as complex as CCCVCCCC (as in strengths), generating many thousands of possible syllables and making the use of syllabograms cumbersome. Types of writing system that use syllabograms Syllabary Semi-syllabary
https://en.wikipedia.org/wiki/CCSO%20Nameserver
A CCSO name-server or Ph protocol was an early form of database search on the Internet. In its most common form, it was used to look up information such as telephone numbers and email addresses. Today, this service has been largely replaced by LDAP. It was used mainly in the early-to-middle 1990s. The name-server was developed by Steve Dorner at the University of Illinois at Urbana–Champaign, at the university's Computing and Communications Services Office (CCSO). There also exists an Outlook plugin and standalone application known as OutlookPH. Overview The name-server directories were frequently organized in Gopher hierarchies. The tools "Ph" and "Qi" were the two components of the system: Ph was a client that queried the Qi server. The Ph protocol was formally defined by RFC 2378 in September 1998. However, the memo issued at this time references its prior use for an unspecified period of time before this date (work on the protocol started around 1988, and it was in use from around 1991). It defines sixteen keywords that can be used on the server side to define record properties. It also defines how clients should access records on the server and what responses the server should give. Ph server communication takes place on TCP port 105. Command structure All commands and responses are initially assumed to be in US-ASCII encoding for historical reasons, unless the client explicitly asks for 8-bit (ISO-8859-1) encoding. As a result, only characters between 0x20 and 0x7E are initially sent by the server in raw form. Other characters, if present in entries, will be escaped using the defined "Quoted-Printable" encoding. The initial request from the client is a text-based keyword optionally followed by one or more parameters as defined in the RFC. The server then responds to the request. The following example response to a status request is provided by the RFC memo.
C: status
S: 100:Qi server $Revision: 1.6 $
S: 100:Ph passwords may be obtained at CCSO Accounting,
S: 100:1420
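For illustration, an exchange like the status request above can be reproduced with a few lines of socket code. This is a hedged sketch: the host name is a placeholder (public Qi servers are essentially extinct), and the reply-termination rule used here (a final line whose numeric code is 200 or greater) is inferred from the RFC's examples rather than quoted from it.

```python
import socket

HOST, PORT = "ns.example.edu", 105  # Ph/Qi communicates on TCP port 105

def ph_command(rfile, wfile, command):
    """Send one Ph command; collect reply lines until a final result code
    (assumption: codes >= 200 terminate a reply, per the RFC examples)."""
    wfile.write(command + "\r\n")
    wfile.flush()
    lines = []
    while True:
        line = rfile.readline().rstrip("\r\n")
        lines.append(line)
        code = line.split(":", 1)[0]
        if code.isdigit() and int(code) >= 200:
            return lines

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    rfile = sock.makefile("r", encoding="ascii", errors="replace")
    wfile = sock.makefile("w", encoding="ascii", newline="")
    for reply_line in ph_command(rfile, wfile, "status"):
        print(reply_line)
    ph_command(rfile, wfile, "quit")
```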
https://en.wikipedia.org/wiki/Swarming%20motility
Swarming motility is a rapid (2–10 μm/s) and coordinated translocation of a bacterial population across solid or semi-solid surfaces, and is an example of bacterial multicellularity and swarm behaviour. Swarming motility was first reported by Jorgen Henrichsen and has been mostly studied in the genera Serratia, Salmonella, Aeromonas, Bacillus, Yersinia, Pseudomonas, Proteus, Vibrio and Escherichia. This multicellular behavior has been mostly observed in controlled laboratory conditions and relies on two critical elements: 1) the nutrient composition and 2) the viscosity of the culture medium (i.e. % agar). One particular feature of this type of motility is the formation of dendritic fractal-like patterns formed by migrating swarms moving away from an initial location. Although the majority of species can produce tendrils when swarming, some species, like Proteus mirabilis, form a concentric-circle motif instead of dendritic patterns. Biosurfactant, quorum sensing and swarming In some species, swarming motility requires the self-production of biosurfactant to occur. Biosurfactant synthesis is usually under the control of an intercellular communication system called quorum sensing. Biosurfactant molecules are thought to act by lowering surface tension, thus permitting bacteria to move across a surface. Cellular differentiation Swarming bacteria undergo morphological differentiation that distinguishes them from their planktonic state. Cells localized at the migration front are typically hyperelongated, hyperflagellated and grouped in multicellular raft structures. Ecological significance The fundamental role of swarming motility remains unknown. However, it has been observed that actively swarming bacteria of Salmonella typhimurium show an elevated resistance to certain antibiotics compared to undifferentiated cells. See also Bacterial motility
https://en.wikipedia.org/wiki/International%20Mass%20Spectrometry%20Foundation
The International Mass Spectrometry Foundation (IMSF) is a non-profit scientific organization in the field of mass spectrometry. It operates the International Mass Spectrometry Society, which consists of 37 member societies, and sponsors the International Mass Spectrometry Conference that is held once every two years. Aims The foundation has four aims: organizing international conferences and workshops in mass spectrometry, improving mass spectrometry education, standardizing terminology in the field, and aiding in the dissemination of mass spectrometry through publications. Conferences Before the formation of the IMSF, the first International Mass Spectrometry Conference was held in London in 1958, and 41 papers were presented. Since then, conferences were held every three years until 2012, and every two years since. Conference proceedings are published in a book series, Advances in Mass Spectrometry, which is the oldest continuous series of publications in mass spectrometry. The International Mass Spectrometry Society evolved from this series of International Mass Spectrometry Conferences. The IMSF was officially registered in the Netherlands in 1998 following an agreement at the 1994 conference. Past meetings were held in various locations. Awards The society sponsors several awards, including the Curt Brunnée Award for achievements in instrumentation by a scientist under 45 years of age, the Thomson Medal Award for achievements in mass spectrometry, as well as travel awards and student paper awards. See also American Society for Mass Spectrometry British Mass Spectrometry Society Canadian Society for Mass Spectrometry List of female mass spectrometrists
https://en.wikipedia.org/wiki/Di-positronium
Di-positronium, or dipositronium, is an exotic molecule consisting of two atoms of positronium. It was predicted to exist in 1946 by John Archibald Wheeler, and subsequently studied theoretically, but was not observed until 2007 in an experiment performed by David Cassidy and Allen Mills at the University of California, Riverside. The researchers made the positronium molecules by firing intense bursts of positrons into a thin film of porous silicon dioxide. Upon slowing down in the silica, the positrons captured ordinary electrons to form positronium atoms. Within the silica, these were long-lived enough to interact, forming molecular di-positronium. Advances in trapping and manipulating positrons, and in spectroscopy techniques, have enabled studies of Ps–Ps interactions. In 2012, Cassidy et al. were able to produce an excited angular-momentum state of molecular positronium. See also Hydrogen molecule Hydrogen molecular ion Positronium Protonium Exotic atom
https://en.wikipedia.org/wiki/Victor%20Guillemin
Victor William Guillemin (born 1937 in Boston) is an American mathematician. He works at the Massachusetts Institute of Technology in the field of symplectic geometry, and he has also made contributions to the fields of microlocal analysis, spectral theory, and mathematical physics. Education and career Guillemin obtained a B.A. at Harvard University in 1959, as well as an M.A. at the University of Chicago in 1960. He received a Ph.D. in mathematics from Harvard University in 1962; his thesis, entitled Theory of Finite G-Structures, was written under the direction of Shlomo Sternberg. He worked at Columbia University from 1963 to 1966 and then moved to the Massachusetts Institute of Technology as an assistant professor. He became associate professor in 1969 and full professor in 1973. Awards and honors Guillemin was awarded a Sloan Research Fellowship in 1969, a Guggenheim Fellowship in 1988, and a Humboldt Fellowship in 1996. In 1970 he was an invited speaker at the International Congress of Mathematicians in Nice. He was elected a fellow of the American Academy of Arts and Sciences in 1984 and of the United States National Academy of Sciences in 1985. In 2003, he was awarded the Leroy P. Steele Prize for Lifetime Achievement by the American Mathematical Society. In 2012 he became a fellow of the American Mathematical Society. Research Guillemin worked in several areas in analysis and geometry, including microlocal analysis, symplectic group actions, and spectral theory of elliptic operators on manifolds. He is the author or co-author of numerous books and monographs, including a widely used textbook on differential topology, written jointly with Alan Pollack in 1974, and a monograph on symplectic geometry in physics, written jointly with Shlomo Sternberg in 1986. Family Victor Guillemin's uncle Ernst Guillemin was a Professor of Electrical Engineering and Computer Science at MIT, his younger brother Robert Charles Guillemin was a sidewalk artist, his brother-
https://en.wikipedia.org/wiki/Cold-fX
Cold-FX is a product derived from the roots of North American ginseng (Panax quinquefolius). It was formulated by Jacqueline Shan and originally manufactured by her company, Afexa Life Sciences (formerly called CV Technologies), which was acquired by Valeant Pharmaceuticals in 2011. There is little evidence to support that Cold-FX is effective against the common cold. All trials have been done by the manufacturer, and data reporting has been poor. According to Health Canada's Natural Health Product Directorate records, the company claims that it may "help reduce the frequency, severity and duration of cold and flu symptoms by boosting the immune system". COLD-FX is licensed by Health Canada as a Natural Health Product. The efficacy of this extract has been tested in clinical trials conducted in collaboration with researchers from Canadian universities. COLD-FX has been assessed in six published randomized, double-blind, placebo-controlled clinical trials and more than 20 published articles. Medical uses There is no evidence that Cold-FX is effective in those infected with the common cold. The effect of preventative use is not clear. When used preventively, it makes no difference to the rate of infections. It also appears to have no effect on how bad the infections are. There is tentative evidence that it may lessen the length of sickness when used preventively. Blumenthal from the American Botanical Council suggested that COLD-FX “represents a new class of herb-based therapeutic products” and is a “result of intensive scientific research on a natural herb”. Clinical studies involving more than 1600 patients showed that the active ingredient in COLD-FX can help reduce and prevent common cold and flu symptoms when taken daily. Adverse effects Individuals requiring anti-coagulant therapy such as warfarin should avoid use of American ginseng. It is not recommended for individuals with impaired liver or renal function. It is not recommended in those who are pregnant or breastfeedin
https://en.wikipedia.org/wiki/G.8261
ITU-T Recommendation G.8261/Y.1361 (formerly G.pactiming) "Timing and Synchronization Aspects in Packet Networks" specifies the upper limits of allowable network jitter and wander, the minimum requirements that network equipment at the TDM interfaces at the boundary of these packet networks must tolerate, and the minimum requirements for the synchronization function of network equipment. Usage Packet networks are inherently asynchronous. However, as the communications industry moves toward an all-IP core and edge network, there is a need to provide synchronization functionality to traditional TDM-based applications. This is essential for interworking with the PSTN. The goal is to provide a Primary Reference Clock (PRC)-traceable clock for TDM applications. External links ITU-T G.8261 recommendation publication
https://en.wikipedia.org/wiki/International%20Behavioural%20and%20Neural%20Genetics%20Society
The International Behavioural and Neural Genetics Society (IBANGS) is a learned society that was founded in 1996. The goal of IBANGS is to "promote and facilitate the growth of research in the field of neural behavioral genetics". Profile Mission The IBANGS mission statement is to promote the field of neurobehavioural genetics by: organizing annual meetings to promote excellence in research on behavioural and neural genetics, and publishing a scholarly journal, Genes, Brain and Behavior, in collaboration with Wiley-Blackwell. Awards Each year IBANGS recognizes top scientists in the field of neurobehavioral genetics with: the IBANGS Distinguished Investigator Award for distinguished lifetime contributions to behavioral neurogenetics, the IBANGS Young Scientist Award for promising young scientists, and Travel Awards to attend an IBANGS Annual Meeting for students, postdocs, and junior faculty, financed by a meeting grant from the National Institute on Alcohol Abuse and Alcoholism. A Distinguished Service Award for exceptional contributions to the field is given on a more irregular basis and has been awarded only three times, to Benson Ginsburg (2001), Wim Crusio (2011), and John C. Crabbe (2015). History IBANGS was founded in 1996 as the European Behavioural and Neural Genetics Society, with Hans-Peter Lipp as its founding president. The name and scope of EBANGS were changed to "International" at the first meeting of the society in Orléans, France in 1997. IBANGS is a founding member of the Federation of European Neuroscience Societies. The current president is Karla Kaun (2022-2025).
https://en.wikipedia.org/wiki/B%C3%A1rbara%20M.%20Brizuela
Bárbara M. Brizuela is an American mathematics educator, and an associate professor of education at Tufts University. Education and career Brizuela was born in the United States, though raised in Argentina and Venezuela. She has an Ed.D. from Harvard University, where she studied under Eleanor Duckworth. Prior to that, she received a Master of Arts, General Studies in Education from Tufts and Licenciada en Ciencias Pedagógicas and Licenciada en Psicopedagogía degrees from the Universidad de Belgrano. She was a Spencer Fellow at the Harvard Graduate School of Education from 1997 until 2000 and a Roy E. Larsen Fellow in 1996–1997. She is one of the leaders of the Tufts Math, Science, Technology and Engineering Education graduate research program. In 2008, she received a Fulbright Fellowship. Research Brizuela's main research focus is on mathematics education in early childhood and elementary school. She mainly studies children's learning of written mathematical representations as well as children's construction of algebraic understandings in a line of work called "Early Algebra". She is a member of the Early Algebra Project, an NSF-funded longitudinal study of the effects of introducing some algebraic concepts to children in elementary school, and was the Principal Investigator of a study created to follow the children of the Early Algebra study into middle and high school, also funded by the NSF. She is also involved in the Noyce Teacher Fellowship Program at Tufts and in the research effort surrounding Tufts's Poincaré Institute for Mathematics Education, an NSF MSP project. Books In 2004, her book Mathematical Development in Young Children: Exploring Notations was published. This book was later translated into Portuguese. In 2007, she published the book Bringing Out the Algebraic Character of Arithmetic: From Children's Ideas to Classroom Practice with her colleagues Analúcia Schliemann and David Carraher. This book was later translated into Spanish. She
https://en.wikipedia.org/wiki/Curriculum-based%20measurement
Curriculum-based measurement, or CBM, is also referred to as a general outcome measure (GOM) of a student's performance in either basic skills or content knowledge. Early history CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were: (a) easy to construct, (b) brief in administration and scoring, (c) technically adequate (with reliability and various types of validity evidence for use in making educational decisions), and (d) available in alternate forms to allow time series data to be collected on student progress. This focus on the three language arts areas eventually was expanded to include mathematics, though the technical research in this area continues to lag behind that published in the language arts areas. An even later development was the application of CBM to middle-secondary areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (with the maze), and Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning. Increasing importance Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to and relevant for understanding students' progress toward and achievement of state standards. Key feature Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which includes both instruction and curriculum. This is one of the most imp
https://en.wikipedia.org/wiki/Sign%20war
A sign war is a competition between two or more organizations to gain the best visibility, or simply to engage in friendly "one-upmanship". The goal may be to put up more signs than one's competitors, or it may be to put up wittier signs. Business sign wars Sign wars between local businesses may consist of good-spirited jabs at one another. For example, a sign war in Christiansburg, Virginia in 2021 started when a local music store challenged its neighboring shoe store to a sign war. The good-hearted "war" spread across town and attracted national attention. Political campaigns In politics, sign wars are competitions between opposing political campaigns at events and/or locations where campaign visibility is paramount to each side. During a sign war, campaign workers, both staffers and volunteers, seek to have a greater sign presence than their opposition. Sign wars may consist of tens of thousands of signs in standard sizes ranging from placards to 4'x8's and may include a wide variety of signs that have been improvised by campaigns and their volunteers. Sign wars as a campaign tactic Journalists frequently report on sign wars during the campaign season. Particularly for campaigns that are not large enough to have a regular stream of polling data for the local news media to report on, journalists will use other numbers-based metrics, such as the number of yard signs for each candidate in the district, to help gauge support for individual candidates. Candidates and campaign staff often stoke the fires of election sign wars to claim that their candidate has popular support among the voters in the district. Notable political sign wars A notable sign war occurs during the popular Shad Planking in Wakefield, Virginia. Every April, locals and politicians from all around the Commonwealth gather for some politicking, beer drinking, and fish eating. Nationally, in August 2007, Democratic presidential hopefuls John Edwards and Barack Obama each claimed victory fo
https://en.wikipedia.org/wiki/Computer%20Underground%20Digest
The Computer Underground Digest (CuD) was a weekly online newsletter on early Internet cultural, social, and legal issues published by Gordon Meyer and Jim Thomas from March 1990 to March 2000. History Meyer and Thomas were Criminal Justice professors at Northern Illinois University, and intended the newsletter to cover topical social and legal issues generated during the rise of telecommunications and the Internet. It existed primarily as an email mailing list and on USENET, though its archives were later provided on a website. The newsletter came to prominence when it published legal commentary and updates concerning the "hacker crackdowns" and federal indictments of Leonard Rose and Craig Neidorf of Phrack. The CuD published commentary from its membership on subjects including the legal and social implications of the growing Internet (and later the web), book reviews of topical publications, and many off-topic postings by its readership. Overtaken by the growth of online forums on the web, it ceased publication in March 2000. See also Phrack Cult of the Dead Cow
https://en.wikipedia.org/wiki/Flying%20primate%20hypothesis
In evolutionary biology, the flying primate hypothesis is that megabats, a subgroup of Chiroptera (also known as flying foxes), form an evolutionary sister group of primates. The hypothesis began with Carl Linnaeus in 1758, and was again advanced by J.D. Smith in 1980. It was proposed in its modern form by Australian neuroscientist Jack Pettigrew in 1986 after he discovered that the connections between the retina and the superior colliculus (a region of the midbrain) in the megabat Pteropus were organized in the same way found in primates, and purportedly different from all other mammals. This was followed up by a longer study published in 1989, in which the claim was supported by the analysis of many other brain and body characteristics. Pettigrew suggested that flying foxes, colugos, and primates were all descendants of the same group of early arboreal mammals. Megabat flight and colugo gliding could both be seen as locomotory adaptations to a life high above the ground. The flying primate hypothesis met resistance from many zoologists. Its biggest challenges were not centered on the argument that megabats and primates are evolutionarily related, which reflects earlier ideas (such as the grouping of primates, tree shrews, colugos, and bats under the same taxonomic group, the Superorder Archonta). Rather, many biologists resisted the implication that megabats and microbats (or echolocating bats) formed distinct branches of mammalian evolution, with flight having evolved twice. This implication arose from the fact that microbats do not resemble primates in any of the neural characteristics studied by Pettigrew, instead resembling primitive mammals such as Insectivora in these respects. The advanced brain characters demonstrated in Pteropus could not, therefore, be generalized to imply that all bats are similar to primates. More recently, the flying primate hypothesis was soundly rejected when scientists compared the DNA of bats to that of primates. The
https://en.wikipedia.org/wiki/Bubble%20point
In thermodynamics, the bubble point is the temperature (at a given pressure) where the first bubble of vapor is formed when heating a liquid consisting of two or more components. Given that vapor will probably have a different composition than the liquid, the bubble point (along with the dew point) at different compositions are useful data when designing distillation systems. For a single component the bubble point and the dew point are the same and are referred to as the boiling point. Calculating the bubble point At the bubble point, the following relationship holds: ∑ K_i x_i = 1, where K_i = y_i / x_i. K is the distribution coefficient or K factor, defined as the ratio of mole fraction in the vapor phase (y_i) to the mole fraction in the liquid phase (x_i) at equilibrium. When Raoult's law and Dalton's law hold for the mixture, the K factor is defined as the ratio of the vapor pressure to the total pressure of the system: K_i = P'_i / P. Given either of x_i or y_i and either the temperature or pressure of a two-component system, calculations can be performed to determine the unknown information. See also Phase diagram Azeotrope Dew point
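As a worked example, the bubble-point temperature can be found numerically by searching for the temperature at which ∑ K_i x_i = 1. The sketch below assumes Raoult's and Dalton's laws (so K_i = P'_i / P) and uses typical textbook Antoine constants for a benzene/toluene mixture; both the constants and the mixture are illustrative choices, not data from this article.

```python
# Bubble-point temperature of a benzene/toluene liquid at 1 atm, assuming
# Raoult's and Dalton's laws so that K_i = Psat_i / P. Antoine constants
# (log10 P in mmHg, T in degrees C) are typical textbook values.
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}

def psat(component, t_c):
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_c))  # saturation pressure in mmHg

def sum_ki_xi(x, t_c, p_total=760.0):
    # At the bubble point, sum_i K_i * x_i = 1 with K_i = Psat_i / P.
    return sum(x_i * psat(name, t_c) / p_total for name, x_i in x.items())

def bubble_point(x, lo=0.0, hi=200.0, tol=1e-6):
    # Bisection: sum K_i x_i increases with temperature and crosses 1.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if sum_ki_xi(x, mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

feed = {"benzene": 0.4, "toluene": 0.6}
print(f"bubble point = {bubble_point(feed):.1f} C")  # roughly 95 C
```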
https://en.wikipedia.org/wiki/Non-native%20speech%20database
A non-native speech database is a speech database of non-native pronunciations of English. Such databases are used in the development of: multilingual automatic speech recognition systems, text to speech systems, pronunciation trainers, and second language learning systems. List The actual table with information about the different databases is shown in Table 2. Legend In the table of non-native databases some abbreviations for language names are used. They are listed in Table 1. Table 2 gives the following information about each corpus: the name of the corpus, the institution where the corpus can be obtained (or at least where further information should be available), the language which was actually spoken by the speakers, the number of speakers, the native language of the speakers, the total number of non-native utterances the corpus contains, the duration in hours of the non-native part, the date of the first public reference to this corpus, some free text highlighting special aspects of this database, and a reference to another publication. The reference in the last field is in most cases to the paper which is especially devoted to describing this corpus by the original collectors. In some cases it was not possible to identify such a paper. In these cases, a paper that uses this corpus is referenced instead. Some entries are left blank and others are marked with unknown. The difference here is that blank entries refer to attributes where the value is just not known. Unknown entries, however, indicate that no information about this attribute is available in the database itself. As an example, in the Jupiter weather database no information about the origin of the speakers is given. Therefore, this data would be less useful for verifying accent detection or similar issues. Where possible, the name is a standard name of the corpus; for some of the smaller corpora, however, there was no established name and hence an identifier had to be created. In such cases, a combinat
https://en.wikipedia.org/wiki/Visible%20Light%20Photon%20Counter
A Visible Light Photon Counter (VLPC) is a photon counting photodetector based on impurity-band conduction in arsenic-doped silicon. They have high quantum efficiency and are able to detect single photons in the visible range of the electromagnetic spectrum. The ability to count the exact number of photons detected is extremely important for quantum key distribution. Rockwell International's Science Center had previously announced the "Solid-State Photomultiplier" (SSPM), a wide-band (0.4–28 µm) detector. In the late 1980s a collaboration – initially consisting of Rockwell and UCLA – began developing scintillating-fiber particle trackers for use at the Superconducting Super Collider, based on a dedicated variant of the SSPM that came to be known as the Visible Light Photon Counter. The operating principles are similar to APDs but based on impurity-band conduction. The devices are made from arsenic-doped silicon and have an impurity band 50 meV below the conduction band, resulting in a high gain at a reverse bias of only a few volts (e.g. 7 V). The narrow bandgap reduces gain dispersion, resulting in a uniform response to each photon, and hence the output pulse height is proportional to the number of incident photons. VLPCs must operate at cryogenic temperatures (6–10 K). They have a quantum efficiency of 85% at 565 nm and a temporal resolution of several nanoseconds. VLPCs have been used extensively in the central tracking detector of the D0 experiment, and for muon beam-cooling studies for a muon collider (MICE). They have also been evaluated for quantum information science.
https://en.wikipedia.org/wiki/Gilbert%20cell
In electronics, the Gilbert cell is a type of frequency mixer. It produces output signals proportional to the product of two input signals. Such circuits are widely used for frequency conversion in radio systems. The advantage of this circuit is that the output current is an accurate multiplication of the (differential) base currents of both inputs. As a mixer, its balanced operation cancels out many unwanted mixing products, resulting in a "cleaner" output. It is a generalized case of an early circuit first used by Howard Jones in 1963, invented independently and greatly augmented by Barrie Gilbert in 1967. It is a specific example of "translinear" design, a current-mode approach to analog circuit design. The specific property of this cell is that the differential output current is a precise algebraic product of its two differential analog current inputs. Function There is little difference between the Jones cell and the translinear multiplier in this topology. In both forms, two differential amplifier stages are formed by emitter-coupled transistor pairs (Q1/Q4, Q3/Q5) whose outputs are connected (currents summed) with opposite phases. The emitter junctions of these amplifier stages are fed by the collectors of a third differential pair (Q2/Q6). The output currents of Q2/Q6 become emitter currents for the differential amplifiers. Simplified, the output current of an individual transistor is given by iC = gm·vbe. Its transconductance gm is (at room temperature) about gm = IC/VT, with thermal voltage VT ≈ 26 mV. Combining these equations gives iC = (IC/VT)·vbe,lo. However, IC here is given by vbe,rf·gm,rf. Hence iC ≈ (gm,rf/VT)·vbe,lo·vbe,rf, which is a multiplication of vbe,lo and vbe,rf. Combining the two different stages' output currents yields four-quadrant operation. However, in the cells invented by Gilbert, shown in these figures, there are two additional diodes. This is a crucial difference because they generate the logarithm of the associated differential (X) input current so that the exponential characteristics of the following transistors result in an ideally
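The multiplying behaviour can be illustrated with a simple behavioral model. The sketch below uses the textbook large-signal result that an emitter-coupled pair steers its tail current as tanh(v/2VT), so the cell's differential output is proportional to the product of the two inputs for small signals; the tail current and input values are illustrative, and the model ignores the predistortion diodes discussed above.

```python
import math

VT = 0.026  # thermal voltage at room temperature, about 26 mV

# Idealized behavioral model: a BJT differential pair steers its tail
# current with a tanh characteristic; summing two such pairs driven in
# antiphase gives the differential output current
#   I_out = I_EE * tanh(v_lo / (2*VT)) * tanh(v_rf / (2*VT)),
# which reduces to a linear multiplier for small inputs (tanh x ~ x).
def gilbert_iout(v_lo, v_rf, i_ee=1e-3):
    return i_ee * math.tanh(v_lo / (2 * VT)) * math.tanh(v_rf / (2 * VT))

# Small-signal check: output tracks the product of the two inputs.
for v_lo, v_rf in [(0.002, 0.003), (0.005, -0.004)]:
    approx = 1e-3 * (v_lo * v_rf) / (2 * VT) ** 2
    print(f"model={gilbert_iout(v_lo, v_rf):.3e}  linear approx={approx:.3e}")
```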
https://en.wikipedia.org/wiki/Godwin%20Laboratory%2C%20University%20of%20Cambridge
The Godwin Laboratory is a research facility at the University of Cambridge. It was originally set up to investigate radiocarbon dating and its applications, and was one of the first laboratories to determine a radiocarbon calibration curve. The lab is named after the English scientist Harry Godwin. History With the late Professor Sir Nicholas Shackleton in charge, the focus of research shifted to marine isotope records, which document changes in the size of polar ice sheets and temperature changes. This research helped to establish the Milankovitch Theory as the most plausible explanation of glacial/interglacial changes over the past million years, and was subsequently extended to develop much more extensive geological timescales, covering the last 30 million years, on the basis of this hypothesis. Other areas researched by members of the laboratory include pollen records and tree rings as a proxy for past climate. The laboratory changed principal allegiance from the Department of Plant Sciences to the Department of Earth Sciences around 1995. In 2005, after Nick Shackleton's retirement, the laboratory was incorporated into the building housing the Department of Earth Sciences, where it continues to operate. It is part of the inter-departmental Godwin Institute for Quaternary Research, a loose collection of Cambridge University research facilities and workers focused on research particularly addressing the history of the last 1.8 million years.
https://en.wikipedia.org/wiki/Vack%C3%A1%C5%99%20oscillator
A Vackář oscillator is a wide-range variable frequency oscillator (VFO) which has a near constant output amplitude over its frequency range. It is similar to a Colpitts oscillator or a Clapp oscillator, but those designs do not have a constant output amplitude when tuned. Invention In 1949, the Czech engineer Jiří Vackář published a paper on the design of stable variable-frequency oscillators (VFOs). The paper discussed many stability issues such as variations with temperature, atmospheric pressure, component aging, and microphonics. For example, Vackář describes making inductors by first heating the wire and then winding the wire on a stable ceramic coil form. The resulting inductor has a temperature coefficient of 6 to 8 parts per million per degree Celsius. Vackář points out that common air variable capacitors have a stability of 2 parts per thousand; to build a VFO with a stability of 50 parts per million requires that the variable capacitor is only 1/40 of the tuning capacity (.002/40 = 50ppm). The stability requirement also implies the variable capacitor may only tune a limited range of 1:1.025. Larger tuning ranges require switching stable fixed capacitors or inductors. Vackář was interested in high stability designs, so he wanted the highest Q for his circuits. It is possible to make wide-range VFOs with stable output amplitude by heavily damping (loading) the tuned circuit, but that tactic substantially reduces the Q and the frequency stability. Vackář was also concerned with the amplitude variations of the variable-frequency oscillator as it is tuned through its range. Ideally, an oscillator's loop gain will be unity according to the Barkhausen stability criterion. In practice, the loop gain is adjusted to be a little more than one to get the oscillation started; as the amplitude increases, some gain compression then causes the loop gain to average out over a complete cycle to unity. If the VFO frequency is then adjusted, the gain may increase substanti
https://en.wikipedia.org/wiki/National%20Replacement%20Character%20Set
The National Replacement Character Set (NRCS) was a feature supported by later models of Digital's (DEC) computer terminal systems, starting with the VT200 series in 1983. NRCS allowed individual characters from one character set to be replaced by one from another set, allowing the construction of different character sets on the fly. It was used to customize the character set to different local languages, without having to change the terminal's ROM for different countries, or alternately, include many different sets in a larger ROM. Many third-party terminals and terminal emulators supporting VT200 codes also supported NRCS. Description ASCII is a 7-bit standard, allowing a total of 128 characters in the character set. Some of these are reserved as control characters, leaving 96 printable characters. This set of 96 printable characters includes upper and lower case letters, numbers, and basic math and punctuation. ASCII does not have enough room to include other common characters such as multi-national currency symbols or the various accented letters common in European languages. This led to a number of country-specific varieties of 7-bit ASCII with certain characters replaced. For instance, the UK standard simply replaced ASCII's hash mark, #, with the pound symbol, £. This normally led to different models of a given computer terminal or printer, differing only in the glyphs stored in ROM. Some of these were standardized as part of ISO/IEC 646. On an 8-bit clean serial link, ASCII can be expanded to support a total of 256 characters. In this case, instead of replacing the characters in the original printable characters range from 32 to 127, new characters are added in the 128 to 255 range. This offers enough room for a single character set to include all the variety of characters used in North America and western Europe. This capability led to the introduction of the ISO/IEC 8859-1 standard character set containing 191 characters of what it calls the "Latin alphab
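Conceptually, an NRCS set is just a small substitution table applied on top of ASCII when characters are rendered. The sketch below shows the idea with the UK set's replacement of # by £; the function and table names are invented for the example and do not correspond to anything in the VT200 firmware.

```python
# Sketch of NRCS-style substitution: the 7-bit code 0x23 ('#') is shown
# as the pound symbol while every other code keeps its ASCII glyph.
UK_NRCS = {0x23: "£"}

def render(byte_stream, replacements):
    """Map 7-bit codes to display glyphs, applying national replacements."""
    return "".join(replacements.get(b, chr(b)) for b in byte_stream)

print(render(b"Price: #9.99", UK_NRCS))  # -> "Price: £9.99"
print(render(b"Price: #9.99", {}))       # plain ASCII rendering
```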
https://en.wikipedia.org/wiki/SMS%20%28hydrology%20software%29
SMS (Surface-water Modeling System) is a complete program for building and simulating surface water models from Aquaveo. It features 1D and 2D modeling and a unique conceptual model approach. Currently supported models include ADCIRC, CMS-FLOW2D, FESWMS, TABS, TUFLOW, BOUSS-2D, CGWAVE, STWAVE, CMS-WAVE (WABED), GENESIS, PTM, and WAM. Version 9.2 introduced the use of XMDF (eXtensible Model Data Format), which is a compatible extension of HDF5. XMDF files are smaller and allow faster access times than ASCII files. History SMS was initially developed by the Engineering Computer Graphics Laboratory at Brigham Young University (renamed the Environmental Modeling Research Laboratory, or EMRL, in September 1998) in the late 1980s on Unix workstations. The development of SMS was funded primarily by the United States Army Corps of Engineers, and it is still known as the Department of Defense Surface-water Modeling System or DoD SMS. It was later ported to Windows platforms in the mid-1990s, and support for HP-UX, IRIX, OSF/1, and Solaris platforms was discontinued. In April 2007, the main software development team at EMRL entered private enterprise as Aquaveo LLC, which continues to develop SMS and other software products, such as WMS (Watershed Modeling System) and GMS (Groundwater Modeling System). Examples of SMS Implementation SMS modeling was used to "determine flooded areas in case of failure or revision of a weir in combination with a coincidental 100-year flood event" (Gerstner, Belzner, and Thorenz, p. 975). Furthermore, "concerning the water level calculations in case of failure of a weir, the Bavarian Environmental Agency provided the Federal Waterways Engineering and Research Institute with those two-dimensional depth-averaged hydrodynamic models, which are covering the whole Bavarian part of the river Main. The models were created with the software Surface-Modeling System (SMS) of Aquaveo LLC" (Gerstner, Belzner, and Thorenz, 976). This article "describes the m
https://en.wikipedia.org/wiki/Nuclebr%C3%A1s%20Equipamentos%20Pesados
Nuclebrás Equipamentos Pesados S.A. (NUCLEP) is a Brazilian state-owned nuclear company specialized in nuclear engineering and heavy equipment for the nuclear, defense, and oil and gas industries, founded on 12 April 1975. See also Goiânia accident (Nuclebrás aided in the response effort) National Nuclear Energy Commission
https://en.wikipedia.org/wiki/Factorion
In number theory, a factorion in a given number base b is a natural number that equals the sum of the factorials of its digits. The name factorion was coined by the author Clifford A. Pickover. Definition Let n be a natural number. For a base b > 1, we define the sum of the factorials of the digits of n, SFD_b(n), to be the following: SFD_b(n) = d_0! + d_1! + ... + d_{k−1}!, where k is the number of digits in the number in base b, d_i! is the factorial of d_i, and d_i is the value of the ith digit of the number. A natural number n is a b-factorion if it is a fixed point for SFD_b, i.e. if SFD_b(n) = n. 1 and 2 are fixed points for all bases b, and thus are trivial factorions for all b, and all other factorions are nontrivial factorions. For example, the number 145 in base b = 10 is a factorion because 145 = 1! + 4! + 5!. For b = 2, the sum of the factorials of the digits is simply the number of digits in the base 2 representation, since 0! = 1! = 1. A natural number n is a sociable factorion if it is a periodic point for SFD_b, where SFD_b^k(n) = n for a positive integer k, and forms a cycle of period k. A factorion is a sociable factorion with k = 1, and an amicable factorion is a sociable factorion with k = 2. All natural numbers n are preperiodic points for SFD_b, regardless of the base. This is because all natural numbers of base b with k digits satisfy b^{k−1} ≤ n, while SFD_b(n) ≤ (b−1)!·k; since b^{k−1} eventually exceeds (b−1)!·k, every sufficiently large n satisfies SFD_b(n) < n, so iterating SFD_b eventually drops below a fixed bound. There are finitely many natural numbers less than this bound, so the number is guaranteed to reach a periodic point or a fixed point, making it a preperiodic point. For b = 2, SFD_2(n) equals the number of digits of n, once again making every n a preperiodic point. This means also that there are a finite number of factorions and cycles for any given base b. The number of iterations needed for SFD_b to reach a fixed point is the function's persistence of n, and undefined if it never reaches a fixed point. Factorions for b = (k − 1)! Let k ≥ 4 be a positive integer and the number base b = (k − 1)!. Then: k·b + 1 is a factorion for SFD_b for all such k (its digits are k and 1, and k! + 1! = k·(k−1)! + 1), and k·b + 2 is a factorion for SFD_b for all such k (since k! + 2! = k·b + 2). b = k! − k + 1 Let k be a positive integer and the number base b = k! − k + 1. Then: b + k is a fact
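The definition translates directly into code. The sketch below searches for base-10 factorions; the search bound is an illustrative choice that happens to cover the four known base-10 factorions (an exhaustive search would extend to 7·9! = 2,540,160, above which SFD_10(n) < n always holds).

```python
from math import factorial

def sfd(n, b=10):
    """Sum of the factorials of the base-b digits of n."""
    total = 0
    while n:
        n, d = divmod(n, b)
        total += factorial(d)
    return total

# Fixed points of SFD_10 below the chosen bound; an exhaustive search
# would use 2_540_161 as the upper limit.
print([n for n in range(1, 100_000) if sfd(n) == n])  # [1, 2, 145, 40585]
```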
https://en.wikipedia.org/wiki/Composition%20of%20the%20human%20body
Body composition may be analyzed in various ways. This can be done in terms of the chemical elements present, or by molecular structure, e.g., water, protein, fats (or lipids), hydroxylapatite (in bones), carbohydrates (such as glycogen and glucose) and DNA. In terms of tissue type, the body may be analyzed into water, fat, connective tissue, muscle, bone, etc. In terms of cell type, the body contains hundreds of different types of cells, but notably, the largest number of cells contained in a human body (though not the largest mass of cells) are not human cells, but bacteria residing in the normal human gastrointestinal tract. Elements About 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life. The remaining elements are trace elements, of which more than a dozen are thought on the basis of good evidence to be necessary for life. All of the mass of the trace elements put together (less than 10 grams for a human body) does not add up to the body mass of magnesium, the least common of the 11 non-trace elements. Other elements Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple common contaminants without function (examples: caesium, titanium), while many others are thought to be active toxins, depending on amount (cadmium, mercury, lead, radioactive elements). In humans, arsenic is toxic, and its levels in foods and dietary supplements are closely monitored to reduce or eliminate its intake. Some elements (silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. Bromine is used abundantly by some (though not all) lower organisms, and opportunistically in eosinophils in humans. One study has indicated bromine to be necessary to collagen IV synthe
https://en.wikipedia.org/wiki/All%20fourths%20tuning
Among alternative tunings for the guitar, all-fourths tuning is a regular tuning. In contrast, the standard tuning has one irregularity—a major third between the third and second strings—while having perfect fourths between the other successive strings. The standard tuning's irregular major third is replaced by a perfect fourth in all-fourths tuning, which has the open notes E2-A2-D3-G3-C4-F4. Among regular tunings, this all-fourths tuning best approximates the standard tuning. In all guitar tunings, the higher-octave version of a chord can be found by translating a chord twelve frets higher along the fretboard. In every regular tuning, for example in all-fourths tuning, chords and intervals can also be moved diagonally. For all-fourths tuning, all twelve major chords (in the first or open positions) are generated by two chords, the open F major chord and the D major chord. The regularity of chord patterns reduces the number of finger positions that need to be memorized. Jazz musician Stanley Jordan plays guitar in all-fourths tuning; he has stated that all-fourths tuning "simplifies the fingerboard, making it logical". Among all regular tunings, all-fourths tuning E-A-D-G-C-F is the best approximation of standard tuning, which is more popular. All-fourths tuning is traditionally used for the bass guitar; it is also used for the bajo sexto. Allan Holdsworth stated that if he were to learn the guitar again he would tune it in all-fourths. Relation with all-fifths tuning All-fourths tuning is closely related to all-fifths tuning. All-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones). The perfect-fifth and perfect-fourth intervals are inversions of one another, and the chords of all-fourths and all-fifths are paired as inverted chords. Consequently, chord charts for all-fifths tunings may be used for left-handed all-fourths tuning. See also Scordatura, alternative tunings of
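Because each string is a perfect fourth (five semitones) above the previous one, the open notes can be generated mechanically. A small sketch using pitch-class arithmetic only (ignoring enharmonic spelling):

```python
# Generate the open strings of all-fourths tuning by stepping a perfect
# fourth (5 semitones) up from E2. Octaves increment at each C.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def add_semitones(note: str, octave: int, steps: int) -> tuple[str, int]:
    idx = NOTES.index(note) + steps
    return NOTES[idx % 12], octave + idx // 12

note, octave = "E", 2
strings = [(note, octave)]
for _ in range(5):
    note, octave = add_semitones(note, octave, 5)
    strings.append((note, octave))

print(strings)  # [('E', 2), ('A', 2), ('D', 3), ('G', 3), ('C', 4), ('F', 4)]
```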
https://en.wikipedia.org/wiki/Protein%20mass%20spectrometry
Protein mass spectrometry refers to the application of mass spectrometry to the study of proteins. Mass spectrometry is an important method for the accurate mass determination and characterization of proteins, and a variety of methods and instrumentations have been developed for its many uses. Its applications include the identification of proteins and their post-translational modifications, the elucidation of protein complexes, their subunits and functional interactions, as well as the global measurement of proteins in proteomics. It can also be used to localize proteins to the various organelles, and determine the interactions between different proteins as well as with membrane lipids. The two primary methods used for the ionization of proteins in mass spectrometry are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). These ionization techniques are used in conjunction with mass analyzers such as tandem mass spectrometry. In general, the proteins are analyzed either in a "top-down" approach in which proteins are analyzed intact, or a "bottom-up" approach in which proteins are first digested into fragments. An intermediate "middle-down" approach in which larger peptide fragments are analyzed may also sometimes be used. History The application of mass spectrometry to study proteins became popularized in the 1980s after the development of MALDI and ESI. These ionization techniques have played a significant role in the characterization of proteins. The term matrix-assisted laser desorption ionization (MALDI) was coined in the late 1980s by Franz Hillenkamp and Michael Karas. Hillenkamp, Karas and their fellow researchers were able to ionize the amino acid alanine by mixing it with the amino acid tryptophan and irradiating it with a pulsed 266 nm laser. Though important, the breakthrough did not come until 1987. In 1987, Koichi Tanaka used the "ultra fine metal plus liquid matrix method" and ionized biomolecules the size of 34,472 Da protein carb
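As an illustration of how ESI data yield a protein's mass (a textbook relation, not a method specific to this article): two adjacent multiply-protonated peaks at charges z and z+1 determine the neutral mass M, since m/z = (M + z·m_p)/z. The numbers below are illustrative values near those of a myoglobin-sized protein, not a measured spectrum:

```python
# Charge-state deconvolution for ESI: solve the two-peak system
# m1 = (M + z*mp)/z and m2 = (M + (z+1)*mp)/(z+1) for z and M.
M_PROTON = 1.00728  # proton mass in Da

def neutral_mass(mz_low_charge: float, mz_high_charge: float) -> tuple[int, float]:
    """mz_low_charge is the peak at charge z, mz_high_charge at charge z+1."""
    z = round((mz_high_charge - M_PROTON) / (mz_low_charge - mz_high_charge))
    return z, z * (mz_low_charge - M_PROTON)

# Illustrative peaks for M ~ 16951 Da: z=15 near 1131.07, z=16 near 1060.44.
print(neutral_mass(1131.07, 1060.44))  # (15, ~16950.9)
```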
https://en.wikipedia.org/wiki/Tutte%E2%80%93Berge%20formula
In the mathematical discipline of graph theory the Tutte–Berge formula is a characterization of the size of a maximum matching in a graph. It is a generalization of Tutte's theorem on perfect matchings, and is named after W. T. Tutte (who proved Tutte's theorem) and Claude Berge (who proved its generalization). Statement The theorem states that the size of a maximum matching of a graph \(G = (V, E)\) equals \(\frac{1}{2} \min_{U \subseteq V} \left( |U| - \operatorname{odd}(G - U) + |V| \right)\), where \(\operatorname{odd}(G - U)\) counts how many of the connected components of the graph \(G - U\) have an odd number of vertices. Equivalently, the number of unmatched vertices in a maximum matching equals \(\max_{U \subseteq V} \left( \operatorname{odd}(G - U) - |U| \right)\). Explanation Intuitively, for any subset \(U\) of the vertices, the only way to completely cover an odd component of \(G - U\) by a matching is for one of the matched edges covering the component to be incident to \(U\). If, instead, some odd component had no matched edge connecting it to \(U\), then the part of the matching that covered the component would cover its vertices in pairs, but since the component has an odd number of vertices it would necessarily include at least one leftover and unmatched vertex. Therefore, if some choice of \(U\) has few vertices but its removal creates a large number of odd components, then there will be many unmatched vertices, implying that the matching itself will be small. This reasoning can be made precise by stating that the size of a maximum matching is at most equal to the value given by the Tutte–Berge formula. The characterization of Tutte and Berge proves that this is the only obstacle to creating a large matching: the size of the optimal matching will be determined by the subset \(U\) with the biggest difference between its numbers of odd components outside \(U\) and vertices inside \(U\). That is, there always exists a subset \(U\) such that deleting \(U\) creates the correct number of odd components needed to make the formula true. One way to find such a set \(U\) is to choose any maximum matching \(M\), and to let \(X\) be the set of vertices that are either unmatched in \(M\), or that can be reached from an unmatched vertex by
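The statement can be checked mechanically on small graphs. A brute-force sketch (both searches are exponential in the input, so this is for illustration only):

```python
# Brute-force check of the Tutte-Berge formula on a tiny graph.
from itertools import combinations
from collections import Counter

def odd_components(vertices, edges):
    """Count connected components with an odd number of vertices."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    sizes = Counter(find(v) for v in vertices)
    return sum(1 for s in sizes.values() if s % 2)

def max_matching_size(edges):
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            covered = [v for e in subset for v in e]
            if len(covered) == len(set(covered)):  # edges pairwise disjoint
                return k
    return 0

def tutte_berge_value(vertices, edges):
    n, best = len(vertices), len(vertices)
    for k in range(n + 1):
        for u in combinations(sorted(vertices), k):
            rest = set(vertices) - set(u)
            sub = [e for e in edges if e[0] in rest and e[1] in rest]
            best = min(best, (k - odd_components(rest, sub) + n) // 2)
    return best

# A path on 5 vertices: the maximum matching has size 2.
V, E = {1, 2, 3, 4, 5}, [(1, 2), (2, 3), (3, 4), (4, 5)]
print(max_matching_size(E), tutte_berge_value(V, E))  # 2 2
```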
https://en.wikipedia.org/wiki/Ion%20cyclotron%20resonance
Ion cyclotron resonance is a phenomenon related to the movement of ions in a magnetic field. It is used for accelerating ions in a cyclotron, and for measuring the masses of an ionized analyte in mass spectrometry, particularly with Fourier transform ion cyclotron resonance mass spectrometers. It can also be used to follow the kinetics of chemical reactions in a dilute gas mixture, provided these involve charged species. Definition of the resonant frequency An ion in a static and uniform magnetic field will move in a circle due to the Lorentz force. The angular frequency of this cyclotron motion for a given magnetic field strength B is given by \(\omega = 2\pi f = \frac{zeB}{m}\), where z is the number of positive or negative charges of the ion, e is the elementary charge and m is the mass of the ion. An electric excitation signal having a frequency f will therefore resonate with ions having a mass-to-charge ratio m/z given by \(\frac{m}{z} = \frac{eB}{2\pi f}\). The circular motion may be superimposed with a uniform axial motion, resulting in a helix, or with a uniform motion perpendicular to the field (e.g., in the presence of an electrical or gravitational field) resulting in a cycloid. Ion cyclotron resonance heating Ion cyclotron resonance heating (or ICRH) is a technique in which electromagnetic waves with frequencies corresponding to the ion cyclotron frequency are used to heat up a plasma. The ions in the plasma absorb the electromagnetic radiation and, as a result, increase in kinetic energy. This technique is commonly used in the heating of tokamak plasmas. In the solar wind On March 8, 2013, NASA released an article according to which ion cyclotron waves were identified by its solar-probe spacecraft WIND as the main cause of the heating of the solar wind as it rises from the Sun's surface. Before this discovery, it was unclear why the solar wind particles would heat up instead of cooling down when speeding away from the Sun's surface. See also Cyclotron resonance Electron cyclotron resonance
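A quick numeric check of the resonance relation f = zeB/(2πm), with illustrative values for a singly charged 1000 Da ion in a 7 T magnet:

```python
# Cyclotron frequency f = z*e*B / (2*pi*m) for an ion of given mass in Da.
import math

E_CHARGE = 1.602176634e-19  # elementary charge in C
AMU = 1.66053906660e-27     # atomic mass unit in kg

def cyclotron_frequency_hz(mass_da: float, charge: int, b_tesla: float) -> float:
    return charge * E_CHARGE * b_tesla / (2 * math.pi * mass_da * AMU)

# A singly charged 1000 Da ion in a 7 T FT-ICR magnet:
print(f"{cyclotron_frequency_hz(1000.0, 1, 7.0):.0f} Hz")  # ~107 kHz
```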
https://en.wikipedia.org/wiki/Distribution%20amplifier
In electronics, a distribution amplifier, or simply distribution amp or DA, is a device that accepts a single input signal and provides this same signal to multiple isolated outputs. These devices allow a signal to be distributed to multiple destinations without ground loops or signal degradation. They are used for a number of common engineering tasks, including multiple amplification, cable television, splitting monitor and front of house mixes, and "tapping" a signal prior to sending it through effects units to preserve a "dry" signal for later experimentation. Audio distribution amplifier An audio distribution amplifier, also known as a press feed, pool feed, media feed, press box, or ADA, takes a single audio feed (usually a line input, though it may be a microphone input) and outputs multiple line or microphone outputs. This can be done using a passive feed, where the signal is split among the outputs, or as an active feed where the outputs are amplified. The primary use of the audio distribution amplifier is to share a single audio feed with multiple members of the press pool, hence the names press feed, pool feed and media feed. Video distribution amplifier A video distribution amplifier (also known as a distribution amp or VDA) takes a video signal as an input, amplifies it, and outputs the amplified video signal to two or more outputs. It is primarily used to supply a single video signal to multiple pieces of video equipment. It adjusts the amplitude of a video signal to compensate for loss of signal in a video distribution system. Extending the distance of the video signal is the main purpose of the VDA. There are VDAs built for all video formats: NTSC, ATSC, QAM16, QAM32, QAM64, composite video and component video. Their construction and capabilities can be simple: accept an input signal, amplify it, then output it. Others can be more sophisticated and allow remote control from a control station, allow adjustment of the gain, equalization, and provide st
https://en.wikipedia.org/wiki/Development%20of%20Spore
Spore is a video game developed by Maxis and designed by Will Wright, released in September 2008. The game has drawn wide attention for its ability to simulate the development of a species on a galactic scope, using its innovation of user-guided evolution via the use of procedural generation for many of the components of the game, providing vast scope and open-ended gameplay. Spore is a god game. The player molds and guides a species across many generations, growing it from a single-celled organism into a more complex animal. Eventually, the species becomes sentient. The player then begins molding and guiding this species' society, developing it into a space-faring civilization, at which point they can explore the galaxy in a space ship. Spore's main innovation is the use of procedural generation for many of the components of the game, providing vast scope and open-endedness. Wright said, "I didn't want to make players feel like Luke Skywalker or Frodo Baggins. I wanted them to be like George Lucas or J. R. R. Tolkien." During the 2007 Technology Entertainment Design (TED) conference, Wright added that he wanted to create a "toy" for kids to inspire long-term thinking, stating, "I think toys can change the world." History and development Spore was originally a working title, suggested by Maxis developer Ocean Quigley, for the game which was first referred to by the general public as SimEverything. Even though SimEverything was the first-choice name for Wright, the title Spore stuck. Wright added that it also freed him from the preconceptions another Sim title would have brought, saying "...Not putting 'Sim' in front of it was very refreshing to me. It feels like it wants to be breaking out into a completely different thing than what Sim was." Wright was inspired by the Drake equation and the 1977 film Powers of Ten when developing Spore. Spore's development began in 2000, around the time that development began for The Sims Online. The earliest version was inspired by
https://en.wikipedia.org/wiki/Giacinto%20Morera
Giacinto Morera (18 July 1856 – 8 February 1909) was an Italian engineer and mathematician. He is known for Morera's theorem in the theory of functions of a complex variable and for his work in the theory of linear elasticity. Biography Life He was born in Novara on 18 July 1856, the son of Giacomo Morera and Vittoria Unico. According to , his family was a wealthy one, his father being a rich merchant. This circumstance eased his studies after the laurea; however, he was an extraordinarily hard worker and used this capacity widely in his research. After studying in Turin he went to Pavia, Pisa and Leipzig; he then went back to Pavia for a brief period in 1885, and finally went to Genova in 1886, living there for the next 15 years. While in Genova he married his fellow citizen Cesira Faà. From 1901 until his death he worked in Turin; he died of pneumonia on 8 February 1909. Education and academic career He earned the laurea in engineering in 1878 and then, in 1879, the laurea in mathematics, both awarded by the Politecnico di Torino. According to , the title of his thesis in the mathematical sciences was: "Sul moto di un punto attratto da due centri fissi colla legge di Newton". In Turin he attended the courses held by Enrico d'Ovidio, Angelo Genocchi and particularly the ones held by Francesco Siacci; later in his life, Morera acknowledged Siacci as his mentor in scientific research and life. After graduating, he followed several advanced courses: he studied in Pavia from 1881 to 1882 under Eugenio Beltrami, Eugenio Bertini and Felice Casorati. In 1883 he was in Pisa under Enrico Betti, Riccardo de Paolis and Ulisse Dini; a year later, he was in Leipzig under Felix Klein, Adolph Mayer and Carl Neumann. In 1885 he went to Berlin to follow the lessons of Hermann von Helmholtz, Gustav Kirchhoff, Leopold Kronecker and Karl Weierstrass at the local university; later in the same year, he went back to Italy, briefly working at the Univ
https://en.wikipedia.org/wiki/Affect%20display
Affect displays are the verbal and non-verbal displays of affect (emotion). These displays can be through facial expressions, gestures and body language, volume and tone of voice, laughing, crying, etc. Affect displays can be altered or faked so one may appear one way when feeling another (e.g., smiling when sad). Affect can be conscious or non-conscious and can be discreet or obvious. The display of positive emotions, such as smiling, laughing, etc., is termed "positive affect", while the displays of more negative emotions, such as crying and tense gestures, are termed "negative affect". Affect is important in psychology as well as in communication, mostly when it comes to interpersonal communication and non-verbal communication. In both psychology and communication, there are a multitude of theories that explain affect and its impact on humans and quality of life. Theoretical perspective Affect can be taken to indicate an instinctual reaction to stimulation occurring before the typical cognitive processes considered necessary for the formation of a more complex emotion. Robert B. Zajonc asserts that this reaction to stimuli is primary for human beings and is the dominant reaction for lower organisms. Zajonc suggests affective reactions can occur without extensive perceptual and cognitive encoding, and can be made sooner and with greater confidence than cognitive judgments. Lazarus, on the other hand, considers affect to be post-cognitive. That is, affect is elicited only after a certain amount of cognitive processing of information has been accomplished. In this view, an affective reaction, such as liking, disliking, evaluation, or the experience of pleasure or displeasure, is based on a prior cognitive process in which a variety of content discriminations are made and features are identified, examined for their value, and weighted for their contributions. A divergence from a narrow reinforcement model for emotion allows for other perspectives on
https://en.wikipedia.org/wiki/Tephritid%20Workers%20Database
The Tephritid Workers Database is a web-based database for sharing information on tephritid fruit flies. Because these species are one of the most economically important groups of insect species that threaten fruit and vegetable production and trade worldwide, a tremendous amount of information is made available each year: new technologies developed, new information on their biology and ecology gathered, new control methods made available, new species identified, new outbreaks recorded, and new operational control programmes launched. The TWD allows workers to keep up to date on the most recent developments and provides an easily accessible and always available resource. History A group of scientists involved in tephritid fruit fly research and management launched the Tephritid Workers Database in May 2004, with the support of the Insect Pest Control Section of the Joint FAO/IAEA Centre. The Tephritid Workers Database is self-maintained by the participants and its development depends on the active contribution of the members. The TWD now has more than 1,000 members from more than 100 countries and is sponsoring or hosting websites of other regional fruit fly working groups: The Tephritid Workers of Europe Africa and the Middle East (TEAM) The Tephritid Workers of the Western Hemisphere (TWWH) The Tephritid Workers of Asia Australia and Oceania (TAAO) Fruit Fly News In the past, an information service for tephritid fruit fly workers called FRUIT FLY NEWS (FFN) was issued annually under the auspices of the International Biological Program and then under the International Organisation of Biological Control (IOBC). This newsletter was interrupted in 1992 and then resumed in an electronic format in 2009. The first issues tell the story of the creation of FFN and the Working Group on Fruit Flies (WGFF). International Biological Program (IBP) Fruit Fly News n°1 (1972) Fruit Fly News n°2 (1973) IBP Fruit Fly News n°3 (1974) IOBC/WPRS WG R
https://en.wikipedia.org/wiki/Information%20repository
In information technology, an information repository or simply a repository is "a central place in which an aggregation of data is kept and maintained in an organized way, usually in computer storage." It "may be just the aggregation of data itself into some accessible place of storage or it may also imply some ability to selectively extract data." Universal digital library The concept of a universal digital library was described as "within reach" by a 2012 European Union Copyright Directive which described Google's attempts to "mass-digitize" what are termed "orphan works" (i.e. out-of-print copyrighted works). The U.S. Copyright Office and the European Union Copyright law have been working on this. Google has reached agreements in France which "lets the publisher choose which works can be scanned or sold." By contrast, Google has been seeking in the USA a "free to digitize and sell any works unless the copyright holders opted out" deal, so far unsuccessfully. Information repository Attempts to develop what was called an information repository have been underway for decades: In 1989, IBM tried to have OfficeVision combine mainframes and PCs to enable "an information repository." In 2003, Microsoft introduced OneNote as an extension to Microsoft Office 2003; it would support "a personal information repository." In 1996, a library founded in 1898 obtained additional funding to expand its mission and become a major "local resource center and regional information repository." The New York Times described it as "the second largest in the New York City region, second only to the New York Public Library on Fifth Avenue." Its services include "a computer information center devoted to outside-item requests." Federated information repository A federated information repository is an easy way to deploy a secondary tier of data storage that can comprise multiple, networked data storage technologies running on diverse operating systems, where data that no l
https://en.wikipedia.org/wiki/Ounce%20Labs
Ounce Labs (an IBM company) is a Waltham, Massachusetts-based security software vendor. The company was founded in 2002 and created a software analysis product that analyzes source code to identify and remove security vulnerabilities. The security software looks for a range of vulnerabilities that leave an application open to attack. Customers have included GMAC, Lockheed Martin, and the U.S. Navy. On July 28, 2009, Ounce was acquired by IBM, for an undisclosed sum, with the intention of integrating it into IBM's Rational Software business. Platform support Programming languages supported by Ounce's security scan include ASP.NET, C, C++, C# and other .NET languages, Java, JSP, VB.NET, and classic ASP; supported platforms include Windows, Solaris, and Linux.
https://en.wikipedia.org/wiki/Planetary%20nebula%20luminosity%20function
Planetary nebula luminosity function (PNLF) is a secondary distance indicator used in astronomy. It makes use of the [O III] λ5007 forbidden line found in all planetary nebulae (PNe) which are members of the old stellar populations (Population II). It can be used to determine distances to both spiral and elliptical galaxies despite their completely different stellar populations and is part of the Extragalactic Distance Scale. Procedure The distance estimate to a galaxy using the PNLF requires discovery of objects in the target galaxy that are visible at λ5007 but not when the entire spectrum is considered. These points are candidate PNe; however, there are three other types of objects that would also exhibit such an emission line that must be filtered out: HII regions, supernova remnants, and Lyα galaxies. After the PNe are determined, to estimate a distance one must measure their monochromatic [O III] λ5007 luminosity. What remains is a statistical sample of PNe. The observed luminosity function is then fitted to some standard law. Finally, one must estimate the foreground interstellar extinction. The two sources of extinction are from within the Milky Way and the internal extinction of the target galaxy. The first is well known and can be taken from sources such as reddening maps computed from H I measurements and galaxy counts or from IRAS and DIRBE satellite experiments. The latter type of extinction occurs only in target galaxies which are either late-type spiral or irregular. However, this extinction is difficult to measure. In the Milky Way, the scale height of PNe is much bigger than that of the dust. Observational data and models support that this holds true for other galaxies, that the bright edge of the PNLF is primarily due to PNe in front of the dust layer. The data and models support an internal extinction of less than 0.05 apparent magnitude for a galaxy's PNe. Physics behind process The PNLF method is unbiased by metallicity. This i
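The "standard law" usually fitted is the truncated exponential of Ciardullo et al., N(M) ∝ e^{0.307M}(1 − e^{3(M* − M)}); quoting that form here is our addition from the wider literature, with M* ≈ −4.47 the absolute magnitude of the bright-end cutoff in [O III] λ5007:

```python
# Sketch of the standard PNLF fitting law (Ciardullo et al. form, quoted
# from the literature as an assumption); M_STAR is the bright-end cutoff.
import math

M_STAR = -4.47  # absolute magnitude of the [O III] 5007 cutoff (assumed)

def pnlf(M: float) -> float:
    """Relative number of PNe at absolute magnitude M (valid for M > M_STAR)."""
    return math.exp(0.307 * M) * (1.0 - math.exp(3.0 * (M_STAR - M)))

for M in (-4.4, -4.0, -3.0, -2.0):
    print(f"M = {M:5.1f}  N ~ {pnlf(M):.3f}")
```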
https://en.wikipedia.org/wiki/Plane%20wave%20expansion%20method
Plane wave expansion method (PWE) refers to a computational technique in electromagnetics to solve Maxwell's equations by formulating an eigenvalue problem out of the equation. This method is popular among the photonic crystal community as a method of solving for the band structure (dispersion relation) of specific photonic crystal geometries. PWE is traceable to the analytical formulations, and is useful in calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in a time-harmonic form, with non-dispersive media (a reformulation of the method named Inverse dispersion allows frequency-dependent refractive indices). Principles Plane waves are solutions to the homogeneous Helmholtz equation, and form a basis to represent fields in the periodic media. PWE as applied to photonic crystals as described is primarily sourced from Dr. Danner's tutorial. The electric or magnetic fields are expanded for each field component in terms of the Fourier series components along the reciprocal lattice vector, e.g. \(E(\mathbf{r}) = \sum_{m} K_m^{E}\, e^{i \mathbf{G}_m \cdot \mathbf{r}}\). Similarly, the dielectric permittivity (which is periodic along the reciprocal lattice vector for photonic crystals) is also expanded through Fourier series components, \(\varepsilon_r(\mathbf{r}) = \sum_{n} K_n^{\varepsilon_r} e^{i \mathbf{G}_n \cdot \mathbf{r}}\), with the Fourier series coefficients being the K numbers subscripted by m, n respectively, and the reciprocal lattice vector given by \(\mathbf{G}\). In real modeling, the range of components considered will be reduced to just \(\pm N_{\max}\) terms instead of the ideal, infinite wave. Using these expansions in any of the curl-curl relations like \(\nabla \times \nabla \times \mathbf{E}(\mathbf{r}) = \left(\frac{\omega}{c}\right)^{2} \varepsilon_r(\mathbf{r})\, \mathbf{E}(\mathbf{r})\), and simplifying under assumptions of a source-free, linear, and non-dispersive region we obtain the eigenvalue relations which can be solved. Example for 1D case For a y-polarized z-propagating electric wave, incident on a 1D-DBR periodic in only the z-direction and homogeneous along x, y, with a lattice period of a, we then have the following simplified relations: \(E_y(z) = \sum_{m} K_m^{E_y} e^{i G_m z} e^{i k z}\) and \(\varepsilon_r(z) = \sum_{n} K_n^{\varepsilon_r} e^{i G_n z}\), with \(G_m = m \frac{2\pi}{a}\). The constitutive eigenvalue equation we finally have t
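For the 1D case just described, the eigenproblem is small enough to solve directly. A NumPy sketch of our own (not the tutorial's code), assuming a centered high-index layer in each unit cell; the Bloch ansatz turns the Helmholtz equation into the generalized eigenproblem diag((k+G)²) e = (ω/c)² E e, where E is the Toeplitz matrix of permittivity Fourier coefficients:

```python
# Minimal 1D plane-wave expansion for a Bragg mirror (illustrative values).
# d2E/dz2 + (w/c)^2 eps(z) E = 0 with E(z) = sum_G e_G exp(i(k+G)z),
# G = 2*pi*m/a, gives diag((k+G)^2) e = (w/c)^2 * Toeplitz(eps) e.
import numpy as np

a, f = 1.0, 0.5           # lattice period and high-index fill fraction
eps1, eps2 = 13.0, 1.0    # layer permittivities
N = 15                    # plane waves with indices -N..N

m = np.arange(-2 * N, 2 * N + 1)
eps_m = (eps1 - eps2) * f * np.sinc(m * f)   # centered layer -> real coeffs
eps_m[m == 0] = f * eps1 + (1 - f) * eps2    # average permittivity

idx = np.arange(-N, N + 1)
eps_mat = eps_m[(idx[:, None] - idx[None, :]) + 2 * N]  # Toeplitz matrix

for k in np.linspace(-np.pi / a, np.pi / a, 5):
    G = 2 * np.pi * idx / a
    A = np.diag((k + G) ** 2)
    w2 = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(eps_mat, A))))
    bands = np.sqrt(np.abs(w2[:4])) * a / (2 * np.pi)   # w*a/(2*pi*c)
    print(f"k*a/pi = {k * a / np.pi:+.2f}: {np.round(bands, 3)}")
```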
https://en.wikipedia.org/wiki/HP%2095LX
The HP 95LX Palmtop PC (F1000A, F1010A), also known as project Jaguar, was Hewlett Packard's first MS-DOS-based pocket computer, or personal digital assistant, introduced in April 1991 in collaboration with Lotus Development Corporation. It can be seen as a successor to a series of larger portable PCs like the HP 110 and HP 110 Plus. Hardware The HP 95LX had an NEC V20 CPU (an Intel 8088 clone) running at 5.37 MHz, integrated into an Intel system-on-a-chip (SoC) device. It cannot be considered completely PC-compatible because of its quarter-CGA (MDA)-resolution LCD screen. The device also included a CR 2032 lithium coin cell for memory backup when the two AA main batteries ran out. For mass storage, the HP 95LX had a single PCMCIA slot which could hold a static RAM card with its own CR 2025 back-up coin cell. An RS-232-compatible serial port was provided, as well as an infrared port for printing on compatible models of Hewlett Packard printers. Display In character mode, the display showed 16 lines of 40 characters, and had no backlight. While most IBM-compatible PCs work with a hardware code page 437, the HP 95LX's text mode font was hard-wired to code page 850 instead. Lotus 1-2-3 internally used the Lotus International Character Set (LICS), but characters were translated to code page 850 for display and printing purposes. Software The palmtop ran Microsoft's MS-DOS version 3.22 and had a customized version of Lotus 1-2-3 Release 2.2 built in. Other software in read-only memory (ROM) included a calculator, an appointment calendar, a telecommunications program, and a simple text editor. Successors Successor models to the HP 95LX include the HP 100LX, HP Palmtop FX, HP 200LX, HP 1000CX, and HP OmniGo 700LX. See also DIP Pocket PC Atari Portfolio Poqet PC Poqet PC Prime Poqet PC Plus Sharp PC-3000 ZEOS Pocket PC Yukyung Viliv N5 Sub-notebook Netbook Palmtop PC Ultra-mobile PC
https://en.wikipedia.org/wiki/Bollard%20pull
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck (specifically a ballast tractor), which is utilized to move a load. Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around of bollard pull, which is described as above "normal" tugboats. The world's strongest tug since its delivery in 2020 is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of . Island Victory is not a typical tug; rather, it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel. Resistive force is roughly half the water density times velocity squared times the area of the ship's wetted surface: \(F \approx \tfrac{1}{2}\, \rho\, v^{2}\, S\). Background Unlike in ground vehicles, the statement of installed horsepower is not sufficient to understand how strong a tug is – this is because the tug operates mainly at very low or zero speeds, thus may not be delivering power (power = force × v
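Plugging illustrative numbers into the quoted rule of thumb (which, as stated, omits an explicit drag coefficient):

```python
# Numeric illustration of F ~ 1/2 * rho * v^2 * S (wetted-surface drag).
# Values are illustrative; real resistance also includes a drag coefficient.
RHO_SEAWATER = 1025.0  # kg/m^3

def resistive_force_newtons(speed_ms: float, wetted_area_m2: float) -> float:
    return 0.5 * RHO_SEAWATER * speed_ms**2 * wetted_area_m2

# A barge with 800 m^2 of wetted surface towed at 2 m/s (~4 knots):
print(f"{resistive_force_newtons(2.0, 800.0) / 1000:.0f} kN")  # 1640 kN
```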
https://en.wikipedia.org/wiki/Parity%20of%20zero
In mathematics, zero is an even number. In other words, its parity—the quality of an integer being even or odd—is even. This can be easily verified based on the definition of "even": it is an integer multiple of 2, specifically 0 × 2. As a result, zero shares all the properties that characterize even numbers: for example, 0 is neighbored on both sides by odd numbers, any decimal integer has the same parity as its last digit—so, since 10 is even, 0 will be even, and if y is even then y + x has the same parity as x—indeed, 0 + x and x always have the same parity. Zero also fits into the patterns formed by other even numbers. The parity rules of arithmetic, such as even − even = even, require 0 to be even. Zero is the additive identity element of the group of even integers, and it is the starting case from which other even natural numbers are recursively defined. Applications of this recursion from graph theory to computational geometry rely on zero being even. Not only is 0 divisible by 2, it is divisible by every power of 2, which is relevant to the binary numeral system used by computers. In this sense, 0 is the "most even" number of all. Among the general public, the parity of zero can be a source of confusion. In reaction time experiments, most people are slower to identify 0 as even than 2, 4, 6, or 8. Some teachers—and some children in mathematics classes—think that zero is odd, or both even and odd, or neither. Researchers in mathematics education propose that these misconceptions can become learning opportunities. Studying equalities like 0 × 2 = 0 can address students' doubts about calling 0 a number and using it in arithmetic. Class discussions can lead students to appreciate the basic principles of mathematical reasoning, such as the importance of definitions. Evaluating the parity of this exceptional number is an early example of a pervasive theme in mathematics: the abstraction of a familiar concept to an unfamiliar setting. Why zero is even The standard definition of "even number" can be used
https://en.wikipedia.org/wiki/Krivine%E2%80%93Stengle%20Positivstellensatz
In real algebraic geometry, the Krivine–Stengle Positivstellensatz (German for "positive-locus-theorem") characterizes polynomials that are positive on a semialgebraic set, which is defined by systems of inequalities of polynomials with real coefficients, or more generally, coefficients from any real closed field. It can be thought of as a real analogue of Hilbert's Nullstellensatz (which concerns complex zeros of polynomial ideals), and this analogy is at the origin of its name. It was proved by the French mathematician Jean-Louis Krivine and then rediscovered by the Canadian Gilbert Stengle. Statement Let \(R\) be a real closed field, and \(F = \{f_1, f_2, \ldots, f_m\}\) and \(G = \{g_1, g_2, \ldots, g_r\}\) finite sets of polynomials over \(R\) in \(n\) variables. Let \(W\) be the semialgebraic set \(W = \{x \in R^{n} \mid \forall f \in F,\ f(x) \ge 0;\ \forall g \in G,\ g(x) = 0\}\), and define the preordering associated with \(W\) as the set \(P(F, G) = \left\{ \sum_{\alpha \in \{0,1\}^{m}} \sigma_\alpha f_1^{\alpha_1} \cdots f_m^{\alpha_m} + \sum_{\ell=1}^{r} \varphi_\ell g_\ell \;:\; \sigma_\alpha \in \Sigma^2[X_1, \ldots, X_n],\ \varphi_\ell \in R[X_1, \ldots, X_n] \right\}\), where \(\Sigma^2[X_1, \ldots, X_n]\) is the set of sum-of-squares polynomials. In other words, \(P(F, G) = C + I\), where \(C\) is the cone generated by \(F\) (i.e., the subsemiring of \(R[X_1, \ldots, X_n]\) generated by \(F\) and arbitrary squares) and \(I\) is the ideal generated by \(G\). Let \(p \in R[X_1, \ldots, X_n]\) be a polynomial. The Krivine–Stengle Positivstellensatz states that (i) \(\forall x \in W,\ p(x) > 0\) if and only if there exist \(q_1, q_2 \in P(F, G)\) such that \(q_1 p = 1 + q_2\). (ii) \(\forall x \in W,\ p(x) \ge 0\) if and only if there exist \(q_1, q_2 \in P(F, G)\) and an integer \(s \ge 0\) such that \(q_1 p = p^{2s} + q_2\). The weak Positivstellensatz is the following variant of the Positivstellensatz. Let \(R\) be a real closed field, and \(F\), \(G\), and \(H\) finite subsets of \(R[X_1, \ldots, X_n]\). Let \(C\) be the cone generated by \(F\), and \(I\) the ideal generated by \(G\). Then \(\{x \in R^{n} \mid \forall f \in F,\ f(x) \ge 0;\ \forall g \in G,\ g(x) = 0;\ \forall h \in H,\ h(x) \ne 0\} = \emptyset\) if and only if there exist \(f \in C\), \(g \in I\), and an integer \(s \ge 0\) such that \(f + g + \left( \prod_{h \in H} h \right)^{2s} = 0\). (Unlike Hilbert's Nullstellensatz, the "weak" form actually includes the "strong" form as a special case, so the terminology is a misnomer.) Variants The Krivine–Stengle Positivstellensatz also has the following refinements under additional assumptions. It should be remarked that Schmüdgen's Positivstellensatz has a weaker assumption than Putinar's Positivstellensatz, but the conclusion is also weaker. Schmüdgen's Positivstellensatz Suppose that \(R = \mathbb{R}\). If the semialgebraic set \(W\) is compact, then each polynomial \(p\) that is strictly positive on \(W\) can be written as a polynomial in the defining functions of \(W\) with sums-of-squares coefficients, i.e. \(p \in P(F, G)\). Here \(p\) is said to be strictly
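A small worked instance of clause (ii), as our own illustration rather than an example from the source:

```latex
% Our own illustrative certificate, not from the source. Take
% F = {x}, G = {} over R = \mathbb{R}, so W = { x : x >= 0 }, and let
% p(x) = x, which is nonnegative on W. Clause (ii) is witnessed by
% s = 1, q_1 = x \in P(F, G), and q_2 = 0:
\[
  q_1 \, p \;=\; x \cdot x \;=\; x^{2} \;=\; p^{2s} + q_2 .
\]
```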
https://en.wikipedia.org/wiki/Link-Local%20Multicast%20Name%20Resolution
The Link-Local Multicast Name Resolution (LLMNR) is a protocol based on the Domain Name System (DNS) packet format that allows both IPv4 and IPv6 hosts to perform name resolution for hosts on the same local link. It is included in Windows Vista, Windows Server 2008, Windows 7, Windows 8 and Windows 10. It is also implemented by systemd-resolved on Linux. LLMNR is defined in RFC 4795 but was not adopted as an IETF standard. As of April 2022, Microsoft has begun the process of phasing out both LLMNR and NetBIOS name resolution in favour of mDNS. Protocol details In responding to queries, responders listen on UDP port 5355 on the following link-scope Multicast address: IPv4 - 224.0.0.252, MAC address 01-00-5E-00-00-FC IPv6 - FF02:0:0:0:0:0:1:3 (this notation can be abbreviated as FF02::1:3), MAC address 33-33-00-01-00-03 The responders also listen on TCP port 5355 on the unicast address that the host uses to respond to queries. Packet header structure ID - A 16-bit identifier assigned by the program that generates any kind of query. QR - Query/Response. OPCODE - A 4-bit field that specifies the kind of query in this message. This value is set by the originator of a query and copied into the response. This specification defines the behavior of standard queries and responses (opcode value of zero). Future specifications may define the use of other opcodes with LLMNR. C - Conflict. TC - TrunCation. T - Tentative. Z - Reserved for future use. RCODE - Response code. QDCOUNT - An unsigned 16-bit integer specifying the number of entries in the question section. ANCOUNT - An unsigned 16-bit integer specifying the number of resource records in the answer section. NSCOUNT - An unsigned 16-bit integer specifying the number of name server resource records in the authority records section. ARCOUNT - An unsigned 16-bit integer specifying the number of resource records in the additional records section. See also Network Basic Input/Output System (NetBIOS
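Because LLMNR reuses the DNS packet format, a query can be composed with a few lines of socket code. A hedged sketch: the queried name "example" is arbitrary, and on most networks no responder may answer:

```python
# Send a one-question LLMNR query (A record, IN class) to the IPv4
# multicast group 224.0.0.252 on UDP port 5355 and wait briefly.
import socket
import struct

def build_query(name: str, qtype: int = 1, qclass: int = 1) -> bytes:
    # DNS-format header: ID, flags (QR=0, opcode 0), QDCOUNT=1, rest 0.
    header = struct.pack(">6H", 0x1234, 0, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">2H", qtype, qclass)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(build_query("example"), ("224.0.0.252", 5355))
try:
    data, addr = sock.recvfrom(1024)
    print(f"response from {addr}: {data.hex()}")
except socket.timeout:
    print("no LLMNR responder answered")
```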
https://en.wikipedia.org/wiki/PGG-glucan
Poly-[1-6]-β-D-glucopyranosyl-[1-3]-β-D-glucopyranose glucan (PGG glucan, proprietary name Betafectin) is an anti-infective agent and a form or type of beta-glucan. Betafectin is a PGG-glucan, a novel β-(1,6) branched β-(1,3) glucan, purified from the cell walls of Saccharomyces cerevisiae. It is a macrophage-specific immunomodulator.
https://en.wikipedia.org/wiki/Death%20threat
A death threat is a threat, often made anonymously, by one person or a group of people to kill another person or group of people. These threats are often designed to intimidate victims in order to manipulate their behaviour, in which case a death threat could be a form of coercion. For example, a death threat could be used to dissuade a public figure from pursuing a criminal investigation or an advocacy campaign. Legality In most jurisdictions, death threats are a serious type of criminal offence. Death threats are often covered by coercion statutes. For instance, the coercion statute in Alaska says: Methods A death threat can be communicated via a wide range of media, among these letters, newspaper publications, telephone calls, internet blogs and e-mail. If the threat is made against a political figure, it can also be considered treason. If a threat targets a location that is frequented by people (e.g. a building), it could be a terrorist threat. Sometimes, death threats are part of a wider campaign of abuse targeting a person or a group of people (see terrorism, mass murder). Against a head of state In many governments, including monarchies and republics of all levels of political freedom, threatening to kill the head of state or head of government (such as the sovereign, president, or prime minister) is considered a crime. Punishments for such threats vary. United States law provides for up to five years in prison for threatening any government official, especially the president. In the United Kingdom, under the Treason Felony Act 1848, it is illegal to attempt to kill or deprive the monarch of their throne; this offense was originally punished with penal transportation, and then was changed to the death penalty, and currently the penalty is life imprisonment. Osman warning Named after a high-profile case, Osman v United Kingdom, Osman warnings (also letters or notices) are warnings of a death threat or high risk of murder issued by British police or legal
https://en.wikipedia.org/wiki/Polyembryony
Polyembryony is the phenomenon of two or more embryos developing from a single fertilized egg. Because the embryos result from the same egg, they are identical to one another but genetically distinct from the parents. The genetic difference between offspring and parents, together with the genetic identity among siblings, distinguishes polyembryony both from budding and from typical sexual reproduction. Polyembryony can occur in humans, resulting in identical twins, though the process is random and occurs at a low frequency. Polyembryony occurs regularly in many species of vertebrates, invertebrates, and plants. Evolution of polyembryony The evolution of polyembryony and the potential evolutionary advantages it may entail have been studied. In parasitoid wasps, there are several hypotheses surrounding the evolutionary advantages of polyembryony, one of them being that it allows female wasps that are small in size to increase the number of potential offspring in comparison to wasps that are monoembryonic. There are limitations to monoembryony, but with this method of development, multiple embryos can be derived from each of the individual eggs that are laid. The potential advantages of polyembryony in competing invasive plant species have been studied as well. Vertebrates Armadillos are the most well-studied vertebrates that undergo polyembryony, with six species of armadillo in the genus Dasypus that are always polyembryonic. The nine-banded armadillo, for instance, always gives birth to four identical young. There are two conditions that are expected to promote the evolution of polyembryony: either the mother does not know the environmental conditions of her offspring, as in the case of parasitoids, or there is a constraint on reproduction. It is thought that nine-banded armadillos evolved to be polyembryonic because of the latter. Invertebrates A more striking example of the use of polyembryony as a competitive reproductive tool is found in the
https://en.wikipedia.org/wiki/Forebulge
In geology, a forebulge is a flexural bulge in front of a load on the lithosphere, often caused by tectonic interactions and glaciations. An example of a forebulge can be seen in the Himalayan foreland basin, a result of the Indian-Eurasian (continent-continent) plate collision, in which the Indian plate subducted and the Eurasian plate created a large load on the lithosphere, leading to the Himalayas and the Ganges foreland basin. Background Forebulges are most commonly found with continent-continent convergent collisions, in which the formation of mountain ranges as the plates collide places a large load on the lithosphere below. The lithosphere flexes on the mantle in response to the load, causing depression and subsidence (the foredeep) followed by forebulging in front. The forebulge area is lifted by a height that is 4% of the depression height caused by the load. It takes roughly 10,000 to 20,000 years for a forebulge to fully develop, as the mantle flexure reaches isostatic equilibrium, a process that is controlled by mantle viscosity. Tectonic Forebulging can be seen during the formation of a mountain range, which creates a large load and crustal thickening that leads to lithospheric flexure. Part of the land sinks under the load (the foredeep) while part of the outer land forebulges, leading to the creation of these foreland basins. Forebulging associated with the formation of these basins is most commonly a result of convergent collision. Foreland basins can occur in convergent subduction, but this is rare. These basins are linked to fold-thrust belts, which are divided into three main types: collisional (peripheral), retroarc, and retreating collisional subduction. Collisional and retroarc thrust belts form in collisional convergent settings, whereas the retreating collisional type forms when the subduction rate exceeds the convergence rate of the collision. The Persian Gulf foreland basin and forebulge were created as a result of the collision of the Arabian
https://en.wikipedia.org/wiki/Mediastinal%20branches%20of%20thoracic%20part%20of%20aorta
The mediastinal branches are numerous small vessels which supply the lymph glands and loose areolar tissue in the posterior mediastinum.
https://en.wikipedia.org/wiki/Apollo%20Abort%20Guidance%20System
The Apollo Abort Guidance System (AGS, also known as Abort Guidance Section) was a backup computer system providing an abort capability in the event of failure of the Lunar Module's primary guidance system (Apollo PGNCS) during descent, ascent or rendezvous. As an abort system, it did not support guidance for a lunar landing. The AGS was designed by TRW independently of the development of the Apollo Guidance Computer and PGNCS. It was the first navigation system to use a strapdown Inertial Measurement Unit rather than a gimbaled gyrostabilized IMU (as used by PGNCS). Although not as accurate as the gimbaled IMU, it provided satisfactory accuracy with the help of the optical telescope and rendezvous radar. It was also lighter and smaller in size. Description The Abort Guidance System included the following components: Abort Electronic Assembly (AEA): the AGS computer Abort Sensor Assembly (ASA): a simple strapdown IMU Data Entry and Display Assembly (DEDA): the astronaut interface, similar to DSKY The computer used was the MARCO 4418 (MARCO stands for Man Rated Computer), whose dimensions were 5 by 8 by 23.75 inches (12.7 by 20.3 by 60.33 centimeters); it weighed 32.7 pounds (14.83 kg) and required 90 watts of power. Because the memory had serial access, it was slower than the AGC, although some operations on the AEA were performed as fast or faster than on the AGC. The computer had the following characteristics: It had 4096 words of memory. The lower 2048 words were erasable memory (RAM); the higher 2048 words served as fixed memory (ROM). The fixed and erasable memory were constructed similarly, so the ratio between fixed and erasable memory was variable. It was an 18-bit machine, with 17 magnitude bits and a sign bit. The addresses were 13 bits long; the MSB indicated index addressing. Data words were two's complement and in fixed-point form. Registers The AEA has the following registers: A: Accumulator (18 bit) M: Memory Register (18 bit), holds data that are being transfe
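The 18-bit two's-complement fixed-point format can be illustrated as follows. This is our own encoding sketch, interpreting words as fractions in [-1, 1); the AEA's actual scale factors varied by variable:

```python
# Illustration of an 18-bit two's-complement fixed-point word
# (1 sign bit + 17 magnitude bits), read here as a fraction in [-1, 1).
WORD_BITS = 18
SCALE = 1 << (WORD_BITS - 1)  # 2^17

def to_word(x: float) -> int:
    """Encode -1.0 <= x < 1.0 as an 18-bit two's-complement word."""
    return round(x * SCALE) & ((1 << WORD_BITS) - 1)

def from_word(w: int) -> float:
    if w >= SCALE:              # sign bit set -> negative value
        w -= 1 << WORD_BITS
    return w / SCALE

print(oct(to_word(0.5)), from_word(to_word(-0.25)))  # 0o200000 -0.25
```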
https://en.wikipedia.org/wiki/Kuratowski%20convergence
In mathematics, Kuratowski convergence or Painlevé-Kuratowski convergence is a notion of convergence for subsets of a topological space. First introduced by Paul Painlevé in lectures on mathematical analysis in 1902, the concept was popularized in texts by Felix Hausdorff and Kazimierz Kuratowski. Intuitively, the Kuratowski limit of a sequence of sets is where the sets "accumulate". Definitions For a given sequence \((x_n)\) of points in a space \(X\), a limit point of the sequence can be understood as any point \(x\) where the sequence eventually becomes arbitrarily close to \(x\). On the other hand, a cluster point of the sequence can be thought of as a point \(x\) where the sequence frequently becomes arbitrarily close to \(x\). The Kuratowski limits inferior and superior generalize this intuition of limit and cluster points to subsets of the given space \(X\). Metric Spaces Let \((X, d)\) be a metric space, where \(X\) is a given set. For any point \(x\) and any non-empty subset \(A \subseteq X\), define the distance between the point and the subset: \(d(x, A) = \inf_{a \in A} d(x, a)\). For any sequence of subsets \((A_n)_{n=1}^{\infty}\) of \(X\), the Kuratowski limit inferior (or lower closed limit) of \(A_n\) as \(n \to \infty\) is \(\mathop{\mathrm{Li}} A_n = \{x \in X : \limsup_{n \to \infty} d(x, A_n) = 0\}\); the Kuratowski limit superior (or upper closed limit) of \(A_n\) as \(n \to \infty\) is \(\mathop{\mathrm{Ls}} A_n = \{x \in X : \liminf_{n \to \infty} d(x, A_n) = 0\}\). If the Kuratowski limits inferior and superior agree, then the common set is called the Kuratowski limit of \(A_n\) and is denoted \(\mathop{\mathrm{Lim}} A_n\). Topological Spaces If \(X\) is a topological space, and \((A_i)_{i \in I}\) is a net of subsets of \(X\), the limits inferior and superior follow a similar construction. For a given point \(x \in X\) denote by \(N(x)\) the collection of open neighborhoods of \(x\). The Kuratowski limit inferior of \((A_i)\) is the set \(\mathop{\mathrm{Li}} A_i = \{x \in X : \text{for every } U \in N(x) \text{ there exists } i_0 \in I \text{ such that } U \cap A_i \neq \emptyset \text{ for all } i \geq i_0\}\), and the Kuratowski limit superior is the set \(\mathop{\mathrm{Ls}} A_i = \{x \in X : \text{for every } U \in N(x) \text{ and every } i_0 \in I \text{ there exists } i \geq i_0 \text{ such that } U \cap A_i \neq \emptyset\}\). Elements of \(\mathop{\mathrm{Li}} A_i\) are called limit points of \((A_i)\) and elements of \(\mathop{\mathrm{Ls}} A_i\) are called cluster points of \((A_i)\). In other words, \(x\) is a limit point of \((A_i)\) if each of its neighborhoods intersects \(A_i\) for all \(i\) in a "residual" subset of \(I\), while \(x\) is a cluster point of \((A_i)\) if each of its neighborhoods intersects \(A_i\) for all \(i\) in a cofinal subset of \(I\). When these sets agree, the common set is the Kuratowski limit of \((A_i)\), denoted \(\mathop{\mathrm{Lim}} A_i\). Examples Suppo
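A concrete metric-space example can be checked numerically: for the alternating sequence A_n = [0, 1] (n even), A_n = [1, 2] (n odd) in the real line, Li A_n = {1} while Ls A_n = [0, 2]. Since the sequence has period two, the limsup and liminf of d(x, A_n) reduce to the max and min of the distances to the two sets:

```python
# Numeric sketch of Kuratowski limits for an alternating set sequence.
import numpy as np

def dist_to_interval(x: float, lo: float, hi: float) -> float:
    """Distance from the point x to the interval [lo, hi]."""
    return max(lo - x, 0.0, x - hi)

grid = np.linspace(-0.5, 2.5, 601)
d_even = np.array([dist_to_interval(x, 0.0, 1.0) for x in grid])
d_odd = np.array([dist_to_interval(x, 1.0, 2.0) for x in grid])

li = grid[np.maximum(d_even, d_odd) < 1e-9]  # limsup d(x, A_n) = 0
ls = grid[np.minimum(d_even, d_odd) < 1e-9]  # liminf d(x, A_n) = 0
print(f"Li = [{li.min():g}, {li.max():g}], Ls = [{ls.min():g}, {ls.max():g}]")
# Li = [1, 1], Ls = [0, 2]
```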
https://en.wikipedia.org/wiki/Hydraulic%20Launch%20Assist
Hydraulic Launch Assist (HLA) is the name of a hydraulic hybrid regenerative braking system for land vehicles produced by the Eaton Corporation. Background The HLA system recycles energy by converting kinetic energy into potential energy during deceleration via hydraulics, storing the energy at high pressure in an accumulator filled with nitrogen gas. The energy is then returned to the vehicle during subsequent acceleration, thereby reducing the amount of work done by the internal combustion engine. This system provides a considerable increase in vehicle productivity while reducing fuel consumption in stop-and-go use profiles such as refuse vehicles and other heavy-duty vehicles. Parallel vs. series hybrids The HLA system is called a parallel hydraulic hybrid. In parallel systems the original vehicle drive-line remains, allowing the vehicle to operate normally when the HLA system is disengaged. When the HLA is engaged, energy is captured during deceleration and released during acceleration, in contrast to series hydraulic hybrid systems which replace the entire traditional drive-line to provide power transmission in addition to regenerative braking. Hydraulic vs. electric hybrids Hydraulic hybrids are said to be power dense, while electric hybrids are energy dense. This means that electric hybrids, while able to deliver large amounts of energy over long periods of time, are limited by the rate at which the chemical energy in the batteries is converted to mechanical energy. This is largely governed by reaction rates in the battery and current ratings of associated components. Hydraulic hybrids, on the other hand, are capable of transferring energy at a much higher rate, but are limited by the amount of energy that can be stored. For this reason, hydraulic hybrids lend themselves well to stop-and-go applications and heavy vehicles. Applications Concept vehicles Ford Motor Company included the HLA system in their 2002 F-350 Tonka truck concept vehicle, reported
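A back-of-envelope calculation shows why stop-and-go duty cycles favor regeneration. The vehicle mass and speed below are illustrative, and the actually recoverable fraction depends on the accumulator and drive-line losses:

```python
# Kinetic energy available per braking event, E = 1/2 * m * v^2
# (illustrative numbers; capture efficiency is not modeled).
def braking_energy_kj(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms**2 / 1000.0

# A 20-tonne refuse truck braking from 40 km/h (~11.1 m/s):
print(f"{braking_energy_kj(20_000, 40 / 3.6):.0f} kJ per stop")  # ~1235 kJ
```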
https://en.wikipedia.org/wiki/Beijing%20World%20Park
Beijing World Park () is a theme park that attempts to give visitors the chance to see the world without having to leave Beijing. The park covers 46.7 hectares and is located in the southwestern Fengtai District of Beijing. It is about 17 km from Tiananmen, the city center, and 40 km from the Capital International Airport. The park opened in 1993 and is estimated to receive 1.5 million visitors annually. The park The entrance to the park is made up of a Gothic castle, a Roman corridor, and granite relief sculptures. Immediately inside the gate is an Italian-style terrace garden with grand staircases, fountains, and sculptures inspired by originals from the European Renaissance. More lawns and gardens are scattered throughout the park. On these lawns are miniature models of around 100 of the world's most famous statues, including the American Statue of Liberty, Copenhagen's Little Mermaid, Michelangelo's David, and the Venus de Milo. Once inside the gates, Beijing World Park consists of two main parts: the scenic portion and the shopping, dining, and entertainment area. Scenic area The scenic area of the park models itself after the naturalistic layout of the globe, representing the four major oceans, and focusing on five continents: Asia, Africa, Europe, North America, and South America. The park contains about a hundred (109) scaled-down replicas of famous landmarks from nearly 40 countries and regions around the world, including the Tower Bridge in London, the Eiffel Tower in Paris, and the Great Pyramids in Egypt. There is even a miniature Manhattan, complete with the twin towers of the World Trade Center. Each landmark represents its country or region of origin and is situated in the park according to its location on the map. Close attention to detail was paid in modeling these landmarks after their originals. For example, detailed carvings and ornamentations are included. Even the materials used are modeled after their originals to create the most authentic look po
https://en.wikipedia.org/wiki/Lanix
Lanix Internacional, S.A. de C.V. is a multinational computer and mobile phone manufacturing company based in Hermosillo, Mexico. Lanix primarily markets and sells its products in Mexico and the Latin American export market. History Lanix was founded in Hermosillo, Sonora, Mexico in 1990, and released its first computer, the PC 286, the same year. Throughout the 1990s Lanix expanded into the development and production of more sophisticated electronics components such as optical drives, servers, memory drives and flash memory. In 2002 Lanix opened its first factory outside of Mexico in Santiago, Chile to cater to the South American market. By 2006 Lanix had gained a market share of 5% of Mexico's electronics market and began diversifying its product line to include LCD televisions and monitors, and in 2007 began manufacturing mobile phones. Currently Lanix offers products in the consumer, professional and government markets throughout Latin America. In 2010 Lanix announced an ambitious plan to gain market share in the Latin American computer market and expanded operations to include every country in Latin America. Lanix has production facilities at its original headquarters in Hermosillo, Sonora, Mexico and international facilities in Santiago, Chile and Bogota, Colombia. At the 2009 Intel Solutions Summit hosted by Intel, Lanix won an award in the "mobile solution" category. In March 2011, Lanix began offering a system where buyers can custom build their own computer, choosing different types of chipsets, memory, and other components. In 2012 Lanix expanded its product portfolio by introducing its first smartphone, the Ilium S100, and positioned itself as one of the bestselling brands in the Mexican market. In 2015 the company announced its first smartphone running Windows Phone. In June 2017 Lanix renewed its image by updating its logo, launching new high-end smartphones, and updating its webpage. Products Lanix manufactures desktops, laptops, tablets, server
https://en.wikipedia.org/wiki/Cobb%E2%80%93Eickelberg%20Seamount%20chain
The Cobb-Eickelberg seamount chain is a range of undersea mountains formed by volcanic activity of the Cobb hotspot located in the Pacific Ocean. The seamount chain extends to the southeast on the Pacific Plate, beginning at the Aleutian Trench and terminating at Axial Seamount, located on the Juan de Fuca Ridge. The seamount chain spans a length of approximately 1,800 km. The location of the Cobb hotspot that gives rise to these seamounts is 46°N, 130°W. The Pacific plate is moving to the northwest over the hotspot, causing the seamounts in the chain to decrease in age to the southeast. Axial is the youngest seamount and is located approximately 480 km west of Cannon Beach, Oregon. The most studied seamounts that make up this chain are Axial, Brown Bear, Cobb, and Patton seamounts. There are many other seamounts in this chain which have not been explored. Formation Seamounts are created at hotspots. These are isolated areas within tectonic plates where plumes of magma rise through the crust and erupt at the surface. This creates a chain of submarine volcanoes and seamounts. The Cobb hotspot is located at the Juan de Fuca Ridge in the Pacific Ocean. The Pacific Plate is moving in a north-westward direction at a speed of ~5.5 cm per year. Periodic volcanic events have led to magma eruption onto the seafloor, forming seamounts. Given the length of the chain, this hotspot must have been active over a period of at least 30 million years (probably longer, since older seamounts would have subsided). The last known volcanic activity was at Axial Seamount, which currently directly overlies the hotspot. The total magmatic flux from the Cobb Hotspot is about 0.3 cubic m/yr. Although the Cobb hotspot is currently located beneath the Juan de Fuca ridge, this has not always been the case. The ridge came to overlie the hotspot as the Pacific plate moved northwest, until the plate boundary sat directly on top of it. Currently the Axial seamo
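The quoted figures are mutually consistent: a 1,800 km chain built on a plate moving ~5.5 cm per year implies roughly 33 million years of activity, matching the "at least 30 million years" estimate:

```python
# Consistency check: chain length / plate speed ~ duration of activity.
chain_length_km = 1800
plate_speed_cm_per_yr = 5.5

years = chain_length_km * 1e5 / plate_speed_cm_per_yr  # km -> cm
print(f"~{years / 1e6:.0f} million years")  # ~33 million years
```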
https://en.wikipedia.org/wiki/Simple%20magic%20cube
A simple magic cube is the lowest of six basic classes of magic cubes. These classes are based on extra features required. The simple magic cube requires only the basic features a cube requires to be magic. Namely, all lines parallel to the faces, and all 4 space diagonals sum correctly, i.e. all "1-agonals" and all "3-agonals" sum to \(S = \frac{m(m^{3}+1)}{2}\), where \(m\) is the order of the cube. No planar diagonals (2-agonals) are required to sum correctly, so there are probably no magic squares in the cube. See also Magic square Magic cube classes
https://en.wikipedia.org/wiki/Unistochastic%20matrix
In mathematics, a unistochastic matrix (also called unitary-stochastic) is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix. A square matrix B of size n is doubly stochastic (or bistochastic) if all its entries are non-negative real numbers and each of its rows and columns sums to 1. It is unistochastic if there exists a unitary matrix U such that B_ij = |U_ij|^2 for i, j = 1, ..., n. This definition is analogous to that of an orthostochastic matrix, which is a doubly stochastic matrix whose entries are the squares of the entries in some orthogonal matrix. Since all orthogonal matrices are necessarily unitary matrices, all orthostochastic matrices are also unistochastic. The converse, however, is not true. First, all 2-by-2 doubly stochastic matrices are both unistochastic and orthostochastic, but for larger n this is not the case. For example, take n = 3 and consider the following doubly stochastic matrix: B = (1/2) [[1, 1, 0], [0, 1, 1], [1, 0, 1]]. This matrix is not unistochastic, since any two vectors with moduli equal to the square roots of the entries of two columns (or rows) of B cannot be made orthogonal by a suitable choice of phases. For n ≥ 3, the set of orthostochastic matrices is a proper subset of the set of unistochastic matrices. Known properties: the set of unistochastic matrices contains all permutation matrices, and its convex hull is the Birkhoff polytope of all doubly stochastic matrices; for n ≥ 3 this set is not convex; for n = 3 a triangle inequality on the moduli of the rows is a sufficient and necessary condition for unistochasticity; for n = 3 the set of unistochastic matrices is a centrosymmetric set, and unistochasticity of any bistochastic matrix B is implied by a non-negative value of its Jarlskog invariant; for n = 3 the relative volume of the set of unistochastic matrices with respect to the Birkhoff polytope of doubly stochastic matrices is 8π^2/105 ≈ 75.2%; for n = 4 explicit conditions for unistochasticity are not known yet, but there exists a numerical method to verify unistochasticity
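Constructing a unistochastic matrix from a given unitary is straightforward, as the following NumPy sketch shows (the random-unitary construction via QR is an assumption for illustration; deciding whether a given bistochastic matrix is unistochastic is the hard direction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a random unitary U via QR decomposition of a complex Gaussian matrix.
n = 3
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
u, _ = np.linalg.qr(z)

# B_ij = |U_ij|^2 is unistochastic by construction.
b = np.abs(u) ** 2

# Double stochasticity: all rows and all columns sum to 1.
assert np.allclose(b.sum(axis=0), 1) and np.allclose(b.sum(axis=1), 1)
```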
https://en.wikipedia.org/wiki/NAD%2B%20kinase
NAD+ kinase (EC 2.7.1.23, NADK) is an enzyme that converts nicotinamide adenine dinucleotide (NAD+) into NADP+ by phosphorylating the NAD+ coenzyme. NADP+ is an essential coenzyme that is reduced to NADPH primarily by the pentose phosphate pathway to provide reducing power in biosynthetic processes such as fatty acid biosynthesis and nucleotide synthesis. The structure of the NADK from the archaeon Archaeoglobus fulgidus has been determined. In humans, the genes NADK and MNADK encode NAD+ kinases localized in the cytosol and mitochondria, respectively. Similarly, yeast have both cytosolic and mitochondrial isoforms, and the yeast mitochondrial isoform accepts both NAD+ and NADH as substrates for phosphorylation. Reaction ATP + NAD+ ⇌ ADP + NADP+ Mechanism NADK phosphorylates NAD+ at the 2' position of the ribose ring that carries the adenine moiety. It is highly selective for its substrates, NAD and ATP, and does not tolerate modifications either to the pyridine moiety of the phosphoryl acceptor, NAD, or to the phosphoryl donor, ATP. NADK also uses metal ions to coordinate the ATP in the active site. In vitro studies with various divalent metal ions have shown that zinc and manganese are preferred over magnesium, while copper and nickel are not accepted by the enzyme at all. A proposed mechanism involves the 2' alcohol oxygen acting as a nucleophile to attack the gamma-phosphoryl group of ATP, releasing ADP. Regulation NADK is highly regulated by the redox state of the cell. Whereas NAD is predominantly found in its oxidized state NAD+, the phosphorylated NADP is largely present in its reduced form, as NADPH. Thus, NADK can modulate responses to oxidative stress by controlling NADP synthesis. Bacterial NADK has been shown to be inhibited allosterically by both NADPH and NADH. NADK is also reportedly stimulated by calcium/calmodulin binding in certain cell types, such as neutrophils. NAD kinases in plants and sea urchin eggs have also been fo
https://en.wikipedia.org/wiki/Articulated%20body%20pose%20estimation
Articulated body pose estimation in computer vision is the study of algorithms and systems that recover the pose of an articulated body, consisting of joints and rigid parts, from image-based observations. It is one of the longest-standing problems in computer vision, both because of the complexity of the models that relate observation to pose and because of the variety of situations in which it would be useful. Description Perception of human beings in their neighboring environment is an important capability that robots must possess. If a person uses gestures to point to a particular object, the interacting machine should be able to understand the situation in a real-world context. Thus pose estimation is an important and challenging problem in computer vision, and many algorithms have been deployed to solve it over the last two decades. Many solutions involve training complex models with large data sets. Pose estimation is a difficult problem and an active subject of research because the human body has 244 degrees of freedom with 230 joints. Although not all movements between joints are evident, the human body is composed of 10 large parts with 20 degrees of freedom. Algorithms must account for large variability introduced by differences in appearance due to clothing, body shape, size, and hairstyles. Additionally, the results may be ambiguous due to partial occlusions from self-articulation, such as a person's hand covering their face, or occlusions from external objects. Finally, most algorithms estimate pose from monocular (two-dimensional) images taken with a normal camera. These images lack the three-dimensional information of an actual body pose, leading to further ambiguities. Other issues include varying lighting and camera configurations, and the difficulties are compounded if there are additional performance requirements. There is recent work in this area wherein images from RGBD cameras provide information about color and depth. Sensors
https://en.wikipedia.org/wiki/Presentation%20%28medical%29
In medicine, a presentation is the appearance in a patient of illness or disease—or signs or symptoms thereof—before a medical professional. In practice, one usually speaks of a patient as presenting with this or that. Examples include: "...Many depressed patients present with medical rather than psychiatric complaints, and those who present with medical complaints are twice as likely to be misdiagnosed as those who present with psychiatric complaints." "...In contrast, poisonings from heavy metal can be subtle and present with a slowly progressive course." "...Some patients present with small unobstructed kidneys, when the diagnosis is easy to miss." "...A total of 7,870,266 patients presented to a public hospital ED from 1 July 2017 to 30 June 2018." See also Presentation (obstetrics)
https://en.wikipedia.org/wiki/Racetrack%20memory
Racetrack memory or domain-wall memory (DWM) is an experimental non-volatile memory device under development at IBM's Almaden Research Center by a team led by physicist Stuart Parkin. In early 2008, a 3-bit version was successfully demonstrated. If it were to be developed successfully, racetrack memory would offer storage density higher than comparable solid-state memory devices like flash memory. Description Racetrack memory uses a spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire about 200 nm across and 100 nm thick. As current is passed through the wire, the domains pass by magnetic read/write heads positioned near the wire, which alter the domains to record patterns of bits. A racetrack memory device is made up of many such wires and read/write elements. In general operational concept, racetrack memory is similar to the earlier bubble memory of the 1960s and 1970s. Delay-line memory, such as the mercury delay lines of the 1940s and 1950s, is a still-earlier form of similar technology, as used in the UNIVAC and EDSAC computers. Like bubble memory, racetrack memory uses electrical currents to "push" a sequence of magnetic domains through a substrate and past read/write elements. Improvements in magnetic detection capabilities, based on the development of spintronic magnetoresistive sensors, allow the use of much smaller magnetic domains to provide far higher bit densities. In production, it was expected that the wires could be scaled down to around 50 nm. There were two arrangements considered for racetrack memory. The simplest was a series of flat wires arranged in a grid with read and write heads arranged nearby. A more widely studied arrangement used U-shaped wires arranged vertically over a grid of read/write heads on an underlying substrate. This would allow the wires to be much longer without increasing their 2D area, although the need to move individual domains further along the wires before they reach the read/write hea
https://en.wikipedia.org/wiki/Initial%20value%20formulation%20%28general%20relativity%29
The initial value formulation of general relativity is a reformulation of Albert Einstein's theory of general relativity that describes a universe evolving over time. Each solution of the Einstein field equations encompasses the whole history of a universe – it is not just some snapshot of how things are, but a whole spacetime: a statement encompassing the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein's theory appears to be different from most other physical theories, which specify evolution equations for physical systems; if the system is in a given state at some given moment, the laws of physics allow you to extrapolate its past or future. For Einstein's equations, there appear to be subtle differences compared with other fields: they are self-interacting (that is, non-linear even in the absence of other fields); they are diffeomorphism invariant, so to obtain a unique solution, a fixed background metric and gauge conditions need to be introduced; finally, the metric determines the spacetime structure, and thus the domain of dependence for any set of initial data, so the region on which a specific solution will be defined is not, a priori, defined. There is, however, a way to re-formulate Einstein's equations that overcomes these problems. First of all, there are ways of rewriting spacetime as the evolution of "space" in time; an earlier version of this is due to Paul Dirac, while a simpler way is known after its inventors Richard Arnowitt, Stanley Deser and Charles Misner as ADM formalism. In these formulations, also known as "3+1" approaches, spacetime is split into a three-dimensional hypersurface with interior metric and an embedding into spacetime with exterior curvature; these two quantities are the dynamical variables in a Hamiltonian formulation tracing the hypersurface's evolution over time. With such a split, it is possible to state the initial value formulation of general relativi
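For reference, the "3+1" split described above is conventionally written through the ADM line element; the following is the standard textbook form (not spelled out in this article), with lapse N, shift vector N^i and spatial metric γ_ij:

```latex
% ADM ("3+1") decomposition of the spacetime metric (standard textbook form):
% the spatial metric \gamma_{ij}, together with the extrinsic curvature of the
% hypersurface, are the dynamical variables in the Hamiltonian formulation
% mentioned above.
ds^2 = -N^2\,dt^2 + \gamma_{ij}\,\bigl(dx^i + N^i\,dt\bigr)\bigl(dx^j + N^j\,dt\bigr)
```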
https://en.wikipedia.org/wiki/Plant%20Pathology%20%28journal%29
Plant Pathology is a peer-reviewed scientific journal published by Wiley-Blackwell in association with the British Society for Plant Pathology. It was established in 1952 and was originally published by the Ministry of Agriculture. The journal publishes research articles and critical reviews on all aspects of plant pathology except for articles on pesticide and resistance screening. The editor-in-chief is Matt Dickinson.
https://en.wikipedia.org/wiki/List%20of%20widget%20toolkits
This article provides a list of widget toolkits (also known as GUI frameworks), used to construct the graphical user interface (GUI) of programs, organized by their relationships with various operating systems. Low-level widget toolkits Integrated in the operating system macOS uses Cocoa. Mac OS 9 and macOS used to use Carbon for 32-bit applications. The Windows API used in Microsoft Windows. Microsoft had the graphics functions integrated in the kernel until 2006 The Haiku operating system uses an extended and modernised version of the Be API that was used by its predecessor BeOS. Haiku Inc. is expected to drop binary and source compatibility with BeOS at some future time, which will result in a Haiku API. As a separate layer on top of the operating system The X Window System contains primitive building blocks, called Xt or "Intrinsics", but they are mostly only used by older toolkits such as: OLIT, Motif and Xaw. Most contemporary toolkits, such as GTK or Qt, bypass them and use Xlib or XCB directly. The Amiga OS Intuition was formerly present in the Amiga Kickstart ROM and integrated itself with a medium-high level widget library which invoked the Workbench Amiga native GUI. Since Amiga OS 2.0, Intuition.library became disk based and object oriented. Also Workbench.library and Icon.library became disk based, and could be replaced with similar third-party solutions. Since 2005, Microsoft has taken the graphics system out of Windows' kernel. High-level widget toolkits OS dependent On Amiga BOOPSI (Basic Object Oriented Programming System for Intuition) was introduced with OS 2.0 and enhanced Intuition with a system of classes in which every class represents a single widget or describes an interface event. This led to an evolution in which third-party developers each realised their own personal systems of classes. MUI: object-oriented GUI toolkit and the official toolkit for MorphOS. ReAction: object-oriented GUI toolkit and the official toolkit for A
https://en.wikipedia.org/wiki/20%2C000
20,000 (twenty thousand) is the natural number that comes after 19,999 and before 20,001. 20,000 is a round number, and is also in the title of Jules Verne's novel Twenty Thousand Leagues Under the Sea.
Selected numbers in the range 20001–29999
20001 to 20999
20002 = number of surface-points of a tetrahedron with edge-length 100
20067 = the smallest number with no entry in the On-Line Encyclopedia of Integer Sequences (OEIS)
20100 = sum of the first 200 natural numbers (hence a triangular number)
20160 = highly composite number; the smallest order belonging to two non-isomorphic simple groups: the alternating group A8 and the Chevalley group A2(4)
20161 = the largest integer that cannot be expressed as a sum of two abundant numbers
20230 = pentagonal pyramidal number
20412 = Leyland number: 9^3 + 3^9
20540 = square pyramidal number
20569 = tetranacci number
20593 = unique prime in base 12
20597 = k such that the sum of the squares of the first k primes is divisible by k
20736 = 144^2 = 12^4 = 10000_12, palindromic in base 15 (6226_15)
20793 = little Schroeder number
20871 = the number of weeks in exactly 400 years in the Gregorian calendar
20903 = first prime of the form 120k + 23 that is not a full reptend prime
21000 to 21999
21025 = 145^2, palindromic in base 12 (10201_12)
21147 = Bell number
21181 = the least of five remaining Seventeen or Bust numbers in the Sierpiński problem
21209 = number of reduced trees with 23 nodes
21856 = octahedral number
21943 = Friedman prime
21952 = 28^3
21978 = reverses when multiplied by 4: 4 × 21978 = 87912
22000 to 22999
22050 = pentagonal pyramidal number
22140 = square pyramidal number
22222 = repdigit, Kaprekar number: 22222^2 = 493817284 and 4938 + 17284 = 22222
22447 = cuban prime
22527 = Woodall number: 11 × 2^11 − 1
22621 = repunit prime in base 12
22699 = one of five remaining Seventeen or Bust numbers in the Sierpiński problem
23000 to 23999
23000 = number of primes below 2^18
23401 = Leyland number: 6^5 + 5^6
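Two of the recurring entry types above, Kaprekar and Leyland numbers, are easy to verify mechanically; a minimal sketch (the helper names are illustrative, not standard):

```python
def is_kaprekar(n: int) -> bool:
    """Kaprekar: n^2 splits into two parts that sum back to n (e.g. 22222)."""
    sq = str(n * n)
    for i in range(1, len(sq)):
        left, right = int(sq[:i]), int(sq[i:])
        if right > 0 and left + right == n:
            return True
    return False

def is_leyland(n: int) -> bool:
    """Leyland: n = x^y + y^x for integers x >= y >= 2 (e.g. 20412 = 9^3 + 3^9)."""
    x = 2
    while x ** 2 + 2 ** x <= n:
        for y in range(2, x + 1):
            if x ** y + y ** x == n:
                return True
        x += 1
    return False

assert is_kaprekar(22222)   # 22222^2 = 493817284, 4938 + 17284 = 22222
assert is_leyland(20412)    # 9^3 + 3^9
assert is_leyland(23401)    # 6^5 + 5^6
```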
https://en.wikipedia.org/wiki/30%2C000
30,000 (thirty thousand) is the natural number that comes after 29,999 and before 30,001.
Selected numbers in the range 30001–39999
30001 to 30999
30029 = primorial prime
30030 = primorial
30203 = safe prime
30240 = harmonic divisor number
30323 = Sophie Germain prime and safe prime
30420 = pentagonal pyramidal number
30537 = Riordan number
30694 = open meandric number
30941 = first base-13 repunit prime
31000 to 31999
31116 = octahedral number
31337 = cousin prime; pronounced "elite", an alternate way to spell 1337, an obfuscated alphabet made with numbers and punctuation, known and used in the gamer, hacker, and BBS cultures
31395 = square pyramidal number
31397 = prime number followed by a record prime gap of 72, the first greater than 52
31688 = the number of years approximately equal to 1 trillion seconds
31721 = start of a prime quadruplet
31929 = Zeisel number
32000 to 32999
32043 = smallest number whose square is pandigital
32045 = can be expressed as a sum of two squares in more ways than any smaller number
32760 = harmonic divisor number
32761 = 181^2, centered hexagonal number
32767 = 2^15 − 1, largest positive value for a signed (two's complement) 16-bit integer on a computer
32768 = 2^15 = 8^5, maximum absolute value of a negative value for a signed (two's complement) 16-bit integer on a computer
32800 = pentagonal pyramidal number
32993 = Leyland number
33000 to 33999
33333 = repdigit
33461 = Pell number, Markov number
33511 = square pyramidal number
33781 = octahedral number
34000 to 34999
34560 = superfactorial of 5: 1! × 2! × 3! × 4! × 5!
34790 = number of non-isomorphic set-systems of weight 13
34841 = start of a prime quadruplet
34969 = favorite number of the Muppet character Count von Count
35000 to 35999
35720 = square pyramidal number
35840 = number of ounces in a long ton (2,240 pounds)
35890 = tribonacci number
35899 = alternating factorial
35937 = 33^3, chiliagonal number
35964 = digit-reassembly number
36000 to 36999
3610
https://en.wikipedia.org/wiki/40%2C000
40,000 (forty thousand) is the natural number that comes after 39,999 and before 40,001. It is the square of 200.
Selected numbers in the range 40001–49999
40001 to 40999
40320 = smallest factorial (8!) that is not a highly composite number
40425 = square pyramidal number
40585 = largest factorion
40678 = pentagonal pyramidal number
40804 = palindromic square
41000 to 41999
41041 = Carmichael number
41472 = 3-smooth number, number of reduced trees with 24 nodes
41586 = large Schröder number
41616 = triangular square number
41835 = Motzkin number
41841: 1/41841 = 0.0000239..., a repeating decimal with period 7
42000 to 42999
42680 = octahedral number
42875 = 35^3
42925 = square pyramidal number
43000 to 43999
43261 = Markov number
43390 = number of primes below 2^19
43560 = pentagonal pyramidal number
43691 = Wagstaff prime
43777 = smallest member of a prime sextuplet
44000 to 44999
44044 = palindrome reached from 79 after 6 iterations of the "reverse and add" iterative process
44100 = sum of the cubes of the first 20 positive integers; 44,100 Hz is a common sampling frequency in digital audio (and is the standard for compact discs)
44444 = repdigit
44721 = smallest positive integer such that the expression − ≤ 10−9
44944 = palindromic square
45000 to 45999
45360 = highly composite number; first number to have 100 factors (including one and itself)
46000 to 46999
46233 = sum of the first eight factorials
46368 = Fibonacci number
46656 = 36^3 = 6^6, 3-smooth number
46657 = Carmichael number
47000 to 47999
47058 = primary pseudoperfect number
47160 = 10th derivative of x^x at x = 1
47321/33461 ≈ √2
48000 to 48999
48734 = number of 22-bead necklaces (turning over is allowed) where complements are equivalent
49000 to 49999
49151 = Woodall number
49152 = 3-smooth number
49726 = pentagonal pyramidal number
49940 = number of 21-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
Primes
There
https://en.wikipedia.org/wiki/50%2C000
50,000 (fifty thousand) is the natural number that comes after 49,999 and before 50,001.
Selected numbers in the range 50001–59999
50001 to 50999
50069 = 1^1 + 2^2 + 3^3 + 4^4 + 5^5 + 6^6
50400 = highly composite number
50625 = 15^4, smallest fourth power that can be expressed as the sum of only five distinct fourth powers, palindromic in base 14 (14641_14)
50653 = 37^3, palindromic in base 6 (1030301_6)
51000 to 51999
51076 = 226^2, palindromic in base 15 (10201_15)
51641 = Markov number
51984 = 228^2 = 37^3 + 11^3
52000 to 52999
52488 = 3-smooth number
52633 = Carmichael number
53000 to 53999
53016 = pentagonal pyramidal number
53361 = 231^2, sum of the cubes of the first 21 positive integers
54000 to 54999
54205 = Zeisel number
54688 = 2-automorphic number
54748 = narcissistic number
54872 = 38^3, palindromic in base 9 (83238_9)
54901 = chiliagonal number
55000 to 55999
55296 = 3-smooth number
55440 = superior highly composite number; colossally abundant number
55459 = one of five remaining Seventeen or Bust numbers in the Sierpiński problem
55555 = repdigit
55860 = harmonic divisor number
55987 = repunit prime in base 6
56000 to 56999
56011 = Wedderburn-Etherington number
56092 = the number of groups of order 256
56169 = 237^2, palindromic in octal (155551_8)
56448 = pentagonal pyramidal number
57000 to 57999
57121 = 239^2, palindromic in base 14 (16B61_14)
58000 to 58999
58081 = 241^2, palindromic in base 15 (12321_15)
58367 = smallest integer that cannot be expressed as a sum of fewer than 1079 tenth powers
58786 = Catalan number
58921 = Friedman prime
59000 to 59999
59049 = 243^2 = 9^5 = 3^10
59051 = Friedman prime
59053 = Friedman prime
59081 = Zeisel number
59263 = Friedman prime
59273 = Friedman prime
59319 = 39^3
59536 = 244^2, palindromic in base 11 (40804_11)
Primes
There are 924 prime numbers between 50000 and 60000.
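The "palindromic in base b" claims above can be checked with a few lines of code; a minimal sketch (the helper names are illustrative):

```python
def digits(n: int, base: int) -> list[int]:
    """Digits of n in the given base, most significant first."""
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(d)
    return out[::-1]

def is_palindromic(n: int, base: int) -> bool:
    ds = digits(n, base)
    return ds == ds[::-1]

assert is_palindromic(50653, 6)   # 37^3 = 1030301 in base 6
assert is_palindromic(54872, 9)   # 38^3 = 83238 in base 9
assert is_palindromic(56169, 8)   # 237^2 = 155551 in octal
```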
https://en.wikipedia.org/wiki/60%2C000
60,000 (sixty thousand) is the natural number that comes after 59,999 and before 60,001. It is a round number. It is the value of φ(F25), the Euler totient function applied to the 25th Fibonacci number, 75,025.
Selected numbers in the range 60,000–69,999
60,001 to 60,999
60,049 = Leyland number
60,101 = smallest prime whose reciprocal has a period of 100
61,000 to 61,999
62,000 to 62,999
62,208 = 3-smooth number
62,210 = Markov number
62,745 = Carmichael number
63,000 to 63,999
63,020 = amicable number with 76,084
63,360 = inches in a mile
63,600 = number of free 12-ominoes
63,750 = pentagonal pyramidal number
63,973 = Carmichael number
64,000 to 64,999
64,000 = 40^3
64,009 = sum of the cubes of the first 22 positive integers
64,079 = Lucas number
64,442 = number of integer-degree intersections on Earth: 360 longitudes × 179 latitudes + 2 poles = 64,442
65,000 to 65,999
65,025 = 255^2, palindromic in base 11 (44944_11)
65,535 = largest value for an unsigned 16-bit integer on a computer
65,536 = 2^16 = 16^4 = 256^2, also 2↑↑4 using Knuth's up-arrow notation; smallest integer with exactly 17 divisors; palindromic in base 15 (14641_15); number of directed graphs on 4 labeled nodes
65,537 = largest known Fermat prime
65,539 = the 6544th prime number, and both 6544 and 65539 have a digital root of 1; a regular prime; the larger member of a twin prime pair; the smaller member of a cousin prime pair; a happy prime; a weak prime; the middle member of a prime triplet (65537, 65539, 65543); the middle member of three primes in arithmetic progression (65521, 65539, 65557)
65,792 = Leyland number
66,000 to 66,999
66,012 = tribonacci number
66,049 = 257^2, palindromic in hexadecimal (10201_16)
66,198 = Giuga number
66,666 = repdigit
67,000 to 67,999
67,081 = 259^2, palindromic in base 6 (1234321_6)
67,171 = 1^6 + 2^6 + 3^6 + 4^6 + 5^6 + 6^6
67,607 = largest of five remaining Seventeen or Bust numbers in the Sierpiński problem
67,626 = pentagonal pyramidal number
68,000 to 68,999
68,921 = 41^3
69,000 to 69,999
69,632 = Leyland number
6
https://en.wikipedia.org/wiki/70%2C000
70,000 (seventy thousand) is the natural number that comes after 69,999 and before 70,001. It is a round number.
Selected numbers in the range 70001–79999
70001 to 70999
71000 to 71999
71656 = pentagonal pyramidal number
72000 to 72999
73000 to 73999
73296 = the smallest number n for which n−3, n−2, n−1, n+1, n+2, and n+3 are all sphenic numbers
73440 = 15 × 16 × 17 × 18
73712 = number of n-queens problem solutions for n = 13
73728 = 3-smooth number
74000 to 74999
74088 = 42^3 = 2^3 × 3^3 × 7^3
74353 = Friedman prime
74897 = Friedman prime
75000 to 75999
75025 = Fibonacci number, Markov number
75361 = Carmichael number
76000 to 76999
76084 = amicable number with 63020
76424 = tetranacci number
77000 to 77999
77777 = repdigit
77778 = Kaprekar number
78000 to 78999
78125 = 5^7
78163 = Friedman prime
78498 = the number of primes under 1,000,000
78557 = conjectured to be the smallest Sierpiński number
78732 = 3-smooth number
79000 to 79999
79507 = 43^3
Primes
There are 902 prime numbers between 70000 and 80000.
https://en.wikipedia.org/wiki/80%2C000
80,000 (eighty thousand) is the natural number after 79,999 and before 80,001.
Selected numbers in the range 80,000–89,999
80,782 = Pell number P14
81,181 = number of reduced trees with 25 nodes
82,000 = the only currently known number greater than 1 that can be written in bases from 2 through 5 using only 0s and 1s
82,025 = number of primes below 2^20
82,467 = number of square (0,1)-matrices without zero rows and with exactly 6 entries equal to 1
82,656 = Kaprekar number: 82656^2 = 6832014336 and 68320 + 14336 = 82656
82,944 = 3-smooth number: 2^10 × 3^4
83,097 = Riordan number
83,160 = highly composite number
83,357 = Friedman prime
83,521 = 17^4
84,187 = number of parallelogram polyominoes with 15 cells
84,672 = number of primitive polynomials of degree 21 over GF(2)
85,085 = product of five consecutive primes: 5 × 7 × 11 × 13 × 17
85,184 = 44^3
86,400 = seconds in a day: 24 × 60 × 60; also a common DNS default time to live
87,360 = unitary perfect number
88,789 = the start of a prime 9-tuple, along with 88793, 88799, 88801, 88807, 88811, 88813, 88817, and 88819
88,888 = repdigit
Primes
There are 876 prime numbers between 80000 and 90000.
80021, 80039, 80051, 80071, 80077, 80107, 80111, 80141, 80147, 80149, 80153, 80167, 80173, 80177, 80191, 80207, 80209, 80221, 80231, 80233, 80239, 80251, 80263, 80273, 80279, 80287, 80309, 80317, 80329, 80341, 80347, 80363, 80369, 80387, 80407, 80429, 80447, 80449, 80471, 80473, 80489, 80491, 80513, 80527, 80537, 80557, 80567, 80599, 80603, 80611, 80621, 80627, 80629, 80651, 80657, 80669, 80671, 80677, 80681, 80683, 80687, 80701, 80713, 80737, 80747, 80749, 80761, 80777, 80779, 80783, 80789, 80803, 80809, 80819, 80831, 80833, 80849, 80863, 80897, 80909, 80911, 80917, 80923, 80929, 80933, 80953, 80963, 80989, 81001, 81013, 81017, 81019, 81023, 81031, 81041, 81043, 81047, 81049, 81071, 81077, 81083, 81097, 81101, 81119, 81131, 81157, 81163, 81173, 81181, 81197, 81199, 81203, 81223, 81233, 81239, 81281, 81283, 81293, 81299, 81
https://en.wikipedia.org/wiki/90%2C000
90,000 (ninety thousand) is the natural number following 89,999 and preceding 90,001. It is the sum of the cubes of the first 24 positive integers, and is the square of 300.
Selected numbers in the range 90,000–99,999
90,625 = the only five-digit automorphic number: 90625^2 = 8212890625
91,125 = 45^3
91,144 = Fine number
92,205 = number of 23-bead necklaces (turning over is allowed) where complements are equivalent
92,706 = solution to the alphametic puzzle KAYAK + KAYAK + KAYAK + KAYAK + KAYAK + KAYAK = SPORT, where each letter represents a digit: KAYAK = 15451, and 6 × 15451 = 92706 = SPORT
93,312 = Leyland number: 6^6 + 6^6; also a 3-smooth number
94,249 = palindromic square: 307^2
94,932 = Leyland number: 7^5 + 5^7
95,121 = Kaprekar number: 95121^2 = 9048004641 and 90480 + 04641 = 95121
95,420 = number of 22-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
96,557 = Markov number: 5^2 + 6466^2 + 96557^2 = 3 × 5 × 6466 × 96557
97,336 = 46^3, the largest 5-digit cube
98,304 = 3-smooth number
99,066 = largest number whose square uses all of the decimal digits once: 99066^2 = 9814072356; it is also strobogrammatic in decimal
99,856 = 316^2, the largest 5-digit square
99,991 = largest five-digit prime number
99,999 = repdigit, Kaprekar number: 99999^2 = 9999800001 and 99998 + 00001 = 99999
Primes
There are 879 prime numbers between 90000 and 100000.
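The Markov entry above can be verified directly against the Markov equation x^2 + y^2 + z^2 = 3xyz; a minimal check (the second assertion uses the standard tree step for generating new triples, which is not mentioned in the article):

```python
# The Markov equation for the triple given above: (5, 6466, 96557).
x, y, z = 5, 6466, 96557
assert x**2 + y**2 + z**2 == 3 * x * y * z

# New Markov triples come from old ones: (x, y, z) -> (x, z, 3xz - y).
assert x**2 + z**2 + (3 * x * z - y)**2 == 3 * x * z * (3 * x * z - y)
```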
https://en.wikipedia.org/wiki/Desorption%20electrospray%20ionization
Desorption electrospray ionization (DESI) is an ambient ionization technique that can be coupled to mass spectrometry (MS) for chemical analysis of samples at atmospheric conditions. Coupled ionization source-MS systems are popular in chemical analysis because the individual capabilities of various sources combined with different MS systems allow for chemical determinations of samples. DESI employs a fast-moving charged solvent stream, directed at an angle relative to the sample surface, to extract analytes from the surface and propel the secondary ions toward the mass analyzer. This tandem technique can be used for forensic analyses and to analyze pharmaceuticals, plant tissues, fruits, intact biological tissues, enzyme-substrate complexes, metabolites and polymers. Therefore, DESI-MS may be applied in a wide variety of sectors including food and drug administration, pharmaceuticals, environmental monitoring, and biotechnology. History DESI has been widely studied since its inception in 2004 by Zoltan Takáts, Justin Wiseman and Bogdan Gologan, in Graham Cooks' group at Purdue University, with the goal of developing methods that did not require the sample to be held in a vacuum. Both DESI and direct analysis in real time (DART) have been largely responsible for the rapid growth in ambient ionization techniques, with more than eighty such techniques in existence today. These methods allow complex systems to be analyzed without sample preparation, at throughputs as high as 45 samples a minute. DESI combines two popular approaches: electrospray ionization and surface desorption techniques. Electrospray ionization with mass spectrometry was reported by Malcolm Dole in 1968, but John Bennett Fenn was awarded a Nobel Prize in Chemistry for the development of ESI-MS in the late 1980s. Then in 1999, desorption of open surface and free matrix experiments were reported in the literature utilizing an experiment that was called desorption/ionization on sil
https://en.wikipedia.org/wiki/Jurjen%20Ferdinand%20Koksma
Jurjen Ferdinand Koksma (21 April 1904, Schoterland – 17 December 1964, Amsterdam) was a Dutch mathematician who specialized in analytic number theory. Koksma received his Ph.D. degree (cum laude) in 1930 at the University of Groningen under supervision of Johannes van der Corput, with a thesis on Systems of Diophantine Inequalities. Around the same time, aged 26, he was invited to become full professor at the Vrije Universiteit Amsterdam. He accepted and in 1930 became the first professor in mathematics at this university. Koksma is also one of the founders of the Dutch Mathematisch Centrum (today Centrum Wiskunde & Informatica). One of Koksma's main works was the book Diophantische Approximationen, published in 1936 by Springer. He also wrote several papers with Paul Erdős. In 1950 he became member of the Royal Netherlands Academy of Arts and Sciences. Koksma had two brothers, Jan and Marten, who were also mathematicians. See also Denjoy–Koksma inequality Koksma's equivalent classification Koksma–Hlawka inequality Erdős–Turán–Koksma inequality
https://en.wikipedia.org/wiki/Wu%27s%20method%20of%20characteristic%20set
Wenjun Wu's method is an algorithm for solving multivariate polynomial equations introduced in the late 1970s by the Chinese mathematician Wen-Tsun Wu. The method is based on the mathematical concept of characteristic set, introduced in the late 1940s by J.F. Ritt. It is fully independent of the Gröbner basis method introduced by Bruno Buchberger (1965), even if Gröbner bases may be used to compute characteristic sets. Wu's method is powerful for mechanical theorem proving in elementary geometry and provides a complete decision process for certain classes of problems. It has been used in research in his laboratory (KLMM, the Key Laboratory of Mathematics Mechanization in the Chinese Academy of Sciences) and around the world. The main trends of research on Wu's method concern systems of polynomial equations of positive dimension and differential algebra, where Ritt's results have been made effective. Wu's method has been applied in various scientific fields, such as biology, computer vision, robot kinematics and especially automatic proofs in geometry. Informal description Wu's method uses polynomial division to solve problems of the form I ⇒ f, where f is a polynomial equation and I is a conjunction of polynomial equations. The algorithm is complete for such problems over the complex domain. The core idea of the algorithm is that you can divide one polynomial by another to give a remainder. Repeated division results either in the remainder vanishing (in which case the "I implies f" statement is true) or in an irreducible remainder being left behind (in which case the statement is false). More specifically, for an ideal I in the ring k[x1, ..., xn] over a field k, a (Ritt) characteristic set C of I is composed of a set of polynomials in I which is in triangular shape: polynomials in C have distinct main variables (see the formal definition below). Given a characteristic set C of I, one can decide if a polynomial f is zero modulo I. That is, the membership test is checkable for I, p
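A minimal sketch of the membership test described above, using successive pseudo-division (SymPy's prem) against a toy triangular set; this illustrates only the reduction step, not the computation of a characteristic set, and the polynomials are invented for the example:

```python
from sympy import symbols, prem, expand

x1, x2 = symbols('x1 x2')

# A toy triangular (characteristic-like) set: g1 has main variable x1,
# g2 has main variable x2.
g1 = x1**2 - 2
g2 = x2**2 - x1

def pseudo_reduce(f, triangular_set, main_vars):
    """Successively pseudo-divide f by the set, eliminating main variables
    from the highest to the lowest; the final pseudo-remainder is 0 iff
    (up to initials) f vanishes on the zero set of the triangular set."""
    r = f
    for g, v in reversed(list(zip(triangular_set, main_vars))):
        r = prem(r, g, v)
    return expand(r)

# x2^4 - 2 vanishes wherever x1^2 = 2 and x2^2 = x1, so it reduces to zero.
print(pseudo_reduce(x2**4 - 2, [g1, g2], [x1, x2]))   # 0
print(pseudo_reduce(x2**4 - 3, [g1, g2], [x1, x2]))   # -1 (nonzero remainder)
```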
https://en.wikipedia.org/wiki/Error%20guessing
In software testing, error guessing is a test method in which test cases used to find bugs in programs are established based on experience in prior testing. The scope of test cases usually relies on the software tester involved, who uses experience and intuition to determine what situations commonly cause software failure or may cause errors to appear. Typical errors include division by zero, null pointers, or invalid parameters. Error guessing has no explicit rules for testing; test cases can be designed depending on the situation, either drawing from functional documents or from unexpected/undocumented errors found while testing operations.
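As an illustration only (the function and test names are hypothetical), error-guessing-style test cases probing the typical errors listed above might look like this in pytest:

```python
import pytest

def safe_ratio(a, b):
    """Hypothetical function under test: returns a/b."""
    return a / b

# Error guessing: probe inputs that experience says commonly break code,
# such as division by zero, null arguments, and invalid parameter types.
def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        safe_ratio(1, 0)

def test_null_argument():
    with pytest.raises(TypeError):
        safe_ratio(None, 2)

def test_invalid_parameter_type():
    with pytest.raises(TypeError):
        safe_ratio("1", 2)
```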
https://en.wikipedia.org/wiki/Lule%C3%A5%20algorithm
The Luleå algorithm of computer science, designed by Degermark et al. (1997), is a technique for storing and searching internet routing tables efficiently. It is named after the Luleå University of Technology, the home institute/university of the technique's authors. The name of the algorithm does not appear in the original paper describing it, but was used in a message from Craig Partridge to the Internet Engineering Task Force describing that paper prior to its publication. The key task to be performed in internet routing is to match a given IPv4 address (viewed as a sequence of 32 bits) to the longest prefix of the address for which routing information is available. This prefix matching problem may be solved by a trie, but trie structures use a significant amount of space (a node for each bit of each address), and searching them requires traversing a sequence of nodes with length proportional to the number of bits in the address. The Luleå algorithm shortcuts this process by storing only the nodes at three levels of the trie structure, rather than storing the entire trie. Before building the Luleå trie, the routing table entries need to be preprocessed. Any bigger prefix that overlaps a smaller prefix must be repeatedly split into smaller prefixes, and only the split prefixes that do not overlap the smaller prefix are kept. The prefix tree is also required to be complete: if the routing table entries do not cover the entire address space, it must be completed by adding dummy entries, which carry only the information that no route is present for that range. This enables the simplified lookup in the Luleå trie. The main advantage of the Luleå algorithm for the routing task is that it uses very little memory, averaging 4–5 bytes per entry for large routing tables. This small memory footprint often allows the entire data structure to fit into the routing processor's cache, speeding operations. However, it has the disadvantage that it cannot be modified easily: small changes t
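The core space-saving idea, a bit vector plus a packed array indexed by counting set bits, can be sketched for a single toy trie level as follows; this is a simplified illustration, not the actual three-level Luleå structure with its codewords and precomputed base indexes:

```python
# Simplified sketch of the Luleå space-saving trick for one trie level:
# a bit vector marks which of the 2^K chunks have routing information,
# and a packed array stores data only for the marked chunks. The index
# into the packed array is the popcount of the bits before the chunk.
K = 8                              # 2^8 chunks in this toy level

bitmap = [0] * (1 << K)
packed = []                        # next-hop data, only for set bits

def insert(chunk: int, next_hop: str):
    # Toy insert: assumes chunks are inserted in increasing order.
    bitmap[chunk] = 1
    packed.append(next_hop)

def lookup(chunk: int) -> str | None:
    if not bitmap[chunk]:
        return None                # covered by a dummy/default entry
    index = sum(bitmap[:chunk])    # popcount of preceding bits
    return packed[index]

insert(3, "gateway-A")
insert(200, "gateway-B")
assert lookup(3) == "gateway-A" and lookup(200) == "gateway-B"
assert lookup(42) is None
```

The real structure makes the popcount O(1) by storing precomputed per-block counts, which is where the 4-5 bytes per entry figure comes from.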
https://en.wikipedia.org/wiki/Chilling%20requirement
The chilling requirement of a fruit is the minimum period of cold weather after which a fruit-bearing tree will blossom. It is often expressed in chill hours, which can be calculated in different ways, all of which essentially involve adding up the total amount of time in a winter spent at certain temperatures. Some bulbs have chilling requirements to bloom, and some seeds have chilling requirements to sprout. Biologically, the chilling requirement is a way of ensuring that vernalization occurs. Chilling units or chilling hours A chilling unit in agriculture is a metric of a plant's exposure to chilling temperatures. Chilling temperatures extend from the freezing point up to, depending on the model, about 7 °C (45 °F) or even higher. Stone fruit trees and certain other plants of temperate climate develop next year's buds in the summer. In the autumn the buds become dormant, and the switch to proper, healthy dormancy is triggered by a certain minimum exposure to chilling temperatures. Lack of such exposure results in delayed and substandard foliation, flowering and fruiting. One chilling unit, in the simplest models, is equal to one hour's exposure to the chilling temperature; these units are summed up for a whole season (see the sketch below). Advanced models assign different weights to different temperature bands. Requirements According to Fishman, chilling in trees acts in two stages. The first is reversible: chilling helps to build up the precursor to dormancy, but the process can be easily reversed with a rise in temperature. After the level of precursor reaches a certain threshold, dormancy becomes irreversible and will not be affected by short-term warm temperature peaks. Apples have the highest chilling requirements of all fruit trees, followed by apricots and, lastly, peaches. Apple cultivars have a diverse range of permissible minimum chilling: most have been bred for temperate weather, but Gala and Fuji can be successfully grown in subtropical Bakersfield, California. Peach cultivars in Texas range in
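A minimal sketch of the two kinds of models described above; the thresholds and weights are illustrative assumptions (the 0-7.2 °C band is the classic chill-hours convention), not a published model:

```python
# Simplest model: one chilling unit per hour spent in the chilling band.
def chill_hours(hourly_temps_c, low=0.0, high=7.2):
    return sum(1 for t in hourly_temps_c if low <= t <= high)

# Advanced models assign different weights to different temperature bands
# (the bands and weights below are invented for illustration only).
def weighted_chill_units(hourly_temps_c):
    units = 0.0
    for t in hourly_temps_c:
        if 2.5 <= t <= 9.0:
            units += 1.0      # most effective band
        elif 0.0 <= t < 2.5 or 9.0 < t <= 12.5:
            units += 0.5      # partially effective bands
    return units

season = [1.0, 3.5, 6.0, 8.0, 11.0, 14.0, -2.0]   # hourly temperatures, °C
print(chill_hours(season), weighted_chill_units(season))
```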
https://en.wikipedia.org/wiki/Otto%20the%20Orange
Otto the Orange is the mascot for the Syracuse Orange, the athletic teams of Syracuse University in Syracuse, New York, USA. Otto is an anthropomorphism of the citrus fruit, wearing a large blue hat and blue pants. Otto can often be seen at Syracuse sporting events in the JMA Wireless Dome and other venues. Mascot history Saltine Warrior The Syracuse mascot was originally a Native American character named "The Saltine Warrior" (Syracuse's unofficial nickname is the Salt City) and "Big Chief Bill Orange". The character was born out of a hoax in which it was claimed that a 16th-century Onondaga chief was unearthed while digging the foundation for the women's gymnasium in 1928. In the mid-1950s, the father of a Lambda Chi Alpha fraternity brother owned a cheerleading camp. He made a Saltine Warrior costume for his son to wear at SU football games. Thus began a nearly forty-year tradition of Lambda Chi brothers serving as SU's mascot. In 1990, however, the University opened up the mascot tradition to the entire student body (Daily Orange, February 22, 1990). In December 1977, Native American students successfully petitioned the University to discontinue the Saltine Warrior, citing the mascot's stereotypical portrayal of Native Americans. The mascot was discontinued in 1978. During the 1978 season, the University introduced a Roman gladiator dressed in orange armor, but the idea proved largely unpopular among fans, who regularly booed the mascot. Otto becomes official In the 1980s, a new Syracuse University mascot emerged and was described by Sports Illustrated in 1984 as a "juiced-up, bumbling citrus fruit from which two legs protrude", and it quickly became popular on campus. Then, the mascot was simply known as "the Orange", and was designed and crafted by Eric Heath, an SU cheerleader, according to the SU Archives. Early on the mascot had multiple monikers, including Clyde and Woody. In the summer of 1990, the cheerleaders and mascots were at Cheerleading Camp in T
https://en.wikipedia.org/wiki/Printing%20and%20writing%20paper
Printing and writing papers are paper grades used for newspapers, magazines, catalogs, books, notebooks, commercial printing, business forms, stationery, copying and digital printing. About one-third of the total pulp and paper market (in 2000) was printing and writing papers. The pulp or fibers used in printing and writing papers are extracted from wood using a chemical or mechanical process. Paper standards ISO 216:2007 is the current international standard for paper sizes, including writing papers and some types of printing papers. This standard describes paper sizes under what the ISO calls the A, B, and C series formats. Not all countries follow ISO 216. North America, for instance, uses its own terms to describe paper sizes, such as Letter, Legal, Junior Legal, and Ledger or Tabloid. Most types of printing papers also do not follow ISO standards but have features that conform with leading industry standards. These include, among others, ink adhesion, light sensitivity, waterproofing, compatibility with thermal or PSA overlaminate, and glossy or matte finish. Additionally, the American National Standards Institute (ANSI) also defined a series of paper sizes, with size A being the smallest and E the largest. These paper sizes have aspect ratios of 1:1.2941 and 1:1.5455. Vietnam Types Fine paper Machine finished coated paper Newsprint History The history of paper is often attributed to the Han dynasty (25-220 AD), when Cai Lun, a Chinese court official and inventor, made paper sheets using the "bark of trees, remnants of hemp, rags of cloth, and fishing nets." Cai Lun's method of papermaking received praise during his time for offering a more convenient alternative to writing on silk or bamboo tablets, which were the traditional materials in ancient Chinese writing. On the other hand, archeological evidence supports that the ancient Chinese military had used paper over a hundred years before Cai Lun's contribution and that maps from early 2nd century B
https://en.wikipedia.org/wiki/Kaspersky%20Internet%20Security
Kaspersky Internet Security (often abbreviated to KIS) was an internet security suite developed by Kaspersky Lab, compatible with Microsoft Windows and Mac OS X. Kaspersky Internet Security offers protection from malware, as well as email spam, phishing and hacking attempts, and data leaks. Kaspersky Lab Diagnostics results are distributed to relevant developers through the MIT License. Windows edition Version 2007 (6.0) Version 6.0 was the first release of KIS. PC World magazine praised version 6.0's detection of malware. KIS detected 100 percent of threats on a subset of the January 2006 wild-list, a list of prevalent threats. The suite detected almost 100 percent (99.57%) of adware samples. KIS has the ability to scan within compressed or packed files, detecting 83.3 percent of the "hidden" malware. However, version 6.0 was criticized for not completely removing malware, leaving Registry entries and files behind. PC World also highlighted the suite's false positives (eight of 20,000 clean files were incorrectly flagged as malicious) and its noticeable impact on computer performance. However, data is cached from each scan, making each subsequent scan faster. The firewall blocked all attacks from inside and outside the computer when tested. The magazine found the graphical user interface awkward to navigate. Features such as parental controls and instant messaging protection, found in competing suites from Symantec and McAfee, were not a part of version 6.0. Both CNET and PC World criticized the suite's relatively high retail price, US$79.95. KIS 6.0 supports Windows 98 SE, ME, NT Workstation 4.0, 2000 Professional, XP Home Edition, XP Professional, XP Professional x64, and Vista. 50 megabytes of free space, Internet Explorer 5.5, and Windows Installer 2.0 are required. RAM and CPU requirements are dependent on the operating system. Version 2008 (7.0) Version 7.0 introduced a redesigned GUI. Components were renamed and reorganized; the Anti-hacker module was
https://en.wikipedia.org/wiki/Souders%E2%80%93Brown%20equation
The Souders–Brown equation (named after Mott Souders and George Granger Brown) has been a tool for obtaining the maximum allowable vapor velocity in vapor–liquid separation vessels (variously called flash drums, knockout drums, knockout pots, compressor suction drums and compressor inlet drums). It has also been used for the same purpose in designing trayed fractionating columns, trayed absorption columns and other vapor–liquid-contacting columns. A vapor–liquid separator drum is a vertical vessel into which a liquid and vapor mixture (or a flashing liquid) is fed and wherein the liquid is separated by gravity, falls to the bottom of the vessel, and is withdrawn. The vapor travels upward at a design velocity which minimizes the entrainment of any liquid droplets in the vapor as it exits the top of the vessel. Use The diameter of a vapor–liquid separator drum is dictated by the expected volumetric flow rate of vapor and liquid from the drum. The following sizing methodology is based on the assumption that those flow rates are known. Use a vertical pressure vessel with a length–diameter ratio of about 3 to 4, and size the vessel to provide about 5 minutes of liquid inventory between the normal liquid level and the bottom of the vessel (with the normal liquid level being somewhat below the feed inlet). Calculate the maximum allowable vapor velocity in the vessel by using the Souders–Brown equation: v = k √[(ρL − ρV) / ρV], where v is the maximum allowable vapor velocity, ρL and ρV are the liquid and vapor densities, and k is an empirical vapor velocity factor. Then the cross-sectional area of the drum can be found from A = Q / v, where Q is the volumetric vapor flow rate. And the drum diameter is D = √(4A / π). The drum should have a vapor outlet at the top, liquid outlet at the bottom, and feed inlet at about the half-full level. At the vapor outlet, provide a de-entraining mesh pad within the drum such that the vapor must pass through that mesh before it can leave the drum. Depending upon how much liquid flow is expected, the liquid outlet line should probably have a liquid level control valve. As for the mechanical design of the drum (materials of construction, wall thickness, corrosio
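The three sizing equations translate directly into code; a minimal sketch, with the k value an assumption (0.107 m/s is a commonly quoted figure for vessels with a mesh pad):

```python
import math

def drum_diameter(vapor_flow_m3s, rho_liquid, rho_vapor, k=0.107):
    """Vertical separator sizing sketch using the Souders-Brown equation.
    k (m/s) is an empirical vapor velocity factor; 0.107 m/s is used here
    as an illustrative assumption, not a recommendation."""
    # Maximum allowable vapor velocity, m/s:
    v_max = k * math.sqrt((rho_liquid - rho_vapor) / rho_vapor)
    # Required cross-sectional area, m^2:
    area = vapor_flow_m3s / v_max
    # Drum diameter, m:
    return math.sqrt(4 * area / math.pi)

# Example: 2 m^3/s of vapor, liquid at 960 kg/m^3, vapor at 2.5 kg/m^3.
print(round(drum_diameter(2.0, 960.0, 2.5), 2), "m")   # ~1.1 m
```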
https://en.wikipedia.org/wiki/Metagame%20analysis
Metagame analysis involves framing a problem situation as a strategic game in which participants try to realise their objectives by means of the options available to them. The subsequent meta-analysis of this game gives insight into possible strategies and their outcomes. Origin Metagame theory was developed by Nigel Howard in the 1960s as a reconstruction of mathematical game theory on a non-quantitative basis, hoping that it would thereby make more practical and intuitive sense. Metagame analysis reflects on a problem in terms of decision issues, and stakeholders who may exert different options to gain control over these issues. The analysis reveals what likely scenarios exist, and who has the power to control the course of events. The practical application of metagame theory is based on the analysis of options method, first applied to study problems like the strategic arms race and nuclear proliferation. Method Metagame analysis proceeds in three phases: analysis of options, scenario development, and scenario analysis. Analysis of options The first phase, analysis of options, consists of the following four steps: Structure the problem by identifying the issues to be decided. Identify the stakeholders who control the issues, either directly or indirectly. Make an inventory of policy options by means of which the stakeholders control the issues. Determine the dependencies between the policy options. The dependencies between options should typically be formulated as "option X can only be implemented if option Y is also implemented", or "options Y and Z are mutually exclusive". The result is a metagame model, which can then be analysed in different ways (one way to enumerate its scenarios is sketched below). Scenario development The possible outcomes of the game, based on the combination of options, are called scenarios. In theory, in a game with N stakeholders s1, ..., sN who have Oi options each (i = 1, ..., N), there are O1 × ... × ON possible outcomes. As the number of stakeholders and the number of the options they have in
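A minimal sketch of the analysis-of-options bookkeeping (the stakeholders, options and rules here are invented for illustration): enumerate the Cartesian product of options and keep only the scenarios that satisfy the dependency rules:

```python
from itertools import product

# Hypothetical model: each stakeholder controls a set of options,
# and a scenario is one choice per stakeholder.
options = {
    "regulator": ["ban", "tax", "ignore"],
    "industry":  ["comply", "relocate"],
    "public":    ["protest", "accept"],
}

# Dependency rules of the kinds described above, encoded as predicates:
# "industry relocates only if the regulator bans" and
# "a ban and public acceptance are mutually exclusive".
rules = [
    lambda s: s["industry"] != "relocate" or s["regulator"] == "ban",
    lambda s: not (s["regulator"] == "ban" and s["public"] == "accept"),
]

names = list(options)
scenarios = []
for combo in product(*options.values()):
    scenario = dict(zip(names, combo))
    if all(rule(scenario) for rule in rules):
        scenarios.append(scenario)

print(len(scenarios), "feasible of", 3 * 2 * 2, "possible outcomes")  # 6 of 12
```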
https://en.wikipedia.org/wiki/Bicentric%20polygon
In geometry, a bicentric polygon is a tangential polygon (a polygon all of whose sides are tangent to an inner incircle) which is also cyclic — that is, inscribed in an outer circle that passes through each vertex of the polygon. All triangles and all regular polygons are bicentric. On the other hand, a rectangle with unequal sides is not bicentric, because no circle can be tangent to all four sides. Triangles Every triangle is bicentric. In a triangle, the radii r and R of the incircle and circumcircle respectively are related by the equation 1/(R − x) + 1/(R + x) = 1/r, where x is the distance between the centers of the circles. This is one version of Euler's triangle formula. Bicentric quadrilaterals Not all quadrilaterals are bicentric (having both an incircle and a circumcircle). Given two circles (one within the other) with radii R and r where R > r, there exists a convex quadrilateral inscribed in one of them and tangent to the other if and only if their radii satisfy 1/(R − x)^2 + 1/(R + x)^2 = 1/r^2, where x is the distance between their centers. This condition (and analogous conditions for higher-order polygons) is known as Fuss' theorem. Polygons with n > 4 A complicated general formula is known, for any number n of sides, for the relation among the circumradius R, the inradius r, and the distance x between the circumcenter and the incenter; explicit versions are known for several specific values of n. Regular polygons Every regular polygon is bicentric. In a regular polygon, the incircle and the circumcircle are concentric — that is, they share a common center, which is also the center of the regular polygon, so the distance between the incenter and circumcenter is always zero. The radius of the inscribed circle is the apothem (the shortest distance from the center to the boundary of the regular polygon). For any regular polygon with n sides, the relations between the common edge length a, the radius r of the incircle, and the radius R of the circumcircle are r = (a/2) cot(π/n) and R = (a/2) csc(π/n), so that r = R cos(π/n). For some regular polygons which can be constructed with compass and ruler, we have
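A quick numeric check of the regular-polygon relations quoted above, for a hexagon (where the circumradius equals the edge length):

```python
import math

# Regular n-gon with edge length a: r = (a/2)/tan(pi/n), R = (a/2)/sin(pi/n).
n, a = 6, 2.0
r = (a / 2) / math.tan(math.pi / n)
R = (a / 2) / math.sin(math.pi / n)
assert math.isclose(r, R * math.cos(math.pi / n))
print(f"inradius r = {r:.4f}, circumradius R = {R:.4f}")  # R = a for a hexagon
```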
https://en.wikipedia.org/wiki/A%20Disappearing%20Number
A Disappearing Number is a 2007 play co-written and devised by the Théâtre de Complicité company and directed and conceived by English playwright Simon McBurney. It was inspired by the collaboration during the 1910s between the pure mathematicians Srinivasa Ramanujan from India and the Cambridge University don G.H. Hardy. It was a co-production between the UK-based theatre company Complicite and Theatre Royal, Plymouth, and Ruhrfestspiele, Wiener Festwochen, and the Holland Festival. A Disappearing Number premiered in Plymouth in March 2007, toured internationally, and played at The Barbican Centre in Autumn 2007 and 2008 and at Lincoln Center in July 2010. It was directed by Simon McBurney with music by Nitin Sawhney. The production is 110 minutes with no intermission. The piece was co-devised and written by the cast and company. The cast in order of appearance: Firdous Bamji, Saskia Reeves, David Annen, Paul Bhattacharjee, Shane Shambu, Divya Kasturi and Chetna Pandya. Plot Ramanujan first attracted Hardy's attention by writing him a letter in which he proved that 1 + 2 + 3 + 4 + ⋯ = −1/12 (ℜ), where the notation (ℜ) indicates a Ramanujan summation. Hardy realised that this confusing presentation of the series 1 + 2 + 3 + 4 + ⋯ was an application of the Riemann zeta function with s = −1. Ramanujan's work became one of the foundations of bosonic string theory, a precursor of modern string theory. The play includes live tabla playing, which "morphs seductively into pure mathematics", as the Financial Times review put it, "especially when … its rhythms shade into chants of number sequences reminiscent of the libretto to Philip Glass's Einstein on the Beach. One can hear the beauty of the sequences without grasping the rules that govern them." The play has two strands of narrative and presents strong visual and physical theatre. It interweaves the passionate intellectual relationship between Hardy and the more intuitive Ramanujan, with the present-day story of Ruth, an English maths lecturer, and
https://en.wikipedia.org/wiki/Alternate-Phase%20Return-to-Zero
Alternate-Phase Return-to-Zero (APRZ) is an optical line code. In APRZ the field intensity drops to zero between consecutive bits, and the field phase alternates between neighbouring bits, so that if the phase of the signal is, for example, 0 in even bits (bit number 2n), the phase in odd bit slots (bit number 2n+1) will be ΔΦ, the phase alternation amplitude. Special cases Return-to-zero (RZ) can be seen as a special case of APRZ in which ΔΦ = 0, while Carrier-Suppressed Return-to-Zero (CSRZ) can be viewed as a special case of APRZ in which ΔΦ = π (and the duty cycle is 67%, at least in the standard form of CSRZ). APRZ can be used to generate specific optical modulation formats, for example APRZ-OOK, in which data is coded on the intensity of the signal using a binary scheme (light on = 1, light off = 0). APRZ is often used to designate APRZ-OOK. Characteristics The characteristic property of an APRZ signal is a spectrum similar to that of an RZ signal, except that frequency peaks are observed at a spacing of BR/2 as opposed to BR (where BR is the bit rate).
https://en.wikipedia.org/wiki/Carrier-Suppressed%20Return-to-Zero
Carrier-Suppressed Return-to-Zero (CSRZ) is an optical line code. In CSRZ the field intensity drops to zero between consecutive bits (RZ), and the field phase alternates by π radians between neighbouring bits, so that if the phase of the signal is e.g. 0 in even bits (bit number 2n), the phase in odd bit slots (bit number 2n+1) will be π. In its standard form CSRZ is generated by a single Mach–Zehnder modulator (MZM), driven by two sinusoidal waves at half the bit rate BR, in phase opposition. This gives rise to characteristically broad pulses (duty cycle 67%). The signal format Alternate-Phase Return-to-Zero (APRZ) can be viewed as a generalisation of CSRZ in which the phase alternation can take any value ΔΦ (not necessarily only π) and the duty cycle is also a free parameter. CSRZ can be used to generate specific optical modulation formats, e.g. CSRZ-OOK, in which data is coded on the intensity of the signal using a binary scheme (light on = 1, light off = 0), or CSRZ-DPSK, in which data is coded on the differential phase of the signal, etc. CSRZ is often used to designate CSRZ-OOK. The characteristic properties of a CSRZ signal are a spectrum similar to that of an RZ signal, except that the frequency peaks (still at a spacing of BR) are shifted by BR/2 with respect to RZ, so that no peak is present at the carrier and power is ideally zero at the carrier frequency (hence the name). Compared to standard RZ-OOK, CSRZ-OOK is considered to be more tolerant to filtering and chromatic dispersion, thanks to its narrower spectrum.
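The carrier suppression can be seen in a toy numeric experiment (illustrative only; an idealized Gaussian pulse shape and an all-ones bit pattern are assumed): flipping the sign of alternating bits moves the spectral energy away from the carrier frequency.

```python
import numpy as np

# Toy illustration of carrier suppression: an RZ pulse train versus the
# same train with a pi phase flip (sign change) on alternating bits.
bits, samples_per_bit = 64, 32
t = np.arange(samples_per_bit) / samples_per_bit
pulse = np.exp(-((t - 0.5) ** 2) / (2 * 0.1 ** 2))   # one RZ-like pulse

rz = np.tile(pulse, bits)
signs = np.repeat((-1.0) ** np.arange(bits), samples_per_bit)
csrz = rz * signs                                     # phase alternation

for name, field in (("RZ", rz), ("CSRZ", csrz)):
    spectrum = np.abs(np.fft.fft(field)) ** 2
    carrier_power = spectrum[0]                       # DC bin = optical carrier
    print(f"{name}: relative carrier power = {carrier_power / spectrum.max():.3f}")
# RZ keeps its strongest peak at the carrier; CSRZ shows ~0 power there,
# with the peaks shifted to +/- BR/2.
```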
https://en.wikipedia.org/wiki/Feynman%20checkerboard
The Feynman checkerboard, or relativistic chessboard model, was Richard Feynman's sum-over-paths formulation of the kernel for a free spin-½ particle moving in one spatial dimension. It provides a representation of solutions of the Dirac equation in (1+1)-dimensional spacetime as discrete sums. The model can be visualised by considering relativistic random walks on a two-dimensional spacetime checkerboard. At each discrete timestep ε the particle of mass m moves a distance εc to the left or right (c being the speed of light). For such a discrete motion, the Feynman path integral reduces to a sum over the possible paths. Feynman demonstrated that if each "turn" (change of moving from left to right or conversely) of the space–time path is weighted by −iεmc^2/ℏ (with ℏ denoting the reduced Planck constant), in the limit of infinitely small checkerboard squares the sum of all weighted paths yields a propagator that satisfies the one-dimensional Dirac equation. As a result, helicity (the one-dimensional equivalent of spin) is obtained from a simple cellular-automata-type rule. The checkerboard model is important because it connects aspects of spin and chirality with propagation in spacetime and is the only sum-over-path formulation in which quantum phase is discrete at the level of the paths, taking only values corresponding to the 4th roots of unity. History Feynman invented the model in the 1940s while developing his spacetime approach to quantum mechanics. He did not publish the result until it appeared in a text on path integrals coauthored by Albert Hibbs in the mid-1960s. The model was not included with the original path-integral article because a suitable generalization to a four-dimensional spacetime had not been found. One of the first connections between the amplitudes prescribed by Feynman for the Dirac particle in 1+1 dimensions, and the standard interpretation of amplitudes in terms of the kernel, or propagator, was established by Jayant Narlikar in a detailed a
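The sum over checkerboard paths can be organized as a small dynamic program; the following toy sketch (sign and normalization conventions are assumptions and vary between treatments) propagates amplitudes indexed by lattice position and direction of motion, applying a weight at each turn:

```python
import numpy as np

# Sum over checkerboard paths by dynamic programming: amplitude indexed by
# (lattice site, direction), with each turn weighted by the factor described
# above (here an assumed small imaginary number standing in for eps*m*c^2/hbar).
steps = 20
w = 0.1j                      # turn weight; sign convention is an assumption

# amp[x, d]: amplitude at site x moving in direction d (0 = left, 1 = right)
amp = np.zeros((2 * steps + 1, 2), dtype=complex)
amp[steps, 1] = 1.0           # start at the origin, moving right

for _ in range(steps):
    new = np.zeros_like(amp)
    new[2:, 1] += amp[1:-1, 1]          # keep moving right: weight 1
    new[:-2, 0] += amp[1:-1, 0]         # keep moving left: weight 1
    new[2:, 1] += w * amp[1:-1, 0]      # turn left -> right: weight w
    new[:-2, 0] += w * amp[1:-1, 1]     # turn right -> left: weight w
    amp = new

print(np.round(np.abs(amp.sum(axis=1)), 4))   # amplitude profile after `steps` steps
```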
https://en.wikipedia.org/wiki/Jigsaw%20puzzle%20accessories
Jigsaw puzzle accessories are the different accessories used by jigsaw puzzle enthusiasts in pursuit of their hobby. History Jigsaw puzzles were made commercially available in England by John Spilsbury around 1760 and have been a widely accepted form of home entertainment in the UK ever since. Jigsaws enjoy similar popularity throughout Europe, and during the American Great Depression jigsaw puzzles sold at the rate of 10 million per week. It is perhaps therefore surprising that companies who produce games and puzzles have been slow to exploit the commercial opportunities afforded by so many enthusiasts who require something on which to construct their jigsaws, along with methods of storing and displaying them. The first references to any kind of jigsaw puzzle accessory can be found around 1900, when a "frame" was first included in Dutch jigsaw puzzle boxes so that a completed puzzle could be permanently saved. The idea was not successful and was soon discontinued. A similar fate befell the mahogany and walnut "puzzle trays" that were advertised in Viking's Picture Puzzle Weekly in America during the 1930s. In the late 1980s, Falcon Games in England decided to tackle the intellectual property issue by way of applying for a trademark, and on 4 August 1989 their self-explanatory Jigroll name was registered (UK Patent Office Reference 1318441). Although many companies have since copied the functionality of the Jigroll, none have been able to give their products the same name, and in jigsaw puzzle parlance "Jigroll" has almost become a generic term for all jigsaw mats and rolls. Falcon enjoyed similar success with the "Porta Puzzle" mark registered on 9 March 1993 (UK Patent Office Reference 1528876) for "Folders and cases made of plastics and/or card for holding and carrying jigsaw puzzles". Since the registration of this mark there have been a number of innovations and improvements to the original design, both by the current owners of the mark and other companies, but coll