https://en.wikipedia.org/wiki/Deep%20inelastic%20scattering | In particle physics, deep inelastic scattering is the name given to a process used to probe the insides of hadrons (particularly the baryons, such as protons and neutrons), using electrons, muons and neutrinos. It was first attempted in the 1960s and 1970s and provided the first convincing evidence of the reality of quarks, which up until that point had been considered by many to be a purely mathematical phenomenon. It is an extension of Rutherford scattering to much higher energies of the scattering particle and thus to much finer resolution of the components of the nuclei.
Henry Way Kendall, Jerome Isaac Friedman and Richard E. Taylor were joint recipients of the 1990 Nobel Prize in Physics "for their pioneering investigations concerning deep inelastic scattering of electrons on protons and bound neutrons, which have been of essential importance for the development of the quark model in particle physics."
Description
To explain each part of the terminology, "scattering" refers to the lepton's (electron, muon, etc.) deflection. Measuring the angles of deflection gives information about the nature of the process. "Inelastic" means that the target absorbs some kinetic energy. In fact, at the very high energies of leptons used, the target is "shattered" and emits many new particles. These particles are hadrons and, to oversimplify greatly, the process is interpreted as a constituent quark of the target being "knocked out" of the target hadron, and due to quark confinement, the quarks are not actually observed but instead produce the observable particles by hadronization. The "deep" refers to the high energy of the lepton, which gives it a very short wavelength and hence the ability to probe distances that are small compared with the size of the target hadron, so it can probe "deep inside" the hadron. Also, note that in the perturbative approximation it is a high-energy virtual photon emitted from the lepton and absorbed by the target hadron which transfers energy to one o |
https://en.wikipedia.org/wiki/Inoculation%20needle | An inoculation needle is a piece of laboratory equipment used in the field of microbiology to transfer and inoculate living microorganisms. It is one of the most commonly used biological laboratory tools and can be disposable or reusable. A standard reusable inoculation needle is made from nichrome or platinum wire affixed to a metallic handle. A disposable inoculation needle is often made from plastic resin. The base of the needle is dulled, resulting in a blunted end.
Uses
Inoculation needles are primarily applied in microbiology for studying bacteria and fungi on semi-solid media. Biotechnology, cell biology and immunology may also utilize needle-oriented culture methods.
Inoculation needles are chiefly used to inoculate and isolate well-defined regions of a culture, and where minimal disturbance between closely crowded microbial colonies is required. They can also be used for harpooning under a low-magnification microscope.
Streaking on streak plates, fish-tail inoculation of slant cultures, and the inoculation of stab cultures can all be done with the inoculation needle. Stab cultures specifically require the inoculation needle and are used to study cell motility, microbial oxygen requirements (using thioglycolate cultures), and gelatin liquefaction by bacteria.
Operation
Sterilization
The inoculation needle is sterilized using the aseptic technique. An open flame from an incinerator, a bunsen burner, or an alcohol burner is used to flame along the tip and the length of the needle that is to be in contact with the inoculum (or the propagule). For ease of manipulation it is common practice to hold the needle with the dominant hand as if handling a pencil. The needle will be flamed at a downward angle through the flame's inner cone until it is red-hot. The downward angle will minimize the amount of microbial aerosols created.
Inoculation needles must be sterilized prior and following contact with microbial life forms to ensure no contam |
https://en.wikipedia.org/wiki/Mason%27s%20mark | A mason's mark is an engraved symbol often found on dressed stone in buildings and other public structures.
In stonemasonry
Regulations issued in Scotland in 1598 by James VI's Master of Works, William Schaw, stated that on admission to the guild, every mason had to enter his name and his mark in a register. There are three types of marks used by stonemasons.
Banker marks were made on stones before they were sent to be used by the walling masons. These marks served to identify the banker mason who had prepared the stones to their paymaster. This system was employed only when the stone was paid for by measure, rather than by time worked. For example, the 1306 contract between Richard of Stow, mason, and the Dean and Chapter of Lincoln Cathedral, specified that the plain walling would be paid for by measure, and indeed banker marks are found on the blocks of walling in this cathedral. Conversely, the masons responsible for walling the eastern parts of Exeter Cathedral were paid by the week, and consequently few banker marks are found on this part of the cathedral. Banker marks make up the majority of masons' marks, and are generally what are meant when the term is used without further specification.
Assembly marks were used to ensure the correct installation of important pieces of stonework. For example, the stones on the window jambs in the chancel of North Luffenham church in Rutland are each marked with a Roman numeral, directing the order in which the stones were to be installed.
Quarry marks were used to identify the source of a stone, or occasionally its quality.
In Freemasonry
Freemasonry, a fraternal order that uses an analogy to stonemasonry for much of its structure, also makes use of marks. A Freemason who takes the degree of Mark Master Mason will be asked to create his own Mark, as a type of unique signature or identifying badge. Some of these can be quite elaborate.
Gallery of mason's marks
See also
Benchmark (surveying)
Builder's signature
|
https://en.wikipedia.org/wiki/African%20swine%20fever%20virus | African swine fever virus (ASFV) is a large, double-stranded DNA virus in the Asfarviridae family. It is the causative agent of African swine fever (ASF). The virus causes a hemorrhagic fever with high mortality rates in domestic pigs; some isolates can cause death of animals as quickly as a week after infection. It persistently infects its natural hosts, warthogs, bushpigs, and soft ticks of the genus Ornithodoros, which likely act as a vector, with no disease signs. It does not cause disease in humans. ASFV is endemic to sub-Saharan Africa and exists in the wild through a cycle of infection between ticks and wild pigs, bushpigs, and warthogs. The disease was first described after European settlers brought pigs into areas endemic with ASFV, and as such, is an example of an emerging infectious disease.
ASFV replicates in the cytoplasm of infected cells. It is the only virus with a double-stranded DNA genome known to be transmitted by arthropods.
Virology
ASFV is a large (175–215 nm), icosahedral, double-stranded DNA virus with a linear genome of 189 kilobases containing more than 180 genes. The number of genes differs slightly among different isolates of the virus. ASFV has similarities to the other large DNA viruses, e.g., poxvirus, iridovirus, and mimivirus. In common with other viral hemorrhagic fevers, the main target cells for replication are those of the monocyte and macrophage lineage. Entry of the virus into the host cell is receptor-mediated, but the precise mechanism of endocytosis is presently unclear.
The virus encodes enzymes required for replication and transcription of its genome, including elements of a base excision repair system, structural proteins, and many proteins that are not essential for replication in cells, but instead have roles in virus survival and transmission in its hosts. Virus replication takes place in perinuclear factory areas. It is a highly orchestrated process with at least four stages of transcription—immediate-early, early, in |
https://en.wikipedia.org/wiki/Human%20Genetic%20Diversity%3A%20Lewontin%27s%20Fallacy | "Human Genetic Diversity: Lewontin's Fallacy" is a 2003 paper by A. W. F. Edwards. He criticises an argument first made in Richard Lewontin's 1972 article "The Apportionment of Human Diversity", that the practice of dividing humanity into races is taxonomically invalid because any given individual will often have more in common genetically with members of other population groups than with members of their own. Edwards argued that this does not refute the biological reality of race since genetic analysis can usually make correct inferences about the perceived race of a person from whom a sample is taken, and that the rate of success increases when more genetic loci are examined.
Edwards' paper was reprinted, commented upon by experts such as Noah Rosenberg, and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a 2018 anthology. Edwards' critique is discussed in a number of academic and popular science books, with varying degrees of support.
Some scholars, including Winther and Jonathan Marks, dispute the premise of "Lewontin's fallacy", arguing that Edwards' critique does not actually contradict Lewontin's argument. A 2007 paper in Genetics by David J. Witherspoon et al. concluded that the two arguments are in fact compatible, and that Lewontin's observation about the distribution of genetic differences across ancestral population groups applies "even when the most distinct populations are considered and hundreds of loci are used".
Lewontin's argument
In the 1972 study "The Apportionment of Human Diversity", Richard Lewontin performed a fixation index (FST) statistical analysis using 17 markers, including blood group proteins, from individuals across classically defined "races" (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines). He found that the majority of the total genetic variation between humans (i.e., of the 0.1% of DNA that varies between individuals), 85.4%, is |
https://en.wikipedia.org/wiki/Posynomial | A posynomial, also known as a posinomial in some literature, is a function of the form
$f(x_1, x_2, \dots, x_n) = \sum_{k=1}^{K} c_k x_1^{a_{1k}} \cdots x_n^{a_{nk}}$
where all the coordinates $x_i$ and coefficients $c_k$ are positive real numbers, and the exponents $a_{ik}$ are real numbers. Posynomials are closed under addition, multiplication, and nonnegative scaling.
For example,
$f(x_1, x_2, x_3) = 2.7\, x_1^2 x_2^{-1/3} x_3^{0.7} + 2\, x_1 x_3^{-0.4}$
is a posynomial.
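A minimal Python sketch (ours, not from the article) that evaluates a posynomial from coefficient and exponent data, checking the positivity restrictions noted above:
import math

def posynomial(x, coeffs, exponents):
    """Evaluate sum_k c_k * prod_i x_i**a[k][i] at a positive point x."""
    assert all(xi > 0 for xi in x) and all(c > 0 for c in coeffs)
    return sum(c * math.prod(xi ** a for xi, a in zip(x, alpha))
               for c, alpha in zip(coeffs, exponents))

# the example above: 2.7*x1^2*x2^(-1/3)*x3^0.7 + 2*x1*x3^(-0.4)
print(posynomial([1.0, 8.0, 1.0], [2.7, 2.0],
                 [(2.0, -1/3, 0.7), (1.0, 0.0, -0.4)]))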
Posynomials are not the same as polynomials in several independent variables. A polynomial's exponents must be non-negative integers, but its independent variables and coefficients can be arbitrary real numbers; on the other hand, a posynomial's exponents can be arbitrary real numbers, but its independent variables and coefficients must be positive real numbers. This terminology was introduced by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener in their seminal book on geometric programming.
Posynomials are a special case of signomials, the latter not having the restriction that the coefficients $c_k$ be positive. |
https://en.wikipedia.org/wiki/Darwin%20Mounds | Darwin Mounds is a large field of undersea sand mounds situated off the north-west coast of Scotland that were first discovered in May 1998. They provide a unique habitat for ancient deep-water coral reefs and were found using remote-sensing techniques during surveys funded by the oil industry and steered by the joint industry and United Kingdom government group, the Atlantic Frontier Environment Network (AFEN) (Masson and Jacobs 1998). The mounds were named after the research vessel RRS Charles Darwin, itself named for the eminent naturalist and evolutionary theorist Charles Darwin.
The mounds are about 1,000 m below the surface of the North Atlantic Ocean, approximately 185 km north-west of Cape Wrath, the north-west tip of mainland Scotland. There are hundreds of mounds in the field, which in total cover approximately 10 km². Individual mounds are typically circular, up to 5 m high and 100 m wide. Most of the mounds are also distinguished by the presence of an additional feature referred to as a 'tail'. The tails are of a variable extent and may merge with others, but are generally a teardrop shape and are orientated south-west of the mound. The mound-tail feature of the Darwin Mounds is apparently unique globally.
Composition
The mounds are mostly sand, currently interpreted as "sand volcanoes". These features are caused when fluidised sand "de-waters" and the fluid bubbles up through the sand, pushing the sediment up into a cone shape. Sand volcanoes are common in the Devonian fossil record in the UK, and in seismically active areas of the planet. In this case, tectonic activity is unlikely; some form of slumping on the south-west side of the undersea (Wyville-Thomson) Ridge is a more likely cause. The tops of the mounds have living stands of Lophelia and blocky rubble (interpreted as coral debris). The mounds provide one of the largest known northerly cold-water habitats for coral species. The mounds are also unusual in that Lophelia pertusa, a cold-water coral, appears to be growing on sand rather than a |
https://en.wikipedia.org/wiki/Chondritic%20uniform%20reservoir | The CHondritic Uniform Reservoir (CHUR) is a scientific model in astrophysics and geochemistry for the mean chemical composition of the part of the Solar Nebula from which, during the formation of the Solar System, chondrites formed. This hypothetical chemical reservoir is thought to have been similar in composition to the current photosphere of the Sun.
When the Sun formed from its protostar, around 4.56 billion years ago, the solar wind blew all gas particles from the central part of the Solar Nebula. In this way most lighter volatiles (e.g. hydrogen, helium, oxygen, carbon dioxide) that had not yet condensed in the inner, warmer regions of the nebula were lost. This fractionation process is the reason why the terrestrial planets and the asteroid belt are relatively enriched in heavy elements with respect to the Sun or the gas planets.
Certain types of meteorites, the CI chondrites, have chemical compositions that are almost identical to that of the solar photosphere, except for the abundances of volatiles. Because the Sun contains 99.86% of the mass of the Solar System, they are considered to have the same composition as the solar nebula (with the exception of volatile loss) and are therefore representative of the material from which the terrestrial planets, including the Earth, were formed.
See also
Chondrite
Nebular hypothesis
|
https://en.wikipedia.org/wiki/Janez%20Lawson | Janez Yvonne Lawson Bordeaux (February 22, 1930 – November 24, 1990) was an American chemical engineer who became one of NASA's computers. She was the first African-American hired into a technical position at Jet Propulsion Laboratory. She programmed the IBM 701.
Early life and education
Lawson was born on February 22, 1930, in Santa Monica, California. Her parents were Hilliard Lawson and Bernice Lawson. She attended Belmont High School and graduated in 1948. Lawson completed a bachelor's degree in chemical engineering at the University of California, Los Angeles in 1952. She was a straight-A student and President of the Delta Sigma Theta sorority.
Career
Despite her qualifications, Lawson could not get work as a chemical engineer because of her race and gender. She saw an advertisement for a job as a computer in Pasadena. There was discussion about whether or not she should get the job, but Macie Roberts stood up for her. Lawson got the job, becoming the first African-American hired into a technical position at the Jet Propulsion Laboratory, and in 1953 she was one of the first Jet Propulsion Laboratory employees to be sent to a training course at IBM. She became skilled at programming during the course, using a keypunch and learning speedcoding, and was promoted to mathematician in 1954. Lawson lived in Los Angeles and would commute for over an hour to the Jet Propulsion Laboratory every day. She joined the Ramo-Wooldridge Corporation in the late 1950s. |
https://en.wikipedia.org/wiki/Ekiga | Ekiga (formerly called GnomeMeeting) is a VoIP and video conferencing application for GNOME and Microsoft Windows. It is distributed as free software under the terms of the GNU GPL-2.0-or-later. It was the default VoIP client in Ubuntu until October 2009, when it was replaced by Empathy. Ekiga supports both the SIP and H.323 (based on OPAL) protocols and is fully interoperable with any other SIP compliant application and with Microsoft NetMeeting. It supports many high-quality audio and video codecs.
Ekiga was initially written by Damien Sandras in order to graduate from the University of Louvain (UCLouvain). It is currently developed by a community-based team led by Sandras. The logo was designed based on his concept by Andreas Kwiatkowski.
Ekiga.net was also a free and private SIP registrar, which enabled its members to originate and terminate (receive) calls from and to each other directly over the Internet.
The service was discontinued at the end of 2018.
Features
Features of Ekiga include:
Integration
Ekiga is integrated with a number of different software packages and protocols, such as LDAP directory registration and browsing, along with support for Novell Evolution so that contacts are shared between both programs, and zeroconf (Apple Bonjour) support. It auto-detects devices, including USB, ALSA and legacy OSS soundcards, Video4Linux devices and FireWire cameras.
User interface
Ekiga supports a contact-list-based interface with presence support and custom messages. It allows monitoring of contacts and viewing of call history, along with an address book, dialpad, and chat window. SIP URLs and H.323/callto support is built in, along with full-screen videoconferencing (accelerated using a graphics card).
Technical features
Call forwarding on busy, no answer, always (SIP and H.323)
Call transfer (SIP and H.323)
Call hold (SIP and H.323)
DTMF support (SIP and H.323)
Basic instant messaging (SIP)
Text chat (SIP and H.323)
Register with several regi |
https://en.wikipedia.org/wiki/Bouchard%27s%20nodes | Bouchard's nodes are hard, bony outgrowths or gelatinous cysts on the proximal interphalangeal joints (the middle joints of fingers or toes). They are seen in osteoarthritis, where they are caused by formation of calcific spurs of the articular (joint) cartilage. Much less commonly, they may be seen in rheumatoid arthritis, where nodes are caused by antibody deposition to the synovium.
Bouchard's nodes are comparable in presentation to Heberden's nodes, which are similar osteoarthritic growths on the distal interphalangeal joints, but are significantly less common.
Eponym
Bouchard's nodes are named after French pathologist Charles Jacques Bouchard (1837–1915).
See also
Heberden's node |
https://en.wikipedia.org/wiki/Progressive%20Graphics%20File | PGF (Progressive Graphics File) is a wavelet-based bitmapped image format that employs lossless and lossy data compression. PGF was created to improve upon and replace the JPEG format. It was developed at the same time as JPEG 2000 but with a focus on speed over compression ratio.
PGF can operate at higher compression ratios without taking more encoding/decoding time and without generating the characteristic "blocky and blurry" artifacts of the original DCT-based JPEG standard. It also allows more sophisticated progressive downloads.
Color models
PGF supports a wide variety of color models:
Grayscale with 1, 8, 16, or 31 bits per pixel
Indexed color with palette size of 256
RGB color image with 12, 16 (red: 5 bits, green: 6 bits, blue: 5 bits), 24, or 48 bits per pixel
ARGB color image with 32 bits per pixel
L*a*b color image with 24 or 48 bits per pixel
CMYK color image with 32 or 64 bits per pixel
Technical discussion
PGF claims to achieve improved compression quality over JPEG while adding or improving features such as scalability. Its compression performance is similar to the original JPEG standard. Very low and very high compression rates (including lossless compression) are also supported in PGF. The ability of the design to handle a very large range of effective bit rates is one of the strengths of PGF. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it — something that is ordinarily not necessary for that purpose when using PGF because of its wavelet scalability properties.
The PGF process chain contains the following four steps; a sketch of the wavelet step follows the list:
Color space transform (in case of color images)
Discrete Wavelet Transform
Quantization (in case of lossy data compression)
Hierarchical bit-plane run-length encoding
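As an illustration of the wavelet step only (Haar is the simplest wavelet, not necessarily the filter PGF itself uses), a one-level 2-D transform in Python might look like this:
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform of an even-sized grayscale image.

    Returns the LL (approximation), LH, HL and HH (detail) subbands;
    a codec would recurse on LL and entropy-code the details.
    """
    a = img.astype(float)
    # rows: low-pass = pairwise average, high-pass = pairwise difference
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # columns: repeat the split on both partial results
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

ll, lh, hl, hh = haar2d(np.arange(16.0).reshape(4, 4))
# for smooth images most energy lands in ll, which is why the later
# quantization and run-length stages compress well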
Color components transformation
Initially, images have to be transformed from the RGB color space to ano |
https://en.wikipedia.org/wiki/Tajima%27s%20D | Tajima's D is a population genetic test statistic created by and named after the Japanese researcher Fumio Tajima. Tajima's D is computed as the difference between two measures of genetic diversity: the mean number of pairwise differences and the number of segregating sites, each scaled so that they are expected to be the same in a neutrally evolving population of constant size.
The purpose of Tajima's D test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under a non-random process, including directional selection or balancing selection, demographic expansion or contraction, genetic hitchhiking, or introgression. A randomly evolving DNA sequence contains mutations with no effect on the fitness and survival of an organism. The randomly evolving mutations are called "neutral", while mutations under selection are "non-neutral". For example, a mutation that causes prenatal death or severe disease would be expected to be under selection. In the population as a whole, the frequency of a neutral mutation fluctuates randomly (i.e. the percentage of individuals in the population with the mutation changes from one generation to the next, and this percentage is equally likely to go up or down) through genetic drift.
The strength of genetic drift depends on population size. If a population is at a constant size with constant mutation rate, the population will reach an equilibrium of gene frequencies. This equilibrium has important properties, including the number of segregating sites $S$, and the number of nucleotide differences between pairs sampled (these are called pairwise differences). To standardize the pairwise differences, the mean or 'average' number of pairwise differences is used. This is simply the sum of the pairwise differences divided by the number of pairs, and is often symbolized by $\pi$.
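A compact statement of the statistic, in the standard notation of Tajima (1989), with $n$ the sample size:
\hat{\theta}_W = \frac{S}{a_1}, \qquad a_1 = \sum_{i=1}^{n-1} \frac{1}{i}, \qquad
D = \frac{\hat{\pi} - \hat{\theta}_W}{\sqrt{\widehat{\operatorname{Var}}\left(\hat{\pi} - \hat{\theta}_W\right)}}
Under neutrality at constant size both estimators have expectation $\theta$, so $D$ fluctuates around zero; an excess of rare variants drives $D$ negative, while an excess of intermediate-frequency variants drives it positive.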
The purpose of Tajima's test is to identify sequences which do not fit the neutral theory model at equilibrium between mutation and gene |
https://en.wikipedia.org/wiki/Fluxomics | Fluxomics describes the various approaches that seek to determine the rates of metabolic reactions within a biological entity. While metabolomics can provide instantaneous information on the metabolites in a biological sample, metabolism is a dynamic process. The significance of fluxomics is that metabolic fluxes determine the cellular phenotype. It has the added advantage of being based on the metabolome which has fewer components than the genome or proteome.
Fluxomics falls within the field of systems biology which developed with the appearance of high throughput technologies. Systems biology recognizes the complexity of biological systems and has the broader goal of explaining and predicting this complex behavior.
Metabolic flux
Metabolic flux refers to the rate of metabolite conversion in a metabolic network. For a reaction this rate is a function of both enzyme abundance and enzyme activity. Enzyme concentration is itself a function of transcriptional and translational regulation in addition to the stability of the protein. Enzyme activity is affected by the kinetic parameters of the enzyme, the substrate concentrations, the product concentrations, and the effector molecules concentration. The genomic and environmental effects on metabolic flux are what determine healthy or diseased phenotype.
Fluxome
Similar to genome, transcriptome, proteome, and metabolome, the fluxome is defined as the complete set of metabolic fluxes in a cell. However, unlike the others the fluxome is a dynamic representation of the phenotype. This is due to the fluxome resulting from the interactions of the metabolome, genome, transcriptome, proteome, post-translational modifications and the environment.
Flux analysis technologies
Two important technologies are flux balance analysis (FBA) and 13C-fluxomics. In FBA metabolic fluxes are estimated by first representing the metabolic reactions of a metabolic network in a numerical matrix containing the stoichiometric coeffi |
https://en.wikipedia.org/wiki/Cray-3 | The Cray-3 was a vector supercomputer, Seymour Cray's designated successor to the Cray-2. The system was one of the first major applications of gallium arsenide (GaAs) semiconductors in computing, using hundreds of custom built ICs packed into a CPU. The design goal was performance around 16 GFLOPS, about 12 times that of the Cray-2.
Work started on the Cray-3 in 1988 at Cray Research's (CRI) development labs in Chippewa Falls, Wisconsin. Other teams at the lab were working on designs with similar performance. To focus the teams, the Cray-3 effort was moved to a new lab in Colorado Springs, Colorado later that year. Shortly thereafter, the corporate headquarters in Minneapolis decided to end work on the Cray-3 in favor of another design, the Cray C90. In 1989 the Cray-3 effort was spun off to a newly formed company, Cray Computer Corporation (CCC).
The launch customer, Lawrence Livermore National Laboratory, cancelled their order in 1991 and a number of company executives left shortly thereafter. The first machine was finally ready in 1993, but with no launch customer, it was instead loaned as a demonstration unit to the nearby National Center for Atmospheric Research in Boulder. The company went bankrupt in May 1995, and the machine was officially decommissioned.
With the delivery of the first Cray-3, Seymour Cray immediately moved on to the similar-but-improved Cray-4 design, but the company went bankrupt before it was completely tested. The Cray-3 was Cray's last completed design; with CCC's bankruptcy, he formed SRC Computers to concentrate on parallel designs, but died in a car accident in 1996 before this work was delivered.
History
Background
Seymour Cray began the design of the Cray-3 in 1985, as soon as the Cray-2 reached production. Cray generally set himself the goal of producing new machines with ten times the performance of the previous models. Although the machines did not always meet this goal, this was a useful technique in defining the project |
https://en.wikipedia.org/wiki/400%20%28number%29 | 400 (four hundred) is the natural number following 399 and preceding 401.
Mathematical properties
400 is the square of 20. 400 is the sum of the powers of 7 from 0 to 3 (7^0 + 7^1 + 7^2 + 7^3 = 1 + 7 + 49 + 343 = 400), thus making it a repdigit in base 7 (1111).
A circle is divided into 400 grads, equal to 360 degrees or 2π radians (degrees and radians are the SI accepted units).
400 is a self number in base 10, since there is no integer that added to the sum of its own digits results in 400. On the other hand, 400 is divisible by the sum of its own base 10 digits, making it a Harshad number.
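Both claims are easy to check by brute force; a quick Python sketch (ours, for illustration):
def digit_sum(n):
    return sum(int(d) for d in str(n))

# self number: no n exists with n + digit_sum(n) == 400
assert not any(n + digit_sum(n) == 400 for n in range(1, 400))
# Harshad number: 400 is divisible by its digit sum (4 + 0 + 0 = 4)
assert 400 % digit_sum(400) == 0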
Other fields
Four hundred is also
.400 (2 hits out of 5 at-bats) is a numerically significant annual batting average statistic in Major League Baseball, last accomplished by Ted Williams of the Boston Red Sox in 1941.
The number of days in a Gregorian calendar year changes according to a cycle of exactly 400 years, of which 97 are leap years and 303 are common (every year divisible by four is a leap year, except for the three century years per cycle that are not divisible by 400).
The Sun is approximately 400 times the size of the Moon but also is approximately 400 times further away, creating the temporary illusion in which the Sun and Moon in Earth's sky appear as if of similar size.
In gematria 400 is the largest single number that can be represented without using the Sophit forms (see Kaph, Mem, Nun, Pe, and Tzade).
Integers from 401 to 499
400s
401
401 is a prime number, tetranacci number, Chen prime, prime index prime
Eisenstein prime with no imaginary part
Sum of seven consecutive primes (43 + 47 + 53 + 59 + 61 + 67 + 71)
Sum of nine consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53 + 59 + 61)
Mertens function returns 0.
Member of the Mian–Chowla sequence.
402
402 = 2 × 3 × 67, sphenic number, nontotient, Harshad number, number of graphs with 8 nodes and 9 edges
HTTP status code for "Payment Required", area code for Nebraska
403
403 = 13 × 31, heptagonal number, Mertens function returns 0.
First number that is the product of an emirp pair.
HTTP 403, the status code for "Forbidden"
Also in the name of |
https://en.wikipedia.org/wiki/Ann%20Arbor%20staging | Ann Arbor staging is the staging system for lymphomas, both in Hodgkin's lymphoma (formerly designated Hodgkin's disease) and non-Hodgkin lymphoma (abbreviated NHL). It was initially developed for Hodgkin's, but has some use in NHL. It has roughly the same function as TNM staging in solid tumors.
The stage depends on both the place where the malignant tissue is located (as located with biopsy, CT scanning, gallium scan and increasingly positron emission tomography) and on systemic symptoms due to the lymphoma ("B symptoms": night sweats, weight loss of >10% or fevers).
Principal stages
The principal stage is determined by location of the tumor:
Stage I indicates that the cancer is located in a single region, usually one lymph node and the surrounding area. Stage I often will not have outward symptoms.
Stage II indicates that the cancer is located in two separate regions, an affected lymph node or lymphatic organ and a second affected area, and that both affected areas are confined to one side of the diaphragm—that is, both are above the diaphragm, or both are below the diaphragm.
Stage III indicates that the cancer has spread to both sides of the diaphragm, including one organ or area near the lymph nodes or the spleen.
Stage IV indicates diffuse or disseminated involvement of one or more extralymphatic organs, including any involvement of the liver, bone marrow, or nodular involvement of the lungs.
Modifiers
These letters can be appended to some stages (a sketch combining them follows this list):
A or B: the absence of constitutional (B-type) symptoms is denoted by adding an "A" to the stage; the presence is denoted by adding a "B" to the stage.
S: is used if the disease has spread to the spleen.
E: is used if the disease is "extranodal" (not in the lymph nodes) or has spread from lymph nodes to adjacent tissue.
X: is used if the largest deposit is >10 cm across ("bulky disease"), or if the mediastinum is wider than one third of the chest on a chest X-ray.
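As a purely hypothetical illustration (the function and encoding are ours, not a clinical standard), composing a stage designation from these rules is mechanical:
def ann_arbor(stage, b_symptoms=False, spleen=False, extranodal=False, bulky=False):
    """Compose an Ann Arbor designation, e.g. ann_arbor(2, b_symptoms=True) -> 'IIB'."""
    label = ["I", "II", "III", "IV"][stage - 1]
    label += "B" if b_symptoms else "A"   # constitutional symptoms
    if spleen:
        label += "S"                      # splenic involvement
    if extranodal:
        label += "E"                      # extranodal extension
    if bulky:
        label += "X"                      # bulky disease
    return label

print(ann_arbor(2, b_symptoms=True, bulky=True))  # IIBX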
Type of staging
The nature of the staging is |
https://en.wikipedia.org/wiki/Software%20configuration%20management | In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the "what, when, why and who" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts.
The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator.
Purposes
The goals of SCM are generally:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitating team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as v |
https://en.wikipedia.org/wiki/Mathematical%20principles%20of%20reinforcement | The mathematical principles of reinforcement (MPR) constitute a set of mathematical equations set forth by Peter Killeen and his colleagues attempting to describe and predict the most fundamental aspects of behavior (Killeen & Sitomer, 2003).
The three key principles of MPR, arousal, constraint, and coupling, describe how incentives motivate responding, how time constrains it, and how reinforcers become associated with specific responses, respectively. Mathematical models are provided for these basic principles in order to articulate the necessary detail of actual data.
First principle: arousal
The first basic principle of MPR is arousal. Arousal refers to the activation of behavior by the presentation of incentives. An increase in activity level following repeated presentations of incentives is a fundamental aspect of conditioning. Killeen, Hanson, and Osborne (1978) proposed that adjunctive (or schedule induced) behaviors are normally occurring parts of an organism's repertoire. Delivery of incentives increases the rate of adjunctive behaviors by generating a heightened level of general activity, or arousal, in organisms.
Killeen & Hanson (1978) exposed pigeons to a single daily presentation of food in the experimental chamber and measured general activity for 15 minutes after a feeding. They showed that activity level increased slightly directly following a feeding and then decreased slowly over time. The rate of decay can be described by a function of the form
$b(t) = b_0 e^{-t/\tau}$
where $b_0$ is the y-intercept (responses per minute), $t$ is the time in seconds since feeding, $\tau$ is the time constant, and $e$ is the base of the natural logarithm.
The time course of the entire theoretical model of general activity combines three component processes: arousal, temporal inhibition, and competing behaviors.
To better conceptualize this model, imagine how rate of responding would appear with each of these processes individually. In the absence of temporal inhibition or competing responses, arousal level would remain high a |
https://en.wikipedia.org/wiki/Royal%20College%20of%20Pathologists | The Royal College of Pathologists (RCPath) is a professional membership organisation.
Its main function is the overseeing of postgraduate training, and its Fellowship Examination (FRCPath) is recognised as the standard assessment of fitness to practise in this branch of medicine.
Constitution
The Royal College of Pathologists is a professional membership organisation, to maintain the standards and reputation of British pathology, through training, assessments, examinations and professional development. It is a registered charity and is not a trades union. Its 11,000 members work in hospital laboratories, universities and industry worldwide.
History
The College of Pathologists was founded in 1962, to optimise postgraduate training in the relatively young science of pathology, with its high importance in the diagnostic process, and the increasing range of specialist studies within it. The college received its royal charter in 1970 and its Patron is Her Majesty Queen Elizabeth II.
Training and examinations
The Fellowship Examination of the Royal College of Pathologists (FRCPath) is the main method of assessment for UK pathology training: an evaluation of a candidate's training programme indicating fitness to practise, which also marks entry into independent practice and the beginning of continuing professional development. Upon successful completion, trainees are awarded Fellowship status of the Royal College of Pathologists.
Fellowship may also be awarded on the basis of submitted published works, though this does not contribute to the award of the Certificate of Completion of Training and is not a mark of eligibility for appointment to a Consultant post or unsupervised practice.
The college runs a national scheme for overseeing the continued education of pathologists in clinical practice, as well as sponsoring workshops, lectures and courses.
Disciplines
The following are disciplines of pathology which the college oversees:
Histopathology
Neuropatholog |
https://en.wikipedia.org/wiki/4th%20meridian%20east | The meridian 4° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, the Atlantic Ocean, Europe, Africa, the Southern Ocean, and Antarctica to the South Pole.
The 4th meridian east forms a great circle with the 176th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 4th meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | North Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Islands of Goeree-Overflakkee and Schouwen-Duiveland; peninsulas of Tholen and Zuid-Beveland; Zeelandic Flanders
|-
|
! scope="row" |
|
|-
|
! scope="row" |
| Passing just west of Reims (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Mediterranean Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Island of Menorca
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Mediterranean Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
|
! scope="row" |
| For about 8 km
|-
|
! scope="row" |
| For about 5 km
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Queen Maud Land, claimed by
|-
|}
|
https://en.wikipedia.org/wiki/IP-DECT | IP-DECT is a technology used for on-site wireless communications. It uses the DECT air interface for reliable wireless voice and data communication between handsets and base stations and the well established VoIP technology for the corded voice communication between base stations and server functions.
The advantage is the circuit-switched approach and therefore a better-specified quality of service for voice communication than with wireless LAN.
A DECT phone must remain in proximity to its own base (or repeaters thereof), and WLAN devices have better range given sufficient access points; however, voice-over-WLAN handsets impose significant design and maintenance complexity in large networks to ensure roaming facilities and high quality of service.
A number of traditional telephone equipment manufacturers and smaller enterprises offer IP-DECT systems, both for residential use (single-cell base station/access points) and for enterprise use (multi-cell with multiple base stations/access points and/or seamless handoff between cells), where it is important to cover large areas with a maintained speech path.
Companies
For enterprise use the following vendors produce IP-DECT systems:
Aastra (DeTeWe)
Ascom (company)
Gigaset
Mitel
NEC
Panasonic
Spectralink 7000 series (Polycom, Kirk)
Alcatel-Lucent
Ericsson-LG
See also
CAT-iq
Voice over WLAN |
https://en.wikipedia.org/wiki/Line%E2%80%93plane%20intersection | In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point.
Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection.
Algebraic form
In vector notation, a plane can be expressed as the set of points $\mathbf{p}$ for which
$(\mathbf{p} - \mathbf{p}_0) \cdot \mathbf{n} = 0$
where $\mathbf{n}$ is a normal vector to the plane and $\mathbf{p}_0$ is a point on the plane. (The notation $\mathbf{a} \cdot \mathbf{b}$ denotes the dot product of the vectors $\mathbf{a}$ and $\mathbf{b}$.)
The vector equation for a line is
$\mathbf{p} = \mathbf{l}_0 + \mathbf{l} d$
where $\mathbf{l}$ is a unit vector in the direction of the line, $\mathbf{l}_0$ is a point on the line, and $d$ is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives
$((\mathbf{l}_0 + \mathbf{l} d) - \mathbf{p}_0) \cdot \mathbf{n} = 0.$
Expanding gives
$(\mathbf{l} \cdot \mathbf{n}) d + (\mathbf{l}_0 - \mathbf{p}_0) \cdot \mathbf{n} = 0.$
And solving for $d$ gives
$d = \frac{(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n}}{\mathbf{l} \cdot \mathbf{n}}.$
If $\mathbf{l} \cdot \mathbf{n} = 0$ then the line and plane are parallel. There will be two cases: if $(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n} = 0$ then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection.
If $\mathbf{l} \cdot \mathbf{n} \neq 0$ there is a single point of intersection. The value of $d$ can be calculated and the point of intersection, $\mathbf{p}$, is given by
$\mathbf{p} = \mathbf{l}_0 + \mathbf{l} d$.
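A direct transcription of the algebraic solution above into Python with NumPy (a minimal sketch; the function name is ours):
import numpy as np

def line_plane_intersection(n, p0, l, l0, eps=1e-9):
    """Intersect the line p = l0 + l*d with the plane (p - p0) . n = 0.

    Returns the intersection point, or None when l . n == 0 (the line
    is parallel to the plane: it either lies in it or misses it).
    """
    denom = np.dot(l, n)
    if abs(denom) < eps:
        return None
    d = np.dot(p0 - l0, n) / denom
    return l0 + d * l

# line along the z-axis through the origin, against the plane z = 5
print(line_plane_intersection(np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 0.0, 5.0]),
                              np.array([0.0, 0.0, 1.0]),
                              np.zeros(3)))  # [0. 0. 5.]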
Parametric form
A line is described by all points that are a given direction from a point. A general point on a line passing through points $\mathbf{l}_a$ and $\mathbf{l}_b$ can be represented as
$\mathbf{p} = \mathbf{l}_a + t(\mathbf{l}_b - \mathbf{l}_a), \quad t \in \mathbb{R},$
where $\mathbf{l}_b - \mathbf{l}_a$ is the vector pointing from $\mathbf{l}_a$ to $\mathbf{l}_b$.
Similarly a general point on a plane determined by the triangle defined by the points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$ can be represented as
$\mathbf{p} = \mathbf{p}_0 + u(\mathbf{p}_1 - \mathbf{p}_0) + v(\mathbf{p}_2 - \mathbf{p}_0),$
where $\mathbf{p}_1 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_1$, and $\mathbf{p}_2 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_2$.
The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation:
$\mathbf{l}_a + t(\mathbf{l}_b - \mathbf{l}_a) = \mathbf{p}_0 + u(\mathbf{p}_1 - \mathbf{p}_0) + v(\mathbf{p}_2 - \mathbf{p}_0).$
This can be rewritten as
$\mathbf{l}_a - \mathbf{p}_0 = (\mathbf{l}_a - \mathbf{l}_b)t + (\mathbf{p}_1 - \mathbf{p}_0)u + (\mathbf{p}_2 - \mathbf{p}_0)v,$
which can be expressed in matrix |
https://en.wikipedia.org/wiki/Laplace%20transform%20applied%20to%20differential%20equations | In mathematics, the Laplace transform is a powerful integral transform used to switch a function from the time domain to the s-domain. The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions.
First consider the following property of the Laplace transform:
$\mathcal{L}\{f'(t)\} = s\mathcal{L}\{f(t)\} - f(0).$
One can prove by induction that
$\mathcal{L}\{f^{(n)}(t)\} = s^n \mathcal{L}\{f(t)\} - \sum_{i=1}^{n} s^{n-i} f^{(i-1)}(0).$
Now we consider the following differential equation:
$\sum_{i=0}^{n} a_i f^{(i)}(t) = \phi(t)$
with given initial conditions
$f^{(i)}(0) = c_i, \quad i = 0, 1, \dots, n-1.$
Using the linearity of the Laplace transform it is equivalent to rewrite the equation as
$\sum_{i=0}^{n} a_i \mathcal{L}\{f^{(i)}(t)\} = \mathcal{L}\{\phi(t)\}$
obtaining
$\mathcal{L}\{f(t)\} \sum_{i=0}^{n} a_i s^i - \sum_{i=1}^{n} \sum_{j=1}^{i} a_i s^{i-j} f^{(j-1)}(0) = \mathcal{L}\{\phi(t)\}.$
Solving the equation for $\mathcal{L}\{f(t)\}$ and substituting $f^{(i)}(0)$ with $c_i$ one obtains
$\mathcal{L}\{f(t)\} = \frac{\mathcal{L}\{\phi(t)\} + \sum_{i=1}^{n} \sum_{j=1}^{i} a_i s^{i-j} c_{j-1}}{\sum_{i=0}^{n} a_i s^i}.$
The solution for $f(t)$ is obtained by applying the inverse Laplace transform to $\mathcal{L}\{f(t)\}.$
Note that if the initial conditions are all zero, i.e.
$c_i = 0 \quad \text{for all } i,$
then the formula simplifies to
$f(t) = \mathcal{L}^{-1}\left\{\frac{\mathcal{L}\{\phi(t)\}}{\sum_{i=0}^{n} a_i s^i}\right\}.$
An example
We want to solve
$f''(t) + 4f(t) = \sin(2t)$
with initial conditions $f(0) = 0$ and $f'(0) = 0$.
We note that
$\mathcal{L}\{f''(t)\} = s^2 \mathcal{L}\{f(t)\} - s f(0) - f'(0) = s^2 \mathcal{L}\{f(t)\}$
and we get
$\mathcal{L}\{\sin(2t)\} = \frac{2}{s^2 + 4}.$
The equation is then equivalent to
$s^2 \mathcal{L}\{f(t)\} + 4 \mathcal{L}\{f(t)\} = \frac{2}{s^2 + 4}.$
We deduce
$\mathcal{L}\{f(t)\} = \frac{2}{(s^2 + 4)^2}.$
Now we apply the inverse Laplace transform to get
$f(t) = \frac{1}{8}\sin(2t) - \frac{t}{4}\cos(2t).$
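The worked example can be checked symbolically; a short SymPy sketch (ours, assuming the equation as reconstructed above):
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.Function('f')

# solve f''(t) + 4 f(t) = sin(2t) with f(0) = 0, f'(0) = 0
sol = sp.dsolve(f(t).diff(t, 2) + 4*f(t) - sp.sin(2*t), f(t),
                ics={f(0): 0, f(t).diff(t).subs(t, 0): 0})
print(sol)  # Eq(f(t), sin(2*t)/8 - t*cos(2*t)/4)

# same answer from the transform side, inverting L{f} = 2/(s^2 + 4)^2
# (the result may carry a Heaviside(t) factor, equal to 1 for t > 0)
print(sp.inverse_laplace_transform(2/(s**2 + 4)**2, s, t))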
Bibliography
A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002.
|
https://en.wikipedia.org/wiki/GMail%20Drive | GMail Drive was a free third-party Windows Shell namespace extension ("add-on") for Google's Gmail. GMail Drive was not supported by Google. It allowed a user to access a virtual drive stored in a Gmail account by causing the contents of the Gmail account to appear as a new network share on the user's workstation. In order to use this add-on, the user needed a Gmail e-mail account. The add-on enabled the user to use the standard Windows desktop file copy and paste commands to transfer files to and from the Gmail account as if it were a drive on the user's computer. GMail Drive was based upon GmailFS, a file system developed by Richard Jones. GMail Drive was published in 2004 and functional as early as 2005, predating Google's own Google Drive, released on April 24, 2012. As of 2015, the official extension page declares the project dead.
Function
In order for GMail Drive to operate, the computer must be connected to the Internet and the user must have a Gmail account. A broadband connection is preferable, though not necessary, as all operations are done through Gmail and consequently over the Internet. GMail Drive uses the inbox of the Gmail account to store files and creates a virtual filesystem on top of the Gmail account, enabling the user to save and retrieve files stored in the Gmail account directly from inside Windows Explorer. GMail Drive adds a new virtual drive to the computer under the My Computer folder, where the user can create new folders and copy or drag-and-drop files, but it does not assign an actual drive letter, such as C:, preventing its use in all console applications and in some older Windows applications.
When the user creates a new file using GMail Drive, it generates an e-mail and posts it to the Gmail account's inbox. The e-mail appears in the normal Inbox folder when using the normal Gmail interface, and the file is attached as an e-mail attachment. GMail Drive periodically checks the mail account (using the Gmail sea |
https://en.wikipedia.org/wiki/Infinite%20loop | In computer programming, an infinite loop (or endless loop) is a sequence of instructions that, as written, will continue endlessly, unless an external intervention occurs ("pull the plug"). It may be intentional.
Overview
This differs from "a type of computer program that runs the same instructions continuously until it is either stopped or interrupted". Consider the following pseudocode:
how_many = 0
while is_there_more_data() do
how_many = how_many + 1
end
display "the number of items counted = " how_many
The same instructions were run continuously until stopped or interrupted, in this case by the FALSE returned at some point by the function is_there_more_data.
By contrast, the following loop will not end by itself:
birds = 1
fish = 2
while birds + fish > 1 do
birds = 3 - birds
fish = 3 - fish
end
birds will alternate between 1 and 2, while fish will alternate between 2 and 1. The loop will not stop unless an external intervention occurs ("pull the plug").
Details
An infinite loop is a sequence of instructions in a computer program which loops endlessly, either due to the loop having no terminating condition, having one that can never be met, or one that causes the loop to start over. In older operating systems with cooperative multitasking, infinite loops normally caused the entire system to become unresponsive. With the now-prevalent preemptive multitasking model, infinite loops usually cause the program to consume all available processor time, but can usually be terminated by the user. Busy wait loops are also sometimes called "infinite loops". Infinite loops are one possible cause for a computer "freezing"; others include thrashing, deadlock, and access violations.
Intended vs unintended looping
Looping is repeating a set of instructions until a specific condition is met. An infinite loop occurs when the condition will never be met, due to some inherent characteristic of the loop.
Intentional looping
There are a few situations when this is desired |
https://en.wikipedia.org/wiki/Markov%20odometer | In mathematics, a Markov odometer is a certain type of topological dynamical system. It plays a fundamental role in ergodic theory and especially in orbit theory of dynamical systems, since a theorem of H. Dye asserts that every ergodic nonsingular transformation is orbit-equivalent to a Markov odometer.
The basic example of such a system is the "nonsingular odometer", which is an additive topological group defined on the product space of discrete spaces, induced by addition defined as $x \mapsto x + \underline{1}$, where $\underline{1} = (1, 0, 0, \dots)$. This group can be endowed with the structure of a dynamical system; the result is a conservative dynamical system.
The general form, which is called "Markov odometer", can be constructed through Bratteli–Vershik diagram to define Bratteli–Vershik compactum space together with a corresponding transformation.
Nonsingular odometers
Several kinds of non-singular odometers may be defined.
These are sometimes referred to as adding machines.
The simplest is illustrated with the Bernoulli process. This is the set of all infinite strings in two symbols, here denoted by $\Omega = \{0,1\}^{\mathbb{N}}$, endowed with the product topology. This definition extends naturally to a more general odometer defined on the product space
$\Omega = \prod_{n \in \mathbb{N}} \left(\mathbb{Z}/k_n\mathbb{Z}\right)$
for some sequence of integers $(k_n)$ with each $k_n \ge 2$.
The odometer with $k_n = 2$ for all $n$ is termed the dyadic odometer, the von Neumann–Kakutani adding machine or the dyadic adding machine.
The topological entropy of every adding machine is zero. Any continuous map of an interval with a topological entropy of zero is topologically conjugate to an adding machine, when restricted to its action on the topologically invariant transitive set, with periodic orbits removed.
Dyadic odometer
The set of all infinite strings in two symbols has a natural topology, the product topology, generated by the cylinder sets. The product topology extends to a Borel sigma-algebra; let $\mathcal{B}$ denote that algebra. Individual points are denoted as $x = (x_1, x_2, x_3, \dots)$.
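Concretely, the dyadic odometer is "addition of one with carry" on such strings, least-significant digit first; a small Python sketch of the map (the function name is ours):
def dyadic_odometer(x):
    """Add 1 with carry to a binary string x = (x1, x2, ...), LSB first."""
    y = list(x)
    for i, bit in enumerate(y):
        if bit == 0:
            y[i] = 1      # flip the first 0 to 1; the carry stops here
            return y
        y[i] = 0          # 1 + 1 = 0, the carry propagates
    return y              # the all-ones point wraps around to all zeros

x = [1, 1, 0, 1]          # a finite truncation of an infinite string
for _ in range(3):
    x = dyadic_odometer(x)
    print(x)              # successive binary integers, LSB first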
The Bernoulli process is conventionally endowed with a collection of measure |
https://en.wikipedia.org/wiki/Dressing%20overall | Dressing overall consists of stringing international maritime signal flags on a ship from stemhead to masthead, from masthead to masthead (if the vessel has more than one mast) and then down to the taffrail. It is a sign of celebration, and is done for celebratory occasions, anniversaries and events, whether national, local or personal.
Practice varies from country to country as to the order in which the signal flags are placed on the "dressing lines": in some places a specific order is laid down, in others there is no such provision; either way, the intention is to produce a random succession of flags (i.e. not conveying any words or other messages), with the numerical and other pennants spaced equally and regularly along the line. Custom and regulations require that national or other flags not be mixed in with the signal flags when dressing a ship overall.
When a ship is properly dressed overall in harbor, ensigns (in addition to the one flown in the usual position at the stern) should fly at each masthead, unless displaced by another flag, e.g., that of a flag officer. A ship underway does not array herself with signal flags, but the masthead ensign(s) would still signify that she is dressed while underway. |
https://en.wikipedia.org/wiki/PowerVM%20Lx86 | PowerVM Lx86 was a binary translation layer for IBM's System p servers. It enabled 32-bit x86 Linux binaries to run unmodified on the Power ISA-based hardware. IBM used this feature to migrate x86 Linux servers to the PowerVM virtualized environment; it was supported on all POWER5 and POWER6 hardware as well as BladeCenter JS21 and JS22 systems.
In contrast to regular emulators, only the instructions are translated, not the entire system, making the approach fast and flexible. The Lx86 software senses that it is executing x86 code and translates it to PowerPC code at execution time; these instructions are then cached, ensuring that the translation process only has to take place once and further reducing the drop in performance usually associated with emulation. Lx86 does not support applications that access hardware directly, such as kernel modules. Earlier versions of Lx86 did not run code that makes use of SSE instructions, though as of version 1.3.2 the SSE and SSE2 instruction sets were supported.
The product was at first marketed as System p AVE (System p Application Virtual Environment) and was incorrectly reported as PAVE (Portable Advanced Virtualization Emulator) in the press, but the name later changed to PowerVM Lx86. Lx86 was based on the QuickTransit dynamic translator from Transitive, the same technology Apple used for its Rosetta emulation layer, which enabled Mac OS X to run unmodified PowerPC binaries on Intel-based Macintoshes.
All versions and releases of the Lx86 product were withdrawn from marketing in September 2011, with support discontinued in April 2013. |
https://en.wikipedia.org/wiki/ASA%20Gold%20Medal | The ASA Gold Medal is an annual award presented by the Acoustical Society of America (ASA) to individuals in recognition of outstanding contributions to acoustics. The Gold Medal was first presented in 1954 and is the highest award of the ASA. Past recipients, which include the Nobel Laureate Georg von Békésy, are listed below.
Recipients
Notes
See also
List of physics awards |
https://en.wikipedia.org/wiki/Levenshtein%20automaton | In computer science, a Levenshtein automaton for a string w and a number n is a finite-state automaton that can recognize the set of all strings whose Levenshtein distance from w is at most n. That is, a string x is in the formal language recognized by the Levenshtein automaton if and only if x can be transformed into w by at most n single-character insertions, deletions, and substitutions.
Applications
Levenshtein automata may be used for spelling correction, by finding words in a given dictionary that are close to a misspelled word. In this application, once a word is identified as being misspelled, its Levenshtein automaton may be constructed, and then applied to all of the words in the dictionary to determine which ones are close to the misspelled word. If the dictionary is stored in compressed form as a trie, the time for this algorithm (after the automaton has been constructed) is proportional to the number of nodes in the trie, significantly faster than using dynamic programming to compute the Levenshtein distance separately for each dictionary word.
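The automaton can also be simulated directly without materializing its states in advance; a minimal Python sketch (ours, with deletions from w handled as epsilon moves):
def levenshtein_match(w, n, x):
    """True iff the Levenshtein distance between w and x is at most n."""
    def closure(states):
        # epsilon moves: deleting w[i] advances i at the cost of one edit
        stack = list(states)
        while stack:
            i, e = stack.pop()
            if i < len(w) and e < n and (i + 1, e + 1) not in states:
                states.add((i + 1, e + 1))
                stack.append((i + 1, e + 1))
        return states

    states = closure({(0, 0)})   # (chars of w consumed, edits spent)
    for c in x:
        step = set()
        for i, e in states:
            if i < len(w) and w[i] == c:
                step.add((i + 1, e))          # exact match, no edit
            if e < n:
                step.add((i, e + 1))          # insertion of c
                if i < len(w):
                    step.add((i + 1, e + 1))  # substitution of w[i] by c
        states = closure(step)
        if not states:
            return False                      # distance already exceeds n
    return any(i == len(w) for i, _ in states)

assert levenshtein_match("banana", 1, "bananas")       # one insertion
assert not levenshtein_match("banana", 1, "bandanas")  # two edits away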
It is also possible to find words in a regular language, rather than a finite dictionary, that are close to a given target word, by computing the Levenshtein automaton for the word, and then using a Cartesian product construction to combine it with an automaton for the regular language, giving an automaton for the intersection language. Alternatively, rather than using the product construction, both the Levenshtein automaton and the automaton for the given regular language may be traversed simultaneously using a backtracking algorithm.
Levenshtein automata are used in Lucene for full-text searches that can return relevant documents even if the query is misspelled.
Construction
For any fixed constant n, the Levenshtein automaton for w and n may be constructed in time O(|w|).
Mitankin studies a variant of this construction called the universal Levenshtein automaton, determined only by a numeric parameter n, th |
https://en.wikipedia.org/wiki/Cullin | Cullins are a family of hydrophobic scaffold proteins which provide support for ubiquitin ligases (E3). All eukaryotes appear to have cullins. They combine with RING proteins to form Cullin-RING ubiquitin ligases (CRLs) that are highly diverse and play a role in myriad cellular processes, most notably protein degradation by ubiquitination.
The human genome contains eight cullin genes:
CUL1, part of SCF complex
CUL2, part of ECS complex (Elongin C - CUL2 - SOCS-box)
CUL3, part of CUL3-BTB complex
CUL4A
CUL4B
CUL5
CUL7
CUL9, also known as PARC
There is also a more distant member called ANAPC2 (or APC2), part of the Anaphase-promoting complex.
CUL1, 2, 3, 4A, 4B, 5 and 7 each form part of a multi-subunit ubiquitin complex.
Cullin-RING ubiquitin ligases
Cullin-RING ubiquitin ligases (CRLs), such as Cul1 (SCF) play an essential role in targeting proteins for ubiquitin-mediated destruction; as such, they are diverse in terms of composition and function, regulating many different processes from glucose sensing and DNA replication to limb patterning and circadian rhythms. The catalytic core of CRLs consists of a RING protein and a cullin family member. For Cul1, the C-terminal cullin-homology domain binds the RING protein. The RING protein appears to function as a docking site for ubiquitin-conjugating enzymes (E2s). Other proteins contain a cullin-homology domain, such as CUL9, also known as p53 cytoplasmic anchor PARC, and the ANAPC2 subunit of the anaphase-promoting complex/cyclosome; both CUL9 and ANAPC2 have ubiquitin ligase activity. The N-terminal region of cullins is more variable, and is used to interact with specific adaptor proteins.
Modification by NEDD8
With the exception of ANAPC2, each member of the cullin family is modified by Nedd8 and several cullins function in Ubiquitin-dependent proteolysis, a process in which the 26S proteasome recognises and subsequently degrades a target protein tagged with K48-linked poly-ubiquitin chains. Nedd8/Rub1 is |
https://en.wikipedia.org/wiki/Gastric%20electrical%20stimulation | Gastric electrical stimulation, also known as implantable gastric stimulation, is the use of specific devices to provide electrical stimulation to the stomach to try to bring about weight loss in those who are overweight or improve gastroparesis.
Gastric electrical stimulation uses a pacemaker-like device with electrical connections to the surface of the stomach. The device works by disrupting the motility cycle or by stimulating the enteric nervous system. There are a number of different devices on the market, including Transend, Maestro, and Diamond.
Medical uses
These devices are for the treatment of gastroparesis. The best available evidence, however, finds that they are of questionable utility for this condition.
As of 2017 it is not approved for use for obesity in the United States. The first studies done did not find a benefit; however, research is ongoing.
Mechanism of action
Once food leaves the stomach and enters the duodenum, the gut-brain-liver axis is activated, which involves signaling between the gastrointestinal tract and the nervous system. For patients without type 2 diabetes, the gastric transit time of food is estimated to be 30–45 minutes (the time from food ingestion to food leaving the stomach into the duodenum). In type 2 diabetes, the neurohormonal communication system is impaired. Delayed signaling within the gut-brain-liver axis leads to high blood glucose concentration after meals.
Usage
There are approximately 4,000 gastric pacer surgeries a year in the United States.
Approval
The Diamond system first received CE mark in 2007 and is approved for sale in Europe, Australia, and Hong Kong. It is not approved in the United States for obesity. |
https://en.wikipedia.org/wiki/Application-specific%20instruction%20set%20processor | An application-specific instruction set processor (ASIP) is a component used in system on a chip design. The instruction set architecture of an ASIP is tailored to benefit a specific application. This specialization of the core provides a tradeoff between the flexibility of a general purpose central processing unit (CPU) and the performance of an application-specific integrated circuit (ASIC).
Some ASIPs have a configurable instruction set. Usually, these cores are divided into two parts: static logic which defines a minimum ISA (instruction-set architecture) and configurable logic which can be used to design new instructions. The configurable logic can be programmed either in the field in a similar fashion to a field-programmable gate array (FPGA) or during the chip synthesis. ASIPs have two ways of generating code: either through a retargetable code generator or through a retargetable compiler generator. The retargetable code generator uses the application, ISA, and Architecture Template to create the code generator for the object code. The retargetable compiler generator uses only the ISA and Architecture Template as the basis for creating the compiler. The application code will then be used by the compiler to create the object code.
ASIPs can be used as an alternative to hardware accelerators for baseband signal processing or video coding. Traditional hardware accelerators for these applications suffer from inflexibility. It is very difficult to reuse the hardware datapath with handwritten finite-state machines (FSM). The retargetable compilers of ASIPs help the designer update the program and reuse the datapath. Typically, ASIP design is more or less dependent on the tool flow, because designing a processor from scratch can be very complicated. One approach is to describe the processor in a high-level language and then automatically generate the ASIP's software toolset.
Examples
RISC-V Instruction Set Architecture (ISA) provides minimum base ins |
https://en.wikipedia.org/wiki/Max-flow%20min-cut%20theorem | In computer science and optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the source to the sink is equal to the total weight of the edges in a minimum cut, i.e., the smallest total weight of the edges which if removed would disconnect the source from the sink.
This is a special case of the duality theorem for linear programs and can be used to derive Menger's theorem and the Kőnig–Egerváry theorem.
Definitions and statement
The theorem equates two quantities: the maximum flow through a network, and the minimum capacity of a cut of the network. To state the theorem, each of these notions must first be defined.
Network
A network consists of
a finite directed graph $G = (V, E)$, where $V$ denotes the finite set of vertices and $E$ is the set of directed edges;
a source $s \in V$ and a sink $t \in V$;
a capacity function, which is a mapping $c : E \to \mathbb{R}^{+}$, denoted by $c_{uv}$ or $c(u, v)$ for $(u, v) \in E$. It represents the maximum amount of flow that can pass through an edge.
Flows
A flow through a network is a mapping $f : E \to \mathbb{R}^{+}$, denoted by $f_{uv}$ or $f(u, v)$, subject to the following two constraints:
Capacity Constraint: For every edge $(u, v) \in E$, $f_{uv} \le c_{uv}$.
Conservation of Flows: For each vertex $v$ apart from $s$ and $t$ (i.e. the source and sink, respectively), the following equality holds: $\sum_{\{u : (u, v) \in E\}} f_{uv} = \sum_{\{u : (v, u) \in E\}} f_{vu}.$
A flow can be visualized as a physical flow of a fluid through the network, following the direction of each edge. The capacity constraint then says that the volume flowing through each edge per unit time is less than or equal to the maximum capacity of the edge, and the conservation constraint says that the amount that flows into each vertex equals the amount flowing out of each vertex, apart from the source and sink vertices.
The value of a flow is defined by $|f| = \sum_{\{v : (s, v) \in E\}} f_{sv} - \sum_{\{v : (v, s) \in E\}} f_{vs},$
where, as above, $s$ is the source and $t$ is the sink of the network. In the fluid analogy, it represents the amount of fluid entering the network at the source. Because of the conservation axiom for flows, this is the same as the amount of flow leaving the network at
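As an illustration (a sketch added here, not part of the article), the Edmonds–Karp algorithm computes a maximum flow; the vertices reachable from the source in the final residual graph then give the source side of a minimum cut of equal capacity:

from collections import deque

def max_flow(capacity, s, t):
    # capacity: n x n matrix of edge capacities; returns the max-flow value
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    value = 0
    while True:
        # breadth-first search for a shortest augmenting path in the
        # residual graph (residual capacity = capacity - flow)
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:      # no augmenting path left: flow is maximum
            break
        # bottleneck residual capacity along the path found
        b, v = float('inf'), t
        while v != s:
            b = min(b, capacity[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        # augment the flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        value += b
    return value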
https://en.wikipedia.org/wiki/Palaeoaplysina | Palaeoaplysina is a genus of tabular, calcified fossils that are a component of many Late Palaeozoic reefs. The fossil acted as a baffle to trap sediment. Historically interpreted as a sponge or hydrozoan, recent studies converge on a classification in the coralline stem group, placing it among the red algae.
Morphology
The thalloid organism had a series of internal canals opening on one side of the body (presumably the upper side), and volcano-like protuberances on that same side inviting comparison to filter-feeding organisms. On the other hand, it seems to have had a calcified cellular make-up akin to that of the coralline reds, suggesting that it was either a stem-group coralline or a coralline-encrusted filter feeder.
Distribution
The organism is widespread in the tropical and near-tropical margin of the Laurentian continent (45–15°N), but is not found elsewhere. Its oldest reported occurrence is Middle Pennsylvanian (mid- to late Moscovian) and youngest is the late Sakmarian. It acts as an important reservoir rock for oil deposits.
See also |
https://en.wikipedia.org/wiki/UltraSPARC%20II | The UltraSPARC II, code-named "Blackbird", is a microprocessor implementation of the SPARC V9 instruction set architecture (ISA) developed by Sun Microsystems. Marc Tremblay was the chief architect. Introduced in 1997, it was a further development of the UltraSPARC, operating at higher clock frequencies of 250 MHz and eventually reaching 650 MHz.
The die contained 5.4 million transistors and had an area of 149 mm². It was fabricated by Texas Instruments in their 0.35 μm process, dissipated 25 W at 250 MHz, and used a 2.5 V power supply. L2 cache capacity was 1 to 4 MB.
In 1999, the UltraSPARC II was ported to a 0.25 μm process. This version was code-named "Sapphire-Black". It operated at 360 to 480 MHz, possessed a die area of 126 mm², dissipated 21 W at 400 MHz and the power supply voltage was reduced to 1.9 V. Supported L2 cache capacity was increased to 1 to 8 MB.
Derivatives
The UltraSPARC II was the basis for four derivatives.
UltraSPARC IIi
The UltraSPARC IIi "Sabre", featuring an on-chip PCI controller, was a low-cost version introduced in 1997 that operated at 270 to 360 MHz. It was fabricated in a 0.35 μm process and possessed a die size of 156 mm². It dissipated 21 W and used a 1.9 V power supply. It had a 256 KB to 2 MB L2 cache. In 1998, a version code-named Sapphire-Red was fabricated in a 0.25 μm process, enabling the microprocessor to operate at 333 to 480 MHz. It dissipated 21 W at 440 MHz and used a 1.9 V power supply.
UltraSPARC IIe
The UltraSPARC IIe "Hummingbird" was an embedded version introduced in 2000 that operated at 400 to 500 MHz, fabricated in a 0.18 μm process with aluminium interconnects. It dissipated a maximum of 13 W at 500 MHz, used a 1.5 to 1.7 V power supply and had a 256 KB L2 cache.
UltraSPARC IIe+
The UltraSPARC IIe+ or IIi was introduced in 2002. Code-named "Phantom", it operated at 550 to 650 MHz and was fabricated in a 0.18 μm process with copper interconnect. It dissipated 17.6 W and used a 1.7 V power supply. It had a 512 |
https://en.wikipedia.org/wiki/Three%20utilities%20problem | The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved.
This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph $K_{3,3}$, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that $K_{3,3}$ is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is $K_{3,3}$. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for $K_{3,3}$ the minimum number of crossings is one.
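For concreteness, one standard counting proof of the impossibility (sketched here, not quoted from the article) runs as follows: $K_{3,3}$ has $v = 6$ vertices and $e = 9$ edges and, being bipartite, contains no triangle, so every face of a hypothetical planar embedding would be bounded by at least four edges. Euler's formula $v - e + f = 2$ would force $f = 5$ faces, but counting edge–face incidences gives $2e \ge 4f$, i.e. $18 \ge 20$, a contradiction.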
$K_{3,3}$ is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem. It has also been called the Thomsen graph after 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph.
History
A review of the history of the three utilities problem is given by Kullman. He states that most published references to the problem characterize it as "very ancient". In the earliest publication found by Kullman, Henry Dudeney names it "water, gas, and electricity". However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas". Dudeney also published the same puzzle previou
https://en.wikipedia.org/wiki/Spectral%20slope | In astrophysics and planetary science, spectral slope, also called spectral gradient, is a measure of dependence of the reflectance on the wavelength.
In digital signal processing, it is a measure of how quickly the spectrum of an audio sound tails off towards the high frequencies, calculated using a linear regression.
Spectral slope in astrophysics and planetary science
The visible and infrared spectrum of the reflected sunlight is used to infer physical and chemical properties of the surface of a body. Some objects are brighter (reflect more) at longer wavelengths (red). Consequently, in visible light they will appear redder than objects showing no dependence of reflectance on the wavelength.
The diagram illustrates three slopes:
a red slope, where the reflectance increases with wavelength
a flat spectrum (in black)
a blue slope, where the reflectance decreases with wavelength
The slope (spectral gradient) $S$ is defined as:
$S = \frac{R_{F_1} - R_{F_0}}{\lambda_1 - \lambda_0},$
where $R_{F_0}$ and $R_{F_1}$ are the reflectances measured with filters F0, F1 having the central wavelengths $\lambda_0$ and $\lambda_1$, respectively.
The slope is typically expressed in percentage increase of reflectance (i.e. reflectivity) per unit of wavelength: %/100 nm (or %/1000 Å).
The slope is mostly used in near infrared part of the spectrum while colour indices are commonly used in the visible part of the spectrum.
The trans-Neptunian object Sedna is a typical example of a body showing a steep red slope (20%/100 nm), while Orcus' spectrum appears flat in the near infrared.
Spectral slope in audio
The spectral "slope" of many natural audio signals (their tendency to have less energy at high frequencies) has been known for many years, and the fact that this slope is related to the nature of the sound source. One way to quantify this is by applying linear regression to the Fourier magnitude spectrum of the signal, which produces a single number indicating the slope of the line-of-best-fit through the spectral data.
Alternative ways to characterise a sound sign |
https://en.wikipedia.org/wiki/Cognitive%20load | In cognitive psychology, cognitive load refers to the amount of working memory resources used. However, it is essential to distinguish it from the actual construct of Cognitive Load (CL) or Mental Workload (MWL), which is studied widely in many disciplines. According to work conducted in the field of instructional design and pedagogy, broadly, there are three types of cognitive load: intrinsic cognitive load is the effort associated with a specific topic; extraneous cognitive load refers to the way information or tasks are presented to a learner; and germane cognitive load refers to the work put into creating a permanent store of knowledge (a schema). However, over the years, the additivity of these types of cognitive load has been investigated and questioned. Now it is believed that they circularly influence each other.
Cognitive load theory was developed in the late 1980s out of a study of problem solving by John Sweller. Sweller argued that instructional design can be used to reduce cognitive load in learners.
Much later, other researchers developed a way to measure perceived mental effort which is indicative of cognitive load. Task-invoked pupillary response is a reliable and sensitive measurement of cognitive load that is directly related to working memory. Information may only be stored in long term memory after first being attended to, and processed by, working memory. Working memory, however, is extremely limited in both capacity and duration. These limitations will, under some conditions, impede learning. Heavy cognitive load can have negative effects on task completion, and it is important to note that the experience of cognitive load is not the same in everyone. The elderly, students, and children experience different, and more often higher, amounts of cognitive load.
The fundamental tenet of cognitive load theory is that the quality of instructional design will be raised if greater consideration is given to the role and limitations of working memory. |
https://en.wikipedia.org/wiki/Mersenne%20Twister | The Mersenne Twister is a general-purpose pseudorandom number generator (PRNG) developed in 1997 by Makoto Matsumoto and Takuji Nishimura. Its name derives from the fact that its period length is chosen to be a Mersenne prime.
The Mersenne Twister was designed specifically to rectify most of the flaws found in older PRNGs.
The most commonly used version of the Mersenne Twister algorithm is based on the Mersenne prime $2^{19937} - 1$. The standard implementation of that, MT19937, uses a 32-bit word length. There is another implementation (with five variants) that uses a 64-bit word length, MT19937-64; it generates a different sequence.
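A compact Python sketch of MT19937 for illustration (an addition of this rewrite, not any particular library's code; the constants are the standard published parameters):

class MT19937:
    def __init__(self, seed):
        # state initialisation from a 32-bit seed
        self.mt = [seed & 0xFFFFFFFF] + [0] * 623
        for i in range(1, 624):
            prev = self.mt[i - 1]
            self.mt[i] = (1812433253 * (prev ^ (prev >> 30)) + i) & 0xFFFFFFFF
        self.index = 624

    def _twist(self):
        # regenerate the whole 624-word state block
        for i in range(624):
            y = (self.mt[i] & 0x80000000) | (self.mt[(i + 1) % 624] & 0x7FFFFFFF)
            self.mt[i] = self.mt[(i + 397) % 624] ^ (y >> 1)
            if y & 1:
                self.mt[i] ^= 0x9908B0DF
        self.index = 0

    def next_u32(self):
        if self.index >= 624:
            self._twist()
        y = self.mt[self.index]
        self.index += 1
        # tempering improves the equidistribution of the output
        y ^= y >> 11
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        y ^= y >> 18
        return y & 0xFFFFFFFF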
Application
Software
The Mersenne Twister is used as default PRNG by the following software:
Programming languages: Dyalog APL, IDL, R, Ruby, Free Pascal, PHP, Python (also available in NumPy, though as of version 1.17 the default was changed to PCG64), CMU Common Lisp, Embeddable Common Lisp, Steel Bank Common Lisp, Julia (the default up to Julia 1.6 LTS; still available in later versions, but a better/faster RNG is used by default as of 1.7)
Linux libraries and software: GLib, GNU Multiple Precision Arithmetic Library, GNU Octave, GNU Scientific Library
Other: Microsoft Excel, GAUSS, gretl, Stata, SageMath, Scilab, Maple, MATLAB
It is also available in Apache Commons, in the standard C++ library (since C++11), and in Mathematica. Add-on implementations are provided in many program libraries, including the Boost C++ Libraries, the CUDA Library, and the NAG Numerical Library.
The Mersenne Twister is one of two PRNGs in SPSS: the other generator is kept only for compatibility with older programs, and the Mersenne Twister is stated to be "more reliable". The Mersenne Twister is similarly one of the PRNGs in SAS: the other generators are older and deprecated. The Mersenne Twister is the default PRNG in Stata; the other one, KISS, is kept for compatibility with older versions of Stata.
Advantages
Permissively-licensed and patent-free for all variants except CryptMT.
Passes numerous tests fo |
https://en.wikipedia.org/wiki/Y%20service | The "Y" service was a network of British signals intelligence collection sites, the Y-stations. The service was established during the First World War and used again during the Second World War. The sites were operated by a range of agencies including the Army, Navy and RAF plus the Foreign Office (MI6 and MI5), General Post Office and Marconi Company receiving stations ashore and afloat. There were more than 600 receiving sets in use at Y-stations during the Second World War.
Background
The "Y" stations tended to be one of two types, for intercepting the signals and for identifying where they were coming from. Sometimes both functions were operated at the same site, with the direction finding (D/F) hut being a few hundred metres from the main interception building, because of the need to minimise interference. The sites collected radio traffic which was then either analysed locally or if encrypted, passed for processing initially to the Admiralty Room 40 in London and during World War II to the Government Code and Cypher School at Bletchley Park in Buckinghamshire. In the Second World War a large house called "Arkley View" on the outskirts of Barnet (now part of the London Borough of Barnet) acted as a data collection centre, where traffic was collated and passed to Bletchley Park and it also acted as a Y station.
Many amateur radio (ham) operators supported the work of the Y stations, being enrolled as "Voluntary Interceptors". Much of the traffic intercepted by the Y stations was recorded by hand and sent to Bletchley by motorcycle couriers, and later by teleprinter over post office land lines. The name "Y" derived from Wireless Interception (WI). The term was also used for similar stations attached to the India outpost of the Intelligence Corps, the Wireless Experimental Centre (WEC) outside Delhi.
Direction-finding Y stations
Specially constructed Y stations undertook High-frequency direction finding of wireless transmissions. This became particularly important |
https://en.wikipedia.org/wiki/Comparative%20anatomy | Comparative anatomy is the study of similarities and differences in the anatomy of different species. It is closely related to evolutionary biology and phylogeny (the evolution of species).
The science began in the classical era, continuing in the early modern period with work by Pierre Belon who noted the similarities of the skeletons of birds and humans.
Comparative anatomy has provided evidence of common descent, and has assisted in the classification of animals.
History
The first specifically anatomical investigation separate from a surgical or medical procedure is associated with Alcmaeon of Croton. Leonardo da Vinci made notes for a planned anatomical treatise in which he intended to compare the hands of various animals, including bears. Pierre Belon, a French naturalist born in 1517, conducted research and held discussions on dolphin embryos as well as on comparisons between the skeletons of birds and humans. His research led to modern comparative anatomy.
Around the same time, Andreas Vesalius was also making some strides of his own. A young anatomist of Flemish descent made famous by a penchant for amazing charts, he was systematically investigating and correcting the anatomical knowledge of the Greek physician Galen. He noticed that many of Galen's observations were not even based on actual humans. Instead, they were based on animals such as apes, monkeys, and oxen. In fact, he entreated his students to do the following, in substitution for human skeletons, as cited by Edward Tyson : "If you can't happen to see any of these, dissect an Ape, carefully view each Bone, &c. ..." Then he advises what sort of Apes to make choice of, as most resembling a Man : And conclude "One ought to know the Structure of all the Bones either in a Humane Body, or in an Apes ; 'tis best in both ; and then to go to the Anatomy of the Muscles." Up until that point, Galen and his teachings had been the authority on human anatomy. The irony is that Galen himsel |
https://en.wikipedia.org/wiki/Coital%20incontinence | Coital incontinence (CI) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation. It has been reported to occur in 10% to 27% of sexually active women with urinary continence problems. There is evidence to suggest links between urinary leakage at penetration and urodynamic stress incontinence, and between urinary leakage at orgasm and detrusor overactivity.
Coital incontinence is physiologically distinct from female ejaculation, with which it is sometimes confused. |
https://en.wikipedia.org/wiki/Vena%20comitans | Vena comitans is Latin for accompanying vein and is also known as a satellite vein. It refers to a vein that is usually paired, with both veins lying on the sides of an artery. They are found in close proximity to arteries so that the pulsations of the artery aid venous return. Because they are generally found in pairs, they are often referred to by their plural form: venae comitantes.
Venae comitantes are usually found with certain smaller arteries, especially those in the extremities. Larger arteries, on the other hand, generally do not have venae comitantes. They usually have a single, similarly sized vein which is not as intimately associated with the artery.
Examples
Examples of arteries and their venae comitantes:
Radial artery and radial veins
Ulnar artery and ulnar veins
Brachial artery and brachial veins
Anterior tibial artery and anterior tibial veins
Posterior tibial artery and posterior tibial veins
Fibular artery and fibular veins
Examples of arteries that do not have venae comitantes (i.e. those that have "regular" veins):
Axillary artery and the axillary vein
Subclavian artery and the subclavian vein |
https://en.wikipedia.org/wiki/Invariant%20manifold | In dynamical systems, a branch of mathematics, an invariant manifold is a topological manifold that is invariant under the action of the dynamical system. Examples include the slow manifold, center manifold, stable manifold, unstable manifold, subcenter manifold and inertial manifold.
Typically, although by no means always, invariant manifolds are constructed as a 'perturbation' of an invariant subspace about an equilibrium.
In dissipative systems, an invariant manifold based upon the gravest, longest lasting modes forms an effective low-dimensional, reduced, model of the dynamics.
Definition
Consider the differential equation $\dot{x} = f(x)$, $x \in \mathbb{R}^n$,
with flow $x(t) = \phi_t(x_0)$ being the solution of the differential equation with $x(0) = x_0$.
A set $S$ is called an invariant set for the differential equation if, for each $x_0 \in S$, the solution $t \mapsto \phi_t(x_0)$, defined on its maximal interval of existence, has its image in $S$. Alternatively, the orbit
$\gamma(x_0) = \{ \phi_t(x_0) \}$ passing through each $x_0 \in S$ lies in $S$. In addition, $S$ is called an invariant manifold if $S$ is a manifold.
Examples
Simple 2D dynamical system
For any fixed parameter $a$, consider the variables $x(t)$, $y(t)$ governed by the pair of coupled differential equations $\dot{x} = ax - xy$ and $\dot{y} = -y + x^2 - 2y^2$.
The origin is an equilibrium. This system has two invariant manifolds of interest through the origin.
The vertical line $x = 0$ is invariant, as when $x = 0$ the $x$-equation becomes $\dot{x} = 0$, which ensures $x$ remains zero. This invariant manifold, $x = 0$, is a stable manifold of the origin (when $a > 0$), as all initial conditions on it (with $y > -1/2$) lead to solutions asymptotically approaching the origin.
The parabola $y = x^2/(1 + 2a)$ is invariant for all parameter $a$. One can see this invariance by considering the time derivative $\frac{d}{dt}\left(y - x^2/(1 + 2a)\right)$ and finding it is zero on $y = x^2/(1 + 2a)$, as required for an invariant manifold. For $a > 0$ this parabola is the unstable manifold of the origin. For $a = 0$ this parabola is a center manifold, more precisely a slow manifold, of the origin.
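Assuming the pair of equations reconstructed above, the invariance check can be mechanized in a few lines of sympy:

import sympy as sp

x, a = sp.symbols('x a')
y = x**2 / (1 + 2*a)            # candidate invariant parabola
xdot = a*x - x*y                # dx/dt evaluated on the parabola
ydot = -y + x**2 - 2*y**2       # dy/dt evaluated on the parabola
# time derivative of y - x^2/(1+2a) along the flow, by the chain rule
residual = ydot - sp.diff(y, x) * xdot
print(sp.simplify(residual))    # prints 0: the parabola is invariant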
For $a < 0$ there is only an invariant stable manifold about the origin, the stable manifold including all points $(x, y)$ in a neighbourhood of the origin.
Invariant manifolds in non-autonomous dynamical systems
A differential equation |
https://en.wikipedia.org/wiki/Homo%20consumericus | Homo consumericus (mock Latin for consumerist person) is a neologism used in the social sciences, notably by Gad Saad in his book The Evolutionary Bases of Consumption and by Gilles Lipovetsky in Le Bonheur Paradoxal. According to these and other scholars, the phenomenon of mass consumption can be compared to certain traits of human psychology described by evolutionary scientists, who point out similarities between Darwinian principles and consumer behaviour. Lipovetsky has noted that modern times have brought about the rise of a third type of Homo consumericus, who is unpredictable and insatiable.
A similar expression "Homo Consumens" was used by Erich Fromm in Socialist Humanism, written in 1965. There Fromm wrote on Homo consumens: "Homo consumens is the man whose main goal is not primarily to own things, but to consume more and more, and thus to compensate for his inner vacuity, passivity, loneliness, and anxiety."
The expression "Homo Consumens" was used by several other authors, including Mihailo Marković.
See also
Anti-consumerism
Commodity fetishism
Consumerism
Cultural studies
Gilles Lipovetsky |
https://en.wikipedia.org/wiki/GCube%20system | gCube is an open source software system specifically designed and developed to enact the building and operation of a Data Infrastructure providing their users with a rich array of services suitable for supporting the co-creation of Virtual Research Environments and promoting the implementation of open science workflows and practices. It is at the heart of the D4Science Data Infrastructure.
It is primarily organised as a number of web services, each called upon to offer functionality supporting the phases of knowledge production and sharing. In addition, it consists of a set of software libraries supporting service development, service-to-service integration, and service capability extension, and a set of portlets dedicated to realising user-interface constituents that facilitate the exploitation of one or more services.
It is designed and conceived to enact systems of systems. In fact, gCube services rely on standards and mediators to interact with other services, and are themselves exposed through standards and APIs so that clients can use them. For instance, the DataMiner service implements the Web Processing Service protocol to make it easy for clients to execute processes.
The set of components dealing with Identity and Access Management relies on Keycloak and federates other IDMs, thus making the overall authentication and authorization management compliant with open standards such as the OAuth2, User-Managed Access (UMA), and OpenID Connect (OIDC) protocols.
The Catalogue relies on DCAT, OAI-PMH, and Catalogue Service for the Web to collect contents from other catalogues and data sources and offers its content by DCAT, OAI-PMH, and a proprietary REST API (gCat REST API).
Its Continuous Integration/Continuous Delivery pipeline implemented by Jenkins represents an innovative approach to software delivering conceived to be scalable and easy to maintain and upgrade at a minimal cost (see Jenkins Case Study).
History
gCube has been developed in the context of the D4 |
https://en.wikipedia.org/wiki/Flooding%20%28computer%20networking%29 | Flooding is used in computer network routing algorithms in which every incoming packet is sent through every outgoing link except the one it arrived on.
Flooding is used in bridging and in systems such as Usenet and peer-to-peer file sharing and as part of some routing protocols, including OSPF, DVMRP, and those used in ad-hoc wireless networks (WANETs).
Types
There are generally two types of flooding available, uncontrolled flooding and controlled flooding.
In uncontrolled flooding each node unconditionally distributes packets to each of its neighbors. Without conditional logic to prevent indefinite recirculation of the same packet, broadcast storms are a hazard.
Controlled flooding has its own two algorithms to make it reliable, SNCF (Sequence Number Controlled Flooding) and RPF (reverse-path forwarding). In SNCF, the node attaches its own address and a sequence number to the packet, and every node keeps a memory of the addresses and sequence numbers it has seen; if it receives a packet that is already in memory, it drops it immediately. In RPF, the node will only send the packet forward; if it is received from the next node, it sends it back to the sender.
Algorithms
There are several variants of flooding algorithms. Most work roughly as follows:
Each node acts as both a transmitter and a receiver.
Each node tries to forward every message to every one of its neighbors except the source node.
This results in every message eventually being delivered to all reachable parts of the network.
Algorithms may need to be more complex than this, since, in some cases, precautions have to be taken to avoid wasted duplicate deliveries and infinite loops, and to allow messages to eventually expire from the system.
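A toy sketch of such a flooding scheme in Python (the graph representation and names are illustrative):

from collections import deque

def flood(graph, source):
    # graph: adjacency lists; returns the order in which nodes receive
    # the message, dropping duplicates as in SNCF-style controlled flooding
    seen = {source}            # stands in for remembered (address, seq) pairs
    order = []
    queue = deque([(source, None)])
    while queue:
        node, arrived_on = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            # forward on every link except the one the packet arrived on,
            # and drop packets already seen to avoid broadcast storms
            if neighbor != arrived_on and neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, node))
    return order

print(flood({'A': ['B', 'C'], 'B': ['A', 'C', 'D'],
             'C': ['A', 'B'], 'D': ['B']}, 'A'))   # ['A', 'B', 'C', 'D']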
Selective flooding
A variant of flooding called selective flooding partially addresses these issues by only sending packets to routers in the same direction. In selective flooding, the routers don't send every incoming packet on every line but only on those lines which are going approxim |
https://en.wikipedia.org/wiki/HITRAN | HITRAN (an acronym for High Resolution Transmission) molecular spectroscopic database is a compilation of spectroscopic parameters used to simulate and analyze the transmission and emission of light in gaseous media, with an emphasis on planetary atmospheres. The knowledge of spectroscopic parameters for transitions between energy levels in molecules (and atoms) is essential for interpreting and modeling the interaction of radiation (light) within different media.
For half a century, HITRAN has been considered to be an international standard which provides the user a recommended value of parameters for millions of transitions for different molecules. HITRAN includes both experimental and theoretical data which are gathered from a worldwide network of contributors as well as from articles, books, proceedings, databases, theses, reports, presentations, unpublished data, papers in-preparation and private communications. A major effort is then dedicated to evaluating and processing the spectroscopic data. A single transition in HITRAN has many parameters, including a default 160-byte fixed-width format used since HITRAN2004. Wherever possible, the retrieved data are validated against accurate laboratory data.
The original version of HITRAN was compiled by the US Air Force Cambridge Research Laboratories (1960s) in order to enable surveillance of military aircraft detected through the terrestrial atmosphere. One of the early applications of HITRAN was a program called Atmospheric Radiation Measurement (ARM) for the US Department of Energy. In this program spectral atmospheric measurements were made around the globe in order to better understand the balance between the radiant energy that reaches Earth from the sun and the energy that flows from Earth back out to space. The US Department of Transportation also utilized HITRAN in its early days for monitoring the gas emissions (NO, SO2, NO2) of supersonic transports flying at high altitude. HITRAN was first made public
https://en.wikipedia.org/wiki/Blue%20sky%20catastrophe | The blue sky catastrophe is a form of orbital indeterminacy, and an element of bifurcation theory.
Orbital dynamics
Blue sky catastrophe is a type of bifurcation of a periodic orbit. In other words, it describes a sort of behaviour that stable solutions of a set of differential equations can undergo as the equations are gradually changed. This type of bifurcation is characterised by both the period and length of the orbit approaching infinity as the control parameter approaches a finite bifurcation value, but with the orbit still remaining within a bounded part of the phase space, and without loss of stability before the bifurcation point. In other words, the orbit vanishes into the blue sky.
Applications of blue sky catastrophe in other fields
The bifurcation has found application in, amongst other places, slow-fast models of computational neuroscience. The possibility of the phenomenon was raised by David Ruelle and Floris Takens in 1971, and explored by R.L. Devaney and others in the following decade. More compelling analysis was not performed until the 1990s.
This bifurcation has also been found in the context of fluid dynamics, namely in double-diffusive convection of a small Prandtl number fluid. Double diffusive convection occurs when convection of the fluid is driven by both thermal and concentration gradients, and the temperature and concentration diffusivities take different values. The bifurcation is found in an orbit that is born in a global saddle-loop bifurcation, becomes chaotic in a period doubling cascade, and disappears in the blue sky catastrophe. |
https://en.wikipedia.org/wiki/Indexing%20Service | Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search Indexer. The IFilter plugins that extend the indexing capabilities to more file formats and protocols are compatible between the legacy Indexing Service and the newer Windows Search Indexer.
History
Indexing Service was a desktop search service included with Windows NT 4.0 Option Pack as well as Windows 2000 and later. The first incarnation of the indexing service was shipped in August 1996 as a content search system for Microsoft's web server software, Internet Information Services. Its origins, however, date further back to Microsoft's Cairo operating system project, with the component serving as the Content Indexer for the Object File System. Cairo was eventually shelved, but the content indexing capabilities would go on to be included as a standard component of later Windows desktop and server operating systems, starting with Windows 2000, which includes Indexing Service 3.0.
In Windows Vista, the content indexer was replaced with the Windows Search indexer which was enabled by default. Indexing Service is still included with Windows Server 2008 but is not installed or running by default.
Indexing Service has been deprecated in Windows 7 and Windows Server 2008 R2. It has been removed from Windows 8.
Search interfaces
Comprehensive searching is available after the initial building of the index, which can take hours or days, depending on the size of the specified directories, the speed of the hard drive, user activity, indexer settings and other factors. Searching using the Indexing Service also works on UNC paths and/or mapped network drives if the sharing server indexes the appropriate directory and is aware of its sharing.
Once the indexing service ha |
https://en.wikipedia.org/wiki/Test%20suite | In software development, a test suite, less commonly known as a validation suite, is a collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviors. A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes termed a test plan, a test script, or even a test scenario.
Types
Occasionally, test suites are used to group similar test cases together. A system might have a smoke test suite that consists only of smoke tests or a test suite for some specific functionality in the system. It may also contain all tests and signify whether a test should be used as a smoke test or for some specific functionality.
In model-based testing, one distinguishes between abstract test suites, which are collections of abstract test cases derived from a high-level model of the system under test, and executable test suites, which are derived from abstract test suites by providing the concrete, lower-level details needed to execute this suite by a program. An abstract test suite cannot be directly used on the actual system under test (SUT) because abstract test cases remain at a high abstraction level and lack concrete details about the SUT and its environment. An executable test suite works on a sufficiently detailed level to correctly communicate with the SUT and a test harness is usually present to interface the executable test suite with the SUT.
A test suite for a primality testing subroutine might consist of a list of numbers and their primality (prime or composite), along with a testing subroutine. The testing subroutine would supply each number in the list to the primality tester, and verify that the result of each test is correct.
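Sketched in Python (the subroutine and the particular cases are illustrative, not from the article):

def is_prime(n):
    # subroutine under test: trial division
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# each test case pairs an input with its expected classification
TEST_SUITE = [(2, True), (3, True), (4, False), (17, True), (91, False)]

def run_suite():
    for number, expected in TEST_SUITE:
        assert is_prime(number) == expected, f"failed on {number}"

run_suite()   # raises AssertionError on any incorrect result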
See also
Scenario test
Software |
https://en.wikipedia.org/wiki/KCNV2 | Potassium voltage-gated channel subfamily V member 2 is a protein that in humans is encoded by the KCNV2 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit. |
https://en.wikipedia.org/wiki/Tex36 | Testis expressed 36, TEX36, is a protein that in humans is encoded by the tex36 gene. TEX36 interacts with proteins of the MAP kinase family, suggesting that TEX36 may be regulated in an on/off fashion. The encoded protein is highly expressed in fetal, testis, and placental tissues and has background expression levels in adults. There are also many motifs specific to male sex determination and spermatogenic factors, suggesting that it is involved in development.
Gene
The verified gene sequence was confirmed in NCBI on November 26, 2016. The coding region spans 106,622 bases, and within that region are 4 exons.
Aliases
TEX36 is also known by the aliases C10orf122 and BA383C5.1. It has cytogenetic bands at 10q26.13. TEX36 is also a member of the Human CCDS set CCDS44493.1.
Locus
The gene spans from 125576522 to 125683144 in the human genome. The gene neighborhood of TEX36 includes aldolase, fructose-bisphosphate A2, along with several uncharacterized loci.
mRNA
There are two variants of TEX36, transcripts 1 and 2. The two differ by an alternately spliced 4th exon. Validated variants, 1 and 2, both contain 4 exons, but variant 1 has a longer transcript with 922 base pairs, whereas variant 2 contains 777 base pairs with a different terminating region. Only variant 1 is the protein encoding transcript.
Protein
Variant 1 encodes protein testis expressed 36. TEX36 does not have any confirmed isoforms. The unmodified TEX36 consists of 186 amino acids and has a molecular weight of 21,545 daltons, with an isoelectric point of 9.1. Amino acids serine and lysine are highly represented in the protein at a higher frequency than observed in most proteins in vertebrates.
Domains & Motifs
Secondary Structure
Based on conservation through multiple sequence alignments and multiple secondary structure prediction algorithms, TEX36 is composed of 4 alpha helices and 5 beta strands.
Post-Translational |
https://en.wikipedia.org/wiki/Calcium-binding%20EGF%20domain | In molecular biology, the calcium-binding EGF domain is an EGF-like domain of about forty amino-acid residues found in epidermal growth factor (EGF). This domain is present in a large number of membrane-bound and extracellular, mostly animal, proteins. Many of these proteins require calcium for their biological function and a calcium-binding site has been found at the N-terminus of some EGF-like domains. Calcium-binding may be crucial for numerous protein-protein interactions.
For human coagulation factor IX it has been shown that the calcium-ligands form a pentagonal bipyramid. The first, third and fourth conserved negatively charged or polar residues are side chain ligands. The latter is possibly hydroxylated. A conserved aromatic residue, as well as the second conserved negative residue, are thought to be involved in stabilising the calcium-binding site.
As in non-calcium binding EGF-like domains, there are six conserved cysteines and the structure of both types is very similar as calcium-binding induces only strictly local structural changes. |
https://en.wikipedia.org/wiki/Arend%20Heyting |
Arend Heyting (; 9 May 1898 – 9 July 1980) was a Dutch mathematician and logician.
Biography
Heyting was a student of Luitzen Egbertus Jan Brouwer at the University of Amsterdam, and did much to put intuitionistic logic on a footing where it could become part of mathematical logic. Heyting gave the first formal development of intuitionistic logic in order to codify Brouwer's way of doing mathematics. The inclusion of Brouwer's name in the Brouwer–Heyting–Kolmogorov interpretation is largely honorific, as Brouwer was opposed in principle to the formalisation of certain intuitionistic principles (and went as far as calling Heyting's work a "sterile exercise").
In 1942 he became a member of the Royal Netherlands Academy of Arts and Sciences.
Heyting was born in Amsterdam, Netherlands, and died in Lugano, Switzerland.
Selected publications
Heyting, A. (1930) Die formalen Regeln der intuitionistischen Logik. (German) 3 parts. In: Sitzungsberichte der preußischen Akademie der Wissenschaften, phys.-math. Klasse, 1930, 42–56, 57–71, 158–169.
Heyting, A. (1934) Mathematische Grundlagenforschung. Intuitionismus. Beweistheorie. Springer, Berlin.
Heyting, A. (1941) Untersuchungen der intuitionistische Algebra. (German) Verh. Nederl. Akad. Wetensch. Afd. Natuurk. Sect. 1. 18. no. 2, 36 pp.
Heyting, A. (1956) Intuitionism. An introduction. North-Holland Publishing Co., Amsterdam.
Heyting, A. (1959) Axioms for intuitionistic plane affine geometry. The axiomatic method. With special reference to geometry and physics. Proceedings of an International Symposium held at the Univ. of Calif., Berkeley, Dec. 26, 1957–Jan 4, 1958 (edited by L. Henkin, P. Suppes and A. Tarski) pp. 160–173 Studies in Logic and the Foundations of Mathematics North-Holland Publishing Co., Amsterdam.
Heyting, A. (1962) After thirty years. 1962 Logic, Methodology and Philosophy of Science (Proc. 1960 Internat. Congr.) pp. 194–197 Stanford Univ. Press, Stanford, Calif.
Heyting, A. (1963) Axiomatic projec |
https://en.wikipedia.org/wiki/ICOMP%20%28index%29 | iCOMP for Intel Comparative Microprocessor Performance was an index published by Intel used to measure the relative performance of its microprocessors.
There were three revisions of the iCOMP index. Version 1.0 (1992) was benchmarked against the 486SX 25, while version 2.0 (1996) was benchmarked against the Pentium 120. For version 3.0 (1999) the baseline was the Pentium II at 350 MHz.
See also
PR rating |
https://en.wikipedia.org/wiki/Balance%20of%20angular%20momentum | The balance of angular momentum or Euler's second law in classical mechanics is a law of physics, stating that to alter the angular momentum of a body a torque must be applied to it.
An example of use is the playground merry-go-round in the picture. To put it in rotation it must be pushed, technically by applying a torque that feeds angular momentum to the merry-go-round. The torque of frictional forces in the bearing and of drag, however, produces a resistive torque that will gradually lessen the angular momentum and eventually stop the rotation.
The mathematical formulation states that the rate of change of the angular momentum $\vec{L}$ about a point $O$ is equal to the sum of the external torques $\vec{M}$ acting on the body about that point: $\frac{\mathrm{d}\vec{L}}{\mathrm{d}t} = \vec{M}.$
The point $O$ is a fixed point in an inertial system or the center of mass of the body. In the special case when external torques vanish, it shows that the angular momentum is preserved. The d'Alembert force counteracting the change of angular momentum shows up as a gyroscopic effect.
From the balance of angular momentum follows the equality of corresponding shear stresses or the symmetry of the Cauchy stress tensor. The same follows from the Boltzmann Axiom, according to which internal forces in a continuum are torque-free. Thus the balance of angular momentum, the symmetry of the Cauchy stress tensor, and the Boltzmann Axiom in continuum mechanics are related terms.
Especially in the theory of the top the balance of angular momentum plays a crucial part. In continuum mechanics it serves to exactly determine the skew-symmetric part of the stress tensor.
The balance of angular momentum is, besides the Newtonian laws, a fundamental and independent principle and was introduced as such first by Swiss mathematician and physicist Leonhard Euler in 1775.
History
Swiss mathematician Jakob I Bernoulli applied the balance of angular momentum in 1703 – without explicitly formulating it – to find the center of oscillation of a pendulum, which he had already done |
https://en.wikipedia.org/wiki/History%20of%20the%20Berkeley%20Software%20Distribution | The History of the Berkeley Software Distribution begins in the 1970s.
1BSD (PDP-11)
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day (sometimes during the day, sometimes during the night). A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project.
Also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out.
2BSD (PDP-11)
The Second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A further feature was a networking package called Berknet, developed by Eric Schmidt as part of his master's thesis work, that could connect up to twenty-six compu |
https://en.wikipedia.org/wiki/Trimeric%20autotransporter%20adhesin | In molecular biology, trimeric autotransporter adhesins (TAAs), are proteins found on the outer membrane of Gram-negative bacteria. Bacteria use TAAs in order to infect their host cells via a process called cell adhesion. TAAs also go by another name, oligomeric coiled-coil adhesins, which is shortened to OCAs. In essence, they are virulence factors, factors that make the bacteria harmful and infective to the host organism.
TAAs are just one of many methods bacteria use to infect their hosts, infection resulting in diseases such as pneumonia, sepsis, and meningitis. Most bacteria infect their host through a method named the secretion pathway. TAAs are part of the secretion pathway, to be more specific the type Vc secretion system.
Trimeric autotransporter adhesins have a unique structure. The structure they hold is crucial to their function. They all appear to have a head-stalk-anchor structure. Each TAA is made up of three identical proteins, hence the name trimeric. Once the membrane anchor has been inserted into the outer membrane, the passenger domain passes through it into the host extracellular environment autonomously, hence the description of autotransporter. The head domain, once assembled, then adheres to an element of the host extracellular matrix, for example, collagen, fibronectin, etc.
Molecular structure
Most TAAs have a similar protein structure. When observed with electron microscopy, the structure has been described as a "lollipop" shape consisting of an N-terminal head domain, a stalk domain, and a C-terminal membrane anchor domain. Often, the literature refers to these as Passenger domain, containing the N-terminal, head, neck, and coiled-coil stalk, and the Translocation domain, referring to the C-terminal membrane anchor. Although all TAAs carry a membrane anchor in common, they may not all contain both a stalk and a head as well. In addition, all membrane anchor domains are of the left-handed parallel beta-roll type.
Extended Signal Pepti |
https://en.wikipedia.org/wiki/Proca%20action | In physics, specifically field theory and particle physics, the Proca action describes a massive spin-1 field of mass m in Minkowski spacetime. The corresponding equation is a relativistic wave equation called the Proca equation. The Proca action and equation are named after Romanian physicist Alexandru Proca.
The Proca equation is involved in the Standard Model and describes there the three massive vector bosons, i.e. the Z and W bosons.
This article uses the (+−−−) metric signature and tensor index notation in the language of 4-vectors.
Lagrangian density
The field involved is a complex 4-potential $B^\mu = \left(\frac{\phi}{c}, \mathbf{A}\right)$, where $\phi$ is a kind of generalized electric potential and $\mathbf{A}$ is a generalized magnetic potential. The field transforms like a complex four-vector.
The Lagrangian density is given by:
$\mathcal{L} = -\frac{1}{2}(\partial_\mu B_\nu^* - \partial_\nu B_\mu^*)(\partial^\mu B^\nu - \partial^\nu B^\mu) + \frac{m^2 c^2}{\hbar^2} B_\nu^* B^\nu,$
where $c$ is the speed of light in vacuum, $\hbar$ is the reduced Planck constant, and $\partial_\mu$ is the 4-gradient.
Equation
The Euler–Lagrange equation of motion for this case, also called the Proca equation, is:
$\partial_\mu(\partial^\mu B^\nu - \partial^\nu B^\mu) + \left(\frac{mc}{\hbar}\right)^2 B^\nu = 0,$
which is equivalent to the conjunction of
$\left[\partial_\mu \partial^\mu + \left(\frac{mc}{\hbar}\right)^2\right] B^\nu = 0$
with (in the massive case)
$\partial_\mu B^\mu = 0,$
which may be called a generalized Lorenz gauge condition. For non-zero sources, with all fundamental constants included, the field equation is:
When $m = 0$, the source free equations reduce to Maxwell's equations without charge or current, and the above reduces to Maxwell's charge equation. This Proca field equation is closely related to the Klein–Gordon equation, because it is second order in space and time.
In the vector calculus notation, the source free equations are:
$\Box \phi - \frac{\partial}{\partial t} \left(\frac{1}{c^2}\frac{\partial \phi}{\partial t} + \nabla \cdot \mathbf{A}\right) = -\left(\frac{mc}{\hbar}\right)^2 \phi$
$\Box \mathbf{A} + \nabla \left(\frac{1}{c^2}\frac{\partial \phi}{\partial t} + \nabla \cdot \mathbf{A}\right) = -\left(\frac{mc}{\hbar}\right)^2 \mathbf{A},$
and $\Box$ is the d'Alembert operator.
Gauge fixing
The Proca action is the gauge-fixed version of the Stueckelberg action via the Higgs mechanism. Quantizing the Proca action requires the use of second class constraints.
If $m \neq 0$, they are not invariant under the gauge transformations of electromagnetism
$B^\mu \to B^\mu - \partial^\mu f,$
where $f$ is an arbitrary function.
See also
Electromagnetic field
Photon
Quantum electrodynamics
Quantum gravity
Vector boson
Relativistic wave equations
K |
https://en.wikipedia.org/wiki/Flora%20Svecica | Flora Svecica ("Flora of Sweden", ed. 1, Stockholm, 1745; ed. 2 Stockholm, 1755) was written by Swedish botanist, physician, zoologist and naturalist Carl Linnaeus (1707–1778).
This was the first full account of the plants growing in Sweden and one of the first examples of the Flora in the modern idiom. The full title of the publication was Flora Svecica: Enumerans Plantas Sueciae Indigenas Cum Synopsi Classium Ordinumque, Characteribus Generum, Differentiis Specierum, Synonymis Citationibusque Selectis - Locis Regionibusque Natalibus - Descriptionibus Habitualibus Nomina Incolarum Et Qualitat.
Bibliographic details
Full bibliographic details including exact dates of publication, pagination, editions, facsimiles, brief outline of contents, location of copies, secondary sources, translations, reprints, manuscripts, travelogues, and commentaries are given in Stafleu and Cowan's Taxonomic Literature.
See also
Flora Lapponica |
https://en.wikipedia.org/wiki/Field%20effect%20%28chemistry%29 | A field effect is the polarization of a molecule through space. The effect is a result of an electric field produced by charge localization in a molecule. This field, which is substituent and conformation dependent, can influence structure and reactivity by manipulating the location of electron density in bonds and/or the overall molecule. The polarization of a molecule through its bonds is a separate phenomenon known as induction. Field effects are relatively weak, and diminish rapidly with distance, but have still been found to alter molecular properties such as acidity.
Field sources
Field effects can arise from the electric dipole field of a bond containing an electronegative atom or electron-withdrawing substituent, as well as from an atom or substituent bearing a formal charge. The directionality of a dipole, and concentration of charge, can both define the shape of a molecule's electric field which will manipulate the localization of electron density toward or away from sites of interest, such as an acidic hydrogen. Field effects are typically associated with the alignment of a dipole field with respect to a reaction center. Since these are through space effects, the 3D structure of a molecule is an important consideration. A field may be interrupted by other bonds or atoms before propagating to a reactive site of interest. Atoms of differing electronegativities can move closer together resulting in bond polarization through space that mimics the inductive effect through bonds. Bicycloheptane and bicyclooctane (seen left) are two compounds in which the change in acidity with substitution was attributed to the field effect. The C-X dipole is oriented away from the carboxylic acid group, and can draw electron density away because the molecule center is empty, with a low dielectric constant, so the electric field is able to propagate with minimal resistance.
Utility of effect
A dipole can align to stabilize or destabilize the formation or loss of a charge, |
https://en.wikipedia.org/wiki/Equivalence%20group | An equivalence group is a set of unspecified cells that have the same developmental potential or ability to adopt various fates. Our current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another.
Equivalence groups assume various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types.
Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, thus they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians.
Vulva Precursor Cell Equivalence Group
Introduction
A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p-P8.p, which lie ventrally along the anterior-posterior axis. In this example, a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulva fates 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which makes epidermal cells that fuse to a large syncytial epidermis.
The six VPCs form an equivalence group beca |
https://en.wikipedia.org/wiki/Negative%20impedance%20converter | The negative impedance converter (NIC) is an active circuit which injects energy into circuits, in contrast to an ordinary load that consumes energy from them. This is achieved by adding or subtracting, in series, a varying voltage in excess of the voltage drop across an equivalent positive impedance. This reverses the voltage polarity or the current direction of the port and introduces a phase shift of 180° (inversion) between the voltage and the current for any signal generator. The two versions obtained are accordingly a negative impedance converter with voltage inversion (VNIC) and a negative impedance converter with current inversion (INIC). The basic circuit of an INIC and its analysis are shown below.
Basic circuit and analysis
INIC is a non-inverting amplifier (the op-amp and the voltage divider $R_1$, $R_2$ on the figure) with a resistor $R_3$ connected between its output and input. The op-amp output voltage is
$$v_\text{out} = v \left(1 + \frac{R_2}{R_1}\right).$$
The current going from the operational amplifier output through resistor $R_3$ toward the source is
$$i = \frac{v - v_\text{out}}{R_3} = -\frac{R_2}{R_1 R_3}\, v.$$
So the input experiences an opposing current $i$ that is proportional to $v$, and the circuit acts like a resistor with negative resistance
$$R_\text{in} = \frac{v}{i} = -\frac{R_1 R_3}{R_2}.$$
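As a quick numeric sanity check of these relations, here is a minimal sketch in Python; the component values are illustrative assumptions, not taken from the figure:

R1, R2, R3 = 1e3, 1e3, 1e3     # ohms; illustrative values
v = 1.0                        # voltage applied at the input (V)
v_out = v * (1 + R2 / R1)      # non-inverting amplifier output: 2.0 V
i = (v - v_out) / R3           # current from the output toward the source: -1 mA
print(v / i)                   # -1000.0: the port behaves as -R1*R3/R2 ohms
print(-R1 * R3 / R2)           # -1000.0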
In general, elements $R_1$, $R_2$, and $R_3$ need not be pure resistances (i.e., they may be capacitors, inductors, or impedance networks).
Application
By using an NIC as a negative resistor, it is possible to let a real generator behave (almost) like an ideal generator (i.e., the magnitude of the current or of the voltage generated does not depend on the load).
An example for a current source is shown in the figure on the right. The current generator and the resistor $R_s$ within the dotted line form the Norton representation of a circuit comprising a real generator, and $R_s$ is its internal resistance. If an INIC is placed in parallel with that internal resistance, and the INIC has the same magnitude but inverted resistance value, there will be $R_s$ and $-R_s$ in parallel. Hence, the equivalent resistance is
$$R_s \parallel (-R_s) = \frac{R_s \cdot (-R_s)}{R_s + (-R_s)} \to \infty.$$
That is, the combination of the real generator and the I |
https://en.wikipedia.org/wiki/Kuwahara%20filter | The Kuwahara filter is a non-linear smoothing filter used in image processing for adaptive noise reduction. Most filters that are used for image smoothing are linear low-pass filters that effectively reduce noise but also blur out the edges. However the Kuwahara filter is able to apply smoothing on the image while preserving the edges. It is named after Michiyoshi Kuwahara, Ph.D., who worked at Kyoto and Osaka Sangyo Universities in Japan, developing early medical imaging of dynamic heart muscle in the 1970s and 80s.
The Kuwahara operator
Suppose that $I(x,y)$ is a grey scale image and that we take a square window of size $2a+1$ centered around a point $(x,y)$ in the image. This square can be divided into four smaller square regions $Q_1, \dots, Q_4$, each of which is
$$Q_1(x,y) = [x, x+a] \times [y, y+a],$$
$$Q_2(x,y) = [x-a, x] \times [y, y+a],$$
$$Q_3(x,y) = [x-a, x] \times [y-a, y],$$
$$Q_4(x,y) = [x, x+a] \times [y-a, y],$$
where $\times$ is the cartesian product. Pixels located on the borders between two regions belong to both regions, so there is a slight overlap between subregions.
The arithmetic mean $m_i(x,y)$ and standard deviation $\sigma_i(x,y)$ of each of the four regions centered around a pixel $(x,y)$ are calculated and used to determine the value of the central pixel. The output of the Kuwahara filter $\Phi(x,y)$ for any point is then given by
$$\Phi(x,y) = m_i(x,y), \qquad i = \arg\min_{j} \sigma_j(x,y).$$
This means that the central pixel takes the mean value of the most homogeneous region. The location of the pixel relative to an edge plays a large role in determining which region will have the greater standard deviation. If, for example, the pixel is located on the dark side of an edge, it will most probably take the mean value of the dark region. On the other hand, should the pixel be on the lighter side of an edge, it will most probably take a light value. In the event that the pixel is located on the edge itself, it will take the value of the smoother, least textured region. Taking the homogeneity of the regions into account ensures that the filter preserves edges, while using the mean creates the blurring effect.
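The procedure above translates directly into code. The following is a naive per-pixel sketch in Python/NumPy, assuming a 2D greyscale float array; border pixels are left unfiltered, and the function name and parameters are illustrative:

import numpy as np

def kuwahara(image, a=2):
    # For each pixel, take the mean of whichever of the four overlapping
    # (a+1) x (a+1) quadrants has the smallest standard deviation.
    out = image.astype(float)
    h, w = image.shape
    for y in range(a, h - a):
        for x in range(a, w - a):
            quads = [image[y - a:y + 1, x - a:x + 1],   # top-left
                     image[y - a:y + 1, x:x + a + 1],   # top-right
                     image[y:y + a + 1, x - a:x + 1],   # bottom-left
                     image[y:y + a + 1, x:x + a + 1]]   # bottom-right
            out[y, x] = min(quads, key=np.std).mean()
    return out

# Example: a noisy step edge stays sharp after filtering.
img = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
img += np.random.default_rng(0).normal(0.0, 0.1, img.shape)
smoothed = kuwahara(img, a=2)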
Similarly to the median filter, the Kuwahara filter uses a sliding window approach to access every pi
https://en.wikipedia.org/wiki/List%20of%20personal%20information%20managers | The following is a list of personal information managers (PIMs) and online organizers.
Applications
Discontinued applications
See also
Comparisons
Comparison of email clients
Comparison of file managers
Comparison of note-taking software
Comparison of reference management software
Comparison of text editors
Comparison of wiki software
Comparison of word processors
Lists
List of outliners
Comparison of project management software
List of text editors
List of wiki software
External links
Lists of software |
https://en.wikipedia.org/wiki/Glabella | The glabella, in humans, is the area of skin between the eyebrows and above the nose. The term also refers to the underlying bone that is slightly depressed, and joins the two brow ridges. It is a cephalometric landmark that is just superior to the nasion.
Etymology
The term for the area is derived from the Latin glabellus, meaning 'smooth, hairless'.
In medical science
The skin of the glabella may be used to measure skin turgor in suspected cases of dehydration by gently pinching and lifting it. When released, the glabella of a dehydrated patient tends to remain extended ("tented"), rather than returning to its normal shape.
See also
Glabellar reflex |
https://en.wikipedia.org/wiki/Viterbi%20decoder | A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using a convolutional code or trellis code.
There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-consuming, but it performs maximum likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice.
Viterbi decoding was developed by Andrew J. Viterbi and published in the paper "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm" (IEEE Transactions on Information Theory, April 1967).
There are both hardware (in modems) and software implementations of a Viterbi decoder.
Viterbi decoding is used in the iterative Viterbi decoding algorithm.
Hardware implementation
A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks:
Branch metric unit (BMU)
Path metric unit (PMU)
Traceback unit (TBU)
Branch metric unit (BMU)
A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet and the received symbol.
There are hard decision and soft decision Viterbi decoders. A hard decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol. For instance, in a 3-bit encoding, this reliability information can be encoded as follows:
Of course, it is not the only way to encode reliability data.
The squared Euclidean distance is used as a metric for soft decision decoders.
Path metric unit (PMU)
A path metric unit summarizes branch metrics to get metrics for $2^{K-1}$ paths, where $K$ is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes decisions, discarding paths that are known to be nonoptimal. The results of these decisions are written to the memory of a traceback unit.
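A compact software sketch may make the interaction of the three blocks concrete. The following Python example implements a hard-decision decoder for the common rate-1/2, K=3 convolutional code with generator polynomials (7, 5) in octal; the code choice, variable names, and zero-state start are illustrative assumptions, not taken from the article:

G = (0b111, 0b101)   # generator polynomials (7, 5) in octal
K = 3                # constraint length; 2**(K-1) = 4 trellis states

def encode_step(state, bit):
    # Shift the new bit into the register; outputs are parities of the taps.
    reg = (bit << (K - 1)) | state
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1                              # (output symbols, next state)

def viterbi_decode(symbols):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metrics = [0.0] + [INF] * (n_states - 1)          # PMU state; start in state 0
    history = []                                      # decisions kept for the TBU
    for rx in symbols:
        best = [(INF, 0, 0)] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for bit in (0, 1):
                out, nxt = encode_step(s, bit)
                bm = sum(a != b for a, b in zip(out, rx))   # BMU: Hamming distance
                if metrics[s] + bm < best[nxt][0]:          # add-compare-select
                    best[nxt] = (metrics[s] + bm, s, bit)
        metrics = [b[0] for b in best]
        history.append(best)
    state = metrics.index(min(metrics))               # TBU: trace back from best state
    bits = []
    for step in reversed(history):
        _, prev, bit = step[state]
        bits.append(bit)
        state = prev
    return bits[::-1]

# Round trip: encode a few bits, then decode them back.
msg, state, tx = [1, 0, 1, 1, 0], 0, []
for b in msg:
    out, state = encode_step(state, b)
    tx.append(out)
print(viterbi_decode(tx))   # [1, 0, 1, 1, 0]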
The co |
https://en.wikipedia.org/wiki/VELCT | Velocity Energy-efficient and Link-aware Cluster-Tree (VELCT) is a cluster and tree-based topology management protocol for mobile wireless sensor networks (MWSNs).
See also
DCN
DCT
CIDT |
https://en.wikipedia.org/wiki/Pink%20noise | Pink noise, noise or fractional noise or fractal noise is a signal or process with a frequency spectrum such that the power spectral density (power per frequency interval) is inversely proportional to the frequency of the signal. In pink noise, each octave interval (halving or doubling in frequency) carries an equal amount of noise energy.
Pink noise sounds like a waterfall. It is often used to tune loudspeaker systems in professional audio. Pink noise is one of the most commonly observed signals in biological systems.
The name arises from the pink appearance of visible light with this power spectrum. This is in contrast with white noise which has equal intensity per frequency interval.
Definition
Within the scientific literature, the term 1/f noise is sometimes used loosely to refer to any noise with a power spectral density of the form
$$S(f) \propto \frac{1}{f^{\alpha}},$$
where $f$ is frequency and $0 < \alpha < 2$, with exponent $\alpha$ usually close to 1. One-dimensional signals with $\alpha = 1$ are usually called pink noise.
The following function describes a length-$N$ one-dimensional pink noise signal (i.e. a Gaussian white noise signal with zero mean and standard deviation $\sigma$, which has been suitably filtered), as a sum of sine waves with different frequencies, whose amplitudes fall off inversely with the square root of frequency (so that power, which is the square of amplitude, falls off inversely with frequency), and phases are random:
$$x_n = \sigma \sqrt{\frac{2}{N}} \sum_{k=1}^{N/2} \frac{\chi_k}{\sqrt{k}} \sin\!\left(\frac{2\pi k n}{N} + \varphi_k\right),$$
where the $\chi_k$ are iid chi-distributed variables and the $\varphi_k$ are uniform random phases.
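A direct transcription of this construction might look as follows in Python/NumPy (a sketch; taking the chi-distributed amplitudes to have one degree of freedom, i.e. |N(0,1)|, is an assumption for illustration):

import numpy as np

def pink_noise(N, sigma=1.0, rng=None):
    # Sum of sines with amplitudes chi_k / sqrt(k) and uniform random phases,
    # scaled so the result has standard deviation close to sigma.
    rng = rng or np.random.default_rng()
    n = np.arange(N)
    x = np.zeros(N)
    for k in range(1, N // 2 + 1):
        chi_k = abs(rng.normal())              # chi-distributed amplitude (1 dof)
        phi_k = rng.uniform(0.0, 2.0 * np.pi)  # random phase
        x += chi_k / np.sqrt(k) * np.sin(2.0 * np.pi * k * n / N + phi_k)
    return sigma * np.sqrt(2.0 / N) * x

x = pink_noise(1024, rng=np.random.default_rng(1))
psd = np.abs(np.fft.rfft(x))**2   # power should fall off roughly as 1/k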
In a two-dimensional pink noise signal, the amplitude at any orientation falls off inversely with frequency. A pink noise square of side length $N$ can be written as:
General 1/f α-like noises occur widely in nature and are a source of considerable interest in many fields. Noises with α near 1 generally come from condensed-matter systems in quasi-equilibrium, as discussed below. Noises with a broad range of α generally correspond to a wide range of non-equilibrium driven dynamical systems.
Pink noise sources include fli |
https://en.wikipedia.org/wiki/British%20Mycological%20Society | The British Mycological Society is a learned society established in 1896 to promote the study of fungi.
Formation
The British Mycological Society (BMS) was formed by the combined efforts of two local societies: the Woolhope Naturalists' Field Club of Hereford and the Yorkshire Naturalists' Union. The Curator of the Hereford Club, Dr. H. G. Bull, convinced the members in 1867 to undertake the particular study of mushrooms. While the mycological efforts of the Club diminished somewhat after Dr. Bull's death, the Yorkshire Union founded its Mycological Committee in 1892. This Committee attracted the involvement of many eminent mycologists including George Edward Massee (1845–1917), James Needham (1849–1913), Charles Crossland (1844–1916), and Henry Thomas Soppitt (1843–1899). Mycologist Kathleen Sampson was a member for sixty years, as well as serving as president in 1938.
The need for a national organisation and the need for a journal to publish their observations led Cooke, Rea, Massee, and other mycologists (including Charles Crossland and James Needham) to found the Society in 1896. The Society's founding officers were Rea (Secretary), Crossland (Treasurer), and Massee (President). The choice of the latter as President was based on his international reputation (with more than 250 mycological publications) and role as the mycologist at the Royal Botanic Gardens, Kew (where he replaced Cooke as mycologist in 1893). In 1897, Rea assumed the additional role of Treasurer, also continuing as Secretary (until 1918), and was also Editor (until 1930). However, Massee and a number of Yorkshire mycologists soon left the BMS, preferring to remain with the Yorkshire Naturalists' Union.
Membership
By 1903, the Society's Members numbered over a hundred, which had increased to over four hundred (by shortly after World War II), and had reached over two thousand by 2006.
Before World War II, Honorary Membership was awarded to:
1905 Émile Boudier (1828–1920)
1916 P |
https://en.wikipedia.org/wiki/Rice%20allergy | Rice allergy is a type of food allergy. People allergic to rice react to various rice proteins after they eat rice or breathe the steam from cooking rice. Although some reactions might lead to severe health problems, doctors can diagnose rice allergy with many methods and help allergic people to avoid reactions.
Symptoms and signs
Some rice proteins are regarded as the causes of allergy in people. People allergic to rice might experience sneezing, runny nose, itching, asthma, stomachache, hives, sores in the mouth, or eczema after they eat rice. Besides reacting to eating rice, people with a rice allergy can also react to breathing the steam from cooking rice. In severe cases, death may result.
Diagnosis
People suspected of having a rice allergy can try diet avoidance on their own. First, they have to avoid rice for a couple of weeks. If symptoms disappear during the avoidance period but return on re-exposure to rice, they are most likely allergic to rice.
Rice-specific IgE, a kind of antibody in human blood, rises significantly in people who are allergic to rice. A blood test shows the level of this antibody.
The skin prick test, the most efficient diagnostic method, shows a reaction within a short period. After the skin is pricked with a rice extract, allergic people develop itching and swelling within about 30 minutes.
Treatment
Some symptoms might weaken if people get allergy shots; after an extended course of treatment, some allergic people stop reacting altogether.
Some reactions have been eased by replacing ordinary rice with genetically modified rice. This is regarded as a new option for rice-allergic people.
Reactions might also lessen after long-term avoidance of rice.
Prevalence
Unlike other food allergies, rice allergy is relatively uncommon. It has been reported worldwide but mostly in China, Japan or Korea. Because rice is a major food in Asia, people from Asia are exposed to higher allergy risk than people from other areas. |
https://en.wikipedia.org/wiki/Immunogenic%20cell%20death | Immunogenic cell death is any type of cell death eliciting an immune response. Both accidental cell death and regulated cell death can result in immune response. Immunogenic cell death contrasts to forms of cell death (apoptosis, autophagy or others) that do not elicit any response or even mediate immune tolerance.
The name 'immunogenic cell death' is also used for one specific type of regulated cell death that initiates an immune response after stress to endoplasmic reticulum.
Types of immunogenic cell death
Immunogenic cell death types are divided according to molecular mechanisms leading up to, during and following the death event. The immunogenicity of a specific cell death is determined by antigens and adjuvant released during the process.
Accidental cell death
Accidental cell death is the result of physical, chemical or mechanical damage to a cell, which exceeds its repair capacity. It is an uncontrollable process, leading to loss of membrane integrity. The result is the spilling of intracellular components, which may mediate an immune response.
Immunogenic cell death or ICD
ICD or immunogenic apoptosis is a form of cell death resulting in a regulated activation of the immune response. This cell death is characterized by apoptotic morphology, maintaining membrane integrity. Endoplasmic reticulum (ER) stress is generally recognised as a causative agent for ICD, with high production of reactive oxygen species (ROS). Two groups of ICD inducers are recognised. Type I inducers cause stress to the ER only as collateral damage, mainly targeting DNA or chromatin maintenance apparatus or membrane components. Type II inducers target the ER specifically. ICD is induced by some cytostatic agents such as anthracyclines, oxaliplatin and bortezomib, or radiotherapy and photodynamic therapy (PDT). Some viruses can be listed among biological causes of ICD. Just as immunogenic death of infected cells induces immune response to the infectious agent, immunogenic death of c |
https://en.wikipedia.org/wiki/Generic%20Substation%20Events | Generic Substation Events (GSE) is a control model defined as per IEC 61850 which provides a fast and reliable mechanism of transferring event data over entire electrical substation networks. When implemented, this model ensures the same event message is received by multiple physical devices using multicast or broadcast services. The GSE control model is further subdivided into GOOSE (Generic Object Oriented Substation Events) and GSSE (Generic Substation State Events).
Generic Object Oriented Substation Events
Generic Object Oriented Substation Events (GOOSE) is a control model mechanism in which any format of data (status, value) is grouped into a data set and transmitted within a time period of 4 milliseconds. The following mechanisms are used to ensure specified transmission speed and reliability.
GOOSE data is directly embedded into Ethernet data packets and works on publisher-subscriber mechanism on multicast or broadcast MAC addresses.
GOOSE uses VLAN and priority tagging as per IEEE 802.1Q to have separate virtual network within the same physical network and sets appropriate message priority level.
Enhanced retransmission mechanisms - The same GOOSE message is retransmitted with varying and increasing re-transmission intervals. A new event occurring within any GOOSE dataset element will result in the existing GOOSE retransmission message being stopped. A state number within the GOOSE protocol identifies whether a GOOSE message is a new message or a retransmitted message.
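As an illustration of this scheme, here is a small Python sketch of the retransmission timing; the interval bounds are assumptions chosen for illustration (the standard leaves the exact profile to the implementation), while stNum and sqNum are the protocol's state and sequence counters:

import itertools

def goose_schedule(t_event, t_min=0.004, t_max=1.0):
    # Yield (send time, sqNum) after an event at t_event: fast initial
    # retransmissions whose spacing doubles until a stable heartbeat.
    t, interval, sq = t_event, t_min, 0
    while True:
        yield t, sq
        sq += 1                          # sqNum increments on each retransmission
        t += interval
        interval = min(interval * 2, t_max)

for t, sq in itertools.islice(goose_schedule(0.0), 8):
    print(f"t={t:.3f} s  sqNum={sq}")
# A new event in the dataset stops this sequence, increments stNum,
# resets sqNum to 0, and restarts the fast retransmissions.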
GOOSE messages are designed to be brand independent. Some vendors offer intelligent electronic devices (IED) that fully support IEC 61850 for a truly interoperable approach within the substation network without requiring vendor specific cables or algorithms.
Generic Substation State Events
Generic Substation State Events (GSSE) is an extension of the event transfer mechanism in UCA 2.0. Only status data can be exchanged through GSSE, and it uses a status list (string of bits) rather th
https://en.wikipedia.org/wiki/Urine%20collection%20device | A urine collection device or UCD is a device that allows the collection of urine for analysis (as in medical or forensic urinalysis) or for purposes of simple elimination (as in vehicles engaged in long voyages and not equipped with toilets, particularly aircraft and spacecraft). UCDs of the latter type are sometimes called piddle packs.
Similar devices are used, primarily by men, to manage urinary incontinence. These devices attach to the outside of the penile area and direct urine into a separate collection chamber, such as a leg or bedside bag. There are several varieties of external urine collection devices on the market today, including male external catheters (also known as urisheaths or Texas/condom catheters), urinals and hydrocolloid-based devices.
External products should not be used by any individual who experiences urinary retention without overflow incontinence.
Description
A urine collection device allows an individual to empty their bladder into a container hygienically and without spilling urine.
Condom catheters
Condom catheters, also known as male external catheters, urisheaths, or Texas catheters, are made of silicone or latex (depending on the brand/manufacturer) and cover the penis just like a condom but with an opening at the end to allow the connection to the urine bag. The sheath is worn over the penis and looks like a condom (hence the name). It stays in place by use of an adhesive, that can either be built into the sheath or come as a separate adhesive liner. The urine gets funneled away from the body, keeping the skin dry at all times. The urine runs into a urine bag that is attached at the bottom of the external catheter. During the day, a drainable leg bag can be used, and at night it is recommended to use a large-capacity bedside drainage bag. Male external catheters are designed to be worn 24/7 and changed daily – and can be used by men with both light and severe incontinence. Male external catheters come in several sizes and len |
https://en.wikipedia.org/wiki/Write%20precompensation | Write precompensation (abbreviated WPcom in the literature) is a technical aspect of the design of hard disks, floppy disks and other digital magnetic recording devices. It is the modification of the analog write signal, shifting transitions somewhat in time, in such a way as to ensure that the signal that will later be read back will be as close as possible to the unmodified write signal. It is required because of the non-linear properties of magnetic recording surfaces.
A higher amount of precompensation is needed to write data in sectors that are closer to the center of the disk. In constant angular velocity (CAV) recording, in which the disk spins at a constant speed no matter where the data is written, the sectors closest to the spindle are packed tighter than the outer sectors and so require a slightly different timing to write the data in the most reliable way. CAV recording is used by most floppy disk systems and by older hard disk systems; the term CAV is not applicable to non-circular media, such as magnetic tapes. On magnetic tapes, precompensation is usually constant throughout the tape.
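To make the geometry concrete, here is a toy Python model of why the required shift grows toward the spindle under CAV: the linear bit density rises as the radius shrinks. All radii and the maximum shift are illustrative assumptions, not values from any real drive:

def precomp_ns(radius_mm, outer_mm=57.0, inner_mm=22.0, max_shift_ns=12.0):
    # Shift transitions more aggressively where the relative linear
    # density (outer radius / radius) is higher, i.e. on inner tracks.
    density = outer_mm / radius_mm
    return max_shift_ns * (density - 1.0) / (outer_mm / inner_mm - 1.0)

for r in (57.0, 40.0, 30.0, 22.0):
    print(f"r = {r:4.1f} mm -> {precomp_ns(r):4.1f} ns")   # 0.0 ... 12.0 ns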
History
In the past one of the hard disk parameters stored in a PC's CMOS memory was the WPcom number, a marker of the track where stronger precompensation begins, i.e. the transitions are shifted further in time. This was needed by the old MFM and RLL hard disk controllers in common use until the early 1990s. These controllers were usually housed on plug-in cards which could be plugged into the mainboard of the computer; in any case they were external to the actual drive and could deal with many different drives; thus they needed to be told some parameters about the particular drive type in use by the computer. One of these parameters was the WPcom number. This scheme allowed only two different precompensation strengths per disk, a lower one for the outer tracks and a higher one for the inner tracks; however, this was enough for the simple low capacity drives of those
https://en.wikipedia.org/wiki/Mini%E2%80%93mental%20state%20examination | The mini–mental state examination (MMSE) or Folstein test is a 30-point questionnaire that is used extensively in clinical and research settings to measure cognitive impairment. It is commonly used in medicine and allied health to screen for dementia. It is also used to estimate the severity and progression of cognitive impairment and to follow the course of cognitive changes in an individual over time, thus making it an effective way to document an individual's response to treatment. The MMSE is not intended, on its own, to provide a diagnosis for any particular nosological entity.
Administration of the test takes between 5 and 10 minutes and examines functions including registration (repeating named prompts), attention and calculation, recall, language, ability to follow simple commands and orientation. It was originally introduced by Folstein et al. in 1975, in order to differentiate organic from functional psychiatric patients but is very similar to, or even directly incorporates, tests which were in use previous to its publication. This test is not a mental status examination. The standard MMSE form which is currently published by Psychological Assessment Resources is based on its original 1975 conceptualization, with minor subsequent modifications by the authors.
Advantages of the MMSE include that it requires no specialized equipment or training for administration and that it has both validity and reliability for the diagnosis and longitudinal assessment of Alzheimer's disease. Due to its short administration period and ease of use, it is useful for cognitive assessment in the clinician's office space or at the bedside. A disadvantage of the MMSE is that it is affected by demographic factors; age and education exert the greatest effect. The most frequently noted disadvantage of the MMSE relates to its lack of sensitivity to mild cognitive impairment and its failure to adequately discriminate patients with mild Alzheimer's disease from normal pati
https://en.wikipedia.org/wiki/Atropine | Atropine is a tropane alkaloid and anticholinergic medication used to treat certain types of nerve agent and pesticide poisonings as well as some types of slow heart rate, and to decrease saliva production during surgery. It is typically given intravenously or by injection into a muscle. Eye drops are also available which are used to treat uveitis and early amblyopia. The intravenous solution usually begins working within a minute and lasts half an hour to an hour. Large doses may be required to treat some poisonings.
Common side effects include dry mouth, abnormally large pupils, urinary retention, constipation, and a fast heart rate. It should generally not be used in people with closed-angle glaucoma. While there is no evidence that its use during pregnancy causes birth defects, this has not been well studied so sound clinical judgment should be used. It is likely safe during breastfeeding. It is an antimuscarinic (a type of anticholinergic) that works by inhibiting the parasympathetic nervous system.
Atropine occurs naturally in a number of plants of the nightshade family, including deadly nightshade (belladonna), Jimson weed, and mandrake. It was first isolated in 1833. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Eyes
Topical atropine is used as a cycloplegic, to temporarily paralyze the accommodation reflex, and as a mydriatic, to dilate the pupils. Atropine degrades slowly, typically wearing off in 7 to 14 days, so it is generally used as a therapeutic mydriatic, whereas tropicamide (a shorter-acting cholinergic antagonist) or phenylephrine (an α-adrenergic agonist) is preferred as an aid to ophthalmic examination.
In refractive and accommodative amblyopia, when occlusion is not appropriate, atropine is sometimes given to induce blur in the good eye. Evidence suggests that atropine penalization is just as effective as occlusion in improving visual acuity.
Antimuscarinic topi |
https://en.wikipedia.org/wiki/Windows%20App%20SDK | Windows App SDK (formerly known as Project Reunion) is a software development kit (SDK) from Microsoft that provides a unified set of APIs and components that can be used to develop desktop applications for both Windows 11 and Windows 10 version 1809 and later. The purpose of this project is to offer a decoupled implementation of capabilities which were previously tightly-coupled to the UWP app model. Windows App SDK offers native Win32 (USER32/GDI32) and .NET (WPF/WinForms) developers alike a path forward to enhance their apps with modern features.
It follows that Windows App SDK is not intended to replace the Windows SDK. By exposing a common application programming interface (API) primarily using the Windows Runtime (WinRT) through generated WinMD metadata, the tradeoffs which once characterized either app model are largely eliminated. NuGet packages for version 1.4 were released in August 2023 after approximately four months of development.
Features and components
While Microsoft has developed a number of new features, some of the features listed below are abstractions of functionality provided by existing APIs.
WinUI 3
Most of the investment into the decoupled UI stack has gone towards bug fixes, improvements to the debugging experience, and simplifying the window management capabilities made possible by switching from CoreWindow. An API abstracting USER32/GDI32 primitives known as AppWindow was introduced to expose a unified set of windowing capabilities and enable support for custom window controls.
WebView2
A replacement for the UWP WebView control was announced early on, because that control was based on an unsupported browser engine. A new Chromium-based control, named WebView2, was developed and can be used from WinUI as well as other supported app types.
Packaging
While MSIX is included in the Windows App SDK and considered to be the recommended application packaging format, a design goal was to allow for unpackaged apps. These apps can be deployed |
https://en.wikipedia.org/wiki/Dot-com%20company | A dot-com company, or simply a dot-com (alternatively rendered dot.com, dot com, dotcom or .com), is a company that does most of its business on the Internet, usually through a website on the World Wide Web that uses the popular top-level domain ".com". As of 2021, .com is by far the most used TLD, with almost half of all registrations.
The suffix .com in a URL usually (but not always) refers to a commercial or for-profit entity, as opposed to a non-commercial entity or non-profit organization, which usually use .org. The name for the domain came from the word commercial, as that is the main intended use. Since the .com companies are web-based, often their products or services are delivered via web-based mechanisms, even when physical products are involved. On the other hand, some .com companies do not offer any physical products.
History
Origin of the .com domain (1985-1991)
The .com top-level domain (TLD) was one of the first seven created when the Internet was first implemented in 1985; the others were .mil, .gov, .edu, .net, .int, and .org. The United States Department of Defense originally controlled the domain, but control was later transferred to the National Science Foundation as it was mainly used for non-defense-related purposes.
Beginning of online commerce and rise in valuation (1992-1999)
With the creation of the World Wide Web in 1991, many companies began creating websites to sell their products. In 1994, the first secure online credit card transaction was made using the NetMarket platform. By 1995, over 40 million people were using the Internet. That same year, companies including Amazon.com and eBay were launched, paving the way for future e-commerce companies. At the time of Amazon's IPO in 1997, they were recording a 900% increase in revenue over the previous year. By 1998, with a valuation of over $14 billion, they were still not making a profit. The same phenomenon occurred with many other internet companies; venture capitalists were eage
https://en.wikipedia.org/wiki/Eel%20as%20food | Eels are elongated fish, ranging in length from 5 cm (2 in) to 4 m (13 ft). Adults range in weight from 30 grams to over 25 kilograms. They possess no pelvic fins, and many species also lack pectoral fins. The dorsal and anal fins are fused with the caudal or tail fin, forming a single ribbon running along much of the length of the animal.
Most eels live in the shallow waters of the ocean and burrow into sand, mud, or amongst rocks. A majority of eel species are nocturnal and thus are rarely seen. Sometimes, they are seen living together in holes, or "eel pits". Some species of eels live in deeper water on the continental shelves and over the slopes, as deep as 4,000 m (13,000 ft). Only members of the family Anguillidae regularly inhabit fresh water, but they too return to the sea to breed.
Eel blood is poisonous to humans and other mammals, but both cooking and the digestive process destroy the toxic protein. The toxin derived from eel blood serum was used by Charles Richet in his Nobel Prize-winning research, in which Richet discovered anaphylaxis by injecting it into dogs and observing the effect.
The Jewish laws of Kashrut forbid the eating of eels. Similarly, according to the King James Version of the Old Testament, it is acceptable to eat fin fish, but fish like eels are an abomination and should not be eaten.
Japan consumes more than 70 percent of the global eel catch.
Dishes and cuisines
Freshwater eels (unagi) and marine eels (anago, conger eel) are commonly used in Japanese cuisine; foods such as unadon and unajuu are popular but expensive. Eels are also very popular in Chinese cuisine and are prepared in many different ways. Hong Kong eel prices have often reached 1000 HKD per kilogram and once exceeded 5000 HKD per kilogram. Eel is also popular in Korean cuisine and is seen as a source of stamina for men. The European eel and other freshwater eels are eaten in Europe, the United States, and other places. Traditional east London foods are jellied eels and pie and mash, although their demand has |
https://en.wikipedia.org/wiki/Open%20Insulin%20Project | The Open Insulin Project is a community of researchers and advocates working to develop an open-source protocol for producing insulin that is affordable, has transparent pricing, and is community-owned.
History
The Open Insulin Project was started in 2015 by Anthony Di Franco, himself a type 1 diabetic. He started the project in response to the unreasonably high prices of insulin in the US. The project has been housed in Counter Culture Labs, a community laboratory and maker space in the Bay Area. Other collaborators include ReaGent, BioCurious and BioFoundry.
Goals
The project aims to develop both the methodology and hardware to allow communities and individuals to produce medical-grade insulin for the treatment of diabetes. These methods will be low-cost in order to combat the high price of insulin in places like the US. There is also potential for small-scale distributed production that may allow for improved insulin access in places with poor availability infrastructure. Access to insulin remains so insufficient around the globe that "Half of all people who need insulin lack the financial or logistical means to obtain adequate supplies".
Motivation
Researcher Frederick Banting famously refused to put his name on the patent after discovering insulin in 1923. The original patent for insulin was later sold by his collaborators for just $1 to the University of Toronto in an effort to make it as available as possible. Despite this, for various reasons, there remains no generic version of insulin available in the US. Insulin remains controlled by a small number of large pharmaceutical companies and sold at prices unaffordable to many who rely on it to live, particularly those without insurance. This lack of availability has led to fatalities, such as Alec Smith, who died in 2017 due to lack of insulin. The Open Insulin Project is motivated by the urgent need to protect the health of those with diabetes regardless of their economic or employment status by develop |
https://en.wikipedia.org/wiki/William%20Lawrence%20Tower | William Lawrence Tower (22 December 1872 – July 1967) was an American zoologist, born in Halifax, Massachusetts. He was educated at the Lawrence Scientific School (Harvard), the Harvard Graduate School, and the University of Chicago (B.S., 1902), where he taught thereafter, becoming associate professor in 1911.
Research
Tower was notable for his experimental work in heredity, investigating the inheritance of acquired characteristics and the laws of heredity in beetles and publishing An Investigation of Evolution in Chrysomelid Beetles of the Genus Leptinotarsa (1906). This is probably the first study of mutation in animals (albeit a possibly discredited one). He also published The Development of the Colors and Color Patterns of Coleoptera (1903) and, with Coulter, Castle, Davenport and East, an essay on Heredity and Eugenics (1912).
Tower was caught up in personal and professional scandals. He resigned from the University of Chicago in 1917 following a very public divorce, but by then he had become a source of discontent among students and faculty. His professed atheism caused offense to some, including graduate student Warder Clyde Allee. Tower caused political friction within the department and many members distrusted his professional ethics. Experimental results which Tower reported in 1906 and 1910 were found to include serious discrepancies which he declined to explain. His claim that experimental results had been lost in a fire increased his colleagues' skepticism. William Bateson, T. D. A. Cockerell, and R. A. Gortner were particularly critical of his work. A more positive reception came from the botanist Henry Chandler Cowles.
It was suggested that his research may have been faked. The geneticist William E. Castle who visited Tower's laboratory was not impressed by the experimental conditions. He later concluded that Tower had faked his data. Castle found the fire suspicious and also Tower's claim that a steam leak in his greenhouse had destroyed all his beetl |
https://en.wikipedia.org/wiki/Supervisory%20circuit | Supervisory circuits are electronic circuits that monitor one or more parameters of systems such as power supplies and microprocessors which must be maintained within certain limits, and take appropriate action if a parameter goes out of bounds, creating an unacceptable or dangerous situation.
Supervisory circuits are known by a variety of names, including battery monitors, power supply monitors, supply supervisory circuits, and reset circuits.
Thermal protection
A thermal protection circuit consists of a temperature-monitoring circuit and a control circuit. The control circuit may either shut down the circuitry it is protecting, reduce the power available in order to avoid overheating, or notify the system (software or user). These circuits may be quite complex, programmable and software-run, or simple with predefined limits.
Overvoltage and undervoltage protection
Voltage protection circuits protect circuitry from either overvoltage or undervoltage; either of these situations can have detrimental effects. Supervisory circuits that specifically focus on voltage regulation are often sold as supply voltage supervisors and will reset the protected circuit when the voltage returns to operating range.
Two types of overvoltage protection devices are currently used: clamping, which passes through voltages up to a certain level, and foldback, which shunts voltage away from the load. The shunting creates a short circuit which removes power from the protected circuitry. In certain applications this circuitry can reset itself after the dangerous condition has passed.
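The decision logic of such a supply supervisor is easy to sketch. The following Python fragment shows it with assumed thresholds for a nominal 5 V rail; real parts also add hysteresis and a reset delay, which are omitted here:

def supervise(v_supply, v_under=4.5, v_over=5.5):
    # Hold the protected circuit in reset outside the operating window.
    if v_supply < v_under:
        return "reset (undervoltage)"
    if v_supply > v_over:
        return "reset (overvoltage)"
    return "run"

for v in (3.9, 4.8, 5.2, 6.1):
    print(v, "->", supervise(v))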
Fire alarm systems
Fire alarm systems often supervise inputs and outputs with an end-of-line device such as a resistor or capacitor. The system looks for changes in resistance or capacitance values to determine if the circuit has an abnormal condition. |
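A sketch of that classification logic in Python, with an assumed 4.7 kΩ end-of-line resistor and tolerance band (actual values vary by panel and zone type):

def loop_state(r_ohms, r_eol=4700.0, tol=0.25):
    # The panel measures loop resistance and classifies the circuit state.
    if r_ohms < 100.0:                 # activated device shorts across the EOL
        return "ALARM"
    if r_ohms > 1e6:                   # broken wire: no path through the EOL
        return "TROUBLE (open circuit)"
    if abs(r_ohms - r_eol) / r_eol <= tol:
        return "NORMAL"
    return "TROUBLE (abnormal resistance)"

for r in (50.0, 4700.0, 6500.0, 1e9):
    print(f"{r:>10.0f} ohm -> {loop_state(r)}")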
https://en.wikipedia.org/wiki/University%20of%20Montenegro%20Faculty%20of%20Biotechnology | The University of Montenegro Biotechnical Faculty (Montenegrin: Biotehnički fakultet Univerziteta Crne Gore, Биотехнички факултет Универзитета Црне Горе) is one of the educational institutions of the University of Montenegro. The building is located in Podgorica, at the University campus.
History
Scientific research in agriculture in Montenegro was established in 1937, when the Ministry of Agriculture of the Kingdom of Yugoslavia decided to establish the experimental State Research Station for Southern Cultures in Bar. After World War II, the founding of new institutions and renewal of previously existing ones followed, such as:
the Institute for Agricultural Research in Podgorica (1945)
the Institute for Animal Husbandry (Livestock) in Nikšić
the Institute for Southern Cultures and Viticulture in Bar (1947)
Soil Testing Units in Bar (1949)
Veterinary Diagnostic Units in Titograd (1950) and
the Centre for Fruit Growing in Bijelo Polje (1952).
The Agricultural Institute was established in 1961, as a result of merging the above-mentioned institutions. It functioned under that name until 1997, when it was transformed into the Biotechnical Institute by including the Forestry Sector into one unique scientific research institution. By establishing the Studies of Agriculture in 2005, the Institute developed into a faculty, and in 2008 it formally changed its name to the Biotechnical Faculty.
Organization
The Faculty of Biotechnology has adequately equipped classrooms and laboratories situated in the Faculty's buildings in Podgorica, Bar and Bijelo Polje, as well as experimental plots for a part of students’ professional practice and organization of Faculty production.
Undergraduate academic studies
The undergraduate academic studies are divided in two study groups:
Plant production
Undergraduate applied studies of Continental fruit growing in Bijelo Polje
Undergraduate applied studies of Mediterranean fruit growing in Bar
Cattle breeding
Specialist ac |
https://en.wikipedia.org/wiki/Seckel%20syndrome | Seckel syndrome, or microcephalic primordial dwarfism (also known as bird-headed dwarfism, Harper's syndrome, Virchow–Seckel dwarfism and bird-headed dwarf of Seckel) is an extremely rare congenital nanosomic disorder. Inheritance is autosomal recessive. It is characterized by intrauterine growth restriction and postnatal dwarfism with a small head, narrow bird-like face with a beak-like nose, large eyes with down-slanting palpebral fissures, receding mandible and intellectual disability.
A mouse model has been developed. This mouse model is characterized by a severe deficiency of ATR protein. These mice have high levels of replicative stress and DNA damage. Adult Seckel mice display accelerated aging. These findings are consistent with the DNA damage theory of aging.
History
The syndrome was named after German-American physician Helmut Paul George Seckel (1900–1960). The synonym Harper's syndrome was named after pediatrician Rita G. Harper.
Symptoms and signs
Symptoms include:
intellectual disability (more than half of the patients have an IQ below 50)
microcephaly
sometimes pancytopenia (low blood counts)
cryptorchidism in males
low birth weight
dislocations of pelvis and elbow
unusually large eyes
blindness or visual impairment
large, low-set ears
small chin due to receded lower jaw
Genetics
It is believed to be caused by defects of genes on chromosomes 3 and 18. One form of Seckel syndrome can be caused by mutation in the gene encoding the ataxia telangiectasia and Rad3-related protein (ATR), which maps to chromosome 3q22.1-q24. This gene is central in the cell's DNA damage response and repair mechanism.
Types include:
Diagnosis
Treatment
See also
Koo-Koo the Bird Girl |
https://en.wikipedia.org/wiki/Yankee%20screwdriver | The trade name "Yankee" screwdriver was first marketed by North Brothers Manufacturing Company around 16 April 1895, with the No. 130 spiral ratchet screwdriver. Yankee soon became and still is a well-known name in automatic spiral ratchet screwdrivers, with several other models, and model improvements patented by North Bros. over a 40-year period.
The term "Yankee screwdriver" is often used to describe push/pull type screwdriver other than one manufactured by North Brothers Mfg. Co. or Stanley Tools, who purchased the rights to the well-known Yankee brand or trade name in the 1940s from North Brothers. North Brothers always marked the tools they manufactured with the Yankee name, and in most cases the North Bros. name and location as well.
All spiral ratchet screwdriver models made by Stanley carried the Yankee trade name, at least until the 1960s, when the Handyman trade name became as well known as the Yankee trade name; Stanley Tools then marked certain models with both the Handyman and Yankee brand names, and usually the Stanley name was on them as well. The Handyman trade name was not limited to a line of screwdriver models, as the same name was marked on a complete line of planes, drills and other tools specifically marketed to the home user.
Sizes
There were three different sizes of spring chucks, and therefore three different shank sizes of tips (sometimes called points), to fit the various models.
Generally all tips made by North Brothers or Stanley were stamped with the corresponding number of the model screwdriver they would fit, but the stamped numbers are often difficult to see, so it's a good idea to know the size you need before you set out to find tips for your screwdriver.
The smallest size was the number 35, so any of the model numbers with 35 in the number, like No. 135A, takes the smallest tip shank diameter, measuring 7/32" (5.5 mm). (Note that all the Handyman models with 33 in their model number also have the No. 35 size chucks, the
https://en.wikipedia.org/wiki/Younger%20Dryas%20impact%20hypothesis | The Younger Dryas impact hypothesis (YDIH) or Clovis comet hypothesis is a speculative attempt to explain the onset of the Younger Dryas (YD) cooling at the end of the Last Glacial Period, around 12,900 years ago. The hypothesis is controversial and not widely accepted by relevant experts.
It is an alternative to the long-standing and widely accepted explanation that it was caused by a significant reduction in, or shutdown of the North Atlantic Conveyor due to a sudden influx of freshwater from Lake Agassiz and deglaciation in North America. The YDIH posits that fragments of a large (more than 4 kilometers in diameter), disintegrating asteroid or comet struck North America, South America, Europe, and western Asia, coinciding with the beginning of the Younger Dryas cooling event. Advocates proposed the existence of a Younger Dryas boundary (YDB) layer that can be identified by materials they interpret as evidence of multiple meteor air bursts and/or impacts across a large fraction of Earth’s surface. However, inconsistencies have been identified in the graphs they published, and authors have not yet responded to requests for clarification and have never made their raw data available. Some YDIH proponents have also proposed that this event triggered extensive biomass burning, a brief impact winter and the Younger Dryas abrupt climate change, contributed to extinctions of late Pleistocene megafauna, and resulted in the end of the Clovis culture.
Comet research group (CRG)
Members of this group have been criticized for promoting pseudoscience, pseudoarchaeology, and pseudohistory, engaging in cherry-picking of data based on confirmation bias, seeking to persuade via the bandwagon fallacy, and even engaging in intentional misrepresentations of archaeological and geological evidence. For example, physicist Mark Boslough, a specialist in planetary impact hazards and asteroid impact avoidance, has pointed out many problems with the credibility and motivations of indivi |
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s%20extension%20theorem | In measure theory, Carathéodory's extension theorem (named after the mathematician Constantin Carathéodory) states that any pre-measure defined on a given ring of subsets R of a given set Ω can be extended to a measure on the σ-ring generated by R, and this extension is unique if the pre-measure is σ-finite. Consequently, any pre-measure on a ring containing all intervals of real numbers can be extended to the Borel algebra of the set of real numbers. This is an extremely powerful result of measure theory, and leads, for example, to the Lebesgue measure.
The theorem is also sometimes known as the Carathéodory–Fréchet extension theorem, the Carathéodory–Hopf extension theorem, the Hopf extension theorem and the Hahn–Kolmogorov extension theorem.
Introductory statement
Several very similar statements of the theorem can be given. A slightly more involved one, based on semi-rings of sets, is given further down below. A shorter, simpler statement is as follows. In this form, it is often called the Hahn–Kolmogorov theorem.
Let $\Sigma_0$ be an algebra of subsets of a set $X.$ Consider a set function
$$\mu_0 : \Sigma_0 \to [0, \infty]$$
which is finitely additive, meaning that
$$\mu_0\left(\bigcup_{k=1}^{n} A_k\right) = \sum_{k=1}^{n} \mu_0(A_k)$$
for any positive integer $n$ and $A_1, A_2, \dots, A_n$ disjoint sets in $\Sigma_0.$
Assume that this function satisfies the stronger sigma additivity assumption
$$\mu_0\left(\bigcup_{k=1}^{\infty} A_k\right) = \sum_{k=1}^{\infty} \mu_0(A_k)$$
for any disjoint family $\{A_k : k \in \mathbb{N}\}$ of elements of $\Sigma_0$ such that $\bigcup_{k=1}^{\infty} A_k \in \Sigma_0.$ (Functions $\mu_0$ obeying these two properties are known as pre-measures.) Then,
$\mu_0$ extends to a measure defined on the $\sigma$-algebra $\Sigma$ generated by $\Sigma_0$; that is, there exists a measure
$$\mu : \Sigma \to [0, \infty]$$
such that its restriction to $\Sigma_0$ coincides with $\mu_0.$
If $\mu_0$ is $\sigma$-finite, then the extension is unique.
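As a concrete instance, the standard construction of Lebesgue measure fits this statement (a sketch of the usual argument). The set function
$$\mu_0\Bigl(\bigcup_{k=1}^{n}(a_k,b_k]\Bigr)=\sum_{k=1}^{n}(b_k-a_k) \qquad \text{(disjoint half-open intervals)},$$
defined on the algebra of finite disjoint unions of such intervals, is a pre-measure; it is $\sigma$-finite since $\mathbb{R}=\bigcup_{n}(-n,n]$ and $\mu_0((-n,n])=2n<\infty$. By the theorem, it therefore extends uniquely to the generated $\sigma$-algebra, which is the Borel $\sigma$-algebra, and the extension is Lebesgue measure.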
Comments
This theorem is remarkable for it allows one to construct a measure by first defining it on a small algebra of sets, where its sigma additivity could be easy to verify, and then this theorem guarantees its extension to a sigma-algebra. The proof of this theorem is not trivial, since it requires extending from an algebra of sets to a potentially much bigger sigma-algebra, guaranteeing that |
https://en.wikipedia.org/wiki/Too%20Much%20Coffee%20Man | Too Much Coffee Man (TMCM) is an American satirical superhero created by cartoonist Shannon Wheeler. Too Much Coffee Man wears what appears to be a spandex version of old-fashioned red "long johns" with a large mug attached atop his head. He is an anxious Everyman who broods about the state of the world, from politics to people, exchanging thoughts with friends and readers.
The strip is most often presented as a single page in alternative press newspapers, though occasionally the story arc stretches into multi-page stories. TMCM has appeared in comic strips, minicomics, webcomics, comic books, magazines, books, and operas. The Too Much Coffee Man comic book won the 1995 Eisner Award for Best New Series.
Publication history
Creation
Too Much Coffee Man first appeared in 1991, in the Too Much Coffee Man Minicomic, as a self-promotion for Wheeler's book Children with Glue (Blackbird Comics, 1991). The minicomics, which appeared in many different formats, even one issued as a one-inch square, were self-published, photocopied, and handmade by Wheeler in initial runs of 300 black-and-white copies.
Wheeler said he created Too Much Coffee Man to make more accessible themes he had begun in a college newspaper. He said in 2011:
Newspaper strip
Too Much Coffee Man started as a one-page ongoing strip running in The Daily Texan in 1991. Over time, it became syndicated to a number of alternative weeklies throughout the U.S.
With the January 23, 2006, installment, the "Too Much Coffee Man" strip was retitled "How to Be Happy, with Too Much Coffee Man". On February 6, 2006, the title was simplified to "How to Be Happy", and Too Much Coffee Man did not appear in the strip again until January 21, 2008.
Comics
Solo title
Wheeler published four issues of the Too Much Coffee Man minicomic in 1991–1992.
Wheeler self-published the Too Much Coffee Man comic book via Adhesive Comics between 1993 and 2005. The first five issues were dated from July 1993 to February 1996. These w |
https://en.wikipedia.org/wiki/Androstenediol%20sulfate | Androstenediol sulfate, also known as androst-5-ene-3β,17β-diol 3β-sulfate, is an endogenous, naturally occurring steroid and a urinary metabolite of androstenediol. It is a steroid sulfate formed by sulfation of androstenediol by steroid sulfotransferase, and it can be desulfated back into androstenediol by steroid sulfatase.
See also
Steroid sulfate
C19H30O5S |
https://en.wikipedia.org/wiki/Cicutoxin | Cicutoxin is a naturally-occurring poisonous chemical compound produced by several plants from the family Apiaceae including water hemlock (Cicuta species) and water dropwort (Oenanthe crocata). The compound contains polyene, polyyne, and alcohol functional groups and is a structural isomer of oenanthotoxin, also found in water dropwort. Both of these belong to the C17-polyacetylenes chemical class.
It causes death by respiratory paralysis resulting from disruption of the central nervous system. It is a potent, noncompetitive antagonist of the gamma-aminobutyric acid (GABA) receptor. In humans, cicutoxin rapidly produces symptoms of nausea, emesis and abdominal pain, typically within 60 minutes of ingestion. This can lead to tremors, seizures, and death. The LD50 in mice (intraperitoneal) is approximately 9 mg/kg.
History
Johann Jakob Wepfer's book Cicutae Aquaticae Historia Et Noxae Commentario Illustrata was published in 1679; it contains the earliest published report of toxicity associated with Cicuta plants. The name cicutoxin was coined by Boehm in 1876 for the toxic compound arising from the plant Cicuta virosa, and he also extracted and named the isomeric toxin oenanthotoxin from Oenanthe crocata. A review published in 1911 examined 27 cases of cicutoxin poisoning, 21 of which had resulted in death – though some of these cases involved deliberate poisoning. This review included a case where a family of five used Cicuta extracts as a topical treatment for itching, resulting in the deaths of two children, a report that suggests that cicutoxin may be absorbed through the skin. A review from 1962 examined 78 cases, 33 of which resulted in death, and cases of cicutoxin poisoning continue to occur:
A child used the stem of a plant as a toy whistle and died of cicutoxin poisoning
A 14-year-old boy died 20 hours after consuming a 'wild carrot' in 2001
In 1992, two brothers were foraging for wild ginseng and found a hemlock root. One of them ate three bites of the supposed ginseng root and |
https://en.wikipedia.org/wiki/Meru%20Networks | Meru Networks was a supplier of wireless local area networks (WLANs) to healthcare, enterprise, hospitality, K-12 education, higher education, and other markets. Founded in 2002 and headquartered in Sunnyvale, California, United States, the company made its initial public offering in March 2010, and was acquired by Fortinet in May 2015.
History
Meru Networks was founded in 2002 to address issues with legacy wireless networking architectures that support two separate access networks: a wired network for business-specific applications and a wireless network for casual use. This causes problems ranging from co-channel interference to the inability of micro-cellular systems to scale up. Meru Networks develops and markets a virtualized wireless LAN solution that enables enterprises to migrate applications from wired networks to wireless networks and become what Meru refers to as the "All-wireless enterprise." The company uses an approach to wireless networking that employs virtualization technology to create a self-monitoring wireless network that provides access to applications, improved application performance, and a greater ability to run converged applications, such as voice, video and data, over a wireless network.
The company’s current products address the IEEE 802.11ac and 802.11n wireless networking standards. The company focuses on a “Virtual Cell” approach to Wi-Fi. Under a single service assurance platform, it aggregates access points and the controller needed to manage them. This simplifies the management of access points, cuts the number of access points needed on a wireless network, and eliminates bandwidth contention issues.
In 2011, the company was awarded ITP Technology Best Wireless Solution and was listed as a Health Management Technology Coolest Products.
Timeline
February 2002: Meru Networks founded by Dr. Vaduvur Bharghavan, Srinath Sarang, Sung-Wook Han, Joe Epstein,
June 2005: Company receives $12 million in Series C funding
May 2006: |
https://en.wikipedia.org/wiki/Zwanzig%20projection%20operator | The Zwanzig projection operator is a mathematical device used in statistical mechanics. It operates in the linear space of phase space functions and projects onto the linear subspace of "slow" phase space functions. It was introduced by Robert Zwanzig to derive a generic master equation. It is mostly used in this or similar context in a formal way to derive equations of motion for some "slow" collective variables.
Slow variables and scalar product
The Zwanzig projection operator operates on functions in the $6N$-dimensional phase space $x = \{q_i, p_i\}$ of $N$ point particles with coordinates $q_i$ and momenta $p_i$.
A special subset of these functions is an enumerable set of "slow variables" $A(x) = \{A_1(x), A_2(x), \dots\}$. Candidates for some of these variables might be the long-wavelength Fourier components $\rho_k(x)$ of the mass density and the long-wavelength Fourier components $\pi_k(x)$ of the momentum density, with the wave vector $k$ identified with the discrete index of the slow variables. The Zwanzig projection operator relies on these functions but does not tell how to find the slow variables of a given Hamiltonian $H(x)$.
A scalar product between two arbitrary phase space functions $f_1(x)$ and $f_2(x)$ is defined by the equilibrium correlation
$$\left(f_1, f_2\right) = \int dx\, \rho_0(x)\, f_1(x)\, f_2(x),$$
where
$$\rho_0(x) = \frac{\delta\left(H(x) - E\right)}{\int dx'\, \delta\left(H(x') - E\right)}$$
denotes the microcanonical equilibrium distribution. "Fast" variables, by definition, are orthogonal to all functions $G(A(x))$ of the slow variables under this scalar product. This definition states that fluctuations of fast and slow variables are uncorrelated, and according to the ergodic hypothesis this also is true for time averages. If a generic function $f(x)$ is correlated with some slow variables, then one may subtract functions of the slow variables until there remains the uncorrelated fast part of $f(x)$. The product of a slow and a fast variable is a fast variable.
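In symbols, the decomposition just described can be summarized as follows, writing $P$ for the projection onto the slow subspace:
$$f = Pf + (1-P)f, \qquad P^2 = P,$$
$$\bigl((1-P)f,\; G(A)\bigr) = 0 \quad \text{for every function } G \text{ of the slow variables,}$$
so the remainder $(1-P)f$ is the fast, uncorrelated part of $f$.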
The projection operator
Consider the continuous set of functions Φ_a(Γ) = δ(A(Γ) − a) with a = {a_n} constant. Any phase space function G(A(Γ)) depending on Γ only through A(Γ) is a function of the Φ_a, namely
G(A(Γ)) = ∫ da G(a) δ(A(Γ) − a).
A generic phase space function f(Γ) decomposes according to
f(Γ) = P f(Γ) + (1 − P) f(Γ),
where (1 − P) f(Γ) is the fast part of f(Γ). To get an expression for the slow part P f of f, take the scalar product |
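The row above is cut off by the dump's length limit. As a hedged pointer only, not a verbatim reconstruction of the missing text: in standard treatments the construction ends with the Zwanzig projector acting as an equilibrium average conditioned on the slow variables, roughly

% Hedged sketch (added here, not recovered from the truncated row) of the
% standard form the derivation arrives at: Pf replaces f by its
% microcanonical average over the surface where the slow variables take
% the values they have at the phase point Gamma.
\[
  (P f)(\Gamma)
  = \frac{\int \mathrm{d}\Gamma'\, \rho_0(\Gamma')\,
          \delta\bigl(A(\Gamma') - A(\Gamma)\bigr)\, f(\Gamma')}
         {\int \mathrm{d}\Gamma'\, \rho_0(\Gamma')\,
          \delta\bigl(A(\Gamma') - A(\Gamma)\bigr)}
\]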
https://en.wikipedia.org/wiki/VSAN | A virtual storage area network (virtual SAN, VSAN or vSAN) is a logical representation of a physical storage area network (SAN). A VSAN abstracts the storage-related operations from the physical storage layer, and provides shared storage access to the applications and virtual machines by combining the servers' local storage over a network into a single or multiple storage pools.
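As a vendor-neutral toy sketch (an assumption added for illustration, not any product's API), the pooling idea above can be pictured as each server contributing its local disks to one logical pool whose capacity the VSAN layer presents as shared storage:

# Toy model of VSAN-style pooling: servers contribute local disks, and
# the pool exposes their combined capacity. Names and sizes are assumed.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    local_disks_gb: list[int] = field(default_factory=list)

@dataclass
class StoragePool:
    servers: list[Server]

    @property
    def capacity_gb(self) -> int:
        # Pool capacity is the sum of every member server's local disks.
        return sum(sum(s.local_disks_gb) for s in self.servers)

pool = StoragePool([
    Server("node-1", [512, 512]),
    Server("node-2", [1024]),
    Server("node-3", [512, 256]),
])
print(f"aggregate pool capacity: {pool.capacity_gb} GB")  # 2816 GB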
The use of VSANs allows the isolation of traffic within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently.
Technology
Operation
A VSAN operates as a dedicated piece of software responsible for storage access. Depending on the vendor, it can run as a virtual storage appliance (VSA), that is, a storage controller inside an isolated virtual machine (VM); as an ordinary user-mode application, such as StarWind Virtual SAN or DataCore SANsymphony; or as a kernel-mode loadable module, such as VMware vSAN or Microsoft Storage Spaces Direct (S2D). A VSAN can be tied to a specific hypervisor (hypervisor-dedicated) or support different hypervisors (hypervisor-agnostic).
Different vendors have different requirements for the minimum number of nodes that participate in a resilient VSAN cluster; at minimum, two nodes are required for high availability.
All-flash versus hybrid VSAN
Data center operators can deploy VSANs in an all-flash environment or in a hybrid configuration, where flash is used only at the caching layer and traditional spinning-disk storage is used everywhere else. All-flash VSANs offer higher performance but, as of 2019, were more expensive than hybrid configurations.
Protocols
For sharing storage over a network, VSAN utilizes protocols including Fibre Channel (FC), Internet Small Computer Systems Interface (iSCSI), Server Message Block (SMB), and Network |
https://en.wikipedia.org/wiki/Gray%20death | Gray death is a slang term that refers to a potent mixture of synthetic opioids, for example benzimidazole opioids or fentanyl analogues, often sold on the street misleadingly as "heroin". However, other substances such as cocaine have also been laced with opioids, resulting in illness and death.
Etymology
The first batch of gray death had a characteristic gray color.
Detected samples
Samples have been found to contain heroin, fentanyl, carfentanil, and the designer drug U-47700. In Argentina, a mixture of drugs misleadingly sold as 2C-B was found to contain fentanyl.
Deaths
In February 2022, 24 people in Argentina died after using cocaine laced with carfentanil.
Dangers
As with other illicit narcotics, gray death carries a higher risk of serious adverse effects than prescribed opioids due to the unknown and inconsistent composition of the product. As such, even experienced opioid users risk serious injury or death when taking this drug mixture.
Treatment
Reversing a gray death overdose may require multiple doses of naloxone. By contrast, an overdose from morphine or from high-purity heroin would ordinarily need only one dose. This difficulty is regularly encountered when treating overdoses of high-affinity opioids in the fentanyl chemical family or with buprenorphine. The greater affinity of these substances for the μ-opioid receptor impedes the activity of naloxone, which is an antagonist at the receptor. Increasing the dosage of naloxone or its frequency of administration may be required to counteract respiratory depression.
History
The substance first appeared in America and was thought to be a unique chemical compound before being identified as a mixture of drugs.
See also
List of opioids
List of designer drugs
Opioid epidemic in the United States
Mickey Finn (drugs)
Whoonga |
https://en.wikipedia.org/wiki/Bradbury%E2%80%93Nielsen%20shutter | A Bradbury–Nielsen shutter (or Bradbury–Nielsen gate) is a type of electrical ion gate, first proposed in an article by Norris Bradbury and Russel A. Nielsen, who used it as an electron filter. Today, Bradbury–Nielsen shutters are used in the field of mass spectrometry, in both time-of-flight (TOF) mass spectrometers and ion mobility spectrometers, as well as in Hadamard transform mass spectrometers (a variant of TOF-MS). The Bradbury–Nielsen shutter is ideal for injecting short pulses of ions and can be used to improve the mass resolution of TOF instruments by reducing the initial pulse size as compared to other methods of ion injection.
Theory of operation
The concept behind the Bradbury–Nielsen shutter is to apply a high-frequency voltage, 180° out of phase, to alternate wires in a grid oriented orthogonally to the path of the ion beam. Charged particles then pass directly through the shutter only at certain times in the voltage phase (φ = nπ/2), when the potential difference between the grid wires is zero. At other times the ion beam is deflected to some angle by the potential difference between neighboring wires. This deflection is divergent, with ions that pass through alternate gaps being deflected in opposite directions. The maximum deflection angle can be calculated by
tan α = k Vp / V0
where α is the deflection angle, k is a deflection constant, Vp is the wire voltage (+Vp on one wire set and −Vp on the other), and V0 is the ion acceleration voltage (numerically equal to the ion's energy in eV per unit charge). The deflection constant k can be calculated by
k = π / (2 ln[cot(πR/2d)])
where R is the wire radius and d is the wire spacing.
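As a minimal sketch of the arithmetic (added here for illustration; the wire geometry and voltages below are assumed values, not taken from the article), the two formulas can be evaluated directly:

# Evaluate the deflection constant and maximum deflection angle for an
# illustrative gate geometry. R, d, Vp and V0 are assumptions chosen
# only to show the arithmetic.
import math

R = 5e-6      # wire radius in metres (assumed)
d = 100e-6    # wire spacing in metres (assumed)
Vp = 20.0     # gate wire voltage, +/-Vp, in volts (assumed)
V0 = 2000.0   # ion acceleration voltage in volts (assumed)

# Deflection constant: k = pi / (2 ln[cot(pi R / (2 d))])
k = math.pi / (2.0 * math.log(1.0 / math.tan(math.pi * R / (2.0 * d))))

# Maximum deflection angle: tan(alpha) = k * Vp / V0
alpha = math.degrees(math.atan(k * Vp / V0))
print(f"k = {k:.3f}, deflection angle = {alpha:.2f} degrees")
# For these assumed values: k ~ 0.62 and alpha ~ 0.35 degrees.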
Micromachined ion gates
A Bradbury–Nielsen gate micromachined from a silicon-on-insulator wafer has been reported. |
https://en.wikipedia.org/wiki/Mucoprotective | Mucoprotective agents are pharmaceutical or herbal medicines that protect mucous membrane tissues. They include demulcents. |