https://en.wikipedia.org/wiki/PGreen
|
The pGreen plasmids are vectors for plant transformation. They were first described in 2000 as components of a novel T-DNA binary system. The supporting web page provides supplementary information and ongoing support, and allows researchers to request the plasmid resources. As the plasmids have been taken up by the research community, they have been further developed, expanding the resources available to the community.
Researchers are encouraged to contribute to this research community by submitting their vector sequence to GenBank and providing a description of the plasmid on the site.
pGreenI and pGreenII
pGreen is the original pGreen plasmid. pGreenII features modifications to the plasmid backbone that improve plasmid stability.
T-DNA regions
No transformation selection
pGreenII 0000: minimal T-DNA with Left and Right Borders, a lacZ gene for blue/white selection during cloning, and a multiple cloning site (MCS) derived from pBluescript.
pGreenII 62-SK: derived from pGreenII 0000, the lacZ blue/white cloning selection has been replaced with a 35S-MCS-CaMV cassette that allows the insertion of a gene of interest into a 35S over-expression cassette. The MCS is derived from pBluescript.
Kanamycin selection
pGreenII 0029: derived from pGreenII 0000, a nos-kan cassette has been inserted into the HpaI site of the Left Border, providing resistance to kanamycin during plant transformation selection.
pGreenII 0029 62-SK: derived from pGreenII 0029, the lacZ blue/white cloning selection has been replaced with a 35S-MCS-CaMV cassette that allows the insertion of a gene of interest into a 35S over-expression cassette. The MCS is derived from pBluescript.
Hygromycin selection
pGreenII 0179: derived from pGreenII 0000, a 35S-hyg cassette has been inserted into the HpaI site of the Left Border, providing resistance to hygromycin during plant transformation selection.
Bialaphos selection
pGreenII 0229: derived from pGreenII 0000, a nos-bar cassette has been inserted into th
|
https://en.wikipedia.org/wiki/Schumacher%20criteria
|
Schumacher criteria are diagnostic criteria that were previously used for identifying multiple sclerosis (MS). Multiple sclerosis, understood as a central nervous system (CNS) condition, can be difficult to diagnose since its signs and symptoms may be similar to other medical problems. Medical organizations have created diagnostic criteria to ease and standardize the diagnostic process, especially in the first stages of the disease. Schumacher criteria were the first internationally recognized criteria for diagnosis, and introduced concepts still in use, such as CDMS (clinically definite MS).
It has sometimes been stated that the only proven diagnosis of MS is autopsy, or occasionally biopsy, in which lesions typical of MS can be detected through histopathological techniques, and that sensitivity and specificity should be calculated for any given criteria.
Context
Historically, the first widespread set of criteria were the Schumacher criteria (sometimes also spelled Schumacker). Currently, testing of cerebrospinal fluid obtained from a lumbar puncture can provide evidence of chronic inflammation of the central nervous system by looking for oligoclonal bands of IgG on electrophoresis, inflammation markers found in 75–85% of people with MS. At the time of the Schumacher criteria, however, oligoclonal band tests were not available, and MRI was not yet available either.
The most commonly used diagnostic tools at that time were evoked potentials. The nervous system of a person with MS responds less actively to stimulation of the optic nerve and sensory nerves due to demyelination of such pathways. These brain responses can be examined using visual and sensory evoked potentials.
Therefore, clinical data alone had to be used for a diagnosis of MS. Schumacher et al. proposed three classifications based on clinical observation: CDMS (clinically definite MS), PrMS (probable MS) and PsMS (possible MS).
Summary
To get a diagnosis of CDMS a patient must show the following:
Clinical signs of a pr
|
https://en.wikipedia.org/wiki/Galenic%20formulation
|
Galenic formulation deals with the principles of preparing and compounding medicines in order to optimize their absorption. Galenic formulation is named after Claudius Galen, a 2nd-century AD Greek physician, who codified the preparation of drugs using multiple ingredients. Today, galenic formulation is part of pharmaceutical formulation. The pharmaceutical formulation of a medicine affects the pharmacokinetics, pharmacodynamics and safety profile of a drug.
See also
Formulations
Pharmaceutical formulation
ADME
Pharmacology
Medicinal chemistry
Pesticide formulation
|
https://en.wikipedia.org/wiki/Interceptor%20pattern
|
In the field of software development, an interceptor pattern is a software design pattern that is used when software systems or frameworks want to offer a way to change, or augment, their usual processing cycle. For example, a (simplified) typical processing sequence for a web server is to receive a URI from the browser, map it to a file on disk, open the file and send its contents to the browser. Any of these steps could be replaced or changed, e.g. by replacing the way URIs are mapped to filenames, or by inserting a new step which processes the file's contents.
Key aspects of the pattern are that the change is transparent and used automatically. In essence, the rest of the system does not have to know something has been added or changed and can keep working as before. To facilitate this, a predefined interface for extension has to be implemented, some kind of dispatching mechanism is required where interceptors are registered (this may be dynamic, at runtime, or static, e.g. through configuration files) and context objects are provided, which allow access to the framework's internal state.
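These elements (a predefined extension interface, a registration and dispatching mechanism, and context objects) can be illustrated with a short, self-contained Python sketch of a simplified web-server pipeline. All names here are illustrative and not taken from any particular framework:

```python
class Context:
    """Context object exposing the framework's per-request state to interceptors."""
    def __init__(self, uri):
        self.uri = uri
        self.path = None   # filled in by the default URI-to-file mapping step
        self.body = None   # response contents


class Interceptor:
    """Predefined extension interface; interceptors override only the hooks they need."""
    def before_dispatch(self, ctx): pass
    def after_dispatch(self, ctx): pass


class Dispatcher:
    """Dispatching mechanism where interceptors are registered (here dynamically, at runtime)."""
    def __init__(self):
        self._interceptors = []

    def register(self, interceptor):
        self._interceptors.append(interceptor)

    def handle(self, uri):
        ctx = Context(uri)
        for i in self._interceptors:      # interceptors run transparently;
            i.before_dispatch(ctx)        # the core steps below stay unchanged
        if ctx.path is None:              # default URI-to-file mapping
            ctx.path = "/var/www" + ctx.uri
        ctx.body = f"<contents of {ctx.path}>"   # stand-in for reading the file
        for i in self._interceptors:
            i.after_dispatch(ctx)
        return ctx.body


class RewriteMapping(Interceptor):
    """Example interceptor that changes how URIs are mapped to filenames."""
    def before_dispatch(self, ctx):
        if ctx.uri == "/":
            ctx.path = "/var/www/index.html"


dispatcher = Dispatcher()
dispatcher.register(RewriteMapping())
print(dispatcher.handle("/"))
```

The rest of the pipeline never checks whether an interceptor is present; registering or removing one changes behaviour without any other code being aware of it.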
Uses and context
Typical users of this pattern are web servers (as mentioned above) and object- and message-oriented middleware.
An example of implementation of this pattern is the javax.servlet.Filter interface, which is part of Java Platform, Enterprise Edition.
Aspect-oriented programming (AOP) can also be used in some situations to provide the capability of an interceptor, although AOP doesn't use the elements typically defined for the interceptor pattern.
|
https://en.wikipedia.org/wiki/Graduated%20optimization
|
Graduated optimization is a global optimization technique that attempts to solve a difficult optimization problem by initially solving a greatly simplified problem, and progressively transforming that problem (while optimizing) until it is equivalent to the difficult optimization problem.
Technique description
Graduated optimization is an improvement to hill climbing that enables a hill climber to avoid settling into local optima. It breaks a difficult optimization problem into a sequence of optimization problems, such that the first problem in the sequence is convex (or nearly convex), the solution to each problem gives a good starting point to the next problem in the sequence, and the last problem in the sequence is the difficult optimization problem that it ultimately seeks to solve. Often, graduated optimization gives better results than simple hill climbing. Further, when certain conditions exist, it can be shown to find an optimal solution to the final problem in the sequence. These conditions are:
The first optimization problem in the sequence can be solved given the initial starting point.
The locally convex region around the global optimum of each problem in the sequence includes the point that corresponds to the global optimum of the previous problem in the sequence.
It can be shown inductively that if these conditions are met, then a hill climber will arrive at the global optimum for the difficult problem. Unfortunately, it can be difficult to find a sequence of optimization problems that meet these conditions. Often, graduated optimization yields good results even when the sequence of problems cannot be proven to strictly meet all of these conditions.
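As a concrete illustration of the idea (an illustrative sketch only, with a made-up objective and blur schedule, assuming NumPy and SciPy are available), the following Python code minimizes a discretized one-dimensional objective by hill climbing on progressively less-blurred versions of it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical non-convex objective, discretized on a grid for simplicity.
x = np.linspace(-10.0, 10.0, 2001)
f = 0.05 * x**2 + np.sin(3.0 * x)            # many local minima, one broad global basin

def hill_climb(values, start):
    """Greedy descent on a 1-D grid: move to the lower neighbour until stuck."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = min(neighbours, key=lambda j: values[j])
        if values[best] >= values[i]:
            return i
        i = best

# Graduated optimization: start on a heavily blurred (nearly convex) version of f,
# then re-use each solution as the starting point on a sharper version.
i = len(x) // 4                               # arbitrary starting index
for sigma in (400, 200, 100, 50, 10, 0):      # blur widths in grid points, ending at the true problem
    smoothed = gaussian_filter1d(f, sigma) if sigma > 0 else f
    i = hill_climb(smoothed, i)

print("graduated optimum near x =", x[i], "with f(x) =", f[i])
```

Plain hill climbing from the same starting index would typically stop in the nearest local minimum of f, whereas the blurred early stages steer the search toward the global basin first.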
Some examples
Graduated optimization is commonly used in image processing for locating objects within a larger image. This problem can be made to be more convex by blurring the images. Thus, objects can be found by first searching the most-blurred image, then starting at that point and searching withi
|
https://en.wikipedia.org/wiki/National%20Strategy%20for%20Trusted%20Identities%20in%20Cyberspace
|
The National Strategy for Trusted Identities in Cyberspace (NSTIC) is a US government initiative announced in April 2011 to improve the privacy, security and convenience of sensitive online transactions through collaborative efforts with the private sector, advocacy groups, government agencies, and other organizations.
The strategy imagined an online environment where individuals and organizations can trust each other because they identify and authenticate their digital identities and the digital identities of organizations and devices. It was promoted to offer, but not mandate, stronger identification and authentication while protecting privacy by limiting the amount of information that individuals must disclose.
Description
The strategy was developed with input from private sector lobbyists, including organizations representing 18 business groups, 70 nonprofit and federal advisory groups, and comments and dialogue from the public.
The strategy had four guiding principles:
privacy-enhancing and voluntary
secure and resilient
interoperable
cost-effective and easy to use.
The NSTIC described a vision, compared to an ecosystem, in which individuals, businesses, and other organizations enjoy greater trust and security as they conduct sensitive transactions online. Technologies, policies, and agreed-upon standards would securely support transactions ranging from anonymous to fully authenticated and from low to high value in such an imagined world.
Implementation included three initiatives:
The Identity Ecosystem Steering Group (IDESG), the private sector-led organization developing the Identity Ecosystem Framework;
Funding pilot projects that NSTIC said embrace and advance guiding principles; and
The Federal Cloud Credential Exchange (FCCX), the U.S. federal government service for government agencies to accept third-party issued credentials approved under the FICAM scheme.
NSTIC was announced during the Presidency of Barack Obama near the end of his first term o
|
https://en.wikipedia.org/wiki/Viral%20shedding
|
Viral shedding is the expulsion and release of virus progeny following successful reproduction during a host cell infection. Once replication has been completed and the host cell is exhausted of all resources in making viral progeny, the viruses may begin to leave the cell by several methods.
The term is variously used to refer to viral particles shedding from a single cell, from one part of the body into another, and from a body into the environment, where the virus may infect another.
Vaccine shedding is a form of viral shedding which can occur in instances of infection caused by some attenuated (or "live virus") vaccines.
Means
Shedding from a cell into extracellular space
Budding (through cell envelope)
"Budding" through the cell envelope—in effect, borrowing from the cell membrane to create the virus' own viral envelope— into extracellular space is most effective for viruses that require their own envelope. These include such viruses as HIV, HSV, SARS or smallpox. When beginning the budding process, the viral nucleocapsid cooperates with a certain region of the host cell membrane. During this interaction, the glycosylated viral envelope protein inserts itself into the cell membrane. In order to successfully bud from the host cell, the nucleocapsid of the virus must form a connection with the cytoplasmic tails of envelope proteins. Though budding does not immediately destroy the host cell, this process will slowly use up the cell membrane and eventually lead to the cell's demise. This is also how antiviral responses are able to detect virus-infected cells. Budding has been most extensively studied for viruses of eukaryotes. However, it has been demonstrated that viruses infecting prokaryotes of the domain Archaea also employ this mechanism of virion release.
Apoptosis (cell destruction)
Animal cells are programmed to self-destruct when they are under viral attack or damaged in some other way. By forcing the cell to undergo apoptosis or cell suicide, rele
|
https://en.wikipedia.org/wiki/KT5720
|
KT5720 is a kinase inhibitor with specificity towards protein kinase A. It is a semi-synthetic derivative of K252a and analog of staurosporine.
|
https://en.wikipedia.org/wiki/TO-263
|
The Double Decawatt Package, D2PAK, SOT404 or DDPAK, standardized as TO-263, is a semiconductor package type intended for surface mounting on circuit boards. The TO-263 was designed by Motorola. These packages are similar to the earlier TO-220-style packages intended for high power dissipation but lack the extended metal tab and mounting hole, while representing a larger version of the TO-252 (also known as DPAK) SMT package. As with all SMT packages, the pins on a D2PAK are bent to lie against the PCB surface. The TO-263 can have 3 to 7 terminals.
Dimensions
Variants
Texas Instruments has a smaller version of the TO-263: the TO-263 THIN. The height of the TO-263 THIN is 2 mm instead of the standard 4.5 mm.
See also
TO-220, through hole version of the TO-263
|
https://en.wikipedia.org/wiki/PiggyBac%20transposon%20system
|
The PiggyBac (PB) transposon is a mobile genetic element that efficiently transposes between vectors and chromosomes via a "cut and paste" mechanism. During transposition, the PB transposase recognizes transposon-specific inverted terminal repeat sequences (ITRs) located on both ends of the transposon vector and efficiently moves the contents from the original sites and integrates them into TTAA chromosomal sites. The powerful activity of the PiggyBac transposon system enables genes of interest between the two ITRs in the PB vector to be easily mobilized into target genomes. The TTAA-specific transposon piggyBac is rapidly becoming a highly useful transposon for genetic engineering of a wide variety of species, particularly insects. It was discovered in 1989 by Malcolm Fraser at the University of Notre Dame.
Origin
The TTAA-specific, short repeat elements are a group of transposons that share similarity of structure and properties of movement. These elements were originally defined in the Cabbage Looper, but appear to be common among other animals as well. They might prove to be useful tools for the transformation of insects. The original identification of these unusual TTAA-specific elements came through a somewhat unconventional route relative to most other Class II mobile elements. Spontaneous plaque morphology mutants of baculoviruses were observed to arise during propagation of these viruses in the TN-368 cell line. Genetic characterization of these mutations often revealed an associated insertion of host-derived DNAs, some of which appeared to be transposons.
Several different mobile host DNA insertions have been identified within the few-polyhedra (FP) locus of the baculoviruses AcMNPV and GmMNPV. The insertions most extensively studied are those now designated as tagalong (formerly TFP3) and piggyBac (formerly IFP2). These insertions exhibit a unique preference for TTAA target sites, whether inserting within the viral FP-locus or at other regions of
|
https://en.wikipedia.org/wiki/Well-ordering%20principle
|
In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a least element. In other words, the set of positive integers is well-ordered by its "natural" or "magnitude" order, in which $m$ precedes $n$ if and only if $n$ is either $m$ or the sum of $m$ and some positive integer.
The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element.
Properties
Depending on the framework in which the natural numbers are introduced, this (second-order) property of the set of natural numbers is either an axiom or a provable theorem. For example:
In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic.
Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every set that is bounded from below has an infimum, then every non-empty set $A$ of natural numbers also has an infimum, say $a^{*}$. We can now find an integer $n^{*}$ such that $a^{*}$ lies in the half-open interval $(n^{*}-1,\ n^{*}]$, and can then show that we must have $a^{*} = n^{*}$, and $n^{*}$ in $A$.
In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., a set containing 0 and closed under the successor operation). One can (even without invoking the regularity axiom) show that the set of all natural numbers $n$ such that "$\{0, \ldots, n\}$ is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also wel
|
https://en.wikipedia.org/wiki/What%20the%20Bleep%20Do%20We%20Know%21%3F
|
What the Bleep Do We Know!? (stylized as What tнē #$*! D̄ө ωΣ (k)πow!? and What the #$*! Do We Know!?) is a 2004 American pseudo-scientific film that posits a spiritual connection between quantum physics and consciousness. The plot follows the fictional story of a photographer, using documentary-style interviews and computer-animated graphics, as she encounters emotional and existential obstacles in her life and begins to consider the idea that individual and group consciousness can influence the material world. Her experiences are offered by the filmmakers to illustrate the film's scientifically unsupported thesis about quantum physics and consciousness.
Bleep was conceived and its production funded by William Arntz, who co-directed the film along with Betsy Chasse and Mark Vicente; all three were students of Ramtha's School of Enlightenment. A moderately low-budget independent film, it was promoted using viral marketing methods and opened in art-house theaters in the western United States, winning several independent film awards before being picked up by a major distributor and eventually grossing over $10 million. The 2004 theatrical release was succeeded by a substantially changed, extended home media version in 2006.
The film has been described as an example of quantum mysticism, and has been criticized for both misrepresenting science and containing pseudoscience. While many of its interviewees and subjects are professional scientists in the fields of physics, chemistry, and biology, one of them has noted that the film quotes him out of context.
Synopsis
Filmed in Portland, Oregon, What the Bleep Do We Know!? presents a viewpoint of the physical universe and human life within it, with connections to neuroscience and quantum physics. Some ideas discussed in the film are:
That the universe is best seen as constructed from thoughts and ideas rather than from matter.
That "empty space" is not empty.
That matter is not solid, and electrons are able to pop in
|
https://en.wikipedia.org/wiki/Neurofeedback
|
Neurofeedback is a type of biofeedback that focuses on the neuronal activity of the brain. The training method is based on reinforcement learning, where real-time feedback provided to the trainee is supposed to reward and reinforce desired brain activity or inhibit unfavorable activity patterns. In short, it is self-regulating training that generally utilizes EEG for operant conditioning.
Different mental states (for example, concentration, relaxation, creativity, distractibility, rumination, etc.) are associated with different brain activities or brain states.
Similarly, symptoms of mental or brain-related health issues are associated with neuronal overarousal, underarousal, disinhibition, or instability. Thus, neurofeedback tries to yield symptom relief through an improved regulation of neuronal activity.
Apart from being a therapeutic approach, neurofeedback is increasingly used for healthy people as well, aiming at improved cognitive regulation skills according to individual goals and needs.
There are various methods of providing feedback of neurological activity. The most common application uses the measurement of electroencephalography (EEG), where the electrical activity of the brain is recorded by electrodes placed on the scalp. Other, less common methods rely on functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS), or hemoencephalography (HEG) biofeedback.
History
In 1898, Edward Thorndike formulated the law of effect. In his work, he theorized that behavior is shaped by satisfying or discomforting consequences. This set the foundation for operant conditioning.
In 1924, the German psychiatrist Hans Berger connected several electrodes to a patient's scalp and detected a small current by using a ballistic galvanometer. In his subsequent studies, Berger analyzed EEGs qualitatively, but in 1932, G. Dietsch applied Fourier analysis to seven EEG records and later became the first researcher to apply quantitative EEG (QEEG).
|
https://en.wikipedia.org/wiki/Service%20layer
|
In intelligent networks (IN) and cellular networks, service layer is a conceptual layer within a network service provider architecture. It aims at providing middleware that serves third-party value-added services and applications at a higher application layer. The service layer provides capability servers owned by a telecommunication network service provider, accessed through open and secure Application Programming Interfaces (APIs) by application layer servers owned by third-party content providers. The service layer also provides an interface to core networks at a lower resource layer. The lower layers may also be named control layer and transport layer (the transport layer is also referred to as the access layer in some architectures).
The concept of service layer is used in contexts such as Intelligent networks (IN), WAP, 3G and IP Multimedia Subsystem (IMS). It is defined in the 3GPP Open Services Architecture (OSA) model, which reused the idea of the Parlay API for third-party servers.
In software design, for example Service-oriented architecture, the concept of service layer has a different meaning.
Service layer in IMS
The service layer of an IMS architecture provides multimedia services to the overall IMS network. This layer contains network elements which connect to the Serving-CSCF (Call Session Control Function) using the IP multimedia Subsystem Service Control Interface (ISC). The ISC interface uses the SIP signalling protocol.
Elements of the IMS service layer
The network elements contained within the service layer are generically referred to as 'service platforms'; however, the 3GPP specification (3GPP TS 23.228 V8.7.0) defines several types of service platform:
SIP Application Server
OSA Service Capability Server
IM-SSF
SIP Application Server
The SIP Application Server (AS) performs the same function as a Telephony Application Server in a pre-IMS network, however it is specifically tailored to support the SIP signalling protocol for use in
|
https://en.wikipedia.org/wiki/Adaptive%20resonance%20theory
|
Adaptive resonance theory (ART) is a theory developed by Stephen Grossberg and Gail Carpenter on aspects of how the brain processes information. It describes a number of neural network models which use supervised and unsupervised learning methods, and address problems such as pattern recognition and prediction.
The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also called incremental learning.
Learning model
The basic ART system is an unsupervised learning model. It typically consists of a comparison field and a recognition field composed of neurons, a vigilance parameter (threshold of recognition), and a reset module.
The comparison field takes an input vector (a one-dimensional array of values) and transfers it to its best match in the recognition field.
Its best match is the single neuron whose set of weights (weight vector) most closely matches the input vector.
Each recognition field neuron outputs a negative signal (proportional to that neuron's quality of match to the input vector) to each of the other recognition field neurons and thus inhibits their output.
In this way the recognition field exhibits lateral inhibition, allowing each neuron in it to represent a category to which input vectors a
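To make the interplay between the recognition field, the vigilance parameter and the reset step more concrete, here is a minimal Python sketch of a simplified ART-1-style procedure for binary inputs; the parameter values, variable names and the exact choice function are illustrative simplifications rather than a faithful reproduction of Carpenter and Grossberg's equations:

```python
import numpy as np

def art1_train(inputs, vigilance=0.7, beta=1.0):
    """Cluster binary input vectors with a simplified ART-1-style procedure.

    Each category is stored as a binary prototype (weight) vector. For each
    input, candidate categories are ranked by a choice function; the winner is
    accepted only if it passes the vigilance (match) test, otherwise the search
    moves on (a "reset"), and if no category matches, a new one is created.
    """
    categories = []                      # list of prototype vectors
    assignments = []
    for pattern in inputs:
        I = np.asarray(pattern, dtype=bool)
        # Choice function: overlap with the prototype, normalized by its size.
        order = sorted(range(len(categories)),
                       key=lambda j: -np.sum(I & categories[j]) /
                                      (beta + np.sum(categories[j])))
        chosen = None
        for j in order:
            overlap = np.sum(I & categories[j])
            # Vigilance test: the prototype must cover enough of the input.
            if overlap / max(np.sum(I), 1) >= vigilance:
                categories[j] = I & categories[j]     # fast-learning update
                chosen = j
                break
        if chosen is None:               # no resonance: recruit a new category
            categories.append(I.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return categories, assignments

# Tiny usage example with 4-bit binary patterns.
data = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
prototypes, labels = art1_train(data)
print(labels)
```

Raising the vigilance parameter makes the match test stricter, so more categories are created; lowering it lets more dissimilar inputs share a prototype.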
|
https://en.wikipedia.org/wiki/The%20Six%20Arrows
|
The Six Arrows () is the symbol and flag of the Turkish Republican People's Party (CHP). The arrows represent the fundamental pillars of Kemalism, Turkey's founding ideology. These are Republicanism, Populism, Nationalism, Laicism, Statism, and Reformism. The arrows are believed to have been conceived by İsmail Hakkı Tonguç, a Turkish scientist who made significant contributions to the Turkish education system. The principles of the Six Arrows were added to the Turkish Constitution on 5 February 1937. From August 1938 the flag was hoisted at all official buildings.
See also
Arrow Cross
Arrow (symbol)
Three Arrows
Circassian flag
Three Principles of the People
Pancasila (politics)
Yoke and arrows
|
https://en.wikipedia.org/wiki/Mediastinum
|
The mediastinum (plural: mediastina) is the central compartment of the thoracic cavity. Surrounded by loose connective tissue, it is an undelineated region that contains a group of structures within the thorax, namely the heart and its vessels, the esophagus, the trachea, the phrenic and cardiac nerves, the thoracic duct, the thymus and the lymph nodes of the central chest.
Anatomy
The mediastinum lies within the thorax and is enclosed on the right and left by pleurae. It is surrounded by the chest wall in front, the lungs to the sides and the spine at the back. It extends from the sternum in front to the vertebral column behind. It contains all the organs of the thorax except the lungs. It is continuous with the loose connective tissue of the neck.
The mediastinum can be divided into an upper (or superior) and lower (or inferior) part:
The superior mediastinum starts at the superior thoracic aperture and ends at the thoracic plane.
The inferior mediastinum extends from this level to the diaphragm. This lower part is subdivided into three regions, all relative to the pericardium: the anterior mediastinum lies in front of the pericardium, the middle mediastinum contains the pericardium and its contents, and the posterior mediastinum lies behind the pericardium.
Anatomists, surgeons, and clinical radiologists compartmentalize the mediastinum differently. For instance, in the radiological scheme of Felson, there are only three compartments (anterior, middle, and posterior), and the heart is part of the middle (inferior) mediastinum.
Thoracic plane
The transverse thoracic plane, thoracic plane, plane of Louis or plane of Ludwig is an important anatomical plane at the level of the sternal angle and the T4/T5 intervertebral disc. It serves as an imaginary boundary that separates the superior and inferior mediastinum.
A number of important anatomical structures and transitions occur at the level of the thoracic plane, including:
The carinal bifurcation of the trachea
|
https://en.wikipedia.org/wiki/Hello%20Brother%20%281999%20film%29
|
Hello Brother is a 1999 Indian Hindi-language romantic fantasy action film starring Salman Khan, Arbaaz Khan, Rani Mukerji and Shakti Kapoor. It was directed by Salman Khan's brother Sohail Khan. The film is an adaptation of the 1992 Malayalam film Aayushkalam.
Plot
Hero (Salman Khan) works for a courier company owned by Khanna (Shakti Kapoor). He is spirited and humorous and is in love with Rani (Rani Mukerji) but she simply thinks of Hero as a very good friend. Enter Inspector Vishal (Arbaaz Khan) who works in the narcotics department. Vishal, who is stationed at a different Police Department, transfers to Mumbai/Goa. Vishal suspects Khanna to be involved in a drug ring and confronts him. Hero stands up to Vishal and defends his boss, but soon learns the truth behind Khanna. In a confrontation, Khanna kills Hero and shoots Vishal in the heart. The police department decides to transplant Hero's heart into Vishal's body.
Hero appears as a ghost now and can only be seen by Vishal, since his heart is in Vishal's body. Hero says that he will only rest in peace after Khanna is killed, thus avenging him. Vishal decides to go about doing this, and Rani and Vishal begin to fall in love with each other. Hero dislikes this and tries to foil Vishal's plans of getting close to Rani. But Hero and Vishal start getting closer and work together, becoming good friends after Vishal confronts Khanna's drug company.
Khanna arranges to leave the country but kidnaps Rani. Vishal and Hero go to save Rani and fight off Khanna's henchmen, but Vishal gets injured. Hero then goes inside him and controls his fighting moves, helping him to beat the thugs. Rani witnesses this and realizes that only Hero could perform such moves. She calls out Hero's name, though Vishal does not know that Hero is inside him. Khanna shoots Vishal. Hero takes Vishal's hand and shoots Khanna with the gun.
Khanna finally dies, and as Rani is comforting Vishal, he tells her that Hero is with them and he loves Rani.
|
https://en.wikipedia.org/wiki/John%20Bandler
|
John William Bandler (9 November 1941 – 28 September 2023) was a Canadian professor, engineer, entrepreneur, artist, speaker, playwright, and author of fiction and nonfiction. Bandler is known for his invention of space mapping technology and his contributions to device modeling, computer-aided design, microwave engineering, mathematical optimization, and yield-driven design.
Early life and education
The only child of parents who escaped from Nazi-occupied Vienna to Cyprus, from where they were subsequently evacuated along with other Jewish refugees in 1941, Bandler was born in Jerusalem. After the War, his parents returned to Cyprus, where Bandler attended the Junior School in Nicosia, and, for a year, The English School in Nicosia. After a brief stay in Vienna in 1956, he left for England and completed his schooling in London.
He entered Imperial College of Science and Technology, University of London, in 1960, graduating in 1963 with First Class Honours in Electrical Engineering; and in 1967 with a Ph.D. in Microwaves. In 1976 he received his D.Sc. (Eng.) from the University of London in Microwaves, Computer-aided Design, and Optimization of Circuits and Systems.
Career
Bandler worked as an engineer at Mullard Research Laboratories (later called Philips Research Laboratories) in Redhill, Surrey, England, from 1966 to 1967. He was a Postdoctoral Fellow and Sessional Lecturer at the University of Manitoba from 1967 to 1969.
Bandler joined McMaster University in 1969 as an assistant professor, becoming associate professor in 1971 and Professor in 1974. He served as chairman of the Department of Electrical Engineering from 1978 to 1979 and was Dean of the Faculty of Engineering from 1979 to 1981. During his time at McMaster, Bandler was coordinator of the Group on Simulation, Optimization and Control from 1973 until 1983, when he formed the Simulation Optimization Systems Research Laboratory. Dr. Bandler became a Professor Emeritus of McMaster University in 2000
|
https://en.wikipedia.org/wiki/Whitney%20Smith
|
Whitney Smith Jr. (February 26, 1940 – November 17, 2016) was an American vexillologist. He coined the term vexillology, which refers to the scholarly analysis of all aspects of flags. He was a founder of several vexillology organizations. Smith was a Laureate and a Fellow of the International Federation of Vexillological Associations.
Early life and education
Whitney Smith Jr. was born on February 26, 1940, to Mildred and Whitney Smith. As a youth, he lived in Lexington and Winchester, Massachusetts. His interest in flags was sparked by memories of Patriots' Day and a 1946 gift of The Golden Encyclopedia.
At Harvard, he studied political science and received a bachelor's degree in the field in 1961. During his time at Harvard, Smith designed the flag of Guyana after corresponding with Guyanese President Cheddi Jagan via mail. He received his doctorate in political science at Boston University in 1964; political symbolism was the subject of his dissertation.
Career
Smith had his first article published at age 18. By 1960, he was consulting with the Encyclopaedia Britannica.
In 1961, Smith and colleague Gerhard Grahl co-founded The Flag Bulletin (), the world's first journal about flags. The following year, Smith established The Flag Research Center at his home and was its director.
Smith worked with Klaes Sierksma to organize the First International Congress of Vexillology (Muiderberg, Netherlands) in 1965. They joined Louis Mühlemann in founding the International League of Vexillologists, serving on its Governing Board from September 5, 1965; the league operated until September 3, 1967. The league was replaced by the International Federation of Vexillological Associations (known by its French acronym FIAV), with Smith as vice-president of the Provisional Council as of September 3, 1967. In 1969, Smith moved from being FIAV Provisional Council vice-president to being the first Secretary-General of FIAV. Smith was also responsible for founding the North Amer
|
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20Riemann%20zeta%20function
|
In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted $\zeta(s)$ and is named after the mathematician Bernhard Riemann. When the argument $s$ is a real number greater than one, the zeta function satisfies the equation
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}.$$
It can therefore provide the sum of various convergent infinite series, such as $\zeta(2) = 1 + \tfrac{1}{4} + \tfrac{1}{9} + \cdots$. Explicit or numerically efficient formulae exist for $\zeta(s)$ at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments.
The same equation in $s$ above also holds when $s$ is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at $s = 1$. The complex derivative exists in this more general region, making the zeta function a meromorphic function. The above equation no longer applies for these extended values of $s$, for which the corresponding summation would diverge. For example, the full zeta function exists at $s = -1$ (and is therefore finite there), but the corresponding series would be $1 + 2 + 3 + \cdots$, whose partial sums would grow indefinitely large.
The zeta function values listed below include function values at the negative even numbers ($s = -2, -4, -6, \ldots$), for which $\zeta(s) = 0$ and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis.
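For orientation, a few of the best-known particular values treated in this article are
$$\zeta(2) = \frac{\pi^{2}}{6}, \qquad \zeta(4) = \frac{\pi^{4}}{90}, \qquad \zeta(0) = -\tfrac{1}{2}, \qquad \zeta(-1) = -\tfrac{1}{12}, \qquad \zeta(-2) = 0.$$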
The Riemann zeta function at 0 and 1
At zero, one has
$$\zeta(0) = -\tfrac{1}{2}.$$
At 1 there is a pole, so ζ(1) is not finite but the left and right limits are:
$$\lim_{\varepsilon \to 0^{\pm}} \zeta(1 + \varepsilon) = \pm\infty.$$
Since it is a pole of first order, it has a complex residue
$$\lim_{s \to 1} (s - 1)\,\zeta(s) = 1.$$
Positiv
|
https://en.wikipedia.org/wiki/Stability%20radius
|
In mathematics, the stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The picture of this intuitive notion is this:
where $\hat{p}$ denotes the nominal point, $P$ denotes the space of all possible values of the object $p$, and the shaded area, $P(s)$, represents the set of points that satisfy the stability conditions. The radius of the blue circle, shown in red, is the stability radius.
Abstract definition
The formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful
$$\hat{\rho}(\hat{p}) = \max\,\{\rho \ge 0 : p \in P(s),\ \forall p \in B(\rho, \hat{p})\},$$
where $B(\rho, \hat{p})$ denotes a closed ball of radius $\rho$ in $P$ centered at $\hat{p}$.
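As an informal numerical illustration of this definition (a heuristic sketch, not a standard algorithm; the stability region and nominal point are made up for the example), the stability radius of a scalar nominal value can be estimated by growing the ball until a sampled point violates the stability condition:

```python
import numpy as np

def stability_radius(nominal, is_stable, max_radius=10.0, n_radii=1000, n_samples=200):
    """Estimate the stability radius of a scalar nominal value.

    `is_stable` is a user-supplied predicate describing the stability region P(s).
    We grow the closed ball B(rho, nominal) and return the largest rho for which
    every sampled point in the ball still satisfies the condition.  Sampling is
    only a heuristic stand-in for checking all points of the ball.
    """
    best = 0.0
    for rho in np.linspace(0.0, max_radius, n_radii):
        points = np.linspace(nominal - rho, nominal + rho, n_samples)
        if all(is_stable(p) for p in points):
            best = rho
        else:
            break
    return best

# Example: the stable set is the open interval (-1, 3) and the nominal point is 0.5,
# so the distance to the nearest unstable point, and hence the radius, is about 1.5.
print(stability_radius(0.5, lambda p: -1.0 < p < 3.0))
```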
History
It looks like the concept was invented in the early 1960s. In the 1980s it became popular in control theory and optimization. It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest.
Relation to Wald's maximin model
It was shown that the stability radius model is an instance of Wald's maximin model. That is,
$$\hat{\rho}(\hat{p}) = \max_{\rho \ge 0}\ \min_{p \in B(\rho, \hat{p})} f(p, \rho),$$
where
$$f(p, \rho) = \begin{cases} \rho, & p \in P(s) \\ -\infty, & \text{otherwise.} \end{cases}$$
The large penalty ($-\infty$) is a device to force the player not to perturb the nominal value beyond the stability radius of the system. It is an indication that the stability model is a model of local stability/robustness, rather than a global one.
Info-gap decision theory
Info-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. But it has been shown that its robustness model, namely
$$\hat{\alpha}(q, \tilde{u}) = \max\,\{\alpha \ge 0 : r_{c} \le R(q, u),\ \forall u \in U(\alpha, \tilde{u})\},$$
is actually a stability radius model characterized by a simple stability requirement of the form $r_{c} \le R(q, u)$, where $q$ denotes the decision under consideration, $u$ denotes the parameter of interest, $\tilde{u}$ denotes the estimate of the true value of $u$, and $U(\alpha, \tilde{u})$ denotes a ball of radius $\alpha$ centered at $\tilde{u}$.
Since stability radius models are designed to deal with smal
|
https://en.wikipedia.org/wiki/Embryonic%20diapause
|
Embryonic diapause (delayed implantation in mammals) is a reproductive strategy used by a number of animal species across different biological classes. In the more than 130 mammalian species where this takes place, the process occurs at the blastocyst stage of embryonic development, and is characterized by a dramatic reduction or complete cessation of mitotic activity, arresting most often in the G0 or G1 phase of division.
In placental embryonic diapause, the blastocyst does not immediately implant in the uterus after sexual reproduction has resulted in the zygote, but rather remains in this non-dividing state of dormancy until conditions allow for attachment to the uterine wall to proceed as normal. As a result, the normal gestation period is extended for a species-specific time.
Diapause provides a survival advantage to offspring, because birth or emergence of young can be timed to coincide with the most hospitable conditions, regardless of when mating occurs or length of gestation; any such gain in survival rates of progeny confers an evolutionary advantage.
Evolutionary significance
Organisms which undergo embryonic diapause are able to synchronize the birth of offspring to the most favorable conditions for reproductive success, irrespective of when mating took place. Many different factors can induce embryonic diapause, such as the time of year, temperature, lactation and supply of food.
Embryonic diapause is a relatively widespread phenomenon outside of mammals, with known occurrence in the reproductive cycles of many insects, nematodes, fish, and other non-mammalian vertebrates. It has been observed in approximately 130 mammalian species, which is less than two percent of all species of mammals. These include certain rodents, bears, armadillos, mustelids (e.g. weasels and badgers), and marsupials (e.g. kangaroos). Some groups only have one species that undergoes embryonic diapause, such as the roe deer in the order Artiodactyla.
Experimental induction of emb
|
https://en.wikipedia.org/wiki/Taijitu
|
In Chinese philosophy, a taijitu is a symbol or diagram representing Taiji in both its monist (wuji) and its dualist (yin and yang) aspects, in application as a deductive and inductive theoretical model. Such a diagram was first introduced by the Neo-Confucian philosopher Zhou Dunyi (1017–1073) of the Song Dynasty in his Taijitu shuo.
The Daozang, a Taoist canon compiled during the Ming era, has at least half a dozen variants of the taijitu. The two most similar are the Taiji Xiantiandao and the diagrams, both of which have been extensively studied during the Qing period for their possible connection with Zhou Dunyi's taijitu.
Ming period author Lai Zhide (1525–1604) simplified the taijitu to a design of two interlocking spirals with two black-and-white dots superimposed on them, which became synonymous with the Yellow River Map. This version was represented in Western literature and popular culture in the late 19th century as the "Great Monad", and the depiction has been known as the "yin-yang symbol" since the 1960s. The contemporary Chinese term for the modern symbol is "the two-part Taiji diagram".
Ornamental patterns with visual similarity to the "yin yang symbol" are found in archaeological artefacts of European prehistory; such designs are sometimes descriptively dubbed "yin yang symbols" in archaeological literature by modern scholars.
Structure
The taijitu consists of five parts. Strictly speaking, the "yin and yang symbol", itself popularly called taijitu, represents the second of these five parts of the diagram.
At the top, an empty circle depicts the absolute (wuji). According to Zhou, wuji is also a synonym for taiji.
A second circle represents the Taiji as harboring Dualism, yin and yang, represented by filling the circle in a black-and-white pattern. In some diagrams, there is a smaller empty circle at the center of this, representing Emptiness as the foundation of duality.
Below this second circle is a five-part diagram representing the
|
https://en.wikipedia.org/wiki/Appeal%20to%20the%20stone
|
Appeal to the stone, also known as argumentum ad lapidem, is a logical fallacy that dismisses an argument as untrue or absurd. The dismissal is made by stating or reiterating that the argument is absurd, without providing further argumentation. The fallacy is closely tied to proof by assertion, since the dismissal asserts a position and attempts to persuade without providing any evidence.
Appeal to the stone is a logical fallacy. Specifically, it is an informal fallacy, which means that it relies on inductive reasoning in an argument to justify an assertion. Informal fallacies contain erroneous reasoning in content of the argument and not the form or structure of it, as opposed to formal fallacies, which contain erroneous reasoning in argument form.
Example
Speaker A: Infectious diseases are caused by tiny organisms that are not visible to unaided eyesight.
Speaker B: Your statement is false.
Speaker A: Why do you think that it is false?
Speaker B: It sounds like nonsense.
Speaker B denies Speaker A's claim without providing evidence to support their denial. This may not be unreasonable if the claim is inherently self-contradictory ("I am not speaking to you right now") or too malformed to be a sensical claim at all, of course.
History
Origin
The name "appeal to the stone" originates from an argument between Dr. Samuel Johnson and James Boswell over George Berkeley's theory of subjective idealism (known previously as "immaterialism"). Subjective idealism states that reality is dependent on a person's perceptions of the world and that material objects are intertwined with one's perceptions of these material objects.
During the argument, Johnson kicked a large stone and declared, "I refute it thus." Johnson's intent, apparently, was to imply that it was absurd of Berkeley to call such a stone "immaterial," when in fact Johnson could kick it with his foot.
Classification
Informal logical fallacies
Informal logical fallacies are misconceptions derived from faulty reasoning. Informal logical fallacies use inductive reasoning a
|
https://en.wikipedia.org/wiki/Long-lived%20plasma%20cell
|
Long-lived plasma cells (LLPCs) are a distinct subset of plasma cells that play a crucial role in maintaining humoral memory and long-term immunity.
They continuously produce and secrete high-affinity antibodies into the bloodstream, in contrast to memory B cells, which are quiescent and respond quickly to antigens upon recall.
Initially, it was believed that memory B cells replenish LLPCs. However, the transfer of allergen-specific IgE production to non-allergic individuals through bone marrow transplantation, with allergies developing without antigenic re-stimulation, suggested that LLPCs themselves persist. This led to the understanding that LLPCs are long-lived cells, contributing to the sustained production of specific antibodies.
Niche of LLPCs
The niche for long-lived plasma cells is a subject of ongoing research, and while some aspects are understood, many questions remain. LLPCs are not inherently long-lived, and their survival relies on accessing specific pro-survival niches in the bone marrow (BM), secondary lymphoid organs, mucosal tissues, and sites of inflammation. The BM has traditionally been considered the primary residence for LLPCs, offering a dynamic microenvironment that supports the formation of complex niches. However, recent studies have revealed that LLPCs can also reside in other locations, such as gut-associated lymphoid tissue (GALT), where they primarily produce IgA antibodies.
Cell Markers
Clear markers that distinguish LLPCs have yet to be fully elucidated. However, LLPCs exhibit a gene expression signature characterised by downregulation of genes related to antigen presentation and B-cell receptor (BCR) function. Conversely, only a small number of genes are upregulated in LLPCs, including anti-apoptotic genes such as MCL1 and ZNF667, ER stress-associated genes like ERO1LB and MANF, and the retention of TFBS and SRF in the bone marrow.
Furthermore, expression levels of surface markers, such as CD38 and CD19, vary among plasma cells and are associated with functiona
|
https://en.wikipedia.org/wiki/Stress%20fiber
|
Stress fibers are contractile actin bundles found in non-muscle cells. They are composed of actin (microfilaments) and non-muscle myosin II (NMMII), and also contain various crosslinking proteins, such as α-actinin, to form a highly regulated actomyosin structure within non-muscle cells. Stress fibers have been shown to play an important role in cellular contractility, providing force for a number of functions such as cell adhesion, migration and morphogenesis.
Structure
Stress fibers are primarily composed of actin and myosin. Actin is a ~43 kDa globular protein, and can polymerize to form long filamentous structures. These filaments are made of two strands of actin monomers (or protofilaments) wrapping around each other, to create a single actin filament. Because actin monomers are not symmetrical molecules, their filaments have polarity based upon the structure of the actin monomer, which will allow one end of the actin filament to polymerize faster than the other. The end that can polymerize faster is known as the plus-end, whereas the end that polymerizes slower is known as the minus-end. Stress fibers are usually composed of 10–30 actin filaments. Stress fibers are composed of antiparallel microfilaments: actin filaments are bundled along their length, and plus-ends and minus-ends co-mingle at each end of the bundle. The antiparallel arrangement of actin filaments within stress fibers is reinforced by α-actinin, an actin filament crosslinking protein which contains antiparallel actin-binding domains. These bundles are then cross-linked by NMMII to form stress fibers.
Assembly and regulation
The Rho family of GTPases regulate many aspects of actin cytoskeletal dynamics, including stress fiber formation. RhoA (sometimes referred to as just 'Rho') is responsible for the formation of stress fibers, and its activity in stress fiber formation was first discovered by Ridley and Hall in 1992. When bound to GTP, Rho activates Rho-associated coiled-coil forming kinas
|
https://en.wikipedia.org/wiki/Timaeus%20%28dialogue%29
|
Timaeus is one of Plato's dialogues, mostly in the form of long monologues given by Critias and Timaeus, written in about 360 BC. The work puts forward reasoning on the possible nature of the physical world and human beings and is followed by the dialogue Critias.
Participants in the dialogue include Socrates, Timaeus, Hermocrates, and Critias. Some scholars believe that it is not the Critias of the Thirty Tyrants who appears in this dialogue, but his grandfather, who is also named Critias. It has been suggested by some traditions (Diogenes Laertius (VIII 85), drawing on Hermippus of Smyrna (3rd century BC) and Timon of Phlius (c. 320 – 235 BC)) that Timaeus was influenced by a book about Pythagoras, written by Philolaus, although this assertion is generally considered false.
Introduction
The dialogue takes place the day after Socrates described his ideal state. In Plato's works, such a discussion occurs in the Republic. Socrates feels that his description of the ideal state was not sufficient for the purposes of entertainment and that "I would be glad to hear some account of it engaging in transactions with other states" (19b).
Hermocrates wishes to oblige Socrates and mentions that Critias knows just the account (20b) to do so. Critias proceeds to tell the story of Solon's journey to Egypt where he hears the story of Atlantis, and how Athens used to be an ideal state that subsequently waged war against Atlantis (25a). Critias believes that he is getting ahead of himself, and mentions that Timaeus will tell part of the account from the origin of the universe to man.
Critias also cites the Egyptian priest in Sais about long-term factors on the fate of mankind: There have been, and will be again, many destructions of mankind arising out of many causes; the greatest have been brought about by the agencies of fire and water, and other lesser ones by innumerable other causes. There is a story that even you [Greeks] have preserved, that once upon a time, Phaethon, the son of
|
https://en.wikipedia.org/wiki/Dynamics%20of%20the%20celestial%20spheres
|
Ancient, medieval and Renaissance astronomers and philosophers developed many different theories about the dynamics of the celestial spheres. They explained the motions of the various nested spheres in terms of the materials of which they were made, external movers such as celestial intelligences, and internal movers such as motive souls or impressed forces. Most of these models were qualitative, although a few of them incorporated quantitative analyses that related speed, motive force and resistance.
The celestial material and its natural motions
In considering the physics of the celestial spheres, scholars followed two different views about the material composition of the celestial spheres. For Plato, the celestial regions were made "mostly out of fire" on account of fire's mobility. Later Platonists, such as Plotinus, maintained that although fire moves naturally upward in a straight line toward its natural place at the periphery of the universe, when it arrived there, it would either rest or move naturally in a circle. This account was compatible with Aristotle's meteorology of a fiery region in the upper air, dragged along underneath the circular motion of the lunar sphere. For Aristotle, however, the spheres themselves were made entirely of a special fifth element, Aether (Αἰθήρ), the bright, untainted upper atmosphere in which the gods dwell, as distinct from the dense lower atmosphere, Aer (Ἀήρ). While the four terrestrial elements (earth, water, air and fire) gave rise to the generation and corruption of natural substances by their mutual transformations, aether was unchanging, moving always with a uniform circular motion that was uniquely suited to the celestial spheres, which were eternal. Earth and water had a natural heaviness (gravitas), which they expressed by moving downward toward the center of the universe. Fire and air had a natural lightness (levitas), such that they moved upward, away from the center. Aether, being neither heavy nor light,
|
https://en.wikipedia.org/wiki/Normalized%20number
|
In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point. Thus, a real number, when written out in normalized scientific notation, is as follows:
$$\pm d_{0}.d_{1}d_{2}d_{3}\cdots \times 10^{n}$$
where $n$ is an integer, $d_{0}, d_{1}, d_{2}, d_{3}, \ldots$ are the digits of the number in base 10, and $d_{0}$ is not zero. That is, its leading digit (i.e., leftmost) is not zero and is followed by the decimal point. Simply speaking, a number is normalized when it is written in the form of $a \times 10^{n}$ where $1 \le a < 10$ without leading zeros in a. This is the standard form of scientific notation. An alternative style is to have the first non-zero digit after the decimal point.
Examples
As an example, the number 918.082 in normalized form is
$$9.18082 \times 10^{2}.$$
Clearly, any non-zero real number can be normalized.
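The normalization just described is straightforward to carry out programmatically. The following Python sketch (illustrative only) returns the pair $(a, n)$ with $1 \le |a| < 10$ and $x = a \times 10^{n}$:

```python
import math

def normalize(x):
    """Return (a, n) such that x == a * 10**n and 1 <= abs(a) < 10."""
    if x == 0:
        raise ValueError("zero cannot be normalized")
    n = math.floor(math.log10(abs(x)))   # exponent of the leading digit
    a = x / 10**n                        # significand with one digit before the point
    return a, n

print(normalize(918.082))   # (9.18082..., 2), i.e. 9.18082 x 10^2
```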
Other bases
The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10.
In base b a normalized number will have the form
$$\pm d_{0}.d_{1}d_{2}d_{3}\cdots \times b^{n},$$
where again $d_{0} \neq 0$, and the digits, $d_{0}, d_{1}, d_{2}, d_{3}, \ldots$, are integers between $0$ and $b - 1$.
In many computer systems, binary floating-point numbers are represented internally using this normalized form for their representations; for details, see normal number (computing). Although the point is described as floating, for a normalized floating-point number, its position is fixed, the movement being reflected in the different values of the power.
See also
Significand
Normal number (computing)
|
https://en.wikipedia.org/wiki/Hypermastigia
|
Hypermastigia (hypermastigids), within microbiology, is the name used for a group of flagellate parasites which were placed under the Excavata. They are now treated as belonging to one of the groups Tritrichomonadea, Hypotrichomonadea, or Trichomonadea within the Parabasalia.
|
https://en.wikipedia.org/wiki/Self-sovereign%20identity
|
Self-sovereign identity (SSI) is an approach to digital identity that gives individuals control over the information they use to prove who they are to websites, services, and applications across the web. Without SSI, individuals with persistent accounts (identities) across the internet must rely on a number of large identity providers, such as Facebook (Facebook Connect) and Google (Google Sign-In), that have control of the information associated with their identity. If a user chooses not to use a large identity provider, then they have to create new accounts with each service provider, which fragments their web experiences. Self-sovereign identity offers a way to avoid these two undesirable alternatives. In a self-sovereign identity system, the user accesses services in a streamlined and secure manner, while maintaining control over the information associated with their identity.
Background
The TCP/IP protocol provides identifiers for machines, but not for the people and organisations operating the machines. This makes the network-level identifiers on the internet hard to trust and rely on for information and communication for a number of reasons: 1) hackers can easily change a computer’s hardware or IP address, 2) services provide identifiers for the user, not the network. The absence of reliable identifiers is one of the primary sources of cybercrime, fraud, and threats to privacy on the internet.
With the advent of blockchain technology, a new model for decentralized identity emerged in 2015. The FIDO Alliance proposed an identity model that was no longer account-based, but identified people through direct, private, peer-to-peer connections secured by public/private key cryptography. Self-Sovereign Identity (SSI) summarises all components of the decentralized identity model: digital wallets, digital credentials, and digital connections.
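The key-based, account-free trust relationship described above can be sketched in a few lines of Python. This is an illustrative toy (it assumes the third-party cryptography package and uses a made-up did:example identifier), not an implementation of any particular SSI standard:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g. a university) holds its own key pair; no central identity provider is involved.
issuer_key = Ed25519PrivateKey.generate()

# A credential about the holder, serialized deterministically before signing.
credential = {"holder": "did:example:alice", "claim": {"degree": "BSc"}}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# The holder later presents (credential, signature); the verifier only needs the
# issuer's public key, obtained over a peer-to-peer connection or from a registry.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, payload)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")
```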
Technical aspects
SSI addresses the difficulty of establishing trust in an interaction. In order to be trusted, one party
|
https://en.wikipedia.org/wiki/MSX%20Video%20access%20method
|
The ColecoVision, SG-1000, CreatiVision, and first-generation MSX computers use the TMS9918A Video Display Processor (VDP), which has its own 16 KiB of video memory that was not shared with main memory. Compared to the unified system and video memory used by other 8-bit computers of the time, such as the Apple II, ZX Spectrum, and Commodore 64, separate memory has the advantage of freeing up the Z80 processor's 64 KiB address space for main RAM, and the VDP does not need to steal CPU cycles to access video memory. The disadvantage is that the program has to use the CPU's dedicated I/O instructions to command the VDP to manipulate the contents of the video RAM. This not only slows down video access but also makes the porting of games from unified-memory platforms more difficult. Attempts at porting ZX Spectrum games (in the UK, the system most similar to the MSX) were often thwarted by this difference. Also, programmers had to learn to optimally use the more advanced capabilities of the VDP.
The TMS9918A's method of accessing the video RAM is slower than the direct access used in unified-memory computers, because accessing video memory involved first outputting the low byte and then the high byte of the (14-bit) video memory address to I/O port $99, then one or more bytes of 8-bit data to port $98. After each write, the memory pointer advances to the next address, so consecutive addresses can be written to with repeated OUT instructions to $98. The Z80 has fast OTIR/OTDR block instructions that could be used instead of LDIR/LDDR; even so, the permissible VRAM access rate was restricted except during vertical blanking.
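As a rough illustration of that write sequence, here is a short Python sketch standing in for the Z80 I/O instructions; the out helper is a hypothetical stand-in for the OUT instruction, and the detail that the high address byte carries bit 6 set to select write mode is an assumption based on the usual TMS9918 programming model rather than something stated above.

```python
# Sketch of the VDP write sequence: address setup on port 0x99 (low byte, then
# high byte with the write-mode bit), followed by data bytes on port 0x98.
def out(port: int, value: int) -> None:
    print(f"OUT (${port:02X}), ${value:02X}")   # stand-in for the Z80 OUT instruction

def vdp_write(address: int, data: bytes) -> None:
    out(0x99, address & 0xFF)                    # low 8 bits of the 14-bit address
    out(0x99, ((address >> 8) & 0x3F) | 0x40)    # high 6 bits, bit 6 = write mode (assumed)
    for byte in data:                            # the VRAM pointer auto-increments
        out(0x98, byte)

vdp_write(0x1800, bytes([0x20, 0x21, 0x22]))     # three consecutive VRAM bytes
```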
However, because of the screen layout, (which was top-down for each character of 8 lines then advancing to the next character) this was difficult to use for programmers who tried to convert existing software originally written for a system that had another arrangement of the screen layout. So when trying to use the TMS9918A high resolution mode video memory in the con
|
https://en.wikipedia.org/wiki/Arithmetic%20derivative
|
In number theory, the Lagarias arithmetic derivative or number derivative is a function defined for integers, based on prime factorization, by analogy with the product rule for the derivative of a function that is used in mathematical analysis.
There are many versions of "arithmetic derivatives", including the one discussed in this article (the Lagarias arithmetic derivative), such as Ihara's arithmetic derivative and Buium's arithmetic derivatives.
Early history
The arithmetic derivative was introduced by Spanish mathematician José Mingot Shelly in 1911. The arithmetic derivative also appeared in the 1950 Putnam Competition.
Definition
For natural numbers n, the arithmetic derivative n′ is defined as follows:
p′ = 1 for any prime p.
(ab)′ = a′b + ab′ for any natural numbers a and b (Leibniz rule).
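A short sketch of this definition in code, assuming the sympy library is available for the factorization and using the equivalent closed form n′ = n·(e1/p1 + … + ek/pk) for n = p1^e1 · … · pk^ek, which follows from the two rules above:

```python
# Arithmetic derivative via prime factorization: n' is the sum over the prime
# powers p**e dividing n of n*e/p (each term is an integer, since p divides n).
from sympy import factorint

def arithmetic_derivative(n: int) -> int:
    if n in (0, 1):
        return 0
    return sum(n * e // p for p, e in factorint(n).items())

print(arithmetic_derivative(7))    # 1, since 7 is prime
print(arithmetic_derivative(12))   # 16: 12 = 2^2 * 3, so 12' = 12*(2/2 + 1/3)
print(arithmetic_derivative(60))   # 92: 60' = 60*(2/2 + 1/3 + 1/5) = 60 + 20 + 12
```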
Extensions beyond natural numbers
Edward J. Barbeau extended the domain to all integers by showing that the choice (−n)′ = −(n′) uniquely extends the derivative to the integers and is consistent with the product formula. Barbeau also further extended it to the rational numbers, showing that the familiar quotient rule gives a well-defined derivative on the rationals:
(a/b)′ = (a′b − ab′) / b^2.
Victor Ufnarovski and Bo Åhlander expanded it to those irrationals that can be written as a product of primes raised to arbitrary rational powers, allowing the derivative of such expressions to be computed.
The arithmetic derivative can also be extended to any unique factorization domain (UFD), such as the Gaussian integers and the Eisenstein integers, and its associated field of fractions. If the UFD is a polynomial ring, then the arithmetic derivative is the same as the derivation over said polynomial ring. For example, the regular derivative is the arithmetic derivative for the rings of univariate real and complex polynomial and rational functions, which can be proven using the fundamental theorem of algebra.
The arithmetic derivative has also been extended to the ring of integers modulo n.
Elementary properties
The Leibniz rule implies that 0′ = 0 (take a = b = 0) and 1′ = 0 (take a = b = 1).
The power rule i
|
https://en.wikipedia.org/wiki/Propagation%20constant
|
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the dimensionless change in magnitude or phase per unit length. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the base 10 used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor. The phase of the sinusoid varies with distance, which results in the propagation constant being a complex number, the imaginary part being caused by the phase change.
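As a minimal numerical sketch, with the attenuation and phase constants chosen as assumed example values, the combined effect on a phasor over a distance x can be computed like this:

```python
# Apply a complex propagation constant gamma = alpha + j*beta to a phasor:
# the magnitude shrinks by exp(-alpha*x) and the phase advances by beta*x.
import cmath

alpha = 0.02                 # attenuation constant in nepers per metre (assumed)
beta = 3.0                   # phase constant in radians per metre (assumed)
gamma = complex(alpha, beta)

V0 = 1.0 + 0.0j              # source-end voltage phasor
x = 10.0                     # distance travelled in metres

V = V0 * cmath.exp(-gamma * x)
print(abs(V))                # about 0.819, i.e. exp(-0.2) of the original magnitude
print(cmath.phase(V))        # phase shifted by -beta*x, wrapped into (-pi, pi]
```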
Alternative names
The term "propagation constant" is somewhat of a misnomer as it usually varies strongly with ω. It is probably the most widely used term but there are a large variety of alternative names used by various authors for this quantity. These include transmission parameter, transmission function, propagation parameter, propagation coefficient and transmission constant. If the plural is used, it suggests that α and β are being referenced separately but collectively as in transmission parameters, propagation parameters, etc. In transmission line theory, α and β are counted among the "secondary coefficients", the term secondary being used to contrast to the primary line coefficients. The primary coefficients are the physical properties of the line, namely R,C,L and G, from which the secondary coefficients may be derived using the telegrapher's equation. Note that in the field of transmission lines, the term transmission coefficient has a different meanin
|
https://en.wikipedia.org/wiki/Gutmann%20method
|
The Gutmann method is an algorithm for securely erasing the contents of computer hard disk drives, such as files. Devised by Peter Gutmann and Colin Plumb and presented in the paper Secure Deletion of Data from Magnetic and Solid-State Memory in July 1996, it involved writing a series of 35 patterns over the region to be erased.
The selection of patterns assumes that the user does not know the encoding mechanism used by the drive, so it includes patterns designed specifically for three types of drives. A user who knows which type of encoding the drive uses can choose only those patterns intended for their drive. A drive with a different encoding mechanism would need different patterns.
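For illustration only, the following Python sketch shows the general shape of a multi-pass overwrite of a single file. It writes random data rather than the actual 35 Gutmann patterns, the path and pass count are invented, and, as discussed below, overwriting is of limited value on flash-based media with wear levelling.

```python
# Simplified multi-pass overwrite of one file (NOT the real Gutmann patterns).
import os
import secrets

def overwrite_file(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # one full pass of random data
            f.flush()
            os.fsync(f.fileno())                # ask the OS to push it to the device
    os.remove(path)

# overwrite_file("example.tmp")  # hypothetical file name
```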
Most of the patterns in the Gutmann method were designed for older MFM/RLL encoded disks. Gutmann himself has noted that more modern drives no longer use these older encoding techniques, making parts of the method irrelevant. He said "In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques".
Since about 2001, some ATA IDE and SATA hard drive manufacturer designs include support for the ATA Secure Erase standard, obviating the need to apply the Gutmann method when erasing an entire drive. The Gutmann method does not apply to USB sticks: a 2011 study reports that 71.7% of the data remained available. On solid-state drives it resulted in 0.8–4.3% recovery.
Background
The delete function in most operating systems simply marks the space occupied by the file as reusable (removes the pointer to the file) without immediately removing any of its contents. At this point the file can be fairly easily recovered by numerous recovery applications. However, once the space is overwritten with other data, there is no known way to use software to recover it. It cannot be done with software alone since the storag
|
https://en.wikipedia.org/wiki/Prophenoloxidase
|
Prophenoloxidase (proPO) is a modified form of the complement response found in some invertebrates, including insects, crabs and worms. It is a copper-containing metalloprotein.
A major innate defense system in invertebrates is the melanization of pathogens and damaged tissues. This important process is controlled by the enzyme phenoloxidase (PO). The conversion of prophenoloxidase to the active form of the enzyme can be brought about by minuscule amounts of molecules such as lipopolysaccharide, peptidoglycan and beta-1,3-glucans from microorganisms.
However, its role in innate immune function is still debated, especially in model invertebrate animals. The proPO-homologous protein in mammals also does not show any immune activity. Thus, it is difficult to draw firm conclusions about its function in immunity.
|
https://en.wikipedia.org/wiki/Local%20insertion
|
In broadcasting, local insertion (known in the United Kingdom as an opt-out) is the act or capability of a broadcast television station, radio station or cable system to insert or replace part of a network feed with content unique to the local station or system. Most often this is a station identification (required by the broadcasting authority such as the U.S. Federal Communications Commission), but is also commonly used for television or radio advertisements, or a weather or traffic report. A digital on-screen graphic ("dog" or "bug"), commonly a translucent watermark, may also be keyed (superimposed) with a television station ID over the network feed using a character generator using genlock. In cases where individual broadcast stations carry programs separate from those shown on the main network, this is known as regional variation (in the United Kingdom) or an opt-out (in Canada and the United States).
Automated local insertion used to be triggered with in-band signaling, such as DTMF tones or sub-audible sounds (such as 25 Hz), but is now done with out-of-band signaling, such as analog signal subcarriers via communications satellite, or now more commonly via digital signals; broadcast automation equipment can then handle these automatically. In an emergency, such as severe weather, local insertion may also occur instantly through command from another network or other source (such as the Emergency Alert System or First Warning). In this case, the most urgent warning messages may interrupt without delay, while others may be worked into a normal break in programming within 15 minutes of their initial issuance.
Within individual programs
In the United States, insertion can easily be heard every evening on the nationally syndicated radio show Delilah, where the host does a pre-recorded station-specific voiceover played over a music bed from the network. When host Delilah Rene says "this is Delilah", her voice (often in a slightly different tone or mood than what
|
https://en.wikipedia.org/wiki/Mintmaster%20mark
|
Mintmaster marks (German: Münzmeisterzeichen, abbreviation Mmz.) are often the initials of the mintmaster of a mint or small symbols (cross, star, coat of arms, heraldic device, etc.), for example at the size of the letters of a coin inscription, used to denote the coins made under his direction. With his mark, the mintmaster assumed responsibility for ensuring that the coins issued by his mint were in accordance with the regulations. Mintmaster marks were used as early as the time of bracteate coinage in the Holy Roman Empire, but these can only rarely be deciphered. All mintmaster marks since the beginning of the minting of Thalers have been identified.
The picture on the right shows the mintmaster's mark, an acorn on a stem, of the Dresden mintmaster, Constantin Rothe, on a Reichstaler issued under Duke John George II of Saxony from the year 1662.
Variants
Sometimes there are pictographs and letters on a coin. In this case, the pictorial symbol is usually found in the circumscription of the coin and the letters are divided in the field on both sides of the coin's crest. Mintmasters often used their coats of arms as mintmaster symbols. For example in the Electorate of Saxony:
Constantin Rothe, mintmaster from 1640 to 1678 in Dresden, put the letters C-R on his coins and also the acorn on a stem from his family coat of arms.
Andreas Alnpeck, the last mintmaster of the Freiberg Mint, used a six-pointed star from 1546 to 1555 and from 1554 to 1555 also the eagle's head from his coat of arms as the mintmaster's mark.
Ernst Peter Hecht, mintmaster 1693–1714 in Leipzig, used the letters E P H as the mintmaster's mark and also the pike from his coat of arms.
In Brandenburg:
Paul Mühlrad, mintmaster 1538-1542 in Berlin put a mill wheel on his coinage.
In Mecklenburg:
Johann Hund (1512–1526) used a dog as his canting arms and subsidiary image in the corners of the cross on the Rostock schillings.
In Florence:
Alongside the marks of issue, mintmasters also set their coats of a
|
https://en.wikipedia.org/wiki/Comparative%20Tracking%20Index
|
The Comparative Tracking Index (CTI) is used to measure the electrical breakdown (tracking) properties of an insulating material. Tracking is an electrical breakdown on the surface of an insulating material wherein an initial exposure to electrical arcing heat carbonizes the material. The carbonized areas are more conductive than the pristine insulator, increasing current flow, resulting in increased heat generation, and eventually the insulation becomes completely conductive.
Details
A large voltage difference gradually creates a conductive leakage path across the surface of the material by forming a carbonized track. The testing method is specified in IEC standard 60112 and ASTM D3638.
To measure tracking, 50 drops of 0.1% ammonium chloride solution are dropped on the material, and the voltage measured for a 3 mm thickness is considered representative of the material's performance. The term PTI (Proof Tracking Index) is also used: it is the voltage at which all five tested samples pass the test with no failures.
Performance Level Categories (PLC) were introduced to avoid excessive implied precision and bias.
The CTI value is used for electrical safety assessment of electrical apparatus, as for instance carried out by testing and certification laboratories. The minimum required creepage distances over an insulating material between electrically conducting parts in apparatus, especially between parts with a high voltage and parts that can be touched by human users, is dependent on the insulator's CTI value. Also for internal distances in an apparatus by maintaining CTI based distances, the risk of fire is reduced.
The creepage distance requirement depends on the CTI. Materials whose CTI is unknown are classified in group IIIb. There is no CTI requirement for glass, ceramic, and other inorganic materials, which do not break down on the surface.
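As a sketch of how a CTI value feeds into such a classification, the hypothetical helper below assigns an insulation material group from a CTI value; the numeric boundaries are the commonly quoted IEC 60664-1 ones and should be treated as an assumption here rather than something stated above.

```python
# Map a Comparative Tracking Index value to a material group (I .. IIIb).
# Boundaries follow the commonly quoted IEC 60664-1 grouping (assumed).
def material_group(cti: float | None) -> str:
    if cti is None:
        return "IIIb"        # unknown CTI is treated as the worst group
    if cti >= 600:
        return "I"
    if cti >= 400:
        return "II"
    if cti >= 175:
        return "IIIa"
    return "IIIb"

print(material_group(625))   # 'I'  -> shortest permitted creepage distances
print(material_group(None))  # 'IIIb'
```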
The better the insulation, the higher the CTI (positive relationship). In terms of clearance, a higher CTI value
|
https://en.wikipedia.org/wiki/ITU%20T.61
|
T.61 is an ITU-T Recommendation for a Teletex character set. T.61 predated Unicode, and was the primary character set in ASN.1 used in early versions of X.500 and X.509 for encoding strings containing characters used in Western European languages. It is also used by older versions of LDAP. While T.61 continues to be supported in modern versions of X.500 and X.509, it has been deprecated in favor of Unicode. It is also called Code page 1036, CP1036, or IBM 01036.
While ASN.1 does see wide use and the T.61 character set is used on some standards using ASN.1 (for example in RSA Security's PKCS #9), the 1988-11 version of the T.61 standard itself was superseded by a never-published 1993-03 version; the 1993-03 version was withdrawn by the ITU-T. The 1988-11 version is still available.
T.61 was one of the encodings supported by Mozilla software in email and HTML until 2014, when the supported encodings were limited to those in the WHATWG Encoding Standard (although T.61 remained supported for LDAP).
Code page layout
The following table maps the T.61 characters to their equivalent Unicode code points.
See ITU T.51 for a description of how the accents at 0xC0..0xCF worked. They prefix the letters, as opposed to the postfix combining characters used by Unicode.
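A minimal sketch of that reordering is shown below; the three accent byte values in the mapping are taken to follow the T.51/ISO 6937 convention and are illustrative assumptions, not a complete or authoritative T.61 decoder.

```python
# Turn T.61-style prefix accents into Unicode, which places combining marks
# after the base letter. Only a few accent bytes are mapped, for illustration.
import unicodedata

PREFIX_ACCENTS = {
    0xC1: "\u0300",  # combining grave accent (assumed byte value)
    0xC2: "\u0301",  # combining acute accent (assumed byte value)
    0xC8: "\u0308",  # combining diaeresis    (assumed byte value)
}

def decode_t61_accents(data: bytes) -> str:
    out, pending = [], None
    for byte in data:
        if byte in PREFIX_ACCENTS:
            pending = PREFIX_ACCENTS[byte]   # remember the accent ...
        else:
            out.append(chr(byte))            # ... emit the base letter first,
            if pending:
                out.append(pending)          # then the combining mark
                pending = None
    return unicodedata.normalize("NFC", "".join(out))

print(decode_t61_accents(b"caf\xc2e"))  # 'café'
```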
See also
ITU T.51
Footnotes
|
https://en.wikipedia.org/wiki/Partial%20fraction%20decomposition
|
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.
The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives, Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz.
In symbols, the partial fraction decomposition of a rational fraction of the form f(x)/g(x), where f and g are polynomials, is its expression as
f(x)/g(x) = p(x) + Σj fj(x)/gj(x)^nj
where
p(x) is a polynomial, and, for each j,
the denominator gj(x)^nj is a power of an irreducible polynomial gj(x) (that is, one not factorable into polynomials of positive degrees), and
the numerator fj(x) is a polynomial of a smaller degree than the degree of this irreducible polynomial.
When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers.
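For a quick worked example, assuming the sympy library is available, its apart() function computes such a decomposition (the printed ordering of terms may differ between versions):

```python
# Partial fraction decomposition of (x^2 + 1)/(x^2 - 1) = 1 + 1/(x-1) - 1/(x+1).
from sympy import symbols, apart, simplify

x = symbols("x")
expr = (x**2 + 1) / (x**2 - 1)

decomposed = apart(expr, x)
print(decomposed)                     # 1 + 1/(x - 1) - 1/(x + 1)
print(simplify(decomposed - expr))    # 0, confirming the two forms agree
```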
Basic principles
Let
F = f/g
be a rational fraction, where f and g are univariate polynomials in the indeterminate x over a field. The existence of the partial fraction decomposition can be proved by applying inductively the following reduction steps.
Polynomial part
There exist two polynomials E and f1 such that
f/g = E + f1/g
and
deg f1 < deg g,
where deg p denotes the degree of the polynomial p.
This results immediately from the Euclidean division of f by g, which provides E and f1 with f = Eg + f1 and deg f1 < deg g.
|
https://en.wikipedia.org/wiki/Law%20of%20Demeter
|
The Law of Demeter (LoD) or principle of least knowledge is a design guideline for developing software, particularly object-oriented programs. In its general form, the LoD is a specific case of loose coupling. The guideline was proposed by Ian Holland at Northeastern University towards the end of 1987, and the following three recommendations serve as a succinct summary:
Each unit should have only limited knowledge about other units: only units "closely" related to the current unit.
Each unit should only talk to its friends; don't talk to strangers.
Only talk to your immediate friends.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding". It may be viewed as a corollary to the principle of least privilege, which dictates that a module possess only the information and resources necessary for its legitimate purpose.
It is so named for its origin in the Demeter Project, an adaptive programming and aspect-oriented programming effort. The project was named in honor of Demeter, “distribution-mother” and the Greek goddess of agriculture, to signify a bottom-up philosophy of programming which is also embodied in the law itself.
History
The law dates back to 1987 when it was first proposed by Ian Holland, who was working on the Demeter Project. The Demeter Project was the birthplace of many AOP (aspect-oriented programming) principles.
A quote in one of the remainders of the project seems to clarify the origins of the name:
In object-oriented programming
An object a can request a service (call a method) of an object instance b, but object a should not "reach through" object b to access yet another object, c, to request its services. Doing so would mean that object a implicitly requires greater knowledge of object b's internal structure.
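A minimal sketch of this guideline, with invented class names, might look as follows; the first call chain reaches through b into c, while the second talks only to its immediate friend.

```python
# Law of Demeter illustration: Driver (a) uses Car (b), which owns Engine (c).
class Engine:                            # plays the role of object c
    def start(self) -> None:
        print("engine started")

class Car:                               # plays the role of object b
    def __init__(self) -> None:
        self._engine = Engine()

    def start(self) -> None:             # exposes what callers actually need
        self._engine.start()

class Driver:                            # plays the role of object a
    def drive_violating(self, car: Car) -> None:
        car._engine.start()              # reaches through b to c: avoid this

    def drive(self, car: Car) -> None:
        car.start()                      # talks only to its immediate friend

Driver().drive(Car())
```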
Instead, b's interface should be modified if necessary so it
|
https://en.wikipedia.org/wiki/SNMPTT
|
SNMPTT is an SNMP trap handler written in Perl for use with the NET-SNMP/UCD-SNMP snmptrapd program. Received traps are translated into user friendly messages using variable substitution. Output can be to STDOUT, text log file, syslog, NT Event Log, MySQL (Linux/Windows), PostgreSQL, or an ODBC database. User defined programs can also be executed.
Distribution
SNMPTT can be downloaded from the SourceForge project page or the project web page.
Books
Information on SNMPTT is available in the following books:
Turnbull, James (2006). Pro Nagios 2.0. San Francisco: Apress.
Schubert, Max et al. (2008). Nagios 3 Enterprise Network Monitoring. Syngress.
Barth, Wolfgang (2008). Nagios: System and Network Monitoring, 2nd edition. No Starch Press.
|
https://en.wikipedia.org/wiki/Fold%20%28higher-order%20function%29
|
In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure's hierarchy, using the function in a systematic way.
Folds are in a sense dual to unfolds, which take a seed value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, versus anamorphism of unfolds).
As structural transformations
Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly called nil ([]), or is constructed by prefixing an element in front of another list, creating what is called a cons node ( Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil))))) ), resulting from application of a cons function (written down as a colon (:) in Haskell). One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:
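As a rough sketch of this replacement view (in Python rather than Haskell, with nested tuples standing in for cons cells), a right fold replaces nil with an initial value and every cons with the combining function, while a left fold threads an accumulator through the list front to back:

```python
# A list is either None (nil) or a pair (head, rest) acting as a cons cell.
from functools import reduce

def foldr(f, acc, lst):
    """Right fold: replace nil with acc and every cons with f."""
    if lst is None:
        return acc
    head, rest = lst
    return f(head, foldr(f, acc, rest))

def foldl(f, acc, lst):
    """Left fold: thread the accumulator through the list front to back."""
    while lst is not None:
        head, lst = lst
        acc = f(acc, head)
    return acc

xs = (1, (2, (3, None)))                              # the list [1, 2, 3]
print(foldr(lambda x, acc: x - acc, 0, xs))           # 1 - (2 - (3 - 0)) = 2
print(foldl(lambda acc, x: acc - x, 0, xs))           # ((0 - 1) - 2) - 3 = -6
print(reduce(lambda acc, x: acc - x, [1, 2, 3], 0))   # built-in left fold: -6
```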
There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:
These pictures illustrate right and left fold of a list visually. They also highlight
|
https://en.wikipedia.org/wiki/Noise%20control
|
Noise control or noise mitigation is a set of strategies to reduce noise pollution or to reduce the impact of that noise, whether outdoors or indoors.
Overview
The main areas of noise mitigation or abatement are: transportation noise control, architectural design, urban planning through zoning codes, and occupational noise control. Roadway noise and aircraft noise are the most pervasive sources of environmental noise. Social activities may generate noise levels that consistently affect the health of populations residing in or occupying areas, both indoors and outdoors, near entertainment venues featuring amplified sound and music; such venues present significant challenges for effective noise mitigation strategies.
Multiple techniques have been developed to address interior sound levels, many of which are encouraged by local building codes. In the best case of project designs, planners are encouraged to work with design engineers to examine trade-offs of roadway design and architectural design. These techniques include design of exterior walls, party walls, and floor and ceiling assemblies; moreover, there are a host of specialized means for damping reverberation from special-purpose rooms such as auditoria, concert halls, entertainment and social venues, dining areas, audio recording rooms, and meeting rooms.
Many of these techniques rely upon material science applications of constructing sound baffles or using sound-absorbing liners for interior spaces. Industrial noise control is a subset of interior architectural control of noise, with emphasis on specific methods of sound isolation from industrial machinery and for protection of workers at their task stations.
Sound masking is the active addition of noise to reduce the annoyance of certain sounds, the opposite of soundproofing.
Standards, recommendations, and guidelines
Organizations each have their own standards, recommendations/guidelines, and directives for what levels of noise workers are permitted to be ar
|
https://en.wikipedia.org/wiki/Generic%20flatness
|
In algebraic geometry and commutative algebra, the theorems of generic flatness and generic freeness state that under certain hypotheses, a sheaf of modules on a scheme is flat or free. They are due to Alexander Grothendieck.
Generic flatness states that if Y is an integral locally noetherian scheme, u : X → Y is a finite type morphism of schemes, and F is a coherent OX-module, then there is a non-empty open subset U of Y such that the restriction of F to u−1(U) is flat over U.
Because Y is integral, U is a dense open subset of Y. This can be applied to deduce a variant of generic flatness which is true when the base is not integral. Suppose that S is a noetherian scheme, u : X → S is a finite type morphism, and F is a coherent OX-module. Then there exists a partition of S into locally closed subsets S1, ..., Sn with the following property: Give each Si its reduced scheme structure, denote by Xi the fiber product X ×S Si, and denote by Fi the restriction of F to Xi; then each Fi is flat.
Generic freeness
Generic flatness is a consequence of the generic freeness lemma. Generic freeness states that if A is a noetherian integral domain, B is a finite type A-algebra, and M is a finite type B-module, then there exists a non-zero element f of A such that Mf is a free Af-module. Generic freeness can be extended to the graded situation: If B is graded by the natural numbers, A acts in degree zero, and M is a graded B-module, then f may be chosen such that each graded component of Mf is free.
Generic freeness is proved using Grothendieck's technique of dévissage. Another version of generic freeness can be proved using Noether's normalization lemma.
|
https://en.wikipedia.org/wiki/2000%20%28number%29
|
2000 (two thousand) is a natural number following 1999 and preceding 2001.
It is:
the highest number expressible using only two unmodified characters in Roman numerals (MM)
an Achilles number
smallest four digit eban number
Selected numbers in the range 2001–2999
2001 to 2099
2001 – sphenic number
2002 – palindromic number
2003 – Sophie Germain prime and the smallest prime number in the 2000s
2004 – Area of the 24th crystagon
2005 – A vertically symmetric number
2006 – number of subsets of {1,2,3,4,5,6,7,8,9,10,11} with relatively prime elements
2007 – 2^2007 + 2007^2 is prime
2008 – number of 4 × 4 matrices with nonnegative integer entries and row and column sums equal to 3
2009 = 7^4 − 7^3 − 7^2
2010 – number of compositions of 12 into relatively prime parts
2011 – sexy prime with 2017, sum of eleven consecutive primes: 2011 = 157 + 163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 199 + 211
2012 – The number 8 × 10^2012 − 1 is a prime number
2013 – number of widely totally strongly normal compositions of 17
2014 – 5 × 2^2014 − 1 is prime
2015 – Lucas–Carmichael number
2016 – triangular number, number of 5-cubes in a 9-cube, Erdős–Nicolas number, 2^11 − 2^5.
2017 – Mertens function zero, sexy prime with 2011
2018 – Number of partitions of 60 into prime parts
2019 – smallest number that can be represented as the sum of 3 prime squares in 6 different ways: 2019 = 7^2 + 11^2 + 43^2 = 7^2 + 17^2 + 41^2 = 13^2 + 13^2 + 41^2 = 11^2 + 23^2 + 37^2 = 17^2 + 19^2 + 37^2 = 23^2 + 23^2 + 31^2.
2020 – sum of the totient function for the first 81 integers
2021 = 43 × 47, a product of consecutive prime numbers; the next such product is 2491
2022 – non-isomorphic colorings of a toroidal 3 × 3 grid using exactly three colors under translational symmetry, beginning of a run of 4 consecutive Niven numbers
2023 = 7 × 17 × 17 – multiple of 7 with digit sum equal to 7; the sum of the squares of its digits equals 17
2024 – tetrahedral number
2025 = 45^2, sum of the cubes of the first nine integers, centered octagonal number
2027 – s
|
https://en.wikipedia.org/wiki/Conpoy
|
Conpoy or dried scallop is a type of Cantonese dried seafood product that is made from the adductor muscle of scallops. The smell of conpoy is marine, pungent, and reminiscent of certain salt-cured meats. Its taste is rich in umami due to its high content of various free amino acids, such as glycine, alanine, and glutamic acid. It is also rich in nucleic acids such as inosinic acid, amino acid byproducts such as taurine, and minerals, such as calcium and zinc.
Conpoy is produced by cooking raw scallops and then drying them.
Terminology
Conpoy is a loanword from the Cantonese pronunciation of 乾貝 (), which literally means "dried shell(fish)".
Usage
In Hong Kong, conpoy from two types of scallops is common. Conpoy made from Atrina pectinata or (江珧) from mainland China is small and milder in taste. Patinopecten yessoensis or (扇貝), a sea scallop imported from Japan (hotategai, 帆立貝 in Japanese), produces a conpoy that is stronger and richer in taste.
As with many dried foods, conpoy was originally made as a way to preserve seafood in times of excess. In more recent times its use in cuisine has been elevated to gourmet status. Conpoy has a strong and distinctive flavor that can be easily identified when used in rice congee, stir fries, stews, and sauces.
XO sauce, a seasoning used for frying vegetables or seafoods in Cantonese cuisine, contains significant quantities of conpoy. For example, the Lee Kum Kee formulation lists conpoy as the third ingredient on its label.
See also
|
https://en.wikipedia.org/wiki/Cora%20Slocomb%20di%20Brazza
|
Cora Slocomb di Brazza (January 7, 1862 – August 24, 1944) was an American heiress and Italian activist, businesswoman, and philanthropist. Born into a wealthy family in New Orleans, she relocated to Connecticut after her father's death and was raised in Quaker traditions. Privately tutored, she studied in France, Germany and the Isle of Wight, taking painting lessons with Frank Duveneck. In 1887, she went to Italy and married Detalmo Savorgnan di Brazza, brother of explorer Pierre Savorgnan de Brazza. They settled on his family estate in Moruzzo in the Province of Udine, wintering in Rome. She created a lace-making school and eventually opened seven Brazza Lace Cooperative Schools. Besides promoting basic education, the schools taught bobbin lace-making and marketed the wares to help women rise above poverty. Speaking four languages, Slocomb di Brazza printed pamphlets in various languages to attract interest from abroad in their products. She displayed the goods of the Lace Cooperative Schools at trade shows and world fairs. She also was successful in a drive to reduce US import duties on handcrafted items in 1897, arguing that the tariffs would drive up immigration.
Involved in the peace movement from 1889, Slocomb di Brazza created the peace flag and was the founder of the International Council of Women's Committee on Social Peace and International Arbitration in 1897. The committee worked to create agreements for nations to solve conflicts diplomatically and avoid war. Aligned with her peace work, she undertook numerous humanitarian drives to assist immigrant communities, reduce strife caused by cultural differences, and improve Italian–American relations. Slocomb di Brazza campaigned against the death penalty, fighting for a pardon and then assisting accused murderer, Maria Barbella, in gaining a second trial, at which she was acquitted. She attended the 1903 and 1904 Congresses of the International Council of Women, representing the Consiglio Nazionale d
|
https://en.wikipedia.org/wiki/Cyanazine
|
Cyanazine is a herbicide that belongs to the group of triazines. Cyanazine inhibits photosynthesis and is therefore used as a herbicide.
History
Cyanazine is used as a herbicide to control annual grasses and broadleaf weeds. It belongs to the group of triazine herbicides, as does atrazine. These pesticides work by inhibiting photosynthesis. The majority of cyanazine is used on corn; in 1985 this accounted for 96% of its use. The Environmental Protection Agency (EPA) published a profile of the health and environmental effects of cyanazine in 1984.
In 1971 cyanazine was brought onto the market under the names ‘Bladex’ and ‘Fortol’ by Shell. Cyanazine and the other triazines have been among the most heavily used herbicides in the Midwest and the United States of America as a whole.
In 2002 the European Union pesticides database withdrew approval for the use of cyanazine as a herbicide. It has been classified as a teratogen on the Hazardous Substance List since 1986.
Structure and reactivity
Cyanazine is the common name for 2-chloro-4-(1-cyano-1-methylethyl-amino)-6-ethylamine-1,3,5-triazine. The molecular formula of this compound is C9H13ClN6, and its molecular weight is 240.695 g/mol. Cyanazine is a white or colourless crystalline solid. The melting point is around 166.5–167.0 °C. The logP is 2.22.
Cyanazine is not very reactive in neutral and slightly acidic/basic media, it is hydrolysed by strong acids and bases. It is stable to heat, light and to hydrolysis. It is also stable to UV irradiation under practical conditions. Cyanazine can decompose on heating. This produces corrosive fumes of hydrogen chloride, nitrogen oxides and cyanides.
Cyanazine has one of the lowest rate constants for reaction with ozone among various pesticides. Among four different herbicide groups, cyanazine degrades the fastest in soil.
Synthesis
Cyanazine is a chloro-1,3,5-triazine that is substituted at positions 6 and 4 by an ethyl amino and an amino group respectively. It can be prepared by react
|
https://en.wikipedia.org/wiki/Backplane
|
A backplane (or "backplane system") is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications.
A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing.
Usage
Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards.
Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time that a card is added or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". Serial backplane technology uses a low-voltage differential signaling transmission method to send information.
In addition, there are bus expansion cables which will extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two.
Active versus passive backplanes
Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style where all the connectors
|
https://en.wikipedia.org/wiki/Quantum%20spin%20liquid
|
In condensed matter physics, a quantum spin liquid is a phase of matter that can be formed by interacting quantum spins in certain magnetic materials. Quantum spin liquids (QSL) are generally characterized by their long-range quantum entanglement, fractionalized excitations, and absence of ordinary magnetic order.
The quantum spin liquid state was first proposed by physicist Phil Anderson in 1973 as the ground state for a system of spins on a triangular lattice that interact antiferromagnetically with their nearest neighbors, i.e. neighboring spins seek to be aligned in opposite directions. Quantum spin liquids generated further interest when in 1987 Anderson proposed a theory that described high temperature superconductivity in terms of a disordered spin-liquid state.
Basic properties
The simplest kind of magnetic phase is a paramagnet, where each individual spin behaves independently of the rest, just like atoms in an ideal gas. This highly disordered phase is the generic state of magnets at high temperatures, where thermal fluctuations dominate. Upon cooling, the spins will often enter a ferromagnet (or antiferromagnet) phase. In this phase, interactions between the spins cause them to align into large-scale patterns, such as domains, stripes, or checkerboards. These long-range patterns are referred to as "magnetic order," and are analogous to the regular crystal structure formed by many solids.
Quantum spin liquids offer a dramatic alternative to this typical behavior. One intuitive description of this state is as a "liquid" of disordered spins, in comparison to a ferromagnetic spin state, much in the way liquid water is in a disordered state compared to crystalline ice. However, unlike other disordered states, a quantum spin liquid state preserves its disorder to very low temperatures. A more modern characterization of quantum spin liquids involves their topological order, long-range quantum entanglement properties, and anyon excitations.
Examples
Severa
|
https://en.wikipedia.org/wiki/ACS%20Synthetic%20Biology
|
ACS Synthetic Biology is a peer-reviewed scientific journal published by the American Chemical Society. It began publishing accepted articles in the Fall of 2011, with the first full monthly issue published in January 2012. It covers all aspects of synthetic biology, including molecular, systems, and synthetic research. The founding editor-in-chief is Christopher Voigt (Massachusetts Institute of Technology).
Types of articles
The journal publishes
Letters: Short reports of original research focused on an individual finding
Articles: Original research presenting findings of immediate, broad interest.
Reviews: Expert perspectives and analyses of recently published research
Technical Notes: Concise communications that focus on the characterization of new or interesting tools and websites
Tutorials: Detailed descriptions of synthetic, computational, and systems methodologies
|
https://en.wikipedia.org/wiki/Camillo%20De%20Lellis
|
Camillo De Lellis (born 11 June 1976) is an Italian mathematician who is active in the fields of calculus of variations, hyperbolic systems of conservation laws, geometric measure theory and fluid dynamics. He is a permanent faculty member in the School of Mathematics at the Institute for Advanced Study. He is also one of the two managing editors of Inventiones Mathematicae.
Biography
Prior to joining the faculty of the Institute for Advanced Study, De Lellis was a professor of mathematics at the University of Zurich from 2004 to 2018. Before this, he was a postdoctoral researcher at ETH Zurich and at the Max Planck Institute for Mathematics in the Sciences. He received his PhD in mathematics from the Scuola Normale Superiore at Pisa, under the guidance of Luigi Ambrosio in 2002.
Scientific activity
De Lellis has given a number of remarkable contributions in different fields related to partial differential equations. In geometric measure theory he has been interested in the study of regularity and singularities of minimising hypersurfaces, pursuing a program aimed at disclosing new aspects of the theory started by Almgren in his "Big regularity paper".
There Almgren proved his famous regularity theorem asserting that the singular set of an m-dimensional mass-minimizing surface has dimension at most m − 2. De Lellis has also worked on various aspects of the theory of hyperbolic systems of conservation laws and of incompressible fluid dynamics. In particular, together with László Székelyhidi Jr., he has introduced the use of convex integration methods and differential inclusions to analyse non-uniqueness issues for weak solutions to the Euler equation.
Recognition
De Lellis has been awarded the Stampacchia Medal in 2009, the Fermat Prize in 2013 and the Caccioppoli Prize in 2014. He has been invited speaker at the International Congress of Mathematicians in 2010 and plenary speaker at the European Congress of Mathematics in 2012. In 2012 he has also been awarded a Eu
|
https://en.wikipedia.org/wiki/History%20of%20robots
|
The history of robots has its origins in the ancient world. During the industrial revolution, humans developed the structural engineering capability to control electricity so that machines could be powered with small motors. In the early 20th century, the notion of a humanoid machine was developed.
The first uses of modern robots were in factories as industrial robots. These industrial robots were fixed machines capable of manufacturing tasks which allowed production with less human work. Digitally programmed industrial robots with artificial intelligence have been built since the 2000s.
Early legends
Concepts of artificial servants and companions date at least as far back as the ancient legends of Cadmus, who is said to have sown dragon teeth that turned into soldiers and Pygmalion whose statue of Galatea came to life. Many ancient mythologies included artificial people, such as the talking mechanical handmaidens (Ancient Greek: (Kourai Khryseai); "Golden Maidens") built by the Greek god Hephaestus (Vulcan to the Romans) out of gold.
The Buddhist scholar Daoxuan (596–667 AD) described humanoid automata crafted from metals that recite sacred texts in a cloister which housed a fabulous clock. The "precious metal-people" wept when Buddha Shakyamuni died. Humanoid automata also feature in the Epic of King Gesar, which concerns a Central Asian cultural hero.
Early Chinese lore on the legendary carpenter Lu Ban and the philosopher Mozi described mechanical imitations of animals and demons. The implications of humanoid automatons were discussed in the Liezi (4th century CE), a compilation of Daoist texts which went on to become a classic. In chapter 5, King Mu of Zhou is on a tour of the West and, upon asking the craftsman Master Yan Shi "What can you do?", the royal court is presented with an artificial man. The automaton was indistinguishable from a human and performed various tricks for the king and his entourage. But the king flew into a rage when apparently the automaton starte
|
https://en.wikipedia.org/wiki/Coorgi%E2%80%93Cox%20alphabet
|
The Coorgi–Cox alphabet is an alphabet developed by the linguist Gregg M. Cox that is used by a number of individuals within Kodagu district of India to write the endangered Dravidian language of Kodava, also known sometimes as Coorgi.
The script uses a combination of 26 consonant letters, eight vowel letters and a diphthong marker. Each letter represents a single sound and there are no capital letters. A computer-based font has been created.
The script was developed out of the request by a group of Kodava individuals to have a distinct script for Kodava Takk, to distinguish the language. Kodava Takk is generally written in the Kannada script, but can also be found written in the Malayalam script, especially along the borders with Kerala. The new script is intended as a unified writing system for all Kodava Takk speakers.
In order to introduce the script, 10,000 CD booklets and 25,000 post cards with various scenes from the region were produced and distributed throughout the Coorg area in March and April 2005. Several books are being planned including a phrase book and dictionary.
|
https://en.wikipedia.org/wiki/Field%20with%20one%20element
|
In mathematics, the field with one element is a suggestive name for an object that should behave similarly to a finite field with a single element, if such a field could exist. This object is denoted F1, or, in a French–English pun, Fun. The name "field with one element" and the notation F1 are only suggestive, as there is no field with one element in classical abstract algebra. Instead, F1 refers to the idea that there should be a way to replace sets and operations, the traditional building blocks for abstract algebra, with other, more flexible objects. Many theories of F1 have been proposed, but it is not clear which, if any, of them give F1 all the desired properties. While there is still no field with a single element in these theories, there is a field-like object whose characteristic is one.
Most proposed theories of F1 replace abstract algebra entirely. Mathematical objects such as vector spaces and polynomial rings can be carried over into these new theories by mimicking their abstract properties. This allows the development of commutative algebra and algebraic geometry on new foundations. One of the defining features of theories of F1 is that these new foundations allow more objects than classical abstract algebra does, one of which behaves like a field of characteristic one.
The possibility of studying the mathematics of F1 was originally suggested in 1956 by Jacques Tits, published in , on the basis of an analogy between symmetries in projective geometry and the combinatorics of simplicial complexes. F1 has been connected to noncommutative geometry and to a possible proof of the Riemann hypothesis.
History
In 1957, Jacques Tits introduced the theory of buildings, which relate algebraic groups to abstract simplicial complexes. One of the assumptions is a non-triviality condition: If the building is an n-dimensional abstract simplicial complex, and if , then every k-simplex of the building must be contained in at least three n-simplices. This is analogo
|
https://en.wikipedia.org/wiki/Spatial%20descriptive%20statistics
|
Spatial descriptive statistics is the intersection of spatial statistics and descriptive statistics; these methods are used for a variety of purposes in geography, particularly in quantitative data analyses involving Geographic Information Systems (GIS).
Types of spatial data
The simplest forms of spatial data are gridded data, in which a scalar quantity is measured for each point in a regular grid of points, and point sets, in which a set of coordinates (e.g. of points in the plane) is observed. An example of gridded data would be a satellite image of forest density that has been digitized on a grid. An example of a point set would be the latitude/longitude coordinates of all elm trees in a particular plot of land. More complicated forms of data include marked point sets and spatial time series.
Measures of spatial central tendency
The coordinate-wise mean of a point set is the centroid, which solves the same variational problem in the plane (or higher-dimensional Euclidean space) that the familiar average solves on the real line — that is, the centroid has the smallest possible average squared distance to all points in the set.
Measures of spatial dispersion
Dispersion captures the degree to which points in a point set are separated from each other. For most applications, spatial dispersion should be quantified in a way that is invariant to rotations and reflections. Several simple measures of spatial dispersion for a point set can be defined using the covariance matrix of the coordinates of the points. The trace, the determinant, and the largest eigenvalue of the covariance matrix can be used as measures of spatial dispersion.
A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors.
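A short sketch of these measures, assuming numpy is available and using a small made-up point set:

```python
# Centroid (coordinate-wise mean) and covariance-based dispersion summaries.
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [2.0, 2.0]])

centroid = points.mean(axis=0)               # measure of central tendency
cov = np.cov(points, rowvar=False)           # 2x2 covariance of x and y

trace_disp = np.trace(cov)                   # total variance
det_disp = np.linalg.det(cov)                # "generalized variance"
lmax_disp = np.linalg.eigvalsh(cov).max()    # spread along the widest axis

print(centroid, trace_disp, det_disp, lmax_disp)
```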
Measures of spatial homogeneity
A homogeneous set of points in the plane is a set that is distributed such that approximately the same number of points occurs in any circular region of a given area.
|
https://en.wikipedia.org/wiki/Great%20Migrations
|
Great Migrations is a seven-episode nature documentary television miniseries that airs on the National Geographic Channel, featuring the great migrations of animals around the globe. The seven-part show is the largest programming event in the ten-year history of the channel and is part of the largest cross-platform initiative since the founding of the National Geographic Society. It was filmed in HD, and premiered on November 7, 2010 with accompanying coverage in the National Geographic magazine and an official companion book.
Episodes
Great Migrations debuted on November 7, 2010 worldwide. The series airs on the Sundays of the same month, spread across four hour-long chapters, excluding three supplemental hours which run on other dates. The National Geographic Channel estimated that the show's premiere would be accessible in 330 million homes across the globe.
Production
The production team traveled over two and a half years tracking multiple species ranging from army ants to Mali elephants. Cinematographers went to great lengths to film the species and their migratory habits, although none were hurt in the process. Filming provided rare footage of various animal scenes, including the documentation of an elephant's funeral for the first time outside East Africa. Various technologies were used to film the show, such as the use of high-tech tags on monarch butterflies and elephant seals.
The Cineflex Heligimbal gyrostabilized camera was widely utilized in the production. It allows rock-solid closeups to be shot from a kilometer up, a height that does not disturb the animals being filmed. The series also uses the ultra-slow-motion Phantom HD camera by Vision Research and the "beyond high-def" Red camera.
Reception
Great Migrations was acclaimed, with considerable praise for its cinematography and photography. The Washington Post remarked on the show's "compelling grandeur"; reviewer Tom Shales noted how contemporary nature films would inevitably be compared with the
|
https://en.wikipedia.org/wiki/HTC%20Shift
|
HTC Shift (code name: Clio) is an Ultra-Mobile PC by HTC.
Features
Dual Operating System
Microsoft Windows Vista Business 32-Bit (notebook mode)
SnapVUE (PDA mode)
Processor
Intel A110 Stealey CPU 800 MHz (for Windows Vista)
ARM11 CPU (for SnapVUE)
Memory and Storage
1 GB RAM (notebook mode)
64 MB RAM (PDA mode)
40/60 GB HDD
SD card slot
Intel GMA 950 graphics
Communications
Quad band GSM / GPRS / EDGE (data only): GSM 850, GSM 900, GSM 1800, GSM 1900
Triband UMTS / HSDPA (data only): UMTS 850, UMTS 1900, UMTS 2100
Wi-Fi 802.11 b/g
Bluetooth v2.0
USB port
7" display
Active TFT touchscreen, 16M colors
800 × 480 pixels (Wide-VGA), 7 inches
QWERTY keyboard
Handwriting recognition
Fingerprint Recognition
Ringtones
MP3
Dual speakers
Upgrading
In November 2011 the team from DistantEarth have succeeded in loading the developer preview of Windows 8 onto the HTC Shift.
|
https://en.wikipedia.org/wiki/Bell%E2%80%93LaPadula%20model
|
The Bell–LaPadula Model (BLP) is a state machine model used for enforcing access control in government and military applications. It was developed by David Elliott Bell, and Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell, to formalize the U.S. Department of Defense (DoD) multilevel security (MLS) policy. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g., "Top Secret"), down to the least sensitive (e.g., "Unclassified" or "Public").
The Bell–LaPadula model is an example of a model where there is no clear distinction between protection and security.
Features
The Bell–LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell–LaPadula model is built on the concept of a state machine with a set of allowable states in a computer system. The transition from one state to another state is defined by transition functions. A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classi
|
https://en.wikipedia.org/wiki/Bennett%27s%20laws
|
Bennett's laws of quantum information are:
1 qubit ≥ 1 bit (classical),
1 qubit ≥ 1 ebit (entanglement bit),
1 ebit + 1 qubit ≥ 2 bits (i.e. superdense coding),
1 ebit + 2 bits ≥ 1 qubit (i.e. quantum teleportation),
where "≥" indicates "can do the job of".
These principles were formulated around 1993 by Charles H. Bennett.
|
https://en.wikipedia.org/wiki/Size-exclusion%20chromatography
|
Size-exclusion chromatography, also known as molecular sieve chromatography, is a chromatographic method in which molecules in solution are separated by their size, and in some cases molecular weight. It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers. Typically, when an aqueous solution is used to transport the sample through the column, the technique is known as gel-filtration chromatography, versus the name gel permeation chromatography, which is used when an organic solvent is used as a mobile phase. The chromatography column is packed with fine, porous beads which are commonly composed of dextran, agarose, or polyacrylamide polymers. The pore sizes of these beads are used to estimate the dimensions of macromolecules. SEC is a widely used polymer characterization method because of its ability to provide good molar mass distribution (Mw) results for polymers.
Size exclusion chromatography (SEC) is fundamentally different from all other chromatographic techniques in that separation is based on a simple procedure of classifying molecule sizes rather than any type of interaction.
Applications
The main application of size-exclusion chromatography is the fractionation of proteins and other water-soluble polymers, while gel permeation chromatography is used to analyze the molecular weight distribution of organic-soluble polymers. Neither technique should be confused with gel electrophoresis, where an electric field is used to "pull" molecules through the gel depending on their electrical charges. The amount of time a solute remains within a pore is dependent on the size of the pore. Larger solutes will have access to a smaller volume and vice versa. Therefore, a smaller solute will remain within the pore for a longer period of time compared to a larger solute.
Another use of size exclusion chromatography is to examine the stability and characteristics of natural organic matter in water. In this method, Ma
|
https://en.wikipedia.org/wiki/Deflagration
|
Deflagration (Lat: de + flagrare, 'to burn down') is subsonic combustion in which a pre-mixed flame propagates through an explosive or a mixture of fuel and oxidizer. Deflagrations in high and low explosives may or may not transition to a detonation depending upon confinement and other factors. Deflagrations in fuel/oxidizer mixtures may also transition to detonations depending on confinement and other factors. Most fires found in daily life are diffusion flames. Deflagrations with flame speeds in the range of 1 m/sec differ from detonations, which propagate supersonically through shock waves with detonation velocities in the range of kilometers/second.
Applications
Deflagrations are often used in engineering applications when the force of the expanding gas is used to move an object such as a projectile down a barrel, or a piston in an internal combustion engine. Deflagration systems and products can also be used in mining, demolition and stone quarrying via gas pressure blasting as a beneficial alternative to high explosives.
Terminology of explosive safety
When studying or discussing explosive safety, or the safety of systems containing explosives, the terms deflagration, detonation and deflagration-to-detonation transition (commonly referred to as DDT) must be understood and used appropriately to convey relevant information. As explained above, a deflagration is a subsonic reaction, whereas a detonation is a supersonic (greater than the sound speed of the material) reaction. Distinguishing between a deflagration or a detonation can be difficult to impossible to the casual observer. Rather, confidently differentiating between the two requires instrumentation and diagnostics to ascertain reaction speed in the affected material. Therefore, when an unexpected event or an accident occurs with an explosive material or an explosive-containing system it is usually impossible to know whether the explosive deflagrated or detonated as both can appear as very violent, en
|
https://en.wikipedia.org/wiki/Differentially%20methylated%20region
|
Differentially methylated regions (DMRs) are genomic regions with different DNA methylation status across different biological samples and regarded as possible functional regions involved in gene transcriptional regulation. The biological samples can be different cells/tissues within the same individual, the same cell/tissue at different times, cells/tissues from different individuals, even different alleles in the same cell.
DNA is mostly methylated at a CpG site, which is a cytosine followed by a guanine; the "p" refers to the phosphate linker between them. A DMR usually involves adjacent sites or a group of sites close together that have different methylation patterns between samples. CpG islands appear to be unmethylated in most normal tissues but are highly methylated in cancer tissues.
There are several different types of DMRs. These include tissue-specific DMR (tDMR), cancer-specific DMR (cDMR), development stages (dDMRs), reprogramming-specific DMR (rDMR), allele-specific DMR (AMR), and aging-specific DMR (aDMR). DNA methylation is associated with cell differentiation and proliferation.
|
https://en.wikipedia.org/wiki/Zoid
|
In botany, a zoid or zoïd is a reproductive cell that possesses one or more flagella, and is capable of independent movement. Zoid can refer to either an asexually reproductive spore or a sexually reproductive gamete. In sexually reproductive gametes, zoids can be either male or female depending on the species. For example, some brown algae (Phaeophyceae) reproduce by producing multi-flagellated male and female gametes that recombine to form the diploid sporangia. Zoids are primarily found in some protists, diatoms, green algae, brown algae, non-vascular plants, and a few vascular plants (ferns, cycads, and Ginkgo biloba). The most common classification group that produces zoids is the heterokonts or stramenopiles. These include green algae, brown algae, oomycetes, and some protists. The term is generally not used to describe motile, flagellated sperm found in animals. Zoid is also commonly confused with zooid, which is a single organism that is part of a colonial animal.
Diversity of zoids
A zoid contains one or more flagella for motility. In the various species that produce zoids, there is a high level of diversity in the number of flagella produced. The heterokonts generally produce zoids with 2 flagella, while Ginkgo biloba produces zoids with tens of thousands of flagella. The position of the flagella and the arrangement of the microtubules vary among species as well. The following sections will briefly outline general characteristics of the zoids found in each subset as well as provide specific examples.
Zoids in heterokonts
Heterokonts are a diverse group of eukaryotic organisms that include diatoms, green algae, and brown algae. The defining characteristic of this group is their bi-flagellate, motile sperm (zoid). The two flagella are most commonly positioned apically or sub-apically depending on the type of heterokont. One flagellum, the tinsel flagellum, is generally longer and covered with bristles. The other flagellum is typically shorter, poten
|
https://en.wikipedia.org/wiki/The%20Eleventh%20Hour%20%28book%29
|
The Eleventh Hour: A Curious Mystery is an illustrated children's book by Graeme Base. In it, Horace the Elephant holds a party for his eleventh birthday, to which he invites his ten best friends (various animals) to play eleven games and share in a feast that he has prepared. However, at the time they are to eat—11:00—they are startled to find that someone has already eaten all the food. They accuse each other until, finally, they're left puzzled as to who could have eaten it all. It is left up to the reader to solve the mystery, through careful analysis of the pictures on each page and the words in the story.
The book was a joint-winner of the "Picture Book of the Year" award from The Children's Book Council of Australia.
History
Base was inspired to write the book by reading Agatha Christie novels. He travelled to Kenya and Tanzania in 1987 observing animals in game parks and collecting ideas for the book.
Style
Written in rhyme, the book includes large and lavish full-page illustrations of Horace's opulent house and the events of the party, packed with hidden details. The author invites the reader to deduce the identity of the thief by examining the illustrations and making deductions and observations. Also among the details in the illustrations are hidden messages, ciphers, and codes for amateur cryptographers (for example, one page's border consists of Morse code while another page set in the ballroom contains musical clues as to which guest is guilty). The biggest and most noticeable clue lies in a paragraph of ciphertext at the end of the book, which is to be decrypted, once the reader has discovered the identity of the thief, by means of a Caesar cipher mapping A to the first letter of the guilty animal's name. The solution to the cipher confirms the answer to the puzzle and offers an additional challenge to the reader.
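A toy sketch of the kind of decryption the book invites, assuming the reader has guessed the key letter: plaintext A maps to that letter, so shifting back by the same amount recovers the message. The ciphertext and key below are made-up examples, not the book's actual puzzle or answer.

def caesar_decrypt(ciphertext, key_letter):
    """Decrypt a Caesar cipher in which plaintext 'A' was enciphered as key_letter."""
    shift = ord(key_letter.upper()) - ord("A")
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

# Made-up example (not the book's cipher): shifting back by 2 recovers the plaintext.
print(caesar_decrypt("Vjku ku c vguv", "C"))  # -> "This is a test"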
The final portion of the book contains the answers to almost all of the clues in the book (including the cipher), and how to solve the
|
https://en.wikipedia.org/wiki/Variety%20%28botany%29
|
In botanical nomenclature, variety (abbreviated var.; in Latin: varietas) is a taxonomic rank below that of species and subspecies, but above that of form. As such, it gets a three-part infraspecific name. It is sometimes recommended that the subspecies rank should be used to recognize geographic distinctiveness, whereas the variety rank is appropriate if the taxon is seen throughout the geographic range of the species.
Example
The pincushion cactus, Escobaria vivipara (Nutt.) Buxb., is a wide-ranging variable species occurring from Canada to Mexico, and found throughout New Mexico below about . Nine varieties have been described. Where the varieties of the pincushion cactus meet, they intergrade. The variety Escobaria vivipara var. arizonica is from Arizona, while Escobaria vivipara var. neo-mexicana is from New Mexico.
See also Capsicum annuum var. glabriusculum
Definitions
The term is defined in different ways by different authors. However, the International Code of Nomenclature for Cultivated Plants, while recognizing that the word "variety" is often used to denote "cultivar", does not accept this usage. Variety is defined in the code as follows: "Variety (varietas) the category in the botanical nomenclatural hierarchy between species and form (forma)". The code acknowledges the other usage as follows: "term used in some national and international legislation for a clearly distinguishable taxon below the rank of species; generally, in legislative texts, a term equivalent to cultivar. See also: cultivar and variety (varietas)".
A variety will have an appearance distinct from other varieties, but will hybridize freely with those other varieties (if brought into contact).
Other nomenclature uses
In plant breeding nomenclature, at least in countries that are signatory to the UPOV Convention, "variety" or "plant variety" is a legal term.
In zoological nomenclature, the only allowed rank below that of species is that of subspecies. A name that was published before 1961 as
|
https://en.wikipedia.org/wiki/G%C3%BCnther%20Laukien%20Prize
|
The Günther Laukien Prize is a prize presented at the Experimental Nuclear Magnetic Resonance Conference "to recognize recent cutting-edge experimental NMR research with a high probability of enabling beneficial new applications". The prize was established in 1999 in memory of Günther Laukien, who was a pioneer in NMR research. The prize money of $20,000 is financed by Bruker, the company founded by Laukien. The recipients of the Günther Laukien Prize have been:
2023 Lyndon Emsley and Anne Lesage
2022 Michael Garwood
2021 Gareth Morris
2020 Simon Duckett, Konstantin Ivanov, and Warren S. Warren
2019 Geoffrey Bodenhausen, and Christian Griesinger
2018 Gerhard Wagner
2017 Kurt Zilm and Bernd Reif
2016 Robert S. Balaban and Peter van Zijl
2015 Arthur Palmer III
2014 Marc Baldus, Mei Hong, Ann McDermott, Beat H. Meier, Hartmut Oschkinat, and Robert Tycko
2013 Clare Grey
2012 Klaes Golman and Jan Henrik Ardenkjaer-Larsen
2011 Daniel Rugar, John Mamin, and John Sidles
2010 Paul Callaghan
2009 Daniel Weitekamp
2008 Malcolm Levitt
2007 Robert G. Griffin
2006 Thomas Szyperski, Eriks Kupce, Ray Freeman, and Rafael Bruschweiler
2005 Stephan Grzesiek
2004 Lewis E. Kay
2003 Jacob Schaefer
2002 Ad Bax, Aksel Bothner-By and James Prestegard
2001 Peter Boesiger, Klaas Prüßmann and Markus Weiger
2000 Lucio Frydman
1999 Konstantin Pervushin, Roland Riek, Gerhard Wider, and Kurt Wüthrich
See also
List of physics awards
|
https://en.wikipedia.org/wiki/Input%20shaping
|
In control theory, input shaping is an open-loop control technique for reducing vibrations in computer-controlled machines. The method works by creating a command signal that cancels its own vibration. That is, a vibration excited by previous parts of the command signal is cancelled by vibration excited by latter parts of the command. Input shaping is implemented by convolving a sequence of impulses, known as an input shaper, with any arbitrary command. The shaped command that results from the convolution is then used to drive the system. If the impulses in the shaper are chosen correctly, then the shaped command will excite less residual vibration than the unshaped command. The amplitudes and time locations of the impulses are obtained from the system's natural frequencies and damping ratios. Shaping can be made very robust to errors in the system parameters.
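A minimal sketch of one common design, the two-impulse zero-vibration (ZV) shaper, assuming a single vibratory mode; the frequency, damping, and sampling values below are made-up examples, and practical systems often use more robust multi-impulse shapers.

import numpy as np

def zv_shaper(natural_freq_hz, damping_ratio, dt):
    """Two-impulse zero-vibration (ZV) input shaper for one mode, sampled with step dt."""
    wn = 2 * np.pi * natural_freq_hz
    wd = wn * np.sqrt(1 - damping_ratio**2)            # damped natural frequency
    K = np.exp(-damping_ratio * np.pi / np.sqrt(1 - damping_ratio**2))
    amplitudes = np.array([1.0, K]) / (1.0 + K)        # impulses sum to 1, preserving the setpoint
    times = np.array([0.0, np.pi / wd])                # second impulse half a damped period later
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amplitudes, times):
        shaper[int(round(t / dt))] += a
    return shaper

# Shape a step command (assumed example: 2 Hz mode with 5% damping, 1 ms sampling).
dt = 0.001
command = np.ones(2000)
shaped = np.convolve(command, zv_shaper(2.0, 0.05, dt))[:len(command)]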
|
https://en.wikipedia.org/wiki/S9%20fraction
|
The S9 fraction is the product of an organ tissue homogenate used in biological assays. The S9 fraction is most frequently used in assays that measure the metabolism of drugs and other xenobiotics. It is defined by the U.S. National Library of Medicine's "IUPAC Glossary of Terms Used in Toxicology" as the "Supernatant fraction obtained from an organ (usually liver) homogenate by centrifuging at 9000 g for 20 minutes in a suitable medium; this fraction contains cytosol and microsomes." The microsomes component of the S9 fraction contain cytochrome P450 isoforms (phase I metabolism) and other enzyme activities. The cytosolic portion contains the major part of the activities of transferases (phase II metabolism). The S9 fraction is easier to prepare than purified microsomes.
Applications
The S9 fraction has been used in conjunction with the Ames test to assess the mutagenic potential of chemical compounds. Chemical substances sometimes require metabolic activation in order to become mutagenic. Furthermore, the metabolic enzymes of bacteria used in the Ames test differ substantially from those in mammals. Therefore, to mimic the metabolism of a test substance that would occur in mammals, the S9 fraction is often added to the Ames test.
The S9 fraction has also been used to assess the metabolic stability of candidate drugs.
|
https://en.wikipedia.org/wiki/Information%20model
|
An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context.
Overview
The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.
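As a rough illustration of one possible mapping from an information model to a data model, the sketch below encodes two hypothetical entity types (a network device and its interfaces), a containment relationship, and an operation; all names are invented for the example and do not come from any particular modeling standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interface:
    # A hypothetical entity type with two properties.
    name: str
    speed_mbps: int

@dataclass
class Device:
    # A hypothetical entity type related to Interface by containment.
    hostname: str
    location: str
    interfaces: List[Interface] = field(default_factory=list)

    def add_interface(self, iface: Interface) -> None:
        """An operation on the entity type, as an information model may specify."""
        self.interfaces.append(iface)

router = Device(hostname="edge-1", location="rack 12")
router.add_interface(Interface(name="eth0", speed_mbps=1000))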
Information modeling languages
In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique and independent of any database mode
|
https://en.wikipedia.org/wiki/Species%20nova
|
In biological taxonomy, a species nova (plural: species novae; abbreviation: sp. nov.; plural abbreviation: spp. nov.) is a new species. The phrase is Latin, and is used after a binomial name that is being published for the first time.
An example is the species of miniature frog, Paedophryne amauensis, originally described as Paedophryne amauensis sp. nov. in PLOS ONE in 2012.
The term should not be confused with comb. nov. (combinatio nova) or stat. nov. (status novus), used when a previously named taxon is moved to a different genus or species, or its rank is changed.
See also
Glossary of scientific naming
Species description
|
https://en.wikipedia.org/wiki/Phenomenology%20%28physics%29
|
In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data.
Applications in particle physics
Standard Model consequences
Within the well-tested and generally accepted Standard Model, phenomenology is the calculating of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections).
Examples include:
Next-to-leading order calculations of particle production rates and distributions.
Monte Carlo simulation studies of physics processes at colliders.
Extraction of parton distribution functions from data.
CKM matrix calculations
The CKM matrix is useful in these predictions:
Application of heavy quark effective field theory to extract CKM matrix elements.
Using lattice QCD to extract quark masses and CKM matrix elements from experiment.
Theoretical models
In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
Phenomenological analysis
Phenomenological anal
|
https://en.wikipedia.org/wiki/Deep%20Carbon%20Observatory
|
The Deep Carbon Observatory (DCO) is a global research program designed to transform understanding of carbon's role in Earth. DCO is a community of scientists, including biologists, physicists, geoscientists and chemists, whose work crosses several traditional disciplinary lines to develop the new, integrative field of deep carbon science. To complement this research, the DCO's infrastructure includes public engagement and education, online and offline community support, innovative data management, and novel instrumentation development.
In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archaea on Earth, comprising up to 23 billion tonnes of carbon, live up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.
History
In 2007, Robert Hazen, a Senior Staff Scientist at the Carnegie Institution’s Geophysical Laboratory (Washington, DC) spoke at the Century Club in New York, on the origins of life on Earth and how geophysical reactions may have played a critical role in the development of life on Earth. Jesse Ausubel, a faculty member at Rockefeller University and Program Director at the Alfred P. Sloan Foundation, was in attendance and later sought out Hazen's book, Genesis: The Scientific Quest for Life’s Origins.
After two years of planning and collaboration, Hazen and colleagues officially launched the Deep Carbon Observatory (DCO) in August 2009, with its secretariat based at the Geophysical Laboratory of the Carnegie Institution of Washington, DC. Hazen and Ausubel, along with input from over 100 scientists invited to participate in the Deep Carbon Cycle Workshop in 2008, expanded their original idea. No longer focused solely on the origin of life on Earth, the group instead broadened its aim: to further human understanding of Earth, carbon, that critical element, had to take center stage.
Deep carbon cycle
The Deep Carbon Observator
|
https://en.wikipedia.org/wiki/Topology%20table
|
A topology table is used by routers that route traffic in a network. It consists of all routing tables inside the Autonomous System where the router is positioned. Each router using the routing protocol EIGRP then maintains a topology table for each configured network protocol; all learned routes that lead to a destination are found in the topology table. EIGRP must have a reliable connection. The routing tables of all routers in an Autonomous System are the same.
|
https://en.wikipedia.org/wiki/MTOR%20inhibitors
|
mTOR inhibitors are a class of drugs that inhibit the mammalian target of rapamycin (mTOR) (also known as the mechanistic target of rapamycin), which is a serine/threonine-specific protein kinase that belongs to the family of phosphatidylinositol-3 kinase (PI3K) related kinases (PIKKs). mTOR regulates cellular metabolism, growth, and proliferation by forming and signaling through two protein complexes, mTORC1 and mTORC2. The most established mTOR inhibitors are so-called rapalogs (rapamycin and its analogs), which have shown tumor responses in clinical trials against various tumor types.
History
The discovery of mTOR was made in 1994 while investigating the mechanism of action of its inhibitor, rapamycin. Rapamycin was first discovered in 1975 in a soil sample from Easter Island in the South Pacific, also known as Rapa Nui, from where its name is derived. Rapamycin is a macrolide, produced by the microorganism Streptomyces hygroscopicus, and showed antifungal properties. Shortly after its discovery, immunosuppressive properties were detected, which later led to the establishment of rapamycin as an immunosuppressant. In the 1980s, rapamycin was also found to have anticancer activity, although the exact mechanism of action remained unknown until many years later.
In the 1990s there was a dramatic change in this field due to studies on the mechanism of action of rapamycin and the identification of the drug target. It was found that rapamycin inhibited cellular proliferation and cell cycle progression. Research on mTOR inhibition has been a growing branch in science and has promising results.
Protein kinases and their inhibitors
In general, protein kinases are classified in two major categories based on their substrate specificity, protein tyrosine kinases and protein serine/threonine kinases. Dual-specificity kinases are a subclass of the tyrosine kinases.
mTOR is a kinase within the family of phosphatidylinositol-3 kinase-related kinases (PIKKs), which is a family of serine
|
https://en.wikipedia.org/wiki/Doctor%20Atomic
|
Doctor Atomic is an opera by the contemporary American composer John Adams, with libretto by Peter Sellars. It premiered at the San Francisco Opera on October 1, 2005. The work focuses on how leading figures at Los Alamos dealt with the great stress and anxiety of preparing for the test of the first atomic bomb (the "Trinity" test).
In 2007, a documentary was made by Jon H. Else about the creation of the opera and collaboration between Adams and Sellars, titled Wonders Are Many.
Composition history
The first act takes place about a month before the bomb is to be tested, and the second act is set in the early morning of July 16, 1945 (the day of the test). During the second act, time is shown slowing down for the characters and then snapping back to the clock. The opera ends in the final, prolonged moment before the bomb is detonated.
Although the original commission for the opera suggested that U.S. physicist J. Robert Oppenheimer, the "father of the atomic bomb", be fashioned as a 20th-century Doctor Faustus, Adams and Sellars deliberately worked to avoid this characterization. Alice Goodman worked for two years with Adams on the project before leaving. She objected to the characterization of Edward Teller, as dictated by the original commission.
The work centers on key players in the Manhattan Project, especially Robert Oppenheimer and General Leslie Groves. It also features Kitty Oppenheimer, Robert's wife. Sellars adapted the libretto from primary historical sources.
Doctor Atomic is similar in style to previous Adams operas Nixon in China and The Death of Klinghoffer, both of which explored the characters and personalities of figures who were involved in historical incidents, rather than a re-enactment of the events themselves.
Libretto
Sellars adapted much of the text for the opera from declassified U.S. government documents and communications among the scientists, government officials, and military personnel who were involved in the project. He also inc
|
https://en.wikipedia.org/wiki/Dust%20bathing
|
Dust bathing (also called sand bathing) is an animal behavior characterized by rolling or moving around in dust, dry earth or sand, with the likely purpose of removing parasites from fur, feathers or skin. Dust bathing is a maintenance behavior performed by a wide range of mammalian and avian species. For some animals, dust baths are necessary to maintain healthy feathers, skin, or fur, similar to bathing in water or wallowing in mud. In some mammals, dust bathing may be a way of transmitting chemical signals (or pheromones) to the ground which marks an individual's territory.
Birds
Birds crouch close to the ground while taking a dust bath, vigorously wriggling their bodies and flapping their wings. This disperses loose substrate into the air. The birds spread one or both wings which allows the falling substrate to fall between the feathers and reach the skin. The dust bath is often followed by thorough shaking to further ruffle the feathers which may be accompanied with preening using the bill.
The California quail is a highly sociable bird; one of their daily communal activities is a dust bath. A group of quail will select an area where the ground has been freshly turned or is soft. Using their underbellies, they burrow downward into the soil about . They then wriggle about in the indentations, flapping their wings and ruffling their feathers, causing dust to rise in the air. They seem to prefer sunny places in which to create these dust baths. An ornithologist is able to detect the presence of quail in an area by spotting the circular indentations left behind in the soft dirt, some in diameter.
Birds without a uropygial gland (e.g., the emu, kiwi, ostrich and bustard) rely on dust bathing to keep their feathers healthy and dry.
Domestic chicken
Dust bathing has been extensively studied in the domestic hen. In normal dust bathing, the hen initially scratches and bill-rakes at the ground, then erects her feathers and squats. Once lying down, the behavior c
|
https://en.wikipedia.org/wiki/Local%20convex%20hull
|
Local convex hull (LoCoH) is a method for estimating size of the home range of an animal or a group of animals (e.g. a pack of wolves, a pride of lions, or herd of buffaloes), and for constructing a utilization distribution. The latter is a probability distribution that represents the probabilities of finding an animal within a given area of its home range at any point in time; or, more generally, at points in time for which the utilization distribution has been constructed. In particular, different utilization distributions can be constructed from data pertaining to particular periods of a diurnal or seasonal cycle.
Utilization distributions are constructed from data providing the location of an individual or several individuals in space at different points in time by associating a local distribution function with each point and then summing and normalizing these local distribution functions to obtain a distribution function that pertains to the data as a whole. If the local distribution function is a parametric distribution, such as a symmetric bivariate normal distribution then the method is referred to as a kernel method, but more correctly should be designated as a parametric kernel method. On the other hand, if the local kernel element associated with each point is a local convex polygon constructed from the point and its k-1 nearest neighbors, then the method is nonparametric and referred to as a k-LoCoH or fixed point LoCoH method. This is in contrast to r-LoCoH (fixed radius) and a-LoCoH (adaptive radius) methods.
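A rough sketch of the k-LoCoH construction just described, assuming NumPy and SciPy are available: each fix is grouped with its k-1 nearest neighbours and wrapped in a local convex hull; in the full method the smallest hulls are then merged until a chosen fraction of points is covered, yielding isopleths. The data and the value of k below are made up.

import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def k_locoh_hulls(fixes, k):
    """Build one local convex hull per fix from the fix and its k-1 nearest neighbours."""
    fixes = np.asarray(fixes, dtype=float)
    tree = cKDTree(fixes)
    hulls = []
    for point in fixes:
        _, idx = tree.query(point, k=k)      # the point itself plus its k-1 nearest neighbours
        hulls.append(ConvexHull(fixes[idx]))
    # Sorting by area (ConvexHull.volume is the area in 2-D) lets the smallest hulls be merged first.
    return sorted(hulls, key=lambda hull: hull.volume)

# Made-up example: 200 random telemetry fixes, k = 10.
rng = np.random.default_rng(0)
hulls = k_locoh_hulls(rng.normal(size=(200, 2)), k=10)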
In the case of LoCoH utilization distribution constructions, the home range can be taken as the outer boundary of the distribution (i.e. the 100th percentile). In the case of utilization distributions constructed from unbounded kernel elements, such as bivariate normal distributions, the utilization distribution is itself unbounded. In this case the most often used convention is to regard the 95th percentile of the utilization distribution
|
https://en.wikipedia.org/wiki/Computer%20terminal
|
A computer terminal is an electronic or electromechanical hardware device that can be used for entering data into, and transcribing data from, a computer or a computing system. The teletype was an example of an early-day hard-copy terminal and predated the use of a computer screen by decades.
Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input, yet as the technology improved and video displays were introduced, terminals pushed these older forms of interaction from the industry. A related development was time-sharing systems, which evolved in parallel and made up for any inefficiencies in the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal or terminals.
The function of a terminal is typically confined to transcription and input of data; a device with significant local, programmable data-processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a "dumb terminal" or a thin client. In the era of serial (RS-232) terminals there was a conflicting usage of the term "smart terminal" as a dumb terminal with no user-accessible local computing power but a particularly rich set of control codes for manipulating the display; this conflict was not resolved before hardware serial terminals became obsolete.
A personal computer can run terminal emulator software that replicates functions of a real-world terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system, either over a direct serial connection or over a network using, e.g., SSH.
History
The console of Konrad Zuse's Z3 had a keyboard in 1941, as did the Z4 in 1942–1945. But these consoles could only be used to enter numeric inputs and were thus analogous to those of calculating machines; programs, commands, and other data were entered via paper tape. Both machines had a
|
https://en.wikipedia.org/wiki/Spontaneous%20absolute%20asymmetric%20synthesis
|
Spontaneous absolute asymmetric synthesis is a chemical phenomenon that stochastically generates chirality based on autocatalysis and small fluctuations in the ratio of enantiomers present in a racemic mixture. In certain reactions which initially do not contain chiral information, stochastically distributed enantiomeric excess can be observed. The phenomenon is different from chiral amplification, where enantiomeric excess is present from the beginning and not stochastically distributed. Hence, when the experiment is repeated many times, the average enantiomeric excess approaches 0%. The phenomenon has important implications concerning the origin of homochirality in nature.
|
https://en.wikipedia.org/wiki/DoITPoMS
|
Dissemination of IT for the Promotion of Materials Science (DoITPoMS) is a web-based educational software resource designed to facilitate the teaching and learning of materials science at the tertiary level, free of charge.
History
The DoITPoMS project originated in the early 1990s, incorporating customized online sources into the curriculum of the Materials Science courses in the Natural Sciences Tripos of the University of Cambridge. The initiative became formalized in 2000, with the start of a project supported by the UK national Fund for the Development of Teaching and Learning (FDTL). This was led by the Department of Materials Science and Metallurgy at the University of Cambridge with five partner institutions: the University of Leeds, London Metropolitan University, the University of Manchester, Oxford Brookes University, and the University of Sheffield. This period of cooperation lasted for about 10 years.
The FDTL project was aimed at building on expertise concerning the use of Information Technology (IT) to enhance the student learning experience and to disseminate these techniques within the Materials Education community in the UK and globally. This was done by creating an archive of background information, such as video clips, micrographs, and simulations, and libraries of teaching and learning packages (TLPs), each covering a particular topic, which were designed both for independent usage by students and as a teaching aid for educators. A vital feature of these packages is a high level of user interactivity.
DoITPoMS has no commercial sponsors and no advertising is permitted on the site. The background science to the resources within DoITPoMS has all been input by unpaid volunteers, most of whom have been academics based in universities. A single person retains responsibility for a particular resource, and these people are credited on the site. While the logo of the University of Cambridge does appear on the site, its content is available freely and lice
|
https://en.wikipedia.org/wiki/Word%20%28computer%20architecture%29
|
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
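As a quick, hedged illustration on a byte-addressed platform, the pointer width reported by the interpreter usually matches the machine's natural word size; this reflects the platform ABI rather than any formal definition of "word".

import struct
import sys

# Pointer size in bytes on the current platform (8 on a typical 64-bit system).
pointer_bytes = struct.calcsize("P")
print(f"pointer width: {pointer_bytes * 8} bits")

# sys.maxsize is 2**(n-1) - 1 for an n-bit signed size type, hinting at the same width.
print(f"sys.maxsize fits in {sys.maxsize.bit_length() + 1} bits")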
Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (KW) for 65,536 words, and sometimes uses them incorrectly, with kilowords (KW) meaning 1024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes.
Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8-bit
|
https://en.wikipedia.org/wiki/Lateral%20computing
|
Lateral computing is a lateral thinking approach to solving computing problems.
Lateral thinking has been made popular by Edward de Bono. This thinking technique is applied to generate creative ideas and solve problems. Similarly, by applying lateral-computing techniques to a problem, it can become much easier to arrive at a computationally inexpensive, easy to implement, efficient, innovative or unconventional solution.
The traditional or conventional approach to solving computing problems is to either build mathematical models or have an IF-THEN-ELSE structure. For example, a brute-force search is used in many chess engines, but this approach is computationally expensive and sometimes may arrive at poor solutions. It is for problems like this that lateral computing can be useful to form a better solution.
A simple problem of truck backup can be used to illustrate lateral computing. This is one of the difficult tasks for traditional computing techniques, and it has been efficiently solved by the use of fuzzy logic (which is a lateral computing technique). Lateral computing sometimes arrives at a novel solution for a particular computing problem by using models of how living beings such as humans, ants, and honeybees solve a problem; how pure crystals are formed by annealing; the evolution of living beings; or quantum mechanics.
From lateral-thinking to lateral-computing
Lateral thinking is a technique for creative thinking to solve problems. The brain, as the center of thinking, has a self-organizing information system. It tends to create patterns, and the traditional thinking process uses them to solve problems. The lateral thinking technique proposes to escape from this patterning to arrive at better solutions through new ideas. Provocative use of information processing is the basic underlying principle of lateral thinking.
The provocative operator (PO) is something which characterizes lateral thinking. Its function is to generate new ideas by provocation an
|
https://en.wikipedia.org/wiki/Coercivity
|
Coercivity, also called the magnetic coercivity, coercive field or coercive force, is a measure of the ability of a ferromagnetic material to withstand an external magnetic field without becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted HC.
An analogous property in electrical engineering and materials science, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized.
Ferromagnetic materials with high coercivity are called magnetically hard, and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft. The latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding.
Definitions
Coercivity in a ferromagnetic material is the intensity of the applied magnetic field (H field) required to demagnetize that material, after the magnetization of the sample has been driven to saturation by a strong field. This demagnetizing field is applied opposite to the original saturating field. There are however different definitions of coercivity, depending on what counts as 'demagnetized', thus the bare term "coercivity" may be ambiguous:
The normal coercivity, HCn, is the H field required to reduce the magnetic flux (average B field inside the material) to zero.
The intrinsic coercivity, HCi, is the H field required to reduce the magnetization (average M field inside the material) to zero.
The remanence coercivity, HCr, is the H field required to reduce the remanence to zero, meaning that when the H field is finally returned to zero, then both B and M also fall to zero (the material reaches the origin in the hysteresis curve).
The distinction between the normal and intrinsic coercivity is negligible in soft magnetic materials, however it can be significant in hard magnetic materials. The strongest rare-earth magnets lose almost none of the magnetization at HCn.
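A small sketch of how the normal coercivity could be read off measured data, assuming a demagnetizing branch of a hysteresis loop sampled as arrays of H and B values: interpolate the H value at which B crosses zero. The data below are synthetic.

import numpy as np

def normal_coercivity(h_branch, b_branch):
    """Interpolate the H value at which B crosses zero on a demagnetizing branch."""
    h = np.asarray(h_branch, dtype=float)
    b = np.asarray(b_branch, dtype=float)
    crossings = np.where(np.diff(np.sign(b)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("branch does not cross B = 0")
    i = crossings[0]
    frac = -b[i] / (b[i + 1] - b[i])   # linear interpolation between the bracketing samples
    return h[i] + frac * (h[i + 1] - h[i])

# Synthetic demagnetizing branch: H swept from +50 kA/m down to -50 kA/m.
h = np.linspace(50e3, -50e3, 11)
b = np.tanh((h + 20e3) / 15e3)         # made-up B(H) crossing zero near H = -20 kA/m
print(normal_coercivity(h, b))         # about -2.0e4 A/m; its magnitude is HCn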
Experi
|
https://en.wikipedia.org/wiki/Photon%20surface
|
Photon sphere (definition):
A photon sphere of a static spherically symmetric metric is a timelike hypersurface if the deflection angle of a light ray with the closest distance of approach diverges as
For a general static spherically symmetric metric
the photon sphere equation is:
The concept of a photon sphere in a static spherically symmetric metric was generalized to a photon surface of any metric.
Photon surface (definition):
A photon surface of (M,g) is an immersed, nowhere spacelike hypersurface S of (M, g) such that, for every point p∈S and every null vector k∈TpS, there exists a null geodesic γ:(-ε,ε)→M of (M,g) such that γ′(0)=k, |γ|⊂S.
Both definitions give the same result for a general static spherically symmetric metric.
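As a hedged worked example (the notation here is mine, not necessarily that of the original article): for a metric of the form ds² = −A(r)dt² + B(r)dr² + r²dΩ², circular null geodesics satisfy r A′(r) = 2A(r), which for the Schwarzschild metric in geometric units places the photon sphere at r = 3M:

\[
A(r) = 1 - \frac{2M}{r}, \qquad
r\,A'(r) = 2A(r)
\;\Longrightarrow\;
\frac{2M}{r} = 2\left(1 - \frac{2M}{r}\right)
\;\Longrightarrow\;
r_{\mathrm{ph}} = 3M .
\]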
Theorem:
Subject to an energy condition, a black hole in any spherically symmetric spacetime must be surrounded by a photon sphere. Conversely, subject to an energy condition, any photon sphere must cover more than a certain amount of matter, a black hole, or a naked singularity.
|
https://en.wikipedia.org/wiki/Paradoxoglanis%20caudivittatus
|
Paradoxoglanis caudivittatus is a species of electric catfish endemic to the Democratic Republic of the Congo where it is found in the Tshuapa and Lukenie River systems. This species grows to a length of SL.
|
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Lithuania
|
The following is a list of flags of Lithuania.
National flag and State flag
Government flags
Military flags
Historical flags
Soviet occupation
County flags
Each county of Lithuania has adopted a flag, each of them conforming to a pattern: a blue rectangle, with ten instances of the Cross of Vytis appearing in gold, acts as a fringe to the central feature of the flag, which is chosen by the county itself. Most of the central designs were adapted from the counties' coat of arms.
See also
Flag of Lithuania
Armorial of Lithuania
Flags of Lithuania
|
https://en.wikipedia.org/wiki/Squashed%20entanglement
|
Squashed entanglement, also called CMI entanglement (CMI can be pronounced "see me"), is an information theoretic measure of quantum entanglement for a bipartite quantum system. If ρ_AB is the density matrix of a system (A,B) composed of two subsystems A and B, then the CMI entanglement E_CMI of system (A,B) is defined by

E_CMI(ρ_AB) = min over ρ_ABΛ in K of (1/2) S(A : B | Λ),    (1)

where K is the set of all density matrices ρ_ABΛ for a tripartite system (A,B,Λ) such that Tr_Λ ρ_ABΛ = ρ_AB. Thus, CMI entanglement is defined as an extremum of a functional of ρ_ABΛ. We define S(A : B | Λ), the quantum Conditional Mutual Information (CMI), below. A more general version of Eq.(1) replaces the "min" (minimum) in Eq.(1) by an "inf" (infimum). When ρ_AB is a pure state, E_CMI(ρ_AB) = S(ρ_A) = S(ρ_B), in agreement with the definition of entanglement of formation for pure states. Here S(ρ) is the Von Neumann entropy of density matrix ρ.
Motivation for definition of CMI entanglement
CMI entanglement has its roots in classical (non-quantum) information theory, as we explain next.
Given any two random variables X, Y, classical information theory defines the mutual information, a measure of correlations, as

H(X : Y) = H(X) + H(Y) − H(X, Y).    (2)

For three random variables X, Y, Λ, it defines the CMI as

H(X : Y | Λ) = H(X, Λ) + H(Y, Λ) − H(Λ) − H(X, Y, Λ).    (3)

It can be shown that H(X : Y | Λ) ≥ 0.

Now suppose ρ_ABΛ is the density matrix for a tripartite system (A,B,Λ). We will represent the partial trace of ρ_ABΛ with respect to one or two of its subsystems by ρ with the symbol for the traced system erased. For example, ρ_AB = Tr_Λ ρ_ABΛ. One can define a quantum analogue of Eq.(2) by

S(A : B) = S(ρ_A) + S(ρ_B) − S(ρ_AB),    (4)

and a quantum analogue of Eq.(3) by

S(A : B | Λ) = S(ρ_AΛ) + S(ρ_BΛ) − S(ρ_Λ) − S(ρ_ABΛ).    (5)

It can be shown that S(A : B | Λ) ≥ 0. This inequality is often called the strong-subadditivity property of quantum entropy.
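A small numerical sketch (mine, not the article's) that evaluates the quantum CMI of Eq.(5) from von Neumann entropies of partial traces, here for a random three-qubit pure state with the third qubit playing the role of Λ; it computes only the functional appearing inside the minimization of Eq.(1), not the squashed entanglement itself.

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep` (indices into `dims`)."""
    n = len(dims)
    letters = "abcdefghijklmnopqrstuvwxyz"
    rho = rho.reshape(dims + dims)
    row = [letters[i] for i in range(n)]
    col = [letters[i + n] for i in range(n)]
    for i in range(n):
        if i not in keep:
            col[i] = row[i]            # repeated index -> trace over subsystem i
    out = "".join(row[i] for i in keep) + "".join(letters[i + n] for i in keep)
    reduced = np.einsum("".join(row + col) + "->" + out, rho)
    d = int(np.prod([dims[i] for i in keep]))
    return reduced.reshape(d, d)

def quantum_cmi(rho_abl, dims):
    """S(A:B|L) = S(AL) + S(BL) - S(L) - S(ABL), in bits, for a tripartite state."""
    S = von_neumann_entropy
    return (S(partial_trace(rho_abl, [0, 2], dims))
            + S(partial_trace(rho_abl, [1, 2], dims))
            - S(partial_trace(rho_abl, [2], dims))
            - S(rho_abl))

# Random three-qubit pure state (an assumed example); strong subadditivity makes the result >= 0.
rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
print(quantum_cmi(np.outer(psi, psi.conj()), [2, 2, 2]))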
Consider three random variables X, Y, Λ with probability distribution P_XYΛ(x, y, λ), which we will abbreviate as P(x, y, λ). For those special P(x, y, λ) of the form

P(x, y, λ) = P(x | λ) P(y | λ) P(λ),    (6)

it can be shown that H(X : Y | Λ) = 0. Probability distributions of the form Eq.(6) are in fact described by the Bayesian network shown in Fig.1.

One can define a classical CMI entanglement by

E_CMI(P_XY) = min over P_XYΛ in K of H(X : Y | Λ),    (7)

where K is the set of all probability distributions P_XYΛ in three random variables X, Y, Λ, such that Σ_λ P_XYΛ(x, y, λ) = P_XY(x, y) for all x, y. Because, given a probability distribution P_XY, one can always
|
https://en.wikipedia.org/wiki/Matrox%20G200
|
The G200 is a 2D, 3D, and video accelerator chip for personal computers designed by Matrox. It was released in 1998.
History
Matrox had been known for years as a significant player in the high-end 2D graphics accelerator market. Cards they produced were excellent Windows accelerators, and some of the later cards such as Millennium and Mystique excelled at MS-DOS as well. Matrox stepped forward in 1994 with their Impression Plus to innovate with one of the first 3D accelerator boards, but that card only could accelerate a very limited feature set (no texture mapping), and was primarily targeted at CAD applications.
Matrox, seeing the slow but steady growth in interest in 3D graphics on PCs with NVIDIA, Rendition, and ATI's new cards, began experimenting with 3D acceleration more aggressively and produced the Mystique. Mystique was their most feature-rich 3D accelerator in 1997, but still lacked key features including bilinear filtering. Then, in early 1998, Matrox teamed up with PowerVR to produce an add-in 3D board called Matrox m3D using the PowerVR PCX2 chipset. This board was one of the very few times that Matrox would outsource for their graphics processor, and was certainly a stop-gap measure to hold out until the G200 project was ready to go.
Overview
With the G200, Matrox aimed to combine its past products' competent 2D and video acceleration with a full-featured 3D accelerator. The G200 chip was used on several boards, most notably the Millennium G200 and Mystique G200. Millennium G200 received the new SGRAM memory and a faster RAMDAC, while Mystique G200 was cheaper and equipped with slower SDRAM memory but gained a TV-out port. Most G200 boards shipped standard with 8 MB RAM and were expandable to 16 MB with an add-on module. The cards also had ports for special add-on boards, such as the Rainbow Runner, which could add various functionality.
G200 was Matrox's first fully AGP-compliant graphics processor. While the earlier Millennium II had been adapte
|
https://en.wikipedia.org/wiki/Kolmogorov%27s%20two-series%20theorem
|
In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
Statement of the theorem
Let X_1, X_2, … be independent random variables with expected values E[X_n] = μ_n and variances Var(X_n) = σ_n², such that Σ_{n=1}^∞ μ_n converges in ℝ and Σ_{n=1}^∞ σ_n² converges in ℝ. Then Σ_{n=1}^∞ X_n converges in ℝ almost surely.
Proof
Assume WLOG μ_n = 0. Set S_N = Σ_{n=1}^N X_n, and we will see that lim sup_N S_N − lim inf_N S_N = 0 with probability 1.
For every N,
lim sup_{n→∞} S_n − lim inf_{n→∞} S_n ≤ 2 sup_{k>N} |S_k − S_N|.
Thus, for every N and ε > 0,
P(lim sup_{n→∞} S_n − lim inf_{n→∞} S_n ≥ ε) ≤ P(2 sup_{k>N} |S_k − S_N| ≥ ε) ≤ (4/ε²) Σ_{k=N+1}^∞ σ_k²,
where the second inequality is due to Kolmogorov's inequality.
By the assumption that Σ_{n=1}^∞ σ_n² converges, it follows that the last term tends to 0 when N → ∞, for every arbitrary ε > 0.
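A quick numerical illustration (not a proof), assuming the example variables X_n = ±1/n with equal probability: every mean is zero and the variances 1/n² are summable, so the theorem applies, and along each simulated path the partial sums barely move between N = 1,000 and N = 100,000.

import numpy as np

rng = np.random.default_rng(42)
N = 100_000
n = np.arange(1, N + 1)

# X_n = +/- 1/n with equal probability: means are 0 and the variances 1/n^2 are summable.
for path in range(3):
    x = rng.choice([-1.0, 1.0], size=N) / n
    partial_sums = np.cumsum(x)
    print(f"path {path}: S_1000 = {partial_sums[999]:+.4f}, S_100000 = {partial_sums[-1]:+.4f}")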
|
https://en.wikipedia.org/wiki/Sensory%20substitution
|
Sensory substitution is a change of the characteristics of one sensory modality into stimuli of another sensory modality.
A sensory substitution system consists of three parts: a sensor, a coupling system, and a stimulator. The sensor records stimuli and gives them to a coupling system which interprets these signals and transmits them to a stimulator. If the sensor obtains signals of a kind not originally available to the bearer, it is a case of sensory augmentation. Sensory substitution concerns human perception and the plasticity of the human brain, and therefore allows us to study these aspects of neuroscience through neuroimaging.
Sensory substitution systems may help people by restoring their ability to perceive certain defective sensory modality by using sensory information from a functioning sensory modality.
History
The idea of sensory substitution was introduced in the 1980s by Paul Bach-y-Rita as a means of using one sensory modality, mainly tactition, to gain environmental information to be used by another sensory modality, mainly vision. Thereafter, the entire field was discussed by Chaim-Meyer Scheff in "Experimental model for the study of changes in the organization of human sensory information processing through the design and testing of non-invasive prosthetic devices for sensory impaired people". The first sensory substitution system was developed by Bach-y-Rita et al. as a means of brain plasticity in congenitally blind individuals. After this historic invention, sensory substitution has been the basis of many studies investigating perceptive and cognitive neuroscience. Sensory substitution is often employed to investigate predictions of the embodied cognition framework. Within the theoretical framework specifically the concept of sensorimotor contingencies is investigated utilizing sensory substitution. Furthermore, sensory substitution has contributed to the study of brain function, human cognition and rehabilitation.
Physiology
W
|
https://en.wikipedia.org/wiki/Voitenko%20compressor
|
The Voitenko compressor is a shaped charge adapted from its original purpose of piercing thick steel armour to the task of accelerating shock waves. It was proposed by Anatoly Emelyanovich Voitenko (Анатолий Емельянович Войтенко), a Soviet scientist, in 1964. It slightly resembles a wind tunnel.
The Voitenko compressor initially separates a test gas from a shaped charge with a malleable steel plate. When the shaped charge detonates, most of its energy is focused on the steel plate, driving it forward and pushing the test gas ahead of it. Ames Research Center translated this idea into a self-destroying shock tube. A shaped charge accelerated the gas in a 3-cm glass-walled tube 2 meters in length. The velocity of the resulting shock wave was a phenomenal . The apparatus exposed to the detonation was, of course, completely destroyed, but not before useful data was extracted. In a typical Voitenko compressor, a shaped charge accelerates hydrogen gas, which in turn accelerates a thin disk up to about 40 km/s. A slight modification to the Voitenko compressor concept is a super-compressed detonation, a device that uses a compressible liquid or solid fuel in the steel compression chamber instead of a traditional gas mixture. A further extension of this technology is the explosive diamond anvil cell, utilizing multiple opposed shaped-charge jets projected at a single steel-encapsulated fuel, such as hydrogen. The fuels used in these devices, along with the secondary combustion reactions and long blast impulse, produce similar conditions to those encountered in fuel-air and thermobaric explosives.
This method of detonation produces energies over 100 keV (~109 K temperatures), suitable not only for nuclear fusion, but other higher-order quantum reactions as well. The UTIAS explosive-driven-implosion facility was used to produce stable, centered and focused hemispherical implosions to generate neutrons from D–D reactions. The simplest and most direct method proved to be in a
|
https://en.wikipedia.org/wiki/Bufothionine
|
Bufothionine is a sulfur-containing compound which is present in the bufotoxins secreted by the parotoid gland of certain toads of the genera Bufo and Chaunus. This specific compound can be found in the skin of certain species of toad such as the Asiatic Toad, Chaunus arunco, Chaunus crucifer, Chaunus spinulosus, and Chaunus arenarum.
Research
In ancient times, cinobufacini, which is extracted from the skin and the parotid venom glands of toads of the genus Bufo, was used to treat symptoms like swelling and pain. At present, cinobufacini injections are used to achieve a satisfactory effect on hepatocellular carcinoma (HCC) in China. Bufothionine is a major active component of cinobufacini. Bufothionine has been shown to suppress the growth of cancerous liver cells in vitro. In vivo, bufothionine has also been shown to relieve symptoms and exert anti-inflammatory activity in tumor-bearing mice. Experiments were conducted in which cultured cancer cells were shown to have an increase in G2-M damage checkpoint activity, ensuring that growth of the cell will not continue until the damage to the DNA is corrected, while also showing a drop in G0 and G1 activity, which pertains to the phases where there is cell growth and RNA production.
Bufothionine has been shown to induce autophagy in hepatocellular carcinomas by inhibiting JAK2/STAT3 pathways, which may point to a possible anticancer mechanism of bufothionine delivered through cinobufacini injections. Similarly, bufothionine has also been shown to increase the chances of cell death and decrease cell growth of gastric cancer related cells by inhibiting the PIM3 gene, which, in cancerous cells, increases resistance to chemotherapeutic treatments. In glioblastoma multiforme (GBM), bufothionine presents anti-tumor activities in the GBM cell lines U87 and U373 by triggering endoplasmic reticulum stress that leads to cell death in the U87 and U373 cells.
|
https://en.wikipedia.org/wiki/Mimecast
|
Mimecast Limited is an American–British, Jersey-domiciled company specializing in cloud-based email management for Microsoft Exchange and Microsoft Office 365, including security, archiving, and continuity services to protect business mail.
History
Mimecast was founded in 2003 by Peter Bauer and Neil Murray. It has offices in London, Boston, Chicago, San Francisco, Dallas, Cape Town, Johannesburg, Melbourne, Amsterdam, Munich and Israel. On October 16, 2015, Mimecast announced that it filed its registration statement for a proposed initial public offering (IPO). Mimecast began trading on the Nasdaq Global Select Market under the ticker symbol "MIME" on November 19, 2015. The offering closed on November 24, 2015.
Acquisitions
On July 10, 2018, Mimecast acquired cybersecurity training start up Ataata.
On July 31, 2018, Mimecast acquired Solebit.
On November 14, 2019, Mimecast acquired DMARC Analyzer.
On January 6, 2020, Mimecast acquired Segasec.
On May 19, 2022, Mimecast was acquired by and became a wholly-owned subsidiary of Magnesium Bidco Limited, an affiliate of Permira Holdings Ltd.
Founding
Mimecast co-founder and CEO, Peter Bauer, previously founded FAB Technology in the mid-nineties and sold it to Idion. Earlier, Peter trained as a Microsoft systems engineer and worked with corporate messaging systems. Mimecast co-founder and CTO is Neil Murray, previously CTO at Global Technology Services and founder of Pro-Solutions.
Other executives include Mimecast Chief Scientist Nathaniel Borenstein, who was amongst the original designers of the MIME protocol for formatting multimedia Internet electronic mail - he sent the world's first e-mail attachment on 11 March 1992.
Technology
The service uses a massively-parallel grid infrastructure for email storage and processing through geographically dispersed data centers. Its Mail Transfer Agent provides intelligent email routing based on server or user mailbox location.
Email Security
Secure Email Gateway:
|
https://en.wikipedia.org/wiki/Alewife%20%28multiprocessor%29
|
Alewife was a cache coherent multiprocessor developed in the early 1990s by a group led by Anant Agarwal at the Massachusetts Institute of Technology. It was based on a network of up to 512 processing nodes, each of which used the Sparcle computer architecture, which was formed by modifying a Sun Microsystems SPARC CPU to include the APRIL techniques for fast context switches.
The Alewife project was one of two predecessors cited by the creators of the popular Beowulf cluster multiprocessor.
|
https://en.wikipedia.org/wiki/Velvet%20assembler
|
Velvet is an algorithm package that has been designed to deal with de novo genome assembly and short read sequencing alignments. This is achieved through the manipulation of de Bruijn graphs for genomic sequence assembly via the removal of errors and the simplification of repeated regions. Velvet has also been implemented in commercial packages, such as Sequencher, Geneious, MacVector and BioNumerics.
Introduction
The development of next-generation sequencers (NGS) allowed for increased cost effectiveness on very short read sequencing. The manipulation of de Bruijn graphs as a method for alignment became more realistic but further developments were needed to address issues with errors and repeats. This led to the development of Velvet by Daniel Zerbino and Ewan Birney at the European Bioinformatics Institute in the United Kingdom.
Velvet works by efficiently manipulating de Bruijn graphs through simplification and compression, without the loss of graph information, by converging non-intersecting paths into single nodes. It eliminates errors and resolves repeats by first using an error correction algorithm that merges sequences together. Repeats are then removed from the sequence via the repeat solver that separates paths which share local overlaps.
The combination of short reads and read pairs allows Velvet to resolve small repeats and produce contigs of reasonable length. This application of Velvet can produce contigs with a N50 length of 50 kb on paired-end prokaryotic data and a 3 kb length for regions of mammalian data.
Algorithm
As already mentioned, Velvet uses the de Bruijn graph to assemble short reads. More specifically, Velvet represents each different k-mer obtained from the reads by a unique node on the graph. Two nodes are connected if their k-mers have a k-1 overlap. In other words, an arc from node A to node B exists if the last k-1 characters of the k-mer represented by A are the first k-1 characters of the k-mer represented by B. The following
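A toy sketch (not Velvet's actual implementation) of the k-mer graph just described: nodes are the distinct k-mers of the reads, and an arc joins A to B whenever the last k-1 characters of A equal the first k-1 characters of B. The reads and the value of k below are made up.

from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Toy k-mer graph: arc A -> B when A's (k-1)-suffix equals B's (k-1)-prefix."""
    kmers = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmers.add(read[i:i + k])
    by_prefix = defaultdict(set)
    for kmer in kmers:
        by_prefix[kmer[:k - 1]].add(kmer)
    return {kmer: sorted(by_prefix[kmer[1:]]) for kmer in kmers}

# Made-up reads, k = 4.
for node, successors in sorted(de_bruijn_graph(["ACGTACGG", "CGTACGGA"], 4).items()):
    print(node, "->", successors)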
|
https://en.wikipedia.org/wiki/Mir-330%20microRNA%20precursor%20family
|
In molecular biology, mir-330 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
See also
MicroRNA
|