| source | text |
|---|---|
https://en.wikipedia.org/wiki/Fibromyalgia | Fibromyalgia is a medical condition defined by the presence of chronic widespread pain, fatigue, waking unrefreshed, cognitive symptoms, lower abdominal pain or cramps, and depression. Other symptoms include insomnia and a general hypersensitivity.
The cause of fibromyalgia is unknown, but is believed to involve a combination of genetic and environmental factors. Environmental factors may include psychological stress, trauma, and certain infections. The pain appears to result from processes in the central nervous system and the condition is referred to as a "central sensitization syndrome".
The treatment of fibromyalgia is symptomatic and multidisciplinary. The European Alliance of Associations for Rheumatology strongly recommends aerobic and strengthening exercise. Weak recommendations are given to mindfulness, psychotherapy, acupuncture, hydrotherapy, and meditative exercise such as qigong, yoga, and tai chi. The use of medication in the treatment of fibromyalgia is debated, although antidepressants can improve quality of life. The medications duloxetine, milnacipran, and pregabalin have been approved by the US Food and Drug Administration (FDA) for the management of fibromyalgia. Other commonly used medications include serotonin-noradrenaline reuptake inhibitors, nonsteroidal anti-inflammatory drugs, and muscle relaxants. Coenzyme Q10 and vitamin D supplements may reduce pain and improve quality of life. While fibromyalgia is persistent in nearly all patients, it does not result in death or tissue damage.
Fibromyalgia is estimated to affect 2–4% of the population. Women are affected about twice as often as men. Rates appear similar in different areas of the world and among different cultures. Fibromyalgia was first defined in 1990, with updated criteria in 2011, 2016, and 2019. The term "fibromyalgia" is from Neo-Latin fibro-, meaning "fibrous tissues", Greek μυο- myo-, "muscle", and Greek άλγος algos, "pain"; thus, the term literally means "muscle and fibrous tissue pain". |
https://en.wikipedia.org/wiki/EF%20hand | The EF hand is a helix–loop–helix structural domain or motif found in a large family of calcium-binding proteins.
The EF-hand motif contains a helix–loop–helix topology, much like the spread thumb and forefinger of the human hand, in which the Ca2+ ions are coordinated by ligands within the loop. The motif takes its name from traditional nomenclature used in describing the protein parvalbumin, which contains three such motifs and is probably involved in muscle relaxation via its calcium-binding activity.
The EF-hand consists of two alpha helices linked by a short loop region (usually about 12 amino acids) that usually binds calcium ions. EF-hands also appear in each structural domain of the signaling protein calmodulin and in the muscle protein troponin-C.
Calcium ion binding site
The calcium ion is coordinated in a pentagonal bipyramidal configuration. The six residues involved in the binding are in positions 1, 3, 5, 7, 9 and 12; these residues are denoted by X, Y, Z, -Y, -X and -Z. The invariant Glu or Asp at position 12 provides two oxygens for liganding calcium (bidentate ligand).
The calcium ion is bound by both protein backbone atoms and by amino acid side chains, specifically those of the anionic amino acid residues aspartate and glutamate. These residues are negatively charged and will make a charge-interaction with the positively charged calcium ion. The EF hand motif was among the first structural motifs whose sequence requirements were analyzed in detail. Five of the loop residues bind calcium and thus have a strong preference for oxygen-containing side chains, especially aspartate and glutamate. The sixth residue in the loop is necessarily glycine due to the conformational requirements of the backbone. The remaining residues are typically hydrophobic and form a hydrophobic core that binds and stabilizes the two helices.
Upon binding to Ca2+, this motif may undergo conformational changes that enable Ca2+-regulated functions, as seen in Ca2+ effector proteins. |
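As a rough illustration of the loop sequence requirements described above, the following sketch scans a protein sequence for 12-residue windows satisfying a deliberately simplified version of the canonical pattern: oxygen-bearing side chains at liganding positions 1, 3 and 5, glycine at position 6, and an invariant Asp/Glu at position 12. It is a toy filter for illustration, not a validated motif predictor, and the class and method names are invented for the example.

import java.util.ArrayList;
import java.util.List;

// Toy EF-hand loop scanner (illustrative only, not a validated predictor).
public class EfHandScan {
    // Oxygen-containing side chains that commonly ligand the calcium ion.
    private static final String LIGANDING = "DENQST";

    // Returns 0-based start indices of candidate 12-residue loops.
    static List<Integer> candidateLoops(String seq) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i + 12 <= seq.length(); i++) {
            boolean ok = LIGANDING.indexOf(seq.charAt(i)) >= 0     // position 1  (X)
                    && LIGANDING.indexOf(seq.charAt(i + 2)) >= 0   // position 3  (Y)
                    && LIGANDING.indexOf(seq.charAt(i + 4)) >= 0   // position 5  (Z)
                    && seq.charAt(i + 5) == 'G'                    // position 6: glycine
                    && "DE".indexOf(seq.charAt(i + 11)) >= 0;      // position 12 (-Z): bidentate Glu/Asp
            // Positions 7 and 9 often ligand via backbone carbonyl or water,
            // so their side-chain identity is not constrained here.
            if (ok) hits.add(i);
        }
        return hits;
    }

    public static void main(String[] args) {
        // First calcium-binding loop of calmodulin.
        System.out.println(candidateLoops("DKDGDGTITTKE")); // [0]
    }
}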
https://en.wikipedia.org/wiki/Vouch%20by%20Reference | Vouch by Reference (VBR) is a protocol used in Internet mail systems for implementing sender certification by third-party entities. Independent certification providers vouch for the reputation of senders by verifying the domain name that is associated with transmitted electronic mail. VBR information can be used by a message transfer agent, a mail delivery agent or by an email client.
The protocol is intended to become a standard for email sender certification, and is described in RFC 5518.
Operation
Email sender
A user of a VBR email certification service signs its messages using DomainKeys Identified Mail (DKIM) and includes a VBR-Info field in the signed header. The sender may also use the Sender Policy Framework to authenticate its domain name. The VBR-Info: header field contains the domain name that is being certified, typically the responsible domain in a DKIM signature (d= tag), the type of content in the message, and a list of one or more vouching services, that is the domain names of the services that vouch for the sender for that kind of content:
VBR-Info: md=domain.name.example; mc=type; mv=vouching.example:vouching2.example
Email receiver
An email receiver can authenticate the message's domain name using DKIM or SPF, thus finding the domains that are responsible for the message. It then obtains the name of a vouching service that it trusts, either from among the set supplied by the sender or from a locally configured set of preferred vouching services. Using the Domain Name System, the receiver can verify whether a vouching service actually vouches for a given domain. To do so, the receiver queries a TXT resource record for the name composed:
domain.name.example._vouch.vouching.example
The returned data, if any, is a space-delimited list of all the types that the service vouches for, given as lowercase ASCII. They should match the self-asserted message content. The types defined are transaction, list, and all. Auditing the message may allow to |
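A minimal sketch of the receiver-side check just described, assuming the TXT record has already been fetched by whatever DNS resolver is in use; the class and method names are illustrative, not part of RFC 5518:

// Receiver-side VBR helper (illustrative sketch; DNS lookup done elsewhere).
public class VbrCheck {
    // Builds the owner name to query for a TXT record, e.g.
    // domain.name.example._vouch.vouching.example
    static String vouchQueryName(String senderDomain, String vouchingService) {
        return senderDomain + "._vouch." + vouchingService;
    }

    // TXT data is a space-delimited, lowercase list of vouched types;
    // "all" is treated here as covering any self-asserted content type.
    static boolean vouchesFor(String txtData, String messageContentType) {
        if (txtData == null) return false; // no record: the service does not vouch
        for (String type : txtData.trim().split("\\s+")) {
            if (type.equals("all") || type.equals(messageContentType)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(vouchQueryName("domain.name.example", "vouching.example"));
        System.out.println(vouchesFor("transaction list", "list")); // true
    }
}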
https://en.wikipedia.org/wiki/Likelihood-ratio%20test | In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models, specifically one found by maximization over the entire parameter space and another found after imposing some constraint, based on the ratio of their likelihoods. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
The likelihood-ratio test, also known as the Wilks test, is the oldest of the three classical approaches to hypothesis testing, together with the Lagrange multiplier test and the Wald test. In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent. In the case of comparing two models each of which has no unknown parameters, use of the likelihood-ratio test can be justified by the Neyman–Pearson lemma. The lemma demonstrates that the test has the highest power among all competitors.
Definition
General
Suppose that we have a statistical model with parameter space $\Theta$. A null hypothesis is often stated by saying that the parameter $\theta$ lies in a specified subset $\Theta_0$ of $\Theta$. The alternative hypothesis is thus that $\theta$ lies in the complement of $\Theta_0$, i.e. in $\Theta \setminus \Theta_0$, which is denoted by $\Theta_0^{\mathrm{c}}$. The likelihood ratio test statistic for the null hypothesis $H_0 : \theta \in \Theta_0$ is given by:

$$\lambda_{\mathrm{LR}} = -2 \ln \left[ \frac{\sup_{\theta \in \Theta_0} \mathcal{L}(\theta)}{\sup_{\theta \in \Theta} \mathcal{L}(\theta)} \right]$$

where the quantity inside the brackets is called the likelihood ratio. Here, $\sup$ denotes the supremum. As all likelihoods are positive, and as the constrained maximum cannot exceed the unconstrained maximum, the likelihood ratio is bounded between zero and one.

Often the likelihood-ratio test statistic is expressed as a difference between the log-likelihoods

$$\lambda_{\mathrm{LR}} = -2 \left[ \ell(\theta_0) - \ell(\hat{\theta}) \right]$$

where

$$\ell(\hat{\theta}) \equiv \ln \left[ \sup_{\theta \in \Theta} \mathcal{L}(\theta) \right]$$

is the logarithm of the maximized likelihood function $\mathcal{L}$, and $\ell(\theta_0)$ is the maximal value in the special case that the null hypothesis is true. |
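As a small worked illustration (not from the article): for $n$ independent observations $x_1, \dots, x_n$ from an exponential distribution with rate $\theta$, the log-likelihood is $\ell(\theta) = n \ln \theta - \theta \sum_i x_i$, maximized at $\hat{\theta} = 1/\bar{x}$. Testing $H_0 : \theta = \theta_0$ then gives

$$\lambda_{\mathrm{LR}} = -2\left[\ell(\theta_0) - \ell(\hat{\theta})\right] = 2n\left[\theta_0 \bar{x} - 1 - \ln(\theta_0 \bar{x})\right],$$

which is nonnegative (since $u - 1 - \ln u \ge 0$ for $u > 0$) and, under $H_0$, asymptotically $\chi^2$-distributed with one degree of freedom by Wilks' theorem.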
https://en.wikipedia.org/wiki/Moran%27s%20I | In statistics, Moran's I is a measure of spatial autocorrelation developed by Patrick Alfred Pierce Moran. Spatial autocorrelation is characterized by a correlation in a signal among nearby locations in space. Spatial autocorrelation is more complex than one-dimensional autocorrelation because spatial correlation is multi-dimensional (i.e. 2 or 3 dimensions of space) and multi-directional.
Global Moran's I
Global Moran's I is a measure of the overall clustering of the spatial data. It is defined as

$$I = \frac{N}{W} \, \frac{\sum_{i}\sum_{j} w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i} (x_i - \bar{x})^2}$$

where

$N$ is the number of spatial units indexed by $i$ and $j$;
$x$ is the variable of interest;
$\bar{x}$ is the mean of $x$;
$w_{ij}$ are the elements of a matrix of spatial weights with zeroes on the diagonal (i.e., $w_{ii} = 0$);
and $W$ is the sum of all $w_{ij}$ (i.e. $W = \sum_{i}\sum_{j} w_{ij}$).
Defining weights matrices
The value of $I$ can depend quite a bit on the assumptions built into the spatial weights matrix $w$. The matrix is required because, in order to address spatial autocorrelation and also model spatial interaction, we need to impose a structure to constrain the number of neighbors to be considered. This is related to Tobler's first law of geography, which states that "everything depends on everything else, but closer things more so". In other words, the law implies a spatial distance decay function, such that even though all observations have an influence on all other observations, after some distance threshold that influence can be neglected.
The idea is to construct a matrix that accurately reflects your assumptions about the particular spatial phenomenon in question. A common approach is to give a weight of 1 if two zones are neighbors, and 0 otherwise, though the definition of 'neighbors' can vary. Another common approach is to give a weight of 1 to the $k$ nearest neighbors, and 0 otherwise. An alternative is to use a distance decay function for assigning weights. Sometimes the length of a shared edge is used for assigning different weights to neighbors. The selection of spatial weights matrix should be guided by theory about |
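A short sketch of the computation defined above, written as a direct transcription of the formula; the caller supplies the data vector and a spatial weight matrix with a zero diagonal (here a simple binary-contiguity example), and all names are illustrative:

// Global Moran's I, transcribed directly from the formula above.
public class MoransI {
    static double moransI(double[] x, double[][] w) {
        int n = x.length;
        double mean = 0;
        for (double v : x) mean += v;
        mean /= n;

        double num = 0;  // weighted cross-products of deviations
        double den = 0;  // sum of squared deviations
        double sumW = 0; // W: the sum of all w[i][j]
        for (int i = 0; i < n; i++) {
            den += (x[i] - mean) * (x[i] - mean);
            for (int j = 0; j < n; j++) {
                num += w[i][j] * (x[i] - mean) * (x[j] - mean);
                sumW += w[i][j];
            }
        }
        return (n / sumW) * (num / den);
    }

    public static void main(String[] args) {
        // Four zones along a line; weight 1 if two zones are neighbors, 0 otherwise.
        double[] x = {1, 2, 8, 9};
        double[][] w = {
            {0, 1, 0, 0},
            {1, 0, 1, 0},
            {0, 1, 0, 1},
            {0, 0, 1, 0},
        };
        System.out.println(moransI(x, w)); // ~0.4: positive, similar values cluster
    }
}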
https://en.wikipedia.org/wiki/Network-Integrated%20Multimedia%20Middleware | The Network-Integrated Multimedia Middleware (NMM) is a flow graph based multimedia framework. NMM allows creating distributed multimedia applications: local and remote multimedia devices or software components can be controlled transparently and integrated into a common multimedia processing flow graph. NMM is implemented in C++, a programming language, and NMM-IDL, an interface description language (IDL). NMM is a set of cross-platform libraries and applications for the operating systems Linux, OS X, Windows, and others. A software development kit (SDK) is also provided.
NMM is released under dual-licensing. The Linux, OS X, and PS3 versions are distributed for free as open-source software under the terms and conditions of the GNU General Public License (GPL). The Windows version is distributed for free as binary version under the terms and conditions of the NMM Non-Commercial License (NMM-NCL). All NMM versions (i.e., for all supported operating systems) are also distributed under a commercial license with full warranty, which allows developing closed-source proprietary software atop NMM.
See also
Java Media Framework
DirectShow
QuickTime
Helix DNA
MPlayer
VLC media player (VLC)
Video wall
Sources
Linux gains open source multimedia middleware
KDE to gain cutting-edge multimedia technology
Multimedia barriers drop at CeBIT in March
A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing
External links
NMM homepage
Computer networking |
https://en.wikipedia.org/wiki/Landau%20kernel | The Landau kernel is named after the German number theorist Edmund Landau. The kernel is a summability kernel defined as:

$$L_n(t) = \begin{cases} \dfrac{(1-t^2)^n}{c_n} & \text{if } -1 \le t \le 1 \\ 0 & \text{otherwise} \end{cases}$$

where the coefficients $c_n$ are defined as follows:

$$c_n = \int_{-1}^{1} (1-t^2)^n \, dt$$

Visualisation

Using integration by parts, one can show that:

$$c_n = \frac{2^{2n+1}\,(n!)^2}{(2n+1)!}$$

Hence, this implies that the Landau kernel can be defined as follows:

$$L_n(t) = \begin{cases} \dfrac{(2n+1)!}{2^{2n+1}(n!)^2}\,(1-t^2)^n & \text{if } -1 \le t \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Plotting this function for different values of $n$ reveals that, as $n$ goes to infinity, $L_n$ approaches the Dirac delta function.
Properties
Some general properties of the Landau kernel are that it is nonnegative and continuous on $\mathbb{R}$. These properties are made more concrete in the following section.
Dirac sequences
A sequence of functions $(K_n)$ is called a Dirac sequence if:
- $K_n(t) \ge 0$ for all $t$ and all $n$;
- $\int_{-\infty}^{\infty} K_n(t)\,dt = 1$ for all $n$;
- for every $\delta > 0$, $\int_{|t| \ge \delta} K_n(t)\,dt \to 0$ as $n \to \infty$.

The third bullet point means that the area under the graph of the function becomes increasingly concentrated close to the origin as n approaches infinity. This definition leads us to the following theorem: the Landau kernels form a Dirac sequence.
Proof: We prove the third property only. In order to do so, we introduce the following lemma:

Lemma: The coefficients satisfy $c_n \ge \frac{2}{n+1}$ for every $n$.

Proof of the Lemma:

Using the definition of the coefficients above and the fact that the integrand is even, we may write

$$c_n = \int_{-1}^{1}(1-t^2)^n\,dt = 2\int_{0}^{1}(1-t)^n(1+t)^n\,dt \ge 2\int_{0}^{1}(1-t)^n\,dt = \frac{2}{n+1},$$

completing the proof of the lemma. A corollary of this lemma is the following:
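A standard form of this corollary, reconstructed here from the lemma's bound $c_n \ge 2/(n+1)$ together with $(1-t^2)^n \le (1-\delta^2)^n$ for $\delta \le |t| \le 1$:

$$\int_{\delta \le |t| \le 1} L_n(t)\,dt \;\le\; 2(1-\delta)\,\frac{(1-\delta^2)^n}{c_n} \;\le\; (n+1)(1-\delta^2)^n \longrightarrow 0 \quad \text{as } n \to \infty,$$

which is exactly the third Dirac-sequence property for the Landau kernels, since $(n+1)r^n \to 0$ for any $0 \le r < 1$.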
See also
Poisson Kernel
Fejer Kernel
Dirichlet Kernel |
https://en.wikipedia.org/wiki/Two-factor%20theory%20of%20intelligence | Charles Spearman developed his two-factor theory of intelligence using factor analysis. His research not only led him to develop the concept of the g factor of general intelligence, but also the s factor of specific intellectual abilities. L. L. Thurstone, Howard Gardner, and Robert Sternberg also researched the structure of intelligence, and in analyzing their data, concluded that a single underlying factor was influencing the general intelligence of individuals. However, Spearman was criticized in 1916 by Godfrey Thomson, who claimed that the evidence was not as crucial as it seemed. Modern research is still expanding this theory by investigating Spearman's law of diminishing returns, and adding connected concepts to the research.
Spearman's two-factor theory of intelligence
In 1904, Charles Spearman developed a statistical procedure called factor analysis. In factor analysis, related variables are tested for correlation to each other, then the correlations of the related items are evaluated to find clusters or groups of the variables. Spearman tested how well people performed on various tasks relating to intelligence. Such tasks included distinguishing pitch, perceiving weights and colors, directions, and mathematics. When analyzing the data he collected, Spearman noted that those who did well in one area also scored higher in other areas. With this data, Spearman concluded that there must be one central factor that influences our cognitive abilities. Spearman termed this general intelligence g.
Structure of intelligence debate
Due to the controversy of the structure of intelligence, other psychologists also published their relevant research. Other than Charles Spearman, three others developed a hypothesis regarding the structure of intelligence. L. L. Thurstone tested subjects on 56 different abilities; from his data he established seven primary mental abilities relating to intelligence. He categorized them as: spatial ability, numerical ability, word |
https://en.wikipedia.org/wiki/Protocooperation | Protocooperation is an interaction in which two species benefit from each other but have no need to interact: they interact purely for the gain each receives from doing so. Protocooperation is not necessary for either species; growth and survival are possible in the absence of the interaction. The interaction that occurs can be between different kingdoms.
The term, initially used for intraspecific interactions, was popularized by Eugene Odum (1953), although other authors prefer to use the terms "cooperation" or "mutualism".
Mutualism
Protocooperation is a form of mutualism, but the cooperating species do not depend on each other for survival. An example of protocooperation occurs between soil bacteria or fungi and the plants growing in that soil. None of the species rely on the relationship for survival, but all of the fungi, bacteria and higher plants take part in shaping soil composition and fertility. Soil bacteria and fungi interrelate with each other, producing nutrients that the plants can use. The plants obtain nutrients from root nodules and decomposing organic matter, and benefit from the mineral nutrients and carbon dioxide. The plants do not need these mineral nutrients to survive, but the nutrients do help the plants grow further.
Examples
Ants and aphids
A further example of protocooperation is the connection between ants and aphids. The ant searches for food on trees and shrubs that host honeydew-secreting species such as aphids, mealybugs, and some scales. The ant gathers the sugary substance and takes it to its nest as food for its offspring. Ants have been known to stimulate aphids to secrete honeydew straight into their mouths. Some ant species even protect the honeydew producers from natural predators. In areas where ants inhabit the same ecosystem as aphids, the plants they occupy normally suffer from a higher presence of aphids, which is detrimental to the plant but not to |
https://en.wikipedia.org/wiki/Daylife | Daylife was an online publishing company which offered cloud-based tools for web publishers, marketers and developers. It provided digital media management tools and content feeds to publishers, brand marketers and developers. Daylife was founded in 2006, raised $15 million from several investors, including Getty Images, and was acquired in 2012 by NewsCred. The company was headquartered in downtown New York City.
Daylife's products included the Daylife Publisher Suite, a range of APIs, and a set of "hosted solutions" including Smart Topics, Smart Galleries, and Smart Sections. The hosted solutions were all launched in partnership with Getty Images, and they allowed publishers to source, manage and compose sites, media components, pages, and complete sections of content. Daylife's technology analyzed over 100,000 curated content feeds and allowed publishers to curate and automate media to enhance proprietary content.
Clients included USA Today, Bloomberg Businessweek, NPR, Mashable, Sky News, Forbes, Thomson Reuters, and over 80 others.
The company seems to have shut down after 2016.
Publisher Suite
The Daylife Publisher Suite allowed publishers and marketers to deploy on-demand media features and apps from the cloud onto any digital channel with a few clicks. All the features and apps were managed from a simple browser-based dashboard.
Smart Galleries
SmartGalleries was a suite of tools that allowed publishers to create image galleries as customizable widgets or in full-page formats. Publishers could hand-select images or automatically fill galleries based on keywords. Daylife and Getty Images launched SmartGalleries in September 2009 in conjunction with their investment announcement.
Smart Topics
SmartTopics were tools for publishers to create media-rich pages on specific topics, linking to proprietary content and related media such as videos, images, links and tweets, selected by the publisher.
Smart Sections
SmartSections were tools that allowed publishers to compose |
https://en.wikipedia.org/wiki/Nuclear%20Implosions | Nuclear Implosions: The Rise and Fall of the Washington Public Power Supply System is a 2008 book by Daniel Pope, a history professor at the University of Oregon, which traces the history of the Washington Public Power Supply System, a public agency which undertook to build five large nuclear power plants, one of the most ambitious U.S. construction projects in the 1970s.
By 1983, cost overruns and delays, along with a slowing of electricity demand growth, led to cancellation of two plants and a construction halt on two others. Moreover, the agency defaulted on $2.25 billion of municipal bonds, at the time the largest municipal bond default in U.S. history. The court case that followed took nearly a decade to resolve.
See also
Anti-nuclear movement in the United States
List of books about nuclear issues
Nuclear power in the United States
Satsop, Washington
Bond insurance |
https://en.wikipedia.org/wiki/Infiltration%20%28medical%29 | Infiltration is the diffusion or accumulation (in a tissue or cells) of foreign substances in amounts in excess of the normal. The material collected in those tissues or cells is called the infiltrate.
Definitions of infiltration
As part of a disease process, infiltration is sometimes used to define the invasion of cancer cells into the underlying matrix or the blood vessels. Similarly, the term may describe the deposition of amyloid protein. During leukocyte extravasation, white blood cells move in response to cytokines from within the blood, into the diseased or infected tissues, usually in the same direction as a chemical gradient, in a process called chemotaxis. The presence of lymphocytes in tissue in greater than normal numbers is likewise called infiltration.
As part of medical intervention, local anaesthetics may be injected at more than one point so as to infiltrate an area prior to a surgical procedure. However, the term may also apply to unintended iatrogenic leakage of fluids from phlebotomy or intravenous drug delivery procedures, a process also known as extravasation or "tissuing".
Causes
Infiltration may be caused by:
Puncture of distal vein wall during venipuncture
Puncture of any portion of the vein wall by mechanical friction from the catheter/needle cannula
Dislodgement of the catheter/needle cannula from the intima of the vein, which may be a result of a poorly secured IV device or an inappropriate choice of venous site to puncture.
Improper cannula size or excessive delivery rate of the fluid
Signs and symptoms
The signs and symptoms of infiltration include:
Inflammation at or near the insertion site with swollen, taut skin with pain
Blanching and coolness of skin around IV site
Damp or wet dressing
Slowed or stopped infusion
No backflow of blood into IV tubing on lowering the solution container.
Grading
Nursing treatment
The use of warm compresses to treat infiltration has become controversial. It has been found that cold compresses may b |
https://en.wikipedia.org/wiki/American%20Society%20of%20Naturalists | The American Society of Naturalists was founded in 1883 and is one of the oldest professional societies dedicated to the biological sciences in North America. The purpose of the Society is "to advance and diffuse knowledge of organic evolution and other broad biological principles so as to enhance the conceptual unification of the biological sciences."
Founded in Massachusetts with Alpheus Spring Packard Jr. as its first president, it was called the Society of Naturalists of the Eastern United States until 1886.
The scientific journal The American Naturalist is published on behalf of the society, which also holds an annual meeting with a scientific program of symposia and contributed papers and posters. It also confers a number of awards for achievement in evolutionary biology and/or ecology, including the Sewall Wright Award (named in honor of Sewall Wright) for senior researchers making "fundamental contributions ... to the conceptual unification of the biological sciences", the E. O. Wilson award for "significant contributions" from naturalists in mid-career, the Jasper Loftus-Hills Young Investigators Award for promising scientists early in their careers, and also the Ruth Patrick Student Poster Award. |
https://en.wikipedia.org/wiki/Sambucus%20canadensis | Sambucus canadensis, the American black elderberry, Canada elderberry, or common elderberry, is a species of elderberry native to a large area of North America east of the Rocky Mountains, south to Bolivia. It grows in a variety of conditions including both wet and dry soils, primarily in sunny locations.
Description
It is a deciduous suckering shrub growing to about 3 m (10 ft) tall. The leaves are arranged in opposite pairs, pinnate with five to nine leaflets, the leaflets around 10 cm long and 5 cm broad. In summer, it bears large corymbs of white flowers above the foliage, the individual flowers about 5 mm in diameter, with five petals.
The fruit (known as an elderberry) is a dark purple to black berry 3–5 mm diameter, produced in drooping clusters in the fall.
Taxonomy
It is closely related to the European Sambucus nigra. Some authors treat it as conspecific, under the name Sambucus nigra subsp. canadensis.
Toxicity
Inedible parts of the plant, such as the leaves, stems, roots, seeds and unripe fruits, can be toxic at lethal doses due to the presence of cyanogenic glycosides and alkaloids. Traditional methods of consuming elderberry include jams, jellies, and syrups, all of which cook down the fruit and strain out the seeds.
Unpublished research may show that S. canadensis (American elderberry) has lower cyanide levels than apple juice, and that its fruit does not contain enough beta-glucosidase (the enzyme that converts the glycosides into cyanide) to create cyanide within that biochemical pathway. For comparison, assuming S. nigra has levels of no more than 25 micrograms of cyanogenic glycosides per gram of berry weight, assuming all of the glycosides were converted to cyanide, and assuming a toxicity of 50 mg for a 50 kg vertebrate, one would need to eat 2 kilograms (~4.4 pounds) of berries in one sitting to reach the lower limits of lethal toxicity (1 mg cyanide/kg of weight). For the upper limits (3 mg/kg), one would need to eat 6 kg or ~13 pounds.
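A quick check of that arithmetic under the stated assumptions (25 µg of glycosides per gram of berry, full conversion to cyanide, and a lethal range of 1–3 mg cyanide per kg of body weight for a 50 kg individual):

$$\text{lower limit: } \frac{1\,\text{mg/kg} \times 50\,\text{kg}}{25\,\mu\text{g/g}} = \frac{50{,}000\,\mu\text{g}}{25\,\mu\text{g/g}} = 2000\,\text{g} = 2\,\text{kg}, \qquad \text{upper limit: } \frac{150{,}000\,\mu\text{g}}{25\,\mu\text{g/g}} = 6000\,\text{g} = 6\,\text{kg}.$$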
Uses
The flower, known as elderflo |
https://en.wikipedia.org/wiki/Maximum%20agreement%20subtree%20problem | The maximum agreement subtree problem is any of several closely related problems in graph theory and computer science. In all of these problems one is given a collection of trees $T_1, \dots, T_m$, each containing $n$ leaves. The leaves of these trees are given labels from some set $L$ with $|L| = n$, so that no pair of leaves in the same tree shares the same label; within the same tree the labelling of each leaf is distinct. In this problem one would like to find the largest subset $S \subseteq L$ such that the minimal spanning subtrees containing the leaves in $S$, of $T_1, \dots, T_m$, are the "same" while preserving the labelling.
Formulations
Maximum homeomorphic agreement subtree
This version requires that the subtrees are homeomorphic to one another.
Rooted maximum homeomorphic agreement subtree
This version is the same as the maximum homeomorphic agreement subtree, but we further assume that the trees $T_1, \dots, T_m$ are rooted and that the subtrees contain the root node. This version of the maximum agreement subtree problem is used for the study of phylogenetic trees. Because of its close ties with phylogeny, this formulation is often what is meant when one refers to the "maximum agreement subtree" problem.
Other variants
There exist other formulations, for example the (rooted) maximum isomorphic agreement subtree, where we require the subtrees to be isomorphic to one another.
See also
Frequent subtree mining |
https://en.wikipedia.org/wiki/Strengthen%20the%20Arm%20of%20Liberty%20Monument%20%28Fayetteville%2C%20Arkansas%29 | The Strengthen the Arm of Liberty Monument in Fayetteville, Arkansas, is a replica of the Statue of Liberty (Liberty Enlightening the World). It was placed by the Boy Scouts of America as part of its 1950s-era campaign, "Strengthen the Arm of Liberty".
It is located in front of Washington Regional Medical Center on North Hills Blvd.
The statue was removed from the National Register of Historic Places in 2012 when it was improperly moved to its new location, but was later relisted.
See also
Scouting museums
Scouting memorials
National Register of Historic Places listings in Washington County, Arkansas |
https://en.wikipedia.org/wiki/Cospeciation | Cospeciation is a form of coevolution in which the speciation of one species dictates speciation of another species; it is most commonly studied in host-parasite relationships. In the case of a host-parasite relationship, if two hosts of the same species come into close proximity of each other, parasites of the same species from each host are able to move between individuals and mate with the parasites on the other host. However, if a speciation event occurs in the host species, the parasites will no longer be able to "cross over" because the two new host species no longer mate and, if the speciation event is due to a geographic separation, it is very unlikely the two hosts will interact at all with each other. The lack of proximity between the hosts ultimately prevents the populations of parasites from interacting and mating, which can lead to speciation within the parasite.
According to Fahrenholz's rule, first proposed by Heinrich Fahrenholz in 1913, when host-parasite cospeciation has occurred, the phylogenies of the host and parasite come to mirror each other. In host-parasite phylogenies, and all species phylogenies for that matter, perfect mirroring is rare. Host-parasite phylogenies can be altered by host switching, extinction, independent speciation, and other ecological events, making cospeciation harder to detect. However, cospeciation is not limited to parasitism, but has been documented in symbiotic relationships like those of gut microbes in primates.
Fahrenholz's rule
In 1913, Heinrich Fahrenholz proposed that the phylogenies of both the host and parasite will eventually become congruent, or mirror each other when cospeciation occurs. More specifically, more closely related parasite species will be found on closely related species of host. Thus, to determine if cospeciation has occurred within a host-parasite relationship, scientists have used comparative analyses on the host and parasite phylogenies.
In 1968, Daniel Janzen proposed an |
https://en.wikipedia.org/wiki/Italian%20Federation%20of%20Agroindustrial%20Workers | The Italian Federation of Agroindustrial Workers (, FLAI) is a trade union representing workers in the food and agriculture sectors in Italy.
The union was founded in 1988, when the National Federation of Italian Agricultural Labourers and Employees merged with the Italian Federation of Sugar, Food Industry and Tobacco Workers. Like its predecessors, it affiliated to the Italian General Confederation of Labour. By 1998, it had 314,552 members, of whom 79% worked in agriculture, and the remainder in food.
General Secretaries
1988: Angelo Lana
1992: Gianfranco Benzi
2000: Franco Chiriaco
2008: Stefania Crogi
2016: Ivana Galli
2019: Giovanni Mininni
External links |
https://en.wikipedia.org/wiki/Temperature-programmed%20reduction | Temperature-programmed reduction is a technique for the characterization of solid materials and is often used in the field of heterogeneous catalysis to find the most efficient reduction conditions: an oxidized catalyst precursor is submitted to a programmed temperature rise while a reducing gas mixture is flowed over it. It was developed by John Ward Jenkins while developing heterogeneous catalysts for the Shell Oil Company, but was never patented.
Process description
A simple container (U-tube) is filled with a solid or catalyst. This sample vessel is positioned in a furnace with temperature control equipment. A thermocouple is placed in the solid for temperature measurement. The air originally present in the container is flushed out with an inert gas (nitrogen, argon). Flow controllers are used to add hydrogen (for example, 10% hydrogen in nitrogen). The composition of the gaseous mixture is measured at the exit of the sample container with appropriate detectors (thermal conductivity detector, mass spectrometer). The sample in the furnace is then heated according to a predefined temperature program, with heating rates usually between 1 K/min and 20 K/min. If a reduction takes place at a certain temperature, hydrogen is consumed, which is recorded by the detector. In practice, measuring the production of water is a more accurate way of following the reduction: the hydrogen concentration at the inlet can vary, so the measured decrease in hydrogen may be imprecise, whereas the starting concentration of water is zero, so any increase can be measured more accurately.
See also
Thermal desorption spectroscopy |
https://en.wikipedia.org/wiki/Reflective%20programming | In computer science, reflective programming or reflection is the ability of a process to examine, introspect, and modify its own structure and behavior.
Historical background
The earliest computers were programmed in their native assembly languages, which were inherently reflective, as these original architectures could be programmed by defining instructions as data and using self-modifying code. As the bulk of programming moved to higher-level compiled languages such as Algol, Cobol, Fortran, Pascal, and C, this reflective ability largely disappeared until new programming languages with reflection built into their type systems appeared.
Brian Cantwell Smith's 1982 doctoral dissertation introduced the notion of computational reflection in procedural programming languages and the notion of the meta-circular interpreter as a component of 3-Lisp.
Uses
Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization or deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication.
Effective use of reflection almost always requires a plan: a design framework, an encoding description, an object library, a map of a database, or entity relations.
Reflection makes a language more suited to network-oriented code. For example, it assists languages such as Java in operating well in networks by enabling libraries for serialization, bundling and varying data formats. Languages without reflection, such as C, are required to use auxiliary compilers for tasks like Abstract Syntax Notation to produce code for serialization and bundling.
Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal of that enclosure. This is typically accomplished by dynamically assigning program code at runtime.
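As a brief illustration of runtime examination and invocation using Java's standard java.lang.reflect API (the inspected class and invoked method are arbitrary choices for the example):

import java.lang.reflect.Method;

// Examine a class at runtime and invoke a method chosen by name.
public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Look the class up by name rather than referencing it statically.
        Class<?> clazz = Class.forName("java.lang.String");

        // Introspection: list the names of its declared methods.
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println(m.getName());
        }

        // Dynamic invocation: select and call a method at runtime.
        Object target = "hello";
        Method toUpper = clazz.getMethod("toUpperCase");
        System.out.println(toUpper.invoke(target)); // prints HELLO
    }
}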
In obje |
https://en.wikipedia.org/wiki/Casein%20kinase%202 | Casein kinase 2 (CK2/CSNK2) is a serine/threonine-selective protein kinase that has been implicated in cell cycle control, DNA repair, regulation of the circadian rhythm, and other cellular processes. De-regulation of CK2 has been linked to tumorigenesis as a potential protection mechanism for mutated cells. Proper CK2 function is necessary for the survival of cells, as no knockout models have been successfully generated.
Structure
CK2 typically appears as a tetramer of two catalytic α subunits (the α isoform is 42 kDa and the α′ isoform is 38 kDa) and two β subunits, each of 28 kDa. The β regulatory subunit has only one isoform, and therefore the tetramer always contains two β subunits. The catalytic domains appear as the α or α′ variant and can form either a homodimer (α & α, or α′ & α′) or a heterodimer (α & α′). It is worth noting that other β isoforms have been found in other organisms, but not in humans.
The α subunits do not require the β regulatory subunits to function; this allows dimers of the catalytic domains to form independently of β subunit transcription. The presence of these α subunits does have an effect on the phosphorylation targets of CK2. A functional difference between α and α′ has been found, but the exact nature of the differences is not yet fully understood. An example is that Caspase 3 is preferentially phosphorylated by α′-based tetramers over α-based tetramers.
Function
CK2 is a protein kinase responsible for phosphorylation of substrates in various pathways within a cell; ATP or GTP can be used as the phosphate source. CK2 has a dual functionality, with involvement in cell growth/proliferation and suppression of apoptosis. CK2's anti-apoptotic function lies in the continuation of the cell cycle through the G1 to S phase and G2 to M phase checkpoints. This function is achieved by protecting proteins from caspase-mediated apoptosis via phosphorylation of sites adjacent to the caspase cleavage site, blocking the activity of caspase proteins. CK2 also |
https://en.wikipedia.org/wiki/School%20of%20Biological%20Sciences%2C%20University%20of%20Manchester | The School of Biological Sciences is a School within the Faculty of Biology, Medicine and Health at The University of Manchester. Biology at the University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs |
https://en.wikipedia.org/wiki/Open%20Agent%20Architecture | Open Agent Architecture, or OAA for short, is a framework for integrating a community of heterogeneous software agents in a distributed environment. It is also a research project of the SRI International Artificial Intelligence Center.
Roughly, the architecture is that a central "blackboard" server holds a list of tasks while a group of agents executes these tasks based on their specific capabilities.
Agents working in the structure of an OAA framework are built to universal communication and functional standards and are based on the Interagent Communication Language. The language is platform-independent and allows agents to collaborate by delegating and receiving work requests.
Open Agent Architecture was first proposed in the late 1990s and was later used as a foundation for the DARPA-funded CALO artificial intelligence project. |
https://en.wikipedia.org/wiki/Fabunan%20Antiviral%20Injection | The Fabunan Antiviral Injection (FAI) is a patent medicine sold by US-based Filipino doctors Ruben and Willie Fabunan, who claim it can treat dengue fever, chikungunya, dog bite, snakebite, and HIV/AIDS.
Formulation
Fabunan contains procaine hydrochloride, a water-soluble ester anesthetic, and dexamethasone sodium phosphate, a corticosteroid with well-known anti-inflammatory and immunosuppressant properties. The solution is intended to be administered as an intramuscular injection.
Validity of claims
The patent application cites six case studies for conditions such as dengue, dengue hemorrhagic fever and AIDS, which were all conducted at the Fabunan Medical Clinic in Burgos. To date, no registered clinical trials of the Fabunan Antiviral Injection have been performed to validate the Fabunans' claims.
COVID-19 claims
Claims promoted on social media that it can cure COVID-19 are not supported by the Philippine government, which has issued a cease and desist order to Fabunan Medical Clinic in Zambales, prompting the clinic to stop its operations on April 2, 2020. On April 15, 2020, the fact-checking website Rappler warned against false claims on YouTube and Facebook that the so-called treatment had been approved, and pointed out that on April 8, 2020, the FDA warned the public against the use of drugs or vaccines that are not yet certified to treat COVID-19, particularly the Fabunan Antiviral Injection. Similarly, claims popularly spread in YouTube videos in June 2020 that Fabunan has been approved in Indonesia have been demonstrated to be false.
See also
List of unproven methods against COVID-19 |
https://en.wikipedia.org/wiki/Uridine%20diphosphate | Uridine diphosphate, abbreviated UDP, is a nucleotide diphosphate. It is an ester of pyrophosphoric acid with the nucleoside uridine. UDP consists of the pyrophosphate group, the pentose sugar ribose, and the nucleobase uracil.
UDP is an important factor in glycogenesis. Before glucose can be stored as glycogen in the liver and muscles, the enzyme UDP-glucose pyrophosphorylase forms a UDP-glucose unit by combining glucose 1-phosphate with uridine triphosphate, cleaving a pyrophosphate ion in the process. Then, the enzyme glycogen synthase combines UDP-glucose units to form a glycogen chain. The UDP molecule is cleaved from the glucose ring during this process and can be reused by UDP-glucose pyrophosphorylase.
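Written out as reaction equations, the two steps described above are (standard biochemistry, with $\text{PP}_i$ denoting the cleaved pyrophosphate ion):

$$\text{glucose 1-phosphate} + \text{UTP} \xrightarrow{\text{UDP-glucose pyrophosphorylase}} \text{UDP-glucose} + \text{PP}_i$$

$$\text{UDP-glucose} + \text{glycogen}_{(n)} \xrightarrow{\text{glycogen synthase}} \text{glycogen}_{(n+1)} + \text{UDP}$$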
See also
DNA
Nucleoside
Nucleotide
Oligonucleotide
RNA
UGGT |
https://en.wikipedia.org/wiki/Meat%20spoilage | The spoilage of meat occurs, if the meat is untreated, in a matter of hours or days and results in the meat becoming unappetizing, poisonous, or infectious. Spoilage is caused by the practically unavoidable infection and subsequent decomposition of meat by bacteria and fungi, which are borne by the animal itself, by the people handling the meat, and by their implements. Meat can be kept edible for a much longer time – though not indefinitely – if proper hygiene is observed during production and processing, and if appropriate food safety, food preservation and food storage procedures are applied.
Infection
The organisms spoiling meat may infect the animal either while still alive ("endogenous disease") or may contaminate the meat after its slaughter ("exogenous disease"). There are numerous diseases that humans may contract from endogenously infected meat, such as anthrax, bovine tuberculosis, brucellosis, salmonellosis, listeriosis, trichinosis or taeniasis.
Infected meat, however, should be eliminated through systematic meat inspection in production, and consequently, consumers will more often encounter meat exogenously spoiled by bacteria or fungi after the death of the animal. One source of infectious organisms is bacteraemia, the presence of bacteria in the blood of slaughtered animals. The large intestine of animals contains some 3.3×10¹³ viable bacteria, which may infect the flesh after death if the carcass is improperly dressed. Contamination can also occur at the slaughterhouse through the use of improperly cleaned slaughter or dressing implements, such as powered knives, on which bacteria persist. A captive bolt pistol's bolt alone may carry about 400,000 bacteria per square centimeter. After slaughter, care must be taken not to infect the meat through contact with any of the various sources of infection in the abattoir, notably the hides and soil adhering to them, water used for washing and cleaning, the dressing implements and the slaughterhouse person |
https://en.wikipedia.org/wiki/Jonas%20Kubilius | Jonas Kubilius (27 July 1921 – 30 October 2011) was a Lithuanian mathematician who worked in probability theory and number theory. He was rector of Vilnius University for 32 years, and served one term in the Lithuanian parliament.
Life and education
Kubilius was born in Fermos village, Eržvilkas county, Jurbarkas District Municipality, Lithuania on 27 July 1921. He graduated from Raseiniai high school in 1940 and entered Vilnius University, from which he graduated summa cum laude in 1946 after taking off a year to teach mathematics in middle school.
Kubilius received the Candidate of Sciences degree in 1951 from Leningrad University. His thesis, written under Yuri Linnik, was titled Geometry of Prime Numbers. He received the Doctor of Sciences degree (habilitation) in 1957 from the Steklov Institute of Mathematics in Moscow.
Career
Kubilius had simultaneous careers at Vilnius University and at the Lithuanian Academy of Sciences. He continued working at the university after receiving his bachelor's degree in 1946, and worked as a lecturer and assistant professor after receiving his Candidate degree in 1951. In 1958 he was promoted to professor and was elected rector of the university. He retired from the rector's position in 1991 after serving almost 33 years, and remained a professor in the university.
During the Khrushchev Thaw in the middle 1950s there were attempts to make the university "Lithuanian" by encouraging the use of the Lithuanian language in place of Russian and to revive the Department of Lithuanian Literature. This work was started by the rector Juozas Bulavas, but Stalinists objected and Bulavas was dismissed. Kubilius replaced him as rector and was more successful in resisting pressure to Russify the University: he returned Lithuanian language and culture to the forefront of the University. Česlovas Masaitis attributes Kubilius's success to "his ability to manipulate within the complex bureaucratic system of the Soviet Union and mainly because |
https://en.wikipedia.org/wiki/Rake%20%28angle%29 | A rake is an angle of slope measured from the horizontal or, in some contexts, from a vertical line perpendicular to the horizontal.
A 60° rake would mean that the line is pointing 60° up from horizontal, either forwards or backwards relative to the object.
Usage
Though the term may be used in a general manner, it is commonly applied in several specific contexts.
The rake of a ship's prow is the angle at which the prow rises from the water (the rake below water being called the bow rake).
A motorcycle or bicycle fork rake is the angle at which the forks slope down towards the ground. See also caster angle, which is the angular displacement of the steering axis from the vertical axis of a steered wheel.
In machining and sawing the rake angle is the angle from the cutting head to the object being worked on (with a perpendicular angle conventionally being a 0° rake).
In geology the rake is the angle at which one rock moves against another in a geological fault.
In a theatre or opera house the stage can be raked to slope up towards the back of the stage to allow better viewing for the audience.
See also
Pitch angle, one of the angular degrees of freedom of any stiff body (for example a vehicle), describing rotation about the side-to-side axis |
https://en.wikipedia.org/wiki/ASIMO | ASIMO (Advanced Step in Innovative Mobility) is a humanoid robot created by Honda in 2000. It is displayed in the Miraikan museum in Tokyo, Japan. On 8 July 2018, Honda posted the last update of ASIMO through their official page, stating that it would be ceasing all development and production of ASIMO robots in order to focus on more practical applications using the technology developed through ASIMO's lifespan. It made its last active appearance in March 2022, over 20 years after its first, as Honda announced that it was retiring the robot to concentrate on remote-controlled, avatar-style robotic technology.
There are four published models of ASIMO. A few years after the release in 2002, there were 20 units of the first ASIMO model produced. As of February 2009, there were over 100 ASIMO units in existence.
Development
Honda began developing humanoid robots in the 1980s, including several prototypes that preceded ASIMO. It was the company's goal to create a walking robot. E0 was the first bipedal (two-legged) model produced as part of the Honda E series, an early experimental line of self-regulating humanoid walking robots with wireless movement created between 1986 and 1993.
This was followed by the Honda P series of robots produced from 1993 through 1997. The research done on the E- and P-series led to the creation of ASIMO. Development began at Honda's Wako Fundamental Technical Research Center in Japan in 1999 and ASIMO was unveiled in October 2000. ASIMO is an acronym which stands for Advanced Step in Innovative Mobility. The Japanese word asi also stands for 'leg' and mo for 'mobility'; read this way, ASIMO means 'also legs'.
In 2018, Honda ceased the commercial development of ASIMO, although it will continue to be developed as a research platform and make public appearances.
Form
ASIMO stands 130 cm (4 ft 3 in) tall and weighs 54 kg (119 lb). Research conducted by Honda found that the ideal height for a mobility assistant robot was between 120 cm and the hei |
https://en.wikipedia.org/wiki/FOUP | FOUP (an acronym for Front Opening Unified Pod or Front Opening Universal Pod) is a specialized plastic carrier designed to hold silicon wafers securely and safely in a controlled environment, and to allow the wafers to be transferred between machines for processing or measurement.
FOUPs began to appear along with the first 300 mm wafer processing tools in the mid-1990s. The size of the wafers and their comparative lack of rigidity meant that SMIF pods were not a viable form factor. FOUP standards were developed by SEMI and SEMI members to ensure that FOUPs and all equipment that interacts with FOUPs work together seamlessly. Transitioning from a SMIF pod to a FOUP design, the removable cassette used to hold wafers was replaced by fixed wafer columns. The door was relocated from a bottom orientation to a front orientation, where automated handling equipment can access the wafers. Pitch for a 300 mm FOUP is 10 mm, while 13-slot FOUPs can have a pitch up to 20 mm. The weight of a fully loaded 25-wafer FOUP is between 7 and 9 kilograms, which means that automated material handling systems are essential for all but the smallest of fabrication plants. To allow this, each FOUP has coupling plates and interface holes that allow the FOUP to be positioned on a load port, and to be picked up and transferred by the AMHS (Automated Material Handling System) to other process tools or to storage locations such as a stocker or undertrack storage. FOUPs may use RF tags that allow them to be identified by RF readers on tools or AMHS. FOUPs are available in several colors, depending on the customer's wishes.
FOUPs have also begun to support having a purge gas applied by process, measurement and storage tools in an effort to increase device yield.
FOSB
FOSB is an acronym for Front Opening Shipping Box. FOSBs are used for transporting wafers between manufacturing facilities.
Manufacturers
3S Korea
CKplas
Danichi Shoji
Entegris
E-SUN System Technology
Gudeng Precisi |
https://en.wikipedia.org/wiki/Folk%20biology | Folk biology (or folkbiology) is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into obvious species-like groups. The relationship between a folk taxonomy and a scientific classification can assist in understanding how evolutionary theory deals with the apparent constancy of "common species" and the organic processes centering on them. From the vantage of evolutionary psychology, such natural systems are arguably routine "habits of mind", a sort of heuristic used to make sense of the natural world. |
https://en.wikipedia.org/wiki/Plurix | Plurix is a Unix-like operating system developed in Brazil in the early 1980s.
Overview
Plurix was developed in the Federal University of Rio de Janeiro (UFRJ), at the Electronic Computing Center (NCE).
The NCE researchers, after returning from postgraduate courses in the USA, attempted to license the UNIX source code from AT&T in the late 1970s without success. In 1982, due to AT&T refusing to license the code, a development team led by Newton Faller decided to initiate the development of an alternative system, called Plurix, using as a reference UNIX Version 7, the most recent version at the time, which they had running on an old Motorola computer system.
In 1985, the Plurix system was up and running on the Pegasus 32-X, a shared-memory, multi-processor computer also designed at NCE. Plurix was licensed to some Brazilian companies in 1988.
Two other Brazilian universities also developed their own UNIX systems: Universidade Federal de Minas Gerais (UFMG) developed the DCC-IX operating system, and University of São Paulo (USP) developed the REAL operating system in 1987.
The NCE/UFRJ also offered technical courses on OS design and implementation to local computer companies, some of which later produced their own proprietary UNIX systems. In fact, these Brazilian companies first created an organization of companies interested in UNIX (called API) and tried to license UNIX from AT&T. Their attempts were frustrated at the end of 1986, when AT&T canceled negotiations with API.
Some of these companies, EDISA, COBRA, and SOFTEC, invested in the development of their own systems, EDIX, SOX and ANALIX, respectively.
AT&T License
When AT&T finally licensed their code to Brazilian companies, the majority of them decided to drop their local development, use the licensed code, and just "localize" the system for their purposes.
COBRA and NCE/UFRJ kept developing, and tried to convince the Brazilian government to prohibit the further entrance of AT&T UNIX into Brazil, since the |
https://en.wikipedia.org/wiki/List%20of%20antiviral%20drugs | Antiviral drugs are different from antibiotics. Flu antiviral drugs are different from antiviral drugs used to treat other infectious diseases such as COVID-19. Antiviral drugs prescribed to treat COVID-19 are not approved or authorized to treat flu. |
https://en.wikipedia.org/wiki/Taurus%20Project | The Taurus Project, a German breeding project, aims to re-create the extinct aurochs, the wild ancestor of domestic cattle, by cross-breeding Heck cattle (themselves bred in the 1920s and 1930s in an attempt to replicate the aurochs) with aurochs-like cattle, mostly from Southern Europe. Herds of these cross-bred Taurus cattle have been established in Germany, Denmark, Hungary and Latvia, and are used in conservation of natural landscapes and biodiversity.
History
In 1996 a conservation group in Germany started to crossbreed Heck cattle with primitive cattle from Southern Europe, such as Chianina, Sayaguesa cattle and the Spanish fighting bull, in the Lippeaue reserve near the town of Soest. The purpose was, and is, an increased resemblance to the extinct aurochs, because the group considered Heck cattle unsatisfactory. For example, they write in one of their publications: "The 'recreations' by the Heck brothers are too small, too short-legged, not elegant and their horns are not satisfying". Therefore, the goal is to breed cattle that are considerably larger, more long-legged and long-snouted and have horns curving forwards, in addition to possessing the wild-type colour scheme that was already present in the population. In 2003 breeding herds were started in Hungary and Denmark, and in 2004 one was begun in Latvia.
Germany
In Germany, Taurus cattle herds are crossed with Chianina and Sayaguesa, two very tall breeds, and initially also the Spanish fighting bull (Toro de Lidia). The crossbred animals in the Lippeaue reserve, the most important breeding location, are composed of 47% Sayaguesa, 29% Heck cattle, 20% Chianina and 4% Lidia on average.
Taurus cattle are listed in herdbook X of the German Heck cattle association VFA. There is increasing interest among Heck cattle breeders in using Taurus cattle because of their closer resemblance to the aurochs, so that there is a continuum between Taurus cattle and un-crossed Heck cattle.
Hungary
Hortobágy National Park in Hunga |
https://en.wikipedia.org/wiki/Acute%20%28medicine%29 | In medicine, describing a disease as acute denotes that it is of recent onset; it occasionally denotes a short duration. The quantification of how much time constitutes "short" and "recent" varies by disease and by context, but the core denotation of "acute" is always qualitatively in contrast with "chronic", which denotes long-lasting disease (for example, in acute leukaemia and chronic leukaemia).
In the context of the mass noun "acute disease", it refers to the acute phase (that is, a short course) of any disease entity. For example, in an article on ulcerative enteritis in poultry, the author says, "in acute disease there may be increased mortality without any obvious signs", referring to the acute form or phase of ulcerative enteritis.
Meaning variations
A mild stubbed toe is an acute injury. Similarly, many acute upper respiratory infections and acute gastroenteritis cases in adults are mild and usually resolve within a few days or weeks.
The term "acute" is also included in the definition of several diseases, such as severe acute respiratory syndrome, acute leukaemia, acute myocardial infarction, and acute hepatitis. This is often to distinguish diseases from their chronic forms, such as chronic leukaemia, or to highlight the sudden onset of the disease, such as acute myocardial infarct.
Related terminology
Related terms include:
Acute care
Acute care is the early and specialist management of adult patients who have a wide range of medical conditions requiring urgent or emergency care usually within 48 hours of admission or referral from other specialties.
Acute hospitals are those intended for short-term medical and/or surgical treatment and care, the province of the medical speciality of acute medicine, as primary care is often not positioned to assume this role. |
https://en.wikipedia.org/wiki/Apache%20Axis | Apache Axis (Apache eXtensible Interaction System) is an open-source, XML based Web service framework. It consists of a Java and a C++ implementation of the SOAP server, and various utilities and APIs for generating and deploying Web service applications. Using Apache Axis, developers can create interoperable, distributed computing applications. Axis development takes place under the auspices of the Apache Software Foundation.
Axis for Java
When using the Java version of Axis, there are two ways to expose Java code as a Web service. The easiest is to use Axis native JWS (Java Web Service) files.
Another way is to use custom deployment. Custom deployment enables you to customize resources that should be exposed as Web services.
See also Apache Axis2.
JWS Web service creation
JWS files contain the Java class source code that should be exposed as a Web service. The main difference between an ordinary Java file and a JWS file is the file extension. Another difference is that JWS files are deployed as source code rather than as compiled class files.
The following example will expose methods add and subtract of class Calculator.
public class Calculator
{
public int add(int i1, int i2)
{
return i1 + i2;
}
public int subtract(int i1, int i2)
{
return i1 - i2;
}
}
JWS Web service deployment
Once the Axis servlet is deployed, you need only copy the jws file to the Axis directory on the server. This works if you are using an Apache Tomcat container. If you are using another web container, you will need to create a custom WAR archive.
JWS Web service access
JWS Web service is accessible using the URL http://localhost:8080/axis/Calculator.jws. If you are running a custom configuration of Apache Tomcat or a different container, the URL might be different.
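Once deployed, the service can be invoked with the Axis client API. The following is a minimal sketch assuming the default endpoint above; the class name CalculatorClient and the argument values are illustrative only:

import org.apache.axis.client.Call;
import org.apache.axis.client.Service;
import javax.xml.namespace.QName;

public class CalculatorClient
{
    public static void main(String[] args) throws Exception
    {
        // Endpoint of the JWS-deployed service (adjust host and port to your container).
        String endpoint = "http://localhost:8080/axis/Calculator.jws";
        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setTargetEndpointAddress(new java.net.URL(endpoint));
        // Invoke the remote add(int, int) method via SOAP RPC.
        call.setOperationName(new QName("add"));
        Integer sum = (Integer) call.invoke(new Object[] { 2, 3 });
        System.out.println("2 + 3 = " + sum);
    }
}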
Custom deployed Web service
Custom Web service deployment requires a specific deployment descriptor syntax called WSDD (Web Service Deployment Descriptor). It can be used to sp |
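For illustration, a minimal WSDD descriptor exposing the Calculator class above might look as follows (a sketch of the usual Axis RPC-provider form, not a descriptor taken from the article):

<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <!-- Expose the Calculator class through the Java RPC provider. -->
  <service name="Calculator" provider="java:RPC">
    <parameter name="className" value="Calculator"/>
    <parameter name="allowedMethods" value="add subtract"/>
  </service>
</deployment>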
https://en.wikipedia.org/wiki/Cocooning%20%28immunization%29 | Cocooning, also known as the Cocoon Strategy, is a vaccination strategy to protect infants and other vulnerable individuals from infectious diseases by vaccinating those in close contact with them. If the people most likely to transmit an infection are immune, their immunity creates a "cocoon" of protection around the newborn (or other vulnerable person).
Cocooning is especially commonly used for pertussis. It aims to protect newborn infants from becoming infected with pertussis by administering DTaP/Tdap (tetanus, diphtheria and acellular pertussis) booster vaccine to parents, family members and any individuals who would come into regular contact with the newborn infant. By vaccinating these individuals with a pertussis booster, a pool of persons is established around the newborn who are themselves protected from getting pertussis and passing it on to the infant, thereby creating a "cocoon" of protection around the newborn. Young infants have the highest rate of pertussis; in 87-100% of all deaths caused by pertussis, the victim is an infant of less than 6 months of age, too young to have finished acquiring vaccine-induced immunity. Adolescents and young adults whose immunity has just worn off are often infected, but very unlikely to die. They can, however, infect others. 35% to 68% of infants infected with pertussis are infected by a close contact, most commonly the mother. Cocooning prevents about 20% of infant pertussis cases; vaccination during pregnancy prevents more (33%).
Rationale
Some people cannot be fully protected from vaccine-preventable diseases by direct vaccination. These are often people with weak immune systems, who are more likely to get seriously ill. Their risk of infection can be significantly reduced if those who are most likely to infect them get the appropriate vaccines.
Vaccination works by training the immune system to react promptly to an infection, warding off illness (acquired immunity). People with weak immune systems may have dif |
https://en.wikipedia.org/wiki/Lactifluus%20corrugis | Lactifluus corrugis (formerly Lactarius corrugis), commonly known as the corrugated-cap milky, is an edible species of fungus in the family Russulaceae. It was first described by American mycologist Charles Horton Peck in 1880.
Description
Along with Lactifluus volemus, L. corrugis is considered a choice edible mushroom. The latex of both species stains brown.
See also
List of Lactifluus species |
https://en.wikipedia.org/wiki/Fairy%20ring | A fairy ring, also known as fairy circle, elf circle, elf ring or pixie ring, is a naturally occurring ring or arc of mushrooms. They are found mainly in forested areas, but also appear in grasslands or rangelands. Fairy rings are detectable by sporocarps (fungal spore pods) in rings or arcs, as well as by a necrotic zone (dead grass), or a ring of dark green grass. Fungus mycelium is present in the ring or arc underneath. The rings may grow to over 10 metres (33 ft) in diameter, and they become stable over time as the fungus grows and seeks food underground.
Fairy rings are the subject of much folklore and myth worldwide—particularly in Western Europe. They are often seen as hazardous or dangerous places, and linked with witches or the Devil in folklore. Conversely, they can sometimes be linked with good fortune.
Genesis
The mycelium of a fungus growing in the ground absorbs nutrients by secretion of enzymes from the tips of the hyphae (threads making up the mycelium). This breaks down larger molecules in the soil into smaller molecules that are absorbed through the hyphae walls near their growing tips. The mycelium moves outward from the center, and when the nutrients in the center are exhausted, the center dies, forming a living ring, from which the fairy ring arises.
There are two theories regarding the process involved in creating fairy rings. One states that the fairy ring begins with a spore from a sporocarp. The underground presence of the fungus can also cause withering or varying colour or growth of the grass above. The second theory, presented in investigations by Japanese scientists of the species Tricholoma matsutake, shows that fairy rings could be established by the connecting of neighbouring oval genets of these mushrooms. If they make an arc or a ring, they grow continuously about the centre of this object.
Necrotic or rapid growth zones
One of the manifestations of fairy ring growth is a necrotic zone—an area in which grass or other plant life has wi |
https://en.wikipedia.org/wiki/Vertical%20pressure%20variation | Vertical pressure variation is the variation in pressure as a function of elevation. Depending on the fluid in question and the context being referred to, it may also vary significantly in dimensions perpendicular to elevation as well, and these variations have relevance in the context of pressure gradient force and its effects. However, the vertical variation is especially significant, as it results from the pull of gravity on the fluid; namely, for the same given fluid, a decrease in elevation within it corresponds to a taller column of fluid weighing down on that point.
Basic formula
A relatively simple version of the vertical fluid pressure variation is simply that the pressure difference between two elevations is the product of elevation change, gravity, and density. The equation is as follows:

$$\Delta P = \rho\, g\, \Delta h,$$

where
$P$ is pressure,
$\rho$ is density,
$g$ is acceleration of gravity, and
$h$ is height.
The delta symbol indicates a change in a given variable. Since $g$ is negative (about $-9.8\ \mathrm{m/s^2}$ when the upward direction is taken as positive), an increase in height will correspond to a decrease in pressure, which fits with the previously mentioned reasoning about the weight of a column of fluid.
When density and gravity are approximately constant (that is, for relatively small changes in height), simply multiplying height difference, gravity, and density will yield a good approximation of pressure difference. If the pressure at one point in a liquid with uniform density $\rho$ is known to be $P_0$, then the pressure at another point is $P_1$:

$$P_1 = P_0 + \rho\, g\, (h_1 - h_0),$$

where $h_1 - h_0$ is the vertical distance between the two points.
Where different fluids are layered on top of one another, the total pressure difference would be obtained by adding the two pressure differences, the first being from point 1 to the boundary and the second from the boundary to point 2; this just involves substituting the $\rho$ and $g$ values for each fluid and taking the sum of the results. If the density of the fluid varies with height, mathematical integration would be required.
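As a worked illustration of the layered case (a sketch with invented numbers, not an example from the article), the following computes the gauge pressure at the bottom of 0.5 m of oil floating on 2 m of water:

public class LayeredPressure
{
    public static void main(String[] args)
    {
        double g = 9.8;           // magnitude of gravitational acceleration, m/s^2
        double rhoOil = 900.0;    // assumed oil density, kg/m^3
        double rhoWater = 1000.0; // water density, kg/m^3
        // Sum the pressure difference across each layer; depth is measured
        // downward here, so each contribution is positive.
        double deltaP = rhoOil * g * 0.5 + rhoWater * g * 2.0;
        System.out.println("Gauge pressure at the bottom: " + deltaP + " Pa"); // about 24010 Pa
    }
}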
Whether or not density and gr |
https://en.wikipedia.org/wiki/1/N%20expansion | In quantum field theory and statistical mechanics, the 1/N expansion (also known as the "large N" expansion) is a particular perturbative analysis of quantum field theories with an internal symmetry group such as SO(N) or SU(N). It consists in deriving an expansion for the properties of the theory in powers of 1/N, which is treated as a small parameter.
This technique is used in QCD (even though N is only 3 there) with the gauge group SU(3). Another application in particle physics is to the study of AdS/CFT dualities.
It is also extensively used in condensed matter physics where it can be used to provide a rigorous basis for mean-field theory.
Example
Starting with a simple example, the O(N) φ⁴ model: the scalar field φ takes on values in the real vector representation of O(N). Using the index notation for the N "flavors" with the Einstein summation convention and because O(N) is orthogonal, no distinction will be made between covariant and contravariant indices. The Lagrangian density is given by

$$\mathcal{L} = \frac{1}{2}\,\partial^\mu \phi_a\, \partial_\mu \phi_a - \frac{m^2}{2}\,\phi_a \phi_a - \frac{\lambda}{8N}\,\left(\phi_a \phi_a\right)^2,$$

where the index $a$ runs from 1 to N. Note that N has been absorbed into the coupling strength λ. This is crucial here.
Introducing an auxiliary field F, the quartic interaction can be rewritten as a term quadratic in F plus a coupling of F to $\phi_a \phi_a$; integrating F out recovers the original quartic interaction.
In the Feynman diagrams, the graph breaks up into disjoint cycles, each made up of φ edges of the same flavor, and the cycles are connected by F edges (which have no propagator line, as auxiliary fields do not propagate).
Each 4-point vertex contributes a factor of λ/N and hence of 1/N. Each flavor cycle contributes a factor of N, because there are N such flavors to sum over. Note that not all momentum-flow cycles are flavor cycles.
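Schematically (a standard counting argument paraphrased here, not quoted from the article), a diagram with $V$ quartic vertices and $F$ flavor cycles scales as

$$\left(\frac{\lambda}{N}\right)^{V} N^{F} = \lambda^{V}\, N^{F - V},$$

so at fixed order in λ the leading diagrams are those that maximize the number of flavor cycles per vertex.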
At least perturbatively, the dominant contribution to the 2k-point connected correlation function is of order $(1/N)^{k-1}$, and the other terms are higher powers of 1/N. The 1/N expansion therefore becomes more and more accurate in the large N limit. The vacuum energy density is proportional to N, but it can be ignored here, since vacuum energy becomes relevant only when gravity is taken into account.
Due to this structure, a different graphical notation to denote |
https://en.wikipedia.org/wiki/Biology%20of%20Sex%20Differences | Biology of Sex Differences is an online-only open access scientific journal covering the biological basis of sex differences in humans and other animals. It was established in 2010 and is published by BioMed Central on behalf of the Organization for the Study of Sex Differences, of which it is the official journal, as well as the Society for Women's Health Research. The editor-in-chief is Jill Becker (University of Michigan). According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.811. |
https://en.wikipedia.org/wiki/RethinkDB | RethinkDB is a free and open-source, distributed document-oriented database originally created by the company of the same name. The database stores JSON documents with dynamic schemas, and is designed to facilitate pushing real-time updates for query results to applications. Initially seed funded by Y Combinator in June 2009, the company announced in October 2016 that it had been unable to build a sustainable business and its products would in future be entirely open-sourced without commercial support.
The CNCF (Cloud Native Computing Foundation) then purchased the rights to the RethinkDB source code and contributed it to the Linux Foundation.
History
RethinkDB was founded in 2009, and open-sourced at version 1.2 in 2012. In 2015, RethinkDB released version 2.0, announcing that it was production-ready. On October 5, 2016, the company announced it was shutting down, transitioning members of its engineering team to Stripe, and would no longer offer production support. On February 6, 2017, The Cloud Native Computing Foundation purchased the rights to the source code and relicensed it under the Apache License 2.0.
ReQL
RethinkDB uses the ReQL query language, an internal (embedded) domain-specific language officially available for Ruby, Python, Java and JavaScript (including Node.js).
It has support for table joins, groupings, aggregations and functions.
There are also unofficial, community-supported drivers for other languages, including C#, Clojure, Erlang, Go, Haskell, Lua, and PHP.
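As a brief illustration with the 2.4-era official Java driver (a sketch assuming a local server; the "users" table and "age" field are invented):

import com.rethinkdb.RethinkDB;
import com.rethinkdb.net.Connection;
import com.rethinkdb.net.Result;

public class ReqlExample
{
    private static final RethinkDB r = RethinkDB.r;

    public static void main(String[] args)
    {
        // Connect to a RethinkDB server on the default client port.
        Connection conn = r.connection().hostname("localhost").port(28015).connect();
        // Group users by the "age" field and count the members of each group.
        Result<Object> counts = r.table("users").group("age").count().run(conn);
        while (counts.hasNext()) {
            System.out.println(counts.next());
        }
        conn.close();
    }
}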
Popularity
According to the DB-Engines ranking, as of February 2016, it was the 46th most popular database.
Comparison with other document databases
A distinguishing feature of RethinkDB is its first-class support for real-time change feeds. A change query returns a cursor, which allows blocking or non-blocking requests to keep track of a potentially infinite stream of real-time changes.
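With the same driver, a change feed can be consumed as a blocking cursor (again a sketch; the "games" table is hypothetical):

import com.rethinkdb.RethinkDB;
import com.rethinkdb.net.Connection;
import com.rethinkdb.net.Result;

public class ChangefeedExample
{
    private static final RethinkDB r = RethinkDB.r;

    public static void main(String[] args)
    {
        Connection conn = r.connection().hostname("localhost").port(28015).connect();
        // Each element is a change document holding the old and new values of a row;
        // next() blocks until the next change arrives.
        Result<Object> feed = r.table("games").changes().run(conn);
        while (feed.hasNext()) {
            System.out.println(feed.next());
        }
    }
}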
Fork
Due to seeming stagnation, RethinkDB was forked by members of the community on May 17, 2018 |
https://en.wikipedia.org/wiki/Conversion%20of%20units | Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric system or the SI than in others, due to the consistent base-10 relationships between units and the prefixes that increase or decrease by three powers of 10 at a time.
Overview
The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:
The precision and accuracy of measurement and the associated uncertainty of measurement.
The statistical confidence interval or tolerance interval of the initial measurement.
The number of significant figures of the measurement.
The intended use of the measurement including the engineering tolerances.
Historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot.
Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured.
By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used.
Factor-label method
The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra.
The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that an |
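As a worked illustration (standard textbook arithmetic, not taken from the article), converting 60 miles per hour to metres per second by chaining unit factors:

$$60\ \frac{\text{mi}}{\text{h}} \times \frac{1609.344\ \text{m}}{1\ \text{mi}} \times \frac{1\ \text{h}}{3600\ \text{s}} \approx 26.82\ \frac{\text{m}}{\text{s}}.$$

Each conversion factor equals 1, so multiplying by it changes the units without changing the physical quantity.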
https://en.wikipedia.org/wiki/Fractal%20in%20soil%20mechanics | A fractal is an irregular geometric object with an infinite nesting of structure at all scales. It is mainly applicable in soil chromatography and soil micromorphology (Anderson, 1997). Internal structure, pore size distribution and pore geometry can be identified by using the fractal dimension at the nano scale. As soil is heterogeneous, its pore spaces are made up of macropores, micropores and mesopores. When soil is studied at the nanoscale, the macropores are seen to be composed of micropores and mesopores, which in turn are composed of organo-mineral complexes.
The fractal approach to soil mechanics is a new line of thought. It was first raised in "Fractal Character Of Grain-Size Distribution Of Expansion Soils" by Yongfu Xu and Songyu Liu, published in 1999 in the journal Fractals. There are several problems in soil mechanics which can be dealt with by applying a fractal approach. One of these problems is the determination of the soil-water characteristic curve (also called the water retention curve or capillary pressure curve), which is a time-consuming process using the usual laboratory experiments. Many scientists have been involved in making mathematical models of the soil-water characteristic curve (SWCC) in which constants are related to the fractal dimension of the pore size distribution or particle size distribution of the soil. After Benoît Mandelbrot, the father of fractal mathematics, showed the world fractals, scientists in agronomy, agricultural engineering and the earth sciences developed more fractal-based models.
All of these models have been used to extract the hydraulic properties of soils and show the potential capabilities of fractal mathematics for investigating the mechanical properties of soils. It is therefore important to use such physically based models to promote our understanding of the mechanics of soils. They can be of great help to researchers in the area of unsaturated soil mechanics. Mechanical parameters can also be derived from such models and of course it needs |
https://en.wikipedia.org/wiki/Scleroderma%20cepa | Scleroderma cepa is an ectomycorrhizal fungus used as a soil inoculant in agriculture and horticulture. It is poisonous. |
https://en.wikipedia.org/wiki/Net6 | Net6 was a startup founded in 2000 that created products in security, Voice over IP (VoIP) protocols, and SSL VPNs.
History
Net6 was originally called WebUnwired and was founded in 2000 by Murli Thirumale (a former VP and GM at Hewlett-Packard), Goutham Rao (an operating systems architect at Intel), Jon Thies and Russell Lentini. Thirumale was the CEO of Net6 and Rao was the CTO and chief architect. The company originally focused on secure application access from mobile devices, and later shifted focus toward VoIP protocols and SSL VPN technology. The company created two hardware appliances: the Application Gateway for VoIP and the Access Gateway for SSL VPNs.
Partnerships
Net6 secured OEM deals for its product lines in 2000 from Cisco Systems. In 2002 and 2003, Net6 added further OEMs for its product lines from Nortel Networks, Avaya Systems and Siemens. Net6 primarily sold its products through these partner channels.
Funding
In 2000, Net6, then known as WebUnwired, secured $8M in series A funding from Sierra Ventures and Olympic Venture Partners.
In 2004, Net6 raised an additional $8M in series B funding from Bank of America venture partners.
Acquisition
In December 2004, Net6 Inc. was acquired by Citrix Systems for $50M. The acquisition marked Citrix's entry into the telecommunications security space, and Citrix continues to market the Access Gateway product under the Citrix Access Gateway™ name, led by the management team of Murli Thirumale (CEO), Goutham Rao (CTO), Russell Lentini and Jon Thies (principal architects), Gordon Payne (VP of Marketing) and Joe Eskew (VP of Sales).
Recognition
Access Gateway won the 2007 Reader Trust Award at the 10th annual SC Magazine Awards
Gartner reports placed Citrix Access Gateway in the Leaders Quadrant |
https://en.wikipedia.org/wiki/Physics%20in%20the%20medieval%20Islamic%20world | The natural sciences saw various advancements during the Golden Age of Islam (from roughly the mid 8th to the mid 13th centuries), adding a number of innovations to the Transmission of the Classics (such as Aristotle, Ptolemy, Euclid, Neoplatonism). During this period, Islamic theology encouraged thinkers to seek knowledge. Thinkers from this period included Al-Farabi, Abu Bishr Matta, Ibn Sina, al-Hassan Ibn al-Haytham and Ibn Bajjah. These classical works and the important commentaries on them, translated into Arabic, the lingua franca of the period, were the wellspring of science during the medieval era.
Islamic scholarship in the sciences had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further. However, the Islamic world had a greater respect for knowledge gained from empirical observation, and believed that the universe is governed by a single set of laws. Their use of empirical observation led to the formation of crude forms of the scientific method. The study of physics in the Islamic world started in Iraq and Egypt.
Fields of physics studied in this period include optics, mechanics (including statics, dynamics, kinematics and motion), and astronomy.
Physics
Islamic scholarship had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. With Aristotelian physics, physics was seen as lower than the demonstrative mathematical sciences, but in terms of a larger theory of knowledge, physics was higher than astronomy, many of whose principles derive from physics and metaphysics. The primary subject of physics, according to Aristotle, was motion or change; there were three factors involved in this change: the underlying thing, privation, and form. In his Metaphysics, Aristotle believed that the Unmoved Mover was responsible for the movement of the |
https://en.wikipedia.org/wiki/CRC-based%20framing | CRC-based framing is a kind of frame synchronization used in Asynchronous Transfer Mode (ATM) and other similar protocols.
The concept of CRC-based framing was developed by StrataCom, Inc. in order to improve the efficiency of a pre-standard Asynchronous Transfer Mode (ATM) link protocol. This technology was ultimately used in the principal link protocols of ATM itself and was one of the most significant developments of StrataCom. An advanced version of CRC-based framing was used in the ITU-T SG15 G.7041 Generic Framing Procedure (GFP), which itself is used in several packet link protocols.
Overview of CRC-based framing
The method of CRC-based framing re-uses the header cyclic redundancy check (CRC), which is present in ATM and other similar protocols, to provide framing on the link with no additional overhead. In ATM, this field is known as the Header Error Control/Check (HEC) field. It consists of the remainder of the division of the 32 bits of the header (taken as the coefficients of a polynomial over the field with two elements) by the polynomial $x^8 + x^2 + x + 1$. The pattern 01010101 is XORed with the 8-bit remainder before being inserted in the last octet of the header.
Because the header CRC is checked constantly as data is transmitted, this scheme is able to correct single-bit errors and detect many multiple-bit errors.
For a tutorial and an example of computing the CRC see mathematics of cyclic redundancy checks.
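A bitwise sketch of the computation (my own illustration of the standard CRC-8 procedure, not code from the article) over the first four header octets:

public class AtmHec
{
    // Computes the ATM HEC octet: CRC-8 of the first four header octets with
    // generator x^8 + x^2 + x + 1 (0x07), then XORed with the pattern 01010101.
    public static int hec(byte[] header)
    {
        int crc = 0;
        for (int i = 0; i < 4; i++) {
            crc ^= header[i] & 0xFF;
            for (int bit = 0; bit < 8; bit++) {
                crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF
                                          : (crc << 1) & 0xFF;
            }
        }
        return crc ^ 0x55; // 0x55 == 01010101
    }
}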
The header CRC/HEC is needed for another purpose within an ATM system: to improve the robustness of cell delivery. Using this same CRC/HEC field for the second purpose of link framing provided a significant improvement in link efficiency over other methods of framing, because no additional bits were required for this second purpose.
A receiver utilizing CRC-based framing bit-shifts along the received bit stream until it finds a bit position where the header CRC is correct a number of times in a row. The receiver then declares that it has found the frame. A hysteresis function is appli |
https://en.wikipedia.org/wiki/Fas%20ligand | Fas ligand (FasL or CD95L or CD178) is a type-II transmembrane protein expressed on cytotoxic T lymphocytes and natural killer (NK) cells. Its binding with Fas receptor (FasR) induces programmed cell death in the FasR-carrying target cell. Fas ligand/receptor interactions play an important role in the regulation of the immune system and the progression of cancer.
Structure
Fas ligand or FasL is a homotrimeric type II transmembrane protein that belongs to the tumor necrosis factor (TNF) family. It signals through trimerization of FasR, which spans the membrane of the target cell. This trimerization usually leads to apoptosis, or programmed cell death.
Soluble Fas ligand is generated by cleaving membrane-bound FasL at a conserved cleavage site by the external matrix metalloproteinase MMP-7.
Receptors
FasR: The Fas receptor (FasR), or CD95, is the most intensely studied member of the death receptor family. The gene is situated on chromosome 10 in humans and 19 in mice. Previous reports have identified as many as eight splice variants, which are translated into seven isoforms of the protein. Many of these isoforms are rare haplotypes that are usually associated with a state of disease. Apoptosis-inducing Fas receptor is dubbed isoform 1 and is a type 1 transmembrane protein. It consists of three cysteine-rich pseudorepeats, a transmembrane domain, and an intracellular death domain.
DcR3: Decoy receptor 3 (DcR3) is a recently discovered decoy receptor of the tumor necrosis factor superfamily that binds to FasL, LIGHT, and TL1A. DcR3 is a soluble receptor that has no signal transduction capabilities (hence a "decoy") and functions to prevent FasR-FasL interactions by competitively binding to membrane-bound Fas ligand and rendering them inactive.
Cell signaling
Fas forms the death-inducing signaling complex (DISC) upon ligand binding. Membrane-anchored Fas ligand trimer on the surface of an adjacent cell causes trimerization of Fas receptor. This event is also m |
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Astronomy%20and%20Astrophysics | The Annual Review of Astronomy and Astrophysics is an annual peer-reviewed scientific journal published by Annual Reviews. The co-editors are Ewine van Dishoeck and Robert C. Kennicutt. The journal reviews scientific literature pertaining to local and distant celestial entities throughout the observable universe, as well as cosmology, instrumentation, techniques, and the history of developments. It was established in 1963.
History
In November 1960, the board of directors of the nonprofit publisher Annual Reviews began investigating the need for a new journal of review articles that covered developments in astronomy and astrophysics. The board consulted an advisory group of experts, including Ronald Bracewell, Robert Jastrow, Joseph Kaplan, Paul Merrill, Otto Struve, and Harold Urey. The editorial committee met in August 1961 to determine the authors and topics for the first volume, which was published in 1963. As of 2020, it was published both in print and electronically.
It defines its scope as covering significant developments in astronomy and astrophysics, including the Sun, the Solar System, exoplanets, stars, the interstellar medium, the Milky Way and other galaxies, galactic nuclei, cosmology, and the instrumentation and techniques used for research and analysis. As of 2023, Journal Citation Reports gives the journal an impact factor of 33.3, ranking it first out of 69 journals in the category "Astronomy and Astrophysics". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, Inspec, and Academic Search, among others.
Editorial processes
The Annual Review of Astronomy and Astrophysics is led by the editor or co-editors. They are assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor and serve terms of one year. All other members of the editorial committee are appointed by the Annual Re |
https://en.wikipedia.org/wiki/Tests%20of%20relativistic%20energy%20and%20momentum | Tests of relativistic energy and momentum are aimed at measuring the relativistic expressions for energy, momentum, and mass. According to special relativity, the properties of particles moving approximately at the speed of light significantly deviate from the predictions of Newtonian mechanics. For instance, the speed of light cannot be reached by massive particles.
Today, those relativistic expressions for particles close to the speed of light are routinely confirmed in undergraduate laboratories, and necessary in the design and theoretical evaluation of collision experiments in particle accelerators. See also Tests of special relativity for a general overview.
Overview
In classical mechanics, kinetic energy and momentum are expressed as

$$T = \frac{1}{2} m v^2, \qquad p = m v.$$

On the other hand, special relativity predicts that the speed of light is constant in all inertial frames of reference. The relativistic energy–momentum relation reads:

$$E^2 = (pc)^2 + \left(mc^2\right)^2,$$

from which the relations for rest energy $E_0 = mc^2$, relativistic energy (rest + kinetic) $E = \gamma mc^2$, kinetic energy $T = (\gamma - 1)mc^2$, and momentum $p = \gamma m v$ of massive particles follow,

where $\gamma = 1/\sqrt{1 - v^2/c^2}$. So relativistic energy and momentum significantly increase with speed, thus the speed of light cannot be reached by massive particles. In some relativity textbooks, the so-called "relativistic mass" $M = \gamma m$ is used as well. However, this concept is considered disadvantageous by many authors; instead, the expressions for relativistic energy and momentum should be used to express the velocity dependence in relativity, which provide the same experimental predictions.
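For a concrete sense of scale (a standard numerical check, not from the article): at $v = 0.99c$,

$$\gamma = \frac{1}{\sqrt{1 - 0.99^2}} \approx 7.09,$$

so the particle's total energy is about seven times its rest energy, whereas Newtonian mechanics would assign it a kinetic energy of only about half the rest energy.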
Early experiments
The first experiments capable of detecting such relations were conducted by Walter Kaufmann, Alfred Bucherer and others between 1901 and 1915. These experiments were aimed at measuring the deflection of beta rays within a magnetic field so as to determine the mass-to-charge ratio of electrons. Since the charge was known to be velocity independent, any variation had to be attributed to alterations in the electron's momentum or mass (forme |
https://en.wikipedia.org/wiki/Refractive%20index%20and%20extinction%20coefficient%20of%20thin%20film%20materials | A. R. Forouhi and I. Bloomer deduced dispersion equations for the refractive index, n, and extinction coefficient, k, which were published in 1986 and 1988. The 1986 publication relates to amorphous materials, while the 1988 publication relates to crystalline materials. Subsequently, in 1991, their work was included as a chapter in "The Handbook of Optical Constants". The Forouhi–Bloomer dispersion equations describe how photons of varying energies interact with thin films. When used with a spectroscopic reflectometry tool, the Forouhi–Bloomer dispersion equations specify n and k for amorphous and crystalline materials as a function of photon energy E. Values of n and k as a function of photon energy, E, are referred to as the spectra of n and k, which can also be expressed as functions of the wavelength of light, λ, since E = hc/λ. The symbol h represents Planck's constant and c, the speed of light in vacuum. Together, n and k are often referred to as the "optical constants" of a material (though they are not constants, since their values depend on photon energy).
The derivation of the Forouhi–Bloomer dispersion equations is based on obtaining an expression for k as a function of photon energy, symbolically written as k(E), starting from first-principles quantum mechanics and solid state physics. An expression for n as a function of photon energy, symbolically written as n(E), is then determined from the expression for k(E) in accordance with the Kramers–Kronig relations, which state that n(E) is the Hilbert transform of k(E).
The Forouhi–Bloomer dispersion equations for n(E) and k(E) of amorphous materials are given as:

$$k(E) = \frac{A\,(E - E_g)^2}{E^2 - BE + C}, \qquad n(E) = n(\infty) + \frac{B_0 E + C_0}{E^2 - BE + C}.$$
The five parameters A, B, C, Eg, and n(∞) each have physical significance. Eg is the optical energy band gap of the material. A, B, and C depend on the band structure of the material. They are positive constants such that 4C − B² > 0. Finally, n(∞), a constant greater than unity, represents the value of n at E = ∞. The parameters B0 and C0 in the equation for |
https://en.wikipedia.org/wiki/Porella%20platyphylla | Porella platyphylla is a species of liverwort belonging to the family Porellaceae. It is native to Eurasia and North America. |
https://en.wikipedia.org/wiki/OpenKeychain | OpenKeychain is a free and open-source mobile app for the Android operating system that provides strong, user-based encryption which is compatible with the OpenPGP standard. This allows users to encrypt, decrypt, sign, and verify signatures for text, emails, and files. The app allows the user to store the public keys of other users with whom they interact, and to encrypt files such that only a specified user can decrypt them. In the same manner, if a file is received from another user and its public keys are saved, the receiver can verify the authenticity of that file and decrypt it if necessary. As of August 2021, it is no longer actively developed.
K-9 Mail Support
Together with K-9 Mail, it supports end-to-end encrypted emails via the OpenPGP INLINE and PGP/MIME formats. The developers of OpenKeychain and K-9 Mail are trying to change the way user interfaces for email encryption are designed. They propose to remove the ability to create encrypted-only emails and hide the case of signed-only emails. Instead, they focus on end-to-end security that provides confidentiality and authenticity by always encrypting and signing emails together.
Reception
OpenKeychain is listed on the official OpenPGP homepage, and the well-known developer collective Guardian Project recommends it instead of APG to encrypt emails. TechRepublic published an article about it and concluded that "OpenKeychain happens to be one of the easiest encryption tools available for Android (that also happens to best follow OpenPGP standards)." The publisher Heise reviewed it in their c't Android magazine 2016 and discussed OpenKeychain's backup mechanism. The academic community uses OpenKeychain for experimental evaluations: it has been used as an example where cryptographic operations could be executed in a Trusted Execution Environment. Furthermore, modern alternatives to public key fingerprints have been implemented by other researchers. In 2016, the German Federal Office for Information Security |
https://en.wikipedia.org/wiki/Fermentation%20in%20food%20processing | In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—under anaerobic (oxygen-free) conditions. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy.
The term "fermentation" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt.
Other widely consumed fermented foods include vinegar, olives, and cheese. More localised foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish.
History and prehistory
Brewing and winemaking
Natural fermentation precedes human history. Since ancient times, humans have exploited the fermentation process. The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC.
Discovery of the role of yeast
The French chemist Louis Pasteur founded zymology, when in 1856 he connected yeast to fermentation.
When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation wa |
https://en.wikipedia.org/wiki/One-way%20wave%20equation | A one-way wave equation is a first-order partial differential equation describing one wave traveling in a direction defined by the vector wave velocity. It contrasts with the second-order two-way wave equation describing a standing wavefield resulting from the superposition of two waves in opposite directions (using the squared scalar wave velocity). In the one-dimensional case, the one-way wave equation allows wave propagation to be calculated without the mathematical complication of solving a second-order differential equation. Because no 3D one-way wave equation has been found in recent decades, numerous approximation methods based on the 1D one-way wave equation are used for 3D seismic and other geophysical calculations.
One-dimensional case
The scalar second-order (two-way) wave equation describing a standing wavefield can be written as:

$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2},$$

where $x$ is the coordinate, $t$ is time, $u$ is the displacement, and $c$ is the wave velocity.

Due to the ambiguity in the direction of the wave velocity, $c$, the equation does not contain information about the wave direction and therefore has solutions propagating in both the forward ($+x$) and backward ($-x$) directions. The general solution of the equation is the summation of the solutions in these two directions:

$$u(x, t) = u_{+}(x - ct) + u_{-}(x + ct),$$

where $u_{+}$ and $u_{-}$ are the displacement amplitudes of the waves running in the $+x$ and $-x$ directions.
When a one-way wave problem is formulated, the wave propagation direction has to be (manually) selected by keeping one of the two terms in the general solution.
Factoring the operator on the left side of the equation,

$$\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right) u = 0,$$

yields a pair of one-way wave equations, one with solutions that propagate forwards and the other with solutions that propagate backwards.
The forward- and backward-travelling waves are described respectively by

$$\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0, \qquad \frac{\partial u}{\partial t} - c\,\frac{\partial u}{\partial x} = 0.$$
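Numerically, the forward equation can be advanced in time with a first-order upwind scheme. The following is a minimal sketch (my own illustration with arbitrary grid parameters, not a method from the article), transporting a Gaussian pulse across a periodic grid:

public class OneWayWave
{
    public static void main(String[] args)
    {
        double c = 1.0, dx = 0.01, dt = 0.005; // CFL number c*dt/dx = 0.5, stable for upwinding
        int n = 200, steps = 100;
        double[] u = new double[n];
        double[] next = new double[n];
        // Initial condition: a Gaussian pulse centred on the grid.
        for (int i = 0; i < n; i++) {
            double x = (i - n / 2.0) * dx;
            u[i] = Math.exp(-(x * x) / (0.05 * 0.05));
        }
        for (int s = 0; s < steps; s++) {
            for (int i = 0; i < n; i++) {
                int left = (i - 1 + n) % n; // periodic boundary
                // Upwind difference for c > 0: du/dt = -c (u[i] - u[i-1]) / dx
                next[i] = u[i] - c * dt / dx * (u[i] - u[left]);
            }
            double[] tmp = u; u = next; next = tmp;
        }
        System.out.println("Peak after transport: " + java.util.Arrays.stream(u).max().getAsDouble());
    }
}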
The one-way wave equations can also be physically derived directly from specific acoustic impedance.
In a longitudinal plane wave, the specific impedance determines the local proportiona |
https://en.wikipedia.org/wiki/G.9972 | G.9972 (also known as G.cx) is a Recommendation developed by ITU-T that specifies a coexistence mechanism for networking transceivers capable of operating over electrical power line wiring. It allows G.hn devices to coexist with other devices implementing G.9972 and operating on the same power line wiring.
G.9972 received consent during the meeting of ITU-T Study Group 15, on October 9, 2009, and final approval on June 11, 2010.
G.9972 specifies two mechanisms for coexistence between G.hn home networks and broadband over power lines (BPL) Internet access networks:
Frequency-division multiplexing (FDM), in which the available spectrum is divided into two parts: frequencies below 10 or 14 MHz (specific value can be selected by the access network) are reserved for the access network, while frequencies above them are reserved for the in-home network.
Time-division multiplexing (TDM), in which the available channel time is split equally between both networks. 50% of time slots are allocated for the access network, and 50% are allocated to the in-home network. |
https://en.wikipedia.org/wiki/Warabimochi | is a wagashi (Japanese confection) made from warabiko (bracken starch) and covered or dipped in kinako (sweet toasted soybean flour). Kuromitsu syrup is sometimes poured on top before serving as an added sweetener.
Overview
Warabimochi is a traditional Japanese dessert believed to have ancient origins dating back to the Heian period (794-1185), when it was a popular delicacy among the aristocracy. It was one of the favorite treats of Emperor Daigo. Hayashi Razan's "Heishin kikō" (Travelogue of 1616), "which is considered to be the first travel diary to mention food on the road," highlighted warabimochi, as did other Tōkaidō travel guides in the 1600s. The dessert became more widespread during the Edo period (1603-1868), when it was served in tea houses as part of the traditional Japanese tea ceremony. It is now popular in the summertime, especially in the Kansai region and Okinawa, and it is often sold from trucks, similar to an ice cream truck in Western countries.
Warabimochi differs from true mochi made from glutinous rice. Mochi, which refers to sticky foods generally made with glutinous rice or waxy starch, is categorized into tsuki-mochi and kone-mochi. Tsuki-mochi is a rice cake made by pounding steamed glutinous rice. Although warabimochi is not made from glutinous rice or other waxy starches, it is called "mochi" for its sticky texture.
Warabimochi is also frequently made with katakuriko (potato starch) instead of bracken starch due to cost and availability. In 2021, Warabi starch sold for JPY 12,000–15,000 (USD 116–145)/kg, and it was 30–35 times more expensive than sweet potato or tapioca starch and 20–24 times more expensive than sago starch. |
https://en.wikipedia.org/wiki/Livestock%20branding | Livestock branding is a technique for marking livestock so as to identify the owner. Originally, livestock branding only referred to hot branding large stock with a branding iron, though the term now includes alternative techniques. Other forms of livestock identification include freeze branding, inner lip or ear tattoos, earmarking, ear tagging, and radio-frequency identification (RFID), which is tagging with a microchip implant. The semi-permanent paint markings used to identify sheep are called a paint or color brand. In the American West, branding evolved into a complex marking system still in use today.
History
The act of marking livestock with fire-heated marks to identify ownership has origins in ancient times, with use dating back to the ancient Egyptians around 2,700 BCE. Among the ancient Romans, the symbols used for brands were sometimes chosen as part of a magic spell aimed at protecting animals from harm.
In the English lexicon, the word "brand", common to most Germanic languages (from which root also comes "burn"; cf. German Brand, "burning, fire"), originally meant anything hot or burning, such as a "firebrand", a burning stick. By the European Middle Ages, it commonly identified the process of burning a mark into stock animals with thick hides, such as cattle, so as to identify ownership under animus revertendi. The practice became particularly widespread in nations with large cattle-grazing regions, such as Spain.
These European customs were imported to the Americas and were further refined by the vaquero tradition in what today is the southwestern United States and northern Mexico. In the American West, a "branding iron" consisted of an iron rod with a simple symbol or mark, which cowboys heated in a fire. After the branding iron turned red hot, the cowboy pressed the branding iron against the hide of the cow. The unique brand meant that cattle owned by multiple ranches could then graze freely together on the open range. Cowboys could then separate t |
https://en.wikipedia.org/wiki/GS-6620 | GS-6620 is an antiviral drug which is a nucleotide analogue. It was developed for the treatment of Hepatitis C but while it showed potent antiviral effects in early testing, it could not be successfully formulated into an oral dosage form due to low and variable absorption in the intestines which made blood levels unpredictable. It has however continued to be researched as a potential treatment for other viral diseases such as Ebola virus disease. |
https://en.wikipedia.org/wiki/K-noid | In differential geometry, a k-noid is a minimal surface with k catenoid openings. In particular, the 3-noid is often called trinoid. The first k-noid minimal surfaces were described by Jorge and Meeks in 1983.
The terms k-noid and trinoid are also sometimes used for constant mean curvature surfaces, especially branched versions of the unduloid ("triunduloids").
k-noids are topologically equivalent to k-punctured spheres (spheres with k points removed). k-noids with symmetric openings can be generated using the Weierstrass–Enneper parameterization. This produces an explicit formula in which the surface is expressed through the Gaussian hypergeometric function and the real part of a complex-valued expression.
It is also possible to create k-noids with openings in different directions and sizes, k-noids corresponding to the platonic solids and k-noids with handles. |
https://en.wikipedia.org/wiki/Merk%C3%A9n | Merkén or merquén (from the Mapuche mezkeñ or merkeñ) is a smoked chili pepper (or in Spanish, ají) used as a condiment that is often combined with other ingredients when in ground form. Merkén is a traditional condiment in Mapuche cuisine in Chile.
Ingredients
The base ingredient of merkén is the smoked pepper "cacho de cabra" (Capsicum annuum var. longum), a dried, smoked, red pepper that is sometimes ground with toasted coriander seed and salt. The peppers are dried naturally in the sun and are then smoked over a wood fire. They are then stored by being hung to dry prior to grinding. Once reduced to powder or flakes, the peppers are often mixed with salt and roasted ground coriander seed.
Commercially, merkén pepper with only an addition of salt is known as "natural merken" (merkén natural), while "special merkén" (merkén especial) contains coriander seeds. The composition of special merkén is about 70% chili, 20% salt, and 10% coriander seed.
Culinary use
Merkén originates primarily from the cuisine of the Mapuches of the Araucanía Region of Chile, but is also used in wider Chilean cuisine as a replacement for fresh chili. Since the beginning of the 21st century, merkén has drawn the attention of professional chefs and has begun to find an international market, while remaining in widespread use in Chilean cuisine.
Merkén is most commonly used as:
A general condiment for seasoning dishes such as lentils, gold potatoes, and sautéed vegetables
A dry rub for tuna, lamb, pork, duck or beef
A sprinkle, spice rub, or boiling spice for seafood including crab
An addition to stews, savory pies, and purees
A spice for ceviches
An addition to cow or goat cheese
An addition to peanuts or salty olives
See also
List of smoked foods |
https://en.wikipedia.org/wiki/Machine%20learning%20control | Machine learning control (MLC) is a subfield of machine learning, intelligent control and control theory
which solves optimal control problems with methods of machine learning.
Key applications are complex nonlinear systems
for which linear control theory methods are not applicable.
Types of problems and tasks
Four types of problems are commonly encountered.
Control parameter identification: MLC translates to a parameter identification problem if the structure of the control law is given but the parameters are unknown. One example is using a genetic algorithm to optimize the coefficients of a PID controller (see the sketch after this list) or discrete-time optimal control.
Control design as regression problem of the first kind: MLC approximates a general nonlinear mapping from sensor signals to actuation commands, if the sensor signals and the optimal actuation command are known for every state. One example is the computation of sensor feedback from a known full state feedback. A neural network is a commonly used technique for this task.
Control design as regression problem of the second kind: MLC may also identify arbitrary nonlinear control laws which minimize the cost function of the plant. In this case, neither a model, nor the control law structure, nor the optimizing actuation command needs to be known. The optimization is only based on the control performance (cost function) as measured in the plant. Genetic programming is a powerful regression technique for this purpose.
Reinforcement learning control: The control law may be continually updated over measured performance changes (rewards) using reinforcement learning.
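A minimal sketch of the first problem type, using a (1+1)-style evolutionary search (a bare-bones stand-in for a full genetic algorithm) to tune PID gains on a simulated first-order plant; the plant, the cost function, and all constants are invented for illustration:

public class EvolvePid
{
    // Integrated squared error of a PID loop driving a first-order plant
    // (dx/dt = -x + u) to a unit step reference.
    static double cost(double kp, double ki, double kd)
    {
        double x = 0, integral = 0, prevErr = 1, dt = 0.01, cost = 0;
        for (int step = 0; step < 1000; step++) {
            double err = 1.0 - x;
            integral += err * dt;
            double deriv = (err - prevErr) / dt;
            double u = kp * err + ki * integral + kd * deriv;
            x += dt * (-x + u);
            cost += err * err * dt;
            prevErr = err;
        }
        return cost;
    }

    public static void main(String[] args)
    {
        java.util.Random rng = new java.util.Random(42);
        double[] best = { 1.0, 0.0, 0.0 };
        double bestCost = cost(best[0], best[1], best[2]);
        for (int gen = 0; gen < 500; gen++) {
            // Mutate one randomly chosen gain; keep the mutant only if it lowers the cost.
            double[] cand = best.clone();
            int i = rng.nextInt(3);
            cand[i] = Math.max(0.0, cand[i] + rng.nextGaussian());
            double c = cost(cand[0], cand[1], cand[2]);
            if (c < bestCost) { bestCost = c; best = cand; }
        }
        System.out.printf("Kp=%.2f Ki=%.2f Kd=%.2f cost=%.4f%n", best[0], best[1], best[2], bestCost);
    }
}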
MLC comprises, for instance, neural network control,
genetic algorithm based control,
genetic programming control,
reinforcement learning control,
and has methodological overlaps with other data-driven control,
like artificial intelligence and robot control.
Applications
MLC has been successfully applied
to many nonlinear control problems,
exploring unknown and often unexpected |
https://en.wikipedia.org/wiki/Chronosequence | A chronosequence describes a set of ecological sites that share similar attributes but represent different ages.
A common assumption in establishing chronosequences is that no other variable besides age (such as various abiotic components and biotic components) has changed between sites of interest. Because this assumption cannot always be tested for environmental study sites, the use of chronosequences in field successional studies has recently been debated.
Applications
Forest sciences
Since many processes in forest ecology take a long time (decades or centuries) to develop, chronosequence methods are used to represent and study the time-dependent development of a forest. Field data from a forest chronosequence can be collected in a short period of several months.
Soil science
Chronosequences used in soil studies consist of sites that have developed over different periods of time with relatively small differences in other soil-forming factors. Such groups of sites are used to assess the influence of time as a factor in pedogenesis.
Ecology
Chronosequences are often used to study the changes in plant communities during succession. A classic example of using chronosequences to study ecological succession is in the study of plant and microbial succession in recently deglaciated zones. For example, a study from 2005 used the distance from the nose of a glacier as a proxy for site age. |
https://en.wikipedia.org/wiki/Heat%20Wave%20%28Irving%20Berlin%20song%29 | "Heat Wave" is a popular song written by Irving Berlin for the 1933 musical As Thousands Cheer, and introduced in the show by Ethel Waters.
Film appearances
1938: The song was featured in the film Alexander's Ragtime Band, where it was performed by Ethel Merman.
1946: It was also featured in the film Blue Skies, where it was performed by Olga San Juan.
1954: There's No Business Like Show Business, where it was performed by Marilyn Monroe. (Based on the lyrics alone, Monroe's version is a different song; within the film's narrative, it is a sexier variant of the original that is "stolen" from Ethel Merman's character.)
1954: A snippet of the song can be heard in a medley in the film White Christmas, sung by Bing Crosby and Danny Kaye.
1981: Miss Piggy sings it in The Muppets Go to the Movies.
1993: A snippet of the song can be heard in the film Grumpy Old Men, sung by Ella Fitzgerald.
Notable recordings
There were three chart hits in 1933 by:
Ethel Waters
Glen Gray and the Casa Loma Orchestra – vocal by Mildred Bailey
Meyer Davis – vocal by Charlotte Murray.
Other versions
1934: Sol K. Bright & His Hollywaiians
1952: Lee Wiley on the album Lee Wiley Sings Irving Berlin.
1955: Margaret Whiting for Capitol Records CL14242.
1956: Bing Crosby also recorded the song on his album Bing Sings Whilst Bregman Swings.
1958: Ella Fitzgerald sang the song on her album Ella Fitzgerald Sings the Irving Berlin Songbook.
1961: Enoch Light gave a symphonic treatment of the song, which can be found on the album Stereo 35-MM.
1975: Bing Crosby on his 1975 album At My Time of Life.
1979: James White and the Blacks on the 1979 album Off White.
1995: Patti LuPone and the Hollywood Bowl Orchestra on the album Heatwave: Patti LuPone Sings Irving Berlin. |
https://en.wikipedia.org/wiki/Tuberosity%20of%20the%20ulna | The tuberosity of the ulna is a rough eminence on the proximal end of the ulna. It occurs at the junction of the antero-inferior surface of the coronoid process with the front of the body. It provides an insertion point for a tendon of the brachialis; the oblique cord is attached to its lateral border. |
https://en.wikipedia.org/wiki/Dutch%20childcare%20benefits%20scandal | The Dutch childcare benefits scandal is a political scandal in the Netherlands concerning false allegations of fraud made by the Tax and Customs Administration while attempting to regulate the distribution of childcare benefits. Between 2005 and 2019, authorities wrongly accused an estimated 26,000 parents of making fraudulent benefit claims, requiring them to pay back the allowances they had received in their entirety. In many cases, this sum amounted to tens of thousands of euros, driving families into severe financial hardship.
The scandal was brought to public attention in September 2018. Investigators have subsequently described the working procedure of the Tax and Customs Administration as "discriminatory" and filled with "institutional bias". On 15 January 2021, two months before the 2021 general election, the third Rutte cabinet resigned over the scandal following a parliamentary inquiry into the matter, which concluded that "fundamental principles of the rule of law" had been violated.
Background
Childcare benefits in the Netherlands
Childcare in the Netherlands is not free and parents are generally required to pay for the costs by themselves. However, part of the costs may be covered by childcare benefit, which is available to families in which all parents are either employed or enrolled in secondary or tertiary education or a civic integration course. The amount of childcare benefit is calculated as a percentage of the hourly rate of the childcare centre or childminding agency, ranging from 33.3 to 96.0% depending on the parents' collective income and the number of children.
Each year, the government sets a maximum hourly rate for which families may receive childcare benefit. Any amount exceeding the maximum hourly rate must be fully paid by the parents. The number of childcare hours for which a family is entitled to childcare benefit depends on the number of hours that each parent works. The maximum is 230 hours per month per child. Parent |
https://en.wikipedia.org/wiki/Panorama%20of%20the%20City%20of%20New%20York | The Panorama of the City of New York is an urban model of New York City that is a centerpiece of the Queens Museum. It was originally created for the 1964 New York World's Fair.
Early history
Commissioned by Robert Moses as a celebration of the City's municipal infrastructure, this model includes every single building constructed before 1992 in all five boroughs, at a scale of 1 inch = 100 feet (1:1200). The Panorama was built by a team of 100 people working for the architectural model makers Raymond Lester Associates in West Nyack, New York in the three years before the opening of the 1964 World's Fair. The model was constructed in 273 sections of Formica boards and polyurethane foam, originally depicting 830,000 individual structures; the section showing the Far Rockaway neighborhood was never installed, due to space limitations, and is normally kept in storage.
Displayed alongside the modern city, the 1964 exhibition also included a 1:300 diorama of a "Castello Model" based on the 17th century Castello Plan, borrowed from Museum of the City of New York.
The Panorama was one of the most successful attractions at the 1964 Fair, with "millions" of people paying 10 cents each for a 9-minute simulated helicopter ride around the City, a dark ride narrated by Lowell Thomas to a text written by Harvey Yale Gross. It was one of three colossal representations of geography at the fair, alongside the Unisphere and the New York State Pavilion.
The panorama was also intended to serve as a standing urban planning tool after the fair, after Moses' vision. In this way it anticipated the technology of a 3D city model, though in practice it was of limited utility. It did however, play a role in the defeat of Donald Trump's 1980s Television City proposal, as a model put on the panorama by activists demonstrated the relative size of the development. Additionally, the opening of the Panorama was set to coincide with the 300-year anniversary of the English takeover of New Amster |
https://en.wikipedia.org/wiki/Weil%27s%20criterion | In mathematics, Weil's criterion is a criterion of André Weil for the Generalized Riemann hypothesis to be true. It takes the form of an equivalent statement, to the effect that a certain generalized function is positive definite.
Weil's idea was formulated first in a 1952 paper. It is based on the explicit formulae of prime number theory, as they apply to Dirichlet L-functions, and other more general global L-functions. A single statement thus combines statements on the complex zeroes of all Dirichlet L-functions.
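For concreteness, here is one commonly quoted shape of the criterion, specialized to the Riemann zeta function alone; conventions and the exact class of test functions vary between sources, so this is an illustrative sketch rather than Weil's exact statement.

```latex
% Weil's positivity criterion for \zeta(s), in one common normalization:
\[
  \text{RH} \iff \sum_{\rho} \widetilde{(f \star f^*)}(\rho) \;\ge\; 0
  \quad \text{for all } f \in C_c^\infty(0,\infty),
\]
% where f^*(x) = \overline{f(1/x)}/x, \star denotes multiplicative
% convolution, \tilde{g}(s) = \int_0^\infty g(x)\, x^{s-1}\, dx is the
% Mellin transform, and the sum runs over the nontrivial zeros \rho of
% \zeta(s), counted with multiplicity.
```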
Weil returned to this idea in a 1972 paper, showing how the formulation extended to a larger class of L-functions (Artin–Hecke L-functions) and to the global function field case. Here the inclusion of Artin L-functions, in particular, implicates Artin's conjecture; so the criterion involves the Generalized Riemann Hypothesis together with the Artin conjecture.
The case of function fields, of curves over finite fields, is one in which the analogue of the Riemann Hypothesis is known, by Weil's classical work begun in 1940; and Weil also proved the analogue of the Artin Conjecture. Therefore, in that setting, the criterion can be used to show the corresponding statement of positive-definiteness does hold. |
https://en.wikipedia.org/wiki/Num%C3%A9raire | The numéraire (or numeraire) is a basic standard by which value is computed. In mathematical economics it is a tradable economic entity in terms of whose price the relative prices of all other tradables are expressed. In a monetary economy, one of the functions of money is to act as the numéraire, i.e. to serve as a unit of account and thus provide a common benchmark relative to which the value of various goods and services can be measured.
Using a numeraire, whether monetary or some consumable good, facilitates value comparisons when only the relative prices are relevant, as in general equilibrium theory. When economic analysis refers to a particular good as the numéraire, one says that all other prices are normalized by the price of that good. For example, if a unit of good g has twice the market value of a unit of the numeraire, then the (relative) price of g is 2. Since the value of one unit of the numeraire relative to one unit of itself is 1, the price of the numeraire is always 1.
Change of numéraire
In a financial market with traded securities, one may use a numéraire to price assets. For instance, let $M(t)$ be the price at time $t$ of \$1 that was invested in the money market at time $0$. The fundamental theorem of asset pricing says that all assets $S(t)$ priced in terms of the numéraire (in this case, $M(t)$) are martingales with respect to a risk-neutral measure, say $Q$. That is, for $t \le T$:
$$\frac{S(t)}{M(t)} = E_Q\!\left[\frac{S(T)}{M(T)} \,\middle|\, \mathcal{F}(t)\right].$$
Now, suppose that $N(t)$ is another strictly positive traded asset (and hence a martingale when priced in terms of the money market). Then we can define a new probability measure $Q^N$ by the Radon–Nikodym derivative
$$\frac{dQ^N}{dQ} = \frac{M(0)}{M(T)}\,\frac{N(T)}{N(0)}.$$
Then it can be shown that $S(t)/N(t)$ is a martingale under $Q^N$ when priced in terms of the new numéraire $N(t)$:
$$\frac{S(t)}{N(t)} = E_{Q^N}\!\left[\frac{S(T)}{N(T)} \,\middle|\, \mathcal{F}(t)\right].$$
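As a sanity check on the identity above, the following sketch prices a European call two ways in a Black–Scholes market: under $Q$ with the money market as numéraire, and under the stock measure $Q^S$ with the stock itself as numéraire. The model and all parameter values are illustrative assumptions, not part of the article.

```python
import numpy as np

# Monte Carlo check of the change-of-numeraire identity: both estimates
# below converge to the same Black-Scholes call price.
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative parameters
rng = np.random.default_rng(0)
Z = rng.standard_normal(2_000_000)

# Under Q the stock drifts at r; discount the payoff by the money market.
ST_Q = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
price_Q = np.exp(-r * T) * np.maximum(ST_Q - K, 0.0).mean()

# Under Q^S (after the Radon-Nikodym change of measure) the drift becomes
# r + sigma^2, and the price is S0 times the expectation of payoff / S_T.
ST_QS = S0 * np.exp((r + 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
price_QS = S0 * (np.maximum(ST_QS - K, 0.0) / ST_QS).mean()

print(price_Q, price_QS)  # the two estimates agree up to Monte Carlo error
```

That the two computations agree is exactly the content of the change-of-numéraire theorem.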
This technique has many important applications in LIBOR and swap market models, as well as commodity markets. Jamshidian (1989) first used it in the context of the Vasicek model for interest rates in order to calculate bond options prices. Geman, El Karoui and Rochet (1995) introduced the |
https://en.wikipedia.org/wiki/Nigerian%20naira | The naira (sign: ₦; code: NGN; , , , ) is the currency of Nigeria. One naira is divided into 100 kobo.
The Central Bank of Nigeria (CBN) is the sole issuer of legal tender money throughout the Federal Republic of Nigeria. It controls the volume of money supplied in the economy in order to ensure monetary and price stability. The Currency Operations Department of the CBN is in charge of currency management, through the designs, procurement, distribution and supply, processing, reissue and disposal or disintegration of bank notes and coins.
A major cash crunch occurred in February 2023 when the Nigerian government used a currency note changeover—delivering too few of the new notes into circulation—to attempt to force citizens to use a newly-created government-sponsored central bank digital currency. This led to extensive street protests.
History
The following history of the currency is according to the government.
The naira was introduced on 1 January 1973, replacing the Nigerian pound at a rate of £1 = ₦2. The coins of the new currency were the first coins issued by an independent Nigeria, as all circulating coins of the Nigerian pound had been struck by the colonial government of the Federation of Nigeria in 1959, with the name of Queen Elizabeth II on the obverse. This also made Nigeria the last country in the world to abandon the £sd currency system in favour of a decimal currency system. There was a government plan to redenominate the naira at 100:1 in 2008, but the plan was suspended. The currency sign is ₦.
The name "Naira" was coined from the word "Nigeria" by Obafemi Awolowo. However, Naira as a currency was launched by Shehu Shagari as minister of finance in 1973.
The Central Bank of Nigeria stated that it attempted to keep the annual inflation rate below 10%. In 2011, the CBN raised its key interest rate six times, from 6.25% to 12%. On 31 January 2012, the CBN decided to maintain the key interest rate at 12%, in order to reduce the impact of inflation |
https://en.wikipedia.org/wiki/Astrophysics | Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl |
https://en.wikipedia.org/wiki/Interplanetary%20medium | The interplanetary medium (IPM) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The IPM stops at the heliopause, outside of which the interstellar medium begins. Before 1950, interplanetary space was widely considered to either be an empty vacuum, or consisting of "aether".
Composition and physical characteristics
The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. The density of the interplanetary medium is very low, decreasing in inverse proportion to the square of the distance from the Sun. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. Typical particle densities in the interplanetary medium are about 5–40 particles/cm³, but exhibit substantial variation. In the vicinity of the Earth, it contains about 5 particles/cm³, but values as high as 100 particles/cm³ have been observed.
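A small sketch of the quoted inverse-square falloff, normalized to the ~5 particles/cm³ value near Earth given above; the normalization point comes from the text, and the example distances are illustrative.

```python
# Inverse-square density falloff, normalized to ~5 particles/cm^3 at 1 AU.
def density(r_au, n_1au=5.0):
    return n_1au / r_au ** 2

print(density(0.39))  # ~33 particles/cm^3 near Mercury's orbit
print(density(5.2))   # ~0.18 particles/cm^3 near Jupiter's orbit
```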
The temperature of the interplanetary medium varies through the solar system. Joseph Fourier estimated that the interplanetary medium must have temperatures comparable to those observed at Earth's poles, but on faulty grounds: lacking modern estimates of atmospheric heat transport, he saw no other means to explain the relative consistency of Earth's climate. A very hot interplanetary medium remained a minor position among geophysicists as late as 1959, when Chapman proposed a temperature on the order of 10000 K, but observation in low Earth orbit of the exosphere soon contradicted his position. In fact, both Fourier's and Chapman's final predictions were correct: because the interplanetary medium is so rarefied, it does not exhibit thermodynamic equilibrium. Instead, different components have different temperatures. The solar wind exhibits temperatures consistent with Chapman's estimate in cislunar space, and dust particles near Earth's |
https://en.wikipedia.org/wiki/Pseudo-monotone%20operator | In mathematics, a pseudo-monotone operator from a reflexive Banach space into its continuous dual space is one that is, in some sense, almost as well-behaved as a monotone operator. Many problems in the calculus of variations can be expressed using operators that are pseudo-monotone, and pseudo-monotonicity in turn implies the existence of solutions to these problems.
Definition
Let $(X, \|\cdot\|)$ be a reflexive Banach space. A map $T : X \to X^*$ from $X$ into its continuous dual space $X^*$ is said to be pseudo-monotone if $T$ is a bounded operator (not necessarily continuous) and if whenever
$$u_j \rightharpoonup u \text{ in } X \text{ as } j \to \infty$$
(i.e. $u_j$ converges weakly to $u$) and
$$\limsup_{j \to \infty} \langle T(u_j), u_j - u \rangle \le 0,$$
it follows that, for all $v \in X$,
$$\liminf_{j \to \infty} \langle T(u_j), u_j - v \rangle \ge \langle T(u), u - v \rangle.$$
Properties of pseudo-monotone operators
Using a very similar proof to that of the Browder–Minty theorem, one can show the following:
Let (X, || ||) be a real, reflexive Banach space and suppose that T : X → X∗ is bounded, coercive and pseudo-monotone. Then, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. |
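For reference, since the theorem uses the term without restating it, "coercive" here is standardly taken in the following sense, with $\langle \cdot, \cdot \rangle$ the duality pairing between $X^*$ and $X$; this is a reference sketch of the usual definition, not a statement from the article itself.

```latex
% Standard coercivity condition on T : X -> X*, as used in
% Browder-Minty-type existence theorems:
\[
  \frac{\langle T(u), u \rangle}{\|u\|} \longrightarrow +\infty
  \quad \text{as } \|u\| \to \infty .
\]
```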
https://en.wikipedia.org/wiki/Zemor%27s%20decoding%20algorithm | In coding theory, Zemor's algorithm, designed and developed by Gilles Zemor, is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.
Zemor considered a typical class of Sipser–Spielman constructions of expander codes, where the underlying graph is a bipartite graph. Sipser and Spielman introduced a constructive family of asymptotically good linear error-correcting codes together with a simple parallel algorithm that will always remove a constant fraction of errors. The article is based on Dr. Venkatesan Guruswami's course notes.
Code construction
Zemor's algorithm is based on a type of expander graph called a Tanner graph. The construction of the code was first proposed by Tanner. The codes are based on the double cover of a $d$-regular expander $G$, which is a bipartite graph $G = (V, E)$, where $V$ is the set of vertices and $E$ is the set of edges, with $V = A \cup B$ and $A \cap B = \emptyset$, where $A$ and $B$ denote the two sides of the bipartition. Let $n$ be the number of vertices in each side, i.e., $|A| = |B| = n$. The edge set $E$ is of size $N = nd$, and every edge in $E$ has one endpoint in $A$ and one in $B$. $E(v)$ denotes the set of edges containing the vertex $v$.
Assume an ordering on $V$, which induces an ordering on the edges of $E(v)$ for every $v \in V$. Let $F$ be the finite field $GF(2)$, and for a word $x = (x_e)_{e \in E}$ in $F^N$, let $(x)_{E(v)}$ denote the subword of $x$ indexed by $E(v)$. Each of the vertex subsets $A$ and $B$ induces on every word $x \in F^N$ a partition into $n$ non-overlapping subwords $(x)_{E(v)}$, where $v$ ranges over the elements of $A$ (respectively $B$).
For constructing a code $C$, consider a linear subcode $C_0$, which is a $[d, r_0 d]$ code over an alphabet of size $q = 2$. For any vertex $v \in V$, let $v(1), \dots, v(d)$ be some ordering of the edges of $E(v)$. In this code, each bit $x_e$ is linked with an edge $e$ of $E$.
We can define the code $C = C(G, C_0)$ to be the set of binary vectors $x = (x_1, \dots, x_N) \in F^N$ such that, for every vertex $v$ of $V$, $(x)_{E(v)}$ is a codeword of $C_0$. In this case, we can consider the special case when every edge of $E$ is adjacent to exactly two vertices of $V$; it means that $A$ and $E$ make up, respectively, the vertex set and edge set of a $d$-regular |
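As a toy illustration of the membership condition just described (every vertex must see a local codeword), the sketch below uses the complete bipartite graph $K_{3,3}$ and a single-parity-check subcode. Both choices are placeholders far too small to be an expander, and this is a membership test, not Zemor's decoder.

```python
import itertools

# Edges of K_{3,3}: edge (i, j) joins a_i in A to b_j in B; N = 9 edges.
edges = [(i, j) for i in range(3) for j in range(3)]

def in_C0(bits):
    # Local subcode C0: the length-3 even-weight (single parity check) code.
    return sum(bits) % 2 == 0

def in_tanner_code(x):
    # x maps each edge to a bit; every vertex's incident subword must lie in C0.
    for i in range(3):                                    # vertices in A
        if not in_C0([x[(i, j)] for j in range(3)]):
            return False
    for j in range(3):                                    # vertices in B
        if not in_C0([x[(i, j)] for i in range(3)]):
            return False
    return True

codewords = [bits for bits in itertools.product([0, 1], repeat=9)
             if in_tanner_code(dict(zip(edges, bits)))]
print(len(codewords))  # 16: the 6 parity checks have rank 5, so dim = 9 - 5 = 4
```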
https://en.wikipedia.org/wiki/Radio%20clock | A radio clock or radio-controlled clock (RCC), and often colloquially (and incorrectly) referred to as an "atomic clock", is a type of quartz clock or watch that is automatically synchronized to a time code transmitted by a radio transmitter connected to a time standard such as an atomic clock. Such a clock may be synchronized to the time sent by a single transmitter, such as many national or regional time transmitters, or may use the multiple transmitters used by satellite navigation systems such as Global Positioning System. Such systems may be used to automatically set clocks or for any purpose where accurate time is needed. Radio clocks may include any feature available for a clock, such as alarm function, display of ambient temperature and humidity, broadcast radio reception, etc.
One common style of radio-controlled clock uses time signals transmitted by dedicated terrestrial longwave radio transmitters, which emit a time code that can be demodulated and displayed by the radio controlled clock. The radio controlled clock will contain an accurate time base oscillator to maintain timekeeping if the radio signal is momentarily unavailable. Other radio controlled clocks use the time signals transmitted by dedicated transmitters in the shortwave bands. Systems using dedicated time signal stations can achieve accuracy of a few tens of milliseconds.
GPS satellite receivers also internally generate accurate time information from the satellite signals. Dedicated GPS timing receivers are accurate to better than 1 microsecond; however, general-purpose or consumer grade GPS may have an offset of up to one second between the internally calculated time, which is much more accurate than 1 second, and the time displayed on the screen.
Other broadcast services may include timekeeping information of varying accuracy within their signals.
Single transmitter
Radio clocks synchronized to a terrestrial time signal can usually achieve an accuracy within a hundredth of a second r |
https://en.wikipedia.org/wiki/Paleohispanic%20scripts | The Paleohispanic scripts are the writing systems created in the Iberian Peninsula before the Latin alphabet became the main script. Most of them are unusual in that they are semi-syllabic rather than purely alphabetic, despite having supposedly developed, in part, from the Phoenician alphabet.
Paleohispanic scripts are known to have been used from the 5th century BCE — possibly from the 7th century, in the opinion of some researchers — until the end of the 1st century BCE or the beginning of the 1st century CE, and were the main scripts used to write the Paleohispanic languages. Some researchers conclude that their origin may lie solely with the Phoenician alphabet, while others believe the Greek alphabet may also have had a role.
Scripts
The Paleohispanic scripts are classified into three major groups: southern, northern, and Greco-Iberian, with differences both in the shapes of the glyphs and in their values.
Inscriptions in the southern scripts have been found mainly in the southern half of the Iberian Peninsula. They represent only 5% of the inscriptions found, and mostly read from right to left (like the Phoenician alphabet). They are:
the Espanca script (known from a single tablet, and the earliest attestation of an alphabetical order among the Paleohispanic scripts);
the Tartessian or Southwest script, also known as South Lusitanian;
the Southeastern Iberian script, also known as Meridional.
Inscriptions in the northern scripts have been found mainly in the northeast of the Iberian Peninsula. They represent 95% of the inscriptions found, and mostly read from left to right (like the Greek alphabet). They are:
the Northeastern Iberian script, also known as Levantine;
Dual variant
Non-dual variant
the Celtiberian script
Western variant
Eastern variant.
The Greco-Iberian alphabet was a direct adaptation of the Ionic variety of the Greek alphabet, and only found in a small region on the Mediterranean coast in the modern provinces of Alicante and |
https://en.wikipedia.org/wiki/Electric%20bacteria | Electric bacteria are forms of bacteria that directly consume and excrete electrons at different energy potentials without requiring the metabolization of any sugars or other nutrients. This form of life appears to be especially adapted to low-oxygen environments. Most life forms require an oxygen environment in which to release the excess electrons produced in metabolizing sugars. In a low-oxygen environment, this pathway for releasing electrons is not available. Instead, electric bacteria "breathe" metals instead of oxygen, which effectively results in both an intake and an excretion of electrical charge.
Some electric bacteria:
Shewanella
Geobacter
Methanobacterium palustre
Methanococcus maripaludis
Mycobacterium smegmatis
Modified Escherichia coli
A broad group of 30 bacteria varieties
See also
Electron transport chain |
https://en.wikipedia.org/wiki/Nordtvedt%20effect | In theoretical astrophysics, the Nordtvedt effect refers to the relative motion between the Earth and the Moon that would be observed if the gravitational self-energy of a body contributed differently to its gravitational mass than to its inertial mass. If observed, the Nordtvedt effect would violate the strong equivalence principle, which indicates that an object's movement in a gravitational field does not depend on its mass or composition. No evidence of the effect has been found.
The effect is named after Kenneth L. Nordtvedt, who first demonstrated that some theories of gravity suggest that massive bodies should fall at different rates, depending upon their gravitational self-energy.
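In the standard parametrization used in the literature (not spelled out in this excerpt), the effect is quantified by the Nordtvedt parameter $\eta$:

```latex
% Gravitational-to-inertial mass ratio with the Nordtvedt parameter \eta;
% E_g is the body's (negative) gravitational self-energy. General
% relativity predicts \eta = 0, so a nonzero measurement would signal a
% violation of the strong equivalence principle.
\[
  \frac{m_g}{m_i} \;=\; 1 + \eta\,\frac{E_g}{m c^2}.
\]
```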
Nordtvedt then observed that if gravity did in fact violate the strong equivalence principle, then the more-massive Earth should fall towards the Sun at a slightly different rate than the Moon, resulting in a polarization of the lunar orbit. To test for the existence (or absence) of the Nordtvedt effect, scientists have used the Lunar Laser Ranging experiment, which is capable of measuring the distance between the Earth and the Moon with near-millimetre accuracy. Thus far, the results have failed to find any evidence of the Nordtvedt effect, demonstrating that if it exists, the effect is exceedingly weak. Subsequent measurements and analysis to even higher precision have improved constraints on the effect. Measurements of Mercury's orbit by the MESSENGER spacecraft have constrained the Nordtvedt effect to an even smaller scale.
A wide range of scalar–tensor theories have been found to naturally lead to a tiny effect only, at present epoch. This is due to a generic attractive mechanism that takes place during the cosmic evolution of the universe. Other screening mechanisms (chameleon, pressuron, Vainshtein etc.) could also be at play.
See also
Galileo's Leaning Tower of Pisa experiment |
https://en.wikipedia.org/wiki/Battery%20simulator | A battery simulator is an electronic device designed to test battery chargers by simulating the behavior of a battery during the charging process.
Characteristics
The main features of a battery simulator are the IGBT or MOSFET high-frequency regulator (which allows the equipment to work with constant current and voltage) and the programmable digital panel.
A battery simulator may have the following features:
An IGBT or MOSFET high frequency regulator
Digital voltmeter
Analogue ammeter
Test voltage selector
Potentiometer for fine voltage adjustment
Potentiometer for current selection (0–200 A)
Self-test
Automatic stop in case of failure
Thermal protection in case of overtemperature
Functioning
A battery simulator mimics a battery's electrical characteristic of outputting a voltage, and is able to source as well as sink current. This type of power supply is called a two-quadrant power supply. In contrast, a conventional power supply can only source current when the voltage is positive.
A battery simulator may be able to set the simulated battery voltage either remotely via PC or manually. Often battery simulators have built-in voltage and current display and monitoring. For example, the user selects the voltage of the battery to be simulated, using the potentiometer knob for adjusting the voltage, while the current value is displayed on the digital screen. An independent potentiometer is available to select the maximum current that the equipment can source or sink.
Battery Charger Testing
The basic use of a battery simulator is to replace a real battery with the simulator. This enables testing of the charger both during development and during production testing.
Once the simulated battery voltage is set, the user connects the charger to be tested to the input of the simulator. The charger will detect that a battery has been connected and the charging process will begin. The simulator keeps the voltage constant at the set value, while the analogue ammeter indicates the c |
https://en.wikipedia.org/wiki/Pro-aging%20trance | Pro-aging trance, also known as pro-aging edifice, is a term coined by British author and biomedical gerontologist Aubrey de Grey to describe the broadly positive and fatalistic attitude toward aging in society.
Overview
According to de Grey, the pro-aging trance explains why many people gloss over aging through irrational thought patterns. The concept says that the thought of one's own body slowly but ceaselessly deteriorating is so burdensome that it seems most sensible from a psychological point of view to try to put it out of one's mind. Since aging has been present throughout human history, this coping strategy would be deeply rooted in human thinking. It is striking that, in defending their point of view, those affected often commit fallacies which, from experience, would not be expected of them in a different context.
The name, according to de Grey, comes from the similarity of persons affected to hypnotised people, whose subconscious minds in the trance state prefer to resort to illogical explanations rather than abandon a deeply-held belief.
The pro-aging trance consists both in the belief that the aging process is inevitable and therefore will not be prevented even by future developments, and in the view that any success in the fight against aging would have mainly negative consequences. Examples cited include boredom, overpopulation, unresolved problems regarding current pension systems, and dictators living forever; yet there is no nuanced and factual discussion of counter-arguments and proposed solutions, and no juxtaposition or weighing of these potential disadvantages against the benefits of eliminating aging (such as saving about 100,000 lives per day).
De Grey assumes that robust mouse rejuvenation will provide a paradigm shift in society in this regard.
Issues
The phenomenon of the pro-aging trance is a hurdle in the rapid development of anti-aging medicine. The reason is that it takes time for people to break out of it and the result of lacking |
https://en.wikipedia.org/wiki/Computation%20history | In computer science, a computation history is a sequence of steps taken by an abstract machine in the process of computing its result. Computation histories are frequently used in proofs about the capabilities of certain machines, and particularly about the undecidability of various formal languages.
Formally, a computation history is a (normally finite) sequence of configurations of a formal automaton. Each configuration fully describes the status of the machine at a particular point. To be valid, certain conditions must hold:
the first configuration must be a valid initial configuration of the automaton and
each transition between adjacent configurations must be valid according to the transition rules of the automaton.
In addition, to be complete, a computation history must be finite and
the final configuration must be a valid terminal configuration of the automaton.
The definitions of "valid initial configuration", "valid transition", and "valid terminal configuration" vary for different kinds of formal machines.
A deterministic automaton has exactly one computation history for a given initial configuration, though the history may be infinite and therefore incomplete.
Finite State Machines
For a finite state machine $M$, a configuration is simply the current state of the machine, together with the remaining input. The first configuration must be the initial state of $M$ and the complete input. A transition from a configuration $(q, xw)$ to a configuration $(r, w)$ is allowed if $x$ is an input symbol and if $M$ has a transition from $q$ to $r$ on input $x$. The final configuration must have the empty string as its remaining input; whether $M$ has accepted or rejected the input depends on whether the final state is an accepting state.
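A minimal sketch of these definitions, with a configuration represented as a (state, remaining input) pair; the example machine, which accepts strings containing an even number of 1s, is an assumption for illustration.

```python
# Compute the (finite) computation history of a deterministic finite automaton.
def computation_history(delta, start, accepting, word):
    """delta: dict mapping (state, symbol) -> state."""
    history = [(start, word)]          # initial configuration
    state = start
    while word:
        state = delta[(state, word[0])]  # valid transition of the automaton
        word = word[1:]
        history.append((state, word))
    return history, state in accepting   # terminal configuration has empty input

# Example: an automaton over {0, 1} accepting strings with an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd',  ('odd', '1'): 'even'}
hist, accepted = computation_history(delta, 'even', {'even'}, '1011')
print(hist)      # [('even','1011'), ('odd','011'), ('odd','11'), ('even','1'), ('odd','')]
print(accepted)  # False: '1011' contains three 1s
```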
Turing Machines
Computation histories are more commonly used in reference to Turing machines. The configuration of a single-tape Turing machine consists of the contents of the tape, the position of the read/write head on the tape, and the current stat |
https://en.wikipedia.org/wiki/List%20of%20single%20cell%20omics%20methods | More than 100 different single-cell sequencing (omics) methods have been published. The large majority of methods are paired with short-read sequencing technologies, although some of them are compatible with long-read sequencing.
List |
https://en.wikipedia.org/wiki/Factorization%20algebra | In mathematics and mathematical physics, a factorization algebra is an algebraic structure first introduced by Beilinson and Drinfel'd in an algebro-geometric setting as a reformulation of chiral algebras, and also studied in a more general setting by Costello to study quantum field theory.
Definition
Prefactorization algebras
A factorization algebra is a prefactorization algebra satisfying some properties, similar to sheaves being presheaves with extra conditions.
If $M$ is a topological space, a prefactorization algebra $\mathcal{F}$ of vector spaces on $M$ is an assignment of vector spaces $\mathcal{F}(U)$ to open sets $U$ of $M$, along with the following conditions on the assignment:
For each inclusion $U \subset V$, there's a linear map $m_V^U : \mathcal{F}(U) \to \mathcal{F}(V)$.
There is a linear map $m_V^{U_1, \dots, U_n} : \mathcal{F}(U_1) \otimes \cdots \otimes \mathcal{F}(U_n) \to \mathcal{F}(V)$ for each finite collection of open sets with each $U_i \subset V$ and the $U_i$ pairwise disjoint.
The maps compose in the obvious way: for collections of opens $\{U_{i,j}\}$, $\{V_i\}$ and an open $W$ satisfying $\bigsqcup_j U_{i,j} \subseteq V_i$ and $\bigsqcup_i V_i \subseteq W$, the corresponding compositions of structure maps agree.
So $\mathcal{F}$ resembles a precosheaf, except the vector spaces are tensored rather than (direct-)summed.
The category of vector spaces can be replaced with any symmetric monoidal category.
Factorization algebras
To define factorization algebras, it is necessary to define a Weiss cover. For $U$ an open set, a collection of opens $\{U_i\}_{i \in I}$ is a Weiss cover of $U$ if for any finite collection of points $\{x_1, \dots, x_k\}$ in $U$, there is an open set $U_i$ such that $\{x_1, \dots, x_k\} \subset U_i$.
Then a factorization algebra of vector spaces on $M$ is a prefactorization algebra of vector spaces on $M$ such that for every open $U$ and every Weiss cover $\{U_i\}_{i \in I}$ of $U$, the sequence
$$\bigoplus_{i, j} \mathcal{F}(U_i \cap U_j) \to \bigoplus_{k} \mathcal{F}(U_k) \to \mathcal{F}(U) \to 0$$
is exact. That is, $\mathcal{F}$ is a factorization algebra if it is a cosheaf with respect to the Weiss topology.
A factorization algebra is multiplicative if, in addition, for each pair of disjoint opens $U, V \subset M$, the structure map
$$m_{U \sqcup V}^{U, V} : \mathcal{F}(U) \otimes \mathcal{F}(V) \to \mathcal{F}(U \sqcup V)$$
is an isomorphism.
Algebro-geometric formulation
While this formulation is related to the one given above, the relation is not immediate.
Let $X$ be a smooth complex curve. A factorization algebra on $X$ consists of
A quasicoherent sheaf over $X^I$ for any finite s |
https://en.wikipedia.org/wiki/Akamptisomer | An akamptisomer is a type of conformational isomer characterized by a hindered inversion of a bond angle. It was first discovered in 2018 in a series of bridged porphyrin molecules. |
https://en.wikipedia.org/wiki/Nutritional%20epigenetics | Nutritional epigenetics is a science that studies the effects of nutrition on gene expression and chromatin accessibility. It is a subcategory of nutritional genomics that focuses on the effects of bioactive food components on epigenetic events.
History
Changes to children’s genetic profiles caused by fetal nutrition have been observed as early as the Dutch famine of 1944-1945. Due to malnutrition in pregnant mothers, children born during this famine were more likely to exhibit health issues such as heart disease, obesity, schizophrenia, depression, and addiction.
Biologists Randy Jirtle and Robert A. Waterland became early pioneers of nutritional epigenetics after publishing their research on the effects of a pregnant mother’s diet on her offspring’s gene functions in the research journal Molecular and Cellular Biology in 2003.
Research
Researchers in nutritional epigenetics study the interaction between molecules in food and molecules that control gene expression, which leads to areas of focus such as dietary methyl groups and DNA methylation. Nutrients and bioactive food components affect epigenetics by inhibiting enzymatic activity related to DNA methylation and histone modifications. Because methyl groups are used for suppression of undesirable genes, a mother’s level of dietary methyl consumption can significantly alter her child’s gene expression, especially during early development. Furthermore, nutrition can affect methylation as the process continues throughout an individual’s adult life. Because of this, nutritional epigeneticists have studied food as a form of molecular exposure.
Bioactive food components that influence epigenetic processes range from vitamins such as A, B6, and B12 to alcohol and elements such as arsenic, cadmium, and selenium. Dietary methyl supplements such as extra folic acid and choline can also have adverse effects on epigenetic gene regulation.
Researchers have considered dietary exposure to heavy metals such as mercury and |
https://en.wikipedia.org/wiki/Haplogroup%20I-Z63 | Haplogroup I-Z63, also known as I1a3 per the International Society of Genetic Genealogy (ISOGG), is a Y chromosome haplogroup. It is correlated with a DYS456 value below 15, but there are exceptions.
I-Z63 is most common in England, Scotland, Germany, Fennoscandia and Poland. Its progenitor is assumed to have lived in Jutland at around 2500 BCE. Within Fennoscandia, I-Z63 has a particularly strong association with Finland. To date, ancient I-Z63 has been found archeologically in the Jutland area of Schleswig, Germany, as well as in Poland and Italy.
Origins
On the basis of analysing samples of volunteers in YDNA sequencing, the YDNA analysis company YFull estimated that I-Z63 formed 4,600 years ago (2600 BC) (95% CI 5,100–4,000 ybp), with a TMRCA (Time to Most Recent Common Ancestor) of 4,400 years (95% CI 4,900–3,900 ybp) before present.
Geographically, I-Z63 is believed to have arisen in or near what is now Denmark (based in part on the current distribution of this haplogroup). The current distribution shows a very high concentration of I-Z63 in the British Isles. At the same time, the archeological record presents a strong association of I-Z63 with the Wielbark culture and, by extension, with the Goths. There is a proposed link between the Goths and British migration, the so-called "Jutish Hypothesis". The "Jutish hypothesis" claims that the Jutes may be synonymous with the Geats of southern Sweden or their neighbours, the Gutes. The evidence adduced for this theory includes:
primary sources referring to the Geats (Geátas) by alternative names such as Iútan, Iótas and Eotas;
Asser in his Life of Alfred (893) identifies the Jutes with the Goths (in a passage claiming that Alfred the Great was descended, through his mother, Osburga, from the ruling dynasty of the Jutish kingdom of Wihtwara, on the Isle of Wight), and;
the Gutasaga (13th Century) states that some inhabitants of Gotland left for mainland Europe; large burial sites |
https://en.wikipedia.org/wiki/List%20of%20Afghan%20flags | This is a list of flags associated with Afghanistan.
National flag
Standards of the head of state
Loya Jirga
Military flags
Army
Air Force
Police
Customs service
Olympic Committee
Historical flags
Political flags
Political party flags
Rebel group flags
This table does not include flags derived from rebels that became national flags. Such cases occurred once during the Saqqawists period in 1929 and twice in connection with the Taliban takeovers in 1996 and 2021.
Other
Ethnic group flags
Red Crescent Society
Corporations
Airlines
Unknown flags
Misattributed flags
See also
Flag of Afghanistan
Emblem of Afghanistan
Afghan rebel flags |
https://en.wikipedia.org/wiki/Diino | Diino was a cloud storage provider, offering online backup data storage and file sharing. The company, Diino Systems AB, was founded in 2004 and was based in Stockholm, Sweden, with sales offices in Atlanta, London and Mexico City. Its owners included Swisscom.
The service ran on Windows, Mac, Linux, Android, iPhone and iPad platforms, and allowed users to create simple automated rules for protecting data by moving it into a Diino account. The service was offered directly to consumers and SMEs, but also indirectly as a white-label solution via partners such as telecom operators, ISPs and large consumer brands.
In 2012, the service was taken over by Swiss Picture Bank, with the intention of continuing to run the service as before, now via a new Swedish company, New Diino AB. New Diino AB was declared bankrupt in September 2019.
See also
Comparison of online backup services |
https://en.wikipedia.org/wiki/Multi-factor%20authentication | Multi-factor authentication (MFA; two-factor authentication, or 2FA, along with similar terms) is an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more pieces of evidence (or factors) to an authentication mechanism. MFA protects personal data—which may include personal identification or financial assets—from being accessed by an unauthorized third party that may have been able to discover, for example, a single password.
A third-party authenticator (TPA) app enables two-factor authentication, usually by showing a randomly generated and frequently changing code to use for authentication.
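A minimal sketch of how such an authenticator app typically derives its codes, following the TOTP scheme of RFC 6238; the Base32 secret below is a made-up example value, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30, now=None):
    """Time-based one-time password (RFC 6238) with the default SHA-1 HMAC."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the displayed code changes every 30 seconds
```

Because the code depends on the current 30-second window, the server and the app only need to share the secret once; no code ever travels between them in advance.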
Factors
Authentication takes place when someone tries to log into a computer resource (such as a computer network, device, or application). The resource requires the user to supply the identity by which the user is known to the resource, along with evidence of the authenticity of the user's claim to that identity. Simple authentication requires only one such piece of evidence (factor), typically a password. For additional security, the resource may require more than one factor—multi-factor authentication, or two-factor authentication in cases where exactly two pieces of evidence are to be supplied.
The use of multiple authentication factors to prove one's identity is based on the premise that an unauthorized actor is unlikely to be able to supply the factors required for access. If, in an authentication attempt, at least one of the components is missing or supplied incorrectly, the user's identity is not established with sufficient certainty and access to the asset (e.g., a building, or data) being protected by multi-factor authentication then remains blocked. The authentication factors of a multi-factor authentication scheme may include:
Something the user has: Any physical object in the possession of the user, such as a security token (USB stick), a bank card, a key, etc.
Something t |
https://en.wikipedia.org/wiki/Bayesian%20Analysis%20%28journal%29 | Bayesian Analysis is an open-access peer-reviewed scientific journal covering theoretical and applied aspects of Bayesian methods. It is published by the International Society for Bayesian Analysis and is hosted at the Project Euclid web site.
Bayesian Analysis is abstracted and indexed by Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2011 impact factor of 1.650. |
https://en.wikipedia.org/wiki/Zero-product%20property | In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words,
$$\text{if } ab = 0, \text{ then } a = 0 \text{ or } b = 0.$$
This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics — the integers $\mathbb{Z}$, the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, and the complex numbers $\mathbb{C}$ — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain.
Algebraic context
Suppose $A$ is an algebraic structure. We might ask: does $A$ have the zero-product property? In order for this question to have meaning, $A$ must have both additive structure and multiplicative structure. Usually one assumes that $A$ is a ring, though it could be something else, e.g. the set of nonnegative integers with ordinary addition and multiplication, which is only a (commutative) semiring.
Note that if $A$ satisfies the zero-product property, and if $B$ is a subset of $A$, then $B$ also satisfies the zero-product property: if $a$ and $b$ are elements of $B$ such that $ab = 0$, then either $a = 0$ or $b = 0$ because $a$ and $b$ can also be considered as elements of $A$.
Examples
A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field.
If $p$ is a prime number, then the ring of integers modulo $p$ has the zero-product property (in fact, it is a field).
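A quick computational check of this contrast between prime and composite moduli; the moduli 5 and 6 are arbitrary examples.

```python
# Nontrivial zero divisors in Z/n: pairs of nonzero residues whose product is 0.
def zero_divisor_pairs(n):
    return [(a, b) for a in range(1, n) for b in range(1, n) if (a * b) % n == 0]

print(zero_divisor_pairs(5))  # [] -- Z/5 is a field, so the property holds
print(zero_divisor_pairs(6))  # [(2, 3), (3, 2), (3, 4), (4, 3)] -- property fails
```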
The Gaussian integers are an integral domain because they are a subring of the complex numbers.
In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not |
https://en.wikipedia.org/wiki/Andr%C3%A1s%20Vasy | András Vasy (born 1969 in Hungary) is an American-Hungarian mathematician working in the areas of partial differential equations, microlocal analysis, scattering theory, and inverse problems. He is currently a professor of mathematics at Stanford University.
Education and career
Vasy attended Stanford University, obtaining his BS in Physics and MS in Mathematics in 1993. He received his PhD from MIT under the supervision of Richard B. Melrose in 1997. Following his postdoctoral appointment at the University of California, Berkeley, he joined the MIT faculty as an assistant professor in 1999. He was awarded tenure at MIT in 2005 during a long-term stay at Northwestern University before moving to Stanford in 2006.
Awards and honors
Vasy was an Alfred P. Sloan Research Fellow from 2002 to 2004, and a Clay Research Fellow from 2004 to 2006. He was elected a Fellow of the American Mathematical Society in 2012. He was an invited speaker at the International Congress of Mathematicians in Seoul in 2014. In 2017, he was awarded the Bôcher Prize of the American Mathematical Society.
Research
The unifying feature of Vasy's work is the application of tools from microlocal analysis to problems in hyperbolic partial differential or pseudo-differential equations. He analyzed the propagation of singularities for solutions of wave equations on manifolds with corners or more complicated boundary structures, partially in joint work with Richard Melrose and Jared Wunsch. For his paper on a unified approach to scattering theory on asymptotically hyperbolic spaces and spacetimes arising in Einstein's theory of general relativity such as de Sitter space and Kerr-de Sitter spacetimes, he was awarded the Bôcher Prize in 2017. This paper led to further advances, including the proof, by Vasy and Peter Hintz, of the global nonlinear stability of the Kerr-de Sitter family of black hole spacetimes, and a new proof of Smale's conjecture for Anosov flows by Semyon Dyatlov and Maciej Zwors |
https://en.wikipedia.org/wiki/Glycolipid%20transfer%20protein | Glycolipid transfer protein is a cytosolic protein that catalyses the transfer of glycolipids between different intracellular membranes.
It was discovered by Raymond J. Metz and Norman S. Radin in 1980 and partially purified and characterized in 1982.
Recent reviews on structure and possible function are available.
This protein transports primarily different glycosphingolipids and glyceroglycolipids between intracellular membranes, but not phospholipids. It might also be involved in translocation of glucosylceramides. It was found in brain, kidney, spleen, lung, cerebellum, liver and heart.
Human proteins containing this domain
GLTP; PLEKHA8; PLEKHA9; |
https://en.wikipedia.org/wiki/Katalon%20Studio | Katalon Platform is an automation testing software tool developed by Katalon, Inc. The software is built on top of the open-source automation frameworks Selenium and Appium, with a specialized IDE interface for web, API, mobile and desktop application testing. Its initial release for internal use was in January 2015. Its first public release was in September 2016. In 2018, the software achieved 9% market penetration for UI test automation, according to The State of Testing 2018 Report by SmartBear.
Katalon is recognized as a March 2019 and March 2020 Gartner Peer Insights Customers’ Choice for Software Test Automation.
Platform
Katalon Platform provides a dual interchangeable interface for creating test cases: a manual view for less technical users and a script view geared toward experienced testers who author automation tests with syntax highlighting and intelligent code completion.
Katalon Platform follows the Page Object Model pattern. GUI elements on web, mobile, and desktop apps can be captured using the recording utility and stored into the Object Repository, which is accessible and reusable across different test cases.
Test cases can be structured using test suites with environment variables. Test execution can be parameterized and parallelized using profiles.
Remote execution in Katalon Platform can be triggered by CI systems via Docker container or command line interface (CLI).
From version 7.4.0, users are able to execute test cases from Selenium projects, along with the previous migration from TestNG and JUnit to Katalon Platform.
Version 7.8 introduced troubleshooting features intended to reduce debugging effort: Time Capsule, a browser-based video recorder, self-healing, and test failure snapshots.
Version 8.4.0 provides native integration with Azure DevOps (ADO), which enables users to map test cases in Azure DevOps to automated test cases in the Katalon Platform. Additionally, th |
https://en.wikipedia.org/wiki/Advisory%20Committee%20on%20Mathematics%20Education | The Advisory Committee on Mathematics Education (ACME) is a British policy council for the Royal Society based in London, England. Founded in 2002 by the Royal Society and the Joint Mathematical Council, ACME analyzes mathematics education practices and provides advice on education policy. ACME has been funded by the Gatsby Charitable Foundation (2002–2015) and the Department for Education.
Members
The committee chair is appointed for a three-year term. As of 2018, the membership is composed of:
Frank Kelly (Chair)
Martin Bridson
Paul Glaister
Paul Golby
Jeremy Hodgen
Mary McAlinden
Lynne McClure
Emma McCoy
Jil Matheson
David Spiegelhalter
Sally Bridgeland |
https://en.wikipedia.org/wiki/Global%20Anabaptist%20Mennonite%20Encyclopedia%20Online | The Global Anabaptist Mennonite Encyclopedia Online (GAMEO) is an online encyclopedia of topics relating to Mennonites and Anabaptism. The mission of the project is to provide free, reliable, English-language information on Anabaptist-related topics.
GAMEO was started in 1996 as the Canadian Mennonite Encyclopedia Online by the Mennonite Historical Society of Canada. In 2005 the project was renamed to its current title and the scope expanded with the additional partnership of the Mennonite Brethren Historical Commission and the Mennonite Church USA Archives. The collaboration has since further expanded, with the addition of the Mennonite Central Committee in 2006, the Mennonite World Conference in January 2007, and the Institute for the Study of Global Anabaptism in 2011.
Starting as a database of Anabaptist groups in Canada, GAMEO secured rights to copy and update the Mennonite Encyclopedia published by Herald Press in the 1950s and in 1990. A project goal was to have the entire contents of the Mennonite Encyclopedia, including the supplement volume published in 1990, available on the web. This was accomplished by February 2009, at which time the encyclopedia contained more than 14,000 articles. Articles are either adapted from other scholarly works or assigned to knowledgeable authors, and then undergo editorial review before publication to the website.
Sam Steiner of Waterloo, Ontario, served as managing editor from the encyclopedia's inception until 2012. Richard D. Thiessen of Abbotsford, British Columbia, was managing editor from 2012 to May 2017. John D. Roth of Goshen College, Goshen, Indiana, was appointed General Editor in May 2017.
See also
Mennonite Historical Library |
https://en.wikipedia.org/wiki/Commodification%20of%20nature | The commodification of nature is an area of research within critical environmental studies that is concerned with the ways in which natural entities and processes are made exchangeable through the market, and the implications thereof.
Drawing upon the work of Karl Marx, Karl Polanyi, James O’Connor and David Harvey, this area of work is normative and critical, based in Marxist geography and political ecology. Theorists use a commodification framing in order to contest the perspectives of "market environmentalism," which sees marketization as a solution to environmental degradation. The environment has been a key site of conflict between proponents of the expansion of market norms, relations and modes of governance and those who oppose such expansion. Critics emphasize the contradictions and undesirable physical and ethical consequences brought about by the commodification of natural resources (as inputs to production and products) and processes (environmental services or conditions).
Most researchers who employ a commodification of nature framing invoke a Marxian conceptualization of commodities as "objects produced for sale on the market" that embody both use and exchange value. Commodification itself is a process by which goods and services not produced for sale are converted into an exchangeable form. It involves multiple elements, including privatization, alienation, individuation, abstraction, valuation and displacement.
As capitalism expands in breadth and depth, more and more things previously external to the system become “internalized,” including entities and processes that are usually considered "natural." Nature, as a concept, however, is very difficult to define, with many layers of meaning, including external environments as well as humans themselves. Political ecology and other critical conceptions draw upon strands within Marxist geography that see nature as "socially produced," with no neat boundary separating the "social" from the "natural." |
https://en.wikipedia.org/wiki/Inborn%20errors%20of%20immunity | Inborn errors of immunity (IEI) are genetic mutations that result in an increased susceptibility to infectious disease, autoinflammatory disease, allergy, or autoimmunity. Inborn errors include, but are not limited to, primary immunodeficiencies. As of 2020, there are 431 identified inborn errors of immunity.
Types
As of 2020, there are 431 IEIs, which are divided into three categories:
Primary immunodeficiencies
Mendelian infections
Monogenic infections
Causes
A variety of mutations can cause inborn errors of immunity. These include loss of function, gain of function, and loss of expression.
Epidemiology
IEIs were historically considered very rare, affecting only 1 in 10,000–50,000 births. As more IEIs are described and clinical phenotypes are defined more precisely, their true prevalence appears to be higher than once thought. More recent estimates place prevalence at 1 in 1,000–10,000 births.
History
The first human IEI described was epidermodysplasia verruciformis in 1946, with the first primary immunodeficiency (X-linked agammaglobulinemia) described in 1952.
In 1973, the World Health Organization (WHO) established the Inborn Errors of Immunity Committee for the purpose of classifying and identifying immune defects in humans. In the 1990s, the WHO decided to focus on more common disease, and the committee was taken on by the International Union of Immunological Societies. This relationship was made official in 2008.
See also
List of primary immunodeficiencies |