https://en.wikipedia.org/wiki/GameKey | GameKeys are expansion modules made by Jakks Pacific for the purpose of adding games to GameKey-ready entries in their Plug It In & Play TV Games product line.
History
The GameKey was first announced at the 2005 International Toy Fair, and the first products were released in July 2005.
GameKeys were mainly marketed for the Namco Ms. Pac-Man controller, but different GameKeys existed for other TV Games manufactured by Jakks Pacific, including Nicktoons, Star Wars, and Disney. There were also GameKeys that never saw release due to market failure, such as Fantastic Four and Capcom.
GameKeys were discontinued in late 2006 after struggling for a year.
List of GameKey-Ready Controllers |
https://en.wikipedia.org/wiki/Lexicographic%20optimization | Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective f_1 is the most important, objective f_2 is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in f_1 to even a very large increase in f_2, f_3, etc. Similarly, the decision-maker prefers even a very small increase in f_2 to even a very large increase in f_3, f_4, etc. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, since a small increase in one objective value preempts a much larger increase in less important objective values.
As an example, consider a firm which puts safety above all. It wants to maximize the safety of its workers and customers. Subject to attaining the maximum possible safety, it wants to maximize profits. This firm performs lexicographic optimization, where f_1 denotes safety and f_2 denotes profits.
As another example, in project management, when analyzing PERT networks, one often wants to minimize the mean completion time, and subject to this, minimize the variance of the completion time.
Notation
A lexicographic maximization problem is often written as:

lex max f_1(x), f_2(x), ..., f_n(x) subject to x ∈ X

where f_1, ..., f_n are the functions to maximize, ordered from the most to the least important; x is the vector of decision variables; and X is the feasible set, i.e. the set of possible values of x. A lexicographic minimization problem can be defined analogously.
Algorithms
There are several algorithms for solving lexicographic optimization problems.
Sequential algorithm for general objectives
A leximin optimization problem with n objectives can be solv |
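For a finite feasible set, the sequential (preemptive) approach can be sketched as follows; this is a toy illustration with invented plan data, not a mathematical-programming solver:

```python
def lex_maximize(feasible, objectives):
    """Sequential lexicographic maximization over a finite feasible set.

    `objectives` is ordered from most to least important: at each stage
    keep only the points attaining the current optimum, then break the
    remaining ties with the next objective.
    """
    candidates = list(feasible)
    for f in objectives:
        best = max(f(x) for x in candidates)
        candidates = [x for x in candidates if f(x) == best]
    return candidates

# The firm from the example: safety first, then profit.  Plans A and B
# are equally safe, so the tie is broken in favor of B's higher profit.
plans = [("A", 10, 100), ("B", 10, 250), ("C", 5, 900)]
safety = lambda plan: plan[1]
profit = lambda plan: plan[2]
assert lex_maximize(plans, [safety, profit]) == [("B", 10, 250)]
```

Each pass can only shrink the candidate set, so the result is exactly the set of lexicographic optima.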
https://en.wikipedia.org/wiki/Phenome-wide%20association%20study | In genetics and genetic epidemiology, a phenome-wide association study, abbreviated PheWAS, is a study design in which the association between single-nucleotide polymorphisms or other types of DNA variants is tested across a large number of different phenotypes. The aim of PheWAS studies (or PheWASs) is to examine the causal linkage between known sequence differences and any type of trait, including molecular, biochemical, cellular, and especially clinical diagnoses and outcomes. It is a complementary approach to the genome-wide association study, or GWAS, methodology. A fundamental difference between GWAS and PheWAS designs is the direction of inference: in a PheWAS it is from exposure (the DNA variant) to many possible outcomes, that is, from SNPs to differences in phenotypes and disease risk. In a GWAS, the polarity of analysis is from one or a few phenotypes to many possible DNA variants. The approach has proven useful in rediscovering previously reported genotype-phenotype associations, as well as in identifying new ones.
The PheWAS approach was originally developed due to the widespread availability of both anonymized human clinical electronic health record (EHR) data and matched genotype data, using phenotypes defined by groupings of International Classification of Diseases (ICD) codes called phecodes. Massive genome and phenome data sets assembled for model organisms have also proved effective for PheWAS. PheWASs have also been conducted using data from existing epidemiological studies. In 2010, a proof-of-concept PheWAS study was published based on EHR billing codes from a single study site. Though this study was generally underpowered, its results suggested the potential existence of new associations between multiple phenotypes, possibly due to a common underlying cause. This paper also coined the abbreviation "PheWAS". As of 2019, PheWAS in the EHR has been conducted using ICD-9-CM, ICD-10, and ICD-10-CM diagnosis codes.
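As an illustration of the direction of inference (one variant, many phenotypes), the following sketch scans a single variant against several phenotypes using a Pearson chi-square statistic per 2x2 table. Real PheWASs typically use regression models with covariate adjustment, and all names and data here are invented:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def phewas_scan(carrier, phenotypes):
    """Test one variant against many phenotypes.

    `carrier` is a list of 0/1 flags per subject (variant carried or
    not); `phenotypes` maps a phecode to a list of 0/1 case flags over
    the same subjects.  Returns {phecode: statistic}, largest first.
    """
    results = {}
    for code, cases in phenotypes.items():
        a = sum(1 for g, p in zip(carrier, cases) if g and p)
        b = sum(1 for g, p in zip(carrier, cases) if g and not p)
        c = sum(1 for g, p in zip(carrier, cases) if not g and p)
        d = sum(1 for g, p in zip(carrier, cases) if not g and not p)
        results[code] = chi2_2x2(a, b, c, d)
    return dict(sorted(results.items(), key=lambda kv: -kv[1]))
```

The GWAS design would transpose this loop: one phenotype tested against many variants.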
Methods
PheWAS initially started from the growing use of EM |
https://en.wikipedia.org/wiki/Fiduccia%E2%80%93Mattheyses%20algorithm | A classical approach to solve the Hypergraph bipartitioning problem is an iterative heuristic by Charles Fiduccia and Robert Mattheyses. This heuristic is commonly called the FM algorithm.
Introduction
The FM algorithm is a linear-time heuristic for improving network partitions.
New features relative to the Kernighan–Lin (K–L) heuristic:
Aims at reducing net-cut costs; the concept of cutsize is extended to hypergraphs.
Only a single vertex is moved across the cut in a single move.
Vertices are weighted.
Can handle "unbalanced" partitions; a balance factor is introduced.
A special data structure is used to select vertices to be moved across the cut to improve running time.
Time complexity O(P), where P is the total # of terminals.
F–M heuristic: notation
Input: A hypergraph with a vertex (cell) set V and a hyperedge (net) set E
n(i): # of cells in Net i; e.g., n(1) = 4
s(i): size of Cell i
p(i): # of pins of Cell i; e.g., p(1) = 4
C: total # of cells; e.g., C = 13
N: total # of nets; e.g., N = 4
P: total # of pins; P = p(1) + … + p(C) = n(1) + … + n(N)
Area ratio r, 0 < r < 1
Output: 2 partitions
Cut-set size is minimized
|A|/(|A|+|B|) ≈ r
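Under the notation above, the cutsize and the gain of a single-vertex move can be sketched as follows; this is a minimal illustration with unweighted cells, whereas the actual algorithm stores gains in a bucket-list data structure so the best move is found in constant time:

```python
def cutsize(nets, side):
    """Number of nets spanning both blocks.  `nets` maps a net to its
    cells; `side` maps each cell to 0 or 1 (its current block)."""
    return sum(
        1 for cells in nets.values()
        if len({side[c] for c in cells}) == 2
    )

def gain(cell, nets, side):
    """FM gain of moving `cell`: the reduction in cutsize if it alone
    crosses the partition boundary."""
    g = 0
    for cells in nets.values():
        if cell not in cells:
            continue
        same = sum(1 for c in cells if side[c] == side[cell])
        other = len(cells) - same
        if same == 1:    # cell is alone on its side: net becomes uncut
            g += 1
        if other == 0:   # net entirely on cell's side: net becomes cut
            g -= 1
    return g
```

Each pass of the heuristic repeatedly moves the highest-gain unlocked cell that keeps the balance constraint, then keeps the best prefix of moves.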
See also
Graph partition
Kernighan–Lin algorithm |
https://en.wikipedia.org/wiki/MyDLP | MyDLP is a data loss prevention solution originally released as free and open-source software. Supported data inspection channels include web, mail, instant messaging, file transfer to removable storage devices and printers. The MyDLP development project originally made its source code available under the terms of the GNU General Public License.
MyDLP was one of the first free software projects for data loss prevention, but was acquired by the Comodo Group in May 2014. Comodo has since begun marketing the Enterprise version through its Comodo Security Solutions subsidiary, while the free version has been removed from the website. The open source code has not been updated since early 2014.
Subprojects
As of October 2010, MyDLP included the following subprojects:
MyDLP Network: Network server of the project, which was used for high load network operations such as intercepting TCP connections and hosting MyDLP network services.
MyDLP Endpoint: Remote agent of the project, which ran on endpoint machines in order to inspect end user operations such as copying a file to an external device, printing a document and capturing screenshots.
MyDLP Web UI: Management interface for system administrators to configure MyDLP. It pushed relevant parts of system configuration to both MyDLP Network and MyDLP Endpoint.
Platforms and interfaces
MyDLP Network was mostly written in Erlang, because of its performance on concurrent network operations. Python was also used for a few exceptional cases. This subsystem could run on any platform that supported Erlang and Python.
MyDLP Endpoint was developed for Windows platforms, and it was written in C++ and C#.
MyDLP Web UI was written in PHP and Adobe Flex. It used MySQL in order to store user configurations.
Features
As of October 2010, MyDLP included widespread data loss prevention features such as text extraction from binary formats, incident management queue, source code detection and data identification methods for ba |
https://en.wikipedia.org/wiki/Small-signal%20model | Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.
Overview
Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations, and can be solved easily with powerful mathematical frequency domain methods such as the Laplace transform.
In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes are nonlinear; that is, the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general, these circuits don't have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE.
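As a sketch of the idea, the following code linearizes an assumed Shockley diode characteristic around a bias point by taking the numerical slope of the I-V curve there; the parameter values are illustrative only:

```python
import math

IS = 1e-12    # assumed saturation current, A
VT = 0.02585  # thermal voltage near room temperature, V

def diode_current(v):
    """Shockley diode equation: an idealized nonlinear I-V curve."""
    return IS * (math.exp(v / VT) - 1.0)

def small_signal_conductance(v_bias, dv=1e-6):
    """First-order (linear) approximation: the slope of the I-V curve
    at the bias point, i.e. the small-signal conductance g = dI/dV."""
    return (diode_current(v_bias + dv) - diode_current(v_bias - dv)) / (2 * dv)

v0 = 0.65                        # assumed DC bias voltage, V
g = small_signal_conductance(v0)
r = 1.0 / g                      # small-signal resistance, ohms
# For a forward-biased diode, g is close to I(v0) / VT, so in the AC
# equivalent circuit the diode is replaced by the resistor r.
```

In the small-signal model this resistor replaces the diode, and the AC circuit can then be solved with ordinary linear techniques.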
However in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC beh |
https://en.wikipedia.org/wiki/IBM%20Network%20Control%20Program | The IBM Network Control Program, or NCP, was software that ran on a 37xx communications controller and managed communication with remote devices. NCP provided services comparable to the data link layer and network layer functions in the OSI model of a wide area network.
Overview
The original IBM Network Control Program ran on the 3705-I and supported access to older devices by application programs using Telecommunications Access Method (TCAM). With the advent of Systems Network Architecture (SNA), NCP was enhanced to connect cluster controllers (such as the IBM 3270) to application programs using TCAM and later to application programs using Virtual Telecommunications Access Method (VTAM). Subsequent versions of NCP were released to run on the IBM 3704, IBM 3705-II, IBM 3725, IBM 3720, or IBM 3745 Communications Controllers, all of which SNA defined as an SNA Physical Unit Type 4 (PU4). A PU4 usually had SDLC links to remote cluster controllers (PU1/PU2) or to other PU4s. Polling and addressing of the cluster controllers was performed by the NCP without mainframe intervention.
In 2005 IBM introduced Communications Controller for Linux (CCL), a software product that allows an unmodified NCP to run on the mainframe, eliminating the need for a separate communications controller in some cases.
A local NCP connected to a System/370 channel via a single address.
A remote NCP had no direct connection to a mainframe but was connected to a local NCP via one or more high-speed SDLC links.
Notes |
https://en.wikipedia.org/wiki/Nitrogen%20nutrition%20in%20the%20arbuscular%20mycorrhizal%20system | Nitrogen nutrition in the arbuscular mycorrhizal system refers to...
Role of nitrogen
Nitrogen is a vital macronutrient for plants, necessary for the biosynthesis of many basic cellular components, such as DNA, RNA and proteins. Nitrogen is obtained by plants through roots from inorganic sources or organic sources, such as amino acids. In agricultural settings, nitrogen may be a limiting factor for plant growth and yield; it is so critical a cellular component that a plant deficient in nitrogen will shunt resources away from its shoot in order to expand its root system so that it can acquire more nitrogen.
Arbuscular mycorrhizal fungi are divided into two parts depending on where the mycelium is located. The intra-radical mycelia (IRM) are found within the root itself while the extra-radical mycelium (ERM) are tiny hyphal threads which reach far out into the soil. The IRM is the site of nutrient exchange between the symbionts, while the ERM effectively serves as an extension of the plant's root system by increasing the surface area available for nutrient acquisition, including nitrogen, which can be taken up in the form of ammonium, nitrate or from organic sources. Working with an in vitro system, studies have shown that as much as 29% to 50% of the root nitrogen was taken up via the fungus. This is also true in in planta studies, such as an experiment in which the researchers showed that 75% of the nitrogen in a young maize leaf originated from the ERM.
Mechanism of action
The precise mechanism(s) by which nitrogen is taken up from the soil by the ERM, transported to the IRM, and then turned over to the plant are still under investigation. Toward elucidating the mechanisms through which nitrogen transfer is completed, the sum of numerous studies have provided the necessary tools to study this process. For example, the detection and measurement of gene expression has enabled researchers to determine which genes are up-regulated in the plant and fungus under |
https://en.wikipedia.org/wiki/Gibbon%E2%80%93human%20last%20common%20ancestor | The phylogenetic split of the superfamily Hominoidea (apes) into the Hylobatidae (gibbons) and Hominidae (great apes) families (also dubbed "gibbon–human last common ancestor", GHLCA) is dated to the early Miocene, roughly 17 million years ago.
Hylobatidae has four gibbon genera (Hylobates with 9 species, Hoolock with 3 species, Nomascus with 7 species and Symphalangus with only 1 species) containing 20 different species. Hominidae has two subfamilies, Ponginae (orangutans) and Homininae (African apes, including the human lineage).
Evolutionary history
A 2014 whole-genome molecular dating analysis indicated that the gibbon lineage diverged from that of great apes (Hominidae) around 17 million years ago, based on certain assumptions about the generation time and mutation rate.
The extinct Bunopithecus sericus was a gibbon or gibbon-like ape. Adaptive divergence associated with chromosomal rearrangements led to rapid radiation of the four genera within the Hylobatidae lineage between about 7 and 5 Mya. Each genus comprises a distinct, well-delineated lineage, but the sequence and timing of divergences among these genera have been hard to resolve due to radiative speciations and extensive incomplete lineage sorting. Recent coalescent-based analysis of both the coding and noncoding parts of the genome suggests that the most likely sequence of species divergences in the Hylobatidae lineage is (Hylobates, (Nomascus, (Hoolock, Symphalangus))).
Appearance and ecology
Because fossils are so scarce, it is not clear what GHLCA looked like. A 2019 study found that the species was "smaller than previously thought" and about the size of a gibbon.
It is unknown whether GHLCA was tailless and had a broad, flat rib cage like their descendants. But it is likely that it was a small animal, probably weighing only . This contradicts previous theories that they were the size of chimpanzees and that apes moved to hang and to swing from trees to get off the ground because they were too big. There mi |
https://en.wikipedia.org/wiki/Honorary%20Diploma%20of%20the%20Verkhovna%20Rada%20of%20Ukraine | The Honorary Diploma of the Verkhovna Rada of Ukraine is an award of the Verkhovna Rada (parliament) of Ukraine for significant contribution to any sphere of life, outstanding socio-political activities, services to the Ukrainian people in promoting and strengthening Ukraine as a democratic, social, legal state, implementing measures to ensure rights and freedoms of citizens, development of democracy, parliamentarism and civil harmony in society, and active participation in legislative activities.
Laureates |
https://en.wikipedia.org/wiki/List%20of%20provincial%20flags%20of%20Spain | This is a list of flags of provinces of Spain. The flags are listed per autonomous community. The list also discusses coats of arms, as most flags feature them.
Andalusia
Aragon
Asturias
Balearic Islands
Basque Country
Canary Islands
Cantabria
Castile and León
Castilla-La Mancha
Catalonia
Extremadura
Galicia
La Rioja
Community of Madrid
Region of Murcia
Navarre
Valencian Community
Historical |
https://en.wikipedia.org/wiki/Gammatone%20filter | A gammatone filter is a linear filter described by an impulse response that is the product of a gamma distribution and sinusoidal tone. It is a widely used model of auditory filters in the auditory system.
A gammatone response was originally proposed in 1972 as a description of revcor functions measured in the cochlear nucleus of cats.
The gammatone impulse response is given by

g(t) = a t^(n−1) e^(−2πbt) cos(2πft + φ)

where
f (in Hz) is the center frequency,
φ (in radians) is the phase of the carrier,
a is the amplitude,
n is the filter's order,
b (in Hz) is the filter's bandwidth, and
t (in seconds) is time.
This time-domain impulse response is a sinusoid (a pure tone) with an amplitude envelope which is a scaled gamma distribution function.
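A direct sampling of this impulse response can be sketched as follows; the parameter values are illustrative, whereas auditory models usually derive the bandwidth from the ERB scale:

```python
import math

def gammatone_ir(f, b, n=4, a=1.0, phi=0.0, fs=16000, dur=0.025):
    """Sample the gammatone impulse response
    g(t) = a * t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t + phi)
    at sampling rate `fs` (Hz) for `dur` seconds."""
    out = []
    for k in range(int(fs * dur)):
        t = k / fs
        env = a * t ** (n - 1) * math.exp(-2 * math.pi * b * t)  # gamma envelope
        out.append(env * math.cos(2 * math.pi * f * t + phi))    # carrier tone
    return out

# A 1 kHz filter with a 125 Hz bandwidth (illustrative numbers only).
ir = gammatone_ir(f=1000.0, b=125.0)
```

Convolving an input signal with one such response per channel yields a gammatone filterbank.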
Gammatone filterbank cepstral coefficients (GFCCs) are auditory features that have been used first in the speech domain, and later in the field of underwater target recognition.
A bank of gammatone filters is used as an improvement on the triangular filters conventionally used in mel scale filterbanks and MFCC features.
Different ways of motivating the gammatone filter for auditory processing have been presented by Johannesma, Patterson et al., Hewitt and Meddis, and Lindeberg and Friberg.
Variations
Variations and improvements of the gammatone model of auditory filtering include the complex gammatone filter, the gammachirp filter, the all-pole and one-zero gammatone filters, the two-sided gammatone filter, and filter-cascade models, and various level-dependent and dynamically nonlinear versions of these. Lindeberg and Friberg define a new family of generalized gammatone filters. |
https://en.wikipedia.org/wiki/Typing%20environment | In type theory a typing environment (or typing context) represents the association between variable names and data types.
More formally, an environment Γ is a set or ordered list of pairs ⟨x, τ⟩, usually written as x : τ, where x is a variable and τ its type.
The judgement
Γ ⊢ e : τ
is read as "e has type τ in context Γ".
In statically typed programming languages these environments are used and maintained by typing rules to type check a given program or expression.
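A minimal sketch of a type checker that threads such an environment through its rules; the toy expression language and its rules are invented for illustration:

```python
# The typing environment (context) Γ is modeled as a dict from
# variable names to types.

def type_of(expr, env):
    """Return the type of `expr` in context `env`, or raise TypeError."""
    kind = expr[0]
    if kind == "int":                      # integer literal: always int
        return "int"
    if kind == "var":                      # variable: look it up in Γ
        name = expr[1]
        if name not in env:
            raise TypeError(f"unbound variable {name}")
        return env[name]
    if kind == "add":                      # addition: both operands int
        if type_of(expr[1], env) == "int" and type_of(expr[2], env) == "int":
            return "int"
        raise TypeError("operands of + must be int")
    raise TypeError(f"unknown expression {kind}")

env = {"x": "int"}                          # Γ = {x : int}
print(type_of(("add", ("var", "x"), ("int", 1)), env))  # int
```

Binding forms (lambdas, let-expressions) would extend `env` with the bound variable's type before checking the body, which is how real typing rules grow the context.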
See also
Type system |
https://en.wikipedia.org/wiki/Petroleum%20jelly | Petroleum jelly, petrolatum, white petrolatum, soft paraffin, or multi-hydrocarbon, CAS number 8009-03-8, is a semi-solid mixture of hydrocarbons (with carbon numbers mainly higher than 25), originally promoted as a topical ointment for its healing properties. Vaseline has been a well-known American brand of petroleum jelly since 1870.
After petroleum jelly became a medicine-chest staple, consumers began to use it for cosmetic purposes and for many ailments including toenail fungus, genital rashes (non-STI), nosebleeds, diaper rash, and common colds. Its folkloric medicinal value as a "cure-all" has since been limited by a better scientific understanding of appropriate and inappropriate uses. It is recognized by the U.S. Food and Drug Administration (FDA) as an approved over-the-counter (OTC) skin protectant and remains widely used in cosmetic skin care, where it is often loosely referred to as mineral oil.
History
Marco Polo in 1273 described the export of Baku oil by hundreds of camels and ships, for burning and as an ointment for treating mange.
Native Americans discovered the use of petroleum jelly for protecting and healing skin. Sophisticated oil pits had been built as early as 1415–1450 in Western Pennsylvania. In 1859, workers operating the United States's first oil rigs noticed a paraffin-like material forming on rigs in the course of investigating malfunctions. Believing the substance hastened healing, the workers used the jelly on cuts and burns.
Robert Chesebrough, a young chemist whose previous work of distilling fuel from the oil of sperm whales had been rendered obsolete by petroleum, went to Titusville, Pennsylvania, to see what new materials had commercial potential. Chesebrough took the unrefined green-to-gold-colored "rod wax", as the drillers called it, back to his laboratory to refine it and explore potential uses. He discovered that by distilling the lighter, thinner oil products from the rod wax, he could create a light-colored |
https://en.wikipedia.org/wiki/Epipodophyllotoxin | Epipodophyllotoxins are substances naturally occurring in the root of American Mayapple plant (Podophyllum peltatum).
Some epipodophyllotoxin derivatives are currently used in the treatment of cancer. These include etoposide and teniposide. They act as anti-cancer drugs by inhibiting topoisomerase II.
See also
Podophyllotoxin |
https://en.wikipedia.org/wiki/Okumura%20model | The Okumura model is a radio propagation model that was built using data collected in the city of Tokyo, Japan. The model is ideal for use in cities with many urban structures but not many tall blocking structures. The model served as a base for the Hata model.
The Okumura model was built in three modes: one each for urban, suburban and open areas. The model for urban areas was built first and used as the base for the others.
Coverage
Frequency: 150–1920 MHz
Mobile station antenna height: between 1 m and 3 m
Base station antenna height: between 30 m and 100 m
Link distance: between 1 km and 100 km
Mathematical formulation
The Okumura model is formally expressed as:

L = LFSL + AMU − HMG − HBG − ΣKcorrection

where,
L = The median path loss. Unit: Decibel (dB)
LFSL = The free space loss. Unit: decibel (dB)
AMU = Median attenuation. Unit: decibel (dB)
HMG = Mobile station antenna height gain factor.
HBG = Base station antenna height gain factor.
Kcorrection = Correction factor gain (such as type of environment, water surfaces, isolated obstacle etc.)
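The formula above can be sketched in code. The free-space term uses the standard dB formula for d in km and f in MHz; AMU and the gain/correction terms must be read from Okumura's empirical curves, so the numbers below are placeholders rather than real curve data:

```python
import math

def free_space_loss_db(d_km, f_mhz):
    """Free-space path loss in dB (standard formula: d in km, f in MHz)."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.45

def okumura_loss_db(d_km, f_mhz, a_mu, h_mg, h_bg, corrections=()):
    """Median path loss L = LFSL + AMU - HMG - HBG - sum(Kcorrection).

    a_mu, h_mg, h_bg and the correction factors are empirical values
    read from Okumura's published curves; the arguments here are
    placeholders, not curve data.
    """
    return free_space_loss_db(d_km, f_mhz) + a_mu - h_mg - h_bg - sum(corrections)

# Illustrative numbers only (not taken from the published curves):
loss = okumura_loss_db(d_km=10, f_mhz=900, a_mu=35.0, h_mg=10.0, h_bg=6.0)
```

In practice the curves are tabulated and interpolated; the arithmetic itself is this simple sum of dB terms.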
Points to note
Okumura's model is one of the most widely used models for signal prediction in urban areas. This model is applicable for frequencies in the range 150–1920 MHz (although it is typically extrapolated up to 3000 MHz) and distances of 1–100 km. It can be used for base-station antenna heights ranging from 30–1000 m.
Okumura developed a set of curves giving the median attenuation relative to free space (AMU), in an urban area over a quasi-smooth terrain with a base station effective antenna height (hte) of 200 m and a mobile antenna height (hre) of 3 m. These curves were developed from extensive measurements using vertical omni-directional antennas at both the base and mobile, and are plotted as a function of frequency in the range 100–1920 MHz and as a function of distance from the base station in the range 1–100 km. To determine path loss using Okumura's model, the free space path loss between the points of interest is first determined, an |
https://en.wikipedia.org/wiki/Wells%20score%20%28pulmonary%20embolism%29 | The Wells score is a clinical prediction rule used to classify patients suspected of having pulmonary embolism (PE) into risk groups by quantifying the pre-test probability. It is distinct from the Wells score for DVT (deep vein thrombosis). It was originally described by Wells et al. in 1998, drawing on their experience from creating the Wells score for DVT in 1995. Today, there are multiple (revised or simplified) versions of the rule, which may lead to ambiguity.
The purpose of the rule is to select the best method of investigation (e.g. D-dimer testing, CT angiography) for ruling in or ruling out the diagnosis of PE, and to improve the interpretation and accuracy of subsequent testing, based on a Bayesian framework for the probability of the diagnosis.
The rule is more objective than clinician gestalt, but still includes subjective opinion (unlike e.g. Geneva score).
Original algorithm
Originally it was developed in 1998 to improve the low specificity of V/Q scan results (which then had a more important role in the workup of PE than now).
It categorized patients into 3 categories: low / moderate / high probability. It was formulated in the form of an algorithm, not a score.
Subsequent testing choices were V/Q scanning, pulmonary angiography, and serial compression ultrasound.
Revised score
The emergence of D-dimer assays prompted the revision of the rule.
This version was published as a score, and according to the final score, patients could be categorized into either 3 groups (low / intermediate / high risk) or 2 groups (low / high risk).
Subsequent testing choices included D-dimer testing for low risk cases, and V/Q scanning, pulmonary angiography, and compression ultrasonography for intermediate / high risk patients and low-risk patients with positive D-dimer results.
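The revised score can be sketched as a simple sum of criterion points. The criterion names below are paraphrased and the implementation is illustrative only; verify the items and cutoffs against a clinical reference before any real use:

```python
# Criteria and point values of the revised Wells score for PE, with
# the three-tier interpretation (low < 2, moderate 2-6, high > 6).
CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,   # alternative diagnosis less likely than PE
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings):
    """Sum the points for each positive finding.

    `findings` maps criterion names (keys of CRITERIA) to booleans.
    """
    return sum(CRITERIA[name] for name, present in findings.items() if present)

def risk_category(score):
    """Three-tier interpretation of the revised score."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"
```

The score then selects the next test per the text above, e.g. D-dimer testing for low-risk patients.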
Risk of PE using 3 categories (data from the derivation group)
Risk of PE using 2 categories (data from the derivation group) |
https://en.wikipedia.org/wiki/Versatile%20Video%20Coding | Versatile Video Coding (VVC), also known as H.266, ISO/IEC 23090-3, and MPEG-I Part 3, is a video compression standard finalized on 6 July 2020, by the Joint Video Experts Team (JVET), a joint video expert team of the VCEG working group of ITU-T Study Group 16 and the MPEG working group of ISO/IEC JTC 1/SC 29. It is the successor to High Efficiency Video Coding (HEVC, also known as ITU-T H.265 and MPEG-H Part 2). It was developed with two primary goals: improved compression performance and support for a very broad range of applications.
Concept
In October 2015, the MPEG and VCEG formed the Joint Video Exploration Team (JVET) to evaluate available compression technologies and study the requirements for a next-generation video compression standard. The new standard has about 50% better compression rate for the same perceptual quality compared to HEVC, with support for lossless and subjectively lossless compression. It supports resolutions ranging from very low resolution up to 4K and 16K as well as 360° videos. VVC supports YCbCr 4:4:4, 4:2:2 and 4:2:0 with 8–10 bits per component, BT.2100 wide color gamut and high dynamic range (HDR) of more than 16 stops (with peak brightness of 1000, 4000 and 10000 nits), auxiliary channels (for depth, transparency, etc.), variable and fractional frame rates from 0 to 120 Hz and higher, scalable video coding for temporal (frame rate), spatial (resolution), SNR, color gamut and dynamic range differences, stereo/multiview coding, panoramic formats, and still-picture coding. Work on high bit depth support (12 to 16 bits per component) started in October 2020 and was included in the second edition published in 2022. Encoding complexity of several times (up to ten times) that of HEVC is expected, depending on the quality of the encoding algorithm (which is outside the scope of the standard). The decoding complexity is about twice that of HEVC.
VVC development has been made using the VVC Test Model (VTM), a reference software codebase t |
https://en.wikipedia.org/wiki/Russula%20aeruginea | Russula aeruginea, also known as the grass-green russula, the tacky green russula, or the green russula, is an edible Russula mushroom. Widely distributed in northern temperate regions, it is usually found under birch, mostly in pine forests. The very poisonous death cap can have a similar appearance, especially from above.
Taxonomy
The species was first described in Elias Magnus Fries's 1863 work Monographia Hymenomycetum Sueciae. The specific epithet aeruginea is derived from the Latin aeruginus, referring to the tarnished color of copper. It is commonly known variously as the "tacky green Russula", the "grass-green Russula", or the "green Russula".
Description
The cap is flat when young, soon funnel shaped and weakly striped; somewhat sticky and shiny, pale green to light grey-green, more rarely olive green. It is often in diameter. The closely spaced gills are pale cream when young, later becoming light yellow when the spores mature. The stipe is white, occasionally with rust-coloured spots at the base, often rather short with longitudinal furrows. It measures long by thick. The flesh is white, brittle and without scent, with a mild taste. R. aeruginea mushrooms are edible.
The spore print is cream-yellow. Spores are spherical to oval with ridges and warts on the surface, and measure 6–8 by 6–7 μm.
Green specimens of the crab brittlegill, Russula xerampelina, can be mistaken for R. aeruginea. They can be readily distinguished in that specimens of R. xerampelina always smell of cooked shellfish, while specimens of R. aeruginea do not have any distinctive odor.
Habitat and distribution
The fruit bodies of Russula aeruginea grow on the ground in woods, in troops in leaf litter or in grass. It is ectomycorrhizal with birch, but is also found under conifers, particularly pine and spruce. It is widely distributed in northern temperate zones. Fruiting occurs from July to November in Europe, and in later summer to autumn in North America. The fungus is also f |
https://en.wikipedia.org/wiki/Enhanced%20Mitigation%20Experience%20Toolkit | Enhanced Mitigation Experience Toolkit (EMET) is a freeware security toolkit for Microsoft Windows, developed by Microsoft. It provides a unified interface to enable and fine-tune Windows security features. It can be used as an extra layer of defense against malware attacks, after the firewall and before antivirus software.
EMET is targeted mostly at system administrators, but the newest version is supported for any Windows user running Windows 7 and later, or Windows Server 2008 R2 and later, with .NET Framework 4.5 installed. The final edition of Windows that supported EMET was version 1703 (the Creators Update). Microsoft then changed the coding in the Fall Creators Update of Windows 10, so it no longer supported EMET. Older versions can be used on Windows XP, but not all features are available. Version 4.1 was the last version to support Windows XP.
Microsoft announced that EMET would reach end of life on July 31, 2018. The former EMET website (microsoft.com/emet) now redirects to Bing search. The successors to EMET are the ProcessMitigations module (also known as the Process Mitigation Management Tool) and Windows Defender Exploit Guard, available only on Windows 10 and Windows Server 2016. |
https://en.wikipedia.org/wiki/Unary%20coding | Unary coding, or the unary numeral system, sometimes also called thermometer code, is an entropy encoding that represents a natural number n with a code of length n + 1 (or n): usually n ones followed by a zero (if natural number is understood as non-negative integer), or n − 1 ones followed by a zero (if natural number is understood as strictly positive integer). For example, 5 is represented as 111110 or 11110. Some representations use n or n − 1 zeros followed by a one. The ones and zeros are interchangeable without loss of generality. Unary coding is both a prefix-free code and a self-synchronizing code.
Unary coding is an optimally efficient encoding for the following discrete probability distribution
for .
In symbol-by-symbol coding, it is optimal for any geometric distribution
for which k ≥ φ = 1.61803398879…, the golden ratio, or, more generally, for any discrete distribution for which
for . Although it is the optimal symbol-by-symbol coding for such probability distributions, Golomb coding achieves better compression capability for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs. For the same reason, arithmetic encoding performs better for general probability distributions, as in the last case above.
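As a concrete sketch of the scheme described above (function names are my own, not from the source), a minimal unary encoder and decoder in Python; the decoder also illustrates why the code is prefix-free and self-synchronizing:

```python
def unary_encode(n: int) -> str:
    """Encode a non-negative integer n as n ones followed by a zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> list[int]:
    """Decode a concatenation of unary codewords.

    Each '0' terminates a codeword, so decoding is a simple scan
    counting runs of ones; no codeword is a prefix of another.
    """
    values, run = [], 0
    for b in bits:
        if b == "1":
            run += 1
        else:          # '0' terminates the current codeword
            values.append(run)
            run = 0
    return values

print(unary_encode(5))             # 111110
print(unary_decode("1111101100"))  # [5, 2, 0]
```

Because every codeword ends in '0', a decoder that starts mid-stream resynchronizes at the next zero bit.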
Unary code in use today
Examples of unary code uses include:
In Golomb–Rice coding, unary encoding is used to encode the quotient part of the Golomb code word.
In UTF-8, unary encoding is used in the leading byte of a multi-byte sequence to indicate the number of bytes in the sequence so that the length of the sequence can be determined without examining the continuation bytes.
Instantaneously trained neural networks use unary coding for efficient data representation.
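The UTF-8 use in the list above can be checked directly: the run of leading 1-bits in the lead byte, terminated by a 0-bit, is a unary count of the sequence length. A small illustrative helper (the function name is mine):

```python
def utf8_sequence_length(lead_byte: int) -> int:
    """Read the length of a UTF-8 sequence from its lead byte.

    The count of leading 1-bits, terminated by a 0-bit, is a unary code:
    0xxxxxxx -> 1 byte, 110xxxxx -> 2, 1110xxxx -> 3, 11110xxx -> 4.
    """
    ones = 0
    for bit in range(7, -1, -1):
        if lead_byte & (1 << bit):
            ones += 1
        else:
            break
    return ones if ones >= 2 else 1  # zero leading ones means a 1-byte sequence

# The lead byte alone determines the length, without reading continuation bytes:
for ch in "Aé€😀":
    encoded = ch.encode("utf-8")
    assert utf8_sequence_length(encoded[0]) == len(encoded)
```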
Unary coding in biological networks
Unary coding is used in the neural circuits responsible for birdsong production. The nucleus in the brain of the songbirds that plays a part in both the learning a |
https://en.wikipedia.org/wiki/J%C3%BCrgen%20Moser | Jürgen Kurt Moser (July 4, 1928 – December 17, 1999) was a German-American mathematician, honored for work spanning over four decades, including Hamiltonian dynamical systems and partial differential equations.
Life
Moser's mother Ilse Strehlke was a niece of the violinist and composer Louis Spohr. His father was the neurologist Kurt E. Moser (July 21, 1895 – June 25, 1982), who was born to the merchant Max Maync (1870–1911) and Clara Moser (1860–1934). The latter descended from 17th-century French Huguenot immigrants to Prussia. Jürgen Moser's parents lived in Königsberg, in the German Empire, and resettled in Stralsund, East Germany, as a result of the Second World War. Moser attended the Wilhelmsgymnasium (Königsberg) in his hometown, a high school specializing in mathematics and natural sciences education, from which David Hilbert had graduated in 1880. His older brother Friedrich Robert Ernst (Friedel) Moser (August 31, 1925 – January 14, 1945) served in the German Army and died in Schloßberg during the East Prussian offensive.
Moser married the biologist Dr. Gertrude C. Courant (Richard Courant's daughter, Carl Runge's granddaughter and great-granddaughter of Emil DuBois-Reymond) on September 10, 1955 and took up permanent residence in New Rochelle, New York in 1960, commuting to work in New York City. In 1980 he moved to Switzerland, where he lived in Schwerzenbach near Zürich. He was a member of the Akademisches Orchester Zürich. He was survived by his younger brother, the photographic printer and processor Klaus T. Moser-Maync from Northport, New York, his wife, Gertrude Moser from Seattle, their daughters, the theater designer Nina Moser from Seattle and the mathematician Lucy I. Moser-Jauslin from Dijon, and his stepson, the lawyer Richard D. Emery from New York City. Moser played the piano and the cello, performing chamber music since his childhood in the tradition of a musical family, where his father played the violin and his mother the piano. He was a lifelo |
https://en.wikipedia.org/wiki/Middle%20lamella | The middle lamella is a layer that cements together the primary cell walls of two adjoining plant cells. It is the first-formed layer, deposited at the time of cytokinesis. The cell plate formed during cell division itself develops into the middle lamella, or lamellum. The middle lamella is made up of calcium and magnesium pectates. In a mature plant cell it is the outermost layer of the cell wall.
In plants, the pectins form a unified and continuous layer between adjacent cells. Frequently, it is difficult to distinguish the middle lamella from the primary wall, especially in cells that develop thick secondary walls. In such cases, the two adjacent primary walls and the middle lamella, and perhaps the first layer of the secondary wall of each cell, may be called a compound middle lamella. When the middle lamella is degraded by enzymes, as happens during fruit ripening, the adjacent cells will separate.
See also
Cell wall
Plasma membrane |
https://en.wikipedia.org/wiki/Transit%20instrument | In astronomy, a transit instrument is a small telescope with an extremely precisely graduated mount, used for the precise observation of star positions. Such instruments were previously widely used in astronomical and naval observatories: to measure star positions for the nautical almanacs mariners used in celestial navigation, and to observe star transits in order to set extremely accurate clocks (astronomical regulators), which in turn were used to set the marine chronometers carried on ships to determine longitude and served as primary time standards before atomic clocks. The instruments can be divided into three groups: meridian, zenith, and universal instruments.
Types
Meridian instruments
For observation of star transits in the exact direction of South or North:
Meridian circles, Mural quadrants etc.
Passage instruments (transportable, also for prime vertical transits)
Zenith instruments
Zenith telescope
Photozenith tube (PZT)
zenith cameras
Danjon astrolabe, Zeiss Ni2 astrolabe, Circumzenital
Universal instruments
Allow transit measurements in any direction
Theodolite (Describing a theodolite as a transit may refer to the ability to turn the telescope a full rotation on the horizontal axis, which provides a convenient way to reverse the direction of view, or to sight the same object with the yoke in opposite directions, which causes some instrumental errors to cancel.)
Altaz telescopes with graduated eyepieces (also for satellite transits)
Cinetheodolites
Observation techniques and accuracy
Depending on the type of instrument, the measurements are carried out
visually, with manual time registration (stopwatch, eye-and-ear method ("Auge-Ohr-Methode"), chronograph)
visually, by impersonal micrometer (moving thread with automatic registration)
by photographic registration
by CCD or other electro-optic sensors.
The accuracy ranges from 0.2" (theodolites, small astrolabes) to 0.01" (modern meridian circles, Danjon). Early instruments (like the mural quadrants of Tycho Brahe) had no te |
https://en.wikipedia.org/wiki/Action%20%28physics%29 | In physics, action is a scalar quantity that describes how the energy of a physical system has changed over time (its dynamics). Action is significant because the equations of motion of a system can be derived through the principle of stationary action.
In the simple case of a single particle moving with a constant velocity (thereby undergoing uniform linear motion), the action is the momentum of the particle times the distance it moves, added up along its path; equivalently, action is twice the particle's kinetic energy times the duration for which it has that amount of energy. For more complicated systems, all such quantities are combined.
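For the single-particle case just described (mass m, constant speed v for a time T, covering distance d = vT), the claim can be written out explicitly; this is the abbreviated action, the integral of momentum along the path:

```latex
S_0 \;=\; \int p \,\mathrm{d}q \;=\; p\,d \;=\; (mv)(vT) \;=\; m v^{2} T \;=\; 2\left(\tfrac{1}{2} m v^{2}\right) T ,
```

i.e., momentum times distance, which equals twice the particle's kinetic energy times the duration, as stated.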
More formally, action is a mathematical functional which takes the trajectory (also called path or history) of the system as its argument and has a real number as its result. Generally, the action takes different values for different paths. Action has dimensions of energy × time or momentum × length, and its SI unit is joule-second (like the Planck constant h).
Introduction
Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models.
It applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system randomly follows one of the possible paths, with the phase of the probability amplitude for each path being determined by the action for the path.
Solution of differential equation
Empirical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. |
https://en.wikipedia.org/wiki/Bombyx%20mori | The domestic silk moth (Bombyx mori) is an insect from the moth family Bombycidae. It is the closest relative of Bombyx mandarina, the wild silk moth. The silkworm is the larva (or caterpillar) of a silk moth. The silkworm is of particular economic value, being a primary producer of silk. A silkworm's preferred food is white mulberry leaves, though they may eat other species of mulberry, and even leaves of other plants like the osage orange. Domestic silk moths are entirely dependent on humans for reproduction, as a result of millennia of selective breeding. Wild silk moths (other species of Bombyx) are not as commercially viable in the production of silk.
Sericulture, the practice of breeding silkworms for the production of raw silk, has been underway for at least 5,000 years in China, whence it spread to India, Korea, Nepal, Japan, and then the West. The domestic silk moth was domesticated from the wild silk moth Bombyx mandarina, which has a range from northern India to northern China, Korea, Japan, and the far eastern regions of Russia. The domestic silk moth derives from Chinese rather than Japanese or Korean stock.
Silk moths were unlikely to have been domestically bred before the Neolithic period. Before then, the tools to manufacture quantities of silk thread had not been developed. The domesticated B. mori and the wild B. mandarina can still breed and sometimes produce hybrids. It is unknown if B. mori can hybridize with other Bombyx species. Compared to most members in the genus Bombyx, domestic silk moths have lost their color pigments as well as their ability to fly.
Types
Mulberry silkworms can be divided into three major categories based on seasonal brood frequency. Univoltine silkworms produce only one brood a season, and they are generally found in and around Europe. Univoltine eggs must hibernate through the winter, ultimately cross-fertilizing in spring. Bivoltine varieties are normally found in East Asia, and their accelerated breeding process |
https://en.wikipedia.org/wiki/Nupapillomavirus | Nupapillomavirus is a genus of viruses in the family Papillomaviridae. Humans serve as natural hosts. There is only one species in this genus: Nupapillomavirus 1. Diseases associated with this genus include facial warts. It has also been detected in some skin carcinomas and premalignant keratoses.
Structure
Viruses in Nupapillomavirus are non-enveloped, with icosahedral geometries and T=7 symmetry. The diameter is around 52–55 nm. Genomes are circular, around 8 kb in length.
Life cycle
Viral replication is nuclear. Entry into the host cell is achieved by attachment of the viral proteins to host receptors, which mediates endocytosis. Replication follows the dsDNA bidirectional replication model. DNA-templated transcription, with some alternative splicing mechanism is the method of transcription. The virus exits the host cell by nuclear envelope breakdown.
Humans serve as the natural host. Transmission is by contact. |
https://en.wikipedia.org/wiki/Desperate%20Dan | Desperate Dan is a wild west character in the now-defunct Scottish comic magazine The Dandy. He made his appearance in the first issue which was dated 4 December 1937 and became the magazine's mascot. He is apparently the world's strongest man, able to lift a cow with one hand. The pillow of his (reinforced) bed is filled with building rubble and his beard is so tough he shaves with a blowtorch.
The character was created by Dudley D. Watkins, originally as an outlaw or ‘desperado’ (hence his name), but evolved into a more sympathetic type, using his strength to help the underdog. After Watkins’ death in 1969, the cartoons were drawn by many other artists, principally Ken H. Harrison, though the Watkins canon was often recycled. When the Dandy became digital-only in 2012, the Desperate Dan strips were drawn by David Parkins.
There is a statue of Dan in Dundee, Scotland, where his publishers, D. C. Thomson & Co. are based.
History
The strip was drawn by Dudley D. Watkins until his death in 1969. Although The Dandy Annuals featured new strips from other artists from then on, the comic continued reprinting Watkins strips until 1983 (though the then Korky the Cat artist Charles Grigg drew new strips for annuals and summer specials), when it was decided to start running new strips. These were initially drawn by Peter Davidson, but Ken H. Harrison soon took over as regular artist. The following year Dan was promoted to the front cover of The Dandy, replacing Korky who had been there since issue 1. Starting from issue 2985, dated 6 February 1999, Cuddles and Dimples replaced Dan on the front cover. This did not last long, however, as after a readers' poll in 2000, Dan returned to the cover. Although Ken Harrison was the main artist from 1983 to 2007, other artists have also occasionally filled in for Harrison, including Tom Williams, David Parkins, Trevor Metcalfe and Anthony Caluori in the early 1990s. John Geering took over the strip between 1994 and 1997, after which |
https://en.wikipedia.org/wiki/Riemann%20hypothesis | In mathematics, the Riemann hypothesis is the conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1/2. Many consider it to be the most important unsolved problem in pure mathematics. It is of great interest in number theory because it implies results about the distribution of prime numbers. It was proposed by Bernhard Riemann in 1859, after whom it is named.
The Riemann hypothesis and some of its generalizations, along with Goldbach's conjecture and the twin prime conjecture, make up Hilbert's eighth problem in David Hilbert's list of twenty-three unsolved problems; it is also one of the Clay Mathematics Institute's Millennium Prize Problems, which offers a million dollars to anyone who solves any of them. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann zeta function ζ(s) is a function whose argument s may be any complex number other than 1, and whose values are also complex. It has zeros at the negative even integers; that is, ζ(s) = 0 when s is one of −2, −4, −6, .... These are called its trivial zeros. The zeta function is also zero for other values of s, which are called nontrivial zeros. The Riemann hypothesis is concerned with the locations of these nontrivial zeros, and states that:
Thus, if the hypothesis is correct, all the nontrivial zeros lie on the critical line consisting of the complex numbers 1/2 + it, where t is a real number and i is the imaginary unit.
Riemann zeta function
The Riemann zeta function is defined for complex s with real part greater than 1 by the absolutely convergent infinite series ζ(s) = 1 + 1/2^s + 1/3^s + ⋯
Leonhard Euler already considered this series in the 1730s for real values of s, in conjunction with his solution to the Basel problem. He also proved that it equals the Euler product
where the infinite product extends over all prime numbers p.
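The agreement between the series and the Euler product can be checked numerically. A small sketch at s = 2, where the exact value is π²/6 (Euler's solution to the Basel problem); truncation limits are arbitrary choices:

```python
import math

def zeta_series(s: float, terms: int = 100_000) -> float:
    """Partial sum of the Dirichlet series for zeta(s), valid for Re(s) > 1."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_euler(s: float, limit: int = 1000) -> float:
    """Partial Euler product over all primes below `limit`."""
    sieve = [True] * limit                      # sieve of Eratosthenes
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    product = 1.0
    for p, is_prime in enumerate(sieve):
        if is_prime:
            product *= 1.0 / (1.0 - p ** -s)    # factor 1/(1 - p^(-s))
    return product

exact = math.pi ** 2 / 6
print(zeta_series(2), zeta_euler(2), exact)     # all close to 1.6449...
```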
The Riemann hypothesis discusses zeros outside the region of convergence of this seri |
https://en.wikipedia.org/wiki/T.37 | T.37 is an ITU standard which deals with sending fax messages using email. It is also referred to as "Internet fax" or "Store-forward-fax".
A fax machine supporting T.37 will send a fax to an email address by converting the document to a TIFF-F image, attaching it to an email (using the MIME format), and sending the document (using SMTP). The destination fax receives the email and prints the attached document.
To interface with regular fax machines:
T.37 can be used in conjunction with fax gateways to communicate with regular fax machines. The fax gateway converts emails to regular faxes or regular faxes to emails.
A T.37-compliant fax machine includes legacy fax functionality to send to regular fax numbers and requires a phone line for this.
To find the destination fax's email address, the RFC 4143 standard was developed, which allows a fax machine to use a destination fax number to look up an alternative email address.
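The assembly step described above — a TIFF-F image attached to an email in MIME format, ready to hand to SMTP — can be sketched with Python's standard email library (names, addresses, and the placeholder TIFF bytes are illustrative assumptions, not part of the standard):

```python
from email.message import EmailMessage

def build_t37_message(tiff_data: bytes, sender: str, recipient: str) -> EmailMessage:
    """Assemble a T.37-style store-and-forward fax message:
    a TIFF image attached to an email using MIME."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Fax message"
    msg.set_content("Fax image attached.")
    # Attach the fax page as image/tiff, per the T.37 scheme described above.
    msg.add_attachment(tiff_data, maintype="image", subtype="tiff",
                       filename="fax.tiff")
    return msg

# A real sender would pass the result to smtplib.SMTP(...).send_message(msg).
fax = build_t37_message(b"II*\x00", "fax@example.com", "fax@example.net")
print(fax.get_content_type())   # multipart/mixed
```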
See also
Unified messaging
T.38 (Fax over the Internet Protocol)
Internet fax
External links
Official ITU-T T.37 page
Cisco Fax over IP T.37 Store and Forward Fax
Internet Standards
VoIP protocols
ITU-T recommendations
ITU-T T Series Recommendations |
https://en.wikipedia.org/wiki/Corners%20theorem | In arithmetic combinatorics, the corners theorem states that for every ε > 0, for large enough N, any set of at least εN² points in the N × N grid contains a corner, i.e., a triple of points of the form (x, y), (x + h, y), (x, y + h) with h ≠ 0. It was first proved by Miklós Ajtai and Endre Szemerédi in 1974 using Szemerédi's theorem. In 2003, József Solymosi gave a short proof using the triangle removal lemma.
Statement
Define a corner to be a subset of ℤ² of the form {(x, y), (x + h, y), (x, y + h)}, where x, y, h ∈ ℤ and h > 0. For every ε > 0, there exists a positive integer N(ε) such that for any N ≥ N(ε), any subset A ⊆ {1, …, N}² with size at least εN² contains a corner.
The condition h > 0 can be relaxed to h ≠ 0 by showing that if A is dense, then it has some dense subset that is centrally symmetric.
Proof overview
What follows is a sketch of Solymosi's argument.
Suppose A ⊆ {1, …, N}² is corner-free. Construct an auxiliary tripartite graph G with parts X, Y, and Z, where a vertex a ∈ X corresponds to the line x = a, a vertex b ∈ Y corresponds to the line y = b, and a vertex c ∈ Z corresponds to the line x + y = c. Connect two vertices if the intersection of their corresponding lines lies in A.
Note that a triangle in G corresponds to a corner in A, except in the trivial case where the lines corresponding to the vertices of the triangle concur at a point in A. It follows that every edge of G is in exactly one triangle, so by the triangle removal lemma, G has o(N²) edges, so |A| = o(N²), as desired.
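The definition of a corner is easy to check by brute force on small examples (illustrative code, using the relaxed condition h ≠ 0 mentioned above):

```python
def has_corner(points) -> bool:
    """Return True if the set contains a corner:
    points (x, y), (x + h, y), (x, y + h) with h != 0."""
    pts = set(points)
    for (x, y) in pts:
        for (x2, y2) in pts:
            h = x2 - x
            # Same row, nonzero offset h, and the third vertex present.
            if h != 0 and y2 == y and (x, y + h) in pts:
                return True
    return False

print(has_corner({(0, 0), (1, 0), (0, 1)}))   # True: a corner with h = 1
print(has_corner({(0, 0), (1, 0), (2, 0)}))   # False: collinear points
```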
Quantitative bounds
Let r∠(N) be the size of the largest subset of {1, …, N}² which contains no corner. The best known bounds are
where and . The lower bound is due to Green, building on the work of Linial and Shraibman. The upper bound is due to Shkredov.
Multidimensional extension
A corner in ℤᵈ is a set of points of the form {x} ∪ {x + h·e_i : 1 ≤ i ≤ d}, where e_1, …, e_d is the standard basis of ℝᵈ and h ≠ 0. The natural extension of the corners theorem to this setting can be shown using the hypergraph removal lemma, in the spirit of Solymosi's proof. The hypergraph removal lemma was shown independently by Gowers and Nagle, Rödl, Schacht and Skokan.
Multidimensional Szemerédi's Theorem
The multidimensional Szemerédi theorem states that for any fixed |
https://en.wikipedia.org/wiki/F%C3%BCrst-Plattner%20Rule | The Fürst-Plattner rule (also known as the trans-diaxial effect) describes the stereoselective addition of nucleophiles to cyclohexene derivatives.
Introduction
Cyclohexene derivatives, such as imines, epoxides, and halonium ions, react with nucleophiles in a stereoselective fashion, affording trans-diaxial addition products. The term "trans-diaxial addition" describes the mechanism of the addition; however, the products are likely to equilibrate by ring flip to the lower-energy conformer, placing the new substituents in the equatorial position.
Mechanism and Stereochemistry
Epoxidation of a substituted cyclohexene affords a product where the R group resides in the pseudo-equatorial position. Nucleophilic ring-opening of this class of epoxides can occur by attack at either the C1- or C2-position. It is well known that nucleophilic ring-opening reactions of these substrates can proceed with excellent regioselectivity. The Fürst-Plattner rule attributes this regiochemical control to a large preference for the reaction pathway that follows the more stable chair-like transition state (attack at the C1-position) over the one proceeding through the disfavored twist-boat-like transition state (attack at the C2-position). Attack at the C1-position proceeds over a reaction barrier that is lower by around 5 kcal mol⁻¹, depending on the specific conditions. Similarly, the Fürst-Plattner rule applies to nucleophilic additions to imines and halonium ions.
Examples
Epoxide addition
A recent example of the Fürst-Plattner rule can be seen in work by Chrisman et al., where limonene is epoxidized to give a 1:1 mixture of diastereomers. Exposure to a nitrogen nucleophile in water at reflux provides only one ring-opened product in 75–85% ee.
Mechanism
The half-chair conformation indicates that attack occurs stereoselectively on the diastereomer where the electrophilic carbon can receive the nucleophile and proceed to the favored chair conformation.
Woodward's Reserpine Syn |
https://en.wikipedia.org/wiki/Joseph-Louis%20Lagrange | Joseph-Louis Lagrange (born Giuseppe Luigi Lagrangia or Giuseppe Ludovico De la Grange Tournier; 25 January 1736 – 10 April 1813), also reported as Giuseppe Luigi Lagrange or Lagrangia, was an Italian mathematician, physicist and astronomer, later naturalized French. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics.
In 1766, on the recommendation of the Swiss mathematician Leonhard Euler and the French mathematician d'Alembert, Lagrange succeeded Euler as the director of mathematics at the Prussian Academy of Sciences in Berlin, Prussia, where he stayed for over twenty years, producing volumes of work and winning several prizes of the French Academy of Sciences. Lagrange's treatise on analytical mechanics (Mécanique analytique, 4. ed., 2 vols. Paris: Gauthier-Villars et fils, 1788–89), written in Berlin and first published in 1788, offered the most comprehensive treatment of classical mechanics since Newton and formed a basis for the development of mathematical physics in the nineteenth century.
In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. He was instrumental in the decimalisation in Revolutionary France, became the first professor of analysis at the École Polytechnique upon its opening in 1794, was a founding member of the Bureau des Longitudes, and became Senator in 1799.
Scientific contribution
Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He extended the method to include possible constraints, arriving at the method of Lagrange multipliers.
Lagrange invented the method of solving differential equations known as variation of parameters, applied differential calculus to the theory of probabilities and worked on solutions for algebraic equations. He proved that every natural number is a sum of four squares. His treatise Theorie des fonctio |
https://en.wikipedia.org/wiki/Collaborative%20Study%20on%20the%20Genetics%20of%20Alcoholism | The Collaborative Studies on the Genetics of Alcoholism (COGA) is an eleven-center research project in the United States designed to understand the genetic basis of alcoholism. Research is conducted at University of Connecticut, Indiana University, University of Iowa, SUNY Downstate Medical Center, Washington University in St. Louis, University of California at San Diego, Rutgers University, University of Texas Health Science Center at San Antonio, Virginia Commonwealth University, Icahn School of Medicine at Mount Sinai, and Howard University.
Henri Begleiter and Theodore Reich were founding PI and Co-PI of COGA. Since 1991, COGA has interviewed more than 17,000 members of more than 2,200 families from around the United States, many of whom have been longitudinally assessed. Family members, including adults, children, and adolescents, have been carefully characterized across a variety of domains, including alcohol- and other substance-related phenotypes, co-occurring disorders (e.g., depression), electrophysiology, key precursor behavioral phenotypes (e.g., conduct disorder), and environmental risk factors (e.g., stress). This has provided a very rich phenotypic dataset to complement the large repository of cell lines and DNA for current and future studies. The dataset has been made widely available to advance the field: hundreds of researchers have worked with data generated as part of COGA through a variety of different mechanisms, including data sharing through the Genetic Analysis Workshops, as COGA collaborators, through meta-analysis consortia including the Psychiatric Genomics Consortium, and as independent requestors for COGA samples and data.
In studying alcoholism, COGA hopes to find better ways of treating alcoholism and improving the lives of the millions of people who suffer from alcoholism. The COGA project has achieved national and international acclaim for its accomplishments, and numerous articles about the study have been publi |
https://en.wikipedia.org/wiki/Ohmic%20contact | An ohmic contact is a non-rectifying electrical junction: a junction between two conductors that has a linear current–voltage (I–V) curve as with Ohm's law. Low-resistance ohmic contacts are used to allow charge to flow easily in both directions between the two conductors, without blocking due to rectification or excess power dissipation due to voltage thresholds.
By contrast, a junction or contact that does not demonstrate a linear I–V curve is called non-ohmic. Non-ohmic contacts come in a number of forms, such as p–n junction, Schottky barrier, rectifying heterojunction, or breakdown junction.
Generally, the term "ohmic contact" implicitly refers to an ohmic contact of a metal to a semiconductor, where achieving an ohmic contact is possible but requires careful technique. Metal–metal ohmic contacts are relatively simple to make, by ensuring direct contact between the metals without intervening layers of insulating contamination, excessive roughness, or oxidation; various techniques are used to create ohmic metal–metal junctions (soldering, welding, crimping, deposition, electroplating, etc.). This article focuses on metal–semiconductor ohmic contacts.
Stable contacts at semiconductor interfaces, with low contact resistance and linear I–V behavior, are critical for the performance and reliability of semiconductor devices, and their preparation and characterization are major efforts in circuit fabrication. Poorly prepared junctions to semiconductors can easily show rectifying behaviour by causing depletion of the semiconductor near the junction, rendering the device useless by blocking the flow of charge between those devices and the external circuitry. Ohmic contacts to semiconductors are typically constructed by depositing thin metal films of a carefully chosen composition, possibly followed by annealing to alter the semiconductor–metal bond.
Physics of formation of metal–semiconductor ohmic contacts
Both ohmic contacts and Schottky barriers are depen |
https://en.wikipedia.org/wiki/The%20Indian%20Stammering%20Association | The Indian Stammering Association (TISA) is a public charitable trust and self-help movement for people in India who stammer. In India a person who stammers (PWS) faces stigma at home and in public, as often parents chide their children publicly, and social acceptance is not high.
Background
An estimated 11 to 12 million people in India stammer. Stammering is a physiological disorder. The World Health Organization classifies stuttering (stammering) in its section F98.5, "Mental and behavioural disorders", where it is defined as "Speech that is characterised by frequent repetition or prolongation of sounds or syllables or words, or by frequent hesitations or pauses that disrupt the rhythmic flow of speech. It should be classified as a disorder only if its severity is such as to markedly disturb the fluency of speech." However, India has a shortage of good speech therapists, speech therapy is expensive, and the government of India does not officially recognise the condition as a handicap. Those who stutter face problems getting jobs as well as barriers to career growth, resulting in feelings of shame, guilt, and fear of not being accepted within an organisation. Many stammering people avoid talking and prefer to communicate by writing, text or e-mail, as they may be unable to enunciate even simple words.
Formation
TISA grew out of an e-network managed by Viren Gandhi from Mumbai operated through a Yahoo group initiated on 3 April 2001 by Dr. Satyendra K Srivastava, an Indian PWS. The blog, called "Haqlana" (Hindi for stammer), where Srivastava first went public with his condition and was joined later by members responding from across the country, raised awareness that real-time experiences, like difficulty in answering when attendance is taken in schools, job interview difficulties, and fear of social boycott, were widespread. By late 2010, the group had 576 members, contributing almost 6,000 posts on issues including speech therapy reviews, self-help tips, and e |
https://en.wikipedia.org/wiki/List%20of%20animals%20by%20number%20of%20neurons | The following are two lists of animals ordered by the size of their nervous system. The first list shows number of neurons in their entire nervous system, indicating their overall neural complexity. The second list shows the number of neurons in the structure that has been found to be representative of animal intelligence. The human brain contains 86 billion neurons, with 16 billion neurons in the cerebral cortex.
Scientists count and quantify neurons and synapses in pursuit of neuroscience's program of "self-knowledge": understanding how the evolution of a complex system of roughly 10^11 neurons and 10^14 synapses could lead to the appearance of intelligence in the biological species "sapiens".
Overview
Neurons are the cells that transmit information in an animal's nervous system so that it can sense stimuli from its environment and behave accordingly. Not all animals have neurons; Trichoplax and sponges lack nerve cells altogether.
Neurons may be packed to form structures such as the brain of vertebrates or the neural ganglions of insects.
The number of neurons and their relative abundance in different parts of the brain is a determinant of neural function and, consequently, of behavior.
Whole nervous system
All numbers for neurons (except Caenorhabditis and Ciona) and all numbers for synapses (except Ciona) are estimates.
List of animal species by forebrain (cerebrum or pallium) neuron number
The question of what physical characteristic of an animal makes it intelligent has varied over the centuries. One early speculation was brain size (or weight, which provides the same ordering). A second proposal was brain-to-body-mass ratio, and a third was encephalization quotient, sometimes referred to as EQ. The current best predictor is the number of neurons in the forebrain, based on Herculano-Houzel's improved neuron counts. It accounts most accurately for variations |
https://en.wikipedia.org/wiki/Connection%20%28algebraic%20framework%29 | Geometry of quantum systems (e.g., noncommutative geometry and supergeometry) is mainly phrased in algebraic terms of modules and algebras. Connections on modules are a generalization of a linear connection on a smooth vector bundle E → X, written as a Koszul connection on the C∞(X)-module of sections of E.
Commutative algebra
Let A be a commutative ring and M an A-module. There are different equivalent definitions of a connection on M.
First definition
If f : A → B is a ring homomorphism, an A-linear connection is an A-linear morphism ∇ : M → Ω¹_{B/A} ⊗_B M which satisfies the identity ∇(bm) = db ⊗ m + b∇m. A connection extends, for all i ≥ 0, to a unique map ∇ : Ω^i_{B/A} ⊗_B M → Ω^{i+1}_{B/A} ⊗_B M satisfying ∇(ω ⊗ m) = dω ⊗ m + (−1)^i ω ∧ ∇m. A connection is said to be integrable if ∇ ∘ ∇ = 0, or equivalently, if the curvature vanishes.
Second definition
Let d(A) be the module of derivations of a ring A. A connection on an A-module M is defined as an A-module morphism ∇ : d(A) → Diff₁(M, M), u ↦ ∇_u, such that the first-order differential operators ∇_u on M obey the Leibniz rule ∇_u(am) = u(a)m + a∇_u(m). Connections on a module over a commutative ring always exist.
The curvature of the connection ∇ is defined as the zero-order differential operator R(u, u′) = ∇_u ∘ ∇_{u′} − ∇_{u′} ∘ ∇_u − ∇_{[u, u′]} on the module M for all derivations u, u′.
If E → X is a vector bundle, there is a one-to-one
correspondence between linear
connections Γ on E and the
connections ∇ on the
C∞(X)-module of sections of E. Strictly speaking, ∇ corresponds to
the covariant differential of a
connection on E.
Graded commutative algebra
The notion of a connection on modules over commutative rings is
straightforwardly extended to modules over a graded
commutative algebra. This is the case of
superconnections in supergeometry of
graded manifolds and supervector bundles.
Superconnections always exist.
Noncommutative algebra
If is a noncommutative ring, connections on left
and right A-modules are defined similarly to those on
modules over commutative rings. However,
these connections need not exist.
In contrast with connections on left and right modules, there is a
problem of how to define a connection on an
R-S-bimodule over noncommutative rings
R and S. There are different d |
https://en.wikipedia.org/wiki/Multi-function%20structure | A multi-function material is a composite material. The traditional approach to the development of structures is to address the load-carrying function and other functional requirements separately. Recently, however, there has been increased interest in the development of load-bearing materials and structures which have integral non-load-bearing functions, guided by recent discoveries about how multifunctional biological systems work.
Introduction
With conventional structural materials, it has been difficult to achieve simultaneous improvement in multiple structural functions, but the increasing use of composite materials has been driven in part by the potential for such improvements. The functions can range from mechanical to electrical and thermal. The most widely used composites have polymer matrices, which are typically poor conductors. Enhanced conductivity can be achieved by reinforcing the composite with carbon nanotubes, for instance.
Functions
Among the many functions that can be attained are electrical/thermal conductivity, sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, and recyclability and biodegradability. See also functionally graded materials, which are composite materials in which the composition or the microstructure is locally varied so that a certain variation of the local material properties is achieved. Functionally graded materials, however, are designed for a specific function and application.
Many applications such as re-configurable aircraft wings, shape-changing aerodynamic panels for flow control, variable geometry engine exhausts, turbine blade, wind turbine configuration at different wind speed, microelectromechanical systems (micro-switches), mechanical memory cells, valves, micropumps, flexible direction panel position in solar cells, innovative architecture (adaptive shape panels for roofs and windows), flexible and foldable electronic de |
https://en.wikipedia.org/wiki/Polyelectrolyte%20adsorption | Adsorption of polyelectrolytes on solid substrates is a surface phenomenon where long-chained polymer molecules with charged groups (dubbed polyelectrolytes) bind to a surface that is charged in the opposite polarity. On the molecular level, the polymers do not actually bond to the surface, but tend to "stick" to the surface via intermolecular forces and the charges created by the dissociation of various side groups of the polymer. Because the polymer molecules are so long, they have a large amount of surface area with which to contact the surface and thus do not desorb as small molecules are likely to do. This means that adsorbed layers of polyelectrolytes form a very durable coating. Due to this important characteristic of polyelectrolyte layers they are used extensively in industry as flocculants, for solubilization, as supersorbers, antistatic agents, as oil recovery aids, as gelling aids in nutrition, additives in concrete, or for blood compatibility enhancement to name a few.
Kinetics of layer formation
Models for the adsorption behavior of polyelectrolytes in solution onto a solid surface are extremely situational. Vastly different behaviors are exhibited depending on polyelectrolyte character and concentration, ionic strength of the solution, solid surface character, and pH, among several other factors. These complex models are specialized for particular applications and parameter ranges in order to remain accurate.
Theoretical kinetics
However, the general character of the process can be reasonably well modeled with a polyelectrolyte in solution, and an oppositely charged surface where no covalent interaction between the surface and chain occurs. This model for the adsorbed amount of polyelectrolyte at a charged surface is derived from DLVO theory, which models the interaction of charged particles in solution, and mean field theory, which simplifies systems for analysis.
Using a modified Poisson-Boltzmann equation and mean field equation, the conc |
https://en.wikipedia.org/wiki/Animal%E2%80%93industrial%20complex | Animal–industrial complex (AIC) is a concept used by activists and scholars to describe what they contend is the systematic and institutionalized exploitation of animals. It includes every economic activity involving animals, such as the food industry (e.g., meat, dairy, poultry, apiculture), animal testing (e.g., academic, industrial, animals in space), medicine (e.g., bile and other animal products), clothing (e.g., leather, silk, wool, fur), labor and transport (e.g., working animals, animals in war, remote control animals), tourism and entertainment (e.g., circus, zoos, blood sports, trophy hunting, animals held in captivity), selective breeding (e.g., pet industry, artificial insemination), and so forth. Proponents of the term claim that activities described by the term differ from individual acts of animal cruelty in that they constitute institutionalized animal exploitation.
Killing more than 200 billion land and aquatic animals every year, the AIC has been implicated in climate change, ocean acidification, biodiversity loss, and the Holocene extinction. It is also held responsible for the spread of diseases from animals to humans, including the COVID-19 pandemic.
Definitions
The expression animal–industrial complex was coined by the Dutch cultural anthropologist and philosopher Barbara Noske in her 1989 book Humans and Other Animals, saying that animals "have become reduced to mere appendages of computers and machines." The expression relates the practices, organizations, and overall industry that turns animals into food and other commodities to the military–industrial complex.
Richard Twine later refined the concept, regarding it as the "partly opaque and multiple set of networks and relationships between the corporate (agricultural) sector, governments, and public and private science. With economic, cultural, social and affective dimensions it encompasses an extensive range of practices, technologies, images, identities and markets." Twine also discusses the |
https://en.wikipedia.org/wiki/Poisson%20bracket | In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by and , respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates.
In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups.
All of these objects are named in honor of Siméon Denis Poisson.
Properties
Given two functions f and g that depend on phase space and time, their Poisson bracket {f, g} is another function that depends on phase space and time. The following rules hold for any three functions f, g, h of phase space and time:
Anticommutativity: {f, g} = −{g, f}
Bilinearity: {af + bg, h} = a{f, h} + b{g, h} for constants a, b (and similarly in the second argument)
Leibniz's rule: {fg, h} = f{g, h} + {f, h}g
Jacobi identity: {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0
Also, if a function k is constant over phase space (but may depend on time), then {f, k} = 0 for any f.
Definition in canonical coordinates
In canonical coordinates (also known as Darboux coordinates) (q_i, p_i) on the phase space, given two functions f(q_i, p_i, t) and g(q_i, p_i, t), the Poisson bracket takes the form
{f, g} = Σ_{i=1}^{N} (∂f/∂q_i · ∂g/∂p_i − ∂f/∂p_i · ∂g/∂q_i).
The Poisson brackets of the canonical coordinates are
{q_i, q_j} = 0,  {p_i, p_j} = 0,  {q_i, p_j} = δ_{ij},
where δ_{ij} is the Kronecker delta.
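The canonical bracket can be checked numerically. The sketch below is my own (not from the article): it approximates the partial derivatives with central differences for one degree of freedom, and verifies {q, p} = 1, {H, H} = 0, and anticommutativity.

```python
# Numerical sketch of the canonical Poisson bracket (one degree of freedom):
# {f, g} = df/dq * dg/dp - df/dp * dg/dq, with derivatives approximated
# by central finite differences. All function names here are illustrative.

def poisson_bracket(f, g, q, p, h=1e-5):
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

q_fn = lambda q, p: q                   # the coordinate function q
p_fn = lambda q, p: p                   # the momentum function p
H = lambda q, p: p**2 / 2 + q**2 / 2    # sample harmonic-oscillator Hamiltonian

print(round(poisson_bracket(q_fn, p_fn, 0.3, 0.7), 6))  # {q, p} = 1.0
print(round(poisson_bracket(H, H, 0.3, 0.7), 6))        # {H, H} = 0.0
print(round(poisson_bracket(p_fn, q_fn, 0.3, 0.7), 6))  # {p, q} = -1.0
```

The last line illustrates anticommutativity; {H, H} = 0 reflects the fact that any function Poisson-commutes with itself, which is why a time-independent Hamiltonian is conserved along the flow it generates.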
|
https://en.wikipedia.org/wiki/Hangprinter | Hangprinter is an open-source fused deposition modeling delta 3D printer notable for its unique frameless design. It was created by Torbjørn Ludvigsen. The Hangprinter uses relatively low cost parts and can be constructed for around US$250. The printer is part of the RepRap project, meaning many of the parts of the printer are able to be produced on the printer itself (partially self replicating). The design files for the printer are available on GitHub for download, modification and redistribution.
Versions
Version 0
The Hangprinter v0, also called the Slideprinter, is a 2D plotter. It was designed solely to test if a 3D version could realistically be created.
Version 1
The Hangprinter v1 uses counter weights to stay elevated.
Version 2
All parts of the Hangprinter Version 2 are contained within a single unit which uses cables to suspend the printer within a room, allowing it to create extremely large objects over 4 meters tall.
Version 3
Version 3 of the Hangprinter has the motors and gears attached to the ceiling, making the carriage lighter.
Version 4
Version 4 includes upgrades from version 3, including flex compensation, better calibration and automatic homing.
Fused Particle Fabrication/Fused Granular Fabrication Hangprinters
To enable 3D printers to economically use recycled plastic feedstocks for distributed recycling and additive manufacturing (DRAM), several types of fused granular fabrication (FGF)/fused particle fabrication (FPF) -based 3D printers have been designed and released with open source licenses. First, a large-scale printer was demonstrated with a GigabotX extruder based on the open source cable driven hangprinter concept. Then detailed plans using recyclebot auger techniques were released in HardwareX to build such a printer for under $1700. This approach would further reduce the cost of using hangprinters to make large scale products, as the cost of recycled shredded plastic is ~$1–5/kg while filament is generally around $20/k |
https://en.wikipedia.org/wiki/Root%20ball | A root ball is the mass of roots and growing media at the base of a plant such as trees, shrubs, and other perennials and annual plants. The appearance and structure of the root ball will be largely dependent on the method of growing used in the production of the plant. The root ball of a container plant will be different than that of the field-harvested “ball and burlap” tree. The root ball is of particular significance in horticulture when plants are being planted or require repotting as, the quality, size, and preparation of the root ball will heavily determine how well the plant will survive being transplanted and re-establish in its new location.
Root ball pruning of container grown plants
Most commonly, plants are grown in containers where the roots begin to circle and take the shape of their pot. Root balls that have been exposed to this scenario have a very high chance of developing circling or girdling roots that will become problematic and possibly detrimental to the tree or plant's health in the future. To manage this problem, it is best to remove any visible circling roots. Experts from Clemson University suggest making several slice marks in the root ball from top to bottom, 1 to 2 inches deep, as this has been found to have positive effects. They have found these cuts cause new regenerative roots to form behind the wounds, which aid the plant in establishing roots in the new location. Experts from the University of Florida suggest shaving the entire outside of the root ball when it has taken the shape of the pot (a condition known as being rootbound) before planting it into a larger container or its final location. They have several supporting studies and images displaying how shaving the outer layer aids in removing circling roots and allows for better root establishment in the new growing area.
Root balls of field grown plants
For larger caliper trees and shrubs after their root balls are harvested from the ground, they are |
https://en.wikipedia.org/wiki/State%20space%20enumeration | In computer science, state space enumeration are methods that consider each reachable program state to determine whether a program satisfies a given property. As programs increase in size and complexity, the state space grows exponentially. The state space used by these methods can be reduced by maintaining only the parts of the state space that are relevant to the analysis. However, the use of state and memory reduction techniques makes runtime a major limiting factor.
See also
Formal methods
Model checking |
https://en.wikipedia.org/wiki/History%20of%20the%20Tesla%20coil | Nikola Tesla patented the Tesla coil circuit on April 25, 1891. and first publicly demonstrated it May 20, 1891 in his lecture "Experiments with Alternate Currents of Very High Frequency and Their Application to Methods of Artificial Illumination" before the American Institute of Electrical Engineers at Columbia College, New York. Although Tesla patented many similar circuits during this period, this was the first that contained all the elements of the Tesla coil: high voltage primary transformer, capacitor, spark gap, and air core "oscillation transformer".
Invention
During the Industrial Revolution the electrical industry exploited direct current (DC) and low frequency alternating current (AC), but not much was known about frequencies above 20 kHz, what are now called radio frequencies. In 1887, four years previously, Heinrich Hertz had discovered Hertzian waves (radio waves), electromagnetic waves which oscillated at very high frequencies. This attracted much attention, and a number of researchers began experimenting with high frequency currents.
Tesla's background was in the new field of alternating current power systems, so he understood transformers and resonance. In 1888 he decided that high frequencies were the most promising field for research, and set up a laboratory at 33 South Fifth Avenue, New York for researching them, initially repeating Hertz's experiments.
He first developed alternators as sources of high frequency current, but by 1890 found they were limited to frequencies of about 20 kHz. In search of higher frequencies he turned to spark-excited resonant circuits. Tesla's innovation was in applying resonance to transformers. Transformers functioned differently at high frequencies than at the low frequencies used in power systems; the iron core in low frequency transformers caused energy losses due to eddy currents and hysteresis. Tesla
and Elihu Thomson independently developed a new type of transformer without an iron core, the "oscillatio |
https://en.wikipedia.org/wiki/SigmaTel | SigmaTel, Inc., was an American system-on-a-chip (SoC), electronics and software company headquartered in Austin, Texas, that designed AV media player/recorder SoCs, reference circuit boards, SoC software development kits built around a custom cooperative kernel and all SoC device drivers including USB mass storage and AV decoder DSP, media player/recorder apps, and controller chips for multifunction peripherals. SigmaTel became Austin's largest IPO as of 2003 when it became publicly traded on NASDAQ. The company was driven by a talented mix of electrical and computer engineers plus other professionals with semiconductor industry experience in Silicon Hills, the number two IC design region in the United States, after Silicon Valley.
SigmaTel (trading symbol SGTL) was acquired by Freescale Semiconductor in 2008 and delisted from NASDAQ.
History
In the 1990s and early 2000s, SigmaTel produced audio codecs that went into the majority of PC sound cards; Creative's Sound Blaster used mainly SigmaTel and ADI codecs. This expanded to onboard audio for computer motherboards and to MP3 players.
In 2004, SigmaTel SoCs were found in over 70% of all flash-memory-based MP3 devices sold in the global market. However, SigmaTel lost its last iPod socket in 2006 when its SoC was not used in the next-generation iPod Shuffle. PortalPlayer was the largest competitor, but was bought by Nvidia after PortalPlayer's chips lost their socket in the iPod. SigmaTel was voted "Best Place to Work in Austin 2005" by the Austin Chronicle.
In July 2005, SigmaTel acquired the rights to different software technologies sold by Digital Networks North America (a subsidiary of D&M Holdings, and owner of Rio Audio).
On July 25, 2006, Integrated Device Technology, Inc. (IDT) announced its acquisition of SigmaTel, Inc.'s AC'97 and High Definition Audio (HD-Audio) PC and Notebook audio codec product lines for approximately $72 million in cash, and the acquisition of SigmaTel's intellectual property and e |
https://en.wikipedia.org/wiki/NimbleX | NimbleX is a small Slackware-based Linux distribution optimized to run from a CD, USB drive or a network environment. NimbleX has been praised for how fast it boots, as well as for its small disk footprint, which is considered surprising for a distribution using KDE as desktop environment. NimbleX was also remarked for its website that allows users to generate custom bootable images by using a web browser. It was also covered in mainstream Romanian press as the first Linux distribution put together by a Romanian.
Features
NimbleX is known for its fast boot up which is an important factor in user experience when running from optical media or USB drives. A review of the 2007 NimbleX edition noted: "Expect it to boot in less than half the time that a live CD from Fedora, Ubuntu, or Knoppix takes." A more recent review of the 2008 edition also noted NimbleX's speed: "It's easily one of the fastest bootups I've seen in a while. I even tried to hinder or cripple its boot time, and even on a dog slow pendrive or an old as dirt test machine, it still booted amazingly fast. The desktop and applications are also very fast."
NimbleX is also a very compact distribution. A review of the 2007 edition wondered "how they managed to include KDE, not to mention the other applications", having fit into only 200 MB a graphical desktop (a slimmed-down KDE), the Firefox web browser, the KOffice office suite, a PDF reader, a media player that can play almost all file formats without the need to install codecs, the photo-editing software GIMP, and integrated anti-virus and Bluetooth support, to name a few of the applications included.
As of 2008, the installation process of major Linux distributions can be customized by creating custom installation disks, usually called spins, but creating a spin requires a certain amount of expertise, and creating a spin that can run from the installation media requires further customization. NimbleX makes |
https://en.wikipedia.org/wiki/Tawkers | Tawkers is a Saas application, which allows publishers to create and distribute branded text message conversations live or after the fact. The application is created for various devices, but was originally launched as an iPhone app and has become a set of content tools for brand marketers and publishers. Tawkers is owned and managed by Tawkers Inc, which was founded in 2011.
The company launched a beta of Tawkers as a web application in 2013, before releasing the official version in March 2014. Gizmodo Brazil ranked Tawkers in their list of top iPhone Apps for that particular month.
In 2016, Tawkers began work on a SaaS product, creating an enterprise solution for brands, agencies and publishers to create and manage public messaging content between influencers. The content is then embedded across the client's media. This new offering followed a partnership with NBCUniversal and 360i in which the companies utilized the technology to create content for their owned, earned and paid media channels, as well as within mobile applications. The first campaign was with the Bravo TV show Odd Mom Out.
Background
Blake Ian is the current CEO and co-founded the company in 2011 in New York City after having the idea of sharing text conversations while he was chatting with a friend about a film over instant messaging. In late 2011, after developing the idea, the company received $360,000 in seed funding.
Following the development of a beta in 2013, Ian stated to TechCrunch that he believed that celebrities would use the Tawkers platform as a way to engage with fans and also engage conversation on given subjects.
The app did just that in the early stages of the beta, with Lee Camp, Ron "Bumblefoot" Thal, Colin Quinn, singer Eleanor Goldfield, Deepak Chopra, and also Howard Rheingold using the app during the early parts of its existence.
Mechanics
According to their website, the application aims to bridge |
https://en.wikipedia.org/wiki/Anterior%20atlantooccipital%20membrane | The anterior atlantooccipital membrane (anterior atlantooccipital ligament) is a broad, dense membrane extending between the anterior margin of the foramen magnum (superiorly), and (the superior margin of) the anterior arch of atlas (inferiorly).
The membrane helps limit excessive movement at the atlanto-occipital joints.
Anatomy
Structure
It is composed of broad, densely woven fibers, especially towards the midline, where the membrane is continuous with the anterior longitudinal ligament. It is innervated by the first cervical spinal nerve (C1).
Relations
Medially, it is continuous with the anterior longitudinal ligament.
Laterally, it blends with the articular capsule on either side.
The membrane is related anteriorly to the rectus capitis anterior muscles, and posteriorly to the alar ligaments.
See also
Posterior atlantooccipital membrane |
https://en.wikipedia.org/wiki/Locally%20finite%20operator | In mathematics, a linear operator is called locally finite if the space is the union of a family of finite-dimensional -invariant subspaces.
In other words, there exists a family of linear subspaces of , such that we have the following:
Each is finite-dimensional.
An equivalent condition only requires to be the spanned by finite-dimensional -invariant subspaces. If is also a Hilbert space, sometimes an operator is called locally finite when the sum of the is only dense in .
Examples
Every linear operator on a finite-dimensional space is trivially locally finite.
Every diagonalizable linear operator f (i.e., one for which there exists a basis of V whose elements are all eigenvectors of f) is locally finite, because V is the union of the subspaces spanned by finitely many eigenvectors of f.
The operator f on C[x], the space of polynomials with complex coefficients, defined by f(p) = xp, is not locally finite; any nonzero f-invariant subspace is of the form qC[x] for some q ∈ C[x], and so has infinite dimension.
The operator f on C[x] defined by f(p) = dp/dx is locally finite; for any n, the polynomials of degree at most n form an f-invariant subspace. |
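The last two examples can be made concrete by representing a polynomial as its list of coefficients (a sketch of my own; helper names are illustrative): differentiation keeps the "degree at most n" subspace invariant, while multiplication by x strictly raises the degree, so no finite-dimensional subspace can absorb it.

```python
# Polynomials over C represented as coefficient lists [a0, a1, ..., an].

def derivative(p):
    # f(p) = dp/dx : degree drops by one, so "degree <= n" is invariant
    return [k * c for k, c in enumerate(p)][1:] or [0]

def times_x(p):
    # f(p) = x * p : degree rises by one, escaping any fixed degree bound
    return [0] + p

def degree(p):
    nz = [k for k, c in enumerate(p) if c != 0]
    return nz[-1] if nz else -1  # degree of the zero polynomial taken as -1

p = [3, 0, 2, 5]               # 3 + 2x^2 + 5x^3, degree 3
print(degree(derivative(p)))   # 2: stays inside the degree <= 3 subspace
print(degree(times_x(p)))      # 4: leaves the degree <= 3 subspace
```

Iterating times_x from any nonzero polynomial produces polynomials of unbounded degree, which is the coefficient-list view of why an invariant subspace for multiplication by x must be infinite-dimensional.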
https://en.wikipedia.org/wiki/CALHM1 | Calcium homeostasis modulator 1 (CALHM1) is a pore-forming subunit of a voltage-gated ion channel and a voltage-gated ATP channel that in humans is encoded by the CALHM1 gene.
Function
Central nervous system
CALHM1 was identified by a tissue-specific gene expression profiling approach that screened for genes located on susceptibility loci for late-onset Alzheimer's disease (AD) and that are preferentially expressed in the hippocampus, a brain region affected early in AD. CALHM1 is a plasma membrane calcium-permeable ion channel regulated by voltage and extracellular calcium levels. The exact function of CALHM1 in the brain is not completely understood, but studies have shown that CALHM1 controls neuronal intracellular calcium homeostasis and signaling, as well as calcium-dependent neuronal excitability and memory in mouse models. Recent data have also shown that CALHM1 might facilitate the proteolytic degradation of the cerebral amyloid beta peptide, a culprit in AD pathogenesis.
Peripheral taste system
CALHM1 is expressed in taste bud cells where it controls purinergic receptor-mediated taste transduction in the gustatory system.
See also
Ruthenium red |
https://en.wikipedia.org/wiki/Throat%20culture | A throat culture is a laboratory diagnostic test that evaluates for the presence of a bacterial or fungal infection in the throat. A sample from the throat is collected by swabbing the throat and placing the sample into a special cup (culture) that allows infections to grow. If an organism grows, the culture is positive and the presence of an infection is confirmed. The type of infection is found using a microscope, chemical tests, or both. If no infection grows, the culture is negative. Common infectious organisms tested for by a throat culture include Candida albicans known for causing thrush and Group A streptococcus known for causing strep throat, scarlet fever, and rheumatic fever. Throat cultures are more sensitive (81% sensitive) than the rapid strep test (70%) for diagnosing strep throat, but are nearly equal in terms of specificity.
Purpose
A throat culture may be done to investigate the cause of a sore throat. Most sore throats are caused by viral infections. However, in some cases the cause of a sore throat may be unclear and a throat culture can be used to determine if the infection is bacterial. Identifying the responsible organism can guide treatment.
Technique
The person receiving the throat culture is asked to tilt his or her head back and open his or her mouth. The health professional will press the tongue down with a tongue depressor and examine the mouth and throat. A clean swab will be rubbed over the back of the throat, around the tonsils, and over any red areas or sores to collect a sample.
The sample may also be collected using a throat washout. For this test, the patient will gargle a small amount of salt water and then spit the fluid into a clean cup. This method gives a larger sample than a throat swab and may make the culture more reliable.
A culture for Streptococcus pyogenes can take 18–24 hours when grown at 37 degrees Celsius (body temperature).
See also
Antibiogram
Bacterial culture
Laboratory specimen
Rapid strep test |
https://en.wikipedia.org/wiki/General%20Mobile%20Radio%20Service | The General Mobile Radio Service (GMRS) is a land-mobile FM UHF radio service designed for short-range two-way voice communication and authorized under part 95 of the US FCC code. It requires a license in the United States, but some GMRS compatible equipment can be used license-free in Canada. The US GMRS license is issued for a period of 10 years by the FCC. The United States permits use by adult individuals who possess a valid GMRS license, as well as their immediate family members. Immediate relatives of the GMRS system licensee are entitled to communicate among themselves for personal or business purposes, but employees of the licensee who are not family members are not covered by the license. Non-family members must be licensed separately.
GMRS radios are typically handheld portable (walkie-talkies) much like Family Radio Service (FRS) radios, and they share a frequency band with FRS near 462 and 467 MHz. Mobile and base station-style radios are available as well, but these are normally commercial UHF radios as often used in the public service and commercial land mobile bands. These are legal for use in this service as long as they are certified for GMRS under USC 47 Part 95.
GMRS licensees are allowed to establish repeaters to extend their communications range. GMRS repeaters are permitted to be linked with other GMRS repeaters but are not authorized to connect to the public switched telephone network.
Licensing
Any individual in the United States who is at least 18 years of age and not a representative of a foreign government may apply for a GMRS license by completing the application form, online through the FCC's Universal Licensing System. No exam is required. A GMRS license is issued for a 10-year term.
The fee was reduced to $35 for all applicants on April 19, 2022.
A GMRS individual license extends to immediate family members and authorizes them to use the licensed system. GMRS license holders are allowed to communicate with FRS users on |
https://en.wikipedia.org/wiki/John%20H.%20Smith%20%28mathematician%29 | John Howard Smith is an American mathematician and retired professor of mathematics at Boston College. He received his Ph.D. from the Massachusetts Institute of Technology in 1963, under the supervision of Kenkichi Iwasawa.
In voting theory, he is known for the Smith set, the smallest nonempty set of candidates such that, in every pairwise matchup (two-candidate election/runoff) between a member and a non-member, the member is the winner by majority rule, and for the Smith criterion, a property of certain election systems in which the winner is guaranteed to belong to the Smith set. He has also made contributions to spectral graph theory and additive number theory. |
https://en.wikipedia.org/wiki/Negative%20base | A negative base (or negative radix) may be used to construct a non-standard positional numeral system. Like other place-value systems, each position holds multiples of the appropriate power of the system's base; but that base is negative—that is to say, the base is equal to for some natural number ().
Negative-base systems can accommodate all the same numbers as standard place-value systems, but both positive and negative numbers are represented without the use of a minus sign (or, in computer representation, a sign bit); this advantage is countered by an increased complexity of arithmetic operations. The need to store the information normally contained by a negative sign often results in a negative-base number being one digit longer than its positive-base equivalent.
The common names for negative-base positional numeral systems are formed by prefixing nega- to the name of the corresponding positive-base system; for example, negadecimal (base −10) corresponds to decimal (base 10), negabinary (base −2) to binary (base 2), negaternary (base −3) to ternary (base 3), and negaquaternary (base −4) to quaternary (base 4).
Example
Consider what is meant by the representation 12,243 in the negadecimal system, whose base is −10:
The representation 12,243 (which is intended to be negadecimal notation) is equivalent to 8,163 in decimal notation, because 10,000 + (−2,000) + 200 + (−40) + 3 = 8,163.
Remark
On the other hand, −8,163 in decimal would be written 9,977 in negadecimal.
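Representations like these can be produced by the standard negative-base conversion algorithm: repeatedly divide by the (negative) base and adjust the remainder to be non-negative. A minimal sketch (the function name is mine):

```python
def to_negabase(n, base=-10):
    """Digits of integer n in a negative base, most significant first."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:              # force a non-negative digit 0..|base|-1
            r -= base          # e.g. r += 10 for base -10
            n += 1             # compensate in the quotient
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negabase(8163))    # "12243"  (the negadecimal example above)
print(to_negabase(-8163))   # "9977"   (no minus sign needed)
print(to_negabase(6, -2))   # "11010"  (negabinary: 16 - 8 - 2 = 6)
```

Note that both positive and negative inputs yield an unsigned digit string, which is exactly the sign-free property described above.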
History
Negative numerical bases were first considered by Vittorio Grünwald in an 1885 monograph published in Giornale di Matematiche di Battaglini. Grünwald gave algorithms for performing addition, subtraction, multiplication, division, root extraction, divisibility tests, and radix conversion. Negative bases were later mentioned in passing by A. J. Kempner in 1936 and studied in more detail by Zdzisław Pawlak and A. Wakulicz in 1957.
Negabinary was implemented in the early Polish computer BINEG (and UMC), built 1957–59, ba |
https://en.wikipedia.org/wiki/Plane%20wave%20tube | An acoustic duct or plane wave tube is a test facility used in acoustics. Anechoic chambers are typically subject to a low frequency limit, governed by the length of the sound absorbing wedges employed to prevent reflections within the chamber. Test and measurement microphone calibration services are often required to be undertaken at frequencies where anechoic chambers cannot be used effectively. In this case, a plane wave acoustic duct with anechoic termination provides a practical alternative.
Such a facility consists of a long duct, with a special low-frequency sound source (subwoofer) at one end and very large acoustically absorbent wedges at the other end. The duct cross section dimensions are made sufficiently small compared to the wavelength at the frequencies of interest that sound can be assumed to propagate down the duct as a plane wave with no reflections from the sides. Acoustic ducts are most commonly used by National Measurement Institutes that specialise in acoustical measurement (such as the National Physical Laboratory (United Kingdom)), who use them for measurement microphone calibration at low frequencies.
Acoustics |
https://en.wikipedia.org/wiki/Logjam%20%28computer%20security%29 | Logjam is a security vulnerability in systems that use Diffie–Hellman key exchange with the same prime number. It was discovered by a team of computer scientists and publicly reported on May 20, 2015. The discoverers were able to demonstrate their attack on 512-bit (US export-grade) DH systems. They estimated that a state-level attacker could do the same for 1024-bit systems, then widely used, thereby allowing decryption of a significant fraction of Internet traffic. They recommended upgrading to at least 2048 bits for shared-prime systems.
Details
Diffie–Hellman key exchange depends for its security on the presumed difficulty of solving the discrete logarithm problem. The authors took advantage of the fact that the number field sieve algorithm, which is generally the most effective method for finding discrete logarithms, consists of four large computational steps, of which the first three depend only on the order of the group G, not on the specific number whose discrete log is desired. If the results of the first three steps are precomputed and saved, they can be used to solve any discrete log problem for that prime group in relatively short time. This vulnerability was known as early as 1992. It turns out that much Internet traffic uses only one of a handful of groups that are of order 1024 bits or less.
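To see why a single widely shared prime pays off for an attacker, the toy Diffie–Hellman exchange below shows that the group parameters (p, g) are fixed and public while only the per-session exponents vary. The modulus here (2^64 − 59, a prime) is a deliberately small, hypothetical stand-in; real deployments need primes of 2048 bits or more:

```python
import secrets

# One fixed, publicly shared group -- illustrative toy parameters only.
p = 2**64 - 59   # a small prime standing in for a widely reused group modulus
g = 2

def dh_keypair():
    priv = secrets.randbelow(p - 2) + 1   # secret exponent
    return priv, pow(g, priv, p)          # (private, public) pair

a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Each side combines its own private key with the peer's public key.
# An attacker who has run the first three number-field-sieve steps
# against this single p can attack every session that reuses it.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

The precomputation cost is paid once per group, which is exactly why the authors' week-long computation against one 512-bit prime then broke individual connections in about a minute.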
One approach enabled by this vulnerability that the authors demonstrated was using a man-in-the-middle network attacker to downgrade a Transport Layer Security (TLS) connection to use 512-bit DH export-grade cryptography, allowing them to read the exchanged data and inject data into the connection. It affects the HTTPS, SMTPS, and IMAPS protocols, among others. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, however, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs. Its CVE ID is .
The authors also estimated the feasibility of the attack against 1024-bit |
https://en.wikipedia.org/wiki/Evolution%20of%20human%20intelligence | The evolution of human intelligence is closely tied to the evolution of the human brain and to the origin of language. The timeline of human evolution spans approximately seven million years, from the separation of the genus Pan until the emergence of behavioral modernity by 50,000 years ago. The first three million years of this timeline concern Sahelanthropus, the following two million concern Australopithecus and the final two million span the history of the genus Homo in the Paleolithic era.
Many traits of human intelligence, such as empathy, theory of mind, mourning, ritual, and the use of symbols and tools, are somewhat apparent in great apes, although in much less sophisticated forms than those found in humans, as in the case of great ape language.
History
Hominidae
The great apes (hominidae) show some cognitive and empathic abilities. Chimpanzees can make tools and use them to acquire foods and for social displays; they have mildly complex hunting strategies requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can learn to use symbols and understand aspects of human language including some relational syntax, concepts of number and numerical sequence. One common characteristic that is present in species of "high degree intelligence" (i.e. dolphins, great apes, and humans - Homo sapiens) is a brain of enlarged size. Along with this, there is a more developed neocortex, a folding of the cerebral cortex, and von Economo neurons. Said neurons are linked to social intelligence and the ability to gauge what another is thinking or feeling and are also present in bottlenose dolphins.
Homininae
Around 10 million years ago, the Earth's climate entered a cooler and drier phase, which led eventually to the Quaternary glaciation beginning some 2.6 million years ago. One consequence of this was that the north African tropical forest began to retreat, being replaced first by open grasslands and eventually by |
https://en.wikipedia.org/wiki/Bohr%20effect | The Bohr effect is a phenomenon first described in 1904 by the Danish physiologist Christian Bohr. Hemoglobin's oxygen binding affinity (see oxygen–haemoglobin dissociation curve) is inversely related both to acidity and to the concentration of carbon dioxide. That is, the Bohr effect refers to the shift in the oxygen dissociation curve caused by changes in the concentration of carbon dioxide or the pH of the environment. Since carbon dioxide reacts with water to form carbonic acid, an increase in CO2 results in a decrease in blood pH, resulting in hemoglobin proteins releasing their load of oxygen. Conversely, a decrease in carbon dioxide provokes an increase in pH, which results in hemoglobin picking up more oxygen.
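The sigmoidal dissociation curve and its rightward shift under the Bohr effect can be illustrated with a Hill-type model; the functional form and parameter values below are textbook approximations introduced here only for illustration, not taken from this article:

```python
def saturation(pO2, p50, n=2.7):
    """Illustrative Hill-type model of the sigmoidal oxygen-haemoglobin
    dissociation curve. p50 is the O2 partial pressure (mmHg) at 50%
    saturation; n is a typical Hill coefficient. Both are assumed,
    commonly quoted approximations."""
    return pO2**n / (pO2**n + p50**n)

# The Bohr effect shifts the curve to the right (higher p50) as pH falls
# or CO2 rises, so at the same pO2 less oxygen stays bound:
normal = saturation(40, p50=26.6)   # ~normal arterial conditions
acidic = saturation(40, p50=31.0)   # hypothetical acidotic shift
print(normal > acidic)  # True
```

The comparison mirrors the physiology described above: raising CO2 lowers pH, shifts the curve right, and causes haemoglobin to release more of its oxygen load.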
Experimental discovery
In the early 1900s, Christian Bohr was a professor at the University of Copenhagen in Denmark, already well known for his work in the field of respiratory physiology. He had spent the last two decades studying the solubility of oxygen, carbon dioxide, and other gases in various liquids, and had conducted extensive research on haemoglobin and its affinity for oxygen. In 1903, he began working closely with Karl Hasselbalch and August Krogh, two of his associates at the university, in an attempt to experimentally replicate the work of Gustav von Hüfner, using whole blood instead of haemoglobin solution. Hüfner had suggested that the oxygen-haemoglobin binding curve was hyperbolic in shape, but after extensive experimentation, the Copenhagen group determined that the curve was in fact sigmoidal. Furthermore, in the process of plotting out numerous dissociation curves, it soon became apparent that high partial pressures of carbon dioxide caused the curves to shift to the right. Further experimentation while varying the CO2 concentration quickly provided conclusive evidence, confirming the existence of what would soon become known as the Bohr effect.
Controversy
There is some debate over whether Bohr was actually the first to
https://en.wikipedia.org/wiki/Room%20modes | Room modes are the collection of resonances that exist in a room when the room is excited by an acoustic source such as a loudspeaker. Most rooms have their fundamental resonances in the 20 Hz to 200 Hz region, each frequency being related to one or more of the room's dimensions or a divisor thereof. These resonances affect the low-frequency and low-mid-frequency response of a sound system in the room and are one of the biggest obstacles to accurate sound reproduction.
Mechanism of room resonances
The input of acoustic energy to the room at the modal frequencies and multiples thereof causes standing waves. The nodes and antinodes of these standing waves result in the loudness of the particular resonant frequency being different at different locations of the room. These standing waves can be considered a temporary storage of acoustic energy as they take a finite time to build up and a finite time to dissipate once the sound energy source has been removed.
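For a rigid-walled rectangular room, the modal frequencies follow the standard formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A minimal sketch, using hypothetical room dimensions:

```python
from math import sqrt

def mode_frequency(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Resonant frequency (Hz) of mode (nx, ny, nz) in a rigid-walled
    rectangular room of dimensions Lx x Ly x Lz metres; c is the speed
    of sound in air (m/s). Room dimensions here are illustrative."""
    return (c / 2) * sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# First axial mode along a hypothetical 5 m dimension:
# 343 / (2 * 5) = 34.3 Hz, squarely in the 20-200 Hz region noted above.
print(round(mode_frequency(1, 0, 0, 5.0, 4.0, 2.5), 1))  # 34.3
```

Tabulating the first few (nx, ny, nz) combinations for a given room shows where the loudness of reproduced bass will vary most from seat to seat.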
Minimizing effect of room resonances
A room with generally hard surfaces will exhibit high-Q, sharply tuned resonances. Absorbent material can be added to the room to damp such resonances; it works by dissipating the stored acoustic energy more quickly.
In order to be effective, a layer of porous, absorbent material has to be of the order of a quarter-wavelength thick if placed on a wall, which at low frequencies with their long wavelengths requires very thick absorbers. Absorption occurs through friction of the air motion against individual fibres, with kinetic energy converted to heat, and so the material must be of just the right 'density' in terms of fibre packing. Too loose, and sound will pass through, but too firm and reflection will occur. Technically it is a matter of impedance matching between air motion and the individual fibres. Glass fibre, as used for thermal insulation, is very effective, but needs to be very thick (perhaps four to six inches) if the result is not to be a room that sounds unnatural |
https://en.wikipedia.org/wiki/Single%20address%20space%20operating%20system | In computer science, a single address space operating system (or SASOS) is an operating system that provides only one globally shared address space for all processes. In a single address space operating system, numerically identical (virtual memory) logical addresses in different processes all refer to exactly the same byte of data.
Single address-space operating systems offer certain advantages. In a traditional OS with private per-process address spaces, memory protection is based on address space boundaries ("address space isolation"). Single address-space operating systems use a different approach for memory protection that is just as strong. One advantage is that the same virtual-to-physical mapping (page table) can be used with every process (and in some SASOSes, the kernel as well). This makes context switches on a SASOS faster than on operating systems that must change the page table and flush the TLB caches on every context switch.
SASOS projects include the following:
Amiga family – AmigaOS, AROS and MorphOS
Angel
BareMetal
Br1X
Genera by Symbolics
IBM i (formerly called OS/400)
Iguana at NICTA, Australia
JX a research Java OS
IncludeOS
Mungi at NICTA, Australia
Nemesis
Opal
OS-9
Phantom OS
RTEMS
Scout
Singularity
Sombrero
TempleOS
Theseus OS
Torsion
VxWorks
Zephyr
See also
Exokernel
Hybrid kernel
Kernel
Microkernel
Nanokernel
Unikernel
Flat memory model
Virtual memory |
https://en.wikipedia.org/wiki/Dreaming%20in%20Code | Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software is a 2007 Random House literary nonfiction book by Salon.com editor and journalist Scott Rosenberg. It documents the workers of Mitch Kapor's Open Source Applications Foundation as they struggled with collaboration and the software development task of building the open source calendar application Chandler.
Rosenberg spent time observing the organization at work and wrote about its milestones and problems. The book combines narrative with explanations of software development philosophy, methodology, and process, referring to The Mythical Man-Month and other texts of the field. In a review published in The Atlantic, James Fallows compared the book to Tracy Kidder's The Soul of a New Machine.
At the time of the book's publication, OSAF had not yet released Chandler 1.0. Chandler 1.0 was released on August 8, 2008. |
https://en.wikipedia.org/wiki/List%20of%20rogue%20waves | This list of rogue waves compiles incidents of known and likely rogue waves – also known as freak waves, monster waves, killer waves, and extreme waves. These are dangerous and rare ocean surface waves that unexpectedly reach at least twice the height of the tallest waves around them, and are often described by witnesses as "walls of water". They occur in deep water, usually far out at sea, and are a threat even to capital ships, ocean liners and land structures such as lighthouses.
In addition to the incidents listed below, it has also been suggested that these types of waves may be responsible for the loss of several low-flying United States Coast Guard helicopters on search and rescue missions.
Background
Anecdotal evidence from mariners' testimonies and incidents of wave damage to ships have long suggested rogue waves occurred; however, their scientific measurement was positively confirmed only following measurements of the Draupner wave, a rogue wave at the Draupner platform, in the North Sea on 1 January 1995. During this event, minor damage was inflicted on the platform, confirming that the reading was valid.
In modern oceanography, rogue waves are defined not as the biggest possible waves at sea, but instead as extreme sized waves for a given sea state.
Many of these encounters are only reported in the media, and are not examples of open ocean rogue waves. Often a huge wave is loosely and incorrectly denoted as a rogue wave. Extremely large waves offer an explanation for the otherwise-inexplicable disappearance of many ocean-going vessels. However, the claim is contradicted by information held by Lloyd's Register. One of the very few cases where evidence suggests a freak wave incident is the 1978 loss of the freighter . This claim, however, is contradicted by other sources, which maintain that, over a time period from 1969 to 1994 alone, rogue waves were responsible for the complete loss of 22 supertankers, often with their entire crew. In 2007, resear |
https://en.wikipedia.org/wiki/Continuous%20q-Hermite%20polynomials | In mathematics, the continuous q-Hermite polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. give a detailed list of their properties.
Definition
The polynomials are given in terms of basic hypergeometric functions by
H_n(x \mid q) = e^{in\theta}\,{}_{2}\phi_{0}\!\left(q^{-n}, 0;\, -;\, q,\, q^{n} e^{-2i\theta}\right), \qquad x = \cos\theta.
Recurrence and difference relations
The polynomials satisfy the three-term recurrence
2x\,H_n(x \mid q) = H_{n+1}(x \mid q) + (1 - q^{n})\,H_{n-1}(x \mid q),
with the initial conditions
H_0(x \mid q) = 1, \qquad H_{-1}(x \mid q) = 0.
From the above, one can easily calculate:
H_1(x \mid q) = 2x, \qquad H_2(x \mid q) = 4x^2 - 1 + q, \qquad H_3(x \mid q) = 8x^3 - (4 - 2q - 2q^2)\,x.
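Assuming the standard three-term recurrence from the Askey-scheme literature, 2x·H_n(x|q) = H_{n+1}(x|q) + (1 − q^n)·H_{n−1}(x|q) with H_0 = 1 and H_{−1} = 0, the polynomials can be evaluated numerically with a short sketch:

```python
def continuous_q_hermite(n, x, q):
    """Evaluate H_n(x|q) via the three-term recurrence
    H_{n+1}(x|q) = 2x H_n(x|q) - (1 - q^n) H_{n-1}(x|q),
    with H_0 = 1 and H_{-1} = 0 (standard Askey-scheme form)."""
    h_prev, h = 0.0, 1.0
    for k in range(n):
        h_prev, h = h, 2 * x * h - (1 - q ** k) * h_prev
    return h

# H_2(x|q) = 4x^2 - 1 + q, so H_2(1|0.5) = 3.5:
print(continuous_q_hermite(2, 1.0, 0.5))  # 3.5
```

The recurrence-based evaluation is numerically preferable to expanding the basic hypergeometric sum directly for moderate n.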
Generating function
\sum_{n=0}^{\infty} \frac{H_n(x \mid q)}{(q;q)_n}\, t^n = \frac{1}{\left(t e^{i\theta};\, q\right)_{\infty} \left(t e^{-i\theta};\, q\right)_{\infty}},
where x = \cos\theta and |t| < 1. |
https://en.wikipedia.org/wiki/Anatomical%20Science%20International | Anatomical Science International is a peer-reviewed medical journal that covers gross, histologic, cellular, molecular, biochemical, physiological and behavioral studies of humans and experimental animals. It is the official publication of the Japanese Association of Anatomists; the journal was formerly known as Kaibogaku Zasshi.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.741. |
https://en.wikipedia.org/wiki/Fondements%20de%20la%20G%C3%A9ometrie%20Alg%C3%A9brique | Fondements de la Géometrie Algébrique (FGA) is a book that collected together seminar notes of Alexander Grothendieck. It is an important source for his pioneering work on scheme theory, which laid foundations for algebraic geometry in its modern technical developments.
The title is a translation of the title of André Weil's book Foundations of Algebraic Geometry.
It contained material on descent theory, and existence theorems including that for the Hilbert scheme. The Technique de descente et théorèmes d'existence en géometrie algébrique is one series of seminars within FGA.
Like the bulk of Grothendieck's work of the IHÉS period, duplicated notes were circulated, but the publication was not as a conventional book.
Contents
These are Séminaire Bourbaki notes, by number, from the years 1957 to 1962.
Fondements de la géométrie algébrique. Commentaires [Séminaire Bourbaki, t. 14, 1961/62, Complément];
Théorème de dualité pour les faisceaux algébriques cohérents [Séminaire Bourbaki, t. 9, 1956/57, no. 149]; (coherent duality)
Géométrie formelle et géométrie algébrique [Séminaire Bourbaki, t. 11, 1958/59, no. 182]; (formal geometry)
Technique de descente et théorèmes d'existence en géométrie algébrique. I-VI
I. Généralités. Descente par morphismes fidèlement plats [Séminaire Bourbaki, t. 12, 1959/60, no. 190];
II. Le théorème d'existence en théorie formelle des modules [Séminaire Bourbaki, t. 12, 1959/60, no. 195];
III. Préschémas quotients [Séminaire Bourbaki, t. 13, 1960/61, no. 212];
IV. Les schémas de Hilbert [Séminaire Bourbaki, t. 13, 1960/61, no. 221];
V. Les schémas de Picard. Théorèmes d'existence [Séminaire Bourbaki, t. 14, 1961/62, no. 232];
VI. Les schémas de Picard. Propriétés générales [Séminaire Bourbaki, t. 14, 1961/62, no. 236]
See also
Éléments de géométrie algébrique
Séminaire de Géométrie Algébrique du Bois Marie |
https://en.wikipedia.org/wiki/Trinomial%20triangle | The trinomial triangle is a variation of Pascal's triangle. The difference between the two is that an entry in the trinomial triangle is the sum of the three (rather than the two in Pascal's triangle) entries above it:
The k-th entry of the n-th row is denoted by
\binom{n}{k}_2.
Rows are counted starting from 0. The entries of the n-th row are indexed starting with −n from the left, and the middle entry has index 0. The symmetry of the entries of a row about the middle entry is expressed by the relationship
\binom{n}{k}_2 = \binom{n}{-k}_2.
Properties
The n-th row corresponds to the coefficients in the polynomial expansion of the trinomial (1 + x + x^2) raised to the n-th power:
(1 + x + x^2)^n = \sum_{k=-n}^{n} \binom{n}{k}_2\, x^{n+k}
or, symmetrically,
(x^{-1} + 1 + x)^n = \sum_{k=-n}^{n} \binom{n}{k}_2\, x^{k},
hence the alternative name trinomial coefficients because of their relationship to the multinomial coefficients:
\binom{n}{k}_2 = \sum_{j \ge 0} \frac{n!}{j!\,(j+k)!\,(n-k-2j)!}
Furthermore, the diagonals have interesting properties, such as their relationship to the triangular numbers.
The sum of the elements of the n-th row is 3^n (substitute x = 1 in the expansion above).
Recurrence formula
The trinomial coefficients can be generated using the following recurrence formula:
\binom{n}{k}_2 = \binom{n-1}{k-1}_2 + \binom{n-1}{k}_2 + \binom{n-1}{k+1}_2,
for n \ge 1,
where \binom{0}{0}_2 = 1 and \binom{n}{k}_2 = 0 for |k| > n.
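The triangle itself can be generated directly from its defining rule (each entry is the sum of the three entries above it, with absent entries counting as 0); an illustrative sketch:

```python
def trinomial_rows(n_rows):
    """First n_rows rows of the trinomial triangle: each entry is the
    sum of the three entries directly above it (absent entries are 0)."""
    row, rows = [1], [[1]]
    for _ in range(n_rows - 1):
        padded = [0, 0] + row + [0, 0]                       # zero-pad both ends
        row = [sum(padded[i:i + 3]) for i in range(len(row) + 2)]
        rows.append(row)
    return rows

rows = trinomial_rows(6)
print(rows[3])                          # [1, 3, 6, 7, 6, 3, 1]
print([r[len(r) // 2] for r in rows])   # central coefficients: [1, 1, 3, 7, 19, 51]
```

The middle entry of each generated row reproduces the central trinomial coefficients discussed in the next section, and each row sums to a power of 3.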
Central trinomial coefficients
The middle entries of the trinomial triangle
1, 1, 3, 7, 19, 51, 141, 393, 1107, 3139, …
were studied by Euler and are known as central trinomial coefficients.
The n-th central trinomial coefficient is given by
a_n = \binom{n}{0}_2 = \sum_{j \ge 0} \frac{n!}{j!\,j!\,(n-2j)!}.
Their generating function is
\sum_{n=0}^{\infty} a_n x^n = \frac{1}{\sqrt{1 - 2x - 3x^2}}.
Euler noted the following exemplum memorabile inductionis fallacis ("notable example of fallacious induction"):
3 a_{n+1} - a_{n+2} = F_n\,(F_n + 1), for 0 \le n \le 7,
where F_n is the n-th Fibonacci number. For larger n, however, this relationship is incorrect. George Andrews explained this fallacy using the general identity
Applications
In chess
The triangle corresponds to the number of possible paths that can be taken by the king in a game of chess. The entry in a cell represents the number of different paths (using a minimum number of moves) the king can take to reach the cell.
In combinatorics
The coefficient of in the expansion of gives the number of different ways to draw card |
https://en.wikipedia.org/wiki/International%20Union%20for%20Pure%20and%20Applied%20Biophysics | The International Union for Pure and Applied Biophysics (IUPAB) is an international non-governmental organization whose mission is to assist in the worldwide development of biophysics, to foster international cooperation in biophysics, and to help in the application of biophysics toward solving problems of concern to all humanity. It was established in 1961 as the International Organisation for Pure and Applied Biophysics but then renamed as the International Union in 1966, when it became a member of ICSU (the International Council of Scientific Unions), which itself was renamed in 2018 as ISC, the International Science Council. Affiliated to it are the national adhering bodies of 61 countries, as well as the European Biophysical Societies' Association (EBSA), the Asian Biophysics Association and the Biophysical Society.
IUPAB carries out its mission by: sponsoring international meetings and its own congresses triennially; fostering communications and publications; encouraging research and education; fostering the free circulation of scientists; and cooperating with other organizations on multi-disciplinary and inter-disciplinary problems.
History
The first General Assembly of IOPAB (International Organisation for Pure and Applied Biophysics) and the first International Biophysics Congress were held in Stockholm in 1961 with over 1000 delegates attending. They were followed by a second General Assembly of IOPAB in Paris in 1964 and, two years later in 1966, IOPAB was admitted to the International Council of Scientific Unions (ICSU), which became the International Science Council (ISC) in 2018.
It was in 1966, at the Vienna congress, that IOPAB became IUPAB (International Union for Pure and Applied Biophysics).
Since 1966, IUPAB has hosted a Congress every three years: Boston MIT in 1969, Moscow in 1972, Copenhagen in 1975, Kyoto in 1978, Mexico City in 1981, Bristol in 1984, Jerusalem in 1987, Vancouver in 1990, Budapest in 1993, Amsterdam in 1996, New Delhi in |
https://en.wikipedia.org/wiki/Boolean-valued | Boolean-valued usually refers to:
in most applied fields: something taking one of two values (example: True or False, On or Off, 1 or 0) referring to two-element Boolean algebra (the Boolean domain), e.g. Boolean-valued function or Boolean data type
in mathematics: something taking values over an arbitrary, abstract Boolean algebra, for example Boolean-valued model
See also
Boolean algebra further explains the distinction
Mathematical concepts
Logic and statistics |
https://en.wikipedia.org/wiki/Open%20Grid%20Forum | The Open Grid Forum (OGF) is a community of users, developers, and vendors for standardization of grid computing. It was formed in 2006 in a merger of the Global Grid Forum and the Enterprise Grid Alliance.
The OGF models its process on the Internet Engineering Task Force (IETF), and produces documents with many acronyms such as OGSA, OGSI, and JSDL.
Organization
The OGF has two principal functions plus an administrative function: being the standards organization for grid computing, and building communities within the overall grid community (including extending it within both academia and industry). Each of these function areas is then divided into groups of three types: working groups with a generally tightly defined role (usually producing a standard), research groups with a looser role bringing together people to discuss developments within their field and generate use cases and spawn working groups, and community groups (restricted to community functions).
Three meetings are organized per year, divided (approximately evenly after averaging over a number of years) between North America, Europe and East Asia. Many working groups organize face-to-face meetings in the interim.
History
The concept of a forum to bring together developers, practitioners, and users of distributed computing (known as grid computing at the time) was discussed at a "Birds of a Feather" session in November 1998 at the SC98 supercomputing conference.
Based on response to the idea during this BOF, Ian Foster and Bill Johnston convened the first Grid Forum meeting at NASA Ames Research Center in June 1999, drawing roughly 100 people, mostly from the US. A group of organizers nominated Charlie Catlett (from Argonne National Laboratory and the University of Chicago) to serve as the initial chair; this was confirmed via a plenary vote held at the second Grid Forum meeting in Chicago in October 1999.
With advice and assistance from the Internet Engineering Task Force (IETF), OGF established |
https://en.wikipedia.org/wiki/Leccinum%20vulpinum | Leccinum vulpinum, commonly known as the foxy bolete, is a bolete fungus in the genus Leccinum that is found in Europe. It was described as new to science by Roy Watling in 1961. An edible species, it grows in mycorrhizal association with species of pine and bearberry.
See also
List of Leccinum species |
https://en.wikipedia.org/wiki/Fertility%20preservation | Fertility preservation is the effort to help cancer patients retain their fertility, or ability to procreate. Research into how cancer, ageing and other health conditions affect reproductive health, and into fertility preservation options, is growing, sparked in part by the increase in the survival rate of cancer patients.
Indications
Fertility preservation procedures are indicated when it is predicted that there will be exposure to a cause of infertility, mainly cancer treatment but also ageing, sex reassignment surgery for those who identify as trans and conditions like Polycystic Ovary Syndrome (PCOS) or Primary Ovarian Insufficiency (POI).
Chemotherapy and radiotherapy
Chemotherapy and radiation treatments for cancer and autoimmunity conditions like Lupus and Multiple Sclerosis have the ability to affect reproductive health. The regimens that threaten ovarian and testicular function are mainly radiation therapy to the pelvic area and some types of chemotherapy. Chemotherapies with high risk include procarbazine and alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin and antimetabolites such as methotrexate, mercaptopurine and 5-fluoruracil.
These regimens attack rapidly dividing cells in the body, including healthy cells like sperm and those belonging to the ovarian follicle (egg). Depending on the dose and duration of administration, these therapies can have varying effects on reproductive health. Surgery involving reproductive tissue affects reproductive function and fertility.
For some patients receiving chemotherapy or radiotherapy, the decrease or loss of reproductive function is temporary; many men and women, however, do not regain fe
https://en.wikipedia.org/wiki/Local%20time | Local time is the time observed in a specific locality. There is no canonical definition. Originally it was mean solar time, but since the introduction of time zones it is generally the time as determined by the time zone in effect, with daylight saving time where and when applicable. In some places this is known as standard time.
Some sources continue to use the term local time to mean solar time as opposed to standard time, but they are in the minority. Terms such as local mean time also relate to solar time.
See also
Local mean time
Apparent solar time
Local time (mathematics)
Local time in the Lorentz ether theory
Standard time
UTC
GMT |
https://en.wikipedia.org/wiki/Lipoatrophy | Lipoatrophy is the term describing the localized loss of fat tissue. This may occur as a result of subcutaneous injections of insulin in the treatment of diabetes, from the use of human growth hormone or from subcutaneous injections of Copaxone, used for the treatment of multiple sclerosis. In the latter case, an injection may produce a small dent at the injection site. Lipoatrophy occurs in HIV-associated lipodystrophy, one cause of which is an adverse drug reaction that is associated with some antiretroviral medications.
A more general term for an abnormal or degenerative condition of the entire body's adipose tissue is lipodystrophy. |
https://en.wikipedia.org/wiki/History%20of%20gravitational%20theory | In physics, theories of gravitation postulate mechanisms of interaction governing the movements of bodies with mass. There have been numerous theories of gravitation since ancient times. The first extant sources discussing such theories are found in ancient Greek philosophy. This work was furthered through the Middle Ages by Indian, Islamic, and European scientists, before gaining great strides during the Renaissance and Scientific Revolution—culminating in the formulation of Newton's law of gravity. This was superseded by Albert Einstein's theory of relativity in the early 20th century.
Greek philosopher Aristotle () found that objects immersed in a medium tend to fall at speeds proportional to their weight. Vitruvius () understood that objects fall based on their specific gravity. In the 6th century CE, Byzantine Alexandrian scholar John Philoponus modified the Aristotelian concept of gravity with the theory of impetus. In the 7th century, Indian astronomer Brahmagupta spoke of gravity as an attractive force. In the 14th century, European philosophers Jean Buridan and Albert of Saxony—who were influenced by certain Islamic scholars—developed the theory of impetus and linked it to the acceleration and mass of objects. Albert also developed a law of proportion regarding the relationship between the speed of an object in free fall and the time elapsed.
Italians of the 16th century found that objects in free fall tend to accelerate equally. In 1632, Galileo Galilei put forth the basic principle of relativity. The existence of the gravitational constant was explored by various researchers from the mid-17th century, helping Isaac Newton formulate his law of universal gravitation. Newton's classical mechanics were superseded in the early 20th century, when Einstein developed the special and general theories of relativity. The hypothetical force carrier of gravity remains an outlier in the search for a theory of everything, for which various models of quantum gravity ar |
https://en.wikipedia.org/wiki/Q%20value%20%28nuclear%20science%29 | In nuclear physics and chemistry, the Q value for a reaction is the amount of energy absorbed or released during the nuclear reaction. The Q value relates to the enthalpy of a chemical reaction or the energy of radioactive decay products. It can be determined from the masses of reactants and products. Q values affect reaction rates. In general, the larger the positive Q value for the reaction, the faster the reaction proceeds, and the more likely the reaction is to "favor" the products.
In symbols,
Q = (m_reactants − m_products) c²,
where the masses are in atomic mass units. Also, both m_reactants and m_products are the sums of the reactant and product masses respectively.
Definition
The conservation of energy between the initial and final energy of a nuclear process enables the general definition of Q based on the mass–energy equivalence. For any radioactive particle decay, the kinetic energy difference will be given by:
Q = T_final − T_initial = (m_initial − m_final) c²,
where T denotes the kinetic energy of the mass m.
A reaction with a positive Q value is exothermic, i.e. has a net release of energy, since the kinetic energy of the final state is greater than the kinetic energy of the initial state.
A reaction with a negative Q value is endothermic, i.e. requires a net energy input, since the kinetic energy of the final state is less than the kinetic energy of the initial state. Observe that a chemical reaction is exothermic when it has a negative enthalpy of reaction; in contrast, an exothermic nuclear reaction has a positive Q value.
The Q value can also be expressed in terms of the mass excess Δ of the nuclear species as:
Q = Σ Δ_reactants − Σ Δ_products.
Proof: The mass of a nucleus can be written as M = A·u + Δ, where A is the mass number (sum of number of protons and neutrons), Δ is the mass excess, and u ≈ 931.494 MeV/c² is the atomic mass unit. Note that the count of nucleons is conserved in a nuclear reaction. Hence, Σ A_reactants = Σ A_products, the A·u terms cancel, and Q = Σ Δ_reactants − Σ Δ_products.
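A minimal sketch of computing Q from tabulated atomic masses; the mass values below are approximate and quoted only for illustration:

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

def q_value(reactant_masses, product_masses):
    """Q = (sum of reactant masses - sum of product masses) * c^2,
    with masses in atomic mass units and Q returned in MeV."""
    return (sum(reactant_masses) - sum(product_masses)) * U_TO_MEV

# Free-neutron beta decay, n -> p + e- (+ antineutrino), using *atomic*
# masses so the emitted electron is accounted for by the hydrogen-1 mass.
# Mass values are approximate, for illustration:
m_n, m_H1 = 1.0086649, 1.0078250
print(round(q_value([m_n], [m_H1]), 3))  # ~0.782 MeV, positive: exothermic
```

A positive result confirms the decay releases energy, consistent with the sign convention defined above.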
Applications
Chemical Q values are measured by calorimetry. Exothermic chemical reactions tend to be more spontaneous and can emit light or heat, resulting in runaway feedback (i.e. explosions).
Q values are also featured in particle physics. For example, |
https://en.wikipedia.org/wiki/Harrya%20chromapes | Harrya chromapes, commonly known as the yellowfoot bolete or the chrome-footed bolete, is a species of bolete fungus in the family Boletaceae. The bolete is found in eastern North America, Costa Rica, and eastern Asia, where it grows on the ground, in a mycorrhizal association with deciduous and coniferous trees. Fruit bodies have smooth, rose-pink caps that are initially convex before flattening out. The pores on the cap undersurface are white, aging to a pale pink as the spores mature. The thick stipe has fine pink or reddish dots (scabers), and is white to pinkish but with a bright yellow base. The mushrooms are edible but are popular with insects, and so they are often infested with maggots.
In its taxonomic history, Harrya chromapes has been shuffled to several different genera, including Boletus, Leccinum, and Tylopilus, and is known in field guides as a member of one of these genera. In 2012, it was transferred to the newly created genus Harrya when it was established that morphological and molecular evidence demonstrated its distinctness from the genera in which it had formerly been placed.
Taxonomy
The species was first described scientifically by American mycologist Charles Christopher Frost as Boletus chromapes. Cataloging the bolete fungi of New England, Frost published 22 new bolete species in that 1874 publication. Rolf Singer placed the species in Leccinum in 1947 due to the scabrous dots on the stipe, even though the spore print color was not typical of that genus. In 1968, Alexander H. Smith and Harry Delbert Thiers thought that Tylopilus was a more appropriate fit as they believed the pinkish-brown spore print—characteristic of that genus—to be of greater taxonomic significance. Other genera to which it has been shuffled in its taxonomic history include Ceriomyces by William Alphonso Murrill in 1909, and Krombholzia by Rolf Singer in 1942; Ceriomyces and Krombholzia have since been subsumed into Boletus and Leccinum, respectively. Additional syn |
https://en.wikipedia.org/wiki/Suspended%20animation | Suspended animation is the temporary (short- or long-term) slowing or stopping of biological function so that physiological capabilities are preserved. It may be either hypometabolic or ametabolic in nature. It may be induced by either endogenous, natural or artificial biological, chemical or physical means. In its natural form, it may be spontaneously reversible as in the case of species demonstrating hypometabolic states of hibernation. When applied with therapeutic intent, as in deep hypothermic circulatory arrest (DHCA), usually technologically mediated revival is required.
Basic principles
Suspended animation is understood as the pausing of life processes by exogenous or endogenous means without terminating life itself. Breathing, heartbeat and other involuntary functions may still occur, but they can only be detected by artificial means. For this reason, this procedure has been associated with a lethargic state in nature when animals or plants appear, over a period, to be dead but then can wake up or prevail without suffering any harm. This has been termed in different contexts hibernation, dormancy or anabiosis (the latter in some aquatic invertebrates and plants in scarcity conditions).
In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically-poor sediments, up to 101.5 million years old, below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found.
This condition of apparent death or interruption of vital signs may be similar to a medical interpretation of suspended animation. It is only possible to recover signs of life if the brain and other vital organs suffer no cell deterioration, necrosis or molecular death principally caused by oxygen deprivation or excess temperature (especially high temperature).
Some examples of people that have returned from this apparent interruption of life lasting |
https://en.wikipedia.org/wiki/EOS%20memory | EOS memory (for ECC on SIMMs) is an error-correcting memory system built into SIMMs, used to upgrade server-class computers without built-in ECC memory support. The EOS SIMM itself does the error checking, with reduced need for ECC memory modules and support. The technology was introduced by IBM in the mid-1990s. |
https://en.wikipedia.org/wiki/Flag%20of%20Burlington%2C%20Vermont | The flag of Burlington, Vermont was adopted by the Burlington city council on November 27, 2017 during the mayorship of Miro Weinberger. It is five horizontal, zig-zag stripes of blue, white, green, white, and blue.
History
1990 flag
In 1990, a group of students at Edmunds Middle School, led by eighth grader Cara Wick, designed a 2:3 proportioned flag for Burlington as part of a leadership project. The background of the flag depicts the westward view from the city over Lake Champlain of the Adirondack Mountains in New York and the Four Brother Islands, including Juniper Island. There is a yellow scroll across the top reading "BURLINGTON," below which sits a quartered shield. The first quarter, blue with a white dove, symbolizes peace and the city's sister cities. The second quarter, red with a yellow genie lamp, represents the colleges and universities in Burlington, namely the University of Vermont, Champlain College, Burlington College, and Trinity College of Vermont. The third quarter, yellow with several pine trees, represents environmental conservation; pine trees were chosen as "the state tree of Vermont," but the state tree is actually the sugar maple. The fourth quarter, blue and white with a sock and buskin, represents the arts. The globe in the center of the shield represents the theme "we are one world"; atop the globe, separating the first and second quarters of the shield, is city hall. The reverse of the flag is a mirror image, except the name "BURLINGTON" is lettered correctly.
The North American Vexillological Association ranked the 1990 flag of Burlington as one of the worst city flags in the United States in a 2004 survey. The flag came in at 107 out of 150 cities, earning a 3.23 out of 10. Of the Association's five principles of good flag design, the 1990 flag breaks most, if not all, of them.
2017 flag
Inspired by Roman Mars' viral 2015 TED Talk on flag design, Mayor Weinberger announced in January 2017 the city would be changing its flag
https://en.wikipedia.org/wiki/Space%20form | In mathematics, a space form is a complete Riemannian manifold M of constant sectional curvature K. The three most fundamental examples are Euclidean n-space, the n-dimensional sphere, and hyperbolic space, although a space form need not be simply connected.
Reduction to generalized crystallography
The Killing–Hopf theorem of Riemannian geometry states that the universal cover of an n-dimensional space form with curvature K = −1 is isometric to hyperbolic space H^n, with curvature K = 0 is isometric to Euclidean n-space R^n, and with curvature K = +1 is isometric to the n-dimensional sphere S^n of points at distance 1 from the origin in R^(n+1).
By rescaling the Riemannian metric on H^n, we may create a space of constant curvature K for any K < 0. Similarly, by rescaling the metric on S^n, we may create a space of constant curvature K for any K > 0. Thus the universal cover of a space form with constant curvature K is isometric to one of these rescaled model spaces, denoted M_K.
This reduces the problem of studying space forms to studying discrete groups Γ of isometries of M_K which act properly discontinuously. Note that the fundamental group of M, π₁(M), will be isomorphic to Γ. Groups acting in this manner on R^n are called crystallographic groups. Groups acting in this manner on H² and H³ are called Fuchsian groups and Kleinian groups, respectively.
See also
Borel conjecture |
https://en.wikipedia.org/wiki/Plasma%20modeling | Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic and as system of many particles.
Single particle description
The single particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law.
In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point.
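This superposition can be seen numerically. The sketch below (illustrative, in normalized units with arbitrary field values; not from the source) advances a charged particle under the Lorentz force with the standard Boris scheme and recovers the slow E×B drift of the guiding center beneath the fast gyration:

```python
import math

def boris_push(x, v, q_m, E, B, dt, steps):
    """Advance a charged particle with the Boris scheme, a standard
    integrator for the Lorentz force law.  q_m is the charge-to-mass
    ratio; returns the final position and the velocity history."""
    xx, xy, xz = x
    vx, vy, vz = v
    hist = []
    for _ in range(steps):
        # first half of the electric-field kick
        vx += 0.5 * q_m * E[0] * dt
        vy += 0.5 * q_m * E[1] * dt
        vz += 0.5 * q_m * E[2] * dt
        # rotation about the magnetic field
        tx, ty, tz = (0.5 * q_m * dt * b for b in B)
        t2 = tx * tx + ty * ty + tz * tz
        sx, sy, sz = (2.0 * c / (1.0 + t2) for c in (tx, ty, tz))
        px = vx + (vy * tz - vz * ty)   # v' = v- + v- x t
        py = vy + (vz * tx - vx * tz)
        pz = vz + (vx * ty - vy * tx)
        vx, vy, vz = (vx + (py * sz - pz * sy),   # v+ = v- + v' x s
                      vy + (pz * sx - px * sz),
                      vz + (px * sy - py * sx))
        # second half of the electric-field kick
        vx += 0.5 * q_m * E[0] * dt
        vy += 0.5 * q_m * E[1] * dt
        vz += 0.5 * q_m * E[2] * dt
        xx, xy, xz = xx + vx * dt, xy + vy * dt, xz + vz * dt
        hist.append((vx, vy, vz))
    return (xx, xy, xz), hist

# B along z, E along x: the guiding center should drift at E x B / |B|^2,
# i.e. (0, -0.1, 0) in these units, regardless of the fast gyration.
dt = 2.0 * math.pi / 1000.0            # 1000 steps per gyroperiod
_, hist = boris_push((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0,
                     (0.1, 0.0, 0.0), (0.0, 0.0, 1.0), dt, 10000)
avg_vy = sum(v[1] for v in hist) / len(hist)
```

Averaging the velocity over many gyroperiods isolates the drift from the circular motion.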
Kinetic description
The kinetic model is the most fundamental way to describe a plasma, producing a distribution function f(x, v, t),
where the independent variables x and v are position and velocity, respectively.
A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, by the Vlasov equation which contains self-consistent collective electromagnetic field, or by the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations.
Fluid description
To reduce the complexities in the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coef |
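As a concrete sketch of taking velocity moments (illustrative, 1-D, with a Maxwellian and grid chosen arbitrarily for the example), the zeroth, first, and centered second moments of a tabulated distribution recover density, mean velocity, and temperature:

```python
import math

# Tabulate a 1-D Maxwellian f(v) with density n0, drift u0, thermal speed vt.
n0, u0, vt = 5.0, 0.5, 1.0
dv = 0.01
vs = [-8.0 + i * dv for i in range(int(16.0 / dv) + 1)]
f = [n0 / (math.sqrt(2.0 * math.pi) * vt)
     * math.exp(-0.5 * ((v - u0) / vt) ** 2) for v in vs]

# zeroth moment: density
density = sum(fi * dv for fi in f)
# first moment (normalized by density): mean velocity
mean_v = sum(v * fi * dv for v, fi in zip(vs, f)) / density
# centered second moment: temperature (in units where k_B = m = 1)
temp = sum((v - mean_v) ** 2 * fi * dv for v, fi in zip(vs, f)) / density
```

The recovered macroscopic quantities match the parameters of the underlying distribution, which is exactly the information a fluid model retains.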
https://en.wikipedia.org/wiki/Holographic%20algorithm | In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because "their effect can be viewed as that of producing interference patterns among the solution fragments". The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram.
Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P.
Holographic algorithms have some similarities with quantum computation, but are completely classical.
Holant problems
Holographic algorithms exist in the context of Holant problems, which generalize counting constraint satisfaction problems (#CSP). A #CSP instance is a hypergraph G = (V, E) called the constraint graph. Each hyperedge represents a variable, and each vertex v is assigned a constraint f_v. A vertex is connected to a hyperedge if the constraint on the vertex involves the variable on the hyperedge. The counting problem is to compute
Σ_σ Π_{v∈V} f_v(σ|_{E(v)}),
a sum over all variable assignments σ of the product of every constraint, where the inputs to the constraint f_v are the values assigned to the variables on the incident hyperedges of v.
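For intuition, the sum-of-products definition can be evaluated by brute force (an illustrative, exponential-time sketch with invented helper names; holographic algorithms exist precisely to avoid this enumeration). With the EXACT-ONE constraint at every vertex, the Holant value counts perfect matchings:

```python
from itertools import product

def holant(edges, num_vertices, constraint):
    """Sum over all 0/1 edge assignments of the product of a per-vertex
    constraint applied to the values on that vertex's incident edges."""
    incident = [[] for _ in range(num_vertices)]
    for idx, (u, w) in enumerate(edges):
        incident[u].append(idx)
        incident[w].append(idx)
    total = 0
    for sigma in product((0, 1), repeat=len(edges)):
        term = 1
        for v in range(num_vertices):
            term *= constraint([sigma[i] for i in incident[v]])
            if term == 0:
                break
        total += term
    return total

def exact_one(vals):
    """Constraint satisfied when exactly one incident edge is selected."""
    return 1 if sum(vals) == 1 else 0

# On a 4-cycle, EXACT-ONE at every vertex selects perfect matchings: 2 of them.
pm4 = holant([(0, 1), (1, 2), (2, 3), (3, 0)], 4, exact_one)
```

An odd cycle has no perfect matching, so the same evaluation on a 5-cycle returns 0.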
A Holant problem is like a #CSP except the input must be a graph, not a hypergraph. Restricting the class of input graphs in this way is |
https://en.wikipedia.org/wiki/XOT | XOT (X.25 Over TCP) is a protocol developed by Cisco Systems that enables X.25 packets to be encapsulated and routed through TCP/IP connections instead of LAPB links. In 2012, X.25 tunnelled over TCP/IP using XOT was noted as by then being likely more common in actual use than physical X.25 over LAPB. |
https://en.wikipedia.org/wiki/Bateman%20equation | In nuclear physics, the Bateman equation is a mathematical model describing abundances and activities in a decay chain as a function of time, based on the decay rates and initial abundances. The model was formulated by Ernest Rutherford in 1905 and the analytical solution was provided by Harry Bateman in 1910.
If, at time t, there are N_i(t) atoms of isotope i that decays into isotope i+1 at the rate λ_i, the amounts of isotopes in the k-step decay chain evolve as:

dN_1(t)/dt = −λ_1 N_1(t)
dN_i(t)/dt = −λ_i N_i(t) + λ_{i−1} N_{i−1}(t)   for i = 2, …, k−1
dN_k(t)/dt = λ_{k−1} N_{k−1}(t)

(this can be adapted to handle decay branches). While this can be solved explicitly for i = 2, the formulas quickly become cumbersome for longer chains. The Bateman equation is a classical master equation where the transition rates are only allowed from one species (i) to the next (i+1) but never in the reverse sense (i+1 to i is forbidden).
Bateman found a general explicit formula for the amounts by taking the Laplace transform of the variables:

N_n(t) = N_1(0) (Π_{i=1}^{n−1} λ_i) Σ_{i=1}^{n} e^{−λ_i t} / Π_{j=1, j≠i}^{n} (λ_j − λ_i)
(it can also be expanded with source terms, if more atoms of isotope i are provided externally at a constant rate).
While the Bateman formula can be implemented in computer code, cancellation can lead to computational errors if λ_i ≈ λ_j for some isotope pair. Therefore, other methods such as numerical integration or the matrix exponential method are also in use.
For example, for the simple case of a chain of three isotopes A → B → C, the corresponding Bateman equation reduces to

N_B = N_A(0) λ_A/(λ_B − λ_A) (e^{−λ_A t} − e^{−λ_B t}),

which gives the following formula for the activity of isotope B (by substituting A_B = λ_B N_B):

A_B = A_A(0) λ_B/(λ_B − λ_A) (e^{−λ_A t} − e^{−λ_B t}).
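The general Bateman solution can be sketched directly in code (an illustrative helper assuming all decay constants are distinct; a zero final constant models a stable end of the chain):

```python
import math

def bateman(n0, lam, t):
    """Amounts N_i(t) for a linear decay chain 1 -> 2 -> ... -> k.

    n0  -- initial amount of the first isotope (the rest start at zero)
    lam -- decay constants; lam[-1] = 0.0 models a stable final member.
           All values must be distinct (the classic formula has removable
           singularities when two rates coincide).
    """
    k = len(lam)
    out = []
    for n in range(1, k + 1):
        prefactor = n0 * math.prod(lam[:n - 1])       # feeding rates
        total = sum(
            math.exp(-lam[i] * t)
            / math.prod(lam[j] - lam[i] for j in range(n) if j != i)
            for i in range(n)
        )
        out.append(prefactor * total)
    return out

# Three-member chain with a stable end: matter is conserved overall.
amounts = bateman(100.0, [1.0, 0.5, 0.0], 2.0)
```

With a stable final member, the amounts always sum to the initial inventory, a convenient sanity check on any implementation.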
See also
Harry Bateman
List of equations in nuclear and particle physics
Transient equilibrium
Secular equilibrium
Pharmacokinetics, loose applicability |
https://en.wikipedia.org/wiki/Virgaviridae | Virgaviridae is a family of positive-strand RNA viruses. Plants serve as natural hosts. The name of the family is derived from the Latin word virga (rod), as all viruses in this family are rod-shaped. There are currently 59 species in this family, divided among seven genera.
Structure
Viruses in Virgaviridae are non-enveloped, with rigid helical rod geometries and helical symmetry. The diameter is around 20–25 nm, and virions have a central "canal". Genomes are linear, single-stranded, positive-sense RNA with a 3'-tRNA-like structure and no polyA tail. They may be in one, two, or three segments, depending on the genus. Coat proteins are about 19–24 kilodaltons.
Life cycle
Viral replication is cytoplasmic. Entry is achieved by penetration into the host cell. Replication and transcription follow the positive-stranded RNA virus model. Translation takes place by leaky scanning and suppression of termination. The virus exits the host cell by tripartite or monopartite non-tubule-guided viral movement.
Plants serve as the natural host.
Taxonomy
Viruses included in the family Virgaviridae are characterized by unique alpha-like replication proteins.
The following genera are recognized:
Furovirus
Goravirus
Hordeivirus
Pecluvirus
Pomovirus
Tobamovirus
Tobravirus
Notes
The genus Benyvirus, although its members are rod shaped and infect plants, is not included in this family as its proteins appear to be only very distantly related, but is instead included in the family Benyviridae. Another related genus is Charavirus. Viruses of this genus infect charophyte algae. |
https://en.wikipedia.org/wiki/Freezing-point%20depression | Freezing-point depression is a drop in the maximum temperature at which a substance freezes, caused when a smaller amount of another, non-volatile substance is added. Examples include adding salt into water (used in ice cream makers and for de-icing roads), alcohol in water, ethylene or propylene glycol in water (used in antifreeze in cars), adding copper to molten silver (used to make solder that flows at a lower temperature than the silver pieces being joined), or the mixing of two solids such as impurities into a finely powdered drug.
In all cases, the substance added/present in smaller amounts is considered the solute, while the original substance present in larger quantity is considered the solvent. The resulting liquid solution or solid-solid mixture has a lower freezing point than the pure solvent or solid because the chemical potential of the solvent in the mixture is lower than that of the pure solvent, the difference between the two being proportional to the natural logarithm of the mole fraction. In a similar manner, the chemical potential of the vapor above the solution is lower than that above the pure solvent, which results in boiling-point elevation. Freezing-point depression is what causes sea water (a mixture of salt and other compounds in water) to remain liquid at temperatures below 0 °C (32 °F), the freezing point of pure water.
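The practical magnitude of the effect follows the idealized dilute-solution relation ΔT_f = i·K_f·b; the sketch below uses an approximate cryoscopic constant for water and an example molality (both illustrative, and real brines deviate at high concentration):

```python
K_F_WATER = 1.86  # cryoscopic constant of water, K·kg/mol (approximate)

def freezing_point_depression(molality, vant_hoff_i, k_f=K_F_WATER):
    """Idealized dilute-solution depression: delta T = i * K_f * b."""
    return vant_hoff_i * k_f * molality

# 0.5 mol of NaCl per kg of water dissociates into two ions (i = 2),
# so the freezing point drops by about 1.86 K.
dT = freezing_point_depression(0.5, 2)
```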
Explanation
Using vapour pressure
The freezing point is the temperature at which the liquid solvent and solid solvent are at equilibrium, so that their vapor pressures are equal. When a non-volatile solute is added to a volatile liquid solvent, the solution vapour pressure will be lower than that of the pure solvent. As a result, the solid will reach equilibrium with the solution at a lower temperature than with the pure solvent. This explanation in terms of vapor pressure is equivalent to the argument based on chemical potential, since the chemical potential of a vapor is logarithmically related to pressure. All of the colligati |
https://en.wikipedia.org/wiki/ScreenOS | ScreenOS is a real-time embedded operating system for the NetScreen range of hardware firewall devices from Juniper Networks.
Features
Beside transport level security ScreenOS also integrates these flow management applications:
IP gateway VPN management – ICSA-certified IPSec
IP packet inspection (low level) for protection against TCP/IP attacks
Virtualization for network segmentation
Possible NSA backdoor and 2015 "Unauthorized Code" incident
In December 2015, Juniper Networks announced that it had found unauthorized code in ScreenOS that had been there since August 2012. The two backdoors it created would allow sophisticated hackers to control the firewall of unpatched Juniper NetScreen products and decrypt network traffic. At least one of the backdoors appeared likely to have been the effort of a governmental interest. There was speculation in the security field about whether it was the NSA. Many in the security industry praised Juniper for being transparent about the breach. WIRED speculated that the scarcity of disclosed details and the intentional use of a random number generator with known security flaws could suggest that the backdoor was planted deliberately.
NSA and GCHQ
A 2011 leaked NSA document says that GCHQ had current exploit capability against the following ScreenOS devices: NS5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. The exploit capabilities seem consistent with the program codenamed FEEDTROUGH.
Versions |
https://en.wikipedia.org/wiki/Language%20deprivation%20experiments | Language deprivation experiments have been claimed to have been attempted at least four times through history, isolating infants from the normal use of spoken or signed language in an attempt to discover the fundamental character of human nature or the origin of language.
The American literary scholar Roger Shattuck called this kind of research study the "forbidden experiment" because of the exceptional deprivation of ordinary human contact it requires. Although not designed to study language, similar experiments on non-human primates (labelled the "pit of despair") utilising complete social deprivation resulted in serious psychological disturbances.
In history
An early record of a study of this kind can be found in Herodotus's Histories. According to Herodotus (ca. 485 – 425 BC), the Egyptian pharaoh Psamtik I (664 – 610 BC) carried out such a study, and concluded the Phrygian race must antedate the Egyptians since the child had first spoken something similar to the Phrygian word bekos, meaning "bread". Recent researchers suggest this was likely a willful interpretation of the children's babbling.
An experiment allegedly carried out by Holy Roman Emperor Frederick II in the 13th century saw young infants raised without human interaction in an attempt to determine if there was a natural language that they might demonstrate once their voices matured. It is claimed he was seeking to discover what language would have been imparted into Adam and Eve by God. The experiments were recorded by the monk Salimbene di Adam in his Chronicles, who was generally extremely negative about Frederick II (portraying his calamities as parallel to the Biblical plagues in The Twelve Calamities of Emperor Frederick II) and wrote that Frederick encouraged "foster-mothers and nurses to suckle and bathe and wash the children, but in no ways to prattle or speak with them; for he would have learnt whether they would speak the Hebrew language (which he took to have been the first), or Greek, or Latin, or A
https://en.wikipedia.org/wiki/Lifson%E2%80%93Roig%20model | In polymer science, the Lifson–Roig model
is a helix-coil transition model applied to the alpha helix-random coil transition of polypeptides; it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications.
The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis.
Parameterization
The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights |
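The weighting rules just described can be made concrete with a brute-force partition function (an illustrative, exponential-time sketch; real implementations use the transfer matrix instead). Here a helical residue flanked by two helical neighbors gets the hydrogen-bonded weight w, any other helical residue gets the nucleation weight v, and coil residues get 1:

```python
from itertools import product

def lifson_roig_Z(n, v, w):
    """Brute-force Lifson-Roig partition function for an n-residue chain.

    Residues are helix (1) or coil (0).  A helical residue whose two
    neighbors are both helical contributes w; a helical residue with at
    least one non-helical neighbor contributes v; coil contributes 1.
    Positions beyond the chain ends are treated as coil.
    """
    Z = 0.0
    for conf in product((0, 1), repeat=n):
        weight = 1.0
        for i, s in enumerate(conf):
            if s == 1:
                left = conf[i - 1] if i > 0 else 0
                right = conf[i + 1] if i < n - 1 else 0
                weight *= w if (left and right) else v
        Z += weight
    return Z
```

Two limits make useful checks: with v = w = 1 every conformation has weight 1, so Z = 2^n; with v = 0 no helix can nucleate and only the all-coil state survives, so Z = 1.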
https://en.wikipedia.org/wiki/Spinning%20cone | Spinning cone columns are used in a form of low temperature vacuum steam distillation to gently extract volatile chemicals from liquid foodstuffs while minimising the effect on the taste of the product. For instance, the columns can be used to remove some of the alcohol from wine, 'off' smells from cream, and to capture aroma compounds that would otherwise be lost in coffee processing.
Mechanism
The columns are made of stainless steel. Conical vanes are attached alternately to the wall of the column and to a central rotating shaft. The product is poured in at the top under vacuum, and steam is pumped into the column from below. The vanes provide a large surface area over which volatile compounds can evaporate into the steam, and the rotation ensures a thin layer of the product is constantly moved over the moving cone. It typically takes 20 seconds for the liquid to move through the column, and industrial columns might process . The temperature and pressure can be adjusted depending on the compounds targeted.
Wine controversy
Improvements in viticulture and warmer vintages have led to increasing levels of sugar in wine grapes, which have translated to higher alcohol levels, which can reach over 15% ABV in Zinfandels from California. Some producers feel that this unbalances their wine, and use spinning cones to reduce the alcohol by 1–2 percentage points. In this case the wine is passed through the column once to distill out the most volatile aroma compounds, which are set aside while the wine goes through the column a second time at a higher temperature to extract alcohol. The aroma compounds are then mixed back into the wine. Some producers such as Joel Peterson of Ravenswood argue that technological "fixes" such as spinning cones remove a sense of terroir from the wine; if the wine has the tannins and other components to balance 15% alcohol, Peterson argues that it should be accepted on its own terms.
The use of spinning cones, and other technolo |
https://en.wikipedia.org/wiki/QRpedia | QRpedia is a mobile Web-based system which uses QR codes to deliver Wikipedia articles to users, in their preferred language. A typical use is on museum labels, linking to Wikipedia articles about the exhibited object. QR codes can easily be generated to link directly to any Uniform Resource Identifier (URI), but the QRpedia system adds further functionality. It is owned and operated by a subsidiary of Wikimedia UK (WMUK).
QRpedia was conceived by Roger Bamkin, a Wikipedia volunteer, coded by Terence Eden, and unveiled in April 2011. It is in use at museums and other institutions in countries including Australia, Bulgaria, the Czech Republic, Estonia, North Macedonia, Spain, India, the United Kingdom, Germany, Ukraine and the United States. The project's source code is freely reusable under the MIT License.
Process
When a user scans a QRpedia QR code on their mobile device, the device decodes the QR code into a Uniform Resource Locator (URL) whose domain name is "languagecode.qrwp.org" and whose path (final part) is the title of a Wikipedia article, then sends a request for the article specified in the URL to the QRpedia web server. It also transmits the language setting of the device.
The QRpedia server then uses Wikipedia's API to determine whether there is a version of the specified Wikipedia article in the language used by the device, and if so, returns it in a mobile-friendly format. If there is no version of the article available in the preferred language, then the QRpedia server offers a choice of the available languages, or a Google translation.
In this way, one QR code can deliver the same article in many languages, even when the museum is unable to make its own translations. QRpedia also records usage statistics.
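The URL construction described above can be sketched as follows (a hypothetical helper, not the project's actual code; the real service resolves languages and titles server-side):

```python
from urllib.parse import quote

def qrpedia_url(lang, title):
    """Build a QRpedia-style URL: the language code becomes the subdomain
    of qrwp.org and the article title becomes the path, with spaces
    rendered as underscores in the Wikipedia convention."""
    path = quote(title.replace(" ", "_"), safe="_()")
    return f"https://{lang}.qrwp.org/{path}"

url = qrpedia_url("en", "Derby Museum and Art Gallery")
```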
Origins
QRpedia was conceived by Roger Bamkin, a Wikipedia volunteer, and Terence Eden, a mobile web consultant, and was unveiled on 9 April 2011 at Derby Museum and Art Gallery's Backstage Pass event, part of the "GLAM/Derby" collaboration |
https://en.wikipedia.org/wiki/David%20Gilbarg | David Gilbarg (17 September 1918, Boston, Massachusetts – 20 April 2001, Palo Alto, California) was an American mathematician, and a professor emeritus at Stanford University.
He completed his Ph.D. at Indiana University in 1941; his dissertation, titled On the Structure of the group of p-adic l-units, was written under the supervision of Emil Artin.
Gilbarg was co-author, together with his student Neil Trudinger, of the book Elliptic Partial Differential Equations of Second Order. Besides Trudinger, Gilbarg's doctoral students include Jerald Ericksen and James Serrin. |
https://en.wikipedia.org/wiki/Simplesse | Simplesse is a multi-functional dairy ingredient made from whey protein concentrate used as a fat substitute in low-calorie foods. Originally brought to market in 1988, the manufacturer, CP Kelco (a former NutraSweet subsidiary), sells Simplesse to food processors as a "microparticulated whey protein concentrate" in dry powder form, and recommends that it be labelled as dairy protein on food labels. Older versions of the product also contain egg whites.
The protein is partially coagulated by heat, creating a micro dispersion, in a process known as microparticulation. It is due to the small particle size of the protein that the dispersion is perceivable as a fluid with similar creaminess and richness of fat.
History
Simplesse began in 1979 as "a substance that gelled like egg white but crumbled like Styrofoam." At that time, Shoji Yamamoto, an associate of Norman S. Singer at Canadian beer company John Labatt Ltd. in London, Ontario, brought Singer the whey protein substance. After sensing that it gave the taste texture of cream cheese, another scientist put a sample under a powerful microscope and saw "tiny spheres of protein rolling over each other" about a tenth the size of particles of powdered sugar. It was this particle rolling action that gave a smooth creaminess sensation. Another associate of Singer, scientist Nina Davis, worked for three years with Simplesse (then called microcurd) to adapt it to and incorporate it into many different food products. After Singer's boss observed that the product tasted like cheesecake, Singer filed for and received a United States patent for his efforts with the product. Labatt licensed the product to NutraSweet, a subsidiary of Creve Coeur, Missouri, United States chemical conglomerate Monsanto, in 1984. The product was given the trademark "Simplesse" in January 1988. An initial version of the product, approved for use in frozen desserts like low-fat ice cream substitutes by the U.S. Food and Drug Administration (FDA), w |
https://en.wikipedia.org/wiki/Canned%20fish | Canned or tinned fish are food fish which have been processed, sealed in an airtight container such as a sealed tin can, and subjected to heat. Canning is a method of preserving food, and provides a typical shelf life ranging from one to five years. They are usually opened via a can opener, but sometimes have a pull-tab so that they can be opened by hand. In the past it was common for many cans to have a key that would be turned to peel the lid of the tin off; most predominately sardines, among others.
Fish are low-acid foods, with acidity levels at which microbes can flourish. From a public safety point of view, foods with low acidity (pH greater than 4.6) need sterilization at high temperatures. Achieving temperatures above the boiling point requires pressurized cooking. After sterilization, the sealed can prevents microorganisms from entering and proliferating inside. No method other than sterilization is fully dependable as a preservative. For example, the microorganism Clostridium botulinum (which causes botulism) can only be eliminated at temperatures above the boiling point.
Preservation techniques are needed to prevent spoilage and lengthen shelf life. They are designed to inhibit the activity of spoilage bacteria and the metabolic changes leading to a loss of quality. Spoilage bacteria are the specific bacteria that produce the unpleasant odours and flavours associated with spoiled fish.
Background
The "father of canning" is the Frenchman Nicolas Appert. In 1795, he began experimenting with ways to preserve fish in jars. He placed jars of fish in boiling water. During the first years of the Napoleonic Wars, the French government offered a 12,000 franc prize to anyone who could devise a cheap and effective method of preserving large amounts of food. The larger armies of the period required increased and regular supplies of quality food. Appert submitted his invention and won the prize in January 1810. The reason for lack of spoilage was unknown at the time, sinc |
https://en.wikipedia.org/wiki/Cerebellar%20hemisphere | The cerebellum consists of three parts, a median and two lateral, which are continuous with each other, and are substantially the same in structure. The median portion is constricted, and is called the vermis, from its annulated appearance which it owes to the transverse ridges and furrows upon it; the lateral expanded portions are named the hemispheres.
Sections
The "intermediate hemisphere" is also known as the "spinocerebellum".
The "lateral hemisphere" is also known as the "pontocerebellum".
The lateral hemisphere is considered to be the most recently developed portion of the cerebellum.
Additional images
See also
Anatomy of the cerebellum |
https://en.wikipedia.org/wiki/Noise%20and%20vibration%20on%20maritime%20vessels | Noise and vibration on maritime vessels are not the same, but they have the same origin and come in many forms. The methods for handling the related problems are similar to a certain degree: most shipboard noise problems are reduced by controlling vibration.
Sources
The main producers of mechanically created noise and vibration are the engines, but there are also other sources, like the air conditioning, shaft-line, cargo handling and control equipment and mooring machinery.
Diesel engines
In diesel-driven vessels, the engines induce large accelerations that travel from the foundation of the engine throughout the ship. In most compartments, this type of vibration normally manifests itself as audible noise. The problem with diesels is that, for a given size, there is a fixed amount of power generated per cylinder. To increase power it is necessary to add cylinders, but when cylinders are added, the crankshaft has to be lengthened, and after a very limited number of additions the lengthened crankshaft begins to flex and vibrate on its own. This results in increased vibration spread throughout the ship's structure. Crankshaft vibration can be reduced by a harmonic balancer.
Electrical engines
Large vessels sometimes use electrical propulsion motors, with the electrical power provided by a diesel generator. Noise and vibration of electric motors include, besides mechanical and aerodynamic sources, an electromagnetic source: the electromagnetic forces responsible for the "whining noise" of the motor.
Turbines
Steam turbines and gas turbines, on the other hand, when new and/or in good repair, do not by themselves generate excessive vibration, as long as the turbine blades are in perfect condition and rotate in a smooth gas flow. But after some time, microscopic defects appear and cause small pits to form in the surface of the intake and the blades, which set up eddies in the gas flow, resulting in loss of performance and vibrations
https://en.wikipedia.org/wiki/International%20Committee%20on%20Taxonomy%20of%20Viruses | The International Committee on Taxonomy of Viruses (ICTV) authorizes and organizes the taxonomic classification of and the nomenclature for viruses. The ICTV develops a universal taxonomic scheme for viruses, and thus has the means to appropriately describe, name, and classify every virus taxon. The members of the International Committee on Taxonomy of Viruses are considered expert virologists. The ICTV was formed from and is governed by the Virology Division of the International Union of Microbiological Societies. Detailed work, such as identifying new taxa and delimiting the boundaries of species, genera, families, etc. typically is performed by study groups of experts in the families.
History
The International Committee on Nomenclature of Viruses (ICNV) was established in 1966, at the International Congress for Microbiology in Moscow, to standardize the naming of virus taxa. The ICNV published its first report in 1971. For viruses infecting vertebrates, the first report included 19 genera, 2 families, and a further 24 unclassified groups.
The ICNV was renamed the International Committee on Taxonomy of Viruses in 1974.
Organisational structure
The organisation is divided into an executive committee, which includes members and executives with fixed-term elected roles, as well as directly appointed heads of seven subcommittees. Each subcommittee head, in turn, appoints numerous 'study groups', which each consist of one chair and a variable number of members dedicated to the taxonomy of a specific taxon, such as an order or family. This structure may be visualised as follows:
Executive committee
President
Vice-president
Secretaries
Business Secretary
Proposals Secretary
Data Secretary
Chairs – positions: 7 (one for each subcommittee)
Elected members – positions: 11
Subcommittees
Animal DNA Viruses and Retroviruses Subcommittee – study groups: 18
Animal dsRNA and ssRNA- Viruses Subcommittee – study groups: 24
Animal ssRNA+ Viruses Subcommittee – study |
https://en.wikipedia.org/wiki/Savior%20sibling | A savior baby or savior sibling is a child who is conceived in order to provide a stem cell transplant to a sibling who is affected with a fatal disease, such as cancer or Fanconi anemia, that can best be treated by hematopoietic stem cell transplantation.
Introduction
The savior sibling is conceived through in vitro fertilization. Fertilized zygotes are tested for genetic compatibility (human leukocyte antigen (HLA) typing), using preimplantation genetic diagnosis (PGD), and only zygotes that are compatible with the existing child are implanted. Zygotes are also tested to make sure they are free of the original genetic disease. The procedure is controversial.
Indications
A savior sibling may be the solution for any disease treated by hematopoietic stem cell transplantation. It is effective against genetically detectable (mostly monogenic) diseases, e.g. Fanconi anemia, Diamond–Blackfan anemia and β-thalassemia, in the ailing sibling, since the savior sibling can be selected to not have inherited the disease. The procedure may also be used in children with leukemia, and in such cases HLA match is the only requirement, and not exclusion of any other obvious genetic disorder.
Procedure
Multiple embryos are created and preimplantation genetic diagnosis is used to detect and select ones that are free of a genetic disorder and that are also a HLA match for an existing sibling who requires a transplant. Upon birth, umbilical cord blood is taken and used for hematopoietic stem cell transplantation.
Bioethical questions
The conception of a child in order to save another raises ethical issues: unless a compatible brother or sister can provide treatment, the sick elder child is doomed.
This ethical debate is reflected in the terminology used, with opponents insisting on the term "baby‐medicine" to underline the "instrumentalization of the human body". For these opponents, PGD is considered "utilitarianism taken to the extreme", "human procreation is totally divert |