https://en.wikipedia.org/wiki/Casus%20irreducibilis | In algebra, casus irreducibilis (Latin for "the irreducible case") is one of the cases that may arise in solving polynomials of degree 3 or higher with integer coefficients algebraically (as opposed to numerically), i.e., by obtaining roots that are expressed with radicals. It shows that many algebraic numbers are real-valued but cannot be expressed in radicals without introducing complex numbers. The most notable occurrence of casus irreducibilis is in the case of cubic polynomials that have three real roots, which was proven by Pierre Wantzel in 1843.
One can see whether a given cubic polynomial is in so-called casus irreducibilis by looking at the discriminant, via Cardano's formula.
The three cases of the discriminant
Let
ax³ + bx² + cx + d = 0
be a cubic equation with a ≠ 0. Then the discriminant is given by
D := 18abcd − 4b³d + b²c² − 4ac³ − 27a²d².
It appears in the algebraic solution and is the square of the product
Δ := a²(x₁ − x₂)(x₁ − x₃)(x₂ − x₃)
of the differences of the 3 roots x₁, x₂, x₃.
If D < 0, then the polynomial has one real root and two complex non-real roots, and Δ is purely imaginary. Although there are cubic polynomials with negative discriminant which are irreducible in the modern sense, casus irreducibilis does not apply.
If D = 0, then Δ = 0 and there are three real roots, two of which are equal. The repeated root can be found with the Euclidean algorithm (as a root of the greatest common divisor of the polynomial and its derivative), and the remaining root by the quadratic formula. Moreover, all roots are real and expressible by real radicals. All cubic polynomials with zero discriminant are reducible.
If D > 0, then Δ is non-zero and real, and there are three distinct real roots which are sums of two complex conjugates. Because expressing them in radicals requires complex numbers (in the understanding of the time: cube roots of non-real numbers, i.e. of square roots of negative numbers), this case was termed casus irreducibilis in the 16th century.
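The three-way classification above can be sketched with a short check of the discriminant; the polynomials below are illustrative examples, not taken from the text.

```python
def cubic_discriminant(a, b, c, d):
    # D = 18abcd - 4b^3 d + b^2 c^2 - 4ac^3 - 27a^2 d^2
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

# x^3 - 3x + 1 is irreducible over Q with D = 81 > 0: three distinct
# real roots, hence casus irreducibilis.
assert cubic_discriminant(1, 0, -3, 1) == 81

# x^3 - 2 has D = -108 < 0: one real root, two complex conjugate roots.
assert cubic_discriminant(1, 0, 0, -2) == -108
```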
Formal statement and proof
More generally, suppose that is a formally real field, and that is a cubic polynomial, irreducible over , but having three real roots (roots in the real closure of ). Then casus irreducibilis states that it is impossible to express a solution of b |
https://en.wikipedia.org/wiki/Mathematical%20Operators%20%28Unicode%20block%29 | Mathematical Operators is a Unicode block containing characters for mathematical, logical, and set notation.
Notably absent are the plus sign (+), greater than sign (>) and less than sign (<), due to them already appearing in the Basic Latin Unicode block, and the plus-or-minus sign (±), multiplication sign (×) and obelus (÷), due to them already appearing in the Latin-1 Supplement block, although a distinct minus sign (−) is included, semantically different from the Basic Latin hyphen-minus (-).
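The block boundaries and the minus-sign distinction described above can be checked directly from code points; the snippet below is a small Python illustration.

```python
import unicodedata

BLOCK = range(0x2200, 0x2300)          # Mathematical Operators block

minus = chr(0x2212)                     # MINUS SIGN, in this block
hyphen = '-'                            # HYPHEN-MINUS, Basic Latin
assert ord(minus) in BLOCK
assert unicodedata.name(minus) == 'MINUS SIGN'
assert ord(hyphen) not in BLOCK

# operators encoded in earlier blocks and therefore absent here
for ch in '+<>':                        # Basic Latin
    assert ord(ch) not in BLOCK
for ch in '\u00b1\u00d7\u00f7':         # ±, ×, ÷ in Latin-1 Supplement
    assert ord(ch) not in BLOCK
```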
Block
Variation sequences
The Mathematical Operators block has sixteen variation sequences defined for standardized variants. They use U+FE00 VARIATION SELECTOR-1 (VS01) to denote variant symbols (depending on the font):
History
The following Unicode-related documents record the purpose and process of defining specific characters in the Mathematical Operators block:
See also
Mathematical operators and symbols in Unicode
Supplemental Mathematical Operators |
https://en.wikipedia.org/wiki/Amplitude%20gate | An amplitude gate (also, slicer or slice amplifier) is a circuit whose output is only the part of the input signal that lies between two amplitude boundary level values. |
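A minimal idealization treats the gate as limiting the waveform at the two boundary levels, so only the portion of the signal lying between them passes through unchanged; the Python sketch below is a simplification of real slicer circuits, not a circuit model.

```python
def amplitude_gate(samples, lower, upper):
    # Idealized slicer: each sample is limited to the [lower, upper]
    # band, so the in-band portion of the waveform is passed unchanged
    # and everything outside is held at the boundary level.
    return [min(max(s, lower), upper) for s in samples]

# samples below the lower boundary and above the upper boundary are cut
assert amplitude_gate([-2, 0.3, 5], 0.0, 1.0) == [0.0, 0.3, 1.0]
```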
https://en.wikipedia.org/wiki/J%20chain | The Joining (J) chain is a protein component that links monomers of antibodies IgM and IgA to form polymeric antibodies capable of secretion. The J chain is well conserved in the animal kingdom, but its specific functions are yet to be fully understood. It is a 137 residue polypeptide, encoded by the IGJ gene.
Structure
The J chain is a glycoprotein of molecular weight 15 kDa. Its secondary structure remains undetermined but is believed to adopt either a single β-barrel or a two-domain folded structure with standard immunoglobulin domains. The J chain's primary structure is unusually acidic, having a high content of negatively charged amino acids. It has 8 cysteine residues, 6 of which are involved in intramolecular disulfide bonds while the remaining two function to bind the Fc tailpiece regions of IgA or IgM antibodies, the α chain and μ chain respectively. An N-linked carbohydrate resulting from N-glycosylation is also essential in the protein's incorporation to antibody polymers. There is no known protein family with significant homology to the J chain.
Function
Antibody polymerization
The J chain regulates the multimerization of IgM and IgA in mammals. When expressed in cells, it favors the formation of a pentameric IgM and an IgA dimer. IgM pentamers are most commonly found with a single J chain, but some studies have seen as many as 4 J chains associated to a single IgM pentamer.
The J chain is incorporated late in the formation of IgM polymers and thermodynamically favors the formation of pentamers as opposed to hexamers. In J chain-knockout (KO) mice, the hexameric IgM polymer dominates. These J chain negative IgM hexamers are 15-20 times more effective at activating complement than J chain positive IgM pentamers. However, J chain-KO mice have been shown to have low concentrations of hexameric IgM and a deficiency in complement activation, suggesting additional in vivo regulatory mechanisms. Another consequence of pentameric IgM reduced complement activ |
https://en.wikipedia.org/wiki/TCF7L2 | Transcription factor 7-like 2 (T-cell specific, HMG-box), also known as TCF7L2 or TCF4, is a protein acting as a transcription factor that, in humans, is encoded by the TCF7L2 gene. The TCF7L2 gene is located on chromosome 10q25.2–q25.3 and contains 19 exons. As a member of the TCF family, TCF7L2 can form a bipartite transcription factor and influence several biological pathways, including the Wnt signalling pathway.
Single-nucleotide polymorphisms (SNPs) in this gene are especially known to be linked to higher risk to develop type 2 diabetes, gestational diabetes, multiple neurodevelopmental disorders including schizophrenia and autism spectrum disorder, as well as other diseases. The SNP rs7903146, within the TCF7L2 gene, is, to date, the most significant genetic marker associated with type 2 diabetes risk.
Function
TCF7L2 is a transcription factor influencing the transcription of several genes, thereby exerting a large variety of functions within the cell. It is a member of the TCF family that can form a bipartite transcription factor (β-catenin/TCF) alongside β-catenin. Bipartite transcription factors can have large effects on the Wnt signalling pathway. Stimulation of the Wnt signaling pathway leads to the association of β-catenin with BCL9, translocation to the nucleus, and association with TCF7L2, which in turn results in the activation of Wnt target genes. The activation of the Wnt target genes specifically represses proglucagon synthesis in enteroendocrine cells. The repression of TCF7L2 using HMG-box repressor (HBP1) inhibits Wnt signalling. Therefore, TCF7L2 is an effector in the Wnt signalling pathway. TCF7L2 plays a role in glucose metabolism in many tissues, such as the gut, brain, liver, and skeletal muscle. However, TCF7L2 does not directly regulate glucose metabolism in β-cells, but regulates glucose metabolism in pancreatic and liver tissues. That said, TCF7L2 directly regulates the expression of multiple transcription factors, axon guidance cues |
https://en.wikipedia.org/wiki/FLUKA | FLUKA (FLUktuierende KAskade) is a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter.
FLUKA has many applications in particle physics, high-energy experimental physics and engineering, shielding, detector and telescope design, cosmic-ray studies, dosimetry, medical physics, and radiobiology. A recent line of development concerns hadron therapy.
It is the standard tool used in radiation protection studies in the CERN particle accelerator laboratory.
FLUKA software code is used by Epcard, which is a software program for simulating radiation exposure on airline flights.
Comparison with other codes
MCNPX is slower than FLUKA.
Geant4 is slower than FLUKA. |
https://en.wikipedia.org/wiki/TCF4 | Transcription factor 4 (TCF-4) also known as immunoglobulin transcription factor 2 (ITF-2) is a protein that in humans is encoded by the TCF4 gene located on chromosome 18q21.2.
Function
TCF4 proteins act as transcription factors which bind to the immunoglobulin enhancer mu-E5/kappa-E2 motif. TCF4 activates transcription by binding to the E-box (5'-CANNTG-3'), usually found on SSTR2-INR, the somatostatin receptor 2 initiator element. TCF4 is primarily involved in neurological development of the fetus during pregnancy, initiating neural differentiation by binding to DNA. It is found in the central nervous system, somites, and gonadal ridge during early development. Later in development it is found in the thyroid, thymus, and kidneys, while in adulthood it is found in lymphocytes, muscles, mature neurons, and the gastrointestinal system.
Clinical significance
Mutations in TCF4 cause Pitt-Hopkins Syndrome (PTHS). These mutations prevent TCF4 proteins from binding DNA properly and controlling the differentiation of the nervous system. It has been suggested that TCF4 loss-of-function leads to decreased Wnt signaling and, consequently, reduced neural progenitor proliferation. In most cases that have been studied, the mutations were de novo, meaning they were new mutations not found in other family members of the patient. Common symptoms of Pitt-Hopkins Syndrome include a wide mouth, gastrointestinal problems, developmental delay of fine motor skills, speech and breathing problems, epilepsy, and other brain defects. |
https://en.wikipedia.org/wiki/Quantum%20instrument | In physics, a quantum instrument is a mathematical abstraction of a quantum measurement, capturing both the classical and quantum outputs. It combines the concepts of measurement and quantum operation. It can be equivalently understood as a quantum channel that takes as input a quantum system and has as its output two systems: a classical system containing the outcome of the measurement and a quantum system containing the post-measurement state.
Definition
Let X be a countable set describing the outcomes of a measurement, and let {E_x}_{x∈X} denote a collection of trace-non-increasing completely positive maps, such that the sum of all E_x is trace-preserving, i.e. Σ_{x∈X} tr(E_x(ρ)) = tr(ρ) for all positive operators ρ.
Now for describing a quantum measurement by an instrument I, the maps E_x are used to model the mapping from an input state ρ to the output state of a measurement conditioned on a classical measurement outcome x. Therefore, the probability of measuring a specific outcome x on a state ρ is given by
p(x) = tr(E_x(ρ)).
The state after a measurement with the specific outcome x is given by
ρ_x = E_x(ρ) / tr(E_x(ρ)).
If the measurement outcomes are recorded in a classical register, whose states are modeled by a set of orthonormal projections |x⟩⟨x|, then the action of an instrument I is given by a quantum channel with
I(ρ) = Σ_{x∈X} E_x(ρ) ⊗ |x⟩⟨x|.
Here H₁ and H₂ are the Hilbert spaces corresponding to the input and the output systems of the instrument.
A quantum instrument is an example of a quantum operation in which an "outcome" indicating which operator acted on the state is recorded in a classical register. An expanded development of quantum instruments is given in quantum channel. |
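As a concrete illustration, the computational-basis measurement of a qubit can be written as an instrument with two maps E_x(ρ) = P_x ρ P_x; the NumPy sketch below is a hypothetical example, not from the article.

```python
import numpy as np

# Projectors for the computational-basis measurement of a qubit:
# E_x(rho) = P_x rho P_x with P_0 = |0><0|, P_1 = |1><1|.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def instrument(rho):
    """Return {outcome x: (p(x), post-measurement state rho_x)}."""
    result = {}
    for outcome, Px in enumerate(P):
        sub = Px @ rho @ Px              # E_x(rho), trace-non-increasing
        p = np.trace(sub).real           # p(x) = tr(E_x(rho))
        if p > 0:
            result[outcome] = (p, sub / p)  # normalized conditional state
    return result

# the |+> state yields each outcome with probability 1/2
plus = np.full((2, 2), 0.5)
out = instrument(plus)
# sum of E_x is trace-preserving, so the probabilities sum to 1
assert abs(sum(p for p, _ in out.values()) - 1.0) < 1e-12
```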
https://en.wikipedia.org/wiki/Carbon%E2%80%93oxygen%20bond | A carbon–oxygen bond is a polar covalent bond between atoms of carbon and oxygen. Carbon–oxygen bonds are found in many inorganic compounds such as carbon oxides and oxohalides, carbonates and metal carbonyls, and in organic compounds such as alcohols, ethers, carbonyl compounds and oxalates. Oxygen has 6 valence electrons of its own and tends to fill its outer shell with 8 electrons by sharing electrons with other atoms to form covalent bonds, accepting electrons to form an anion, or a combination of the two. In neutral compounds, an oxygen atom can form up to two single bonds or one double bond with carbon, while a carbon atom can form up to four single bonds or two double bonds with oxygen.
Bonding motifs
Bonding at oxygen
In ethers, oxygen forms two covalent single bonds with two carbon atoms, C–O–C, whereas in alcohols oxygen forms one single bond with carbon and one with hydrogen, C–O–H. In carbonyl compounds, oxygen forms a covalent double bond with carbon, C=O, known as a carbonyl group. In ethers, alcohols and carbonyl compounds, the four nonbonding electrons in oxygen's outer shell form two lone pairs. In alkoxides, oxygen forms a single bond with carbon and accepts an electron from a metal to form an alkoxide anion, R–O−, with three lone pairs. In oxonium ions, one of oxygen's two lone pairs is used to form a third covalent bond which generates a cation, >O+– or =O+– or ≡O+, with one lone pair remaining.
Bonding at carbon
A carbon atom forms one single bond to oxygen in alcohols, ethers and peroxides, two in acetals, three in ortho esters, and four in orthocarbonates. Carbon forms a double bond to oxygen in aldehydes, ketones and acyl halides. In carboxylic acids, esters and anhydrides, each carbonyl carbon atom forms one double bond and one single bond to oxygen. In carbonate esters and carbonic acid, the carbonyl carbon forms one double bond and two single bonds to oxygen. In carbon dioxide, carbon forms two double bonds to oxygen.
Electronegativiti |
https://en.wikipedia.org/wiki/Hepatocyte%20nuclear%20factors | Hepatocyte nuclear factors (HNFs) are a group of phylogenetically unrelated transcription factors that regulate the transcription of a diverse group of genes into proteins. These proteins include blood clotting factors and in addition, enzymes and transporters involved with glucose, cholesterol, and fatty acid transport and metabolism.
Function
As the name suggests, hepatocyte nuclear factors are expressed predominantly in the liver. However, HNFs are also expressed and play important roles in a number of other tissues, so the name hepatocyte nuclear factor is somewhat misleading. Nevertheless, the liver is the only tissue in which a significant number of different HNFs are expressed at the same time. In addition, there are a number of genes which contain multiple promoter and enhancer regions, each regulated by a different HNF. Furthermore, efficient expression of these genes requires synergistic activation by multiple HNFs. Hence hepatocyte nuclear factors function to ensure liver-specific expression of certain genes.
As is the case with many transcription factors, HNFs regulate the expression of a wide variety of target genes and therefore functions. These functions (and especially functions involving the liver) include development and metabolic homeostasis of the organism. For example, HNFs influence expression of the insulin gene as well as genes involved in glucose transport and metabolism. In embryo development, HNF4α is thought to have an important role in the development of the liver, kidney, and intestines.
Disease implication
Variants of HNF genes can cause several relatively rare forms of MODY, an inherited, early-onset form of diabetes. Mutations in the HNF4α, HNF1α, or HNF1β genes are linked to MODY 1, MODY 3, and MODY 5, respectively. Mutations in HNF genes are also associated with a number of other diseases, including hepatic adenomas and renal cysts.
Members
The following is a list of human hepatocyte nuclear factors (see als |
https://en.wikipedia.org/wiki/Open%20NAND%20Flash%20Interface%20Working%20Group | The Open NAND Flash Interface Working Group (ONFI or ONFi with a lower case "i") is a consortium of technology companies working to develop open standards for NAND flash memory and devices that communicate with them. The formation of ONFI was announced at the Intel Developer Forum in March 2006.
History
The group's goals did not include the development of a new consumer flash memory card format. Rather, ONFI seeks to standardize the low-level interface to raw NAND flash chips, which are the most widely used form of non-volatile memory integrated circuits (chips); in 2006, nearly one trillion MiB of flash memory was incorporated into consumer electronics, and production was expected to double by 2007. At the time, NAND flash memory chips from most vendors used similar packaging, had similar pinouts, and accepted similar sets of low-level commands. As a result, when more capable and inexpensive models of NAND flash become available, product designers can incorporate them without major design changes. However, "similar" operation is not optimal: subtle differences in timing and command set mean that products must be thoroughly debugged and tested when a new model of flash chip is used in them. When a flash controller is expected to operate with various NAND flash chips, it must store a table of them in its firmware so that it knows how to deal with differences in their interfaces. This increases the complexity and time-to-market of flash-based devices, and means they are likely to be incompatible with future models of NAND flash, unless and until their firmware is updated.
Thus, one of the main motivations for standardization of NAND flash was to make it easier to switch between NAND chips from different producers, thereby permitting faster development of NAND-based products and lower prices via increased competition among manufacturers.
By 2006, NAND flash became increasingly a commodity product, like SDRAM or hard disk drives. It is incorporated into many personal c |
https://en.wikipedia.org/wiki/Samhain%20%28software%29 | Samhain is an integrity checker and host intrusion detection system that can be used on single hosts as well as large, UNIX-based networks. It supports central monitoring as well as powerful (and new) stealth features to run undetected in memory, using steganography.
Main features
Complete integrity check
uses cryptographic checksums of files to detect modifications,
can find rogue SUID executables anywhere on a disk
Centralized monitoring
native support for logging to a central server via encrypted and authenticated connections
Tamper resistance
database and configuration files can be signed
log file entries and e-mail reports are signed
support for stealth operation
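The checksum-based integrity check works by recording a baseline of cryptographic digests and later comparing the filesystem against it; the following is a minimal Python sketch of the idea, not Samhain's actual implementation.

```python
import hashlib
import os

def file_digest(path):
    # stream the file through SHA-256 in chunks
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    # record a trusted digest for every monitored file
    return {p: file_digest(p) for p in paths}

def check(baseline):
    # report files that were removed or whose contents changed
    return [p for p, digest in baseline.items()
            if not os.path.exists(p) or file_digest(p) != digest]
```

In a real integrity checker the baseline database itself must be protected (Samhain signs it), since an attacker who can rewrite the baseline defeats the check.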
See also
Host-based intrusion detection system comparison |
https://en.wikipedia.org/wiki/Chitika | Chitika, Inc. (pronounced CHIH-tih-ka) was a search-targeted advertising company located in Westborough, Massachusetts, United States. The name Chitika means "in a snap" in the Telugu language.
On April 17, 2019, Chitika announced that they were shutting down their business.
History
In 2003, co-founders Venkat Kolluri and Alden DoRosario started Chitika after leaving their jobs at the search engine company Lycos. After launching its online advertising service in 2004, Chitika added a Mobile Advertising Division as well as a Real-Time Bidding Division.
In 2015, Chitika's founders announced that Cidewalk, their local mobile ad platform, would spin off into a separate business unit with $4 million in seed funding and new office space in Southborough, MA.
On April 17, 2019, Chitika wrote to publishers advising them of their immediate cessation of business. The email advised publishers to remove all Chitika code from their website. The email further advised publishers that they would not be paid for impressions or clicks occurring since March 1, 2019.
Partnerships
In 2009, Chitika began a partnership with the b5media Network.
In 2010, Yahoo! closed their AdSense competitor Yahoo Publisher Network Online (YPNO) and recommended publishers migrate to Chitika as a replacement.
In 2013, Chitika announced a multi-year extension of its partnership with Yahoo!. The agreement includes off–network search syndication, monetization of Yahoo! owned and operated properties, and mobile ad serving and monetization.
Awards and recognition
2007
Inc. top free services for generating revenue on your website: Best For Promoting Ancillary Products
2008
AlwaysOn: Top 100 fastest-growing companies in the Northeast
Red Herring: Leading private technology companies in North America
MITX 2008 Technology Awards: Finalist, Marketing/Customer Relationship Technologies Category
2009
Red Herring: Top 100 Global 2008 Winner
2010
DPAC Award Finalist for Best Mobile Advertising N |
https://en.wikipedia.org/wiki/Pappalysin-1 | Pappalysin-1, also known as pregnancy-associated plasma protein A and insulin-like growth factor binding protein-4 protease, is a protein encoded by the PAPPA gene in humans. PAPPA is a secreted protease whose main substrates are insulin-like growth factor binding proteins. Pappalysin-1 is also used in screening tests for Down syndrome.
Function
PAPPA encodes a secreted metalloproteinase which cleaves insulin-like growth factor binding proteins (IGFBPs). PAPPA's proteolytic function is activated upon collagen binding. It is thought to be involved in local proliferative processes such as wound healing and bone remodeling. A low plasma level of this protein has been suggested as a biochemical marker for pregnancies with aneuploid fetuses (fetuses with an abnormal number of chromosomes). For example, low PAPPA may be commonly seen in prenatal screening for Down syndrome. Low levels may alternatively predict issues with the placenta, resulting in adverse complications such as intrauterine growth restriction, preeclampsia, placental abruption, premature birth, or fetal death.
This enzyme catalyses the following chemical reaction:
Cleavage of the Met135-Lys bond in insulin-like growth factor binding protein (IGFBP)-4, and the Ser143-Lys bond in IGFBP-5
This enzyme belongs to the peptidase family M43.
Interactions
Pappalysin-1 has been shown to interact with major basic protein.
Studies conducted at the Royal London Hospital in the United Kingdom have shown that a marker of Down syndrome may be expressed during the first and second trimesters of a pregnancy term. Concentrations of pregnancy-associated plasma protein A (PAPPA), together with other markers, can help screen for Down syndrome in the beginning stages of a woman's gestational period. |
https://en.wikipedia.org/wiki/Carboxypeptidase%20B | Carboxypeptidase B (also known as protaminase, pancreatic carboxypeptidase B, tissue carboxypeptidase B, and peptidyl-L-lysine [L-arginine]hydrolase) is a carboxypeptidase that preferentially cleaves off the basic amino acids arginine and lysine from the C-terminus of a peptide. This enzyme is secreted by the pancreas, and it travels to the small intestine, where it aids in protein digestion. Plasma carboxypeptidase B (carboxypeptidase B2) is responsible for converting the C5a protein into C5a des-Arg, with one less amino acid. |
https://en.wikipedia.org/wiki/Integrated%20Biosphere%20Simulator | IBIS-2 is version 2 of the land-surface model Integrated Biosphere Simulator (IBIS), which includes several major improvements and additions to the prototype model developed by Foley et al. [1996]. IBIS was designed to explicitly link land-surface and hydrological processes, terrestrial biogeochemical cycles, and vegetation dynamics within a single physically consistent framework.
IBIS Functionality
The model considers transient changes in vegetation composition and structure in response to environmental change and is, therefore, classified as a Dynamic Global Vegetation Model (DGVM). This new version of IBIS has improved representations of land surface physics, plant physiology, canopy phenology, plant functional type (PFT) differences, and carbon allocation. Furthermore, IBIS-2 includes a new belowground biogeochemistry submodel, which is coupled to detritus production (litterfall and fine root turnover). All processes are organized in a hierarchical framework and operate at different time steps, ranging from 60 min to 1 year. Such an approach allows for explicit coupling among ecological, biophysical, and physiological processes occurring on different timescales.
IBIS Structure
The land surface module is based on the land surface transfer model (LSX) package of Thompson and Pollard, and simulates the energy, water, carbon, and momentum balance of the soil-vegetation-atmosphere system. The model represents two vegetation canopies (e.g., trees versus shrubs and grasses), eight soil layers, and three layers of snow (when required). The solar radiative transfer scheme of IBIS-2 has been simplified in comparison with LSX and IBIS-1; sunlit and shaded fractions of the canopies are no longer treated separately. The model now follows the approach of Sellers et al. [1986] and Bonan [1995]. Infrared radiation is simulated as if each vegetation layer is a semitransparent plane; canopy emissivity depends on foliage density. Another difference between IBIS-2 and IBIS-1 a |
https://en.wikipedia.org/wiki/Littelmann%20path%20model | In mathematics, the Littelmann path model is a combinatorial device due to Peter Littelmann for computing multiplicities without overcounting in the representation theory of symmetrisable Kac–Moody algebras. Its most important application is to complex semisimple Lie algebras or equivalently compact semisimple Lie groups, the case described in this article. Multiplicities in irreducible representations, tensor products and branching rules can be calculated using a coloured directed graph, with labels given by the simple roots of the Lie algebra.
Developed as a bridge between the theory of crystal bases arising from the work of Kashiwara and Lusztig on quantum groups and the standard monomial theory of C. S. Seshadri and Lakshmibai, Littelmann's path model associates to each irreducible representation a rational vector space with basis given by paths from the origin to a weight as well as a pair of root operators acting on paths for each simple root. This gives a direct way of recovering the algebraic and combinatorial structures previously discovered by Kashiwara and Lusztig using quantum groups.
Background and motivation
Some of the basic questions in the representation theory of complex semisimple Lie algebras or compact semisimple Lie groups going back to Hermann Weyl include:
For a given dominant weight λ, find the weight multiplicities in the irreducible representation L(λ) with highest weight λ.
For two highest weights λ, μ, find the decomposition of their tensor product L(λ) ⊗ L(μ) into irreducible representations.
Suppose that is the Levi component of a parabolic subalgebra of a semisimple Lie algebra . For a given dominant highest weight λ, determine the branching rule for decomposing the restriction of L(λ) to .
(Note that the first problem, of weight multiplicities, is the special case of the third in which the parabolic subalgebra is a Borel subalgebra. Moreover, the Levi branching problem can be embedded in the tensor product problem as a certain |
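The tensor product problem above has a closed-form answer in the simplest case, sl₂ (the Clebsch–Gordan rule), which can be checked by multiplying characters; the sketch below is an illustrative aside and is independent of the path model itself.

```python
from collections import Counter

def sl2_tensor_decompose(m, n):
    """Decompose L(m) (x) L(n) for sl2 into irreducibles by multiplying
    characters: the weights of L(m) are m, m-2, ..., -m, and highest
    weights are peeled off greedily."""
    weights = Counter()
    for a in range(-m, m + 1, 2):
        for b in range(-n, n + 1, 2):
            weights[a + b] += 1        # product of the two characters
    components = Counter()
    while any(c > 0 for c in weights.values()):
        top = max(w for w, c in weights.items() if c > 0)
        mult = weights[top]
        components[top] += mult
        for w in range(-top, top + 1, 2):
            weights[w] -= mult          # remove mult copies of L(top)
    return components

# Clebsch-Gordan: L(2) (x) L(1) = L(3) (+) L(1)
assert sl2_tensor_decompose(2, 1) == Counter({3: 1, 1: 1})
```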
https://en.wikipedia.org/wiki/Boxicity | In graph theory, boxicity is a graph invariant, introduced by Fred S. Roberts in 1969.
The boxicity of a graph is the minimum dimension in which a given graph can be represented as an intersection graph of axis-parallel boxes. That is, there must exist a one-to-one correspondence between the vertices of the graph and a set of boxes, such that two boxes intersect if and only if there is an edge connecting the corresponding vertices.
Examples
The figure shows a graph with six vertices, and a representation of this graph as an intersection graph of rectangles (two-dimensional boxes). This graph cannot be represented as an intersection graph of boxes in any lower dimension, so its boxicity is two.
Roberts showed that the graph with 2n vertices formed by removing a perfect matching from a complete graph on 2n vertices has boxicity exactly n: each pair of disconnected vertices must be represented by boxes that are separated in a different dimension than each other pair. A box representation of this graph with dimension exactly n can be found by thickening each of the 2n facets of an n-dimensional hypercube into a box. Because of these results, this graph has been called the Roberts graph, although it is better known as the cocktail party graph and it can also be understood as the Turán graph T(2n,n).
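The hypercube-facet construction can be verified computationally; the sketch below (hypothetical helper names) thickens the 2n facets of [0,1]^n and checks that the resulting intersection graph is the complete graph minus a perfect matching.

```python
from itertools import combinations

def boxes_intersect(b1, b2):
    # axis-parallel boxes as lists of (lo, hi) intervals, one per
    # dimension; boxes intersect iff they overlap in every dimension
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def roberts_graph_boxes(n, eps=0.25):
    """Thicken each of the 2n facets of the hypercube [0,1]^n into a box."""
    boxes = []
    for axis in range(n):
        for side in (0.0, 1.0):
            box = [(0.0, 1.0)] * n
            box[axis] = (side - eps, side + eps)  # thickened facet
            boxes.append(box)
    return boxes

# the intersection graph is K_{2n} minus a perfect matching: the only
# non-adjacent pairs are opposite facets of the same axis
n = 3
boxes = roberts_graph_boxes(n)
for u, v in combinations(range(2 * n), 2):
    opposite = (u // 2 == v // 2)
    assert boxes_intersect(boxes[u], boxes[v]) == (not opposite)
```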
Relation to other graph classes
A graph has boxicity at most one if and only if it is an interval graph; the boxicity of an arbitrary graph G is the minimum number of interval graphs on the same set of vertices such that the intersection of the edge sets of the interval graphs is the edge set of G. Every outerplanar graph has boxicity at most two, and every planar graph has boxicity at most three.
If a bipartite graph has boxicity two, it can be represented as an intersection graph of axis-parallel line segments in the plane.
It has been proved that the boxicity of a bipartite graph G is within a factor of 2 of the order dimension of the height-two partially ordered set associated with G as follows: the |
https://en.wikipedia.org/wiki/Saint-Venant%27s%20compatibility%20condition | In the mathematical theory of elasticity, Saint-Venant's compatibility condition defines the relationship between the strain ε and a displacement field u by
ε_ij = 1/2 (∂u_i/∂x_j + ∂u_j/∂x_i)
where 1 ≤ i, j ≤ n. Barré de Saint-Venant derived the compatibility condition for an arbitrary symmetric second rank tensor field to be of this form; this has now been generalized to higher rank symmetric tensor fields on spaces of dimension n ≥ 2.
Rank 2 tensor fields
For a symmetric rank 2 tensor field ε in n-dimensional Euclidean space (n ≥ 2) the integrability condition takes the form of the vanishing of the Saint-Venant's tensor W(ε) defined by
W_ijkl = ∂²ε_ij/∂x_k∂x_l + ∂²ε_kl/∂x_i∂x_j − ∂²ε_ik/∂x_j∂x_l − ∂²ε_jl/∂x_i∂x_k.
The result that, on a simply connected domain, W = 0 implies that the strain is the symmetric derivative of some vector field, was first described by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. For non-simply connected domains there are finite-dimensional spaces of symmetric tensors with vanishing Saint-Venant's tensor that are not the symmetric derivative of a vector field. The situation is analogous to de Rham cohomology.
The Saint-Venant tensor W is closely related to the Riemann curvature tensor R_ijkl. Indeed the first variation of R about the Euclidean metric with a perturbation ε in the metric is precisely W. Consequently the number of independent components of W is the same as that of R, namely n²(n²−1)/12 for dimension n. Specifically for n = 2, W has only one independent component, whereas for n = 3 there are six.
In its simplest form of course the components of must be assumed twice continuously differentiable, but more recent work proves the result in a much more general case.
The relation between Saint-Venant's compatibility condition and Poincaré's lemma can be understood more clearly using a reduced form of the Kröner tensor
B_ij = ε_ikl ε_jmn ∂_k ∂_m ε_ln
where ε_ikl is the permutation symbol. For n = 3, B is a symmetric rank 2 tensor field. The vanishing of B is equivalent to the vanishing of W, and this also shows that there are six independent components for the important case of three dimensions. While this still involves two der |
https://en.wikipedia.org/wiki/Hyper-V | Microsoft Hyper-V, codenamed Viridian, and briefly known before its release as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks.
Hyper-V was first released with Windows Server 2008, and has been available without additional charge since Windows Server 2012 and Windows 8. A standalone Windows Hyper-V Server is free, but has a command-line interface only. The last version of free Hyper-V Server is Hyper-V Server 2019, which is based on Windows Server 2019.
History
A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008. The finalized version was released on June 26, 2008 and was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server.
Microsoft provides Hyper-V through two channels:
Part of Windows: Hyper-V is an optional component of Windows Server 2008 and later. It is also available in x64 SKUs of Pro and Enterprise editions of Windows 8, Windows 8.1, Windows 10 and Windows 11.
Hyper-V Server: a freeware edition of Windows Server with limited functionality and the Hyper-V component.
Hyper-V Server
Hyper-V Server 2008 was released on October 1, 2008. It consists of Windows Server 2008 Server Core and the Hyper-V role; other Windows Server 2008 roles are disabled, and there are limited Windows services. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software. A menu-driven CLI and some freely downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection. However, administration and configuration of the host OS and the
https://en.wikipedia.org/wiki/Osteoderm | Osteoderms are bony deposits forming scales, plates, or other structures based in the dermis. Osteoderms are found in many groups of extant and extinct reptiles and amphibians, including lizards, crocodilians, frogs, temnospondyls (extinct amphibians), various groups of dinosaurs (most notably ankylosaurs and stegosaurians), phytosaurs, aetosaurs, placodonts, and hupehsuchians (marine reptiles with possible ichthyosaur affinities).
Osteoderms are uncommon in mammals, although they have occurred in many xenarthrans (armadillos and the extinct glyptodonts and mylodontid ground sloths). The heavy, bony osteoderms have evolved independently in many different lineages. The armadillo osteoderm is believed to develop in subcutaneous dermal tissues. These varied structures should be thought of as anatomical analogues, not homologues, and do not necessarily indicate monophyly. The structures are, however, derived from scutes, common to all classes of amniotes, and are an example of what has been termed deep homology. In many cases, osteoderms may function as defensive armor. Osteoderms are composed of bone tissue, and are derived from a scleroblast neural crest cell population during embryonic development of the organism. The scleroblastic neural crest cell population shares some homologous characteristics associated with the dermis. Neural crest cells, through epithelial-to-mesenchymal transition, are thought to contribute to osteoderm development.
The osteoderms of modern crocodilians are heavily vascularized, and can function as both armor and as heat-exchangers, allowing these large reptiles to rapidly raise or lower their temperature. Another function is to neutralize acidosis, caused by being submerged under water for longer periods of time and leading to the accumulation of carbon dioxide in the blood. The calcium and magnesium in the dermal bone will release alkaline ions into the bloodstream, acting as a buffer against acidification of the body fluids.
See also
Ex |
https://en.wikipedia.org/wiki/Existentially%20closed%20model | In model theory, a branch of mathematical logic, the notion of an existentially closed model (or existentially complete model) of a theory generalizes the notions of algebraically closed fields (for the theory of fields), real closed fields (for the theory of ordered fields), existentially closed groups (for the theory of groups), and dense linear orders without endpoints (for the theory of linear orders).
Definition
A substructure M of a structure N is said to be existentially closed in (or existentially complete in) if for every quantifier-free formula φ(x1,…,xn,y1,…,yn) and all elements b1,…,bn of M such that φ(x1,…,xn,b1,…,bn) is realized in N, φ(x1,…,xn,b1,…,bn) is also realized in M. In other words: If there is a tuple a1,…,an in N such that φ(a1,…,an,b1,…,bn) holds in N, then such a tuple also exists in M. This notion is often denoted .
A model M of a theory T is called existentially closed in T if it is existentially closed in every superstructure N that is itself a model of T. More generally, a structure M is called existentially closed in a class K of structures (in which it is contained as a member) if M is existentially closed in every superstructure N that is itself a member of K.
The existential closure in K of a member M of K, when it exists, is, up to isomorphism, the least existentially closed superstructure of M. More precisely, it is any existentially closed superstructure M∗ of M such that for every existentially closed superstructure N of M, M∗ is isomorphic to a substructure of N via an isomorphism that is the identity on M.
Examples
Let σ = (+,×,0,1) be the signature of fields, i.e. + and × are binary function symbols and 0 and 1 are constant symbols. Let K be the class of structures of signature σ that are fields. If A is a subfield of B, then A is existentially closed in B if and only if every system of polynomials over A that has a solution in B also has a solution in A. It follows that the existentially closed members of K are |
https://en.wikipedia.org/wiki/Dragon%20Slayer%3A%20The%20Legend%20of%20Heroes | is a 1989 role-playing game developed by Nihon Falcom. It is the sixth game in the Dragon Slayer series and the first in The Legend of Heroes franchise.
It was originally released in 1989 for the NEC PC-8801. Within the next few years, it would also be ported to the NEC PC-9801, MSX 2, PC Engine CD-ROM/TurboGrafx-CD, Sharp X68000, Sega Mega Drive, and Super Famicom. A Dragon Slayer: The Legend of Heroes Barcode Battler card set was also released by Epoch Co. in 1992. The PC Engine version was released in the United States for the TurboGrafx-CD and was the only game in the series released in the US until The Legend of Heroes: A Tear of Vermillion, the PlayStation Portable remake.
In 1995, a version of the game was broadcast exclusively for Japanese markets via the Super Famicom's Satellaview subunit under the name BS Dragon Slayer Eiyu Densetsu. In 1998, a remake of The Legend of Heroes was bundled with a remake of Dragon Slayer: The Legend of Heroes II and was released for both the PlayStation and Sega Saturn.
Reception
The PC Engine version was rated 25.24 out of 30 by PC Engine Fan magazine. Famitsu scored the PC Engine CD-ROM version 29 out of 40 in 1991. They later scored the Super Famicom version 29 out of 40 in 1992, and the Sega Mega Drive version 23 out of 40 in 1994.
In its January 1993 issue, Electronic Games magazine's Electronic Gaming Awards nominated the TurboGrafx-CD version for the 1992 Multimedia Game of the Year award. They wrote it "demonstrates how far multimedia has come" since the same design team's Ys I & II and that this "mammoth quest is meticulously detailed and incorporates highly involved game play".
Notes |
https://en.wikipedia.org/wiki/Overtime%20rate | Overtime rate is a calculation of hours worked by a worker that exceed those hours defined for a standard workweek. This rate can have different meanings in different countries and jurisdictions, depending on how that jurisdiction's labor law defines overtime. In many jurisdictions, additional pay is mandated for certain classes of workers when this set number of hours is exceeded. In others, there is no concept of a standard workweek or analogous time period, and no additional pay for exceeding a set number of hours within that week.
The overtime rate is the ratio of an employee's overtime hours to regular hours in a specific time period. Even if the work is planned or scheduled, it can still be considered overtime if it exceeds what is considered the standard workweek in that jurisdiction.
A high overtime rate is a good indicator of a temporary or permanent high workload, and can be a contentious issue in labor-management relations. It could result in a higher illness rate, lower safety rate, higher labor costs, and lower productivity.
United States
In the United States a standard workweek is considered to be 40 hours. Most waged employees or so-called non-exempt workers under U.S. federal labor and tax law must be paid at a wage rate of 150% of their regular hourly rate for hours that exceed 40 in a week. The start of the pay week can be defined by the employer, and need not be a standard calendar week start (e.g., Sunday midnight). Many employees, especially shift workers in the U.S., have some amount of overtime built into their schedules so that 24/7 coverage can be obtained.
Formula |
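The U.S. rule described above (150% of the regular rate for hours beyond 40 in a week) can be sketched as a small pay calculation. This is an illustration only, not a statutory formula; the function name and default parameters are assumptions.

```python
def overtime_pay(hours_worked, hourly_rate, standard_week=40, multiplier=1.5):
    """Weekly pay: regular hours at the base rate, overtime at 150% of it.

    A hypothetical sketch of the U.S.-style rule; real payroll rules vary
    by jurisdiction and worker classification (exempt vs. non-exempt).
    """
    regular = min(hours_worked, standard_week)
    overtime = max(hours_worked - standard_week, 0)
    return regular * hourly_rate + overtime * hourly_rate * multiplier

# 45 hours at $20/h: 40 * 20 + 5 * (20 * 1.5) = 950
assert overtime_pay(45, 20) == 950.0
# No overtime below the threshold:
assert overtime_pay(38, 20) == 760
```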
https://en.wikipedia.org/wiki/Instrumentation%20%28computer%20programming%29 | In the context of computer programming, instrumentation refers to measuring a product's performance in order to diagnose errors and to write trace information. Instrumentation can be of two types: source instrumentation and binary instrumentation.
Output
In programming, instrumentation means:
Profiling: measuring dynamic program behaviors during a training run with a representative input. This is useful for properties of a program that cannot be analyzed statically with sufficient precision, such as alias analysis.
Inserting timers into functions.
Logging major events such as crashes.
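The timer-insertion item above can be sketched as a Python decorator. This is a minimal illustration; the names (`timed`, `busy`) are assumptions, not a standard API.

```python
import functools
import time

def timed(func):
    """Instrument a function with a wall-clock timer that logs each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{func.__name__} took {elapsed:.6f}s")
    return wrapper

@timed
def busy(n):
    return sum(range(n))

busy(100_000)  # prints a timing line such as "busy took 0.00...s"
```

Note that the timing overhead itself is a (small) instance of the execution-time cost discussed under Limitations below.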
Limitations
Instrumentation is limited by execution coverage. If the program never reaches a particular point of execution, then instrumentation at that point collects no data. For instance, if a word processor application is instrumented, but the user never activates the print feature, then the instrumentation can say nothing about the routines which are used exclusively by the printing feature.
Some types of instrumentation may cause a dramatic increase in execution time. This may limit the application of instrumentation to debugging contexts.
See also
Hooking – range of techniques used to alter or augment the behavior of an operating system, of applications, or of other software components by intercepting function calls or messages or events passed between software components.
Instruction set simulator – simulation of all instructions at machine code level to provide instrumentation
Runtime intelligence – technologies, managed services and practices for the collection, integration, analysis, and presentation of application usage levels, patterns, and practices.
Software performance analysis – techniques to monitor code performance, including instrumentation.
Hardware performance counter
DTrace – A comprehensive dynamic tracing framework for troubleshooting kernel and application problems on production systems in real time, implemented in Solaris, macOS, F |
https://en.wikipedia.org/wiki/Shareasale | ShareASale is an affiliate marketing network based in the River North neighborhood in Chicago, IL USA. ShareASale services two customer sets in affiliate marketing: the affiliate, and the merchant.
Affiliates use ShareASale to find products to promote, and earn commission for referrals on those products. Affiliates use their own website, blogs, social media, PPC campaigns, SEO campaigns, RSS and email, as well as a number of other means.
Merchants use ShareASale to implement, track, and manage their affiliate program.
Company history
ShareASale was founded in 2000 by Brian Littleton, and to date has over 16,550 merchant programs hosted on its network platform. ShareASale primarily targets small and mid-size merchants. ShareASale is among the largest U.S. affiliate networks in terms of the number of advertisers using an affiliate network to manage their affiliate program.
After many years with the company, Brian and Michael Littleton left in 2018.
Ownership
ShareASale was a privately held corporation based in Chicago, Illinois, USA, when it was founded in 2000.
ShareASale was acquired by Awin, part of the Axel Springer Group, on June 10, 2017, for an undisclosed amount.
See also
Affiliate Networks
Affiliate marketing |
https://en.wikipedia.org/wiki/Protein%20A/G | Protein A/G is a recombinant fusion protein that combines IgG binding domains of both Protein A and Protein G. Protein A/G contains four Fc binding domains from Protein A and two from Protein G, yielding a final mass of 50,460 daltons. The binding of Protein A/G is less pH-dependent than Protein A, but otherwise has the additive properties of Protein A and G.
Protein A/G binds to all subclasses of human IgG, making it useful for purifying polyclonal or monoclonal IgG antibodies whose subclasses have not been determined. In addition, it binds to IgA, IgE, IgM and (to a lesser extent) IgD. Protein A/G also binds to all subclasses of mouse IgG but does not bind mouse IgA, IgM or serum albumin. This allows Protein A/G to be used for purification and detection of mouse monoclonal IgG antibodies, without interference from IgA, IgM and serum albumin. Mouse monoclonal antibodies commonly have a stronger affinity to the chimeric Protein A/G than to either Protein A or Protein G. Protein A/G also has been used for purification of macaque IgG.
Other antibody binding proteins
In addition to Protein A/G, other immunoglobulin-binding bacterial proteins such as Protein A, Protein G and Protein L are all commonly used to purify, immobilize or detect immunoglobulins. Each of these immunoglobulin-binding proteins has a different antibody binding profile in terms of the portion of the antibody that is recognized and the species and type of antibodies. |
https://en.wikipedia.org/wiki/HEPPS%20%28buffer%29 | HEPPS (EPPS) is a buffering agent used in biology and biochemistry. The pKa of HEPPS is 8.00. It is one of Good's buffers.
Research on mice with Alzheimer's disease-like amyloid beta plaques has shown that HEPPS can cause the plaques to break up, reversing some of the symptoms in the mice. HEPPS was reported to dissociate amyloid beta oligomers in patients' plasma samples enabling blood diagnosis of Alzheimer's disease.
See also
CAPSO
CHES
HEPES |
https://en.wikipedia.org/wiki/Sunflow | Sunflow is an open-source global illumination rendering system written in Java. The project is currently inactive; the last announcement on the program's official page was made in 2007. |
https://en.wikipedia.org/wiki/Siemion%20Fajtlowicz | Siemion Fajtlowicz is a Polish-American mathematician, formerly a professor at the University of Houston. He is known for creating and developing the conjecture-making computer program Graffiti.
Fajtlowicz received his Ph.D. in 1967 or 1968 from the Institute of Mathematics of the Polish Academy of Sciences, under the supervision of Edward Marczewski. |
https://en.wikipedia.org/wiki/Fred%20Galvin | Frederick William Galvin is a mathematician, currently a professor at the University of Kansas. His research interests include set theory and combinatorics.
His notable combinatorial work includes the proof of the Dinitz conjecture. In set theory, he proved with András Hajnal that if ℵω1 is a strong limit cardinal, then
holds. The research on extending this result led Saharon Shelah to the invention of PCF theory. Galvin gave an elementary proof of the Baumgartner–Hajnal theorem (). The original proof by Baumgartner and Hajnal used forcing and absoluteness. Galvin and Shelah also proved the square bracket partition relations and . Galvin also proved the partition relation where η denotes the order type of the set of rational numbers.
Galvin and Karel Prikry proved that every Borel set is Ramsey. Galvin and Komjáth showed that the axiom of choice is equivalent to the statement that every graph has a chromatic number.
Galvin received his Ph.D. in 1967 from the University of Minnesota.
He invented Doublemove Chess in 1957, and Push Chess in 1967. |
https://en.wikipedia.org/wiki/Linked%20data | In computing, linked data is structured data which is interlinked with other data so it becomes more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages only for human readers, it extends them to share information in a way that can be read automatically by computers. Part of the vision of linked data is for the Internet to become a global database.
Tim Berners-Lee, director of the World Wide Web Consortium (W3C), coined the term in a 2006 design note about the Semantic Web project.
Linked data may also be open data, in which case it is usually described as Linked Open Data.
Principles
In his 2006 "Linked Data" note, Tim Berners-Lee outlined four principles of linked data, paraphrased along the following lines:
Uniform Resource Identifiers (URIs) should be used to name and identify individual things.
HTTP URIs should be used to allow these things to be looked up, interpreted, and subsequently "dereferenced".
Useful information about what a name identifies should be provided through open standards such as RDF, SPARQL, etc.
When publishing data on the Web, other things should be referred to using their HTTP URI-based names.
Tim Berners-Lee later restated these principles at a 2009 TED conference, again paraphrased along the following lines:
All conceptual things should have a name starting with HTTP.
Looking up an HTTP name should return useful data about the thing in question in a standard format.
Anything else that that same thing has a relationship with through its data should also be given a name beginning with HTTP.
Components
Thus, we can identify the following components as essential to a global Linked Data system as envisioned, and to any actual Linked Data subset within it:
URIs
HTTP
Structured data using controlled vocabulary terms and dataset definitions expressed in Resource Description Framework serialization formats such as RDFa, RDF/XML, N3, Turtle, o |
https://en.wikipedia.org/wiki/Lillian%20Rosanoff%20Lieber | Lillian Rosanoff Lieber (July 26, 1886 in Nicolaiev, Russian Empire – July 11, 1986 in Queens, New York) was a Russian-American mathematician and popular author. She often teamed up with her illustrator husband, Hugh Gray Lieber, to produce works.
Life and career
Early life and education
Lieber was one of four children of Abraham H. and Clara (Bercinskaya) Rosanoff. Her brothers were Denver publisher Joseph Rosenberg, psychiatrist Aaron Rosanoff, and chemist Martin André Rosanoff. Aaron and Martin changed their names to sound more Russian, less Jewish. Lieber moved to the US with her family in 1891. She received her A.B. from Barnard College in 1908, her M.A. from Columbia University in 1911, and her Ph.D. (in chemistry) from Clark University in 1914, under Martin's direction; at Clark, Solomon Lefschetz was a classmate. She married Hugh Gray Lieber on October 27, 1926.
Career
After teaching at Hunter College from 1908 to 1910, and in the New York City high school system (1910–1912, 1914–1915), she became a Research Fellow at Bryn Mawr College from 1915 to 1917; she then went on to teach at Wells College from 1917 to 1918 as Instructor of Physics (also acting as head of the physics department), and at the Connecticut College for Women (1918 to 1920). She joined the mathematics department at Long Island University (LIU) in Brooklyn, New York (LIU Brooklyn) in 1934, became department chair in 1945 (taking over from Hugh when he became Professor, and Chair, of Art at LIU ), and was made a full professor in 1947, until her retirement in 1954; she was appointed director of LIU's Galois Institute of Mathematics (later the Galois Institute of Mathematics and Art) (named for Évariste Galois) in 1934. Over her career she published some 17 books, which were written in a unique, free-verse style and illustrated with whimsical line drawings by her husband. Her highly accessible writings were praised by no less than Albert Einstein, Cassius Jackson Keyser, Eric Temple Bell, |
https://en.wikipedia.org/wiki/In-place%20matrix%20transposition | In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an N×M matrix in-place in computer memory, ideally with O(1) (bounded) additional storage, or at most with additional storage much less than NM. Typically, the matrix is assumed to be stored in row-major or column-major order (i.e., contiguous rows or columns, respectively, arranged consecutively).
Performing an in-place transpose (in-situ transpose) is most difficult when N ≠ M, i.e. for a non-square (rectangular) matrix, where it involves a complicated permutation of the data elements, with many cycles of length greater than 2. In contrast, for a square matrix (N = M), all of the cycles are of length 1 or 2, and the transpose can be achieved by a simple loop to swap the upper triangle of the matrix with the lower triangle. Further complications arise if one wishes to maximize memory locality in order to improve cache line utilization or to operate out-of-core (where the matrix does not fit into main memory), since transposes inherently involve non-consecutive memory accesses.
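The square-matrix case described above, where every cycle has length 1 or 2, reduces to a single swap loop over the upper triangle. A minimal sketch (using a Python list-of-lists rather than a flat row-major buffer; the function name is illustrative):

```python
def transpose_square_inplace(a):
    """In-place transpose of a square matrix stored as a list of lists.

    Each off-diagonal pair (i, j)/(j, i) forms a 2-cycle, so one pass over
    the strict upper triangle swaps every element into place; the diagonal
    (1-cycles) is untouched.
    """
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j], a[j][i] = a[j][i], a[i][j]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
transpose_square_inplace(m)
assert m == [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

Applying the function twice returns the original matrix, since the transpose is an involution. The non-square case offers no such shortcut: there the permutation has long cycles and requires the cycle-following algorithms discussed below.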
The problem of non-square in-place transposition has been studied since at least the late 1950s, and several algorithms are known, including several which attempt to optimize locality for cache, out-of-core, or similar memory-related contexts.
Background
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid data movement.
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations |
https://en.wikipedia.org/wiki/Phison | Phison Electronics Corporation () is a Taiwanese public electronics company that primarily designs, manufactures and sells controllers for NAND flash memory chips. These are integrated into flash-based products such as USB flash drives, memory cards, and solid-state drives (SSDs).
Phison is a member of the Open NAND Flash Interface Working Group (ONFI), which aims to standardize the hardware interface to NAND flash chips.
History
Phison was founded in 2000 by Datuk Pua Khein Seng and four others.
Phison claims to have produced the earliest single-chip USB flash removable disk, dubbed a "pen drive."
In early October 2014, security researchers Adam Caudill and Brandon Wilson publicly released source code to a firmware attack against Phison USB controller ICs. This code implements the BadUSB exploit described in July 2014 at the Black Hat Briefings conference.
In August 2019, Phison announced that they would be releasing PS-50 series chips, e.g., the PS5018-E18, designed to support PCIe 4.0 NVMe (non-volatile memory express) solid-state drives (SSDs). With this technology, SSDs built on these chips reach read and write speeds of up to 7,000 MB per second.
In late 2020, Phison started shipping their E18 for high-end NVMe SSDs.
In January 2021, Phison announced that they were planning to introduce a pair of USB flash drive controllers for high-end portable SSDs, designed to compete against current solutions that combine a USB to NVMe bridge chip with a standard NVMe SSD controller. Phison also released a new entry-level DRAM-less NVMe SSD controller in 2021. For portable SSDs, Phison introduced the U17 and U18 controllers. For NVMe SSDs, Phison introduced the E21T controller in 2021, their latest DRAM-less NVMe controller. This is a follow-up to the E19T controller, which had seen very little use in retail consumer SSDs but has actually been outselling their high-end E16 PCIe 4.0 controller due to strong demand from OEMs.
In January 2022, Phison intro |
https://en.wikipedia.org/wiki/Expanded%20Program%20on%20Immunization%20%28Philippines%29 | The Expanded Program on Immunization (EPI) in the Philippines began in July 1979 and, in 1986, responded to the Universal Child Immunization goal. The four major strategies include:
sustaining high routine Full Immunized Child (FIC) coverage of at least 90% in all provinces and cities;
sustaining the polio-free country for global certification;
eliminating measles by 2008; and
eliminating neonatal tetanus by 2008.
Routine Schedule of Immunization
Every Wednesday is designated as immunization day and is adopted in all parts of the country. Immunization is done monthly in barangay health stations and quarterly in remote areas of the country.
Routine Immunization Schedule for Infants
The standard routine immunization schedule for infants in the Philippines is adopted to provide maximum immunity against the seven vaccine preventable diseases in the country before the child's first birthday. The fully immunized child must have completed BCG 1, DPT 1, DPT 2, DPT 3, OPV 1, OPV 2, OPV 3, HB 1, HB 2, HB 3 and measles vaccines before the child is 12 months of age.
General Principles in Infants/Children Immunization
Because measles kills, every infant needs to be vaccinated against measles at the age of 9 months or as soon as possible after 9 months as part of the routine infant vaccination schedule. It is safe to vaccinate a sick child who is suffering from a minor illness (cough, cold, diarrhea, fever or malnutrition) or who has already been vaccinated against measles.
If the vaccination schedule is interrupted, it is not necessary to restart. Instead, the schedule should be resumed using minimal intervals between doses to catch up as quickly as possible.
Vaccine combinations (few exceptions), antibiotics, low-dose steroids (less than 20 mg per day), minor infections with low fever (below 38.5º Celsius), diarrhea, malnutrition, kidney or liver disease, heart or lung disease, non-progressive encephalopathy, well controlled epilepsy or advanced age, are not co |
https://en.wikipedia.org/wiki/Astrolinguistics | Astrolinguistics is a field of linguistics connected with the search for extraterrestrial intelligence (SETI).
Early Soviet experiments
Arguably the first attempt to construct a language for interplanetary communication was the AO language created by the anarchist philosopher Wolf Gordin (brother of Abba Gordin) in his books Grammar of the Language of the Mankind AO (1920) and Grammar of the Language AO (1924); it was presented as a language for interplanetary communication at the First International Exhibition of Interplanetary Machines and Mechanisms (dedicated to the 10th anniversary of the Russian Revolution and the 70th anniversary of the birth of Tsiolkovsky) in Moscow, 1927. The declared goal of Gordin was to construct a language which would be non-"fetishizing", non-"sociomorphic", non-gender based and non-classist. The design of the language was inspired by Russian Futurist poetry, the Gordin brothers' pan-anarchist philosophy, and Tsiolkovsky's early remarks on possible cosmic messaging (which were in accord with Hans Freudenthal's later insights). However, Sergei N. Kuznetsov notes that "Gordin nowhere defines his language as intended for space use," and that "in none of his works does he deal with problems of space communication, only mentioning 'Interplanetary Communication' in passing among other technical areas."
Freudenthal's LINCOS
An integral part of the SETI project in general is research in the field of the construction of messages for extraterrestrial intelligence, possibly to be transmitted into space from Earth. As far as such messages are based on linguistic principles, the research can be considered to belong to astrolinguistics. The first proposal in this field was put forward by the mathematician Hans Freudenthal at the University of Utrecht in the Netherlands, in 1960 – around the time of the first SETI effort at Greenbank in the US. Freudenthal conceived a complete Lingua Cosmica. His book LINCOS: Design of a Language for Cosmic Inter |
https://en.wikipedia.org/wiki/Multi-link%20trunking | Multi-link trunking (MLT) is a link aggregation technology developed at Nortel in 1999. It allows grouping several physical Ethernet links into one logical Ethernet link to provide fault-tolerance and high-speed links between routers, switches, and servers.
MLT allows the use of several links (from 2 up to 8) and combines them to create a single fault-tolerant link with increased bandwidth. This produces server-to-switch or switch-to-switch connections that are up to 8 times faster. Prior to MLT and other aggregation techniques, parallel links were underutilized due to Spanning Tree Protocol’s loop protection.
Fault-tolerant design is an important aspect of Multi-Link Trunking technology. Should one or more links fail, MLT automatically redistributes traffic across the remaining links. This redistribution is accomplished in less than half a second (typically under 100 milliseconds), so no outage is noticed by end users. Such high-speed recovery is required by many critical networks, where outages can cause loss of life or very large monetary losses. Combining MLT with Distributed Split Multi-Link Trunking (DSMLT), Split Multi-Link Trunking (SMLT), and R-SMLT technologies creates networks that support the most critical applications.
A general limitation of standard MLT is that all the physical ports in the link aggregation group must reside on the same switch. SMLT, DSMLT and R-SMLT remove this limitation by allowing the physical ports to be split between two switches.
Split multi-link trunking
Split multi-link trunking (SMLT) is a Layer-2 link aggregation technology in computer networking originally developed by Nortel as an enhancement to standard multi-link trunking (MLT) as defined in IEEE 802.3ad.
Link aggregation or MLT allows multiple physical network links between two network switches and another device (which could be another switch or a network device such as a ser |
https://en.wikipedia.org/wiki/Rata%20Die | Rata Die (R.D.) is a system for assigning numbers to calendar days (optionally with time of day), independent of any calendar, for the purposes of calendrical calculations. It was named (after the Latin ablative feminine singular for "from a fixed date") by Howard Jacobson.
Rata Die is somewhat similar to Julian Dates (JD), in that the values are plain real numbers that increase by 1 each day. The systems differ principally in that JD takes on a particular value at a particular absolute time, and is the same in all contexts, whereas R.D. values may be relative to time zone, depending on the implementation. This makes R.D. more suitable for work on calendar dates, whereas JD is more suitable for work on time per se. The systems also differ trivially by having different epochs: R.D. is 1 at midnight (00:00) local time on January 1, AD 1 in the proleptic Gregorian calendar, JD is 0 at noon (12:00) Universal Time on January 1, 4713 BC in the proleptic Julian calendar.
Forms
There are three distinct forms of R.D., here defined using Julian Dates.
Dershowitz and Reingold do not explicitly distinguish between these three forms, using the abbreviation "R.D." for all of them.
Dershowitz and Reingold do not say that R.D. is based on Greenwich time, but on page 10 they state that an R.D. with a decimal fraction is called a moment, with the function moment-from-jd taking a floating-point Julian Date as its argument and returning that argument minus 1,721,424.5. Consequently, there is no requirement or opportunity to supply a time zone offset.
Fractional days
The first form of R.D. is a continuously-increasing fractional number, taking integer values at midnight local time. It is defined as:
RD = JD − 1,721,424.5
Midnight local time on December 31, year 0 (1 BC) in the proleptic Gregorian calendar corresponds to Julian Date 1,721,424.5 and hence RD 0.
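The JD-to-R.D. relation above can be checked with a couple of lines of Python. Incidentally, Python's standard `date.toordinal()` returns the proleptic Gregorian ordinal with January 1, AD 1 as ordinal 1, which coincides with the R.D. day number; the helper name below is an assumption.

```python
from datetime import date

JD_OFFSET = 1_721_424.5  # RD = JD - 1,721,424.5

def rd_from_jd(jd):
    """Fractional Rata Die from a Julian Date (time-zone subtleties ignored)."""
    return jd - JD_OFFSET

# Python's proleptic Gregorian ordinal matches the R.D. day number:
# January 1, AD 1 has ordinal 1.
assert date(1, 1, 1).toordinal() == 1

# Midnight UT at the start of 2000-01-01 is JD 2,451,544.5, giving
# fractional R.D. 730120.0, the day number of 2000-01-01.
assert rd_from_jd(2_451_544.5) == date(2000, 1, 1).toordinal()
```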
Day Number
In the second form, R.D. is an integer that labels an entire day, from midnight to midnight local time. This is the result |
https://en.wikipedia.org/wiki/Nullator | In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time.
Inserting a nullator in a circuit schematic imposes a mathematical constraint on how that circuit must behave, forcing the circuit itself to adopt whatever arrangements needed to meet the condition. For example, the inputs of an ideal operational amplifier (with negative feedback) behave like a nullator, as they draw no current and have no voltage across them, and these conditions are used to analyze the circuitry surrounding the operational amplifier.
A nullator is normally paired with a norator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage). |
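The ideal op-amp analysis described above can be sketched numerically. This is a hedged illustration, not a general circuit solver: the function name and resistor values are invented, and the nullator conditions (zero input voltage, zero input current) are applied as the usual ideal-op-amp rules for an inverting amplifier:

```python
# Inverting amplifier analyzed with nullator constraints on the op-amp input.
def inverting_amp_vout(vin, r1, r2):
    v_minus = 0.0                  # nullator: zero voltage (virtual ground)
    i_r1 = (vin - v_minus) / r1    # current through the input resistor r1
    i_r2 = i_r1                    # nullator: zero input current, so KCL
                                   # forces all of i_r1 through feedback r2
    return v_minus - i_r2 * r2     # the output (norator side) supplies this

print(inverting_amp_vout(1.0, 1e3, 10e3))  # -10.0, i.e. gain of -r2/r1
```

The norator at the output is what makes this work: it produces whatever voltage and current the rest of the circuit demands, so the two nullator constraints fully determine the answer.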
https://en.wikipedia.org/wiki/Square%20knot%20insignia | Square knot insignia are embroidered cloth patches that represent awards of the Scout associations throughout the world.
The Scout Association of the United Kingdom uses a "figure-eight" knot and many Scouting organizations of the Commonwealth countries follow suit. The World Organization of the Scout Movement uses military-style ribbons. In the Boy Scouts of America, a square knot made of colored ropes is depicted; the colors are generally dictated by the award the insignia is associated with.
History
In the earliest days of the Scouting Movement military veterans were urged into service as Scoutmasters. The first Scout uniforms therefore resembled military uniforms. It was common for these veterans to wear their military decorations on their modified Boy Scout uniform — a national uniform was not to be developed until the early 1920s.
Military tradition dictated that the actual medal from a military award was only worn on ceremonial occasions — at other times, it was replaced with a thin ribbon bar with the same ribbon style as found attached to the medal. This carried over to Scouting, whose awards were medals, similar to the military, but were most often worn as ribbons.
The first country to switch over from military ribbons to a unique parallel was the United Kingdom, which introduced its knot emblems in 1922.
Boy Scouts of America
The Boy Scouts of America likewise moved away from allowing Scouters to wear military ribbons, but kept the style, introducing their own ribbons in place of medals in 1934. The BSA introduced its own square knot insignia in lieu of the military-style ribbons in 1947. The choice of the square knot as the common emblem was made by James E. West, who is said to have chosen it for its use as the knot associated with first aid, thereby reminding Scouts to continue to be of service to others.
The first eight awards with square knot insignia in the BSA were the Eagle Scout Award, Quartermaster Award, Scouter's Training Award, Scouter's |
https://en.wikipedia.org/wiki/Norator | In electronics, a norator is a theoretical linear, time-invariant one-port which can have an arbitrary current and voltage between its terminals. A norator represents a controlled voltage or current source with infinite gain.
Inserting a norator in a circuit schematic provides whatever current and voltage the outside circuit demands, in particular, the demands of Kirchhoff's circuit laws. For example, the output of an ideal opamp behaves as a norator, producing nonzero output voltage and current that meet circuit requirements despite a zero input.
A norator is often paired with a nullator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage). |
https://en.wikipedia.org/wiki/Qualys | Qualys, Inc. is an American technology firm based in Foster City, California, specializing in cloud security, compliance and related services.
Qualys has over 10,300 customers in more than 130 countries, including a majority of the Forbes Global 100. The company has strategic partnerships with major managed services providers and consulting organizations including BT, Dell SecureWorks, Fujitsu, IBM, NTT, Symantec, Verizon, and Wipro.
History
Qualys has been described as "one of the earliest software-as-a-service security vendors."
Philippe Courtot first invested in the company in 1999. He became CEO and board chair in 2001. In the announcement of the second round of financing, Courtot described Qualys as addressing a "mounting need for automatic detection of network vulnerabilities."
The company launched QualysGuard in 2000, making Qualys one of the first entrants in the vulnerability management market. This software could automatically scan corporate local area networks (LANs) for vulnerabilities and search for an available patch. The company subsequently added compliance, malware detection, and web application scanning to its platform.
Qualys went public on the Nasdaq under the stock ticker QLYS on September 28, 2012, raising net proceeds of $87.5 million.
Qualys then launched its Cloud Platform and lightweight cloud agent to continuously assess the security and compliance of organizations' global IT infrastructure and applications. The new Cloud Platform helped organizations assess and address the security and compliance of their IT assets in real-time, whether on-premises, cloud-based or mobile.
Products
Qualys provides cloud-based security and compliance solutions for businesses. As of March 2023, several categories of applications are listed on Qualys.com including asset management, IT security, compliance, cloud-native Security, and web app security.
Awards
Qualys won two Pwnie Awards in 2021, in the categories of "Best Privilege Escalation Bug" and |
https://en.wikipedia.org/wiki/Arctic%20ecology | Arctic ecology is the scientific study of the relationships between biotic and abiotic factors in the arctic, the region north of the Arctic Circle (66° 33’N). This region is characterized by two biomes: taiga (or boreal forest) and tundra. While the taiga has a more moderate climate and permits a diversity of both non-vascular and vascular plants, the tundra has a limited growing season and stressful growing conditions due to intense cold, low precipitation, and a lack of sunlight throughout the winter. Sensitive ecosystems exist throughout the Arctic region, which are being impacted dramatically by global warming.
The earliest hominid inhabitants of the Arctic were the Neanderthal sub-species. Since then, many indigenous populations have inhabited the region and continue to do so to this day. Furthermore, the Arctic is a valued area for ecological research.
In 1946, The Arctic Research Laboratory was established in Point Barrow, Alaska under the contract of the Office of Naval Research in the interest of exploring the Arctic and examining animal cycles, permafrost and the interactions between indigenous peoples and the Arctic ecology. During the Cold War, the Arctic became a place where the United States, Canada, and the Soviet Union performed significant research that has been essential to the study of climate change in recent years. A major reason why research in the Arctic is essential for the study of climate change is because the effects of climate change will be felt more quickly and more drastically in higher latitudes of the world as above average temperatures are predicted for Northwest Canada and Alaska. From an anthropological point of view, researchers study the native Inuit of Alaska as they have become extremely accustomed to adapting to ecological and climate variability.
History
Early history
Many different peoples had inhabited present-day Canada and Alaska by AD 1000. Most of these people lived by hunting, gathering and fishing; agriculture w |
https://en.wikipedia.org/wiki/List%20of%20Dominican%20Republic%20flags | This is a list of flags used in the Dominican Republic. For more information about the national flag, visit the article Flag of the Dominican Republic.
National flags
Standards
Military
Land Force
Navy
Air Force
Police
Institutional flags
Political flags
Historical flags
See also
Provinces of the Dominican Republic |
https://en.wikipedia.org/wiki/Kbach | Kbach (Khmer: ក្បាច់) or Khmer ornamentation is made of traditional decorative elements of Cambodian architecture. While 'kbach' may refer to any sort of art-form style in the Khmer language, such as a gesture in Khmer classical dance, kbach rachana specifically refers to decorative ornament motifs. Kbach are also used in decorating of Cambodian silver crafts, furniture, regalia, murals, pottery, ceramics, stone carving, in a singular artistic expression:
Etymology
According to Chuon Nath's dictionary, kbach (ក្បាច់) used to be written kbache (ក្បាចេ) and is a derivative of kach (កាច់), to hit or break, with a bilabial infix which is a form of intensive morphology, suggesting to hit repeatedly.
Kbach: ornamentation in Khmer decorative arts
Historical development: a visual tradition
From Indian influence to Khmer identity
Not only are many decorative themes unique to Cambodia, but ornamentation motifs as well are quite different from those found in India, though they can be related. The lintels chiselled with delicate reliefs, abutments excavated with foliage, pediments decorated with mythological scenes in high relief, without being specific to Cambodia, have a very special appearance which suggests that the traits of Hindu art found in the temples of Angkor owe more to religious influence than to school discipline. The Khmer artists radically modified what they had received and developed a new canon whose rules would be respected through the centuries.
From wood to stone
Art historians have considered that the sandstone sculptures replicated techniques pioneered from working in wood. Gilberte de Coral-Rémusat supposed that wooden models were the common ancestors of both Indian and Khmer decorative ornamentation. She believed that the particular sandstone employed by the Khmer was so easily manipulated that the adaptation from wood to stone decorative techniques was made with little impediment.
During the Angkor period it is conceivable that artists used copy |
https://en.wikipedia.org/wiki/Beverton%E2%80%93Holt%20model | The Beverton–Holt model is a classic discrete-time population model which gives the expected number n_(t+1) (or density) of individuals in generation t + 1 as a function of the number n_t of individuals in the previous generation: n_(t+1) = R0 n_t / (1 + n_t / M).
Here R0 is interpreted as the proliferation rate per generation and K = (R0 − 1) M is the carrying capacity of the environment. The Beverton–Holt model was introduced in the context of fisheries by Beverton & Holt (1957). Subsequent work has derived the model under other assumptions such as contest competition (Brännström & Sumpter 2005), within-year resource-limited competition (Geritz & Kisdi 2004) or even as the outcome of source-sink Malthusian patches linked by density-dependent dispersal (Bravo de la Parra et al. 2013). The Beverton–Holt model can be generalized to include scramble competition (see the Ricker model, the Hassell model and the Maynard Smith–Slatkin model). It is also possible to include a parameter reflecting the spatial clustering of individuals (see Brännström & Sumpter 2005).
Despite being nonlinear, the model can be solved explicitly, since it is in fact an inhomogeneous linear equation in 1/n.
The solution is n_t = K n_0 / (n_0 + (K − n_0) R0^(−t)).
Because of this structure, the model can be considered as the discrete-time analogue of the continuous-time logistic equation for population growth introduced by Verhulst; for comparison, the logistic equation is dn/dt = r n (1 − n/K),
and its solution is n(t) = K n(0) e^(rt) / (K + n(0) (e^(rt) − 1)). |
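A short sketch checking the recurrence against its closed-form solution, exploiting the linearity in 1/n noted above (parameter names follow the text; the values n0 = 5, R0 = 2, M = 50 are example choices):

```python
# Beverton-Holt recurrence vs. closed-form solution.
def bh_step(n, R0, M):
    """One generation: n_(t+1) = R0 * n / (1 + n / M)."""
    return R0 * n / (1.0 + n / M)

def bh_closed_form(t, n0, R0, M):
    """n_t = K n0 / (n0 + (K - n0) R0^(-t)), with K = (R0 - 1) M."""
    K = (R0 - 1.0) * M
    return K * n0 / (n0 + (K - n0) * R0 ** (-t))

R0, M, n0 = 2.0, 50.0, 5.0
n = n0
for t in range(1, 11):
    n = bh_step(n, R0, M)
    # iterated recurrence and closed form agree at every generation
    assert abs(n - bh_closed_form(t, n0, R0, M)) < 1e-9
print(round(n, 3))  # approaches K = (R0 - 1) M = 50 from below
```

The agreement at every step is exactly the "inhomogeneous linear equation in 1/n" property: substituting x = 1/n turns the recurrence into x_(t+1) = x_t/R0 + 1/(R0 M), which is solvable in closed form.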
https://en.wikipedia.org/wiki/Self-avoiding%20walk | In mathematics, a self-avoiding walk (SAW) is a sequence of moves on a lattice (a lattice path) that does not visit the same point more than once. This is a special case of the graph theoretical notion of a path. A self-avoiding polygon (SAP) is a closed self-avoiding walk on a lattice. Very little is known rigorously about the self-avoiding walk from a mathematical perspective, although physicists have provided numerous conjectures that are believed to be true and are strongly supported by numerical simulations.
In computational physics, a self-avoiding walk is a chain-like path in R^2 or R^3 with a certain number of nodes, typically a fixed step length, and the property that it does not cross itself or another walk. A system of SAWs satisfies the so-called excluded volume condition. In higher dimensions, the SAW is believed to behave much like the ordinary random walk.
SAWs and SAPs play a central role in the modeling of the topological and knot-theoretic behavior of thread- and loop-like molecules such as proteins. Indeed, SAWs may have first been introduced by the chemist Paul Flory in order to model the real-life behavior of chain-like entities such as solvents and polymers, whose physical volume prohibits multiple occupation of the same spatial point.
SAWs are fractals. For example, in d = 2 the fractal dimension is 4/3, for d = 3 it is close to 5/3, while for d ≥ 4 the fractal dimension is 2. The dimension d = 4 is called the upper critical dimension, above which excluded volume is negligible. A SAW that does not satisfy the excluded volume condition was recently studied to model explicit surface geometry resulting from expansion of a SAW.
The properties of SAWs cannot be calculated analytically, so numerical simulations are employed. The pivot algorithm is a common method for Markov chain Monte Carlo simulations for the uniform measure on n-step self-avoiding walks. The pivot algorithm works by taking a self-avoiding walk and randomly choosing a point on this walk, and then applyin |
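As a minimal, brute-force complement to the pivot algorithm (which is the practical tool for long walks), the excluded-volume condition can be illustrated by exhaustively enumerating short walks on the square lattice:

```python
# Exhaustive enumeration of n-step self-avoiding walks on Z^2,
# counting lattice paths from the origin that never revisit a site.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_saws(n):
    """Count n-step self-avoiding walks starting at the origin."""
    def extend(last, visited, remaining):
        if remaining == 0:
            return 1
        x, y = last
        total = 0
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt not in visited:       # excluded-volume condition
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total
    return extend((0, 0), {(0, 0)}, n)

print([count_saws(n) for n in range(1, 5)])  # [4, 12, 36, 100]
```

Enumeration grows exponentially in n, which is precisely why Markov chain methods such as the pivot algorithm are used for long walks instead.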
https://en.wikipedia.org/wiki/Subspecialty | A subspecialty (US English) or subspeciality (international English) is a narrow field of professional knowledge/skills within a specialty of trade, and is most commonly used to describe the increasingly more diverse medical specialties. A subspecialist is a specialist of a subspecialty.
In medicine, subspecialization is particularly common in internal medicine, cardiology, neurology and pathology, and has grown as medical practice has:
become more complex, and
it has become clear that a physician's case volume is negatively associated with their complication rate; that is, complications tend to decrease as the volume of cases per physician goes up.
See also
Notes and references
Medical specialties |
https://en.wikipedia.org/wiki/Partitioning%20Communication%20System | Partitioning Communication System is a computer and communications security architecture based on an information flow separation policy. The PCS extends the four foundational security policies of a MILS (Multiple Independent Levels of Security) software architecture to the network:
End-to-end Information Flow
End-to-end Data Isolation
End-to-end Periods Processing
End-to-end Damage Limitation
The PCS leverages software separation to enable application layer entities to enforce, manage, and control application layer security policies in such a manner that the application layer security policies are:
Non-bypassable
Evaluatable
Always-invoked
Tamper-proof
The result is a communications architecture that allows a software separation kernel and the PCS to share responsibility of security with the application.
The PCS was invented by OIS. OIS collaborated extensively on the requirements for the PCS with:
National Security Agency
Air Force Research Laboratory
University of Idaho
Lockheed Martin
Boeing
Rockwell Collins |
https://en.wikipedia.org/wiki/Resistome | The resistome has been used to describe two similar yet separate concepts:
All the antibiotic resistance genes in communities of both pathogenic and non-pathogenic bacteria.
All of the resistance genes in an organism, how they are inherited, and how their transcription levels vary to defend against pathogens like viruses and bacteria.
Discovery and Current Data
The term resistome was first used to describe the resistance capabilities of bacteria that prevent the effectiveness of antibiotics. Although antibiotics and their accompanying antibiotic resistance genes come from natural habitats, before next-generation sequencing, most studies of antibiotic resistance had been confined to the laboratory. Increased availability of whole-genome and metagenomic next-generation sequencing techniques has revealed significant reservoirs of antibiotic resistant bacteria outside of clinical settings. Repeated testing of soil metagenomes revealed that in urban, agricultural, and forest environments, spore-forming soil bacteria showed resistance to most major antibiotics regardless of where they'd originated. In this study, they observed nearly 200 different resistance profiles among the bacteria sequenced, indicating a diverse and robust response to the antibiotics tested regardless of their bacterial target or natural or synthetic origin. Antibiotic resistant bacteria have been observed through metagenomic surveys in non-clinical environments such as water treatment facilities and human microbiomes like the mouth. We now know that the antibiotic resistome exists in every environmental niche on Earth, and sequences from ancient permafrost reveal that antibiotic resistance existed for millennia before the introduction of human-synthesized antibiotics.
The Comprehensive Antibiotic Resistance Database (CARD) was created to compile a database of resistance genes from this rapidly increasing volume of available bacterial genomic data. The CARD is a compilation of sequence data and identificati |
https://en.wikipedia.org/wiki/Multiple%20Independent%20Levels%20of%20Security | Multiple Independent Levels of Security/Safety (MILS) is a high-assurance security architecture based on the concepts of separation and controlled information flow. It is implemented by separation mechanisms that support both untrusted and trustworthy components; ensuring that the total security solution is non-bypassable, evaluatable, always invoked, and tamperproof.
Overview
A MILS solution allows for independent evaluation of security components and trusted composition. MILS builds on the older Bell and La Padula theories on secure systems that represent the foundational theories of the DoD Orange Book.
A MILS system employs one or more separation mechanisms (e.g., Separation kernel, Partitioning Communication System, physical separation) to maintain assured data and process separation. A MILS system supports enforcement of one or more application/system specific security policies by authorizing information flow only between components in the same security domain or through trustworthy security monitors (e.g., access control guards, downgraders, crypto devices, etc.).
NEAT
Properties:
Non-bypassable: a component can not use another communication path, including lower level mechanisms to bypass the security monitor.
Evaluatable: any trusted component can be evaluated to the level of assurance required of that component. This means the components are modular, well designed, well specified, well implemented, small, low complexity, etc.
Always-invoked: each and every access/message is checked by the appropriate security monitors (i.e., a security monitor will not just check on a first access and then pass all subsequent accesses/messages through).
Tamperproof: the system controls "modify" rights to the security monitor code, configuration and data; preventing unauthorized changes.
A convenient acronym for these characteristics is NEAT.
Trustworthiness
'Trustworthy' means that the component has been certified to satisfy well-defined security policies to a l |
https://en.wikipedia.org/wiki/Insect%20pins | Insect pins are used by entomologists for mounting collected insects.
They can also be used in dressmaking for very fine silk or antique fabrics.
As standard, they are long and come in sizes from 000 (the smallest diameter), through 00, 0, and 1, to 8 (the largest diameter).
The most generally useful size in entomology is size 2, which is in diameter, with sizes 1 and 3 being the next most useful.
They were once commonly made from brass or silver, but these would corrode from contact with insect bodies and are no longer commonly used.
Instead they are nickel-plated brass, yielding "white" or "black" enameling, or even made from stainless steel.
Similarly, the smallest sizes from 000 to 1 used to be impractical for mounting until plastic and polyethylene became commonly used for pinning bases.
There are also micro-pins, which are long.
Minutens are headless micropins, generally made only of stainless steel, used for double-mounting: the insect is mounted on the minuten, which is pinned to a small block of soft material, which is in turn mounted on a standard, larger insect pin. |
https://en.wikipedia.org/wiki/Bead%20and%20reel | Bead and reel is an architectural motif, usually found in sculptures, moldings and numismatics. It consists in a thin line where beadlike elements alternate with cylindrical ones. It is found throughout the modern Western world in architectural detail, particularly on Greek/Roman style buildings, wallpaper borders, and interior moulding design. It is often used in combination with the egg-and-dart motif.
According to art historian John Boardman, the bead and reel motif was entirely developed in Greece from motifs derived from the turning techniques used for wood and metal, and was first employed in stone sculpture in Greece during the 6th century BC. The motif then spread to Persia, Egypt and the Hellenistic world, and as far as India, where it can be found on the abacus part of some of the Pillars of Ashoka or the Pataliputra capital. Bead and reel motifs can be found abundantly in Greek and Hellenistic sculpture and on the border of Hellenistic coins.
See also
Pearl circle |
https://en.wikipedia.org/wiki/Super-server | A super-server, sometimes called a service dispatcher, is a type of daemon generally run on Unix-like systems.
Usage
A super-server starts other servers when needed, normally with access to them checked by a TCP wrapper. It uses very few resources when in idle state. This can be ideal for workstations used for local web development, client/server development or low-traffic daemons with occasional usage (such as ident and SSH).
Performance
The creation of an operating system process embodying the sub-daemon is deferred until an incoming connection for the sub-daemon arrives. This results in a delay to the handling of the connection (in comparison to a connection handled by an already-running process).
Whether this delay is incurred repeatedly for every incoming connection depends on the design of the particular sub-daemon; simple daemons usually require a separate sub-daemon instance (i.e. a distinct, separate operating system process) be started for each and every incoming connection. Such a request-per-process design is more straightforward to implement, but for some workloads, the extra CPU and memory overhead of starting multiple operating system processes may be undesirable.
Alternatively, a single sub-daemon operating system process can be designed to handle multiple connections, allowing similar performance to a "stand alone" server (except for the one-off delay for the first connection to the sub-daemon).
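The deferred-start idea can be sketched in a few lines. This is an illustration, not inetd itself: it uses Python's socketserver module, and threads stand in for the forked sub-daemon processes a real super-server would create. A handler is only instantiated when a connection actually arrives:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    """One handler instance per incoming connection, created on demand,
    mirroring the request-per-process design described above."""
    def handle(self):
        data = self.rfile.readline()
        self.wfile.write(data.upper())

def serve_once(host="127.0.0.1"):
    # Bind to an ephemeral port; the dispatcher stays resident and idle.
    server = socketserver.ThreadingTCPServer((host, 0), EchoHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # A client connection triggers creation of the handler "sub-daemon".
    with socket.create_connection((host, port)) as s:
        s.sendall(b"hello\n")
        reply = s.makefile("rb").readline()
    server.shutdown()
    server.server_close()
    return reply

print(serve_once())  # b'HELLO\n'
```

A production super-server would additionally consult a service table (like /etc/inetd.conf), apply access checks (TCP wrappers), and exec the configured program with the socket as its standard input/output.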
Implementations
inetd
launchd
systemd
ucspi-tcp
xinetd |
https://en.wikipedia.org/wiki/Requirements%20traceability | Requirements traceability is a sub-discipline of requirements management within software development and systems engineering. Traceability as a general term is defined by the IEEE Systems and Software Engineering Vocabulary as (1) the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or primary-subordinate relationship to one another; (2) the identification and documentation of derivation paths (upward) and allocation or flowdown paths (downward) of work products in the work product hierarchy; (3) the degree to which each element in a software development product establishes its reason for existing; and (4) discernible association among two or more logical entities, such as requirements, system elements, verifications, or tasks.
Requirements traceability in particular, is defined as "the ability to describe and follow the life of a requirement in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement and iteration in any of these phases)". In the requirements engineering field, traceability is about understanding how high-level requirements – objectives, goals, aims, aspirations, expectations, business needs – are transformed into development ready, low-level requirements. It is therefore primarily concerned with satisfying relationships between layers of information (aka artifacts). However, traceability may document relationships between many kinds of development artifacts, such as requirements, specification statements, designs, tests, models and developed components. For example, it is common practice to capture verification relationships to demonstrate that a requirement is verified by a certain test artifact.
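A toy model of the forward and backward tracing just described (the artifact names REQ-1, DES-1, TEST-1 and the link list are invented for illustration):

```python
# Trace links as (source, target) pairs between development artifacts.
# Reading the pairs forward follows a requirement into design and test;
# reading them backward recovers an artifact's origins.
links = [
    ("REQ-1", "DES-1"), ("REQ-1", "TEST-1"),
    ("REQ-2", "DES-2"),
]

def forward(artifact):
    """Artifacts derived from / verifying the given one."""
    return sorted(dst for src, dst in links if src == artifact)

def backward(artifact):
    """Artifacts the given one traces back to."""
    return sorted(src for src, dst in links if dst == artifact)

print(forward("REQ-1"))   # ['DES-1', 'TEST-1']
print(backward("DES-2"))  # ['REQ-2']
```

Real traceability tools store the same kind of link graph, typed by relationship (satisfies, verifies, refines), and report gaps such as requirements with no verifying test artifact.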
Traceability is especially relevant when developing safety-critical systems and therefore prescribed by safety |
https://en.wikipedia.org/wiki/OneDrive | Microsoft OneDrive is a file hosting service operated by Microsoft. First released in August 2007, it allows registered users to store, share and sync their files. OneDrive also works as the storage backend of the web version of Microsoft 365 / Office. OneDrive offers 5 GB of storage space free of charge, with 100 GB, 1 TB, and 6 TB storage options available either separately or with Microsoft 365 subscriptions.
The OneDrive client app adds file synchronization and cloud backup features to its device. The app comes bundled with Microsoft Windows and is available for macOS, Android, iOS, Windows Phone, Xbox 360, Xbox One, and Xbox Series X and S. In addition, Microsoft 365 apps directly integrate with OneDrive.
History
At its launch the service, known as Windows Live Folders at the time (with a codename of SkyDrive), was provided as a limited beta available to a few testers in the United States. On August 1, 2007, the service was expanded to a wider audience. Shortly thereafter, on August 9, 2007, the service was renamed Windows Live SkyDrive and made available to testers in the United Kingdom and India. SkyDrive was initially available in 38 countries and regions, later expanded to 62. On December 2, 2008, the capacity of an individual SkyDrive account was upgraded from 5 GB to 25 GB, and Microsoft added a separate entry point called Windows Live Photos which allowed users to access their photos and videos stored on SkyDrive. This entry point allowed users to add "People tags" to their photos, download photos into Windows Photo Gallery or as a ZIP file, as well as viewing Exif metadata such as camera information for the photos uploaded. Microsoft also added the ability to have full-screen slide shows for photos using Silverlight.
SkyDrive was updated to "Wave 4" release on June 7, 2010, and added the ability to work with Office Web Apps (now known as Office Online), with versioning. In this update, due to the discontinuation of Windows Live Toolbar, the ability |
https://en.wikipedia.org/wiki/Nine-segment%20display | A nine-segment display is a type of display based on nine segments that can be turned on or off according to the graphic pattern to be produced. It is an extension of the more common seven-segment display, having an additional two diagonal or vertical segments (one between the top and the middle, and the other between the bottom and the horizontal segments). It provides an efficient method of displaying alphanumeric characters.
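A display pattern of this kind can be modeled as a 9-bit mask, one bit per segment. The segment names below (a–g as on a conventional seven-segment display, plus h and i for the two extra segments) are a hypothetical labeling for illustration, not any manufacturer's scheme:

```python
# Assign one bit per segment under a hypothetical a..i naming.
SEG_BITS = {s: 1 << k for k, s in enumerate("abcdefghi")}

def encode(segments):
    """Pack a set of lit segments into a 9-bit pattern."""
    return sum(SEG_BITS[s] for s in segments)

# '3' using only the seven standard segments, versus a hypothetical
# variant that also lights the extra segment 'h':
three = encode("abcdg")
variant = encode("abcdgh")
print(bin(three), bin(variant))  # 0b1001111 0b11001111
```

The two extra bits are what let a nine-segment driver distinguish glyph pairs that a seven-segment display must render identically.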
The letters displayed by a nine-segment display are not consistently uppercase or lowercase in shape. A common compromise is to use a lower-case "n" instead of "N". Depending on the design of the display segments, the use of the extra two segments may be avoided whenever possible, as in the Nixxo X-Page "tall" lowercase "r" and "y". Perhaps to avoid this awkward inconsistency, the Scope Geo N8T does not allow the use of uppercase or lowercase versions of the letters "J" (even though it could easily be displayed on a 7-segment display), "K", "Q", "W" (even though this would appear as an upside-down "M"), "X", "Y" (even though it could easily be displayed on a 7-segment display), or "Z".
Uses
In some Soviet digital calculators of the 1970s, such as the Elektronika 4-71b, 9-segment displays were used to provide basic alphanumerics and avoid confusions with representing numbers in Soviet postcodes.
The extra two bars were slanted forward, allowing for an appropriate-looking И, and to differentiate the numeral 3
from the letter З. The Sharp Compet calculator also uses a 9-segment display, allowing a small range of characters and symbols to be used.
Nine-segment displays are used in many Timex digital watches, and some pagers, such as the Nixxo XPage, the Arch BR502 pager, and the Scope Geo N8T.
They are also used in some Epson Stylus printers, and Newport iSeries digital meters. The display used in the iSeries is unique in that it has a vertical extra segment at top, and a fully backwards-leaning slant for the ext |
https://en.wikipedia.org/wiki/RADOM-7 | RADOM is a Bulgarian Liulin-type spectrometry-dosimetry instrument, designed to precisely measure cosmic radiation around the Moon. It is installed on the Indian satellite Chandrayaan-1. Another three instruments were deployed on the International Space Station. All Liulin-type instruments are designed and built by the Solar-Terrestrial Influences Laboratory at the Bulgarian Academy of Sciences.
Further reading
Radiation
Space program of Bulgaria |
https://en.wikipedia.org/wiki/Living%20apart%20together | Couples living apart together (LAT) have an intimate relationship but live at separate addresses. It includes couples who wish to live together but are not yet able to, as well as couples who prefer to (or must) live apart, for various reasons.
In the early 2000s, LAT couples accounted for around 10% of adults in Britain (excluding those who live with family), and over a quarter of all those not married or cohabiting. Similar figures are recorded for other countries in northern Europe, including Belgium, France, Germany, the Netherlands, Norway and Sweden. Research suggests similar or even higher rates in southern Europe, although there LAT couples often remain in parental households. In Australia, Canada and the US, representative surveys indicate that between 6% and 9% of unmarried adults have a partner who lives elsewhere. LAT is also increasingly understood and accepted publicly, is seen by most as good enough for partnering, and subject to the same expectations about commitment and fidelity as marriage or cohabitation.
Within Asia, "walking marriages" have become increasingly common in Beijing. Guo Jianmei, director of the center for women's studies at Beijing University, told a Newsday correspondent, "Walking marriages reflect sweeping changes in Chinese society." A "walking marriage" refers to a type of temporary marriage formed by the Mosuo of China, in which male partners live elsewhere and make nightly visits. A similar arrangement in Saudi Arabia, called misyar marriage, also involves the husband and wife living separately but meeting regularly.
Research
Some researchers have seen living apart together as a historically new family form. From this perspective LAT couples can pursue both the intimacy of being in a couple and at the same time preserve autonomy. Some LAT couples may even de-prioritize couple relationships and place more importance on friendship. Alternatively, others see LAT as just a 'stage' on the way to possible cohabitation and marriage. |
https://en.wikipedia.org/wiki/AROS%20Research%20Operating%20System | AROS Research Operating System (AROS, pronounced "AR-OS") is a free and open-source multimedia-centric implementation of the AmigaOS 3.1 application programming interface (API), designed to be portable and flexible. Ports are available for personal computers (PCs) based on x86 and PowerPC, in native and hosted flavors, with other architectures in development. In a show of full-circle development, AROS has been ported to the Motorola 68000 series (m68k) based Amiga 1200, and there is also an ARM port for the Raspberry Pi series.
Name and identity
AROS originally stood for Amiga Research Operating System, but to avoid any trademark issues with the Amiga name, it was changed to the recursive acronym AROS Research Operating System.
The mascot of AROS is an anthropomorphic cat named Kitty, created by Eric Schwartz and officially adopted by the AROS Team in December 2002.
Used in the core AROS About and installer tools, it was also adopted by several AROS community sites and early distributions.
Other AROS identifiable symbols and logos are based around the cat shape, such as the Icaros logo, which is a stylised cat's eye, or AFA (Aros For Amiga).
Current status
The project, begun in 1995, has over the years become an almost "feature complete" implementation of AmigaOS which, as of May 2017, only lacks a few areas of functionality. This was achieved by the efforts of a small team of developers.
It can be installed on most IBM PC compatibles, and features native graphics drivers for video cards such as the GeForce range made by Nvidia. As of May 2007 USB keyboards and mice are also supported. AROS has been ported to the Sam440ep PowerPC board and a first test version for the Efika was released in 2009.
While the OS is still lacking in applications, a few have been ported, including E-UAE, an emulation program that allows m68k-native AmigaOS applications to run. Some AROS-specific applications have also been written. AROS has TCP/IP networking support, and has |
https://en.wikipedia.org/wiki/Video%20post-processing | The term post-processing (or postproc for short) is used in the video/film business for quality-improvement image processing (specifically digital image processing) methods used in video playback devices, such as stand-alone DVD-Video players; video playing software; and transcoding software. It is also commonly used in real-time 3D rendering (such as in video games) to add additional effects.
Uses in video production
Video post-processing is the process of changing the perceived quality of a video on playback (done after the decoding process). Image scaling routines such as linear interpolation, bilinear interpolation, or cubic interpolation can, for example, be performed when resizing images; resizing involves either subsampling (reducing or shrinking an image) or zooming (enlarging an image). This helps reduce or hide image artifacts and flaws in the original film material. It is important to understand that post-processing always involves a trade-off between speed, smoothness and sharpness.
Image scaling and multivariate interpolation:
Nearest-neighbor interpolation
linear interpolation
bilinear interpolation
cubic interpolation
bicubic interpolation
Bézier surface
Lanczos resampling
trilinear interpolation
Tricubic interpolation
SPP (Statistical-Post-Processing)
Deblocking
Deringing
Sharpen / Unsharpen (often referred to as "soften")
Requantization
Luminance alterations
Blurring / denoising
Deinterlacing
weave deinterlace method
bob deinterlace method
linear deinterlace method
yadif deinterlace method
Deflickering
2:3 pull-down (telecine) for conversion from 24 frames/s and 23.976 frames/s to 30 frames/s and 29.97 frames/s
3:2 pull-up / IVTC (inverse telecine) for conversion from 30 frames/s and 29.97 frames/s to 24 frames/s and 23.976 frames/s
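One of the scaling routines above can be sketched in a few lines. The following pure-Python bilinear upscaler is an illustrative sketch (the corner-aligned coordinate mapping is one common convention, assumed here) showing the core arithmetic:

```python
def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image (list of rows) with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        for j in range(new_w):
            # Map output pixel to a fractional source coordinate (corner-aligned).
            y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # Blend the four nearest source pixels.
            top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
            bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
            out[i][j] = top * (1 - dy) + bot * dy
    return out

# Upscaling a 2x2 image to 3x3: the new centre pixel is the average of all four.
print(bilinear_resize([[0, 2], [4, 6]], 3, 3)[1][1])  # 3.0
```

Real players use heavily optimized versions of the same idea, trading the smoothness of bicubic or Lanczos kernels against this routine's speed.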
Uses in 3D rendering
Additionally, post-processing is commonly used in 3D rendering, especially for video games. Instead of rendering 3D objects directly to the display, the |
https://en.wikipedia.org/wiki/Urocanase | Urocanase (also known as imidazolonepropionate hydrolase or urocanate hydratase) is the enzyme () that catalyzes the second step in the degradation of histidine, the hydration of urocanate into imidazolonepropionate.
Urocanase is coded for by the UROC1 gene, located on the 3rd chromosome in humans. The protein itself is composed of 676 amino acids which then fold, producing the final product which has 2 identical subunits, making the enzyme a homodimer.
To catalyze the hydration of urocanate in the catabolic pathway of L-histidine, the enzyme utilizes its two NAD+ (nicotinamide adenine dinucleotide) groups. The NAD+ groups act as electrophiles, attaching to the top carbon of the urocanate, which leads to sigmatropic rearrangement of the urocanate molecule. This rearrangement allows for the addition of a water molecule, converting the urocanate into 4,5-dihydro-4-oxo-5-imidazolepropanoate.
urocanate + H2O ⇌ 4,5-dihydro-4-oxo-5-imidazolepropanoate
Inherited deficiency of urocanase leads to elevated levels of urocanic acid in the urine, a condition known as urocanic aciduria.
Urocanase is found in some bacteria (gene hutU), in the liver of many vertebrates, and has also been found in the plant Trifolium repens (white clover). Urocanase is a protein of about 60 kDa; it binds tightly to NAD+ and uses it as an electrophilic cofactor. A conserved cysteine has been found to be important for the catalytic mechanism and could be involved in the binding of the NAD+.
https://en.wikipedia.org/wiki/Foster%20graph | In the mathematical field of graph theory, the Foster graph is a bipartite 3-regular graph with 90 vertices and 135 edges.
The Foster graph is Hamiltonian and has chromatic number 2, chromatic index 3, radius 8, diameter 8 and girth 10. It is also a 3-vertex-connected and 3-edge-connected graph. It has queue number 2 and the upper bound on the book thickness is 4.
All the cubic distance-regular graphs are known. The Foster graph is one of the 13 such graphs. It is the unique distance-transitive graph with intersection array {3,2,2,2,2,1,1,1;1,1,1,1,2,2,2,3}. It can be constructed as the incidence graph of the partial linear space which is the unique triple cover with no 8-gons of the generalized quadrangle GQ(2,2). It is named after R. M. Foster, whose Foster census of cubic symmetric graphs included this graph.
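The stated parameters can be checked mechanically. The sketch below builds the graph from its LCF notation [17, -9, 37, -37, 9, -17]^15 (taken from standard references, not from the text above, so treat it as an assumption) and verifies the size, regularity, bipartiteness and girth:

```python
from collections import deque

N = 90
# LCF notation: a Hamiltonian cycle on 90 vertices plus one chord per vertex.
shifts = [17, -9, 37, -37, 9, -17] * 15
adj = {v: set() for v in range(N)}
for i in range(N):
    for j in ((i + 1) % N, (i + shifts[i]) % N):  # cycle edge + LCF chord
        adj[i].add(j)
        adj[j].add(i)

num_edges = sum(len(s) for s in adj.values()) // 2
assert num_edges == 135 and all(len(s) == 3 for s in adj.values())  # cubic, 135 edges

# Bipartite: 2-colour the vertices by breadth-first search.
colour, q = {0: 0}, deque([0])
while q:
    u = q.popleft()
    for w in adj[u]:
        if w not in colour:
            colour[w] = 1 - colour[u]
            q.append(w)
assert all(colour[u] != colour[w] for u in adj for w in adj[u])

def girth():
    """Shortest cycle length: BFS from every vertex, tracking non-tree edges."""
    best = N + 1
    for s in range(N):
        dist, parent, q = {s: 0}, {s: None}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:  # non-tree edge closes a cycle through s
                    best = min(best, dist[u] + dist[w] + 1)
    # Minimising over every start vertex makes this bound exact.
    return best

print(girth())  # 10
```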
The bipartite half of the Foster graph is a distance-regular graph and a locally linear graph. It is one of a finite number of such graphs with degree six.
Algebraic properties
The automorphism group of the Foster graph is a group of order 4320. It acts transitively on the vertices, on the edges and on the arcs of the graph; therefore, the Foster graph is a symmetric graph, with automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Foster graph, referenced as F90A, is the only cubic symmetric graph on 90 vertices.
The characteristic polynomial of the Foster graph is equal to .
Gallery |
https://en.wikipedia.org/wiki/Partial%20linear%20space | A partial linear space (also semilinear or near-linear space) is a basic incidence structure in the field of incidence geometry, that carries slightly less structure than a linear space.
The notion is equivalent to that of a linear hypergraph.
Definition
Let be an incidence structure, in which the elements of are called points and the elements of are called lines. S is a partial linear space if the following axioms hold:
any line is incident with at least two points
any pair of distinct points is incident with at most one line
If there is a unique line incident with every pair of distinct points, then we get a linear space.
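The two axioms translate directly into a checking routine. The helper below is an illustrative sketch (the function name and the encoding of lines as sets of points are assumptions, not from the text):

```python
from itertools import combinations

def is_partial_linear_space(lines):
    """Check the two partial-linear-space axioms for a family of lines."""
    # Axiom 1: every line is incident with at least two points.
    if any(len(line) < 2 for line in lines):
        return False
    # Axiom 2: every pair of distinct points lies on at most one line.
    seen = set()
    for line in lines:
        for pair in combinations(sorted(line), 2):
            if pair in seen:  # this pair of points already lies on another line
                return False
            seen.add(pair)
    return True

print(is_partial_linear_space([{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]))  # True
print(is_partial_linear_space([{1, 2, 3}, {1, 2, 4}]))             # False
```

In the failing example the points 1 and 2 lie on two distinct lines, violating the second axiom.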
Properties
The De Bruijn–Erdős theorem shows that in any finite linear space which is not a single point or a single line, we have .
Examples
Projective space
Affine space
Polar space
Generalized quadrangle
Generalized polygon
Near polygon |
https://en.wikipedia.org/wiki/Hanakotoba | Hanakotoba is the Japanese form of the language of flowers. The language was meant to convey emotion and communicate directly to the recipient or viewer without the use of words.
Flowers and their meanings
See also
Language of flowers
Plant symbolism |
https://en.wikipedia.org/wiki/Simplicial%20volume | In the mathematical field of geometric topology, the simplicial volume (also called Gromov norm) is a certain measure of the topological complexity of a manifold. More generally, the simplicial norm measures the complexity of homology classes.
Given a closed and oriented manifold, one defines the simplicial norm by minimizing the sum of the absolute values of the coefficients over all singular chains homologous to a given cycle. The simplicial volume is the simplicial norm of the fundamental class.
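Spelled out in symbols (a standard reconstruction of the definition just described; the notation is assumed, as the displayed formulas are elided in this copy):

```latex
\|\alpha\| \;=\; \inf\Bigl\{\, \sum_i |a_i| \;:\; \text{the singular cycle } \sum_i a_i\,\sigma_i \text{ represents } \alpha \in H_k(M;\mathbb{R}) \,\Bigr\},
\qquad
\|M\| \;=\; \bigl\|[M]\bigr\|,
```

where \([M]\) denotes the fundamental class of the closed oriented manifold \(M\).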
It is named after Mikhail Gromov, who introduced it in 1982. With William Thurston, he proved that the simplicial volume of a finite volume hyperbolic manifold is proportional to the hyperbolic volume.
The simplicial volume is equal to twice the Thurston norm.
Thurston also used the simplicial volume to prove that hyperbolic volume decreases under hyperbolic Dehn surgery. |
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Astronomy%20and%20Astrophysics | The Annual Review of Astronomy and Astrophysics is an annual peer-reviewed scientific journal published by Annual Reviews. The co-editors are Ewine van Dishoeck and Robert C. Kennicutt. The journal reviews scientific literature pertaining to local and distant celestial entities throughout the observable universe, as well as cosmology, instrumentation, techniques, and the history of developments. It was established in 1963.
History
In November 1960, the board of directors of the nonprofit publisher Annual Reviews began investigating the need for a new journal of review articles that covered developments in astronomy and astrophysics. The board consulted an advisory group of experts, including Ronald Bracewell, Robert Jastrow, Joseph Kaplan, Paul Merrill, Otto Struve, and Harold Urey. The editorial committee met in August 1961 to determine the authors and topics for the first volume, which was published in 1963. As of 2020, it was published both in print and electronically.
It defines its scope as covering significant developments in astronomy and astrophysics, including the Sun, the Solar System, exoplanets, stars, the interstellar medium, the Milky Way and other galaxies, galactic nuclei, cosmology, and the instrumentation and techniques used for research and analysis. As of 2023, Journal Citation Reports gives the journal an impact factor of 33.3, ranking it first out of 69 journals in the category "Astronomy and Astrophysics". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, Inspec, and Academic Search, among others.
Editorial processes
The Annual Review of Astronomy and Astrophysics is led by the editor or co-editors. They are assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Re
https://en.wikipedia.org/wiki/Trace%20operator | In mathematics, the trace operator extends the notion of the restriction of a function to the boundary of its domain to "generalized" functions in a Sobolev space. This is particularly important for the study of partial differential equations with prescribed boundary conditions (boundary value problems), where weak solutions may not be regular enough to satisfy the boundary conditions in the classical sense of functions.
Motivation
On a bounded, smooth domain , consider the problem of solving Poisson's equation with inhomogeneous Dirichlet boundary conditions:
with given functions and with regularity discussed in the application section below. The weak solution of this equation must satisfy
for all .
The -regularity of is sufficient for the well-definedness of this integral equation. It is not apparent, however, in which sense can satisfy the boundary condition on : by definition, is an equivalence class of functions which can have arbitrary values on since this is a null set with respect to the n-dimensional Lebesgue measure.
If there holds by Sobolev's embedding theorem, such that can satisfy the boundary condition in the classical sense, i.e. the restriction of to agrees with the function (more precisely: there exists a representative of in with this property). For with such an embedding does not exist and the trace operator presented here must be used to give meaning to . Then with is called a weak solution to the boundary value problem if the integral equation above is satisfied. For the definition of the trace operator to be reasonable, there must hold for sufficiently regular .
Trace theorem
The trace operator can be defined for functions in the Sobolev spaces with , see the section below for possible extensions of the trace to other spaces. Let for be a bounded domain with Lipschitz boundary. Then there exists a bounded linear trace operator
such that extends the classical trace, i.e.
for all .
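The inline formulas are elided in this copy; for reference, a standard statement of the theorem with the usual notation (supplied here as a reconstruction, so treat the exact spaces and exponents as assumptions) reads:

```latex
T : W^{1,p}(\Omega) \to L^{p}(\partial\Omega), \qquad 1 \le p < \infty,
\\
T u = u|_{\partial\Omega} \quad \text{for all } u \in W^{1,p}(\Omega) \cap C(\overline{\Omega}),
\\
\|T u\|_{L^{p}(\partial\Omega)} \;\le\; C \,\|u\|_{W^{1,p}(\Omega)}.
```

The last inequality expresses the boundedness (continuity) of the trace operator, with a constant \(C\) depending only on \(p\) and \(\Omega\).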
The continuity of impl |
https://en.wikipedia.org/wiki/Albert%20Wojciech%20Adamkiewicz | Albert Wojciech Adamkiewicz (; 11 August 1850 – 31 October 1921) was a Polish pathologist born in Żerków.
Biography
Adamkiewicz earned his medical doctorate in 1873 from the University of Breslau where he was a student-assistant to physiologist Rudolf Peter Heinrich Heidenhain. From 1879 until 1892, he was chief of general and experimental pathology at the Jagiellonian University in Cracow.
Adamkiewicz is remembered for his pathological examinations of the central nervous system. His research of the variable vascularity of the spinal cord was an important contribution to the development of modern clinical vascular surgery. He is credited with describing the major anterior segmental medullary artery, which is also known as the Artery of Adamkiewicz.
In the early 1890s, Adamkiewicz published a series of articles claiming the discovery of a cancer-causing parasite he called Coccidium sarcolytus, as well as the existence of an anti-cancer serum. Further testing proved the serum a failure, and Adamkiewicz was severely criticized by the medical community at Jagiellonian University. Soon afterwards, he relocated to Vienna, where he practiced medicine at Rothschild Hospital.
He is credited with the creation of the Adamkiewicz test, a test for detecting tryptophan, an α-amino acid that is used in the biosynthesis of proteins.
See also
Pathology
List of pathologists |
https://en.wikipedia.org/wiki/Jeffrey%20Hunker | Jeffrey Hunker (January 20, 1957 – May 31, 2013) was an American cyber security consultant and writer.
Biography
Hunker received his bachelor's degree from Harvard College and Ph.D. from Harvard Business School. He joined the Boston Consulting Group before becoming an advisor in the Department of Commerce and the founding director of the Critical Infrastructure Assurance Office (later subsumed by the Department of Homeland Security National Protection and Programs Directorate). This led him to serve on the National Security Council as the Senior Director for Critical Infrastructure.
Hunker was also a Vice President at Kidder, Peabody & Co., dean of the Heinz College at Carnegie Mellon, and a member of the Council on Foreign Relations. He is credited with coining the term cyberinfrastructure and worked closely with Richard A. Clarke on cyberterrorism issues. Hunker's research was primarily concerned with homeland and information security. He was also the Carnegie Mellon representative for the Institute for Information Infrastructure Protection.
In 2008 Hunker was charged three times with driving under the influence, followed by another incident on Thanksgiving 2009. In May 2010 Hunker pleaded guilty to these four drunken driving charges and was sentenced to 3 to 6 months in jail. He was paroled at sentencing and was sentenced to 24 months probation. This sentence was terminated early on June 2, 2011.
In 2010 his book Creeping Failure: How We Broke the Internet and What We Can Do to Fix It was published by McClelland and Stewart, a division of Random House. Creeping Failure is a Scientific American magazine Recommended Book. In 2011 a second edition was released. Also in 2010 he was co-editor of Insider Threats in Cyber Security, and his article (co-authored with Christian Probst) "The Risk of Risk Analysis and its Relation to the Economics of Insider Threats" appears in The Economics of Information Security and Privacy.
Until 2013, he was Visiting Sc |
https://en.wikipedia.org/wiki/Atypical%20teratoid%20rhabdoid%20tumor | An atypical teratoid rhabdoid tumor (AT/RT) is a rare tumor usually diagnosed in childhood. Although usually a brain tumor, AT/RT can occur anywhere in the central nervous system (CNS), including the spinal cord. About 60% will be in the posterior cranial fossa (particularly the cerebellum). One review estimated that 52% are in the posterior fossa, 39% are supratentorial primitive neuroectodermal tumors (sPNET), 5% are in the pineal region, 2% are spinal, and 2% are multifocal.
In the United States, three children per 1,000,000 or around 30 new AT/RT cases are diagnosed each year. AT/RT represents around 3% of pediatric cancers of the CNS.
Around 17% of all pediatric cancers involve the CNS, making these cancers the most common childhood solid tumor. The survival rate for CNS tumors is around 60%. Pediatric brain cancer is the second-leading cause of childhood cancer death, just after leukemia. Recent trends suggest that the rate of overall CNS tumor diagnosis is increasing by about 2.7% per year. As diagnostic techniques using genetic markers improve and are used more often, the proportion of AT/RT diagnoses is expected to increase.
AT/RT was only recognized as an entity in 1996 and added to the World Health Organization Brain Tumor Classification in 2000 (Grade IV). The relatively recent classification and rarity have contributed to initial misdiagnosis and nonoptimal therapy, which led to a historically poor prognosis.
Current research is focusing on using chemotherapy protocols that are effective against rhabdomyosarcoma in combination with surgery and radiation therapy.
Recent studies using multimodal therapy have shown significantly improved survival data. In 2008, the Dana-Farber Cancer Institute in Boston reported two-year overall survival of 53% and event-free survival of 70% (median age at diagnosis of 26 months).
In 2013, the Medical University of Vienna reported five-year overall survival of 100%, and event-free survival of 89% (median age at diagnosis |
https://en.wikipedia.org/wiki/Molecular%20Phylogenetics%20and%20Evolution | Molecular Phylogenetics and Evolution is a peer-reviewed scientific journal of evolutionary biology and phylogenetics. The journal is edited by E.A. Zimmer.
Indexing
The journal is indexed in:
EMBiology
Journal Citation Reports
Scopus
Web of Science
https://en.wikipedia.org/wiki/Alveolar%E2%80%93arterial%20gradient | The alveolar–arterial gradient (A–a gradient) is a measure of the difference between the alveolar (A) concentration of oxygen and the arterial (a) concentration of oxygen. It is a useful parameter for narrowing the differential diagnosis of hypoxemia.
The A–a gradient helps to assess the integrity of the alveolar capillary unit. For example, in high altitude, the arterial oxygen is low but only because the alveolar oxygen () is also low. However, in states of ventilation perfusion mismatch, such as pulmonary embolism or right-to-left shunt, oxygen is not effectively transferred from the alveoli to the blood which results in an elevated A-a gradient.
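A sketch of the computation in practice, using the standard alveolar gas equation and typical sea-level values (the equation and the constants below are textbook assumptions, not taken from the text above):

```python
def alveolar_po2(fio2=0.21, patm=760.0, ph2o=47.0, paco2=40.0, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / RQ (mmHg)."""
    return fio2 * (patm - ph2o) - paco2 / rq

def aa_gradient(pao2_arterial, **kwargs):
    """A-a gradient: calculated alveolar PO2 minus measured arterial PO2."""
    return alveolar_po2(**kwargs) - pao2_arterial

# A healthy adult breathing room air at sea level with a measured PaO2 of 90 mmHg:
print(round(aa_gradient(90.0), 1))  # 9.7 -- within the normal 5-10 mmHg range
```

A markedly larger value with the same inputs would point toward a diffusion, shunt, or V/Q-mismatch process rather than hypoventilation or low inspired oxygen.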
In a perfect system, no A-a gradient would exist: oxygen would diffuse and equalize across the capillary membrane, and the pressures in the arterial system and alveoli would be effectively equal (resulting in an A-a gradient of zero). However even though the partial pressure of oxygen is about equilibrated between the pulmonary capillaries and the alveolar gas, this equilibrium is not maintained as blood travels further through pulmonary circulation. As a rule, is always higher than by at least 5–10 mmHg, even in a healthy person with normal ventilation and perfusion. This gradient exists due to both physiological right-to-left shunting and a physiological V/Q mismatch caused by gravity-dependent differences in perfusion to various zones of the lungs. The bronchial vessels deliver nutrients and oxygen to certain lung tissues, and some of this spent, deoxygenated venous blood drains into the highly oxygenated pulmonary veins, causing a right-to-left shunt. Further, the effects of gravity alter the flow of both blood and air through various heights of the lung. In the upright lung, both perfusion and ventilation are greatest at the base, but the gradient of perfusion is steeper than that of ventilation so V/Q ratio is higher at the apex than at the base. This means that blood flowing through capillaries at th |
https://en.wikipedia.org/wiki/Remington%20Medal | The Remington Honor Medal, named for eminent community pharmacist, manufacturer, and educator Joseph P. Remington (1847–1918), was established in 1918 to recognize distinguished service on behalf of American pharmacy, either culminating in the past year or extending over a long period of outstanding activity or fruitful achievement.
Awarded annually by the American Pharmacists Association, the Remington Medal is the highest recognition given in the profession of pharmacy in the US.
Past recipients
See also
List of chemistry awards
List of medicine awards |
https://en.wikipedia.org/wiki/Environmental%20sex%20determination | Environmental sex determination is the establishment of sex by a non-genetic cue, such as nutrient availability, experienced within a discrete period after fertilization. Environmental factors which often influence sex determination during development or sexual maturation include light intensity and photoperiod, temperature, nutrient availability, and pheromones emitted by surrounding plants or animals. This is in contrast to genotypic sex determination, which establishes sex at fertilization by genetic factors such as sex chromosomes. Under true environmental sex determination, once sex is determined, it is fixed and cannot be switched again. Environmental sex determination is different from some forms of sequential hermaphroditism in which the sex is determined flexibly after fertilization throughout the organism’s life.
Adaptive significance
Environmental sex determination is similar to certain forms of sexual selection in that there are often different and opposing selective pressures on males and females because of the costs of reproduction. Sexual selection is common throughout the tree of life (best known in birds), often resulting in sexual dimorphism, or size and appearance differences between sexes of the same species. In environmental sex determination, selective pressures over evolutionary time have selected for flexibility in sex determination to optimize fitness in a heterogeneous environment because of the different costs of sex in males and females. Certain environmental conditions differentially affect each sex such that it would be beneficial to become one sex and not the other. This is especially pertinent for sessile organisms that cannot move to a different environment. In plants, for example, female sexual function is often more energetically expensive because once fertilized they must use significant stored energy to produce fruits, seeds, or sporophytes, whereas males must only produce sperm (and sperm-containing structure; antheridium i
https://en.wikipedia.org/wiki/Plateau%20potentials | Plateau potentials, caused by persistent inward currents (PICs), are a type of electrical behavior seen in neurons.
Spinal Cord
Plateau potentials are of particular importance to spinal cord motor systems. PICs are set up by the influence of descending monoaminergic reticulospinal pathways. Metabotropic neurotransmitters, via monoaminergic input such as 5-HT and norepinephrine, modulate the activity of dendritic L-type Calcium channels that allow a sustained, positive, inward current into the cell. This leads to a lasting depolarisation. In this state, the cell fires action potentials independent of synaptic input. The PICs can be turned off via the activation of high-frequency inhibitory input at which point the cell returns to a resting state.
Olfactory Bulb
Periglomerular cells, inhibitory interneurons that surround and innervate olfactory glomeruli, have also been shown to exhibit plateau potentials.
Cortex and Hippocampus
Plateau potentials are also seen in the cortical, and hippocampal pyramidal neurons. Using iontophoretic, or two-photon glutamate uncaging experiments, it has been discovered that these plateau potentials include activities of voltage dependent calcium channels and NMDA receptors. |
https://en.wikipedia.org/wiki/Modified%20condition/decision%20coverage | Modified condition/decision coverage (MC/DC) is a code coverage criterion used in software testing.
Overview
MC/DC requires all of the below during testing:
Each entry and exit point is invoked
Each decision takes every possible outcome
Each condition in a decision takes every possible outcome
Each condition in a decision is shown to independently affect the outcome of the decision.
Independence of a condition is shown by proving that only one condition changes at a time.
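As an illustration of the criteria above, consider a hypothetical decision (a or b) and c (this example is not drawn from any of the standards discussed here). The sketch exhibits an "independence pair" for each condition and shows that four test cases suffice:

```python
def decision(a, b, c):
    """The decision under test, built from three conditions."""
    return (a or b) and c

# For each condition, two test cases that differ ONLY in that condition
# and produce different decision outcomes.
pairs = {
    "a": ((True, False, True), (False, False, True)),
    "b": ((False, True, True), (False, False, True)),
    "c": ((True, False, True), (True, False, False)),
}

for cond, (t1, t2) in pairs.items():
    # Exactly one condition changes between the pair...
    assert sum(x != y for x, y in zip(t1, t2)) == 1
    # ...and that change flips the decision outcome, proving independence.
    assert decision(*t1) != decision(*t2)

# The union of the pairs needs only 4 distinct test cases (n + 1 for n = 3
# conditions), versus 8 for exhaustive multiple condition coverage.
cases = {t for pair in pairs.values() for t in pair}
print(len(cases))  # 4
```

This n + 1 growth, rather than the 2^n of exhaustive condition testing, is the practical reason MC/DC is mandated for the most critical software levels.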
MC/DC is used in avionics software development guidance DO-178B and DO-178C to ensure adequate testing of the most critical (Level A) software, which is defined as that software which could provide (or prevent failure of) continued safe flight and landing of an aircraft. It is also highly recommended for SIL 4 in part 3 Annex B of the basic safety publication and ASIL D in part 6 of automotive standard ISO 26262.
Additionally, NASA requires 100% MC/DC coverage for any safety critical software component in Section 3.7.4 of NPR 7150.2D.
Definitions
Condition: A leaf-level Boolean expression (it cannot be broken down into simpler Boolean expressions).
Decision: A Boolean expression composed of conditions and zero or more Boolean operators. A decision without a Boolean operator is a condition. A decision does not imply a change of control flow; e.g., an assignment of a Boolean expression to a variable is a decision for MC/DC.
Condition coverage: Every condition in a decision in the program has taken all possible outcomes at least once.
Decision coverage: Every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken all possible outcomes at least once.
Condition/decision coverage: Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and every decision in the program has taken all possible outcomes at least once.
Mod |
https://en.wikipedia.org/wiki/Bologna%20bottle | A Bologna bottle, also known as a Bologna phial or philosophical vial, is a glass bottle which has great external strength, often used in physics demonstrations and magic tricks. The exterior is generally strong enough that one could pound a nail into a block of wood using the bottle as a hammer; however, even a small scratch on the interior would cause it to crumble.
It is created by heating a glass bottle and then rapidly cooling the outside whilst slowly cooling the inside. This causes external compression and internal tension such that even a scratch on the inside is sufficient to shatter the bottle.
The effect is utilized in several magic effects, including the "Devil's Flask".
Manufacture
To create the desired effect, the bottles are rapidly cooled on the outside and slowly cooled on the inside during the glass-making process. This causes compressive stress on the outside of the bottle and tensile stress on the inside, making the inside surface susceptible to damage which can release the internal stresses and shatter the bottle. The glass is not annealed; reheating the glass and then allowing it to cool slowly will remove these unique properties.
Uses
Because of the seemingly paradoxical nature of the glass (being both extremely durable and extremely fragile), Bologna bottles are often used as props in magic tricks, where the bottle can be shattered by rattling a small object inside it.
History
Mentioned in publications of the Royal Society around the 1740s, the Bologna bottle is named for Bologna, Italy, where it was first discovered. During this period, a glassblower would create a Bologna bottle by leaving the bottle in the open air instead of immediately placing it back into the furnace to cool slowly (annealing). This produced a special phenomenon: the bottle would remain intact even when dropped from a distance onto a brick floor, but would immediately rupture if a small piece of flint were placed inside.
Although |
https://en.wikipedia.org/wiki/Digital%20down%20converter | In digital signal processing, a digital down-converter (DDC) converts a digitized, band-limited signal to a lower frequency signal at a lower sampling rate in order to simplify the subsequent radio stages. The process can preserve all the information in the frequency band of interest of the original signal. The input and output signals can be real or complex samples. Often the DDC converts from the raw radio frequency or intermediate frequency down to a complex baseband signal.
Architecture
A DDC consists of three subcomponents: a direct digital synthesizer (DDS), a low-pass filter (LPF), and a downsampler (which may be integrated into the low-pass filter).
The DDS generates a complex sinusoid at the intermediate frequency (IF). Multiplication of this sinusoid with the input signal creates images centered at the sum and difference frequencies (which follows from the frequency-shifting properties of the Fourier transform). The lowpass filter passes the difference (i.e. baseband) frequency while rejecting the sum-frequency image, resulting in a complex baseband representation of the original signal. Assuming judicious choice of IF and LPF bandwidth, the complex baseband signal is mathematically equivalent to the original signal. In its new form, it can readily be downsampled and is more convenient for many DSP algorithms.
Any suitable low-pass filter can be used including FIR, IIR and CIC filters. The most common choice is a FIR filter for low amounts of decimation (less than ten) or a CIC filter followed by a FIR filter for larger downsampling ratios.
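The three stages can be sketched as follows. The sample rates, frequencies, and windowed-sinc filter design below are illustrative assumptions, not values from the text:

```python
import numpy as np

fs = 8000.0      # input sample rate (Hz)
f_if = 2000.0    # DDS / intermediate frequency (Hz)
f_sig = 2100.0   # test tone, 100 Hz above the IF
decim = 8        # downsampling ratio

t = np.arange(8000) / fs
x = np.cos(2 * np.pi * f_sig * t)  # real input signal

# 1) DDS mix: multiply by a complex sinusoid at -f_if, shifting the tone down
#    to +100 Hz and creating an unwanted image at -(f_sig + f_if) = -4100 Hz.
bb = x * np.exp(-2j * np.pi * f_if * t)

# 2) Low-pass filter: a 101-tap windowed-sinc FIR with 400 Hz cutoff rejects
#    the sum-frequency image while passing the baseband tone.
ntaps, fc = 101, 400.0
n = np.arange(ntaps) - (ntaps - 1) / 2
h = np.sinc(2 * fc / fs * n) * np.hamming(ntaps)
h /= h.sum()  # unity gain at DC
bb = np.convolve(bb, h, mode="same")

# 3) Downsample to fs / decim = 1000 Hz.
y = bb[::decim]

# The surviving tone sits at f_sig - f_if = 100 Hz in the complex baseband.
spec = np.abs(np.fft.fft(y))
freqs = np.fft.fftfreq(len(y), d=decim / fs)
peak = freqs[np.argmax(spec)]
print(round(peak))  # 100
```

A CIC-plus-FIR chain, as mentioned above, replaces the single FIR stage when the decimation ratio is large, but the signal flow is the same.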
Variations on the DDC
Several variations on the DDC are useful, including many that input a feedback signal into the DDS. These include:
Decision directed carrier recovery phase locked loops in which the I and Q are compared to the nearest ideal constellation point of a PSK signal, and the resulting error signal is filtered and fed back into the DDS
A Costas loop in which the I and Q are multiplied |
https://en.wikipedia.org/wiki/Automorphic%20factor | In mathematics, an automorphic factor is a certain type of analytic function, defined on subgroups of SL(2,R), appearing in the theory of modular forms. The general case, for general groups, is reviewed in the article 'factor of automorphy'.
Definition
An automorphic factor of weight k is a function
ν : Γ × ℍ → ℂ
satisfying the four properties given below. Here, ℍ and ℂ refer to the upper half-plane and the complex plane, respectively, and Γ is a subgroup of SL(2,R), such as, for example, a Fuchsian group. An element γ ∈ Γ is a 2×2 matrix
γ = [a b; c d]
with a, b, c, d real numbers, satisfying ad − bc = 1.
An automorphic factor must satisfy:
For a fixed γ ∈ Γ, the function z ↦ ν(γ, z) is a holomorphic function of z ∈ ℍ.
For all z ∈ ℍ and γ ∈ Γ, one has |ν(γ, z)| = |cz + d|^k for a fixed real number k.
For all z ∈ ℍ and γ, δ ∈ Γ, one has ν(γδ, z) = ν(γ, δz) ν(δ, z). Here, δz is the fractional linear transform of z by δ.
If −I ∈ Γ, then for all z ∈ ℍ and γ ∈ Γ, one has ν(−γ, z) = ν(γ, z). Here, I denotes the identity matrix.
Properties
Every automorphic factor may be written as
ν(γ, z) = ϑ(γ) (cz + d)^k
with |ϑ(γ)| = 1.
The function ϑ is called a multiplier system. Clearly, ϑ(I) = 1,
while, if −I ∈ Γ, then ϑ(−I) = e^(−iπk),
which equals (−1)^k when k is an integer. |
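A standard worked example, classical but not spelled out in the excerpt: for modular forms one takes ν(γ, z) = (cz + d)^k, and the consistency (cocycle) property of j(γ, z) = cz + d follows directly from matrix multiplication:

```latex
% For \gamma_i = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix},
% the bottom row of \gamma_1\gamma_2 is (c_1 a_2 + d_1 c_2,\; c_1 b_2 + d_1 d_2), hence
\begin{aligned}
j(\gamma_1\gamma_2, z)
  &= (c_1 a_2 + d_1 c_2)\,z + (c_1 b_2 + d_1 d_2) \\
  &= c_1 (a_2 z + b_2) + d_1 (c_2 z + d_2) \\
  &= (c_2 z + d_2)\Bigl(c_1\,\tfrac{a_2 z + b_2}{c_2 z + d_2} + d_1\Bigr) \\
  &= j(\gamma_2, z)\, j(\gamma_1, \gamma_2 z).
\end{aligned}
```

Raising both sides to the k-th power shows that (cz + d)^k is an automorphic factor of weight k with trivial multiplier system.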
https://en.wikipedia.org/wiki/D-37C | The D-37C (D37C) is the computer component of the all-inertial NS-17 Missile Guidance Set (MGS), which accurately navigates the missile to targets thousands of miles away. The NS-17 MGS was used in the Minuteman II (LGM-30F) ICBM. The MGS, originally designed and produced by the Autonetics Division of North American Aviation, could store multiple preprogrammed targets in its internal memory.
Unlike other methods of navigation, inertial guidance does not rely on observations of land positions or the stars, radio or radar signals, or any other information from outside the vehicle. Instead, the inertial navigator provides the guidance information using gyroscopes that indicate direction and accelerometers that measure changes in speed and direction. A computer then uses this information to calculate the vehicle's position and guide it on its course. Enemies could not "jam" the system with false or confusing information.
The Ogden Air Logistics Center at Hill AFB has been Program Manager for the Minuteman ICBM family since January 1959. The base has had complete logistics management responsibilities for Minuteman and the rest of the ICBM fleet since July 1965.
The D-37C computer consists of four main sections: the memory, the central processing unit (CPU), and the input and output units. These sections are enclosed in one case. The memory is a two-sided, fixed-head disk which rotates
at 6000 rpm. It contains 7222 words of 27 bits. Each word contains 24 data bits and three spacer bits not available to the programmer. The memory is arranged in 56 channels of 128 words each plus ten rapid access channels of one to sixteen words. The memory also includes the accumulators and instruction register.
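The stated figures are self-consistent; a quick illustrative check (assuming the 7222-word total includes the rapid-access channels):

```python
# 56 channels of 128 words each account for the bulk of the disk memory
main_store = 56 * 128             # words in the main channels
rapid_access = 7222 - main_store  # words left for the ten rapid-access channels

# At 6000 rpm, average rotational latency is half a revolution
rev_per_s = 6000 / 60
avg_latency_ms = 0.5 * 1000 / rev_per_s

print(main_store, rapid_access, avg_latency_ms)  # 7168 54 5.0
```

The 54 remaining words fit the stated "ten rapid access channels of one to sixteen words", and the 5 ms average latency illustrates why rapid-access channels (short loops repeated around the disk) were needed at all.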
The MM II missile was deployed with a D-37C disk computer. Autonetics also programmed functional simulators for flight program development and testing, and the code inserter verifier that was used at Wing headquarters to generate the codes to go into the airborne computer. It became ne |
https://en.wikipedia.org/wiki/Medial%20vestibular%20nucleus | The medial vestibular nucleus (Schwalbe nucleus) is one of the vestibular nuclei. It is located in the medulla oblongata.
Lateral vestibulospinal tract (from the lateral vestibular nucleus of Deiters): via the ventrolateral medulla and spinal cord to the ventral funiculus (lumbosacral segments); projects ipsilaterally, for posture.
Medial vestibulospinal tract (from the medial, lateral, and inferior vestibular nuclei): a bilateral projection via the descending medial longitudinal fasciculus (MLF) to the cervical segments, for head, neck, and eye movements.
It is one of the nuclei that corresponds to CN VIII, corresponding to the vestibular nerve, which joins with the cochlear nerve.
It receives its blood supply from the Posterior Inferior Cerebellar Artery, which is compromised in the lateral medullary syndrome.
See also
Vestibular nerve
Vestibular system |
https://en.wikipedia.org/wiki/Computational%20lexicology | Computational lexicology is a branch of computational linguistics, which is concerned with the use of computers in the study of lexicon. It has been more narrowly described by some scholars (Amsler, 1980) as the use of computers in the study of machine-readable dictionaries. It is distinguished from computational lexicography, which more properly would be the use of computers in the construction of dictionaries, though some researchers have used computational lexicography as synonymous.
History
Computational lexicology emerged as a separate discipline within computational linguistics with the appearance of machine-readable dictionaries, starting with the creation of the machine-readable tapes of the Merriam-Webster Seventh Collegiate Dictionary and the Merriam-Webster New Pocket Dictionary in the 1960s by John Olney et al. at System Development Corporation. Today, computational lexicology is best known through the creation and applications of WordNet. As computational processing power increased over time, computational lexicology came to be applied ubiquitously in text analysis. In 1987, Byrd, Calzolari, and Chodorow, among others, developed computational tools for text analysis. In particular, the model was designed for coordinating the associations involving the senses of polysemous words.
Study of lexicon
Computational lexicology has contributed to the understanding of the content and limitations of print dictionaries for computational purposes (i.e. it clarified that the previous work of lexicography was not sufficient for the needs of computational linguistics). Through the work of computational lexicologists almost every portion of a print dictionary entry has been studied ranging from:
what constitutes a headword - used to generate spelling correction lists;
what variants and inflections the headword forms - used to empirically understand morphology;
how the headword is delimited into syllables;
how the headword is prono |
https://en.wikipedia.org/wiki/Lycurgus%20Cup | The Lycurgus Cup is a 4th-century Roman glass cage cup made of a dichroic glass, which shows a different colour depending on whether or not light is passing through it: red when lit from behind and green when lit from in front. It is the only complete Roman glass object made from this type of glass, and the one exhibiting the most impressive change in colour; it has been described as "the most spectacular glass of the period, fittingly decorated, which we know to have existed".
The cup is also a very rare example of a complete Roman cage-cup, or diatretum, where the glass has been painstakingly cut and ground back to leave only a decorative "cage" at the original surface-level. Many parts of the cage have been completely undercut. Most cage-cups have a cage with a geometric abstract design, but here there is a composition with figures, showing the mythical King Lycurgus, who (depending on the version) tried to kill Ambrosia, a follower of the god Dionysus (Bacchus to the Romans). She was transformed into a vine that twined around the enraged king and restrained him, eventually killing him. Dionysus and two followers are shown taunting the king. The cup is the "only well-preserved figural example" of a cage cup.
The dichroic effect is achieved by making the glass with tiny proportions of nanoparticles of gold and silver dispersed in colloidal form throughout the glass material. The process used remains unclear, and it is likely that it was not well understood or controlled by the makers, and was probably discovered by accidental "contamination" with minutely ground gold and silver dust. The glass-makers may not even have known that gold was involved, as the quantities involved are so tiny; they may have come from a small proportion of gold in any silver added (most Roman silver contains small proportions of gold), or from traces of gold or gold leaf left by accident in the workshop, as residue on tools, or from other work. The very few other surviving fragmen |
https://en.wikipedia.org/wiki/Leucyl%20aminopeptidase | Leucyl aminopeptidases (, leucine aminopeptidase, LAPs, leucyl peptidase, peptidase S, cytosol aminopeptidase, cathepsin III, L-leucine aminopeptidase, leucinaminopeptidase, leucinamide aminopeptidase, FTBL proteins, proteinates FTBL, aminopeptidase II, aminopeptidase III, aminopeptidase I) are enzymes that preferentially catalyze the hydrolysis of leucine residues at the N-terminus of peptides and proteins. Other N-terminal residues can also be cleaved, however. LAPs have been found across superkingdoms. Identified LAPs include human LAP, bovine lens LAP, porcine LAP, Escherichia coli (E. coli) LAP (also known as PepA or XerB), and the solanaceous-specific acidic LAP (LAP-A) in tomato (Solanum lycopersicum).
Enzyme description, structure, and active site
The active sites in PepA and in bovine lens LAP have been found to be similar. A model for the active site of LAP-A in tomato has been proposed based on the work of Strater et al. It is also known that the biochemistry of the LAPs from these three kingdoms is very similar. PepA, bovine lens LAP, and LAP-A preferentially cleave N-terminal leucine, arginine, and methionine residues. These enzymes are all metallopeptidases requiring divalent metal cations for their enzymatic activity; they are active in the presence of Mn2+, Mg2+ and Zn2+. These enzymes also have high pH and temperature optima: at pH 8, the highest enzymatic activity is seen at 60 °C. PepA, bovine lens LAP and LAP-A are also known to form hexamers in vivo. Gu et al. (1999) demonstrated that six 55 kDa enzymatically inactive LAP-A protomers come together to form the 353 kDa bioactive LAP-A hexamer. Structures of the bovine lens LAP protomer and the biologically active hexamer have been determined and can be found in the Protein Data Bank (2J9A).
Mechanism(s)
Historically, the mechanisms of carboxypeptidases and endoproteases have been much better studied and understood by researchers (Ref #6 Lipscomb |
https://en.wikipedia.org/wiki/Carboxypeptidase%20A | Carboxypeptidase A usually refers to the pancreatic exopeptidase that hydrolyzes peptide bonds of C-terminal residues with aromatic or aliphatic side-chains. Most scientists in the field now refer to this enzyme as CPA1, and to a related pancreatic carboxypeptidase as CPA2.
Types
In addition, there are 4 other mammalian enzymes named CPA-3 through CPA-6, and none of these are expressed in the pancreas. Instead, these other CPA-like enzymes have diverse functions.
CPA3 (also known as mast-cell CPA) is involved in the digestion of proteins by mast cells.
CPA4 (previously known as CPA-3, but renumbered when mast-cell CPA was designated CPA-3) may be involved in tumor progression, but this enzyme has not been well studied.
CPA5 has not been well studied.
CPA6 is expressed in many tissues during mouse development, and in adult shows a more limited distribution in brain and several other tissues. CPA6 is present in the extracellular matrix where it is enzymatically active. A human mutation of CPA-6 has been linked to Duane's syndrome (abnormal eye movement). Recently, mutations in CPA6 were found to be linked to epilepsy. CPA6 is also one of several enzymes which degrade enkephalins.
Function
CPA-1 and CPA-2 (and, it is presumed, all other CPAs) employ a zinc ion within the protein for hydrolysis of the peptide bond at the C-terminal end of an amino acid residue. Loss of the zinc leads to loss of activity; the zinc can easily be replaced, including by some other divalent metals (cobalt, nickel). Carboxypeptidase A is produced in the pancreas and is crucial to many processes in the human body, including digestion, post-translational modification of proteins, blood clotting, and reproduction.
Applications
This vast scope of functionality for a single protein makes it the ideal model for research regarding other zinc proteases of unknown structure. Recent biomedical research on collagenase, enkephalinase, and angiotensin-converting enzyme used carboxy |
https://en.wikipedia.org/wiki/Schouten%E2%80%93Nijenhuis%20bracket | In differential geometry, the Schouten–Nijenhuis bracket, also known as the Schouten bracket, is a type of graded Lie bracket defined on multivector fields on a smooth manifold extending the Lie bracket of vector fields. There are two different versions, both rather confusingly called by the same name. The most common version is defined on alternating multivector fields and makes them into a Gerstenhaber algebra, but there is also another version defined on symmetric multivector fields, which is more or less the same as the Poisson bracket on the cotangent bundle. It was invented by Jan Arnoldus Schouten (1940, 1953) and its properties were investigated by his student Albert Nijenhuis (1955). It is related to but not the same as the Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket.
Definition and properties
An alternating multivector field is a section of the exterior algebra ∧∗TM over the tangent bundle of a manifold M. The alternating multivector fields form a graded supercommutative ring with the product of a and b written as ab (some authors use a∧b). This is dual to the usual algebra of differential forms Ω∗M by the pairing on homogeneous elements:
The degree of a multivector A in ∧pTM is defined to be |A| = p.
The skew symmetric Schouten–Nijenhuis bracket is the unique extension of the Lie bracket of vector fields to a graded bracket on the space of alternating multivector fields that makes the alternating multivector fields into a Gerstenhaber algebra.
It is given in terms of the Lie bracket of vector fields by
[a1 ∧ ⋯ ∧ am, b1 ∧ ⋯ ∧ bn] = Σi,j (−1)^(i+j) [ai, bj] ∧ a1 ∧ ⋯ âi ⋯ ∧ am ∧ b1 ∧ ⋯ b̂j ⋯ ∧ bn
for vector fields ai, bj (a hat marking an omitted factor), and
[f, a] = −ι_df a
for a multivector field a and smooth function f, where ι_df is the common interior product operator.
It has the following properties.
|ab| = |a| + |b| (The product has degree 0)
|[a,b]| = |a| + |b| − 1 (The Schouten–Nijenhuis bracket has degree −1)
(ab)c = a(bc), ab = (−1)^(|a||b|)ba (the product is associative and (super)commutative)
[a, bc] = [a, b]c + (−1)^(|b|(|a| − 1))b[a, c] (Poisson identity)
[a, |
https://en.wikipedia.org/wiki/A-weighting | A-weighting is the most commonly used of a family of curves defined in the International standard IEC 61672:2003 and various national standards relating to the measurement of sound pressure level. A-weighting is applied to instrument-measured sound levels in an effort to account for the relative loudness perceived by the human ear, as the ear is less sensitive to low audio frequencies. It is employed by arithmetically adding a table of values, listed by octave or third-octave bands, to the measured sound pressure levels in dB. The resulting octave band measurements are usually added (logarithmic method) to provide a single A-weighted value describing the sound; the units are written as dB(A). Other weighting sets of values – B, C, D and now Z – are discussed below.
The curves were originally defined for use at different average sound levels, but A-weighting, though originally intended only for the measurement of low-level sounds (around 40 phon), is now commonly used for the measurement of environmental noise and industrial noise, as well as when assessing potential hearing damage and other noise health effects at all sound levels; indeed, the use of A-frequency-weighting is now mandated for all these measurements, because decades of field experience have shown a very good correlation with occupational deafness in the frequency range of human speech. It is also used when measuring low-level noise in audio equipment, especially in the United States. In Britain, Europe and many other parts of the world, broadcasters and audio engineers more often use the ITU-R 468 noise weighting, which was developed in the 1960s based on research by the BBC and other organizations. This research showed that our ears respond differently to random noise, and the equal-loudness curves on which the A, B and C weightings were based are really only valid for pure single tones.
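The A-weighting curve itself is defined analytically in IEC 61672. A sketch of the standard expression (the pole frequencies 20.6, 107.7, 737.9 and 12194 Hz and the ~2.00 dB normalization offset are the published values; treat this as illustrative, not a calibrated implementation):

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per the analytic curve
    in IEC 61672; the +2.00 dB offset normalises 1 kHz to approximately 0 dB."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(100), 1))  # -19.1: low frequencies strongly attenuated
```

The strong attenuation at 100 Hz relative to 1 kHz is exactly the reduced low-frequency sensitivity of the ear that the weighting is meant to model.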
History
A-weighting began with work by Fletcher and Munson which resulted in their publication, in 1933, of |
https://en.wikipedia.org/wiki/Background%20extinction%20rate | Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rate during periods between major extinction events. Five mass extinctions have occurred over the course of Earth's history, each with a variety of causes.
Overview
Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it.
Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years.
Measurement
Background extinction rates are typically measured over a defined period of time in order to classify the extinction risk of a species. There are three different ways to calculate the background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on planet Earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct.
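The million-species-years bookkeeping above reduces to simple rate arithmetic; a minimal illustration using only the numbers in the text:

```python
RATE_PER_MSY = 1.0  # ~1 extinction per million species-years (background rate)

def expected_extinctions(n_species: float, years: float) -> float:
    """Expected background extinctions for n_species watched over `years`."""
    return RATE_PER_MSY * n_species * years / 1_000_000

print(expected_extinctions(1_000_000, 1))  # 1.0: a million species, one year
print(expected_extinctions(1, 1_000_000))  # 1.0: one species, a million years
```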
Lifespan estimates
Some species lifespan es |
https://en.wikipedia.org/wiki/Vehicle%20Area%20Network | The Vehicle Area Network (VAN) is a vehicle bus developed by PSA Peugeot Citroën and Renault. It is a serial protocol capable of speeds up to 125 kbit/s and is standardised in ISO 11519-3.
At the media layer, VAN is a differential bus with dominant and recessive states signalling ones and zeros much like CAN bus. The data is encoded using enhanced Manchester which sets it apart from almost every other line signalling protocol. This encodes blocks of 4 bits as 3 non-return-to-zero encoded bits followed by 1 Manchester encoded bit. |
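The nibble encoding described above can be sketched as follows. This is an illustrative model only: the bit ordering and Manchester polarity are assumptions for the example, not taken from ISO 11519-3.

```python
def e_manchester_encode(nibble: int) -> list[int]:
    """Encode 4 data bits as 3 NRZ bit cells plus one Manchester cell
    (two half-cells). MSB-first order and polarity are illustrative choices."""
    bits = [(nibble >> i) & 1 for i in (3, 2, 1, 0)]  # assumed MSB first
    out = bits[:3]          # first three bits sent as-is (NRZ)
    b = bits[3]
    out += [b, b ^ 1]       # fourth bit as a Manchester pair: guaranteed edge
    return out

print(e_manchester_encode(0b1010))  # [1, 0, 1, 0, 1]
```

The guaranteed transition in every fourth bit cell gives receivers a regular edge for clock recovery while keeping most of the bandwidth efficiency of NRZ, which is the point of the hybrid scheme.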
https://en.wikipedia.org/wiki/IRC%20poker | IRC poker was a form of poker played over the IRC chat protocol before the surge in popularity of online poker in the early 2000s. A computer program was used to deal and manage the games. Commands could be typed in directly with a standard IRC client but point-and-click graphical clients were soon developed. The ability to message the dealer program directly before one's turn to act made games flow more quickly than face to face games.
IRC poker was played with imaginary money, but attracted a devoted following of experts. World Series of Poker champion Chris Ferguson got his start playing IRC poker.
IRC poker offered limit Texas hold 'em, limit Omaha hold 'em (Hi-Lo), no-limit Texas hold 'em, and tournaments. Each account was limited to "buying" 1000 chips per day, but there were no restrictions on creating new accounts so some players created multiple accounts and "harvested" the chips in fake games.
Players in no-limit automatically bought in for the full value of their account; the most successful accumulated over one million chips and joked about selling their accounts.
Tournaments often started with the theoretical maximum of 23 players at one table and could be completed in less than an hour.
A poker playing program, r00lbot, was able to maintain a winning record, and provided amusing quotes as well.
|
https://en.wikipedia.org/wiki/Quasi-birth%E2%80%93death%20process | In queueing models, a discipline within the mathematical theory of probability, the quasi-birth–death process describes a generalisation of the birth–death process. As with the birth-death process it moves up and down between levels one at a time, but the time between these transitions can have a more complicated distribution encoded in the blocks.
Discrete time
The stochastic matrix describing the Markov chain has block structure

    P = [ A1* A2*            ]
        [ A0* A1  A2         ]
        [     A0  A1  A2     ]
        [          ⋱  ⋱  ⋱  ]

where each of A0, A1 and A2 are matrices and A0*, A1* and A2* are irregular matrices for the first and second levels.
Continuous time
The transition rate matrix for a quasi-birth-death process has a tridiagonal block structure

    Q = [ B00 B01            ]
        [ B10 A1  A2         ]
        [     A0  A1  A2     ]
        [          ⋱  ⋱  ⋱  ]

where each of B00, B01, B10, A0, A1 and A2 are matrices. The process can be viewed as a two-dimensional chain where the blocks are called levels and the intra-block structure phases. When describing the process by both level and phase it is a continuous-time Markov chain, but when considering levels only it is a semi-Markov process (as transition times are then not exponentially distributed).
Usually the blocks have finitely many phases, but models like the Jackson network can be considered as quasi-birth-death processes with infinitely (but countably) many phases.
Stationary distribution
The stationary distribution of a quasi-birth-death process can be computed using the matrix geometric method. |
https://en.wikipedia.org/wiki/Bracket%20%28mathematics%29 | In mathematics, brackets of various typographical forms, such as parentheses ( ), square brackets [ ], braces { } and angle brackets ⟨ ⟩, are frequently used in mathematical notation. Generally, such bracketing denotes some form of grouping: in evaluating an expression containing a bracketed sub-expression, the operators in the sub-expression take precedence over those surrounding it. Sometimes, for the clarity of reading, different kinds of brackets are used to express the same meaning of precedence in a single expression with deep nesting of sub-expressions.
Historically, other notations, such as the vinculum generally, were similarly used for grouping. In present-day use, these notations all have specific meanings. The earliest use of brackets to indicate aggregation (i.e. grouping) was suggested in 1608 by Christopher Clavius, and in 1629 by Albert Girard.
Symbols for representing angle brackets
A variety of different symbols are used to represent angle brackets. In e-mail and other ASCII text, it is common to use the less-than (<) and greater-than (>) signs to represent angle brackets, because ASCII does not include angle brackets.
Unicode has pairs of dedicated characters; other than less-than and greater-than symbols, these include:
and
and
and
and
and , which are deprecated
In LaTeX the markup is \langle and \rangle: ⟨ and ⟩.
Non-mathematical angled brackets include:
and , used in East-Asian text quotation
and , which are dingbats
There are additional dingbats with increased line thickness, and some angle quotation marks and deprecated characters.
Algebra
In elementary algebra, parentheses ( ) are used to specify the order of operations. Terms inside the bracket are evaluated first; hence 2×(3 + 4) is 14, is 2 and (2×3) + 4 is 10. This notation is extended to cover more general algebra involving variables: for example . Square brackets are also often used in place of a second set of parentheses when they are nested—so as to provide a v |
https://en.wikipedia.org/wiki/Search-based%20software%20engineering | Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large scale software engineering problems because of their computational complexity or their assumptions on the problem structure. Researchers and practitioners use metaheuristic search techniques, which impose little assumptions on the problem structure, to find near-optimal or "good-enough" solutions.
SBSE problems can be divided into two types:
black-box optimization problems, for example, assigning people to tasks (a typical combinatorial optimization problem).
white-box problems where operations on source code need to be considered.
Definition
SBSE converts a software engineering problem into a computational search problem that can be tackled with a metaheuristic. This involves defining a search space, or the set of possible solutions. This space is typically too large to be explored exhaustively, suggesting a metaheuristic approach. A metric (also called a fitness function, cost function, objective function or quality measure) is then used to measure the quality of potential solutions. Many software engineering problems can be reformulated as a computational search problem.
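As a toy instance of this formulation (entirely invented for illustration): selecting a subset of regression tests that maximises branch coverage at modest runtime cost, with a simple hill climber as the metaheuristic and coverage-minus-weighted-cost as the fitness function. The coverage and cost data are hypothetical.

```python
import random

# Hypothetical data: branches covered by each test, and each test's runtime cost
coverage = {0: {1, 2}, 1: {2, 3, 4}, 2: {5}, 3: {1, 5, 6}, 4: {4, 6}}
cost = {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5, 4: 1.0}

def fitness(selected: frozenset[int]) -> float:
    """Branches covered, minus a small penalty per unit of runtime."""
    covered = set().union(*(coverage[t] for t in selected)) if selected else set()
    return len(covered) - 0.1 * sum(cost[t] for t in selected)

def hill_climb(steps: int = 200, seed: int = 0) -> frozenset[int]:
    """Flip one test in or out per step; accept non-worsening neighbours."""
    rng = random.Random(seed)
    current = frozenset()
    for _ in range(steps):
        neighbour = current ^ {rng.choice(list(coverage))}
        if fitness(neighbour) >= fitness(current):
            current = neighbour
    return current

best = hill_climb()
print(sorted(best), round(fitness(best), 2))
```

Exhaustive search over the 2^5 subsets is trivial here, but the point of the encoding is that the same move operator and fitness function still work when the search space is far too large to enumerate.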
The term "search-based application", in contrast, refers to using search-engine technology, rather than search techniques, in another industrial application.
Brief history
One of the earliest attempts to apply optimization to a software engineering problem was reported by Webb Miller and David Spooner in 1976 in the area of software testing. In 1992, S. Xanthakis and his colleagues applied a search technique to a software engineering problem for the first time. The term SBSE was first used in 2001 by Harman a |
https://en.wikipedia.org/wiki/Mission%20and%20Spacecraft%20Library | Mission and Spacecraft Library (MSL) is a reference web site, maintained by NASA, containing information about satellites. It contains a catalog of several hundreds of satellites from different countries. The site was founded by Jet Propulsion Laboratory employee Mike Evans, who had a vision of building a database that would be publicly accessible via the World Wide Web. The site has not been updated since 1999. |
https://en.wikipedia.org/wiki/Helical%20growth | Helical growth is the expansion of cells or organs into a helical shape, typically involving the breakage of symmetry. It is seen in fungi, algae, and the cells or organs of higher plants. Helical growth can occur naturally, such as in tendrils or in twining plants. Asymmetry can be caused by changes in pectin or through mutation, and results in left- or right-handed helices. Tendril perversion, during which a tendril curves in opposite directions at each end, is seen in many cases. The helical growth of twining plants is based on the circumnutational movement, or circular growth, of stems. Most twining plants have right-handed helices regardless of the hemisphere in which they grow.
Helical growth in single cells, such as the fungi genus Phycomyces or the algae genus Nitella, is thought to be caused by a helical arrangement of structural biological material in the cell wall. In mutant thale cress, helical growth is seen at the organ level. Analysis strongly suggests that cortical microtubules have an important role in controlling the direction of organ expansion. It is unclear how helical growth mutations affect thale cress cell wall assembly.
In the spiral3 mutant of a conserved GRIP1 gene, a missense mutation causes a left-handed helical organization of cortical microtubules and severe right-handed helical growth. This mutation compromises interactions between the proteins GCP2 and GCP3 in yeast. While the efficiency of microtubule dynamics and nucleation was not noticeably affected, cortical microtubule angles were less narrow and more widely distributed. |
https://en.wikipedia.org/wiki/Spectrum%20bias | In biostatistics, spectrum bias refers to the phenomenon that the performance of a diagnostic test may vary in different clinical settings because each setting has a different mix of patients. Because the performance may be dependent on the mix of patients, performance at one clinic may not be predictive of performance at another clinic. These differences are interpreted as a kind of bias. Mathematically, the spectrum bias is a sampling bias and not a traditional statistical bias; this has led some authors to refer to the phenomenon as spectrum effects, whilst others maintain it is a bias if the true performance of the test differs from that which is 'expected'. Usually the performance of a diagnostic test is measured in terms of its sensitivity and specificity and it is changes in these that are considered when referring to spectrum bias. However, other performance measures such as the likelihood ratios may also be affected by spectrum bias.
Generally spectrum bias is considered to have three causes. The first is due to a change in the case-mix of those patients with the target disorder (disease) and this affects the sensitivity. The second is due to a change in the case-mix of those without the target disorder (disease-free) and this affects the specificity. The third is due to a change in the prevalence, and this affects both the sensitivity and specificity. This final cause is not widely appreciated, but there is mounting empirical evidence as well as theoretical arguments which suggest that it does indeed affect a test's performance.
Examples where the sensitivity and specificity change between different sub-groups of patients may be found with the carcinoembryonic antigen test and urinary dipstick tests.
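The first two causes can be made concrete with a small numerical sketch (all numbers hypothetical): if a test detects severe disease better than mild disease, its overall sensitivity is a case-mix-weighted average, so it differs between clinics seeing different patients.

```python
# Hypothetical per-subgroup sensitivities of a single diagnostic test
sens = {"mild": 0.60, "severe": 0.95}

def overall_sensitivity(case_mix: dict[str, float]) -> float:
    """Observed sensitivity at a clinic = subgroup sensitivities weighted by
    that clinic's proportions of each disease subgroup (proportions sum to 1)."""
    return sum(case_mix[g] * sens[g] for g in sens)

primary_care = {"mild": 0.8, "severe": 0.2}  # mostly mild disease
referral = {"mild": 0.2, "severe": 0.8}      # mostly severe disease

print(round(overall_sensitivity(primary_care), 2))  # 0.67
print(round(overall_sensitivity(referral), 2))      # 0.88
```

The same test thus appears markedly "better" at the referral centre, even though nothing about the test itself changed; this is the spectrum effect in miniature.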
Diagnostic test performances reported by some studies may be artificially overestimated if it is a case-control design where a healthy population ('fittest of the fit') is compared with a population with advanced disease ('sickest of the sick'); that is tw |
https://en.wikipedia.org/wiki/MYO7A | Myosin VIIA is a protein that in humans is encoded by the MYO7A gene. Myosin VIIA is a member of the unconventional myosin superfamily of proteins. Myosins are actin-binding molecular motors that use the enzymatic conversion of ATP to ADP and inorganic phosphate (Pi) to provide the energy for movement.
Myosins are mechanochemical proteins characterized by the presence of a motor domain, an actin-binding domain, a neck domain that interacts with other proteins, and a tail domain that serves as an anchor. Myosin VIIA is an unconventional myosin with the longest tail (1360 aa). The tail is expected to dimerize, resulting in a two-headed molecule. Unconventional myosins have diverse functions in eukaryotic cells and are primarily thought to be involved in the movement or linkage of intra-cellular membranes and organelles to the actin cytoskeleton via interactions mediated by their highly divergent tail domains.
MYO7A is expressed in a number of mammalian tissues, including testis, kidney, lung, inner ear, retina and the ciliated epithelium of the nasal mucosa.
Clinical significance
Mutations in the MYO7A gene cause the Usher syndrome type 1B, a combined deafness/blindness disorder. Affected individuals are typically profoundly deaf at birth and then undergo progressive retinal degeneration.
Model organisms
Model organisms have been used in the study of MYO7A function. A spontaneous mutant mouse line, called Myo7ash1-6J was generated. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-three tests were carried out on mutant mice and ten significant abnormalities were observed. Male homozygous mutant mice displayed a decreased body weight, a decrease in body fat, improved glucose tolerance and abnormal pelvic girdle bone morphology. Homozygous mutant mice of both sexes displayed various abnormalities in a modified SHIRPA test, including abnormal gait, tail dragging and an absence of pinna reflex, a decrease in gri |
https://en.wikipedia.org/wiki/7-Aminoactinomycin%20D | 7-Aminoactinomycin D (7-AAD) is a fluorescent chemical compound with a strong affinity for DNA. It is used as a fluorescent marker for DNA in fluorescence microscopy and flow cytometry. It intercalates in double-stranded DNA, with a high affinity for GC-rich regions, making it useful for chromosome banding studies.
Applications
With an absorption maximum at 546 nm, 7-AAD is efficiently excited using a 543 nm helium–neon laser; it can also be excited, with somewhat lower efficiency, using the 488 nm or 514 nm argon laser lines. Its emission has a very large Stokes shift with a maximum in the deep red: 647 nm. 7-AAD is therefore compatible with most blue and green fluorophores – and even many red fluorophores – in multicolour applications.
7-AAD does not readily pass through intact cell membranes; if it is to be used as a stain for imaging DNA fluorescence, the cell membrane must be permeabilized or disrupted. This method can be used in combination with formaldehyde fixation of samples.
7-AAD is also used as a cell viability stain. Cells with compromised membranes will stain with 7-AAD, while live cells with intact cell membranes will remain dark. Viability of the cells in flow cytometry should be around 95% but not less than 90%.
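The dead-cell exclusion just described amounts to thresholding a fluorescence channel. A sketch with invented numbers (the threshold is instrument- and panel-dependent; 100.0 here is an arbitrary stand-in):

```python
def viability_percent(fluorescence: list[float], threshold: float = 100.0) -> float:
    """Percent of events below the 7-AAD threshold, i.e. live cells with
    intact membranes that excluded the dye. Threshold value is illustrative."""
    live = sum(1 for f in fluorescence if f < threshold)
    return 100.0 * live / len(fluorescence)

# 19 dim (live) events and 1 bright (7-AAD-positive, dead) event
events = [12.0] * 19 + [850.0]
v = viability_percent(events)
print(v)                                              # 95.0
assert v >= 90.0, "viability below the commonly used 90% floor"
```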
Actinomycin D
The related compound actinomycin D is nonfluorescent, but binds DNA in the same way as 7-AAD. Its absorbance changes when bound to DNA, and it can be used as a stain in conventional transmission microscopy. |