https://en.wikipedia.org/wiki/Control%20variates | The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.
Underlying principle
Let the unknown parameter of interest be μ, and assume we have a statistic m such that the expected value of m is μ: E[m] = μ, i.e. m is an unbiased estimator for μ. Suppose we calculate another statistic t such that E[t] = τ is a known value. Then
m* = m + c(t − τ)
is also an unbiased estimator for μ for any choice of the coefficient c.
The variance of the resulting estimator is
Var(m*) = Var(m) + c² Var(t) + 2c Cov(m, t).
By differentiating the above expression with respect to c, it can be shown that choosing the optimal coefficient
c* = −Cov(m, t) / Var(t)
minimizes the variance of m*. (Note that this coefficient is the same as the coefficient obtained from a linear regression.) With this choice,
Var(m*) = (1 − ρ²_{m,t}) Var(m),
where
ρ_{m,t} = Corr(m, t) = Cov(m, t) / √(Var(m) Var(t))
is the correlation coefficient of m and t. The greater the value of |ρ_{m,t}|, the greater the variance reduction achieved.
In the case that Var(t), Cov(m, t), and/or ρ_{m,t} are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
When the expectation of the control variable, E[t] = τ, is not known analytically, it is still possible to increase the precision in estimating μ (for a given fixed simulation budget), provided that two conditions are met: 1) evaluating t is significantly cheaper than computing m; 2) the magnitude of the correlation coefficient ρ_{m,t} is close to unity.
Example
We would like to estimate
I = ∫₀¹ 1/(1 + x) dx = ln 2 ≈ 0.6931
using Monte Carlo integration. This integral is the expected value of f(U), where
f(U) = 1/(1 + U)
and U follows a uniform distribution [0, 1].
Using a sample of size n, denote the points in the sample as u₁, …, uₙ. Then the estimate is given by
I ≈ (1/n) Σᵢ f(uᵢ).
Now we introduce t = 1 + U as a control variate with a known expected value E[t] = E[1 + U] = 3/2 and combine the two into a new estimate
I ≈ (1/n) Σᵢ f(uᵢ) + c ((1/n) Σᵢ (1 + uᵢ) − 3/2).
Using n realizations and the estimated optimal coefficient c*, we obtain the following result |
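The estimator above can be sketched in a few lines of Python (a minimal sketch, not from the article; the sample size and random seed are illustrative choices):

```python
# Monte Carlo estimate of I = ∫₀¹ 1/(1+x) dx = ln 2, with and without
# the control variate t = 1 + U (known mean E[t] = 3/2).
import math
import random

random.seed(1)
n = 10_000
u = [random.random() for _ in range(n)]
f = [1.0 / (1.0 + x) for x in u]   # samples of the integrand f(U)
t = [1.0 + x for x in u]           # control-variate samples, E[t] = 1.5

mean_f = sum(f) / n
mean_t = sum(t) / n

# Estimate the optimal coefficient c* = -Cov(f, t) / Var(t) from the sample
# (the same coefficient a linear regression of f on t would produce).
cov_ft = sum((a - mean_f) * (b - mean_t) for a, b in zip(f, t)) / (n - 1)
var_t = sum((b - mean_t) ** 2 for b in t) / (n - 1)
c_star = -cov_ft / var_t

plain = mean_f                                  # ordinary Monte Carlo estimate
controlled = mean_f + c_star * (mean_t - 1.5)   # control-variate estimate
```

Because f(U) and t = 1 + U are strongly (negatively) correlated, the controlled estimate typically has a much smaller variance than the plain Monte Carlo estimate at the same sample size.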
https://en.wikipedia.org/wiki/Top%20quark | The top quark, sometimes also referred to as the truth quark, (symbol: t) is the most massive of all observed elementary particles. It derives its mass from its coupling to the Higgs boson. This coupling is very close to unity; in the Standard Model of particle physics, it is the largest (strongest) coupling at the scale of the weak interactions and above. The top quark was discovered in 1995 by the CDF and DØ experiments at Fermilab.
Like all other quarks, the top quark is a fermion with spin 1/2 and participates in all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. It has an electric charge of +2/3 e. It has a mass of about 173 GeV/c², which is close to the rhenium atom mass. The antiparticle of the top quark is the top antiquark (symbol: t̄, sometimes called antitop quark or simply antitop), which differs from it only in that some of its properties have equal magnitude but opposite sign.
The top quark interacts with gluons of the strong interaction and is typically produced in hadron colliders via this interaction. However, once produced, the top (or antitop) can decay only through the weak force. It decays to a W boson and either a bottom quark (most frequently), a strange quark, or, on the rarest of occasions, a down quark.
The Standard Model determines the top quark's mean lifetime to be roughly 5×10⁻²⁵ s. This is about a twentieth of the timescale for strong interactions, and therefore it does not form hadrons, giving physicists a unique opportunity to study a "bare" quark (all other quarks hadronize, meaning that they combine with other quarks to form hadrons and can only be observed as such).
Because the top quark is so massive, its properties allowed indirect determination of the mass of the Higgs boson (see below). As such, the top quark's properties are extensively studied as a means to discriminate between competing theories of new physics beyond the Standard Model. The top quark is the only quark that has been directly observed. |
https://en.wikipedia.org/wiki/Ecological%20death | Ecological death is the inability of an organism to function in an ecological context, leading to death. This term can be used in many fields of biology to describe any species. In the context of aquatic toxicology, a toxic chemical, or toxicant, directly affects an aquatic organism but does not immediately kill it; instead it impairs an organism's normal ecological functions which then lead to death or lack of offspring. The toxicant makes the organism unable to function ecologically in some way, even though it does not suffer obviously from the toxicant. Ecological death may be caused by sublethal toxicological effects that can be behavioral, physiological, biochemical, or histological.
Types of sublethal effects causing ecological death
Sublethal effects consist of any effects on an organism caused by a toxicant that do not include death. These effects are generally not well observed in a shorter acute toxicity test; a longer, chronic toxicity test allows enough time for these effects to appear in an organism and lead to ecological death.
Behavioral effects
Toxicants can affect an organism's behavior, which, in aquatic organisms, may impact the ability to swim, feed or avoid predators. The impacted behavior can lead to an organism's death because it may starve or be eaten by predators. Toxicants may affect behavior by impairing the sensory systems that organisms depend on to collect information about their environment, or by impairing an organism's motivation to respond properly to sensory cues. If an organism is unable to use sensory cues effectively, it may be unable to respond to early warning signs of predation risk. Toxicants can also affect later stages of predation by impairing an organism's ability to respond to predators or follow through with escape strategies.
Physiological effects
Toxicants can affect an organism's physiology which may impact its growth, reproduction, and/or development. If an organism does not gr |
https://en.wikipedia.org/wiki/Modern%20drachma | The drachma was the official currency of modern Greece from 1832 until the launch of the euro in 2001.
First modern drachma
The drachma was reintroduced in May 1832, shortly before the establishment of the Kingdom of Greece. It replaced the phoenix at par. The drachma was subdivided into 100 lepta.
Coins
The first coinage consisted of copper denominations of 1λ, 2λ, 5λ and 10λ, silver denominations of ₯¼, ₯½, ₯1 and ₯5 and a gold coin of ₯20. The drachma coin weighed 4.5 g and contained 90% silver, with the ₯20 coin containing 5.8 g of gold.
In 1868, Greece joined the Latin Monetary Union and the drachma became equal in weight and value to the French franc. The new coinage issued consisted of copper coins of 1λ, 2λ, 5λ and 10λ, with the 5λ and 10λ coins bearing the names obolos and diobolon, respectively; silver coins of 20λ, 50λ, ₯1, ₯2 and ₯5; and gold coins of ₯5, ₯10 and ₯20. (Very small numbers of ₯50 and ₯100 coins in gold were also issued.)
In 1894, cupro-nickel 5λ, 10λ and 20λ coins were introduced. No 1λ or 2λ coin had been issued since the late 1870s. Silver coins of ₯1 and ₯2 were last issued in 1911, and no coins were issued between 1912 and 1922, during which time the Latin Monetary Union collapsed due to World War I.
Between 1926 and 1930, a new coinage was introduced for the new Hellenic Republic, consisting of cupro-nickel coins in denominations of 20λ, 50λ, ₯1, and ₯2; nickel coins of ₯5; and silver coins of ₯10 and ₯20. These were the last coins issued for the first modern drachma; none were issued for the second.
Notes
Notes were issued by the National Bank of Greece from 1841 until 1928. The Bank of Greece issued notes from 1928 until 2001, when Greece joined the euro. Early denominations ranged from ₯10 to ₯500. Smaller denominations (₯1, ₯2, ₯3 and ₯5) were issued from 1885, with the first ₯5 notes being made by cutting ₯10 notes in half.
When Greece finally achieved its independence from the Ottoman Empire in 1828, the ph |
https://en.wikipedia.org/wiki/Ostwald%20ripening | Ostwald ripening is a phenomenon observed in solid solutions and liquid sols that involves the change of an inhomogeneous structure over time, in that small crystals or sol particles first dissolve and then redeposit onto larger crystals or sol particles.
Dissolution of small crystals or sol particles and the redeposition of the dissolved species on the surfaces of larger crystals or sol particles was first described by Wilhelm Ostwald in 1896. For colloidal systems, Ostwald ripening is also found in water-in-oil emulsions, while flocculation is found in oil-in-water emulsions.
Mechanism
This thermodynamically-driven spontaneous process occurs because larger particles are more energetically favored than smaller particles. This stems from the fact that molecules on the surface of a particle are energetically less stable than the ones in the interior.
Consider a cubic crystal of atoms: all the atoms inside are bonded to 6 neighbours and are quite stable, but atoms on the surface are only bonded to 5 neighbors or fewer, which makes these surface atoms less stable. Large particles are more energetically favorable since, continuing with this example, more atoms are bonded to 6 neighbors and fewer atoms are at the unfavorable surface. As the system tries to lower its overall energy, molecules on the surface of a small particle (energetically unfavorable, with only 3 or 4 or 5 bonded neighbors) will tend to detach from the particle and diffuse into the solution.
Kelvin's equation describes the relationship between the radius of curvature and the chemical potential difference between the surface and the inner volume:
Δμ = 2γV / R,
where Δμ corresponds to the chemical potential, γ to the surface tension, V to the atomic volume and R to the radius of the particle.
The chemical potential of an ideal solution can also be expressed as a function of the solute's concentration if liquid and solid phases are in equilibrium:
Δμ = k_B T ln(c / c_eq),
where k_B corresponds to the Boltzmann constant, T to the temperature and c to the solute concentration. |
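Equating the curvature expression for the chemical potential with the ideal-solution expression yields the standard Ostwald–Freundlich (Gibbs–Thomson) relation; this consolidated form is a standard result added here for orientation, with γ the surface tension, V the atomic volume, R the particle radius, k_B the Boltzmann constant, T the temperature and c_∞ the bulk (flat-interface) solubility:

```latex
% Setting \frac{2\gamma V}{R} = k_B T \ln\frac{c(R)}{c_\infty}
% gives the equilibrium solubility of a particle of radius R:
\[
  c(R) = c_\infty \exp\!\left(\frac{2\gamma V}{k_B T\, R}\right),
\]
% so smaller particles (small R) have a higher equilibrium solubility
% and dissolve in favor of larger ones.
```

This makes quantitative the mechanism described above: material flows from small, highly soluble particles to large, less soluble ones.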
https://en.wikipedia.org/wiki/COMOS | COMOS is a plant engineering software from Siemens. The applications for this software are in the process industries for the engineering, operation, and maintenance of process plants as well as their asset management.
History
The COMOS (acronym for Component Object Server) software system was originally developed and sold by Logotec Software GmbH, then by innotec GmbH (founded in 1991, with headquarters in Schwelm, Germany). The first version hit the market in 1996. In 2008, innotec GmbH was taken over by the Siemens Corporation; COMOS has since been developed further and marketed by a subsidiary, Siemens Industry Software GmbH. The current status is COMOS Generation 10.
Product characteristics
Originally, COMOS was developed as an integrated CAE system for engineering in plant construction: all process engineering trades and the involved disciplines of Electrical, Instrumentation & Control system engineering were to be able to work together seamlessly on one system platform.
The system incorporates the characteristics of object orientation, central data administration, and open system architecture. Interfaces enable the integration into existing IT infrastructures or cooperation with supplementary software systems. The COMOS software system is based on a central data platform and includes applications that can be combined. They help with the engineering and set-up, operation, and shut-down of industrial plants.
Applications
The software is used by plant developers (e.g. EPC) to plan process plants (chemical, energy, water / waste water, pharmaceuticals, oil, natural gas, food, etc.). It is also used by plant owner/operators in the mentioned industries, since COMOS not only supports engineering but also operational processes. There are regular user conferences. Its architecture makes COMOS suitable for engineering: it can manage large quantities of data and provide it on an integrated basis. Siemens cooperates in the standardization of export and import |
https://en.wikipedia.org/wiki/Pintos | Pintos is computer software, a simple instructional operating system framework for the x86 instruction set architecture. It supports kernel threads, loading and running user programs, and a file system, but it implements all of these in a very simple way.
Pintos is currently used by multiple institutions, including UT Austin, UC Berkeley and Imperial College London, as an academic aid in operating systems course curricula.
History
It was created at Stanford University by Ben Pfaff in 2004. It originated as a replacement for Not Another Completely Heuristic Operating System (Nachos), a similar system originally developed at UC Berkeley by Thomas E. Anderson, and was designed along similar lines.
Comparison to Nachos
Like Nachos, Pintos is intended to introduce undergraduates to concepts in operating system design and implementation by requiring them to implement significant portions of a real operating system, including thread and memory management and file system access. Pintos also teaches students valuable debugging skills.
Unlike Nachos, Pintos can run on actual x86 hardware, though it is often run atop an x86 emulator, such as Bochs or QEMU. Nachos, by contrast, runs as a user process on a host operating system, and targets the MIPS architecture (Nachos code must run atop a MIPS simulator). Pintos and its accompanying assignments are also written in the programming language C instead of C++ (used for original Nachos) or Java (used for Nachos 5.0j). |
https://en.wikipedia.org/wiki/International%20Conference%20on%20Systems%20Biology | The International Conference on Systems Biology (ICSB) is the primary international conference for systems biology research. Created by Hiroaki Kitano in 2000, its organization is now coordinated by the International Society for Systems Biology (ISSB).
Previous conferences
ICSB-2019, Okinawa, Japan (Okinawa Institute of Science and Technology Graduate University).
ICSB-2018, Lyon, France (Université de Lyon).
ICSB-2017, Blacksburg, Virginia (Virginia Tech).
ICSB-2016, Barcelona, Spain.
ICSB-2015 was originally announced for Shanghai, but was then held in Singapore.
ICSB-2014, Melbourne, Australia.
ICSB-2013, Copenhagen, Denmark.
ICSB-2012, Toronto, Canada.
ICSB-2011, Mannheim, Germany.
ICSB-2010, Edinburgh, UK.
ICSB-2009, Stanford, California (Stanford University).
ICSB-2008, Gothenburg, Sweden (CMB, Chalmers Biocenter, YSBN, NYRC).
ICSB-2007, Long Beach, California (Caltech, UC Irvine, etc.).
ICSB-2006, Yokohama, Japan (The Systems Biology Institute, JST, RIKEN, AIST).
ICSB-2005, Boston (Harvard, MIT, Boston University).
ICSB-2004, Heidelberg (DFKI, EMBL, etc.).
ICSB-2003, St. Louis (Washington University in St. Louis).
ICSB-2002, Stockholm (Karolinska Institute).
ICSB-2001, Pasadena, California (California Institute of Technology).
ICSB-2000, Tokyo (Japan Science and Technology Agency).
Upcoming conferences
ICSB-2022, Berlin, Germany (in person).
External links
International Society for Systems Biology |
https://en.wikipedia.org/wiki/Holo-%28acyl-carrier-protein%29%20synthase | In enzymology and molecular biology, a holo-[acyl-carrier-protein] synthase (ACPS) is an enzyme that catalyzes the chemical reaction:
CoA-[4'-phosphopantetheine] + apo-[acyl carrier protein] ⇌ adenosine 3',5'-bisphosphate + holo-[acyl carrier protein]
This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. It is also known as 4'-phosphopantetheinyl transferase after the group it transfers.
Function
All ACPS enzymes known so far are evolutionarily related to each other, forming a single superfamily of proteins. The enzyme transfers a 4'-phosphopantetheine (4'-PP) moiety from coenzyme A (CoA) to an invariant serine in an acyl carrier protein (ACP), a small protein responsible for acyl group activation in fatty acid biosynthesis. This post-translational modification renders holo-ACP capable of acyl group activation via thioesterification of the cysteamine thiol of 4'-PP. This superfamily consists of two subtypes: the trimeric ACPS type, such as E. coli ACPS, and the monomeric Sfp (PCP-synthesizing) type, such as B. subtilis Sfp. Structures from both families are now known. The active site accommodates a magnesium ion. The most highly conserved regions of the protein are involved in binding the magnesium ion.
Nomenclature
The systematic name of this enzyme class is CoA-[4'-phosphopantetheine]:apo-[acyl-carrier-protein] 4'-pantetheinephosphotransferase. Other names in common use, disregarding the synthetase/synthase spelling difference, include acyl carrier protein holoprotein synthetase, holo-ACP synthetase, coenzyme A:fatty acid synthetase apoenzyme 4'-phosphopantetheine, acyl carrier protein synthetase (ACPS), PPTase, acyl carrier protein synthase, P-pant transferase, and CoA:apo-[acyl-carrier-protein] pantetheinephosphotransferase.
Structural studies
As of late 2007, 8 structures have been solved for this class of enzymes. |
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2036 | In molecular biology, glycoside hydrolase family 36 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 36 together with family 31 and family 27 alpha-galactosidases form the glycosyl hydrolase clan GH-D, a superfamily of alpha-galactosidases, alpha-N-acetylgalactosaminidases, and isomaltodextranases which are likely to share a common catalytic mechanism and structural topology.
Alpha-galactosidase (melibiase) catalyzes the hydrolysis of melibiose into galactose and glucose. In man, the deficiency of this enzyme is the cause of Fabry's disease (X-linked sphingolipidosis). Alpha-galactosidase is present in a variety of organisms. There is a considerable degree of similarity in the sequence of alpha-galactosidase from various eukaryotic species. Escherichia coli alpha-galactosidase (gene melA), which requires NAD and magnesium as cofactors, is not structurally related to the eukaryotic enzymes; by contrast, an Escherichia coli plasmid-encoded alpha-galactosidase (gene rafA) contains a region of about 50 amino acids which is similar to a domain of the eukaryotic alpha-galactosidases. Alpha-N-acetylgalactosaminidase catalyzes the hydrolysis of terminal non-reducing N-acetyl-D-galactosamine residues in N-acetyl-alpha-D-galactosaminides. In man, the deficiency of this enzyme is the cause of Schindler and Kanzaki diseases. The sequence of this enzyme is highly related to that of the eukaryotic alpha-galactosidases.
This family also includes raffinose synthase proteins, also known as seed imbibition |
https://en.wikipedia.org/wiki/Behavioral%20modeling%20in%20computer-aided%20design | In computer-aided design, behavioral modeling is a high-level circuit modeling technique in which the behavior of the logic is modeled.
The Verilog-AMS and VHDL-AMS languages are widely used to model logic behavior.
Other modeling approaches
RTL modeling: logic is modeled at the register level.
Structural modeling: logic is modeled at both the register level and the gate level. |
https://en.wikipedia.org/wiki/Tur%C3%A1n%E2%80%93Kubilius%20inequality | The Turán–Kubilius inequality is a mathematical theorem in probabilistic number theory. It is useful for proving results about the normal order of an arithmetic function. The theorem was proved in a special case in 1934 by Pál Turán and generalized in 1956 and 1964 by Jonas Kubilius.
Statement of the theorem
This formulation is from Tenenbaum. Other formulations are in Narkiewicz and in Cojocaru & Murty.
Suppose f is an additive complex-valued arithmetic function, and write p for an arbitrary prime and ν for an arbitrary positive integer. Write
A(x) = Σ_{p^ν ≤ x} f(p^ν) p^{−ν} (1 − 1/p)
and
B(x)² = Σ_{p^ν ≤ x} |f(p^ν)|² p^{−ν}.
Then there is a function ε(x) that goes to zero when x goes to infinity, and such that for x ≥ 2 we have
(1/x) Σ_{n ≤ x} |f(n) − A(x)|² ≤ (2 + ε(x)) B(x)².
Applications of the theorem
Turán developed the inequality to create a simpler proof of the Hardy–Ramanujan theorem about the normal order of the number ω(n) of distinct prime divisors of an integer n. There is an exposition of Turán's proof in Hardy & Wright, §22.11.
Tenenbaum gives a proof of the Hardy–Ramanujan theorem using the Turán–Kubilius inequality and states without proof several other applications.
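As a quick numerical illustration (not part of the article), the Hardy–Ramanujan normal-order phenomenon for f = ω can be checked by sieving: the mean of ω(n) over n ≤ x sits near log log x. The bound x = 10⁵ is an arbitrary illustrative choice:

```python
# Sieve the number of distinct prime factors ω(n) for n ≤ x and compare
# its mean with log log x (the normal order given by Hardy–Ramanujan).
import math

x = 100_000
omega = [0] * (x + 1)
for p in range(2, x + 1):
    if omega[p] == 0:                 # p untouched by any smaller prime => p is prime
        for multiple in range(p, x + 1, p):
            omega[multiple] += 1

values = omega[2:]                    # ω is considered for n >= 2
mean = sum(values) / len(values)
var = sum((w - mean) ** 2 for w in values) / len(values)
loglog = math.log(math.log(x))        # ≈ 2.44 for x = 10⁵
```

Despite n ranging over five orders of magnitude, ω(n) stays tightly concentrated: no n ≤ 10⁵ has more than six distinct prime factors (since 2·3·5·7·11·13·17 > 10⁵), in line with the concentration the inequality expresses.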
Notes
Inequalities
Theorems in number theory |
https://en.wikipedia.org/wiki/Fleuron%20%28typography%29 | A fleuron, also known as a printers' flower, is a typographic element, or glyph, used either as a punctuation mark or as an ornament for typographic compositions. Fleurons are stylized forms of flowers or leaves; the term derives from the Old French floron ("flower"). Robert Bringhurst in The Elements of Typographic Style calls the forms "horticultural dingbats". A commonly encountered fleuron is the ❧, the floral heart or hedera (ivy leaf). It is also known as an Aldus leaf (after the Italian Renaissance printer Aldus Manutius).
History
Flower decorations are among the oldest typographic ornaments. A fleuron can also be used to fill the white space that results from the indentation of the first line of a paragraph, on a line by itself to divide paragraphs in a highly stylized way, to divide lists, or for pure ornamentation. The fleuron (as a formal glyph) is a sixteenth century introduction.
Fleurons were crafted the same way as other typographic elements were: as individual metal sorts that could be fit into the printer's compositions alongside letters and numbers. This saved the printer time and effort in producing ornamentation. Because the sorts could be produced in multiples, printers could build up borders with repeating patterns of fleurons.
Fleurons in Unicode
Thirty forms of fleuron have code points in Unicode. The Dingbats and Miscellaneous Symbols blocks have three fleurons that the standard calls "floral hearts" (also called "aldus leaf", "ivy leaf", "hedera" and "vine leaf"); twenty-four fleurons (from the pre-Unicode Wingdings and Wingdings 2 fonts) in the Ornamental Dingbats block; and three more fleurons used in archaic languages are also supported.
Gallery
See also
Dingbat, a printers' ornament
Dinkus, mostly used as a sub-chapter section break. Although a group of asterisks is the most common style, fleurons are also seen fulfilling this role.
The Fleuron, a British typography magazine fro |
https://en.wikipedia.org/wiki/Nucleoside-specific%20porin | Nucleoside-specific porin (encoded by the tsx gene of Escherichia coli) is an outer membrane protein, Tsx, which constitutes the receptor for colicin K and bacteriophage T6, and functions as a substrate-specific channel for nucleosides and deoxynucleosides. The protein contains 294 amino acids, the first 22 of which are characteristic of a bacterial signal sequence peptide. Tsx shows no significant similarities to general bacterial porins. |
https://en.wikipedia.org/wiki/Ernesto%20Ces%C3%A0ro | Ernesto Cesàro (12 March 1859 – 12 September 1906) was an Italian mathematician who worked in the field of differential geometry. He wrote a book, Lezioni di geometria intrinseca (Naples, 1890), on this topic, in which he also describes fractal, space-filling curves, partly covered by the larger class of de Rham curves, which are still known today in his honor as Cesàro curves. He is known also for his 'averaging' method for the 'Cesàro summation' of divergent series, known as the Cesàro mean.
Biography
After a rather disappointing start to his academic career and a journey through Europe - with the most important stop at Liège, where his older brother Giuseppe Raimondo Pio Cesàro was teaching mineralogy at the local university - Ernesto Cesàro graduated from the University of Rome in 1887, by which time he was already a member of the Royal Science Society of Belgium on account of the numerous works he had published.
The following year, he obtained a mathematics chair at the University of Palermo, which he kept until 1891. He then settled in Naples, where he stayed as a professor at the university until his accidental death, while trying to rescue his youngest son Manlio from drowning.
Work
Cesàro's main contributions are in the field of differential geometry. Lessons of intrinsic geometry, written in 1894, explains in particular the construction of a fractal curve. After that, Cesàro also studied the "snowflake curve" of von Koch, which is continuous but not differentiable at any of its points.
Among his other works are Introduction to the mathematical theory of infinitesimal calculus (1893), Algebraic analysis (1894), and Elements of infinitesimal calculus (1897). He proposed a definition for the limit of a divergent series, known today as the "Cesàro sum", given by the limit of the arithmetic means of its partial sums.
Books by E. Cesàro
Lezioni di geometria intrinseca (Naples, 1896) (trans. into German under the title Vorlesungen über natürliche Geometrie; 1901, 1st |
https://en.wikipedia.org/wiki/Grey%20noise | Grey noise is random noise whose frequency spectrum follows an equal-loudness contour (such as an inverted A-weighting curve).
The result is that grey noise contains all frequencies with equal loudness, as opposed to white noise, which contains all frequencies with equal energy. The difference between the two is a result of psychoacoustics - more specifically, the fact that human hearing is more sensitive to some frequencies than others.
Since equal-loudness curves depend not only on the individual but also on the volume at which the noise is played back, there is no one true grey noise. A mathematically simpler and clearly defined approximation of an equal-loudness noise is pink noise which creates an equal amount of energy per octave, not per hertz (i.e. a logarithmic instead of a linear behavior), so pink noise is closer to "equally loud at all frequencies" than white noise is.
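As an illustrative sketch (not from the article), one approximate grey-noise shaping curve can be obtained by inverting the standard A-weighting magnitude response; the numeric constants below are those of the conventional A-weighting formula, and normalizing at 1 kHz is a conventional choice:

```python
# Approximate grey-noise spectral shaping via the inverse of A-weighting.
import math

def a_weight(f_hz):
    """A-weighting magnitude response, normalized to 1.0 at 1 kHz."""
    def r(f):
        f2 = f * f
        return (12194.0 ** 2 * f2 ** 2) / (
            (f2 + 20.6 ** 2)
            * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
            * (f2 + 12194.0 ** 2)
        )
    return r(f_hz) / r(1000.0)

def grey_gain(f_hz):
    """Spectral gain for grey noise: boost where the ear is less sensitive."""
    return 1.0 / a_weight(f_hz)
```

Multiplying a white-noise spectrum by grey_gain(f) before resynthesis yields a grey-like noise for this particular weighting; as the text notes, a different equal-loudness contour (or listening level) would give a different curve.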
See also
Colors of noise |
https://en.wikipedia.org/wiki/IMSAI%208080 | The IMSAI 8080 was an early microcomputer released in late 1975, based on the Intel 8080 (and later the 8085) and the S-100 bus. It was a clone of its main competitor, the earlier MITS Altair 8800, and is largely regarded as the first "clone" microcomputer. The IMSAI machine ran a highly modified version of the CP/M operating system called IMDOS. It was developed, manufactured and sold by IMS Associates, Inc. (later renamed IMSAI Manufacturing Corp). In total, between 17,000 and 20,000 units were produced from 1975 to 1978.
History
In May 1972, William Millard started a business called IMS Associates (IMS) in the areas of computer consulting and engineering, using his home as an office. By 1973, Millard incorporated the business and soon found funding for it, receiving several contracts, all for software. IMS stood for "Information Management Services".
In 1974, IMS was contacted by a client which wanted a "workstation system" that could complete jobs for any General Motors car dealership. IMS planned a system including a terminal, small computer, printer, and special software. Five of these workstations were to have common access to a hard disk drive, which would be controlled by a small computer. Eventually product development was stopped.
Millard and his chief engineer Joe Killian turned to the microprocessor. Intel had announced the 8080 chip, and compared to the 4004 to which IMS Associates had been first introduced, it looked like a "real computer". Full-scale development of the IMSAI 8080 was put into action using the existing Altair 8800's S-100 bus, and by October 1975 an ad was placed in Popular Electronics, receiving positive reactions.
IMS shipped the first IMSAI 8080 kits on 16 December 1975, before turning to fully assembled units. In 1976, IMS was renamed to IMSAI Manufacturing Corporation because by then, they were a manufacturing company, not a consulting firm.
In 1977, IMSAI marketing director Seymour I. Rubinstein paid Gary Kildall $25,000 for |
https://en.wikipedia.org/wiki/Simultaneous%20Authentication%20of%20Equals | In cryptography, Simultaneous Authentication of Equals (SAE) is a password-based authentication and password-authenticated key agreement method.
Authentication
SAE is a variant of the Dragonfly key exchange defined in , based on Diffie–Hellman key exchange using finite cyclic groups, which can be a primary cyclic group or an elliptic curve. Diffie–Hellman key exchange by itself does not provide an authentication mechanism; SAE solves this authentication problem by making the resulting key depend on a pre-shared key and the MAC addresses of both peers.
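To see why plain Diffie–Hellman needs an extra authentication ingredient, consider a minimal sketch (the modulus and generator are toy choices for illustration only; a real deployment uses standardized groups):

```python
# Unauthenticated Diffie-Hellman over a toy multiplicative group mod a prime.
# Both sides derive the same secret, but nothing proves WHO sent each public
# value - the gap SAE closes by mixing the password and both peers' MAC
# addresses into the derivation.
import secrets

p = (1 << 61) - 1          # a Mersenne prime (toy modulus, NOT for real use)
g = 3                      # toy generator

a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
b = secrets.randbelow(p - 2) + 1      # Bob's private exponent
A = pow(g, a, p)                      # Alice's public value
B = pow(g, b, p)                      # Bob's public value

alice_secret = pow(B, a, p)           # Alice computes g^(ab) mod p
bob_secret = pow(A, b, p)             # Bob computes the same value
```

A man-in-the-middle can substitute its own public values for A and B without detection; binding the exchange to a shared password, as SAE does, is what rules this out.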
Use
IEEE 802.11s
SAE was originally implemented for use between peers in IEEE 802.11s. When peers discover each other (and security is enabled) they take part in an SAE exchange. If SAE completes successfully, each peer knows the other party possesses the mesh password and, as a by-product of the SAE exchange, the two peers establish a cryptographically strong key. This key is used with the "Authenticated Mesh Peering Exchange" (AMPE) to establish a secure peering and derive a session key to protect mesh traffic, including routing traffic.
WPA3
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2. The new standard uses 128-bit encryption in WPA3-Personal mode (192-bit in WPA3-Enterprise) and forward secrecy. The WPA3 standard also replaces the pre-shared key (PSK) exchange with Simultaneous Authentication of Equals as defined in IEEE 802.11-2016 resulting in a more secure initial key exchange in personal mode. The Wi-Fi Alliance also claims that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.
Security
In 2019 Eyal Ronen and Mathy Vanhoef (co-author of the KRACK attack) released an analysis of WPA3's Dragonfly handshake and found that "an attacker within range of a victim can still recover the password" and the bugs found "allow an adversary to impersonate any user, and thereby access the Wi- |
https://en.wikipedia.org/wiki/Morse%20code%20for%20non-Latin%20alphabets | This is a summary of the use of Morse code to represent alphabets other than Latin.
Greek
The Greek Morse code alphabet is very similar to the Latin alphabet. It uses one extra code for a Greek letter and no longer uses the codes for Latin letters "J", "U" and "V".
The tonos is not transmitted in Morse code; the receiver can simply infer which vowels require one. The Greek diphthongs presented in the bottom three rows of the table are specified in old Greek Morse-code tables but they are never used in actual communication, the two vowels being sent separately.
Cyrillic
Cyrillic letters are represented using the Morse codes of similar-sounding Latin letters. Cyrillic letters with no such Latin correspondence are assigned to Latin letters with no Cyrillic correspondence. The same correspondence was later used to create the Russian national character sets KOI-7 and KOI-8.
The order and encoding shown uses the Russian national standard. The Bulgarian standard is the same except for the two letters (, ) given in parentheses: The Bulgarian language does not use , while is frequent, but missing in Russian standard Morse.
The letter Ё (Yo) does not have an international Morse phonetic equivalent; the code for Е is used instead. Ukrainian Morse differs from the Russian standard in a few letters and has an additional code for Ї (Yi).
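The "similar-sounding Latin letter" principle can be sketched with a hand-picked subset of the Russian table (only letters whose correspondence is well established; В is a known special case that reuses the code of Latin W rather than V):

```python
# Russian Morse for a few Cyrillic letters, expressed via the Latin letter
# whose International Morse code they reuse (illustrative subset only).
LATIN_MORSE = {"A": ".-", "B": "-...", "D": "-..", "K": "-.-",
               "M": "--", "O": "---", "T": "-", "W": ".--"}
CYRILLIC_TO_LATIN = {"А": "A", "Б": "B", "Д": "D", "К": "K",
                     "М": "M", "О": "O", "Т": "T", "В": "W"}

def to_morse(word):
    """Encode an uppercase Cyrillic word drawn from the subset above."""
    return " ".join(LATIN_MORSE[CYRILLIC_TO_LATIN[ch]] for ch in word)
```

Because the mapping goes through the Latin table, the same dot-dash sequences serve two alphabets, which is what made mixed Latin/Cyrillic telegraphy practical.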
Hebrew
Hebrew letters are mostly represented using the Morse representation of a similar-sounding Latin letter (e.g. "Bet" ב ≡ B); however, the representations of several letters are taken from Latin letters with a similar shape (e.g. "Tet" ט ≡ U, while "Tav" ת ≡ T). Though Hebrew Morse code is transcribed from right to left, the table below is transcribed from left to right, as per the Latin letters in the table.
Arabic
Kurdish
See
Persian
See also :fa:کد مورس
Devanagari
Devanagari is a left-to-right abugida (alphasyllabary) widely used in the Indian subcontinent. The following telegraph code |
https://en.wikipedia.org/wiki/Fervidobacterium%20changbaicum | Fervidobacterium changbaicum (F. changbaicum) is a species of thermophilic anaerobic bacteria. It is non-sporulating, motile, gram-negative, and rod-shaped. The type strain is CBS-1(T) (=DSM 17883(T) =JCM 13353(T)). Fervidobacterium changbaicum was isolated from a mixture of hot-spring mud and water in the Changbai Mountains, China.
https://en.wikipedia.org/wiki/Job%20embeddedness | Job embeddedness is the collection of forces that influence employee retention. It can be distinguished from turnover in that its emphasis is on all of the factors that keep an employee on the job, rather than the psychological process one goes through when quitting.
The scholars who introduced job embeddedness described the concept as consisting of three key components (links, fit, and sacrifice), each of which is important both on and off the job. Job embeddedness is therefore conceptualized as six dimensions: links, fit, and sacrifice between the employee and organization, and links, fit, and sacrifice between the employee and the community.
Theoretical background
Job embeddedness was first introduced by Mitchell and colleagues in an effort to improve traditional employee turnover models. According to these models, factors such as job satisfaction, organizational commitment, and the individual's perception of job alternatives together predict an employee's intent to leave and, subsequently, turnover. Since these scholars suggest traditional models only modestly predict turnover, Mitchell et al. proposed job embeddedness as an alternative model and incorporated "off-the-job" factors (e.g. attachment to family) and other organizational factors (e.g. attachment to working groups) that have also been shown to affect employee retention but were not included in these traditional models.
When creating this alternative model for explaining why employees stay on a job, Mitchell and colleagues drew on research from Lee and Mitchell's unfolding model of turnover. This line of research suggests that many of those who leave a job a) are mostly satisfied with their jobs, b) do not search for an alternative position before leaving, and c) quit due to some sudden off-the-job event. Results of the initial study indicated that job embeddedness predicted both intent to leave and actual turnover, and was a better predictor of voluntary turnover than job satisfaction,
https://en.wikipedia.org/wiki/Windows%20Easy%20Transfer | Windows Easy Transfer was a specialized file-transfer program developed by Microsoft that allowed users of the Windows operating system to transfer personal files and settings from a computer running an earlier version of Windows to a computer running a newer version.
Windows Easy Transfer was introduced in Windows Vista and included in Windows 7, Windows 8, and Windows 8.1. It replaced the Files and Settings Transfer Wizard included with Windows XP and offered limited migration services for computers running Windows 2000 SP4 and Windows XP SP2. For all versions of Windows, it did not transfer applications—only files and settings.
Microsoft incorporated a key technology into the Windows Easy Transfer tool based on its acquisition of Apptimum in 2006. Apptimum's technology complemented the transfer experience offered across multiple Windows operating systems, including Windows Vista, 7, 8.1, and 10.
Windows Easy Transfer was discontinued with Windows 10. From September 1, 2015 to August 31, 2016, Microsoft partnered with Laplink to provide a free download of PCmover Express, which allowed 500 MB of data and settings to be transferred from at least Windows XP to either Windows 8.1 or Windows 10.
History
For Windows 2000, Microsoft developed the User State Migration Tool, a command-line utility that allowed users of Windows 95, 98, and NT 4.0 to migrate their data and settings to the newer operating system; it did not provide a graphical user interface. An additional migration tool, Files and Settings Transfer Wizard (migwiz.exe), was developed for Windows XP to facilitate the migration of data and settings from Windows 98 and Windows Me. It could be launched from the Windows XP CD-ROM and presented options to transfer data and settings via a 3.5-inch floppy, computer network, direct cable connection, or a Zip disk. Users could also create a wizard disk to initiate the migration process when run from the earlier operating system.
A preliminary version of Windows Easy Tr |
https://en.wikipedia.org/wiki/Triangle%20graph | In the mathematical field of graph theory, the triangle graph is a planar undirected graph with 3 vertices and 3 edges, in the form of a triangle.
The triangle graph is also known as the cycle graph C3 and the complete graph K3.
Properties
The triangle graph has chromatic number 3, chromatic index 3, radius 1, diameter 1 and girth 3. It is also a 2-vertex-connected graph and a 2-edge-connected graph.
Its chromatic polynomial is k(k − 1)(k − 2).
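Because the graph is tiny, the chromatic polynomial k(k − 1)(k − 2) can be verified by counting proper colorings directly; a minimal brute-force sketch (helper name hypothetical):

```python
from itertools import product

def count_proper_colorings(edges, n_vertices, k):
    """Count assignments of k colors to the vertices such that no edge
    joins two vertices of the same color."""
    count = 0
    for coloring in product(range(k), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            count += 1
    return count

triangle = [(0, 1), (1, 2), (0, 2)]
# The chromatic polynomial of the triangle is k(k - 1)(k - 2):
# 0 colorings for k < 3, and k(k-1)(k-2) proper colorings for k >= 3.
for k in range(6):
    assert count_proper_colorings(triangle, 3, k) == k * (k - 1) * (k - 2)
```

This also confirms the chromatic number 3: the smallest k giving a nonzero count is k = 3.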
See also
Triangle-free graph |
https://en.wikipedia.org/wiki/Max%20Q%20%28astronaut%20band%29 | Max Q is a Houston-based rock band whose members are all astronauts. It was formed in early 1987 by Brewster Shaw and Robert L. Gibson, who recruited George Nelson. Gibson has stated that he came up with the name "Max Q" (though he acknowledges that Shaw has also claimed to have come up with it), the engineering term for the maximum dynamic pressure from the atmosphere experienced by an ascending spacecraft. He joked that like the Space Shuttle, the band "makes lots of noise but no music."
The band's line-up changes often due to flight crew assignments, training, and the occasional retirement. In 2009, members included:
Ricky Arnold – rhythm guitar
Dan Burbank – lead vocals and guitar
Tracy "TC" Caldwell Dyson – lead vocals
Ken "Taco" Cockrell – keyboards and background vocals
Chris Ferguson – drums
Drew Feustel – lead vocals and lead guitar
Kevin A. Ford – drums
Chris A. Hadfield – lead vocals and bass guitar
Greg "Box" Johnson – keyboards and background vocals
Dottie Metcalf-Lindenburger – lead vocals
Steve "Stevie Ray" Robinson – lead guitar
Former members include:
Carl Walz – lead vocals
Susan Helms – lead vocals and keyboards (after Hawley left)
Kevin "Chili" Chilton – lead vocals and guitar (after Shaw left)
Pierre Thuot – bass guitar (after Nelson left)
Steven A. Hawley – keyboards (the original keyboard player)
...with the original members having been:
Jim Wetherbee – drums
George "Pinky" Nelson – bass guitar
Robert "Hoot" Gibson – lead vocals and lead guitar
Brewster Shaw – rhythm guitar
The genesis of this band is connected to the Challenger disaster: it was in the wake of that somber event that Dan Brandenstein, as Chief of the Astronaut Office, suggested that they hold a party to lighten things up. Brewster Shaw then approached Hoot Gibson, with whom he had performed a few songs at astronaut parties, with the idea of forming a four-piece group to play at this party. The band was well-liked. They decided to add a keyboar
https://en.wikipedia.org/wiki/%CE%91-Neoendorphin | α-Neoendorphin is an endogenous opioid peptide with a decapeptide structure and the amino acid sequence Tyr-Gly-Gly-Phe-Leu-Arg-Lys-Tyr-Pro-Lys.
α-Neoendorphin is a neuropeptide. Prodynorphin (proenkephalin B) is its precursor. Researchers and anatomists have not yet studied the distribution of α-neoendorphin in the human in detail. However, some studies have been done which support the presence of α-neoendorphin-immunoreactive fibers throughout the human brainstem. According to a study by Duque, Ewing, Arturo Mangas, Pablo Salinas, Zaida Díaz-Cabiale, José Narváez, and Rafael Coveñas, α-neoendorphin-immunoreactive fibers can be found in the caudal part of the solitary nucleus and in the caudal and gelatinosa parts of the spinal trigeminal nucleus, while only a low density was found in the central grey matter of the medulla.
See also
β-Neoendorphin |
https://en.wikipedia.org/wiki/Curie%20temperature | In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism was lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction.
Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law.
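As a sketch of the Curie–Weiss law, χ = C/(T − TC), the temperature dependence above TC can be evaluated numerically. The Curie constant C = 1 below is purely illustrative, and TC = 1043 K is roughly iron's Curie temperature:

```python
def curie_weiss_susceptibility(T, C=1.0, Tc=1043.0):
    """Magnetic susceptibility above the Curie temperature: chi = C / (T - Tc).

    C (the Curie constant) is an illustrative value, not a measured one;
    Tc = 1043 K is approximately iron's Curie temperature. The law applies
    only in the paramagnetic regime, T > Tc.
    """
    if T <= Tc:
        raise ValueError("Curie-Weiss law applies only for T > Tc")
    return C / (T - Tc)

# Susceptibility diverges as T approaches Tc from above and falls off
# as 1/(T - Tc) when the material is heated further.
assert curie_weiss_susceptibility(1044.0) > curie_weiss_susceptibility(1143.0)
```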
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature.
Magnetic moments
Magnetic moments are permanent dipole moments within an atom that comprise electron angular momentum and spin by the relation μl = (e/2me)l, where me is the mass of an electron, μl is the magnetic moment, and l is the angular momentum; the ratio e/2me is called the gyromagnetic ratio.
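As a quick numerical illustration of this relation (not part of the original text), evaluating the gyromagnetic ratio e/2me at angular momentum ħ gives the Bohr magneton, the natural scale of atomic magnetic moments:

```python
# CODATA 2018 values (e is exact by definition; m_e to the listed precision)
e = 1.602176634e-19        # elementary charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg

# Gyromagnetic ratio for orbital motion: mu_l = (e / 2 m_e) * l
gyromagnetic_ratio = e / (2 * m_e)

# Bohr magneton: the magnetic moment for l = hbar
mu_B = gyromagnetic_ratio * hbar
assert abs(mu_B - 9.274e-24) < 1e-26   # ~9.274e-24 J/T
```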
The electrons in an atom contribute magnetic moments from their own angular momen |
https://en.wikipedia.org/wiki/PERMIS | PERMIS (PrivilEge and Role Management Infrastructure Standards) is a sophisticated policy-based authorization system that implements an enhanced version of the U.S. National Institute of Standards and Technology (NIST) standard Role-Based Access Control (RBAC) model. PERMIS supports the distributed assignment of both roles and attributes to users by multiple distributed attribute authorities, unlike the NIST model which assumes the centralised assignment of roles to users. PERMIS provides a cryptographically secure privilege management infrastructure (PMI) using public key encryption technologies and X.509 Attribute certificates to maintain users' attributes. PERMIS does not provide any authentication mechanism, but leaves it up to the application to determine what to use. PERMIS's strength comes from its ability to be integrated into virtually any application and any authentication scheme like Shibboleth (Internet2), Kerberos, username/passwords, Grid proxy certificates and Public Key Infrastructure (PKI).
As a standard RBAC system, PERMIS's main entities are
an authorisation policy,
a set of users,
a set of administrators (attribute authorities) who assign roles/attributes to users,
a set of resources that are to be protected,
a set of actions on resources,
a set of access control rules,
and optional obligations and constraints.
The PERMIS policy is eXtensible Markup Language (XML)-based and has rules for user-role assignments and role-privilege assignments, the latter containing optional obligations that are returned to the application when a user is granted access to a resource. A PERMIS policy can be stored as either a simple text XML file, or as an attribute within a signed X.509 attribute certificate to provide integrity protection and tampering detection. User roles and attributes may be held in secure signed X.509 attributes certificates, and stored in Lightweight Directory Access Protocol (LDAP) directories or Web-based Distributed Authorin |
https://en.wikipedia.org/wiki/P3M | {{DISPLAYTITLE:P3M}}
Particle–Particle–Particle–Mesh (P3M) is a Fourier-based Ewald summation method to calculate potentials in N-body simulations.
The potential could be the electrostatic potential among N point charges i.e. molecular dynamics, the gravitational potential among N gas particles in e.g. smoothed particle hydrodynamics, or any other useful function. It is based on the particle mesh method, where particles are interpolated onto a grid, and the potential is solved for this grid (e.g. by solving the discrete Poisson equation). This interpolation introduces errors in the force calculation, particularly for particles that are close together. Essentially, the particles are forced to have a lower spatial resolution during the force calculation. The P3M algorithm attempts to remedy this by calculating the potential through a direct sum for particles that are close, and through the particle mesh method for particles that are separated by some distance. |
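The near/far decomposition underlying P3M can be illustrated with the standard Ewald splitting of the 1/r kernel: a short-range part erfc(αr)/r that decays rapidly and is summed directly over close pairs, and a smooth long-range part erf(αr)/r that the mesh can resolve. A minimal sketch of the splitting identity (the parameter α and function names are illustrative):

```python
import math

def near_field(r, alpha):
    """Short-range part of the Coulomb kernel: decays quickly with distance,
    so it is summed directly over nearby particle pairs."""
    return math.erfc(alpha * r) / r

def far_field(r, alpha):
    """Long-range part: smooth everywhere (finite even as r -> 0), so it is
    well represented on the mesh and solved via FFT."""
    return math.erf(alpha * r) / r

# The two parts reconstruct the full 1/r kernel exactly, since erf + erfc = 1.
alpha = 0.8
for r in (0.5, 1.0, 3.0, 10.0):
    assert abs(near_field(r, alpha) + far_field(r, alpha) - 1.0 / r) < 1e-12
```

Larger α pushes more of the work onto the mesh; smaller α enlarges the direct-sum cutoff. Tuning this trade-off is the core of a practical P3M implementation.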
https://en.wikipedia.org/wiki/Ellen%20Maycock | Ellen Johnston Maycock (born September 15, 1950 in Maryland) is an American mathematician and mathematics educator. She is the former Johnson Family University Professor and professor emerita of mathematics at DePauw University in Greencastle, Indiana. Her mathematical research was in functional analysis.
Education and career
In 1972, Maycock received a B.A. degree in mathematics and economics from Wellesley College in Wellesley, Massachusetts. In 1974, she received an M.S. degree in mathematics and in 1986, a Ph.D. in mathematics, both from Purdue University in West Lafayette, Indiana. Her dissertation, "The Brauer Group of Graded Continuous Trace C*-Algebras", was supervised by Jerome Alvin Kaminker.
After teaching at Wellesley for two years following her degree, in 1988, Maycock joined the faculty at DePauw as an assistant professor, was promoted to associate professor in 1993 and to professor in 2001. She developed a series of workshops that brought faculty from across the nation to DePauw to learn innovative teaching styles. Maycock is known for her development of creative approaches to teaching abstract algebra. She developed a course that used the software package "Exploring Small Groups" to assist students in their mastery of the concepts of abstract algebra. She also introduced computer technology in courses on Euclidean and non-Euclidean geometry and analysis.
Maycock has served on the Editorial Boards of the Mathematical Association of America (MAA) Notes and Spectrum series and the American Mathematical Society Committee on the Profession. She served on the AMS-MAA-SIAM Committee on Employment Opportunities Past Members from 2007 to 2014.
In September 2005, Maycock joined the staff of the American Mathematical Society (AMS) as an associate executive director. In that role, she was responsible for AMS meetings and professional services, including programs that served AMS members and that supported and improved the public image of the profession. She remained in that positi
https://en.wikipedia.org/wiki/Paradox%20of%20the%20pesticides | The paradox of the pesticides is a paradox that states that applying pesticide to a pest may end up increasing the abundance of the pest if the pesticide upsets natural predator–prey dynamics in the ecosystem.
Lotka–Volterra equation
To describe the paradox of the pesticides mathematically, the Lotka–Volterra equations, a set of first-order nonlinear differential equations frequently used to describe predator–prey interactions, can be modified to account for the addition of pesticides into the predator–prey interactions.
Without pesticides
The variables represent the following:
The following two equations are the original Lotka–Volterra equation, which describe the rate of change of each respective population as a function of the population of the other organism:
By setting each equation to zero and thus assuming a stable population, a graph of two lines (isoclines) can be made to find the equilibrium point, the point at which both interacting populations are stable.
These are the isoclines for the two above equations:
Accounting for pesticides
Now, to account for the difference in the population dynamics of the predator and prey that occurs with the addition of pesticides, variable q is added to represent the per capita rate at which both species are killed by the pesticide. The original Lotka–Volterra equations change to be as follows:
Solving the isoclines as was done above, the following equations represent the two lines with the intersection that represents the new equilibrium point. These are the new isoclines for the populations:
As one can see from the new isoclines, the new equilibrium has a higher H value and a lower P value: the number of prey increases while the number of predators decreases. Thus the prey, which is normally the target of the pesticide, actually benefits from the pesticide, while the predator is harmed.
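The equilibrium shift can be checked numerically under a standard parameterization (prey growth rate r, capture rate c, conversion efficiency b, predator mortality m; these symbol names are assumptions, since the excerpt names only H, P, and q):

```python
def equilibrium(r, c, b, m, q=0.0):
    """Lotka-Volterra equilibrium with a pesticide killing both species at
    per-capita rate q. Parameter names r, c, b, m are assumed standard
    notation, not taken from the excerpt. Setting the modified equations

        dH/dt = r*H - q*H - c*H*P    to zero gives  P* = (r - q) / c
        dP/dt = b*c*H*P - m*P - q*P  to zero gives  H* = (m + q) / (b * c)
    """
    H_star = (m + q) / (b * c)
    P_star = (r - q) / c
    return H_star, P_star

H0, P0 = equilibrium(r=1.0, c=0.5, b=0.2, m=0.3)          # no pesticide
H1, P1 = equilibrium(r=1.0, c=0.5, b=0.2, m=0.3, q=0.1)   # with pesticide
# The pesticide raises the prey equilibrium and lowers the predator's.
assert H1 > H0 and P1 < P0
```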
Empirical evidence
The paradox has been documented repeatedly throughout the history of pe |
https://en.wikipedia.org/wiki/IEEE%20Xplore | IEEE Xplore digital library is a research database for discovery and access to journal articles, conference proceedings, technical standards, and related materials on computer science, electrical engineering and electronics, and allied fields. It contains material published mainly by the Institute of Electrical and Electronics Engineers (IEEE) and other partner publishers. IEEE Xplore provides web access to more than 5 million documents from publications in computer science, electrical engineering, electronics and allied fields. Its documents and other materials comprise more than 300 peer-reviewed journals, more than 1,900 global conferences, more than 11,000 technical standards, almost 5,000 ebooks, and over 500 online courses. Approximately 20,000 new documents are added each month. Anyone can search IEEE Xplore and find bibliographic records and abstracts for its contents, while access to full-text documents may require an individual or institutional subscription.
See also
ACM Digital Library
IEEE Computer Society Digital Library
List of academic databases and search engines |
https://en.wikipedia.org/wiki/TRiC%20%28complex%29 | T-complex protein Ring Complex (TRiC), otherwise known as Chaperonin Containing TCP-1 (CCT), is a multiprotein complex and the chaperonin of eukaryotic cells. Like the bacterial GroEL, the TRiC complex aids in the folding of ~10% of the proteome, and actin and tubulin are some of its best known substrates. TRiC is an example of a biological machine that folds substrates within the central cavity of its barrel-like assembly using the energy from ATP hydrolysis.
Subunits
The human TRiC complex is formed by two rings, each containing eight similar but non-identical subunits, each with a molecular weight of ~60 kDa. The two rings are stacked in an asymmetrical fashion, forming a barrel-like structure with a molecular weight of ~1 MDa.
Molecular weight of human subunits.
Counterclockwise from the exterior, each ring is made of the subunits in the following order: 6-8-7-5-2-4-1-3.
Evolution
The CCT evolved from the archaeal thermosome ~2 Gya, with the two subunits diversifying into multiple types. The CCT changed from having one type of subunit to having two, three, five, and finally eight types.
See also
Chaperone
Chaperonin
Heat shock protein
Notes |
https://en.wikipedia.org/wiki/National%20Ganga%20River%20Basin%20Authority | National Ganga River Basin Authority (NGRBA) is a financing, planning, implementing, monitoring and coordinating authority for the Ganges River, functioning under the Jal Shakti ministry of India. The mission of the organisation is to safeguard the drainage basin which feeds water into the Ganges by protecting it from pollution or overuse. In July 2014, the NGRBA was transferred from the Ministry of Environment and Forests to the Ministry of Water Resources, River Development & Ganga Rejuvenation, formerly the Ministry of Water Resources (India).
In a notification issued on 20 September 2016, the Union government decided, under the River Ganga (Rejuvenation, Protection and Management) Authorities Order 2016, to create a new body named the "National Council for River Ganga (Rejuvenation, Protection and Management)" (NCRG) to replace the existing NGRBA. The new body assumes overall responsibility for the superintendence of pollution prevention and the rejuvenation of the river Ganga basin.
Establishment
It was established by the Central Government of India on 20 February 2009 under Section 3(3) of the Environment (Protection) Act, 1986, which also declared the Ganges the "National River" of India.
Overview
The Prime Minister is the chair of the Authority. Other members include the cabinet ministers of ministries that include the Ganges among their direct concerns and the chief ministers of the states through which the Ganges flows: Uttarakhand, Uttar Pradesh, Bihar, Jharkhand, and West Bengal, among others.
The first meeting of the National Ganga River Basin Authority was held on 5 October 2009.
In the 2010 Union budget of India, the allocation for the National Ganga River Basin Authority was doubled to 500 crore rupees (5,000,000,000).
Members of the NGRBA
There are a total of 23 members of the NGRBA. 14 out of the 23 come from the government sectors wherea
https://en.wikipedia.org/wiki/Dodecahedral%20number | A dodecahedral number is a figurate number that represents a dodecahedron. The nth dodecahedral number is given by the formula n(3n − 1)(3n − 2)/2.
The first such numbers are 0, 1, 20, 84, 220, 455, 816, 1330, 2024, 2925, 4060, 5456, 7140, 9139, 11480, … .
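The closed form n(3n − 1)(3n − 2)/2 reproduces the listed values, which can be checked directly (a short sketch):

```python
def dodecahedral(n):
    """n-th dodecahedral number: n(3n - 1)(3n - 2) / 2."""
    return n * (3 * n - 1) * (3 * n - 2) // 2

# Matches the first values of the sequence quoted above.
assert [dodecahedral(n) for n in range(10)] == [
    0, 1, 20, 84, 220, 455, 816, 1330, 2024, 2925,
]
```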
History
The first study of dodecahedral numbers appears to have been by René Descartes, around 1630, in his De solidorum elementis. Prior to Descartes, figurate numbers had been studied by the ancient Greeks and by Johann Faulhaber, but only for polygonal numbers, pyramidal numbers, and cubes. Descartes introduced the study of figurate numbers based on the Platonic solids and some semiregular polyhedra; his work included the dodecahedral numbers. However, De solidorum elementis was lost, and not rediscovered until 1860. In the meantime, dodecahedral numbers had been studied again by other mathematicians, including Friedrich Wilhelm Marpurg in 1774, Georg Simon Klügel in 1808, and Sir Frederick Pollock in 1850.
Properties
Generating function
The ordinary generating function of the dodecahedral numbers is x(1 + 16x + 10x^2)/(1 − x)^4.
https://en.wikipedia.org/wiki/Heisenberg%20group | In mathematics, the Heisenberg group H, named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form
under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group").
The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space.
The three-dimensional case
In the three-dimensional case, the product of two Heisenberg matrices is given by:
As one can see from the term ab′, the group is non-abelian.
The neutral element of the Heisenberg group is the identity matrix, and the inverse of the element with entries a, b, c is the element with entries −a, −b, ab − c.
The group is a subgroup of the 2-dimensional affine group Aff(2): acting on corresponds to the affine transform .
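Writing the matrix with entries a, b, c as a helper (the name heis is hypothetical), the group law, its non-commutativity, and the inverse can all be checked numerically; a sketch, not part of the article:

```python
import numpy as np

def heis(a, b, c):
    """Upper-triangular Heisenberg matrix with entries a (top middle),
    b (middle right), and c (top right)."""
    return np.array([[1.0,   a,   c],
                     [0.0, 1.0,   b],
                     [0.0, 0.0, 1.0]])

# Group law: (a, b, c)(a', b', c') = (a + a', b + b', c + c' + a*b')
X = heis(1.0, 2.0, 3.0)
Y = heis(4.0, 5.0, 6.0)
assert np.allclose(X @ Y, heis(5.0, 7.0, 3.0 + 6.0 + 1.0 * 5.0))

# Non-abelian: the cross term a*b' differs between XY and YX.
assert not np.allclose(X @ Y, Y @ X)

# Inverse: (a, b, c)^(-1) = (-a, -b, a*b - c)
assert np.allclose(X @ heis(-1.0, -2.0, 1.0 * 2.0 - 3.0), np.eye(3))
```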
There are several prominent examples of the three-dimensional case.
Continuous Heisenberg group
If a, b, c are real numbers (in the ring R), then one has the continuous Heisenberg group H3(R).
It is a nilpotent real Lie group of dimension 3.
In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions. |
https://en.wikipedia.org/wiki/Mathematics%20in%20Education%20and%20Industry | MEI (Mathematics in Education and Industry) is an independent educational charity and curriculum development body for mathematics education in the United Kingdom. Income generated through its work is used to support the teaching and learning of mathematics.
History
MEI was founded in 1963 with a grant from the Schools & Industry Committee of the Mathematical Association. In 1965 it produced its first exam, Additional Mathematics, and produced an A-level course two years later. MEI's A-level exams were the first to include probability.
It was incorporated as a company on 18 October 1996.
Structure
Although independent, MEI works in partnership with many organisations, including the UK Government. MEI is a registered charity with a board of directors and a small professional staff.
Qualifications
GCE AS/A level Mathematics, Further Mathematics and Further Mathematics (Additional) (Published by OCR)
AS Level Statistics
GCSE Mathematics
Foundations of Advanced Mathematics (FAM) – a freestanding course
Introduction to Quantitative Methods (in association with OCR)
OCR MEI Level 3 Core Maths Qualifications (Level 3 Certificate in Quantitative Reasoning and Level 3 Certificate in Quantitative Problem Solving)
Competitions
MEI organises an annual online competition called Ritangle for teams of students of A level Mathematics, the International Baccalaureate and Scottish Highers. Questions are posted on the Integral website, with correct answers releasing a clue for the final question. |
https://en.wikipedia.org/wiki/Brother%20Jonathan | Brother Jonathan is the personification of New England. He was also used as an emblem of the United States in general, and can be an allegory of capitalism. His too-short pants, too-tight waistcoat and old-fashioned style reflect his taste for inexpensive, second-hand products and efficient use of means.
Brother Jonathan soon became a stock fictional character, developed as a good-natured parody of all New England during the early American Republic. He was widely popularized by the weekly newspaper Brother Jonathan and the humor magazine Yankee Notions.
Brother Jonathan was usually depicted in editorial cartoons and patriotic posters outside New England as a long-winded New Englander who dressed in striped trousers, somber black coat and stove-pipe hat. Inside New England, "Brother Jonathan" was depicted as an enterprising and active businessman who blithely boasted of Yankee conquests for the Universal Yankee Nation.
After 1865, the garb of Brother Jonathan was emulated by Uncle Sam, a common personification of the continental government of the United States.
History
The term dates at least to the 17th century, when it was applied to Puritan roundheads during the English Civil War. It came to include residents of colonial New England, who were mostly Puritans in support of the Parliamentarians during the war. It probably is derived from the Biblical words spoken by David after the death of his friend Jonathan, "I am distressed for thee, my brother Jonathan" (2 Samuel 1:26). As Kenneth Hopper and William Hopper put it, "Used as a term of abuse for their ... Puritan opponents by Royalists during the English Civil War, it was applied by British officers to the rebellious colonists during the American Revolution".
A popular folk tale about the origin of the term holds that the character is derived from Jonathan Trumbull (1710–1785), Governor of the State of Connecticut, which was the main source of supplies for the Northern and Middle Departments during the Ameri |
https://en.wikipedia.org/wiki/Protein%E2%80%93protein%20interaction%20screening | Protein–protein interaction screening refers to the identification of protein–protein interactions with high-throughput screening methods such as computer- and/or robot-assisted plate reading and flow cytometry analysis.
The interactions between proteins are central to virtually every process in a living cell. Information about these interactions improves understanding of diseases and can provide the basis for new therapeutic approaches.
Methods to screen protein–protein interactions
Though there are many methods to detect protein–protein interactions, the majority of these methods—such as co-immunoprecipitation, fluorescence resonance energy transfer (FRET) and dual polarisation interferometry—are not screening approaches.
Ex vivo or in vivo methods
Methods that screen protein–protein interactions in the living cells.
Bimolecular fluorescence complementation (BiFC) is a technique for observing the interactions of proteins. Combined with other new techniques, such as the dual expression recombinase-based (DERB) method, it can enable the screening of protein–protein interactions and their modulators.
The yeast two-hybrid screen investigates the interaction between artificial fusion proteins inside the nucleus of yeast. This approach can identify the binding partners of a protein without bias. However, the method has a notoriously high false-positive rate, which makes it necessary to verify the identified interactions by co-immunoprecipitation.
In vitro methods
The tandem affinity purification (TAP) method allows the high-throughput identification of protein interactions. In contrast with the Y2H approach, the accuracy of the method can be compared to that of small-scale experiments (Collins et al., 2007), and the interactions are detected within the correct cellular environment, as in co-immunoprecipitation. However, the TAP tag method requires two successive steps of protein purification and thus cannot readily detect transient protein–protein interactions. Recent genome
https://en.wikipedia.org/wiki/Accela | Accela is an American private government technology company. It was established in 1999 as a result of a merger between Sierra Computer Systems and Open Data Systems. Accela's platform is used by state and local government agencies in the United States and in other countries.
History
Accela was founded in 1999 as a result of a merger between Sierra Computer Systems and Open Data Systems.
Between 2014 and 2015, Accela acquired ten companies including PublicStuff, GeoTMS, IQM2, Envista, Kinsail, Government Outreach, Decade Software, Civic Insight, Springbrook Software, and SoftRight. In 2017, Accela was acquired by Berkshire Partners.
In September 2018, Accela partnered with Microsoft Azure to power its cloud-based services. On December 10, 2018, Gary Kovacs was named Chief Executive Officer of Accela.
Usage
Government agencies that use Accela's platform include those of San Joaquin County, California; Pima County, Arizona; San Antonio, Texas; San Diego, California; Baltimore County, Maryland; New York City's Department of Health and Mental Hygiene; the city and county of Denver, Colorado; El Paso, Texas; Indianapolis, Indiana; Salt Lake City, Utah; Culver City, California; Cabarrus County, North Carolina; several cities and counties across Florida; and Abu Dhabi.
The Accela Civic Platform digitizes governmental processes. Accela's Civic Applications aid governments in delivering various services, such as permitting, licensing, and code enforcement. Accela also has permitting applications for solar energy and natural disasters. |
https://en.wikipedia.org/wiki/NEAT%20chipset | The NEAT chipset (the acronym standing for "New Enhanced AT") is a four-chip VLSI implementation (including the 82C206 IPC) of the control logic used in IBM PC/AT-compatible computers. It consists of the 82C211 CPU/Bus controller, 82C212 Page/Interleave and EMS Memory controller, 82C215 Data/Address buffer, and 82C206 Integrated Peripherals Controller (IPC). NEAT, official designation CS8221, was developed by Chips and Technologies.
History
The NEAT chipset descended from the first chipset that C&T had developed for IBM XT-compatible systems, which is based around the 82C100 "XT controller" chip. 82C100 incorporates the functionality of what had been, until its invention, discrete TTL chips on the XT's mainboard, namely:
8284 clock generator
8288 bus controller
8254 Programmable Interval Timer
8259 Programmable Interrupt Controller
8237 DMA controller
8255 Programmable Peripheral Interface (PPI)
DRAM/SRAM controller
XT Keyboard controller
IBM PC compatibility is provided by C&T's 82C206 Integrated Peripheral Controller (IPC), introduced by C&T in 1986. This chip, like its predecessor the 82C100, provides equivalent functionality to the TTL chips on the PC/AT's mainboard, namely:
82284 clock generator
82288 bus controller
8254 Programmable Interval Timer
two 8259 Programmable Interrupt Controllers
two 8237 DMA controllers
74LS612 Memory Mapper chip
MC146818 NVRAM/RTC chip
NEAT CS8221's predecessor, called CS8220, requires five chips (buffers and memory controllers) for a virtually complete motherboard, while NEAT requires four, and added support for separate ISA bus clocks. The eventual successor to the NEAT chipset, 82C235 Single Chip AT (SCAT), amalgamates all of the chips of the NEAT chipset into a single chip.
Other manufacturers
Other manufacturers produced equivalent chips. OPTi, for example, produced a two-chip "AT controller" chipset comprising the OPTi 82C206 and 82C495XLC, which is found in many early 80486 and Penti |
https://en.wikipedia.org/wiki/Wave%20shoaling | In fluid dynamics, wave shoaling is the effect by which surface waves, entering shallower water, change in wave height. It is caused by the fact that the group velocity, which is also the wave-energy transport velocity, changes with water depth. Under stationary conditions, a decrease in transport speed must be compensated by an increase in energy density in order to maintain a constant energy flux. Shoaling waves will also exhibit a reduction in wavelength while the frequency remains constant.
In other words, as the waves approach the shore and the water gets shallower, the waves get taller, slow down, and get closer together.
In shallow water with parallel depth contours, non-breaking waves will increase in wave height as the wave packet enters shallower water. This is particularly evident for tsunamis, which wax in height as they approach a coastline, with devastating results.
Overview
Waves nearing the coast change wave height through different effects. Some of the important wave processes are refraction, diffraction, reflection, wave breaking, wave–current interaction, friction, wave growth due to the wind, and wave shoaling. In the absence of the other effects, wave shoaling is the change of wave height that occurs solely due to changes in mean water depth – without changes in wave propagation direction and dissipation. Pure wave shoaling occurs for long-crested waves propagating perpendicular to the parallel depth contour lines of a mildly sloping sea-bed. Then the wave height H at a certain location can be expressed as:
H = K_S H_0,
with K_S the shoaling coefficient and H_0 the wave height in deep water. The shoaling coefficient K_S depends on the local water depth h and the wave frequency f (or equivalently on h and the wave period T). Deep water means that the waves are (hardly) affected by the sea bed, which occurs when the depth is larger than about half the deep-water wavelength L_0.
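Under linear wave theory, conservation of energy flux gives the shoaling coefficient as K_S = sqrt(c_g0 / c_g), the square root of the ratio of the deep-water group velocity to the local one. A minimal Python sketch (the function names are illustrative; it assumes the linear dispersion relation ω² = gk·tanh(kh)):

```python
import math

def wave_number(T, h, g=9.81):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k
    by Newton iteration, starting from the deep-water value."""
    omega = 2 * math.pi / T
    k = omega ** 2 / g  # deep-water initial guess
    for _ in range(50):
        f = g * k * math.tanh(k * h) - omega ** 2
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k -= f / df
    return k

def shoaling_coefficient(T, h, g=9.81):
    """K_S = sqrt(c_g0 / c_g): deep-water over local group velocity."""
    k = wave_number(T, h, g)
    c = math.sqrt(g / k * math.tanh(k * h))           # local phase speed
    n = 0.5 * (1 + 2 * k * h / math.sinh(2 * k * h))  # ratio c_g / c
    cg = n * c                                        # local group velocity
    cg0 = g * T / (4 * math.pi)                       # deep-water group velocity
    return math.sqrt(cg0 / cg)
```

For a 10 s swell shoaling from deep water into 5 m depth, this gives K_S ≈ 1.1, i.e. roughly a 10% increase in wave height before other effects (such as breaking) intervene.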
Physics
For non-breaking waves, the energy flux associated with the wave motion, which is the |
https://en.wikipedia.org/wiki/Deaf%20flag | The Deaf flag is a flag that symbolises the Deaf community (especially the signing Deaf community), and is used as a form of visibility for a socio-cultural minority that is often discriminated against in various areas.
The flag was designed by the French Deafblind artist Arnaud Balard. It depicts a large open turquoise hand on another yellow hand (of which only the profile around the turquoise hand is visible). The tips of the fingers are outside the flag, so that the fingers "stretch out" indefinitely. The background colour is navy blue.
Symbolism
There are several different symbols that are included in the flag (based on what is indicated by its creator):
The hands represent the signing Deaf community and Sign language.
The infinite fingers allude to the projection of the use of Sign language in the world, with more than 200 existing Sign Languages. The fingers also symbolise the connection with the five continents (in order from top to bottom): Europe, the Americas, Asia, Oceania and Africa.
The colour turquoise is the world color of Sign language, Deaf culture, and the signing Deaf community (Deaf, Deafblind, CODA, Sign Language interpreters, family members).
The colour yellow symbolizes light, life, the awakened mind, coexistence.
The colour navy blue (dark blue) symbolises planet Earth, humanity, and is the color adopted to represent deafness (represented by a blue ribbon). In this way, non-signing Deaf and Deafblind people (oralists) would be included.
The design aims for the flag to be a symbol of openness, inclusiveness and union, rather than isolation or segregation.
History
At the beginning of the 21st century, the World Federation of the Deaf initiated a process to define the Deaf flag.
The Swedish proposal
In 2009, during the Congress of the Swedish Confederation of the Deaf in Leksand, the Swedish National Association of the Deaf (SDR) was commissioned to present a flag at the 16th World Congress of the World Federation of the Deaf, he |
https://en.wikipedia.org/wiki/Daylight | Daylight is the combination of all direct and indirect sunlight during the daytime. This includes direct sunlight, diffuse sky radiation, and (often) both of these reflected by Earth and terrestrial objects, like landforms and buildings. Sunlight scattered or reflected by astronomical objects is generally not considered daylight. Therefore, daylight excludes moonlight, despite it being reflected indirect sunlight.
Definition
Daylight is present at a particular location, to some degree, whenever the Sun is above the local horizon. (This is true for slightly more than 50% of the Earth at any given time; it is slightly more than half because atmospheric refraction and the Sun's finite angular size allow sunlight to be seen from just beyond the geometric horizon.) However, the outdoor illuminance can vary from 120,000 lux for direct sunlight at noon, which may cause eye pain, to less than 5 lux for thick storm clouds with the Sun at the horizon (even <1 lux in the most extreme cases), which may make shadows from distant street lights visible. It may be darker under unusual circumstances like a solar eclipse or very high levels of atmospheric particulates, which include smoke (see New England's Dark Day), dust, and volcanic ash.
Intensity in different conditions
For comparison, nighttime illuminance ranges from roughly 0.05–0.3 lux under a full moon, through about 0.002 lux on a clear, moonless starlit night, down to about 0.0001 lux under an overcast, moonless sky.
For a table of approximate daylight intensity in the Solar System, see sunlight.
See also |
https://en.wikipedia.org/wiki/Water%20slide%20decal | Water slide decals (or water transfer decals) are decals which rely on dextrose residue from the decal paper to bond the decal to a surface. A water-based adhesive layer can be added to the decal to create a stronger bond or may be placed between layers of lacquer to create a durable decal transfer. The paper also has a layer of glucose film added prior to the dextrose layer which gives it adhesive properties; the dextrose layer gives the decal the ability to slide off the paper and onto the substrate (lubricity).
Water slide decals are thinner than many other decorative techniques (such as vinyl stickers) and as they are printed, they can be produced to a very high level of detail. As such, they are popular in craft areas such as scale modeling, as well as for labeling DIY electronics devices, such as guitar pedals.
Previously, water slide decals were professionally printed and only available in supplied designs, but with the advent of printable decal paper for colour inkjet and laser printers, custom decals can now be produced by the hobbyist or small business. |
https://en.wikipedia.org/wiki/Platonic%20hydrocarbon | In organic chemistry, a Platonic hydrocarbon is a hydrocarbon (molecule) whose structure matches one of the five Platonic solids, with carbon atoms replacing its vertices, carbon–carbon bonds replacing its edges, and hydrogen atoms as needed.
Not all Platonic solids have molecular hydrocarbon counterparts; those that do are the tetrahedron (tetrahedrane), the cube (cubane), and the dodecahedron (dodecahedrane).
Tetrahedrane
Tetrahedrane (C4H4) is a hypothetical compound. It has not yet been synthesized without substituents, but it is predicted to be kinetically stable in spite of its angle strain. Some stable derivatives, including tetra(tert-butyl)tetrahedrane (a hydrocarbon) and tetra(trimethylsilyl)tetrahedrane, have been produced.
Cubane
Cubane (C8H8) has been synthesized. Although it has high angle strain, cubane is kinetically stable, due to a lack of readily available decomposition paths.
Octahedrane
Angle strain would make an octahedron highly unstable due to inverted tetrahedral geometry at each vertex. There would also be no hydrogen atoms because four edges meet at each corner; thus, the hypothetical octahedrane molecule would be an allotrope of elemental carbon, C6, and not a hydrocarbon. The existence of octahedrane cannot be ruled out completely, although calculations have shown that it is unlikely.
Dodecahedrane
Dodecahedrane (C20H20) was first synthesized in 1982, and has minimal angle strain; the tetrahedral angle is 109.5° and the dodecahedral angle is 108°, only a slight discrepancy.
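The quoted angle discrepancy can be verified in a few lines of Python (a worked check of the figures above, nothing more):

```python
import math

# Ideal tetrahedral (sp3) bond angle: arccos(-1/3).
tetrahedral_angle = math.degrees(math.acos(-1 / 3))   # ~109.47 degrees

# Interior angle of a regular pentagon, i.e. a dodecahedron face.
pentagon_angle = 180 * (5 - 2) / 5                    # 108.0 degrees

# Dodecahedrane's angle strain per C-C-C angle is under 1.5 degrees.
strain = tetrahedral_angle - pentagon_angle
```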
Icosahedrane
The tetravalency (4-connectedness) of carbon excludes an icosahedron, because 5 edges meet at each vertex. True pentavalent carbon is unlikely; methanium, nominally CH5+, usually exists as CH3+·H2. The hypothetical icosahedral cluster C12^2− lacks hydrogen, so it is not a hydrocarbon; it is also an ion.
Both icosahedral and octahedral structures have been observed in boron compounds such as the dodecaborate ion and some of the carbon-containing carboranes.
Other polyhedr |
https://en.wikipedia.org/wiki/Malapterurus%20shirensis | Malapterurus shirensis is a species of electric catfish native to the Zambezi River basin where it occurs in the countries of Mozambique, Zambia and Zimbabwe. This species grows to a length of SL. |
https://en.wikipedia.org/wiki/BIOS-3 | BIOS-3 is an experimental closed ecosystem at the Institute of Biophysics in Krasnoyarsk, Russia.
Its construction began in 1965, and was completed in 1972. BIOS-3 consists of an underground steel structure suitable for up to three persons, and was initially used for developing closed ecological human life-support ecosystems. It was divided into four compartments, one of which is a crew area. The crew area consists of three single cabins, a galley, a lavatory and a control room. Initially one other compartment was an algal cultivator, and the other two were phytotrons for growing wheat or vegetables. The plants growing in the two phytotrons contributed approximately 25% of the air filtering in the compound. Later, the algal cultivator was converted into a third phytotron. A level of light comparable to sunlight was supplied in each of the four compartments by 20 kW xenon lamps, cooled by water jackets. The facility used 400 kW of electricity, supplied by a nearby hydroelectric power station.
Chlorella algae were used to recycle air breathed by humans, absorbing carbon dioxide and replenishing it with oxygen through photosynthesis. The algae were cultivated in stacked tanks under artificial light. To achieve a balance of oxygen and carbon dioxide, one human needed of exposed Chlorella. Air was purified of more complex organic compounds by heating to in the presence of a catalyst. Water and nutrients were stored in advance and were also recycled. By 1968, system efficiency had reached 85% by recycling water. Dried meat was imported into the facility, and urine and feces were generally dried and stored, rather than being recycled.
BIOS-3 facilities were used to conduct 10 manned closure experiments with a one to three man crew. The longest experiment with a three-man crew lasted 180 days (in 1972-1973). The facilities were used for the tests at least until 1984.
In 1986, Dr. Josef Gitelson, head of the Institute of Biophysics (IBP) at Krasnoyarsk and developer of biospherics as we |
https://en.wikipedia.org/wiki/Nexus%20Hawk | The Nexus Hawk 4G is a gateway router linking broadband cellular data (such as CDMA and GSM), Wi-Fi (IEEE 802.11 a/b/g/n) and WAN (such as BGAN satellite) networks, providing enterprises with broadband wireless internet/network data services in mobile and remote environments.
The Nexus Hawk's original development was funded under a DOD (Department of Defense) prime contract. The technology was primarily designed for military use and supports public safety. The Nexus Hawk is currently in use by law enforcement agencies, government data infrastructure, commercial fleets, retail locations, and livery services in Washington, DC.
The device provides secure access to public and private wired and wireless networks, including Sprint (CDMA EV-DO Rev A, 1xRTT), Verizon Wireless (LTE, CDMA EV-DO Rev A, 1xRTT), AT&T Wireless (4G, GSM/HSDPA), Telus (HSDPA+, CDMA EV-DO Rev A, 1xRTT), the Washington DC EV-DO Rev A Regional Wireless Broadband Network (RWBN), non-U.S. cellular networks, and secure Wi-Fi. It provides GPS for applications such as Automatic Vehicle Location (AVL), sometimes commercially referred to as fleet tracking or geo-based dispatch and navigation. It offers connectivity to multiple simultaneous WANs via Gigabit Ethernet, USB or Wi-Fi paths, with user-selectable order for failover and failback, access to four simultaneous WANs plus GPS, and automatic, persistent network connections. The device incorporates 2 USB and 4 PCI-M slots to accommodate future networks (such as WiMAX and the Public Safety Band), accepts ExpressCard 34 mm air cards, PCMCIA CardBus air cards and USB air cards via an adapter, and includes secure remote configuration management, built-in IPsec and OpenVPN, pass-through security features, and a FIPS 140-2 certified SSL module.
See also
HSPA
Huawei E220
External links
Nexus Hawk Official website
Networking companies of the United States
Telecommunications equipment
Wireless networking hardware |
https://en.wikipedia.org/wiki/International%20Journal%20of%20Computational%20Geometry%20and%20Applications | The International Journal of Computational Geometry and Applications (IJCGA) is a bimonthly journal published since 1991 by World Scientific. It covers the application of computational geometry in the design and analysis of algorithms, focusing on problems arising in various fields of science and engineering such as computer-aided geometric design (CAGD), operations research, and others.
The current editors-in-chief are D.-T. Lee of the Institute of Information Science in Taiwan, and Joseph S. B. Mitchell of the Department of Applied Mathematics and Statistics at the State University of New York at Stony Brook.
Abstracting and indexing
Current Contents/Engineering, Computing & Technology
ISI Alerting Services
Science Citation Index Expanded (also known as SciSearch)
CompuMath Citation Index
Mathematical Reviews
INSPEC
DBLP Bibliography Server
Zentralblatt MATH
Computer Abstracts |
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Plant%20Biology | Annual Review of Plant Biology is a peer-reviewed scientific journal published by Annual Reviews. It was first published in 1950 as the Annual Review of Plant Physiology. Sabeeha Merchant has been the editor since 2005, making her the longest-serving editor in the journal's history after Winslow Briggs (1973–1993). Journal Citation Reports lists the journal's 2022 impact factor as 23.9, ranking it second of 238 journal titles in the category "Plant Sciences".
History
Beginning in 1947, the publishing nonprofit Annual Reviews began asking plant physiologists whether it would be useful to have an annual journal that published review articles summarizing the recent literature in the field. Responses indicated that this would be very welcome, and the Annual Review of Plant Physiology published its first volume in 1950. Its founding editor was Daniel I. Arnon. It was thus the seventh journal title to be published by Annual Reviews. Its scope was somewhat reduced by the publication of the Annual Review of Phytopathology, first released in 1963. In 1988, its name changed to the Annual Review of Plant Physiology and Plant Molecular Biology. In the 1990s, it began having color illustrations and was published online for the first time. Its name was changed once again in 2002 to its current version, the Annual Review of Plant Biology. As of 2020, it was published both in print and electronically.
The journal covers developments in the field of plant biology, including cell biology, genetics, genomics, molecular biology, cell differentiation, tissue, acclimation (including adaptation), and methods. The journal is abstracted and indexed in the following databases.
Chemical Abstracts Service
MEDLINE/PubMed
Science Citation Index
BIOSIS Previews
Editorial processes
The Annual Review of Plant Biology is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members part |
https://en.wikipedia.org/wiki/Geochore | Geochores (Greek gé "the earth" and chora "area") are relatively large landscape areas with similar – but owing to their size not fully uniform – characteristics. They therefore consist of a tapestry of smaller landscape units, which can be hierarchically grouped:
Physiotopes or geotopes form the base unit (tope from the Greek, τόπος, "place"). These are objects whose features are assessed as homogenous and which cannot sensibly be subdivided into smaller landscapes. Their area depends on the distribution pattern of their features and on the purpose or aim of the classification, but in general they are between 0.1 and 5 hectares in area.
Nanogeochores or nanochores are the simplest level of physiotopes.
Example: Ameisenberg near Oybin is part of the Oybin Rock Region (microgeochore)
Microgeochores are small-scale landscape units with an average area of 12 km2. Together with biotopes such as woodland or agricultural land managed in a particular way, they form a tapestry of nanogeochores. They cover areas which are similar mainly in terms of their geological origins, rocks, topographical elevation or relief energy. They are a good example of how geological and topographical history affects the resulting landscape structure.
Example: Hochwald Ridge and Oybin Rock Region
Mesogeochores are formations and groups of microgeochores. Their association is based on similarities of climate, of topography such as mountains, valleys and hills, or of associated features from the Pleistocene (ice age). They are oriented towards the management and relative size of the microgeochores of which they are composed.
Example: Zittau Mountains or Zittau Basin
Macrogeochores or major landscapes - as natural region major units - are simply groupings of mesogeochores, whose cohesiveness is based e.g. on geological foundations, on climatic conditions or vegetation (e. g. hpnV). They are "regional" in size.
Example: Lusatian Mountains or Upper Lusatian Highlands.
Literature
Haase, G. |
https://en.wikipedia.org/wiki/AIM%20Multiuser%20Benchmark | The AIM Multiuser Benchmark, also called the AIM Benchmark Suite VII or AIM7, is a job throughput benchmark widely used by UNIX computer system vendors. Current research operating systems such as K42 use the reaim form of the benchmark for performance analysis.
The AIM7 benchmark measures some of the same things as the SDET benchmark.
The original code was developed by Gene Dronek for AIM Technology, Inc., who licensed it to others. The first AIM Benchmarks were for single-user PCs. The suite was expanded and enhanced to become multi-user benchmarks by Donald Steiny. Caldera International, Inc., bought the license and released the source code for Suite VII and Suite IX under the GPL.
AIM7 is a program written in C that forks many processes called tasks, each of which concurrently runs in random order a set of subtests called jobs. There are 53 kinds of jobs, each of which exercises a different aspect of the operating system, such as disk-file operations, process creation, user virtual memory operations, pipe I/O, and compute-bound arithmetic loops.
An AIM7 benchmark run is composed of a sequence of subruns with the number of tasks incrementing by one between each subrun. Each subrun goes until each of its tasks has completed its set of jobs. Each subrun reports a metric of jobs completed per minute, with the final report for the overall benchmark being a table of that throughput metric versus number of tasks. A given system will have a peak number of tasks N at which the jobs per minute is maximized. Either N or the value of the jobs per minute at N is typically used as the metric of interest. |
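The run structure described above can be sketched in miniature. Everything here is illustrative — the job body is a toy stand-in, not one of AIM7's real 53 job types, and threads stand in for AIM7's forked processes:

```python
import random
import threading
import time

def arithmetic_job():
    # Toy stand-in for one job type (the real suite also exercises
    # disk I/O, process creation, virtual memory, pipes, ...).
    sum(i * i for i in range(10_000))

def run_task(jobs, counter, lock):
    """One 'task': complete every job once, in random order, as AIM7 tasks do."""
    for i in random.sample(range(len(jobs)), len(jobs)):
        jobs[i]()
    with lock:
        counter[0] += len(jobs)

def jobs_per_minute(n_tasks, jobs):
    """One subrun: n_tasks concurrent tasks; returns jobs completed per minute."""
    counter, lock = [0], threading.Lock()
    start = time.perf_counter()
    tasks = [threading.Thread(target=run_task, args=(jobs, counter, lock))
             for _ in range(n_tasks)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    elapsed = time.perf_counter() - start
    return counter[0] / elapsed * 60

# A full run sweeps the task count and tabulates throughput versus tasks;
# the peak of the resulting curve is the metric of interest.
jobs = [arithmetic_job] * 5
curve = {n: jobs_per_minute(n, jobs) for n in range(1, 4)}
```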
https://en.wikipedia.org/wiki/Radiocarbon%20%28journal%29 | Radiocarbon is a scientific journal devoted to the topic of radiocarbon dating.
It was founded in 1959 as a supplement to the American Journal of Science, and is an important source of data and information about radiocarbon dating. It publishes many radiocarbon results, and since 1979 it has published the proceedings of the international conferences on radiocarbon dating. The journal is published six times per year by Cambridge University Press.
See also
Carbon-14 |
https://en.wikipedia.org/wiki/Mons%20pubis | In human anatomy, and in mammals in general, the mons pubis or pubic mound (also known simply as the mons, and known specifically in females as the mons Venus or mons veneris) is a rounded mass of fatty tissue found over the pubic symphysis of the pubic bones.
Anatomy
For females, the mons pubis forms the anterior portion of the vulva. It divides into the labia majora (literally "larger lips"), on either side of the furrow known as the pudendal cleft, that surrounds the labia minora, clitoris, urinary meatus, vaginal opening, and other structures of the vulval vestibule.
Although present in both men and women, the mons pubis tends to be larger in women. Its fatty tissue is sensitive to estrogen, causing a distinct mound to form with the onset of female puberty. This pushes the forward portion of the labia majora out and away from the pubic bone. The mound also becomes covered with pubic hair. It often becomes less prominent with the decrease in bodily estrogen experienced during menopause.
Etymology
The term mons pubis is derived from Latin for "pubic mound". The more specifically female mons Venus or mons veneris is derived from Latin for "mound of Venus".
Society and culture
Although not part of external genitalia itself, the pubic mound can be regarded as an erogenous zone and is highly eroticized in many cultures. Throughout history, the complete or partial removal of pubic hair has been common in many societies, and more recently it has become widespread in the Western world. The full removal of pubic hair by use of wax, sugar or shaving, known as a "Brazilian wax", has become common practice in recent years.
In some circumstances, the mons veneris is subjected to aesthetic ideals beyond hair removal. Correspondingly, plastic surgery is offered which alters the shape of the mons to a desired ideal. Desired ideals may be influenced by personal preferences, current cultural norms, or societal pressures.
Permanent forms of decoration to enhance the aesthe |
https://en.wikipedia.org/wiki/Jacobi%20theta%20functions%20%28notational%20variations%29 | There are a number of notational systems for the Jacobi theta functions. The notations given in the Wikipedia article define the original function
ϑ(z; τ) = Σ_{n=−∞}^{∞} exp(πi n² τ + 2πi n z),
which is equivalent to
ϑ(z; τ) = Σ_{n=−∞}^{∞} q^(n²) η^n,
where q = e^(πiτ) and η = e^(2πiz).
However, a similar notation is defined somewhat differently in Whittaker and Watson, p. 487:
This notation is attributed to "Hermite, H.J.S. Smith and some other mathematicians". They also define
This is a factor of i off from the definition of as defined in the Wikipedia article. These definitions can be made at least proportional by x = za, but other definitions cannot. Whittaker and Watson, Abramowitz and Stegun, and Gradshteyn and Ryzhik all follow Tannery and Molk, in which
Note that there is no factor of π in the argument as in the previous definitions.
Whittaker and Watson refer to still other definitions of . The warning in Abramowitz and Stegun, "There is a bewildering variety of notations...in consulting books caution should be exercised," may be viewed as an understatement. In any expression, an occurrence of should not be assumed to have any particular definition. It is incumbent upon the author to state what definition of is intended. |
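The competing conventions are easy to compare numerically once one is pinned down. A short Python sketch (the function name is illustrative) using the classical series ϑ₃(z, q) = 1 + 2 Σ_{n≥1} q^(n²) cos 2nz, a convention in which the argument carries no factor of π:

```python
import cmath

def theta3(z, q, terms=60):
    """theta_3(z, q) = 1 + 2 * sum_{n>=1} q^(n^2) * cos(2 n z)."""
    return 1 + 2 * sum(q ** (n * n) * cmath.cos(2 * n * z)
                       for n in range(1, terms))

# Conventions differ by a factor of pi in the argument: a definition written
# in terms of pi*z is recovered here as theta3(cmath.pi * z, q).
```

The series converges rapidly for |q| < 1, so a few dozen terms suffice; one can check, for example, that the function is π-periodic in z under this convention.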
https://en.wikipedia.org/wiki/Mir-632%20microRNA%20precursor%20family | In molecular biology mir-632 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
miR-632 and MDS
miR-632 has been identified as one of three key miRNAs associated with the myelodysplastic syndromes (MDS). In particular, its levels discriminate well between MDS cases and normal controls, and its expression is decreased in MDS. It can therefore be used as a potential diagnostic marker for MDS.
DNAJB6 protein
miR-632 targets the coding region of the DNAJB6 protein, a member of the Heat Shock Protein 40 (HSP40) family, which shows constitutive expression. DNAJB6 is known to be a negative regulator of tumour progression in breast cancer, and its levels are compromised in advanced tumour progression. miR-632 has been linked to the downregulation of DNAJB6 and is capable of silencing both splice variants of this protein. It is currently unknown whether miR-632 is just one of many negative regulators controlling DNAJB6 levels.
See also
MicroRNA |
https://en.wikipedia.org/wiki/Dopamine-responsive%20dystonia | Dopamine-responsive dystonia (DRD) also known as Segawa syndrome (SS), is a genetic movement disorder which usually manifests itself during early childhood at around ages 5–8 years (variable start age).
Characteristic symptoms are increased muscle tone (dystonia, such as clubfoot) and Parkinsonian features, typically absent in the morning or after rest but worsening during the day and with exertion. Children with dopamine-responsive dystonia are often misdiagnosed as having cerebral palsy. The disorder responds well to treatment with levodopa.
Signs and symptoms
The disease typically starts in one limb, typically one leg. Progressive dystonia results in clubfoot and tiptoe walking. The symptoms can spread to all four limbs around age 18, after which progression slows and eventually symptoms reach a plateau. There can be regression in developmental milestones (both motor and mental skills) and failure to thrive in the absence of treatment.
In addition, dopamine-responsive dystonia is typically characterized by signs of parkinsonism that may be relatively subtle. Such signs may include slowness of movement (bradykinesia), tremors, stiffness and resistance to movement (rigidity), balance difficulties, and postural instability. Approximately 25 percent also have abnormally exaggerated reflex responses (hyperreflexia), particularly in the legs. These symptoms can result in a presentation similar to that of Parkinson's disease.
Many patients experience improvement with sleep, are relatively free of symptoms in the morning, and develop increasingly severe symptoms as the day progresses (i.e., diurnal fluctuation). Accordingly, this disorder has sometimes been referred to as "progressive hereditary dystonia with diurnal fluctuations." Yet some people with dopamine-responsive dystonia do not experience such diurnal fluctuations, causing many researchers to prefer other disease terms.
Other symptoms - footwear
excessive wear at toes, but little wear on heels, thus re |
https://en.wikipedia.org/wiki/Elliptic%20surface | In mathematics, an elliptic surface is a surface that has an elliptic fibration, in other words a proper morphism with connected fibers to an algebraic curve such that almost all fibers are smooth curves of genus 1. (Over an algebraically closed field such as the complex numbers, these fibers are elliptic curves, perhaps without a chosen origin.) This is equivalent to the generic fiber being a smooth curve of genus one. This follows from proper base change.
The surface and the base curve are assumed to be non-singular (complex manifolds or regular schemes, depending on the context). The fibers that are not elliptic curves are called the singular fibers and were classified by Kunihiko Kodaira. Both elliptic and singular fibers are important in string theory, especially in F-theory.
Elliptic surfaces form a large class of surfaces that contains many of the interesting examples of surfaces, and are relatively well understood in the theories of complex manifolds and smooth 4-manifolds. They are similar to (that is, have analogies with) elliptic curves over number fields.
Examples
The product of any elliptic curve with any curve is an elliptic surface (with no singular fibers).
All surfaces of Kodaira dimension 1 are elliptic surfaces.
Every complex Enriques surface is elliptic, and has an elliptic fibration over the projective line.
Kodaira surfaces
Dolgachev surfaces
Shioda modular surfaces
Kodaira's table of singular fibers
Most of the fibers of an elliptic fibration are (non-singular) elliptic curves. The remaining fibers are called singular fibers: there are a finite number of them, and each one consists of a union of rational curves, possibly with singularities or non-zero multiplicities (so the fibers may be non-reduced schemes). Kodaira and Néron independently classified the possible fibers, and Tate's algorithm can be used to find the type of the fibers of an elliptic curve over a number field.
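A convenient way to make this concrete (a generic sketch, with notation not taken from this article) is a Weierstrass fibration over the t-line:

```latex
% Weierstrass form of an elliptic fibration, with polynomial coefficients a(t), b(t):
y^2 = x^3 + a(t)\,x + b(t),
\qquad
\Delta(t) = -16\bigl(4\,a(t)^3 + 27\,b(t)^2\bigr).
```

The fibers over points where Δ(t) ≠ 0 are smooth elliptic curves; the singular fibers sit over the zeros of the discriminant Δ, and Tate's algorithm reads off the Kodaira type of each from the orders of vanishing of a, b and Δ there.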
The following table lists the possible fibers of a minimal el |
https://en.wikipedia.org/wiki/Miga%2C%20Quatchi%2C%20Sumi%20and%20Mukmuk | Miga and Quatchi are the official mascots of the 2010 Winter Olympics, Sumi is the official mascot of the 2010 Winter Paralympics, and Mukmuk is their designated "sidekick" for both games, held in Vancouver, British Columbia, Canada. The four mascots were introduced on November 27, 2007. They were designed by the Canadian and American design studio, Meomi Design. It was the first time (since Cobi and Petra) that the Olympic and Paralympic mascots were introduced at the same time.
Development
The emblem of the 2010 Winter Olympics, "Ilanaaq the Inukshuk", was picked through an open contest. However, it met with criticism from some aboriginal groups over its design, so the mascot artist was instead selected through a competition.
Through a process in which 177 professionals from around the world submitted their ideas, five were made finalists. In December 2006, VANOC eventually selected concepts from Meomi Design. Formed in 2002, Meomi is a group of Vicki Wong, a Vancouver-born Canadian of Chinese descent who worked in graphic and web design, and Michael Murphy, born in Milford, Michigan, who worked in design and motion graphics. Writing for Sports Illustrated, experts Michael Erdmann and John Ryan, while making comments on the mascots of the Olympic Games held in Canada, pointed out that Meomi's character drawing styles "are more closely related to Urban Vinyl [...]".
After the selection, Meomi provided more than 20 different concepts to VANOC, and three concepts were selected. The conception of the mascots was based on the local wildlife, as well as First Nations legends, mythologies and legendary creatures. During the design process, an early name for Quatchi was dismissed when the undisclosed word was found to have a rude connotation in another language. An animated video by Buck, a design studio based in New York and Los Angeles, with music provided by Kid Koala, was screened at the first public presentation of the mascots. Details about the mascots were kept secret until November 27,
https://en.wikipedia.org/wiki/Thunderbolt%20%28interface%29 | Thunderbolt is the brand name of a hardware interface for the connection of external peripherals to a computer. It was developed by Intel in collaboration with Apple. It was initially marketed under the name Light Peak, and first sold as part of an end-user product on 24 February 2011.
Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into two serial signals, and additionally provides DC power, all in one cable. Up to six peripherals may be supported by one connector through various topologies. Thunderbolt 1 and 2 use the same connector as Mini DisplayPort (MDP), whereas Thunderbolt 3, 4, and 5 use the same USB-C connector as USB does.
Description
Thunderbolt controllers multiplex one or more individual data lanes from connected PCIe and DisplayPort devices for transmission via two duplex Thunderbolt lanes, then de-multiplex them for use by PCIe and DisplayPort devices on the other end. A single Thunderbolt port supports up to six Thunderbolt devices via hubs or daisy chains; as many of these as the host has DP sources may be Thunderbolt monitors.
A single Mini DisplayPort monitor or other device of any kind may be connected directly or at the very end of the chain. Thunderbolt is interoperable with DP-1.1a compatible devices. When connected to a DP-compatible device, the Thunderbolt port can provide a native DisplayPort signal with four lanes of output data at no more than 5.4 Gbit/s per Thunderbolt lane. When connected to a Thunderbolt device, the per-lane data rate becomes 10 Gbit/s and the four Thunderbolt lanes are configured as two duplex lanes, each 10 Gbit/s comprising one lane of input and one lane of output.
Thunderbolt can be implemented on PCIe graphics cards, which have access to DisplayPort data and PCIe connectivity, or on the motherboard of new computers with onboard video, such as the MacBook Air.
The interface was originally intended to run exclusively on an optical physical layer using components and flexible optical fiber c |
https://en.wikipedia.org/wiki/251%20%28number%29 | 251 (two hundred [and] fifty-one) is the natural number between 250 and 252. It is also a prime number.
In mathematics
251 is:
a Sophie Germain prime.
the sum of three consecutive primes (79 + 83 + 89) and seven consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47).
a Chen prime.
an Eisenstein prime with no imaginary part.
a de Polignac number, meaning that it is odd and cannot be formed by adding a power of two to a prime number.
the smallest number that can be formed in more than one way by summing three positive cubes: 251 = 1³ + 5³ + 5³ = 2³ + 3³ + 6³.
Every 5 × 5 matrix has exactly 251 square submatrices. |
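Both claims above can be checked by brute force; a minimal sketch (the helper name `three_cube_sums` is ours, not from the source — the submatrix count comes from choosing k of 5 rows and k of 5 columns for k = 1..5, giving the sum of C(5,k)²):

```python
from itertools import combinations_with_replacement
from math import comb

def three_cube_sums(n):
    """All multisets {a, b, c} of positive integers with a^3 + b^3 + c^3 == n."""
    bound = round(n ** (1 / 3)) + 1
    return [t for t in combinations_with_replacement(range(1, bound), 3)
            if sum(c ** 3 for c in t) == n]

# 251 has two representations, and no smaller number has more than one.
assert three_cube_sums(251) == [(1, 5, 5), (2, 3, 6)]
assert all(len(three_cube_sums(m)) < 2 for m in range(1, 251))

# Square submatrices of a 5 x 5 matrix: sum over k of C(5,k)^2 = 251.
assert sum(comb(5, k) ** 2 for k in range(1, 6)) == 251
```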
https://en.wikipedia.org/wiki/Wings%20Across%20America%202008 | Wings Across America 2008 (WAA-08) was a group of model airplane enthusiasts that flew a battery-powered radio-controlled aircraft (RC), designated as a park flyer, in all 48 contiguous United States with hopes to make all 50, if Alaska and Hawaii could be reached. A park flyer is a small radio-controlled plane typically flown in a field such as a local park or soccer field.
History
Wings Across America 2008 (WAA-08) was the creation of Frank Geisler of Gloucester, Virginia. Frank is an avid RC pilot, USAF veteran and AMA contest director who volunteers his free time to help promote the sport/hobby of radio controlled flying. When Frank discussed this project with some of his friends, it was received with such enthusiasm that the project was born of this energy. All that was needed was to find hundreds of RC pilots across the US in every state willing to fly the plane at their home field and then drive to the next pilot, thus forming a nationwide network of pilots who would fly across America.
Frank used the internet RC forums and emailed Academy of Model Aeronautics chartered clubs in search of volunteers willing to help the project come to fruition. In only 5 weeks from inception, 230 pilots had joined, representing all 48 continental United States.
This type of project had been attempted before. What sets this project apart from all the others ever attempted or completed was that the pilots hand delivered the plane from pilot to pilot; the plane was never shipped by mail to its next destination. In this way a "chain" was created of pilots that personally flew the model, then handed it off to the next pilot, all across the continental United States. In the end, the model airplane flew in all 48 states and covered a distance of over 30,000 miles.
It ended its journey at the home field in eastern Virginia 5 years, 145 days, 21 hours and 50 minutes after it made its maiden flight. Over 340 RC pilots reg |
https://en.wikipedia.org/wiki/Efsi%20Toys | Holand Oto is a Dutch manufacturing company based in Weert that produces diecast scale model cars and trucks. The company was established in 1959 in Heerlen as "Bestbox", then changing its name to "Efsi" in early 1970s.
The company could be considered the Matchbox Toys of the Netherlands, but its origins and purpose as a government-sponsored employer were far less commercial than those of other toy manufacturers. Efsi was based in Heerlen, Netherlands.
History
In 1959 (some sources say 1962), Bestbox started making simple diecast cars in Limburg province. In the 1960s, the coal mines in Limburg closed. Some significant post-mining industries where workers relocated were automobile producers DAF and Nedcar in Born. After the closure, DSM (The Dutch State Mines) tried providing new employment to out-of-work or disabled miners in different shops or factories. These 'WIM' workshops (Workshop for Disabled Miners) were backed by a national Fund for Social Institutions (FSI).
One of the activities was to continue the production of Best-box models. Later the name was changed to EFSI, apparently a phonetic pronunciation of FSI. The Bestbox name was discontinued in 1971, perhaps because of the similarity to the name (and competition) of Matchbox toys. Though similar to Matchbox, and popular locally, Efsi toys do not seem to have been too well known outside the Netherlands. Most Efsi products were marked "Made in Holland" (as opposed to 'Made in the Netherlands') on the base.
Early offerings
Among EFSI's first vehicles was a set of 1960s Formula One cars including Ferrari, Brabham, Honda, Lotus, and Cooper Maserati. These were first marketed under the Best Box name and had realistic engines, thick black tires and realistic, solid-appearing, deep-set wheels. Several other vehicles were also marketed under the Best Box label, like a Citroën Dyane sedan. Another popular line was a very Matchbox-like series of diecast-bodied Ford Model T vehicles in various forms including co
https://en.wikipedia.org/wiki/Triose%20phosphate%20translocator | The triose phosphate translocator is an integral membrane protein found in the inner membrane of chloroplasts. It exports triose phosphate (dihydroxyacetone phosphate) in exchange for inorganic phosphate and is therefore classified as an antiporter. The imported phosphate is then used for ATP regeneration via the light-dependent reactions; the ATP may then, for example, be used for further reactions in the Calvin cycle. The translocator protein is responsible for exporting all the carbohydrate produced in photosynthesis by plants, and therefore most of the carbon in the food one eats has been transported by the triose phosphate translocator. Its three-dimensional structure was reported in 2017, revealing how it recognizes two different substrates to catalyze the strict 1:1 exchange.
https://en.wikipedia.org/wiki/Aarskog%E2%80%93Scott%20syndrome | Aarskog–Scott syndrome (AAS) is a rare disease inherited as X-linked and characterized by short stature, facial abnormalities, skeletal and genital anomalies. This condition mainly affects males, although females may have mild features of the syndrome.
Signs and symptoms
People with Aarskog–Scott syndrome often have distinctive facial features, such as widely spaced eyes (hypertelorism), a small nose, a long area between the nose and mouth (philtrum), and a widow's peak hairline. They frequently have mild to moderate short stature during childhood, but their growth usually catches up with that of their peers during puberty. Hand abnormalities are common in this syndrome and include short fingers (brachydactyly), curved pinky fingers (fifth finger clinodactyly), webbing of the skin between some fingers (cutaneous syndactyly), and a single crease across the palm. Other abnormalities in people with Aarskog–Scott syndrome include heart defects and a split in the upper lip (cleft lip) with or without an opening in the roof of the mouth (cleft palate).
Most males with Aarskog–Scott syndrome have a shawl scrotum, in which the scrotum surrounds the penis instead of hanging below. Less often, they have undescended testes (cryptorchidism) or a soft out-pouching around the belly-button (umbilical hernia) or in the lower abdomen (inguinal hernia).
The intellectual development of people with Aarskog–Scott syndrome varies widely. Some may have mild learning and behavior problems, while others have normal intelligence. In rare cases, severe intellectual disability has been reported.
Genetics
Mutations in the FGD1 gene are the only known genetic cause of Aarskog-Scott syndrome. The FGD1 gene provides instructions for making a protein that turns on (activates) another protein called Cdc42, which transmits signals that are important for various aspects of development before and after birth.
Mutations in the FGD1 gene lead to the production of an abnormally functioning protein. |
https://en.wikipedia.org/wiki/Code%20as%20data | In computer science, the expressions code as data and data as code refer to the interchangeable nature of code and data. Specifically, "code is data" refers to the idea that source code written in a programming language can be manipulated as data, such as a sequence of characters or an abstract syntax tree (AST), and it has an execution semantics only in the context of a given compiler or interpreter. The expression "data is code" refers to the idea that an arbitrary data structure such as a list of integers can be interpreted or compiled using a specialized language semantics. The notions are often used in the context of Lisp-like languages that use S-expressions as their main syntax, as writing programs using nested lists of symbols makes the interpretation of the program as an AST quite transparent (a property known as homoiconicity).
These ideas are generally used in the context of what is called metaprogramming, writing programs that treat other programs as their data. For example, code-as-data allows the serialization of first-class functions in a portable manner. Another use case is storing a program in a string, which is then processed by a compiler to produce an executable. More often there is a reflection API that exposes the structure of a program as an object within the language, reducing the possibility of creating a malformed program.
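A minimal sketch of these ideas in Python (the example program and transformation are ours, not from the source): a program is held as a string, parsed into an AST, manipulated as a data structure, then compiled back into executable code.

```python
import ast

# Code as data: a Python program held in a string.
src = "def double(x):\n    return x + x\n"

tree = ast.parse(src)  # the program as an abstract syntax tree (AST)

# Manipulate the code as data: rewrite every addition into a multiplication.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()

# Data back to code: compile the transformed AST and execute it.
namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
assert namespace["double"](3) == 9  # x + x became x * x
```

Working through a reflection API such as `ast`, rather than raw string manipulation, reduces the possibility of producing a malformed program, as the article notes.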
In computational theory, Kleene's second recursion theorem provides a form of code-is-data, by proving that a program can have access to its own source code.
Code-as-data is also a principle of the Von Neumann architecture, since stored programs and data are both represented as bits in the same memory device. This architecture offers the ability to write self-modifying code. It also opens the security risk of disguising a malicious program as user data and then using an exploit to direct execution to the malicious program.
In declarative programming, the data-as-code principle is often applied. For example, configurat |
https://en.wikipedia.org/wiki/Deep%20homology | In evolutionary developmental biology, the concept of deep homology is used to describe cases where growth and differentiation processes are governed by genetic mechanisms that are homologous and deeply conserved across a wide range of species.
History
In 1822, the French zoologist Étienne Geoffroy Saint-Hilaire dissected a crayfish, discovering that its body is organised like a vertebrate's, but inverted belly to back (dorsoventrally):
Geoffroy's homology theory was denounced by the leading French zoologist of his day, Georges Cuvier, but in 1994, Geoffroy was shown to be correct. In 1915, Santiago Ramon y Cajal mapped the neural connections of the optic lobes of a fly, finding that these resembled those of vertebrates. In 1978, Edward B. Lewis helped to found evolutionary developmental biology, discovering that homeotic genes regulated embryonic development in fruit flies.
In 1997, the term deep homology first appeared in a paper by Neil Shubin, Cliff Tabin, and Sean B. Carroll, describing the apparent relatedness in genetic regulatory apparatuses which indicated evolutionary similarities in disparate animal features.
A different kind of homology
Whereas ordinary homology is seen in the pattern of structures such as limb bones of mammals that are evidently related, deep homology can apply to groups of animals that have quite dissimilar anatomy: vertebrates (with endoskeletons made of bone and cartilage) and arthropods (with exoskeletons made of chitin) nevertheless have limbs that are constructed using similar recipes or "algorithms".
Within the metazoa, homeotic genes control differentiation along major body axes, and pax genes (especially PAX6) help to control the development of the eye and other sensory organs. The deep homology applies across widely separated groups, such as in the eyes of mammals and the structurally quite different compound eyes of insects.
Similarly, hox genes help to form an animal's segmentation pattern. HoxA and HoxD, that regul |
https://en.wikipedia.org/wiki/Applied%20mechanics | Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.
Pure mechanics describes the response of bodies (solids and fluids), or systems of bodies, to external forces, whether the bodies begin in a state of rest or of motion. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics can be split into two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch contains subcategories of its own. Classical mechanics, divided into statics and dynamics, is further subdivided: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.
Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. |
https://en.wikipedia.org/wiki/Hostinger | Hostinger International, Ltd is an employee-owned web hosting provider and an ICANN-accredited domain registrar. Established in 2004, the company is headquartered in Lithuania and employs more than 1,000 people. Hostinger is the parent company of 000webhost, Hosting24, Zyro, and Niagahoster.
Hostinger has been ranked among the fastest-growing companies in Europe for four consecutive years in the Financial Times' annual FT 1000 list.
History
Hostinger was founded in 2004 as Hosting Media.
In 2007, Hosting Media’s paid hosting offer was joined by a free web hosting service when the company founded 000webhost.
In 2008, the company launched Hosting24, a cPanel-based web hosting brand, in the United States. The data centers were located in Asheville, North Carolina, and the United Kingdom.
In 2011, Arnas Stuopelis joined the company as CEO. In the same year Hosting Media rebranded to Hostinger.
In 2013, Hostinger launched its subsidiary Niagahoster in Indonesia. It became one of the first hosting providers to offer tier-4 data centers.
In 2014, the company launched a Brazilian subsidiary, Weblink.
In 2019, Hostinger introduced a no-code, drag-and-drop website builder. Launched as a subsidiary under the name Zyro, the builder is supported by Hostinger’s hosting infrastructure and is aimed at small to medium-sized enterprises.
In 2023, Hostinger launched the AI Website Builder, a platform that uses artificial intelligence to generate a custom site from scratch. AI Builder automatically writes unique content, selects royalty-free stock images, and chooses a color palette and fonts that best suit the description.
In 2023, Daugirdas Jankus became Hostinger's CEO, while Arnas Stuopelis continued as Chairman of the Board.
Acquisitions
In 2019, the company partnered with LiteSpeed Web Server (LSWS). It also partnered with Google Cloud Platform in 2020.
In 2021, private equity company ConHostinger acquired an approximately 31% controlling stake in Hostinger with the goal o |
https://en.wikipedia.org/wiki/Orme%27s%20law | Orme's law is a rule of thumb to assist modelers when they design an electric power system for their radio-controlled model. Orme's law simply recommends the use of one NiMH rechargeable battery cell for every of wing area for sport planes. One cell for every of wing area for trainers. When using LiPo cells with higher voltage, the number of cells is cut in half, since a LiPo cell has double the voltage.
Origin
This rule is the work of Matthew Orme, who used to manage the sales of Aveox's line of motors to the hobby market, and later developed a small high-performance line of brushless motors at RazorMotors. Matt wanted a simpler way to recommend systems to the many people who asked him for advice on powering their models. Brushless motors are very versatile, and it can be hard for newcomers to decide which system they should use.
Need for Orme's law
The main purpose was to demonstrate to the modeler that the electric motor was not the source of power, unlike an internal combustion engine. With an IC engine, a particular sized engine will have a particular power output; typically, a model airplane manufacturer will recommend a 2-cycle glow plug engine by its displacement (typically .40–.60 cubic inches for an average-sized plane). Since electric motors operate over a much wider range, there was no good way to recommend an electric motor for a plane.
Orme's law makes it clear that the battery pack, not the electric motor, is the power source, and it determines the available power. Once the power requirements of any particular airplane model have been determined, an electric motor can be chosen to match the power source. For example, a 10-cell NiMH battery pack produces approximately 1 volt per cell under load; at 30 amps, that translates to 300 W, or about 0.4 HP.
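The arithmetic of the 10-cell example can be sketched as follows (figures taken from the text; the conversion factor of 746 W per horsepower is standard):

```python
# Power delivered by a NiMH pack under load (illustrative values from the text).
CELLS = 10
VOLTS_PER_CELL = 1.0    # approximate cell voltage under load
AMPS = 30.0             # current drawn by the motor
WATTS_PER_HP = 746.0    # standard conversion factor

watts = CELLS * VOLTS_PER_CELL * AMPS   # 300 W
horsepower = watts / WATTS_PER_HP       # ~0.4 HP

assert watts == 300.0
assert abs(horsepower - 0.4) < 0.01
```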
System selection
Matt Orme's rule makes system selection easy. Orme's law assumes that the hobbyist will be using 1700 to 2000 mAH cells and will prop for 4 minutes of flight, or 30 amps. 2000 mAH = 2 amp hours or 120 amp minutes |
https://en.wikipedia.org/wiki/Thyrse | A thyrse is a type of inflorescence in which the main axis grows indeterminately, and the subaxes (branches) have determinate growth.
Gallery |
https://en.wikipedia.org/wiki/Niche%20%28video%20game%29 | Niche: A Genetics Survival Game is a simulation video game developed and published by Stray Fawn Studio. It entered early access for Microsoft Windows, Mac OS, and Linux-based systems in September 2016 after a successful Kickstarter crowd-funding campaign and was released in September 2017. Its main aim is to breed certain traits or genes into a group of canine or feline creatures to make the pack genetically perfect for its environment.
Gameplay
The game starts with the player choosing story mode, a quick how-to tutorial, or sandbox mode, in which the player chooses the animals and terrain for their tribe and environment. A short cut-scene shows a tribe of nichelings living on an island; a large bird snatches a young nicheling and tries to fly off with him, but he falls onto a random island. The game then begins with the tutorial level, played as a nicheling named Adam. Gameplay and turns are organized into "days", which give each animal a few actions of play each day.
Niche's game mechanics were inspired by population genetics and effectively provide what is considered a fairly realistic genetic gaming experience. Animals can perish due to illness, injury, and even old age, so learning what will help them thrive in their living environment is key not only to building a solid tribe, but to winning the game as well. Niche simulates over 100 genes and houses four different biomes, each with its very own predators, plants, and prey. Among the available genes are characteristic options which a player can choose in order to realistically develop their own animal species, some of which determine physical characteristics, disease immunity, fertility, and overall dexterity. Additional genetic options become available as the player unlocks and utilizes new environments and events in-game.
Development
Niche was crowdfunded on Kickstarter. It was inspired by Creatures, Spore, and Don't Starve. It entered early access in 2016. Stray Fawn di |
https://en.wikipedia.org/wiki/Geometry%20Expert | Geometry Expert (GEX) is a Chinese software package for dynamic diagram drawing and automated geometry theorem proving and discovering.
A newer Chinese version of Geometry Expert, called MMP/Geometer, has also been developed.
Java Geometry Expert is free software released under the GNU General Public License.
External links
GEX Official website
Java GEX (old, new) on Wichita State University
Java GEX Documentation on Wichita State University
Theorem proving software systems
Automated theorem proving
Interactive geometry software |
https://en.wikipedia.org/wiki/Dienes%20phenomenon | The Dienes phenomenon is observed when two identical Proteus cultures are inoculated at different points on the same plate of non-inhibitory medium: the resulting swarms of growth coalesce without signs of demarcation. When, however, two different strains of Proteus are inoculated, the spreading films of growth fail to coalesce and remain separated by a narrow, easily visible area. The observation of this appearance, the Dienes phenomenon, has been used to determine the identity or non-identity of strains in epidemiological studies.
https://en.wikipedia.org/wiki/Serial%20number%20arithmetic | Many protocols and algorithms require the serialization or enumeration of related entities. For example, a communication protocol must know whether some packet comes "before" or "after" some other packet. The IETF (Internet Engineering Task Force) attempts to define "serial number arithmetic" for the purposes of manipulating and comparing these sequence numbers. In short, when the absolute serial number value decreases by more than half of the maximum value (e.g. 128 in an 8-bit value), the new value is considered to have wrapped around and to come "after" the former, whereas smaller decreases are considered to be "before".
This task is rather more complex than it might first appear, because most algorithms use fixed-size (binary) representations for sequence numbers. It is often important for the algorithm not to "break down" when the numbers become so large that they are incremented one last time and "wrap" around their maximum numeric ranges (go instantly from a large positive number to 0 or a large negative number). Some protocols choose to ignore these issues and simply use very large integers for their counters, in the hope that the program will be replaced (or they will retire) before the problem occurs (see Y2K).
Many communication protocols apply serial number arithmetic to packet sequence numbers in their implementation of a sliding window protocol. Some versions of TCP use protection against wrapped sequence numbers (PAWS). PAWS applies the same serial number arithmetic to packet timestamps, using the timestamp as an extension of the high-order bits of the sequence number.
Operations on sequence numbers
Only addition of a small positive integer to a sequence number and comparison of two sequence numbers are discussed.
Only unsigned binary implementations are discussed, with an arbitrary size in bits noted throughout the RFC (and below) as "SERIAL_BITS".
Addition
Adding an integer to a sequence number is simple unsigned integer addition, followed by unsigned modulo operation to bring the r |
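The addition and comparison rules described here can be sketched as follows, using SERIAL_BITS = 8 as in the article's 8-bit example (an illustrative reading of RFC 1982-style arithmetic, not the RFC's exact text):

```python
# Serial number arithmetic sketch, SERIAL_BITS = 8 (values 0..255).
SERIAL_BITS = 8
MOD = 1 << SERIAL_BITS           # 256
HALF = 1 << (SERIAL_BITS - 1)    # 128

def add(s, n):
    """Add n to serial number s; the addend must be < 2**(SERIAL_BITS - 1)."""
    if not 0 <= n < HALF:
        raise ValueError("addend out of range")
    return (s + n) % MOD         # unsigned addition, then modulo wrap

def less_than(i1, i2):
    """True if i1 comes 'before' i2 in serial number order."""
    return (i1 < i2 and i2 - i1 < HALF) or (i1 > i2 and i1 - i2 > HALF)

assert add(255, 1) == 0          # incrementing wraps from 255 to 0
assert less_than(255, 0)         # 0 comes 'after' 255 (a wrap, not a decrease)
assert not less_than(0, 200)     # 200 is 'before' 0: apparent decrease > 128
```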
https://en.wikipedia.org/wiki/June%20Morita | June Gloria Morita is an American statistician and statistics educator. She is a principal lecturer emerita in statistics at the University of Washington, and is known for her innovative lessons in statistics based on examples from real life. For instance, one of her classes tested whether helium-filled footballs travel farther than air-filled footballs, with the assistance of her son, Washington Huskies football place-kicker Eric Guttorp. Another lesson, for local elementary school students, tested the mark and recapture method by catching fish at the school's fish pond.
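The fish-pond lesson rests on the standard Lincoln–Petersen estimator (the formula and numbers here are ours, not from the source): if M animals are marked and released, and a later sample of C animals contains R marked ones, the population is estimated as MC/R.

```python
# Lincoln-Petersen mark-and-recapture estimator (standard formula;
# the numeric values below are illustrative only).
def lincoln_petersen(marked: int, caught: int, recaptured: int) -> float:
    # Assumes the marked fraction of the sample reflects the whole population.
    return marked * caught / recaptured

# If 20 fish are marked and 5 of a later catch of 15 carry marks,
# the pond is estimated to hold about 60 fish.
assert lincoln_petersen(20, 15, 5) == 60.0
```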
Morita did her undergraduate and graduate education at the University of California, Berkeley. She graduated in 1976 with a double major in mathematics and anthropology, earned a master's degree in 1978, and completed her Ph.D. in 1984. Her dissertation, supervised by Kjell Doksum, was Nonparametric Methods for Matched Observations from Life Distributions.
Morita is married to Swedish statistician Peter Guttorp, who was also educated at Berkeley, and the two statisticians were the first new hires when the University of Washington first formed its statistics department in 1980.
Morita won Washington's Distinguished Teaching Award in 1999.
In 2006, five years after her husband, she was elected as a Fellow of the American Statistical Association.
She is also an elected member of the International Statistical Institute.
In 2009, the American Statistical Association gave Morita their Founders Award "for outstanding leadership; for energetic service to the association as chapter president, regional vice chair, and chair of the Council of Chapters and a member of the Council of Sections Governing Board, the Board of Directors, and numerous committees; for initiating, promoting, and sustaining effective programs to enhance quantitative and statistical literacy in schools nationally and internationally, including Making Sense of Statistical Studies and programs to prepare undergraduates as mathematics/sta |
https://en.wikipedia.org/wiki/Vladimir%20Levenshtein | Vladimir Iosifovich Levenshtein (20 May 1935 – 6 September 2017) was a Russian and Soviet scientist who did research in information theory, error-correcting codes, and combinatorial design. Among other contributions, he is known for the Levenshtein distance and an algorithm for computing it, which he developed in 1965.
He graduated from the Department of Mathematics and Mechanics of Moscow State University in 1958 and worked at the Keldysh Institute of Applied Mathematics in Moscow ever since. He was a fellow of the IEEE Information Theory Society.
He received the IEEE Richard W. Hamming Medal in 2006, for "contributions to the theory of error-correcting codes and information theory, including the Levenshtein distance".
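The Levenshtein distance counts the minimum number of single-character insertions, deletions, and substitutions needed to transform one string into another; a standard dynamic-programming sketch (the implementation is ours, not from the source):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of edits (insert/delete/substitute) turning a into b."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                       # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (or match)
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
assert levenshtein("flaw", "lawn") == 2
```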
Life
Levenshtein graduated from Moscow State University in 1958, where he studied in the faculty of Mechanics and Mathematics. After graduation he worked at the M.V Keldysh Institute of Applied Mathematics.
Publications
V.I. Levenshtein, Application of Hadamard matrices to a problem in coding theory, Problems of Cybernetics, vol. 5, GIFML, Moscow, 1961, 125–136.
V.I. Levenshtein, On the stable extension of finite automata, Problems of Cybernetics, vol. 10, GIFML, Moscow, 1963, 281–286.
V.I. Levenshtein, On some coding systems and self-tuning machines for decoding messages, Problems of Cybernetics, vol. 11, GIFML, Moscow, 1964, 63–121.
V.I. Levenshtein, Decoding automata invariant with respect to the initial state, Problems of Cybernetics, vol. 12, GIFML, Moscow, 1964, 125–136.
V.I. Levenshtein, Binary codes providing synchronization and correction of errors, Abstracts of short scientific reports of the International Congress of Mathematicians, Section 13, Moscow, 1966, 24.
V.I. Levenshtein, Asymptotically optimal binary code with correction of occurrences of one or two adjacent characters, Problems of Cybernetics, vol. 19, Science, Moscow, 1967, 293–298.
V.I. Levenshtein, On the redundancy and deceleration of separable c |
https://en.wikipedia.org/wiki/Evolution%20of%20biparental%20care%20in%20tropical%20frogs | The evolution of biparental care in tropical frogs refers to the development in frogs of a parental care system in which both the mother and the father raise their offspring.
Evolution
Many tropical frogs have developed a parental care system where both the mother and father partake in raising their offspring. The evolution of biparental care, which is the joint effort of both parents, is a topic that is still under investigation. Biparentalism arose in some species of tropical frogs as a result of ecological conditions, the differences between the sexes, and their natural tendencies.
Male parental care could have served as the basis for the development of biparental care. Phylogenetic evidence shows that male parental care is the ancestral strategy in Dendrobates. Currently there are Dendrobates species, such as D. ventrimaculatus and D. fantasticus, that exhibit biparental care. The trend of using males to guard or brood eggs for biparental care or paternal care can be understood from the perspective of the female. After oviposition, or when the eggs are laid, the females need to replenish their bodies that have been dedicated to nurturing the eggs before they can mate again. Brooding by the females would delay the opportunity to mate by about two to four weeks. Since this outcome would cause many males to compete for a few females that are able to mate, the males are favored for the brooding.
Environment
The environment can have a substantial impact on the uses of parental care. Not all tropical frogs have the ability to lay their eggs plainly on land or plants. Tropical frogs can choose from a variety of water sources, such as lakes, streams, and small puddles. There is greater risk involved with reproducing in bigger bodies of water because of the higher likelihood of fish and other aquatic predators being there. Instead, frogs can choose to place eggs in phytotelmata. However, there is a trade-off that comes with electing a smaller water source. Not m |
https://en.wikipedia.org/wiki/Sinodelphys | Sinodelphys is an extinct mammal from the Early Cretaceous, estimated to be 125 million years old. It was discovered and described in 2003 in rocks of the Yixian Formation in Liaoning Province, China, by a team of scientists including Zhe-Xi Luo and John Wible. While initially suggested to be the oldest known metatherian, later studies interpreted it as a eutherian.
Description
Only one fossil specimen is known, a slab and counterslab given catalog number CAGS00-IG03. It is in the collection of the Chinese Academy of Geological Sciences.
Sinodelphys szalayi grew only 15 cm (5.9 in) long and possibly weighed about 30 g (1.05 oz). Its fossilized skeleton is surrounded by impressions of fur and soft tissue, thanks to the exceptional sediment that preserves such details. Luo et al. (2003) inferred from the foot structure of Sinodelphys that it was a scansorial tree-dweller, like the contemporary Eomaia and modern opossums such as Didelphis. Sinodelphys probably hunted worms and insects.
Taxonomy
Sinodelphys szalayi, living in China around 125 million years ago, was initially interpreted as the earliest known metatherian. This makes it almost contemporary to the eutherian Acristatherium, which has been found in the same area. However, Bi et al. (2018) reinterpreted Sinodelphys as an early member of Eutheria.
See also
Eomaia
Evolution of mammals |
https://en.wikipedia.org/wiki/List%20of%20quantum-mechanical%20systems%20with%20analytical%20solutions | Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form

i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \Psi(\mathbf{r}, t),

where \Psi is the wave function of the system, \hat{H} is the Hamiltonian operator, and t is time. Stationary states of this equation are found by solving the time-independent Schrödinger equation,

\hat{H} \psi(\mathbf{r}) = E \psi(\mathbf{r}),

which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below.
Solvable systems
The two-state quantum system (the simplest possible quantum system)
The free particle
The delta potential
The double-well Dirac delta potential
The particle in a box / infinite potential well
The finite potential well
The one-dimensional triangular potential
The particle in a ring or ring wave guide
The particle in a spherically symmetric potential
The quantum harmonic oscillator
The quantum harmonic oscillator with an applied uniform field
The hydrogen atom or hydrogen-like atom e.g. positronium
The hydrogen atom in a spherical cavity with Dirichlet boundary conditions
The particle in a one-dimensional lattice (periodic potential)
The particle in a one-dimensional lattice of finite length
The Morse potential
The Mie potential
The step potential
The linear rigid rotor
The symmetric top
The Hooke's atom
The Spherium atom
Zero range interaction in a harmonic trap
The quantum pendulum
The rectangular potential barrier
The Pöschl–Teller potential
The Inverse square root potential
Multistate Landau–Zener models
The Luttinger liquid (the only exact quantum mechanical solution to a model including interparticle interactions)
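To illustrate what an analytical solution means for one entry in the list above, the particle in a box has the closed-form energy levels E_n = n^2 π^2 ħ^2 / (2 m L^2). The short sketch below (an illustration added here, not part of the source list) evaluates these levels numerically:

```python
# Energy levels of the particle in a 1-D infinite potential well:
# E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2)

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def box_energy(n, mass, length):
    """Energy (in joules) of level n for a particle of `mass` in a well of width `length`."""
    return (n ** 2) * math.pi ** 2 * HBAR ** 2 / (2 * mass * length ** 2)

m_e = 9.1093837015e-31            # electron mass, kg
E1 = box_energy(1, m_e, 1e-9)     # ground state of an electron in a 1 nm well
print(E1 / 1.602176634e-19)       # roughly 0.38 eV
```

The quadratic dependence on n (E_2 = 4 E_1, E_3 = 9 E_1, ...) is one of the simplest exact spectra on this list.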
See also
List of quantum-mechanical potentials – a list of physically |
https://en.wikipedia.org/wiki/OPNsense |
OPNsense is an open source, FreeBSD-based firewall and routing software developed by Deciso, a company in the Netherlands that makes hardware and sells support packages for OPNsense. It is a fork of pfSense, which in turn was forked from m0n0wall built on FreeBSD. It was launched in January 2015. When m0n0wall closed down in February 2015 its creator, Manuel Kasper, referred its developer community to OPNsense.
OPNsense has a web-based interface and can be used on the x86-64 platform. Along with acting as a firewall, it has traffic shaping, load balancing, and virtual private network capabilities, and others can be added via plugins. OPNsense offers next-generation firewall capabilities utilizing Zenarmor, a NGFW plugin developed by OPNsense partner Sunny Valley Networks.
History
In November 2017, a World Intellectual Property Organization panel found that Netgate, the copyright holder of pfSense, used the domain opnsense.com in bad faith to discredit OPNsense, and obligated Netgate to transfer domain ownership to Deciso.
See also
Comparison of firewalls
List of router and firewall distributions |
https://en.wikipedia.org/wiki/Electronic%20Signatures%20Directive | The Electronic Signatures Directive 1999/93/EC was a European Union directive on the use of electronic signatures (e-signatures) in electronic contracts within the European Union (EU).
It was repealed by the eIDAS regulation on 1 July 2016.
Contents
The central provision of the directive is article 5, which requires that electronic signatures be regarded as equivalent to written signatures.
Related acts
COM(2008) 798 final – Not published in the Official Journal
COM(2006) 120 final – Not published in the Official Journal
Official Journal L 175, 15/07/2003, pp. 45–46
Official Journal L 289, 16/11/2000, pp. 42–43
Implementation
See also
Electronic signature
United States Electronic Signatures in Global and National Commerce Act
External links
Docusign article on Directive 1999/93/EC |
https://en.wikipedia.org/wiki/Tolecusatellitidae | Tolecusatellitidae is a incertae sedis ssDNA/ssDNA(+) family of biological satellites. The family contains two genera and 131 species. This family of viruses depend on the presence of another virus (helper viruses) to replicate their genomes, as such they have minimal genomes with very low genomic redundancy.
Name
The name Tolecusatellitidae is a combination of Tolecu, from the first DNA satellite shown to be associated with Tomato leaf curl virus, and satellite, reflecting the fact that its members are satellites.
Taxonomy
The species are ssDNA unless specified ssDNA(+). There are 62 ssDNA(+) and 69 ssDNA satellites.
Betasatellite
Ageratum leaf curl Buea betasatellite
Ageratum leaf curl Cameroon betasatellite
Ageratum yellow leaf curl betasatellite
Ageratum yellow vein betasatellite
Ageratum yellow vein China betasatellite ssDNA(+)
Ageratum yellow vein Sri Lanka betasatellite
Alternanthera yellow vein betasatellite
Andrographis yellow vein leaf curl betasatellite
Bhendi yellow vein mosaic betasatellite
Cardiospermum yellow leaf curl betasatellite
Chili leaf curl betasatellite
Chili leaf curl Jaunpur betasatellite
Chili leaf curl Sri Lanka betasatellite
Codiaeum leaf curl betasatellite ssDNA(+)
Cotton leaf curl Bahraich betasatellite ssDNA(+)
Cotton leaf curl Bangalore betasatellite 1 ssDNA(+)
Cotton leaf curl Bangalore betasatellite 2 ssDNA(+)
Cotton leaf curl Bangalore betasatellite 3 ssDNA(+)
Cotton leaf curl Bangalore betasatellite 4 ssDNA(+)
Cotton leaf curl Burkina Faso betasatellite ssDNA(+)
Cotton leaf curl Gezira betasatellite
Cotton leaf curl Kashmir betasatellite ssDNA(+)
Cotton leaf curl Multan betasatellite
Cotton leaf curl Tandojam betasatellite ssDNA(+)
Croton yellow vein mosaic betasatellite
Emilia yellow vein betasatellite ssDNA(+)
Emilia yellow vein Fujian betasatellite ssDNA(+)
Erectites yellow mosaic betasatellite ssDNA(+)
Eupatorium yellow vein betasatellite
Eupatorium yellow vein mosaic betasatellite
French bean leaf curl betasa |
https://en.wikipedia.org/wiki/Bulletin%20of%20the%20Lebedev%20Physics%20Institute | Bulletin of the Lebedev Physics Institute (Russian: Краткие сообщения по физике, Kratkie Soobshcheniya po Fizike) is a peer-reviewed scientific journal of physics. The journal was established in 1970 and is published by the Lebedev Physical Institute (in Russian) on a monthly basis. Springer publishes an English translation of the journal. It is edited by Oleg N. Krokhin. The online version has been available since 2007.
The Bulletin covers research results from the Institute for Nuclear Research of the Russian Academy of Sciences in addition to those of the Lebedev Physical Institute.
Indexing
Bulletin of the Lebedev Physics Institute is abstracted and indexed in the following databases:
External links
Bulletin of the Lebedev Physics Institute website
Краткие сообщения по физике website |
https://en.wikipedia.org/wiki/ChatScript | ChatScript is a combination Natural Language engine and dialog management system designed initially for creating chatbots, but it is currently also used for various other forms of NL processing. It is written in C++. The engine is an open-source project hosted on SourceForge and GitHub.
ChatScript was written by Bruce Wilcox and originally released in 2011, after Suzette (written in ChatScript) won the 2010 Loebner Prize, fooling one of four human judges.
Features
In general, ChatScript aims to let authors write extremely concisely, since the limiting factor in scaling a hand-authored chatbot is how much, and how fast, one can write the script.
Because ChatScript is designed for interactive conversation, it automatically maintains user state across volleys. A volley is any number of sentences the user inputs at once, together with the chatbot's response.
The basic element of scripting is the rule. A rule consists of a type, a label (optional), a pattern, and an output. There are three types of rules. Gambits are something a chatbot might say when it has control of the conversation. Rejoinders are rules that respond to a user remark tied to what the chatbot just said. Responders are rules that respond to arbitrary user input which is not necessarily tied to what the chatbot just said. Patterns describe conditions under which a rule may fire. Patterns range from extremely simplistic to deeply complex (analogous to Regex but aimed for NL). Heavy use is typically made of concept sets, which are lists of words sharing a meaning. ChatScript contains some 2000 predefined concepts and scripters can easily write their own. Output of a rule intermixes literal words to be sent to the user along with common C-style programming code.
Rules are bundled into collections called topics. Topics can have keywords, which allows the engine to automatically search the topic for relevant rules based on user input.
Example code
topic: ~food ( ~fruit fruit food eat )
t: What is your favorite food?
a: ( ~fruit ) I like fruit |
https://en.wikipedia.org/wiki/Zeroshell | Zeroshell is a small open-source Linux distribution for servers and embedded systems which aims to provide network services. Its administration relies on a web-based graphical interface; no shell is needed to administer and configure it. Zeroshell is available as Live CD and CompactFlash images, and VMware virtual machines.
Zeroshell can be installed on any IA-32 computer with almost any Ethernet interface. It can also be installed on most embedded devices and single-board computers such as Raspberry Pi and Orange Pi.
The project reached end-of-life in April 2021 with version 3.9.5.
There are several known vulnerabilities in various versions of this software (V2, V3.6x up to V3.7, V3.9.0, V3.9.3 and the final V3.9.5, for example), allowing an attacker to, for example, gain root access to the device easily. The main attack vector is the CGI script in use, 'kerbynet'.
Selected features
RADIUS server which is able to provide strong authentication for the Wireless clients by using IEEE 802.1X and Wi-Fi Protected Access (WPA/WPA2) protocols
Captive portal for network authentication in the HotSpots by using a web browser. The credentials can be verified against a Radius server, a Kerberos 5 KDC (such as Active Directory KDC)
Netfilter – Firewall, Packet Filter and Stateful Packet Inspection (SPI), Layer 7 filter to block or shape the connections generated by Peer to Peer clients
Linux network scheduler – control maximum bandwidth, the guaranteed bandwidth and the priority of some types of traffic such as VoIP and peer-to-peer
VPN host-to-LAN and LAN-to-LAN with the IPSec/L2TP and OpenVPN protocols
Routing and Bridging capabilities with VLAN IEEE 802.1Q support
Multizone DNS (Domain name system) server
Multi subnet DHCP server
PPPoE client for connection to the WAN (Wide area network) via ADSL, DSL and cable lines
Dynamic DNS client updater for DynDNS
NTP (Network Time Protocol) client and server
Syslog server for receiving and cataloging the system logs produced by the re |
https://en.wikipedia.org/wiki/Histidinol%20dehydrogenase | In enzymology, histidinol dehydrogenase (HDH; HIS4) is an enzyme that catalyzes the chemical reaction
L-histidinol + 2 NAD+ ⇌ L-histidine + 2 NADH + 2 H+
Thus, the two substrates of this enzyme are L-histidinol and NAD+, whereas its 3 products are L-histidine, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is L-histidinol:NAD+ oxidoreductase. This enzyme is also called L-histidinol dehydrogenase.
Structure
In bacteria, HDH is a single chain polypeptide; in fungi it is the C-terminal domain of a multifunctional enzyme which catalyses three different steps of histidine biosynthesis; and in plants it is expressed as a nuclear encoded protein precursor which is exported to the chloroplast.
Active site
Histidinol is held inside the active site by a zinc ion, but the zinc ion does not otherwise participate in the catalysis. The zinc ion is held in place by His262, Gln259, Asp360 and His419 (which, in homodimeric histidinol dehydrogenases, comes from the other monomer). Histidinol itself is held in place by His327 and His367 from one monomer unit and Glu414 from the other monomer unit.
A Cys residue has been implicated in the catalytic mechanism of the second oxidative step. However, according to newer studies with histidinol dehydrogenase from E. coli, the mechanism is catalyzed by four bases, B1–B4. His327 acts as the first base, deprotonating histidinol's hydroxyl group. Concomitantly, hydride is abstracted from histidinol by NAD+, which is then exchanged for a second NAD+ molecule. Glu325 acts as the second base, deprotonating a molecule of water, which then attacks histidinol. At the same time, His327 (now protonated) donates a proton to the aldehydic oxygen, which results in a gem-diol. Then, His327 again deprotonates one of the hydroxyl groups and NAD+ abstracts a proton from the reactive carbon a |
https://en.wikipedia.org/wiki/Implantable%20myoelectric%20sensors | An implantable myoelectric sensor (IMES) is a sensor implanted in or near a muscular region of the body in order to read the electric outputs of the muscles. This allows the device to measure the exact degree of activation of the muscle. This device is primarily used in disabled individuals as a detection module that feeds information regarding movement to externally powered and controlled prosthetics (e.g. synthetic arm, leg, etc.). The IMES was invented by Dr. Richard F. ff. Weir of the Rehabilitation Institute of Chicago and Dr. Philip Troyk of the Illinois Institute of Technology, with the first experimental studies performed by Jack F. Schorsch of the Rehabilitation Institute of Chicago.
An IMES is an implantable electromyographic sensor, meaning that it works by picking up the electrical outputs of muscles. It does this by detecting the electrical potential across the device's terminal electrodes. These potentials are representative of the electrical activity of nearby muscles. After acquisition, the potentials are amplified and digitized before transmission. Transmission occurs over a band-separated tx/rx link. Generally, multiple IMES are implanted into discrete muscles or muscle compartments to increase the sensitivity of each individual implant for its muscle. The IMES control unit is capable of allocating the total available bandwidth across the number of implants in the system, and supports up to 16 implants on an individual base station. By using multiple implants, it is possible to use machine learning and pattern recognition techniques to estimate movement intent.
Power and data transmission are accomplished over a shared inductive link. There are no internal batteries in the IMES device and no dedicated onboard memory resources for the implant, and it must remain within the inductive power field to operate.
See also
Myoelectric Prosthesis |
https://en.wikipedia.org/wiki/Biotechnology%20in%20pharmaceutical%20manufacturing | Biotechnology is the use of living organisms to develop useful products. Biotechnology is often used in pharmaceutical manufacturing. Notable examples include the use of bacteria to produce products such as insulin or human growth hormone. Other examples include the use of transgenic pigs to produce hemoglobin for use in humans.
Human Insulin
Amongst the earliest uses of biotechnology in pharmaceutical manufacturing is the use of recombinant DNA technology to modify Escherichia coli bacteria to produce human insulin, which was performed at Genentech in 1978. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. While generally efficacious in the treatment of diabetes, animal-derived insulin is not identical to human insulin and may therefore produce allergic reactions. Genentech researchers produced artificial genes for each of the two protein chains that comprise the insulin molecule. The artificial genes were "then inserted... into plasmids... among a group of genes that" are activated by lactose. Thus, the insulin-producing genes were also activated by lactose. The recombinant plasmids were inserted into Escherichia coli bacteria, which were "induced to produce 100,000 molecules of either chain A or chain B human insulin." The two protein chains were then combined to produce insulin molecules.
Human growth hormone
Prior to the use of recombinant DNA technology to modify bacteria to produce human growth hormone, the hormone was manufactured by extraction from the pituitary glands of cadavers, as animal growth hormones have no therapeutic value in humans. Production of a single year's supply of human growth hormone required up to fifty pituitary glands, creating significant shortages of the hormone. In 1979, scientists at Genentech produced human growth hormone by inserting DNA coding for human growth hormone into a plasmid that was implanted in Escherichia coli bacter |
https://en.wikipedia.org/wiki/Beta2-adaptin%20C-terminal%20domain | The C-terminal domain of Beta2-adaptin is a protein domain involved in cell trafficking, aiding the import and export of substances into and out of the cell.
Function
This is an adaptor protein that helps form the clathrin coat around a vesicle.
Structure
This entry represents a subdomain of the appendage (ear) domain of beta-adaptin from AP clathrin adaptor complexes. This domain has a three-layer arrangement, alpha-beta-alpha, with a bifurcated antiparallel beta-sheet. This domain is required for binding to clathrin, and its subsequent polymerisation. Furthermore, a hydrophobic patch present in the domain also binds to a subset of D-phi-F/W motif-containing proteins that are bound by the alpha-adaptin appendage domain (epsin, AP180, eps15).
Cell trafficking
Proteins synthesized on the ribosome and processed in the endoplasmic reticulum are transported from the Golgi apparatus to the trans-Golgi network (TGN), and from there via small carrier vesicles to their final destination compartment. These vesicles have specific coat proteins (such as clathrin or coatomer) that are important for cargo selection and direction of transport. Clathrin coats contain both clathrin (acts as a scaffold) and adaptor complexes that link clathrin to receptors in coated vesicles. Clathrin-associated protein complexes are believed to interact with the cytoplasmic tails of membrane proteins, leading to their selection and concentration. The two major types of clathrin adaptor complexes are the heterotetrameric adaptor protein (AP) complexes, and the monomeric GGA (Golgi-localising, Gamma-adaptin ear domain homology, ARF-binding proteins) adaptors.
AP (adaptor protein) complexes are found in coated vesicles and clathrin-coated pits. AP complexes connect cargo proteins and lipids to clathrin at vesicle budding sites, as well as binding accessory proteins that regulate coat assembly and disassembly (such as AP180, epsins and auxilin). There are different AP complexes in mammals |
https://en.wikipedia.org/wiki/Semiconductor%20package | A semiconductor package is a metal, plastic, glass, or ceramic casing containing one or more discrete semiconductor devices or integrated circuits. Individual components are fabricated on semiconductor wafers (commonly silicon) before being diced into die, tested, and packaged. The package provides a means for connecting it to the external environment, such as printed circuit board, via leads such as lands, balls, or pins; and protection against threats such as mechanical impact, chemical contamination, and light exposure. Additionally, it helps dissipate heat produced by the device, with or without the aid of a heat spreader. There are thousands of package types in use. Some are defined by international, national, or industry standards, while others are particular to an individual manufacturer.
Package functions
A semiconductor package may have as few as two leads or contacts for devices such as diodes, while in the case of advanced microprocessors a package may have hundreds of connections. Very small packages may be supported only by their wire leads. Larger devices, intended for high-power applications, are installed in carefully designed heat sinks so that they can dissipate hundreds or thousands of watts of waste heat.
In addition to providing connections to the semiconductor and handling waste heat, the semiconductor package must protect the "chip" from the environment, particularly the ingress of moisture. Stray particles or corrosion products inside the package may degrade performance of the device or cause failure. A hermetic package allows essentially no gas exchange with the surroundings; such construction requires glass, ceramic or metal enclosures.
Date code
Manufacturers usually print—using ink or laser marking—the manufacturer's logo and the manufacturer's part number on the package,
to make it easier to distinguish the many different and incompatible devices packaged in relatively few kinds of packages.
The markings often include a 4 digit date c |
https://en.wikipedia.org/wiki/Katherine%20E.%20Stange | Katherine E. Stange is a Canadian-American mathematician and an associate professor of mathematics at the University of Colorado Boulder. She is a number theorist specializing in topics in arithmetic geometry.
Education and career
Stange earned her PhD in mathematics from Brown University in 2008 under the supervision of Joseph H. Silverman. She was a National Science Foundation (NSF) Postdoctoral Fellow and Junior Lecturer at Harvard University from 2008 to 2009. She also held postdoctoral fellowships at Simon Fraser University, Pacific Institute for the Mathematical Sciences, and the University of British Columbia (2009–2011) and Stanford University (2011–2012). In 2012, Stange joined the faculty at the University of Colorado Boulder as an assistant professor. She was promoted to associate professor with indefinite tenure effective August 2018.
Stange has been active in Women in Numbers, the prototype for the Association for Women in Mathematics' Research Collaboration Networks for Women. She was co-organizer and proceedings co-editor of Directions in Number Theory: Proceedings of the 2014 WIN3 Workshop and a project leader for Women in Numbers 4. Stange served on the American Mathematical Society Committee on Women in Mathematics (CoWIM) from 2019–2020.
Recognition
The Mathematical Association of America presented Stange and Lionel Levine the 2013 Paul R. Halmos - Lester R. Ford Award for outstanding paper in The American Mathematical Monthly for their paper How to make the most of a shared meal: plan the last bite first.
Stange was elected a Fellow of the Association for Women in Mathematics in the Class of 2021 "for leadership in the Women in Numbers Network by creating its website (the first of its kind), mentoring early-career researchers, organizing conferences, editing its proceedings volumes, and chairing its steering committee; and for service on AWM committees, including support of other research networks". She was named a 2021 Simons Fellow in Math |
https://en.wikipedia.org/wiki/Firing%20squad%20synchronization%20problem | The firing squad synchronization problem is a problem in computer science and cellular automata in which the goal is to design a cellular automaton that, starting with a single active cell, eventually reaches a state in which all cells are simultaneously active. It was first proposed by John Myhill in 1957 and published (with a solution by John McCarthy and Marvin Minsky) in 1962 by Edward F. Moore.
Problem statement
The name of the problem comes from an analogy with real-world firing squads: the goal is to design a system of rules according to which an officer can command an execution detail to fire so that its members fire their rifles simultaneously.
More formally, the problem concerns cellular automata, arrays of finite state machines called "cells" arranged in a line, such that at each time step each machine transitions to a new state as a function of its previous state and the states of its two neighbors in the line. For the firing squad problem, the line consists of a finite number of cells, and the rule according to which each machine transitions to the next state should be the same for all of the cells interior to the line, but the transition functions of the two endpoints of the line are allowed to differ, as these two cells are each missing a neighbor on one of their two sides.
The states of each cell include three distinct states: "active", "quiescent", and "firing", and the transition function must be such that a cell that is quiescent and whose neighbors are quiescent remains quiescent. Initially, at time t = 0, all states are quiescent except for the cell at the far left (the general), which is active. The goal is to design a set of states and a transition function such that, no matter how long the line of cells is, there exists a time t such that every cell transitions to the firing state at time t, and such that no cell is in the firing state prior to time t.
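The cellular-automaton model described above can be sketched in code. The transition rule below is a toy placeholder (its names and behavior are illustrative assumptions, and it is emphatically not a solution to the synchronization problem); it only shows the synchronous-update machinery and the required quiescence condition:

```python
# Sketch of the 1-D cellular-automaton model from the problem statement.
# toy_rule is NOT a firing squad solution -- real minimal-time solutions
# need many more states -- it just demonstrates the update mechanics.

QUIESCENT, ACTIVE = "Q", "A"
BOUNDARY = "#"  # virtual neighbor past each end of the finite line

def step(cells, rule):
    """One synchronous update; `rule` maps (left, self, right) -> new state."""
    padded = [BOUNDARY] + list(cells) + [BOUNDARY]
    return [rule(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def toy_rule(left, me, right):
    # Required by the problem statement: a quiescent cell whose neighbors
    # are quiescent (or the boundary) must remain quiescent.
    if me == QUIESCENT and left in (QUIESCENT, BOUNDARY) and right in (QUIESCENT, BOUNDARY):
        return QUIESCENT
    # Toy behavior: activity spreads one cell per step (no synchronization).
    return ACTIVE

line = [ACTIVE] + [QUIESCENT] * 4  # the "general" sits at the far left
line = step(line, toy_rule)        # activity reaches the general's neighbor
print(line)
```

A real solution would replace toy_rule with a transition table over many more states, arranged so that every cell enters "firing" at the same step regardless of the line's length.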
Solutions
The first solution to the FSSP was found by John McCarthy and Marvin Minsky and |
https://en.wikipedia.org/wiki/Jim%20Horning | James Jay Horning (24 August 1942 – 18 January 2013) was an American computer scientist and ACM Fellow.
Overview
Jim Horning received a PhD in computer science from Stanford University in 1969 for a thesis entitled A Study of Grammatical Inference. He was a founding member, and later chairman, of the Computer Systems Research Group at the University of Toronto, Canada, from 1969 until 1977. He was then a Research Fellow at the Xerox Palo Alto Research Center (PARC) from 1977 until 1984 and a founding member and senior consultant at DEC Systems Research Center (DEC/SRC) from 1984 until 1996. He was founder and director of STAR Lab from 1997 until 2001 at InterTrust Technologies Corp.
Peter G. Neumann reported on 22 January 2013 in the RISKS Digest, Volume 27, Issue 14, that Horning had died on 18 January 2013.
Horning's interests included programming languages, programming methodology, specification, formal methods, digital rights management and computer/network security. A major contribution was his involvement with the Larch approach to formal specification with John Guttag (MIT) et al.
Selected publications
A Compiler Generator (with William M. McKeeman and D. B. Wortman), Prentice Hall (1970). |
https://en.wikipedia.org/wiki/Tetradic%20Palatini%20action | The Einstein–Hilbert action for general relativity was first formulated purely in terms of the space-time metric. To take the metric and affine connection as independent variables in the action principle was first considered by Palatini. It is called a first order formulation as the variables to vary over involve only up to first derivatives in the action and so doesn't overcomplicate the Euler–Lagrange equations with higher derivative terms. The tetradic Palatini action is another first-order formulation of the Einstein–Hilbert action in terms of a different pair of independent variables, known as frame fields and the spin connection. The use of frame fields and spin connections are essential in the formulation of a generally covariant fermionic action (see the article spin connection for more discussion of this) which couples fermions to gravity when added to the tetradic Palatini action.
Not only is this coupling of fermions to gravity needed, making the tetradic action in this sense more fundamental than the metric version; the Palatini action is also a stepping stone to more interesting actions such as the self-dual Palatini action, which can be seen as the Lagrangian basis for Ashtekar's formulation of canonical gravity (see Ashtekar's variables), and the Holst action, which is the basis of the real-variables version of Ashtekar's theory. Another important action is the Plebanski action (see the entry on the Barrett–Crane model); proving that it gives general relativity under certain conditions involves showing that it reduces to the Palatini action under those conditions.
Here we present definitions and calculate Einstein's equations from the Palatini action in detail. These calculations can be easily modified for the self-dual Palatini action and the Holst action.
Some definitions
We first need to introduce the notion of tetrads. A tetrad is an orthonormal vector basis in terms of which the space-time metric looks locally flat,

g_{\mu\nu} = e_\mu^I e_\nu^J \eta_{IJ},

where \eta_{IJ} is the Minkowski metric. The tetr |
https://en.wikipedia.org/wiki/Pittsburgh%20Supercomputing%20Center | The Pittsburgh Supercomputing Center (PSC) is a high performance computing and networking center founded in 1986 and one of the original five NSF Supercomputing Centers. PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania, United States.
In addition to providing a family of Big Data-optimized supercomputers with unique shared memory architectures, PSC features the National Institutes of Health-sponsored National Resource for Biomedical Supercomputing, an Advanced Networking Group that conducts research on network performance and analysis, and a STEM education and outreach program supporting K-20 education. In 2012, PSC established a new Public Health Applications Group that will apply supercomputing resources to problems in preventing, monitoring and responding to epidemics and other public health needs.
Mission
The Pittsburgh Supercomputing Center provides university, government, and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data-handling and analysis available nationwide for unclassified research. As a resource provider in the Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation's network of integrated advanced digital resources, PSC works with its XSEDE partners to harness the full range of information technologies to enable discovery in U.S. science and engineering.
Partnerships
PSC is a leading partner in XSEDE. PSC-scientific co-director Ralph Roskies is a co-principal investigator of XSEDE and co-leads its Extended Collaborative Support Services. Other PSC staff lead XSEDE efforts in Networking, Incident Response, Systems & Software Engineering, Outreach, Allocations Coordination, and Novel & Innovative Projects. This NSF-funded program provides U.S. academic researchers with support for and access to leadership-class computing infrastructure and research.
The Natio |
https://en.wikipedia.org/wiki/Vapor%E2%80%93liquid%20equilibrium | In thermodynamics and chemical engineering, the vapor–liquid equilibrium (VLE) describes the distribution of a chemical species between the vapor phase and a liquid phase.
The concentration of a vapor in contact with its liquid, especially at equilibrium, is often expressed in terms of vapor pressure, which will be a partial pressure (a part of the total gas pressure) if any other gas(es) are present with the vapor. The equilibrium vapor pressure of a liquid is in general strongly dependent on temperature. At vapor–liquid equilibrium, a liquid with individual components in certain concentrations will have an equilibrium vapor in which the concentrations or partial pressures of the vapor components have certain values depending on all of the liquid component concentrations and the temperature. The converse is also true: if a vapor with components at certain concentrations or partial pressures is in vapor–liquid equilibrium with its liquid, then the component concentrations in the liquid will be determined by the vapor concentrations and the temperature. The equilibrium concentration of each component in the liquid phase is often different from its concentration (or vapor pressure) in the vapor phase, but there is a relationship. VLE concentration data can be determined experimentally or approximated with the help of theories such as Raoult's law, Dalton's law, and Henry's law.
Such vapor–liquid equilibrium information is useful in designing columns for distillation, especially fractional distillation, which is a particular specialty of chemical engineers. Distillation is a process used to separate or partially separate components in a mixture by boiling (vaporization) followed by condensation. Distillation takes advantage of differences in concentrations of components in the liquid and vapor phases.
In mixtures containing two or more components, the concentrations of each component are often expressed as mole fractions. The mole fraction of |
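The Raoult's-law and Dalton's-law relationship described above can be sketched in a few lines of Python for an ideal binary mixture. This is a hedged illustration only: the pure-component vapor pressures and the liquid composition below are hypothetical values, not data from the article.

```python
# Sketch: equilibrium vapor composition of an ideal binary liquid mixture.
# Raoult's law gives each partial pressure; Dalton's law gives mole fractions.

def raoult_partial_pressures(x, p_sat):
    """Partial pressures p_i = x_i * P_i_sat (Raoult's law, ideal mixture)."""
    return [xi * pi for xi, pi in zip(x, p_sat)]

def vapor_mole_fractions(x, p_sat):
    """Vapor mole fractions y_i = p_i / P_total (Dalton's law)."""
    p = raoult_partial_pressures(x, p_sat)
    total = sum(p)
    return [pi / total for pi in p]

# Hypothetical example: equimolar liquid whose pure components have
# saturation pressures of 100 kPa and 50 kPa at the system temperature.
x = [0.5, 0.5]
p_sat = [100.0, 50.0]
y = vapor_mole_fractions(x, p_sat)
# The vapor is enriched in the more volatile component (y[0] > x[0]),
# which is the concentration difference that distillation exploits.
```

The enrichment of the vapor in the more volatile component is exactly the effect the distillation paragraph above relies on.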
https://en.wikipedia.org/wiki/Low-volatility%20anomaly | In investing and finance, the low-volatility anomaly is the observation that low-volatility stocks have higher returns than high-volatility stocks in most markets studied. This is an example of a stock market anomaly since it contradicts the central prediction of many financial theories that taking higher risk must be compensated with higher returns.
Furthermore, the Capital Asset Pricing Model (CAPM) predicts a positive relation between the systematic risk exposure of a stock (also known as the stock's beta) and its expected future returns. However, some accounts of the low-volatility anomaly falsify this prediction of the CAPM by showing that stocks with higher beta have historically underperformed stocks with lower beta.
Other accounts of this anomaly show that even stocks with higher idiosyncratic risk earn lower returns than stocks with lower idiosyncratic risk.
The low-volatility anomaly has also been referred to as the low-beta, minimum-variance, or minimum-volatility anomaly.
History
The CAPM was developed in the late 1960s and predicts that expected returns should be a positive and linear function of beta, and nothing else. First, the return of a stock with average beta should be the average return of stocks. Second, the intercept should be equal to the risk-free rate. The slope can then be computed from these two points. Almost immediately these predictions were empirically challenged. Studies find that the correct slope is either less than predicted, not significantly different from zero, or even negative. Economist Fischer Black (1972) proposed a theory in which there is a zero-beta return that differs from the risk-free return. This fits the data better. It still presumes, in principle, that there is higher return for higher beta. Research challenging CAPM's underlying assumptions about risk has been mounting for decades. One challenge was in 1972, when Michael C. Jensen, Fischer Black and Myron Scholes published
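The CAPM prediction outlined above, expected return as a linear function of beta with the risk-free rate as intercept, and Black's zero-beta variant can be sketched as follows. All numeric inputs are illustrative assumptions, not estimates from the literature.

```python
# Minimal sketch of the security market line the CAPM predicts:
#   E[R_i] = R_f + beta_i * (E[R_m] - R_f)
# The low-volatility anomaly is the empirical finding that realized
# returns do not line up with this positive slope.

def capm_expected_return(beta, risk_free, market_return):
    """Standard CAPM: expected return is linear in beta, intercept R_f."""
    return risk_free + beta * (market_return - risk_free)

def zero_beta_expected_return(beta, zero_beta_return, market_return):
    """Black (1972) variant: the intercept is a zero-beta return R_z
    rather than the risk-free rate."""
    return zero_beta_return + beta * (market_return - zero_beta_return)

# With illustrative numbers (R_f = 2%, E[R_m] = 8%):
# a beta-1 stock should earn the market return and a beta-0 stock the
# intercept, which fixes the two points that determine the slope.
r_market = capm_expected_return(1.0, 0.02, 0.08)   # 0.08
r_intercept = capm_expected_return(0.0, 0.02, 0.08)  # 0.02
```

The two evaluated points correspond to the "two points" from which the article says the slope can be computed; the anomaly literature tests whether realized average returns actually follow this line.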
https://en.wikipedia.org/wiki/Endothelial%20lipase | Endothelial lipase (LIPG) is a form of lipase secreted by vascular endothelial cells in tissues with high metabolic rates and vascularization, such as the liver, lung, kidney, and thyroid gland. The LIPG enzyme is a vital component of many biological processes, including lipoprotein metabolism, cytokine expression, and lipid composition in cells. Unlike the lipases that hydrolyze triglycerides, endothelial lipase primarily hydrolyzes phospholipids. Owing to this hydrolysis specificity, endothelial lipase contributes to multiple vital systems within the body. In contrast to these beneficial roles, endothelial lipase is also thought to play a potential role in cancer and inflammation. In vitro and in vivo findings suggest these associations, but knowledge of its role in humans is limited because endothelial lipase was only recently discovered. Endothelial lipase was first characterized in 1999, when two independent research groups cloned the endothelial lipase gene and identified the novel lipase secreted from endothelial cells. Its potential to counter atherosclerosis by alleviating plaque blockage and its prospective ability to raise high-density lipoprotein (HDL) have gained endothelial lipase recognition.
Discovery
In 1999, endothelial lipase was independently identified by two research groups.
The first group, at Rhone-Poulenc Rorer, cloned and characterized a new member of the triacylglycerol (TG) lipase family. When this novel endothelial lipase was over-expressed in mice, the plasma concentrations of HDL cholesterol and apolipoprotein A-I decreased.
A second group at Stanford University independently cloned this same endothelial lipase from human umbilical vein endothelial cells, human coronary artery endothelial cells and rodent endothelial-like yolk sacs. Suppression subtractive hybridization was used to isolate the genes. The genes were then compared and |
https://en.wikipedia.org/wiki/TAZ%20zinc%20finger | In molecular biology, TAZ zinc finger (Transcription Adaptor putative Zinc finger) domains are zinc-containing domains found in the homologous transcriptional co-activators CREB-binding protein (CBP) and p300. CBP and p300 are histone acetyltransferases (EC) that catalyse the reversible acetylation of all four histones in nucleosomes, acting to regulate transcription via chromatin remodelling. These large nuclear proteins interact with numerous transcription factors and viral oncoproteins, including the p53 tumour suppressor protein, the E1A oncoprotein, MyoD, and GATA-1, and are involved in cell growth, differentiation and apoptosis. Both CBP and p300 have two copies of the TAZ domain, one in the N-terminal region and the other in the C-terminal region. The TAZ1 domain of CBP and p300 forms a complex with CITED2 (CBP/p300-interacting transactivator with ED-rich tail), inhibiting the activity of the hypoxia-inducible factor HIF-1alpha and thereby attenuating the cellular response to low tissue oxygen concentration. Adaptation to hypoxia is mediated by transactivation of hypoxia-responsive genes by hypoxia-inducible factor-1 (HIF-1) in complex with the CBP and p300 transcriptional coactivators.
The TAZ domain adopts an all-alpha fold with zinc-binding sites in the loops connecting the helices. The TAZ1 domain in p300 and the TAZ2 (CH3) domain in CBP have each been shown to have four amphipathic helices, organised by three zinc-binding clusters with HCCC-type coordination.