| source | text |
|---|---|
https://en.wikipedia.org/wiki/Empirical%20modelling | Empirical modelling refers to any kind of (computer) modelling based on empirical observations rather than on mathematically describable relationships of the system modelled.
Empirical Modelling
Empirical Modelling as a variety of empirical modelling
Empirical modelling is a generic term for activities that create models by observation and experiment. Empirical Modelling (with the initial letters capitalised, and often abbreviated to EM) refers to a specific variety of empirical modelling in which models are constructed following particular principles. Though the extent to which these principles can be applied to model-building without computers is an interesting issue (to be revisited below), there are at least two good reasons to consider Empirical Modelling in the first instance as computer-based. Without doubt, computer technologies have had a transformative impact where the full exploitation of Empirical Modelling principles is concerned. What is more, the conception of Empirical Modelling has been closely associated with thinking about the role of the computer in model-building.
An empirical model operates on a simple semantic principle: the maker observes a close correspondence between the behaviour of the model and that of its referent. The crafting of this correspondence can be 'empirical' in a wide variety of senses: it may entail a trial-and-error process, it may be based on computational approximation to analytic formulae, or it may be derived as a black-box relation that affords no insight into 'why it works'.
Empirical Modelling is grounded in the key principle of William James's radical empiricism, which postulates that all knowing is rooted in connections that are given-in-experience. Empirical Modelling aspires to craft the correspondence between the model and its referent in such a way that its derivation can be traced to connections given-in-experience. Making connections in experience is an essentially individual human activity that requires skill a |
https://en.wikipedia.org/wiki/Methyl%20benzoate | Methyl benzoate is an organic compound. It is an ester with the chemical formula C6H5CO2CH3. It is a colorless liquid that is poorly soluble in water, but miscible with organic solvents. Methyl benzoate has a pleasant smell, strongly reminiscent of the fruit of the feijoa tree, and it is used in perfumery. It also finds use as a solvent and as a pesticide used to attract insects such as orchid bees.
Synthesis and reactions
Methyl benzoate is formed by the condensation of methanol and benzoic acid in the presence of a strong acid (see the equation after this entry).
Methyl benzoate reacts at both the ring and the ester, depending on the substrate. Electrophiles attack the ring, illustrated by acid-catalysed nitration with nitric acid to give methyl 3-nitrobenzoate. Nucleophiles attack the carbonyl center, illustrated by hydrolysis with addition of aqueous NaOH to give methanol and sodium benzoate.
Occurrence
Methyl benzoate can be isolated from the freshwater fern Salvinia molesta. It is one of many compounds that are attractive to males of various species of orchid bees, which apparently gather the chemical to synthesize pheromones; it is commonly used as bait to attract and collect these bees for study.
Cocaine hydrochloride hydrolyzes in moist air to give methyl benzoate; drug-sniffing dogs are thus trained to detect the smell of methyl benzoate.
Uses
Non-electric heat cost allocators (see DIN EN 835). |
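The condensation described in this entry is a standard acid-catalysed (Fischer) esterification. As a sketch of the overall stoichiometry (my summary, not stated in the excerpt):

$$\mathrm{C_6H_5COOH} + \mathrm{CH_3OH} \;\overset{\mathrm{H^+}}{\rightleftharpoons}\; \mathrm{C_6H_5CO_2CH_3} + \mathrm{H_2O}$$

The reaction is an equilibrium, so preparative procedures typically drive it forward with excess methanol or by removing the water formed.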
https://en.wikipedia.org/wiki/Valiant%E2%80%93Vazirani%20theorem | The Valiant–Vazirani theorem is a theorem in computational complexity theory stating that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions published in 1986. The proof is based on the Mulmuley–Vazirani–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.
The Valiant–Vazirani theorem implies that the Boolean satisfiability problem, which is NP-complete, remains a computationally hard problem even if the input instances are promised to have at most one satisfying assignment.
Proof outline
Unambiguous-SAT is the promise problem of deciding whether a given Boolean formula that has at most one satisfying assignment is unsatisfiable or has exactly one satisfying assignment. In the first case, an algorithm for Unambiguous-SAT should reject, and in the second it should accept the formula.
If the formula has more than one satisfying assignment, then there is no condition on the behavior of the algorithm.
The promise problem Unambiguous-SAT can be decided by a nondeterministic Turing machine that has at most one accepting computation path. In this sense, this promise problem belongs to the complexity class UP (which is usually only defined for languages).
The proof of the Valiant–Vazirani theorem consists of a probabilistic reduction from SAT to SAT such that, with probability at least $\Omega(1/n)$, the output formula has at most one satisfying assignment, and thus satisfies the promise of the Unambiguous-SAT problem (a sketch of the hashing step appears after this entry).
More precisely, the reduction is a randomized polynomial-time algorithm that maps a Boolean formula $\varphi$ with $n$ variables to a Boolean formula $\varphi'$ such that
every satisfying assignment of $\varphi'$ also satisfies $\varphi$, and
if $\varphi$ is satisfiable, then, with probability at least $\Omega(1/n)$, $\varphi'$ has a unique satisfying assignment.
By running the reduction a polynomial number of times, each t |
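A minimal sketch of the isolation step such a reduction can use, assuming the common construction in which the pairwise-independent hash is realized by random GF(2) parity (XOR) constraints; the function name, the clause representation, and the constraint encoding are illustrative, not taken from the source:

```python
import random

def isolate(num_vars, clauses):
    """Randomized isolation step (sketch): augment a CNF formula with k
    random parity constraints, for k chosen uniformly from {2, ..., n+1}.
    If the formula is satisfiable, the augmented instance has a unique
    satisfying assignment with probability Omega(1/n)."""
    n = num_vars
    k = random.randint(2, n + 1)
    parity_constraints = []
    for _ in range(k):
        subset = [v for v in range(1, n + 1) if random.random() < 0.5]
        bit = random.randint(0, 1)
        # Constraint: XOR of the chosen variables must equal `bit`.
        parity_constraints.append((subset, bit))
    # A full reduction would re-encode the parity constraints in CNF;
    # here they are simply returned alongside the original clauses.
    return clauses, parity_constraints
```

Feeding each of polynomially many such outputs to an Unambiguous-SAT procedure, and accepting if any run accepts, gives the RP algorithm promised by the theorem.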
https://en.wikipedia.org/wiki/Angular%20diameter%20distance | In astronomy, angular diameter distance is a distance defined in terms of an object's physical size, $x$, and its angular size, $\theta$, as viewed from Earth: $d_A = x/\theta$.
Cosmology dependence
The angular diameter distance depends on the assumed cosmology of the universe. The angular diameter distance to an object at redshift $z$ is expressed in terms of the comoving distance $\chi$ as:
$$d_A = \frac{S_k(\chi)}{1+z}$$
where $S_k(\chi)$ is the FLRW coordinate defined as:
$$S_k(\chi) = \begin{cases} \frac{c}{H_0\sqrt{\Omega_k}}\,\sinh\!\left(\frac{H_0\sqrt{\Omega_k}}{c}\,\chi\right) & \Omega_k > 0 \\ \chi & \Omega_k = 0 \\ \frac{c}{H_0\sqrt{|\Omega_k|}}\,\sin\!\left(\frac{H_0\sqrt{|\Omega_k|}}{c}\,\chi\right) & \Omega_k < 0 \end{cases}$$
where $\Omega_k$ is the curvature density and $H_0$ is the value of the Hubble parameter today.
In the currently favoured geometric model of our Universe, the "angular diameter distance" of an object is a good approximation to the "real distance", i.e. the proper distance when the light left the object.
Angular size redshift relation
The angular size redshift relation describes the relation between the angular size observed on the sky of an object of given physical size, and the object's redshift from Earth (which is related to its distance, $d$, from Earth). In a Euclidean geometry the relation between size on the sky and distance from Earth would simply be given by the equation:
$$\tan(\theta) = \frac{x}{d}$$
where $\theta$ is the angular size of the object on the sky, $x$ is the size of the object and $d$ is the distance to the object. Where $\theta$ is small this approximates to:
$$\theta \approx \frac{x}{d}$$
However, in the ΛCDM model, the relation is more complicated. In this model, objects at redshifts greater than about 1.5 appear larger on the sky with increasing redshift.
This is related to the angular diameter distance, which is the distance an object is calculated to be at from $x$ and $\theta$, assuming the Universe is Euclidean.
The Mattig relation yields the angular-diameter distance, $d_A$, as a function of redshift $z$ for a universe with ΩΛ = 0. $q_0$ is the present-day value of the deceleration parameter, which measures the deceleration of the expansion rate of the Universe; in the simplest models, $q_0 < 1/2$ corresponds to the case where the Universe will expand forever, $q_0 > 1/2$ to closed models which will ultimately stop expanding and contract, and $q_0 = 1/2$ corresponds to the critical case |
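To make the cosmology dependence concrete, here is a minimal numerical sketch for a spatially flat ΛCDM universe, where $d_A = \chi/(1+z)$; the parameter values (Ω_m = 0.3, Ω_Λ = 0.7, H_0 = 70 km/s/Mpc) are assumed for illustration only:

```python
import numpy as np

C_KM_S = 299792.458           # speed of light in km/s
H0 = 70.0                     # Hubble constant in km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7   # flat Lambda-CDM parameters (assumed)

def comoving_distance(z, steps=10_000):
    """chi = (c/H0) * integral_0^z dz'/E(z'), in Mpc, for a flat universe."""
    zp = np.linspace(0.0, z, steps)
    E = np.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    return (C_KM_S / H0) * np.trapz(1.0 / E, zp)

def angular_diameter_distance(z):
    """d_A = chi / (1 + z) when the curvature density is zero."""
    return comoving_distance(z) / (1.0 + z)

for z in (0.5, 1.5, 3.0):
    print(f"z = {z}: d_A ~ {angular_diameter_distance(z):.0f} Mpc")
```

The printed values illustrate the turnover mentioned above: beyond z ≈ 1.5, d_A stops growing, so objects of fixed physical size start to subtend larger angles.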
https://en.wikipedia.org/wiki/Noether%20normalization%20lemma | In mathematics, the Noether normalization lemma is a result of commutative algebra, introduced by Emmy Noether in 1926. It states that for any field k, and any finitely generated commutative k-algebra A, there exist algebraically independent elements y1, y2, ..., yd in A such that A is a finitely generated module over the polynomial ring S = k[y1, y2, ..., yd]. The integer d is equal to the Krull dimension of the ring A; and if A is an integral domain, d is also the transcendence degree of the field of fractions of A over k.
The theorem has a geometric interpretation. Suppose A is the coordinate ring of an affine variety X, and consider S as the coordinate ring of a d-dimensional affine space $\mathbb{A}^d_k$. Then the inclusion map $S \hookrightarrow A$ induces a surjective finite morphism of affine varieties $X \to \mathbb{A}^d_k$: that is, any affine variety is a branched covering of affine space.
When k is infinite, such a branched covering map can be constructed by taking a general projection from an affine space containing X to a d-dimensional subspace.
More generally, in the language of schemes, the theorem can equivalently be stated as: every affine k-scheme (of finite type) X is finite over an affine d-dimensional space. The theorem can be refined to include a chain of ideals of A (equivalently, closed subsets of X) that are finite over the affine coordinate subspaces of the corresponding dimensions.
The Noether normalization lemma can be used as an important step in proving Hilbert's Nullstellensatz, one of the most fundamental results of classical algebraic geometry. The normalization theorem is also an important tool in establishing the notion of Krull dimension for k-algebras.
Proof
The following proof is due to Nagata, following Mumford's red book. A more geometric proof is given on page 127 of the red book.
The ring A in the lemma is generated as a k-algebra by some elements $y_1, \ldots, y_m$. We shall induct on $m$. If $m = 0$, then the assertion is trivial. Assume now $m > 0$. It is enough to show that there is a subring S of A th |
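A one-line worked illustration (my own example, not from the excerpt): for the coordinate ring of the hyperbola $A = k[x,y]/(xy-1) \cong k[x,x^{-1}]$, a substitution in the spirit of the proof is

$$t := x + x^{-1}, \qquad x^2 - t\,x + 1 = 0,$$

so $A$ is generated as a module over $S = k[t]$ by $1$ and $x$; here $d = 1$, matching the Krull dimension of $A$.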
https://en.wikipedia.org/wiki/Mobile%20daughter%20card | The mobile daughter card, also known as an MDC or CDC (communications daughter card), is a notebook version of the AMR slot on the motherboard of a desktop computer. It is designed to interface with special Ethernet (EDC), modem (MDC) or Bluetooth (BDC) cards.
Intel MDC specification 1.0
In 1999, Intel published a specification for mobile audio/modem daughter cards. The document defines a standard connector (AMP* 3-179397-0), mechanical elements including several form factors, and the electrical interface. The 30-pin connector carries power, several audio channels and AC-Link serial data. Up to two AC'97 codecs are supported on such a card.
Several form factors are specified:
45 × 27 mm
45 × 37 mm
55 × 27 mm with RJ11 jack
55 × 37 mm with RJ11 jack
45 × 55 mm
45 × 70 mm
30-pin AMP* 3-179397-0 pinout
See also
Daughter board
External links
intel.com – MDC specification.pdf
Mobile computers
Motherboard expansion slot |
https://en.wikipedia.org/wiki/Shlomi%20Dolev | Shlomi Dolev (born December 5, 1958) is a Rita Altura Trust Chair Professor in Computer Science at Ben-Gurion University of the Negev (BGU) and the head of the BGU Negev Hi-Tech Faculty Startup Accelerator.
Biography
Shlomi Dolev received his B.Sc. in Civil Engineering and B.A. in Computer Science in 1984 and 1985, and his M.Sc. and D.Sc. in computer science in 1990 and 1992, from the Technion – Israel Institute of Technology. From 1992 to 1995 he was at Texas A&M University as a visiting research specialist.
Academic career
In 1995 Dolev joined the Department of Mathematics and Computer Science at BGU. He was the founder and first department head of the Computer Science Department, established in 2000. After 15 years, the department was ranked among the top 150 departments in the world.
He is the author of Self-Stabilization, published by MIT Press in 2000. From 2011 to 2014, Dolev served as Dean of the Natural Sciences Faculty. From 2010 he served for six years as the Head of the Inter-University Computation Center of Israel.
He is a co-founder, board member and CSO of Secret Double Octopus. He is also a co-founder of Secret Sky (SecretSkyDB) Ltd. In 2015 Dolev was appointed head of the steering committee on computer science studies of the Israeli Ministry of Education.
Dolev, together with Yuval Elovici and Ehud Gudes, established the Telekom Innovation Laboratories at Ben-Gurion University. Dolev was instrumental in establishing the IBM Cyber Security Center of Excellence (CCoE) in collaboration with Ben-Gurion University of the Negev and JVP Cyber Labs. Several agencies and companies support his research, including the ISF, NSF, IBM (faculty awards), Verisign, EMC, Intel, Orange France, Deutsche Telekom, the US Air Force and the European Union, with funding totaling several million dollars.
Dolev was a visiting professor at MIT, Paris 11, Paris 6 and DIMACS. He served in more than a hundred program committees, chairing two leading conferences in distributed comput |
https://en.wikipedia.org/wiki/Virokine | Virokines are proteins encoded by some large DNA viruses that are secreted by the host cell and serve to evade the host's immune system. Such proteins are referred to as virokines if they resemble cytokines, growth factors, or complement regulators; the term viroceptor is sometimes used if the proteins resemble cellular receptors. A third class of virally encoded immunomodulatory proteins consists of proteins that bind directly to cytokines. Due to the immunomodulatory properties of these proteins, they have been proposed as potentially therapeutically relevant to autoimmune diseases.
Mechanism
The primary mechanism of virokine interference with immune signaling is thought to be competitive inhibition of the binding of host signaling molecules to their target receptors. Virokines occupy binding sites on host receptors, thereby inhibiting access by signaling molecules. Viroceptors mimic host receptors and thus divert signaling molecules from finding their targets. Cytokine-binding proteins bind to and sequester cytokines, occluding the binding surface through which they interact with receptors. The effect is to attenuate and subvert host immune response.
Discovery
The term "virokine" was coined by National Institutes of Health virologist Bernard Moss. The early 1990s saw several reports of virally encoded proteins with sequence homology to immune proteins, followed by reports of the cowpox and vaccinia viruses directly interfering with key immune regulator IL1B. The first identified virokine was an epidermal growth factor-like protein found in myxoma viruses.
Much of the early work on virokines involved vaccinia virus, which was discovered to secrete proteins that promote proliferation of neighboring cells and block complement immune activity leading to inflammation.
Evolutionary origins
The immunomodulatory proteins, including virokines, in the poxvirus family have been extensively studied in the context of the evolution of the family. Virokines in this family |
https://en.wikipedia.org/wiki/Infinite%20regress | An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor. In the epistemic regress, for example, a belief is justified because it is based on another belief that is justified. But this other belief is itself in need of one more justified belief for itself to be justified and so on. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. There are different ways in which a regress can be vicious. The most serious form of viciousness involves a contradiction in the form of metaphysical impossibility. Other forms occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve. Traditionally, it was often assumed without much argument that each infinite regress is vicious but this assumption has been put into question in contemporary philosophy. While some philosophers have explicitly defended theories with infinite regresses, the more common strategy has been to reformulate the theory in question in a way that avoids the regress. One such strategy is foundationalism, which posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way. Another way is coherentism, which is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network. Infinite regress arguments have been made in various areas of philosophy. Famous examples include the cosmological argument, Bradley's regress and regress arguments in epistemology.
Definition
An infinite regress is an infinite series of entities governed b |
https://en.wikipedia.org/wiki/Cauchy%27s%20equation | In optics, Cauchy's transmission equation is an empirical relationship between the refractive index and wavelength of light for a particular transparent material. It is named for the mathematician Augustin-Louis Cauchy, who defined it in 1837.
The equation
The most general form of Cauchy's equation is
$$n(\lambda) = A + \frac{B}{\lambda^2} + \frac{C}{\lambda^4} + \cdots$$
where n is the refractive index, λ is the wavelength, and A, B, C, etc., are coefficients that can be determined for a material by fitting the equation to measured refractive indices at known wavelengths. The coefficients are usually quoted for λ as the vacuum wavelength in micrometres.
Usually, it is sufficient to use a two-term form of the equation:
$$n(\lambda) = A + \frac{B}{\lambda^2}$$
where the coefficients A and B are determined specifically for this form of the equation.
A table of coefficients for common optical materials is shown below:
The theory of light-matter interaction on which Cauchy based this equation was later found to be incorrect. In particular, the equation is only valid for regions of normal dispersion in the visible wavelength region. In the infrared, the equation becomes inaccurate, and it cannot represent regions of anomalous dispersion. Despite this, its mathematical simplicity makes it useful in some applications.
The Sellmeier equation is a later development of Cauchy's work that handles anomalously dispersive regions, and more accurately models a material's refractive index across the ultraviolet, visible, and infrared spectrum.
Humidity dependence for air
Cauchy's two-term equation for air, expanded by Lorentz to account for humidity, is as follows:
where p is the air pressure in millibar, T is the temperature in kelvin, and v is the vapor pressure of water in millibar.
See also
Sellmeier equation |
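A short sketch of evaluating the two-term form, with illustrative coefficients (A ≈ 1.5046 and B ≈ 0.00420 µm², roughly the values often quoted for BK7 glass; they are assumed here, not taken from the excerpt):

```python
def cauchy_index(wavelength_um, A=1.5046, B=0.00420):
    """Two-term Cauchy equation n(lambda) = A + B / lambda^2,
    with the vacuum wavelength given in micrometres."""
    return A + B / wavelength_um ** 2

# Normal dispersion across the visible band: n decreases as wavelength grows.
for lam in (0.4861, 0.5893, 0.6563):   # roughly the F, D and C Fraunhofer lines (um)
    print(f"lambda = {lam} um  ->  n = {cauchy_index(lam):.4f}")
```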
https://en.wikipedia.org/wiki/Bell%27s%20law%20of%20computer%20classes | Bell's law of computer classes formulated by Gordon Bell in 1972 describes how types of computing systems (referred to as computer classes) form, evolve and may eventually die out. New classes of computers create new applications resulting in new markets and new industries.
Description
Bell considers the law to be partially a corollary to Moore's law, which states that "the number of transistors per chip doubles every 18 months". Unlike Moore's law, a new computer class is usually based on lower cost components that have fewer transistors or fewer bits on a magnetic surface, etc. A new class forms about every decade. It also takes up to a decade to understand how the class formed, evolved, and is likely to continue. Once formed, a lower priced class may evolve in performance to take over and disrupt an existing class. This evolution has caused clusters of scalable personal computers with 1 to thousands of computers to span a price and performance range of use from a PC, through mainframes, to become the largest supercomputers of the day. Scalable clusters became a universal class beginning in the mid-1990s; by 2010, clusters of at least one million independent computers were expected to constitute the world's largest clusters.
Definition: Roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.
Established market class computers aka platforms are introduced and continue to evolve at roughly a constant price (subject to learning curve cost reduction) with increasing functionality (or performance) based on Moore's law that gives more transistors per chip, more bits per unit area, or increased functionality per system. Roughly every decade, technology advances in semiconductors, storage, networks, and interfaces enable the emergence of a new, lower-cost computer class (aka "platform") to serve a new need that is enabled by smaller devices (e.g. more transisto |
https://en.wikipedia.org/wiki/Atomic%20model%20%28mathematical%20logic%29 | In model theory, a subfield of mathematical logic, an atomic model is a model such that the complete type of every tuple is axiomatized by a single formula. Such types are called principal types, and the formulas that axiomatize them are called complete formulas.
Definitions
Let T be a theory. A complete type p(x1, ..., xn) is called principal or atomic (relative to T) if it is axiomatized relative to T by a single formula φ(x1, ..., xn) ∈ p(x1, ..., xn).
A formula φ is called complete in T if for every formula ψ(x1, ..., xn), the theory T ∪ {φ} entails exactly one of ψ and ¬ψ.
It follows that a complete type is principal if and only if it contains a complete formula.
A model M is called atomic if every n-tuple of elements of M satisfies a formula that is complete in Th(M)—the theory of M.
Examples
The ordered field of real algebraic numbers is the unique atomic model of the theory of real closed fields.
Any finite model is atomic.
A dense linear ordering without endpoints is atomic.
Any prime model of a countable theory is atomic by the omitting types theorem.
Any countable atomic model is prime, but there are plenty of atomic models that are not prime, such as an uncountable dense linear order without endpoints.
The theory of a countable number of independent unary relations is complete but has no complete formulas and no atomic models.
Properties
The back-and-forth method can be used to show that any two countable atomic models of a theory that are elementarily equivalent are isomorphic.
Notes |
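To make concrete the example above that a dense linear ordering without endpoints is atomic (a sketch of the standard quantifier-elimination argument, not spelled out in the excerpt): the complete type of a tuple in such an order is determined by the order pattern of its entries, and that pattern is already expressed by a single formula such as

$$\varphi(x_1,x_2,x_3) \;\equiv\; (x_1 < x_2) \wedge (x_2 = x_3),$$

which is a complete formula relative to the theory; hence every tuple satisfies a complete formula and the model is atomic.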
https://en.wikipedia.org/wiki/Disjunctive%20sum | In the mathematics of combinatorial games, the sum or disjunctive sum of two games is a game in which the two games are played in parallel, with each player being allowed to move in just one of the games per turn. The sum game finishes when there are no moves left in either of the two parallel games, at which point (in normal play) the last player to move wins.
This operation may be extended to disjunctive sums of any number of games, again by playing the games in parallel and moving in exactly one of the games per turn. It is the fundamental operation that is used in the Sprague–Grundy theorem for impartial games (see the sketch after this entry) and which led to the field of combinatorial game theory for partisan games.
Application to common games
Disjunctive sums arise in games that naturally break up into components or regions that do not interact except in that each player in turn must choose just one component to play in. Examples of such games are Go, Nim, Sprouts, Domineering, the Game of the Amazons, and the map-coloring games.
In such games, each component may be analyzed separately for simplifications that do not affect its outcome or the outcome of its disjunctive sum with other games. Once this analysis has been performed, the components can be combined by taking the disjunctive sum of two games at a time, combining them into a single game with the same outcome as the original game.
Mathematics
The sum operation was formalized by . It is a commutative and associative operation: if two games are combined, the outcome is the same regardless of what order they are combined, and if more than two games are combined, the outcome is the same regardless of how they are grouped.
The negation −G of a game G (the game formed by trading the roles of the two players) forms an additive inverse under disjunctive sums: the game G + −G is a zero game (won by whoever goes second) using a simple echoing strategy in which the second player repeatedly copies the first player's move in the other game |
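For impartial games, the Sprague–Grundy theorem mentioned above collapses a disjunctive sum to a single number: the Grundy value of the sum is the bitwise XOR of the components' Grundy values. A minimal sketch for Nim heaps, where the Grundy value of a heap equals its size (the function names are mine):

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """Grundy value of the disjunctive sum of Nim heaps = XOR of heap sizes."""
    return reduce(xor, heaps, 0)

def next_player_wins(heaps):
    """Under normal play, the position is a win for the player to move
    exactly when the nim-sum is nonzero."""
    return nim_sum(heaps) != 0

print(next_player_wins([1, 2, 3]))   # False: 1 ^ 2 ^ 3 == 0, a zero (losing) position
print(next_player_wins([1, 2, 4]))   # True:  1 ^ 2 ^ 4 == 7, the mover can win
```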
https://en.wikipedia.org/wiki/Foxconn | Hon Hai Precision Industry Co., Ltd., trading as Hon Hai Technology Group in China and Taiwan and Foxconn internationally, is a Taiwanese multinational electronics contract manufacturer established in 1974 with headquarters in Tucheng, New Taipei City, Taiwan. In 2021, the company's annual revenue reached () and was ranked 20th in the 2023 Fortune Global 500. It is the world's largest contract manufacturer of electronics. While headquartered in Taiwan, the company earns the majority of its revenue from assets in China and is one of the largest employers worldwide. Terry Gou is the company founder and former chairman.
Foxconn manufactures electronic products for major American, Canadian, Chinese, Finnish, and Japanese companies. Notable products manufactured by Foxconn include the BlackBerry, iPad, iPhone, iPod, Kindle, all Nintendo gaming systems since the GameCube, Nintendo DS models, Sega models, Nokia devices, Cisco products, Sony devices (including mostly PlayStation gaming consoles), Google Pixel devices, Xiaomi devices, every successor to Microsoft's Xbox console, and several CPU sockets, including the TR4 CPU socket on some motherboards. As of 2012, Foxconn factories manufactured an estimated 40% of all consumer electronics sold worldwide.
Foxconn named Young Liu its new chairman after the retirement of founder Terry Gou, effective on 1 July 2019. Young Liu was the special assistant to former chairman Terry Gou and the head of business group S (semiconductor). Analysts said the handover signals the company's future direction, underscoring the importance of semiconductors, together with technologies like artificial intelligence, robotics, and autonomous driving, after Foxconn's traditional major business of smartphone assembly has matured.
History
Terry Gou established Hon Hai Precision Industry Co., Ltd. as an electrical components manufacturer in 1974 in Taipei, Taiwan. Foxconn's first manufacturing plant in Mainland China opened in Longhua Town, Shenzh |
https://en.wikipedia.org/wiki/Adaptive%20beamformer | An adaptive beamformer is a system that performs adaptive spatial signal processing with an array of transmitters or receivers. The signals are combined in a manner which increases the signal strength to/from a chosen direction. Signals to/from other directions are combined in a benign or destructive manner, resulting in degradation of the signal to/from the undesired direction. This technique is used in both radio frequency and acoustic arrays, and provides for directional sensitivity without physically moving an array of receivers or transmitters.
Motivation/Applications
Adaptive beamforming was initially developed in the 1960s for the military applications of sonar and radar. There exist several modern applications for beamforming, one of the most visible applications being commercial wireless networks such as LTE. Initial applications of adaptive beamforming were largely focused in radar and electronic countermeasures to mitigate the effect of signal jamming in the military domain.
For radar uses, see phased array radar. Although not strictly adaptive, these radar applications make use of either static or dynamic (scanning) beamforming.
Commercial wireless standards such as 3GPP Long Term Evolution (LTE (telecommunication)) and IEEE 802.16 WiMax rely on adaptive beamforming to enable essential services within each standard.
Basic Concepts
An adaptive beamforming system relies on principles of wave propagation and phase relationships; see constructive interference and beamforming. Using the principle of superposition, a higher or lower amplitude wave is created, e.g. by delaying and weighting the received signals. The adaptive beamforming system dynamically adapts in order to maximize or minimize a desired parameter, such as the signal-to-interference-plus-noise ratio.
Adaptive Beamforming Schemes
There are several ways to approach the beamforming design, the first approach was implemented by maximizing the signal to noise ratio (SNR) by Appleb |
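As a minimal sketch of the phase-alignment principle that beamforming weight design builds on, here is a conventional (non-adaptive) delay-and-sum beamformer for a uniform linear array; the array geometry and all parameter values are assumed for illustration. An adaptive scheme such as MVDR would instead derive the weights from an estimated interference-plus-noise covariance matrix.

```python
import numpy as np

def steering_vector(num_elements, spacing_wavelengths, theta_deg):
    """Relative phases seen across a uniform linear array for a plane wave
    arriving from angle theta (measured from broadside)."""
    n = np.arange(num_elements)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase)

def delay_and_sum_weights(num_elements, spacing_wavelengths, look_deg):
    """Conventional beamformer: conjugate of the steering vector, so signals
    from the look direction add coherently after weighting."""
    v = steering_vector(num_elements, spacing_wavelengths, look_deg)
    return np.conj(v) / num_elements

w = delay_and_sum_weights(num_elements=8, spacing_wavelengths=0.5, look_deg=20.0)
print(abs(w @ steering_vector(8, 0.5, 20.0)))    # ~1.0: full gain toward the look direction
print(abs(w @ steering_vector(8, 0.5, -40.0)))   # much smaller: off-look attenuation
```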
https://en.wikipedia.org/wiki/Information%20privacy%20law | Information privacy, data privacy or data protection laws provide a legal framework on how to obtain, use and store data of natural persons. The various laws around the world describe the rights of natural persons to control who is using their data. This usually includes the right to obtain details on which data is stored and for what purpose, and to request its deletion when that purpose no longer applies.
Over 80 countries and independent territories, including nearly every country in Europe and many in Latin America and the Caribbean, Asia, and Africa, have now adopted comprehensive data protection laws. The European Union has the General Data Protection Regulation (GDPR), in force since May 25, 2018. The United States is notable for not having adopted a comprehensive information privacy law, but rather having adopted limited sectoral laws in some areas like the California Consumer Privacy Act (CCPA).
By Jurisdiction
The German state of Hesse enacted the world's first data privacy law on 30 September 1970. In Germany the term informational self-determination was first used in the context of a German constitutional ruling relating to personal information collected during the 1983 census.
Asia
India
India passed its Digital Personal Data Protection Act, 2023 in August 2023.
China
China passed its Personal Information Protection Law (PIPL) in mid-2021, and was effective from November 1, 2021. It focuses heavily on consent, rights of the individual, and transparency of data processing. PIPL has been compared to the EU GDPR as it has similar scope and many similar provisions.
Philippines
In the Philippines, The Data Privacy Act of 2012 mandated the creation of the National Privacy Commission that would monitor and maintain policies that involve information privacy and personal data protection in the country. Modeled after the EU Data Protection Directive and the Asia-Pacific Economic Cooperation (APEC) Privacy Framework, the independent body would ensure compliance of t |
https://en.wikipedia.org/wiki/Net%20interest%20margin | Net interest margin (NIM) is a measure of the difference between the interest income generated by banks or other financial institutions and the amount of interest paid out to their lenders (for example, deposits), relative to the amount of their (interest-earning) assets. It is similar to the gross margin (or gross profit margin) of non-financial companies.
It is usually expressed as a percentage: the interest earned on loans and other assets in a time period, minus the interest paid on borrowed funds, divided by the average amount of the assets on which it earned income in that time period (the average earning assets).
Net interest margin is similar in concept to net interest spread, but the net interest spread is the nominal average difference between the borrowing and the lending rates, without compensating for the fact that the earning assets and the borrowed funds may be different instruments and differ in volume.
Calculation
NIM is calculated as a percentage of net interest income to average interest-earning assets during a specified period. For example, suppose a bank's average interest-earning assets (which generally include loans and investment securities) were $100.00 in a year, while it earned interest income of $6.00 and paid interest expense of $3.00. The NIM is then computed as ($6.00 – $3.00) / $100.00 = 3% (see the sketch after this entry). Net interest income equals the interest earned on interest-earning assets (such as interest earned on loans and investment securities) minus the interest paid on interest-bearing liabilities (such as interest paid to customers on their deposits).
In particular, for a bank or a financial institution, if the bank or financial institution has a significant amount of non-performing assets (such as loans where full repayment is in doubt), its NIM will generally decrease because interest earned on non-performing assets is treated, for accounting purposes, as repayment of principal and not payment of interest due to the uncertainty that the |
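The worked example above, restated as a tiny sketch (the figures are those from the example; the function name is mine):

```python
def net_interest_margin(interest_income, interest_expense, avg_earning_assets):
    """NIM = (interest income - interest expense) / average interest-earning assets."""
    return (interest_income - interest_expense) / avg_earning_assets

print(f"{net_interest_margin(6.00, 3.00, 100.00):.1%}")   # 3.0%
```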
https://en.wikipedia.org/wiki/Leslie%20Valiant | Leslie Gabriel Valiant (born 28 March 1949) is a British American computer scientist and computational theorist. He was born to a chemical engineer father and a translator mother. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. Valiant was awarded the Turing Award in 2010, having been described by the A.C.M. as a heroic figure in theoretical computer science and a role model for his courage and creativity in addressing some of the deepest unsolved problems in science; in particular for his "striking combination of depth and breadth".
Education
Valiant was educated at King's College, Cambridge, Imperial College London, and the University of Warwick where he received a PhD in computer science in 1974.
Research and career
Valiant is world-renowned for his work in Theoretical Computer Science. Among his many contributions to Complexity Theory, he introduced the notion of #P-completeness ("Sharp-P completeness") to explain why enumeration and reliability problems are intractable. He created the Probably Approximately Correct or PAC model of learning that introduced the field of Computational Learning Theory and became a theoretical basis for the development of Machine Learning. He also introduced the concept of Holographic Algorithms inspired by the Quantum Computation model. In computer systems, he is most well-known for introducing the Bulk Synchronous Parallel processing model. Analogous to the von Neumann model for a single computer architecture, BSP has been an influential model for parallel and distributed computing architectures. Recent examples are Google adopting it for computation at large scale via MapReduce, MillWheel, Pregel and Dataflow, and Facebook creating a graph analytics system capable of processing over 1 trillion edges. There have also been active open-source projects to add explicit BSP programming as well as other high-performance parallel programming models derived from B |
https://en.wikipedia.org/wiki/Vijay%20Vazirani | Vijay Virkumar Vazirani (; b. 1957) is an Indian American distinguished professor of computer science in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine.
Education and career
Vazirani first majored in electrical engineering at Indian Institute of Technology, Delhi but in his second year he transferred to MIT and received his bachelor's degree in computer science from MIT in 1979 and his Ph.D. from the University of California, Berkeley in 1983. His dissertation, Maximum Matchings without Blossoms, was supervised by Manuel Blum.
After postdoctoral research with Michael O. Rabin and Leslie Valiant at Harvard University, he joined the faculty at Cornell University in 1984. He moved to the IIT Delhi as a full professor in 1990, and moved again to the Georgia Institute of Technology in 1995. He was also a McKay Visiting Professor at the University of California, Berkeley, and a Distinguished SISL Visitor at the Social and Information Sciences Laboratory at the California Institute of Technology. In 2017 he moved to the University of California, Irvine as distinguished professor.
Research
Vazirani's research career has been centered around the design of algorithms, together with work on computational complexity theory, cryptography, and algorithmic game theory.
During the 1980s, he made seminal contributions to the classical maximum matching problem, and some key contributions to computational complexity theory, e.g., the isolation lemma, the Valiant-Vazirani theorem, and the equivalence between random generation and approximate counting. During the 1990s he worked mostly on approximation algorithms, championing the primal-dual schema, which he applied to problems arising in network design, facility location and web caching, and clustering. In July 2001 he published what is widely regarded as the definitive book on approximation algorithms (Springer-Verlag, Berlin). Since 2002, he has been at the forefront
of the ef |
https://en.wikipedia.org/wiki/Map-coloring%20games | Several map-coloring games are studied in combinatorial game theory. The general idea is that we are given a map with regions drawn in but not all of the regions colored. Two players, Left and Right, take turns coloring in one uncolored region per turn, subject to various constraints, as in the map-coloring problem. The move constraints and the winning condition are features of the particular game.
Some players find it easier to color vertices of the dual graph, as in the Four color theorem. In this method of play, the regions are represented by small circles, and the circles for neighboring regions are linked by line segments or curves. The advantages of this method are that only a small area need be marked on a turn, and that the representation usually takes up less space on the paper or screen. The first advantage is less important when playing with a computer interface instead of pencil and paper. It is also possible to play with Go stones or Checkers.
Move constraints
An inherent constraint in each game is the set of colors available to the players in coloring regions. If Left and Right have the same colors available to them, the game is impartial; otherwise the game is partisan. The set of colors could also depend on the state of the game; for instance it could be required that the color used be different from the color used on the previous move.
The map-based constraints on a move are usually based on the region to be colored and its neighbors, whereas in the map-coloring problem, regions are considered to be neighbors when they meet along a boundary longer than a single point. The classical map-coloring problem requires that no two neighboring regions be given the same color. The classical move constraint enforces this by prohibiting coloring a region with the same color as one of its neighbors. The anticlassical constraint prohibits coloring a region with a color that differs from the color of one of its neighbors.
Another kind of constrain |
https://en.wikipedia.org/wiki/Electric%20discharge | In electromagnetism, an electric discharge is the release and transmission of electricity in an applied electric field through a medium such as a gas (i.e., an outgoing flow of electric current through a non-metal medium).
Applications
The properties and effects of electric discharges are useful over a wide range of magnitudes. Tiny pulses of current are used to detect ionizing radiation in a Geiger–Müller tube. A low steady current can be used to illustrate the spectrum of gases in a gas-filled tube. A neon lamp is an example of a gas-discharge lamp, useful both for illumination and as a voltage regulator. A flashtube generates a short pulse of intense light useful for photography by sending a heavy current through a gas arc discharge. Corona discharges are used in photocopiers.
Electric discharges can convey substantial energy to the electrodes at the ends of the discharge. A spark gap is used in internal combustion engines to ignite the fuel/air mixture on every power stroke. Spark gaps are also used to switch heavy currents in a Marx generator and to protect electrical apparatus. In electric discharge machining, multiple tiny electric arcs are used to erode a conductive workpiece to a finished shape. Arc welding is used to assemble heavy steel structures, where the base metal is heated to melting by the heat of the arc. An electric arc furnace sustains arc currents of tens of thousands of amperes and is used for steelmaking and production of alloys and other products.
Examples
Examples of electric discharge phenomena include:
Brush discharge
Dielectric barrier discharge
Corona discharge
Electric glow discharge
Electric arc
Electrostatic discharge
Electric discharge in gases
Leader (spark)
Partial discharge
Streamer discharge
Vacuum arc
Townsend discharge
St. Elmo's fire
Lightning
Electric organ
See also
Electrical breakdown
Electric discharge in gases
Lichtenberg figure
Space charge
Debye sheath |
https://en.wikipedia.org/wiki/Working%20set | Working set is a concept in computer science which defines the amount of memory that a process requires in a given time interval.
Definition
Peter Denning (1968) defines "the working set of information $W(t, \tau)$ of a process at time $t$ to be the collection of information referenced by the process during the process time interval $(t - \tau, t)$".
Typically the units of information in question are considered to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next $\tau$ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process.
Rationale
The effect of the choice of what pages to be kept in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then its page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the set of pages the process needs to use grows and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use.
Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes—even processes that were temporarily removed from memory—finish much sooner than they would if the computer attempted to run them all at |
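A minimal sketch of Denning's definition over a page-reference string, assuming discrete reference times and a window of τ references; the names are illustrative:

```python
def working_set(reference_string, t, tau):
    """W(t, tau): the distinct pages referenced in the interval (t - tau, t],
    with t indexing positions in the reference string."""
    start = max(0, t - tau)
    return set(reference_string[start:t])

refs = [1, 2, 1, 3, 2, 4, 4, 1]
print(working_set(refs, t=6, tau=4))   # {1, 2, 3, 4}: pages touched in the last 4 references
```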
https://en.wikipedia.org/wiki/Antimycobacterial | An antimycobacterial is a type of medication used to treat Mycobacteria infections.
Types include:
Tuberculosis treatments
Leprostatic agents
Notes
Antibiotics |
https://en.wikipedia.org/wiki/Mathematics%20Subject%20Classification | The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo |
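A small sketch of the three-level structure as a parser; the regular expression is mine, derived from the description above, and it ignores special placeholder forms:

```python
import re

MSC_PATTERN = re.compile(r"^(\d{2})(?:([A-Z])(\d{2}|xx)?)?$")

def parse_msc(code):
    """Split an MSC code into (top level, second level, third level)."""
    m = MSC_PATTERN.match(code)
    if not m:
        raise ValueError(f"not a recognized MSC code: {code}")
    return m.groups()

print(parse_msc("53"))       # ('53', None, None)   differential geometry
print(parse_msc("53A45"))    # ('53', 'A', '45')    vector and tensor analysis
```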
https://en.wikipedia.org/wiki/Stanley%20Osher | Stanley Osher (born April 24, 1942) is an American mathematician, known for his many contributions in shock capturing, level-set methods, and PDE-based methods in computer vision and image processing. Osher is a professor at the University of California, Los Angeles (UCLA), Director of Special Projects in the Institute for Pure and Applied Mathematics (IPAM) and member of the California NanoSystems Institute (CNSI) at UCLA.
He has a daughter, Kathryn, and a son, Joel.
Education
BS, Brooklyn College, 1962
MS, New York University, 1964
PhD, New York University, 1966
Research interests
Level-set methods for computing moving fronts
Approximation methods for hyperbolic conservation laws and Hamilton–Jacobi equations
Total variation (TV) and other PDE-based image processing techniques
Scientific computing
Applied partial differential equations
L1/TV-based convex optimization
Osher is listed as an ISI highly cited researcher.
Research contributions
Osher was the inventor (or co-inventor) and developer of many highly successful numerical methods for computational physics, image processing and other fields, including:
High resolution numerical schemes to compute flows having shocks and steep gradients, including ENO (essentially non-oscillatory) schemes (with Harten, Chakravarthy, Engquist, Shu), WENO (weighted ENO) schemes (with Liu and Chan), the Osher scheme, the Engquist-Osher scheme, and the Hamilton–Jacobi versions of these methods. These methods have been widely used in computational fluid dynamics (CFD) and related fields.
Total variation (TV)-based image restoration (with Rudin and Fatemi) and shock filters (with Rudin). These are pioneering and widely used methods for PDE-based image processing and have also been used for inverse problems.
Level-set method (with Sethian) for capturing moving interfaces, which has been phenomenally successful as a key tool in PDE based image processing and computer vision, as well as applications in differential |
https://en.wikipedia.org/wiki/Frangipane | Frangipane is a sweet almond-flavored custard, typical in French pastry, used in a variety of ways, including cakes and such pastries as the Bakewell tart, conversation tart, Jésuite and pithivier. A French spelling from a 1674 cookbook is franchipane, with the earliest modern spelling coming from a 1732 confectioners' dictionary. Originally designating a custard tart flavored with almonds or pistachios, the term came later to designate a filling that could be used in a variety of confections and baked goods.
It is traditionally made by combining two parts of almond cream (crème d’amande) with one part pastry cream (crème pâtissière). Almond cream is made from butter, sugar, eggs, almond meal, bread flour, and rum; and pastry cream is made from whole milk, vanilla bean, cornstarch, sugar, egg yolks or whole eggs, and butter. There are many variations on both of these creams as well as on the proportion of almond cream to pastry cream in frangipane.
On Epiphany, the French cut the king cake, a round cake made of frangipane layers, into slices to be distributed by a child known as le petit roi (the little king), who is usually hiding under the dining table. The cake is decorated with stars, a crown, flowers and a special bean hidden inside the cake. Whoever gets the piece of the frangipane cake with the bean is crowned "king" or "queen" for the following year.
Etymology
The word frangipane is a French term used to name products with an almond flavour. The word comes ultimately from the last name of Marquis Muzio Frangipani or Cesare Frangipani. The word first denoted the frangipani plant, from which was produced the perfume originally said to flavor frangipane. Other sources say that the name as applied to the almond custard was an homage by 16th-century Parisian chefs in name only to Frangipani, who created a jasmine-based perfume with a smell like the flowers to perfume leather gloves.
See also
List of almond dishes
List of custard desserts
List of pastries |
https://en.wikipedia.org/wiki/Environmental%20change | Environmental change is a change or disturbance of the environment most often caused by human influences and natural ecological processes. Environmental changes include various factors, such as natural disasters, human interferences, or animal interaction. Environmental change encompasses not only physical changes, but also factors like an infestation of invasive species.
See also
Climate change (general concept)
Environmental degradation
Global warming
Human impact on the environment
Acclimatization
Atlas of Our Changing Environment
Phenotypic plasticity
Socioeconomics |
https://en.wikipedia.org/wiki/Vastus%20medialis | The vastus medialis (vastus internus or teardrop muscle) is an extensor muscle located medially in the thigh that extends the knee. The vastus medialis is part of the quadriceps muscle group.
Structure
The vastus medialis is a muscle present in the anterior compartment of the thigh, and is one of the four muscles that make up the quadriceps muscle. The others are the vastus lateralis, vastus intermedius and rectus femoris. It is the most medial of the "vastus" group of muscles. The vastus medialis arises medially along the entire length of the femur, and attaches with the other muscles of the quadriceps in the quadriceps tendon.
The vastus medialis muscle originates from a continuous line of attachment on the femur, which begins on the front and middle side (anteromedially) on the intertrochanteric line of the femur. It continues down and back (posteroinferiorly) along the pectineal line and then descends along the inner (medial) lip of the linea aspera and onto the medial supracondylar line of the femur. The fibers converge onto the inner (medial) part of the quadriceps tendon and the inner (medial) border of the patella.
The obliquus genus muscle is the most distal segment of the vastus medialis muscle. Its specific training plays an important role in maintaining patella position and limiting injuries to the knee. With no clear delineation, it is simply the most distal group of fibers of the vastus medialis.
Function
The vastus medialis is one of four muscles in the anterior compartment of the thigh. It is involved in knee extension, along with the other muscles which make up the quadriceps muscle. The vastus medialis also contributes to correct tracking of the patella.
A division of the vastus medialis muscle into two groups of fibers has been hypothesized: a long group of fibres relatively in line with the quadriceps ligament, the vastus medialis longus; and a shorter, more obliquely oriented group of fibres, the vastus medialis obliquus. There is as |
https://en.wikipedia.org/wiki/Null%20%28mathematics%29 | In mathematics, the word null (from German null meaning "zero", which is from Latin nullus meaning "none") is often associated with the concept of zero or the concept of nothing. It is used in varying contexts from "having zero members in a set" (e.g., null set) to "having a value of zero" (e.g., null vector).
In a vector space, the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector, a linear mapping given as matrix product or dot product, a seminorm in a Minkowski space, etc.). In set theory, the empty set, that is, the set with zero elements, denoted "{}" or "∅", may also be called null set. In measure theory, a null set is a (possibly nonempty) set with zero measure.
A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel, is the set of vectors which map to the null vector under that mapping.
In statistics, a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise.
See also
0
Null sign |
https://en.wikipedia.org/wiki/Particle%20shower | In particle physics, a shower is a cascade of secondary particles produced as the result of a high-energy particle interacting with dense matter. The incoming particle interacts, producing multiple new particles with lesser energy; each of these then interacts, in the same way, a process that continues until many thousands, millions, or even billions of low-energy particles are produced. These are then stopped in the matter and absorbed.
Types
There are two basic types of showers. Electromagnetic showers are produced by a particle that interacts primarily or exclusively via the electromagnetic force, usually a photon or electron. Hadronic showers are produced by hadrons (i.e. nucleons and other particles made of quarks), and proceed mostly via the strong nuclear force.
Electromagnetic showers
An electromagnetic shower begins when a high-energy electron, positron or photon enters a material. At high energies (above a few MeV), in which the photoelectric effect and Compton scattering are insignificant, photons interact with matter primarily via pair production — that is, they convert into an electron-positron pair, interacting with an atomic nucleus or electron in order to conserve momentum. High-energy electrons and positrons primarily emit photons, a process called bremsstrahlung. These two processes (pair production and bremsstrahlung) continue, leading to a cascade of particles of decreasing energy until photons fall below the pair production threshold, and energy losses of electrons other than bremsstrahlung start to dominate.
The characteristic amount of matter traversed for these related interactions is called the radiation length $X_0$. $X_0$ is both the mean distance over which a high-energy electron loses all but $1/e$ of its energy by bremsstrahlung and 7/9 of the mean free path for pair production by a high-energy photon. The length of the cascade scales with $X_0$; the "shower depth" $X$ is approximately determined by the relation
$$X = X_0\,\frac{\ln(E_0/E_c)}{\ln 2}$$
where $X_0$ is the radiation length of t |
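The depth relation above is that of the simple Heitler-style toy model, in which the number of particles doubles every radiation length until the energy per particle falls to the critical energy $E_c$. A sketch, with $E_0$, $E_c$ and $X_0$ values assumed purely for illustration:

```python
import math

def shower_depth(E0, Ec, X0=1.0):
    """Depth at which multiplication stops: X = X0 * ln(E0/Ec) / ln 2.
    E0 and Ec in the same energy units; X0 in radiation lengths."""
    return X0 * math.log(E0 / Ec) / math.log(2)

def particles_at_maximum(E0, Ec):
    """The particle count doubles each step, so N_max ~ E0 / Ec."""
    return E0 / Ec

print(shower_depth(E0=10_000, Ec=10))      # ~10 radiation lengths for E0 = 10 GeV, Ec = 10 MeV
print(particles_at_maximum(10_000, 10))    # ~1000 particles at shower maximum
```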
https://en.wikipedia.org/wiki/Rent%27s%20rule | Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of "pins") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates).
E. F. Rent's discovery and first publications
In the 1960s, E. F. Rent, an IBM employee, found a remarkable trend between the number of pins (terminals, T) at the boundaries of integrated circuit designs at IBM and the number of internal components (g), such as logic gates or standard cells. On a log–log plot, these datapoints were on a straight line, implying a power-law relation $T = t\,g^p$, where t and p are constants (p < 1.0, and generally 0.5 < p < 0.8).
Rent's findings in IBM-internal memoranda were published in the IBM Journal of Research and Development in 2005, but the relation was described in 1971 by Landman and Russo. They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. They found the power-law rule applied to the resulting T versus g plot and named it "Rent's rule".
Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures.
Theoretical basis
Christie and Stroobandt later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved in placement is reflected by the paramete |
https://en.wikipedia.org/wiki/Flory%E2%80%93Huggins%20solution%20theory | Flory–Huggins solution theory is a lattice model of the thermodynamics of polymer solutions which takes account of the great dissimilarity in molecular sizes in adapting the usual expression for the entropy of mixing. The result is an equation for the Gibbs free energy change for mixing a polymer with a solvent. Although it makes simplifying assumptions, it generates useful results for interpreting experiments.
Theory
The thermodynamic equation for the Gibbs energy change accompanying mixing at constant temperature and (external) pressure is
ΔGmix = ΔHmix − T ΔSmix
A change, denoted by Δ, is the value of a variable for a solution or mixture minus the values for the pure components considered separately. The objective is to find explicit formulas for ΔHmix and ΔSmix, the enthalpy and entropy increments associated with the mixing process.
The result obtained by Flory and Huggins is
ΔGmix = RT [ n1 ln φ1 + n2 ln φ2 + n1 φ2 χ12 ]
The right-hand side is a function of the number of moles n1 and volume fraction φ1 of solvent (component 1), the number of moles n2 and volume fraction φ2 of polymer (component 2), with the introduction of a parameter χ12 to take account of the energy of interdispersing polymer and solvent molecules. R is the gas constant and T is the absolute temperature. The volume fraction is analogous to the mole fraction, but is weighted to take account of the relative sizes of the molecules. For a small solute, the mole fractions would appear instead, and this modification is the innovation due to Flory and Huggins. In the most general case the mixing parameter, χ12, is a free energy parameter, thus including an entropic component.
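A minimal numerical sketch of this expression, assuming the volume fractions are computed from the mole numbers and a degree of polymerization x; the values of x, χ12, the mole numbers and the temperature are illustrative assumptions.
import numpy as np

R = 8.314  # gas constant, J/(mol·K)

def flory_huggins_dG(n1, n2, x, chi, T):
    """Gibbs energy of mixing per the Flory–Huggins expression above.
    n1, n2: moles of solvent and polymer; x: lattice sites occupied by one polymer chain."""
    phi1 = n1 / (n1 + x * n2)        # solvent volume fraction
    phi2 = x * n2 / (n1 + x * n2)    # polymer volume fraction
    return R * T * (n1 * np.log(phi1) + n2 * np.log(phi2) + n1 * phi2 * chi)

print(flory_huggins_dG(n1=1.0, n2=0.001, x=1000, chi=0.4, T=300.0))  # negative: mixing is favoured here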
Derivation
We first calculate the entropy of mixing, the increase in the uncertainty about the locations of the molecules when they are interspersed. In the pure condensed phases — solvent and polymer — everywhere we look we find a molecule. Of course, any notion of "finding" a molecule in a given location is a thought experiment since we can't actually examine spatial locations the size of molecules. The expression |
https://en.wikipedia.org/wiki/Controlled%20NOT%20gate | In computer science, the controlled NOT gate (also C-NOT or CNOT), controlled-X gate, controlled-bit-flip gate, Feynman gate or controlled Pauli-X is a quantum logic gate that is an essential component in the construction of a gate-based quantum computer. It can be used to entangle and disentangle Bell states. Any quantum circuit can be simulated to an arbitrary degree of accuracy using a combination of CNOT gates and single qubit rotations. The gate is sometimes named after Richard Feynman who developed an early notation for quantum gate diagrams in 1986.
The CNOT can be expressed in the Pauli basis as:
CNOT = e^{iπ/4 (I − Z1)(I − X2)} = (1/2)(I⊗I + Z⊗I + I⊗X − Z⊗X)
where the subscripts 1 and 2 label the control and target qubits and the products are tensor products.
Being both unitary and Hermitian, CNOT has the property CNOT† = CNOT and CNOT−1 = CNOT (so CNOT² = I), and is involutory.
The CNOT gate can be further decomposed as products of rotation operator gates and exactly one two qubit interaction gate, for example
In general, any single qubit unitary gate can be expressed as U = e^{iH}, where H is a Hermitian matrix, and then the controlled U is CU = |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ e^{iH}.
The CNOT gate is also used in classical reversible computing.
Operation
The CNOT gate operates on a quantum register consisting of 2 qubits. The CNOT gate flips the second qubit (the target qubit) if and only if the first qubit (the control qubit) is |1⟩.
If |0⟩ and |1⟩ are the only allowed input values for both qubits, then the TARGET output of the CNOT gate corresponds to the result of a classical XOR gate. Fixing CONTROL as |1⟩, the TARGET output of the CNOT gate yields the result of a classical NOT gate.
More generally, the inputs are allowed to be a linear superposition of |00⟩, |01⟩, |10⟩ and |11⟩. The CNOT gate transforms the quantum state:
a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩
into:
a|00⟩ + b|01⟩ + c|11⟩ + d|10⟩
The action of the CNOT gate can be represented by the matrix (permutation matrix form):
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 0 1 ]
[ 0 0 1 0 ]
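A short NumPy sketch of this matrix form, checking the involution property and the basis-state action described above:
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

assert np.allclose(CNOT @ CNOT, np.eye(4))      # involutory: applying it twice gives the identity
assert np.allclose(CNOT, CNOT.conj().T)         # Hermitian (and unitary)

state = np.array([0, 0, 1, 0], dtype=complex)   # |10>: control is 1, target is 0
print(CNOT @ state)                             # -> |11>: the target qubit has been flipped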
The first experimental realization of a CNOT gate was accomplished in 1995. Here, a single Beryllium ion in a trap was used. The two qubits were encoded into an optical state and into the vibrational state of the ion within the trap. At the time of the experiment, the reliability of the CNOT-operation was measured to b |
https://en.wikipedia.org/wiki/General%20selection%20model | The general selection model (GSM) is a model of population genetics that describes how a population's allele frequencies will change when acted upon by natural selection.
Equation
The General Selection Model applied to a single gene with two alleles (let's call them A1 and A2) is encapsulated by the equation:
Δq = pq [ p(w12 − w11) + q(w22 − w12) ] / w̄
where:
p is the frequency of allele A1
q is the frequency of allele A2
Δq is the rate of evolutionary change of the frequency of allele A2
w11, w12 and w22 are the relative fitnesses of homozygous A1, heterozygous (A1A2), and homozygous A2 genotypes respectively.
w̄ is the mean population relative fitness.
In words:
The product of the relative frequencies, pq, is a measure of the genetic variance. The quantity pq is maximized when there is an equal frequency of each gene, when p = q = 1/2. In the GSM, the rate of change is proportional to the genetic variation.
The mean population fitness w̄ is a measure of the overall fitness of the population. In the GSM, the rate of change is inversely proportional to the mean fitness w̄ —i.e. when the population is maximally fit, no further change can occur.
The remainder of the equation, p(w12 − w11) + q(w22 − w12), refers to the mean effect of an allele substitution. In essence, this term quantifies what effect genetic changes will have on fitness.
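A minimal sketch of iterating this equation over generations; the fitness values and the starting frequency below are illustrative assumptions.
def delta_q(q, w11, w12, w22):
    """One-generation change in the frequency of A2 under the general selection model."""
    p = 1.0 - q
    w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22          # mean population fitness
    return p * q * (p * (w12 - w11) + q * (w22 - w12)) / w_bar   # the GSM equation above

q = 0.05                     # starting frequency of allele A2
for generation in range(100):
    q += delta_q(q, w11=0.8, w12=0.9, w22=1.0)   # A2 is favoured in this example
print(round(q, 3))           # q rises toward fixation at 1.0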
See also
Darwinian fitness
Hardy–Weinberg principle
Population genetics |
https://en.wikipedia.org/wiki/Charismatic%20megafauna | Charismatic megafauna are animal species that are large—in the relevant category that they represent—with symbolic value or widespread popular appeal, and are often used by environmental activists to gain public support for environmentalist goals. Examples include tigers, lions, jaguars, hippopotamuses, elephants, gorillas, chimpanzees, giant pandas, brown and polar bears, rhinoceroses, kangaroos, koalas, blue whales, humpback whales, orcas, walruses, elephant seals, bald eagles, white-tailed and eastern imperial eagles, penguins, crocodiles and great white sharks among countless others. In this definition, animals such as penguins or bald eagles can be considered megafauna because they are among the largest animals within the local animal community, and they disproportionately affect their environment. The vast majority of charismatic megafauna species are threatened and endangered by overhunting, poaching, black market trade, climate change, habitat destruction, invasive species, and many more causes.
Use in conservation
Charismatic species are often used as flagship species in conservation programs, as they are supposed to affect people's feelings more. However, being charismatic does not protect species against extinction; all of the 10 most charismatic species are currently endangered, and only the giant panda shows a demographic growth from an extremely small population.
Beginning early in the 20th century, efforts to reintroduce extirpated charismatic megafauna to ecosystems have been an interest of a number of private and non-government conservation organizations. Species have been reintroduced from captive breeding programs in zoos, such as the wisent (the European bison) to Poland's Białowieża Forest.
These and other reintroductions of charismatic megafauna, such as Przewalski's horse to Mongolia, have been to areas of limited, and often patchy, range compared to the historic ranges of the respective species.
Environmental activists and proponents of |
https://en.wikipedia.org/wiki/Gilbert%20Ames%20Bliss | Gilbert Ames Bliss (9 May 1876 – 8 May 1951) was an American mathematician, known for his work on the calculus of variations.
Life
Bliss grew up in a Chicago family that eventually became affluent; in 1907, his father became president of the company supplying all of Chicago's electricity. The family was not affluent, however, when Bliss entered the University of Chicago in 1893 (its second year of operation). Hence he had to support himself while a student by winning a scholarship, and by playing in a student professional mandolin quartet.
After obtaining the B.Sc. in 1897, he began graduate studies at Chicago in mathematical astronomy (his first publication was in that field), switching in 1898 to mathematics. He discovered his life's work, the calculus of variations, via the lecture notes of Weierstrass's 1879 course, and Bolza's teaching. Bolza went on to supervise Bliss's Ph.D. thesis, The Geodesic Lines on the Anchor Ring, completed in 1900 and published in the Annals of Mathematics in 1902. After two years as an instructor at the University of Minnesota, Bliss spent the 1902–03 academic year at the University of Göttingen, interacting with Felix Klein, David Hilbert, Hermann Minkowski, Ernst Zermelo, Erhard Schmidt, Max Abraham, and Constantin Carathéodory.
Upon returning to the United States, Bliss taught one year each at the University of Chicago and the University of Missouri. In 1904, he published two more papers on the calculus of variations in the Transactions of the American Mathematical Society. Bliss was a Preceptor at Princeton University, 1905–08, joining a strong group of young mathematicians that included Luther P. Eisenhart, Oswald Veblen, and Robert Lee Moore. While at Princeton he was also an associate editor of the Annals of Mathematics.
In 1908, Chicago's Maschke died and Bliss was hired to replace him; Bliss remained at Chicago until his 1941 retirement. While at Chicago, he was an editor of the Transactions of the American Mathematica |
https://en.wikipedia.org/wiki/Preferential%20entailment | Preferential entailment is a non-monotonic logic based on selecting only models that are considered the most plausible. The plausibility of models is expressed by an ordering among models called a preference relation, hence the name preference entailment.
Formally, given a propositional formula A and an ordering < over propositional models, preferential entailment selects only the models of A that are minimal according to <. This selection leads to a non-monotonic inference relation: A preferentially entails B if and only if all minimal models of A according to < are also models of B.
Circumscription can be seen as the particular case of preferential entailment when the ordering is based on containment of the sets of variables assigned to true (in the propositional case) or containment of the extensions of predicates (in the first-order logic case).
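A small illustrative sketch, assuming formulas are represented as Python predicates over truth assignments and using the containment-of-true-variables ordering mentioned above for circumscription; all names here are hypothetical, chosen only for the example.
from itertools import product

def models_of(formula, variables):
    """All truth assignments (as dicts) that satisfy the formula."""
    assignments = (dict(zip(variables, values))
                   for values in product([False, True], repeat=len(variables)))
    return [m for m in assignments if formula(m)]

def minimal_models(ms):
    """Models minimal under containment of their sets of true variables."""
    true_set = lambda m: {v for v, b in m.items() if b}
    return [m for m in ms if not any(true_set(other) < true_set(m) for other in ms)]

def preferentially_entails(A, B, variables):
    return all(B(m) for m in minimal_models(models_of(A, variables)))

# Under this preference relation, "p or q" entails "not (p and q)",
# which does not hold under classical (monotonic) entailment.
A = lambda m: m["p"] or m["q"]
B = lambda m: not (m["p"] and m["q"])
print(preferentially_entails(A, B, ["p", "q"]))   # True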
See also |
https://en.wikipedia.org/wiki/Entropy%20of%20vaporization | In thermodynamics, the entropy of vaporization is the increase in entropy upon vaporization of a liquid. This is always positive, since the degree of disorder increases in the transition from a liquid in a relatively small volume to a vapor or gas occupying a much larger space. At standard pressure P° = 1 bar, the value is denoted as ΔS°vap and normally expressed in joules per mole-kelvin, J/(mol·K).
For a phase transition such as vaporization or fusion (melting), both phases may coexist in equilibrium at constant temperature and pressure, in which case the difference in Gibbs free energy is equal to zero:
ΔGvap = ΔHvap − Tb ΔSvap = 0
where ΔHvap is the heat or enthalpy of vaporization. Since this is a thermodynamic equation, the symbol T refers to the absolute thermodynamic temperature, measured in kelvins (K). The entropy of vaporization is then equal to the heat of vaporization divided by the boiling point:
ΔSvap = ΔHvap / Tb
According to Trouton's rule, the entropy of vaporization (at standard pressure) of most liquids has similar values. The typical value is variously given as 85 J/(mol·K), 88 J/(mol·K) and 90 J/(mol·K). Hydrogen-bonded liquids have somewhat higher values of ΔSvap.
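A one-line numerical check of this relation for water; the enthalpy of vaporization and boiling point below are standard approximate values, not figures from the text.
dH_vap = 40660.0               # J/mol, approximate enthalpy of vaporization of water at its boiling point
T_b = 373.15                   # K, normal boiling point of water
print(round(dH_vap / T_b, 1))  # ~108.9 J/(mol·K), above Trouton's 85-90, as expected for a hydrogen-bonded liquid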
See also
Entropy of fusion |
https://en.wikipedia.org/wiki/Anton%20Felkel | Anton Felkel (26 April 1740, Kamenz, Silesia – c. 1800, possibly in Lisbon, Portugal) was an Austrian mathematician who worked on the determination of prime numbers.
Work
In 1776 and 1777, Felkel published a table giving complete decompositions of all integers not divisible by 2, 3, and 5, from 1 to 408,000. Felkel had planned to extend his table to 10 million. A reconstruction of his table is found on the LOCOMAT site.
Publications
Tafel aller einfachen Factoren der durch 2, 3, 5 nicht theilbaren Zahlen von 1 bis 10 000 000. Vienna: 1776;
I. Theil enthaltend die Factoren von 1 bis 144000 (also published in Latin)
Pars II. exhibens factores numerorum ab 144001 usque 336000
Pars III. exhibens factores numerorum ab 336001 usque 408000
Wahre Beschaffenheit des Donners: Eine ganz neue Entdeckung durch einen Liebhaber der Naturkunde. Wien: v. Ghelen, 1780;
Neueröffnetes Geheimniss der Parallellinien enthaltend verschiedene wichtige Zusätze zur Proportion und Körperlehre von Anton Felkel; nebst einer dreyfachen vorläufigen Nachricht von den dazu dienenden neuerfundenen mechanischen Kunstgriffen etc. Wien; von Ghelenschen Buchhandlung, 1781; |
https://en.wikipedia.org/wiki/Controlled%20image%20base | Controlled image base or CIB is unclassified digital imagery, produced to support mission planning and command, control, communications, and intelligence systems. CIB is used as a map substitute for emergencies and crises in the event that maps do not exist or are outdated. CIB is produced from SPOT commercial imagery that has been orthorectified using the National Geospatial-Intelligence Agency (NGA)'s DTED. CIB is RPF and NITF compliant. (Source: FAS) |
https://en.wikipedia.org/wiki/Software%20token | A software token (a.k.a. soft token) is a piece of a two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. (Contrast hardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated — absent physical invasion of the device)
Because software tokens are something one does not physically possess, they are exposed to unique threats based on duplication of the underlying cryptographic material - for example, computer viruses and software attacks. Both hardware and software tokens are vulnerable to bot-based man-in-the-middle attacks, or to simple phishing attacks in which the one-time password provided by the token is solicited, and then supplied to the genuine website in a timely manner. Software tokens do have benefits: there is no physical token to carry, they do not contain batteries that will run out, and they are cheaper than hardware tokens.
Security architecture
There are two primary architectures for software tokens: shared secret and public-key cryptography.
For a shared secret, an administrator will typically generate a configuration file for each end-user. The file will contain a username, a personal identification number, and the secret. This configuration file is given to the user.
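As a concrete illustration of the shared-secret approach, the sketch below generates a time-based one-time code in the style of the widely used TOTP algorithm (RFC 6238); the secret and parameters are illustrative, and real products may differ in detail.
import base64, hashlib, hmac, struct, time

def totp(secret_base32, step=30, digits=6, at_time=None):
    """Time-based one-time code derived from a shared secret (TOTP-style sketch)."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # client and server derive the same 6-digit code from the shared secret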
The shared secret architecture is potentially vulnerable in a number of areas. The configuration file can be compromised if it is stolen and the token is copied. With time-based software tokens, it is possible to borrow an individual's PDA or laptop, set the clock forward, and generate codes that will be valid in the future. Any software token that uses shared secrets and stores the PIN alongside the shared secret in a software client can be stolen and subjected to offline attacks. Shared secret tokens can be difficult to distribu |
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem%20for%20smooth%20manifolds | In mathematics, a Riemann–Roch theorem for smooth manifolds is a version of results such as the Hirzebruch–Riemann–Roch theorem or Grothendieck–Riemann–Roch theorem (GRR) without a hypothesis making the smooth manifolds involved carry a complex structure. Results of this kind were obtained by Michael Atiyah and Friedrich Hirzebruch in 1959, reducing the requirements to something like a spin structure.
Formulation
Let X and Y be oriented smooth closed manifolds and f: X → Y a continuous map. Let vf = f*(TY) − TX in the K-group K(X). If dim(X) ≡ dim(Y) mod 2, then
where ch is the Chern character, d(vf) is an element of the integral cohomology group H2(X, Z) satisfying d(vf) ≡ f* w2(TY) − w2(TX) mod 2, fK* is the Gysin homomorphism for K-theory, and fH* is the Gysin homomorphism for cohomology.
This theorem was first proven by Atiyah and Hirzebruch.
The theorem is proven by considering several special cases. If Y is the Thom space of a vector bundle V over X, then the Gysin maps are just the Thom isomorphism. Then, using the splitting principle, it suffices to check the theorem via explicit computation for line bundles.
If f: X → Y is an embedding, then the Thom space of the normal bundle of X in Y can be viewed as a tubular neighborhood of X in Y, and excision gives maps in K-theory and in cohomology. The Gysin map for K-theory/cohomology is defined to be the composition of the Thom isomorphism with these maps. Since the theorem holds for the map from X to the Thom space of N, and since the Chern character commutes with u and v, the theorem is also true for embeddings f: X → Y.
Finally, a general map f: X → Y can be factored into an embedding followed by a projection. The theorem is true for the embedding. The Gysin map for the projection is the Bott-periodicity isomorphism, which commutes with the Chern character, so the theorem holds in this general case also.
Corollaries
Atiyah and Hirzebruch then specialised and refined in the case X = a point, where the condition becomes the existence of a |
https://en.wikipedia.org/wiki/Come%20Rain%20or%20Come%20Shine | "Come Rain or Come Shine" is a popular music song, with music by Harold Arlen and lyrics by Johnny Mercer. It was written for the Broadway musical St. Louis Woman, which opened on March 30, 1946, and closed after 113 performances.
Chart performance
It "became a modest hit during the show's run, making the pop charts with a Margaret Whiting (Paul Weston and His Orchestra) recording rising to number seventeen, and, shortly after, a Helen Forrest and Dick Haymes recording rising to number twenty-three."
Other recordings
The song has subsequently been recorded by a host of artists, including:
In 1955, Billie Holiday included it on her Music for Torching LP.
In 1956, Judy Garland included it on her Judy LP, as well her 1961 live album, Judy at Carnegie Hall.
In 1956, Fran Warren included it on her album Mood Indigo.
In 1958, Art Blakey & The Jazz Messengers recorded it for their album released in 1959, Moanin’.
In 1959, Connie Francis included it on her The Exciting Connie Francis LP.
In 1959, Ray Charles included it on his The Genius of Ray Charles LP.
In 1959, Oscar Peterson included it on his 1960 album Oscar Peterson Plays the Harold Arlen Songbook.
In 1959, Bill Evans opened his 1960 album Portrait in Jazz with this song.
In 1961, Ella Fitzgerald included it on her Ella Fitzgerald Sings the Harold Arlen Songbook LP.
In 1962, Frank Sinatra included it on his Sinatra and Strings LP.
In 1973, Joe Pass included it on his 1983 album Virtuoso No. 4.
In 1977, Return To Forever included it on their live 4LP Live
In 1979, Barbra Streisand included it on her Wet LP.
In 1989, Michael Crawford included it on his With Love / The Phantom Unmasked album.
In 1991, Bette Midler sings the song in her film For the Boys and it is also featured in the movie's soundtrack.
In 1995, Don Henley for the Leaving Las Vegas soundtrack, and The Unplugged Collection, Volume One
In 1997, Alison Eastwood for the Midnight in the Garden of Good and Evil soundtrack
In 1999, Bobby Caldwell for his C |
https://en.wikipedia.org/wiki/Malecot%27s%20method%20of%20coancestry | Malecot's coancestry coefficient, f, refers to an indirect measure of genetic similarity of two individuals which was initially devised by the French mathematician Gustave Malécot.
f is defined as the probability that any two alleles, sampled at random (one from each individual), are identical copies of an ancestral allele. In species with well-known lineages (such as domesticated crops), f can be calculated by examining detailed pedigree records. Modernly, f can be estimated using genetic marker data.
Evolution of inbreeding coefficient in finite size populations
In a finite size population, after some generations, all individuals will have a common ancestor: f → 1.
Consider a non-sexual population of fixed size N, and call fn the inbreeding coefficient of generation n. Here, fn means the probability that two individuals picked at random will have a common ancestor. At each generation, each individual produces a large number of descendants, from the pool of which N individuals will be chosen at random to form the new generation.
At generation n+1, the probability that two individuals have a common ancestor is "they have a common parent" OR "they descend from two distinct individuals which have a common ancestor":
fn+1 = 1/N + (1 − 1/N) fn
This is a recurrence relation easily solved. Considering the worst case where, at generation zero, no two individuals have a common ancestor, f0 = 0, we get
fn = 1 − (1 − 1/N)^n
The scale of the fixation time (average number of generations it takes to homogenize the population) is therefore of the order of N generations.
This computation trivially extends to the inbreeding coefficients of alleles in a sexual population by changing N to 2N (the number of gametes).
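A short sketch comparing the recurrence above with its closed-form solution; N = 100 is an illustrative population size.
def inbreeding_recurrence(N, generations):
    """Iterate f(n+1) = 1/N + (1 - 1/N) f(n) starting from f(0) = 0."""
    f = 0.0
    for _ in range(generations):
        f = 1.0 / N + (1.0 - 1.0 / N) * f
    return f

N, n = 100, 200
print(round(inbreeding_recurrence(N, n), 4))   # recurrence
print(round(1.0 - (1.0 - 1.0 / N) ** n, 4))    # closed form: identical result
# after a few multiples of N generations, f approaches 1: the population shares a common ancestor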
See also
Coefficient of relationship
Consanguinity
Genetic distance |
https://en.wikipedia.org/wiki/Nitrogen%20balance | The concept of nitrogen balance dictates that the difference between nitrogen intake and loss reflects gain or loss of total body protein respectively. If more nitrogen (protein) is given to the patient than is lost, the patient is considered to be anabolic or “in positive nitrogen balance."
Nitrogen Balance = Nitrogen intake - Nitrogen loss
Blood urea nitrogen can be used in estimating nitrogen balance, as can the urea concentration in urine.
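A minimal sketch of the balance equation above, assuming two common rule-of-thumb approximations that are not stated in the text: dietary protein is roughly 16% nitrogen (protein grams divided by 6.25), and about 4 g/day of nitrogen is lost through routes other than urinary urea.
def nitrogen_balance(protein_intake_g, urinary_urea_nitrogen_g, other_losses_g=4.0):
    """Estimated daily nitrogen balance in grams (positive means anabolic)."""
    nitrogen_in = protein_intake_g / 6.25        # protein is roughly 16% nitrogen by mass (assumption)
    nitrogen_out = urinary_urea_nitrogen_g + other_losses_g
    return nitrogen_in - nitrogen_out

print(round(nitrogen_balance(protein_intake_g=90.0, urinary_urea_nitrogen_g=9.0), 2))  # +1.4 g/day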
Nitrogen Balance and Protein Metabolism
Nitrogen is a fundamental component of amino acids, the molecular building blocks of protein. Therefore, measuring nitrogen inputs and losses can be used to study protein metabolism.
Positive nitrogen balance is associated with periods of growth, hypothyroidism, tissue repair, and pregnancy. In these states, the intake of nitrogen into the body is greater than the loss of nitrogen from the body, so there is an increase in the total body pool of protein.
Negative nitrogen balance is associated with burns, serious tissue injuries, fever, hyperthyroidism, wasting diseases, and periods of fasting. This means that the amount of nitrogen excreted from the body is greater than the amount of nitrogen ingested. A negative nitrogen balance can be used as part of a clinical evaluation of malnutrition.
Nitrogen balance is the traditional method of measuring dietary protein requirements. Determining dietary protein requirements using nitrogen balance requires that all nitrogen inputs and losses are carefully collected, to ensure that all nitrogen exchange is accounted for. In order to control nitrogen inputs and losses, nitrogen balance studies usually require participants to eat very specific diets (so total nitrogen intake is known) and stay in the study location for the duration of the study (to collect all nitrogen losses). Because of these conditions, it can be difficult to study the dietary protein requirements of various populations using the nitrogen balance technique (e.g. children).
Die |
https://en.wikipedia.org/wiki/Delay-locked%20loop | In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line.
A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit.
The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal.
Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block.
A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements.
The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required.
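A toy numerical sketch of this feedback loop; the input clock period, the delay-element gain and the integrator gain below are made-up illustrative parameters.
period = 10.0            # ns, input clock period (assumed)
gain_per_code = 0.1      # ns of extra delay per unit of control value (assumed element gain)
integrator_gain = 1.0
control = 0.0

for _ in range(200):
    delay = control * gain_per_code
    error = period - delay               # phase detector: last tap compared with the input clock
    control += integrator_gain * error   # the error is integrated and fed back to the delay elements

print(round(control * gain_per_code, 3))   # settles at ~10.0 ns, one full period: phase lock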
A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself.
In the Control Systems jargon, th |
https://en.wikipedia.org/wiki/Teleradiology | Teleradiology is the transmission of radiological patient images from procedures such as X-rays, computed tomography (CT), and MRI, from one location to another for the purposes of sharing studies with other radiologists and physicians. Teleradiology allows radiologists to provide services without actually having to be at the location of the patient. This is particularly important when a sub-specialist such as an MRI radiologist, neuroradiologist, pediatric radiologist, or musculoskeletal radiologist is needed, since these professionals are generally only located in large metropolitan areas working during daytime hours. Teleradiology allows for specialists to be available at all times.
Teleradiology utilizes standard network technologies such as the Internet, telephone lines, wide area networks, local area networks (LAN) and the latest advanced technologies such as medical cloud computing. Specialized software is used to transmit the images and enable the radiologist to effectively analyze potentially hundreds of images of a given study. Technologies such as advanced graphics processing, voice recognition, artificial intelligence, and image compression are often used in teleradiology. Through teleradiology and mobile DICOM viewers, images can be sent to another part of the hospital or to other locations around the world with equal effort.
Teleradiology is a growth technology given that imaging procedures are growing approximately 15% annually against an increase of only 2% in the radiologist population.
Reports
Teleradiologists can provide a preliminary read for emergency room cases and other emergency cases or a final read for the official patient record and for use in billing.
Preliminary reports include all pertinent findings and a telephone call for any critical findings. For some teleradiology services, the turnaround time is rapid with a 30-minute standard turnaround and expedited for critical and stroke studies.
Teleradiology final |
https://en.wikipedia.org/wiki/Big%20Numbers%20%28comics%29 | Big Numbers is an unfinished graphic novel by writer Alan Moore and artist Bill Sienkiewicz. In 1990 Moore's short-lived imprint Mad Love published two of the planned twelve issues. The series was picked up by Kevin Eastman's Tundra Publishing, but the completed third issue did not print, and the remaining issues, whose artwork was to be handled by Sienkiewicz's assistant Al Columbia, were never finished.
The work marks a move, on Moore's part, away from genre fiction, in the wake of the success of Watchmen. Moore weaves mathematics (in particular the work of mathematician Benoit Mandelbrot on fractal geometry and chaos theory) into a narrative of socioeconomic changes wrought by an American corporation's building of a shopping mall in a small, traditional English town, and the effects of the economic policies of the Margaret Thatcher administration in the 1980s.
Publication history
The planned 500-page graphic novel was to be serialised one chapter at a time over twelve issues. The series was printed on high-quality paper in an unusual square format.
The first two issues were produced by Alan Moore's self-publishing company Mad Love, with writing by Moore and artwork by Bill Sienkiewicz. However, the workload for the comic was intense, and Sienkiewicz stalled. By the time he backed out of the series, the third issue was still incomplete and rising overhead crippled the production. Kevin Eastman, creator of Teenage Mutant Ninja Turtles, stepped in and attempted to have his company Tundra Publishing publish Big Numbers. Moore and Eastman asked Sienkiewicz' assistant, Al Columbia, to become the series' sole artist and Roxanne Starr to be its letterer. Columbia worked on the fourth issue but, for reasons which remain unclear, destroyed his own artwork and abandoned the project as well. Big Numbers #3 and #4 were never published, and the series remains unfinished.
In 1999, ten pages of Sienkiewicz's art for Big Numbers #3 were published in the first (and only) issu |
https://en.wikipedia.org/wiki/Helge%20Tverberg | Helge Arnulf Tverberg (March 6, 1935December 28, 2020) was a Norwegian mathematician. He was a professor in the Mathematics Department at the University of Bergen, his speciality being combinatorics; he retired at the mandatory age of seventy.
He was born in Bergen. He took the cand.real. degree at the University of Bergen in 1958, and the dr.philos. degree in 1968. He was a lecturer from 1958 to 1971 and professor from 1971 to his retirement in 2005. He was a visiting scholar at the University of Reading in 1966 and at the Australian National University, in Canberra, from 1980 to 1981, 1987 to 1988 and in 2004. He was a member of the Norwegian Academy of Science and Letters.
Tverberg, in 1965, proved a result on intersection patterns of partitions of point configurations that has come to be known as Tverberg's partition theorem. It inaugurated a new branch of combinatorial geometry, with many variations and applications. An account by Günter M. Ziegler of Tverberg's work in this direction appeared in the issue of the Notices of the American Mathematical Society for April, 2011.
See also
Geometric separator |
https://en.wikipedia.org/wiki/4C%20Entity | The 4C Entity is a digital rights management (DRM) consortium formed by IBM, Intel, Panasonic and Toshiba that has established and licensed interoperable cryptographic protection mechanisms for removable media technologies. 4C Entity was founded in 1999 when Warner Music approached the companies to develop stronger DRM technologies for the then-novel DVD-Audio format after Intel’s CSS DRM technology was hacked.
The group developed and currently licenses the Content Protection for Recordable Media (CPRM) and the Content Protection for Prerecorded Media (CPPM) schemes, which use Media Key Block technology and the Cryptomeria cipher along with audio watermarks. 4C Entity has also written the Content Protection System Architecture (CPSA), which describes how content protection solutions work together and the role of each current technology.
CPPM and CPRM are implemented in SD Cards, DVD-Audio, Flash media, and other digital media formats. Like many DRM technologies, 4C Entity and its products have been criticized, with the Associated Press writing that CPRM “spark[ed] privacy concerns.” |
https://en.wikipedia.org/wiki/Zeta%20function%20regularization | In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
Definition
There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a1 + a2 + ....
One method is to define its zeta regularized sum to be ζA(−1) if this is defined, where the zeta function is defined for large Re(s) by
ζA(s) = 1/a1^s + 1/a2^s + ...
if this sum converges, and by analytic continuation elsewhere.
In the case when an = n, the zeta function is the ordinary Riemann zeta function. This method was used by Euler to "sum" the series 1 + 2 + 3 + 4 + ... to ζ(−1) = −1/12.
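A quick numerical illustration of that example, assuming the mpmath library is available: the partial sums of 1 + 2 + 3 + ... diverge, while the analytic continuation of the Riemann zeta function assigns the series the regularized value ζ(−1) = −1/12.
from mpmath import zeta

print(sum(range(1, 1001)))   # partial sum 1 + 2 + ... + 1000 = 500500, and it keeps growing
print(zeta(-1))              # -0.0833333... = -1/12, the zeta-regularized value of the series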
Hawking (1977) showed that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at the temperature T = β−1. The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by putting τ = it which are zero on the walls of the box and which are periodic in τ with period β. In this situation from the partition function he computes energy, entropy and pressure of the radiation of the field φ. In case of flat spaces the eigenvalues appearing in the physical quantities are generally known, while in case of curved space they are not known: in this case asymptotic methods are needed.
Another method defines the possibly divergent infinite product a1a2.... to be exp(−ζ′A(0)). Ray and Singer (1971) used this to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in their application) with eigenvalues |
https://en.wikipedia.org/wiki/NPH%20insulin | Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. It is used by injection under the skin once to twice a day. Onset of effects is typically in 90 minutes and they last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin.
The common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral-pH is maintained and crystals form. There are human and pig insulin based versions.
Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours.
It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin to other insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with Type 1 diabetes. Insulin detemir appeared to provide a lower risk of severe hypoglycemia compared to NPH insulin; however, this finding was inconsistent across included studies. |
https://en.wikipedia.org/wiki/Motion%20detector | A motion detector is an electrical device that utilizes a sensor to detect nearby motion. Such a device is often integrated as a component of a system that automatically performs a task or alerts a user of motion in an area. They form a vital component of security, automated lighting control, home control, energy efficiency, and other useful systems.
Overview
An active electronic motion detector contains an optical, microwave, or acoustic sensor, as well as a transmitter. However, a passive detector contains only a sensor and only senses a signature from the moving object via emission or reflection. Changes in the optical, microwave or acoustic field in the device's proximity are interpreted by the electronics based on one of several technologies. Most low-cost motion detectors can detect motion at distances of about . Specialized systems are more expensive but have either increased sensitivity or much longer ranges. Tomographic motion detection systems can cover much larger areas because the radio waves they sense are at frequencies which penetrate most walls and obstructions, and are detected in multiple locations.
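In software terms, the electronics' decision reduces to comparing new sensor readings against a slowly tracking baseline; the sketch below is a generic illustration of that idea, not the algorithm of any particular product.
def detect_motion(readings, threshold=3.0, baseline_rate=0.05):
    """Return the indices at which a reading departs from the running baseline."""
    baseline = readings[0]
    events = []
    for i, value in enumerate(readings[1:], start=1):
        if abs(value - baseline) > threshold:
            events.append(i)                                   # change in the sensed field: motion
        baseline = (1 - baseline_rate) * baseline + baseline_rate * value  # adapt slowly to drift
    return events

print(detect_motion([20.0, 20.1, 19.9, 27.5, 27.3, 20.2]))   # -> [3, 4]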
Motion detectors have found wide use in commercial applications. One common application is activating automatic door openers in businesses and public buildings. Motion sensors are also widely used in lieu of a true occupancy sensor in activating street lights or indoor lights in walkways, such as lobbies and staircases. In such smart lighting systems, energy is conserved by only powering the lights for the duration of a timer, after which the person has presumably left the area. A motion detector may be among the sensors of a burglar alarm that is used to alert the home owner or security service when it detects the motion of a possible intruder. Such a detector may also trigger a security camera to record the possible intrusion.
Sensor technology
Several types of motion detection are in wide use:
Passive infrared (PIR)
Passive infrared (PIR) sensors are sen |
https://en.wikipedia.org/wiki/Minification%20%28programming%29 | Minification (also minimisation or minimization) is the process of removing all unnecessary characters from the source code of interpreted programming languages or markup languages without changing its functionality. These unnecessary characters usually include white space characters, new line characters, comments, and sometimes block delimiters, which are used to add readability to the code but are not required for it to execute. Minification reduces the size of the source code, making its transmission over a network (e.g. the Internet) more efficient. In programmer culture, aiming at extremely minified source code is the purpose of recreational code golf competitions.
Minification can be distinguished from the more general concept of data compression in that the minified source can be interpreted immediately without the need for an uncompression step: the same interpreter can work with both the original as well as with the minified source.
The goals of minification are not the same as the goals of obfuscation; the former is often intended to be reversed using a pretty-printer or unminifier. However, to achieve its goals, minification sometimes uses techniques also used by obfuscation; for example, shortening variable names and refactoring the source code. When minification uses such techniques, the pretty-printer or unminifier can only fully reverse the minification process if it is supplied details of the transformations done by such techniques. If not supplied those details, the reversed source code will contain different variable names and control flow, even though it will have the same functionality as the original source code.
Example
For example, the JavaScript code
// This is a comment that will be removed by the minifier
var array = [];
for (var i = 0; i < 20; i++) {
array[i] = i;
}
is equivalent to but longer than
for(var a=[],i=0;i<20;a[i]=i++);
History
In 2001 Douglas Crockford introduced JSMin, which removed comments and whitespace from JavaScrip |
https://en.wikipedia.org/wiki/Tribometer | A tribometer is an instrument that measures tribological quantities, such as coefficient of friction, friction force, and wear volume, between two surfaces in contact. It was invented by the 18th-century Dutch scientist Musschenbroek.
A tribotester is the general name given to a machine or device used to perform tests and simulations of wear, friction and lubrication which are the subject of the study of tribology. Often tribotesters are extremely specific in their function and are fabricated by manufacturers who desire to test and analyze the long-term performance of their products. An example is that of orthopedic implant manufacturers who have spent considerable sums of money to develop tribotesters that accurately reproduce the motions and forces that occur in human hip joints so that they can perform accelerated wear tests of their products.
Theory
A simple tribometer is described by a hanging mass and a mass resting on a horizontal surface, connected to each other via a string and pulley. The coefficient of friction, µ, when the system is stationary, is determined by increasing the hanging mass until the moment that the resting mass begins to slide. Then using the general equation for friction force:
F = µN
where N, the normal force, is equal to the weight (mass x gravity) of the sitting mass (mT) and F, the loading force, is equal to the weight (mass x gravity) of the hanging mass (mH).
To determine the kinetic coefficient of friction the hanging mass is increased or decreased until the mass system moves at a constant speed.
In both cases, the coefficient of friction is simplified to the ratio of the two masses:
µ = mH / mT
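A one-function sketch of this ratio; the masses below are illustrative values.
def friction_coefficient(m_hanging, m_resting):
    """µ = m_H / m_T: the gravity terms cancel, as in the derivation above."""
    return m_hanging / m_resting

print(friction_coefficient(m_hanging=2.0, m_resting=5.0))   # µ = 0.4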
In most test applications using tribometers, wear is measured by comparing the mass or surfaces of test specimens before and after testing. Equipment and methods used to examine the worn surfaces include optical microscopes, scanning electron microscopes, optical interferometry and mechanical roughness testers.
Types
Tribometers are often referred to |
https://en.wikipedia.org/wiki/Newton%E2%80%93Euler%20equations | In classical mechanics, the Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body.
Traditionally the Newton–Euler equations are the grouping together of Euler's two laws of motion for a rigid body into a single equation with 6 components, using column vectors and matrices. These laws relate the motion of the center of gravity of a rigid body with the sum of forces and torques (or synonymously moments) acting on the rigid body.
Center of mass frame
With respect to a coordinate frame whose origin coincides with the body's center of mass for τ (torque) and an inertial frame of reference for F (force), they can be expressed in matrix form as:
[ F ]   [ m I3    0  ] [ acm ]   [       0        ]
[ τ ] = [  0    Icm  ] [  α  ] + [ ω × (Icm ω) ]
or, componentwise, F = m acm and τ = Icm α + ω × (Icm ω),
where
F = total force acting on the center of mass
m = mass of the body
I3 = the 3×3 identity matrix
acm = acceleration of the center of mass
vcm = velocity of the center of mass
τ = total torque acting about the center of mass
Icm = moment of inertia about the center of mass
ω = angular velocity of the body
α = angular acceleration of the body
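A direct NumPy sketch of the two component equations above, using the symbols defined in the list; the numerical values are arbitrary examples.
import numpy as np

def newton_euler_cm(m, I_cm, a_cm, alpha, omega):
    """Force and torque about the center of mass: F = m*a_cm, tau = I_cm*alpha + omega x (I_cm*omega)."""
    F = m * a_cm
    tau = I_cm @ alpha + np.cross(omega, I_cm @ omega)
    return F, tau

I_cm = np.diag([0.4, 0.5, 0.6])   # kg·m², example inertia tensor about the center of mass
F, tau = newton_euler_cm(m=2.0,
                         I_cm=I_cm,
                         a_cm=np.array([0.0, 0.0, 1.0]),
                         alpha=np.array([0.1, 0.0, 0.0]),
                         omega=np.array([0.0, 2.0, 0.0]))
print(F, tau)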
Any reference frame
With respect to a coordinate frame located at point P that is fixed in the body and not coincident with the center of mass, the equations assume the more complex form:
where c is the location of the center of mass expressed in the body-fixed frame, and [c]× and [ω]× denote skew-symmetric cross product matrices.
The left hand side of the equation—which includes the sum of external forces, and the sum of external moments about P—describes a spatial wrench, see screw theory.
The inertial terms are contained in the spatial inertia matrix
while the fictitious forces are contained in the term:
When the center of mass is not coincident with the coordinate frame (that is, when c is nonzero), the translational and angular accelerations (a and α) are coupled, so that each is associated with force and torque components.
Applications
The Newton–Euler equations are used as the basis for more complicated "multi-body" formulations (screw |
https://en.wikipedia.org/wiki/Business%20Process%20Model%20and%20Notation | Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2.0 of BPMN was released in January 2011, at which point the name was amended to Business Process Model and Notation to reflect the introduction of execution semantics, which were introduced alongside the existing notational and diagramming elements. Though it is an OMG specification, BPMN is also ratified as ISO 19510. The latest version is BPMN 2.0.2, published in January 2014.
Overview
Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML). The objective of BPMN is to support business process management, for both technical users and business users, by providing a notation that is intuitive to business users, yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly Business Process Execution Language (BPEL).
BPMN has been designed to provide a standard notation readily understandable by all business stakeholders, typically including business analysts, technical developers and business managers. BPMN can therefore be used to support the generally desirable aim of all stakeholders on a project adopting a common language to describe processes, helping to avoid communication gaps that can arise between business process design and implementation.
BPMN is one of a number of business process modeling language standards used by modeling tools and processes. While the cu |
https://en.wikipedia.org/wiki/Maximum%20entropy%20thermodynamics | In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review.
Maximum Shannon entropy
Central to the MaxEnt thesis is the principle of maximum entropy. It demands as given some partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy,
SI = − Σi pi ln pi
This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function).
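A small numerical sketch of this procedure for a discrete system: given only an expectation value of the energy as testable information, the maximum-entropy distribution is the Gibbs (Boltzmann) distribution, and its Lagrange multiplier β can be found by bisection. The energy levels and the target mean below are illustrative assumptions.
import numpy as np

def max_entropy_distribution(energies, target_mean, lo=-50.0, hi=50.0, iterations=200):
    """Maximum-entropy distribution with a fixed mean energy: p_i proportional to exp(-beta*E_i)."""
    E = np.asarray(energies, dtype=float)

    def mean_energy(beta):
        weights = np.exp(-beta * (E - E.min()))   # shift the energies for numerical stability
        p = weights / weights.sum()
        return float(p @ E)

    for _ in range(iterations):                   # bisection: mean energy decreases as beta grows
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = np.exp(-beta * (E - E.min()))
    return beta, weights / weights.sum()

beta, p = max_entropy_distribution([0.0, 1.0, 2.0, 3.0], target_mean=0.8)
print(round(beta, 3), np.round(p, 3))             # Gibbs distribution reproducing the constraint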
A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:
STh(P, V, T, ...) = kB · SI(P, V, T, ...)
kB, the Boltzmann constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constan |
https://en.wikipedia.org/wiki/Fission%20track%20dating | Fission track dating is a radiometric dating technique based on analyses of the damage trails, or tracks, left by fission fragments in certain uranium-bearing minerals and glasses. Fission-track dating is a relatively simple method of radiometric dating that has made a significant impact on understanding the thermal history of continental crust, the timing of volcanic events, and the source and age of different archeological artifacts. The method involves using the number of fission events produced from the spontaneous decay of uranium-238 in common accessory minerals to date the time of rock cooling below closure temperature. Fission tracks are sensitive to heat, and therefore the technique is useful at unraveling the thermal evolution of rocks and minerals. Most current research using fission tracks is aimed at: a) understanding the evolution of mountain belts; b) determining the source or provenance of sediments; c) studying the thermal evolution of basins; d) determining the age of poorly dated strata; and e) dating and provenance determination of archeological artifacts.
In the 1930s it was discovered that uranium (specifically U-238) would undergo fission when struck by neutrons. This caused damage tracks in solids which could be revealed by chemical etching.
Method
Unlike other isotopic dating methods, the "daughter" in fission track dating is an effect in the crystal rather than a daughter isotope. Uranium-238 undergoes spontaneous fission decay at a known rate, and it is the only isotope with a decay rate that is relevant to the significant production of natural fission tracks; other isotopes have fission decay rates too slow to be of consequence. The fragments emitted by this fission process leave trails of damage (fossil tracks or ion tracks) in the crystal structure of the mineral that contains the uranium. The process of track production is essentially the same by which swift heavy ions produce ion tracks.
Chemical etching of polished internal |
https://en.wikipedia.org/wiki/List%20of%20recombinant%20proteins | The following is a list of notable proteins that are produced from recombinant DNA, using biomolecular engineering. In many cases, recombinant human proteins have replaced the original animal-derived version used in medicine. The prefix "rh" for "recombinant human" appears less and less in the literature. A much larger number of recombinant proteins is used in the research laboratory. These include both commercially available proteins (for example most of the enzymes used in the molecular biology laboratory), and those that are generated in the course of specific research projects.
Human recombinants that largely replaced animal or harvested from human types
Medicinal applications
Human growth hormone (rHGH): Humatrope from Lilly and Serostim from Serono replaced cadaver harvested human growth hormone
human insulin (BHI): Humulin from Lilly and Novolin from Novo Nordisk among others largely replaced bovine and porcine insulin for human therapy. Some prefer to continue using the animal-sourced preparations, as there is some evidence that synthetic insulin varieties are more likely to induce hypoglycemia unawareness. Remaining manufacturers of highly purified animal-sourced insulin include the U.K.'s Wockhardt Ltd. (headquartered in India), Argentina's Laboratorios Beta S.A., and China's Wanbang Biopharma Co.
Follicle-stimulating hormone (FSH) as a recombinant gonadotropin preparation replaced Serono's Pergonal which was previously isolated from post-menopausal female urine
Factor VIII: Kogenate from Bayer replaced blood harvested factor VIII
Research applications
Ribosomal proteins: For the studies of individual ribosomal proteins, the use of proteins that are produced and purified from recombinant sources has largely replaced those that are obtained through isolation. However, isolation is still required for the studies of the whole ribosome.
Lysosomal proteins: Lysosomal proteins are difficult to produce recombinantly due to the number and type of post-trans |
https://en.wikipedia.org/wiki/Spin-1/2 | In quantum mechanics, spin is an intrinsic property of all elementary particles. All known fermions, the particles that constitute ordinary matter, have a spin of 1/2. The spin number describes how many symmetrical facets a particle has in one full rotation; a spin of 1/2 means that the particle must be rotated by two full turns (through 720°) before it has the same configuration as when it started.
Particles having net spin 1/2 include the proton, neutron, electron, neutrino, and quarks. The dynamics of spin-1/2 objects cannot be accurately described using classical physics; they are among the simplest systems which require quantum mechanics to describe them. As such, the study of the behavior of spin-1/2 systems forms a central part of quantum mechanics.
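The 720° property can be seen directly from the spin-1/2 rotation operator, which picks up a factor of −1 after one full turn and returns to the identity only after two; a short NumPy check:
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli z matrix

def rotate(theta):
    """Spin-1/2 rotation about z by angle theta: exp(-i*theta*sigma_z/2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

print(np.round(rotate(2 * np.pi), 3))   # -> minus the identity: one full turn flips the sign of the state
print(np.round(rotate(4 * np.pi), 3))   # -> the identity: only after two full turns is the state restored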
Stern–Gerlach experiment
The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong heterogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the ground state therefore could not be an integer, because even if the intrinsic angular momentum of the atoms were the smallest (non-zero) integer possible, 1, the beam would be split into 3 parts, corresponding to atoms with Lz = −1, +1, and 0, with 0 simply being the value known to come between −1 and +1 while also being a whole-integer itself, and thus a valid quantized spin number in this case. The existence of this hypothetical "extra step" between the two polarized quantum states would necessitate a third quantum state; a third beam, which is not observed in the experiment. The conclusion was that silver atoms had net intrinsic angular momentum of .
General properties
Spin-1/2 objects are all fermions (a fact explained by the spin–statistics theorem) and satisfy the Pauli exclusion principle. Spin-1/2 particles can have a permanent magnetic moment along the d |
https://en.wikipedia.org/wiki/Sergei%20Fomin | Sergei Vasilyevich Fomin (; 9 December 1917 – 17 August 1975) was a Soviet mathematician who was co-author with Andrey Kolmogorov of Introductory real analysis, and co-author with Israel Gelfand of Calculus of Variations (1963), both books that are widely read in Russian and in English.
Fomin entered Moscow State University at the age of 16. His first paper was published at 19 on infinite abelian groups. After his graduation he worked with Kolmogorov. He was drafted during World War II, after which he returned to Moscow.
When the war ended Fomin returned to Moscow State University and joined Andrey Tikhonov's department. In 1951 he was awarded his habilitation for a dissertation on dynamical systems with invariant measure. Two years later he was appointed a professor. Later in life, he became involved with mathematical aspects of biology.
The American mathematician Paul Halmos wrote the following about Fomin:
Some of the mathematical interests of Sergei Vasilovich were always close to some of mine (measure and ergodic theory); he supervised the translation of a couple of my books into Russian. We had corresponded before we met, and it was a pleasure to shake hands with a man instead of reading a letter. Three or four years later he came to visit me in Hawaii, and it was a pleasure to see him enjoy, in contrast to Moscow, the warm sunshine.
Fomin died in Vladivostok. |
https://en.wikipedia.org/wiki/Leray%27s%20theorem | In algebraic topology and algebraic geometry, Leray's theorem (so named after Jean Leray) relates abstract sheaf cohomology with Čech cohomology.
Let F be a sheaf on a topological space X and U an open cover of X. If F is acyclic on every finite intersection of elements of U, then
Ȟ^q(U, F) = H^q(X, F)
where Ȟ^q(U, F) is the q-th Čech cohomology group of F with respect to the open cover U. |
https://en.wikipedia.org/wiki/De%20Rham%E2%80%93Weil%20theorem | In algebraic topology, the De Rham–Weil theorem allows computation of sheaf cohomology using an acyclic resolution of the sheaf in question.
Let F be a sheaf on a topological space X and F → A^• a resolution of F by acyclic sheaves. Then
H^q(X, F) ≅ H^q(A^•(X))
where H^q(X, F) denotes the q-th sheaf cohomology group of X with coefficients in F.
The De Rham–Weil theorem follows from the more general fact that derived functors may be computed using acyclic resolutions instead of simply injective resolutions. |
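A standard instance, offered here as an illustration rather than something spelled out in the excerpt above, is the de Rham resolution of the constant sheaf $\underline{\mathbb{R}}$ on a smooth manifold $M$ by the sheaves of smooth differential forms, which are acyclic because they admit partitions of unity:
$$0 \to \underline{\mathbb{R}} \to \Omega^0_M \to \Omega^1_M \to \Omega^2_M \to \cdots$$
Applying the theorem to this resolution recovers de Rham's theorem, $H^k_{\mathrm{dR}}(M) \cong H^k(M, \mathbb{R})$.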
https://en.wikipedia.org/wiki/Laboratory%20of%20Solid%20State%20Microstructure%2C%20Nanjing%20University | Laboratory of Solid State Microstructure (LSSMS) is located at Nanjing University, China. It is a key laboratory in physics, associated with faculties including the schools of physics and electronics and the department of materials of the engineering school at Nanjing University.
The Laboratory has accomplished many achievements and enjoys international fame. Nature magazine listed it as one of the two research groups in East Asia outside Japan approaching or meeting world-class standards. The Institute for Scientific Information ranked it, in a listing published in Science magazine, as the No. 1 laboratory in China.
History
In 1984, Nanjing University Institute of Solid State Physics was changed to State Key Laboratory of Solid State Microstructures of Nanjing University, which was mainly associated with the Department of Physics of Nanjing University at the time.
Nanjing National Laboratory of Microstructures, based mainly upon LSSMS and LCC (State Key Laboratory of Coordination Chemistry) at Nanjing University, formally began to be established in 2006 with an estimated investment of RMB 300 million. Before that, in 2004, NU had received an endowment of RMB 50 million from the Cyrus Tang Foundation for its establishment, and the National Microstructures Laboratory Building (the Cyrus Tang Building) was completed in 2007.
Research areas
Physics of microstructured dielectric materials
Nano-structured materials and physics
Aggregations and pattern formation under non-equilibrium conditions
Dynamics of microstructural assembly and modulation
Strong correlation effect in solids
Phase transitions
Other related microstructural physics in solids
Notable scientists
Feng Duan
Ming Naiben
Notes
Nanjing University
Physics laboratories
Research institutes in China
Laboratories in China |
https://en.wikipedia.org/wiki/Motor%20protein | Motor proteins are a class of molecular motors that can move along the cytoplasm of cells. They convert chemical energy into mechanical work by the hydrolysis of ATP. Flagellar rotation, however, is powered by a proton pump.
Cellular functions
Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm. Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis. Axonemal dynein, found in cilia and flagella, is crucial to cell motility, for example in spermatozoa, and fluid transport, for example in trachea. The muscle protein myosin "motors" the contraction of muscle fibers in animals.
Diseases associated with motor protein defects
The importance of motor proteins in cells becomes evident when they fail to fulfill their function. For example, kinesin deficiencies have been identified as the cause for Charcot-Marie-Tooth disease and some kidney diseases. Dynein deficiencies can lead to chronic infections of the respiratory tract as cilia fail to function without dynein. Numerous myosin deficiencies are related to disease states and genetic syndromes. Because myosin II is essential for muscle contraction, defects in muscular myosin predictably cause myopathies. Myosin is necessary in the process of hearing because of its role in the growth of stereocilia so defects in myosin protein structure can lead to Usher syndrome and non-syndromic deafness.
Cytoskeletal motor proteins
Motor proteins utilizing the cytoskeleton for movement fall into two categories based on their substrate: microfilaments or microtubules. Actin motors such as myosin move along microfilaments through interaction with actin, and microtubule motors such as dynein and kinesin move along microtubules through interaction with tubulin.
There are two basic types of microtubule motors: plus-en |
https://en.wikipedia.org/wiki/Gelfand%20pair | In mathematics, a Gelfand pair is a pair (G,K) consisting of a group G and a subgroup K (called an Euler subgroup of G) that satisfies a certain property on restricted representations. The theory of Gelfand pairs is closely related to the topic of spherical functions in the classical theory of special functions, and to the theory of Riemannian symmetric spaces in differential geometry. Broadly speaking, the theory exists to abstract from these theories their content in terms of harmonic analysis and representation theory.
When G is a finite group the simplest definition is, roughly speaking, that the (K,K)-double cosets in G commute. More precisely, the Hecke algebra, the algebra of functions on G that are invariant under translation on either side by K, should be commutative for the convolution on G.
In general, the definition of Gelfand pair is roughly that the restriction to K of any irreducible representation of G contains the trivial representation of K with multiplicity no more than 1. In each case one should specify the class of representations considered and the meaning of "contains".
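A standard example, given here for illustration (it is not named in this excerpt): the pair $(S_n, S_{n-1})$ of symmetric groups is a Gelfand pair, because the permutation representation of $S_n$ on the cosets $S_n/S_{n-1}$, i.e. on the set $\{1,\dots,n\}$, decomposes without multiplicity:
$$\mathbb{C}[S_n/S_{n-1}] \;\cong\; \mathbf{1} \,\oplus\, V_{\mathrm{std}},$$
the trivial representation plus the $(n-1)$-dimensional standard representation.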
Definitions
In each area, the class of representations and the definition of containment for representations is slightly different. Explicit definitions in several such cases are given here.
Finite group case
When G is a finite group the following are equivalent
(G,K) is a Gelfand pair.
The algebra of (K,K)-double invariant functions on G with multiplication defined by convolution is commutative.
For any irreducible representation π of G, the space π^K of K-invariant vectors in π is at most one-dimensional.
For any irreducible representation π of G, the dimension of Hom_K(π, C) is less than or equal to 1, where C denotes the trivial representation.
The permutation representation of G on the cosets of K is multiplicity-free, that is, it decomposes into a direct sum of distinct absolutely irreducible representations in characteristic zero.
The centralizer algebra (Schur |
https://en.wikipedia.org/wiki/Hi%20no%20Tori%20Hououhen%20%28MSX%29 | Hi no Tori Hououhen is a 1987 video game for the MSX2 developed by Konami, produced alongside a similarly named game for the Famicom. Both games are based on the series and story arc with the same names.
It is in essence a Knightmare-like vertical scrolling shooter with the player viewing his character on the back and enemies and obstacles entering from the top of the screen. In addition, the game's six stages are laid out in a labyrinthine way, adding puzzle elements to the mix. In order to find, reach and defeat the game's final boss, the player would have to travel back and forth between the various stages to obtain a large assortment of keys. These keys then allow access to parts of other stages, even earlier ones. This traveling between the stages is highly unusual for a shoot 'em up. |
https://en.wikipedia.org/wiki/Formal%20moduli | In mathematics, formal moduli are an aspect of the theory of moduli spaces (of algebraic varieties or vector bundles, for example), closely linked to deformation theory and formal geometry. Roughly speaking, deformation theory can provide the Taylor polynomial level of information about deformations, while formal moduli theory can assemble consistent Taylor polynomials to make a formal power series theory. The step to moduli spaces, properly speaking, is an algebraization question, and has been largely put on a firm basis by Artin's approximation theorem.
A formal universal deformation is by definition a formal scheme over a complete local ring, with special fiber the scheme over a field being studied, and with a universal property amongst such set-ups. The local ring in question is then the carrier of the formal moduli. |
https://en.wikipedia.org/wiki/Goal%20programming | Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum dependent on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis:
Determine the required resources to achieve a desired set of objectives.
Determine the degree of attainment of the goals with the available resources.
Provide the best satisfying solution under varying amounts of resources and priorities of the goals (a minimal formulation sketch follows this list).
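To make the deviation-variable formulation concrete, here is a minimal sketch in Python using SciPy's linear-programming routine. The two goals, their coefficients, and the penalty weights are invented for illustration and are not taken from the article; any weighted goal program of this form can be set up the same way.

```python
# A minimal weighted goal-programming sketch using SciPy's LP solver.
import numpy as np
from scipy.optimize import linprog

# Decision variables x1, x2 plus deviation variables (n = under-, p = over-achievement):
# Goal 1:  3*x1 + 2*x2 + n1 - p1 = 12   (profit target, hypothetical)
# Goal 2:    x1 +   x2 + n2 - p2 = 5    (labour target, hypothetical)
# Achievement function: minimise  2*n1 + 1*p2  (only the unwanted deviations are penalised)
#
# Variable order: [x1, x2, n1, p1, n2, p2]
c = np.array([0, 0, 2, 0, 0, 1])
A_eq = np.array([[3, 2, 1, -1, 0, 0],
                 [1, 1, 0, 0, 1, -1]])
b_eq = np.array([12, 5])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x, res.fun)   # deviations chosen so that the weighted sum is minimal
```

Each goal contributes an under-achievement and an over-achievement variable, and only the deviations the decision maker regards as unwanted appear in the achievement function.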
History
Goal programming was first used by Charnes, Cooper and Ferguson in 1955, although the actual name first appeared in a 1961 text by Charnes and Cooper. Seminal works by Lee, Ignizio, Ignizio and Cavalier, and Romero followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, and Jones and Tamiz give an annotated bibliography of the period 1990–2000. A recent textbook by Jones and Tamiz gives a comprehensive overview of the state of the art in goal programming.
The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon.
Variants
The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a high |
https://en.wikipedia.org/wiki/Asian%20Pacific%20Mathematics%20Olympiad | The Asian Pacific Mathematics Olympiad (APMO), held annually since 1989, is a regional mathematics competition which involves countries from the Asian Pacific region. The United States also takes part in the APMO. Every year, APMO is held in the afternoon of the second Monday of March for participating countries in North and South America, and in the morning of the second Tuesday of March for participating countries in the Western Pacific and in Asia.
APMO's Aims
the discovering, encouraging and challenging of mathematically gifted school students in all Pacific-Rim countries
the fostering of friendly international relations and cooperation between students and teachers in the Pacific-Rim Region
the creating of an opportunity for the exchange of information on school syllabi and practice throughout the Pacific Region
the encouragement and support of mathematical involvement with Olympiad type activities, not only in the APMO participating countries, but also in other Pacific-Rim countries.
Scoring and Format
The APMO contest consists of one four-hour paper with five questions of varying difficulty, each having a maximum score of 7 points. Contestants must not have been formally enrolled at a university (or equivalent post-secondary institution), and they must be younger than 20 years of age on 1 July of the year of the contest.
APMO Member Nations/Regions
Observer Nations
Honduras and South Africa
Results
https://cms.math.ca/Competitions/APMO/
https://www.apmo-official.org/
https://www.apmo-official.org/2017/ResultsByName.html
http://imomath.com/index.php?options=Ap&mod=23&ttn=Asian-Pacific
See also
International Mathematical Olympiad
External links
APMO Official Website
Mathematics competitions |
https://en.wikipedia.org/wiki/Class%20number%20formula | In number theory, the class number formula relates many important invariants of a number field to a special value of its Dedekind zeta function.
General statement of the class number formula
We start with the following data:
$K$ is a number field.
$[K:\mathbb{Q}] = n = r_1 + 2 r_2$, where $r_1$ denotes the number of real embeddings of $K$, and $2 r_2$ is the number of complex embeddings of $K$.
$\zeta_K(s)$ is the Dedekind zeta function of $K$.
$h_K$ is the class number, the number of elements in the ideal class group of $K$.
$\operatorname{Reg}_K$ is the regulator of $K$.
$w_K$ is the number of roots of unity contained in $K$.
$D_K$ is the discriminant of the extension $K/\mathbb{Q}$.
Then:
Theorem (Class Number Formula). $\zeta_K(s)$ converges absolutely for $\operatorname{Re}(s) > 1$ and extends to a meromorphic function defined for all complex $s$ with only one simple pole at $s = 1$, with residue
$$\lim_{s \to 1} (s-1)\,\zeta_K(s) \;=\; \frac{2^{r_1} \cdot (2\pi)^{r_2} \cdot \operatorname{Reg}_K \cdot h_K}{w_K \cdot \sqrt{|D_K|}}.$$
This is the most general "class number formula". In particular cases, for example when is a cyclotomic extension of , there are particular and more refined class number formulas.
Proof
The idea of the proof of the class number formula is most easily seen when K = Q(i). In this case, the ring of integers in K is the Gaussian integers.
An elementary manipulation shows that the residue of the Dedekind zeta function at s = 1 is the average of the coefficients of the Dirichlet series representation of the Dedekind zeta function. The n-th coefficient of the Dirichlet series is essentially the number of representations of n as a sum of two squares of nonnegative integers. So one can compute the residue of the Dedekind zeta function at s = 1 by computing the average number of representations. As in the article on the Gauss circle problem, one can compute this by approximating the number of lattice points inside of a quarter circle centered at the origin, concluding that the residue is one quarter of pi.
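As a quick numerical sanity check of the lattice-point argument just described (an illustrative sketch, not part of the article), one can count representations of n as a sum of two squares of nonnegative integers and compare the average with π/4:

```python
# Average number of representations of n as a sum of two squares of
# nonnegative integers, compared with pi/4 (the residue at s = 1 for Q(i)).
import math

N = 200_000
counts = [0] * (N + 1)
a = 0
while a * a <= N:
    b = 0
    while a * a + b * b <= N:
        counts[a * a + b * b] += 1   # one more representation of a^2 + b^2
        b += 1
    a += 1

average = sum(counts[1:]) / N        # exclude n = 0
print(average, math.pi / 4)          # both are approximately 0.785
```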
The proof when K is an arbitrary imaginary quadratic number field is very similar.
In the general case, by Dirichlet's unit theorem, the group of units in the ring of integers of K is infinite. One can nevertheless reduce the compu |
https://en.wikipedia.org/wiki/Current%20%28mathematics%29 | In mathematics, more particularly in functional analysis, differential topology, and geometric measure theory, a k-current in the sense of Georges de Rham is a functional on the space of compactly supported differential k-forms, on a smooth manifold M. Currents formally behave like Schwartz distributions on a space of differential forms, but in a geometric setting, they can represent integration over a submanifold, generalizing the Dirac delta function, or more generally even directional derivatives of delta functions (multipoles) spread out along subsets of M.
Definition
Let $\Omega_c^m(M)$ denote the space of smooth m-forms with compact support on a smooth manifold $M$. A current is a linear functional on $\Omega_c^m(M)$ which is continuous in the sense of distributions. Thus a linear functional
$$T \colon \Omega_c^m(M) \to \mathbb{R}$$
is an m-dimensional current if it is continuous in the following sense: If a sequence $\omega_k$ of smooth forms, all supported in the same compact set, is such that all derivatives of all their coefficients tend uniformly to 0 when $k$ tends to infinity, then $T(\omega_k)$ tends to 0.
The space $\mathcal{D}_m(M)$ of m-dimensional currents on $M$ is a real vector space with operations defined by
$$(T+S)(\omega) := T(\omega) + S(\omega), \qquad (\lambda T)(\omega) := \lambda\, T(\omega).$$
Much of the theory of distributions carries over to currents with minimal adjustments. For example, one may define the support of a current $T$ as the complement of the biggest open set $U \subset M$ such that
$$T(\omega) = 0 \quad \text{whenever} \quad \omega \in \Omega_c^m(U).$$
The linear subspace of $\mathcal{D}_m(M)$ consisting of currents with support (in the sense above) that is a compact subset of $M$ is denoted $\mathcal{E}_m(M)$.
Homological theory
Integration over a compact rectifiable oriented submanifold M (with boundary) of dimension m defines an m-current, denoted by $[M]$:
$$[M](\omega) = \int_M \omega.$$
If the boundary ∂M of M is rectifiable, then it too defines a current by integration, and by virtue of Stokes' theorem one has:
$$[\partial M](\omega) = \int_{\partial M} \omega = \int_M d\omega = [M](d\omega).$$
This relates the exterior derivative d with the boundary operator ∂ on the homology of M.
In view of this formula we can define a boundary operator on arbitrary currents
$$\partial \colon \mathcal{D}_{m+1}(M) \to \mathcal{D}_m(M)$$
via duality with the exterior derivative by
$$(\partial T)(\omega) := T(d\omega)$$
for all compactly supported m-forms $\omega$.
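As a concrete instance of these formulas (an illustration not contained in the excerpt): for a point $p$, evaluation at $p$ defines a 0-current $\delta_p(f) = f(p)$, and for a compact oriented curve $\gamma$ running from $a$ to $b$ the boundary formula gives
$$(\partial[\gamma])(f) = [\gamma](df) = \int_\gamma df = f(b) - f(a) = (\delta_b - \delta_a)(f),$$
so $\partial[\gamma] = \delta_b - \delta_a$, a current-theoretic restatement of the fundamental theorem of calculus.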
Certain |
https://en.wikipedia.org/wiki/Ingress%20filtering | In computer networking, ingress filtering is a technique used to ensure that incoming packets are actually from the networks from which they claim to originate. This can be used as a countermeasure against various spoofing attacks where the attacker's packets contain fake IP addresses. Spoofing is often used in denial-of-service attacks, and mitigating these is a primary application of ingress filtering.
Problem
Networks receive packets from other networks. Normally a packet will contain the IP address of the computer that originally sent it. This allows devices in the receiving network to know where it came from, allowing a reply to be routed back (amongst other things). An exception is when the packet arrives through a proxy or carries a spoofed IP address, which does not pinpoint a specific user within that pool of users.
A sender IP address can be faked (spoofed), characterizing a spoofing attack. This disguises the origin of packets sent, for example in a denial-of-service attack. The same holds true for proxies, although in a different manner than IP spoofing.
Potential solutions
One potential solution involves implementing the use of intermediate Internet gateways (i.e., those servers connecting disparate networks along the path followed by any given packet) filtering or denying any packet deemed to be illegitimate. The gateway processing the packet might simply ignore the packet completely, or where possible, it might send a packet back to the sender relaying a message that the illegitimate packet has been denied. Host intrusion prevention systems (HIPS) are one example of technical engineering applications that help to identify, prevent and/or deter unwanted, unsuspected or suspicious events and intrusions.
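As a rough illustration of the per-interface check described in the following paragraph, the sketch below accepts a packet only if its source address lies in the prefix attached to the receiving interface. The interface names and prefixes are invented for the example.

```python
# Illustrative ingress-filtering check: accept only packets whose source
# address belongs to the prefix assigned to the interface they arrived on.
import ipaddress

interface_prefixes = {
    "eth0": ipaddress.ip_network("203.0.113.0/24"),   # hypothetical customer-facing link
    "eth1": ipaddress.ip_network("198.51.100.0/24"),
}

def accept(interface, source_ip):
    """Return True if the source address is inside the prefix of the
    receiving interface, False (drop) otherwise."""
    prefix = interface_prefixes[interface]
    return ipaddress.ip_address(source_ip) in prefix

print(accept("eth0", "203.0.113.7"))    # True  - legitimate source
print(accept("eth0", "192.0.2.45"))     # False - spoofed or misrouted source
```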
Any router that implements ingress filtering checks the source IP field of IP packets it receives and drops packets if the packets don't have an IP address in the IP address block to which the interface is connected. This may not be possible if the end host i |
https://en.wikipedia.org/wiki/Dichotomic%20search | In computer science, a dichotomic search is a search algorithm that operates by selecting between two distinct alternatives (dichotomies) at each step. It is a specific type of divide and conquer algorithm. A well-known example is binary search.
Abstractly, a dichotomic search can be viewed as following edges of an implicit binary tree structure until it reaches a leaf (a goal or final state). This creates a theoretical tradeoff between the number of possible states and the running time: given k comparisons, the algorithm can only reach O(2^k) possible states and/or possible goals.
Some dichotomic searches only have results at the leaves of the tree, such as the Huffman tree used in Huffman coding, or the implicit classification tree used in Twenty Questions. Other dichotomic searches also have results in at least some internal nodes of the tree, such as a dichotomic search table for Morse code. There is thus some looseness in the definition. Though there may indeed be only two paths from any node, there are three possibilities at each step: choose one onwards path or the other, or stop at this node.
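Binary search is the canonical dichotomic search; the short sketch below (illustrative, not taken from the article) makes the two-way choice at each step explicit.

```python
# A minimal binary search: each comparison selects one of two halves,
# so k comparisons distinguish up to 2**k outcomes.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # the dichotomy: split the range in two
        if sorted_items[mid] == target:
            return mid                # result found at an internal node
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                         # reached a leaf without a match

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
```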
Dichotomic searches are often used in repair manuals, sometimes graphically illustrated with a flowchart similar to a fault tree.
See also
Binary search algorithm |
https://en.wikipedia.org/wiki/Icemaker | An icemaker, ice generator, or ice machine may refer to either a consumer device for making ice, found inside a home freezer; a stand-alone appliance for making ice, or an industrial machine for making ice on a large scale. The term "ice machine" usually refers to the stand-alone appliance.
The ice generator is the part of the ice machine that actually produces the ice. This would include the evaporator and any associated drives/controls/subframe that are directly involved with making and ejecting the ice into storage. When most people refer to an ice generator, they mean this ice-making subsystem alone, minus refrigeration.
An ice machine, however, particularly if described as 'packaged', would typically be a complete machine including refrigeration, controls, and dispenser, requiring only connection to power and water supplies.
The term icemaker is more ambiguous, with some manufacturers describing their packaged ice machine as an icemaker, while others describe their generators in this way.
History
In 1748, the first known artificial refrigeration was demonstrated by William Cullen at the University of Glasgow. Cullen never used his discovery for any practical purpose. This may be the reason why the history of icemakers begins with Oliver Evans, an American inventor who designed the first refrigeration machine in 1805. In 1834, Jacob Perkins built the first practical refrigerating machine using ether in a vapor compression cycle. The American inventor, mechanical engineer and physicist received 21 American and 19 English patents (for innovations in steam engines, the printing industry and gun manufacturing, among others) and is considered today the father of the refrigerator.
In 1844, an American physician, John Gorrie, built a refrigerator based on Oliver Evans' design to make ice to cool the air for his yellow fever patients. His plans date back to 1842, making him one of the founding fathers of the refrigerator. Unfortunately for John Gorrie, his |
https://en.wikipedia.org/wiki/Entropy%20of%20mixing | In thermodynamics, the entropy of mixing is the increase in the total entropy when several initially separate systems of different composition, each in a thermodynamic state of internal equilibrium, are mixed without chemical reaction by the thermodynamic operation of removal of impermeable partition(s) between them, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new unpartitioned closed system.
In general, the mixing may be constrained to occur under various prescribed conditions. In the customarily prescribed conditions, the materials are each initially at a common temperature and pressure, and the new system may change its volume, while being maintained at that same constant temperature, pressure, and chemical component masses. The volume available for each material to explore is increased, from that of its initially separate compartment, to the total common final volume. The final volume need not be the sum of the initially separate volumes, so that work can be done on or by the new closed system during the process of mixing, as well as heat being transferred to or from the surroundings, because of the maintenance of constant pressure and temperature.
The internal energy of the new closed system is equal to the sum of the internal energies of the initially separate systems. The reference values for the internal energies should be specified in a way that is constrained to make this so, maintaining also that the internal energies are respectively proportional to the masses of the systems.
For concision in this article, the term 'ideal material' is used to refer to either an ideal gas (mixture) or an ideal solution.
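For the ideal case discussed in the following paragraph, the standard closed-form expression for two components, quoted here for reference rather than taken from this excerpt, is
$$\Delta S_{\mathrm{mix}} = -nR\,\bigl(x_1 \ln x_1 + x_2 \ln x_2\bigr),$$
where $n$ is the total amount of substance, $R$ the gas constant, and $x_1, x_2$ the mole fractions of the two components; it is positive whenever both mole fractions are nonzero.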
In the special case of mixing ideal materials, the final common volume is in fact the sum of the initial separate compartment volumes. There is no heat transfer and no work is done. The entropy of mixing is entirely accounted for by the diffusive expansion of each material into a final volume not in |
https://en.wikipedia.org/wiki/MOO | A MOO ("MUD, object-oriented") is a text-based online virtual reality system to which multiple users (players) are connected at the same time.
The term MOO is used in two distinct, but related, senses. One is to refer to those programs descended from the original MOO server, and the other is to refer to any MUD that uses object-oriented techniques to organize its database of objects, particularly if it does so in a similar fashion to the original MOO or its derivatives. Most of this article refers to the original MOO and its direct descendants, but see non-descendant MOOs for a list of MOO-like systems.
The original MOO server was authored by Stephen White, based on his experience from creating the programmable TinyMUCK system. There was additional later development and maintenance from the LambdaMOO founder and former Xerox PARC employee Pavel Curtis.
One of the most distinguishing features of a MOO is that its users can perform object-oriented programming within the server, ultimately expanding and changing how it behaves for everyone. Examples of such changes include authoring new rooms and objects, creating new generic objects for others to use, and changing the way the MOO interface operates. The programming language used for extension is the MOO programming language, and many MOOs feature convenient libraries of verbs, known as Utilities, that can be used by programmers in their coding. The MOO programming language is a domain-specific language.
Background
MOOs are network accessible, multi-user, programmable, interactive systems well-suited to the construction of text-based adventure games, conferencing systems, and other collaborative software. Their most common use, however, is as multi-participant, low-bandwidth virtual realities. They have been used in academic environments for distance education, collaboration (such as Diversity University), group decision systems, and teaching object-oriented concepts; but others are primarily social in nature, o |
https://en.wikipedia.org/wiki/Direct%20image%20functor | In mathematics, the direct image functor is a construction in sheaf theory that generalizes the global sections functor to the relative case. It is of fundamental importance in topology and algebraic geometry. Given a sheaf F defined on a topological space X and a continuous map f: X → Y, we can define a new sheaf f∗F on Y, called the direct image sheaf or the pushforward sheaf of F along f, such that the global sections of f∗F is given by the global sections of F. This assignment gives rise to a functor f∗ from the category of sheaves on X to the category of sheaves on Y, which is known as the direct image functor. Similar constructions exist in many other algebraic and geometric contexts, including that of quasi-coherent sheaves and étale sheaves on a scheme.
Definition
Let f: X → Y be a continuous map of topological spaces, and let Sh(–) denote the category of sheaves of abelian groups on a topological space. The direct image functor
sends a sheaf F on X to its direct image presheaf f∗F on Y, defined on open subsets U of Y by
$$(f_* F)(U) = F\bigl(f^{-1}(U)\bigr).$$
This turns out to be a sheaf on Y, and is called the direct image sheaf or pushforward sheaf of F along f.
Since a morphism of sheaves φ: F → G on X gives rise to a morphism of sheaves f∗(φ): f∗(F) → f∗(G) on Y in an obvious way, we indeed have that f∗ is a functor.
Example
If Y is a point, and f: X → Y the unique continuous map, then Sh(Y) is the category Ab of abelian groups, and the direct image functor f∗: Sh(X) → Ab equals the global sections functor.
Variants
If dealing with sheaves of sets instead of sheaves of abelian groups, the same definition applies. Similarly, if f: (X, OX) → (Y, OY) is a morphism of ringed spaces, we obtain a direct image functor f∗: Sh(X,OX) → Sh(Y,OY) from the category of sheaves of OX-modules to the category of sheaves of OY-modules. Moreover, if f is now a morphism of quasi-compact and quasi-separated schemes, then f∗ preserves the property of being quasi-coherent, so we obtain the direct image |
https://en.wikipedia.org/wiki/Credential%20service%20provider | A credential service provider (CSP) is a trusted entity that issues security tokens or electronic credentials to subscribers. A CSP forms part of an authentication system, most typically identified as a separate entity in a Federated authentication system. A CSP may be an independent third party, or may issue credentials for its own use. The term CSP is used frequently in the context of the US government's eGov and e-authentication initiatives. An example of a CSP would be an online site whose primary purpose may be, for example, internet banking, but whose users may subsequently be authenticated to other sites, applications or services without further action on their part.
History
In any authentication system, some entity is required to authenticate the user on behalf of the target application or service. For many years there was poor understanding of the impact of security and the multiplicity of services and applications that would ultimately require authentication. The result of this is that not only are users burdened with many credentials that they must remember or carry around with them, but also applications and services must perform some level of registration and then some level of authentication of those users. As a result, Credential Service Providers were created. A CSP separates those functions from the application or service and typically provides trust to that application or service over a network (such as the Internet).
CSP Process
The CSP establishes a mechanism to uniquely identify each subscriber and the associated tokens and credentials issued to that subscriber. The CSP registers or gives the subscriber a token to be used in an authentication protocol and issues credentials as needed to bind that token to the identity, or to bind the identity to some other useful verified attribute. The subscriber may be given electronic credentials to go with the token at the time of registration, or credentials may be generated later as needed. Subscribers |
https://en.wikipedia.org/wiki/Evidence%20Eliminator | Evidence Eliminator is a computer software program that runs on Microsoft Windows operating systems at least through Windows 7. The program deletes hidden information from the user's hard drive that normal procedures may fail to delete. Such "cleaner" or "eraser" programs typically overwrite previously allocated disk space, in order to make it more difficult to salvage deleted information. In the absence of such overwrite procedures, information that a user thinks has been deleted may actually remain on the hard drive until that physical space is claimed for another use (i.e. to store another file). When it was offered for sale, the program cost from $20 early on to $150 later.
History
Evidence Eliminator was produced by Robin Hood Software, based in Nottingham, England, up to version 6.04.
Controversy
There has been controversy surrounding Evidence Eliminator's marketing tactics. The company has used popup ads to market the program, including claims that the user's system was being compromised. In response, Robin Hood Software produced a "dis-information page" addressing these concerns. Radsoft, a competitor to Robin Hood, criticised its operation.
Legal
On June 1, 2005, Peter Beale, one of the "Phoenix Four", used Evidence Eliminator to remove all trace of certain files from his PC the day after the appointment of DTI inspectors to investigate the collapse of MG Rover.
In a 2011 case, MGA v. Mattel, a federal court found that a former employee used the program to delete information that he was accused of giving to MGA while employed at Mattel. |
https://en.wikipedia.org/wiki/Zemmix | Zemmix was a trade mark and brand name of the South Korean electronics company Daewoo Electronics Co., Ltd. It was an MSX-based video game console brand whose name is no longer in use.
Under the name Zemmix, Daewoo released a series of gaming consoles compatible with the MSX home computer standards. The consoles were in production between 1985 and 1995. The consoles were not sold outside South Korea.
Hardware
Console Models
All consoles were designed to output a standard NTSC signal, have low and high outputs for connecting to a TV, and have a universal adapter for connection to 120/230-volt mains power.
The consoles also had a letter coming after the serial number. These letters indicated the color combination of the console. The key is as follows.
W - white and silver colors
R - red and black colors
B - yellow, blue and black colors
For example, CPC-51W would be a white or silver Zemmix V (see below).
Consoles compatible with the MSX standard
CPC-50 (Zemmix)
CPC-51 (Zemmix V)
Consoles compatible with the MSX2 standard
CPC-61 (Zemmix Super V)
Consoles compatible with the MSX2+ standard
CPG-120 (Zemmix Turbo)
FPGA based MSX2+ compatible console
Zemmix Neo (by Retroteam Neo)
Zemmix Neo Lite (by Retroteam Neo)
Raspberry PI based MSX2+ (Turbo R) compatible console
CPC-Mini (Licensed)- Zemmix Mini (by Retroteam Neo)
Peripherals
Other Zemmix products:
By Daewoo
CPJ-905: MSX joystick for Zemmix CPC-51 console
CPJ-600: MSX joypad for Zemmix CPC-61 console
CPK-30: keyboard for Zemmix CPC-61
CPJ-102K: joystick for CPC-330
CPK-31K: input device for CPC-330
By Zemina
A Keyboard & Cartridge port divider
The Zemina Music Box
An MSX2 Upgrade Kit
A Zemmix PC card
MSX RAM expansion cards
A 'Family Card' that allows the user to play Famicom games on the Zemmix
Software
Korean software companies that produced software for the Zemmix gaming console:
Aproman
Boram
Clover
Daou Infosys
FA Soft
Mirinae
Prosoft
Screen
Topia
Uttum
Zemina
Most Zemmix software wor |
https://en.wikipedia.org/wiki/Discriminant%20of%20an%20algebraic%20number%20field | In mathematics, the discriminant of an algebraic number field is a numerical invariant that, loosely speaking, measures the size of the (ring of integers of the) algebraic number field. More specifically, it is proportional to the squared volume of the fundamental domain of the ring of integers, and it regulates which primes are ramified.
The discriminant is one of the most basic invariants of a number field, and occurs in several important analytic formulas such as the functional equation of the Dedekind zeta function of K, and the analytic class number formula for K. A theorem of Hermite states that there are only finitely many number fields of bounded discriminant, however determining this quantity is still an open problem, and the subject of current research.
The discriminant of K can be referred to as the absolute discriminant of K to distinguish it from the relative discriminant of an extension K/L of number fields. The latter is an ideal in the ring of integers of L, and like the absolute discriminant it indicates which primes are ramified in K/L. It is a generalization of the absolute discriminant allowing for L to be bigger than Q; in fact, when L = Q, the relative discriminant of K/Q is the principal ideal of Z generated by the absolute discriminant of K.
Definition
Let K be an algebraic number field, and let O_K be its ring of integers. Let b_1, ..., b_n be an integral basis of O_K (i.e. a basis as a Z-module), and let {σ_1, ..., σ_n} be the set of embeddings of K into the complex numbers (i.e. injective ring homomorphisms K → C). The discriminant of K is the square of the determinant of the n by n matrix B whose (i,j)-entry is σ_i(b_j). Symbolically,
$$d_K = \left(\det\bigl(\sigma_i(b_j)\bigr)_{1 \le i,j \le n}\right)^{2}.$$
Equivalently, the trace from K to Q can be used. Specifically, define the trace form to be the matrix whose (i,j)-entry is
Tr_{K/Q}(b_i b_j). This matrix equals B^T B, so the discriminant of K is the determinant of this matrix.
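As a worked illustration (not part of this excerpt): for $K = \mathbb{Q}(\sqrt{d})$ with $d$ squarefree and $d \not\equiv 1 \pmod 4$, an integral basis is $\{1, \sqrt{d}\}$ and the two embeddings send $\sqrt{d}$ to $\pm\sqrt{d}$, so
$$d_K = \left(\det\begin{pmatrix} 1 & \sqrt{d} \\ 1 & -\sqrt{d} \end{pmatrix}\right)^{2} = \bigl(-2\sqrt{d}\bigr)^{2} = 4d,$$
while for $d \equiv 1 \pmod 4$ the basis $\{1, (1+\sqrt{d})/2\}$ gives $d_K = d$.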
The discriminant of an order in K with integral basis b1, ..., bn is defined i |
https://en.wikipedia.org/wiki/Hierarchical%20task%20network | In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
Planning problems are specified in the hierarchical task network approach by
providing a set of tasks, which can be:
primitive (initial state) tasks, which roughly correspond to the actions of STRIPS;
compound tasks (intermediate state), which can be seen as composed of a set of simpler tasks;
goal tasks (goal state), which roughly correspond to the goals of STRIPS, but are more general.
A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints.
A primitive task is an action that can be executed directly, provided that the state in which it is executed satisfies its precondition. A compound task is a complex task composed of a partially ordered set of further tasks, which can either be primitive or abstract. A goal task is a task of satisfying a condition. The difference between primitive and other tasks is that primitive actions can be directly executed. Compound and goal tasks both require a sequence of primitive actions to be performed; however, goal tasks are specified in terms of conditions that have to be made true, while compound tasks can only be specified in terms of other tasks via the task network outlined below.
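The following minimal sketch illustrates ordered task decomposition in the style described above. The state representation, the task names ("travel", "walk", "call_taxi", "ride_taxi"), and the method table are invented for this example and are not part of the article; real HTN planners handle partial orders, variable bindings, and backtracking far more generally.

```python
# A toy HTN planner: primitive tasks are (precondition, effect) pairs,
# compound tasks decompose into ordered lists of sub-tasks.
primitive = {
    "walk":      (lambda s: s["distance"] <= 2, lambda s: {**s, "location": s["goal"]}),
    "call_taxi": (lambda s: s["cash"] >= 10,    lambda s: {**s, "in_taxi": True}),
    "ride_taxi": (lambda s: s.get("in_taxi"),   lambda s: {**s, "location": s["goal"], "cash": s["cash"] - 10}),
}
methods = {"travel": [["walk"], ["call_taxi", "ride_taxi"]]}   # two alternative decompositions

def plan(state, tasks):
    """Return a sequence of primitive tasks achieving `tasks`, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in primitive:
        precondition, effect = primitive[head]
        if precondition(state):
            tail = plan(effect(state), rest)
            if tail is not None:
                return [head] + tail
        return None
    for decomposition in methods.get(head, []):
        result = plan(state, decomposition + rest)
        if result is not None:
            return result
    return None

print(plan({"distance": 8, "cash": 20, "goal": "park", "location": "home"}, ["travel"]))
# -> ['call_taxi', 'ride_taxi']  (walking is blocked because the distance is too long)
```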
Constraints among tasks are expressed in the form of networks, called (hierarchical) task networks. A task network is a set of tasks and constraints among them. Such a network can be used as the precondition for another compound or goal task to be feasible. This way, one can express that a given task is feasible only if a set of other actions (those mentioned in the network) are done, and they are done in such a way that the constraints among them (specified by the net |
https://en.wikipedia.org/wiki/Glossy%20display | A glossy display is an electronic display with a glossy surface. In certain light environments, glossy displays provide better color intensity and contrast ratios than matte displays. The primary disadvantage of these displays is their tendency to reflect any external light, often resulting in an undesirable glare.
Technology
As an alternative to a matte display, some LCDs use an antireflective coating, or a nanotextured glass surface, to reduce the amount of external light reflecting from the surface without affecting the light emanating from the screen.
Disadvantages
Because of the reflective nature of the display, in most lighting conditions that include direct light sources facing the screen, glossy displays create reflections, which can be distracting to the user of the computer. This can be especially distracting to users working in an environment where the position of lights and windows are fixed, such as in an office, as these create unavoidable reflections on glossy displays.
Adverse health effects
Ergonomic studies show that prolonged work in the office environment with the presence of discomforting glares and disturbances from light reflections on the screen can cause mild to severe health effects, ranging from eye strain and headaches to photosensitive epileptic episodes. These effects are usually explained by the physiology of the human eye and the human visual system. The image of light sources reflected in the screen can cause the human visual system to focus on that image, which is usually at a much farther distance than the information shown on the screen. This competition between two images that can be focused is considered to be the primary source of such effects.
Advantages
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these typ |
https://en.wikipedia.org/wiki/Hydathode | A hydathode is a type of pore, commonly found in angiosperms, that secretes water through pores in the epidermis or leaf margin, typically at the tip of a marginal tooth or serration. Hydathodes occur in the leaves of submerged aquatic plants such as Ranunculus fluitans as well as herbaceous plants of drier habitats such as Campanula rotundifolia. They are connected to the plant vascular system by a vascular bundle. Hydathodes are commonly seen in water lettuce, water hyacinth, rose, balsam, and many other species.
Hydathodes are made of a group of living cells with numerous intercellular spaces filled with water, but few or no chloroplasts, and represent modified bundle-ends. These cells (called epithem cells) open out into one or more sub-epidermal chambers. These, in turn, communicate with the exterior through an open water stoma or open pore. The water stoma structurally resembles an ordinary stoma, but is usually larger and has lost the power of movement.
Hydathodes are involved in the process of guttation, in which positive xylem pressure (due to root pressure) causes liquid to exude from the pores. Some halophytes possess glandular trichomes that actively secrete salt in order to reduce the concentration of cytotoxic inorganic ions in their cytoplasm; this may lead to the formation of a white powdery substance on the surface of the leaf.
Hydathodes are of two types:
passive hydathodes, formed when a leaf vein terminates in an epithem (an area of thin-walled parenchyma).
active hydathodes, formed when epidermal cells lose water actively.
See also
Transpiration
Vascular plants
Xylem |
https://en.wikipedia.org/wiki/FrostWire | FrostWire is a free and open-source BitTorrent client first released in September 2004, as a fork of LimeWire. It was initially very similar to LimeWire in appearance and functionality, but over time developers added more features, including support for the BitTorrent protocol. In version 5, support for the Gnutella network was dropped entirely, and FrostWire became a BitTorrent-only client.
History
FrostWire, a BitTorrent client (formerly a Gnutella client), is a collaborative, open-source project licensed under the GPL-3.0-or-later license. In late 2005, concerned developers in LimeWire's open source community announced the start of a new project fork, "FrostWire", that would protect the developmental source code of the LimeWire client. FrostWire has since evolved to replace LimeWire's BitTorrent core with that of Vuze, the Azureus BitTorrent engine, and ultimately to remove LimeWire's Gnutella core, becoming a 100% BitTorrent client powered by the libtorrent library through FrostWire's jLibtorrent Java wrapper library since August 2014.
Gnutella client
The project was started in September 2004 after LimeWire's distributor considered adding "blocking" code in response to RIAA pressure. The RIAA threatened legal action against several peer-to-peer developers including LimeWire as a result of the U.S. Supreme Court's decision in MGM Studios, Inc. v. Grokster, Ltd..
The second beta release of FrostWire was available in the last quarter of 2005.
Multiprotocol P2P client
Since version 4.20.x, FrostWire was able to handle torrent files and featured a new junk filter. Also, in version 4.21.x support was added for most Android devices.
BitTorrent client
Since version 5.0 (2011), FrostWire relaunched itself as a BitTorrent application, so those using the Gnutella network either have to use version 4, or switch to another client altogether.
Preview before download
Since version 6.0, FrostWire allows files to be previewed before they are downloaded.
Adware and malware
Since around 2008 som |
https://en.wikipedia.org/wiki/Parasympatholytic | A parasympatholytic agent is a substance or activity that reduces the activity of the parasympathetic nervous system.
The term parasympatholytic typically refers to the effect of a drug, although some poisons act to block the parasympathetic nervous system as well. Most drugs with parasympatholytic properties are anticholinergics.
Parasympatholytic agents and sympathomimetic agents have similar effects to each other, although some differences between the two groups can be observed. For example, both cause mydriasis, but parasympatholytics reduce accommodation (cycloplegia), whereas sympathomimetics do not.
Clinical significance
Parasympatholytic drugs are sometimes used to treat slow heart rhythms (bradycardias or bradydysrhythmias) caused by myocardial infarctions or other pathologies, as well as to treat conditions that cause bronchioles in the lung to constrict, such as asthma. By blocking the parasympathetic nervous system, parasympatholytic drugs can increase heart rate in patients with bradycardic heart rhythms, and open up airways and reduce mucous production in patients with asthma.
About
External links
Overview at salisbury.edu
Anticholinergics |
https://en.wikipedia.org/wiki/Kashk | Kashk (Kašk), qurut (Tuvan and Turkish: kurut), chortan (chort’an), or aaruul and khuruud (Mongolian: ааруул or хурууд) is a range of dairy products popular in Middle Eastern cuisine, Caucasian cuisine, and Central Asian cuisine. Kashk is made from strained yogurt, drained buttermilk (in particular, drained qatiq), or drained sour milk by shaping it and letting it dry. It can be made in a variety of forms, such as rolled into balls, sliced into strips, or formed into chunks.
There are three main kinds of food products with this name: foods based on curdled milk products like yogurt or cheese; foods based on barley broth, bread, or flour; and foods based on cereals combined with curdled milk.
Etymology
From Middle Persian (kšk' / kašk), thought to have come from (hwš- / hōš-, "dry") in reference to the fermentation process, which involves drying under the sun. The term was loaned to numerous languages including Arabic, Syriac, Turkish, Azerbaijani and many others.
In Armenian – chortan (chor means "dry", and tan means "buttermilk")
In Turkic languages, qurut derives from the verb quru-t ('to make dry').
Background
The ancient form of kashk is a porridge of grains fermented with whey and dried in the sun. The long shelf-life and nutritional value of kashk made it a useful item for peasants during the winter months, as well as for soldiers and travelers. Kashk is the origin of tarhana, found in the modern cuisines of Turkey and Greece, where it is called trachanas.
Modern kashk is usually a dish of dried buttermilk that can be crumbled and turned into a paste with water. This coarse powder can be used to thicken soups and stews and improve their flavor, or as an ingredient in various meat, rice or vegetable dishes. Drying allows a longer shelf life for the product.
Kashk is also central to the staple Iranian eggplant dish known as kashk-e bademjan.
Kashk in different languages and cultures
Kashk dairy products can be found in the cuisines of Iran, |
https://en.wikipedia.org/wiki/Line%20of%20force | A line of force in Faraday's extended sense is synonymous with Maxwell's line of induction. According to J.J. Thomson, Faraday usually discusses lines of force as chains of polarized particles in a dielectric, yet sometimes Faraday discusses them as having an existence all their own as in stretching across a vacuum. In addition to lines of force, J.J. Thomson—similar to Maxwell—also calls them tubes of electrostatic inductance, or simply Faraday tubes. From the 20th century perspective, lines of force are energy linkages embedded in a 19th-century unified field theory that led to more mathematically and experimentally sophisticated concepts and theories, including Maxwell's equations, electromagnetic waves, and Einstein's relativity.
Lines of force originated with Michael Faraday, whose theory holds that all of reality is made up of force itself. His theory predicts that electricity, light, and gravity have finite propagation delays. The theories and experimental data of later scientific figures such as Maxwell, Hertz, Einstein, and others are in agreement with the ramifications of Faraday's theory. Nevertheless, Faraday's theory remains distinct. Unlike Faraday, Maxwell and others (e.g., J.J. Thomson) thought that light and electricity must propagate through an ether. In Einstein's relativity, there is no ether, yet the physical reality of force is much weaker than in the theories of Faraday.
Historian Nancy J. Nersessian, in her paper "Faraday's Field Concept", distinguishes between the ideas of Maxwell and Faraday:
The specific features of Faraday's field concept, in its 'favourite' and most complete form, are that force is a substance, that it is the only substance and that all forces are interconvertible through various motions of the lines of force. These features of Faraday's 'favourite notion' were not carried on. Maxwell, in his approach to the problem of finding a mathematical representation for the continuous transmission of electric and magnetic fo |
https://en.wikipedia.org/wiki/Flavored%20fortified%20wine | Flavored fortified wines or tonic wines (known informally as bum wines or bum vino) are inexpensive fortified wines that typically have an alcohol content between 13% and 20% alcohol by volume (ABV). They are made from various fruits (including grapes and citrus fruits) with added sugar, artificial flavor, and artificial color.
Brands
Buckfast Tonic Wine is a tonic wine with added alcohol, caffeine, and sugar, produced under license from Buckfast Abbey, a Roman Catholic monastery located in Devon, England. It is particularly popular along the central belt of Scotland, especially Glasgow, Faifley, East Kilbride, Hamilton, Coatbridge and other Strathclyde areas, as well as Falkirk, Fife, Edinburgh and the Lothians, but critics have blamed it for being a cause of social problems in Scotland. Some have nicknamed it "Wreck the Hoose Juice". It also enjoys strong popularity and near cult-following in Northern Ireland, often referred to simply as "Bucky" and in some cases "Lurgan Champagne".
Divas Vkat (originally Vodkat). Made in the Riverina region of New South Wales, Australia, Divas produce a range of flavoured and unflavoured fortified wines used as a substitute to traditional mixers such as vodka, triple sec and coconut rum. Alcohol levels vary by flavour, but tend to be between 20% and 22% alcohol by volume (40 to 44 proof). There is also a range of "Ready to Drink" (RTD) 330 mL pre-mixed cocktails in a four-pack, with a strength of 8% alcohol by volume.
MD 20/20 (often called by its nickname Mad Dog) is an American fortified wine. The MD stands for its producer, Mogen David. MD 20/20 has an alcohol content that varies by flavor from 13% to 18%. Initially, 20/20 stood for 20 oz at 20% alcohol. Currently, MD 20/20 is sold neither in 20 oz bottles nor at 20% alcohol by volume. In the Central Belt of Scotland it is a very popular beverage, usually drunk from the bottle and often associated with antisocial behaviour.
Solntsedar (named after a town on the Bla |
https://en.wikipedia.org/wiki/Brewer%20and%20Nash%20model | The Brewer and Nash model was constructed to provide information security access controls that can change dynamically. This security model, also known as the Chinese wall model, was designed to provide controls that mitigate conflict of interest in commercial organizations and is built upon an information flow model.
In the Brewer and Nash model, no information can flow between the subjects and objects in a way that would create a conflict of interest.
This model is commonly used by consulting and accounting firms. For example, once a consultant accesses data belonging to Acme Ltd, a consulting client, they may no longer access data belonging to any of Acme's competitors. In this model, the same consulting firm can have clients that are competing with Acme Ltd while advising Acme Ltd. This model uses the principle of data isolation within each conflict class of data to keep users out of potential conflict-of-interest situations. Because company relationships change all the time, dynamic and up-to-date updates to the members and definitions of conflict classes are important.
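A small illustrative sketch of the access rule just described follows. The company names, conflict classes, and function names are assumptions made for this example, not part of the model's formal definition.

```python
# Chinese wall (Brewer-Nash) access rule, illustrative sketch.
from collections import defaultdict

# Conflict-of-interest classes: datasets of competing companies (hypothetical).
conflict_class = {
    "Acme": "widgets", "WidgetCo": "widgets",   # competitors
    "BankA": "banking", "BankB": "banking",
}

history = defaultdict(set)   # subject -> set of companies already accessed

def may_access(subject, company):
    """Allow access unless the subject already accessed a different company
    in the same conflict-of-interest class."""
    for seen in history[subject]:
        if seen != company and conflict_class[seen] == conflict_class[company]:
            return False
    return True

def access(subject, company):
    if may_access(subject, company):
        history[subject].add(company)
        return True
    return False

print(access("consultant", "Acme"))      # True
print(access("consultant", "BankA"))     # True  (different conflict class)
print(access("consultant", "WidgetCo"))  # False (competes with Acme)
```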
See also
Bell–LaPadula model
Biba model
Clark–Wilson model
Graham–Denning model |
https://en.wikipedia.org/wiki/Graham%E2%80%93Denning%20model | The Graham–Denning model is a computer security model that shows how subjects and objects should be securely created and deleted.
It also addresses how to assign specific access rights. It is mainly used in access control mechanisms for distributed systems. There are three main parts to the model: A set of subjects, a set of objects, and a set of eight rules. A subject may be a process or a user that makes a request to access a resource. An object is the resource that a user or process wants to access.
Features
This model addresses the security issues associated with how to define a set of basic rights on how specific subjects can execute security functions on an object.
The model has eight basic protection rules (actions) that outline:
How to securely create an object.
How to securely create a subject.
How to securely delete an object.
How to securely delete a subject.
How to securely provide the read access right.
How to securely provide the grant access right.
How to securely provide the delete access right.
How to securely provide the transfer access right.
Moreover, each object has an owner that has special rights on it, and each subject has another subject (controller) that has special rights on it.
The model is based on the Access Control Matrix model, where rows correspond to subjects and columns correspond to objects and subjects; each element contains a set of rights between subject i and object j or between subject i and subject k.
For example, the entry A[s,o] contains the rights that subject s has on object o (example: {own, execute}).
When executing one of the 8 rules, for example creating an object, the matrix is changed: a new column is added for that object, and the subject that created it becomes its owner.
Each rule is associated with a precondition, for example if subject x wants to delete object o, it must be its owner (A[x,o] contains the 'owner' right).
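To illustrate the matrix manipulation described above, here is a minimal sketch of two of the eight rules (create object and delete object) operating on a dictionary-based access matrix. The data structures and function names are assumptions made for this example, not part of the model's formal specification.

```python
# Illustrative access-control matrix with owner-checked create/delete rules.
matrix = {}          # matrix[subject][obj] -> set of rights
subjects = set()

def add_subject(s):
    subjects.add(s)
    matrix[s] = {}

def create_object(s, obj):
    # Rule "create object": every subject gets a (possibly empty) cell for the
    # new object, and the creating subject becomes its owner.
    for subj in subjects:
        matrix[subj].setdefault(obj, set())
    matrix[s][obj].add("owner")

def delete_object(s, obj):
    # Rule "delete object": precondition is that s holds the 'owner' right.
    if "owner" not in matrix[s].get(obj, set()):
        raise PermissionError(f"{s} is not the owner of {obj}")
    for subj in subjects:
        matrix[subj].pop(obj, None)

add_subject("alice")
add_subject("bob")
create_object("alice", "report.txt")
# delete_object("bob", "report.txt")   # would raise PermissionError: bob is not the owner
delete_object("alice", "report.txt")   # allowed: alice owns the object
```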
Limitations
Harrison-Ruzzo-Ullman extended this model by defining |
https://en.wikipedia.org/wiki/Security%20modes | Generally, security modes refer to information systems security modes of operations used in mandatory access control (MAC) systems. Often, these systems contain information at various levels of security classification. The mode of operation is determined by:
The type of users who will be directly or indirectly accessing the system.
The type of data, including classification levels, compartments, and categories, that are processed on the system.
The clearance levels of users, their need to know, and the formal access approvals that the users will have.
Dedicated security mode
In this mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for ALL information on the system.
A valid need to know for ALL information on the system.
All users can access ALL data.
System high security mode
In system high mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for ALL information on the system.
A valid need to know for SOME information on the system.
All users can access SOME data, based on their need to know.
Compartmented security mode
In this mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for SOME information they will access on the system.
A valid need to know for SOME information on the system.
All users can access SOME data, based on their need to know and formal access approval.
Multilevel security mode
In multilevel security mode of operation (also called Controlled Security Mode), all users must have:
Signed NDA for ALL information on the system.
Proper clearance for SOME information on the system.
Formal access approval for SOME information on the system.
A valid need to know for SOME information on the system.
All users can |
https://en.wikipedia.org/wiki/Finite%20potential%20well | The finite potential well (also known as the finite square well) is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a "box", but one which has finite potential "walls". Unlike the infinite potential well, there is a probability associated with the particle being found outside the box. The quantum mechanical interpretation is unlike the classical interpretation, where if the total energy of the particle is less than the potential energy barrier of the walls it cannot be found outside the box. In the quantum interpretation, there is a non-zero probability of the particle being outside the box even when the energy of the particle is less than the potential energy barrier of the walls (cf quantum tunnelling).
Particle in a 1-dimensional box
For the 1-dimensional case on the x-axis, the time-independent Schrödinger equation can be written as:
$$-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2}(x) + V(x)\,\psi(x) = E\,\psi(x),$$
where
$\hbar = h/(2\pi)$ is the reduced Planck's constant,
$h$ is Planck's constant,
$m$ is the mass of the particle,
$\psi(x)$ is the (complex valued) wavefunction that we want to find,
$V(x)$ is a function describing the potential energy at each point x, and
$E$ is the energy, a real number, sometimes called eigenenergy.
For the case of the particle in a 1-dimensional box of length L, the potential is $V_0$ outside the box, and zero for x between $-L/2$ and $+L/2$. The wavefunction is considered to be made up of different wavefunctions at different ranges of x, depending on whether x is inside or outside of the box. Therefore, the wavefunction is defined such that:
Inside the box
For the region inside the box, V(x) = 0 and Equation 1 reduces to
$-\frac{\hbar^2}{2m}\,\frac{d^2\psi_2}{dx^2}(x) = E\,\psi_2(x).$
Letting
$k = \frac{\sqrt{2mE}}{\hbar},$
the equation becomes
$\frac{d^2\psi_2}{dx^2}(x) = -k^2\,\psi_2(x).$
This is a well-studied differential equation and eigenvalue problem with a general solution of
$\psi_2(x) = A\sin(kx) + B\cos(kx).$
Hence,
$E = \frac{\hbar^2 k^2}{2m}.$
Here, A and B can be any complex numbers, and k can be any real number.
Outside the box
For the region outside of the box, since the potential is constant, $V(x) = V_0$, Equation 1 becomes:
$-\frac{\hbar^2}{2m}\,\frac{d^2\psi_1}{dx^2}(x) + V_0\,\psi_1(x) = E\,\psi_1(x).$
There are two possible families of solutions, depending on whether the energy E is less than $V_0$ (the particle is bound in the potential) or E is greater than $V_0$ (the particle is free). |
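Where the entry breaks off, the bound-state energies (E < V0) follow from matching the wavefunction and its derivative at the walls, which yields transcendental equations rather than a closed form. The following is a minimal numerical sketch, not taken from the article: it assumes natural units (ħ = m = 1), a symmetric well of width L and depth V0, and the standard even/odd matching conditions k·tan(kL/2) = κ and −k·cot(kL/2) = κ with κ = sqrt(2m(V0 − E))/ħ, rewritten in a pole-free form so that a simple sign-change scan can bracket the roots.

```python
import numpy as np
from scipy.optimize import brentq

HBAR = M = 1.0          # natural units (assumption of this sketch)
L, V0 = 2.0, 10.0       # example well width and depth (arbitrary values)

def k_in(E):
    """Wavenumber inside the well, k = sqrt(2mE)/hbar."""
    return np.sqrt(2 * M * E) / HBAR

def kappa_out(E):
    """Decay constant outside the well, kappa = sqrt(2m(V0 - E))/hbar."""
    return np.sqrt(2 * M * (V0 - E)) / HBAR

# Matching psi and psi' at x = +/- L/2 gives k*tan(kL/2) = kappa (even parity)
# and -k*cot(kL/2) = kappa (odd parity).  Multiplying through by cos(kL/2) or
# sin(kL/2) gives the equivalent, pole-free forms used below.
def even_condition(E):
    return k_in(E) * np.sin(k_in(E) * L / 2) - kappa_out(E) * np.cos(k_in(E) * L / 2)

def odd_condition(E):
    return k_in(E) * np.cos(k_in(E) * L / 2) + kappa_out(E) * np.sin(k_in(E) * L / 2)

def bound_states(condition, n_grid=2000):
    """Scan 0 < E < V0 for sign changes and refine each root with brentq."""
    Es = np.linspace(1e-9, V0 - 1e-9, n_grid)
    vals = condition(Es)
    roots = []
    for a, b, fa, fb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            roots.append(brentq(condition, a, b))
    return roots

print("even-parity bound-state energies:", bound_states(even_condition))
print("odd-parity bound-state energies: ", bound_states(odd_condition))
```

For the example values above, the scan should report a small number of even- and odd-parity energies below V0, in line with the usual graphical analysis of the finite well.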
https://en.wikipedia.org/wiki/Rainforest%20Alliance | The Rainforest Alliance is an international non-governmental organization (NGO) with staff in more than 20 countries and operations in more than 70 countries. It was founded in 1987 by Daniel Katz, an American environmental activist, who serves as the chair of the board of directors. The NGO states that its mission is “to create a more sustainable world by using social and market forces to protect nature and improve the lives of farmers and forest communities.” Its work includes the provision of an environmental certification for sustainability in agriculture. In parallel to its certification program, the Rainforest Alliance develops and implements long-term conservation and community development programs in a number of critically important tropical landscapes where commodity production threatens ecosystem health and the well-being of rural communities.
The Rainforest Alliance is a product-oriented multistakeholder governance group combining the interests of companies, farmers, foresters, communities, and consumers to produce sustainable and harmonious goods.
Merger with UTZ
In June 2017, the Rainforest Alliance and UTZ announced their intention to merge, and in January 2018, the merger was legally completed.
The Rainforest Alliance’s work continues in Latin America, Africa, and Asia.
The new Rainforest Alliance released a new certification standard in 2020, building upon the previous Rainforest Alliance Sustainable Agriculture Standard and the UTZ Certification Standard. The two previous certification programs will continue to operate in parallel, and farms will continue as either Rainforest Alliance or UTZ certified until they transition to the new 2020 standard. Audits against the new standard were set to begin in July 2021.
Rainforest Alliance programs
Sustainable forestry certification
As of October 1, 2018, the Rainforest Alliance transitioned its sustainable forestry certification business, including all related services, personnel and clients, to Pre |
https://en.wikipedia.org/wiki/Mohammad%20Fahim%20Dashty | Mohammad Fahim Dashty ( 1973 – 4 or 5 September 2021) was an Afghan journalist, politician and military official. In 2021, he served as spokesman of the National Resistance Front of Afghanistan during the Republican insurgency in Afghanistan.
Biography
Born around 1973, Dashty was the nephew of Afghan politician, Abdullah Abdullah, and a close associate of the family of the Northern Alliance leader, Ahmad Shah Massoud. He was with Massoud when the latter was killed by a suicide bombing on 9 September 2001. Dashty was badly wounded in the bombing.
After the United States invasion of Afghanistan, Dashty founded a newspaper based at Kabul and became known for supporting journalists and advocating freedom of speech in Afghanistan. He was a leader of the Afghanistan National Journalist Union (ANJU) as well as a key figure in the Federation for "Afghan Journalists and Media Entities", founded in 2012. In addition, he contributed to the South Asia Press Freedom Report.
In 2021, following the takeover of Afghanistan by the Taliban, Dashty joined the National Resistance Front of Afghanistan as a spokesman. Beforehand, he had reportedly refused offers of a government post by the Taliban. Dashty was one of the main sources of information in the Panjshir Valley as the Taliban pressed in, issuing statements on Twitter. Shortly before his death, he stated "If we die, history will write about us, as people who stood for their country till the end of the line". On 4 or 5 September 2021, Dashty was killed in combat during the Taliban offensive into Panjshir. His death was confirmed by his friend Noor Rahman Akhlaqi on Facebook as well as other sources. The Taliban claimed that he had died as they had advanced into Bazarak, capital of the Panjshir Province. In contrast, the International Federation of Journalists stated that he had died alongside General Abdul Wodo Zara at Dashtak, Anaba District. According to unspecified sources and defense analyst Babak Taghvaee, Dashty was killed b |
https://en.wikipedia.org/wiki/Computer%20security%20model | A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all. A computer security model is implemented through a computer security policy.
For a more complete list of available articles on specific security models, see :Category:Computer security models.
Selected topics
Access control list (ACL)
Attribute-based access control (ABAC)
Bell–LaPadula model
Biba model
Brewer and Nash model
Capability-based security
Clark–Wilson model
Context-based access control (CBAC)
Graham–Denning model
Harrison–Ruzzo–Ullman (HRU)
High-water mark (computer security)
Lattice-based access control (LBAC)
Mandatory access control (MAC)
Multi-level security (MLS)
Non-interference (security)
Object-capability model
Protection ring
Role-based access control (RBAC)
Take-grant protection model
Discretionary access control (DAC) |
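As a concrete illustration of what such a model specifies, the sketch below encodes the two mandatory rules usually associated with the Bell–LaPadula model listed above ("no read up", "no write down") as a simple policy check. It is a toy example written for this list, not an excerpt from any implementation; the level ordering, the labels, and the function name are assumptions.

```python
# Toy lattice of sensitivity levels (assumption: a simple total order).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def bell_lapadula_allows(action: str, subject_level: str, object_level: str) -> bool:
    """Simple-security property: no read up.  *-property: no write down."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if action == "read":
        return s >= o      # a subject may only read at or below its own level
    if action == "write":
        return s <= o      # a subject may only write at or above its own level
    raise ValueError(f"unknown action: {action}")

# Example: a SECRET-cleared subject may read CONFIDENTIAL data,
# but may not write it down to a CONFIDENTIAL object.
assert bell_lapadula_allows("read", "SECRET", "CONFIDENTIAL")
assert not bell_lapadula_allows("write", "SECRET", "CONFIDENTIAL")
```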
https://en.wikipedia.org/wiki/Operations%20security | Operations security (OPSEC) or operational security is a process that identifies critical information to determine whether friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information.
The term "operations security" was coined by the United States military during the Vietnam War.
History
Vietnam
In 1966, United States Admiral Ulysses Sharp established a multidisciplinary security team to investigate the failure of certain combat operations during the Vietnam War. This operation was dubbed Operation Purple Dragon, and included personnel from the National Security Agency and the Department of Defense.
When the operation concluded, the Purple Dragon team codified their recommendations. They called the process "Operations Security" in order to distinguish the process from existing processes and ensure continued inter-agency support.
NSDD 298
In 1988, President Ronald Reagan signed National Security Decision Directive (NSDD) 298. This document established the National Operations Security Program and named the Director of the National Security Agency as the executive agent for inter-agency OPSEC support. This document also established the Interagency OPSEC Support Staff (IOSS).
Private-sector application
The private sector has also adopted OPSEC as a defensive measure against competitive intelligence collection efforts.
See also
For Official Use Only – FOUO
Information security
Intelligence cycle security
Security
Security Culture
Sensitive but unclassified – SBU
Controlled Unclassified Information – CUI
Social engineering |
https://en.wikipedia.org/wiki/SnapPea | SnapPea is free software designed to help mathematicians, in particular low-dimensional topologists, study hyperbolic 3-manifolds. The primary developer is Jeffrey Weeks, who created the first version as part of his doctoral thesis, supervised by William Thurston. It is not to be confused with the unrelated Android malware with the same name.
The latest version is 3.0d3. Marc Culler, Nathan Dunfield and collaborators have extended the SnapPea kernel and written Python extension modules which allow the kernel to be used in a Python program or in the interpreter. They also provide a graphical user interface written in Python which runs under most operating systems (see external links below).
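To give a sense of how the kernel is driven from Python, here is a minimal sketch using the SnapPy package mentioned above. It assumes SnapPy is installed (for example via `pip install snappy`); the manifold name "4_1" (the figure-eight knot complement) and the particular method calls are only illustrative examples of the interface, not an excerpt from its documentation.

```python
import snappy

# Build a hyperbolic 3-manifold from the census name of the
# figure-eight knot complement and query the SnapPea kernel.
M = snappy.Manifold("4_1")

print(M.num_cusps())        # number of cusps of the triangulated manifold
print(M.solution_type())    # e.g. 'all tetrahedra positively oriented'
print(M.volume())           # hyperbolic volume computed by the kernel
```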
The following people are credited in SnapPea 2.5.3's list of acknowledgments: Colin Adams, Bill Arveson, Pat Callahan, Joe Christy, Dave Gabai, Charlie Gunn, Martin Hildebrand, Craig Hodgson, Diane Hoffoss, A. C. Manoharan, Al Marden, Dick McGehee, Rob Meyerhoff, Lee Mosher, Walter Neumann, Carlo Petronio, Mark Phillips, Alan Reid, and Makoto Sakuma.
The C source code is extensively commented by Jeffrey Weeks and contains useful descriptions of the mathematics involved with references.
The SnapPeaKernel is released under GNU GPL 2+ as is SnapPy.
Algorithms and functions
At the core of SnapPea are two main algorithms. The first attempts to find a minimal ideal triangulation of a given link complement. The second computes the canonical decomposition of a cusped hyperbolic 3-manifold. Almost all the other functions of SnapPea rely in some way on one of these decompositions.
Minimal ideal triangulation
SnapPea inputs data in a variety of formats. Given a link diagram, SnapPea can ideally triangulate the link complement. It then performs a sequence of simplifications to find a locally minimal ideal triangulation.
Once a suitable ideal triangulation is found, SnapPea can try to find a hyperbolic structure. In his Princeton lecture notes, Thurston noted a method for descr |