https://en.wikipedia.org/wiki/Operation%20RAFTER
|
RAFTER was the code name for an MI5 radio receiver detection technique, used mostly against clandestine Soviet agents and for monitoring domestic radio transmissions by foreign embassy personnel from the 1950s on.
Explanation
Most radio receivers of the period were of the AM superhet design, with local oscillators which generate a signal typically 455 kHz above or sometimes below the frequency to be received. There is always some oscillator radiation leakage from such receivers, and in the initial stages of RAFTER, MI5 simply attempted to locate clandestine receivers by detecting the leaked signal with a sensitive custom-built receiver. This was complicated by domestic radios in people's homes also leaking radiation.
By accident, one such receiver for MI5 mobile radio transmissions was being monitored when a passing transmitter produced a powerful signal which overloaded the receiver, producing an audible change in the received signal. The agency realized that they could identify the actual frequency being monitored if they produced their own transmissions and listened for the change in the superhet tone.
Soviet transmitters
Soviet short-wave transmitters were extensively used to broadcast messages to clandestine agents, the transmissions consisting simply of number sequences read aloud and decoded using a one-time pad. It was realized that this new technique could be used to track down such agents. Specially equipped aircraft would fly over urban areas at times when the agents were receiving Soviet transmissions, and attempt to locate receivers tuned to the transmissions.
Tactics
Like many secret technologies, RAFTER's use was attended by the fear of over-use, alerting the quarry and causing a shift in tactics which would neutralize the technology. As a technical means of intelligence, it was also not well supported by the more traditional factions in MI5. Its part in the successes and failures of MI5 at the time is not entirely known.
In his book Spycatcher,
|
https://en.wikipedia.org/wiki/Kinetic%20exchange%20models%20of%20markets
|
Kinetic exchange models are multi-agent dynamic models inspired by the statistical physics of energy distribution, which try to explain the robust and universal features of income/wealth distributions.
Understanding the distributions of income and wealth in an economy has been a classic problem in economics for more than a hundred years. Today it is one of the main branches of econophysics.
Data and basic tools
In 1897, Vilfredo Pareto first found a universal feature in the distribution of wealth. After that, with some notable exceptions, this field had been dormant for many decades, although accurate data had been accumulated over this period. Considerable investigations with the real data during the last fifteen years (1995–2010) revealed that the tail (typically 5 to 10 percent of agents in any country) of the income/wealth distribution indeed follows a power law. However, the majority of the population (i.e., the low-income population) follows a different distribution which is debated to be either Gibbs or log-normal.
Basic tools used in this type of modelling are probabilistic and statistical methods, mostly taken from the kinetic theory of statistical physics. Monte Carlo simulations often come in handy in solving these models.
Overview of the models
Since the distributions of income/wealth are the results of the interaction among many heterogeneous agents, there is an analogy with statistical mechanics, where many particles interact. This similarity was noted by Meghnad Saha and B. N. Srivastava in 1931 and thirty years later by Benoit Mandelbrot. In 1986, an elementary version of the stochastic exchange model was first proposed by J. Angle.
In the context of the kinetic theory of gases, such an exchange model was first investigated by A. Dragulescu and V. Yakovenko. The main modelling effort has gone into introducing the concepts of savings and taxation in the setting of an ideal-gas-like system. Basically, it assumes that in the short-run, an economy remains
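A minimal simulation of this kind of ideal-gas-like exchange model can be sketched in Python. The agent count, step count, and uniform initial wealth below are illustrative choices; the `saving` parameter follows the savings-propensity variant of the model (set to 0 it reduces to the basic random-exchange model):

```python
import random

def kinetic_exchange(n_agents=1000, steps=100000, saving=0.0, seed=42):
    """Sketch of a kinetic wealth-exchange model: at each step two random
    agents pool their non-saved wealth and split the pool at a random
    fraction. Total wealth is conserved at every step."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents          # everyone starts with one unit
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = (1 - saving) * (wealth[i] + wealth[j])
        eps = rng.random()             # random split fraction in [0, 1)
        wealth[i] = saving * wealth[i] + eps * pool
        wealth[j] = saving * wealth[j] + (1 - eps) * pool
    return wealth

w = kinetic_exchange()
# total wealth stays at n_agents * 1.0; with saving=0 the stationary
# distribution is the exponential (Gibbs) distribution
```

With `saving=0` the long-run wealth distribution approaches the exponential (Gibbs) form noted above; a positive uniform saving propensity deforms it toward a Gamma-like shape.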
|
https://en.wikipedia.org/wiki/Haloarcula%20marismortui
|
Haloarcula marismortui is a halophilic archaeon isolated from the Dead Sea.
Morphology
Haloarcula marismortui is a Gram-negative archaeon with a cell size of 1.0–2.0 × 2.0–3.0 μm (diameter × length). Cells are pleomorphic, appearing as short rods to rectangles. H. marismortui is motile via archaella and possesses a cell membrane that consists of triglycosyl diether lipids and glycoproteins.
Metabolism
H. marismortui is an aerobic chemoorganotroph that utilizes glycolysis and a modified Entner–Doudoroff pathway for the breakdown of nutrients. H. marismortui utilizes energy sources such as glucose, sucrose, fructose, glycerol, malate, acetate, and succinate, while producing nitrogen, metabolic carbon, and acid as byproducts. It can also grow anaerobically by using nitrate as an electron acceptor.
Genomic Properties
The genome of H. marismortui is organized into nine circular replicons, in which individual G+C content varies from 54 to 62%. H. marismortui contains 4,366 genes and 4,274,642 base pairs (Strain ATCC 43049).
H. marismortui has one of only two prokaryotic large ribosomal subunits that have so far been crystallized; the other is that of Deinococcus radiodurans.
Ecology
Habitat
Haloarcula marismortui is considered an extreme halophile and has been isolated from the Dead Sea. H. marismortui has a temperature optimum between 40 and 50 °C and a pH range of 5.5–8.0. Growth can occur at a wide range of NaCl concentrations, spanning 5–35%, with optimal growth between 15 and 25%. The unusually large number of environmental regulatory genes found within the H. marismortui genome suggests higher fitness in extreme environments compared to other species of Halobacterium.
Adaptability
H. marismortui encodes a large family of multi-domain proteins (49) that act as sensors and regulators, including the opsin proteins SopI, SopII, Hop, and Bop. These proteins help maintain physiological ion concentrations, facilitate phototaxis, and generate chemical energy via a proton gradient. H.
|
https://en.wikipedia.org/wiki/Gummed%20film
|
Gummed film refers to a technique used to measure nuclear fallout. It involves the use of a sheet of plastic (cellulose acetate) or paper substrate coated on one side with an adhesive (e.g., rubber cement). The sheet is exposed (adhesive-side up) to the environment to be monitored, where fallout particles land on (and thus adhere to) the gummed film. After some period, the films are collected and analyzed for radioactivity.
|
https://en.wikipedia.org/wiki/Zipping%20%28computer%20science%29
|
In computer science, zipping is a function which maps a tuple of sequences into a sequence of tuples. The name zip derives from the action of a zipper, in that it interleaves two formerly disjoint sequences. The inverse function is unzip.
Example
Given the three words cat, fish and be, where |cat| is 3, |fish| is 4 and |be| is 2, let ℓ denote the length of the longest word, which is fish; here ℓ = 4. The zip of cat, fish, be is then a sequence of 4 tuples of elements:
(c, f, b)(a, i, e)(t, s, #)(#, h, #),
where # is a symbol not in the original alphabet. In Haskell this truncates to the shortest sequence, of length min(|cat|, |fish|, |be|) = 2:
zip3 "cat" "fish" "be"
-- [('c','f','b'),('a','i','e')]
Definition
Let Σ be an alphabet, # a symbol not in Σ.
Let x = x_1x_2...x_|x|, y = y_1y_2...y_|y|, z = z_1z_2...z_|z|, ... be n words (i.e. finite sequences) of elements of Σ. Let ℓ denote the length of the longest word, i.e. the maximum of |x|, |y|, |z|, ... .
The zip of these words is a finite sequence of n-tuples of elements of (Σ ∪ {#}), i.e. an element of ((Σ ∪ {#})^n)*:
zip(x, y, z, ...) := (x_1, y_1, z_1, ...)(x_2, y_2, z_2, ...)...(x_ℓ, y_ℓ, z_ℓ, ...),
where for any index i > |w|, the element w_i is #.
The zip of x, y, z, ... is denoted zip(x, y, z, ...) or x ⋆ y ⋆ z ⋆ ...
The inverse to zip is sometimes denoted unzip.
A variation of the zip operation is defined by
zip(x, y, z, ...) := (x_1, y_1, z_1, ...)(x_2, y_2, z_2, ...)...(x_m, y_m, z_m, ...),
where m = min(|x|, |y|, |z|, ...) is the minimum length of the input words. It avoids the use of an adjoined element #, but destroys information about elements of the input sequences beyond m.
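Both behaviours are available in Python's standard library: the built-in zip truncates to the shortest input, like the variation just described, while itertools.zip_longest pads to the longest input with a chosen fill element, matching the #-padded definition:

```python
from itertools import zip_longest

words = ["cat", "fish", "be"]

# Truncating variant: stops at the shortest word (length 2 here)
short = list(zip(*words))
# [('c', 'f', 'b'), ('a', 'i', 'e')]

# Padding variant: pads to the longest word (length 4) with '#'
padded = list(zip_longest(*words, fillvalue='#'))
# [('c', 'f', 'b'), ('a', 'i', 'e'), ('t', 's', '#'), ('#', 'h', '#')]
```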
In programming languages
Zip functions are often available in programming languages, often under the name zip. In Lisp dialects one can simply map the desired function over the desired lists; map is variadic in Lisp, so it can take an arbitrary number of lists as arguments. An example from Clojure:
;; `nums' contains an infinite list of numbers (0 1 2 3 ...)
(def nums (range))
(def tens [10 20 30])
(def firstname "Alice")
;; To zip (0 1 2 3 ...) and [10 20 30] into a vector, invoke `map vector' on them; same with list
(map vector nums tens) ; ⇒ ([0 10] [1 20] [2 30])
(map list nums tens) ; ⇒ ((0 10) (1 20) (2 30))
(map str nums tens)
|
https://en.wikipedia.org/wiki/XDNA%20%28multi-graphics%29
|
The xDNA technology is a technology designed by Diamond Multimedia, allowing motherboards which are not certified for ATI CrossFire (for instance, nForce 500/600 series motherboards designed by Nvidia) to install multiple ATI Radeon video cards (up to four) as a CrossFire setup and operate in rendering modes which are exclusively made for CrossFire setups. Diamond Multimedia will provide optimized Catalyst drivers and middleware for the platform in a single package.
See also
SLI
|
https://en.wikipedia.org/wiki/Oracle%20Advertising
|
Oracle Advertising, formerly Datalogix, is a cloud-based consumer data collection, activation, and measurement platform for use by digital advertisers. Datalogix was a consumer data collection company based in Westminster, Colorado that provided offline consumer spending data to marketers. In December 2014, Oracle signed an agreement to acquire Datalogix. After the acquisition, Datalogix's name changed to Oracle Data Cloud, which later became Oracle Advertising. Oracle Advertising is part of the Oracle Advertising and Customer Experience (CX) application suite.
This collected data, which includes purchase data from stores, credit cards, and loyalty cards, helps marketing teams determine their ad campaigns' effectiveness and those that will potentially increase profits, with Datalogix clients including retail stores, grocers, travel agencies, PepsiCo, Ford, and the Dr. Pepper Snapple Group. After consumer spending behaviors are measured, the information is sold to advertising companies and publishers, such as Facebook, Google, Twitter, Snapchat, and Pinterest. Advertisers then use the information obtained to tailor online ads and reach new or existing customers. In turn, publishers use the data to determine the amount of profit advertisers earn and to convince them to purchase more ads that will feature on their websites. Advertisers and publishers have used Datalogix to help increase profits, as the use of digital media continues to expand.
In 2017, Oracle also acquired Moat, an ad measurement company, which also became part of Oracle Data Cloud, now Oracle Advertising. The Moat acquisition added measurement and analytics capabilities to allow advertisers to track and measure media, as well as the performance of interactions with online ads.
BlueKai, acquired by Oracle in 2014, is a cloud-based data collection platform that uses website cookies to collect online tracking data. The tracking data is then used by marketers to target users with ads specific to their
|
https://en.wikipedia.org/wiki/Doroth%C3%A9e%20Normand-Cyrot
|
Dorothée Normand-Cyrot is a French applied mathematician and control theorist, known for her work on discrete-time nonlinear control systems.
Education and career
As a teenager entering the French university system in 1971, Normand-Cyrot found the grandes écoles closed off to her because she was female; instead she went to a lesser university to study mathematics. Her mentors included algebraist Andrée Ehresmann and, a few years later, control theorist Michel Fliess.
Normand-Cyrot worked for two years for Électricité de France, earned a doctorat de troisième cycle in mathematics in 1978 at Paris Diderot University, became a researcher for the French National Centre for Scientific Research (CNRS) in 1981, and completed her doctorat d'état in 1983 at Paris-Sud University. She became a director of research for CNRS in 1991, and was posted by CNRS to the Laboratoire des signaux et systèmes at Paris-Saclay University.
Recognition
Normand-Cyrot was named an IEEE Fellow in 2005, "for contributions to discrete-time and digital nonlinear control systems".
|
https://en.wikipedia.org/wiki/Fredholm%20theory
|
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm.
Overview
The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details.
Fredholm equation of the first kind
Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given:
g(x) = ∫ K(x, y) f(y) dy.
This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation
L g(x) = f(x),
where the function f is given and g is unknown. Here, L stands for a linear differential operator.
For example, one might take L to be an elliptic operator, such as
L = d²/dx²,
in which case the equation to be solved becomes the Poisson equation.
A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function G = G(x, y) such that for a given pair (x, y),
L G(x, y) = δ(x − y),
where δ(x) is the Dirac delta function.
The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation,
g(x) = ∫ G(x, y) f(y) dy.
The function G(x, y) is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises.
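As a concrete sketch of the Green's-function method, take L = d²/dx² on the interval [0, 1] with zero boundary conditions (an illustrative choice of interval and conditions, not the general theory). The known Green's function for this operator can be checked numerically in Python:

```python
import numpy as np

# Green's function for L = d^2/dx^2 on [0, 1] with g(0) = g(1) = 0:
#   G(x, y) = x (y - 1)  for x <= y,   G(x, y) = y (x - 1)  for x >= y.
def G(x, y):
    return np.where(x <= y, x * (y - 1), y * (x - 1))

# Solve g''(x) = f(x) with f = 1 via the Fredholm integral
# g(x) = ∫ G(x, y) f(y) dy, approximated by the trapezoidal rule.
# The exact solution for f = 1 is g(x) = x (x - 1) / 2.
y = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(y)
x = 0.25
vals = G(x, y) * f
dy = y[1] - y[0]
g_num = dy * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule
g_exact = x * (x - 1) / 2                                 # = -0.09375
```

The integrand is piecewise linear in y with its kink at y = x, so the trapezoidal rule on a grid containing that point reproduces the exact value up to rounding.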
In the general theory, x and y may be points on any manifold; the real number line or n-dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, t
|
https://en.wikipedia.org/wiki/Ichthyosis%20acquisita
|
Ichthyosis acquisita is a skin condition clinically and histologically similar to ichthyosis vulgaris.
Presentation
Associated conditions
The development of ichthyosis in adulthood can be a manifestation of systemic disease, and it has been described in association with malignancies, drugs, endocrine and metabolic disease, HIV, infection, and autoimmune conditions.
It is usually associated with people who have Hodgkin's disease, but it also occurs in people with mycosis fungoides, other malignant sarcomas, Kaposi's sarcoma, and visceral carcinomas. It can also occur in people with leprosy, AIDS, tuberculosis, and typhoid fever.
See also
Ichthyosis
Confluent and reticulated papillomatosis of Gougerot and Carteaud
List of cutaneous conditions
|
https://en.wikipedia.org/wiki/Variation%20ratio
|
The variation ratio is a simple measure of statistical dispersion in nominal distributions; it is the simplest measure of qualitative variation.
It is defined as the proportion of cases which are not in the mode category:
v = 1 − f_m / N,
where f_m is the frequency (number of cases) of the mode, and N is the total number of cases. While a simple measure, it is notable in that some texts and guides suggest or imply that the dispersion of nominal measurements cannot be ascertained.
Just as with the range or standard deviation, the larger the variation ratio, the more differentiated or dispersed the data are; and the smaller the variation ratio, the more concentrated and similar the data are.
An example
A group which is 55% female and 45% male has a modal proportion of 0.55 (the mode is the female category), therefore its variation ratio is v = 1 − 0.55 = 0.45.
Similarly, in a group of 100 people where 60 people like beer, 25 people like wine, and the rest (15) prefer cocktails, the variation ratio is v = 1 − 60/100 = 0.40.
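Both computations can be reproduced with a short Python helper (the function name is ours, chosen for illustration):

```python
from collections import Counter

def variation_ratio(cases):
    """v = 1 - f_m / N, where f_m is the frequency of the modal
    category and N is the total number of cases."""
    counts = Counter(cases)
    f_m = max(counts.values())        # frequency of the mode
    return 1 - f_m / len(cases)

drinks = ["beer"] * 60 + ["wine"] * 25 + ["cocktail"] * 15
v = variation_ratio(drinks)           # 1 - 60/100 = 0.40
```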
See also
Qualitative variation, for a number of other measures of dispersion in nominal variables
|
https://en.wikipedia.org/wiki/Thiolutin
|
Thiolutin is a sulfur-containing antibiotic, which is a potent inhibitor of bacterial and yeast RNA polymerases. It was found to inhibit in vitro RNA synthesis directed by all three yeast RNA polymerases (I, II, and III). Thiolutin is also an inhibitor of mannan and glucan formation in Saccharomyces cerevisiae and used for the analysis of mRNA stability. Studies have shown that thiolutin inhibits adhesion of human umbilical vein endothelial cells (HUVECs) to vitronectin and thus suppresses tumor cell-induced angiogenesis in vivo.
Thiolutin is formed in submerged fermentation by several strains of Streptomyces luteosporeus. Some sources erroneously specify "aureothricin" as a synonym of thiolutin. Aureothricin is an antibiotic very similar to thiolutin, and is created as a by-product during thiolutin fermentation.
|
https://en.wikipedia.org/wiki/Geodemographic%20segmentation
|
In marketing, geodemographic segmentation is a multivariate statistical classification technique for discovering whether the individuals of a population fall into different groups by making quantitative comparisons of multiple characteristics with the assumption that the differences within any group should be less than the differences between groups.
Principles
Geodemographic segmentation is based on two simple principles:
People who live in the same neighborhood are more likely to have similar characteristics than are two people chosen at random.
Neighborhoods can be categorized in terms of the characteristics of the population which they contain. Any two neighborhoods can be placed in the same category, i.e., they contain similar types of people, even though they are widely separated.
Clustering algorithms
The use of different algorithms leads to different results, and there is no single best algorithm: no algorithm offers a theoretical guarantee of its correctness. One of the most frequently used techniques in geodemographic segmentation is the widely known k-means clustering algorithm; in fact, most current commercial geodemographic systems are based on a k-means algorithm. Still, clustering techniques coming from artificial neural networks, genetic algorithms, or fuzzy logic can be more efficient within large, multidimensional databases (Brimicombe 2007).
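To make the k-means step concrete, the following self-contained Python sketch clusters a handful of hypothetical neighbourhood profiles; the data, variable names, and two-feature representation are invented for illustration, not taken from any commercial system:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialise from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[c])))
            clusters[idx].append(p)
        for c in range(k):
            if clusters[c]:                    # keep centroid if cluster empty
                centroids[c] = tuple(sum(dim) / len(clusters[c])
                                     for dim in zip(*clusters[c]))
    return centroids, clusters

# Hypothetical neighbourhood profiles: (median age, % renters)
neighbourhoods = [(30, 80), (32, 75), (31, 78),   # younger, renting
                  (55, 20), (60, 15), (58, 18)]   # older, owner-occupied
centroids, clusters = kmeans(neighbourhoods, k=2)
# the two demographically similar groups separate into the two clusters
```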
Neural networks can handle non-linear relationships, are robust to noise and exhibit a high degree of automation. They do not assume any hypotheses regarding the nature or distribution of the data and they provide valuable assistance in handling problems of a geographical nature that, to date, have been impossible to solve. One of the best known and most efficient neural network methods for achieving unsupervised clustering is the Self-Organizing Map (SOM). SOM has been proposed as an improvement over the k-means method, for it provides a more flexible approach to
|
https://en.wikipedia.org/wiki/Ampersand
|
The ampersand, also known as the and sign, is the logogram &, representing the conjunction "and". It originated as a ligature of the letters et—Latin for "and".
Etymology
Traditionally in English, when spelling aloud, any letter that could also be used as a word in itself ("A", "I", and "O") was referred to by the Latin expression per se ('by itself'), as in "per se A" or "A per se A". The character &, when used by itself as opposed to more extended forms such as &c., was similarly referred to as "and per se and". This last phrase was routinely slurred to "ampersand", and the term had entered common English usage by 1837.
It has been falsely claimed that André-Marie Ampère used the symbol in his widely read publications and that people began calling the new shape "Ampère's and".
History
The ampersand can be traced back to the 1st century A.D. and the old Roman cursive, in which the letters E and T occasionally were written together to form a ligature (Evolution of the ampersand – figure 1). In the later and more flowing New Roman Cursive, ligatures of all kinds were extremely common; figures 2 and 3 from the middle of 4th century are examples of how the et-ligature could look in this script. During the later development of the Latin script leading up to Carolingian minuscule (9th century) the use of ligatures in general diminished. The et-ligature, however, continued to be used and gradually became more stylized and less revealing of its origin (figures 4–6).
The modern italic type ampersand is a kind of "et" ligature that goes back to the cursive scripts developed during the Renaissance. After the advent of printing in Europe in 1455, printers made extensive use of both the italic and Roman ampersands. Since the ampersand's roots go back to Roman times, many languages that use a variation of the Latin alphabet make use of it.
The ampersand often appeared as a character at the end of the Latin alphabet, as for example in Byrhtferð's list of letters from 1011. Simil
|
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20theorem
|
In mathematics, Apéry's theorem is a result in number theory that states that Apéry's constant ζ(3) is irrational. That is, the number
ζ(3) = Σ_{n=1}^∞ 1/n³ = 1.2020569…
cannot be written as a fraction p/q where p and q are integers. The theorem is named after Roger Apéry.
The special values of the Riemann zeta function at the even integers (ζ(2n)) can be shown in terms of Bernoulli numbers to be irrational, while it remains open whether the values at the odd integers (ζ(2n+1)) are in general rational or not (though they are conjectured to be irrational).
History
Leonhard Euler proved that if n is a positive integer then
Σ_{k=1}^∞ 1/k^{2n} = q π^{2n}
for some rational number q. Specifically, writing the infinite series on the left as ζ(2n), he showed
ζ(2n) = (−1)^{n+1} B_{2n} (2π)^{2n} / (2 (2n)!),
where the B_{2n} are the rational Bernoulli numbers. Once it was proved that π^{2n} is always irrational, this showed that ζ(2n) is irrational for all positive integers n.
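As a quick worked check of Euler's closed form, the n = 1 case (using B₂ = 1/6) recovers the Basel value:

```latex
\zeta(2) = \frac{(-1)^{1+1} B_2 \,(2\pi)^2}{2\,(2!)}
         = \frac{\tfrac{1}{6}\cdot 4\pi^2}{4}
         = \frac{\pi^2}{6}.
```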
No such representation in terms of π is known for the so-called zeta constants for odd arguments, the values ζ(2n+1) for positive integers n. It has been conjectured that the ratios of these quantities
ζ(2n+1) / π^{2n+1}
are transcendental for every integer n ≥ 1.
Because of this, no proof could be found to show that the zeta constants with odd arguments were irrational, even though they were (and still are) all believed to be transcendental. However, in June 1978, Roger Apéry gave a talk titled "Sur l'irrationalité de ζ(3)." During the course of the talk he outlined proofs that ζ(3) and ζ(2) were irrational, the latter using methods simplified from those used to tackle the former rather than relying on the expression in terms of π. Due to the wholly unexpected nature of the proof and Apéry's blasé and very sketchy approach to the subject, many of the mathematicians in the audience dismissed the proof as flawed. However Henri Cohen, Hendrik Lenstra, and Alfred van der Poorten suspected Apéry was on to something and set out to confirm his proof. Two months later they finished verification of Apéry's proof, and on August 18 Cohen delivered a lecture giving full details of th
|
https://en.wikipedia.org/wiki/169th%20meridian%20east
|
The meridian 169° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, New Zealand, the Southern Ocean, and Antarctica to the South Pole.
The 169th meridian east forms a great circle with the 11th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 169th meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Chukotka Autonomous Okrug — Ayon Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" | Chaunskaya Bay
|-valign="top"
|
! scope="row" |
| Chukotka Autonomous Okrug Kamchatka Krai — from Chukotka Autonomous Okrug — from Kamchatka Krai — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bering Sea
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Bokak Atoll
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Likiep Atoll
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just east of Ailinglaplap Atoll, (at ) Passing just west of Kili Island, (at ) Passing just east of Ebon Atoll, (at ) Passing just east of the island of Tikopia, (at )
|-
|
! scope="row" |
| Island of Erromango
|-valign="top"
| style="backg
|
https://en.wikipedia.org/wiki/NanoLanguage
|
NanoLanguage is a scripting interface built on top of the interpreted programming language Python, and is primarily intended for simulation of physical and chemical properties of nanoscale systems.
Introduction
Over the years, several electronic-structure codes based on density functional theory have been developed by different groups of academic researchers; VASP, Abinit, SIESTA, and Gaussian are just a few examples. The input to these programs is usually a simple text file written in a code-specific format with a set of code-specific keywords.
NanoLanguage was introduced by Atomistix A/S as an interface to Atomistix ToolKit (version 2.1) in order to provide a more flexible input format. A NanoLanguage script (or input file) is just a Python program and can be anything from a few lines to a script performing complex numerical simulations, communicating with other scripts and files, and communicating with other software (e.g. plotting programs).
NanoLanguage is not a proprietary product of Atomistix and can be used as an interface to other density functional theory codes as well as to codes utilizing e.g. tight-binding, k.p, or quantum-chemical methods.
Features
Built on top of Python, NanoLanguage includes the same functionality as Python and with the same syntax. Hence, NanoLanguage contains, among other features, common programming elements (for loops, if statements, etc.), mathematical functions, and data arrays.
In addition, a number of concepts and objects relevant to quantum chemistry and physics are built into NanoLanguage, e.g. a periodic table, a unit system (including both SI units and atomic units like Ångström), constructors of atomic geometries, and different functions for density-functional theory and transport calculations.
Example
This NanoLanguage script uses the Kohn–Sham method to calculate the total energy of a water molecule as a function of the bending angle.
# Define function for molecule setup
def waterConfiguration(angle, bondLeng
|
https://en.wikipedia.org/wiki/99%20Points%20of%20Intersection
|
99 Points of Intersection: Examples—Pictures—Proofs is a book on constructions in Euclidean plane geometry in which three or more lines or curves meet in a single point of intersection. This book was originally written in German by Hans Walser as 99 Schnittpunkte (Eagle / Ed. am Gutenbergplatz, 2004), translated into English by Peter Hilton and Jean Pedersen, and published by the Mathematical Association of America in 2006 in their MAA Spectrum series.
Topics and organization
The book is organized into three sections. The first section provides introductory material, describing different mathematical situations in which multiple curves might meet, and providing different possible explanations for this phenomenon, including symmetry, geometric transformations, and membership of the curves in a pencil of curves. The second section shows the 99 points of intersection of the title. Each is given on its own page, as a large figure with three smaller figures showing its construction, with a one-line caption but no explanatory text. The third section provides background material and proofs for some of these points of intersection, as well as extending and generalizing some of these results.
Some of these points of intersection are standard; for instance, these include the construction of the centroid of a triangle as the point where its three median lines meet, the construction of the orthocenter as the point where the three altitudes meet, and the construction of the circumcenter as the point where the three perpendicular bisectors of the sides meet, as well as two versions of Ceva's theorem. However, others are new to this book, and include intersections related to silver rectangles, tangent circles, the Pythagorean theorem, and the nine-point hyperbola.
Audience
John Jensen writes that "the clear and uncluttered illustrations of intersection make for a rich source for geometric investigation by high school geometry students".
And although Gerry Leversha calls the
|
https://en.wikipedia.org/wiki/Gloger%27s%20rule
|
Gloger's rule is an ecogeographical rule which states that within a species of endotherms, more heavily pigmented forms tend to be found in more humid environments, e.g. near the equator. It was named after the zoologist Constantin Wilhelm Lambert Gloger, who first remarked upon this phenomenon in 1833 in a review of covariation of climate and avian plumage color. Erwin Stresemann later noted that the idea had been expressed even earlier by Peter Simon Pallas in Zoographia Rosso-Asiatica (1811). Gloger found that birds in more humid habitats tended to be darker than their relatives from regions with higher aridity. Over 90% of 52 North American bird species studied conform to this rule.
One explanation of Gloger's rule in the case of birds appears to be the increased resistance of dark feathers to feather- or hair-degrading bacteria such as Bacillus licheniformis. Feathers in humid environments have a greater bacterial load, and humid environments are more suitable for microbial growth; dark feathers or hair are more difficult to break down. More resilient eumelanins (dark brown to black) are deposited in hot and humid regions, whereas in arid regions, pheomelanins (reddish to sandy color) predominate due to the benefit of crypsis.
Among mammals, there is a marked tendency in equatorial and tropical regions to have a darker skin color than poleward relatives. In this case, the underlying cause is probably the need to better protect against the more intense solar UV radiation at lower latitudes. However, absorption of a certain amount of UV radiation is necessary for the production of certain vitamins, notably vitamin D (see also osteomalacia).
Gloger's rule is also vividly demonstrated among human populations. Populations that evolved in sunnier environments closer to the equator tend to be darker-pigmented than populations originating farther from the equator. There are exceptions, however; among the most well known are the Tibetans and Inuit, who have darker s
|
https://en.wikipedia.org/wiki/Moses%20Holden
|
Moses Holden (21 November 1777 – 3 June 1864) was an English astronomer, known particularly for giving lectures on astronomy.
Life
Holden was born in Bolton, Lancashire, the second youngest of five children of Thomas Holden, a handloom weaver, and his wife Joyce. As a youth he worked in a foundry at Preston, until disabled by an accident. On his recovery he worked as a landscape gardener. From early in life he possessed a love of astronomy; he collected a library, and gave talks on the subject.
In 1814–15 he constructed a large orrery and a magic lantern, made to illustrate his astronomical lectures. These were first given in the Theatre Royal, Preston, in 1815, and then in many towns in the north of England; their success led to his touring throughout (northern) England to give lectures. He lectured at the Theatre Royal, Nottingham, in 1817. In 1826 he devoted the proceeds of one of his lectures to the erection of a monument in St. Michael's Church, Toxteth Park, Liverpool, to the memory of the astronomer Jeremiah Horrocks.
In 1818 Holden published A small Celestial Atlas, or Maps of the Visible Heavens, in the Latitude of Britain, (3rd edition 1834, 4th edition 1840). It was one of the earliest works of the kind published at a low price. He also compiled an almanac, published in 1835 and later. He made several microscopes, and made a telescope for the Revd William Carus Wilson.
He settled in Preston in 1828, where he gave courses on astronomy until 1852. He assisted in establishing the Preston Institution for the Diffusion of Knowledge, and from 1837 he was an enthusiastic member of the British Association for the Advancement of Science. In 1834 the Freedom of the Borough was conferred on him. Holden died at his home in Jordan Street, Preston on 3 June 1864, aged 86.
|
https://en.wikipedia.org/wiki/Conflict-free%20replicated%20data%20type
|
In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features:
The application can update any replica independently, concurrently and without coordinating with other replicas.
An algorithm (itself part of the data type) automatically resolves any inconsistencies that might occur.
Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge.
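The three properties above can be sketched with a grow-only counter (G-Counter), one of the simplest state-based CRDTs. The class and method names here are illustrative, not taken from a particular paper or library:

```python
# Grow-only counter (G-Counter), a minimal state-based CRDT sketch.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # per-replica contribution

    def increment(self, n=1):
        # a replica only ever updates its own slot
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # element-wise max is commutative, associative and idempotent,
        # so replicas converge regardless of merge order or repetition
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(3)   # concurrent, uncoordinated updates
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # both replicas converge to 5
```

Because the merge is a join in a lattice of states, applying it in any order, any number of times, yields the same converged state.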
The CRDT concept was formally defined in 2011 by Marc Shapiro, Nuno Preguiça, Carlos Baquero and Marek Zawirski. Development was initially motivated by collaborative text editing and mobile computing. CRDTs have also been used in online chat systems, online gambling, and in the SoundCloud audio distribution platform. The NoSQL distributed databases Redis, Riak and Cosmos DB have CRDT data types.
Background
Concurrent updates to multiple replicas of the same data, without coordination between the computers hosting the replicas, can result in inconsistencies between the replicas, which in the general case may not be resolvable. Restoring consistency and data integrity when there are conflicts between updates may require some or all of the updates to be entirely or partially dropped.
Accordingly, much of distributed computing focuses on the problem of how to prevent concurrent updates to replicated data. But another possible approach is optimistic replication, where all concurrent updates are allowed to go through, with inconsistencies possibly created, and the results are merged or "resolved" later. In this approach, consistency between the replicas is eventually re-established via "merges" of differing replicas. While optimistic replication might not work in the general case, there is a significant and practically useful class of data structures, CRDTs, where it does work — where it is always possible to merge or resolve concurrent updates on differe
|
https://en.wikipedia.org/wiki/Adaptive%20scalable%20texture%20compression
|
Adaptive scalable texture compression (ASTC) is a lossy block-based texture compression algorithm developed by Jørn Nystad et al. of ARM Ltd. and AMD.
Full details of ASTC were first presented publicly at the High Performance Graphics 2012 conference, in a paper by Olson et al. entitled "Adaptive Scalable Texture Compression".
ASTC was adopted as an official extension for both OpenGL and OpenGL ES by the Khronos Group on 6 August 2012.
Hardware support
On Linux, all Gallium 3D drivers have had a software fallback since 2018, so ASTC textures can be decoded on any AMD Radeon GPU.
Overview
The method of compression is an evolution of Color Cell Compression with features including numerous closely spaced fractional bit rates, multiple color formats, support for high-dynamic-range (HDR) textures, and real 3D texture support.
The stated primary design goal for ASTC is to enable content developers to have better control over the space/quality tradeoff inherent in any lossy compression scheme. With ASTC, the ratio between adjacent bit rates is of the order of 25%, making it less expensive to increase quality for a given texture.
Encoding different assets often requires different color formats. ASTC allows a wide choice of input formats, including luminance-only, luminance-alpha, RGB, RGBA, and modes optimized for surface normals. The designer can thus choose the optimal format without having to support multiple different compression schemes.
The choices of bit rate and color format do not constrain each other, so that it's possible to choose from a large number of combinations.
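The fractional bit rates follow directly from ASTC's fixed 128-bit block size: the rate per texel is just 128 divided by the number of texels in the block footprint. A short sketch (the footprint list follows the standard 2D profile):

```python
# Every ASTC block occupies 128 bits; bit rate per texel = 128 / (w * h).
ASTC_BLOCK_BITS = 128
footprints = [(4, 4), (5, 4), (5, 5), (6, 5), (6, 6), (8, 5), (8, 6),
              (8, 8), (10, 5), (10, 6), (10, 8), (10, 10), (12, 10), (12, 12)]

for w, h in footprints:
    bpt = ASTC_BLOCK_BITS / (w * h)
    print(f"{w}x{h}: {bpt:.2f} bits/texel")
# 4x4 gives 8.00 bpt, 6x6 gives 3.56 bpt, 12x12 gives 0.89 bpt
```

This is what produces the closely spaced ladder of rates from 8 down to about 0.89 bits per texel.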
Despite this flexibility, ASTC achieves better peak signal-to-noise ratios than PVRTC, S3TC, and ETC2 when measured at 2 and 3.56 bits per texel. For HDR textures, it produces results comparable to BC6H at 8 bits per texel.
Supported color formats
ASTC supports anywhere from 1 to 4 channels. In modes with 2–4 channels, one of the channels can be treated as "uncorrelated" and be given a separate gradi
|
https://en.wikipedia.org/wiki/Descriptive%20botanical%20name
|
Descriptive botanical names are scientific names of groups of plants that are irregular, not being derived systematically from the name of a type genus. They may describe some characteristics of the group in general or may be a name already in existence before regularised scientific nomenclature.
Descriptive names can occur above or at the rank of family. There is only a single descriptive name below the rank of family (the subfamily Papilionoideae).
Above the rank of family
Descriptive names above the rank of family are governed by Article 16 of the International Code of Nomenclature for algae, fungi, and plants (ICN), which rules that a name above the rank of family may either be ‘automatically typified’ (such as Magnoliophyta and Magnoliopsida from the type genus Magnolia) or be descriptive.
Descriptive names of this type may be used unchanged at different ranks (without modifying the suffix). These descriptive plant names are decreasing in importance, becoming less common than ‘automatically typified names’, but many are still in use, such as:
Plantae, Algae, Musci, Fungi, Embryophyta, Tracheophyta, Spermatophyta, Gymnospermae, Coniferae, Coniferales, Angiospermae, Monocotyledones, Dicotyledones, etc.
Many of these descriptive names have a very long history, often preceding Carl Linnaeus. Some are Classical Latin common nouns in the nominative plural, meaning for instance ‘the plants’, ‘the seaweeds’, ‘the mosses’. Like all names above the rank of family, these names follow the Latin grammatical rules of nouns in the plural, and are written with an initial capital letter.
At the rank of family
Article 18.5 of the ICN allows a descriptive name, of long usage, for the following eight families. For each of these families there also exists a name based on the name of an included genus (an alternative name that is also allowed, here in parentheses):
Compositae = "composites" (alternative name: Asteraceae, based on the genus Aster)
Cruciferae = "cross-bearers"
|
https://en.wikipedia.org/wiki/Titlo
|
Titlo is an extended diacritic symbol initially used in early Cyrillic and Glagolitic manuscripts, e.g., in Old Church Slavonic and Old East Slavic languages. The word is a borrowing from the Greek , "title" and is a cognate of the words tittle and tilde. The titlo still appears in inscriptions on modern icons and in service books printed in Church Slavonic.
The titlo is drawn as a line over a text. In some styles of writing the line is drawn with serifs, so that it may appear as a zigzag. The usual form in this case is short stroke up, falling slanted line, short stroke up; an alternative resembles a volta bracket: short stroke up, horizontal line, short stroke down.
The titlo has several meanings depending on the context:
One meaning is in its use to mark letters when they are used as numerals. This is a quasi-decimal system analogous to Greek numerals.
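The quasi-decimal scheme can be sketched as a simple lookup-and-sum. The letter values below follow the standard table of Cyrillic numerals, but letter forms vary between manuscripts, and the titlo itself and the thousands sign are omitted here for simplicity:

```python
# Sketch of Cyrillic letters-as-numerals (units, tens, hundreds only).
CYRILLIC_VALUES = {
    "а": 1, "в": 2, "г": 3, "д": 4, "е": 5, "ѕ": 6, "з": 7, "и": 8, "ѳ": 9,
    "і": 10, "к": 20, "л": 30, "м": 40, "н": 50, "ѯ": 60, "о": 70, "п": 80, "ч": 90,
    "р": 100, "с": 200, "т": 300, "у": 400, "ф": 500, "х": 600, "ѱ": 700, "ѡ": 800, "ц": 900,
}

def cyrillic_to_int(text):
    # in a manuscript, a titlo over the letters marks them as a numeral;
    # the value is the sum of the individual letter values
    return sum(CYRILLIC_VALUES[ch] for ch in text)

print(cyrillic_to_int("рке"))  # 100 + 20 + 5 = 125
```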
A titlo is also used as a scribal abbreviation mark for frequently written long words and also for nouns describing sacred persons. In place of , for example, 'God' was written under the titlo and '[he] speaks' is abbreviated as . Fig. 3 shows a list of the most common of these abbreviations in current use in printed Church Slavonic. Fig. 2 shows 'Lord' abbreviated to its first letter and stem ending (also a single letter here, in the nominative case). Around the 15th century, titla in most schools came to be restricted to a special semiotic meaning, used exclusively to refer to sacred concepts, while the same words were otherwise spelled out without titla, and so, for example, while "God" in the sense of the one true God is abbreviated as above, "god" referring to "false" gods is spelled out; likewise, while the word for "angel" is generally abbreviated, "angels" is spelled out in "performed by evil angels" in Psalm 77. This corresponds to the Nomina sacra (Latin: "Sacred names") tradition of using contractions for certain frequently occurring names in Greek Scriptures.
A short titlo is placed over a
|
https://en.wikipedia.org/wiki/Minkowski%27s%20theorem
|
In mathematics, Minkowski's theorem is the statement that every convex set in R^n which is symmetric with respect to the origin and which has volume greater than 2^n contains a non-zero integer point (meaning a point in Z^n that is not the origin). The theorem was proved by Hermann Minkowski in 1889 and became the foundation of the branch of number theory called the geometry of numbers. It can be extended from the integers to any lattice L and to any symmetric convex set with volume greater than 2^n d(L), where d(L) denotes the covolume of the lattice (the absolute value of the determinant of any of its bases).
Formulation
Suppose that L is a lattice of determinant d(L) in the n-dimensional real vector space R^n and S is a convex subset of R^n that is symmetric with respect to the origin, meaning that if x is in S then −x is also in S. Minkowski's theorem states that if the volume of S is strictly greater than 2^n d(L), then S must contain at least one lattice point other than the origin. (Since the set S is symmetric, it would then contain at least three lattice points: the origin 0 and a pair of points ±x, where x ∈ L \ {0}.)
Example
The simplest example of a lattice is the integer lattice Z^n of all points with integer coefficients; its determinant is 1. For n = 2, the theorem claims that a convex figure in the Euclidean plane symmetric about the origin and with area greater than 4 encloses at least one lattice point in addition to the origin. The area bound is sharp: if S is the interior of the square with vertices (±1, ±1), then S is symmetric and convex, and has area 4, but the only lattice point it contains is the origin. This example, showing that the bound of the theorem is sharp, generalizes to hypercubes in every dimension n.
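The planar case can be illustrated by brute force. The ellipse below is a made-up example: it is origin-symmetric and convex with area πab ≈ 6.28 > 4, so the theorem guarantees a non-zero integer point, which a direct search confirms:

```python
import math

# Origin-symmetric convex region: the ellipse x^2/4 + y^2 <= 1
# (semi-axes a = 2, b = 1, area pi*a*b ≈ 6.28 > 4).
def inside(x, y):
    return x * x / 4 + y * y <= 1

area = math.pi * 2 * 1
assert area > 4  # hypothesis of the theorem holds

# search a small window of the integer lattice for non-zero points
hits = [(x, y) for x in range(-3, 4) for y in range(-3, 4)
        if (x, y) != (0, 0) and inside(x, y)]
print(hits)  # non-empty, e.g. (1, 0) and (-1, 0) lie inside
```

As the theorem's symmetry remark predicts, the non-zero points found come in ± pairs.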
Proof
The following argument proves Minkowski's theorem for the specific case of L = Z^2.
Proof of the Z^2 case: Consider the map
f : S → R^2 / 2Z^2,  f(x, y) = (x mod 2, y mod 2).
Intuitively, this map cuts the plane into 2 by 2 squares, then stacks the squares on top of each other. Clearly f(S) has area less than or equal to 4, because this set lies within a 2 by 2 square.
|
https://en.wikipedia.org/wiki/225%20%28number%29
|
225 (two hundred [and] twenty-five) is the natural number following 224 and preceding 226.
In mathematics
225 is the smallest number that is a polygonal number in five different ways. It is a square number (225 = 15^2),
an octagonal number (225 = 9·(3·9 − 2)), and a squared triangular number (225 = (1 + 2 + 3 + 4 + 5)^2 = 1^3 + 2^3 + 3^3 + 4^3 + 5^3).
As the square of a double factorial, 225 = (5!!)^2 counts the number of permutations of six items in which all cycles have even length, or the number of permutations in which all cycles have odd length. And as one of the Stirling numbers of the first kind, it counts the number of permutations of six items with exactly three cycles.
225 is a highly composite odd number, meaning that it has more divisors than any smaller odd number. After 1 and 9, 225 is the third smallest number n for which σ(φ(n)) = φ(σ(n)), where σ is the sum of divisors function and φ is Euler's totient function. 225 is a refactorable number.
225 is the smallest square number to have one of every digit in some number base (225 is 3201 in base 4)
225 is the first odd number with exactly 9 divisors.
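Several of the claims above are easy to verify mechanically (the helper function is ad hoc):

```python
n = 225

assert n == 15 ** 2                           # square number
assert n == sum(range(1, 6)) ** 2             # squared triangular: (1+2+3+4+5)^2
assert n == sum(k ** 3 for k in range(1, 6))  # equivalently 1^3 + ... + 5^3
assert n == 9 * (3 * 9 - 2)                   # octagonal number, k = 9

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

assert len(divisors(n)) == 9                  # exactly 9 divisors
# highly composite odd: no smaller odd number has as many divisors
assert all(len(divisors(m)) < 9 for m in range(1, n, 2))
print("all checks pass")
```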
|
https://en.wikipedia.org/wiki/NMOS%20logic
|
N-type metal–oxide–semiconductor logic uses n-type (-) MOSFETs (metal–oxide–semiconductor field-effect transistors) to implement logic gates and other digital circuits. These nMOS transistors operate by creating an inversion layer in a p-type transistor body. This inversion layer, called the n-channel, can conduct electrons between n-type "source" and "drain" terminals. The n-channel is created by applying voltage to the third terminal, called the gate. Like other MOSFETs, nMOS transistors have four modes of operation: cut-off (or subthreshold), triode, saturation (sometimes called active), and velocity saturation.
For many years, NMOS circuits were much faster than comparable PMOS and CMOS circuits, which had to use much slower p-channel transistors. It was also easier to manufacture NMOS than CMOS, as the latter has to implement p-channel transistors in special n-wells on the p-substrate. The major drawback with NMOS (and most other logic families) is that a direct current must flow through a logic gate even when the output is in a steady state (low in the case of NMOS). This means static power dissipation, i.e. power drain even when the circuit is not switching, leading to high power consumption.
Additionally, just like in diode–transistor logic, transistor–transistor logic, emitter-coupled logic etc., the asymmetric input logic levels make NMOS and PMOS circuits more susceptible to noise than CMOS. These disadvantages are why CMOS logic has supplanted most of these types in most high-speed digital circuits such as microprocessors despite the fact that CMOS was originally very slow compared to logic gates built with bipolar transistors.
Overview
MOS stands for metal-oxide-semiconductor, reflecting the way MOS-transistors were originally constructed, predominantly before the 1970s, with gates of metal, typically aluminium. Since around 1970, however, most MOS circuits have used self-aligned gates made of polycrystalline silicon, a technology first developed by
|
https://en.wikipedia.org/wiki/Relaxation%20%28NMR%29
|
In MRI and NMR spectroscopy, an observable nuclear spin polarization (magnetization) is created by a homogeneous magnetic field. This field makes the magnetic dipole moments of the sample precess at the resonance (Larmor) frequency of the nuclei. At thermal equilibrium, nuclear spins precess randomly about the direction of the applied field. They become abruptly phase coherent when they are hit by radiofrequency (RF) pulses at the resonant frequency, created orthogonal to the field. The RF pulses cause the population of spin-states to be perturbed from their thermal equilibrium value. The generated transverse magnetization can then induce a signal in an RF coil that can be detected and amplified by an RF receiver. The return of the longitudinal component of the magnetization to its equilibrium value is termed spin-lattice relaxation while the loss of phase-coherence of the spins is termed spin-spin relaxation, which is manifest as an observed free induction decay (FID).
For spin-½ nuclei (such as 1H), the polarization due to spins oriented with the field, N−, relative to the spins oriented against the field, N+, is given by the Boltzmann distribution:
N−/N+ = e^(ΔE/kT)
where ΔE is the energy level difference between the two populations of spins, k is the Boltzmann constant, and T is the sample temperature. At room temperature, the number of spins in the lower energy level, N−, slightly outnumbers the number in the upper level, N+. The energy gap between the spin-up and spin-down states in NMR is minute by atomic emission standards at magnetic fields conventionally used in MRI and NMR spectroscopy. Energy emission in NMR must be induced through a direct interaction of a nucleus with its external environment rather than by spontaneous emission. This interaction may be through the electrical or magnetic fields generated by other nuclei, electrons, or molecules. Spontaneous emission of energy is a radiative process involving the release of a photon and typified by phenomena such as fluo
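An order-of-magnitude sketch makes the smallness of this gap concrete. The field strength and temperature below are assumed, typical values, and the gyromagnetic ratio for 1H is approximate:

```python
import math

h = 6.626e-34            # Planck constant, J*s
k = 1.381e-23            # Boltzmann constant, J/K
gamma_over_2pi = 42.58e6 # 1H gyromagnetic ratio, Hz/T (approximate)
B0 = 1.5                 # assumed field strength, T
T = 300.0                # assumed sample temperature, K

delta_E = h * gamma_over_2pi * B0       # Larmor energy gap
ratio = math.exp(delta_E / (k * T))     # N-/N+ from the Boltzmann distribution
print(ratio)  # ≈ 1.00001: only about 10 ppm excess in the lower level
```

The excess population driving the whole NMR signal is thus on the order of parts per million at room temperature.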
|
https://en.wikipedia.org/wiki/Digital%20Item
|
Digital Item is the basic unit of transaction in the MPEG-21 framework. It is a structured digital object, including a standard representation, identification and metadata.
A Digital Item may be a combination of resources like videos, audio tracks or images; metadata, such as descriptors and identifiers; and structure for describing the relationships between the resources.
It is becoming difficult for users of content to identify and interpret the different intellectual property rights that are associated with the elements of multimedia content. For this reason, new solutions are required for the access, delivery, management and protection of this content.
Digital Item Declaration
MPEG-21 proposes to facilitate a wide range of actions involving Digital Items so there is a need for a very precise description for defining exactly what constitutes such an item.
A Digital Item Declaration (DID) is a document that specifies the makeup, structure and organisation of a Digital Item. The purpose of the Digital Item Declaration is to describe a set of abstract terms and concepts, to form a useful model for defining what a Digital Item is. Following this model, a Digital Item is the digital representation of an object, which is managed, described or exchanged within the model.
Digital Item Identification
Digital Item Identification (DII) specification includes not only how to identify Digital Items uniquely but also to distinguish different types of them. These Identifiers are placed in a specific part of the Digital Item Declaration, which is the statement element, and they are associated with Digital Items.
Digital Items and their parts are identified by encapsulating uniform resource identifiers, which are a compact string of characters for identifying an abstract or physical resource.
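A declaration following this model can be sketched as nested XML. The element names below follow the general DIDL pattern of Item, Descriptor, Statement, Component and Resource described above, but the identifier URN, the resource URI, and the omitted namespaces are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical, namespace-free sketch of a Digital Item Declaration.
didl = ET.Element("DIDL")
item = ET.SubElement(didl, "Item")

# DII identifier lives in a Statement inside a Descriptor
descriptor = ET.SubElement(item, "Descriptor")
statement = ET.SubElement(descriptor, "Statement", mimeType="text/plain")
statement.text = "urn:example:digital-item:0001"  # fictitious identifier

# a Component binds the item to an actual resource
component = ET.SubElement(item, "Component")
ET.SubElement(component, "Resource",
              mimeType="video/mpeg", ref="http://example.org/video.mpg")

print(ET.tostring(didl, encoding="unicode"))
```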
The elements of a DID can have zero, one or more descriptors; each descriptor may contain a statement which can contain an identifier relating to the parent element of the stateme
|
https://en.wikipedia.org/wiki/Compiler
|
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.
There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.
Related software includes decompilers, which translate from a low-level language to a higher-level one; source-to-source compilers or transpilers, which translate between high-level languages; and language rewriters, usually programs that translate the form of expressions without a change of language. A compiler-compiler is a compiler that produces a compiler (or part of one), often in a generic and reusable way so as to be able to produce many differing compilers.
A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.
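A deliberately tiny toy pipeline can make these phases concrete. The "source language" here is just integer expressions with + and *, and the made-up "target" is a small stack machine; none of this reflects any real compiler's design:

```python
import re

def lex(src):
    # lexical analysis: source text -> token stream
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    # parsing: tokens -> tree, grammar:
    # expr := term ('+' term)* ; term := factor ('*' factor)* ; factor := int | '(' expr ')'
    pos = 0
    def peek(): return tokens[pos] if pos < len(tokens) else None
    def expr():
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = ("+", node, term())
        return node
    def term():
        nonlocal pos
        node = factor()
        while peek() == "*":
            pos += 1
            node = ("*", node, factor())
        return node
    def factor():
        nonlocal pos
        tok = peek()
        if tok == "(":
            pos += 1
            node = expr()
            pos += 1  # skip ')'
            return node
        pos += 1
        return ("num", int(tok))
    return expr()

def codegen(node, out):
    # code generation: post-order walk emits stack-machine instructions
    if node[0] == "num":
        out.append(("PUSH", node[1]))
    else:
        codegen(node[1], out)
        codegen(node[2], out)
        out.append(("ADD",) if node[0] == "+" else ("MUL",))
    return out

def run(code):
    # a trivial "target machine" to execute the generated code
    stack = []
    for op, *args in code:
        if op == "PUSH": stack.append(args[0])
        elif op == "ADD": stack.append(stack.pop() + stack.pop())
        elif op == "MUL": stack.append(stack.pop() * stack.pop())
    return stack[0]

code = codegen(parse(lex("2+3*(4+1)")), [])
print(run(code))  # 17
```

Even this sketch shows why modular phases help: the lexer, parser and code generator can each be tested and replaced independently.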
Compilers are not the only language processor us
|
https://en.wikipedia.org/wiki/Series%2040
|
Series 40, often shortened as S40, is a software platform and application user interface (UI) software on Nokia's broad range of mid-tier feature phones, as well as on some of the Vertu line of luxury phones. It was one of the world's most widely used mobile phone platforms and found in hundreds of millions of devices. Nokia announced on 25 January 2012 that the company has sold over 1.5 billion Series 40 devices. It was not used for smartphones, with Nokia turning first to Symbian, then in 2012–2017 to Windows Phone, and most recently Android. However, in 2012 and 2013, several Series 40 phones from the Asha line, such as the 308, 309 and 311, were advertised as "smartphones" although they do not actually support smartphone features like multitasking or a fully fledged HTML browser.
In 2014, Microsoft acquired Nokia's mobile phones business. As part of a licensing agreement with the company, Microsoft Mobile is allowed to use the Nokia brand on feature phones, such as the Series 40 range. However, a July 2014 company memo revealed that Microsoft would end future production of Series 40 devices. It was replaced by Series 30+.
History
Series 40 was introduced in 1999 with the release of the Nokia 7110. It had a 96 × 65 pixel monochrome display and was the first phone to come with a WAP browser. Over the years, the S40 UI evolved from a low-resolution UI to a high-resolution color UI with an enhanced graphical look. The third generation of Series 40 that became available in 2005 introduced support for devices with resolutions as high as QVGA (240×320). It is possible to customize the look and feel of the UI via comprehensive themes. In 2012, Nokia Asha mobile phones 200/201, 210, 302, 303, 305, 306, 308, 310 and 311 were released and all used Series 40. The final feature phone running Series 40 was the Nokia 515 from 2013, running the 6th Edition.
Technical information
Applications
Series 40 provides communication applications such as telephone, Internet telephon
|
https://en.wikipedia.org/wiki/Cultivar
|
A cultivar is a kind of cultivated plant that people have selected for desired traits and which retains those traits when propagated. Methods used to propagate cultivars include division, root and stem cuttings, offsets, grafting, tissue culture, or carefully controlled seed production. Most cultivars arise from purposeful human manipulation, but some originate from wild plants that have distinctive characteristics. Cultivar names are chosen according to rules of the International Code of Nomenclature for Cultivated Plants (ICNCP), and not all cultivated plants qualify as cultivars. Horticulturists generally believe the word cultivar was coined as a term meaning "cultivated variety".
Popular ornamental plants like roses, camellias, daffodils, rhododendrons, and azaleas are commonly cultivars produced by breeding and selection or as sports, for floral colour or size, plant form, or other desirable characteristics. Similarly, the world's agricultural food crops are almost exclusively cultivars that have been selected for characters such as improved yield, flavour, and resistance to disease, and very few wild plants are now used as food sources. Trees used in forestry are also special selections grown for their enhanced quality and yield of timber.
Cultivars form a major part of Liberty Hyde Bailey's broader group, the cultigen, which is defined as a plant whose origin or selection is primarily due to intentional human activity. A cultivar is not the same as a botanical variety, which is a taxonomic rank below subspecies, and there are differences in the rules for creating and using the names of botanical varieties and cultivars. In recent times, the naming of cultivars has been complicated by the use of statutory patents for plants and recognition of plant breeders' rights.
The International Union for the Protection of New Varieties of Plants (UPOV – ) offers legal protection of plant cultivars to persons or organisations that introduce new cultivars to commerce.
|
https://en.wikipedia.org/wiki/Phycological%20Society%20of%20America
|
The Phycological Society of America (PSA) is a professional society, founded in 1946, that is dedicated to the advancement of phycology, the study of algae. The PSA is responsible for the publication of Journal of Phycology and organizes annual conferences among other events that aid in the advancement of related algal sciences.
Membership in the Phycological Society of America is open to anyone from any nation who is concerned with the physiology, taxonomy, molecular biology, experimental biology, cell biology, and developmental biology of related algal sciences. As of 2012, membership was approximately 2,000 from 63 countries.
Awards and Fellowships
The PSA offers several honorary awards, announced at the annual conference:
The Harold C. Bold Award: established in 1973, given for the outstanding graduate student paper(s) presented at the Annual Meeting, as determined by the Bold Award Committee.
The Gerald W. Prescott Award: given in recognition of scholarly work in English, in the form of a published book or monograph devoted to phycology, published in the last two years.
The Luigi Provasoli Award: presented annually to the author(s) of the three, or fewer, outstanding papers published in the Journal of Phycology during the previous fiscal year.
The Ralph A. Lewin Poster Award: presented to graduate student members of the PSA for the creation and presentation of an original research poster.
The L. H. Tiffany Award: presented to an individual or team who enhances the awareness or importance of algae through an original work within the previous three years.
The Award of Excellence: established to recognize phycologists who have demonstrated sustained scholarly contributions to and impact on the field of phycology over their careers, and who have also provided service to the PSA as well as other phycological societies.
The
|
https://en.wikipedia.org/wiki/Microwave%20welding
|
Microwave welding is a plastic welding process that utilizes alternating electromagnetic fields in the microwave band to join thermoplastic base materials that are melted by the phenomenon of dielectric heating.
See also
Dielectric heating
Plastic welding
Radio-frequency welding
|
https://en.wikipedia.org/wiki/Butroxydim
|
Butroxydim is a chemical used as a herbicide. It is a group A herbicide used to kill grass weeds in a range of broadacre crops. Structurally related herbicides against grasses are alloxydim, sethoxydim, clethodim, and cycloxydim.
|
https://en.wikipedia.org/wiki/Variational%20method%20%28quantum%20mechanics%29
|
In quantum mechanics, the variational method is one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states. This allows calculating approximate wavefunctions such as molecular orbitals. The basis for this method is the variational principle.
The method consists of choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible. The wavefunction obtained by fixing the parameters to such values is then an approximation to the ground state wavefunction, and the expectation value of the energy in that state is an upper bound to the ground state energy. The Hartree–Fock method, Density matrix renormalization group, and Ritz method apply the variational method.
Description
Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian H. Ignoring complications about continuous spectra, we consider the discrete spectrum of H and a basis of eigenvectors |ψ_λ⟩ (see spectral theorem for Hermitian operators for the mathematical background):
⟨ψ_λ1 | ψ_λ2⟩ = δ_λ1,λ2
where δ_λ1,λ2 is the Kronecker delta
and the |ψ_λ⟩ satisfy the eigenvalue equation
H |ψ_λ⟩ = λ |ψ_λ⟩.
Once again ignoring complications involved with a continuous spectrum of H, suppose the spectrum of H is bounded from below and that its greatest lower bound is E0. The expectation value of H in a state |ψ⟩ is then
⟨H⟩ = ⟨ψ|H|ψ⟩.
If we were to vary over all possible states with norm 1 trying to minimize the expectation value of H, the lowest value would be E0 and the corresponding state would be the ground state, as well as an eigenstate of H. Varying over the entire Hilbert space is usually too complicated for physical calculations, and a subspace of the entire Hilbert space is chosen, parametrized by some (real) differentiable parameters. The choice of the subspace is called the ansatz. Some choices of ansatzes lead to better approximations than others, therefore the choice of ansatz is important.
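A minimal worked example: take the 2×2 Hermitian matrix [[2, 1], [1, 2]] as a toy Hamiltonian (exact eigenvalues 1 and 3), and the one-parameter ansatz (cos t, sin t), which is normalized by construction. Minimizing the expectation value over t gives an upper bound on the ground energy, which in this simple case reaches it exactly:

```python
import math

def expectation(t):
    # <psi|H|psi> for H = [[2, 1], [1, 2]] and psi = (cos t, sin t);
    # expands to 2*c^2 + 2*c*s + 2*s^2 = 2 + sin(2t)
    c, s = math.cos(t), math.sin(t)
    return 2 * c * c + 2 * c * s + 2 * s * s

# crude grid search over the variational parameter t
best = min(expectation(k * math.pi / 1000) for k in range(1000))
print(best)  # ≈ 1.0, the exact smallest eigenvalue
```

In realistic calculations the ansatz subspace is far richer and the minimization is done with gradient-based or self-consistent methods, but the logic is the same: every trial value of ⟨H⟩ bounds the ground-state energy from above.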
Let
|
https://en.wikipedia.org/wiki/Psychometrika
|
Psychometrika is the official journal of the Psychometric Society, a professional body devoted to psychometrics and quantitative psychology. The journal covers quantitative methods for measurement and evaluation of human behavior, including statistical methods and other mathematical techniques. Past editors include Marion Richardson, Dorothy Adkins, Norman Cliff, and Willem J. Heiser. According to Journal Citation Reports, the journal had a 2019 impact factor of 1.959.
History
In 1935, L. L. Thurstone, E. L. Thorndike and J. P. Guilford founded Psychometrika and also the Psychometric Society.
Editors-in-chief
The complete list of editors-in-chief of Psychometrika can be found at:
https://www.psychometricsociety.org/content/past-psychometrika-editors
The following is a subset of persons who have been editor-in-chief of Psychometrika:
Paul Horst
Albert K. Kurtz
Dorothy Adkins
Norman Cliff
Roger Millsap
Shizuhiko Nishisato
Willem J. Heiser
Irini Moustaki
Matthias von Davier
Some notable papers
See also
List of scientific journals in statistics
|
https://en.wikipedia.org/wiki/Online%20credentials%20for%20learning
|
Online credentials for learning are digital credentials that are offered in place of traditional paper credentials for a skill or educational achievement. They are directly linked to the accelerated development of internet communication technologies, the development of digital badges, electronic passports and massive open online courses (MOOCs).
History
Online credentials have their origin in the concept of open educational resources (OER), which was invented during the Forum on Open Courseware for Higher Education in Developing Countries held in 2002 at UNESCO. Over the next decade the OER concept gained significant traction, and this was confirmed by the World Open Educational Resources (OER) Congress organized by UNESCO in 2012. One of the outcomes of the congress was to encourage the open licensing of educational materials produced with public funds. Creative Commons licensing provides the necessary standardization for copyright permissions, with a strong emphasis on the shift towards sharing and open licensing.
Digital credentials ecosystem
The digital credentials ecosystem is made up of a combination of traditional (better established) systems and flexible and dynamic (less regulated and new) systems. The challenge for the recognition of learning is that the pace of development, and also the point of departure, of these two aspects is different. The system is made up of seven interrelated sectors and groups of stakeholders, anchored to specific functions in the digital credentials environment.
Use. These are the users of credentials, notably learners, who are placed at the centre of the system. Providers and employers can also be users.
Provide. Referring to education and training institutions and the emerging variety of for-profit and non-profit digital platforms, such as Coursera, FutureLearn, Credly, Verifdiploma and Mozilla.
Award. Awarding bodies in the traditional sense are institutions and professional bodies. Employers, MOOCs, and in some i
|
https://en.wikipedia.org/wiki/Conjugate%20coding
|
Conjugate coding is a cryptographic tool introduced by Stephen Wiesner in the late 1960s. It is one of the two applications Wiesner described for quantum coding, the other being a method for creating fraud-proof banknotes. The application the concept was built around was a method of transmitting multiple messages in such a way that reading one destroys the others. This is called quantum multiplexing, and it uses photons polarized in conjugate bases as "qubits" to pass information. Conjugate coding can also be viewed as a simple extension of a random number generator.
At the behest of Charles Bennett, Wiesner published the manuscript explaining the basic idea of conjugate coding with a number of examples, but it was not embraced because it was significantly ahead of its time. After its initial rejection, the idea re-emerged in the world of public-key cryptography in the 1980s as oblivious transfer, first through the work of Michael Rabin and then of Shimon Even. It is now used in the field of quantum computing. The initial concept of quantum cryptography developed by Bennett and Gilles Brassard was also based on this idea.
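The destructive-readout idea behind quantum multiplexing can be sketched with a toy classical simulation (illustrative only, not Wiesner's actual scheme): a bit encoded in one of two conjugate polarization bases is recovered reliably only when measured in the matching basis, while measuring in the conjugate basis yields a coin flip and destroys the original information.

```python
import random

def measure(encoding_basis, bit, measurement_basis):
    """Toy model of reading a photon polarized in one of two conjugate bases.

    A matching basis recovers the bit deterministically; the conjugate
    basis gives a uniformly random outcome, destroying the information.
    """
    if encoding_basis == measurement_basis:
        return bit
    return random.randint(0, 1)

random.seed(0)
bit = 1
same = [measure("+", bit, "+") for _ in range(1000)]    # rectilinear readout
wrong = [measure("+", bit, "x") for _ in range(1000)]   # diagonal readout
print(all(b == bit for b in same))      # matching basis is always correct
print(0.4 < sum(wrong) / 1000 < 0.6)    # wrong basis is ~50/50 noise
```

The basis labels "+" and "x" are conventional shorthand for the rectilinear and diagonal polarization bases.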
|
https://en.wikipedia.org/wiki/List%20of%20materials%20properties
|
A material property is an intensive property of a material, i.e., a physical property or chemical property that does not depend on the amount of the material. These quantitative properties may be used as a metric by which the benefits of one material versus another can be compared, thereby aiding in materials selection.
A property having a fixed value for a given material or substance is called a material constant or constant of matter.
(Material constants should not be confused with physical constants, which have a universal character.)
A material property may also be a function of one or more independent variables, such as temperature. Materials properties often vary to some degree according to the direction in the material in which they are measured, a condition referred to as anisotropy. Materials properties that relate to different physical phenomena often behave linearly (or approximately so) in a given operating range. Modeling them as linear functions can significantly simplify the differential constitutive equations that are used to describe the property.
Equations describing relevant materials properties are often used to predict the attributes of a system.
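As a sketch of such a linear model, electrical resistivity is often approximated as p(T) = p_ref * (1 + a * (T - T_ref)); the copper figures below are typical textbook values, used here purely for illustration.

```python
def linear_property(value_ref, coeff, temp, temp_ref=20.0):
    """Linear model of a temperature-dependent material property:
    p(T) = p_ref * (1 + coeff * (T - T_ref))."""
    return value_ref * (1.0 + coeff * (temp - temp_ref))

# Typical textbook values for copper: resistivity ~1.68e-8 ohm*m at 20 C,
# temperature coefficient ~0.00393 per degree C.
rho_100 = linear_property(1.68e-8, 0.00393, 100.0)
print(round(rho_100 * 1e8, 3))  # 2.208, i.e. ~2.208e-8 ohm*m at 100 C
```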
The properties are measured by standardized test methods. Many such methods have been documented by their respective user communities and published through the Internet; see ASTM International.
Acoustical properties
Acoustical absorption
Speed of sound
Sound reflection
Sound transfer
Third order elasticity (Acoustoelastic effect)
Atomic properties
Atomic mass: (applies to each element) the average mass of the atoms of an element, in daltons (Da), a.k.a. atomic mass units (amu).
Atomic number: (applies to individual atoms or pure elements) the number of protons in each nucleus
Relative atomic mass, a.k.a. atomic weight: (applies to individual isotopes or specific mixtures of isotopes of a given element) (no units)
Standard atomic weight: the average relative atomic mass of a typical sample of the ele
|
https://en.wikipedia.org/wiki/N-entity
|
In telecommunication, an n-entity is an active element in the n-th layer of the Open Systems Interconnection Reference Model (OSI-RM) that (a) interacts directly with elements, i.e., entities, of the layer immediately above or below the n-th layer, (b) is defined by a unique set of rules, i.e., syntax, and information formats, including data and control formats, and (c) performs a defined set of functions.
The n refers to any one of the 7 layers of the OSI-RM.
In an existing layered open system, the n may refer to any given layer in the system.
Layers are conventionally numbered from the lowest, i.e., the physical layer, to the highest, so that the (n+1)-th layer is above the n-th layer and the (n−1)-th layer is below it.
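The adjacent-layer interaction of n-entities can be pictured as header encapsulation, where each layer talks only to its immediate neighbours. The following is a simplified illustration, not part of the OSI specification:

```python
# Simplified sketch: each n-entity wraps data from the (n+1)-entity in
# its own header on transmission and unwraps it for the (n+1)-entity on
# reception, so entities interact only with adjacent layers.
LAYERS = ["physical", "data link", "network", "transport",
          "session", "presentation", "application"]

def transmit(payload):
    # Walk down the stack: each layer adds its own header.
    for layer in reversed(LAYERS):
        payload = f"[{layer}]{payload}"
    return payload

def receive(frame):
    # Walk up the stack: each layer removes exactly its own header.
    for layer in LAYERS:
        header = f"[{layer}]"
        assert frame.startswith(header), f"malformed {layer} header"
        frame = frame[len(header):]
    return frame

wire = transmit("hello")
print(receive(wire))  # hello
```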
|
https://en.wikipedia.org/wiki/Noisy%20data
|
Noisy data are data that are corrupted, distorted, or have a low signal-to-noise ratio. Improper procedures (or improperly-documented procedures) to subtract out the noise in data can lead to a false sense of accuracy or to false conclusions.
Noisy data are data containing a large amount of additional meaningless information called noise. This includes data corruption, and the term is often used as a synonym for corrupt data. It also includes any data that a user system cannot understand and interpret correctly. Many systems, for example, cannot use unstructured text. Noisy data can adversely affect the results of any data analysis and skew conclusions if not handled properly. Statistical analysis is sometimes used to weed the noise out of noisy data.
Sources of noise
Differences between real-world measured data and the true values arise from multiple factors affecting the measurement.
Random noise is often a large component of the noise in data. Random noise in a signal is measured as the signal-to-noise ratio. Random noise contains almost equal amounts of a wide range of frequencies, and is also called white noise (as colors of light combine to make white). Random noise is an unavoidable problem. It affects the data collection and data preparation processes, where errors commonly occur. Noise has two main sources: errors introduced by measurement tools and random errors introduced by processing or by experts when the data is gathered.
Improper filtering can add noise if the filtered signal is treated as if it were a directly measured signal. For example, convolution-type digital filters such as a moving average can have side effects such as lags or truncation of peaks. Differentiating digital filters amplify random noise in the original data.
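Both side effects of a moving average can be seen in a minimal sketch (the signal and window size are hypothetical):

```python
def moving_average(signal, window):
    """Trailing moving average: each output averages the current sample
    and up to window-1 preceding samples, which attenuates sharp peaks
    and smears them forward in time (lag)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

# A sharp unit peak at sample 10 is both attenuated and smeared.
sig = [0.0] * 21
sig[10] = 1.0
smooth = moving_average(sig, 5)
print(max(smooth))                              # 0.2: peak cut to 1/window
print([i for i, v in enumerate(smooth) if v > 0])  # [10, 11, 12, 13, 14]
```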
Outlier data are data that appear to not belong in the data set. It can be caused by human error such as transposing numerals, mislabeling, programming bugs, etc. If actual outliers are not removed from the d
|
https://en.wikipedia.org/wiki/Fibronectin%20type%20II%20domain
|
Fibronectin type II domain is a collagen-binding protein domain. Fibronectin is a multi-domain glycoprotein, found in a soluble form in plasma, and in an insoluble form in loose connective tissue and basement membranes, that binds cell surfaces and various compounds including collagen, fibrin, heparin, DNA, and actin. Fibronectins are involved in a number of important functions e.g., wound healing; cell adhesion; blood coagulation; cell differentiation and migration; maintenance of the cellular cytoskeleton; and tumour metastasis. The major part of the sequence of fibronectin consists of the repetition of three types of domains, which are called type I, II, and III.
The type II domain is approximately sixty amino acids long, contains four conserved cysteines involved in disulfide bonds, and is part of the collagen-binding region of fibronectin. Type II domains occur twice in fibronectin. They have also been found in a range of proteins including blood coagulation factor XII; bovine seminal plasma proteins PDC-109 (BSP-A1/A2) and BSP-A3; cation-independent mannose-6-phosphate receptor; mannose receptor of macrophages; 180 kDa secretory phospholipase A2 receptor; DEC-205 receptor; 72 kDa and 92 kDa type IV collagenases; and hepatocyte growth factor activator.
Fibronectin type II domain and Lipid bilayer interaction
Fibronectin type II domain is part of the extracellular portions of EphA2 receptor proteins. FN2 domain on EphA2 receptors bears positively-charged components, namely K441 and R443, which attract and almost exclusively bind to anionic lipids such as anionic membrane lipid phosphatidylglycerol. K441 and R443 together make up a membrane-binding motif that allows EphA2 receptors to attach to the cell membrane.
Human proteins containing this domain
BSPH1; ELSPBP1; F12; FN1; HGFAC; IGF2R; LY75; MMP2;
MMP9; MRC1; MRC1L1; MRC2; PLA2R1; SEL1L;
Fibronectin type I domain: F12; FN1; HGFA
|
https://en.wikipedia.org/wiki/Hilbert%27s%20inequality
|
In analysis, a branch of mathematics, Hilbert's inequality states that

$$\left|\sum_{m\neq n}\frac{u_m\overline{u_n}}{m-n}\right|\le\pi\sum_{n}|u_n|^2$$

for any sequence of complex numbers. It was first demonstrated by David Hilbert with the constant $2\pi$ instead of $\pi$; the sharp constant $\pi$ was found by Issai Schur. It implies that the discrete Hilbert transform is a bounded operator in $\ell^2$.
Formulation
Let $(u_m)$ be a sequence of complex numbers. If the sequence is infinite, assume that it is square-summable:

$$\sum_m |u_m|^2 < \infty$$

Hilbert's inequality asserts that

$$\left|\sum_{m\neq n}\frac{u_m\overline{u_n}}{m-n}\right|\le\pi\sum_{m}|u_m|^2.$$
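As an illustrative numeric check (not part of the article), the finite bilinear form, bounded in modulus by pi times the sum of the squared moduli, can be computed directly for a random complex sequence:

```python
import math
import random

def hilbert_form(a):
    """|sum over m != n of a_m * conj(a_n) / (m - n)| for a finite sequence."""
    total = 0
    for m, am in enumerate(a):
        for n, an in enumerate(a):
            if m != n:
                total += am * an.conjugate() / (m - n)
    return abs(total)

random.seed(0)
a = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
lhs = hilbert_form(a)
rhs = math.pi * sum(abs(x) ** 2 for x in a)
print(lhs <= rhs)  # True: the form is bounded by pi * sum of |a_n|^2
```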
Extensions
In 1973, Montgomery & Vaughan reported several generalizations of Hilbert's inequality, considering the bilinear forms

$$\sum_{r\neq s} u_r\overline{u_s}\csc\pi(x_r-x_s)$$

and

$$\sum_{r\neq s}\frac{u_r\overline{u_s}}{\lambda_r-\lambda_s},$$

where $x_1,\dots,x_m$ are distinct real numbers modulo 1 (i.e. they belong to distinct classes in the quotient group $\mathbb{R}/\mathbb{Z}$) and $\lambda_1,\dots,\lambda_m$ are distinct real numbers. Montgomery & Vaughan's generalizations of Hilbert's inequality are then given by

$$\left|\sum_{r\neq s} u_r\overline{u_s}\csc\pi(x_r-x_s)\right|\le\delta^{-1}\sum_r |u_r|^2$$

and

$$\left|\sum_{r\neq s}\frac{u_r\overline{u_s}}{\lambda_r-\lambda_s}\right|\le\pi\tau^{-1}\sum_r |u_r|^2,$$

where

$$\delta=\min_{r\neq s}{}^{+}\,\|x_r-x_s\|,\qquad \tau=\min_{r\neq s}{}^{+}\,|\lambda_r-\lambda_s|,$$

$\|s\|$ is the distance from $s$ to the nearest integer, and $\min^{+}$ denotes the smallest positive value. Moreover, if

$$\delta_r=\min_s{}^{+}\,\|x_r-x_s\|\qquad\text{and}\qquad \tau_r=\min_s{}^{+}\,|\lambda_r-\lambda_s|,$$

then the following inequalities hold:

$$\left|\sum_{r\neq s} u_r\overline{u_s}\csc\pi(x_r-x_s)\right|\le\frac{3}{2}\sum_r |u_r|^2\,\delta_r^{-1}$$

and

$$\left|\sum_{r\neq s}\frac{u_r\overline{u_s}}{\lambda_r-\lambda_s}\right|\le\frac{3}{2}\pi\sum_r |u_r|^2\,\tau_r^{-1}.$$
|
https://en.wikipedia.org/wiki/IonQ
|
IonQ is a quantum computing hardware and software company based in College Park, Maryland. They are developing a general-purpose trapped ion quantum computer and software to generate, optimize, and execute quantum circuits.
History
IonQ was co-founded by Christopher Monroe and Jungsang Kim, professors at the University of Maryland and Duke University, respectively, in 2015, with the help of Harry Weller and Andrew Schoen, partners at venture firm New Enterprise Associates.
The company is an offshoot of the co-founders’ 25 years of academic research in quantum information science. Monroe's quantum computing research began as a Staff Researcher at the National Institute of Standards and Technology (NIST) with Nobel-laureate physicist David Wineland where he led a team using trapped ions to produce the first controllable qubits and the first controllable quantum logic gate, culminating in a proposed architecture for a large-scale trapped ion computer.
Kim and Monroe began collaborating formally as a result of larger research initiatives funded by the Intelligence Advanced Research Projects Activity (IARPA). They wrote a review paper for Science Magazine entitled Scaling the Ion Trap Quantum Processor, pairing Monroe's research in trapped ions with Kim’s focus on scalable quantum information processing and quantum communication hardware.
This research partnership became the seed for IonQ’s founding. In 2015, New Enterprise Associates invested $2 million to commercialize the technology Monroe and Kim proposed in their Science paper.
In 2016, they brought on David Moehring from IARPA—where he was in charge of several quantum computing initiatives—to be the company’s chief executive. In 2017, they raised a $20 million series B, led by GV (formerly Google Ventures) and New Enterprise Associates, the first investment GV has made in quantum computing technology. They began hiring in earnest in 2017, with the intent to bring an offering to market by late 2018.
In May 2019
|
https://en.wikipedia.org/wiki/Corn%20husk%20doll
|
A corn husk doll is a Native American doll made out of the dried leaves, or "husk", of a corn cob. Maize, known in some countries as corn, is a large grain plant domesticated by indigenous peoples in Mesoamerica in prehistoric times. Every part of the ear of corn was used. Women braided the husks for rope and twine and coiled them into containers and mats. Shredded husks made good kindling and filling for pillows and mattresses. The corncobs served as bottle stoppers, scrubbing brushes, and fuel for smoking meat. Corn silk made hair for corn husk dolls. Corn husk dolls have been made by Northeastern Native Americans probably since the beginnings of corn agriculture more than a thousand years ago. Brittle dried cornhusks become soft if soaked in water and produce finished dolls sturdy enough for children's toys. Making corn husk dolls was adopted by early European settlers in the United States of America. Corn husk doll making is now practiced in the United States as a link to Native American culture and the arts and crafts of the settlers. In other cultures (specifically Western ones), corn dollies are used to celebrate Lammas. Corn dollies are magical charms thought to protect the home, livestock, and personal wellness of the maker and their family. They may be a home for the spirit of the crop. The tradition stems from the idea that the crop of grain has a spirit that loses its home after the final harvest; the spirit is therefore invited into and housed in the home over the winter before being returned to the earth in spring for the next crop.
Corn husk dolls do not have faces, and there are a number of traditional explanations for this. One legend is that the Spirit of Corn, one of the Three Sisters, made a doll out of her husks to entertain children. The doll had a beautiful face, and began to spend less time with children and more time contemplating her own loveliness. As a result of her vanity, the doll's face was taken away.
See also
Corn dolly
|
https://en.wikipedia.org/wiki/James%20Sakoda
|
James Sakoda (1916–2005) was a Japanese-American psychologist and pioneer in computational modeling.
Career
Sakoda was born in Lancaster, California in 1916.
During World War II, Sakoda spent time incarcerated at the Tule Lake and Minidoka internment camps. He documented the experiences of Japanese Americans in internment camps, using what may be the first "agent-based model." In 1949, he published a dissertation based on his research. As a result, he earned a psychology Ph.D. from the University of California, Berkeley, that year.
After the war, Sakoda pursued a career in psychology and teaching. He taught at Brooklyn College, before joining the psychology department at the University of Connecticut in 1958. In 1962, he joined the sociology department at Brown University and became the director of the Social Science Computer Laboratory.
Sakoda was a well-known figure in the field of origami and published two books on the subject. These were first published in 1969 and 1992 and were republished in 1997 and 1999, respectively.
|
https://en.wikipedia.org/wiki/Tandy%20Graphics%20Adapter
|
Tandy Graphics Adapter (TGA, also Tandy graphics) is a computer display standard for the Tandy 1000 series of IBM PC compatibles. It is compatible with the video subsystem of the IBM PCjr but became a standard in its own right.
PCjr graphics
The Tandy 1000 series began in 1984 as a clone of the IBM PCjr, offering support for existing PCjr software. As a result, its graphics subsystem is largely compatible.
The PCjr, released in 1983, has a graphics subsystem built around IBM's Video Gate Array (not to be confused with the later Video Graphics Array) and an MC6845 CRTC. It extends the capabilities of the Color Graphics Adapter (CGA) by increasing the number of colors in each screen mode: CGA's 2-color mode can be displayed with four colors, and its 4-color mode can be displayed with all 16 colors.
Since the Tandy 1000 was much more successful than PCjr, their shared hardware capabilities became more associated with the Tandy brand than with IBM.
While there is no specific name for the Tandy graphics subsystem (Tandy's documentation calls it the "Video System Logic"), common parlance referred to it as TGA. Where not otherwise stated, information in this article that describes the TGA also applies to the PCjr video subsystem.
While EGA would eventually deliver a superset of TGA graphics on IBM compatibles, software written for TGA is not compatible with EGA cards.
Output capabilities
Tandy Video I / PCjr
Tandy 1000 systems before the Tandy 1000 SL, and the PCjr, have this type of video. It offers several CGA-compatible modes and enhanced modes.
CGA compatible modes:
320×200 in 4 colors from a 16-color (4-bit RGBI) hardware palette. Pixel aspect ratio of 1:1.2.
640×200 in 2 colors from 16. Pixel aspect ratio of 1:2.4.
40×25 with 8×8 pixel font text mode (effective resolution of 320×200)
80×25 with 8×8 pixel font text mode (effective resolution of 640×200)
Both text modes could themselves be set to display in monochrome, or in 16 colors.
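The pixel aspect ratios quoted for these modes follow from stretching the pixel grid over a 4:3 display; a small sketch of the calculation:

```python
from fractions import Fraction

def pixel_aspect(h_res, v_res, display_aspect=Fraction(4, 3)):
    """Pixel height : width ratio when h_res x v_res pixels fill a 4:3 display.

    Pixel width = display_width / h_res and pixel height = display_height
    / v_res, so height/width = (h_res / v_res) / display_aspect.
    """
    return Fraction(h_res, v_res) / display_aspect

print(pixel_aspect(320, 200))  # 6/5  -> pixel aspect ratio 1:1.2
print(pixel_aspect(640, 200))  # 12/5 -> pixel aspect ratio 1:2.4
```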
In addition to the CGA modes, it offers:
160×200 with
|
https://en.wikipedia.org/wiki/Mod%20qos
|
mod_qos is a quality of service (QoS) module for the Apache HTTP server implementing control mechanisms that can provide different priority to different requests.
Description
A web server can only serve a limited number of concurrent requests. QoS is used to ensure that important resources stay available under high server load. mod_qos is used to reject requests to unimportant resources while granting access to more important applications. It is also possible to disable access restrictions, for example, for requests to very important resources or for very important users.
Control mechanisms are available at the following levels:
Request level control: mod_qos controls the number of concurrent requests to a name space (URL). It is used to define different priorities to different pages or applications within a web server.
Connection level control: mod_qos controls the number of TCP connections to the web server. This helps limit the connections coming from a single client or from unknown networks, in order to reduce the maximum number of concurrent connections to a virtual server or to implement dynamic HTTP keep-alive settings.
Bandwidth level control: throttles requests/responses to certain URLs on the web server.
Generic request line and header filter dropping suspicious request URLs or HTTP headers.
The module can be useful when used in a reverse proxy in order to divide up resources among different web servers.
Use Cases
Slow Application
The first use case shows how mod_qos can avoid a service outage of a web server caused by slow responses of a single application. If an application (here /ccc) is very slow, requests wait until a timeout occurs. Due to the many waiting requests, the web server runs out of free TCP connections and is unable to process further requests to the applications /aaa or /bbb. mod_qos limits the concurrent requests to an application in order to assure the availability of other resources.
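A configuration for this scenario might look as follows (a sketch: the locations and limits are illustrative; QS_LocRequestLimit is the mod_qos directive that caps concurrent requests per URL namespace):

```apache
# Keep the slow /ccc application from exhausting the server's
# TCP connections, so /aaa and /bbb remain reachable.
<IfModule mod_qos.c>
    # at most 10 concurrent requests may wait on the slow application
    QS_LocRequestLimit /ccc 10
    # more generous limits for the applications that must stay available
    QS_LocRequestLimit /aaa 100
    QS_LocRequestLimit /bbb 100
</IfModule>
```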
HTTP keep-alive
The keep-alive extension to HTTP 1.1 allows
|
https://en.wikipedia.org/wiki/INF2
|
Inverted formin-2 is a protein that in humans is encoded by the INF2 gene. It belongs to the protein family called the formins. It has two splice isoforms, CAAX which localizes to the endoplasmic reticulum and non-CAAX which localizes to focal adhesions and the cytoplasm with enrichment at the Golgi. INF2 plays a role in mitochondrial fission and dorsal stress fiber formation. INF2 accelerates actin nucleation and elongation by interacting with barbed ends (fast-growing ends) of actin filaments, but also accelerates disassembly of actin through encircling and severing filaments.
Clinical significance
It can be associated with focal segmental glomerulosclerosis and Charcot–Marie–Tooth disease.
|
https://en.wikipedia.org/wiki/List%20of%20system%20quality%20attributes
|
Within systems engineering, quality attributes are realized non-functional requirements used to evaluate the performance of a system. These are sometimes named architecture characteristics, or "ilities" after the suffix many of the words share. They are usually architecturally significant requirements that require architects' attention.
Quality attributes
Notable quality attributes include:
accessibility
accountability
accuracy
adaptability
administrability
affordability
agility
auditability
autonomy
availability
compatibility
composability
confidentiality
configurability
correctness
credibility
customizability
debuggability
degradability
determinability
demonstrability
dependability
deployability
discoverability
distributability
durability
effectiveness
efficiency
evolvability
extensibility
failure transparency
fault-tolerance
fidelity
flexibility
inspectability
installability
integrity
interchangeability
interoperability
learnability
localizability
maintainability
manageability
mobility
modifiability
modularity
observability
operability
orthogonality
portability
precision
predictability
process capabilities
producibility
provability
recoverability
redundancy
relevance
reliability
repeatability
reproducibility
resilience
responsiveness
reusability
robustness
safety
scalability
seamlessness
self-sustainability
serviceability (a.k.a. supportability)
securability
simplicity
stability
standards compliance
survivability
sustainability
tailorability
testability
timeliness
traceability
transparency
ubiquity
understandability
upgradability
usability
vulnerability
Many of these quality attributes can also be applied to data quality.
Common subsets
Together, reliability, availability, serviceability, usability and installability, are referred to as RASUI.
Functionality, usability, reliability, performance and supportability are together referred to as FURPS in relation to software
|
https://en.wikipedia.org/wiki/Kurepa%20tree
|
In set theory, a Kurepa tree is a tree (T, <) of height ω1, each of whose levels is at most countable, with at least ℵ2 many branches. The concept was introduced by Đuro Kurepa in 1935. The existence of a Kurepa tree (known as the Kurepa hypothesis, though Kurepa originally conjectured that this was false) is consistent with the axioms of ZFC: Solovay showed in unpublished work that there are Kurepa trees in Gödel's constructible universe L. More precisely, the existence of Kurepa trees follows from the diamond plus principle, which holds in the constructible universe. On the other hand, Silver showed that if a strongly inaccessible cardinal is Lévy collapsed to ω2 then, in the resulting model, there are no Kurepa trees. The existence of an inaccessible cardinal is in fact equiconsistent with the failure of the Kurepa hypothesis, because if the Kurepa hypothesis is false then the cardinal ω2 is inaccessible in the constructible universe.
A Kurepa tree with fewer than 2ℵ1 branches is known as a Jech–Kunen tree.
More generally if κ is an infinite cardinal, then a κ-Kurepa tree is a tree of height κ with more than κ branches but at most |α| elements of each infinite level α<κ, and the Kurepa hypothesis for κ is the statement that there is a κ-Kurepa tree. Sometimes the tree is also assumed to be binary. The existence of a binary κ-Kurepa tree is equivalent to the existence of a Kurepa family: a set of more than κ subsets of κ such that their intersections with any infinite ordinal α<κ form a set of cardinality at most α. The Kurepa hypothesis is false if κ is an ineffable cardinal, and conversely Jensen showed that in the constructible universe for any uncountable regular cardinal κ there is a κ-Kurepa tree unless κ is ineffable.
Specializing a Kurepa tree
A Kurepa tree can be "killed" by forcing the existence of a function whose value on any non-root node is an ordinal less than the rank of the node, such that whenever three nodes, one of which is a lower bound for the other two, are
|
https://en.wikipedia.org/wiki/John%20Bridges%20%28software%20developer%29
|
John Bridges is the co-author of the computer program PCPaint and primary developer of the program GRASP for Microtex Industries with Doug Wolfgram. He is also the sole author of GLPro and AfterGRASP. His article entitled "Differential Image Compression" was published in the February 1991 issue of Dr. Dobb's Journal.
Early work
In 1980 Bridges started his programming career at the NYU Institute for Reconstructive Plastic Surgery as a summer intern, working with sophisticated programmable vector graphics systems. He wrote editing tools and also updated and debugged software used for early 3D x-ray scanning research.
From 1981-85 Bridges wrote the RAM disk drivers, utilities, cracking software, task switching software, and memory test diagnostics for Abacus, a maker of large memory cards for the Apple II.
In 1982, he started working for Classroom Consortia Media, Inc., an educational software company, developing and writing Apple and IBM graphics libraries and tools for their software. During his tenure there he created a drawing program called SuperDraw for CCM, and on his own wrote the core graphics code for what would later become PCPaint, as well as developing the GRASP GL library format.
PCPaint
In 1984, Bridges developed the first version of PCPaint with Doug Wolfgram for Mouse Systems. PCPaint was the first IBM PC-based mouse driven GUI paint program. The company purchased the exclusive rights to PCPaint, and John continued development until 1990.
GRASP
In 1985, Bridges' PCPaint code and Doug's slideshow program morphed into a new program, GRASP. GRASP was the first multimedia animation program for the IBM PC and created the GRASP GL library format. GRASP was originally released as shareware through Doug's company, Microtex Industries. However, version 2.0 and after were sold commercially by Paul Mace Software. Doug sold his shares of both PCPaint and GRASP to Bridges in 1990, and Bridges' work on GRASP continued through 1994, when he terminated the con
|
https://en.wikipedia.org/wiki/Discoid%20lupus%20erythematosus
|
Discoid lupus erythematosus is the most common type of chronic cutaneous lupus (CCLE), an autoimmune skin condition on the lupus erythematosus spectrum of illnesses. It presents with red, painful, inflamed and coin-shaped patches of skin with a scaly and crusty appearance, most often on the scalp, cheeks, and ears. Hair loss may occur if the lesions are on the scalp. The lesions can then develop severe scarring, and the centre areas may appear lighter in color with a rim darker than the normal skin. These lesions can last for years without treatment.
Patients with systemic lupus erythematosus develop discoid lupus lesions with some frequency. However, patients who present initially with discoid lupus infrequently develop systemic lupus. Discoid lupus can be divided into localized, generalized, and childhood discoid lupus.
The lesions are diagnosed by biopsy. Patients are first treated with sunscreen and topical steroids. If this does not work, an oral medication—most likely hydroxychloroquine or a related medication—can be tried.
Signs and symptoms
Morphology of lesions
Discoid lupus erythematosus (DLE) skin lesions first present as dull or purplish red, disc-shaped flat or raised and firm areas of skin. These lesions then develop increasing amounts of white, adherent scale. Finally, the lesions develop extensive scarring and/or atrophy, as well as pigment changes. They may also have overlying dried fluid, known as crust. On darker skin, the lesions often lose skin pigmentation in the center and develop increased, dark skin pigmentation around the rim. On lighter skin, the lesions often develop a gray color or have very little color change. More rarely, the lesions may be bright red and look like hives.
Location of lesions
The skin lesions are most often in sun-exposed areas localized above the neck, with favored sites being the scalp, bridge of the nose, upper cheeks, lower lip, ears, and hands. 24% of patients also have lesions in the mouth (most often the
|
https://en.wikipedia.org/wiki/LCD%20Smartie
|
LCD Smartie is open-source software for Microsoft Windows which allows a character LCD to be used as an auxiliary display device for a PC. Supported devices include displays based on the Hitachi HD44780 LCD controller, the Matrix Orbital Serial/USB LCD, and Palm OS devices (when used in conjunction with PalmOrb). The program has built-in support for many system statistics (e.g., CPU load, network utilization, free disk space), downloading RSS feeds, Winamp integration, and several other popular applications. To support less common applications LCD Smartie uses a powerful plugin system.
The project was started as freeware by BasieP, who wrote it in Delphi. After running the software as freeware from 2001 to late 2004, BasieP passed the project on to Chris Lansley as an open-source project hosted on the SourceForge servers. Chris Lansley maintained the project for a few years, and the project now remains alive thanks to its community.
LCD Smartie is relatively mature software, and development of the main executable has slowed considerably; most new features are introduced by plugins released by both the core team and the community. The LCD Smartie forums are the primary source for support and developer discussion.
To facilitate the use of LCD Smartie on modern PCs running Windows 7 and 8, the team has started working on a USB interface for connecting LCDs to a PC that does not require any additional kernel driver and provides a complete plug-and-play experience.
External links
Official project page on SourceForge.
Official program forum
Limbo's home page with plugins for LCD Smartie.
lcdsmartie-laz An actively maintained fork
Free software
Liquid crystal displays
Pascal (programming language) software
|
https://en.wikipedia.org/wiki/19th%20meridian%20west
|
The meridian 19° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, Iceland, the Atlantic Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 19th meridian west forms a great circle with the 161st meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 19th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Mainland and several islands, including Prinsesse Thyra Island, Store Koldewey, and Shannon Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Greenland Sea
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Queen Maud Land, claimed by
|-
|}
See also
18th meridian west
20th meridian west
|
https://en.wikipedia.org/wiki/Trunking
|
In telecommunications, trunking is a technology for providing network access to multiple clients simultaneously by sharing a set of circuits, carriers, channels, or frequencies, instead of providing individual circuits or channels for each client. This is reminiscent of the structure of a tree, with one trunk and many branches. Trunking in telecommunication originated in telegraphy, and later in telephone systems, where a trunk line is a communications channel between telephone exchanges.
Other applications include the trunked radio systems commonly used by police agencies.
In the form of link aggregation and VLAN tagging, trunking has been applied in computer networking.
Telecommunications
A trunk line is a circuit connecting telephone switchboards (or other switching equipment), as distinguished from local loop circuit which extends from telephone exchange switching equipment to individual telephones or information origination/termination equipment.
Trunk lines are used for connecting a private branch exchange (PBX) to a telephone service provider. When needed they can be used by any telephone connected to the PBX, while the station lines to the extensions serve only one station’s telephones. Trunking saves cost, because there are usually fewer trunk lines than extension lines, since it is unusual in most offices to have all extension lines in use for external calls at once. Trunk lines transmit voice and data in formats such as analog, T1, E1, ISDN, PRI or SIP. The dial tone lines for outgoing calls are called DDCO (Direct Dial Central Office) trunks.
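The cost saving can be quantified with the Erlang B formula, the classic teletraffic result for trunk dimensioning (a standard formula, not specific to this article; the traffic figures below are illustrative):

```python
def erlang_b(offered_erlangs, trunks):
    """Erlang B blocking probability, via the numerically stable recurrence
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# An office offering 10 erlangs of external traffic: about 18 trunks
# already hold blocking below 1%, far fewer than one line per extension.
for trunks in (10, 14, 18):
    print(trunks, round(erlang_b(10.0, trunks), 4))
```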
In the UK and the Commonwealth countries, a trunk call was the term for a long-distance call that traversed one or more trunk lines and involved more than one telephone exchange. This is in contrast to a local call, which involves a single exchange and typically no trunk lines.
Trunking also refers to the connection of switches and circuits within a telephone exchange. Trunking is closely related to t
|
https://en.wikipedia.org/wiki/Separation%20kernel
|
A separation kernel is a type of security kernel used to simulate a distributed environment. The concept was introduced by John Rushby in a 1981 paper. Rushby proposed the separation kernel as a solution to the difficulties and problems that had arisen in the development and verification of large, complex security kernels that were intended to "provide multilevel secure operation on general-purpose multi-user systems." According to Rushby, "the task of a separation kernel is to create an environment which is indistinguishable from that provided by a physically distributed system: it must appear as if each regime is a separate, isolated machine and that information can only flow from one machine to another along known external communication lines. One of the properties we must prove of a separation kernel, therefore, is that there are no channels for information flow between regimes other than those explicitly provided."
A variant of the separation kernel, the partitioning kernel, has gained acceptance in the commercial aviation community as a way of consolidating, onto a single processor, multiple functions, perhaps of mixed criticality. Commercial real-time operating system products in this genre have been used by aircraft manufacturers for safety-critical avionics applications.
In 2007 the Information Assurance Directorate of the U.S. National Security Agency (NSA) published the Separation Kernel Protection Profile (SKPP), a security requirements specification for separation kernels suitable to be used in the most hostile threat environments. The SKPP describes, in Common Criteria parlance, a class of modern products that provide the foundational properties of Rushby's conceptual separation kernel. It defines the security functional and assurance requirements for the construction and evaluation of separation kernels while yet providing some latitude in the choices available to developers.
The SKPP defines separation kernel as "hardware and/or firmware and/or so
|
https://en.wikipedia.org/wiki/Offset%20binary
|
Offset binary, also referred to as excess-K, excess-N, excess-e, excess code or biased representation, is a method for signed number representation where a signed number n is represented by the bit pattern corresponding to the unsigned number n+K, K being the biasing value or offset. There is no standard for offset binary, but most often the K for an n-bit binary word is K = 2^(n−1) (for example, the offset for a four-digit binary number would be 2^3 = 8). This has the consequence that the minimal negative value is represented by all-zeros, the "zero" value is represented by a 1 in the most significant bit and zero in all other bits, and the maximal positive value is represented by all-ones (conveniently, this is the same as using two's complement but with the most significant bit inverted). It also has the consequence that in a logical comparison operation, one gets the same result as with a true form numerical comparison operation, whereas, in two's complement notation a logical comparison will agree with true form numerical comparison operation if and only if the numbers being compared have the same sign. Otherwise the sense of the comparison will be inverted, with all negative values being taken as larger than all positive values.
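The encoding can be sketched in a few lines (the function names below are illustrative):

```python
def to_offset_binary(n, bits):
    """Encode signed integer n as an offset-binary (excess-K) bit pattern,
    using the common bias K = 2**(bits - 1)."""
    k = 1 << (bits - 1)
    assert -k <= n < k, "value out of range"
    return format(n + k, f"0{bits}b")

def from_offset_binary(s):
    """Decode an offset-binary string back to a signed integer."""
    k = 1 << (len(s) - 1)
    return int(s, 2) - k

print(to_offset_binary(-8, 4))  # '0000'  (minimal negative: all zeros)
print(to_offset_binary(0, 4))   # '1000'  (zero: MSB set, rest zero)
print(to_offset_binary(7, 4))   # '1111'  (maximal positive: all ones)
```

Because the encoded values are ordered like unsigned integers, an ordinary string (or unsigned) comparison of the bit patterns agrees with numerical comparison of the signed values, as the paragraph above notes.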
The 5-bit Baudot code used in early synchronous multiplexing telegraphs can be seen as an offset-1 (excess-1) reflected binary (Gray) code.
One historically prominent example of offset-64 (excess-64) notation was in the floating point (exponential) notation in the IBM System/360 and System/370 generations of computers. The "characteristic" (exponent) took the form of a seven-bit excess-64 number (The high-order bit of the same byte contained the sign of the significand).
The 8-bit exponent in Microsoft Binary Format, a floating point format used in various programming languages (in particular BASIC) in the 1970s and 1980s, was encoded using an offset-129 notation (excess-129).
The IEEE Standard for Floating-Point Arithmetic (IEEE
|
https://en.wikipedia.org/wiki/Peutz%E2%80%93Jeghers%20syndrome
|
Peutz–Jeghers syndrome (often abbreviated PJS) is an autosomal dominant genetic disorder characterized by the development of benign hamartomatous polyps in the gastrointestinal tract and hyperpigmented macules on the lips and oral mucosa (melanosis). This syndrome can be classed as one of various hereditary intestinal polyposis syndromes and one of various hamartomatous polyposis syndromes. It has an incidence of approximately 1 in 25,000 to 300,000 births.
Signs and symptoms
The risks associated with this syndrome include a substantial risk of cancer, especially of the breast and gastrointestinal tract. Colorectal cancer is the most common malignancy, with a lifetime risk of 39 percent, followed by breast cancer in females, with a lifetime risk of 32 to 54 percent.
Patients with the syndrome also have an increased risk of developing carcinomas of the liver, lungs, breast, ovaries, uterus, testes, and other organs. Specifically, it is associated with an increased risk of sex-cord stromal tumor with annular tubules in the ovaries.
Due to the increased risk of malignancies, direct surveillance is recommended.
The average age of first diagnosis is 23. The first presentation is often bowel obstruction or intussusception from the hamartomatous gastrointestinal polyps. Dark blue, brown, and black pigmented mucocutaneous macules are present in over 95 percent of individuals with Peutz–Jeghers syndrome. Pigmented lesions are rarely present at birth, but often appear before 5 years of age. The macules may fade during puberty. The melanocytic macules are not associated with malignant transformation.
Complications associated with Peutz–Jeghers syndrome include obstruction and intussusception, which occur in up to 69 percent of patients, typically first between the ages of 6 and 18, though surveillance for them is controversial. Anemia is also common due to gastrointestinal bleeding from the polyps.
Genetics
In 1998, a gene was found to be associated with the mutation. On chrom
|
https://en.wikipedia.org/wiki/Rainy%20Monday
|
"Rainy Monday" is the third single by rock group Shiny Toy Guns from their album We Are Pilots. This single peaked at #23 on the Modern Rock Tracks chart.
Music video
The music video shows the band performing the song, with Chad on vocals and guitar, Jeremy on keyboard, Mikey on drums and Carah on bass, with a girl appearing now and again. Every band member is wearing black clothes and has black hair in the video.
Usage in other media
It was featured on the British programme Waterloo Road.
Songs about weather
2008 singles
Shiny Toy Guns songs
2007 songs
Songs written by Gregori Chad Petree
|
https://en.wikipedia.org/wiki/Transformation%20theory%20%28quantum%20mechanics%29
|
The term transformation theory refers to a procedure and a "picture" used by Paul Dirac in his early formulation of quantum theory, from around 1927.
This "transformation" idea refers to the changes a quantum state undergoes in the course of time, whereby its vector "moves" between "positions" or "orientations" in its Hilbert space. Time evolution, quantum transitions, and symmetry transformations in quantum mechanics may thus be viewed as the systematic theory of abstract, generalized rotations in this space of quantum state vectors.
Remaining in full use today, it would be regarded as a topic in the mathematics of Hilbert space, although, technically speaking, it is somewhat more general in scope. While the terminology is reminiscent of rotations of vectors in ordinary space, the Hilbert space of a quantum object is more general, and holds its entire quantum state.
(The term further sometimes evokes the wave–particle duality, according to which a particle (a "small" physical object) may display either particle or wave aspects, depending on the observational situation. Or, indeed, a variety of intermediate aspects, as the situation demands.)
|
https://en.wikipedia.org/wiki/Constructing%20skill%20trees
|
Constructing skill trees (CST) is a hierarchical reinforcement learning algorithm which can build skill trees from a set of sample solution trajectories obtained from demonstration. CST uses an incremental MAP (maximum a posteriori) change point detection algorithm to segment each demonstration trajectory into skills and integrate the results into a skill tree. CST was introduced by George Konidaris, Scott Kuindersma, Andrew Barto and Roderic Grupen in 2010.
Algorithm
CST consists of three main parts: change-point detection, alignment, and merging. The main focus of CST is online change-point detection. The change-point detection algorithm is used to segment data into skills and uses the sum of discounted reward as the target regression variable. Each skill is assigned an appropriate abstraction. A particle filter is used to control the computational complexity of CST.
The change point detection algorithm is implemented as follows. The data for a set of times and candidate models with priors are given. The algorithm is assumed to be able to fit a segment between two times using a given model, with an associated fit probability. A linear regression model with Gaussian noise is used to compute the fit probability. The Gaussian noise prior has mean zero, with a variance that follows an inverse-gamma prior; the prior for each regression weight is Gaussian.
The fit probability is computed by the following equation.
Then, CST computes the probability of a changepoint at each time under each model using a Viterbi algorithm.
The descriptions of the parameters and variables are as follows:
φ(s): a vector of m basis functions evaluated at state s
Γ: the Gamma function
m: the number of basis functions the model q has
D: an m by m matrix with δ on the diagonal and zeros elsewhere
The skill length l is assumed to follow a geometric distribution with parameter p
E[l] = 1/p: expected skill length
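A much-simplified, offline version of this segmentation step can be sketched as follows. This is not the CST algorithm itself: it replaces the Bayesian fit probability and particle filter with an ordinary least-squares cost and an exhaustive Viterbi-style dynamic program, purely to illustrate how fit quality plus a per-segment penalty yields changepoints.

```python
import numpy as np

def segment_cost(y, i, j):
    """Sum of squared residuals from a least-squares linear fit to y[i:j]."""
    t = np.arange(i, j, dtype=float)
    coef = np.polyfit(t, y[i:j], 1)
    resid = y[i:j] - np.polyval(coef, t)
    return float(np.sum(resid ** 2))

def best_segmentation(y, penalty):
    """Viterbi-style DP minimizing total fit cost plus a penalty per segment."""
    n = len(y)
    best = [0.0] + [np.inf] * n   # best[t] = optimal cost of segmenting y[:t]
    back = [0] * (n + 1)
    for t in range(2, n + 1):
        for j in range(0, t - 1):  # each segment needs at least 2 points
            c = best[j] + segment_cost(y, j, t) + penalty
            if c < best[t]:
                best[t], back[t] = c, j
    cuts, t = [], n
    while t > 0:                   # follow back-pointers to recover cuts
        cuts.append(t)
        t = back[t]
    return sorted(cuts)[:-1]       # drop the final boundary at n

# Two noiseless regimes: the DP finds the single changepoint at index 20.
y = np.concatenate([np.zeros(20), np.full(20, 5.0)])
print(best_segmentation(y, 1.0))
```

The penalty plays the role that the geometric prior on skill length plays in CST: without it, the dynamic program would prefer many tiny, perfectly fitting segments.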
Using the method above, CST can segment data into a skill chain. The time complexity of the change point detection is and storage size is , where is the number of
|
https://en.wikipedia.org/wiki/Stoned%20Fox
|
Stoned Fox is an anthropomorphic taxidermied fox that has become an Internet sensation in Russia.
The fox died of natural causes. It was stuffed in 2012 by Welsh artist Adele Morse (originally from Blackwood, Caerphilly, now based in Dalston, East London), who at the time was taking a Masters course at the Royal Academy of London and made it as part of a school project. She put the finished work on eBay where it surprisingly gained much attention.
At the end of 2012, the fox quickly became an Internet meme in Russia. It has since been edited into famous paintings, photographs, videos, and other visual media.
See also
Lion of Gripsholm Castle
Homunculus loxodontus
|
https://en.wikipedia.org/wiki/Mobile%20security
|
Mobile security, or mobile device security, is the protection of smartphones, tablets, and laptops from threats associated with wireless computing. It has become increasingly important in mobile computing. The security of personal and business information now stored on smartphones is of particular concern.
Increasingly, users and businesses use smartphones not only to communicate, but also to plan and organize their work and private life. Within companies, these technologies are causing profound changes in the organization of information systems and have therefore become the source of new risks. Indeed, smartphones collect and compile an increasing amount of sensitive information to which access must be controlled to protect the privacy of the user and the intellectual property of the company.
The majority of attacks are aimed at smartphones. These attacks take advantage of vulnerabilities discovered in smartphones that can result from different modes of communication, including Short Message Service (SMS, text messaging), Multimedia Messaging Service (MMS), wireless connections, Bluetooth, and GSM, the de facto international standard for mobile communications. Smartphone operating systems or browsers are another weakness. Some malware makes use of the common user's limited knowledge. Only 2.1% of users reported having first-hand contact with mobile malware, according to a 2008 McAfee study, which found that 11.6% of users had heard of someone else being harmed by the problem. Yet, it is predicted that this number will rise.
Security countermeasures are being developed and applied to smartphones, from security best practices in software to the dissemination of information to end users. Countermeasures can be implemented at all levels, including operating system development, software design, and user behavior modifications.
Challenges of smartphone mobile security
Threats
A smartphone user is exposed to various threats when they use their phone. In just the la
|
https://en.wikipedia.org/wiki/SCART
|
SCART (also known as Péritel, especially in France; 21-pin EuroSCART in marketing by Sharp in Asia; Euroconector in Spain; EuroAV or EXT; or EIA Multiport in the United States, as an EIA interface) is a French-originated standard and associated 21-pin connector for connecting audio-visual (AV) equipment. The name SCART comes from Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs, "Radio and Television Receiver Manufacturers' Association", the French organisation that created the connector in the mid-1970s. The related European standard EN 50049 was then refined and published in 1978 by CENELEC, which called the interface péritélévision, commonly abbreviated to péritel in French.
The signals carried by SCART include both composite and RGB (with composite synchronisation) video, stereo audio input/output and digital signalling. SCART is also capable of carrying S-Video signals, using the red pins for chroma. A TV can be woken from standby mode and automatically switch to the appropriate AV channel when the SCART attached device is switched on. SCART was also used for high definition signals such as 720p, 1080i, 1080p with YPbPr connection by some manufacturers, but this usage is scarce due to the advent of HDMI.
In Europe, SCART was the most common method of connecting AV equipment and was a standard connector for such devices; it was far less common elsewhere.
The official standard for SCART is CENELEC document number EN 50049–1. SCART is sometimes referred to as the IEC 933-1 standard.
History
Before SCART was introduced, TVs did not offer a standardised way of inputting signals other than RF antenna connectors, and these differed between countries. Assuming other connectors even existed, devices made by various companies could have different and incompatible standards. For example, a domestic VCR could output a composite video signal through a German-originated DIN-style connector, an American-originated RCA connector, an SO239 connector or a BNC connector.
The SCART connector first appeared on TV
|
https://en.wikipedia.org/wiki/Rabbit%20%28nuclear%20engineering%29
|
In the field of nuclear engineering, a rabbit is a pneumatically controlled tool used to insert small samples of material inside the core of a nuclear reactor, usually for the purpose of studying the effect of irradiation on the material. Some rabbits have special linings to screen out certain types of neutrons. (For example, the Missouri University of Science and Technology research reactor uses a cadmium-lined rabbit to allow only high-energy neutrons through to samples in its core.)
|
https://en.wikipedia.org/wiki/List%20of%20ocean%20circulation%20models
|
This is a list of ocean circulation models, as used in physical oceanography. Ocean circulation models can also be used to study chemical oceanography, biological oceanography, geological oceanography, and climate science.
Integrated ocean modeling systems
Integrated ocean modeling systems use multiple coupled models. This coupling allows researchers to understand processes that happen among multiple systems that are usually modeled independently, such as the ocean, atmosphere, waves, and sediments. Integrated ocean modeling systems are helpful for specific regions: for example, the ESPreSSO model is used to study the Mid-Atlantic Bight region. Integrated ocean modeling systems often use data from buoys and weather stations for atmospheric forcing and boundary conditions. Two examples of integrated ocean modeling systems are:
COAWST: Coupled Ocean Atmosphere Wave Sediment Transport Modeling System (uses ROMS as its ocean circulation component).
ESPreSSO: Experimental System for Predicting Shelf and Slope Optics (uses ROMS as its ocean circulation component).
|
https://en.wikipedia.org/wiki/N%2CN%27-Diisopropylcarbodiimide
|
{{DISPLAYTITLE:N,N'-Diisopropylcarbodiimide}}
{{chembox
| Verifiedfields = changed
| Watchedfields = changed
| verifiedrevid = 424839902
| Name =
| ImageFile = N,N'-methanediylidenebis(propan-2-amine) 200.svg
| ImageSize = 220
| ImageFile1 = N,N'-Diisopropylcarbodiimide molecule ball.png
| ImageSize1 = 240
| ImageAlt1 = Ball-and-stick model of the N,N'-diisopropylcarbodiimide molecule
| PIN = N,N'-Di(propan-2-yl)methanediimine
| OtherNames = Diisopropylmethanediimine, DIC
|Section1=
|Section2=
|Section3=
}}N,N'-Diisopropylcarbodiimide is a carbodiimide used in peptide synthesis. As a liquid, it is easier to handle than the commonly used N,N'-dicyclohexylcarbodiimide, a waxy solid. In addition, N,N'-diisopropylurea, its byproduct in many chemical reactions, is soluble in most organic solvents, a property that facilitates work-up.
Further reading
Peptide coupling reagents
Carbodiimides
Reagents for biochemistry
Biochemistry
Biochemistry methods
Isopropylamino compounds
|
https://en.wikipedia.org/wiki/Moment-area%20theorem
|
The moment-area theorem is an engineering tool to derive the slope, rotation and deflection of beams and frames. This theorem was developed by Otto Mohr and later formally stated by Charles Ezra Greene in 1873. This method is advantageous for solving problems involving beams, especially those subjected to a series of concentrated loadings or having segments with different moments of inertia.
Theorem 1
The change in slope between any two points on the elastic curve equals the area of the M/EI (moment) diagram between these two points:

θ_B/A = ∫_A^B (M/EI) dx

where,
M = moment
EI = flexural rigidity
θ_B/A = change in slope between points A and B
A, B = points on the elastic curve
Theorem 2
The vertical deviation of a point A on an elastic curve with respect to the tangent which is extended from another point B equals the moment of the area under the M/EI diagram between those two points (A and B). This moment is computed about point A, where the deviation from B to A is to be determined:

t_A/B = ∫_A^B (M/EI) x̄ dx

where,
M = moment
EI = flexural rigidity
t_A/B = deviation of the tangent at point A with respect to the tangent at point B
A, B = points on the elastic curve
x̄ = horizontal distance from point A to the element of area M/EI dx
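The two theorems are easy to check numerically. The sketch below (the load and section values are illustrative assumptions, not from the text) integrates M/EI for a cantilever with a point load P at its free end and recovers the textbook end slope PL²/2EI and end deflection PL³/3EI:

```python
# Numerically apply the two moment-area theorems to a cantilever of
# length L with a point load P at the free end (x measured from the free
# end, so M(x) = -P*x and the fixed support is at x = L).
P, L, E, I = 1000.0, 2.0, 200e9, 8e-6   # N, m, Pa, m^4 (example values)

n = 100_000
dx = L / n
xs = [(k + 0.5) * dx for k in range(n)]          # midpoints for integration
m_over_ei = [(-P * x) / (E * I) for x in xs]

# Theorem 1: slope change = area under M/EI between free end and support.
theta = sum(m_over_ei) * dx

# Theorem 2: deviation = moment of that area taken about the free end (x = 0).
deflection = sum(m * x for m, x in zip(m_over_ei, xs)) * dx

print(theta)       # -> -P*L**2 / (2*E*I)
print(deflection)  # -> -P*L**3 / (3*E*I)
```

The negative signs follow the sign convention above: the beam deflects below the tangent at the fixed end, and the slope change is clockwise.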
Rule of sign convention
The deviation at any point on the elastic curve is positive if the point lies above the tangent and negative if it lies below the tangent. Measured from the left tangent, the change in slope is positive if θ is counterclockwise and negative if θ is clockwise.
Procedure for analysis
The following procedure provides a method that may be used to determine the displacement and slope at a point on the elastic curve of a beam using the moment-area theorem.
Determine the reaction forces of a structure and draw the M/EI diagram of the structure.
If there are only concentrated loads on the structure, the M/EI diagram is easy to draw and results in a series of triangular shapes.
If distributed and concentrated loads are mixed, the M/EI diagram will contain parabolic, cubic, and higher-order curves.
Th
|
https://en.wikipedia.org/wiki/Quantum%20relative%20entropy
|
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy.
Motivation
For simplicity, it will be assumed that all objects in the article are finite-dimensional.
We first discuss the classical case. Suppose the probabilities of a finite sequence of events are given by the probability distribution P = {p_1, ..., p_n}, but somehow we mistakenly assumed it to be Q = {q_1, ..., q_n}. For instance, we can mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the j-th event, or equivalently, the amount of information provided after observing the j-th event, is −log q_j.
The (assumed) average uncertainty of all possible events is then −∑_j p_j log q_j.
On the other hand, the Shannon entropy of the probability distribution p, defined by −∑_j p_j log p_j,
is the real amount of uncertainty before observation. Therefore the difference between these two quantities, ∑_j p_j log p_j − ∑_j p_j log q_j = ∑_j p_j log (p_j / q_j),
is a measure of the distinguishability of the two probability distributions p and q. This is precisely the classical relative entropy, or Kullback–Leibler divergence: D_KL(P‖Q) = ∑_j p_j log (p_j / q_j).
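This difference can be computed directly; a small sketch (in bits, i.e. using the base-2 logarithm):

```python
import math

def kl_divergence(p, q):
    """Classical relative entropy D(p || q) in bits, with the 0*log 0 = 0
    convention: terms with p_i = 0 contribute nothing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

fair = [0.5, 0.5]
biased = [0.9, 0.1]
print(kl_divergence(biased, fair))  # uncertainty cost of assuming a fair coin
print(kl_divergence(fair, biased))  # differs: the divergence is not symmetric
```

The two printed values differ, illustrating the asymmetry discussed in the Note below the definitions.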
Note
In the definitions above, the convention that 0·log 0 = 0 is assumed, since lim_{x→0} x log x = 0. Intuitively, one would expect an event of zero probability to contribute nothing towards entropy.
The relative entropy is not a metric. For example, it is not symmetric. The uncertainty discrepancy in mistaking a fair coin to be unfair is not the same as the opposite situation.
Definition
As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to density matrices. Let ρ be a density matrix. The von Neumann entropy of ρ, which is the quantum mechanical analog of the Shannon entropy, is given by S(ρ) = −Tr(ρ log ρ).
For two density matrices ρ and σ, the quantum relative entropy of ρ with respect to σ is defined by S(ρ‖σ) = Tr(ρ log ρ) − Tr(ρ log σ) = Tr ρ (log ρ − log σ).
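For full-rank, finite-dimensional density matrices, the definition can be evaluated directly via an eigendecomposition; a minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """S(rho || sigma) = Tr rho (log rho - log sigma), natural logarithm,
    assuming full-rank, finite-dimensional density matrices."""
    def logm_hermitian(a):
        # Matrix logarithm of a Hermitian positive-definite matrix.
        w, v = np.linalg.eigh(a)
        return v @ np.diag(np.log(w)) @ v.conj().T
    diff = logm_hermitian(rho) - logm_hermitian(sigma)
    return float(np.real(np.trace(rho @ diff)))

rho = np.diag([0.9, 0.1])    # a "biased" qubit state
sigma = np.diag([0.5, 0.5])  # the maximally mixed state
print(quantum_relative_entropy(rho, sigma))
```

For commuting (here diagonal) states the quantity reduces to the classical Kullback–Leibler divergence of the eigenvalue distributions, in nats.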
We see that, when the states are classically related, i
|
https://en.wikipedia.org/wiki/Doxastic%20logic
|
Doxastic logic is a type of logic concerned with reasoning about beliefs.
The term derives from the Ancient Greek δόξα (doxa, "opinion, belief"), from which the English term doxa ("popular opinion or belief") is also borrowed. Typically, a doxastic logic uses the notation Bp to mean "It is believed that p is the case", and the set B denotes a set of beliefs. In doxastic logic, belief is treated as a modal operator.
There is complete parallelism between a person who believes propositions and a formal system that derives propositions. Using doxastic logic, one can express the epistemic counterpart of Gödel's incompleteness theorem of metalogic, as well as Löb's theorem, and other metalogical results in terms of belief.
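Treating belief as a modal operator means it can be given possible-worlds (Kripke) semantics. As an illustration not drawn from the article, the sketch below checks that modal axiom 4 (Bp → BBp, see "normal reasoner" below) holds at every world of a small transitive frame, for every valuation of p:

```python
from itertools import product

def believes(world, prop, R):
    """B(prop) holds at `world` iff prop holds in every accessible world."""
    return all(prop(v) for v in R.get(world, set()))

# A small transitive frame: 0 -> 1 -> 2 and 0 -> 2.
R = {0: {1, 2}, 1: {2}, 2: set()}
worlds = [0, 1, 2]

# Check axiom 4 (Bp -> BBp) for every valuation of an atomic proposition p.
for bits in product([False, True], repeat=len(worlds)):
    p = lambda w, bits=bits: bits[w]
    for w in worlds:
        bp = believes(w, p, R)
        bbp = believes(w, lambda v: believes(v, p, R), R)
        assert (not bp) or bbp  # Bp -> BBp holds because R is transitive
print("axiom 4 holds on this transitive frame")
```

On a non-transitive frame the inner assertion can fail, which is why axiom 4 characterizes transitivity in standard modal correspondence theory.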
Types of reasoners
To demonstrate the properties of sets of beliefs, Raymond Smullyan defines the following types of reasoners:
Accurate reasoner: An accurate reasoner never believes any false proposition. (modal axiom T)
Inaccurate reasoner: An inaccurate reasoner believes at least one false proposition.
Consistent reasoner: A consistent reasoner never simultaneously believes a proposition and its negation. (modal axiom D)
Normal reasoner: A normal reasoner is one who, while believing p, also believes they believe p (modal axiom 4).
A variation on this would be someone who, while not believing p, also believes they don't believe p (modal axiom 5).
Peculiar reasoner: A peculiar reasoner believes proposition p while also believing they do not believe p. Although a peculiar reasoner may seem like a strange psychological phenomenon (see Moore's paradox), a peculiar reasoner is necessarily inaccurate but not necessarily inconsistent.
Regular reasoner: A regular reasoner is one who, while believing p → q, also believes Bp → Bq.
Reflexive reasoner: A reflexive reasoner is one for whom every proposition p has some proposition q such that the reasoner believes q ≡ (Bq → p).
If a reflexive reasoner of type 4 [see below] believes Bp → p, they will believe p. This is a parallelism
|
https://en.wikipedia.org/wiki/Angular%20momentum%20of%20light
|
The angular momentum of light is a vector quantity that expresses the amount of dynamical rotation present in the electromagnetic field of the light. While traveling approximately in a straight line, a beam of light can also be rotating (or "spinning", or "twisting") around its own axis. This rotation, while not visible to the naked eye, can be revealed by the interaction of the light beam with matter.
There are two distinct forms of rotation of a light beam, one involving its polarization and the other its wavefront shape. These two forms of rotation are therefore associated with two distinct forms of angular momentum, respectively named light spin angular momentum (SAM) and light orbital angular momentum (OAM).
The total angular momentum of light (or, more generally, of the electromagnetic field and the other force fields) and matter is conserved in time.
Introduction
Light, or more generally an electromagnetic wave, carries not only energy but also momentum, which is a characteristic property of all objects in translational motion. The existence of this momentum becomes apparent in the "radiation pressure" phenomenon, in which a light beam transfers its momentum to an absorbing or scattering object, generating a mechanical pressure on it in the process.
Light may also carry angular momentum, which is a property of all objects in rotational motion. For example, a light beam can be rotating around its own axis while it propagates forward. Again, the existence of this angular momentum can be made evident by transferring it to small absorbing or scattering particles, which are thus subject to an optical torque.
For a light beam, one can usually distinguish two "forms" of rotation, the first associated with the dynamical rotation of the electric and magnetic fields around the propagation direction, and the second with the dynamical rotation of light rays around the main beam axis. These two rotations are associated with two forms of angular momentum, namely SAM an
|
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20%E2%80%93%201800%E2%80%931850
|
19th century
1800 – an arbitrary date but it was around this time that systematists began to specialise. There remained entomological polyhistors – those who continued to work on the insect fauna as a whole.
From the beginning of the century, however, the specialist began to predominate, heralded by Johann Wilhelm Meigen's Nouvelle classification des mouches à deux ailes (New classification of the Diptera), commenced in the first year of the century. Lepidopterists were amongst the first to follow Meigen's lead.
The specialists fell into three categories. First there were species describers, then specialists in species recognition and then specialists in gross taxonomy. There were however considerable degrees of overlap. Also then, as now, few could entirely resist the lure of groups other than their own, and this was especially true of those in small countries where they were the sole 'expert', and many famous specialists in one order also worked on others. Hence, for instance, many works which began as butterfly faunas were completed as general regional works, often collaboratively.
"Man is born not to solve the problems of the universe, but to find out where the problem begins, and then to restrain himself within the limits of the comprehensible"
Johann Wolfgang von Goethe, Conversations with Eckermann: Feb. 13, 1829
1800
Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck first expressed his views on evolution in lectures.
The total number of species of insects described is estimated at not exceeding the figure of 20 000.
1801
Publication of Jean Baptiste Pierre Antoine de Monet de Lamarck. Système des animaux sans vertèbres ou tableau général des classes, des ordres et des genres de ces animaux. Paris:Deterville in English, 'System of invertebrate animals or general table of classes, orders and genera of these animals'
Johan Christian Fabricius Systema eleutheratorum commenced. In a series of successive works to 1806 Johan Christian Fabricius de
|
https://en.wikipedia.org/wiki/Universal%20algebraic%20geometry
|
In algebraic geometry, universal algebraic geometry generalizes the geometry of rings to geometries of arbitrary varieties of algebras, so that every variety of algebras has its own algebraic geometry. The two terms algebraic variety and variety of algebras should not be confused.
See also
Algebraic geometry
Universal algebra
|
https://en.wikipedia.org/wiki/Mathematikum
|
The Mathematikum is a science museum, located in Gießen, Germany, which offers a huge variety of mathematical hands-on exhibits. It was founded by Albrecht Beutelspacher, a German mathematician.
The Mathematikum opened its doors to visitors on 19 November 2002. It was inaugurated by the German president Johannes Rau. Since then, the museum has attracted more than 1,500,000 visitors. Annually the museum is visited by more than 150,000 people. The museum is open every day of the week, including Sundays and Mondays.
Concept
The purpose of the Mathematikum is to let people of any age, gender and qualification learn mathematics through personal experience rather than through formulae and equations; the exhibits hardly ever use numbers and symbols.
Visitors can therefore learn by participating in more than 150 interactive exhibits in the museum, gathering a different mathematical experience from each of them.
Exhibits
Mathematical experiments include mirrors, a Leonardo bridge, soap films, and puzzles.
Once every month on a Tuesday, a mathematician is invited. The mathematician is interviewed by professor Beutelspacher on Beutelspachers Sofa (Beutelspacher's Couch). At the end of the interview the audience can talk to the guest and ask them questions.
Awards
2004: IQ Award
External links
http://www.mathematikum.de
Museums in Hesse
Science museums in Germany
Museums established in 2002
2002 establishments in Germany
Giessen
|
https://en.wikipedia.org/wiki/Molding%20%28decorative%29
|
Moulding (British English), or molding (American English), also coving (in United Kingdom, Australia), is a strip of material with various profiles used to cover transitions between surfaces or for decoration. It is traditionally made from solid milled wood or plaster, but may be of plastic or reformed wood. In classical architecture and sculpture, the moulding is often carved in marble or other stones. In historic architecture, and some expensive modern buildings, it may be formed in place with plaster.
A "plain" moulding has right-angled upper and lower edges. A "sprung" moulding has upper and lower edges that bevel towards its rear, allowing mounting between two non-parallel planes (such as a wall and a ceiling), with an open space behind. Mouldings may be decorated with paterae, since long, uninterrupted elements can be monotonous to the eye.
Types
Decorative mouldings have been made of wood, stone and cement. Recently mouldings have been made of extruded polyvinyl chloride (PVC) and expanded polystyrene (EPS) as a core with a cement-based protective coating. Synthetic mouldings are a cost-effective alternative that rival the aesthetic and function of traditional profiles.
Common mouldings include:
Archivolt: Ornamental moulding or band following the curve on the underside of an arch.
Astragal: Semi-circular moulding attached to one of a pair of doors to cover the gap where they meet.
Baguette: Thin, half-round moulding, smaller than an astragal, sometimes carved, and enriched with foliages, pearls, ribbands, laurels, etc. When enriched with ornaments, it was also called chapelet.
Bandelet: Any little band or flat moulding, which crowns a Doric architrave. It is also called a tenia (from Greek tainia, an article of clothing in the form of a ribbon).
Baseboard, "base moulding" or "skirting board": Used to conceal the junction of an interior wall and floor, to protect the wall from impacts and to add decorative features. A "speed base" makes use of a base "cap mouldin
|
https://en.wikipedia.org/wiki/Acetobacterium%20carbinolicum
|
Acetobacterium carbinolicum is a homoacetogenic, strictly anaerobic bacterium that oxidises primary aliphatic alcohols.
These Gram-positive, non-spore-forming, rod-shaped bacteria grow optimally at about 30 °C, but some subspecies are also psychrotolerant, able to grow at temperatures as low as 2 °C. The microorganisms of the subspecies A. carbinolicum kysingense, for example, have been isolated from fine sand and mud sediment in a brackish fjord in Jutland, Denmark, where sodium chloride (NaCl) concentrations in the water reach 4.3%.
|
https://en.wikipedia.org/wiki/Fructosamine%20kinase%20family
|
In molecular biology, the fructosamine kinase family is a family of enzymes. This family includes eukaryotic fructosamine-3-kinase enzymes, which may initiate a process leading to the deglycation of fructoselysine and of glycated proteins, and which phosphorylate 1-deoxy-1-morpholinofructose, fructoselysine, fructoseglycine, fructose and glycated lysozyme. The family also includes ketosamine-3-kinases (KT3K). Ketosamines derive from a non-enzymatic reaction between a sugar and a protein. Ketosamine-3-kinases catalyse the phosphorylation of the ketosamine moiety of glycated proteins. The instability of a phosphorylated ketosamine leads to its degradation, and KT3K is thus thought to be involved in protein repair.
The function of the prokaryotic members of this group has not been established. However, several lines of evidence indicate that they may function as fructosamine-3-kinases (FN3K). First, they are similar to characterised FN3Ks from mouse and human. Second, the Escherichia coli members are found in close proximity on the genome to fructose-6-phosphate kinase (PfkB). Last, FN3K activity has been found in the blue-green alga Anacystis montana, indicating that such activity, directly demonstrated in eukaryotes, is nonetheless not confined to eukaryotes.
|
https://en.wikipedia.org/wiki/Semi-structured%20data
|
Semi-structured data is a form of structured data that does not obey the tabular structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data. It is therefore also known as a self-describing structure.
In semi-structured data, the entities belonging to the same class may have different attributes even though they are grouped together, and the attributes' order is not important.
Semi-structured data have become increasingly common since the advent of the Internet, where full-text documents and databases are no longer the only forms of data and different applications need a medium for exchanging information. Semi-structured data are also often found in object-oriented databases.
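A small sketch illustrates the point about entities of the same class carrying different attributes. The document and field names below are hypothetical, invented purely for illustration; the traversal uses Python's standard-library XML parser:

```python
import xml.etree.ElementTree as ET

# Hypothetical document: both <person> records belong to the same class,
# yet carry different attributes and different child elements.
doc = """
<people>
  <person name="Ada"><email>ada@example.org</email></person>
  <person name="Grace" title="Rear Admiral"><phone>555-0100</phone></person>
</people>
"""

root = ET.fromstring(doc)
for person in root.iter("person"):
    # A missing field is simply absent -- not a NULL column as in a
    # relational table -- and attribute order carries no meaning.
    print(person.get("name"), person.get("title"), [child.tag for child in person])
```

Generic code can still process every record because the markers (tags and attributes) describe each entity's own structure.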
Types
XML
XML, other markup languages, email, and EDI are all forms of semi-structured data. OEM (Object Exchange Model) was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizing SOAP principles.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor as database schema, enforced by the XML schema and processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML
|
https://en.wikipedia.org/wiki/Active%20packaging
|
The terms active packaging, intelligent packaging, and smart packaging refer to amplified packaging systems used with foods, pharmaceuticals, and several other types of products. They help extend shelf life, monitor freshness, display information on quality, improve safety, and improve convenience.
The terms are often related and can overlap. Active packaging usually means having active functions beyond the inert passive containment and protection of the product. Intelligent and smart packaging usually involve the ability to sense or measure an attribute of the product, the inner atmosphere of the package, or the shipping environment. This information can be communicated to users or can trigger active packaging functions. Programmable matter, smart materials, etc. can be employed in packages. Yam, Takhistov, and Miltz have defined intelligent or smart packaging as:
Depending on the working definitions, some traditional types of packaging might be considered as "active" or "intelligent". More often, the terms are used with new technologically advanced systems: microelectronics, computer applications, nanotechnology, etc.
Moisture control
For many years, desiccants have been used to control the water vapor in a closed package. A desiccant is a hygroscopic substance usually in a porous pouch or sachet which is placed inside a sealed package. They have been used to reduce corrosion of machinery and electronics and to extend the shelf life of moisture-sensitive foods. With pharmaceutical packages, a common method is to include a small packet of desiccant in a bottle. Other methods of including desiccants attached to the inner surface or in the material have recently been developed.
Corrosion
Corrosion inhibitors can be applied to items to help prevent rust and corrosion. Volatile corrosion inhibitors (VCI) or vapor phase corrosion inhibitors can be provided inside a package in a pouch or can be incorporated in a saturated overwrap of special paper or pl
|
https://en.wikipedia.org/wiki/Inverted%20ligand%20field%20theory
|
Inverted ligand field theory (ILFT) describes a phenomenon in the bonding of coordination complexes where the lowest unoccupied molecular orbital is primarily of ligand character. This is contrary to the traditional ligand field theory or crystal field theory picture, and arises from the breakdown of the assumption that, in organometallic complexes, the ligands are more electronegative and have frontier orbitals below the d orbitals of the electropositive metal. Moving to the right of the d-block, towards the boundary between the transition metals and the main group, the d orbitals become more core-like, making the cations more electronegative. This lowers the d-orbital energies until, eventually, they lie below the ligand frontier orbitals. Here the ligand field inverts: the bonding orbitals become more metal-based and the antibonding orbitals more ligand-based. The relative arrangement of the d orbitals is also inverted in complexes displaying an inverted ligand field. This has consequences for the understanding of accessible metal oxidation states and of the reactivity of complexes exhibiting ILFT.
History
The first example of an inverted ligand field was demonstrated in a 1995 theoretical paper by James Snyder. Snyder proposed that the [Cu(CF3)4]- complexes reported by Naumann et al., assigned a formal oxidation state of 3+ at the copper, would be better thought of as Cu(I). By comparing the d-orbital occupation, calculated charges and orbital populations of the formally "Cu(III)" complex [Cu(CF3)4]- and the formally Cu(I) complex [Cu(CH3)2]-, he illustrated how the former could be better described as a d10 copper complex experiencing two-electron donation from the CF3- ligands. The phenomenon, termed an inverted ligand field by Roald Hoffmann, began to be described by Aullón and Alvarez, who identified it as a result of relative electronegativities. Lancaster and co-workers later provided experimental
|
https://en.wikipedia.org/wiki/TOPS%20%28file%20server%29
|
TOPS (Transcendental OPerating System) is a peer-to-peer LAN-based file sharing system best known in its Macintosh implementation, but also available for DOS and able to interoperate with Unix's NFS. Originally written by Centram Systems West, TOPS passed to Sun Microsystems when Sun purchased the company as part of its development of the NFS ecosystem; Centram was renamed TOPS after the acquisition. Sales of TOPS dried up after the introduction of System 7, which featured a similar file sharing system built in, and Sun spun off its NFS developments to Sitka.
Early versions
TOPS was implemented in the 1980s, an era in which each computer system featured its own networking protocol and systems were generally unable to talk to each other. At the time Apple was in the midst of the Macintosh Office effort, and was working with two external companies to develop the Apple Filing Protocol (AFP), built on top of AppleTalk. The Macintosh Office effort ultimately failed, and one of the two companies, Centram, decided to implement a similar system on its own. This became the first version of TOPS.
When TOPS was originally released in July 1985 there was no peer-to-peer file sharing solution on the Mac. According to PC Magazine, connecting a Mac to an Apple LaserWriter printer was the initial intended function of AppleTalk. Apple's own file sharing solution, AppleShare, was not released until later, and unlike TOPS it required a dedicated server machine to run on, at least a Mac Plus. For smaller offices TOPS was an attractive low-cost solution, and saw relatively widespread use. Even after the introduction of AppleShare, TOPS managed to hold on to an estimated 600,000 client installs.
TOPS was initially a protocol using a custom set of remote procedure calls and able to talk only between TOPS clients. PCs generally lacked networking of any sort, and Centram addressed this problem by introducing a line of LocalTalk cards for the PC, along with a TOPS client. Files could be e
|
https://en.wikipedia.org/wiki/Cell%20and%20Tissue%20Research
|
Cell and Tissue Research presents regular articles and reviews in the areas of molecular, cell, stem cell biology and tissue engineering. In particular, the journal provides a forum for publishing data that analyze the supracellular, integrative actions of gene products and their impact on the formation of tissue structure and function. Articles emphasize structure–function relationships as revealed by recombinant molecular technologies. The coordinating editor of the journal is Klaus Unsicker.
Subjects covered in journal
Areas of research frequently published in Cell and Tissue Research include: neurobiology, neuroendocrinology, endocrinology, reproductive biology, skeletal and immune systems, and development.
Editors
The coordinating editor of the journal is Klaus Unsicker, of the University of Heidelberg. Section editors are K. Unsicker, neurobiology/sense organs/endocrinology; M. Furutani-Seiki, development/growth/regeneration; W.W. Franke, molecular/cell biology; Andreas Oksche and Horst-Werner Korf, neuroendocrinology; T. Pihlajaniemi, extracellular matrix; D. Furst, muscle; Joseph Bonventre, kidney and related subjects; P. Sutovsky, reproductive biology; B. Singh, immunology/hematology; and V. Hartenstein, invertebrates.
See also
Autophagy (journal)
Cell Biology International
Cell Cycle (journal)
|
https://en.wikipedia.org/wiki/WH2%20motif
|
Function
The WH2 motif or WH2 domain is an evolutionarily conserved sequence motif found in proteins. It occurs in WASP proteins, which control actin polymerisation; WH2 is therefore important in cellular processes such as cell contractility, cell motility, cell trafficking and cell signalling.
Motif
The WH2 motif (for Wiskott–Aldrich syndrome homology region 2) has been shown in WAS and Scar1/WASF1 (mammalian homologue) to interact via their WH2 motifs with actin.
The WH2 (WASP-Homology 2, or Wiskott–Aldrich homology 2) domain is an ~18 amino acids actin-binding motif. This domain was first recognized as an essential element for the regulation of the cytoskeleton by the mammalian Wiskott–Aldrich syndrome protein (WASP) family. WH2 proteins occur in eukaryotes from yeast to mammals, in insect viruses, and in some bacteria. The WH2 domain is found as a modular part of larger proteins; it can be associated with the WH1 or EVH1 domain and with the CRIB domain, and the WH2 domain can occur as a tandem repeat. The WH2 domain binds to actin monomers and can facilitate the assembly of actin monomers into actin filaments.
Examples
Human genes encoding proteins containing the WH2 motif include:
COBL, COBLL1, ESPN, INF2, JMY
LMOD1, LMOD2, LMOD3
MTSS1, PXK
WAS, WASF1, WASF2, WASF3, WASF4, WASL, WASPIP, WHDC1, WIPF1, WIPF2
|
https://en.wikipedia.org/wiki/Matsumoto%20zeta%20function
|
In mathematics, Matsumoto zeta functions are a type of zeta function introduced by Kohji Matsumoto in 1990. They are functions of the form

  \varphi(s) = \prod_{p} \frac{1}{A_p(p^{-s})},

where p is a prime and A_p is a polynomial.
|
https://en.wikipedia.org/wiki/Jan%20Bergstra
|
Johannes Aldert "Jan" Bergstra (born 1951) is a Dutch computer scientist. His work has focussed on logic and the theoretical foundations of software engineering, especially on formal methods for system design. He is best known as an expert on algebraic methods for the specification of data and computational processes in general.
Biography
Jan Bergstra was born in 1951 in Rotterdam, the son of Tjeerd Bergstra and Johanna Bisschop. He was educated at the Montessori Lyceum Rotterdam (gymnasium beta) and then studied mathematics at Utrecht University, starting in 1969. After an MSc he wrote a PhD thesis, defended in 1976, on recursion theory in higher types, under the supervision of Dirk van Dalen.
Bergstra held posts at the Institute of Applied Mathematics and Computer Science of the University of Leiden (1976–82), and the Centrum Wiskunde & Informatica (CWI) in Amsterdam. In 1985 he was appointed Professor of Programming and Software Engineering at the Informatics Institute of the University of Amsterdam and, at the same time, Professor of Applied Logic at Utrecht University; such split positions are not uncommon in the Netherlands. These two chairs he continues to hold.
He has been an Advisor of the CWI (1985–2004). In 1989 he worked for a year at Philips Research in Eindhoven as a project leader and, subsequently, continued as a consultant there until 2002. While at Philips he was involved in industrial projects on consumer electronics and medical equipment.
He founded CONCUR, the international conference series in Concurrency Theory, by organising the first two conferences in Amsterdam in 1990 and 1991. He is a member of several editorial boards, and is the managing editor of Science of Computer Programming and the Journal of Logic and Algebraic Programming.
In 2004 Jan Bergstra contacted Mark Burgess of Oslo University College, looking for scientific backing for a proposed one-year master's course in system administration at the university. In spite of ve
|
https://en.wikipedia.org/wiki/Flag%20of%20the%20German%20Empire
|
The flag of the German Empire, also called the Imperial Flag or Realm Flag, is a combination of the flag of Prussia and the flag of the Hanseatic League. Starting as the national flag of the North German Confederation, it went on to be commonly used, officially and unofficially, under the nation-state of the German Reich, which existed from 1871 to 1945.
History
The flag was first proposed and adopted under the leadership of Otto von Bismarck, where it would be used as the flag of the North German Confederation which was formed in 1867. During the Franco-Prussian War, the German Empire was founded (i.e., the South German states joined the Confederation). Germany would continue using it until the German Revolution of 1918–1919, which resulted in the founding of the Weimar Republic.
The Weimar Republic did not use it as a national flag though it did see use within the Reichswehr and by many paramilitary organizations including the Freikorps. It would see usage by right-wing conservative and liberal political parties, including the German National People's Party and the German People's Party. Immediately after the electoral victory of the Nazi Party in March 1933, German President Paul von Hindenburg reinstated the flag by decree as the co-official flag of Germany. In 1935, a year after Hindenburg's death, the Imperial Flag was banned from use as the national flag in favour of the black-red-white swastika flag.
During World War II, German exiles in the Soviet Union known as the National Committee for a Free Germany adopted it as the flag of a future free German state. Many members of this organization would play a role in the Soviet occupation and organization of the East German government. Because of this, after World War II it was proposed that East Germany adopt the Imperial Flag as its national flag.
Due to the ban on Nazi swastika flag in modern Germany, many German Neo-Nazis instead adopted the Imperial Flag. However, the flag never originally had any racist or a
|
https://en.wikipedia.org/wiki/H3R8me2
|
H3R8me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 8th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R8me2 is associated with transcriptional repression, and modified by PRMT5, but not CARM1.
As of March 2021, there are no disease associations known for H3R8me2.
Nomenclature
The name of this modification indicates dimethylation of arginine 8 on histone H3 protein subunit:
Arginine
Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases.
Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation.
Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks.
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin.
Mechanism and function of modification
The H3R8 site is methylated by PRMT5 and linked to transcriptional repression. PRMT5 is recruited by several transcriptional repressors, such as Snail, ZNF224 and Ski. A prior acetylation of H3K9 or H3K14 prevents methylation of H3R8 by PRMT5.
Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is in
|
https://en.wikipedia.org/wiki/Audiomack
|
Audiomack is an on-demand music streaming and audio discovery platform that allows artists and creators to upload limitless music and podcasts for listeners through its mobile apps and website. In February 2021, Billboard announced Audiomack streaming data would begin informing some of its flagship charts, including the Hot 100, the Billboard 200, and the Global 200. In March 2021, Fast Company magazine named Audiomack one of the 10 most innovative companies in music.
History
Co-founded in 2012 by Dave Macli, David Ponte, Thomas Klinger, Ty Wangsness and Brian Zisook, Audiomack allowed artists to freely share their mixtapes, songs, and albums. In April 2013, J. Cole (Yours Truly 2) and Chance the Rapper (Acid Rap) released new projects exclusively on the platform. In September 2018, Eminem released "Killshot", a diss track about Machine Gun Kelly, exclusively on Audiomack; it gained 8.6 million plays in four months. In February 2019, Nicki Minaj released three songs exclusively on the platform, including a remix of Blueface's "Thotiana."
In November 2020, Audiomack signed a music licensing agreement with Warner Music Group, covering the United States, Canada, Jamaica, and five "key African territories," including Ghana, Kenya, Nigeria, South Africa, and Tanzania. In February 2021, Variety reported Audiomack has music licensing agreements in the United States with Universal Music Group and Sony Music Entertainment. Audiomack also receives music through licensing deals with labels and distributors such as 300 and EMPIRE, among others, and manages rights through its relationship with Songtrust.
In December 2020, Audiomack opened its monetization program, AMP, to all eligible creators based in the United States, Canada, and the United Kingdom. In July 2021, Audiomack expanded the program to creators worldwide and introduced a partnership with Ziiki Media to help promote artists across Africa.
In December 2021, Audiomack launched Supporters, a "feature that will en
|
https://en.wikipedia.org/wiki/Physical%20literacy
|
Physical literacy is a fundamental and valuable human capability that can be described as a disposition acquired by human individuals encompassing the motivation, confidence, physical competence, knowledge and understanding that establishes purposeful physical pursuits as an integral part of their lifestyle.
The fundamental and significant aspects of physical literacy are:
everyone can be physically literate as it is appropriate to each individual’s endowment
everyone’s physical literacy journey is unique
the skills that make up physical literacy can vary by location
physical literacy is relevant and valuable at all stages and ages of life
the concept embraces much more than physical competence
at the heart of the concept is the motivation and commitment to be active
the disposition is evidenced by a love of being active, born out of the pleasure and satisfaction individuals experience in participation
a physically literate individual values and takes responsibility for maintaining purposeful physical pursuits throughout the lifecourse
charting of progress of an individual’s personal journey must be judged against previous achievements and not against any form of national benchmarks
History
In 1993, Dr. Margaret Whitehead proposed the concept of physical literacy at the International Association of Physical Education and Sport for Girls and Women Congress in Melbourne, Australia. From this research, the concept and definition of physical literacy were developed. In addition, the implications of physical literacy being the goal of all structures were drawn up. From 1993 to the present day, much has been done to advance physical literacy. Research has been conducted on physical literacy and presented at conferences around the world. In addition, the book Physical Literacy: throughout the life course was written, and numerous conferences and workshops have been delivered to train educators, parents, health practitioners, early childhood educators, coaches,
|
https://en.wikipedia.org/wiki/Galileo%27s%20Middle%20Finger
|
Galileo's Middle Finger is a 2015 book about the ethics of medical research by Alice Dreger, an American bioethicist and author. Dreger explores the relationship between science and social justice by discussing a number of scientific controversies. These include the debates surrounding intersex genital surgery, autogynephilia, and anthropologist Napoleon Chagnon's work.
Synopsis
The first part of Galileo's Middle Finger recounts Dreger's activism against surgical "correction" of intersex individuals' genitalia. Some surgeons called this "total urogenital mobilization" which is "...ripping out everything that didn't seem right to the doctor and rebuilding a girl's genitals from scratch using Frankenstein stitches..." Based on her interactions with the intersex community as well as her own research, she advocated that genital surgery for intersex children be postponed until the individual is old enough to make an informed decision, in the absence of any evidence that the benefits of such surgery outweighed its already reported risks.
The second section provides her analysis of the controversy surrounding The Man Who Would Be Queen (2003), by sex researcher and psychologist J. Michael Bailey. In that book, Bailey summarized research on Blanchard's transsexualism typology in a way that Dreger says is scientifically accurate, well-intended, and sympathetic, but insensitive to its political implications. Dreger writes that "Bailey made the mistake of thinking that openly accepting and promoting the truth about people's identities would be understood as the same as accepting them and helping them, as he felt he was". Instead, many activists in the trans community objected to the contention that their transition was sexually motivated.
Bailey's book was based on the academic publications of psychologist Ray Blanchard, which Bailey interpreted for a lay audience. The larger audience and potential to influence public beliefs about transgenderism led a prominent transgende
|
https://en.wikipedia.org/wiki/Validated%20numerics
|
Validated numerics, or rigorous computation, verified computation, reliable computation, numerical verification, is numerics that includes mathematically strict evaluation of errors (rounding error, truncation error, discretization error); it is a field of numerical analysis. For computation, interval arithmetic is used, and all results are represented by intervals. Validated numerics was used by Warwick Tucker to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems.
Importance
Computation without verification may cause unfortunate results. Below are some examples.
Rump's example
In the 1980s, Rump constructed a now-classic example: a complicated expression whose single-precision, double-precision and extended-precision evaluations all appeared to agree and to be correct, yet even the sign of the computed value differed from the true value.
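Rump's expression, f(x, y) = 333.75y^6 + x^2(11x^2y^2 − y^6 − 121y^4 − 2) + 5.5y^8 + x/(2y) at x = 77617, y = 33096, is well documented; its exact value is −54767/66192 ≈ −0.82740. A short Python sketch (an illustration of the effect on IEEE-754 doubles, not Rump's original IBM S/370 setup) contrasts naive floating-point evaluation with exact rational arithmetic:

```python
from fractions import Fraction

def rump_float(x, y):
    # Naive IEEE-754 double-precision evaluation of Rump's expression.
    return (333.75 * y**6
            + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
            + 5.5 * y**8
            + x / (2 * y))

def rump_exact(x, y):
    # Same expression evaluated in exact rational arithmetic.
    x, y = Fraction(x), Fraction(y)
    return (Fraction(33375, 100) * y**6
            + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
            + Fraction(55, 10) * y**8
            + x / (2 * y))

x, y = 77617, 33096
approx = rump_float(float(x), float(y))
exact = rump_exact(x, y)   # -54767/66192, about -0.82740
# The double-precision result is wildly wrong: catastrophic cancellation
# between terms of magnitude ~1e36 destroys every significant digit.
print(approx, float(exact))
```

The polynomial part of the expression is exactly −2 at these inputs, so the true value is x/(2y) − 2; the huge intermediate terms cancel exactly in rational arithmetic but not in floating point.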
Phantom solution
Breuer–Plum–McKenna used the spectrum method to solve the boundary value problem of the Emden equation, and reported that an asymmetric solution was obtained. This result conflicted with the theoretical study by Gidas–Ni–Nirenberg, which claimed that there is no asymmetric solution. The solution obtained by Breuer–Plum–McKenna was a phantom solution caused by discretization error. This is a rare case, but it shows that when differential equations are to be discussed rigorously, numerical solutions must be verified.
Accidents caused by numerical errors
The following examples are known as accidents caused by numerical errors:
Failure of intercepting missiles in the Gulf War (1991)
Failure of the Ariane 5 rocket (1996)
Mistakes in election result totalization
Main topics
The study of validated numerics is divided into the following fields:
Tools
See also
|
https://en.wikipedia.org/wiki/Outline%20VPN
|
Outline VPN is a free and open-source tool that deploys Shadowsocks servers on multiple cloud service providers. The software suite also includes client software for multiple platforms. Outline was developed by Jigsaw, a technology incubator created by Google.
The Outline Server supports self-hosting, as well as cloud service providers including DigitalOcean, Rackspace, Google Cloud Platform, and Amazon EC2. Installation involves running a command on its command-line interface, or in the case of installing on DigitalOcean or Google Cloud, its graphical user interface.
Components
Outline has three main components:
The Outline Server acts as a proxy and relays connections between the client and the sites they want to access. It is based on Shadowsocks, and offers a REST API for management of the server by the Outline Manager application.
The Outline Manager is a graphical application used to deploy and manage access to Outline Servers. It supports Windows, macOS and Linux.
The Outline Client connects to the internet via the Outline Server. It supports Windows, macOS, Linux, ChromeOS, Android, and iOS.
Security and privacy
Outline uses the Shadowsocks protocol for communication between the client and server. Traffic is encrypted with the IETF ChaCha20 stream cipher (256-bit key) and authenticated with the IETF Poly1305 authenticator.
Outline is free and open-source, licensed under the Apache License 2.0, and was audited by Radically Open Security and claims not to log users' web traffic. The Outline Server supports unattended upgrades.
Outline is not a true VPN solution but rather a Shadowsocks-based proxy. The two technologies are similar in the way they can be used to redirect network traffic and make it appear as originating from another device (the server), and hide the traffic's final destination from observers and filters until it reaches the proxy server. However, a VPN has additional capabilities, such as encapsulating traffic within a virtual tun
|
https://en.wikipedia.org/wiki/Ricochet%20%28software%29
|
Ricochet or Ricochet IM is a free software, multi-platform, instant messaging software project originally developed by John Brooks and later adopted as the official instant messaging client project of the Invisible.im group. A goal of the Invisible.im group is to help people maintain privacy by developing a "metadata free" instant messaging client.
History
Originally called Torsion IM, Ricochet was renamed in June 2014. Ricochet is a modern alternative to TorChat, which hasn't been updated in several years, and to Tor Messenger, which is discontinued. On September 17, 2014, it was announced that the Invisible.im group would be working with Brooks on further development of Ricochet in a Wired article by Kim Zetter. Zetter also wrote that Ricochet's future plans included a protocol redesign and file-transfer capabilities. The protocol redesign was implemented in April 2015.
In February 2016, Ricochet's developers made public a security audit that had been sponsored by the Open Technology Fund and carried out by the NCC Group in November 2015. The results of the audit were "reasonably positive". The audit identified "multiple areas of improvement" and one vulnerability that could be used to deanonymize users. According to Brooks, the vulnerability has been fixed in the latest release.
Technology
Ricochet is a decentralized instant messenger, meaning there is no server to connect to and share metadata with. Further, using Tor, Ricochet starts a Tor hidden service locally on a person's computer and can communicate only with other Ricochet users who are also running their own Ricochet-created Tor hidden services. This way, Ricochet communication never leaves the Tor network. A user screen name is auto-generated upon first starting Ricochet; the first half of the screen name is the word "ricochet", with the second half being the address of the Tor hidden service. Before two Ricochet users can talk, at least one of them must privately or publicly share their
|
https://en.wikipedia.org/wiki/3000%20%28number%29
|
3000 (three thousand) is the natural number following 2999 and preceding 3001. It is the smallest number requiring thirteen letters in English (when "and" is required from 101 forward).
Selected numbers in the range 3001–3999
3001 to 3099
3001 – super-prime; divides the Euclid number 2999# + 1
3003 – triangular number, only number known to appear eight times in Pascal's triangle; no number is known to appear more than eight times other than 1. (see Singmaster's conjecture)
3019 – super-prime, happy prime
3023 – 84th Sophie Germain prime, 51st safe prime
3025 = 55², sum of the cubes of the first ten integers, centered octagonal number, dodecagonal number
3037 – star number, cousin prime with 3041
3045 – sum of the integers 196 to 210 and sum of the integers 211 to 224
3046 – centered heptagonal number
3052 – decagonal number
3059 – centered cube number
3061 – prime of the form 2p-1
3063 – perfect totient number
3067 – super-prime
3071 – Thabit number
3072 – 3-smooth number (2¹⁰ × 3)
3075 – nonagonal number
3078 – 18th pentagonal pyramidal number
3080 – pronic number
3081 – triangular number, 497th sphenic number
3087 – sum of first 40 primes
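Several of the identities in the list above are easy to verify directly. The following sketch (plain Python; the trial-division prime generator is for illustration only and is fine at this scale) checks a handful of them:

```python
def primes(n):
    """First n primes by trial division (adequate for small n)."""
    found = []
    candidate = 2
    while len(found) < n:
        # found contains every prime below candidate, so this test suffices
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# 3003 is the 77th triangular number: 77*78/2
assert 3003 == 77 * 78 // 2
# 3025 = 55^2 = (1 + 2 + ... + 10)^2 = 1^3 + 2^3 + ... + 10^3
assert 3025 == 55**2 == sum(k**3 for k in range(1, 11))
# 3072 = 2^10 * 3 is 3-smooth
assert 3072 == 2**10 * 3
# 3087 is the sum of the first 40 primes
assert 3087 == sum(primes(40))
```

The second check also illustrates the classical identity that the sum of the first n cubes equals the square of the nth triangular number.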
3100 to 3199
3109 – super-prime
3119 – safe prime
3121 – centered square number, emirp, largest minimal prime in quinary.
3125 = 5⁵ – a perfect fifth power
3136 = 56², palindromic in ternary (11022011₃), tribonacci number
3137 – Proth prime, both a left- and right-truncatable prime
3149 – highly cototient number
3155 – member of the Mian–Chowla sequence
3160 – triangular number
3167 – safe prime
3169 – super-prime, cuban prime of the form (x³ − y³)/(x − y) with x = y + 1
3192 – pronic number
3200 to 3299
3203 – safe prime
3207 – number of compositions of 14 whose run-lengths are either weakly increasing or weakly decreasing
3229 – super-prime
3240 – triangular number
3248 – member of a Ruth-Aaron pair with 3249 under second definition, largest number whose factorial is less than 10¹⁰⁰⁰⁰ – hence its factoria
|
https://en.wikipedia.org/wiki/Google%20Code-in
|
Google Code-in (GCI) was an international annual programming competition hosted by Google LLC that allowed pre-university students to complete tasks specified by various partnering open source organizations. The contest began as the Google Highly Open Participation Contest; in 2010 the format was modified into Google Code-in. Students who completed tasks won certificates and T-shirts. Each organization also selected two grand prize winners who earned a free trip to Google's headquarters in Mountain View, California. In 2020, Google announced the cancellation of the contest.
History
The program began as the Google Highly Open Participation Contest, which ran during 2007–2008 and was designed to encourage high school students to participate in open source projects. In 2010, the program was reworked into Google Code-in. After the 2014 edition, Google Melange was replaced by a separate website for Google Code-in. Mauritius, an African country, participated for the first time in 2016 and was noticed for its strong debut; in 2017 it produced its first Grand Prize winner.
The contest was open to students thirteen years of age or older who were then enrolled in high school (or equivalent pre-university or secondary school program). Prizes offered by Google included a contest T-shirt and a participation certificate for completing at least one task and US$100 for every three tasks completed to a maximum of US$500. There was a grand prize of a trip to the Google headquarters for an award ceremony. Each participating open source project selected one contestant to receive the grand prize, for a total of 10 grand prize winners.
Statistics
Eligibility
Students must be between 13 and 17 years old (inclusive) to participate. In addition, students must upload parental consent forms as well as some documentation proving enrollment in a pre-university program.
Program
Google partners with certain open source organiza
|
https://en.wikipedia.org/wiki/Hess%27s%20law
|
Hess's law of constant heat summation, also known simply as Hess' law, is a relationship in physical chemistry named after Germain Hess, a Swiss-born Russian chemist and physician who published it in 1840. The law states that the total enthalpy change during the complete course of a chemical reaction is independent of the sequence of steps taken.
Hess's law is now understood as an expression of the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function). According to the first law of thermodynamics, the enthalpy change in a system due to a reaction at constant pressure is equal to the heat absorbed (or the negative of the heat released), which can be determined by calorimetry for many reactions. The values are usually stated for reactions with the same initial and final temperatures and pressures (while conditions are allowed to vary during the course of the reactions). Hess's law can be used to determine the overall energy required for a chemical reaction that can be divided into synthetic steps that are individually easier to characterize. This affords the compilation of standard enthalpies of formation, which may be used to predict the enthalpy change in complex synthesis.
Theory
Hess's law states that the change of enthalpy in a chemical reaction is the same regardless of whether the reaction takes place in one step or several steps, provided the initial and final states of the reactants and products are the same. Enthalpy is an extensive property, meaning that its value is proportional to the system size. Because of this, the enthalpy change is proportional to the number of moles participating in a given reaction.
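Hess's law reduces numerically to adding the enthalpy changes of the individual steps. The sketch below uses the textbook two-step formation of CO₂ from graphite; the enthalpy values (in kJ/mol) are common reference figures, not taken from this article.

```python
# Two-step route from graphite to carbon dioxide, with standard
# enthalpy changes in kJ/mol (typical reference values).
steps = {
    "C(s) + 1/2 O2 -> CO":  -110.5,
    "CO  + 1/2 O2 -> CO2":  -283.0,
}

# Hess's law: the overall enthalpy change is the sum over the steps,
# independent of the route taken from initial to final state.
overall = sum(steps.values())
print(overall)  # -393.5, matching the direct C(s) + O2 -> CO2 value
```

Because enthalpy is a state function, the same −393.5 kJ/mol is obtained whether the reaction is carried out directly or via the carbon monoxide intermediate.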
In other words, if a chemical change takes place by several different routes, the overall enthalpy change is the same, regardless of the route by which the chemical change occurs (provided the initial and final condition are the same). If this were not tru
|
https://en.wikipedia.org/wiki/The%20Anatomy%20Lesson%20of%20Dr.%20Nicolaes%20Tulp
|
The Anatomy Lesson of Dr. Nicolaes Tulp is a 1632 oil painting on canvas by Rembrandt housed in the Mauritshuis museum in The Hague, the Netherlands. It was originally created to be displayed by the Surgeons Guild in their meeting room. The painting is regarded as one of Rembrandt's early masterpieces.
In the work, Nicolaes Tulp is pictured explaining the musculature of the arm to a group of doctors. Some of the spectators are various doctors who paid commissions to be included in the painting. The painting is signed in the top left-hand corner Rembrandt. f[ecit] 1632. This may be the first instance of Rembrandt signing a painting with his forename (in its original form) as opposed to the monogram RHL (Rembrandt Harmenszoon of Leiden), and is thus a sign of his growing artistic confidence.
Background
The event can be dated to 31 January 1632: the Amsterdam Guild of Surgeons, of which Tulp was official City Anatomist, permitted only one public dissection a year, and the body would have to be that of an executed criminal.
Anatomy lessons were a social event in the 17th century, taking place in lecture rooms that were actual theatres, with students, colleagues and the general public being permitted to attend on payment of an entrance fee. The spectators are appropriately dressed for this social occasion. It is thought that the uppermost (not holding the paper) and farthest left figures were added to the picture later.
Every five to ten years, the Surgeons Guild would commission a group portrait from a leading portraitist of the period; Rembrandt was given this task when he was 25 years old and newly arrived in Amsterdam. It was his first major commission in Amsterdam. Each of the men included in the portrait would have paid a certain amount of money to be included in the work, and the more central figures (in this case, Tulp) probably paid more, perhaps twice as much. Rembrandt's anatomical portrait radically altered the conventions of the genre, by including a
|
https://en.wikipedia.org/wiki/Eugene%20Podkletnov
|
Eugene Podkletnov (, Yevgeny Podkletnov) is a Russian ceramics engineer known for his claims made in the 1990s of designing and demonstrating gravity shielding devices consisting of rotating discs constructed from ceramic superconducting materials.
Background and education
Podkletnov graduated from the University of Chemical Technology, Mendeleyev Institute, in Moscow; he then spent 15 years at the Institute for High Temperatures of the Russian Academy of Sciences. He received a doctorate in materials science from Tampere University of Technology in Finland. After graduation he continued superconductor research in the university's Materials Science department until his expulsion in 1997, after which he moved back to Moscow, where he reportedly took an engineering job. Since leaving Tampere in 1997, Podkletnov has avoided public contact and appearances. There is a report that he later returned to Tampere to work on superconductors at Tamglass Engineering Oy.
Gravity shielding
According to the account Podkletnov gave to Wired reporter Charles Platt in a 1996 phone interview, during a 1992 experiment with a rotating superconducting disc:
"Someone in the laboratory was smoking a pipe, and the pipe smoke rose in a column above the superconducting disc. So we placed a ball-shaped magnet above the disc, attached to a balance. The balance behaved strangely. We substituted a nonmagnetic material, silicon, and still the balance was very strange. We found that any object above the disc lost some of its weight, and we found that if we rotated the disc, the effect was increased."
Public controversy
Podkletnov's first peer-reviewed paper on the apparent gravity-modification effect, published in 1992, attracted little notice. In 1996, he submitted a longer paper, in which he claimed to have observed a larger effect (2% weight reduction as opposed to 0.3% in the 1992 paper) to the Journal of Physics D. According to Platt, a member of the editorial staff, Ian Samp
|