source | text |
|---|---|
https://en.wikipedia.org/wiki/GC376 | GC376 is a broad-spectrum antiviral medication under development by the biopharmaceutical company Anivive Lifesciences for therapeutic uses in humans and animals. Anivive licensed the exclusive worldwide patent rights to GC376 from Kansas State University. As of 2020, GC376 is being investigated as a treatment for COVID-19. GC376 shows activity against many human and animal viruses including coronavirus and norovirus; the most extensive research has been multiple in vivo studies in cats, treating the coronavirus that causes deadly feline infectious peritonitis. Other research supports use in porcine epidemic diarrhea virus.
COVID-19
Since GC376 shows broad-spectrum activity against coronavirus, early on during the pandemic of 2020 it was suggested as a potential treatment for COVID-19. In response to the crisis, researchers at the University of Arizona published in vitro research indicating GC376 is highly active against 3CLpro in SARS-CoV-2 (the coronavirus which causes COVID-19). Another group of virologists at the University of Alberta led by D. Lorne Tyrrell then released a separate publication confirming GC376's activity against 3CLpro in SARS-CoV-2 and also indicating GC376 had a potent antiviral effect. The next day, Columbia University's Zuckerman Institute hosted a COVID-19 Virtual Symposium which released research led by David Ho and characterized GC376 as "the most promising" protease inhibitor under investigation in Ho's lab because GC376 was the "most potent" and reached "complete viral inhibition" in a culture of cells infected with SARS-CoV-2.
Pharmacology
Pharmacodynamics
GC376 is a protease inhibitor. It blocks the 3CLpro, a protease common to many (+)ssRNA viruses, thereby preventing the viral polyprotein from maturing into its functional parts. Chemically, GC376 is the bisulfite adduct of the aldehyde GC373 and it behaves as a prodrug for that compound. This aldehyde forms a covalent bond with the cysteine-144 residue at the protease's active site |
https://en.wikipedia.org/wiki/Titchmarsh%20convolution%20theorem | The Titchmarsh convolution theorem describes the properties of the support of the convolution of two functions. It was proven by Edward Charles Titchmarsh in 1926.
Titchmarsh convolution theorem
If $\varphi$ and $\psi$ are integrable functions, such that
$$\varphi * \psi\,(x) = \int_0^x \varphi(t)\,\psi(x-t)\,dt = 0$$
almost everywhere in the interval $0 < x < \kappa$, then there exist $\lambda \ge 0$ and $\mu \ge 0$ satisfying $\lambda + \mu \ge \kappa$ such that $\varphi(t) = 0$ almost everywhere in $0 < t < \lambda$ and $\psi(t) = 0$ almost everywhere in $0 < t < \mu$.
As a corollary, if the integral above is 0 for all $x > 0$, then either $\varphi$ or $\psi$ is almost everywhere 0 in the interval $(0, +\infty)$. Thus the convolution of two functions on $(0, +\infty)$ cannot be identically zero unless at least one of the two functions is identically zero.
As another corollary, if $\varphi * \psi\,(x) = 0$ for all $x \in (0, \kappa)$ and one of the functions $\varphi$ or $\psi$ is almost everywhere not null in this interval, then the other function must be null almost everywhere in $(0, \kappa)$.
The theorem can be restated in the following form:
Let $\varphi, \psi \in L^1(\mathbb{R})$. Then
$$\inf \operatorname{supp}(\varphi \ast \psi) = \inf \operatorname{supp} \varphi + \inf \operatorname{supp} \psi$$
if the left-hand side is finite. Similarly,
$$\sup \operatorname{supp}(\varphi \ast \psi) = \sup \operatorname{supp} \varphi + \sup \operatorname{supp} \psi$$
if the right-hand side is finite.
Above, $\operatorname{supp}$ denotes the support of a function, and $\inf$ and $\sup$ denote the infimum and supremum. This theorem essentially states that the well-known inclusion $\operatorname{supp}(\varphi \ast \psi) \subseteq \operatorname{supp} \varphi + \operatorname{supp} \psi$ is sharp at the boundary.
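This support identity has an elementary discrete analogue that is easy to check numerically. The sketch below illustrates the discrete case only, not the theorem itself (which concerns integrable functions on the line): for finitely supported real sequences, the first nonzero index of a convolution equals the sum of the factors' first nonzero indices, since the leading product of two nonzero reals cannot vanish.

```python
import numpy as np

def first_nonzero(a):
    """Index of the first nonzero entry, or None if all entries vanish."""
    idx = np.flatnonzero(a)
    return int(idx[0]) if idx.size else None

phi = np.array([0.0, 0.0, 3.0, -1.0, 2.0])   # "inf supp" = 2
psi = np.array([0.0, 5.0, 4.0])              # "inf supp" = 1
print(first_nonzero(np.convolve(phi, psi)))  # 3 == 2 + 1
```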
The higher-dimensional generalization in terms of the convex hull of the supports was proven by Jacques-Louis Lions in 1951:
If $\varphi, \psi \in \mathcal{E}'(\mathbb{R}^n)$, then
$$\operatorname{c.h.} \operatorname{supp}(\varphi \ast \psi) = \operatorname{c.h.} \operatorname{supp} \varphi + \operatorname{c.h.} \operatorname{supp} \psi.$$
Above, $\operatorname{c.h.}$ denotes the convex hull of the set and $\mathcal{E}'(\mathbb{R}^n)$ denotes the space of distributions with compact support.
The original proof by Titchmarsh uses complex-variable techniques, and is based on the Phragmén–Lindelöf principle, Jensen's inequality, Carleman's theorem, and Valiron's theorem. The theorem has since been proven several more times, typically using either real-variable or complex-variable methods. Gian-Carlo Rota has stated that no proof yet addresses the theorem's underlying combinatorial structure, which he believes is necessary for complete understanding. |
https://en.wikipedia.org/wiki/Ranked%20lists%20of%20country%20subdivisions | Ranked lists of country subdivisions.
Economics
List of country subdivisions by GDP
List of country subdivisions by GDP over 100 billion US dollars
List of governments in Canada by annual expenditures
List of Romanian counties by foreign trade
List of Ukrainian oblasts and territories by salary
Geography
List of Brazilian states by highest point
Population
List of federal subjects of Russia by population
List of South African provinces by population density
List of U.S. states by population density
See also
International rankings by country
Country subdivisions |
https://en.wikipedia.org/wiki/Hausdorff%20paradox | The Hausdorff paradox is a paradox in mathematics named after Felix Hausdorff. It involves the sphere $S^2$ (the surface of a ball in 3-dimensional space $\mathbb{R}^3$). It states that if a certain countable subset is removed from $S^2$, then the remainder can be divided into three disjoint subsets $A$, $B$ and $C$ such that $A$, $B$, $C$ and $B \cup C$ are all congruent. In particular, it follows that on $S^2$ there is no finitely additive measure defined on all subsets such that the measure of congruent sets is equal (because this would imply that the measure of $A$ is simultaneously $1/3$, $1/4$, and $1/2$ of the non-zero measure of the whole sphere).
The paradox was published in Mathematische Annalen in 1914 and also in Hausdorff's book, Grundzüge der Mengenlehre, the same year. The proof of the much more famous Banach–Tarski paradox uses Hausdorff's ideas. The proof of this paradox relies on the axiom of choice.
This paradox shows that there is no finitely additive measure on a sphere defined on all subsets which is equal on congruent pieces. (Hausdorff first showed in the same paper the easier result that there is no countably additive measure defined on all subsets.) The structure of the group of rotations on the sphere plays a crucial role here: the statement is not true on the plane or the line. In fact, as was later shown by Banach, it is possible to define an "area" for all bounded subsets in the Euclidean plane (as well as "length" on the real line) in such a way that congruent sets will have equal "area". (This Banach measure, however, is only finitely additive, so it is not a measure in the full sense, but it equals the Lebesgue measure on sets for which the latter exists.) This implies that if two open subsets of the plane (or the real line) are equi-decomposable then they have equal area.
See also |
https://en.wikipedia.org/wiki/La%20Tribune%20de%20l%27art | La Tribune de l'art (The Art Tribune) is a French online magazine on art history and western heritage from the Middle Ages to the 1930s. It was set up on 7 April 2003 by Didier Rykner, art historian and former agronomist. In 2008, the magazine's editor-in-chief received the La Demeure historique prize in the "journalist's prize, written press — internet" category. As of 2021, the magazine had 4,000 subscribers, a turnover of 320,000 euros and four employees. |
https://en.wikipedia.org/wiki/Qbox | Qbox is an open-source software package for atomic-scale simulations of molecules, liquids and solids. It implements first principles (or ab initio) molecular dynamics, a simulation method in which inter-atomic forces are derived from quantum mechanics. Qbox is released under a GNU General Public License (GPL) with documentation provided at http://qboxcode.org. It is available as a FreeBSD port.
Main features
Born-Oppenheimer molecular dynamics in the microcanonical (NVE) or canonical (NVT) ensemble
Car-Parrinello molecular dynamics
Constrained molecular dynamics for thermodynamic integration
Efficient computation of maximally localized Wannier functions
LDA, GGA and hybrid density functional approximations (PBE, SCAN, PBE0, B3LYP, HSE06, ...)
Electronic structure in the presence of a constant electric field
Computation of the electronic polarizability
Electronic response to arbitrary external potentials
Infrared and Raman spectroscopy
Methods and approximations
Qbox computes molecular dynamics trajectories of atoms using Newton's equations of motion, with forces derived from electronic structure calculations performed using Density Functional Theory. Simulations can be performed either within the Born-Oppenheimer approximation or using Car-Parrinello molecular dynamics. The electronic ground state is computed at each time step by solving the Kohn-Sham equations. Various levels of Density Functional Theory approximations can be used, including the local-density approximation (LDA), the generalized gradient approximation (GGA), or hybrid functionals that incorporate a fraction of Hartree-Fock exchange energy. Electronic wave functions are expanded in a plane-wave basis set. The electron-ion interaction is represented by pseudopotentials.
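The loop described above can be sketched schematically. The Python fragment below is not Qbox code (Qbox is written in C++); it is a generic velocity-Verlet outline in which `ground_state_forces` is a hypothetical stand-in for the Kohn-Sham solve that a first-principles code performs at each step.

```python
import numpy as np

def ground_state_forces(positions):
    # Hypothetical stand-in: a harmonic well instead of a DFT solver.
    # In Born-Oppenheimer MD this call re-solves the electronic ground
    # state and returns the resulting forces on the ions.
    return -positions

def bomd(positions, velocities, masses, dt, n_steps):
    """Schematic velocity-Verlet loop for Born-Oppenheimer MD."""
    forces = ground_state_forces(positions)
    for _ in range(n_steps):
        velocities += 0.5 * dt * forces / masses[:, None]
        positions += dt * velocities
        forces = ground_state_forces(positions)   # new electronic ground state
        velocities += 0.5 * dt * forces / masses[:, None]
    return positions, velocities

pos = np.random.rand(2, 3); vel = np.zeros((2, 3)); m = np.array([1.0, 2.0])
pos, vel = bomd(pos, vel, m, dt=0.01, n_steps=100)
```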
Examples of use
Electronic properties of nanoparticles
Electronic properties of aqueous solutions
Free energy landscape of molecules
Infrared and Raman spectra of hydrogen at high pressure
Properties of solid-liquid interfa |
https://en.wikipedia.org/wiki/Static%20fatigue | Static fatigue describes how prolonged, constant (static) stress weakens a material until it breaks apart, which is called failure. It is sometimes called "delayed fracture". This damage happens at a smaller stress level than the stress level needed to create a normal tensile fracture. Static fatigue can involve plastic deformation or crack growth. For example, sustained stress can create small cracks that grow and eventually break apart plastic, glass, and ceramic materials. The material reaches failure faster under higher applied stress. Static fatigue varies with material type and environmental factors such as moisture presence and temperature.
Applications
Static fatigue tests can estimate a material's lifetime and its resistance to different environments. However, measuring a static fatigue limit takes a long time, and it is hard to measure a material's true static fatigue limit with full certainty.
Typical occurrence
Stress corrosion cracking
Stress corrosion cracking (SCC) happens when a stressed material is in a corrosive (chemically destructive) environment. One example of SCC embrittlement is when moisture increases static fatigue effects in glass. SCC is also seen in hydrogen embrittlement, embrittlement of some polymers, and more.
Plastic Deformation (Plastic Flow)
Plastic deformation happens when stresses flatten, bend, or twist a material until it no longer returns to its original shape. This can create several cracks in the material and decrease its lifetime.
Examples of Static Fatigue and Stresses on Materials
Plastic pipes carrying water or other fluids experience hydrodynamic forces resulting in fatigue. The pipes reach failure sooner with higher temperature or increased exposure to aggressive substances. For static fatigue tests, rotating machines apply weight to the material under study, causing it to bend in different directions, which weakens the material over time. |
https://en.wikipedia.org/wiki/Felix%20Klein | Christian Felix Klein (25 April 1849 – 22 June 1925) was a German mathematician and mathematics educator, known for his work with group theory, complex analysis, non-Euclidean geometry, and the associations between geometry and group theory. His 1872 Erlangen program, classifying geometries by their basic symmetry groups, was an influential synthesis of much of the mathematics of the time.
Life
Felix Klein was born on 25 April 1849 in Düsseldorf, to Prussian parents. His father, Caspar Klein (1809–1889), was a Prussian government official's secretary stationed in the Rhine Province. His mother was Sophie Elise Klein (1819–1890, née Kayser). He attended the Gymnasium in Düsseldorf, then studied mathematics and physics at the University of Bonn, 1865–1866, intending to become a physicist. At that time, Julius Plücker had Bonn's professorship of mathematics and experimental physics, but by the time Klein became his assistant, in 1866, Plücker's interest was mainly geometry. Klein received his doctorate, supervised by Plücker, from the University of Bonn in 1868.
Plücker died in 1868, leaving his book concerning the basis of line geometry incomplete. Klein was the obvious person to complete the second part of Plücker's Neue Geometrie des Raumes, and thus became acquainted with Alfred Clebsch, who had relocated to Göttingen in 1868. Klein visited Clebsch the next year, along with visits to Berlin and Paris. In July 1870, at the beginning of the Franco-Prussian War, he was in Paris and had to leave the country. For a brief time he served as a medical orderly in the Prussian army before being appointed lecturer at Göttingen in early 1871.
Erlangen appointed Klein professor in 1872, when he was only 23 years old. For this, he was endorsed by Clebsch, who regarded him as likely to become the best mathematician of his time. Klein did not wish to remain in Erlangen, where there were very few students, and was pleased to be offered a professorship at the Technische Hochschule |
https://en.wikipedia.org/wiki/Metal%20assisted%20chemical%20etching | Metal Assisted Chemical Etching (also known as MACE) is the process of wet chemical etching of semiconductors (mainly silicon) with the use of a metal catalyst, usually deposited on the surface of a semiconductor in the form of a thin film or nanoparticles. The semiconductor, covered with the metal, is then immersed in an etching solution containing an oxidizing agent and hydrofluoric acid. The metal on the surface catalyzes the reduction of the oxidizing agent and therefore in turn also the dissolution of silicon. In the majority of the conducted research this phenomenon of increased dissolution rate is also spatially confined, such that it is increased in close proximity to a metal particle at the surface. Eventually this leads to the formation of straight pores that are etched into the semiconductor. This means that a pre-defined pattern of the metal on the surface can be directly transferred to a semiconductor substrate.
History of development
MACE is a relatively new technology in semiconductor engineering and is not yet used in industry. The first attempts at MACE used a silicon wafer partially covered with aluminum and immersed in an etching solution. This material combination led to an increased etching rate compared to bare silicon. This very first approach is often called galvanic etching rather than metal assisted chemical etching.
Further research showed that a thin film of a noble metal deposited on a silicon wafer's surface can also locally increase the etching rate. In particular, it was observed that noble metal particles sink down into the material when the sample is immersed in an etching solution containing an oxidizing agent and hydrofluoric acid. This method is now commonly called the metal assisted chemical etching of silicon.
Other semiconductors were also successfully etched with MACE, such as silicon carbide or gallium |
https://en.wikipedia.org/wiki/List%20of%20long%20mathematical%20proofs | This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable.
The longest mathematical proof to date, measured by number of published journal pages, is the classification of finite simple groups, with well over 10,000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full.
Long proofs
The length of unusually long proofs has increased with time. As a rough rule of thumb, 100 pages in 1900, 200 pages in 1950, or 500 pages in 2000 would have been considered unusually long for a proof.
1799 The Abel–Ruffini theorem was nearly proved by Paolo Ruffini, but his proof, spanning 500 pages, was mostly ignored and later, in 1824, Niels Henrik Abel published a proof that required just six pages.
1890 Killing's classification of simple complex Lie algebras, including his discovery of the exceptional Lie algebras, took 180 pages in 4 papers.
1894 The ruler-and-compass construction of a polygon of 65537 sides by Johann Gustav Hermes took over 200 pages.
1905 Emanuel Lasker's original proof of the Lasker–Noether theorem took 98 pages, but has since been simplified: modern proofs are less than a page long.
1963 Odd order theorem by Feit and Thompson was 255 pages long, which at the time was over 10 times as long as what had previously been considered a long paper in group theory.
1964 Resolution of singularities. Hironaka's original proof was 216 pages long; it has since been simplified considerably down to about 10 or 20 pages.
1966 Abhyankar's proof of resolution of singularities for 3-folds in characteristic greater than 6 covered about 500 pages in several papers. In 2009, Cutkosky simplified this to about 40 pages.
1966 Discrete series representations of Lie groups. Harish-Chandra's construction of these involved a long series of papers totaling around 500 pages. His later work on the Plancherel theorem for semisimple groups added a |
https://en.wikipedia.org/wiki/Coenzyme%20Q10 |
Coenzyme Q is a coenzyme family that is ubiquitous in animals and most bacteria (hence its other name, ubiquinone). In humans, the most common form is coenzyme Q10, also called CoQ10 or ubiquinone-10.
Coenzyme Q10 is a 1,4-benzoquinone, in which Q refers to the quinone chemical group and 10 refers to the number of isoprenyl chemical subunits in its tail. In natural ubiquinones, there are from six to ten subunits in the tail.
This family of fat-soluble substances, which resemble vitamins, is present in all respiring eukaryotic cells, primarily in the mitochondria. It is a component of the electron transport chain and participates in aerobic cellular respiration, which generates energy in the form of ATP. Ninety-five percent of the human body's energy is generated this way. Organs with the highest energy requirements—such as the heart, liver, and kidney—have the highest CoQ10 concentrations.
There are three redox states of CoQ: fully oxidized (ubiquinone), semiquinone (ubisemiquinone), and fully reduced (ubiquinol). The capacity of this molecule to act as a two-electron carrier (moving between the quinone and quinol form) and a one-electron carrier (moving between the semiquinone and one of these other forms) is central to its role in the electron transport chain due to the iron–sulfur clusters that can only accept one electron at a time, and as a free-radical–scavenging antioxidant.
Deficiency and toxicity
There are two major pathways of deficiency of CoQ10 in humans: reduced biosynthesis, and increased use by the body. Biosynthesis is the major source of CoQ10. Biosynthesis requires at least 12 genes, and mutations in many of them cause CoQ deficiency. CoQ10 levels also may be affected by other genetic defects (such as mutations of mitochondrial DNA, ETFDH, APTX, FXN, and BRAF, genes that are not directly related to the CoQ10 biosynthetic process). Some of these, such as muta |
https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert%20model | The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Several natural and human-made systems, including the Internet, the World Wide Web, citation networks, and some social networks are thought to be approximately scale-free and certainly contain few nodes (called hubs) with unusually high degree as compared to the other nodes of the network. The BA model tries to explain the existence of such nodes in real networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert.
Concepts
Many observed networks (at least approximately) fall into the class of scale-free networks, meaning that they have power-law (or scale-free) degree distributions, while random graph models such as the Erdős–Rényi (ER) model and the Watts–Strogatz (WS) model do not exhibit power laws. The Barabási–Albert model is one of several proposed models that generate scale-free networks. It incorporates two important general concepts: growth and preferential attachment. Both growth and preferential attachment exist widely in real networks.
Growth means that the number of nodes in the network increases over time.
Preferential attachment means that the more connected a node is, the more likely it is to receive new links. Nodes with a higher degree have a stronger ability to grab links added to the network. Intuitively, the preferential attachment can be understood if we think in terms of social networks connecting people. Here a link from A to B means that person A "knows" or "is acquainted with" person B. Heavily linked nodes represent well-known people with lots of relations. When a newcomer enters the community, they are more likely to become acquainted with one of those more visible people rather than with a relative unknown. The BA model was proposed by assuming that in the World Wide Web, new pages link preferentially to hubs, i.e. very well known sites such as Google, rather than to pages t |
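The growth and preferential-attachment rules above translate directly into a short procedure. The following is our own minimal illustrative Python sketch (assuming m >= 2; in practice a library such as NetworkX provides `barabasi_albert_graph`):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a graph by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Seed network: a complete graph on the first m nodes, so that every
    # node starts with nonzero degree.
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
    # Each node appears in `targets` once per unit of degree, so uniform
    # sampling from this list is exactly degree-proportional selection.
    targets = [v for e in edges for v in e]
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:               # resample to avoid duplicate edges
            chosen.add(rng.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend((new, old))       # update the degree multiset
    return edges

edges = barabasi_albert(10_000, 3, seed=42)  # hubs emerge in the degree tail
```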
https://en.wikipedia.org/wiki/Tangent%20space%20to%20a%20functor | In algebraic geometry, the tangent space to a functor generalizes the classical construction of a tangent space such as the Zariski tangent space. The construction is based on the following observation. Let X be a scheme over a field k.
To give a $\operatorname{Spec} k[\varepsilon]/(\varepsilon^2)$-point of X is the same thing as to give a k-rational point p of X (i.e., the residue field of p is k) together with an element of $(\mathfrak{m}_p/\mathfrak{m}_p^2)^*$; i.e., a tangent vector at p.
(To see this, use the fact that any local homomorphism $\mathcal{O}_p \to k[\varepsilon]/(\varepsilon^2)$ must be of the form
$$f \mapsto f(p) + \varepsilon\, v(f), \qquad v \in (\mathfrak{m}_p/\mathfrak{m}_p^2)^*.$$)
Let F be a functor from the category of k-algebras to the category of sets. Then, for any k-point $p \in F(k)$, the fiber of the map $F(k[\varepsilon]/(\varepsilon^2)) \to F(k)$ over p is called the tangent space to F at p.
If the functor F preserves fibered products (e.g. if it is a scheme), the tangent space may be given the structure of a vector space over k. If F is a scheme X over k (i.e., $F = \operatorname{Hom}_{\operatorname{Spec} k}(\operatorname{Spec}(-), X)$), then each v as above may be identified with a derivation at p; this gives the identification of the tangent space with the space of derivations at p, and we recover the usual construction.
The construction may be thought of as defining an analog of the tangent bundle in the following way. Let $T_X = X(k[\varepsilon]/(\varepsilon^2))$. Then, for any morphism $f: X \to Y$ of schemes over k, one sees $f(T_X) \subseteq T_Y$; this shows that the map $T_X \to T_Y$ that f induces is precisely the differential of f under the above identification. |
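The ring of dual numbers $k[\varepsilon]/(\varepsilon^2)$ driving this construction can be made concrete: evaluating a polynomial at $x + \varepsilon$ returns $f(x) + \varepsilon f'(x)$, which is the algebraic fact behind identifying $k[\varepsilon]$-points with tangent vectors. A small illustrative Python sketch (the class and names are ours):

```python
class Dual:
    """Arithmetic in k[eps]/(eps^2): elements a + b*eps with eps^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)  # eps^2 = 0
    __rmul__ = __mul__

def f(x):              # f(x) = x^3 + 2x
    return x * x * x + 2 * x

v = f(Dual(2.0, 1.0))  # evaluate at 2 + eps
print(v.a, v.b)        # 12.0 14.0 -> f(2) = 12 and f'(2) = 14
```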
https://en.wikipedia.org/wiki/JData | JData is a light-weight data annotation and exchange open-standard designed to represent general-purpose and scientific data structures using human-readable (text-based) JSON and (binary) UBJSON formats. The JData specification specifically aims at simplifying the exchange of hierarchical and complex data between programming languages, such as MATLAB, Python and JavaScript. It defines a comprehensive list of JSON-compatible "name":value constructs to store a wide range of data structures, including scalars, N-dimensional arrays, sparse/complex-valued arrays, maps, tables, hashes, linked lists, trees and graphs, and supports optional data grouping and metadata for each data element. The generated data files are compatible with JSON/UBJSON specifications and can be readily processed by most existing parsers. JData-defined annotation keywords also permit storage of strongly-typed binary data streams in JSON, data compression, linking and referencing.
History
The initial development of the JData annotation scheme started in 2011 as part of the development of the JSONLab Toolbox - a widely used open-source MATLAB/GNU Octave JSON reader/writer. The majority of the annotated N-D array constructs, such as _ArrayType_, _ArraySize_, and _ArrayData_, had been implemented in the early releases of JSONLab. In 2015, the first draft of the JData Specification was developed in the Iso2Mesh Wiki; since 2019, the subsequent development of the specification has been migrated to Github.
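For illustration, here is what an annotated 2x3 array might look like using the three keywords named above, built and serialized with Python's standard json module. The container name `my_array` is hypothetical, and the exact field semantics are fixed by the JData specification, not by this sketch.

```python
import json

# A 2x3 integer array annotated with the JData keywords mentioned above.
annotated = {
    "my_array": {
        "_ArrayType_": "int32",
        "_ArraySize_": [2, 3],
        "_ArrayData_": [1, 2, 3, 4, 5, 6],  # flattened element values
    }
}
print(json.dumps(annotated, indent=2))       # plain JSON, parseable anywhere
```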
Releases
JData Version 0.5
The v0.5 version of the JData specification is the first complete draft and public request-for-comment (RFC) of the specification, made available on May 15, 2019. This preview version of the specification supports a majority of the data structures related to scientific data and research, including N-D arrays, sparse and complex-valued arrays, binary data interface, data-record-level compression, hashes, tables, trees, linked lists and graphs. It also describes the general approach |
https://en.wikipedia.org/wiki/Value%20of%20life | The value of life is an economic value used to quantify the benefit of avoiding a fatality. It is also referred to as the cost of life, value of preventing a fatality (VPF), implied cost of averting a fatality (ICAF), and value of a statistical life (VSL). In social and political sciences, it is the marginal cost of death prevention in a certain class of circumstances. In many studies the value also includes the quality of life, the expected lifetime remaining, and the earning potential of a given person, especially for an after-the-fact payment in a wrongful death claim lawsuit.
As such, it is a statistical term, the cost of reducing the average number of deaths by one. It is an important issue in a wide range of disciplines including economics, health care, adoption, political economy, insurance, worker safety, environmental impact assessment, globalization, and process safety.
The motivation for placing a monetary value on life is to enable policy and regulatory analysts to allocate the limited supply of resources, infrastructure, labor, and tax revenue. Estimates for the value of a life are used to compare the life-saving and risk-reduction benefits of new policies, regulations, and projects against a variety of other factors, often using a cost-benefit analysis.
Estimates for the statistical value of life are published and used in practice by various government agencies. In Western countries and other liberal democracies, estimates for the value of a statistical life typically run into the millions of US dollars; for example, the United States FEMA published an updated estimate of the value of a statistical life in 2020.
Treatment in economics and methods of calculation
There is no standard concept for the value of a specific human life in economics. However, when looking at risk/reward trade-offs that people make with regard to their health, economists often consider the value of a statistical life (VSL). Note that the VSL is very different from the value of an actual life. It is the value |
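A standard textbook illustration of how a VSL is inferred from wage-risk trade-offs, with made-up numbers (not taken from any agency or study):

```python
# Hypothetical wage-risk calculation: if workers demand $700/year of
# extra pay to accept an additional 1-in-10,000 annual fatality risk,
# the implied value of a statistical life is the ratio of the two.
wage_premium = 700.0            # dollars per worker per year (assumed)
risk_increase = 1.0 / 10_000    # extra annual fatality probability (assumed)
vsl = wage_premium / risk_increase
print(f"${vsl:,.0f}")           # $7,000,000
```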
https://en.wikipedia.org/wiki/Binomial%20transform | In combinatorics, the binomial transform is a sequence transformation (i.e., a transform of a sequence) that computes its forward differences. It is closely related to the Euler transform, which is the result of applying the binomial transform to the sequence associated with its ordinary generating function.
Definition
The binomial transform, T, of a sequence, {an}, is the sequence {sn} defined by
$$s_n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} a_k.$$
Formally, one may write
$$(Ta)_n = s_n = \sum_{k=0}^{n} T_{nk}\, a_k$$
for the transformation, where T is an infinite-dimensional operator with matrix elements $T_{nk} = (-1)^k \binom{n}{k}$.
The transform is an involution, that is,
$$TT = 1$$
or, using index notation,
$$\sum_{k=0}^{\infty} T_{nk} T_{km} = \delta_{nm},$$
where $\delta_{nm}$ is the Kronecker delta. The original series can be regained by
$$a_n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} s_k.$$
The binomial transform of a sequence is just the nth forward differences of the sequence, with odd differences carrying a negative sign, namely:
$$s_0 = a_0, \quad s_1 = -(\Delta a)_0, \quad s_2 = (\Delta^2 a)_0, \quad \ldots, \quad s_n = (-1)^n (\Delta^n a)_0,$$
where Δ is the forward difference operator.
Some authors define the binomial transform with an extra sign, so that it is not self-inverse:
$$t_n = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} a_k,$$
whose inverse is
$$a_n = \sum_{k=0}^{n} \binom{n}{k} t_k.$$
In this case the former transform is called the inverse binomial transform, and the latter is just the binomial transform. This is standard usage, for example, in the On-Line Encyclopedia of Integer Sequences.
Example
Both versions of the binomial transform appear in difference tables. Consider the following difference table:

0     1     10    63    324   1485
1     9     53    261   1161
8     44    208   900
36    164   692
128   528
400

Each line is the difference of the previous line. (The n-th number in the m-th line is $a_{m,n} = 3^{n-2}\left(2^{m+1} n^2 + 2^m (1+6m)\, n + 2^{m-1}\, 9 m^2\right)$, and the difference equation $a_{m+1,n} = a_{m,n+1} - a_{m,n}$ holds.)
The top line read from left to right is {an} = 0, 1, 10, 63, 324, 1485, ... The diagonal with the same starting point 0 is {tn} = 0, 1, 8, 36, 128, 400, ... {tn} is the noninvolutive binomial transform of {an}.
The top line read from right to left is {bn} = 1485, 324, 63, 10, 1, 0, ... The cross-diagonal with the same starting point 1485 is {sn} = 1485, 1161, 900, 692, 528, 400, ... {sn} is the involutive binomial transform of {bn}.
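A short Python check of the involutive transform against the numbers in this example (the function name is ours):

```python
from math import comb

def binomial_transform(a):
    """Involutive form: s_n = sum over k <= n of (-1)^k C(n,k) a_k."""
    return [sum((-1) ** k * comb(n, k) * a[k] for k in range(n + 1))
            for n in range(len(a))]

b = [1485, 324, 63, 10, 1, 0]           # top line read from right to left
print(binomial_transform(b))            # [1485, 1161, 900, 692, 528, 400]
print(binomial_transform(binomial_transform(b)) == b)  # True: an involution
```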
Ordinary generating function
The transform connects the generating functions associated with the series. |
https://en.wikipedia.org/wiki/Shuttle%20Inc. | Shuttle Inc. (TAIEX: 2405) is a Taiwan-based manufacturer of motherboards, barebone computers, complete PC systems and monitors. Throughout the last 10 years, Shuttle has been one of the world's top 10 motherboard manufacturers, and gained fame in 2001 with the introduction of the Shuttle SV24, one of the world's first commercially successful small form factor computers. Shuttle XPC small form factor computers tend to be popular among PC enthusiasts and hobbyists, although in 2004 Shuttle started a campaign to become a brand name recognized by mainstream PC consumers.
Shuttle XPC desktop systems are based on the same PC platform as the XPC barebones (case + motherboard + power supply) Shuttle manufactures. More recently, the differentiation between Shuttle barebones and Shuttle systems has become greater, with the launch of system-exclusive models such as the M-series and X-series.
History
1983 – Shuttle was initially incorporated in Taiwan by David and Simon Yu under the name Holco (浩鑫), and commences trading of computer motherboards.
1984 – Holco begins manufacturing motherboards in its Taoyuan County (now Taoyuan City), Taiwan factory.
1988 – Holco establishes its first overseas branch office, in Fremont, California.
1990 – Holco subsidiary Shuttle Computer Handel is established in Elmshorn, Germany to serve European market.
1994 – Introduces Shuttle RiscPC 4475, a desktop based on DEC Alpha 64-bit microprocessor and Microsoft Windows NT for Alpha.
1995 – Shuttle reaches #5 motherboard manufacturer worldwide in terms of volume.
1997 – Holco officially changes its name to Shuttle Inc.
2000 – Goes public on TAIEX stock market under symbol 2405.
2001 – Introduces Shuttle SV24, a compact all-aluminum computer using desktop components.
2002 – SV24 evolves into XPC line of small form factor barebones computers, including models for Intel's Pentium 4 and AMD's Athlon.
2003 – 8 different XPCs introduced, including models featuring chipsets from Nvidia, Intel, SiS, |
https://en.wikipedia.org/wiki/Ecoflation | Ecoflation (a portmanteau of "ecological" and "inflation") is a future scenario in "Rattling Supply Chains", a research report by the World Resources Institute and A.T. Kearney released in November 2008. It is characterized by natural resources becoming scarcer and sustainability issues becoming more pressing, leading to an increase in the price of commodities. The effects of the increase in the price of commodities are felt by corporations as environmental costs are added to their usual cost of doing business. The concept of ecoflation focuses on having the environmental externalities of business be the burden of the organization or business responsible, rather than having costs allocated to the general public. Ecoflation represents more accurate pricing of the true costs associated with business actions. The concept also emphasizes the need for businesses to be creative and innovative in order to adapt their business models and supply chains and remain competitive on the market. The idea is that the more a business integrates sustainability into its core business principles, the more success it will have.
Drivers of ecoflation
The World Resources Institute and A.T. Kearney identified three main drivers for ecoflation:
Scarcity of resources
An increase in population and consumption leads to an increased demand for resources, such as wood, oil, water, and grain. Meanwhile, climate change is an increasing threat to such resources, in the form of extreme weather events (wildfires, droughts, floods, etc.) and biodiversity loss. For example, as freshwater levels decline and demand keeps rising globally, the law of supply and demand dictates that the price of water is bound to rise; some predict by 20% to 30% by 2050. Also with regard to water, agricultural regions are facing droughts and water scarcity from climate change, which has led to increased production costs. Hydroelectric power plants are one specific area that has been directly affected by wat |
https://en.wikipedia.org/wiki/Forte%20number | In musical set theory, a Forte number is the pair of numbers Allen Forte assigned to the prime form of each pitch class set of three or more members in The Structure of Atonal Music (1973). The first number indicates the number of pitch classes in the pitch class set and the second number indicates the set's sequence in Forte's ordering of all pitch class sets containing that number of pitches.
In the 12-TET tuning system (or in any other system of tuning that splits the octave into twelve semitones), each pitch class may be denoted by an integer in the range from 0 to 11 (inclusive), and a pitch class set may be denoted by a set of these integers.
The prime form of a pitch class set is the most compact (i.e., leftwards packed or smallest in lexicographic order) of either the normal form of a set or of its inversion. The normal form of a set is that which is transposed so as to be most compact. For example, a second inversion major chord contains the pitch classes 7, 0, and 4. The normal form would then be 0, 4 and 7. Its (transposed) inversion, which happens to be the minor chord, contains the pitch classes 0, 3, and 7; and is the prime form.
The major and minor chords are both given Forte number 3-11, indicating that it is the eleventh in Forte's ordering of pitch class sets with three pitches. In contrast, the Viennese trichord, with pitch classes 0, 1, and 6, is given Forte number 3-5, indicating that it is the fifth in Forte's ordering of pitch class sets with three pitches. The normal form of the diatonic scale, such as C major; 0, 2, 4, 5, 7, 9, and 11; is 11, 0, 2, 4, 5, 7, and 9; while its prime form is 0, 1, 3, 5, 6, 8, and 10; and its Forte number is 7-35, indicating that it is the thirty-fifth of the seven-member pitch class sets.
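The normal-form and prime-form computation described above can be sketched in Python. This is our own simplified implementation: ties in compactness are broken by packing intervals to the left, which reproduces the examples here but differs from Forte's published tie-breaking rule for a handful of sets.

```python
def normal_form(pcs):
    """Most compact rotation of a pitch-class set, transposed to start at 0.
    Compactness: smallest total span, ties broken by leftwards packing."""
    pcs = sorted(set(p % 12 for p in pcs))
    best = None
    for i in range(len(pcs)):
        rot = pcs[i:] + [p + 12 for p in pcs[:i]]
        intervals = tuple(p - rot[0] for p in rot)
        key = (intervals[-1], intervals)      # (span, packing)
        if best is None or key < best[0]:
            best = (key, intervals)
    return list(best[1])

def prime_form(pcs):
    """The more compact of the set's normal form and its inversion's."""
    return min(normal_form(pcs), normal_form([(12 - p) % 12 for p in pcs]))

print(prime_form([7, 0, 4]))               # [0, 3, 7]: major chord
print(prime_form([0, 3, 7]))               # [0, 3, 7]: minor chord, same form
print(prime_form([0, 2, 4, 5, 7, 9, 11]))  # [0, 1, 3, 5, 6, 8, 10]: set 7-35
```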
Sets of pitches which share the same Forte number have identical interval vectors. Those that have different Forte numbers have different interval vectors with the exception of z-related sets (for example 6-Z44 and 6-Z19). |
https://en.wikipedia.org/wiki/Fluorine-18 | Fluorine-18 (18F) is a fluorine radioisotope which is an important source of positrons. It has a mass of 18.0009380(6) u and its half-life is 109.771(20) minutes. It decays by positron emission 96% of the time and electron capture 4% of the time. Both modes of decay yield stable oxygen-18.
Natural occurrence
Fluorine-18 is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n.
Synthesis
In the radiopharmaceutical industry, fluorine-18 is made using either a cyclotron or linear particle accelerator to bombard a target, usually of natural or enriched [18O]water, with high energy protons (typically ~18 MeV). The fluorine produced is in the form of a water solution of [18F]fluoride, which is then used in a rapid chemical synthesis of various radiopharmaceuticals. The organic oxygen-18 pharmaceutical molecule is not made before the production of the radiopharmaceutical, as high energy protons destroy such molecules (radiolysis). Radiopharmaceuticals using fluorine must therefore be synthesized after the fluorine-18 has been produced.
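The practical consequence of the 109.771-minute half-life quoted above is easy to quantify with a simple exponential-decay calculation (a generic sketch, not specific to any product or facility):

```python
HALF_LIFE_MIN = 109.771  # fluorine-18 half-life in minutes

def fraction_remaining(minutes):
    """Fraction of F-18 activity left after the given decay time."""
    return 0.5 ** (minutes / HALF_LIFE_MIN)

# Roughly 47% of the activity survives a 2-hour delivery window.
print(round(fraction_remaining(120), 3))   # 0.469
```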
History
The first published synthesis and report of the properties of fluorine-18 came in 1937 from Arthur H. Snell, who produced it by the nuclear reaction 20Ne(d,α)18F in the cyclotron laboratories of Ernest O. Lawrence.
Chemistry
Fluorine-18 is often substituted for a hydroxyl group in a radiotracer parent molecule, due to similar steric and electrostatic properties. This may, however, be problematic in certain applications due to possible changes in the molecule's polarity.
Applications
Fluorine-18 is one of the early tracers used in positron emission tomography (PET), having been in use since the 1960s.
Its significance is due to both its short half-life and the emission of positrons when decaying.
A major medical use of fluorine-18 is in positron emission tomography (PET) to image the brain and heart; to image the thyroid gland; as a radiotracer to i |
https://en.wikipedia.org/wiki/Phylogenetic%20niche%20conservatism | The term phylogenetic niche conservatism has seen increasing use in recent years in the scientific literature, though the exact definition has been a matter of some contention. Fundamentally, phylogenetic niche conservatism refers to the tendency of species to retain their ancestral traits. When defined as such, phylogenetic niche conservatism is therefore nearly synonymous with phylogenetic signal. The point of contention is whether or not "conservatism" refers simply to the tendency of species to resemble their ancestors, or implies that "closely related species are more similar than expected based on phylogenetic relationships". If the latter interpretation is employed, then phylogenetic niche conservatism can be seen as an extreme case of phylogenetic signal, and implies that the processes which prevent divergence are in operation in the lineage under consideration. Despite efforts by Jonathan Losos to end this habit, however, the former interpretation appears to frequently motivate scientific research. In this case, phylogenetic niche conservatism might best be considered a form of phylogenetic signal reserved for traits with broad-scale ecological ramifications (i.e. related to the Hutchinsonian niche). Thus, phylogenetic niche conservatism is usually invoked with regards to closely related species occurring in similar environments.
History and debate
According to a recent review, the term niche conservatism traces its roots to a book on comparative methods in evolutionary biology. However, and as these authors also note, the idea is much older. For instance, Darwin observed in the Origin of Species that species in the same genus tend to resemble one another. This was not a matter of chance, as the entire Linnean taxonomy system is based on classifying species into hierarchically nested groups, e.g. a genus is (and was particularly at the time of Darwin's writing) by definition a collection of similar species. In modern times this pattern has come to be refer |
https://en.wikipedia.org/wiki/Calcium%20silicate | Calcium silicate is the chemical compound Ca2SiO4, also known as calcium orthosilicate and is sometimes formulated as 2CaO·SiO2. It is also referred to by the shortened trade name Cal-Sil or Calsil. It occurs naturally as the mineral larnite.
Properties
Calcium silicate is a white free-flowing powder. It can be derived from naturally occurring limestone and diatomaceous earth, a siliceous sedimentary rock. It is one of a group of compounds that can be produced by reacting calcium oxide and silica in various ratios e.g. 3CaO·SiO2, alite (Ca3SiO5); 2CaO·SiO2, (Ca2SiO4); 3CaO·2SiO2, (Ca3Si2O7); and CaO·SiO2, wollastonite (CaSiO3). It has a low bulk density and high physical water absorption.
Use
Calcium silicate is used as an anticaking agent in food preparation, including table salt and as an antacid. It is approved by the United Nations' FAO and WHO bodies as a safe food additive in a large variety of products. It has the E number reference E552.
High-temperature insulation
Calcium silicate is commonly used as a safe alternative to asbestos for high-temperature insulation materials. Industrial-grade piping and equipment insulation is often fabricated from calcium silicate. Its fabrication is a routine part of the curriculum for insulation apprentices. Calcium silicate competes in these realms against rockwool and proprietary insulation solids, such as perlite mixture and vermiculite bonded with sodium silicate. Although it is popularly considered an asbestos substitute, early uses of calcium silicate for insulation still made use of asbestos fibers.
Passive fire protection
It is used in passive fire protection and fireproofing as calcium silicate brick or in roof tiles. It is one of the most successful materials in fireproofing in Europe because of regulations and fire safety guidelines for commercial and residential building codes. Where North Americans use spray fireproofing plasters, Europeans are more likely to use cladding made of calcium silicate. High |
https://en.wikipedia.org/wiki/WDC%2065C21 | The W65C21S is a very flexible Peripheral Interface Adapter (PIA) for use with WDC’s 65xx and other 8-bit microprocessor families. It is produced by Western Design Center (WDC).
The W65C21S provides programmed microprocessor control of up to two peripheral devices (Port A and Port B). Peripheral device control is accomplished through two 8-bit bidirectional I/O Ports, with individually designed Data Direction Registers. The Data Direction Registers provide selection of data flow direction (input or output) at each respective I/O Port. Data flow direction may be selected on a line-by-line basis with intermixed input and output lines within the same port. The “handshake” interrupt control feature is provided by four peripheral control lines. This capability provides enhanced control over data transfer functions between the microprocessor and peripheral devices, as well as bidirectional data transfer between W65C21S Peripheral Interface Adapters in multiprocessor systems.
The PIA interfaces to the 65xx microprocessor family with a reset line, a ϕ2 clock line, a read/write line, two interrupt request lines, two register select lines, three chip select lines and an 8-bit bidirectional data bus. The PIA interfaces to the peripheral devices with four interrupt/control lines and two 8-bit bidirectional buses.
The W65C21S PIA is organized into two independent sections referred to as the A Side and the B Side. Each section consists of Control Register (CRA, CRB), Data Direction Register (DDRA, DDRB), Output Register (ORA, ORB), Interrupt Status Control (ISCA, ISCB) and the buffers necessary to drive the Peripheral Interface buses. Data Bus Buffers (DBB) interface data from the two sections to the data bus, while the Data Input Register (DIR) interfaces data from the DBB to the PIA registers. Chip Select and RWB control circuitry interface to the processor bus control lines.
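The line-by-line data-direction behavior described above can be modeled in a few lines of Python. This is a toy illustration of the DDR/OR interplay, not WDC's actual logic; the class and register names are ours.

```python
class PiaPort:
    """Toy model of one PIA port: each DDR bit set to 1 makes the matching
    line an output driven from the output register; a 0 bit makes it an
    input read from externally driven pin levels."""
    def __init__(self):
        self.ddr = 0x00     # data direction register (1 = output)
        self.out = 0x00     # output register
        self.pins = 0x00    # external input levels
    def read(self):
        # Output lines read back from OR, input lines from the pins.
        return (self.out & self.ddr) | (self.pins & ~self.ddr & 0xFF)
    def write(self, value):
        self.out = value & 0xFF

port_a = PiaPort()
port_a.ddr = 0x0F           # low nibble output, high nibble input
port_a.write(0x05)
port_a.pins = 0xA0
print(hex(port_a.read()))   # 0xa5: intermixed input and output on one port
```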
Features of the W65C21S
Low power CMOS N-well silicon gate technology
High speed/Low power replace |
https://en.wikipedia.org/wiki/Inkscape | Inkscape is a free and open-source vector graphics editor for GNU/Linux, Windows and macOS. It offers a rich set of features and is widely used for both artistic and technical illustrations such as cartoons, clip art, logos, typography, diagramming and flowcharting. It uses vector graphics to allow for sharp printouts and renderings at unlimited resolution and is not bound to a fixed number of pixels like raster graphics. Inkscape uses the standardized Scalable Vector Graphics (SVG) file format as its main format, which is supported by many other applications including web browsers. It can import and export various other file formats, including SVG, AI, EPS, PDF, PS and PNG.
Inkscape can render primitive vector shapes (e.g. rectangles, ellipses, polygons, arcs, spirals, stars and 3D boxes) and text. These objects may be filled with solid colors, patterns, radial or linear color gradients and their borders may be stroked, both with adjustable transparency. Embedding and optional tracing of raster graphics is also supported, enabling the editor to create vector graphics from photos and other raster sources. Created shapes can be further manipulated with transformations, such as moving, rotating, scaling and skewing.
History
Inkscape began in 2003 as a code fork of the Sodipodi project. Sodipodi, developed since 1999, was itself based on Raph Levien's Gill (GNOME Illustration Application). One of the main priorities of the Inkscape project was interface consistency and usability by following the GNOME human interface guidelines.
The Inkscape FAQ interprets the word Inkscape as a compound of ink and scape (as in "landscape").
Four former Sodipodi developers (Ted Gould, Bryce Harrington, Nathan Hursten, and MenTaLguY) led the fork, citing differences over project objectives, openness to third-party contributions, and technical disagreements. They said that Inkscape would focus development on implementing the complete SVG standard, whereas Sodipodi development emphasized developing a general-purp |
https://en.wikipedia.org/wiki/Norpak | Norpak Corporation was a company headquartered in Kanata, Ontario, Canada, that specialized in the development of systems for television-based data transmission. In 2010, it was acquired by Ross Video Ltd. of Iroquois and Ottawa, Ontario.
Norpak developed the NABTS (North American Broadcast Teletext Standard) protocol for teletext in the 1980s, as an improved version to the then-incumbent World System Teletext, or WST, protocol. NABTS was designed to improve graphics capability over WST, but required a much more complex and expensive decoder, making NABTS somewhat of a market failure for teletext. However, NABTS still thrives as a data protocol for embedding almost any form of digital data within the VBI of an analog video signal.
Norpak's products, now part of and complementary to the Ross Video line, include equipment for embedding data in a television or video signal such as for closed captioning, XDS, V-chip data, non-teletext NABTS data for closed-circuit data transmission, and other data protocols for VBI transmission. |
https://en.wikipedia.org/wiki/Calcium%20signaling%20in%20Arabidopsis | Calcium signaling in Arabidopsis is a calcium-mediated signaling pathway that Arabidopsis plants use to respond to stimuli. In this pathway, Ca2+ works as a long-range communication ion, allowing for rapid communication throughout the plant. Systemic changes in metabolites such as glucose and sucrose take a few minutes after the stimulus, but gene transcription occurs within seconds. Because hormones, peptides and RNA travel through the vascular system at lower speeds than the plant's response to wounds propagates, Ca2+ must be involved in the rapid signal propagation. Instead of local communication to nearby cells and tissues, Ca2+ uses mass flow within the vascular system for rapid transport throughout the plant. Ca2+ moving through the xylem and phloem acts through a "calcium signature" receptor system in cells, where the signal is integrated and answered with the activation of defense genes. These calcium signatures encode information about the stimulus, allowing the plant's response to match the type of stimulus.
Calcium wound/damage response
Different kinds of stimuli result in different responses within the Arabidopsis plant. A wound or damage to the plant causes wound-activated surface potential (WASP) changes that serve as an alert message to undamaged tissues. This wound response results in a plasma membrane depolarization, H+ and Ca2+ efflux and K+ influx, causing an action potential. This action potential causes the cytosolic Ca2+ concentration to increase, sending calcium into the phloem, where the signal spreads as it travels to systemic tissues. Because of the various stimuli perceived by the plant, abiotic and biotic stress results in different amplitudes, durations, frequencies and localizations of Ca2+ concentrations. These "calcium signatures" encode information about the nature of the stimulus, and different signatures are sensed by different sensor proteins. With herbivory being a biotic s |
https://en.wikipedia.org/wiki/Observer%20pattern | In software design and engineering, the observer pattern is a software design pattern in which an object, named the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.
It is often used for implementing distributed event-handling systems in event-driven software. In such systems, the subject is usually named a "stream of events" or "stream source of events" while the observers are called "sinks of events." The stream nomenclature alludes to a physical setup in which the observers are physically separated and have no control over the emitted events from the subject/stream source. This pattern thus suits any process by which data arrives from some input that is not available to the CPU at startup, but instead arrives seemingly at random (HTTP requests, GPIO data, user input from peripherals, distributed databases and blockchains, etc.).
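A minimal sketch of the pattern in Python (class names ours): the subject keeps a registry of observers and pushes each state change to every registered sink.

```python
class Subject:
    """Maintains a list of observers and notifies them of state changes."""
    def __init__(self):
        self._observers = []
        self._state = None
    def attach(self, observer):
        self._observers.append(observer)
    def detach(self, observer):
        self._observers.remove(observer)
    def set_state(self, state):
        self._state = state
        for obs in self._observers:   # push the change to every sink
            obs.update(state)

class LoggingObserver:
    def update(self, state):
        print(f"observed new state: {state!r}")

subject = Subject()
subject.attach(LoggingObserver())
subject.set_state("ready")            # prints: observed new state: 'ready'
```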
Overview
The observer design pattern is a behavioural pattern listed among the 23 well-known "Gang of Four" design patterns that address recurring design challenges in order to design flexible and reusable object-oriented software, yielding objects that are easier to implement, change, test and reuse.
Which problems can the observer design pattern solve?
The observer pattern addresses the following problems:
A one-to-many dependency between objects should be defined without making the objects tightly coupled.
When one object changes state, an open-ended number of dependent objects should be updated automatically.
An object can notify multiple other objects.
Defining a one-to-many dependency between objects by defining one object (subject) that updates the state of dependent objects directly is inflexible because it couples the subject to particular dependent objects. However, it might be applicable from a performance point of view or if the object implementation is tightly coupled (such as low-level kernel structures that ex |
https://en.wikipedia.org/wiki/Product-family%20engineering | Product-family engineering (PFE), also known as product-line engineering, is based on the ideas of "domain engineering" created by the Software Engineering Institute, a term coined by James Neighbors in his 1980 dissertation at University of California, Irvine. Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product-family engineering.
Product-family engineering can be defined as a method that creates an underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variabilities. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate on products in the family. Product-family engineering is conceptually similar to the widespread use of vehicle platforms in the automotive industry.
Product-family engineering is a relatively new approach to the creation of new products. It focuses on the process of engineering new products in such a way that it is possible to reuse product components and apply variability with decreased costs and time. Product-family engineering is all about reusing components and structures as much as possible.
Several studies have shown that using a product-family engineering approach for product development can have several benefits, including:
Higher productivity
Higher quality
Faster time-to-market
Lower labor needs
The Nokia case mentioned below also illustrates these benefits.
Overall process
The product family engineering process consists of several phases. The three main phases are:
Phase 1: Product management
Phase 2: Domain engineering
Phase 3: Product engineering
The process has been modeled on a higher abstraction level. This has the advantage that it can be applied to all kinds of product lines and families, not on |
https://en.wikipedia.org/wiki/Fine-needle%20aspiration | Fine-needle aspiration (FNA) is a diagnostic procedure used to investigate lumps or masses. In this technique, a thin (23–25 gauge (0.52 to 0.64 mm outer diameter)), hollow needle is inserted into the mass for sampling of cells that, after being stained, are examined under a microscope (biopsy). The sampling and biopsy considered together are called fine-needle aspiration biopsy (FNAB) or fine-needle aspiration cytology (FNAC) (the latter to emphasize that any aspiration biopsy involves cytopathology, not histopathology). Fine-needle aspiration biopsies are very safe minor surgical procedures. Often, a major surgical (excisional or open) biopsy can be avoided by performing a needle aspiration biopsy instead, eliminating the need for hospitalization. In 1981, the first fine-needle aspiration biopsy in the United States was done at Maimonides Medical Center. Today, this procedure is widely used in the diagnosis of cancer and inflammatory conditions. Fine needle aspiration is generally considered a safe procedure. Complications are infrequent.
Aspiration is safer and far less traumatic than an open biopsy; complications beyond bruising and soreness are rare. However, the cells of interest can be sampled in numbers too few to be conclusive, or missed entirely (a false negative).
Medical uses
This type of sampling is performed for one of two reasons:
A biopsy is performed on a lump or a tissue mass when its nature is in question.
For known tumors, this biopsy is performed to assess the effect of treatment or to obtain tissue for special studies.
When the lump can be felt, the biopsy is usually performed by a cytopathologist or a surgeon. In this case, the procedure is usually short and simple. Otherwise, it may be performed by an interventional radiologist, a doctor with training in performing such biopsies under x-ray or ultrasound guidance. In this case, the procedure may require more extensive preparation and take more time to perform.
Also, fine-needle aspiration is the main me |
https://en.wikipedia.org/wiki/Model%20Aviation | Model Aviation is the monthly full-color publication written, prepared and distributed by the Academy of Model Aeronautics beginning in 1936 and established as an independent publication in July 1975. The magazine is based in Muncie, Indiana.
It is a standard benefit of club membership and covers all aspects of the hobby of aeromodeling (primarily free flight, control line and radio control model aircraft). Model Aviation is considered to be the voice of the AMA and features editorial content, product reviews, how-to articles and coverage of major national and international aeromodeling events.
The publication now offers multiple digital outlets. In addition to the printed issue, members can view every issue since 1975 through the digital library database (a web-based digital viewer). Bonus content and supplemental articles related to the print magazine can be found on the magazine's website, and Model Aviation offers a tablet app of the magazine for Android and Apple devices for a fee.
The magazine itself is available by subscription to non-AMA members and over the counter at select hobby shops across the country. |
https://en.wikipedia.org/wiki/The%20Panda%27s%20Thumb%20%28blog%29 | The Panda's Thumb is a blog on issues of creationism and evolution from a mainstream scientific perspective. In 2006, Nature listed it as one of the top five science blogs, and Mark Pallen has called it "the definitive blog on the evolution versus creationism debate".
It is written by multiple contributors, including Wesley R. Elsberry, Joe Felsenstein, Paul R. Gross, Nick Matzke, and Mark Perakh, many of whom used to have complementary blogs at ScienceBlogs before it went defunct. The blog takes its name from The Panda's Thumb, the pub of the virtual University of Ediacara, which is named after the book of the same name by Stephen Jay Gould, which in turn takes its title from the essay "The Panda's Peculiar Thumb", which discusses the Panda's sesamoid bone, an example of convergent evolution.
See also
Rejection of evolution by religious groups |
https://en.wikipedia.org/wiki/Evolutionary%20fauna | The concept of the three great evolutionary faunas of marine animals from the Cambrian to the present (that is, the entire Phanerozoic) was introduced by Jack Sepkoski in 1981 using factor analysis of the fossil record. An evolutionary fauna typically displays an increase in biodiversity following a logistic curve followed by extinctions (although the Modern Fauna has not yet exhibited the diminishing part of the curve).
Cambrian fauna
Fauna I, known as "Cambrian", described as a "Trilobite-rich assemblage", encompasses the bulk of the fossils which first appeared in the Cambrian explosion, and largely became extinct in the Ordovician-Silurian extinction event. This fauna comprises trilobites, small shelly fossils (grouped by Sepkoski into "Polychaeta", but including cribricyathids, coleolids, and volborthellids), Monoplacophora, inarticulate brachiopods and hyoliths.
Paleozoic fauna
Fauna II, known as "Paleozoic", described as a "Brachiopod-rich assemblage", accounts for most of the fossils appearing in the Great Ordovician Biodiversification Event, and largely became extinct in the Capitanian mass extinction event and the Permian-Triassic extinction event. This fauna is marked by fossils of the following classes: Articulata, Crinoidea, Ostracoda, Cephalopoda, Anthozoa, Stenolaemata, Stelleroidea.
Modern fauna
Fauna III, known as "Modern", described as a "Mollusc-rich assemblage", arose largely in the Mesozoic-Cenozoic Radiation, still in progress. The following classes are included: Gastropoda, Bivalvia, Osteichthyes, Malacostraca, Echinoidea, Gymnolaemata, Demospongiae, Chondrichthyes.
Kindred concepts
In the mid-19th century, John Phillips suggested three great systems: Palaeozoic, Mesozoic and Cenozoic. Writing after Sepkoski, Brenchley and Harper suggested that there were two early evolutionary faunas before the three of Sepkoski: Ediacaran and Tommotian. They also point out similarities with four "evolutionary terrestrial plant floras": Early Vascular, Pt
https://en.wikipedia.org/wiki/Neurotrophin | Neurotrophins are a family of proteins that induce the survival, development, and function of neurons.
They belong to a class of growth factors, secreted proteins that can signal particular cells to survive, differentiate, or grow. Growth factors such as neurotrophins that promote the survival of neurons are known as neurotrophic factors. Neurotrophic factors are secreted by target tissue and act by preventing the associated neuron from initiating programmed cell death – allowing the neurons to survive. Neurotrophins also induce the differentiation of progenitor cells to form neurons.
Although the vast majority of neurons in the mammalian brain are formed prenatally, parts of the adult brain (for example, the hippocampus) retain the ability to grow new neurons from neural stem cells, a process known as neurogenesis. Neurotrophins are chemicals that help to stimulate and control neurogenesis.
Terminology
According to the United States National Library of Medicine's medical subject headings, the term neurotrophin may be used as a synonym for neurotrophic factor, but the term neurotrophin is more generally reserved for four structurally related factors: nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), and neurotrophin-4 (NT-4). The term neurotrophic factor generally refers to these four neurotrophins, the GDNF family of ligands, and ciliary neurotrophic factor (CNTF), among other biomolecules. Neurotrophin-6 and neurotrophin-7 also exist but are only found in zebrafish.
Function
During the development of the vertebrate nervous system, many neurons become redundant (because they have died, failed to connect to target cells, etc.) and are eliminated. At the same time, developing neurons send out axon outgrowths that contact their target cells. Such cells control their degree of innervation (the number of axon connections) by the secretion of various specific neurotrophic factors that are essential for neuron survival. One of |
https://en.wikipedia.org/wiki/Leucangium%20carthusianum | Leucangium carthusianum is a species of ascomycete fungus. It is commonly known as the Oregon black truffle. It is found in the Pacific Northwest region of North America, where it grows in an ectomycorrhizal association with Douglas-fir. It is commercially collected, usually assisted by a specially trained truffle dog. Mature fruiting bodies can be dug up mostly during winter, but the season can extend from September through April.
Description
On the outside, the fruit bodies are dark brown and rough to smooth. They are sometimes mistaken for coal lumps. Inside, the gleba is gray to brownish and separated into pockets by veins. The odor is pungent and fruity, usually resembling pineapple.
Edibility
Leucangium carthusianum is a good edible mushroom; it can be prepared similarly to Oregon white and European truffles, and is typically shaved raw over a dish to add its complex musky aroma.
See also
Tuber oregonense |
https://en.wikipedia.org/wiki/International%20Mathematical%20Olympiad | The International Mathematical Olympiad (IMO) is a mathematical olympiad for pre-university students, and is the oldest of the International Science Olympiads. It is often described as "the most prestigious" mathematical competition in the world, and winning it is widely regarded as one of the greatest feats a high-school student can achieve. The first IMO was held in Romania in 1959. It has since been held annually, except in 1980. More than 100 countries participate. Each country sends a team of up to six students, plus one team leader, one deputy leader, and observers.
The content ranges from extremely difficult algebra and pre-calculus problems to problems in branches of mathematics not conventionally covered in secondary or high school, and often not at university level either, such as projective and complex geometry, functional equations, combinatorics, and well-grounded number theory, which requires extensive knowledge of theorems. Calculus, though allowed in solutions, is never required, as there is a principle that anyone with a basic understanding of mathematics should understand the problems, even if the solutions require a great deal more knowledge. Supporters of this principle claim that this allows more universality and creates an incentive to find elegant, deceptively simple-looking problems which nevertheless require a certain level of ingenuity, oftentimes a great deal of it, to net all points for a given IMO problem.
The selection process differs by country, but it often consists of a series of tests which admit fewer students at each successive stage. Awards are given to approximately the top-scoring 50% of the individual contestants. Teams are not officially recognized; all scores are given only to individual contestants, but team scores are unofficially compared more than individual scores. Contestants must be under the age of 20 and must not be registered at any tertiary institution. Subject to these conditions, an individual may participate any number of times in
https://en.wikipedia.org/wiki/Volume%20testing | Volume testing belongs to the group of non-functional tests, a group of tests that is often misunderstood and whose names are often used interchangeably. Volume testing refers to testing a software application with a certain amount of data, to assess the system's performance with that amount of data in the database. Volume testing is regarded by some as a type of capacity testing, and is often deemed necessary because other types of tests normally use only small amounts of data. It is the only type of test which checks the ability of a system to handle large pools of data; for example, it can be used to stress a database to its maximum limit. While the amount is, in generic terms, usually the database size, it can also be the size of an interface file that is the subject of volume testing. For example, to volume test an application with a specific database size, the database is expanded to that size and the application's performance is then tested on it. Another example is a requirement for the application to interact with an interface file (any file, such as .dat or .xml); this interaction could be reading from and/or writing to the file. A sample file of the intended size is then created and used to test the application's performance.
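As a sketch of what such a test can look like in practice, the snippet below generates an interface file of a chosen size and times one pass over it. The record layout, file name, sizes, and the commented-out `process_file` entry point are all hypothetical placeholders for the real application under test.

```python
import os
import time

def make_interface_file(path: str, size_mb: int) -> None:
    """Write a synthetic interface file of roughly size_mb megabytes."""
    record = b"id=000001;value=42;status=OK\n"
    count = (size_mb * 1024 * 1024) // len(record)
    with open(path, "wb") as fh:
        for _ in range(count):
            fh.write(record)

def volume_test(path: str, size_mb: int) -> float:
    """Create the bulk file, time one processing pass, and clean up."""
    make_interface_file(path, size_mb)
    start = time.perf_counter()
    # process_file(path)  # hypothetical application entry point under test
    with open(path, "rb") as fh:  # stand-in workload: stream every record
        for _ in fh:
            pass
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

if __name__ == "__main__":
    print(f"100 MB pass took {volume_test('bulk.dat', 100):.2f} s")
```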
https://en.wikipedia.org/wiki/Babassu%20oil | Babassu oil or cusi oil is a clear light yellow vegetable oil extracted from the seeds of the babassu palm (Attalea speciosa), which grows in the Amazon region of South America. It is a non-drying oil used in food, cleaners and skin products. This oil has properties similar to coconut oil and is used in much the same context. It is increasingly being used as a substitute for coconut oil. Babassu oil is about 70% lipids.
Lauric and myristic acids have melting points relatively close to human body temperature, so babassu oil can be applied to the skin as a solid that melts on contact. This heat transfer can produce a cooling sensation. It is an effective emollient.
During February 2008, a mixture of babassu oil and coconut oil was used to partially power one engine of a Boeing 747, in a biofuel trial sponsored by Virgin Atlantic. |
https://en.wikipedia.org/wiki/Retirement%20annuity%20plan | A retirement annuity plan is a financial product that ensures regular income to retirees in later years, most often issued and distributed (or sold) by an insurance organization. The main idea behind this product is to give retirees the opportunity to attain income after retirement. A retirement annuity plan (RAP) is a type of retirement plan, similar to an IRA, that provides a stream of regular (single) distributions to an insured retiree. The time intervals between distributions, as well as their amounts, are defined by the conditions and type of the annuity agreed between the issuing organization and the client. Nowadays many types of retirement annuities are offered on the market.
Accumulation & Distribution Phase
Accumulation Phase
The Accumulation Phase of a retirement plan is a period of an individual's life in which they are working and are able to save money for retirement. The accumulation phase begins when an individual starts to save money for retirement and ends when they start to receive distributions.
When individuals decide to buy an annuity, they agree to pay a lump sum upfront or to make regular deposits to the insurance institution. The money individuals pay to the insurance companies is then reinvested into the market. The money grows until the day the individual decides to retire.
Distribution (Payback) Phase
The payback phase starts as soon as distributions are paid to the insured individuals. There are different ways in which insurance organizations can distribute payments. Payments can be distributed over a predetermined period of time (e.g., 15 years) annually, semi-annually, etc., as well as in the form of a life annuity or a single payment. Payments can begin immediately after the retirement of an individual or after some period of time.
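The arithmetic behind the two phases can be sketched with the standard future-value and level-payment annuity formulas; the deposit, rate, and periods below are illustrative assumptions, not terms of any actual product.

```python
def accumulate(deposit: float, annual_rate: float, years: int) -> float:
    """Accumulation phase: beginning-of-year deposits growing at a fixed rate."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + deposit) * (1 + annual_rate)
    return balance

def level_payment(principal: float, annual_rate: float, years: int) -> float:
    """Distribution phase: level annual payout that exhausts the principal,
    PMT = PV * r / (1 - (1 + r)**-n)."""
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

pot = accumulate(deposit=5_000, annual_rate=0.05, years=30)
print(f"accumulated: {pot:,.0f}")
print(f"15-year level payout: {level_payment(pot, 0.05, 15):,.0f} per year")
```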
Types of Retirement Annuities and their differences
Fixed vs. Variable Retirement Annuity
Individuals that enter into a fixed annuity have the opportunity to decide ahead of time how much they will receive when the distribution phas |
https://en.wikipedia.org/wiki/Euler%27s%20constant | Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma ($\gamma$), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by $\log$:

$$\gamma = \lim_{n\to\infty}\left(-\log n + \sum_{k=1}^{n}\frac{1}{k}\right) = \int_{1}^{\infty}\left(\frac{1}{\lfloor x\rfloor}-\frac{1}{x}\right)\,dx.$$
Here, $\lfloor x \rfloor$ represents the floor function.
The numerical value of Euler's constant, to 50 decimal places, is:

0.57721566490153286060651209008240243104215933593992…
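A minimal sketch of the defining limit (standard library only) shows how slowly the partial sums approach the digits above; the error decays roughly like 1/(2n).

```python
import math

def gamma_approx(n: int) -> float:
    """Partial harmonic sum minus log(n)."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

GAMMA = 0.5772156649015329  # reference value to double precision
for n in (10, 1_000, 100_000):
    approx = gamma_approx(n)
    print(f"n = {n:>6}: {approx:.10f} (error {approx - GAMMA:+.2e})")
```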
History
The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43). Euler used the notations C and O for the constant. In 1790, the Italian mathematician Lorenzo Mascheroni used the notations A and a for the constant. The notation $\gamma$ appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time, perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation $\gamma$ in 1835, and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842.
Appearances
Euler's constant appears, among other places, in the following (where '*' means that this entry contains an explicit equation):
Expressions involving the exponential integral*
The Laplace transform* of the natural logarithm
The first term of the Laurent series expansion for the Riemann zeta function*, where it is the first of the Stieltjes constants*
Calculations of the digamma function
A product formula for the gamma function
The asymptotic expansion of the gamma function for small arguments.
An inequality for Euler's totient function
The growth rate of the divisor function
In dimensional regularization of Feynman diagrams in quantum field theory
The calculation of the Meissel–Mertens constant
The third of Mertens' theorems*
Solution of the second kind to Bessel's equation
In the regularization/renormalization of the harmonic series as a finite value
The mean of the Gumbel distribution
The information entropy of the Weibull and |
https://en.wikipedia.org/wiki/CARD%20%28domain%29 | Caspase recruitment domains, or caspase activation and recruitment domains (CARDs), are interaction motifs found in a wide array of proteins, typically those involved in processes relating to inflammation and apoptosis. These domains mediate the formation of larger protein complexes via direct interactions between individual CARDs. CARDs are found on a strikingly wide range of proteins, including helicases, kinases, mitochondrial proteins, caspases, and other cytoplasmic factors.
Basic features
CARDs are a subclass of protein motif known as the death fold, which features an arrangement of six to seven antiparallel alpha helices with a hydrophobic core and an outer face composed of charged residues. Other motifs in this class include the pyrin domain (PYD), death domain (DD), and death effector domain (DED), all of which also function primarily in regulation of apoptosis and inflammatory responses.
In apoptosis
CARDs were originally characterized based on their involvement in the regulation of caspase activation and apoptosis. The basic six-helix structure of the domain appears to be conserved as far back as the ced-3 and ced-4 genes in C. elegans, the organism in which several components of the apoptotic machinery were first characterized. CARDs are present on a number of proteins that promote apoptosis, primarily caspases 1, 2, 4, 5, 9, and 15 in mammals.
In the mammalian immune response
IL-1 and IL-18 processing
A number of CARDs have been shown to play a role in regulating inflammation in response to bacterial and viral pathogens as well as to a variety of endogenous stress signals. Recently, studies on the NLR protein Ipaf-1 have provided insight into how CARDs participate in the immune response. Ipaf-1 features an N-terminal CARD, a nucleotide-binding domain, and C-terminal leucine-rich repeats (LRRs), thought to function in a similar fashion to those found in Toll-like receptors. The primary role of this molecule appears to be regulation of the prote |
https://en.wikipedia.org/wiki/Entropy%20%28astrophysics%29 | In astrophysics, what is referred to as "entropy" is actually the adiabatic constant derived as follows.
Using the first law of thermodynamics for a quasi-static, infinitesimal process for a hydrostatic system,

$$dQ = dU + P\,dV.$$
For an ideal gas in this special case, the internal energy, U, is only a function of the temperature T; therefore the partial derivative of the internal energy with respect to T is identical to the full derivative, yielding through some manipulation

$$dQ = C_V\,dT + P\,dV.$$
Further manipulation using the differential version of the ideal gas law, the previous equation, and assuming constant pressure, one finds

$$dQ = C_P\,dT - V\,dP.$$
For an adiabatic process ($dQ = 0$) and recalling $\gamma \equiv C_P/C_V$, one finds
$$V\,dP = -\gamma P\,dV \qquad \Longrightarrow \qquad \frac{dP}{P} = -\gamma\,\frac{dV}{V}.$$
One can solve this simple differential equation to find

$$P V^{\gamma} = \text{constant} = K.$$
This equation is known as an expression for the adiabatic constant, K, also called the adiabat. From the ideal gas equation one also knows

$$P = n k_B T,$$
where $k_B$ is Boltzmann's constant and $n$ is the number density of particles.
Substituting this into the above equation, along with $\rho = \mu m_{\mathrm{H}} n$ and $\gamma = 5/3$ for an ideal monatomic gas, one finds

$$K = \frac{k_B T}{\mu m_{\mathrm{H}}\,\rho^{2/3}},$$
where $\mu$ is the mean molecular weight of the gas or plasma and $m_{\mathrm{H}}$ is the mass of the hydrogen atom, which is extremely close to the mass of the proton, $m_p$, the quantity more often used in astrophysical theory of galaxy clusters.
This is what astrophysicists refer to as "entropy"; in the convention used for galaxy clusters, $K = k_B T / n_e^{2/3}$, it has units of [keV cm²]. For a monatomic ideal gas this quantity relates to the thermodynamic entropy per particle as

$$s = \tfrac{3}{2}\,k_B \ln K + s_0,$$

where $s_0$ is a constant.
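As a quick numerical illustration in the galaxy-cluster convention, with the temperature in keV and the electron density in cm⁻³: the sample values below are assumed, typical-order intracluster numbers rather than data for any particular cluster.

```python
def entropy_index(kT_keV: float, n_e_per_cm3: float) -> float:
    """Adiabat K = k_B T / n_e**(2/3), returned in keV cm^2."""
    return kT_keV / n_e_per_cm3 ** (2.0 / 3.0)

# kT = 5 keV and n_e = 1e-3 cm^-3 give K = 500 keV cm^2.
print(entropy_index(5.0, 1e-3))
```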
Astrophysics
Entropy |
https://en.wikipedia.org/wiki/Henry%20Foundation%20for%20Botanical%20Research | The Henry Foundation for Botanical Research is a nonprofit botanical garden of 50 acres located in Lower Merion Township, Pennsylvania at 801 Stony Lane, Gladwyne.
The garden was established in 1948 by botanist and plant explorer Mary Gibson Henry (1884-1967) for plants that she collected through remote areas of the West, Midwest, and Southeast United States. The garden has been maintained by the Foundation since 1950.
Today the garden contains scenic plantings, gardens, and trails for walking and horseback riding set among steep hills along Rock Creek, near the Schuylkill River. It also contains 15 taxa of native magnolias, in a program established under the auspices of the North American Plant Collections Consortium (NAPCC) to help broaden the genetic diversity of organized botanical collections.
See also
List of botanical gardens in the United States
North American Plant Collections Consortium |
https://en.wikipedia.org/wiki/List%20of%20.NET%20libraries%20and%20frameworks | This article contains a list of libraries that can be used in .NET languages. These languages require .NET Framework, Mono, or .NET, which provide a basis for software development, platform independence, language interoperability and extensive framework libraries. Standard Libraries (including the Base Class Library) are not included in this article.
Preamble
Apps created with .NET Framework or .NET run in a software environment known as the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. The framework includes a large class library called Framework Class Library (FCL).
Thanks to the hosting virtual machine, different .NET CLI-compliant languages can operate on the same kind of data structures. Therefore, all CLI-compliant languages can use FCL and other .NET libraries that are written in one of the CLI compliant languages. When the source code of a CLI-compliant language is compiled, the compiler generates platform-independent code in the Common Intermediate Language (CIL, also referred to as bytecode), which is stored in CLI assemblies. When a .NET app runs, the just-in-time compiler (JIT) turns the CIL code into platform-specific machine code. To improve performance, .NET Framework also comes with the Native Image Generator (NGEN), which performs ahead-of-time compilation to machine code.
This architecture provides language interoperability. Each language can use code written in other languages. Calls from one language to another are exactly the same as they would be within a single programming language. If a library is written in one CLI language, it can be used in other CLI languages. Moreover, apps that consist only of pure .NET assemblies can be transferred to any platform that contains an implementation of CLI and run on that platform. For example, apps written using .NET can run on Windows, macOS, and various versions of Linux.
.NET apps or their libraries, h |
https://en.wikipedia.org/wiki/Bidirectional%20scattering%20distribution%20function | The definition of the BSDF (bidirectional scattering distribution function) is not well standardized. The term was probably introduced in 1980 by Bartell, Dereniak, and Wolfe. Most often it is used to name the general mathematical function which describes the way in which the light is scattered by a surface. However, in practice, this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as BRDF (bidirectional reflectance distribution function) and BTDF (bidirectional transmittance distribution function).
BSDF is a superset and the generalization of the BRDF and BTDF. The concept behind all BxDF functions could be described as a black box with the inputs being any two angles, one for the incoming (incident) ray and the second one for the outgoing (reflected or transmitted) ray at a given point of the surface. The output of this black box is the value defining the ratio between the incoming and the outgoing light energy for the given pair of angles. The content of the black box may be a mathematical formula which more or less accurately tries to model and approximate the actual surface behavior, or an algorithm which produces the output based on discrete samples of measured data. This implies that the function is 4(+1)-dimensional (4 values for 2 3D angles + 1 optional for the wavelength of the light), which means that it cannot be simply represented by a 2D or even a 3D graph. Each 2D or 3D graph, sometimes seen in the literature, shows only a slice of the function.
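A minimal sketch of such a black box, assuming the simplest possible content, a perfectly Lambertian reflector: the returned ratio is constant over the upper hemisphere and independent of both angles (the constant-BSDF case mentioned below). The albedo value is an arbitrary illustrative choice.

```python
import math

def lambertian_brdf(cos_theta_in: float, cos_theta_out: float,
                    albedo: float = 0.8) -> float:
    """Black-box BxDF: two directions in, one reflectance ratio out [1/sr]."""
    if cos_theta_in <= 0.0 or cos_theta_out <= 0.0:
        return 0.0  # a ray below the surface carries no reflected energy
    return albedo / math.pi  # constant: independent of both angles

print(lambertian_brdf(0.9, 0.5))  # same value for any valid pair of angles
```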
Some tend to use the term BSDF simply as a category name covering the whole family of BxDF functions.
The term BSDF is sometimes used in a slightly different context: for the function describing the amount of the scatter (not the scattered light), simply as a function of the incident light angle. An example to illustrate this context: for a perfectly Lambertian surface, BSDF(angle) = const. This approach is used for instance to verify the output quali
https://en.wikipedia.org/wiki/Morton%20Gurtin | Morton E. Gurtin (7 March 1934 – 20 April 2022) was a mechanical engineer who became a mathematician and mathematical physicist. He was an emeritus professor of mathematical sciences at Carnegie-Mellon University, where for many years he held an endowed chair as the Alumni Professor of Mathematical Science. His main work is in materials science, in the form of the mathematical, rational mechanics of non-linear continuum mechanics and thermodynamics, in the style of Clifford Truesdell and Walter Noll, a field also known under the combined name of continuum thermomechanics. He has published over 250 papers, many among them in Archive for Rational Mechanics and Analysis, as well as a number of books.
Biography
Gurtin received his Bachelor of Mechanical Engineering at Rensselaer Polytechnic Institute (1955), and a Ph.D. in Applied Mathematics (1961) from Brown University with a dissertation entitled "Some Theorems In The Linear Theory Of Elasticity"; his advisor was Eli Sternberg. His experience prior to his stint at Brown University includes work as a structural engineer at Douglas Aircraft, Los Angeles, and at General Electric (Utica, N.Y.), in their Advanced Engineering Program.
He taught at Brown University before joining the Department of Mathematical Sciences of Carnegie Mellon University as a professor in 1966, where he held the Alumni Chair in Mathematical Sciences from 1992 until his retirement. He advised over 20 doctoral students.
Research
Gurtin's research concerned nonlinear continuum mechanics and thermodynamics, with important contributions to the mathematical and conceptual foundations of these fields in the 1960s and 1970s. Building upon foundational work by Clifford Truesdell and the conceptual framework proposed by Walter Noll in the 1950s, Gurtin applied geometric measure theory and dynamical systems to help clarify the basic notions and laws of thermodynamics.
He increasingly directed his attention towards applications to problems i |
https://en.wikipedia.org/wiki/22%20%28number%29 | 22 (twenty-two) is the natural number following 21 and preceding 23.
In mathematics
22 is a palindromic number. It is the second Smith number, the second Erdős–Woods number, and the fourth large Schröder number. It is also a Perrin number, from a sum of 10 and 12.
22 is the sixth distinct semiprime and the fourth of the form $2 \times q$, where $q$ is a higher prime. It is the second member of the second cluster of discrete biprimes (21, 22); the next such cluster is (38, 39). Its aliquot sum is 14 (itself a semiprime), within an aliquot sequence (22, 14, 10, 8, 7, 1, 0) of four composite numbers rooted in the prime 7-aliquot tree.
The maximum number of regions into which five intersecting circles divide the plane is 22. 22 is also the quantity of pieces in a disc that can be created with six straight cuts, which makes 22 the seventh central polygonal number.
22 is the fourth pentagonal number, the third hexagonal pyramidal number, and the third centered heptagonal number.
$\frac{22}{7}$ is a commonly used approximation of the irrational number $\pi$, the ratio of the circumference of a circle to its diameter; in particular, 22 and 7 are consecutive hexagonal pyramidal numbers. Also,

$$\pi \approx \left(9^2 + \frac{19^2}{22}\right)^{1/4} = \left(\frac{2143}{22}\right)^{1/4},$$

from an approximate construction of the squaring of the circle by Srinivasa Ramanujan, correct to eight decimal places.
Natural logarithms of integers in binary are known to have Bailey–Borwein–Plouffe-type formulae for all integers.
22 is the number of partitions of 8, as well as the sum of the totient function over the first eight integers, with $\varphi(22) = 10$.
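These facts are easy to confirm with a few lines of standard-library code (a quick sanity check, not part of the article's sources):

```python
from math import gcd

def partitions(n: int) -> int:
    """Count the partitions of n by dynamic programming."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

def phi(n: int) -> int:
    """Euler's totient, computed directly from the gcd definition."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert partitions(8) == 22                      # 22 partitions of 8
assert sum(phi(k) for k in range(1, 9)) == 22   # totient sum over 1..8
assert phi(22) == 10                            # phi(22) = 10
```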
22 can be read as "two twos", which is the only fixed point of John Conway's look-and-say function. In other words, "22" generates the infinite repeating sequence "22, 22, 22, ..."
All regular polygons with fewer than 22 edges can be constructed with an angle trisector, with the exception of the 11-sided hendecagon.
There is an elementary set of twenty-two single-orbit convex tilings that tessellate two-dimensional s |
https://en.wikipedia.org/wiki/Holland%20Computing%20Center | The Holland Computing Center, often abbreviated HCC, is the high-performance computing core for the University of Nebraska System. HCC has locations in both the University of Nebraska-Lincoln June and Paul Schorr III Center for Computer Science & Engineering and the University of Nebraska Omaha Peter Kiewit Institute.
The center was named after Omaha businessman Richard Holland who donated considerably to the university for the project.
Both locations provide various research computing services and hardware. The retrofitted facilities at the PKI location include the Crane Supercomputer which “is used by scientists and engineers to study topics such as nanoscale chemistry, subatomic physics, meteorology, crashworthiness, artificial intelligence and bioinformatics” and Anvil, the Holland Computing Center's "Cloud" based on the OpenStack Architecture. Other resources include "Rhino" for shared memory processing and "Red" for LHC grid computing.
Active resources
Crane
The Crane Supercomputer is HCC's most powerful supercomputer and is used as the primary computational resource for many researchers within the University of Nebraska system across a variety of disciplines. When it was implemented in 2013, Crane was ranked 474th in the TOP500. As of May 2019, Crane is composed of 548 nodes offering a total of 12,236 cores, 68,000 GB of memory, and 57 Nvidia GPUs. Crane has 1.5 PB of available Lustre storage (1 PB = 1 million gigabytes).
In 2017, Crane received a major upgrade, adding nodes with the Omni-Path interconnect architecture.
Rhino
Rhino is the latest addition to HCC's resources, taking the place of the former Tusker supercomputer and using nodes from both Tusker and Sandhills. At its creation in June 2019, Rhino comprised 112 nodes offering a total of 7,168 cores and 25,856 GB of memory. The cluster has 360 TB of Lustre storage available.
Red
Red is the resource for the University of Nebraska-Lincoln's US CMS Tier-2 site. Initially created in August 2005, th |
https://en.wikipedia.org/wiki/Crystal%20habit | In mineralogy, crystal habit is the characteristic external shape of an individual crystal or aggregate of crystals. The habit of a crystal is dependent on its crystallographic form and growth conditions, which generally creates irregularities due to limited space in the crystallizing medium (commonly in rocks).
Crystal forms
Recognizing the habit can aid in mineral identification and description, as the crystal habit is an external representation of the internal ordered atomic arrangement. Most natural crystals, however, do not display ideal habits and are commonly malformed. Hence, it is also important to describe the quality of the shape of a mineral specimen:
Euhedral: a crystal that is completely bounded by its characteristic faces, well-formed. Synonymous terms: idiomorphic, automorphic;
Subhedral: a crystal partially bounded by its characteristic faces and partially by irregular surfaces. Synonymous terms: hypidiomorphic, hypautomorphic;
Anhedral: a crystal that lacks any of its characteristic faces, completely malformed. Synonymous terms: allotriomorphic, xenomorphic.
Altering factors
Factors influencing habit include: a combination of two or more crystal forms; trace impurities present during growth; crystal twinning and growth conditions (i.e., heat, pressure, space); and specific growth tendencies such as growth striations. Minerals belonging to the same crystal system do not necessarily exhibit the same habit. Some habits of a mineral are unique to its variety and locality: For example, while most sapphires form elongate barrel-shaped crystals, those found in Montana form stout tabular crystals. Ordinarily, the latter habit is seen only in ruby. Sapphire and ruby are both varieties of the same mineral: corundum.
Some minerals may replace other existing minerals while preserving the original's habit, i.e. pseudomorphous replacement. A classic example is tiger's eye quartz, crocidolite asbestos replaced by silica. While quartz typically forms prism |
https://en.wikipedia.org/wiki/Polar%20night | Polar night is a phenomenon in the northernmost and southernmost regions of Earth where night lasts for more than 24 hours. This occurs only inside the polar circles. The opposite phenomenon, polar day, or midnight sun, occurs when the Sun remains above the horizon for more than 24 hours.
"Night" is understood as the center of the Sun being below a free horizon. Since the atmosphere refracts sunlight, polar day is longer than polar night, and the area that is affected by polar night is somewhat smaller than the area of midnight sun. The polar circle is located at a latitude between these two areas, at approximately 66.5°. While it is day in the Arctic Circle, it is night in the Antarctic Circle, and vice versa.
Any planet or moon with a sufficient axial tilt that rotates with respect to its star significantly more frequently than it orbits the star (and with no tidal locking between the two) will experience the same phenomenon (a nighttime lasting more than one rotation period).
Description
Even the shortest day is not totally dark everywhere inside the polar circle: full darkness occurs only in places within about 5.5° of the poles, and only when the Moon is well below the horizon. Regions located at the inner border of the polar circles experience polar twilight instead of polar night. In fact, polar regions typically get more twilight throughout the year than equatorial regions.
For regions inside the polar circles, the maximum length of time that the Sun is completely below the horizon varies from zero to a few days just beyond the Arctic Circle and Antarctic Circle, to 179 days at the Poles. However, not all this time is classified as polar night, since sunlight may be visible because of refraction. The time when any part of the Sun is above the horizon at the poles is 186 days. The preceding numbers are averages: the ellipticity of Earth's orbit makes the South Pole receive a week more of Sun-below-horizon than the North Pole (see equinox).
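The latitude thresholds quoted above follow from simple noon-altitude geometry. The sketch below ignores refraction and the Sun's angular radius, so it slightly overstates the polar-night region; the declination used is the northern winter-solstice figure.

```python
def noon_solar_altitude(latitude_deg: float, declination_deg: float) -> float:
    """Sun's altitude at local solar noon, in degrees."""
    return 90.0 - latitude_deg + declination_deg

# Northern winter solstice: solar declination is about -23.44 degrees.
for lat in (66.5, 70.0, 78.0, 90.0):
    alt = noon_solar_altitude(lat, -23.44)
    note = "(sun never rises)" if alt < 0 else ""
    print(f"latitude {lat:5.1f}: noon altitude {alt:+6.2f} deg {note}")
```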
Types of polar night
A |
https://en.wikipedia.org/wiki/Real%20Analysis%20Exchange | The Real Analysis Exchange (RAEX) is a biannual mathematics journal, publishing survey articles, research papers, and conference reports in real analysis and related topics. Its editor-in-chief is Paul D. Humke.
External links
The website of RAEX
Mathematics journals |
https://en.wikipedia.org/wiki/Yak | The domestic yak (Bos grunniens), also known as the Tartary ox, grunting ox, or hairy cattle, is a species of long-haired domesticated cattle found throughout the Himalayan region of the Indian subcontinent, the Tibetan Plateau, Gilgit-Baltistan (Kashmir), Tajikistan and as far north as Mongolia and Siberia. It is descended from the wild yak (Bos mutus).
Etymology
The English word yak originates from Tibetan. In Tibetan and Balti it refers only to the male of the species; the female is known by a separate term in Tibetan and in Balti. In English, as in most other languages that have borrowed the word, yak is usually used for both sexes, with bull or cow referring to each sex separately.
Taxonomy
Belonging to the genus Bos, yaks are related to cattle (Bos primigenius). Mitochondrial DNA analyses to determine the evolutionary history of yaks have been inconclusive.
The yak may have diverged from cattle at any point between one and five million years ago, and there is some suggestion that it may be more closely related to bison than to the other members of its designated genus. Apparent close fossil relatives of the yak, such as Bos baikalensis, have been found in eastern Russia, suggesting a possible route by which yak-like ancestors of the modern American bison could have entered the Americas.
The species was originally designated as Bos grunniens ("grunting ox") by Linnaeus in 1766, but this name is now generally considered to refer only to the domesticated form of the animal, with Bos mutus ("mute ox") being the preferred name for the wild species. Although some authors still consider the wild yak to be a subspecies, Bos grunniens mutus, the ICZN made an official ruling in 2003 permitting the use of the name Bos mutus for wild yaks, and this is now the more common usage.
Except where the wild yak is considered as a subspecies of Bos grunniens, there are no recognised subspecies of yak.
Physical characteristics
Yaks are heavily built animals with bulky frames, sturdy |
https://en.wikipedia.org/wiki/PrairieTek | PrairieTek was a hard drive manufacturer located in Longmont, Colorado in the late 1980s and early 1990s. It was founded by Terry Johnson in 1985. It manufactured 5 and 10 megabyte "ruggedized" miniature hard drives for the laptop computer market.
Its PrairieTek 220 was the first 2.5-inch hard drive, a potentially profitable move, but by the time the drive entered mass production, its storage capacity was already low for the market. "PrairieTek's single disk 40MB model, potentially a cost-effective competitive product, was...late to market."
Unlike many manufacturers of the time, PrairieTek did not rest the drive heads on the disks, but instead used reverse EMF (electromotive force) to park the heads on a spreader bar. At the time all manufacturers parked the heads on the platters in an out-of-the-way place. When the platters spun up, the increasingly smooth surface of the platter gave the heads a tendency to stick to it (stiction), which could result in ripping them off the arms. This dynamic loading of the heads avoided the problem and was a carry-over from a previous design used in 8-inch removable drives.
The company first went into the black in September 1990, at which point co-founder Steve Volk and the core group of engineers resigned to found Intégral Peripherals, to develop 1.8" drives.
The company failed within a year of Volk leaving. The first round of layoffs started in February 1991, all production activity ceased at the end of June - and the company filed for Chapter 11 bankruptcy in September 1991.
The bidding war for PrairieTek's patent portfolio in 1992 rose to the then astounding price of $18M, paid by Conner Peripherals and Alps Electric - PrairieTek's patent #4,933,785 in particular was sought after. |
https://en.wikipedia.org/wiki/Pulay%20stress | The Pulay stress or Pulay forces (named for Peter Pulay) is an error that occurs in the stress tensor (or Jacobian matrix) obtained from self-consistent field calculations (Hartree–Fock or density functional theory) due to the incompleteness of the basis set.
A plane-wave density functional calculation on a crystal with specified lattice vectors will typically include in the basis set all plane waves with energies below the specified energy cutoff. This corresponds to all points on the reciprocal lattice that lie within a sphere whose radius is related to the energy cutoff. Consider what happens when the lattice vectors are varied, resulting in a change in the reciprocal lattice vectors. The points on the reciprocal lattice which represent the basis set will no longer correspond to a sphere, but an ellipsoid. This change in the basis set will result in errors in the calculated ground state energy change.
The Pulay stress is often nearly isotropic, and tends to result in an underestimate of the equilibrium volume. Pulay stress can be reduced by increasing the energy cutoff. Another way to mitigate the effect of Pulay stress on the equilibrium cell shape is to calculate the energy at different lattice vectors with a fixed energy cutoff.
Similarly, the error occurs in any calculation where the basis set explicitly depends on the positions of the atomic nuclei (which change during geometry optimization). In this case, the Hellmann–Feynman theorem – which is used to avoid differentiating the many-parameter wave function (expanded in a basis set) – is only valid for a complete basis set. Otherwise, the terms in the theorem's expression containing derivatives of the wavefunction persist, giving rise to additional forces – the Pulay forces:

$$\mathbf{F}^{\mathrm{Pulay}} = -2\,\mathrm{Re}\left\langle \frac{\partial \Psi}{\partial \mathbf{R}} \right| \hat{H} - E \left| \Psi \right\rangle.$$
The presence of Pulay forces makes the optimized geometry parameters converge more slowly as the basis set grows. The way to eliminate the erroneous forces is to use nuclear-position-independent basis functions, to explicitly calculate
https://en.wikipedia.org/wiki/Synthetic%20media | Synthetic media (also known as AI-generated media, media produced by generative AI, personalized media, personalized content, and colloquially as deepfakes) is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, for example for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media, but instead by their respective terminology (often with "deepfakes" used loosely, e.g. "deepfakes for text" for natural-language generation, "deepfakes for voices" for neural voice cloning, etc.). Significant attention came to the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos inserting the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
History
Pre-1950s
Synthetic media as a process of automated art dates back to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria designed machines capable of writing text, generating sounds, and playing music. The tradition of automaton-based entertainment flourished throughout history, with mechanical beings' seemingly magical ability to mimic human creativity often drawing crowds throughout Europe, China, India, and so on. |
https://en.wikipedia.org/wiki/Interolog | An interolog is a conserved interaction between a pair of proteins which have interacting homologs in another organism. The term was introduced in a 2000 paper by Walhout et al.
Example
Suppose that A and B are two different interacting human proteins, and A' and B' are two different interacting dog proteins. Then the interaction between A and B is an interolog of the interaction between A' and B' if the following conditions all hold:
A is a homolog of A'. (Protein homologs have similar amino acid sequences and derive from a common ancestral sequence).
B is a homolog of B'.
A and B interact.
A' and B' interact.
Thus, interologs are homologous pairs of protein interactions across different organisms.
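The four conditions translate directly into code. The sketch below encodes them over toy placeholder data; the protein names and interaction sets are illustrative, not real annotations.

```python
homologs = {("A", "A'"), ("B", "B'")}         # cross-species homolog pairs
interactions_sp1 = {frozenset(("A", "B"))}    # observed interactions, species 1
interactions_sp2 = {frozenset(("A'", "B'"))}  # observed interactions, species 2

def is_interolog(p1: str, p2: str, q1: str, q2: str) -> bool:
    """True if interaction (p1, p2) is an interolog of (q1, q2)."""
    return ((p1, q1) in homologs and                     # condition 1
            (p2, q2) in homologs and                     # condition 2
            frozenset((p1, p2)) in interactions_sp1 and  # condition 3
            frozenset((q1, q2)) in interactions_sp2)     # condition 4

print(is_interolog("A", "B", "A'", "B'"))  # True
```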
See also
Homology (biology)
Systems biology
Bioinformatics |
https://en.wikipedia.org/wiki/Swelling%20index | Swelling index may refer to the following material parameters that quantify volume change:
Crucible swelling index, also known as free swelling index, in coal assay
Swelling capacity, the amount of a liquid that can be absorbed by a polymer
Shrink–swell capacity in soil mechanics
Unload-reload constant (κ) in critical state soil mechanics
Mechanics
Materials science |
https://en.wikipedia.org/wiki/Soundscape%20SSHDR1 | The Soundscape SSHDR1 (1993–1996) was one of the first Windows-based digital audio workstations available and was manufactured by Soundscape Digital Technology Ltd.
The system consisted of an external 2U rack unit which housed the audio processing hardware, based on Motorola 56000-family DSPs; two inputs and four outputs in both unbalanced analogue and S/PDIF digital formats; and two IDE hard disk drives. Synchronisation was via MIDI in/out/thru using MIDI Timecode, and an optional I/O board provided balanced analogue and AES/EBU connections. Each unit could record and play 4 tracks of 16-bit 48 kHz audio, but later software upgrades increased this to 8 tracks.
The unit connected to an ISA card fitted into a PC expansion slot, each of which could host 2 x SSHDR1 units. Multiple host cards could be used.
Windows software (for Windows 3.1, 95/98/ME, 2000, XP) controlled the unit and provided 256 virtual tracks, mixing and editing.
Up to 16 units could be used simultaneously, with full sample accurate synchronisation, controlled by one Soundscape editing application.
In 1995 an optional DSP board added a configurable mixer with real-time DSP plugins and an extra 8 channels in/out via a TDIF port, and allowed the unit to record 10 or play back 12 tracks simultaneously at up to 24-bit 48 kHz. The card could be retrofitted into any unit.
Optional software packages for auto-conforming (for film and TV post-production use) and CD mastering were available, as well as a selection of plug-in effects developed by well-known companies such as TC Electronic and Dolby.
Digital audio |
https://en.wikipedia.org/wiki/Handheld%20projector | A handheld projector (also known as a pocket projector, mobile projector, pico projector or mini beamer) is an image projector in a handheld device. It was developed as a computer display device for compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but are too small to accommodate a display screen that an audience can see easily. Handheld projectors involve miniaturized hardware, and software that can project digital images onto a nearby viewing surface.
The system comprises five main parts: the battery, the electronics, the laser or LED light sources, the combiner optic, and in some cases, scanning micromirror devices. First, the electronics system turns the image into an electronic signal. Next, the electronic signals drive laser or LED light sources with different colors and intensities down different paths. In the combiner optic, the different light paths are combined into one path, defining a palette of colors. An important design characteristic of a handheld projector is the ability to project a clear image on various viewing surfaces.
History
Major advances in imaging technology have allowed the introduction of hand-held (pico) type video projectors. The concept was also introduced by Explay in 2003 to various consumer electronics players. Their solution was publicly announced through their relationship with Kopin in January 2005.
Insight Media market research has divided the leading players in this application into various categories:
Micro-display makers (e.g., TI's DLP, Himax, Microvision, Lemoptix and bTendo MEMS scanners)
Light source makers (e.g., Philips Lumileds, Osram, Cree LEDs and Corning, Nichia, Mitsubishi Lasers)
Module makers (e.g., Texas Instruments (DLP), 3M Liquid crystal on silicon (LCoS))
Various manufacturers have produced handheld projectors exhibiting high-resolution, good brightness, and low energy cons |
https://en.wikipedia.org/wiki/Theory%20of%20conjoint%20measurement | The theory of conjoint measurement (also known as conjoint measurement or additive conjoint measurement) is a general, formal theory of continuous quantity. It was independently discovered by the French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey (1964).
The theory concerns the situation where at least two natural attributes, A and X, non-interactively relate to a third attribute, P. It is not required that A, X or P are known to be quantities. Via specific relations between the levels of P, it can be established that P, A and X are continuous quantities. Hence the theory of conjoint measurement can be used to quantify attributes in empirical circumstances where it is not possible to combine the levels of the attributes using a side-by-side operation or concatenation. The quantification of psychological attributes such as attitudes, cognitive abilities and utility is therefore logically plausible. This means that the scientific measurement of psychological attributes is possible. That is, like physical quantities, a magnitude of a psychological quantity may possibly be expressed as the product of a real number and a unit magnitude.
Application of the theory of conjoint measurement in psychology, however, has been limited. It has been argued that this is due to the high level of formal mathematics involved and that the theory cannot account for the "noisy" data typically discovered in psychological research. It has been argued that the Rasch model is a stochastic variant of the theory of conjoint measurement; however, this has been disputed (e.g., Karabatsos, 2001; Kyngdon, 2008). Order-restricted methods for conducting probabilistic tests of the cancellation axioms of conjoint measurement have been developed in the past decade (e.g., Karabatsos, 2001; Davis-Stober, 2009).
The theory of conjoint measurement is (different but) related to conjoint analysis, whi |
https://en.wikipedia.org/wiki/15%20Bean%20Soup | 15 Bean Soup is a packaged dry bean soup product from Indianapolis-based N.K. Hurst Co. According to company president Rick Hurst, it is the #1 selling dry bean soup in the U.S.
Ingredients
Every package of 15 bean soup includes a seasoning packet and at least 15 of the following varieties of dried pulses:
Northern beans
Pinto beans
Large lima beans
Yelloweye beans
Garbanzo beans
Baby lima beans
Green split peas
Kidney beans
Cranberry beans
Small white beans
Pink beans
Small red beans
Yellow split peas
Lentils
Navy beans
White kidney beans
Black beans
The soup is currently produced in ham, chicken, Cajun, and beef flavors.
See also
List of bean soups
List of legume dishes |
https://en.wikipedia.org/wiki/Electron%20wake | Electron wake is the disturbance left after a high-energy charged particle passes through condensed matter or plasma. A passing ion can excite periodic oscillations of the crystal lattice or a plasma wave at the characteristic frequency of the crystal or plasma. Interactions of the field created by these oscillations with the charged particle's field alternate between constructive and destructive interference, producing alternating waves of electric field and displacement. The frequency of the wake field is determined by the nature of the penetrated matter, and the period of the wake field is directly proportional to the speed of the incoming charged particle. The amplitude of the first wake wave is the most important, as it produces a braking force on the charged particle, eventually slowing it down. Wake fields can also capture and guide lightweight ions or positrons in the direction perpendicular to the wake. The larger the speed of the original charged particle, the larger the angle between the initial particle's velocity and the captured ion's velocity.
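The proportionality between the wake's spatial period and the projectile speed follows from the fixed plasma frequency of the medium. A back-of-the-envelope sketch in SI units, with an assumed, illustrative electron density:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
E_CH = 1.602e-19   # elementary charge [C]
M_E = 9.109e-31    # electron mass [kg]

def plasma_frequency(n_e: float) -> float:
    """Electron plasma (angular) frequency for number density n_e [m^-3]."""
    return math.sqrt(n_e * E_CH**2 / (EPS0 * M_E))

def wake_wavelength(n_e: float, v: float) -> float:
    """Spatial period of the wake: projectile speed times 2*pi/omega_p."""
    return 2.0 * math.pi * v / plasma_frequency(n_e)

n_e = 1e24  # assumed electron density [m^-3]
for v in (1e6, 5e6, 1e7):  # projectile speeds [m/s]
    print(f"v = {v:.0e} m/s -> wake wavelength {wake_wavelength(n_e, v):.2e} m")
```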
https://en.wikipedia.org/wiki/Nestin%20%28protein%29 | Nestin is a protein that in humans is encoded by the NES gene.
Nestin (acronym for neuroepithelial stem cell protein) is a type VI intermediate filament (IF) protein. These intermediate filament proteins are expressed mostly in nerve cells where they are implicated in the radial growth of the axon. Seven genes encode the heavy (NF-H), medium (NF-M) and light neurofilament (NF-L) proteins, nestin and α-internexin in nerve cells, synemin α and desmuslin/synemin β (two alternative transcripts of the DMN gene) in muscle cells, and syncoilin (also in muscle cells). Members of this group mostly preferentially coassemble as heteropolymers in tissues. Steinert et al. have shown that nestin forms homodimers and homotetramers but does not form IF by itself in vitro. In mixtures, nestin preferentially co-assembles with purified vimentin or the type IV IF protein internexin to form heterodimer coiled-coil molecules.
Gene
Structurally, nestin has the shortest head domain (N-terminus) and the longest tail domain (C-terminus) of all the IF proteins. Nestin is of high molecular weight (240kDa) with a terminus greater than 500 residues (compared to cytokeratins and lamins with termini less than 50 residues).
After subcloning the human nestin gene into plasmid vectors, Dahlstrand et al. determined the nucleotide sequence of all coding regions and parts of the introns. In order to establish the boundaries of the introns, they used the polymerase chain reaction (PCR) to amplify a fragment made from human fetal brain cDNA using two primers located in the first and fourth exon, respectively. The resulting 270 base pair (bp) long fragment was then sequenced directly in its entirety, and intron positions precisely located by comparison with the genomic sequence. Putative initiation and stop codons for the human nestin gene were found at the same positions as in the rat gene, in regions where overall similarity was very high. Based on this assumption, the human nestin gene encod |
https://en.wikipedia.org/wiki/IMPDH/GMPR%20family | In molecular biology, the IMPDH/GMPR family of enzymes includes IMP dehydrogenase and GMP reductase. These enzymes are involved in purine metabolism. These enzymes adopt a TIM barrel structure. |
https://en.wikipedia.org/wiki/Fermat%E2%80%93Catalan%20conjecture | In number theory, the Fermat–Catalan conjecture is a generalization of Fermat's Last Theorem and of Catalan's conjecture, hence the name. The conjecture states that the equation

$$a^m + b^n = c^k \qquad (1)$$
has only finitely many solutions (a, b, c, m, n, k) with distinct triplets of values $(a^m, b^n, c^k)$, where a, b, c are positive coprime integers and m, n, k are positive integers satisfying

$$\frac{1}{m} + \frac{1}{n} + \frac{1}{k} < 1. \qquad (2)$$
The inequality on m, n, and k is a necessary part of the conjecture. Without the inequality there would be infinitely many solutions, for instance with k = 1 (for any a, b, m, and n and with c = am + bn) or with m, n, and k all equal to two (for the infinitely many known Pythagorean triples).
Known solutions
As of 2015, the following ten solutions to equation (1) which meet the criteria of equation (2) are known:
$1^m + 2^3 = 3^2$ (for $m > 6$ to satisfy Eq. 2)
$2^5 + 7^2 = 3^4$
$13^2 + 7^3 = 2^9$
$2^7 + 17^3 = 71^2$
$3^5 + 11^4 = 122^2$
$33^8 + 1549034^2 = 15613^3$
$1414^3 + 2213459^2 = 65^7$
$9262^3 + 15312283^2 = 113^7$
$17^7 + 76271^3 = 21063928^2$
$43^8 + 96222^3 = 30042907^2$
The first of these ($1^m + 2^3 = 3^2$) is the only solution where one of a, b or c is 1, according to the Catalan conjecture, proven in 2002 by Preda Mihăilescu. While this case leads to infinitely many solutions of (1) (since one can pick any m for m > 6), these solutions only give a single triplet of values $(a^m, b^n, c^k)$.
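A brute-force search makes the criteria concrete; within small, assumed bounds it rediscovers the smaller solutions above, such as $2^5 + 7^2 = 3^4$. The a = 1 family is excluded for brevity, and the bounds are far too small to say anything about the conjecture itself.

```python
from math import gcd

def small_solutions(max_base: int = 130, max_exp: int = 9):
    """Search a^m + b^n = c^k with coprime a, b and 1/m + 1/n + 1/k < 1."""
    found = set()
    for a in range(2, max_base):
        for b in range(a, max_base):
            if gcd(a, b) != 1:
                continue
            for m in range(2, max_exp + 1):
                am = a ** m
                for n in range(2, max_exp + 1):
                    s = am + b ** n
                    for k in range(2, max_exp + 1):
                        # integer form of the condition 1/m + 1/n + 1/k < 1
                        if n * k + m * k + m * n >= m * n * k:
                            continue
                        c = round(s ** (1.0 / k))
                        for cc in (c - 1, c, c + 1):
                            if cc > 1 and cc ** k == s and gcd(a, cc) == 1:
                                found.add((a, m, b, n, cc, k))
    return found

for a, m, b, n, c, k in sorted(small_solutions()):
    print(f"{a}^{m} + {b}^{n} = {c}^{k}")
```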
Partial results
It is known by the Darmon–Granville theorem, which uses Faltings's theorem, that for any fixed choice of positive integers m, n and k satisfying (2), only finitely many coprime triples (a, b, c) solving (1) exist. However, the full Fermat–Catalan conjecture is stronger as it allows for the exponents m, n and k to vary.
The abc conjecture implies the Fermat–Catalan conjecture.
For a list of results for impossible combinations of exponents, see Beal conjecture#Partial results. Beal's conjecture is true if and only if all Fermat–Catalan solutions have m = 2, n = 2, or k = 2.
See also
Sums of powers, a list of related conjectures and theorems |
https://en.wikipedia.org/wiki/Argus%20retinal%20prosthesis | Argus retinal prosthesis, also known as a bionic eye, is an electronic retinal implant manufactured by the American company Second Sight Medical Products. It is used as a visual prosthesis to improve the vision of people with severe cases of retinitis pigmentosa. The Argus II version of the system was approved for marketing in the European Union in March 2011, and it received approval in the US in February 2013 under a humanitarian device exemption. The Argus II system costs about US$150,000, excluding the cost of the implantation surgery and training to learn to use the device. Second Sight had its IPO in 2014 and was listed on Nasdaq.
Production and development of the prosthesis was discontinued in 2020, but taken over by the company Cortigent in 2023.
Medical use
The Argus II is specifically designed to treat people with retinitis pigmentosa. The device was approved with data from a single-arm clinical trial that enrolled thirty people with severe retinitis pigmentosa; the longest follow-up on a trial subject was 38.3 months. People in the trial received the implant in only one eye and tests were conducted with the device switched on, or switched off as a control. With the device switched on, about 23% of the subjects had improvements in their ability to see; all had been at 2.9 or higher on the LogMAR scale and improvements ranged from just under 2.9 to 1.6 LogMAR – the equivalent of 20/1262 reading ability. 96% of the subjects were better able to identify a white square on a black computer screen; 57% were more able to determine the direction in which a white bar moved across a black computer screen. With the device switched on, about 60% were able to accurately walk to a door that was 20 feet away, as opposed to only 5% with the device switched off; 93% had no change in their perception of light.
Side effects
Among the thirty subjects in the clinical trial, there were nine serious adverse events recorded, including lower than normal intraocular pressure, e |
https://en.wikipedia.org/wiki/Cryptomorphism | In mathematics, two objects, especially systems of axioms or semantics for them, are called cryptomorphic if they are equivalent but not obviously equivalent. In particular, two definitions or axiomatizations of the same object are "cryptomorphic" if it is not obvious that they define the same object. Examples of cryptomorphic definitions abound in matroid theory and others can be found elsewhere, e.g., in group theory the definition of a group by a single operation of division, which is not obviously equivalent to the usual three "operations" of identity element, inverse, and multiplication.
This word is a play on the many morphisms in mathematics, but "cryptomorphism" is only very distantly related to "isomorphism", "homomorphism", or "morphism". In a cryptomorphism, the equivalence, if it is not actual identity, may be informal, or may be formalized in terms of a bijection or an equivalence of categories between the mathematical objects defined by the two cryptomorphic axiom systems.
Etymology
The word was coined by Garrett Birkhoff before 1967, for use in the third edition of his book Lattice Theory. Birkhoff did not give it a formal definition, though others working in the field have made some attempts since.
Use in matroid theory
Its informal sense was popularized (and greatly expanded in scope) by Gian-Carlo Rota in the context of matroid theory: there are dozens of equivalent axiomatic approaches to matroids, but two different systems of axioms often look very different.
In his 1997 book Indiscrete Thoughts, Rota describes the situation.
Though there are many cryptomorphic concepts in mathematics outside of matroid theory and universal algebra, the word has not caught on among mathematicians generally. It is, however, in fairly wide use among researchers in matroid theory.
See also
Combinatorial class, an equivalence among combinatorial enumeration problems hinting at the existence of a cryptomorphism |
https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%20problem | The Earth–Moon problem is an unsolved problem on graph coloring in mathematics. It is an extension of the planar map coloring problem (solved by the four color theorem), and was posed by Gerhard Ringel in 1959. In mathematical terms, it seeks the chromatic number of biplanar graphs. It is known that this number is at least 9 and at most 12.
Formulation
In the map coloring problem, a collection of simply connected regions in the Euclidean plane or a topologically equivalent space, such as countries on the surface of the Earth, are to be colored so that two regions sharing a boundary of nonzero length receive different colors. It can be transformed into a graph coloring problem by making a vertex for each region and an edge for each pair of neighboring regions, producing a planar graph whose vertices are to be colored. According to the four color theorem, it is always possible to do so using at most four different colors, no matter how many regions are given.
Instead, in the Earth–Moon problem, each country on the Earth has a corresponding colony on the surface of the Moon, which must be given the same color. These colonies may have borders that are completely different from the arrangement of the borders on the Earth. The countries must be colored, using the same color for each country and its colony, so that when two countries share a border either on the Earth or on the Moon they are given different colors. Ringel's problem asks: how many colors are needed to guarantee that the countries can all be colored, no matter how their boundaries are arranged?
Again, one can phrase the same question equivalently as one in graph theory, with a single vertex for each pair of a country and its colony, and an edge for each adjacency between countries or colonies. The graphs that result in this version of the problem are biplanar graphs, or equivalently the graphs of thickness two: their edges can be partitioned into two subsets (the Earth adjacencies and the Moon adjacencie |
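As an illustration (a minimal sketch with an arbitrary toy graph, not part of the original problem statement), the two-layer formulation can be checked with the networkx library: a graph is biplanar exactly when its edges split into two planar layers, and a greedy coloring of the union bounds the number of colors needed:

import networkx as nx

# Toy example: Earth and Moon adjacencies over the same four countries.
earth = [(0, 1), (1, 2), (2, 0), (2, 3)]
moon = [(0, 3), (1, 3)]

# Each layer must be planar on its own for the union to be biplanar.
for name, edges in (("Earth", earth), ("Moon", moon)):
    planar, _ = nx.check_planarity(nx.Graph(edges))
    print(name, "layer planar:", planar)

# A greedy coloring of the union upper-bounds its chromatic number.
union = nx.Graph(earth + moon)
coloring = nx.greedy_color(union, strategy="largest_first")
print("colors used:", max(coloring.values()) + 1)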
https://en.wikipedia.org/wiki/Remote%20Utilities | Remote Utilities is remote desktop software that allows a user to remotely control another computer through a proprietary protocol: the user can see the remote computer's desktop and operate its keyboard and mouse.
The program utilizes the client-server model and consists of two primary components: the Host, which is installed on the remote computer, and the Viewer, which is installed on the local PC. Other modules include the Agent, Remote Utilities Server (RU Server) and the portable Viewer.
Features and architecture
Remote Utilities provides full control over the remote system and allows the remote computer to be viewed without disrupting its user. The connection is established via an IP address or an Internet ID, and an IP filtering system allows access to be restricted to certain IP addresses.
It has the following connection modes:
Full control
View only
File transfer
Task manager
Terminal
Inventory manager
RDP Integration
Text chat
Remote registry
Screen recorder
Execute
Power control
Send message
Voice and Video chat
Remote settings
The Internet-ID
The Internet-ID technology became available in Remote Utilities starting with version 5.0. It allows the user to bypass software and hardware firewalls and NAT devices when setting up a remote connection over the Internet.
Remote Utilities Agent
Remote Utilities Agent was introduced with the release of version 5.1; it is a program module for spontaneous support that runs without installation or administrative privileges.
Remote Utilities Server
Remote Utilities Server (RU Server) is a program module which serves as a self-hosted replacement for Remote Utilities hosted relay servers. RU Server has been made available with Remote Utilities version 5.1 release. The most recent version of RU Server as of December 22, 2021 is version 3.1.0.0.
History
The developer company Remote Utilities, formerly known as Usoris Systems, was founded in 2009. The predecessor project, Remote Office Manager, was sta
https://en.wikipedia.org/wiki/Julia%20Robinson%20Mathematics%20Festival | The Julia Robinson Mathematics Festival (JRMF) is an educational organization that sponsors locally organized mathematics festivals and online webinars targeting K–12 students. The events are designed to introduce students to mathematics in a collaborative and non-competitive forum.
History
In the 1970s, Saint Mary's College of California produced a mathematics contest that was popular with secondary schools throughout the San Francisco Bay Area. In 2005, Nancy Blachman attended an education forum sponsored by the Mathematical Sciences Research Institute (MSRI) and remembered how the Saint Mary's contest had inspired her as a student. Unfortunately, the contest no longer existed. Seeking to possibly resurrect the contest, Blachman and MSRI development director Jim Sotiros reached out to colleagues in the educational community. One response was from local high school math teacher Joshua Zucker, who also remembered the contest and even had saved a book of problems from it. Sotiros suggested that Blachman and her husband David desJardins fund MSRI in order to hire Zucker to recreate a program in the style of the Saint Mary’s Math Contest. Blachman and Zucker became co-founders and organized their first event in 2007. They called it a festival rather than a contest because they wanted to emphasize collaboration, creativity and fun rather than competition. They named the festival after Julia Robinson, a mathematician renowned for her contributions to decision problems. In fact, her work on Hilbert's 10th problem played a crucial role in its ultimate resolution. Blachman felt that such a woman would provide a role model for young girls and would show that one need not be male to be a great mathematician.
When they sent out invitations to local schools, the response was so overwhelming that, in order to have enough space, they prevailed upon Google in nearby Mountain View to host the first festival. A second festival was hosted in 2008 by Pixar Animation Studios |
https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Institution%20of%20Electrical%20Engineers | Proceedings of the Institution of Electrical Engineers was a series of journals which published the proceedings of the Institution of Electrical Engineers. It was originally established as the Journal of the Society of Telegraph Engineers in 1872, and was known under several titles over the years, such as Journal of the Institution of Electrical Engineers, Proceedings of the IEE and IEE Proceedings.
History
The journal was originally established in 1872, as
Journal of the Society of Telegraph Engineers (1872–1880)
It then underwent a series of name changes:
Journal of the Society of Telegraph Engineers and of Electricians (1881–1882)
Journal of the Society of Telegraph-Engineers and Electricians (1883–1888)
Until, in 1889, it settled into
Journal of the Institution of Electrical Engineers (1889–1940)
The journal remained under that name for over 50 years.
From 1926 to 1940, a new journal was started
Institution of Electrical Engineers - Proceedings of the Wireless Section of the Institution (1926–1940)
In 1941, the journals were reorganized into distinct parts. From 1941 to 1948, those were:
Journal of the Institution of Electrical Engineers - Part I: General
Journal of the Institution of Electrical Engineers - Part II: Power Engineering
Journal of the Institution of Electrical Engineers - Part IIA: Automatic Regulators and Servo Mechanisms
Journal of the Institution of Electrical Engineers - Part III: Communication Engineering
Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering
Journal of the Institution of Electrical Engineers - Part IIIA: Radiocommunication
Journal of the Institution of Electrical Engineers - Part IIIA: Radiolocation
From 1949 until 1954, the publications were reorganized into
Journal of the Institution of Electrical Engineers
and
Proceedings of the IEE - Part I: General
Proceedings of the IEE - Part IA: Electric Railway Traction
Proceedings of the IEE - Part II: Power Engineering
Proceedings of the IEE - |
https://en.wikipedia.org/wiki/Second%20covariant%20derivative | In the mathematical branches of differential geometry and vector calculus, the second covariant derivative, or second-order covariant derivative, of a vector field is the derivative of its derivative with respect to two other tangent vector fields.
Definition
Formally, given a (pseudo-)Riemannian manifold (M, g) associated with a vector bundle E → M, let ∇ denote the Levi-Civita connection given by the metric g, and denote by Γ(E) the space of the smooth sections of the total space E. Denote by T*M the cotangent bundle of M. Then the second covariant derivative can be defined as the composition of the two ∇s as follows:

$$\Gamma(E) \stackrel{\nabla}{\longrightarrow} \Gamma(T^*M \otimes E) \stackrel{\nabla}{\longrightarrow} \Gamma(T^*M \otimes T^*M \otimes E).$$

For example, given vector fields u, v, w, a second covariant derivative can be written as

$$(\nabla^2_{u,v} w)^a = u^c v^b \nabla_c \nabla_b w^a$$

by using abstract index notation. It is also straightforward to verify that

$$(\nabla_u \nabla_v w)^a = u^c \nabla_c (v^b \nabla_b w^a) = u^c v^b \nabla_c \nabla_b w^a + (u^c \nabla_c v^b) \nabla_b w^a = (\nabla^2_{u,v} w)^a + (\nabla_{\nabla_u v} w)^a.$$

Thus

$$\nabla^2_{u,v} w = \nabla_u \nabla_v w - \nabla_{\nabla_u v} w.$$

When the torsion tensor is zero, so that $[u,v] = \nabla_u v - \nabla_v u$, we may use this fact to write the Riemann curvature tensor as

$$R(u,v)w = \nabla^2_{u,v} w - \nabla^2_{v,u} w.$$

Similarly, one may also obtain the second covariant derivative of a function f as

$$\nabla^2_{u,v} f = \nabla_u \nabla_v f - \nabla_{\nabla_u v} f = u(v(f)) - (\nabla_u v)(f).$$

Again, for the torsion-free Levi-Civita connection, and for any vector fields u and v, when we feed the function f into both sides of

$$\nabla_u v - \nabla_v u = [u, v]$$

we find

$$(\nabla_u v)(f) - (\nabla_v u)(f) = [u,v](f) = u(v(f)) - v(u(f)).$$

This can be rewritten as

$$u(v(f)) - (\nabla_u v)(f) = v(u(f)) - (\nabla_v u)(f),$$

so we have

$$\nabla^2_{u,v} f = \nabla^2_{v,u} f.$$
That is, the value of the second covariant derivative of a function is independent of the order of taking derivatives.
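For the flat Levi-Civita connection on Euclidean space (where the Christoffel symbols vanish), the second covariant derivative of a function reduces to the ordinary Hessian, and the symmetry above becomes the equality of mixed partial derivatives. A minimal sympy check, with an arbitrary example function:

import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) + sp.sin(x)
H = sp.hessian(f, (x, y))               # matrix of second partial derivatives
print(sp.simplify(H[0, 1] - H[1, 0]))   # 0: the order of differentiation is irrelevant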
Notes
Tensors in general relativity
Riemannian geometry |
https://en.wikipedia.org/wiki/Grail%20%28company%29 | Grail (styled GRAIL) is an American biotechnology company in Menlo Park, California. It is a subsidiary of Illumina that started as a startup seeking to develop an early cancer screening test for people who do not have symptoms.
Its liquid biopsy (also called a multi-cancer early detection test), launched in June 2021 as the Galleri test, detects fragments of DNA in a blood sample via next-generation sequencing, which identifies DNA methylation; distinct methylation patterns are associated with particular cancers, potentially allowing for the early detection of cancer and providing information on its origin. It is one of three multi-cancer screening tests under investigation, the other two being the CancerSEEK assay and the PanSeer assay.
History
Grail began as a San Francisco biotechnology and pharmaceutical startup company in 2015, the parent company being Illumina of San Diego, which produces most of the DNA sequencing machines that scientists use to study human biology and diagnose rare genetic diseases. Richard Klausner, then chief medical officer at Illumina and former director of the National Cancer Institute, advocated for the new business. According to the San Francisco Business Times, he correctly predicted how DNA sequencing technology would make it possible to detect evidence of a tumor from a blood sample. He also joined Grail's board of directors. According to Forbes in 2017, 20% of Grail's profits are kept by Illumina.
In September 2020, Illumina announced an agreement to purchase Grail outright for $7.1 billion.
On November 27, 2020, Grail announced a commercial partnership with the National Health Service (England) (NHS), to trial the Galleri test, reporting in 2026.
In March 2021, the Federal Trade Commission (FTC) sued to block the vertical merger. In September 2022, an administrative judge ruled against the FTC's position on antitrust grounds.
In October 2023, the European Commission ordered Grail to be divested |
https://en.wikipedia.org/wiki/Subatomic%20particle | In physics, a subatomic particle is a particle smaller than an atom. According to the Standard Model of particle physics, a subatomic particle can be either a composite particle, which is composed of other particles (for example, a baryon, like a proton or a neutron, composed of three quarks; or a meson, composed of two quarks), or an elementary particle, which is not composed of other particles (for example, quarks; or electrons, muons, and tau particles, which are called leptons). Particle physics and nuclear physics study these particles and how they interact. Most force-carrying particles, such as photons and gluons, are bosons: although they carry discrete quanta of energy, they have no rest mass and no discrete diameter (other than their pure energy wavelength), and they can overlap and combine. They differ in this from fermions, particles that have rest mass and cannot overlap or combine.
Experiments show that light can behave like a stream of particles (called photons) as well as exhibiting wave-like properties. This led to the concept of wave–particle duality to reflect that quantum-scale objects behave both like particles and like waves; they are sometimes called wavicles to reflect this.
Another concept, the uncertainty principle, states that some of their properties taken together, such as their simultaneous position and momentum, cannot be measured exactly. The wave–particle duality has been shown to apply not only to photons but to more massive particles as well.
Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions. This blends particle physics with field theory.
Even among particle physicists, the exact definition of a particle has diverse descriptions. These professional attempts at the definition of a particle include:
A particle is a collapsed wave function
A particle is a quantum excitation of a field
A particle is an irreducible representation of the Poinca |
https://en.wikipedia.org/wiki/A2%20holin%20family | The Putative Archaeal 2 TMS Holin (A2-Hol) Family (TC# 9.B.109) consists of a few putative holins from Nitrososphaerota ranging in size from about 130 to 165 amino acyl residues (aas) and exhibiting 2 transmembrane segments (TMSs). A representative list of proteins belonging to the A2-Hol family can be found in the Transporter Classification Database. The archaeon Candidatus Nitrosoarchaeum limnia encodes adjacent genes designated Toxin Secretion/Lysis Holin. The "toxin" gene encodes a soluble protein of 325 aas stated as belonging to the "Glycosyltransferase GBT-type Superfamily". This protein brings up other glycosyltransferases in an NCBI BLAST search. The adjacent gene encodes a small protein of 132 aas and 2 TMSs (TC# 9.B.109.1.1) that could be a holin, based on its size and topology. This protein has the UniProt accession number S2E3C4. Paralogues are found in this same organism (Candidatus Nitrosoarchaeum koreensis) and other closely related species.
See also
Holin
Lysin
Transporter Classification Database
Further reading
Saier, Milton H.; Reddy, Bhaskara L. (2015). "Holins in Bacteria, Eukaryotes, and Archaea: Multifunctional Xenologues with Potential Biotechnological and Biomedical Applications". Journal of Bacteriology 197 (1): 7–17.
Wang, I. N.; Smith, D. L.; Young, R. (2000). "Holins: the protein clocks of bacteriophage infections". Annual Review of Microbiology 54: 799–825.
https://en.wikipedia.org/wiki/Kiyosi%20It%C3%B4 | Kiyosi Itô was a Japanese mathematician who made fundamental contributions to probability theory, in particular the theory of stochastic processes. He invented the concepts of the stochastic integral and the stochastic differential equation, and is known as the founder of so-called Itô calculus.
Overview
Itô pioneered the theory of stochastic integration and stochastic differential equations, now known as Itô calculus. Its basic concept is the Itô integral, and among the most important results is a change of variable formula known as Itô's lemma. Itô calculus is a method used in the mathematical study of random events and is applied in various fields, and is perhaps best known for its use in mathematical finance. Itô also made contributions to the study of diffusion processes on manifolds, known as stochastic differential geometry.
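A minimal numerical sketch (not from the article; parameter values are arbitrary) of the drift correction that Itô's lemma predicts: for geometric Brownian motion dS = μS dt + σS dW, the lemma gives E[log S_T] = (μ − σ²/2)T, which an Euler–Maruyama simulation reproduces:

import numpy as np

rng = np.random.default_rng(42)
mu, sigma, T, n_steps, n_paths = 0.05, 0.2, 1.0, 1000, 20000
dt = T / n_steps
S = np.full(n_paths, 1.0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)  # Brownian increments
    S += mu * S * dt + sigma * S * dW                 # Euler-Maruyama step
print(np.log(S).mean())            # sample mean of log S_T
print((mu - 0.5 * sigma**2) * T)   # Ito's lemma prediction: 0.03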
Although the standard Hepburn romanization of his name is Kiyoshi Itō, he used the spelling Kiyosi Itô (Kunrei-shiki romanization). The alternative spellings Itoh and Ito are also sometimes seen in the West.
Biography
Itô was born in Hokusei-cho in Mie Prefecture on the main island of Honshū. He graduated with a B.S. (1938) and a Ph.D. (1945) in Mathematics from the University of Tokyo. Between 1938 and 1945, Itô worked for the Japanese National Statistical Bureau, where he published two of his seminal works on probability and stochastic processes, including a series of articles in which he defined the stochastic integral and laid the foundations of Itô calculus. After that he continued to develop his ideas on stochastic analysis with many important papers on the topic.
In 1952, he became a professor at the University of Kyoto to which he remained affiliated until his retirement in 1979. Starting in the 1950s, Itô spent long periods of time outside Japan, at Cornell, Stanford, the Institute for Advanced Study in Princeton, New Jersey, and Aarhus University in Denmark.
Itô was awarded the inaugural Gauss Prize in 2006 by the International Mathemat |
https://en.wikipedia.org/wiki/Photodynamic%20therapy | Photodynamic therapy (PDT) is a form of phototherapy involving light and a photosensitizing chemical substance, used in conjunction with molecular oxygen to elicit cell death (phototoxicity).
PDT is popularly used in treating acne. It is used clinically to treat a wide range of medical conditions, including wet age-related macular degeneration, psoriasis, and atherosclerosis, and has shown some efficacy in anti-viral treatments, including herpes. It also treats malignant cancers, including head and neck, lung, bladder, and some skin cancers.
The technology has also been tested for treatment of prostate cancer, both in a dog model and in human prostate cancer patients.
It is recognised as a treatment strategy that is both minimally invasive and minimally toxic. Other light-based and laser therapies such as laser wound healing and rejuvenation, or intense pulsed light hair removal do not require a photosensitizer. Photosensitisers have been employed to sterilise blood plasma and water in order to remove blood-borne viruses and microbes and have been considered for agricultural uses, including herbicides and insecticides.
Photodynamic therapy's advantages include a lessened need for delicate surgery and lengthy recuperation, and minimal formation of scar tissue and disfigurement. A side effect is the associated photosensitisation of skin tissue.
Basics
PDT applications involve three components: a photosensitizer, a light source and tissue oxygen. The wavelength of the light source needs to be appropriate for exciting the photosensitizer to produce radicals and/or reactive oxygen species. These are free radicals (Type I), generated through electron abstraction or transfer from a substrate molecule, and a highly reactive state of oxygen known as singlet oxygen (Type II).
PDT is a multi-stage process. First a photosensitiser, ideally with negligible toxicity other than its phototoxicity, is administered in the absence of light, either systemically or topically. When a sufficient amoun |
https://en.wikipedia.org/wiki/OH/IR%20star | An OH/IR star is an asymptotic giant branch (AGB) or a red supergiant or hypergiant (RSG or RHG) star that shows strong OH maser emission and is unusually bright at near-infrared wavelengths.
In the very late stages of AGB evolution, a star develops a super-wind with extreme mass loss. The gas in the stellar wind condenses as it cools away from the star, forming molecules such as water (H2O) and silicon monoxide (SiO). This can form grains of dust, mostly silicates, which obscure the star at shorter wavelengths, leading to a strong infrared source. Hydroxyl (OH) radicals can be produced by photodissociation or collisional dissociation.
H2O and OH can both be pumped to produce maser emission. OH masers in particular can give rise to a powerful maser action at 1612 MHz, and this is regarded as a defining feature of the OH/IR stars. Many other AGB stars, such as Mira variables, show weaker OH masers at other frequencies, such as 1667 MHz or 22 MHz.
Examples
OH/IR stars
R Aquilae
SY Aquilae
WX Piscium
GX Monocerotis
IW Hydrae
QX Puppis
V437 Scuti
V669 Cassiopeiae
V1300 Aquilae
V1365 Aquilae
V1366 Aquilae
NSV 25875
W43A
OH/IR supergiants
S Persei
VX Sagittarii
VY Canis Majoris
AH Scorpii
MY Cephei
PZ Cassiopeiae
IRC −30308
NML Cygni
IRC +10420
Notes |
https://en.wikipedia.org/wiki/Zen%203 | Zen 3 is the codename for a CPU microarchitecture by AMD, released on November 5, 2020. It is the successor to Zen 2 and uses TSMC's 7 nm process for the chiplets and GlobalFoundries's 14 nm process for the I/O die on the server chips and 12 nm for desktop chips. Zen 3 powers Ryzen 5000 mainstream desktop processors (codenamed "Vermeer") and Epyc server processors (codenamed "Milan"). Zen 3 is supported on motherboards with 500 series chipsets; support was later added on select 400 series (B450 and X470) motherboards with certain BIOSes. Zen 3 is the last microarchitecture before AMD switched to DDR5 memory and new sockets: AM5 for the desktop "Ryzen" chips, and SP5 and SP6 for the EPYC server platform. According to AMD, Zen 3 has a 19% higher instructions per cycle (IPC) on average than Zen 2.
On April 1, 2022, AMD released the new Ryzen 6000 series for laptops, using the improved Zen 3+ architecture. On April 20, 2022, AMD also released the Ryzen 7 5800X3D desktop processor, which increases single-threaded performance in gaming by another 15% by using, for the first time in a PC product, 3D vertically stacked L3 cache.
Features
Zen 3 is a significant incremental improvement over its predecessors, with an IPC increase of 19%, and being capable of reaching higher clock speeds.
Like Zen 2, Zen 3 is composed of up to 2 core complex dies (CCDs) along with a separate IO die containing the I/O components. A Zen 3 CCD is composed of a single core complex (CCX) containing 8 CPU cores and 32 MB of shared L3 cache; this is in contrast to Zen 2, where each CCD is composed of 2 CCXs, each containing 4 cores and 16 MB of L3 cache. The new configuration allows all 8 cores of the CCX to communicate directly with each other and with the L3 cache instead of having to use the IO die through the Infinity Fabric.
Zen 3 (along with AMD's RDNA2 GPUs) was also the first implementation of Resizable BAR, an optional feature introduced in PCIe 2.0 that was branded
https://en.wikipedia.org/wiki/Yeast%20Promoter%20Atlas | The Yeast Promoter Atlas (YPA) is a repository of promoter features in Saccharomyces cerevisiae. |
https://en.wikipedia.org/wiki/Bomberman%20%281983%20video%20game%29 | Bomberman is a maze video game developed and published by Hudson Soft. The original home computer game was released in July 1983 for the NEC PC-8801, NEC PC-6001 mkII, Fujitsu FM-7, Sharp MZ-700, Sharp MZ-2000, Sharp X1 and MSX in Japan, and a graphically modified version for the MSX and ZX Spectrum in Europe as Eric and the Floaters. A sequel, 3-D Bomberman, was produced. In 1985, Bomberman was released for the Nintendo Entertainment System. It spawned the Bomberman series with many installments building on its basic gameplay.
Gameplay
In the NES/Famicom release, the eponymous character, Bomberman, is a robot that must find his way through a maze while avoiding enemies. Doors leading to further maze rooms are found under rocks, which Bomberman must destroy with bombs. There are items that can help improve Bomberman's bombs, such as the Fire ability, which improves the blast range of his bombs. Bomberman will turn human when he escapes and reaches the surface. Each game has 50 levels in total. The original home computer games are more basic and have some different rules.
Notably, completing the NES and Famicom version reveals that the game is a prequel to Hudson Soft's NES port of Broderbund Software's 1983 game Lode Runner. Upon clearing the final screen, Bomberman is shown turning into Lode Runner's unnamed protagonist. In the Japanese version of the game, the player is explicitly told that Bomberman will "See [them] in Lode Runner", while in the international version, they are instead asked if they can recognise the protagonist from another Hudson game.
Development
Bomberman was written in 1980 to serve as a tech demo for Hudson Soft's BASIC compiler. This very basic version of the game was given a small-scale release for Japanese PCs in 1983 and for European PCs the following year.
The Famicom version was developed (ported) by Shinichi Nakamoto, who reputedly completed the task alone over a 72-hour period.
According to Zero magazine, Bomberman adopted gameplay elem |
https://en.wikipedia.org/wiki/IBM%20308X | The IBM 308X is a line of mainframe computers, of which the first model, the Model 3081 Processor Complex, was introduced November 12, 1980. It consisted of a 3081 Processor Unit with supporting units.
Later models in the series were the 3083 and the 3084. The 3083 was announced March 31 and the 3084 on September 3, both in 1982.
The IBM 308X line introduced the System/370 Extended Architecture (S/370-XA) required by the new MVS/SP V2 and the Start Interpretive Execution (SIE) instruction used by the new Virtual Machine/eXtended Architecture Migration Aid (VM/XA MA).
All three 308X systems, which IBM had marketed as "System/370-Compatibles," were withdrawn August 4, 1987.
IBM 3081
The initial 3081 offered, the 3081D, was a 5 MIPS machine. The next offering, the 3081K, was a 7 MIPS machine. Last came the 3081G.
The 3081D was announced November 12, 1980; the 3081K came nearly a year later; the 3081G was introduced September 3, 1982, as part of the initial 3084 announcement.
"The IBM 3081 Processor Complex offers flexible growth steps in the 308X family of processors, between the 3083 Model Groups F, B and J and the 3084."
The 3081 was "two processors in a single box ... it was not possible to partition it and run it as two independent machines."
The dyadic concept offers "under the cover" dual processors.
3081 as successor to 3033
Some key technological features of the 3081, compared to the previous most powerful processor, the 3033, were the following:
About 800,000 circuits implemented in large scale integration, using up to 704 logic circuits per chip, which provided the required performance, reliability, and serviceability that were design goals
"Elimination of one complete level of packaging, the card level"
Water cooling, which provides heat removal from chips beyond the ability of conventional air cooling
A machine cycle time of 26 nanoseconds (38 MHz equivalent CPU)
Reduced power consumption, 23 kilowatts for a 3081-D16 versus 68 kilowatts for a 3033-U16
|
https://en.wikipedia.org/wiki/Economic%20materialism | Economic materialism can be described as either a personal attitude that attaches importance to acquiring and consuming material goods or as a logistical analysis of how physical resources are shaped into consumable products.
The use of the term "materialistic" to describe a person's personality or a society tends to have a negative or critical connotation. Also called acquisitiveness, it is often associated with a value system that regards social status as being determined by affluence (see conspicuous consumption), as well as the belief that possessions can provide happiness. Environmentalism can be considered a competing orientation to materialism.
In the logistical sense, materialism analyzes how and why resources are extracted and shaped into consumable material objects. "Success materialism" can be considered a pragmatic form of enlightened self-interest, based on a prudent understanding of the character of a market-oriented economy and society.
Definition
Consumer research typically looks at materialism in two ways: one as a collection of personality traits; and the other as an enduring belief or value.
Materialism as a personality trait
Russell W. Belk conceptualizes materialism to include three original personality traits:
Nongenerosity – an unwillingness to give or share possessions with others.
Envy – desire for other people's possessions.
Possessiveness – concern about loss of possessions and a desire for the greater control of ownership.
Materialism as a value
Acquisition centrality is when acquiring material possessions functions as a central life goal, with the belief that possessions are the key to happiness and that success can be judged by a person's material wealth and by the quality and price of the material goods he or she can buy.
Growing materialism in the western world
In the western world, there is a growing trend of increasing materialism in reaction to discontent. Research conducted in the United States shows that recent generations |
https://en.wikipedia.org/wiki/Radiophysics | Radiophysics (also written "radio physics") is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation: its emission, propagation and interaction with matter.
The term is used in the following major meanings:
study of radio waves (the original area of research)
study of radiation used in radiology
study of other ranges of the spectrum of electromagnetic radiation in some specific applications
Among the main applications of radiophysics are radio communications, radiolocation, radio astronomy and radiology.
Branches
Classical radiophysics deals with radio wave communications and detection
Quantum radiophysics (physics of lasers and masers; Nikolai Basov was the founder of quantum radiophysics in the Soviet Union)
Statistical radiophysics |
https://en.wikipedia.org/wiki/Response%20modeling%20methodology | Response modeling methodology (RMM) is a general platform for statistical modeling of a linear/nonlinear relationship between a response variable (dependent variable) and a linear predictor (a linear combination of predictors/effects/factors/independent variables), often denoted the linear predictor function. It is generally assumed that the modeled relationship is monotone convex (delivering a monotone convex function) or monotone concave (delivering a monotone concave function). However, many non-monotone functions, like the quadratic function, are special cases of the general model.
RMM was initially developed as a series of extensions to the original inverse Box–Cox transformation,

$$y = (1 + \lambda z)^{1/\lambda},$$

where y is a percentile of the modeled response, Y (the modeled random variable), z is the respective percentile of a normal variate and λ is the Box–Cox parameter. As λ goes to zero, the inverse Box–Cox transformation becomes

$$y = e^{z},$$

an exponential model. Therefore, the original inverse Box–Cox transformation contains a trio of models: linear (λ = 1), power (λ ≠ 1, λ ≠ 0) and exponential (λ = 0). This implies that on estimating λ, using sample data, the final model is not determined in advance (prior to estimation) but rather as a result of estimating. In other words, data alone determine the final model.
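As a rough illustration of this "data determine the model" idea (a sketch using SciPy's Box–Cox estimator rather than RMM itself; the sample below is synthetic), maximum-likelihood estimation of λ from lognormal data drives the estimate toward λ = 0, selecting the exponential member of the trio:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.lognormal(mean=1.0, sigma=0.4, size=500)  # synthetic response sample

# With lmbda=None, stats.boxcox estimates lambda by maximum likelihood.
z, lam = stats.boxcox(y)
print(lam)   # close to 0 here, i.e. the data pick the exponential model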
Extensions to the inverse Box–Cox transformation were developed by Shore (2001a) and were denoted Inverse Normalizing Transformations (INTs). They had been applied to model monotone convex relationships in various engineering areas, mostly to model physical properties of chemical compounds (Shore et al., 2001a, and references therein). Once it had been realized that INT models may be perceived as special cases of a much broader general approach for modeling non-linear monotone convex relationships, the new Response Modeling Methodology had been initiated and developed (Shore, 2005a, 2011 and references therein).
The RMM model expresses the relationship between a response, Y (the modeled ra |
https://en.wikipedia.org/wiki/Popcorn%20seasoning | Popcorn seasoning is any ingredient used to add flavor to popcorn. In the United States, popcorn seasoning is mass-produced by several companies for commercial and consumer use. Popcorn seasonings may be used to enhance the flavor of popcorn, and some are used to add a buttery flavor to popcorn. Significant amounts are often used to ensure the adequate flavoring of popcorn, due to popcorn's low density. It is also sometimes utilized to add coloring to popcorn. Some popcorn seasoning may contain monosodium glutamate. Some specialty products exist in unique flavors, such as chocolate and bubble gum. Some popcorn seasoning products may be referred to as popcorn salt.
Some oils used to cook popcorn contain popcorn seasonings mixed within the oil, and may be referred to as popcorn seasoning oils or liquid popcorn seasoning.
Since the 1960s, American movie theaters have commonly used the seasoning Flavacol (made up of salt, butter flavoring, and artificial colors) to enhance their popcorn.
Formulation
Dry popcorn seasoning may be finely granulated to enable even dispersion when placed upon popcorn. Common homemade popcorn seasoning ingredients include salt and melted butter.
Popcorn seasoning is sometimes used within machines that are utilized to produce large quantities of popcorn for consumer purchase.
In the 1950s in the United States, many commercial oil-based popcorn seasonings were produced with a coconut oil base, and also utilized artificial coloring.
See also
Butter salt
Condiment
List of dried foods
List of spice mixes
Molly McButter |
https://en.wikipedia.org/wiki/Amniotic%20sac | The amniotic sac, also called the bag of waters or the membranes, is the sac in which the embryo and later fetus develops in amniotes. It is a thin but tough transparent pair of membranes that hold a developing embryo (and later fetus) until shortly before birth. The inner of these membranes, the amnion, encloses the amniotic cavity, containing the amniotic fluid and the embryo. The outer membrane, the chorion, contains the amnion and is part of the placenta. On the outer side, the amniotic sac is connected to the yolk sac, the allantois, and via the umbilical cord, the placenta.
The yolk sac, amnion, chorion, and allantois are the four extraembryonic membranes that lie outside of the embryo and are involved in providing nutrients and protection to the developing embryo. They form from the inner cell mass; the first to form is the yolk sac followed by the amnion which grows over the developing embryo. The amnion remains an important extraembryonic membrane throughout prenatal development. The third membrane is the allantois, and the fourth is the chorion which surrounds the embryo after about a month and eventually fuses with the amnion.
Amniocentesis is a medical procedure where fluid from the sac is sampled during fetal development, between 15 and 20 weeks of pregnancy, to be used in prenatal diagnosis of chromosomal abnormalities and fetal infections.
Structure
The amniotic cavity is the closed sac between the embryo and the amnion, containing the amniotic fluid. The amniotic cavity is formed by the fusion of the parts of the amniotic fold, which first makes its appearance at the cephalic extremity and subsequently at the caudal end and sides of the embryo. As the amniotic fold rises and fuses over the dorsal aspect of the embryo, the amniotic cavity is formed.
Development
At the beginning of the second week, a cavity appears within the inner cell mass, and when it enlarges, it becomes the amniotic cavity. The floor of the amniotic cavity is formed by the e |
https://en.wikipedia.org/wiki/Standard%20Model | The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the |
https://en.wikipedia.org/wiki/Series%20of%20tubes | "A series of tubes" is a phrase used originally as an analogy by then-United States Senator Ted Stevens (R-Alaska) to describe the Internet in the context of opposing network neutrality. On June 28, 2006, he used this metaphor to criticize a proposed amendment to a committee bill. The amendment would have prohibited Internet service providers such as AT&T, Comcast, Time Warner Cable and Verizon Communications from charging fees to give some companies' data a higher priority in relation to other traffic. The metaphor was widely ridiculed, because Stevens was perceived to have displayed an extremely limited understanding of the Internet, despite his leading the Senate committee responsible for regulating it.
Partial text of Stevens's comments
Media commentary
On June 28, 2006, Public Knowledge government affairs manager Alex Curtis wrote a brief blog entry introducing the senator's speech and posted an MP3 recording. The next day, the Wired magazine blog 27B Stroke 6 featured a lengthier post by Ryan Singel, which included Singel's transcriptions of some parts of Stevens's speech considered the most humorous. Within days, thousands of other blogs and message boards posted the story.
Most writers and commentators derisively cited several of Stevens's misunderstandings of Internet technology, arguing that the speech showed that he had formed a strong opinion on a topic which he understood poorly (e.g., referring to an e-mail message as "an Internet," and blaming bandwidth issues for an e-mail problem much more likely to be caused by mail server or routing issues). The story sparked mainstream media attention, including a mention in The New York Times. The technology podcast This Week in Tech also discussed the incident.
According to The Wall Street Journal, as summarized by MediaPost commentator Ross Fadner, "'The Internet is a Series of Tubes!' spawned a new slogan that became a rallying cry for Net neutrality advocates. ... Stevens's overly simplistic description |
https://en.wikipedia.org/wiki/Echochrome | Echochrome is a 2008 puzzle game created by Sony's Japan Studio and Game Yarouze for PlayStation 3, via the PlayStation Store, and for PlayStation Portable (PSP). The gameplay involves a mannequin figure traversing a rotatable world where physics and reality depend on perspective. The world is occupied by Oscar Reutersvärd's impossible constructions. This concept is inspired by M. C. Escher's artwork, such as "Relativity". The game is based on the Object Locative Environment Coordinate System developed by Jun Fujiki—an engine that determines what is occurring based on the camera's perspective.
Echochrome received a spin-off in 2009 titled Echoshift and a sequel, Echochrome II for the PlayStation 3 utilizing the PlayStation Move in December 2010.
Gameplay
Echochrome requires the player to control a moving character—which resembles an articulated wooden artist's mannequin—to visit, in any order, particular locations on the surfaces of collections of three-dimensional shapes. The objectives are marked by shadows ("echoes") of the moving character. When the last marked position has been visited, one last echo appears, which the player must reach to finish the level: scoring is simply a matter of timing completion of each level (or a course containing several levels).
However, the character cannot be directly controlled by the player: it moves autonomously, following a path along the surface of each shape in a manner that keeps the path's boundary on the character's left (that is, in order of preference, turning left, proceeding straight ahead, turning right, or turning back on itself).
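The movement rule just described is essentially a left-hand preference order; a minimal sketch in Python (hypothetical names, not the game's actual engine code):

def next_heading(heading, is_open):
    # Directions indexed 0=N, 1=E, 2=S, 3=W; prefer turning left,
    # then straight ahead, then right, then doubling back.
    for turn in (-1, 0, 1, 2):
        candidate = (heading + turn) % 4
        if is_open(candidate):
            return candidate
    return None  # boxed in on all four sides

# Usage: facing East with only North blocked, the figure keeps going East.
print(next_heading(1, lambda d: d != 0))  # -> 1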
The unique aspect of the game is that the path can be altered merely by rotating the shapes and viewing them from a different perspective: for instance, if a gap or obstacle is obscured, the character will behave as if the path continues behind the object which currently obscures the gap or obstacle from view. Similarly, if discontinuous shapes or parts of the same shape appear, from the chosen
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2013 | In molecular biology, glycoside hydrolase family 13 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Enzymes containing this domain belong to family 13 (CAZY GH_13) of the glycosyl hydrolases. The maltogenic alpha-amylase is an enzyme which catalyses hydrolysis of (1-4)-alpha-D-glucosidic linkages in polysaccharides so as to remove successive alpha-maltose residues from the non-reducing ends of the chains in the conversion of starch to maltose. Other enzymes in this family include neopullulanase, which hydrolyses pullulan to panose, and cyclomaltodextrinase, which hydrolyses cyclodextrins. |
https://en.wikipedia.org/wiki/Retrovisceral%20space | The retrovisceral space is divided into the retropharyngeal space and the danger space by the alar fascia. It is of particular clinical importance because it is a main route by which oropharyngeal infections can spread into the mediastinum.
Some sources say the retrovisceral space is the same as the retropharyngeal space.
Other sources say that the retrovisceral space is "continuous superiorly" with the retropharyngeal space. |
https://en.wikipedia.org/wiki/Brilliant%20Black%20BN | Brilliant Black BN, Brilliant Black PN, Brilliant Black A, Black PN, Food Black 1, Naphthol Black, C.I. Food Black 1, or C.I. 28440, is a synthetic black diazo dye. It is soluble in water. It usually comes as tetrasodium salt. It has the appearance of solid, fine powder or granules. Calcium and potassium salts are allowed as well.
When used as a food dye, its E number is E151. It is used in food decorations and coatings, desserts, sweets, ice cream, mustard, red fruit jams, soft drinks, flavored milk drinks, fish paste, lumpfish caviar and other foods.
E151 has been banned in the United States and Japan. It is approved in the European Union. It was banned in Norway until 2001, when the ban was lifted due to trade relationships with other countries.
It is used for staining animal by-products in category 2. |
https://en.wikipedia.org/wiki/Graft-chimaera | In horticulture, a graft-chimaera may arise in grafting at the point of contact between rootstock and scion and will have properties intermediate between those of its "parents". A graft-chimaera is not a true hybrid but a mixture of cells, each with the genotype of one of its "parents": it is a chimaera. Hence, the once widely used term "graft-hybrid" is not descriptive; it is now frowned upon.
Propagation is by cloning only. In practice graft-chimaeras are not noted for their stability and may easily revert to one of the "parents".
Nomenclature
Article 21 of the ICNCP stipulates that a graft-chimaera can be indicated either by
a formula: the names of both "parents", in alphabetical order, joined by the plus sign "+":
Crataegus + Mespilus
a name:
if the "parents" belong to different genera a name may be formed by joining part of one generic name to the whole of the other generic name. This name must not be identical to a generic name published under the International Code of Nomenclature for algae, fungi, and plants (ICN). For example + Crataegomespilus is the name for the graft-chimaera which may also be indicated by the formula Crataegus + Mespilus. This name is clearly different from ×Crataemespilus, the name under the ICN for the true hybrid between Crataegus and Mespilus, which can also be designated by the formula Crataegus × Mespilus.
if both "parents" belong to the same genus the graft-chimaera may be given a cultivar name. For example Syringa 'Correlata' is a graft-chimaera involving Syringa vulgaris (common lilac) and Syringa × chinensis (Rouen lilac, which is itself a hybrid between S. vulgaris and S. persica). No plus sign is used, because both "parents" belong to the genus Syringa.
A graft-chimaera cannot have a species name, because it is simultaneously two species. Although +Laburnocytisus 'Adamii', for example, is sometimes seen written as if it were a species (+Laburnocytisus adamii), this is incorrect.
In Darwin's works
Charles Darwin " |
https://en.wikipedia.org/wiki/Pathfinder%20%28video%20game%29 | Pathfinder is a maze shooter written by Randy Jongens for the Atari 8-bit family, published in 1982 by Gebelli Software.
Gameplay
The object of Pathfinder is to clear an underground maze of canisters of radioactive waste. These canisters are scattered through the maze and are collected on contact by the Pathfinder, which the player controls. Complicating this, radioactive creatures attempt to stop the player: Nukes, which absorb power from nuclear waste; Phantoms, which move through walls; and Minelayers, which move throughout the level placing mines. To defend himself, the player is armed with a plasma gun that can be fired in any of eight directions. If the player shoots a mine, he will start a fire, which can spread through the maze.
A level is completed by removing all radioactive waste. The game has nine levels, each characterized by color changes and sometimes changes in the structure of the maze itself.
Reception
The Addison-Wesley Book of Atari Software 1984 concluded: "It's very hard to classify this game as either good or bad. While on the one hand it appeared to be a superficial and repetitious game, many of the teens who played it enjoyed seeing half of the maze on fire and then trying to put it out." |
https://en.wikipedia.org/wiki/BHIM | BHIM (Bharat Interface for Money) is an Indian mobile payment app developed by the National Payments Corporation of India (NPCI), based on the Unified Payments Interface (UPI). Launched on 30 December 2016, it is intended to facilitate e-payments directly through banks and encourage cashless transactions. It was named after Dr Bhimrao Ambedkar.
The application supports all Indian banks which use UPI, which is built over the Immediate Payment Service (IMPS) infrastructure and allows the user to instantly transfer money between the bank accounts of any two parties across its 170 member banks. It can be used on all mobile devices.
Operation
BHIM allows users to send or receive money to or from UPI payment addresses, or to non-UPI based accounts (by scanning a QR code with account number and IFS code or MMID code).
Unlike mobile wallets (Paytm, MobiKwik, M-Pesa, Airtel Money, etc.) which hold money, the BHIM app is only a mechanism which transfers money between different bank accounts. Transactions on BHIM are nearly instantaneous and can be done at any time, including weekends and bank holidays.
BHIM now also allows users to send or receive digital payments through Aadhaar authentication.
On-device wallet
On September 20, 2022, RBI governor Shaktikanta Das officially launched an on-device wallet called UPI Lite at the Global Fintech Fest 2022. As per NPCI, some of the early use cases involve FASTag recharges, insurance payments, and offline EMI collections. Canara Bank, HDFC Bank, Indian Bank, Kotak Mahindra Bank, Punjab National Bank, State Bank of India, Union Bank of India and Utkarsh Small Finance Bank enabled the UPI Lite feature on BHIM.
Transaction fees and limits
Currently, there is no charge for transactions from ₹1 to ₹100,000. Some banks might, however, levy a fee for UPI or IMPS transfers.
In 2017, Indian banks proposed transaction charges on UPI transactions, but there is no information on whether transactions through BHIM will also be charged.
Language support
The app supports 20 langua
https://en.wikipedia.org/wiki/Syllabogram | Syllabograms are signs used to write the syllables (or morae) of words. This term is most often used in the context of a writing system otherwise organized on different principles—an alphabet where most symbols represent phonemes, or a logographic script where most symbols represent morphemes—but a system based mostly on syllabograms is a syllabary.
Syllabograms in the Maya script most frequently take the form of V (vowel) or CV (consonant–vowel) syllables, of which approximately 83 are known. CVC signs are present as well. Two well-known modern examples of syllabaries consisting mostly of CV syllabograms are the two Japanese kana scripts, hiragana and katakana, which are used to represent the same sounds on different occasions. Syllabograms tend not to be used for languages with more complicated syllables: for example, English phonotactics allows syllables as complex as CCCVCCCC (as in strengths), generating many thousands of possible syllables and making the use of syllabograms cumbersome.
Types of writing system that use syllabograms
Syllabary
Semi-syllabary |
https://en.wikipedia.org/wiki/Welch%27s%20method | Welch's method, named after Peter D. Welch, is an approach for spectral density estimation.
It is used in physics, engineering, and applied mathematics for estimating the power of a signal at different frequencies.
The method is based on the concept of using periodogram spectrum estimates, which are the result of converting a signal from the time domain to the frequency domain. Welch's method is an improvement on the standard periodogram spectrum estimating method and on Bartlett's method, in that it reduces noise in the estimated power spectra in exchange for reducing the frequency resolution. Due to the noise caused by imperfect and finite data, the noise reduction from Welch's method is often desired.
Definition and procedure
The Welch method is based on Bartlett's method and differs in two ways:
The signal is split up into overlapping segments: the original data segment is split up into L data segments of length M, overlapping by D points.
If D = M / 2, the overlap is said to be 50%
If D = 0, the overlap is said to be 0%. This is the same situation as in the Bartlett's method.
The overlapping segments are then windowed: After the data is split up into overlapping segments, the individual L data segments have a window applied to them (in the time domain).
Most window functions afford more influence to the data at the center of the set than to data at the edges, which represents a loss of information. To mitigate that loss, the individual data sets are commonly overlapped in time (as in the above step).
The windowing of the segments is what makes the Welch method a "modified" periodogram.
After doing the above, the periodogram is calculated by computing the discrete Fourier transform, and then computing the squared magnitude of the result. The individual periodograms are then averaged, which reduces the variance of the individual power measurements. The end result is an array of power measurements vs. frequency "bin".
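A minimal NumPy sketch of the procedure just described (function name and parameter values are illustrative, not from the article; in practice one would usually call scipy.signal.welch instead):

import numpy as np

def welch_psd(x, fs=1.0, M=256, D=128):
    # Split x into segments of length M overlapping by D points,
    # window each one, and average the modified periodograms.
    w = np.hanning(M)                    # window applied in the time domain
    step = M - D
    n_seg = (len(x) - M) // step + 1
    U = fs * np.sum(w ** 2)              # window power normalization
    psd = np.zeros(M // 2 + 1)
    for k in range(n_seg):
        seg = x[k * step : k * step + M] * w
        psd += np.abs(np.fft.rfft(seg)) ** 2 / U  # modified periodogram
        # (one-sided doubling of non-DC bins omitted for brevity)
    return np.fft.rfftfreq(M, 1 / fs), psd / n_seg  # average over segments

# Usage: a 50 Hz tone in white noise, sampled at 1 kHz.
rng = np.random.default_rng(0)
t = np.arange(4096) / 1000.0
x = np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)
f, Pxx = welch_psd(x, fs=1000.0, M=256, D=128)
print(f[np.argmax(Pxx)])   # peak near 50 Hz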
Related approaches
Other overlapp |
https://en.wikipedia.org/wiki/Geo%20%28microformat%29 | Geo is a microformat used for marking up geographical coordinates (latitude and longitude) in HTML (or XHTML). Coordinates are expected in angular units of degrees and geodetic datum WGS84. Although termed a "draft" specification, the format is a de facto standard, stable and in widespread use; not least as a sub-set of the published hCalendar and hCard microformat specifications, neither of which is still a draft.
Use of Geo allows parsing tools (for example other websites, or Firefox's Operator extension) to extract the locations, and display them using some other website or web mapping tool, or to load them into a GPS device, index or aggregate them, or convert them into an alternative format.
Usage
If latitude is present, so must be longitude, and vice versa.
The same number of decimal places should be used in each value, including trailing zeroes.
The Geo microformat is applied using three HTML classes. For example, the marked-up text:
<div>Belvide: 52.686; -2.193</div>
becomes:
<div class="geo">Belvide: <span class="latitude">52.686</span>; <span class="longitude">-2.193</span></div>
by adding the class-attribute values "geo", "latitude" and "longitude".
This will display
Belvide: 52.686; -2.193
and a geo microformat for that location, Belvide Reservoir, which will be detected, on this page, by microformat parsing tools.
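A hedged sketch of such extraction (illustrative only, not part of the specification) using Python with the third-party beautifulsoup4 package:

from bs4 import BeautifulSoup

html = ('<div class="geo">Belvide: '
        '<span class="latitude">52.686</span>; '
        '<span class="longitude">-2.193</span></div>')
soup = BeautifulSoup(html, "html.parser")
for geo in soup.find_all(class_="geo"):
    # The class attributes, not the tag names, carry the semantics.
    lat = float(geo.find(class_="latitude").get_text())
    lon = float(geo.find(class_="longitude").get_text())
    print(lat, lon)   # 52.686 -2.193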
hCard
Each Geo microformat may be wrapped in an hCard microformat, allowing for the inclusion of personal, organisational or venue names, postal addresses, telephone contacts, URLs, pictures, etc.
Extensions
There are three proposals, none mutually exclusive, to extend the geo microformat:
geo-extension - for representing coordinates on other planets, moons etc., and with non-WGS84 schema
geo-elevation - for representing altitude
geo-waypoint - for representing routes and boundaries, using waypoints
Users
Organisations and websites using Geo include:
Flickr - on over three million photo pages
Geograph - on over one mill |
https://en.wikipedia.org/wiki/Restricted%20isometry%20property | In linear algebra, the restricted isometry property (RIP) characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. The concept was introduced by Emmanuel Candès and Terence Tao and is used to prove many theorems in the field of compressed sensing. There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, and is hard to approximate as well), but many random matrices have been shown to satisfy it. In particular, it has been shown that with exponentially high probability, random Gaussian, Bernoulli, and partial Fourier matrices satisfy the RIP with a number of measurements nearly linear in the sparsity level. The current smallest upper bounds for any large rectangular matrices are those for Gaussian matrices. Web forms to evaluate bounds for the Gaussian ensemble are available at the Edinburgh Compressed Sensing RIC page.
Definition
Let A be an m × p matrix and let 1 ≤ s ≤ p be an integer. Suppose that there exists a constant $\delta_s \in (0,1)$ such that, for every m × s submatrix $A_s$ of A and for every s-dimensional vector y,

$$(1-\delta_s)\|y\|_2^2 \,\le\, \|A_s y\|_2^2 \,\le\, (1+\delta_s)\|y\|_2^2.$$

Then, the matrix A is said to satisfy the s-restricted isometry property with restricted isometry constant $\delta_s$.
This condition is equivalent to the statement that for every m × s submatrix $A_s$ of A we have

$$\|A_s^* A_s - I_s\|_{2 \to 2} \,\le\, \delta_s,$$

where $I_s$ is the s × s identity matrix and $\|\cdot\|_{2 \to 2}$ is the operator norm; see the literature for a proof. Finally, this is equivalent to stating that all eigenvalues of $A_s^* A_s$ are in the interval $[1-\delta_s,\, 1+\delta_s]$.
Restricted Isometric Constant (RIC)
The RIC is defined as the infimum of all possible $\delta$ for a given matrix A and sparsity level s:

$$\delta_s = \inf\left\{\delta : (1-\delta)\|y\|_2^2 \le \|A_s y\|_2^2 \le (1+\delta)\|y\|_2^2 \ \text{for every } m \times s \text{ submatrix } A_s \text{ and every } y \in \mathbb{R}^s\right\}.$$

It is denoted as $\delta_s$.
Eigenvalues
For any matrix that satisfies the RIP with a RIC of $\delta_s$, the following condition holds for every m × s submatrix $A_s$:

$$1-\delta_s \,\le\, \lambda_{\min}(A_s^* A_s) \,\le\, \lambda_{\max}(A_s^* A_s) \,\le\, 1+\delta_s.$$
The tightest upper bound on the RIC can be computed for Gaussian matrices. This can be achieved by computing the exact probability that all the eigenvalues of Wishart matrices lie within an interval.
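A brute-force sketch of the definition (illustrative only; the matrix sizes below are arbitrary, and enumeration is feasible only for tiny matrices, since the number of submatrices grows combinatorially):

import numpy as np
from itertools import combinations

def ric(A, s):
    # delta_s = worst deviation of the eigenvalues of A_s^T A_s from 1,
    # maximized over all s-column submatrices A_s of A.
    delta = 0.0
    for cols in combinations(range(A.shape[1]), s):
        lam = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        delta = max(delta, abs(lam[0] - 1.0), abs(lam[-1] - 1.0))
    return delta

# Usage: Gaussian matrices with N(0, 1/m) entries are near-isometries.
rng = np.random.default_rng(1)
m, p, s = 40, 12, 3
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, p))
print(ric(A, s))   # typically well below 1 at these sizes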
See also
Compressed sensing
Mutual coh |
https://en.wikipedia.org/wiki/Empirical%20modelling | Empirical modelling refers to any kind of (computer) modelling based on empirical observations rather than on mathematically describable relationships of the system modelled.
Empirical Modelling
Empirical Modelling as a variety of empirical modelling
Empirical modelling is a generic term for activities that create models by observation and experiment. Empirical Modelling (with the initial letters capitalised, and often abbreviated to EM) refers to a specific variety of empirical modelling in which models are constructed following particular principles. Though the extent to which these principles can be applied to model-building without computers is an interesting issue (to be revisited below), there are at least two good reasons to consider Empirical Modelling in the first instance as computer-based. Without doubt, computer technologies have had a transformative impact where the full exploitation of Empirical Modelling principles is concerned. What is more, the conception of Empirical Modelling has been closely associated with thinking about the role of the computer in model-building.
An empirical model operates on a simple semantic principle: the maker observes a close correspondence between the behaviour of the model and that of its referent. The crafting of this correspondence can be 'empirical' in a wide variety of senses: it may entail a trial-and-error process, may be based on computational approximation to analytic formulae, or may be derived as a black-box relation that affords no insight into 'why it works'.
Empirical Modelling is rooted on the key principle of William James's radical empiricism, which postulates that all knowing is rooted in connections that are given-in-experience. Empirical Modelling aspires to craft the correspondence between the model and its referent in such a way that its derivation can be traced to connections given-in-experience. Making connections in experience is an essentially individual human activity that requires skill a |