source | text |
|---|---|
https://en.wikipedia.org/wiki/Pip%20Pattison | Philippa Eleanor "Pip" Pattison (born 11 April 1952) is a quantitative psychologist who retired in December 2021 as the Deputy Vice-Chancellor Education at the University of Sydney. She is now an Emeritus Professor at the University of Sydney and the University of Melbourne.
Early life
Pattison was born in Perth, Western Australia, and spent part of her childhood in Adelaide and Melbourne. She completed her education at the University of Melbourne, obtaining her BSc (Hons) in 1973 before being awarded a PhD in Psychology in 1980. She married Ian Pattison in 1973.
Career
Pattison's full-time employment at the University of Melbourne began with a lecturing role in the Department of Psychology in 1977 while she was completing a PhD, and she continued to hold an appointment in Psychology until 2014. She was promoted to Professor in the Department of Psychology in 2000 and then took on leadership roles that included Head of the School of Behavioural Science, and Deputy Vice President, Vice President and President of the Academic Board. She became Pro Vice-Chancellor (Learning and Teaching) in 2009 and then Deputy Vice-Chancellor (Academic) in 2011. In a number of these roles she was involved in the introduction and implementation of the Melbourne Model (a restructure of the undergraduate curriculum to provide only generalist undergraduate courses and specialist postgraduate courses) and oversight of learning and teaching, including eLearning and the introduction of MOOCs to the University.
Pattison joined the University of Sydney in 2014 as Deputy Vice-Chancellor (DVC) Education. As DVC Education she was a member of the University's senior executive group and led the strategy for teaching and learning at Sydney.
Pattison was named on the Queen’s Birthday 2015 Honours List as an Officer of the Order of Australia for "distinguished service to higher education, particularly through contributions to the study of social network modelling, analysis and theory an |
https://en.wikipedia.org/wiki/Phase%20response | In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter.
Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually expressed as a function of frequency. Similarly, the phase response is the phase of the output with the input taken as the reference; the input is defined as having zero phase. A phase response is not limited to the range 0° to 360°, as phase can accumulate to any amount, corresponding to an arbitrary time delay.
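As a concrete illustration of these definitions, the sketch below computes the amplitude and phase response of a first-order RC low-pass filter; the cutoff frequency and test frequencies are illustrative values, not taken from the text.

```python
import numpy as np

# First-order RC low-pass filter: H(jw) = 1 / (1 + j*w/wc).
# The cutoff frequency below is an illustrative choice.
fc = 1000.0                      # cutoff frequency in Hz (assumed)
wc = 2 * np.pi * fc

freqs = np.array([100.0, 1000.0, 10000.0])   # test frequencies in Hz
H = 1.0 / (1.0 + 1j * (2 * np.pi * freqs) / wc)

amplitude_response = np.abs(H)               # output/input amplitude ratio
phase_response = np.degrees(np.angle(H))     # output phase, input as zero-phase reference

# At the cutoff frequency the output lags the input by 45 degrees.
print(phase_response)   # approximately [-5.7, -45.0, -84.3]
```

The phase is negative at every frequency (a lag), and the total accumulated phase grows without bound for cascaded stages, which is why the response is not confined to 0°–360°.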
See also
Group delay and phase delay |
https://en.wikipedia.org/wiki/RISC%20OS%20Open | RISC OS Open Ltd. (also referred to as ROOL) is a limited company engaged in computer software and IT consulting. It is managing the process of publishing the source code to RISC OS. Company founders include staff who formerly worked for Pace, the company which acquired RISC OS after Acorn's demise.
The source code publication was initially facilitated by a shared source initiative (SSI) between ROOL and Castle Technology (CTL), prior to a switch to the more widely recognised Apache licence in October 2018.
ROOL hopes that by making the RISC OS source code available for free it will help stimulate development of both the RISC OS source code and the platform as a whole.
Operations
ROOL set initial goals to make the source code easily available (on the web), and also to establish a wiki, forum and bug tracker. These have been available since December 2006.
ROOL's operations are organized around facilitating these goals, and its staff also undertake development work on the code themselves. Since early 2009, ownership, development and sales of the RISC OS development tools have rested with RISC OS Open. As an extension to the initial goals, in 2011 ROOL introduced a bounty scheme to encourage further development.
Attendance at computer shows is often arranged, with other knowledgeable coders sometimes standing in when ROOL staff are unavailable. A Facebook page was created in 2012.
Publishing
A number of book titles have been published, starting in 2015 with the RISC OS Style Guide and followed by a three-book set in support of the Desktop Development Environment, the BBC BASIC Reference Manual and the RISC OS 5 User Guide.
Forum
Discussions of a technical and more general nature take place on the forum. A thread entitled "Let's get started with a Pandora port" witnessed discussion of porting to the Cortex-A8 used in the Pandora handheld game console. The thread was started in September 2008. |
https://en.wikipedia.org/wiki/Fences%20and%20pickets%20model%20of%20plasma%20membrane%20structure | The fences and pickets model of plasma membrane is a concept of cell membrane structure suggesting that the fluid plasma membrane is compartmentalized by actin-based
membrane-skeleton "fences" and anchored transmembrane protein "pickets". This model differs from older cell membrane structure concepts, such as the Singer-Nicolson fluid mosaic model and the Saffman-Delbrück two-dimensional continuum fluid model, which view the membrane as more or less homogeneous. The fences and pickets model was proposed to explain observations of molecular movement made possible by recent advances in single-molecule tracking techniques.
Membrane skeleton fence model
The actin-based membrane skeleton (MSK) meshwork is directly situated on the cytoplasmic surface of the plasma membrane. The membrane skeleton fence (or membrane skeleton corralling) model suggests that this meshwork partitions the plasma membrane into many small compartments with respect to the lateral diffusion of membrane molecules. The cytoplasmic domains of transmembrane (TM) proteins collide with the actin-based membrane skeleton, which induces temporary confinement, or corralling, of the TM proteins in the membrane skeleton mesh. TM proteins are able to hop between adjacent compartments when the distance between the meshwork and the membrane becomes large enough, or when the meshwork temporarily and locally dissociates. Cytoplasmic molecules located on the inner surface of the plasma membrane also exhibit confinement within actin-based compartments.
Anchored transmembrane protein picket model
The movement of phospholipids, even those located in the outer leaflet of the membrane, is regulated by the actin-based membrane skeleton meshwork. This is surprising, because the membrane skeleton is located on the cytoplasmic surface of the plasma membrane and cannot directly interact with the phospholipids located in the outer leaflet of the plasma membrane.
To explain the hop diffusion of phospholipids, consistently with that of TM protein |
https://en.wikipedia.org/wiki/Electric%20form%20factor | The electric form factor is the Fourier transform of the electric charge distribution in a nucleon. Nucleons (protons and neutrons) are made of up and down quarks, which carry charges of +2/3 and −1/3 of the elementary charge, respectively. The study of form factors falls within the regime of perturbative QCD.
The idea originated with William Thomson.
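In the simplest (non-relativistic, Breit-frame) picture, the Fourier-transform relationship can be written as follows, where ρ is the nucleon's charge distribution and q the momentum transfer; this reading is only exact at low momentum transfer:

```latex
G_E(\mathbf{q}^{2}) \;=\; \int \rho(\mathbf{r})\, e^{\,i\mathbf{q}\cdot\mathbf{r}}\, d^{3}r,
\qquad
G_E(0) \;=\; \int \rho(\mathbf{r})\, d^{3}r \;=\; Q,
```

so that the form factor at zero momentum transfer equals the total charge Q in units of the elementary charge: 1 for the proton and 0 for the neutron.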
See also
Form factor (disambiguation)
Electrodynamics |
https://en.wikipedia.org/wiki/David%20Preiss | David Preiss FRS (born January 21, 1947) is a Czech and British mathematician, specializing in mathematical analysis.
He is a professor of mathematics at the University of Warwick.
Preiss is a recipient of the Ostrowski Prize (2011)
and the winner of the 2008 London Mathematical Society Pólya Prize for his 1987 result on the geometry of measures, in which he solved the remaining problem in the geometric measure theoretic structure of sets and measures in Euclidean space.
He was an invited speaker at the ICM 1990 in Kyoto.
He is a Fellow of the Royal Society (2004) and a Foreign Fellow of the Learned Society of the Czech Republic (2003).
He is associate editor of the mathematical journal Real Analysis Exchange.
Publications |
https://en.wikipedia.org/wiki/Racah%20Lectures%20in%20Physics | The Racah Lecture is an annual memorial lecture given at The Racah Institute of Physics of the Hebrew University of Jerusalem, commemorating Prof. Giulio Racah.
The lecturers are selected from among the world's leading physicists.
List of previous speakers |
https://en.wikipedia.org/wiki/C%20Sharp%20%28programming%20language%29 | C# (pronounced "see sharp") is a general-purpose high-level programming language supporting multiple paradigms. C# encompasses static typing, strong typing, lexical scoping, and the imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines.
The C# programming language was designed by Anders Hejlsberg from Microsoft in 2000 and was later approved as an international standard by Ecma (ECMA-334) in 2002 and ISO/IEC (ISO/IEC 23270) in 2003. Microsoft introduced C# along with .NET Framework and Visual Studio, both of which were closed-source. At the time, Microsoft had no open-source products. Four years later, in 2004, a free and open-source project called Mono began, providing a cross-platform compiler and runtime environment for the C# programming language. A decade later, Microsoft released Visual Studio Code (code editor), Roslyn (compiler), and the unified .NET platform (software framework), all of which support C# and are free, open-source, and cross-platform. Mono also joined Microsoft but was not merged into .NET.
The most recent stable version of the language is C# 11.0, which was released in 2022 with .NET 7.0.
Design goals
The Ecma standard lists these design goals for C#:
The language is intended to be a simple, modern, general-purpose, object-oriented programming language.
The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
The language is intended for use in developing software components suitable for deployment in distributed environments.
Portability is very important for source code and programmers, especially those already familiar with C and C++.
Support for internationalization is very important.
C# is intended to be suitable for wr |
https://en.wikipedia.org/wiki/Veld | Veld, also spelled veldt, is a type of wide open rural landscape in Southern Africa. Particularly, it is a flat area covered in grass or low scrub, especially in the countries of South Africa, Lesotho, Eswatini, Zimbabwe and Botswana. A certain sub-tropical woodland ecoregion of Southern Africa has been officially defined as the Bushveld by the World Wide Fund for Nature. Trees are not abundant—frost, fire and grazing animals allow grass to grow but prevent the build-up of dense foliage.
Etymology
The word veld comes from the Afrikaans word for "field".
The etymological origin is older modern Dutch veldt, a spelling that the Dutch abandoned in favour of veld during the 19th century, decades before the first Afrikaans dictionary. A cognate to the English field, it was spelt velt in Middle Dutch and felt in Old Dutch.
Climate
The climate of the veld is highly variable, but its general pattern is mild winters from May to September and hot or very hot summers from November to March, with moderate or considerable variations in daily temperatures and abundant sunshine. Precipitation mostly occurs in the summer months in the form of high-energy thunderstorms.
Over most of the South African Highveld, the average annual rainfall is between a year, decreasing to about near the western border and increasing to nearly in some parts of the Lesotho Highlands; the South African Lowveld generally receives more precipitation than the Highveld. Temperature is closely related to elevation. In general, the mean July (winter) temperatures range between in the Lesotho Highlands and in the Lowveld. January (summer) temperatures range between and .
In Zimbabwe the precipitation averages around on the Highveld, dropping to less than in the lowest areas of the Lowveld. Temperatures are slightly higher than in South Africa.
Over the entire veld, seasonal and annual average rainfall variations of up to 40 percent are common. Damaging drought affects at least half the a |
https://en.wikipedia.org/wiki/Local%20Tate%20duality | In Galois cohomology, local Tate duality (or simply local duality) is a duality for Galois modules for the absolute Galois group of a non-archimedean local field. It is named after John Tate, who first proved it. It shows that the dual of such a Galois module is the Tate twist of the usual linear dual. This new dual is called the (local) Tate dual.
Local duality, combined with Tate's local Euler characteristic formula, provides a versatile set of tools for computing the Galois cohomology of local fields.
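For reference, Tate's local Euler characteristic formula can be stated as follows for a finite Galois module A over a non-archimedean local field K (notation as in the next section); this is the standard statement of the formula rather than a quotation from this article:

```latex
\chi(G_K, A) \;=\; \frac{\#H^{0}(K,A)\,\#H^{2}(K,A)}{\#H^{1}(K,A)} \;=\; \|\#A\|_{K},
```

where ‖·‖_K denotes the normalized absolute value of K. In particular the formula measures how much larger H¹ is than the other two cohomology groups.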
Statement
Let K be a non-archimedean local field, let Ks denote a separable closure of K, and let GK = Gal(Ks/K) be the absolute Galois group of K.
Case of finite modules
Denote by μ the Galois module of all roots of unity in Ks. Given a finite GK-module A of order prime to the characteristic of K, the Tate dual of A is defined as A′ = Hom(A, μ)
(i.e. it is the Tate twist of the usual dual A∗). Let Hi(K, A) denote the group cohomology of GK with coefficients in A. The theorem states that the pairing
Hi(K, A) × H2−i(K, A′) → H2(K, μ) = Q/Z
given by the cup product sets up a duality between Hi(K, A) and H2−i(K, A′) for i = 0, 1, 2. Since GK has cohomological dimension equal to two, the higher cohomology groups vanish.
Case of p-adic representations
Let p be a prime number. Let Qp(1) denote the p-adic cyclotomic character of GK (i.e. the Tate module of μ). A p-adic representation of GK is a continuous representation
ρ : GK → GL(V)
where V is a finite-dimensional vector space over the p-adic numbers Qp and GL(V) denotes the group of invertible linear maps from V to itself. The Tate dual of V is defined as V′ = V∗ ⊗ Qp(1)
(i.e. it is the Tate twist of the usual dual V∗ = Hom(V, Qp)). In this case, Hi(K, V) denotes the continuous group cohomology of GK with coefficients in V. Local Tate duality applied to V says that the cup product induces a pairing
Hi(K, V) × H2−i(K, V′) → H2(K, Qp(1)) = Qp
which is a duality between Hi(K, V) and H2−i(K, V ′) for i = 0, 1, 2. Again, the higher cohomology groups vanish.
See also
Tate duality, a global version (i.e. for global fields)
Notes |
https://en.wikipedia.org/wiki/List%20of%20M%C4%81ori%20plant%20common%20names | This is a list of Māori plant common names.
Akakura
Akatea
Akeake
Aruhe
Hangehange
Harakeke
Heketara
Horoeka
Horokaka
Horopito
Houhere
Houpara
Hutu
Kahakaha
Kahikatea
Kaikōmako
Kāmahi
Kānuka
Karaka
Kareao
Karo
Kātote
Kauri
Kawakawa
Kiekie
Kohekohe
Kōhia
Kōhūhū
Korokio tāranga
Koromiko
Kotukutuku
Kōwhai
Kūmara
Kūmarahou
Makomako
Mamaku
Mānuka
Mataī
Mingimingi
Miro
Monoao
Neinei
Nīkau
Parataniwha
Patē
Pikopiko
Pīngao
Pōhutukawa
Pōkākā
Ponga
Puka
Rangiora (plant)
Rātā
Rimu
Tarata
Tauhinu
Tawāpou
Tāwhai
Tītoki
Toetoe
Tōtara
Tutu
Wharariki
Whekī
Whekī ponga |
https://en.wikipedia.org/wiki/Narrow-gap%20semiconductor | Narrow-gap semiconductors are semiconducting materials with a bandgap smaller than 0.5 eV, which corresponds to an infrared absorption cut-off wavelength beyond 2.5 microns. A more extended definition includes all semiconductors with bandgaps smaller than that of silicon (1.1 eV). Modern terahertz, infrared, and thermographic technologies are all based on this class of semiconductors.
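The correspondence between bandgap and cut-off wavelength quoted above follows from λ = hc/Eg. A minimal sketch, using the constant hc ≈ 1.24 eV·µm:

```python
# Cut-off wavelength from bandgap: lambda = h*c / Eg.
# With h*c expressed in eV*um, h*c ≈ 1.23984 eV*um.
HC_EV_UM = 1.23984

def cutoff_wavelength_um(bandgap_ev):
    """Longest absorbed wavelength (in microns) for a given bandgap (in eV)."""
    return HC_EV_UM / bandgap_ev

# A 0.5 eV gap gives the ~2.5 micron cut-off quoted above.
print(round(cutoff_wavelength_um(0.5), 2))   # 2.48
# Silicon's 1.1 eV gap cuts off near 1.1 microns.
print(round(cutoff_wavelength_um(1.1), 2))   # 1.13
```

The same relation can be read off the table below: the smaller the gap, the longer the infrared wavelength the material can detect.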
Narrow-gap materials have made it possible to realize satellite remote sensing, photonic integrated circuits for telecommunications, and unmanned-vehicle Li-Fi systems in the regime of infrared detectors and infrared vision. They are also the materials basis for terahertz technology, including security screening for concealed weapons, safe medical and industrial imaging with terahertz tomography, and dielectric wakefield accelerators. In addition, thermophotovoltaics embedded with narrow-gap semiconductors can potentially use the traditionally wasted portion of solar energy, which makes up ~49% of the sunlight spectrum. Spacecraft, deep-ocean instruments, and vacuum physics setups use narrow-gap semiconductors to achieve cryogenic cooling.
List of narrow-gap semiconductors
{| class="wikitable"
|-
!Name
!Chemical formula
!Groups
!Band gap (300 K)
|-
| Mercury cadmium telluride
| Hg1−xCdxTe
| II-VI
| 0 to 1.5 eV
|-
| Mercury zinc telluride
| Hg1−xZnxTe
| II-VI
| 0.15 to 2.25 eV
|-
| Lead selenide
| PbSe
| IV-VI
| 0.27 eV
|-
| Lead(II) sulfide
| PbS
| IV-VI
| 0.37 eV
|-
| Lead telluride
| PbTe
| IV-VI
| 0.32 eV
|-
| Indium arsenide
| InAs
| III-V
| 0.354 eV
|-
| Indium antimonide
| InSb
| III-V
| 0.17 eV
|-
|Gallium antimonide
| GaSb
| III-V
| 0.67 eV
|-
| Cadmium arsenide
| Cd3As2
| II-V
| 0.5 to 0.6 eV
|-
| Bismuth telluride
| Bi2Te3
|
| 0.21 eV
|-
| Tin telluride
| SnTe
| IV-VI
| 0.18 eV
|-
| Tin selenide
| SnSe
| IV-VI
| 0.9 eV
|-
| Silver(I) selenide
| Ag2Se
|
| 0.07 eV
|-
|Magnesium silicide
|Mg2Si
|II-IV
|0.79 eV
|}
See also
List of semiconductor materia |
https://en.wikipedia.org/wiki/AMAX | AMAX is a certification program for AM radio broadcasting standards, created in the United States beginning in 1991 by the Electronic Industries Association (EIA) and the National Association of Broadcasters (NAB). It was developed with the intention of helping AM stations, especially ones with musical formats, become more competitive with FM broadcasters. The standards cover both consumer radio receivers and broadcasting station transmission chains.
Although the Federal Communications Commission (FCC) endorsed the AMAX proposal, the agency never made it into a formal requirement, leaving its adoption as voluntary. Ultimately few receiver manufacturers and radio stations adhered to the standard, thus it has done little to stem the continued decline in AM station listenership.
Standards
Receiver
AMAX radio receivers are divided into three categories: home, automotive and portable. Receiver certification requirements include:
Wide audio bandwidth, with a minimum of 7,500 hertz for home and automotive radios, and 6,500 hertz for portables.
Bandwidth control, either manual or automatic, including at least two settings, such as "narrow" and "wide".
Compliance with receiver standards for low total harmonic distortion and the proper NRSC-1 audio de-emphasis curve.
Attenuation of the 10,000 hertz "whistle" heterodyne. (In the U.S., 10 kHz is the standard separation of adjacent transmitting frequencies.)
Provision for connecting an external AM antenna.
Ability to receive stations broadcasting on the 1610 to 1700 kHz expanded AM band frequencies.
Effective noise blanking, for home and automotive receivers.
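One way the 10 kHz heterodyne attenuation could be realized in a receiver's audio chain is a narrow digital notch filter. The sketch below uses the standard biquad notch form; the sample rate and Q factor are illustrative assumptions, not values mandated by the AMAX specification.

```python
import numpy as np

# Biquad notch filter (RBJ audio-EQ-cookbook form) centred on the
# 10 kHz adjacent-channel heterodyne. Sample rate and Q are assumed.
fs = 44100.0   # sample rate in Hz (assumed)
f0 = 10000.0   # whistle frequency to attenuate
q = 30.0       # notch quality factor (assumed)

w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
b = np.array([1.0, -2 * np.cos(w0), 1.0])                  # numerator
a = np.array([1.0 + alpha, -2 * np.cos(w0), 1.0 - alpha])  # denominator

def gain(f):
    """Magnitude response of the biquad at frequency f in hertz."""
    zinv = np.exp(-2j * np.pi * f / fs)
    num = b[0] + b[1] * zinv + b[2] * zinv**2
    den = a[0] + a[1] * zinv + a[2] * zinv**2
    return abs(num / den)

print(gain(1000.0))    # close to 1: program audio passes essentially unchanged
print(gain(10000.0))   # essentially 0: the 10 kHz whistle is nulled
```

A high Q keeps the notch narrow, so the wide audio bandwidth required elsewhere in the certification is preserved.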
Transmission
For AM broadcasting stations, the AMAX qualifications specified "a unified standard for pre–emphasis and distortion" for broadcasting station transmission chains.
Implementation
From a technical standpoint, the AMAX standards met with approval, with one reviewer noting that "The AMAX standard is a last-ditch effort by broadcasters and radio makers to save AM |
https://en.wikipedia.org/wiki/Epitaxy | Epitaxy (prefix epi- means "on top of") refers to a type of crystal growth or material deposition in which new crystalline layers are formed with one or more well-defined orientations with respect to the crystalline seed layer. The deposited crystalline film is called an epitaxial film or epitaxial layer. The relative orientation(s) of the epitaxial layer to the seed layer is defined in terms of the orientation of the crystal lattice of each material. For most epitaxial growths, the new layer is usually crystalline and each crystallographic domain of the overlayer must have a well-defined orientation relative to the substrate crystal structure. Epitaxy can involve single-crystal structures, although grain-to-grain epitaxy has been observed in granular films. For most technological applications, single domain epitaxy, which is the growth of an overlayer crystal with one well-defined orientation with respect to the substrate crystal, is preferred. Epitaxy can also play an important role while growing superlattice structures.
The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner".
One of the main commercial applications of epitaxial growth is in the semiconductor industry, where semiconductor films are grown epitaxially on semiconductor substrate wafers. For the case of epitaxial growth of a planar film atop a substrate wafer, the epitaxial film's lattice will have a specific orientation relative to the substrate wafer's crystalline lattice such as the [001] Miller index of the film aligning with the [001] index of the substrate. In the simplest case, the epitaxial layer can be a continuation of the same exact semiconductor compound as the substrate; this is referred to as homoepitaxy. Otherwise, the epitaxial layer will be composed of a different compound; this is referred to as heteroepitaxy.
Types
Homoepitaxy is a kind of epitaxy performed with only one material, in which a crystalline film is gr |
https://en.wikipedia.org/wiki/Kapur%20%28wood%29 | Kapur (or kapor) is a dipterocarp hardwood from trees of the genus Dryobalanops found in lowland tropical rainforests of Malaysia, Indonesia and South-East Asia. It is a durable construction tropical timber. One variety, D. aromatica, is a source of camphor.
Species
The name kapur can refer to the following species from the Dryobalanops genus:
D. aromatica
D. beccarii
D. fusca
D. keithii
D. lanceolata
D. oblongifolia
D. rappa
Deforestation
Kapur is logged from old-growth forest, often illegally. These forests have developed over the course of hundreds of years. When harvested, these trees are often between 250 and 1000 years old. For a tree from the family Dipterocarpaceae, it takes approximately 100 years to reach a height of 30 meters. Most of the species that are sold as kapur are listed on the IUCN Red List for endangered species. For example, D. fusca is critically endangered.
Overexploitation has led to large scale deforestation in the tropics. The International Tropical Timber Organization is concerned with conserving the habitat of trees producing tropical timber.
According to FSC, certified tropical hardwood can counteract deforestation. Forests that are managed according to the FSC standards, become economically valuable and might therefore not be converted to farmland. However, other organisations advise consumers to stay away from kapur altogether to avoid logging of centuries-old trees.
See also
Dipterocarp timber classification |
https://en.wikipedia.org/wiki/Coulomb%20blockade | In mesoscopic physics, a Coulomb blockade (CB), named after Charles-Augustin de Coulomb's electrical force, is the decrease in electrical conductance at small bias voltages of a small electronic device comprising at least one low-capacitance tunnel junction. Because of the CB, the conductance of a device may not be constant at low bias voltages, and may even vanish for biases under a certain threshold, i.e. no current flows.
Coulomb blockade can be observed by making a device very small, like a quantum dot. When the device is small enough, electrons inside the device create a strong Coulomb repulsion that prevents other electrons from flowing. The device then no longer follows Ohm's law, and the current-voltage relation of the Coulomb blockade looks like a staircase.
Even though the Coulomb blockade can be used to demonstrate the quantization of the electric charge, it remains a classical effect and its main description does not require quantum mechanics. However, when few electrons are involved and an external static magnetic field is applied, Coulomb blockade provides the ground for a spin blockade (like Pauli spin blockade) and valley blockade, which include quantum mechanical effects due to spin and orbital interactions respectively between the electrons.
The devices can comprise either metallic or superconducting electrodes. If the electrodes are superconducting, Cooper pairs (with a charge of minus two elementary charges, −2e) carry the current. In the case that the electrodes are metallic or normal-conducting, i.e. neither superconducting nor semiconducting, electrons (with a charge of −e) carry the current.
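The role of the low capacitance can be made quantitative: the blockade is governed by the charging energy e²/2C, and it becomes observable roughly when this energy exceeds the thermal energy kB·T. A minimal sketch, with an assumed (illustrative) junction capacitance:

```python
# Charging energy of a low-capacitance junction: Ec = e^2 / (2C).
# Coulomb blockade is visible roughly when Ec >> kB * T.
E = 1.602176634e-19     # elementary charge in coulombs
KB = 1.380649e-23       # Boltzmann constant in J/K

def charging_energy(capacitance):
    """Energy cost (in joules) of adding one electron to the island."""
    return E**2 / (2 * capacitance)

def blockade_temperature(capacitance):
    """Temperature scale (in kelvin) below which the blockade appears."""
    return charging_energy(capacitance) / KB

# A 1 fF junction (illustrative value, not from the article):
C = 1e-15
print(blockade_temperature(C))   # ~0.93 K
```

This is why Coulomb blockade experiments typically require both nanoscale junctions and cryogenic temperatures.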
In a tunnel junction
The following section is for the case of tunnel junctions with an insulating barrier between two normal conducting electrodes (NIN junctions).
The tunnel junction is, in its simplest form, a thin insulating barrier between two conducting electrodes. According to the laws of classical electrodynamics, no current can flow through an insulat |
https://en.wikipedia.org/wiki/Parasitology | Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology.
Fields
The study of these diverse organisms means that the subject is often broken up into simpler, more focused units, which use common techniques, even if they are not studying the same organisms or diseases. Much research in parasitology falls somewhere between two or more of these definitions. In general, the study of prokaryotes falls under the field of bacteriology rather than parasitology.
Medical
The parasitologist F. E. G. Cox noted that "Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth".
One of the largest fields in parasitology, medical parasitology is the subject that deals with the parasites that infect humans, the diseases they cause, the clinical picture, and the response generated by humans against them. It is also concerned with the various methods of their diagnosis, treatment, and finally their prevention and control.
A parasite is an organism that lives on or within another organism, called the host.
These include organisms such as:
Plasmodium spp., the protozoan parasite which causes malaria. The four species infective to humans are P. falciparum, P. malariae, P. vivax and P. ovale.
Leishmania, unicellular organisms which cause leishmaniasis
Entamoeba and Giardia, which cause intestinal infections (dysentery and diarrhoea)
Multicellular organisms and intestinal worms (helminths) such as Schistosoma spp., Wuchereri |
https://en.wikipedia.org/wiki/WO%20virus | WO virus is a bacteriophage that infects bacteria of the genus Wolbachia, after which it is named. The virus is notable for carrying DNA related to the black widow spider toxin gene, making it an example of a bacteriophage with animal-like DNA and implying DNA transfers between eukaryotes and bacteriophages. |
https://en.wikipedia.org/wiki/SPOJ | SPOJ (Sphere Online Judge) is an online judge system with over 315,000 registered users and over 20,000 problems. Tasks are prepared by its community of problem setters or are taken from previous programming contests. SPOJ allows advanced users to organize contests under their own rules and also includes a forum where programmers can discuss how to solve a particular problem.
Apart from the English language, SPOJ also offers its content in Polish, Portuguese and Vietnamese languages. The solution to problems can be submitted in over 40 programming languages, including esoteric ones, via the Sphere Engine. It is run by the Polish company Sphere Research Labs.
The website is considered both an automated evaluator of user-submitted programs as well as an online learning platform to help people understand and solve computational tasks. It also allows students to compare paradigms and approaches with a wide variety of languages.
History
The system was originally created to apply online judging to the teaching of students. It focused on university students and lecturers, and on members of the wider programming community interested in algorithms and programming contests.
Aims
It aimed at different users for different purposes such as:
For young people and beginner programmers to develop understanding of algorithms.
University students are given a chance to do their homework honestly and thoroughly, without cheating.
ACM contest pros can solve tasks without being cramped by the restraints of too few programming languages or an inconvenient user interface.
Enthusiasts of functional or object oriented programming can solve contest problems in their favorite language.
Any persons willing to share an interesting task with the rest of the SPOJ community can do so nearly automatically (one mail to the admins requesting problem-setter's privileges is enough),
Any person, wishing to organize a programming contest, with nearly any rules they may |
https://en.wikipedia.org/wiki/Postulates%20of%20special%20relativity | In physics, Albert Einstein derived the theory of special relativity in 1905 from principles now called the postulates of special relativity. Einstein's formulation is said to require only two postulates, though his derivation implies a few more assumptions.
The idea that special relativity depended only on two postulates, both of which seemed to follow from the theory and experiment of the day, was one of the most compelling arguments for the correctness of the theory (Einstein 1912: "This theory is correct to the extent to which the two principles upon which it is based are correct. Since these seem to be correct to a great extent, ...")
Postulates of special relativity
1. First postulate (principle of relativity)
The laws of physics take the same form in all inertial frames of reference.
2. Second postulate (invariance of c)
As measured in any inertial frame of reference, light is always propagated in empty space with a definite velocity c that is independent of the state of motion of the emitting body. Or: the speed of light in free space has the same value c in all inertial frames of reference.
The two-postulate basis for special relativity is the one historically used by Einstein, and it is sometimes the starting point today. As Einstein himself later acknowledged, the derivation of the Lorentz transformation tacitly makes use of some additional assumptions, including spatial homogeneity, isotropy, and memorylessness. Also Hermann Minkowski implicitly used both postulates when he introduced the Minkowski space formulation, even though he showed that c can be seen as a space-time constant, and the identification with the speed of light is derived from optics.
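Written out for a boost with velocity v along a shared x-axis, the transformation that the two postulates (together with the auxiliary assumptions just mentioned) single out is the Lorentz transformation:

```latex
t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad
x' = \gamma \left( x - v t \right), \qquad
y' = y, \qquad z' = z,
\qquad \text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

Setting c → ∞ recovers the Galilean transformation, which illustrates how the second postulate is what distinguishes special relativity from Newtonian kinematics.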
Alternative derivations of special relativity
Historically, Hendrik Lorentz and Henri Poincaré (1892–1905) derived the Lorentz transformation from Maxwell's equations, which served to explain the negative result of all aether drift measurements. By that the luminiferous aether becomes unde |
https://en.wikipedia.org/wiki/Life%20history%20theory | Life history theory is an analytical framework designed to study the diversity of life history strategies used by different organisms throughout the world, as well as the causes and results of the variation in their life cycles. It is a theory of biological evolution that seeks to explain aspects of organisms' anatomy and behavior by reference to the way that their life histories—including their reproductive development and behaviors, post-reproductive behaviors, and lifespan (length of time alive)—have been shaped by natural selection. A life history strategy is the "age- and stage-specific patterns" and timing of events that make up an organism's life, such as birth, weaning, maturation, death, etc. These events, notably juvenile development, age of sexual maturity, first reproduction, number of offspring and level of parental investment, senescence and death, depend on the physical and ecological environment of the organism.
The theory was developed in the 1950s and is used to answer questions about topics such as organism size, age of maturation, number of offspring, life span, and many others. In order to study these topics, life history strategies must be identified, and then models are constructed to study their effects. Finally, predictions about the importance and role of the strategies are made, and these predictions are used to understand how evolution affects the ordering and length of life history events in an organism's life, particularly the lifespan and period of reproduction. Life history theory draws on an evolutionary foundation, and studies the effects of natural selection on organisms, both throughout their lifetime and across generations. It also uses measures of evolutionary fitness to determine if organisms are able to maximize or optimize this fitness, by allocating resources to a range of different demands throughout the organism's life. It serves as a method to investigate further the "many layers of complexity of organisms and their worl |
https://en.wikipedia.org/wiki/Environmental%20toxicants%20and%20fetal%20development | Environmental toxicants and fetal development is the impact of different toxic substances from the environment on the development of the fetus. This article deals with potential adverse effects of environmental toxicants on the prenatal development of both the embryo or fetus, as well as pregnancy complications. The human embryo or fetus is relatively susceptible to impact from adverse conditions within the mother's environment. Substandard fetal conditions often cause various degrees of developmental delays, both physical and mental, for the growing baby. Although some variables do occur as a result of genetic conditions pertaining to the father, a great many are directly brought about from environmental toxins that the mother is exposed to.
Various toxins pose a significant hazard to fetuses during development. A 2011 study found that virtually all US pregnant women carry multiple chemicals, including some banned since the 1970s, in their bodies. Researchers detected polychlorinated biphenyls, organochlorine pesticides, perfluorinated compounds, phenols, polybrominated diphenyl ethers (PBDEs, compounds used as flame retardants), phthalates, polycyclic aromatic hydrocarbons, perchlorate, and dichlorodiphenyltrichloroethane (DDT), a pesticide banned in the United States in 1972, in the bodies of 99 to 100 percent of the pregnant women they tested. Among other environmental estrogens, bisphenol A (BPA) was identified in 96 percent of the women surveyed. Several of the chemicals were at the same concentrations that have been associated with negative effects in children in other studies, and it is thought that exposure to multiple chemicals can have a greater impact than exposure to only one substance.
Effects
Environmental toxicants can be described separately by what effects they have, such as structural abnormalities, altered growth, functional deficiencies, congenital neoplasia, or even death for the fetus.
Preterm birth
One in ten US babies is born preterm and a |
https://en.wikipedia.org/wiki/Nicolson%E2%80%93Ross%E2%80%93Weir%20method | Nicolson–Ross–Weir method is a measurement technique for determination of complex permittivities and permeabilities of material samples for microwave frequencies. The method is based on insertion of a material sample with a known thickness inside a waveguide, such as a coaxial cable or a rectangular waveguide, after which the dispersion data is extracted from the resulting scattering parameters. The method is named after A. M. Nicolson and G. F. Ross, and W. B. Weir, who developed the approach in 1970 and 1974, respectively.
The technique is one of the most common procedures for material characterization in microwave engineering.
Method
The method uses the scattering parameters of a material sample embedded in a waveguide, namely S11 and S21, to calculate permittivity and permeability data. S11 and S21 correspond to the cumulative reflection and transmission coefficients of the sample, each referenced to the corresponding sample end: these parameters account for the multiple internal reflections inside the sample, which is considered to have a thickness of d. The reflection coefficient of the bulk sample is:

Γ = X ± √(X² − 1)

where

X = (S11² − S21² + 1) / (2·S11)

The sign of the root for the reflection coefficient is chosen appropriately to ensure its passivity (|Γ| ≤ 1). Similarly, the transmission coefficient of the bulk sample can be written as:

T = (S11 + S21 − Γ) / (1 − (S11 + S21)·Γ)

Thus, the effective permeability (μr) and permittivity (εr) of the material can be written as:

μr = (1 + Γ) / [Λ·(1 − Γ)·√(1/λ0² − 1/λc²)]

εr = (λ0² / μr)·(1/Λ² + 1/λc²)

where

1/Λ² = −[(1/(2πd))·ln(1/T)]²

and

λ0 is the free-space wavelength.
λg is the guided mode wavelength of the unfilled transmission line, with 1/λg = √(1/λ0² − 1/λc²).
λc is the cutoff wavelength of the unfilled transmission line.

The constitutive relation for 1/Λ² admits an infinite number of solutions due to the branches of the complex logarithm. The ambiguity regarding its result can be resolved by taking the group delay into account.
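As a numerical illustration of this extraction chain (S-parameters to the bulk reflection and transmission coefficients, then to effective permeability and permittivity), the following is a single-frequency sketch of the standard Nicolson–Ross–Weir relations. It is illustrative rather than measurement-grade code: it takes the principal branch of the complex logarithm, so the branch ambiguity discussed in the text is left unresolved, and the variable names (lam0, lamc) are our own.

```python
import cmath

def nrw_extract(s11, s21, d, lam0, lamc=float("inf")):
    """Single-frequency Nicolson-Ross-Weir extraction (sketch).

    s11, s21 -- complex S-parameters referenced to the sample ends
    d        -- sample thickness (m)
    lam0     -- free-space wavelength (m)
    lamc     -- cutoff wavelength of the unfilled line (inf for a TEM line)
    """
    # Reflection coefficient of the bulk sample; pick the passive root.
    x = (s11**2 - s21**2 + 1) / (2 * s11)
    gamma = x + cmath.sqrt(x**2 - 1)
    if abs(gamma) > 1:
        gamma = x - cmath.sqrt(x**2 - 1)
    # Transmission coefficient of the bulk sample.
    t = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)
    # 1/Lambda^2 from the principal branch of the complex logarithm
    # (the half-wavelength branch ambiguity is deliberately not resolved).
    inv_lambda_sq = -(cmath.log(1 / t) / (2 * cmath.pi * d)) ** 2
    inv_lambda = cmath.sqrt(inv_lambda_sq)
    # Guided-wavelength factor of the unfilled line: sqrt(1/lam0^2 - 1/lamc^2).
    inv_lamg = cmath.sqrt(1 / lam0**2 - 1 / lamc**2)
    mu_r = (1 + gamma) * inv_lambda / ((1 - gamma) * inv_lamg)
    eps_r = lam0**2 * (inv_lambda_sq + 1 / lamc**2) / mu_r
    return eps_r, mu_r
```

Feeding in S-parameters synthesized for a known slab (thin enough that the electrical length stays below half a wavelength) recovers the slab's permittivity and permeability.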
Limitations and extensions
In the case of low material loss, the Nicolson–Ross–Weir method is known to be unstable for sample thicknesses at integer multiples of one half wavelength, due to a resonance phenomenon. Improvem |
https://en.wikipedia.org/wiki/Consensus%20CDS%20Project | The Consensus Coding Sequence (CCDS) Project is a collaborative effort to maintain a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assemblies. The CCDS project tracks identical protein annotations on the reference mouse and human genomes with a stable identifier (CCDS ID), and ensures that they are consistently represented by the National Center for Biotechnology Information (NCBI), Ensembl, and UCSC Genome Browser. The integrity of the CCDS dataset is maintained through stringent quality assurance testing and on-going manual curation.
Motivation and background
Biological and biomedical research has come to rely on accurate and consistent annotation of genes and their products on genome assemblies. Reference annotations of genomes are available from various sources, each with their own independent goals and policies, which results in some annotation variation.
The CCDS project was established to identify a gold standard set of protein-coding gene annotations that are identically annotated on the human and mouse reference genome assemblies by the participating annotation groups. The CCDS gene sets arrived at by consensus of the different partners now consist of over 18,000 human and over 20,000 mouse genes (see CCDS release history). The CCDS dataset represents more alternative splicing events with each new release.
Contributing groups
Participating annotation groups include:
National Center for Biotechnology Information (NCBI)
European Bioinformatics Institute (EBI)
Wellcome Trust Sanger Institute (WTSI)
HUGO Gene Nomenclature Committee (HGNC)
Mouse Genome Informatics (MGI)
Manual annotation is provided by:
Reference Sequence (RefSeq) at NCBI
Human and Vertebrate Analysis and Annotation (HAVANA) at WTSI
Defining the CCDS gene set
"Consensus" is defined as protein-coding regions that agree at the start codon, stop codon, and splice junctions, and for wh |
https://en.wikipedia.org/wiki/Robert%20McCance | Robert Alexander McCance, CBE, FRS (9 December 1898 in Ulster– 3 March 1993 in Cambridge) was a British paediatrician, physiologist, biochemist and nutritionist and was the first Professor of Experimental Medicine at the University of Cambridge.
Life
Born in Ulster, the son of a linen merchant, he was educated at St. Bees School before wartime service in the Royal Naval Air Service, flying an observation aircraft from the warship HMS Indomitable.<ref>Ashwell, M. (ed.), McCance and Widdowson: A Scientific Partnership of 60 Years, 1993.</ref> From 1919 he read Natural Sciences at Cambridge University, after an initial start on the Agriculture course. In 1925 he went on to study medicine at King's College Hospital in London. While studying with R. D. Lawrence at his diabetic clinic, McCance became interested in diabetes, particularly in one of the complications of diabetic coma, sodium chloride (salt) deficiency. He then began a scientific career in the study of nutrition.
Career
Remaining at King's College in the 1930s, McCance conducted research on himself and on volunteers, studying the physiological effects of salt. Volunteers would eat a salt-free diet and lie sweating under heat lamps for two hours a day for 10 days, to remove salt from the body and become salt deficient. This led to suffering in the volunteers, including cramps and shortness of breath as well as anorexia and nausea. This led to further research, which established that infants with salt deficiency excreted little salt. A later line of research, initiated with a patient who had polycythaemia rubra vera, established that the amount of iron in the body is regulated not by intestinal excretion, as had been taught, but by controlled absorption.
With colleague H. Shipp, he published The Chemistry of Flesh Foods and their Losses on Cooking in 1933. In 1936, he delivered the Goulstonian Lecture to the Royal Colleg |
https://en.wikipedia.org/wiki/De%20Corpore | De Corpore ("On the Body") is a 1655 book by Thomas Hobbes. As its full Latin title Elementorum philosophiae sectio prima De corpore implies, it was part of a larger work, conceived as a trilogy. De Cive had already appeared, while De Homine would be published in 1658. Hobbes had in fact been drafting De Corpore for at least ten years before its appearance, putting it aside for other matters. This delay affected its reception: the approach taken seemed much less innovative than it would have done in the previous decade.
Contents
Although the chosen title would suggest a work of natural philosophy, De Corpore is largely devoted to foundational matters. It consists of four sections. Part I covers logic. Parts II and III concern “abstract bodies”: the second part is a repertoire of scientific concepts, and the third covers geometry. Chapters 16 to 20 of Part III are in fact devoted to mathematics generally, in a reductive way, and proved controversial. They proposed a kinematic foundation for geometry, which Hobbes wished to equate with mathematics; geometry itself, that is, is a “science of motion”. Hobbes here adopts ideas from Galileo and Cavalieri. It is in Part IV, on natural phenomena, that physics as such is discussed.
Scope
Hobbes in De Corpore states that the subject of philosophy is devoted to "bodies". He clarifies this by division: natural philosophy is concerned with the concept of "natural body", while the bodies called commonwealths are the concern of "civil philosophy". He then applies "body" as synonymous with substance, breaking with the scholastic tradition.
Mathematical errors
Some proofs in the work being "botched", as Noel Malcolm puts it, De Corpore had a negative effect on Hobbes's scholarly reputation. The inclusion of a claimed solution for squaring the circle, an apparent afterthought rather than a systematic development, led to an extended pamphlet war in the Hobbes-Wallis controversy.
Editions and tr |
https://en.wikipedia.org/wiki/Jerusalem%20Biblical%20Zoo | The Tisch Family Biblical Zoo in Jerusalem (, ), popularly known as the Jerusalem Biblical Zoo, is a zoo located in the Malha neighborhood of Jerusalem. It is famous for its Afro-Asiatic collection of wildlife, many of which are described in the Hebrew Bible, as well as for its success in breeding endangered species. According to Dun and Bradstreet, the Biblical Zoo was the most popular tourist attraction in Israel from 2005 to 2007, and logged a record 738,000 visitors in 2009. The zoo had about 55,000 members in 2009.
History
Downtown Jerusalem (1940–1947)
The Jerusalem Biblical Zoo opened in September 1940 as a small "animal corner" on Rabbi Kook Street in central Jerusalem. The zoo was founded by Aharon Shulov, a professor of zoology at the Hebrew University of Jerusalem, Mount Scopus. Among Shulov's goals were to provide a research facility for his students; to gather animals, reptiles and birds mentioned in the Bible; and, as he wrote in 1951, to break down the "invisible wall" between the intellectuals on Mount Scopus and the general public.
Early on, the zoo ran into several difficulties in its decision to focus on animals mentioned in the Bible. For one, the meaning of many names of animals, reptiles and birds in Scripture is often uncertain; for example, nesher, commonly translated as "eagle", could also mean "vulture". More significantly, many of the animals mentioned in the Bible are now extinct in Israel due to over-hunting, destruction of natural habitats by rapid construction and development, illegal poisoning by farmers, and low birth rate. Zoo planners decided to branch beyond strictly biblical animals and include worldwide endangered species as well.
The presence of the animal corner generated many complaints from residents in adjoining buildings due to the smell and noise, as well as the perceived danger of animal escapes. Due to the complaints, the zoo relocated in 1941 to a lot on Shmuel HaNavi Street. Here, too, complaints were heard |
https://en.wikipedia.org/wiki/Susana%20Urbina | Susana Urbina (born 1946) is a Peruvian-American psychologist. She received her Ph.D. in Psychometrics from Fordham University in 1972 and was licensed in Florida in 1976. She currently teaches at University of North Florida, where her principal areas of teaching and research are psychological testing and assessment.
Urbina is a fellow of Division 5 (Evaluation, Measurement, and Statistics) of the American Psychological Association (APA) and of the Society for Personality Assessment. She has chaired the Committee on Psychological Tests and Assessment and the Committee on Professional Practice and Standards of the APA. In 1995, Urbina was part of an 11-member APA task force led by Ulric Neisser which published "Intelligence: Knowns and Unknowns," a report written in response to The Bell Curve.
Publications
Urbina S. Psychological Testing: Seventh Edition, Study Guide. Macmillan Pub Co; 6th edition (June 1, 1989)
Urbina S. Psychological Testing: Study Guide. Prentice Hall; 7th edition (September 1, 1997)
Urbina S. (2014) Essentials of Psychological Testing. Wiley; 2nd edition
External links
Susana Urbina website via University of North Florida
American women psychologists
21st-century American psychologists
Fordham University alumni
People from Jacksonville, Florida
University of North Florida faculty
Place of birth missing (living people)
1946 births
Living people
American women academics
21st-century American women scientists
20th-century American psychologists
Quantitative psychologists
20th-century American women scientists
American people of Peruvian descent |
https://en.wikipedia.org/wiki/Sriramachakra | Sriramachakra (also called Sri Rama Chakra, Ramachakra, Rama Chakra, or Ramar Chakra) is a mystic diagram or a yantra given in Tamil almanacs as an instrument of astrology for predicting one's future. The geometrical diagram consists of a square divided into smaller squares by equal numbers of lines parallel to the sides of the square. Certain integers in well defined patterns are written in the various smaller squares. In some almanacs, for example, in the Panchangam published by the Sringeri Sharada Peetham or the Pnachangam published by Srirangam Temple, the diagram takes the form of a magic square of order 4 with certain special properties. This magic square belongs to a certain class of magic squares called strongly magic squares (or complete magic squares) which has been so named and studied by T V Padmakumar, an amateur mathematician from Thiruvananthapuram, Kerala. In some almanacs, for example, in the Pambu Panchangam, the diagram consists of an arrangement of 36 small squares in 6 rows and 6 columns in which the digits 1, 2, ..., 9 are written in that order from left to right starting from the top-left corner, repeating the digits in the same direction once the digit 9 is reached.
There is another, smaller mystic diagram given in Tamil almanacs, called the Seetha Chakra. In some almanacs it is given as a magic square of order 3, whereas in others it is an arrangement of 9 small squares in 3 rows and 3 columns in which the digits 1, 2, ..., 9 are written in that order column-wise from left to right.
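The two digit-filling rules just described can be generated programmatically. This sketch reproduces the layouts as stated in the text (row-wise cycling of 1–9 for the 6×6 diagram, column-wise 1–9 for the 3×3 Seetha Chakra); it is not a reproduction of any particular almanac's printed chart, and the function names are our own.

```python
def rama_chakra_grid(n=6):
    """Pambu Panchangam-style layout: digits 1..9 written row by row,
    left to right from the top-left corner, cycling back to 1 after 9."""
    return [[(r * n + c) % 9 + 1 for c in range(n)] for r in range(n)]

def seetha_chakra_grid(n=3):
    """Seetha Chakra variant: digits 1..9 written column-wise,
    columns filled top to bottom, left to right."""
    return [[c * n + r + 1 for c in range(n)] for r in range(n)]
```

For example, the second row of the 6×6 grid continues the cycle as 7, 8, 9, 1, 2, 3, and the 3×3 grid has 1, 2, 3 down its first column.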
These Chakras are used by the believers to predict the future. A believer takes a small flower, prays to God seeking divine directions and drops the flower randomly on a board containing an inscription of one of the Chakras. The number on which the flower falls is believed to give a broad indication of the future of the believer. For example, if the design is Sri Rama Chakra in the form of a magic square and the number on which the flower has fallen is 11 |
https://en.wikipedia.org/wiki/Splenocyte | A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions. |
https://en.wikipedia.org/wiki/Photochemical%20Reflectance%20Index | The Photochemical Reflectance Index (PRI) is a reflectance measurement developed by John Gamon during his tenure as a postdoctorate fellow supervised by Christopher Field at the Carnegie Institution for Science at Stanford University. The PRI is sensitive to changes in carotenoid pigments (e.g. xanthophyll pigments) in live foliage. Carotenoid pigments are indicative of photosynthetic light use efficiency, or the rate of carbon dioxide uptake by foliage per unit energy absorbed. As such, it is used in studies of vegetation productivity and stress. Because the PRI measures plant responses to stress, it can be used to assess general ecosystem health using satellite data or other forms of remote sensing. Applications include vegetation health in evergreen shrublands, forests, and agricultural crops prior to senescence. PRI is defined by the following equation using reflectance (ρ) at 531 and 570 nm wavelength:
Some authors use
The values range from –1 to 1.
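The index is a one-line normalized-difference computation from two reflectance values. A minimal sketch, assuming reflectances already extracted at (or interpolated to) 531 nm and 570 nm:

```python
def pri(r531, r570):
    """Photochemical Reflectance Index from reflectance at 531 nm and 570 nm.

    Uses the convention (rho531 - rho570) / (rho531 + rho570); some authors
    swap the sign. For non-negative reflectances the result lies in [-1, 1].
    """
    return (r531 - r570) / (r531 + r570)
```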
Sources
ENVI Users Guide
Gamon, J., Peñuelas, J., and Field, C. (1992). A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency. Remote Sensing of Environment, 41, 35–44.
Drolet, G.G., Huemmrich, K.F., Hall, F.G., Middleton, E.M., Black, T.A., Barr, A.G., and Margolis, H.A. (2005). A MODIS-derived photochemical reflectance index to detect inter-annual variations in the photosynthetic light-use efficiency of a boreal deciduous forest. Remote Sensing of Environment, 98, 212–224.
Biophysics
Botany
Remote sensing
1992 introductions |
https://en.wikipedia.org/wiki/Failure%20to%20thrive | Failure to thrive (FTT), also known as weight faltering or faltering growth, indicates insufficient weight gain or absence of appropriate physical growth in children. FTT is usually defined in terms of weight, and can be evaluated either by a low weight for the child's age, or by a low rate of increase in the weight.
The term "failure to thrive" has been used in different ways, as there is no objective standard or universally accepted definition for when to diagnose FTT. One definition describes FTT as a fall in one or more weight centile spaces on a World Health Organization (WHO) growth chart depending on birth weight or when weight is below the 2nd percentile of weight for age irrespective of birth weight. Another definition of FTT is a weight for age that is consistently below the 5th percentile or weight for age that falls by at least two major percentile lines on a growth chart. While weight loss after birth is normal and most babies return to their birth weight by three weeks of age, clinical assessment for FTT is recommended for babies who lose more than 10% of their birth weight or do not return to their birth weight after three weeks. Failure to thrive is not a specific disease, but a sign of inadequate weight gain.
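The two newborn thresholds mentioned above (loss of more than 10% of birth weight, and failure to return to birth weight after three weeks) can be expressed as a small screening check. This is purely an illustration of the stated criteria, with hypothetical function and label names; it is not clinical guidance.

```python
def newborn_weight_flag(birth_weight_g, current_weight_g, age_days):
    """Flag the two newborn assessment criteria quoted in the text:
    >10% loss of birth weight, or no return to birth weight after 3 weeks."""
    loss_fraction = (birth_weight_g - current_weight_g) / birth_weight_g
    if loss_fraction > 0.10:
        return "assess: >10% weight loss"
    if age_days > 21 and current_weight_g < birth_weight_g:
        return "assess: not back to birth weight after 3 weeks"
    return "no flag"
```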
In veterinary medicine, FTT is also referred to as ill-thrift.
Signs and symptoms
Failure to thrive is most commonly diagnosed before two years of age, when growth rates are highest, though FTT can present among children and adolescents of any age. Caretakers may express concern about poor weight gain or smaller size compared to peers of a similar age. Physicians often identify failure to thrive during routine office visits, when a child's growth parameters such as height and weight are not increasing appropriately on growth curves. Other signs and symptoms may vary widely depending on the etiology of FTT. It is also important to differentiate stunting from wasting, as they can indicate different causes of FTT. "Wasting" refers to a deceler |
https://en.wikipedia.org/wiki/Computational%20epigenetics | Computational epigenetics uses statistical methods and mathematical modelling in epigenetic research. Due to the recent explosion of epigenome datasets, computational methods play an increasing role in all areas of epigenetic research.
Definition
Research in computational epigenetics comprises the development and application of bioinformatics methods for solving epigenetic questions, as well as computational data analysis and theoretical modeling in the context of epigenetics. This includes modelling of the effects of histone modifications and of DNA methylation at CpG islands.
Current research areas
Epigenetic data processing and analysis
Various experimental techniques have been developed for genome-wide mapping of epigenetic information, the most widely used being ChIP-on-chip, ChIP-seq and bisulfite sequencing. All of these methods generate large amounts of data and require efficient ways of data processing and quality control by bioinformatic methods.
Epigenome prediction
A substantial amount of bioinformatic research has been devoted to the prediction of epigenetic information from characteristics of the genome sequence. Such predictions serve a dual purpose. First, accurate epigenome predictions can substitute for experimental data, to some degree, which is particularly relevant for newly discovered epigenetic mechanisms and for species other than human and mouse. Second, prediction algorithms build statistical models of epigenetic information from training data and can therefore act as a first step toward quantitative modeling of an epigenetic mechanism. Successful computational prediction of DNA and lysine methylation and acetylation has been achieved by combinations of various features.
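Prediction algorithms of this kind start from sequence-derived features. As one toy illustration (the specific feature is our choice, not one named in the text), the observed/expected CpG ratio used in classic CpG-island definitions can be computed directly from a DNA string:

```python
def cpg_obs_exp(seq):
    """Observed/expected CpG ratio, a classic sequence feature for
    methylation-related annotation such as CpG-island prediction:
    obs/exp = (count('CG') * len(seq)) / (count('C') * count('G'))."""
    seq = seq.upper()
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return 0.0
    cg = sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")
    return cg * len(seq) / (c * g)
```

A CpG-rich stretch scores well above 1, while a dinucleotide-depleted stretch scores near 0; such features are typically combined with many others in a trained classifier.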
Applications in cancer epigenetics
The important role of epigenetic defects for cancer opens up new opportunities for improved diagnosis and therapy. These active areas of research give rise to two questions that are particularly amenable to bioinformatic analysis. First, given a list of |
https://en.wikipedia.org/wiki/Crescent%20Porter%20Hale | Crescent Porter Hale (1872–1937) was an American industrialist who was involved in the canned salmon industry in Bristol Bay, Alaska throughout his adult life.
Early life
Hale was born in Santa Cruz, California, the seventh child of gold rush pioneer Titus Hale; the family moved to a farm on the lower Sacramento River in 1880. In 1885 Hale's sister Rose married a Sacramento cannery man, Joseph Peter Haller, who had been hired to build the first cannery on Bristol Bay's Nushagak River earlier that year. Haller was then asked by William Bradford to build another cannery across the river, and when he returned to Alaska in 1886, he brought along his 14-year-old brother-in-law Crescent.
Crescent Hale, also known as Cress or Cres, returned to the Nushagak, near present-day Dillingham, AK, in the summers that followed. At the Bradford cannery, he was schooled in the salmon canning trade. He was there in 1893 when all four Nushagak canneries merged with others to form the Alaska Packers Association (APA), a cannery cartel designed to control production and sell off surplus stockpiles of canned salmon.
At age 21, Cress was named superintendent of the Bradford cannery, later renamed the Diamond BB after the company name, the Bristol Bay Packing Co. Hale caught the eye of competitors and in 1899 the Pacific Steam Whaling Company hired 27-year-old Cress to build a salmon cannery at Nushagak Point and serve as its first superintendent.
North Alaska Salmon Company
Then in 1900, Joseph Haller formed the North Alaska Salmon Company and returned to Bristol Bay with two of his brothers-in-law. Cress Hale was named general superintendent of Haller's company, and Hale's brother William was named bookkeeper.
Cress built two new canneries: one on the Kvichak River at a place named Hallerville and the other on the Egegik River across from the APA's Diamond E. Two years later, he expanded North Alaska's holdings again, building the Lockenok Cannery at the confluence of the Kvichak and Alagnak Rivers and then ret |
https://en.wikipedia.org/wiki/WebCL | WebCL (Web Computing Language) is a JavaScript binding to OpenCL for heterogeneous parallel computing within any compatible web browser without the use of plug-ins, first announced in March 2011. It is developed on similar grounds as OpenCL and is considered as a browser version of the latter. Primarily, WebCL allows web applications to actualize speed with multi-core CPUs and GPUs. With the growing popularity of applications that need parallel processing like image editing, augmented reality applications and sophisticated gaming, it has become more important to improve the computational speed. With these background reasons, a non-profit Khronos Group designed and developed WebCL, which is a Javascript binding to OpenCL with a portable kernel programming, enabling parallel computing on web browsers, across a wide range of devices. In short, WebCL consists of two parts, one being Kernel programming, which runs on the processors (devices) and the other being JavaScript, which binds the web application to OpenCL. The completed and ratified specification for WebCL 1.0 was released on March 19, 2014.
Implementation
Currently, no browsers natively support WebCL. However, non-native add-ons are used to implement WebCL. For example, Nokia developed a WebCL extension. Mozilla does not plan to implement WebCL in favor of WebGL Compute Shaders, which were in turn scrapped in favor of WebGPU.
Mozilla (Firefox) -
WebCL working draft
Samsung (WebKit) - (unavailable)
Nokia (Firefox) - (down since Nov 2014, Last Version for FF 34)
Intel (Crosswalk) -
Example C code
The basic unit of a parallel program is the kernel. A kernel is any parallelizable task used to perform a specific job. Often, functions can be realized as kernels. A program can be composed of one or more kernels. In order to realize a kernel, it is essential that the task be parallelizable. Data dependencies and order of execution play a vital role in producing efficient parallelized algorithms. A simple ex |
https://en.wikipedia.org/wiki/Hemerochory | Hemerochory (Ancient Greek ἥμερος, hemeros: 'tame, ennobled, cultivated, cultivated' and Greek χωρίς choris: separate, isolated) is the distribution of cultivated plants or their seeds and cuttings, consciously or unconsciously, by humans into an area that they could not colonize through their natural mechanisms of spread, but are able to maintain themselves without specific human help in their new habitat.
Hemerochory is one of the main propagation mechanisms of a plant. Hemerochoric plants can both increase and decrease the biodiversity of a habitat.
Categorisation
Hemerochoric plants are classified according to the manner of introduction into, for example:
Ethelochory: the conscious introduction by seed or young plants.
Speirochoria: the unintentional introduction by contaminated seed. Examples are the true chamomile and the cornflower.
Agochory: the introduction by unintentional transport with, among other things, ships, trains and cars. These plants are common in port areas, roadsides, stations and railways.
Division
Chronologically the hemerochoric plants are divided in:
Archaeophytes: plants that were introduced before the onset of world trade around the year 1500, or before the year 1492 (discovery of America).
Neophytes: plants that were introduced later.
Related terms
Anthropochory is often used synonymously but does not mean exactly the same. Anthropochory is the spread by humans. The spread through domestic animals does not belong to the anthropochoric, but to the hemerochoric, because domestic animals belong to the human culture. Strictly speaking, anthropochoric means the spread through humans as a transport medium. These can also be native species that were either adapted from the outset to locations created by human cultural activity or have adapted to them afterwards; As a result, their area of distribution has often, but not always, increased.
The term adventitious plants is sometimes used synonymously with hemerochory, but is often re |
https://en.wikipedia.org/wiki/PEPA | Performance Evaluation Process Algebra (PEPA) is a stochastic process algebra designed for modelling computer and communication systems introduced by Jane Hillston in the 1990s. The language extends classical process algebras such as Milner's CCS and Hoare's CSP by introducing probabilistic branching and timing of transitions.
Rates are drawn from the exponential distribution and PEPA models are finite-state and so give rise to a stochastic process, specifically a continuous-time Markov process (CTMC). Thus the language can be used to study quantitative properties of models of computer and communication systems such as throughput, utilisation and response time as well as qualitative properties such as freedom from deadlock. The language is formally defined using a structured operational semantics in the style invented by Gordon Plotkin.
As with most process algebras, PEPA is a parsimonious language. It has only four combinators, prefix, choice, co-operation and hiding. Prefix is the basic building block of a sequential component: the process (a, r).P performs activity a at rate r before evolving to behave as component P. Choice sets up a competition between two possible alternatives: in the process (a, r).P + (b, s).Q either a wins the race (and the process subsequently behaves as P) or b wins the race (and the process subsequently behaves as Q).
The co-operation operator requires the two "co-operands" to join for those activities which are specified in the co-operation set: in the process P <a, b> Q the processes P and Q must co-operate on activities a and b, but any other activities may be performed independently. The reversed compound agent theorem gives a set of sufficient conditions for a co-operation to have a product form stationary distribution.
Finally, the process P/{a} hides the activity a from view (and prevents other processes from joining with it).
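The race semantics of choice can be illustrated with the standard property of competing exponential clocks: in (a, r).P + (b, s).Q, activity a wins with probability r/(r + s). A small simulation sketch (the rates, seed, and function names are illustrative, not part of PEPA itself):

```python
import random

def choice_winner(rate_a, rate_b, rng):
    """Resolve the PEPA choice (a, r).P + (b, s).Q once: each activity
    samples an exponential delay, and the shorter delay wins the race."""
    return "a" if rng.expovariate(rate_a) < rng.expovariate(rate_b) else "b"

def win_fraction(rate_a, rate_b, trials=10_000, seed=0):
    """Empirical probability that activity a wins; for exponential clocks
    this converges to rate_a / (rate_a + rate_b)."""
    rng = random.Random(seed)
    wins = sum(choice_winner(rate_a, rate_b, rng) == "a" for _ in range(trials))
    return wins / trials
```

With rates 3.0 and 1.0 the empirical fraction settles near r/(r + s) = 3/4, matching the race interpretation described in the text.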
Syntax
Given a set of action names, the set of PEPA processes is defined by the following BNF |
https://en.wikipedia.org/wiki/Conscious%20breathing | Conscious breathing is an umbrella term for methods that direct awareness to the breath. These methods may have the goal of improving breathing, or the primary goal can be to build mindfulness. Human respiration is controlled consciously or unconsciously.
Training methods
Pranayama is part of the Yoga tradition and mainly deals with exercises, such as prolonging in- and outbreaths, holding pauses on the in- or outbreath or both, alternate nostril breathing, or breathing with the glottis slightly engaged etc.
The Buteyko method focuses on nasal breathing, relaxation and reduced breathing. These techniques provide the lungs with more NO (nitric oxide), thus dilating the airways, and are intended to prevent the excessive exhalation of CO2, thereby improving oxygen metabolism.
Coherent Breathing is a method that involves breathing at the rate of five breaths per minute with equal periods of inhalation and exhalation and conscious relaxation of anatomical zones.
Applications
Meditation
Conscious breathing in meditation usually does not change the depth or rhythm of breathing, but uses breathing as an anchor for concentration and awareness.
Mindfulness and Awareness Trainings use conscious breathing for training awareness and body consciousness.
Vipassana Meditation focuses on breathing in and around the nose to calm the mind (anapanasati).
Psychology and psychotherapy
Many breathwork methods claim that breathing can be used to access nonverbal memories.
Rebirthing uses conscious breathing to purge repressed birth memories and traumatic childhood memories.
Holotropic Breathing was developed by Stanislav Grof and uses deepened breathing to allow access to non-ordinary states of consciousness.
Transformational Breath uses a full relaxed breath that originates in the lower abdomen and repeats inhalation and exhalation without pausing. It integrates other healing modalities and breath analysis. A key feature is intensive personal coaching and the use of 'bodymapping' (acupressure points).
I |
https://en.wikipedia.org/wiki/Anton%20Formann | Anton K. Formann (August 27, 1949, Vienna, Austria – July 12, 2010, Vienna) was an Austrian research psychologist, statistician, and psychometrician. He is renowned for his contributions to item response theory (Rasch models), latent class analysis, the measurement of change, mixture models, categorical data analysis, and quantitative methods for research synthesis (meta-analysis).
Biography
Anton K. Formann studied psychology with statistics and anthropology (individual curriculum approved by the university) at the University of Vienna, Austria, where he received his PhD in psychology in 1973 under the supervision of Gerhard H. Fischer at the university's Department of Psychology. He worked as a postdoctoral researcher and Assistant Professor at Fischer's division until 1985, when he earned his postdoctoral professorial qualification (habilitation in psychology) and became Associate Professor at the University of Vienna.
He also studied statistics at Sheffield Hallam University (UK) where he graduated (MSc with distinction) in 1998. In 1999, he gained his second postdoctoral professional qualification (habilitation in applied statistics).
In 2004, after serving as substitute chair holder for five years, he became full professor of psychological methods at the University of Vienna, succeeding Gerhard H. Fischer in the chair of mathematical psychology.
From 2005 onwards, Formann was Vice Head of the Department of Basic Psychological Research within the Faculty of Psychology at the University of Vienna, and during 2006-08 additionally Vice Dean of the Faculty.
Scientific Work
Formann led long-standing research collaborations with colleagues in the statistical, medical, and psychological sciences. His substantial research activities in all these fields are documented in numerous books and more than 50 publications in prestigious high-impact journals, including Biometrics, the Journal of the American Statistical Association, the British Journal of Mathematical and Statistical |
https://en.wikipedia.org/wiki/Notoceratops | Notoceratops (meaning "southern horned face") is a dubious genus of extinct ornithischian dinosaur based on an incomplete, toothless left dentary (now lost) from the Late Cretaceous of Patagonia (in Argentina), probably dating to the Campanian or Maastrichtian. It was most likely a ceratopsian and it was found in the Lago Colhué Huapi Formation.
Discovery and naming
In 1918, palaeontologist Augusto Tapia (1893–1966) discovered the genus holotype. He also named the type species, N. bonarellii (originally spelt as Notoceratops Bonarelli), in 1918. The generic name is derived from Greek notos, "the south", keras, "horn" and ops, "face". The specific name honours Guido Bonarelli (1871-1951), who advised Tapia in his study of the find. By present conventions the epithet is spelled bonarellii, thus without a capital B. In many later publications the specific name is misspelled "bonarelli", with a single "i", from the incorrect assumption it would be derived from a Latinised "Bonarell~ius". The fossil, found near the Lago Colhué Huapi in Chubut, was eventually described by Friedrich von Huene in 1929, but it has since been lost.
Phylogeny
Originally referred to as a ceratopsian by Tapia in 1918, this assignment was later dismissed because no other members of that group were known from the Southern Hemisphere. However, the 2003 discovery of another possible ceratopsian, Serendipaceratops, from Australia could change this view. Notoceratops has since been considered a nomen dubium and may have been a hadrosaur instead. An analysis published by Tom Rich et al. in 2014, which focused on the validity of Serendipaceratops, also examined the published material from Notoceratops. They concluded that the holotype had ceratopsian features and that the genus is probably valid.
https://en.wikipedia.org/wiki/Mir-337%20microRNA%20precursor%20family | In molecular biology, mir-337 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
See also
MicroRNA |
https://en.wikipedia.org/wiki/Toxidrome | A toxidrome (a portmanteau of toxic and syndrome, coined in 1970 by Mofenson and Greensher) is a syndrome caused by a dangerous level of toxins in the body. It is often the consequence of a drug overdose. Common symptoms include dizziness, disorientation, nausea, vomiting and oscillopsia. It may indicate a medical emergency requiring treatment at a poison control center. Aside from poisoning, a systemic infection may also lead to one. Classic toxidromes are presented below, which are variable or obscured by co-ingestion of multiple drugs.
Anticholinergic
The symptoms of an anticholinergic toxidrome include blurred vision, coma, decreased bowel sounds, delirium, dry skin, fever, flushing, hallucinations, ileus, memory loss, mydriasis (dilated pupils), myoclonus, psychosis, seizures and urinary retention. Complications include hypertension, hyperthermia and tachycardia. Substances that may cause this toxidrome include antihistamines, antipsychotics, antidepressants, antiparkinsonian drugs, atropine, benztropine, datura, diphenhydramine and scopolamine.
Cholinergic
The symptoms of a cholinergic toxidrome include bronchorrhea, confusion, defecation, diaphoresis, diarrhea, emesis, lacrimation, miosis, muscle fasciculations, salivation, seizures, urination and weakness. Complications include bradycardia, hypothermia and tachypnea. Substances that may cause this toxidrome include carbamates, mushrooms and organophosphates.
Hallucinogenic
The symptoms of a hallucinogenic toxidrome include disorientation, hallucinations, hyperactive bowel sounds, panic and seizures. Complications include hypertension, tachycardia and tachypnea. Substances that may cause this toxidrome include substituted amphetamines, cocaine and phencyclidine.
Opiate
The symptoms of an opiate toxidrome include the classic triad of coma, pinpoint pupils and respiratory depression as well as altered mental states, shock, pulmonary edema and unresponsiveness. Complications include bradycardia, hypotens |
https://en.wikipedia.org/wiki/Anaerohalosphaera | Anaerohalosphaera lusitana is a species of bacteria.
See also
List of bacterial orders
List of bacteria genera |
https://en.wikipedia.org/wiki/Sphenopalatine%20foramen | The sphenopalatine foramen is a fissure of the skull that connects the nasal cavity and the pterygopalatine fossa. It gives passage to the sphenopalatine artery, nasopalatine nerve, and the superior nasal nerve (all passing from the pterygopalatine fossa into the nasal cavity).
Structure
The processes of the superior border of the palatine bone are separated by the sphenopalatine notch, which is converted into the sphenopalatine foramen by the under surface of the body of the sphenoid.
The sphenopalatine foramen is situated posterior to the middle nasal meatus and the orbital process of the palatine bone, anterior to the sphenoidal process of the palatine bone, inferior to the body of the sphenoid bone, and superior to the superior margin of the perpendicular plate of the palatine bone.
Relations
The ethmoid crest (a reliable surgical landmark) is situated anterior to the sphenopalatine foramen.
Additional images |
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%202 | Mothers against decapentaplegic homolog 2 also known as SMAD family member 2 or SMAD2 is a protein that in humans is encoded by the SMAD2 gene. MAD homolog 2 belongs to the SMAD, a family of proteins similar to the gene products of the Drosophila gene 'mothers against decapentaplegic' (Mad) and the C. elegans gene Sma. SMAD proteins are signal transducers and transcriptional modulators that mediate multiple signaling pathways.
Function
SMAD2 mediates the signal of the transforming growth factor (TGF)-beta, and thus regulates multiple cellular processes, such as cell proliferation, apoptosis, and differentiation. This protein is recruited to the TGF-beta receptors through its interaction with the SMAD anchor for receptor activation (SARA) protein. In response to TGF-beta signal, this protein is phosphorylated by the TGF-beta receptors. The phosphorylation induces the dissociation of this protein with SARA and the association with the family member SMAD4. The association with SMAD4 is important for the translocation of this protein into the cell nucleus, where it binds to target promoters and forms a transcription repressor complex with other cofactors. This protein can also be phosphorylated by activin type 1 receptor kinase, and mediates the signal from the activin. Alternatively spliced transcript variants encoding the same protein have been observed.
Like other Smads, Smad2 plays a role in the transmission of extracellular signals from ligands of the Transforming Growth Factor beta (TGFβ) superfamily of growth factors into the cell nucleus. Binding of a subgroup of TGFβ superfamily ligands to extracellular receptors triggers phosphorylation of Smad2 at a Serine-Serine-Methionine-Serine (SSMS) motif at its extreme C-terminus. Phosphorylated Smad2 is then able to form a complex with Smad4. These complexes accumulate in the cell nucleus, where they participate directly in the regulation of gene expression.
Nomenclature
The SMAD proteins are homologs of bot |
https://en.wikipedia.org/wiki/AstraZeneca | AstraZeneca plc () is an Anglo-Swedish multinational pharmaceutical and biotechnology company with its headquarters at the Cambridge Biomedical Campus in Cambridge, England. It has a portfolio of products for major diseases in areas including oncology, cardiovascular, gastrointestinal, infection, neuroscience, respiratory, and inflammation. It has been involved in developing the Oxford–AstraZeneca COVID-19 vaccine.
The company was founded in 1999 through the merger of the Swedish Astra AB and the British Zeneca Group (itself formed by the demerger of the pharmaceutical operations of Imperial Chemical Industries in 1993). Since the merger it has been among the world's largest pharmaceutical companies and has made numerous corporate acquisitions, including Cambridge Antibody Technology (in 2006), MedImmune (in 2007), Spirogen (in 2013) and Definiens (by MedImmune in 2014). It has its research and development concentrated in three strategic centres: Cambridge, England; Gothenburg, Sweden and Gaithersburg in Maryland, U.S.
AstraZeneca traces its earliest corporate history to 1913, when Astra AB was formed by a large group of doctors and apothecaries in Södertälje. Throughout the twentieth century, it grew into the largest pharmaceutical company in Sweden, and was considered a large company by the time of the merger. Its British counterpart, Zeneca PLC was formed in 1993 when ICI divested its pharmaceuticals businesses; Astra AB and Zeneca PLC merged six years later, with the chosen headquarters in the United Kingdom.
AstraZeneca's primary listing is on the London Stock Exchange and is a constituent of the FTSE 100 Index; it also has a secondary listing on the Nasdaq Stockholm. AstraZeneca has one of the highest market capitalisations of pharmaceutical companies worldwide.
History
Astra AB was founded in 1913 in Södertälje, Sweden, by 400 doctors and apothecaries. In 1993 the British chemicals company ICI (established from four British chemical companies) demerged i |
https://en.wikipedia.org/wiki/PMX%20%28technology%29 | PMX refers to the technology developed by Pelmorex to generate local weather information on The Weather Network. PMX consists of computers, typically installed at a cable headend, that takes data fed to it (the video feed of The Weather Network, forecast information, and triggers to run said forecasts) and packages it for broadcast. Unlike the Weather Star systems, it does not generate full graphical or video segments, rather the information is super-imposed over the main video feed. There are 4 different PMX units: PMX-1500, PMX-3200, PMX-NG and PMX-XD.
History
PMX was developed by Pelmorex in 1995 as a standard localization system to replace the older display units and text-based systems that were still used in smaller communities. The PMX technology was rolled out quickly starting in 1996, with all communities receiving the new units by 1998. PMX generates local weather information for more than 1,200 communities across Canada.
Timeline
1996-1998: PMX-1500 units are gradually deployed at major cable head-end locations across Canada. They were originally formatted as a replica of the previous WeatherSTAR units, only with a slightly different icon and font set. The earliest known date of a PMX unit working is March 28, 1996, from Timmins, ON.
December 1997: Format changes occur with the PMX-1500 system. Notable changes include extended forecasts for the next 5 days and national forecast maps immediately following the Local Forecast. The maps were discontinued one year later.
April 1999: Topographical Satellite and Radar Maps are introduced, replacing the solid color maps previously used.
August 2001: The PMX-1500 receives minor graphical updates, including an updated font set and fade effects in various segments.
Winter 2001: New icons are introduced to the PMX-1500 system. The 7-day outlook and the short-term precipitation forecast are also introduced.
July 2002: The classic current weather conditions ticker is replaced with a bar containing both current conditions and forecasts at a |
https://en.wikipedia.org/wiki/Multipath%20I/O | In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them.
As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently, with no changes visible to the applications other than perhaps increased latency.
Multipath software layers can leverage the redundant paths to provide performance-enhancing features, including dynamic load balancing, traffic shaping, automatic path management, and dynamic reconfiguration.
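The failover and load-balancing idea can be sketched in a few lines (a toy model with invented path names and methods; real multipath layers such as Linux DM Multipath work at the block-device level and are far more elaborate): a selector round-robins over paths and transparently skips any path marked failed.

```python
class MultipathSelector:
    """Round-robin over I/O paths, transparently skipping failed ones."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._next = 0

    def mark_failed(self, path):
        self.failed.add(path)

    def mark_restored(self, path):
        self.failed.discard(path)

    def pick(self):
        # Try each path at most once per call; skip failed ones.
        for _ in range(len(self.paths)):
            path = self.paths[self._next % len(self.paths)]
            self._next += 1
            if path not in self.failed:
                return path
        raise IOError("all paths failed")

sel = MultipathSelector(["ctrl0", "ctrl1"])
sel.mark_failed("ctrl0")          # e.g. the first SCSI controller dies
print(sel.pick(), sel.pick())     # I/O keeps flowing via ctrl1
```

Applications calling pick() see no change when a path fails, which mirrors the transparency property described above.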
See also
Device mapper
Linux DM Multipath
External links
Linux Multipathing, Linux Symposium 2005 p. 147
VxDMP white paper, Veritas Dynamic Multi pathing
Linux Multipath Usage guide
Computer data storage
Computer storage technologies
Fault-tolerant computer systems |
https://en.wikipedia.org/wiki/Cayley%E2%80%93Bacharach%20theorem | In mathematics, the Cayley–Bacharach theorem is a statement about cubic curves (plane curves of degree three) in the projective plane . The original form states:
Assume that two cubics C1 and C2 in the projective plane meet in nine (different) points, as they do in general over an algebraically closed field. Then every cubic that passes through any eight of the points also passes through the ninth point.
A more intrinsic form of the Cayley–Bacharach theorem reads as follows:
Every cubic curve over an algebraically closed field that passes through a given set of eight points P1, ..., P8 also passes through (counting multiplicities) a ninth point P9, which depends only on P1, ..., P8.
A related result on conics was first proved by the French geometer Michel Chasles and later generalized to cubics by Arthur Cayley and Isaak Bacharach.
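The theorem can be checked numerically in a special case (a sketch; the nine points chosen here are the pairwise intersections of three horizontal and three vertical lines, so they are cut out by the two degenerate cubics x(x−1)(x−2) and y(y−1)(y−2)). The space of cubics vanishing at eight of the points is computed as a matrix null space, and every cubic in it vanishes at the ninth point as well.

```python
import numpy as np

def cubic_monomials(x, y, z=1.0):
    """The ten degree-3 monomials in homogeneous coordinates (x : y : z)."""
    return [x**3, x**2*y, x**2*z, x*y**2, x*y*z, x*z**2,
            y**3, y**2*z, y*z**2, z**3]

# Nine intersection points of the cubics x(x-1)(x-2) = 0 and y(y-1)(y-2) = 0.
points = [(float(x), float(y)) for x in range(3) for y in range(3)]
eight, (x9, y9) = points[:8], points[8]

# Cubics through the eight points = null space of this 8x10 matrix.
M = np.array([cubic_monomials(x, y) for x, y in eight])
_, s, Vt = np.linalg.svd(M)
null_basis = Vt[8:]            # two-dimensional space of coefficient vectors

# Every such cubic also vanishes at the ninth point.
vals = null_basis @ np.array(cubic_monomials(x9, y9))
print(np.max(np.abs(vals)))    # should be numerically zero
```

Note that for these points no seven lie on a conic and no four on a line, so the space of cubics through them has dimension exactly two, matching the hypothesis discussed below.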
Details
If seven of the points P1, ..., P8 lie on a conic, then the ninth point can be chosen on that conic, since any cubic through the eight points will always contain the whole conic on account of Bézout's theorem. In other cases, we have the following.
If no seven points out of P1, ..., P8 are co-conic, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 (with multiplicity for double points) has dimension two.
In that case, every cubic through P1, ..., P8 also passes through the intersection of any two different cubics through P1, ..., P8, which has at least nine points (over the algebraic closure) on account of Bézout's theorem. These points cannot be covered by P1, ..., P8 only, which gives us P9.
Since degenerate conics are a union of at most two lines, there are always four out of seven points on a degenerate conic that are collinear. Consequently:
If no seven points out of P1, ..., P8 lie on a non-degenerate conic, and no four points out of P1, ..., P8 lie on a line, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 has dimension two.
On the other hand, assume are collinear and no seven points out of are co-conic. Then no five points of and no three points of are colline |
https://en.wikipedia.org/wiki/Sepiapterin%20reductase%20deficiency | Sepiapterin reductase deficiency is an inherited pediatric disorder characterized by movement problems, and most commonly displayed as a pattern of involuntary sustained muscle contractions known as dystonia. Symptoms are usually present within the first year of age, but diagnosis is delayed due to physicians lack of awareness and the specialized diagnostic procedures. Individuals with this disorder also have delayed motor skills development including sitting, crawling, and need assistance when walking. Additional symptoms of this disorder include intellectual disability, excessive sleeping, mood swings, and an abnormally small head size. SR deficiency is a very rare condition. The first case was diagnosed in 2001, and since then there have been approximately 30 reported cases. At this time, the condition seems to be treatable, but the lack of overall awareness and the need for a series of atypical procedures used to diagnose this condition pose a dilemma.
Signs and symptoms
Cognitive problems
Intellectual disability: Delay in cognitive development
Extreme mood swings
Language delay
Motor problems
Dystonia: involuntary muscle contractions
Axial hypotonia: low muscle tone and strength
Dysarthria: impairment in muscles used for speech
Muscle stiffness and tremors
Seizures
Coordination and balance impairment
Oculogyric crises: abnormal rotation of the eyes
The oculogyric crises usually occur in the later half of the day and during these episodes patients undergo extreme agitation and irritability along with uncontrolled head and neck movements. Apart from the aforementioned symptoms, patients can also display parkinsonism, sleep disturbances, small head size (microcephaly), behavioral abnormalities, weakness, drooling, and gastrointestinal symptoms.
Causes
This disorder occurs through a mutation in the SPR gene, which is responsible for encoding the sepiapterin reductase enzyme. The enzyme is involved in the last step of producing tetrahydrobiopterin, better kno |
https://en.wikipedia.org/wiki/Canis%20lupus%20dingo | In the taxonomic treatment presented in the third (2005) edition of Mammal Species of the World, Canis lupus dingo is a taxonomic rank that includes both the dingo that is native to Australia and the New Guinea singing dog that is native to the New Guinea Highlands. It also includes some extinct dogs that were once found in coastal Papua New Guinea and the island of Java in the Indonesian Archipelago. In this treatment it is a subspecies of Canis lupus, the wolf (the domestic dog is treated as a different wolf subspecies), although other treatments consider the dog as a full species, with the dingo and its relatives either as a subspecies of the dog (as Canis familiaris dingo), a species in its own right (Canis dingo), or simply as an unnamed variant or genetic clade within the larger population of dogs (thus, Canis familiaris, not further differentiated). The genetic evidence indicates that the dingo clade originated from East Asian domestic dogs and was introduced through the Malay Archipelago into Australia, with a common ancestry between the Australian dingo and the New Guinea singing dog. The New Guinea singing dog is genetically closer to those dingoes that live in southeastern Australia than to those that live in the northwest.
Taxonomic debate – the domestic dog, dingo, and New Guinea singing dog
Nomenclature
Zoological nomenclature is a system of naming animals. In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his Systema Naturae the two-word naming of species (binomial nomenclature). Canis is the Latin word meaning "dog", and under this genus he listed the domestic dog, grey wolf and the golden jackal. He classified the domestic dog as Canis familiaris, and on the next page he classified the grey wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its upturning tail (cauda recurvata), which is not found in any other canid.
The International Commission on Zoological Nomenclature (ICZN) |
https://en.wikipedia.org/wiki/Cheyette%20model | In mathematical finance, the Cheyette Model is a quasi-Gaussian, quadratic volatility model of interest rates intended to overcome certain limitations of the Heath-Jarrow-Morton framework.
By imposing a special time dependent structure on the forward rate volatility function, the Cheyette approach allows for dynamics which are Markovian, in contrast to the general HJM model.
This in turn allows the application of standard econometric valuation concepts.
External links and references
Cheyette, O. (1994). Markov representation of the Heath-Jarrow-Morton model (working paper). Berkeley: BARRA Inc.
Chibane, M. and Law, D. (2013). A quadratic volatility Cheyette model, Risk.net
Financial models
Mathematical finance
Fixed income analysis
Heath–Jarrow–Morton framework |
https://en.wikipedia.org/wiki/Exodermis | The exodermis is a physiological barrier that has a role in root function and protection. The exodermis is a membrane of variable permeability responsible for the radial flow of water, ions, and nutrients. It is the outer layer of a plant's cortex. The exodermis serves a double function as it can protect the root from invasion by foreign pathogens and ensures that the plant does not lose too much water through diffusion through the root system and can properly replenish its stores at an appropriate rate.
Overview and function
The exodermis is a specialized type of hypodermis that develops Casparian strips in its cell wall, as well as further wall modifications. The Casparian strip is a band of hydrophobic, corky-like tissue that is found on the outside of the endodermis and the exodermis. Its main function is to prevent solution backflow into the cortex and to maintain root pressure. It is also involved in ensuring that soil is not pulled directly into the root system during nutrient uptake.
Exodermis cells are found in the outermost layer of almost all seeded vascular plants and in the outer layer of the cortex of many angiosperms, including onion, Hoya carnosa, maize, and sunflower plants, but not in seedless vascular plants. As with most plant traits, there is a large variety in the thickness and permeability of the exodermis, allowing plants to be better suited to their environments.
Although the term barrier is used to describe the exodermis, it behaves more like a membrane through which different materials can pass. It can modify its permeability in response to different external stimuli to better suit the root's requirements. This supports survival: root systems are exposed to changing environmental conditions, so the plant must modify itself as necessary, either by thickening or thinning the Casparian strips or by changing the permeability of the band to certain ions. It also has been f
https://en.wikipedia.org/wiki/Amorph%20%28gene%29 | An amorph is a mutated allele that has lost the ability of the parent allele (whether wild type or any other type) to encode any functional protein. An amorph mutation, or null, is the loss of genetic information for the synthesis of appropriate mRNA. Depending on the relationships of the parent allele, an amorphous mutant can have various forms of gene interactions.
The term "amorph" was used by Hermann Joseph Muller in 1932.
See also
Allele
Gene mutation
Muller's morphs |
https://en.wikipedia.org/wiki/Misalignment%20mechanism | The misalignment mechanism is a hypothesized effect in the Peccei–Quinn theory proposed solution to the strong-CP problem in quantum mechanics. The effect occurs when a particle's field has an initial value that is not at or near a potential minimum. This causes the particle's field to oscillate around the nearest minimum, eventually dissipating energy by decaying into other particles until the minimum is attained.
In the case of hypothesized axions created in the early universe, the initial field values are random because the axion is effectively massless in the high-temperature plasma. Near the critical temperature of quantum chromodynamics, the axion acquires a temperature-dependent mass, and the field enters a damped oscillation until the potential minimum is reached.
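A toy numerical sketch of such a damped oscillation (assuming the schematic equation of motion φ'' + 3Hφ' + m²φ = 0 with constant H and m, which ignores the temperature dependence and expansion history described above; all units and parameter values are arbitrary and chosen for illustration):

```python
import math

# Damped oscillation of a field displaced from its potential minimum:
#   phi'' + 3*H*phi' + m**2 * phi = 0   (schematic, constant H and m)
H, m = 0.05, 1.0
phi, dphi = 1.0, 0.0         # misaligned initial value, field initially at rest
dt, steps = 1e-3, 200_000    # integrate to t = 200 in arbitrary units

for _ in range(steps):
    ddphi = -3.0 * H * dphi - m**2 * phi
    dphi += ddphi * dt       # semi-implicit Euler: update velocity first
    phi += dphi * dt

# Oscillation envelope: combines displacement and velocity into one amplitude.
amplitude = math.hypot(phi, dphi / m)
print(amplitude)             # far smaller than the initial displacement of 1.0
```

The amplitude decays roughly as exp(-3Ht/2), illustrating how the field settles toward the minimum while its oscillation energy is redshifted away.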
https://en.wikipedia.org/wiki/Band%20sum | In geometric topology, a band sum of two n-dimensional knots K1 and K2 along an (n + 1)-dimensional 1-handle h called a band is an n-dimensional knot K such that:
There is an (n + 1)-dimensional 1-handle h connected to (K1, K2) embedded in Sn+2.
There are points and such that is attached to along .
K is the n-dimensional knot obtained by this surgery.
A band sum is thus a generalization of the usual connected sum of knots.
See also
Manifold decomposition |
https://en.wikipedia.org/wiki/Windows%20RT | Windows RT is a mobile operating system developed by Microsoft. It is a version of Windows 8 or Windows 8.1 built for the 32-bit ARM architecture (ARMv7). First unveiled in January 2011 at Consumer Electronics Show, the Windows RT 8 operating system was officially launched alongside Windows 8 on October 26, 2012, with the release of three Windows RT-based devices, including Microsoft's original Surface tablet. Unlike Windows 8, Windows RT is only available as preloaded software on devices specifically designed for the operating system by original equipment manufacturers (OEMs).
Microsoft intended for devices with Windows RT to take advantage of the architecture's power efficiency to allow for longer battery life, to use system-on-chip (SoC) designs to allow for thinner devices and to provide a "reliable" experience over time. In comparison to other mobile operating systems, Windows RT also supports a relatively large number of existing USB peripherals and accessories and includes a version of Microsoft Office 2013 optimized for ARM devices as pre-loaded software. However, while Windows RT inherits the appearance and functionality of Windows 8, it has a number of limitations; it can only execute software that is digitally signed by Microsoft (which includes pre-loaded software and Windows Store apps), and it lacks certain developer-oriented features. It also lacks support for running applications designed for x86 processors, which were the main platform for Windows at the time. This would later be corrected with the release of Windows 10 version 1709 for ARM64 devices.
Windows RT was released to mixed reviews from various outlets and critics. Some felt that Windows RT devices had advantages over other mobile platforms (such as iOS or Android) because of its bundled software and the ability to use a wider variety of USB peripherals and accessories, but the platform was criticized for its poor software ecosystem, citing the early stage of Windows Store and its incomp |
https://en.wikipedia.org/wiki/Fast%20path | Fast path is a term used in computer science to describe a path with shorter instruction path length through a program compared to the normal path. For a fast path to be effective it must handle the most commonly occurring tasks more efficiently than the normal path, leaving the latter to handle uncommon cases, corner cases, error handling, and other anomalies. Fast paths are a form of optimization.
For example, dedicated packet routing hardware used to build computer networks often includes software optimized for handling the most common kinds of packets; other kinds, such as packets carrying control information or directed at the device itself rather than being routed elsewhere, are put on the metaphorical "slow path", in this example usually implemented by software running on the control processor.
Specific implementations of networking software architectures have been developed that leverage the concept of a fast path to maximize the performance of packet processing software. In these implementations, the networking stack is split into two layers and the lower layer, typically called the fast path, processes the majority of incoming packets outside the OS environment without incurring any of the OS overheads that degrade overall performance. Only those rare packets that require complex processing are forwarded to the OS networking stack, which performs the necessary management, signaling and control functions.
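A minimal sketch of fast-path dispatch (hypothetical packet fields and handler names; real implementations operate on raw buffers in drivers or hardware): the common case, plain IPv4 forwarding, is handled inline with a short check, while anything unusual falls through to a general slow-path handler.

```python
def slow_path(packet):
    """General handler: control traffic, errors, locally addressed packets."""
    return ("slow", packet["type"])

def forward(packet, routes):
    # Fast path: the overwhelmingly common case, checked first and kept short.
    if packet["type"] == "ipv4" and packet["dst"] in routes and packet["ttl"] > 1:
        return ("fast", routes[packet["dst"]])
    # Everything else takes the slow path.
    return slow_path(packet)

routes = {"10.0.0.2": "eth1"}
print(forward({"type": "ipv4", "dst": "10.0.0.2", "ttl": 64}, routes))  # fast path
print(forward({"type": "icmp", "dst": "10.0.0.1", "ttl": 64}, routes))  # slow path
```

The design point is that the fast path only needs to be correct for the common case; corner cases such as an expired TTL or an unknown destination are deliberately left to the slower, more general code.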
Some hardware RAID controllers implement a "fast path" for write-through access which bypasses the controller's cache in certain situations. This tends to increase IOPS, particularly for solid-state drives.
See also
Control plane
Data plane
Self-modifying code |
https://en.wikipedia.org/wiki/Hessenberg%20matrix | In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg.
A Hessenberg decomposition is a matrix decomposition of a matrix A into a unitary matrix P and a Hessenberg matrix H such that PHP* = A, where P* denotes the conjugate transpose.
Definitions
Upper Hessenberg matrix
A square matrix A is said to be in upper Hessenberg form, or to be an upper Hessenberg matrix, if a_{i,j} = 0 for all i, j with i > j + 1.
An upper Hessenberg matrix is called unreduced if all subdiagonal entries are nonzero, i.e. if a_{i+1,i} ≠ 0 for all i in {1, ..., n − 1}.
Lower Hessenberg matrix
A square matrix A is said to be in lower Hessenberg form, or to be a lower Hessenberg matrix, if its transpose is an upper Hessenberg matrix, or equivalently if a_{i,j} = 0 for all i, j with j > i + 1.
A lower Hessenberg matrix is called unreduced if all superdiagonal entries are nonzero, i.e. if a_{i,i+1} ≠ 0 for all i in {1, ..., n − 1}.
Examples
Consider the following matrices:

A = [[1, 4, 2, 3], [3, 4, 1, 7], [0, 2, 3, 4], [0, 0, 1, 3]]
B = [[1, 2, 0, 0], [5, 2, 3, 0], [3, 4, 3, 7], [5, 6, 1, 1]]
C = [[1, 2, 0, 0], [5, 2, 0, 0], [3, 4, 3, 7], [5, 6, 1, 1]]

The matrix A is an upper unreduced Hessenberg matrix, B is a lower unreduced Hessenberg matrix, and C is a lower Hessenberg matrix but is not unreduced, since its superdiagonal entry c_{2,3} is zero.
Computer programming
Many linear algebra algorithms require significantly less computational effort when applied to triangular matrices, and this improvement often carries over to Hessenberg matrices as well. If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular one, reduction to Hessenberg form is often the next best thing. In fact, reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps (for example, through Householder's transformation of unitary similarity transforms). Subsequent reduction of Hessenberg matrix to a triangular matrix can be achieved through iterative procedures, such as shifted QR-factorization. In eigenvalue algorithms, the Hessenberg |
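The Householder reduction mentioned above can be sketched as follows (real matrices only, with explicit reflectors; in practice a library routine such as scipy.linalg.hessenberg would be used). Each step zeroes one column below the subdiagonal with a similarity transform, so the result H satisfies A = Q H Qᵀ.

```python
import numpy as np

def to_hessenberg(A):
    """Reduce a real square matrix to upper Hessenberg form via Householder
    reflections, returning (H, Q) with A = Q @ H @ Q.T and Q orthogonal."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = H[k + 1:, k].copy()
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue                         # column already in Hessenberg form
        v = x
        v[0] += np.copysign(normx, x[0])     # choose sign to avoid cancellation
        v /= np.linalg.norm(v)
        # Apply the reflector P = I - 2 v v^T from the left and the right.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
        Q[:, k + 1:] -= 2.0 * np.outer(Q[:, k + 1:] @ v, v)
    return H, Q
```

Because only similarity transforms are applied, the eigenvalues of H match those of A, which is why this reduction is the standard first phase of the QR eigenvalue algorithm.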
https://en.wikipedia.org/wiki/Net2Phone | net2phone is a Cloud Communications provider offering cloud based telephony services to businesses worldwide. The company is a subsidiary of IDT Corporation.
History
net2phone was founded in 1990 by telecom entrepreneur Howard Jonas, the chairman and chief executive officer of net2phone’s parent company, IDT Corporation. The company was an early pioneer in the commercialization of voice-over-Internet protocol (VoIP) technologies leveraging the global carrier business and infrastructure of IDT and focusing on transitioning businesses and consumers from PSTN, traditional telecom interconnects, to Voice over IP.
On July 30, 1999, during the dot-com bubble, the company became a public company via an initial public offering, raising $81 million. Shares rose 77% on the first day of trading to $26 per share. After completion of the IPO, IDT owned 57% of the company. Within a few weeks, the shares increased another 100% in value, to $53 per share.
In March 2000, in a transaction facilitated by IDT CEO Howard Jonas, a consortium of telecommunications companies led by AT&T announced a $1.4 billion investment for a 32% stake in the company, buying shares for $75 each. The transaction was completed in August 2000. AOL had expressed an interest in buying all or part of the company but was not agreeable to the price.
In August 2000, Jonathan Fram, president of the company, left the company to join eVoice.
In September 2000, the company formed Adir Technologies, a joint venture with Cisco Systems. In March 2002, the company sued Cisco for breach of contract.
In February 2002, the company announced 110 layoffs, or 28% of its workforce.
In October 2004, Liore Alroy became chief executive officer of the company.
On March 13, 2006, IDT Corporation acquired the shares of the company that it did not already own for $2.05 per share.
In 2015, net2phone began providing Unified Communications as a Service (UCaaS) targeted to the SMB market. net2phone’s UCaaS initiative was develope |
https://en.wikipedia.org/wiki/Piquette | Piquette is a French wine term which commonly refers to a vinous beverage produced by adding water to grape pomace but sometimes refers to a very simple wine or a wine substitute.
From pomace
If water is added to the pomace remaining after grapes intended for wine production have been pressed, it is possible to produce a thin, somewhat wine-like beverage.
The ancient Greeks and Romans used pomace in this way under the name lora, and the product was used for slaves and common workers. After the wine grapes were pressed twice, the pomace was soaked in water for a day and pressed for a third time. The resulting liquid was mixed with more water to produce a thin, tepid "wine" that was not very appealing.
The production of piquette by poor farmers, or for consumption by farmhands and workers continued during the centuries, and is known to have been in practice as late as the mid-20th century. However, piquette seems to have been primarily associated with poor conditions, where real wine could not be afforded.
EU regulations
The European Union wine regulations define piquette as the product obtained by the fermentation of untreated grape pomace macerated in water, or by leaching fermented grape pomace with water. In cases where an EU member state allows the production of piquette, it may only be used for distillation or for consumption in the families of individual wine-growers. It may not be sold.
Produced by other methods
During the Great French Wine Blight in the late 19th century, the production of wine fell so dramatically in France that several types of "Ersatz wine" were frequently produced in France under the designation piquette, and not just consumed locally, but also sold. Some of it was coloured and flavoured to appear as real wine, or was blended into actual wine to increase the amount available.
A common way to produce such piquettes was to mix raisins with water. The raisins used were imported to France from Mediterranean countries, and were produc |
https://en.wikipedia.org/wiki/Unix%20philosophy | The Unix philosophy, originated by Ken Thompson, is a set of cultural norms and philosophical approaches to minimalist, modular software development. It is based on the experience of leading developers of the Unix operating system. Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software; these norms became as important and influential as the technology of Unix itself, and have been termed the "Unix philosophy."
The Unix philosophy emphasizes building simple, compact, clear, modular, and extensible code that can be easily maintained and repurposed by developers other than its creators. The Unix philosophy favors composability as opposed to monolithic design.
Origin
The Unix philosophy is documented by Doug McIlroy in the Bell System Technical Journal from 1978:
Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
It was later summarized by Peter H. Salus in A Quarter-Century of Unix (1994):
Write programs that do one thing and do it well.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
In their award-winning Unix pap |
https://en.wikipedia.org/wiki/Rank%20of%20a%20partition | In mathematics, particularly in the fields of number theory and combinatorics, the rank of a partition of a positive integer is a certain integer associated with the partition. In fact at least two different definitions of rank appear in the literature. The first definition, with which most of this article is concerned, is that the rank of a partition is the number obtained by subtracting the number of parts in the partition from the largest part in the partition. The concept was introduced by Freeman Dyson in a paper published in the journal Eureka. It was presented in the context of a study of certain congruence properties of the partition function discovered by the Indian mathematical genius Srinivasa Ramanujan. A different concept, sharing the same name, is used in combinatorics, where the rank is taken to be the size of the Durfee square of the partition.
Definition
By a partition of a positive integer n we mean a finite multiset λ = { λk, λk − 1, . . . , λ1 } of positive integers satisfying the following two conditions:
λk ≥ . . . ≥ λ2 ≥ λ1 > 0.
λk + . . . + λ2 + λ1 = n.
If λk, . . . , λ2, λ1 are distinct, that is, if
λk > . . . > λ2 > λ1 > 0
then the partition λ is called a strict partition of n.
The integers λk, λk − 1, ..., λ1 are the parts of the partition. The number of parts in the partition λ is k and the largest part in the partition is λk. The rank of the partition λ (whether ordinary or strict) is defined as λk − k.
The ranks of the partitions of n take the following values and no others:
n − 1, n − 3, n − 4, . . . , 2, 1, 0, −1, −2, . . . , −(n − 4), −(n − 3), −(n − 1).
The following table gives the ranks of the various partitions of the number 5.

Ranks of the partitions of the integer 5

 Partition        Largest part  Number of parts  Rank
 {5}              5             1                 4
 {4, 1}           4             2                 2
 {3, 2}           3             2                 1
 {3, 1, 1}        3             3                 0
 {2, 2, 1}        2             3                −1
 {2, 1, 1, 1}     2             4                −2
 {1, 1, 1, 1, 1}  1             5                −4
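The ranks of the partitions of a small integer can be enumerated directly. A short sketch (the helper names are chosen here for illustration):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def rank(partition):
    """Dyson's rank: largest part minus number of parts."""
    return partition[0] - len(partition)

for p in partitions(5):
    print(p, rank(p))   # ranks: 4, 2, 1, 0, -1, -2, -4
```

Note that ranks n − 2 = 3 and −3 do not occur for n = 5, consistent with the list of attainable values given above.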
Notations
The following notations are used to specify how many partitions have a given rank. Let n and q be positive integers and m be any integer.
The total number of partitions of n is denoted by p(n).
The number of partitions of n with rank m is d |
https://en.wikipedia.org/wiki/Microbacteriaceae | Microbacteriaceae is a family of bacteria of the order Actinomycetales. They are Gram-positive soil organisms.
Genera
The family Microbacteriaceae comprises the following genera:
Agreia Evtushenko et al. 2001
Agrococcus Groth et al. 1996
Agromyces Gledhill and Casida 1969 (Approved Lists 1980)
Allohumibacter Kim et al. 2016
Alpinimonas Schumann et al. 2012
Amnibacterium Kim and Lee 2011
Arenivirga Hamada et al. 2017
Aurantimicrobium Nakai et al. 2015
Canibacter Aravena-Román et al. 2014
Clavibacter Davis et al. 1984
Cnuibacter Zhou et al. 2016
Compostimonas Kim et al. 2012
Conyzicola Kim et al. 2014
"Crocebacterium" Rogers & Doran-Peterson 2006
Cryobacterium Suzuki et al. 1997
"Cryocola" Gavrish et al. 2003
Curtobacterium Yamada and Komagata 1972 (Approved Lists 1980)
Diaminobutyricibacter Kim et al. 2014
Diaminobutyricimonas Jang et al. 2013
Frigoribacterium Kämpfer et al. 2000
Frondihabitans Greene et al. 2009
Galbitalea Kim et al. 2014
Glaciibacter Katayama et al. 2009
Glaciihabitans Li et al. 2014
Gryllotalpicola Kim et al. 2012
Gulosibacter Manaia et al. 2004
Herbiconiux Behrendt et al. 2011
Homoserinibacter Kim et al. 2014
Homoserinimonas Kim et al. 2012
Huakuichenia Zhang et al. 2016
Humibacter Vaz-Moreira et al. 2008
Klugiella Cook et al. 2008
Labedella Lee 2007
Lacisediminihabitans Zhuo et al. 2020
Leifsonia Evtushenko et al. 2000
Leucobacter Takeuchi et al. 1996
"Luethyella" O'Neal et al. 2017
Lysinibacter Tuo et al. 2015
Lysinimonas Jang et al. 2013
"Marinisubtilis" Qin et al. 2021
Marisediminicola Li et al. 2010
Microbacterium Orla-Jensen 1919 (Approved Lists 1980)
Microcella Tiago et al. 2005
Microterricola Matsumoto et al. 2008
Mycetocola Tsukamoto et al. 2001
Naasia Weon et al. 2013
Okibacterium Evtushenko et al. 2002
Parafrigoribacterium Kong et al. 2016
Planctomonas Liu et al. 2019
"Candidatus Planktoluna" Hahn 2009
Plantibacter Behrendt et al. 2002
Pontimonas Jang et al. 2013
Protaetiiba |
https://en.wikipedia.org/wiki/Marine%20ecosystem | Marine ecosystems are the largest of Earth's aquatic ecosystems and exist in waters that have a high salt content. These systems contrast with freshwater ecosystems, which have a lower salt content. Marine waters cover more than 70% of the surface of the Earth and account for more than 97% of Earth's water supply and 90% of habitable space on Earth. Seawater has an average salinity of 35 parts per thousand of water. Actual salinity varies among different marine ecosystems. Marine ecosystems can be divided into many zones depending upon water depth and shoreline features. The oceanic zone is the vast open part of the ocean where animals such as whales, sharks, and tuna live. The benthic zone consists of substrates below water where many invertebrates live. The intertidal zone is the area between high and low tides. Other near-shore (neritic) zones can include mudflats, seagrass meadows, mangroves, rocky intertidal systems, salt marshes, coral reefs, lagoons. In the deep water, hydrothermal vents may occur where chemosynthetic sulfur bacteria form the base of the food web. Marine ecosystems are characterized by the biological community of organisms that they are associated with and their physical environment. Classes of organisms found in marine ecosystems include brown algae, dinoflagellates, corals, cephalopods, echinoderms, and sharks.
Marine ecosystems are important sources of ecosystem services and of food and jobs for significant portions of the global population. Human uses of marine ecosystems and pollution in marine ecosystems are significant threats to the stability of these ecosystems. Environmental problems concerning marine ecosystems include unsustainable exploitation of marine resources (for example overfishing of certain species), marine pollution, climate change, and building on coastal areas. Moreover, much of the carbon dioxide causing global warming and much of the heat captured by global warming are absorbed by the ocean, and ocean chemistry is changing through
https://en.wikipedia.org/wiki/Mitochondrial%20amidoxime%20reducing%20component%201 | Mitochondrial amidoxime-reducing component 1 (also known as MOCO sulphurase C-terminal domain containing 1, MOSC1 or MARC1) is a mammalian molybdenum-containing enzyme. It is located in the outer mitochondrial membrane and consists of a N-terminal mitochondrial signal domain facing the inter-membrane space, a transmembrane domain, and a C-terminal catalytic domain facing the cytosol. In humans it is encoded by the MOSC1 gene.
MOCO stands for molybdenum cofactor.
MOSC1 has been reported to reduce amidoximes to amidines.
Genetic variation in MARC1 has been reported to be associated with lower blood cholesterol levels, lower blood liver enzyme levels, reduced liver fat and protection from cirrhosis, suggesting that MARC1 deficiency may protect against liver disease. A genome-wide association study involving subjects from the UK Biobank further established an association with alcohol-related liver disease.
See also
MOCOS |
https://en.wikipedia.org/wiki/81st%20meridian%20west | The meridian 81° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Atlantic Ocean, the Caribbean Sea, Panama, the Pacific Ocean, South America, the Southern Ocean, and Antarctica to the South Pole.
The 81st meridian west forms a great circle with the 99th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 81st meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="120" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Nunavut — Ellesmere Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Jones Sound
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Nunavut — Devon Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Lancaster Sound
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Navy Board Inlet
| style="background:#b0e0e6;" | Passing just west of Bylot Island, Nunavut, (at )
|-
|
! scope="row" |
| Nunavut — Baffin Island
|- valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Foxe Basin
| style="background:#b0e0e6;" | Passing just east of the Melville Peninsula, Nunavut, (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Foxe Channel
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Nunavut — Bell Peninsula, Southampton Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Hudson Bay
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | James Bay
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Nunavut — Akimiski Island
|- |
https://en.wikipedia.org/wiki/Quinupristin | Quinupristin is a streptogramin B antibiotic, used in combination with dalfopristin under the trade name Synercid. It has activity against Gram-positive and atypical bacteria but not Gram-negative bacteria. It inhibits bacterial protein synthesis. The combination of quinupristin and dalfopristin is not active against Enterococcus faecalis and needs to be given in combination with other antibacterials for mixed infections that involve Gram-negative organisms.
Antibiotics |
https://en.wikipedia.org/wiki/Stable%20storage | Stable storage is a classification of computer data storage technology that guarantees atomicity for any given write operation and allows software to be written that is robust against some hardware and power failures. To be considered atomic, upon reading back a just written-to portion of the disk, the storage subsystem must return either the write data or the data that was on that portion of the disk before the write operations.
Most computer disk drives are not considered stable storage because they do not guarantee atomic writes: a subsequent read of a just-written portion of the disk could return an error in lieu of either the new or the prior data.
Implementation
Multiple techniques have been developed to achieve the atomic property from weakly atomic devices such as disks. Writing data to a disk in two places in a specific way is one technique and can be done by application software.
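The two-place writing technique can be sketched as follows. This is an illustrative sketch only, with names invented here; a real implementation must also handle torn writes within a record, sequence numbers for choosing the newer copy, and device-level write caching:

```python
import json
import os
import zlib

def stable_write(path_a, path_b, data):
    """Two-copy stable write (sketch): update the copies one at a time, each
    with a checksum, so an intact copy exists at every instant."""
    record = json.dumps({"crc": zlib.crc32(data), "data": data.hex()}).encode()
    for path in (path_a, path_b):       # never have both copies in flux at once
        with open(path, "wb") as f:
            f.write(record)
            f.flush()
            os.fsync(f.fileno())        # force this copy to the medium first

def stable_read(path_a, path_b):
    """Return the data from the first copy whose checksum verifies."""
    for path in (path_a, path_b):
        try:
            with open(path, "rb") as f:
                rec = json.loads(f.read())
            data = bytes.fromhex(rec["data"])
            if zlib.crc32(data) == rec["crc"]:
                return data
        except (OSError, ValueError, KeyError):
            continue                    # this copy is missing or corrupt
    raise IOError("both copies are corrupt")
```

The key invariant is ordering: copy B is only touched after copy A has been forced to the medium, so a crash at any instant leaves at least one checksummed copy readable.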
Most often though, stable storage functionality is achieved by mirroring data on separate disks via RAID technology (level 1 or greater). The RAID controller implements the disk writing algorithms that enable separate disks to act as stable storage.
The RAID technique is robust against some single disk failure in an array of disks whereas the software technique of writing to separate areas of the same disk only protects against some kinds of internal disk media failures such as bad sectors in single disk arrangements.
Computer data storage |
https://en.wikipedia.org/wiki/Clyde%20%28mascot%29 | Clyde was the official mascot of the 2014 Commonwealth Games in Glasgow. Clyde is an anthropomorphic thistle (the floral emblem of Scotland) and is named after the River Clyde which flows through the centre of Glasgow. The mascot was designed by Beth Gilmour from Cumbernauld, who won a competition run by Glasgow 2014 for children to design the Mascot. Beth's drawing was then brought to life by digital agency Nerv, who turned it into a commercial character, created a full backstory, gave it a name – Clyde – and created a website for him. Clyde was finally revealed in a seven-minute animated film created by Nerv at a ceremony at BBC Scotland's headquarters in Glasgow. The organiser, Glasgow 2014, said the mascot's design was chosen, because of its "Scottish symbolism and Glaswegian charm and likeability".
Clyde was named as the official mascot for Team Scotland at the 2022 Commonwealth Games in Birmingham, England.
Statues
25 life-size Clyde statues were erected at places of public interest across the city, including the Glasgow Botanic Gardens, the Kelvingrove Art Gallery and George Square. However, following vandalism of a statue in the Govan area of the city, the statues were taken down. They are expected to be re-erected in secure areas.
Commercial
By the final day of the Games, over 50,000 Clyde mascot cuddly toys had been sold.
Post-Glasgow 2014
Due to Clyde's popularity in the city, a petition was set up in the aftermath of the Commonwealth Games to make Clyde the mascot of Glasgow City Council; the petition gained over 1,500 signatures.
In October 2021, it was announced that Clyde would be the official mascot for Team Scotland at the 2022 Commonwealth Games in Birmingham, England.
See also
List of Commonwealth Games mascots
Borobi (mascot)
Karak (mascot)
Matilda (mascot)
Shera (mascot)
Perry (mascot) |
https://en.wikipedia.org/wiki/Machine%20Learning%20%28journal%29 | Machine Learning is a peer-reviewed scientific journal, published since 1986.
In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet.
Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
Selected articles |
https://en.wikipedia.org/wiki/Buddy%20L | Buddy L (also known as Buddy "L" or Buddy-L) is an American toy brand and company founded in 1920 as the Buddy L Toy Company in East Moline, Illinois, by Fred Lundahl.
History
Buddy "L" toys were originally manufactured by the Moline Pressed Steel Company, which was started by Fred A. Lundahl in 1910. The company originally manufactured automobile fenders and other stamped auto body parts for the automobile industry, rather than toys. The company primarily supplied parts for the McCormick-Deering line of farm implements and the International Harvester Company for its trucks. Moline Pressed Steel did not begin manufacturing toys until 1921. Lundahl wanted to make something new, different, and durable for his son Arthur. He designed and produced an all-steel miniature truck, reportedly a model of an International Harvester truck, made from 18- and 20-gauge steel that had been discarded to the company's scrap pile.
Buddy L made such products as toy cars, dump trucks, delivery vans, fire engines, construction equipment, and trains. Fred Lundahl used to manufacture for International Harvester trucks. He started by making a toy dump truck out of steel scraps for his son Buddy. Soon after, he started selling Buddy L "toys for boys", made of pressed steel. Many were large enough for a child to straddle, propelling himself with his feet. Others were pull toys. A pioneer in the steel-toy field, Lundahl persuaded Marshall Field's and F. A. O. Schwarz to carry his line. He did very well until the Great Depression, then sold the company.
In 1941, Henry Katz and Company purchased Buddy L from the Moline Manufacturing Company.
From 1976 to 1990, Buddy L was owned by Richard Keats, a well-known New York toy designer who went to work for Buddy L the day after he graduated from Brown University in 1948. By 1978, the company was located in Clifton, New Jersey.
In 1990, Keats sold Buddy L to SLM International. SLM sold Buddy L off in 1995 under bankruptcy protection. By |
https://en.wikipedia.org/wiki/CHRNA10 | Neuronal acetylcholine receptor subunit alpha-10, also known as nAChRα10 and cholinergic receptor nicotinic alpha 10, is a protein that in humans is encoded by the CHRNA10 gene. The protein encoded by this gene is a subunit of certain nicotinic acetylcholine receptors (nAchR).
This nAchR subunit is required for the normal function of the olivocochlear system which is part of the auditory system. Furthermore, selective block of α9α10 nicotinic acetylcholine receptors by the conotoxin RgIA has been shown to be analgesic in an animal model of nerve injury pain.
α10 subunit-containing receptors are notably blocked by nicotine. The role of this antagonism in the effects of tobacco is unknown.
https://en.wikipedia.org/wiki/Industrial%20data%20processing | Industrial data processing is a branch of applied computer science that covers the area of design and programming of computerized systems which are not computers as such — often referred to as embedded systems (PLCs, automated systems, intelligent instruments, etc.). The products concerned contain at least one microprocessor or microcontroller, as well as couplers (for I/O).
Another current definition of industrial data processing is that it concerns those computer programs whose variables in some way represent physical quantities; for example the temperature and pressure of a tank, the position of a robot arm, etc.
Computer engineering |
https://en.wikipedia.org/wiki/Trustworthy%20computing | The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002.
History
Until 1995, there were restrictions on commercial traffic over the Internet.
On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives, assigning "...the Internet this highest level of importance...", but Microsoft's Windows 95 was released without a web browser, as Microsoft had not yet developed one. The success of the web had caught them by surprise, but by mid-1995 they were testing their own web server, and on August 24, 1995, launched a major online service, MSN.
The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure and produced an important report in 1999, "Trust in Cyberspace". This report reviews the cost of un-trustworthy systems and identifies actions required for improvement.
Microsoft and Trustworthy Computing
Bill Gates launched Microsoft's "Trustworthy Computing" initiative with a January 15, 2002 memo, referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie. The move was reportedly prompted by the fact that they "...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks." such as Code Red, Nimda, Klez and Slammer.
Four areas were identified as the initiative's key areas: Security, Privacy, Reliability, and Business Integrity, and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having "...made a positive impact on the industry...".
The Trustworthy Computing campaign was t |
https://en.wikipedia.org/wiki/Cooper%20pair | In condensed matter physics, a Cooper pair or BCS pair (Bardeen–Cooper–Schrieffer pair) is a pair of electrons (or other fermions) bound together at low temperatures in a certain manner first described in 1956 by American physicist Leon Cooper.
Description
Cooper showed that an arbitrarily small attraction between electrons in a metal can cause a paired state of electrons to have a lower energy than the Fermi energy, which implies that the pair is bound. In conventional superconductors, this attraction is due to the electron–phonon interaction. The Cooper pair state is responsible for superconductivity, as described in the BCS theory developed by John Bardeen, Leon Cooper, and John Schrieffer for which they shared the 1972 Nobel Prize.
Although Cooper pairing is a quantum effect, the reason for the pairing can be seen from a simplified classical explanation. An electron in a metal normally behaves as a free particle. The electron is repelled from other electrons due to their negative charge, but it also attracts the positive ions that make up the rigid lattice of the metal. This attraction distorts the ion lattice, moving the ions slightly toward the electron, increasing the positive charge density of the lattice in the vicinity. This positive charge can attract other electrons. At long distances, this attraction between electrons due to the displaced ions can overcome the electrons' repulsion due to their negative charge, and cause them to pair up. The rigorous quantum mechanical explanation shows that the effect is due to electron–phonon interactions, with the phonon being the collective motion of the positively-charged lattice.
The energy of the pairing interaction is quite weak, of the order of 10⁻³ eV, and thermal energy can easily break the pairs. So only at low temperatures, in metals and other substrates, are a significant number of the electrons bound in Cooper pairs.
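For a rough sense of the temperature scale implied by a pairing energy of order 10⁻³ eV, one can set it equal to the thermal energy k_B·T. This is a back-of-envelope sketch added here for illustration, not a calculation from the source:

```python
# Temperature at which thermal energy k_B * T matches a Cooper-pair
# binding energy of about 1e-3 eV (order of magnitude from the text).
K_B_EV_PER_K = 8.617333e-5   # Boltzmann constant, eV/K

pairing_energy_ev = 1e-3
t_break = pairing_energy_ev / K_B_EV_PER_K
print(f"pairs break up around {t_break:.1f} K")  # ≈ 11.6 K
```

A scale of roughly ten kelvin is consistent with the transition temperatures of conventional superconductors, which lie well below room temperature.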
The electrons in a pair are not necessarily close together; because the interaction is |
https://en.wikipedia.org/wiki/Frame%20rate%20control | Frame rate control (FRC) or temporal dithering is a method for achieving greater color depth in TFT LCD displays.
Older, cheaper, faster LCDs, especially those using TN panels, often represent colors using only 6 bits per RGB channel, or 18 bits in total, and are unable to display the 16.78 million color shades (24-bit truecolor) that contemporary display devices like graphics cards, video game consoles, tablet computers and set-top boxes can output. Instead, they use a spatial dithering method that combines adjacent pixels to simulate the desired shade.
FRC cycles between different color shades with each new frame to simulate an intermediate shade. This can create a potentially noticeable 30 Hz (half frame rate) flicker. Temporal dithering tends to be most noticeable in darker tones, while spatial dithering appears to make the individual pixels of the LCD visible. TFT panels available in 2020 often use FRC to display 30-bit deep color or HDR10 with 24-bit color panels.
This method is similar in principle to field-sequential color system by CBS and other sequential methods, such as used for grays in DLP, and also colors in single-chip DLP.
In the demonstration video, green and cyan-green are mixed both statically (for reference) and by rapidly alternating. A display with a refresh rate of at least 60 Hz is recommended for this video. Pausing the video shows that the perceived color of the bottom-right square during playback is different from the color seen in any individual frame. In an LCD display that uses FRC, the colors that are alternated between would be more similar than those in the demonstration video, further reducing the flicker effect.
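The alternation principle can be sketched numerically. The sketch below is illustrative only (it is not any panel's actual FRC algorithm): a 6-bit panel alternates its two nearest displayable levels so that their time-average approximates an 8-bit target shade:

```python
def frc_sequence(target_8bit, frames=60):
    """Alternate the two nearest 6-bit-displayable levels so that their
    time-average approximates the 8-bit target (error-accumulation sketch)."""
    low = (target_8bit >> 2) << 2   # nearest lower 6-bit level, on the 8-bit scale
    high = low + 4                  # next displayable level above it
    frac = (target_8bit - low) / 4  # fraction of frames showing the brighter level
    err, seq = 0.0, []
    for _ in range(frames):
        err += frac
        if err >= 1.0:              # time to emit the brighter level
            seq.append(high)
            err -= 1.0
        else:
            seq.append(low)
    return seq

seq = frc_sequence(129)             # 129 lies between 6-bit levels 128 and 132
print(sum(seq) / len(seq))          # 129.0: the eye averages the alternation
```

Because one level in four is the brighter one, the flicker component sits below the frame rate, which is why FRC flicker is most visible at low refresh rates and in dark tones.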
See also
Computer monitor
LCD television
ZX Spectrum graphic modes#GigaScreen
https://en.wikipedia.org/wiki/Farmacovigilancia%20Espa%C3%B1ola%2C%20Datos%20de%20Reacciones%20Adversas | The Farmacovigilancia Española, Datos de Reacciones Adversas (FEDRA), also known as the Spanish Pharmacovigilance Datatabase or Spanish Pharmacovigilance System, is a pharmacovigilance database in Spain which was developed in 1982.
See also
Pharmacovigilance
FDA Adverse Event Reporting System (FAERS) |
https://en.wikipedia.org/wiki/Prunasin | (R)-prunasin is a cyanogenic glycoside related to amygdalin. Chemically, it is the glucoside of (R)-mandelonitrile.
Natural occurrences
Prunasin is found in species in the genus Prunus such as Prunus japonica or P. maximowiczii and in bitter almonds. It is also found in leaves and stems of Olinia ventosa, O. radiata, O. emarginata and O. rochetiana and in Acacia greggii. It is a biosynthetic precursor of and intermediate in the biosynthesis of amygdalin, the chemical compound responsible for the taste of bitter almond.
It is also found in dandelion coffee, a coffee substitute.
Sambunigrin
Sambunigrin, a diastereomer of prunasin derived from (S)-mandelonitrile instead of the (R)-isomer, has been isolated from leaves of the elder tree (Sambucus nigra). Sambunigrin is present in the leaves and stems of elder at a 1:3 ratio of sambunigrin to prunasin, and 2:5 in the immature seed. It is not found in the root.
Biosynthesis
Overview
The biosynthesis of (R)-prunasin begins with the common amino acid phenylalanine, which in plants is produced via the shikimate pathway in primary metabolism. The pathway is catalyzed mainly by two cytochrome P450 (CYP) enzymes and a UDP-glucosyltransferase (UGT). After (R)-prunasin is formed, it is either converted into amygdalin by an additional UDP-glucosyltransferase or degraded into benzaldehyde and hydrogen cyanide.
Researchers have shown that the accumulation (or lack of) of prunasin and amygdalin in the almond kernel is responsible for sweet and bitter genotypes. Because amygdalin is responsible for the bitter almond taste, almond growers have selected genotypes which minimize the biosynthesis of amygdalin. The CYP enzymes responsible for generation of prunasin are conserved across Prunus species. There is a correlation between high concentration of prunasin in the vegetative regions of the plant and the sweetness of the almond, which is relevant to the almond agricultural industry. In almonds, the amygdalin biosynthetic genes are expressed at |
https://en.wikipedia.org/wiki/Thermodynamic%20potential | A thermodynamic potential (or more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions.
One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is called a potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U. In other words, each thermodynamic potential is equivalent to the other thermodynamic potentials; each potential is a different expression of the others.
In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy.
Description and interpretation
Five common thermodynamic potentials are:
where T = temperature, S = entropy, p = pressure, and V = volume. N_i is the number of particles of type i in the system and μ_i is the chemical potential for an i-type particle. The set of all N_i are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is in the ISO/IEC standard called Helmholtz energy or Helmholtz function. It is often denoted by the symb
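The table listing the five potentials did not survive extraction; the conventional textbook definitions of the five potentials named here (standard forms, not recovered from this page) can be written in LaTeX as:

```latex
\begin{aligned}
\text{Internal energy:} \quad & U \\
\text{Helmholtz free energy:} \quad & F = U - TS \\
\text{Enthalpy:} \quad & H = U + pV \\
\text{Gibbs free energy:} \quad & G = U + pV - TS \\
\text{Landau potential (grand potential):} \quad & \Omega = U - TS - \textstyle\sum_i \mu_i N_i
\end{aligned}
```

Each non-U potential is obtained from U by a Legendre transform that swaps one or more natural variables (S for T, V for p, N_i for μ_i).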
https://en.wikipedia.org/wiki/Cayley%E2%80%93Menger%20determinant | In linear algebra, geometry, and trigonometry, the Cayley–Menger determinant is a formula for the content, i.e. the higher-dimensional volume, of an n-dimensional simplex in terms of the squares of all of the distances between pairs of its vertices. The determinant is named after Arthur Cayley and Karl Menger.
The pairwise distance polynomials between n points in a real Euclidean space are Euclidean invariants that are associated via the Cayley–Menger relations. These relations serve multiple purposes, such as generalising Heron's formula, computing the content of an n-dimensional simplex, and ultimately determining whether any real symmetric matrix is a Euclidean distance matrix, in the field of distance geometry.
History
Karl Menger was a young geometry professor at the University of Vienna and Arthur Cayley was a British mathematician who specialized in algebraic geometry. Menger built on Cayley's algebraic work to propose a new axiomatization of metric spaces, using the concepts of distance geometry and the relation of congruence, now known as the Cayley–Menger determinant. This ended up generalising one of the first discoveries in distance geometry, Heron's formula, which computes the area of a triangle given its side lengths.
Definition
Let A_0, A_1, ..., A_n be n + 1 points in k-dimensional Euclidean space, with k ≥ n. These points are the vertices of an n-dimensional simplex: a triangle when n = 2; a tetrahedron when n = 3, and so on. Let d_ij be the Euclidean distance between vertices A_i and A_j. The content, i.e. the n-dimensional volume of this simplex, denoted by v_n, can be expressed in terms of determinants of certain matrices; in the bordered form,

v_n² = ((−1)^(n+1) / (2^n (n!)²)) · det B̂,

where B̂ is the (n+2)×(n+2) matrix obtained from the (n+1)×(n+1) matrix of squared distances (d_ij²) by bordering it with a row and column of 1s, with a 0 in the corner entry.
This is the Cayley–Menger determinant. For n = 2 it is a symmetric polynomial in the d_ij's and is thus invariant under permutation of these quantities. This fails for n ≥ 3, but it is always invariant under permutation of the vertices.
Except for the final row and column of 1s, the matrix in the second form of this equation is a Euclidean distance matrix.
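As a concrete illustration, the bordered-matrix form can be evaluated numerically. The sketch below (function names are my own, not from the source) computes the content of an n-simplex from the matrix of squared pairwise distances:

```python
import math

def det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n, sign, prod = len(a), 1.0, 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            sign = -sign
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
        prod *= a[i][i]
    return sign * prod

def simplex_content(sq_dists):
    """Content of an n-simplex given the (n+1)x(n+1) matrix of squared distances."""
    m = len(sq_dists)  # m = n + 1 vertices
    n = m - 1
    # Bordered (n+2)x(n+2) Cayley-Menger matrix: a 0 and 1s border the distance matrix.
    # (Placing the border first instead of last leaves the determinant unchanged.)
    b = [[0.0] + [1.0] * m] + [[1.0] + [float(x) for x in row] for row in sq_dists]
    coeff = (-1) ** (n + 1) / (2 ** n * math.factorial(n) ** 2)
    return math.sqrt(coeff * det(b))

# A 3-4-5 right triangle has area 6.
area = simplex_content([[0, 9, 16], [9, 0, 25], [16, 25, 0]])
```

For a 2-simplex this reduces to Heron's formula; for a regular tetrahedron with unit edges it yields the familiar 1/(6√2).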
Special cases
2-Simplex
To |
https://en.wikipedia.org/wiki/Flag%20of%20Manitoba | The flag of Manitoba consists of a Red Ensign defaced with the shield of the provincial coat of arms. Adopted in 1965 shortly after the new national flag was inaugurated, it has been the flag of the province since May 12 of the following year. Its adoption was intended to maintain the legacy of the Canadian Red Ensign as the country's unofficial flag, after the adoption of the Maple Leaf Flag in 1965. Manitoba's flag has been frequently mistaken for the flag of the neighbouring province of Ontario, which is also a Red Ensign with its respective coat of arms. This, along with criticism that the flag lacks inclusivity, has led some Manitobans to call for a new and more distinct flag.
History
The Manitoba Act received royal assent on May 12, 1870, allowing for the creation of the province of Manitoba; it officially joined Confederation two months later on July 15. On August 2 of that same year, an Order in Council was promulgated to establish a seal for the new province. It featured the Cross of Saint George at the chief and a bison on a green field for the lower portion. Subsequently, King Edward VII issued a Royal Warrant on May 10, 1905, allowing Manitoba to utilize their own coat of arms. At the time, this consisted solely of a shield identical to the Great Seal of the province.
On February 15, 1965, the federal government introduced a new national flag featuring a maple leaf to replace the Union Jack (the official flag) and the Canadian Red Ensign, the country's civil ensign at the time that had been used unofficially as the national flag. The Great Canadian Flag Debate that preceded this change showed there were still parts of Canada where imperialist nostalgia was strong. Lamenting the demise of the Canadian Red Ensign, its proponents in those regions endeavoured to have it modified as a provincial flag. Resistance to the new national flag was most vociferous in the rural areas of Manitoba and Ontario. Consequently, both provinces chose to inc |
https://en.wikipedia.org/wiki/Schema%20evolution | In computer science, schema versioning and schema evolution deal with the need to retain current data and software system functionality in the face of changing database structure. The problem is not limited to the modification of the schema. It, in fact, affects the data stored under the given schema and the queries (and thus the applications) posed on that schema.
A database design is sometimes created as an "as of now" instance, and thus schema evolution is not considered. (This is different from, but related to, the case where a database is designed as "one size fits all", which doesn't cover attribute volatility.) This assumption, almost unrealistic in the context of traditional information systems, becomes unacceptable in the context of systems that retain large volumes of historical information, or systems such as Web Information Systems that, due to the distributed and cooperative nature of their development, are subject to an even stronger pressure toward change (from 39% to over 500% more intense than in traditional settings). Due to this historical heritage, the process of schema evolution is nowadays a particularly taxing one. It is, in fact, widely acknowledged that the data management core of an application is one of the most difficult and critical components to evolve. The key problem is the impact
of schema evolution on queries and applications. As shown by an analysis of the MediaWiki evolution, each evolution step might affect up to 70% of the queries operating on the schema, which must consequently be reworked manually.
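A minimal, hypothetical sketch of the problem (the schema and queries are invented for illustration): an evolution step that splits a column breaks an existing query, and a compatibility view can shield the old query from the change.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Version 1 of the schema, with a query the application depends on.
cur.execute("CREATE TABLE person (name TEXT, address TEXT)")
cur.execute("INSERT INTO person VALUES ('Ada', '12 Main St, Springfield')")
old_query = "SELECT name, address FROM person"

# Schema evolution step: split 'address' into street and city.
cur.execute("ALTER TABLE person RENAME TO person_v1")
cur.execute("CREATE TABLE person_v2 (name TEXT, street TEXT, city TEXT)")
cur.execute("""INSERT INTO person_v2
               SELECT name,
                      substr(address, 1, instr(address, ',') - 1),
                      trim(substr(address, instr(address, ',') + 1))
               FROM person_v1""")
cur.execute("DROP TABLE person_v1")

# A compatibility view keeps the old query working unchanged.
cur.execute("""CREATE VIEW person AS
               SELECT name, street || ', ' || city AS address FROM person_v2""")
rows = cur.execute(old_query).fetchall()
```

Without the view, every query written against the old schema would have to be found and rewritten by hand, which is exactly the manual-rework cost described above.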
The problem has been recognized as a pressing one by the database community for more than 12 years. Supporting schema evolution is a difficult problem involving complex mappings among schema versions, and tool support has so far been very limited. The recent theoretical advances on mapping composition and mapping invertibility, which represent the core problems underlying schema evolution, remain almost inaccessible to the
https://en.wikipedia.org/wiki/Copha | Copha, a registered trademark of Peerless Foods, is a form of vegetable fat shortening made from hydrogenated coconut oil. Copha is produced only in Australia, but there are many suppliers of hydrogenated coconut fat in various forms worldwide. It is 100% fat, at least 98% of which is saturated. It also contains soybean lecithin.
It is used in Australia for confectionery, such as rocky road, and a number of foods for children, being an essential ingredient in white Christmas and in chocolate crackles, which are made from Rice Bubbles, copha and cocoa powder. It is also used as a "chocolate coating" on baked goods, which amounts to a form of compound chocolate.
Concern about the health hazards of hydrogenated fats (trans fats) is a contributor to the declining popularity of Copha-based confectionery.
In New Zealand, it is marketed as Kremelta. Known in Europe as coconut fat, it is available either in its pure form, or in solid form with lecithin added as an emulsifier. In France it is marketed as Végétaline and in Germany and Denmark it is marketed as Palmin. It is not readily available in the United States.
See also
Hydrogenation
Coconut oil
Saturated fat
Shortening |
https://en.wikipedia.org/wiki/List%20of%20mathematical%20shapes | Following is a list of some mathematically well-defined shapes.
Algebraic curves
Cubic plane curve
Quartic plane curve
Rational curves
Degree 2
Conic sections
Unit circle
Unit hyperbola
Degree 3
Folium of Descartes
Cissoid of Diocles
Conchoid of de Sluze
Right strophoid
Semicubical parabola
Serpentine curve
Trident curve
Trisectrix of Maclaurin
Tschirnhausen cubic
Witch of Agnesi
Degree 4
Ampersand curve
Bean curve
Bicorn
Bow curve
Bullet-nose curve
Cruciform curve
Deltoid curve
Devil's curve
Hippopede
Kampyle of Eudoxus
Kappa curve
Lemniscate of Booth
Lemniscate of Gerono
Lemniscate of Bernoulli
Limaçon
Cardioid
Limaçon trisectrix
Trifolium curve
Degree 5
Quintic of l'Hospital
Degree 6
Astroid
Atriphtaloid
Nephroid
Quadrifolium
Families of variable degree
Epicycloid
Epispiral
Epitrochoid
Hypocycloid
Lissajous curve
Poinsot's spirals
Rational normal curve
Rose curve
Curves of genus one
Bicuspid curve
Cassini oval
Cassinoide
Cubic curve
Elliptic curve
Watt's curve
Curves with genus greater than one
Butterfly curve
Elkies trinomial curves
Hyperelliptic curve
Klein quartic
Classical modular curve
Bolza surface
Macbeath surface
Curve families with variable genus
Polynomial lemniscate
Fermat curve
Sinusoidal spiral
Superellipse
Hurwitz surface
Transcendental curves
Bowditch curve
Brachistochrone
Butterfly curve
Catenary
Clélies
Cochleoid
Cycloid
Horopter
Isochrone
Isochrone of Huygens (Tautochrone)
Isochrone of Leibniz
Isochrone of Varignon
Lamé curve
Pursuit curve
Rhumb line
Spirals
Archimedean spiral
Cornu spiral
Cotes' spiral
Fermat's spiral
Galileo's spiral
Hyperbolic spiral
Lituus
Logarithmic spiral
Nielsen's spiral
Syntractrix
Tractrix
Trochoid
Piecewise constructions
Bézier curve
Splines
B-spline
Nonuniform rational B-spline
Ogee
Loess curve
Lowess
Polygonal curve
Maurer rose
Reuleaux triangle
Bézier triangle
Curves generated by other curves
Caustic including Catacaustic and Diacaustic
Cissoid
Conchoid
Evolute
Glissette
Inverse curve
Inv |
https://en.wikipedia.org/wiki/Tucotuzumab%20celmoleukin | Tucotuzumab celmoleukin is an anti-cancer drug. It is a fusion protein of a humanized monoclonal antibody (tucotuzumab) and an interleukin-2 (celmoleukin).
This drug was developed by EMD Pharmaceuticals. |
https://en.wikipedia.org/wiki/Entropy%20unit | The entropy unit is a non-SI unit of thermodynamic entropy, usually denoted "e.u." or "eU" and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole. Entropy units are primarily used in chemistry to describe entropy changes.
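The conversion is a single multiplication by the thermochemical calorie; a trivial sketch (function name is illustrative):

```python
JOULES_PER_CALORIE = 4.184  # thermochemical calorie

def eu_to_si(entropy_eu):
    """Convert entropy units (cal K^-1 mol^-1) to SI units (J K^-1 mol^-1)."""
    return entropy_eu * JOULES_PER_CALORIE
```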
Sources
Units of measurement |
https://en.wikipedia.org/wiki/Balliol-Trinity%20Laboratories | The Balliol-Trinity Laboratories in Oxford, England, was an early chemistry laboratory at the University of Oxford.
The laboratory was located between Balliol College and Trinity College, hence the name. It was especially known for physical chemistry.
Chemistry was first recognized as a separate discipline at Oxford University in the 19th century. From 1855, a chemistry laboratory existed in a basement at Balliol College. In 1879, Balliol and Trinity agreed to have a laboratory at the boundary of the two colleges. The laboratory became the strongest of the Oxford college research institutions in chemistry. It remained in operation until the Second World War when a new Physical Chemistry Laboratory (PCL) was constructed by Oxford University in the Science Area.
People
The following scientists of note worked in the Balliol-Trinity Laboratories:
E. J. Bowen
Sir John Conroy
Sir Harold Hartley
Sir Cyril Norman Hinshelwood (Nobel Prize winner)
Henry Moseley
See also
Abbot's Kitchen, Oxford, another early chemistry laboratory in Oxford
Department of Chemistry, University of Oxford
Physical Chemistry Laboratory, which replaced the Balliol-Trinity Laboratories |
https://en.wikipedia.org/wiki/Insider%20investment%20strategy | The insider investment strategy is an investment strategy that follows the buying and selling decisions of so-called "insiders" in a stock market. The primary insiders have an advantage because they have access to more information about issues that could affect the current and future value of the stock, which is known as an "information advantage". However, there are only a few investment funds in the world that follow insider trades; two of them were established in 2011.
In the United States, Catalyst Capital Advisors LLC manages Catalyst Insider Buying Fund. This fund is a large-cap, long-only equity fund that only invests in companies where corporate insiders are buying their own company's stock on the open market. In Europe, Dovre Forvaltning UAB manages Dovre Inside Nordic fund.
Insider trading studies
A Lorie-Niederhoffer study indicates that proper and prompt analysis of data on insider trading can be profitable.
In 2014, Dovre Forvaltning shared its analysis of insider influence in the Nordic region. The company analyzed the following yearly portfolios (both for purchases and sales):
Had insider purchases in the past 1/3/6 months.
Had only insider purchases in the past 1/3/6 months.
Last insider transaction in the past 1/3/6 months was a purchase.
Had insider sales in the past 1/3/6 months.
Had only insider sales in the past 1/3/6 months.
Last insider transaction in the past 1/3/6 months was a sale.
Only transactions above 80,000 SEK were included (33% of all insider trades were excluded because they were too small). If there were no purchases/sales in the 1, 3, or 6 months after a company's inclusion, it was excluded from the portfolio. All stocks were equally weighted. The analysis showed that:
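The screening rules above can be sketched as a small filter over trade records. All data and names here are hypothetical, invented purely to illustrate the portfolio construction:

```python
from datetime import date, timedelta

# Hypothetical insider-trade records: (company, date, type, amount in SEK).
trades = [
    ("Alfa", date(2014, 5, 10), "purchase", 120_000),
    ("Alfa", date(2014, 6, 1), "sale", 200_000),
    ("Beta", date(2014, 4, 20), "purchase", 95_000),
    ("Gamma", date(2014, 6, 5), "purchase", 40_000),  # below 80,000 SEK, excluded
]

def portfolio(trades, today, months, rule):
    """Companies matching one of the screening rules over a lookback window."""
    window_start = today - timedelta(days=30 * months)
    by_company = {}
    for company, day, kind, amount in trades:
        if amount <= 80_000 or not (window_start <= day <= today):
            continue  # small trades and out-of-window trades are ignored
    # group the surviving trades per company
        by_company.setdefault(company, []).append((day, kind))
    result = []
    for company, events in by_company.items():
        events.sort()
        kinds = {k for _, k in events}
        if rule == "had_purchase" and "purchase" in kinds:
            result.append(company)
        elif rule == "only_purchases" and kinds == {"purchase"}:
            result.append(company)
        elif rule == "last_was_purchase" and events[-1][1] == "purchase":
            result.append(company)
    return sorted(result)

today = date(2014, 6, 30)
buys_only = portfolio(trades, today, 3, "only_purchases")
```

With these sample records, "Alfa" drops out of the purchases-only portfolio because of its later sale, and "Gamma" never enters because the trade is below the size threshold.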
The highest outperformance is in small-cap insider purchase portfolios, smaller in mid-cap purchase portfolios, and smallest in large-cap insider purchase portfolios.
The "insider effect" fades away with a longer holding horizon.
Sell |
https://en.wikipedia.org/wiki/Sakurai%20Prize | The J. J. Sakurai Prize for Theoretical Particle Physics is presented by the American Physical Society at its annual April Meeting, and honors outstanding achievement in particle physics theory. The prize consists of a monetary award (US$10,000), a certificate citing the contributions recognized by the award, and a travel allowance for the recipient to attend the presentation. The award is endowed by the family and friends of particle physicist J. J. Sakurai. The prize has been awarded annually since 1985.
Prize recipients
The following have won this prize:
2023 Heinrich Leutwyler: "For fundamental contributions to the effective field theory of pions at low energies, and for proposing that the gluon is a color octet."
2022 Nima Arkani-Hamed: "For the development of transformative new frameworks for physics beyond the standard model with novel experimental signatures, including work on large extra dimensions, the little Higgs, and more generally for new ideas connected to the origin of the electroweak scale."
2021 Vernon Barger: "For pioneering work in collider physics contributing to the discovery and characterization of the W boson, top quark, and Higgs boson, and for the development of incisive strategies to test theoretical ideas with experiments."
2020 Pierre Sikivie: "For seminal work recognizing the potential visibility of the invisible axion, devising novel methods to detect it, and for theoretical investigations of its cosmological implications."
2019 Lisa Randall and Raman Sundrum: "For creative contributions to physics beyond the Standard Model, in particular the discovery that warped extra dimensions of space can solve the hierarchy puzzle, which has had a tremendous impact on searches at the Large Hadron Collider."
2018 Ann Nelson and Michael Dine: "For groundbreaking explorations of physics beyond the standard model of particle physics, including their seminal joint work on dynamical super-symmetry breaking, and for their innovative contributio |
https://en.wikipedia.org/wiki/Dumaresq | The Dumaresq is a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It is a computer that relates vital variables of the fire control problem to the movement of one's own ship and that of a target ship.
It was often used with other devices, such as a Vickers range clock, to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded.
Geometric principle
The dumaresq relies on sliding and rotating bars and dials to represent the motion of the two ships.
Normally the motion of the ship carrying the dumaresq is represented by a metal bar running above the instrument. Below the bar is a round metal plate inscribed with a coordinate plot, and an angle scale around its outer rim. The fixed bar is mounted on a bearing that allows it to be turned to represent the direction of motion of the ship, measured against the scale. Hanging down from the metal bar is a device that is slid along the bar to represent the speed of the ship. This sliding part is normally in the form of a ring, sometimes referred to as the "inclination ring", that is suspended just above the coordinate plate.
The motion of the enemy ship is represented by a bar connected to the sliding ring, the "enemy bar". This is normally in the form of a long pointer that extends from the ring towards the edge of the plot, which allows the angle of the enemy ship to be input by rotating the pointer (and ring) as measured against the angle scale at the edge of the plot. A smaller pointer connected to this bar, the "enemy pointer", extends downward from the bar, and can slide along it to represent the speed of the enemy ship.
The central coordinate plate also rotates, which is used to represent the current bearing to the target. When correctly set, the enemy pointer will point to a location on the coordinate plate. The coordinates can be |
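The relative-motion geometry the instrument mechanizes can be modelled in a few lines: resolve the enemy-minus-own velocity along and across the line of bearing to get the range rate and the deflection component. This is a simplified illustrative model; the function name and sign conventions are mine, not a description of any actual dumaresq pattern.

```python
import math

def dumaresq(own_course, own_speed, enemy_course, enemy_speed, bearing):
    """Resolve the relative velocity along/across the line of bearing.

    Courses and bearing in degrees (0 = north, clockwise); speeds in knots.
    Returns (range_rate, speed_across); range_rate > 0 means the range is opening.
    """
    def vec(course, speed):
        r = math.radians(course)
        return (speed * math.sin(r), speed * math.cos(r))  # (east, north)

    oe, on = vec(own_course, own_speed)
    ee, en = vec(enemy_course, enemy_speed)
    rel_e, rel_n = ee - oe, en - on  # relative velocity of the target
    b = math.radians(bearing)
    along = rel_e * math.sin(b) + rel_n * math.cos(b)   # component along the bearing
    across = rel_e * math.cos(b) - rel_n * math.sin(b)  # component across the bearing
    return along, across

# Both ships steaming north at 20 knots: no relative motion, so zero range rate.
rr, sa = dumaresq(0, 20, 0, 20, 90)
```

A target bearing due east and steaming east simply opens the range, which the model reports as a positive range rate with no cross-bearing component.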
https://en.wikipedia.org/wiki/Beveridge%20curve | A Beveridge curve, or UV curve, is a graphical representation of the relationship between unemployment and the job vacancy rate, the number of unfilled jobs expressed as a proportion of the labour force. It typically has vacancies on the vertical axis and unemployment on the horizontal. The curve, named after William Beveridge, is hyperbolic-shaped and slopes downward, as a higher rate of unemployment normally occurs with a lower rate of vacancies. If it moves outward over time, a given level of vacancies would be associated with higher and higher levels of unemployment, which would imply decreasing efficiency in the labour market. Inefficient labour markets are caused by mismatches between available jobs and the unemployed and an immobile labour force.
The position on the curve can indicate the current state of the economy in the business cycle. For example, recessionary periods are indicated by high unemployment and low vacancies, corresponding to a position on the lower side of the 45° line, while high vacancies and low unemployment indicate expansionary periods, on the upper side of the 45° line.
In the United States, following the Great Recession, there was a marked shift in the Beveridge curve. A 2012 International Monetary Fund (IMF) report said the shift can be explained in part by "extended unemployment insurance benefits" and "skill mismatch" between the unemployed and vacancies.
History
The Beveridge curve, or UV curve, was developed in 1958 by Christopher Dow and Leslie Arthur Dicks-Mireaux. They were interested in measuring excess demand in the goods market for the guidance of Keynesian fiscal policies and took British data on vacancies and unemployment in the labour market as a proxy, since excess demand is unobservable. By 1958, they had 12 years of data available since the British government had started collecting data on unfilled vacancies from notification at labour exchanges in 1946. Dow and Dicks-Mireaux presented the unemployment and vacancy data in |
https://en.wikipedia.org/wiki/Maximal%20ergodic%20theorem | The maximal ergodic theorem is a theorem in ergodic theory, a discipline within mathematics.
Suppose that (X, Σ, μ) is a probability space, that T : X → X is a (possibly noninvertible) measure-preserving transformation, and that f ∈ L¹(μ). Define f* by

f*(x) = sup_{N ≥ 1} (1/N) Σ_{i=0}^{N−1} f(T^i x).
Then the maximal ergodic theorem states that

∫_{f* > λ} f dμ ≥ λ · μ({f* > λ})

for any λ ∈ R.
This theorem is used to prove the point-wise ergodic theorem. |
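On a finite probability space the inequality can be checked directly with exact arithmetic. The sketch below uses the cyclic shift on 12 points with uniform measure as the measure-preserving map (my choice of example; on a period-n orbit the supremum of the ergodic averages is attained for some N ≤ n, so a finite loop suffices):

```python
from fractions import Fraction

n = 12
f = [3, -1, 4, 1, -5, 9, 2, -6, 5, 3, -5, 8]  # an arbitrary observable

def T(x):
    return (x + 1) % n  # measure-preserving cyclic shift

def f_star(x):
    """sup over N of the ergodic averages; attained for some N <= n here."""
    best = Fraction(-10**9)
    s = Fraction(0)
    y = x
    for N in range(1, n + 1):
        s += f[y]
        y = T(y)
        best = max(best, s / N)
    return best

mu = Fraction(1, n)  # uniform measure on the n points

def check(lam):
    """Verify  integral of f over {f* > lam}  >=  lam * mu{f* > lam}."""
    above = [x for x in range(n) if f_star(x) > lam]
    lhs = sum(Fraction(f[x]) for x in above) * mu
    rhs = Fraction(lam) * len(above) * mu
    return lhs >= rhs

results = [check(lam) for lam in range(-6, 7)]
```

Because the theorem holds for every real λ and the arithmetic is exact rationals, every check passes.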
https://en.wikipedia.org/wiki/Complex%20conjugate | In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. That is, if a and b are real numbers, then the complex conjugate of a + bi is a − bi. The complex conjugate of z is often denoted as z̄ or z*.
In polar form, if r and φ are real numbers, then the conjugate of r e^{iφ} is r e^{−iφ}. This can be shown using Euler's formula.
The product of a complex number a + bi and its conjugate is a real number: a² + b² (or r² in polar coordinates).
If a root of a univariate polynomial with real coefficients is complex, then its complex conjugate is also a root.
Notation
The complex conjugate of a complex number z is written as z̄ or z*. The first notation, a vinculum, avoids confusion with the notation for the conjugate transpose of a matrix, which can be thought of as a generalization of the complex conjugate. The second is preferred in physics, where the dagger (†) is used for the conjugate transpose, as well as in electrical engineering and computer engineering, where bar notation can be confused with the logical negation ("NOT") Boolean algebra symbol, while the bar notation is more common in pure mathematics.
If a complex number is represented as a 2×2 matrix, the notations are identical, and the complex conjugate corresponds to the transpose, a flip along the diagonal.
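Python's built-in complex type implements conjugation directly; a short illustration, including the 2×2-matrix representation of a + bi as [[a, −b], [b, a]], where conjugation is the transpose (helper names are my own):

```python
z = 3 + 4j
w = 1 - 2j

conj = z.conjugate()                  # the conjugate of 3 + 4j
prod = z * z.conjugate()              # (a + bi)(a - bi) = a^2 + b^2, a real number
mods = (abs(z), abs(z.conjugate()))   # conjugation preserves the modulus

# Conjugation distributes over the arithmetic operations:
sum_rule = (z + w).conjugate() == z.conjugate() + w.conjugate()
mul_rule = (z * w).conjugate() == z.conjugate() * w.conjugate()

# a + bi as the matrix [[a, -b], [b, a]]; the conjugate is the transpose.
def to_matrix(c):
    return [[c.real, -c.imag], [c.imag, c.real]]

def transpose(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

matrix_rule = transpose(to_matrix(z)) == to_matrix(z.conjugate())
```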
Properties
The following properties apply for all complex numbers z and w, unless stated otherwise, and can be proved by writing z and w in the form a + bi.
For any two complex numbers, conjugation is distributive over addition, subtraction, multiplication and division:
A complex number is equal to its complex conjugate if its imaginary part is zero, that is, if the number is real. In other words, real numbers are the only fixed points of conjugation.
Conjugation does not change the modulus of a complex number: |z̄| = |z|.
Conjugation is an involution, that is, the conjugate of the conjugate of a complex number z is z. In symbols, (z*)* = z.
The product of a complex number with its conjugate is equal to the s |
https://en.wikipedia.org/wiki/Termination%20factor | In molecular biology, a termination factor is a protein that mediates the termination of RNA transcription by recognizing a transcription terminator and causing the release of the newly made mRNA. This is part of the process that regulates the transcription of RNA to preserve gene expression integrity and are present in both eukaryotes and prokaryotes, although the process in bacteria is more widely understood. The most extensively studied and detailed transcriptional termination factor is the Rho (ρ) protein of E. coli.
Prokaryotic
Prokaryotes use one type of RNA polymerase, transcribing mRNAs that code for more than one type of protein. Transcription, translation and mRNA degradation all happen simultaneously. Transcription termination is essential to define boundaries in transcriptional units, a function necessary to maintain the integrity of the strands and provide quality control. Termination in E. coli may be Rho dependent, utilizing Rho factor, or Rho independent, also known as intrinsic termination. Although most operons in DNA are Rho independent, Rho dependent termination is also essential to maintain correct transcription.
ρ factor
The Rho protein is an RNA translocase that recognizes a cytosine-rich region of the elongating mRNA, but the exact features of the recognized sequences and how the cleaving takes place remain unknown. Rho forms a ring-shaped hexamer and advances along the mRNA toward RNA polymerase (5' to 3' with respect to the mRNA), hydrolyzing ATP as it moves. When the Rho protein reaches the RNA polymerase complex, transcription is terminated by dissociation of the RNA polymerase from the DNA. The structure and activity of the Rho protein are similar to those of the F1 subunit of ATP synthase, supporting the theory that the two share an evolutionary link.
Rho factor is widely present in different bacterial sequences and is responsible for the genetic polarity in E. coli. It works as a sensor of translational status, inhibiting non-productive transcri |
https://en.wikipedia.org/wiki/Steven%20Handel | Steven Neil Handel (born January 29, 1945, in Brooklyn) is an American educator and restoration ecologist. Handel is currently Distinguished Professor of Ecology at Rutgers University and Visiting Professor at the Harvard University Graduate School of Design.
Career
A native of Brooklyn, Handel received a Bachelor of Arts in Biological Sciences from Columbia University (1969) and a Master's degree (1974) and Doctor of Philosophy (1976) in Ecology and Evolutionary Biology from Cornell University.
Handel began his professorial career as a biology professor at the University of South Carolina, and went on to posts at Yale University (where he was also director of the Marsh Botanical Garden) and Rutgers University. In 1996, he was promoted by Rutgers to full professor of ecology and was named director of their Center for Urban Restoration Ecology.
Handel was the lead ecologist for the restoration of Orange County Great Park in Irvine, California, and his other projects include Brooklyn Bridge Park, and the landscape for the 2008 Summer Olympics in Beijing, China.
Awards
2009 - American Society of Landscape Architects Honor Research Award
2011 - Society for Ecological Restoration Theodore M. Sperry Award
2015 - American Society of Landscape Architects Honor Communications Award |
https://en.wikipedia.org/wiki/Zen%204 | Zen 4 is the codename for a CPU microarchitecture designed by AMD, released on September 27, 2022. It is the successor to Zen 3 and uses TSMC's N5 process for CCDs. Zen 4 powers Ryzen 7000 mainstream desktop processors (codenamed "Raphael") and is used in high-end mobile processors (codenamed "Dragon Range"), thin & light mobile processors (codenamed "Phoenix"), as well as EPYC 9004 server processors (codenamed "Genoa" and "Bergamo").
Features
Like its predecessor, Zen 4 in its Desktop Ryzen variants features one or two Core Complex Dies (CCDs) built on TSMC's 5 nm process and one I/O die built on 6 nm. Previously, the I/O die on Zen 3 was built on GlobalFoundries' 14 nm process for EPYC and 12 nm process for Ryzen. Zen 4's I/O die includes integrated RDNA 2 graphics for the first time on any Zen architecture. Zen 4 marks the first utilization of the 5 nm process for x86-based desktop processors.
On desktop and server platforms, Zen 4 supports only DDR5 memory, with support for DDR4 dropped. Additionally, Zen 4 supports new AMD EXPO SPD profiles for more comprehensive memory tuning and overclocking by RAM manufacturers. Unlike Intel XMP, AMD EXPO is marketed as an open, license- and royalty-free standard for describing memory kit parameters, such as operating frequency, timings and voltages. It allows a wider set of timings to be encoded to achieve better performance and compatibility. However, XMP memory profiles are still supported. EXPO profiles can also be used with Intel processors.
All Ryzen desktop processors feature 28 (24 + 4) PCIe 5.0 lanes. This means that a discrete GPU can be connected by 16 PCIe lanes or two GPUs by 8 PCIe lanes each. Additionally, there are now 2 x 4 lane PCIe interfaces, most often used for M.2 storage devices. Whether the lanes connecting the GPUs in the mechanical x16 slots are executed as PCIe 4.0 or PCIe 5.0 can be configured by the mainboard manufacturers. Finally, 4 PCIe 5.0 lanes are reserved for connecting the south bridge chip or chips |
https://en.wikipedia.org/wiki/HMG-CoA%20reductase%20family | In molecular biology, the HMG-CoA reductase family is a family of enzymes which participate in the mevalonate pathway, the metabolic pathway that produces cholesterol and other isoprenoids.
There are two distinct classes of hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase enzymes: class I consists of eukaryotic and most archaeal enzymes, while class II consists of prokaryotic enzymes.
Class I HMG-CoA reductases catalyse the NADP-dependent synthesis of mevalonate from 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA). In vertebrates, membrane-bound HMG-CoA reductase is the rate-limiting enzyme in the biosynthesis of cholesterol and other isoprenoids. In plants, mevalonate is the precursor of all isoprenoid compounds. The reduction of HMG-CoA to mevalonate is regulated by feedback inhibition by sterols and non-sterol metabolites derived from mevalonate, including cholesterol. In archaea, HMG-CoA reductase is a cytoplasmic enzyme involved in the biosynthesis of the isoprenoids side chains of lipids. Class I HMG-CoA reductases consist of an N-terminal membrane domain (lacking in archaeal enzymes), and a C-terminal catalytic region. The catalytic region can be subdivided into three domains: an N-domain (N-terminal), a large L-domain, and a small S-domain (inserted within the L-domain). The L-domain binds the substrate, while the S-domain binds NADP.
Class II HMG-CoA reductases catalyse the reverse reaction of class I enzymes, namely the NAD-dependent synthesis of HMG-CoA from mevalonate and CoA. Some bacteria, such as Pseudomonas mevalonii, can use mevalonate as the sole carbon source. Class II enzymes lack a membrane domain. Their catalytic region is structurally related to that of class I enzymes, but it consists of only two domains: a large L-domain and a small S-domain (inserted within the L-domain). As with class I enzymes, the L-domain binds substrate, but the S-domain binds NAD (instead of NADP in class I). |
https://en.wikipedia.org/wiki/Cooler | A cooler, portable ice chest, ice box, cool box, chilly bin (in New Zealand), or esky (Australia) is an insulated box used to keep food or drink cool.
Ice cubes are most commonly placed in it to help the contents inside stay cool. Ice packs are sometimes used, as they either contain the melting water inside, or have a gel sealed inside that stays cold longer than plain ice (absorbing heat as it changes phase).
Coolers are often taken on picnics, and on vacation or holiday. Where summers are hot, they may also be used just for getting cold groceries home from the store, such as keeping ice cream from melting in a hot automobile. Even without adding ice, this can be helpful, particularly if the trip home will be lengthy. Some coolers have built-in cupholders in the lid.
They are usually made with interior and exterior shells of plastic, with a hard foam in between. They come in sizes from small personal ones to large family ones with wheels. Disposable ones are made solely from polystyrene foam (like a disposable coffee cup) about 2 cm (one inch) thick. Most reusable ones have molded-in handles; a few have shoulder straps. The cooler has developed from just a means of keeping beverages cold into a mode of transportation with the ride-on cooler. A thermal bag, cooler bag or cool bag is very similar in concept, but typically smaller and not rigid.
History
The original inventor of the cooler is unknown, with versions becoming available in various parts of the world throughout the 1950s.
The portable ice chest was patented in the USA by Richard C. Laramy of Joliet, Illinois. On February 24, 1951, Laramy filed an application with the United States Patent Office for a portable ice chest (Serial No. 212,573). The patent (#2,663,157) was issued December 22, 1953.
In 1952, the portable Esky Auto Box was released in Australia by the Sydney refrigeration company Malley’s. It was made from steel and finished in baked enamel and chrome, with cork sheeting for insulation. |
https://en.wikipedia.org/wiki/Rostrum%20%28anatomy%29 | Rostrum (from Latin rostrum, meaning "beak") is a term used in anatomy for a number of phylogenetically unrelated structures in different groups of animals.
Invertebrates
In crustaceans, the rostrum is the forward extension of the carapace in front of the eyes. It is generally a rigid structure, but can be connected by a hinged joint, as seen in Leptostraca.
Among insects, the rostrum is the name for the piercing mouthparts of the order Hemiptera as well as those of the snow scorpionflies, among many others. The long snout of weevils is also called a rostrum.
Gastropod molluscs have a rostrum or proboscis.
Cephalopod molluscs have hard beak-like mouthparts referred to as the rostrum.
Vertebrates
In mammals, the rostrum is that part of the cranium located in front of the zygomatic arches, where it holds the teeth, palate, and nasal cavity. Additionally, the corpus callosum of the human brain has a nerve tract known as the rostrum.
The beak or snout of a vertebrate may also be referred to as the rostrum.
Some cetaceans, including toothed whales such as dolphins and beaked whales, have rostrums (beaks) which evolved from their jawbones. The narwhal possesses a large rostrum (tusk) which evolved from a protruding canine tooth.
Some fish have permanently protruding rostrums which evolved from their upper jawbones. Billfish (marlin, swordfish and sailfish) use rostrums (bills) to slash and stun prey. Paddlefish, goblin sharks and hammerhead sharks have rostrums packed with electroreceptors which signal the presence of prey by detecting weak electrical fields. Sawsharks and the critically endangered sawfish have rostrums (saws) which are both electro-sensitive and used for slashing. The rostrums extend ventrally in front of the fish. In the case of hammerheads the rostrum (hammer) extends both ventrally and laterally (sideways).