source stringlengths 31 227 | text stringlengths 9 2k |
|---|---|
https://en.wikipedia.org/wiki/Code%20of%20the%20Quipu | Code of the Quipu is a book on the Inca system of recording numbers and other information by means of a quipu, a system of knotted strings. It was written by mathematician Marcia Ascher and anthropologist Robert Ascher, and published as Code of the Quipu: A Study in Media, Mathematics, and Culture by the University of Michigan Press in 1981. Dover Books republished it with corrections in 1997 as Mathematics of the Incas: Code of the Quipu. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries.
Topics
The book describes (necessarily by inference, as there is no written record beyond the quipu themselves) the uses of the quipu, for instance in accounting and taxation. Although 400 quipu are known to survive, the book's study is based on a selection of 191 of them, described in a companion databook. It analyzes the mathematical principles behind the use of the quipu, including a decimal form of positional notation, the concept of zero, rational numbers, and arithmetic, and the way the spatial relations between the strings of a quipu recorded hierarchical and categorical information.
It argues that beyond its use in recording numbers, the quipu acted as a method for planning for future events, and as a writing system for the Inca, and that it provides a tangible representation of "insistence", the thematic concerns in Inca culture for symmetry and spatial and hierarchical connections.
The initial chapters of the book provide an introduction to Inca society and the physical organization of a quipu (involving the colors, size, direction, and hierarchy of its strings), and discussions of repeated themes in Inca society and of the place of the quipu and its makers in that society. Later chapters discuss the mathematical structure of the quipu and of the information it stores, with reference to similarly-structured data in modern society and exercises that ask students to constr |
https://en.wikipedia.org/wiki/Johnny%20Mnemonic%20%28film%29 | Johnny Mnemonic is a 1995 cyberpunk action film directed by Robert Longo in his feature directorial debut. William Gibson, who wrote the 1981 short story, wrote the screenplay. The film, set in 2021, portrays a dystopian future wracked by a tech-induced plague, awash with conspiracies, and dominated by megacorporations and organized crime. Keanu Reeves plays Johnny, a data courier with an overloaded brain implant designed to securely store confidential information. Takeshi Kitano portrays a Yakuza affiliated with a megacorporation attempting to suppress the data; he hires a psychopathic assassin played by Dolph Lundgren to do so. Ice-T and Dina Meyer co-star as Johnny's allies, a freedom fighter and a bodyguard, respectively.
It was shot in Canada; Toronto and Montreal filled in for Newark and Beijing. The project was difficult for Gibson and Longo. After they struggled for years to finance a low-budget adaptation of Gibson's story, Sony greenlit Johnny Mnemonic with a $26 million budget. When Reeves' previous film, Speed, unexpectedly became a major hit, Sony attempted to retool Johnny Mnemonic as a blockbuster. Longo experienced extensive creative differences with the studio, who forced casting choices and script rewrites on him. The film was ultimately recut without Longo's involvement, resulting in a version that he felt did not reflect his artistic vision. Described by Longo and Gibson as originally full of irony, it was edited into a mainstream action film and received negative reviews from critics.
A longer version (103 mins) of the film premiered in Japan on April 15, 1995, featuring a score by Mychael Danna and more scenes involving Kitano. The film was released in the United States on May 26, 1995. In 2022, a black-and-white edition of the film, titled Johnny Mnemonic: In Black and White, was released; Gibson characterized it as closer to his original vision.
Plot
In 2021, society is driven by a virtual Internet, which has created a degenerate effect called |
https://en.wikipedia.org/wiki/Hyperbolic%20functions | In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and −sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively.
Hyperbolic functions occur in the calculations of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
The basic hyperbolic functions are:
hyperbolic sine "sinh",
hyperbolic cosine "cosh",
from which are derived:
hyperbolic tangent "tanh",
hyperbolic cosecant "csch" (also "cosech")
hyperbolic secant "sech",
hyperbolic cotangent "coth",
corresponding to the derived trigonometric functions.
The inverse hyperbolic functions are:
area hyperbolic sine "arsinh" (also denoted "sinh−1", "asinh" or sometimes "arcsinh")
area hyperbolic cosine "arcosh" (also denoted "cosh−1", "acosh" or sometimes "arccosh")
and so on.
The hyperbolic functions take a real argument called a hyperbolic angle. The size of a hyperbolic angle is twice the area of its hyperbolic sector. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector.
In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane.
By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument.
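The defining relations above can be checked numerically; a short sketch in plain Python using the standard math module:

```python
import math

# The fundamental identity cosh^2(t) - sinh^2(t) = 1 — the hyperbolic
# analogue of cos^2 + sin^2 = 1 — holds at every real t.
for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(math.cosh(t) ** 2 - math.sinh(t) ** 2 - 1.0) < 1e-9

# d/dt sinh(t) = cosh(t) and d/dt cosh(t) = sinh(t),
# checked with a central difference quotient.
h = 1e-6
t = 0.7
d_sinh = (math.sinh(t + h) - math.sinh(t - h)) / (2 * h)
d_cosh = (math.cosh(t + h) - math.cosh(t - h)) / (2 * h)
assert abs(d_sinh - math.cosh(t)) < 1e-6
assert abs(d_cosh - math.sinh(t)) < 1e-6

# Both functions are combinations of the exponential function.
assert abs(math.sinh(t) - (math.exp(t) - math.exp(-t)) / 2) < 1e-12
assert abs(math.cosh(t) - (math.exp(t) + math.exp(-t)) / 2) < 1e-12
```

The exponential identities in the last two lines are what make sinh and cosh entire functions on the complex plane, as noted above.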
Hyperbolic functions were introduced in the 1760s independe |
https://en.wikipedia.org/wiki/Besicovitch%20covering%20theorem | In mathematical analysis, a Besicovitch cover, named after Abram Samoilovitch Besicovitch, is an open cover of a subset E of the Euclidean space RN by balls such that each point of E is the center of some ball in the cover.
The Besicovitch covering theorem asserts that there exists a constant cN depending only on the dimension N with the following property:
Given any Besicovitch cover F of a bounded set E, there are cN subcollections of balls A1 = {Bn1}, …, AcN = {BncN} contained in F such that each collection Ai consists of disjoint balls, and the balls of the cN collections taken together cover E (that is, E is contained in the union of all the Bni).
Let G denote the subcollection of F consisting of all balls from the cN disjoint families A1,...,AcN.
The following less precise statement is clearly true: every point x ∈ RN belongs to at most cN different balls from the subcollection G, and G remains a cover for E (every point y ∈ E belongs to at least one ball from the subcollection G). This property actually gives an equivalent form of the theorem (except for the value of the constant).
There exists a constant bN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there is a subcollection G of F such that G is a cover of the set E and every point x ∈ E belongs to at most bN different balls from the subcover G.
In other words, the function SG equal to the sum of the indicator functions of the balls in G is larger than 1E and bounded on RN by the constant bN: 1E ≤ SG ≤ bN.
Application to maximal functions and maximal inequalities
Let μ be a non-negative Borel measure on RN, finite on compact subsets, and let f be a μ-integrable function. Define the maximal function f* by setting, for every x ∈ RN, f*(x) = sup over r > 0 of (1/μ(B(x, r))) ∫B(x, r) |f| dμ (using the convention that the ratio is 0 when μ(B(x, r)) = 0).
This maximal function is lower semicontinuous, hence measurable. The following maximal inequality is satisfied for every λ > 0:
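Written out explicitly (a reconstruction in standard notation; bN is the constant from the covering theorem), the inequality reads:

```latex
\mu\bigl(\{\, x \in \mathbf{R}^N : f^*(x) > \lambda \,\}\bigr)
\;\le\; \frac{b_N}{\lambda} \int_{\mathbf{R}^N} |f| \, \mathrm{d}\mu .
```

This is the weak-type (1,1) bound for the μ-maximal function; its strength is that it holds for an arbitrary Borel measure μ, which is precisely what the Besicovitch covering theorem provides over the Vitali covering lemma.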
Proof.
The set Eλ of the points x such that f*(x) > λ clearly admits a Besicovitch cover Fλ by balls B (centered at points of Eλ) such that ∫B |f| dμ > λ μ(B).
For every bounded Borel subset E´ of Eλ, one can find a subcollection G extrac |
https://en.wikipedia.org/wiki/ExoMars | ExoMars (Exobiology on Mars) is an astrobiology programme of the European Space Agency (ESA) and the Russian space agency (Roscosmos).
The goals of ExoMars are to search for signs of past life on Mars, investigate how the Martian water and geochemical environment vary, investigate atmospheric trace gases and their sources and, by doing so, demonstrate the technologies for a future Mars sample-return mission.
The first part of the programme is a mission launched in 2016 that placed the Trace Gas Orbiter into Mars orbit and released the Schiaparelli EDM lander. The orbiter is operational but the lander crashed on the planet's surface. The second part of the programme was planned to launch in July 2020, when the Kazachok lander would have delivered the Rosalind Franklin rover to the surface, supporting a science mission that was expected to last into 2022 or beyond. On 12 March 2020, it was announced that the second mission was being delayed to 2022 as a result of problems with the parachutes, which could not be resolved in time for the launch window.
The Trace Gas Orbiter (TGO) and a test stationary lander called Schiaparelli were launched on 14 March 2016. TGO entered Mars orbit on 19 October 2016 and proceeded to map the sources of methane (CH4) and other trace gases present in the Martian atmosphere that could be evidence for possible biological or geological activity. The TGO features four instruments and will also act as a communications relay satellite. The Schiaparelli experimental lander separated from TGO on 16 October and was maneuvered to land in Meridiani Planum, but it crashed on the surface of Mars. The landing was designed to test new key technologies to safely deliver the subsequent rover mission.
In June 2023, a Roscosmos lander named Kazachok ("little Cossack", referring to a folk dance), was due to deliver the ESA Rosalind Franklin rover to the Martian surface. The rover would also include some Roscosmos-built instruments. The second mission oper
https://en.wikipedia.org/wiki/Anatomical%20terms%20of%20neuroanatomy | This article describes anatomical terminology that is used to describe the central and peripheral nervous systems - including the brain, brainstem, spinal cord, and nerves.
Anatomical terminology in neuroanatomy
Neuroanatomy, like other aspects of anatomy, uses specific terminology to describe anatomical structures. This terminology helps ensure that a structure is described accurately, with minimal ambiguity. Terms also help ensure that structures are described consistently, depending on their structure or function. Terms are often derived from Latin and Greek, and like other areas of anatomy are generally standardised based on internationally accepted lexicons such as Terminologia Anatomica.
To help with consistency, humans and other species are assumed when described to be in the standard anatomical position, with the body standing erect and facing the observer, arms at the sides, palms forward.
Location
Anatomical terms of location depend on the location and the species being described.
To understand the terms used for anatomical localisation, consider an animal with a straight CNS, such as a fish or lizard. In such animals the terms "rostral", "caudal", "ventral" and "dorsal" mean respectively towards the rostrum, towards the tail, towards the belly and towards the back. For a full discussion of those terms, see anatomical terms of location.
For many purposes of anatomical description, positions and directions are relative to the standard anatomical planes and axes. Such reference to the anatomical planes and axes is called the stereotactic approach.
Standard terms used throughout anatomy include anterior / posterior for the front and back of a structure, superior / inferior for above and below, medial / lateral for structures close to and away from the midline respectively, and proximal / distal for structures close to and far away from a set point.
Some terms are used more commonly in neuroanatomy, particularly:
Rostral and caudal: In animals with linear ne |
https://en.wikipedia.org/wiki/Bumblebee%20models | Bumblebee models are effective field theories describing a vector field with a vacuum expectation value that spontaneously breaks Lorentz symmetry. A bumblebee model is the simplest case of a theory with spontaneous Lorentz symmetry breaking.
The development of bumblebee models was motivated primarily by the discovery that mechanisms in string theory (and subsequently other quantum theories of gravity) can lead to tensor-valued fields acquiring vacuum expectation values. Bumblebee models are different from local U(1) gauge theories. Nevertheless, in some bumblebee models, massless modes that behave like photons can appear.
Introduction
Alan Kostelecký and Stuart Samuel showed in 1989 that mechanisms arising in the context of string theory can lead to spontaneous breaking of Lorentz symmetry. A set of models at the level of effective field theory were defined that contained gravitational fields and a vector field Bµ that has a nonzero vacuum expectation value, <Bµ> = bµ. These have become known as bumblebee models.
Typically in these models, spontaneous Lorentz violation is caused by the presence of a potential term in the action. The vacuum value bµ, along with a background metric, give a solution that minimizes the bumblebee potential.
The vacuum value bµ acts as a fixed background field that spontaneously breaks Lorentz symmetry. It is an example, for the case of a vector, of a coefficient for Lorentz violation as defined in the Standard-Model Extension.
The name bumblebee model, coined by Kostelecký, is based on an insect whose ability to fly has sometimes been questioned on theoretical grounds, but which nonetheless is able to fly successfully.
Lagrangian
Different examples of bumblebee Lagrangians can be constructed. Their expressions include
kinetic terms for the gravitational and bumblebee fields, a potential V that induces spontaneous Lorentz breaking, and matter terms. In addition, there can be couplings between the gravitational, bumblebee, and matt |
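A representative bumblebee Lagrangian of the kind just described is the minimal Kostelecký–Samuel form, sketched here in commonly used conventions (which vary across the literature, and with nonminimal gravitational couplings omitted):

```latex
\mathcal{L}_B = \frac{R}{2\kappa}
  \;-\; \frac{1}{4} B_{\mu\nu} B^{\mu\nu}
  \;-\; V\!\left(B_\mu B^\mu \mp b^2\right)
  \;+\; \mathcal{L}_{\text{matter}},
\qquad
B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu .
```

The potential V is minimized when B_μB^μ = ±b², so the vector field acquires the nonzero vacuum value bµ that spontaneously breaks Lorentz symmetry, as described in the introduction.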
https://en.wikipedia.org/wiki/Outline%20of%20physiology | The following outline is provided as an overview of and topical guide to physiology:
Physiology – scientific study of the normal function in living systems. A branch of biology, its focus is on how organisms, organ systems, organs, cells, and biomolecules carry out the chemical or physical functions that exist in a living system.
What type of thing is physiology?
Physiology can be described as all of the following:
An academic discipline
A branch of science
A branch of biology
Branches of physiology
By approach
Applied physiology
Clinical physiology
Exercise physiology
Nutrition physiology
Comparative physiology
Mathematical physiology
Yoga physiology
By organism
Animal physiology
Mammal physiology
Human physiology
Fish physiology
Insect physiology
Plant physiology
By process
Developmental physiology
Ecophysiology
Evolutionary physiology
By subsystem
Cardiovascular physiology
Renal physiology
Defense physiology
Gastrointestinal physiology
Musculoskeletal physiology
Neurophysiology
Respiratory physiology
History of physiology
History of physiology
General physiology concepts
Physiology organizations
American Physiological Society
International Union of Physiological Sciences
Physiology publications
American Journal of Physiology
Experimental Physiology
Journal of Applied Physiology
Persons influential in physiology
List of Nobel laureates in Physiology or Medicine
List of physiologists
See also
Outline of biology |
https://en.wikipedia.org/wiki/Lumboinguinal%20nerve | The lumboinguinal nerve, also known as the femoral or crural branch of genitofemoral, is a nerve in the abdomen. The lumboinguinal nerve is a branch of the genitofemoral nerve. The "femoral" part supplies skin to the femoral triangle area.
Structure
The lumboinguinal nerve arises from the genitofemoral nerve. It descends alongside the external iliac artery, sending a few filaments around it, and, passing beneath the inguinal ligament, enters the sheath of the femoral vessels, lying superficial and lateral to the femoral artery. Here, it pierces the anterior layer of the sheath of the vessels and the fascia lata, and supplies the skin of the anterior surface of the upper part of the thigh.
On the front of the thigh it communicates with the anterior cutaneous branches of the femoral nerve.
A few filaments from the lumboinguinal nerve may be traced to the femoral artery.
Additional images
See also
Genitofemoral nerve
External links
Photo of model at Waynesburg College |
https://en.wikipedia.org/wiki/SPRY4 | Protein sprouty homolog 4 is a protein that in humans is encoded by the SPRY4 gene.
Function
SPRY4 is an inhibitor of the receptor-transduced mitogen-activated protein kinase (MAPK) signaling pathway. It is positioned upstream of RAS (see HRAS; MIM 190020) activation and impairs the formation of active GTP-RAS (Leeksma et al., 2002).
Interactions
SPRY4 has been shown to interact with TESK1.
See also
MAPK signaling pathway
Ras subfamily |
https://en.wikipedia.org/wiki/Fleet%20Radio%20Unit%2C%20Melbourne | Fleet Radio Unit, Melbourne (FRUMEL) was a United States–Australian–British signals intelligence unit, founded in Melbourne, Australia, during World War II. It was one of two major Allied signals intelligence units called Fleet Radio Units in the Pacific theatre, the other being FRUPAC (also known as Station HYPO), in Hawaii. FRUMEL was a U.S. Navy organization, reporting directly to CINCPAC (Admiral Nimitz) in Hawaii and the Chief of Naval Operations (Admiral King) in Washington, D.C., and hence to the central cryptographic organization. The separate Central Bureau in Melbourne (later Brisbane) was attached and reported to General Douglas MacArthur's Allied South West Pacific Area command headquarters.
History
FRUMEL was established at the Monterey Apartments in Queens Road in early 1942, and was made up of three main groups. First was Lieutenant Rudolph J. (Rudi) Fabian's 75-man codebreaker unit, previously based at the United States Navy's Station CAST in the Philippines before being evacuated by submarine on 8 April 1942. The second was Commander Eric Nave's small Royal Australian Navy-supported cryptography unit, which had moved to the Monterey Apartments from Victoria Barracks in February 1942. Nave's unit was made up of a core of naval personnel, heavily assisted by university academics and graduates specialising in linguistics and mathematics (including from June 1941 a "cipher group" of four from Sydney University). These included Thomas Room, Dale Trendall, Athanasius Treweek, Eric Barnes, Jack Davies and Ronald Bond. The third group was a trio of British Foreign Office linguists (Henry Archer, Arthur Cooper and Hubert Graves), and Royal Navy support staff, evacuated from Singapore, particularly from the Far East Combined Bureau (FECB) there. IBM punched card tabulating machines were obtained in 1942 to replace those left behind in Manila Bay on leaving Corregidor.
Nave and Fabian had a difficult relationship, and Nave was forced out of FRUMEL, going to |
https://en.wikipedia.org/wiki/McN5652 | McN5652 is a molecule that can be radiolabeled and then used as a radioligand in positron emission tomography (PET) studies. The [11C]-(+)-McN5652 enantiomer binds to the serotonin transporter. The radioligand is used for molecular neuroimaging and for imaging of the lungs.
It was developed by Johnson & Johnson's McNeil Laboratories. According to McNeil, McN5652 was among the strongest SRIs ever reported at the time of its discovery (sub-nM Ki). However, it is not completely 5-HT selective: the racemate has 5-HT = 0.68, NA = 2.9, and DA = 36.8 nM, whereas the (+)-enantiomer has 5-HT = 0.39, NA = 1.8, and DA = 23.5 nM. Paroxetine was listed as 5-HT = 0.44, NA = 20, and DA = 460 nM in the same paper by the same authors.
Derivatives
McN5652 and related structures have been analyzed for QSAR in terms of binding to the MAT receptor binding site.
See also
DASB
JNJ-7925476 (p-ethynyl) |
https://en.wikipedia.org/wiki/MPIR%20%28mathematics%20software%29 | Multiple Precision Integers and Rationals (MPIR) is an open-source software multiprecision integer library forked from the GNU Multiple Precision Arithmetic Library (GMP) project. It consists of much code from past GMP releases, and some original contributed code.
According to the MPIR-devel mailing list, "MPIR is no longer maintained", except for building the old code on Windows using new versions of Microsoft Visual Studio.
According to the MPIR developers, some of the main goals of the MPIR project were:
Maintaining compatibility with GMP – so that MPIR can be used as a replacement for GMP.
Providing build support for Linux, Mac OS, Solaris and Windows systems.
Supporting building MPIR using Microsoft based build tools for use in 32- and 64-bit versions of Windows.
MPIR is optimized for many processors (CPUs). Assembly language code exists for these: ARM; DEC Alpha 21064, 21164, and 21264; AMD K6, K6-2, Athlon, K8, and K10; Intel Pentium, Pentium Pro/II/III, Pentium 4, generic x86, IA-64, Core 2, i7, and Atom; Motorola-IBM PowerPC 32 and 64; MIPS R3000 and R4000; and SPARCv7, SuperSPARC, generic SPARCv8, and UltraSPARC.
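The core service MPIR provides, like GMP, is exact integer arithmetic beyond the 64-bit hardware word. The effect is easy to illustrate; this sketch uses Python's built-in arbitrary-precision integers rather than the MPIR C API (whose mpz_* calls require the library itself):

```python
# A 64-bit machine word tops out at 2**64 - 1; an arbitrary-precision
# library carries exact results to any size, limited only by memory.
word_max = 2**64 - 1

# Compute 100! exactly, digit for digit.
fact = 1
for k in range(2, 101):
    fact *= k

assert fact > word_max          # far beyond any machine word
assert len(str(fact)) == 158    # 100! has 158 decimal digits
assert fact % 97 == 0           # 97 <= 100, so the prime 97 divides 100!
```

In MPIR/GMP the same computation would use mpz_init and mpz_mul on an mpz_t value; the GMP-compatibility goal listed above means such code compiles unchanged against either library.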
Language bindings
See also
Arbitrary-precision arithmetic, data type: bignum
GNU Multiple Precision Arithmetic Library
GNU Multiple Precision Floating-Point Reliable Library (MPFR)
Class Library for Numbers supporting GiNaC |
https://en.wikipedia.org/wiki/Tropical%20ulcer | Tropical ulcer, more commonly known as jungle rot, is a chronic ulcerative skin lesion thought to be caused by polymicrobial infection with a variety of microorganisms, including mycobacteria. It is common in tropical climates.
Ulcers occur on exposed parts of the body, primarily on the anterolateral aspect of the lower limbs, and may erode muscles and tendons, and sometimes the bones. These lesions may frequently develop on preexisting abrasions or sores, sometimes beginning from a mere scratch.
Signs and symptoms
The vast majority of the tropical ulcers occur below the knee, usually around the ankle. They may also occur on arms. They are often initiated by minor trauma, and subjects with poor nutrition are at higher risk. Once developed, the ulcer may become chronic and stable, but also it can run a destructive course with deep tissue invasion, osteitis, and risk of amputation. Unlike Buruli ulcer, tropical ulcers are very painful. Lesions begin with inflammatory papules that progress into vesicles and rupture with the formation of an ulcer. Chronic ulcers involve larger areas and may eventually develop into squamous epithelioma after 10 years or more.
Complications
Skin color: Rarely, jungle rot will result in complications with skin pigmentation. It has been known to leave the victim with different colors such as bright red, blue, green, and a rare color change of orange.
Deep tissue invasion: Infection may spread deep to the subcutaneous tissue, but rarely involve the bone.
Chronic ulceration: Characterised by thick rim of fibrous tissue around the ulcer edges.
Recurrent ulceration: Most commonly in children.
Squamous cell carcinoma: May develop in 2 to 15% of chronic ulcers that persist for more than three years.
Tetanus: By entry of tetanus bacilli through the ulcer.
Microbiology
There is now considerable evidence to suggest that this disease is an infection. Mycobacterium ulcerans has recently been isolated from lesions and is unique |
https://en.wikipedia.org/wiki/Anatoly%20Logunov | Anatoly Alekseyevich Logunov (; December 30, 1926 – March 1, 2015) was a Soviet and Russian theoretical physicist, academician of the USSR Academy of Sciences and Russian Academy of Sciences. He was awarded the Bogolyubov Prize in 1996.
Biography
Anatoly Logunov was born in Obsharovka village, now in Privolzhsky District, Samara Oblast, Russia. In 1951 he graduated from Moscow University where he studied theoretical physics. From 1954 to 1956 he worked in Moscow University, later worked at Joint Institute for Nuclear Research (Dubna). He became doktor nauk in 1959 and professor in 1961. In 1968 he was elected a corresponding member of The Academy of Sciences of USSR. In 1971 the department of quantum theory and high energy physics was founded on faculty of physics of Moscow University. Anatoly Logunov was the head of this department right from the start at least until 2006. In 1972 Anatoly Logunov was elected an academician in the field of nuclear physics. From 1977 till 1992 he was the Rector of Moscow University. Anatoly Logunov died on 1 March 2015 in Moscow, Russia. He was buried at Troyekurovskoye Cemetery in Moscow.
Research
Logunov made a notable contribution to the theory of gravity. He studied quantum field theory. In 1956 he built generalized finite multiplicative renormalization groups and functional and differential renormalization group equations of electrodynamics in the case of arbitrary gauge. Jointly with Piotr Isayev (Russian: Пётр Степанович Исаев), Lev Soloviov (Russian: Лев Дмитриевич Соловьев), Albert Tavkhelidze (Russian: Альберт Никифорович Тавхелидзе) and Ivan Todorov (Bulgarian: Иван Тодоров) et al. he derived dispersion relations for different processes of elementary particle interactions, among them the processes of photoproduction of π-mesons on nucleons. He studied Bell's spaceship paradox and the ideas of Henri Poincaré.
Relativistic theory of gravitation
After studying works of Poincare, Lorentz, Hilbert and Einstein in great detail, Logunov |
https://en.wikipedia.org/wiki/Saprotrophic%20nutrition | Saprotrophic nutrition or lysotrophic nutrition is a process of chemoheterotrophic extracellular digestion involved in the processing of decayed (dead or waste) organic matter. It occurs in saprotrophs, and is most often associated with fungi (for example Mucor) and soil bacteria. Saprotrophic microscopic fungi are sometimes called saprobes. Saprotrophic plants or bacterial flora are called saprophytes (sapro- 'rotten material' + -phyte 'plant'), although it is now believed that all plants previously thought to be saprotrophic are in fact parasites of microscopic fungi or other plants. The process is most often facilitated through the active transport of such materials via endocytosis within the internal mycelium and its constituent hyphae.
Various word roots relating to decayed matter (detritus, sapro-), eating and nutrition (-vore, -phage), and plants or life forms (-phyte, -obe) produce various terms, such as detritivore, detritophage, saprotroph, saprophyte, saprophage, and saprobe; their meanings overlap, although technical distinctions (based on physiologic mechanisms) narrow the senses. For example, usage distinctions can be made based on macroscopic swallowing of detritus (as an earthworm does) versus microscopic lysis of detritus (as a mushroom does).
Process
As matter decomposes within a medium in which a saprotroph is residing, the saprotroph breaks such matter down into its composites.
Proteins are broken down into their amino acid composites through the breaking of peptide bonds by proteases.
Lipids are broken down into fatty acids and glycerol by lipases.
Starch is broken down into pieces of simple disaccharides by amylases.
Cellulose, a major component of plant cell walls and therefore a major constituent of decaying matter, is broken down into glucose by cellulases.
These products are re-absorbed into the hypha through the cell wall by endocytosis and passed on throughout the mycelium complex. This facilitates the passage of such materials throughout the orga |
https://en.wikipedia.org/wiki/Order%20One%20Network%20Protocol | The OrderOne MANET Routing Protocol is an algorithm for computers communicating by digital radio in a mesh network to find each other, and send messages to each other along a reasonably efficient path. It was designed for, and promoted as working with, wireless mesh networks.
OON's designers say it can handle thousands of nodes, where most other protocols handle fewer than a hundred. OON uses hierarchical algorithms to minimize the total amount of transmissions needed for routing. Routing overhead is limited to between 1% and 5% of node-to-node bandwidth in any network and does not grow as the network size grows.
The basic idea is that a network organizes itself into a tree. Nodes meet at the root of the tree to establish an initial route. The route then moves away from the root by cutting corners, as ant-trails do. When there are no more corners to cut, a nearly optimum route exists. This route is continuously maintained.
Each process can be performed with localized minimal communication, and very small router tables. OORP requires about 200K of memory. A simulated network with 500 nodes transmitting at 200 bytes/second organized itself in about 20 seconds.
As of 2004, OORP was patented or had other significant intellectual property restrictions. See the link below.
Assumptions
Each computer, or "node", of the network has a unique name, at least one network link, and some memory capacity to hold a list of neighbors.
Organizing the tree
The network nodes form a hierarchy by having each node select a parent. The parent is a neighbor node that is the next best step to the most other nodes. This method creates a hierarchy around nodes that are more likely to be present, and which have more capacity, and which are closer to the topological center of the network. The memory limitations of a small node are reflected in its small routing table, which automatically prevents it from being a preferred central node.
At the top, one or two nodes are un |
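The parent-selection step described above can be sketched in a few lines. This is an illustrative toy, not the patented OORP algorithm: the scoring here (closeness centrality, i.e. the sum of hop distances to all other nodes) is a hypothetical stand-in for OON's "next best step to the most other nodes" metric, and all node names are invented.

```python
from collections import deque

def dist_sum(graph, start):
    """Sum of hop distances from `start` to every other node, via BFS."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return sum(dist.values())

def choose_parents(graph):
    """Each node picks its most central neighbour as parent; the most
    central node overall becomes the root (parent None)."""
    score = {n: dist_sum(graph, n) for n in graph}  # lower = more central
    root = min(sorted(graph), key=lambda n: score[n])
    parents = {root: None}
    for n in graph:
        if n != root:
            parents[n] = min(sorted(graph[n]), key=lambda nb: score[nb])
    return parents

mesh = {  # a small mesh whose topological centre is C
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D", "E"],
    "D": ["C"],
    "E": ["C"],
}
parents = choose_parents(mesh)
assert parents["C"] is None                       # C becomes the root
assert parents["B"] == parents["D"] == "C"        # central neighbours win
assert parents["A"] == "B"                        # A's only neighbour
```

Note how the tree forms around the best-connected node without any global coordination: each node needs only its neighbour list and the neighbours' scores, which is what keeps router tables small.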
https://en.wikipedia.org/wiki/Frost%20heaving | Frost heaving (or a frost heave) is an upwards swelling of soil during freezing conditions caused by an increasing presence of ice as it grows towards the surface, upwards from the depth in the soil where freezing temperatures have penetrated into the soil (the freezing front or freezing boundary). Ice growth requires a water supply that delivers water to the freezing front via capillary action in certain soils. The weight of overlying soil restrains vertical growth of the ice and can promote the formation of lens-shaped areas of ice within the soil. Yet the force of one or more growing ice lenses is sufficient to lift a layer of soil, as much as or more. The soil through which water passes to feed the formation of ice lenses must be sufficiently porous to allow capillary action, yet not so porous as to break capillary continuity. Such soil is referred to as "frost susceptible". The growth of ice lenses continually consumes the rising water at the freezing front. Differential frost heaving can crack road surfaces—contributing to springtime pothole formation—and damage building foundations. Frost heaves may occur in mechanically refrigerated cold-storage buildings and ice rinks.
Needle ice is essentially frost heaving that occurs at the beginning of the freezing season, before the freezing front has penetrated very far into the soil and there is no soil overburden to lift as a frost heave.
Mechanisms
Historical understanding of frost heaving
Urban Hjärne described frost effects in soil in 1694.
By 1930, Stephen Taber, head of the Department of Geology at the University of South Carolina, had disproved the hypothesis that frost heaving results from molar volume expansion with freezing of water already present in the soil prior to the onset of subzero temperatures, i.e. with little contribution from the migration of water within the soil.
Since the molar volume of water expands by about 9% as it changes phase from water to ice at its bulk freezing point, 9% |
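The ~9% figure quoted above follows directly from the densities of liquid water and ice at 0 °C; a quick check (the density values are standard textbook numbers, stated here as assumptions):

```python
# Molar (equivalently, specific) volume is inversely proportional to
# density, so freezing expands a fixed mass of water by rho_w/rho_i - 1.
rho_water = 999.8   # kg/m^3, liquid water at 0 deg C (textbook value)
rho_ice = 916.7     # kg/m^3, ice Ih at 0 deg C (textbook value)

expansion = rho_water / rho_ice - 1
assert abs(expansion - 0.09) < 0.005   # about 9 % volume increase
```

Taber's point, however, was that this in-place expansion is not the dominant mechanism of frost heaving; the migration of water toward growing ice lenses accounts for far more uplift than the 9% phase-change expansion alone.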
https://en.wikipedia.org/wiki/Neurobiology%20of%20Aging | Neurobiology of Aging is a peer-reviewed monthly scientific journal published by Elsevier. The editor-in-chief is Peter R. Rapp. Neurobiology of Aging publishes research whose primary emphasis is the mechanisms of nervous system changes during aging and in age-related diseases. Approaches are behavioral, biochemical, cellular, molecular, morphological, neurological, neuropathological, pharmacological, and physiological.
Abstracting and indexing
Neurobiology of Aging is abstracted and indexed in
BIOSIS,
Current Contents/Life Sciences,
EMBASE,
MEDLINE,
PsycINFO,
Research Alert,
Science Citation Index,
Scopus.
According to the Journal Citation Reports, Neurobiology of Aging has a 2020 impact factor of 4.673. |
https://en.wikipedia.org/wiki/April%20Showers%20%28song%29 | "April Showers" is a 1921 popular song composed by Louis Silvers with lyrics by B. G. De Sylva.
History
The song was introduced in the 1921 Broadway musical Bombo, where it was performed by Al Jolson. It became a well-known Jolson standard: the first of his several recordings of the song was on Columbia Records in October 1921. It has also been recorded by many other artists.
Spike Jones and Doodles Weaver produced a parody that began with the lyrics: "When April showers, she never closes the curtain..."
The British comedians Morecambe and Wise performed a skit featuring the song, which involved a light sprinkling of water drizzling on straight man Ernie Wise whenever he sang it, but a bucket of water being thrown over Eric Morecambe whenever he did the same.
Film appearances
1926 A Plantation Act sung by Al Jolson
1936 The Singing Kid sung by Al Jolson
1939 Rose of Washington Square sung by Al Jolson
1946 The Jolson Story sung by Al Jolson
1946 Margie sung by Jeanne Crain (dubbed by Louanne Hogan) and chorus
1948 April Showers
1949 Always Leave Them Laughing played at the Canal Street Boys Club and sung by Milton Berle.
1949 Jolson Sings Again sung by Al Jolson
1956 The Eddy Duchin Story
1962 Wet Hare sung by Mel Blanc as Bugs Bunny
Recorded versions
Victor Arden
John Arpin
Chris Barber - included in the album Chris Barber Plays - Vol. 2 (1956)
Les Brown and His Band of Renown (1949)
Carol Burnett
Cab Calloway
Steve Conway
Bing Crosby (1956) (Songs I Wish I Had Sung the First Time Around) & (1977) (Seasons)
Ruth Etting
Arthur Fields (1922)
Eddie Fisher (1954)
Judy Garland - Judy (1956)
Eydie Gorme - for her album Love Is a Season (1958)
Ernie Hare (1922)
Charles Harrison (1922)
Ted Heath
Woody Herman
Joni James
Al Jolson
(1921 Broadway Production)
Commercial recording October 21, 1921
(Performed in 1926's A Plantation Act)
Commercial recording December 20, 1932 with Guy Lombardo and his Orchestra
(1936, in the film The Singing Kid)
Commercial recordi |
https://en.wikipedia.org/wiki/Active%20object | The active object design pattern decouples method execution from method invocation for objects that each reside in their own thread of control. The goal is to introduce concurrency, by using asynchronous method invocation and a scheduler for handling requests.
The pattern consists of six elements:
A proxy, which provides an interface towards clients with publicly accessible methods.
An interface which defines the method request on an active object.
A list of pending requests from clients.
A scheduler, which decides which request to execute next.
The implementation of the active object method.
A callback or variable for the client to receive the result.
Example
Java
An example of active object pattern in Java.
First, consider a standard class that provides two methods setting a double to a certain value. This class does NOT conform to the active object pattern.
class MyClass {
private double val = 0.0;
void doSomething() {
val = 1.0;
}
void doSomethingElse() {
val = 2.0;
}
}
The class is dangerous in a multithreading scenario because both methods can be called simultaneously, so the value of val (which is not atomic—it's updated in multiple steps) could be undefined—a classic race condition. You can, of course, use synchronization to solve this problem, which in this trivial case is easy. But once the class becomes realistically complex, synchronization can become very difficult.
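As a point of comparison, the lock-based fix mentioned above can be sketched as follows (the class name and the added accessor are ours, for illustration; this is the conventional alternative, not the active object pattern):

```java
// Conventional lock-based alternative: declaring every method synchronized
// serializes access to val on the object's intrinsic lock, so the two
// writes can no longer interleave.
class MySynchronizedClass {
    private double val = 0.0;

    synchronized void doSomething() {
        val = 1.0;
    }

    synchronized void doSomethingElse() {
        val = 2.0;
    }

    synchronized double getVal() {
        return val;
    }
}
```

This works here, but as noted above, lock-based designs tend to get harder to reason about as the class grows, which is what motivates the active object rewrite below.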
To rewrite this class as an active object, you could do the following:
class MyActiveObject {
private double val = 0.0;
private BlockingQueue<Runnable> dispatchQueue = new LinkedBlockingQueue<Runnable>();
public MyActiveObject() {
new Thread (new Runnable() {
@Override
public void run() {
try {
while (true) {
dispatchQueue.take().run();
}
|
https://en.wikipedia.org/wiki/Composr%20CMS | Composr CMS (or Composr) is a web application for creating websites. It is a combination of a Web content management system and Online community (Social Networking) software. Composr is licensed as free software and primarily written in the PHP programming language.
Composr is available on various web application distributions platforms, including Installatron, Softaculous, Web Platform Installer and Bitnami.
History
Composr was launched in 2004 as ocPortal.
ocPortal was featured on the CMS Report (a CMS editorial website) "Top 30 Web Applications" list.
ocPortal was developed up to version 9; it was renamed Composr CMS in 2016, alongside the new version 10, as part of a product and branding overhaul.
Features
The main features are:
Content Management of website structure and pages
Content Management of custom content types ("Catalogues")
Galleries (Photos and Videos)
News and Blogging
Discussion Forums
Chat Rooms
Advertising management ("Banners")
Calendars
File management ("Downloads")
Wikis ("Wiki+")
Quizzes
Newsletters
Community Points
Composr uses a number of built-in languages to build up web content and structure, mainly:
Comcode (for creating high-level web content, similar to BBCode)
Tempcode (a templating language)
Filtercode (for defining content filtering)
Selectcode (for defining content selection)
Composr is developed distinctly compared to most other Open Source CMSs, with the main distinctions being:
Composr is module-orientated, rather than node-orientated
Common software components are designed for maximum integration, rather than maximum choice
Composr is sponsored by a commercial software company, rather than being volunteer led
Some unique (or rare) features of Composr are:
Automatic banning of hackers (if hacking attempts are detected)
Core integration with spammer block lists
Integration with third-party forum software for user accounts and forums (although this is no longer a focus)
Automatic color scheme generat |
https://en.wikipedia.org/wiki/Heike%20Kamerlingh%20Onnes | Heike Kamerlingh Onnes (21 September 1853 – 21 February 1926) was a Dutch physicist and Nobel laureate. He exploited the Hampson–Linde cycle to investigate how materials behave when cooled to nearly absolute zero and later to liquefy helium for the first time, in 1908. He also discovered superconductivity in 1911.
Biography
Early years
Kamerlingh Onnes was born in Groningen, Netherlands. His father, Harm Kamerlingh Onnes, was a brickworks owner. His mother was Anna Gerdina Coers of Arnhem.
In 1870, Kamerlingh Onnes attended the University of Groningen. He studied under Robert Bunsen and Gustav Kirchhoff at the University of Heidelberg from 1871 to 1873. Again at Groningen, he obtained his master's degree in 1878 and a doctorate in 1879. His thesis was Nieuwe bewijzen voor de aswenteling der aarde (tr. New proofs of the rotation of the earth). His doctoral thesis was on Foucault's pendulum. From 1878 to 1882 he was assistant to Johannes Bosscha, the director of the Delft Polytechnic, for whom he substituted as lecturer in 1881 and 1882.
Family
He was married to Maria Adriana Wilhelmina Elisabeth Bijleveld (m. 1887) and had one child, named Albert. His brother Menso Kamerlingh Onnes (1860–1925) was a painter (and father of another painter, Harm Kamerlingh Onnes), while his sister Jenny married another painter, Floris Verster (1861–1927).
University of Leiden
From 1882 to 1923 Kamerlingh Onnes served as professor of experimental physics at the University of Leiden. In 1904 he founded a very large cryogenics laboratory and invited other researchers to the location, which made him highly regarded in the scientific community. The laboratory is known now as Kamerlingh Onnes Laboratory. Only one year after his appointment as professor he became member of the Royal Netherlands Academy of Arts and Sciences.
Liquefaction of helium
On 10 July 1908, he was the first to liquefy helium, using several precooling stages and the Hampson–Linde cycle based on the Joule–Thomson e |
https://en.wikipedia.org/wiki/Wielandt%20theorem | In mathematics, the Wielandt theorem characterizes the gamma function, defined for all complex numbers z with Re(z) > 0 by
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt,
as the only function f defined on the half-plane Re(z) > 0 such that:
f is holomorphic on the half-plane;
f(1) = 1;
f(z + 1) = z f(z) for all z in the half-plane; and
f is bounded on the strip 1 ≤ Re(z) < 2.
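As a quick check (a standard estimate, not text from the article), the Euler integral indeed satisfies the boundedness condition:

```latex
% For 1 \le \operatorname{Re} z < 2:
\lvert \Gamma(z) \rvert
   \le \int_0^\infty \lvert t^{\,z-1} \rvert e^{-t}\,dt
   = \int_0^\infty t^{\operatorname{Re} z - 1} e^{-t}\,dt
   = \Gamma(\operatorname{Re} z) \le 1,
% since on the real interval [1, 2] the gamma function attains its
% maximum value, 1, at the endpoints: \Gamma(1) = \Gamma(2) = 1.
```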
This theorem is named after the mathematician Helmut Wielandt.
See also
Bohr–Mollerup theorem
Hadamard's gamma function |
https://en.wikipedia.org/wiki/Disease%20theory%20of%20alcoholism | The modern disease theory of alcoholism states that problem drinking is sometimes caused by a disease of the brain, characterized by altered brain structure and function.
The largest association of physicians – the American Medical Association (AMA) – declared that alcoholism was an illness in 1956. In 1991, the AMA further endorsed the dual classification of alcoholism by the International Classification of Diseases under both psychiatric and medical sections.
Theory
Alcoholism is a chronic problem. However, if managed properly, damage to the brain can be stopped and to some extent reversed. In addition to problem drinking, the disease is characterized by symptoms including an impaired control over alcohol, compulsive thoughts about alcohol, and distorted thinking. Alcoholism can also lead indirectly, through excess consumption, to physical dependence on alcohol, and diseases such as cirrhosis of the liver.
The risk of developing alcoholism depends on many factors, such as environment. Those with a family history of alcoholism are more likely to develop it themselves (Enoch & Goldman, 2001); however, many individuals have developed alcoholism without a family history of the disease. Since the consumption of alcohol is necessary to develop alcoholism, the availability of and attitudes towards alcohol in an individual's environment affect their likelihood of developing the disease. Current evidence indicates that in both men and women, alcoholism is 50–60% genetically determined, leaving 40–50% for environmental influences.
In a review in 2001, McLellan et al. compared the diagnoses, heritability, etiology (genetic and environmental factors), pathophysiology, and response to treatments (adherence and relapse) of drug dependence vs type 2 diabetes mellitus, hypertension, and asthma. They found that genetic heritability, personal choice, and environmental factors are comparably involved in the etiology and course of all of these disorders, providing evidenc |
https://en.wikipedia.org/wiki/Cochlear%20cupula | The cochlear cupula is a structure in the cochlea. It is the apex of the cochlea.
The bony canal of the cochlea takes two and three-quarter turns around the modiolus. The modiolus is about 30 mm in length, and diminishes gradually in diameter from the base to the summit, where it terminates in the cupula.
Auditory system |
https://en.wikipedia.org/wiki/Residue%20field | In mathematics, the residue field is a basic construction in commutative algebra. If R is a commutative ring and m is a maximal ideal, then the residue field is the quotient ring k = R/m, which is a field. Frequently, R is a local ring and m is then its unique maximal ideal.
This construction is applied in algebraic geometry, where to every point x of a scheme X one associates its residue field k(x). One can say a little loosely that the residue field of a point of an abstract algebraic variety is the 'natural domain' for the coordinates of the point.
Definition
Suppose that R is a commutative local ring, with maximal ideal m. Then the residue field is the quotient ring R/m.
Now suppose that X is a scheme and x is a point of X. By the definition of scheme, we may find an affine neighbourhood U = Spec(A), with A some commutative ring. Considered in the neighbourhood U, the point x corresponds to a prime ideal p ⊆ A (see Zariski topology). The local ring of X in x is by definition the localization R = Ap, with the maximal ideal m = p·Ap. Applying the construction above, we obtain the residue field of the point x :
k(x) := Ap / p·Ap.
One can prove that this definition does not depend on the choice of the affine neighbourhood U.
A point is called K-rational for a certain field K, if k(x) = K.
Example
Consider the affine line A1(k) = Spec(k[t]) over a field k. If k is algebraically closed, there are exactly two types of prime ideals, namely
(t − a), a ∈ k
(0), the zero-ideal.
The residue fields are k[t]/(t − a) ≅ k for the closed points (t − a), and k(t), the function field over k in one variable, for the generic point (0).
If k is not algebraically closed, then more types arise, for example if k = R, then the prime ideal (t2 + 1) has residue field isomorphic to C.
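Written out as standard commutative algebra (a worked note, not text from the article), these residue-field computations are:

```latex
% Closed point (t - a): evaluate at a.
k(x) = k[t]_{(t-a)} \,/\, (t-a)k[t]_{(t-a)} \;\cong\; k[t]/(t-a) \;\cong\; k,
\qquad t \mapsto a.
% Generic point (0): the localization at (0) is already a field.
k(\eta) = \operatorname{Frac} k[t] = k(t).
% Over k = \mathbb{R}, the closed point (t^2 + 1):
k(x) \cong \mathbb{R}[t]/(t^2 + 1) \cong \mathbb{C}, \qquad t \mapsto i.
```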
Properties
For a scheme locally of finite type over a field k, a point x is closed if and only if k(x) is a finite extension of the base field k. This is a geometric formulation of Hilbert's Nullstellensatz. In the above example, the points of the first kind are closed, havin |
https://en.wikipedia.org/wiki/Chemiresistor | A chemiresistor is a material that changes its electrical resistance in response to changes in the nearby chemical environment. Chemiresistors are a class of chemical sensors that rely on the direct chemical interaction between the sensing material and the analyte. The sensing material and the analyte can interact by covalent bonding, hydrogen bonding, or molecular recognition. Several different materials have chemiresistor properties: metal-oxide semiconductors, some conductive polymers, and nanomaterials like graphene, carbon nanotubes and nanoparticles. Typically these materials are used as partially selective sensors in devices like electronic tongues or electronic noses.
A basic chemiresistor consists of a sensing material that bridges the gap between two electrodes or coats a set of interdigitated electrodes. The resistance between the electrodes can be easily measured. The sensing material has an inherent resistance that can be modulated by the presence or absence of the analyte. During exposure, analytes interact with the sensing material. These interactions cause changes in the resistance reading. In some chemiresistors the resistance changes simply indicate the presence of analyte. In others, the resistance changes are proportional to the amount of analyte present; this allows for the amount of analyte present to be measured.
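For the proportional case just described, a readout might look like the following sketch, assuming a linear calibration; the class name, baseline resistance, and sensitivity constant are all invented for illustration:

```java
// Hypothetical readout for a linear-response chemiresistor: the analyte
// concentration is inferred from the relative resistance change (dR / R0)
// divided by an empirically calibrated sensitivity.
class ChemiresistorReadout {
    static final double BASELINE_OHMS = 10_000.0; // R0 measured in clean air (assumed)
    static final double SENSITIVITY = 0.002;      // relative change per ppm (assumed)

    // Estimated analyte concentration in ppm from a raw resistance reading.
    static double concentrationPpm(double measuredOhms) {
        double relativeChange = (measuredOhms - BASELINE_OHMS) / BASELINE_OHMS;
        return relativeChange / SENSITIVITY;
    }
}
```

In a real device both constants drift with temperature and humidity, so they are periodically recalibrated rather than fixed at compile time.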
History
Reports of semiconductor materials exhibiting electrical conductivities that are strongly affected by ambient gases and vapours date back to 1965. However, it was not until 1985 that Wohltjen and Snow coined the term chemiresistor. The chemiresistive material they investigated was copper phthalocyanine, and they demonstrated that its resistivity decreased in the presence of ammonia vapour at room temperature.
In recent years chemiresistor technology has been used to develop promising sensors for many applications, including conductive polymer sensors for secondhand smoke, carbon nanotube sensors for gaseous ammo |
https://en.wikipedia.org/wiki/Remote%20radio%20head | A remote radio head (RRH), also called a remote radio unit (RRU) in wireless networks, is a remote radio transceiver that connects to an operator radio control panel via electrical or wireless interface. When used to describe aircraft radio cockpit radio systems, the control panel is often called the radio head.
In wireless system technologies such as GSM, CDMA, UMTS, LTE, 5G NR the radio equipment is remote to the BTS/NodeB/eNodeB/gNB. The equipment is used to extend the coverage of a BTS/NodeB/eNodeB/gNB in challenging environments such as rural areas or tunnels. They are generally connected to the BTS/NodeB/eNodeB/gNB via a fiber optic cable using Common Public Radio Interface protocols.
RRHs have become one of the most important subsystems of today's new distributed base stations. The RRH contains the base station's RF circuitry plus analog-to-digital/digital-to-analog converters and up/down converters, and connects to, and thus drives, the base station's antenna. RRHs also have operation and management processing capabilities and a standardized optical interface to connect to the rest of the base station. This will be increasingly true as LTE and WiMAX are deployed. Remote radio heads make MIMO operation easier; they increase a base station's efficiency and facilitate easier physical location for gap coverage problems. RRHs will use the latest RF component technology including gallium nitride (GaN) RF power devices and envelope tracking technology within the RRH RF power amplifier (RFPA).
RRH protection in fiber to the antenna systems
Fourth generation (4G) and beyond infrastructure deployments will include the implementation of Fiber to the Antenna (FTTA) architecture. FTTA architecture has enabled lower power requirements, distributed antenna sites, and a smaller base station footprint than conventional tower sites. The use of FTTA will promote the separation of power and signal components from the base station and their relocation to the top of the tower
https://en.wikipedia.org/wiki/History%20of%20research%20ships | The research ship had origins in the early voyages of exploration. By the time of James Cook's Endeavour, the essentials of what today we would call a research ship are clearly apparent. In 1766, the Royal Society hired Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun. The Endeavour was a sturdy vessel, well designed and equipped for the ordeals she would face, and fitted out with facilities for her research personnel, including Joseph Banks. And, as is common with contemporary research vessels, Endeavour carried out more than one kind of research, including comprehensive hydrographic survey work.
Some other notable early research vessels were HMS Beagle, RV Calypso, HMS Challenger, and the Endurance and Terra Nova.
The race to the poles
19th century
At the end of the 19th century there was intense international interest in exploring the North and South Poles. The search operations for the lost Franklin expedition were scarcely forgotten when Russia, Great Britain, Germany and Sweden set new scientific tasks for the Arctic Ocean. In 1868, the Swedish ship Sofia carried out temperature measurements and oceanographic observations in the sea area around Svalbard. During this year the Greenland, built in Norway, operated in the same area under the German command of Carl Koldewey. From 1868 to 1869, the ship owner A. Rosenthal gave scientists the opportunity to come aboard on his whaling trips, and by 1869 the ship Germania, which was escorted by the Hansa and led the Second German North Polar Expedition, had been built. The Germania returned safely from the expedition and was used later for further research. The Hansa, in contrast, was crushed by the ice and sank. In 1874, the Austro-Hungarian Tegetthoff as well as the American schooner Polaris under the command of Captain Hall met the same fate.
The Royal Navy ships Alert and Discovery of the British Arctic Expedition of 1875-76 were more successful. In 1875 they left Portsmouth in order to |
https://en.wikipedia.org/wiki/Symbols%20of%20the%20Northwest%20Territories | The Northwest Territories, one of Canada's territories, has established several territorial symbols.
Symbols |
https://en.wikipedia.org/wiki/Fibre%20Channel%20Protocol | Fibre Channel Protocol (FCP) is the SCSI interface protocol utilising an underlying Fibre Channel connection. The Fibre Channel standards define a high-speed data transfer mechanism that can be used to connect workstations, mainframes, supercomputers, storage devices and displays. FCP addresses the need for very fast transfers of large volumes of information and could relieve system manufacturers from the burden of supporting a variety of channels and networks, as it provides one standard for networking, storage and data transfer.
Some Fibre Channel characteristics are:
Performance from 266 megabits/second to 16 gigabits/second
Support for both optical and copper media, with distances up to 10 km
Small connectors (SFP+ is most common)
High-bandwidth utilisation with distance insensitivity
Support for multiple cost/performance levels, from small systems to supercomputers
Ability to carry multiple existing interface command sets, including Internet Protocol (IP), SCSI, IPI, HIPPI-FP, and audio/video.
Fibre Channel consists of the following layers:
FC-0 -- The interface to the physical media
FC-1 -- The encoding and decoding of data and out-of-band physical link control information for transmission over the physical media
FC-2 -- The transfer of frames, sequences and exchanges comprising protocol information units.
FC-3 -- Common services required for advanced features such as striping, hunt group and multicast.
FC-4 -- Application interfaces that can execute over Fibre Channel such as the Fibre Channel Protocol for SCSI (FCP).
Unlike a layered network architecture, a Fibre Channel network is largely specified by functional elements and the interfaces between them. These consist, in part, of the following:
N_PORTs—The end points for traffic.
FC Devices—The devices to which the N_PORTs provide access.
Fabric Ports—The interfaces within a network that provide attachment for an N_PORT.
The network infrastructure for carrying frame traffic between N_PORTs.
Within a |
https://en.wikipedia.org/wiki/Wd~50 | wd~50 was a molecular gastronomy New American/international restaurant in Manhattan, New York City. It was opened in 2003 by chef Wylie Dufresne. wd~50 closed November 30, 2014.
Awards and ratings
It was listed among the S. Pellegrino World's 50 Best Restaurants for 2010. In 2006 the restaurant received a Michelin star in the New York City guide, which it retained until it closed.
In 2013, Zagat gave it a food rating of 25 out of 30.
Closure
On June 10, 2014, The New York Times reported that wd~50 would be closing due to a real estate developer planning on constructing a new building at the site; the restaurant closed on November 30, 2014. The restaurant was located at 50 Clinton Street (between Rivington Street and Stanton Street), on the Lower East Side.
See also
List of New American restaurants
List of restaurants in New York City |
https://en.wikipedia.org/wiki/Test%20management | Test management most commonly refers to the activity of managing a testing process. A test management tool is software used to manage tests (automated or manual) that have been previously specified by a test procedure. It is often associated with automation software. Test management tools often include requirement and/or specification management modules that allow automatic generation of the requirement test matrix (RTM), which is one of the main metrics to indicate functional coverage of a system under test (SUT).
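The requirement test matrix mentioned above can be sketched as a simple mapping from requirements to their covering tests; functional coverage is then the fraction of requirements with at least one test. The class, method, and identifier names below are ours, not from any particular tool:

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of a requirement test matrix (RTM): each requirement maps
// to the test cases that cover it, and functional coverage is the fraction
// of requirements with at least one covering test.
class RequirementTestMatrix {
    private final Map<String, List<String>> requirementToTests;

    RequirementTestMatrix(Map<String, List<String>> requirementToTests) {
        this.requirementToTests = requirementToTests;
    }

    // Fraction of requirements covered by at least one test case.
    double functionalCoverage() {
        long covered = requirementToTests.values().stream()
                .filter(tests -> !tests.isEmpty())
                .count();
        return (double) covered / requirementToTests.size();
    }
}
```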
Creating tests definitions in a database
A test definition includes the test plan and associations with product requirements and specifications. Optionally, relationships can be set between tests so that precedence can be established.
E.g. if test A is parent of test B and if test A is failing, then it may be useless to perform test B.
Tests should also be associated with priorities.
Every change on a test must be versioned so that the QA team has a comprehensive view of the history of the test.
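The parent/child precedence rule described above might be modelled like this (a hedged sketch; the data model and names are invented for illustration):

```java
// Sketch of the precedence rule: a test is skipped when any ancestor test
// has already failed, since running it would likely be useless.
class ManagedTest {
    enum Result { PASS, FAIL, SKIPPED }

    final String name;
    final ManagedTest parent; // null for a root test
    Result result;

    ManagedTest(String name, ManagedTest parent) {
        this.name = name;
        this.parent = parent;
    }

    // Runs the test only if no ancestor failed; otherwise marks it skipped.
    Result run(boolean passes) {
        for (ManagedTest t = parent; t != null; t = t.parent) {
            if (t.result == Result.FAIL) {
                return result = Result.SKIPPED;
            }
        }
        return result = passes ? Result.PASS : Result.FAIL;
    }
}
```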
Preparing test campaigns
This includes building some bundles of test cases and executing them (or scheduling their execution).
Execution can be either manual or automatic.
Manual execution
The user will have to perform all the test steps manually and inform the system of the result.
Some test management tools include a framework to interface the user with the test plan to facilitate this task. There are several ways to run tests. The simplest way to run a test is to run a test case. The test case can be associated with other test artifacts such as test plans, test scripts, test environments, test case execution records, and test suites.
Automatic execution
There are numerous ways of implementing automated tests.
Automatic execution requires the test management tool to be compatible with the tests themselves.
To do so, test management tools may propose proprietary automation frameworks or APIs to interface with third-party or proprietary automated tests.
Gener |
https://en.wikipedia.org/wiki/Location | In geography, location or place are used to denote a region (point, line, or area) on Earth's surface. The term location generally implies a higher degree of certainty than place, the latter often indicating an entity with an ambiguous boundary, relying more on human or social attributes of place identity and sense of place than on geometry. A populated place is called a settlement.
Types
Locality
A locality, settlement, or populated place is likely to have a well-defined name but a boundary that is not well defined and that varies by context. London, for instance, has a legal boundary, but this is unlikely to completely match with general usage. An area within a town, such as Covent Garden in London, also almost always has some ambiguity as to its extent. In geography, location is considered to be more precise than "place".
Relative location
A relative location, or situation, is described as a displacement from another site. An example is "3 miles northwest of Seattle".
Absolute location
An absolute location can be designated using a specific pairing of latitude and longitude in a Cartesian coordinate grid (for example, a spherical coordinate system or an ellipsoid-based system such as the World Geodetic System) or similar methods. For example, the position of New York City in the United States can be expressed using the coordinate system as the location 40.7128°N (latitude), 74.0060°W (longitude).
Absolute locations are also relative locations, since even absolute locations are expressed relative to something else. For example, longitude is the number of degrees east or west of the Prime Meridian, a line arbitrarily chosen to pass through Greenwich, England. Similarly, latitude is the number of degrees north or south of the equator. Because latitude and longitude are expressed relative to these lines, a position expressed in latitude and longitude is also a relative location.
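Assuming a flat-earth approximation that is adequate over a few miles, the relative location example above ("3 miles northwest of Seattle") can be converted into an absolute latitude/longitude. The class name and constants below are illustrative:

```java
// Converts a relative location (distances north and west of a known point)
// into an absolute latitude/longitude using a local flat-earth
// approximation: one degree of latitude is roughly 69 miles everywhere,
// while a degree of longitude shrinks with the cosine of the latitude.
class RelativeLocation {
    static final double MILES_PER_DEGREE_LAT = 69.0; // rough global average

    // Returns {latitude, longitude} after moving milesNorth north and
    // milesWest west of the starting point.
    static double[] offset(double lat, double lon, double milesNorth, double milesWest) {
        double milesPerDegreeLon = MILES_PER_DEGREE_LAT * Math.cos(Math.toRadians(lat));
        return new double[] {
            lat + milesNorth / MILES_PER_DEGREE_LAT,
            lon - milesWest / milesPerDegreeLon
        };
    }
}
```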
See also
Locale (geography)
Location, Location
Location Location Location Austra |
https://en.wikipedia.org/wiki/Categorical%20quotient | In algebraic geometry, given a category C, a categorical quotient of an object X with action of a group G is a morphism π : X → Y that
(i) is invariant; i.e., π ∘ σ = π ∘ p2, where σ : G × X → X is the given group action and p2 : G × X → X is the projection; and
(ii) satisfies the universal property: any morphism X → Z satisfying (i) uniquely factors through π.
One of the main motivations for the development of geometric invariant theory was the construction of a categorical quotient for varieties or schemes.
Note that π need not be surjective. Also, if it exists, a categorical quotient is unique up to a canonical isomorphism. In practice, one takes C to be the category of varieties or the category of schemes over a fixed scheme. A categorical quotient π is a universal categorical quotient if it is stable under base change: for any morphism Y′ → Y, the induced morphism π′ : X ×Y Y′ → Y′ is a categorical quotient.
A basic result is that geometric quotients and GIT quotients are categorical quotients.
https://en.wikipedia.org/wiki/S%C3%B8rensen%20formol%20titration | The Sørensen formol titration (SFT), invented by S. P. L. Sørensen in 1907, is a titration of an amino acid with potassium hydroxide in the presence of formaldehyde. It is used in the determination of protein content in samples.
If instead of an amino acid an ammonium salt is used, the reaction product with formaldehyde is hexamethylenetetramine:
4 NH4Cl + 6 CH2O → (CH2)6N4 + 4 HCl + 6 H2O
The liberated hydrochloric acid is then titrated with the base and the amount of ammonium salt used can be determined.
With an amino acid the formaldehyde reacts with the amino group to form a methylene amino (R-N=CH2) group. The remaining acidic carboxylic acid group can then again be titrated with base.
In winemaking
Formol titration is one of the methods used in winemaking to measure yeast assimilable nitrogen needed by wine yeast in order to successfully complete fermentation.
Accuracy in formol titration
There have been some inaccuracies in the SFT, caused by differences in the basicity of the nitrogen in different amino acids, which were explained by S. L. Jodidi. For instance, proline (an amino acid), histidine, and lysine yield values that are too low compared with theory. Unlike alpha, monobasic (containing one amino group per molecule) amino acids, these amino (or imino) acids' nitrogens have inconstant basicity, which results in only partial reaction with formaldehyde.
In the case of tyrosine, the actual results are too high due to the negative hydroxyl group (-OH), which acts as a base. This explanation is supported by the fact that phenylalanine can be accurately titrated.
https://en.wikipedia.org/wiki/Pelagic%20red%20clay | Pelagic red clay, also known as simply red clay, brown clay or pelagic clay, is a type of pelagic sediment.
Pelagic clay accumulates in the deepest and most remote areas of the ocean. It covers 38% of the ocean floor and accumulates more slowly than any other sediment type, at only 0.1–0.5 cm/1000 yr. Containing less than 30% biogenic material, it consists of sediment that remains after the dissolution of both calcareous and siliceous biogenic particles while they settled through the water column. These sediments consist of eolian quartz, clay minerals, volcanic ash, subordinate residue of siliceous microfossils, and authigenic minerals such as zeolites, limonite and manganese oxides. The bulk of red clay consists of eolian dust. Accessory constituents found in red clay include meteorite dust, fish bones and teeth, whale ear bones, and manganese micro-nodules.
These pelagic sediments are typically bright red to chocolate brown in color. The color results from coatings of iron oxide and manganese oxide on the sediment particles. In the absence of organic carbon, iron and manganese remain in their oxidized states and these clays remain brown after burial. When more deeply buried, brown clay may change into red clay due to the conversion of iron hydroxides to hematite.
These sediments accumulate on the ocean floor within areas characterized by little planktonic production. The clays which comprise them are transported into the deep ocean in suspension, either in the air over the oceans or in surface waters. Both wind and ocean currents transport these sediments in suspension thousands of kilometers from their terrestrial source. As they are transported, the finer clays may stay in suspension for a hundred years or more within the water column before they settle to the ocean bottom. The settling of this clay-size sediment occurs primarily by the formation of clay aggregates by flocculation and by their incorporation into fecal pellets by pelagic organisms. |
https://en.wikipedia.org/wiki/Spacetime%20triangle%20diagram%20technique | In physics and mathematics, the spacetime triangle diagram (STTD) technique,
also known as the Smirnov method of incomplete separation of variables, is the direct space-time domain method for electromagnetic and scalar wave motion.
Basic stages
(Electromagnetics) The system of Maxwell's equations is reduced to a second-order PDE for the field components, or potentials, or their derivatives.
The spatial variables are separated using convenient series expansions and/or integral transforms, except for one that remains coupled with the time variable, resulting in a PDE of hyperbolic type.
The resulting hyperbolic PDE and the simultaneously transformed initial conditions compose a problem that is solved using the Riemann–Volterra integral formula. This yields the generic solution expressed via a double integral over a triangular domain in the space of the remaining (bounded) coordinate and time. This domain is then replaced by a more complicated but smaller one, in which the integrand is essentially nonzero, found using a strictly formalized procedure involving specific spacetime triangle diagrams (see, e.g., Refs.).
In the majority of cases the obtained solutions, multiplied by known functions of the previously separated variables, yield expressions with a clear physical meaning (nonsteady-state modes). In many cases, however, more explicit solutions can be found by summing up the expansions or performing the inverse integral transform.
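For concreteness, here is an illustration of the Riemann–Volterra step for the simplest case (my own example, not taken from the text): the 1D Klein–Gordon-type equation that typically results after the transverse variables have been separated.

```latex
% Illustrative example (not from the source): for
%   \partial_t^2 u - \partial_z^2 u + k^2 u = f(z,t),
%   u|_{t=0} = \partial_t u|_{t=0} = 0,
% the Riemann function is R = J_0\bigl(k\sqrt{(t-t')^2-(z-z')^2}\bigr),
% and the Riemann--Volterra formula gives the solution as a double
% integral over the characteristic triangle:
\[
u(z,t) = \tfrac{1}{2} \iint_{\triangle}
J_0\!\Bigl(k\sqrt{(t-t')^{2}-(z-z')^{2}}\Bigr)\, f(z',t')\, dz'\, dt',
\]
\[
\triangle = \bigl\{(z',t') : 0 \le t' \le t,\; |z-z'| \le t-t'\bigr\}.
\]
% The triangle is cut from the half-plane t' >= 0 by the backward
% characteristics through the observation point (z, t); shrinking this
% domain to where the integrand is nonzero is the diagram step.
```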
STTD versus Green's function technique
The STTD technique belongs to the second among the two principal ansätze for theoretical treatment of waves — the frequency domain and the direct spacetime domain.
The most well-established method for the inhomogeneous (source-related) descriptive equations of wave motion is one based on the Green's function technique. For the circumstances described in Section 6.4 and Chapter 14 of Jackson's Classical Electrodynamics, it can be reduced to calculation of the wave field via retarded potentials (in particular, |
https://en.wikipedia.org/wiki/Teltron%20tube | A teltron tube (named for Teltron Inc., which is now owned by 3B Scientific Ltd.) is a type of cathode ray tube used to demonstrate the properties of electrons. There were several different types made by Teltron, including a diode, a triode, a Maltese cross tube, a simple deflection tube with a fluorescent screen, and one which could be used to measure the charge-to-mass ratio of an electron. The latter two contained an electron gun with deflecting plates. The beams can be bent by applying voltages to various electrodes in the tube or by holding a magnet close by. The electron beams are visible as fine bluish lines. This is accomplished by filling the tube with low-pressure helium (He) or hydrogen (H2) gas. A few of the electrons in the beam collide with the gas atoms, causing them to fluoresce and emit light.
They are usually used to teach electromagnetic effects because they show how an electron beam is affected by electric fields and by magnetic fields like the Lorentz force.
Motions in fields
Charged particles in a uniform electric field follow a parabolic trajectory, since the electric-field term of the Lorentz force acting on the particle is the product of the particle's charge and the electric field, and is oriented along the direction of the field. In a uniform magnetic field, however, charged particles follow a circular trajectory because of the cross product in the magnetic-field term of the Lorentz force: the magnetic force acts on the particle in a direction perpendicular to its direction of motion. (See Lorentz force for more details.)
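The two trajectory shapes can be verified numerically. The following sketch (my own toy integrator, with made-up field values) steps an electron through the Lorentz force F = q(E + v × B) and recovers the parabola in a pure E field and the circle of radius r = mv/(|q|B) in a pure B field:

```python
import numpy as np

# Toy Euler integration of the Lorentz force for an electron.
q, m = -1.602e-19, 9.109e-31  # electron charge (C) and mass (kg)

def trajectory(E, B, v0, steps=20000, dt=1e-12):
    r = np.zeros(3)
    v = np.array(v0, dtype=float)
    path = []
    for _ in range(steps):
        a = (q / m) * (E + np.cross(v, B))  # Lorentz acceleration
        v = v + a * dt
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)

# Uniform E field only: parabolic deflection, like a thrown projectile.
parabola = trajectory(E=np.array([0.0, 1e3, 0.0]), B=np.zeros(3),
                      v0=[1e6, 0.0, 0.0])

# Uniform B field only: circular orbit with radius r = m*v/(|q|*B).
circle = trajectory(E=np.zeros(3), B=np.array([0.0, 0.0, 1e-3]),
                    v0=[1e6, 0.0, 0.0])
r_expected = m * 1e6 / (abs(q) * 1e-3)  # ~5.7 mm for these values
```

The maximum transverse excursion of the circular path equals the orbit diameter, 2r, which is how a charge-to-mass ratio measurement works in reverse: measure r, then solve e/m = v/(rB).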
Apparatus
The 'teltron' apparatus consists of a Teltron-type electron deflection tube, a Teltron stand, and an EHT power supply (, variable).
Experimental setup
An evacuated glass bulb is filled with a small amount of hydrogen gas (H2), so that the tube contains a low-pressure hydrogen atmosphere of about . The pressure is such that the electrons are decelerated by col
https://en.wikipedia.org/wiki/Boulengeromyrus | Knoepffler's elephantfish (Boulengeromyrus knoepffleri) is a species of elephantfish in the family Mormyridae, and the only member of its genus. It occurs only in the Ivindo River and Ntem River basins of Gabon and Cameroon. It reaches a maximum length of about . |
https://en.wikipedia.org/wiki/Horus%20on%20the%20Crocodiles | Horus on the Crocodiles is a motif found on ancient Egyptian healing amulets from the Third Intermediate Period until the end of the Ptolemaic dynasty, as well as on larger cippi and stelae. Both the portable amulets and the larger statues are sometimes referred to simply as Horus stelae.
The Horus amulet or stele usually takes the form of a stone slab depicting the god Horus in the form of a child (Harpocrates) standing on two crocodiles and holding other dangerous animals such as snakes and scorpions. In older specimens, the head of the protective god Bes is depicted above the child's figure, protruding from the body of the cippus, which later became part of the frame. The stelae contain Egyptian hieroglyphs with mythological and magical texts recited in the treatment of diseases and for protection against stings or bites. This portrayal is thought to follow the myth of Horus triumphing over dangerous animals in the marshes of Khemmis (Akhmim).
Well-known specimens in this genre include the so-called Metternich stela, the Banobal stele, the Egyptian Museum's Djedhor healing statue, and the Louvre's Priest of Bastet statue.
Gallery
Bibliography
General
Sternberg-el Hotabi, C., 1994. "Der Untergang der Hieroglyphenschrift," Chronique d'Égypte 69: 218-248.
Andrews, Carol, 1994. Amulets of Ancient Egypt. London: British Museum Press.
Gasse, Annie, 2004. Les stèles d’Horus sur les crocodiles. Paris: Éditions de la Réunion des musées nationaux.
Quack, Joachim, 2002. “Review of Sternberg-el Hotabi 1999,” Orientalistische Literaturzeitung 97:6, 713-39.
Ritner, Robert K., 1989. “Horus on the Crocodiles: a Juncture of Religion and Magic in Late Dynastic Egypt.” In Religion and Philosophy in Ancient Egypt, ed. William Kelly Simpson. New Haven: Yale University Press. 103-16.
Individual stelae
Berlev, O. and Hodjash, S., 1982. Egyptian Stelae in the Pushkin Museum of Fine Arts. Moscow.
Berlandini 2002. J. Berlandini. Un monument magiqu |
https://en.wikipedia.org/wiki/Hungarian%20forint | The forint (sign Ft; code HUF) is the currency of Hungary. It was formerly divided into 100 fillér, but fillér coins are no longer in circulation. The introduction of the forint on 1 August 1946 was a crucial step in the post-World War II stabilisation of the Hungarian economy, and the currency remained relatively stable until the 1980s. Transition to a market economy in the early 1990s adversely affected the value of the forint; inflation peaked at 35% in 1991. Between 2001 and 2022, inflation was in single digits, and the forint has been declared fully convertible. In May 2022, inflation reached 10.7% amid the war in Ukraine and economic uncertainty. As a member of the European Union, the long-term aim of the Hungarian government may be to replace the forint with the euro, although under the current government there is no target date for adopting the euro.
History
The forint's name comes from the city of Florence, where gold coins called fiorino d'oro were minted from 1252. In Hungary, the florentinus (later forint), also a gold-based currency, was used from 1325 under Charles Robert, with several other countries following Hungary's example.
Between 1868 and 1892, the forint was the name used in Hungarian for the currency of the Austro-Hungarian Empire, known in German as the Gulden. It was subdivided into 100 krajczár (krajcár in modern Hungarian orthography; cf. German Kreuzer).
The forint was reintroduced on 1 August 1946, after the pengő was rendered worthless by massive hyperinflation in 1945–46, the highest ever recorded. This was brought about by a mixture of the high demand for reparations from the USSR, Soviet plundering of Hungarian industries, and the holding of Hungary's gold reserves in the United States. The different parties in the government had different plans to solve this problem. To the Independent Smallholders' Party–which had won a large majority in the 1945 Hungarian parliamentary election–as well as the Social Democrats, outside support |
https://en.wikipedia.org/wiki/Tobacco%20virtovirus%201 | Tobacco virtovirus 1, informally called Tobacco mosaic satellite virus, Satellite tobacco mosaic virus (STMV), or tobacco mosaic satellite virus, is a satellite virus first reported in Nicotiana glauca from southern California, U.S. Its genome consists of linear positive-sense single-stranded RNA.
Tobacco virtovirus 1 is a small, icosahedral plant virus which worsens the symptoms of infection by Tobacco mosaic virus (TMV). Satellite viruses are some of the smallest possible reproducing units in nature; they achieve this by relying on both the host cell and a host-virus (in this case, TMV) for the machinery necessary for them to reproduce. The entire Tobacco virtovirus 1 particle consists of 60 identical copies of a single protein (CP) that make up the viral capsid (coating), and a 1063-nucleotide single-stranded RNA genome which codes for the capsid and one other protein of unknown function.
More broadly, Tobacco virtovirus 1 has distinctive properties in its distribution and host range. It is found in Nicotiana glauca plants, which typically grow in warmer areas such as California in the United States and parts of South America, including Bolivia and Argentina. Satellite viruses like Tobacco virtovirus 1 tend to occur in the same tree tobacco plant (N. glauca), a tall shrub with small leaves that shows signs of viral infection through mosaic patterning and yellowing. The Satellite Tobacco Mosaic Virus also has a variety of alternative helper viruses, hosted in tomatoes, tobacco, and peppers, but has yet to be found in other crop plants.
Additionally, the Tobacco mosaic virus produces distinctive features in cells, particularly instances where virus crystals may form, as well as other protein bodies within unit-membrane-bound structures. The membrane that surrounds these crystals contains many vesicles, which allow for genome r
https://en.wikipedia.org/wiki/Aggregation-induced%20emission | Aggregation-induced emission (AIE) is a phenomenon that is observed with certain organic luminophores (fluorescent dyes).
The photoemission efficiencies of most organic compounds are higher in solution than in the solid state. Photoemission from some organic compounds follows the reverse pattern, being greater in the solid than in solution. The effect is attributed to decreased molecular flexibility in the solid.
Aggregation-induced emission enhancement
The phenomenon in which organic luminophores show higher photoluminescence efficiency in the aggregated state than in solution is called aggregation-induced emission enhancement (AIEE). Some luminophores, e.g., diketopyrrolopyrrole-based and sulfonamide-based luminophores, only display enhanced emission upon entering the crystalline state. That is, these luminophores are said to exhibit crystallization-induced emission enhancement (CIEE).
Luminophores such as noble metallic nanoclusters show higher photoluminescence efficiency in the aggregated state than in homogeneous dispersion in solution. This phenomenon is known as aggregation-induced emission (AIE).
Aggregation-induced emission polymer
A fluorescence-emitting polymer is a polymer that can absorb light of a certain frequency and then emit light. These polymers can be applied in the biomaterials area. Owing to their high biocompatibility and fluorescence, they can help researchers find and mark the location of proteins. Polymers with the aggregation-induced emission property can also help protect healthy tissue from the harmful effects of drugs. |
https://en.wikipedia.org/wiki/Thought%20vector | Thought vector is a term popularized by Geoffrey Hinton, the prominent deep-learning researcher, describing the use of vectors based on natural language to improve search results. |
https://en.wikipedia.org/wiki/Multi%20expression%20programming | Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions describing a given set of data. MEP is a Genetic Programming variant encoding multiple solutions in the same chromosome. MEP representation is not specific (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions. This representation was inspired by three-address code. MEP's strength lies in its ability to encode multiple solutions to a problem in the same chromosome. In this way, one can explore larger zones of the search space. For most problems, this advantage comes with no running-time penalty compared with genetic programming variants that encode a single solution per chromosome.
Representation
MEP chromosomes are arrays of instructions represented in Three-address code format.
Each instruction contains a variable, a constant, or a function. If the instruction is a function, then the arguments (given as instruction's addresses) are also present.
Example of MEP program
Here is a simple MEP chromosome (labels on the left side are not a part of the chromosome):
1: a
2: b
3: + 1, 2
4: c
5: d
6: + 4, 5
7: * 3, 5
Fitness computation
When the chromosome is evaluated it is unclear which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them being completely unrelated (they do not have common instructions).
For the above chromosome, here is the list of possible programs obtained during decoding:
E1 = a,
E2 = b,
E4 = c,
E5 = d,
E3 = a + b,
E6 = c + d,
E7 = (a + b) * d.
Each instruction is evaluated as a possible output of the program.
The fitness (or error) is computed in a standard manner. For instance, in the case of symbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output.
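The decoding and fitness steps above can be sketched in a few lines. The following is my own illustrative implementation (not the authors' code), using the example chromosome from the text and symbolic-regression fitness:

```python
# Decode an MEP chromosome and score every instruction as a candidate
# program output (toy sketch; operator set and encoding are my choices).
OPS = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}

# The chromosome from the text; addresses refer to earlier instructions (1-based).
chromosome = ["a", "b", ("+", 1, 2), "c", "d", ("+", 4, 5), ("*", 3, 5)]

def evaluate(chromosome, env):
    """Return the value computed by each instruction for one data point."""
    values = []
    for gene in chromosome:
        if isinstance(gene, tuple):          # function gene: (op, addr1, addr2)
            op, i, j = gene
            values.append(OPS[op](values[i - 1], values[j - 1]))
        else:                                # terminal gene: a variable
            values.append(env[gene])
    return values

def fitness(chromosome, dataset):
    """Sum of absolute errors of the best instruction over the dataset."""
    n = len(chromosome)
    errors = [0.0] * n
    for env, target in dataset:
        vals = evaluate(chromosome, env)
        for k in range(n):
            errors[k] += abs(vals[k] - target)
    return min(errors)  # the chromosome is scored by its best expression

# Target function (a + b) * d: instruction 7 (E7) should match perfectly.
data = [({"a": 1, "b": 2, "c": 0, "d": 3}, 9),
        ({"a": 2, "b": 2, "c": 5, "d": 1}, 4)]
print(fitness(chromosome, data))  # → 0.0
```

Because every instruction is a candidate output, one evaluation pass scores all seven expressions E1–E7 at once, which is where MEP's "multiple solutions per chromosome" advantage comes from.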
Fitness assignment process
Which expression will represent the chromosom |
https://en.wikipedia.org/wiki/Ankylosing%20spondylitis | Ankylosing spondylitis (AS) is a type of arthritis characterized by long-term inflammation of the joints of the spine, typically where the spine joins the pelvis. Occasionally, areas affected may include other joints such as the shoulders or hips. Eye and bowel problems may occur as well as back pain. Joint mobility in the affected areas generally worsens over time.
Although the cause of ankylosing spondylitis is unknown, it is believed to involve a combination of genetic and environmental factors. More than 85% of those affected in the UK have a specific human leukocyte antigen known as the HLA-B27 antigen. The underlying mechanism is believed to be autoimmune or autoinflammatory. Diagnosis is typically based on the symptoms with support from medical imaging and blood tests. AS is a type of seronegative spondyloarthropathy, meaning that tests show no presence of rheumatoid factor (RF) antibodies.
There is no known cure for AS. Treatments may include medication, exercise, physical therapy, and in rare cases surgery. Medications used include NSAIDs, steroids, DMARDs such as sulfasalazine, and biologic agents such as TNF inhibitors.
Approximately 0.1% to 0.8% of all humans are affected with onset typically occurring in young adults. Males and females are equally affected; however, women are more likely than men to experience inflammation rather than fusion.
Signs and symptoms
The signs and symptoms of ankylosing spondylitis often appear gradually, with peak onset between 20 and 30 years of age. Initial symptoms are usually a chronic dull pain in the lower back or gluteal region combined with stiffness of the lower back. Individuals often experience pain and stiffness that awakens them in the early morning hours.
As the disease progresses, loss of spinal mobility and chest expansion, with a limitation of anterior flexion, lateral flexion, and extension of the lumbar spine are seen. Systemic features are common with weight loss, fever, or fatigue often present. Pa |
https://en.wikipedia.org/wiki/Bio21%20Institute | The Bio21 Institute of Molecular Science and Biotechnology, abbreviated as the Bio21 Institute, is an Australian scientific research institute that focuses on basic science and applied biotechnology. The Bio21 Institute is based at the University of Melbourne on Flemington Road in , Melbourne, Victoria.
The institute is managed by the University of Melbourne and is supported by funding from the Victorian Government.
History
Established in 2002 and officially opened in 2005, the research centre conducts interdisciplinary learning and houses research groups specializing in biochemistry, cell biology, chemistry, and other life sciences. Bio21 accommodates over 500 research scientists, students and industry participants, making it one of the largest biotechnology research centres in Australia.
In September 2006, Bio21 formed a partnership with the Australian-based global biopharmaceutical company CSL Limited. Fifty scientists from CSL were relocated to participate in activities at Bio21. The goal of the partnership was for Bio21 to gain the expertise of industry professionals and for CSL to gain access to state-of-the-art equipment.
See also
Health in Australia |
https://en.wikipedia.org/wiki/Guided%20filter | A guided filter is an edge-preserving smoothing image filter. As with a bilateral filter, it can filter out noise or texture while retaining sharp edges.
Comparison
Compared to the bilateral filter, the guided image filter has two advantages. First, bilateral filters have high computational complexity, while the guided image filter uses simpler calculations with linear computational complexity. Second, bilateral filters can introduce unwanted gradient-reversal artifacts and image distortion; the guided image filter, being based on a local linear combination, keeps the output consistent with the gradient direction of the guidance image, preventing gradient reversal.
Definition
One key assumption of the guided filter is that the relation between the guidance I and the filtering output q is linear: q is assumed to be a linear transformation of I in a window ω_k centered at the pixel k.
In order to determine the linear coefficients (a_k, b_k), constraints from the filtering input p are required. The output is modeled as the input with unwanted components n, such as noise/textures, subtracted.
The basic model:
q_i = a_k I_i + b_k, ∀ i ∈ ω_k   (1)
q_i = p_i − n_i   (2)
in which:
q_i is the output pixel;
p_i is the input pixel;
n_i is the pixel of noise components;
I_i is the guidance image pixel;
a_k, b_k are some linear coefficients assumed to be constant in ω_k.
The reason to use a linear combination is that the boundary of an object is related to its gradient. The local linear model ensures that q has an edge only if I has an edge, since ∇q = a ∇I.
Subtract (2) from (1) to get formula (3); at the same time, define a cost function (4):
n_i = p_i − (a_k I_i + b_k)   (3)
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²)   (4)
in which
ε is a regularization parameter penalizing large a_k;
ω_k is a window centered at the pixel k.
And the cost function's solution is:
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε)   (5)
b_k = p̄_k − a_k μ_k   (6)
in which
μ_k and σ_k² are the mean and variance of I in ω_k;
|ω| is the number of pixels in ω_k;
p̄_k is the mean of p in ω_k.
After obtaining the linear coefficients , the filtering output is provided by the following algorithm:
Algorithm
By definition, the algorithm can be written as:
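A minimal NumPy sketch of the procedure (my own implementation, assuming grayscale float images; the helper names are mine) computes the local means with box filters, solves for the coefficients per window, and averages them over all windows covering each pixel:

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, via 2D cumulative sums."""
    pad = np.pad(img, r, mode="edge")        # replicate edges
    win = 2 * r + 1
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    s = c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
    return s / win**2

def guided_filter(guidance, src, r=4, eps=1e-3):
    """Guided filter of `src` using `guidance`; both float arrays in [0, 1]."""
    mean_I = box(guidance, r)
    mean_p = box(src, r)
    corr_Ip = box(guidance * src, r)
    var_I = box(guidance * guidance, r) - mean_I**2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # coefficients, eq. (5)
    b = mean_p - a * mean_I                           # eq. (6)
    # average the coefficients of all windows covering each pixel
    return box(a, r) * guidance + box(b, r)
```

Note the design choice in the last line: because a pixel belongs to many overlapping windows, the per-window coefficients are averaged before forming the output, which smooths the coefficient fields and avoids blocking artifacts.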
|
https://en.wikipedia.org/wiki/Morse%20code%20abbreviations | Morse code abbreviations are used to speed up Morse communications by foreshortening textual words and phrases. Morse abbreviations are short forms, representing normal textual words and phrases formed from some (fewer) characters taken from the word or phrase being abbreviated. Many are typical English abbreviations, or short acronyms for often-used phrases.
Distinct from prosigns and commercial codes
Morse code abbreviations are not the same as prosigns. Morse abbreviations are composed of (normal) textual alpha-numeric character symbols with normal Morse code inter-character spacing; the character symbols in abbreviations, unlike the delineated character groups representing Morse code prosigns, are not "run together" or concatenated in the way most prosigns are formed.
Although a few abbreviations (such as for "dollar") are carried over from former commercial telegraph codes, almost all Morse abbreviations are not commercial codes. From 1845 until well into the second half of the 20th century, commercial telegraphic code books were used to shorten telegrams, e.g. = "Locals have plundered everything from the wreck." However, these cyphers are typically "fake" words six characters long, or more, used for replacing commonly used whole phrases, and are distinct from single-word abbreviations.
Word and phrase abbreviations
The following Table of Morse code abbreviations and further references to Brevity codes such as 92 Code, Q code, Z code, and R-S-T system serve to facilitate fast and efficient Morse code communications.
{| class="wikitable"
|+Table of selected Morse code abbreviations
|-
! Abbreviation
! Meaning
! Defined in
! Type of abbreviation
|-
|
| All after (used after question mark to request a repetition)
| ITU-R M.1172
| operating signal
|-
|
| All before (similarly)
| ITU-R M.1172
| operating signal
|-
|
| Address
| ITU-T Rec. F.1
| operating signal
|-
|
| Address
| ITU-R M.1172
| operating signal
|-
|
| Again
|
| operating signal
|-
|
| Ante |
https://en.wikipedia.org/wiki/Vincennes%20Trace | The Vincennes Trace was a major trackway running through what are now the American states of Kentucky, Indiana, and Illinois. Originally formed by millions of migrating bison, the Trace crossed the Ohio River near the Falls of the Ohio and continued northwest to the Wabash River, near present-day Vincennes, before it crossed to what became known as Illinois. This buffalo migration route, often 12 to 20 feet wide in places, was well known and used by American Indians. Later European traders and American settlers learned of it, and many used it as an early land route to travel west into Indiana and Illinois. It is considered the most important of the traces to the Illinois country.
It was known by various names, including Buffalo Trace, Louisville Trace, Clarksville Trace, and Old Indian Road; after being improved as a turnpike, it was known as the New Albany-Paoli Pike, among other names. The Trace's continuous use encouraged improvements over the years, including paving and roadside development. U.S. Route 150 between Vincennes, Indiana, and Louisville, Kentucky, follows a portion of this path. Sections of the improved Trace have been designated as part of a National Scenic Byway that crosses southern Indiana.
History
The Trace was created by millions of migrating bison that were numerous in the region from the Great Lakes to Tennessee. It was part of a greater buffalo migration route that extended from present-day Big Bone Lick State Park in Kentucky, through Bullitt's Lick, south of present-day Louisville, and across the Falls of the Ohio River to Indiana, then northwest to Vincennes, before crossing the Wabash River into Illinois. The trail was well known among the area's natives and used for centuries. It later became known and used by European traders and white settlers who crossed the Ohio River at the Falls and followed the Trace overland to the western territories. It is considered to be the most important of the early traces leading to the Illinois country.
In Indiana the T |
https://en.wikipedia.org/wiki/152nd%20meridian%20east | The meridian 152° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, Australasia, the Southern Ocean, and Antarctica to the South Pole.
The 152nd meridian east forms a great circle with the 28th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 152nd meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" width="165" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Sakha Republic Magadan Oblast — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sea of Okhotsk
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Simushir, Kuril Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Chuuk Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Tabar islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Island of New Ireland
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bismarck Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Island of New Britain
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Solomon Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Island of New Britain
|-valign="top"
| style="background:#b0e0e6;" |
https://en.wikipedia.org/wiki/Android%20Things | Android Things is a deprecated Android-based embedded operating system platform by Google, announced at Google I/O 2015 and launched in 2018. The shutdown of the Android Things Dashboard began on January 5, 2021; after January 5, 2022, the Dashboard was shut down completely and all remaining data was deleted.
Originally, Android Things was aimed for low-power and memory constrained Internet of Things (IoT) devices, but in 2019 the project dropped support for low-power hardware and refocused on smartphone-class devices.
History
Pre-release
During Google I/O 2015, Google announced an upcoming Android-based embedded operating system platform, codenamed Brillo. At the time, the project aimed to support low-memory devices with as little as 32-64 MB of RAM. The Brillo platform was not just an OS for IoT devices but a complete software stack with a cloud component, which included a management console for device provisioning and update delivery. Brillo supported Wi-Fi, Bluetooth Low Energy, and the Weave protocol for communicating with the cloud (including update delivery), with Android phones, and with other compatible devices (including Google Nest products).
In 2016, Google revamped Brillo under the new name Android Things.
Originally, Android Things was aimed for low-power and memory constrained Internet of Things (IoT) devices, which are usually built from different MCU platforms.
Release
In 2018, Android Things was officially released, with version number 1.0. At the same time, multiple OEMs (including JBL, Lenovo, and LG Electronics) released smart home devices powered by Android Things. These devices were based on two Qualcomm "Home Hub" systems-on-chip solutions and Google-provided implementations of Android Things tailored for Google Assistant-powered smart speakers and displays.
In February 2019, Android Things refocused on smart speakers and displays. The project dropped support for resource-constrained IoT devices and changed focus to smartph |
https://en.wikipedia.org/wiki/Cloaca%20%28embryology%29 | The cloaca (plural: cloacae) is a structure in the development of the urinary and reproductive organs.
The hind-gut is at first prolonged backward into the body-stalk as the tube of the allantois; but, with the growth and flexure of the tail-end of the embryo, the body-stalk, with its contained allantoic tube, is carried forward to the ventral aspect of the body, and consequently a bend is formed at the junction of the hind-gut and allantois.
This bend becomes dilated into a pouch, which constitutes the endodermal cloaca; into its dorsal part the hind-gut opens, and from its ventral part the allantois passes forward.
At a later stage the Wolffian duct and Müllerian duct open into its ventral portion.
The cloaca is, for a time, shut off from the exterior by the cloacal membrane, formed by the apposition of the ectoderm and endoderm, and reaching, at first, as far forward as the future umbilicus.
Behind the umbilicus, however, the mesoderm subsequently extends to form the lower part of the abdominal wall and pubic symphysis.
By the growth of the surrounding tissues the cloacal membrane comes to lie at the bottom of a depression, which is lined by ectoderm and named the ectodermal cloaca.
Clinical significance
A birth defect can arise known as a persistent cloaca where the rectum, vagina, and urinary tract fuse to create a common channel or cloaca.
A rare birth defect which leaves much of the abdominal organs exposed is known as cloacal exstrophy.
Additional images |
https://en.wikipedia.org/wiki/Rate-limiting%20step%20%28biochemistry%29 | In biochemistry, a rate-limiting step is a step that controls the rate of a series of biochemical reactions. The notion is, however, a misunderstanding of how sequences of enzyme-catalyzed reaction steps operate. Rather than a single step controlling the rate, multiple steps have been found to control it, each to a varying degree.
Blackman (1905) stated as an axiom: "when a process is conditioned as to its rapidity by a number of separate factors, the rate of the process is limited by the pace of the slowest factor." This implies that it should be possible, by studying the behavior of a complicated system such as a metabolic pathway, to characterize a single factor or reaction (namely the slowest), which plays the role of a master or rate-limiting step. In other words, the study of flux control can be simplified to the study of a single enzyme since, by definition, there can only be one 'rate-limiting' step. Since its conception, the 'rate-limiting' step has played a significant role in suggesting how metabolic pathways are controlled. Unfortunately, the notion of a 'rate-limiting' step is erroneous, at least under steady-state conditions. Modern biochemistry textbooks have begun to play down the concept. For example, the seventh edition of Lehninger Principles of Biochemistry explicitly states: "It has now become clear that, in most pathways, the control of flux is distributed among several enzymes, and the extent to which each contributes to the control varies with metabolic circumstances". However, the concept is still incorrectly used in research articles.
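The idea that control is distributed can be made quantitative with flux control coefficients, C_i = (e_i/J)(dJ/de_i), which metabolic control analysis shows must sum to 1 over the pathway. The following toy model (my own example, not from the article) computes them numerically for a two-enzyme chain with a reversible first step and shows that neither enzyme is "the" rate-limiting step:

```python
# Toy pathway: X0 --e1--> S --e2--> (X0 held constant)
# with v1 = e1*(k1*X0 - k2*S) and v2 = e2*k3*S.
def steady_flux(e1, e2, k1=2.0, k2=1.0, k3=1.0, X0=1.0):
    S = e1 * k1 * X0 / (e1 * k2 + e2 * k3)  # from v1 = v2 at steady state
    return e2 * k3 * S

def control_coefficient(i, e=(1.0, 1.0), h=1e-6):
    """C_i = (e_i / J) * dJ/de_i, by finite-difference perturbation."""
    J = steady_flux(*e)
    bumped = list(e)
    bumped[i] *= 1 + h
    return (steady_flux(*bumped) - J) / (J * h)

C1 = control_coefficient(0)
C2 = control_coefficient(1)
print(round(C1, 3), round(C2, 3), round(C1 + C2, 3))  # → 0.5 0.5 1.0
```

Here each enzyme holds half of the flux control; changing the rate constants shifts the split continuously, which is exactly the "control varies with metabolic circumstances" point made by modern textbooks.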
Historical perspective
From the 1920s to the 1950s, there were a number of authors who discussed the concept of rate-limiting steps, also known as master reactions. Several authors have stated that the concept of the 'rate-limiting' step is incorrect. Burton (1936) was one of the first to point out that: "In the steady state of reaction chain |
https://en.wikipedia.org/wiki/Sagan%20%28software%29 | Sagan is an open source (GNU/GPLv2) multi-threaded, high-performance, real-time log analysis and correlation engine developed by Quadrant Information Security that runs on Unix operating systems. It is written in C and uses a multi-threaded architecture to deliver high-performance log and event analysis. Sagan's structure and rules work similarly to those of the Sourcefire Snort IDS/IPS engine. This allows Sagan to be compatible with Snort or Suricata rule-management software and gives Sagan the ability to correlate with Snort IDS/IPS data.
Sagan supports different output formats for reporting and analysis, log normalization, script execution on event detection, GeoIP detection/alerting and time sensitive alerting.
See also
Host-based intrusion detection system comparison |
https://en.wikipedia.org/wiki/J%20chain | The Joining (J) chain is a protein component that links monomers of antibodies IgM and IgA to form polymeric antibodies capable of secretion. The J chain is well conserved in the animal kingdom, but its specific functions are yet to be fully understood. It is a 137 residue polypeptide, encoded by the IGJ gene.
Structure
The J chain is a glycoprotein with a molecular weight of 15 kDa. Its secondary structure remains undetermined, but it is believed to adopt either a single β-barrel or a two-domain fold with standard immunoglobulin domains. The J chain's primary structure is unusually acidic, having a high content of negatively charged amino acids. It has 8 cysteine residues, 6 of which are involved in intramolecular disulfide bonds, while the remaining two bind the Fc tailpiece regions of IgA or IgM antibodies (the α chain and μ chain, respectively). An N-linked carbohydrate resulting from N-glycosylation is also essential for the protein's incorporation into antibody polymers. There is no known protein family with significant homology to the J chain.
Function
Antibody polymerization
The J chain regulates the multimerization of IgM and IgA in mammals. When expressed in cells, it favors the formation of a pentameric IgM and an IgA dimer. IgM pentamers are most commonly found with a single J chain, but some studies have seen as many as 4 J chains associated to a single IgM pentamer.
The J chain is incorporated late in the formation of IgM polymers and thermodynamically favors the formation of pentamers as opposed to hexamers. In J chain-knockout (KO) mice, the hexameric IgM polymer dominates. These J chain-negative IgM hexamers are 15–20 times more effective at activating complement than J chain-positive IgM pentamers. However, J chain-KO mice have been shown to have low concentrations of hexameric IgM and a deficiency in complement activation, suggesting additional in vivo regulatory mechanisms. Another consequence of pentameric IgM reduced complement activ
https://en.wikipedia.org/wiki/Information%20behavior | Information behavior is a field of information science research that seeks to understand the way people search for and use information in various contexts. It can include information seeking and information retrieval, but it also aims to understand why people seek information and how they use it. The term 'information behavior' was coined by Thomas D. Wilson in 1982 and sparked controversy upon its introduction. The term has now been adopted and Wilson's model of information behavior is widely cited in information behavior literature. In 2000, Wilson defined information behavior as "the totality of human behavior in relation to sources and channels of information".
A variety of theories of information behavior seek to understand the processes that surround information seeking. An analysis of the most cited publications on information behavior during the early 21st century shows its theoretical nature. Information behavior research can employ various research methodologies grounded in broader research paradigms from psychology, sociology and education.
In 2003, a framework for information-seeking studies was introduced that aims to guide the production of clear, structured descriptions of research objects and positions information-seeking as a concept within information behavior.
Concepts of information behavior
Information need
Information need is a concept introduced by Wilson. Understanding the information need of an individual involves three elements:
Why the individual decides to look for information,
What purpose the information they find will serve, and
How the information is used once it is retrieved
Information-seeking behavior
Information-seeking behavior is a more specific concept of information behavior. It specifically focuses on searching, finding, and retrieving information. Information-seeking behavior research can focus on improving information systems or, if it includes information need, can also focus on why the user behaves the way th |
https://en.wikipedia.org/wiki/Price%27s%20model | Price's model (named after the physicist Derek J. de Solla Price) is a mathematical model for the growth of citation networks. It was the first model which generalized the Simon model to be used for networks, especially for growing networks. Price's model belongs to the broader class of network growing models (together with the Barabási–Albert model) whose primary target is to explain the origination of networks with strongly skewed degree distributions. The model picked up the ideas of the Simon model reflecting the concept of rich get richer, also known as the Matthew effect. Price took the example of a network of citations between scientific papers and expressed its properties. His idea was that the way an old vertex (existing paper) gets new edges (new citations) should be proportional to the number of existing edges (existing citations) the vertex already has. This was referred to as cumulative advantage, now also known as preferential attachment. Price's work is also significant in providing the first known example of a scale-free network (although this term was introduced later). His ideas were used to describe many real-world networks such as the Web.
The model
Basics
Consider a directed graph with n nodes. Let p_k denote the fraction of nodes with in-degree k, so that Σ_k p_k = 1. Each new node has a given out-degree (namely, those papers it cites), and this out-degree is fixed in the long run. This does not mean that the out-degrees cannot vary across nodes; we simply assume that the mean out-degree m is fixed over time. It is clear that Σ_k k p_k = m; consequently, m is not restricted to integers. The most trivial form of preferential attachment means that a new node connects to an existing node with probability proportional to its in-degree. In other words, a new paper cites an existing paper in proportion to its in-degree. The caveat of this idea is that no paper is cited at the moment it joins the network, so it would have zero probability of being cited in the future (which necessarily is not
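Price's cumulative-advantage rule can be sketched in a short simulation (illustrative, not from the source). The common convention of attaching with probability proportional to in-degree plus one, which gives brand-new papers a nonzero chance of being cited, is assumed here:

```python
import random

def price_network(n, m=3, seed=1):
    """Grow a citation network: each new node cites (up to) m earlier
    nodes, chosen with probability proportional to (in-degree + 1)."""
    random.seed(seed)
    indeg = [0]   # start with a single paper, node 0
    # 'urn' holds one ticket per node plus one ticket per citation it has
    # received, so drawing uniformly from it realizes the (k + 1) rule.
    urn = [0]
    for new in range(1, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(random.choice(urn))
        for t in targets:
            indeg[t] += 1
            urn.append(t)       # extra ticket for the citation received
        indeg.append(0)
        urn.append(new)         # base ticket for the new node itself
    return indeg

indeg = price_network(2000)
# Early nodes accumulate far more citations than the typical (median) node,
# the "rich get richer" signature of a strongly skewed degree distribution:
print(max(indeg), sorted(indeg)[len(indeg) // 2])
```

Running this shows a heavily skewed in-degree distribution: a handful of old papers collect most citations while the median paper has only a few, qualitatively matching the scale-free behavior Price described.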
https://en.wikipedia.org/wiki/Suspension%20of%20a%20ring | In algebra, more specifically in algebraic K-theory, the suspension of a ring R is given by S(R) = C(R)/M(R), where C(R) is the ring of all infinite matrices with coefficients in R having only finitely many nonzero elements in each row or column, and M(R) is its ideal of matrices having only finitely many nonzero elements. It is an analog of suspension in topology.
One then has: K_i(S(R)) ≅ K_{i−1}(R).
https://en.wikipedia.org/wiki/Hume-Rothery%20rules | Hume-Rothery rules, named after William Hume-Rothery, are a set of basic rules that describe the conditions under which an element could dissolve in a metal, forming a solid solution. There are two sets of rules; one refers to substitutional solid solutions, and the other refers to interstitial solid solutions.
Substitutional solid solution rules
For substitutional solid solutions, the Hume-Rothery rules are as follows:
The atomic radius of the solute and solvent atoms must differ by no more than 15%: ((r_solute − r_solvent) / r_solvent) × 100% ≤ 15%.
The crystal structures of solute and solvent must be similar.
Complete solubility occurs when the solvent and solute have the same valency. A metal is more likely to dissolve a metal of higher valency, than vice versa.
The solute and solvent should have similar electronegativity. If the electronegativity difference is too great, the metals tend to form intermetallic compounds instead of solid solutions.
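As an illustration (not from the source), the four substitutional rules can be encoded as a simple screening function. The element data and the 0.4 electronegativity cutoff below are approximate textbook values chosen for the sketch:

```python
# Hypothetical screening for the substitutional Hume-Rothery rules.
# Radii (pm) and Pauling electronegativities below are approximate
# textbook values, used only for illustration.

def size_factor_ok(r_solute, r_solvent, limit=0.15):
    """Rule 1: atomic radii must differ by no more than ~15%."""
    return abs(r_solute - r_solvent) / r_solvent <= limit

def likely_substitutional(solute, solvent):
    return (size_factor_ok(solute["radius"], solvent["radius"])
            and solute["structure"] == solvent["structure"]   # rule 2
            and abs(solute["chi"] - solvent["chi"]) < 0.4     # rule 4
            and solute["valence"] == solvent["valence"])      # rule 3

# Cu-Ni is the classic fully miscible pair satisfying all four rules:
cu = {"radius": 128, "structure": "fcc", "chi": 1.90, "valence": 2}
ni = {"radius": 125, "structure": "fcc", "chi": 1.91, "valence": 2}
print(likely_substitutional(ni, cu))  # → True
```

A pair failing any single test (for example, a radius mismatch above 15%) would be flagged as unlikely to form an extensive substitutional solid solution.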
Interstitial solid solution rules
For interstitial solid solutions, the Hume-Rothery Rules are:
Solute atoms should have a radius smaller than 59% of the radius of the solvent atoms.
The solute and solvent should have similar electronegativity.
Valency factor: two elements should have the same valence. The greater the difference in valence between solute and solvent atoms, the lower the solubility.
Solid solution rules for multicomponent systems
Fundamentally, the Hume-Rothery rules are restricted to binary systems that form either substitutional or interstitial solid solutions. However, this approach limits the assessment of advanced alloys, which are commonly multicomponent systems. Free energy diagrams (or phase diagrams) offer in-depth knowledge of equilibrium constraints in complex systems. In essence, the Hume-Rothery rules (and Pauling's rules) are based on geometrical constraints, and ongoing refinements of the Hume-Rothery rules take the same approach: the rules are being recast as a critical contact criterion describable with Voronoi diagrams. This could eas
https://en.wikipedia.org/wiki/Lanix | Lanix Internacional, S.A. de C.V. is a multinational computer and mobile phone manufacturer company based in Hermosillo, Mexico. Lanix primarily markets and sells its products in Mexico and the Latin American export market.
History
Lanix was founded in Hermosillo, Sonora, Mexico in 1990, and released its first computer, the PC 286 the same year.
Throughout the 1990s Lanix expanded into the development and production of more sophisticated electronics components such as optical drives, servers, memory drives and flash memory. In 2002 Lanix opened its first factory outside of Mexico in Santiago, Chile to cater to the South American market.
By 2006 Lanix had gained a market share of 5% of Mexico's electronics market and began diversifying its product line to include LCD televisions and monitors and in 2007 began manufacturing mobile phones. Currently Lanix offers products in the consumer, professional and government markets throughout Latin America.
In 2010 Lanix announced an ambitious plan to gain market share in the Latin American computer market and expanded operations to include every country in Latin America.
Lanix has production facilities at its original headquarters in Hermosillo, Sonora, Mexico and international facilities in Santiago, Chile and Bogota, Colombia.
At the 2009 Intel Solutions Summit hosted by Intel, Lanix won an award in the "mobile solution" category.
In March 2011, Lanix began offering a system where buyers can custom build their own computer, choosing different types of chipsets, memory, and other components.
In 2012 Lanix expanded its product portfolio by introducing its first smartphone, the Ilium S100, and positioned itself as one of the best-selling brands in the Mexican market.
In 2015 the company announced its first smartphone running Windows Phone.
In June 2017 Lanix renewed its image by updating its logo, launching new high-end smartphones, and updating its webpage.
Products
Lanix manufactures desktops, laptops, tablets, server
https://en.wikipedia.org/wiki/The%20Power%20of%2010%3A%20Rules%20for%20Developing%20Safety-Critical%20Code | The Power of 10 Rules were created in 2006 by Gerard J. Holzmann of the NASA/JPL Laboratory for Reliable Software. The rules are intended to eliminate certain C coding practices which make code difficult to review or statically analyze. These rules are a complement to the MISRA C guidelines and have been incorporated into the greater set of JPL coding standards.
Rules
The ten rules are:
Avoid complex flow constructs, such as goto and recursion.
All loops must have fixed bounds. This prevents runaway code.
Avoid heap memory allocation.
Restrict functions to a single printed page.
Use a minimum of two runtime assertions per function.
Restrict the scope of data to the smallest possible.
Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
Use the preprocessor sparingly.
Limit pointer use to a single dereference, and do not use function pointers.
Compile with all possible warnings active; all warnings should then be addressed before release of the software.
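The rules target C specifically, but several translate directly to other languages. The hypothetical routine below illustrates rule 2 (fixed loop bounds), rule 5 (at least two assertions per function), and rule 7 (always checking return values); the names and the bound are invented for the sketch:

```python
# Hypothetical illustration of Power of 10 rules 2, 5 and 7 in Python.

MAX_ITEMS = 64  # fixed upper bound, so the loop cannot "run away" (rule 2)

def find_index(items, key):
    assert items is not None            # assertion 1: valid input (rule 5)
    assert len(items) <= MAX_ITEMS      # assertion 2: bound respected
    for i in range(min(len(items), MAX_ITEMS)):   # statically bounded loop
        if items[i] == key:
            return i
    return -1  # explicit "not found" sentinel the caller must check

idx = find_index(["a", "b", "c"], "b")
if idx < 0:                             # rule 7: check the return value
    raise LookupError("key not found")
print(idx)  # → 1
```

The point of such patterns is the one the rules state: code whose loops, inputs, and return paths are all explicitly bounded and checked is far easier to review and to analyze statically.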
Uses
The NASA study of the Toyota electronic throttle control firmware found at least 243 violations of these rules.
See also
Life critical system
Coding conventions
Software quality
Software assurance
Further reading |
https://en.wikipedia.org/wiki/Internet%20Buzzword%20Award | is a Japanese award that determines the most popular buzzwords on the Internet during a year.
This article also deals with the Anime Buzzword Award, which has been held in conjunction with the awards since 2013.
Overview
The Internet Buzzword of the Year Award is selected annually by the online media company Gadget Tsushin. Launched in 2007, it solicits candidates from the members of 2channel search, and the popular words are decided by the votes of those members. Since 2013, the "Anime Buzzword Awards" have been held exclusively for words related to anime that aired that year.
Since the Grand Prize is decided at the end of the year, words that were popular late in the year have a comparative advantage. Also, due to the selection method, words that were popular on 2channel and Nico Nico Douga tend to be selected. Many people may be more familiar with this award than with the "New Word and Popular Word Awards" sponsored by U-CAN.
Award-winning terms |
https://en.wikipedia.org/wiki/Stunt%20%28botany%29 | In botany and agriculture, stunting describes a plant disease that results in dwarfing and loss of vigor. It may be caused by infectious or noninfectious means. Stunted growth can affect foliage and crop yields, as well as eating quality in edible plants.
Infectious
A stunt caused by infectious agents usually cannot be cured by the time it is detected.
Nematodes (eelworm)
Fungi
Bacteria
Viruses
Noninfectious
A stunt caused by noninfectious means can sometimes be remedied.
Physical environment
Excess of water
Lack of water
Too-deep planting
Excess light
Nutrition-related
Soil nutrient imbalance
Injuries
Chemical injury
Physical injury
Pest feeding
See also
Soil retrogression and degradation
Soil pH
Soil types
Ramu stunt disease, a disease of the sugarcane widespread throughout Papua New Guinea |
https://en.wikipedia.org/wiki/5K%20resolution | 5K resolution refers to display formats with a horizontal resolution of around 5,000 pixels. The most common 5K resolution is 5120 × 2880, which has an aspect ratio of 16:9 with around 14.7 million pixels (just over seven times as many pixels as 1080p Full HD), with exactly twice the linear resolution of 1440p and four times that of 720p. This resolution is typically used in computer monitors to achieve a higher pixel density, and is not a standard format in digital television and digital cinematography, which feature 4K resolutions and 8K resolutions.
In comparison to 4K UHD (3840 × 2160), the 5K resolution of 5120 × 2880 offers 1280 extra columns and 720 extra lines of display area, an increase of 33.3% in each dimension. This additional display area allows 4K content to be displayed at native resolution without filling the entire screen, which means that additional software elements such as video editing suite toolbars remain visible without having to downscale the content previews.
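The pixel arithmetic here is easy to verify directly (a quick check using the commonly cited 5120 × 2880 and 3840 × 2160 dimensions):

```python
# Verifying the resolution arithmetic for 5K (5120 x 2880) vs 4K UHD and Full HD.
w5k, h5k = 5120, 2880
w4k, h4k = 3840, 2160          # 4K UHD
fullhd = 1920 * 1080           # 1080p Full HD

pixels_5k = w5k * h5k
print(pixels_5k)                        # → 14745600 (about 14.7 million)
print(round(pixels_5k / fullhd, 2))     # → 7.11 (just over seven times Full HD)
print(w5k - w4k, h5k - h4k)             # → 1280 720 (extra columns and lines)
print(round(100 * (w5k / w4k - 1), 1))  # → 33.3 (% increase per dimension)
```

The same ratios confirm the "twice the linear resolution of 1440p" claim, since 2 × 2560 = 5120 and 2 × 1440 = 2880.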
As of 2016, the world uses 1080p as the mainstream HD standard. However, there is a rapid increase in media content being released in 4K and even 5K resolution. Online streaming services such as Netflix and Amazon Prime Video launched videos in 4K resolution in 2014 and are actively expanding their collection of videos in 4K resolution. As 4K content becomes more common, the usefulness of 5K displays in editing and content creation may lead to a higher demand in the future.
History
First camera with 5K video capture
On April 14, 2008, Red Digital Cinema launched one of the first cameras capable of video capture at 5K resolution. The Red Epic uses the Mysterium-X sensor, which has a resolution of 5120 × 2700 and can capture at a frame rate of up to 100 fps. Cameras with 5K resolution are used occasionally for recording films in digital cinematography.
Some photographic still cameras such as DSLRs can exceed 5K resolution when capturing still images, but not when capturing video. For example, the Canon EOS 5D Mark IV announced i |
https://en.wikipedia.org/wiki/Dickson%27s%20lemma | In mathematics, Dickson's lemma states that every set of -tuples of natural numbers has finitely many minimal elements. This simple fact from combinatorics has become attributed to the American algebraist L. E. Dickson, who used it to prove a result in number theory about perfect numbers. However, the lemma was certainly known earlier, for example to Paul Gordan in his research on invariant theory.
Example
Let K be a fixed number, and let S be the set of pairs (x, y) of numbers whose product is at least K. When defined over the positive real numbers, S has infinitely many minimal elements of the form (x, K/x), one for each positive number x; this set of points forms one of the branches of a hyperbola. The pairs on this hyperbola are minimal, because it is not possible for a different pair that belongs to S to be less than or equal to (x, K/x) in both of its coordinates. However, Dickson's lemma concerns only tuples of natural numbers, and over the natural numbers there are only finitely many minimal pairs. Every minimal pair (x, y) of natural numbers has x ≤ K and y ≤ K, for if x were greater than K then (x − 1, y) would also belong to S, contradicting the minimality of (x, y), and symmetrically if y were greater than K then (x, y − 1) would also belong to S. Therefore, over the natural numbers, S has at most K² minimal elements, a finite number.
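Because every minimal pair is bounded by K in each coordinate, the minimal elements can be found by a finite search (an illustrative sketch, not from the source):

```python
# Minimal elements of S = {(x, y) : x * y >= K} over the natural numbers.
# Since every minimal pair has x <= K and y <= K, a K-by-K grid suffices.

def minimal_pairs(K):
    candidates = [(x, y) for x in range(1, K + 1)
                         for y in range(1, K + 1) if x * y >= K]
    # keep a pair only if no other candidate is <= it in both coordinates
    return [p for p in candidates
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in candidates)]

print(minimal_pairs(6))  # → [(1, 6), (2, 3), (3, 2), (6, 1)]
```

For K = 6 the hyperbola xy = 6 has infinitely many real minimal points, but over the natural numbers only four pairs survive, matching the finiteness the lemma asserts.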
Formal statement
Let ℕ be the set of non-negative integers (natural numbers), let n be any fixed constant, and let ℕ^n be the set of n-tuples of natural numbers. These tuples may be given a pointwise partial order, the product order, in which (a_1, ..., a_n) ≤ (b_1, ..., b_n) if and only if a_i ≤ b_i for every i.
The set of tuples that are greater than or equal to some particular tuple forms a positive orthant with its apex at the given tuple.
With this notation, Dickson's lemma may be stated in several equivalent forms:
In every non-empty subset S of ℕ^n there is at least one, but no more than a finite number of, elements that are minimal elements of S for the pointwise partial order.
For every infinite sequence of -t |
https://en.wikipedia.org/wiki/Polymer-bonded%20explosive | Polymer-bonded explosives, also called PBX or plastic-bonded explosives, are explosive materials in which explosive powder is bound together in a matrix using small quantities (typically 5–10% by weight) of a synthetic polymer. PBXs are normally used for explosive materials that are not easily melted into a casting, or are otherwise difficult to form.
PBX was first developed in 1952 at Los Alamos National Laboratory, as RDX embedded in polystyrene with dioctyl phthalate plasticizer. HMX compositions with Teflon-based binders were developed in the 1960s and 1970s for gun shells and for Apollo Lunar Surface Experiments Package (ALSEP) seismic experiments, although the latter experiments are usually cited as using hexanitrostilbene (HNS).
Potential advantages
Polymer-bonded explosives have several potential advantages:
If the polymer matrix is an elastomer (rubbery material), it tends to absorb shocks, making the PBX very insensitive to accidental detonation, and thus ideal for insensitive munitions.
Hard polymers can produce PBX that is very rigid and maintains a precise engineering shape even under severe stress.
PBX powders can be pressed into a desired shape at room temperature; casting normally requires hazardous melting of the explosive. High pressure pressing can achieve density for the material very close to the theoretical crystal density of the base explosive material.
Many PBXs are safe to machine, allowing solid blocks to be turned into complex three-dimensional shapes. For example, a billet of PBX can be precisely shaped on a lathe or CNC machine. This technique is used to machine the explosive lenses necessary for modern nuclear weapons.
Binders
Fluoropolymers
Fluoropolymers are advantageous as binders due to their high density (yielding high detonation velocity) and inert chemical behavior (yielding long shelf stability and low aging). They are somewhat brittle, as their glass transition temperature is at room temperature or above. This limits their use to insens |
https://en.wikipedia.org/wiki/Stem-cell%20line | A stem cell line is a group of stem cells that is cultured in vitro and can be propagated indefinitely. Stem cell lines are derived from either animal or human tissues and come from one of three sources: embryonic stem cells, adult stem cells, or induced stem cells. They are commonly used in research and regenerative medicine.
Properties
By definition, stem cells possess two properties: (1) they can self-renew, which means that they can divide indefinitely while remaining in an undifferentiated state; and (2) they are pluripotent or multipotent, which means that they can differentiate to form specialized cell types. Due to the self-renewal capacity of stem cells, a stem cell line can be cultured in vitro indefinitely.
A stem-cell line is distinctly different from an immortalized cell line, such as the HeLa line. While stem cells can propagate indefinitely in culture due to their inherent properties, immortalized cells would not normally divide indefinitely but have gained this ability due to mutation. Immortalized cell lines can be generated from cells isolated from tumors, or mutations can be introduced to make the cells immortal.
A stem cell line is also distinct from primary cells. Primary cells are cells that have been isolated and then used immediately. Primary cells cannot divide indefinitely and thus cannot be cultured for long periods of time in vitro.
Types and methods of derivation
Embryonic stem cell line
An embryonic stem cell line is created from cells derived from the inner cell mass of a blastocyst, an early stage, pre-implantation embryo. In humans, the blastocyst stage occurs 4–5 days post fertilization. To create an embryonic stem cell line, the inner cell-mass is removed from the blastocyst, separated from the trophoectoderm, and cultured on a layer of supportive cells in vitro. In the derivation of human embryonic stem cell lines, embryos left over from in vitro fertilization (IVF) procedures are used. The fact that the blastocyst is dest |
https://en.wikipedia.org/wiki/Surface%20freezing | Surface freezing is the appearance of long-range crystalline order in a near-surface layer of a liquid. The surface freezing effect is the opposite of the far more common surface melting, or premelting. Surface freezing was experimentally discovered in melts of alkanes and related chain molecules in the early 1990s, independently by two groups. John Earnshaw and his group (Queen's University of Belfast) used light scattering, which did not allow a determination of the frozen layer's thickness, or of whether it is laterally ordered. A group led by Ben Ocko (Brookhaven National Laboratory), Eric Sirota (Exxon) and Moshe Deutsch (Bar-Ilan University, Israel) independently discovered the same effect using x-ray surface diffraction, which allowed them to show that the frozen layer is a crystalline monolayer, with molecules oriented roughly along the surface normal and ordered in a hexagonal lattice. A related effect, the existence of a smectic phase at the surface of a nematic liquid bulk, was observed in liquid crystals by Jens Als-Nielsen (Risø National Laboratory, Denmark) and Peter Pershan (Harvard University) in the early 1980s. However, the surface layer there was neither ordered nor confined to a single layer. Surface freezing has since been found in a wide range of chain molecules and at various interfaces: liquid-air, liquid-solid and liquid-liquid.
Phases of matter |
https://en.wikipedia.org/wiki/Overheating%20%28electricity%29 | Overheating is a phenomenon of rising temperatures in an electrical circuit. Overheating causes damage to the circuit components and can cause fire, explosion, and injury. Damage caused by overheating is usually irreversible; the only way to repair it is to replace some components.
Causes
When overheating, the temperature of the part rises above the operating temperature. Overheating can take place:
if heat is produced in a greater than expected amount (such as in cases of short circuits, or of applying more voltage than rated), or
if heat dissipation is poor, so that normally produced waste heat does not drain away properly.
Overheating may be caused from any accidental fault of the circuit (such as short-circuit or spark-gap), or may be caused from a wrong design or manufacture (such as the lack of a proper heat dissipation system).
Due to accumulation of heat, the system reaches an equilibrium of heat accumulation vs. dissipation at a much higher temperature than expected.
Preventive measures
Use of circuit breaker or fuse
Circuit breakers can be placed at portions of a circuit, in series with the path of current they affect. If more current than expected goes through a circuit breaker, the breaker "opens" the circuit and stops all current. A fuse is a common type of circuit breaker that relies directly on Joule heating. A fuse is always placed in series with the path of current it affects. Fuses usually consist of a thin strand of wire of a specific material. When more than the rated current flows through the fuse, the wire melts and breaks the circuit.
Use of heat-dissipating systems
Many systems use ventilation holes or slits kept on the box of equipment to dissipate heat. Heat sinks are often attached to portions of the circuit that produce most heat or are vulnerable to heat. Fans are also often used. Some high-voltage instruments are kept immersed in oil. In some cases, to remove unwanted heat, a cooling system like air conditioning o |
https://en.wikipedia.org/wiki/Toilet%20Duck | Toilet Duck is a brand name of toilet cleaner noted for the duck-shape of its bottle shaped to assist in dispensing the cleaner under the rim. The design was patented in 1980 by Durgol from Dällikon, Switzerland. It is now produced by S. C. Johnson & Son.
The Toilet Duck brand can be found in the United States, United Kingdom and other countries around the world. In Germany, it is known as WC-Ente, previously produced by Henkel, and now by S. C. Johnson (Germany). In the Netherlands and Flanders it is called "Wc-eend", in France it is sold as "Canard-WC" and in Italy as "Anitra WC". In Hungary it used to have the name "Toalett Kacsa". Meanwhile, in Spain, it is sold as "Pato WC", in Portugal as "WC Pato", and in Mexico, Brazil, Colombia and Argentina as "Pato Purific" or simply "Pato". In Indonesia, it is one of the "Bebek" (duck) line of products, such as Bebek Kloset, Bebek Semerbak, Bebek Semerbak Flush, Bebek In Tank, and Bebek Kamar Mandi.
The "Toilet" moniker has been dropped from the name in the UK and Ireland, and the product is now called "Duck". The same change has occurred in Hungary, though with the English "Duck" in place of "Kacsa". Today, the duck-shaped bottle is sold in North America under the Scrubbing Bubbles brand.
Ingredients
The following ingredients are part of all Duck toilet-cleaning products:
L-lactic acid
Water
Ethoxylated alcohol
Xanthan gum
Sodium laureth sulfate
Depending on variant, various dyes and fragrances are used, such as:
Liquid Marine: Liquitint Blue Dye, Liquitint Pink AL Dye, 2,6-dimethyl-7-octen-2-ol, 2-methoxy-4-propylphenol, 3,7-dimethyloct-6-enenitrile, coumarin, dipropylene glycol, eucalyptol, geraniol, isobornyl acetate, isobutyl salicylate, linalool.
Liquid Citrus: Liquitint Orange 157, 2-t-butylcyclohexyl acetate, 3,7-dimethylnona-2,6-dienenitrile, allyl 3-cyclohexylpropionate, decanal, dipropylene glycol, ethyl 2-methylvalerate, gamma-undecalactone, methylbenzyl acetate, tricyclo(5.2.1.02,6)dec |
https://en.wikipedia.org/wiki/Gustav%20Fechner | Gustav Theodor Fechner (19 April 1801 – 18 November 1887) was a German physicist, philosopher, and experimental psychologist. A pioneer in experimental psychology and founder of psychophysics (techniques for measuring the mind), he inspired many 20th-century scientists and philosophers. He is also credited with demonstrating the non-linear relationship between psychological sensation and the physical intensity of a stimulus via the formula S = K ln I, which became known as the Weber–Fechner law.
Early life and scientific career
Fechner was born at Groß Särchen, near Muskau, in Lower Lusatia, where his father, a maternal uncle, and his paternal grandfather were pastors. His mother, Johanna Dorothea Fechner (b. 1774), née Fischer, also came from a religious family. Despite these religious influences, Fechner became an atheist in later life.
Fechner's father, Samuel Traugott Fischer Fechner (1765–1806), was free-thinking in many ways, for example by having his children vaccinated, teaching them Latin, and being a passionate grower of fruit. He died unexpectedly in 1806, leaving the family destitute. Fechner had an elder brother, Eduard Clemens Fechner (1799–1861), and three younger sisters: Emilie, Clementine, and Mathilde. Fechner and his brother were then raised for a few years by their maternal uncle, the pastor, before being reunited with their mother and sisters in Dresden.
Fechner was educated first at Sorau (now Żary in Western Poland).
In 1817 Fechner studied medicine for six months in Dresden, and from 1818 at the University of Leipzig, the city in which he spent the rest of his life. He earned his PhD from Leipzig in 1823.
In 1834 he was appointed professor of physics at Leipzig. But in 1839, while studying the phenomena of color and vision, he injured his eyes during research on afterimages by gazing at the Sun through colored glasses, and, after much suffering, resigned. Subsequently, after recovering, he turned to the study of the mind and its relations with
https://en.wikipedia.org/wiki/Ginkgotoxin | Ginkgotoxin (4'-O-methylpyridoxine) is a neurotoxin naturally occurring in Ginkgo biloba. It is an antivitamin structurally related to vitamin B6 (pyridoxine). It has the capacity to induce epileptic seizures.
Occurrence
Seeds and phytopharmaceuticals derived from the plant Ginkgo biloba are dietary supplements used to improve memory, brain metabolism, and blood flow, and to treat neuronal disorders. It has been long used for a wide range of medicinal purposes. For instance, in Japan and China, Ginkgo biloba is used to treat cough, bronchial asthma, irritable bladder and alcohol use disorder.
Ginkgotoxin is found in the seeds and, in lesser amounts, in the leaves of Ginkgo biloba. The seeds can be consumed as is, and the leaves can be used to prepare dietary supplements. Analyses of raw seeds from eight different locations in Japan by high-performance liquid chromatography showed concentrations of ginkgotoxin varying from 0.173 to 0.4 mg/g of seeds. There is also a seasonal variation of ginkgotoxin concentration in the seeds, with the maximum observed in August. Analyses of the powder of Ginkgo biloba capsules revealed the presence of ginkgotoxin. However, as the leaves contain very small amounts that are not of toxicological relevance, they should not pose any threat to consumers.
Ginkgotoxin-5'-glucoside is a derivative of ginkgotoxin that carries a glycosyl group at the 5' position. Its content in heated (boiled or roasted) seeds is higher than that of ginkgotoxin itself. Liberation of ginkgotoxin by enzymatic hydrolysis of the glycosidic linkage is possible. Nevertheless, the mechanism of action and toxicity of the glucoside form are not fully understood.
Ginkgotoxin can also be found in other plants of the genus Albizia. However, these plants have no known dietary use for humans, so their production of ginkgotoxin is of lesser concern.
Biosynthesis
Ginkgotoxin is the 4'-O-methyl derivative of vitamin B6 (pyridoxine), but the presence of the vitamin |
https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention%20software%20model | The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
Overview
In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, desire and intention are both pro-attitudes (mental attitudes concerned with action). He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.
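The separation between selecting a plan and executing the currently active plan, together with the hierarchical plans described above (steps that invoke other plans), can be sketched as a toy interpreter loop. This is a minimal illustration under invented names (Plan, Agent, achieve, and so on), not the API of any actual BDI platform such as JACK or Jason:

```python
# Toy BDI-style interpreter: deliberation (plan selection) is kept
# separate from execution, and plan steps may invoke sub-plans.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Plan:
    goal: str                        # the desire this plan is relevant to
    context: Callable[[dict], bool]  # is the plan applicable given current beliefs?
    steps: List                      # each step: a callable action or a sub-goal string


class Agent:
    def __init__(self, beliefs, desires, library):
        self.beliefs = beliefs       # facts the agent holds about the world
        self.desires = desires       # goals the agent would like to achieve
        self.library = library       # the plan library

    def select_plan(self, goal):
        """Deliberation: pick the first applicable plan for a goal."""
        for plan in self.library:
            if plan.goal == goal and plan.context(self.beliefs):
                return plan
        return None

    def achieve(self, goal):
        """Execution: run a plan's steps; a sub-goal string triggers
        re-deliberation, giving hierarchical plans."""
        plan = self.select_plan(goal)
        if plan is None:
            return False
        for step in plan.steps:
            if isinstance(step, str):        # sub-goal: invoke another plan
                if not self.achieve(step):
                    return False
            else:                            # primitive action
                step(self.beliefs)
        return True
```

For example, a plan for "enter_room" might list the sub-goal "open_door" followed by a primitive walking action; executing the overarching plan keeps it in effect while the subsidiary plan runs, mirroring the kind of temporal persistence described above.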
An important aspect of the BDI software model (in terms of its research relevance) is the ex |
https://en.wikipedia.org/wiki/Xuan%20Son%20virus | Xuan Son virus is a single-stranded, enveloped, negative-sense RNA virus of the genus Mobatvirus.
Natural reservoir
It was isolated in Pomona roundleaf bats in Xuân Sơn National Park, a nature reserve in Thanh Sơn District, Phú Thọ Province, Vietnam, within a 50-mile radius of Hanoi. |
https://en.wikipedia.org/wiki/Suillus%20serotinus | Suillus serotinus is a species of bolete fungus found in eastern North America. Originally described as a species of Boletus by the American botanist Charles Christopher Frost in 1874, it was transferred to Suillus in 1996. The bolete has a sticky, dark reddish-brown cap up to in diameter. The pore surface is initially white before turning reddish brown with age; the angular pores number 1 to 3 per millimeter. The flesh slowly stains bluish after injury, later becoming purplish gray and finally reddish brown. The fungus grows in a mycorrhizal association with larch, and fruits on the ground scattered or in groups. The spore print is purplish brown; the spores are oblong to ellipsoid, smooth, and measure 8–12 by 4–5 µm. The fruit bodies are edible, but lack any distinctive taste or odor.
See also
List of North American boletes |
https://en.wikipedia.org/wiki/Generating%20function | In mathematics, a generating function is a way of encoding an infinite sequence of numbers (a_0, a_1, a_2, ...) by treating them as the coefficients of a formal power series. This series is called the generating function of the sequence. Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" x remains an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers.
There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series; definitions and examples are given below. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed.
Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations defined for formal series. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x, and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed-form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions".
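To make the sequence-to-closed-form correspondence concrete, the coefficients can be read back off a rational closed form by formal power-series division. The helper below is an illustration of our own (not a standard library routine), applied to the Fibonacci numbers, whose ordinary generating function is x/(1 - x - x^2):

```python
def series_coefficients(num, den, n):
    """Return the first n coefficients of the formal power series
    num(x) / den(x), where num and den are coefficient lists
    [c0, c1, c2, ...] and den is normalized so that den[0] == 1.
    Works by equating coefficients in den(x) * result(x) = num(x)."""
    assert den[0] == 1, "normalize the denominator so its constant term is 1"
    coeffs = []
    for k in range(n):
        acc = num[k] if k < len(num) else 0
        # subtract the contributions of lower-order coefficients
        for j in range(1, min(k, len(den) - 1) + 1):
            acc -= den[j] * coeffs[k - j]
        coeffs.append(acc)
    return coeffs


# Fibonacci numbers: ordinary generating function x / (1 - x - x^2)
print(series_coefficients([0, 1], [1, -1, -1], 10))
# → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The recurrence used inside the loop is exactly the linear recurrence encoded by the denominator, which is why de Moivre's original application was to linear recurrence problems.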
https://en.wikipedia.org/wiki/Refrigeration | Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). Refrigeration is an artificial, or human-made, cooling method.
Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units.
Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. The idea of preserving food dates back to human prehistory, but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying, and they made use of natural coolness in caves, root cellars, and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains. In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars, refrigerator trucks, and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails.
These new settlement patterns sparked the buildin |
https://en.wikipedia.org/wiki/Ecosystem%20Functional%20Type | Ecosystem Functional Type (EFT) is an ecological concept to characterize ecosystem functioning. Ecosystem Functional Types are defined as groups of ecosystems or patches of the land surface that share similar dynamics of matter and energy exchanges between the biota and the physical environment. The EFT concept is analogous to the Plant Functional Types (PFTs) concept, but defined at a higher level of the biological organization. As plant species can be grouped according to common functional characteristics, ecosystems can be grouped according to their common functional behavior.
One of the most used approaches to implement this concept has been the identification of EFTs from the satellite-derived dynamics of primary production, an essential and integrative descriptor of ecosystem functioning.
History
In 1992, Soriano and Paruelo proposed the concept of Biozones to identify vegetation units that share ecosystem functional characteristics, using time series of satellite images of spectral vegetation indices. Biozones were later renamed EFTs by Paruelo et al. (2001), using an equivalent definition and methodology. One of the first uses of the term EFT defined it as "aggregated components of ecosystems whose interactions with one another and with the environment produce differences in patterns of ecosystem structure and dynamics". Walker (1997) proposed the use of a similar term, vegetation functional types, for groups of PFTs in sets that constitute the different states of vegetation succession in non-equilibrium ecosystems. The same term was applied by Scholes et al. in a wider sense for areas having similar ecological attributes, such as PFT composition, structure, phenology, biomass or productivity. Several studies have applied hierarchy and patch-dynamics theories to the definition of ecosystem and landscape functional types at different spatial scales, by scaling up emergent structural and functional properties from patches to regions. Valentin
https://en.wikipedia.org/wiki/Codec%20acceleration | Codec acceleration describes computer hardware that offloads computationally intensive compression or decompression. This allows, for instance, a mobile phone to decode, without stuttering and with less battery drain, a video that would otherwise be very difficult and expensive to decode. Similar acceleration is used in a broad variety of other appliances and computers for similar reasons. What could take 100 watts to decode on a general-purpose processor could take 10 W on a graphics processing unit, and even less on a dedicated hardware codec.
Video codec acceleration
Video codec acceleration is where video (usually including audio as well) encoding and decoding is accelerated in hardware.
Audio codec acceleration
Audio codec acceleration is where audio encoding and decoding is accelerated in hardware.
See also
iDCT
Motion compensation
Discrete cosine transform (DCT)
Quantization
Variable-length code
Entropy (information theory)
DirectX Video Acceleration
High-Definition Video Processor
Intel Clear Video
Nvidia PureVideo
Unified Video Decoder
Video Immersion
Video Processing Engine
Video acceleration
Video compression
Sound technology |
https://en.wikipedia.org/wiki/Dragon%20Buster | Dragon Buster is a platform, action role-playing dungeon crawl game developed by Namco and released in 1984. It runs on Namco Pac-Land hardware, modified to support vertical scrolling. In Japan, the game was ported to the Family Computer (Famicom), MSX, and X68000; the X68000 version was later released for the Virtual Console in the same region on November 18, 2008. Dragon Buster has also been ported to the PSP as part of Namco Museum Battle Collection. It was followed by a Japan-only Famicom sequel, Dragon Buster II: Yami no Fūin, and later by the PlayStation game Dragon Valor, which was both a remake and a sequel.
The game has side-scrolling platform gameplay and an overworld map similar to the later platform games for home consoles and personal computers. Dragon Buster was also the earliest game to feature a double jump mechanic, and one of the first to use a visual health meter.
Plot
In the beginning, a prince named Clovis was born the son of the kingdom's chief bodyguard to the royal Lawrence family. As a young child, Clovis was very mischievous and undisciplined, so his father thought it might be best to place him under the care of a monk who lived in the woods far from the kingdom. Under the monk's care, Clovis began to learn various aspects of knowledge, including how to be a superior swordsman. When word reached the monk that King Lawrence's 16-year-old daughter Celia had been abducted and held by a fearsome dragon, who wished to break the kingdom's spirit and coerce the kingdom to do his bidding, Clovis felt a sense of duty to chase after the dragon and rescue Celia in the name of his father. In order to save the Princess, he trained daily with the monk and learned to withstand injury, whether cut by swords or burned by flame, and still remain just as capable a fighter as ever.
Gameplay
The player must guide the hero Clovis through each round on to the castle to rescue his beloved Princess Celia. There are multiple Princess Celias in the game, |
https://en.wikipedia.org/wiki/Omphalotus%20japonicus | Omphalotus japonicus, commonly known as the tsukiyotake , is an orange to brown-colored gilled mushroom native to Japan and Eastern Asia. It is a member of the cosmopolitan genus Omphalotus, the members of which have bioluminescent fruit bodies which glow in darkness. A 2004 molecular study shows it to be most closely related to a clade composed of Omphalotus nidiformis of Australia, Omphalotus olivascens of Western North America and Omphalotus olearius of Europe.
Omphalotus japonicus is poisonous; its consumption results in acute nausea and vomiting for several hours. It is often confused with edible fungi and mistakenly consumed in Japan.
Taxonomy
Inoko first described this fungus as Pleurotus noctilucens in 1889; however, the name proved invalid, as the binomial had already been used for another species. Seiichi Kawamura gave it the name Pleurotus japonicus in 1915, and Rolf Singer transferred it to Lampteromyces as Lampteromyces japonicus in 1947, until the genus Lampteromyces was sunk into Omphalotus in 2004. Hitoshi Neda has proposed that this fungus is the same as one described by Miles Joseph Berkeley as Agaricus guepiniformis in 1878, as the type specimen fits the description of O. japonicus; hence, based on the principle of priority, the name should be Omphalotus guepiniformis (Berk.) Neda. A proposal was submitted in 2006 to conserve the epithet japonicus against guepiniformis and another synonym, Pleurotus harmandii; it was accepted by the Nomenclature Committee for Fungi in 2008.
The little-known Lampteromyces luminescens, described in 1979 in China by M. Zang, is genetically similar and may be a synonym; however, the taxon is too poorly known to confirm this.
The species is mentioned in Konjaku Monogatarishū, an anthology of Japanese folk tales dating from the 12th century. The Japanese name tsukiyotake translates as "moon-night mushroom".
Description
The fleshy fruit bodies have an eccentric stem rendering the cap kidney- or half-moon-shaped and onl |
https://en.wikipedia.org/wiki/Acoustic%20panel | Acoustic panels (also sound absorption panels, soundproofing panels or sound panels) are sound-absorbing, fabric-wrapped boards designed to control echo and reverberation in a room. They are most commonly used to resolve speech-intelligibility issues in commercial soundproofing treatments. Most panels are constructed as a wooden frame filled with sound-absorbing material (mineral wool, fiberglass, cellulose, open-cell foam, or a combination thereof) and wrapped with fabric.
An acoustic board is a board made from sound-absorbing materials, designed to provide sound insulation. Sound-absorbing material is inserted between two outer walls, and the wall is porous; thus, when sound passes through an acoustic board, its intensity is decreased, the lost sound energy being converted into heat. Acoustic boards are used in auditoriums, halls, seminar rooms, libraries, courts and wherever sound insulation is needed; they are also used in speaker boxes.
See also
Acoustics
Architectural acoustics
Room acoustics
Absorption (acoustics) |
https://en.wikipedia.org/wiki/Windows%20SteadyState | Windows SteadyState (formerly Shared Computer Toolkit) is a discontinued freeware tool developed by Microsoft that gives administrators enhanced options for configuring shared computers, such as hard drive protection and advanced user management. It is primarily designed for use on computers shared by many people, such as internet cafes, schools and libraries.
SteadyState was available until December 31, 2010 from Microsoft for 32-bit versions of Windows XP and Windows Vista. It is incompatible with Windows 7 and later. A similar disk protection component was included in Windows MultiPoint Server 2012.
Features
SteadyState can revert a computer to a previously stored state every time it reboots, or on the administrator's request. When the Windows Disk Protection (WDP) component of SteadyState is turned on, changes to the hard disk are redirected to a temporary cache. WDP offers three modes of protection:
Discard mode: The cache is cleared upon every reboot, thus returning the system to its previous state.
Persist mode: Changes saved in the cache remain intact across reboots. An administrator may later opt to commit these changes. Alternatively, at the specified date and time, the cache expires and its contents are cleared.
Commit mode: Contents of the cache are written out to disk and become permanent. In addition, new changes to the system are no longer redirected to the cache.
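The three modes can be pictured with a small overlay model. This is a toy sketch of our own, keyed on names rather than the disk sectors WDP actually operates on:

```python
class ProtectedDisk:
    """Toy overlay model of Windows Disk Protection: writes are
    redirected to a cache, and a 'reboot' applies one of the three
    WDP modes to that cache."""

    def __init__(self, base):
        self.base = dict(base)   # the protected on-disk state
        self.cache = {}          # redirected (not-yet-permanent) writes

    def write(self, key, value):
        self.cache[key] = value                      # all changes go to the cache

    def read(self, key):
        return self.cache.get(key, self.base.get(key))

    def reboot(self, mode):
        if mode == "discard":    # clear the cache: revert to the stored state
            self.cache.clear()
        elif mode == "persist":  # keep cached changes across the reboot
            pass
        elif mode == "commit":   # write the cache out: changes become permanent
            self.base.update(self.cache)
            self.cache.clear()
        else:
            raise ValueError(f"unknown mode: {mode}")
```

For example, writing a new value and rebooting in discard mode makes a read return the original value again, while a commit reboot folds the cached change into the protected state.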
SteadyState can prepare user environments. User accounts can be locked or forced to log off after certain intervals. A locked account uses a temporary copy of the user's profile during the session; when the user logs off, the temporary profile is deleted. This ensures that any changes the user made during the session are not permanent.
SteadyState provides simple control of more than 80 restrictions covering both individual users as well as the system as a whole. Many of these settings are based on Windows Group Policies, while others are implemented by SteadyState itself. Using Ste |
https://en.wikipedia.org/wiki/Neil%20Robertson%20%28mathematician%29 | George Neil Robertson (born November 30, 1938) is a mathematician working mainly in topological graph theory, currently a distinguished professor emeritus at the Ohio State University.
Education
Robertson earned his B.Sc. from Brandon College in 1959, and his Ph.D. in 1969 at the University of Waterloo under his doctoral advisor William Tutte.
Biography
In 1969, Robertson joined the faculty of the Ohio State University, where he was promoted to Associate Professor in 1972 and Professor in 1984. He was a consultant with Bell Communications Research from 1984 to 1996. He has held visiting faculty positions in many institutions, most extensively at Princeton University from 1996 to 2001, and at Victoria University of Wellington, New Zealand, in 2002. He also holds an adjunct position at King Abdulaziz University in Saudi Arabia.
Research
Robertson is known for his work in graph theory, and particularly for a long series of papers co-authored with Paul Seymour and published over a span of many years, in which they proved the Robertson–Seymour theorem (formerly Wagner's Conjecture). This states that families of graphs closed under the graph minor operation may be characterized by a finite set of forbidden minors. As part of this work, Robertson and Seymour also proved the graph structure theorem describing the graphs in these families.
Additional major results in Robertson's research include the following:
In 1964, Robertson discovered the Robertson graph, the smallest possible 4-regular graph with girth five.
In 1994, with Seymour and Robin Thomas, Robertson extended the number of colors for which the Hadwiger conjecture relating graph coloring to graph minors is known to be true. As of 2012 this remains the strongest known result on this conjecture.
In 1996, Robertson, Seymour, Thomas, and Daniel P. Sanders published a new proof of the four color theorem, confirming the Appel–Haken proof which until then had been disputed. Their proof also leads to an efficient a |
https://en.wikipedia.org/wiki/Compal%20Electronics | Compal Electronics () is a Taiwanese original design manufacturer (ODM), handling the production of notebook computers, monitors, tablets and televisions for a variety of clients around the world, including Apple Inc., Alphabet Inc., Acer, Lenovo, Dell, Toshiba, Hewlett-Packard and Fujitsu. It also licenses brands of its clients.
It is the second-largest contract laptop manufacturer in the world behind Quanta Computer, and shipped over 48 million notebooks in 2010.
Overview
The company is known for producing selected models for Dell (Alienware included), Hewlett-Packard, Compaq, and Toshiba. Compal has designed and built laptops for all major brands and custom builders for over 22 years. The company is listed in Taiwan Stock Exchange. As of 2017, revenues were US$24 billion, with a total workforce of 64,000. The company's headquarters is in Taipei, Taiwan, with offices in mainland China, South Korea, the United Kingdom, and the United States. Compal's main production facility is in Kunshan, China.
Compal is the second-largest notebook manufacturer in the world after Quanta Computer, also based in Taiwan.
History
Compal was founded in June 1984 as a computer peripherals supplier. It went public in April 1990.
In September 2011, Compal announced it would form a joint venture with Lenovo to make laptops in China. The venture was expected to start producing laptops by the end of 2012.
In January 2015, Toshiba announced that, due to intense price competition, it would stop selling televisions in the US and would instead license the Toshiba TV brand to Compal.
In September 2018, it was revealed that due to overwhelming demand for the Apple Watch, Compal was brought on as a second contract manufacturer to produce the Apple Watch Series 4.
CCI
Compal subsidiary Compal Communications (華寶通訊, CCI) is a major manufacturer of mobile phones. The phones are produced on an ODM basis, i.e., the handsets are sold through other brands. In 2006, CCI produced 68.8 million hands |
https://en.wikipedia.org/wiki/Hybrid%20integrated%20circuit | A hybrid integrated circuit (HIC), hybrid microcircuit, hybrid circuit or simply hybrid is a miniaturized electronic circuit constructed of individual devices, such as semiconductor devices (e.g. transistors, diodes or monolithic ICs) and passive components (e.g. resistors, inductors, transformers, and capacitors), bonded to a substrate or printed circuit board (PCB). A PCB having components on a Printed Wiring Board (PWB) is not considered a true hybrid circuit according to the definition of MIL-PRF-38534.
Overview
"Integrated circuit" as the term is currently used refers to a monolithic IC which differs notably from a HIC in that a HIC is fabricated by inter-connecting a number of components on a substrate whereas an IC's (monolithic) components are fabricated in a series of steps entirely on a single wafer which is then diced into chips. Some hybrid circuits may contain monolithic ICs, particularly Multi-chip module (MCM) hybrid circuits.
Hybrid circuits could be encapsulated in epoxy, as shown in the photo, or in military and space applications, a lid was soldered onto the package. A hybrid circuit serves as a component on a PCB in the same way as a monolithic integrated circuit; the difference between the two types of devices is in how they are constructed and manufactured. The advantage of hybrid circuits is that components which cannot be included in a monolithic IC can be used, e.g., capacitors of large value, wound components, crystals, inductors. In military and space applications, numerous integrated circuits, transistors and diodes, in their die form, would be placed on either a ceramic or beryllium substrate. Either gold or aluminum wire would be bonded from the pads of the IC, transistor, or diode to the substrate.
Thick film technology is often used as the interconnecting medium for hybrid integrated circuits. The use of screen printed thick film interconnect provides advantages of versatility over thin film although feature sizes may be larg |
https://en.wikipedia.org/wiki/Statistical%20alchemy | Statistical alchemy was a term originated by John Maynard Keynes to describe econometrics in 1939.
The phrase has subsequently been used by Alvan Feinstein to describe meta-analysis. It is generally regarded as a deprecatory term which undermines attempts to present such activities as meeting the rigorous standards of science.
Econometrics
Keynes (1939) wrote a review of Jan Tinbergen's Statistical Testing of Business-Cycle Theories. Although he praised Tinbergen for his objectivity, he however depicted his methodology as "black magic" which he regarded as essentially untrustworthy. He was unpersuaded that "this brand of statistical alchemy is ripe to become a branch of science" (emphasis in the original).
Often this metaphor is seen as suggesting that econometricians were engaged in a foolhardy pursuit comparable to the alchemical quest of turning base metal into gold. However, G. M. P. Swann points out that Keynes was well aware that such eminent early scientists as Isaac Newton had themselves practised alchemy. Swann instead proposes a more nuanced interpretation of the metaphor as referring to the Alkahest, a universal solvent which, it was claimed, could turn stone into water. He claimed that by restricting econometrics to theory, mathematics and statistics, econometricians had discarded other important applied techniques. Although Ragnar Frisch had warned about this, his warnings were subsequently ignored by other econometricians, who ended up claiming that econometrics constituted a universal solvent.
Meta-analysis
Feinstein (1995) published "Meta-analysis: statistical alchemy for the 21st century", in which he claimed that meta-analysis removes or destroys the scientific requirements of reproducibility and precision. This was equivalent to a free lunch, comparable to the alchemical transmutation of base metals into gold. Playing on the adage about combining apples and oranges, Feinstein suggested that meta-analytic mixt
https://en.wikipedia.org/wiki/Kirby%20%28character%29 | is the titular character and protagonist of the Kirby series of video games owned by Nintendo and HAL Laboratory. He first appeared in Kirby's Dream Land (1992), a platform game for the Game Boy. Since then, Kirby has appeared in over 50 games, ranging from action platformers to puzzle, racing, and even pinball, and has been featured as a playable character in every installment of the Super Smash Bros. series (1999–present). He has also starred in his own anime and manga series. Since 1999, he has been voiced by Makiko Ohmoto.
Kirby's signature skill is his ability to inhale objects or creatures and spit them out as projectiles, as well as the ability to suck in air to float over obstacles. His Copy Ability grants him the power to adopt the abilities of the creatures he inhales, while also wearing various costumes or transforming his shape. He uses these abilities to rescue various lands, such as his homeworld Planet Popstar, from evil forces and antagonists, such as Dark Matter or Nightmare. On these adventures, he often crosses paths with his rivals, King Dedede and Meta Knight. In virtually all of his appearances, Kirby is depicted as a cheerful, innocent and food-loving character.
Critics have described him as one of the cutest and most lovable characters in gaming. He has achieved high popularity with gamers in Japan. He has also been praised for being one of the most versatile characters, due to starring in a large catalogue of games that cuts across a variety of video game genres.
Concept and creation
Kirby was created by Masahiro Sakurai as the player character of the 1992 video game Kirby's Dream Land. Sakurai conceived the idea around May 1990 at the age of 19 while he was working at HAL Laboratory. The character's design was intended to serve as a placeholder graphic for the game's original protagonist in early development and thus was given a simplistic ball-like appearance. Sakurai switched to the placeholder design after deciding that it served the |
https://en.wikipedia.org/wiki/International%20Society%20for%20Design%20and%20Development%20in%20Education | The International Society for Design and Development in Education (ISDDE) was formed in 2005 with the goal of improving educational design in mathematics and science education around the world. Educational design has been an invisible topic relative to educational research, and there has been very little direct attention focused on design principles and design processes in educational design.
Society goals
This international society, focused on mathematics and science education for strategic reasons, has the following main goals:
broadly improve design and development processes used in educational design
build and support a community among educational designers and create transformational training opportunities for new educational designers
increase the impact of educational designers on educational practice throughout the world
Governance
The society is run by an Executive of approximately 12 members. Three officers have particular duties (such as appointing local chairs of the annual conference, organizing the prize process, recruiting and reviewing new Fellows and members, and directing the journal).
Current executive chair Lynne McClure, University of Cambridge
Secretary Kristen Tripet, Australian Academy of Science
Chair history
Hugh Burkhardt 2005-2009
Christian Schunn 2010-2014
Susan McKenney 2015-2016
Lynne McClure 2017-2018
Jacquey Barber 2019-2020
Additional details on society governance are described in the society's constitution.
ISDDE Journal
Starting in 2008, the society has published an open-access electronic journal, Educational Designer, with roughly annual issues. The editor-in-chief is Kaye Stacey from the University of Melbourne. As an online-only journal, it has the advantage of being able to provide detailed worked examples for other designers.
Annual conference
2005 Oxford, England; Conference chair Hugh Burkhardt
2006 Oxford, England; Conference chair Hugh Burkhardt
2007 Berkeley, California, USA; Conference chai |
https://en.wikipedia.org/wiki/R-SMAD | R-SMADs are receptor-regulated SMADs. SMADs are transcription factors that transduce extracellular TGF-β superfamily ligand signaling from cell membrane bound TGF-β receptors into the nucleus where they activate transcription TGF-β target genes. R-SMADS are directly phosphorylated on their c-terminus by type 1 TGF-β receptors through their intracellular kinase domain, leading to R-SMAD activation.
R-SMADs include SMAD2 and SMAD3 from the TGF-β/Activin/Nodal branch, and SMAD1, SMAD5 and SMAD8 from the BMP/GDF branch of TGF-β signaling.
In response to signals by the TGF-β superfamily of ligands these proteins associate with receptor kinases and are phosphorylated at an SSXS motif at their extreme C-terminus. These proteins then typically bind to the common mediator Smad or co-SMAD SMAD4.
Smad complexes then accumulate in the cell nucleus where they regulate transcription of specific target genes:
SMAD2 and SMAD3 are activated in response to TGF-β/Activin or Nodal signals.
SMAD1, SMAD5 and SMAD8 (also known as SMAD9) are activated in response to BMP (bone morphogenetic protein) or GDF signals.
SMAD6 and SMAD7 may be referred to as I-SMADs (inhibitory SMADs), which form trimers with R-SMADs and block their ability to induce gene transcription, both by competing with R-SMADs for receptor binding and by marking TGF-β receptors for degradation.
See also
TGF beta signaling pathway |
https://en.wikipedia.org/wiki/Weyl%20integration%20formula | In mathematics, the Weyl integration formula, introduced by Hermann Weyl, is an integration formula for a compact connected Lie group G in terms of a maximal torus T. Precisely, it says there exists a real-valued continuous function u on T such that for every class function f on G:
\int_G f(g)\,dg = \int_T f(t)\,u(t)\,dt.
Moreover, u is explicitly given as: u(t) = \frac{1}{|W|} \left|\det\left(\operatorname{Ad}(t^{-1}) - I\right)_{\mathfrak{g}/\mathfrak{t}}\right| = \frac{1}{|W|} \prod_{\alpha > 0} \left|1 - e^{\alpha(t)}\right|^2, where W is the Weyl group determined by T and
the product runs over the positive roots \alpha of G relative to T. More generally, if f is only a continuous function, then
\int_G f(g)\,dg = \int_T u(t) \left( \int_{G/T} f\left(g t g^{-1}\right)\,d(gT) \right) dt.
The formula can be used to derive the Weyl character formula. (The theory of Verma modules, on the other hand, gives a purely algebraic derivation of the Weyl character formula.)
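As a concrete special case (a standard textbook example, included here for illustration), take G = SU(2) with maximal torus T the diagonal matrices t_θ = diag(e^{iθ}, e^{−iθ}). The Weyl group has order 2, there is a single positive root with e^{α(t_θ)} = e^{2iθ}, and the formula specializes to:

```latex
\int_{SU(2)} f(g)\,dg
  \;=\; \frac{1}{2}\int_{0}^{2\pi} f(t_\theta)\,\bigl|1 - e^{2i\theta}\bigr|^{2}\,\frac{d\theta}{2\pi}
  \;=\; \frac{2}{\pi}\int_{0}^{\pi} f(t_\theta)\,\sin^{2}\theta \, d\theta,
```

using |1 − e^{2iθ}|² = 4 sin²θ.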
Derivation
Consider the map
q : G/T \times T \to G, \quad (gT, t) \mapsto g t g^{-1}.
The Weyl group W acts on T by conjugation and on G/T from the left by: for nT \in W,
nT \cdot gT = g n^{-1} T.
Let (G/T \times T)/W be the quotient space by this W-action. Then, since the W-action on the regular points is free, the quotient map
p : G/T \times T \to (G/T \times T)/W
is a smooth covering with fiber W when it is restricted to regular points. Now, q is p followed by the induced map \overline{q} : (G/T \times T)/W \to G, and the latter is a homeomorphism on regular points and so has degree one. Hence, the degree of q is |W| and, by the change of variable formula, we get:
|W| \int_G f\,dg = \int_{G/T \times T} q^{*}(f\,dg).
Here, q^{*}f(gT, t) = f(t) since f is a class function. We next compute q^{*}(dg). We identify a tangent space to G/T \times T as \mathfrak{g}/\mathfrak{t} \oplus \mathfrak{t}, where \mathfrak{g}, \mathfrak{t} are the Lie algebras of G, T. For each X \in \mathfrak{g}/\mathfrak{t} and Y \in \mathfrak{t},
q\left(g e^{sX} T,\; t e^{sY}\right) = g e^{sX} t e^{sY} e^{-sX} g^{-1},
and thus, on \mathfrak{g}/\mathfrak{t} \oplus \mathfrak{t}, we have:
(dq)_{(gT, t)}(X \oplus Y) = \operatorname{Ad}(g)\left(\left(\operatorname{Ad}(t^{-1}) - I\right)X + Y\right).
Similarly we see that, on \mathfrak{t}, (dq)(0 \oplus Y) = \operatorname{Ad}(g) Y. Now, we can view G as a connected subgroup of an orthogonal group (as it is compact connected) and thus \det(\operatorname{Ad}(g)) = 1. Hence,
q^{*}(dg) = \left|\det\left(\operatorname{Ad}(t^{-1}) - I\right)_{\mathfrak{g}/\mathfrak{t}}\right|\, d(gT)\, dt.
To compute the determinant, we recall that \mathfrak{g}_{\mathbb{C}} = \mathfrak{t}_{\mathbb{C}} \oplus \bigoplus_{\alpha} \mathfrak{g}_{\alpha}, where the sum runs over the roots and each \mathfrak{g}_{\alpha} has dimension one. Hence, considering the eigenvalues e^{-\alpha(t)} of \operatorname{Ad}(t^{-1}), we get:
\det\left(\operatorname{Ad}(t^{-1}) - I\right)_{\mathfrak{g}/\mathfrak{t}} = \prod_{\alpha > 0} \left(e^{-\alpha(t)} - 1\right)\left(e^{\alpha(t)} - 1\right) = \prod_{\alpha > 0} \left|e^{\alpha(t)} - 1\right|^2,
as each root \alpha takes purely imaginary values.
Weyl character formula
The Weyl character formula is a consequence of the Weyl integration formula as follows. We first note that W can be identified with a subgroup of \operatorname{GL}(\mathfrak{t}^*); in particular, it acts on the set of roots, linear functionals on \mathfrak{t}. Let
A_{\mu} = \sum_{w \in W} (-1)^{l(w)} e^{w(\mu)}, \qquad \mu \in \mathfrak{t}^*,
where l(w) is the length of w. Let \Lambda be the weight lattice of G relative to T. The Weyl character formula then says that: for each irreducible character \chi of G, there exists
https://en.wikipedia.org/wiki/Microsoft%20Office%20password%20protection | Microsoft Office password protection is a security feature that allows Microsoft Office documents (e.g. Word, Excel, PowerPoint) to be protected with a user-provided password.
Types
There are two types of passwords that can be set to a document:
A password to encrypt a document restricts opening and viewing it. This is possible in all Microsoft Office applications. Since Office 2007, such passwords are hard to break if a sufficiently complex password is chosen. If the password can be obtained through social engineering, however, the strength of the underlying cipher is irrelevant.
A password that does not encrypt the document but restricts modification. Such passwords can be circumvented easily.
In Word and PowerPoint the password restricts modification of the entire document.
In Excel passwords restrict modification of the workbook, a worksheet within it, or individual elements in the worksheet.
History of Office encryption
Weak encryptions
Excel and Word 95 and prior editions use a weak protection algorithm that converts a password to a 16-bit verifier and a 16-byte XOR obfuscation array key. Hacking software is now readily available to recover the 16-byte key and decrypt a password-protected document.
Office 97, 2000, XP and 2003 use RC4 with 40 bits. The implementation contains multiple vulnerabilities rendering it insecure.
Office XP and 2003 added the option to use a custom protection algorithm: choosing a non-standard Cryptographic Service Provider allows increasing the key length. Weak passwords can still be recovered quickly even when a custom CSP is in use.
AES since Office 2007
In Office 2007, protection was significantly enhanced by adopting a modern algorithm, the Advanced Encryption Standard (AES). At present, no publicly available software can break this encryption directly. With the help of the SHA-1 hash function, the password is stretched into a 128-bit key through 50,000 hash iterations before the document is opened; as a result, the time required to crack it is vastly increased, similar to PBKDF2, sc
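The effect of this kind of iterated key stretching can be sketched as follows. This is a simplified illustration, not the exact ECMA-376 derivation (which also mixes a block key into a final hashing step before the AES key is derived):

```python
import hashlib

def stretch_password(password: str, salt: bytes, iterations: int = 50000) -> bytes:
    """Stretch a password into a 128-bit key via iterated SHA-1.

    Simplified sketch of the Office 2007-style scheme: hash salt+password
    once, then repeatedly hash a 4-byte round counter concatenated with the
    previous digest. The many iterations slow down each password guess.
    """
    h = hashlib.sha1(salt + password.encode("utf-16-le")).digest()
    for i in range(iterations):
        h = hashlib.sha1(i.to_bytes(4, "little") + h).digest()
    return h[:16]  # truncate the 160-bit digest to a 128-bit key
```

Because every candidate password must pass through all 50,000 hash rounds, a brute-force attack is slowed by roughly that factor compared with a single hash.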
https://en.wikipedia.org/wiki/Alpha%20Chiang | Alpha Chung-i Chiang (born 1927) is an American mathematical economist, Professor Emeritus of Economics at the University of Connecticut, and author of perhaps the best-known mathematical economics textbook, Fundamental Methods of Mathematical Economics.
Chiang's undergraduate studies at St. John's University, Shanghai led to a BA in 1946; his postgraduate studies led to an MA from the University of Colorado in 1948 and a PhD from Columbia University in 1954.
He taught at Denison University in Ohio from 1954 to 1964, serving as Chairman of the Department of Economics in the last three years there. Then he joined the University of Connecticut as Professor of Economics in 1964. He taught for 28 years at the University of Connecticut—becoming in 1992 Professor Emeritus of Economics. He also held Visiting Professorships at New Asia College (Hong Kong), Cornell University, Lingnan College (Hong Kong), and Helsinki School of Economics and Business Administration.
Married to Emily Chiang, he has a son Darryl, and a daughter Tracey. His wide extracurricular interests include ballroom dancing, Chinese opera, Chinese painting/calligraphy, photography, and piano. A piano-music composition of his is featured in Tammy Lum's CD "Ballades & Ballads" (2015).
Selected publications
Chiang, A. C. (1967). Fundamental Methods of Mathematical Economics. McGraw-Hill, New York. (Now in its 4th edition (2005), with Kevin Wainwright.)
Chiang, A. C. (1992). Elements of Dynamic Optimization. McGraw-Hill, New York. Now published by Waveland Press, Illinois.
https://en.wikipedia.org/wiki/Lineage%20%28genetic%29 | A genetic lineage, also known as genetic pedigree, is a series of mutations or changes in the genetic code which connect an ancestor's genetic code to their descendant's genetic code. These pieces of genetic code can be categorized in several different sizes as alleles, haplotypes, or haplogroups. A genetic lineage is different from an evolutionary lineage because a genetic lineage applies to a specific area of genetic code or locus while an evolutionary lineage applies to the organism as a whole.
In cases where the genetic tree is very bushy, the order of mutations in the lineage is mostly known; for example, the order of mutations between E1b1b and E1b1b1a1a on the human Y-chromosome.
For example, the ancient African ape evolved into the gorilla–chimpanzee–human ancestor, which further evolved into the chimpanzee–human ancestor and then into humans. While most human lineages coalesce with chimpanzee lineages, which then converge with gorilla lineages, a few human lineages coalesce with gorilla lineages before converging with chimpanzee lineages (or chimpanzee lineages coalesce with gorilla lineages before converging with human lineages). This occurs because speciation splits evolutionary lineages in non-discrete events involving tens to tens of thousands of individuals in each developing taxon.
Basal lineage
In genetics, a basal lineage is a genetic lineage that connects a variant allele (type) possessed by a common ancestor to the two descendant variants into which it evolves. An example of a basal lineage is the lineage between mitochondrial 'Eve' and L0 or L1. Basal lineages may include types that are no longer represented in the extant population, being defined only by derivative types, such as the CRS for L1.
Peripheral line |
https://en.wikipedia.org/wiki/Cesare%20Arzel%C3%A0 | Cesare Arzelà (6 March 1847–15 March 1912) was an Italian mathematician who taught at the University of Bologna and is recognized for his contributions in the theory of functions, particularly for his characterization of sequences of continuous functions, generalizing the one given earlier by Giulio Ascoli in the Arzelà–Ascoli theorem.
Life
He was a pupil of the Scuola Normale Superiore of Pisa, where he graduated in 1869. Arzelà came from a poor household and therefore could not begin his studies until 1871, when he studied in Pisa under Enrico Betti and Ulisse Dini.
He worked in Florence from 1875, and in 1878 he obtained the Chair of Algebra at the University of Palermo.
In 1880 he became a professor of analysis at the University of Bologna, where he conducted research in the theory of functions. His most famous student was Leonida Tonelli.
In 1889 he generalized Ascoli's theorem to the Arzelà–Ascoli theorem, an important result in the theory of functions.
He was a member of the Accademia Nazionale dei Lincei, and of several other academies.
Works
See also
Total variation
https://en.wikipedia.org/wiki/Canonical%20Huffman%20code | In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Rather than storing the structure of the code tree explicitly, canonical Huffman codes are ordered in such a way that it suffices to only store the lengths of the codewords, which reduces the overhead of the codebook.
Motivation
Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it.
In order for a symbol code scheme such as the Huffman code to be decompressed, the same model that the encoding algorithm used to compress the source data must be provided to the decoding algorithm so that it can use it to decompress the encoded data. In standard Huffman coding this model takes the form of a tree of variable-length codes, with the most frequent symbols located at the top of the structure and being represented by the fewest bits.
However, this code tree introduces two critical inefficiencies into an implementation of the coding scheme. Firstly, each node of the tree must store either references to its child nodes or the symbol that it represents. This is expensive in memory usage and if there is a high proportion of unique symbols in the source data then the size of the code tree can account for a significant amount of the overall encoded data. Secondly, traversing the tree is computationally costly, since it requires the algorithm to jump randomly through the structure in memory as each bit in the encoded data is read in.
Canonical Huffman codes address these two issues by generating the codes in a clear standardized format; all t |
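The standard assignment procedure can be sketched as follows — a minimal Python illustration, assuming the common convention of sorting symbols first by code length and then alphabetically, so that the codebook is fully determined by the lengths alone:

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords given only per-symbol code lengths.

    Symbols are processed in order of (code length, symbol); each codeword
    is the previous codeword plus one, left-shifted whenever the length
    increases. The resulting codes are numerically ordered within each
    length, which is what lets a decoder rebuild them from lengths alone.
    """
    code = 0
    prev_len = 0
    codes = {}
    for sym, bit_len in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= bit_len - prev_len  # grow the codeword to the new length
        codes[sym] = format(code, "0{}b".format(bit_len))
        code += 1
        prev_len = bit_len
    return codes

print(canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}))
# → {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

A decoder given only the lengths {a: 1, b: 2, c: 3, d: 3} runs the same loop and recovers the identical codebook, so the tree structure never needs to be transmitted.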
https://en.wikipedia.org/wiki/Random%20digit%20dialing | Random digit dialing (RDD) is a method for selecting people for involvement in telephone statistical surveys by generating telephone numbers at random. Random digit dialing has the advantage that it includes unlisted numbers that would be missed if the numbers were selected from a phone book. In populations where there is a high telephone-ownership rate, it can be a cost efficient way to get complete coverage of a geographic area.
RDD is widely used for statistical surveys, including election opinion polling and selection of experimental control groups.
When the desired coverage area matches up closely enough with country codes and area codes, random digits can be chosen within the desired area codes. In cases where the desired region doesn't match area codes (for instance, electoral districts), surveys must rely on telephone databases, and on self-reported address information for unlisted numbers. Increasing use of mobile phones (although there are techniques that allow infusion of wireless phones into the RDD sampling frame), number portability, and VoIP have begun to decrease the ability of RDD to target specific areas within a country and achieve complete coverage.
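A toy generator of this kind can be sketched as follows. This is a hypothetical illustration only: production RDD frames also screen out unassigned exchanges, business blocks, and known-invalid number ranges, which this sketch skips:

```python
import random

def random_digit_dial(area_codes, n, seed=None):
    """Generate n random US-style phone numbers within the given area codes.

    Each number pairs a randomly chosen area code with randomly generated
    exchange and line digits, so unlisted numbers are reachable too.
    """
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        area = rng.choice(area_codes)
        exchange = rng.randint(200, 999)  # NANP exchanges start with 2-9
        line = rng.randint(0, 9999)
        numbers.append("({}) {:03d}-{:04d}".format(area, exchange, line))
    return numbers
```

Seeding the generator makes a sample reproducible, which matters when a survey's selection procedure must be auditable.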
See also
Autodialer |