| source | text |
|---|---|
https://en.wikipedia.org/wiki/Hardware%20performance%20counter | In computers, hardware performance counters (HPCs), or hardware counters, are a set of special-purpose registers built into modern microprocessors to store the counts of hardware-related activities within computer systems. Advanced users often rely on these counters to conduct low-level performance analysis or tuning.
Implementations
The number of available hardware counters in a processor is limited, while each CPU model may have many different events that a developer might like to measure. Each counter can be programmed with the index of an event type to be monitored, such as an L1 cache miss or a branch misprediction.
One of the first processors to implement such counters, and an associated instruction (RDPMC) to access them, was the Intel Pentium, but they were not documented until Terje Mathisen wrote an article about reverse-engineering them in Byte in July 1994.
The following table shows some examples of CPUs and the number of available hardware counters:
Versus software techniques
Compared to software profilers, hardware counters provide low-overhead access to a wealth of detailed performance information related to the CPU's functional units, caches, main memory, etc. Another benefit of using them is that, in general, no source-code modifications are needed. However, the types and meanings of hardware counters vary from one architecture to another due to the variation in hardware organizations.
There can be difficulties correlating the low-level performance metrics back to source code. The limited number of registers to store the counters often forces users to conduct multiple measurements to collect all desired performance metrics.
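One common workaround for the small counter file is time-multiplexing: the kernel rotates event sets onto the physical counters and scales each raw count by the fraction of time the event was actually scheduled (Linux perf exposes this as time_enabled/time_running). The sketch below models only the scaling arithmetic; the event names and per-quantum rates are entirely synthetic, not taken from any real CPU.

```python
# Sketch: time-multiplexing many events onto a few hardware counters.
# A counter accumulates only while its event is scheduled; the raw count
# is scaled by total_time / time_scheduled to estimate the full-run value.

def multiplex(events, num_counters, slices):
    """Round-robin event groups over `slices` scheduling quanta.

    events: dict name -> events-per-quantum rate (synthetic workload model)
    Returns dict name -> (raw_count, scaled_estimate).
    """
    names = list(events)
    raw = {n: 0 for n in names}
    scheduled = {n: 0 for n in names}
    for t in range(slices):
        # Pick the group of `num_counters` events that is live this quantum.
        start = (t * num_counters) % len(names)
        live = [names[(start + i) % len(names)] for i in range(num_counters)]
        for n in live:
            raw[n] += events[n]       # counter ticks only while scheduled
            scheduled[n] += 1
    return {n: (raw[n], raw[n] * slices / scheduled[n]) for n in names}

# Five event types competing for two physical counters.
rates = {"cycles": 1000, "instructions": 800, "l1_miss": 50,
         "branch_miss": 20, "tlb_miss": 5}
estimates = multiplex(rates, num_counters=2, slices=100)
for name, (raw_count, est) in estimates.items():
    print(name, raw_count, round(est))
```

With a perfectly steady workload the scaled estimate reproduces the true total; for bursty workloads the extrapolation is only a statistical estimate, which is one reason multiplexed counts carry more noise than dedicated counters.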
Instruction based sampling
Modern superscalar processors schedule and execute multiple instructions out-of-order at one time. These "in-flight" instructions can retire at any time, depending on memory access, hits in cache, stalls in the pipeline and many other factors. This can cause performance counter events to be attributed |
https://en.wikipedia.org/wiki/Oxford%20Centre%20for%20Gene%20Function | The Oxford Centre for Gene Function is a multidisciplinary research institute in the University of Oxford, England. It is directed by Frances Ashcroft, Kay Davies and Peter Donnelly.
It involves the departments of Human anatomy and genetics, Physiology, and Statistics.
External links
Oxford Centre for Gene Function website
Wellcome Trust Centre for Human Genetics
Departments of the University of Oxford
Genetics in the United Kingdom
Human genetics
Research institutes in Oxford |
https://en.wikipedia.org/wiki/Leiden%20Conventions | The Leiden Conventions or Leiden system is an established set of rules, symbols, and brackets used to indicate the condition of an epigraphic or papyrological text in a modern edition. In previous centuries of classical scholarship, scholars who published texts from inscriptions, papyri, or manuscripts used divergent conventions to indicate the condition of the text and editorial corrections or restorations. The Leiden meeting was designed to help to redress this confusion.
The earliest form of the conventions was agreed at a meeting of classical scholars at the University of Leiden in 1931 and published the following year. There are minor variations in the use of the conventions between epigraphy and papyrology (and even between Greek and Latin epigraphy). More recently, scholars have published improvements and adjustments to the system.
Most important sigla
See also
+++ (modem) (for use of sequence in telecommunication possibly inspired by Leiden Conventions)
Corpus Inscriptionum Latinarum
Ellipsis
EpiDoc
Lacuna (manuscripts)
Primary source
Supplementum Epigraphicum Graecum
Citations
General and cited references
Marcus Dohnicht, "Zusammenstellung der diakritischen Zeichen zur Wiedergabe der lateinischen Inschrifttexte der Antike für den Unicode" (Entwurft Juli 2000).
Sterling Dow, "Conventions in editing: a suggested reformulation of the Leiden System", Greek, Roman and Byzantine Studies Scholarly Aids 2, Durham, 1969.
Tom Elliott et al. (2000–2008), "All Transcription Guidelines" in EpiDoc Guidelines.
Traianos Gagos (1996), "Conventions", in A Select Bibliography of Papyrology.
J. J. E. Hondius, "Praefatio", Supplementum Epigraphicum Graecum 7 (1934), p. i.
A. S. Hunt, "A note on the transliteration of papyri", Chronique d'Égypte 7 (1932), pp. 272–274.
Hans Krummrey, Silvio Panciera, "Criteri di edizione e segni diacritici", Tituli 2 (1980), pp. 205–215.
Silvio Panciera, "Struttura dei supplementi e segni diacritici dieci anni dopo" in SupIt |
https://en.wikipedia.org/wiki/Common%20species | Common species and uncommon species are designations used in ecology to describe the population status of a species. Commonness is closely related to abundance. Abundance refers to the frequency with which a species is found in controlled samples; in contrast, species are defined as common or uncommon based on their overall presence in the environment. A species may be locally abundant without being common.
However, "common" and "uncommon" are also sometimes used to describe levels of abundance, with a common species being less abundant than an abundant species, while an uncommon species is more abundant than a rare species.
Common species are frequently regarded as being at low risk of extinction simply because they exist in large numbers, and hence their conservation status is often overlooked. While this is broadly logical, there are several cases of once-common species being driven to extinction, such as the passenger pigeon and the Rocky Mountain locust, which numbered in the billions and trillions respectively before their demise. Moreover, a small proportional decline in a common species results in the loss of a large number of individuals, and of the contribution to ecosystem function that those individuals represented. A recent paper argued that because common species shape ecosystems, contribute disproportionately to ecosystem functioning, and can show rapid population declines, conservation should look more closely at the trade-off between preventing species extinctions and preventing the depletion of populations.
See also
Rare species
Abundance (ecology)
Notes
Environmental conservation
Environmental terminology
Ecology terminology |
https://en.wikipedia.org/wiki/Regular%20Polytopes%20%28book%29 | Regular Polytopes is a geometry book on regular polytopes written by Harold Scott MacDonald Coxeter. It was originally published by Methuen in 1947 and by Pitman Publishing in 1948, with a second edition published by Macmillan in 1963 and a third edition by Dover Publications in 1973.
The Basic Library List Committee of the Mathematical Association of America has recommended that it be included in undergraduate mathematics libraries.
Overview
The main topics of the book are the Platonic solids (regular convex polyhedra), related polyhedra, and their higher-dimensional generalizations. It has 14 chapters, along with multiple appendices, providing a more complete treatment of the subject than any earlier work, and incorporating material from 18 of Coxeter's own previous papers. It includes many figures (both photographs of models by Paul Donchian and drawings), tables of numerical values, and historical remarks on the subject.
The first chapter discusses regular polygons, regular polyhedra, basic concepts of graph theory, and the Euler characteristic. Using the Euler characteristic, Coxeter derives a Diophantine equation whose integer solutions describe and classify the regular polyhedra. The second chapter uses combinations of regular polyhedra and their duals to generate related polyhedra, including the semiregular polyhedra, and discusses zonohedra and Petrie polygons. Here and throughout the book, the shapes it discusses are identified and classified by their Schläfli symbols.
Chapters 3 through 5 describe the symmetries of polyhedra, first as permutation groups and later, in the most innovative part of the book, as the Coxeter groups, groups generated by reflections and described by the angles between their reflection planes. This part of the book also describes the regular tessellations of the Euclidean plane and the sphere, and the regular honeycombs of Euclidean space. Chapter 6 discusses the star polyhedra including the Kepler–Poinsot polyhedra.
The rema |
https://en.wikipedia.org/wiki/Asterix%20%28character%29 | Asterix (; ) is a fictional character and the titular hero of the French comic book series Asterix.
The series portrays him as a diminutive but fearless Gaulish warrior living in the time of Julius Caesar's Gallic Wars. Asterix was created in 1959 by writer René Goscinny and illustrator Albert Uderzo. Since then, thirty-five books in the series have been released, with Uderzo taking over writing duties after the death of Goscinny in 1977. Asterix has also appeared in several animated and live-action film adaptations of the series, and serves as the mascot of the amusement park Parc Astérix. Before that, he was also the mascot of the magazine Pilote.
Character synopsis
Asterix is a diminutive but fearless and cunning warrior, ever eager for new adventures. He lives around 50 BC in a fictional village in northwest Armorica (a region of ancient Gaul mostly equivalent to modern Brittany). This village is celebrated as the only part of Gaul still not conquered by Julius Caesar and his Roman legions. The inhabitants of the village gain superhuman strength by drinking a magic potion prepared by the druid, Getafix (). The village is surrounded by, on one side, the ocean, and on the other by four unlucky Roman garrisons, intended to keep a watchful eye and ensure that the Gauls do not get up to mischief. These camps are Compendium, Aquarium, Laudanum and Totorum.
Asterix' parents are former villagers who now live in the city of Condatum (Rennes), and he has cousins in Britannia (Britain). He shares his birthday with his clumsy, oversized, but extremely strong and good-hearted best friend, Obelix.
Asterix is one of the smartest and most sensible members of the village, and so he is usually chosen for any dangerous, important or exotic mission. Unlike most of the other villagers, he does not start or join brawls for the fun of it, although he does enjoy a good fight when there's cause. He rarely resorts to weapons, preferring to rely on his wits, and when necessary, his ( |
https://en.wikipedia.org/wiki/Flap%20endonuclease | Flap endonucleases (FENs, also known as 5' nucleases in older references) are a class of nucleolytic enzymes that act as both 5'-3' exonucleases and structure-specific endonucleases on specialised DNA structures that occur during the biological processes of DNA replication, DNA repair, and DNA recombination. Flap endonucleases have been identified in eukaryotes, prokaryotes, archaea, and some viruses. Organisms can have more than one FEN homologue; this redundancy may give an indication of the importance of these enzymes. In prokaryotes, the FEN enzyme is found as an N-terminal domain of DNA polymerase I, but some prokaryotes appear to encode a second homologue.
The endonuclease activity of FENs was initially identified as acting on a DNA duplex which has a single-stranded 5' overhang on one of the strands (termed a "5' flap", hence the name flap endonuclease). FENs catalyse hydrolytic cleavage of the phosphodiester bond at the junction of single- and double-stranded DNA. Some FENs can also act as 5'-3' exonucleases on the 5' terminus of the flap strand and on 'nicked' DNA substrates.
Protein structure models based on X-ray crystallography data suggest that FENs have a flexible arch created by two α-helices through which the single 5' strand of the 5' flap structure can thread.
Flap endonucleases have been used in biotechnology, for example the Taqman PCR assay and the Invader Assay for mutation and single nucleotide polymorphism (SNP) detection.
See also
Endonucleases |
https://en.wikipedia.org/wiki/Shallow%20water%20equations | The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below).
The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived.
While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction: for example, the vertical velocity cannot be zero when the floor changes depth, so if it were assumed zero, only flat floors could be modelled with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free-surface displacement) has been found, the vertical velocity can be recovered via the continuity equation.
Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow.
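For concreteness, in one horizontal dimension over a flat bed, with h the fluid depth, u the depth-averaged horizontal velocity, and g the gravitational acceleration, the resulting inviscid system in conservation form reads (a standard textbook special case of the equations described above):

```latex
\begin{aligned}
&\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0
  &&\text{(conservation of mass)}\\[4pt]
&\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2}\,g h^{2}\right) = 0
  &&\text{(conservation of momentum)}
\end{aligned}
```

The \(\tfrac12 g h^2\) term is the depth-integrated hydrostatic pressure; a sloping bed adds a source term \(-gh\,\partial b/\partial x\) (with b the bed elevation) to the momentum equation.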
Shallow-water e |
https://en.wikipedia.org/wiki/Glycosome | The glycosome is a membrane-enclosed organelle that contains the glycolytic enzymes. The term was first used by Scott and Still in 1968 after they realized that the glycogen in the cell was not static but rather a dynamic molecule. It is found in a few species of protozoa including the Kinetoplastida which include the suborders Trypanosomatida and Bodonina, most notably in the human pathogenic trypanosomes, which can cause sleeping sickness, Chagas's disease, and leishmaniasis. The organelle is bounded by a single membrane and contains a dense proteinaceous matrix. It is believed to have evolved from the peroxisome. This has been verified by work done on Leishmania genetics.
The glycosome is currently being researched as a possible target for drug therapies.
Glycosomes are unique to kinetoplastids and their sister diplonemids. The term glycosome is also used for glycogen-containing structures found in hepatocytes responsible for storing sugar, but these are not membrane bound organelles.
Structure
Glycosomes are composed of glycogen and proteins. The proteins are the enzymes associated with the metabolism of glycogen; these proteins and glycogen form a complex that makes a distinct and separate organelle. The proteins for glycosomes are imported from free cytosolic ribosomes. Proteins imported into the organelle carry a specific targeting sequence, a PTS1 ending sequence, to make sure they go to the right place. They are similar to alpha-granules in the cytosol of a cell that are filled with glycogen. Glycosomes are typically round to oval in shape, with size varying from cell to cell. Although glycogen is also found in the cytoplasm, that in the glycosome is separate, surrounded by a membrane. The membrane is a lipid bilayer. The glycogen found within the glycosome is identical to glycogen found freely in the cytosol. Glycosomes can be associated or attached to many different types of organelles. They have been found to be attached to the sarcoplasmic reticulum and its in |
https://en.wikipedia.org/wiki/Comparison%20of%20content-control%20software%20and%20providers | This is a list of content-control software and services. The software is designed to control what content may or may not be viewed by a reader, especially when used to restrict material delivered over the Internet via the Web, e-mail, or other means. Restrictions can be applied at various levels: a government can apply them nationwide, an ISP can apply them to its clients, an employer to its personnel, a school to its teachers and/or students, a library to its patrons and/or staff, a parent to a child's computer or computer account or an individual to his or her own computer.
Programs and services
Providers
Amesys
Awareness Technologies
Barracuda Networks
Blue Coat Systems
CronLab
Cyberoam
Detica
Dope.security
Fortinet
GoGuardian
Huawei
Isheriff
Lightspeed Systems
Retina-X Studios
SafeDNS
Securly
SmoothWall
SonicWall
Sophos
SurfControl
Webroot
Websense
MICT, 456.ir
See also
Accountability software
Ad filtering
Computer surveillance
Deep packet inspection
Deep content inspection
Internet censorship
Internet safety
Parental controls
Wordfilter |
https://en.wikipedia.org/wiki/Software%20metering | Software metering is the monitoring and controlling of software for analytics and the enforcement of agreements. It can be either passive, where data is simply collected and no action is taken, or active, where access is restricted for enforcement.
Types
Software metering refers to several areas:
Tracking and maintaining software licenses. One needs to make sure that only the allowed number of licenses is in use, and at the same time, that there are enough licenses for everyone using the software. This can include monitoring concurrent usage of software for real-time enforcement of license limits. Such license monitoring usually includes detecting when a license needs to be updated due to version changes, or when upgrades or even rebates are possible.
Real-time monitoring of all (or selected) applications running on the computers within the organization in order to detect unregistered or unlicensed software and prevent its execution, or limit its execution to within certain hours. The systems administrator can configure the software metering agent on each computer in the organization, for example, to prohibit the execution of games before 17:00.
Fixed planning to allocate software usage to computers according to the policies a company specifies and to maintain a record of usage and attempted usage. A company can check out and check in licenses for mobile users, and can also keep a record of all licenses in use. This is often used when limited license counts are available to avoid violating strict license controls.
A method of software licensing where the licensed software automatically records how many times, or for how long one or more functions in the software are used, and the user pays fees based on this actual usage (also known as 'pay-per-use') |
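The 'pay-per-use' model in the last item can be sketched as a thin metering wrapper that records each invocation and bills against the tally. The class, fee, and function names below are invented for illustration; a real metering agent would also persist and report these records securely.

```python
import functools
import time

class UsageMeter:
    """Toy pay-per-use meter: counts calls and accumulated wall time."""

    def __init__(self, fee_per_call=0.01):
        self.fee_per_call = fee_per_call
        self.calls = 0
        self.seconds = 0.0

    def metered(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                # Record usage even if the call raises.
                self.calls += 1
                self.seconds += time.perf_counter() - start
        return wrapper

    def invoice(self):
        # Simplest scheme: a flat fee per recorded call.
        return self.calls * self.fee_per_call

meter = UsageMeter(fee_per_call=0.05)

@meter.metered
def licensed_feature(x):
    return x * x

for i in range(10):
    licensed_feature(i)

print(meter.calls)                # 10
print(round(meter.invoice(), 2))  # 0.5
```

The same hook point could just as easily enforce limits (raise once a quota is hit), which is the "active" variant of metering described at the top of the article.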
https://en.wikipedia.org/wiki/Gravitation%20%28book%29 | Gravitation is a widely adopted textbook on Albert Einstein's general theory of relativity, written by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. It was originally published by W. H. Freeman and Company in 1973 and reprinted by Princeton University Press in 2017. It is frequently abbreviated MTW (for its authors' last names). The cover illustration, drawn by Kenneth Gwin, is a line drawing of an apple with cuts in the skin to show the geodesics on its surface.
The book contains 10 parts and 44 chapters, each beginning with a quotation. The bibliography has a long list of original sources and other notable books in the field. While this may not be considered the best introductory text because its coverage may overwhelm a newcomer, and even though parts of it are now out of date, it remains a highly valued reference for advanced graduate students and researchers.
Content
Subject matter
After a brief review of special relativity and flat spacetime, physics in curved spacetime is introduced and many aspects of general relativity are covered; particularly about the Einstein field equations and their implications, experimental confirmations, and alternatives to general relativity. Segments of history are included to summarize the ideas leading up to Einstein's theory. The book concludes by questioning the nature of spacetime and suggesting possible frontiers of research. Although the exposition on linearized gravity is detailed, one topic which is not covered is gravitoelectromagnetism. Some quantum mechanics is mentioned, but quantum field theory in curved spacetime and quantum gravity are not included.
The topics covered are broadly divided into two "tracks", the first contains the core topics while the second has more advanced content. The first track can be read independently of the second track. The main text is supplemented by boxes containing extra information, which can be omitted without loss of continuity. Margin notes are also inserted |
https://en.wikipedia.org/wiki/PowerPC%20600 | The PowerPC 600 family was the first family of PowerPC processors built. They were designed at the Somerset facility in Austin, Texas, jointly funded and staffed by engineers from IBM and Motorola as a part of the AIM alliance. Somerset was opened in 1992 and its goal was to make the first PowerPC processor and then keep designing general purpose PowerPC processors for personal computers. The first incarnation became the PowerPC 601 in 1993, and the second generation soon followed with the PowerPC 603, PowerPC 604 and the 64-bit PowerPC 620.
Nuclear family
PowerPC 601
The PowerPC 601 was the first generation of microprocessors to support the basic 32-bit PowerPC instruction set. The design effort started in earnest in mid-1991 and the first prototype chips were available in October 1992. The first 601 processors were introduced in an IBM RS/6000 workstation in October 1993 (alongside its more powerful multichip cousin, the IBM POWER2 line of processors) and the first Apple Power Macintoshes on March 14, 1994. The 601 was the first advanced single-chip implementation of the POWER/PowerPC architecture, designed on a crash schedule to establish PowerPC in the marketplace and cement the AIM alliance. In order to achieve an extremely aggressive schedule while including substantially new functionality (such as substantial performance enhancements, new instructions and, importantly, POWER/PowerPC's first symmetric multiprocessing (SMP) implementation), the design leveraged a number of key technologies and project management strategies. The 601 team leveraged much of the basic structure and portions of the IBM RISC Single Chip (RSC) processor, but also included support for the vast majority of the new PowerPC instructions not in the POWER instruction set. Nearly every portion of the RSC design was modified, and many design blocks were substantially modified or completely redesigned, given the completely different unified I/O bus structure and SMP/memory-coherency support. Ne |
https://en.wikipedia.org/wiki/Fighters%20against%20Nazis%20Medal | Fighters against Nazis Medal ( Ot Halohem BaNatzim) is an Israeli decoration that is awarded to World War II veterans.
Instituted as a ribbon bar in 1967, it was first awarded to World War II veterans on Yom HaShoah (7 May) of that year by the Prime Minister of Israel, Levi Eshkol.
In 1985, to mark 40 years since victory in World War II, an Israeli flag bearing the ribbon bar was awarded to a number of museums dedicated to the war.
In 2000, a medal was created from the ribbon bar.
Award criteria
An Israeli citizen or permanent resident of Israel who, during World War II, was engaged in the military campaign against the Nazi oppressors and their supporters and fought them in one of the Allied armed forces (including the British brigade), as a partisan, or as an underground movement member, during the set period between 1 September 1939 and 1 September 1945, as specified by the Yad Vashem regulation of 1968, and also based upon the "Status of the World War II Veterans" Act of 2000.
If the person eligible for this award has died, a family member (a spouse, a son, a daughter, a father, a mother, a brother, a sister, a grandson, a granddaughter) is entitled to submit an application requesting the ribbon, or a replacement for the ribbon in the event of loss or wear and tear.
Design
The Fighters against Nazis Medal is a disc. A non-swivelling straight-bar suspender is attached at the top of the medal. The medals were struck in a gold-coloured metal.
Obverse
The obverse shows two Stars of David; one of them is shaped as a Yellow badge and has a bayonet on it while the other has an olive branch on it.
Reverse
The reverse shows the Emblem of the State of Israel, a menorah surrounded by an olive branch on each side, and the writing "ישראל" (Hebrew for Israel) below it, in the centre; around it the writing "לוחם בנאצים - ותיק מלחמת העולם השנייה" (Hebrew for Fighter against the Nazis - veteran of World War II).
Ribbon
The ribbon is red in the centre with a str |
https://en.wikipedia.org/wiki/Security%20bug | A security bug or security defect is a software bug that can be exploited to gain unauthorized access or privileges on a computer system. Security bugs introduce security vulnerabilities by compromising one or more of:
Authentication of users and other entities
Authorization of access rights and privileges
Data confidentiality
Data integrity
Security bugs need not be identified or exploited to qualify as such, and they are assumed to be much more common than known vulnerabilities in almost any system.
Causes
Security bugs, like all other software bugs, stem from root causes that can generally be traced to either absent or inadequate:
Software developer training
Use case analysis
Software engineering methodology
Quality assurance testing
and other best practices
Taxonomy
Security bugs generally fall into a fairly small number of broad categories that include:
Memory safety (e.g. buffer overflow and dangling pointer bugs)
Race condition
Secure input and output handling
Faulty use of an API
Improper use case handling
Improper exception handling
Resource leaks, often but not always due to improper exception handling
Preprocessing input strings before they are checked for being acceptable
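The last item in the list, transforming input before validating it, is a classic trap: a hypothetical path sanitizer (all names here invented for illustration) that strips "../" and then trusts the result can be defeated by input crafted so the stripping itself reassembles a traversal sequence.

```python
def naive_sanitize(path):
    # BUG: strips "../" and then trusts the result -- the preprocessing
    # happens before (instead of after) the acceptability check.
    return path.replace("../", "")

def is_confined(path):
    # The check that should run on the FINAL form of the input.
    return ".." not in path and not path.startswith("/")

attack = "....//etc/passwd"
cleaned = naive_sanitize(attack)
print(cleaned)               # the stripping reassembles "../etc/passwd"
print(is_confined(cleaned))  # False: checking after the transform catches it

def fixed_point_sanitize(path):
    # Repeat the strip until nothing changes, so removed text cannot
    # recombine into a fresh "../" sequence.
    prev = None
    while prev != path:
        prev, path = path, path.replace("../", "")
    return path

print(fixed_point_sanitize(attack))  # etc/passwd
```

The general rule the example illustrates: validate the exact byte sequence that will be consumed, after every canonicalization and transformation step, not before.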
Mitigation
See software security assurance.
See also
Computer security
Hacking: The Art of Exploitation
IT risk
Threat (computer)
Vulnerability (computing)
Hardware bug
Secure coding |
https://en.wikipedia.org/wiki/Weakly%20contractible | In mathematics, a topological space is said to be weakly contractible if all of its homotopy groups are trivial.
Property
It follows from Whitehead's theorem that if a CW-complex is weakly contractible then it is contractible.
Example
Define S^∞ to be the inductive limit of the spheres S^n. Then this space is weakly contractible. Since S^∞ is moreover a CW-complex, it is also contractible. See Contractibility of unit sphere in Hilbert space for more.
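A sketch of why the limit of spheres is weakly contractible (the standard argument, abbreviated): the image of a sphere is compact, so any map lands in a finite stage, where it can be contracted one stage up.

```latex
\begin{aligned}
&f\colon S^k \to S^\infty \text{ continuous}
  \;\Longrightarrow\; f(S^k)\subseteq S^n \text{ for some finite } n
  \quad\text{(compactness)},\\
&S^n \hookrightarrow S^{n+1} \text{ is null-homotopic
  (the equator bounds a hemisphere, i.e.\ a disk)},\\
&\therefore\; [f]=0 \text{ in } \pi_k(S^\infty) \text{ for every } k .
\end{aligned}
```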
The long line is an example of a space that is weakly contractible but not contractible. This does not contradict Whitehead's theorem, since the long line does not have the homotopy type of a CW-complex.
Another prominent example for this phenomenon is the Warsaw circle. |
https://en.wikipedia.org/wiki/Popliteal%20lymph%20nodes | The popliteal lymph nodes, small in size and some six or seven in number, are embedded in the fat contained in the popliteal fossa, sometimes referred to as the 'knee pit'. One lies immediately beneath the popliteal fascia, near the terminal part of the small saphenous vein, and drains the region from which this vein derives its tributaries, such as superficial regions of the posterolateral aspect of the leg and the plantar aspect of the foot.
Another is between the popliteal artery and the posterior surface of the knee-joint. It receives afferents from the knee-joint, together with those that accompany the genicular arteries. The others lie at the sides of the popliteal vessels, and receive, as efferents, the trunks that accompany the anterior and posterior tibial vessels.
The efferents of the popliteal lymph nodes pass almost entirely alongside the femoral vessels to the deep inguinal lymph nodes, but a few may accompany the great saphenous vein and end in the glands of the superficial subinguinal group. The flow of lymph from the legs towards the heart is the result of the calf pump: during walking, the calf muscle contracts, squeezing lymph out of the leg via the lymphatic vessels. When the muscle relaxes, valves in the vessels shut, preventing the fluid from returning to the lower extremities. |
https://en.wikipedia.org/wiki/Jim%20Al-Khalili | Jameel Sadik "Jim" Al-Khalili (; born 20 September 1962) is an Iraqi-British theoretical physicist, author and broadcaster. He is professor of theoretical physics and chair in the public engagement in science at the University of Surrey. He is a regular broadcaster and presenter of science programmes on BBC radio and television, and a frequent commentator about science in other British media.
In 2014, Al-Khalili was named as a RISE (Recognising Inspirational Scientists and Engineers) leader by the UK's Engineering and Physical Sciences Research Council (EPSRC). He was President of Humanists UK between January 2013 and January 2016.
Early life and education
Al-Khalili was born in Baghdad in 1962. His father was an Iraqi Air Force engineer, and his English mother was a librarian. Al-Khalili settled permanently in the United Kingdom in 1979. After completing (and retaking) his A-levels over three years until 1982, he studied physics at the University of Surrey and graduated with a Bachelor of Science degree in 1986. He stayed on at Surrey to pursue a Doctor of Philosophy degree in nuclear reaction theory, which he obtained in 1989, rather than accepting a job offer from the National Physical Laboratory.
Career and research
In 1989, Al-Khalili was awarded a Science and Engineering Research Council (SERC) postdoctoral fellowship at University College London, after which he returned to Surrey in 1991, first as a research assistant, then as a lecturer. In 1994, Al-Khalili was awarded an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellowship for five years, during which time he established himself as a leading expert on mathematical models of exotic atomic nuclei. He has published widely in his field.
Al-Khalili is a professor of physics at the University of Surrey, where he also holds a chair in the Public Engagement in Science. He has been a trustee (2006–2012) and vice president (2008–2011) of the British Science Association. He a |
https://en.wikipedia.org/wiki/TCP-Illinois | TCP-Illinois is a variant of the TCP congestion control protocol, developed at the University of Illinois at Urbana–Champaign. It is especially targeted at high-speed, long-distance networks. A sender-side modification to the standard TCP congestion control algorithm, it achieves a higher average throughput than the standard TCP, allocates network resources as fairly as the standard TCP, is compatible with the standard TCP, and provides incentives for TCP users to switch.
Principles of operation
TCP-Illinois is a loss-delay based algorithm, which uses packet loss as the primary congestion signal to determine the direction of window size change, and uses queuing delay as the secondary congestion signal to adjust the pace of window size change. Similarly to the standard TCP, TCP-Illinois increases the window size W by α/W for each acknowledgment, and decreases it by βW for each loss event. Unlike the standard TCP, α and β are not constants. Instead, they are functions of the average queuing delay d_a: α = f1(d_a) and β = f2(d_a), where f1 is decreasing and f2 is increasing.
There are numerous choices of f1(·) and f2(·). One such class is: f1(da) = κ1/(κ2 + da) and f2(da) = κ3 + κ4·da.
We let f1(·) and f2(·) be continuous functions, and thus α and β vary continuously with da. Suppose dm is the maximum average queuing delay; denote αmin = f1(dm) and βmax = f2(dm), so that αmin ≤ α and β ≤ βmax. From these conditions, the parameters of f1 and f2 can be determined.
This specific choice is demonstrated in Figure 1.
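The window update described above can be sketched in a few lines of Python. The particular shapes of f1 and f2 below, and every constant, are illustrative assumptions rather than the values from the TCP-Illinois paper; they merely satisfy the stated requirements that f1 decreases and f2 increases with the average queuing delay da.

```python
# Sketch of the TCP-Illinois window update with assumed f1, f2.
# alpha = f1(da) shrinks and beta = f2(da) grows as queuing delay rises.

def f1(da, a_max=10.0, k=5.0):
    """Decreasing alpha(da): aggressive growth when delay is low."""
    return a_max / (1.0 + k * da)

def f2(da, b_min=0.125, b_max=0.5, d_m=1.0):
    """Increasing beta(da), capped at b_max once da reaches the maximum d_m."""
    return min(b_max, b_min + (b_max - b_min) * da / d_m)

def on_ack(w, da):
    return w + f1(da) / w      # additive increase paced by alpha/W

def on_loss(w, da):
    return w * (1.0 - f2(da))  # multiplicative decrease by beta*W

w = 10.0
for _ in range(100):           # congestion far away: low queuing delay
    w = on_ack(w, da=0.0)
w_low_delay = w

w = 10.0
for _ in range(100):           # congestion imminent: high queuing delay
    w = on_ack(w, da=1.0)
w_high_delay = w

print(w_low_delay > w_high_delay)  # the window grows faster at low delay
```

Running the two loops shows the delay-sensitive growth that produces the concave window curve: the same number of acknowledgments moves the window much further when the measured queuing delay is low.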
Properties and Performance
TCP-Illinois increases the throughput much more quickly than standard TCP when congestion is far away, and increases it only slowly when congestion is imminent. As a result, the window curve is concave and the average throughput achieved is much larger than that of the standard TCP, see Figure 2.
It also has many other desirable features, such as fairness, compatibility with the standard TCP, incentives for TCP users to switch, and robustness to inaccurate delay measurements.
See also
H-TCP
BIC TCP
HSTCP
TCP
FAST TCP |
https://en.wikipedia.org/wiki/Link-local%20address | In computer networking, a link-local address is a unicast network address that is valid only for communications within the subnetwork that the host is connected to. Link-local addresses are most often assigned automatically with a process known as stateless address autoconfiguration or link-local address autoconfiguration, also known as automatic private IP addressing (APIPA) or auto-IP.
Link-local addresses are not guaranteed to be unique beyond their network segment. Therefore, routers do not forward packets with link-local source or destination addresses.
IPv4 link-local addresses are assigned from the address block 169.254.0.0/16 (169.254.0.0 through 169.254.255.255). In IPv6, they are assigned from the block fe80::/10.
Address assignment
Link-local addresses may be assigned manually by an administrator or by automatic operating system procedures. In Internet Protocol (IP) networks, they are most often assigned using stateless address autoconfiguration, which typically selects the link-local address pseudo-randomly, so that it differs from session to session. In IPv6, however, the link-local address may also be derived from the interface media access control (MAC) address in a rule-based method, although this practice is deprecated for privacy and security reasons.
In IPv4, link-local addresses are normally only used when no external, stateful mechanism of address configuration exists, such as the Dynamic Host Configuration Protocol (DHCP), or when another primary configuration method has failed. In IPv6, link-local addresses are always assigned, along with addresses of other scopes, and are required for the internal functioning of various protocol components.
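The stochastic IPv4 selection can be sketched as follows, using Python's standard ipaddress and random modules; the candidate range (169.254.0.0/16 minus the reserved first and last /24 blocks) follows RFC 3927, while the function name and seeding are illustrative.

```python
import ipaddress
import random

def random_ipv4_link_local(seed=None):
    """Pick a pseudo-random IPv4 link-local address from 169.254.0.0/16,
    skipping the reserved 169.254.0.x and 169.254.255.x blocks."""
    rng = random.Random(seed)
    third = rng.randint(1, 254)    # avoid the reserved first/last /24
    fourth = rng.randint(0, 255)
    return ipaddress.IPv4Address(f"169.254.{third}.{fourth}")

addr = random_ipv4_link_local(seed=42)
print(addr, addr.is_link_local)    # is_link_local is True for this block
```

A real implementation would additionally probe the link (e.g. with ARP) to verify the chosen address is not already in use, retrying on conflict.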
IPv4
The Internet Engineering Task Force (IETF) has reserved the IPv4 address block 169.254.0.0/16 (169.254.0.0 through 169.254.255.255) for link-local addressing. The entire range may be used for this purpose, except for the first 256 and last 256 addresses (169.254.0.0/24 and 169.254.255.0/24), which are reserved for future use and must not be selected by a host using this dynamic configuration
https://en.wikipedia.org/wiki/Dot%20plot%20%28bioinformatics%29 | In bioinformatics a dot plot is a graphical method for comparing two biological sequences and identifying regions of close similarity after sequence alignment. It is a type of recurrence plot.
History
One way to visualize the similarity between two protein or nucleic acid sequences is to use a similarity matrix, known as a dot plot. These were introduced by Gibbs and McIntyre in 1970 and are two-dimensional matrices that have the sequences of the proteins being compared along the vertical and horizontal axes. For a simple visual representation of the similarity between two sequences, individual cells in the matrix can be shaded black if residues are identical, so that matching sequence segments appear as runs of diagonal lines across the matrix.
Interpretation
Some idea of the similarity of the two sequences can be gleaned from the number and length of matching segments shown in the matrix. Identical proteins will obviously have a diagonal line in the center of the matrix. Insertions and deletions between sequences give rise to disruptions in this diagonal. Regions of local similarity or repetitive sequences give rise to further diagonal matches in addition to the central diagonal. One way of reducing this noise is to only shade runs or 'tuples' of residues, e.g. a tuple of 3 corresponds to three residues in a row. This is effective because the probability of matching three residues in a row by chance is much lower than single-residue matches.
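The windowed ("tuple") construction described above is straightforward to sketch; the following minimal Python function (names invented for illustration) shades a cell only when a run of `window` residues matches, so single-residue noise is suppressed.

```python
def dot_plot(seq_x, seq_y, window=3):
    """Return a 0/1 matrix; a cell is shaded (1) only when `window`
    consecutive residues of the two sequences match there."""
    nx, ny = len(seq_x), len(seq_y)
    matrix = [[0] * nx for _ in range(ny)]
    for i in range(ny - window + 1):
        for j in range(nx - window + 1):
            if seq_y[i:i + window] == seq_x[j:j + window]:
                for k in range(window):
                    matrix[i + k][j + k] = 1   # shade the whole run
    return matrix

m = dot_plot("GATTACA", "GATTACA", window=3)
# Identical sequences produce an unbroken main diagonal.
print(all(m[i][i] == 1 for i in range(7)))
```

With real protein sequences one would typically also score similar (not just identical) residues using a substitution matrix, but the diagonal-run structure is the same.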
Dot plots compare two sequences by organizing one sequence on the x-axis and the other on the y-axis of a plot. When the residues of both sequences match at the same location on the plot, a dot is drawn at the corresponding position. Note that the sequences can be written backwards or forwards; however, the sequences on both axes must be written in the same direction. Note also that the direction of the sequences on the axes determines the direction of the line on the dot plot. Once the dots have been plotted, they wi
https://en.wikipedia.org/wiki/Ladislav%20Tauc | Ladislav Tauc (1926–1999) was a French neuroscientist, born in Pardubice, Czechoslovakia.
He was a pioneer in neuroethology and neuronal physiology who immigrated to France in 1949 to work at the Institut Marey in Paris. Tauc was the founder and former director of the Laboratoire de Neurobiologie Cellulaire et Moléculaire of the French National Centre for Scientific Research (CNRS). He was one of the teachers of Eric R. Kandel, and it was there that Kandel started to investigate the gill withdrawal reflex and postsynaptic potentials (PSPs) in identified neurons in the abdominal ganglion of Aplysia.
"The introduction of a 'simplified' brain (the Aplysia nervous system) to study the cellular and molecular basis of organized neuronal interactions, has been described as one of Tauc’s essential contributions to neuroscience."
Since 2000, an annual meeting has been organized in honor of Ladislav Tauc.
See also
Eric R. Kandel
Torsten Wiesel
Stephen Kuffler |
https://en.wikipedia.org/wiki/Pompeiu%27s%20theorem | Pompeiu's theorem is a result of plane geometry, discovered by the Romanian mathematician Dimitrie Pompeiu. The theorem is simple, but not classical. It states the following:
Given an equilateral triangle ABC in the plane, and a point P in the plane of the triangle ABC, the lengths PA, PB, and PC form the sides of a (possibly degenerate) triangle.
The proof is quick. Consider a rotation of 60° about the point B under which A maps to C and P maps to P′. Then BP = BP′ and ∠PBP′ = 60°. Hence triangle PBP′ is equilateral and PP′ = PB. The rotation preserves distances, so P′C = PA. Thus, triangle PCP′ has sides equal to PA, PB, and PC, and the proof by construction is complete (see drawing).
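The rotation argument is easy to check numerically. In the sketch below, the coordinates are chosen so that a 60° rotation about B, taken clockwise, carries A onto C; the proof's claims PP′ = PB and P′C = PA then hold for any point P.

```python
import math
import random

def rotate(p, center, deg):
    """Rotate point p about center by deg degrees (counter-clockwise)."""
    t = math.radians(deg)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + dx * math.cos(t) - dy * math.sin(t),
            center[1] + dx * math.sin(t) + dy * math.cos(t))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Equilateral triangle ABC; in these coordinates the clockwise 60-degree
# rotation about B maps A onto C.
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)

random.seed(1)
P = (random.uniform(-2, 2), random.uniform(-2, 2))
P1 = rotate(P, B, -60)   # the point P' from the proof

# Triangle PBP' is equilateral, so PP' = PB; and P'C = PA.
print(math.isclose(dist(P, P1), dist(P, B)))
print(math.isclose(dist(P1, C), dist(P, A)))
```

Since triangle PCP′ then has side lengths PA, PB, PC, the triangle inequality for those three lengths follows immediately.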
Further investigations reveal that if P is not in the interior of the triangle but rather on the circumcircle, then PA, PB, and PC form a degenerate triangle, with the largest being equal to the sum of the other two; this observation is also known as Van Schooten's theorem.
Generally, the point P and the lengths PA, PB, and PC to the vertices of the equilateral triangle define two equilateral triangles (the larger and the smaller) with sides p and q given by
p, q = √((PA² + PB² + PC²)/2 ± 2√3·△).
The symbol △ denotes the area of the triangle whose sides have lengths PA, PB, PC.
Pompeiu published the theorem in 1936; however, August Ferdinand Möbius had already published a more general theorem about four points in the Euclidean plane in 1852. In this paper Möbius also derived the statement of Pompeiu's theorem explicitly as a special case of his more general theorem. For this reason the theorem is also known as the Möbius-Pompeiu theorem.
External links
MathWorld's page on Pompeiu's Theorem
Pompeiu's theorem at cut-the-knot.org
Notes
Elementary geometry
Theorems about equilateral triangles
Theorems about triangles and circles
Articles containing proofs |
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion%20system | Reaction–diffusion systems are mathematical models which correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space.
Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology, physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form
∂q/∂t = D ∇²q + R(q),
where q(x, t) represents the unknown vector function, D is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each component of q for which a reaction–diffusion equation holds represents a concentration variable.
One-component reaction–diffusion equations
The simplest reaction–diffusion equation in one spatial dimension in plane geometry,
∂u/∂t = D ∂²u/∂x² + R(u),
is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process; the corresponding equation is Fick's second law. The choice R(u) = u(1 − u) yields Fisher's equation, which was originally used to describe the spreading of biological populations; the Newell–Whitehead–Segel equation, with R(u) = u(1 − u²), describes Rayleigh–Bénard convection; and the more general Zeldovich–Frank-Kamenetskii equation, with Zeldovich number β, arises in combustion theory, and its part
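Fisher's equation, with reaction term u(1 − u), can be integrated with a simple explicit finite-difference scheme. The sketch below is illustrative (grid size, time step, and boundary handling are all assumptions); it shows the characteristic travelling front by which a localised seed invades the unstable state u = 0.

```python
def fisher_step(u, D=1.0, dx=1.0, dt=0.1):
    """One explicit Euler step of u_t = D u_xx + u(1 - u) (Fisher's
    equation) with reflecting boundaries. Stable since D*dt/dx^2 <= 0.5."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]          # reflecting boundary
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2 * u[i] + right) / dx**2     # discrete Laplacian
        new[i] = u[i] + dt * (D * lap + u[i] * (1 - u[i]))
    return new

# A localised seed spreads as a travelling front toward u = 1.
u = [0.0] * 50
u[0] = 1.0
for _ in range(200):
    u = fisher_step(u)
print(u[10] > 0.9)   # the front has already passed x = 10 by t = 20
```

The front advances at roughly the minimal Fisher wave speed 2√(D·r) (here 2), so by t = 20 the region near the seed has essentially saturated at u = 1.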
https://en.wikipedia.org/wiki/Ackermann%20set%20theory | In mathematics and logic, Ackermann set theory (AST) is an axiomatic set theory proposed by Wilhelm Ackermann in 1956.
AST differs from Zermelo–Fraenkel set theory (ZF) in that it allows proper classes, that is, objects that are not sets, including a class of all sets.
It replaces several of the standard ZF axioms for constructing new sets with a principle known as Ackermann's schema. Intuitively, the schema allows a new set to be constructed if it can be defined by a formula which does not refer to the class of all sets.
In its use of classes, AST differs from other alternative set theories such as Morse–Kelley set theory and Von Neumann–Bernays–Gödel set theory in that a class may be an element of another class.
William N. Reinhardt established in 1970 that AST is effectively equivalent in strength to ZF, putting the two on an equal foundation. In particular, AST is consistent if and only if ZF is consistent.
Preliminaries
AST is formulated in first-order logic. The language of AST contains one binary relation ∈ denoting set membership and one constant V denoting the class of all sets. Ackermann used a predicate M ("is a set") instead of V; this is equivalent, as each of M and V can be defined in terms of the other.
We will refer to elements of V as sets, and to general objects as classes. A class that is not a set is called a proper class.
Axioms
The following formulation is due to Reinhardt.
The five axioms include two axiom schemas.
Ackermann's original formulation included only the first four of these, omitting the axiom of regularity.
1. Axiom of extensionality
If two classes have the same elements, then they are equal.
This axiom is identical to the axiom of extensionality found in many other set theories, including ZF.
2. Heredity
Any element of a set, and any subset of a set, is a set.
3. Comprehension schema
For any property, we can form the class of sets satisfying that property. Formally, for any formula φ in which x does not occur free:
∃x ∀y (y ∈ x ↔ y ∈ V ∧ φ)
That is, the only restriction is that comprehension is |
https://en.wikipedia.org/wiki/Amorphous%20computing | Amorphous computing refers to computational systems that use very large numbers of identical, parallel processors each having limited computational ability and local interactions. The term Amorphous Computing was coined at MIT in 1996 in a paper entitled "Amorphous Computing Manifesto" by Abelson, Knight, Sussman, et al.
Examples of naturally occurring amorphous computations can be found in many fields, such as: developmental biology (the development of multicellular organisms from a single cell), molecular biology (the organization of sub-cellular compartments and intra-cell signaling), neural networks, and chemical engineering (non-equilibrium systems) to name a few. The study of amorphous computation is hardware agnostic—it is not concerned with the physical substrate (biological, electronic, nanotech, etc.) but rather with the characterization of amorphous algorithms as abstractions with the goal of both understanding existing natural examples and engineering novel systems.
Amorphous computers tend to have many of the following properties:
Implemented by redundant, potentially faulty, massively parallel devices.
Devices having limited memory and computational abilities.
Devices being asynchronous.
Devices having no a priori knowledge of their location.
Devices communicating only locally.
Exhibit emergent or self-organizational behavior (patterns or states larger than an individual device).
Fault-tolerant, especially to the occasional malformed device or state perturbation.
Algorithms, tools, and patterns
(Some of these algorithms have no known names. Where a name is not known, a descriptive one is given.)
"Fickian communication". Devices communicate by generating messages which diffuse through the medium in which the devices dwell. Message strength will follow the inverse square law as described by Fick's law of diffusion. Examples of such communication are common in biological and chemical systems.
"Link diffusive communication". Devices com |
https://en.wikipedia.org/wiki/George%20W.%20Hart | George William Hart (born 1955) is an American sculptor and geometer. Before retiring, he was an associate professor of Electrical Engineering at Columbia University in New York City and then an interdepartmental research professor at Stony Brook University. His work includes both academic and artistic approaches to mathematics.
He is the father of mathematics popularizer and YouTuber Vi Hart.
Education and career
Hart received a B.S. in Mathematics from MIT (1977), an M.A. in Linguistics from Indiana University (1979), and a Ph.D. in Electrical Engineering and Computer Science from MIT (1987).
His academic work includes the online publication Encyclopedia of Polyhedra, the textbook Multidimensional Analysis, and the instruction book Zome Geometry. He has also published over sixty academic articles. His artistic work includes sculpture, computer images, toys (e.g. Zome) and puzzles.
He worked with John H. Conway to promote and standardize the Conway polyhedron notation.
Sculptures
Hart's public sculptures can be seen at locations around the world, including MIT, U.C. Berkeley, Stony Brook University, Princeton University, Duke University, The University of Arizona, Queen's University at Kingston, Macalester College, Pratt Institute, Albion College, Middlesex University, Aalto University, and The Polytechnic University of Valencia.
Inventions
Hart is a coinventor on two US patents, Digital ac monitor and Non-intrusive appliance monitor apparatus. These patents cover, in part, an improved electrical meter for homes called nonintrusive load monitors. These meters track changes in voltage and current usage by a given household and then deduce which appliances are using how much electricity and when.
Museum of Mathematics
Hart is a co-founder of North America's only Museum of Mathematics, MoMath, in New York City. As chief of content, he set the "Math is Cool!" tone of the museum and spent five years designing original exhibits and workshop activities for |
https://en.wikipedia.org/wiki/Pipeline%20stall | In the design of pipelined computer processors, a pipeline stall is a delay in execution of an instruction in order to resolve a hazard.
Details
In a standard five-stage pipeline, during the decoding stage, the control unit will determine whether the decoded instruction reads from a register to which the currently executed instruction writes. If this condition holds, the control unit will stall the instruction by one clock cycle. It also stalls the instruction in the fetch stage, to prevent the instruction in that stage from being overwritten by the next instruction in the program.
In a von Neumann architecture, which uses the program counter (PC) register to determine the current instruction being fetched in the pipeline, the value in the PC register and the instruction in the fetch stage are preserved to prevent new instructions from being fetched while an instruction in the decoding stage is stalled. The values are preserved until the instruction causing the conflict has passed through the execution stage. Such an event is often called a bubble, by analogy with an air bubble in a fluid pipe.
In some architectures, the execution stage of the pipeline must always be performing an action at every cycle. In that case, the bubble is implemented by feeding NOP ("no operation") instructions to the execution stage, until the bubble is flushed past it.
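The one-cycle stall described above can be modelled with a toy scheduler. The sketch below is a deliberate simplification (a single bubble per hazard, no forwarding, invented register names): an instruction's issue is delayed by one extra cycle whenever it reads a register written by the instruction immediately ahead of it.

```python
def schedule(instrs):
    """instrs: list of (dest_reg, src_regs) tuples, in program order.
    Returns the cycle at which each instruction enters the pipeline,
    inserting a one-cycle bubble when the decoded instruction reads a
    register that the instruction ahead of it is about to write."""
    starts = []
    cycle = 0
    for i, (dest, srcs) in enumerate(instrs):
        if i > 0:
            cycle = starts[-1] + 1        # normal case: issue next cycle
            prev_dest = instrs[i - 1][0]
            if prev_dest in srcs:
                cycle += 1                # RAW hazard: insert a bubble
        starts.append(cycle)
    return starts

# r1 = r2+r3 ; r4 = r1+r5 (reads r1 -> hazard) ; r6 = r7+r8 (no hazard)
prog = [("r1", ("r2", "r3")), ("r4", ("r1", "r5")), ("r6", ("r7", "r8"))]
print(schedule(prog))  # [0, 2, 3]: one bubble after the first instruction
```

Real pipelines reduce or eliminate such stalls with operand forwarding, and may need more than one stall cycle without it; the model only illustrates where bubbles come from.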
Examples
Timeline
The following shows two executions of the same four instructions through a 4-stage pipeline; for whatever reason, a delay in fetching the purple instruction in cycle #2 creates a bubble that delays all instructions after it as well.
Classic RISC pipeline
The below example shows a bubble being inserted into a classic RISC pipeline, with five stages (IF = Instruction Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB = Register write back). In this example, data available after the MEM stage (4th stage) of the first instruction is required |
https://en.wikipedia.org/wiki/Superferromagnetism | Superferromagnetism is the magnetism of an ensemble of magnetically interacting super-moment-bearing material particles that would be superparamagnetic if they were not interacting. Nanoparticles of iron oxides, such as ferrihydrite (nominally FeOOH), often cluster and interact magnetically. These interactions change the magnetic behaviours of the nanoparticles (both above and below their blocking temperatures) and lead to an ordered low-temperature phase with non-randomly oriented particle super-moments.
Discovery
The phenomenon appears to have been first described, and the term "superferromagnetism" introduced, by Bostanjoglo and Röhkel for a metallic film system. A decade later, the same phenomenon was rediscovered and described in small-particle systems. The discovery is attributed as such in the scientific literature.
https://en.wikipedia.org/wiki/Dioptric%20correction | Dioptric correction is the adjustment of an optical instrument to the varying visual acuity of a person's eyes. It is the adjustment of one lens to provide compatible focus when the viewer's eyes have differing visual capabilities. One result is less strain on the eyes, allowing for optimal viewing and for depth and contrast focusing when composing a photograph or viewing an item through a device made of lenses or lens elements.
https://en.wikipedia.org/wiki/Beta-glucan | Beta-glucans (β-glucans) comprise a group of β-D-glucose polysaccharides (glucans) naturally occurring in the cell walls of cereals, bacteria, and fungi, with significantly differing physicochemical properties dependent on source. Typically, β-glucans form a linear backbone with 1–3 β-glycosidic bonds but vary with respect to molecular mass, solubility, viscosity, branching structure, and gelation properties, causing diverse physiological effects in animals.
At dietary intake levels of at least 3 g per day, oat fiber β-glucan decreases blood levels of LDL cholesterol and so may reduce the risk of cardiovascular diseases. β-glucans are natural gums and are used as texturing agents in various nutraceutical and cosmetic products, and as soluble fiber supplements.
History
Cereal and fungal products have been used for centuries for medicinal and cosmetic purposes; however, the specific role of β-glucan was not explored until the 20th century. β-glucans were first discovered in lichens, and shortly thereafter in barley. Particular interest in oat β-glucan arose after a cholesterol-lowering effect of oat bran was reported in 1981.
In 1997, the FDA approved a claim that intake of at least 3.0 g of β-glucan from oats per day decreased absorption of dietary cholesterol and reduced the risk of coronary heart disease. The approved health claim was later amended to include these sources of β-glucan: rolled oats (oatmeal), oat bran, whole oat flour, oatrim (the soluble fraction of alpha-amylase hydrolyzed oat bran or whole oat flour), whole grain barley and barley beta-fiber. An example of an allowed label claim: "Soluble fiber from foods such as oatmeal, as part of a diet low in saturated fat and cholesterol, may reduce the risk of heart disease. A serving of oatmeal supplies 0.75 grams of the 3.0 g of β-glucan soluble fiber necessary per day to have this effect." The claim language is in the Federal Register 21 CFR 101.81 Health Claims: "Soluble fiber from certain foods
https://en.wikipedia.org/wiki/Disk%20buffer | In computer storage, disk buffer (often ambiguously called disk cache or cache buffer) is the embedded memory in a hard disk drive (HDD) or solid state drive (SSD) acting as a buffer between the rest of the computer and the physical hard disk platter or flash memory that is used for storage. Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory.
Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters.
The disk buffer is physically distinct from and is used differently from the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging from 8 MB to 4 GB, and the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called disk buffer.
Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB.
Uses
Read-ahead/read-behind
When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling, and possibly fine-actuating, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data.
The data read ahead of request during this wait is unrequested but free, so typically saved in the disk buffer in case it is requested later.
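The read-ahead behaviour can be sketched with a toy model (class name, sector model, and window size are all invented for illustration): one physical read captures a run of following sectors "for free", so a sequential workload pays for far fewer physical reads than logical ones.

```python
class DiskBuffer:
    """Toy read-ahead buffer: a physical read of one sector also keeps
    the sectors that pass under the head while waiting, at no extra cost."""

    def __init__(self, disk, readahead=8):
        self.disk = disk             # sector number -> data
        self.readahead = readahead
        self.buffer = {}             # cached sectors
        self.physical_reads = 0

    def read(self, sector):
        if sector not in self.buffer:
            self.physical_reads += 1
            for s in range(sector, sector + self.readahead):
                if s in self.disk:
                    self.buffer[s] = self.disk[s]   # free read-ahead data
        return self.buffer[sector]

disk = {s: f"data{s}" for s in range(64)}
buf = DiskBuffer(disk)
for s in range(16):                  # sequential access pattern
    buf.read(s)
print(buf.physical_reads)  # 2: sectors 0-7 and 8-15 each cost one read
```

A random access pattern over the same 16 sectors would instead cost up to 16 physical reads, which is why read-ahead pays off mainly for sequential workloads.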
Similarly, data can be read for fre |
https://en.wikipedia.org/wiki/Page%20cache | In computing, a page cache, sometimes also called disk cache, is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of the main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels with the paging memory management, and is mostly transparent to applications.
Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free" or "available".
When compared to main memory, hard disk drive read/writes are slow and random accesses require expensive disk seeks; as a result, larger amounts of main memory bring performance improvements as more data can be cached in memory. Separate disk caching is provided on the hardware side, by dedicated RAM or NVRAM chips located either in the disk controller (in which case the cache is integrated into a hard disk drive and usually called disk buffer), or in a disk array controller. Such memory should not be confused with the page cache.
Memory conservation
Pages in the page cache modified after being brought in are called dirty pages. Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. hard disk drive or solid-state drive), discarding and reusing their space is much quicker than paging out application memory, and is often preferred over flushing the dirty pages into secondary storage and reusing their space. Executable binaries, such as applications and libraries, are also typically accessed through page cache and mapped to individual process spaces using virtual memory (this is do |
https://en.wikipedia.org/wiki/Weinstein%E2%80%93Aronszajn%20identity | In mathematics, the Weinstein–Aronszajn identity states that if A and B are matrices of size m × n and n × m respectively (either or both of which may be infinite) then,
det(I_m + AB) = det(I_n + BA),
provided AB (and hence, also BA) is of trace class,
where I_k is the k × k identity matrix.
It is closely related to the matrix determinant lemma and its generalization. It is the determinant analogue of the Woodbury matrix identity for matrix inverses.
Proof
The identity may be proved as follows.
Let M be a matrix consisting of the four blocks I_m, A, B and I_n:
M = ( I_m  A ; B  I_n ).
Because I_m is invertible, the formula for the determinant of a block matrix gives
det M = det(I_m) det(I_n − B I_m⁻¹ A) = det(I_n − BA).
Because I_n is invertible, the formula for the determinant of a block matrix gives
det M = det(I_n) det(I_m − A I_n⁻¹ B) = det(I_m − AB).
Thus
det(I_m − AB) = det(I_n − BA).
Substituting −A for A then gives the Weinstein–Aronszajn identity.
Applications
Let λ ∈ ℂ, λ ≠ 0. The identity can be used to show the somewhat more general statement that
det(AB − λ I_m) = (−λ)^(m − n) det(BA − λ I_n).
It follows that the non-zero eigenvalues of AB and BA are the same.
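The identity is easy to check numerically for a small rectangular pair. The pure-Python sketch below uses exact fractions and the Leibniz formula (fine for tiny matrices; the example matrices are arbitrary), confirming that det(I_3 + AB) for a 3 × 3 product equals det(I_2 + BA) for the much smaller 2 × 2 one.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Determinant by the Leibniz formula (only sensible for tiny n)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign, prod = 1, Fraction(1)
        for i in range(n):            # count inversions for the sign
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def eye_plus(m):
    """I + m for a square matrix m."""
    return [[m[i][j] + (1 if i == j else 0) for j in range(len(m))]
            for i in range(len(m))]

A = [[Fraction(1), Fraction(2)],                  # 3 x 2
     [Fraction(-1), Fraction(0)],
     [Fraction(3), Fraction(5)]]
B = [[Fraction(2), Fraction(0), Fraction(1)],     # 2 x 3
     [Fraction(1), Fraction(4), Fraction(-2)]]

lhs = det(eye_plus(matmul(A, B)))   # det(I_3 + AB): a 3x3 determinant
rhs = det(eye_plus(matmul(B, A)))   # det(I_2 + BA): only 2x2
print(lhs == rhs)                   # True; both equal 39
```

This size reduction (a determinant of order min(m, n) instead of max(m, n)) is exactly what makes the identity useful in the random-matrix and Bayesian applications mentioned below.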
This identity is useful in developing a Bayes estimator for multivariate Gaussian distributions.
The identity also finds applications in random matrix theory by relating determinants of large matrices to determinants of smaller ones. |
https://en.wikipedia.org/wiki/Process%20integration | Process integration is a term in chemical engineering which has two possible meanings.
A holistic approach to process design which emphasizes the unity of the process and considers the interactions between different unit operations from the outset, rather than optimising them separately. This can also be called integrated process design or process synthesis. El-Halwagi (1997 and 2006) and Smith (2005) describe the approach well. An important first step is often product design (Cussler and Moggridge 2003) which develops the specification for the product to fulfil its required purpose.
Pinch analysis, a technique for designing a process to minimise energy consumption and maximise heat recovery, also known as heat integration, energy integration or pinch technology. The technique calculates thermodynamically attainable energy targets for a given process and identifies how to achieve them. A key insight is the pinch temperature, which is the most constrained point in the process. The most detailed explanations of the techniques are by Linnhoff et al. (1982), Shenoy (1995), Kemp (2006) and Kemp and Lim (2020), and the subject also features strongly in Smith (2005). This definition reflects the fact that the first major success for process integration was the thermal pinch analysis addressing energy problems and pioneered by Linnhoff and co-workers. Later, other pinch analyses were developed for several applications such as mass-exchange networks (El-Halwagi and Manousiouthakis, 1989), water minimization (Wang and Smith, 1994), and material recycle (El-Halwagi et al., 2003). A very successful extension was "Hydrogen Pinch", which was applied to refinery hydrogen management (Nick Hallale et al., 2002 and 2003). This allowed refiners to minimise the capital and operating costs of hydrogen supply to meet ever stricter environmental regulations and also increase hydrotreater yields.
Description
In the context of chemical engineering, process integration can be defined as a hol |
https://en.wikipedia.org/wiki/Pinch%20analysis | Pinch analysis is a methodology for minimising energy consumption of chemical processes by calculating thermodynamically feasible energy targets (or minimum energy consumption) and achieving them by optimising heat recovery systems, energy supply methods and process operating conditions. It is also known as process integration, heat integration, energy integration or pinch technology.
The process data is represented as a set of energy flows, or streams, as a function of heat load (product of specific enthalpy and mass flow rate; SI unit W) against temperature (SI unit K). These data are combined for all the streams in the plant to give composite curves, one for all hot streams (releasing heat) and one for all cold streams (requiring heat). The point of closest approach between the hot and cold composite curves is the pinch point (or just pinch) with a hot stream pinch temperature and a cold stream pinch temperature. This is where the design is most constrained. Hence, by finding this point and starting the design there, the energy targets can be achieved using heat exchangers to recover heat between hot and cold streams in two separate systems, one for temperatures above pinch temperatures and one for temperatures below pinch temperatures. In practice, during the pinch analysis of an existing design, often cross-pinch exchanges of heat are found between a hot stream with its temperature above the pinch and a cold stream below the pinch. Removal of those exchangers by alternative matching makes the process reach its energy target.
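The energy targets can be computed with the standard "problem table" heat cascade. The sketch below shifts hot-stream temperatures down by the full ΔTmin (an equivalent alternative to shifting both stream kinds by ΔTmin/2), sums the heat surplus or deficit in each shifted-temperature interval, and cascades the result; the stream data in the example are invented for illustration.

```python
def pinch_targets(streams, dtmin=10.0):
    """Minimum hot/cold utility targets via the problem-table cascade.
    streams: (kind, supply_T, target_T, CP) with CP in kW/K; hot stream
    temperatures are shifted down by dtmin onto a common scale."""
    shifted = [(k, ts - dtmin, tt - dtmin, cp) if k == "hot"
               else (k, ts, tt, cp) for k, ts, tt, cp in streams]
    temps = sorted({t for _, ts, tt, _ in shifted for t in (ts, tt)},
                   reverse=True)
    cum, min_cum = 0.0, 0.0
    for hi, lo in zip(temps, temps[1:]):
        net = 0.0    # heat surplus (+) or deficit (-) in this interval
        for kind, ts, tt, cp in shifted:
            if max(ts, tt) >= hi and min(ts, tt) <= lo:  # stream spans it
                net += cp * (hi - lo) * (1 if kind == "hot" else -1)
        cum += net
        min_cum = min(min_cum, cum)
    q_hot = -min_cum          # feed just enough heat in at the top
    q_cold = cum + q_hot      # whatever then cascades out at the bottom
    return q_hot, q_cold

# Invented example: hot stream 150->30 C (CP 1.5), cold 100->180 C (CP 2.0).
streams = [("hot", 150.0, 30.0, 1.5), ("cold", 100.0, 180.0, 2.0)]
print(pinch_targets(streams))   # -> (100.0, 120.0) kW
```

The shifted temperature at which the cascade touches its minimum is the pinch; a full implementation would also report it and the interval table, which this sketch omits.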
History
In 1971, Ed Hohmann stated in his PhD thesis that "one can compute the least amount of hot and cold utilities required for a process without knowing the heat exchanger network that could accomplish it. One also can estimate the heat exchange area required".
In late 1977, Ph.D. student Bodo Linnhoff under the supervision of Dr John Flower at the University of Leeds showed the existence in many processes of a heat integration bottle |
https://en.wikipedia.org/wiki/DPVweb | DPVweb is a database for virologists working on plant viruses combining taxonomic, bioinformatic and symptom data.
Description
DPVweb is a central web-based source of information about viruses, viroids and satellites of plants, fungi and protozoa.
It provides comprehensive taxonomic information, including brief descriptions of each family and genus, and classified lists of virus sequences. It makes use of a large database that also holds detailed, curated, information for all sequences of viruses, viroids and satellites of plants, fungi and protozoa that are complete or that contain at least one complete gene. There are currently about 10,000 such sequences. For comparative purposes, DPVweb also contains a representative sequence of all other fully sequenced virus species with an RNA or single-stranded DNA genome. For each curated sequence the database contains the start and end positions of each feature (gene, non-translated region, etc.), and these have been checked for accuracy. As far as possible, the nomenclature for genes and proteins are standardized within genera and families. Sequences of features (either as DNA or amino acid sequences) can be directly downloaded from the website in FASTA format.
The sequence information can also be accessed via client software for personal computers.
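Since the site serves feature sequences in FASTA format, a downloaded file can be read with a few lines of code. The parser below is a generic, minimal sketch; DPVweb's exact header conventions are not assumed, and the sample records are invented.

```python
def parse_fasta(text):
    """Minimal FASTA parser: map each '>' header line to its sequence."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if header is not None:
        records[header] = "".join(chunks)
    return records

# made-up example records, just to show the format
sample = ">coat_protein example\nATGGCT\nTACGGA\n>satellite_rna\nGGAUCC\n"
records = parse_fasta(sample)  # {'coat_protein example': 'ATGGCTTACGGA', ...}
```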
History
The Descriptions of Plant Viruses (DPVs) were first published by the Association of Applied Biologists in 1970 as a series of leaflets, each one written by an expert describing a particular plant virus. In 1998 all of the 354 DPVs published in paper were scanned, and converted into an electronic format in a database and distributed on CDROM. In 2001 the descriptions were made available on the new DPVweb site, providing open access to the now 400+ DPVs (currently 415) as well as taxonomic and sequence data on all plant viruses.
Uses
DPVweb is an aid to researchers in the field of plant virology as well as an educational resource for students of virology and molec |
https://en.wikipedia.org/wiki/Protocol%20pipelining | Protocol pipelining is a technique in which multiple requests are written out to a single socket without waiting for the corresponding responses. Pipelining can be used in various application layer network protocols, like HTTP/1.1, SMTP and FTP.
The pipelining of requests results in a dramatic improvement in protocol performance, especially over high-latency connections (such as satellite Internet connections), because it reduces the time a process spends waiting on round trips.
See also
HTTP pipelining |
https://en.wikipedia.org/wiki/Cassareep | Cassareep is a thick black liquid made from cassava root, often with additional spices, which is used as a base for many sauces and especially in Guyanese pepperpot. Besides use as a flavoring and browning agent, it is commonly regarded as a food preservative although laboratory testing is inconclusive.
Production
Cassareep is made from the juice of the bitter cassava root, which is poisonous (it contains acetone cyanohydrin, a compound which decomposes to the highly toxic hydrogen cyanide on contact with water). Hydrogen cyanide, traditionally called "prussic acid", is volatile and quickly dissipates when heated. Nevertheless, improperly cooked cassava has been blamed for a number of deaths. Amerindians from Guyana reportedly made an antidote by steeping chili peppers in rum.
To make cassareep, the juice is boiled until it is reduced by half in volume, to the consistency of molasses and flavored with spices—including cloves, cinnamon, salt, sugar, and cayenne pepper. Traditionally, cassareep was boiled in a soft pot, the actual "pepper pot", which would absorb the flavors and also impart them (even if dry) to foods such as rice and chicken cooked in it.
Most cassareep is exported from Guyana. The natives of Guyana traditionally brought the product to town in bottles, and it is available on the US market in bottled form. Though the cassava root traveled from Brazil to Africa, where the majority of cassava is grown, there is no production of cassareep in Africa.
Culinary use
Cassareep is used for two distinct goals, that originate from two important aspects of the ingredient: its particular flavor, and its preservative quality.
Cassareep is essential in the preparation of pepperpot, and gives the dish its "distinctive bittersweet flavor." Cassareep can also be used as an added flavoring to dishes, "imparting upon them the richness and flavour of strong beef-soup."
A peculiar quality of cassareep, which works as an antiseptic, is that it allows food to be kept |
https://en.wikipedia.org/wiki/Common%20Terminology%20Criteria%20for%20Adverse%20Events | The Common Terminology Criteria for Adverse Events (CTCAE), formerly called the Common Toxicity Criteria (CTC or NCI-CTC), are a set of criteria for the standardized classification of adverse effects of drugs used in cancer therapy.
The CTCAE system is a product of the US National Cancer Institute (NCI).
The first iteration predates 1998; version 2.0 was released in 1999. CTCAE version 4.0 followed in 2009, with an update to version 4.03 in 2010. The current version, 5.0, was released on November 27, 2017. Many clinical trials, now extending beyond oncology, encode their observations based on the CTCAE system.
It uses a range of grades from 1 to 5. Specific conditions and symptoms may have values or descriptive comments for each level, but the general guideline is:
1 - Mild
2 - Moderate
3 - Severe
4 - Life-threatening
5 - Death
Grade 1: Mild; asymptomatic or mild symptoms; clinical or diagnostic observations only; intervention not indicated.
Grade 2: Moderate; minimal, local, or noninvasive intervention indicated.
Grade 3: Severe or medically significant but not immediately life-threatening; may be disabling or limit self-care in activities of daily living (ADL).
Grade 4: Life-threatening consequences; urgent intervention indicated.
Grade 5: Death related to the adverse event.
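The general grade-to-severity mapping is easy to encode; the sketch below mirrors the guideline table above, not any official NCI data file.

```python
# General CTCAE severity labels by grade (per the guideline above)
CTCAE_GRADES = {
    1: "Mild",
    2: "Moderate",
    3: "Severe",
    4: "Life-threatening",
    5: "Death",
}

def severity(grade):
    """Return the general severity label for a CTCAE grade (1-5)."""
    return CTCAE_GRADES[grade]
```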
https://en.wikipedia.org/wiki/Ninja%20JaJaMaru-kun | is an action-platform video game developed and published by Jaleco for the Famicom. It was released in Japan on November 15, 1985, and was ported to the MSX in 1986. The MSX version was released in Europe as Ninja II, being marketed as a sequel to Ninja-kun: Majou no Bouken, a game that used the name Ninja for its European MSX release.
The game was a commercial success, selling nearly units. An arcade video game port for the Nintendo VS. System was released in April 1986. A remake, Ganso JaJaMaru-kun, was released in 1999 for the WonderSwan. Ninja JaJaMaru-kun was released for Nintendo's Japanese Virtual Console on December 26, 2006, and in PAL regions on September 21, 2007, as part of Ninja Week for the Hanabi Festival promotion. It was released on the North American Virtual Console on October 22 the same year. The game was released on the Nintendo Switch through Nintendo Switch Online in May 2021.
Gameplay
In Ninja JaJaMaru-kun, the player takes control of JaJaMaru, who sets out to rescue Princess Sakura from the pirate lord Namazu Dayuu, or "Catfish Pirate". JaJaMaru can run, jump, and throw shurikens at enemies, all of which are taken from Japanese folklore and are introduced before a level begins. Each level has eight enemies total, who give chase to JaJaMaru if he occupies the same floor as them. Defeating enemies will cause their spirit to appear and ascend to the top of the screen, which can be grabbed for additional points. Once all the enemies are defeated, JaJaMaru moves onto the next level.
JaJaMaru can destroy bricks scattered in levels, some of which yield power-ups when destroyed. These include a cart that temporarily makes JaJaMaru invincible, a bottle that allows him to walk through enemies, a red ball that increases speed, coins that yield extra points, and 1UPs. Some bricks contain bombs which will kill JaJaMaru if he touches them. Collecting three different power-ups will summon a giant frog named Gamapa-kun, whom JaJaMaru will ride and be comp |
https://en.wikipedia.org/wiki/Br.%20Alfred%20Shields%20FSC%20Marine%20Biological%20Station | The Br. Alfred Shields FSC Marine Biological Station is the marine laboratory of De La Salle University. It is located in Sitio Matuod, Lian, Batangas near Talim Bay and also near Mt. Tikbalang. Most of the university's undergraduate and graduate theses and research on marine science are done in this facility. Dr. Wilfredo Roehl Y. Licuanan is the current director of the Station.
History
The marine station was named after Br. Alfred Shields FSC, the founder of DLSU's Biology Department. The land where the station is now located was donated by the Limjoco Family. The Marine Station is in the Municipality of Lian in Batangas, a province in the Southern part of Luzon. The Marine Station is near Talim Bay. It is also near a hill, Mt. Tikbalang.
Facilities
The station has basic laboratory and field facilities. These include SCUBA diving gear, tanks, and compressors, as well as snorkels and masks; a small outrigger boat; a dry laboratory; reference collections of common marine organisms; computers; and various communications and video equipment. Basic housing facilities for faculty and students are also available, including two 10-bed dormitory rooms and a small kitchen. Freshwater supply is provided from a deep well and a generator is available for emergency power.
Faculty and students
A resident scientist (a faculty member of the Biology Department of the university) may be available to supervise and assist in the day-to-day activities of the station. Common users over the past year are faculty and students of the Departments of Biology and Chemistry, as well as natural science students from De La Salle University-Dasmariñas, University of Santo Tomas, University of the Philippines, and the Ateneo de Manila University.
Research and outreach
Recent research and outreach activities conducted by the MBS include: a marine resource assessment of Cauayan, Negros Occidental, evaluation of coral reef conservation at Maricaban Strait, and various undergraduate and graduate th |
https://en.wikipedia.org/wiki/Robert%20G.%20Bartle | Robert Gardner Bartle (November 20, 1927 – September 18, 2003) was an American mathematician specializing in real analysis. He is known for writing the popular textbooks The Elements of Real Analysis (1964), The Elements of Integration (1966), and Introduction to Real Analysis (2011) with Donald R. Sherbert, published by John Wiley & Sons.
Bartle was born in Kansas City, Missouri, and was the son of Glenn G. Bartle and Wanda M. Bartle.
He was married to Doris Sponenberg Bartle (born 1927) from 1952 to 1982 and they had two sons, James A. Bartle (born 1955) and John R. Bartle (born 1958). He was on the faculty of the Department of Mathematics at the University of Illinois from 1955 to 1990.
Bartle was Executive Editor of Mathematical Reviews from 1976 to 1978 and from 1986 to 1990. From 1990 to 1999 he taught at Eastern Michigan University. In 1997, he earned a writing award from the Mathematical Association of America for his paper "Return to the Riemann Integral". |
https://en.wikipedia.org/wiki/Transcritical%20cycle | A transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states. In particular, for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour and/or supercritical conditions during the expansion phase. The ultrasupercritical steam Rankine cycle, in which water is the working fluid, is a widespread transcritical cycle in fossil-fuel electricity generation. Other typical applications of transcritical cycles for power generation are organic Rankine cycles, which are especially suitable for exploiting low-temperature heat sources, such as geothermal energy, heat recovery applications, or waste-to-energy plants. With respect to subcritical cycles, the transcritical cycle by definition exploits higher pressure ratios, a feature that ultimately yields higher efficiencies for the majority of working fluids. Compared with supercritical cycles, transcritical cycles can achieve higher specific work because the compression work is relatively small. This demonstrates the potential of transcritical cycles to produce the most power (measurable in terms of the cycle specific work) with the least expenditure (measurable in terms of the energy spent to compress the working fluid).
While in single-level supercritical cycles both pressure levels are above the critical pressure of the working fluid, in transcritical cycles one pressure level is above the critical pressure and the other is below. In the refrigeration field, carbon dioxide (CO2) is increasingly considered of interest as a refrigerant.
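A rough back-of-the-envelope calculation shows why compression work matters so little when the fluid is compressed as a liquid: pump work is approximately v·Δp, and liquid water's specific volume is tiny. All figures below are assumed, order-of-magnitude values, not data from a specific plant.

```python
# Order-of-magnitude sketch (assumed values): pumping liquid water to a
# supercritical pressure costs far less work than the turbine produces,
# which is why transcritical power cycles compress in the liquid region.
v_liquid = 0.001          # m^3/kg, specific volume of liquid water (approx.)
delta_p = 25e6            # Pa, pressure rise to a supercritical level (assumed)
w_pump = v_liquid * delta_p            # ~25 kJ/kg of pump work
w_turbine = 1.2e6                      # J/kg, assumed turbine specific work
back_work_ratio = w_pump / w_turbine   # fraction of output spent on compression
```

With these assumptions the back-work ratio is about 2%, far below the values typical of gas cycles, where the compressor handles vapour.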
Transcritical conditions of the working fluid
In transcritical cycles, the pressure of the working fluid at the outlet of the pump is higher than the critical pressure, while the inlet conditions are close to the saturated
https://en.wikipedia.org/wiki/Comparison%20of%20FTP%20server%20software%20packages |
Graphical
Console/terminal-based
Libraries
Summary board
Graphical UI based FTP Servers
Terminal/Console based FTP Servers
See also
File Transfer Protocol (FTP)
Comparison of FTP client software
FTPS (FTP over SSL/TLS)
FTP over SSH
SSH File Transfer Protocol (SFTP)
Comparison of SSH servers
Comparison of SSH clients
Notes
External links
FTP servers
https://en.wikipedia.org/wiki/Consolidated%20Engineering%20Corporation | Consolidated Engineering Corporation was a chemical instrument manufacturer from 1937 to 1960 when it became a subsidiary of Bell and Howell Corp.
History
CEC was founded in 1937 by Herbert Hoover Jr., eldest son of former United States president Herbert Hoover, as sole proprietor. Harold Washburn was hired in 1938 as VP for Research, with a mandate to develop instruments applicable to petroleum prospecting.
Like his father, Hoover had trained as a mining engineer at Stanford University, studying under Washburn. He earned a PhD in Electrical Engineering from California Institute of Technology in 1932. His thesis professor was Ernest Lawrence, a physicist at the University of California, Berkeley. Four physicists from California Institute of Technology were hired into the Research Department in a project to develop a mass spectrometer. The initial product was the 21-101 Mass Spectrometer, delivered in December 1942 and installed in early 1943 at an initial price of $12,000, with no options.
CEC became a publicly held corporation in 1945, with Hoover selling all of his stock. Philip Fogg became President. The name changed to Consolidated Electrodynamics Corp. in 1955, because some states required that a service engineer for an engineering company be a licensed engineer in that state.
The mass spectrometer products and other analytical instrument products were separated from other product lines in a “Chemical Instruments” marketing department sometime between 1945 and 1948 with Harold Wiley as Manager for Chemical Instruments. The Chemical Instruments Department became the Analytical and Control Division in about 1959 with Harold Wiley as General Manager. This name was later changed to the Analytical Instruments Div.
Acquisition by Bell
CEC became a subsidiary of Bell & Howell in 1960. In 1968 the CEC Corporation was dissolved and CEC became the Electronics Instrument Group of Bell and Howell. In the mid-1970s the Analytical Instruments Division of Bell and Howell |
https://en.wikipedia.org/wiki/Critical%20hours | Critical hours for radio stations is the time from sunrise to two hours after sunrise, and from two hours before sunset until sunset, local time. During this time, certain American radio stations may be operating with reduced power as a result of Section 73.187 of the Federal Communications Commission's rules.
Canadian restricted hours are similar to critical hours, except that the restriction results from the January 17, 1984, U.S.-Canadian AM Agreement. Canadian restricted hours are called "critical hours" in the U.S.-Canadian Agreement, but in the AM Engineering database, the FCC calls them "Canadian restricted hours" to distinguish them from the domestically defined critical hours. Canadian restricted hours is that time from sunrise to one and one-half hours after sunrise, and from one and one-half hours before sunset until sunset, local time. U.S. stations operate with restricted hours because of Canadian stations, and vice versa.
Those radio stations that must lower their power during critical hours are required to do so because this is when the propagation of radio waves changes from groundwave to skywave (at sunset) or vice versa (at sunrise). This can cause radio stations to be picked up much farther away, possibly causing interference with other stations on the same or adjacent frequencies. Usually, stations operating under critical-hours restrictions must sign off the air between the end of the evening critical hours and the beginning of the morning critical hours. In effect, permission to operate during critical hours gives daytime-only stations a few more hours in their broadcast day. This is especially important in autumn and winter, when these stations might otherwise need to be off the air during the important morning and afternoon drive times, when AM radio listening is at its highest.
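The two daily windows are straightforward to compute from local sunrise and sunset; the times below are arbitrary examples, not any station's actual schedule.

```python
from datetime import datetime, timedelta

def critical_hours(sunrise, sunset):
    """U.S. critical hours: sunrise to 2 h after sunrise,
    and 2 h before sunset to sunset (local time)."""
    two_hours = timedelta(hours=2)
    return (sunrise, sunrise + two_hours), (sunset - two_hours, sunset)

morning, evening = critical_hours(datetime(2024, 1, 15, 7, 20),
                                  datetime(2024, 1, 15, 17, 5))
# morning window: 07:20-09:20; evening window: 15:05-17:05
```

The Canadian restricted-hours variant would use timedelta(hours=1, minutes=30) instead.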
See also
Pre-sunrise and post-sunset authorization |
https://en.wikipedia.org/wiki/USNS%20Adventurous | USNS Adventurous (T-AGOS-13) was a Stalwart-class modified tactical auxiliary general ocean surveillance ship of the United States Navy in service from 1988 to 1992. She was in non-commissioned service in the Military Sealift Command from 1988 to 1992, operating during the final years of the Cold War. She was transferred to the National Oceanic and Atmospheric Administration (NOAA) in 1992 and in 2003 was commissioned into service with NOAA as the fisheries research ship NOAAS Oscar Elton Sette (R 335).
Construction
The U.S. Navy awarded the contract to build Adventurous to VT Halter Marine, Inc., on 5 April 1985. She was laid down at VT Halter Marine's shipyard at Moss Point, Mississippi, on 19 December 1985 and launched on 23 September 1987. VT Halter Marine delivered her to the Navy on 19 August 1988.
U.S. Navy service
The U.S. Navy placed the ship in non-commissioned service with the Military Sealift Command upon delivery as USNS Adventurous (T-AGOS-13). Designed to collect underwater acoustical data in support of anti-submarine warfare operations, Adventurous spent the final years of the Cold War towing sonar equipment to hunt for Soviet Navy submarines. She operated with a mixed crew of Navy personnel and civilian merchant mariners.
The Cold War ended with the collapse of the Soviet Union in late December 1991. The Navy withdrew Adventurous from service on 5 June 1992 and struck her from the Naval Vessel Register the same day.
National Oceanic and Atmospheric Administration service
Acquisition and conversion
On the same day the Navy took her out of service, the ship was transferred to the National Oceanic and Atmospheric Administration (NOAA). NOAA originally intended to assign the ship the hull number R 331 and to convert her for use as a survey ship, but after a stint as a platform for basic training in 1993 she was laid up without having undergone any modifications. She remained inactive until October 2001, when she arrived at Jacksonville, Flo |
https://en.wikipedia.org/wiki/USNS%20Relentless | USNS Relentless (T-AGOS-18) was a Stalwart-class modified tactical auxiliary general ocean surveillance ship in service in the United States Navy from 1990 to 1993. Since 1998, she has been in commission in the National Oceanic and Atmospheric Administration (NOAA) fleet as the fisheries research ship NOAAS Gordon Gunter (R 336).
Construction
The U.S. Navy ordered Relentless from VT Halter Marine, on 20 February 1987. VT Halter Marine laid her down at Moss Point, Mississippi, on 22 April 1988, launched her on 12 May 1989, and delivered her to the U.S. Navy on 12 January 1990.
United States Navy service
On the day of her delivery, the U.S. Navy placed the ship in non-commissioned service in the Military Sealift Command as USNS Relentless (T-AGOS-18). Like the other Stalwart-class ships, she was designed to collect underwater acoustical data in support of Cold War anti-submarine warfare operations against Soviet Navy submarines using Surveillance Towed Array Sensor System (SURTASS) sonar equipment. She operated with a mixed crew of U.S. Navy personnel and civilian merchant mariners.
After the Cold War ended with the collapse of the Soviet Union in late December 1991, the requirement for SURTASS collection declined. The Navy took Relentless out of service on 17 March 1993 and transferred her to the National Oceanic and Atmospheric Administration (NOAA) the same day. She was stricken from the Naval Vessel Register on 20 May 1993.
National Oceanic and Atmospheric Administration service
NOAA converted the ship into a fisheries research ship and commissioned her into NOAA service as NOAAS Gordon Gunter (R 336) on 28 August 1998. She replaced the decommissioned NOAA fisheries research ship NOAAS Chapman (R 446).
Capabilities
Gordon Gunter is outfitted for fishing operations employing stern trawling, longlining, plankton tows, dredging, and trap fishing. She is fitted with modern navigation electronics and oceanographic winches, as well as sophisticated sensors and sa |
https://en.wikipedia.org/wiki/Association%20of%20Mathematics%20Teachers%20of%20India | The Association of Mathematics Teachers of India or AMTI is an academically oriented body of professionals and students interested in the fields of mathematics and mathematics education.
The AMTI's main base is Tamil Nadu, but it has recently been spreading its network in other parts of India, particularly in South India.
Examinations and Olympiads
National Mathematics Talent Contest
AMTI conducts a National Mathematics Talent Contest or NMTC at Primary (Gauss Contest) (Standards 4 to 6), Sub-junior (Kaprekar Contest) (Standards 7 and 8), Junior (Bhaskara Contest) (Standards 9 and 10), Inter (Ramanujan Contest) (Standards 11 and 12) and Senior (Aryabhata Contest) (B.Sc.) levels. For students at the Junior and Inter levels from Tamil Nadu, the NMTC also plays the role of Regional Mathematical Olympiad. Although the question papers are different for Junior and Inter levels, students from both levels may be chosen to appear at INMO based on their performance.
The NMTC is usually held around the last week of October. A preliminary examination is conducted earlier (in September) for all levels except B.Sc. students. Students (Junior and Inter) qualifying the preliminary examination are invited for an Orientation Camp one week before the NMTC, where Olympiad problems and theories are taught. This is also useful for those students qualifying further for INMO.
Grand Achievement Test
This test is for students studying in 12th standard under the Tamil Nadu State Board. It is intended to give a perfectly simulated atmosphere of the board's examination.
Training Activities
Ten-week training session
In 2005, AMTI started a ten-week training programme for students for Olympiad-related problems. The training batches were split into:
Primary level: Standards 4 to 6
Sub-junior level: Standards 7 and 8
Junior level: Standards 9 and 10
Inter level: Standards 11 and 12
Around 85 students attended the ten-week training session.
AMTI conducted the programme again in 2006, |
https://en.wikipedia.org/wiki/Plant%20perception%20%28physiology%29 | Plant perception is the ability of plants to sense and respond to the environment by adjusting their morphology and physiology. Botanical research has revealed that plants are capable of reacting to a broad range of stimuli, including chemicals, gravity, light, moisture, infections, temperature, oxygen and carbon dioxide concentrations, parasite infestation, disease, physical disruption, sound, and touch. The scientific study of plant perception is informed by numerous disciplines, such as plant physiology, ecology, and molecular biology.
Aspects of perception
Light
Many plant organs contain photoreceptors (phototropins, cryptochromes, and phytochromes), each of which reacts very specifically to certain wavelengths of light. These light sensors tell the plant if it is day or night, how long the day is, how much light is available, and where the light is coming from. Shoots generally grow towards light, while roots grow away from it, responses known as phototropism and skototropism, respectively. They are brought about by light-sensitive pigments like phototropins and phytochromes and the plant hormone auxin.
Many plants exhibit certain behaviors at specific times of the day; for example, flowers that open only in the mornings. Plants keep track of the time of day with a circadian clock. This internal clock is synchronized with solar time every day using sunlight, temperature, and other cues, similar to the biological clocks present in other organisms. The internal clock coupled with the ability to perceive light also allows plants to measure the time of the day and so determine the season of the year. This is how many plants know when to flower (see photoperiodism). The seeds of many plants sprout only after they are exposed to light. This response is carried out by phytochrome signalling. Plants are also able to sense the quality of light and respond appropriately. For example, in low light conditions, plants produce more photosynthetic pigments. If the light i |
https://en.wikipedia.org/wiki/Convex%20body | In mathematics, a convex body in n-dimensional Euclidean space R^n is a compact convex set with non-empty interior.
A convex body K is called symmetric if it is centrally symmetric with respect to the origin; that is to say, a point x lies in K if and only if its antipode −x also lies in K. Symmetric convex bodies are in a one-to-one correspondence with the unit balls of norms on R^n.
Important examples of convex bodies are the Euclidean ball, the hypercube and the cross-polytope.
Kinds of convex bodies
A convex body may be defined as:
A convex set of points.
The convex hull of a set of points.
The intersection of half-spaces.
The interior of any convex polygon or convex polytope.
Polar body
If K is a bounded convex body containing the origin in its interior, the polar body is K° = {y : ⟨x, y⟩ ≤ 1 for all x in K}. The polar body has several nice properties, including (K°)° = K, K° is bounded, and if K1 ⊆ K2 then K2° ⊆ K1°. The polar body is a type of duality relation.
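As a concrete check of the polarity idea, the polar of the hypercube [-1, 1]^n is the cross-polytope {y : Σ|y_i| ≤ 1}: by convexity it suffices to test ⟨x, y⟩ ≤ 1 at the cube's vertices. The function below is an illustrative sketch with names of our choosing.

```python
import itertools

def in_polar_of_cube(y):
    """Test membership of y in the polar of the hypercube [-1, 1]^n by
    checking <x, y> <= 1 at every vertex x (sufficient by convexity)."""
    return all(
        sum(s * yi for s, yi in zip(signs, y)) <= 1 + 1e-12
        for signs in itertools.product((-1, 1), repeat=len(y))
    )

# agrees with the cross-polytope (l1-ball) description of the polar
y_in = [0.3, 0.4, 0.2]    # sum of |y_i| = 0.9 <= 1, so inside
y_out = [0.6, 0.6, 0.1]   # sum of |y_i| = 1.3 > 1, so outside
```

This also illustrates the duality of the example bodies in the article: the cube and the cross-polytope are polar to each other.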
See also |
https://en.wikipedia.org/wiki/Tare%20weight | Tare weight, sometimes called unladen weight, is the weight of an empty vehicle or container.
By subtracting tare weight from gross weight (laden weight), one can determine the weight of the goods carried or contained (the net weight).
Etymology
The word tare originates from the Middle French word tare 'wastage in goods, deficiency, imperfection' (15th century), from Italian tara, from Arabic ṭarḥa, lit. 'thing deducted or rejected', from ṭaraḥa 'to reject'.
Usage
This can be useful in computing the cost of the goods carried for purposes of taxation or for tolls related to barge, rail, road, or other traffic, especially where the toll will vary with the value of the goods carried (e.g., tolls on the Erie Canal). Tare weight is often published upon the sides of railway cars and transport vehicles to facilitate the computation of the load carried. Tare weight is also used in body composition assessment when doing underwater weighing.
Tare weight is accounted for in kitchen scales, analytical (scientific) and other weighing scales which include a button that resets the display of the scales to zero when an empty container is placed on the weighing platform, in order subsequently to display only the weight of the contents of the container.
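The behaviour of a tare button can be sketched in a few lines; the class and method names are illustrative, not any real device's API.

```python
class Scale:
    """Toy model of a weighing scale with a tare (zeroing) button."""

    def __init__(self):
        self._load = 0.0     # actual weight on the platform, in grams
        self._offset = 0.0   # tare offset subtracted from the display

    def place(self, grams):
        self._load += grams

    def tare(self):
        # zero the display with the current load (e.g. an empty container)
        self._offset = self._load

    def read(self):
        # displayed (net) weight = gross weight minus tare offset
        return self._load - self._offset

scale = Scale()
scale.place(150.0)   # empty container: gross weight 150 g
scale.tare()         # display resets to 0
scale.place(320.0)   # add the contents
net = scale.read()   # 320.0 g: net weight of the contents only
```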
See also
Curb weight
Dry weight
Gross vehicle weight rating
Hydrostatic weighing
Trett (obsolete) |
https://en.wikipedia.org/wiki/Pharyngeal%20recess | Behind the ostium of the eustachian tube (ostium pharyngeum tubae auditivae) is a deep recess, the pharyngeal recess (fossa of Rosenmüller).
Clinical significance
At the base of this recess is the retropharyngeal lymph node (the Node of Rouvier). This is clinically significant in that it may be involved in certain head and neck cancers, notably nasopharyngeal cancer. |
https://en.wikipedia.org/wiki/Ethmoidal%20infundibulum | The ethmoidal infundibulum is a funnel-shaped passage (variously described as a slit-like or curved opening, space, or cleft) on the anterosuperior portion of the middle nasal meatus (and thus of the lateral wall of the nasal cavity) at the hiatus semilunaris (which represents the medial extremity of the infundibulum). The anterior ethmoidal air cells, and (usually) the frontonasal duct (which drains the frontal sinus), open into the ethmoidal infundibulum. The ethmoidal infundibulum extends anterosuperiorly from its opening into the nasal cavity.
Anatomy
The ethmoidal infundibulum is bordered medially by the uncinate process of the ethmoid bone, and laterally by the orbital plate of the ethmoid bone.
The ethmoid infundibulum leads towards the maxillary hiatus. The anterior ethmoidal cells open into the anterior part of the infundibulum.
Variation
The frontonasal duct may or may not drain into the ethmoidal infundibulum; this is determined by the place of attachment of the uncinate process of the ethmoid bone: if the uncinate process is attached to the lateral nasal wall, the frontonasal duct will open directly into the middle nasal meatus; otherwise, it will drain into the infundibulum. In slightly over 50% of subjects, the infundibulum is directly continuous with the frontonasal duct. When the anterior end of the uncinate process fuses with the anterior part of the bulla, however, this continuity is interrupted and the frontonasal duct then drains directly into the anterior end of the middle meatus.
https://en.wikipedia.org/wiki/Reciprocity%20%28evolution%29 | Reciprocity in evolutionary biology refers to mechanisms whereby the evolution of cooperative or altruistic behaviour may be favoured by the probability of future mutual interactions. A corollary is how a desire for revenge can harm the collective and therefore be naturally deselected.
Main types
Three types of reciprocity have been studied extensively:
Direct reciprocity
Indirect reciprocity
Network reciprocity
Direct reciprocity
Direct reciprocity was proposed by Robert Trivers as a mechanism for the evolution of cooperation. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favoured even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: w > c / b.
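The rule w > c/b can be checked directly; under it, a stream of cooperative rounds (benefit b at cost c, continued with probability w) outperforms one-shot defection. The function and variable names below are ours, used for illustration.

```python
def cooperation_favored(w, c, b):
    """Rule for direct reciprocity: cooperation can evolve only if the
    probability w of another encounter exceeds the cost/benefit ratio c/b."""
    return w > c / b

def expected_cooperation_payoff(w, c, b):
    """Expected payoff of sustained mutual cooperation: (b - c) per round
    over a geometric number of rounds with mean 1 / (1 - w)."""
    return (b - c) / (1.0 - w)

# with benefit 3 and cost 1, a repeat probability of 0.9 sustains cooperation
favored = cooperation_favored(0.9, 1.0, 3.0)         # True: 0.9 > 1/3
payoff = expected_cooperation_payoff(0.9, 1.0, 3.0)  # 20.0
```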
Indirect reciprocity
"In the standard framework of indirect reciprocity, there are randomly chosen pairwise encounters between members of a population; the same two individuals need not meet again. One individual acts as donor, the other as recipient. The donor can decide whether or not to cooperate. The interaction is observed by a subset of the population who might inform others. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help." In many situations cooperation is favoured and it even benefits an individual to forgive an occasional defection but cooperative societies are always unstable because mutants inclined to defect can upset any balance.
The calculations of indirect reciprocity are complicated, but again a simple rule has emerged. Indirect reciprocit |
https://en.wikipedia.org/wiki/Night%20hunting | "Night hunting", known in Bhutan as Bomena, is a traditional "courtship" custom that is practiced in some parts of Bhutan.
Similar customs have also existed in other cultures, namely in Japan.
Practice
"Night hunting" is a traditional culture of nightly courtship and romance that is practiced mostly in eastern and central rural Bhutan. There is neither the word "night" nor the word "hunting" in the original terms. The original words can be best rendered as "prowling for girls".
Young men go out at night to sneak into girls' windows to engage in sexual activities. The prowling can be solo or in groups depending on whether or not the man has a fixed date. It is the rural equivalent of an urban date. If one has talked with the girl in advance then it can be a solo activity but usually it happens after a gathering when friends decide to go prowling for girls. Most boys would have a girl in mind. Although they set out as a group, they disperse gradually as they find a partner.
Traditional two-story buildings make the prowling difficult, but the sliding window shutters, latched only with wooden latches from the inside, make it easier. Strategies vary from sneaking in through the door to climbing up the side of a house to enter a window, or even dropping in from the roof. The uniform architecture of Bhutanese houses, with the same design of doors and windows, also makes it easier. The age-old tradition has also produced special tools for undoing doors and windows. Even if the boy successfully infiltrates the dwelling, he may still be rejected by the girl he is pursuing.
The prowling may be foiled by a misstep, which can wake the whole family. The intruder may be chased away with hot water splashed on him, or be thrown out of the window. Strict parents chase the intruder off or threaten him with marriage or a stick, while liberal ones pretend to be asleep even if they know the prowler is around. This is more likely if they know the prowler is a suitor they would like to have for their daughter.
https://en.wikipedia.org/wiki/Convergence%20problem | In the analytic theory of continued fractions, the convergence problem is the determination of conditions on the partial numerators ai and partial denominators bi that are sufficient to guarantee the convergence of the continued fraction
This convergence problem for continued fractions is inherently more difficult than the corresponding convergence problem for infinite series.
Elementary results
When the elements of an infinite continued fraction consist entirely of positive real numbers, the determinant formula can easily be applied to demonstrate when the continued fraction converges. Since the denominators Bn cannot be zero in this simple case, the problem boils down to showing that the product of successive denominators BnBn+1 grows more quickly than the product of the partial numerators a1a2a3...an+1. The convergence problem is much more difficult when the elements of the continued fraction are complex numbers.
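The positive-element case can be made concrete with the fundamental recurrences for the numerators and denominators of the convergents. The following Python sketch is illustrative (the golden-ratio example is a standard one, not taken from the source):

```python
import itertools

def convergents(b0, a_seq, b_seq):
    """Yield the convergents A_n/B_n of b0 + a1/(b1 + a2/(b2 + ...))
    via the fundamental recurrences
    A_n = b_n*A_{n-1} + a_n*A_{n-2},  B_n = b_n*B_{n-1} + a_n*B_{n-2},
    with A_{-1} = 1, A_0 = b0, B_{-1} = 0, B_0 = 1."""
    A_prev, A = 1, b0
    B_prev, B = 0, 1
    for a_n, b_n in zip(a_seq, b_seq):
        A_prev, A = A, b_n * A + a_n * A_prev
        B_prev, B = B, b_n * B + a_n * B_prev
        yield A / B

# With every a_n = b_n = 1 the convergents are ratios of consecutive
# Fibonacci numbers, which converge to the golden ratio.
xs = list(itertools.islice(
    convergents(1, itertools.repeat(1), itertools.repeat(1)), 30))
golden = (1 + 5 ** 0.5) / 2
assert abs(xs[-1] - golden) < 1e-10
```

Because all elements are positive here, every denominator B_n is positive, matching the simple case described above.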
Periodic continued fractions
An infinite periodic continued fraction is a continued fraction of the form
where k ≥ 1, the sequence of partial numerators {a1, a2, a3, ..., ak} contains no values equal to zero, and the partial numerators {a1, a2, a3, ..., ak} and partial denominators {b1, b2, b3, ..., bk} repeat over and over again, ad infinitum.
By applying the theory of linear fractional transformations to
where Ak-1, Bk-1, Ak, and Bk are the numerators and denominators of the k-1st and kth convergents of the infinite periodic continued fraction x, it can be shown that x converges to one of the fixed points of s(w) if it converges at all. Specifically, let r1 and r2 be the roots of the quadratic equation
These roots are the fixed points of s(w). If r1 and r2 are finite then the infinite periodic continued fraction x converges if and only if
the two roots are equal; or
the k-1st convergent is closer to r1 than it is to r2, and none of the first k convergents equal r2.
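A small numerical illustration of the fixed-point criterion, with hypothetical values a = 1, b = 2 for a period-1 continued fraction (chosen so the fixed points come out as −1 ± √2):

```python
import math

# Hypothetical period-1 example: x = a/(b + a/(b + ...)) with a = 1, b = 2.
# Here s(w) = a/(b + w), and its fixed points are the roots of
# w**2 + b*w - a = 0, i.e. r1 = -1 + sqrt(2) and r2 = -1 - sqrt(2).
a, b = 1.0, 2.0
r1 = -b / 2 + math.sqrt(b * b / 4 + a)  # attracting fixed point
r2 = -b / 2 - math.sqrt(b * b / 4 + a)  # repelling fixed point

w = 0.0
for _ in range(40):  # each iteration of s appends one more period
    w = a / (b + w)

assert abs(w - r1) < 1e-12      # the continued fraction converges to r1
assert abs(r1 - (math.sqrt(2) - 1)) < 1e-15
```

The iteration converges to the attracting root r1 = √2 − 1, consistent with the statement that a convergent periodic continued fraction converges to one of the fixed points of s(w).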
If the denominator Bk-1 is equal to zero then an infinite number of the deno |
https://en.wikipedia.org/wiki/Sofar%20bomb | In oceanography, a sofar bomb (Sound Fixing And Ranging bomb), occasionally referred to as a sofar disc, is a long-range position-fixing system that uses impulsive sounds in the deep sound channel (SOFAR channel) of the ocean to enable pinpointing of the location of ships or crashed planes. The deep sound channel is ideal for the device, as the minimum speed of sound at that depth improves the signal's traveling ability. A position is determined from the differences in arrival times at receiving stations of known geographic locations. The useful range from the signal sources to the receiver can exceed .
Design
For this device to work as intended, it must have several qualities. Firstly, the bomb needs to detonate at the correct depth, so that it can take full advantage of the deep sound channel. The sofar bomb has to sink fast enough so that it reaches the required depth within a reasonable amount of time (usually about 5 minutes).
To determine the position of a sofar bomb that has been detonated, three or more naval stations combine their reports of when they received the signal.
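The arrival-time combination can be sketched as a time-difference-of-arrival (TDOA) fix. The following is a toy 2-D example with hypothetical station positions and a grid search standing in for the least-squares hyperbolic fix a real system would use:

```python
import math

# Toy 2-D illustration (hypothetical station layout and units; a real fix
# uses geodetic station positions and the measured channel sound speed).
C = 1.48  # km/s, a nominal sound speed near the deep sound channel axis

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_src = (30.0, 55.0)  # the unknown position we want to recover

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Each station records an arrival time; only the *differences* carry
# information, since the detonation time itself is unknown.
arrivals = [dist(true_src, s) / C for s in stations]
tdoa = [t - arrivals[0] for t in arrivals]

# Brute-force search for the grid point whose predicted time differences
# best match the measured ones.
best, best_err = None, float("inf")
for xi in range(101):
    for yi in range(101):
        p = (float(xi), float(yi))
        pred = [dist(p, s) / C for s in stations]
        err = sum((pd - pred[0] - td) ** 2
                  for pd, td in zip(pred, tdoa))
        if err < best_err:
            best, best_err = p, err

assert best == (30.0, 55.0)
```

With three or more stations the time differences pin the source to the intersection of hyperbolae; the fourth station here removes any residual ambiguity.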
Benefits of the deep sound channel
Detonating the sofar bomb in the deep sound channel gives it huge benefits. The channel itself helps keep the sound waves contained within the same depth, as the rays of sound that have an upward or downward velocity are pushed back towards the deep sound channel because of refraction. Because the sound waves do not spread out vertically, the horizontal sound rays maintain far more strength than they would otherwise. This makes it far easier for the stations on shore to pick up and analyze the signal. Usually, the blasts use frequencies between 30 and 150 Hz, which also helps stop the signal from weakening too much. A side effect of this is that the slightly higher frequencies of sound waves emitted move a bit faster than the lower frequencies, making the signal that the naval stations hear have a longer duration.
History
Dr. Maurice Ewing, a pioneer |
https://en.wikipedia.org/wiki/Single%20version%20of%20the%20truth | In computerized business management, single version of the truth (SVOT), is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form. This contrasts with the related concept of single source of truth (SSOT), which refers to a data storage principle to always source a particular piece of information from one place.
Applied to message sequencing
In some systems, particularly message-processing systems (often real-time systems), this term also refers to the goal of establishing a single agreed sequence of messages within a database formed by a particular but arbitrary ordering of records. The key concept is that data combined in a certain sequence constitutes a "truth" which may be analyzed and processed to give particular results. Although the sequence is arbitrary, and another correct but equally arbitrary ordering would ultimately give different results in any analysis, it is desirable to agree that the sequence enshrined in the "single version of the truth" is the version that will be considered "the truth", so that any conclusions drawn from analysis of the database are valid and unarguable. In a technical context, the database may be duplicated to a backup environment to ensure that a persistent record is kept of the "single version of the truth".
The key point is when the database is created using an external data source (such as a sequence of trading messages from a stock exchange) an arbitrary selection is made of one possibility from two or more equally valid representations of the input data, but henceforth the decision sets "in stone" one and only one version of the truth.
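The "arbitrary selection set in stone" can be sketched as a sequencer that imposes one deterministic total order on messages. This is a minimal illustration with a hypothetical message format, not any particular exchange's protocol:

```python
# Among several equally valid orderings of messages that share a
# timestamp, pick one deterministic total order and freeze it with
# sequence numbers.
msgs = [
    {"ts": 5, "source": "B", "body": "sell 10"},
    {"ts": 5, "source": "A", "body": "buy 10"},  # same timestamp as above
    {"ts": 3, "source": "C", "body": "buy 5"},
]

# Tiebreak by source id: an arbitrary but deterministic choice.  Once the
# sequence numbers are assigned, downstream analysis, replication and
# backups all work from this one agreed ordering.
sequenced = sorted(msgs, key=lambda m: (m["ts"], m["source"]))
for seq, m in enumerate(sequenced, start=1):
    m["seq"] = seq

assert [m["body"] for m in sequenced] == ["buy 5", "buy 10", "sell 10"]
```

Sorting by (timestamp, source) is only one of several equally valid tiebreaks; the point is that whichever one is chosen becomes the single version of the truth.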
As applied to message sequencing
Critics of SVOT as applied to message sequencing argue that this concept is not scalable. As the world moves towards systems spread over many processing |
https://en.wikipedia.org/wiki/Programming%20language%20reference | In computing, a programming language reference or language reference manual is part of the documentation associated with most mainstream programming languages. It is written for users and developers, and describes the basic elements of the language and how to use them in a program. For a command-based language, for example, this will include details of every available command and of the syntax for using it.
The reference manual is usually separate and distinct from a more detailed programming language specification meant for implementors of the language rather than those who simply use it to accomplish some processing task.
There may also be a separate introductory guide aimed at giving newcomers enough information to start writing programs, after which they can consult the reference manual for full details. Frequently, however, a single publication contains both the introductory material and the language reference.
External links
Ada 2005 Language Reference Manual (at adaic.com)
The Python Language Reference (at python.org)
The Python Language Reference Manual by Guido van Rossum and Fred L. Drake, Jr. () (at network-theory.co.uk) |
https://en.wikipedia.org/wiki/American%20Association%20of%20Physics%20Teachers | The American Association of Physics Teachers (AAPT) was founded in 1930 for the purpose of "dissemination of knowledge of physics, particularly by way of teaching." There are more than 10,000 members in over 30 countries. AAPT publications include two peer-reviewed journals, the American Journal of Physics and The Physics Teacher. The association has two annual National Meetings (winter and summer) and has regional sections with their own meetings and organization. The association also offers grants and awards for physics educators, including the Richtmyer Memorial Award and programs and contests for physics educators and students. It is headquartered at the American Center for Physics in College Park, Maryland.
History
The American Association of Physics Teachers was founded on December 31, 1930, when forty-five physicists held a meeting during the joint APS-AAAS meeting in Cleveland specifically for that purpose.
The AAPT became a founding member of the American Institute of Physics after the other founding members were convinced of the stability of the AAPT itself after a new constitution for the AAPT was agreed upon.
Contests
The AAPT sponsors a number of competitions. The Physics Bowl, Six Flags' roller coaster contest, and the US Physics Team are just a few. The US physics team is determined by two preliminary exams and a week and a half long "boot camp." Each year, five members are selected to compete against dozens of countries in the International Physics Olympiad (IPHO).
See also
The Physics Teacher
Oersted Medal
American Institute of Physics
American Journal of Physics
Physics outreach |
https://en.wikipedia.org/wiki/Center%20of%20gravity%20of%20an%20aircraft | The center of gravity (CG) of an aircraft is the point over which the aircraft would balance. Its position is calculated after supporting the aircraft on at least two sets of weighing scales or load cells and noting the weight shown on each set of scales or load cells. The center of gravity affects the stability of the aircraft. To ensure the aircraft is safe to fly, the center of gravity must fall within specified limits established by the aircraft manufacturer.
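The scale-based calculation reduces to dividing the total moment about the reference datum by the total weight. A short Python sketch with purely illustrative numbers (not from any real aircraft):

```python
# Hypothetical weighing of a light aircraft on three scales.  The CG arm
# is the total moment about the reference datum divided by the total
# weight.  (Numbers are illustrative, not from any real aircraft.)
scales = [
    # (weight on scale in lb, arm: distance of that wheel aft of the datum, in)
    (620.0, 40.0),  # left main wheel
    (620.0, 40.0),  # right main wheel
    (310.0, 5.0),   # nose wheel
]

total_weight = sum(w for w, _ in scales)
total_moment = sum(w * arm for w, arm in scales)
cg_arm = total_moment / total_weight  # inches aft of the reference datum

assert total_weight == 1550.0
assert abs(cg_arm - 33.0) < 1e-9
```

The resulting arm (33 in aft of the datum here) is then checked against the manufacturer's CG limits.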
Terminology
Ballast: Removable or permanently installed weight in an aircraft used to bring the center of gravity into the allowable range.
Center-of-Gravity Limits: Specified longitudinal (forward and aft) and/or lateral (left and right) limits within which the aircraft's center of gravity must be located during flight. The CG limits are indicated in the airplane flight manual. The area between the limits is called the CG range of the aircraft.
Weight and Balance: When the weight of the aircraft is at or below the allowable limit(s) for its configuration (parked, ground movement, take-off, landing, etc.) and its center of gravity is within the allowable range, and both will remain so for the duration of the flight, the aircraft is said to be within weight and balance. Different maximum weights may be defined for different situations; for example, large aircraft may have maximum landing weights that are lower than maximum take-off weights (because some weight is expected to be lost as fuel is burned during the flight). The center of gravity may change over the duration of the flight as the aircraft's weight changes due to fuel burn or by passengers moving forward or aft in the cabin.
Reference Datum: The reference datum is a reference plane that allows accurate and uniform measurements to any point on the aircraft. The location of the reference datum is established by the manufacturer and is defined in the aircraft flight manual. The horizontal reference dat
https://en.wikipedia.org/wiki/Deltopectoral%20lymph%20nodes | One or two deltopectoral lymph nodes (or infraclavicular nodes) are found beside the cephalic vein, between the pectoralis major and deltoideus, immediately below the clavicle.
They are situated in the course of the external collecting trunks of the arm.
Additional images |
https://en.wikipedia.org/wiki/Dielectric%20barrier%20discharge | Dielectric-barrier discharge (DBD) is the electrical discharge between two electrodes separated by an insulating dielectric barrier. Originally called silent (inaudible) discharge and also known as ozone production discharge or partial discharge, it was first reported by Ernst Werner von Siemens in 1857.
Process
The process normally uses high-voltage alternating current, ranging from lower RF to microwave frequencies. However, other methods have been developed to extend the frequency range all the way down to DC. One method is to cover one of the electrodes with a high-resistivity layer; this is known as the resistive barrier discharge. Another technique, which uses a semiconductor layer of gallium arsenide (GaAs) in place of the dielectric layer, enables these devices to be driven by a DC voltage between 580 V and 740 V.
Construction
DBD devices can be made in many configurations: typically planar, using parallel plates separated by a dielectric, or cylindrical, using coaxial plates with a dielectric tube between them. In a common coaxial configuration, the dielectric is shaped in the same form as common fluorescent tubing. It is filled at atmospheric pressure with either a rare gas or a rare gas-halide mix, with the glass walls acting as the dielectric barrier. Because they operate at atmospheric pressure, such processes require high energy to sustain. Common dielectric materials include glass, quartz, ceramics and polymers. The gap distance between electrodes varies considerably, from less than 0.1 mm in plasma displays, through several millimetres in ozone generators, up to several centimetres in CO2 lasers.
Depending on the geometry, DBD can be generated in a volume (VDBD) or on a surface (SDBD). For VDBD the plasma is generated between two electrodes, for example between two parallel plates with a dielectric in between. At SDBD the microdischarges are generated on the surface of a dielectric, which results in a more homogeneous plasma than can be achieved using the VD |
https://en.wikipedia.org/wiki/Central%20lymph%20nodes | A central or intermediate group of three or four large glands is embedded in the adipose tissue near the base of the axilla.
Its afferent lymphatic vessels are the efferent vessels of all the preceding groups of axillary glands; its efferents pass to the subclavicular group.
Additional images |
https://en.wikipedia.org/wiki/Marc%20Tremblay | Marc Tremblay is a distinguished engineer at Microsoft. Prior to joining Microsoft in April 2009, he was senior vice president and chief technology officer of the microelectronics business unit at Sun Microsystems. He was instrumental in the design of various microprocessors at Sun, including the UltraSPARC, UltraSPARC II, MAJC, UltraSPARC T1, and the cancelled Rock processor. In the process, he was awarded more patents than any other Sun employee.
Career
Tremblay worked at Sun Microsystems for 18 years. In 2009, he joined Microsoft's strategic software/silicon architecture group, led by chief research and strategy officer Craig Mundie. A team under Tremblay developed software and semiconductor technologies. The group, known as SiArch, works on green, adaptive and parallel computing.
Education
He received his bachelor's degree from Laval University in Canada, and both his M.S. (1985) and Ph.D. (1991) degrees from UCLA. |
https://en.wikipedia.org/wiki/Brachial%20lymph%20nodes | The brachial lymph nodes (or lateral group) are a group of four to six lymph nodes lying in relation to the medial and posterior aspects of the axillary vein; the afferents of these glands drain the whole arm, with the exception of the portion whose vessels accompany the cephalic vein.
The efferent vessels pass partly to the central and subclavicular groups of axillary glands and partly to the inferior deep cervical glands.
Additional images |
https://en.wikipedia.org/wiki/Pectoral%20axillary%20lymph%20nodes | An anterior or pectoral group consists of four or five glands along the lower border of the Pectoralis minor, in relation with the lateral thoracic artery.
Their afferents drain the skin and muscles of the anterior and lateral thoracic walls, and the central and lateral parts of the mamma; their efferents pass partly to the central and partly to the subclavicular groups of axillary glands.
Additional images |
https://en.wikipedia.org/wiki/Subscapular%20axillary%20lymph%20nodes | A posterior or subscapular group of six or seven glands is placed along the lower margin of the posterior wall of the axilla in the course of the subscapular artery.
The afferents of this group drain the skin and muscles of the lower part of the back of the neck and of the posterior thoracic wall; their efferents pass to the central group of axillary glands.
Additional images |
https://en.wikipedia.org/wiki/Apical%20lymph%20nodes | An apical (or medial or subclavicular) group of six to twelve glands is situated partly posterior to the upper portion of the Pectoralis minor and partly above the upper border of this muscle.
Its only direct territorial afferents are those that accompany the cephalic vein, and one that drains the upper peripheral part of the mamma. However, it receives the efferents of all the other axillary glands.
The efferent vessels of the subclavicular group unite to form the subclavian trunk, which opens either directly into the junction of the internal jugular and subclavian veins or into the jugular lymphatic trunk; on the left side it may end in the thoracic duct.
A few efferents from the subclavicular glands usually pass to the inferior deep cervical glands.
Additional images |
https://en.wikipedia.org/wiki/Supratrochlear%20lymph%20nodes | One or two supratrochlear lymph nodes are placed above the medial epicondyle of the humerus, medial to the basilic vein.
Their afferents drain the middle, ring, and little fingers, the medial portion of the hand, and the superficial area over the ulnar side of the forearm; these vessels are, however, in free communication with the other lymphatic vessels of the forearm.
Their efferents accompany the basilic vein and join the deeper vessels.
They are distinguished in Terminologia anatomica from the "epitrochlear" (or "cubital") lymph nodes, but the region is similar.
Clinical significance
The supratrochlear lymph nodes swell up when an infection is detected in the hand or forearm areas. They may be palpable.
Additional images
See also
Trochlea of humerus |
https://en.wikipedia.org/wiki/Behaviorally%20anchored%20rating%20scales | Behaviorally anchored rating scales (BARS) are scales used to rate performance. BARS are normally presented vertically with scale points ranging from five to nine. It is an appraisal method that aims to combine the benefits of narratives, critical incidents, and quantified ratings by anchoring a quantified scale with specific narrative examples of good, moderate, and poor performance.
Background
BARS were developed in response to dissatisfaction with the subjectivity involved in using traditional rating scales such as the graphic rating scale. A review of BARS concluded that the strength of this rating format may lie primarily in the performance dimensions which are gathered rather than the distinction between behavioral and numerical scale anchors.
Benefits of BARS
BARS are rating scales that add behavioral scale anchors to traditional rating scales (e.g., graphic rating scales). In comparison to other rating scales, BARS are intended to facilitate more accurate ratings of the target person's behavior or performance. However, although BARS is often regarded as a superior performance appraisal method, it may still suffer from unreliability, leniency bias and a lack of discriminant validity between performance dimensions.
Developing BARS
BARS are developed using data collected through the critical incident technique, or through the use of comprehensive data about the tasks performed by a job incumbent, such as might be collected through a task analysis. In order to construct BARS, several basic steps, outlined below, are followed.
Examples of effective and ineffective behavior related to the job are collected from people with knowledge of the job using the critical incident technique. Alternatively, data may be collected through the careful examination of data from a recent task analysis.
These data are then converted into performance dimensions. To convert these data into performance dimensions, examples of behavior (such as critical incidents) are sorted into homog |
https://en.wikipedia.org/wiki/Cabozoa | In the classification of eukaryotes (living organisms with a cell nucleus), Cabozoa was a taxon proposed by Cavalier-Smith. It was a putative clade comprising the Rhizaria and Excavata. More recent research places the Rhizaria with the Alveolata and Stramenopiles instead of the Excavata, however, so the "Cabozoa" is polyphyletic.
See also
Corticata |
https://en.wikipedia.org/wiki/Journal%20of%20Mathematical%20Physics | The Journal of Mathematical Physics is a peer-reviewed journal published monthly by the American Institute of Physics and devoted to the publication of papers in mathematical physics. The journal was first published bimonthly beginning in January 1960; it became a monthly publication in 1963. The current editor is Jan Philip Solovej of the University of Copenhagen. Its 2018 impact factor is 1.355.
Abstracting and indexing
This journal is indexed by the following services: |
https://en.wikipedia.org/wiki/Hypergeometric%20function%20of%20a%20matrix%20argument | In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals.
Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
Definition
Let and be integers, and let
be an complex symmetric matrix.
Then the hypergeometric function of a matrix argument
and parameter is defined as
where means is a partition of , is the generalized Pochhammer symbol, and
is the "C" normalization of the Jack function.
Two matrix arguments
If and are two complex symmetric matrices, then the hypergeometric function of two matrix arguments is defined as:
where is the identity matrix of size .
Not a typical function of a matrix argument
Unlike other functions of matrix argument, such as the matrix exponential, which are matrix-valued, the hypergeometric function of (one or two) matrix arguments is scalar-valued.
The parameter α
In many publications the parameter is omitted. Also, in different publications different values of are being implicitly assumed. For example, in the theory of real random matrices (see, e.g., Muirhead, 1984), whereas in other settings (e.g., in the complex case—see Gross and Richards, 1989), . To make matters worse, in random matrix theory researchers tend to prefer a parameter called instead of which is used in combinatorics.
The thing to remember is that
Care should be exercised as to which parameter a particular text is using, and what the particular value of that parameter is.
Typically, in settings involving real random matrices, and thus . In settings involving complex random matrices, one has and . |
https://en.wikipedia.org/wiki/Generalized%20Pochhammer%20symbol | In mathematics, the generalized Pochhammer symbol of parameter and partition generalizes the classical Pochhammer symbol, named after Leo August Pochhammer, and is defined as
It is used in multivariate analysis. |
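The definition (elided above) is the standard box product over the Young diagram of the partition: for kappa = (k1, k2, ...), the symbol is the product over boxes (i, j) of (a − (i − 1)/α + j − 1). A small sketch of this commonly used convention, with a check that it reduces to the classical rising factorial for one-part partitions:

```python
# Generalized Pochhammer symbol (a)_kappa^(alpha): product over the boxes
# (i, j) of the Young diagram of kappa of (a - (i - 1)/alpha + j - 1).
# For a one-part partition it reduces to a*(a+1)*...*(a+k-1).
def gen_pochhammer(a, kappa, alpha):
    out = 1.0
    for i, k_i in enumerate(kappa, start=1):
        for j in range(1, k_i + 1):
            out *= a - (i - 1) / alpha + j - 1
    return out

def rising_factorial(a, k):
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

# One-part partition: matches the classical Pochhammer symbol for any alpha.
assert abs(gen_pochhammer(2.5, (4,), alpha=2.0)
           - rising_factorial(2.5, 4)) < 1e-12

# Two-part partition with alpha = 1:
# (a)_{(2,1)}^{(1)} = a*(a+1) * (a-1), here 3*4*2 = 24.
assert abs(gen_pochhammer(3.0, (2, 1), alpha=1.0) - 24.0) < 1e-12
```

As with the hypergeometric function of a matrix argument, the value depends on the α convention in force, so check the convention of the text being followed.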
https://en.wikipedia.org/wiki/Jack%20function | In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
Definition
The Jack function
of an integer partition , parameter , and arguments can be recursively defined as
follows:
For m=1
For m>1
where the summation is over all partitions such that the skew partition is a horizontal strip, namely
( must be zero or otherwise ) and
where equals if and otherwise. The expressions and refer to the conjugate partitions of and , respectively. The notation means that the product is taken over all coordinates of boxes in the Young diagram of the partition .
Combinatorial formula
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials in n variables:
The sum is taken over all admissible tableaux of shape and
with
An admissible tableau of shape is a filling of the Young diagram with numbers 1,2,…,n such that for any box (i,j) in the tableau,
whenever
whenever and
A box is critical for the tableau T if and
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
C normalization
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
where
For is often denoted by and called the Zonal polynomial.
P normalization
The P normalization is given by the identity , where
where and denotes the arm and leg length respectively. Therefore, for is the usual Schur function.
Similar to Schur polynomials, can be expressed as a sum over Young tableaux. However, one needs to add an extra
https://en.wikipedia.org/wiki/Earth%20Gravitational%20Model | The Earth Gravitational Models (EGM) are a series of geopotential models of the Earth published by the National Geospatial-Intelligence Agency (NGA). They are used as the geoid reference in the World Geodetic System.
The NGA provides the models in two formats: as the series of numerical coefficients to the spherical harmonics which define the model, or a dataset giving the geoid height at each coordinate at a given resolution.
Three model versions have been published: EGM84 with n=m=180, EGM96 with n=m=360, and EGM2008 with n=m=2160. n and m are the degree and orders of harmonic coefficients; the higher they are, the more parameters the models have, and the more precise they are. EGM2008 also contains expansions to n=2190. Developmental versions of the EGM are referred to as Preliminary Gravitational Models (PGMs).
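The growth in parameter count with degree and order can be made concrete with a back-of-envelope count of the (C_nm, S_nm) coefficient pairs for each model (a sketch; the published models' exact file contents differ slightly, e.g. EGM2008's extra expansion terms):

```python
# Rough count of spherical-harmonic coefficients for a model of maximum
# degree and order N: C_nm for n = 2..N, m = 0..n, and S_nm for m = 1..n
# (S_n0 is identically zero, so it carries no information).
def coefficient_count(N):
    c = sum(n + 1 for n in range(2, N + 1))  # C_nm terms
    s = sum(n for n in range(2, N + 1))      # S_nm terms
    return c + s

counts = {name: coefficient_count(N)
          for name, N in [("EGM84", 180), ("EGM96", 360), ("EGM2008", 2160)]}

# The parameter count grows roughly as N**2, which is why EGM2008
# resolves far finer geoid detail than its predecessors.
assert counts["EGM84"] < counts["EGM96"] < counts["EGM2008"]
```

For EGM84 this gives 32,757 coefficients, versus millions for EGM2008.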
Each version of EGM has its own EPSG code as a vertical datum.
History
EGM84
The first EGM, EGM84, was defined as a part of WGS84. WGS84 combines the old GRS 80 with the then-latest data, namely available Doppler, satellite laser ranging, and Very Long Baseline Interferometry (VLBI) observations, and a new least squares method called collocation. It allowed a model with n=m=180 to be defined, providing a raster for every half degree (30′) of latitude and longitude of the world. NIMA also computed and made available 30′×30′ mean altimeter-derived gravity anomalies from the GEOSAT Geodetic Mission; a 15′×15′ version is also available.
EGM96
EGM96 from 1996 is the result of a collaboration between the National Imagery and Mapping Agency (NIMA), the NASA Goddard Space Flight Center (GSFC), and the Ohio State University. It took advantage of new surface gravity data from many different regions of the globe, including data newly released from the NIMA archives. Major terrestrial gravity acquisitions by NIMA since 1990 include airborne gravity surveys over Greenland and parts of the Arctic and the Antarctic, surveyed by the Naval Research Lab (NRL) |
https://en.wikipedia.org/wiki/Savart%20wheel | The Savart wheel is an acoustical device named after the French physicist Félix Savart (1791–1841), which was originally conceived and developed by the English scientist Robert Hooke (1635–1703).
A card held to the edge of a spinning toothed wheel will produce a tone whose pitch varies with the speed of the wheel. A mechanism of this sort, made using brass wheels, allowed Hooke to produce sound waves of a known frequency, and to demonstrate to the Royal Society in 1681 how pitch relates to frequency. For practical purposes Hooke's device was soon supplanted by the invention of the tuning fork.
About a century and a half after Hooke's work, the mechanism was taken up again by Savart for his investigations into the range of human hearing. In the 1830s Savart was able to construct large, finely-toothed brass wheels producing frequencies of up to 24 kHz that seem to have been the world's first artificial ultrasonic generators. In the later 19th century, Savart's wheels were also used in physiological and psychological investigations of time perception.
Nowadays, Savart wheels are commonly demonstrated in physics lectures, sometimes driven and sounded by an air hose (in place of the card mechanism).
Description
The basic device consists of a ratchet-wheel with a large number of uniformly spaced teeth. When the wheel is turned slowly while the edge of a card is held against the teeth a succession of distinct clicks can be heard. When the wheel is spun rapidly it produces a shrill tone, whereas if the wheel is allowed to turn more slowly the tone progressively decreases in pitch. Since the frequency of the tone is directly proportional to the rate at which the teeth strike the card, a Savart wheel can be calibrated to provide an absolute measure of pitch. Multiple wheels of different sizes, carrying different numbers of teeth, can also be attached so as to allow several pitches (or chords) to be produced while the axle is being turned at a constant rate.
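The proportionality between tooth rate and pitch can be stated in one line. A minimal sketch (the tooth counts and speeds are illustrative, not Savart's actual figures):

```python
# The card is struck once per tooth, so the tone's frequency is simply
# teeth x revolutions per second.  (Values below are illustrative,
# not Savart's actual figures.)
def wheel_frequency(teeth, revs_per_second):
    return teeth * revs_per_second

# A 110-tooth wheel spun at 4 rev/s sounds A440:
assert wheel_frequency(110, 4) == 440

# Frequencies around 24 kHz, the top of Savart's ultrasonic range, would
# need e.g. a finely cut 600-tooth wheel at 40 rev/s:
assert wheel_frequency(600, 40) == 24000
```

This linearity is what makes the calibrated wheel an absolute pitch standard: doubling the rotation rate doubles the frequency, i.e. raises the tone exactly one octave.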
Hooke's whee |
https://en.wikipedia.org/wiki/Physcomitrella%20patens | Physcomitrium patens (synonym: Physcomitrella patens), the spreading earthmoss, is a moss (bryophyte) used as a model organism for studies of plant evolution, development, and physiology.
Distribution and ecology
Physcomitrella patens is an early colonist of exposed mud and earth around the edges of pools of water. P. patens has a disjunct distribution in temperate parts of the world, with the exception of South America. The standard laboratory strain is the "Gransden" isolate, collected by H. Whitehouse from Gransden Wood, in Cambridgeshire in 1962.
Model organism
Mosses share fundamental genetic and physiological processes with vascular plants, although the two lineages diverged early in land-plant evolution. A comparative study between modern representatives of the two lines may give insight into the evolution of mechanisms that contribute to the complexity of modern plants. In this context, P. patens is used as a model organism.
P. patens is one of a few known multicellular organisms with highly efficient homologous recombination, meaning that an exogenous DNA sequence can be targeted to a specific genomic position (a technique called gene targeting) to create knockout mosses. This approach is called reverse genetics; it is a powerful and sensitive tool for studying the function of genes and, when combined with studies in higher plants such as Arabidopsis thaliana, can be used to study molecular plant evolution.
The targeted deletion or alteration of moss genes relies on the integration of a short DNA strand at a defined position in the genome of the host cell. Both ends of this DNA strand are engineered to be identical to this specific gene locus. The DNA construct is then incubated with moss protoplasts in the presence of polyethylene glycol. As mosses are haploid organisms, the regenerating moss filaments (protonemata) can be directly assayed for gene targeting within 6 weeks using PCR methods. The first study using knockout moss appeared in 1998 and |
https://en.wikipedia.org/wiki/Molecular%20Koch%27s%20postulates | Molecular Koch's postulates are a set of experimental criteria that must be satisfied to show that a gene found in a pathogenic microorganism encodes a product that contributes to the disease caused by the pathogen. Genes that satisfy molecular Koch's postulates are often referred to as virulence factors. The postulates were formulated by the microbiologist Stanley Falkow in 1988 and are based on Koch's postulates.
Postulates
As per Falkow's original descriptions, the three postulates are:
"The phenotype or property under investigation should be associated with pathogenic members of a genus or pathogenic strains of a species.
Specific inactivation of the gene(s) associated with the suspected virulence trait should lead to a measurable loss in pathogenicity or virulence.
Reversion or allelic replacement of the mutated gene should lead to restoration of pathogenicity."
To apply the molecular Koch's postulates to human diseases, researchers must identify which microbial genes are potentially responsible for symptoms of pathogenicity, often by sequencing the full genome to compare which nucleotides are homologous to the protein-coding genes of other species. Alternatively, scientists can identify which mRNA transcripts are at elevated levels in the diseased organs of infected hosts. Additionally, the tester must identify methods for inactivating and reactivating the gene being studied.
In 1996, Fredricks and Relman proposed seven molecular guidelines for establishing microbial disease causation:
"A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased (i.e., with anatomic, histologic, chemical, or clinical evidence of pathology) and not in those organs that lack pathology.
Fewer, or no, copy numbers of pathogen-associated nucleic acid sequences should occur in hosts or tissues without |
https://en.wikipedia.org/wiki/Animal%20transporter | Animal transporters are used to transport livestock or non-livestock animals over long distances. They could be specially-modified vehicles, trailers, ships or aircraft containers. While some animal transporters like horse trailers only carry a few animals, modern ships engaged in live export can carry tens of thousands.
The Animal Transportation Association campaigns for humane transporting of animals as do many other animal welfare organisations.
See also
Animal-powered transport
Drover's caboose
Horse box
Horse trailer
Livestock carrier (maritime)
Livestock transportation
Road transport
Stock car (rail) |
https://en.wikipedia.org/wiki/Soy%20candle | Soy candles are candles made from soy wax, which is a processed form of soybean oil. They are usually container candles because soy wax typically has a lower melting point than traditional waxes, but can also be made into pillar candles if certain additives are mixed into the soy wax.
Soy wax
Soy wax is made by the full hydrogenation of soybean oil; chemically this gives a triglyceride containing a high proportion of stearic acid. It is typically softer than paraffin wax, with a lower melting temperature in most combinations. However, additives can raise this melting point to temperatures typical for paraffin-based candles. The melting point ranges from 49 to 82 degrees Celsius (120 to 180 degrees Fahrenheit), depending on the blend. The density of soy wax is about 90% that of water, or 0.9 g/ml. This means nine pounds (144 oz) of wax will fill about ten 16-oz jars (160 fluid ounces of volume). Soy wax is available in flake and pellet form and has an off-white, opaque appearance. Its lower melting temperature can mean that candles will melt in hot weather. Since soy wax is usually used in container candles, this is not much of an issue.
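The jar-fill figure above follows directly from the stated density. A quick back-of-the-envelope sketch (assuming, as the text does, a density of 0.9 g/ml and that one fluid ounce of water weighs about one ounce):

```python
# Back-of-the-envelope check of the jar-fill claim, assuming soy wax
# density of ~0.9 g/ml (90% that of water) and that 1 fl oz of water
# weighs ~1 oz.
wax_weight_oz = 9 * 16               # nine pounds = 144 oz by weight
volume_fl_oz = wax_weight_oz / 0.9   # wax is less dense, so it occupies
                                     # more volume than equal-weight water
jars = volume_fl_oz / 16             # number of 16-fl-oz jars filled
print(round(volume_fl_oz), round(jars))  # about 160 fl oz, ten jars
```

This is only a consistency check on the article's arithmetic, not a pouring guide; real fill volumes depend on the blend and container shape.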
Some soy candles are made up of a blend of different waxes, including beeswax, paraffin, or palm wax.
Soy candles
Soy candles distribute fragrances and scents slightly less than paraffin candles. Paraffin is usually added to make a 'soy blend', which allows for a better scent throw and works better in hotter weather conditions. Soy is often referred to as a superior wax in comparison to paraffin, but in reality there is very little difference in the soot production and carcinogenic compounds released by the two waxes. The low melting point translates to cooler-burning, longer-lasting candles in temperate areas.
Soy candles can also include coconut wax as an additive, because coconut wax is viewed as more sustainable and is softer than soy, making for a larger melt pool and better dispersal of scent.
https://en.wikipedia.org/wiki/Password%20fatigue | Password fatigue is the feeling experienced by many people who are required to remember an excessive number of passwords as part of their daily routine, such as to log in to a computer at work, undo a bicycle lock or conduct banking from an automated teller machine. The concept is also known as password chaos, or more broadly as identity chaos.
Causes
The increasing prominence of information technology and the Internet in employment, finance, recreation and other aspects of people's lives, and the ensuing introduction of secure transaction technology, have led to people accumulating a proliferation of accounts and passwords.
According to a survey conducted in February 2020 by the password manager NordPass, a typical user has 100 passwords.
Some factors causing password fatigue are:
unexpected demands that a user create a new password
unexpected demands that a user create a new password that uses a particular pattern of letters, digits, and special characters
demand that the user type the new password twice
frequent and unexpected demands for the user to re-enter their password throughout the day as they surf to different parts of an intranet
blind typing, both when responding to a password prompt and when setting a new password.
Responses
Some companies are well organized in this respect and have implemented alternative authentication methods, or have adopted technologies so that a user's credentials are entered automatically. However, others may not focus on ease of use, or may even worsen the situation, by constantly implementing new applications with their own authentication systems.
Single sign-on software (SSO) can help mitigate this problem by only requiring users to remember one password to an application that in turn will automatically give access to several other accounts, with or without the need for agent software on the user's computer. A potential disadvantage is that loss of a single password will prevent access to all services using the SSO system, an |
https://en.wikipedia.org/wiki/List%20of%20second%20moments%20of%20area | The following is a list of second moments of area of some shapes. The second moment of area, also known as the area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The dimension of the second moment of area is length to the fourth power, L^4, and it should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia.
Second moments of area
Please note the symbol definitions used for the second moment of area equations in the table below.
Parallel axis theorem
The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance (d) between the axes.
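The theorem as stated is I = I_c + A d^2. A minimal sketch, using a b x h rectangle as the worked shape (its centroidal second moment, b h^3 / 12, is standard; the variable names are illustrative):

```python
# Parallel axis theorem: I = I_c + A * d**2, where I_c is the second
# moment about the centroidal axis, A the cross-sectional area, and d
# the perpendicular distance between the two parallel axes.
def parallel_axis(i_centroid, area, d):
    """Second moment of area about an axis a distance d from the
    parallel axis through the centroid."""
    return i_centroid + area * d ** 2

b, h = 2.0, 3.0
i_c = b * h ** 3 / 12            # centroidal axis: 4.5
# Shifting to the rectangle's base lies d = h/2 from the centroid,
# and recovers the standard base-axis result b*h**3/3:
i_base = parallel_axis(i_c, b * h, h / 2)
print(i_c, i_base)               # 4.5 18.0
```

Note the theorem only moves *away* from the centroidal axis; to go between two non-centroidal axes, pass through the centroid in two steps.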
See also
List of moments of inertia
List of centroids
Second polar moment of area |
https://en.wikipedia.org/wiki/Prevertebral%20fascia | The prevertebral fascia (also known as prevertebral layer of cervical fascia or vertebral fascia) is the layer of deep cervical fascia that surrounds the vertebral column. It is the deepest layer of deep cervical fascia.
It encloses the sympathetic trunk, brachial plexus, phrenic nerve, prevertebral muscles, and the cervical vertebral column.
Anatomy
The prevertebral fascia extends medially behind the carotid vessels, where it assists in forming their sheath, and passes in front of the prevertebral muscles.
The prevertebral fascia is fixed above to the base of the skull, and below it extends behind the esophagus into the posterior mediastinal cavity of the thorax. It descends in front of the longus colli muscles.
The prevertebral fascia is prolonged downward and laterally behind the carotid vessels and in front of the scalene muscles. It forms a sheath for the brachial nerves, subclavian artery, and subclavian vein in the posterior triangle of the neck; it is continued under the clavicle as the axillary sheath and is attached to the deep surface of the coracoclavicular fascia.
Relations
The prevertebral fascia represents the floor of the posterior triangle of the neck.
The cervical sympathetic trunk lies upon it.
The prevertebral fascia borders the vertebral compartment of the neck.
It forms the posterior limit of a fibrous compartment, which contains the larynx and trachea, the thyroid gland, and the pharynx and esophagus.
Parallel to the carotid sheath and along its medial aspect the prevertebral fascia gives off a thin lamina, the buccopharyngeal fascia, which closely invests the constrictor muscles of the pharynx, and is continued forward from the constrictor pharyngis superior on to the buccinator.
Anterior to it, the alar (retrovisceral) fascia is attached to it by loose connective tissue only, and thus an easily distended space, the retropharyngeal space, is found between them.
Immediately above the clavicle an areolar space exists bet
https://en.wikipedia.org/wiki/Investing%20layer%20of%20deep%20cervical%20fascia | The investing layer of deep cervical fascia is the most superficial part of the deep cervical fascia, and encloses the whole neck.
It is considered by some sources to be incomplete or nonexistent.
Attachments
It surrounds the neck like a collar, splitting around the sternocleidomastoid muscle and the trapezius muscle. It is attached as follows:
Posteriorly - Ligamentum nuchae
Anteriorly - Attached to the hyoid bone
Superiorly - (from backwards to forwards):
External occipital protuberance and Superior nuchal line of occipital bone
Mastoid process of Temporal bone
External acoustic meatus
Lower margin of the zygomatic arch
Lower border of body of mandible from the angle of mandible to the symphysis menti
Inferiorly - (from backwards to forwards):
Spine and acromial process of scapula
Upper surface of the clavicle
Suprasternal notch of manubrium sterni
Tracings
Horizontal extent - From ligamentum nuchae when traced forward, the fascia splits and encloses trapezius, reunites and form roof of posterior triangle of neck; again splits and encloses sternocleidomastoid, reunites and forms the roof of anterior triangle.
Vertical extent -
Superior tracing - It splits and encloses submandibular gland and parotid gland;
- It splits at the lower border of the submandibular gland into superficial and deep layers, which attach to the lower border of the body of the mandible and the mylohyoid line of the mandible
- It splits at lower pole of parotid gland into superficial and deep layers; superficial layer attaches to zygomatic arch and forms parotido-masseteric fascia after blending with masseter, deep layer attaches to tympanic plate and styloid process forming the stylomandibular ligament
Inferior tracing - The fascia splits to enclose two spaces; suprasternal space and supraclavicular space |
https://en.wikipedia.org/wiki/Pretracheal%20fascia | The pretracheal fascia is a layer of the deep cervical fascia at the front of the neck. It attaches to the hyoid bone above, and - extending down into the thorax - blends with the fibrous pericardium below. It encloses the thyroid gland and parathyroid glands, trachea, and esophagus. It extends medially in front of the carotid vessels. It assists in forming the carotid sheath.
The back portion of the pretracheal fascia is known as the buccopharyngeal fascia.
Structure
The pretracheal fascia is continued behind the depressor muscles of the hyoid bone. After enveloping the thyroid gland, it is prolonged in front of the trachea to meet the corresponding layer of the opposite side. The pretracheal layer of the deep cervical fascia passes in front of the carotid sheath (i.e., common carotid artery, internal jugular vein, and vagus nerve) and in front of the cervical viscera (larynx, oesophagus, and pharynx). The muscular layer ensheathes the infrahyoid muscles.
Above, the pretracheal fascia is fixed to the hyoid bone. Below, it is carried downward in front of the trachea and large vessels at the root of the neck, and ultimately blends with the fibrous pericardium.
The pretracheal fascia is fused on either side with the prevertebral fascia, and with it completes the compartment containing the larynx and trachea, the thyroid gland, and the pharynx and esophagus.
Function
The pretracheal fascia encloses the thyroid gland, and is responsible for its movement during deglutition.
See also
Fascia |
https://en.wikipedia.org/wiki/Coronoid%20process%20of%20the%20ulna | The coronoid process of the ulna is a triangular process projecting forward from the anterior proximal portion of the ulna.
Structure
Its base is continuous with the body of the bone, and of considerable strength.
Its apex is pointed, slightly curved upward, and in flexion of the forearm is received into the coronoid fossa of the humerus.
Its upper surface is smooth, convex, and forms the lower part of the semilunar notch.
Its antero-inferior surface is concave, and marked by a rough impression for the insertion of the brachialis muscle. At the junction of this surface with the front of the body is a rough eminence, the tuberosity of the ulna, which gives insertion to a part of the brachialis; to the lateral border of this tuberosity the oblique cord is attached.
Its lateral surface presents a narrow, oblong, articular depression, the radial notch.
Its medial surface, by its prominent, free margin, serves for the attachment of part of the ulnar collateral ligament. At the front part of this surface is a small rounded eminence for the origin of one head of the flexor digitorum superficialis muscle; behind the eminence is a depression for part of the origin of the flexor digitorum profundus muscle; descending from the eminence is a ridge which gives origin to one head of the pronator teres muscle.
Frequently, the flexor pollicis longus muscle arises from the lower part of the coronoid process by a rounded bundle of muscular fibers.
Function
The coronoid process stabilises the elbow joint and prevents hyperflexion.
Clinical significance
The coronoid process can be fractured from its anteromedial facet.
Additional images |
https://en.wikipedia.org/wiki/Coronoid%20fossa%20of%20the%20humerus | Superior to the anterior portion of the trochlea is a small depression, the coronoid fossa, which receives the coronoid process of the ulna during flexion of the forearm. It is directly adjacent to the radial fossa of the humerus.
Additional images |
https://en.wikipedia.org/wiki/Tuberosity%20of%20the%20ulna | The tuberosity of the ulna is a rough eminence on the proximal end of the ulna. It occurs at the junction of the antero-inferior surface of the coronoid process with the front of the body. It gives insertion to part of the brachialis; the oblique cord is attached to its lateral border.
https://en.wikipedia.org/wiki/Midcarpal%20joint | The midcarpal joint is formed by the scaphoid, lunate, and triquetral bones in the proximal row, and the trapezium, trapezoid, capitate, and hamate bones in the distal row. The distal pole of the scaphoid articulates with two trapezial bones as a gliding type of joint. The proximal end of the scaphoid combines with the lunate and triquetrum to form a deep concavity that articulates with the convexity of the combined capitate and hamate in a form of diarthrodial, almost condyloid joint.
Description
The cavity of the midcarpal joint is very extensive and irregular. The major portion of the cavity is located between the distal surfaces of the scaphoid, lunate, and triquetrum and proximal surfaces of the four bones of the distal row. Proximal prolongations of the cavity occur between the scaphoid and lunate and between the lunate and triquetrum. These extensions reach almost to the proximal surface of the bones in the proximal row and are separated from the cavity of the radiocarpal joint by the thin interosseous ligaments. There are three distal prolongations of the midcarpal joint cavity between the four bones of the distal row. The joint space between trapezium and trapezoid, or that between trapezoid and capitate, may communicate with cavities of the carpometacarpal joints, most commonly the second and third. The cavity between the first metacarpal and carpus is always separate from the midcarpal joint; the joint cavity between the hamate and fourth and fifth metacarpals is a separate cavity more often than not, but it may communicate normally with the midcarpal joint.
The Wrist
The wrist is perhaps the most complicated joint in the body. It permits movements in two planes - extension/flexion, ulnar deviation/radial deviation - and allows complex patterns of motion under significant strain.
Optimal wrist function requires stability of the carpal components in all joint positions under static and dynamic conditions.
Stability is achieved by a sophisticated geome |
https://en.wikipedia.org/wiki/Tricholoma%20magnivelare | Tricholoma magnivelare, commonly known as the matsutake, white matsutake, ponderosa mushroom, pine mushroom, or American matsutake, is a gilled mushroom found East of the Rocky Mountains in North America growing in coniferous woodland. These ectomycorrhizal fungi are typically edible species that exist in a symbiotic relationship with various species of pine, commonly jack pine. They belong to the genus Tricholoma, which includes the closely related East Asian songi or matsutake as well as the Western matsutake (T. murrillianum) and Meso-American matsutake (Tricholoma mesoamericanum).
Species designation
Until recently, Tricholoma magnivelare was the name used to describe all matsutake mushrooms found growing in North America. Since the early 2000s, molecular data has indicated the presence of separate species previously grouped within T. magnivelare. Only those found in the Eastern United States and Canada have retained the T. magnivelare name.
Description
The cap ranges from in width, and is white with reddish-yellow or brown spots. The stalk is tall and 2–6 cm wide. The spores are white.
The mycelium is thought to be parasitized by the plant Allotropa virgata, which primarily feeds on matsutake.
Edibility
While tough, the mushroom can be eaten both raw and cooked and is considered choice. In recent years, globalization and wider social acceptability of mushroom hunting has made collection of pine mushrooms widely popular in North America.
Local mushroom hunters sell their harvest daily to local depots, which rush them to airports. The mushrooms are then shipped fresh by air to Asia where demand is high and prices are at a premium.
Serious poisonings have resulted from confusion of this mushroom with poisonous white Amanita species.
Similar species
Similar species in the genus include Tricholoma apium, T. caligatum, T. focale, and T. vernaticum. Other similar species include Catathelasma imperiale, C. ventricosum, Russula brevipes, and the poisonous Amani |
https://en.wikipedia.org/wiki/Spring%20house | A spring house, or springhouse, is a small building, usually of a single room, constructed over a spring. While the original purpose of a springhouse was to keep the spring water clean by excluding fallen leaves, animals, etc., the enclosing structure was also used for refrigeration before the advent of ice delivery and, later, electric refrigeration. The water of the spring maintains a constant cool temperature inside the spring house throughout the year. Food that would otherwise spoil, such as meat, fruit, or dairy products, could be kept there, safe from animal depredations as well. Springhouses thus often also served as pumphouses, milkhouses, and root cellars.
The Tomahawk Spring spring house at Tomahawk, West Virginia, was listed on the National Register of Historic Places in 1994.
Gallery
See also
Ice house (building)
Smokehouse
Windcatcher |
https://en.wikipedia.org/wiki/Chief%20of%20Staff%20Medal%20of%20Appreciation | The Chief Of Staff Medal of Appreciation () is an Israeli military decoration.
The medal was instituted in 1981. It is awarded to both civilians and military personnel who contribute to the strengthening of the IDF or the security of Israel; the medal could also be awarded to foreign civilians.
The medal's most well-known recipient is the Israeli astronaut Ilan Ramon, who was awarded the medal after his death during a mission of the Space Shuttle Columbia.
It has since been given 10 times: 4 posthumously, 2 group awards (Iron Dome and Patriot missile defense soldiers), and the rest to individuals (most of them American).
Design
The medal is round; on the front there is a Star of David with an olive branch and a sword resting on its left; the reverse is plain.
The medal is attached to a white ribbon with two blue strips on the edges, similar to those on the Israeli flag.
Notable recipients
Notable recipients include:
Colonel Ilan Ramon, IAF (posthumously)
General Martin Dempsey, USA (Chairman Joint Chiefs of Staff)
General Joseph Dunford, USMC (Chairman Joint Chiefs of Staff)
Members of the Kiryat Arba Emergency Response Team for 2002 Hebron ambush
Yitzhak Buanish (posthumously)
Alexander Zwitman (posthumously)
Alexander Dohan (posthumously)
Elijah Libman
Operation Protective Edge units
Iron Dome units
Technological unit of the Military Intelligence Directorate |
https://en.wikipedia.org/wiki/Negative%20priming | Negative priming is an implicit memory effect in which prior exposure to a stimulus unfavorably influences the response to the same stimulus. It falls under the category of priming, which refers to the change in the response towards a stimulus due to a subconscious memory effect. Negative priming describes the slow and error-prone reaction to a stimulus that was previously ignored. For example, imagine a subject trying to pick a red pen from a pen holder. The red pen becomes the target of attention, so the subject responds by moving their hand towards it. At this time, they mentally block out all other pens as distractors to aid in closing in on just the red pen. After repeatedly picking the red pen over the others, switching to the blue pen results in a momentary delay in picking out the pen (however, there is a decline in the negative priming effect when there is more than one nontarget item that is selected against). The slow reaction due to the change of the distractor stimulus to a target stimulus is called the negative priming effect.
Negative priming is believed to play a crucial role in attention and memory retrieval processes. When stimuli are perceived through the senses, all the stimuli are encoded within the brain, where each stimulus has its own internal representation. In this perceiving process, some of the stimuli receive more attention than others. Similarly, only some of them are stored in short-term memory. Negative priming is highly related to the selective nature of attention and memory.
Broadly, negative priming is also known as the mechanism by which inhibitory control is applied to cognition. This refers only to the inhibition stimuli that can interfere with the current short-term goal of creating a response. The effectiveness of inhibiting the interferences depends on the cognitive control mechanism as a higher number of distractors yields higher load on working memory. Increased load on working memory can in turn result in slower perce |
https://en.wikipedia.org/wiki/Haplogroup%20I-M438 | Haplogroup I-M438, also known as I2 (ISOGG 2019), is a human DNA Y-chromosome haplogroup, a subclade of haplogroup I-M170. Haplogroup I-M438 originated some time around 26,000–31,000 BCE. It originated in Europe and developed into several main subgroups: I2-M438*, I2a-L460, I2b-L415 and I2c-L596. The haplogroup can be found all over Europe and reaches its maximum frequency in the Dinaric Alps (Balkans) via founder effect. Examples of basal I-M438* have been found in males from Crete and Sicily.
Origin & prehistoric presence
Haplogroup I2a was the most frequent Y-DNA among western European mesolithic hunter gatherers (WHG) belonging to Villabruna Cluster. A 2015 study found haplogroup I2a in 13,500 year old remains from the Azilian culture (from Grotte du Bichon, modern Switzerland). Subclades of I2a1 (I-P37.2), namely I-M423 and I-M26 have been found in remains of Western European Hunter-Gatherers dating from 10,000 to 8,000 years before present respectively.
In a 2015 study published in Nature, the remains of six individuals from Motala ascribed to the Kongemose culture were successfully analyzed. With regards to Y-DNA, two individuals were ascribed to haplogroup I2a1b, one individual was ascribed to haplogroup I2a1, and one individual was ascribed to haplogroup I2c.
Subclades of I-L460
I-P37.2
I-P37.2, also known as I2a1a (ISOGG 2019), has a subclade divergence that occurred 10.7±4.8 kya. The age of YSTR variation for the P37.2 subclade is 8.0±4.0 kya. It is the predominant version of I2 in Eastern Europe. I2a is further made up of the sub-groups I-M26, I-M423, I-L1286, and I-L880.
I-L158
Haplogroup I-M26 (or M26) is classified as I2a1a1a (ISOGG 2019).
Haplogroup I-L158 (L158, L159.1/S169.1, M26) accounts for approximately 40% of all patrilines among the Sardinians. It is also found at low to moderate frequency among populations of the Pyrenees (9.5% in Bortzerriak, Navarra; 9.7% in Chazetania, Aragon; 8% in Val d'Aran, Catalunya; 2.9% in Alt Urgell, Catalunya; and |
https://en.wikipedia.org/wiki/Structured%20Audio%20Orchestra%20Language | Structured Audio Orchestra Language (SAOL) is an imperative, MUSIC-N programming language designed for describing virtual instruments, processing digital audio, and applying sound effects. It was published as subpart 5 of MPEG-4 Part 3 (ISO/IEC 14496-3:1999) in 1999.
As part of the MPEG-4 international standard, SAOL is one of the key components of the MPEG-4 Structured Audio toolset, along with:
Structured Audio Score Language (SASL)
Structured Audio Sample Bank Format (SASBF)
The MPEG-4 SA scheduler
MIDI support
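To give a flavor of the language itself, here is a minimal SAOL instrument sketch. The instrument name, wavetable name, parameter names, and the sampling/control rates are illustrative choices, not taken from the standard's text:

```
// Minimal SAOL sketch: a sine "beep" instrument.
// Names ("beep", "wave", "pitch", "amp") and rates are assumptions.
global {
  srate 32000;   // audio sampling rate
  krate 500;     // control rate
}

instr beep(pitch, amp) {
  table wave(harm, 2048, 1);         // wavetable with a single harmonic (a sine)
  asig sound;                        // audio-rate signal variable
  sound = amp * oscil(wave, pitch);  // table-lookup oscillator core opcode
  output(sound);                     // send the signal to the output bus
}
```

In the MPEG-4 Structured Audio model, a companion SASL score would then schedule notes on this instrument with lines giving start time, instrument name, duration, and parameter values.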
See also
Csound
MPEG-4 Structured Audio |
https://en.wikipedia.org/wiki/City%20of%20Hope%20National%20Medical%20Center | City of Hope is a private, non-profit clinical research center, hospital and graduate school located in Duarte, California, United States. The center's main campus resides on of land adjacent to the boundaries of Duarte and Irwindale, with a network of clinical practice locations throughout Southern California, satellite offices in Monrovia and Irwindale, and regional fundraising offices throughout the United States.
City of Hope is best known as a cancer treatment center. It has been designated a Comprehensive Cancer Center by the National Cancer Institute. City of Hope has also been ranked one of the nation's Best Cancer Hospitals by U.S. News & World Report for over ten years and is a founding member of the National Comprehensive Cancer Network.
City of Hope played a role in the development of synthetic human insulin in 1978. The center has performed 13,000 hematopoietic stem cell transplants as of 2016 with patient outcomes that consistently exceed national averages.
History
In the late 19th and early 20th centuries, the spread of tuberculosis, also known as "consumption", was a growing concern in the United States and Europe. Owing to advancements in the scientific understanding of its contagious nature, a movement to house and quarantine sufferers became prevalent. Construction of tuberculosis sanatoria, including tent cities, became common in the United States, with many sanatoriums located in the Southwestern United States, where it was believed that the more arid climate would aid sufferers.
In 1913, the Jewish Consumptive Relief Association was chartered in Los Angeles, California, with the intent of raising money to establish a free, non-sectarian sanatorium for persons from throughout the United States diagnosed with tuberculosis. After raising sufficient funds, the association purchased of land in Duarte, California, a small town in the more arid San Gabriel Valley, approximately east of downtown Los Angeles, and dubbed the property the Los Ange |