| source | text |
|---|---|
https://en.wikipedia.org/wiki/Orientation%20of%20churches | The orientation of a building refers to the direction in which it is constructed and laid out, taking account of its planned purpose and ease of use for its occupants, its relation to the path of the sun and other aspects of its environment. Within church architecture, orientation is an arrangement by which the point of main interest in the interior is towards the east. The east end is where the altar is placed, often within an apse. The façade and main entrance are accordingly at the west end.
The opposite arrangement, in which the church is entered from the east and the sanctuary is at the other end, is called occidentation.
Since the eighth century, most churches have been oriented. Hence, even in the many churches where the altar end is not actually to the east, terms such as "east end", "west door", "north aisle" are commonly used as if the church were oriented, treating the altar end as the liturgical east.
History
The first Christians faced east when praying, likely an outgrowth of the ancient Jewish custom of praying in the direction of the Holy Temple in Jerusalem. Due to this established custom, Tertullian says some non-Christians thought they worshipped the sun. Origen says: "The fact that [...] of all the quarters of the heavens, the east is the only direction we turn to when we pour out prayer, the reasons for this, I think, are not easily discovered by anyone." Later on, various Church Fathers advanced mystical reasons for the custom. One such explanation is that Christ's Second Coming was expected to be from the east: "For as the lightning comes from the east and shines as far as the west, so will be the coming of the Son of Man".
At first, the orientation of the building in which Christians met was unimportant, but after the legalization of the religion in the fourth century, customs developed in this regard. These differed in Eastern and Western Christianity.
The Apostolic Constitutions, a work of Eastern Christianity written between 375 and 380 |
https://en.wikipedia.org/wiki/Potassium%20bisulfite | Potassium bisulfite (or potassium hydrogen sulfite) is a chemical mixture with the approximate chemical formula KHSO3. Potassium bisulfite is in fact not a discrete compound, but a mixture of salts that dissolve in water to give solutions composed of potassium ions and bisulfite ions. It is a white solid with an odor of sulfur dioxide. Attempts to crystallize potassium bisulfite yield potassium metabisulfite, K2S2O5.
Potassium bisulfite is used as a sterilising agent in the production of alcoholic beverages. This additive is classified as E number E228 under the current EU-approved food additive legislation.
Production
It is made by the reaction of sulfur dioxide and potassium carbonate. The sulfur dioxide is passed through a solution of potassium carbonate until no more carbon dioxide is evolved. The solution is then concentrated.
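One plausible way to balance the overall stoichiometry of the preparation described above (an assumption on my part: excess sulfur dioxide absorbed by the aqueous carbonate) is:

$$\mathrm{K_2CO_3 + 2\,SO_2 + H_2O \longrightarrow 2\,KHSO_3 + CO_2}$$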
See also
Calcium bisulfite
Sodium bisulfite |
https://en.wikipedia.org/wiki/Mercer%20%28consulting%20firm%29 | Mercer is an American consulting firm founded in 1945. It is one of the four operating subsidiaries of global professional services firm Marsh McLennan (NYSE: MMC). Mercer is headquartered in New York City with offices in 43 countries and operations in 130 countries. The company primarily provides human resources and financial services consulting services to its clients.
Mercer has several distinct lines of business, namely: health and benefits, investments and retirement, workforce and careers, and M&A advisory services. It is the world's largest investment advisory firm, with over US$300 billion in outsourced assets under management and US$15 trillion in assets under advisement in total.
History
Foundation and early years (1937–1959)
William Manson Mercer founded William M. Mercer, Limited in Vancouver, Canada in 1945. It was acquired by Marsh McLennan and merged into their employee benefits department in 1959.
Post acquisition growth (1959–2002)
Mercer Consulting Group
In 1975, Marsh McLennan converted their benefits operations into a wholly owned subsidiary, William M. Mercer, Inc. In 1992, a holding company was created for Marsh McLennan's three global consulting businesses, known as Mercer Consulting Group. William M. Mercer, Inc. continued to provide actuarial and employee benefits consulting within the group alongside two sister companies: Mercer Management Consulting and National Economic Research Associates, Inc, which provided corporate strategy consulting and economic consulting, respectively.
Mercer Delta Consulting
In 2000, Mercer Consulting Group acquired Delta Consulting Group for its organizational development and change management expertise. Founded by organizational theorist David A. Nadler in 1980, Delta Consulting Group worked to structure effective executive teams. The firm had an influential client list, including corporations such as 3M, Citicorp, Procter & Gamble, The New York Times, and Xerox.
The new entity was renamed Mercer Delta Consulting, and ma |
https://en.wikipedia.org/wiki/Transversal%20%28geometry%29 | In geometry, a transversal is a line that passes through two lines in the same plane at two distinct points. Transversals play a role in establishing whether two or more other lines in the Euclidean plane are parallel. The intersections of a transversal with two lines create various types of pairs of angles: consecutive interior angles, consecutive exterior angles, corresponding angles, and alternate angles. As a consequence of Euclid's parallel postulate, if the two lines are parallel, consecutive interior angles are supplementary, corresponding angles are equal, and alternate angles are equal.
Angles of a transversal
A transversal produces 8 angles:
4 with each of the two lines, namely α, β, γ and δ and then α1, β1, γ1 and δ1; and
4 of which are interior (between the two lines), namely α, β, γ1 and δ1 and 4 of which are exterior, namely α1, β1, γ and δ.
A transversal that cuts two parallel lines at right angles is called a perpendicular transversal. In this case, all 8 angles are right angles.
When the lines are parallel, a case that is often considered, a transversal produces several pairs of congruent angles and several pairs of supplementary angles. Some of these angle pairs have specific names and are discussed below: corresponding angles, alternate angles, and consecutive angles.
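As a quick numerical illustration (not from the original article), the sketch below represents the two parallel lines and the transversal by illustrative slope values and checks the angle relations just stated:

```python
import math

def acute_crossing_angle(m_line, m_trans):
    """Acute angle (degrees) at the intersection of two non-perpendicular lines."""
    return math.degrees(math.atan(abs((m_trans - m_line) / (1 + m_trans * m_line))))

m_parallel = 0.5   # slope shared by both parallel lines (illustrative)
m_trans = 3.0      # slope of the transversal (illustrative)

theta = acute_crossing_angle(m_parallel, m_trans)

# At each of the two intersections the four angles are theta and 180 - theta,
# in alternation. Because both lines have the same slope (the parallel case),
# the same theta appears at both crossings, hence:
corresponding = (theta, theta)                  # equal
alternate_interior = (theta, theta)             # equal
consecutive_interior = (theta, 180.0 - theta)   # supplementary

print(f"theta = {theta:.2f} deg; consecutive interior sum = "
      f"{sum(consecutive_interior):.1f} deg")
```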
Alternate angles
Alternate angles are the four pairs of angles that:
have distinct vertex points,
lie on opposite sides of the transversal and
are either both interior or both exterior.
If the two angles of one pair are congruent (equal in measure), then the angles of each of the other pairs are also congruent.
Proposition 1.27 of Euclid's Elements, a theorem of absolute geometry (hence valid in both hyperbolic and Euclidean geometry), proves that if the angles of a pair of alternate angles of a transversal are congruent then the two lines are parallel (non-intersecting).
It follows from Euclid's parallel postulate that if the two lines are parallel, then the |
https://en.wikipedia.org/wiki/H-infinity%20loop-shaping | H-infinity loop-shaping is a design methodology in modern control theory. It combines the traditional intuition of classical control methods, such as Bode's sensitivity integral, with H-infinity optimization techniques to achieve controllers whose stability and performance properties hold despite bounded differences between the nominal plant assumed in design and the true plant encountered in practice. Essentially, the control system designer describes the desired responsiveness and noise-suppression properties by weighting the plant transfer function in the frequency domain; the resulting 'loop-shape' is then 'robustified' through optimization. Robustification usually has little effect at high and low frequencies, but the response around unity-gain crossover is adjusted to maximise the system's stability margins. H-infinity loop-shaping can be applied to multiple-input multiple-output (MIMO) systems.
H-infinity loop-shaping can be carried out using commercially available software.
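A minimal sketch of the loop-shaping step only, using SciPy; the plant and weighting function below are illustrative assumptions, and the H-infinity robustification itself would be done with a dedicated synthesis routine in a control toolbox, which this sketch does not attempt:

```python
# Sketch: shape the loop by weighting the plant, then inspect crossover.
# The plant G and weight W are illustrative assumptions, not from the text.
import numpy as np
from scipy import signal

G = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])      # nominal plant
W = signal.TransferFunction([10.0, 10.0], [1.0, 0.01])   # high gain at low frequency

w = np.logspace(-2, 2, 500)          # frequency grid, rad/s
_, mag_G, _ = signal.bode(G, w)      # magnitudes in dB
_, mag_W, _ = signal.bode(W, w)
mag_loop = mag_G + mag_W             # |W*G| in dB: magnitudes add in dB

# The weighted plant W*G is the desired 'loop shape'; the optimization step
# would then adjust behaviour near the unity-gain (0 dB) crossover.
crossover = w[np.argmin(np.abs(mag_loop))]
print(f"approximate unity-gain crossover: {crossover:.2f} rad/s")
```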
H-infinity loop-shaping has been successfully deployed in industry. In 1995, R. Hyde, K. Glover and G. T. Shanks published a paper describing the successful application of the technique to a VTOL aircraft. In 2008, D. J. Auger, S. Crawshaw and S. L. Hall published another paper describing a successful application to a steerable marine radar tracker, noting that the technique had the following benefits:
Easy to apply – commercial software handles the hard math.
Easy to implement – standard transfer functions and state-space methods can be used.
Plug and play – no need for re-tuning on an installation-by-installation basis.
A closely related design methodology, developed at about the same time, was based on the theory of the gap metric. It was applied in 1993 for designing controllers to dampen vibrations in large flexible structures at Wright-Patterson Air Force Base and the Jet Propulsion Laboratory.
See also
Control theory
H-infinity control |
https://en.wikipedia.org/wiki/Whipple%20formulae | In the theory of special functions, Whipple's transformation for Legendre functions, named after Francis John Welsh Whipple, arises from a general expression concerning associated Legendre functions. These formulae were previously presented from a viewpoint aimed at spherical harmonics; when the equations are viewed in terms of toroidal coordinates, whole new symmetries of Legendre functions arise.
For associated Legendre functions of the first and second kind,
and
These expressions are valid for all values of the degree and the order. By shifting the complex degree and order in an appropriate fashion, we obtain Whipple formulae for general complex index interchange of general associated Legendre functions of the first and second kind. These are given by
and
Note that these formulae are well-behaved for all values of the degree and order, except for those with integer values. However, if we examine these formulae for toroidal harmonics, i.e. where the degree is half-integer, the order is integer, and the argument is positive and greater than unity, one obtains
and
.
These are the Whipple formulae for toroidal harmonics. They show an important property of toroidal harmonics under index (the integers associated with the order and the degree) interchange.
External links |
https://en.wikipedia.org/wiki/Catterline%20Cartie%20Challenge | The Catterline Cartie Challenge is a competition for homemade soapbox carts (or "carties", as they are known locally) held annually in Catterline, near Stonehaven, Scotland. It is part of the Catterline Gala Weekend, held on the second weekend in June, with the carties being displayed at the gala on the Saturday and then time-trialed down the brae from the Creel Inn to the harbour the following day.
It was first held on 11/12 June 2005, when 11 carties were entered. The number of entries has grown in subsequent years, and in 2008 there were 26 carties taking part.
Prizes are awarded for the single fastest run (The Connons Shield), fastest aggregate time (Constructors Championship), Best Engineered, Best Decorated, Champagne Moment, Furthest Travelled, Cartie Sprint and "The Great Catterline Cartie Race".
The course is almost exactly 1,000 ft (304.8 m) long with a drop of almost 100 ft (30.5 m) from start to finish, and the carties can reach speeds of around 30 mph at the finish line. As a result, the construction rules require the carties to have adequate brakes and steering. Other than these safety considerations, however, there are very few restrictions on the size and shape of the carties, and as a consequence there tends to be a wide range of designs entered, with many teams eschewing pure speed in favour of colourful novelty carties. These carties are very popular with the spectators and are often more memorable than the eventual winners.
Winners
2009
Connons Shield
1st. The Cheats / Tequilla Slammer
2nd. A La Cartie / The Auld Alliance
3rd. Bervie Allstars / The Bervie Bomber
Constructors Trophy
1st. The Cheats / Tequilla Slammer
2nd. Bitter and Twisted / Once a Fortnight
3rd. Team Weasel / The Flying Ferret
2008
Connons Shield
1st. A La Cartie / The Auld Alliance
2nd. Firstdrive Cars / The Bandit
3rd. Team Riley / The Black Bomber
Constructors Trophy
1st. Bitter and Twisted / Once a Fortnight
2nd. Firstdrive Cars / The Bandit
3r |
https://en.wikipedia.org/wiki/Isomalathion | Isomalathion is an impurity found in some batches of malathion. Whereas the structure of malathion is, generically, RSP(S)(OCH3)2, the connectivity of isomalathion is RSPO(SCH3)(OCH3). It arises by heating malathion. Being significantly more toxic to humans than malathion, it has resulted in human poisonings.
In 1976, numerous malaria workers in Pakistan were poisoned by isomalathion. It is an inhibitor of carboxylesterase. |
https://en.wikipedia.org/wiki/Framework-oriented%20design | Framework Oriented Design (FOD) is a programming paradigm that uses existing frameworks as the basis for an application design.
The framework can be thought of as a fully functioning template application. Application development consists of modifying callback procedure behaviour and modifying object behaviour using inheritance.
This paradigm provides the patterns for understanding development with Rapid Application Development (RAD) systems such as Delphi, where the Integrated Development Environment (IDE) provides the template application and the programmer fills in the appropriate event handlers. The developer has the option of modifying existing objects via inheritance. |
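A minimal sketch of the paradigm, assuming a hypothetical framework; the FrameworkApp class, its run loop, and the on_click hook are all illustrative names, not taken from any real RAD product:

```python
class FrameworkApp:
    """The framework supplies a fully functioning template application."""
    def run(self):
        print("framework: opening main window")
        self.on_click()          # framework invokes the callback hook
        print("framework: shutting down")

    def on_click(self):          # default (empty) event handler
        pass

class MyApp(FrameworkApp):
    """The developer only fills in event handlers, overriding via inheritance."""
    def on_click(self):
        print("application: button clicked")

MyApp().run()
```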
https://en.wikipedia.org/wiki/Pyroglutamyl-histidyl-glycine | Pyroglutamyl-histidyl-glycine (pEHG) is an endogenous tripeptide that acts as a tissue-specific antimitotic and selectively inhibits the proliferation of colon epithelial cells. Early research indicated that pEHG had anorectic effects in mice and was possibly involved in the pathophysiology of anorexia nervosa. However, subsequent studies have found that pEHG lacks anorectic effects and does not alter food intake in mice. |
https://en.wikipedia.org/wiki/FG-7142 | FG-7142 (ZK-31906) is a drug which acts as a partial inverse agonist at the benzodiazepine allosteric site of the GABAA receptor. It has anorectic, anxiogenic and pro-convulsant effects. It also increases release of acetylcholine and noradrenaline, and improves memory retention in animal studies. |
https://en.wikipedia.org/wiki/United%20States%20Radium%20Corporation | The United States Radium Corporation was an American company, most notorious for its operations from 1917 to 1926 in Orange, New Jersey, which led to stronger worker protection laws. After initial success in developing a glow-in-the-dark radioactive paint, the company was subject to several lawsuits in the late 1920s in the wake of severe illnesses and deaths of workers (the Radium Girls) who had ingested radioactive material. The workers had been told that the paint was harmless. During World War I and World War II, the company produced luminous watches and gauges for the United States Army for use by soldiers.
U.S. Radium workers, especially women who painted the dials of watches and other instruments with luminous paint, suffered serious radioactive contamination. Lawyer Edward Markley was in charge of defending the company in these cases.
History
The company was founded in 1914 in New York City by Dr. Sabin Arnold von Sochocky and Dr. George S. Willis, as the Radium Luminous Material Corporation. The company produced uranium from carnotite ore and eventually moved into the business of producing radioluminescent paint, and then to the application of that paint. Over the next several years, it opened facilities in Newark, Jersey City, and Orange. In August 1921, von Sochocky was forced from the presidency, and the company was renamed the United States Radium Corporation; Arthur Roeder became the president of the company. In Orange, where radium was extracted from 1917 to 1926, the U.S. Radium facility processed half a ton of ore per day. The ore was obtained from "Undark mines" in Paradox Valley, Colorado and in Utah.
A notable employee from 1921 to 1923 was Victor Francis Hess, who would later receive the Nobel Prize in Physics.
The company's luminescent paint, marketed as Undark, was a mixture of radium and zinc sulfide; the radiation caused the sulfide to fluoresce. During World War I, demand for dials, watches, and aircraft instr |
https://en.wikipedia.org/wiki/Seed%20nucleus | A seed nucleus is an isotope that is the starting point for any of a variety of fusion chain reactions. The mix of nuclei produced at the conclusion of the chain reaction generally depends strongly on the relative availability of the seed nucleus or nuclei and the component being fused, whether neutrons as in the r-process and s-process, or protons as in the rp-process. A smaller proportion of seed nuclei will generally result in products of larger mass, whereas a larger seed-to-neutron or seed-to-proton ratio will tend to produce comparatively lighter masses.
Nuclear physics |
https://en.wikipedia.org/wiki/Parasternal%20lymph%20nodes | The parasternal lymph nodes (or sternal glands) are placed at the anterior ends of the intercostal spaces, by the side of the internal thoracic artery.
They derive afferents from the mamma; from the deeper structures of the anterior abdominal wall above the level of the umbilicus; from the upper surface of the liver through a small group of glands which lie behind the xiphoid process; and from the deeper parts of the anterior portion of the thoracic wall.
Their efferents usually unite to form a single trunk on either side; this may open directly into the junction of the internal jugular and subclavian veins, or that of the right side may join the right subclavian trunk, and that of the left the thoracic duct. The parasternal lymph nodes drain into the bronchomediastinal trunks, in a similar fashion to the upper intercostal lymph nodes. |
https://en.wikipedia.org/wiki/Fragment%20separator | A fragment separator is an ion-optical device used to focus and separate products from the collision of relativistic ion beams with thin targets. Selected products can then be studied individually. Fragment separators typically consist of a series of superconducting magnetic multipole elements. The thin target immediately before the separator allows the fragments produced through various reactions to escape the target material still at a very high velocity. The products are forward-focused because of the high velocity of the center-of-mass in the beam-target interaction, which allows fragment separators to collect a large fraction (in some cases nearly all) of the fragments produced in the target. Some examples of currently operating fragment separators are the FRS at GSI, the A1900 at NSCL, and BigRIPS at the Radioactive Isotope Beam Factory at RIKEN.
Experimental physics |
https://en.wikipedia.org/wiki/Daintree%20Networks | Daintree Networks, Inc. was a building automation company that provided wireless control systems for commercial and industrial buildings. Founded in 2003, Daintree was headquartered in Los Altos, California, with an R&D lab in Melbourne, Australia.
Daintree's ControlScope wireless control system includes switches, sensors, LED drivers, programmable thermostats, and plug load controllers. Wireless communication is achieved either by wireless adaptation of traditional wired devices (such as sensors), or by building wireless communications modules directly into the devices.
Daintree had produced a design verification and operational support tool, the Sensor Network Analyzer (SNA), which supports wireless embedded technologies including IEEE 802.15.4, Zigbee, Zigbee RF4CE, 6LoWPAN, JenNet (from Jennic Limited), SimpliciTI (from Texas Instruments), and Synkro (from Freescale Semiconductor).
History
Daintree was founded in 2003 by Bill Wood, who had previously worked as a general manager for Agilent Technologies and Hewlett-Packard.
Daintree managers have previously held roles within wireless standards bodies, including chair of several working groups within the Zigbee Alliance.
In 2003, when many wireless technologies were new, Daintree provided design verification and operational support tools for wireless embedded developers. In 2007 the company began developing and delivering wireless systems for specific purposes; by 2009 it had narrowed its focus to lighting and building control.
On April 21, 2016, Current Lighting Solutions, an energy management startup within GE, acquired Daintree Networks for US$77 million to combine its open-standard wireless network with GE's open source platform Predix to offer a new energy management system to businesses.
Products
ControlScope Manager (CSM): Software used to configure, manage, and maintain key energy loads in commercial buildings. It includes management of individual devices and "zones" of multiple devices. This includes |
https://en.wikipedia.org/wiki/Blue%20dwarf%20%28red-dwarf%20stage%29 | A blue dwarf is a predicted class of star that develops from a red dwarf after it has exhausted much of its hydrogen fuel supply. Because red dwarfs fuse their hydrogen slowly and are fully convective (allowing their entire hydrogen supply to be fused, instead of merely that in the core), they are predicted to have lifespans of trillions of years; the Universe is currently not old enough for any blue dwarfs to have formed yet. Their future existence is predicted based on theoretical models.
Hypothetical scenario
Stars increase in luminosity as they age, and a more luminous star needs to radiate energy more quickly to maintain equilibrium. Stars larger than red dwarfs do this by increasing their size and becoming red giants with larger surface areas. Rather than expanding, however, red dwarfs with less than 0.25 solar masses are predicted to increase their radiative rate by increasing their surface temperatures and becoming "bluer". This is because the surface layers of red dwarfs do not become significantly more opaque with increasing temperature.
Despite their name, blue dwarfs do not necessarily increase in temperature enough to become blue stars. Simulations have been conducted on the future evolution of red dwarfs with masses between 0.06 and 0.25 solar masses.
Of the masses simulated, the bluest of the blue dwarf stars at the end of the simulation had begun as a 0.14-solar-mass red dwarf and ended with a surface temperature of approximately 8600 K, making it a type A blue-white star.
End of stellar life
Blue dwarfs are believed to eventually completely exhaust their store of hydrogen fuel, and their interior pressures are insufficient to fuse any other fuel. Once fusion ends, they are no longer main-sequence "dwarf" stars and become so-called white dwarfs – which, despite the name, are not main-sequence "dwarfs" and are not stars, but rather stellar remnants.
Once the former "blue"-dwarf stars have become degenerate, non-stellar white dwarfs, they cool, losing the remnant hea |
https://en.wikipedia.org/wiki/Tirpitz%20%28pig%29 | Tirpitz was a pig captured from the Imperial German Navy after a naval skirmish (the Battle of Más a Tierra) following the Battle of the Falkland Islands in 1914. She became the mascot of the cruiser HMS Glasgow.
Early life
Pigs were often kept on board warships to supply fresh meat. Tirpitz was aboard the German light cruiser SMS Dresden when she was ordered into the South Atlantic to join with the forces of Vice Admiral Maximilian von Spee to raid Allied merchant shipping. The ship's first encounter with HMS Glasgow was at the Battle of Coronel, where the German fleet was victorious. They were later defeated at the Battle of the Falkland Islands, though the faster Dresden managed to escape. She was located in Cumberland Bay on the Chilean island of Más a Tierra (today known as Robinson Crusoe Island) by HMS Glasgow and HMS Kent off the coast of South America on 15 March 1915. The Germans scuttled the ship, but Tirpitz was left on board as she sank.
Capture and Royal Navy service
Tirpitz was able to make her way above deck and swim clear of the sinking Dresden. She struck out for the nearby Royal Navy ships and was spotted an hour later by a petty officer aboard HMS Glasgow. The officer entered the water, but the frightened Tirpitz nearly drowned him. He was, however, eventually able to rescue the pig and bring her aboard. The animal was adopted by the crew of HMS Glasgow, who made her their mascot, and named her 'Tirpitz' after Alfred von Tirpitz, the German admiral and Secretary of State of the Imperial Naval Office. Tirpitz remained with the Glasgow for a year and was then placed in quarantine until she was allowed to be adopted by the petty officer who had first seen her, who transferred her to Whale Island Gunnery School, Portsmouth, for the rest of her career. The Times newspaper reported:
The animal, which is known as 'Tirpitz', was once owned by the German light cruiser Dresden, and when, during the action with Glasgow, Kent, and Orama, the Germans escaped to the shore after causing an explosion which sank the Dresden, an |
https://en.wikipedia.org/wiki/LU%20reduction | LU reduction is an algorithm related to LU decomposition. This term is usually used in the context of supercomputing and highly parallel computing. In this context it is used as a benchmarking algorithm, i.e. to provide a comparative measurement of speed for different computers. LU reduction is a special parallelized version of an LU decomposition algorithm; an example can be found in (Guitart 2001). The parallelized version usually distributes the work for a matrix row to a single processor and synchronizes the result with the whole matrix (Escribano 2000).
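A minimal sequential sketch of the LU step that such benchmarks parallelize; in the parallel version described above, each row update in the inner loop would be handed to a different processor and then synchronized (this sketch is illustrative, not the benchmark code from the cited papers):

```python
import numpy as np

def lu_reduce(a):
    """In-place Doolittle LU decomposition without pivoting (illustrative)."""
    a = a.astype(float)
    n = a.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):          # rows below k are independent:
            a[i, k] /= a[k, k]             # this loop is what gets distributed
            a[i, k + 1:] -= a[i, k] * a[k, k + 1:]
    return a                               # L (unit diag, below) and U (above)

m = np.array([[4.0, 3.0], [6.0, 3.0]])
print(lu_reduce(m))   # expect [[4, 3], [1.5, -1.5]]: L21 = 1.5, U = [[4, 3], [0, -1.5]]
```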
Sources
J. Oliver, J. Guitart, E. Ayguadé, N. Navarro and J. Torres. Strategies for Efficient Exploitation of Loop-level Parallelism in Java. Concurrency and Computation: Practice and Experience (Java Grande 2000 Special Issue), Vol. 13 (8-9), pp. 663–680. ISSN 1532-0634, July 2001, last retrieved on Sept. 14 2007
J. Guitart, X. Martorell, J. Torres, and E. Ayguadé, Improving Java Multithreading Facilities: the Java Nanos Environment, Research Report UPC-DAC-2001-8, Computer Architecture Department, Technical University of Catalonia, March 2001.
Arturo González-Escribano, Arjan J. C. van Gemund, Valentín Cardeñoso-Payo et al., Measuring the Performance Impact of SP-Restricted Programming in Shared-Memory Machines, In Vector and Parallel Processing — VECPAR 2000, Springer Verlag, pp. 128–141, 2000.
Numerical linear algebra
Supercomputers |
https://en.wikipedia.org/wiki/Prolate%20spheroidal%20wave%20function | The prolate spheroidal wave functions are eigenfunctions of the Laplacian in prolate spheroidal coordinates, adapted to boundary conditions on certain ellipsoids of revolution (an ellipse rotated around its long axis, "cigar shape"). Related are the oblate spheroidal wave functions ("pancake shaped" ellipsoid).
Solutions to the wave equation
Solve the Helmholtz equation, $\nabla^2 \Phi + k^2 \Phi = 0$, by the method of separation of variables in prolate spheroidal coordinates, $(\xi, \eta, \varphi)$, with:
and , , and . Here, is the interfocal distance of the elliptical cross section of the prolate spheroid.
Setting , the solution can be written
as the product of , a radial spheroidal wave function and an angular spheroidal wave function .
The radial wave function satisfies the linear ordinary differential equation:
The angular wave function satisfies the differential equation:
It is the same differential equation as in the case of the radial wave function. However, the range of the variable is different: in the radial wave function, , while in the angular wave function, . The eigenvalue of this Sturm–Liouville problem is fixed by the requirement that must be finite for .
For both differential equations reduce to the equations satisfied by the associated Legendre polynomials. For , the angular spheroidal wave functions can be expanded as a series of Legendre functions.
If one writes , the function satisfies
which is known as the spheroidal wave equation. This auxiliary equation has been used by Stratton.
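For numerical work, SciPy provides evaluators for these functions; a small sketch follows, in which the mode numbers and the spheroidal parameter c are arbitrary illustrative choices:

```python
# Evaluate angular prolate spheroidal wave functions numerically.
import numpy as np
from scipy.special import pro_ang1

m, n, c = 0, 1, 1.0                    # order m, degree n, parameter c
for eta in np.linspace(-0.9, 0.9, 5):  # angular variable, |eta| < 1
    s, s_prime = pro_ang1(m, n, c, eta)   # function value and derivative
    print(f"S_{m}{n}(c={c}, eta={eta:+.2f}) = {s:+.5f}")
# As c -> 0 these reduce to the associated Legendre functions noted above.
```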
Band-limited signals
In signal processing, the prolate spheroidal wave functions (PSWF) are useful as eigenfunctions of a time-limiting operation followed by a low-pass filter. Let denote the time truncation operator, such that if and only if has support on . Similarly, let denote an ideal low-pass filtering operator, such that if and only if its Fourier transform is limited to . The operator turns out to be linear, bounded and self-adjoint. For we denote with the -th eigenfunction, |
https://en.wikipedia.org/wiki/Global%20brain | The global brain is a neuroscience-inspired and futurological vision of the planetary information and communications technology network that interconnects all humans and their technological artifacts. As this network stores ever more information, takes over ever more functions of coordination and communication from traditional organizations, and becomes increasingly intelligent, it increasingly plays the role of a brain for the planet Earth.
Basic ideas
Proponents of the global brain hypothesis claim that the Internet increasingly ties its users together into a single information processing system that functions as part of the collective nervous system of the planet. The intelligence of this network is collective or distributed: it is not centralized or localized in any particular individual, organization or computer system. Therefore, no one can command or control it. Rather, it self-organizes or emerges from the dynamic networks of interactions between its components. This is a property typical of complex adaptive systems.
The World Wide Web in particular resembles the organization of a brain with its web pages (playing a role similar to neurons) connected by hyperlinks (playing a role similar to synapses), together forming an associative network along which information propagates. This analogy becomes stronger with the rise of social media, such as Facebook, where links between personal pages represent relationships in a social network along which information propagates from person to person.
Such propagation is similar to the spreading activation that neural networks in the brain use to process information in a parallel, distributed manner.
History
Although some of the underlying ideas were already expressed by Nikola Tesla in the late 19th century and were written about by many others before him, the term "global brain" was coined in 1982 by Peter Russell in his book The Global Brain. How the Internet might be developed to achieve this was set out in 1986. |
https://en.wikipedia.org/wiki/M3UA | M3UA is a communication protocol of the SIGTRAN family, used in telephone networks to carry signaling over Internet Protocol (IP). M3UA enables the SS7 protocol's User Parts (e.g. ISUP, SCCP and TUP) to run over virtually any network technology, breaking their limitation to telephony equipment like T-carrier, E-carrier or Asynchronous Transfer Mode (ATM), which greatly improves the scalability of signaling networks.
M3UA stands for MTP Level 3 (MTP3) User Adaptation Layer, as defined by the IETF SIGTRAN working group in RFC 4666 (which replaces and supersedes RFC 3332). Like other adaptation protocols, M3UA uses SCTP to transmit messages between its network elements.
Implementation scheme
Typical scheme
 ____                        _____                        ______
|    |                      |     |                      | MGC  |
| SP |<-------------------->| SGW |<-------------------->| (AS) |
|____|      SS7 network     |_____|      IP network      |______|
 MTP3
point-code              common point-code
   PC1                        PC2
Use SGW as STP
Several ASes own their own point codes and use the SGW as an STP (transit point code).
 ____                      _______                       ______
|    |                    |  SGW  |    /--------------->| MGC  |
|    |                    |       |   |                 | (AS) |  point-code PC3
| SP |<------------------>| (STP) |<--|                 |______|
|    |                    |       |   |                  ______
|____|     SS7 network    |_______|    \--------------->| MGC  |
 MTP3                    point-code      IP network     | (AS) |  point-code PC4
point-code                  PC2                         |______|
   PC1
Protocol
M3UA uses a complex state machine to manage and indicate the states in which it is running. Several M3UA messages are mandatory to make an M3UA association or peering fully functional (ASP UP, ASP UP Acknowledge, ASP Active, ASP Active Acknowledge); some others are recommended (Notify, Destination Audit (DAUD)).
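A minimal sketch of that mandatory activation handshake; the intermediate "pending" states are an illustrative simplification of mine, and the message framing and SCTP transport are omitted entirely:

```python
# The four mandatory messages, in order, and the ASP state after each one.
ASP_HANDSHAKE = [
    ("ASP UP",                 "ASP-INACTIVE pending"),
    ("ASP UP Acknowledge",     "ASP-INACTIVE"),
    ("ASP Active",             "ASP-ACTIVE pending"),
    ("ASP Active Acknowledge", "ASP-ACTIVE"),   # association now functional
]

state = "ASP-DOWN"
for message, new_state in ASP_HANDSHAKE:
    print(f"{state:>22}  --[{message}]-->  {new_state}")
    state = new_state
```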
Additional info
An open implementation of the M3UA standard can be found at OpenSS7's web site.
Wireshark is |
https://en.wikipedia.org/wiki/Empirical%20software%20engineering | Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate an SE phenomenon of interest. The phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects.
ESE has roots in experimental software engineering, but as the field has matured the need and acceptance for both quantitative and qualitative research has grown. Today, common research methods used in ESE for primary and secondary research are the following:
Primary research (experimentation, case study research, survey research, simulations, in particular software process simulation)
Secondary research methods (systematic reviews, systematic mapping studies, rapid reviews, tertiary reviews)
Teaching empirical software engineering
Some comprehensive books for students, professionals and researchers interested in ESE are available.
Research community
Journals, conferences, and communities devoted specifically to ESE:
Empirical Software Engineering: An International Journal
International Symposium on Empirical Software Engineering and Measurement
International Software Engineering Research Network (ISERN) |
https://en.wikipedia.org/wiki/Data%20breach | A data breach is a security violation, in which sensitive, protected or confidential data is copied, transmitted, viewed, stolen, altered or used by an individual unauthorized to do so. Other terms are unintentional information disclosure, data leak, information leakage and data spill. Incidents range from concerted attacks by individuals who hack for personal gain or malice (black hats), organized crime, political activists or national governments, to poorly configured system security or careless disposal of used computer equipment or data storage media. Leaked information can range from matters compromising national security, to information on actions which a government or official considers embarrassing and wants to conceal. A deliberate data breach by a person privy to the information, typically for political purposes, is more often described as a "leak".
Data breaches may involve financial information such as credit card and debit card details, bank details, personal health information (PHI), Personally identifiable information (PII), trade secrets of corporations or intellectual property. Data breaches may involve overexposed and vulnerable unstructured data – files, documents, and sensitive information.
Data breaches can be quite costly to organizations with direct costs (remediation, investigation, etc.) and indirect costs (reputational damages, providing cyber security to victims of compromised data, etc.).
According to the nonprofit consumer organization Privacy Rights Clearinghouse, a total of 227,052,199 individual records containing sensitive personal information were involved in security breaches in the United States between January 2005 and May 2008, excluding incidents where sensitive data was apparently not actually exposed.
Many jurisdictions have passed data breach notification laws, which require a company that has been subject to a data breach to inform customers and take other steps to remediate possible injuries. 50 U.S. states have some
https://en.wikipedia.org/wiki/Resistive%20ballooning%20mode | The resistive ballooning mode (RBM) is an instability occurring in magnetized plasmas, particularly in magnetic confinement devices such as tokamaks, when the pressure gradient is opposite to the effective gravity created by a magnetic field.
Linear growth rate
The linear growth rate of the RBM instability is given as
$$\gamma \simeq \sqrt{\frac{g_{\mathrm{eff}}}{L_p}}, \qquad g_{\mathrm{eff}} \sim \frac{2 c_s^2}{R_0},$$
where $\nabla p$ is the pressure gradient, $g_{\mathrm{eff}}$ is the effective gravity produced by a non-homogeneous magnetic field, $R_0$ is the major radius of the device, $L_p$ is a characteristic length of the pressure gradient, and $c_s$ is the plasma sound speed.
Similarity with the Rayleigh–Taylor instability
The RBM instability is similar to the Rayleigh–Taylor instability (RT), with Earth gravity replaced by the effective gravity $g_{\mathrm{eff}}$, except that for the RT instability, the effective gravity acts on the mass density of the fluid, whereas for the RBM instability, it acts on the pressure of the plasma.
Plasma instabilities
Stability theory
Tokamaks |
https://en.wikipedia.org/wiki/Lineshaft%20roller%20conveyor | A lineshaft roller conveyor or line-shaft conveyor is, as its name suggests, powered by a shaft beneath rollers. These conveyors are suitable for light applications up to 50 kg such as cardboard boxes and tote boxes.
A single shaft runs below the rollers running the length of the conveyor. On the shaft are a series of spools, one spool for each roller. An elastic polyurethane o-ring belt runs from a spool on the powered shaft to each roller. When the shaft is powered, the o-ring belt acts as a chain between the spool and the roller making the roller rotate. The rotation of the rollers pushes the product along the conveyor. The shaft is usually driven by an electrical motor that is generally controlled by an electronic PLC (programmable logic controller). The PLC electronically controls how specific sections of the conveyor system interact with the products being conveyed.
Advantages of this conveyor are quiet operation, easy installation, moderate maintenance and low expense. Line-shaft conveyors are also extremely safe for people to work around because the elastic belts can stretch and not injure fingers should any get caught underneath them. Moreover, the spools will slip and allow the rollers to stop moving if clothing, hands or hair gets caught in them. In addition, since the spools are slightly loose on the shaft, they act like clutches that slip when products are required to accumulate (stop moving and bump up against each other, i.e. queue up). With the exception of soft-bottomed containers like cement bags, these conveyors can be utilized for almost all applications.
A disadvantage of the roller lineshaft conveyor is that it can only be used to convey products that span at least three rollers, but rollers can be as small as 17 mm in diameter and as close together as 18.5 mm. For items shorter than 74 mm, the conveyor belt system is generally used as an alternative option.
See also
Conveyor systems
Conveyor belt
Chain conveyor
Line shaft
External li |
https://en.wikipedia.org/wiki/Paleodictyon | Paleodictyon is a trace fossil, usually interpreted to be a burrow, which appears in the geologic marine record beginning in the Precambrian/Early Cambrian and in modern ocean environments. Paleodictyon were first described by Giuseppe Meneghini in 1850. The origin of the trace fossil is enigmatic and numerous candidates have been proposed.
Description
Paleodictyon consist of thin tunnels or ridges that usually form a hexagonal or polygonal honeycomb-like network. Both irregular and regular nets are known throughout the stratigraphic range of Paleodictyon, but it is the striking regular honeycomb pattern of some forms such as P. carpathecum and P. nodosum which makes it notable and widely studied.
Individual mesh elements may measure millimeters to centimeters across, usually from 1–1.5 to 2–3 cm, and entire mesh patterns can cover areas up to a square meter. The edges or threads that make up the mesh are usually cylindrical or ellipsoid in cross-section, and some forms have vertical tubes connecting the mesh upwards to the sediment-water interface. Dolf Seilacher proposed in 1977 that it may be a trap for food, a mechanism for farming, or a foraging path. Alternatively, it has been suggested that it may be a cast of a xenophyophoran protist.
History of study
Much modeling work has been done on Paleodictyon. Roy Plotnick, a trace fossil researcher at the University of Illinois at Chicago, modeled the form as resulting from the iterative modular growth of an unknown organism. Garlick and Miller modeled it as a burrow with a relatively simple burrow algorithm.
Hypotheses about origin
The question is whether these patterns are burrows of marine animals such as worms or fossilized remains of ancient organisms (sponges or algae). Observations on Paleodictyon using Euler graph theory suggest that it is unlikely to be an excavation trace fossil, and that it is more likely to be an imprint or body fossil, or to be of abiotic origin.
It has been suggested that Paleodictyon may repr |
https://en.wikipedia.org/wiki/Pentagonal%20bipyramidal%20molecular%20geometry | In chemistry, a pentagonal bipyramid is a molecular geometry with one atom at the centre and seven ligands at the corners of a pentagonal bipyramid. A perfect pentagonal bipyramid belongs to the molecular point group D5h.
The pentagonal bipyramid is a case where bond angles surrounding an atom are not identical (see also trigonal bipyramidal molecular geometry). This is one of the three common shapes for heptacoordinate transition metal complexes, along with the capped octahedron and the capped trigonal prism.
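As a quick check of why the angles cannot all be identical, the ideal geometry gives two distinct ligand–centre–ligand bond angles:

$$\theta_{\mathrm{eq,eq}} = \frac{360^\circ}{5} = 72^\circ, \qquad \theta_{\mathrm{ax,eq}} = 90^\circ$$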
Pentagonal bipyramids are claimed to be promising coordination geometries for lanthanide-based single-molecule magnets, since (a) they present no extradiagonal crystal field terms, therefore minimising spin mixing, and (b) all of their diagonal terms are in first approximation protected from low-energy vibrations, minimising vibronic coupling.
Examples
Iodine heptafluoride (IF7) with 7 bonding groups
Osmium heptafluoride (OsF7)
Peroxo chromium(IV) complexes, e.g. [Cr(O2)2(NH3)3] where the peroxo groups occupy four of the planar positions.
and |
https://en.wikipedia.org/wiki/Reversing%3A%20Secrets%20of%20Reverse%20Engineering | Reversing: Secrets of Reverse Engineering is a textbook written by Eldad Eilam on the subject of reverse engineering software, mainly within a Microsoft Windows environment. It covers the use of debuggers and other low-level tools for working with binaries. Of particular interest is that it uses OllyDbg in examples, and is therefore one of the few practical, modern books on the subject that uses popular, real-world tools to facilitate learning. The book is designed for independent study and does not contain problem sets, but it is also used as a course book in some university classes.
The book covers several different aspects of reverse engineering, and demonstrates what can be accomplished:
How copy protection and DRM technologies can be defeated, and how they can be made stronger.
How malicious software such as worms can be analyzed and neutralized.
How to obfuscate code so that it becomes more difficult to reverse engineer.
The book also includes a detailed discussion of the legal aspects of reverse engineering, and examines some famous court cases and rulings that were related to reverse engineering.
Considering its relatively narrow subject matter, Reversing is a bestseller that has remained on Amazon.com's list of top 100 software books for several years since its initial release.
Chapter Outline
Part I: Reversing 101.
Chapter 1: Foundations.
Chapter 2: Low-Level Software.
Chapter 3: Windows Fundamentals.
Chapter 4: Reversing Tools.
Part II: Applied Reversing.
Chapter 5: Beyond the Documentation.
Chapter 6: Deciphering File Formats.
Chapter 7: Auditing Program Binaries.
Chapter 8: Reversing Malware.
Part III: Cracking.
Chapter 9: Piracy and Copy Protection.
Chapter 10: Antireversing Techniques.
Chapter 11: Breaking Protections.
Part IV: Beyond Disassembly.
Chapter 12: Reversing .NET.
Chapter 13: Decompilation.
Appendix A: Deciphering Code Structures.
Appendix B: Understanding Compiled Arithmetic.
Appendix C: Deciphering Program Data.
Editions
Reversing: S |
https://en.wikipedia.org/wiki/Convective%20momentum%20transport | Convective momentum transport usually describes a vertical flux of the momentum of horizontal winds or currents. That momentum is carried like a non-conserved flow tracer by vertical air motions in convection.
In the atmosphere, convective momentum transport by small but vigorous (cumulus type) cloudy updrafts can be understood as an interplay of three main mechanisms:
Vertical advection of ambient momentum due to subsidence of environmental air that compensates the in-cloud upward mass flux,
Detrainment of in-cloud momentum where updrafts stop ascending,
Accelerations by the pressure gradient force around clouds whose inner momentum differs from their environment.
The net effect of these interacting mechanisms depends on the detailed configuration or 'organization' of the convective cloud or storm system.
See also
momentum
vertical motion |
https://en.wikipedia.org/wiki/EIAJ%20MTS | EIAJ MTS is a multichannel television sound standard created by the EIAJ.
Bilingual and stereo sound television programs started being broadcast in Japan in October 1978 using an "FM-FM" system originally developed by NHK Technical Research Labs during 1962–1969. This system was modified and standardised by the EIAJ in January 1979. Television stations in Japan with capability for bilingual and stereo sound transmissions used the callsign JO**-TAM, where "TAM" denotes their audio FM multiplex sub-carrier designation, until digital switchover to ISDB-T in 2010–2012, which eventually rendered EIAJ MTS obsolete.
The original System M TV standard has a monaural FM transmission at 4.5 MHz. For Japanese multichannel television sound a second channel, or sub-channel, is added to the original signal by using an FM sub-carrier at twice the line frequency (Fh, or 15734 Hz). In order to identify the different modes (mono, stereo, or dual sound) a pilot tone is also added on an AM carrier at 3.5 times the line frequency. The pilot tone frequencies are 982.5 Hz for stereo and 922.5 Hz for dual sound. Contrary to Zweikanalton, these pilot tones are not coupled to the line frequency but were instead chosen to allow use of filters already employed in the Pocket Bell pager system.
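Taking the System M line frequency $f_H \approx 15734$ Hz, the carrier placements described above work out to:

$$f_{\mathrm{sub}} = 2 f_H \approx 31.47\ \mathrm{kHz}, \qquad f_{\mathrm{pilot}} = 3.5 f_H \approx 55.07\ \mathrm{kHz}$$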
See also
Multichannel Television Sound (3 additional audio channels on 4.5 MHz audio carriers)
NICAM
Zweikanalton A2 |
https://en.wikipedia.org/wiki/Therapy | A therapy or medical treatment is the attempted remediation of a health problem, usually following a medical diagnosis. Both words, treatment and therapy, are often abbreviated tx or Tx.
As a rule, each therapy has indications and contraindications. There are many different types of therapy. Not all therapies are effective. Many therapies can produce unwanted adverse effects.
Treatment and therapy are often synonymous, especially in the usage of health professionals. However, in the context of mental health, the term therapy may refer specifically to psychotherapy.
History
Before the creation of therapy as a formal procedure, people told stories to one another to inform and assist one another in understanding the world. The term "healing through words" was used over 3,500 years ago in Greek and Egyptian writing. The term psychotherapy was invented in the 19th century, and psychoanalysis was founded by Sigmund Freud less than a decade later.
Semantic field
The words care, therapy, treatment, and intervention overlap in a semantic field, and thus they can be synonymous depending on context. Moving rightward through that order, the connotative level of holism decreases and the level of specificity (to concrete instances) increases. Thus, in health care contexts (where its senses are always noncount), the word care tends to imply a broad idea of everything done to protect or improve someone's health (for example, as in the terms preventive care and primary care, which connote ongoing action), although it sometimes implies a narrower idea (for example, in the simplest cases of wound care or postanesthesia care, a few particular steps are sufficient, and the patient's interaction with that provider is soon finished). In contrast, the word intervention tends to be specific and concrete, and thus the word is often countable; for example, one instance of cardiac catheterization is one intervention performed, and coronary care (noncount) can require a series of interventions (count). At |
https://en.wikipedia.org/wiki/International%20email | International email arises from the combined provision of internationalized domain names (IDN) and email address internationalization (EAI). The result is email that contains international characters (characters which do not exist in the ASCII character set), encoded as UTF-8, in the email header and in supporting mail transfer protocols. The most significant aspect of this is the allowance of email addresses (also known as email identities) in most of the world's writing systems, at both interface and transport levels.
Email addresses
Traditional email addresses are limited to characters from the English alphabet and a few other special characters.
The following are valid traditional email addresses:
Abc@example.com (English, ASCII)
Abc.123@example.com (English, ASCII)
user+mailbox/department=shipping@example.com (English, ASCII)
!#$%&'*+-/=?^_`.{|}~@example.com (English, ASCII)
"Abc@def"@example.com (English, ASCII)
"Fred\ Bloggs"@example.com (English, ASCII)
"Joe.\\Blow"@example.com (English, ASCII)
A Russian might wish to use иван.сергеев@пример.рф as their identifier but be forced to use a transcription such as ivan.sergeev@example.ru or even some other completely unrelated identifier instead. The same is true of Chinese, Japanese, and other nationalities that do not use Latin scripts, but also applies to users from non-English-speaking European countries whose desired addresses might contain diacritics (e.g. André or Płużyna). As a result, email users are forced to identify themselves using non-native scripts, which may result in errors due to ambiguity of transliteration (for example, иван.сергеев may become ivan.sergeev, ivan.sergeyev, or something else). Alternatively, developers of email systems must compensate for this by converting identifiers from their native scripts to ASCII scripts and back again at th |
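A small sketch of the domain half of this problem, using only Python's built-in IDNA codec; note that the local part has no ASCII mapping at all, which is why end-to-end EAI/SMTPUTF8 support is needed:

```python
# The domain of an internationalized address can be mapped to ASCII with
# IDNA (Punycode), but the local part cannot be downgraded this way.
domain = "пример.рф"
ascii_domain = domain.encode("idna").decode("ascii")
print(ascii_domain)        # -> xn--e1afmkfd.xn--p1ai

local_part = "иван.сергеев"
try:
    local_part.encode("ascii")
except UnicodeEncodeError:
    print("local part is not representable in ASCII; SMTPUTF8 required")
```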
https://en.wikipedia.org/wiki/Astrophysics%20and%20Space%20Science | Astrophysics and Space Science is a bimonthly peer-reviewed scientific journal covering astronomy, astrophysics, space science, and astrophysical aspects of astrobiology. It was established in 1968 and is published by Springer Science+Business Media. From 2016 to 2020 the editors-in-chief were Prof. Elias Brinks and Prof. Jeremy Mould; since 2020 the sole editor-in-chief is Prof. Elias Brinks. Other editors-in-chief in the past have been Zdeněk Kopal (Univ. of Manchester) (1968-1993) and Michael A. Dopita (Australian National University) (1994-2015).
Abstracting and indexing
The journal is abstracted and indexed in several databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.830. |
https://en.wikipedia.org/wiki/SAINT%20%28software%29 | SAINT (Security Administrator’s Integrated Network Tool) is computer software used for scanning computer networks for security vulnerabilities, and exploiting found vulnerabilities.
SAINT Network Vulnerability Scanner
The SAINT scanner screens every live system on a network for TCP and UDP services. For each service it finds running, it launches a set of probes designed to detect anything that could allow an attacker to gain unauthorized access, create a denial-of-service, or gain sensitive information about the network.
SAINT provides support for the Security Content Automation Protocol (SCAP) specification as an Unauthenticated Vulnerability Scanner and Authenticated Vulnerability and Patch Scanner. SAINT is also an approved scanning vendor with the Payment Card Industry (PCI).
The Four Steps of a SAINT Scan:
Step 1 – SAINT screens every live system on a network for TCP and UDP services (a simplified version of this screening is sketched after this list).
Step 2 – For each service it finds running, it launches a set of probes designed to detect anything that could allow an attacker to gain unauthorized access, create a denial-of-service, or gain sensitive information about the network.
Step 3 – The scanner checks for vulnerabilities.
Step 4 – When vulnerabilities are detected, the results are categorized in several ways, allowing customers to target the data they find most useful.
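A minimal sketch of the kind of TCP screening performed in Step 1, using only the Python standard library; the host and port list are illustrative assumptions, and a real scanner's probing is far more sophisticated than a plain connect() test:

```python
import socket

def screen_tcp(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports

print(screen_tcp("127.0.0.1", [22, 80, 443]))
```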
SAINT can group vulnerabilities according to severity, type, or count. It can also provide information about a particular host or group of hosts. SAINT describes each of the vulnerabilities it locates; references Common Vulnerabilities and Exposures (CVE), CERT advisories, and IAVA (Information Assurance Vulnerability Alerts); and describes ways to correct the vulnerabilities. In many cases, the SAINT scanner provides links to patches or new software versions that will eliminate the detected vulnerabilities.
A vulnerability is a flaw in a system, device, or application that, if leveraged by an attacker, could impact the security of the system. E |
https://en.wikipedia.org/wiki/Sticking%20coefficient | Sticking coefficient is the term used in surface physics to describe the ratio of the number of adsorbate atoms (or molecules) that adsorb, or "stick", to a surface to the total number of atoms that impinge upon that surface during the same period of time. Sometimes the symbol Sc is used to denote this coefficient, and its value is between 1 (all impinging atoms stick) and 0 (no atoms stick). The coefficient is a function of surface temperature, surface coverage (θ) and structural details as well as the kinetic energy of the impinging particles. The original formulation was for molecules adsorbing from the gas phase and the equation was later extended to adsorption from the liquid phase by comparison with molecular dynamics simulations. For use in adsorption from liquids the equation is expressed based on solute density (molecules per volume) rather than the pressure.
Derivation
When arriving at a site of a surface, an adatom has three options. There is a probability that it will adsorb to the surface (here written Pa), a probability that it will migrate to another site on the surface (Pm), and a probability that it will desorb from the surface and return to the bulk gas (Pd). For an empty site (θ=0) the sum of these three options is unity: Pa + Pm + Pd = 1.
For a site already occupied by an adatom (θ>0), there is no probability of adsorbing, and so the probabilities sum as: Pm′ + Pd′ = 1.
For the first site visited, the P of migrating overall is the P of migrating if the site is filled plus the P of migrating if the site is empty. The same is true for the P of desorption. The P of adsorption, however, does not exist for an already filled site.
The P of migrating from the second site is the P of migrating from the first site and then migrating from the second site, and so we multiply the two values.
Thus the sticking probability () is the P of sticking of the first site, plus the P of migrating from the first site and then sticking to the second site, plus the P of migrating from the second site and then stick |
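In the simplest case, an empty surface with site-independent probabilities (using the Pa, Pm notation introduced above, and an assumption that each visited site behaves identically), this series is geometric and can be summed in closed form:

$$s = P_a + P_m P_a + P_m^2 P_a + \cdots = \frac{P_a}{1 - P_m}$$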
https://en.wikipedia.org/wiki/Evolution%20of%20mammalian%20auditory%20ossicles | The evolution of mammalian auditory ossicles was an evolutionary process that resulted in the formation of the bones of the mammalian middle ear. These bones, or ossicles, are a defining characteristic of all mammals. The event is well-documented and important as a demonstration of transitional forms and exaptation, the re-purposing of existing structures during evolution.
The ossicles evolved from skull bones present in most tetrapods, including the reptilian lineage. The reptilian quadrate bone, articular bone, and columella evolved into the mammalian incus, malleus, and stapes (anvil, hammer, and stirrup), respectively.
In reptiles, the eardrum is connected to the inner ear via a single bone, the columella, while the upper and lower jaws contain several bones not found in mammals. Over the course of the evolution of mammals, one bone from the lower and one from the upper jaw (the articular and quadrate bones) lost their function in the jaw joint and migrated to the middle ear. The shortened columella connected to these bones within the middle ear to form a chain of three bones, the ossicles, which serve to effectively transmit air-based vibrations and facilitate more acute hearing.
History
Following on the ideas of Étienne Geoffroy Saint-Hilaire (1818), and studies by Johann Friedrich Meckel the Younger (1820), Carl Gustav Carus (1818), Martin Rathke (1825), and Karl Ernst von Baer (1828), the relationship between the reptilian jaw bones and mammalian middle-ear bones was first established on the basis of embryology and comparative anatomy by Karl Bogislaus Reichert (in 1837, before the publication of On the Origin of Species in 1859). These ideas were advanced by Ernst Gaupp, and are now known as the Reichert–Gaupp theory.
The discovery of the link in homology between the reptilian jaw joint and mammalian malleus and incus is considered an important milestone in the history of comparative anatomy. Work on extinct theromorphs by Owen (1845), and continued |
https://en.wikipedia.org/wiki/Untranslated%20region | In molecular genetics, an untranslated region (or UTR) refers to either of two sections, one on each side of a coding sequence on a strand of mRNA. If it is found on the 5' side, it is called the 5' UTR (or leader sequence), or if it is found on the 3' side, it is called the 3' UTR (or trailer sequence). mRNA is RNA that carries information from DNA to the ribosome, the site of protein synthesis (translation) within a cell. The mRNA is initially transcribed from the corresponding DNA sequence and then translated into protein. However, several regions of the mRNA are usually not translated into protein, including the 5' and 3' UTRs.
Although they are called untranslated regions, and do not form the protein-coding region of the gene, uORFs located within the 5' UTR can be translated into peptides.
The 5' UTR is upstream from the coding sequence. Within the 5' UTR is a sequence that is recognized by the ribosome which allows the ribosome to bind and initiate translation. The mechanism of translation initiation differs in prokaryotes and eukaryotes. The 3' UTR is found immediately following the translation stop codon. The 3' UTR plays a critical role in translation termination as well as post-transcriptional modification.
These often long sequences were once thought to be useless or junk mRNA that had simply accumulated over evolutionary time. However, it is now known that the untranslated regions of mRNA are involved in many regulatory aspects of gene expression in eukaryotic organisms. The importance of these non-coding regions is supported by evolutionary reasoning, as natural selection would otherwise have eliminated this unusable RNA.
It is important to distinguish the 5' and 3' UTRs from other non-protein-coding RNA. Within the coding sequence of pre-mRNA, there can be found sections of RNA that will not be included in the protein product. These sections of RNA are called introns. The RNA that results from RNA splicing is a sequence of exons. The reason why intr |
https://en.wikipedia.org/wiki/Unified%20Code%20for%20Units%20of%20Measure | The Unified Code for Units of Measure (UCUM) is a system of codes for unambiguously representing measurement units. Its primary purpose is machine-to-machine communication rather than communication between humans.
The code set includes all units defined in ISO 1000, ISO 2955-1983, ANSI X3.50-1986, HL7 and ENV 12435, and explicitly and verifiably addresses the naming conflicts and ambiguities in those standards to resolve them. It provides for representations of units in 7 bit ASCII for machine-to-machine communication, with unambiguous mapping between case-sensitive and case-insensitive representations.
A reference open-source implementation is available as a Java applet; an OSGi-based implementation is also available at the Eclipse Foundation.
Base units
Units are represented in UCUM with reference to a set of seven base units. The UCUM base units are the metre for measurement of length, the second for time, the gram for mass, the coulomb for charge, the kelvin for temperature, the candela for luminous intensity, and the radian for plane angle. The UCUM base units form a set of mutually independent dimensions as required by dimensional analysis.
Some of the UCUM base units are different from the SI base units. UCUM is compatible with, but not isomorphic to, SI. There are four differences between the two sets of base units (illustrated in the sketch after this list):
The gram is the base unit of mass instead of the kilogram, since in UCUM base units do not have prefixes.
Electric charge is the base quantity for electromagnetic phenomena instead of electric current, since the elementary charge is the more fundamental physical quantity.
The mole is dimensionless in UCUM, since it can be defined in terms of the Avogadro number.
The radian is a distinct base unit for plane angle, to distinguish angular velocity from rotational frequency and to distinguish the radian from the steradian for solid angles.
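The practical consequence of this choice of base units can be shown with a small sketch. The following Python fragment is hypothetical (it is not the official UCUM API): it represents the seven base dimensions as exponent vectors in the order (m, s, g, C, K, cd, rad) and treats prefixes as plain scale factors, so the kilogram is simply a scaled gram:

```python
# Hypothetical sketch of UCUM-style dimensional bookkeeping; names are
# illustrative, not from any UCUM implementation.
from dataclasses import dataclass

BASE = ("m", "s", "g", "C", "K", "cd", "rad")

@dataclass(frozen=True)
class Unit:
    factor: float         # scale relative to the base units
    dims: tuple           # exponent for each base dimension

    def __mul__(self, other):
        return Unit(self.factor * other.factor,
                    tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Unit(self.factor / other.factor,
                    tuple(a - b for a, b in zip(self.dims, other.dims)))

def base(symbol):
    return Unit(1.0, tuple(1 if b == symbol else 0 for b in BASE))

m, s, g = base("m"), base("s"), base("g")
kg = Unit(1000.0, g.dims)        # the kilogram is just a prefixed (scaled) gram
newton = kg * m / (s * s)        # kg.m/s2 in UCUM notation
print(newton)                    # Unit(factor=1000.0, dims=(1, -2, 1, 0, 0, 0, 0))
```

Here the newton reduces to base-unit exponents with a scale factor of 1000, reflecting that the gram, not the kilogram, is the base unit.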
Metric and non-metric units
Each unit represented in UCUM is identified as either "metric" or "non-metric". Metric un |
https://en.wikipedia.org/wiki/List%20of%20types%20of%20systems%20theory | This list of types of systems theory gives an overview of different types of systems theory, which are mentioned in scientific book titles or articles. The following more than 40 types of systems theory are all explicitly named systems theory and represent a unique conceptual framework in a specific field of science.
Systems theory has been formalized since the 1950s, and a wide range of specialized systems theories and cybernetics exist. In the beginning, general systems theory was developed by Ludwig von Bertalanffy to overcome the over-specialisation of modern times and to serve as a worldview based on holism. The systems theories of today, however, are closer to traditional specialisation than to holism: they are linked by interdependencies yet divided among mutually different specialists.
A
Abstract systems theory (also see: formal system)
Action Theory
Adaptive systems theory (also see: complex adaptive system)
Applied general systems theory (also see: general systems theory)
Applied multidimensional systems theory
Archaeological systems theory (also see: Systems theory in archaeology)
Systems theory in anthropology
Associated systems theory
B
Behavioral systems theory
Biochemical systems theory
Biomatrix systems theory
Body system
C
Complex adaptive systems theory (also see: complex adaptive system)
Complex systems theory (also see: complex systems)
Computer-aided systems theory
Conceptual systems theory (also see: conceptual system)
Control systems theory (also see: control system)
Critical systems theory (also see: critical systems thinking, and critical theory)
Cultural Agency Theory
D
Developmental systems theory
Distributed parameter systems theory
Dynamical systems theory
E
Ecological systems theory (also see: ecosystem, ecosystem ecology)
Economic systems theory (also see: economic system)
Electric energy systems theory
F
Family systems theory (also see: systemic therapy)
Fuzzy systems theory (also see: fuzzy logic)
G
General sys |
https://en.wikipedia.org/wiki/Skyhook%20Wireless | Skyhook is a location technology company based in Boston, Massachusetts, that specializes in location positioning, context, and intelligence. Founded in 2003, Skyhook originally began by geolocating Wi-Fi access points. It has since focused on hybrid positioning technology, which combines Wi-Fi, GPS, cell towers, IP addresses and device sensors to improve device location.
History
Skyhook was founded in 2003 by Ted Morgan and Michael Shean. Skyhook's database was initially gathered through wardriving, when the company sent teams of drivers around the United States, Canada, Western Europe and selected Asian countries to map out Wi-Fi hot spots.
Skyhook powers location-based services for companies such as Apple, Samsung, Sony, HP, Dell, Sharp, Philips and MapQuest.
The firm received its first patent in 2007 and as of early 2020 holds over 650 patents across the United States and foreign markets.
In 2010, Skyhook sued Google over the use of Wi-Fi locator technology in cell phones. The complaint claimed that Andy Rubin, Google's Vice President for Engineering, gave Sanjay K. Jha, Chief Executive of Motorola's mobile devices division, a “stop ship” order, preventing Motorola from shipping phones with the Android operating system using the Skyhook software. The litigation was settled in 2015: Skyhook received $90 million in a settlement with the tech giant, a third of which was consumed by legal fees. The figure was revealed in a securities filing by Liberty Broadband Corp., Skyhook's Colorado-based parent company.
In February 2014, Skyhook Wireless was acquired by True Position Inc, a subsidiary of Liberty Broadband. In 2016, the two companies merged under the Skyhook brand, which now rests under Liberty Broadband, which is a part of the Liberty Media family.
In 2016 Skyhook launched new products dedicated to the advertising technology market: Retailer Personas, Power Personas and On-Demand Personas. These solutions are based on Skyhook's processin
https://en.wikipedia.org/wiki/Many-to-many%20%28data%20model%29 | In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities, say, A and B, where A may contain a parent instance for which there are many children in B and vice versa.
Data relationships
For example, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. In a relational database management system, such relationships are usually implemented by means of an associative table (also known as join table, junction table or cross-reference table), say, AB, with two one-to-many relationships A → AB and B → AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).
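A minimal sketch of the Author/Book junction table described above, using Python's built-in sqlite3; the table and column names are illustrative:

```python
# Demonstrates a many-to-many relationship via an associative (junction)
# table whose primary key is the pair of foreign keys.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT);
    -- The associative table AB: two one-to-many relationships.
    CREATE TABLE author_book (
        author_id INTEGER REFERENCES author(id),
        book_id   INTEGER REFERENCES book(id),
        PRIMARY KEY (author_id, book_id)
    );
""")
con.executemany("INSERT INTO author VALUES (?, ?)",
                [(1, "Kernighan"), (2, "Ritchie")])
con.execute("INSERT INTO book VALUES (1, 'The C Programming Language')")
# Both authors wrote the same book: two rows in the junction table.
con.executemany("INSERT INTO author_book VALUES (?, ?)", [(1, 1), (2, 1)])

for (name,) in con.execute("""
        SELECT a.name FROM author a
        JOIN author_book ab ON ab.author_id = a.id
        WHERE ab.book_id = 1"""):
    print(name)
```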
In web application frameworks such as CakePHP and Ruby on Rails, a many-to-many relationship between entity types represented by logical model database tables is sometimes referred to as a HasAndBelongsToMany (HABTM) relationship.
Artificial intelligence (AI) presents these relationships in many complex ways, including in areas where healthcare equity factors contribute to biased algorithms.
See also
Associative entity
One-to-one (data model)
One-to-many (data model) |
https://en.wikipedia.org/wiki/Baorangia%20bicolor | Baorangia bicolor, also known as the two-colored bolete or red and yellow bolete after its two-tone coloring scheme of red and yellow, is an edible fungus in the genus Baorangia. It inhabits most of eastern North America, primarily east of the Rocky Mountains, in season during the summer and fall months, and can also be found as far away as China and Nepal. Its fruit body, the mushroom, is classed as medium or large in size, which helps distinguish it from the many similar-appearing species that have a smaller stature. A deep blue/indigo bruising of the pore surface and a less dramatic bruising coloration change in the stem over a period of several minutes are identifying characteristics that distinguish it from the similar poisonous species Boletus sensibilis. There are two variations of this species, variety borealis and variety subreticulatus, and several other similar species of fungi that are not poisonous.
Taxonomy and naming
Baorangia bicolor was originally named in 1807 by the Italian botanist Giuseppe Raddi. American mycologist Charles Horton Peck named a species collected in Sandlake, New York, in 1870, Boletus bicolor. Although this naming is considered illegitimate due to article 53.1 of the International Code of Botanical Nomenclature, Peck is still given as the authority in the Bessette et al. (2000) monograph of North American boletes. Boletus bicolor (Raddi) is not a synonym of "Boletus bicolor" Peck. Peck's Boletus bicolor describes the Eastern North American species that is the familiar "two-colored bolete", while Raddi's Boletus bicolor describes a separate European species that is lost to science. This taxonomic conflict has yet to be resolved. In 1909 a species found in Singapore was named Boletus bicolor by George Edward Massee; this naming is illegitimate and is synonymous with Boletochaete bicolor according to Singer. Molecular studies found that Boletus bicolor was not closely related to the type species of Boletus, Boletus edulis, and in |
https://en.wikipedia.org/wiki/Ozsv%C3%A1th%E2%80%93Sch%C3%BCcking%20metric | The Ozsváth–Schücking metric, or the Ozsváth–Schücking solution, is a vacuum solution of the Einstein field equations. The metric was published by István Ozsváth and Engelbert Schücking in 1962. It is noteworthy among vacuum solutions for being the first known solution that is stationary, globally defined, and singularity-free but nevertheless not isometric to the Minkowski metric. This stands in contradiction to a claimed strong Mach principle, which would forbid a vacuum solution from being anything but Minkowski without singularities, where the singularities are to be construed as mass as in the Schwarzschild metric.
With coordinates , define the following tetrad:
It is straightforward to verify that e(0) is timelike, e(1), e(2), e(3) are spacelike, that they are all orthogonal, and that there are no singularities. The corresponding proper time is
The Riemann tensor has only one algebraically independent, nonzero component
which shows that the spacetime is Ricci flat but not conformally flat. That is sufficient to conclude that it is a vacuum solution distinct from Minkowski spacetime. Under a suitable coordinate transformation, the metric can be rewritten as
and is therefore an example of a pp-wave spacetime. |
https://en.wikipedia.org/wiki/Listing%20%28computer%29 | A listing or program listing is a printed list of lines of computer code or digital data (in human-readable form).
Use cases
Listings are commonly used in education and computer-related books to show examples of code.
In the early days of programming, it was used to hand-check a program and as permanent storage. It was also common in 1970s and 1980s computer enthusiast magazines (for instance Creative Computing) and books like BASIC Computer Games for type-in programs.
Today, hard copy listings are seldom used because display screens can present more lines than formerly, programs tend to be modular, storage in soft copy is considered preferable to hard copy, and digital material is easily transmitted via networks, or on disks or tapes. Furthermore, data sets tend to be too large to be conveniently put on paper, and they are more easily searched in soft-copy form.
Assembly-code listings are occasionally analysed by programmers who want to understand how a compiler is translating their source code into assembly language. For example, the GNU C Compiler (gcc) will produce an assembly code listing if it is invoked with the command-line option -S.
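An analogous listing can be produced for Python code: the standard library's dis module prints the bytecode a function compiles to, much as gcc -S emits an assembly listing for C source:

```python
# Print a bytecode listing of a small function using the standard
# library's dis module.
import dis

def square(x):
    return x * x

dis.dis(square)   # prints an instruction-by-instruction bytecode listing
```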
Listings of computer programs are still important in US patent law. They are defined as follows in the Manual of Patent Examining Procedure: |
https://en.wikipedia.org/wiki/Computational%20Science%20%26%20Discovery | Computational Science & Discovery was a peer-reviewed scientific journal covering computational science in physics, chemistry, biology, and applied science. The editor-in-chief was Nathan A Baker (Pacific Northwest National Laboratory), who succeeded Anthony Mezzacappa (Oak Ridge National Laboratory) in 2011. The journal was established in 2008 and ceased publication in 2015, but all articles will remain available online.
Abstracting and indexing
This journal was indexed by the following services:
Scopus
Inspec
Chemical Abstracts Service
FLUIDEX
International Nuclear Information System/Atomindex
NASA Astrophysics Data System
MathSciNet
PASCAL |
https://en.wikipedia.org/wiki/DISC1 | Disrupted in schizophrenia 1 is a protein that in humans is encoded by the DISC1 gene. In coordination with a wide array of interacting partners, DISC1 has been shown to participate in the regulation of cell proliferation, differentiation, migration, neuronal axon and dendrite outgrowth, mitochondrial transport, fission and/or fusion, and cell-to-cell adhesion. Several studies have shown that unregulated expression or altered protein structure of DISC1 may predispose individuals to the development of schizophrenia, clinical depression, bipolar disorder, and other psychiatric conditions. The cellular functions that are disrupted by alterations in DISC1, which lead to the development of these disorders, have yet to be clearly defined and are the subject of ongoing research. Although recent genetic studies of large schizophrenia cohorts have failed to implicate DISC1 as a risk gene at the gene level, the DISC1 interactome gene set was associated with schizophrenia, showing evidence from genome-wide association studies of the role of DISC1 and interacting partners in schizophrenia susceptibility.
Discovery
In 1970, researchers from the University of Edinburgh performing cytogenetic research on a group of juvenile offenders in Scotland found an abnormal translocation in chromosome 1 of one of the boys, who also displayed characteristics of an affective psychological disorder. After this initial observation, the boy's family was studied and it was found that 34 out of 77 family members displayed the same translocation. According to the Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (or DSM-IV) criteria, sixteen of the 34 individuals identified as having the genetic mutation were diagnosed with psychiatric problems. In contrast, five of the 43 unaffected family members were identified to have psychological indispositions. The psychiatric illnesses observed in the family ranged from schizophrenia and major depression to bipolar disorder |
https://en.wikipedia.org/wiki/Journal%20of%20the%20Royal%20Statistical%20Society | The Journal of the Royal Statistical Society is a peer-reviewed scientific journal of statistics. It comprises three series and is published by Oxford University Press for the Royal Statistical Society.
History
The Statistical Society of London was founded in 1834, but would not begin producing a journal for four years. From 1834 to 1837, members of the society would read the results of their studies to the other members, and some details were recorded in the proceedings. The first study reported to the society in 1834 was a simple survey of the occupations of people in Manchester, England. Conducted by going door-to-door and inquiring, the study revealed that the most common profession was mill-hands, followed closely by weavers.
When founded, the membership of the Statistical Society of London overlapped almost completely with the statistical section of the British Association for the Advancement of Science. In 1837 a volume of Transactions of the Statistical Society of London was published, and in May 1838 the society began its journal. The first editor-in-chief of the journal was Rawson W. Rawson. In the early days of the society and the journal, there was dispute over whether opinions should be expressed or merely the numbers reported. The symbol of the society was a wheatsheaf, representing a bundle of facts, and the motto Aliis exterendum, Latin for "to be threshed out by others." Many early members chafed under this prohibition, and in 1857 the motto was dropped.
From 1838 to 1886, the journal was published as the Journal of the Statistical Society of London (). In 1887 it was renamed the Journal of the Royal Statistical Society () when the society was granted a Royal Charter.
On its centenary in 1934, the society inaugurated a Supplement to the Journal of the Royal Statistical Society to publish work on industrial and agricultural applications. In 1948 the society reorganised its journals and the main journal became the Journal of the Royal Statistical |
https://en.wikipedia.org/wiki/Sleep%E2%80%93wake%20activity%20inventory | The sleep–wake activity inventory (SWAI) is a subjective multidimensional questionnaire intended to measure sleepiness.
The instrument
The SWAI consists of 59 items that provide six subscale scores: excessive daytime sleepiness, nocturnal sleep, ability to relax, energy level, social desirability, and psychic distress. Each item is rated on a 1 to 9 semicontinuous Likert type scale from "always" to "never", based on the previous seven days. The SWAI was normed on 554 subjects in the early 1990s and is currently being validated or has been validated in multiple languages, including Spanish, French and Dutch.
For the excessive daytime sleepiness subscale (SWAI-EDS), a score of 40 or below indicates excessive sleepiness, a score of between 40 and 50 indicates possible sleepiness and a score of greater than 50 is normal.
A short form of the SWAI exists that contains items for the excessive daytime sleepiness and nocturnal sleep subscales only.
Comparison with other sleepiness assessments
The SWAI has been compared to the multiple sleep latency test (MSLT), which is an objective measure that is considered the gold standard of sleepiness assessment; it measures sleep onset latency during several daytime opportunities. The SWAI-EDS has been found to correlate moderately to highly with average MSLT scores.
Other sleepiness scales, including the Stanford sleepiness scale and the Epworth sleepiness scale (ESS), exist. However, the ESS does not correlate as highly with the MSLT as the SWAI. The ESS is currently the most prevalent measure of excessive sleepiness.
History
The SWAI was developed by Drs. Leon Rosenthal, Timothy Roehrs and Tom Roth at the Sleep Disorders and Research Center at the Henry Ford Hospital in Detroit, Michigan. |
https://en.wikipedia.org/wiki/Combustion%20light-gas%20gun | A combustion light-gas gun (CLGG) is a projectile weapon that utilizes the explosive force of low molecular-weight combustible gases, such as hydrogen mixed with oxygen, as propellant. When the gases are ignited, they burn, expand and propel the projectile out of the barrel with higher efficiency relative to solid propellant, and have achieved higher muzzle velocities in experiments. Combustion light-gas gun technology is one of the areas being explored in an attempt to achieve higher velocities from artillery and thereby greater range. Conventional guns use solid propellants, usually nitrocellulose-based compounds, to develop the chamber pressures needed to accelerate the projectiles. CLGGs' gaseous propellants yield a higher specific impulse, and the lower the molecular weight of the gas the higher it is; hydrogen is therefore typically the first choice, although other propellants such as methane can be used.
While this technology does appear to provide higher velocities, the main drawback with gaseous or liquid propellants for gun systems is the difficulty in getting uniform and predictable ignition and muzzle velocities. Variance with muzzle velocities affects precision in range, and the further a weapon shoots, the more significant these variances become. If an artillery system cannot maintain uniform and predictable muzzle velocities it will be of no use at longer ranges. Another issue is the survival of projectile payloads at higher accelerations. Fuzes, explosive fill, and guidance systems all must be "hardened" against the significant acceleration loads of conventional artillery to survive and function properly. Higher velocity weapons, like the CLGG, face these engineering challenges as they edge the boundaries of firing accelerations higher.
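The advantage of a light gas can be seen from the speed of sound in the propellant gas, which bounds how fast the expanding gas can continue to push the projectile; a short sketch under the ideal-gas assumption:

```latex
% Speed of sound in an ideal gas: gamma = heat-capacity ratio,
% R = universal gas constant, T = temperature, M = molar mass.
\[
  a = \sqrt{\frac{\gamma R T}{M}}
\]
% A low molar mass M (hydrogen: about 2 g/mol) raises the speed of sound
% and with it the attainable muzzle velocity.
```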
The research and development firm UTRON, Inc is experimenting with a combustion light-gas gun design for field use. The corporation claims to have a system ready for testing as a potential long-range naval fire support weapon for emerging ships, such as the Zumw |
https://en.wikipedia.org/wiki/Insulated%20shipping%20container | Insulated shipping containers are a type of packaging used to ship temperature sensitive products such as foods, pharmaceuticals, organs, blood, biologic materials, vaccines and chemicals. They are used as part of a cold chain to help maintain product freshness and efficacy. The term can also refer to insulated intermodal containers or insulated swap bodies.
Construction
A variety of constructions have been developed. An insulated shipping container might be constructed of:
a vacuum flask, similar to a "thermos" bottle
fabricated thermal blankets or liners
molded expanded polystyrene foam (EPS, styrofoam), similar to a cooler
other molded foams such as polyurethane, polyethylene
sheets of foamed plastics
Vacuum Insulated Panels (VIPs)
reflective materials (metallised film)
bubble wrap or other gas filled panels
other packaging materials and structures
Some are designed for single use while others are returnable for reuse. Some insulated containers are decommissioned refrigeration units. Some empty containers are sent to the shipper disassembled or “knocked down”, assembled and used, then knocked down again for easier return shipment.
Shipping containers are available for maintaining cryogenic temperatures, with the use of liquid nitrogen. Some carriers offer these as a specialized service.
Use
Insulated shipping containers are part of a comprehensive cold chain which controls and documents the temperature of a product through its entire distribution cycle. The containers may be used with a refrigerant or coolant such as:
block or cube ice, slurry ice
dry ice
Gel or ice packs (often formulated for specific temperature ranges)
Phase change materials (PCMs)
Some products (such as frozen meat) have sufficient thermal mass to contribute to the temperature control and no excess coolant is required
A digital Temperature data logger or a time temperature indicator is often enclosed to monitor the temperature inside the container for its entire shipme |
https://en.wikipedia.org/wiki/Distributed%20concurrency%20control | Distributed concurrency control is the concurrency control of a system distributed over a computer network (Bernstein et al. 1987, Weikum and Vossen 2001).
In database systems and transaction processing (transaction management) distributed concurrency control refers primarily to the concurrency control of a distributed database. It also refers to the concurrency control in a multidatabase (and other multi-transactional object) environment (e.g., federated database, grid computing, and cloud computing environments). A major goal for distributed concurrency control is distributed serializability (or global serializability for multidatabase systems). Distributed concurrency control poses special challenges beyond those of centralized concurrency control, primarily due to communication and computer latency. It often requires special techniques, like a distributed lock manager over fast computer networks with low latency, like switched fabric (e.g., InfiniBand). Commitment ordering (or commit ordering) is a general serializability technique that achieves distributed serializability (and global serializability in particular) effectively on a large scale, without concurrency control information distribution (e.g., local precedence relations, locks, timestamps, or tickets), and thus without the performance penalties that are typical of other serializability techniques (Raz 1992).
The most common distributed concurrency control technique is strong strict two-phase locking (SS2PL, also named rigorousness), which is also a common centralized concurrency control technique. SS2PL provides the serializability, strictness, and commitment ordering properties. Strictness, a special case of recoverability, is utilized for effective recovery from failure, and commitment ordering allows participation in a general solution for global serializability. For large-scale distribution and complex transactions, distributed locking's typical heavy performance penalty (due to delays, latency) can be saved by using the
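A minimal single-process sketch of the SS2PL discipline described above (exclusive locks only, no shared locks or deadlock handling; the class and method names are illustrative, not taken from any particular DBMS):

```python
# SS2PL: every lock a transaction acquires is held until the transaction
# ends, so conflicting transactions commit in serialization order.
import threading

class SS2PLManager:
    def __init__(self):
        self._locks = {}              # data item -> threading.Lock
        self._held = {}               # txn id -> set of locked items
        self._guard = threading.Lock()

    def _lock_for(self, item):
        with self._guard:
            return self._locks.setdefault(item, threading.Lock())

    def acquire(self, txn, item):
        self._lock_for(item).acquire()          # blocks while a conflicting txn holds it
        self._held.setdefault(txn, set()).add(item)

    def end(self, txn):
        # SS2PL: locks are released only here, at commit or abort.
        for item in self._held.pop(txn, ()):
            self._locks[item].release()

mgr = SS2PLManager()
mgr.acquire("T1", "x")
mgr.acquire("T1", "y")   # ... T1 reads/writes x and y; others wait ...
mgr.end("T1")            # only now may a waiting T2 proceed
```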
https://en.wikipedia.org/wiki/Ostwald%27s%20rule | In materials science, Ostwald's rule or Ostwald's step rule, conceived by Wilhelm Ostwald, describes the formation of polymorphs. The rule states that usually the less stable polymorph crystallizes first. Ostwald's rule is not a universal law but a common tendency observed in nature.
This can be explained on the basis of irreversible thermodynamics, structural relationships, or a combined consideration of statistical thermodynamics and structural variation with temperature. Unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged.
For example, out of hot water, metastable, fibrous crystals of benzamide appear first, only later to spontaneously convert to the more stable rhombic polymorph. Another example is magnesium carbonate, which more readily forms dolomite. A dramatic example is phosphorus, which upon sublimation first forms the less stable white phosphorus, which only slowly polymerizes to the red allotrope. This is notably the case for the anatase polymorph of titanium dioxide, which, having a lower surface energy, is commonly the first phase to form by crystallisation from amorphous precursors or solutions despite being metastable, with rutile being the equilibrium phase at all temperatures and pressures.
https://en.wikipedia.org/wiki/Sebastian%20%281968%20film%29 | Sebastian is a 1968 British spy film directed by David Greene, produced by Michael Powell, Herbert Brodkin and Gerry Fisher, and distributed by Paramount Pictures. The motion picture is based on a story by Leo Marks, and Gerald Vaughan-Hughes wrote the screenplay.
Plot
Mr. Sebastian is a former Oxford professor, who in the late 60s directs the all-female decoding office of British Intelligence. One day, while running through the streets of Oxford to attend the bestowing of an honorary degree on his friend the Prime Minister, Sebastian runs into Rebecca (Becky) Howard and her jeep. After insulting Sebastian on the spot, Becky is intrigued by him and follows him to the ceremony. After Becky is able to spell her own name backwards, he gives her a phone number to call if she wants an unspecified "job."
Becky calls the number, and after Sebastian's personal assistant Miss Elliott describes the job as being part of the "civil service," Becky is turned off by the idea. Overcoming her concerns, she calls again, and after a successful interview, obtains a job deciphering codes used by secret agents and foreign spies. Once settled in her new job, Becky slowly starts to fall for the aloof Mr Sebastian. However, problems arise when Gen. John Phillips, Head of Security, accuses Sebastian's senior Jewish decoder Elsa Shahn of being a poor security risk, because of her left-wing Communist leanings.
Sebastian convinces the Head of Intelligence to retain Shahn despite Phillips' objections, expressing how vital Shahn is to the decoding office and reaffirming that she enjoys his full confidence. Eventually, Becky and Sebastian engage in an affair, which upsets Sebastian's longtime girlfriend (and washed-up pop singer) Carol Fancy. Ultimately, Shahn betrays Sebastian's trust by providing recently decoded information to a left-wing political organisation. When confronted with the security breach by the Head of Intelligence and by Phillips' watchdog Jameson, Sebastian tenders |
https://en.wikipedia.org/wiki/Robik | Robik (Russian: Арифметико Логическое Устройство «РОБИК», "Arithmetic Logic Device ROBIK") was a Soviet ZX Spectrum clone produced between 1989 and 1994 by NPO Selto-Rotor in Cherkasy.
The computer came with a full QWERTY keyboard with 55 keys, separate , three , double , , separate and stop keys. It had the possibility to switch between Latin and Russian fonts. It had built-in Kempston interface and cursor keys that also worked as a joystick.
It had no edge connector and video output was analog RGB on a 5-pin DIN or digital TTL on an 8-pin DIN for connecting to monochrome MDA/Hercules or color EGA monitor. There was no composite video and all I/O ports were 5- and 7-pin DINs. Inside the case there was a male 64-pin connector that could be mapped to the standard edge connector.
The hardware contained about three to four grams of gold and almost eighteen grams of silver. The letters on the keyboard were written using laser beam technology. The buttons used reed switches instead of copper or iron contact plates.
When the screen memory was written out to the TV/monitor screen, the image did not begin at the top left of the border, but instead began at the border directly under the paper area. This meant that most multicolor effects and some games did not work correctly. Errors in the ROM were fixed, and Cyrillic letters were also added.
The keyboard matrix was extended from five keys in eight rows to five keys in nine rows to allow for more buttons. A reset could be performed by pressing two reset buttons.
The Robik came in four versions, with only minor changes made for Russian internationalization and localization. The hardware remained largely unchanged, but cheaper parts were used for each version. The fourth version had the new addition of a single integrated circuit. This version did not sell well because by then the main market for the Robik was hardware enthusiasts and this design did not allow for modifications.
Robik had two EPROM chips. There are two languages in the M2764AF-1 chip from ST, which can be swi |
https://en.wikipedia.org/wiki/Radiation%20Effects%20Research%20Foundation | The Radiation Effects Research Foundation (RERF) is a joint U.S.-Japan research organization responsible for studying the medical effects of radiation and associated diseases in humans for the welfare of the survivors and all humankind. The organization's scientific laboratories are located in Hiroshima and Nagasaki, Japan.
RERF's studies into radiation health effects have continued for more than 70 years, making RERF unique for its conduct of epidemiological and other research on such a large population (more than 120,000 individuals) over such a long timeframe. RERF continues its research with the aim of further elucidating the effects of A-bomb radiation on human health.
RERF carries out research in numerous scientific fields, including epidemiology, clinical medicine, genetics, and immunology. Findings from RERF's studies are utilized not only for the medical care and welfare of the A-bomb survivors but also for the establishment of international radiation protection standards.
History
The predecessor organization to RERF was the Atomic Bomb Casualty Commission (ABCC), established in 1947 by the U.S. government. ABCC's mission was to determine the long-term effects on health from exposure to radiation in A-bomb survivors and their children.
In the 1950s, an extensive interview survey was conducted, based on which records were compiled for each of the A-bomb survivor participants in the ABCC studies. These records detailed the location of each survivor at the time of the bombing and the structure of the building, or "shielding" as it is known, that the survivor may have been in at the time. Based on such records, radiation doses from the atomic bombings were estimated for the A-bomb survivors. Accurate estimates of radiation exposure were crucial for tying a specific dose to a certain health effect observed in later studies of health effects in the survivors.
ABCC was reorganized into RERF, a research institute jointly funded by the governments of Japan and the Uni |
https://en.wikipedia.org/wiki/Cognitively%20Guided%20Instruction | Cognitively Guided Instruction is "a professional development program based on an integrated program of research on (a) the development of students' mathematical thinking; (b) instruction that influences that development; (c) teachers' knowledge and beliefs that influence their instructional practice; and (d) the way that teachers' knowledge, beliefs, and practices are influenced by their understanding of students' mathematical thinking". CGI is an approach to teaching mathematics rather than a curriculum program. At the core of this approach is the practice of listening to children's mathematical thinking and using it as a basis for instruction. Research based frameworks of children's thinking in the domains of addition and subtraction, multiplication and division, base-ten concepts, multidigit operations, algebra, geometry and fractions provide guidance to teachers about listening to their students. Case studies of teachers using CGI have shown the most accomplished teachers use a variety of practices to extend children's mathematical thinking. It's a tenet of CGI that there is no one way to implement the approach and that teachers' professional judgment is central to making decisions about how to use information about children's thinking.
The research base on children's mathematical thinking upon which CGI is based shows that children are able to solve problems without direct instruction by drawing upon informal knowledge of everyday situations. For example, a study of kindergarten children showed that young children can solve problems involving what are normally considered advanced mathematics, such as multiplication, division, and multistep problems, by using direct modeling. Direct modeling is an approach to problem solving in which the child, in the absence of more sophisticated knowledge of mathematics, constructs a solution to a story problem by modeling the action or structure. For example, about half of the children in a study of kindergartners' problem
https://en.wikipedia.org/wiki/Tubular%20heart | The tubular heart or primitive heart tube is the earliest stage of heart development.
From the inflow to the outflow, it consists of the sinus venosus, the primitive atrium, the primitive ventricle, the bulbus cordis, and the truncus arteriosus.
It forms primarily from splanchnic mesoderm. More specifically, it forms from the endocardial tubes, starting at day 21.
https://en.wikipedia.org/wiki/Work%20systems | The term work system has been used loosely in many areas. This article concerns its use in understanding IT-reliant systems in organizations. A notable use of the term occurred in 1977 in the first volume of MIS Quarterly in two articles by Bostrom and Heinen (1977). Later Sumner and Ryan (1994) used it to explain problems in the adoption of CASE (computer-aided software engineering). A number of socio-technical systems researchers such as Trist and Mumford also used the term occasionally, but seemed not to define it in detail. In contrast, the work system approach defines work system carefully and uses it as a basic analytical concept.
A work system is a system in which human participants and/or machines perform work (processes and activities) using information, technology, and other resources to produce products/services for internal or external customers. Typical business organizations contain work systems that procure materials from suppliers, produce products, deliver products to customers, find customers, create financial reports, hire employees, coordinate work across departments, and perform many other functions.
The work system concept is like a common denominator for many of the types of systems that operate within or across organizations. Operational information systems, service systems, projects, supply chains, and ecommerce web sites can all be viewed as special cases of work systems.
An information system is a work system whose processes and activities are devoted to processing information.
A service system is a work system that produces services for its customers.
A project is a work system designed to produce a product and then go out of existence.
A supply chain is an interorganizational work system devoted to procuring materials and other inputs required to produce a firm's products.
An ecommerce web site can be viewed as a work system in which a buyer uses a seller's web site to obtain product information and perform purchase transactions.
The rela |
https://en.wikipedia.org/wiki/Water%20chiller | A water chiller is a device used to lower the temperature of water. Most chillers use refrigerant in a closed loop system to facilitate heat exchange from water; the refrigerant is then pumped to a location where the waste heat is transferred to the atmosphere. However, there are other methods of performing this action.
In hydroponics, pumps, lights and ambient heat can warm the reservoir water temperatures, leading to plant root and health problems. For ideal plant health, a chiller can be used to lower the water temperature below ambient level; is a good temperature for most plants. This results in healthy root production and efficient absorption of nutrients.
In air conditioning, chilled water is often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. A chiller lowers water temperature to between and before the water is pumped to the location to be cooled.
See also
Chiller
Gardening
Notes
https://en.wikipedia.org/wiki/Model-based%20design | Model-based design (MBD) is a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems. It is used in many motion control, industrial equipment, aerospace, and automotive applications. Model-based design is a methodology applied in designing embedded software.
Overview
Model-based design provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle (V-model). In model-based design of control systems, development is manifested in these four steps (a simple sketch follows the list):
modeling a plant,
analyzing and synthesizing a controller for the plant,
simulating the plant and controller,
integrating all these phases by deploying the controller.
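As an illustrative sketch of the first three steps (not taken from any MBD toolchain; the plant model and gains are assumed for the example):

```python
# Model a first-order plant, choose a proportional controller, and
# simulate the closed loop with forward-Euler integration.
tau, k_plant = 2.0, 1.0          # plant model: tau * dy/dt = -y + k_plant * u
kp = 5.0                         # proportional controller: u = kp * error
dt, t_end, setpoint = 0.01, 10.0, 1.0

y = 0.0
for _ in range(int(t_end / dt)):
    u = kp * (setpoint - y)               # controller
    y += dt * (-y + k_plant * u) / tau    # plant dynamics, one Euler step
print(round(y, 3))   # settles near kp / (1 + kp) = 0.833 of the setpoint
```

In a full MBD flow, the same plant model would also drive the simulation-based testing and the eventual controller deployment described above.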
Model-based design is significantly different from traditional design methodology. Rather than using complex structures and extensive software code, designers can use model-based design to define plant models with advanced functional characteristics using continuous-time and discrete-time building blocks. These models, used with simulation tools, can lead to rapid prototyping, software testing, and verification. Not only is the testing and verification process enhanced, but also, in some cases, hardware-in-the-loop simulation can be used with the new design paradigm to perform testing of dynamic effects on the system more quickly and much more efficiently than with traditional design methodology.
History
As early as the 1920s two aspects of engineering, control theory and control systems, converged to make large-scale integrated systems possible. In those early days control systems were commonly used in the industrial environment. Large process facilities started using process controllers for regulating continuous variables such as temperature, pressure, and flow rate. Electrical relays built into ladder-like networks were one of the first discrete control devices to au
https://en.wikipedia.org/wiki/ARC-ECRIS | ARC-ECRIS is an Electron Cyclotron Resonance Ion Source (ECRIS) based on arc-shaped coils, unlike the conventional ECRIS, which is based on a multipole magnet (usually a hexapole magnet) inside a solenoid magnet.
Arc-shaped coils were first used in the 1960s in fusion experiments, for example at the Lawrence Livermore National Laboratory (MFTF, Baseball II, ...) and later in Japan (GAMMA10, ...).
In 2006 the JYFL ion source group designed, constructed and tested a similar plasma trap to produce highly charged heavy ion beams. The first tests were promising and showed that a stable plasma can be confined in an arc-coil magnetic field structure (see references).
https://en.wikipedia.org/wiki/Bacillus%20virus%20phi29 | Bacillus virus Φ29 (bacteriophage Φ29) is a double-stranded DNA (dsDNA) bacteriophage with a prolate icosahedral head and a short tail that belongs to the genus Salasvirus, order Caudovirales, and family Salasmaviridae. They are in the same order as phages PZA, Φ15, BS32, B103, M2Y (M2), Nf, and GA-1. First discovered in 1965, the Φ29 phage is the smallest Bacillus phage isolated to date and is among the smallest known dsDNA phages.
Φ29 has a unique DNA packaging motor structure that employs prohead packaging RNA (pRNA) to guide the translocation of the phage genome during replication. This novel structure system has inspired ongoing research in nanotechnology, drug delivery, and therapeutics.
In nature, the Φ29 phage infects Bacillus subtilis, a species of gram-positive, endospore-forming bacteria that is found in soil, as well as the gastrointestinal tracts of various marine and terrestrial organisms, including human beings.
History
In 1965, American microbiologist Dr. Bernard Reilly discovered the Φ29 phage in Dr. John Spizizen’s lab at the University of Minnesota. Due to its small size and complex morphology, it has become an ideal model for the study of many processes in molecular biology, such as morphogenesis, viral DNA packaging, viral replication, and transcription.
Structure
The structure of Φ29 is composed of seven main proteins: the terminal protein (p3), the head or capsid protein (p8), the head or capsid fiber protein (p8.5), the distal tail knob (p9), the portal or connector protein (p10), the tail tube or lower collar proteins (p11), and the tail fibers or appendage proteins (p12*).
The main difference between Φ29's structure and that of other phages is its use of pRNA in its DNA packaging motor.
DNA packaging motor
The Φ29 DNA packaging motor packages the phage genome into the procapsid during viral replication. The Φ29 packaging motor is structurally composed of the procapsid and the connector proteins, which interact with the pRNA, the p |
https://en.wikipedia.org/wiki/Asymmetric%20norm | In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm.
Definition
An asymmetric norm on a real vector space $X$ is a function $p : X \to [0, +\infty)$ that has the following properties:
Subadditivity, or the triangle inequality: $p(x + y) \leq p(x) + p(y)$ for all $x, y \in X$
Nonnegative homogeneity: $p(rx) = r\,p(x)$ for every $x \in X$ and every non-negative real number $r$
Positive definiteness: $p(x) > 0$ unless $x = 0$
Asymmetric norms differ from norms in that they need not satisfy the equality $p(-x) = p(x)$.
If the condition of positive definiteness is omitted, then $p$ is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: that for $x \neq 0$, at least one of the two numbers $p(x)$ and $p(-x)$ is not zero.
Examples
On the real line the function $p$ given by
$$p(x) = \begin{cases} |x|, & x \leq 0 \\ 2x, & x \geq 0 \end{cases}$$
is an asymmetric norm but not a norm.
In a real vector space $X$, the Minkowski functional $p_B$ of a convex subset $B \subseteq X$ that contains the origin is defined by the formula
$$p_B(x) = \inf \left\{ r > 0 : x \in r B \right\}$$
for $x \in X$. This functional is an asymmetric seminorm if $B$ is an absorbing set, which means that $\bigcup_{r > 0} r B = X$ and ensures that $p_B(x)$ is finite for each $x \in X$.
Correspondence between asymmetric seminorms and convex subsets of the dual space
If $B \subseteq \mathbf{R}^n$ is a convex set that contains the origin, then an asymmetric seminorm $p_B$ can be defined on $\mathbf{R}^n$ by the formula
$$p_B(x) = \sup_{\varphi \in B} \langle \varphi, x \rangle .$$
For instance, if $B$ is the square with vertices $(\pm 1, \pm 1)$, then $p_B$ is the taxicab norm $x \mapsto |x_1| + |x_2|$. Different convex sets yield different seminorms, and every asymmetric seminorm on $\mathbf{R}^n$ can be obtained from some convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with convex sets that contain the origin. The seminorm $p_B$ is
positive definite if and only if $B$ contains the origin in its topological interior,
degenerate if and only if $B$ is contained in a linear subspace of dimension less than $n$, and
symmetric if and only if $B = -B$.
More generally, if $X$ is a finite-dimensional real vector space and $B$ is a compact convex subset of the dual space $X^*$ that contains the origin, then $p_B(x) = \sup_{\varphi \in B} \varphi(x)$ defines an asymmetric seminorm on $X$.
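As a quick check of the square example above, the supremum splits coordinate by coordinate:

```latex
% For B = [-1, 1]^2, the supremum defining p_B separates over coordinates:
\[
  p_B(x) = \sup_{\varphi \in [-1,1]^2} \left( \varphi_1 x_1 + \varphi_2 x_2 \right)
         = \sup_{|\varphi_1| \le 1} \varphi_1 x_1
           + \sup_{|\varphi_2| \le 1} \varphi_2 x_2
         = |x_1| + |x_2| .
\]
```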
See also |
https://en.wikipedia.org/wiki/Unparticle%20physics | In theoretical physics, unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant.
Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics"
and "Another Odd Thing About Unparticle Physics". His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics, astrophysics, cosmology, CP violation, lepton flavour violation, muon decay, neutrino oscillations, and supersymmetry.
Background
All particles exist in states that may be characterized by a certain energy, momentum and mass. In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons, for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons, can exist with their properties scaled equally. This immunity to scaling is called "scale invariance".
The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or, equivalently, energy). This stuff is unlike particles, and is described as unparticles. Unparticle stuff is equivalent to particles with a continuous spectrum of mass.
Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. Since the Large Hadron Collider (LHC) team announced it will begin probing a higher energy frontier in 2009, some theoretical physicists have begun to consider the properties of unparticle stuff and how it may appear in LHC experiments. One of the great hopes for the LHC is that it |
https://en.wikipedia.org/wiki/SGI%20Onyx | SGI Onyx is a series of visualization systems designed and manufactured by SGI, introduced in 1993 and offered in two models, deskside and rackmount, codenamed Eveready and Terminator respectively. The Onyx's basic system architecture is based on the SGI Challenge servers, but with graphics hardware.
The Onyx was employed in early 1995 for development kits used to produce software for the Nintendo 64 and, because the technology was so new, the Onyx was noted as the major factor for the impressively high price of – for such kits.
The Onyx was succeeded by the Onyx2 in 1996 and was discontinued on March 31, 1999.
CPU
The deskside variant can accept one CPU board, and the rackmount variant can take up to six CPU boards. Both models were launched with the IP19 CPU board with one, two, or four MIPS R4400 CPUs, initially with 100 and 150 MHz options and later increased to 200 and 250 MHz. Later, the IP21 CPU board was introduced, with one or two R8000 microprocessors at 75 or 90 MHz; machines with this board were referred to as POWER Onyx. Finally, SGI introduced the IP25 board with one, two, or four R10000 CPUs at 195 MHz.
Graphics
The Onyx was launched with the RealityEngine2 or VTX graphics subsystems, and InfiniteReality was introduced in 1995.
RealityEngine2
The RealityEngine2 is the original high-end graphics subsystem for the Onyx and was found in two different versions: deskside and rack. The deskside model has one GE10 (Geometry Engine) board with 12 Intel i860XP processors, up to four RM4 or RM5 (Raster Manager) boards, and a DG2 (Display Generator) board. The rack model differs by supporting up to three RealityEngine2 pipes (display outputs) vs the single pipe of the deskside.
VTX
The VTX graphics subsystem is a cost reduced version of the RealityEngine2, using the same hardware but in a feature reduced configuration that can not be upgraded. It consists of one GE10 board (6 Intel i860XP processors vs 12 in RE2), a single RM4 or RM5 board, and a DG2 boar |
https://en.wikipedia.org/wiki/Single-input%20single-output%20system | In control engineering, a single-input and single-output (SISO) system is a simple single-variable control system with one input and one output. In radio, it is the use of only one antenna both in the transmitter and receiver.
Details
SISO systems are typically less complex than multiple-input multiple-output (MIMO) systems. It is usually also easier to make order-of-magnitude or trending predictions for them "on the fly" or "on the back of an envelope", whereas MIMO systems have too many interactions to trace through quickly, thoroughly, and effectively by inspection.
Frequency domain techniques for analysis and controller design dominate SISO control system theory. The Bode plot, Nyquist stability criterion, Nichols plot, and root locus are the usual tools for SISO system analysis. Controllers can be designed through polynomial design or root locus design methods, to name just two of the more popular. Often SISO controllers will be PI, PID, or lead-lag.
See also
Control theory |
https://en.wikipedia.org/wiki/Robbins%20lemma | In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E(f(X)) exists, then
$$\operatorname{E}\big(X f(X)\big) = \lambda \operatorname{E}\big(f(X + 1)\big).$$
Robbins introduced this proposition while developing empirical Bayes methods. |
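A quick Monte-Carlo sanity check of the identity (the lemma itself is exact; the simulation only illustrates it, and the choice of f is arbitrary):

```python
# Empirically compare E(X f(X)) with lambda * E(f(X + 1)) for Poisson X.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.5
x = rng.poisson(lam, size=1_000_000)
f = lambda k: 1.0 / (k + 1.0)     # any f with E(f(X)) finite works

lhs = np.mean(x * f(x))           # estimates E(X f(X))
rhs = lam * np.mean(f(x + 1))     # estimates lambda * E(f(X + 1))
print(lhs, rhs)                   # the two estimates agree closely
```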
https://en.wikipedia.org/wiki/List%20of%20Zimbabwean%20flags | This is a list of flags used in Zimbabwe (Africa) from 1980 to the present date. For flags before April 1980 see List of Rhodesian flags.
National flag
Presidential flag
Military flags
Political party flags
Town flags
Historical flags
Ethnic groups
See also
Coat of arms of Zimbabwe
Flag of Zimbabwe
Simudzai Mureza wedu WeZimbabwe
https://en.wikipedia.org/wiki/Aging-associated%20diseases | An aging-associated disease (commonly termed age-related disease, ARD) is a disease that is seen with increasing frequency as senescence increases. Aging-associated diseases are essentially complications of senescence, distinguished from the aging process itself because all adult animals age (with rare exceptions) but not all adult animals experience all age-associated diseases. The term does not refer to age-specific diseases, such as the childhood diseases chicken pox and measles, but only to diseases of the elderly. They are also not accelerated aging diseases, all of which are genetic disorders.
Examples of aging-associated diseases are atherosclerosis and cardiovascular disease, cancer, arthritis, cataracts, osteoporosis, type 2 diabetes, hypertension and Alzheimer's disease. The incidence of all of these diseases increases exponentially with age.
Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is higher, reaching 90%.
Patterns of differences
By age 3, about 30% of rats have had cancer, whereas by age 85 about 30% of humans have had cancer. Humans, dogs and rabbits get Alzheimer's disease, but rodents do not. Elderly rodents typically die of cancer or kidney disease, but not of cardiovascular disease. In humans, the relative incidence of cancer increases exponentially with age for most cancers, but levels off or may even decline by age 60–75 (although colon/rectal cancer continues to increase).
People with the so-called segmental progerias are vulnerable to different sets of diseases. Those with Werner's syndrome experience osteoporosis, cataracts, and cardiovascular disease, but not neurodegeneration or Alzheimer's disease; those with Down syndrome have type 2 diabetes and Alzheimer's disease, but not high blood pressure, osteoporosis or cataracts. In Bloom syndrome, those affected most often die of cancer.
Research
Aging (senescence) increases vuln |
https://en.wikipedia.org/wiki/Schilder%27s%20theorem | In mathematics, Schilder's theorem is a generalization of the Laplace method from integrals on to functional Wiener integration. The theorem is used in the large deviations theory of stochastic processes. Roughly speaking, out of Schilder's theorem one gets an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions.
Statement of the theorem
Let $C_0 = C_0([0, T]; \mathbf{R}^d)$ be the Banach space of continuous functions $\omega$ such that $\omega(0) = 0$, equipped with the supremum norm $\|\cdot\|_\infty$, and let $H_1$ be the subspace of absolutely continuous functions whose derivative is in $L^2$ (the so-called Cameron–Martin space). Define the rate function
$$I(\omega) = \frac{1}{2} \int_0^T \|\dot{\omega}(t)\|^2 \, dt$$
on $H_1$, and let $f$ and $g$ be two given functions, such that the "action" $I + f$ has a unique minimum $\omega^* \in H_1$.
Then, under some differentiability and growth assumptions on $f$ which are detailed in Schilder 1966, one has
where $\mathbb{E}$ denotes expectation with respect to the Wiener measure on $C_0$ and $\Lambda$ is the Hessian of the action at the minimum $\omega^*$; $\langle \cdot, \cdot \rangle$ is meant in the sense of an inner product.
Application to large deviations on the Wiener measure
Let B be a standard Brownian motion in d-dimensional Euclidean space Rd starting at the origin, 0 ∈ Rd; let W denote the law of B, i.e. classical Wiener measure. For ε > 0, let Wε denote the law of the rescaled process $\sqrt{\varepsilon} B$. Then, on the Banach space C0 = C0([0, T]; Rd) of continuous functions $\omega$ such that $\omega(0) = 0$, equipped with the supremum norm ||·||∞, the probability measures Wε satisfy the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by
$$I(\omega) = \frac{1}{2} \int_0^T \|\dot{\omega}(t)\|^2 \, dt$$
if ω is absolutely continuous, and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,
$$\liminf_{\varepsilon \downarrow 0} \varepsilon \log W_\varepsilon(G) \geq - \inf_{\omega \in G} I(\omega)$$
and
$$\limsup_{\varepsilon \downarrow 0} \varepsilon \log W_\varepsilon(F) \leq - \inf_{\omega \in F} I(\omega).$$
Example
Taking ε = 1/c2, one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion B strays further tha |
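As a quick illustration of that estimate, here is a minimal Monte Carlo sketch (not from the article; the step count, path count and levels c are illustrative choices) checking that (1/c2) log P(sup0≤t≤1 Bt ≥ c) approaches the Schilder prediction −1/2, the action of the cheapest path from 0 to level c in unit time:

```python
# Monte Carlo check of the Schilder-type estimate for Brownian level crossing.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 1000, 200_000
dt = 1.0 / n_steps

for c in (2.0, 2.5, 3.0):
    x = np.zeros(n_paths)            # Brownian paths, advanced step by step
    running_max = np.zeros(n_paths)  # sup of each path so far
    for _ in range(n_steps):
        x += rng.normal(0.0, np.sqrt(dt), size=n_paths)
        np.maximum(running_max, x, out=running_max)
    p = (running_max >= c).mean()
    print(f"c={c}: (1/c^2) * log p = {np.log(p) / c**2:+.3f}  (limit: -0.500)")
```

Convergence in c is slow, since the large deviations estimate drops all sub-exponential prefactors and captures only the leading exponential order.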
https://en.wikipedia.org/wiki/Sammon%20mapping | Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection.
It is particularly suited for use in exploratory data analysis.
The method was proposed by John W. Sammon in 1969.
It is considered a non-linear approach, as the mapping cannot be represented as a linear combination of the original variables, as is possible in techniques such as principal component analysis; this also makes it more difficult to use for classification applications.
Denote the distance between the ith and jth objects in the original space by d*ij, and the distance between their projections by dij.
Sammon's mapping aims to minimize the following error function, which is often referred to as Sammon's stress or Sammon's error:
E = (1 / Σi<j d*ij) Σi<j (d*ij − dij)2 / d*ij
The minimization can be performed either by gradient descent, as proposed initially, or by other means, usually involving iterative methods.
The number of iterations needs to be experimentally determined and convergent solutions are not always guaranteed.
Many implementations prefer to use the first Principal Components as a starting configuration.
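The following is a minimal sketch of the method (plain gradient descent on Sammon's stress E rather than Sammon's original quasi-Newton "magic factor" update; the learning rate, iteration count and PCA-free random start are illustrative choices):

```python
# Sammon mapping by plain gradient descent on the stress E.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon(X, n_components=2, n_iter=500, lr=0.3, seed=0):
    D = squareform(pdist(X))          # input-space distances d*_ij
    np.fill_diagonal(D, 1.0)          # dummy diagonal, avoids 0/0 below
    scale = 1.0 / pdist(X).sum()      # 1 / sum_{i<j} d*_ij
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(X.shape[0], n_components))
    for _ in range(n_iter):
        d = squareform(pdist(Y))      # projection distances d_ij
        np.fill_diagonal(d, 1.0)
        ratio = (D - d) / (D * d)     # zero on the diagonal by construction
        # Gradient of E with respect to each projected point y_i.
        grad = -2.0 * scale * np.einsum("ij,ijk->ik", ratio,
                                        Y[:, None, :] - Y[None, :, :])
        Y -= lr * grad
    return Y

# Example: embed 50 five-dimensional points in the plane.
Y = sammon(np.random.default_rng(1).random((50, 5)))
```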
The Sammon mapping has been one of the most successful nonlinear metric multidimensional scaling methods since its advent in 1969, but effort has been focused on algorithm improvement rather than on the form of the stress function.
The performance of the Sammon mapping has been improved by extending its stress function using left Bregman divergence and right Bregman divergence.
See also
Prefrontal cortex basal ganglia working memory
State–action–reward–state–action
Constructing skill trees |
https://en.wikipedia.org/wiki/Richard%20Kilmer | Richard Kilmer (born Hemet, California, 1969) is a technology entrepreneur, software programmer and conference host and speaker in the open-source software community. He is an open-source contributor and developer of commercial software applications built in Ruby and Flash. His best known open-source software creation is RubyGems, a package manager for the Ruby programming language, most commonly used in downloads and deployments of the Ruby on Rails web application framework. He is currently the Co-Founder and CEO of CargoSense, Inc.
In 2001, he co-founded both the non-profit corporation Ruby Central, Inc. dedicated to the promotion of the Ruby programming language, and the for-profit corporation InfoEther, Inc., created to focus on applying the Ruby computer language in business. He served as president and CEO of InfoEther until its acquisition by LivingSocial in March 2011. At LivingSocial he was appointed a vice president working in roles in R&D, and led the software development of numerous projects in Merchant Services and mobile.
After several years at LivingSocial, he left in 2013 to form his current company, CargoSense, Inc., a Software-as-a-Service (SaaS) company aimed at bringing innovation to the logistics supply chain in numerous industries using sensor technology in the Internet of Things arena.
Prior to 2001, he was the co-founder and Chief Technology Officer for a leading-edge P2P software company where he was granted two U.S. patents and co-wrote a massive Java codebase.
Between 2002 and 2005 his for-profit company performed work for DARPA on both a massively multi-agent logistics software system and the Semantic Web project developing an early Web Ontology Language (OWL) library. Both projects drew on his expertise in computer security gained as a systems security manager while in the U.S. Air Force stationed at The Pentagon.
When an active board member in the non-profit Ruby Central, he played host to the annual international confe |
https://en.wikipedia.org/wiki/Freidlin%E2%80%93Wentzell%20theorem | In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
Statement
Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form
dXtε = b(Xtε) dt + √ε dBt,  X0ε = 0,
where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||·||∞, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by
I(ω) = ½ ∫0T |ω̇(t) − b(ω(t))|2 dt
if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,
lim infε→0 ε log P(Xε ∈ G) ≥ −inf{ I(ω) : ω ∈ G }
and
lim supε→0 ε log P(Xε ∈ F) ≤ −inf{ I(ω) : ω ∈ F } |
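A small numerical illustration of the small-noise regime (the drift b(x) = −x and T = 1 are assumptions for the demo, not part of the theorem): Euler–Maruyama sample paths of dX = b(X) dt + √ε dB concentrate on the zero-noise path X ≡ 0, and the probability of a fixed deviation decays exponentially as ε → 0.

```python
# Euler-Maruyama simulation of a small-noise Ito diffusion with drift b(x) = -x.
import numpy as np

def deviation_probability(eps, level=0.5, n_steps=1000, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.zeros(n_paths)
    sup_dev = np.zeros(n_paths)          # sup_t |X_t|, deviation from mean path
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(eps * dt) * rng.normal(size=n_paths)
        np.maximum(sup_dev, np.abs(x), out=sup_dev)
    return (sup_dev > level).mean()

for eps in (0.5, 0.2, 0.1, 0.05):
    print(f"eps={eps}: P(sup_t |X_t| > 0.5) ~= {deviation_probability(eps):.5f}")
```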
https://en.wikipedia.org/wiki/Virtual%20black%20hole | In quantum gravity, a virtual black hole is a hypothetical micro black hole that exists temporarily as a result of a quantum fluctuation of spacetime. It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.
The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation
ΔRμ Δxμ ≥ ℓP2 = ℏG/c3
where Rμ is the radius of curvature of a small domain of spacetime, xμ is the coordinate of the small domain, ℓP is the Planck length, ℏ is the reduced Planck constant, G is the Newtonian constant of gravitation, and c is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
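For concreteness, the Planck scales referred to above follow directly from these constants (a small numerical aside, not from the article):

```python
# Compute the Planck length, time and mass from SI-valued constants.
import math

hbar = 1.054_571_817e-34   # J*s, reduced Planck constant
G = 6.674_30e-11           # m^3 kg^-1 s^-2, Newtonian constant of gravitation
c = 2.997_924_58e8         # m/s, speed of light

l_p = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.6e-35 m
t_p = math.sqrt(hbar * G / c**5)   # Planck time   ~ 5.4e-44 s
m_p = math.sqrt(hbar * c / G)      # Planck mass   ~ 2.2e-8 kg
print(l_p, t_p, m_p)
```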
If virtual black holes exist, they provide a mechanism for proton decay: a black hole's mass increases when mass falls into the hole and is theorized to decrease when Hawking radiation is emitted from the hole, and the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.
The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.
See also
Quantum foam
Virtual particle
Quantum tunnelling |
https://en.wikipedia.org/wiki/Human%20knot | A human knot is a common icebreaker game or team building activity for new people to learn to work together in physical proximity.
The knot is a disentanglement puzzle in which a group of people in a circle each hold hands with two people who are not next to them, and the goal is to disentangle the limbs to get the group into a circle without letting go of grasped hands; instead, group members step over or under arms to try to untangle the knot. Not all human knots are solvable (see unknotting problem): some remain knotted, and some end up as two or more separate circles.
An easy way to ensure that the game ends with a single circle and no knots is to start from a circle of people holding hands and facing the center, and to ask some of them to cross their arms, swapping left and right hands while grasping the same neighbours again. When the game is successfully completed, a certain number of people will appear to be outside of the circle; this number equals the number of people who crossed their arms. The challenge is then to solve the game several times, each time starting with more crossed-arm players. To increase the difficulty, players can be blindfolded or required to play silently (no talking).
The game is recommended for children from 12 years and up, and is best suited to a group of ten or so players, although it can be played with as few as five and with much larger groups as well. No materials are required. The purpose of the human knot puzzle is to gain team building skills, problem solving skills, and communication skills among a group of people and onto the individuals participating. |
https://en.wikipedia.org/wiki/Luminosity%20function%20%28astronomy%29 | In astronomy, a luminosity function gives the number of stars or galaxies per luminosity interval. Luminosity functions are used to study the properties of large groups or classes of objects, such as the stars in clusters or the galaxies in the Local Group.
Note that the term "function" is slightly misleading, and the luminosity function might better be described as a luminosity distribution. Given a luminosity as input, the luminosity function essentially returns the abundance of objects with that luminosity (specifically, number density per luminosity interval).
Main sequence luminosity function
The main sequence luminosity function maps the distribution of main sequence stars according to their luminosity. It is used to compare star formation and death rates, and evolutionary models, with observations. Main sequence luminosity functions vary depending on their host galaxy and on selection criteria for the stars, for example in the Solar neighbourhood or the Small Magellanic Cloud.
White dwarf luminosity function
The white dwarf luminosity function (WDLF) gives the number of white dwarf stars with a given luminosity. As this is determined by the rates at which these stars form and cool, it is of interest for the information it gives about the physics of white dwarf cooling and the age and history of the Galaxy.
Schechter luminosity function
The Schechter luminosity function provides an approximation of the abundance of galaxies in a luminosity interval [L, L + dL]. The luminosity function has units of a number density per unit luminosity and is given by a power law with an exponential cut-off at high luminosity
φ(L) dL = (φ*/L*) (L/L*)α e−L/L* dL
where L* is a characteristic galaxy luminosity controlling the cut-off, the exponent α sets the faint-end slope, and the normalization φ* has units of number density.
Equivalently, this equation can be expressed in terms of log-quantities with
The galaxy luminosity function may have different parameters for different populations and environments; it is not a universal function. One measurement from |
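A brief sketch of evaluating the Schechter form described above (the parameter values below are illustrative placeholders, not a fitted measurement):

```python
# Evaluate the Schechter luminosity function phi(L) on a grid of luminosities.
import numpy as np

def schechter(L, L_star=1.0e10, phi_star=5.0e-3, alpha=-1.1):
    """Number density per unit luminosity at luminosity L (arbitrary units)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

L = np.logspace(8, 11, 7)          # luminosities spanning the knee at L_star
for Li, phi in zip(L, schechter(L)):
    print(f"L = {Li:9.2e}  phi(L) = {phi:9.3e}")
```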
https://en.wikipedia.org/wiki/Radiobiology | Radiobiology (also known as radiation biology, and uncommonly as actinobiology) is a field of clinical and basic medical sciences that involves the study of the effects of ionizing radiation on living things, in particular health effects of radiation. Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Health effects
In general, ionizing radiation is harmful and potentially lethal to living beings but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis.
Most adverse health effects of radiation exposure may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to the killing or malfunction of cells following high doses; and
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
Stochastic
Some effects of ionizing radiation on human health are stochastic, meaning that their probability of occurrence increases with dose, while the severity is independent of dose. Radiation-induced cancer, teratogenesis, cognitive decline, and heart disease are all stochastic effects induced by ionizing radiation.
Its most common impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence |
https://en.wikipedia.org/wiki/Meromyosin | Meromyosin is a part of myosin (mero meaning "part of"). With regard to human anatomy, myosin and actin constitute the basic functional unit of a muscle fiber, called the sarcomere, which plays a role in muscle contraction.
Biochemically, meromyosins are subunits of the actin-associated motor protein myosin. Following proteolysis, two types of meromyosin are formed: heavy meromyosin (HMM) and light meromyosin (LMM).
Light meromyosin has a long, straight portion in the “tail” region. Heavy meromyosin (HMM) is a protein chain terminating in a globular head portion/cross bridge. HMM consists of two subunits, Heavy Meromyosin Subunit 1 and 2 (HMMS-1 and HMMS-2). The majority of myosin activity is concentrated in HMMS-1. HMMS-1 has an actin binding site and ATP binding site (myosin ATPase) that determines the rate of muscle contraction when muscle is stretched.
Light and heavy meromyosin are subunits of myosin filaments (thick myofilaments). |
https://en.wikipedia.org/wiki/Blackett%20effect | The Blackett effect, also called gravitational magnetism, is the hypothetical generation of a magnetic field by an uncharged, rotating body. This effect has never been observed.
History
Gravitational magnetism was proposed by the German-British physicist Arthur Schuster as an explanation for the magnetic field of the Earth, but was found nonexistent in a 1923 experiment by H. A. Wilson. The hypothesis was revived by the British physicist P. M. S. Blackett in 1947 when he proposed that a rotating body should generate a magnetic field proportional to its angular momentum. This was never generally accepted, and by the 1950s even Blackett felt it had been refuted.
The Blackett effect was used by the science fiction writer James Blish in his series Cities in Flight (1955–1962) as the basis for his fictional stardrive, the spindizzy. |
https://en.wikipedia.org/wiki/Sodium%20methylparaben | Sodium methylparaben (sodium methyl para-hydroxybenzoate) is a compound with formula Na(CH3(C6H4COO)O). It is the sodium salt of methylparaben.
It is a food additive with the E number E219 which is used as a preservative. |
https://en.wikipedia.org/wiki/Furstenberg%27s%20rosette | Furstenberg's rosette is a structure in the teat of cattle, sheep and other ruminants, located at the internal end of the teat canal (also known as the streak canal or teat duct) at the junction with the teat cistern. It is often considered a barrier to pathogens, yet it offers little resistance to milk leaving the teat.
The rosette consists of 6–10 connective tissue folds covered with an epithelium which is two cells thick.
It has a leukocyte population, mainly consisting of plasma cells and lymphocytes; leukocytes are thought to leave the teat wall and enter the cistern via Furstenberg's rosette. It contains bactericidal cationic proteins (e.g. ubiquitin); some researchers consider these might be secreted by the rosette tissue. |
https://en.wikipedia.org/wiki/Werner%20Kuhn%20%28chemist%29 | Werner Kuhn (February 6, 1899 – August 27, 1963) was a Swiss physical chemist who developed the first model of the viscosity of polymer solutions using statistical mechanics. He is known for being the first to apply Boltzmann's entropy formula:
S = k log W
to the modeling of rubber molecules, i.e. the "rubber band entropy model", molecules which he imagined as chains of N independently oriented links of length b with an end-to-end distance of r. This model, which resulted in the derivation of the thermal equation of state of rubber, has since been extrapolated to the entropic modeling of proteins and other conformational polymer-chain molecules attached to a surface.
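The statistical picture is easy to reproduce numerically; the following sketch (illustrative chain parameters, not Kuhn's original computation) samples freely jointed chains and recovers the classical mean-square end-to-end distance ⟨r2⟩ = N b2:

```python
# Sample freely jointed chains: N independently oriented links of length b.
import numpy as np

rng = np.random.default_rng(0)
N, b, n_chains = 100, 1.0, 50_000

v = rng.normal(size=(n_chains, N, 3))            # isotropic link directions via
v /= np.linalg.norm(v, axis=2, keepdims=True)    # normalized Gaussian vectors

r = b * v.sum(axis=1)                            # end-to-end vectors
print(np.mean(np.sum(r**2, axis=1)))             # ~ N * b**2 = 100.0
```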
Kuhn received a degree in chemical engineering at the Eidgenössische Technische Hochschule (ETH, Federal Institute of Technology), in Zürich, and later a doctorate (1923) in physical chemistry. He was appointed professor of physical chemistry at the University of Kiel (1936–39) and then returned to Switzerland as director of the Physico-Chemical Institute of the University of Basel (1939–63), where he also served as rector (1955–56).
In a 1951 lecture along with his student V.B. Hargitay, he was the first to hypothesize the countercurrent multiplier mechanism in the mammalian kidney, later to be discovered in many other similar biological systems.
See also
Excluded volume
Kuhn length |
https://en.wikipedia.org/wiki/H%20antigen | H antigen can refer to one of various types of antigens having diverse biological functions. H antigen is located on the 19th chromosome in humans, and has a variety of functions and definitions as follows:
Also known as substance H, H antigen is a precursor to each of the ABO blood group antigens, apparently present in all people except those with the Bombay Blood phenotype (see hh blood group)
Histocompatibility antigen, a major factor in graft rejection: even when the Major Histocompatibility Complex genotype is perfectly matched, minor histocompatibility antigens can cause slow rejection of a graft.
major H antigens "encode molecules that present foreign peptides to T cells"
minor H antigens "present polymorphic self peptides to T cells". Includes, e.g. the H-Y antigen
a bacterial flagellar antigen |
https://en.wikipedia.org/wiki/Receptacle%20%28botany%29 | In botany, the receptacle refers to vegetative tissues near the end of reproductive stems that are situated below or encase the reproductive organs.
Angiosperms
In angiosperms, the receptacle or torus (an older term is thalamus, as in Thalamiflorae) is the thickened part of a stem (pedicel) from which the flower organs grow. In some accessory fruits, for example the pome and strawberry, the receptacle gives rise to the edible part of the fruit. The fruit of Rubus species is a cluster of drupelets on top of a conical receptacle. When a raspberry is picked, the receptacle separates from the fruit, but in blackberries, it remains attached to the fruit.
In the daisy family (Compositae or Asteraceae), small individual flowers are arranged on a round or dome-like structure that is also called a receptacle.
Algae and bryophyta
In phycology, receptacles occur at the ends of branches of algae mainly in the brown algae or Heterokontophyta in the Order Fucales. They are specialised structures which contain the reproductive organs called conceptacles. Receptacles also function as a structure that captures food. |
https://en.wikipedia.org/wiki/Michael%20R.%20Fine | Michael R. Fine (born November 29, 1966) is a beta testing consultant, author, and inventor. He is the author of "Beta Testing for Better Software" (Wiley, 2002), and is a founder of and currently a senior test manager at Centercode, a beta testing software and services company. Fine is actively engaged in the promotion of beta testing as a concept, speaking on the topic, teaching courses, and authoring several articles on it.
Fine conducted beta tests on the first Palm handheld devices, as well as the original Bluetooth designs, and was significantly involved in the launch of xDSL and WiFi. He was also actively engaged in the testing of new modem standards, starting with V.32 up through V.92. Prior to helping found Centercode, Fine was the beta test manager for Megahertz, U.S. Robotics, and ultimately, 3Com Corporation. He was responsible for managing the beta testing of 3Com's networking and communications products for three of their global divisions.
In addition to technical writing for Iomega and several magazine articles, Fine is the author of three books:
Beta Testing for Better Software ()
Utah: The Complete Ski and Snowboard Guide ()
Canoeing and Kayaking Utah ()
Fine contributed to U.S. Patent 6215799 for an ISDN analog interface, and U.S. Patent 6275933 for a security design.
Fine's experience in quality assurance, beta testing, alpha testing, and delta testing has led him to teach courses on these subjects for LinkedIn Learning.
Fine graduated from Loyola Academy in 1984, then Weber State University in 1989, and currently serves as a member of its Alumni Association Board. Fine is a member of the Pi Kappa Alpha national fraternity. |
https://en.wikipedia.org/wiki/Parallel%20Extensions | Parallel Extensions was the development name for a managed concurrency library developed by a collaboration between Microsoft Research and the CLR team at Microsoft. The library was released in version 4.0 of the .NET Framework. It is composed of two parts: Parallel LINQ (PLINQ) and Task Parallel Library (TPL). It also consists of a set of coordination data structures (CDS) – sets of data structures used to synchronize and co-ordinate the execution of concurrent tasks.
Parallel LINQ
PLINQ, or Parallel LINQ, parallelizes the execution of queries on objects (LINQ to Objects) and XML data (LINQ to XML). PLINQ is intended for exposing data parallelism by use of queries. Any computation on objects that has been implemented as queries can be parallelized by PLINQ. However, the objects need to implement the IParallelEnumerable interface, which is defined by PLINQ itself. Internally it uses TPL for execution.
Task Parallel Library
The Task Parallel Library (TPL) is the task parallelism component of the Parallel Extensions to .NET. It exposes parallel constructs like parallel For and ForEach loops, using regular method calls and delegates, thus the constructs can be used from any CLI languages. The job of spawning and terminating threads, as well as scaling the number of threads according to the number of available processors, is done by the library itself, using a work stealing scheduler.
TPL also includes other constructs like Task and Future. A Task is an action that can be executed independent of the rest of the program. In that sense, it is semantically equivalent to a thread, except that it is a more light-weight object and comes without the overhead of creating an OS thread. Tasks are queued by a Task Manager object and are scheduled to run on multiple OS threads in a thread pool when their turn comes.
Future is a task that returns a result. The result is computed in a background thread encapsulated by the Future object, and the result is buffered until it is re |
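For readers who want a hands-on feel, here is a rough cross-language analogy using Python's standard concurrent.futures module (this is not the .NET TPL API; it merely mirrors the task/future pattern described above):

```python
# Submitting work returns a future whose result is buffered until retrieved,
# much like the TPL's Task/Future pair; the executor plays the Task Manager role.
from concurrent.futures import ThreadPoolExecutor

def expensive(n: int) -> int:
    return sum(i * i for i in range(n))      # stand-in for independent work

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(expensive, n) for n in (10_000, 20_000, 30_000)]
    results = [f.result() for f in futures]  # blocks until each task finishes
print(results)
```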
https://en.wikipedia.org/wiki/Dawson%E2%80%93G%C3%A4rtner%20theorem | In mathematics, the Dawson–Gärtner theorem is a result in large deviations theory. Heuristically speaking, the Dawson–Gärtner theorem allows one to transport a large deviation principle on a “smaller” topological space to a “larger” one.
Statement of the theorem
Let (Yj)j∈J be a projective system of Hausdorff topological spaces with maps pij : Yj → Yi. Let X be the projective limit (also known as the inverse limit) of the system (Yj, pij)i,j∈J, i.e.
X = lim← Yj = { y = (yj)j∈J ∈ ∏j∈J Yj : yi = pij(yj) for all i ≤ j }.
Let (με)ε>0 be a family of probability measures on X. Assume that, for each j ∈ J, the push-forward measures (pj∗με)ε>0 on Yj satisfy the large deviation principle with good rate function Ij : Yj → R ∪ {+∞}. Then the family (με)ε>0 satisfies the large deviation principle on X with good rate function I : X → R ∪ {+∞} given by
I(x) = supj∈J Ij(pj(x)), x ∈ X,
where pj : X → Yj is the canonical projection. |
https://en.wikipedia.org/wiki/Cognitive%20miser | In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.
The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences such as economics and political science.
Assumption
The metaphor of the cognitive miser assumes that the human mind is limited in time, knowledge, attention, and cognitive resources. Usually people do not think rationally or cautiously, but use cognitive shortcuts to make inferences and form judgments. These shortcuts include the use of schemas, scripts, stereotypes, and other simplified perceptual strategies instead of careful thinking. For example, people tend to draw correspondent inferences and are likely to believe that behaviors are correlated with, or representative of, stable characteristics.
Background
The naïve scientist and attribution theory
Before Fiske and Taylor's cognitive miser theory, the predominant model of social cognition was the naïve scientist. First proposed in 1958 by Fritz Heider in The Psychology of Interpersonal Relations, this theory holds that humans think and act with dispassionate rationality whilst engaging in detailed and nuanced thought processes for both complex and routine actions. In this way, humans were thought to think like scientists, albeit naïve ones, measuring and analyzing the world around them. Applying this framework to human thought processes, naïve scientists seek the consistency and |
https://en.wikipedia.org/wiki/Varadhan%27s%20lemma | In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
Statement of the lemma
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
limM→∞ lim supε→0 ε log E[ exp(ϕ(Zε)/ε) 1(ϕ(Zε) ≥ M) ] = −∞,
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
lim supε→0 ε log E[ exp(γ ϕ(Zε)/ε) ] < +∞.
Then
limε→0 ε log E[ exp(ϕ(Zε)/ε) ] = supx∈X ( ϕ(x) − I(x) ).
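A quick numerical sanity check (an illustration, not part of the lemma): for Zε ~ N(0, ε), the LDP holds with I(x) = x2/2, and with ϕ(x) = x the right-hand side is supx (x − x2/2) = 1/2, which this Gaussian example reproduces exactly for every ε:

```python
# Verify eps * log E[exp(phi(Z_eps)/eps)] against sup_x (phi(x) - I(x)) = 1/2
# for Z_eps ~ N(0, eps) and phi(x) = x; exponents are combined to avoid overflow.
import numpy as np
from scipy.integrate import quad

for eps in (0.5, 0.1, 0.02):
    integrand = lambda x, e=eps: np.exp((x - 0.5 * x**2) / e) / np.sqrt(2 * np.pi * e)
    val, _ = quad(integrand, -10.0, 10.0, points=[1.0])  # integrand peaks at x = 1
    print(f"eps={eps}: eps * log E = {eps * np.log(val):.4f}  (sup: 0.5)")
```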
See also
Laplace principle (large deviations theory) |
https://en.wikipedia.org/wiki/Syntax%20diagram | Syntax diagrams (or railroad diagrams) are a way to represent a context-free grammar. They represent a graphical alternative to Backus–Naur form, EBNF, Augmented Backus–Naur form, and other text-based grammars as metalanguages. Early books using syntax diagrams include the "Pascal User Manual" written by Niklaus Wirth (diagrams start at page 47) and the Burroughs CANDE Manual. In the compilation field, textual representations like BNF or its variants are usually preferred. BNF is text-based, and used by compiler writers and parser generators. Railroad diagrams are visual, and may be more readily understood by laypeople, sometimes incorporated into graphic design. The canonical source defining the JSON data interchange format provides yet another example of a popular modern usage of these diagrams.
Principle of syntax diagrams
The representation of a grammar is a set of syntax diagrams. Each diagram defines a "nonterminal" stage in a process. There is a main diagram which defines the language in the following way: to belong to the language, a word must describe a path in the main diagram.
Each diagram has an entry point and an end point. The diagram describes possible paths between these two points by going through other nonterminals and terminals. Historically, terminals have been represented by round boxes and nonterminals by rectangular boxes but there is no official standard.
Example
We use arithmetic expressions as an example, in various grammar formats.
BNF:
<expression> ::= <term> | <term> "+" <expression>
<term> ::= <factor> | <factor> "*" <term>
<factor> ::= <constant> | <variable> | "(" <expression> ")"
<variable> ::= "x" | "y" | "z"
<constant> ::= <digit> | <digit> <constant>
<digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
EBNF:
expression = term , [ "+" , expression ];
term = factor , [ "*" , term ];
factor = constant | variable | "(" , expression , ")";
variable = "x" | "y" | "z";
constant |
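The path-following view is easy to make concrete: below is a short recursive-descent recognizer (an illustrative sketch, not from the article) in which each nonterminal of the grammar above becomes a function, and a string belongs to the language exactly when parsing consumes it completely:

```python
# Recursive-descent recognizer for the expression grammar given above.
def parse(s: str) -> bool:
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else ""

    def expression():               # <expression> ::= <term> | <term> "+" <expression>
        nonlocal pos
        if not term():
            return False
        if peek() == "+":
            pos += 1
            return expression()
        return True

    def term():                     # <term> ::= <factor> | <factor> "*" <term>
        nonlocal pos
        if not factor():
            return False
        if peek() == "*":
            pos += 1
            return term()
        return True

    def factor():                   # <factor> ::= <constant> | <variable> | "(" <expression> ")"
        nonlocal pos
        if peek() in "xyz":         # <variable>
            pos += 1
            return True
        if peek().isdigit():        # <constant> ::= <digit> | <digit> <constant>
            while peek().isdigit():
                pos += 1
            return True
        if peek() == "(":
            pos += 1
            if expression() and peek() == ")":
                pos += 1
                return True
        return False

    return expression() and pos == len(s)

print(parse("x+3*(y+10)"))   # True: the string traces a path through the diagrams
print(parse("x+*y"))         # False: no path matches
```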
https://en.wikipedia.org/wiki/Heterotrophic%20nutrition | Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, they cannot make their own food. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds that are capable of being absorbed (digestion). The soluble products of digestion are then broken down within the organism to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms exhibit only four types of nutrition.
Footnotes |
https://en.wikipedia.org/wiki/Convention%20over%20configuration | Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make, without necessarily losing flexibility or violating the don't repeat yourself (DRY) principle.
The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but is related to earlier ideas like the concept of "sensible defaults" and the principle of least astonishment in user interface design.
The phrase essentially means a developer only needs to specify unconventional aspects of the application. For example, if there is a class Sales in the model, the corresponding table in the database is called "sales" by default. It is only if one deviates from this convention, such as the table "product sales", that one needs to write code regarding these names.
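As a toy illustration of that default-plus-override pattern (hypothetical helper and class names, not any particular framework's API):

```python
# Convention over configuration in miniature: the table name is derived from the
# class name by default; configuration is needed only to override the convention.
def table_name(model_cls, override=None):
    """Convention: lowercase the class name and pluralize naively with 's'."""
    return override if override is not None else model_cls.__name__.lower() + "s"

class Sale: ...
class ProductSale: ...

print(table_name(Sale))                                   # "sales" (by convention)
print(table_name(ProductSale, override="product sales"))  # explicit configuration
```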
When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
Ruby on Rails' use of the phrase is particularly focused on its default project file and directory structure, which prevent developers from having to write XML configuration files to specify which modules the framework should load, which was common in many earlier frameworks.
Disadvantages
Disadvantages of the convention over configuration approach can occur due to conflicts with other software design principles, like the Zen of Python's "explicit is better than implicit." A software framework based on convention over configuration often involves a domain-specific language with a limited set of constructs or an inversion of control in which the developer can only affect behavior using a limited set of hooks, both of which can make implementing behaviors not easily expressed by the provided convent |
https://en.wikipedia.org/wiki/Curculin | Curculin or neoculin is a sweet protein that was discovered and isolated in 1990 from the fruit of Curculigo latifolia (Hypoxidaceae), a plant from Malaysia. Like miraculin, curculin exhibits taste-modifying activity; however, unlike miraculin, it also exhibits a sweet taste by itself. After consumption of curculin, water and sour solutions taste sweet. The plant is referred to locally as 'Lumbah' or 'Lemba'.
Protein structure
The active form of curculin is a heterodimer consisting of two monomeric units connected through two disulfide bridges. The mature monomers each consist of a sequence of 114 amino acids, weighing 12.5 kDa (curculin 1) and 12.7 kDa (curculin 2), respectively. While each of the two isoforms is capable of forming a homodimer, these do not possess the sweet taste nor the taste-modifying activity of the heterodimeric form. To avoid confusion, the heterodimeric form is sometimes referred to as "neoculin".
1, 1-50: DNVLLSGQTL HADHSLQAGA YTLTIQNKCN LVKYQNGRQI WASNTDRRGS
1, 51-100: GCRLTLLSDG NLVIYDHNNN DVWGSACWGD NGKYALVLQK DGRFVIYGPV
1, 101-114: LWSLGPNGCR RVNG
2, 1-50: DSVLLSGQTL YAGHSLTSGS YTLTIQNNCN LVKYQHGRQI WASDTDGQGS
2, 51-100: QCRLTLRSDG NLIIYDDNNM VVWGSDCWGN NGTYALVLQQ DGLFVIYGPV
2, 101-113: LWPLGLNGCR SLN
Amino acid sequence of sweet proteins curculin-1 and curculin-2 adapted from Swiss-Prot biological database of protein sequences. Intra-chain disulfide bonds in bold, inter-chain disulfide bonds underlined.
Sweetness properties
Curculin is considered to be a high-intensity sweetener, with a reported relative sweetness of 430-2070 times sweeter than sucrose on a weight basis.
A sweet taste, equivalent to a 6.8% or 12% sucrose solution, was observed after holding curculin in the mouth in combination with clear water or acidified water (citric acid), respectively. The sweet taste lasts for 5 minutes with water and 10 minutes with an acidic solution.
The taste-modifying activity of curculin is reduced in the presence of ions with two po |
https://en.wikipedia.org/wiki/Crataegus%20pinnatifida | Crataegus pinnatifida, also known as mountain hawthorn, Chinese haw, Chinese hawthorn or Chinese hawberry, refers to a small to medium-sized tree, as well as the fruit of the tree. The fruit is bright red, in diameter.
Use
Culinary use
In northern Chinese cuisine, ripe C. pinnatifida fruits are used in the desserts tanghulu and shanzhagao. It is also used to make the traditional candies haw flakes and haw rolls, as well as candied fruit slices, jam, jelly, and wine. It is also traditionally used as a finishing ingredient in Cantonese sweet and sour sauce, although it has since been partially supplanted by ketchup.
Traditional medicine
In traditional Chinese medicine, the dried fruits of C. pinnatifida have been used as a digestive aid.
See also
List of culinary fruits
Phytotherapy |
https://en.wikipedia.org/wiki/Bioamplifier | A bioamplifier is an electrophysiological device, a variation of the instrumentation amplifier, used to gather and increase the signal integrity of physiologic electrical activity for output to various sources. It may be an independent unit, or integrated into the electrodes.
History
Efforts to amplify biosignals started with the development of electrocardiography. In 1887, Augustus Waller, a British physiologist, successfully measured the electrocardiogram of his dog using two buckets of saline, in which he submerged each of the front and the hind paws. A few months later, Waller successfully recorded the first human electrocardiogram using the capillary electrometer. However, at the time of invention, Waller did not envision that electrocardiography would be used extensively in healthcare. The electrocardiograph was impractical to use until Willem Einthoven, a Dutch physiologist, innovated the use of the string galvanometer for cardiac signal amplification. Significant improvements in amplifier technologies led to the usage of smaller electrodes that were more easily attached to body parts. In the 1920s, a way to electrically amplify the cardiac signals using vacuum tubes was introduced, which quickly replaced the string galvanometer that amplified the signal mechanically. Vacuum tubes have a higher input impedance, so the amplification was more robust. Also, their relatively small size compared to the string galvanometer contributed to the widespread use of vacuum tubes. Furthermore, the large metal buckets were no longer needed, as much smaller metal-plate electrodes were introduced. By the 1930s, electrocardiograph devices could be carried to the patient's home for the purpose of bedside monitoring. With the emergence of electronic amplification, it was quickly discovered that many features of the electrocardiogram were revealed by varying the electrode placement.
Variations
Electrocardiography
Electrocardiography (ECG or EKG) records the electrical activity o |
https://en.wikipedia.org/wiki/Maximum%20Absorbency%20Garment | A Maximum Absorbency Garment (MAG) is an adult-sized diaper with extra absorption material that NASA astronauts wear during liftoff, landing, and extra-vehicular activity (EVA) to absorb urine and feces. It is worn by both male and female astronauts. Astronauts can urinate into the MAG, and usually wait to defecate when they return to the spacecraft. However, the MAG is rarely used for this purpose, since the astronauts use the facilities of the station before EVA and also time the consumption of the in-suit water. Nonetheless, the garment provides peace of mind for the astronauts.
The MAG was developed because astronauts cannot remove their space suits during long operations, such as spacewalks that usually last for several hours. Generally, three MAGs were given during space shuttle missions, one for launch, reentry, and an extra for spacewalking or for a second reentry attempt. Astronauts drink about of salty water before reentry since less fluid is retained in zero G. Without the extra fluids, the astronauts might faint in Earth's gravity, further highlighting the potential necessity of the MAGs. It is worn underneath the Liquid Cooling and Ventilation Garment (LCVG).
History
During the Apollo era, astronauts used urine and fecal containment systems worn under spandex trunks. The fecal containment device (FCD) was a bag attached directly to the body with an adhesive seal, and the urine collection device (UCD) had a condom-like sheath attached to a tube and pouch. Women joined the astronaut corps in 1978 and required devices with similar functions. However, the early attempts to design feminized versions of the male devices were unsuccessful. In the 1980s, NASA designed space diapers called Disposable Absorption Containment Trunks (DACTs). These addressed the women's needs, as they were comfortable, manageable, and resistant to leaks. These diapers were first used in 1983, during the first Challenger mission.
Disposable underwear, first introduced |
https://en.wikipedia.org/wiki/Smith%E2%80%93Minkowski%E2%80%93Siegel%20mass%20formula | In mathematics, the Smith–Minkowski–Siegel mass formula (or Minkowski–Siegel mass formula) is a formula for the sum of the weights of the lattices (quadratic forms) in a genus, weighted by the reciprocals of the orders of their automorphism groups. The mass formula is often given for integral quadratic forms, though it can be generalized to quadratic forms over any algebraic number field.
In 0 and 1 dimensions the mass formula is trivial, in 2 dimensions it is essentially equivalent to Dirichlet's class number formulas for imaginary quadratic fields, and in 3 dimensions some partial results were given by Gotthold Eisenstein.
The mass formula in higher dimensions was first given by , though his results were forgotten for many years.
It was rediscovered by , and an error in Minkowski's paper was found and corrected by .
Many published versions of the mass formula have errors; in particular the 2-adic densities are difficult to get right, and it is sometimes forgotten that the trivial cases of dimensions 0 and 1 are different from the cases of dimension at least 2.
give an expository account and precise statement of the mass formula for integral quadratic forms, which is reliable because they check it on a large number of explicit cases.
For recent proofs of the mass formula see and .
The Smith–Minkowski–Siegel mass formula is essentially the constant term of the Weil–Siegel formula.
Statement of the mass formula
If f is an n-dimensional positive definite integral quadratic form (or lattice) then the mass
of its genus is defined to be
m(f) = ΣΛ 1/|Aut(Λ)|
where the sum is over all integrally inequivalent forms Λ in the same genus as f, and Aut(Λ) is the automorphism group of Λ.
The form of the mass formula given by states that for n ≥ 2 the mass is given by
where mp(f) is the p-mass of f, given by
for sufficiently large r, where ps is the highest power of p dividing the determinant of f. The number N(pr) is the number of n by n matrices
X with coefficients that are integers |
https://en.wikipedia.org/wiki/Xanthosine%20triphosphate | Xanthosine 5'-triphosphate (XTP) is a nucleotide that is not produced by - and has no known function in - living cells. Uses of XTP are, in general, limited to experimental procedures on enzymes that bind other nucleotides. Deamination of purine bases can result in accumulation of such nucleotides as ITP, dITP, XTP, and dXTP. |
https://en.wikipedia.org/wiki/List%20of%20marine%20ecoregions | The following is a list of marine ecoregions, as defined by the WWF and The Nature Conservancy.
The WWF/Nature Conservancy scheme groups the individual ecoregions into 12 marine realms, which represent the broad latitudinal divisions of polar, temperate, and tropical seas, with subdivisions based on ocean basins. The marine realms are subdivided into 62 marine provinces, which include one or more of the 232 marine ecoregions.
The WWF/Nature Conservancy scheme currently encompasses only coastal and continental shelf areas.
Arctic realm
(no provinces identified)
North Greenland
North and East Iceland
East Greenland Shelf
West Greenland Shelf
Northern Grand Banks-Southern Labrador
Northern Labrador
Baffin Bay-Davis Strait
Hudson Complex
Lancaster Sound
High Arctic Archipelago
Beaufort-Amundsen-Viscount Melville-Queen Maud
Beaufort Sea-continental coast and shelf
Chukchi Sea
Eastern Bering Sea
East Siberian Sea
Laptev Sea
Kara Sea
North and East Barents Sea
White Sea
Temperate Northern Atlantic
Northern European Seas
South and West Iceland
Faroe Plateau
Southern Norway
Northern Norway and Finnmark
Baltic Sea
North Sea
Celtic Seas
Lusitanian
South European Atlantic Shelf
Saharan Upwelling
Azores Canaries Madeira
Mediterranean Sea
Adriatic Sea
Aegean Sea
Levantine Sea
Tunisian Plateau/Gulf of Sidra
Ionian Sea
Western Mediterranean
Alboran Sea
Black Sea
Black Sea
Cold Temperate Northwest Atlantic
Gulf of St. Lawrence-Eastern Scotian Shelf
Southern Grand Banks-South Newfoundland
Scotian Shelf
Gulf of Maine-Bay of Fundy
Virginian
Warm Temperate Northwest Atlantic
Carolinian
Northern Gulf of Mexico
Temperate Northern Pacific
Cold Temperate Northwest Pacific
Sea of Okhotsk
Kamchatka Shelf and Coast
Oyashio Current
Northern Honshu
Sea of Japan
Yellow Sea
Warm Temperate Northwest Pacific
Central Kuroshio Current
East China Sea
Cold Temperate Northeast Pacific
Aleutian Islands
Gulf of Alaska
North American Pacific Fjordland
Puget Trough/Georgia Basin
Oregon, Washin |