source | text |
|---|---|
https://en.wikipedia.org/wiki/Trust%20anchor | In cryptographic systems with hierarchical structure, a trust anchor is an authoritative entity for which trust is assumed and not derived.
In the X.509 architecture, a root certificate would be the trust anchor from which the whole chain of trust is derived. The trust anchor must be in the possession of the trusting party beforehand to make any further certificate path validation possible.
Most operating systems provide a built-in list of self-signed root certificates to act as trust anchors for applications. The Firefox web browser also provides its own list of trust anchors. The end-user of an operating system or web browser is implicitly trusting in the correct operation of that software, and the software manufacturer in turn is delegating trust for certain cryptographic operations to the certificate authorities responsible for the root certificates.
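To make the chain-of-trust idea concrete, here is a minimal sketch using Python's standard ssl module. The host name is only an example, and the commented-out call shows how one might pin a custom trust anchor instead of relying on the platform store.

```python
import socket
import ssl

# Use the operating system's built-in root certificates as trust anchors.
context = ssl.create_default_context()
# To trust only a specific root instead, one could pin it explicitly:
# context.load_verify_locations(cafile="my-root-ca.pem")   # hypothetical file

with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        # The handshake succeeds only if a certificate path can be built from
        # the server's certificate up to one of the configured trust anchors.
        print(tls.getpeercert()["issuer"])
```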
See also
Web of trust |
https://en.wikipedia.org/wiki/Matter%20wave%20clock | A matter wave clock is a type of clock whose principle of operation makes use of the apparent wavelike properties of matter.
Matter waves were first proposed by Louis de Broglie and are sometimes called de Broglie waves. They form a key aspect of wave–particle duality and experiments have since supported the idea. The wave associated with a particle of a given mass, such as an atom, has a defined frequency, and the duration of one cycle, from peak to peak, is sometimes called its Compton periodicity. Such a matter wave has the characteristics of a simple clock, in that it marks out fixed and equal intervals of time. The twin paradox arising from Albert Einstein's theory of relativity means that a moving particle will have a slightly different period from a stationary particle. Comparing two such particles allows the construction of a practical "Compton clock".
Matter waves as clocks
De Broglie proposed that the frequency f of a matter wave equals E/h, where E is the total energy of the particle and h is Planck's constant. For a particle at rest, the relativistic equation E = mc² allows the derivation of the Compton frequency f for a stationary massive particle, equal to mc²/h.
De Broglie also proposed that the wavelength λ for a moving particle was equal to h/p where p is the particle's momentum.
The period (one cycle of the wave) is equal to 1/f.
This precise Compton periodicity of a matter wave is said to be the necessary condition for a clock, with the implication that any such matter particle may be regarded as a fundamental clock. This proposal has been referred to as "A rock is a clock."
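As a rough illustration of the numbers involved, the sketch below evaluates f = mc²/h for a caesium-133 atom; the atomic mass value is approximate and used only for illustration.

```python
# Compton frequency f = m c^2 / h and the corresponding Compton periodicity.
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
m_cs = 2.20695e-25      # approximate mass of a caesium-133 atom, kg

f = m_cs * c**2 / h     # roughly 3.0e25 Hz
print(f"frequency = {f:.2e} Hz, period = {1/f:.2e} s")
```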
Applications
In his paper, "Quantum mechanics, matter waves and moving clocks", Müller has suggested that "The description of matter waves as matter-wave clocks ... has recently been applied to tests of general relativity, matter-wave experiments, the foundations of quantum mechanics, quantum space-time decoherence, the matter wave clock/mass standard, and |
https://en.wikipedia.org/wiki/Blunt-jawed%20elephantnose | The blunt-jawed elephantnose or wormjawed mormyrid (Campylomormyrus tamandua) is a species of elephantfish. It is found in rivers in West and Middle Africa. It is brown or black with a long elephant-like snout with the mouth located near the tip. Its diet consists of worms, fish, and insects.
See also
List of freshwater aquarium fish species |
https://en.wikipedia.org/wiki/Harvey%20balls | Harvey balls are round ideograms used for visual communication of qualitative information. They are commonly used in comparison tables to indicate the degree to which a particular item meets a particular criterion.
For example, in a comparison of products, information such as price or weight can be conveyed numerically, and binary information such as the existence or lack of a feature can be conveyed with a check mark; however, information such as "quality" or "safety" or "taste" is often difficult to summarize in a manner allowing easy comparison – thus, Harvey balls are used.
In addition to their use in qualitative comparison, Harvey balls are also commonly used in project management for project tracking; in lean manufacturing for value-stream mapping and continuous improvement tracking; and in business process modeling software for visualisation.
Harvey L. Poppel is generally credited with inventing Harvey balls in the 1970s while working at Booz Allen Hamilton as head of their worldwide IT consulting practice.
Implementations and support
Scalable Vector Graphics
The use of Scalable Vector Graphics to render Harvey balls allows them to be easily used in web browsers and applications that support the XML vector graphics format. The benefit is that no special font is required and the Harvey balls can be displayed using an open format. The main drawback is that SVG is not universally supported.
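As an illustration of the SVG approach, the sketch below emits a simple Harvey ball as an inline SVG string. The circle-plus-pie-wedge geometry, sizes, and function name are illustrative choices rather than a standard.

```python
import math

def harvey_ball_svg(quarters: int, size: int = 20) -> str:
    """Return an SVG string for a Harvey ball filled to `quarters` quarters (0-4)."""
    r = size / 2
    cx = cy = r
    if quarters <= 0:
        wedge = ""
    elif quarters >= 4:
        wedge = f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="black"/>'
    else:
        angle = quarters * math.pi / 2                  # 90 degrees per quarter
        x = cx + r * math.sin(angle)                    # wedge end point,
        y = cy - r * math.cos(angle)                    # measured clockwise from the top
        large_arc = 1 if quarters > 2 else 0
        wedge = (f'<path d="M{cx},{cy} L{cx},{cy - r} '
                 f'A{r},{r} 0 {large_arc} 1 {x},{y} Z" fill="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="white" stroke="black"/>'
            f'{wedge}</svg>')

print(harvey_ball_svg(3))   # a three-quarters-filled ball
```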
Microsoft Excel
Harvey balls have been available in Microsoft Excel since Excel 2007.
Custom fonts
Custom fonts have been developed such that the numbers 0–9 map to a particular Harvey ball. Incorporating the Harvey ball into a document then becomes a matter of selecting the number which corresponds to the desired Harvey ball and selecting the custom font. The Harvey balls can then be manipulated like any other font (e.g., color, size, underline) and may be easier to use than other implementations. The main drawback of this approach is that the font either needs to be e |
https://en.wikipedia.org/wiki/Viral%20neuraminidase | Viral neuraminidase is a type of neuraminidase found on the surface of influenza viruses that enables the virus to be released from the host cell. Neuraminidases are enzymes that cleave sialic acid (also called neuraminic acid) groups from glycoproteins. Neuraminidase inhibitors are antiviral agents that inhibit influenza viral neuraminidase activity and are of major importance in the control of influenza.
Viral neuraminidases are members of glycoside hydrolase family 34 (CAZy GH34), which comprises enzymes with only one known activity: sialidase, or neuraminidase. Neuraminidases cleave the terminal sialic acid residues from carbohydrate chains in glycoproteins. Sialic acid is a negatively charged sugar associated with the protein and lipid portions of lipoproteins.
To infect a host cell, the influenza virus attaches to the exterior cell surface using hemagglutinin, a molecule found on the surface of the virus that binds to sialic acid groups. Sialic acids are found on various glycoproteins at the host cell surface. The virus then moves from sialic acid group to sialic acid group until it finds the proper cell surface receptor (whose identity remains unknown). Neuraminidase enables this movement by cleaving the sialic acid groups that hemagglutinin was attached to. After the virus has entered the cell and has replicated, new viral particles bud from the host cell membrane. The hemagglutinin on new viral particles remains attached to sialic acid groups of glycoproteins on the external cell surface and the surface of other viral particles; neuraminidase cleaves these groups and thereby allows the release of viral particles and prevents self-aggregation. Neuraminidase also facilitates the movement of virus particles through mucus rich in sialic acid.
A single hemagglutinin-neuraminidase protein can combine neuraminidase and hemagglutinin functions, such as in mumps virus and human parainfluenza virus.
Function
The enzyme helps viruses to be released a |
https://en.wikipedia.org/wiki/Passive%20electronically%20scanned%20array | A passive electronically scanned array (PESA), also known as passive phased array, is an antenna in which the beam of radio waves can be electronically steered to point in different directions (that is, a phased array antenna), in which all the antenna elements are connected to a single transmitter (such as a magnetron, a klystron or a travelling wave tube) and/or receiver.
The largest use of phased arrays is in radars. Most phased array radars in the world are PESA. The civilian microwave landing system uses PESA transmit-only arrays.
A PESA contrasts with an active electronically scanned array (AESA) antenna, which has a separate transmitter and/or receiver unit for each antenna element, all controlled by a computer; AESA is a more advanced, sophisticated, and versatile second-generation version of the original PESA phased array technology. Hybrids of the two can also be found, consisting of subarrays that individually resemble PESAs, where each subarray has its own RF front end. Using a hybrid approach, the benefits of AESAs (e.g., multiple independent beams) can be realized at a lower cost compared to true AESAs.
Pulsed radar systems work by connecting an antenna to a powerful radio transmitter to emit a short pulse of signal. The transmitter is then disconnected and the antenna is connected to a sensitive receiver which amplifies any echoes from target objects. By measuring the time it takes for the signal to return, the radar receiver can determine the distance to the object. The receiver then sends the resulting output to a display of some sort. The transmitter elements were typically klystron tubes or magnetrons, which are suitable for amplifying or generating a narrow range of frequencies to high power levels. To scan a portion of the sky, a non-PESA radar antenna must be physically moved to point in different directions. In contrast, the beam of a PESA radar can rapidly be changed to point in a different direction, simply by electrically adjusting the phase |
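To make the phase-steering idea above concrete, here is a small sketch computing the per-element phase shifts a PESA's phase shifters would apply for a uniform linear array; the frequency, element spacing, and steering angle are illustrative values, not taken from any particular radar.

```python
import numpy as np

wavelength = 0.03                  # e.g. a 10 GHz radar -> 3 cm wavelength
spacing = wavelength / 2           # half-wavelength element spacing
steer_deg = 20.0                   # desired beam direction off boresight
n = np.arange(16)                  # 16 antenna elements

# Progressive phase shift so the elements' contributions add coherently
# in the steer_deg direction.
phase = -2 * np.pi * spacing * n * np.sin(np.radians(steer_deg)) / wavelength
weights = np.exp(1j * phase)       # complex weights applied by the phase shifters

print(np.angle(weights, deg=True).round(1))
```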
https://en.wikipedia.org/wiki/Adobe%20Atmosphere | Adobe Atmosphere (informally abbreviated Atmo) was a software platform for interacting with 3D computer graphics. 3D models created with the commercial program could be explored socially using a browser plugin available free of charge. Atmosphere was originally developed by Attitude Software as 3D Anarchy and was later bought by Adobe Systems. The product spent the majority of its lifetime in beta testing. Adobe released the last version of Atmosphere, version 1.0 build 216, in February 2004, then discontinued the software in December that year.
Features
Atmosphere focused on explorable "worlds" (later officially called "environments"), which were linked together by "portals", analogous to the World Wide Web's hyperlinks. These portals were represented as spinning squares of red, green, and blue that revolved around each other and floated above the ground. Portals were indicative of the Atmosphere team's desire to mirror the functionality of Web pages. Although the world itself was described in the .aer (or .atmo) file, images and sounds were kept separately, usually in the GIF, WAV or MP3 format. Objects in worlds were scriptable using a specialized dialect of JavaScript, allowing a more immersive environment, and worlds could be generated dynamically using PHP. Using JavaScript, a world author could link an object to a Web page, so that a user could, for example, launch a Web page by clicking on a billboard advertisement (Ctrl+Shift+Click in earlier versions). By version 1.0, Atmosphere also boasted support for using Macromedia Flash animations and Windows Media Video as textures.
Atmosphere-based worlds consisted mainly of parametric primitives, such as floors, walls, and cones. These primitives could be painted a solid color, given an image-based texture, or made "subtractive". Invisible, "subtractive" primitives could be used to cut "holes" in other primitives, to build more complex shapes. Many worlds also contained animated polygon meshes made possible by |
https://en.wikipedia.org/wiki/Incompatible%20Timesharing%20System | Incompatible Timesharing System (ITS) is a time-sharing operating system developed principally by the MIT Artificial Intelligence Laboratory, with help from Project MAC. The name is the jocular complement of the MIT Compatible Time-Sharing System (CTSS).
ITS, and the software developed on it, were technically and culturally influential far beyond their core user community. Remote "guest" or "tourist" access was easily available via the early ARPAnet, allowing many interested parties to informally try out features of the operating system and application programs. The wide-open ITS philosophy and collaborative online community were a major influence on the hacker culture, as described in Steven Levy's book Hackers, and were the direct forerunners of the free and open-source software, open-design, and Wiki movements.
History
ITS development was initiated in the late 1960s by those (the majority of the MIT AI Lab staff at that time) who disagreed with the direction taken by Project MAC's Multics project (which had started in the mid-1960s), particularly such decisions as the inclusion of powerful system security. The name was chosen by Tom Knight as a joke on the name of the earliest MIT time-sharing operating system, the Compatible Time-Sharing System, which dated from the early 1960s.
By simplifying their system compared to Multics, ITS's authors were able to quickly produce a functional operating system for their lab. ITS was written in assembly language, originally for the Digital Equipment Corporation PDP-6 computer, but the majority of ITS development and use was on the later, largely compatible, PDP-10.
Although not used as intensively after about 1986, ITS continued to operate on original hardware at MIT until 1990, and then until 1995 at Stacken Computer Club in Sweden. Today, some ITS implementations continue to be remotely accessible, via emulation of PDP-10 hardware running on modern, low-cost computers supported by interested hackers.
Significant techn |
https://en.wikipedia.org/wiki/Operand%20isolation | In electronic low power digital synchronous circuit design, operand isolation is a technique for minimizing the energy overhead associated with redundant operations by selectively blocking the propagation of switching activity through the circuit.
This technique isolates sections of the circuit (operation) from "seeing" changes on their inputs (operands) unless they are expected to respond to them.
This is usually done using latches at the inputs of the circuit. The latches become transparent only when the result of the operation is going to be used. One can also use multiplexers or simple AND gates instead of latches.
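A behavioural sketch of the idea (not a hardware description): the operands are held by a latch-like register whenever the result will not be used, so the downstream inputs do not toggle and no switching activity propagates. The class and signal names here are invented for illustration.

```python
class IsolatedMultiplier:
    """Behavioural model: operands pass through only when the result is needed."""

    def __init__(self):
        self._a = 0   # latched operand A
        self._b = 0   # latched operand B

    def cycle(self, a, b, result_needed):
        if result_needed:
            # Latch is transparent: new operands reach the multiplier.
            self._a, self._b = a, b
            return self._a * self._b
        # Latch holds its old values: the multiplier's inputs do not toggle,
        # so the (modelled) combinational logic sees no switching activity.
        return None
```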
Overhead
There is some area overhead associated with this technique since the circuit designer needs to add extra circuitry, i.e. latches, at the inputs. Also, if the latches are being added in a pipeline stage, they might change the critical path, and hence increase the propagation delay and cycle time. In cases where the overhead is not acceptable, one can think of clock gating as an alternative method of low power design.
See also
Glitch removal
Clock gating
Distributive law - a similar idea in mathematics
Reduction (mathematics) - a similar idea in mathematics
Reducing a fraction - a similar idea in mathematics
Reduced Karnaugh map (RKM) - a similar technique in logic optimization
Infrequent variables - a similar technique in logic optimization |
https://en.wikipedia.org/wiki/Sternocostal%20joints | The sternocostal joints, also known as sternochondral joints or costosternal articulations, are synovial plane joints of the costal cartilages of the true ribs with the sternum. The only exception is the first rib, which has a synchondrosis joint since the cartilage is directly united with the sternum. The sternocostal joints are important for thoracic wall mobility.
The ligaments connecting them are:
Articular capsules
Interarticular sternocostal ligament
Radiate sternocostal ligaments
Costoxiphoid ligaments
Clinical significance
Ankylosis, joint stiffness caused by ossification, may occur at the sternocostal joints.
See also
Costochondritis |
https://en.wikipedia.org/wiki/Einstein%E2%80%93de%20Sitter%20universe | The Einstein–de Sitter universe is a model of the universe proposed by Albert Einstein and Willem de Sitter in 1932. On first learning of Edwin Hubble's discovery of a linear relation between the redshift of the galaxies and their distance, Einstein set the cosmological constant to zero in the Friedmann equations, resulting in a model of the expanding universe known as the Friedmann–Einstein universe. In 1932, Einstein and De Sitter proposed an even simpler cosmic model by assuming a vanishing spatial curvature as well as a vanishing cosmological constant. In modern parlance, the Einstein–de Sitter universe can be described as a cosmological model for a flat matter-only Friedmann–Lemaître–Robertson–Walker metric (FLRW) universe.
In the model, Einstein and de Sitter derived a simple relation between the average density of matter in the universe and its expansion according to H0² = κρ/3, where H0 is the Hubble constant, ρ is the average density of matter and κ is the Einstein gravitational constant. The size of the Einstein–de Sitter universe evolves with time as a(t) ∝ t^(2/3), making its current age 2/3 times the Hubble time. The Einstein–de Sitter universe became a standard model of the universe for many years because of its simplicity and because of a lack of empirical evidence for either spatial curvature or a cosmological constant. It also represented an important theoretical case of a universe of critical matter density poised just at the limit of eventually contracting. However, Einstein's later reviews of cosmology make it clear that he saw the model as only one of several possibilities for the expanding universe.
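For reference, a sketch of the standard result behind these statements for a flat, matter-only FLRW model:

```latex
a(t) \propto t^{2/3}, \qquad
H(t) \equiv \frac{\dot a}{a} = \frac{2}{3t}
\;\;\Rightarrow\;\;
t_0 = \frac{2}{3H_0},
```

so the present age is two-thirds of the Hubble time 1/H0, as stated above.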
The Einstein–de Sitter universe was particularly popular in the 1980s, after the theory of cosmic inflation predicted that the curvature of the universe should be very close to zero. This case with zero cosmological constant implies the Einstein–de Sitter model, and the theory of cold dark matter was developed, initially with a cosmic matter budget around 95% |
https://en.wikipedia.org/wiki/Wetting%20transition | A wetting transition (Cassie–Wenzel transition) may occur during the process of wetting of a solid (or liquid) surface with a liquid. The transition corresponds to a certain change in contact angle, the macroscopic parameter characterizing wetting. Various contact angles can co-exist on the same solid substrate. Wetting transitions may occur in a different way depending on whether the surface is flat or rough.
Flat surfaces
When a liquid drop is put onto a flat surface, two situations may result. If the contact angle is zero, the situation is referred to as complete wetting. If the contact angle is between 0 and 180°, the situation is called partial wetting. A wetting transition is a surface phase transition from partial wetting to complete wetting.
Rough surfaces
The situation on rough surfaces is much more complicated. The main characteristic of the wetting properties of rough surfaces is the so-called apparent contact angle (APCA). It is well known that the APCA usually measured are different from those predicted by the Young equation. Two main hypotheses were proposed in order to explain this discrepancy, namely the Wenzel and Cassie wetting models. According to the traditional Cassie model, air can remain trapped below the drop, forming "air pockets". Thus, the hydrophobicity of the surface is strengthened because the drop sits partially on air. On the other hand, according to the Wenzel model the roughness increases the area of a solid surface, which also geometrically modifies the wetting properties of this surface. Transition from Cassie to Wenzel regime is also called wetting transition. Under certain external stimuli, such as pressure or vibration, the Cassie air trapping wetting state could be converted into the Wenzel state. Apart from external stimuli, intrinsic contact angle of the liquid (below or above 90 degree), liquid volatility, structure of cavities (reentrant or non-reentrant, connected or unconnected) are known to be important factors determ |
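For reference, the apparent contact angles in the two models are usually written as below, where θ_Y is the Young (intrinsic) contact angle, r is the Wenzel roughness ratio (true surface area over projected area), and f_s is the fraction of the drop's base in contact with solid in the Cassie state:

```latex
\cos\theta^{*}_{\mathrm{Wenzel}} = r\,\cos\theta_{Y},
\qquad
\cos\theta^{*}_{\mathrm{Cassie}} = f_{s}\,(\cos\theta_{Y} + 1) - 1 .
```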
https://en.wikipedia.org/wiki/Gyrochronology | Gyrochronology is a method for estimating the age of a low-mass (cool) main sequence star (spectral class F8 V or later) from its rotation period. The term is derived from the Greek words gyros, chronos and logos, roughly translated as rotation, age, and study respectively. It was coined in 2003 by Sydney Barnes to describe the associated procedure for deriving stellar ages, and developed extensively in empirical form in 2007.
Gyrochronology builds on the work of Andrew Skumanich, who found that the average value of (v sin i) for several open clusters was inversely proportional to the square root of the cluster's age. In the expression (v sin i), (v) is the velocity on the star's equator and (i) is the inclination angle of the star's axis of rotation, which is generally an unmeasurable quantity. The gyrochronology method depends on the relationship between the rotation period and the mass of low-mass main-sequence stars of the same age, which was verified by early work on the Hyades open cluster. The associated age estimate for a star is known as the gyrochronological age.
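Schematically, the Skumanich result corresponds to the scaling below; P_ref and t_ref stand for a hypothetical calibration point (for example a cluster of known age), and the mass/colour dependence is ignored here:

```latex
v\sin i \;\propto\; t^{-1/2}
\;\;\Longrightarrow\;\;
P \;\propto\; t^{1/2},
\qquad
t \;\approx\; t_{\mathrm{ref}}\left(\frac{P}{P_{\mathrm{ref}}}\right)^{2}.
```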
The basic idea underlying gyrochronology is that the rotation period P, of a cool main-sequence star is a deterministic function of its age t and its mass M (or a suitable substitute such as color). Although main sequence stars of a given mass form with a range of rotation periods, their periods increase rapidly and converge to a well defined value as they lose angular momentum through magnetically channelled stellar winds. Therefore, their periods converge to a certain function of age and mass, mathematically denoted by P=P(t,M). Consequently, cool stars do not occupy the entire 3-dimensional parameter space of (mass, age, period), but instead define a 2-dimensional surface in this P-t-M space. Therefore, measuring two of these variables yields the third. Of these quantities, the mass (color) and the rotation period are the easier variables to measure, providing access to the star's age, otherwis |
https://en.wikipedia.org/wiki/Ochrea | An ochrea (Latin ocrea, greave or protective legging), also spelled ocrea, is a plant structure formed of stipules fused into a sheath surrounding the stem, and is typically found in the Polygonaceae.
In palms it denotes an extension of the leaf sheath beyond the petiole insertion. |
https://en.wikipedia.org/wiki/Allegations%20of%20biological%20warfare%20in%20the%20Korean%20War | Allegations that the United States military used biological weapons in the Korean War (June 1950 – July 1953) were raised by the governments of the People's Republic of China, the Soviet Union, and North Korea. The claims were first raised in 1951. The story was covered by the worldwide press and led to a highly publicized international investigation in 1952. Secretary of State Dean Acheson and other American and allied government officials denounced the allegations as a hoax. Subsequent scholars are split about the truth of the claims.
Background
Until the end of World War II, Japan operated a covert biological and chemical warfare research and development unit called Unit 731 in Harbin (now China). The unit's activities, including human experimentation, were documented by the Khabarovsk War Crime Trials conducted by the Soviet Union in December 1949. However, at that time, the US government described the Khabarovsk trials as "vicious and unfounded propaganda". It was later revealed that the accusations made against the Japanese military were correct. The US government had taken over the research at the end of the war and had then covered up the program. Leaders of Unit 731 were exempted from war crimes prosecution by the United States and then placed on the payroll of the US.
On 30 June 1950, soon after the outbreak of the Korean War, the US Defense Secretary George Marshall received the Report of the Committee on Chemical, Biological and Radiological Warfare and Recommendations, which advocated urgent development of a biological weapons program. The biological weapons research facility at Fort Detrick, Maryland was expanded, and a new one in Pine Bluff, Arkansas, was developed.
Allegations
During 1951, as the war turned against the United States, the Chinese and North Koreans made vague allegations of biological warfare, but these were not pursued. General Matthew Ridgway, United Nations Commander in Korea, denounced the initial charges as early as May 1951. |
https://en.wikipedia.org/wiki/Reception%20and%20criticism%20of%20WhatsApp%20security%20and%20privacy%20features | This article provides a detailed chronological account of the historical reception and criticism of security and privacy features in the WhatsApp messaging service.
2011
On May 20, 2011, an unidentified security researcher from the Netherlands under the pseudonym "WhatsappHack" published a method to hijack WhatsApp accounts using a flaw in the authentication process, to the Dutch websites Tweakers.net and GeenStijl. The method involved trying to log in to a person's account from another phone number and intercepting the verification text message that would be sent out. "WhatsappHack" provided methods to accomplish this on both Symbian and Android operating systems. One day after the publication of the articles, WhatsApp issued a patch to both the Android and Symbian clients.
In May 2011, another security hole was reported which left communication through WhatsApp susceptible to packet analysis. WhatsApp communications data was sent and received in plaintext, meaning messages could easily be read if packet traces were available.
2012
In May 2012 security researchers noticed that new updates of WhatsApp sent messages with encryption, but described the cryptographic method used as "broken." In August of the same year, the WhatsApp support staff stated that messages sent in the "latest version" of the WhatsApp software for iOS and Android (but not BlackBerry, Windows Phone, and Symbian) were encrypted, but did not specify the cryptographic method.
On January 6, 2012, an unknown hacker published a website that made it possible to change the status of any WhatsApp user, so long as the phone number associated with the user's account was known. On January 9, WhatsApp reported that it had resolved the problem. In reality, WhatsApp's solution had been to block the website's IP address, which had allowed a Windows tool to be made that could accomplish the same thing. This problem has since been resolved by the institution of an IP address check on currently logged-in sess |
https://en.wikipedia.org/wiki/Cytotoxic%20T%20cell | A cytotoxic T cell (also known as TC, cytotoxic T lymphocyte, CTL, T-killer cell, cytolytic T cell, CD8+ T-cell or killer T cell) is a T lymphocyte (a type of white blood cell) that kills cancer cells, cells that are infected by intracellular pathogens (such as viruses or bacteria), or cells that are damaged in other ways.
Most cytotoxic T cells express T-cell receptors (TCRs) that can recognize a specific antigen. An antigen is a molecule capable of stimulating an immune response and is often produced by cancer cells, viruses, bacteria or intracellular signals. Antigens inside a cell are bound to class I MHC molecules, and brought to the surface of the cell by the class I MHC molecule, where they can be recognized by the T cell. If the TCR is specific for that antigen, it binds to the complex of the class I MHC molecule and the antigen, and the T cell destroys the cell.
In order for the TCR to bind to the class I MHC molecule, the former must be accompanied by a glycoprotein called CD8, which binds to the constant portion of the class I MHC molecule. Therefore, these T cells are called CD8+ T cells.
The affinity between CD8 and the MHC molecule keeps the TC cell and the target cell bound closely together during antigen-specific activation. CD8+ T cells are recognized as TC cells once they become activated and are generally classified as having a pre-defined cytotoxic role within the immune system. However, CD8+ T cells also have the ability to make some cytokines, such as TNF-α and IFN-γ, with antitumour and antimicrobial effects.
Development
The immune system must recognize millions of potential antigens. There are fewer than 30,000 genes in the human body, so it is impossible to have one gene for every antigen. Instead, the DNA in millions of white blood cells in the bone marrow is shuffled to create cells with unique receptors, each of which can bind to a different antigen. Some receptors bind to tissues in the human body itself, so to prevent the body fro |
https://en.wikipedia.org/wiki/Dawes%27%20limit | Dawes' limit is a formula to express the maximum resolving power of a microscope or telescope. It is so named after its discoverer, William Rutter Dawes, although it is also credited to Lord Rayleigh.
The formula takes different forms depending on the units: the resolving power R is approximately 4.56/D arcseconds with the aperture D measured in inches, or about 116/D arcseconds with D in millimetres.
This formula agrees with the usual Rayleigh criterion, θ ≈ 1.22 λ/D, at a wavelength of about 460 nm, somewhat bluer than the peak sensitivity of rod cells at c. 498 nm.
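A small worked example for an assumed 250 mm aperture, comparing the common Dawes figure R ≈ 116/D (D in millimetres) with the Rayleigh criterion at 550 nm:

```python
import math

D_mm = 250.0                                   # illustrative telescope aperture
dawes_arcsec = 116.0 / D_mm                    # about 0.46 arcseconds

theta_rad = 1.22 * 550e-9 / (D_mm * 1e-3)      # Rayleigh criterion at 550 nm
rayleigh_arcsec = math.degrees(theta_rad) * 3600

print(round(dawes_arcsec, 2), round(rayleigh_arcsec, 2))   # ~0.46 vs ~0.55
```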
See also
Rayleigh criterion |
https://en.wikipedia.org/wiki/ClearCube | ClearCube is a computer systems manufacturer based in Austin, Texas, owned by parent company ClearCube Holdings. The company became known for its blade PC products; it has since expanded its offerings to include desktop virtualization and VDI. It was founded in 1997 by Andrew Heller (former IBM Fellow) and Barry Thornton as Vicinity Systems.
In 2005, ClearCube derived about a third of its revenue from virtual infrastructure products sold into the financial services sector, with the majority of the rest of the revenue coming from customers in the health-care and government sectors. Since 2005, ClearCube has continued to focus on virtualization-capable hardware and management software, which has led to strong revenue growth. In 2011, the company announced 50% year-over-year revenue growth due to the strong performance of its virtual desktop products.
In 2011, ClearCube acquired Dallas-based Network Elites. The acquisition brought roughly 25 additional employees to the company and expanded ClearCube's Cloud services capabilities.
Partnerships
Until 2005, IBM was a reseller of the entire product line of ClearCube. Afterwards, IBM bundled some of its own hardware with ClearCube's software, and also diversified its software offering to include Citrix and VMware products. When IBM sold its PC division to Lenovo, the latter also began reselling ClearCube blades. Other major PC manufacturers, like HP, also began to compete in the blade PC niche around this time. Other resellers of ClearCube products included Hitachi and SAIC.
In 2008, ClearCube spun off its software division as VDIworks, and while VDIworks has developed additional OEM relationships, the two companies remain closely associated in OEM partnership, and share the same investors and owners. In January 2008, ClearCube also introduced products implementing Teradici's PC-over-IP protocol, including two dual DVI thin clients, the I9420 I/Port and C7420 C/Port, which connect to the blades using copper-based and f |
https://en.wikipedia.org/wiki/Cation%E2%80%93%CF%80%20interaction | Cation–π interaction is a noncovalent molecular interaction between the face of an electron-rich π system (e.g. benzene, ethylene, acetylene) and an adjacent cation (e.g. Li+, Na+). This interaction is an example of noncovalent bonding between a monopole (cation) and a quadrupole (π system). Bonding energies are significant, with solution-phase values falling within the same order of magnitude as hydrogen bonds and salt bridges. Similar to these other non-covalent bonds, cation–π interactions play an important role in nature, particularly in protein structure, molecular recognition and enzyme catalysis. The effect has also been observed and put to use in synthetic systems.
Origin of the effect
Benzene, the model π system, has no permanent dipole moment, as the contributions of the weakly polar carbon–hydrogen bonds cancel due to molecular symmetry. However, the electron-rich π system above and below the benzene ring hosts a partial negative charge. A counterbalancing positive charge is associated with the plane of the benzene atoms, resulting in an electric quadrupole (a pair of dipoles, aligned like a parallelogram so there is no net molecular dipole moment). The negatively charged region of the quadrupole can then interact favorably with positively charged species; a particularly strong effect is observed with cations of high charge density.
Nature of the cation–π interaction
The most studied cation–π interactions involve binding between an aromatic π system and an alkali metal or nitrogenous cation. The optimal interaction geometry places the cation in van der Waals contact with the aromatic ring, centered on top of the π face along the 6-fold axis. Studies have shown that electrostatics dominate interactions in simple systems, and relative binding energies correlate well with electrostatic potential energy.
The Electrostatic Model developed by Dougherty and coworkers describes trends in binding energy based on differences in electrostatic attraction. It wa |
https://en.wikipedia.org/wiki/Miguel%20San%20Mart%C3%ADn | Alejandro Miguel San Martín (born January 6, 1959) is an Argentine engineer of NASA and a science educator. He is best known for his work as Chief Engineer for the Guidance, Navigation, and Control system in the latest missions to Mars. His best known contribution is the Sky Crane system, of which he is co-inventor, used in the Curiosity mission for the descent of the rover.
In addition to his work as an engineer, he is dedicated to giving presentations about the work he does with his team at NASA. He has participated as a speaker at various events such as Campus Party, Robotics Day, Real Talks Atlanta and TEDx Río de la Plata, among other conferences. He is featured in the NASA video "Curiosity Seven Minutes of Terror" along with other Curiosity engineers.
Early life and career
He left Argentina after he graduated from industrial school, moving to the United States to get a bachelor's degree in electrical engineering from Syracuse University College of Engineering and Computer Science, being named Engineering Student of the Year. He completed his master's degree at the Massachusetts Institute of Technology.
In various interviews he said that he decided to be a space engineer on a winter's night in 1976 at his parents’ farm, while he listened to the BBC on short wave reporting the arrival of the Viking mission to Mars.
He started working for the NASA Jet Propulsion Laboratory in 1985, where he participated in the Magellan mission to Venus and the Cassini mission to Saturn. Later, in the Pathfinder mission, he was named Chief Engineer for the Guidance, Navigation, and Control system, which landed the Sojourner rover. In the same role he was part of the Spirit and Opportunity missions in 2004. He helped to develop the Sky Crane system which landed Curiosity on Mars as part of the Mars Science Laboratory mission, and with his team at JPL he also worked on the software for the landing.
He is a member of the NASA National Engineering and Safety Center.
In February 2019, he was |
https://en.wikipedia.org/wiki/Quasi-arithmetic%20mean | In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function f. It is also called the Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean.
Definition
If f is a function which maps an interval I of the real line to the real numbers, and is both continuous and injective, the f-mean of n numbers x1, ..., xn ∈ I
is defined as M_f(x1, ..., xn) = f⁻¹((f(x1) + ... + f(xn)) / n), which can also be written M_f(x1, ..., xn) = f⁻¹((1/n) Σ f(xi)).
We require f to be injective in order for the inverse function f⁻¹ to exist. Since f is defined over an interval, (f(x1) + ... + f(xn)) / n lies within the domain of f⁻¹.
Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number of the tuple x nor smaller than the smallest number in x.
Examples
If I = ℝ, the real line, and f(x) = x (or indeed any linear function x ↦ a·x + b with a not equal to 0), then the f-mean corresponds to the arithmetic mean.
If I = ℝ+, the positive real numbers, and f(x) = log(x), then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1.
If I = ℝ+ and f(x) = 1/x, then the f-mean corresponds to the harmonic mean.
If I = ℝ+ and f(x) = x^p with p ≠ 0, then the f-mean corresponds to the power mean with exponent p.
If I = ℝ and f(x) = exp(x), then the f-mean is the mean in the log semiring, which is a constant shifted version of the LogSumExp (LSE) function (which is the logarithmic sum): M_f(x1, ..., xn) = LSE(x1, ..., xn) − log(n). The −log(n) corresponds to dividing by n, since logarithmic division is linear subtraction. The LogSumExp function is a smooth maximum: a smooth approximation to the maximum function.
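A minimal sketch of the definition in code, recovering the arithmetic, geometric and harmonic means of the same numbers by changing f; the function name and the test values are just examples.

```python
import math

def f_mean(f, f_inv, xs):
    """Quasi-arithmetic mean: push through f, average, pull back through f's inverse."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 4.0]
print(f_mean(lambda x: x,     lambda y: y,     xs))   # arithmetic mean: 7/3
print(f_mean(math.log,        math.exp,        xs))   # geometric mean: 2.0
print(f_mean(lambda x: 1 / x, lambda y: 1 / y, xs))   # harmonic mean: 12/7
```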
Properties
The following properties hold for M_f for any single function f:
Symmetry: The value of M_f is unchanged if its arguments are permuted.
Idempotency: for all x, M_f(x, ..., x) = x.
Monotonicity: M_f is monotonic in eac |
https://en.wikipedia.org/wiki/Aaron%20Pixton | Aaron C. Pixton (born January 13, 1986) is an American mathematician at the University of Michigan. He works in enumerative geometry, and is also known for his chess playing, where he is a FIDE Master.
Early life and education
Pixton was born in Binghamton, New York; his father, Dennis Pixton, is a retired professor of mathematics at Binghamton University. He grew up in Vestal, New York. While a student at Vestal Senior High School, he achieved a perfect score on the American Mathematics Competition three times from 2002 to 2004. He went on to win consecutive gold medals at the International Mathematical Olympiad in 2003 and 2004.
He received a Bachelor of Arts in 2008 and a Doctor of Philosophy in 2013, both from Princeton University.
While an undergraduate at Princeton University, Pixton was a three-time Putnam Fellow. For his research conducted as an undergraduate, he was awarded the 2009 Morgan Prize. In 2008, he received a Churchill Scholarship to the University of Cambridge. Pixton received his Ph.D. in 2013 from Princeton under the supervision of Rahul Pandharipande; his dissertation was The tautological ring of the moduli space of curves.
Career
Pixton was appointed as a Clay Research Fellow for a term of five years beginning in 2013. After two years as a postdoctoral researcher at Harvard University, he became an assistant professor of mathematics at the Massachusetts Institute of Technology in 2015. In 2017, he received a Sloan Research Fellowship. In 2020, he moved to the University of Michigan as an assistant professor.
Chess
Pixton is also a former child prodigy in chess. He was the 2001 U.S. Cadet Champion and the 2002 US Junior Chess Champion, and had a win against the former US Champion Joel Benjamin in 2003.
Selected publications |
https://en.wikipedia.org/wiki/Raptor%20code | In computer science, Raptor codes (rapid tornado; see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes, which were the first practical class of fountain codes.
Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number k of equal-size source symbols into a potentially limitless sequence of encoding symbols such that reception of any k or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received above k, becoming very close to 1 once the number of received encoding symbols is only very slightly larger than k. For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k+2 encoding symbols have been received is less than one in a million. (See the Recovery probability and overhead section below for more discussion on this.) A symbol can be any size, from a single byte to hundreds or thousands of bytes.
Raptor codes may be systematic or non-systematic. In the systematic case, the symbols of the original source block, i.e. the source symbols, are included within the set of encoding symbols. Examples of systematic Raptor codes include their use by the 3rd Generation Partnership Project in mobile cellular wireless broadcasting and multicasting, and also by DVB-H standards for IP datacast to handheld devices (see external links). The Raptor code used in these standards is also defined in IETF RFC 5053.
Online codes are an example of a non-systematic fountain code.
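The sketch below is not a Raptor code (it has no LT degree distribution and no precode); it is a dense random fountain over GF(2), included only to illustrate the fountain-coding property that any set of k linearly independent encoding symbols suffices to recover the k source symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_solve(A, b):
    """Solve A x = b over GF(2); raise ValueError if A does not yet have rank k."""
    A, b = A.copy(), b.copy()
    n, k = A.shape
    for col in range(k):
        pivots = np.nonzero(A[col:, col])[0]
        if pivots.size == 0:
            raise ValueError("need more encoding symbols")
        p = col + pivots[0]
        A[[col, p]], b[[col, p]] = A[[p, col]], b[[p, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]       # eliminate this column from every other row
                b[r] ^= b[col]
    return b[:k]

k = 8
source = rng.integers(0, 256, size=k, dtype=np.uint8)       # k one-byte source symbols

rows, vals = [], []
while True:                                                  # potentially limitless stream
    g = rng.integers(0, 2, size=k, dtype=np.uint8)           # random combination vector
    if not g.any():
        continue
    rows.append(g)
    vals.append(np.bitwise_xor.reduce(source[g.astype(bool)]))  # encoding symbol = XOR
    try:
        decoded = gf2_solve(np.array(rows), np.array(vals, dtype=np.uint8))
        break                                                # rank k reached: recovered
    except ValueError:
        continue

print(len(rows), "symbols received,", np.array_equal(decoded, source))
```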
RaptorQ code
The most advance |
https://en.wikipedia.org/wiki/Frequency-dependent%20selection | Frequency-dependent selection is an evolutionary process by which the fitness of a phenotype or genotype depends on the phenotype or genotype composition of a given population.
In positive frequency-dependent selection, the fitness of a phenotype or genotype increases as it becomes more common.
In negative frequency-dependent selection, the fitness of a phenotype or genotype decreases as it becomes more common. This is an example of balancing selection.
More generally, frequency-dependent selection includes when biological interactions make an individual's fitness depend on the frequencies of other phenotypes or genotypes in the population.
Frequency-dependent selection is usually the result of interactions between species (predation, parasitism, or competition), or between genotypes within species (usually competitive or symbiotic), and has been especially frequently discussed with relation to anti-predator adaptations. Frequency-dependent selection can lead to polymorphic equilibria, which result from interactions among genotypes within species, in the same way that multi-species equilibria require interactions between species in competition (e.g. where αij parameters in Lotka-Volterra competition equations are non-zero). Frequency-dependent selection can also lead to dynamical chaos when some individuals' fitnesses become very low at intermediate allele frequencies.
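A toy numerical sketch of negative frequency-dependent selection in a two-type population; the fitness function and parameter values are invented purely for illustration.

```python
# Each type's fitness drops as it becomes common; the polymorphism is balanced.
s = 0.5                      # strength of the frequency dependence (hypothetical)
p = 0.9                      # starting frequency of type A

for generation in range(30):
    w_A = 1.0 - s * p        # type A fitness falls when A is common
    w_B = 1.0 - s * (1 - p)  # type B fitness falls when B is common
    p = p * w_A / (p * w_A + (1 - p) * w_B)

print(round(p, 3))           # converges toward 0.5, a balanced polymorphism
```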
Negative
The first explicit statement of frequency-dependent selection appears to have been by Edward Bagnall Poulton in 1884, on the way that predators could maintain color polymorphisms in their prey.
Perhaps the best known early modern statement of the principle is Bryan Clarke's 1962 paper on apostatic selection (a synonym of negative frequency-dependent selection). Clarke discussed predator attacks on polymorphic British snails, citing Luuk Tinbergen's classic work on searching images as support that predators such as birds tended to specialize in common forms of palatable species. Clarke |
https://en.wikipedia.org/wiki/Optical%20mesh%20network | An optical mesh network is a type of optical telecommunications network employing wired fiber-optic communication or wireless free-space optical communication in a mesh network architecture.
Most optical mesh networks use fiber-optic communication and are operated by internet service providers in metropolitan and regional, as well as national and international, scenarios. They are faster and less error-prone than other network architectures and support backup and recovery plans for established networks in case of any disaster, damage or failure. Currently planned satellite constellations aim to establish optical mesh networks in space by using wireless laser communication.
History of transport networks
Transport networks, the underlying optical fiber-based layer of telecommunications networks, have evolved from Digital cross connect system (DCS)-based mesh architectures in the 1980s, to SONET/SDH (Synchronous Optical Networking/Synchronous Digital Hierarchy) ring architectures in the 1990s. In DCS-based mesh architectures, telecommunications carriers deployed restoration systems for DS3 circuits such as AT&T FASTAR (FAST Automatic Restoration) and MCI Real Time Restoration (RTR), restoring circuits in minutes after a network failure. In SONET/SDH rings, carriers implemented ring protection such as SONET Unidirectional Path Switched Ring (UPSR) (also called Sub-Network Connection Protection (SCNP) in SDH networks) or SONET Bidirectional Line Switched Ring (BLSR) (also called Multiplex Section - Shared Protection Ring (MS-SPRing) in SDH networks), protecting against and recovering from a network failure in 50 ms or less, a significant improvement over the recovery time supported in DCS-based mesh restoration, and a key driver for the deployment of SONET/SDH ring-based protection.
There have been attempts at improving and/or evolving traditional ring architectures to overcome some of its limitations, with trans-oceanic ring architecture (defined in ITU-T Rec. G.841), "P |
https://en.wikipedia.org/wiki/Waking%20up%20early | Waking up early is rising before most others and has also been described as a productivity method - rising early and consistently so as to be able to accomplish more during the day. This method has been recommended since antiquity and is now recommended by a number of personal development gurus.
Commentary
Within the context of religious observances, spiritual writers have called this practice "the heroic minute", referring to the sacrifice which this entails.
The philosopher Aristotle wrote in his Economics that "Rising before daylight is also to be commended; it is a healthy habit, and gives more time for the management of the household as well as for liberal studies."
Benjamin Franklin is quoted as saying: "Early to bed and early to rise, makes a man healthy, wealthy, and wise". It is a saying that is viewed as a commonsensical proverb, which was included in "A Method of Prayer" by Matthew Henry, who also listed it as a phrase "long said." Franklin is also quoted as saying: "The early morning has gold in its mouth", a translation of the German proverb "Morgenstund hat Gold im Mund".
"The early bird gets the worm" is a proverb that suggests that getting up early will lead to success during the day.
This proverb invites the immediate counterpoint: "what about the early worm, shouldn't he have stayed in bed?"
James Thurber, in his book Fables for our Time, ended the Fable of the Shrike: "Early to rise and early to bed, makes a Shrike healthy, and wealthy, and dead".
Criticisms
Such recommendations may cast individuals with different natural sleep patterns as lazy or unmotivated when it is a much different matter for a person with a longer or delayed sleep cycle to get up earlier in the morning than for a person with an advanced sleep cycle. In effect, the person accustomed to a later wake time is being asked not to wake up an hour early but 3–4 hours early, while waking up "normally" may already be an unrecognized challenge imposed by the environment.
|
https://en.wikipedia.org/wiki/Pollination%20network | A pollination network is a bipartite mutualistic network in which plants and pollinators are the nodes, and the pollination interactions form the links between these nodes. The pollination network is bipartite as interactions only exist between two distinct, non-overlapping sets of species, but not within the set: a pollinator can never be pollinated, unlike in a predator-prey network where a predator can be depredated. A pollination network is two-modal, i.e., it includes only links connecting plant and animal communities.
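A minimal sketch of such a two-modal network using the networkx library; the species names and links are invented for illustration.

```python
import networkx as nx

G = nx.Graph()
plants = ["clover", "thistle", "orchid"]
pollinators = ["honeybee", "bumblebee", "hawkmoth"]
G.add_nodes_from(plants, bipartite=0)        # one node set: plants
G.add_nodes_from(pollinators, bipartite=1)   # the other node set: pollinators

# Links exist only *between* the two sets, never within one of them.
G.add_edges_from([
    ("honeybee", "clover"), ("honeybee", "thistle"),   # a generalist pollinator
    ("bumblebee", "clover"),
    ("hawkmoth", "orchid"),                            # a specialist pollinator
])

print(nx.is_bipartite(G))                    # True
```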
Nested structure of pollination networks
A key feature of pollination networks is their nested design. A study of 52 mutualist networks (including plant-pollinator interactions and plant-seed disperser interactions) found that most of the networks were nested. This means that the core of the network is made up of highly connected generalists (a pollinator that visits many different species of plant), while specialized species interact with a subset of the species that the generalists interact with (a pollinator that visits few species of plant, which are also visited by generalist pollinators). As the number of interactions in a network increases, the degree of nestedness increases as well. One property that results from nested structure of pollination networks is an asymmetry in specialization, where specialist species are often interacting with some of the most generalized species. This is in contrast to the idea of reciprocal specialization, where specialist pollinators interact with specialist plants. Similar to the relationship between network complexity and network nestedness, the amount of asymmetry in specialization increases as the number of interactions increases.
Modularity of networks
Another feature that is common in pollination networks is modularity. Modularity occurs when certain groups of species within a network are much more highly connected to each other than they are with the rest of the network, with weak interactions c |
https://en.wikipedia.org/wiki/Distance-regular%20graph | In the mathematical field of graph theory, a distance-regular graph is a regular graph such that for any two vertices v and w, the number of vertices at distance j from v and at distance k from w depends only upon j, k, and the distance between v and w.
Some authors exclude the complete graphs and disconnected graphs from this definition.
Every distance-transitive graph is distance-regular. Indeed, distance-regular graphs were introduced as a combinatorial generalization of distance-transitive graphs, having the numerical regularity properties of the latter without necessarily having a large automorphism group.
Intersection arrays
It turns out that a graph G of diameter d is distance-regular if and only if there is an array of integers (b_0, b_1, ..., b_{d-1}; c_1, ..., c_d) such that for all j, b_j gives the number of neighbours of w at distance j + 1 from v and c_j gives the number of neighbours of w at distance j - 1 from v, for any pair of vertices v and w at distance j on G. The array of integers characterizing a distance-regular graph is known as its intersection array.
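A brute-force sketch that recovers the intersection array of a small graph (assuming it is connected), returning None when the counts are not constant, i.e. when the graph is not distance-regular; the Petersen graph is used as a test case.

```python
import networkx as nx

def intersection_array(G):
    """Return (b, c) with b = [b_0..b_{d-1}], c = [c_1..c_d], or None if G is
    not distance-regular. Assumes G is connected."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    d = max(max(row.values()) for row in dist.values())      # diameter
    b = [None] * d
    c = [None] * (d + 1)
    for v in G:
        for w in G:
            j = dist[v][w]
            b_j = sum(1 for u in G[w] if dist[v][u] == j + 1)
            c_j = sum(1 for u in G[w] if dist[v][u] == j - 1)
            if j < d:
                if b[j] is None:
                    b[j] = b_j
                elif b[j] != b_j:
                    return None                               # count is not constant
            if j >= 1:
                if c[j] is None:
                    c[j] = c_j
                elif c[j] != c_j:
                    return None
    return b, c[1:]

print(intersection_array(nx.petersen_graph()))                # ([3, 2], [1, 1])
```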
Cospectral distance-regular graphs
A pair of connected distance-regular graphs are cospectral if they have the same intersection array.
A distance-regular graph is disconnected if and only if it is a disjoint union of cospectral distance-regular graphs.
Properties
Suppose G is a connected distance-regular graph of valency k with intersection array (b_0, ..., b_{d-1}; c_1, ..., c_d). For all 0 ≤ j ≤ d: let G_j denote the k_j-regular graph with adjacency matrix A_j formed by relating pairs of vertices on G at distance j, and let a_j denote the number of neighbours of w at distance j from v for any pair of vertices v and w at distance j on G.
Graph-theoretic properties
k_{j+1} / k_j = b_j / c_{j+1} for all 0 ≤ j < d.
b_0 > b_1 ≥ ... ≥ b_{d-1} > 0 and 1 = c_1 ≤ c_2 ≤ ... ≤ c_d ≤ b_0.
Spectral properties
G has d + 1 distinct eigenvalues.
if is a simple eigenvalue of
for any eigenvalue multiplicity of unless is a complete multipartite graph.
for any eigenvalue multiplicity of unless is a cycle graph or a complete multipartite graph.
If is strongly regular, then and .
Examples
Some first examples of distance-re |
https://en.wikipedia.org/wiki/Virtual%20Viking%20%E2%80%93%20The%20Ambush | Virtual Viking – The Ambush is a 2019 short film directed by Erik Gustavson, using volumetric video capture to create one of the first films in virtual reality. Produced for The Viking Planet centre in Oslo, Norway, the film is part of a wider exhibition of the lives of Norse seafarers and uses a number of VR headsets to enable visitors to experience a Viking longship in the heat of battle.
Plot
Skald recounts the story of how he was captured, in his youth, during an unsuccessful Viking raid.
Cast
Murray McArthur as Skald
Luke White as Ulf
Wolfie Hughes as Grim
Christopher Rogers as Trym
Ross O'Hennessy as Viking
Awards and nominations
At the Aesthetica Short Film Festival 2019, Virtual Viking – The Ambush was awarded Best VR Film. |
https://en.wikipedia.org/wiki/Prohesperocyon | Prohesperocyon ("before Hesperocyon") is an extinct genus of the first canid, endemic to North America, appearing during the Late Eocene around 36.6 mya (AEO).
Fossil distribution
Prohesperocyon wilsoni was unearthed at the Airstrip (TMM 40504) site, Presidio County, Texas, dating between 36.6 and 36.5 million years ago. This fossil species bears a combination of features that definitively mark it as a member of the Canidae, including teeth showing the loss of the upper third molar (part of a general trend in canids toward a more shearing bite), and the characteristically enlarged bony bulla (the rounded covering over the middle ear). Based on what we know about its descendants, Prohesperocyon likely had slightly more elongated limbs than its predecessors, along with toes that were parallel and closely touching, rather than splayed, as in bears.
https://en.wikipedia.org/wiki/Gulkand | Gulkand (also written gulqand or gulkhand) is a sweet preserve of rose petals originating in the Indian subcontinent. The term is derived from Persian; gul (rose) and qand (sugar/sweet).
Preparation
Traditionally, gulkand has been prepared with Damask roses. Other common types of roses used include China rose, French rose, and Cabbage rose. It is prepared using special pink rose petals and is mixed with sugar. Rose petals are slow-cooked with sugar, which reduces the juices into a thick consistency.
Uses in holistic medicine
Gulkand is used in the Unani system of medicine as a cooling tonic. It is also used in Ayurvedic and Persian medicine to help with bodily imbalances. |
https://en.wikipedia.org/wiki/Hybrid%20Access%20Networks | Hybrid Access Networks refer to a special architecture for broadband access networks where two different network technologies are combined to improve bandwidth. A frequent motivation for such Hybrid Access Networks is to combine one xDSL network with a wireless network such as LTE. The technology is generic and can be applied to combine different types of access networks such as DOCSIS, WiMAX, 5G or satellite networks. The Broadband Forum has specified an architecture as a framework for the deployment of such converged networks.
Use cases
One of the main motivations for such Hybrid Access Networks is to provide faster Internet services in rural areas where it is not always cost-effective to deploy faster xDSL technologies such as G.Fast or VDSL2 that cannot cover long distances between the street cabinet and the home. Several governments, notably in Europe, required network operators to provide fast Internet services to all inhabitants with a minimum of 30 Mbps by 2020.
A second use case is to improve the reliability of the access link given that it is unlikely that both the xDSL network and the wireless network will fail at the same time.
A third motivation is the fast service turnup. The customer can immediately install the hybrid network access and use the wireless leg while the network operator is installing the wired part.
Technology
Several techniques are defined by the Broadband Forum to create Hybrid Access Networks. To illustrate them, we assume that the end user has a hybrid CPE (customer-premises equipment) router that is attached to both a wired access network such as xDSL and a wireless one such as LTE. Other deployments are possible, e.g., the end user might have two different access routers that are linked together by a cable instead of a single hybrid CPE router.
The first deployment scenario is where the network operator provides a hybrid CPE router to each subscriber but no specialised equipment in the operator's network. There are two possible |
https://en.wikipedia.org/wiki/White%20dragon | The white dragon is a symbol associated in Welsh mythology with the Anglo-Saxons.
Origin of tradition
The earliest usage of the white dragon as a symbol of the Anglo-Saxons is found in the Historia Brittonum. The relevant story takes place at Dinas Emrys when Vortigern tries to build a castle there. Every night, unseen forces demolish the castle walls and foundations. Vortigern consults his advisers, who tell him to find a boy with no natural father, and to sacrifice him. Vortigern finds such a boy, but on hearing that he is to be put to death to solve the demolishing of the walls, the boy dismisses the knowledge of the advisors. The boy tells the king of the two dragons. Vortigern excavates the hill, freeing the dragons. They continue their fight and the red dragon finally defeats the white dragon. The boy tells Vortigern that the white dragon symbolises the Saxons and that the red dragon symbolises the people of Vortigern.
The story is repeated in Geoffrey of Monmouth's fictional History of the Kings of Britain (c. 1136). In this telling the boy is identified as the young Merlin. The Historia Brittonum and History of the Kings of Britain are the only medieval texts to use the white dragon as a symbol of the English.
A similar story of white and red dragons fighting is found in the medieval romance Lludd and Llefelys, although in this case the dragons are not used to symbolize Britons or Saxons. The battle between the two dragons is the second plague to strike the Island of Britain, as the White Dragon would strive to overcome the Red Dragon, making the Red cry out a fearful shriek which was heard over every Brythonic hearth. This shriek went through people's hearts, scaring them so much that the men lost their hue and their strength, women lost their children, young men and the maidens lost their senses, and all the animals and trees and the earth and the waters were left barren. Lludd finally eradicated the plague by catching the dragons and burying both o |
https://en.wikipedia.org/wiki/Electron%20multiplier | An electron multiplier is a vacuum-tube structure that multiplies incident charges. In a process called secondary emission, a single electron that strikes a secondary-emissive material can induce emission of roughly 1 to 3 electrons. If an electric potential is applied between this metal plate and yet another, the emitted electrons will accelerate to the next metal plate and induce secondary emission of still more electrons. This can be repeated a number of times, resulting in a large shower of electrons all collected by a metal anode, all having been triggered by just one.
History
In 1930, Russian physicist Leonid Aleksandrovitch Kubetsky proposed a device which used photocathodes combined with dynodes, or secondary electron emitters, in a single tube to remove secondary electrons by increasing the electric potential through the device. An electron multiplier can use any number of dynodes, each characterized by a secondary-emission coefficient σ, giving an overall gain of σ^n, where n is the number of emitters.
Discrete dynode
Secondary electron emission begins when one electron hits a dynode inside a vacuum chamber and ejects electrons that cascade onto more dynodes, repeating the process. The dynodes are arranged so that each one is held at a potential about 100 volts higher than the previous one, so an electron gains roughly 100 electronvolts between stages. Advantages of this approach include response times in the picosecond range, high sensitivity, and an electron gain of about 10^8.
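A quick worked illustration of the σ^n gain relation from the History section above (the values of σ and the number of dynodes below are illustrative, not taken from the article):

```python
# Overall electron multiplier gain is sigma**n, where sigma is the mean number of
# secondary electrons emitted per incident electron and n is the number of dynodes.
def multiplier_gain(sigma, n_dynodes):
    return sigma ** n_dynodes

# With sigma = 3 secondary electrons per stage, 17 dynodes already give a gain of
# roughly 1.3e8, consistent with the ~1e8 gain quoted for discrete-dynode designs.
print(f"{multiplier_gain(3, 17):.2e}")   # -> 1.29e+08
```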
Continuous dynode
A continuous dynode system uses a horn-shaped funnel of glass coated with a thin film of semiconducting material. The electrodes have increasing resistance to allow secondary emission. Continuous dynodes use a negative high voltage at the wider end, rising to near ground (positive) at the narrow end. The first device of this kind was called a Channel Electron Multiplier (CEM). CEMs required 2-4 kilovolts in order to achieve a gain of 10^6 electrons.
Microchannel plate
Anot |
https://en.wikipedia.org/wiki/Molecular%20orbital%20diagram | A molecular orbital diagram, or MO diagram, is a qualitative descriptive tool explaining chemical bonding in molecules in terms of molecular orbital theory in general and the linear combination of atomic orbitals (LCAO) method in particular. A fundamental principle of these theories is that as atoms bond to form molecules, a certain number of atomic orbitals combine to form the same number of molecular orbitals, although the electrons involved may be redistributed among the orbitals. This tool is very well suited for simple diatomic molecules such as dihydrogen, dioxygen, and carbon monoxide but becomes more complex when discussing even comparatively simple polyatomic molecules, such as methane. MO diagrams can explain why some molecules exist and others do not. They can also predict bond strength, as well as the electronic transitions that can take place.
History
Qualitative MO theory was introduced in 1928 by Robert S. Mulliken and Friedrich Hund. A mathematical description was provided by contributions from Douglas Hartree in 1928 and Vladimir Fock in 1930.
Basics
Molecular orbital diagrams are diagrams of molecular orbital (MO) energy levels, shown as short horizontal lines in the center, flanked by constituent atomic orbital (AO) energy levels for comparison, with the energy levels increasing from the bottom to the top. Lines, often dashed diagonal lines, connect MO levels with their constituent AO levels. Degenerate energy levels are commonly shown side by side. Appropriate AO and MO levels are filled with electrons by the Pauli Exclusion Principle, symbolized by small vertical arrows whose directions indicate the electron spins. The AO or MO shapes themselves are often not shown on these diagrams. For a diatomic molecule, an MO diagram effectively shows the energetics of the bond between the two atoms, whose AO unbonded energies are shown on the sides. For simple polyatomic molecules with a "central atom" such as methane () or carbon dioxide (), |
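As a small illustration of the kind of prediction such diagrams support (this example is not taken from the article): the bond order of a diatomic molecule is half the difference between the numbers of electrons in bonding and antibonding molecular orbitals.

```python
# Bond order from MO occupations: (bonding electrons - antibonding electrons) / 2.
def bond_order(n_bonding, n_antibonding):
    return (n_bonding - n_antibonding) / 2

# H2: 2 electrons in the sigma(1s) bonding MO, none in sigma*(1s).
print("H2:", bond_order(2, 0))   # 1.0 -> single bond

# O2 (valence MOs): 8 bonding electrons, 4 antibonding electrons.
print("O2:", bond_order(8, 4))   # 2.0 -> double bond
```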
https://en.wikipedia.org/wiki/Psychometrics%20of%20racism | Psychometrics of racism is an emerging field that aims to measure the incidence and impacts of racism on the psychological well-being of people of all races. At present, there are few instruments that attempt to capture the experience of racism in all of its complexity.
Self-reported inventories
The Schedule of Racist Events (SRE) is a questionnaire for assessing the frequency of racial discrimination in the lives of African Americans, created in 1998 by Hope Landrine and Elizabeth A. Klonoff. The SRE is an 18-item self-report inventory that assesses the frequency of specific racist events in the past year and over one's entire life, and measures the extent to which this discrimination was stressful.
Other psychometric tools for assessing the impacts of racism include:
The Racism Reaction Scale (RRS)
Perceived Racism Scale (PRS)
Index of Race-Related Stress (IRRS)
Racism and Life Experience Scale-Brief Version (RaLES-B)
Telephone-Administered Perceived Racism Scale (TPRS)
Physiological metrics
In a summary of recent research, Jules P. Harrell, Sadiki Hall, and James Taliaferro describe how a growing body of research has explored the impact of encounters with racism or discrimination on physiological activity. Several of the studies suggest that higher blood pressure levels are associated with the tendency not to recall or report occurrences identified as racist and discriminatory. In other words, failing to recognize instances of racism is associated with higher blood pressure in the person experiencing the racist event. Investigators have reported that physiological arousal is associated with laboratory analogues of ethnic discrimination and mistreatment.
See also
Race and health
Stereotype threat
White guilt |
https://en.wikipedia.org/wiki/GridRPC | GridRPC, in distributed computing, is Remote Procedure Call over a grid. This paradigm has been proposed by the GridRPC working group of the Open Grid Forum (OGF), and an API has been defined so that clients can access remote servers as simply as a function call. It is used in numerous Grid middleware for its simplicity of implementation, and was standardized by the OGF in 2007.
For interoperability reasons between the different existing middleware, the API has been followed by a document describing good use and behavior of the different GridRPC API implementations. Work was then conducted on GridRPC data management, which was standardized in 2011.
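A minimal sketch of the GridRPC usage pattern, assuming a mock client: the standardized API is a C API, so the Python names below (GridRPCClient, function_handle, call, call_async, wait) are invented for illustration only and do not correspond to the real grpc_* functions or to any particular middleware.

```python
# Mock illustration of the GridRPC call pattern: initialize a client, obtain a
# handle to a remote function, call it synchronously or asynchronously, wait.
# This is a purely local simulation; nothing here talks to real Grid middleware.
class GridRPCClient:
    def __init__(self, config):
        self.config = config            # e.g. path to a middleware config file

    def function_handle(self, server, name):
        # A real middleware would bind the handle to a remote server.
        return lambda *args: sum(args) if name == "sum" else None

    def call(self, handle, *args):
        return handle(*args)            # synchronous call

    def call_async(self, handle, *args):
        return ("session-1", handle(*args))   # pretend session id plus deferred result

    def wait(self, session):
        return session[1]

client = GridRPCClient("client.cfg")
h = client.function_handle("server.example.org", "sum")
print(client.call(h, 1, 2, 3))                    # 6
print(client.wait(client.call_async(h, 4, 5)))    # 9
```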
Scope
The scope of this standard is to offer recommendations for the implementation of middleware. It deals with the following topics:
Definition of a specific data structure for arguments in GridRPC middleware.
Definition of the data type to be used in conjunction with the arguments' data structure.
Definition of the creation, destruction, lifetime and copy semantics for the arguments' data structure.
Definition of possible introspection capabilities for call arguments and attributes of remote functions (e.g. data types, counts).
Definition of mechanisms for handling persistent data, e.g., definition and use of a concept such as "data handles" (which might be the same as or similar to a grpc_data_t data type). This may also involve concepts such as lazy copy semantics, and data leases or time-outs.
Definition of API mechanisms to enable workflow management.
Evaluate the compatibility and interoperability with other systems, e.g., Web Services Resource Framework.
Desirable Properties—the Proposed Recommendation will not necessarily specify any properties, such as thread safety, security, and fault tolerance, but it should not be incompatible with any such useful properties.
Demonstrate implementability of all parts of the API.
Demonstrate and evaluate at least two implementations |
https://en.wikipedia.org/wiki/Virus%20quantification | Virus quantification is counting or calculating the number of virus particles (virions) in a sample to determine the virus concentration. It is used in both research and development (R&D) in academic and commercial laboratories as well as in production situations where the quantity of virus at various steps is an important variable that must be monitored. For example, the production of virus-based vaccines, recombinant proteins using viral vectors, and viral antigens all require virus quantification to continually monitor and/or modify the process in order to optimize product quality and production yields and to respond to ever changing demands and applications. Other examples of specific instances where viruses need to be quantified include clone screening, multiplicity of infection (MOI) optimization, and adaptation of methods to cell culture.
There are many ways to categorize virus quantification methods. Here, the methods are grouped according to what is being measured and in what biological context. For example, cell-based assays typically measure infectious units (active virus). Other methods may measure the concentration of viral proteins, DNA, RNA, or molecular particles, but not necessarily measure infectivity. Each method has its own advantages and disadvantages, which often determines which method is used for specific applications.
Cell-based assays
Plaque assay
Plaque-based assays are a commonly used method to determine virus concentration in terms of infectious dose. Plaque assays determine the number of plaque forming units (PFU) in a virus sample, which is one measure of virus quantity. This assay is based on a microbiological method conducted in petri dishes or multi-well cell culture plates. Specifically, a confluent monolayer of host cells is infected by applying a sample containing the virus at varying dilutions and then covered with a semi-solid medium, such as agar or carboxymethyl cellulose, to prevent the virus infection from spreading i |
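A calculation commonly paired with the plaque assay converts the plaque count into a titer: PFU per millilitre equals the number of plaques divided by the product of the dilution factor and the volume plated. A short sketch with made-up numbers:

```python
# Virus titer from a plaque count (example numbers are illustrative only).
def titer_pfu_per_ml(plaques, dilution, volume_plated_ml):
    """dilution is the dilution factor of the plated sample, e.g. 1e-6."""
    return plaques / (dilution * volume_plated_ml)

# 72 plaques counted on a plate inoculated with 0.1 mL of a 10^-6 dilution:
print(f"{titer_pfu_per_ml(72, 1e-6, 0.1):.1e} PFU/mL")   # -> 7.2e+08 PFU/mL
```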
https://en.wikipedia.org/wiki/Hartley%20oscillator | The Hartley oscillator is an electronic oscillator circuit in which the oscillation frequency is determined by a tuned circuit consisting of capacitors and inductors, that is, an LC oscillator. The circuit was invented in 1915 by American engineer Ralph Hartley. The distinguishing feature of the Hartley oscillator is that the tuned circuit consists of a single capacitor in parallel with two inductors in series (or a single tapped inductor), and the feedback signal needed for oscillation is taken from the center connection of the two inductors.
History
The Hartley oscillator was invented by Hartley while he was working for the Research Laboratory of the Western Electric Company. Hartley invented and patented the design in 1915 while overseeing Bell System's transatlantic radiotelephone tests; it was awarded patent number 1,356,763 on October 26, 1920. Note that the basic schematic shown below labeled "Common-drain Hartley circuit" is essentially the same as in the patent drawing, except that the tube is replaced by a JFET, and that the battery for a negative grid bias is not needed.
In 1946 Hartley was awarded the IRE medal of honor "For his early work on oscillating circuits employing triode tubes and likewise for his early recognition and clear exposition of the fundamental relationship between the total amount of information which may be transmitted over a transmission system of limited band-width and the time required."(The second half of the citation refers to Hartley's work in information theory which largely paralleled Harry Nyquist.)
Operation
The Hartley oscillator is distinguished by a tank circuit consisting of two series-connected coils (or, often, a tapped coil) in parallel with a capacitor, with an amplifier between the relatively high impedance across the entire LC tank and the relatively low voltage/high current point between the coils. The original 1915 version used a triode as the amplifying device in common plate (cathode follower) config |
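For a concrete sense of the tank circuit described above, the oscillation frequency follows the usual LC resonance relation with the two coils in series (plus twice their mutual inductance if they are magnetically coupled); the component values below are illustrative only:

```python
import math

# Hartley tank resonant frequency: f = 1 / (2*pi*sqrt(C * (L1 + L2 + 2*M))),
# where M is the mutual inductance between the two coils (0 if uncoupled).
def hartley_frequency(l1, l2, c, m=0.0):
    return 1.0 / (2 * math.pi * math.sqrt(c * (l1 + l2 + 2 * m)))

# Illustrative values: two 100 uH coils (no coupling) and a 100 pF capacitor.
f = hartley_frequency(100e-6, 100e-6, 100e-12)
print(f"{f/1e6:.2f} MHz")   # about 1.13 MHz
```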
https://en.wikipedia.org/wiki/Professor%20Kageyama%27s%20Maths%20Training%3A%20The%20Hundred%20Cell%20Calculation%20Method | Professor Kageyama's Maths Training: The Hundred Cell Calculation Method is a puzzle video game published by Nintendo and developed by Jupiter for the Nintendo DS handheld video game console. It was first released in Japan, then later in Europe and Australasia. It was released in North America as Personal Trainer: Math on January 12, 2009 and also in South Korea in 2009. The game is part of both the Touch! Generations and Personal Trainer series. The game received mixed reviews, with reviewers commonly criticizing the game's difficulty in recognizing some numbers and finding it not very entertaining to play. At GameRankings, it holds an average review score of 65%.
Gameplay
Maths Training, designed to be played daily, uses a method called "The Hundred Cell Calculation Method" that focuses on repetition of basic arithmetic. This method was developed by Professor Kageyama who works at the Centre for Research and Educational Development at Ritsumeikan University, Kyoto. Utilizing a 10 x 10 grid of blank squares lined with rows of numbers along the top and side of the grid, the player has to match up each top number with each side number and add or subtract or multiply them. They then fill in the appropriate square with the appropriate answer.
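For illustration only (this is not the game's own code), a hundred-cell sheet of the kind described above can be generated and solved programmatically:

```python
import random

# Build a 10 x 10 "hundred cell" sheet: random header numbers along the top and
# side, and an answer grid obtained by applying the chosen operation to each pair.
def hundred_cell_sheet(op=lambda a, b: a + b, lo=0, hi=9):
    top = [random.randint(lo, hi) for _ in range(10)]
    side = [random.randint(lo, hi) for _ in range(10)]
    answers = [[op(s, t) for t in top] for s in side]
    return top, side, answers

top, side, answers = hundred_cell_sheet()
print("   " + " ".join(f"{t:3d}" for t in top))
for s, row in zip(side, answers):
    print(f"{s:2d} " + " ".join(f"{a:3d}" for a in row))
```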
The game is played by holding the Nintendo DS vertically like a book, and it supports both right- and left-handed users, allowing them to view the exercises on the message screen while they note down their answers with the stylus on the Touch Screen. The user can play against up to 15 other Nintendo DS users by using the DS Download Play option or with multiple game cards.
See also
List of Nintendo DS games
Personal Trainer: Cooking
Personal Trainer: Walking
Notes |
https://en.wikipedia.org/wiki/Dual%20norm | In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space.
Definition
Let X be a normed vector space with norm ‖·‖ and let X* denote its continuous dual space. The dual norm of a continuous linear functional f belonging to X* is the non-negative real number defined by any of the following equivalent formulas: ‖f‖ = sup{|f(x)| : ‖x‖ ≤ 1} = inf{c ∈ [0, ∞) : |f(x)| ≤ c‖x‖ for all x in X} = sup{|f(x)| : ‖x‖ = 1} = sup{|f(x)|/‖x‖ : x ≠ 0},
where sup and inf denote the supremum and infimum, respectively.
The constant 0 map is the origin of the vector space X* and it always has norm ‖0‖ = 0.
If X = {0} then the only linear functional on X is the constant 0 map; moreover, the sets in the last two formulas will both be empty and consequently their supremums will equal −∞ instead of the correct value of 0.
Importantly, a linear functional f is not, in general, guaranteed to achieve its norm on the closed unit ball, meaning that there might not exist any vector u with ‖u‖ ≤ 1 such that ‖f‖ = |f(u)| (if such a vector u does exist and if f ≠ 0, then u would necessarily have unit norm ‖u‖ = 1).
R.C. James proved James's theorem in 1964, which states that a Banach space is reflexive if and only if every bounded linear function achieves its norm on the closed unit ball.
It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball.
However, the Bishop–Phelps theorem guarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of a Banach space is a norm-dense subset of the continuous dual space.
The map f ↦ ‖f‖ defines a norm on X*. (See Theorems 1 and 2 below.)
The dual norm is a special case of the operator norm defined for each (bounded) linear map between normed vector spaces.
Since the ground field of X (ℝ or ℂ) is complete, X* is a Banach space.
The topology on X* induced by ‖·‖ turns out to be stronger than the weak-* topology on X*.
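As a concrete finite-dimensional illustration (not part of the article's definitions): on R^n with the ℓ_p norm, the dual norm of the functional x ↦ ⟨y, x⟩ is the ℓ_q norm of y, where 1/p + 1/q = 1. The following sketch checks this numerically:

```python
import numpy as np

# Dual of the l_p norm on R^n is the l_q norm with 1/p + 1/q = 1 (Hoelder duality).
p, q = 3.0, 1.5                       # 1/3 + 1/1.5 = 1
rng = np.random.default_rng(0)
y = rng.normal(size=5)                # functional x -> <y, x>
dual_norm = np.linalg.norm(y, ord=q)

# Empirical supremum of |<y, x>| over random vectors scaled to the l_p unit sphere.
xs = rng.normal(size=(100_000, 5))
xs /= np.linalg.norm(xs, ord=p, axis=1, keepdims=True)
empirical = np.abs(xs @ y).max()

print(f"l_q dual norm : {dual_norm:.4f}")
print(f"empirical sup : {empirical:.4f}   (never exceeds the dual norm)")
```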
The double dual of a normed linear space
The double dual (or second dual) X** of X is the dual of the normed vector space X*. There is a natural map φ : X → X**. Indeed, fo |
https://en.wikipedia.org/wiki/One-Net | ONE-NET is an open-source standard for wireless networking.
ONE-NET was designed for low-cost, low-power (battery-operated) control networks for applications such as home automation, security & monitoring, device control, and sensor networks. ONE-NET is not tied to any proprietary hardware or software, and can be implemented with a variety of low-cost off-the-shelf radio transceivers and micro controllers from a number of different manufacturers.
Wireless Transmission
ONE-NET uses UHF ISM radio transceivers and currently operates in the 868 MHz and 915 MHz frequencies with 25 channels available for use in the United States. The ONE-NET standard allows for implementation on other frequencies, and some work is being done to implement it in the 433 MHz and 2.4 GHz frequency ranges.
ONE-NET utilizes Wideband FSK (Frequency-shift keying) to encode data for transmission.
ONE-NET features a dynamic data rate protocol with a base data rate of 38.4 kbit/s. The specification allows per-node dynamic data rate configuration for data rates up to 230 kbit/s.
Network Characteristics
ONE-NET supports star, peer-to-peer and multi-hop topologies. Star network topology can be used to lower the complexity and cost of peripherals, and also simplifies encryption key management. In peer-to-peer mode, a master device configures and authorizes peer-to-peer transactions. By employing repeaters and a configurable repetition radius, multi-hop mode allows larger areas to be covered or dead areas to be routed around. Mesh routing is not supported.
Outdoor peer-to-peer range has been measured to over 500 m, indoor peer-to-peer range has been demonstrated from 60 m to over 100 m, and mesh mode can extend operational range to several kilometers.
Simple, block, and streaming transactions are supported.
Simple transactions typically use message types as defined by the ONE-NET protocol to exchange sensor data such as temperature or energy consumption, and control data such as on/off messages. Simple transactions |
https://en.wikipedia.org/wiki/SGI%20VPro | VPro, also known as Odyssey, is a computer graphics architecture for Silicon Graphics workstations. First released on the Octane2, it was subsequently used on the Fuel, Tezro workstations and the Onyx visualization systems, where it was branded InfinitePerformance.
VPro provides some very advanced capabilities such as per-pixel lighting, also known as "phong shading", (through the SGIX_fragment_lighting extension) and 48-bit RGBA color. On the other hand, later designs suffered from constrained bandwidth and poorer texture mapping performance compared to competing GPU solutions, which rapidly caught up to SGI in the market.
Four different Odyssey-based VPro graphics board revisions existed, designated V6, V8, V10 and V12. The first series were the V6 and V8, with 32MB and 128MB of RAM respectively; the V10 and V12 had double the geometry performance of the older V6/V8, but were otherwise similar. The V6 and V10 can have up to 8MB RAM allocated to textures, while V8 and V12 can have up to 108MB RAM used for textures. The V10 and V12 boards used in Fuel, Tezro and Onyx 3000 computers use a different XIO connector than the cards used in Octane2 workstations.
The VPro graphics subsystem consists of an SGI proprietary chip set and associated software. The chip set consists of the buzz ASIC, the pixel blaster and jammer (PB&J) ASIC, and associated SDRAM. The buzz ASIC is a single-chip graphics pipeline. It operates at 251 MHz and contains on-chip SRAM. The buzz ASIC has three interfaces:
Host (16-bit, 400-MHz peer-to-peer XIO link)
SDRAM (The SDRAM is 32 MB (V6 or V10) or 128 MB (V8 or V12); the memory bus operates at half the speed of the buzz ASIC.)
PB&J ASIC
As a result of a patent infringement settlement, SGI acquired rights to some of the Nvidia Quadro GPUs and released VPro-branded products (the V3, VR3, V7 and VR7) based on these (the GeForce 256, Quadro, Quadro 2 MXR, and Quadro 2 Pro, respectively). These cards share nothing with the original Odyssey line |
https://en.wikipedia.org/wiki/Phospholipid%20scramblase | Scramblase is a protein responsible for the translocation of phospholipids between the two monolayers of a lipid bilayer of a cell membrane. In humans, phospholipid scramblases (PLSCRs) constitute a family of five homologous proteins that are named as hPLSCR1–hPLSCR5. Scramblases are not members of the general family of transmembrane lipid transporters known as flippases. Scramblases are distinct from flippases and floppases. Scramblases, flippases, and floppases are three different types of enzymatic groups of phospholipid transportation enzymes. The inner-leaflet, facing the inside of the cell, contains negatively charged amino-phospholipids and phosphatidylethanolamine. The outer-leaflet, facing the outside environment, contains phosphatidylcholine and sphingomyelin. Scramblase is an enzyme, present in the cell membrane, that can transport (scramble) the negatively charged phospholipids from the inner-leaflet to the outer-leaflet, and vice versa.
Expression
Whereas hPLSCR1, -3, and -4 are expressed in a variety of tissues with few exceptions, expression of hPLSCR2 is restricted only to the testis. hPLSCR4 is not expressed in peripheral blood lymphocytes, whereas hPLSCR1 and -3 were not detected in the brain. However, the functional significance of this differential gene expression is not yet understood. While the gene and the mRNA of hPLSCR5 provide evidence of its existence, the protein has yet to be described in the literature.
Structure
Scramblase proteins contain a region of conservation that possesses a 12-stranded beta barrel surrounding a central alpha helix. This structure shows similarity to the Tubby protein.
Enzyme activation
The enzymatic activity of scramblase depends on the calcium concentration present inside the cell. The calcium concentration inside cells is, under normal conditions, very low; therefore, scramblase has a low activity under resting conditions. Phospholipid redistribution is triggered by increased cytosolic calcium and seems |
https://en.wikipedia.org/wiki/Neurocognition | Neurocognitive functions are cognitive functions closely linked to the function of particular areas, neural pathways, or cortical networks in the brain, ultimately served by the substrate of the brain's neurological matrix (i.e. at the cellular and molecular level). Therefore, their understanding is closely linked to the practice of neuropsychology and cognitive neuroscience – two disciplines that broadly seek to understand how the structure and function of the brain relate to cognition and behaviour.
A neurocognitive deficit is a reduction or impairment of cognitive function in one of these areas, but particularly when physical changes can be seen to have occurred in the brain, such as aging related physiological changes or after neurological illness, mental illness, drug use, or brain injury.
A clinical neuropsychologist may specialise in using neuropsychological tests to detect and understand such deficits, and may be involved in the rehabilitation of an affected person. The discipline that studies neurocognitive deficits to infer normal psychological function is called cognitive neuropsychology.
Etymology
The term neurocognitive is a recent addition to the nosology of clinical psychiatry and psychology. It was rarely used before the publication of the DSM-5, which updated the psychiatric classification of disorders listed in the "Delirium, Dementia, and Amnestic and Other Cognitive Disorders" chapter of the DSM-IV. Following the 2013 publication of the DSM-5, use of the term "neurocognitive" increased steadily.
Adding the prefix "neuro-" to the word "cognitive" is an example of pleonasm because, analogous to expressions like "burning fire" and "black darkness," the prefix "neuro-" adds no further useful information to the term "cognitive". In the field of clinical neurology, clinicians continue using the simpler term "cognitive", due to the absence of evidence for human cognitive processes that do not involve the nervous system.
See also
Cognition
|
https://en.wikipedia.org/wiki/Neopluramycin | Neopluramycin is an antibiotic that inhibits nucleic acid synthesis. It has been isolated from the cultured broth of a strain of Streptomyces pluricolorescens as orange crystals, and analytical data and molecular weight determination are consistent with the empirical formula .
Neopluramycin resembles pluramycin A, but is differentiated by its antibacterial spectrum, toxicity, thin-layer chromatography, and infrared absorption spectrum.
Neopluramycin inhibits growth of Gram-positive bacteria, leukemia L-1210 in mice and Yoshida rat sarcoma cells in tissue culture. |
https://en.wikipedia.org/wiki/Mitovirus | Mitovirus is a genus of positive-strand RNA viruses, in the family Mitoviridae. Fungi serve as natural hosts. There are five species in the genus.
Structure
Mitoviruses have no true virion. They do not have structural proteins or a capsid.
Genome
Mitoviruses have nonsegmented, linear, positive-sense, single-stranded RNA genomes. The genome has one open reading frame which encodes the RNA-dependent RNA polymerase (RdRp). The genome is associated with the RdRp in the cytoplasm of the fungi host and forms a naked ribonucleoprotein complex.
Life cycle
Viral replication is cytoplasmic. Replication follows the positive-strand RNA virus replication model. Positive-strand RNA virus transcription is the method of transcription. The virus exits the host cell by cell-to-cell movement. Fungi serve as the natural host. Transmission routes are parental and sexual.
Taxonomy
There are five species in the genus:
Cryphonectria mitovirus 1
Ophiostoma mitovirus 4
Ophiostoma mitovirus 5
Ophiostoma mitovirus 6
Ophiostoma mitovirus 3a |
https://en.wikipedia.org/wiki/Coframe | In mathematics, a coframe or coframe field on a smooth manifold is a system of one-forms or covectors which form a basis of the cotangent bundle at every point. In the exterior algebra of , one has a natural map from , given by . If is dimensional a coframe is given by a section of such that . The inverse image under of the complement of the zero section of forms a principal bundle over , which is called the coframe bundle. |
https://en.wikipedia.org/wiki/Distress%20signal | A distress signal, also known as a distress call, is an internationally recognized means for obtaining help. Distress signals are communicated by transmitting radio signals, displaying a visually observable item or illumination, or making a sound audible from a distance.
A distress signal indicates that a person or group of people, watercraft, aircraft, or other vehicle is threatened by a serious or imminent danger and requires immediate assistance. Use of distress signals in other circumstances may be against local or international law. An urgency signal is available to request assistance in less critical situations.
For distress signalling to be the most effective, two parameters must be communicated:
Alert or notification of an emergency in progress
Position or location (or localization or pinpointing) of the party in distress.
For example, a single aerial flare alerts observers to the existence of a vessel in distress somewhere in the general direction of the flare sighting on the horizon but extinguishes within one minute or less. A hand-held flare burns for three minutes and can be used to localize or pinpoint more precisely the exact location or position of the party in trouble. An EPIRB both notifies or alerts authorities and at the same time provides position indication information.
Maritime
Distress signals at sea are defined in the International Regulations for Preventing Collisions at Sea and in the International Code of Signals. Mayday signals must only be used where there is grave and imminent danger to life. Otherwise, urgent signals such as pan-pan can be sent. Most jurisdictions have large penalties for false, unwarranted, or prank distress signals.
Distress can be indicated by any of the following officially sanctioned methods:
Transmitting a spoken voice Mayday message by radio over very high frequency channel 16 (156.8 MHz) or medium frequency on 2182 kHz
Transmitting a digital distress signal by activating (or pressing) the distress |
https://en.wikipedia.org/wiki/Population%20density | Population density (in agriculture: standing stock or plant density) is a measurement of population per unit land area. It is mostly applied to humans, but sometimes to other living organisms too. It is a key geographical term.
Biological population densities
Population density is population divided by total land area, sometimes including seas and oceans, as appropriate.
Low densities may cause an extinction vortex and further reduce fertility. This is called the Allee effect after the scientist who identified it. Examples of the causes of reduced fertility in low population densities are:
Increased problems with locating sexual mates
Increased inbreeding
Human densities
Population density is the number of people per unit of area, usually transcribed as "per square kilometer" or square mile, and which may include or exclude, for example, areas of water or glaciers. Commonly this is calculated for a county, city, country, another territory or the entire world.
The world's population is around 8,000,000,000 and the Earth's total area (including land and water) is about 510 million square kilometers. Therefore, from this very crude type of calculation, the worldwide human population density is approximately 8,000,000,000 ÷ 510,000,000 = 15.7 people per square kilometer. However, if only the Earth's land area of about 149 million square kilometers is taken into account, then human population density is about 54 people per square kilometer. This includes all continental and island land area, including Antarctica. However, if Antarctica is excluded, then population density rises to over 59 people per square kilometer.
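The arithmetic above can be reproduced directly; the area figures used here are approximate:

```python
# Approximate global population-density figures (areas in square kilometres).
population = 8_000_000_000
total_area = 510_000_000          # land and water
land_area = 149_000_000           # all continental and island land, incl. Antarctica
antarctica = 14_000_000

print(f"Whole Earth surface : {population / total_area:.1f} people per km^2")          # ~15.7
print(f"Land only           : {population / land_area:.1f} people per km^2")           # ~53.7
print(f"Excluding Antarctica: {population / (land_area - antarctica):.1f} per km^2")   # ~59
```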
The European Commission's Joint Research Centre (JRC) has developed a suite of (open and free) data and tools named the Global Human Settlement Layer (GHSL) to improve the science for policy support to the European Commission Directorate Generals and Services and as support to the United Nations system.
Several of the most densely populated territories in the world are city-states, microstates and urban dependencies. In fact, 95% of the world's population is concentrated on just 10% of the world's land. |
https://en.wikipedia.org/wiki/Arrhenotoky | Arrhenotoky (from Greek -τόκος -tókos "birth of -" + ἄρρην árrhēn "male person"), also known as arrhenotokous parthenogenesis, is a form of parthenogenesis in which unfertilized eggs develop into males. In most cases, parthenogenesis produces exclusively female offspring, hence the distinction.
The set of processes included under the term arrhenotoky depends on the author: arrhenotoky may be restricted to the production of males that are haploid (haplodiploidy); may include diploid males that permanently inactivate one set of chromosomes (parahaploidy); or may be used to cover all cases of males being produced by parthenogenesis (including such cases as aphids, where the males are XO diploids). The form of parthenogenesis in which females develop from unfertilized eggs is known as thelytoky; when both males and females develop from unfertilized eggs, the term "deuterotoky" is used.
In the most commonly used sense of the term, arrhenotoky is synonymous with haploid arrhenotoky or haplodiploidy: the production of haploid males from unfertilized eggs in insects having a haplodiploid sex-determination system. Males are produced parthenogenetically, while diploid females are usually produced biparentally from fertilized eggs. In a similar phenomenon, parthenogenetic diploid eggs develop into males by converting one set of their chromosomes to heterochromatin, thereby inactivating those chromosomes. This is referred to as diploid arrhenotoky or parahaploidy.
Arrhenotoky occurs in members of the insect order Hymenoptera (bees, ants, and wasps) and the Thysanoptera (thrips). The system also occurs sporadically in some spider mites, Hemiptera, Coleoptera (bark beetles), and rotifers.
See also
Apomixis
Genomic imprinting
Pseudo-arrhenotoky
Thelytoky
Notes |
https://en.wikipedia.org/wiki/Microsoft%20Messaging | Messaging (also known as Microsoft Messaging and, more recently, Windows Operator Messages) is an instant messaging Universal Windows Platform app for Windows 8.0, Windows 10 and Windows 10 Mobile. The mobile version allows SMS, MMS and RCS messaging. The desktop version is restricted to showing SMS messages sent via Skype and billing SMS messages from an LTE operator.
More recently, the app was refocused as an SMS data-plan app, in which the user's mobile operator sends messages about the data plan; this is because the app's messaging functionality moved to Skype. It was also partially renamed Windows Operator Messages.
External links
Send a text message — Microsoft Support |
https://en.wikipedia.org/wiki/Diseases%20from%20Space | Diseases from Space is a book published in 1979 that was authored by astronomers Fred Hoyle and Chandra Wickramasinghe, where they propose that many of the most common diseases which afflict humanity, such as influenza, the common cold and whooping cough, have their origins in extraterrestrial sources. The two authors argue the case for outer space being the main source for these pathogens or at least their causative agents.
The claim connecting terrestrial disease and extraterrestrial pathogens was rejected by the scientific community.
Overview
Fred Hoyle and Chandra Wickramasinghe spent over 20 years investigating the nature and composition of interstellar dust. Though many hypotheses regarding this dust had been postulated by various astronomers since the middle of the 19th century, all were found to be wanting as and when new data on the gas and dust clouds became available. Chandra Wickramasinghe proposed a polymeric composition based on the molecule formaldehyde (H2CO).
In 1974 Wickramasinghe first proposed the hypothesis that some dust in interstellar space was largely organic (containing carbon and nitrogen), and followed this up with other research confirming the hypothesis. Wickramasinghe also proposed and confirmed the existence of polymeric compounds based on formaldehyde. Fred Hoyle and Wickramasinghe later proposed the identification of bicyclic aromatic compounds from an analysis of the ultraviolet extinction absorption at 2175 Å, thus demonstrating the existence of polycyclic aromatic hydrocarbon molecules in space.
Hoyle and Wickramasinghe went further and speculated that the overall spectroscopic data of cosmic dust and gas clouds also matched those for desiccated bacteria. This led them to conclude that diseases such as influenza and the common cold are incident from space and fall upon the Earth in what they term "pathogenic patches." Hoyle and Wickramasinghe viewed the process of evolution in a manner at variance with the sta |
https://en.wikipedia.org/wiki/Poisson%20summation%20formula | In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
Forms of the equation
Consider an aperiodic function with Fourier transform alternatively designated by and
The basic Poisson summation formula is: Σ_n f(n) = Σ_k f̂(k), where the sums run over all integers n and k and f̂ denotes the Fourier transform of f, with the convention f̂(ξ) = ∫ f(x) e^(−2πiξx) dx.
Also consider periodic functions, where parameters and are in the same units as :
Then is a special case (P=1, x=0) of this generalization:
which is a Fourier series expansion with coefficients that are samples of the function Similarly:
also known as the important Discrete-time Fourier transform.
The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as
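A quick numerical sanity check of the basic formula, using a Gaussian whose Fourier transform is known in closed form under the convention f̂(ξ) = ∫ f(x) e^(−2πiξx) dx; the parameter a = 0.5 is an arbitrary choice:

```python
import math

# For f(x) = exp(-pi*a*x^2), the Fourier transform is
# fhat(xi) = a**-0.5 * exp(-pi*xi**2/a), so Poisson summation gives
#   sum_n f(n) = sum_k fhat(k).
a = 0.5
f = lambda x: math.exp(-math.pi * a * x * x)
fhat = lambda xi: a ** -0.5 * math.exp(-math.pi * xi * xi / a)

N = 50  # both series converge extremely fast; 50 terms is far more than enough
lhs = sum(f(n) for n in range(-N, N + 1))
rhs = sum(fhat(k) for k in range(-N, N + 1))
print(lhs, rhs)   # the two sums agree to machine precision
```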
Applicability
holds provided is a continuous integrable function which satisfies
for some and every Note that such is uniformly continuous, this together with the decay assumption on , show that the series defining converges uniformly to a continuous function. holds in the strong sense that both sides converge uniformly and absolutely to the same limit.
holds in a pointwise sense under the strictly weaker assumption that has bounded variation and
The Fourier series on the right-hand side of is then understood as a (conditionally convergent) limit of symmetric partial sums.
As shown above, holds under the much less restrictive assumption that is in , but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) F |
https://en.wikipedia.org/wiki/The%20Quarterly%20Journal%20of%20Mechanics%20and%20Applied%20Mathematics | The Quarterly Journal of Mechanics and Applied Mathematics is a quarterly, peer-reviewed scientific journal covering research on classical mechanics and applied mathematics. The editors-in-chief are P. W. Duck, P. A. Martin and N. V. Movchan. The journal was established in 1948 to meet a need for a separate English journal that publishes articles focusing on classical mechanics only, in particular, including fluid mechanics and solid mechanics, that were usually published in journals like Proceedings of the Royal Society and Philosophical Transactions of the Royal Society.
Abstracting and indexing
The journal is abstracted and indexed in, |
https://en.wikipedia.org/wiki/Stockfish | Stockfish is unsalted fish, especially cod, dried by cold air and wind on wooden racks (which are called "hjell" in Norway) on the foreshore. The drying of food is the world's oldest known preservation method, and dried fish has a storage life of several years. The method is cheap and effective in suitable climates; the work can be done by the fisherman and family, and the resulting product is easily transported to market.
Over the centuries, several variants of dried fish have evolved. The stockfish (fresh dried, not salted) category is often mistaken for the klippfisk, or salted cod, category where the fish is salted before drying. Salting was not economically feasible until the 17th century, when cheap salt from southern Europe became available to the maritime nations of northern Europe.
Stockfish is cured in a process called fermentation, in which cold-adapted bacteria mature the fish, similar to the maturing process of cheese.
In English legal records of the medieval period, stock fishmongers are differentiated from ordinary fishmongers when the occupation of a plaintiff or defendant is recorded.
Etymology
The word stockfish is a loan word from West Frisian stokfisk (stick fish), possibly referring to the wooden racks on which stockfish are traditionally dried or because the dried fish resembles a stick. "Stock" may also refer to a wooden yoke or harness on a horse or mule, once used to carry large fish from the sea or after drying/smoking for trade in nearby villages. This etymology is consistent with the fact that "Stockmaß" is German for the height of a horse at the withers.
Importance
Stockfish is Norway's longest sustained export commodity. Stockfish is first mentioned as a commodity in the 13th-century Icelandic prose work Egil's Saga, where chieftain Thorolf Kveldulfsson, in the year 875 AD, ships stockfish from Helgeland in mid-Norway to Britain. This product accounted for most of Norway's trade income from the Viking Age throughout the Medieval per |
https://en.wikipedia.org/wiki/Yakalo | The yakalo is a cross of the yak (Bos grunniens) and the American bison (Bison bison, known as a buffalo in North America). It was produced by hybridisation experiments in the 1920s, when crosses were made between yak bulls and both pure bison cows and bison-cattle hybrid cows. As with many other inter-specific crosses, only female hybrids were found to be fertile (Haldane's rule). Few of the hybrids survived, and the experiments were discontinued in 1928.
See also
Beefalo
Dzo
Żubroń
Footnotes
Bovid hybrids
Intergeneric hybrids |
https://en.wikipedia.org/wiki/Strange%E2%80%93Rahman%E2%80%93Smith%20equation | The Strange–Rahman–Smith equation is used in the cryoporometry method of measuring porosity.
NMR cryoporometry is a recent technique for measuring total porosity and pore size distributions. NMRC is based on two equations: the Gibbs–Thomson equation, which maps the melting point depression to pore size, and the Strange–Rahman–Smith equation, which maps the melted signal amplitude at a particular temperature to pore volume.
Equation
If the pores of the porous material are filled with a liquid, then the incremental volume of the pores dv_p with pore diameter between x and x + dx may be obtained from the increase in melted liquid volume dv for an increase of temperature between T and T + dT by: dv_p/dx = (k_GT/x²) · (dv/dT)
Where: k_GT is the Gibbs–Thomson coefficient for the liquid in the pores. |
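A minimal numerical sketch of how the equation is applied, assuming a measured melting curve v(T) (melted signal amplitude versus temperature) and a known Gibbs–Thomson coefficient; all numbers below are synthetic and illustrative:

```python
import numpy as np

# Strange-Rahman-Smith: dv_p/dx = (k_GT / x**2) * dv/dT, with the Gibbs-Thomson
# relation x = k_GT / (T_m_bulk - T) mapping each temperature to a pore diameter.
k_gt = 58.0        # example Gibbs-Thomson coefficient in K*nm (illustrative value)
t_m_bulk = 273.15  # bulk melting point of the probe liquid, K

T = np.linspace(230.0, 272.0, 200)               # synthetic melting curve
v = 1.0 / (1.0 + np.exp(-(T - 260.0) / 2.0))     # melted signal amplitude vs T

x = k_gt / (t_m_bulk - T)                        # pore diameter probed at each T
dv_dT = np.gradient(v, T)
dvp_dx = (k_gt / x**2) * dv_dT                   # pore volume distribution over x

print(f"peak of distribution near x = {x[np.argmax(dvp_dx)]:.1f} nm")
```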
https://en.wikipedia.org/wiki/Non-wellfounded%20mereology | In philosophy, specifically metaphysics, mereology is the study of parthood relationships. In mathematics and formal logic, wellfoundedness prohibits x < x for any x.
Thus non-wellfounded mereology treats topologically circular, cyclical, repetitive, or other eventual self-containment.
More formally, non-wellfounded partial orders may exhibit x < x for some x, whereas well-founded orders prohibit that.
See also
Aczel's anti-foundation axiom
Peter Aczel
John Barwise
Steve Awodey
Dana Scott
External links
Mereology
Mathematical logic |
https://en.wikipedia.org/wiki/ScaLAPACK | The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.
ScaLAPACK depends on PBLAS operations in the same way LAPACK depends on BLAS.
As of version 2.0 the code base directly includes PBLAS and BLACS and has dropped support for PVM.
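To illustrate the two-dimensional block cyclic decomposition that ScaLAPACK assumes (a generic sketch of the layout rule, not code from the library): the global matrix is divided into mb × nb blocks that are dealt cyclically onto a Pr × Pc process grid.

```python
# Owner of global matrix entry (i, j) under a 2D block-cyclic distribution.
# Blocks of size mb x nb are dealt cyclically onto a p_rows x p_cols process grid.
def owner(i, j, mb, nb, p_rows, p_cols):
    return (i // mb) % p_rows, (j // nb) % p_cols

# Example: 8x8 matrix, 2x2 blocks, 2x2 process grid.
mb = nb = 2
p_rows = p_cols = 2
for i in range(8):
    print(" ".join(str(owner(i, j, mb, nb, p_rows, p_cols)) for j in range(8)))
```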
Examples
Programming with Big Data in R fully utilizes ScaLAPACK and two-dimensional block cyclic decomposition for Big Data statistical analysis which is an extension to R. |
https://en.wikipedia.org/wiki/Energetic%20neutral%20atom | Energetic Neutral Atom (ENA) imaging, often described as "seeing with atoms", is a technology used to create global images of otherwise invisible phenomena in the magnetospheres of planets and throughout the heliosphere.
Solar wind is composed of plasma emitted from the Sun. Solar wind plasma is mostly composed of hydrogen, bare electrons and protons, along with some other kinds of nuclei, which are mostly helium. The space between solar systems is of a similar composition, but those components come from other stars in the Milky Way galaxy. These charged particles can be redirected by magnetic fields; for instance, Earth's magnetic field deflects these solar wind particles around the Earth. Occasionally, charged particles within plasmas capture electrons from colliding neutral atoms, turning them neutral and not subject to large-scale electromagnetic fields and waves. However, because they are still moving very fast, they travel mostly in a straight line, thereby still being subject to gravity. These are called Energetic Neutral Atoms. ENA images are constructed from the detection of these energetic neutral atoms.
Earth's magnetosphere preserves its atmosphere and protects life on Earth from cell-damaging radiation. This region of "space weather" is the site of geomagnetic storms that disrupt communications systems and pose radiation hazards to humans traveling in airplanes (at high altitude and latitude) or in orbiting spacecraft. Geomagnetic weather systems have been late to benefit from the satellite imagery taken for granted in weather forecasting and space physics because their origins in magnetospheric plasmas present the added problem of invisibility.
The heliosphere protects the Solar System from the majority of cosmic rays, but is so remote that only an imaging technique such as ENA imaging will reveal its properties. The heliosphere's structure is due to the invisible interaction between the solar wind and cold gas from the local interstellar m |
https://en.wikipedia.org/wiki/Cognition%20Network%20Technology | Cognition Network Technology (CNT), also known as Definiens Cognition Network Technology, is an object-based image analysis method developed by Nobel laureate Gerd Binnig together with a team of researchers at Definiens AG in Munich, Germany. It serves for extracting information from images using a hierarchy of image objects (groups of pixels), as opposed to traditional pixel processing methods.
To emulate the human mind's cognitive powers, Definiens used patented image segmentation and classification processes, and developed a method to render knowledge in a semantic network. CNT examines pixels not in isolation, but in context. It builds up a picture iteratively, recognizing groups of pixels as objects. It uses the color, shape, texture and size of objects as well as their context and relationships to draw conclusions and inferences, similar to human analysis.
History
In 1994 Professor Gerd Binnig founded Definiens. CNT was first available with the launch of the eCognition software in May 2000. In June 2010, Trimble Navigation Ltd (NASDAQ: TRMB) acquired Definiens business asset in earth sciences markets, including eCognition software, and also licensed Definiens' patented CNT. In 2014, Definiens was acquired by MedImmune, the global biologics research and development arm of AstraZeneca, for an initial consideration of $150 million.
Software
Definiens Tissue Studio
Definiens Tissue Studio is a digital pathology image analysis software application based on CNT.
The intended use of Definiens Tissue Studio is for biomarker translational research in formalin-fixed, paraffin-embedded tissue samples which have been treated with immunohistochemical staining assays, or hematoxylin and eosin (H&E).
The central concept behind Definiens Tissue Studio is a user interface that facilitates machine learning from example digital histopathology images in order to derive an image analysis solution suitable for the measurement of biomarkers and/or histological features within p |
https://en.wikipedia.org/wiki/VERSAdos | VERSAdos is an operating system dating back to the early 1980s for use on the Motorola 68000 development system called the EXORmacs which featured the VERSAbus and an array of option cards. They were typically connected to CDC Phoenix disk drives running one to four 14-inch platters. The EXORmacs was used to emulate a 680xx processor in-circuit, speeding development of 680xx based systems. It also hosted several compilers and assemblers.
VERSAdos and the EXORmacs were produced by Motorola's Microsystems Division.
Overview
VERSAdos was a real-time, multi-user operating system. It was the follow on product to the single user MDOS that ran the 6800 development system called the EXORciser.
Both systems featured a harness with a CPU-socket-compatible connector.
A Modula 2 compiler was ported to VERSAdos.
Commands
The following list of commands and utilities are supported by VERSAdos.
^
ACCT
ARGUMENTS
ASSIGN
BACKUP
BATCH
BSTOP
BTERM
BUILDS
BYE
CANCEL
CHAIN
CLOSE
CONFIG
CONNECT
CONTINUE
COPY
CREF
DATE
DEFAULTS
DEL
DIR
DMT
DUMP
DUMPANAL
ELIMINATE
EMFGEN
END
FREE
HELP
INIT
LIB
LIST
LOAD
LOGOFF
MBLM
MERGEOS
MIGR
MT
NEWS
NOARGUMENTS
NOVALID
OFF
OPTION
PASS
PASSWORD
PATCH
PROCEED
PRTDUMP
QUERY
R?
RENAME
REPAIR
RETRY
SCRATCH
SECURE
SESSIONS
SNAPSHOT
SPL
SPOOL
SRCCOM
START
STOP
SWORD
SYSANAL
TERMINATE
TIME
TRANSFER
UPLOADS
USE
VALID
See also
CP/M-68K |
https://en.wikipedia.org/wiki/Group%20III%20pyridoxal-dependent%20decarboxylases | In molecular biology, group III pyridoxal-dependent decarboxylases are a family of bacterial enzymes comprising ornithine decarboxylase , lysine decarboxylase and arginine decarboxylase .
Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence. Group III comprises prokaryotic ornithine and lysine decarboxylase and the prokaryotic biodegradative type of arginine decarboxylase.
Structure
These enzymes consist of several conserved domains.
The N-terminal domain has a flavodoxin-like fold, and is termed the "wing" domain because of its position in the overall 3D structure. Ornithine decarboxylase from Lactobacillus 30a (L30a OrnDC) is representative of the large, pyridoxal-5'-phosphate-dependent decarboxylases that act on lysine, arginine or ornithine. The crystal structure of the L30a OrnDC has been solved to 3.0 Å resolution. Six dimers related by C6 symmetry compose the enzymatically active dodecamer (approximately 10^6 Da). Each monomer of L30a OrnDC can be described in terms of five sequential folding domains. The amino-terminal domain, residues 1 to 107, consists of a five-stranded beta-sheet termed the "wing" domain. Two wing domains of each dimer project inward towards the centre of the dodecamer and contribute to dodecamer stabilisation.
The major domain contains a conserved lysine residue, which is the site of attachment of the pyridoxal-phosphate group.
See also
Group I pyridoxal-dependent decarboxylases
Group II pyridoxal-dependent decarboxylases
Group IV pyridoxal-dependent decarboxylases |
https://en.wikipedia.org/wiki/Canada%27s%20Olympic%20Broadcast%20Media%20Consortium | Established in 2007, Canada's Olympic Broadcast Media Consortium (legal name 7048467 Canada Inc., also sometimes referred to informally in branding as CTV Olympics and RDS Olympiques, additionally referred to as the National Olympic Network by BBM Canada) was a joint venture set up by Canadian media companies Bell Media (formerly CTVglobemedia) and Rogers Media to produce the Canadian broadcasts of the 2010 Winter Olympics in Vancouver, British Columbia, Canada, and the 2012 Summer Olympics in London, England, as well as the two corresponding Paralympic Games. Bell owned 80% of the joint venture, and Rogers owned 20%.
The consortium encompassed many of the properties owned by both companies, including Bell Media's CTV Television Network, TSN, RDS and RDS Info, and Rogers Media's Omni Television, Sportsnet, OLN, and the Rogers radio stations group. Several other broadcasters carried consortium coverage, including Noovo (formerly V), and several channels owned by Asian Television Network. Finally, dedicated websites in English and French (ctvolympics.ca and rdsolympiques.ca) were set up to stream live coverage over the Internet to Canadian viewers. The consortium replaced CBC Sports, which had held the Canadian rights to all Olympics beginning with the 1996 games, although some cable rights had been sub-licensed to TSN / RDS beginning in 1998.
Rogers announced in September 2011 that it would withdraw from the consortium following London 2012, and therefore not participate in its bid for rights to the 2014 Winter Olympics and 2016 Summer Olympics. The company cited scheduling conflicts and financial considerations for the decision. Bell Media then announced a new partnership with the CBC to bid for Canadian broadcasting rights of Sochi 2014 and Rio 2016. Broadcast details for the joint bid were never released. The joint Bell/CBC bid was considered the prohibitive favourite to win the rights when the International Olympic Committee accepted bids. However, the Bell/CBC |
https://en.wikipedia.org/wiki/Topological%20censorship | The topological censorship theorem (if valid) states that general relativity does not allow an observer to probe the topology of spacetime: any topological structure collapses too quickly to allow light to traverse it. More precisely, in a globally hyperbolic, asymptotically flat spacetime satisfying the null energy condition, every causal curve from past null infinity to future null infinity is fixed-endpoint homotopic to a curve in a topologically trivial neighbourhood of infinity.
A 2013 paper by Sergey Krasnikov claims that the topological censorship theorem was not proven in the original article because of a gap in the proof. |
https://en.wikipedia.org/wiki/DNA%20transposon | DNA transposons are DNA sequences, sometimes referred to as "jumping genes", that can move and integrate to different locations within the genome. They are class II transposable elements (TEs) that move through a DNA intermediate, as opposed to class I TEs, retrotransposons, which move through an RNA intermediate. DNA transposons can move in the DNA of an organism via a single- or double-stranded DNA intermediate. DNA transposons have been found in both prokaryotic and eukaryotic organisms. They can make up a significant portion of an organism's genome, particularly in eukaryotes. In prokaryotes, TEs can facilitate the horizontal transfer of antibiotic resistance or other genes associated with virulence. After replicating and propagating in a host, all transposon copies become inactivated and are lost unless the transposon passes to a genome by starting a new life cycle with horizontal transfer. DNA transposons do not insert themselves randomly into the genome, but rather show preference for specific sites.
With regard to movement, DNA transposons can be categorized as autonomous and nonautonomous. Autonomous ones can move on their own, while nonautonomous ones require the presence of another transposable element's gene, transposase, to move. There are three main classifications for movement for DNA transposons: "cut and paste," "rolling circle" (Helitrons), and "self-synthesizing" (Polintons). These distinct mechanisms of movement allow them to move around the genome of an organism. Since DNA transposons cannot synthesize DNA, they replicate using the host replication machinery. These three main classes are then further broken down into 23 different superfamilies characterized by their structure, sequence, and mechanism of action.
DNA transposons are a cause of gene expression alterations. As newly inserted DNA into active coding sequences, they can disrupt normal protein functions and cause mutations. Class II TEs make up about 3% of the |
https://en.wikipedia.org/wiki/Point%E2%80%93line%E2%80%93plane%20postulate | In geometry, the point–line–plane postulate is a collection of assumptions (axioms) that can be used in a set of postulates for Euclidean geometry in two (plane geometry), three (solid geometry) or more dimensions.
Assumptions
The following are the assumptions of the point-line-plane postulate:
Unique line assumption. There is exactly one line passing through two distinct points.
Number line assumption. Every line is a set of points which can be put into a one-to-one correspondence with the real numbers. Any point can correspond with 0 (zero) and any other point can correspond with 1 (one).
Dimension assumption. Given a line in a plane, there exists at least one point in the plane that is not on the line. Given a plane in space, there exists at least one point in space that is not in the plane.
Flat plane assumption. If two points lie in a plane, the line containing them lies in the plane.
Unique plane assumption. Through three non-collinear points, there is exactly one plane.
Intersecting planes assumption. If two different planes have a point in common, then their intersection is a line.
The first three assumptions of the postulate, as given above, are used in the axiomatic formulation of the Euclidean plane in the secondary school geometry curriculum of the University of Chicago School Mathematics Project (UCSMP).
History
The axiomatic foundation of Euclidean geometry can be dated back to the books known as Euclid's Elements (circa 300 B.C.). These five initial axioms (called postulates by the ancient Greeks) are not sufficient to establish Euclidean geometry. Many mathematicians have produced complete sets of axioms which do establish Euclidean geometry. One of the most notable of these is due to Hilbert who created a system in the same style as Euclid. Unfortunately, Hilbert's system requires 21 axioms. Other systems have used fewer (but different) axioms. The most appealing of these, from the viewpoint of having the fewest axioms, is due to G.D. Birkhoff ( |
https://en.wikipedia.org/wiki/Yellow%20Dog%20Linux | Yellow Dog Linux (YDL) is a discontinued free and open-source operating system for high-performance computing on multi-core processor computer architectures, focusing on GPU systems and computers using the POWER7 processor. The original developer was Terra Soft Solutions, which was acquired by Fixstars in October 2008. Yellow Dog Linux was first released in the spring of 1999 for Apple Macintosh PowerPC-based computers. The most recent version, Yellow Dog Linux 7, was released on August 6, 2012. Yellow Dog Linux lent its name to the popular YUM Linux software updater, derived from YDL's YUP (Yellowdog UPdater) and thus called Yellowdog Updater, Modified.
Features
Yellow Dog Linux is based on Red Hat Enterprise Linux/CentOS and relies on the RPM Package Manager. Its software includes applications such as Ekiga (a voice-over-IP and videoconferencing application), GIMP (a raster graphics editor), Gnash (a free Adobe Flash player), gThumb (an image viewer), the Mozilla Firefox Web browser, the Mozilla Thunderbird e-mail and news client, the OpenOffice.org productivity suite, Pidgin (an instant messaging and IRC client), the Rhythmbox music player, and the KDE Noatun and Totem media players.
Starting with YDL version 5.0 'Phoenix', Enlightenment is the Yellow Dog Linux default desktop environment, although GNOME and KDE are also included.
Like other Linux distributions, Yellow Dog Linux supports software development with GCC (compiled with support for C, C++, Java, and Fortran), the GNU C Library, GDB, GLib, the GTK+ toolkit, Python, the Qt toolkit, Ruby and Tcl. Standard text editors such as Vim and Emacs are complemented with IDEs such as Eclipse and KDevelop, as well as by graphical debuggers such as KDbg. Standard document preparation tools such as TeX and LaTeX are also included.
Yellow Dog Linux includes software for running a Web server (such as Apache/httpd, Perl, and PHP), database server (such as MySQL and PostgreSQL), and network server (NFS and Webmin). |
https://en.wikipedia.org/wiki/Network%20security%20policy | A network security policy (NSP) is a generic document that outlines rules for computer network access, determines how policies are enforced and lays out some of the basic architecture of the company security/ network security environment. The document itself is usually several pages long and written by a committee.
A security policy is a complex document, meant to govern data access, web-browsing habits, use of passwords, encryption, email attachments and more. It specifies these rules for individuals or groups of individuals throughout the company. The policies could be expressed as a set of instructions that can be understood by special-purpose network hardware dedicated to securing the network.
A security policy should keep malicious users out and also exert control over potentially risky users within an organization.
Understanding what information and services are available and to which users, as well as what the potential is for damage and whether any protection is already in place to prevent misuse, is important when writing a network security policy. In addition, the security policy should dictate a hierarchy of access permissions, granting users access only to what is necessary for the completion of their work. The National Institute of Standards and Technology provides an example security-policy guideline.
See also
Internet security
Security engineering
Computer security
Cybersecurity information technology list
Network security
Industrial espionage
Information security
Security policy |
https://en.wikipedia.org/wiki/Extelligence | Extelligence is a term coined by Ian Stewart and Jack Cohen in their 1997 book Figments of Reality. They define it as the cultural capital that is available to us in the form of external media (e.g. tribal legends, folklore, nursery rhymes, books, videotapes, CD-ROMs, etc.)
They contrast extelligence with intelligence, or the knowledge and cognitive processes within the brain. Furthermore, they regard the 'complicity' of extelligence and intelligence as fundamental to the development of consciousness in evolutionary terms for both the species and the individual. 'Complicity' is a combination of complexity and simplicity, and Cohen and Stewart use it to express the interdependent relationship between knowledge-inside-one's-head and knowledge-outside-one's-head that can be readily accessed.
Although Cohen's and Stewart's respective disciplines are biology and mathematics, their description of the complicity of intelligence and extelligence is in the tradition of Jean Piaget, Belinda Dewar and David A. Kolb. Philosophers, notably Popper, have also considered the relation between subjective knowledge (which he calls world 2), the physical world (world 1) and the objective knowledge represented by man-made artifacts (world 3).
One of Cohen and Stewart's contributions is the way they relate, through the idea of complicity, the individual to the sum of human knowledge. From the mathematics of complexity and game theory, they use the idea of phase space and talk about extelligence space. There is a total phase space (intelligence space) for the human race, which consists of everything that can be known and represented. Within this there is a smaller set of what is known at any given time. Cohen and Stewart propose the idea that each individual can access the parts of the extelligence space with which their intelligence is complicit.
In other words, there has to be, at some level, an appreciation of what is out there and what it means. Much of this 'appreciation' falls into the c |
https://en.wikipedia.org/wiki/Situational%20strength | Situational strength is defined as cues provided by environmental forces regarding the desirability of potential behaviors. Situational strength is said to result in psychological pressure on the individual to engage in and/or refrain from particular behaviors. A consequence of this psychological pressure is that, regardless of an individual's personality, they are likely to act in the manner the situation dictates. As such, when strong situations (situations where situational strength is high) exist, the relationship between personality variables (for example, extraversion or risk-taking behaviors) and behaviors is reduced, because no matter what the personality of the individual is, they will act in a way dictated by the situation. When weak situations exist, there is less structure and more ambiguity with respect to what behaviors to perform.
Contrasting strong and weak situations
An example of a strong situation is a red traffic light. Traffic rules dictate how people are supposed to act when they see a red light, and this influence often prevents people from engaging in behaviors that are consistent with their personalities. For example, most people, no matter whether they are daring or cautious, will stop in front of a red traffic light. Therefore, one could not reasonably predict from personality how a person would behave in this situation.
In contrast, an example of a weak situation is a yellow traffic light, because the most appropriate course of action is not especially well defined and norms are inconsistent. Thus, individuals who are more daring are likely to speed through the intersection on a yellow light, whereas cautious individuals are likely to stop.
Origins and history
Although it is difficult to formally express when situations restricting individual differences in personality began in psychology, work conducted by Carl Rogers suggested that certain individual differences are most likely to manifest themselves in situations where there is |
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Liechtenstein | As a member of the EFTA, Liechtenstein (LI) is included in the Nomenclature of Territorial Units for Statistics (NUTS). The three NUTS levels all correspond to the country itself:
NUTS-1: LI0 Liechtenstein
NUTS-2: LI00 Liechtenstein
NUTS-3: LI000 Liechtenstein
Below the NUTS levels, there are two LAU levels (LAU-1: electoral districts; LAU-2: municipalities).
See also
Subdivisions of Liechtenstein
Electoral District of Oberland
Electoral District of Unterland
ISO 3166-2 codes of Liechtenstein
FIPS region codes of Liechtenstein
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EFTA countries - Statistical regions at level 1
LIECHTENSTEIN - Statistical regions at level 2
LIECHTENSTEIN - Statistical regions at level 3
Correspondence between the regional levels and the national administrative units
Communes of Liechtenstein, Statoids.com
Liechtenstein
Subdivisions of Liechtenstein |
https://en.wikipedia.org/wiki/Devor%C3%A9 | Devoré (also called burnout) is a fabric technique particularly used on velvets, where a mixed-fibre material undergoes a chemical process to dissolve the cellulose fibers to create a semi-transparent pattern against more solidly woven fabric. The same technique can also be applied to textiles other than velvet, such as lace or the fabrics in burnout t-shirts.
Devoré comes from the French verb dévorer, meaning literally to devour.
History
Burnout fabrics are thought to have originated in France, possibly as a cheap alternative to lace that could be created using caustic paste on fabric. The commercial chemical process used in fashion garments was developed in Lyon at the turn of the 19th and 20th centuries.
The technique was popularised in the 1920s – typically used on evening gowns and shawls – and revived in the 1980s and '90s, notably by Jasper Conran on theatrical costumes and then evening wear and by Georgina von Etzdorf on scarves.
1990s revival
Conran is credited with popularising devoré, introducing it in 1989 and taking the technique forward in the 1990s in his main fashion line. He refined his techniques on theatrical costumes; in the 1992 production of My Fair Lady directed by Simon Callow, burnout fabrics were heavily used for the costumes of Eliza Doolittle and street vendors. Conran's devoré technique also featured in David Bintley's 1993 Royal Ballet production of Tombeaux, where it was used to create the two-tone velvet tutu worn by Darcey Bussell and the corps de ballet costumes. In 1994, it featured in the Scottish Ballet production of The Sleeping Beauty, where Conran said it produced better results for lower cost than appliqué techniques.
Conran's most elaborate devoré fashion pieces – which were oven baked as part of the process – were time-consuming to produce and expensive to buy; in 1993, a panelled evening skirt retailed at £572 and an acid-treated shirt cost £625.
Established as a Wiltshire textile printing workshop in 1981, Georgina |
https://en.wikipedia.org/wiki/Microtechnology | Microtechnology deals with technology whose features have dimensions of the order of one micrometre (one millionth of a metre, or 10⁻⁶ metre, or 1 μm). It focuses on physical and chemical processes as well as the production or manipulation of structures with one-micrometre magnitude.
Development
Around 1970, scientists learned that by arraying large numbers of microscopic transistors on a single chip, microelectronic circuits could be built that dramatically improved performance, functionality, and reliability, all while reducing cost and increasing volume. This development led to the Information Revolution.
More recently, scientists have learned that not only electrical devices, but also mechanical devices, may be miniaturized and batch-fabricated, promising the same benefits to the mechanical world as integrated circuit technology has given to the electrical world. While electronics now provide the ‘brains’ for today's advanced systems and products, micro-mechanical devices can provide the sensors and actuators — the eyes and ears, hands and feet — which interface to the outside world.
Today, micromechanical devices are the key components in a wide range of products such as automobile airbags, ink-jet printers, blood pressure monitors, and projection display systems. It seems clear that in the not-too-distant future these devices will be as pervasive as electronics. The process has also become more precise, driving the dimensions of the technology down to the sub-micrometre range, as demonstrated in the case of advanced microelectronic circuits that reached below 20 nm.
Micro electromechanical systems
The term MEMS, for Micro Electro Mechanical Systems, was coined in the 1980s to describe new, sophisticated mechanical systems on a chip, such as micro electric motors, resonators, gears, and so on. Today, the term MEMS in practice is used to refer to any microscopic device with a mechanical function, which can be fabricated in a batch process (for example, an array of |
https://en.wikipedia.org/wiki/IEEE%20Heinrich%20Hertz%20Medal | The IEEE Heinrich Hertz Medal was a science award presented by the IEEE for outstanding achievements in the field of electromagnetic waves. The medal was named in honour of German physicist Heinrich Hertz, and was first proposed in 1986 by IEEE Region 8 (Germany) as a centennial recognition of Hertz's work on electromagnetic radiation theory from 1886 to 1891. The medal was first awarded in 1988, and was presented annually until 2001. It was officially discontinued in November 2009.
Recipients
1988: Hans-Georg Unger (Technical University at Brunswick, Germany) for outstanding merits in radio-frequency science, particularly the theory of dielectric wave guides and their application in modern wide-band communication.
1989: Nathan Marcuvitz (Polytechnic University of New York, United States) for fundamental theoretical and experimental contributions to the engineering formulation of electromagnetic field theory.
1990: John D. Kraus (Ohio State University, United States) for pioneering work in radio astronomy and the development of the helical antenna and the corner reflector antenna.
1991: Leopold B. Felsen (Polytechnic University of New York, United States) for highly original and significant developments in the theories of propagation, diffraction and dispersion of electromagnetic waves.
1992: James R. Wait (University of Arizona, United States) for fundamental contributions to electromagnetic theory, to the study of propagation of Hertzian waves through the atmosphere, ionosphere and the Earth, and to their applications in communications, navigation and geophysical exploration.
1993: Kenneth Budden (Cavendish Laboratory, University of Cambridge, United Kingdom) for major original contributions to the theory of electromagnetic waves in ionized media with applications to terrestrial and space communications.
1994: Ronald N. Bracewell (Stanford University, United States) for pioneering work in antenna aperture synthesis and image reconstruction as applied to radioast |
https://en.wikipedia.org/wiki/Vivaldi%20coordinates | The Vivaldi coordinate system is a decentralized network coordinate system that allows distributed systems such as peer-to-peer networks to estimate the round-trip time (RTT) between arbitrary nodes in a network.
Through this scheme, network topology awareness can be used to tune the network behavior to more efficiently distribute data. For example, in a peer-to-peer network, more responsive identification and delivery of content can be achieved. In the Azureus application, Vivaldi is used to improve the performance of the distributed hash table that facilitates query matches.
Design
The algorithm behind Vivaldi is an optimization algorithm that finds the most stable configuration of points in a Euclidean space such that the distances between the points are as close as possible to real-world measured distances. In effect, the algorithm attempts to embed the multi-dimensional space of latency measurements between computers into a low-dimensional Euclidean space. A good analogy is a spring-and-mass system in 3D space where each node is a mass and each connection between nodes is a spring. The rest lengths of the springs are the measured RTTs between nodes, and when the system is simulated, the coordinates of nodes correspond to the resulting 3D positions of the masses in the lowest-energy state of the system. This design is taken from previous work in the field; the contribution that Vivaldi makes is to run this algorithm in a fully distributed fashion across all the nodes in the network.
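A minimal sketch of one such update step in two dimensions follows; it assumes plain Euclidean coordinates, and the function name, constant step size, and RTT samples are illustrative rather than taken from the Vivaldi paper (the published algorithm also adapts the step size using per-node error estimates, which this sketch omits).

import math

# Illustrative Vivaldi-style update: nudge a node's 2-D coordinate toward the
# position that best explains a freshly measured RTT to one peer, like relaxing
# a single spring in a spring-and-mass system.
def update(own, peer, measured_rtt, delta=0.25):
    dx = own[0] - peer[0]
    dy = own[1] - peer[1]
    predicted = math.hypot(dx, dy) or 1e-9         # current estimated distance
    error = measured_rtt - predicted               # spring displacement
    ux, uy = dx / predicted, dy / predicted        # unit vector pointing away from the peer
    # Move outward if the measured RTT is larger than predicted, inward otherwise.
    return (own[0] + delta * error * ux, own[1] + delta * error * uy)

# Toy usage: node A refines its coordinate from three RTT samples (milliseconds) to node B.
a, b = (0.0, 0.0), (50.0, 0.0)
for rtt in (80.0, 78.0, 82.0):
    a = update(a, b, rtt)
print(a)  # A drifts away from B until their coordinate distance approaches ~80 ms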
Advantages
Vivaldi can theoretically scale indefinitely.
The Vivaldi algorithm is relatively simple to implement.
Drawbacks
Vivaldi's coordinates are points in a Euclidean space, which requires the predicted distances to obey the triangle inequality as well as Euclidean symmetry. However, there are many triangle inequality violations (TIVs) and symmetry violations on the Internet, mostly because of inefficient routing or distance distortion because connections on the inter
https://en.wikipedia.org/wiki/Numerosity%20adaptation%20effect | The numerosity adaptation effect is a perceptual phenomenon in numerical cognition which demonstrates non-symbolic numerical intuition and exemplifies how numerical percepts can impose themselves upon the human brain automatically. This effect was first described in 2008.
Presently, this effect is described only for controlled experimental conditions. In the illustration, a viewer should have a strong impression that the left display (lower figure) is more numerous than the right, after 30 seconds of viewing the adaptation (upper figure), although both have exactly the same number of dots. The viewer might also underestimate the number of dots presented in the display.
Both effects are resistant to manipulation of the non-numerical parameters of the display. Thus, this effect cannot be simply explained in terms of size, density, or contrast.
Perhaps the most astonishing aspect of these effects is that they happen immediately, and without conscious control (i.e., knowing that the numbers are equal would not hamper their happening). This points to the operation of a special and largely automatic processing system. As noted by Burr & Ross (2008):
Possible explanations
A few explanations have been suggested for these phenomena. It was argued that they are heavily dependent on density and less on numerosity. Also, it was suggested that numerosity may be correlated with kurtosis and that the results may be better explained in terms of texture density, such that only dots falling within the spatial region where the test is displayed effectively adapt the region.
However, as the display in the original experiments was of spots uniformly either white or black, the kurtosis account is inapplicable. The texture density explanation doesn't seem to disentangle the complexity of these phenomena as in the display the left field adapts to many dots, the right field to few, and these adapters selectively affect the relevant test stimuli. It is not the number of dots in the entir |
https://en.wikipedia.org/wiki/Graphical%20system%20design | Graphical system design (GSD) is a modern approach to designing measurement and control systems that integrates system design software with commercial off-the-shelf (COTS) hardware to dramatically simplify development. This approach combines user interfaces, models of computation, math and analysis, input/output signals, technology abstractions, and various deployment targets. It allows domain experts, or non-implementation experts, to access design capabilities for which they would traditionally need to rely on a system design expert.
This approach to system design is a super-set of electronic system-level (ESL) design. Graphical system design expands on the EDA-based ESL definition to include other types of embedded system design including industrial machines and medical devices. Many of these expanded applications can be defined as "the long tail" applications.
System-level design
Graphical system design is an approach to designing an entire system, using more intuitive graphical software and off-the-shelf (non-custom) hardware devices to refine the design, create initial prototypes and even serve for the first few deployment runs. The approach may involve Algorithm engineering. The approach can prove successful when designers need to get something to market quickly or with a team of non-embedded experts like Boston Engineering to create a mechatronics-based machine.
"Graphical system design is a complementary but encompassing platform-based approach that includes embedded and electronic system design, implementation, and verification tools. ESL and graphical system design are really part of the same movement--higher abstraction and more design automation looking to solve the real engineering challenges that designers are facing today--addressing design flaws that are introduced at the specification stage to ensure they're detected well before validation for on-time product delivery."
Tools
Graphical system design relies on open connectivity. For example, tools that can |
https://en.wikipedia.org/wiki/Elastic%20recoil | Elastic recoil means the rebound of the lungs after having been stretched by inhalation, or rather, the ease with which the lung rebounds. With inhalation, the intrapleural pressure (the pressure within the pleural cavity) of the lungs decreases. Relaxing the diaphragm during expiration allows the lungs to recoil and regain the intrapleural pressure experienced previously at rest. Elastic recoil is inversely related to lung compliance.
This phenomenon occurs because of the elastin in the elastic fibers in the connective tissue of the lungs, and because of the surface tension of the film of fluid that lines the alveoli. As water molecules pull together, they also pull on the alveolar walls, causing the alveoli to recoil and become smaller. But two factors prevent the lungs from collapsing: surfactant and the intrapleural pressure. Surfactant is a surface-active lipoprotein complex formed by type II alveolar cells. The proteins and lipids that comprise surfactant have both a hydrophilic region and a hydrophobic region. By adsorbing to the air-water interface of alveoli, with the hydrophilic head groups in the water and the hydrophobic tails facing towards the air, the main lipid component of surfactant, dipalmitoylphosphatidylcholine, reduces surface tension. It also means the rate of shrinking is more regular because of the stability of surface area caused by surfactant. Pleural pressure is the pressure in the pleural space. When this pressure is lower than the pressure of the alveoli, they tend to expand. This prevents the elastic fibers and outside pressure from crushing the lungs. It is a homeostatic mechanism.
Notes and references
Respiratory physiology |
https://en.wikipedia.org/wiki/Absorption%20of%20water | In higher plants, water and minerals are absorbed through root hairs, which are in contact with soil water, in the root hair zone a little above the root tips.
Active absorption
Active absorption refers to the absorption of water by roots with the help of adenosine triphosphate, generated by root respiration: as the root cells actively take part in the process, it is called active absorption. According to Jenner, active absorption takes place in slowly transpiring and well-watered plants, and about 4% of total water absorption is carried out by this process. Active absorption is explained by two theories: active osmotic water absorption and active non-osmotic water absorption. In the osmotic mechanism, metabolic energy is not directly required to move the water itself.
Active absorption is important for the plants.
Active osmotic water absorption
The root cells behave as an ideal osmotic pressure system through which water moves up from the soil solution to the root xylem along an increasing gradient of D.P.D. (suction pressure, which is the real force for water absorption). If the solute concentration is high and water potential is low in the root cells, water can enter from soil to root cells through endosmosis. Mineral nutrients are absorbed actively by the root cells due to utilisation of adenosine triphosphate (ATP). As a result, the concentration of ions (osmotica) in the xylem vessels is more in comparison to the soil water. A concentration gradient is established between the root and the soil water. The solute potential of xylem water is more in comparison to that of soil and correspondingly water potential is low than the soil water. If stated, water potential is comparatively positive in the soil water. This gradient of water potential causes endosmosis. The endosmosis of water continues until the water potential both in the root and soil becomes equal. It is the absorption of minerals that utilise metabolic energy, but not water absorption. Hence, the absorption of water is indirectly an active process |
https://en.wikipedia.org/wiki/Uruk%20GNU/Linux | Uruk GNU/Linux-libre is a PureOS-based Linux distribution. It is named after Uruk, an ancient Iraqi city, reflecting the distribution's Iraqi origin.
Uruk GNU/Linux 1.0 was released on 13 April 2016 and it ships with the most common software for popular tasks.
Features
Uruk uses Linux-libre kernel for the system and MATE desktop environment for its graphical interfaces.
One of the special features of Uruk is the ability to run various types of package managers with ease (including GNU Guix, urpmi, pacman, and dnf). It implements this with simple one-line commands, using a program named Package Managers Simulator to simulate the commands of popular package managers.
Version history
See also
Parabola GNU/Linux-libre
Linux-libre |
https://en.wikipedia.org/wiki/Fifth%20power%20%28algebra%29 | In arithmetic and algebra, the fifth power or sursolid of a number n is the result of multiplying five instances of n together:
n⁵ = n × n × n × n × n.
Fifth powers are also formed by multiplying a number by its fourth power, or the square of a number by its cube.
The sequence of fifth powers of integers is:
0, 1, 32, 243, 1024, 3125, 7776, 16807, 32768, 59049, 100000, 161051, 248832, 371293, 537824, 759375, 1048576, 1419857, 1889568, 2476099, 3200000, 4084101, 5153632, 6436343, 7962624, 9765625, ...
Properties
For any integer n, the last decimal digit of n⁵ is the same as the last (decimal) digit of n, i.e. n⁵ ≡ n (mod 10).
By the Abel–Ruffini theorem, there is no general algebraic formula (formula expressed in terms of radical expressions) for the solution of polynomial equations containing a fifth power of the unknown as their highest power. This is the lowest power for which this is true. See quintic equation, sextic equation, and septic equation.
Along with the fourth power, the fifth power is one of two powers k that can be expressed as the sum of k − 1 other k-th powers, providing counterexamples to Euler's sum of powers conjecture. Specifically,
27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵ (Lander & Parkin, 1966)
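As an added illustration (not part of the article), a short check in Python of the last-digit property above and of the Lander–Parkin identity:

# Illustrative verification of two facts about fifth powers.
# 1) For any integer n, n**5 ends in the same decimal digit as n.
assert all((n ** 5) % 10 == n % 10 for n in range(10000))
# 2) Lander & Parkin's 1966 counterexample to Euler's sum of powers conjecture:
#    four fifth powers summing to a fifth power.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
print("Both identities hold.")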
See also
Eighth power
Seventh power
Sixth power
Fourth power
Cube (algebra)
Square (algebra)
Perfect power
Footnotes |
https://en.wikipedia.org/wiki/Comparison%20of%20DNS%20server%20software | This article presents a comparison of the features, platform support, and packaging of many independent implementations of Domain Name System (DNS) name server software.
Servers compared
Each of these DNS servers is an independent implementation of the DNS protocols, capable of resolving DNS names for other computers, publishing the DNS names of computers, or both. Excluded from consideration are single-feature DNS tools (such as proxies, filters, and firewalls) and redistributions of servers listed here (many products repackage BIND, for instance, with proprietary user interfaces).
DNS servers are grouped into several categories of specialization of servicing domain name system queries. The two principal roles, which may be implemented either uniquely or combined in a given product are:
Authoritative server: authoritative name servers publish DNS mappings for domains under their authoritative control. Typically, a company (e.g. "Acme Example Widgets") would provide its own authority services to respond to address queries, or for other DNS information, for www.example.int. These servers are listed as being at the top of the authority chain for their respective domains, and are capable of providing a definitive answer. Authoritative name servers can be primary name servers, also known as master servers, i.e. they contain the original set of data, or they can be secondary or slave name servers, containing data copies usually obtained from synchronization directly with the primary server, either via a DNS mechanism, or by other data store synchronization mechanisms.
Recursive server: recursive servers (sometimes called "DNS caches", "caching-only name servers") provide DNS name resolution for applications, by relaying the requests of the client application to the chain of authoritative name servers to fully resolve a network name. They also (typically) cache the result to answer potential future queries within a certain expiration (time-to-live) period. Most I |
https://en.wikipedia.org/wiki/Connection%20%28affine%20bundle%29 | Let Y → X be an affine bundle modelled over a vector bundle Ȳ → X. A connection Γ on Y → X is called an affine connection if, as a section Γ : Y → J¹Y of the jet bundle J¹Y of Y, it is an affine bundle morphism over X. In particular, this is an affine connection on the tangent bundle TX of a smooth manifold X. (That is, the connection on an affine bundle is an example of an affine connection; it is not, however, a general definition of an affine connection. These are related but distinct concepts both unfortunately making use of the adjective "affine".)
With respect to affine bundle coordinates on , an affine connection on is given by the tangent-valued connection form
An affine bundle is a fiber bundle with a general affine structure group GA(m, ℝ) of affine transformations of its typical fiber V of dimension m. Therefore, an affine connection is associated to a principal connection. It always exists.
For any affine connection , the corresponding linear derivative of an affine morphism defines a unique linear connection on a vector bundle . With respect to linear bundle coordinates on , this connection reads
Since every vector bundle is an affine bundle, any linear connection on
a vector bundle also is an affine connection.
If is a vector bundle, both an affine connection and an associated linear connection are
connections on the same vector bundle , and their difference is a basic soldering form on
Thus, every affine connection on a vector bundle is a sum of a linear connection and a basic soldering form on .
Due to the canonical vertical splitting , this soldering form is brought into a vector-valued form
where is a fiber basis for .
Given an affine connection on a vector bundle , let and be the curvatures of a connection and the associated linear connection , respectively. It is readily observed that , where
is the torsion of with respect to the basic soldering form .
In particular, consider the tangent bundle of a manifold coordinated by . There is the canonical |
https://en.wikipedia.org/wiki/Cray%20XE6 | The Cray XE6 (codename during development: Baker) made by Cray is an enhanced version of the Cray XT6 supercomputer, officially announced on 25 May 2010. The XE6 uses the same computer blade found in the XT6, with eight- or 12-core Opteron 6100 processors giving up to 3,072 cores per cabinet, but replaces the SeaStar2+ interconnect router used in the Cray XT5 and XT6 with the faster and more scalable Gemini router ASIC. This is used to provide a 3-dimensional torus network topology between nodes. Each XE6 node has two processor sockets and either 32 or 64 GB of DDR3 SDRAM memory. Two nodes share one Gemini router ASIC.
The XE6 runs the Cray Linux Environment version 3. This incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux. |
https://en.wikipedia.org/wiki/Measure-preserving%20dynamical%20system | In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem, and are a special case of conservative systems. They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium.
Definition
A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system
(X, ℬ, μ, T) with the following structure:
X is a set,
ℬ is a σ-algebra over X,
μ : ℬ → [0, 1] is a probability measure, so that μ(X) = 1 and μ(∅) = 0,
T : X → X is a measurable transformation which preserves the measure μ, i.e., μ(T⁻¹(B)) = μ(B) for every B ∈ ℬ.
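As an added illustration of this definition (assuming the doubling map T(x) = 2x mod 1 on the unit interval with Lebesgue measure, a standard example that also appears in the discussion below), a short Monte Carlo sketch in Python:

import random

# The doubling map T(x) = 2x mod 1 preserves Lebesgue measure on [0, 1):
# the mass of the preimage T^{-1}(A) equals the mass of A.
def T(x):
    return (2 * x) % 1.0

A = (0.2, 0.5)                 # test set A = [0.2, 0.5), Lebesgue measure 0.3
N = 200_000
xs = [random.random() for _ in range(N)]

mu_A = sum(A[0] <= x < A[1] for x in xs) / N            # estimate of mu(A)
mu_preimage = sum(A[0] <= T(x) < A[1] for x in xs) / N  # estimate of mu(T^-1(A))
print(round(mu_A, 3), round(mu_preimage, 3))            # both approximately 0.3

Note that the forward image T([0.2, 0.5)) = [0.4, 1.0) has Lebesgue measure 0.6, which is one way to see why the definition is phrased in terms of preimages.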
Discussion
One may ask why the measure-preserving transformation is defined in terms of the inverse T⁻¹ instead of the forward transformation T. This can be understood in a fairly easy fashion. Consider a mapping Φ of power sets, Φ : P(X) → P(X).
Consider now the special case of maps Φ which preserve intersections, unions and complements (so that it is a map of Borel sets) and also send X to X (because we want it to be conservative). Every such conservative, Borel-preserving map Φ can be specified by some surjective map T : X → X by writing Φ(B) = T⁻¹(B). Of course, one could also define Φ(B) = T(B), but this is not enough to specify all such possible maps Φ. That is, conservative, Borel-preserving maps Φ cannot, in general, be written in the form Φ(B) = T(B); one might consider, for example, the map of the unit interval [0, 1) given by x ↦ 2x mod 1; this is the Bernoulli map.
The composition μ ∘ T⁻¹ has the form of a pushforward, whereas μ ∘ T is generically called a pullback. Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. For example, the transfer operator is defined in terms of the pushforward of the transformation map T; the measure μ can now be understood as an invariant measure; it is just t
https://en.wikipedia.org/wiki/Stefan%20Bergman%20Prize | The Stefan Bergman Prize is a mathematics award, funded by the estate of the widow of mathematician Stefan Bergman and supported by the American Mathematical Society. The award is granted for mathematical research in: "1) the theory of the kernel function and its applications in real and complex analysis; or 2) function-theoretic methods in the theory of partial differential equations of elliptic type with attention to Bergman's operator method."
The award is given in honor of Stefan Bergman, a mathematician known for his work on complex analysis. Recipients of the prize are selected by a committee of judges appointed by the American Mathematical Society. The monetary value of the prize is variable and based on the income from the prize fund; in 2005 the award was valued at approximately $17,000.
Laureates
1989 David W. Catlin
1991 Steven R. Bell, Ewa Ligocka
1992 Charles Fefferman
1993 Yum-Tong Siu
1994 John Erik Fornæss
1995 Harold P. Boas, Emil J. Straube
1997 David E. Barrett, Michael Christ
1999 John P. D'Angelo
2000 Masatake Kuranishi
2001 László Lempert, Sidney Webster
2003 M. Salah Baouendi, Linda Preiss Rothschild
2004 Joseph J. Kohn
2005 Elias Stein
2006 Kengo Hirachi
2007-08 Alexander Nagel, Stephen Wainger
2009 Ngaiming Mok, Duong H. Phong
2011 Gennadi Henkin
2012 David Jerison, John M. Lee
2013 Xiaojun Huang, Steve Zelditch
2014 Sławomir Kołodziej, Takeo Ohsawa
2015 Eric Bedford, Jean-Pierre Demailly
2016 Charles L. Epstein, François Trèves
2017 Bo Berndtsson, Nessim Sibony
2018 Johannes Sjöstrand
2019 Franc Forstnerič, Mei-Chi Shaw
2020 Aline Bonami, Peter Ebenfelt
See also
List of mathematics awards |
https://en.wikipedia.org/wiki/Methyl%20anthranilate | Methyl anthranilate, also known as MA, methyl 2-aminobenzoate, or carbomethoxyaniline, is an ester of anthranilic acid. Its chemical formula is C8H9NO2. It has a strong and fruity grape smell, and one of its key uses is as a flavoring agent.
Chemical properties
It is a colorless to pale yellow liquid with a melting point of 24 °C and a boiling point of 256 °C. It has a density of 1.168 g/cm³ at 20 °C. It has a refractive index of 1.583 at a wavelength of 589 nm and 20 °C. It shows a light blue-violet fluorescence. It is very slightly soluble in water, and soluble in ethanol and propylene glycol. It is insoluble in paraffin oil. It is combustible, with a flash point of 104 °C. Pure, it has a fruity grape smell; at 25 ppm it has a sweet, fruity, Concord grape-like smell with a musty and berry nuance.
Uses
Methyl anthranilate acts as a bird repellent. It is food-grade and can be used to protect corn, sunflowers, rice, fruit, and golf courses. Dimethyl anthranilate (DMA) has a similar effect. It is also used for part of the flavor of grape Kool-Aid. It is used for flavoring of candy, soft drinks (e.g. grape soda), fruit (e.g. Grāpples), chewing gum, and nicotine products.
Methyl anthranilate both as a component of various natural essential oils and as a synthesised aroma-chemical is used extensively in modern perfumery. It is also used to produce Schiff bases with aldehydes, many of which are also used in perfumery. In a perfumery context the most common Schiff's Base is known as aurantiol, produced by combining methyl anthranilate and hydroxycitronellal.
In organic synthesis, methyl anthranilate can be used as a source of the highly reactive aryne, benzyne. It is obtained by diazotization of the amine group using sodium nitrite which eliminates nitrogen and CO2 giving benzyne as an intermediate for Diels-Alder addition or other substitution at the ring.
Occurrence
Methyl anthranilate naturally occurs in the Concord grapes and other Vitis labrusca grapes and hybrids thereof, |
https://en.wikipedia.org/wiki/Inter-protocol%20exploitation | Inter-protocol exploitation is a class of security vulnerabilities that takes advantage of interactions between two communication protocols, for example the protocols used in the Internet. It is commonly discussed in the context of the Hypertext Transfer Protocol (HTTP). This attack uses the potential of the two different protocols meaningfully communicating commands and data.
It was popularized in 2007 and publicly described in research of the same year. The general class of attacks that it refers to has been known since at least 1994 (see the Security Considerations section of RFC 1738).
Internet Protocol implementations allow for the possibility of encapsulating exploit code to compromise a remote program that uses a different protocol. Inter-protocol exploitation can utilize inter-protocol communication to establish the preconditions for launching an inter-protocol exploit. For example, this process could negotiate the initial authentication communication for a vulnerability in password parsing. Inter-protocol exploitation is where one protocol attacks a service running a different protocol. This is a legacy problem because the specifications of the protocols did not take into consideration an attack of this type.
Technical details
The two protocols involved in the vulnerability are termed the carrier and target. The carrier encapsulates the commands and/or data. The target protocol is used for communication to the intended victim service. Inter-protocol communication will be successful if the carrier protocol can encapsulate the commands and/or data sufficiently to meaningfully communicate to the target service.
Two preconditions need to be met for successful communication across protocols: encapsulation and error tolerance. The carrier protocol must encapsulate the data and commands in a manner that the target protocol can understand. It is highly likely that the resulting data stream will induce parsing errors in the target protocol.
The target protocol |
https://en.wikipedia.org/wiki/Versit%20Consortium | The versit Consortium was a multivendor initiative founded by Apple Computer, AT&T, IBM and Siemens in the early 1990s in order to create Personal Data Interchange (PDI) technology, open specifications for exchanging personal data over the Internet, wired and wireless connectivity and Computer Telephony Integration (CTI). The Consortium started a number of projects to deliver open specifications aimed at creating industry standards.
Computer Telephony Integration
One of the most ambitious projects of the Consortium was the Versit CTI Encyclopedia (VCTIE), a 3,000 page, 6 volume set of specifications defining how computer and telephony systems are to interact and become interoperable. The Encyclopedia was built on existing technologies and specifications such as ECMA's call control specifications, TSAPI and industry expertise of the core technical team. The volumes are:
Volume 1, Concepts & Terminology
Volume 2, Configurations & Landscape
Volume 3, Telephony Feature Set
Volume 4, Call Flow Scenarios
Volume 5, CTI Protocols
Volume 6, Versit TSAPI
Appendices include:
Versit TSAPI header file
Protocol 1 ASN.1 description
Protocol 2 ASN.1 description
Versit Server Mapper Interface header file
Versit TSDI header file
The core Versit CTI Encyclopedia technical team was composed of David H. Anderson and Marcus W. Fath from IBM, Frédéric Artru and Michael Bayer from Apple Computer, James L. Knight and Steven Rummel from AT&T (then Lucent Technologies), Tom Miller from Siemens, and consultants Ellen Feaheny and Charles Hudson. Upon completion, the Versit CTI Encyclopedia was transferred to the ECTF and has been adopted in the form of ECTF C.001. This model represents the basis for the ECTF's call control efforts.
Though the Versit CTI Encyclopedia ended up influencing many products, there was one full compliant implementation of the specifications that was brought to market: Odisei, a French company founded by team member Frédéric Artru developed the IntraSw |
https://en.wikipedia.org/wiki/Cognitive%20holding%20power | Cognitive holding power is a concept introduced and measured by John C. Stevenson in 1994 using a questionnaire, the Cognitive Holding Power Questionnaire (CHPQ). This tool assesses first- or second-order cognitive processing preferences as a result of the characteristics of the learning environment.
Impact
Studies involving cognitive holding power have been able to suggest improvements to mathematical education.
Notes
Cognition
Mathematics education |
https://en.wikipedia.org/wiki/Systems%20and%20Synthetic%20Biology | Systems and Synthetic Biology is a peer-reviewed scientific journal covering systems and synthetic biology. It was established in 2007 and was published by Springer Science+Business Media. The editors-in-chief were Pawan K. Dhar (University of Kerala) and Ron Weiss (Massachusetts Institute of Technology). The journal's last volume was in 2015.
Abstracting and indexing
The journal is abstracted and indexed in: |
https://en.wikipedia.org/wiki/Synusia | Synusia (plural Synusiae) is a term in plant ecology that refers to a layer of vegetation consisting of species with shared life forms. It has been compared with guilds in zoology.
The term synusia was introduced by Helmut Gams in 1918, although similar ideas had been proposed earlier using terms such as "Genossenschaften" (cooperatives) and "Schicht" (layer). They have been defined as ecological groups of plants that share similarities in their life-form, share the same niche and play a similar role. They can be taxonomically different but have similar habitats.
https://en.wikipedia.org/wiki/Mixmaster%20anonymous%20remailer | Mixmaster is a Type II anonymous remailer which sends messages in fixed-size packets and reorders them, preventing anyone watching the messages go in and out of remailers from tracing them. It is an implementation of a Chaumian Mix network.
History
Mixmaster was originally written by Lance Cottrell, and was maintained by Len Sassaman. Peter Palfrader is the current maintainer. Current Mixmaster software can be compiled to handle Cypherpunk messages as well; they are needed as reply blocks for nym servers.
See also
Anonymity
Anonymous P2P
Anonymous remailer
Cypherpunk anonymous remailer (Type I)
Mixminion (Type III)
Onion routing
Tor (network)
Pseudonymous remailer (a.k.a. nym servers)
Penet remailer
Data privacy
Traffic analysis |
https://en.wikipedia.org/wiki/100-year%20flood | A 100-year flood is a flood event that has on average a 1 in 100 chance (1% probability) of being equaled or exceeded in any given year.
The 100-year flood is also referred to as the 1% flood. For coastal or lake flooding, the 100-year flood is generally expressed as a flood elevation or depth, and may include wave effects. For river systems, the 100-year flood is generally expressed as a flowrate. Based on the expected 100-year flood flow rate, the flood water level can be mapped as an area of inundation. The resulting floodplain map is referred to as the 100-year floodplain. Estimates of the 100-year flood flowrate and other streamflow statistics for any stream in the United States are available. In the UK, the Environment Agency publishes a comprehensive map of all areas at risk of a 1 in 100 year flood. Areas near the coast of an ocean or large lake also can be flooded by combinations of tide, storm surge, and waves. Maps of the riverine or coastal 100-year floodplain may figure importantly in building permits, environmental regulations, and flood insurance. These analyses generally represent 20th-century climate.
Probability
A common misunderstanding is that a 100-year flood is likely to occur only once in a 100-year period. In fact, there is approximately a 63.4% chance of one or more 100-year floods occurring in any 100-year period. On the Danube River at Passau, Germany, the actual intervals between 100-year floods during 1501 to 2013 ranged from 37 to 192 years. The probability Pe that one or more floods occurring during any period will exceed a given flood threshold can be expressed, using the binomial distribution, as
Pe = 1 − (1 − 1/T)^n, where T is the threshold return period (e.g. 100-yr, 50-yr, 25-yr, and so forth), and n is the number of years in the period. The probability of exceedance Pe is also described as the natural, inherent, or hydrologic risk of failure. However, the expected value of the number of 100-year floods occurring in any 100-year period is 1.
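As an added numerical sketch of this formula (Python; the function and variable names are illustrative):

# Probability of at least one T-year flood in an n-year period:
# Pe = 1 - (1 - 1/T)**n, the complement of "no exceedance in any of n independent years".
def prob_exceedance(T, n):
    return 1 - (1 - 1 / T) ** n

print(round(prob_exceedance(100, 100), 3))  # ~0.634, the 63.4% figure quoted above
print(round(prob_exceedance(100, 30), 3))   # ~0.26 over a 30-year span
print(round(prob_exceedance(100, 1), 3))    # 0.01 in any single year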
Te |
https://en.wikipedia.org/wiki/Bit%20banging | In computer engineering and electrical engineering, bit banging is a "term of art" for any method of data transmission that employs software as a substitute for dedicated hardware to generate transmitted signals or process received signals. Software directly sets and samples the states of GPIOs (e.g., pins on a microcontroller), and is responsible for meeting all timing requirements and protocol sequencing of the signals. In contrast to bit banging, dedicated hardware (e.g., UART, SPI, I²C) satisfies these requirements and, if necessary, provides a data buffer to relax software timing requirements. Bit banging can be implemented at very low cost, and is commonly used in some embedded systems.
Bit banging allows a device to implement different protocols with minimal or no hardware changes. In some cases, bit banging is made feasible by newer, faster processors because more recent hardware operates much more quickly than hardware did when standard communications protocols were created.
C code example
The following C language code example transmits a byte of data on an SPI bus.
// transmit byte serially, MSB first
void send_8bit_serial_data(unsigned char data)
{
    int i;

    // select device (active low)
    output_low(SD_CS);

    // send bits 7..0
    for (i = 0; i < 8; i++)
    {
        // consider leftmost bit
        // set line high if bit is 1, low if bit is 0
        if (data & 0x80)
            output_high(SD_DI);
        else
            output_low(SD_DI);

        // pulse the clock state to indicate that bit value should be read
        output_low(SD_CLK);
        delay();
        output_high(SD_CLK);

        // shift byte left so next bit will be leftmost
        data <<= 1;
    }

    // deselect device
    output_high(SD_CS);
}
Considerations
The question of whether to deploy bit banging is a trade-off between load, performance and reliability on the one hand, and the availability of a hardware alternative on the other. The software emulation process consumes more
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20257 | Zinc finger protein 257 is a protein that in humans is encoded by the ZNF257 gene. |
https://en.wikipedia.org/wiki/Croatian%20Society%20of%20Medical%20Biochemistry%20and%20Laboratory%20Medicine | The Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is a national, autonomous, voluntary and non-profit professional association of medical biochemists, established with the aim of the professional and scientific development and improvement of the medical biochemistry profession in the Republic of Croatia.
The main activities of the CSMBLM are the promotion and development of the medical biochemistry profession, external assessment of the performance quality of all medical biochemistry laboratories in the Republic of Croatia, and publication of the scientific journal Biochemia Medica.
History
The Croatian Society of Medical Biochemists (CSMB) was founded in 1953. Until 1988, CSMB had been part of the Croatian Pharmaceutical Society and afterwards became an autonomous association. In 2012, it changed its name to the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM), in line with the current trends within the profession and with the recommendations of European and global professional associations. In 2015 it had 750 members.
Objectives
Through the voluntary work of its members, CSMBLM is dedicated to improve the profession and scientific field of medical biochemistry and laboratory medicine, raise the level of public awareness at national level, encourage and provide assistance to its members in achieving professional and scientific excellence especially in performance quality of every segment of work in medical biochemistry laboratories.
Activities
Biochemia Medica
CSMBLM publishes the scientific journal Biochemia Medica in the English language three times a year. It is included in databases such as Current Contents (Clinical Medicine), Medline, PubMed Central (PMC), Science Citation Index Expanded™ (SCIE, Thomson Reuters), Journal Citation Reports/Science Edition (JCR, Thomson Reuters), EMBASE/Excerpta Medica, Scopus, CAS (Chemical Abstracts Service), EBSCO/Academic Search Complete and DOAJ (Directory of Open Access Journals).
CROQALM – na |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.