source | text |
|---|---|
https://en.wikipedia.org/wiki/High-water%20mark%20%28computer%20security%29 | In the fields of physical security and information security, the high-water mark for access control was introduced by Clark Weissmann in 1969. It pre-dates the Bell–LaPadula security model, whose first volume appeared in 1972.
Under high-water mark, any object less than the user's security level can be opened, but the object is relabeled to reflect the highest security level currently open, hence the name.
The practical effect of the high-water mark was a gradual movement of all objects towards the highest security level in the system. If user A is writing a CONFIDENTIAL document, and checks the unclassified dictionary, the dictionary becomes CONFIDENTIAL. Then, when user B is writing a SECRET report and checks the spelling of a word, the dictionary becomes SECRET. Finally, if user C is assigned to assemble the daily intelligence briefing at the TOP SECRET level, reference to the dictionary makes the dictionary TOP SECRET, too.
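The dictionary example above can be sketched in a few lines of code; the level names and the `open_object` helper are hypothetical illustrations, not from any real reference monitor:

```python
# Hypothetical sketch of high-water mark relabeling.
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
RANK = {name: i for i, name in enumerate(LEVELS)}

class HighWaterMarkObject:
    def __init__(self, name, level="UNCLASSIFIED"):
        self.name = name
        self.level = level

def open_object(subject_level, obj):
    """A subject may open any object at or below its level; the object's
    label then floats up to the highest level currently in use."""
    if RANK[obj.level] > RANK[subject_level]:
        raise PermissionError(f"{obj.name} is above {subject_level}")
    if RANK[subject_level] > RANK[obj.level]:
        obj.level = subject_level  # relabel upward: the high-water mark

dictionary = HighWaterMarkObject("dictionary")
for user_level in ["CONFIDENTIAL", "SECRET", "TOP SECRET"]:
    open_object(user_level, dictionary)
print(dictionary.level)  # the dictionary has crept up to TOP SECRET
```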
Low-water mark
Low-water mark is an extension of the Biba model. The Biba model enforces no-write-up and no-read-down rules, the exact opposite of the rules in the Bell–LaPadula model. In the low-water mark model, read-down is permitted, but after the read the subject's label is lowered to the object's label. It is classified among floating-label security models.
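The complementary low-water mark rule can be sketched the same way; the `Subject` class and the integrity-label names here are illustrative only:

```python
# Illustrative sketch of the low-water mark rule for subject labels.
LEVELS = ["LOW", "MEDIUM", "HIGH"]
RANK = {name: i for i, name in enumerate(LEVELS)}

class Subject:
    def __init__(self, label="HIGH"):
        self.label = label

    def read(self, object_label):
        # Read-down is permitted, but the subject's integrity label is
        # lowered to the object's label afterwards (it never rises).
        self.label = min(self.label, object_label, key=RANK.get)

proc = Subject("HIGH")
proc.read("MEDIUM")
proc.read("LOW")
print(proc.label)  # reading low-integrity data degraded the subject to LOW
```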
See also
Watermark (data synchronization) |
https://en.wikipedia.org/wiki/Gut%20microbiota | Gut microbiota, gut microbiome, or gut flora, are the microorganisms, including bacteria, archaea, fungi, and viruses, that live in the digestive tracts of animals. The gastrointestinal metagenome is the aggregate of all the genomes of the gut microbiota. The gut is the main location of the human microbiome. The gut microbiota has broad impacts, including colonization resistance to pathogens, maintaining the intestinal epithelium, metabolizing dietary and pharmaceutical compounds, controlling immune function, and even influencing behavior through the gut–brain axis.
The microbial composition of the gut microbiota varies across regions of the digestive tract. The colon contains the highest microbial density of any human-associated microbial community studied so far, representing between 300 and 1000 different species. Bacteria are the largest and, to date, best-studied component, and 99% of gut bacteria come from about 30 or 40 species. Up to 60% of the dry mass of feces is bacteria. Over 99% of the bacteria in the gut are anaerobes, but in the cecum, aerobic bacteria reach high densities. It is estimated that the human gut microbiota collectively contain around a hundred times as many genes as the human genome.
Overview
In humans, the gut microbiota has the largest number and diversity of bacterial species compared to other areas of the body. The approximate number of bacteria composing the gut microbiota is about 10^13 to 10^14. In humans, the gut flora is established one to two years after birth, by which time the intestinal epithelium and the intestinal mucosal barrier that it secretes have co-developed in a way that is tolerant to, and even supportive of, the gut flora and that also provides a barrier to pathogenic organisms.
The relationship between some gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. Some human gut microorganisms benefit the host by fermenting dietary fiber into short-chain fatty acids |
https://en.wikipedia.org/wiki/Systems%20biomedicine | Systems biomedicine, also called systems biomedical science, is the application of systems biology to the understanding and modulation of developmental and pathological processes in humans, and in animal and cellular models. Whereas systems biology aims at modeling exhaustive networks of interactions (with the long-term goal of, for example, creating a comprehensive computational model of the cell), mainly at intra-cellular level, systems biomedicine emphasizes the multilevel, hierarchical nature of the models (molecule, organelle, cell, tissue, organ, individual/genotype, environmental factor, population, ecosystem) by discovering and selecting the key factors at each level and integrating them into models that reveal the global, emergent behavior of the biological process under consideration.
Such an approach is advantageous when the execution of all the experiments necessary to establish exhaustive models is precluded by time and expense (e.g., in animal models) or by ethical constraints (e.g., human experimentation).
In 1992, Kamada T. published a paper on systems biomedicine (November–December), and Zeng B.J. published an article on systems medicine and pharmacology in the same year (April).
In 2009, the first collective book on systems biomedicine was edited by Edison T. Liu and Douglas A. Lauffenburger.
In October 2008, one of the first research groups devoted uniquely to systems biomedicine was established at the European Institute of Oncology. One of the first research centers specializing in systems biomedicine, the Luxembourg Centre for Systems Biomedicine, an interdisciplinary center of the University of Luxembourg, was founded by Rudi Balling. The first centre devoted to spatial issues in systems biomedicine was recently established at Oregon Health and Science University.
The first peer-reviewed journal on this topic, Systems Biomedicine, was recently established by Landes Bioscience.
See also
Systems biology
Systems med |
https://en.wikipedia.org/wiki/Icyball | Icyball is the name given to two early refrigerators, one made by Australian Sir Edward Hallstrom in 1923, and the other designed and patented by David Forbes Keith of Toronto (filed 1927, granted 1929) and manufactured by American Powel Crosley Jr., who bought the rights to the device. Both devices are unusual in that they did not require electricity for cooling: they could run for a day on a cup of kerosene, giving rural users without electricity the benefits of refrigeration.
Operation (Crosley Icyball)
The Crosley Icyball is an example of a gas-absorption refrigerator, of the kind found today in recreational vehicles and campervans. Unlike most refrigerators, the Icyball has no moving parts; instead of operating continuously, it is manually cycled. Typically it is charged in the morning for about 1.5 hours and provides cooling throughout the heat of the day.
Absorption refrigerators and the more common mechanical refrigerators both cool by the evaporation of a refrigerant. (Evaporation of a liquid absorbs heat, which is why sweat evaporating from the skin cools it; the reverse process, condensation, releases that heat.) In absorption refrigerators, the buildup of pressure due to evaporation of refrigerant is relieved not by suction at the inlet of a compressor, but by absorption into an absorptive medium (water, in the case of the Icyball).
The Icyball system moves heat from the refrigerated cabinet to the warmer room by using ammonia as the refrigerant. It consists of two metal balls: a hot ball, which in the fully charged state contains the absorber (water) and a cold ball containing liquid ammonia. These are joined by a pipe in the shape of an inverted U. The pipe allows ammonia gas to move in either direction.
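As a rough, back-of-the-envelope illustration of the cold ball's cooling capacity: the 1.5 kg ammonia charge below is an assumed figure, while the latent heats are standard handbook values.

```python
# Order-of-magnitude sketch of evaporative cooling in the cold ball.
H_VAP_NH3 = 1.37e6   # J/kg, latent heat of vaporization of ammonia (~ -33 C)
H_FUS_ICE = 3.34e5   # J/kg, latent heat of fusion of water ice

ammonia_kg = 1.5                       # assumed refrigerant charge
cooling_j = ammonia_kg * H_VAP_NH3     # heat absorbed as the ammonia evaporates
ice_equiv_kg = cooling_j / H_FUS_ICE   # same cooling as melting this much ice

print(f"{cooling_j / 1e6:.1f} MJ of cooling ~ {ice_equiv_kg:.1f} kg of ice")
```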
After approximately a day's use (varying depending on load), the Icyball stops cooling, and needs recharging. The Icyball is removed from the refrigerated cabinet, and the cold ball, from which all the ammonia has evaporated during the previous |
https://en.wikipedia.org/wiki/Airespace | Airespace, Inc., formerly Black Storm Networks, was a networking hardware company founded in 2001 that manufactured wireless access points and controllers. The company developed the AP-controller model for fast deployment and the Lightweight Access Point Protocol, the precursor to the CAPWAP protocol.
Corporate history
Airespace was founded in 2001 by Pat Calhoun, Bob Friday, Bob O'Hara, and Ajay Mishra. The company was venture backed by Storm Ventures, Norwest Venture Partners and Battery Ventures. In 2003, it entered into an agreement to provide OEM equipment to NEC. In 2004 it signed an agreement with Alcatel and Nortel to provide equipment to the two companies on an OEM basis.
Airespace was first to market with integrated location tracking. Within a year and a half, the company had grown into the market leader in enterprise Wi-Fi.
Cisco Systems acquired Airespace in 2005 for $450 million; this was one of 13 acquisitions Cisco made that year and the largest up to that point. Airespace products were merged into the Cisco Aironet product line. |
https://en.wikipedia.org/wiki/Umov%20effect | The Umov effect, also known as Umov's law, is a relationship between the albedo of an astronomical object, and the degree of polarization of light reflecting off it. The effect was discovered by the Russian physicist Nikolay Umov in 1905, and can be observed for celestial objects such as the surface of the Moon and the asteroids.
The degree of linear polarization of light, P, is defined by
P = (I⊥ − I∥) / (I⊥ + I∥),
where I⊥ and I∥ are the intensities of light in the directions perpendicular and parallel to the plane of a polarizer aligned in the plane of reflection. Values of P are zero for unpolarized light, and ±1 for linearly polarized light.
Umov's law states that the degree of polarization is inversely proportional to the reflectivity:
P ∝ 1/α,
where α is the albedo of the object. Thus, highly reflective objects tend to reflect mostly unpolarized light, while dimly reflective objects tend to reflect polarized light. The law is only valid for large phase angles (angles between the incident light and the reflected light). |
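Both relations can be sketched numerically. The constant in `umov_polarization` below is purely illustrative; only the inverse trend between polarization and albedo is the physical content of the law.

```python
# Sketch of the polarization definition and the Umov inverse trend.
def degree_of_polarization(i_perp, i_par):
    """P = (I_perp - I_par) / (I_perp + I_par)."""
    return (i_perp - i_par) / (i_perp + i_par)

def umov_polarization(albedo, c=0.2):
    """Umov's law: P * albedo ~ const, so P ~ c / albedo (c illustrative)."""
    return c / albedo

print(degree_of_polarization(1.0, 1.0))   # unpolarized light -> 0.0
print(degree_of_polarization(1.0, 0.0))   # fully polarized -> 1.0
# A dark asteroid (albedo 0.05) polarizes reflected light more strongly
# than a bright icy surface (albedo 0.8):
print(umov_polarization(0.05) > umov_polarization(0.8))  # True
```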
https://en.wikipedia.org/wiki/Grammar-based%20code | Grammar-based codes or grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed. Examples include universal lossless data compression algorithms. To compress a data sequence x, a grammar-based code transforms x into a context-free grammar G.
The problem of finding a smallest grammar for an input sequence (the smallest grammar problem) is known to be NP-hard, so many grammar-transform algorithms have been proposed from theoretical and practical viewpoints.
Generally, the produced grammar is further compressed by statistical encoders like arithmetic coding.
Examples and characteristics
The class of grammar-based codes is very broad. It includes block codes, the multilevel pattern matching (MPM) algorithm, variations of the incremental parsing Lempel-Ziv code, and many other new universal lossless compression algorithms.
Grammar-based codes are universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet.
Practical algorithms
Compression programs for the following algorithms are available from external links.
Sequitur is a classical grammar compression algorithm that sequentially translates an input text into a CFG, and then the produced CFG is encoded by an arithmetic coder.
Re-Pair is a greedy algorithm using the strategy of most-frequent-first substitution. Its compression performance is strong, although its main-memory requirement is very large.
GLZA constructs a grammar that may be reducible, i.e., contain repeats, where the entropy-coding cost of "spelling out" the repeats is less than the cost of creating and entropy-coding a rule to capture them. (In general, the compression-optimal SLG is not irreducible, and the smallest grammar problem is different from the actual SLG compression problem.)
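Re-Pair's most-frequent-first strategy can be sketched in a few dozen lines. This is grammar construction only (a real implementation also entropy-codes the result), and the function and rule names are illustrative:

```python
# Minimal sketch of Re-Pair-style most-frequent-first pair substitution.
from collections import Counter

def re_pair(seq):
    """Repeatedly replace the most frequent adjacent pair with a fresh
    non-terminal until no pair occurs at least twice."""
    seq = list(seq)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):  # greedy left-to-right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = re_pair("abcabcabc")
print(start, rules)
```

Expanding the rules from the start sequence reproduces the input exactly, which is the defining property of a grammar-based code.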
See also
Dictionary coder
Grammar induction
Straight-line grammar |
https://en.wikipedia.org/wiki/Thermal%20velocity | Thermal velocity or thermal speed is a typical velocity of the thermal motion of particles that make up a gas, liquid, etc. Thus, indirectly, thermal velocity is a measure of temperature. Technically speaking, it is a measure of the width of the peak in the Maxwell–Boltzmann particle velocity distribution. Note that in the strictest sense thermal velocity is not a velocity, since velocity usually describes a vector rather than simply a scalar speed.
Since the thermal velocity is only a "typical" velocity, a number of different definitions can be and are used.
Taking k_B to be the Boltzmann constant, T the absolute temperature, and m the mass of a particle, we can write the different thermal velocities:
In one dimension
If v_th is defined as the root mean square of the velocity in any one dimension (i.e. any single direction), then
v_th = √(k_B T / m).
If v_th is defined as the mean of the magnitude of the velocity in any one dimension (i.e. any single direction), then
v_th = √(2 k_B T / (π m)).
In three dimensions
If v_th is defined as the most probable speed, then
v_th = √(2 k_B T / m).
If v_th is defined as the root mean square of the total velocity, then
v_th = √(3 k_B T / m).
If v_th is defined as the mean of the magnitude of the velocity of the atoms or molecules, then
v_th = √(8 k_B T / (π m)).
All of these definitions fall in the range v_th = c √(k_B T / m) with √(2/π) ≤ c ≤ √3, i.e. roughly 0.80 to 1.73 times √(k_B T / m).
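The definitions above can be evaluated directly. The sketch below uses molecular nitrogen at 293.15 K; the N2 molecular mass is an approximate assumed value.

```python
# Evaluating the five thermal-speed definitions for N2 at 293.15 K.
from math import pi, sqrt

KB = 1.380649e-23     # Boltzmann constant, J/K
T = 293.15            # K
m = 4.65e-26          # kg, approximate mass of one N2 molecule

base = sqrt(KB * T / m)
speeds = {
    "1D rms":           base,                  # sqrt(kT/m)
    "1D mean |v|":      sqrt(2 / pi) * base,   # sqrt(2kT/(pi m))
    "3D most probable": sqrt(2) * base,        # sqrt(2kT/m)
    "3D mean speed":    sqrt(8 / pi) * base,   # sqrt(8kT/(pi m))
    "3D rms":           sqrt(3) * base,        # sqrt(3kT/m)
}
for name, v in speeds.items():
    print(f"{name:>16}: {v:6.0f} m/s")
```

For N2 at room temperature this gives a 3D mean speed of roughly 470 m/s, and all five values fall in the stated range of about 0.80 to 1.73 times √(k_B T / m).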
Thermal velocity at room temperature
At 20 °C (293.15 K), the mean thermal speed of common gases in three dimensions is: |
https://en.wikipedia.org/wiki/Magnesium%20stearate | Magnesium stearate is the chemical compound with the formula Mg(C18H35O2)2. It is a soap, consisting of a salt with two equivalents of stearate (the anion of stearic acid) and one magnesium cation (Mg2+). Magnesium stearate is a white, water-insoluble powder. Its applications exploit its softness, insolubility in many solvents, and low toxicity. It is used as a release agent and as a component or lubricant in the production of pharmaceuticals and cosmetics.
Manufacturing
Magnesium stearate is produced by the reaction of sodium stearate with magnesium salts or by treating magnesium oxide with stearic acid.
Uses
Magnesium stearate is often used as an anti-adherent in the manufacture of medical tablets, capsules, and powders. In this regard, the substance is also useful because of its lubricating properties, which prevent ingredients from sticking to manufacturing equipment during the compression of chemical powders into solid tablets; magnesium stearate is the most commonly used lubricant for tablets. However, it may reduce the wettability and slow the disintegration of the tablets, and slow or even reduce the dissolution of the drug.
Magnesium stearate can also be used efficiently in dry coating processes.
In the production of pressed candies, magnesium stearate serves as a release agent. It is also used to bind sugar in hard candies such as mints.
Magnesium stearate is a common ingredient in baby formulas.
In the EU and EFTA it is listed as food additive E470b.
Occurrence
Magnesium stearate is a major component of bathtub rings. When produced by soap and hard water, magnesium stearate and calcium stearate both form a white solid insoluble in water, and are collectively known as soap scum.
Safety
Magnesium stearate is generally considered safe for human consumption at levels below 2500 mg per kg of body weight per day and is classified in the United States as generally recognized as safe (GRAS). In 1979, the FDA's Subcommittee on GRAS Substances (SCOGS) reported, "There |
https://en.wikipedia.org/wiki/Initial%20ramdisk | In Linux systems, initrd (initial ramdisk) is a scheme for loading a temporary root file system into memory, to be used as part of the Linux startup process. initrd and initramfs (from INITial RAM File System) refer to two different methods of achieving this. Both are commonly used to make preparations before the real root file system can be mounted.
Rationale
Many Linux distributions ship a single, generic Linux kernel image, one that the distribution's developers create specifically to boot on a wide variety of hardware. The device drivers for this generic kernel image are included as loadable kernel modules, because statically compiling many drivers into one kernel makes the kernel image much larger, perhaps too large to boot on computers with limited memory, and in some cases causes boot-time crashes or other problems due to probing for nonexistent or conflicting hardware. A statically compiled kernel also leaves drivers in kernel memory that are no longer used or needed. The modular approach, in turn, raises the problem of detecting and loading the modules necessary to mount the root file system at boot time, or, for that matter, of deducing where or what the root file system is.
To further complicate matters, the root file system may be on a software RAID volume, LVM, NFS (on diskless workstations), or on an encrypted partition. All of these require special preparations to mount.
Another complication is kernel support for hibernation, which suspends the computer to disk by dumping an image of the entire contents of memory to a swap partition or a regular file, then powering off. On next boot, this image has to be made accessible before it can be loaded back into memory.
To avoid having to hardcode handling for so many special cases into the kernel, an initial boot stage with a temporary root file-system – now dubbed early user space – is used. This root file-system can contain user-space helpers which do the hardware detection, module loading and device discovery necessar |
https://en.wikipedia.org/wiki/Bill%20the%20Goat | Bill the Goat is the mascot of the United States Naval Academy. The mascot is a live goat and is also represented by a costumed midshipman. There is also a bronze statue of the goat in the north end zone of Navy–Marine Corps Memorial Stadium. This statue also plays a role in "Army Week" traditions.
The first Bill the Goat appeared in 1893. Currently, Bill XXXVI reigns as the 39th mascot and is the 36th goat to be named Bill. His backup is Bill XXXVII.
The legend of Bill the Goat
Goats at sea
For centuries, ships sailed with livestock in order to provide sailors with fresh food. Ships in the British and early American navies often carried goats, both to eat garbage and other food scraps and to provide milk and butter. The first usage of "billy goat" for a male goat occurs in the 19th century, replacing the older term "he-goat". And the first creature, animal or otherwise, to circle the earth twice was a (female) goat that traveled first with Wallis (1767) and then with Captain Cook (1768). After the Cook voyage she was allowed to retire.
Goats at USNA
There is a legend that a Navy ship once sailed with a pet goat, and that the goat died during the cruise. The officers preserved the skin to have it mounted when they returned to port. Two young ensigns were entrusted with the skin. On their way to the taxidermist, they stopped by the United States Naval Academy to watch a football game. At halftime, for reasons the legend does not specify, one ensign decided to dress up in the goat skin. The crowd appreciated the effort, and Navy won the game.
Early years
In 1893, a live goat named El Cid made his debut as a mascot at the fourth Army–Navy Game. El Cid was a gift to the Brigade of Midshipmen from officers of the USS New York. With the goat, Navy gained a 6–3 win over Army that year, so he was adopted as part of the team.
In the early 1900s, the beloved mascot was finally given a name. On the return trip to the Naval Academy after the Midshipmen triumphed over |
https://en.wikipedia.org/wiki/List%20of%20Xbox%20games%20compatible%20with%20Xbox%20360 | The Xbox 360 gaming console received updates from Microsoft, from its launch in 2005 until November 2007, that enabled it to play select games from its predecessor, the Xbox. The Xbox 360 launched with backward compatibility, with the number of supported Xbox games varying by region. Microsoft continued to update the list of Xbox games that were compatible with the Xbox 360 until November 2007, when the list was finalized. Microsoft later launched the Xbox Originals program on December 7, 2007, through which select backward-compatible Xbox games could be purchased digitally on Xbox 360 consoles; the program ended less than two years later, in June 2009. The following is a list of all backward-compatible Xbox games on the Xbox 360.
History
At its launch in November 2005, the Xbox 360 did not possess hardware-based backward compatibility with Xbox games, due to the different hardware and architecture of the Xbox and Xbox 360. Instead, backward compatibility was achieved using software emulation. When the Xbox 360 launched, 212 Xbox games were supported in North America, while 156 were supported in Europe. The Japanese market had the fewest titles supported at launch, with only 12 games. Microsoft's final update to the list of backward-compatible titles came in November 2007, bringing the total to 461 Xbox games.
To use the backwards-compatibility feature on the Xbox 360, a hard drive is required. Updates to the list were provided by Microsoft as part of regular software updates via the Internet, by ordering a disc by mail from the official website, or by downloading the update from the official website and burning it to either a CD or DVD. Subscribers to Official Xbox Magazine also received updates to the backwards-compatibility list on the demo discs included with the magazine.
Each supported original Xbox game runs with an emulation profile that has been recompiled for that game, with the emulation profiles stored on the cons
https://en.wikipedia.org/wiki/Lamb%E2%80%93Oseen%20vortex | In fluid dynamics, the Lamb–Oseen vortex models a line vortex that decays due to viscosity. This vortex is named after Horace Lamb and Carl Wilhelm Oseen.
Mathematical description
Oseen looked for a solution of the Navier–Stokes equations in cylindrical coordinates (r, θ, z) with velocity components of the form
v_r = 0,  v_z = 0,  v_θ(r, t) = (Γ / 2πr) g(r, t),
where Γ is the circulation of the vortex core. The Navier–Stokes equations reduce to an equation for g which, subject to the conditions that the velocity is regular at r = 0 and that g becomes unity as r → ∞, leads to
g(r, t) = 1 − exp(−r² / 4νt),
where ν is the kinematic viscosity of the fluid. At t = 0, we have a potential vortex with concentrated vorticity at the axis; this vorticity diffuses away as time passes.
The only non-zero vorticity component is in the z direction, given by
ω_z(r, t) = (Γ / 4πνt) exp(−r² / 4νt).
The pressure field simply ensures the vortex rotates in the circumferential direction, providing the centripetal force:
∂p/∂r = ρ v_θ² / r,
where ρ is the constant density.
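The Lamb–Oseen profile v_θ(r, t) = Γ/(2πr)(1 − exp(−r²/4νt)) can be checked numerically; the circulation and viscosity values below are illustrative:

```python
# Numerical check of the Lamb-Oseen azimuthal velocity profile.
from math import exp, pi

def v_theta(r, t, gamma=1.0, nu=1e-3):
    """v_theta(r, t) = Gamma/(2 pi r) * (1 - exp(-r^2 / (4 nu t)))."""
    return gamma / (2 * pi * r) * (1 - exp(-r**2 / (4 * nu * t)))

# Far from the core the flow approaches the potential vortex Gamma/(2 pi r):
print(v_theta(10.0, t=1.0))
# Near the axis the velocity is regular (solid-body-like), not singular:
print(v_theta(1e-4, t=1.0))
# Viscosity spreads the core: velocity at fixed small r decays with time.
print(v_theta(0.05, t=1.0) > v_theta(0.05, t=100.0))  # True
```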
Generalized Oseen vortex
The generalized Oseen vortex may be obtained by looking for solutions of the form
that leads to the equation
A self-similar solution exists for the coordinate , provided , where is a constant, in which case . The solution for may be written, following Rott (1958), as
where is an arbitrary constant. For , the classical Lamb–Oseen vortex is recovered. The case corresponds to axisymmetric stagnation-point flow, where is a constant. When , , a Burgers vortex is obtained. For arbitrary , the solution becomes , where is an arbitrary constant. As , the Burgers vortex is recovered.
See also
The Rankine vortex and Kaufmann (Scully) vortex are common simplified approximations for a viscous vortex. |
https://en.wikipedia.org/wiki/Batchelor%20vortex | In fluid dynamics, Batchelor vortices, first described by George Batchelor in a 1964 article, have been found useful in analyses of airplane vortex wake hazard problems.
The model
The Batchelor vortex is an approximate solution to the Navier–Stokes equations obtained using a boundary layer approximation. The physical reasoning behind this approximation is the assumption that the axial gradient of the flow field of interest is of much smaller magnitude than the radial gradient.
The axial, radial and azimuthal velocity components of the vortex are denoted , and respectively and can be represented in cylindrical coordinates as follows:
The parameters in the above equations are
, the free-stream axial velocity,
, the velocity scale (used for nondimensionalization),
, the length scale (used for nondimensionalization),
, a measure of the core size, with initial core size and representing viscosity,
, the swirl strength, given as a ratio between the maximum tangential velocity and the core velocity.
Note that the radial component of the velocity is zero and that the axial and azimuthal components depend only on .
We now write the system above in dimensionless form by scaling time by a factor . Using the same symbols for the dimensionless variables, the Batchelor vortex can be expressed in terms of the dimensionless variables as
where denotes the free stream axial velocity and is the Reynolds number.
If one lets and considers an infinitely large swirl number then the Batchelor vortex simplifies to the Lamb–Oseen vortex for the azimuthal velocity:
where is the circulation. |
https://en.wikipedia.org/wiki/Leaky%20gut%20syndrome | Leaky gut syndrome is a hypothetical, medically unrecognized condition.
Unlike the scientific phenomenon of increased intestinal permeability ("leaky gut"), claims for the existence of "leaky gut syndrome" as a distinct medical condition come mostly from nutritionists and practitioners of alternative medicine. Proponents claim that a "leaky gut" causes chronic inflammation throughout the body that results in a wide range of conditions, including chronic fatigue syndrome, rheumatoid arthritis, lupus, migraines, multiple sclerosis, and autism. There is little evidence to support this hypothesis.
Stephen Barrett has described "leaky gut syndrome" as a fad diagnosis and says that its proponents use the alleged condition as an opportunity to sell a number of alternative-health remedies – including diets, herbal preparations, and dietary supplements. In 2009, Seth Kalichman wrote that some pseudoscientists claim that the passage of proteins through a "leaky" gut is the cause of autism. Evidence for claims that a leaky gut causes autism is weak and conflicting.
Advocates tout various treatments for "leaky gut syndrome", such as dietary supplements, probiotics, herbal remedies, gluten-free foods, and low-FODMAP, low-sugar, or antifungal diets, but there is little evidence that the treatments offered are of benefit.
None have been adequately tested to determine whether they are safe and effective for this purpose. The U.K. National Institute for Health and Care Excellence (NICE) does not recommend the use of any special diets to manage the main symptoms of autism or leaky gut syndrome.
See also
List of topics characterized as pseudoscience |
https://en.wikipedia.org/wiki/Kaufmann%20vortex | The Kaufmann vortex, also known as the Scully model, is a mathematical model for a vortex taking account of viscosity. It uses an algebraic velocity profile. This vortex is not a solution of the Navier–Stokes equations.
Kaufmann and Scully's model for the velocity in the θ direction is
v_θ(r) = (Γ / 2π) · r / (r_c² + r²),
where Γ is the circulation and r_c the core radius.
The model was suggested by W. Kaufmann in 1962, and later by Scully and Sullivan in 1972 at the Massachusetts Institute of Technology.
See also
Rankine vortex – a simpler, but more crude, approximation for a vortex.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity. |
https://en.wikipedia.org/wiki/Nintendo%20Power%20%28cartridge%29 | Nintendo Power was a video game distribution service for the Super Famicom and Game Boy, operated by Nintendo, that ran exclusively in Japan from late 1996 until February 2007. The service allowed users to download Super Famicom or Game Boy titles onto a special flash-memory cartridge for a lower price than that of a pre-written ROM cartridge.
At its 1996 launch, the service initially offered only Super Famicom titles. Game Boy titles began being offered on March 1, 2000. The service was ultimately discontinued on February 28, 2007.
History
Background
During the market lifespan of the Famicom, Nintendo developed the Disk System, a floppy disk drive peripheral with expanded RAM which allowed players to use re-writable disk media called "disk cards" at Disk Writer kiosks. The system was relatively popular but suffered from limited capacity. Nevertheless, its popularity showed Nintendo that there was a market for an economical re-writable medium.
Nintendo's first dynamic flash storage subsystem for the Super Famicom was the Satellaview, a peripheral released in 1995 that facilitated the delivery of a set of unique Super Famicom games via the St.GIGA satellite network.
Release
The Super Famicom version of Nintendo Power was released in late 1996.
The Game Boy Nintendo Power was originally planned to launch on November 1, 1999; however, due to the 1999 Jiji earthquake disrupting production in Taiwan, it was delayed until March 1, 2000.
Nintendo Power was discontinued in February 2007, with kiosks being removed from stores.
Usage
While the service was on the market, the user would first purchase the RAM cartridge, then bring it to a store featuring a Nintendo Power kiosk, where the selected games were copied to the cartridge and the store provided a printed copy of the manual. Game prices varied, with older games being relatively cheap, and newer games and Nintendo Power exclusives more expensive.
The proprietary medium made illicit duplication much more dif |
https://en.wikipedia.org/wiki/Dielectric%20breakdown%20model | Dielectric breakdown model (DBM) is a macroscopic mathematical model combining the diffusion-limited aggregation model with electric field. It was developed by Niemeyer, Pietronero, and Weismann in 1984. It describes the patterns of dielectric breakdown of solids, liquids, and even gases, explaining the formation of the branching, self-similar Lichtenberg figures.
See also
Eden growth model
Lichtenberg figure
Diffusion-limited aggregation |
https://en.wikipedia.org/wiki/Officinal | Officinal drugs, plants, and herbs are those sold in a chemist's or druggist's shop. Officinal medical preparations of such drugs are made in accordance with the prescriptions authorized by a pharmacopoeia. Officinal is not related to the word official: the classical Latin officina meant a workshop, manufactory, or laboratory, and in medieval Latin it was applied to a general storeroom; the word thus became attached to a shop where goods were sold rather than a place where things were made. Official, by contrast, descends from officium, meaning office, as in a duty or position.
In botanical nomenclature, the specific epithet officinalis derives from a plant's historical use in pharmacology.
See also
Herbalism
Officinalis |
https://en.wikipedia.org/wiki/Symmetric%20product%20of%20an%20algebraic%20curve | In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product
C × C × ... × C
or C^n, by the group action of the symmetric group S_n on n letters permuting the factors. It exists as a smooth algebraic variety, denoted Σ^nC. If C is a compact Riemann surface, Σ^nC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients.
For C the projective line (say the Riemann sphere ℂ ∪ {∞} ≈ S²), its nth symmetric product Σ^nC can be identified with complex projective space of dimension n.
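The n = 2 case of this identification can be made explicit, as a standard illustration: an unordered pair of points on the projective line is the root set of a binary quadratic form, unique up to scale, and the form's coefficients serve as homogeneous coordinates.

```latex
% An unordered pair {p, q} on P^1 corresponds to a binary quadratic form,
% unique up to a nonzero scalar, hence to a point of P^2:
\[
  \{p, q\} \subset \mathbb{P}^1
  \;\longleftrightarrow\;
  a_0 x^2 + a_1 x y + a_2 y^2
  \;\longleftrightarrow\;
  [a_0 : a_1 : a_2] \in \mathbb{P}^2 .
\]
% In general, effective divisors of degree n on P^1 correspond to binary
% forms of degree n up to scale, i.e. to points of P^n.
```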
If C has genus g ≥ 1 then the Σ^nC are closely related to the Jacobian variety J of C. More accurately, for n taking values up to g they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors.
For n = g we have Σ^gC actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now, but this does mean that for any rational function F on C
F(x_1) + ... + F(x_g)
makes sense as a rational function on J, for the x_i staying away from the poles of F.
For n > g the mapping from Σ^nC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). It has been studied in detail, for example by Kempf and Mukai.
Betti numbers and the E |
https://en.wikipedia.org/wiki/Linearly%20disjoint | In mathematics, algebras A, B over a field k inside some field extension of k are said to be linearly disjoint over k if the following equivalent conditions are met:
(i) The map $A \otimes_k B \to AB$ induced by $(x, y) \mapsto xy$ is injective.
(ii) Any k-basis of A remains linearly independent over B.
(iii) If $\{u_i\}$, $\{v_j\}$ are k-bases for A, B, then the products $\{u_i v_j\}$ are linearly independent over k.
Note that, since every subalgebra of a field is a domain, (i) implies $A \otimes_k B$ is a domain (in particular reduced). Conversely, if A and B are fields and either A or B is an algebraic extension of k and $A \otimes_k B$ is a domain, then it is a field and A and B are linearly disjoint. However, there are examples where $A \otimes_k B$ is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k.
One also has: A, B are linearly disjoint over k if and only if the subfields of the ambient field extension generated by A, resp. B, are linearly disjoint over k. (cf. Tensor product of fields)
Suppose A, B are linearly disjoint over k. If $A' \subseteq A$, $B' \subseteq B$ are subalgebras, then $A'$ and $B'$ are linearly disjoint over k. Conversely, if any finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements.)
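As a concrete illustration of the basis condition (an example added here, not in the source): take $k = \mathbf{Q}$ with $A = \mathbf{Q}(\sqrt{2})$ and $B = \mathbf{Q}(\sqrt{3})$ inside $\mathbf{R}$.

```latex
\{1, \sqrt{2}\}\ \text{a } \mathbf{Q}\text{-basis of } A,\qquad
\{1, \sqrt{3}\}\ \text{a } \mathbf{Q}\text{-basis of } B,\qquad
\text{products}\ \{1,\ \sqrt{2},\ \sqrt{3},\ \sqrt{6}\}\ \text{linearly independent over } \mathbf{Q},
```

so A and B are linearly disjoint over $\mathbf{Q}$. By contrast, $A = B = \mathbf{Q}(\sqrt{2})$ fails, since the products $\{1, \sqrt{2}, \sqrt{2}, 2\}$ are linearly dependent.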
See also
Tensor product of fields |
https://en.wikipedia.org/wiki/IMLAC | IMLAC Corporation was an American electronics company in Needham, Massachusetts, that manufactured graphical display systems, mainly the PDS-1 and PDS-4, in the 1970s.
The PDS-1 debuted in 1970. It was the first low-cost commercial realization of Ivan Sutherland's Sketchpad system of a highly interactive computer graphics display with motion. Selling for $8,300 before options, its price was equivalent to the cost of four Volkswagen Beetles. The PDS-1 was functionally similar to the much bigger IBM 2250, which cost 30 times more. It was a significant step forward towards computer workstations and modern displays.
The PDS-1 consisted of a CRT monitor, keyboard, light pen, and a control panel on a small desk with most electronic logic in the desk pedestal. The electronics included a simple 16-bit minicomputer, 8-16 kilobytes of magnetic-core memory, and a display processor for driving CRT beam movements.
IMLAC is not an acronym but is the name of a poet-philosopher from Samuel Johnson's novel, The History of Rasselas, Prince of Abissinia.
Timeline of products
1968: Imlac founded. Their business plan was interactive graphics terminals for stock exchange traders, which did not happen.
1970: PDS-1 introduced for the general graphics market.
1972: PDS-1D introduced. It was similar to the PDS-1 with improved circuits and backplane.
1973: PDS-1G introduced.
1974: PDS-4 introduced. It ran twice as fast and displayed twice as much text or graphics without flicker. Its display processor supported instantaneous interactive magnification with clipping. It had an optional floating point add-on.
1977: A total of about 700 PDS-4 systems had been sold in the US. They were built upon order rather than being mass-produced.
1978: Dynagraphic 3250 introduced. It was designed to be used mainly by a proprietary Fortran-coded graphics library running on larger computers, without customer programming inside the terminal.
????: Dynagraphic 6220 introduced.
1979: Imlac Corporat |
https://en.wikipedia.org/wiki/Prime%20decomposition%20of%203-manifolds | In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) finite collection of prime 3-manifolds.
A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. This condition is necessary since for any manifold $M$ of dimension $n$ it is true that $M \cong M \mathbin{\#} S^n$ (where $\#$ means the connected sum). If $P$ is a prime 3-manifold then either it is $S^2 \times S^1$ or the non-orientable $S^2$ bundle over $S^1$,
or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of $S^2$ over $S^1$.
The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly: every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable $S^2$ bundles over $S^1$. This sum is unique as long as we specify that each summand is either irreducible or a non-orientable $S^2$ bundle over $S^1$.
The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor. |
https://en.wikipedia.org/wiki/Child%20mortality | Child mortality is the mortality of children under the age of five. The child mortality rate (also under-five mortality rate) refers to the probability of dying between birth and exactly five years of age expressed per 1,000 live births.
It encompasses neonatal mortality and infant mortality (the probability of death in the first year of life).
Reduction of child mortality is reflected in several of the United Nations' Sustainable Development Goals. Target 3.2 is "by 2030, end preventable deaths of newborns and children under 5 years of age, with all countries aiming to reduce … under‑5 mortality to at least as low as 25 per 1,000 live births."
Child mortality rates have decreased in the last 40 years. Rapid progress has resulted in a significant decline in preventable child deaths since 1990, with the global under-5 mortality rate declining by over half between 1990 and 2016. While in 1990, 12.6 million children under age five died, in 2016 that number fell to 5.6 million children, and then in 2020, the global number fell again to 5 million. However, despite advances, there are still 15,000 under-five deaths per day from largely preventable causes. About 80 per cent of these occur in sub-Saharan Africa and South Asia, and just 6 countries account for half of all under-five deaths: China, India, Pakistan, Nigeria, Ethiopia and the Democratic Republic of the Congo. 45% of these children died during the first 28 days of life. Death rates were highest among children under age 1, followed by children ages 15 to 19, 1 to 4, and 5 to 14.
Types of Child Mortality
Child mortality refers to the number of deaths of children under the age of 5 per 1,000 live births. More specific terms include:
Perinatal mortality rate: Number of child deaths within first week of birth ÷ total number of births.
Neonatal mortality rate: Number of child deaths within first 28 days of life ÷ total number of births.
Infancy mortality rate: Number of child deaths within first 12 months of life ÷ tot |
https://en.wikipedia.org/wiki/E%20Ink | E Ink (electronic ink) is a brand of electronic paper (e-paper) display technology commercialized by the E Ink Corporation, which was co-founded in 1997 by MIT undergraduates JD Albert and Barrett Comiskey, MIT Media Lab professor Joseph Jacobson, Jerome Rubin and Russ Wilcox.
It is available in grayscale and color and is used in mobile devices such as e-readers, digital signage, smartwatches, mobile phones, electronic shelf labels and architecture panels.
History
Background
The notion of a low-power paper-like display had existed since the 1970s, originally conceived by researchers at Xerox PARC, but had never been realized. While a post-doctoral student at Stanford University, physicist Joseph Jacobson envisioned a multi-page book with content that could be changed at the push of a button and required little power to use.
Neil Gershenfeld recruited Jacobson for the MIT Media Lab in 1995, after hearing Jacobson's ideas for an electronic book. Jacobson, in turn, recruited MIT undergrads Barrett Comiskey, a math major, and J.D. Albert, a mechanical engineering major, to create the display technology required to realize his vision.
Product development
The initial approach was to create tiny spheres which were half white and half black and which, depending on the electric charge, would rotate so that either the white side or the black side would be visible on the display. Most experienced chemists and materials scientists told Albert and Comiskey that this approach was impossible, and the two indeed had trouble creating perfectly half-white, half-black spheres; during his experiments, Albert accidentally created some all-white spheres.
Comiskey experimented with charging and encapsulating those all-white particles in microcapsules mixed in with a dark dye. The result was a system of microcapsules that could be applied to a surface and could then be charged independently to create black and white images. A first patent was filed by MIT for the microencapsulated electrophore |
https://en.wikipedia.org/wiki/Dirichlet%27s%20test | In mathematics, Dirichlet's test is a method of testing for the convergence of a series. It is named after its author Peter Gustav Lejeune Dirichlet, and was published posthumously in the Journal de Mathématiques Pures et Appliquées in 1862.
Statement
The test states that if $(a_n)$ is a sequence of real numbers and $(b_n)$ a sequence of complex numbers satisfying
$(a_n)$ is monotonic,
$\lim_{n\to\infty} a_n = 0$,
$\left|\sum_{n=1}^{N} b_n\right| \le M$ for every positive integer $N$,
where $M$ is some constant, then the series
$$\sum_{n=1}^{\infty} a_n b_n$$
converges.
Proof
Let $S_n = \sum_{k=1}^{n} a_k b_k$ and $B_n = \sum_{k=1}^{n} b_k$.
From summation by parts, we have that $S_n = a_{n+1} B_n + \sum_{k=1}^{n} B_k (a_k - a_{k+1})$. Since $(B_n)$ is bounded by $M$ and $a_n \to 0$, the first of these terms, $a_{n+1} B_n$, approaches zero as $n \to \infty$.
We have, for each $k$, $|B_k (a_k - a_{k+1})| \le M\,|a_k - a_{k+1}|$.
Since is monotone, it is either decreasing or increasing:
If $(a_n)$ is decreasing, $\sum_{k=1}^{n} M\,|a_k - a_{k+1}| = \sum_{k=1}^{n} M\,(a_k - a_{k+1})$,
which is a telescoping sum that equals $M\,(a_1 - a_{n+1})$ and therefore approaches $M a_1$ as $n \to \infty$. Thus, $\sum_{k=1}^{\infty} M\,(a_k - a_{k+1})$ converges.
If $(a_n)$ is increasing, $\sum_{k=1}^{n} M\,|a_k - a_{k+1}| = \sum_{k=1}^{n} M\,(a_{k+1} - a_k)$,
which is again a telescoping sum that equals $M\,(a_{n+1} - a_1)$ and therefore approaches $-M a_1$ as $n \to \infty$. Thus, again, $\sum_{k=1}^{\infty} M\,(a_{k+1} - a_k)$ converges.
So, the series $\sum_{k=1}^{\infty} B_k (a_k - a_{k+1})$ converges, by the absolute convergence test. Hence $S_n$ converges.
Applications
A particular case of Dirichlet's test is the more commonly used alternating series test for the case $b_n = (-1)^n$.
Another corollary is that $\sum_{n=1}^{\infty} a_n \sin n$ converges whenever $(a_n)$ is a decreasing sequence that tends to zero. To see that $\sum_{n=1}^{N} \sin n$
is bounded, we can use the summation formula $\sum_{n=1}^{N} \sin n = \frac{\sin\frac{N}{2}\,\sin\frac{N+1}{2}}{\sin\frac{1}{2}}.$
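A quick numerical sanity check of this corollary (illustrative only, not from the source): the partial sums of sin n stay bounded, so with the decreasing sequence 1/n Dirichlet's test gives convergence of the series of sin(n)/n, whose value is known to equal (π − 1)/2.

```python
import math

# Standard closed form for the partial sums of sin(n):
#   sum_{n=1}^{N} sin(n) = sin(N/2) * sin((N+1)/2) / sin(1/2),
# so every partial sum is bounded in absolute value by 1/sin(1/2).
def sin_partial_sum(N):
    return sum(math.sin(n) for n in range(1, N + 1))

bound = 1.0 / math.sin(0.5)
max_abs = max(abs(sin_partial_sum(N)) for N in range(1, 2001))

# With a_n = 1/n decreasing to 0, Dirichlet's test gives convergence of
# sum_{n>=1} sin(n)/n; the limit is known to be (pi - 1)/2.
S = sum(math.sin(n) / n for n in range(1, 200001))
```

Running this shows `max_abs` staying below the bound and `S` agreeing with (π − 1)/2 ≈ 1.0708 to a few decimal places.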
Improper integrals
An analogous statement for convergence of improper integrals is proven using integration by parts. If the integral of a function f is uniformly bounded over all intervals, and g is a non-negative monotonically decreasing function, then the integral of fg is a convergent improper integral.
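For instance (an example added for illustration), taking $f(x) = \sin x$ and $g(x) = 1/x$ on $[1, \infty)$:

```latex
\left| \int_{1}^{t} \sin x \, dx \right| = \left| \cos 1 - \cos t \right| \le 2
\quad \text{for all } t \ge 1,
\qquad
g(x) = \frac{1}{x} \ \text{non-negative and decreasing to } 0,
```

so $\int_{1}^{\infty} \frac{\sin x}{x}\,dx$ converges as an improper integral.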
Notes |
https://en.wikipedia.org/wiki/Glossary%20of%20arithmetic%20and%20diophantine%20geometry | This is a glossary of arithmetic and diophantine geometry in mathematics, areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry. Much of the theory is in the form of proposed conjectures, which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields—including as of special interest number fields and finite fields—and over local fields. Of those, only the complex numbers are algebraically closed; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V.
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers. Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory.
A
B
C
D
E
F
G
H
I
K
L
M
N
O
Q
R
S
T
U
V
W
See also
Arithmetic topology
Arithmetic dynamics |
https://en.wikipedia.org/wiki/Height%20function | A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers.
For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the absolute values of the numerators and denominators of the coordinates (e.g. $\max(|p|, |q|)$ for a rational number $p/q$ in lowest terms), but in a logarithmic scale.
Significance
Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory, which was proved by Alan Baker.
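The finiteness claim is easy to check computationally; a stdlib-only sketch (the function name and the bound are illustrative, not from the source):

```python
from fractions import Fraction
from math import gcd

def rationals_of_height_at_most(B):
    """All rationals p/q in lowest terms with naive multiplicative
    height max(|p|, |q|) <= B; the resulting set is finite for every B."""
    out = {Fraction(0)}  # 0 is conventionally given height 1 here
    for q in range(1, B + 1):
        for p in range(-B, B + 1):
            if p != 0 and gcd(abs(p), q) == 1:  # lowest terms only
                out.add(Fraction(p, q))
    return out
```

For B = 2 this yields the seven rationals {0, ±1, ±2, ±1/2}; the count grows with B but is always finite.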
In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem, proved by Wolfgang Schmidt, demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation.
Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem by André Weil and Gerd Faltings respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
History
An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator |
https://en.wikipedia.org/wiki/Bombieri%E2%80%93Lang%20conjecture | In arithmetic geometry, the Bombieri–Lang conjecture is an unsolved problem conjectured by Enrico Bombieri and Serge Lang about the Zariski density of the set of rational points of an algebraic variety of general type.
Statement
The weak Bombieri–Lang conjecture for surfaces states that if $X$ is a smooth surface of general type defined over a number field $k$, then the $k$-rational points of $X$ do not form a dense set in the Zariski topology on $X$.
The general form of the Bombieri–Lang conjecture states that if $X$ is a positive-dimensional algebraic variety of general type defined over a number field $k$, then the $k$-rational points of $X$ do not form a dense set in the Zariski topology.
The refined form of the Bombieri–Lang conjecture states that if $X$ is an algebraic variety of general type defined over a number field $k$, then there is a dense open subset $U$ of $X$ such that for all number field extensions $K$ over $k$, the set of $K$-rational points of $U$ is finite.
History
The Bombieri–Lang conjecture was independently posed by Enrico Bombieri and Serge Lang. In a 1980 lecture at the University of Chicago, Enrico Bombieri posed a problem about the degeneracy of rational points for surfaces of general type. Independently in a series of papers starting in 1971, Serge Lang conjectured a more general relation between the distribution of rational points and algebraic hyperbolicity, formulated in the "refined form" of the Bombieri–Lang conjecture.
Generalizations and implications
The Bombieri–Lang conjecture is an analogue for surfaces of Faltings's theorem, which states that algebraic curves of genus greater than one only have finitely many rational points.
If true, the Bombieri–Lang conjecture would resolve the Erdős–Ulam problem, as it would imply that there do not exist dense subsets of the Euclidean plane all of whose pairwise distances are rational.
In 1997, Lucia Caporaso, Barry Mazur, Joe Harris, and Patricia Pacelli showed that the Bombieri–Lang conjecture implies a uniform boundedness conjecture for rational points: th |
https://en.wikipedia.org/wiki/Positive%20definiteness | In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. See, in particular:
Positive-definite bilinear form
Positive-definite function
Positive-definite function on a group
Positive-definite functional
Positive-definite kernel
Positive-definite matrix
Positive-definite quadratic form |
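For the matrix case, positive-definiteness of a real symmetric matrix can be tested with Sylvester's criterion (all leading principal minors positive); a small stdlib-only sketch, with illustrative helper names:

```python
def det(M):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)
    return total

def is_positive_definite(A):
    """Sylvester's criterion: a symmetric real matrix is positive-definite
    iff all its leading principal minors are positive. A is a list of rows."""
    n = len(A)
    for k in range(1, n + 1):
        minor = [row[:k] for row in A[:k]]
        if det(minor) <= 0:
            return False
    return True
```

Note the criterion in this form assumes the matrix is symmetric; for example [[2, -1], [-1, 2]] passes while [[1, 2], [2, 1]] does not.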
https://en.wikipedia.org/wiki/D54%20%28protocol%29 | D54 is an analogue lighting communications protocol used to control stage lighting. It was developed by Strand Lighting in the late 1970s and was originally designed to handle 384 channels. Though more advanced protocols such as DMX512 (Digital Multiplex) exist, it was widely used in larger venues such as London's West End theatres which had Strand Lighting dimming installations, and it was popular amongst technicians because all the levels can be "seen" on an oscilloscope. D54 is still supported on legacy equipment such as the Strand 500 series consoles alongside DMX. Generally a protocol converter is now used to convert DMX down to the native D54.
History
One of the significant problems in controlling dimmers is getting the control signal from a lighting control unit to the dimmer units. For many years this was achieved by providing a dedicated wire from the control unit to each dimmer (analogue control) where the voltage present on the wire was varied by the control unit to set the output level of the dimmer. In about 1976, to deal with the bulky cable requirements of analogue control, Strand's R&D group in the UK developed an analogue multiplexing control system designated D54 (D54 is the internal standards number, which became the accepted name). Originally developed for use on the Strand Galaxy (1980) and Strand Gemini (1984) control desks.
Although a claimed expansion capability of 768 dimmers was documented, early receivers used simple hardware counters that rolled over before reaching 768, effectively preventing commercial exploitation. The refresh period would also have been slow on such a long dimmer update cycle. Instead, multiple D54 streams were supported by some later consoles.
D54 was developed in the United Kingdom at approximately the same time as AMX192 (another analogue multiplexing protocol) was developed in the United States, and the two protocols remained almost exclusively in those countries.
Protocol
|
https://en.wikipedia.org/wiki/Posterior%20cord | The posterior cord is a part of the brachial plexus. It consists of contributions from all of the roots of the brachial plexus.
The posterior cord gives rise to the following nerves:
Additional images |
https://en.wikipedia.org/wiki/Medial%20pectoral%20nerve | The medial pectoral nerve (also known as the medial anterior thoracic nerve) is (typically) a branch of the medial cord of the brachial plexus and is derived from spinal nerve roots C8-T1. It provides motor innervation to the pectoralis minor muscle, and the lower half (sternal part) of the pectoralis major muscle. It runs along the inferior border of the pectoralis minor muscle.
Damage to the medial pectoral nerve can result in inability to elevate the shoulder.
Anatomy
Origin
The medial pectoral nerve usually arises from the medial cord of the brachial plexus; it can however occasionally arise directly from the anterior division of the inferior trunk of the brachial plexus. It is derived from the eighth cervical (C8) and first thoracic (T1) spinal nerve roots.
The origin is situated posterior to the axillary artery.
Course and relations
It passes behind the first part of the axillary artery, curves forward between the axillary artery and vein, and unites in front of the artery with a filament from the lateral pectoral nerve.
It then enters the deep surface of the pectoralis minor muscle, where it divides into a number of branches, which supply the muscle.
Two or three branches pierce the muscle and end in the sternocostal head of the pectoralis major muscle. The medial pectoral nerve pierces both the pectoralis minor and the sternocostal head of the pectoralis major. The lateral pectoral nerve pierces only the clavicular head of the pectoralis major.
Clinical relevance
The medial pectoral nerve can be used as a donor nerve when reconstructing a damaged brachial plexus, or axillary nerve.
See also
Lateral pectoral nerve
Additional images |
https://en.wikipedia.org/wiki/Sputum%20culture | A sputum culture is a test to detect and identify bacteria or fungi that infect the lungs or breathing passages. Sputum is a thick fluid produced in the lungs and in the adjacent airways. Normally, a fresh morning sample is preferred for the bacteriological examination of sputum. A sample of sputum is collected in a sterile, wide-mouthed, dry, leak-proof and break-resistant plastic container and sent to the laboratory for testing. Sampling may be performed by sputum being expectorated (produced by coughing), induced (saline is sprayed in the lungs to induce sputum production), or taken via an endotracheal tube with a protected specimen brush (commonly used on patients on respirators) in an intensive care setting. For selected organisms such as Cytomegalovirus or Pneumocystis jirovecii in specific clinical settings (immunocompromised patients) a bronchoalveolar lavage might be taken by an experienced pneumologist. If no bacteria or fungi grow, the culture is negative. If organisms that can cause the infection (pathogenic organisms) grow, the culture is positive. The type of bacterium or fungus is identified by microscopy, colony morphology and biochemical tests of bacterial growth.
If bacteria or fungi that can cause infection grow in the culture, other tests can determine which antimicrobial agent will most effectively treat the infection. This is called susceptibility or sensitivity testing.
In a hospital setting, a sputum culture is most commonly ordered if a patient has a pneumonia. The Infectious Diseases Society of America recommends that sputum cultures be done in pneumonia requiring hospitalization, while the American College of Chest Physicians does not. One reason for such a discrepancy is that normal, healthy lungs have bacteria, and sputum cultures collect both normal and pathogenic bacteria. However, pure cultures of common respiratory pathogens in the absence of upper respiratory flora combined with symptoms of respiratory distress provides strong |
https://en.wikipedia.org/wiki/Lepidium%20virginicum | Lepidium virginicum, also known as least pepperwort or Virginia pepperweed, is an herbaceous plant in the mustard family (Brassicaceae). It is native to much of North America, including most of the United States and Mexico and southern regions of Canada, as well as most of Central America. It can be found elsewhere as an introduced species.
Virginia pepperweed grows as a weed in most crops and is found in roadsides, landscapes and waste areas. It prefers sunny locales with dry soil.
Description
Lepidium virginicum is an herbaceous annual or biennial. The entire plant is generally between 10 and 50 centimeters tall. The leaves on the stems of Virginia pepperweed are sessile, linear to lanceolate and get larger as they approach the base.
As with Lepidium campestre, Virginia pepperweed's most identifiable characteristic is its raceme, which comes from the plant's highly branched stem. The racemes give Virginia pepperweed the appearance of a bottlebrush. On the racemes are first small white flowers, and later greenish fruits.
All parts of the plant have a peppery taste.
Uses
The plant is edible. The young leaves can be used as a potherb, sautéed or used raw, such as in salads. The young seedpods can be used as a substitute for black pepper. The leaves contain protein, vitamin A and vitamin C. |
https://en.wikipedia.org/wiki/Euclid%27s%20theorem | Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proved by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that at least one additional prime number not in this list exists. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would divide P (since P is the product of every number in the list); but p also divides P + 1 = q, as just stated. If p divides P and also q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists beyond those in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, as Euclid had no way of writing an arbitrary list of primes, he used a method that he frequently applied, that is, the method of generalizable example. Namely, he picks just three primes and using the general method outlined above, proves that he can always find an additional prime. Euclid presumably assumes that his readers are convinced that a similar proof will work, no matter how many primes are originally picked.
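Euclid's construction is easy to mirror in code; a short sketch (the helper names are illustrative, and trial division stands in for "some prime factor p divides q"):

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n (for n >= 2), found by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no factor <= sqrt(n), so n itself is prime

def new_prime(primes):
    """Given a finite list of primes, return a prime outside the list,
    following Euclid's argument: form P + 1 and take a prime factor of it."""
    P = 1
    for p in primes:
        P *= p
    q = P + 1
    # q is either prime itself or has a smaller prime factor; in both
    # cases that factor cannot divide P, hence is not in the input list.
    return smallest_prime_factor(q)
```

For example, from [2, 3, 5, 7] the construction yields 211 (here 210 + 1 happens to be prime), while from [2, 3, 5, 7, 11, 13] it yields 59, a factor of 30031 = 59 × 509, illustrating that P + 1 need not itself be prime.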
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on lo |
https://en.wikipedia.org/wiki/Carrion%20flower | Carrion flowers, also known as corpse flowers or stinking flowers, are mimetic flowers that emit an odor that smells like rotting flesh. Apart from the scent, carrion flowers often display additional characteristics that contribute to the mimesis of a decaying corpse. These include their specific coloration (red, purple, brown), the presence of setae and orifice-like flower architecture. Carrion flowers attract mostly scavenging flies and beetles as pollinators. Some species may trap the insects temporarily to ensure the gathering and transfer of pollen.
Plants known as "carrion flower"
Amorphophallus
Many plants in the genus Amorphophallus (family Araceae) are known as carrion flowers. One such plant is the Titan arum (Amorphophallus titanum), which has the world's largest unbranched inflorescence. Rather than a single flower, the titan arum presents an inflorescence or compound flower composed of a spadix or stalk of small and anatomically reduced male and female flowers, surrounded by a spathe that resembles a single giant petal. This plant has a mechanism to heat up the spadix enhancing the emission of the strong odor of decaying meat to attract its pollinators, carrion-eating beetles and "flesh flies" (family Sarcophagidae). It was first described scientifically in 1878 in Sumatra.
Rafflesia
Flowers of plants in the genus Rafflesia (family Rafflesiaceae) emit an odor similar to that of decaying meat. This odor attracts the flies that pollinate the plant. The world's largest single bloom is R. arnoldii. This rare flower is found in the rainforests of Borneo and Sumatra. It can grow to be across and weigh up to . R. arnoldii is a parasitic plant on Tetrastigma vine, which grows only in primary rainforests. It has no visible leaves, roots, or stem. It does not photosynthesize, but rather uses the host plant to obtain water and nutrients.
Stapelia
Plants in the genus Stapelia are also called "carrion flowers". They are small, spineless, cactus-like succulen |
https://en.wikipedia.org/wiki/Belyi%27s%20theorem | In mathematics, Belyi's theorem on algebraic curves states that any non-singular algebraic curve C, defined by algebraic number coefficients, represents a compact Riemann surface which is a ramified covering of the Riemann sphere, ramified at three points only.
This is a result of G. V. Belyi from 1979. At the time it was considered surprising, and it spurred Grothendieck to develop his theory of dessins d'enfant, which describes non-singular algebraic curves over the algebraic numbers using combinatorial data.
Quotients of the upper half-plane
It follows that the Riemann surface in question can be taken to be the quotient
H/Γ
(where H is the upper half-plane and Γ is a subgroup of finite index in the modular group) compactified by cusps. Since the modular group has non-congruence subgroups, it is not the conclusion that any such curve is a modular curve.
Belyi functions
A Belyi function is a holomorphic map from a compact Riemann surface S to the complex projective line P^1(C) ramified only over three points, which after a Möbius transformation may be taken to be $\{0, 1, \infty\}$. Belyi functions may be described combinatorially by dessins d'enfants.
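Two standard examples (added for illustration): the power map, ramified only over 0 and ∞, and the Chebyshev polynomial $T_n$, ramified only over $-1$, $1$ and $\infty$ (a Möbius transformation moves these to $0$, $1$, $\infty$):

```latex
x \longmapsto x^{n} : \mathbf{P}^1(\mathbf{C}) \to \mathbf{P}^1(\mathbf{C}),
\qquad
x \longmapsto T_{n}(x), \quad T_n(\cos\theta) = \cos n\theta .
```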
Belyi functions and dessins d'enfants – but not Belyi's theorem – date at least to the work of Felix Klein; he used them in his article to study an 11-fold cover of the complex projective line with monodromy group PSL(2,11).
Applications
Belyi's theorem is an existence theorem for Belyi functions, and has subsequently been much used in the inverse Galois problem. |
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem%20for%20surfaces | In mathematics, the Riemann–Roch theorem for surfaces describes the dimension of linear systems on an algebraic surface. The classical form of it was first given by Castelnuovo, after preliminary versions of it were found by Max Noether and Enriques. The sheaf-theoretic version is due to Hirzebruch.
Statement
One form of the Riemann–Roch theorem states that if D is a divisor on a non-singular projective surface then
$$\chi(D) = \chi(0) + \tfrac{1}{2}\, D.(D - K)$$
where χ is the holomorphic Euler characteristic, the dot . is the intersection number, and K is the canonical divisor. The constant χ(0) is the holomorphic Euler characteristic of the trivial bundle, and is equal to 1 + p_a, where p_a is the arithmetic genus of the surface. For comparison, the Riemann–Roch theorem for a curve states that χ(D) = χ(0) + deg(D).
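Writing the theorem as $\chi(D) = \chi(0) + \tfrac{1}{2} D.(D - K)$, a consistency check (added here, not in the source) on the projective plane, where $K = -3H$ for the hyperplane class $H$ and $\chi(0) = 1$:

```latex
\chi(dH) = 1 + \tfrac{1}{2}\, dH.(dH + 3H)
         = 1 + \tfrac{d^{2} + 3d}{2}
         = \tfrac{(d+1)(d+2)}{2},
```

which agrees with the dimension of the space of degree-d forms in three variables.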
Noether's formula
Noether's formula states that
$$\chi = \frac{c_1^2 + c_2}{12} = \frac{(K.K) + e}{12}$$
where $\chi = \chi(0)$ is the holomorphic Euler characteristic, $c_1^2 = (K.K)$ is a Chern number and the self-intersection number of the canonical class K, and $e = c_2$ is the topological Euler characteristic. It can be used to replace the
term χ(0) in the Riemann–Roch theorem with topological terms; this gives the Hirzebruch–Riemann–Roch theorem for surfaces.
Relation to the Hirzebruch–Riemann–Roch theorem
For surfaces, the Hirzebruch–Riemann–Roch theorem is essentially the Riemann–Roch theorem for surfaces combined with the Noether formula. To see this, recall that for each divisor D on a surface there is an invertible sheaf L = O(D) such that the linear system of D is more or less the space of sections of L.
For surfaces the Todd class is $1 + \frac{c_1}{2} + \frac{c_1^2 + c_2}{12}$, and the Chern character of the sheaf L is just $1 + c_1(L) + \frac{c_1(L)^2}{2}$, so the Hirzebruch–Riemann–Roch theorem states that
$$\chi(L) = \frac{c_1(L)\,(c_1(L) + c_1)}{2} + \frac{c_1^2 + c_2}{12}.$$
Fortunately this can be written in a clearer form as follows. First putting D = 0 shows that
$$\chi(0) = \frac{c_1^2 + c_2}{12}$$
(Noether's formula)
For invertible sheaves (line bundles) the second Chern class vanishes. The products of second cohomology classes can be identified with intersection numbers in the Picard group, and we get a more classical version of Riemann–Roch |
https://en.wikipedia.org/wiki/Motorola%20Mobility | Motorola Mobility LLC, marketed as Motorola, is an American consumer electronics manufacturer primarily producing smartphones and other mobile devices running Android. Headquartered at Merchandise Mart in Chicago, Illinois, it is a subsidiary of the Chinese multinational technology company Lenovo.
Motorola Mobility was formed on January 4, 2011, after a split of Motorola, Inc. into two separate companies, with Motorola Mobility assuming the company's consumer-oriented product lines (including its mobile phone business, as well as its cable modems and pay television set-top boxes), while Motorola Solutions assumed the company's enterprise-oriented product lines.
In May 2012 Google acquired Motorola Mobility for US$12.5 billion; the main intent of the purchase was to gain Motorola Mobility's patent portfolio, in order to protect other Android vendors from litigation. Under Google, Motorola Mobility increased its focus on the entry-level smartphone market, and under the Google ATAP division, began development on Project Ara—a platform for modular smartphones with interchangeable components. Shortly after the purchase, Google sold Motorola Mobility's cable modem and set-top box business to Arris Group.
Google's ownership of the company was short-lived. In January 2014, Google announced that it would sell Motorola Mobility to Lenovo for $6.91 billion. The sale, which excluded ATAP and all but 2,000 of Motorola Mobility's patents, was completed on 30 October 2014. Lenovo disclosed an intent to use Motorola Mobility as a way to expand into the United States smartphone market. In August 2015, Lenovo's existing smartphone division was subsumed by Motorola Mobility.
History
On January 4, 2011, Motorola, Inc. was split into two publicly traded companies; Motorola Solutions took on the company's enterprise-oriented business units, while the remaining consumer division was taken on by Motorola Mobility. Motorola Mobility originally consisted of the mobile devices business, |
https://en.wikipedia.org/wiki/Separation%20of%20duties | Separation of duties (SoD), also known as segregation of duties, is the concept of having more than one person required to complete a task. It is an administrative control used by organisations to prevent fraud, sabotage, theft, misuse of information, and other security compromises. In the political realm, it is known as the separation of powers, as can be seen in democracies where the government is separated into three independent branches: a legislature, an executive, and a judiciary.
General description
Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost/effort required.
In essence, SoD implements an appropriate level of checks and balances upon the activities of individuals. R. A. Botha and J. H. P. Eloff in the IBM Systems Journal describe SoD as follows.
Separation of duty, as a security principle, has as its primary objective the prevention of fraud and errors. This objective is achieved by disseminating the tasks and associated privileges for a specific business process among multiple users. This principle is demonstrated in the traditional example of separation of duty found in the requirement of two signatures on a cheque.
Actual job titles and organizational structure may vary greatly from one organization to another, depending on the size and nature of the business. Accordingly, rank or hierarchy are less important than the skillset and capabilities of the individuals involved. With the concept of SoD, business critical duties can be categorized into four types of functions: authorization, custody, record keeping, and reconciliation. In a perfect system, no one person should handle more than one type of function.
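The rule that no one person should handle more than one of the four function types can be sketched as a simple conflict check. The names and data layout below are illustrative, not from the source:

```python
# The four SoD duty categories named in the text above.
FUNCTION_TYPES = {"authorization", "custody", "record_keeping", "reconciliation"}

def sod_violations(assignments):
    """assignments: dict mapping person -> set of function types they hold.
    Returns the set of people who hold more than one type of function."""
    for person, duties in assignments.items():
        unknown = duties - FUNCTION_TYPES
        if unknown:
            raise ValueError(f"unknown duty types for {person}: {unknown}")
    return {p for p, duties in assignments.items() if len(duties) > 1}

assignments = {
    "alice": {"authorization"},
    "bob": {"custody", "record_keeping"},   # SoD conflict
    "carol": {"reconciliation"},
}
print(sod_violations(assignments))  # -> {'bob'}
```

A real deployment would check transactions rather than static assignments, but the invariant being enforced is the same.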
Principles
In principle, several approaches are viable, as partially or entirely different paradigms:
sequential separation (two signatures principle)
individual separation (four eyes principle)
spatial separation (separate action in separ |
https://en.wikipedia.org/wiki/Pose%20%28computer%20vision%29 | In the fields of computing and computer vision, pose (or spatial pose) represents the position and orientation of an object, usually in three dimensions. Poses are often stored internally as transformation matrices. The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not.
In computer vision, the pose of an object is often estimated from camera input by the process of pose estimation. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object based on its perceived position and orientation in the environment. Other applications include skeletal action recognition.
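As noted above, poses are often stored as transformation matrices. A minimal sketch (ours, not from the source) of building a 4×4 rigid pose from a rotation and a translation, and applying it to points:

```python
import numpy as np

def make_pose(R, t):
    """Pack rotation R (orientation) and translation t (position) into a
    4x4 homogeneous rigid-transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_pose(T, points):
    """Map Nx3 points from the object frame into the world frame."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (T @ pts.T).T[:, :3]

# 90-degree rotation about z, followed by a translation.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
T = make_pose(Rz, np.array([1.0, 2.0, 0.0]))
print(apply_pose(T, np.array([[1.0, 0.0, 0.0]])))  # -> [[1. 3. 0.]]
```

Composing poses is then just matrix multiplication, which is why the matrix form is the common internal representation.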
Pose estimation
The specific task of determining the pose of an object in an image (or stereo images, image sequence) is referred to as pose estimation. Pose estimation problems can be solved in different ways depending on the image sensor configuration, and choice of methodology. Three classes of methodologies can be distinguished:
Analytic or geometric methods: These assume that the image sensor (camera) is calibrated and that the mapping between 3D points in the scene and 2D points in the image is known. If the geometry of the object is also known, the projected image of the object on the camera image is a well-known function of the object's pose. Once a set of control points on the object, typically corners or other feature points, has been identified, it is then possible to solve the pose transformation from a set of equations which relate the 3D coordinates of the points with their 2D image coordinates. Algorithms that determine the pose of a point cloud with respect to another point cloud are known as point set registration algorithms, if the correspondences between points are not already known.
Genetic algorithm methods: If the pose of an object does not have to be computed in real-time a genetic algorithm may be used. This approach is robust especially when |
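When point correspondences are known, the rigid pose aligning two point clouds mentioned above has a closed-form least-squares solution via SVD (the Kabsch/Umeyama approach). A minimal sketch, our illustration rather than anything from the source:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ p + t - q|| over the Nx3
    corresponding point sets P (source) and Q (target)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation + translation from noiseless correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```

With unknown correspondences this becomes the harder point set registration problem (e.g. ICP alternates correspondence estimation with this closed-form step).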
https://en.wikipedia.org/wiki/DOCSIS%20Set-top%20Gateway | DOCSIS Set-top Gateway (or DSG) is a specification describing how out-of-band data is delivered to a cable set-top box. Cable set-top boxes need a reliable source of out of band data for information such as program guides, channel lineups, and updated code images.
Features
DSG is an extension of the DOCSIS protocol governing cable modems, and applies to all versions of DOCSIS.
The principal features of DSG are:
One-way operation
The original DOCSIS protocol supports only two-way connectivity. A cable modem that is unable to acquire an upstream channel will give up and resume scanning for new channels. Likewise, persistent upstream errors will cause a cable modem to "reinitialize its MAC" and scan for new downstream channels. This behavior is appropriate for traditional cable modems, but not for cable set-top boxes. A cable set-top box still needs to acquire its out of band data even if the upstream channel is impaired.
The DSG specification introduced one way (downstream only) modes of operation. When upstream errors occur, the set-top enters a downstream-only state, periodically attempting to reacquire the upstream channel.
Defining how to recognize the correct downstream channel
Set-top out of band data is generally present only on certain downstream channels. The set-top needs a way to distinguish a valid downstream (containing the set-top's data) from an invalid one used only by standalone cable modems.
The DSG specification defines a special downstream keep-alive message so that the set-top can recognize an appropriate downstream channel.
Creating an out-of-band directory
The Advanced Mode of the DSG Specification introduces a special MAC Management message called the Downstream Channel Descriptor (DCD). The DCD provides a directory identifying the MAC and IP parameters associated with the out of band data streams.
Each data consumer is assigned a special Client Identifier that names the out of band data stream in the DCD.
SNMP MIBs
The DSG Spec |
https://en.wikipedia.org/wiki/Enos%20%28chimpanzee%29 | Enos (born about 1957 – died November 4, 1962) was the second chimpanzee launched into space by NASA. He was the first and only chimpanzee, and third hominid after cosmonauts Yuri Gagarin and Gherman Titov, to orbit the Earth. Enos's flight occurred on November 29, 1961.
Enos was brought from the Miami Rare Bird Farm on April 3, 1960. He completed more than 1,250 training hours at the University of Kentucky and Holloman Air Force Base. Training was more intense for him than for his predecessor Ham, who had become the first great ape in space in January 1961, because Enos was exposed to weightlessness and higher gs for longer periods of time. His training included psychomotor instruction and aircraft flights.
Enos was selected for his Project Mercury flight only three days before launch. Two months prior, NASA launched Mercury-Atlas 4 on September 13, 1961, to conduct an identical mission with a "crewman simulator" on board. Enos flew into space aboard Mercury-Atlas 5 on November 29, 1961. He completed his first orbit in 1 hour and 28.5 minutes.
Enos was scheduled to complete three orbits, but the mission was aborted after two, due to two issues: capsule overheating and a malfunctioning "avoidance conditioning" test subjecting the primate to 76 electrical shocks. According to one history of primatology, "The chimpanzee, about five years old, behaved like a true hero: despite the malfunctions of the electronic system, he conscientiously performed all the tasks he had learned during the entire flight of over three hours...Enos demonstrated that he was careful to successfully complete his mission and that he perfectly understood what was expected of him."
The capsule was brought aboard the recovery ship USS Stormes in the late afternoon, and Enos was immediately taken below deck by his Air Force handlers. The Stormes arrived in Bermuda the next day.
Enos's flight was a full dress rehearsal for the next Mercury launch on February 20, 1962, which would make John Glenn the first American to orbit Ear |
https://en.wikipedia.org/wiki/Three-stratum%20theory | The three-stratum theory is a theory of cognitive ability proposed by the American psychologist John Carroll in 1993. It is based on a factor-analytic study of the correlation of individual-difference variables from data such as psychological tests, school marks and competence ratings from more than 460 datasets. These analyses suggested a three-layered model where each layer accounts for the variations in the correlations within the previous layer.
The three layers (strata) are defined as representing narrow, broad, and general cognitive ability. The factors describe stable and observable differences among individuals in the performance of tasks. Carroll argues further that they are not mere artifacts of a mathematical process, but likely reflect physiological factors explaining differences in ability (e.g., nerve firing rates). This does not alter the effectiveness of factor scores in accounting for behavioral differences.
Carroll proposes a taxonomic dimension in the distinction between level factors and speed factors. The tasks that contribute to the identification of level factors can be sorted by difficulty and individuals differentiated by whether they have acquired the skill to perform the tasks. Tasks that contribute to speed factors are distinguished by the relative speed with which individuals can complete them. Carroll suggests that the distinction between level and speed factors may be the broadest taxonomy of cognitive tasks that can be offered. Carroll distinguishes his hierarchical approach from taxonomic approaches such as Guilford's Structure of Intellect model (three-dimensional model with contents, operations, and products).
Development of the three-stratum theory
The three-stratum theory is derived primarily from Spearman's (1927) model of general intelligence and Horn & Cattell's (1966) theory of fluid and crystallized intelligence. Carroll's model was also heavily influenced by the 1976 edition of the ETS standard kit. His factor analyses |
https://en.wikipedia.org/wiki/The%20Anatomy%20Lesson%20of%20Dr.%20Nicolaes%20Tulp | The Anatomy Lesson of Dr. Nicolaes Tulp is a 1632 oil painting on canvas by Rembrandt housed in the Mauritshuis museum in The Hague, the Netherlands. It was originally created to be displayed by the Surgeons Guild in their meeting room. The painting is regarded as one of Rembrandt's early masterpieces.
In the work, Nicolaes Tulp is pictured explaining the musculature of the arm to a group of doctors. Some of the spectators are various doctors who paid commissions to be included in the painting. The painting is signed in the top-left hand corner Rembrandt. f[ecit] 1632. This may be the first instance of Rembrandt signing a painting with his forename (in its original form) as opposed to the monogram RHL (Rembrandt Harmenszoon of Leiden), and is thus a sign of his growing artistic confidence.
Background
The event can be dated to 31 January 1632: the Amsterdam Guild of Surgeons, of which Tulp was official City Anatomist, permitted only one public dissection a year, and the body would have to be that of an executed criminal.
Anatomy lessons were a social event in the 17th century, taking place in lecture rooms that were actual theatres, with students, colleagues and the general public being permitted to attend on payment of an entrance fee. The spectators are appropriately dressed for this social occasion. It is thought that the uppermost (not holding the paper) and farthest left figures were added to the picture later.
Every five to ten years, the Surgeon's Guild would commission a portrait by a leading portraitist of the period; Rembrandt was commissioned for this task when he was 25 years old, and newly arrived in Amsterdam. It was his first major commission in Amsterdam. Each of the men included in the portrait would have paid a certain amount of money to be included in the work, and the more central figures (in this case, Tulp) probably paid more, even twice as much. Rembrandt's anatomical portrait radically altered the conventions of the genre, by including a |
https://en.wikipedia.org/wiki/Focus-plus-context%20screen | A focus-plus-context screen is a specialized type of display device that consists of one or more high-resolution "focus" displays embedded into a larger low-resolution "context" display. Image content is displayed across all display regions, such that the scaling of the image is preserved, while its resolution varies across the display regions.
The original focus-plus-context screen prototype consisted of an 18"/45 cm LCD screen embedded in a 5'/150 cm front-projected screen. Alternative designs have been proposed that achieve the mixed-resolution effect by combining two or more projectors with different focal lengths.
While the high-resolution area of the original prototype was located at a fixed location, follow-up projects have obtained a movable focus area by using a Tablet PC.
Patrick Baudisch is the inventor of focus-plus-context screens (2000, while at Xerox PARC).
Advantages
Allows users to leverage their foveal and their peripheral vision
Cheaper to manufacture than a display that is high-resolution across the entire display surface
Displays entirety and details of large images in a single view. Unlike approaches that combine entirety and details in software (fisheye views), focus-plus-context screens do not introduce distortion.
Disadvantages
In existing implementations, the focus display is either fixed in place, or moving it is physically demanding |
https://en.wikipedia.org/wiki/Local-density%20approximation | Local-density approximations (LDA) are a class of approximations to the exchange–correlation (XC) energy functional in density functional theory (DFT) that depend solely upon the value of the electronic density at each point in space (and not, for example, derivatives of the density or the Kohn–Sham orbitals). Many approaches can yield local approximations to the XC energy. However, overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation, which are then applied to realistic systems (molecules and solids).
In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as
E_xc[ρ] = ∫ ρ(r) ε_xc(ρ(r)) dr,
where ρ is the electronic density and ε_xc is the exchange-correlation energy per particle of a homogeneous electron gas of charge density ρ. The exchange-correlation energy is decomposed into exchange and correlation terms linearly, E_xc = E_x + E_c,
so that separate expressions for Ex and Ec are sought. The exchange term takes on a simple analytic form for the HEG. Only limiting expressions for the correlation density are known exactly, leading to numerous different approximations for εc.
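The simple analytic exchange form mentioned above is the standard Dirac/HEG expression ε_x(ρ) = −(3/4)(3ρ/π)^(1/3) in Hartree atomic units. A small numerical sketch (ours, not from the source):

```python
import numpy as np

def lda_exchange_energy_density(rho):
    """Dirac exchange energy per particle of the HEG at density rho (a.u.)."""
    return -0.75 * (3.0 * rho / np.pi) ** (1.0 / 3.0)

def lda_exchange_energy(rho, dV):
    """E_x = integral of rho * eps_x(rho) over a uniform grid with cell volume dV."""
    return float(np.sum(rho * lda_exchange_energy_density(rho)) * dV)

# At rho = 1 a.u., eps_x = -(3/4)(3/pi)^(1/3) ~ -0.7386 Hartree per particle.
print(round(lda_exchange_energy_density(1.0), 4))  # -> -0.7386
```

The correlation part ε_c has no such closed form, which is why it is fitted to limiting expressions and quantum Monte Carlo data in practical LDA parametrizations.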
Local-density approximations are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradient approximations (GGA) or hybrid functionals, as a desirable property of any approximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDA's are often an explicit component of such functionals.
The local-density approximation was first introduced by Walter Kohn and Lu Jeu Sham in 1965.
Applications
Local density approximations, as with GGAs are employed extensively by solid state physicists in ab-initio DFT studies to interpret electronic and magnetic interactions in semiconduct |
https://en.wikipedia.org/wiki/Skewb | The Skewb is a combination puzzle and a mechanical puzzle in the style of the Rubik's Cube. It was invented by Tony Durham and marketed by Uwe Mèffert. Although it is cubical in shape, it differs from Rubik's construction in that its axes of rotation pass through the corners of the cube rather than the centres of the faces. There are four such axes, one for each space diagonal of the cube. As a result, it is a deep-cut puzzle in which each twist affects all six faces.
Mèffert's original name for this puzzle was the Pyraminx Cube, to emphasize that it was part of a series including his first tetrahedral puzzle, the Pyraminx. The catchier name Skewb was coined by Douglas Hofstadter in his Metamagical Themas column. Mèffert liked the new name enough to apply it to the Pyraminx Cube, and also named some of his other puzzles after it, such as the Skewb Diamond.
Higher-order Skewbs, named Master Skewb and Elite Skewb, have also been made.
In December 2013, Skewb was recognized as an official World Cube Association competition event.
Mechanism
Despite a simple appearance, its pieces are actually divided into subgroups and have restrictions that are apparent upon examining the puzzle's mechanism. The eight corners are split into two groups—the four corners attached to the central four-armed spider and the four "floating" corners that can be removed from the mechanism easily. These corners cannot be interchanged; i.e., in a single group of four corners, their relative positions are unchanged. They can be distinguished by applying pressure on the corner—if it squishes down a bit, it's a floating corner. The centers only have two possible orientations—this becomes apparent either by scrambling a Skewb-alike puzzle where the center orientation is visible (such as the Skewb Diamond or Skewb Ultimate), or by disassembling the puzzle.
Records
The world record time (single) for a Skewb is 0.81 seconds, set by Zayn Khanani of the United States on 9 July 2022 at the Rubik's |
https://en.wikipedia.org/wiki/Schur%20polynomial | In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials.
Definition (Jacobi's bialternant formula)
Schur polynomials are indexed by integer partitions. Given a partition λ = (λ₁, λ₂, …, λₙ),
where λ₁ ≥ λ₂ ≥ … ≥ λₙ, and each λⱼ is a non-negative integer, the functions
a_(λ+δ)(x₁, …, xₙ) = det( x_j^(λ_i + n − i) )_(1 ≤ i, j ≤ n), where δ = (n − 1, n − 2, …, 0),
are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables.
Since they are alternating, they are all divisible by the Vandermonde determinant a_δ(x₁, …, xₙ) = ∏_(1 ≤ j < k ≤ n) (x_j − x_k).
The Schur polynomials are defined as the ratio s_λ(x₁, …, xₙ) = a_(λ+δ)(x₁, …, xₙ) / a_δ(x₁, …, xₙ).
This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula.
This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant.
Properties
The degree d Schur polynomials in n variables are a linear basis for the space of homogeneous degree d symmetric polynomials in n variables.
For a partition λ, the Schur polynomial is a sum of monomials,
s_λ(x₁, …, xₙ) = Σ_T x^T = Σ_T x₁^(t₁) ⋯ xₙ^(tₙ),
where the summation is over all semistandard Young tableaux T of shape λ. The exponents t₁, …, tₙ give the weight of T; in other words, each tᵢ counts the occurrences of the number i in T. This can be shown to be equivalent to the definition from |
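Jacobi's bialternant formula can be evaluated numerically. The sketch below (ours, using floating-point determinants purely for illustration) checks the classical identity s_(2,1)(x, y) = x²y + xy²:

```python
import numpy as np

def schur(lam, xs):
    """Evaluate the Schur polynomial s_lam at the points xs via the
    bialternant formula: det(x_j^(lam_i + n - i)) / Vandermonde."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))     # pad the partition to n parts
    num = np.linalg.det(np.array(
        [[x ** (lam[i] + n - 1 - i) for x in xs] for i in range(n)], float))
    den = np.prod([xs[j] - xs[k] for j in range(n) for k in range(j + 1, n)])
    return num / den

# s_(2,1)(x, y) = x^2 y + x y^2; at (2, 3) this is 12 + 18 = 30.
print(schur((2, 1), [2.0, 3.0]))  # -> 30.0 (up to rounding)
```

Symbolic computation (e.g. SymPy) would give the polynomial itself; the numeric version suffices to verify identities at sample points.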
https://en.wikipedia.org/wiki/Fc%20receptor | In immunology, an Fc receptor is a protein found on the surface of certain cells – including, among others, B lymphocytes, follicular dendritic cells, natural killer cells, macrophages, neutrophils, eosinophils, basophils, human platelets, and mast cells – that contribute to the protective functions of the immune system.
Its name is derived from its binding specificity for a part of an antibody known as the Fc (fragment crystallizable) region. Fc receptors bind to antibodies that are attached to infected cells or invading pathogens. Their activity stimulates phagocytic or cytotoxic cells to destroy microbes, or infected cells by antibody-mediated phagocytosis or antibody-dependent cell-mediated cytotoxicity. Some viruses such as flaviviruses use Fc receptors to help them infect cells, by a mechanism known as antibody-dependent enhancement of infection.
Classes
There are several different types of Fc receptors (abbreviated FcR), which are classified based on the type of antibody that they recognize. The Latin letter used to identify a type of antibody is converted into the corresponding Greek letter, which is placed after the 'Fc' part of the name. For example, those that bind the most common class of antibody, IgG, are called Fc-gamma receptors (FcγR), those that bind IgA are called Fc-alpha receptors (FcαR) and those that bind IgE are called Fc-epsilon receptors (FcεR). The classes of FcR's are also distinguished by the cells that express them (macrophages, granulocytes, natural killer cells, T and B cells) and the signalling properties of each receptor.
Fc-gamma receptors
All of the Fcγ receptors (FcγR) belong to the immunoglobulin superfamily and are the most important Fc receptors for inducing phagocytosis of opsonized (marked) microbes. This family includes several members, FcγRI (CD64), FcγRIIA (CD32), FcγRIIB (CD32), FcγRIIIA (CD16a), FcγRIIIB (CD16b), which differ in their antibody affinities due to their different molecular structure. For instance, Fcγ |
https://en.wikipedia.org/wiki/Copper%20indium%20gallium%20selenide | Copper indium gallium (di)selenide (CIGS) is a I-III-VI2 semiconductor material composed of copper, indium, gallium, and selenium. The material is a solid solution of copper indium selenide (often abbreviated "CIS") and copper gallium selenide. It has a chemical formula of CuIn1−xGaxSe2, where the value of x can vary from 0 (pure copper indium selenide) to 1 (pure copper gallium selenide). CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure, and a bandgap varying continuously with x from about 1.0 eV (for copper indium selenide) to about 1.7 eV (for copper gallium selenide).
Structure
CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure. Upon heating it transforms to the zincblende form and the transition temperature decreases from 1045 °C for x = 0 to 805 °C for x = 1.
Applications
It is best known as the material for CIGS solar cells, a thin-film technology used in the photovoltaic industry. In this role, CIGS has the advantage of being able to be deposited on flexible substrate materials, producing highly flexible, lightweight solar panels. Improvements in efficiency have made CIGS an established technology among alternative cell materials.
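As a rough illustration (ours, not from the source) of the composition-tuned gap quoted in the lead: a plain linear interpolation between the x = 0 and x = 1 endpoints, which ignores the small nonlinear "bowing" correction seen in real CIGS:

```python
def cigs_bandgap_linear(x, eg_cis=1.0, eg_cgs=1.7):
    """Approximate band gap (eV) of CuIn(1-x)Ga(x)Se2 for 0 <= x <= 1,
    interpolating linearly between the CIS and CGS endpoint gaps."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must be in [0, 1]")
    return (1.0 - x) * eg_cis + x * eg_cgs

# A typical device composition around x ~ 0.3 lands near 1.2 eV.
print(round(cigs_bandgap_linear(0.3), 2))  # -> 1.21
```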
See also
Copper indium gallium selenide solar cells
CZTS
List of CIGS companies |
https://en.wikipedia.org/wiki/Deer%20horn | A deer horn, or deer whistle, is a whistle mounted on automobiles intended to help prevent collisions with deer. Air moving through the device produces sound (ultrasound in some models), intended to warn deer of a vehicle's approach. Deer are highly unpredictable, skittish animals whose normal reaction to an unfamiliar sound is to stop, look and listen to determine if they are being threatened. If the whistle gives them advance warning, they may freeze on the roadside, rather than running across the road into the path of the vehicle.
In Australia, a different product, with electrically powered speakers (Shu Roo), is used to decrease collisions with kangaroos.
Researchers with the University of Wisconsin–Madison measured three devices and a press report said they found these three devices produced "low-pitched and ultrasonic sounds at speeds of 30 to 70 miles per hour; however, researchers were unable to verify that deer responded to the sounds."
Researchers with the Georgia Game and Fish Department have pointed out several reasons for ultrasound devices not to work as advertised:
Some deer whistles do not emit any ultrasonic sound under the advertised operating conditions (typically when the vehicle exceeds 30 mph).
Ultrasonic sound does not carry very well. It does not travel a long enough distance to provide adequate warning, and also is stopped by virtually any intervening object, so any curves in a road will block the sound.
Little is known about the auditory limits of deer, but current knowledge indicates that deer hear approximately the same frequencies as humans, and thus if humans can't hear a sound, deer probably can't either.
If deer could hear ultrasound, it is unknown if it would alarm them or induce a flight response.
In addition to the Georgia and Wisconsin studies, a study by the Ohio State Police Department indicated the whistles are ineffective.
The Department of Zoology at the University of Melbourne did independent testing, funded by th |
https://en.wikipedia.org/wiki/Photoexcitation | Photoexcitation is the production of an excited state of a quantum system by photon absorption. The excited state originates from the interaction between a photon and the quantum system. A photon's energy is determined by the wavelength of the light that carries it: light with longer wavelengths consists of photons carrying less energy, while light with shorter wavelengths consists of photons carrying more energy. When a photon interacts with a quantum system, it is therefore important to know which wavelength one is dealing with, since a shorter wavelength will transfer more energy to the quantum system than a longer one.
On the atomic and molecular scale photoexcitation is the photoelectrochemical process of electron excitation by photon absorption, when the energy of the photon is too low to cause photoionization. The absorption of the photon takes place in accordance with Planck's quantum theory.
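The inverse relation between wavelength and photon energy follows from the Planck relation E = hc/λ. A small sketch (ours, not from the source):

```python
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

# Blue light (450 nm) carries more energy per photon than red (700 nm).
print(round(photon_energy_ev(450e-9), 2))  # -> 2.76
print(round(photon_energy_ev(700e-9), 2))  # -> 1.77
```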
Photoexcitation plays a role in photoisomerization and is exploited in different techniques:
Dye-sensitized solar cells make use of photoexcitation in inexpensive, mass-produced solar cells. The solar cells rely on a large surface area in order to catch and absorb as many high-energy photons as possible. Shorter wavelengths are more efficient for the energy conversion than longer wavelengths, since their photons are more energy-rich; light dominated by longer wavelengths therefore yields a less efficient conversion of energy in dye-sensitized solar cells.
Photochemistry
Luminescence
Optically pumped lasers use photoexcitation in a way that the excited atoms in the lasers get an enormous direct-gap gain needed for the lasers. The density that is needed for the population inversion in Ge, a material often used in lasers, must reach 10²⁰ cm⁻³, and this is acquired via photoexcitation. The photoexcitation causes the electrons in atoms to go to an excited st |
https://en.wikipedia.org/wiki/Quantum%20heterostructure | A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material), where size restricts the movements of the charge carriers forcing them into a quantum confinement. This leads to the formation of a set of discrete energy levels at which the carriers can exist. Quantum heterostructures have sharper density of states than structures of more conventional sizes.
Quantum heterostructures are important for fabrication of short-wavelength light-emitting diodes and diode lasers, and for other optoelectronic applications, e.g. high-efficiency photovoltaic cells.
Examples of quantum heterostructures confining the carriers in quasi-two, -one and -zero dimensions are:
Quantum wells
Quantum wires
Quantum dots |
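The discrete energy levels produced by quantum confinement can be illustrated with the textbook infinite-square-well model, E_n = n²h²/(8mL²). This is our simplified stand-in for a real heterostructure (which would use the carrier's effective mass and a finite well depth), not a statement from the source:

```python
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # free-electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def well_levels_ev(width_m, n_max=3, mass=M_E):
    """First n_max energy levels (eV) of an idealized infinite quantum well."""
    return [n**2 * H**2 / (8 * mass * width_m**2) / EV
            for n in range(1, n_max + 1)]

# A 10 nm well: levels in the few-meV range, scaling as n^2.
print([round(e * 1000, 2) for e in well_levels_ev(10e-9)])  # meV -> [3.76, 15.04, 33.84]
```

Narrowing the well raises the levels as 1/L², which is the basic mechanism behind tuning emission wavelengths in quantum-well LEDs and lasers.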
https://en.wikipedia.org/wiki/Nightingale%20floor | Nightingale floors (uguisubari) are floors that make a chirping sound when walked upon. These floors were used in the hallways of some temples and palaces, the most famous example being Nijō Castle, in Kyoto, Japan. Dry boards naturally creak under pressure, but these floors were built in a way that the flooring nails rub against a jacket or clamp, causing chirping noises. It is unclear if the design was intentional. It seems that, at least initially, the effect arose by chance. An information sign in Nijō castle states that "The singing sound is not actually intentional, stemming rather from the movement of nails against clumps in the floor caused by wear and tear over the years". Legend has it that the squeaking floors were used as a security device, assuring that no one could sneak through the corridors undetected.
The English name "nightingale" refers to the Japanese bush warbler, or uguisu, which is a common songbird in Japan.
Etymology
Uguisu (鶯) refers to the Japanese bush warbler. The latter segment comes from haru (張る), which can be used to mean "to lay/board (flooring)", as in the expression yukaita wo haru (床板を張る) meaning "to board a/the floor". The verb haru becomes nominalized as hari and voiced through rendaku to become bari. In this form it refers to the method of boarding, as in other words like herinbōnbari (ヘリンボーン張り), which refers to flooring laid in a Herringbone pattern. As such, uguisubari means "Warbler boarding".
Construction
The floors were made from dried boards. Upside-down V-shaped joints move within the boards when pressure is applied.
Examples
The following locations incorporate nightingale floors:
Nijō Castle, Kyoto
Chion-in, Kyoto
Eikan-dō Zenrin-ji, Kyoto
Daikaku-ji, Kyoto
Modern influences and related topics
Melody Road in Hokkaido, Wakayama, and Gunma
Singing Road in Anyang, Gyeonggi, South Korea
Civic Musical Road in Lancaster, California
Across the Nightingale Floor, 2002 novel by Lian Hearn
Notes |
https://en.wikipedia.org/wiki/Semimodule | In mathematics, a semimodule over a semiring R is like a module over a ring except that it is only a commutative monoid rather than an abelian group.
Definition
Formally, a left R-semimodule consists of an additively-written commutative monoid M and a map from R × M to M, written (r, m) ↦ rm, satisfying the following axioms:
r(m + m′) = rm + rm′
(r + r′)m = rm + r′m
(rr′)m = r(r′m)
1m = m
0m = 0 = r0.
A right R-semimodule can be defined similarly. For modules over a ring, the last axiom follows from the others. This is not the case with semimodules.
Examples
If R is a ring, then any R-module is an R-semimodule. Conversely, it follows from the second, fourth, and last axioms that (-1)m is an additive inverse of m for every m in M, so any semimodule over a ring is in fact a module.
Any semiring is a left and right semimodule over itself in the same way that a ring is a left and right module over itself. Every commutative monoid is uniquely an ℕ-semimodule in the same way that an abelian group is a ℤ-module.
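To make the monoid-action idea concrete, here is a small illustrative sketch (the function name and examples are hypothetical, not from the article): any commutative monoid becomes an ℕ-semimodule by defining n·m as m combined with itself n times.

```python
def nat_scalar(n, m, add, zero):
    """n.m in the N-semimodule structure of a commutative monoid:
    m combined with itself n times under the monoid operation `add`,
    with 0.m equal to the monoid identity `zero`."""
    result = zero
    for _ in range(n):
        result = add(result, m)
    return result

# The usual action on (N, +): 4.7 = 7 + 7 + 7 + 7 = 28.
print(nat_scalar(4, 7, lambda a, b: a + b, 0))   # 28
# (N, max, 0) is a commutative monoid with no inverses, yet still an
# N-semimodule: n.m = max(m, ..., m) = m for every n >= 1.
print(nat_scalar(3, 5, max, 0))                  # 5
```

Distributivity (n + n′)·m = n·m + n′·m here is just splitting a repeated sum, and no additive inverses are ever required, which is the point of working over a semiring rather than a ring.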
https://en.wikipedia.org/wiki/Isotype%20%28picture%20language%29 | Isotype (International System of Typographic Picture Education) is a method of showing social, technological, biological, and historical connections in pictorial form. It consists of a set of standardized and abstracted pictorial symbols to represent social-scientific data with specific guidelines on how to combine the identical figures using serial repetition. It was first known as the Vienna Method of Pictorial Statistics (Wiener Methode der Bildstatistik), due to its having been developed at the Gesellschafts- und Wirtschaftsmuseum in Wien (Social and Economic Museum of Vienna) between 1925 and 1934. The founding director of this museum, Otto Neurath, was the initiator and chief theorist of the Vienna Method. Gerd Arntz was the artist responsible for realising the graphics. The term Isotype was applied to the method around 1935, after its key practitioners were forced to leave Vienna by the rise of Austrian fascism.
Origin and development
The Gesellschafts- und Wirtschaftsmuseum was principally financed by the municipality of Vienna, during a period of expansive municipal social democratic governance known as Red Vienna within the new republic of Austria. An essential task of the museum was to inform the Viennese about their city. Neurath stated that the museum was not a treasure chest of rare objects, but a teaching museum. The aim was to "represent social facts pictorially" and to bring "dead statistics" to life by making them visually attractive and memorable. One of the museum's catch-phrases was: "To remember simplified pictures is better than to forget accurate figures".
The principal instruments of the Vienna Method were pictorial charts, which could be produced in multiple copies and serve both permanent and travelling exhibitions. The museum also innovated with interactive models and other attention-grabbing devices, and there were even some early experiments with animated films.
From its beginning the Vienna Method/Isotype was the work of a team. Neur |
https://en.wikipedia.org/wiki/Pesticide%20residue | Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum allowable levels of these residues in foods are often stipulated by regulatory bodies in many countries. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or being in close contact to areas treated with pesticides such as farms or lawns.
Many of these chemical residues, especially derivatives of chlorinated pesticides, exhibit bioaccumulation which could build up to harmful levels in the body as well as in the environment. Persistent chemicals can be magnified through the food chain and have been detected in products ranging from meat, poultry, and fish, to vegetable oils, nuts, and various fruits and vegetables.
Definition
A pesticide is a substance or a mixture of substances used for killing pests: organisms dangerous to cultivated plants or to animals. The term applies to various pesticides such as insecticide, fungicide, herbicide and nematocide. Applications of pesticides to crops and animals may leave residues in or on food when it is consumed, and those specified derivatives are considered to be of toxicological significance.
Background
From the post-World War II era, chemical pesticides have been the most important form of pest control. There are two categories of pesticides: first-generation and second-generation pesticides. The first-generation pesticides, which were used prior to 1940, consisted of compounds such as arsenic, mercury, and lead. These were soon abandoned because they were highly toxic and ineffective. The second-generation pesticides were composed of synthetic organic compounds. The growth in these pesticides accelerated in late
https://en.wikipedia.org/wiki/Major%20actinide | Major actinides is a term used in the nuclear power industry that refers to the plutonium and uranium present in used nuclear fuel, as opposed to the minor actinides neptunium, americium, curium, berkelium, and californium. |
https://en.wikipedia.org/wiki/Hecke%20character | In number theory, a Hecke character is a generalisation of a Dirichlet character, introduced by Erich Hecke to construct a class of
L-functions larger than Dirichlet L-functions, and a natural setting for the Dedekind zeta-functions and certain others which have functional equations analogous to that of the Riemann zeta-function.
A name sometimes used for Hecke character is the German term Größencharakter (often written Grössencharakter, Grossencharacter, etc.).
Definition using ideles
A Hecke character is a character of the idele class group of a number field or global function field. It corresponds uniquely to a character of the idele group which is trivial on principal ideles, via composition with the projection map.
This definition depends on the definition of a character, which varies slightly between authors: It may be defined as a homomorphism to the non-zero complex numbers (also called a "quasicharacter"), or as a homomorphism to the unit circle in C ("unitary"). Any quasicharacter (of the idele class group) can be written uniquely as a unitary character times a real power of the norm, so there is no big difference between the two definitions.
The conductor of a Hecke character χ is the largest ideal m such that χ is a Hecke character mod m. Here we say that χ is a Hecke character mod m if χ (considered as a character on the idele group) is trivial on the group of finite ideles whose every v-adic component lies in 1 + mOv.
Definition using ideals
The original definition of a Hecke character, going back to Hecke, was in terms of a character on fractional ideals. For a number field K, let m = m_f m_∞ be a K-modulus, with m_f, the "finite part", being an integral ideal of K and m_∞, the "infinite part", being a (formal) product of real places of K. Let I_m denote the group of fractional ideals of K relatively prime to m_f and let P_m denote the subgroup of principal fractional ideals (a) where a is near 1 at each place of m in accordance with the multiplicit
https://en.wikipedia.org/wiki/Tate%27s%20thesis | In number theory, Tate's thesis is the 1950 PhD thesis of John Tate, completed under the supervision of Emil Artin at Princeton University. In it, Tate used a translation invariant integration on the locally compact group of ideles to lift the zeta function twisted by a Hecke character, i.e. a Hecke L-function, of a number field to a zeta integral and study its properties. Using harmonic analysis, more precisely the Poisson summation formula, he proved the functional equation and meromorphic continuation of the zeta integral and the Hecke L-function. He also located the poles of the twisted zeta function. His work can be viewed as an elegant and powerful reformulation of a work of Erich Hecke on the proof of the functional equation of the Hecke L-function. Erich Hecke used a generalized theta series associated to an algebraic number field and a lattice in its ring of integers.
Iwasawa–Tate theory
Kenkichi Iwasawa independently discovered essentially the same method (without an analog of the local theory in Tate's thesis) during the Second World War and announced it in his 1950 International Congress of Mathematicians paper and his letter to Jean Dieudonné written in 1952. Hence this theory is often called Iwasawa–Tate theory. In his letter to Dieudonné, Iwasawa derived on several pages not only the meromorphic continuation and functional equation of the L-function, but also proved finiteness of the class number and Dirichlet's theorem on units as immediate byproducts of the main computation. The theory in positive characteristic was developed one decade earlier by Ernst Witt, Wilfried Schmid, and Oswald Teichmüller.
Iwasawa–Tate theory uses several structures which come from class field theory; however, it does not use any deep result of class field theory.
Generalisations
Iwasawa–Tate theory was extended to the general linear group GL(n) over an algebraic number field and automorphic representations of its adelic group by Roger Godement and Hervé Jacquet in 1972 which f |
https://en.wikipedia.org/wiki/Pseudo-polynomial%20time | In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input)—but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
In general, the numeric value of the input is exponential in the input length, which is why a pseudo-polynomial time algorithm does not necessarily run in polynomial time with respect to the input length.
An NP-complete problem with known pseudo-polynomial time algorithms is called weakly NP-complete.
An NP-complete problem is called strongly NP-complete if it is proven that it cannot be solved by a pseudo-polynomial time algorithm unless P = NP. The strong/weak kinds of NP-hardness are defined analogously.
Examples
Primality testing
Consider the problem of testing whether a number n is prime, by naively checking whether no number in {2, 3, …, √n} divides n evenly. This approach can take up to √n − 1 divisions, which is sub-linear in the value of n but exponential in the length of n (which is about log n). For example, a number n slightly less than 10^10 would require up to approximately 100,000 divisions, even though the length of n is only 11 digits. Moreover one can easily write down an input (say, a 300-digit number) for which this algorithm is impractical. Since computational complexity measures difficulty with respect to the length of the (encoded) input, this naive algorithm is actually exponential. It is, however, pseudo-polynomial time.
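A sketch of the naive test described above (illustrative code, not from the article):

```python
import math

def is_prime_naive(n):
    """Trial division: up to ~sqrt(n) divisions, i.e. polynomial in the
    VALUE of n but exponential in the LENGTH of n (its number of digits)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# For n slightly below 10**10 the loop performs ~100,000 divisions even
# though n is only 11 digits long; a 300-digit input would be hopeless.
print(is_prime_naive(104729))   # True -- 104729 is the 10,000th prime
print(is_prime_naive(104730))   # False
```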
Contrast this algorithm with a true polynomial numeric algorithm—say, the straightforward algorithm for addition: Adding two 9-digit numbers takes around 9 simple steps, and in general the algorithm is truly linear in the length of the input. Compared with the actual numbers being added (in the billions), the algorithm could be called "pseudo-logarithmic time", though such a term is not standard. Thus, adding 30 |
https://en.wikipedia.org/wiki/Thoracodorsal%20nerve | The thoracodorsal nerve is a nerve present in humans and other animals, also known as the middle subscapular nerve or the long subscapular nerve. It supplies the latissimus dorsi muscle.
Structure
The thoracodorsal nerve arises from the brachial plexus. It derives its fibers from the sixth, seventh, and eighth cervical nerves. It is derived from their ventral rami, in spite of the fact that the latissimus dorsi is found in the back. The thoracodorsal nerve is a branch of the posterior cord of the brachial plexus, and is made up of fibres from the posterior divisions of all three trunks of the brachial plexus.
It follows the course of the subscapular artery, along the posterior wall of the axilla to the latissimus dorsi muscle, in which it may be traced as far as the lower border of the muscle.
Function
The thoracodorsal nerve innervates the latissimus dorsi muscle on its deep surface.
Clinical significance
The latissimus dorsi is occasionally used for transplantation, and for augmentation of systole in cardiac failure. In these cases, the nerve supply is preserved, and transplanted with the muscle (for example, with facial reanimation).
Posterior cord lesions can result in the loss of adduction of the shoulder joint, as innervation to latissimus dorsi is lost.
Additional images |
https://en.wikipedia.org/wiki/Irving%20Kaplansky | Irving Kaplansky (March 22, 1917 – June 25, 2006) was a mathematician, college professor, author, and amateur musician.
Biography
Kaplansky, or "Kap" as his friends and colleagues called him, was born in Toronto, Ontario, Canada, to Polish-Jewish immigrants; his father worked as a tailor, and his mother ran a grocery and, eventually, a chain of bakeries. He went to Harbord Collegiate Institute, receiving the Prince of Wales Scholarship as a teenager. He attended the University of Toronto as an undergraduate and finished first in his class for three consecutive years. In his senior year, he competed in the first William Lowell Putnam Mathematical Competition, becoming one of the first five recipients of the Putnam Fellowship, which paid for graduate studies at Harvard University. Administered by the Mathematical Association of America, the competition is widely considered to be the most difficult mathematics examination in the world, and "its difficulty is such that the median score is often zero or one (out of 120) despite being attempted by students specializing in mathematics."
After receiving his Ph.D. from Harvard in 1941 as Saunders Mac Lane's first student, he remained at Harvard as a Benjamin Peirce Instructor. In 1944 he moved with Mac Lane to Columbia University for one year to work with the Applied Mathematics Panel on World War II research: "miscellaneous studies in mathematics applied to warfare analysis with emphasis upon aerial gunnery, studies of fire control equipment, and rocketry and toss bombing". He was a member of the Institute for Advanced Study and attended the 1946 Princeton University Bicentennial.
He was professor of mathematics at the University of Chicago from 1945 to 1984, and Chair of the department from 1962 to 1967. In 1968, Kaplansky was presented an honorary doctoral degree from Queen's University with the university noting "we honour as a Canadian whose clarity of lectures, elegance of writing, and profundi |
https://en.wikipedia.org/wiki/Subscapular%20artery | The subscapular artery, the largest branch of the axillary artery, arises from the third part of the axillary artery at the lower border of the subscapularis muscle, which it follows to the inferior angle of the scapula, where it anastomoses with the lateral thoracic and intercostal arteries, and with the descending branch of the dorsal scapular artery (a.k.a. deep branch of the transverse cervical artery if it arises from the cervical trunk), and ends in the neighboring muscles.
About 4 cm from its origin it gives off two branches, first the scapular circumflex artery and then the thoracodorsal artery.
From the thoracodorsal artery it supplies the latissimus dorsi, while the scapular circumflex artery participates in the scapular anastomosis. It terminates in an anastomosis with the dorsal scapular artery.
Additional images
https://en.wikipedia.org/wiki/RecLOH | RecLOH is a term in genetics that is an abbreviation for "Recombinant Loss of Heterozygosity".
This is a type of mutation which occurs with DNA by recombination. From a pair of equivalent ("homologous"), but slightly different (heterozygous) genes, a pair of identical genes results. In this case there is a non-reciprocal exchange of genetic code between the chromosomes, in contrast to chromosomal crossover, because genetic information is lost.
For Y chromosome
In genetic genealogy, the term is used particularly concerning similar seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes, since the target is the very same chromosome instead of the homologous one.
During the mutation one of these copies overwrites the other. Thus the differences between the two are lost. Because differences are lost, heterozygosity is lost.
Recombination on the Y-chromosome does not only take place during meiosis, but virtually at every mitosis when the Y chromosome condenses, because it doesn't require pairing between chromosomes. Recombination frequency even exceeds the frame shift mutation frequency (slipped strand mispairing) of (average fast) Y-STRs; however, many recombination products may lead to infertile germ cells and "daughter out".
Recombination events (RecLOH) can be observed if YSTR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome (hairpin).
E.g. DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X) that are able to verify that there are most frequently really two alleles of each, so we can be sure that there is no gene deletion |
https://en.wikipedia.org/wiki/LocationFree%20Player | Sony's LocationFree is the marketing name for a group of products and technologies for timeshifting and placeshifting streaming video. The LocationFree Player is an Internet-based multifunctional device used to stream live television broadcasts (including digital cable and satellite), DVDs and DVR content over a home network or the Internet. It is in essence a remote video streaming server product (similar to the Slingbox). It was first announced by Sony in Q1 2004 and launched early in Q4 2004 alongside a co-branded wireless tablet TV. The last LocationFree product was the LF-V30 released in 2007.
The LocationFree base station connects to a home network via a wired Ethernet cable, or for newer models, via a wireless connection. Up to three attached video sources can stream content through the network to local content provision devices or across the internet to remote devices. A remote user can connect to the internet at a wireless hotspot or any other internet connection anywhere in the world and receive streamed content. Content may only be streamed to one computer at a time. In addition, the original LocationFree Player software contained a license for only one client computer. Additional (paid) licenses were required to connect to the base station for up to a total of four clients.
On November 29, 2007 Sony modified its LocationFree Player policy to provide free access to the latest LocationFree Player LFA-PC30 software for Windows XP/Vista. In addition, the software no longer requires a unique serial number in order to pair it with a LocationFree base station. In December 2007 Sony dropped the $30 license fee for the LocationFree client. However, the software still requires registration to Sony's servers after 30 days.
Clients
The player (server) can stream content to the following (client) devices:
Windows or Mac computer - requires additional software
Mobile/cellular phones - coming later in 2007
Pocket PCs running Windows Mobile
Smartphones/tablets ru |
https://en.wikipedia.org/wiki/DISLIN | DISLIN is a high-level plotting library developed by Helmut Michels at the Max Planck Institute for Solar System Research in Göttingen, Germany. Helmut Michels worked as a mathematician and Unix system manager at the computer center of the institute. He retired in April 2020 and founded the company Dislin Software.
The DISLIN library contains routines and functions for displaying data as curves, bar graphs, pie charts, 3D-colour plots, surfaces, contours and maps. Several output formats are supported such as X11, VGA, PostScript, PDF, CGM, WMF, EMF, SVG, HPGL, PNG, BMP, PPM, GIF and TIFF.
DISLIN is available for the programming languages Fortran 77, Fortran 90/95 and C. Plotting extensions for the languages Perl, Python, Java, Julia, Ruby, Go and Tcl are also supported for most operating systems. There is a third-party package to use DISLIN from IDL.
History
The first version 1.0 was released in December 1986. The current version of DISLIN is 11.5, released in March 2022.
Since version 11.3 the DISLIN software is free for non-commercial and commercial use. |
https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe%20algorithm | The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain).
Problem statement
Suppose D is a compact convex set in a vector space and f : D → ℝ is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem
Minimize f(x)
subject to x ∈ D.
Algorithm
Initialization: Let k ← 0, and let x_0 be any point in D.
Step 1. Direction-finding subproblem: Find s_k solving
Minimize s^T ∇f(x_k)
Subject to s ∈ D
(Interpretation: Minimize the linear approximation of the problem given by the first-order Taylor approximation of f around x_k, constrained to stay within D.)
Step 2. Step size determination: Set γ ← 2/(k + 2), or alternatively find γ that minimizes f(x_k + γ(s_k − x_k)) subject to 0 ≤ γ ≤ 1.
Step 3. Update: Let x_(k+1) ← x_k + γ(s_k − x_k), let k ← k + 1, and go to Step 1.
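The steps above can be sketched on a toy problem (a hypothetical example, not from the article): minimizing a quadratic over the probability simplex, where the direction-finding subproblem has a closed-form answer, namely the vertex whose gradient component is smallest.

```python
def frank_wolfe_simplex(c, iters=2000):
    """Minimize f(x) = sum_i (x_i - c_i)**2 over the probability simplex.
    Over the simplex the direction-finding subproblem is solved exactly by
    the vertex e_i whose gradient component is smallest."""
    n = len(c)
    x = [1.0 / n] * n                                    # feasible start
    for k in range(iters):
        grad = [2.0 * (x[i] - c[i]) for i in range(n)]   # gradient of f at x
        i_best = min(range(n), key=grad.__getitem__)     # Step 1: LMO vertex
        gamma = 2.0 / (k + 2)                            # Step 2: step size
        x = [(1.0 - gamma) * xi for xi in x]             # Step 3: move toward
        x[i_best] += gamma                               #   s = e_{i_best}
    return x

c = [0.5, 0.3, 0.2]              # c is feasible, so the optimum is c itself
x = frank_wolfe_simplex(c)
print(round(sum(x), 6))          # 1.0 -- iterates stay feasible, no projection
```

Because each update is a convex combination of the current iterate and a vertex, every iterate lies in the simplex automatically, which is the property the Properties section highlights.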
Properties
While competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set.
The convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function to the optimum is O(1/k) after k iterations, so long as the gradient is Lipschitz continuous with respect to some norm. The same convergence rate can also be shown if the sub-problems are only solved approximately.
The iterations of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has helped to the popularity of the algorithm for sparse greedy optimization in machine learning |
https://en.wikipedia.org/wiki/RootkitRevealer | RootkitRevealer is a proprietary freeware tool for rootkit detection on Microsoft Windows by Bryce Cogswell and Mark Russinovich. It runs on Windows XP and Windows Server 2003 (32-bit-versions only). Its output lists Windows Registry and file system API discrepancies that may indicate the presence of a rootkit. It is the same tool that triggered the Sony BMG copy protection rootkit scandal.
RootkitRevealer is no longer being developed.
See also
Sysinternals
Process Explorer
Process Monitor
ProcDump |
https://en.wikipedia.org/wiki/Memory%20refresh | Memory refresh is the process of periodically reading information from an area of computer memory and immediately rewriting the read information to the same area without modification, for the purpose of preserving the information. Memory refresh is a background maintenance process required during the operation of semiconductor dynamic random-access memory (DRAM), the most widely used type of computer memory, and in fact is the defining characteristic of this class of memory.
In a DRAM chip, each bit of memory data is stored as the presence or absence of an electric charge on a small capacitor on the chip. As time passes, the charges in the memory cells leak away, so without being refreshed the stored data would eventually be lost. To prevent this, external circuitry periodically reads each cell and rewrites it, restoring the charge on the capacitor to its original level. Each memory refresh cycle refreshes a succeeding area of memory cells, thus repeatedly refreshing all the cells in a consecutive cycle. This process is conducted automatically in the background by the memory circuitry and is transparent to the user. While a refresh cycle is occurring the memory is not available for normal read and write operations, but in modern memory this "overhead" time is not large enough to significantly slow down memory operation.
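The leak-and-refresh cycle described above can be illustrated with a toy simulation (the charge model, decay rate, and refresh interval are invented for illustration, not taken from any real DRAM datasheet):

```python
FULL, THRESHOLD, DECAY = 1.0, 0.5, 0.9
REFRESH_INTERVAL = 4   # chosen so DECAY**REFRESH_INTERVAL stays above THRESHOLD

def run(bits, ticks, refresh=True):
    """Simulate capacitor charge over `ticks` clock ticks, then read back bits.
    A cell reads as 1 only while its charge stays above THRESHOLD."""
    charge = [FULL if b else 0.0 for b in bits]
    for t in range(1, ticks + 1):
        charge = [c * DECAY for c in charge]             # leakage every tick
        if refresh and t % REFRESH_INTERVAL == 0:
            # a refresh cycle: read every cell and rewrite it at full strength
            charge = [FULL if c > THRESHOLD else 0.0 for c in charge]
    return [1 if c > THRESHOLD else 0 for c in charge]

data = [1, 0, 1, 1, 0, 1]
print(run(data, 50))                  # with refresh, the data survives
print(run(data, 50, refresh=False))   # without refresh, every bit reads 0
```

The refresh interval must be short enough that a full cell still reads as 1 when its turn comes, which is the analogue of the maximum refresh interval specified for real DRAM.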
Electronic memory that does not require refreshing is available, called static random-access memory (SRAM). SRAM circuits require more area on a chip, because an SRAM memory cell requires four to six transistors, compared to a single transistor and a capacitor for DRAM. As a result, data density is much lower in SRAM chips than in DRAM, and SRAM has higher price per bit. Therefore, DRAM is used for the main memory in computers, video game consoles, graphics cards and applications requiring large capacities and low cost. The need for memory refresh makes DRAM timing and circuits significantly more complicated than SRAM circuits, but the density a |
https://en.wikipedia.org/wiki/Large%20set%20%28Ramsey%20theory%29 | In Ramsey theory, a set S of natural numbers is considered to be a large set if and only if Van der Waerden's theorem can be generalized to assert the existence of arithmetic progressions with common difference in S. That is, S is large if and only if every finite partition of the natural numbers has a cell containing arbitrarily long arithmetic progressions having common differences in S.
Examples
The natural numbers are large. This is precisely the assertion of Van der Waerden's theorem.
The even numbers are large.
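The definition can be checked by brute force for small cases (illustrative code; the helper function is my addition, and the bound W(3, 2) = 9 used in the example is the standard van der Waerden number):

```python
from itertools import product

def has_mono_ap(coloring, diffs, length=3):
    """Does a coloring of 1..N (coloring[i-1] is the color of i) contain a
    monochromatic arithmetic progression of `length` terms whose common
    difference lies in `diffs`?"""
    N = len(coloring)
    for start in range(1, N + 1):
        for d in diffs:
            if start + (length - 1) * d > N:
                continue
            if len({coloring[start + i * d - 1] for i in range(length)}) == 1:
                return True
    return False

# Van der Waerden with 2 colors and length 3: every 2-coloring of 1..9 has a
# monochromatic 3-term AP (W(3, 2) = 9), while 0 0 1 1 0 0 1 1 on 1..8 avoids one.
assert all(has_mono_ap(c, range(1, 9)) for c in product([0, 1], repeat=9))
assert not has_mono_ap([0, 0, 1, 1, 0, 0, 1, 1], range(1, 9))
# Even common differences preserve parity, so even an alternating coloring of
# 1..20 contains a monochromatic 3-term AP with difference in the even numbers.
print(has_mono_ap([0, 1] * 10, [2, 4, 6], 3))   # True
```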
Properties
Necessary conditions for largeness include:
If S is large, for any natural number n, S must contain at least one multiple (equivalently, infinitely many multiples) of n.
If S = {s_1, s_2, s_3, ...} is large, it cannot be the case that s_k ≥ 3s_(k−1) for all k ≥ 2.
Two sufficient conditions are:
If S contains n-cubes for arbitrarily large n, then S is large.
If S = p(ℕ) = {p(1), p(2), p(3), ...}, where p is an integer polynomial with p(0) = 0 and positive leading coefficient, then S is large.
The first sufficient condition implies that if S is a thick set, then S is large.
Other facts about large sets include:
If S is large and F is finite, then S – F is large.
is large.
If S is large, is also large.
If is large, then for any , is large.
2-large and k-large sets
A set is k-large, for a natural number k > 0, when it meets the conditions for largeness when the restatement of van der Waerden's theorem is concerned only with k-colorings. Every set is either large or k-large for some maximal k. This follows from two important, albeit trivially true, facts:
k-largeness implies (k-1)-largeness for k>1
k-largeness for all k implies largeness.
It is unknown whether there are 2-large sets that are not also large sets. Brown, Graham, and Landman (1999) conjecture that no such sets exist.
See also
Partition of a set
Further reading
External links
Mathworld: van der Waerden's Theorem
https://en.wikipedia.org/wiki/Large%20set%20%28combinatorics%29 | In combinatorial mathematics, a large set of positive integers S = {s_1, s_2, s_3, ...} is one such that the infinite sum of the reciprocals 1/s_1 + 1/s_2 + 1/s_3 + ⋯ diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges.
Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions.
Examples
Every finite subset of the positive integers is small.
The set of all positive integers is a large set; this statement is equivalent to the divergence of the harmonic series. More generally, any arithmetic progression (i.e., a set of all integers of the form an + b with a ≥ 1, b ≥ 1 and n = 0, 1, 2, 3, ...) is a large set.
The set of square numbers is small (see Basel problem). So is the set of cube numbers, the set of 4th powers, and so on. More generally, the set of positive integer values of any polynomial of degree 2 or larger forms a small set.
The set {1, 2, 4, 8, ...} of powers of 2 is a small set, and so is any geometric progression (i.e., a set of numbers of the form ab^n with a ≥ 1, b ≥ 2 and n = 0, 1, 2, 3, ...).
The set of prime numbers is large. The set of twin primes is small (see Brun's constant).
The set of prime powers which are not prime (i.e., all numbers of the form pn with n ≥ 2 and p prime) is small although the primes are large. This property is frequently used in analytic number theory. More generally, the set of perfect powers is small; even the set of powerful numbers is small.
The set of numbers whose expansions in a given base exclude a given digit is small. For example, the set of integers whose decimal expansion does not include the digit 7 is small. Such series are called Kempner series.
Any set whose upper asymptotic density is nonzero is large.
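The distinction is easy to see numerically (illustrative code, not from the article): partial sums of reciprocals of a small set level off, while those of a large set keep growing.

```python
import math

# Small set: the squares. Partial sums of reciprocals stay bounded,
# approaching pi**2 / 6 (the Basel problem).
squares_sum = sum(1.0 / (n * n) for n in range(1, 100001))
print(round(squares_sum, 4))      # 1.6449

# Large set: all positive integers. Partial sums grow like ln N and diverge;
# this is the divergence of the harmonic series.
harmonic_sum = sum(1.0 / n for n in range(1, 100001))
print(round(harmonic_sum, 2))     # 12.09, roughly ln(100000) + 0.5772
```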
Properties
Every subset of a small set is small.
The union of finitely many small sets is small, because the sum of two convergent series is a convergent series. (In set theoretic terminology, the small sets for |
https://en.wikipedia.org/wiki/Card%20catalog%20%28cryptology%29 | The card catalog, or "catalog of characteristics," in cryptography, was a system designed by Polish Cipher Bureau mathematician-cryptologist Marian Rejewski, and first completed about 1935 or 1936, to facilitate decrypting German Enigma ciphers.
History
The Polish Cipher Bureau used the theory of permutations to start breaking the Enigma cipher in late 1932. The Bureau recognized that the Enigma machine's doubled-key (see Grill (cryptology)) permutations formed cycles, and those cycles could be used to break the cipher. With German cipher keys provided by a French spy, the Bureau was able to reverse engineer the Enigma and start reading German messages. At the time, the Germans were using only 6 steckers, and the Polish grill method was feasible. On 1 August 1936, the Germans started using 8 steckers, and that change made the grill method less feasible. The Bureau needed an improved method to break the German cipher.
Although the steckers would change which letters were in a doubled-key's cycle, the steckers would not change the number of cycles or the length of those cycles. Steckers could be ignored. Ignoring the mid-key turnovers, the Enigma machine had only 26³ = 17,576 distinct settings of the three rotors, and the three rotors could only be arranged in the machine in 3! = 6 ways. That meant there were only 6 × 17,576 = 105,456 likely doubled-key permutations. The Bureau set about determining and cataloging the characteristic of each of those likely permutations. Each letter of the key could be one of p(13) = 101 possible values (the number of partitions of 13), and the 3 letters of the key meant there were 101³ = 1,030,301 possible keys. On average, a key would find one setting of the rotors, but it might find several possible settings.
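These counts are easy to reproduce (illustrative code; the partition-counting function is my addition): the cycle lengths of each doubled-key permutation come in equal pairs summing to 26, so a characteristic corresponds to a partition of 13.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of partitions of n into parts of size at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    # sum over the largest part k of partitions of the remainder n - k
    return sum(partitions(n - k, min(k, n - k)) for k in range(1, max_part + 1))

print(partitions(13))        # 101 possible characteristics per key letter
print(26 ** 3 * 6)           # 105456 rotor settings times wheel orders
print(partitions(13) ** 3)   # 1030301 possible triples of characteristics
```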
The Polish cryptanalyst could then collect enough traffic to determine all the cycles in a daily key. That usually took about 60 messages. The result might be:
He would use the lengths of the cycles (13²; 10²-3²; 10²-2²-1²) to look up the wheel order (II I III) and starting rotor positions in the car
https://en.wikipedia.org/wiki/Swimlane | A swimlane (as in swimlane diagram) is used in process flow diagrams, or flowcharts, that visually distinguishes job sharing and responsibilities for sub-processes of a business process. Swimlanes may be arranged either horizontally or vertically.
Attributes of a swimlane
The swimlane flowchart differs from other flowcharts in that processes and decisions are grouped visually by placing them in lanes. Parallel lines divide the chart into lanes, with one lane for each person, group or sub process. Lanes are labelled to show how the chart is organized.
In the accompanying example, the vertical direction represents the sequence of events in the overall process, while the horizontal divisions depict what sub-process is performing that step. Arrows between the lanes represent how information or material is passed between the sub processes.
Alternately, the flow can be rotated so that the sequence reads horizontally from left to right, with the roles involved being shown at the left edge. This can be easier to read and design, since computer screens are typically wider than they are tall, which gives an improved view of the flow.
Use of standard symbols enables clear linkage to be shown between related flow charts when charting flows with complex relationships.
Usage
When used to diagram a business process that involves more than one department, swimlanes often serve to clarify not only the steps and who is responsible for each one, but also how delays, mistakes or cheating are most likely to occur.
Many process modeling methodologies utilize the concept of swimlanes, as a mechanism to organize activities into separate visual categories in order to illustrate different functional capabilities or responsibilities (organisational roles). Swimlanes are used in Business Process Modeling Notation (BPMN) and Unified Modeling Language activity diagram modeling methodologies.
Alternative terms
A Swimlane was first introduced to computer-based Process Modeling by IGrafx |
https://en.wikipedia.org/wiki/Tracking%20system | A tracking system, also known as a locating system, is used for the observing of persons or objects on the move and supplying a timely ordered sequence of location data for further processing.
Applications
A myriad of tracking systems exists. Some are 'lag time' indicators: the data is collected after an item has passed a point, for example a bar code, choke point, or gate. Others are 'real-time' or 'near real-time', like Global Positioning System (GPS) tracking, depending on how often the data is refreshed. There are bar-code systems, which require items to be scanned, and automatic identification systems (RFID auto-id). For the most part, the tracking world is composed of discrete hardware and software systems for different applications. That is, bar-code systems are separate from Electronic Product Code (EPC) systems, and GPS systems are separate from active real-time locating systems (RTLS). For example, a passive RFID system would be used in a warehouse to scan boxes as they are loaded on a truck, while the truck itself is tracked on a different system using GPS with its own features and software. The major technology “silos” in the supply chain are:
Distribution/warehousing/manufacturing
Indoor assets are tracked by repetitively reading, for example, a barcode or a passive or active RFID tag, and feeding the read data into Work in Progress (WIP) models, Warehouse Management Systems (WMS), or ERP software. The readers at each choke point are meshed auto-ID readers or hand-held ID devices.
However, tracking can also provide monitoring data without being bound to a fixed location, by using a cooperative tracking capability such as an RTLS.
Yard management
Outdoor mobile assets of high value are tracked by choke point, 802.11, Received Signal Strength Indication (RSSI), Time Difference of Arrival (TDOA), active RFID, or GPS yard management, feeding into either third-party yard management software from the provider or an existing system. Yard Management Systems (YMS) couple
https://en.wikipedia.org/wiki/Virtual%20acoustic%20space | Virtual acoustic space (VAS), also known as virtual auditory space, is a technique in which sounds presented over headphones appear to originate from any desired direction in space. The illusion of a virtual sound source outside the listener's head is created.
Sound localization cues generate an externalized percept
When one listens to sounds over headphones (in what is known as the "closed field"), the sound source appears to arise from the center of the head. On the other hand, under normal, so-called free-field, listening conditions, sounds are perceived as being externalized. The direction of a sound in space (see sound localization) is determined by the brain when it analyses the interaction of the incoming sound with the head and external ears. A sound arising to one side reaches the near ear before the far ear (creating an interaural time difference, ITD), and will also be louder at the near ear (creating an interaural level difference, ILD – also known as interaural intensity difference, IID). These binaural cues allow sounds to be lateralized. Although conventional stereo headphone signals make use of ILDs (not ITDs), the sound is not perceived as being externalized.
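As an illustration of the ITD cue, the Woodworth spherical-head formula, a standard textbook approximation not taken from this article, predicts the arrival-time difference as a function of source azimuth (the head radius and speed of sound below are assumed round numbers):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a distant
    source at the given azimuth, using the Woodworth spherical-head
    model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source 90 degrees to one side yields roughly 0.66 ms of delay,
# close to the commonly quoted maximum human ITD.
print(round(woodworth_itd(90) * 1e3, 2), "ms")
```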
The perception of an externalized sound source is due to the frequency and direction-dependent filtering of the pinna which makes up the external ear structure. Unlike ILDs and ITDs, these spectral localization cues are generated monaurally. The same sound presented from different directions will produce at the eardrum a different pattern of peaks and notches across frequency. The pattern of these monaural spectral cues is different for different listeners. Spectral cues are vital for making elevation judgments and distinguishing if a sound arose from in front or behind the listener. They are also vital for creating the illusion of an externalized sound source. Since only ILDs are present in stereo recordings, the lack of spectral cues means that the sound is not perceived as being externalized. The easi |
https://en.wikipedia.org/wiki/Thyroiditis | Thyroiditis is the inflammation of the thyroid gland. The thyroid gland is located on the front of the neck below the laryngeal prominence, and makes hormones that control metabolism.
Signs and symptoms
There are many different signs and symptoms for thyroiditis, none of which are exclusively limited to this disease. Many of the signs imitate symptoms of other diseases, so thyroiditis can sometimes be difficult to diagnose. Common hypothyroid symptoms manifest when thyroid cell damage is slow and chronic, and may include fatigue, weight gain, feeling "fuzzy headed", depression, dry skin, and constipation. Other, rarer symptoms include swelling of the legs, vague aches and pains, decreased concentration and so on. When conditions become more severe, depending on the type of thyroiditis, one may start to see puffiness around the eyes, slowing of the heart rate, a drop in body temperature, or even incipient heart failure. On the other hand, if the thyroid cell damage is acute, the thyroid hormone within the gland leaks out into the bloodstream, causing symptoms of thyrotoxicosis, which are similar to those of hyperthyroidism. These symptoms include weight loss, irritability, anxiety, insomnia, fast heart rate, and fatigue. Elevated levels of thyroid hormone in the bloodstream cause both conditions, but thyrotoxicosis is the term used with thyroiditis since the thyroid gland is not overactive, as in the case of hyperthyroidism.
Causes
Thyroiditis is generally caused by an immune system attack on the thyroid, resulting in inflammation and damage to the thyroid cells. This disease is often considered a malfunction of the immune system and can be associated with IgG4-related systemic disease, in which symptoms of autoimmune pancreatitis, retroperitoneal fibrosis and noninfectious aortitis also occur. Such is also the case in Riedel thyroiditis, an inflammation in which the thyroid tissue is replaced by fibrous tissue which can extend to neighbouring structures. Antibodie |
https://en.wikipedia.org/wiki/Nuclear%20dimorphism | Nuclear dimorphism is a term referring to the special characteristic of having two different kinds of nuclei in a cell. There are many differences between the types of nuclei. This feature is observed in protozoan ciliates, like Tetrahymena, and some foraminifera. Ciliates contain two nucleus types: a macronucleus that is primarily used to control metabolism, and a micronucleus which performs reproductive functions and generates the macronucleus. The compositions of the nuclear pore complexes help determine the properties of the macronucleus and micronucleus. Nuclear dimorphism is subject to complex epigenetic controls. Nuclear dimorphism is continuously being studied to understand exactly how the mechanism works and how it is beneficial to cells. Learning about nuclear dimorphism is beneficial to understanding old eukaryotic mechanisms that have been preserved within these unicellular organisms but were not carried forward into multicellular eukaryotes.
Key components
The ciliated protozoan Tetrahymena is a useful research model for studying nuclear dimorphism; it maintains two distinct nuclear genomes, the micronucleus and the macronucleus. The macronucleus and micronucleus are located in the same cytoplasm; however, they are very different. The micronucleus genome contains five chromosomes that undergo mitosis during micronuclear division and meiosis during conjugation, which is the sexual division of the micronucleus. The macronuclear genome is broken down and catabolized once per life cycle during conjugation, allowing it to be site-specific, and a new macronucleus differentiates from a mitotic descendant of the conjugated micronucleus. The differences in division and overall processes show how functionally and structurally different the two nuclei are. These differences play an active role in the activities and functions of the cells in which they are located.
Macro vs. micronuclei
Macronuclei and micronuclei differ in their functions even though they are located wi |
https://en.wikipedia.org/wiki/Phoebus%20Levene | Phoebus Aaron Theodore Levene (25 February 1869 – 6 September 1940) was a Russian-born American biochemist who studied the structure and function of nucleic acids. He characterized the different forms of nucleic acid, DNA from RNA, and found that DNA contained adenine, guanine, thymine, cytosine, deoxyribose, and a phosphate group.
He was born into a Litvak (Lithuanian Jewish) family as Fishel Rostropovich Levin in the town of Žagarė in Lithuania, then part of the Russian Empire, but grew up in St. Petersburg. There he studied medicine at the Imperial Military Medical Academy (M.D., 1891) and developed an interest in biochemistry. In 1893, because of anti-Semitic pogroms, he and his family emigrated to the United States and he practiced medicine in New York City.
Levene enrolled at Columbia University and in his spare time conducted biochemical research, publishing papers on the chemical structure of sugars. In 1896 he was appointed as an Associate in the Pathological Institute of the New York State Hospitals, but he had to take time off to recuperate from tuberculosis. During this period, he worked with several chemists, including Albrecht Kossel and Emil Fischer, who were the experts in proteins.
In 1905, Levene was appointed as head of the biochemical laboratory at the Rockefeller Institute of Medical Research. He spent the rest of his career at this institute, and it was there that he identified the components of DNA. In 1909, Levene and Walter Jacobs recognised D-ribose as a natural product and an essential component of nucleic acids. They also recognised that the unnatural sugar that Emil Fischer and Oscar Piloty had reported in 1891 was the enantiomer of D-ribose. Levene went on to discover deoxyribose in 1929. Not only did Levene identify the components of DNA, he also showed that the components were linked together in the order phosphate-sugar-base to form units. He called each of these units a nucleotide, and stated that the DNA molecule consiste
https://en.wikipedia.org/wiki/Epigenetic%20controls%20in%20ciliates | Epigenetic controls in ciliates concern a unique characteristic of ciliates, namely that they possess two kinds of nuclei (a phenomenon called nuclear dimorphism): a micronucleus used for inheritance, and a macronucleus, which controls metabolism. The micronucleus contains the entirety of the genome, whereas the macronucleus only contains the DNA necessary for vegetative growth. The macronucleus divides via amitosis, whereas the micronucleus undergoes typical mitosis. During sexual development a new macronucleus is formed from the meiotic products of the micronucleus, a process during which the removal of transposons occurs. During the division or reproduction of ciliates, the two nuclei are under several epigenetic controls.
https://en.wikipedia.org/wiki/Oophoritis | Oophoritis is an inflammation of the ovaries.
It is often seen in combination with salpingitis (inflammation of the fallopian tubes). It may develop in response to infection. Oophoritis is typically caused by a bacterial infection, and may result from chronic pelvic inflammatory disease (PID). This form differs from autoimmune oophoritis, a disorder caused by a malfunction of the immune system.
See also
Pelvic inflammatory disease |
https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Natural%20History%20Society%20of%20Br%C3%BCnn | The Proceedings of the Natural History Society of Brünn (German: Verhandlungen des naturforschenden Vereines in Brünn) was the official journal of the Natural History Society in Brno, published from 1861 to 1920. A free archive of the journal is available through the Biodiversity Heritage Library.
This was the journal where Gregor Mendel published his scientific discoveries on genetics which he made between 1856 and 1863.
The society (German: Naturforschenden Verein Brünn) was organized in 1861 by Franz Czermak, Jacob Kalmus, Alexander Makowsky, Johann Nave, and Gustav von Niessl.
See also
Monatliche Auszüge
"Experiments on Plant Hybridization" |
https://en.wikipedia.org/wiki/Continental%20Circus | Continental Circus is a racing simulation arcade game, created and manufactured by Taito in 1987. In 1989, ports for the Amiga, Amstrad CPC, Atari ST, Commodore 64, MSX and ZX Spectrum were published by Virgin Games.
The arcade version of this game comes in both upright and sit-down models, both of which feature shutter-type 3D glasses hanging above the player's head. According to Computer and Video Games in 1988, it was "the world's first three dimensional racing simulation." The home conversions of Continental Circus lack the full-on 3D and special glasses of the arcade version.
Circus is a common term for racing in France and Japan, likely stemming from the Latin term for a racecourse.
In 2005 the game was made available for the PlayStation 2, Xbox, and PC as part of Taito Legends.
Gameplay
The in-game vehicle is the 1987 Camel-sponsored Honda/Lotus 99T Formula One car as driven by Ayrton Senna and Satoru Nakajima. Due to licensing reasons, sponsor names such as "Camel" or "DeLonghi" are intentionally misspelled to prevent copyright infringement under Japanese law.
The player must successfully qualify in eight different races to win. At the beginning, the player must take 80th place or better to advance. As the player advances, so does the worst position that still qualifies. If the player fails to qualify or if the timer runs out, the game is over. The player does, however, have the option to continue, but if the player fails to qualify in the final race, the game is automatically over, and the player cannot continue.
Hazards
As in the real F1 races, the car is susceptible to damage from contact with another car. Once a player hits a car or a piece of the trackside scenery, they will be called into the pits. If they let the car smoke too long, it will catch fire, and the message "IMPENDING EXPLOSION" will appear. Either way, if they fail to make it back or hit another car, then they will crash or explode, costing several seconds.
Also, if the car r |
https://en.wikipedia.org/wiki/Mostafa%20El-Sayed | Mostafa A. El-Sayed (Arabic: مصطفى السيد) is an Egyptian-American physical chemist, a leading nanoscience researcher, a member of the National Academy of Sciences and a US National Medal of Science laureate. He was the editor-in-chief of the Journal of Physical Chemistry during a critical period of growth. He is also known for the spectroscopy rule named after him, the El-Sayed rule.
Early life and academic career
El-Sayed was born in Zifta, Egypt and spent his early life in Cairo. He earned his B.Sc. in chemistry from Ain Shams University Faculty of Science, Cairo in 1953. El-Sayed earned his doctoral degree in chemistry from Florida State University working with Michael Kasha, the last student of the legendary G. N. Lewis. While attending graduate school he met and married Janice Jones, his wife of 48 years. He spent time as a post-doctoral researcher at Harvard University, Yale University and the California Institute of Technology before joining the faculty of the University of California at Los Angeles in 1961. He spent over thirty years of his career at UCLA, while he and his wife raised five children (Lyla, Tarric, James, Dorea and Ivan). In 1994, he retired from UCLA and accepted the position of Julius Brown Chair and Regents Professor of Chemistry and Biochemistry at the Georgia Institute of Technology. He led the Laser Dynamics Lab there until his full retirement in 2020.
El-Sayed is a former editor-in-chief of the Journal of Physical Chemistry (1980–2004).
Research
El-Sayed and his research group have contributed to many important areas of physical and materials chemistry research. El-Sayed's research interests include the use of steady-state and ultra fast laser spectroscopy to understand relaxation, transport and conversion of energy in molecules, in solids, in photosynthetic systems, semiconductor quantum dots and metal nanostructures. The El-Sayed group has also been involved in the development of new techniques such as magnetophotonic selectio |
https://en.wikipedia.org/wiki/Grill%20%28cryptology%29 | The grill method (), in cryptology, was a method used chiefly early on, before the advent of the cyclometer, by the mathematician-cryptologists of the Polish Cipher Bureau (Biuro Szyfrów) in decrypting German Enigma machine ciphers. The Enigma rotor cipher machine changes plaintext characters into cipher text using a different permutation for each character, and so implements a polyalphabetic substitution cipher.
Background
The German navy started using Enigma machines in 1926; it was called Funkschlüssel C ("Radio cipher C"). By 15 July 1928, the German Army (Reichswehr) had introduced their own version of the Enigma—the Enigma G; a revised Enigma I (with plugboard) appeared in June 1930. The Enigma I used by the German military in the 1930s was a 3-rotor machine. Initially, there were only three rotors labeled I, II, and III, but they could be arranged in any order when placed in the machine. Rejewski identified the rotor permutations by L, M, and N; the encipherment produced by the rotors altered as each character was encrypted. The rightmost permutation (N) changed with each character. In addition, there was a plugboard that did some additional scrambling.
The number of possible different rotor wirings is 26! = 403,291,461,126,605,635,584,000,000.
The number of possible different reflector wirings is 25 × 23 × 21 × ... × 3 × 1 = 7,905,853,580,625.
A perhaps more intuitive way of arriving at this figure is to consider that 1 letter can be wired to any of 25. That leaves 24 letters to connect. The next chosen letter can connect to any of 23. And so on.
The number of possible different plugboard wirings (for six cables) is 26! / (6! × 14! × 2⁶) = 100,391,791,500.
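These wiring counts can be reproduced with a few lines of arithmetic; a sketch (the plugboard expression pairs 12 of the 26 letters into six unordered cables):

```python
import math

# Rotor wirings: any permutation of the 26 letters.
rotor_wirings = math.factorial(26)
print(rotor_wirings)  # 403291461126605635584000000

# Reflector wirings: pair up 26 letters, 25 * 23 * 21 * ... * 1.
reflector = 1
for k in range(25, 0, -2):
    reflector *= k
print(reflector)      # 7905853580625

# Plugboard with six cables: choose 12 of 26 letters and pair them,
# dividing out the order of the 6 cables and of each cable's two ends.
plugboard = math.factorial(26) // (math.factorial(14) * math.factorial(6) * 2**6)
print(plugboard)      # 100391791500
```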
To encrypt or decrypt, the operator made the following machine key settings:
the rotor order (Walzenlage)
the ring settings (Ringstellung)
the plugboard connections (Steckerverbindung)
an initial rotor position (Grundstellung)
In the early 1930s, the Germans distributed a secret monthly list of all the daily machine settings. The Germans knew that it would be foolish to encrypt the day's traffic using the same key, so each me |
https://en.wikipedia.org/wiki/Colipase | Colipase, abbreviated CLPS, is a protein co-enzyme required for optimal enzyme activity of pancreatic lipase. It is secreted by the pancreas in an inactive form, procolipase, which is activated in the intestinal lumen by trypsin. Its function is to prevent the inhibitory effect of bile salts on the lipase-catalyzed intraduodenal hydrolysis of dietary long-chain triglycerides.
In humans, the colipase protein is encoded by the CLPS gene.
Protein domain
Colipase is also a family of evolutionarily related proteins.
Colipase is a small protein cofactor needed by pancreatic lipase for efficient dietary lipid hydrolysis. Efficient absorption of dietary fats is dependent on the action of pancreatic triglyceride lipase. Colipase binds to the C-terminal, non-catalytic domain of lipase, thereby stabilising an active conformation and considerably increasing the hydrophobicity of its binding site. Structural studies of the complex and of colipase alone have revealed the functionality of its architecture.
Colipase is a small protein (12 kDa) with five conserved disulphide bonds. Structural analogies have been recognised between a developmental protein (Dickkopf), the pancreatic lipase C-terminal domain, the N-terminal domains of lipoxygenases, and the C-terminal domain of alpha-toxin. These non-catalytic domains in the latter enzymes are important for interaction with membranes. It has not been established whether these domains are also involved in binding a protein cofactor, as is the case for pancreatic lipase.
See also
Enterostatin |
https://en.wikipedia.org/wiki/Robin%20Gandy | Robin Oliver Gandy (22 September 1919 – 20 November 1995) was a British mathematician and logician. He was a friend, student, and associate of Alan Turing, having been supervised by Turing during his PhD at the University of Cambridge, where they worked together.
Education and early life
Robin Gandy was born in the village of Rotherfield Peppard, Oxfordshire, England. A great-great-grandson of the architect and artist Joseph Gandy (1771–1843), he was the son of Thomas Hall Gandy (1876–1948), a general practitioner, and Ida Caroline née Hony (1885–1977), a social worker and later an author. His brother was the diplomat Christopher Gandy and his sister was the physician Gillian Gandy.
Educated at Abbotsholme School in Derbyshire, Gandy took two years of the Mathematical Tripos, at King's College, Cambridge, before enlisting for military service in 1940. During World War II he worked on radio intercept equipment at Hanslope Park, where Alan Turing was working on a speech encipherment project, and he became one of Turing's lifelong friends and associates. In 1946, he completed Part III of the Mathematical Tripos, then began studying for a PhD under Turing's supervision. He completed his thesis, On axiomatic systems in mathematics and theories in Physics, in 1952. He was a member of the Cambridge Apostles.
Career and research
Gandy held positions at the University of Leicester, the University of Leeds, and the University of Manchester. He was a visiting associate professor at Stanford University from 1966 to 1967, and held a similar position at University of California, Los Angeles in 1968. In 1969, he moved to Wolfson College, Oxford, where he became Reader in Mathematical Logic.
Gandy is known for his work in recursion theory. His contributions include the Spector–Gandy theorem, the Gandy Stage Comparison theorem, and the Gandy Selection theorem. He also made a significant contribution to the understanding of the Church–Turing thesis, and his generalisation of the |
https://en.wikipedia.org/wiki/Intercast | Intercast was a short-lived technology developed in 1996 by Intel for broadcasting information such as web pages and computer software, along with a single television channel. It required a compatible TV tuner card installed in a personal computer and a decoding program called Intel Intercast Viewer. The data for Intercast was embedded in the Vertical Blanking Interval (VBI) of the video signal carrying the Intercast-enabled program, at a maximum of 10.5 Kilobytes/sec in 10 of the 45 lines of the VBI.
With Intercast, a computer user could watch the TV broadcast in one window of the Intercast Viewer while viewing HTML web pages in another window. Users could also download software transmitted via Intercast. Most often the web pages received were relevant to the television program being broadcast, such as extra information relating to a television program, or extra news headlines and weather forecasts during a newscast. Intercast can be seen as a more modern version of teletext.
The Intercast Viewer software was bundled with several TV tuner cards at the time, such as the Hauppauge Win-TV card. Also at the time of Intercast's introduction, Compaq offered some models of computers with built-in TV tuners installed with the Intercast Viewer software.
Upon its debut, Intercast was used by several TV networks, such as NBC, CNN, The Weather Channel, and MTV Networks.
On June 25, 1996, Intel and NBC announced an arrangement which enabled users to watch coverage of the 1996 Summer Olympics and other programming from NBC News.
Intel discontinued support for Intercast a couple of years later.
NBC's series Homicide: Life on the Street was a show that was Intercast-enabled. |
https://en.wikipedia.org/wiki/Discount%20points | Discount points, also called mortgage points or simply points, are a form of pre-paid interest available in the United States when arranging a mortgage. One point equals one percent of the loan amount. By charging a borrower points, a lender effectively increases the yield on the loan above the amount of the stated interest rate. Borrowers can offer to pay a lender points as a method to reduce the interest rate on the loan, thus obtaining a lower monthly payment in exchange for this up-front payment. For each point purchased, the loan rate is typically reduced by anywhere from 1/8% (0.125%) to 1/4% (0.25%).
Selling the property or refinancing prior to this break-even point (the point at which the accumulated monthly savings equal the up-front cost of the points) will result in a net financial loss for the buyer, while keeping the loan for longer than this break-even point will result in a net financial saving for the buyer. Accordingly, if the intention is to buy and then sell the property or refinance quickly, paying points will cost more than just paying the higher interest rate.
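A worked sketch of the break-even calculation, using a hypothetical $200,000 loan at assumed rates (all figures illustrative, not from the text):

```python
def break_even_months(loan, points, rate_drop, term_years=30, base_rate=0.0675):
    """Months until the up-front cost of discount points is recovered
    through the lower monthly payment. Loan amount and rates are
    hypothetical examples."""
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    cost = loan * points * 0.01                     # one point = 1% of loan
    saving = (monthly_payment(loan, base_rate, term_years)
              - monthly_payment(loan, base_rate - rate_drop, term_years))
    return cost / saving

# Two points, each assumed to cut the rate by 0.25% (total 0.50%):
print(round(break_even_months(200_000, 2, 0.005)))
```

With these assumed numbers, two points costing $4,000 save roughly $66 per month, giving a break-even of about five years; selling or refinancing earlier loses money, matching the trade-off described above.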
Points may also be purchased to reduce the monthly payment for the purpose of qualifying for a loan. Loan qualification based on monthly income versus the monthly loan payment may sometimes only be achievable by reducing the monthly payment through the purchasing of points to buy down the interest rate, thereby reducing the monthly loan payment.
Discount points differ from an origination fee, mortgage arrangement fee, or broker fee. Discount points are always used to buy down the interest rate, while origination fees are sometimes fees the lender charges for the loan and sometimes just another name for buying down the interest rate. The origination fee and discount points are both listed under lender charges on the HUD-1 Settlement Statement.
The difference in savings over the life of the loan can make paying points a benefit to the borrower. Any significant changes in fees should be re-disclosed in the final good faith estimate (GFE).
Also directly related to points is the c |
https://en.wikipedia.org/wiki/Debt-to-income%20ratio | In the consumer mortgage industry, debt-to-income ratio (DTI) is the percentage of a consumer's monthly gross income that goes toward paying debts. (Speaking precisely, DTIs often cover more than just debts; they can include principal, taxes, fees, and insurance premiums as well. Nevertheless, the term is a set phrase that serves as a convenient, well-understood shorthand.) There are two main kinds of DTI, as discussed below.
Two main kinds of DTI
The two main kinds of DTI are expressed as a pair using the notation x/y (for example, 28/36).
The first DTI, known as the front-end ratio, indicates the percentage of income that goes toward housing costs, which for renters is the rent amount and for homeowners is PITI (mortgage principal and interest, mortgage insurance premium [when applicable], hazard insurance premium, property taxes, and homeowners' association dues [when applicable]).
The second DTI, known as the back-end ratio, indicates the percentage of income that goes toward paying all recurring debt payments, including those covered by the first DTI, and other debts such as credit card payments, car loan payments, student loan payments, child support payments, alimony payments, and legal judgments.
Example
If the lender requires a debt-to-income ratio of 28/36, then to qualify a borrower for a mortgage, the lender would go through the following process to determine what expense levels they would accept:
Using yearly figures:
Gross income of $45,000
$45,000 × 0.28 = $12,600 allowed for housing expense.
$45,000 × 0.36 = $16,200 allowed for housing expense plus recurring debt.
Using monthly figures:
Gross income of $3,750 (= $45,000 / 12)
$3,750 × 0.28 = $1,050 allowed for housing expense.
$3,750 × 0.36 = $1,350 allowed for housing expense plus recurring debt.
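The 28/36 example above amounts to two multiplications on monthly gross income; a minimal sketch:

```python
def dti_allowances(gross_annual_income, front=0.28, back=0.36):
    """Monthly housing and total-debt allowances under a front/back
    DTI limit such as 28/36."""
    monthly = gross_annual_income / 12
    return round(monthly * front, 2), round(monthly * back, 2)

housing, total = dti_allowances(45_000)
print(housing, total)  # 1050.0 1350.0
```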
DTI limits used in qualifying borrowers
United States
Conforming loans
In the United States, for conforming loans, the following limits are currently typical:
Conventional financing limits are typically 28/36 for manually unde |
https://en.wikipedia.org/wiki/List%20of%20plants%20by%20common%20name | This is a list of plants organized by their common names. However, the common names of plants often vary from region to region, which is why most plant encyclopedias refer to plants using their scientific names, in other words using binomials or "Latin" names.
A
African sheepbush – Pentzia incana
Alder – Alnus
Black alder – Alnus glutinosa, Ilex verticillata
Common alder – Alnus glutinosa
False alder – Ilex verticillata
Gray alder – Alnus incana
Speckled alder – Alnus incana
White alder – Alnus incana, Alnus rhombifolia, Ilex verticillata
Almond – Prunus dulcis
Aloe vera – Aloe vera
Amaranth – Amaranthus
Foxtail amaranth – Amaranthus caudatus
Ambrosia
Tall ambrosia – Ambrosia trifida
Amy root – Apocynum cannabinum
Angel trumpet – Brugmansia suaveolens
Apple – Malus domestica
Apricot – Prunus armeniaca
Arfaj – Rhanterium epapposum
Arizona sycamore – Platanus wrightii
Arrowwood – Cornus florida
Indian arrowwood – Cornus florida
Ash – Fraxinus spp.
Black ash – Acer negundo, Fraxinus nigra
Blue ash – Fraxinus quadrangulata
Cane ash – Fraxinus americana
European ash – Fraxinus excelsior
Green ash – Fraxinus pennsylvanica lanceolata
Maple ash – Acer negundo
Red ash – Fraxinus pennsylvanica lanceolata
River ash – Fraxinus pennsylvanica
Swamp ash – Fraxinus pennsylvanica
White ash – Fraxinus americana
Water ash – Acer negundo, Fraxinus pennsylvanica
Azolla – Azolla
Carolina azolla – Azolla caroliniana
B
Bamboo – Bambusa arundinacea
Banana – mainly Musa × paradisiaca, but also other Musa species and hybrids
Baobab – Adansonia
Bay – Laurus spp. or Umbellularia spp.
Bay laurel – Laurus nobilis (culinary)
California bay – Umbellularia californica
Bean – Fabaceae, specifically Phaseolus spp.
Bearberry – Ilex decidua
Bear corn – Veratrum viride
Beech – Fagus
Bindweed
Blue bindweed – Solanum dulcamara
Bird's nest – Daucus carota
Bird's nest plant – Daucus carota
Bird of paradise – Strelitzia reginae
Birch – Betula spp.
Black birch – Betula lenta, Betula nigra
Bolean bir |
https://en.wikipedia.org/wiki/Bidirectional%20search | Bidirectional search is a graph search algorithm that finds a shortest path from an initial vertex to a goal vertex in a directed graph. It runs two simultaneous searches: one forward from the initial state, and one backward from the goal, stopping when the two meet. The reason for this approach is that in many cases it is faster: for instance, in a simplified model of search problem complexity in which both searches expand a tree with branching factor b, and the distance from start to goal is d, each of the two searches has complexity O(b^(d/2)) (in Big O notation), and the sum of these two search times is much less than the O(b^d) complexity that would result from a single search from the beginning to the goal.
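A minimal sketch of the idea for unweighted graphs, using breadth-first search from both ends over an adjacency dict (assumed undirected, so the backward search can reuse the same adjacency, and with truthy node labels). A production version would need the more careful termination conditions discussed in the text:

```python
from collections import deque

def bidirectional_shortest_path(graph, start, goal):
    """Bidirectional BFS on an unweighted, undirected graph given as an
    adjacency dict. Expands the two frontiers alternately, one level at
    a time, and stops at the first node reached from both sides."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_side):
        # Expand one full BFS level; return a meeting node if found.
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph.get(node, ()):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_side:
                        return nbr
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = (expand(frontier_f, parents_f, parents_b)
                or expand(frontier_b, parents_b, parents_f))
        if meet is not None:
            # Stitch the forward half (start..meet) to the backward half.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

g = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_shortest_path(g, 'A', 'E'))  # ['A', 'B', 'C', 'D', 'E']
```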
Andrew Goldberg and others explained the correct termination conditions for the bidirectional version of Dijkstra’s Algorithm.
As in A* search, bi-directional search can be guided by a heuristic estimate of the remaining distance to the goal (in the forward tree) or from the start (in the backward tree).
Ira Pohl (1971) was the first to design and implement a bi-directional heuristic search algorithm, but the search trees emanating from the start and goal nodes failed to meet in the middle of the solution space. The BHFFA algorithm of de Champeaux (1977) fixed this defect.
A solution found by the uni-directional A* algorithm using an admissible heuristic has a shortest path length; the same property holds for the BHFFA2 bidirectional heuristic version described in de Champeaux (1983). BHFFA2 has, among others, more careful termination conditions than BHFFA.
Description
A bidirectional heuristic search is a state-space search from some state s to another state t, searching from s to t and from t to s simultaneously. It returns a valid list of operators that, if applied to s, will give us t.
While it may seem as though the operators have to be invertible for the reverse search, it is only necessary to be able to find, given any node n, the set of parent nodes of n such that there exists som
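The meet-in-the-middle idea behind the complexity argument above can be sketched for the unweighted case. This is an illustrative Python sketch, assuming an undirected graph supplied as an adjacency dict; it uses the naive first-meeting termination rule, so it shows the mechanics rather than the rigorous termination conditions discussed above:

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Find a path by meet-in-the-middle BFS on an unweighted,
    undirected graph given as an adjacency dict.

    Each frontier only has to reach depth ~d/2, so the work is on the
    order of b**(d/2) node expansions per side rather than b**d for a
    single forward search.
    """
    if start == goal:
        return [start]
    parents_f = {start: None}   # forward search tree
    parents_b = {goal: None}    # backward search tree
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand_layer(frontier, parents, other_parents):
        # Advance one BFS layer; return the meeting node if the
        # frontiers touch, else None.
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph.get(node, ()):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:
                        return nbr
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand_layer(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand_layer(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None  # start and goal are not connected
```

The two parent maps double as the "search trees" of the description: the forward map answers "how did I reach this node from s", the backward map "how do I continue from this node to t".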
https://en.wikipedia.org/wiki/Dimethyl%20methylphosphonate | Dimethyl methylphosphonate is an organophosphorus compound with the chemical formula CH3PO(OCH3)2. It is a colourless liquid, which is primarily used as a flame retardant.
Synthesis
Dimethyl methylphosphonate can be prepared from trimethyl phosphite and a halomethane (e.g. iodomethane) via the Michaelis–Arbuzov reaction.
Dimethyl methylphosphonate is a Schedule 2 chemical as it may be used in the production of chemical weapons. It will react with thionyl chloride to produce methylphosphonic acid dichloride, which is used in the production of sarin and soman nerve agents. Various amines can be used to catalyse this process. It can be used as a sarin simulant for the calibration of organophosphorus detectors.
Uses
The primary commercial use of dimethyl methylphosphonate is as a flame retardant. Other commercial uses are a preignition additive for gasoline, anti-foaming agent, plasticizer, stabilizer, textile conditioner, antistatic agent, and an additive for solvents and low-temperature hydraulic fluids. It can be used as a catalyst and a reagent in organic synthesis, as it can generate a highly reactive ylide. The yearly production in the United States varies between .
About 190 liters of dimethyl methylphosphonate, together with other chemicals, were released during the crash of El Al Flight 1862 at Bijlmer in Amsterdam in 1992. |
https://en.wikipedia.org/wiki/Redfield%20ratio | The Redfield ratio or Redfield stoichiometry is the consistent atomic ratio of carbon, nitrogen and phosphorus found in marine phytoplankton and throughout the deep oceans.
The term is named for American oceanographer Alfred C. Redfield who in 1934 first described the relatively consistent ratio of nutrients in marine biomass samples collected across several voyages on board the research vessel Atlantis, and empirically found the ratio to be C:N:P = 106:16:1. While deviations from the canonical 106:16:1 ratio have been found depending on phytoplankton species and the study area, the Redfield ratio has remained an important reference to oceanographers studying nutrient limitation. A 2014 paper summarizing a large data set of nutrient measurements across all major ocean regions spanning from 1970 to 2010 reported the global median C:N:P to be 163:22:1.
Discovery
For his 1934 paper, Alfred Redfield analyzed nitrate and phosphate data for the Atlantic, Indian, Pacific oceans and Barents Sea. As a Harvard physiologist, Redfield participated in several voyages on board the research vessel Atlantis, analyzing data for C, N, and P content in marine plankton, and referenced data collected by other researchers as early as 1898.
Redfield’s analysis of the empirical data led him to discover that across and within the three oceans and the Barents Sea, seawater had an N:P atomic ratio near 20:1 (later corrected to 16:1), very similar to the average N:P of phytoplankton.
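Because each nitrate ion carries one N atom and each phosphate ion one P atom, the atomic N:P ratio is simply the quotient of the two molar concentrations. A minimal illustration, with hypothetical sample values:

```python
def np_ratio(nitrate_umol_per_l, phosphate_umol_per_l):
    """Atomic N:P ratio of a water sample.

    Each nitrate ion (NO3-) carries one N atom and each phosphate ion
    (PO4 3-) one P atom, so with both concentrations in the same molar
    units the atomic ratio is a simple quotient.
    """
    return nitrate_umol_per_l / phosphate_umol_per_l


# Hypothetical deep-water sample near the canonical Redfield value:
print(np_ratio(32.0, 2.0))  # → 16.0
```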
To explain this phenomenon, Redfield initially proposed two mutually non-exclusive mechanisms:
I) The N:P in plankton tends towards the N:P composition of seawater. Specifically, phytoplankton species with different N and P requirements compete within the same medium and come to reflect the nutrient composition of the seawater.
II) An equilibrium between seawater and planktonic nutrient pools is maintained through biotic feedback mechanisms. Redfield proposed a thermostat-like scenario in which the a
https://en.wikipedia.org/wiki/Generation%20gap%20%28pattern%29 | Generation gap is a software design pattern documented by John Vlissides that treats automatically generated code differently than code that was written by a developer. Modifications should not be made to generated code, as they would be overwritten if the code generation process was ever re-run, such as during recompilation. Vlissides proposed creating a subclass of the generated code which contains the desired modification. This might be considered an example of the template method pattern.
Modern languages
Modern byte-code languages such as Java were in their early stages when Vlissides developed his ideas. In a language like Java or C#, this pattern may be followed by generating an interface, which is a completely abstract class. The developer would then hand-modify a concrete implementation of the generated interface.
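A minimal sketch of the pattern (class names and the generator are hypothetical): the generated class is never edited, and customizations live in a hand-written subclass that survives regeneration:

```python
# generated_person.py -- output of a (hypothetical) code generator.
# DO NOT EDIT: this file is overwritten every time generation re-runs.
class GeneratedPerson:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    def full_name(self):
        return f"{self.first} {self.last}"


# person.py -- hand-written; survives regeneration untouched.
class Person(GeneratedPerson):
    """Customizations live only in this subclass (the 'generation gap')."""

    def full_name(self):
        # Override generated behaviour without modifying generated code.
        return super().full_name().upper()


print(Person("Ada", "Lovelace").full_name())  # → ADA LOVELACE
```

Re-running the generator replaces only GeneratedPerson; the Person subclass, and therefore the customization, is untouched, which is the template-method-like split the pattern relies on.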
https://en.wikipedia.org/wiki/Posterior%20interosseous%20nerve | The posterior interosseous nerve (or dorsal interosseous nerve/deep radial nerve) is a nerve in the forearm. It is the continuation of the deep branch of the radial nerve, after this has crossed the supinator muscle. It is considerably diminished in size compared to the deep branch of the radial nerve. The nerve fibers originate from cervical segments C7 and C8 in the spinal column.
Structure
Course
It descends along the interosseous membrane, anterior to the extensor pollicis longus muscle, to the back of the carpus, where it presents a gangliform enlargement from which filaments are distributed to the ligaments and articulations of the carpus.
Supply
The posterior interosseous nerve supplies all the muscles of the posterior compartment of the forearm, except anconeus muscle, brachioradialis muscle, and extensor carpi radialis longus muscle. In other words, it supplies the following muscles:
Extensor carpi radialis brevis muscle — deep branch of radial nerve
Extensor digitorum muscle
Extensor digiti minimi muscle
Extensor carpi ulnaris muscle
Supinator muscle — deep branch of radial nerve
Abductor pollicis longus muscle
Extensor pollicis brevis muscle
Extensor pollicis longus muscle
Extensor indicis muscle
The posterior interosseous nerve provides proprioception to the joint capsule of the distal radioulnar articulation, but not pain sensation.
Clinical significance
The posterior interosseous nerve may be entrapped at the arcade of Frohse, which is part of the supinator muscle. This nerve can be injured in Monteggia fracture due to dislocation of the proximal head of radius bone.
Posterior interosseous neuropathy is a purely motor syndrome, resulting in finger drop due to loss of extension at the interphalangeal (IP) joints, and radial wrist deviation on extension.
See also
Anterior interosseous nerve |
https://en.wikipedia.org/wiki/Posterior%20interosseous%20artery | The posterior interosseous artery (dorsal interosseous artery) is an artery of the forearm. It is a branch of the common interosseous artery, which is a branch of the ulnar artery.
Structure
The posterior interosseous artery passes backward between the oblique cord and the upper border of the interosseous membrane. It appears between the contiguous borders of supinator muscle and the abductor pollicis longus muscle, and runs down the back of the forearm between the superficial and deep layers of muscles, to both of which it distributes branches.
Where it lies on abductor pollicis longus muscle and the extensor pollicis brevis muscle, it is accompanied by the dorsal interosseous nerve. At the lower part of the forearm it anastomoses with the termination of the volar interosseous artery, and with the dorsal carpal network.
Branches
Near its origin, it gives off the interosseous recurrent artery. This ascends to the interval between the lateral epicondyle and olecranon, on or through the fibers of supinator muscle, but beneath the anconeus muscle, and anastomoses with the middle collateral branch of the deep artery of arm, the posterior ulnar recurrent artery and the inferior ulnar collateral artery.
The posterior interosseous artery gives off many muscular arteries.
Additional images
See also
Anterior interosseous artery
Ulnar artery |
https://en.wikipedia.org/wiki/Copper%28I%29%20cyanide | Copper(I) cyanide is an inorganic compound with the formula CuCN. This off-white solid occurs in two polymorphs; impure samples can be green due to the presence of Cu(II) impurities. The compound is useful as a catalyst, in electroplating copper, and as a reagent in the preparation of nitriles.
Structure
Copper cyanide is a coordination polymer. It exists in two polymorphs, both of which contain -[Cu-CN]- chains made from linear copper(I) centres linked by cyanide bridges. In the high-temperature polymorph, HT-CuCN, which is isostructural with AgCN, the linear chains pack on a hexagonal lattice, with adjacent chains offset by ±1/3 c (Figure 1). In the low-temperature polymorph, LT-CuCN, the chains deviate from linearity and pack into rippled layers that stack in an AB fashion, with chains in adjacent layers rotated by 49° (Figure 2).
LT-CuCN can be converted to HT-CuCN by heating to 563 K in an inert atmosphere. In both polymorphs the copper to carbon and copper to nitrogen bond lengths are ~1.85 Å and bridging cyanide groups show head-to-tail disorder.
Preparation
Cuprous cyanide is commercially available and is supplied as the low-temperature polymorph. It can be prepared by the reduction of copper(II) sulfate with sodium bisulfite at 60 °C, followed by the addition of sodium cyanide to precipitate pure LT-CuCN as a pale yellow powder.
2 CuSO4 + NaHSO3 + H2O + 2 NaCN → 2 CuCN + 3 NaHSO4
On addition of sodium bisulfite the copper sulfate solution turns from blue to green, at which point the sodium cyanide is added. The reaction is performed under mildly acidic conditions. Copper cyanide has historically been prepared by treating copper(II) sulfate with sodium cyanide; in this redox reaction, copper(I) cyanide forms together with cyanogen:
2 CuSO4 + 4 NaCN → 2 CuCN + (CN)2 + 2 Na2SO4
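As a quick illustrative check, the atom balance of the two equations above can be verified mechanically. This is a small sketch whose formula parser handles only the simple formulas used here (element symbols, trailing digits, one level of parentheses):

```python
import re
from collections import Counter

def atom_count(formula, coeff=1):
    """Count atoms in a simple formula such as 'CuSO4' or '(CN)2'."""
    # Expand one level of parenthesised groups, e.g. '(CN)2' -> 'CNCN'.
    formula = re.sub(r"\(([A-Za-z0-9]+)\)(\d*)",
                     lambda m: m.group(1) * int(m.group(2) or 1),
                     formula)
    counts = Counter()
    for symbol, digits in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += coeff * int(digits or 1)
    return counts

def balanced(lhs, rhs):
    """True when both sides, given as (coefficient, formula) lists, match."""
    total = lambda side: sum((atom_count(f, c) for c, f in side), Counter())
    return total(lhs) == total(rhs)

# 2 CuSO4 + NaHSO3 + H2O + 2 NaCN -> 2 CuCN + 3 NaHSO4
print(balanced([(2, "CuSO4"), (1, "NaHSO3"), (1, "H2O"), (2, "NaCN")],
               [(2, "CuCN"), (3, "NaHSO4")]))   # → True
# 2 CuSO4 + 4 NaCN -> 2 CuCN + (CN)2 + 2 Na2SO4
print(balanced([(2, "CuSO4"), (4, "NaCN")],
               [(2, "CuCN"), (1, "(CN)2"), (2, "Na2SO4")]))  # → True
```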
Because this synthetic route produces cyanogen, uses two equivalents of sodium cyanide per equivalent of CuCN made and the resulting copper cyanide |