id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
16,040,681 | https://en.wikipedia.org/wiki/Ronald%20D.%20Macfarlane | Ronald D. Macfarlane (born February 21, 1933, Buffalo, New York) is a distinguished professor of Chemistry at Texas A&M University. In 1991, he received the inaugural Distinguished Achievement Award of the American Society for Mass Spectrometry.
Early life and education
1954 University at Buffalo, New York - B.A. Chemistry
1957 Carnegie-Mellon University, Pennsylvania - M.S. Chemistry
1959 Carnegie-Mellon University, Pennsylvania - Ph.D. Chemistry
Research interests
Separations Methods for Medical Diagnosis
Ultra-Sensitive Mass Spectrometry
New methods of conceptual learning
Awards
Guggenheim Fellowship, 1968
Distinguished Achievement in Research Award
ACS Nuclear Chemistry Award
1990 ASMS Distinguished Contribution in Mass Spectrometry Award
References
Living people
Texas A&M University faculty
Thomson Medal recipients
Mass spectrometrists
21st-century American chemists
University at Buffalo alumni
1933 births | Ronald D. Macfarlane | Physics,Chemistry | 179 |
45,959 | https://en.wikipedia.org/wiki/Nuclear%20strategy | Nuclear strategy involves the development of doctrines and strategies for the production and use of nuclear weapons.
As a sub-branch of military strategy, nuclear strategy attempts to match nuclear weapons as means to political ends. In addition to the actual use of nuclear weapons, whether on the battlefield or strategically, a large part of nuclear strategy involves their use as a bargaining tool.
Some of the issues considered within nuclear strategy include:
Conditions which serve a nation's interest to develop nuclear weapons
Types of nuclear weapons to be developed
How and when weapons are to be used
Many strategists argue that nuclear strategy differs from other forms of military strategy: the immense and terrifying power of the weapons makes their use in pursuit of victory in a traditional military sense impossible.
Perhaps counterintuitively, an important focus of nuclear strategy has been determining how to prevent and deter their use, a crucial part of mutually assured destruction.
In the context of nuclear proliferation and maintaining the balance of power, states also seek to prevent other states from acquiring nuclear weapons as part of nuclear strategy.
Nuclear deterrent composition
The doctrine of mutual assured destruction (MAD) assumes that a nuclear deterrent force must be credible and survivable. That is, each deterrent force must survive a first strike with sufficient capability to effectively destroy the other country in a second strike. Therefore, a first strike would be suicidal for the launching country.
In the late 1940s and 1950s as the Cold War developed, the United States and Soviet Union pursued multiple delivery methods and platforms to deliver nuclear weapons. Three types of platforms proved most successful and are collectively called a "nuclear triad". These are air-delivered weapons (bombs or missiles), ballistic missile submarines (usually nuclear-powered and called SSBNs), and intercontinental ballistic missiles (ICBMs), usually deployed in land-based hardened missile silos or on vehicles.
Although not considered part of the deterrent forces, all of the nuclear powers deployed large numbers of tactical nuclear weapons in the Cold War. These could be delivered by virtually all platforms capable of delivering large conventional weapons.
During the 1970s there was growing concern that the combined conventional forces of the Soviet Union and the Warsaw Pact could overwhelm the forces of NATO. It seemed unthinkable to respond to a Soviet/Warsaw Pact incursion into Western Europe with strategic nuclear weapons, inviting a catastrophic exchange. Thus, technologies were developed to greatly reduce collateral damage while being effective against advancing conventional military forces. Some of these were low-yield neutron bombs, which were lethal to tank crews, especially with tanks massed in tight formation, while producing relatively little blast, thermal radiation, or radioactive fallout. Other technologies were so-called "suppressed radiation devices," which produced mostly blast with little radioactivity, making them much like conventional explosives, but with much more energy.
See also
Assured destruction
Bernard Brodie
Counterforce, Countervalue
Decapitation strike
Deterrence theory
Doctrine for Joint Nuclear Operations
Dr. Strangelove (1964), a film by Stanley Kubrick, satirizing nuclear strategy.
Fail-deadly
Pre-emptive nuclear strike, Second strike
Force de frappe
Game theory, wargaming
Herman Kahn
Madman theory
Massive retaliation
Military strategy
Minimal deterrence
Mutual assured destruction (MAD)
No first use
National Security Strategy of the United States
Nuclear blackmail
Nuclear proliferation
Nuclear utilization target selection (NUTS)
Nuclear weapons debate
Single Integrated Operational Plan (SIOP)
Strategic bombing
Tactical nuclear weapons
Thomas Schelling
Bibliography
Early texts
Brodie, Bernard. The Absolute Weapon. Freeport, N.Y.: Books for Libraries Press, 1946.
Brodie, Bernard. Strategy in the Missile Age. Princeton: Princeton University Press, 1959.
Dunn, Lewis A. Deterrence Today – Roles, Challenges, and Responses. Paris: IFRI Proliferation Papers n° 19, 2007.
Kahn, Herman. On Thermonuclear War. 2nd ed. Princeton, N.J.: Princeton University Press, 1961.
Kissinger, Henry A. Nuclear Weapons and Foreign Policy. New York: Harper, 1957.
Schelling, Thomas C. Arms and Influence. New Haven: Yale University Press, 1966.
Wohlstetter, Albert. "The Delicate Balance of Terror." Foreign Affairs 37, 2 (1958): 211–233.
Secondary literature
Baylis, John, and John Garnett. Makers of Nuclear Strategy. London: Pinter, 1991.
Buzan, Barry, and Eric Herring. The Arms Dynamic in World Politics. London: Lynne Rienner Publishers, 1998.
Freedman, Lawrence. The Evolution of Nuclear Strategy. 2nd ed. New York: St. Martin's Press, 1989.
Heuser, Beatrice. NATO, Britain, France and the FRG: Nuclear Strategies and Forces for Europe, 1949–2000. London: Macmillan, hardback 1997, paperback 1999. 256p.
Heuser, Beatrice. Nuclear Mentalities? Strategies and Belief Systems in Britain, France and the FRG (London: Macmillan, July 1998), 277p., Index, Tables.
Heuser, Beatrice. "Victory in a Nuclear War? A Comparison of NATO and WTO War Aims and Strategies", Contemporary European History Vol. 7 Part 3 (November 1998), pp. 311–328.
Heuser, Beatrice. "Warsaw Pact Military Doctrines in the 70s and 80s: Findings in the East German Archives", Comparative Strategy Vol. 12 No. 4 (Oct.–Dec. 1993), pp. 437–457.
Kaplan, Fred M. The Wizards of Armageddon. New York: Simon and Schuster, 1983.
Rai Chowdhuri, Satyabrata. Nuclear Politics: Towards A Safer World, Ilford: New Dawn Press, 2004.
Rosenberg, David. "The Origins of Overkill: Nuclear Weapons and American Strategy, 1945–1960." International Security 7, 4 (Spring, 1983): 3–71.
Schelling, Thomas C. The Strategy of Conflict. Cambridge: Harvard University Press, 1960.
Smoke, Richard. National Security and the Nuclear Dilemma. 3rd ed. New York: McGraw–Hill, 1993.
References
Nuclear warfare | Nuclear strategy | Chemistry | 1,250 |
29,767,475 | https://en.wikipedia.org/wiki/Flat%20lens | A flat lens is a lens whose flat shape allows it to provide distortion-free imaging, potentially with arbitrarily-large apertures. The term is also used to refer to other lenses that provide a negative index of refraction. Flat lenses require a refractive index close to −1 over a broad angular range. In recent years, flat lenses based on metasurfaces were also demonstrated.
History
Russian physicist Victor Veselago predicted that a material with simultaneously negative electric and magnetic polarization responses would yield a negative refractive index (an isotropic refractive index of −1), a "left-handed" medium in which light propagates with opposite phase and energy velocities.
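As an illustration not drawn from the article, Snell's law applied with a refractive index of −1 shows the negative refraction underlying Veselago's "left-handed" medium: the refracted ray emerges on the same side of the surface normal as the incident ray. The following minimal Python sketch assumes ideal lossless media; the function name and example angles are illustrative.

```python
import numpy as np

def refraction_angle(theta_i_deg, n1=1.0, n2=-1.0):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t).

    For n2 = -1 the transmitted angle is the negative of the incident
    angle, i.e. the ray is bent to the same side of the surface normal,
    the hallmark of a left-handed (negative-index) medium.
    """
    theta_i = np.radians(theta_i_deg)
    return np.degrees(np.arcsin(n1 * np.sin(theta_i) / n2))

for angle in (10, 30, 60):
    print(angle, "->", round(refraction_angle(angle), 1))  # e.g. 30 -> -30.0
```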
The first, near-infrared, flat lens was announced in 2012 using nanostructured antennas. It was followed in 2013 by an ultraviolet flat lens that used a bi-metallic sandwich.
In 2014 a flat lens was announced that combined composite metamaterials and transformation optics. The lens works over a broad frequency range.
Traditional lenses
Traditional curved glass lenses can bend light coming from many angles to end up at the same focal point on a piece of photographic film or an electronic sensor. Light captured at the very edges of a curved glass lens does not line up correctly with the rest of the light, creating a fuzzy image at the edge of the frame. (Petzval field curvature and other aberrations.) To correct this, lenses use extra pieces of glass, adding bulk, complexity, and mass.
Metamaterials
Flat lenses employ metamaterials, that is, electromagnetic structures engineered on subwavelength scales to elicit tailored polarization responses.
Left-handed responses typically are implemented using resonant metamaterials composed of periodic arrays of unit cells containing inductive–capacitive resonators and conductive wires. Negative refractive indices that are isotropic in two and three dimensions at microwave frequencies have been achieved in resonant metamaterials with centimetre-scale features.
Metamaterials can image infrared, visible, and, most recently, ultraviolet wavelengths.
Types
Graphene oxide
With advances in micro- and nanofabrication techniques, continued miniaturization of conventional optical lenses is required for applications such as communications, sensors, and data storage. In particular, smaller and thinner micro lenses are needed for subwavelength optics or nano-optics with small structures, for visible and near-IR applications. As the distance scale for optical communications shrinks, the required feature sizes of micro lenses shrink.
Graphene oxide provides a route to planar focusing devices. A giant refractive index modification (as large as 10−1, one order of magnitude larger than earlier materials) between graphene oxide (GO) and reduced graphene oxide (rGO) has been demonstrated by manipulating the oxygen content with the direct laser writing (DLW) method. The overall lens thickness can potentially be reduced by more than ten times. In addition, the linear optical absorption of GO increases as its reduction deepens, which produces a transmission contrast between GO and rGO and therefore provides an amplitude modulation mechanism. Moreover, both the refractive index and the optical absorption are dispersionless over a wavelength range from the visible to the near infrared. GO film also offers flexible patterning capability with the maskless DLW method, which reduces manufacturing complexity.
An ultrathin planar lens has been fabricated on a GO thin film using the DLW method. Its advantage is that phase modulation and amplitude modulation can be achieved simultaneously, attributed respectively to the giant refractive index modulation and to the variable linear optical absorption of GO during its reduction. Owing to this enhanced wavefront-shaping capability, the lens thickness is on the subwavelength scale (~200 nm), thinner than dielectric lenses (~μm scale). The focusing intensity and the focal length can be controlled effectively by varying the laser power and the lens size, respectively. By using an oil-immersion high-numerical-aperture (NA) objective during the DLW process, a 300 nm fabrication feature size on GO film has been realized, and the minimum lens size reached 4.6 μm in diameter, the smallest planar micro lens reported; comparable sizes had otherwise only been realized with metasurfaces fabricated by focused ion beam (FIB) milling. The focal length can in turn be reduced to as small as 0.8 μm, which would potentially increase the NA and the focusing resolution.
A full-width at half-maximum (FWHM) of 320 nm at the minimum focal spot has been demonstrated experimentally with a 650 nm input beam, corresponding to an effective NA of 1.24 (n = 1.5). Furthermore, ultra-broadband focusing capability from 500 nm to as far as 2 μm has been realized with this planar lens.
Nanoantennas
The first flat lens used a thin wafer of silicon 60 nanometers thick coated with concentric rings of v-shaped gold nanoantennas to produce photographic images. The antennas refract the light so that it all ends up on a single focal plane, a so-called artificial refraction process. The antennas were surrounded by an opaque silver/titanium mask that reflected all light that did not strike the antennas. Varying the arm lengths and angle provided the required range of amplitudes and phases. The distribution of the rings controls focal length.
The refraction angle—more at the edges than in the middle—is controlled by the antennas' shape, size, and orientation. It could focus only a single near-infrared wavelength.
Nanoantennas introduce a radial distribution of phase discontinuities, thereby generating respectively spherical wavefronts and nondiffracting Bessel beams. Simulations show that such aberration-free designs are applicable to high-numerical aperture lenses such as flat microscope objectives.
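A generic way to picture the "radial distribution of phase discontinuities" is the standard hyperboloidal phase profile that any flat lens must impart to focus a normally incident plane wave to a focal length f. The sketch below is a textbook illustration with assumed parameter values, not the specific antenna design of the cited work.

```python
import numpy as np

def metalens_phase(r, wavelength, focal_length):
    """Required phase (radians, wrapped to [0, 2*pi)) at radial position r:
    phi(r) = (2*pi/lambda) * (f - sqrt(r**2 + f**2)),
    so that all paths from the lens plane to the focus have equal optical length."""
    phi = 2 * np.pi / wavelength * (focal_length - np.sqrt(r**2 + focal_length**2))
    return np.mod(phi, 2 * np.pi)

# Illustrative numbers only: 1.55 um (near-infrared) design wavelength,
# 3 mm focal length, sampled across a 60 um lens radius.
r = np.linspace(0, 60e-6, 5)
print(metalens_phase(r, 1.55e-6, 3e-3))
```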
In 2015 a refined version used an achromatic metasurface to focus different wavelengths of light at the same point, employing a dielectric material rather than a metal. This improves efficiency and can produce a consistent effect by focusing red, blue and green wavelengths at the same point to achieve instant color correction, yielding a color image. This lens does not suffer from the chromatic aberrations, or color fringing, that plague refractive lenses. As such, it does not require the additional lens elements traditionally used to compensate for this chromatic dispersion.
Bi-metallic sandwich
A bi-metallic flat lens is made of a sandwich of alternating nanometer-thick layers of silver and titanium dioxide. It consists of a stack of strongly-coupled plasmonic waveguides sustaining backward waves. It exhibits a negative index of refraction regardless of the incoming light's angle of travel. The waveguides yield an omnidirectional left-handed response for transverse magnetic polarization. Transmission through the metamaterial can be turned on and off using higher frequency light as a switch, allowing the lens to act as a shutter with no moving parts.
Membrane
Membrane optics employ plastic in place of glass to diffract rather than refract or reflect light. Concentric microscopic grooves etched into the plastic provide the diffraction.
Glass transmits light with 90% efficiency, while membrane efficiencies range from 30-55%. Membrane thickness is on the order of that of plastic wrap.
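Because membrane optics focus by diffraction from concentric grooves, their geometry is conceptually close to a Fresnel zone plate (see "Zone plate" under See also). The sketch below evaluates the textbook zone-radius formula; the wavelength and focal length are assumed, illustrative values.

```python
import math

def zone_radius(n, wavelength, focal_length):
    """Radius of the n-th zone boundary of a Fresnel zone plate:
    r_n = sqrt(n*lambda*f + (n*lambda/2)**2)."""
    return math.sqrt(n * wavelength * focal_length + (n * wavelength / 2) ** 2)

# Illustrative: 550 nm design wavelength, 10 cm focal length.
for n in range(1, 6):
    print(n, f"{zone_radius(n, 550e-9, 0.10) * 1e6:.1f} um")
```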
Holographic lenses
Holographic lenses are made from a hologram of a conventional lens. They are flat, and they retain the drawbacks of the original lens (aberrations) plus those of the hologram (diffraction).
A hologram of an idealized mathematical lens is likewise flat and has the properties of that lens, but it still has the drawbacks of the hologram (diffraction).
Geometric-phase lenses
Geometric-phase lenses, also known as polarization-directed flat lenses, are made by depositing liquid crystal polymer in a pattern to make a "holographically recorded wavefront profile". They exhibit a positive focal length for circularly polarized light of one handedness, and a negative focal length for the opposite handedness.
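The opposite focal lengths follow from the Pancharatnam–Berry (geometric) phase: circularly polarized light crossing an ideal half-wave retarder whose fast axis is at angle α flips handedness and acquires a phase of ±2α, with the sign set by the input handedness. The following minimal numerical check assumes an ideal retarder and is only an illustration of the principle.

```python
import numpy as np

def half_wave_plate(alpha):
    """Jones matrix of an ideal half-wave retarder with its fast axis at
    angle alpha (up to an overall constant phase)."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[c, s], [s, -c]], dtype=complex)

alpha = np.radians(25)
lcp = np.array([1, 1j]) / np.sqrt(2)    # left-circular input
rcp = np.array([1, -1j]) / np.sqrt(2)   # opposite handedness

out = half_wave_plate(alpha) @ lcp
geometric_phase = np.angle(np.vdot(rcp, out))
print(np.degrees(geometric_phase))      # ~50 degrees, i.e. 2 * alpha
```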
See also
Metamaterial
Superlens
Zone plate
References
External links
Aberration-Free Ultrathin Flat Lenses and Axicons at Telecom Wavelengths Based on Plasmonic Metasurfaces (full text)
Metamaterials
Photographic lens designs | Flat lens | Materials_science,Engineering | 1,707 |
1,226,822 | https://en.wikipedia.org/wiki/Transforming%20growth%20factor | Transforming growth factor (TGF) is used to describe two classes of polypeptide growth factors, TGFα and TGFβ.
The name "Transforming Growth Factor" is somewhat arbitrary, since the two classes of TGFs are not structurally or genetically related to one another, and they act through different receptor mechanisms. Furthermore, they do not always induce cellular transformation, and are not the only growth factors that induce cellular transformation.
Types
TGFα is upregulated in some human cancers. It is produced in macrophages, brain cells, and keratinocytes, and induces epithelial development. It belongs to the EGF family.
TGFβ exists in three known subtypes in humans, TGFβ1, TGFβ2, and TGFβ3. These are upregulated in Marfan's syndrome and some human cancers, and play crucial roles in tissue regeneration, cell differentiation, embryonic development, and regulation of the immune system. Isoforms of transforming growth factor-beta (TGF-β1) are also thought to be involved in the pathogenesis of pre-eclampsia. They belong to the transforming growth factor beta family. TGFβ receptors are single pass serine/threonine kinase receptors.
Function
These proteins were originally characterized by their capacity to induce oncogenic transformation in a specific cell culture system, rat kidney fibroblasts. Application of the transforming growth factors to normal rat kidney fibroblasts induces the cultured cells to proliferate and overgrow, no longer subject to the normal inhibition caused by contact between cells.
See also
Bone morphogenetic protein
TGF beta signaling pathway
Tubuloglomerular feedback
References
External links
Tumor growth factor (TGF) citations
Growth factors
Signal transduction | Transforming growth factor | Chemistry,Biology | 374 |
407,398 | https://en.wikipedia.org/wiki/1001%20%28number%29 | 1001 is the natural number following 1000 and preceding 1002.
In mathematics
One thousand and one is a sphenic number, a pentagonal number, a pentatope number and the first four-digit palindromic number. Scheherazade numbers always have 1001 as a factor.
Divisibility by 7, 11 and 13
Two properties of 1001 are the basis of a divisibility test for 7, 11 and 13. The method is along the same lines as the divisibility rule for 11 using the property 10 ≡ -1 (mod 11). The two properties of 1001 are
1001 = 7 × 11 × 13 in prime factors
10³ ≡ −1 (mod 1001)
The method simultaneously tests for divisibility by any of the factors of 1001. First, the digits of the number being tested are grouped in blocks of three. The odd numbered groups are summed. The sum of the even numbered groups is then subtracted from the sum of the odd numbered groups. The test number is divisible by 7, 11 or 13 iff the result of the summation is divisible by 7, 11 or 13 respectively.
Example:
Number under test, N = 22 872 563 219
Sum of odd groups, So = 219 + 872 = 1091
Sum of even groups, Se = 563 + 22 = 585
Total sum, S = So - Se = 1091 - 585 = 506
506 = 46 × 11
Since 506 is divisible by 11 then N is also divisible by 11. If the total sum is still too large to conveniently test for divisibility, and is longer than three digits, then the algorithm can be repeated to obtain a smaller number.
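A short Python sketch of the grouping test described above (the function name is arbitrary):

```python
def divisible_by_7_11_13(n):
    """Alternating sum of three-digit groups taken from the right; n is
    divisible by 7, 11 or 13 exactly when this sum is, because
    10**3 = -1 (mod 1001) and 1001 = 7 * 11 * 13."""
    n = abs(n)
    total, sign = 0, 1
    while n:
        total += sign * (n % 1000)   # next three-digit group from the right
        n //= 1000
        sign = -sign
    return {d: total % d == 0 for d in (7, 11, 13)}

print(divisible_by_7_11_13(22_872_563_219))
# {7: False, 11: True, 13: False}, matching the worked example above
```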
On the Windows Calculator, 1001 is the smallest positive integer whose reciprocal is automatically displayed in scientific notation: numbers larger than 10³² and non-terminating decimals smaller than 0.001 (10⁻³) are always displayed exponentially.
In other fields
In The Book of One Thousand and One Nights, Scheherazade tells her husband the king a new story every night for 1,001 nights, staving off her execution. From this, 1001 is sometimes used as a generic term for "a very large number", starting with a large number (1000) and going beyond it:
1001 uses for...
1001 ways to...
In Arabic, this is usually phrased as "one thousand things and one thing", e.g.:
The Book of One Thousand and One Nights, in Arabic Alf layla wa layla, literally "One thousand nights and a night".
1001 was the name of a popular British detergent in the 1960s, supposedly with "1001 uses".
In the Mawlawiyyah order of Sufi Islam, a novice must complete 1001 days of prayer before becoming a dada, or junior teacher of the faith.
In many cases, including the title "Thousand and One Nights", 1001 is meant to indicate a "big number", and need not be taken literally. A book published in 2007 titled 40 Days & 1001 Nights describes a journey through the Islamic world.
Among them are recent books aiming to introduce significant works in various fields:
1001 Books You Must Read Before You Die
1001 Movies You Must See Before You Die
1001 Albums You Must Hear Before You Die
There are also many film titles starting with 1001. For example:
Bugs Bunny's 3rd Movie: 1001 Rabbit Tales
The NBA draft lottery uses a lottery with 1,001 combinations by selecting four balls out of 14, then disregards the combination 11, 12, 13 and 14 to produce 1,000 outcomes.
"1001" was a hidden track on the Australian release of Two Shoes, the second album by the Cat Empire.
Buckminster Fuller called 1001 a Scheherazade number in his book Synergetics, since Scheherazade was the name of the story-telling wife in The Book of One Thousand and One Nights.
References
Integers | 1001 (number) | Mathematics | 843 |
51,919,619 | https://en.wikipedia.org/wiki/Elias%202-27 | Elias 2-27 (2MASS J16264502-2423077) is a young stellar object (YSO) with a protoplanetary disk around it, located in the Ophiuchus Molecular Cloud (ρ Oph Cld, 5 Oph Cld, Ophiuchus Dark Cloud), a star-forming region in the Ophiuchus constellation, some distance away. This star system became the first ever observed with density waves in the disk, giving it a spiral structure. Elias 2-27 is located near the double star Rho Ophiuchi (5 Ophiuchi).
Disk
In 2016, observations with the Atacama Large Millimeter Array (ALMA) radio telescope revealed that disk perturbations from density waves had organized the disk material into a pinwheel structure with sweeping spiral arms. This marks the first such observation in a protoplanetary disk, though such structures had previously been predicted. The spiral arms start at and extend out to . The disk has a 14 AU wide gap at 69 AU radius with a reduced amount of dust. The disk is very massive, at about 0.08 solar masses.
Further reading
References
Ophiuchus
M-type stars
Pre-main-sequence stars
Circumstellar disks
J16264502-2423077 | Elias 2-27 | Astronomy | 269 |
72,114,025 | https://en.wikipedia.org/wiki/Cl6b | Cl6b (μ-THTX-Cl6b) is a peptide toxin from the venom of the spider Cyriopagopus longipes. It acts as a sodium channel blocker: Cl6b significantly and persistently reduces currents through the tetrodotoxin-sensitive sodium channels NaV1.2-1.4, NaV1.6, and NaV1.7.
Structure
The Cl6b peptide has a molecular weight of 3708.9 Da. It contains 33 amino acid residues, among which six cysteines that engage in three disulfide bonds to form a structural motif known as an inhibitor cystine knot (ICK). This structure grants stability to the toxin and has been identified previously in other spider peptide toxins that share high sequence similarity to Cl6b.
Family
Simultaneously with the isolation of Cl6b, another peptide toxin known as Cl6a was characterized from the same spider species. The two Cl6 peptides share a sequence identity of 78.8%, including the six cysteines that make both peptides adopt the ICK motif.
Target
Cl6b acts as a selective sodium channel blocker.
Source in nature
Cl6b has been isolated from Cyriopagopus longipes, an Asian spider mainly found in Thailand, Cambodia, Laos, and China.
Activity mechanism
Cl6b significantly reduces currents through the tetrodotoxin-sensitive sodium channels NaV1.2, NaV1.3, NaV1.4, NaV1.6, and NaV1.7, with no effect on the tetrodotoxin-resistant sodium channels NaV1.5, NaV1.8, and NaV1.9. Cl6b exhibits a particularly high affinity for NaV1.7 channels, which are present in great numbers in nociceptors (pain neurons) located at the dorsal root ganglion. The activity of Cl6b on NaV1.7 has characteristics similar to those of previously reported NaV1.7 peptide inhibitors, such as HWTX-IV, as Cl6b binds to segments three and four of domain II, which form part of that domain's voltage sensor. The binding is high-affinity (half-maximal inhibitory concentration (IC50) of 18.80 ± 2.4 nM). It is also irreversible, which makes Cl6b a candidate for the development of long-term in-vivo analgesia.
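As an illustration only, assuming a simple one-site (Hill coefficient 1) binding model rather than the published dose–response analysis, the reported IC50 of about 18.8 nM implies the following fraction of NaV1.7 current blocked at a given toxin concentration:

```python
def fraction_blocked(concentration_nM, ic50_nM=18.8, hill=1.0):
    """Fraction of current blocked under a simple Hill/Langmuir model:
    c**h / (c**h + IC50**h)."""
    return concentration_nM**hill / (concentration_nM**hill + ic50_nM**hill)

for c in (5, 18.8, 100):
    print(f"{c} nM -> {fraction_blocked(c):.2f} blocked")
# by definition, a concentration equal to the IC50 blocks half the current
```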
References
Neurotoxins
Spider toxins
Sodium channel blockers
Ion channel toxins | Cl6b | Chemistry | 531 |
49,978,676 | https://en.wikipedia.org/wiki/A-1%20holin%20family | The Actinobacterial 1 TMS Holin (A-1 Holin) Family (TC# 1.E.32) consists of proteins found in actinobacteria, their conjugative plasmids, and their phage. They are usually between 90 and 140 amino acyl residues (aas) in length and exhibit one or sometimes even two transmembrane segments despite the family's name (e.g., TC# 1.E.32.2.1). Although some are annotated as phage proteins or holins, members of the A-1 family have not yet been functionally characterized. A representative list of proteins belonging to the A-1 Holin family can be found in the Transporter Classification Database (TCDB).
See also
Holin
Lysin
Transporter Classification Database
Further reading
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins
Holins | A-1 holin family | Biology | 198 |
26,869,391 | https://en.wikipedia.org/wiki/2MASS%20J0441%2B2301 | 2MASS J0441+2301 (abbreviated as 2M 0441+23) is a young quadruple system hosting a planetary-mass object, a red dwarf star and two brown dwarfs, approximately 470 light years (145 parsecs) away.
The 2MASS J04414489+2301513 Bab (abbreviated as 2M J044144) primary (a brown dwarf) has a large separation (12.4 arcseconds) companion, 2MASS J04414565+2301580 Aab (abbreviated as 2M J044145), which in turn has a nearby small separation substellar companion (separation of 0.23 arcseconds to the northeast). 2M J044145 has similar proper motion to 2M J044144 and is likely physically associated with the system. The entire system of 4 objects is then a hierarchical quadruple of two binary objects orbiting each other. The primary component Aa has a spectral type of M4.5 and a red apparent magnitude of 14.2. Both components seem to be accreting mass from their stellar disks, as shown by their emission lines. The four objects have a total mass of only 26% of the Sun, making it the quadruple star system with the lowest mass known.
Planetary system
The primary is orbited by a companion about 5–10 times the mass of Jupiter. The mass of the primary brown dwarf is roughly 20 times the mass of Jupiter and its age is roughly one million years. It is not clear whether this companion object is a sub-brown dwarf or a planet. The companion is very large with respect to its parent and must have formed within about 1 million years. This seems too big and too fast for it to have formed like a regular planet from a disk around the central object. It also fails the mass ratio criterion of the IAU working definition of an exoplanet, since the companion-to-primary mass ratio exceeds 1/25. It is nevertheless listed as a planet by the NASA Exoplanet Archive and the Extrasolar Planets Encyclopaedia.
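A quick arithmetic check with the rounded masses quoted above shows how far the companion falls outside the 1/25 mass-ratio criterion:

```python
# Companion-to-primary mass ratio for the quoted masses (in Jupiter masses).
primary = 20.0
for companion in (5.0, 10.0):
    ratio = companion / primary
    verdict = "exceeds" if ratio > 1 / 25 else "satisfies"
    print(f"{companion:.0f} M_J / {primary:.0f} M_J = {ratio:.2f} {verdict} the 1/25 limit")
```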
See also
2MASS
SCR 1845−6357
2M1207
HR 8799
References
4
Brown dwarfs
Taurus (constellation)
J04414489+2301513
M-type brown dwarfs
Planetary systems with one confirmed planet
TIC objects
T Tauri stars | 2MASS J0441+2301 | Astronomy | 502 |
36,344,470 | https://en.wikipedia.org/wiki/Centre%20for%20Marine%20Living%20Resources%20%26%20Ecology | The Centre for Marine Living Resources & Ecology (CMLRE) is a research institute in Kochi, Kerala under the Ministry of Earth Sciences, Government of India with a mandate to study the marine living resources. Today, apart from implementing various research projects of the ministry, the institute also manages and operates the Fishery Oceanographic Research Vessel (FORV) Sagar Sampada.
History
The institute has its origins in the Sagar Sampada cell, which was established under the then Department of Ocean Development, DOD (upgraded to the Ministry of Earth Sciences in 2006) for managing and co-ordinating activities of FORV Sagar Sampada. During the beginning of the 9th Five Year Plan of the Government of India in 1998, the Marine Living Resources Programme (MLR Programme) was formulated by the DOD with a view of promoting ocean development activities in the country which inter-alia include mapping of the living resources, preparing inventory of commercially exploitable living marine resources, their optimum utilization through ecosystem management and R & D in basic sciences on Marine Living Resources & Ecology. With the objective of implementing this programme, the Sagar Sampada cell was upgraded to the Centre for Marine Living Resources and Ecology. To this date, the research vessel Sagar Sampada serves as the backbone of the MLR research activities co-ordinated by CMLRE.
During the 9th five-year plan (1998–2002), the Centre co-ordinated the first systematic study of marine life along the Indian shelf waters, along the eastern and western coasts of India. The environmental characteristics of this region and its phytoplankton, zooplankton, marine benthos, fishery resources, etc. were systematically characterized for the first time. During the 10th five-year plan (2002–2007) the exploration was extended to the continental slope regions, particularly in the case of marine benthos and fisheries. Research thrust was also placed on studies of harmful algal blooms and marine mammals around the Indian subcontinent. The environmental and productivity patterns around the Indian EEZ continued to be monitored, and research on the productivity and fishery resources of the Andaman and Nicobar regions was also carried out. In January 2005, after the devastating 2004 tsunami, the institute, along with the National Institute of Oceanography and the School of Marine Sciences, Cochin University of Science and Technology (CUSAT), Kochi, carried out one of the first scientific studies on the impact of the tsunami on marine life. In the 9th and 10th plan periods, the CMLRE served chiefly as a co-ordinating and fund-granting agency, managing the projects that were granted to various other research institutes and universities.
During the 11th five-year plan (2007-2012), the CMLRE established in-house R&D activities, apart from co-ordination of projects in other institutes. In this period, focus was placed on continued monitoring of the pelagic environment and productivity, marine benthos, harmful algal blooms, studies on reproduction and recruitment of sardines in the south-east Arabian Sea, deep-sea fisheries and myctophid resources in the Indian EEZ. Several projects during this period also focused on isolating and identifying bioactive compounds from marine organisms.
Institute Mandate
The Centre has the following mandate:
To develop management strategies for marine living resources through Ecosystem monitoring and modelling efforts.
Evolving, coordinating and implementing time-targeted national/regional R&D programmes in the field of marine living resources and ecology through effective utilization of the Fishery and Oceanographic Research Vessel Sagar Sampada.
Strengthening of research on marine living resources and Ecology including establishment of a data centre for storage and dissemination of data/ information to end users.
Coordinating the national programmes relating to Southern Ocean Living Resources (Antarctic marine living resources).
Association with other Institutions
Institutes associated with the MLR research programme through fund-granting from CMLRE (past and present) include
National Institute of Oceanography
Central Marine Fisheries Research Institute, Kochi
Central Institute of Fisheries Technology, Kochi
Cochin University of Science & Technology, Kochi - several departments including the Department of Marine Biology, Microbiology & Biochemistry, the School of Industrial Fisheries and Department of Electronics
Annamalai University - Centre for Advanced Studies in Marine Biology, Parangipettai
Andhra University - Department of Marine Living Resources, Visakhapatnam
Pondicherry University - Department of Ocean Studies and Marine Biology, Port Blair
References
External links
Official website
Ministry of Earth Sciences, website
Research institutes in Kochi
Marine biology
Biological research institutes
Research institutes established in 1998
1998 establishments in Kerala
Ecology
Colleges affiliated to Andhra University
Ministry of Earth Sciences | Centre for Marine Living Resources & Ecology | Biology | 928 |
2,526,916 | https://en.wikipedia.org/wiki/Isotopes%20of%20thulium | Naturally occurring thulium (69Tm) is composed of one stable isotope, 169Tm (100% natural abundance). Thirty-nine radioisotopes have been characterized, with the most stable being 171Tm with a half-life of 1.92 years, 170Tm with a half-life of 128.6 days, 168Tm with a half-life of 93.1 days, and 167Tm with a half-life of 9.25 days. All of the remaining radioactive isotopes have half-lives that are less than 64 hours, and the majority of these have half-lives that are less than 2 minutes. This element also has 26 meta states, with the most stable being 164mTm (t1/2 5.1 minutes), 160mTm (t1/2 74.5 seconds) and 155mTm (t1/2 45 seconds).
The known isotopes of thulium range from 144Tm to 183Tm. The primary decay mode before the most abundant stable isotope, 169Tm, is electron capture, and the primary mode after is beta emission. The primary decay products before 169Tm are erbium isotopes, and the primary products after are ytterbium isotopes. All isotopes of thulium are either radioactive or, in the case of 169Tm, observationally stable, meaning that 169Tm is predicted to be radioactive but no actual decay has been observed.
List of isotopes
|-id=Thulium-144
| 144Tm
| style="text-align:right" | 69
| style="text-align:right" | 75
| 143.97621(43)#
| 2.3(9) μs
| p
| 143Er
| (10+)
|
|-id=Thulium-145
| 145Tm
| style="text-align:right" | 69
| style="text-align:right" | 76
| 144.97039(21)#
| 3.17(20) μs
| p
| 144Er
| (11/2−)
|
|-id=Thulium-146
| 146Tm
| style="text-align:right" | 69
| style="text-align:right" | 77
| 145.96666(22)#
| 155(20) ms
| p
| 145Er
| (1+)
|
|-id=Thulium-146m1
| style="text-indent:1em" | 146m1Tm
| colspan="3" style="text-indent:2em" | 304(6) keV
| 73(7) ms
| p
| 145Er
| (5−)
|
|-id=Thulium-146m2
| style="text-indent:1em" | 146m2Tm
| colspan="3" style="text-indent:2em" | 437(7) keV
| 200(3) ms
| p
| 145Er
| (10+)
|
|-id=Thulium-147
| rowspan=2|147Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 78
| rowspan=2|146.9613799(73)
| rowspan=2|0.58(3) s
| β+ (85%)
| 147Er
| rowspan=2|11/2−
| rowspan=2|
|-
| p (15%)
| 146Er
|-id=Thulium-147m
| style="text-indent:1em" | 147mTm
| colspan="3" style="text-indent:2em" | 62(5) keV
| 360(40) μs
| p
| 146Er
| 3/2+
|
|-id=Thulium-148
| 148Tm
| style="text-align:right" | 69
| style="text-align:right" | 79
| 147.958384(11)
| 0.7(2) s
| β+
| 148Er
| (10+)
|
|-id=Thulium-149
| rowspan=2|149Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 80
| rowspan=2|148.95283(22)#
| rowspan=2|0.9(2) s
| β+ (99.74%)
| 149Er
| rowspan=2|11/2−
| rowspan=2|
|-
| β+, p (0.26%)
| 148Ho
|-id=Thulium-150
| 150Tm
| style="text-align:right" | 69
| style="text-align:right" | 81
| 149.95009(21)#
| 3# s
| β+
| 150Er
| (1+)
|
|-id=Thulium-150m1
| rowspan=2 style="text-indent:1em" | 150m1Tm
| rowspan=2 colspan="3" style="text-indent:2em" | 140(140)# keV
| rowspan=2|2.20(6) s
| β+ (98.9%)
| 150Er
| rowspan=2|(6−)
| rowspan=2|
|-
| β+, p (1.1%)
| 149Ho
|-id=Thulium-150m2
| style="text-indent:1em" | 150m2Tm
| colspan="3" style="text-indent:2em" | 811(140)# keV
| 5.2(3) ms
| IT
| 150m1Tm
| 10+#
|
|-id=Thulium-151
| 151Tm
| style="text-align:right" | 69
| style="text-align:right" | 82
| 150.945494(21)
| 4.17(11) s
| β+
| 151Er
| (11/2−)
|
|-id=Thulium-151m1
| style="text-indent:1em" | 151m1Tm
| colspan="3" style="text-indent:2em" | 93(6) keV
| 6.6(20) s
| β+
| 151Er
| (1/2+)
|
|-id=Thulium-151m2
| style="text-indent:1em" | 151m2Tm
| colspan="3" style="text-indent:2em" | 2655.67(22) keV
| 451(34) ns
| IT
| 151Tm
| (27/2−)
|
|-id=Thulium-152
| 152Tm
| style="text-align:right" | 69
| style="text-align:right" | 83
| 151.944476(58)
| 8.0(10) s
| β+
| 152Er
| (2)−
|
|-id=Thulium-152m1
| style="text-indent:1em" | 152m1Tm
| colspan="3" style="text-indent:2em" | −100(250) keV
| 5.2(6) s
| β+
| 152Er
| (9)+
|
|-id=Thulium-152m2
| style="text-indent:1em" | 152m2Tm
| colspan="3" style="text-indent:2em" | 2455(250) keV
| 301(7) ns
| IT
| 152Tm
| (17+)
|
|-id=Thulium-153
| rowspan=2|153Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 84
| rowspan=2|152.942058(13)
| rowspan=2|1.48(1) s
| α (91%)
| 149Ho
| rowspan=2|(11/2−)
| rowspan=2|
|-
| β+ (9%)
| 153Er
|-id=Thulium-153m
| rowspan=2 style="text-indent:1em" | 153mTm
| rowspan=2 colspan="3" style="text-indent:2em" | 43.2(2) keV
| rowspan=2|2.5(2) s
| α (92%)
| 149Ho
| rowspan=2|(1/2+)
| rowspan=2|
|-
| β+ (8%)
| 153Er
|-id=Thulium-154
| rowspan=2|154Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 85
| rowspan=2|153.941570(15)
| rowspan=2|8.1(3) s
| β+ (54%)
| 154Er
| rowspan=2|(2)−
| rowspan=2|
|-
| α (46%)
| 150Ho
|-id=Thulium-154m
| rowspan=2 style="text-indent:1em" | 154mTm
| rowspan=2 colspan="3" style="text-indent:2em" | 70(50) keV
| rowspan=2|3.30(7) s
| α (58%)
| 150Ho
| rowspan=2|(9)+
| rowspan=2|
|-
| β+ (42%)
| 154Er
|-id=Thulium-155
| rowspan=2|155Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 86
| rowspan=2|154.939210(11)
| rowspan=2|21.6(2) s
| β+ (99.17%)
| 155Er
| rowspan=2|11/2−
| rowspan=2|
|-
| α (0.83%)
| 151Ho
|-id=Thulium-155m
| style="text-indent:1em" | 155mTm
| colspan="3" style="text-indent:2em" | 41(6) keV
| 45(4) s
| β+
| 155Er
| 1/2+
|
|-id=Thulium-156
| rowspan=2|156Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 87
| rowspan=2|155.938986(15)
| rowspan=2|83.8(18) s
| β+ (99.94%)
| 156Er
| rowspan=2|2−
| rowspan=2|
|-
| α (0.064%)
| 152Er
|-id=Thulium-156m
| style="text-indent:1em" | 156mTm
| colspan="3" style="text-indent:2em" | 400(200)# keV
| ~400 ns
| IT
| 156Tm
| (11−)
|
|-id=Thulium-157
| rowspan=2|157Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 88
| rowspan=2|156.936973(30)
| rowspan=2|3.63(9) min
| β+
| 157Er
| rowspan=2|1/2+
| rowspan=2|
|-
| α (7.5%)
| 153Er
|-id=Thulium-157m
| style="text-indent:1em" | 157mTm
| colspan="3" style="text-indent:2em" | 100(50)# keV
| 1.6 s
|
|
| 7/2−#
|
|-id=Thulium-158
| 158Tm
| style="text-align:right" | 69
| style="text-align:right" | 89
| 157.936980(27)
| 3.98(6) min
| β+
| 158Er
| 2−
|
|-id=Thulium-158m
| style="text-indent:1em" | 158mTm
| colspan="3" style="text-indent:2em" | 100(50)# keV
| ~20 s
|
|
| 5−#
|
|-id=Thulium-159
| 159Tm
| style="text-align:right" | 69
| style="text-align:right" | 90
| 158.934975(30)
| 9.13(16) min
| β+
| 159Er
| 5/2+
|
|-id=Thulium-160
| 160Tm
| style="text-align:right" | 69
| style="text-align:right" | 91
| 159.935264(35)
| 9.4(3) min
| β+
| 160Er
| 1−
|
|-id=Thulium-160m1
| rowspan=2 style="text-indent:1em" | 160m1Tm
| rowspan=2 colspan="3" style="text-indent:2em" | 67(14) keV
| rowspan=2|74.5(15) s
| IT (85%)
| 160Tm
| rowspan=2|(5+)
| rowspan=2|
|-
| β+ (15%)
| 160Er
|-id=Thulium-160m2
| style="text-indent:1em" | 160m2Tm
| colspan="3" style="text-indent:2em" | 215(52)# keV
| ~200 ns
| IT
| 160Tm
| (8)
|
|-id=Thulium-161
| 161Tm
| style="text-align:right" | 69
| style="text-align:right" | 92
| 160.933549(30)
| 30.2(8) min
| β+
| 161Er
| 7/2+
|
|-id=Thulium-161m1
| style="text-indent:1em" | 161m1Tm
| colspan="3" style="text-indent:2em" | 7.51(24) keV
| 5# min
|
|
| (1/2+)
|
|-id=Thulium-161m2
| style="text-indent:1em" | 161m2Tm
| colspan="3" style="text-indent:2em" | 78.20(3) keV
| 110(3) ns
| IT
| 161Tm
| 7/2−
|
|-id=Thulium-162
| 162Tm
| style="text-align:right" | 69
| style="text-align:right" | 93
| 161.934001(28)
| 21.70(19) min
| β+
| 162Er
| 1−
|
|-id=Thulium-162m
| rowspan=2 style="text-indent:1em" | 162mTm
| rowspan=2 colspan="3" style="text-indent:2em" | 130(40) keV
| rowspan=2|24.3(17) s
| IT (81%)
| 162Tm
| rowspan=2|5+
| rowspan=2|
|-
| β+ (19%)
| 162Er
|-id=Thulium-163
| 163Tm
| style="text-align:right" | 69
| style="text-align:right" | 94
| 162.9326583(59)
| 1.810(5) h
| β+
| 163Er
| 1/2+
|
|-id=Thulium-163m
| style="text-indent:1em" | 163mTm
| colspan="3" style="text-indent:2em" | 86.92(5) keV
| 380(30) ns
| IT
| 163Tm
| (7/2)−
|
|-id=Thulium-164
| rowspan=2|164Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 95
| rowspan=2|163.933538(27)
| rowspan=2|2.0(1) min
| EC (61%)
| rowspan=2|164Er
| rowspan=2|1+
| rowspan=2|
|-
| β+ (39%)
|-id=Thulium-164m
| rowspan=2 style="text-indent:1em" | 164mTm
| rowspan=2 colspan="3" style="text-indent:2em" | 20(12) keV
| rowspan=2|5.1(1) min
| IT (~80%)
| 164Tm
| rowspan=2|6−
| rowspan=2|
|-
| β+ (~20%)
| 164Er
|-id=Thulium-165
| 165Tm
| style="text-align:right" | 69
| style="text-align:right" | 96
| 164.9324418(18)
| 30.06(3) h
| β+
| 165Er
| 1/2+
|
|-id=Thulium-165m1
| style="text-indent:1em" | 165m1Tm
| colspan="3" style="text-indent:2em" | 80.37(6) keV
| 80(3) μs
| IT
| 165Tm
| 7/2+
|
|-id=Thulium-165m2
| style="text-indent:1em" | 165m2Tm
| colspan="3" style="text-indent:2em" | 160.47(6) keV
| 9.0(5) μs
| IT
| 165Tm
| 7/2−
|
|-id=Thulium-166
| 166Tm
| style="text-align:right" | 69
| style="text-align:right" | 97
| 165.933562(12)
| 7.70(3) h
| β+
| 166Er
| 2+
|
|-id=Thulium-166m1
| style="text-indent:1em" | 166m1Tm
| colspan="3" style="text-indent:2em" | 122(7) keV
| 348(21) ms
| IT
| 166Tm
| (6−)
|
|-id=Thulium-166m2
| style="text-indent:1em" | 166m2Tm
| colspan="3" style="text-indent:2em" | 244(7) keV
| 2(1) μs
| IT
| 166Tm
| (6−)
|
|-id=Thulium-167
| 167Tm
| style="text-align:right" | 69
| style="text-align:right" | 98
| 166.9328572(14)
| 9.25(2) d
| EC
| 167Er
| 1/2+
|
|-id=Thulium-167m1
| style="text-indent:1em" | 167m1Tm
| colspan="3" style="text-indent:2em" | 179.480(19) keV
| 1.16(6) μs
| IT
| 167Tm
| 7/2+
|
|-id=Thulium-167m2
| style="text-indent:1em" | 167m2Tm
| colspan="3" style="text-indent:2em" | 292.820(20) keV
| 0.9(1) μs
| IT
| 167Tm
| 7/2−
|
|-id=Thulium-168
| rowspan=2|168Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 99
| rowspan=2|167.9341785(18)
| rowspan=2|93.1(2) d
| β+ (99.99%)
| 168Er
| rowspan=2|3+
| rowspan=2|
|-
| β− (0.010%)
| 168Yb
|-id=Thulium-169
| 169Tm
| style="text-align:right" | 69
| style="text-align:right" | 100
| 168.93421896(79)
| colspan=3 align=center|Observationally Stable
| 1/2+
| 1.0000
|-id=Thulium-169m
| style="text-indent:1em" | 169mTm
| colspan="3" style="text-indent:2em" | 316.1463(1) keV
| 659.9(23) ns
| IT
| 169Tm
| 7/2+
|
|-
| rowspan=2|170Tm
| rowspan=2 style="text-align:right" | 69
| rowspan=2 style="text-align:right" | 101
| rowspan=2|169.93580709(79)
| rowspan=2|128.6(3) d
| β− (99.87%)
| 170Yb
| rowspan=2|1−
| rowspan=2|
|-
| EC (0.131%)
| 170Er
|-id=Thulium-170m
| style="text-indent:1em" | 170mTm
| colspan="3" style="text-indent:2em" | 183.197(4) keV
| 4.12(13) μs
| IT
| 170Tm
| 3+
|
|-id=Thulium-171
| 171Tm
| style="text-align:right" | 69
| style="text-align:right" | 102
| 170.9364352(10)
| 1.92(1) y
| β−
| 171Yb
| 1/2+
|
|-id=Thulium-171m1
| style="text-indent:1em" | 171m1Tm
| colspan="3" style="text-indent:2em" | 424.9557(15) keV
| 2.60(2) μs
| IT
| 171Tm
| 7/2−
|
|-id=Thulium-171m2
| style="text-indent:1em" | 171m2Tm
| colspan="3" style="text-indent:2em" | 1674.43(13) keV
| 1.7(2) μs
| IT
| 171Tm
| 19/2+
|
|-id=Thulium-172
| 172Tm
| style="text-align:right" | 69
| style="text-align:right" | 103
| 171.9384070(59)
| 63.6(3) h
| β−
| 172Yb
| 2−
|
|-id=Thulium-172m
| style="text-indent:1em" | 172mTm
| colspan="3" style="text-indent:2em" | 476.2(2) keV
| 132(7) μs
| IT
| 172Tm
| (6+)
|
|-id=Thulium-173
| 173Tm
| style="text-align:right" | 69
| style="text-align:right" | 104
| 172.9396066(47)
| 8.24(8) h
| β−
| 173Yb
| (1/2+)
|
|-id=Thulium-173m1
| style="text-indent:1em" | 173m1Tm
| colspan="3" style="text-indent:2em" | 317.73(20) keV
| 10.7(17) μs
| IT
| 173Tm
| 7/2−
|
|-id=Thulium-173m2
| style="text-indent:1em" | 173m2Tm
| colspan="3" style="text-indent:2em" | 1905.7(4) keV
| 250(69) ns
| IT
| 173Tm
| 19/2−
|
|-id=Thulium-173m3
| style="text-indent:1em" | 173m3Tm
| colspan="3" style="text-indent:2em" | 4047.9(5) keV
| 121(28) ns
| IT
| 173Tm
| 35/2−
|
|-id=Thulium-174
| 174Tm
| style="text-align:right" | 69
| style="text-align:right" | 105
| 173.942174(48)
| 5.4(1) min
| β−
| 174Yb
| 4−
|
|-id=Thulium-174m1
| rowspan=2 style="text-indent:1em" | 174m1Tm
| rowspan=2 colspan="3" style="text-indent:2em" | 252.4(7) keV
| rowspan=2|2.29(1) s
| IT (>98.5%)
| 174Tm
| rowspan=2|0+
| rowspan=2|
|-
| β− (<1.5%)
| 174Yb
|-id=Thulium-174m2
| style="text-indent:1em" | 174m2Tm
| colspan="3" style="text-indent:2em" | 2091.7(3) keV
| 106(7) μs
| IT
| 174Tm
| 14−
|
|-id=Thulium-175
| 175Tm
| style="text-align:right" | 69
| style="text-align:right" | 106
| 174.943842(54)
| 15.2(5) min
| β−
| 175Yb
| (1/2)+
|
|-id=Thulium-175m1
| style="text-indent:1em" | 175m1Tm
| colspan="3" style="text-indent:2em" | 440.0(11) keV
| 319(35) ns
| IT
| 175Tm
| 7/2−
|
|-id=Thulium-175m2
| style="text-indent:1em" | 175m2Tm
| colspan="3" style="text-indent:2em" | 1517.7(12) keV
| 21(14) μs
| IT
| 175Tm
| 23/2+
|
|-id=Thulium-176
| 176Tm
| style="text-align:right" | 69
| style="text-align:right" | 107
| 175.94700(11)
| 1.85(3) min
| β−
| 176Yb
| (4+)
|
|-id=Thulium-177
| 177Tm
| style="text-align:right" | 69
| style="text-align:right" | 108
| 176.94893(22)#
| 95(7) s
| β−
| 177Yb
| 1/2+#
|
|-id=Thulium-177m
| style="text-indent:1em" | 177mTm
| colspan="3" style="text-indent:2em" | 100(100)# keV
| 77(11) s
| β−
| 177Yb
| 7/2−#
|
|-id=Thulium-178
| 178Tm
| style="text-align:right" | 69
| style="text-align:right" | 109
| 177.95251(32)#
| 10# s[>300 ns]
|
|
| 1−#
|
|-id=Thulium-179
| 179Tm
| style="text-align:right" | 69
| style="text-align:right" | 110
| 178.95502(43)#
| 18# s[>300 ns]
|
|
| 1/2+#
|
|-id=Thulium-180
| 180Tm
| style="text-align:right" | 69
| style="text-align:right" | 111
| 179.95902(43)#
| 3# s[>300 ns]
|
|
|
|
|-id=Thulium-181
| 181Tm
| style="text-align:right" | 69
| style="text-align:right" | 112
| 180.96195(54)#
| 7# s[>300 ns]
|
|
| 1/2+#
|
|-id=Thulium-182
| 182Tm
| style="text-align:right" | 69
| style="text-align:right" | 113
| 181.96619(54)#
|
|
|
|
|
|-id=Thulium-183
| 183Tm
| style="text-align:right" | 69
| style="text-align:right" | 114
|
|
|
|
|
|
Thulium-170
Thulium-170 has a half-life of 128.6 days, decaying by β− decay about 99.87% of the time and electron capture the remaining 0.13% of the time. Due to its low-energy X-ray emissions, it has been proposed for radiotherapy and as a source in a radiothermal generator.
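As a worked illustration of the 128.6-day half-life (the time points below are arbitrary examples), the fraction of a 170Tm sample remaining after a time t follows simple exponential decay:

```python
import math

def fraction_remaining(t_days, half_life_days=128.6):
    """Fraction of an initial 170Tm sample remaining after t_days:
    N(t)/N0 = exp(-ln(2) * t / T_half)."""
    return math.exp(-math.log(2) * t_days / half_life_days)

for t in (30, 128.6, 365):
    print(f"after {t} days: {fraction_remaining(t):.3f} remaining")
# one half-life (128.6 d) leaves 0.500; one year leaves roughly 0.14
```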
References
Isotope masses from:
Half-life, spin, and isomer data selected from the following sources.
Thulium
Thulium | Isotopes of thulium | Chemistry | 6,748 |
11,271,178 | https://en.wikipedia.org/wiki/Transient%20state | In systems theory, a system is said to be transient or in a transient state when a process variable or variables have been changed and the system has not yet reached a steady state. In electrical engineering, the time taken for an electronic circuit to change from one steady state to another steady state is called the transient time.
Examples
Chemical engineering
When a chemical reactor is being brought into operation, the concentrations, temperatures, species compositions, and reaction rates are changing with time until operation reaches its nominal process variables.
Electrical engineering
When a switch is closed in an electrical circuit containing a capacitor or inductor, the component resists the resulting change in voltage or current, so the circuit takes a finite amount of time to reach a new steady state. This period of time is known as the transient state.
A capacitor acts as a short circuit immediately after the switch is closed, increasing its impedance during the transient state until it acts as an open circuit in its steady state.
An inductor is the opposite, behaving as an open circuit until reaching a short circuit steady state.
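A minimal sketch of the first-order RC step response behind the capacitor behaviour described above; the source voltage and component values are illustrative assumptions:

```python
import math

def rc_step_response(t, v_source, r_ohms, c_farads):
    """Capacitor voltage after a DC source is switched onto a series RC
    circuit at t = 0: v(t) = Vs * (1 - exp(-t / (R*C)))."""
    tau = r_ohms * c_farads
    return v_source * (1 - math.exp(-t / tau))

# Illustrative values: 5 V source, 1 kOhm resistor, 100 uF capacitor -> tau = 0.1 s.
for t in (0.0, 0.1, 0.5):
    print(f"t = {t:.1f} s: v = {rc_step_response(t, 5.0, 1e3, 100e-6):.2f} V")
# After roughly five time constants the capacitor voltage has settled and
# the circuit has reached its new steady state.
```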
See also
Attractor
Carrying capacity
Control theory
Dynamical system
Ecological footprint
Economic growth
Engine test stand
Equilibrium point
List of types of equilibrium
Evolutionary economics
Growth curve
Herman Daly
Homeostasis
Lead-lag compensator
Limit cycle
Limits to Growth
Population dynamics
Race condition
Simulation
State function
Steady state
Steady state economy
Steady State theory
Systems theory
Thermodynamic equilibrium
Transient modelling
Transient response
References
Chemical process engineering
Electrical engineering
Systems theory
Control theory | Transient state | Chemistry,Mathematics,Engineering | 305 |
52,431,501 | https://en.wikipedia.org/wiki/NGC%20376 | NGC 376 is a young open cluster of stars in the southern constellation of Tucana. It was discovered on September 2, 1826, by Scottish astronomer James Dunlop. Dreyer, a Danish/British astronomer, described it as a "globular cluster, bright, small, round." It is irregular in form, with a central spike.
The cluster is located in the eastern extension of the Small Magellanic Cloud (SMC), a nearby dwarf galaxy. It may have already lost 90% of its original mass and is in the process of dissolving into the SMC. As a result, it has achieved a relatively low concentration of stars and is no longer in dynamic equilibrium. The cluster is about 28 million years old and contains ~3,400 times the mass of the Sun. It has a core radius of and a tidal radius of .
References
External links
0376
Small Magellanic Cloud
18260902
Tucana
Open clusters | NGC 376 | Astronomy | 195 |
63,711,571 | https://en.wikipedia.org/wiki/Japanese%20Navy%20Signal%20Flags | The Japanese Navy Signal Flags are a set of maritime signal flags for conveying messages in the Japanese language. The system generally uses the standard International Signal Flags, assigning both the letter, number and repeater flags to various kana, roughly following Iroha order for the standard letter flags. It also has several unique flags for some kana, as well as for conveying non-alphabetic messages.
Maritime flags
Maritime signalling
Nonverbal communication
Optical communications
Signal flags
Encodings of Japanese | Japanese Navy Signal Flags | Engineering | 100 |
73,305,281 | https://en.wikipedia.org/wiki/Green%20solvent | Green solvents are environmentally friendly chemical solvents that are used as a part of green chemistry. They came to prominence in 2015, when the UN defined a new sustainability-focused development plan based on 17 sustainable development goals, recognizing the need for green chemistry and green solvents for a more sustainable future. Green solvents are developed as more environmentally friendly solvents, derived from the processing of agricultural crops or otherwise sustainable methods as alternatives to petrochemical solvents. Some of the expected characteristics of green solvents include ease of recycling, ease of biodegradation, and low toxicity.
Examples
Water
Although not an organic solvent, water is an attractive solvent because it is non-toxic and renewable. It is a useful solvent in many industrial processes. Traditional organic solvents can sometimes be replaced by aqueous preparations. Water-based coatings have largely replaced standard petroleum-based paints in the construction industry; however, solvent-based anti-corrosion paints remain among the most used today.
Supercritical water (SCW) is obtained at a temperature of 374.2 °C and a pressure of 22.05 MPa. It behaves as a dense gas with a dissolving power equivalent to that of organic solvents of low polarity. However, the solubility of inorganic salts in SCW is radically reduced. SCW is used as a reaction medium, especially in oxidation processes for the destruction of toxic substances such as those found in industrial aqueous effluents. The use of supercritical water has two main technical challenges, namely corrosion and salt deposition.
Supercritical carbon dioxide
Supercritical carbon dioxide (CO2) is the most commonly used supercritical fluid because it is relatively easy to use. Temperatures above 31 °C and pressures above 7.38 MPa are sufficient to obtain supercriticality, at which point it behaves as a good nonpolar solvent.
Alcohols and esters
Ethanol is used in toiletries, cosmetics, some cleaners, and coatings. Bioethanol, made industrially by fermentation of sugars, starch, and cellulose, is widely available. Biobutanol (butyl alcohol, various isomers) is also produced by fermentation of sugars. Tetrahydrofurfuryl alcohol (THFA) is a specialty solvent that may be obtained from hemicellulose.
Ethyl lactate, made from lactic acid obtained from corn starch, is notably used as a mixture with other solvents in some paint strippers and cleaners. Ethyl lactate has replaced solvents such as toluene, acetone, and xylene in some applications.
Lipid-derived solvents
Lipids (triglycerides) themselves can be used as solvents, but are mostly hydrolyzed to fatty acids and glycerol (glycerin). Fatty acids can be esterified with an alcohol to give fatty acid esters, e.g., FAMEs (fatty acid methyl esters) if the esterification is performed with methanol. Usually derived from natural gas or petroleum, the methanol used to produce FAMEs can also be obtained by other routes, including gasification of biomass and household hazardous waste. Glycerol from lipid hydrolysis can be used as a solvent in synthetic chemistry, as can some of its derivatives.
Deep eutectic solvents
Deep eutectic solvents (DES) have low melting points and can be cheap, safe, and useful in industry. Their densities span a wide range, from 0.8889 g·cm−3 for octylammonium bromide/decanoic acid (1:2 molar ratio), lower than that of water, up to 1.4851 g·cm−3 for choline chloride/trifluoroacetamide (1:2). Their miscibility is also composition-dependent.
A mixture whose melting point is lower than that of the constituents is called an eutectic mixture. Many such mixtures can be used as solvents, especially when the melting-point depression is very large, hence the term deep eutectic solvent (DES). One of the most commonly used substances to obtain DES is the ammonium salt choline chloride. Smith, Abbott, and Ryder report that a mixture of urea (melting point: 133 °C) and choline chloride (melting point: 302 °C) in a 2:1 molar ratio has a melting point of 12 °C.
Natural deep eutectic solvents (NADES) are also a research area relevant to green chemistry, being easy to produce from two low-cost and well-known ecotoxicity components, a hydrogen-bond acceptor, and a hydrogen-bond donor.
Terpenes
Solvents in a diverse class of natural substances called terpenes are obtained by extraction from certain parts of plants. Structurally, all terpenes are built from multiples of isoprene units and share the general formula (C5H8)n.
D-limonene, a monoterpene, is one of the best known solvents in this class, as is turpentine.
D-limonene is extracted from citrus peels while turpentine is obtained from pine trees (sap, stump) and as a by-product of the Kraft paper-making process (Sell, 2006).
Turpentine is a mixture of terpenes whose composition varies according to its origin and production method. In Canada and the United States, a range of mass concentrations of 40 to 65% α-pinene, 20 to 35% β-pinene, and 2 to 20% d-limonene are found.
α-pinene can replace n-hexane for the extraction of vegetable oil, and as a substitute solvent for extracting molecules such as carotenoids used as food additives.
Turpentine, formerly used as a solvent in organic coatings, is now largely replaced by petroleum hydrocarbons. Nowadays, it is mainly used as a source of its constituents, including α-pinene and β-pinene.
Ionic liquids
Ionic liquids are molten organic salts that are generally fluid at room temperature. Frequently used cations include imidazolium, pyridinium, ammonium, and phosphonium; common anions include halides, tetrafluoroborate, hexafluorophosphate, and nitrate. Bubalo et al. (2015) argue that ionic liquids are non-flammable and chemically, electrochemically, and thermally stable. These properties allow ionic liquids to be used as green solvents, as their low volatility limits VOC emissions compared to conventional solvents. The ecotoxicity and poor degradability of some ionic liquids have been recognized in the past, and the resources typically used for their production are non-renewable, as is the case for imidazole and halogenated alkanes (derived from petroleum). Ionic liquids produced from renewable and biodegradable materials have recently emerged, but their availability is low because of high production costs.
Switchable solvents
Bubbling CO2 into water or an organic solvent changes certain properties of the liquid, such as its polarity, ionic strength, and hydrophilicity. This allows an organic solvent to form a homogeneous mixture with the otherwise immiscible water. The process is reversible and was developed by Jessop et al. (2012) for potential uses in synthetic chemistry and in the extraction and separation of various substances. How green a switchable solvent is can be measured by the energy and material savings it provides; thus, one of the advantages of switchable solvents is the potential reuse of solvent and water in post-process applications.
Solvents from waste materials
First-generation biorefineries exploit food-based substances such as starch and vegetable oils. For example, corn grain is used to make ethanol. Second-generation biorefineries use residues or wastes generated by various industries as feedstock for the manufacture of their solvents. 2-Methyltetrahydrofuran, derived from lignocellulosic waste, would have the potential to replace tetrahydrofuran, toluene, DCM, and diethyl ether in some applications. Levulinic acid esters from the same source would have the potential to replace DCM in paint cleaners and strippers.
Used cooking oils can be used to produce FAMEs. Glycerol, obtained as a byproduct of the synthesis of these, can in turn be used to produce various solvents such as 2,2-dimethyl-1,3-dioxolane-4-methanol, usable as a solvent in the formulation of inks and cleaners.
Fusel oil, a mixture of amyl alcohol isomers, is a byproduct of ethanol production from sugars. Green solvents such as isoamyl acetate and isoamyl methyl carbonate can be derived from fusel oil. When these green solvents are used to manufacture nail polishes, VOC emissions are reduced by at least 68% compared to the emissions caused by using traditional solvents.
Petrochemical solvents with green characteristics
Due to the high price of new sustainable solvents, in 2017, Clark et al. listed twenty-five solvents that are currently considered acceptable to replace hazardous solvents, even if they are derived from petrochemicals.
These include propylene carbonate and dibasic esters (DBEs). Propylene carbonate and DBEs have been the subject of monographs on solvent substitution. Propylene carbonate and two DBEs are considered green in the manufacturer GlaxoSmithKline's (GSK) Solvent Sustainability Guide, which is used in the pharmaceutical industry. Propylene carbonate can be produced from renewable resources, but DBEs that have appeared on the market in recent years are obtained as by-products of the synthesis of polyamides, derived from petroleum. Other petrochemical solvents are variously referred to as green solvents, such as halogenated hydrocarbons like parachlorobenzotrifluoride, which has been used since the early 1990s in paints to replace smog-forming solvents.
Siloxanes are compounds known in industry in the form of polymers (silicones, R-SiO-R'), for their thermal stability and elastic and non-stick properties. The early 1990s saw the emergence of low molecular weight siloxanes (methylsiloxanes), which can be used as solvents in precision cleaning, replacing stratospheric ozone-depleting solvents.
A final category of petrochemical solvents that qualify as green involves polymeric solvents. The International Union of Pure and Applied Chemistry defines the term "polymer solvent" as "a polymer that acts as a solvent for low-molecular weight compounds". In industrial chemistry, polyethylene glycols (PEGs, H(OCH2CH2)nOH) are one of the most widely used polymeric solvent families. PEGs with molecular weights below 600 Da are viscous liquids at room temperature, while heavier PEGs are waxy solids.
Soluble in water and readily biodegradable, liquid PEGs have the advantage of negligible volatility (< 0.01 mmHg or < 1.3 Pa at 20 °C). PEGs are synthesized from ethylene glycol and ethylene oxide, both of which are petrochemical-derived molecules, though ethylene glycol from renewable sources (cellulose) is commercially available.
Physical properties
The physical properties of solvents are important when selecting a solvent for given process conditions. In particular, their dissolution properties make it possible to assess whether a particular solvent is suitable for an operation such as a chemical reaction, an extraction or a washing step. Evaporation is also important to consider, as it can be indicative of potential volatile organic compound (VOC) emissions.
The following table shows selected properties of green solvents in each category:
Other categories of green solvent have additional properties that can restrict their use in various applications:
Fatty acid methyl esters have been investigated and compared to fossil diesel. At 20 °C or 40 °C, these solvents have a lower density than water at 4 °C (the temperature at which water is densest):
at 40 °C, from 0.9079 g·cm−3 (acetate) to 0.8488 g·cm−3 (arachidate);
at 20 °C, from 0.9338 g·cm−3 (acetate) to 0.8663 g·cm−3 (pentadecanoate).
Their kinematic viscosity depends on whether they are saturated or unsaturated, and on the temperature. At 40 °C, for saturated FAMEs, it ranges from 0.340 (acetate) to 6.39 mm2·s−1 (nonadecanoate), and for unsaturated FAMEs, from 5.61 (stearate) to 7.21 mm2·s−1 (erucate).
Their dielectric constant decreases as the alkyl chain gets longer. For example, the acetate, which has a very short alkyl chain, has a dielectric constant of ε40 = 6.852, while the nonadecanoate has ε40 = 2.982.
The switchable behaviour of these solvents is governed by the pKa of their conjugate acid and by their octanol-water partition coefficient Kow. To be switchable, a compound must have a conjugate-acid pKa above 9.5, so that it can be protonated by carbonated water, and a log(Kow) between 1.2 and 2.5; otherwise it remains permanently hydrophilic or hydrophobic. These properties also depend on the volumetric ratio of the compound to water. For example, N,N,N-tributylpentanamidine is a switchable solvent, yet at a compound-to-water volumetric ratio of 2:1 it has a log(Kow) of 5.99, which is higher than 2.5.
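The stated thresholds can be expressed as a simple screen. The following Python sketch is purely illustrative (the function name and example values are hypothetical), and it ignores the composition dependence noted above:

```python
def is_switchable_candidate(pka_conjugate_acid: float, log_kow: float) -> bool:
    """Screen a candidate solvent against the criteria stated above:
    a conjugate-acid pKa above 9.5 (so carbonated water can protonate it)
    and log(Kow) between 1.2 and 2.5 (neither permanently hydrophilic
    nor permanently hydrophobic)."""
    return pka_conjugate_acid > 9.5 and 1.2 <= log_kow <= 2.5


# Hypothetical compound with pKa 10.5 and log(Kow) 1.8 passes the screen.
print(is_switchable_candidate(10.5, 1.8))   # True
# A log(Kow) of 5.99 fails this simple screen, even though such a compound
# may still switch at a suitable compound-to-water ratio.
print(is_switchable_candidate(10.5, 5.99))  # False
```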
Ionic liquids with low melting points are associated with asymmetric cations, and liquids with high melting point are associated with symmetric cations. Additionally, if they have branched alkyl chains, they will have a higher melting point. They are more dense than water, ranging from 1.05 to 1.64 g·cm−3 at 20 °C and from 1.01 to 1.57 at 90 °C.
Applications
Some green solvents, in addition to being more sustainable, have been found to offer better physicochemical properties or reaction yields than traditional solvents. However, the results obtained are for the most part observations from experiments on particular green solvents and cannot be generalized. The effectiveness of a green solvent is quantified by calculating the "E factor", the ratio of the mass of waste material generated by a process to the mass of desired product obtained.
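As an illustration of the calculation, the following Python sketch computes an E factor from hypothetical masses (the numbers and the simple definition of waste are illustrative only; in practice, conventions differ on, for example, whether water is counted as waste):

```python
def e_factor(total_input_mass_kg: float, product_mass_kg: float) -> float:
    """E factor = mass of waste generated per mass of desired product.
    Waste is taken here as all input mass that does not end up in the product."""
    waste_kg = total_input_mass_kg - product_mass_kg
    return waste_kg / product_mass_kg


# Hypothetical process: 120 kg of reagents and solvent yield 20 kg of product.
print(e_factor(120.0, 20.0))  # 5.0 kg of waste per kg of product
```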
Organic synthesis
Green solvent efficiency has mainly been proven in extractions and separations in comparison to traditional solvents.
Supercritical CO2 is widely used in the food industry as an extraction solvent. Besides processes such as the extraction of flavoring agents, fragrances, essential oils, and lipids from plants, sc-CO2 is a green substitute for dichloromethane in coffee decaffeination, avoiding the use of a hazardous solvent and additional synthesis steps. Sc-CO2 can also be applied to polymerization reactions, specifically PTFE formation, where it allows the monomers to be handled safely and avoids explosive reactions of peroxide with dioxygen. Although the original process involves water, itself a green solvent, sc-CO2 produces less waste material.
For deep eutectic solvents, observations indicate that the higher the solvent's hydrophobicity, the higher the extraction efficiency of neonicotinoids from aqueous solutions, although an exact trend has not yet been established. Additionally, a biphasic system is easier to achieve. In 2015, several hydrophobic DES composed of highly hydrophobic hydrogen bond donors were reported, one of them combining decanoic acid, a fatty acid acting as the hydrogen bond donor, with a quaternary ammonium salt as the hydrogen bond acceptor.
The pharmaceutical industry intends to substitute its solvents with greener options, a significant concern given the large amounts of solvent used in the synthesis of active substances; this works against green solvents with a high boiling point. The solvent must generally be evaporated at the end of a chemical reaction, hence the preference for low-boiling solvents in order to minimize the energy required for removal by distillation.
Industrial chemistry
Ethyl lactate has uses in cleaning metal surfaces, removing greases, oils, adhesives and solid fuels. It is included in aqueous preparations used for industrial degreasing, coatings, adhesives and inks.
Fatty acid methyl esters (FAMEs) have been used as a reactive diluent in coatings for continuous metal strip coating (e.g., the interior coating of food cans), reducing the amount of volatile solvent in this type of coating and lowering its overall toxicity.
Tetrahydrofurfuryl alcohol (THFA) mixtures with other green solvents are studied for their cleaning properties. As an example, the mixture of THFA with FAME and ethyl lactate has been patented as a paint stripper.
Ionic liquids particularly have applications in electrodeposition. Their relevance as green solvents is further enhanced by the emergence of production methods based on renewable and biodegradable resources.
Solvent manufacturers also provide industrial companies with databases to propose green alternative solvent mixtures to those originally used in industrial processes with similar efficiency and reaction yield. However, environmental and safety requirements are not always considered in these suggestions.
Safety
The use of green solvents is increasingly preferred because of their lower environmental impact. These solvents can nonetheless still present dangers for human health as well as for the environment, and for a number of green solvents the impact is still unclear, or at least not yet categorized.
Listed here is selected information from the safety data sheets of common green solvents:
Solvents derived from carbohydrates
For ethanol, the American Conference of Governmental Industrial Hygienists (ACGIH) advises a short-term exposure limit of 1000 ppm to avoid irritation of the respiratory tract.
The French National Agency for Food, Environmental, and Occupational Health Safety (ANSES) has recommended a short-term occupational exposure limit value of 100 mg/m3 for butan-1-ol, a solvent used in paints, cleaners, and degreasers, in order to prevent irritation of the mucous membranes of the eyes and upper airways. Since 1998, the ACGIH has suggested an 8-hour exposure limit value (ELV) of 20 ppm of butan-1-ol to prevent irritation of the upper respiratory tract and eyes.
Male rats exposed to THFA develop reproductive toxicity, and THFA also affects fetal and embryonic development in rats. In 1993, the American Industrial Hygiene Association suggested an ELV of 2 ppm for THFA to prevent testicular degeneration, based on the no-observed-effect level of two subchronic investigations in rats and dogs.
Deep eutectic solvents
DES components, according to Wazeer, Hayyan, and Hadj-Kali, are typically non-toxic and biodegradable. According to Hayyan et al., the DES they investigated were more harmful to the small crustacean artemia than each of their individual components, which could be attributed to synergy. The abbreviation NADES refers to DES that contain only materials sourced from renewable resources. Compared to other DES, these would typically be less hazardous.
Legislation
Due to the recency of green solvent development, few laws related to their regulation have been developed beyond standard workplace safety precautions already in place, and laws that enforce the use of green solvents have not been widespread.
References
Green chemistry | Green solvent | Chemistry,Engineering,Environmental_science | 4,118 |
67,652,489 | https://en.wikipedia.org/wiki/Merochlorophaeic%20acid | Merochlorophaeic acid is a depside with the molecular formula C24H30O8 which has been isolated from the lichen Cladonia merochlorophaea.
References
Further reading
Polyphenols
Carboxylic acids
Phenol esters
Methoxy compounds
Pentyl compounds
Lichen products | Merochlorophaeic acid | Chemistry | 69 |
1,442,169 | https://en.wikipedia.org/wiki/Shadow%20play | Shadow play, also known as shadow puppetry, is an ancient form of storytelling and entertainment which uses flat articulated cut-out figures (shadow puppets) which are held between a source of light and a translucent screen or scrim. The cut-out shapes of the puppets sometimes include translucent color or other types of detailing. Various effects can be achieved by moving both the puppets and the light source. A skilled puppeteer can make the figures appear to walk, dance, fight, nod and laugh.
There are four different types of performances in shadow play: the actors using their bodies as shadows, puppets where the actors hold them as shadows in the daytime, spatial viewing, and viewing the shadows from both sides of the screen.
Shadow play is popular in various cultures, among both children and adults in many countries around the world. More than 20 countries are known to have shadow show troupes. Shadow play is an old tradition and is listed as a Syrian intangible cultural heritage by UNESCO. It also has a long history in Southeast Asia, especially in Indonesia, Malaysia, Thailand, and Cambodia. It has been an ancient art and a living folk tradition in China, India, Iran and Nepal. It is also known in Egypt, Turkey, Greece, Germany, France, and the United States.
History
Shadow play probably developed from "par" shows with narrative scenes painted on a large cloth and the story further related through song. As the shows were mostly performed at night the par was illuminated with an oil lamp or candles. Shadow puppet theatre likely originated in Central Asia-China or in India in the 1st millennium BCE. By at least around 200 BCE, the figures on cloth seem to have been replaced with puppetry in Indian tholu bommalata shows. These are performed behind a thin screen with flat, jointed puppets made of colorfully painted transparent leather. The puppets are held close to the screen and lit from behind, while hands and arms are manipulated with attached canes and lower legs swinging freely from the knee.
The evidence of shadow puppet theatre is found in both old Chinese and Indian texts. The most significant historical centers of shadow play theatre have been China, Southeast Asia and the Indian subcontinent.
According to Martin Banham, there is little mention of indigenous theatrical activity in the Middle East between the 3rd century CE and the 13th century, including the centuries that followed the Islamic conquest of the region. The shadow puppet play, states Banham, probably came into vogue in the Middle East after the Mongol invasions and thereafter it incorporated local innovations by the 16th century. Little mention of shadow play is found in Islamic literature of Iran, but much is found in Turkish and 19th-century Ottoman Empire-influenced territories.
While shadow play theatre is an Asian invention, hand puppets have a long history in Europe. As European merchant ships sailed in the search of sea routes to India and China, they helped diffuse popular entertainment arts and cultural practices into Europe. Shadow theatre became popular in France, Italy, Britain and Germany by the 17th century. In France, shadow play was advertised as ombres chinoises, while elsewhere they were called "magic lantern". Goethe helped build a shadow play theatre in Tiefurt in 1781.
Prelude to cinematography
According to Stephen Herbert, the popular shadow theatre evolved nonlinearly into projected slides and ultimately into cinematography. The common principle in these innovations were the creative use of light, images and a projection screen. According to Olive Cook, there are many parallels in the development of shadow play and modern cinema, such as their use of music, voice, attempts to introduce colors and mass popularity.
By country and region
Australia
Richard Bradshaw is an Australian shadow puppeteer known for his characters like "Super Kangaroo". Bradshaw's puppetry has been featured in television programs made by Jim Henson as well as the long-running ABC children's TV series Play School.
The Shadow Theatre of Anaphoria (relocated to Australia from California) combines a mixture of reconstructed and original puppets with multiple sources of lights. The company is under the direction of Kraig Grady.
Australian company Shadowplay Studios' debut game Projection: First Light was inspired by shadow puppetry and its art style replicates the traditional shadow play canvas using black props and sepia backgrounds. They visited Richard Bradshaw to gain more insight into shadow puppetry, to make their game more authentic and to get references for the game's shadow puppet characters.
Cambodia
In Cambodia, the shadow play is called Nang Sbek Thom, or simply as Sbek Thom (literally "large leather hide"), Sbek Touch ("small leather hide") and Sbek Por ("colored leather hide").
It is performed during sacred temple ceremonies, at private functions, and for the public in Cambodia's villages. The popular plays include the Ramayana and Mahabharata epics, as well as other Hindu myth and legends. The performance is accompanied by a pinpeat orchestra.
The Sbek Thom is based on the Cambodian version of the Indian epic Ramayana, an epic story about good and evil involving Rama, Sita, Lakshmana, Hanuman and Ravana. It is a sacred performance, embodying Khmer beliefs built on the foundations and mythologies of Brahmanism and Buddhism.
Cambodian shadow puppets are made of cowhide, and their size are usually quite large, depicting a whole scene, including its background. Unlike their Javanese counterparts, Cambodian shadow puppets are usually not articulated, rendering the figure's hands unmovable, and are left uncolored, retaining the original color of the leather. The main shadow puppet production center is Roluos near Siem Reap. Cambodian shadow puppetry is one of the cultural performances staged for tourists alongside Cambodian traditional dances.
The Sbek Thom figures are unlike puppets because they are large and heavy, with no moveable parts. The Sbek Touch, in contrast, are much smaller puppets with movable parts; their shows have been more popular. The Sbek Thom shadow play involves many puppeteers dancing on the screen, each puppeteer playing one character of the Ramayana, while separate narrators recite the story accompanied by an orchestra.
China
Chinese mainland
There are several myths and legends about the origins of shadow puppetry in China. The most famous one has it that Chinese shadow puppetry originated when the favorite concubine of Emperor Wu of Han (156 BCE – 87 BCE) died and magician Shao-weng promised to raise her spirit. The emperor could see a shadow that looked like her move behind the curtains that the magician had placed around some lit torches. It is often told that the magician used a shadow puppet, but the original text in Book of Han gives no reason to believe in a relation to shadow puppetry. Although there are many earlier records of all kinds of puppetry in China, clear mention of Chinese shadow play does not occur until the Northern Song dynasty (960–1127). A 1235 book mentions that the puppets were initially cut out of paper, but later made of colored leather or parchment. The stories were mostly based on history and half fact half fiction, but comedies were also performed.
Shadow play in China is called piyingxi. There are two distinct styles of shadow play: Luanzhou (North China) and Sichuan (South China). Within Sichuan, there are two styles: Chuanbei piyingxi (Northern Sichuan) and Chengdu piyingxi. Cities that are included in the Northern Sichuan are Bazhong, Nanchong, and Guangyuan.
Shadow theatre became quite popular as early as the Song dynasty, when holidays were marked by the presentation of many shadow plays. During the Ming dynasty there were 40 to 50 shadow show troupes in the city of Beijing alone. The earliest shadow theatre screens were made of mulberry paper. The storytellers generally used the art to tell events between various war kingdoms or stories of Buddhist sources. Today, puppets made of leather and moved on sticks are used to tell dramatic versions of traditional fairy tales and myths. In Gansu province, it is accompanied by Daoqing music, while in Jilin, accompanying Huanglong music forms some of the basis of modern opera.
Chinese shadow puppetry is shown in the 1994 Zhang Yimou film To Live.
Taiwan
The origins of Taiwan's shadow puppetry can be traced to the Chaochow school of shadow puppet theatre. Commonly known as leather monkey shows or leather shows, the shadow plays were popular in Tainan, Kaohsiung, and Pingtung as early as the Qing dynasty (1644–1911 A.D.). Older puppeteers estimate that there were at least seventy shadow puppet troupes in the Kaohsiung area alone in the closing years of the Qing. Traditionally, the eight to twelve-inch puppet figures, and the stage scenery and props such as furniture, natural scenery, pagodas, halls, and plants, are all cut from leather. As shadow puppetry is based on light penetrating through a translucent sheet of cloth, the "shadows" are actually silhouettes seen by the audience in profile or face on. Taiwan's shadow plays are accompanied by Chaochow melodies which are often called "priest's melodies" owing to their similarity with the music used by Taoist priests at funerals. A large repertoire of some 300 scripts of the southern school of drama used in shadow puppetry and dating back to the fourteenth and fifteenth centuries has been preserved in Taiwan and is considered to be a priceless cultural asset.
Terminology
A number of terms are used to describe the different forms.
皮影戏, píyĭngxì is a shadow theatre that uses leather puppets. The figures are usually moved behind a thin screen. It is not entirely a show of shadows, as the shadow is more of a silhouette. This gives the figures some color on the screen; they are not 100% black and white.
纸影戏, zhĭyĭngxì is paper shadow theatre.
中国影戏, Zhōngguó yĭngxì is Chinese shadow theatre.
Europe
In Plato's allegory of the cave (circa 380 BCE), Socrates described a kind of shadow play with figures made out of stone, wood, or other materials, presented to prisoners who throughout their lives could see nothing more than the shadows on the wall in front of them. This was an imaginative illustration of ideas about the (false or limited) relations between knowledge, education and a truthful understanding of reality. Plato compared a wall that screens off the people who carry the figures to the kind of partitions used by puppet (marionette) players to hide behind. Apparently, there was no existing form of shadow theatre known in ancient Greece that Socrates/Plato could refer to.
Shadow plays started spreading throughout Europe at the end of the 17th century, probably via Italy. It is known that several Italian showmen performed in Germany, France and England during this period.
In 1675 German polymath and philosopher Gottfried Wilhelm Leibniz imagined a kind of world exhibition that would show all kinds of new inventions and spectacles. In a handwritten document he supposed it should include shadow theatre.
French missionaries brought the shadow show from China to France in 1767 and put on performances in Paris and Marseilles, causing quite a stir. In time, the ombres chinoises (French for "Chinese shadows") with local modification and embellishment, became the ombres françaises and struck root in the country. The popularity of ombres chinoises reflected the chinoiserie fashion of the days.
French showman François Dominique Séraphin first presented his shadow spectacle in a hôtel particulier in Versailles in 1771. He would go on to perform at the Palace of Versailles in front of royalty. In 1784 Séraphin moved to Paris, performing his shows at his permanent theatre in the newly opened Palais-Royal from 8 September 1784. The performances would adapt to the political changes and survived the French Revolution. Séraphin developed the use of clockwork mechanisms to automate the show. His nephew took over the show after Séraphin's death in 1800 and it was continued by his heirs until the theatre closed in 1870.
In 1775, Ambrogio (also known as Ambroise and Ambrose) staged ambitious shows in Paris and London.
The art was a popular entertainment in Paris during the 19th century, especially in the famous Paris nightclub district of Montmartre. The cabaret Le Chat noir ("The Black Cat") produced 45 Théatre d'ombres shows between 1885 and 1896 under the management of Rodolphe Salis. Behind a screen on the second floor of the establishment, the artist Henri Rivière worked with up to 20 assistants in a large, oxy-hydrogen back-lit performance area and used a double optical lantern to project backgrounds. Figures were originally cardboard cut-outs, but were replaced with zinc figures from 1887 onwards. Various artists took part in the creation, including Steinlen, Adolphe Willette and Albert Robida. Caran d'Ache designed circa 50 cut-outs for the very popular 1888 show L'Epopée. Musée d'Orsay has circa 40 original zinc figures in its collection. Other cabarets would produce their own versions; the ombres evolved into numerous theatrical productions and had a major influence on phantasmagoria.
In Italy, the Museum of Precinema collezione Minici Zotti in Padua houses a collection of 70 French shadow puppets, similar to those used in the cabaret Le Chat Noir, together with an original theatre and painted backdrops, as well as two magic lanterns for projecting scenes. So far, the shadow plays identified are La Marche a l'étoile (introduced by Henri Rivière), Le Sphinx (introduced by Amédée Vignola), L'Âge d'or and Le Carneval de Venise. The shadow puppets were presumably created for a tour in France or abroad at the end of the 19th century.
Nowadays, several theatre companies in France are developing the practice of shadow puppets: Le Théâtre des Ombres, Le Théâtre du Petit Miroir, Le Théâtre Les Chaises, and La Loupiote.
India
Shadow puppets are an ancient part of India's culture, particularly regionally as the keelu bomme and Tholu bommalata of Andhra Pradesh, the Togalu gombeyaata in Karnataka, the charma bahuli natya in Maharashtra, the Ravana chhaya in Odisha, the Tholpavakoothu in Kerala and Tamil Nadu. Shadow puppet play is also found in pictorial traditions in India, such as temple mural painting, loose-leaf folio paintings, and the narrative paintings. Dance forms such as the Chhau of Odisha literally mean "shadow". The shadow theatre dance drama theatre are usually performed on platform stages attached to Hindu temples, and in some regions these are called Koothu Madams or Koothambalams. In many regions, the puppet drama play is performed by itinerant artist families on temporary stages during major temple festivals. Legends from the Hindu epics Ramayana and the Mahabharata dominate their repertoire. However, the details and the stories vary regionally.
During the 19th century and early parts of the 20th century of the colonial era, Indologists believed that shadow puppet plays had become extinct in India, though mentioned in its ancient Sanskrit texts. In the 1930s and thereafter, states Stuart Blackburn, these fears of its extinction were found to be false as evidence emerged that shadow puppetry had remained a vigorous rural tradition in central Kerala mountains, most of Karnataka, northern Andhra Pradesh, parts of Tamil Nadu, Odisha and southern Maharashtra. The Marathi people, particularly of low caste, had preserved and vigorously performed the legends of Hindu epics as a folk tradition. The importance of Marathi artists is evidenced, states Blackburn, from the puppeteers speaking Marathi as their mother tongue in many non-Marathi speaking states of India.
According to Beth Osnes, the tholu bommalata shadow puppet theatre dates back to the 3rd century BCE, and has attracted patronage ever since. The puppets used in a tholu bommalata performance, states Phyllis Dircks, are "translucent, lusciously multicolored leather figures four to five feet tall, and feature one or two articulated arms". The process of making the puppets is an elaborate ritual, where the artist families in India pray, go into seclusion, produce the required art work, then celebrate the "metaphorical birth of a puppet" with flowers and incense.
The tholu pava koothu of Kerala uses leather puppets whose images are projected on a backlit screen. The shadows are used to creatively express characters and stories in the Ramayana. A complete performance of the epic can take forty-one nights, while an abridged performance lasts as few as seven days. One feature of the tholu pava koothu show is that it is a team performance of puppeteers, while other shadow plays such as the wayang of Indonesia are performed by a single puppeteer for the same Ramayana story. There are regional differences within India in the puppet arts. For example, women play a major role in shadow play theatre in most parts of India, except in Kerala and Maharashtra. Almost everywhere, except Odisha, the puppets are made from tanned deer skin, painted and articulated. Translucent leather puppets are typical in Andhra Pradesh and Tamil Nadu, while opaque puppets are typical in Kerala and Odisha. The artist troupes typically carry over a hundred puppets for their performance in rural India.
Indonesia
Shadow puppet theatre is called wayang in Indonesia, wherein a dramatic story is told through shadows thrown by puppets and sometimes combined with human characters. Wayang is an ancient form of storytelling that is renowned for its elaborate puppets and complex musical styles. The earliest evidence is from the late 1st millennium CE, in medieval-era texts and archeological sites. Around 860 CE an Old Javanese charter issued by Maharaja Sri Lokapala mentions three sorts of performers: atapukan, aringgit, and abanol. Ringgit is described in an 11th-century Javanese poem as a leather shadow figure. Unlike India's shadow plays, which incorporated little to no musical performance, Indonesian wayang includes an ensemble of gamelan music.
Wayang kulit, a style of wayang shadow play, is particularly popular in Java and Bali. The term derived from the word wayang literally means "shadow" or "imagination" in Javanese; it also connotes "spirit". The word kulit means "skin", as the material from which the puppet is made is thin perforated leather sheets made from buffalo skin.
Performances of shadow puppet theater in Bali are typically at night, lasting until dawn. The complete wayang kulit troupes include dalang (puppet master), nayaga (gamelan players), and sinden (female choral singer). Some of the nayaga also perform as male choral singers. The dalang (puppet master) performs the wayang behind the cotton screen illuminated by oil lamp or modern halogen lamp, creating visual effects similar to animation. The flat puppet has moveable joints that are animated by hand, using rods connected to the puppet. The handle of the rod is made of carved buffalo horn. On November 7, 2003, UNESCO designated wayang kulit from Indonesia as one of the Masterpieces of the Oral and Intangible Heritage of Humanity.
Malaysia
In Malaysia, shadow puppet plays are also known as wayang kulit. In Malay, wayang means "theater", while kulit means "skin/leather" and refers to the puppets that are made out of leather. There are four types of shadow theaters in Malaysia: wayang kulit Jawa, wayang kulit Gedek, wayang kulit Melayu, and wayang kulit Siam. Wayang kulit Jawa and wayang kulit Melayu can be traced back to Javanese Shadows while wayang kulit Gedek and wayang kulit Siam are traced back to Southern Thailand's shadow theaters. Stories presented are usually mythical and morality tales. There is an educational moral to the plays, which usually portray a battle. Malay shadow plays are sometimes considered one of the earliest examples of animation. The wayang kulit in the northern states of Malaysia such as Kelantan is influenced by and similar to Thai shadow puppets, while the wayang kulit in the southern Malay peninsula, especially in Johor, is borrowed from Javanese Indonesian wayang kulit with slight differences in the story and performance.
The puppets are made primarily of leather and manipulated with sticks or buffalo horn handles. Shadows are cast using an oil lamp or, in modern times, a halogen light, onto a cotton cloth background. They are often associated with gamelan music.
Thailand
Shadow theatre in Thailand is called nang yai (which uses large and steady figures); in the south there is a tradition called nang talung (which uses small, movable figures). Nang yai puppets are normally made of cowhide and rattan and are carried by performers in front of the screen rather than behind it. Nang talung shadow plays usually occur at domestic rituals and ceremonies or at commercial and temple fairs, but they are starting to appear on Thai television.
There are different kinds of performers in Thailand's shadow play. Nang samai performers are more modern in terms of music and dialogue, while nang booraan performers are more traditional. Performances are normally accompanied by a combination of songs and chants. Moreover, there are specific types of performances in Thailand that are more political than theatrical, which are called nang kaanmuang.
Performances in Thailand were temporarily suspended in 1960 due to a fire at the national theatre. Nang drama has influenced modern Thai cinema, including filmmakers like Cherd Songsri and Payut Ngaokrachang.
Turkey
A more bawdy comedy tradition of shadow play was widespread throughout the Ottoman Empire, possibly since the late 14th century. It was centered around the contrasting interaction between the figures Karagöz and Hacivat: an unprincipled peasant and his fussy, educated companion. Together with other characters they represented all the major social groups in Ottoman culture. The theatres had an enormous following and would take place in coffee houses and in rich private houses and even performed before the sultan. Every quarter of the city had its own Karagöz.
The Karagöz theatre consisted of a three-sided booth covered with a curtain printed with branches and roses and a white cotton screen of about three feet by four which was inserted in the front. The performance had a three-man orchestra who sat at the foot of a small raised stage where they would play for the audience. The show would start when the puppet master lit the oil lamp. The show could be introduced by a singer, accompanied by a tambourine player. The background and scenery would sometimes include moving ships, riders moving on horseback, swaying palm trees and even dragons. The sound effects included songs and various voices.
Puppets were made to be about 15 inches or 35–40 centimeters high and oiled to make them look translucent. The puppets were made of either horse, water buffalo or calf skin. They had movable limbs and were jointed with waxed thread at the neck, arms, waist and knees and manipulated from rods in their back and held by the finger of the puppet master. The hide is worked until it is semi-transparent; then it is colored, resulting in colorful projections. Karagöz theatre was also adapted in Egypt and North Africa.
Shadow puppetry today
In the 1910s, the German animator Lotte Reiniger pioneered silhouette animation as a format, whereby shadow-play-like puppets are filmed frame-by-frame. This technique has been kept alive by subsequent animators and is still practised today, though cel animation and computer animation has also been used to imitate the look of shadow play and silhouette animation. By the 1920s, shadow puppetry had breached the world of German Expressionism, through the silent film Warning Shadows.
Traditional Chinese shadow puppetry was brought to audiences in the United States in the 1920s and 1930s through the efforts of Pauline Benton. Contemporary artists such as Annie Katsura Rollins have perpetuated the medium, sometimes combining the form with Western theatre.
Shadow theatre is still popular in many parts of Asia. Prahlad Acharya is one famous Indian magician who incorporates it into his performances.
In the 2010s, performer Tom McDonagh introduced 3-D shadow puppets and the use of laser-cut objects.
It also appears occasionally in western popular culture; for example:
The Broadway musical The Lion King
The children's television show Bear in the Big Blue House
The 1983 film The Year of Living Dangerously opens with a scene from an Indonesian wayang shadow play
The 2004 video game Sudeki opens with a shadow puppet play setting the stage for the game
The Center for Puppetry Arts in Atlanta, Georgia, has an extensive variety of Chinese shadow puppets in their Asian collection.
The 2010 film The Karate Kid
The Disney Channel show What a Life features shadow puppetry from Sunny Seki.
Music videos, notably "The Free Design" by Stereolab and "Twice" by Little Dragon
The 2021 film Candyman uses shadow puppetry to portray several African-American victims of racial violence throughout history, including Sherman Fields (who appears at the beginning of the film), Anthony Crawford, George Stinney, and James Byrd, Jr.
In 2023, the performer known as Shadow Ace performed on season 18 of America's Got Talent to a standing ovation.
Gallery
References
Further reading
Currell, David, An Introduction to Puppets and Puppetmaking, New Burlington Books, (1992)
Logan, David, Puppetry, Brisbane Dramatic Arts Company (2007)
Fan Pen Chen tr., "Visions for the Masses; Chinese Shadow Plays from Shaanxi and Shanxi", Ithaca: Cornell East Asia Series, (2004)
Ghulam-Sarwar Yousof, Dictionary of Traditional Southeast Asian Theatre, Oxford University Press, (1994)
External links
Greek Shadows, an interactive, educational website on Greek shadow-theater
1st-millennium BC introductions
Theatrical genres
Performing arts in China
Arts in Syria
Arts in Turkey
Shadows
Articles containing video clips
Arts in Indonesia
Puppetry in India | Shadow play | Physics | 5,324 |
254,814 | https://en.wikipedia.org/wiki/List%20of%20house%20types | This is a list of house types. Houses can be built in a large variety of configurations. A basic division is between free-standing or single-family detached homes and various types of attached or multi-family residential dwellings. Both may vary greatly in scale and the amount of accommodation provided.
By layout
Hut
A hut is a dwelling of relatively simple construction, usually one room and one story in height. The design and materials of huts vary widely around the world.
Bungalow
Bungalow is a common term applied to a low one-story house with a shallow-pitched roof (in some locations, dormered varieties are referred to as 1.5-story, such as the chalet bungalow in the United Kingdom).
Cottage
A cottage is a small house, usually one or two stories in height, although the term is sometimes applied to larger structures.
Ranch
A ranch-style house or rambler is one-story, low to the ground, with a low-pitched roof, usually rectangular, L- or U-shaped with deep overhanging eaves. Ranch styles include:
California ranch: the "original" ranch style, developed in the United States in the early 20th century, before World War II
Tract ranch: a post-World War II style of ranch that was smaller and less ornate than the original, mass-produced in housing developments, usually without basements
Suburban ranch: a modern style of ranch that retains many of the characteristics of the original but is larger, with modern amenities
I-house
An I-house is a two or three-story house that is one room deep with a double-pen, hall-parlor, central-hall or saddlebag layout.
New England I-house: characterized by a central chimney
Pennsylvania I-house: characterized by internal gable-end chimneys at the interior of either side of the house
Southern I-house: characterized by external gable-end chimneys on the exterior of either side of the house
Gablefront
A gablefront house or gablefront cottage has a gable roof that faces its street or avenue, as in the novel The House of Seven Gables.
A-frame: so-called because the steep roofline, reaching to or near the ground, makes the gable ends resemble a capital letter A.
Chalet: a gablefront house built into a mountainside with a wide sloping roof
Charleston single house: originating in Charleston, South Carolina, a narrow house with its shoulder to the street and front door on the side.
Upright and Wing: a style originating in New England (particularly Upstate New York) and the Great Lakes states, usually of a Greek Revival style.
Split-level
Split-level house is a design of house that was commonly built during the 1950s and 1960s. It has two nearly equal sections that are located on two different levels, with a short stairway in the corridor connecting them.
Bi-level, split-entry, or raised ranch
Tri-level, quad-level, quintlevel etc.
Tower
A tower house is a compact two or more story house, often fortified.
Irish tower houses were often surrounded by defensive walls called bawns
Kulla: an Albanian tower house
Peel tower or Pele tower: fortified tower houses in England and Scotland used as keeps or houses
Vainakh tower: a tower house found in Chechnya and Ingushetia that reached up to four stories tall and was used for residential or military purposes, or both
Welsh tower houses: built mostly in the 14th and 15th centuries
Longhouse
A longhouse is a historical house type, typically housing family groups.
Geestharden house: one of the three basic house types in Schleswig-Holstein region of Germany
Uthland-Frisian house: a sub type of Geestharden house of northwest Germany and Denmark
Longère: a long and narrow house in rural Normandy and Brittany
Housebarn
A housebarn is a combined house and barn.
Barndominium: a type of house that includes living space attached to either a workshop or a barn, typically for horses, or a large vehicle such as a recreational vehicle or a large recreational boat
Byre-dwelling: farmhouse with people and livestock under one roof
Connected farm: type of farmhouse common in New England
Frutighaus: a type of barnhouse originating in the Frutigland region of Switzerland.
Other house types
Courtyard house
Riad: a type of courtyard house found in Morocco
Siheyuan, Sanheyuan: a type of courtyard house found in China
Slope house: a house with soil or rock completely covering the bottom floor on one side and partly covering two of the walls of the bottom floor. The house has two entries, depending on the ground level.
Snout house: a house with the garage door being the closest part of the dwelling to the street.
Octagon house: a house of symmetrical octagonal floor plan, popularized briefly during the 19th century by Orson Squire Fowler
Stilt house: is a house built on stilts above a body of water or the ground (usually in swampy areas prone to flooding).
Villa: a large house which one might retreat to in the country. Villa can also refer to a freestanding comfortable-sized house, on a large block, generally found in the suburbs, and in Victorian terraced housing, a house larger than the average byelaw terraced house, often having double street frontage.
Mansion: a very large, luxurious house, typically associated with exceptional wealth or aristocracy, usually of more than one story, on a very large block of land or estate. Mansions usually will have many more rooms and bedrooms than a typical single-family home, including specialty rooms, such as a library, study, conservatory, theater, greenhouse, infinity pool, bowling alley, or server room.
Palace: the residence of a high ranking government official or the country's ruler.
Castle: a heavily fortified medieval dwelling or a house styled after medieval castles. Usually with towers, crenellations, and a stone exterior.
By construction method or materials
Airey house: a type of low-cost house that was developed in the United Kingdom during the 1940s by Sir Edwin Airey, and then widely constructed between 1945 and 1960 to provide housing for soldiers, sailors, and airmen who had returned home from World War II. These are recognizable by their precast concrete columns and by their walls made of precast "ship-lap" concrete panels.
Assam-type House: an earthquake-resistant house type commonly found in the northeastern states of India
Bastle house: a fortified farmhouse found in England and Scotland
Castle: primarily a defensive structure/dwelling built during the Dark Ages and the Middle Ages, and also from the 18th century to today.
Converted barn: an old barn converted into a house or other use.
Earth sheltered: houses using dirt ("earth") piled against it exterior walls for thermal mass, which reduces heat flow into or out of the house, maintaining a more steady indoor temperature
Pit-house: a prehistoric house type used on many continents and of many styles, partially sunken into the ground.
Rammed earth
Sod house
Earthbag home
Souterrain: an earthen dwelling typically deriving from Neolithic Age or Bronze Age times.
Underground home: a type of dwelling dug and constructed underground, for example a rammed-earth style house.
Yaodong: a dugout used as an abode or shelter in northern China, especially on the Loess Plateau
Wattle and daub
Adobe: a type of mudbrick house made of dirt and straw with mud used as mortar. Found throughout the world, in particular Spain, North Africa, the Middle East and the Americas.
Igloo: an Inuit, Yup'ik, and Aleut seasonal or emergency shelter that was made of knife-sliced blocks of packed snow and/or ice in the Arctic regions of Alaska, Canada, Greenland, and Siberian Russia.
Kit house: a type of pre-fabricated house made of pre-cut, numbered pieces of lumber.
Sears Catalog Home: an owner-built "kit" houses that were sold by the Sears, Roebuck and Co. corporation via catalog orders from 1906 to 1940.
Laneway house: a type of Canadian house that is constructed behind a normal single-family home that opens onto a back lane
Log home, Log cabin: a house built by American, Canadian, and Russian frontiersmen and their families which was built of solid, unsquared wooden logs and later as a well crafted style of dwelling
Plank house: a general term for houses built using planks in a variety of ways
Pole house: a timber house in which a set of vertical poles carry the load of all of its suspended floors and roof, allowing all of its walls to be non-load-bearing.
Prefabricated house: a house whose main structural sections were manufactured in a factory, and then transported to their final building site to be assembled upon a concrete foundation, which had to be poured locally.
Manufactured house: a prefabricated house that is assembled on the permanent site on which it will sit.
Modular home: a prefabricated house that consists of repeated sections called modules.
Lustron house: a type of prefabricated house
Stilt houses or Pile dwellings: houses raised on stilts over the surface of the soil or a body of water.
Tree house: a house built among the branches or around the trunk of one or more mature trees and does not rest on the ground.
Upper Lusatian house or Umgebinde: combined log and timber-frame construction in Germany-Czech Republic-Poland region
Wimpey no-fines house: a low-cost semi-attached or terraced houses built in the United Kingdom from the 1940s onwards using concrete without fine aggregates ("no-fine")
Single-family attached
Two-family or duplex: two living units, either attached side by side and sharing a common wall (in some countries, called semi-detached) or stacked one atop the other (in some countries, called a double-decker)
Three-family or triplex: three living units, either attached side by side and sharing common walls, or stacked (in some countries, called a three-decker or triple-decker)
Four-family or quadplex or quad: four living units, typically with two units on the first floor and two on the second, or side-by-side
Townhouse, terraced house, or rowhouse: common terms for single-family attached housing, whose precise meaning varies by location, often connecting a series of living units arranged side-by-side sharing common walls (not to be confused with the English term for an aristocratic mansion, townhouse (Great Britain))
Linked house: side-by-side attached houses that appear detached above-ground but are attached at the foundation below-ground
Linked semi-detached: side-by-side attached houses with garages in between them, sharing basement and garage walls
Mews property: an urban stable-block that has often been converted into residential properties. The houses may have been converted into ground floor garages with a small flat above which used to house the ostler or just a garage with no living quarters.
Patio house: townhouses that share a patio
Weavers' cottage: townhouses with attached workshops for weavers
Movable dwellings
Chattel house: a small wooden house occupied by working-class people on Barbados. Originally relocatable; personal chattel (property) rather than fixed real property.
Mobile home, park home, or trailer home: a prefabricated house that is manufactured off-site and moved by trailer to its final location (but not intended to be towed regularly by a vehicle)
Recreational vehicle or RV: a motor vehicle or trailer that can be used for habitation
Travel trailer, camper or caravan: a trailer designed to be used as a residence (usually temporarily), which must be towed regularly by a vehicle and cannot move under its own power
Tiny house: a dwelling, usually built on a trailer or barge, that is or smaller, built to look like a small house and suitable for long-term habitation
Houseboat includes float houses: a boat designed to be primarily used as a residence
Tent: a temporary, movable dwelling usually constructed with fabric covering a frame of lightweight wood or other locally-available material
Tipi: a conical tent originating in North America
Yurt: a round tent with a conical roof originating in Central Asia
See also
Cohousing
Company town
City block
Home
House
Gated community
Intentional community
List of house styles
Outbuilding
Planned unit development
Real estate
Spite house, which may or may not be attached to other structures
Sustainable design
Timeshare, form of vacation property
Total institution
Notes
References
External links
House Images
Architectural Housing Styles at Old House Web
Bilingual Glossary of House types
A comprehensive summary of common residential architectural styles and themes
House types | List of house types | Engineering | 2,583 |
28,530,078 | https://en.wikipedia.org/wiki/Cortinarius%20infractus | Cortinarius infractus, commonly known as the sooty-olive Cortinarius or the bitter webcap, is an inedible basidiomycete mushroom of the genus Cortinarius. The fungus produces sooty-olive fruit bodies with sticky caps measuring up to in diameter. The fruit bodies contains alkaloids that inhibit the enzyme acetylcholinesterase.
Taxonomy
The species was first named as Agaricus infractus by Christiaan Hendrik Persoon in 1799. It was transferred to the genus Cortinarius by Elias Magnus Fries in his 1838 Epicrisis Systematis Mycologici. Differing opinions regarding the organization of Cortinarius have led to the names Phlegmacium infractum (Wünsche) and Pholiota infracta (Kummer).
The mushroom is commonly known as the "sooty-olive Cortinarius", or the "bitter webcap".
Description
The cap is in diameter, thickly fleshy, particularly in the center. It is initially convex, but already slightly undulatingly squashed while still young, becoming convex and depressed around the umbo, then flattened, retaining for a long time a low broad umbo but sometimes at length uniformly depressed. The margin is distinctly curved, slightly rolled inward when young, thin and sharply rounded when mature and often wavy and lobed. The cap surface is smooth and sticky, dirty yellow-olive to dirty brownish-olive or olive grayish-green, then dirty light brown with green tinge, streakily fibrillose almost from the middle (scantily so at center), finely and persistently dark greenish-brownish. The gills are moderately crowded to distant spaced—about four gills per centimeter in the middle on mature fruit bodies, at the margin about ten per centimeter. They are adnate and deeply emarginate (notched), especially when mature, up to broad, somewhat wrinkled on the surface and with the edge entire or slightly undulatingly denticulate. Their color is dirty olive, then later dark brownish-olive, with slightly paler edge.
The stem is up to long, wide at the apex but up to wide at the base, where it thickens bulbously. The bulb, which is not sharply upright but marked by a rounded ridge, is egg-shaped in cross section. For a long time the stem is solid, then sometimes hollow, firm, hard, silkily fibrillose, dirty whitish with olive tinge but faintly greenish-blue, particularly at the apex, and sometimes even with a blue tint or bluish spots on the bulb. The cortina is olive greenish, then brownish, thickly developed but soon disappearing. The flesh is tough, then softer, whitish, with slight bluish-green tinge, thick in the cap, homogenous and softly juicily fleshy; in the stem, fibrillosely rivulose beneath the surface and with a dirty bluish-green or greenish-olive tinge in this region, more homogenous and almost non-fibrillose in the bulb. The taste is bitter and the odor faint, similar to radish. The basidia (spore-bearing cells) are 50–60 by 9–10 μm, with four sterigmata 6–7 μm long.
In 1999, Moser and Ammirati described a variety that they had seen many times since 1983 in Shoshone National Forest, Wyoming. Cortinarius infractus var. flavus differs from the typical variety in its cap, which reaches up to with a yellow-brown to almost yellow color; its paler gills, described as "olivaceous-brownish"; and its slightly bitterish, sometimes mild taste.
Similar species
Cortinarius immixtus somewhat resembles C. infractus, but has brighter-colored young gills (ranging from yellowish to olive to green), a mild taste, and larger spores.
Distribution and habitat
The fruit bodies of Cortinarius infractus grows scattered in deciduous forests of both oak and beech.
Chemical compounds
The essential oils obtained from the fruit bodies were shown to contain 36 components, predominantly musk ambrette at a proportion of 62.3%. The essential oil was also tested for antimicrobial activity against the human pathogenic bacteria Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Enterococcus faecalis, Staphylococcus aureus, Bacillus cereus and the fungus Candida tropicalis, but did not show any biological activity.
Two alkaloids, infractopicrin and 10-hydroxy-infractopicrin, have been isolated from the fruit bodies of Cortinarius infractus. Both compounds show the ability to inhibit the enzyme acetylcholinesterase in vitro, and possess a higher selectivity than galanthamine, a drug used for the treatment of mild to moderate Alzheimer's disease.
See also
List of Cortinarius species
References
infractus
Fungi described in 1799
Fungi of Europe
Inedible fungi
Taxa named by Christiaan Hendrik Persoon
Fungus species | Cortinarius infractus | Biology | 1,087 |
4,297,420 | https://en.wikipedia.org/wiki/Respiratory%20quotient | The respiratory quotient (RQ or respiratory coefficient) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body, when the body is in a steady state. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry. It is measured using a respirometer. The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7, for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity.
It can be used in the alveolar gas equation.
Respiratory exchange ratio
The respiratory exchange ratio (RER) is the ratio between the metabolic production of carbon dioxide (CO2) and the uptake of oxygen (O2).
The ratio is determined by comparing exhaled gases to room air. The measured ratio equals RQ only at rest or during mild to moderate aerobic exercise without the accumulation of lactate. The loss of accuracy during more intense anaerobic exercise is due, among other things, to the bicarbonate buffer system. The body tries to compensate for the accumulation of lactate and minimize the acidification of the blood by expelling more CO2 through the respiratory system.
The RER can exceed 1.0 during intense exercise. A value above 1.0 cannot be attributed to the substrate metabolism, but rather to the aforementioned factors regarding bicarbonate buffering. Calculation of RER is commonly done in conjunction with exercise tests such as the VO2 max test. This can be used as an indicator that the participants are nearing exhaustion and the limits of their cardio-respiratory system. An RER greater than or equal to 1.0 is often used as a secondary endpoint criterion of a VO2 max test.
Calculation
The respiratory quotient (RQ) is the ratio:
RQ = CO2 eliminated / O2 consumed
where the term "eliminated" refers to carbon dioxide (CO2) removed from the body in a steady state.
In this calculation, the CO2 and O2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles, or else volumes of gas at standard temperature and pressure.
Many metabolized substances are compounds containing only the elements carbon, hydrogen, and oxygen. Examples include fatty acids, glycerol, carbohydrates, deamination products, and ethanol. For complete oxidation of such compounds, the chemical equation is
CxHyOz + (x + y/4 − z/2) O2 → x CO2 + (y/2) H2O
and thus metabolism of this compound gives an RQ of x/(x + y/4 - z/2).
For glucose, with the molecular formula C6H12O6, the complete oxidation equation is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. Thus, RQ = 6 CO2 / 6 O2 = 1.
For oxidation of a fatty acid molecule, namely palmitic acid (C16H32O2), the equation is C16H32O2 + 23 O2 → 16 CO2 + 16 H2O, giving RQ = 16/23 ≈ 0.7.
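The general formula above can be evaluated directly for any compound containing only carbon, hydrogen, and oxygen. The following is a minimal sketch in Python (the function name and structure are illustrative, not from the source), reproducing the glucose and palmitic acid values:

```python
def respiratory_quotient(x, y, z):
    """RQ for complete oxidation of a compound CxHyOz:
    CxHyOz + (x + y/4 - z/2) O2 -> x CO2 + (y/2) H2O
    so RQ = x / (x + y/4 - z/2).
    """
    o2_consumed = x + y / 4 - z / 2
    return x / o2_consumed

# Glucose C6H12O6 -> RQ = 1.0
print(respiratory_quotient(6, 12, 6))   # 1.0
# Palmitic acid C16H32O2 -> RQ = 16/23
print(respiratory_quotient(16, 32, 2))  # ~0.696
```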
A RQ near 0.7 indicates that fat is the predominant fuel source, a value of 1.0 is indicative of carbohydrate being the predominant fuel source, and a value between 0.7 and 1.0 suggests a mix of both fat and carbohydrate. In general a mixed diet corresponds with an RER of approximately 0.8. For fats, the RQ depends on the specific fatty acids present. Amongst the commonly stored fatty acids in vertebrates, RQ varies from 0.692 (stearic acid) to as high as 0.759 (docosahexaenoic acid). Historically, it was assumed that 'average fat' had an RQ of about 0.71, and this holds true for most mammals including humans. However, a recent survey showed that aquatic animals, especially fish, have fat that should yield higher RQs on oxidation, reaching as high as 0.73 due to high amounts of docosahexaenoic acid.
The range of respiratory coefficients for organisms in metabolic balance usually ranges from 1.0 (representing the value expected for pure carbohydrate oxidation) to ~0.7 (the value expected for pure fat oxidation). In general, molecules that are more oxidized (e.g., glucose) require less oxygen to be fully metabolized and, therefore, have higher respiratory quotients. Conversely, molecules that are less oxidized (e.g., fatty acids) require more oxygen for their complete metabolism and have lower respiratory quotients. See BMR for a discussion of how these numbers are derived. A mixed diet of fat and carbohydrate results in an average value between these numbers.
Each RQ value corresponds to a caloric value per liter (L) of CO2 produced. If O2 consumption figures are available, they are usually used directly, since they are more direct and reliable estimates of energy production.
RQ as measured includes a contribution from the energy produced from protein. However, due to the complexity of the various ways in which different amino acids can be metabolized, no single RQ can be assigned to the oxidation of protein in the diet.
Insulin, which increases lipid storage and decreases fat oxidation, is positively associated with increases in the respiratory quotient. A positive energy balance will also lead to an increased respiratory quotient.
Applications
Practical applications of the respiratory quotient can be found in severe cases of chronic obstructive pulmonary disease, in which patients spend a significant amount of energy on respiratory effort. By increasing the proportion of fats in the diet, the respiratory quotient is driven down, causing a relative decrease in the amount of CO2 produced. This reduces the respiratory burden to eliminate CO2, thereby reducing the amount of energy spent on respirations.
Respiratory Quotient can be used as an indicator of over or underfeeding. Underfeeding, which forces the body to utilize fat stores, will lower the respiratory quotient, while overfeeding, which causes lipogenesis, will increase it. Underfeeding is marked by a respiratory quotient below 0.85, while a respiratory quotient greater than 1.0 indicates overfeeding. This is particularly important in patients with compromised respiratory systems, as an increased respiratory quotient significantly corresponds to increased respiratory rate and decreased tidal volume, placing compromised patients at a significant risk.
Because of its role in metabolism, respiratory quotient can be used in analysis of liver function and diagnosis of liver disease. In patients with liver cirrhosis, non-protein respiratory quotient (npRQ) values act as good indicators in the prediction of overall survival rate. Patients having a npRQ < 0.85 show considerably lower survival rates as compared to patients with a npRQ > 0.85. A decrease in npRQ corresponds to a decrease in glycogen storage by the liver. Similar research indicates that non-alcoholic fatty liver diseases are also accompanied by a low respiratory quotient value, and the non protein respiratory quotient value was a good indication of disease severity.
Recently, the respiratory quotient has also been used by aquatic scientists to explore its environmental applications. Experimental studies with natural bacterioplankton using different single substrates suggested that RQ is linked to the elemental composition of the respired compounds. In this way, it has been demonstrated that bacterioplankton RQ is not only a practical aspect of bacterioplankton respiration determination, but also a major ecosystem state variable that provides unique information about aquatic ecosystem functioning. Based on the stoichiometry of the different metabolized substrates, scientists can predict that dissolved oxygen (O2) and carbon dioxide (CO2) in aquatic ecosystems should covary inversely due to the processes of photosynthesis and respiration. Using this quotient, researchers can shed light on the metabolic behavior and the simultaneous roles of chemical and physical forcing that shape the biogeochemistry of aquatic ecosystems.
Moving from a molecular and cellular level to an ecosystem level, various processes account for the exchange of O2 and CO2 between the biosphere and atmosphere. Field measurements of the concurrent consumption of oxygen (-ΔO2) and production of carbon dioxide (ΔCO2) can be used to derive an apparent respiratory quotient (ARQ). This value reflects a cumulative effect of not only the aerobic respiration of all organisms (microorganisms and higher consumers) in the sample, but also all the other biogeochemical processes which consume O2 without a corresponding CO2 production and vice versa influencing the observed RQ.
Respiratory quotients of some substances
See also
References
External links
Biochemistry methods
Energy conversion
Metabolism
Respiratory physiology
Underwater diving physiology | Respiratory quotient | Chemistry,Biology | 1,934 |
37,822,732 | https://en.wikipedia.org/wiki/History%20of%20network%20traffic%20models | Design of robust and reliable networks and network services relies on an understanding of the traffic characteristics of the network. Throughout history, different models of network traffic have been developed and used for evaluating existing and proposed networks and services.
Demands on computer networks are not entirely predictable. Performance modeling is necessary for deciding the quality of service (QoS) level. Performance models in turn, require accurate traffic models that have the ability to capture the statistical characteristics of the actual traffic on the network. Many traffic models have been developed based on traffic measurement data. If the underlying traffic models do not efficiently capture the characteristics of the actual traffic, the result may be the under-estimation or over-estimation of the performance of the network. This impairs the design of the network. Traffic models are hence, a core component of any performance evaluation of networks and they need to be very accurate.
“Teletraffic theory is the application of mathematics to the measurement, modeling, and control of traffic in telecommunications networks. The aim of traffic modeling is to find stochastic processes to represent the behavior of traffic. Working at the Copenhagen Telephone Company in the 1910s, A. K. Erlang famously characterized telephone traffic at the call level by certain probability distributions for arrivals of new calls and their holding times. Erlang applied the traffic models to estimate the telephone switch capacity needed to achieve a given call blocking probability. The Erlang blocking formulas had tremendous practical interest for public carriers because telephone facilities (switching and transmission) involved considerable investments. Over several decades, Erlang’s work stimulated the use of queuing theory, and applied probability in general, to engineer the public switched telephone network. Teletraffic theory for packet networks has seen considerable progress in recent decades. Significant advances have been made in long-range dependence, wavelet, and multifractal approaches. At the same time, traffic modeling continues to be challenged by evolving network technologies and new multimedia applications. For example, wireless technologies allow greater mobility of users. Mobility must be an additional consideration for modeling traffic in wireless networks. Traffic modeling is an ongoing process without a real end. Traffic models represent our best current understanding of traffic behavior, but our understanding will change and grow over time.”
Network traffic models usage
Measurements are useful and necessary for verifying the actual network performance. However, measurements do not have the level of abstraction that makes traffic models useful. Traffic models can be used for hypothetical problem solving whereas traffic measurements only reflect current reality. In probabilistic terms, a traffic trace is a realization of a random process, whereas a traffic model is a random process. Thus, traffic models have universality. A traffic trace gives insight about a particular traffic source, but a traffic model gives insight about all traffic sources of that type. Traffic models have three major uses.

One important use of traffic models is to properly dimension network resources for a target level of QoS. It was mentioned earlier that Erlang developed models of voice calls to estimate telephone switch capacity to achieve a target call blocking probability. Similarly, models of packet traffic are needed to estimate the bandwidth and buffer resources to provide acceptable packet delays and packet loss probability. Knowledge of the average traffic rate is not sufficient. It is known from queuing theory that queue lengths increase with the variability of traffic. Hence, an understanding of traffic burstiness or variability is needed to determine sufficient buffer sizes at nodes and link capacities.

A second important use of traffic models is to verify network performance under specific traffic controls. For example, given a packet scheduling algorithm, it would be possible to evaluate the network performance resulting from different traffic scenarios. For another example, a popular area of research is new improvements to the TCP congestion avoidance algorithm. It is critical that any algorithm is stable and allows multiple hosts to share bandwidth fairly, while sustaining a high throughput. Effective evaluation of the stability, fairness, and throughput of new algorithms would not be possible without realistic source models.

A third important use of traffic models is admission control. In particular, connection-oriented networks such as ATM depend on admission control to block new connections to maintain QoS guarantees. A simple admission strategy could be based on the peak rate of a new connection; a new connection is admitted if the available bandwidth is greater than the peak rate. However, that strategy would be overly conservative because a variable bit-rate connection may need significantly less bandwidth than its peak rate. A more sophisticated admission strategy is based on effective bandwidths. The source traffic behavior is translated into an effective bandwidth between the peak rate and average rate, which is the specific amount of bandwidth required to meet a given QoS constraint. The effective bandwidth depends on the variability of the source.
Network traffic models steps
Traffic modeling consists of three steps:
(i) selection of one or more models that may provide a good description of the traffic type
(ii) estimation of parameters for the selected models
(iii) statistical testing for election of one of the considered models and analysis of its suitability to describe the traffic type under analysis.
Parameter estimation is based on a set of statistics (e.g. mean, variance, density function or auto covariance function, multifractal characteristics) that are measured or calculated from observed data. The set of statistics used in the inference process depends on the impact they may have in the main performance metrics of interest.
Network traffic models parameter
In recent years several types of traffic behavior, that can have significant impact on network performance, were discovered: long-range dependence, self-similarity and, more recently, multifractality.
There are two major parameters generated by network traffic models: packet length distributions and packet inter-arrival distributions. Other parameters, such as routes, distribution of destinations, etc., are of less importance. Simulations that use traces generated by network traffic models usually examine a single node in the network, such as a router or switch; factors that depend on specific network topologies or routing information are specific to those topologies and simulations. The problem of packet size distribution is fairly well-understood today. Existing models of packet sizes have proven to be valid and simple. Most packet size models do not consider the problem of order in packet sizes. For example, a TCP datagram in one direction is likely to be followed by a tiny ACK in the other direction about half of one Round-Trip Time (RTT) later. The problem of packet inter-arrival distribution is much more difficult. Understanding of network traffic has evolved significantly over the years, leading to a series of evolutions in network traffic models.
Self-similar traffic models
One of the earliest objections to self-similar traffic models was the difficulty in mathematical analysis. Existing self-similar models could not be used in conventional queuing models. This limitation was rapidly overturned and workable models were constructed. Once basic self-similar models became feasible, the traffic modeling community settled into the “detail” concerns. TCP’s congestion control algorithm complicated the matter of modeling traffic, so solutions needed to be created. Parameter estimation of self-similar models was always difficult, and recent research addresses ways to model network traffic without fully understanding it.
Fractional Brownian motion:
When self-similar traffic models were first introduced, there were no efficient, analytically tractable processes to generate the models. Ilkka Norros devised a stochastic process for a storage model with self-similar input and constant bit-rate output. While this initial model was continuous rather than discrete, the model was effective, simple, and attractive.
SWING:
All self-similar traffic models suffer from one significant drawback: estimating the self-similarity parameters from real network traffic requires huge amounts of data and takes extended computation. The most modern method, wavelet multi-resolution analysis, is more efficient, but still very costly. This is undesirable in a traffic model. SWING uses a surprisingly simple model for the network traffic analysis and generation. The model examines characteristics of users, Request-Response Exchanges (RREs), connections, individual packets, and the overall network. No attempt is made to analyze self-similarity characteristics; any self-similarity in the generated traffic comes naturally from the aggregation of many ON/OFF sources.
Pareto distribution process:
The Pareto distribution process produces independent and identically distributed (IID) inter-arrival times. In general, if X is a random variable with a Pareto distribution, then the probability that X is greater than some number x is given by P(X > x) = (x/x_m)^(−k) for all x ≥ x_m, where k is a positive parameter and x_m is the minimum possible value of X. The probability distribution and density functions are represented as:
F(t) = 1 − (α/t)^β, where α, β ≥ 0 and t ≥ α
f(t) = β α^β t^(−β−1)
The parameters β and α are the shape and location parameters, respectively. The Pareto distribution is applied to model self-similar arrivals in packet traffic. It is also referred to as the double exponential, power law distribution. Other important characteristics of the model are that the Pareto distribution has infinite variance when β ≤ 2 and infinite mean when β ≤ 1.
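As an illustrative sketch (not part of the original model description), Pareto-distributed inter-arrival times can be drawn by inverse-transform sampling of the distribution function F(t) = 1 − (α/t)^β given above; the function name and parameter values below are assumptions chosen only for demonstration:

```python
import random

def pareto_interarrival(alpha, beta):
    """One Pareto-distributed inter-arrival time by inverse transform:
    F(t) = 1 - (alpha/t)**beta  =>  t = alpha * u**(-1/beta), u ~ Uniform(0,1].
    alpha is the location (minimum) parameter, beta the shape parameter.
    """
    u = 1.0 - random.random()          # in (0, 1], avoids division by zero
    return alpha * u ** (-1.0 / beta)

# Heavy-tailed arrivals: beta <= 2 gives infinite variance
times = [pareto_interarrival(alpha=1.0, beta=1.5) for _ in range(10)]
print(times)
```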
Weibull distribution process:
The Weibull distributed process is heavy-tailed and can model the fixed rate in the ON period and the ON/OFF period lengths when producing self-similar traffic by multiplexing ON/OFF sources. The distribution function in this case is given by:
F(t) = 1 − e^(−(t/β)^α), t > 0
and the density function of the Weibull distribution is given as:
f(t) = α β^(−α) t^(α−1) e^(−(t/β)^α), t > 0
where the parameters β > 0 and α > 0 are the scale and shape parameters, respectively.
With a suitable shape parameter, the Weibull distribution is close to a normal distribution. For α ≤ 1 the density function of the distribution is L-shaped, and for α > 1 it is bell-shaped. For α > 1 the distribution gives a failure rate increasing with time; for α < 1 the failure rate decreases with time. At α = 1 the failure rate is constant and the lifetimes are exponentially distributed.
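A minimal sketch of sampling Weibull-distributed ON/OFF period lengths, assuming the α (shape) and β (scale) convention used in the formulas above; the parameter values are illustrative only:

```python
import random

def weibull_period(alpha, beta):
    """Sample one Weibull-distributed ON (or OFF) period length:
    F(t) = 1 - exp(-(t/beta)**alpha), with alpha = shape, beta = scale.
    Note: random.weibullvariate takes (scale, shape) in that order.
    """
    return random.weibullvariate(beta, alpha)

# alpha < 1 gives a heavy-tailed period distribution
periods = [weibull_period(alpha=0.6, beta=2.0) for _ in range(5)]
print(periods)
```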
Autoregressive models:
The autoregressive model is one of a group of linear prediction formulas that attempt to predict an output y_n of a system based on the previous set of outputs {y_k}, where k < n, and inputs x_n and {x_k}, where k < n. Several variations of the model differ in how the predictions are computed. When the model depends only on the previous outputs of the system, it is referred to as an autoregressive model. It is referred to as a moving average model (MAM) if it depends only on the inputs to the system. Finally, autoregressive moving-average models are those that depend on both the inputs and the outputs for prediction of the current output. The autoregressive model of order p, denoted AR(p), has the following form:
X_t = R_1 X_(t−1) + R_2 X_(t−2) + ... + R_p X_(t−p) + W_t
where W_t is white noise, the R_i are real numbers and the X_t are prescribed correlated random variables. The auto-correlation function of the AR(p) process consists of damped sine waves depending on whether the roots (solutions) of the model are real or imaginary. The discrete autoregressive model of order p, denoted DAR(p), generates a stationary sequence of discrete random variables with a probability distribution and with an auto-correlation structure similar to that of the autoregressive model of order p.[3]
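The recursion above is straightforward to simulate. The following is a hedged sketch of an AR(p) trace generator with Gaussian white noise and zero initial conditions; the coefficients shown are arbitrary illustrative values, not from the source:

```python
import random

def generate_ar(coeffs, n, noise_std=1.0):
    """Generate n samples from an AR(p) process:
    X_t = R1*X_{t-1} + ... + Rp*X_{t-p} + W_t, with W_t white Gaussian noise.
    """
    p = len(coeffs)
    x = [0.0] * p                                # zero initial conditions
    for _ in range(n):
        past = x[-p:][::-1]                      # X_{t-1}, ..., X_{t-p}
        value = sum(r * xv for r, xv in zip(coeffs, past))
        value += random.gauss(0.0, noise_std)    # W_t
        x.append(value)
    return x[p:]

# AR(2) example; coefficients chosen inside the stationarity region
trace = generate_ar([0.5, 0.3], n=1000)
```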
Regression models:
Regression models define the next random variable in the sequence explicitly in terms of previous ones within a specified time window, plus a moving average of white noise.[5]
TES models :
Transform-expand-sample (TES) models are non-linear regression models with modulo-1 arithmetic. They aim to capture both auto-correlation and marginal distribution of empirical data. TES models consist of two major TES processes: TES+ and TES–. TES+ produces a sequence which has positive correlation at lag 1, while TES– produces a negative correlation at lag 1.
Non-self-similar traffic models
Early traffic models were derived from telecommunications models and focused on simplicity of analysis. They generally operated under the assumption that aggregating traffic from a large number of sources tended to smooth out bursts; that burstiness decreased as the number of traffic sources increased.
Poisson distribution model:
One of the most widely used and oldest traffic models is the Poisson Model. The memoryless Poisson distribution is the predominant model used for analyzing traffic in traditional telephony networks. The Poisson process is characterized as a renewal process. In a Poisson process the inter-arrival times are exponentially distributed with a rate parameter λ: P{An ≤ t} = 1 – exp(-λt). The Poisson distribution is appropriate if the arrivals are from a large number of independent sources, referred to as Poisson sources. The distribution has a mean and variance equal to the parameter λ.
The Poisson distribution can be visualized as a limiting form of the binomial distribution, and is also used widely in queuing models. There are a number of interesting mathematical properties exhibited by Poisson processes. Primarily, superposition of independent Poisson processes results in a new Poisson process whose rate is the sum of the rates of the independent Poisson processes. Further, the independent increment property renders a Poisson process memoryless. Poisson processes are common in traffic applications scenarios that consist of a large number of independent traffic streams. The reason behind the usage stems from Palm's Theorem which states that under suitable conditions, such large number of independent multiplexed streams approach a Poisson process as the number of processes grows, but the individual rates decrease in order to keep the aggregate rate constant. Traffic aggregation need not always result in a Poisson process. The two primary assumptions that the Poisson model makes are:
1. The number of sources is infinite
2. The traffic arrival pattern is random.
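As an illustrative sketch of the Poisson model described above, inter-arrival times can be drawn from the exponential distribution with rate λ; the function name and rate value are assumptions for demonstration only:

```python
import random

def poisson_interarrivals(rate, n):
    """n exponentially distributed inter-arrival times with rate lambda:
    P{A_n <= t} = 1 - exp(-lambda * t)."""
    return [random.expovariate(rate) for _ in range(n)]

# 10 packet inter-arrival times at an average rate of 100 packets per second
gaps = poisson_interarrivals(rate=100.0, n=10)
arrival_times = [sum(gaps[:i + 1]) for i in range(len(gaps))]
print(arrival_times)
```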
Compound Poisson traffic models:
In the compound Poisson model, the base Poisson model is extended to deliver batches of packets at once. The inter-batch arrival times are exponentially distributed, while the batch size is geometric. Mathematically, this model has two parameters, λ, the arrival rate, and ρ in (0,1), the batch parameter. Thus, the mean number of packets in a batch is 1/ ρ, while the mean inter-batch arrival time is 1/ λ. Mean packet arrivals over time period t are tλ/ ρ.
The compound Poisson model shares some of the analytical benefits of the pure Poisson model: the model is still memoryless, aggregation of streams is still (compound) Poisson, and the steady-state equation is still reasonably simple to calculate, although varying batch parameters for differing flows would complicate the derivation.
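A minimal sketch of the compound Poisson model with the two parameters described above (exponential inter-batch times with rate λ, geometric batch sizes with mean 1/ρ); the function name and parameter values are illustrative assumptions:

```python
import random

def compound_poisson_batches(lam, rho, n_batches):
    """Batch arrivals: exponential inter-batch times with rate lam,
    geometric batch sizes on {1, 2, ...} with success probability rho
    (mean batch size 1/rho)."""
    batches = []
    t = 0.0
    for _ in range(n_batches):
        t += random.expovariate(lam)   # next batch arrival time
        size = 1
        while random.random() > rho:   # geometric number of packets
            size += 1
        batches.append((t, size))
    return batches

print(compound_poisson_batches(lam=50.0, rho=0.25, n_batches=5))
```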
Markov and Embedded Markov Models:
Markov models attempt to model the activities of a traffic source on a network, by a finite number of states. The accuracy of the model increases linearly with the number of states used in the model. However, the complexity of the model also increases proportionally with increasing number of states. An important aspect of the Markov model - the Markov Property, states that the next (future) state depends only on the current state. In other words, the probability of the next state, denoted by some random variable Xn+1, depends only on the current state, indicated by Xn, and not on any other state Xi, where i<n. The set of random variables referring to different states {Xn} is referred to as a Discrete Markov Chain.
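A hedged sketch of the simplest case, a two-state (ON/OFF) discrete Markov chain traffic source in which the next state depends only on the current state and a packet is emitted in every ON slot; the transition probabilities are illustrative assumptions:

```python
import random

def on_off_markov_source(p_on_to_off, p_off_to_on, n_slots):
    """Two-state Markov chain source: emits 1 packet per ON slot, 0 per OFF slot.
    The next state depends only on the current state (Markov property)."""
    state = "OFF"
    emissions = []
    for _ in range(n_slots):
        if state == "ON":
            emissions.append(1)
            if random.random() < p_on_to_off:
                state = "OFF"
        else:
            emissions.append(0)
            if random.random() < p_off_to_on:
                state = "ON"
    return emissions

trace = on_off_markov_source(p_on_to_off=0.1, p_off_to_on=0.3, n_slots=1000)
```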
Packet trains:
Another attempt at providing a bursty traffic model is found in Jain and Routhier’s Packet Trains model. This model was principally designed to recognize that address locality applies to routing decisions; that is, packets that arrive near each other in time are frequently going to the same destination. In generating a traffic model that allows for easier analysis of locality, the authors created the notion of packet trains, a sequence of packets from the same source, traveling to the same destination (with replies in the opposite direction). Packet trains are optionally sub-divided into tandem trailers. Traffic between a source and a destination usually consists of a series of messages back and forth. Thus, a series of packets go one direction, followed by one or more reply packets, followed by a new series in the initial direction. Traffic quantity is then a superposition of packet trains, which generates substantial bursty behavior. This refines the general conception of the compound Poisson model, which recognized that packets arrived in groups, by analyzing why they arrive in groups, and better characterizing the attributes of the group. Finally, the authors demonstrate that packet arrival times are not Poisson distributed, which led to a model that departs from variations on the Poisson theme. The packet train model is characterized by the following parameters and their associated probability distributions:
mean inter-train arrival time
mean inter-car arrival time
mean truck size (in the tandem trailer model)
mean train size.
The train model is designed for analyzing and categorizing real traffic, not for generating synthetic loads for simulation. Thus, little claim has been made about the feasibility of packet trains for generating synthetic traffic. Given accurate parameters and distributions, generation should be straightforward, but derivation of these parameters is not addressed.
Traffic models today
NS-2 is a popular network simulator; PackMimeHTTP is a web traffic generator for NS-2, published in 2004. It does take long-range dependencies into account, and uses the Weibull distribution. Thus, it relies on heavy tails to emulate true self-similarity. Over most time scales, the effort is a success; only a long-running simulation would allow a distinction to be drawn. This follows earlier suggestions that self-similar processes can be represented as a superposition of many sources, each individually modeled with a heavy-tailed distribution. It is clear that self-similar traffic models are in the mainstream.
See also
Traffic generation model
Traffic model
Network traffic simulation
References
History of telecommunications
Network theory
Mathematical modeling | History of network traffic models | Mathematics | 3,777 |
1,158,591 | https://en.wikipedia.org/wiki/Windows%20Installer | Windows Installer (msiexec.exe, previously known as Microsoft Installer, codename Darwin) is a software component and application programming interface (API) of Microsoft Windows used for the installation, maintenance, and removal of software. The installation information, and optionally the files themselves, are packaged in installation packages, loosely relational databases structured as COM Structured Storages and commonly known as "MSI files", from their default filename extensions. The packages with the file extensions mst contain Windows Installer "Transformation Scripts", those with the msm extensions contain "Merge Modules" and the file extension pcp is used for "Patch Creation Properties". Windows Installer contains significant changes from its predecessor, Setup API. New features include a GUI framework and automatic generation of the uninstallation sequence. Windows Installer is positioned as an alternative to stand-alone executable installer frameworks such as older versions of InstallShield and NSIS.
Before the introduction of Microsoft Store (then named Windows Store), Microsoft encouraged third parties to use Windows Installer as the basis for installation frameworks, so that they synchronize correctly with other installers and keep the internal database of installed products consistent. Important features such as rollback and versioning depend on a consistent internal database for reliable operation. Furthermore, Windows Installer facilitates the principle of least privilege by performing software installations by proxy for unprivileged users.
Logical structure of packages
A package describes the installation of one or more full products and is universally identified by a GUID. A product is made up of components, grouped into features. Windows Installer does not handle dependencies between products.
Products
A single, installed, working program (or set of programs) is a product. A product is identified by a unique GUID (the ProductCode property) providing an authoritative identity throughout the world. The GUID, in combination with the version number (ProductVersion property), allows for release management of the product's files and registry keys.
A package includes the package logic and other metadata that relates to how the package executes when running. For example, changing an EXE file in the product may require the ProductCode or ProductVersion to be changed for the release management. However, merely changing or adding a launch condition (with the product remaining exactly the same as the previous version) would still require the PackageCode to change for release management of the MSI file itself.
Features
A feature is a hierarchical group of components. A feature may contain any number of components and other sub-features. Smaller packages can consist of a single feature. More complex installers may display a "custom setup" dialog box, from which the user can select which features to install or remove.
The package author defines the product features. A word processor, for example, might place the program's core file into one feature, and the program's help files, optional spelling checker and stationery modules into additional features.
Components
A component is the basic unit of a product. Each component is treated by Windows Installer as a unit. The installer cannot install just part of a component. Components can contain program files, folders, COM components, registry keys, and shortcuts. The user does not directly interact with components.
Components are identified globally by GUIDs; thus the same component can be shared among several features of the same package or multiple packages, ideally through the use of Merge Modules.
Key paths
A key path is a specific file, registry key, or ODBC data source that the package author specifies as critical for a given component. Because a file is the most common type of key path, the term key file is commonly used. A component can contain at most one key path; if a component has no explicit key path, the component's destination folder is taken to be the key path. When an MSI-based program is launched, Windows Installer checks the existence of key paths. If there is a mismatch between the current system state and the value specified in the MSI package (e.g., a key file is missing), the related feature is re-installed. This process is known as self-healing or self-repair. No two components should use the same key path.
Developing installer packages
Creating an installer package for a new application is not trivial. It is necessary to specify which files must be installed, to where and with what registry keys. Any non-standard operations can be done using Custom Actions, which are typically developed in DLLs. There are a number of commercial and freeware products to assist in creating MSI packages, including Visual Studio (natively up to VS 2010, with an extension on newer VS versions), InstallShield and WiX. To varying degrees, the user interface and behavior may be configured for use in less common situations such as unattended installation. Once prepared, an installer package is "compiled" by reading the instructions and files from the developer's local machine, and creating the .msi file.
Windows Installer may be slower than native code installation technologies, such as InstallAware, due to the overhead of component registration and rollback support, which often involves generating tens of thousands of registry keys and temporary files.
The user interface (dialog boxes) presented at the start of installation can be changed or configured by the setup engineer developing a new installer. There is a limited language of buttons, text fields and labels which can be arranged in a sequence of dialogue boxes. An installer package should be capable of running without any UI, for what is called "unattended installation".
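For illustration, an unattended installation is typically triggered by invoking msiexec with its standard switches (/i to install, /qn to suppress all UI, /l*v for a verbose log). The sketch below wraps such a call in Python; the package and log paths are hypothetical and Windows is assumed:

```python
import subprocess

# Hypothetical package and log paths; /i installs, /qn runs with no UI,
# /l*v writes a verbose log -- standard msiexec switches for silent installs.
result = subprocess.run(
    ["msiexec", "/i", r"C:\packages\example.msi", "/qn",
     "/l*v", r"C:\logs\install.log"],
    check=False,
)
# 0 means success; 3010 means success but a reboot is required.
print("msiexec exit code:", result.returncode)
```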
ICE validation
Microsoft provides a set of Internal Consistency Evaluators (ICE) that can be used to detect potential problems with an MSI database. The ICE rules are combined into CUB files, which are stripped-down MSI files containing custom actions that test the target MSI database's contents for validation warnings and errors. ICE validation can be performed with the Platform SDK tools Orca and msival2, or with validation tools that ship with the various authoring environments.
For example, some of the ICE rules are:
ICE09: Validates that any component destined for the System folder is marked as being permanent.
ICE24: Validates that the product code, product version, and product language have appropriate formats.
ICE33: Validates that the Registry table is not used for data better suited for another table (Class, Extension, Verb, and so on).
Addressing ICE validation warnings and errors is an important step in the release process.
Versions
See also
APPX – Software package format used on Microsoft's Windows Store
App-V – Software package format used for virtualization and streaming
.exe
List of installation software
Package management system
Windows Installer CleanUp Utility
Windows Package Manager
ZAP file – a way to perform an application installation when no MSI file exists
References
External links
Installation software
Microsoft application programming interfaces
Windows administration
Windows components
Windows-only software | Windows Installer | Technology | 1,426 |
66,771,918 | https://en.wikipedia.org/wiki/Tissiflashmob | Tissiflashmob (Finnish for "tits flash mob") was a demonstration organised by Sandra Marins and Säde Vallarén, which was held for the first time in June 2019 at the Hietaniemi beach in Helsinki, Finland. The organisers held the demonstration in criticism for a previous event where a woman had been removed from a beach for sunbathing topless.
The demonstration was held for a second time in 2020 in eight different cities and also online. This time the demonstration was held by the feminist activist group Cult Cunth.
References
Demonstrations
Nudity
Feminist protests
2019 in Finland
2020 in Finland
2010s in Helsinki
2020s in Helsinki
Feminism in Finland
Events in Helsinki
Protests in Finland
2019 protests
2020 protests
Breast
Sun tanning
Flash mob
Finnish words and phrases
2019 in women's history
2020 in women's history | Tissiflashmob | Chemistry | 170 |
20,208,243 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20dipeptidyl%20peptidase-4%20inhibitors | Dipeptidyl peptidase-4 inhibitors (DPP-4 inhibitors) are enzyme inhibitors that inhibit the enzyme dipeptidyl peptidase-4 (DPP-4). They are used in the treatment of type 2 diabetes mellitus. Inhibition of the DPP-4 enzyme prolongs and enhances the activity of incretins that play an important role in insulin secretion and blood glucose control regulation.
Type 2 diabetes mellitus is a chronic metabolic disease that results from inability of the β-cells in the pancreas to secrete sufficient amounts of insulin to meet the body's needs. Insulin resistance and increased hepatic glucose production can also play a role by increasing the body's demand for insulin. Current treatments, other than insulin supplementation, are sometimes not sufficient to achieve control and may cause undesirable side effects, such as weight gain and hypoglycemia. In recent years, new drugs have been developed, based on continuing research into the mechanism of insulin production and regulation of the metabolism of sugar in the body. The enzyme DPP-4 has been found to play a significant role.
History
Since its discovery in 1967, serine protease DPP-4 has been a popular subject of research. Inhibitors of DPP-4 have long been sought as tools to elucidate the functional significance of the enzyme. The first inhibitors were characterized in the late 1980s and 1990s. Each inhibitor was important to establish an early structure activity relationship (SAR) for subsequent investigation. The inhibitors fall into two main classes, those that interact covalently with DPP-4 and those that do not. DPP-4 is a dipeptidase that selectively binds substrates that contain proline at the P1-position, thus many DPP-4 inhibitors have 5-membered heterocyclic rings that mimic proline, e.g. pyrrolidine, cyanopyrrolidine, thiazolidine and cyanothiazolidine. These compounds commonly form covalent bonds to the catalytic residue Ser630.
In 1994, researchers from Zeria Pharmaceuticals unveiled cyanopyrrolidines with a nitrile function group that was assumed to form an imidate with the catalytic serine. Concurrently other DPP-4 inhibitors without a nitrile group were published but they contained other serine-interacting motifs, e.g. boronic acids, phosphonates or diacyl hydroxylamines. These compounds were not as potent because of the similarity of DPP-4 and prolyl oligopeptidase (PEP) and also suffered from chemical instability. Ferring Pharmaceuticals filed for patent on two cyanopyrrolidine DPP-4 inhibitors, which they published in 1995. These compounds had excellent potency and improved chemical stability.
In 1995, Edwin B. Villhauer at Novartis started to explore N-substituted glycinyl-cyanopyrrolidines based on the fact that DPP-4 identifies N-methylglycine as an N-terminal amino acid. This group of new cyanopyrrolidines became an extremely popular field of research in the following years. Some dual inhibitors of DPP-4 and vasopeptidase have been reported, since vasopeptidase inhibition is believed to enhance the antidiabetic effect of DPP-4 inhibition by stimulating insulin secretion. The vasopeptidase-inhibiting motif is connected to the DPP-4 inhibitor at the N-substituent.
DPP-4 mechanism
Fig.1: During a meal, the incretins glucagon-like peptide 1 (GLP-1) and glucose-dependent gastric inhibitory polypeptide (GIP) are released by the small intestine into the blood stream. These hormones regulate insulin secretion in a glucose-dependent manner. (GLP-1 has many roles in the human body. It stimulates insulin biosynthesis, inhibits glucagon secretion, slows gastric emptying, reduces appetite and stimulates regeneration of islet β-cells.)
GLP-1 and GIP have extremely short plasma half-lives due to very rapid inactivation, catalyzed by the enzyme DPP-4. Inhibition of DPP-4 slows their inactivation, thereby potentiating their action, leading to lower plasma glucose levels, hence its utility in the treatment of type 2 diabetes. (Figure 1).
DPP-4 distribution and function
DPP-4 is attached to the plasma membrane of the endothelium of almost every organ in the body. Tissues which strongly express DPP-4 include the exocrine pancreas, sweat glands, salivary and mammary glands, thymus, lymph nodes, biliary tract, kidney, liver, placenta, uterus, prostate, skin, and the capillary bed of the gut mucosa (where most GLP-1 is inactivated locally). It is also present, in soluble form, in body fluids, such as blood plasma and cerebrospinal fluid. (It also happens that DPP-4 is the CD26 T-cell activating antigen.)
DPP-4 selectively cleaves two amino acids from peptides, such as GLP-1 and GIP, which have proline or alanine in the second position (Figure 2). At the active site where DPP-4 has its effect, there is a characteristic arrangement of three amino acids, Asp-His-Ser. Since alanine and proline are crucial for the biological activity of GPL-1 and GIP, they are inactivated by cleaving away these amino acids. Thus, preventing the degradation of the incretin hormones GLP-1 and GIP by inhibition of DPP-4 has potential as a therapeutic strategy in the treatment of type 2 diabetes.
DPP-4 characteristics
Since DPP-4 is a protease, it is not unexpected that inhibitors would likely have a peptide nature and this theme has carried through to contemporary research.
Structure
X-ray structures of DPP-4 that have been published since 2003 give rather detailed information about the structural characteristics of the binding site. Many structurally diverse DPP-4 inhibitors have been discovered and it is not that surprising considering the properties of the binding site:
1. A deep lipophilic pocket combined with several exposed aromatic side chains for achieving high affinity small molecule binding.
2. A significant solvent access that makes it possible to tune the physico-chemical properties of the inhibitors that leads to better pharmacokinetic behavior.
DPP-4 is a 766-amino acid transmembrane glycoprotein that belongs to the prolyl oligopeptidase family. It consists of three parts: a cytoplasmic tail, a transmembrane region and an extracellular part. The extracellular part is divided into a catalytic domain and an eight-bladed β-propeller domain. The latter contributes to the inhibitor binding site. The catalytic domain shows an α/β-hydrolase fold and contains the catalytic triad Ser630–Asp708–His740. The S1-pocket is very hydrophobic and is composed of the side chains Tyr631, Val656, Trp662, Tyr666 and Val711. Existing X-ray structures show that there is not much difference in the size and shape of the pocket, which indicates that the S1-pocket has high specificity for proline residues.
Binding site
DPP-4 inhibitors usually have an electrophilic group that can interact with the hydroxyl of the catalytic serine in the active binding site (Figure 3). Frequently that group is a nitrile group but it can also be a boronic acid or diphenyl phosphonate. This electrophilic group can form a covalent imidate adduct with the catalytic serine, giving slow, tight-binding kinetics, but it is also responsible for stability issues due to reactions with the free amino group of the P2-amino acid. Therefore, inhibitors without the electrophilic group have also been developed, but these molecules have shown toxicity due to affinity to other dipeptidyl peptidases, e.g. DPP-2, DPP-8 and DPP-9.
DPP-4 inhibitors span diverse structural types. As of 2007, a few of the most potent compounds contained a proline-mimetic cyanopyrrolidine P1 group. This group enhances the potency, probably due to a transient covalent trapping of the nitrile group by the active-site Ser630 hydroxyl, leading to delayed dissociation and slow, tight binding of certain inhibitors. When these potency enhancements were achieved, some chemical stability issues were noted and more advanced molecules had to be made. To avoid these stability issues, the possibility of excluding the nitrile group was investigated. Amino acids with aryl or polar side chains did not show appreciable DPP-4 inhibition and, in fact, all compounds without the nitrile group in this research suffered a 20- to 50-fold loss of potency compared with the compounds containing the nitrile group.
Discovery and development
It is important to find a fast and accurate system to discover new DPP-4 inhibitors with ideal therapeutic profiles. High-throughput screening (HTS) usually gives low hit rates in identifying the inhibitors but virtual screening (VS) can give higher rates. VS has, for example, been used to screen for small primary aliphatic amines to identify fragments that could be placed in the S1 and S2 sites of DPP-4. However, these fragments were not very potent and were therefore used as a starting point for designing better ones.
Three-dimensional models can provide a useful tool for designing novel DPP-4 inhibitors. Pharmacophore models have been made based on key chemical features of compounds with DPP-4 inhibitory activity. These models can provide a hypothetical picture of the primary chemical feature responsible for inhibitory activity.
The first DPP-4 inhibitors were reversible inhibitors and came with bad side effects because of low selectivity. Researchers suspected that inhibitors with short half-lives would be preferred in order to minimize possible side effects. However, since clinical trials showed the opposite, the latest DPP-4 inhibitors have a long-lasting effect. One of the first reported DPP-4 inhibitors was P32/98 from Merck. It used thiazolidide as the P1-substitute and was the first DPP-4 inhibitor that showed effects in both animals and humans, but it was not developed into a marketed drug due to side effects. Another old inhibitor is DPP-728 from Novartis, where 2-cyanopyrrolidine is used as the P1-substitute. The addition of the cyano group generally increases the potency. Therefore, researchers' attention was directed to those compounds. Usually, DPP-4 inhibitors are either substrate-like or non-substrate-like.
Substrate-like inhibitors
Substrate-like inhibitors (Figure 4) are more common than the non-substrate-likes. They bind either covalently or non-covalently and have a basic structure where the P1-substituent occupies the S1-pocket and the P2-substituent occupies the S2-pocket. Usually they contain a proline mimetic that occupies the S1-pocket. Large substituents on the 2-cyanopyrrolidine ring are normally not tolerated since the S1-pocket is quite small.
Since DPP-4 is identical with the T-cell activation marker CD26 and DPP-4 inhibitors are known to inhibit T-cell proliferation, these compounds were initially thought to be potential immunomodulators. When the function against type 2 diabetes was discovered, the cyanopyrrolidines became a highly popular research material. A little later vildagliptin and saxagliptin, which are the most developed cyanopyrrolidine DPP-4 inhibitors to date, were discovered.
Cyanopyrrolidines
Cyanopyrrolidines have two key interactions to the DPP-4 complex:
1. Nitrile in the position of the scissile bond of the peptidic substrate that is important for high potency. The nitrile group forms reversible covalent bonds with the catalytically active serine hydroxyl (Ser630), i.e. cyanopyrrolidines are competitive inhibitors with slow dissociation kinetics.
2. Hydrogen bonding network between the protonated amino group and a negatively charged region of the protein surface, Glu205, Glu206 and Tyr662. All cyanopyrrolidines have basic, primary or secondary amine, which makes this network possible but these compounds usually drop in potency if these amines are changed. Nonetheless, two patent applications unveil that the amino group can be changed, i.e. replaced by a hydrazine, but it is claimed that these compounds do not only act via DPP-4 inhibition but also prevent diabetic vascular complications by acting as a radical scavenger.
Structure-activity relationship (SAR)
Important structure-activity relationship:
1. Strict steric constraint exists around the pyrrolidine ring of cyanopyrrolidine-based inhibitors, with only hydrogen, fluoro, acetylene, nitrile, or methano substitution permitted.
2. Presence of a nitrile moiety on the pyrrolidine ring is critical to achieving potent activity
Also, systematic SAR investigation has shown that the ring size and stereochemistry at the P2 position are quite constrained. A 5-membered ring with L-configuration has shown better results than a 4-membered or 6-membered ring with D-configuration. Only minor changes on the pyrrolidine ring can be tolerated, since the good fit of the ring with the hydrophobic S1 pocket is very important for high affinity. Some trials have been made, e.g. by replacing the pyrrolidine with a thiazoline. That led to improved potency but also loss of chemical stability. Efforts to improve chemical stability often led to loss of specificity because of interactions with DPP-8 and DPP-9. These interactions have been connected with increased toxicity and mortality in animals. There are strict limitations in the P1 position and hardly any changes are tolerated. On the other hand, a variety of changes can be made in the P2 position. In fact, substitution with quite big branched side chains, e.g. tert-butylglycine, normally increased activity and chemical stability, which could lead to longer-lasting inhibition of the DPP-4 enzyme. It has also been noted that biaryl-based side chains can give highly active inhibitors. It was originally believed that only lipophilic substitution would be tolerated, but it is now known that polar, negatively charged side chains as well as hydrophilic substitution can also lead to excellent inhibitory activity.
Chemical stability
In general, DPP-4 inhibitors are not very stable compounds. Therefore, many researchers focus on enhancing the stability of cyanopyrrolidines. The most widespread technique to improve chemical stability is to incorporate steric bulk. The two most prominent cyanopyrrolidines, vildagliptin and saxagliptin, were created in this manner. K579 is a DPP-4 inhibitor discovered by researchers at Kyowa Hakko Kogyo. It showed not only improved chemical stability but also a longer-lasting action. That long-lasting action was most likely due to slow dissociation of the enzyme-inhibitor complex and an active oxide metabolite that undergoes enterohepatic circulation. The discovery of the active oxide was in fact a big breakthrough, as it led to the development of vildagliptin and saxagliptin. One major problem in DPP-4 inhibitor stability is intramolecular cyclization. The precondition for the intramolecular cyclization is the conversion of the trans-rotamer, which is the DPP-4 binding rotamer, to the cis-rotamer (Figure 5). Thus, preventing this conversion will increase stability. This prevention was successful when incorporating an amide group into a ring, creating a compound that kept the DPP-4 inhibitory activity, did not undergo the intramolecular cyclization, and was even more selective over different DPP enzymes. It has also been reported that a cyanoazetidine in the P1 position and a β-amino acid in the P2 position increased stability.
Vildagliptin
Vildagliptin (Galvus) (Figure 6) was first synthesized in May 1998 and was named after Edwin B. Villhauer. It was discovered when researchers at Novartis examined adamantyl derivatives that had proven to be very potent. The adamantyl group worked as a steric bulk and slowed intramolecular cyclization while increasing chemical stability. Furthermore, the primary metabolites were highly active. To avoid an additional chiral center, hydroxylation of the adamantyl ring was carried out (Figure 6). The product, vildagliptin, was even more stable, undergoing intramolecular cyclization 30 times slower, and had high DPP-4 inhibitory activity and a longer-lasting pharmacodynamic effect.
Saxagliptin
Researchers at Bristol-Myers Squibb found that increased steric bulk of the N-terminal amino acid side-chain led to increased stability. To increase stability further, the trans-rotamer was stabilized with a cis-4,5-methano substitution of the pyrrolidine ring, resulting in an intramolecular van der Waals interaction and thus preventing intramolecular cyclization. Because of that increased stability, the researchers continued their investigation of cis-4,5-methano cyanopyrrolidines and came across a new adamantyl derivative, which showed extraordinary ex vivo DPP-4 inhibition in rat plasma. They also noted a high microsomal turnover rate, which indicated that the derivative was quickly converted to an active metabolite. After hydroxylation of the adamantyl group they had a product with better microsomal stability and improved chemical stability. That product was named saxagliptin (Onglyza) (Figure 6). In June 2008 AstraZeneca and Bristol-Myers Squibb submitted a new drug application for Onglyza in the United States and a marketing authorization application in Europe. Approval was granted in the United States by the FDA in July 2009 for Onglyza 5 mg and Onglyza 2.5 mg. This was later combined with extended-release metformin (taken once daily) and approved by the FDA in January 2011 under the trade name Kombiglyze XR.
Denagliptin
Denagliptin (Figure 6) is an advanced compound with a branched side-chain at the P2 position, and it also has a (4S)-fluoro substitution on the cyanopyrrolidine ring. It is a well-known DPP-4 inhibitor developed by GlaxoSmithKline (GSK). Biological evaluations have shown that the S-configuration of the amino acid portion is essential for the inhibitory activity, since the R-configuration showed much weaker inhibition. These findings will be useful in the future design and synthesis of DPP-4 inhibitors. GSK suspended Phase III clinical trials in October 2008.
Azetidine based compounds
Information on this group of inhibitors is quite limited. Azetidine-based DPP-4 inhibitors can roughly be grouped into three main subcategories: 2-cyanoazetidines, 3-fluoroazetidines, and 2-ketoazetidines. The most potent ketoazetidines and cyanoazetidines have large hydrophobic amino acid groups bound to the azetidine nitrogen and are active below 100 nM.
Non-substrate-like inhibitors
Non-substrate-like inhibitors do not take after dipeptidic nature of DPP-4 substrates. They are non-covalent inhibitors and usually have an aromatic ring that occupies the S1-pocket, instead of the proline mimetic.
In 1999, Merck started a drug development program on DPP-4 inhibitors. When they started their internal screening and medicinal chemistry program, two DPP-4 inhibitors were already in clinical trials: isoleucyl thiazolidide (P32/98) and NVP-DPP728 from Novartis. Merck in-licensed L-threo-isoleucyl thiazolidide and its allo stereoisomer. In animal studies, they found that both isomers had similar affinity for DPP-4, similar in vivo efficacy, and similar pharmacokinetic and metabolic profiles. Nevertheless, the allo isomer was 10-fold more toxic. The researchers found that this difference in toxicity was due to the allo isomer's greater inhibition of DPP-8 and DPP-9 and not because of selective DPP-4 inhibition. More research also supported that DPP-4 inhibition would not cause compromised immune function. Once this link between affinity for DPP-8/DPP-9 and toxicity was discovered, Merck decided to identify an inhibitor with more than a thousandfold affinity for DPP-4 over the other dipeptidases. For this purpose, they used positional scanning libraries. From scanning these libraries, the researchers discovered that both DPP-4 and DPP-8 showed a strong preference for breaking down peptides with a proline at the P1 position, but they found a great difference at the P2 site; i.e., they found that acidic functionality at the P2 position could provide a greater affinity for DPP-4 over DPP-8. Merck continued with further research and screening. They stopped working on compounds from the α-amino acid series related to isoleucyl thiazolidide due to lack of selectivity; instead they discovered a very selective β-amino acid piperazine series through SAR studies on two screening leads. When trying to stabilize the piperazine moiety, a group of bicyclic derivatives was made, which led to the identification of a potent and selective triazolopiperazine series. Most of these analogs showed excellent pharmacokinetic properties in preclinical species. Optimization of these compounds finally led to the discovery of sitagliptin.
Sitagliptin
Sitagliptin (Januvia) has a novel structure based on β-amino amide derivatives (Figure 7). Because sitagliptin has shown excellent selectivity and in vivo efficacy, its discovery prompted researchers to investigate this new class of DPP-4 inhibitors with an appended β-amino acid moiety. Further studies are under way to optimize these compounds for the treatment of diabetes.
In October 2006 sitagliptin became the first DPP-4 inhibitor that got FDA approval for the treatment of type 2 diabetes. Crystallographic structure of sitagliptin along with molecular modeling has been used to continue the search for structurally diverse inhibitors. A new potent, selective and orally bioavailable DPP-4 inhibitor was discovered by replacing the central cyclohexylamine in sitagliptin with 3-aminopiperidine. A 2-pyridyl substitution was the initial SAR breakthrough since that group plays a significant role in potency and selectivity for DPP-4.
X-ray crystallography has shown how sitagliptin binds to the DPP-4 complex:
1. The trifluorophenyl group occupies the S1-pocket.
2. The trifluoromethyl group interacts with the side chains of residues Arg358 and Ser209.
3. The amino group forms a salt bridge with Tyr662 and the carboxylate groups of the two glutamate residues, Glu205 and Glu206.
4. The triazolopiperazine group stacks against the phenyl group of residue Phe357.
Constrained phenylethylamine compounds
Researchers at Abbott Laboratories identified three novel series of DPP-4 inhibitors using HTS. After further research and optimization, ABT-341 was discovered (Figure 8). It is a potent and selective DPP-4 inhibitor whose 2D structure is very similar to that of sitagliptin; the 3D structure, however, is quite different. ABT-341 also has a trifluorophenyl group that occupies the S1-pocket and a free amino group, but its two carbonyl groups are oriented 180° away from each other. ABT-341 is also believed to interact with Tyr547, probably because of steric hindrance between the cyclohexenyl ring and the tyrosine side chain. Omarigliptin is one such compound; it has been in Phase III development by Merck & Co.
Pyrrolidine compounds
The pyrrolidine type of DPP-4 inhibitors was first discovered through HTS. Research showed that the pyrrolidine ring was the part of these compounds that fits into the binding site. Further development has led to fluoro-substituted pyrrolidines with superior activity, as well as highly active pyrrolidines with fused cyclopropyl rings.
Xanthine-based compounds
This is a different class of inhibitors that was identified by HTS. Aromatic heterocyclic-based DPP-4 inhibitors have gained increased attention recently. The first patents describing xanthines (Figure 10) as DPP-4 inhibitors came from Boehringer Ingelheim (BI) and Novo Nordisk.
When compared with sitagliptin and vildagliptin, xanthine-based DPP-4 inhibitors have shown a superior profile. Xanthines are believed to have higher potency, longer-lasting inhibition and longer-lasting improvement of glucose tolerance.
Alogliptin
Alogliptin (Figure 9) is a novel DPP-4 inhibitor developed by the Takeda Pharmaceutical Company. Researchers hypothesized that a quinazolinone-based structure (Figure 9) would have the groups necessary to interact with the active site of the DPP-4 complex. Quinazolinone-based compounds did interact effectively with DPP-4 and showed potent inhibition and excellent selectivity over the related protease DPP-8, but they suffered from a short metabolic half-life caused by oxidation of the A-ring phenyl group. At first, the researchers prepared a fluorinated derivative, which showed improved metabolic stability and excellent inhibition of the DPP-4 enzyme; however, it was also found to inhibit CYP450 3A4 and to block the hERG channel. The solution was to replace the quinazolinone with other heterocycles, and it was found that the quinazolinone could be replaced without any loss of DPP-4 inhibition. Alogliptin was discovered when the quinazolinone was replaced with a pyrimidinedione, which increased metabolic stability and gave a potent, selective, bioavailable DPP-4 inhibitor. Alogliptin has shown excellent inhibition of DPP-4 and extraordinary selectivity, greater than 10,000-fold over the closely related serine proteases DPP-8 and DPP-9. It also does not inhibit the CYP450 enzymes or block the hERG channel at concentrations up to 30 μM. Based on these data, alogliptin was chosen for preclinical evaluation. In January 2007, alogliptin was undergoing phase III clinical trials, and in October 2008 it was under review by the U.S. Food and Drug Administration.
Linagliptin
Researchers at BI discovered that using a but-2-ynyl group resulted in a potent candidate, called BI-1356 (Figure 10). In 2008, BI-1356 was undergoing phase III clinical trials; it was released as linagliptin in May 2011. X-ray crystallography has shown that this xanthine type binds to the DPP-4 complex in a different way than other inhibitors:
1. The amino group also interacts with Glu205, Glu206 and Tyr662.
2. The but-2-ynyl group occupies the S1-pocket.
3. The uracil group undergoes a π-stacking interaction with the Tyr547 residue.
4. The quinazoline group undergoes a π-stacking interaction with the Trp629 residue.
Pharmacology
The pharmacokinetic properties of sitagliptin and vildagliptin appear unaffected by age, sex or BMI. Clinical studies have shown that sitagliptin and vildagliptin do not have the side effects that tend to accompany other type 2 diabetes treatments, such as weight gain and hypoglycemia; however, other side effects have been observed, including upper respiratory tract infections, sore throat and diarrhea.
See also
Dipeptidyl peptidase-4
Dipeptidyl peptidase-4 inhibitors
Linagliptin
Vildagliptin
Sitagliptin
Saxagliptin
Berberine
Teneligliptin
Gosogliptin
References
Dipeptidyl peptidase-4 inhibitors
Dipeptidyl peptidase-4 inhibitors | Discovery and development of dipeptidyl peptidase-4 inhibitors | Chemistry,Biology | 6,260 |
490,668 | https://en.wikipedia.org/wiki/Immunoglobulin%20A | Immunoglobulin A (Ig A, also referred to as sIgA in its secretory form) is an antibody that plays a role in the immune function of mucous membranes. The amount of IgA produced in association with mucosal membranes is greater than all other types of antibody combined. In absolute terms, between three and five grams are secreted into the intestinal lumen each day. This represents up to 15% of total immunoglobulins produced throughout the body.
IgA has two subclasses (IgA1 and IgA2) and can be produced as a monomeric as well as a dimeric form. The IgA dimeric form is the most prevalent and, when it has bound the Secretory component, is also called secretory IgA (sIgA). sIgA is the main immunoglobulin found in mucous secretions, including tears, saliva, sweat, colostrum and secretions from the genitourinary tract, gastrointestinal tract, prostate and respiratory epithelium. It is also found in small amounts in blood. The secretory component of sIgA protects the immunoglobulin from being degraded by proteolytic enzymes; thus, sIgA can survive in the harsh gastrointestinal tract environment and provide protection against microbes that multiply in body secretions. sIgA can also inhibit inflammatory effects of other immunoglobulins. IgA is a poor activator of the complement system, and opsonizes only weakly.
Forms
IgA1 vs. IgA2
IgA exists in two isotypes, IgA1 and IgA2. They are both heavily glycosylated proteins. While IgA1 predominates in serum (~80%), IgA2 percentages are higher in secretions than in serum (~35% in secretions); the ratio of IgA1 and IgA2 secreting cells varies in the different lymphoid tissues of the human body:
IgA1 is the predominant IgA subclass found in serum. Most lymphoid tissues have a predominance of IgA1-producing cells.
In IgA2, the heavy and light chains are not linked by disulfide bonds but by non-covalent bonds. In secretory lymphoid tissues (e.g., gut-associated lymphoid tissue, or GALT), the share of IgA2 production is larger than in the non-secretory lymphoid organs (e.g., spleen, peripheral lymph nodes).
Both IgA1 and IgA2 have been found in external secretions like colostrum, maternal milk, tears and saliva, where IgA2 is more prominent than in the blood.
Polysaccharide antigens tend to induce more IgA2 than protein antigens.
Both IgA1 and IgA2 can be in membrane-bound form. (see B-cell receptor)
The heavy chain of IgA1, in contrast to IgA2, features an extended hinge region. This is thought to allow IgA1 to adapt more effectively to varying epitope spacings on multivalent antigens, while also presenting less resistance to bacterial proteases.
Serum vs. secretory IgA
It is also possible to distinguish forms of IgA based upon their location – serum IgA vs. secretory IgA. In serum, IgA is predominantly monomeric, with a minor population of IgA polymers (dimers in healthy individuals).
IgA has the unique ability to be secreted by B cells either as a monomer or as a covalently-linked polymer of multiple IgA subunits. Polymers of 2–4 IgA monomers, but most commonly 2, are covalently linked to one molecule of the J chain (joining chain) inside the B cell prior to secretion as a J chain-coupled polymeric IgA molecule. The J chain is a polypeptide with a backbone molecular mass of 15 kDa but typically ~18 kDa when glycosylated, rich with cysteine and structurally completely different from other immunoglobulin chains. As such, the molecular weight of a J chain-coupled dimer of IgA is ~340 kDa.
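Assuming a typical IgA monomer mass of roughly 160 kDa (an approximate, illustrative value), the quoted dimer mass follows from simple addition: 2 × 160 kDa + 18 kDa ≈ 340 kDa.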
The oligomeric forms of IgA in the external (mucosal) secretions also contain a polypeptide of a much larger molecular mass (70 kDa) called the secretory component that is produced by epithelial cells. This molecule originates from the poly-Ig receptor (130 kDa) that is responsible for the uptake and transcellular transport of J chain-containing polymeric (but not monomeric, which is devoid of J chain) IgA across the epithelial cells and into secretions such as tears, saliva, sweat and gut fluid.
Physiology
Serum IgA
In the blood, IgA interacts with an Fc receptor called FcαRI (or CD89), which is expressed on immune effector cells, to initiate inflammatory reactions. Ligation of FcαRI by IgA containing immune complexes causes antibody-dependent cell-mediated cytotoxicity (ADCC), degranulation of eosinophils and basophils, phagocytosis by monocytes, macrophages, and neutrophils, and triggering of respiratory burst activity by polymorphonuclear leukocytes. Unlike IgM and IgG, which activate complement through the classical pathway, IgA can activate complement via the alternative and lectin pathways.
Secretory IgA
The high prevalence of IgA in mucosal areas is a result of cooperation between plasma cells that produce polymeric IgA (pIgA) and mucosal epithelial cells that express the polymeric immunoglobulin receptor (pIgR). Polymeric IgA (mainly the secretory dimer) is produced by plasma cells in the lamina propria adjacent to mucosal surfaces. It binds to the pIgR on the basolateral surface of epithelial cells and is taken up into the cell via endocytosis. The receptor-IgA complex passes through the cellular compartments before being secreted on the luminal surface of the epithelial cells, still attached to the receptor. Proteolysis of the receptor occurs, and the dimeric IgA molecule, along with a portion of the receptor known as the secretory component (SC), is free to diffuse throughout the lumen; dimeric IgA and SC together form the so-called secretory IgA (sIgA). In the gut, IgA can bind to the mucus layer covering the epithelial cells. In this way, a barrier capable of neutralizing threats before they reach the epithelial cells is formed.
Secretory IgA levels fluctuate diurnally, with the highest levels found in the small intestine and feces around ZT6, the middle of the light period. The regulation of IgA secretion is related to the microbiota, and IgA is known to control specific members of oscillating microbes through direct interactions. However, the underlying cause of the rhythmic secretion of IgA is not completely understood and may differ from one region of the body to another.
Production of sIgA against specific antigens depends on sampling of M cells and underlying dendritic cells, T cell activation, and B cell class switching in GALT, mesenteric lymph nodes, and isolated lymphoid follicles in the small intestine.
sIgA primarily acts by blockading epithelial receptors (e.g., by binding their ligands on pathogens), by sterically hindering attachment to epithelial cells, and by immune exclusion. Immune exclusion is a process of agglutinating polyvalent antigens or pathogens by crosslinking them with antibody, trapping them in the mucus layer, and/or clearing them peristaltically. The oligosaccharide chains of the secretory component of sIgA can associate with the mucus layer that sits atop epithelial cells. Since sIgA is a poor opsonin and activator of complement, simply binding a pathogen is not necessarily enough to contain it; specific epitopes may have to be bound to sterically hinder access to the epithelium.
Clearance of IgA is mediated at least in part by asialoglycoprotein receptors, which recognize galactose-terminating IgA N-glycans.
Pathology
Genetic
Decreased or absent IgA due to an inherited inability to produce IgA is termed selective IgA deficiency and can produce a clinically significant immunodeficiency.
Anti-IgA antibodies, sometimes present in individuals with low or absent IgA, can result in serious anaphylactic reactions when such individuals are transfused with blood products that incidentally contain IgA. However, most persons with suspected IgA anaphylactic reactions had experienced acute generalized reactions from causes other than anti-IgA-mediated transfusion reactions.
Microbial
Neisseria species including Neisseria gonorrhoeae (which causes gonorrhea), Streptococcus pneumoniae, and Haemophilus influenzae type B all release a protease that destroys IgA. Additionally, Blastocystis species have been shown to have several subtypes that generate cysteine and aspartic protease enzymes which degrade human IgA.
Autoimmune and immune-mediated
IgA nephropathy is caused by IgA deposits in the kidneys. The pathogenesis involves the production of hypoglycosylated IgA1, which accumulates and subsequently leads to the formation of immune complexes and the production of IgA-specific IgG, further leading to tissue inflammation.
Celiac disease involves IgA pathology due to the presence of IgA anti-endomysial antibodies. Additional testing uses IgA anti-transglutaminase autoantibodies, which have been identified as a specific and sensitive marker for the detection of celiac disease.
Henoch–Schönlein purpura (HSP) is a systemic vasculitis caused by deposits of IgA and complement component 3 (C3) in small blood vessels. HSP usually occurs in young children and involves the skin and connective tissues, scrotum, joints, gastrointestinal tract and kidneys. It usually follows an upper respiratory infection and resolves within a couple of weeks as the liver clears the IgA aggregates.
Linear IgA bullous dermatosis and IgA pemphigus are two examples of IgA-mediated immunobullous diseases. IgA-mediated immunobullous diseases can often be difficult to treat even with usually effective medications such as rituximab.
Drug-induced
Vancomycin can induce a linear IgA bullous dermatosis in some patients.
See also
List of target antigens in pemphigus
TGF beta
References
External links
Antibodies
Glycoproteins | Immunoglobulin A | Chemistry | 2,322 |
40,458,595 | https://en.wikipedia.org/wiki/Runcicantellated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the runcicantellated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
Demitesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x3x3x *b3o4x, x4o3x3x4o - prittit - O97
Honeycombs (geometry)
5-polytopes
Truncated tilings | Runcicantellated tesseractic honeycomb | Physics,Chemistry,Materials_science | 288 |
58,608 | https://en.wikipedia.org/wiki/Trusted%20Computing | Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning that is distinct from the field of confidential computing. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner.
TC is controversial as the hardware is not only secured for its owner, but also against its owner, leading opponents of the technology like free software activist Richard Stallman to deride it as "treacherous computing", and certain scholarly articles to use scare quotes when referring to the technology.
Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group and Endpoint Technologies Associates state that the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also state that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often state that this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security.
Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled. The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM). As of July 3, 2007, so does virtually the entire United States Department of Defense.
Key concepts
Trusted Computing encompasses six key technology concepts, all of which are required for a fully Trusted system, that is, a system compliant with the TCG specifications:
Endorsement key
Secure input and output
Memory curtaining / protected execution
Sealed storage
Remote attestation
Trusted Third Party (TTP)
Endorsement key
The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command.
This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol), in order to ensure its compliance with the TCG standard and to prove its identity. This makes it impossible for a software TPM emulator with an untrusted endorsement key (for example, a self-generated one) to start a secure transaction with a trusted entity. The TPM should be designed to make the extraction of this key by hardware analysis hard, but tamper resistance is not a strong requirement.
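A rough illustration of this challenge-response signing, written in Python with the third-party cryptography package, is given below. It is a sketch only: the key generation, the PKCS#1 v1.5 padding and the sign_challenge helper are assumptions made for the example, not the direct anonymous attestation protocol actually specified by the TCG.

    # Illustrative software stand-in for an on-chip endorsement key (not the real DAA protocol).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generated once at "manufacture time"; the private half never leaves the chip.
    endorsement_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    endorsement_public = endorsement_private.public_key()

    def sign_challenge(nonce: bytes) -> bytes:
        # The module proves possession of the private key by signing a random number.
        return endorsement_private.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

    # Verifier side: issue a fresh random challenge, then check the signature.
    challenge = os.urandom(32)
    signature = sign_challenge(challenge)
    endorsement_public.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())  # raises on failure

In a real TPM the private key is generated and held inside the chip; the software key pair above merely stands in for it.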
Memory curtaining
Memory curtaining extends common memory protection techniques to provide full isolation of sensitive areas of memory—for example, locations containing cryptographic keys. Even the operating system does not have full access to curtained memory. The exact implementation details are vendor specific.
Sealed storage
Sealed storage protects private information by binding it to platform configuration information, including the software and hardware being used. This means the data can be released only to a particular combination of software and hardware. Sealed storage can be used for DRM enforcement. For example, users who keep a song on their computer that they are not licensed to listen to will not be able to play it. Currently, a user can locate the song, listen to it, send it to someone else, play it in the software of their choice, or back it up (and in some cases, use circumvention software to decrypt it). Alternatively, the user may use software to modify the operating system's DRM routines to have it leak the song data once, say, a temporary license was acquired. Using sealed storage, the song is securely encrypted using a key bound to the trusted platform module, so that only the unmodified and untampered music player on his or her computer can play it. In this DRM architecture, this might also prevent people from listening to the song after buying a new computer, or upgrading parts of their current one, except after explicit permission of the vendor of the song.
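In outline, sealing can be thought of as deriving an encryption key from the measured software and hardware configuration, so that the data can only be unsealed when the same measurements are reproduced. The Python sketch below, using the cryptography package, illustrates the idea under that simplification; the platform_key, seal and unseal functions are invented for the example, and real TPM sealing binds to PCR register values rather than to an ad hoc hash.

    # Illustrative only: sealing data to a set of platform "measurements".
    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def platform_key(measurements: list) -> bytes:
        # Real TPMs bind to PCR values; here the measurements are simply hashed together.
        digest = hashlib.sha256()
        for m in measurements:
            digest.update(hashlib.sha256(m).digest())
        return digest.digest()  # 32 bytes, used as an AES-256 key

    def seal(data: bytes, measurements: list) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(platform_key(measurements)).encrypt(nonce, data, None)

    def unseal(blob: bytes, measurements: list) -> bytes:
        # Raises InvalidTag if the software or hardware measurements have changed.
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(platform_key(measurements)).decrypt(nonce, ciphertext, None)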
Remote attestation
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users modifying their software to circumvent commercial digital rights restrictions. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing. Numerous remote attestation schemes have been proposed for various computer architectures, including Intel, RISC-V, and ARM.
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that requested the attestation, and not by an eavesdropper.
To take the song example again, the user's music player software could send the song to other machines, but only if they could attest that they were running an authorized copy of the music player software. Combined with the other technologies, this provides a more restricted path for the music: encrypted I/O prevents the user from recording it as it is transmitted to the audio subsystem, memory locking prevents it from being dumped to regular disk files as it is being worked on, sealed storage curtails unauthorized access to it when saved to the hard drive, and remote attestation prevents unauthorized software from accessing the song even when it is used on other computers. To preserve the privacy of attestation responders, Direct Anonymous Attestation has been proposed as a solution, which uses a group signature scheme to prevent revealing the identity of individual signers.
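The measure-and-verify flow can be sketched as follows in Python, again with the cryptography package. The attestation key, the quote and verify_quote helpers and the allow-list check are assumptions made for the example; actual remote attestation uses keys held by the TPM and quote formats defined by the platform vendor.

    # Illustrative only: measure the software, sign the measurement, verify remotely.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    attestation_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def quote(software_image: bytes, nonce: bytes):
        # "Measurement": a hash of the running software, bound to the verifier's fresh nonce.
        measurement = hashlib.sha256(software_image).digest()
        signature = attestation_key.sign(measurement + nonce,
                                         padding.PKCS1v15(), hashes.SHA256())
        return measurement, signature

    def verify_quote(measurement: bytes, signature: bytes, nonce: bytes, allowed: set) -> bool:
        # The remote party checks the signature, then that the measurement is on its allow-list.
        try:
            attestation_key.public_key().verify(signature, measurement + nonce,
                                                padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False
        return measurement in allowed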
Proof of space (PoSpace) has been proposed for malware detection, by determining whether the L1 cache of a processor is empty (e.g., has enough space to evaluate the PoSpace routine without cache misses) or contains a routine that resisted being evicted.
Trusted third party
Known applications
The Microsoft products Windows Vista, Windows 7, Windows 8 and Windows RT make use of a Trusted Platform Module to facilitate BitLocker Drive Encryption. Other known applications with runtime encryption and the use of secure enclaves include the Signal messenger and the e-prescription service ("E-Rezept") by the German government.
Possible applications
Digital rights management
Trusted Computing would allow companies to create a digital rights management (DRM) system which would be very hard to circumvent, though not impossible. An example is downloading a music file. Sealed storage could be used to prevent the user from opening the file with an unauthorized player or computer. Remote attestation could be used to authorize play only by music players that enforce the record company's rules. The music would be played from curtained memory, which would prevent the user from making an unrestricted copy of the file while it is playing, and secure I/O would prevent capturing what is being sent to the sound system. Circumventing such a system would require either manipulation of the computer's hardware, capturing the analogue (and thus degraded) signal using a recording device or a microphone, or breaking the security of the system.
New business models for the use of software (services) over the Internet may be boosted by the technology. By strengthening the DRM system, one could base a business model on renting programs for specific time periods or on "pay as you go" models. For instance, one could download a music file which could only be played a certain number of times before it becomes unusable, or the music file could be used only within a certain time period.
Preventing cheating in online games
Trusted Computing could be used to combat cheating in online games. Some players modify their game copy in order to gain unfair advantages in the game; remote attestation, secure I/O and memory curtaining could be used to determine that all players connected to a server were running an unmodified copy of the software.
Verification of remote computation for grid computing
Trusted Computing could be used to guarantee participants in a grid computing system are returning the results of the computations they claim to be instead of forging them. This would allow large scale simulations to be run (say a climate simulation) without expensive redundant computations to guarantee malicious hosts are not undermining the results to achieve the conclusion they want.
Criticism
The Electronic Frontier Foundation and the Free Software Foundation criticize that trust in the underlying companies is not deserved and that the technology puts too much power and control into the hands of those who design systems and software. They also state that it may cause consumers to lose anonymity in their online interactions, as well as mandating technologies Trusted Computing opponents say are unnecessary. They suggest Trusted Computing as a possible enabler for future versions of mandatory access control, copy protection, and DRM.
Some security experts, such as Alan Cox and Bruce Schneier, have spoken out against Trusted Computing, believing it will provide computer manufacturers and software authors with increased control to impose restrictions on what users are able to do with their computers. There are concerns that Trusted Computing would have an anti-competitive effect on the IT market.
There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside. If not implemented correctly, it presents a security risk to overall platform integrity and protected data. The specifications, as published by the Trusted Computing Group, are open and are available for anyone to review. However, the final implementations by commercial vendors will not necessarily be subjected to the same review process. In addition, the world of cryptography can often move quickly, and hardware implementations of algorithms might become inadvertently obsolete. Trusting networked computers to controlling authorities rather than to individuals may create digital imprimaturs.
Cryptographer Ross Anderson, University of Cambridge, has great concerns that:
TC can support remote censorship [...] In general, digital objects created using TC systems remain under the control of their creators, rather than under the control of the person who owns the machine on which they happen to be stored [...] So someone who writes a paper that a court decides is defamatory can be compelled to censor it — and the software company that wrote the word processor could be ordered to do the deletion if she refuses. Given such possibilities, we can expect TC to be used to suppress everything from pornography to writings that criticize political leaders.
He goes on to state that:
[...] software suppliers can make it much harder for you to switch to their competitors' products. At a simple level, Word could encrypt all your documents using keys that only Microsoft products have access to; this would mean that you could only read them using Microsoft products, not with any competing word processor. [...]
The [...] most important benefit for Microsoft is that TC will dramatically increase the costs of switching away from Microsoft products (such as Office) to rival products (such as OpenOffice). For example, a law firm that wants to change from Office to OpenOffice right now merely has to install the software, train the staff and convert their existing files. In five years' time, once they have received TC-protected documents from perhaps a thousand different clients, they would have to get permission (in the form of signed digital certificates) from each of these clients in order to migrate their files to a new platform. The law firm won't in practice want to do this, so they will be much more tightly locked in, which will enable Microsoft to hike its prices.
Anderson summarizes the case by saying:
The fundamental issue is that whoever controls the TC infrastructure will acquire a huge amount of power. Having this single point of control is like making everyone use the same bank, or the same accountant, or the same lawyer. There are many ways in which this power could be abused.
Digital rights management
One of the early motivations behind trusted computing was a desire by media and software corporations for stricter DRM technology to prevent users from freely sharing and using potentially copyrighted or private files without explicit permission.
An example could be downloading a music file from a band: the band's record company could come up with rules for how the band's music can be used. For example, they might want the user to play the file only three times a day without paying additional money. Also, they could use remote attestation to only send their music to a music player that enforces their rules: sealed storage would prevent the user from opening the file with another player that did not enforce the restrictions. Memory curtaining would prevent the user from making an unrestricted copy of the file while it is playing, and secure output would prevent capturing what is sent to the sound system.
Users unable to modify software
A user who wanted to switch to a competing program might find that it would be impossible for that new program to read old data, as the information would be "locked in" to the old program. It could also make it impossible for the user to read or modify their data except as specifically permitted by the software.
Users unable to exercise legal rights
The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users exercising these legal rights.
Users vulnerable to vendor withdrawal of service
A service that requires external validation or permission – such as a music file or game that requires connection with the vendor to confirm permission to play or use – is vulnerable to that service being withdrawn or no longer updated. A number of incidents have already occurred where users, having purchased music or video media, have found their ability to watch or listen to it suddenly stop due to vendor policy, cessation of service, or server inaccessibility, at times with no compensation. Alternatively, in some cases the vendor refuses to provide services in the future, which leaves purchased material usable only on the present, and increasingly obsolete, hardware (so long as it lasts) but not on any hardware that may be purchased in the future.
Users unable to override
Some opponents of Trusted Computing advocate "owner override": allowing an owner who is confirmed to be physically present to allow the computer to bypass restrictions and use the secure I/O path. Such an override would allow remote attestation to a user's specification, e.g., to create certificates that say Internet Explorer is running, even if a different browser is used. Instead of preventing software change, remote attestation would indicate when the software has been changed without owner's permission.
Trusted Computing Group members have refused to implement owner override. Proponents of trusted computing believe that owner override defeats the trust in other computers since remote attestation can be forged by the owner. Owner override offers the security and enforcement benefits to a machine owner, but does not allow them to trust other computers, because their owners could waive rules or restrictions on their own computers. Under this scenario, once data is sent to someone else's computer, whether it be a diary, a DRM music file, or a joint project, that other person controls what security, if any, their computer will enforce on their copy of those data. This has the potential to undermine the applications of trusted computing to enforce DRM, control cheating in online games and attest to remote computations for grid computing.
Loss of anonymity
Because a Trusted Computing equipped computer is able to uniquely attest to its own identity, it will be possible for vendors and others who possess the ability to use the attestation feature to zero in on the identity of the user of TC-enabled software with a high degree of certainty.
Such a capability is contingent on the reasonable chance that the user at some time provides user-identifying information, whether voluntarily, indirectly, or simply through inference of many seemingly benign pieces of data. (e.g. search records, as shown through simple study of the AOL search records leak). One common way that information can be obtained and linked is when a user registers a computer just after purchase. Another common way is when a user provides identifying information to the website of an affiliate of the vendor.
While proponents of TC point out that online purchases and credit transactions could potentially be more secure as a result of the remote attestation capability, this may cause the computer user to lose expectations of anonymity when using the Internet.
Critics point out that this could have a chilling effect on political free speech, the ability of journalists to use anonymous sources, whistle blowing, political blogging and other areas where the public needs protection from retaliation through anonymity.
The TPM specification offers features and suggested implementations that are meant to address the anonymity requirement. By using a third-party Privacy Certification Authority (PCA), the information that identifies the computer could be held by a trusted third party. Additionally, the use of direct anonymous attestation (DAA), introduced in TPM v1.2, allows a client to perform attestation while not revealing any personally identifiable or machine information.
The kind of data that must be supplied to the TTP in order to get the trusted status is at present not entirely clear, but the TCG itself admits that "attestation is an important TPM function with significant privacy implications". It is, however, clear that both static and dynamic information about the user's computer may be supplied (the EK public key) to the TTP under v1.1b; it is not clear what data will be supplied to the "verifier" under v1.2. The static information will uniquely identify the endorser of the platform, the model, details of the TPM, and that the platform (PC) complies with the TCG specifications. The dynamic information is described as software running on the computer. If a program like Windows is registered in the user's name, this in turn will uniquely identify the user. Another dimension of privacy-infringing capability might also be introduced with this new technology: how often you use your programs might be information provided to the TTP. In an exceptional, yet practical, situation where a user purchases a pornographic movie on the Internet, the purchaser nowadays must accept the fact that he has to provide credit card details to the provider, thereby possibly risking being identified. With the new technology, a purchaser might also risk someone finding out that he (or she) has watched this pornographic movie 1000 times. This adds a new dimension to the possible privacy infringement. The extent of data that will be supplied to the TTP/verifiers is at present not exactly known; only when the technology is implemented and used will we be able to assess the exact nature and volume of the data that is transmitted.
TCG specification interoperability problems
Trusted Computing requires that all software and hardware vendors follow the technical specifications released by the Trusted Computing Group in order to allow interoperability between different trusted software stacks. However, since at least mid-2006, there have been interoperability problems between the TrouSerS trusted software stack (released as open source software by IBM) and Hewlett-Packard's stack. Another problem is that the technical specifications are still changing, so it is unclear which is the standard implementation of the trusted stack.
Shutting out of competing products
People have voiced concerns that trusted computing could be used to keep or discourage users from running software created by companies outside of a small industry group. Microsoft has received a great deal of bad press surrounding their Palladium software architecture, evoking comments such as "Few pieces of vaporware have evoked a higher level of fear and uncertainty than Microsoft's Palladium", "Palladium is a plot to take over cyberspace", and "Palladium will keep us from running any software not personally approved by Bill Gates". The concerns about trusted computing being used to shut out competition exist within a broader framework of consumers being concerned about using bundling of products to obscure prices of products and to engage in anti-competitive practices. Trusted Computing is seen as harmful or problematic to independent and open source software developers.
Trust
In the widely used public-key cryptography, creation of keys can be done on the local computer, and the creator has complete control over who has access to them and, consequently, over their own security policies. In some proposed encryption-decryption chips, a private/public key pair is permanently embedded into the hardware when it is manufactured, and hardware manufacturers would have the opportunity to record the key without leaving evidence of doing so. With this key it would be possible to have access to data encrypted with it, and to authenticate as it. It would be trivial for a manufacturer to give a copy of this key to the government or to software manufacturers, as the platform must go through steps so that it works with authenticated software.
Therefore, to trust anything that is authenticated by or encrypted by a TPM or a Trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process. A security breach breaking that chain of trust happened to SIM card manufacturer Gemalto, which in 2010 was infiltrated by US and British spies, resulting in compromised security of cellphone calls.
It is also critical that one be able to trust that the hardware manufacturers and software developers properly implement trusted computing standards. Incorrect implementation could be hidden from users, and thus could undermine the integrity of the whole system without users being aware of the flaw.
Hardware and software support
Since 2004, most major manufacturers have shipped systems that have included Trusted Platform Modules, with associated BIOS support. In accordance with the TCG specifications, the user must enable the Trusted Platform Module before it can be used.
The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux. In January 2005, members of Gentoo Linux's "crypto herd" announced their intention of providing support for TC—in particular support for the Trusted Platform Module. There is also a TCG-compliant software stack for Linux named TrouSerS, released under an open source license. There are several open-source projects that facilitate the use of confidential computing technology, including EGo, EdgelessDB and MarbleRun from Edgeless Systems, as well as Enarx, which originates from security research at Red Hat.
Some limited form of trusted computing can be implemented on current versions of Microsoft Windows with third-party software. Major cloud providers such as Microsoft Azure, AWS and Google Cloud Platform have virtual machines with trusted computing features available. With the Intel Software Guard Extension (SGX) and AMD Secure Encrypted Virtualization (SEV) processors, there is hardware available for runtime memory encryption and remote attestation features.
The Intel Classmate PC (a competitor to the One Laptop Per Child) includes a Trusted Platform Module.
PrivateCore vCage software can be used to attest x86 servers with TPM chips.
Mobile T6 secure operating system simulates the TPM functionality in mobile devices using the ARM TrustZone technology.
Samsung smartphones come equipped with Samsung Knox that depend on features like Secure Boot, TIMA, MDM, TrustZone and SE Linux.
See also
Glossary of legal terms in technology
Next-Generation Secure Computing Base (formerly known as Palladium)
Trusted Network Connect
Trusted Platform Module
Web Environment Integrity
References
External links
Cryptography
Copyright law
Microsoft Windows security technology | Trusted Computing | Mathematics,Engineering | 4,992 |
27,928,695 | https://en.wikipedia.org/wiki/Sama%20%28company%29 | Samasource Impact Sourcing, Inc., also known as Samasource and Sama, is a training-data company, focusing on annotating data for artificial intelligence algorithms. The company offers image, video, and sensor data annotation and validation for machine learning algorithms in industries including automotive, navigation, augmented reality, virtual reality, biotechnology, agriculture, manufacturing, and e-commerce. One of the first organizations to engage in impact sourcing, Sama trains workers in basic computer skills.
Sama is headquartered in San Francisco, California, with additional offices in Montreal and San Jose, Costa Rica. The organization owns and operates delivery centers in Nairobi, Kenya, Kampala, Uganda and Gulu, Uganda, and partners with additional delivery centers in India. Sama previously employed workers via partner delivery centers in Haiti, Pakistan, Ghana, and South Africa.
Business model
Sama uses a secured cloud annotation platform to manage the annotation lifecycle. This includes image upload, annotation, data sampling and QA, data delivery, and overall collaboration.
Sama's platform breaks down complex data projects from large companies into small tasks that can be completed by women and youth in developing countries with basic English skills after a few weeks of training.
Sama's technology features a five-step quality assurance mechanism that gauges the success of each individual worker. Workers are not, however, in direct competition with one another as they are in crowdsourcing models. Sama's staff also makes a point of understanding the skills native to each region so that it can channel projects to centers best equipped to handle them.
First founded as a non-profit in 2008, Sama adopted a hybrid business model in 2019, becoming a for-profit business with the previous non-profit organization becoming a shareholder.
History
Entrepreneur Leila Janah founded Samasource (now Sama Group) in 2008. While working as an English teacher, she saw her students' ambition; this, combined with the rise in global literacy and access to technology during that time, provided the initial inspiration for Samasource.
After completing a degree in African Development Studies from Harvard University, Janah worked as a consultant at Katzenbach Partners (now Booz & Company) and at the World Bank. She quickly became disillusioned, however, by the lack of insight she perceived from World Bank officials into the needs of those the organization was attempting to move out of poverty. While working with multiple clients in the outsourcing sector and nonprofit world, Janah developed the business plan for Sama.
Recognition
Sama has received numerous awards and grants, including the 2012 Secretary's Innovation Award for the Empowerment of Women and Girls and the 2012 TechFellows Award for Disruptive Innovation. The organization was also part of POPTech's 2010 Class of Social Innovation Fellows. Fast Company named Sama as "One of the Most Innovative Companies of 2015", saying that Sama is "defining what it means to be a not-for-profit business". Sama has also been profiled in TechCrunch, Wired, and Business Insider among other publications.
Janah was included in Conde Nast's Daring 25 list in 2016 and was named one of "Five Visionary Tech Entrepreneurs Who Are Changing the World" by The New York Times Style Magazine in 2015. She was also named a "Rising Star" on Forbes' 30 Under 30 list in 2011, one of the 50 people who will change the world by Wired, and one of the 100 most creative people in business by Fast Company. She was the recipient of a 2011 World Technology Award, a Social Enterprise Alliance Award, and a Club de Madrid award.
Controversy
Content moderation and poor worker treatment
It was revealed by a Time investigation that, in order to build a safety system against toxic content (e.g. sexual abuse, violence, racism, sexism) in products such as ChatGPT, OpenAI used Sama's services to outsource the labeling of toxic content to Kenyan workers earning less than $2 per hour. These labels were used to train a model to detect such content in the future. The outsourced laborers were exposed to toxic and dangerous content, and one described the experience as "torture". Following the Time investigation, Fairwork conducted a study of Sama. Benchmarked against Fairwork principles, the company scored 5/10.
In 2023, Sama employees were involved in the formation of the African Content Moderators Union alongside employees from other African-based outsourcing companies.
Lawsuit
In March 2022, the law firm Nzili and Sumbi Advocates published a letter on behalf of former Sama employee Daniel Motaung, threatening legal action against Sama if the company did not address twelve demands. Demands included that the company adhere to Kenyan labor, privacy, and health laws; that they provide adequate healthcare and insurance for their employees; and that they improve compensation. In 2019, Motaung was fired for organizing a strike and trying to unionize Sama employees over poor working conditions and pay. The threatened lawsuit followed a Time report detailing how Sama recruited content moderators under the false pretense that they would take jobs at call centers. According to the report, the moderators, who were recruited from all parts of the continent, only learned about the nature of their work after signing employment contracts and moving to the center in Nairobi. The moderators sift through social media posts on all platforms, including Facebook, to remove those that spread hate, misinformation and violence. On March 29, 2022, the law firm gave Meta and Sama 21 days to respond to the claims or face legal action.
In a post published after the revelation, Sama denied any wrongdoing and said the company is transparent in its hiring practices and maintains a culture that "prioritizes the health and well-being of employees".
In May 2022, Motaung officially filed a lawsuit against both Sama and Meta over these alleged unsafe and unfair working conditions. Motaung accused the subcontractor of various constitutional violations, including "widespread trauma, pay as low as $1.50 per hour, and alleged union busting."
References
Information and communication technologies for development
Human-based computation
Social enterprises
Companies based in San Francisco
Technology companies established in 2008
2008 establishments in California
B Lab-certified corporations | Sama (company) | Technology | 1,299 |
49,937,039 | https://en.wikipedia.org/wiki/Varian%27s%20War | Varian's War (aka Varian's War: The Forgotten Hero) is a 2001 Canadian/American/British made-for-television drama film. The film was written and directed by Lionel Chetwynd, based on the life and wartime exploits of Varian Fry, who saved more than 2,000 Jewish artists from Vichy France, the conquered ally of Nazi Germany. Varian's War stars William Hurt, heading an all-star ensemble cast including Julia Ormond, Matt Craven, Maury Chaykin, Alan Arkin and Lynn Redgrave.
Plot
While in Berlin during Kristallnacht in 1938, journalist Varian Fry witnesses the Nazis' brutal treatment of Jews. He is helpless and physically sick as the SA brown-shirts club their victims to the ground. The experience leaves him with a resolve to do something to help the Jews.
Back in the United States, Fry begins to canvass his influential friends and acquaintances, only to find indifference or even antisemitism. Learning that the Nazis have targeted artists and intelligentsia, he approaches the State Department with a plan and a few prominent names, such as artist Marc Chagall, scrawled on a list. When the State Department tries to block his plans to head back to Europe, Fry finds an ally in First Lady Eleanor Roosevelt, who intervenes on his behalf. She specifically asks Fry to check on Lion Feuchtwanger, imprisoned without charge by the French in the Camp des Milles internment camp.
In 1940, heading for Marseille in Vichy France (the nominally unoccupied zone libre in the southern part of Nazi-conquered metropolitan France), where he knows that Jewish artists have taken refuge, Fry arrives with money to bribe officials.
While U.S. Consul Jamieson is intransigent and rude to him, Fry later learns that Vice Consul Harry Bingham is an ally, as Bingham has worked with Waitstill and Martha Sharp, taking Feuchtwanger, Hannah Arendt, and Marc and Bella Chagall into his own home. The Chagalls, like many other expatriates, believe they are safe in Vichy France, willfully ignoring the article in the terms of the French surrender stating that France must immediately hand over any French citizen that the Nazis should demand. Word spreads quickly in Marseille that an American will help Jews to escape Vichy France. In setting up an office out of his hotel room, Fry encounters Miriam Davenport, who helps him screen the numerous refugees that begin lining up at his hotel.
Two other accomplices approach Fry, Albert Hirschman, a Jewish con-man that he names "Beamish", and Bill Freier, a counterfeit expert. With picture-perfect forged passes and identification cards, Fry begins to send Jewish artists out of France to Spain where they can arrange transport to the United States. Both French and German officials suspect that Fry is deceitful and assign agents, such as Nazi SS Oberstleutnant Marius Franken, to follow him.
With French collaborators turning in Jews, an urgency to leave begins to take hold. Even Chagall now joins with author Heinrich Mann and others in seeking passage out of Marseille. Fry and Davenport decide to shepherd a large group of frightened refugees, first on a train, then taking the group on a long hike through a mountain forest to a checkpoint where, if their documents will be accepted, they will be free to enter neutral Spain. Despite some near misses, the group makes it to freedom.
In just under one year, ending with his expulsion in September 1941, Fry's clandestine underground escape route over the Pyrenees eventually frees more than 2,000 artists, authors, scientists and intellectuals from Vichy France, including some who are listed onscreen in the background of the closing credits: Chagall, Arendt, Jacques Lipchitz, Hans Bellmer, Heinrich Mann, André Masson, Max Ernst, Franz Werfel, Ferdinand Springer and Feuchtwanger.
Cast
William Hurt as Varian Fry
Julia Ormond as Miriam Davenport
Matt Craven as Beamish (Albert Hirschman)
Maury Chaykin as Marcello
Alan Arkin as Bill Freier
Lynn Redgrave as Alma Werfel-Mahler
Rémy Girard as Colonel Joubert (credited as Remy Girard)
Christopher Heyerdahl as Marius Franken
Gloria Carlin as Bella Chagall
Joel Miller as Marc Chagall
Vlasta Vrana as Franz Werfel
John Dunn-Hill as Heinrich Mann
Ted Whittall as Harry Bingham
Dorothée Berryman as Mme Fanny
Production
Producers of Varian's War included Barwood Film's chief executive, Barbra Streisand and Cis Corman, along with Prince Edward, Duke of Edinburgh, the head of Ardent Productions. The trio acted as executive producers in their first and only collaboration.
Varian's War was filmed entirely in Montreal with principal photography beginning on May 3, 2000 and having wrapped by June 14, 2000. Additional exteriors and studio shots at Audio Cine Films Inc. took place over August–September 2000. In filming in Canada, a large Canadian supporting cast was assembled that included Christopher Heyerdahl, Remy Girard, Gloria Carlin, Dorothee Berryman as a brothel madame, Pascale Montpetit, Vlasta Vrana, Joel Miller, Maury Chaykin and Aubert Pallascio.
Historical accuracy
When released, Varian's War was advertised as "The true story of the American Schindler", a claim that was roundly decried as inaccurate by historians. Although loosely based on the life of Varian Fry, the film received a Hollywood treatment, merging characters and overdramatizing events. Bill Bingham, the son of Hiram Bingham IV, commented: "The film is dreadfully inaccurate and demeaning to Fry, Feuchtwanger, Miriam Davenport and others, despite the apparent desire to honor them."
Reception
Varian's War was telecast on Showtime television network on April 21, 2001 to mainly negative reviews. Darryl Miller of the Los Angeles Times wrote: "Noble intentions aside, Varian's War ... is a mess of a movie that leaves viewers with more questions than answers about Varian Fry ... Clumsily constructed and hollowly acted, it's a project that its lead performers – William Hurt and Julia Ormond – along with Barbra Streisand's Barwood Films, should quickly try to bury in their resumes ... Writer-director Lionel Chetwynd fudges a lot of facts, beginning with the implication that Fry founded the Emergency Rescue Committee. Chetwynd also plays fast and loose with depictions of the supporting characters, including Fry's associate, Miriam Davenport (Ormond), and the writer Lion Feuchtwanger."
The Washington Post reported that Weekly Standard columnist Fred Barnes was so impressed with Varian's War that he sent a video copy to White House aide Karl Rove, and subsequently, with Rove raving about the film, a special screening was arranged for President George W. Bush. "... written and directed by Brit-turned-American Lionel Chetwynd and starring William Hurt as Varian Fry, a Harvard-educated American journalist who rescued 2,000 artists and intellectuals from Nazi-occupied France ..." Hurt's co-stars, Julia Ormond and Lynn Redgrave, Chetwynd and his wife, actress Gloria Carlin, were in attendance.
After broadcast by the Showtime Networks/Showtime Entertainment, Varian's War was also released internationally as Varian’s War: A Forgotten Hero by Alliance Atlantis Communications.
Awards
At the 2001 WorldFest Houston film festival, Lionel Chetwynd won the Gold Special Jury Award (Best Director), while at the 2002 Satellite Awards, Julia Ormond won for Best Performance by an Actress in a Supporting Role in a Miniseries or a Motion Picture Made for Television and Varian's War won as Best Motion Picture Made for Television. William Hurt was also nominated for Best Performance by an Actor in a Miniseries or a Motion Picture Made for Television.
Other nominations included Nicoletta Massone for Best Achievement in Costume Design at the 2002 Genie Awards and Lionel Chetwynd was nominated for the WGA Award (TV) in Original Long Form for the 2002 Writers Guild of America Awards.
See also
Schindler's List (1993)
References
Notes
Citations
Bibliography
Subak, Susan Elisabeth. Rescue and Flight: American Relief Workers Who Defied the Nazis. Lincoln, Nebraska: University of Nebraska Press, 2010. .
External links
Varian's War
2001 television films
2001 films
American drama television films
American war drama films
British war drama films
Canadian war drama films
Films shot in Montreal
Films set in Germany
Films set in France
Rescue of Jews during the Holocaust
2000s biographical drama films
2000s war drama films
Drama films based on actual events
American World War II films
British World War II films
Canadian World War II films
Holocaust films
2000 films
Canadian drama television films
English-language Canadian films
2000 drama films
2001 drama films
2000s English-language films
2000s Canadian films
2000s British films
British drama television films
World War II television films
English-language war drama films
English-language biographical drama films | Varian's War | Biology | 1,875 |
74,311,590 | https://en.wikipedia.org/wiki/Einsteinium%28II%29%20chloride | Einsteinium(II) chloride is a binary inorganic chemical compound of einsteinium and chlorine with the chemical formula EsCl2.
Synthesis
The compound can be prepared via a reaction of and .
Physical properties
The compound forms a solid.
References
Einsteinium compounds
Chlorides
Actinide halides | Einsteinium(II) chloride | Chemistry | 57 |
602,032 | https://en.wikipedia.org/wiki/Pierre%20Janssen | Pierre Jules César Janssen (22 February 1824 – 23 December 1907), usually known as Jules Janssen, was a French astronomer who, along with English scientist Joseph Norman Lockyer, is credited with discovering the gaseous nature of the solar chromosphere; however, the claim that he also deserves credit for the co-discovery of the element helium is not justified.
Life, work, and interests
Janssen was born in Paris (during the Bourbon Restoration in France) into a cultivated family. His father, César Antoine Janssen (born in Paris, 1780 – 1860), was a well-known clarinettist of Dutch/Belgian descent (his father, Christianus Janssen, emigrated from Walloon Brabant to Paris). His mother, Pauline Marie Le Moyne (1789 – 1871), was a daughter of the architect Paul Guillaume Le Moyne.
Pierre Janssen studied mathematics and physics at the faculty of sciences. He taught at the Lycée Charlemagne in 1853, and in the school of architecture 1865 – 1871, but his energies were mainly devoted to various scientific missions entrusted to him. Thus in 1857 he went to Peru in order to determine the magnetic equator; in 1861–1862 and 1864, he studied telluric absorption in the solar spectrum in Italy and Switzerland; in 1867 he carried out optical and magnetic experiments at the Azores; he successfully observed both transits of Venus, that of 1874 in Japan, that of 1882 at Oran in Algeria; and he took part in a long series of solar eclipse-expeditions, e.g. to Trani, Italy (1867), Guntur, India (1868), Algiers (1870), Siam (1875), the Caroline Islands (1883), and to Alcossebre in Spain (1905). To see the eclipse of 1870, he escaped from the Siege of Paris in a balloon. Unfortunately the eclipse was obscured from him by cloud.
In 1874, Janssen invented the Janssen revolver, or photographic revolver, an instrument that originated chronophotography. The invention later proved of great use to researchers such as Étienne-Jules Marey in their demonstrations and inventions.
Solar spectroscopy
In 1868 Janssen discovered how to observe solar prominences without an eclipse. While observing the solar eclipse of 18 August 1868, at Guntur, Madras State (now in Andhra Pradesh), British India, he noticed bright lines in the spectrum of the chromosphere, showing that the chromosphere is gaseous. From the brightness of the spectral lines, Janssen realized that the chromospheric spectrum could be observed even without an eclipse, and he proceeded to do so. But he never mentioned the emission line seen by Joseph Norman Lockyer, which later was shown to be due to the element helium.
On 20 October, Lockyer in England set up a new, relatively powerful spectroscope. He also observed the emission spectrum of the chromosphere, including a new yellow line near the sodium D line, which he called "D3". Lockyer and the English chemist Edward Frankland speculated that the new line could be due to a new element, which they named the element after the Greek word for the Sun, ἥλιος (helios).
Observatories
At the great Indian eclipse of 1868 that occurred in Guntur, Janssen also demonstrated the gaseous nature of the red prominences, and devised a method of observing them under ordinary daylight conditions. One main purpose of his spectroscopic inquiries was to answer the question whether the Sun contains oxygen or not. An indispensable preliminary was the virtual elimination of oxygen-absorption in the Earth's atmosphere, and his bold project of establishing an observatory on the top of Mont Blanc was prompted by a perception of the advantages to be gained by reducing the thickness of air through which observations have to be made. This observatory, the foundations of which were fixed in the hard ice that appeared to cover the summit to a depth of over ten metres, was built in September 1893, and Janssen, in spite of his sixty-nine years, made the ascent and spent four days making observations.
In 1875, Janssen was appointed director of the new astrophysical observatory established by the French government at Meudon, and set on foot there in 1876 the remarkable series of solar photographs collected in his great Atlas de photographies solaires (1904). The first volume of the Annales de l'observatoire de Meudon was published by him in 1896. (see also Meudon Great Refractor)
Janssen was the President of the Société Astronomique de France (SAF), the French astronomical society, from 1895 to 1897.
International Meridian Conference
In 1884 he took part in the International Meridian Conference.
Death, honors, and legacy
Janssen died at Meudon on 23 December 1907 and was buried at Père Lachaise Cemetery in Paris, with the name "J. Janssen" inscribed on his tomb. During his life he was made a Knight of the Legion of Honor and a Foreign Member of the Royal Society of London.
Craters on both Mars and the Moon are named in his honor. The public square in front of Meudon Observatory is named Place Jules Janssen after him. Two major prizes carry his name: the Prix Jules Janssen of the French Astronomical Society, and the Janssen Medal of the French Academy of Sciences.
Janssen named minor planet 225 Henrietta discovered by Johann Palisa, after his wife, Henrietta.
Notes and references
Further reading
Obituary, from Popular Astronomy, 1908, vol. 16, pp. 72–74
Obituary, from Astronomische Nachrichten, 1908, vol. 177, p. 63 (in French)
Obituary, from The Astrophysical Journal, 1908, vol. 28, pp. 89–99 (in French)
Janssen statue, description and black-and-white picture from The Observatory, 1922, vol. 45, pp. 175–176
Brief biography, from the High Altitude Observatory at Boulder, Colorado
1824 births
1907 deaths
Burials at Père Lachaise Cemetery
Discoverers of chemical elements
19th-century French astronomers
Members of the French Academy of Sciences
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Knights of the Legion of Honour
Honorary Fellows of the Royal Society of Edinburgh
Scientists from Paris
Helium
Spectroscopists
Recipients of the Lalande Prize
Articles containing video clips
French people of Belgian descent | Pierre Janssen | Physics,Chemistry | 1,303 |
41,331,720 | https://en.wikipedia.org/wiki/Weyr%20canonical%20form | In mathematics, in linear algebra, a Weyr canonical form (or Weyr form or Weyr matrix) is a square matrix which (in some sense) induces "nice" properties with matrices it commutes with. It also has a particularly simple structure and the conditions for possessing a Weyr form are fairly weak, making it a suitable tool for studying classes of commuting matrices. A square matrix is said to be in the Weyr canonical form if the matrix has the structure defining the Weyr canonical form. The Weyr form was discovered by the Czech mathematician Eduard Weyr in 1885. The Weyr form did not become popular among mathematicians and it was overshadowed by the closely related, but distinct, canonical form known by the name Jordan canonical form. The Weyr form has been rediscovered several times since Weyr’s original discovery in 1885. This form has been variously called the modified Jordan form, reordered Jordan form, second Jordan form, and H-form. The current terminology is credited to Shapiro who introduced it in a paper published in the American Mathematical Monthly in 1999.
Recently several applications have been found for the Weyr matrix. Of particular interest is an application of the Weyr matrix in the study of phylogenetic invariants in biomathematics.
Definitions
Basic Weyr matrix
Definition
A basic Weyr matrix W with eigenvalue λ is an n × n matrix of the following form: There is an integer partition
(n_1, n_2, ..., n_r) of n with n_1 ≥ n_2 ≥ ... ≥ n_r ≥ 1,
such that, when W is viewed as an r × r block matrix (W_{ij}), where the (i, j) block W_{ij} is an n_i × n_j matrix, the following three features are present:
The main diagonal blocks W_{ii} are the n_i × n_i scalar matrices λI for i = 1, ..., r.
The first superdiagonal blocks W_{i, i+1} are full column rank n_i × n_{i+1} matrices in reduced row-echelon form (that is, an identity matrix followed by zero rows) for i = 1, ..., r − 1.
All other blocks of W are zero (that is, W_{ij} = 0 when j ≠ i, i + 1).
In this case, we say that W has Weyr structure (n_1, n_2, ..., n_r).
Example
The following is an example of a basic Weyr matrix: take the eigenvalue λ and the Weyr structure (2, 1). The corresponding 3 × 3 basic Weyr matrix is
W =
( λ 0 1 )
( 0 λ 0 )
( 0 0 λ )
In this matrix, n_1 = 2 and n_2 = 1, so W has the Weyr structure (2, 1). Also, the diagonal blocks W_{11} = λI_2 and W_{22} = λI_1 are scalar matrices,
and the first superdiagonal block W_{12} = (1, 0)ᵀ is a full column rank 2 × 1 matrix in reduced row-echelon form.
General Weyr matrix
Definition
Let W be a square matrix and let λ_1, ..., λ_k be the distinct eigenvalues of W. We say that W is in Weyr form (or is a Weyr matrix) if W has the following form:
W = diag(W_1, W_2, ..., W_k),
where W_i is a basic Weyr matrix with eigenvalue λ_i for i = 1, ..., k.
Example
The following image shows an example of a general Weyr matrix consisting of three basic Weyr matrix blocks. The basic Weyr matrix in the top-left corner has the structure (4,2,1) with eigenvalue 4, the middle block has structure (2,2,1,1) with eigenvalue -3 and the one in the lower-right corner has the structure (3, 2) with eigenvalue 0.
Relation between Weyr and Jordan forms
The Weyr canonical form is related to the Jordan form by a simple permutation for each Weyr basic block as follows: The first index of each Weyr subblock forms the largest Jordan chain. After crossing out these rows and columns, the first index of each new subblock forms the second largest Jordan chain, and so forth.
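One way to make this correspondence concrete: for each eigenvalue, the Weyr structure and the list of Jordan block sizes are conjugate partitions of one another. The following Python sketch illustrates this; the function name and the worked example are chosen here purely for illustration and are not part of either canonical-form definition.

```python
def jordan_block_sizes(weyr_structure):
    """Jordan block sizes (for one eigenvalue) from a Weyr structure.

    The two sequences are conjugate partitions: the j-th value returned is the
    number of entries of the Weyr structure that are at least j.
    """
    largest = weyr_structure[0]
    return [sum(1 for n in weyr_structure if n >= j) for j in range(1, largest + 1)]

# The Weyr structure (4, 2, 1), as in the top-left block of the general example,
# corresponds to Jordan blocks of sizes 3, 2, 1 and 1 (both partitions sum to 7).
print(jordan_block_sizes((4, 2, 1)))   # [3, 2, 1, 1]
```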
The Weyr form is canonical
That the Weyr form is a canonical form of a matrix is a consequence of the following result: Each square matrix A over an algebraically closed field is similar to a Weyr matrix W which is unique up to permutation of its basic blocks. The matrix W is called the Weyr (canonical) form of A.
Computation of the Weyr canonical form
Reduction to the nilpotent case
Let A be a square matrix of order n over an algebraically closed field and let the distinct eigenvalues of A be λ_1, λ_2, ..., λ_k. The Jordan–Chevalley decomposition theorem states that A is similar to a block diagonal matrix of the form
A ∼ diag(λ_1I + N_1, λ_2I + N_2, ..., λ_kI + N_k), where each λ_iI is a diagonal matrix and each N_i is a nilpotent matrix, justifying the reduction of A into the subblocks λ_iI + N_i. So the problem of reducing A to the Weyr form reduces to the problem of reducing the nilpotent matrices N_i to the Weyr form. This leads to the generalized eigenspace decomposition theorem.
Reduction of a nilpotent matrix to the Weyr form
Given a nilpotent square matrix A of order n over an algebraically closed field F, the following algorithm produces an invertible matrix C and a Weyr matrix W such that W = C⁻¹AC.
Step 1
Let A_1 = A.
Step 2
Compute a basis for the null space of A_1.
Extend the basis for the null space of A_1 to a basis for the n-dimensional vector space Fⁿ.
Form the matrix P_1 consisting of these basis vectors.
Compute P_1⁻¹A_1P_1 = [[0, B_2], [0, A_2]]. A_2 is a square matrix of size n − nullity(A_1).
Step 3
If A_2 is nonzero, repeat Step 2 on A_2.
Compute a basis for the null space of A_2.
Extend the basis for the null space of A_2 to a basis for the vector space having dimension n − nullity(A_1).
Form the matrix P_2 consisting of these basis vectors.
Compute P_2⁻¹A_2P_2 = [[0, B_3], [0, A_3]]. A_3 is a square matrix of size n − nullity(A_1) − nullity(A_2).
Step 4
Continue the processes of Steps 1 and 2 to obtain increasingly smaller square matrices A_1, A_2, A_3, ... and associated invertible matrices P_1, P_2, P_3, ... until the first zero matrix A_{r+1} is obtained.
Step 5
The Weyr structure of A is (n_1, n_2, ..., n_r), where n_i = nullity(A_i).
Step 6
Compute the matrix P = P_1 diag(I, P_2) diag(I, P_3) ⋯ diag(I, P_r) (here the I's are appropriately sized identity matrices).
Compute B = P⁻¹AP. B is a matrix of the following form:
a block upper triangular matrix whose diagonal blocks, of sizes n_1, n_2, ..., n_r, are all zero.
Step 7
Use elementary row operations to find an invertible matrix Y of appropriate size such that the product YB_{r−1, r}, where B_{r−1, r} is the (r − 1, r) block of B, is a full column rank matrix in reduced row-echelon form (an identity matrix followed by zero rows).
Step 8
Set Q_1 = diag(I, ..., I, Y⁻¹, I), with Y⁻¹ in block position r − 1, and compute B_1 = Q_1⁻¹BQ_1. In this matrix, the (r − 1, r) block is an identity matrix followed by zero rows.
Step 9
Find a matrix Q_2 formed as a product of elementary matrices such that Q_2⁻¹B_1Q_2 is a matrix in which all the blocks above the (r − 1, r) block contain only 0's.
Step 10
Repeat Steps 8 and 9 on column r − 1, converting the (r − 2, r − 1) block to reduced row-echelon form via conjugation by some invertible matrix. Use this block to clear out the blocks above, via conjugation by a product of elementary matrices.
Step 11
Repeat these processes on the remaining columns, using conjugations by the corresponding invertible matrices. The resulting matrix W is now in Weyr form.
Step 12
Let C be the product of P with all of the invertible matrices used in the conjugations above. Then W = C⁻¹AC.
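If only the Weyr structure of a nilpotent matrix is required (Step 5), it can be read off from the nullities of successive powers without carrying out the full change of basis. A minimal numerical sketch in Python/NumPy, assuming floating-point rank computation is adequate for the matrix at hand, is:

```python
import numpy as np

def weyr_structure(A, tol=1e-9):
    """Weyr structure (n_1, ..., n_r) of a nilpotent matrix A.

    n_k = nullity(A^k) - nullity(A^(k-1)), computed via numerical ranks.
    """
    n = A.shape[0]
    structure = []
    power = np.eye(n)
    previous_nullity = 0
    while True:
        power = power @ A                                   # now holds A^k
        nullity = n - np.linalg.matrix_rank(power, tol=tol)
        n_k = nullity - previous_nullity
        if n_k == 0:
            break
        structure.append(n_k)
        previous_nullity = nullity
    return tuple(structure)

# Basic Weyr matrix with eigenvalue 0 and Weyr structure (2, 1):
W = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(weyr_structure(W))   # (2, 1)
```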
Applications of the Weyr form
Some well-known applications of the Weyr form are listed below:
The Weyr form can be used to simplify the proof of Gerstenhaber’s Theorem, which asserts that the subalgebra generated by two commuting n × n matrices has dimension at most n.
A set of finite matrices is said to be approximately simultaneously diagonalizable if they can be perturbed to simultaneously diagonalizable matrices. The Weyr form is used to prove approximate simultaneous diagonalizability of various classes of matrices. The approximate simultaneous diagonalizability property has applications in the study of phylogenetic invariants in biomathematics.
The Weyr form can be used to simplify the proofs of the irreducibility of the variety of all k-tuples of commuting complex matrices.
References
Linear algebra
Matrix theory
Matrix normal forms
Matrix decompositions | Weyr canonical form | Mathematics | 1,398 |
1,139,926 | https://en.wikipedia.org/wiki/Hyperbolic%20sector | A hyperbolic sector is a region of the Cartesian plane bounded by a hyperbola and two rays from the origin to it. For example, the two points (a, 1/a) and (b, 1/b) on the rectangular hyperbola xy = 1, or the corresponding region when this hyperbola is re-scaled and its orientation is altered by a rotation leaving the center at the origin, as with the unit hyperbola. A hyperbolic sector in standard position has a = 1 and b > 1.
Hyperbolic sectors are the basis for the hyperbolic functions.
Area
The area of a hyperbolic sector in standard position is the natural logarithm of b.
Proof: Integrate under 1/x from 1 to b, add triangle {(0, 0), (1, 0), (1, 1)}, and subtract triangle {(0, 0), (b, 0), (b, 1/b)} (both triangles of which have the same area).
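The formula can also be checked numerically. The following Python sketch (assuming SciPy is available) builds the sector area exactly as in the proof above and compares it with the natural logarithm of b:

```python
import numpy as np
from scipy.integrate import quad

def hyperbolic_sector_area(b):
    """Area of the hyperbolic sector in standard position, from (1, 1) to (b, 1/b)."""
    under_curve, _ = quad(lambda x: 1.0 / x, 1.0, b)  # area under 1/x from 1 to b
    triangle_added = 0.5                              # triangle {(0,0), (1,0), (1,1)}
    triangle_removed = 0.5 * b * (1.0 / b)            # triangle {(0,0), (b,0), (b,1/b)}
    return under_curve + triangle_added - triangle_removed

for b in (2.0, np.e, 10.0):
    print(b, hyperbolic_sector_area(b), np.log(b))    # the last two columns agree
```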
When in standard position, a hyperbolic sector corresponds to a positive hyperbolic angle at the origin, with the measure of the latter being defined as the area of the former.
Hyperbolic triangle
When in standard position, a hyperbolic sector determines a hyperbolic triangle, the right triangle with one vertex at the origin, base on the diagonal ray y = x, and third vertex on the hyperbola xy = 1,
with the hypotenuse being the segment from the origin to the point (x, y) on the hyperbola. The length of the base of this triangle is
√2 cosh u,
and the altitude is
√2 sinh u,
where u is the appropriate hyperbolic angle. The usual definitions of the hyperbolic functions can be seen via the legs of right triangles plotted with hyperbolic coordinates. When the length of these legs is divided by the square root of 2, they can be graphed as the unit hyperbola with hyperbolic cosine and sine coordinates.
The analogy between circular and hyperbolic functions was described by Augustus De Morgan in his Trigonometry and Double Algebra (1849). William Burnside used such triangles, projecting from a point on the hyperbola xy = 1 onto the main diagonal, in his article "Note on the addition theorem for hyperbolic functions".
Hyperbolic logarithm
It is known that f(x) = x^p has an algebraic antiderivative except in the case p = –1 corresponding to the quadrature of the hyperbola. The other cases are given by Cavalieri's quadrature formula. Whereas quadrature of the parabola had been accomplished by Archimedes in the third century BC (in The Quadrature of the Parabola), the hyperbolic quadrature required the invention in 1647 of a new function: Gregoire de Saint-Vincent addressed the problem of computing the areas bounded by a hyperbola. His findings led to the natural logarithm function, once called the hyperbolic logarithm since it is obtained by integrating, or finding the area, under the hyperbola.
Before 1748 and the publication of Introduction to the Analysis of the Infinite, the natural logarithm was known in terms of the area of a hyperbolic sector. Leonhard Euler changed that when he introduced transcendental functions such as 10^x. Euler identified e as the value of b producing a unit of area (under the hyperbola or in a hyperbolic sector in standard position). Then the natural logarithm could be recognized as the inverse function to the transcendental function e^x.
To accommodate the case of negative logarithms and the corresponding negative hyperbolic angles, different hyperbolic sectors are constructed according to whether x is greater or less than one. A variable right triangle with area 1/2 is The isosceles case is The natural logarithm is known as the area under y = 1/x between one and x. A positive hyperbolic angle is given by the area of
A negative hyperbolic angle is given by the negative of the area This convention is in accord with a negative natural logarithm for x in (0,1).
Hyperbolic geometry
When Felix Klein's book on non-Euclidean geometry was published in 1928, it provided a foundation for the subject by reference to projective geometry. To establish hyperbolic measure on a line, Klein noted that the area of a hyperbolic sector provided visual illustration of the concept.
Hyperbolic sectors can also be drawn to the hyperbola . The area of such hyperbolic sectors has been used to define hyperbolic distance in a geometry textbook.
See also
Squeeze mapping
References
Mellen W. Haskell (1895) On the introduction of the notion of hyperbolic functions Bulletin of the American Mathematical Society 1(6):155–9.
Area
Elementary geometry
Integral calculus
Logarithms | Hyperbolic sector | Physics,Mathematics | 964 |
296,524 | https://en.wikipedia.org/wiki/Betty%20Williams | Elizabeth Williams ( Smyth; 22 May 1943 – 17 March 2020) was a peace activist from Northern Ireland. She was a co-recipient with Mairead Corrigan of the Nobel Peace Prize in 1976 for her work as a cofounder of Community of Peace People, an organisation dedicated to promoting a peaceful resolution to the Troubles in Northern Ireland.
Williams headed the Global Children's Foundation and was the President of the World Centre of Compassion for Children International. She was also the Chair of Institute for Asian Democracy in Washington D.C. She lectured widely on topics of peace, education, inter-cultural and inter-faith understanding, anti-extremism, and children's rights.
Williams was a founding member of the Nobel Laureate Summit, which has taken place annually since 2000.
In 2006, Williams became a founder of the Nobel Women's Initiative along with Nobel Peace Laureates Mairead Corrigan Maguire, Shirin Ebadi, Wangari Maathai, Jody Williams and Rigoberta Menchú Tum. These six women, representing North and South America, the Middle East, Europe and Africa, brought together their experiences in a united effort for peace with justice and equality. It is the goal of the Nobel Women's Initiative to help strengthen work being done in support of women's rights around the world. Williams was also a member of PeaceJam.
Early life
Williams was born on 22 May 1943 in Belfast, Northern Ireland. Her father worked as a butcher and her mother was a housewife. Betty received her primary education from St. Teresa Primary School in Belfast and attended St Dominic's Grammar School for Girls for her secondary school studies. Upon completing her formal education, she took up a job of office receptionist.
Rare for the time in Northern Ireland, her father was Protestant and her mother was Catholic; a family background from which Williams later said she derived religious tolerance and a breadth of vision that motivated her to work for peace. Early in the 1970s she joined an anti-violence campaign headed by a Protestant priest. Williams credited this experience for preparing her to eventually found her own peace movement, which focused on creating peace groups composed of former opponents, practicing confidence-building measures, and the development of a grassroots peace process.
Peace petition
Williams was drawn into the public arena after witnessing the death of three children on 10 August 1976, when they were hit by a car whose driver, an Irish Republican Army (IRA) paramilitary named Danny Lennon, had been fatally shot in return fire by a soldier of the Kings Own Royal Border regiment. As she turned the corner to her home, she saw the three Maguire children crushed by the swerving car and rushed to help. Their mother, Anne Maguire, who was with the children, died by suicide in January 1980.
Williams was so moved by the incident that within two days of the tragic event, she had obtained 6,000 signatures on a petition for peace and gained wide media attention. With Corrigan, she co-founded the Women for Peace; which, with Ciaran McKeown, later became the Community of Peace People.
Williams soon organised a peace march to the graves of the slain children, which was attended by 10,000 Protestant and Catholic women. However, the peaceful march was violently disrupted by members of the IRA, who accused them of being "dupes of the British". The following week, Williams led another march in Ormeau Park that concluded successfully without incident – this time with 20,000 participants.
At that time, Williams declared the following:
Declaration of the Peace People
First Declaration of the Peace People
We have a simple message to the world from this movement for Peace.
We want to live and love and build a just and peaceful society.
We want for our children, as we want for ourselves, our lives at home, at work, and at play to be lives of joy and Peace.
We recognise that to build such a society demands dedication, hard work, and courage.
We recognise that there are many problems in our society which are a source of conflict and violence.
We recognise that every bullet fired and every exploding bomb make that work more difficult.
We reject the use of the bomb and the bullet and all the techniques of violence.
We dedicate ourselves to working with our neighbours, near and far, day in and day out, to build that peaceful society in which the tragedies we have known are a bad memory and a continuing warning.
Nobel Peace Prize
In recognition of her efforts for peace, Williams, together with her friend Mairead Corrigan, became joint recipients of the Nobel Peace Prize in 1977 (the prize for 1976). In her acceptance speech, Williams said, That first week will always be remembered of course for something else besides the birth of the Peace People. For those most closely involved, the most powerful memory of that week was the death of a young republican and the deaths of three children struck by the dead man's car. A deep sense of frustration at the mindless stupidity of the continuing violence was already evident before the tragic events of that sunny afternoon of 10 August 1976. But the deaths of those four young people in one terrible moment of violence caused that frustration to explode, and create the possibility of a real peace movement...As far as we are concerned, every single death in the last eight years, and every death in every war that was ever fought represents life needlessly wasted, a mother's labour spurned.
The Peace Prize money was divided equally between Williams and Corrigan. Williams kept her share of the money, stating that her intention was to use it to promote peace beyond Ireland, but faced criticism for her decision. She and Corrigan had no contact after 1976. In 1978 Williams broke off links with the Peace People movement, and became instead an activist for peace in other areas around the world.
Other awards
Williams received the People's Peace Prize of Norway in 1976, the Golden Plate Award of the American Academy of Achievement in 1977, the Schweitzer Medallion for Courage, the Martin Luther King, Jr. Award, the Eleanor Roosevelt Award in 1984, and the Frank Foundation Child Care International Oliver Award. In 1995, she was awarded the Rotary Club International "Paul Harris Fellowship" and the Together for Peace Building Award.
Talks and guest lectures
At the 2006 Earth Dialogues forum in Brisbane, Williams told an audience of schoolchildren during a speech on Iraq War casualties that "Right now, I would like to kill George W. Bush." From 17 to 20 September 2007, Williams gave a series of lectures in Southern California: on 18 September, she presented a lecture to the academic community of Orange County entitled "Peace in the World Is Everybody's Business"; and on 20 September she gave a lecture to 2,232 members of the general public, including 1,100 high school sophomores, at Soka University of America. In 2010, she gave a lecture at WE Day Toronto, a WE Charity event that empowers students to be active within their communities, and worldwide.
Speaking at the University of Bradford before an audience of 200 in March 2011, Williams warned that young Muslim women on campus were vulnerable to attacks from angry family members, while the university does little to help protect them. "If you had someone on this campus these young women could go to say, 'I am frightened' – if you are not doing that here, you are dehumanising them by not helping these young women, don't you think?"
Personal life
At the time she received the Nobel Prize, Williams worked as a receptionist and was raising her two children with her first husband Ralph Williams. This marriage was dissolved in 1981. She married businessman James Perkins in December 1982; they lived in Florida in the United States.
In 2004, she returned to live in Northern Ireland. Williams died on 17 March 2020, St. Patrick's Day, at the age of 76 in Belfast.
In popular culture
Williams was honoured/featured in the music video of Nickelback's hit song "If Everyone Cared".
Williams and Mairead Corrigan were the subject of a French song, "Deux Femmes à Dublin", sung by French Pied-Noir singer Enrico Macias.
See also
List of female Nobel laureates
List of peace activists
References
External links
http://lectures.syr.edu/betty-jody-williams – brief bio
Peace People in NI – a socialist position
1943 births
2020 deaths
20th-century women politicians from Northern Ireland
Expatriates from Northern Ireland in the United States
Nobel laureates from Northern Ireland
British Nobel laureates
Nobel Peace Prize laureates
Pacifists from Northern Ireland
People educated at St Dominic's Grammar School for Girls
Activists from Belfast
People of The Troubles (Northern Ireland)
Sam Houston State University faculty
Women activists from Northern Ireland
Women Nobel laureates | Betty Williams | Technology | 1,782 |
58,950 | https://en.wikipedia.org/wiki/Galaxy%20cluster | A galaxy cluster, or a cluster of galaxies, is a structure that consists of anywhere from hundreds to thousands of galaxies that are bound together by gravity, with typical masses ranging from 10^14 to 10^15 solar masses. They are the second-largest known gravitationally bound structures in the universe after some superclusters (of which only one, the Shapley Supercluster, is known to be bound). They were believed to be the largest known structures in the universe until the 1980s, when superclusters were discovered. One of the key features of clusters is the intracluster medium (ICM). The ICM consists of heated gas between the galaxies and has a peak temperature between 2 and 15 keV that is dependent on the total mass of the cluster. Galaxy clusters should not be confused with galactic clusters (also known as open clusters), which are star clusters within galaxies, or with globular clusters, which typically orbit galaxies. Small aggregates of galaxies are referred to as galaxy groups rather than clusters of galaxies. The galaxy groups and clusters can themselves cluster together to form superclusters.
Notable galaxy clusters in the relatively nearby Universe include the Virgo Cluster, Fornax Cluster, Hercules Cluster, and the Coma Cluster. A very large aggregation of galaxies known as the Great Attractor, dominated by the Norma Cluster, is massive enough to affect the local expansion of the Universe. Notable galaxy clusters in the distant, high-redshift universe include SPT-CL J0546-5345 and SPT-CL J2106-5844, the most massive galaxy clusters found in the early Universe. In the last few decades, they are also found to be relevant sites of particle acceleration, a feature that has been discovered by observing non-thermal diffuse radio emissions, such as radio halos and radio relics. Using the Chandra X-ray Observatory, structures such as cold fronts and shock waves have also been found in many galaxy clusters.
Basic properties
Galaxy clusters typically have the following properties:
They contain 100 to 1,000 galaxies, hot X-ray emitting gas and large amounts of dark matter. Details are described in the "Composition" section.
The distribution of the three components is approximately the same in the cluster.
They have total masses of 10^14 to 10^15 solar masses.
They typically have a diameter from 1 to 5 Mpc (see 10^23 m for distance comparisons).
The spread of velocities for the individual galaxies is about 800–1000 km/s.
Composition
There are three main components of a galaxy cluster: the galaxies themselves, the hot intracluster medium, and dark matter.
Classification
Galaxy clusters are categorized as type I, II, or III based on morphology.
Galaxy clusters as measuring instruments
Gravitational redshift
Galaxy clusters have been used by Radek Wojtak from the Niels Bohr Institute at the University of Copenhagen to test predictions of general relativity: energy loss from light escaping a gravitational field. Photons emitted from the center of a galaxy cluster should lose more energy than photons coming from the edge of the cluster because gravity is stronger in the center. Light emitted from the center of a cluster has a longer wavelength than light coming from the edge. This effect is known as gravitational redshift. Using the data collected from 8000 galaxy clusters, Wojtak was able to study the properties of gravitational redshift for the distribution of galaxies in clusters. He found that the light from the clusters was redshifted in proportion to the distance from the center of the cluster as predicted by general relativity. The result also strongly supports the Lambda-Cold Dark Matter model of the Universe, according to which most of the cosmos is made up of Dark Matter that does not interact with matter.
Gravitational lensing
Galaxy clusters are also used for their strong gravitational potential as gravitational lenses to boost the reach of telescopes. The gravitational distortion of space-time occurs near massive galaxy clusters and bends the path of photons to create a cosmic magnifying glass. This can be done with photons of any wavelength from the optical to the X-ray band. The latter is more difficult, because galaxy clusters emit a lot of X-rays. However, X-ray emission may still be detected when combining X-ray data to optical data. One particular case is the use of the Phoenix galaxy cluster to observe a dwarf galaxy in its early high energy stages of star formation.
List
Gallery
Images
Videos
See also
Abell catalogue
Intracluster medium
List of Abell clusters
References
Cluster
Articles containing video clips
Types of groupings | Galaxy cluster | Astronomy | 922 |
701,916 | https://en.wikipedia.org/wiki/Silly%20season | In the United Kingdom, silly season is a period in the summer months known for frivolous news stories in the mass media. The term was first attested in 1861, and listed in the second (1894) edition of Brewer's Dictionary of Phrase and Fable. The 15th edition of Brewer's defined the silly season as "the part of the year when Parliament and the Law Courts are not sitting (about August and September)". In North America, the period is often referred to as the slow news season.
In Australia, New Zealand, and South Africa, the silly season has come to refer to the Christmas/New Year festive period (which occurs during the summer season in the Southern Hemisphere).
Origin
The first attestation in the Oxford English Dictionary is an article titled "The Silly Season" in the Saturday Review edition of 13 July 1861. The article is specifically about an alleged reduction in the quality of the editorial content of The Times newspaper:
during the months of autumn [, w]hen Parliament is no longer sitting and the gay world is no longer gathered together in London, something very different is supposed to do for the remnant of the public from what is needed in the politer portions of the year. The Timess great men have doubtless gone out of town, like other great men. ... The hands which at other times wield the pen for our instruction are now wielding the gun on a Scotch moor or the Alpenstock on a Swiss mountain. Work is left to feebler hands. ... In those months the great oracle becomes —what at other times it is not—simply silly. In spring and early summer, the Times is often violent, unfair, fallacious, inconsistent, intentionally unmeaning, even positively blundering, but it is very seldom merely silly. ... In the dead of autumn, when the second and third rate hands are on, we sink from nonsense written with a purpose to nonsense written because the writer must write either nonsense or nothing.
Motivation
Typically, the latter half of the summer is slow in terms of newsworthy events. Newspapers rely on advertisements as their primary means of income, and advertisements rely on readers seeing them, but historically newspaper readership drops off during this time. In the United Kingdom, Parliament takes its summer recess, so that parliamentary debates and Prime Minister's Questions, which generate much news coverage, do not happen. This period is also a summer school holiday, when many families with children choose to take holidays, and there is accordingly often a decline in business news, as many employers reduce their activity. With law courts not sitting, there is a lack of coverage of court cases. Similar recesses are typical of legislative bodies elsewhere. To retain (and attract) subscribers, newspapers would print attention-grabbing headlines and articles to boost sales, often to do with minor moral panics or child abductions. For example, the extensive British press coverage devoted to Operation Irma, a humanitarian airlift during the Siege of Sarajevo, was critiqued as a "silly season" tactic.
Other names
Other countries have comparable periods, for example the ("summer [news]hole") in German-speaking Europe; French has ("the dead season" or "the dull season") or ("the conker tree season"), and Swedish has ("news drought").
In many languages, the name for the silly season references cucumbers (more precisely: gherkins or pickled cucumbers). in Dutch, Danish , Icelandic , Norwegian (a piece of news is called or , i.e., "cucumber news"), Czech ("pickle season"), Slovak , Polish , Hungarian , and Hebrew (, "season of the cucumbers") all mean "cucumber time" or "cucumber season". The corresponding term in German is and in Estonian ("pickled cucumber season"); the same term is also used in Croatian as and in Slovene as .
The term "cucumber time" was also used in England in the 1800s to denote the slow season for tailors.
A silly season news item is called in Sweden and in Finland, both literally meaning "rotting-month story".
In Spain the term ("summer snake") is often used, not for the season, but for the news items. The term is a reference to the Loch Ness Monster and similar creatures, who are reputed to get more headlines in summer.
Sports
Silly season also refers to periods outside traditional season-long competitive sporting competitions. In team sports such as association football and professional ice hockey, and leagues such as Formula One, NASCAR, the NBA, or the NFL, the final weeks of the season leading into off-time between one season and the next is filled with speculation regarding possible changes involving players, staff, and teams. Regardless of whether the speculation remains just that or indeed bears fruit, the moves and the discussions they generate help build interest in the leagues, their teams, and their upcoming seasons. For Major League Baseball, the term hot stove league describes that league's off-season.
Silly season is also used in professional golf to describe tournaments that are not official PGA Tour or LPGA Tour events. Normally scheduled at or near the end of the calendar year, when PGA and LPGA tournaments are not usually scheduled, these events also employ formats of play not normally seen on those tours. The Shark Shootout and the Skins Game are two such examples of silly season events.
See also
References
Bibliography
Brewer's Dictionary of Phrase & Fable, 15th edition, 1996 published by Cassell.
Brewer's Dictionary of Phrase & Fable, 2nd edition, 1898, online: definition for silly season
External links
Let's hear it for the silly season, Jonathan Duffy, BBC News, 31 August 2005
1860s neologisms
Criticism of journalism
Seasons
Journalism terminology
Summer
Silliness | Silly season | Physics | 1,199 |
22,407,888 | https://en.wikipedia.org/wiki/PSR%20B1509%E2%88%9258 |
PSR B1509−58 is a pulsar approximately 17,000 light-years away in the constellation of Circinus, discovered by the Einstein X-ray Observatory in 1982. Its diameter is only . It is located in a pulsar wind nebula of its own creation, within the supernova remnant (SNR) MSH 15−52, whose supernova became visible approximately 1,700 years ago; it lies in the southern celestial hemisphere and is not visible from the northern hemisphere. The nebula spans about 150 light years. The pulsar's spin rate is "almost 7 times per second".
NASA described the star as "a rapidly spinning neutron star which is spewing energy out into the space around it to create complex and intriguing structures, including one that resembles a large cosmic hand". It is also known by the name "Hand of God". This phenomenon is called pareidolia.
Gallery
See also
List of neutron stars
Pulsar planet
References
External links
Chandra X-ray Center (CXO): Young Pulsar Shows Its Hand, Science Daily, 5 May 2009, retrieved 15 November 2024
Chandra X-ray Observatory blog
pulsars
Optical pulsars
Circinus
Pulsar wind nebulae
Articles containing video clips | PSR B1509−58 | Astronomy | 255 |
57,440,610 | https://en.wikipedia.org/wiki/Photovoltaic%20system%20performance | Photovoltaic system performance is a function of the climatic conditions, the equipment used and the system configuration. PV performance can be measured as the ratio of actual solar PV system output vs expected values, the measurement being essential for proper solar PV facility's operation and maintenance. The primary energy input is the global light irradiance in the plane of the solar arrays, and this in turn is a combination of the direct and the diffuse radiation.
The performance is measured by PV monitoring systems, which include a data logging device and often also a weather measurement device (on-site device or an independent weather data source). Photovoltaic performance monitoring systems serve several purposes - they are used to track trends in a single photovoltaic (PV) system, to identify faults in or damage to solar panels and inverters, to compare the performance of a system to design specifications or to compare PV systems at different locations. This range of applications requires various sensors and monitoring systems, adapted to the intended purpose. Specifically, there is a need for both electronic monitoring sensors and independent weather sensing (irradiance, temperature and more) in order to normalize PV facility output expectations. Irradiance sensing is very important for the PV industry and can be classified into two main categories - on-site pyranometers and satellite remote sensing; when onsite pyranometers are not available, regional weather stations are also sometimes utilized, but at lower quality of data; the Industrial IoT-powered sensorless measurement approach has recently evolved as the third option.
Sensors and photovoltaic monitoring systems are standardized in IEC 61724-1 and classified into three levels of accuracy, denoted by the letters “A”, “B” or “C”, or by the labels “High accuracy”, “Medium accuracy” and “Basic accuracy”. A parameter called the 'performance ratio' has been developed to evaluate the total value of PV system losses.
Principles
Photovoltaic system performance is generally dependent on incident irradiance in the plane of the solar panels, the temperature of the solar cells, and the spectrum of the incident light. Furthermore, it is dependent upon the inverter, which typically sets the operating voltage of the system. The voltage and current output of the system changes as lighting, temperature and load conditions change, so there is no specific voltage, current, or wattage at which the system always operates. Hence, system performance varies depending on its architecture (direction and tilt of modules), geographic location and the time of day, weather conditions (amount of solar insolation, cloud cover, temperature), and local disturbances such as shading, soiling, state of charge, and system component availability.
Performance by system type
Solar PV parks
Solar parks of industrial and utility scale may reach high performance figures. In modern solar parks the performance ratio should typically be in excess of 80%. Many solar PV parks utilize advanced performance monitoring solutions, which are supplied by a variety of technology providers.
Distributed solar PV
In rooftop solar systems it typically takes a longer time to identify a malfunction and send a technician, due to lower availability of sufficient photovoltaic system performance monitoring tools and higher costs of human labor. As a result, rooftop solar PV systems typically suffer from lower quality of operation & maintenance and essentially lower levels of system availability and energy output.
Off-grid solar PV
Most off-grid solar PV facilities lack any performance monitoring tools, due to a number of reasons - including monitoring equipment costs, cloud connection availability and O&M availability.
Performance measurement and monitoring
A number of technical solutions exist to provide performance measurement and monitoring for solar photovoltaic installations, differing according to data quality, compatibility with irradiance sensors as well as pricing.
Weather data acquisition
Weather data acquisition is generally relying on physical weather sensors and remote sensing with satellites.
Energy generation data availability and quality
An essential part of PV system performance evaluation is the availability and the quality of energy generation data. Access to the Internet has allowed a further improvement in energy monitoring and communication.
Typically, PV plant data is transmitted via a data logger to a central monitoring portal. Data transmission is dependent on the local cloud connectivity, thus being highly available in OECD countries, but more limited in developing countries. According to Samuel Zhang, vice president of Huawei Smart PV, over 90% of global PV plants will be fully digitalized by 2025.
Performance monitoring
In general, monitoring solutions can be classified to inverter manufacturer-provided logger and monitoring software solutions, independent data-logger solutions with custom software and finally agnostic monitoring software-only solutions compatible with different inverters and data-loggers.
Monitoring solutions by inverter manufacturers
Dedicated performance monitoring systems are available from a number of vendors. For solar PV systems that use microinverters (panel-level DC to AC conversion), module power data is automatically provided. Some systems allow setting performance alerts that trigger phone/email/text warnings when limits are reached. These solutions provide data for the system owner and/or the installer. Installers are able to remotely monitor multiple installations, and see at-a-glance the status of their entire installed base. All the major inverter manufacturers provide a data acquisition unit - whether a data logger or a direct means of communication with the portal.
These solutions have the advantage of providing of a maximum information from the inverter and of supplying it on a local display or transmitting it on the internet, in particular alerts from the inverter itself (temperature overload, loss of connection with a network, etc.).
Some of those monitoring solutions are:
Fronius accessible via Solar.web portal;
SMA's Webbox/Inverter Manager/Cluster-Controller loggers accessible via Sunnyportal and EnnexOS portals;
SolarEdge accessible via SolarEdge Monitoring and MySolarEdge (only application) portals;
Sungrow accessible via the iSolarCloud portal;
Independent data logging solutions connected to inverters
Generic data logging solutions connected to inverters make it possible to overcome the major drawback of inverter-specific manufacturer solutions - being compatible with several different manufacturers. These data acquisition units connect to the serial links of the inverters, complying with each manufacturer’s protocol. Generic data logging solutions are generally more affordable than inverter manufacturer solutions and allow aggregation of solar PV system fleets of varying inverter manufacturers.
Some of those monitoring solutions are:
AlsoEnergy loggers accessible via the PowerTrack portal;
Solar-Log loggers accessible via the WEB Enerest™ 4 portal;
Meteocontrol loggers accessible via the VCOM Cloud portal;
Solar Analytics' "Smart Solar logger"s accessible via the Solar Analytics portal;
Independent monitoring solutions
The last category is the most recent segment in the solar photovoltaic monitoring domain. Those are software based aggregation portals, able to aggregate information from both inverter-specific portals and data loggers as well as independent data loggers. Such solutions become more widespread as inverter-specific communication to the cloud is done more and more without data loggers, but rather as direct data connections.
Omnidian residential solar performance insurance partner Omnidian;
Solytic generic solar monitoring Solytic portal;
Sunreport device-agnostic cloud solar monitoring Sunreport platform;
Weather data sources
On-site irradiance sensors
On-site irradiance measurements are an important part of PV performance monitoring systems. Irradiance can be measured in the same orientation as the PV panels, so-called plane of array (POA) measurements, or horizontally, so-called global horizontal irradiance (GHI) measurements. Typical sensors used for such irradiance measurements include thermopile pyranometers, PV reference devices and photodiode sensors. To conform to a specific accuracy class, each sensor type must meet a certain set of specifications. The required specifications for each accuracy class are set out in IEC 61724-1.
If an irradiance sensor is placed in POA, it must be placed at the same tilt angle as the PV module, either by attaching it to the module itself or with an extra platform or arm at the same tilt level. Checking if the sensor is properly aligned can be done with portable tilt sensors or with an integrated tilt sensor.
Sensor maintenance
The standard also specifies a required maintenance schedule per accuracy class. Class C sensors require maintenance per manufacturer's requirement. Class B sensors need to be re-calibrated every 2 years and require a heater to prevent precipitation or condensation. Class A sensors need to be re-calibrated once per year, require cleaning once per week, require a heater and require ventilation (for thermopile pyranometers).
Satellite remote sensing of irradiance
PV performance can also be estimated by satellite remote sensing. These measurements are indirect because the satellites measure the solar radiance reflected off the earth surface. In addition, the radiance is filtered by the spectral absorption of Earth's atmosphere. This method is typically used in non-instrumented class B and class C monitoring systems to avoid costs and maintenance of on-site sensors. If the satellite-derived data is not corrected for local conditions, an error in radiance up to 10% is possible.
Equipment and performance standards
Sensors and monitoring systems are standardized in IEC 61724-1 and classified into three levels of accuracy, denoted by the letters “A”, “B” or “C”, or by the labels “High accuracy”, “Medium accuracy” and “Basic accuracy”.
In California, solar PV performance monitoring has been regulated by the State government. As of 2017, the governmental agency California Solar Initiative (CSI) provided a Performance Monitoring & Reporting Service certificate to eligible companies active in the solar segment and acting in line with CSI requirements.
A parameter called the 'performance ratio' has been developed to evaluate the total value of PV system losses. The performance ratio gives a measure of the output AC power delivered as a proportion of the total DC power which the solar modules should be able to deliver under the ambient climatic conditions.
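As a rough illustration of how the performance ratio is computed, the final yield (AC energy per installed kW of DC capacity) is divided by the reference yield (plane-of-array insolation relative to the STC irradiance of 1 kW/m²). The figures in the Python sketch below are hypothetical placeholders rather than data from any particular plant:

```python
# Hypothetical monthly figures, for illustration only.
energy_ac_kwh = 1_250.0       # metered AC energy over the period (kWh)
p_rated_kw = 10.0             # installed DC capacity at STC (kWp)
insolation_kwh_m2 = 160.0     # plane-of-array insolation over the period (kWh/m^2)
g_stc_kw_m2 = 1.0             # STC reference irradiance (kW/m^2)

reference_yield_h = insolation_kwh_m2 / g_stc_kw_m2    # equivalent full-sun hours
final_yield_h = energy_ac_kwh / p_rated_kw              # kWh delivered per kWp
performance_ratio = final_yield_h / reference_yield_h

print(f"Performance ratio: {performance_ratio:.2f}")    # about 0.78 for these numbers
```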
See also
Photovoltaics
Pyranometer
Remote sensing
Atmosphere of Earth
Absorption (electromagnetic radiation)
References
External links
NREL - Analytics of PV System Energy Performance Evaluation Method
Photovoltaic Geographical Information System (PVGIS) provides information on solar radiation and photovoltaic system performance for any location in the world, except the North and South Poles
Photovoltaics
Maintenance | Photovoltaic system performance | Engineering | 2,146 |
63,891,869 | https://en.wikipedia.org/wiki/ESO%20439-26 | ESO 439-26 was considered the least luminous white dwarf known. Located 140 light years away from the Sun, it is roughly 10 billion years old and has a temperature of 4560 Kelvin. Thus, despite being classified as a "white dwarf", it would actually appear yellowish in color.
This finding, however, was based on an overestimated parallax. The Gaia measurement of the parallax places the source at a greater distance and therefore gives an absolute magnitude of MG = 15.0 mag. For comparison, the white dwarf WD J2147–4035 has MG = 17.7 mag and is therefore less luminous. The updated MV is 15.46, using the Gaia parallax and the apparent V-magnitude from Ruiz et al. (see the formulae at the article absolute magnitude).
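The conversion from apparent magnitude and parallax to absolute magnitude uses the distance modulus, M = m + 5 log10(p) + 5 with the parallax p in arcseconds (ignoring extinction). The Python sketch below only illustrates the formula; the input values are placeholders, not the published measurements for ESO 439-26:

```python
import math

def absolute_magnitude(apparent_mag, parallax_mas):
    """M = m + 5*log10(parallax in arcsec) + 5, neglecting interstellar extinction."""
    parallax_arcsec = parallax_mas / 1000.0
    return apparent_mag + 5.0 * math.log10(parallax_arcsec) + 5.0

# Placeholder inputs (roughly a 20.5 mag star at 100 pc), for illustration only:
print(absolute_magnitude(apparent_mag=20.5, parallax_mas=10.0))   # 15.5
```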
References
White dwarfs
Hydra (constellation) | ESO 439-26 | Astronomy | 173 |
41,032,284 | https://en.wikipedia.org/wiki/Touchmate | Touchmate is a computer products manufacturing company founded in 1988 by Vasant Menghani. One of the leading computer product manufacturing companies in the UAE, it was rated "Best IT Brand of UAE" by Reseller-Magazine in 2012.
Its products range from tablets to smartphones.
References
Computer companies established in 1988 | Touchmate | Technology | 65 |
9,129,242 | https://en.wikipedia.org/wiki/Paulo%20Ribenboim | Paulo Ribenboim (born March 13, 1928) is a Brazilian-Canadian mathematician who specializes in number theory.
Biography
Ribenboim was born into a Jewish family in Recife, Brazil. He received his BSc in mathematics from the University of São Paulo in 1948, and won a fellowship to study with Jean Dieudonné in France at the University of Nancy in the early 1950s, where he became a close friend of Alexander Grothendieck.
He has contributed to the theory of ideals and of valuations.
Ribenboim has authored 246 publications including 13 books. He has been at Queen's University in Kingston, Ontario, since the 1960s, where he remains a professor emeritus.
Jean Dieudonné was one of his doctoral advisors. Andrew Granville, Jan Minac, Karl Dilcher and Aron Simis have been doctoral students of Ribenboim.
The Ribenboim Prize of the Canadian Number Theory Association is named in his honor.
Personal life
In 1951, Ribenboim married Huguette Demangelle, a French Catholic woman who he met in France. The couple have two children and five grandchildren, and have lived in Canada since 1962.
Bibliography
Paulo Ribenboim (1964) Functions, Limits, and Continuity , John Wiley & Sons, Inc.
References
External links
The Canadian Number Theory Association Ribenboim Prize
1928 births
Living people
People from Recife
Brazilian Jews
Number theorists
Brazilian emigrants to Canada
Brazilian expatriates in France
Academic staff of Queen's University at Kingston
20th-century Brazilian mathematicians
21st-century Canadian mathematicians | Paulo Ribenboim | Mathematics | 321 |
14,316,080 | https://en.wikipedia.org/wiki/Sapping | Sapping is a term used in siege operations to describe the digging of a covered trench (a "sap") to approach a besieged place without danger from the enemy's fire. The purpose of the sap is usually to advance a besieging army's position towards an attacked fortification. It is excavated by specialised military units, whose members are often called sappers.
By using the sap, the besiegers could move closer to the walls of a fortress, without exposing the sappers to direct fire from the defending force. To protect the sappers, trenches were usually dug at an angle in zig-zag pattern (to protect against enfilading fire from the defenders), and at the head of the sap a defensive shield made of gabions (or a mantlet) could be deployed.
Once the saps were close enough, siege engines or cannon could be moved through the trenches to get closer to—and enable firing at—the fortification. The goal of firing is to batter a breach in the curtain walls, to allow attacking infantry to get past the walls. Prior to the invention of large pieces of siege artillery, miners could start to tunnel from the head of a sap to undermine the walls. A fire or gunpowder would then be used to create a crater into which a section of the fortifications would fall, creating a breach.
Before the development of explosives, sapping was the undermining of an enemy's fortifications, which would collapse when the sap's supports were removed. Later, explosives were placed surreptitiously in the undermining sap or mine, then detonated, as was done with 450 tons of high explosive in the First World War battle of Messines, the largest planned explosion until the 1945 Trinity atomic bomb test.
History
Pre-gunpowder
A way to force entry into a fortified structure was to dig a mine or sap under defensive walls, typically shored up by wooden props. On collapsing the tunnel, for example by burning the props, the wall would collapse.
1500s
Sapping trenches, cannons and gunpowder explosives were a potent force against fortifications. However, the Siege of Godesberg of 1583 during the Cologne War showed that fortresses could still withstand sapping and explosives to a point. The attacking force of Ferdinand of Bavaria fired on the fortress with large-caliber cannons, but this had little impact on the walls. The cannons were firing heavy shot, but the height of the fortress significantly reduced the force of the impact with the walls, which bounced off to little effect, although the fortress dated from the 14th century. To breach the walls, Ferdinand ordered his soldiers to dig into the feldspar supporting the side of the mountain and place an explosive charge. Even after the powder was ignited and a substantial portion of the wall, the gate, and the inner walls were breached, the defenders still held out for three days.
Trace Italienne forts
Sapping became necessary as a response to the development and spread of trace Italienne in defensive architecture in the 1500s. The Italian style star fort bastion made siege warfare and sapping the modus operandi of military operations in the late medieval and first decades of the early modern period of warfare. Fortresses with abutments with gentler angles were difficult to breach; cannonballs and mortar shells often had little impact on the walls, or impact that could be readily repaired after night fell. Towers no longer protruded at right angles from the wall; rather, they blended with the wall. These created a two-fold advantage. First, defenders in the towers had a field of fire of 280 degrees or more. This range of fire and the towers' positioning allowed defenders to fire upon the attackers' flank as they advanced, a deadly fire called enfilade. Consequently, a hostile force which ranged their cannons was less effective, as the "hostile cannon [had] to fire from longer range" and defenders could better enfilade attackers.
1600s
During the English Civil War, there was a siege of Newark-on-Trent which took place from 6 March 1645 to 8 May 1646. A detailed map of the Cavaliers' defences of Newark and the lines of circumvallation and contravallation, along with the besiegers' redoubts and fortified camps, was drawn up by R Clampe, the besieging Roundheads' chief engineer. It includes a zig-zag sap emerging from a bastion of the circumvallation. The zig-zags are at such angles and positions that the defenders were unable to bring enfilade fire to bear. Once the sap was completed, four cannons were placed much closer to a gateway than those in the bastions of the circumvallation.
American Civil War
In the American Civil War, troops advanced their sap under cover of a sap roller or mantlet by forming a parapet on the engaged side of the trench one gabion at a time and filling it with earth taken from the trench.
First World War
During First World War trench warfare, the combatants' sappers, who were often experienced civilian miners rejected for combat duties due to age or ill-health, strove to undermine each other's positions, working silently to avoid detection. After completing a mine it was filled with explosives, sometimes hundreds of tons, and detonated, followed by an attack on the surprised survivors from the destroyed position.
Russian sap
A Russian sap is a tunnel dug at a shallow depth under no man's land towards an enemy position. It allows the attacking infantry to approach an enemy position without being detected and safe from enemy fire. For the attack, the tunnel is opened and the infantry attacks the enemy position at comparatively short range. Russian saps were widely used in the First World War, for example during the Battle of the Somme, when four of them were further equipped with Livens Large Gallery Flame Projectors. Similar tactics were used in the Korean War by the Chinese People's Volunteer Army, when they dug under the Yalu River to attack US troops, and by Hamas, when carrying out tunnel warfare from the Gaza Strip against Israel.
See also
Mining (military)
References
Notes
Bibliography
External links
The Civil War Field Fortifications Website
Military engineering | Sapping | Engineering | 1,261 |
1,339,640 | https://en.wikipedia.org/wiki/Oversampling | In signal processing, oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it. The Nyquist rate is defined as twice the bandwidth of the signal. Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements.
A signal is said to be oversampled by a factor of N if it is sampled at N times the Nyquist rate.
Motivation
There are three main reasons for performing oversampling: to improve anti-aliasing performance, to increase resolution and to reduce noise.
Anti-aliasing
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampling system, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, the digital filter associated with this downsampling is easier to implement than a comparable analog filter required by a non-oversampled system.
Resolution
In practice, oversampling is implemented in order to reduce cost and improve performance of an analog-to-digital converter (ADC) or digital-to-analog converter (DAC). When oversampling by a factor of N, the dynamic range also increases by a factor of N because there are N times as many possible values for the sum. However, the signal-to-noise ratio (SNR) increases only by √N, because summing up uncorrelated noise increases its amplitude by √N, while summing up a coherent signal increases its average by N. As a result, the SNR increases by √N.
For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the SNR by a factor of 16, effectively adding 4 bits to the resolution and producing a single sample with 24-bit resolution.
The number of samples required to get n bits of additional data precision is N = (2^n)^2 = 4^n.
To get the mean sample scaled up to an integer with n additional bits, the sum of the 4^n samples is divided by 2^n.
This averaging is only effective if the signal contains sufficient uncorrelated noise to be recorded by the ADC. If not, in the case of a stationary input signal, all samples would have the same value and the resulting average would be identical to this value; so in this case, oversampling would have made no improvement. In similar cases where the ADC records no noise and the input signal is changing over time, oversampling improves the result, but to an inconsistent and unpredictable extent.
Adding some dithering noise to the input signal can actually improve the final result because the dither noise allows oversampling to work to improve resolution. In many practical applications, a small increase in noise is well worth a substantial increase in measurement resolution. In practice, the dithering noise can often be placed outside the frequency range of interest to the measurement, so that this noise can be subsequently filtered out in the digital domain—resulting in a final measurement, in the frequency range of interest, with both higher resolution and lower noise.
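As a rough illustration of this resolution gain, the following Python sketch (the signal level, the uniform dither amplitude and the factor of 256 are arbitrary illustrative choices, not values taken from any particular converter) quantizes a constant input with an ideal one-LSB quantizer and averages 4^n samples, with and without dither:

import numpy as np

rng = np.random.default_rng(0)

true_value = 0.4037            # signal level, in units of one coarse LSB
n_extra_bits = 4               # desired additional bits of resolution
n_samples = 4 ** n_extra_bits  # 256 samples averaged per output value

def coarse_adc(x):
    # Ideal quantizer with a step of one LSB.
    return np.round(x)

# Without dither every sample quantizes to the same code,
# so averaging cannot recover the fractional part.
plain = coarse_adc(np.full(n_samples, true_value))
print("no dither:", plain.mean())                        # 0.0

# With about one LSB of dither, the average converges toward the true
# value, and dividing the sum by 2**n yields an integer-scaled result
# with n additional bits of resolution.
dithered = coarse_adc(true_value + rng.uniform(-0.5, 0.5, n_samples))
print("dithered :", dithered.mean())                     # close to 0.4037
print("scaled   :", dithered.sum() / 2 ** n_extra_bits)  # close to 0.4037 * 16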
Noise
If multiple samples are taken of the same quantity with uncorrelated noise added to each sample, then, because uncorrelated signals combine more weakly than correlated ones (as discussed above), averaging N samples reduces the noise power by a factor of N. If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by a factor of four, which corresponds to a factor of two improvement in terms of voltage.
Certain kinds of ADCs known as delta-sigma converters produce disproportionately more quantization noise at higher frequencies. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, a final result with less noise (over the entire band of the converter) can be obtained. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
Example
Consider a signal with a bandwidth or highest frequency of B = 100 Hz. The sampling theorem states that sampling frequency would have to be greater than 200 Hz. Sampling at four times that rate requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ((fs/2) − B = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency was 200 Hz. Achieving an anti-aliasing filter with 0 Hz transition band is unrealistic whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult.
Reconstruction
The term oversampling is also used to denote a process used in the reconstruction phase of digital-to-analog conversion, in which an intermediate high sampling rate is used between the digital input and the analog output. Here, digital interpolation is used to add additional samples between recorded samples, thereby converting the data to a higher sample rate, a form of upsampling. When the resulting higher-rate samples are converted to analog, a less complex and less expensive analog reconstruction filter is required. Essentially, this is a way to shift some of the complexity of reconstruction from analog to the digital domain. Oversampling in the ADC can achieve some of the same benefits as using a higher sample rate at the DAC.
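A simplified sketch of this interpolation step follows; the windowed-sinc filter design, the filter length and the factor of four are arbitrary assumptions for illustration rather than details of any particular DAC:

import numpy as np

def upsample(x, factor, taps=101):
    # Zero-stuffing raises the sample rate by `factor`; the windowed-sinc
    # low-pass (cutoff near the original Nyquist frequency, passband gain of
    # roughly `factor`) removes the resulting spectral images.
    stuffed = np.zeros(len(x) * factor)
    stuffed[::factor] = x
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / factor) * np.hamming(taps)
    return np.convolve(stuffed, h, mode="same")

fs = 800                               # original sample rate, Hz
t = np.arange(200) / fs
x = np.sin(2 * np.pi * 100 * t)        # 100 Hz tone sampled at 800 Hz
y = upsample(x, 4)                     # interpolated to 3200 Hz before the DAC
print(len(x), "->", len(y))            # 200 -> 800 samples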
See also
Oversampled binary image sensor
Supersampling
Undersampling
Notes
References
Further reading
Digital signal processing
Information theory | Oversampling | Mathematics,Technology,Engineering | 1,223 |
24,008,126 | https://en.wikipedia.org/wiki/C9H18O | The molecular formula C9H18O (molar mass: 142.24 g/mol) may refer to:
Nonanal
Nonanones
2-Nonanone
3-Nonanone
4-Nonanone
5-Nonanone, or dibutyl ketone
3,3,5-Trimethylcyclohexanol | C9H18O | Chemistry | 88 |
392,143 | https://en.wikipedia.org/wiki/Palomar%20Observatory | Palomar Observatory is an astronomical research observatory in the Palomar Mountains of San Diego County, California, United States. It is owned and operated by the California Institute of Technology (Caltech). Research time at the observatory is granted to Caltech and its research partners, which include the Jet Propulsion Laboratory (JPL), Yale University, and the National Astronomical Observatories of China.
The observatory operates several telescopes, including the Hale Telescope, the Samuel Oschin telescope (dedicated to the Zwicky Transient Facility, ZTF), the 60-inch telescope, and the Gattini-IR telescope. Decommissioned instruments include the Palomar Testbed Interferometer and the first telescope at the observatory, an 18-inch Schmidt camera from 1936.
History
Hale's vision for large telescopes and Palomar Observatory
Astronomer George Ellery Hale, whose vision created Palomar Observatory, built the world's largest telescope four times in succession. He published a 1928 article proposing what was to become the 200-inch Palomar reflector; it was an invitation to the American public to learn about how large telescopes could help answer questions relating to the fundamental nature of the universe. Hale followed this article with a letter to the International Education Board (later absorbed into the General Education Board) of the Rockefeller Foundation dated April 16, 1928, in which he requested funding for this project. In his letter, Hale stated:
"No method of advancing science is so productive as the development of new and more powerful instruments and methods of research. A larger telescope would not only furnish the necessary gain in light space-penetration and photographic resolving power, but permit the application of ideas and devices derived chiefly from the recent fundamental advances in physics and chemistry."
Hale Telescope
The 200-inch telescope is named after astronomer and telescope builder George Ellery Hale. It was built by Caltech with a $6 million grant from the Rockefeller Foundation, using a Pyrex blank manufactured by Corning Glass Works under the direction of George McCauley. Dr. J.A. Anderson was the initial project manager, assigned in the early 1930s. The telescope (the largest in the world at that time) saw first light January 26, 1949, targeting NGC 2261. The American astronomer Edwin Powell Hubble was the first astronomer to use the telescope.
The 200-inch telescope was the largest telescope in the world from 1949 until 1975, when the Russian BTA-6 telescope saw first light. Astronomers using the Hale Telescope have discovered quasars (a subset of what was to become known as Active Galactic Nuclei) at cosmological distances. They have studied the chemistry of stellar populations, leading to an understanding of stellar nucleosynthesis as the origin of the elements in the universe in their observed abundances, and have discovered thousands of asteroids. A one-tenth-scale engineering model of the telescope at Corning Community College in Corning, New York, home of the Corning Glass Works (now Corning Incorporated), was used to discover at least one minor planet, 34419 Corning.
Architecture and design
Russell W. Porter developed the Art Deco architecture of the Observatory's buildings, including the dome of the 200-inch Hale Telescope. Porter was also responsible for much of the technical design of the Hale Telescope and Schmidt Cameras, producing a series of cross-section engineering drawings. Porter worked on the designs in collaboration with many engineers and Caltech committee members.
Max Mason directed the construction and Theodore von Karman was involved in the engineering.
Directors
Ira Sprague Bowen, 1948–1964
Horace Welcome Babcock, 1964–1978
Maarten Schmidt, 1978–1980
Gerry Neugebauer, 1980–1994
James Westphal, 1994–1997
Wallace Leslie William Sargent, 1997–2000
Richard Ellis, 2000–2006
Shrinivas Kulkarni, 2006–2018
Jonas Zmuidzinas, 2018–
Palomar Observatory and light pollution
Much of the surrounding region of Southern California has adopted shielded lighting to reduce the light pollution that would potentially affect the observatory.
Telescopes and instruments
The Hale Telescope was first proposed in 1928 and has been operational since 1949. It was the largest telescope in the world for 26 years.
The 60-inch reflecting telescope is located in the Oscar Mayer Building and operates fully robotically. The telescope became operational in 1970, and was built to increase sky access for Palomar astronomers. Among its notable accomplishments is the discovery of the first brown dwarf. The 60-inch telescope currently hosts the SED Machine integral field spectrograph instrument used as part of ZTF transient followup and classification.
The Samuel Oschin telescope development began in 1938, and the telescope saw first light in 1948. It was initially called the 48-inch Schmidt, and was dedicated to Samuel Oschin in 1986. Among many notable accomplishments, Oschin observations led to the discovery of the important dwarf planets Eris and Sedna. Eris's discovery initiated discussions in the international astronomy community that led to Pluto being re-classified as a dwarf planet in 2006. The Oschin presently operates fully robotically and hosts the 570-million-pixel ZTF Camera, the discovery engine for the ZTF project.
The WINTER (The Wide-field Infrared Transient Explorer) 1x1-degree reflecting robotic telescope has been operational since 2021. It is dedicated to the seeing-limited time domain survey of the infrared (IR) sky, with a particular emphasis on identifying r-process material in binary neutron star (BNS) merger remnants detected by LIGO. The instrument observes in Y, J, and a short-H (Hs) band tuned to the long-wave cutoff of the InGaAs sensors, covering a wavelength range from 0.9 to 1.7 microns.
Decommissioned instruments
An 18-inch Schmidt camera became the first operational telescope at Palomar in 1936. In the 1930s, Fritz Zwicky and Walter Baade advocated adding survey telescopes at Palomar, and the 18-inch was developed to demonstrate the Schmidt concept. Zwicky used the 18-inch to discover over 100 supernovae in other galaxies. Comet Shoemaker-Levy 9 was discovered with this instrument in 1993. It has since been decommissioned and is on display at the small museum/visitor center.
The Palomar Testbed Interferometer (PTI) was a multi-telescope instrument that made high-angular-resolution measurements of the apparent sizes and relative positions of stars. The apparent sizes and in some cases shapes of bright stars were measured with PTI, as well as the apparent orbits of multiple stellar systems. PTI operated from 1995 to 2008.
The Palomar Planet Search Telescope (PPST), also known as Sleuth, was a robotic telescope that operated from 2003 until 2008. It was dedicated to the search for planets around other stars using the transit method. It operated in conjunction with telescopes at Lowell Observatory and in the Canary Islands as part of the Trans-Atlantic Exoplanet Survey (TrES).
Research
Palomar Observatory remains an active research facility, operating multiple telescopes every clear night, and supporting a large international community of astronomers who study a broad range of research topics.
The Hale Telescope remains in active research use and operates with a diverse instrument suite of optical and near-infrared spectrometers and imaging cameras at multiple foci. The Hale also operates with a multi-stage, high-order adaptive optics system to provide diffraction-limited imaging in the near-infrared. Key historical science results with the Hale include cosmological measurement of the Hubble flow, the discovery of quasars as the precursor of Active Galactic Nuclei, and studies of stellar populations and stellar nucleosynthesis.
The Oschin and 60-inch telescopes operate robotically and together support a major transient astronomy program, the Zwicky Transient Facility.
The Oschin was created to facilitate astronomical reconnaissance, and has been used in many notable astronomical surveys—among them are:
POSS-I
The initial Palomar Observatory Sky Survey (POSS or POSS-I), sponsored by the National Geographic Society, was completed in 1958. The first plates were exposed in November 1948 and the last in April 1958. This survey was performed using 14-inch-square (6-degree-square) blue-sensitive (Kodak 103a-O) and red-sensitive (Kodak 103a-E) photographic plates on the Oschin Telescope. The survey covered the sky from a declination of +90° (celestial north pole) to −27° and all right ascensions, and had a sensitivity to +22 magnitudes (about 1 million times fainter than the limit of human vision). A southern extension extending the sky coverage of the POSS to −33° declination was shot in 1957–1958. The final POSS I dataset consisted of 937 plate pairs.
The Digitized Sky Survey (DSS) produced images which were based on the photographic data developed in the course of POSS-I.
J.B. Whiteoak, an Australian radio astronomer, used the same instrument to extend POSS-I data south to −42° declination. Whiteoak's observations used the same field centers as the corresponding northern declination zones. Unlike POSS-I, the Whiteoak extension consisted only of red-sensitive (Kodak 103a-E) photographic plates.
POSS-II
The Second Palomar Observatory Sky Survey (POSS II, sometimes Second Palomar Sky Survey) was performed in the 1980s and 1990s and made use of better, faster films and an upgraded telescope. The Oschin Schmidt was upgraded with an achromatic corrector and provisions for autoguiding. Images were recorded in three wavelengths: blue (IIIaJ, 480 nm), red (IIIaF, 650 nm), and near-infrared (IVN, 850 nm) plates. Observers on POSS II included C. Brewer, D. Griffiths, W. McKinley, J. Dave Mendenhall, K. Rykoski, Jeffrey L. Phinney, and Jean Mueller (who discovered over 100 supernovae by comparing the POSS I and POSS II plates). Mueller also discovered several comets and minor planets during the course of POSS II, and the bright Comet Wilson 1986 was discovered by then-graduate-student C. Wilson early in the survey.
Until the completion of the Two Micron All Sky Survey (2MASS), POSS II was the most extensive wide-field sky survey. When completed, the Sloan Digital Sky Survey will surpass POSS I and POSS II in depth, although the POSS covers almost 2.5 times more area on the sky.
POSS II also exists in digitized form (that is, the photographic plates were scanned) as part of the Digitized Sky Survey (DSS).
QUEST
The multi-year POSS projects were followed by the Palomar Quasar Equatorial Survey Team (QUEST) Variability survey. This survey yielded results that were used by several projects, including the Near-Earth Asteroid Tracking project. Another program that used the QUEST results discovered 90377 Sedna on 14 November 2003, and around 40 Kuiper belt objects. Other programs that share the camera are Shri Kulkarni's search for gamma-ray bursts (this takes advantage of the automated telescope's ability to react as soon as a burst is seen and take a series of snapshots of the fading burst), Richard Ellis's search for supernovae to test whether the universe's expansion is accelerating or not, and S. George Djorgovski's quasar search.
The camera for the Palomar QUEST Survey was a mosaic of 112 charge-coupled devices (CCDs) covering the whole (4° × 4°) field of view of the Schmidt telescope. At the time it was built, it was the largest CCD mosaic used in an astronomical camera. This instrument was used to produce The Big Picture, the largest astronomical photograph ever produced. The Big Picture is on display at Griffith Observatory.
Current research
Current research programs on the 200-inch Hale Telescope cover the range of the observable universe, including studies on near-Earth asteroids, outer Solar System planets, Kuiper Belt objects, star formation, exoplanets, black holes and x-ray binaries, supernovae and other transient source followup, and quasars/Active Galactic Nuclei.
The 48-inch Samuel Oschin Schmidt Telescope operates robotically, and supports a new transient astronomy sky survey, the Zwicky Transient Facility (ZTF).
The 60-inch telescope operates robotically, and supports ZTF by providing rapid, low-dispersion optical spectra for initial transient classification using the for-purpose Spectral Energy Distribution Machine (SEDM) integral field spectrograph.
Visiting and public engagement
Palomar Observatory is an active research facility. However, selected observatory areas are open to the public during the day. Visitors can take self-guided tours of the 200-inch telescope daily from 9 a.m. to 3 p.m. The observatory is open 7 days a week, year round, except for December 24 and 25 and during times of inclement weather. Guided tours of the 200-inch Hale Telescope dome and observing area are available Saturdays and Sundays from April through October. Behind-the-scenes tours for the public are offered through the community support group, Palomar Observatory Docents.
Palomar Observatory also has an on-site museum—the Greenway Visitor Center, containing observatory and astronomy-relevant exhibits, a gift shop, and hosts periodic public events.
For those unable to travel to the observatory, Palomar provides an extensive virtual tour that provides virtual access to all the major research telescopes on-site and the Greenway Center and has extensive embedded multimedia to provide additional context. Similarly the observatory actively maintains an extensive website and YouTube channel to support public engagement.
The observatory is located off State Route 76 in northern San Diego County, California, two hours' drive from downtown San Diego and three hours' drive from central Los Angeles (UCLA, LAX airport). Those staying at the nearby Palomar Campground can visit Palomar Observatory by hiking up Observatory Trail. Notably, Ben Burtt, sound designer for the original Star Wars, recorded various sounds at the Palomar Observatory, including motors and the shutters on the dome, to add background sounds for the Death Star.
Climate
Palomar has a hot-summer Mediterranean climate (Köppen Csa).
Selected books
1983 — Calvino, Italo. Mr. Palomar. Torino: G. Einaudi. ; OCLC 461880054
1987 — Preston, Richard. First Light. New York: Atlantic Monthly Press. ; OCLC 16004290
1994 — Florence, Ronald. The Perfect Machine. New York: HarperCollins. ; OCLC 611549937
2010 — Brown, Michael E. How I Killed Pluto and Why It Had It Coming. Spiegel & Grau. ; OCLC 495271396
2020 — Schweizer, Linda. Cosmic Odyssey. MIT Press
See also
List of astronomical observatories
Mount Laguna Observatory
National Geographic Society – Palomar Observatory Sky Survey
References
Further reading
External links
Caltech Astronomy: Palomar Observatory – official observatory website
Palomar Skies, news and history written by former Palomar public affairs coordinator Scott Kardel
The SBO Palomar Sky Survey Prints
Palomar Observatory Clear Sky Clock Forecast of observing conditions.
Palomar Observatory YouTube Channel
Palomar Observatory Virtual Tour
Astronomical observatories in California
Astronomy institutes and departments
California Institute of Technology buildings and structures
Palomar Mountains
Buildings and structures in San Diego County, California
Natural history of the Peninsular Ranges | Palomar Observatory | Astronomy | 3,177 |
5,449,464 | https://en.wikipedia.org/wiki/Vizing%27s%20theorem | In graph theory, Vizing's theorem states that every simple undirected graph may be edge colored using a number of colors that is at most one larger than the maximum degree Δ of the graph. At least Δ colors are always necessary, so the undirected graphs may be partitioned into two classes: "class one" graphs for which Δ colors suffice, and "class two" graphs for which Δ + 1 colors are necessary. A more general version of Vizing's theorem states that every undirected multigraph without loops can be colored with at most Δ + μ colors, where μ is the multiplicity of the multigraph. The theorem is named for Vadim G. Vizing who published it in 1964.
Discovery
The theorem discovered by Soviet mathematician Vadim G. Vizing was published in 1964 when Vizing was working in Novosibirsk and became known as Vizing's theorem. Indian mathematician R. P. Gupta independently discovered the theorem, while undertaking his doctorate (1965-1967).
Examples
When Δ = 1, the graph must itself be a matching, with no two edges adjacent, and its edge chromatic number is one. That is, all graphs with Δ = 1 are of class one.
When Δ = 2, the graph must be a disjoint union of paths and cycles. If all cycles are even, they can be 2-edge-colored by alternating the two colors around each cycle. However, if there exists at least one odd cycle, then no 2-edge-coloring is possible. That is, a graph with Δ = 2 is of class one if and only if it is bipartite.
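The class one / class two distinction can be made concrete for very small graphs with a brute-force check. The following Python sketch (exponential in the number of edges, so only usable on tiny examples; the function name is purely illustrative) tests whether Δ colors suffice and otherwise reports class two, relying on Vizing's theorem that Δ + 1 colors always work:

from itertools import product

def edge_chromatic_class(edges):
    # Classify a small simple graph as Vizing class one or class two by
    # exhaustively testing whether its edges can be properly colored with
    # Delta colors; by Vizing's theorem Delta + 1 colors always suffice.
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    delta = max(degree.values())

    def colorable(k):
        for colors in product(range(k), repeat=len(edges)):
            used = set()
            ok = True
            for (u, v), c in zip(edges, colors):
                if (u, c) in used or (v, c) in used:
                    ok = False
                    break
                used.add((u, c))
                used.add((v, c))
            if ok:
                return True
        return False

    return 1 if colorable(delta) else 2

print(edge_chromatic_class([(0, 1), (1, 2), (2, 3), (3, 0)]))  # even cycle: class 1
print(edge_chromatic_class([(0, 1), (1, 2), (2, 0)]))          # odd cycle:  class 2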
Proof
This proof is inspired by .
Let be a simple undirected graph. We proceed by induction on , the number of edges. If the graph is empty, the theorem trivially holds. Let and suppose a proper -edge-coloring exists for all where .
We say that color } is missing in with respect to proper -edge-coloring if for all . Also, let -path from denote the unique maximal path starting in with -colored edge and alternating the colors of edges (the second edge has color , the third edge has color and so on), its length can be . Note that if is a proper -edge-coloring of then every vertex has a missing color with respect to .
Suppose that no proper -edge-coloring of exists. This is equivalent to this statement:
(1) Let and be arbitrary proper -edge-coloring of and be missing from and be missing from with respect to . Then the -path from ends in .
This is equivalent, because if (1) doesn't hold, then we can interchange the colors and on the -path and set the color of to be , thus creating a proper -edge-coloring of from . The other way around, if a proper -edge-coloring exists, then we can delete , restrict the coloring and (1) won't hold either.
Now, let and be a proper -edge-coloring of and be missing in with respect to . We define to be a maximal sequence of neighbours of such that is missing in with respect to for all .
We define colorings as
for all ,
not defined,
otherwise.
Then is a proper -edge-coloring of due to definition of . Also, note that the missing colors in are the same with respect to for all .
Let be the color missing in with respect to , then is also missing in with respect to for all . Note that cannot be missing in , otherwise we could easily extend , therefore an edge with color is incident to for all . From the maximality of , there exists such that . From the definition of this holds:
Let be the -path from with respect to . From (1), has to end in . But is missing in , so it has to end with an edge of color . Therefore, the last edge of is . Now, let be the -path from with respect to . Since is uniquely determined and the inner edges of are not changed in , the path uses the same edges as in reverse order and visits . The edge leading to clearly has color . But is missing in , so ends in . Which is a contradiction with (1) above.
Classification of graphs
Several authors have provided additional conditions that classify some graphs as being of class one or class two, but do not provide a complete classification. For instance, if the vertices of maximum degree in a graph G form an independent set, or more generally if the induced subgraph for this set of vertices is a forest, then G must be of class one.
showed that almost all graphs are of class one. That is, in the Erdős–Rényi model of random graphs, in which all n-vertex graphs are equally likely, let p(n) be the probability that an n-vertex graph drawn from this distribution is of class one; then p(n) approaches one in the limit as n goes to infinity. For more precise bounds on the rate at which p(n) converges to one, see .
Planar graphs
Vizing showed that a planar graph is of class one if its maximum degree is at least eight.
In contrast, he observed that for any maximum degree in the range from two to five, there exist
planar graphs of class two. For degree two, any odd cycle is such a graph, and for degree three, four, and five, these graphs can be constructed from platonic solids by replacing a single edge by a path of two adjacent edges.
Vizing's planar graph conjecture states that all simple, planar graphs with maximum degree six or seven are of class one, closing the remaining possible cases.
Independently, later authors partially proved Vizing's planar graph conjecture by showing that all planar graphs with maximum degree seven are of class one.
Thus, the only case of the conjecture that remains unsolved is that of maximum degree six. This conjecture has implications for the total coloring conjecture.
The planar graphs of class two constructed by subdivision of the platonic solids are not regular: they have vertices of degree two as well as vertices of higher degree. The four color theorem on vertex coloring of planar graphs is equivalent to the statement that every bridgeless 3-regular planar graph is of class one.
Graphs on nonplanar surfaces
In 1969, Branko Grünbaum conjectured that every 3-regular graph with a polyhedral embedding on any two-dimensional oriented manifold such as a torus must be of class one. In this context, a polyhedral embedding is a graph embedding such that every face of the embedding is topologically a disk and such that the dual graph of the embedding is simple, with no self-loops or multiple adjacencies. If true, this would be a generalization of the four color theorem, which was shown by Tait to be equivalent to the statement that 3-regular graphs with a polyhedral embedding on a sphere are of class one. However, the conjecture was shown to be false by the construction of snarks that have polyhedral embeddings on high-genus orientable surfaces. Based on this construction, it was also shown that it is NP-complete to tell whether a polyhedrally embedded graph is of class one.
Algorithms
Misra and Gries describe a polynomial time algorithm for coloring the edges of any graph with Δ + 1 colors, where Δ is the maximum degree of the graph. That is, the algorithm uses the optimal number of colors for graphs of class two, and uses at most one more color than necessary for all graphs. Their algorithm follows the same strategy as Vizing's original proof of his theorem: it starts with an uncolored graph, and then repeatedly finds a way of recoloring the graph in order to increase the number of colored edges by one.
More specifically, suppose that is an uncolored edge in a partially colored graph. The algorithm of Misra and Gries may be interpreted as constructing a directed pseudoforest (a graph in which each vertex has at most one outgoing edge) on the neighbors of : for each neighbor of , the algorithm finds a color that is not used by any of the edges incident to , finds the vertex (if it exists) for which edge has color , and adds as an edge to . There are two cases:
If the pseudoforest constructed in this way contains a path from to a vertex that has no outgoing edges in , then there is a color that is available both at and . Recoloring edge with color allows the remaining edge colors to be shifted one step along this path: for each vertex in the path, edge takes the color that was previously used by the successor of in the path. This leads to a new coloring that includes edge .
If, on the other hand, the path starting from in the pseudoforest leads to a cycle, let be the neighbor of at which the path joins the cycle, let be the color of edge , and let be a color that is not used by any of the edges at vertex . Then swapping colors and on a Kempe chain either breaks the cycle or the edge on which the path joins the cycle, leading to the previous case.
With some simple data structures to keep track of the colors that are used and available at each vertex, the construction of the pseudoforest and the recoloring steps of the algorithm can all be implemented in time O(n), where n is the number of vertices in the input graph. Since these steps need to be repeated m times, with each repetition increasing the number of colored edges by one, the total time is O(mn).
In an unpublished technical report, claimed a faster time bound for the same problem of coloring with colors.
History
In both of his papers, Vizing mentions that his work was motivated by a theorem of Claude Shannon showing that multigraphs could be colored with at most (3/2)Δ colors. Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz.
See also
Brooks' theorem relating vertex colorings to maximum degree
Notes
References
External links
Proof of Vizing's theorem at PlanetMath.
Graph coloring
Theorems in graph theory | Vizing's theorem | Mathematics | 2,069 |
14,722,987 | https://en.wikipedia.org/wiki/DNA%20damage-inducible%20transcript%203 | DNA damage-inducible transcript 3, also known as C/EBP homologous protein (CHOP), is a pro-apoptotic transcription factor that is encoded by the DDIT3 gene. It is a member of the CCAAT/enhancer-binding protein (C/EBP) family of DNA-binding transcription factors. The protein functions as a dominant-negative inhibitor by forming heterodimers with other C/EBP members, preventing their DNA binding activity. The protein is implicated in adipogenesis and erythropoiesis and has an important role in the cell's stress response.
Structure
C/EBP proteins are known to have a conserved C-terminal structure, the basic leucine zipper (bZIP) domain, that is necessary for the formation of DNA-binding-capable homodimers or heterodimers with other proteins or members of the C/EBP protein family. CHOP is a relatively small (29 kDa) protein that differs from most C/EBP proteins in several amino acid substitutions, which impacts its DNA-binding ability.
Regulation and function
Due to a variety of upstream and downstream regulatory interactions, CHOP plays an important role in ER stress-induced apoptosis caused by a variety of stimuli such as pathogenic microbial or viral infections, amino acid starvation, mitochondrial stress, neurological diseases, and neoplastic diseases.
Under normal physiological conditions, CHOP is ubiquitously present at very low levels. However, under overwhelming ER stress conditions, the expression of CHOP rises sharply along with the activation of apoptotic pathways in a wide variety of cells. Those processes are mainly regulated by three factors: protein kinase RNA-like endoplasmic reticulum kinase (PERK), activating transcription factor 6 (ATF6), and inositol-requiring protein 1 (IRE1α).
Upstream regulatory pathways
During ER stress, CHOP is mainly induced via activation of the integrated stress response pathways through the subsequent downstream phosphorylation of a translation initiation factor, eukaryotic initiation factor 2α (eIF2α), and induction of a transcription factor, activation transcription factor 4 (ATF4), which converges on the promoters of target genes, including CHOP.
Integrated stress response, and thus CHOP expression, can be induced by
amino acid starvation through general control non-derepressible-2 (GCN2)
viral infection through the vertebrate-specific kinases - double-stranded RNA-activated protein kinase (PKR)
iron deficiency through heme-regulated inhibitor kinase (HRI)
stress from the accumulation of unfolded or misfolded proteins in the ER activates the integrated stress response through protein kinase RNA-like endoplasmic reticulum kinase (PERK).
Under ER stress, activated transmembrane protein ATF6 translocates to the nucleus and interacts with ATF/cAMP response elements and ER stress-response elements, binding the promoters and inducing transcription of several genes involved in unfolded protein response (including CHOP, XBP1 and others). Thus, ATF6 activates the transcription of both CHOP and XBP-1, while XBP-1 can also upregulate the expression of CHOP.
ER stress also stimulates transmembrane protein IRE1α activity. Upon activation, IRE1α splices the XBP-1 mRNA introns to produce a mature and active XBP-1 protein that upregulates CHOP expression. IRE1α also stimulates the activation of the apoptotic-signaling kinase-1 (ASK1), which then activates the downstream kinases, Jun-N-terminal kinase (JNK) and p38 mitogen-activated protein kinase (p38 MAPK), which participate in apoptosis induction along with CHOP. The p38 MAP kinase family phosphorylates Ser78 and Ser81 of CHOP, which induces cell apoptosis. Moreover, research studies found that JNK inhibitors can suppress CHOP upregulation, indicating that JNK activation is also involved in the modulation of CHOP levels.
Downstream apoptotic pathways
Mitochondria-dependent
As a transcription factor, CHOP can regulate the expression of many anti-apoptotic and pro-apoptotic genes, including genes encoding the BCL2-family proteins, GADD34 and TRB-3. In the CHOP-induced apoptotic pathway, CHOP regulates the expression of BCL2 protein family, that includes anti-apoptotic proteins (BCL2, BCL-XL, MCL-1, and BCL-W) and pro-apoptotic proteins (BAK, BAX, BOK, BIM, PUMA and others).
Under ER stress, CHOP can function as either a transcriptional activator or repressor. It forms heterodimers with other C/EBP family transcription factors via bZIP-domain interactions to inhibit the expression of genes responsive to C/EBP family transcription factors, while enhancing the expression of other genes containing a specific 12–14 bp DNA cis-acting element. CHOP can downregulate the expression of anti-apoptotic BCL2 proteins, and upregulate the expression of pro-apoptotic proteins (BIM, BAK and BAX). BAX-BAK oligomerization causes cytochrome c and apoptosis-inducing factor (AIF) release from mitochondria, eventually causing cell death.
TRB3 pseudokinase is upregulated by the ER stress-inducible transcriptional factor, ATF4-CHOP. CHOP interacts with TRB3, which contributes to the induction of apoptosis. The expression of TRB3 has a pro-apoptotic capacity. Therefore, CHOP also regulates apoptosis by upregulating the expression of the TRB3 gene.
Death-receptor dependent
Death receptor-mediated apoptosis occurs via activation of death ligands (Fas, TNF, and TRAIL) and death receptors. Upon activation, the receptor protein, Fas-associated death domain protein, forms a death-inducing signaling complex, which activates the downstream caspase cascade to induce apoptosis.
The PERK-ATF4-CHOP pathway can induce apoptosis by binding to the death receptors and upregulating the expression of death receptor 4 (DR4) and DR5. CHOP also interacts with the phosphorylated transcription factor JUN to form a complex that binds to the promoter region of DR4 in lung cancer cells. The N-terminal domain of CHOP interacts with phosphorylated JUN to form a complex that regulates the expression of DR4 and DR5. CHOP also upregulates the expression of DR5 by binding to the 5′-region of the DR5 gene.
Under prolonged ER stress conditions, activation of the PERK-CHOP pathway will permit DR5 protein levels to rise, which accelerates the formation of the death-inducing signaling complex (DISC) and activates caspase-8, leading to apoptosis.
Other downstream pathways
In addition, CHOP also mediates apoptosis through increasing the expression of the ERO1α (ER reductase) gene, which catalyzes the production of H2O2 in the ER. The highly oxidized state of the ER results in H2O2 leakage into the cytoplasm, inducing the production of reactive oxygen species (ROS) and a series of apoptotic and inflammatory reactions.
The overexpression of CHOP can lead to cell cycle arrest and result in cell apoptosis. At the same time, CHOP-induced apoptosis can also trigger cell death by inhibiting the expression of cell cycle regulatory protein, p21. The p21 protein inhibits the G1 phase of the cell cycle as well as regulates the activity of pre-apoptotic factors. Identified CHOP-p21 relationship may play a role in changing the cell state from adapting to ER stress towards pre-apoptotic activity.
Under most conditions, CHOP can directly bind to the promoters of downstream related genes. However, under specific conditions, CHOP can cooperate with other transcription factors to affect apoptosis. Recent studies have shown that Bcl-2-associated athanogene 5 (Bag5) is over-expressed in prostate cancer and inhibits ER stress-induced apoptosis. Overexpression of Bag5 results in decreased CHOP and BAX expression, and increased Bcl-2 gene expression. Bag5 overexpression inhibited ER stress-induced apoptosis in the unfolded protein response by suppressing PERK-eIF2-ATF4 and enhancing the IRE1-Xbp1 activity.
In general, the downstream targets of CHOP regulate the activation of apoptotic pathways, however, the molecular interaction mechanisms behind those processes remain to be discovered.
Interactions
DNA damage-inducible transcript 3 has been shown to interact with the following proteins:
ATF3,
C-Fos,
C-jun and
CEBPB,
CSNK2A1,
JunD, and
RPS3A.
Clinical significance
Role in fatty liver and hyperinsulinemia
Chop gene deletion has been demonstrated to be protective against diet-induced metabolic syndromes in mice. Mice with germline Chop gene knockout have better glycemic control despite unchanged obesity. A plausible explanation for the observed dissociation between obesity and insulin resistance is that CHOP promotes insulin hypersecretion from pancreatic β cells.
Furthermore, Chop depletion by a GLP1-ASO delivery system was shown to have therapeutic effects of insulin reduction and fatty liver correction, in preclinical mouse models.
Role in microbial infection
CHOP-induced apoptosis pathways had been identified in cells infected by
Porcine circovirus type 2 (PERK-eIF2α-ATF4 -CHOP-BCL2 pathway)
HIV (XBP-1-CHOP-Caspase 3/9 pathway)
Infectious bronchitis virus (PERK-eIF2α-ATF4/PKR-eIF2α-ATF4 pathway)
M. tuberculosis (PERK-eIF2α-CHOP pathway)
Helicobacter pylori (PERK-CHOP or PKR-eIF2α-ATF4 pathway)
Escherichia coli (CHOP-DR5-Caspase 3/8 pathway)
Shigella dysenteriae (p38-CHOP-DR5 pathway)
Since CHOP has an important role of apoptosis induction during infection, it is an important target for further research that will help deepen the current understanding of pathogenesis and potentially provide an opportunity for invention of new therapeutic approaches. For example, small molecule inhibitors of CHOP expression may act as therapeutic options to prevent ER stress and microbial infections. Research had shown that small molecule inhibitors of PERK-eIF2α pathway limit PCV2 virus replication.
Role in other diseases
The regulation of CHOP expression plays an important role in metabolic diseases and in some cancers through its function in mediating apoptosis. The regulation of CHOP expression could be a potential approach to affecting cancer cells through the induction of apoptosis. In the intestinal epithelium, CHOP has been demonstrated to be downregulated under inflammatory conditions (in inflammatory bowel diseases and experimental models of colitis). In this context, CHOP seems to rather regulate the cell cycle than apoptotic processes.
Mutations or fusions of CHOP (e.g. with FUS to form FUS-CHOP) can cause Myxoid liposarcoma.
References
Further reading
External links
Transcription factors
Oncogenes | DNA damage-inducible transcript 3 | Chemistry,Biology | 2,458 |
206,618 | https://en.wikipedia.org/wiki/NGC%207742 | NGC 7742, also known as the Fried Egg Galaxy, is a face-on unbarred spiral galaxy in the constellation Pegasus. Its velocity with respect to the cosmic microwave background is 1292 ± 26 km/s, which corresponds to a Hubble distance of . In addition, six non-redshift measurements give a farther distance of . It was discovered by the German-British astronomer William Herschel on 18 October 1784.
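For illustration, the quoted recession velocity translates into a Hubble distance roughly as in the sketch below, which assumes a round Hubble constant of 70 km/s/Mpc rather than whichever value the cited measurement actually used:

# Hubble's law: distance ≈ recession velocity / Hubble constant.
v_cmb = 1292      # km/s, velocity relative to the CMB quoted above
H0 = 70.0         # km/s per megaparsec (assumed round value, not from the article)
d_mpc = v_cmb / H0
print(f"{d_mpc:.1f} Mpc, about {d_mpc * 3.2616:.0f} million light-years")
# roughly 18 Mpc, on the order of 60 million light-years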
NGC 7742 is unusual in that it contains a ring but no bar. Typically, bars are needed to produce a ring structure. The bars' gravitational forces move gas to the ends of the bars, where it forms into the rings seen in many barred spiral galaxies. In this galaxy, however, no bar is present, so this mechanism cannot be used to explain the formation of the ring. O. K. Sil'chenko and A. V. Moiseev proposed that the ring was formed partly as the result of a merger event in which a smaller gas-rich dwarf galaxy collided with NGC 7742. As evidence for this, they point to the unusually bright central region, the presence of a highly inclined central gas disk, and the presence of gas that is counterrotating (rotating in the opposite direction) with respect to the stars. It is also classified as a Type II Seyfert galaxy.
Supernovae
Two supernovae have been observed in NGC 7742:
SN 1993R (type II, mag. 17) was discovered by the Leuschner Observatory Supernova Search (LOSS) on 2 June 1993.
SN 2014cy (type IIP, mag. 16.2) was discovered by Ken'ichi Nishimura on 31 August 2014.
See also
NGC 7217 - a face-on spiral galaxy with identical characteristics
Sombrero Galaxy - a similar galaxy with a dust ring
Gallery
References
External links
NGC 7742 at ESA/Hubble
Ring galaxies
Unbarred spiral galaxies
Pegasus (constellation)
7742
12760
072260
17841018
Discoveries by William Herschel
+02-60-010
23417+1029 | NGC 7742 | Astronomy | 425 |
969,477 | https://en.wikipedia.org/wiki/Preimage%20attack | In cryptography, a preimage attack on cryptographic hash functions tries to find a message that has a specific hash value. A cryptographic hash function should resist attacks on its preimage (set of possible inputs).
In the context of attack, there are two types of preimage resistance:
preimage resistance: for essentially all pre-specified outputs, it is computationally infeasible to find any input that hashes to that output; i.e., given y, it is difficult to find an x such that h(x) = y.
second-preimage resistance: for a specified input, it is computationally infeasible to find another input which produces the same output; i.e., given x, it is difficult to find a second input x′ ≠ x such that h(x) = h(x′).
These can be compared with collision resistance, in which it is computationally infeasible to find any two distinct inputs x, x′ that hash to the same output; i.e., such that h(x) = h(x′).
Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs. Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition to x′, x is already known right from the start).
Applied preimage attacks
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through a brute-force attack. For an n-bit hash, this attack has a time complexity 2^n, which is considered too high for a typical output size of n = 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result that quantum computers perform a structured preimage attack in 2^(n/2), which also implies second preimage and thus a collision attack.
Faster preimage attacks can be found by cryptanalysing certain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimaging attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical.
All currently known practical or almost-practical attacks on MD5 and SHA-1 are collision attacks. In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only 2^(n/2).
Restricted preimage space attacks
The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function.
A common example is the use of hashes to store password validation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password. When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However most users choose passwords in predictable ways and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks. Special hashes called key derivation functions have been created to slow searches. See Password cracking. For a method to prevent the testing of short passwords see salt (cryptography).
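The password case can be made concrete with a short sketch. The example below (the stored password "dog", the lowercase-only alphabet and the four-character limit are arbitrary illustrative assumptions; a real system would use a salted, deliberately slow key derivation function) searches a restricted input space against a fast unsalted hash:

import hashlib
from itertools import product
from string import ascii_lowercase

def crack(target_hex, salt=b"", max_len=4):
    # First-preimage search restricted to short lowercase passwords.
    # SHA-256 is preimage-resistant over its full input space, but a small,
    # predictable input space makes exhaustive search cheap.
    for length in range(1, max_len + 1):
        for combo in product(ascii_lowercase, repeat=length):
            candidate = "".join(combo).encode()
            if hashlib.sha256(salt + candidate).hexdigest() == target_hex:
                return candidate.decode()
    return None

stored = hashlib.sha256(b"dog").hexdigest()  # what a naive system might store
print(crack(stored))                         # 'dog', after at most ~475,000 tries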
See also
Birthday attack
Cryptographic hash function
Hash function security summary
Puzzle friendliness
Rainbow table
Random oracle
: Attacks on Cryptographic Hashes in Internet Protocols
References
Cryptographic attacks | Preimage attack | Technology | 915 |
8,006,446 | https://en.wikipedia.org/wiki/Shoe-fitting%20fluoroscope | Shoe-fitting fluoroscopes, also sold under the names X-ray Shoe Fitter, Pedoscope and Foot-o-scope, were X-ray fluoroscope machines installed in shoe stores from the 1920s until about the 1970s in the United States, Canada, United Kingdom, Australia, South Africa, Germany and Switzerland. In the UK, they were known as Pedoscopes, after the company based in St. Albans that manufactured them. An example can be seen at the Science Museum, London. At the beginning of the 1930s, Bally was the first company to import pedoscopes into Switzerland from the UK. In the second half of the 20th century, growing awareness of radiation hazards and increasingly stringent regulations forced their gradual phasing out. They were widely used particularly when buying shoes for children, whose shoe size continually changed until adulthood.
A shoe-fitting fluoroscope was a metal construction covered in finished wood, approximately high in the shape of short column, with a ledge with an opening through which the standing customer (adult or child) would put their feet and look through a viewing porthole at the top of the fluoroscope down at the X-ray view of the feet and shoes. Two other viewing portholes on either side enabled the parent and a sales assistant to observe the toes being wiggled to show how much room for the toes there was inside the shoe. The bones of the feet were clearly visible, as was the outline of the shoe, including the stitching around the edges.
Invention
There are multiple claims for the invention of the shoe-fitting fluoroscope. The most likely is Jacob Lowe, who demonstrated a modified medical device at shoe retailer conventions in 1920 in Boston and in 1921 in Milwaukee. Lowe filed a US patent application in 1919, granted in 1927, and assigned it to the Adrian Company of Milwaukee for US$15,000. Syl Adrian claims that his brother, Matthew Adrian, invented and built the first machine in Milwaukee; his name is featured in a 1922 advertisement for an X-ray shoe fitter. Clarence Karrer, the son of an X-ray equipment distributor, claims to have built the first unit in 1924 in Milwaukee, but had his idea stolen and patented by one of his father's employees. In the meantime, the British company Pedoscope filed a British patent application in 1924, granted in 1926, and claimed to have been building these machines since 1920.
The X-ray Shoe Fitter Corporation of Milwaukee and Pedoscope Company became the largest manufacturers of shoe-fitting fluoroscopes in the world.
Health concerns
The risk of radiation burns to extremities was known since Wilhelm Röntgen's 1895 experiment, but this was a short-term effect with early warning from reddening of the skin (erythema). The long-term risks from chronic exposure to radiation began to emerge with Hermann Joseph Muller's 1927 paper showing genetic effects, and the incidence of bone cancer in radium dial painters of the same time period. However, there was not enough data to quantify the level of risk until atomic bomb survivors began to experience the long-term effects of radiation in the late 1940s. The first scientific evaluations of these machines in 1948 immediately sparked concern for radiation protection and electrical safety reasons, and found them ineffective at shoe fitting.
Large variations in dose were possible depending on the machine design, displacement of the shielding materials, and the duration and frequency of use. Radiation surveys showed that American machines delivered an average of 13 roentgen (r) (roughly 0.13 sievert (Sv) of equivalent dose in modern units) to the customer's feet during a typical 20-second viewing, with one capable of delivering 116 r (c. 1 Sv) in 20 seconds. British Pedoscopes produced about ten times less radiation.
A customer might try several shoes in a day, or return several times in a year, and radiation dose effects may be cumulative. A dose of 300 r can cause growth disturbance in a child, and 600 r can cause erythema in an adult. Hands and feet are relatively resistant to other forms of radiation damage, such as carcinogenesis.
Although most of the dose was directed at the feet, a substantial amount would scatter or leak in all directions. Shielding materials were sometimes displaced to improve image quality, to make the machine lighter, or out of carelessness, and this aggravated the leakage. The resulting whole-body dose may have been hazardous to the salesmen, who were chronically exposed, and to children, who are about twice as radiosensitive as adults. Monitoring of American salespersons found dose rates at pelvis height of up to 95 mr/week, with an average of 7.1 mr/week (up to c. 50 mSv/a, average c. 3.7 mSv/a effective dose). A 2007 paper suggested that even higher doses of 0.5 Sv/a were plausible. The most widely accepted model of radiation-induced cancer posits that the incidence of cancers due to ionizing radiation increases linearly with effective (i.e. whole-body) dose.
Years or decades may elapse between radiation exposure and a related occurrence of cancer, and no follow-up studies of customers can be performed for lack of records. According to a 1950 medical article on the machines: "Present evidence indicates that at least some radiation injuries are statistical processes that do not have a threshold. If this evidence is valid, there is no exposure which is absolutely safe and which produces no effect." Three shoe salespersons were identified with rare conditions that might have been associated with their chronic occupational exposure: a severe radiation burn requiring amputation in 1950, a case of dermatitis with ulceration in 1957, and a case of basal-cell carcinoma of the sole in 2004.
Shoe industry response
Representatives of the shoe retail industry denied claims of potential harm in newspaper articles and opinion pieces. They argued that use of the devices prevented harm to customers' feet from poorly-fitted shoes.
Regulation
There were no applicable regulations when shoe-fitting fluoroscopes were introduced. An estimated 10,000 machines were sold in the US, 3,000 in the UK, 1,500 in Switzerland, and 1,000 in Canada before authorities began discouraging their use. As understanding grew of the long-term health effects of radiation, a variety of bodies began speaking out and regulating the machines.
1931: ACXRP recommends limiting dose to 0.1 r per day (c. 0.5 r/week) in all applications.
1934: IXRPC recommends limiting dose to 0.2 r per day (c. 1 r/week) in all applications.
1946: ASA recommends limiting foot dose to 2 r per 5 second exposure, with a limit of 12 exposures per year for children.
1948: Warnings specific to the shoe-fitting fluoroscope start appearing in US journals.
1949: Tripartite Conference on Radiation Protection recommends lowering the dose limits:
0.3 rep/week (c. 0.3 r/week) for whole body bone marrow
1.5 rep/week (c. 1.5 r/week) for the hands
1950:
Warnings start appearing in UK journals.
A public inquiry was held in Queensland, Australia, and warned against uncontrolled use.
ICRP adopts the Tripartite recommendations, with some lack of clarity about units.
1953:
A definitive recommendation against use on children is published in the journal Pediatrics.
US Food and Drug Administration bans the machines.
1954:
NCRP recommends reducing dose limits by a factor of 10 for children, and other changes:
15.6 mSv/a (c. 0.03 r/week) for whole body bone marrow
78 mSv/a (c. 0.15 r/week) for the hands
1956: UK Ministry of Health considers regulating the machines.
1957:
Pennsylvania is the first US state to ban use of these machines.
ICRP recommends limiting occupational whole body dose to 50 mSv/a (c. 0.1 r/week)
1958:
The UK Government requires all machines be fitted with a warning sign advising customers of possible health risks, and that they should not use a machine more than 12 times a year.
NCRP recommends limiting public whole body dose to 5 mSv/a (c. 0.01 r/week)
1960: 160 devices are still in use in the Canton of Zürich.
1970s:
By 1970, 33 US states have banned the machine.
Late 1970s: Last recorded sighting of a shoe-fitting fluoroscope in service in Boston.
1973: The last devices still in use in West Germany are banned.
1989: Switzerland prohibits the machines.
1990: ICRP recommends reducing limits on exposure and other changes:
Occupational foot dose to 500 mSv/a (c. 1 r/week)
Occupational whole body dose to 20 mSv/a (c. 0.04 r/week)
Public whole body dose to 1 mSv/a (c. 0.002 r/week)
In popular culture
Early on in the novel It by Stephen King, Eddie Kaspbrak remembers using and being enthralled by a shoe-fitting fluoroscope as a boy. Eddie remembers that this agonized his mother, who orders him to get away from the device because she believes that these machines cause cancer.
In the novel The House Without a Christmas Tree, the young protagonist Addie Mills describes these devices.
In 1999, Time placed Shoe-Store X Rays on a list of the 100 worst ideas of the 20th century.
A shoe-fitting fluoroscope appeared on a 2011 episode of the History series American Restoration. Its radionuclide source was found to be so dangerous that it was removed and replaced with a static X-ray.
A shoe-fitting fluoroscope can be seen near the beginning of the film Billion Dollar Brain starring Michael Caine, when his character uses it to establish the contents of a flask.
References
External links
A Guide for Uniform Industrial Hygiene Codes Or Regulations For The Use Of Fluoroscopic Shoe Fitting Devices. American Conference of Governmental Industrial Hygienists.
Patents
Shoe business
Fluoroscopy
Radiation health effects | Shoe-fitting fluoroscope | Chemistry,Materials_science | 2,090 |
13,356,100 | https://en.wikipedia.org/wiki/Convention%20over%20configuration | Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make, without necessarily losing flexibility or violating the don't repeat yourself (DRY) principle.
The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but is related to earlier ideas like the concept of "sensible defaults" and the principle of least astonishment in user interface design.
The phrase essentially means a developer only needs to specify unconventional aspects of the application. For example, if there is a class Sales in the model, the corresponding table in the database is called "sales" by default. It is only if one deviates from this convention, such as the table "product sales", that one needs to write code regarding these names.
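A minimal sketch of the idea, using invented names rather than Rails or Hibernate: a toy base class derives the table name from the class name by convention, and a subclass adds configuration only when it deviates from that convention.

```python
# Hypothetical mini-framework: the table name follows from the class name
# by convention; explicit configuration is needed only for deviations.

class Model:
    table_name = None  # override only to deviate from the convention

    @classmethod
    def table(cls):
        # Convention: lower-cased class name.
        return cls.table_name or cls.__name__.lower()

class Sales(Model):
    pass                          # follows the convention -> "sales"

class ProductSales(Model):
    table_name = "product sales"  # deviation: configured explicitly

print(Sales.table())          # sales
print(ProductSales.table())   # product sales
```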
When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
Ruby on Rails' use of the phrase is particularly focused on its default project file and directory structure, which prevent developers from having to write XML configuration files to specify which modules the framework should load, which was common in many earlier frameworks.
Disadvantages
Disadvantages of the convention over configuration approach can occur due to conflicts with other software design principles, like the Zen of Python's "explicit is better than implicit." A software framework based on convention over configuration often involves a domain-specific language with a limited set of constructs, or an inversion of control in which the developer can only affect behavior through a limited set of hooks. Both can make it harder to implement behaviors that are not easily expressed by the provided conventions than it would be with a software library that neither tries to decrease the number of decisions developers have to make nor requires inversion of control.
Other methods of decreasing the number of decisions a developer needs to make include programming idioms and configuration libraries with a multilayered architecture.
Motivation
Some frameworks need multiple configuration files, each with many settings. These provide information specific to each project, ranging from URLs to mappings between classes and database tables. Many configuration files with many parameters are often difficult to maintain.
For example, early versions of the Java persistence mapper Hibernate mapped entities and their fields to the database by describing these relationships in XML files. Most of this information could have been revealed by conventionally mapping class names to the identically named database tables and the fields to their columns, respectively. Later versions did away with the XML configuration file and instead employed these very conventions, deviations from which can be indicated through the use of Java annotations (see JavaBeans specification, linked below).
Usage
Many modern frameworks use a convention over configuration approach.
The concept is older, however, dating back to the concept of a default, and can be spotted more recently in the roots of Java libraries. For example, the JavaBeans specification relies on it heavily. To quote the JavaBeans specification 1.01: "As a general rule we don't want to invent an enormous java.beans.everything class that people have to inherit from. Instead we'd like the JavaBeans runtimes to provide default behaviour for 'normal' objects, but to allow objects to override a given piece of default behaviour by inheriting from some specific java.beans.something interface."
See also
Comparison of web frameworks
Convention over Code
Markedness
Rapid application development
References
External links
Object-oriented programming
Software design | Convention over configuration | Engineering | 729 |
42,203,738 | https://en.wikipedia.org/wiki/Wind-assisted%20propulsion | Wind-assisted propulsion is the practice of decreasing the fuel consumption of a merchant vessel through the use of sails or some other wind capture device. Sails used to be the primary means of propelling ships, but with the advent of the steam engine and the diesel engine, sails came to be used for recreational sailing only. In recent years with increasing fuel costs and an increased focus on reducing emissions, there has been increased interest in harnessing the power of the wind to propel commercial ships.
A key barrier to the implementation of any decarbonisation technology, and of wind-assisted ones in particular, frequently discussed in academia and in the industry, is the availability of capital. On the one hand, shipping lenders have been reducing their commitments overall, while on the other hand, low-carbon newbuilds as well as retrofit projects entail higher-than-usual capital expenditure. Therefore, research effort is directed towards the development of shared-economy and leasing business models, where benefits from reduced consumption of fossil fuels, as well as gains from carbon allowances or reduced levies, are shared among users, technology providers and operators.
Design
The mechanical means of converting the kinetic energy of the wind into thrust for a ship is the subject of much recent study. Where early ships designed primarily for sailing were designed around the sails that propelled them, commercial ships are now designed largely around the cargo that they carry, requiring a large clear deck and minimal overhead rigging in order to facilitate cargo handling. Another design consideration in designing a sail propulsion system for a commercial ship is that in order for it to be economically advantageous it cannot require a significantly larger crew to operate and it cannot compromise the stability of the ship. Taking into account these design criteria, three main concepts have emerged as the leading designs for wind-assisted propulsion: the “Wing Sail Concept,” the “Kite Sail,” and the “Flettner Rotor.”
Wingsail
As a result of rising oil prices in the 1980s, the US government commissioned a study on the economic feasibility of using wind assisted propulsion to reduce the fuel consumption of ships in the US Merchant Marine. This study considered several designs and concluded that a wingsail would be the most effective. The wingsail option studied consisted of an automated system of large rectangular solid sails supported by cylindrical masts. These would be symmetrical sails, which would allow a minimal amount of handling to maintain the sail orientation for different wind angles; however, this design was less efficient. A small freighter was outfitted with this system to evaluate its actual fuel gains, with the result that it was estimated to save between 15 and 25% of the vessel's fuel.
Kite sail
The kite sail concept has recently received a lot of interest. This rig consists of flying a gigantic kite from the bow of a ship using the traction developed by the kite to assist in pulling the ship through the water. Other concepts that have been explored were designed to have the kite rig alternately pull out and retract on a reel, driving a generator. The kite used in this setup is similar to the kites used by recreational kiteboarders, on a much larger scale. This design also allows users to expand its scale by flying multiple kites in a stacked arrangement.
The idea of using kites was, in 2012, the most popular form of wind-assisted propulsion on commercial ships, largely due to the low cost of retrofitting the system to existing ships, with minimal interference with existing structures. This system also allows a large amount of automation, using computer controls to determine the ideal kite angle and position. Using a kite allows the capture of wind at greater altitudes, where wind speed is higher and more consistent. This system has seen use on several ships; a notable example in 2009 was a merchant ship chartered by the US Military Sealift Command to evaluate the claims of efficiency and the feasibility of fitting this system to other ships.
Flettner rotor
The third design considered is the Flettner rotor. This is a large cylinder mounted upright on a ship's deck and mechanically spun. The effect of this spinning area in contact with the wind flowing around it creates a thrust effect that is used to propel the ship. Flettner Rotors were invented in the 1920s and have seen limited use since then. In 2010 a 10,000 dwt cargo ship was equipped with four Flettner Rotors to evaluate their role in increasing fuel efficiency. Since then, several cargo ships and a passenger ferry have been equipped with rotors.
The only parameter of the Flettner Rotor requiring control is the rotational speed of the rotor, meaning this method of wind propulsion requires very little operator input. In comparison to kite sails, Flettner rotors often offer considerable efficiency gains when compared to the size of a sail or kite, versus the size of the rotor and prevailing wind conditions.
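For a sense of the scale of the forces involved, the sketch below makes an idealized, order-of-magnitude estimate of the side force on a spinning cylinder using the Kutta–Joukowski theorem. The rotor dimensions, spin rate, wind speed and efficiency factor are all hypothetical, and real rotors realise only a fraction of the ideal circulation assumed here.

```python
# Idealized Flettner-rotor force estimate via the Kutta-Joukowski theorem:
# lift per unit rotor height L' = rho * V * Gamma, with the ideal circulation
# of a cylinder of radius R spinning at angular speed omega being
# Gamma = 2*pi*omega*R**2. All numbers below are illustrative assumptions.

import math

rho_air = 1.225       # kg/m^3, sea-level air density
wind_speed = 10.0     # m/s, apparent wind over the deck (assumed)
rotor_radius = 1.5    # m (assumed)
rotor_height = 18.0   # m (assumed)
rotor_rpm = 200.0     # rotor spin rate (assumed)
efficiency = 0.5      # fraction of ideal circulation actually achieved (assumed)

omega = rotor_rpm * 2.0 * math.pi / 60.0                  # rad/s
gamma_ideal = 2.0 * math.pi * omega * rotor_radius**2     # m^2/s
force_per_metre = rho_air * wind_speed * efficiency * gamma_ideal  # N/m
total_force = force_per_metre * rotor_height              # N

print(f"Estimated side force: {total_force / 1000:.0f} kN")
```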
Examples of 2018 Flettner rotor installations include:
Cruise ferry Viking Grace became the first passenger vessel with a rotor.
The liquid bulk tanker Maersk Pelican was retrofitted with two rotors.
The ultramax bulk carrier Afros received four rotors, which can be moved aside during port operations.
Implementation
The efficiency gains of these three propulsion assistance mechanisms are typically around 15–20% depending on the size of the system. As of 2009, shipping companies had been hesitant to install untested equipment. As of 2019, several initiatives were looking into the feasibility of cost-effective wind propulsion for commercial ships, including the Swedish Oceanbird concept for using wing sails, the Japanese Wind Challenger Project, and several coordinating associations.
See also
Pyxis Ocean, a bulk carrier retrofitted with wind-propulsion technology
Viking Grace, a rotor assisted cruise ship
Wind Surf, a wind assisted cruise ship
Hydrogen-powered ship
Nuclear marine propulsion
Internal drive propulsion
Integrated electric propulsion
Combined nuclear and steam propulsion
Astern propulsion
Marine propulsion
Air-independent propulsion
References
Marine propulsion
Wind | Wind-assisted propulsion | Engineering | 1,188 |
243,327 | https://en.wikipedia.org/wiki/Glossopharyngeal%20nerve | The glossopharyngeal nerve (), also known as the ninth cranial nerve, cranial nerve IX, or simply CN IX, is a cranial nerve that exits the brainstem from the sides of the upper medulla, just anterior (closer to the nose) to the vagus nerve. Being a mixed nerve (sensorimotor), it carries afferent sensory and efferent motor information. The motor division of the glossopharyngeal nerve is derived from the basal plate of the embryonic medulla oblongata, whereas the sensory division originates from the cranial neural crest.
Structure
From the anterior portion of the medulla oblongata, the glossopharyngeal nerve passes laterally across or below the flocculus, and leaves the skull through the central part of the jugular foramen. From the superior and inferior ganglia in the jugular foramen, it has its own sheath of dura mater. The inferior ganglion on the inferior surface of the petrous part of the temporal bone is related to a triangular depression into which the aqueduct of the cochlea opens. On the inferior side, the glossopharyngeal nerve is lateral and anterior to the vagus nerve and accessory nerve.
In its passage through the foramen (with X and XI), the glossopharyngeal nerve passes between the internal jugular vein and internal carotid artery. It descends in front of the latter vessel and beneath the styloid process and the muscles connected with it, to the posterior lower border of the stylopharyngeus muscle. It then curves forward, forming an arch on the side of the neck and lying upon the stylopharyngeus and middle pharyngeal constrictor muscle. From there, it passes under cover of the hyoglossus muscle and is finally distributed to the palatine tonsil, the mucous membrane of the fauces and base of the tongue, and the serous glands of the mouth.
Branches
tympanic nerve
stylopharyngeal nerve
tonsillar nerve
carotid sinus nerve
Branches to the posterior third of tongue
lingual branches
A communicating branch to the vagus nerve
Note: The glossopharyngeal nerve contributes in the formation of the pharyngeal plexus along with the vagus nerve.
The glossopharyngeal nerve has five distinct general functions:
Branchial motor (special visceral efferent): supplies the stylopharyngeus muscle.
Visceral motor (general visceral efferent): provides parasympathetic innervation of the parotid gland via the otic ganglion.
Visceral sensory (general visceral afferent): carries visceral sensory information from the carotid sinus and carotid body.
General sensory (general somatic afferent): provides general sensory information from the inner surface of the tympanic membrane, upper pharynx (GVA), and the posterior one-third of the tongue.
Visceral afferent (special visceral afferent): provides taste sensation from the posterior one-third of the tongue, including the circumvallate papillae.
The glossopharyngeal nerve as noted above is a mixed nerve consisting of both sensory and motor nerve fibers. The sensory fibers' origin include the pharynx, middle ear, posterior one-third of the tongue (including taste buds); and the carotid body and sinus. These fibers terminate at the medulla oblongata. The motor fibers' origin is the medulla oblongata, and they terminate at the parotid salivary gland, the glands of the posterior tongue, and the stylopharyngeus muscle (which dilates the pharynx during swallowing).
Overview of branchial motor component
The branchial motor component of CN IX provides voluntary control of the stylopharyngeus muscle, which elevates the pharynx during swallowing and speech.
Origin and central course
The branchial motor component originates from the nucleus ambiguus in the reticular formation of the rostral medulla. Fibers leaving the nucleus ambiguus travel anteriorly and laterally to exit the medulla, along with the other components of CN IX, between the olive and the inferior cerebellar peduncle.
Intracranial course
Upon emerging from the lateral aspect of the medulla the branchial motor component joins the other components of CN IX to exit the skull via the jugular foramen. The glossopharyngeal fibers travel just anterior to the cranial nerves X and XI, which also exit the skull via the jugular foramen.
Extra-cranial course and final innervation
Upon exiting the skull the branchial motor fibers descend deep to the temporal styloid process and wrap around the posterior border of the stylopharyngeus muscle before innervating it.
Voluntary control of the stylopharyngeus muscle
Signals for the voluntary movement of stylopharyngeus muscle originate in the pre-motor and motor cortex (in association with other cortical areas) and pass via the corticobulbar tract in the genu of the internal capsule to synapse bilaterally on the ambiguus nuclei in the medulla.
Overview of visceral motor component
Parasympathetic component of the glossopharyngeal nerve that innervates the ipsilateral parotid gland.
Origin and central course
The preganglionic nerve fibers originate in the inferior salivatory nucleus of the rostral medulla and travel anteriorly and laterally to exit the brainstem between the medullary olive and the inferior cerebellar peduncle with the other components of CN IX. Note: These neurons do not form a distinct nucleus visible on cross-section of the brainstem. The position indicated on the diagram is representative of the location of the cell bodies of these fibers.
Intracranial course
Upon emerging from the lateral aspect of the medulla, the visceral motor fibers join the other components of CN IX to enter the jugular foramen. Within the jugular foramen, there are two glossopharyngeal ganglia that contain nerve cell bodies that mediate general, visceral, and special sensation. The visceral motor fibers pass through both ganglia without synapsing and exit the inferior ganglion with CN IX general sensory fibers as the tympanic nerve. Before exiting the jugular foramen, the tympanic nerve enters the petrous portion of the temporal bone and ascends via the inferior tympanic canaliculus to the tympanic cavity. Within the tympanic cavity the tympanic nerve forms a plexus on the surface of the promontory of the middle ear to provide general sensation. The visceral motor fibers pass through this plexus and merge to become the lesser petrosal nerve. The lesser petrosal nerve re-enters and travels through the temporal bone to emerge in the middle cranial fossa just lateral to the greater petrosal nerve. It then proceeds anteriorly to exit the skull via the foramen ovale along with the mandibular nerve component of CN V (V3).
Extra-cranial course and final innervations
Upon exiting the skull, the lesser petrosal nerve synapses in the otic ganglion, which is suspended from the mandibular nerve immediately below the foramen ovale. Postganglionic fibers from the otic ganglion travel with the auriculotemporal branch of CN V3 to enter the substance of the parotid gland.
Hypothalamic Influence
Fibers from the hypothalamus and olfactory system project via the dorsal longitudinal fasciculus to influence the output of the inferior salivatory nucleus. Examples include: 1) dry mouth in response to fear (mediated by the hypothalamus); 2) salivation in response to smelling food (mediated by the olfactory system)
Overview of visceral sensory component
This component of CN IX innervates the baroreceptors of the carotid sinus and chemoreceptors of the carotid body.
Peripheral and intracranial course.
Sensory fibers arise from the carotid sinus and carotid body at the common carotid artery bifurcation, ascend in the carotid sinus nerve, and join the other components of CN IX at the inferior glossopharyngeal ganglion. The cell bodies of these neurons reside in the inferior glossopharyngeal ganglion. The central processes of these neurons enter the skull via the jugular foramen.
Central course – visceral sensory component
Once inside the skull, the visceral sensory fibers enter the lateral medulla between the olive and the inferior cerebellar peduncle and descend in the solitary tract to synapse in the caudal solitary nucleus. From the solitary nucleus, connections are made with several areas in the reticular formation and hypothalamus to mediate cardiovascular and respiratory reflex responses to changes in blood pressure, and serum concentrations of CO2 and O2.
Clinical correlation
The visceral sensory fibers of CN IX mediate the afferent limb of the pharyngeal reflex in which touching the back of the pharynx stimulates the patient to gag (i.e., the gag reflex). The efferent signal to the musculature of the pharynx is carried by the branchial motor fibers of the vagus nerve.
Overview of somatic sensory component
This component of CN IX carries general sensory information (pain, temperature, and touch) from the skin of the external ear, internal surface of the tympanic membrane, the walls of the upper pharynx, and the posterior one-third of the tongue, anterior surface of the epiglottis, vallecula.
Peripheral course
Sensory fibers from the skin of the external ear initially travel with the auricular branch of CN X, while those from the middle ear travel in the tympanic nerve as discussed above (CN IX visceral motor section). General sensory information from the upper pharynx and posterior one-third of the tongue travel via the pharyngeal branches of CN IX. These peripheral processes have their cell body in either the superior or inferior glossopharyngeal ganglion.
Central course
The central processes of the general sensory neurons exit the glossopharyngeal ganglia and pass through the jugular foramen to enter the brainstem at the level of the medulla. Upon entering the medulla these fibers descend in the spinal trigeminal tract and synapse in the caudal spinal nucleus of the trigeminal.
Overview of special sensory component
The special sensory component of CN IX provides taste sensation from the posterior one-third of the tongue.
Peripheral course
Special sensory fibers from the posterior one-third of the tongue travel via the pharyngeal branches of CN IX to the inferior glossopharyngeal ganglion where their cell bodies reside.
Central course – special sensory component
The central processes of these neurons exit the inferior ganglion and pass through the jugular foramen to enter the brainstem at the level of the rostral medulla between the olive and inferior cerebellar peduncle. Upon entering the medulla, these fibers ascend in the tractus solitarius and synapse in the gustatory part of nucleus solitarius. Taste fibers from CN VII and X also ascend and synapse here. Ascending secondary neurons originating in nucleus solitarius project bilaterally to the ventral posteromedial (VPM) nuclei of the thalamus via the central tegmental tract. Tertiary neurons from the thalamus project via the posterior limb of the internal capsule to the inferior one-third of the primary sensory cortex (the gustatory cortex of the parietal lobe).
Associated brainstem nuclei
Solitary nucleus: taste from the posterior 1/3 of the tongue and information from carotid sinus baroreceptors and carotid body chemoreceptors
Spinal nucleus of the trigeminal nerve: Somatic sensory fibers from the internal surface of the tympanic membrane, middle ear, upper part of the pharynx, soft palate and posterior 1/3 of the tongue
Nucleus ambiguus: lower motor neurons for the stylopharyngeus muscle
Inferior salivatory nucleus: preganglionic parasympathetic neurons to the otic ganglion and then to the parotid gland
Functions
It receives general somatic sensory fibers (ventral trigeminothalamic tract) from the tonsils, the pharynx, the middle ear and the posterior 1/3 of the tongue.
It receives special visceral sensory fibers (taste) from the posterior 1/3 of the tongue.
It receives visceral sensory fibers from the carotid bodies, carotid sinus.
It supplies parasympathetic fibers to the parotid gland via the otic ganglion.
It supplies motor fibers to stylopharyngeus muscle, the only motor component of this cranial nerve.
It contributes to the pharyngeal plexus.
Clinical significance
Damage
Damage to the glossopharyngeal nerve can result in loss of taste sensation to the posterior one third of the tongue, and impaired swallowing.
Examination
The clinical tests used to determine if the glossopharyngeal nerve has been damaged include testing the gag reflex of the mouth, asking the patient to swallow or cough, and evaluating for speech impediments. The clinician may also test the posterior one-third of the tongue with bitter and sour substances to evaluate for impairment of taste.
The integrity of the glossopharyngeal nerve may be evaluated by testing the patient's general sensation and that of taste on the posterior third of the tongue. The gag reflex can also be used to evaluate the glossopharyngeal nerve.
Additional images
References
Saladin, Anatomy and Physiology: The Unity of Form and Function, 6th edition
External links
Cranial nerves
Gustatory system
Pharynx
Otorhinolaryngology
Autonomic nervous system
Parasympathetic nervous system
Human head and neck
Nervous system
Neurology
Nerves of the head and neck | Glossopharyngeal nerve | Biology | 2,996 |
75,630,144 | https://en.wikipedia.org/wiki/Tris%282%2C4%2C6-trimethoxyphenyl%29phosphine | Tris(2,4,6-trimethoxyphenyl)phosphine (TTMPP) is a large triaryl organophosphine whose strong Lewis-basic properties make it useful as an organocatalyst for several types of chemical reactions.
Reactions
TTMPP removes the trimethylsilyl group from ketene silyl acetals (the enol ether of esters) to give enolates that can then act as strong nucleophiles. It thus serves as a catalyst for Mukaiyama aldol reactions and group-transfer chain-growth polymerization reactions.
As a Brønsted base, TTMPP can deprotonate various alcohols, giving nucleophilic alkoxides that can undergo Michael addition reactions.
TTMPP can act as a Michael nucleophile itself to catalyze Baylis–Hillman reactions.
Uses
TTMPP is used as a ligand to form palladium-phosphine catalysts which are more reactive than triphenylphosphine-based catalysts.
References
Tertiary phosphines
Catalysts
Methoxy compounds
Phenyl compounds | Tris(2,4,6-trimethoxyphenyl)phosphine | Chemistry | 244 |
3,250,305 | https://en.wikipedia.org/wiki/Rocuronium%20bromide | Rocuronium bromide (brand names Zemuron, Esmeron) is an aminosteroid non-depolarizing neuromuscular blocker or muscle relaxant used in modern anaesthesia to facilitate tracheal intubation by providing skeletal muscle relaxation, most commonly required for surgery or mechanical ventilation. It is used for standard endotracheal intubation, as well as for rapid sequence induction (RSI).
Pharmacology
Mechanism of action
Rocuronium bromide is a competitive antagonist for the nicotinic acetylcholine receptors at the neuromuscular junction. Of the neuromuscular-blocking drugs it is considered to be a non-depolarizing neuromuscular junction blocker, because it acts by dampening the receptor action causing muscle relaxation, instead of continual depolarisation which is the mechanism of action of the depolarizing neuromuscular junction blockers, like succinylcholine.
It was designed to be a weaker antagonist at the neuromuscular junction than pancuronium; hence its monoquaternary structure and its having an allyl group and a pyrrolidine group attached to the D ring quaternary nitrogen atom. Rocuronium has a rapid onset and intermediate duration of action.
There is considered to be a risk of allergic reaction to the drug in some patients (particularly those with asthma), but a similar incidence of allergic reactions has been observed by using other members of the same drug class (non-depolarizing neuromuscular blocking drugs).
The γ-cyclodextrin derivative sugammadex (trade name Bridion) is an agent that reverses the action of rocuronium by binding to it with high affinity. Sugammadex has been in use since 2009 in many European countries; it was initially turned down for approval twice by the US FDA due to concerns over allergic reactions and bleeding, but the FDA finally approved the medication for use during surgical procedures in the United States on December 15, 2015. The acetylcholinesterase inhibitor neostigmine can also be used as a reversal agent for rocuronium but is not as effective as sugammadex. Neostigmine is often still used due to its low cost compared with sugammadex.
History
It was introduced in 1994.
Society and culture
Executions
On July 27, 2012, the U.S. state of Virginia replaced pancuronium bromide, one of the three drugs used in execution by lethal injection, with rocuronium bromide.
On October 3, 2016, the U.S. state of Ohio announced that it would resume executions on January 12, 2017, using a combination of midazolam, rocuronium bromide, and potassium chloride. Prior to this, the last execution in Ohio was in January 2014.
On August 24, 2017, the U.S. state of Florida executed Mark James Asay using a combination of etomidate, rocuronium bromide, and potassium acetate.
Euthanasia
Since 2016, rocuronium bromide has been the standard drug, along with propofol, administered to patients for euthanasia in Canada.
Brand names
Rocuronium bromide is marketed under the brand name Zemuron in the United States and Esmeron in most other countries.
References
Muscle relaxants
Nicotinic antagonists
Quaternary ammonium compounds
4-Morpholinyl compounds
Drugs developed by Schering-Plough
Drugs developed by Merck & Co.
Pyrrolidines
Acetate esters
Chemical substances for emergency medicine
Allyl compounds
Neuromuscular blockers
Lethal injection components | Rocuronium bromide | Chemistry | 755 |
397,388 | https://en.wikipedia.org/wiki/Rankine%E2%80%93Hugoniot%20conditions | The Rankine–Hugoniot conditions, also referred to as Rankine–Hugoniot jump conditions or Rankine–Hugoniot relations, describe the relationship between the states on both sides of a shock wave or a combustion wave (deflagration or detonation) in a one-dimensional flow in fluids or a one-dimensional deformation in solids. They are named in recognition of the work carried out by Scottish engineer and physicist William John Macquorn Rankine and French engineer Pierre Henri Hugoniot.
The basic idea of the jump conditions is to consider what happens to a fluid when it undergoes a rapid change. Consider, for example, driving a piston into a tube filled with non-reacting gas. A disturbance is propagated through the fluid somewhat faster than the speed of sound. Because the disturbance propagates supersonically, it is a shock wave, and the fluid downstream of the shock has no advance information of it. In a frame of reference moving with the wave, atoms or molecules in front of the wave slam into the wave supersonically. On a microscopic level, they undergo collisions on the scale of the mean free path length until they come to rest in the post-shock flow (but moving in the frame of reference of the wave or of the tube). The bulk transfer of kinetic energy heats the post-shock flow. Because the mean free path length is assumed to be negligible in comparison to all other length scales in a hydrodynamic treatment, the shock front is essentially a hydrodynamic discontinuity. The jump conditions then establish the transition between the pre- and post-shock flow, based solely upon the conservation of mass, momentum, and energy. The conditions are correct even though the shock actually has a positive thickness. This non-reacting example of a shock wave also generalizes to reacting flows, where a combustion front (either a detonation or a deflagration) can be modeled as a discontinuity in a first approximation.
Governing equations
In a coordinate system that is moving with the discontinuity, the Rankine–Hugoniot conditions can be expressed as:
$$\rho_1\,u_1 = \rho_2\,u_2 \equiv m \qquad \text{(Conservation of mass)}$$
$$\rho_1\,u_1^2 + p_1 = \rho_2\,u_2^2 + p_2 \qquad \text{(Conservation of momentum)}$$
$$h_1 + \tfrac{1}{2}u_1^2 = h_2 + \tfrac{1}{2}u_2^2 \qquad \text{(Conservation of energy)}$$
where m is the mass flow rate per unit area, ρ1 and ρ2 are the mass density of the fluid upstream and downstream of the wave, u1 and u2 are the fluid velocity upstream and downstream of the wave, p1 and p2 are the pressures in the two regions, and h1 and h2 are the specific (with the sense of per unit mass) enthalpies in the two regions. If in addition, the flow is reactive, then the species conservation equations demands that
the mass production rates $\dot\omega_i$ vanish both upstream and downstream of the discontinuity. Here, $\dot\omega_i$ is the mass production rate of the i-th species of the total N species involved in the reaction.
Combining conservation of mass and momentum gives us
$$\frac{p_2 - p_1}{1/\rho_2 - 1/\rho_1} = -m^2,$$
which defines a straight line known as the Michelson–Rayleigh line, named after the Russian physicist Vladimir A. Mikhelson (usually anglicized as Michelson) and Lord Rayleigh, that has a negative slope (since $m^2$ is always positive) in the p–v plane. Using the Rankine–Hugoniot equations for the conservation of mass and momentum to eliminate $u_1$ and $u_2$, the equation for the conservation of energy can be expressed as the Hugoniot equation:
$$h_2 - h_1 = \frac{1}{2}\,(p_2 - p_1)\left(\frac{1}{\rho_1} + \frac{1}{\rho_2}\right).$$
The inverse of the density can also be expressed as the specific volume, $v = 1/\rho$. Along with these, one has to specify the relation between the upstream and downstream equation of state,
$$p_j = p_j(\rho_j, T_j, Y_{j,i}), \qquad j = 1, 2,$$
where $Y_{j,i}$ is the mass fraction of species $i$. Finally, the calorific equation of state is assumed to be known, i.e.,
$$h_j = h_j(p_j, T_j, Y_{j,i}), \qquad j = 1, 2.$$
Simplified Rankine–Hugoniot relations
The following assumptions are made in order to simplify the Rankine–Hugoniot equations. The mixture is assumed to obey the ideal gas law, so that the relation between the downstream and upstream equation of state can be written as
$$\frac{p_2}{\rho_2 T_2} = \frac{p_1}{\rho_1 T_1} = \frac{R}{\overline{W}},$$
where $R$ is the universal gas constant and the mean molecular weight $\overline{W}$ is assumed to be constant (otherwise, $\overline{W}$ would depend on the mass fractions of all the species). If one assumes that the specific heat at constant pressure $c_p$ is also constant across the wave, the change in enthalpies (calorific equation of state) can be simply written as
$$h_2 - h_1 = -q + c_p\,(T_2 - T_1),$$
where the first term in the above expression represents the amount of heat released per unit mass of the upstream mixture by the wave and the second term represents the sensible heating. Eliminating temperature using the equation of state and substituting the above expression for the change in enthalpies into the Hugoniot equation, one obtains an Hugoniot equation expressed only in terms of pressure and densities,
$$\frac{\gamma}{\gamma - 1}\left(\frac{p_2}{\rho_2} - \frac{p_1}{\rho_1}\right) - \frac{1}{2}\,(p_2 - p_1)\left(\frac{1}{\rho_1} + \frac{1}{\rho_2}\right) = q,$$
where $\gamma$ is the specific heat ratio, which for ordinary air at room temperature (298 K) is 1.40. An Hugoniot curve without heat release ($q = 0$) is often called a "shock Hugoniot", or simply an "Hugoniot". Along with the Rayleigh line equation, the above equation completely determines the state of the system. These two equations can be written compactly by introducing the following non-dimensional scales,
$$\tilde p = \frac{p_2}{p_1}, \qquad \tilde v = \frac{v_2}{v_1} = \frac{\rho_1}{\rho_2}, \qquad \tilde q = \frac{q}{p_1 v_1}.$$
The Rayleigh line equation and the Hugoniot equation then simplify to
$$\tilde p = 1 - \gamma M_1^2\,(\tilde v - 1), \qquad \frac{\gamma}{\gamma - 1}\,(\tilde p\,\tilde v - 1) - \frac{1}{2}\,(\tilde p - 1)(\tilde v + 1) = \tilde q,$$
where $M_1 = u_1/\sqrt{\gamma p_1/\rho_1}$ is the upstream Mach number.
Given the upstream conditions, the intersection of the above two equations in the $\tilde v$–$\tilde p$ plane determines the downstream conditions; in the $\tilde v$–$\tilde p$ plane, the upstream condition corresponds to the point $(\tilde v, \tilde p) = (1, 1)$. If no heat release occurs, for example in shock waves without chemical reaction, then $\tilde q = 0$ and the Hugoniot curve passes through the upstream point. The Hugoniot curves asymptote to the lines $\tilde v = (\gamma - 1)/(\gamma + 1)$ and $\tilde p = -(\gamma - 1)/(\gamma + 1)$, which are depicted as dashed lines in the figure. As mentioned in the figure, only the white region bounded by these two asymptotes is allowed, so that the pressure remains positive. Shock waves and detonations correspond to the top-left white region wherein $\tilde p > 1$ and $\tilde v < 1$, that is to say, the pressure increases and the specific volume decreases across the wave (the Chapman–Jouguet condition for detonation is where the Rayleigh line is tangent to the Hugoniot curve). Deflagrations, on the other hand, correspond to the bottom-right white region wherein $\tilde p < 1$ and $\tilde v > 1$, that is to say, the pressure decreases and the specific volume increases across the wave; the pressure decrease across a flame is typically very small and is seldom considered when studying deflagrations.
For shock waves and detonations, the pressure increase across the wave can take any value in $1 < \tilde p < \infty$; the steeper the slope of the Rayleigh line, the stronger is the wave. On the contrary, here the specific volume ratio is restricted to the finite interval $(\gamma - 1)/(\gamma + 1) \leq \tilde v \leq (\gamma + 1)/(\gamma - 1)$ (the upper bound is derived for the case $\tilde p \rightarrow 0$, because pressure cannot take negative values). If $\gamma = 1.4$ (diatomic gas without the vibrational mode excitation), the interval is $1/6 \leq \tilde v \leq 6$; in other words, the shock wave can increase the density at most by a factor of 6. For a monatomic gas, $\gamma = 5/3$, the allowed interval is $1/4 \leq \tilde v \leq 4$. For diatomic gases with the vibrational mode excited, we have $\gamma = 9/7$, leading to the interval $1/8 \leq \tilde v \leq 8$. In reality, the specific heat ratio is not constant in the shock wave due to molecular dissociation and ionization, but even in these cases, the density ratio in general does not exceed a factor of about 11–13.
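The standard ideal-gas normal-shock relations follow from the jump conditions above; the short sketch below evaluates them for γ = 1.4 at a few illustrative Mach numbers and shows the density ratio approaching its limiting value of 6 while the pressure ratio grows without bound.

```python
# Ideal-gas normal-shock relations for a calorically perfect gas:
#   p2/p1     = 1 + 2*gamma/(gamma + 1) * (M1**2 - 1)
#   rho2/rho1 = (gamma + 1)*M1**2 / ((gamma - 1)*M1**2 + 2)
# gamma = 1.4 is the diatomic value used in the text; Mach numbers are illustrative.

def normal_shock(mach1, gamma=1.4):
    m2 = mach1 ** 2
    pressure_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m2 - 1.0)
    density_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    return pressure_ratio, density_ratio

for mach in (1.5, 3.0, 10.0, 100.0):
    p_ratio, rho_ratio = normal_shock(mach)
    print(f"M1 = {mach:6.1f}: p2/p1 = {p_ratio:10.1f}, rho2/rho1 = {rho_ratio:5.3f}")
# As M1 -> infinity, rho2/rho1 -> (gamma + 1)/(gamma - 1) = 6 for gamma = 1.4.
```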
Derivation from Euler equations
Consider gas in a one-dimensional container (e.g., a long thin tube). Assume that the fluid is inviscid (i.e., it shows no viscosity effects as for example friction with the tube walls). Furthermore, assume that there is no heat transfer by conduction or radiation and that gravitational acceleration can be neglected. Such a system can be described by the following system of conservation laws, known as the 1D Euler equations, that in conservation form is:
$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}(\rho u) = 0,$$
$$\frac{\partial}{\partial t}(\rho u) + \frac{\partial}{\partial x}(\rho u^2 + p) = 0,$$
$$\frac{\partial E}{\partial t} + \frac{\partial}{\partial x}\bigl[u\,(E + p)\bigr] = 0,$$
where
$\rho$ = fluid mass density,
$u$ = fluid velocity,
$e$ = specific internal energy of the fluid,
$p$ = fluid pressure, and
$E = \rho\,(e + \tfrac{1}{2}u^2)$ is the total energy density of the fluid, [J/m3], while e is its specific internal energy
Assume further that the gas is calorically ideal and that therefore a polytropic equation-of-state of the simple form
$$p = (\gamma - 1)\,\rho\,e$$
is valid, where $\gamma$ is the constant ratio of specific heats $c_p/c_v$. This quantity also appears as the polytropic exponent of the polytropic process described by
$$\frac{p}{\rho^{\gamma}} = \text{constant}.$$
For an extensive list of compressible flow equations, etc., refer to NACA Report 1135 (1953).
Note: For a calorically ideal gas $\gamma$ is a constant and for a thermally ideal gas $\gamma$ is a function of temperature. In the latter case, the dependence of pressure on mass density and internal energy might differ from that given by the polytropic relation above.
The jump condition
Before proceeding further it is necessary to introduce the concept of a jump condition – a condition that holds at a discontinuity or abrupt change.
Consider a 1D situation where there is a jump in the scalar conserved physical quantity $w$, which is governed by the integral conservation law
$$\frac{d}{dt}\int_{x_1}^{x_2} w\,dx = f\bigl(w(x_1, t)\bigr) - f\bigl(w(x_2, t)\bigr)$$
for any fixed $x_1$, $x_2$ with $x_1 < x_2$, and, therefore, by the partial differential equation
$$\frac{\partial w}{\partial t} + \frac{\partial}{\partial x} f(w) = 0$$
for smooth solutions.
Let the solution exhibit a jump (or shock) at $x = x_s(t)$, where $x_1 < x_s(t) < x_2$; then, splitting the integral at the jump and applying the Leibniz rule,
$$\int_{x_1}^{x_s(t)} \frac{\partial w}{\partial t}\,dx + \int_{x_s(t)}^{x_2} \frac{\partial w}{\partial t}\,dx + \frac{dx_s}{dt}\,(w_1 - w_2) = f\bigl(w(x_1, t)\bigr) - f\bigl(w(x_2, t)\bigr).$$
The subscripts 1 and 2 indicate conditions just upstream and just downstream of the jump respectively, i.e. $w_1 = \lim_{\epsilon \to 0^+} w\bigl(x_s(t) - \epsilon,\, t\bigr)$ and $w_2 = \lim_{\epsilon \to 0^+} w\bigl(x_s(t) + \epsilon,\, t\bigr)$.
Note, to arrive at this equation we have used the fact that $dx_1/dt = 0$ and $dx_2/dt = 0$.
Now, let $x_1 \to x_s(t)$ and $x_2 \to x_s(t)$, when the two integrals vanish, and in the limit
$$s\,(w_1 - w_2) = f(w_1) - f(w_2),$$
where we have defined $s = dx_s(t)/dt$ (the system characteristic or shock speed), which by simple division is given by
$$s = \frac{f(w_1) - f(w_2)}{w_1 - w_2}.$$
This equation represents the jump condition for the conservation law above. A shock situation arises in a system where its characteristics intersect, and under these conditions a requirement for a unique single-valued solution is that the solution should satisfy the admissibility condition or entropy condition. For physically real applications this means that the solution should satisfy the Lax entropy condition
$$f'(w_1) > s > f'(w_2),$$
where $f'(w_1)$ and $f'(w_2)$ represent characteristic speeds at upstream and downstream conditions respectively.
Shock condition
In the case of the scalar hyperbolic conservation law above, we have seen that the shock speed can be obtained by simple division. However, for the 1D Euler equations we have the vector state variable $\mathbf{w} = (\rho,\ \rho u,\ E)^{T}$ and the jump conditions become
$$s\,(\rho_2 - \rho_1) = \rho_2 u_2 - \rho_1 u_1,$$
$$s\,(\rho_2 u_2 - \rho_1 u_1) = (\rho_2 u_2^2 + p_2) - (\rho_1 u_1^2 + p_1),$$
$$s\,(E_2 - E_1) = u_2\,(E_2 + p_2) - u_1\,(E_1 + p_1).$$
These equations are known as the Rankine–Hugoniot conditions for the Euler equations and are derived by enforcing the conservation laws in integral form over a control volume that includes the shock. For this situation $s$ cannot be obtained by simple division. However, it can be shown by transforming the problem to a moving co-ordinate system
(setting $x' = x - st$, $u' = u - s$, to remove $s$ from the equations) and some algebraic manipulation (involving the elimination of the transformed velocity between the transformed equations), that the shock speed is given by
$$s = u_1 + c_1\,\sqrt{1 + \frac{\gamma + 1}{2\gamma}\left(\frac{p_2}{p_1} - 1\right)},$$
where $c_1 = \sqrt{\gamma p_1/\rho_1}$ is the speed of sound in the fluid at upstream conditions.
Shock Hugoniot and Rayleigh line in solids
For shocks in solids, a closed-form expression such as the one above cannot be derived from first principles. Instead, experimental observations indicate that a linear relation can be used instead (called the shock Hugoniot in the us–up plane) that has the form
$$u_s = c_0 + s\,u_p,$$
where c0 is the bulk speed of sound in the material (in uniaxial compression), s is a parameter (the slope of the shock Hugoniot) obtained from fits to experimental data, and up is the particle velocity inside the compressed region behind the shock front.
The above relation, when combined with the Hugoniot equations for the conservation of mass and momentum, can be used to determine the shock Hugoniot in the p–v plane, where v is the specific volume (per unit mass):
$$p = p_0 + \frac{\rho_0\,c_0^2\,(1 - v/v_0)}{\bigl[1 - s\,(1 - v/v_0)\bigr]^2},$$
where $p_0$, $\rho_0$ and $v_0 = 1/\rho_0$ are the pressure, density and specific volume ahead of the shock.
Alternative equations of state, such as the Mie–Grüneisen equation of state may also be used instead of the above equation.
The shock Hugoniot describes the locus of all possible thermodynamic states a material can exist in behind a shock, projected onto a two dimensional state-state plane. It is therefore a set of equilibrium states and does not specifically represent the path through which a material undergoes transformation.
Weak shocks are isentropic, and the isentrope represents the path through which the material is loaded from the initial to final states by a compression wave with converging characteristics. In the case of weak shocks, the Hugoniot will therefore fall directly on the isentrope and can be used directly as the equivalent path. In the case of a strong shock we can no longer make that simplification directly. However, for engineering calculations, it is deemed that the isentrope is close enough to the Hugoniot that the same assumption can be made.
If the Hugoniot is approximately the loading path between states for an "equivalent" compression wave, then the jump conditions for the shock loading path can be determined by drawing a straight line between the initial and final states. This line is called the Rayleigh line and has the following equation:
$$p = p_0 + \rho_0^2\,u_s^2\,(v_0 - v).$$
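A small numerical sketch of how the linear us–up fit combines with the mass and momentum jump conditions to give points on the p–v Hugoniot. The material parameters below are illustrative placeholders, not measured values for any particular solid.

```python
# Evaluate shock states from the linear fit u_s = c0 + s*u_p together with
# the jump conditions (initial pressure neglected). Parameters are illustrative.

rho0 = 2700.0   # kg/m^3, initial density (assumed)
c0 = 5000.0     # m/s, bulk sound speed (assumed)
s = 1.4         # slope of the us-up Hugoniot (assumed)

def hugoniot_state(u_p):
    """Return (shock speed, pressure, v/v0) for particle velocity u_p."""
    u_s = c0 + s * u_p           # linear us-up Hugoniot
    pressure = rho0 * u_s * u_p  # momentum jump condition
    v_ratio = 1.0 - u_p / u_s    # mass jump condition: v/v0 = 1 - u_p/u_s
    return u_s, pressure, v_ratio

for u_p in (200.0, 500.0, 1000.0):   # m/s, illustrative particle velocities
    u_s, p, v_ratio = hugoniot_state(u_p)
    print(f"u_p = {u_p:6.0f} m/s -> u_s = {u_s:6.0f} m/s, "
          f"p = {p / 1e9:6.2f} GPa, v/v0 = {v_ratio:5.3f}")
```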
Hugoniot elastic limit
Most solid materials undergo plastic deformations when subjected to strong shocks. The point on the shock Hugoniot at which a material transitions from a purely elastic state to an elastic-plastic state is called the Hugoniot elastic limit (HEL) and the pressure at which this transition takes place is denoted pHEL. Values of pHEL can range from 0.2 GPa to 20 GPa. Above the HEL, the material loses much of its shear strength and starts behaving like a fluid.
Magnetohydrodynamics
Rankine–Hugoniot conditions in magnetohydrodynamics are interesting to consider since they are very relevant to astrophysical applications. Across the discontinuity the normal component of the magnetic field and the tangential component of the electric field (in the infinite-conductivity limit, where $\mathbf{E} = -\mathbf{v}\times\mathbf{B}$) must be continuous. We thus have
$$[B_n] = 0, \qquad [(\mathbf{v}\times\mathbf{B})_t] = 0,$$
where $[X] \equiv X_2 - X_1$ denotes the difference between the values of any physical quantity on the two sides of the discontinuity. The remaining conditions express the conservation of mass, momentum and energy across the discontinuity, with the momentum and energy fluxes augmented by the magnetic stress and the Poynting flux.
These conditions are general in the sense that they include contact discontinuities (zero mass flux across the surface, with a non-zero normal magnetic field), tangential discontinuities (zero mass flux and zero normal magnetic field), rotational or Alfvén discontinuities (non-zero mass flux with continuous density) and shock waves (non-zero mass flux with a density jump).
See also
Euler equations (fluid dynamics)
Shock polar
Becker–Morduchow–Libby solution
Mie–Grüneisen equation of state
Engineering Acoustics Wikibook
Atmospheric focusing
References
Equations of fluid dynamics
Scottish inventions
Conservation equations
Continuum mechanics
Combustion
Fluid dynamics | Rankine–Hugoniot conditions | Physics,Chemistry,Mathematics,Engineering | 2,960 |
20,814,896 | https://en.wikipedia.org/wiki/The%20Gray%20Cloth | The Gray Cloth with Ten Percent White: A Ladies' Novel (in German, Das graue Tuch und zehn Prozent Weiß: Ein Damenroman) is an avant-garde novel by the fantasist and visionary writer Paul Scheerbart, first published in 1914. The book expresses its author's commitment to the use of glass in modern architecture, which had a significant impact on the concepts of German Expressionism.
Glass architecture
Scheerbart had advocated a transformative new architecture of glass from his first novel, Das Paradies, through many subsequent works. In 1913 he attempted to organize a "Society for Glass Architecture," an effort that brought him into contact with the Expressionist Bruno Taut. In the following year Scheerbart published not one but two books on the subject: his non-fictional Glass Architecture made the case for its subject in a more rational and pragmatic basis, while The Gray Cloth provided a far more imaginative and lavish presentation of the same matter.
Plot summary
The novel is set in the middle of the twentieth century, and opens in Chicago, where the protagonist Edgar Krug has designed an enormous colored-glass exhibition hall. An art exhibit is held there, accompanied by an organ concert. Krug, fiercely dedicated to his esthetic concepts, is unhappy that the bright colors of women's fashions clash with his architectural scheme. When he meets Clara Weber, the organist, he is struck by her gray dress with white lace trim; he finds it the perfect complement to the color effect of his hall. Krug impulsively asks Clara to marry him — providing she agrees to wear the same style of clothing. Clara accepts Krug's terms, which are specified in their marriage contract. Once married, the couple leave for the Fiji Islands in Krug's private dirigible (it has a glass-walled cupola, and air conditioning).
Though Clara accepts Krug's strange terms for their marriage, other women do not. Clara maintains a telegraphic correspondence with her American friend, Amanda Schmidt; and Amanda is highly critical of Clara's subservience in the arrangement. Later in the book, other women also protest its terms. Scheerbart provides portrayals of a number of strong female characters through the book, supporting its subtitle, "a Ladies' novel."
Krug goes to Fiji because he has an ongoing project there, a retirement home for airship pilots. He clashes with the project's sponsor over how much colored glass the building will accommodate. From Fiji, Krug and Clara travel to other sites throughout the world, to visit other projects.
Their first stop is "Makartland" at the South Pole, an artists' colony for twenty women artists. There, a seamstress makes outfits for Clara that arrange her gray-and-white wardrobe scheme in imaginative ways. Käte Bandel, one of the artists, joins the Krugs in their further travels; she debates artistic assumptions and values with Krug as they travel to Australia and then to Borneo. Bandel enrages Krug when she convinces Clara to wear a plaid scarf.
Leaving Bandel behind, they fly to Japan; but Japanese women also react negatively to the gray and white. In the Himalayas and in Ceylon, Krug visits another projects; later the couple travel to an experimental station by the Aral Sea. They also visit Babylon and Egypt. Despite his enthusiasm for colored glass, Krug turns down an offer to build large glass obelisks atop the Pyramids of Giza. In the "Kurian Murian Islands" off the eastern coast of Arabia, Krug meets the tycoon Li-Tung, who commissions him to design houses suspended in mid-air (so that they don't scratch the majolica tiles with which the islands are paved). Li-Tung is passionate about color, and has Clara change into more varied silk outfits. Krug allows this.
Their journey is not a parade of triumphs, however; in most places, Krug's ideas are resisted, criticized, and rejected to greater or lesser degrees. At Malta, though, a glass architecture museum is established. The Krugs end the novel at their glass house in Switzerland.
At Babylon, Krug gives up on his determination about Clara's wardrobe, and agrees to strike the binding clause out of their marriage contract. By this time, though, Clara has become a convert to her husband's ideas about glass architecture, and maintains the gray fashion by her own choice.
Impact
Scheerbart certainly influenced Bruno Taut's Glass Pavilion at the 1914 Werkbund Exhibition — the first, last, and only design of German Expressionist glass architecture that was actually constructed. Taut inscribed the fourteen sides of the structure with fourteen quotations from Scheerbart's works. Scheerbart's writings also influenced Carl Krayl, Wenzel Hablik, and other members of the Glass Chain group.
It is an open question how much influence the German Expressionists' work had on the glass skyscrapers of Ludwig Mies van der Rohe and other Modernist architects later in the century. The modernists rejected Scheerbart's strong emphasis on the use of colored glass. In one view, though, the modernist glass skyscrapers of the mid-twentieth century "came closer to realizing Scheerbart's vision than the utopian projects of Taut and other Expressionist architects."
Genre
The Gray Cloth, like other Scheerbart works, is a challenge to classification. It can be termed a fantasy, though it also shares some characteristics of science fiction (a temporal setting in the future, and advanced technology).
English edition
Chronologically, The Gray Cloth is the last of Scheerbart's novels; but it is also the first to receive an English translation.
References
1914 German-language novels
German science fiction novels
1914 science fiction novels
Novels about architects
1914 German novels | The Gray Cloth | Engineering | 1,218 |
1,653,981 | https://en.wikipedia.org/wiki/Bandwidth%20throttling | Bandwidth throttling is the intentional limitation of the communication speed (bytes or kilobytes per second) of ingoing (received) or outgoing (sent) data in a network node or in a network device such as a computer or mobile phone.
The data speed and rendering may be limited depending on various parameters and conditions.
Bandwidth throttling should be done along with a rate-limiting pattern to minimize the number of throttling errors.
Overview
Limiting the speed of data sent by a data originator (a client computer or a server computer) is much more efficient than limiting the speed in an intermediate network device between client and server. In the first case usually no network packets are lost, while in the second case network packets can be lost or discarded whenever the ingoing data speed exceeds the bandwidth limit or the capacity of the device and the data packets cannot be temporarily stored in a buffer queue (because it is full or does not exist); the purpose of such a buffer queue is to absorb short peaks of incoming data.
In the second case, discarded data packets can be resent by the transmitter and received again.
When a low-level network device discards incoming data packets, it can usually also notify the transmitter of that fact, so that the transmission speed is slowed down (see also network congestion).
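As a sketch of sender-side throttling at application level, the function below paces its own writes so that the average outgoing rate stays under a configured limit, instead of leaving excess packets to be dropped downstream. The rate, chunk size and socket handling are illustrative assumptions rather than any particular product's behaviour.

```python
# Minimal sender-side throttle: pace writes so the average rate stays below
# max_bytes_per_sec. The caller supplies an already-connected socket.

import time

def send_throttled(sock, data, max_bytes_per_sec=100_000, chunk_size=16_384):
    """Send `data` over `sock`, keeping the average send rate under the limit."""
    start = time.monotonic()
    sent = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        sock.sendall(chunk)
        sent += len(chunk)
        # If we are ahead of the rate budget, sleep until it catches up.
        expected_elapsed = sent / max_bytes_per_sec
        actual_elapsed = time.monotonic() - start
        if expected_elapsed > actual_elapsed:
            time.sleep(expected_elapsed - actual_elapsed)
```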
NOTE: Bandwidth throttling should not be confused with rate limiting which operates on client requests at application server level and/or at network management level (i.e. by inspecting protocol data packets). Rate limiting can also help in keeping peaks of data speed under control.
These bandwidth limitations can be implemented:
at application software level (a client program or a server program, i.e. an FTP server, web server, etc.), which can be run and configured to throttle data sent through the network or even to throttle data received from the network (by reading data at most at a throttled amount per second);
at network device level (typically done by an ISP).
Throttling at application software level (in the client/server program) is usually perfectly legitimate, because it is the choice of the client manager or of the server manager (server administrator) to limit or not to limit the speed of data received from a remote program via the network or the speed of data sent to the target program (server or client).
Throttling applied by an ISP, instead, is restricted in the USA under FCC regulations. While Internet service providers (ISPs) prey on the individual's inability to fight them, fines can range up to $25,000 USD for throttling. In the United States, net neutrality, the principle that ISPs treat all data on the Internet the same and do not discriminate, has been an issue of contention between network users and access providers since the 1990s. With net neutrality, ISPs may not intentionally block, slow down, or charge money for specific online content.
In this context, bandwidth throttling is defined as the intentional slowing or speeding of an internet service by an ISP. It is a reactive measure employed in communication networks to regulate network traffic and minimize bandwidth congestion. Bandwidth throttling can occur at different locations on the network. On a local area network (LAN), a system administrator ("sysadmin") may employ bandwidth throttling to help limit network congestion and server crashes. On a broader level, the ISP may use bandwidth throttling to help reduce a user's usage of bandwidth that is supplied to the local network. Bandwidth throttling is also used as a measurement of data rate on Internet speed test websites.
Throttling can be used to actively limit a user's upload and download rates on programs such as video streaming, BitTorrent protocols and other file sharing applications, as well as even out the usage of the total bandwidth supplied across all users on the network. Bandwidth throttling is also often used in Internet applications, in order to spread a load over a wider network to reduce local network congestion, or over a number of servers to avoid overloading individual ones, and so reduce their risk of the system crashing, and gain additional revenue by giving users an incentive to use more expensive tiered pricing schemes, where bandwidth is not throttled.
Operation
A computer network typically consists of a number of servers, which host data and provide services to clients. The Internet is a good example, in which web servers are used to host websites, providing information to a potentially very large number of client computers. Clients will make requests to servers, which will respond by sending the required data, which may be a song file, a video, and so on, depending on what the client has requested. As there will typically be many clients per server, the data processing demand on a server will generally be considerably greater than on any individual client. And so servers are typically implemented using computers with high data capacity and processing power. The traffic on such a network will vary over time, and there will be periods when client requests will peak or sent responses will be huge, sometimes exceeding the capacity of parts of network and causing congestion, especially in parts of the network that form bottlenecks. This can cause data request failures, or in worst cases, server crashes.
In order to prevent such occurrences, a client / server / system administrator may enable (if available) bandwidth throttling:
at application software level, to control the speed of ingoing (received) data and/or to control the speed of outgoing (sent) data:
a client program could be configured to throttle the sending (upload) of a big file to a server program in order to reserve some network bandwidth for other uses (e.g. sending emails with attached data, browsing web sites, etc.);
a server program (e.g. a web server) could throttle its outgoing data to allow more concurrent active client connections without using too much network bandwidth (e.g. using only 90% of the available bandwidth in order to keep a reserve for other activities, etc.);
examples (the arithmetic is illustrated in the sketch after this list): assume a server site with Internet access speed of 100 MB/s (around 1000 Mbit/s), that most clients have 1 MB/s (around 10 Mbit/s) Internet access, and that clients download huge files (e.g. 1 GB each):
with bandwidth throttling, a server using a maximum output speed of 100 kB/s (around 1 Mbit/s) for each TCP connection could allow at least 1,000 concurrent active connections (or even 10,000 if output is limited to 10 kB/s); "active connections" means that data content, such as a big file, is being downloaded from server to client;
without bandwidth throttling, a server could efficiently serve only about 100 concurrent active connections (100 MB/s / 1 MB/s) before saturating the network bandwidth; a saturated network (e.g. with a bottleneck at an Internet access point) could greatly slow down attempts to establish new connections, or even force them to fail because of timeouts; in addition, new active connections could not easily or quickly obtain their proper share of bandwidth.
at the network device level, to control the speed of data received or sent, both at a low level (data packets) and/or at a high level (e.g. by inspecting application protocol data):
policies similar to, or even more sophisticated than, those at the application software level can be set in low-level network devices near the Internet access point.
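As a rough illustration of the arithmetic in the examples above, the following Python sketch estimates how many concurrent downloads a server can sustain for a given uplink capacity and per-connection throttle. It is a minimal model only; the figures (100 MB/s uplink, 1 MB/s client speed, 100 kB/s and 10 kB/s per-connection limits) are the assumed values from the example above, not measurements.

def max_concurrent_connections(uplink_bytes_per_s, per_connection_limit_bytes_per_s):
    # Upper bound on simultaneously active downloads before the uplink saturates.
    return int(uplink_bytes_per_s // per_connection_limit_bytes_per_s)

MB = 1_000_000
kB = 1_000
uplink = 100 * MB                                      # assumed server uplink: 100 MB/s (~1000 Mbit/s)
print(max_concurrent_connections(uplink, 1 * MB))      # unthrottled, clients pull at 1 MB/s -> 100
print(max_concurrent_connections(uplink, 100 * kB))    # throttled to 100 kB/s per connection -> 1000
print(max_concurrent_connections(uplink, 10 * kB))     # throttled to 10 kB/s per connection -> 10000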
Application
A bandwidth intensive device, such as a server, might limit (throttle) the speed at which it receives or sends data, in order to avoid overloading its processing capacity or saturating the network bandwidth. This can be done both at the local network servers or at the ISP servers. ISPs often employ deep packet inspection (DPI), which is widely available in routers or provided by special DPI equipment. Additionally, today's networking equipment allows ISPs to collect statistics on flow sizes at line speed, which can be used to mark large flows for traffic shaping. Two ISPs, Cox and Comcast, have stated that they engage in this practice, limiting users' bandwidth by up to 99%. Many ISPs today throttle their users' bandwidth, often without the user realizing it. In the specific case of Comcast, an equipment vendor called Sandvine developed the network management technology that throttled P2P file transfers.
Users who are likely to have their bandwidth throttled are typically those who constantly download and upload torrents, or who watch a lot of online video. When this is done by an ISP, many consider the practice an unfair method of regulating bandwidth, because consumers do not receive the bandwidth they have paid for at the prices set by the ISPs. By throttling the heaviest users, ISPs claim to give their regular users a better overall quality of service.
Network neutrality
Net neutrality is the principle that all Internet traffic should be treated equally. It aims to guarantee a level playing field for all websites and Internet technologies. With net neutrality, the network's only job is to move data, not to choose which data to privilege with higher quality (that is, faster) service. In the US, on February 26, 2015, the Federal Communications Commission adopted Open Internet rules. They are designed to protect free expression and innovation on the Internet and promote investment in the nation's broadband networks. The Open Internet rules are grounded in multiple sources of legal authority, including Title II of the Communications Act and Section 706 of the Telecommunications Act of 1996. The rules apply to both fixed and mobile broadband services. However, these rules were rolled back on December 14, 2017. On October 19, 2023, the FCC voted 3-2 to approve a Notice of Proposed Rulemaking (NPRM) seeking comments on a plan to restore net neutrality rules and regulation of ISPs. On April 25, 2024, the FCC voted 3-2 to reinstate net neutrality in the United States by reclassifying the Internet under Title II.
Bright line rules:
No blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.
No throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
No paid prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration or payment of any kind—in other words, no "fast lanes." This rule also bans ISPs from prioritizing content and services of their own affiliated businesses.
Throttling vs. capping
Bandwidth throttling works by limiting (throttling) the speed at which a bandwidth intensive device (e.g. a server) receives data, or the speed (in bytes or kilobytes per second) of each data response. Without such limits, the device can overload its processing capacity.
In contrast to throttling, in order to use bandwidth when it is available but prevent excess, each node in a proactive system sets an outgoing bandwidth cap that appropriately limits its total outgoing traffic. There are two types of bandwidth capping. A standard cap limits the bitrate or speed of data transfer on a broadband Internet connection. Standard capping is used to prevent individuals from consuming the entire transmission capacity of the medium. A lowered cap reduces an individual user's bandwidth cap as a defensive measure and/or as a punishment for heavy use of the medium's bandwidth, often without notifying the user.
The difference is that bandwidth throttling regulates a bandwidth intensive device (such as a server) by limiting how much data it can receive from each node or client, or how much it can send in each response, while bandwidth capping limits the total transfer capacity, upstream or downstream, of data over a medium.
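To make the throttling side concrete, a per-connection throttle is often implemented as a simple rate limiter applied to each response. The following Python sketch is a minimal, illustrative rate-limited sender (the function name throttled_send and the 100 kB/s figure are made up for the example, not taken from any particular server); a cap, by contrast, would simply count bytes over the billing period and cut off or degrade service once the total is exceeded.

import time

def throttled_send(send, data, max_bytes_per_s, chunk=4096):
    # Push `data` through `send` (e.g. a socket's sendall) at no more than `max_bytes_per_s` on average.
    start = time.monotonic()
    sent = 0
    for offset in range(0, len(data), chunk):
        piece = data[offset:offset + chunk]
        send(piece)
        sent += len(piece)
        expected_elapsed = sent / max_bytes_per_s      # time the transfer should have taken so far
        delay = expected_elapsed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)                          # sleep just long enough to respect the limit

# Example: send 1 MB to a dummy sink at roughly 100 kB/s (takes about 10 seconds).
throttled_send(lambda piece: None, b"x" * 1_000_000, max_bytes_per_s=100_000)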
Court cases
Comcast Corp. v. FCC
In 2007, Free Press and Public Knowledge filed a complaint with the Federal Communications Commission about Comcast's Internet service, after several subscribers claimed that the company was interfering with their use of peer-to-peer networking applications. The Commission stated that it had jurisdiction over Comcast's network management practices and that it could resolve the dispute through adjudication rather than through rulemaking. The Commission believed that Comcast had "significantly impeded consumers' ability to access the content and use the applications of their choice", and that because Comcast "ha[d] several available options it could use to manage network traffic without discriminating" against peer-to-peer communications, its method of bandwidth management "contravene[d] ... federal policy". Because Comcast had already agreed to adopt a new system for managing bandwidth demand, the Commission simply ordered it to make a set of disclosures describing the details of its new approach and the company's progress toward implementing it. Comcast complied with this Order but petitioned for review, presenting several objections.
ISP bandwidth throttling
Canada
In 2008, the Canadian Radio-television and Telecommunications Commission (CRTC) decided to allow Bell Canada to single out peer-to-peer (P2P) traffic for bandwidth throttling between the hours of 4:30 p.m. and 2 a.m. In 2009, the CRTC released a guideline for bandwidth throttling rules.
In 2011, following a major complaint by the Canadian Gamers Organization against Rogers for breaking the 2009 rules already in place, the CRTC created an addendum to its ITMP policy, allowing it to send the complaint to its Enforcement Division. The Canadian Gamers Organization in its submissions alluded to filing a complaint against Bell Canada. On December 20, 2011, Bell Canada announced it would end throttling by March 31, 2012 for its customers as well as its wholesale customers. On February 4, 2012, in an effort to resolve matters with the CRTC (which had continued its own testing, found additional non-compliance and demanded immediate compliance), Rogers announced 50% of its customers would be throttle-free by June 2012, and 100% by the end of 2012. This did not satisfy the CRTC Enforcement Division, however.
ISPs in Canada that throttle bandwidth:
Acanac: No
Altima telecom: No
Bell Canada: No
Cogeco Cable: No
DeryTelecom: Yes (Netflix)
Distributel: Yes
Bell MTS: No (Only with unlimited data mobile devices)
Oxio: No
Rogers Cable: Yes (Netflix) (Android)
SaskTel: Yes
Primus Telecom: No
Shaw: Yes (25% of the traffic)
Xplornet: Yes, and also prioritizes VoIP
TELUS: Yes (2% of the traffic)
EastLink: Yes – The public statement was "Confidential".
Sunwire Cable: No
Sunwire DSL: No
Teksavvy Cable: No
Teksavvy DSL: No
Teksavvy DSL MLPPP: No
Talk Wireless Inc.: Yes
Internet Lightspeed Cable: No
Internet Lightspeed DSL: No
Internet Lightspeed Bonded (MLPPP): No
Europe
In April 2011, the European Union launched an investigation into ISPs' methods for managing traffic on their networks. Some ISPs, for instance, restrict access to services such as Skype or the BBC iPlayer at peak times so that their users all receive an equal service. The EU's commissioner for the digital agenda, Neelie Kroes, said: "I am absolutely determined that everyone in the EU should have the chance to enjoy the benefits of an open and lawful Internet, without hidden restrictions or slower speeds than they have been promised." The Body of European Regulators for Electronic Communications (BEREC) examined the issues for the EU, asking both businesses and consumers for their views, and the EU published the results of its investigation at the end of 2011. New laws mean that ISPs are prohibited from blocking or slowing down Internet traffic, except where necessary.
Singapore
In Singapore, net neutrality has been the law since 2011, under a policy framework defined by the Infocomm Development Authority (IDA) following a public consultation in November 2010. But despite the law, the majority of ISPs do throttle bandwidth.
United States
In 2007, Comcast was caught interfering with peer-to-peer traffic. Specifically, it falsified packets of data that fooled users and their peer-to-peer programs into thinking they were transferring files. Comcast initially denied that it interfered with its subscribers' uploads, but later admitted it. The FCC held a hearing and concluded that Comcast violated the principles of the Internet Policy Statement because Comcast's "discriminatory and arbitrary practice unduly squelched the dynamic benefits of an open and accessible Internet and did not constitute reasonable network management." The FCC also provided clear guidelines to any ISP wishing to engage in reasonable network management. The FCC suggested ways that Comcast could have achieved its goal of stopping network congestion, including capping the average user's capacity and charging the most aggressive users overage (going over a maximum) fees, throttling back the connections of all high capacity users, or negotiating directly with the application providers and developing new technologies.
However, in 2008, Comcast amended its Acceptable Use Policy and placed a specific 250 GB monthly cap. Comcast also announced a new bandwidth-throttling plan. The scheme is a two-class system of "priority-best-effort" (PBE) and "best-effort" (BE): sustained use of 70% of your upstream or downstream throughput triggers the BE state, at which point your traffic priority is lowered until your usage drops below 50% of your provisioned upstream or downstream bandwidth for "a period of approximately 15 minutes". Placing a throttled Comcast user in a BE state "may or may not result in the user's traffic being delayed or, in extreme cases, dropped before PBE traffic is dropped". Comcast explained to the FCC that "If there is no congestion, packets from a user in a BE state should have little trouble getting on the bus when they arrive at the bus stop. If, on the other hand, there is congestion in a particular instance, the bus may become filled by packets in a PBE state before any BE packets can get on. In that situation, the BE packets would have to wait for the next bus that is not filled by PBE packets".
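The scheme just described amounts to a simple hysteresis rule: a user enters the lower-priority BE state after use above 70% of the provisioned rate, and returns to PBE once usage has stayed below 50% for roughly 15 minutes. The Python sketch below is only an illustration of that published description (sampling once per minute and triggering on a single high sample is a simplification), not Comcast's actual implementation.

def classify_priority(usage_fractions, enter_be=0.70, exit_be=0.50, sustain_minutes=15):
    # usage_fractions: one sample per minute, as a fraction of the provisioned bandwidth.
    state = "PBE"
    minutes_below_exit = 0
    for u in usage_fractions:
        if state == "PBE" and u >= enter_be:
            state = "BE"
            minutes_below_exit = 0
        elif state == "BE":
            minutes_below_exit = minutes_below_exit + 1 if u < exit_be else 0
            if minutes_below_exit >= sustain_minutes:
                state = "PBE"
        yield state

# Heavy use for 20 minutes, then light use: the user is deprioritized, then recovers after ~15 minutes.
samples = [0.9] * 20 + [0.3] * 20
print(list(classify_priority(samples)))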
US cell phone ISPs have also increasingly resorted to bandwidth throttling in their networks. Verizon and AT&T even applied such throttling to data plans advertised as "unlimited", resulting in an FCC complaint against Verizon. Though AT&T had told its customers throttling was a possibility, the FTC filed a lawsuit against the company in 2014, charging that the disclosure was insufficiently specific. A nationwide study of video streaming speeds in 2018-2019 found major wireless carriers throttling a majority of the time, including when traffic was light, and with significant discrepancies between video services.
Uruguay
Antel has a state-enforced monopoly forcing consumers who require non-wireless Internet access (i.e. ADSL or fiber – cable Internet is outlawed) to purchase it directly from Antel. Its practices provide insight into the probable behavior of ISPs in markets that have little or no competition and/or lack balancing regulations in the interest of consumers. All of Antel's Internet access plans for consumers are either capped or throttled. Capped plans are typically marketed under the brand "flexible". On such plans once a user reaches a data tier (e.g. 5 GB) additional data usage is billed at a rate of approximately $5 US/GB. Once a second tier is reached (e.g., 15 GB), Internet services are suspended until the start of the next billing cycle. Throttled plans are typically marketed under the brand "Flat Rate" (for ADSL) and "Vera" (for fiber.) Such plans allow full bandwidth on the connection (e.g. 20 Mbit/s down on the Vera fiber plan) from the beginning of the billing month but are restricted to a percentage of the contracted transmission rate (e.g., to 2 Mbit/s down, or 10% of the advertised speed) once a data tier (e.g. 150 GB) is reached. Full bandwidth capability is restored at the beginning of the next billing month.
Metrics for ISPs
Whether aimed at avoiding network congestion or at pushing users to upgrade to costlier Internet plans, the increasingly common capping and throttling practices of ISPs affect the value proposition of the plans they offer. For consumers to be able to make an informed decision when choosing an Internet plan, ISPs should publish their capping and throttling practices with the necessary level of detail. While the net effect of some throttling and capping strategies can be hard to compare across ISPs, some basic metrics are of interest for any kind of throttled or capped Internet connection (a short script after this list reproduces the calculations):
Maximum monthly payload: This is the amount of data that an Internet connection would be able to carry in a hypothetical setting assuming no bottlenecks external to the ISP. For the example Antel 20 Mbit/s fiber connection (see Uruguay above), the maximum monthly payload in that hypothetical setting would be reached by running the connection at 20 Mbit/s for the first 150 GB, and at 2 Mbit/s for the rest of the month. Thus the maximum monthly payload of that connection is 60,000 seconds * 2.5 MB/s + 2,532,000 seconds * 0.25 MB/s = 783 GB (about the size of a large laptop disk drive in 2013).
Maximum utilization percentage: This is the ratio of the maximum monthly payload of a throttled Internet connection to the maximum unthrottled monthly payload of the same connection. For the example Antel 20 Mbit/s fiber connection, the maximum unthrottled monthly payload is 2,592,000 seconds * 2.5 MB/s = 6,480 GB. Thus the maximum utilization percentage of that connection is 783 GB / 6,480 GB = 12%.
Throttling percentage: This represents how much the maximum monthly payload of an Internet connection gets reduced by the ISP's throttling policy. It is calculated simply as 1 − maximum utilization percentage. For the example Antel 20 Mbit/s fiber connection it is 1 − 12% = 88%.
Equivalent connection bandwidth: This is the bandwidth of an unthrottled Internet connection whose maximum monthly payload is the same as the maximum monthly payload of the throttled connection in question. It can be calculated as unthrottled connection bandwidth * maximum utilization percentage. For the example Antel 20 Mbit/s fiber connection the equivalent connection bandwidth is 20 Mbit/s * 12% = 2.4 Mbit/s.
Cost per unit payload: The ultimate metric of throttling's effect on an Internet connection's potential value to a customer is the cost per GB (or TB in the case of fast connections) carried, assuming perfect utilization of the connection. It is calculated by dividing the monthly cost of the connection by the maximum monthly payload. For the example Antel 20 Mbit/s fiber connection it would be US$36 / 0.783 TB = US$46 per TB. By comparison, if the same 20 Mbit/s connection weren't throttled by the ISP it would have a cost per unit payload of US$36 / 6.48 TB = US$5.6 per TB.
Unthrottled connection cost: This is how much it would cost the customer to offset the effect of throttling by aggregating throttled Internet connections from the ISP. It is calculated by dividing the monthly cost of a throttled connection by the maximum utilization percentage. For the example Antel fiber connection, the cost of building an unthrottled 20 Mbit/s fiber Internet connection by aggregating 20 Mbit/s throttled ones would be US$36 / 12% = US$300 per month.
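The following Python sketch reproduces the metric calculations above for the Antel example (20 Mbit/s fiber throttled to 2 Mbit/s after 150 GB, US$36 per month, a 30-day month). The input figures are the ones quoted above; the functions themselves are generic and could be applied to any throttled plan.

SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000 s in a 30-day month

def max_monthly_payload_gb(full_mbps, throttled_mbps, tier_gb):
    # Maximum data (GB) the connection can carry in a month: full speed up to the tier, throttled after.
    full_bytes_per_s = full_mbps / 8 * 1e6          # 20 Mbit/s -> 2.5 MB/s
    throttled_bytes_per_s = throttled_mbps / 8 * 1e6
    seconds_at_full = tier_gb * 1e9 / full_bytes_per_s
    seconds_throttled = SECONDS_PER_MONTH - seconds_at_full
    return tier_gb + seconds_throttled * throttled_bytes_per_s / 1e9

payload = max_monthly_payload_gb(20, 2, 150)        # ~783 GB
unthrottled = max_monthly_payload_gb(20, 20, 150)   # ~6,480 GB
utilization = payload / unthrottled                 # ~12%
equivalent_mbps = 20 * utilization                  # ~2.4 Mbit/s
cost_per_tb = 36 / (payload / 1000)                 # ~US$46 per TB
unthrottled_cost = 36 / utilization                 # ~US$300 per month
print(round(payload), round(unthrottled), f"{utilization:.0%}",
      round(equivalent_mbps, 1), round(cost_per_tb), round(unthrottled_cost))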
User responses
Although ISPs may actively throttle bandwidth, if the throttling is focused on a particular protocol, there are several methods to bypass the throttling (although some of these methods may be against the ToS of specific plans).
Virtual private network (VPN) – Generally costs a monthly fee to rent, but offers users a secure connection where data cannot be intercepted.
Force Encryption – Free method that works for some users.
Seedbox – A dedicated private server, usually hosted offshore, that offers high-speed upstream and downstream rates and often storage for a relatively high monthly cost.
SSH Tunneling – forwarding traffic through an encrypted SSH connection (a tunneling protocol).
See also
Traffic shaping
Bandwidth management
Rate limiting
References
Broadband
Computer network technology
Data transmission
Net neutrality | Bandwidth throttling | Engineering | 5,096 |
4,461,797 | https://en.wikipedia.org/wiki/SBML | The Systems Biology Markup Language (SBML) is a representation format, based on XML, for communicating and storing computational models of biological processes. It is a free and open standard with widespread software support and a community of users and developers. SBML can represent many different classes of biological phenomena, including metabolic networks, cell signaling pathways, regulatory networks, infectious diseases, and many others. It has been proposed as a standard for representing computational models in systems biology today.
History
Late in the year 1999 through early 2000, with funding from the Japan Science and Technology Corporation (JST), Hiroaki Kitano and John C. Doyle assembled a small team of researchers to work on developing better software infrastructure for computational modeling in systems biology. Hamid Bolouri was the leader of the development team, which consisted of Andrew Finney, Herbert Sauro, and Michael Hucka. Bolouri identified the need for a framework to enable interoperability and sharing between the different simulation software systems for biology in existence during the late 1990s, and he organized an informal workshop in December 1999 at the California Institute of Technology to discuss the matter. In attendance at that workshop were the groups responsible for the development of DBSolve, E-Cell, Gepasi, Jarnac, StochSim, and The Virtual Cell. Separately, earlier in 1999, some members of these groups also had discussed the creation of a portable file format for metabolic network models in the BioThermoKinetics (BTK) group. The same groups who attended the first Caltech workshop met again on April 28–29, 2000, at the first of a newly created meeting series called Workshop on Software Platforms for Systems Biology. It became clear during the second workshop that a common model representation format was needed to enable the exchange of models between software tools as part of any functioning interoperability framework, and the workshop attendees decided the format should be encoded in XML.
The Caltech ERATO team developed a proposal for this XML-based format and circulated the draft definition to the attendees of the 2nd Workshop on Software Platforms for Systems Biology in August 2000. This draft underwent extensive discussion over mailing lists and during the 2nd Workshop on Software Platforms for Systems Biology, held in Tokyo, Japan, in November 2000 as a satellite workshop of the ICSB 2000 conference. After further revisions, discussions and software implementations, the Caltech team issued a specification for SBML Level 1, Version 1 in March 2001.
SBML Level 2 was conceived at the 5th Workshop on Software Platforms for Systems Biology, held in July 2002, at the University of Hertfordshire, UK. By this time, far more people were involved than the original group of SBML collaborators and the continued evolution of SBML became a larger community effort, with many new tools having been enhanced to support SBML. The workshop participants in 2002 collectively decided to revise the form of SBML in Level 2. The first draft of the Level 2 Version 1 specification was released in August 2002, and the final set of features was finalized in May 2003 at the 7th Workshop on Software Platforms for Systems Biology in Ft. Lauderdale, Florida.
The next iteration of SBML took two years in part because software developers requested time to absorb and understand the larger and more complex SBML Level 2. The inevitable discovery of limitations and errors led to the development of SBML Level 2 Version 2, issued in September 2006. By this time, the team of SBML Editors (who reconcile proposals for changes and write a coherent final specification document) had changed and now consisted of Andrew Finney, Michael Hucka and Nicolas Le Novère.
SBML Level 2 Version 3 was published in 2007 after countless contributions by and discussions with the SBML community. 2007 also saw the election of two more SBML Editors as part of the introduction of the modern SBML Editor organization in the context of the SBML development process.
SBML Level 2 Version 4 was published in 2008 after certain changes in Level 2 were requested by popular demand. (For example, an electronic vote by the SBML community in late 2007 indicated a majority preferred not to require strict unit consistency before an SBML model is considered valid.) Version 4 was finalized after the SBML Forum meeting held in Gothenburg, Sweden, as a satellite workshop of ICSB 2008 in the fall of 2008.
SBML Level 3 Version 1 Core was published in final form in 2010, after prolonged discussion and revision by the SBML Editors and the SBML community. It contains numerous significant changes in syntax and constructs from Level 2 Version 4, but also represents a new modular base for continued expansion of SBML's features and capabilities going into the future.
SBML Level 2 Version 5 was published in 2015. This revision included a number of textual (but not structural) changes in response to user feedback, thereby addressing the list of errata collected over many years for the SBML Level 2 Version 4 specification. In addition, Version 5 introduced a facility to use nested annotations within SBML's annotation format (an annotation format that is based on a subset of RDF).
The language
SBML is sometimes incorrectly assumed to be limited in scope only to biochemical network models because the original publications and early software focused on this domain. In reality, although the central features of SBML are indeed oriented towards representing chemical reaction-like processes that act on entities, this same formalism serves analogously for many other types of processes; moreover, SBML has language features supporting the direct expression of mathematical formulas and discontinuous events separate from reaction processes, allowing SBML to represent much more than solely biochemical reactions. Evidence for SBML's ability to be used for more than merely descriptions of biochemistry can be seen in the variety of models available from BioModels Database.
Purposes
SBML has three main purposes:
enable the use of multiple software tools without having to rewrite models to conform to every tool's idiosyncratic file format;
enable models to be shared and published in a form that other researchers can use even when working with different software environments;
ensure the survival of models beyond the lifetime of the software used to create them.
SBML is not an attempt to define a universal language for quantitative models. SBML's purpose is to serve as a lingua franca—an exchange format used by different present-day software tools to communicate the essential aspects of a computational model.
Main capabilities
SBML can encode models consisting of entities (called species in SBML) acted upon by processes (called reactions). An important principle is that models are decomposed into explicitly-labeled constituent elements, the set of which resembles a verbose rendition of chemical reaction equations (if the model uses reactions) together with optional explicit equations (again, if the model uses these); the SBML representation deliberately does not cast the model directly into a set of differential equations or other specific interpretation of the model. This explicit, modeling-framework-agnostic decomposition makes it easier for a software tool to interpret the model and translate the SBML form into whatever internal form the tool actually uses.
A software package can read an SBML model description and translate it into its own internal format for model analysis. For example, a package might provide the ability to simulate the model by constructing differential equations and then perform numerical time integration on the equations to explore the model's dynamic behavior. Or, alternatively, a package might construct a discrete stochastic representation of the model and use a Monte Carlo simulation method such as the Gillespie algorithm.
SBML allows models of arbitrary complexity to be represented. Each type of component in a model is described using a specific type of data structure that organizes the relevant information. The data structures determine how the resulting model is encoded in XML.
In addition to the elements above, another important feature of SBML is that every entity can have machine-readable annotations attached to it. These annotations can be used to express relationships between the entities in a given model and entities in external resources such as databases. A good example of the value of this is in BioModels Database, where every model is annotated and linked to relevant data resources such as publications, databases of compounds and pathways, controlled vocabularies, and more. With annotations, a model becomes more than simply a rendition of a mathematical construct—it becomes a semantically-enriched framework for communicating knowledge.
Levels and versions
SBML is defined in Levels: upward-compatible specifications that add features and expressive power. Software tools that do not need or cannot support the complexity of higher Levels can go on using lower Levels; tools that can read higher Levels are assured of also being able to interpret models defined in the lower Levels. Thus new Levels do not supersede previous ones. However, each Level can have multiple Versions within it, and new Versions of a Level do supersede old Versions of that same Level.
There are currently three Levels of SBML defined. The current Versions within those Levels are the following:
Level 3 Version 2 Core, for which the final Release 2 specification was issued 26 April 2019
Level 2 Version 5 Release 1
Level 1 Version 2
Open-source software infrastructure such as libSBML and JSBML allows developers to support all Levels of SBML in their software with a minimum amount of effort.
The SBML Team maintains a public issue tracker where readers may report errors or other issues in the SBML specification documents. Reported issues are eventually put on the list of official errata associated with each specification release. The lists of errata are documented on the Specifications page of SBML.org.
Level 3 packages
Development of SBML Level 3 has been proceeding in a modular fashion. The Core specification is a complete format that can be used alone. Additional Level 3 packages can be layered on to this core to provide additional, optional features.
Hierarchical Model Composition
The Hierarchical Model Composition package, known as "comp", was released in November 2012. This package provides
the ability to include models as submodels inside another model. The goal is to support the ability
of modelers and software tools to do such things as (1) decompose larger models into smaller ones,
as a way to manage complexity; (2) incorporate multiple instances of a given model within one or more
enclosing models, to avoid literal duplication of repeated elements; and (3) create libraries of reusable,
tested models, much as is done in software development and other engineering fields. The specification was the culmination
of years of discussion by a wide number of people.
Flux Balance Constraints
The Flux Balance Constraints package (nicknamed "fbc") was first released
in February 2013. Important revisions were introduced as part of
Version 2, released in September, 2015. The
"fbc" package provides support for constraint-based modeling,<ref
name="FBA"/> frequently used to analyze and study biological networks on
both a small and large scale. This SBML package makes use of
standard components from the SBML Level 3 core specification, including
species and reactions, and extends them with additional attributes and
structures to allow modelers to define such things as flux bounds and
optimization functions.
Qualitative Models
The Qualitative Models or "qual" package for SBML Level 3 was
released in May 2013. This package supports the representation of models where an
in-depth knowledge of the biochemical reactions and their kinetics is missing
and a qualitative approach must be used. Examples of phenomena that have
been modeled in this way include gene regulatory networks
and signaling pathways, basing the model structure on
the definition of regulatory or influence graphs. The definition and use of
some components of this class of models differ from the way that species and
reactions are defined and used in core SBML models. For example,
qualitative models typically associate discrete levels of activities with
entity pools; consequently, the processes involving them cannot be described
as reactions per se, but rather as transitions between states. These systems
can be viewed as reactive systems whose dynamics are represented by means of
state transition graphs (or other Kripke structures) in
which the nodes are the reachable states and the edges are the state
transitions.
Layout
The SBML layout package originated as a set of annotation conventions
usable in SBML Level 2. It was introduced at the SBML Forum in
St. Louis in 2004. Ralph Gauges wrote the
specification and provided an implementation that
was widely used. This original definition was reformulated as an SBML
Level 3 package, and a specification was formally released in August,
2013.
The SBML Level 3 Layout package provides a specification for how to
represent a reaction network in a graphical form. It is thus better tailored
to the task than the use of an arbitrary drawing or graph. The SBML
Level 3 package only deals with the information necessary to define the
position and other aspects of a graph's layout; the additional details
necessary to complete the graph—namely, how the visual aspects are meant
to be rendered— are the purview of the separate SBML Level 3
package called Rendering (nicknamed "render"). As of November 2015, a draft
specification for the "render" package is available, but it has not yet been
officially finalized.
Packages under development
Development of SBML Level 3 packages is being undertaken such that specifications are reviewed and implementations
attempted during the development process. Once a specification is stable and there are two implementations that support it,
the package is considered accepted. The packages detailed above have all reached the acceptance stage.
A number of additional packages are currently in the development phase.
Structure
A model definition in SBML Levels 2 and 3 consists of lists of one or more of the following components (a minimal construction example follows the list):
Function definition: A named mathematical function that may be used throughout the rest of a model.
Unit definition: A named definition of a new unit of measure, or a redefinition of an existing SBML default unit. Named units can be used in the expression of quantities in a model.
Compartment Type (only in SBML Level 2): A type of location where reacting entities such as chemical substances may be located.
Species type (only in SBML Level 2): A type of entity that can participate in reactions. Examples of species types include ions such as Ca2+, molecules such as glucose or ATP, binding sites on a protein, and more.
Compartment: A well-stirred container of a particular type and finite-size where species may be located. A model may contain multiple compartments of the same compartment type. Every species in a model must be located in a compartment.
Species: A pool of entities of the same species type located in a specific compartment.
Parameter: A quantity with a symbolic name. In SBML, the term parameter is used in a generic sense to refer to named quantities regardless of whether they are constants or variables in a model.
Initial Assignment: A mathematical expression used to determine the initial conditions of a model. This type of structure can only be used to define how the value of a variable can be calculated from other values and variables at the start of simulated time.
Rule: A mathematical expression used in combination with the differential equations constructed based on the set of reactions in a model. It can be used to define how a variable's value can be calculated from other variables or used to define the rate of change of a variable. The set of rules in a model can be used with the reaction rate equations to determine the behavior of the model with respect to time. The set of rules constrains the model for the entire duration of simulated time.
Constraint: A mathematical expression that defines a constraint on the values of model variables. The constraint applies at all instants of simulated time. The set of constraints in the model should not be used to determine the behavior of the model with respect to time.
Reaction: A statement describing some transformation, transport or binding process that can change the amount of one or more species. For example, a reaction may describe how certain entities (reactants) are transformed into certain other entities (products). Reactions have associated kinetic rate expressions describing how quickly they take place.
Event: A statement describing an instantaneous, discontinuous change in a set of variables of any type (species concentration, compartment size or parameter value) when a triggering condition is satisfied.
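As an illustration of how the components above fit together, the following sketch builds a minimal one-compartment, one-reaction model with the libSBML Python bindings and prints the resulting XML. The identifiers (cell, S, P, k1) are invented for the example, error checking is omitted, and the snippet assumes libSBML is installed (e.g. via pip install python-libsbml).

import libsbml

doc = libsbml.SBMLDocument(3, 1)                    # SBML Level 3 Version 1
model = doc.createModel()
model.setId("minimal_example")

c = model.createCompartment()                       # Compartment
c.setId("cell"); c.setSize(1.0); c.setConstant(True)

for sid, amount in (("S", 10.0), ("P", 0.0)):       # Species
    s = model.createSpecies()
    s.setId(sid); s.setCompartment("cell"); s.setInitialAmount(amount)
    s.setConstant(False); s.setBoundaryCondition(False); s.setHasOnlySubstanceUnits(False)

k = model.createParameter()                         # Parameter
k.setId("k1"); k.setValue(0.1); k.setConstant(True)

r = model.createReaction()                          # Reaction: S -> P with rate k1*S
r.setId("conversion"); r.setReversible(False); r.setFast(False)
reactant = r.createReactant(); reactant.setSpecies("S"); reactant.setStoichiometry(1.0); reactant.setConstant(True)
product = r.createProduct(); product.setSpecies("P"); product.setStoichiometry(1.0); product.setConstant(True)
kinetic_law = r.createKineticLaw()
kinetic_law.setMath(libsbml.parseL3Formula("k1 * S"))

print(libsbml.writeSBMLToString(doc))               # serialize the model as SBML/XML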
DSLs Supporting SBML
SBML is primarily a format for the exchange of systems biology models between software modeling tools or for archiving models in repositories such as BiGG, BioModels, or JWS Online. Since SBML is encoded in XML and in particular uses MathML for representing mathematics, the format is not easily read or written by hand. As a result, other groups have developed human-readable formats that can be converted to and from SBML.
SBML-shorthand
SBML shorthand is a specification and associated Python tooling to interconvert SBML and the shorthand notation. The format was developed by the UK Newcastle systems biology group sometime before 2006. Its aim was to enable modelers to more rapidly create models without having to either write raw XML or use GUI tools. Two Python tools are provided, mod2sbml.py and sbml2mod.py. The libSBML package for Python is required to assist in the conversion. Currently, SBML-shorthand supports SBML Level 3, version 1.
The following code is an example of SBML-shorthand being used to describe the simple enzyme-substrate mechanism.
@compartments
cell=1
@species
cell:Substrate=10
cell:Enzyme=5
cell:Complex=0
cell:Product=0
@parameters
k1=1
k1r=2
@reactions
@rr=Binding
Substrate+Enzyme -> Complex
k1*Substrate*Enzyme-k1r*Complex
@r=Conversion
Complex -> Product + Enzyme
kcat*Complex : kcat=3
Antimony
Antimony is based on an earlier DSL implemented in the Jarnac modeling application. That, in turn, was based on the SCAMP modeling application which ultimately drew inspiration from the DSL language developed by David Garfinkel for the BIOSIM simulator.
Like SBML-shorthand, Antimony provides a simplified text representation of SBML. It uses a minimum of punctuation characters which renders the text easier to read and understand. It also allows users to add comments. Antimony is implemented using C/C++ and Bison as the grammar parser. However, the distribution also includes Python bindings which can be installed using pip to make it easy to use from Python. It is also available via the Tellurium package. More recently, a JavaScript/WASM version has been generated which allows the Antimony language to be used on the web. The website tool makesbml uses the Javascript version. Antimony supports SBML Level 3, version 2. Antimony also supports the following SBML packages: Hierarchical Model Composition, Flux Balance Constraints, and Distributions.
The following example illustrates Antimony being used to describe a simple enzyme-kinetics model:
binding: Substrate + Enzyme -> Complex; k1*Substrate*Enzyme - k1r*Complex;
Conversion: Complex -> Product + Enzyme; kcat*Complex;
// Species initializations
Substrate = 10;
Enzyme = 5;
Complex = 0;
Product = 0;
// Variable initializations
k1 = 1;
k1r = 2;
kcat = 3;
Community
As of February 2020, nearly 300 software systems advertise support for SBML. A current list is available in the form of the SBML Software Guide, hosted at SBML.org.
SBML has been and continues to be developed by the community of people making software platforms for systems biology, through active email discussion lists and biannual workshops. The meetings are often held in conjunction with other biology conferences, especially the International Conference on Systems Biology (ICSB). The community effort is coordinated by an elected editorial board made up of five members. Each editor is elected for a 3-year non-renewable term.
Tools such as an online model validator as well as open-source libraries for incorporating SBML into software programmed in the C, C++, Java, Python, Mathematica, MATLAB and other languages are developed partly by the SBML Team and partly by the broader SBML community.
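As a small example of the library support described above, the sketch below uses the libSBML Python bindings to read a model file and report basic statistics. The filename model.xml is a placeholder; this is an illustrative sketch rather than a complete validation workflow.

import libsbml

doc = libsbml.readSBMLFromFile("model.xml")         # parse an SBML file
if doc.getNumErrors() > 0:
    doc.printErrors()                               # report read or validation problems
else:
    doc.checkConsistency()                          # run libSBML's consistency checks
    model = doc.getModel()
    print("SBML Level", doc.getLevel(), "Version", doc.getVersion())
    print("species:  ", model.getNumSpecies())
    print("reactions:", model.getNumReactions())
    for i in range(model.getNumSpecies()):
        print("  ", model.getSpecies(i).getId())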
SBML is an official IETF MIME type, specified by RFC 3823.
See also
BioModels Database
BioPAX
CellML
MIASE
MIRIAM
Systems Biology Ontology
Systems Biology Graphical Notation
References
External links
SBML home page
SBML-related presentations and posters from Nature Precedings
COmputational Modeling in BIology NEtwork
XML markup languages
Industry-specific XML-based standards
Systems biology | SBML | Biology | 4,216 |
1,392,242 | https://en.wikipedia.org/wiki/Energy%20tower%20%28downdraft%29 | The energy tower is a device for producing electrical power. The brainchild of Dr. Phillip Carlson, expanded by Professor Dan Zaslavsky from the Technion. Energy towers spray water on hot air at the top of the tower, making the cooled air fall through the tower and drive a turbine at the tower's bottom.
Concept
An energy tower (also known as a downdraft energy tower, because the air flows down the tower) is a tall (1,000 meters) and wide (400 meters) hollow cylinder with a water spray system at the top. Pumps lift the water to the top of the tower and then spray the water inside the tower. Evaporation of water cools the hot, dry air hovering at the top. The cooled air, now denser than the outside warmer air, falls through the cylinder, spinning a turbine at the bottom. The turbine drives a generator which produces the electricity.
The greater the temperature difference between the air and water, the greater the energy efficiency. Therefore, downdraft energy towers should work best in a hot dry climate. Energy towers require large quantities of water. Salt water is acceptable, although care must be taken to prevent corrosion; desalination can help solve this problem.
The energy that is extracted from the air is ultimately derived from the sun, so this can be considered a form of solar power. Energy production continues at night, because air retains some of the day's heat after dark. However, power generation by the energy tower is affected by the weather: it slows down each time the ambient humidity increases (such as during a rainstorm), or the temperature falls.
A related approach is the solar updraft tower, which heats air in glass enclosures at ground level and sends the heated air up a tower driving turbines at the base. Updraft towers do not pump water, which increases their efficiency, but do require large amounts of land for the collectors. Land acquisition and collector construction costs for updraft towers must be compared to pumping infrastructure costs for downdraft collectors. Operationally, maintaining the collector structures for updraft towers must be compared to pumping costs and pump infrastructure maintenance.
Cost/efficiency
Zaslavsky and other authors estimate that depending on the site and financing costs, energy could be produced in the range of 1-4 cents per kWh, well below alternative energy sources other than hydro. Pumping the water requires about 50% of the turbine's output. Zaslavsky claims that the Energy Tower would achieve up to 70-80% of the Carnot limit. If the conversion efficiency turns out to be much lower, it is expected to have an adverse impact on projections made for cost of energy.
Projections made by Altmann and by Czisch about conversion efficiency and about cost of energy (cents/kWh) are based only on model calculations; no data from a working pilot plant have ever been collected.
Actual measurements on the 50 kW Manzanares pilot solar updraft tower found a conversion efficiency of 0.53%, although SBP believe that this could be increased to 1.3% in a large and improved 100 MW unit. This amounts to about 10% of the theoretical limit for the Carnot cycle. It is important to note a significant difference between the updraft and downdraft proposals: the use of water as a working medium dramatically increases the potential for thermal energy capture, and hence electrical generation, because of the large amount of heat water absorbs on evaporation. While the design may have its problems (see next section) and the stated efficiency claims have yet to be demonstrated, it would be an error to extrapolate performance from one to the other simply because of similarities in the name.
Potential problems
In salty humid air corrosion rates can be very high. This concerns the tower and the turbines.
The technology requires a hot and arid climate. Such locations include the coast of West Africa, Western Australia, northern Chile, Namibia, the Red Sea, Persian Gulf, and the Gulf of California. Most of these regions are remote and thinly populated, and would require power to be transported over long distances to where it is needed. Alternatively, such plants could provide captive power for nearby industrial uses such as desalination plants, aluminium production via the Hall-Héroult process, or to generate hydrogen for ammonia production.
Humidity resulting from plant operation may be an issue for nearby communities. A 400-meter-diameter power plant producing a wind velocity of 22 meters per second must add about 15 grams of water per kilogram of air processed. This is equal to roughly 41 tonnes (about 41 m³) of water per second. In terms of humid air, this is about 10 cubic kilometers of very humid air each hour. Thus, a community even 100 kilometers away may be unpleasantly affected.
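The figures above can be checked with a back-of-the-envelope calculation: the air mass flow through the tower is roughly cross-sectional area times air speed times air density, and the water added is that mass flow times 15 g/kg. The following Python sketch is only a sanity check of the quoted numbers; the air density of 1.0 kg/m³ is an assumed round value for hot air, not a figure from the source.

import math

def water_spray_rate(diameter_m, air_speed_m_s, water_g_per_kg_air, air_density_kg_m3=1.0):
    # Returns (water added in tonnes/s, humid air volume in km^3/h).
    area = math.pi * (diameter_m / 2) ** 2              # tower cross-section, m^2
    volume_flow = area * air_speed_m_s                  # m^3 of air per second
    air_mass_flow = volume_flow * air_density_kg_m3     # kg of air per second
    water_tonnes_s = air_mass_flow * water_g_per_kg_air / 1e6
    humid_air_km3_h = volume_flow * 3600 / 1e9
    return water_tonnes_s, humid_air_km3_h

water, volume = water_spray_rate(400, 22, 15)           # figures quoted in the paragraph above
print(f"{water:.0f} tonnes of water per second, about {volume:.0f} km^3 of humid air per hour")
# -> roughly 41 tonnes/s and ~10 km^3/h, consistent with the quoted numbers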
Brine is a problem in proportion to the humidity created: since water's vapor pressure decreases with salinity, it is reasonable to expect at least as much brine as evaporated water. This means that a river of brine flows away from the power plant at about 41 tonnes per second, while a river of saline water flows in at about 82 tonnes per second.
Large industrial consumers often locate near cheap sources of electricity. However, many of these desert regions also lack necessary infrastructure, increasing capital requirements and overall risk.
Demonstration project
Maryland-based Solar Wind Energy, Inc. was developing a tower.
Under the most recent design specifications, the Tower designed for a site near San Luis, Arizona, has a gross production capacity of up to 1,250 megawatt-hours on an hourly basis. Due to lower capacities during winter days, the average hourly output available for sale to the grid over the entire year is approximately 435 megawatt-hours.
See also
Psychrometrics (not to be confused with Psychometrics)
Solar updraft tower
References
Zaslavsky, Dan; Rami Guetta et al. (December 2001). . Technion Israel, Israel - India Steering Committee. Retrieved on 2007-03-15.
Zwirn, Michael J. (January 1997). Energy Towers: Pros and Cons of the Arubot Sharav Alternative Energy Proposal. Arava Institute for Environmental Studies. Retrieved on 2006-12-22.
Zaslavsky, Dan (November, 1996). "Solar Energy Without a Collector". The 3rd Sabin Conference.
External links
Energy Towers, A complete brochure by Dan Zaslavsky, updated for December 2009
SHPEGS "open source" energy tower concept similar in some ways to the downdraft tower.
Prof. Dan Zaslavsky on the Technion faculty page.
A commercial company set to build this type of tower
Electric power
Energy conversion
Power station technology
Sustainable energy
Sustainable technologies | Energy tower (downdraft) | Physics,Engineering | 1,387 |
56,588,027 | https://en.wikipedia.org/wiki/Datex%20II | Datex II or Datex2 is a data exchange standard for exchanging traffic information between traffic management centres, traffic service providers, traffic operators and media partners. It contains for example traffic incidents, current road works and other special traffic-related events. These data is presented in XML-format and is modeled with UML.
The standard is developed by the technical body Intelligent transport systems (CEN/TC 278) of the European Committee for Standardization.
The standard contains 12 parts:
Context and framework
Location referencing
Situation publication
Variable Message Sign (VMS) Publications
Measured and Elaborated Data Publications
Parking Publications
Common data elements
Traffic management publications and extensions dedicated to the urban environment
Traffic signal management publications dedicated to the urban environment
Energy infrastructure
Publication of machine interpretable traffic regulations
Facility related publications
References
External links
Official Datex II website
Datex II example messages
Datex on openstreetmap
EN standards
Intelligent transportation systems | Datex II | Technology | 178 |
28,304,972 | https://en.wikipedia.org/wiki/Glycylglycine | Glycylglycine is the dipeptide of glycine, making it the simplest peptide.
The compound was first synthesized by Emil Fischer and Ernest Fourneau in 1901 by boiling 2,5-diketopiperazine (glycine anhydride) with hydrochloric acid.
Shaking with alkali and other synthesis methods have been reported.
Because of its low toxicity, it is useful as a buffer for biological systems with effective ranges between pH 2.5–3.8 and 7.5–8.9; however, it is only moderately stable for storage once dissolved. It is used in the synthesis of more complex peptides.
Glycylglycine has also been reported to be helpful in solubilizing recombinant proteins in E. coli; improvements in protein solubility after cell lysis have been observed using different concentrations of glycylglycine.
References
Dipeptides
Dimers (chemistry) | Glycylglycine | Chemistry,Materials_science | 201 |
13,219,793 | https://en.wikipedia.org/wiki/New%20Alchemy%20Institute | The New Alchemy Institute was a research center that did pioneering investigation into organic agriculture, aquaculture and bioshelter design between 1969 and 1991. It was founded by John Todd, Nancy Jack Todd, and William McLarney. Its purpose was to research human support systems of food, water, and shelter and to completely rethink how these systems were designed.
Purpose of the Institute
The New Alchemy Institute was founded on a former dairy farm in Hatchville, part of Falmouth, Massachusetts, on Cape Cod. Their stated aim was to do research on behalf of the planet: "Among our major tasks is the creation of ecologically derived human support systems - renewable energy, agriculture, aquaculture, housing and landscapes. The strategies we research emphasize a minimal reliance on fossil fuels and operate on a scale accessible to individuals, families and small groups. It is our belief that ecological and social transformations must take place at the lowest functional levels of society if humankind is to direct its course towards a greener, saner world.
Our programs are geared to produce not riches, but rich and stable lives, independent of world fashion and the vagaries of international economics. The New Alchemists work at the lowest functional level of society on the premise that society, like the planet itself, can be no healthier than the components of which it is constructed. The urgency of our efforts is based on our belief that the industrial societies which now dominate the world are in the process of destroying it." (Fall 1970, Bulletin of the New Alchemists. )
Areas of research
Bioshelters
A bioshelter is a solar greenhouse that is managed as a self-contained ecosystem. The groupings of plants, animals, soil and insects are selected so that closed loops of life cycles, materials, water, and energy are created, and require minimal inputs from outside the system. They emulate natural rhythms of growth and cycling of nutrients.
New Alchemy built several bioshelters:
"The Ark" located at the property in Hatchville, Massachusetts, United States
"The Ark" located at Spry Point, Prince Edward Island, Canada was built in 1976 and demolished in the late 1990s.
Organic agriculture
New Alchemy investigated the practices of organic agriculture for both field crops, and greenhouse growing. They researched intensive gardening, biological pest control, cover cropping, irrigation using fish pond water, perennial food crops, and tree crops.
Aquaculture
New Alchemy experimented with growing edible fish in ponds in the bioshelters. The solar aquaculture ponds were above-ground, translucent tanks. The fertile pond water was used for irrigating the crops in the greenhouses. This proved to be a successful way to raise edible fish, floating hydroponic crops, and irrigated greenhouse food crops.
Ideological basis
The scientists working at New Alchemy were determined to rethink how human support systems were designed. They looked to nature as the ultimate designer, using careful observation of natural cycles and processes as the template for creating truly sustainable systems.
Publications
The research conducted at New Alchemy was documented in a series of journals and technical bulletins. A complete list is available at: New Alchemy Institute
References
External links
New Alchemy Institute
Ocean Arks International
Environmental research institutes
Research institutes in Massachusetts
Research institutes in Canada | New Alchemy Institute | Environmental_science | 679 |
20,559,838 | https://en.wikipedia.org/wiki/Intertidal%20wetland | An intertidal wetland is an area along a shoreline that is exposed to air at low tide and submerged at high tide. This type of wetland is defined by an intertidal zone and includes its own intertidal ecosystems.
Description
The main types of intertidal wetlands are mudflats (e.g., mangrove swamps) and salt marshes. The mangrove swamps are encountered along tropical shores and are characterized by tree vegetation, while salt marshes are mostly found in temperate zones and are mostly grass ecosystems.
Intertidal wetlands are commonly encountered in most estuaries. Intertidal wetland ecosystems are amongst the most productive plant communities and often constitute a large part of the estuary areas.
See also
Tidal marsh
References
Coastal geography
Landforms
Wetlands | Intertidal wetland | Environmental_science | 149 |
3,299,423 | https://en.wikipedia.org/wiki/Transversality%20%28mathematics%29 | In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection.
Definition
Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sums of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point.
In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point.
One notation for the transverse intersection of two submanifolds L₁ and L₂ of a given manifold M is L₁ ⋔ L₂. This notation can be read in two ways: either as "L₁ and L₂ intersect transversally" or as an alternative notation for the set-theoretic intersection L₁ ∩ L₂ of L₁ and L₂ when that intersection is transverse. In this notation, the definition of transversality reads: L₁ ⋔ L₂ if and only if, for every point p in L₁ ∩ L₂, the tangent spaces satisfy TₚL₁ + TₚL₂ = TₚM.
Transversality of maps
The notion of transversality of a pair of submanifolds is easily extended to transversality of a submanifold and a map to the ambient manifold, or to a pair of maps to the ambient manifold, by asking whether the pushforwards of the tangent spaces along the preimage of points of intersection of the images generate the entire tangent space of the ambient manifold. If the maps are embeddings, this is equivalent to transversality of submanifolds.
Meaning of transversality for different dimensions
Suppose we have transverse maps f₁ : L₁ → M and f₂ : L₂ → M, where L₁, L₂ and M are manifolds with dimensions ℓ₁, ℓ₂ and m respectively.
The meaning of transversality differs considerably depending on the relative dimensions of L₁, L₂ and M. The relationship between transversality and tangency is clearest when ℓ₁ + ℓ₂ = m.
We can consider three separate cases:
When ℓ₁ + ℓ₂ < m, it is impossible for the images of L₁'s and L₂'s tangent spaces to span M's tangent space at any point. Thus any intersection between f₁(L₁) and f₂(L₂) cannot be transverse. However, non-intersecting manifolds vacuously satisfy the condition, so they can be said to intersect transversally.
When ℓ₁ + ℓ₂ = m, the images of L₁'s and L₂'s tangent spaces must sum directly to M's tangent space at any point of intersection. Their intersection thus consists of isolated signed points, i.e. a zero-dimensional manifold.
When ℓ₁ + ℓ₂ > m, this sum need not be direct. In fact it cannot be direct if f₁ and f₂ are immersions at their point of intersection, as happens in the case of embedded submanifolds. If the maps are immersions, the intersection of their images will be a manifold of dimension ℓ₁ + ℓ₂ − m (summarized in the display after this list).
Intersection product
Given any two smooth submanifolds, it is possible to perturb either of them by an arbitrarily small amount such that the resulting submanifold intersects transversally with the fixed submanifold. Such perturbations do not affect the homology class of the manifolds or of their intersections. For example, if manifolds of complementary dimension intersect transversally, the signed sum of the number of their intersection points does not change even if we isotope the manifolds to another transverse intersection. (The intersection points can be counted modulo 2, ignoring the signs, to obtain a coarser invariant.) This descends to a bilinear intersection product on homology classes of any dimension, which is Poincaré dual to the cup product on cohomology. Like the cup product, the intersection product is graded-commutative.
Examples of transverse intersections
The simplest non-trivial example of transversality is of arcs in a surface. An intersection point between two arcs is transverse if and only if it is not a tangency, i.e., their tangent lines inside the tangent plane to the surface are distinct.
In a three-dimensional space, two curves can be transverse only when they have empty intersection, since their tangent spaces could generate at most a two-dimensional space. Curves transverse to surfaces intersect in points, and surfaces transverse to each other intersect in curves. Curves that are tangent to a surface at a point (for instance, curves lying on a surface) do not intersect the surface transversally.
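As a concrete numerical illustration of the arcs-in-a-surface case above (an illustrative sketch, not from the source; the curves and tolerance are invented for the example), two parametrized plane curves meet transversally at a common point exactly when their tangent vectors there are linearly independent:

```python
import numpy as np

def tangent(curve, t, h=1e-6):
    """Central-difference approximation of the tangent vector of a parametrized curve."""
    return (curve(t + h) - curve(t - h)) / (2 * h)

def transverse_at(curve1, t1, curve2, t2, tol=1e-8):
    """True if the two plane curves meet transversally at the given parameter values,
    i.e. their tangent vectors at the common point are linearly independent."""
    v1 = tangent(curve1, t1)
    v2 = tangent(curve2, t2)
    # In the plane, linear independence amounts to a nonzero 2x2 determinant.
    return abs(np.linalg.det(np.column_stack([v1, v2]))) > tol

# The parabola y = x^2 is tangent to the x-axis at the origin (not transverse),
# while the line y = x crosses the x-axis there transversally.
parabola = lambda t: np.array([t, t**2])
x_axis   = lambda t: np.array([t, 0.0])
diagonal = lambda t: np.array([t, t])

print(transverse_at(parabola, 0.0, x_axis, 0.0))  # False: the tangent lines coincide
print(transverse_at(diagonal, 0.0, x_axis, 0.0))  # True: the tangent lines are distinct
```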
Here is a more specialised example: suppose that G is a simple Lie group and 𝔤 is its Lie algebra. By the Jacobson–Morozov theorem every nilpotent element e ∈ 𝔤 can be included into an 𝔰𝔩₂-triple (e, h, f). The representation theory of 𝔰𝔩₂ tells us that 𝔤 = [𝔤, e] ⊕ 𝔷_𝔤(f). The space [𝔤, e] is the tangent space at e to the adjoint orbit Ad(G)e, and so the affine space e + 𝔷_𝔤(f) intersects the orbit of e transversally. The space e + 𝔷_𝔤(f) is known as the "Slodowy slice" after Peter Slodowy.
Applications
Optimal control
In fields utilizing the calculus of variations or the related Pontryagin maximum principle, the transversality condition is frequently used to control the types of solutions found in optimization problems. For example, it is a necessary condition for solution curves to problems of the form:
Minimize ∫ F(t, x(t), x′(t)) dt over the curve, where one or both of the endpoints of the curve are not fixed.
In many of these problems, the solution satisfies the condition that the solution curve should cross transversally the nullcline or some other curve describing terminal conditions.
Smoothness of solution spaces
Using Sard's theorem, whose hypothesis is a special case of the transversality of maps, it can be shown that transverse intersections between submanifolds of a space of complementary dimensions or between submanifolds and maps to a space are themselves smooth submanifolds. For instance, if a smooth section of an oriented manifold's tangent bundle—i.e. a vector field—is viewed as a map from the base to the total space, and intersects the zero-section (viewed either as a map or as a submanifold) transversely, then the zero set of the section—i.e. the singularities of the vector field—forms a smooth 0-dimensional submanifold of the base, i.e. a set of signed points. The signs agree with the indices of the vector field, and thus the sum of the signs—i.e. the fundamental class of the zero set—is equal to the Euler characteristic of the manifold. More generally, for a vector bundle over an oriented smooth closed finite-dimensional manifold, the zero set of a section transverse to the zero section will be a submanifold of the base of codimension equal to the rank of the vector bundle, and its homology class will be Poincaré dual to the Euler class of the bundle.
An extremely special case of this is the following: if a differentiable function from the reals to the reals has nonzero derivative at a zero of the function, then the zero is simple, i.e. the graph is transverse to the x-axis at that zero; a zero derivative would instead mean a horizontal tangent to the curve, which would coincide with the tangent space of the x-axis, so the intersection would not be transverse.
For an infinite-dimensional example, the d-bar operator is a section of a certain Banach space bundle over the space of maps from a Riemann surface into an almost-complex manifold. The zero set of this section consists of holomorphic maps. If the d-bar operator can be shown to be transverse to the zero-section, this moduli space will be a smooth manifold. These considerations play a fundamental role in the theory of pseudoholomorphic curves and Gromov–Witten theory. (Note that for this example, the definition of transversality has to be refined in order to deal with Banach spaces!)
Grammar
"Transversal" is a noun; the adjective is "transverse."
quote from J.H.C. Whitehead, 1959
See also
Transversality theorem
Notes
References
Differential topology
Calculus of variations
Geometry | Transversality (mathematics) | Mathematics | 1,758 |
48,323,501 | https://en.wikipedia.org/wiki/136th%20Civil%20Engineer%20Squadron | The 136th Civil Engineer Squadron (136 CES) is a unit of the 136th Airlift Wing, Texas Air National Guard, Texas Military Forces stationed at Naval Air Station Fort Worth Joint Reserve Base, Fort Worth, Texas. If activated to federal service, the Squadron is gained by the United States Air Force Air Mobility Command.
Lineage
Constituted as 136th Civil Engineering Flight on 1 October 1969
Assignment to National Guard Bureau on 1 October 1969
Extended Federal Recognition on 1 November 1969
Assigned to Texas Air National Guard on 1 November 1969
Assigned to 136th Tactical Airlift Wing on 1 November 1969
Stationed at Naval Air Station Dallas, Hensley Field on 1 November 1969
Re-Designated as 136th Civil Engineering Squadron on 1 July 1985
Assigned to Air Combat Command on 1 October 1993
Re-Designated as 136th Civil Engineer Squadron on 1 March 1994
Assigned to Air Mobility Command on 1 April 1997
Stationed at Naval Air Station Fort Worth Joint Reserve Base on 27 April 1999
Decorations
Air Force Outstanding Unit Award
1972 (1 May 1969 – 30 Apr 1977) (136ARW)
1985 (1 Jan 1980 – 1 Aug 1982) (136CEF)
1985 (1 Jan 1983 – 31 Dec 1984) (136AW)
1991 (1 Sep 1989 – 1 Jun 1991) (136AW)
2009 (1 Oct 2006 – 30 Sep 2008) (136CES)
2015 (1 Oct 2012 – 30 Sep 2014) (136AW)
Texas Governor's Unit Citation
References
Squadrons of the United States Air National Guard
Civil engineering
Engineering squadrons of the United States Air Force
Military units and formations in Texas
Texas Military Forces | 136th Civil Engineer Squadron | Engineering | 314 |
48,782,942 | https://en.wikipedia.org/wiki/Sebelipase%20alfa | Sebelipase alfa, sold under the brand name Kanuma, is a recombinant form of the enzyme lysosomal acid lipase (LAL) that is used as a medication for the treatment of lysosomal acid lipase deficiency (LAL-D). It is administered via intravenous infusion. It was approved for medical use in the European Union and in the United States in 2015.
Medical uses
Sebelipase alfa is indicated for long-term enzyme replacement therapy (ERT) in people of all ages with lysosomal acid lipase (LAL) deficiency.
History
Sebelipase was developed by Synageva, which became part of Alexion Pharmaceuticals in 2015. For its production, chickens are genetically modified to produce the recombinant form of LAL (rhLAL) in their egg white. After extraction and purification, the protein becomes available as the medication. On 8 December 2015 the FDA announced that its approval came from two centers: the Center for Drug Evaluation and Research (CDER) approved the human therapeutic application of the medication, while the Center for Veterinary Medicine (CVM) approved the application for a recombinant DNA construct in genetically engineered chickens to produce rhLAL in their egg whites. At the time it gained FDA approval, Kanuma was the first and only drug manufactured in chicken eggs and intended for use in humans.
Sebelipase alfa is an orphan drug; its effectiveness was demonstrated in a phase 3 trial published in 2015. LAL deficiency affects fewer than 0.2 in 10,000 people in the EU.
References
Drugs developed by AstraZeneca
Biopharmaceuticals
Orphan drugs | Sebelipase alfa | Chemistry,Biology | 335 |
56,615,271 | https://en.wikipedia.org/wiki/Aparna%20Higgins | Aparna W. Higgins is a mathematician known for her encouragement of undergraduate mathematicians to participate in mathematical research. Higgins originally specialized in universal algebra, but her more recent research concerns graph theory, including graph pebbling and line graphs. She is a professor of mathematics at the University of Dayton.
Education and career
Higgins is originally from Mumbai, India, and did her undergraduate studies at the University of Mumbai, graduating in 1978. She completed her Ph.D. in 1983 at the University of Notre Dame; her dissertation, Heterogeneous Algebras Associated with Non-Indexed Algebras, a Representation Theorem on Weak Automorphisms of Universal Algebras, was supervised by Abraham Goetz.
In 2009 she became director of Project NExT, after the previous director, T. Christine Stevens, stepped down; this project is an initiative of the Mathematical Association of America to provide career guidance to new doctorates in mathematics.
Higgins is married to Bill Higgins, a mathematics professor at Wittenberg University, and the two regularly take their sabbaticals together in California.
Recognition
Higgins won a Distinguished Teaching Award from the Mathematical Association of America in 1995, for her contributions to undergraduate research. In 2005 she was one of three winners of the Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics of the Mathematical Association of America.
References
Year of birth missing (living people)
Living people
20th-century American mathematicians
20th-century Indian mathematicians
20th-century American women mathematicians
21st-century American mathematicians
21st-century Indian mathematicians
21st-century American women mathematicians
Graph theorists
Indian combinatorialists
Indian emigrants to the United States
21st-century Indian women mathematicians
University of Dayton faculty
University of Mumbai alumni
University of Notre Dame alumni
Women scientists from Maharashtra
20th-century Indian women scientists | Aparna Higgins | Mathematics | 352 |
44,014,576 | https://en.wikipedia.org/wiki/The%20Palindromist | The Palindromist is a magazine devoted to palindromes, published since 1996. Initially published biannually, it later switched to an irregular publication schedule. It is edited by Mark Saltveit, a Portland-based stand-up comedian who won the first-ever World Palindrome Championship.
Each issue of the magazine prints a variety of palindromes in various forms (letter-unit, word-unit, and vertical), covers palindrome-related news, and seeks to accredit writers of famous palindromes. The magazine also covers closely related forms of wordplay, including calculator words and written charades.
The magazine organizes the SymmyS Awards, an annual palindrome competition adjudicated by a celebrity panel. Past judges have included Will Shortz, MC Paul Barman, Ben Zimmer, David Allen Cress, "Weird Al" Yankovic, Demetri Martin, and John Flansburgh.
See also
Word Ways: The Journal of Recreational Linguistics
References
External links
The SymmyS Awards
1996 establishments in Oregon
Biannual magazines published in the United States
English-language magazines
Game magazines
Literary magazines published in the United States
Magazines established in 1996
Magazines published in Portland, Oregon
Palindromes
Word games
Irregularly published magazines published in the United States | The Palindromist | Physics | 272 |
23,553,917 | https://en.wikipedia.org/wiki/Reider%27s%20theorem | In algebraic geometry, Reider's theorem gives conditions for a line bundle on a projective surface to be very ample.
Statement
Let D be a nef divisor on a smooth projective surface X. Denote by K_X the canonical divisor of X.
If D² > 4, then the linear system |K_X + D| has no base points unless there exists a nonzero effective divisor E such that
D · E = 0 and E² = −1, or
D · E = 1 and E² = 0;
If D² > 8, then the linear system |K_X + D| is very ample unless there exists a nonzero effective divisor E satisfying one of the following:
D · E = 0 and E² = −1 or −2;
D · E = 1 and E² = 0 or −1;
D · E = 2 and E² = 0;
Applications
Reider's theorem implies the surface case of the Fujita conjecture. Let L be an ample line bundle on a smooth projective surface X. If m > 2, then for D = mL we have
D² = m²L² ≥ m² > 4;
for any effective divisor E the ampleness of L implies D · E = m(L · E) ≥ m > 2.
Thus by the first part of Reider's theorem |K_X + mL| is base-point-free. Similarly, for any m > 3 the linear system |K_X + mL| is very ample.
References
Algebraic surfaces
Theorems in algebraic geometry | Reider's theorem | Mathematics | 267 |
2,990,689 | https://en.wikipedia.org/wiki/Modeling%20perspective | A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.
The traditional way to distinguish between modeling perspectives is the division into structural, functional and behavioral/processual perspectives. This, together with the rule, object, communication, and actor-and-role perspectives, is one way of classifying modeling approaches.
Types of perspectives
Structural modeling perspective
This approach concentrates on describing the static structure. The main concept in this modeling perspective is the entity, which could be an object, a phenomenon, a concept, a thing, etc.
The data modeling languages have traditionally handled this perspective, examples of such being:
The ER-language (Entity-Relationship)
Generic Semantic Modeling language (GSM)
Other approaches including:
The NIAM language (Binary relationship language)
Conceptual graphs (Sowa)
Looking at the ER-language we have the basic components:
Entities: Distinctively identifiable phenomenon.
Relationships: An association among the entities.
Attributes: Used to give value to a property of an entity/relationship.
Looking at the generic semantic modeling language we have the basic components:
Constructed types built by abstraction: Aggregation, generalization, and association.
Attributes.
Primitive types: Data types in GSM are classified into printable and abstract types.
Printable: Used to specify visible values.
Abstract: Representing entities.
Functional modeling perspective
The functional modeling approach concentrates on describing the dynamic process. The main concept in this modeling perspective is the process, which could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is data flow diagrams.
The perspective uses four symbols to describe a process, these being:
Process: Illustrates transformation from input to output.
Store: Data-collection or some sort of material.
Flow: Movement of data or material in the process.
External Entity: External to the modeled system, but interacts with it.
With these symbols, a process can be represented as a network of interconnected processes, stores, flows and external entities.
Such a decomposed representation of a process is a data flow diagram (DFD).
Behavioral perspective
Behavioral perspective gives a description of system dynamics. The main concepts in behavioral perspective are states and transitions between states. State transitions are triggered by events. State Transition Diagrams (STD/STM), State charts and Petri-nets are some examples of well-known behaviorally oriented modeling languages. Different types of State Transition Diagrams are used particularly within real-time systems and telecommunications systems.
Rule perspective
Rule perspective gives a description of goals/means connections. The main concepts in rule perspective are rule, goal and constraint. A rule is something that influences the actions of a set of actors. The standard form of rule is “IF condition THEN action/expression”. Rule hierarchies (goal-oriented modeling), Tempora and Expert systems are some examples of rule oriented modeling.
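To make the standard "IF condition THEN action" form above concrete, the following sketch (purely illustrative; the rule, fact names and values are invented for the example) represents rules as condition/action pairs evaluated against a set of facts:

```python
# Illustrative rule representation: each rule pairs a condition (a predicate over
# the current facts) with an action to perform when the condition holds.
rules = [
    {
        "name": "overdue_reminder",
        "condition": lambda facts: facts["days_overdue"] > 30,
        "action": lambda facts: print(f"Send reminder to customer {facts['customer_id']}"),
    },
]

facts = {"customer_id": 42, "days_overdue": 45}

for rule in rules:
    if rule["condition"](facts):   # IF condition ...
        rule["action"](facts)      # ... THEN action
```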
Object perspective
The object-oriented perspective describes the world as autonomous, communicating objects. An object is an “entity” which has a unique and unchangeable identifier and a local state consisting of a collection of attributes with assignable values. The state can only be manipulated with a set of methods defined on the object. The value of the state can only be accessed by sending a message to the object to call on one of its methods. An event is when an operation is being triggered by receiving a message, and the trace of the events during the existence of the object is called the object’s life cycle or the process of an object. Several objects that share the same definitions of attributes and operations can be parts of an object class. The perspective is originally based on design and programming of object oriented systems. Unified Modelling Language (UML) is a well known language for modeling with an object perspective.
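A minimal sketch of the object perspective as described above (illustrative only; the class, identifier and attribute names are invented for the example): an object keeps a local state that is read or changed only by sending it messages, i.e. by calling its methods.

```python
class BankAccount:
    """An object with a unique identifier and a local state manipulated only via methods."""

    def __init__(self, account_id: str, balance: float = 0.0):
        self._account_id = account_id   # unique, unchangeable identifier
        self._balance = balance         # local state: an attribute with an assignable value

    def deposit(self, amount: float) -> None:
        # An event: the operation is triggered by receiving a "deposit" message.
        self._balance += amount

    def balance(self) -> float:
        # The state is accessed only by sending a message that invokes a method.
        return self._balance

account = BankAccount("ACC-001")
account.deposit(100.0)
print(account.balance())  # 100.0
```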
Communication perspective
This perspective is based on language/action theory from philosophical linguistics. The basic assumption in this perspective is that persons/objects cooperate on a process/action through communication between them.
An illocutionary act consists of five elements: speaker, hearer, time, location and circumstances. There is a reason and a goal for the communication, and the participants in a communication act are oriented towards mutual agreement. In a communication act, the speaker can generally raise three claims: truth (referring to an object), justice (referring to the social world of the participants) and sincerity (referring to the subjective world of the speaker).
Actor and role perspective
Actor and role perspective is a description of organisational and system structure. An actor can be defined as a phenomenon that influences the history of another actor, whereas a role can be defined as the behaviour which is expected by an actor, amongst other actors, when filling the role. Modeling within these perspectives is based both on work with object-oriented programming languages and work with intelligent agents in artificial intelligence. I* is an example of an actor oriented language.
See also
Domain-Specific Modeling (DSM)
Glossary of Unified Modeling Language terms
General-purpose modeling
Model Driven Engineering (MDE)
Modeling language
Three schema approach for data modeling
View model
References
Further reading
Ingeman Arbnor and Björn Bjerke (1997). Methodology for Creating Business Knowledge. California : Sage Publications. (Third Edition 2009).
Information systems
Scientific modelling
Systems engineering | Modeling perspective | Technology,Engineering | 1,071 |
15,275,135 | https://en.wikipedia.org/wiki/Jon%20Lomberg | Jon Lomberg (born 1948) is an American space artist and science journalist. He was Carl Sagan's principal artistic collaborator for more than twenty years on many projects from 1972 through 1996. In 1998, the International Astronomical Union officially named an asteroid (6446 Lomberg) in recognition of his achievements in science communication. He was NASA's Design Director for the Golden Record on the Voyager spacecraft; the cover he designed is expected to last at least a billion years.
Biography
Jon Lomberg grew up in Philadelphia, Pennsylvania. During a visit to Toronto, Ontario, after college, he was invited by science fiction author Judith Merril to display his artwork at a conference she organised for the Ontario Institute for Studies in Education. Lomberg moved to Toronto later that year and, after assisting Merril in a radio documentary for the CBC Radio One program Ideas, went on to create many documentaries on topics such as NASA's Viking program and Halley's Comet for the program.
In 1972, Lomberg showed some of his paintings to astronomer Carl Sagan, who then asked him to illustrate his book The Cosmic Connection (1973). This was the beginning of their quarter century of collaboration on many projects, including the Cosmos series (for which Lomberg created the talent pool and as chief artist won a Primetime Emmy Award), the Cosmos book, Broca's Brain, NASA's interstellar Voyager Golden Record, the original cover art for Sagan's 1985 novel Contact, and the opening sequence from Earth through the Solar System and its galaxy and beyond for the 1997 Contact film. At Sagan's request, Lomberg designed the original sailing ship logo for the Planetary Society in 1981.
The Smithsonian Institution commissioned Lomberg in the early 1990s to paint "A Portrait of the Milky Way", a scientifically accurate artistic representation of the Milky Way galaxy as seen by a hypothetical observer from a vantage point 10 degrees above the galactic plane and 60,000 light years from the galactic center. The painting, which was described in a peer-reviewed academic paper in 1994 as "the best representation of our galaxy to date" and "a first map like those of explorers long ago", was displayed in the National Air and Space Museum from 1992 through 2002 and remains part of its permanent collection of aviation and space art. Lomberg also designed the Galaxy Garden, a three-dimensional walk-through Milky Way scale model which is part of the Paleaku Peace Gardens Sanctuary in Kailua-Kona, Hawaii.
Lomberg co-designed the MarsDial aboard the Mars Exploration Rovers Spirit and Opportunity, and was the project director and editor-in-chief for the Visions of Mars CD-ROM and mini-DVD aboard the spacecraft Phoenix, which landed on Mars in May 2008. He was also on the Waste Isolation Pilot Plant planning teams with Frank Drake, Ben Finney, Ward Goodenough, Louis Narens, Frederick Newmeyer, Woodruff Sullivan and others.
Jon Lomberg is a founding member of the International Association of Astronomical Artists, a member of the Planetary Society advisory council, and designs exhibits and gives presentations for the Mauna Kea Astronomy Education Center in Hilo, Hawaii for the Mauna Kea Observatory and Gemini North. He lives in Hawaii with his wife and two children.
Bibliography
The books Jon Lomberg has co-authored and/or illustrated include:
1978: Carl Sagan, Frank Drake, Ann Druyan, Timothy Ferris, Jon Lomberg, Linda Salzman Sagan. Murmurs of Earth: The Voyager Interstellar Record. New York City: Random House.
1979: Carl Sagan, Broca's Brain: Reflections on the Romance of Science. New York City: Random House.
1997: Donald Goldsmith, Worlds Unnumbered: The Search For Extrasolar Planets. Herndon, Virginia: University Science Books.
1997: Donald Goldsmith, Einstein's Greatest Blunder? The Cosmological Constant and Other Fudge Factors in the Physics of the Universe. Cambridge, Massachusetts: Harvard University Press.
2001: Donald Goldsmith and Tobias Owen, The Search for Life in the Universe. Herndon, Virginia: University Science Books.
2004: David W. Thomson and James Bourassa, Secrets of the Aether: Unified Force Theory, Dark Matter and Consciousness. Alma, Illinois: The Aenor Trust.
Awards and honors
The awards and honors which have been bestowed upon Jon Lomberg include:
1979: American Institute of Graphic Arts Certificate of Excellence for the cover of Broca's Brain: Reflections on the Romance of Science.
1981: Primetime Emmy Award from the Academy of Television Arts and Sciences for Outstanding Individual Achievement in Creative Technical Crafts for the TV series Cosmos.
1983: Graphic Design USA magazine DESI Award for nuclear winter artwork in PARADE magazine.
1984: Vermont World Peace Film Festival Prize for Best Video Documentary for the videotape of the multi-media show Nuclear Winter with Carl Sagan.
1987: Columbia University Graduate School of Journalism Armstrong Award for Documentary for the Ideas program Halley's Comet.
1996: American Association for the Advancement of Science Best Children's Science Book Award for the "Life In The Universe" curriculum for grades 4-9, in the Life Science category.
1998: International Astronomical Union officially renamed Asteroid Lomberg, formerly asteroid 6446 1990QL.
2002: Astronomical Society of the Pacific Klumpke-Roberts Award.
References
External links
Official Jon Lomberg website
Official website of the Galaxy Garden at the Paleaku Astronomy Center in Captain Cook, Hawaii.
Two views of A Portrait of the Milky Way on the Kepler Mission website.
Transcripts of two Robyn Williams shows with Jon Lomberg for the Australian Broadcasting Corporation:
Voyager - A message from Earth - 19 November 2005 at the Sydney Observatory
Jon Lomberg, In Conversation - 5 January 2006
1948 births
Living people
Artists from Philadelphia
Artists from Hawaii
Space artists
Primetime Emmy Award winners
American science writers
Interstellar messages | Jon Lomberg | Astronomy | 1,196 |
64,694,403 | https://en.wikipedia.org/wiki/Bat1K | Bat1K is a project to sequence the genomes of all living bat species to the level of chromosomes and then make the data publicly available. The project began in 2017.
History
Bat1K was founded in 2017. Zoologist and geneticist Emma Teeling and neurogeneticist Sonja Vernes are co-founders. The Bat1K consortium includes researchers from institutions such as University College Dublin, University of Bristol, Max Planck Institute of Molecular Cell Biology and Genetics, and Max Planck Institute for Psycholinguistics. Notable members include Eugene Myers, Liliana M. Dávalos, Nancy Simmons, and Erich Jarvis. As of November 2017, there were 148 members in total, consisting of bat biologists, genome technologists, conservationists, and computational scientists.
Applications
Several research areas could be furthered by documenting bat genomes. These include healthy ageing, disease resistance, ecosystem function and ecosystem services, sensory perception, communication, limb development, and mammal genome structure.
Results
In 2020, the genomes of six species were published: the greater horseshoe bat, Egyptian fruit bat, pale spear-nosed bat, greater mouse-eared bat, Kuhl's pipistrelle, and the velvety free-tailed bat. These genomes were called "comparable to the best reference-quality genomes that have so far been generated for any eukaryote with a gigabase-sized genome". In 2020, the project's stated goal was to sequence an additional 27 genomes, with a representative from each family of bats, within the next year.
See also
Genome project
References
Genome projects
Bats | Bat1K | Biology | 331 |
68,116,053 | https://en.wikipedia.org/wiki/Mark%20Shtaif | Mark Shtaif (Hebrew: מרק שטייף) is an Israeli communication scientist, and a professor of electrical engineering at the faculty of engineering of Tel Aviv University. As of October 2020, he serves as Tel Aviv University’s rector.
Biography
Early life
Mark Shtaif was born in 1966 in Kishinev of the former USSR. His father Abraham was an engineer of agricultural machinery and his mother Tania worked as a pediatrician. His family immigrated to Israel in April 1973 when he was 7 years old.
Education and career
After graduating from the Reali high-school in Haifa, and following a mandatory military service, he completed his bachelor's, master's and doctoral degrees in electrical engineering at the Technion in 1997 and joined the light-wave research lab of AT&T in Red Bank, NJ. His initial position at AT&T was as a post-doctoral fellow, but he was soon promoted to senior and subsequently principal member of technical staff, specializing in the theoretical modeling of fiber communications systems. In 2000 he assumed the position of principal architect at a newly established optical communication start-up named Celion Networks. Later, in 2002, he joined Tel Aviv University's faculty of engineering, where he has been teaching and conducting research ever since.
His fields of research focus primarily on fiber optics and optical communication systems. Within this general area of activity he integrates the fields of optics, quantum theory, nonlinear systems, communications theory, information theory, and signal processing. Over the years he has contributed to a variety of topics including optical amplification, analysis of nonlinear propagation, polarization-related phenomena, analyses of noise and signal detection, quantum information in fiber systems, and fundamental limits to optical communications.
In the years 2014 – 2017 he headed the department of Physical Electronics within the School of Electrical Engineering in Tel Aviv University, and in 2017 – 2020 he was the head of the entire school. In October 2020 he was appointed rector of Tel Aviv University.
Personal life
Mark Shtaif is married to Michal, an educational councilor. They have three children and reside in the town of Even Yehuda.
References
External links
Profile on DBLP website
Shtaif discussing the history of quantum communications in 2014 at memorial symposium for James P. Gordon on YouTube
1966 births
Living people
Academics from Tel Aviv
Bessarabian Jews
Electrical engineers
Tel Aviv University people | Mark Shtaif | Engineering | 483 |
34,667,944 | https://en.wikipedia.org/wiki/Frozen%20mirror%20image%20method | Frozen mirror image method (or method of frozen images) is an extension of the method of images for magnet-superconductor systems that was introduced by Alexander Kordyuk in 1998 to take into account the magnetic flux pinning phenomenon. The method gives a simple representation of the magnetic field distribution generated by a magnet (a system of magnets) outside an infinitely flat surface of a perfectly hard (with infinite pinning force) type-II superconductor in the more general field-cooled (FC) case, i.e. when the superconductor goes into the superconducting state having already been exposed to the magnetic field. The difference from the mirror image method, which deals with a perfect type-I superconductor (that completely expels the magnetic field, see the Meissner effect), is that the perfectly hard superconductor screens the variation of the external magnetic field rather than the field itself.
Description
The name originates from the replacement of certain elements in the original layout with imaginary magnets, which replicates the boundary conditions of the problem (see Dirichlet boundary conditions). In a simplest case of the magnetic dipole over the flat superconducting surface (see Fig. 1), the magnetic field, generated by a dipole moved from its initial position (at which the superconductor is cooled to the superconducting state) to a final position and by the screening currents at the superconducting surface, is equivalent to the field of three magnetic dipoles: the real one (1), its mirror image (3), and the mirror image of it in initial (FC) position but with the magnetization vector inversed (2).
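A minimal numerical sketch of this construction (illustrative only and not from the source: the dipole strength, the positions, the SI-unit point-dipole formula and the sign convention chosen for the image moments are all assumptions): the field above the surface is approximated as the sum of the fields of the real dipole, its diamagnetic mirror image at the current position, and the frozen image at the field-cooling position with inverted magnetization.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (SI units)

def dipole_field(r, r_dip, m):
    """Field (tesla) at point r of a point dipole with moment m (A*m^2) located at r_dip."""
    d = r - r_dip
    dist = np.linalg.norm(d)
    return MU0 / (4 * np.pi) * (3 * d * np.dot(m, d) / dist**5 - m / dist**3)

def mirror_pos(r):
    """Mirror a position in the superconductor surface z = 0."""
    return r * np.array([1.0, 1.0, -1.0])

def image_moment(m):
    """Moment of the diamagnetic mirror image (in-plane components kept, normal component inverted)."""
    return m * np.array([1.0, 1.0, -1.0])

# Assumed illustrative values: the dipole is field-cooled at z = 10 mm, then lowered to z = 6 mm.
m_real = np.array([0.0, 0.0, 0.1])    # dipole moment along +z
r_fc   = np.array([0.0, 0.0, 0.010])  # field-cooling (FC) position
r_now  = np.array([0.0, 0.0, 0.006])  # current position

sources = [
    (r_now, m_real),                            # (1) the real dipole
    (mirror_pos(r_now), image_moment(m_real)),  # (3) its mirror image at the current position
    (mirror_pos(r_fc), -image_moment(m_real)),  # (2) the frozen image: mirror image at the FC position, inverted
]

r_obs = np.array([0.002, 0.0, 0.004])  # an observation point above the surface
B = sum(dipole_field(r_obs, pos, mom) for pos, mom in sources)
print(B)
```

Note that while the dipole remains at its field-cooling position the mirror and frozen images cancel, so the sketch reproduces the expected absence of screening currents immediately after cooling.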
Applications
The method has been shown to work for bulk high-temperature superconductors (HTSC), which are characterized by strong pinning, and is used to calculate the interaction in magnet–HTSC systems such as superconducting magnetic bearings, superconducting flywheels, MAGLEV and spacecraft applications, as well as serving as a textbook model for science education.
See also
Method of images
Ideally hard superconductor
Magnetic levitation
Bean's critical state model
High temperature superconductors
References
Demos
Superconducting levitation with strong pinning
Magnetic levitation (YouTube)
Electromagnetism
Condensed matter physics
Superconductivity
Applied and interdisciplinary physics | Frozen mirror image method | Physics,Chemistry,Materials_science,Engineering | 484 |
57,065,322 | https://en.wikipedia.org/wiki/Interpump%20Group | Interpump Group S.p.A. is an Italian company specialized in the production of high and very high pressure water pumps and one of the world's leading groups in the hydraulic sector.
It is listed in the FTSE Italia Mid Cap and FTSE Italia STAR index of the Borsa Italiana.
History
The company was founded in 1977 in the province of Reggio Emilia, in Sant'Ilario d'Enza. Initially production focused on high-pressure pumps and pistons characterized by small dimensions and new types of materials, which led the company to hold 50% of the market within a few years.
Since the 1990s it has expanded its areas of interest by acquiring other companies in the professional cleaning machine and electric motor sectors.
In 1996 it was listed on the stock exchange, and from the following year it began to enter the hydraulic sector through targeted acquisitions of companies in that sector.
In 2005 the professional cleaning machine branch was sold, following a repositioning towards more technological sectors.
Group companies
Water sector: Interpump Group S.p.A. (Bertoli; Pratissoli Pompe); General Pump Inc.; Hammelmann G.m.b.H.; Inoxihp S.r.l.; Inoxpa S.A.; NLB Corporation Inc.
Hydraulic sector: Walvoil S.p.A. (Galtech; Hydrocontrol); Avi S.r.l.; Contarini S.r.l.; American Mobile Power Inc.; Hydroven S.r.l.; I.M.M. Hydraulics S.p.A.; Interpump Hydraulics S.p.A. (HS Penta; Hydrocar; Modenflex Hydraulics; PZB); Mega Pacific New Zealand; Mega Pacific Pty Ltd; Muncie Power Products, Inc.; Oleodinamica Panni S.r.l. (Cover); Takarada Industria e Comercio lta; Teknotubi S.r.l.; Tubiflex S.p.A.
Manufacturing sites
Interpump is present in the following countries with at least one of the companies in the group:
America: Canada, US, Brazil, Chile
Europe: Italy, France, Spain, England, (Germany), Romania, Bulgaria
Africa: South Africa
Asia: United Arab Emirates, China, India, South Korea
Oceania: Australia, New Zealand
External links
Official site
References
Engineering companies of Italy
Companies based in the Province of Reggio Emilia
Technology companies of Italy
Italian brands
Industrial machine manufacturers
Italian companies established in 1977
Interpump Group S.p.A.
Companies listed on the Borsa Italiana
Companies in the FTSE MIB
Manufacturing companies established in 1977
Hydraulic engineering
Companies of Italy | Interpump Group | Physics,Engineering,Environmental_science | 581 |
46,386,116 | https://en.wikipedia.org/wiki/Cookie%20Jar%20Butte | Cookie Jar Butte is a tower in Kane County, Utah, in the United States with an elevation of . It is located in Padre Bay on the north shore of Lake Powell.
References
External links
Cookie Jar Butte photo: National Park Service
Weather forecast: Cookie Jar Butte
Landforms of Kane County, Utah
Buttes of Utah
Colorado Plateau
Glen Canyon National Recreation Area
Lake Powell | Cookie Jar Butte | Engineering | 74 |
30,006,269 | https://en.wikipedia.org/wiki/LG%20Optimus%207 | The LG Optimus 7 (also known as the LG-E900) is a slate smartphone which runs Microsoft's Windows Phone operating system. The Optimus 7 is part of the first-generation Windows Phone line-up launched in October 2010.
Technical issues
Update issues
Several users reported an error while updating their handsets to Windows Phone 7.5 via Zune which only seemed to affect users with firmware versions 1.0.1.12 and 1.1.2.10. Newer versions of the firmware do not seem to have this problem. Certain users have reported the error has been removed after re-flashing their device ROM via an Authorised LG Support Center or restoring the previous version of their device and updating to Mango again.
Overheating
The Optimus 7 has a tendency to heat to a high temperature when the handset is left to run an application for an extended period of time. This behavior has also been noted during charging. Due to the device's metal battery cover, the handset tends to retain any heat generated.
It has also been reported that the phone can reboot after reaching high temperatures. Upon rebooting, the phone can hang at the LG start-up logo, necessitating the user to reset the device manually.
See also
LG Quantum
Windows Phone
Comparable Devices
LG Quantum
HTC HD7
Samsung Omnia 7
HTC 7 Trophy
Nokia Lumia 520
References
External links
Official LG Optimus 7 homepage
Windows Phone devices
LG Electronics smartphones
Mobile phones introduced in 2010
Discontinued smartphones | LG Optimus 7 | Technology | 319 |
27,660,529 | https://en.wikipedia.org/wiki/Oxidation%20with%20chromium%28VI%29%20complexes | Oxidation with chromium(VI) complexes involves the conversion of alcohols to carbonyl compounds or more highly oxidized products through the action of molecular chromium(VI) oxides and salts. The principal reagents are Collins reagent, PDC, and PCC. These reagents represent improvements over inorganic chromium(VI) reagents such as Jones reagent.
Inventory of Cr(VI)-pyridine and pyridinium reagents
Cr(VI)-pyridine and pyridinium reagents have the advantage that they are soluble in organic solvents as are the alcohol substrates.
One family of reagents employs the complex CrO3(pyridine)2.
Sarett's reagent: a solution of CrO3(pyridine)2 in pyridine. It was popularized for selective oxidation of primary and secondary alcohols to carbonyl compounds.
Collins reagent is a solution of the same CrO3(pyridine)2 but in dichloromethane. The Ratcliffe variant of Collins reagent relates to details of the preparation of this solution, i.e., the addition of chromium trioxide to a solution of pyridine in methylene chloride.
The second family of reagents are salts, featuring the pyridinium cation (C5H5NH+).
pyridinium dichromate (PDC) is the pyridium salt of dichromate, [Cr2O7]2-.
pyridinium chlorochromate (PCC) is the pyridinium salt of [CrO3Cl]−.
These salts are less reactive, more easily handled, and more selective than Collins reagent in oxidations of alcohols. These reagents, as well as other, more exotic adducts of nitrogen heterocycles with chromium(VI), facilitate a number of oxidative transformations of organic compounds, including cyclization to form tetrahydrofuran derivatives and allylic transposition to afford enones from allylic alcohols.
The above reagents represent improvements over the Jones reagent, a solution of chromium trioxide in aqueous sulfuric acid.
Mechanism and stereochemistry
Chromate esters are implicated in these reactions. The chromate ester decomposes to the aldehyde or carbonyl by transfer of an alpha proton. Large kinetic isotope effects (kH/kD) are observed.
Oxidative annulation of alkenols to form six-membered rings may be accomplished with PCC. This process is postulated to occur via initial oxidation of the alcohol, attack of the alkene on the new carbonyl, then re-oxidation to a ketone. Double-bond isomerization may occur upon treatment with base as shown below.
An important process mediated by chromium(VI)-amines is the oxidative transposition of tertiary allylic alcohols to give enones. The mechanism of this process likely depends on the acidity of the chromium reagent. Acidic reagents such as PCC may cause ionization and recombination of the chromate ester (path A), while the basic reagents (Collins) likely undergo direct allylic transposition via sigmatropic rearrangement (path B).
Oxidative cyclizations of olefinic alcohols to cyclic ethers may occur via [3+2], [2+2], or epoxidation mechanisms. Insight into the mechanism is provided by structure–reactivity studies, which implicate direct epoxidation by the chromate ester. Subsequent epoxide opening and release of chromium leads to the observed products.
Scope and limitations
Buffering agents may be used to prevent acid-labile protecting groups from being removed during chromium(VI)-amine oxidations. However, buffers will also slow down oxidative cyclizations, leading to selective oxidation of alcohols over any other sort of oxidative transformation. Citronellol, for instance, which cyclizes to pugellols in the presence of PCC, does not undergo cyclization when buffers are used.
Oxidative cyclization can be used to prepare substituted tetrahydrofurans. Cyclization of dienols leads to the formation of two tetrahydrofuran rings in a syn fashion.
Enones can be synthesized from tertiary allylic alcohols through the action of a variety of chromium(VI)-amine reagents, in a reaction known as the Babler oxidation. The reaction is driven by the formation of a more substituted double bond. (E)-Enones form in greater amounts than (Z) isomers because of chromium-mediated geometric isomerization.
Suitably substituted olefinic alcohols undergo oxidative cyclization to give tetrahydrofurans. Further oxidation of these compounds to give tetrahydropyranyl carbonyl compounds then occurs.
In addition to the limitations described above, chromium(VI) reagents are often unsuccessful in the oxidation of substrates containing heteroatoms (particularly nitrogen). Coordination of the heteroatoms to chromium (with displacements of the amine ligand originally attached to the metal) leads to deactivation and eventual decomposition of the oxidizing agent.
Comparison with other methods
Methods employing dimethyl sulfoxide (the Swern and Moffatt oxidations) are superior to chromium(VI)-amines for oxidations of substrates with heteroatom functionality that may coordinate to chromium. Dess-Martin periodinane (DMP) offers the advantages of operational simplicity, a lack of heavy metal byproducts, and selective oxidation of complex, late-stage synthetic intermediates. Additionally, both DMP and manganese dioxide (MnO2) can be used to oxidize allylic alcohols to the corresponding enones without allylic transposition. When allylic transpositions is desired, however, chromium(VI)-amine reagents are unrivaled.
Catalytic methods employing cheap, clean terminal oxidants in conjunction with catalytic amounts of chromium reagents produce only small amounts of metal byproducts. However, undesired side reactions mediated by stoichiometric amounts of the terminal oxidant may occur.
Historic references
Poos, G. I.; Arth, G. E.; Beyler, R. E.; Sarrett, L. H. J. Am. Chem. Soc., 1953, 75, 422.
References
Organic oxidation reactions | Oxidation with chromium(VI) complexes | Chemistry | 1,426 |
69,080,137 | https://en.wikipedia.org/wiki/Izhar%20Bar-Gad | Izhar Bar-Gad is a full professor at the Leslie and Susan Gonda Brain Research Center at Bar-Ilan University. Bar-Gad is a researcher in the field of neurophysiology and neural computation. His main areas of research are information processing in the basal ganglia in a normal state and in various pathologies, such as Parkinson's disease and Tourette's syndrome.
Biography
He was born in Rehovot and raised in Pretoria, South Africa, and Qiryat Gat, Israel. He enlisted in the Israel Defense Forces in 1989, serving in the Israeli Intelligence Corps, and was discharged in 1994 with the rank of Captain.
In the years 1994–1997, he worked for Amdocs, as a researcher and project manager in the development of distributed artificial intelligence. From 1997 to 2002, he worked for Sanctum in Israel and later in California as the Chief Technology Officer (CTO).
In 2005, he was appointed a lecturer at the Center for Brain Research at Bar-Ilan University. In the years 2008–2011 he served as head of the Department of Neuroscience. In 2010 he was appointed a senior lecturer. In 2012 he was appointed associate professor and since 2018 he has been a full professor at Bar-Ilan University.
Izhar Bar-Gad researches information processing in the basal ganglia in the normal condition and in various pathological conditions. His research combines experimental methods in the fields of systems neurophysiology with computational methods from the fields of data science and neural computation.
His early research was mainly concerned with changes in brain computation that occur during Parkinson's disease and its treatments, drug therapies and deep brain stimulation (DBS). His later research deals with the neurophysiological changes that occur in neurodevelopmental disorders, such as Tourette's Syndrome and Attention Deficit Hyperactivity Disorder (ADHD).
Academic education
In 2003, he completed a Ph.D. at the Hebrew University in neural computation, under the guidance of Prof. Hagai Bergman and Prof. Yaakov Ritov. His doctoral dissertation was written on the subject of "Reinforcement driven dimensionality reduction as a model for information processing in the basal ganglia".
References
External links
Website of Izhar Bar Gad Lab
Google Scholar Link
Researchers Brain from Bar-Ilan won an international award sponsored by the Michael J. Fox Foundation for Parkinson's Research
Dr. Izhar Bar Gad, on the Ynet website
People from Giv'at Shmuel
Academic staff of Bar-Ilan University
People of the Military Intelligence Directorate (Israel)
People in information technology
Tel Aviv University alumni
Hebrew University of Jerusalem alumni
Israeli neuroscientists
Israeli officers
1971 births
Living people | Izhar Bar-Gad | Technology | 556 |
43,293,178 | https://en.wikipedia.org/wiki/Sony%20Vaio%20TP%20series | The Sony Vaio TP series was a series of living room PCs part of Sony's Vaio line that sold from 2007 through 2008.
Models
References
TP
Computer-related introductions in 2007
Consumer electronics brands | Sony Vaio TP series | Technology | 44 |
44,048,160 | https://en.wikipedia.org/wiki/Homology-derived%20Secondary%20Structure%20of%20Proteins | HSSP (Homology-derived Secondary Structure of Proteins) is a database that combines structural and sequence information about proteins. This database contains the alignments of all available homologs of proteins from the PDB database. As a result, HSSP is also a database of homology-based implied protein structures.
See also
Protein Data Bank (PDB)
STING
References
External links
HSSP
Protein databases
Protein structure | Homology-derived Secondary Structure of Proteins | Chemistry | 87 |
4,988,544 | https://en.wikipedia.org/wiki/Omega1%20Aquarii | {{DISPLAYTITLE:Omega1 Aquarii}}
Omega1 Aquarii, Latinized from ω1 Aquarii, is the Bayer designation for a binary star in the equatorial constellation of Aquarius. With an apparent visual magnitude of 4.96, this star is faintly visible to the naked eye from the suburbs. The distance to this star can be estimated from the parallax as approximately .
The stellar classification of this star is A7 IV, matching a subgiant star. It is spinning rapidly with a projected rotational velocity of 105 km/s. The star is about 600 million years old and is radiating 15 times the Sun's luminosity. It has 1.9 times the mass of the Sun and 2.4 times the Sun's radius. Previously thought to be a single star, in 2022 it was discovered to have a smaller companion, making it a binary star. The secondary star has a projected separation of about 1 astronomical unit away from the primary star.
References
External links
Image Omega1 Aquarii
A-type subgiants
Aquarius (constellation)
Binary stars
Aquarii, Omega1
BD-15 6471
Aquarii, 102
222345
116758
8968 | Omega1 Aquarii | Astronomy | 254 |
58,622,954 | https://en.wikipedia.org/wiki/Aspergillus%20corrugatus | Aspergillus corrugatus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1976. It has been isolated from soil in Thailand. It has been reported to produce asperthecin, emecorrugatin A, emecorrugatin B, sterigmatocystin, and norsolorinic acid.
Growth and morphology
A. corrugatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
corrugatus
Fungi described in 1976
Fungus species | Aspergillus corrugatus | Biology | 158 |
2,239,396 | https://en.wikipedia.org/wiki/Smartwrap | Smartwrap is an ultra-thin polymer-based material made by James Timberlake and Stephen Kieran of Philadelphia architecture firm KieranTimberlake.
The compound consists of substrate and printed and laminated layers that have been roll-coated into a single film. The resulting film has the ability to alter look and color, as well as to supply light and electricity, shelter, and govern internal temperatures.
References
Technical fabrics | Smartwrap | Physics | 86 |
55,756,466 | https://en.wikipedia.org/wiki/Judy%20Green%20%28mathematician%29 | Judith (Judy) Green (born 1943) is an American logician and historian of mathematics who studies women in mathematics. She is a founding member of the Association for Women in Mathematics; she has also served as its vice president, and as the vice president of the American Association of University Professors.
Education and career
Green earned her bachelor's degree at Cornell University. She completed a master's degree at Yale University and a Ph.D. at the University of Maryland, College Park. Her dissertation, supervised by Carol Karp and finished in 1972, was Consistency Properties for Uncountable Finite-Quantifier Languages.
Green was elected an AMS Member at Large in 1975 and served for three years until 1977. She belonged to the faculty of Rutgers University before moving to Marymount University in 1989. After retiring from Marymount in 2007, she became a volunteer at the National Museum of American History.
Book
With Jeanne LaDuke, she wrote Pioneering Women in American Mathematics: The Pre-1940 PhD’s (American Mathematical Society and London Mathematical Society, 2009). This was a biographical study of the first women in the U.S. to earn doctorates in mathematics.
Recognition
She is part of the 2019 class of fellows of the Association for Women in Mathematics.
References
1943 births
Place of birth missing (living people)
Living people
20th-century American mathematicians
Mathematical logicians
Women logicians
American historians of mathematics
Cornell University alumni
Yale University alumni
University of Maryland, College Park alumni
Rutgers University faculty
Marymount University faculty
Fellows of the Association for Women in Mathematics
20th-century American women mathematicians
21st-century American women | Judy Green (mathematician) | Mathematics | 326 |
59,630,222 | https://en.wikipedia.org/wiki/IBM%20Q%20System%20One | IBM Quantum System One is the first circuit-based commercial quantum computer, introduced by IBM in January 2019.
This integrated quantum computing system is housed in an airtight borosilicate glass cube that maintains a controlled physical environment. Each face of the cube is wide and tall. A cylindrical protrusion from the center of the ceiling is a dilution refrigerator, containing a 20-qubit transmon quantum processor. It was tested for the first time in the summer of 2018, for two weeks, in Milan, Italy.
IBM Quantum System One was developed by IBM Research, with assistance from the Map Project Office and Universal Design Studio. CERN, ExxonMobil, Fermilab, Argonne National Laboratory and Lawrence Berkeley National Laboratory are among the clients signed up to access the system remotely.
From April 6 to May 31, 2019, the Boston Museum of Science hosted an exhibit featuring a replica of the IBM Quantum System One.
On June 15, 2021, IBM deployed the first unit of Quantum System One in Germany at its headquarters in Ehningen. On April 5, 2024, IBM unveiled a Quantum System One at the Rensselaer Polytechnic Institute, the first IBM quantum system on a university campus.
See also
IBM Eagle
IBM Quantum Platform
Timeline of quantum computing and communication
Superconducting quantum computing
Qiskit
References
External links
Official website
Quantum computing
Computer-related introductions in 2019
IBM computers | IBM Q System One | Technology | 283 |
19,928,041 | https://en.wikipedia.org/wiki/Effects%20of%20climate%20change%20on%20plant%20biodiversity | There is an ongoing decline in plant biodiversity, just like there is ongoing biodiversity loss for many other life forms. One of the causes for this decline is climate change. Environmental conditions play a key role in defining the function and geographic distributions of plants. Therefore, when environmental conditions change, this can result in changes to biodiversity. The effects of climate change on plant biodiversity can be predicted by using various models, for example bioclimatic models.
Habitats may change due to climate change. This can cause non-native plants and pests to impact native vegetation diversity. Therefore, the native vegetation may become more vulnerable to damage.
Another example are wildfires: if they become more intense due to climate change, this may result in more severe burn conditions and shorter burn intervals. This can threaten the biodiversity of native vegetation.
Direct impacts
Changing climatic variables relevant to the function and distribution of plants include increasing concentrations (see CO2 fertilization effect), increasing global temperatures, altered precipitation patterns, and changes in the pattern of extreme weather events such as cyclones, fires or storms.
Because individual plants and therefore species can only function physiologically, and successfully complete their life cycles under specific environmental conditions (ideally within a subset of these), changes to climate are likely to have significant impacts on plants from the level of the individual right through to the level of the ecosystem or biome.
Effects of temperature
One common hypothesis among scientists is that the warmer an area is, the higher the plant diversity. This hypothesis can be observed in nature, where higher plant biodiversity is often located at certain latitudes (which often correlates with a specific climate/temperature). Plant species in montane and snowy ecosystems are at greater risk for habitat loss due to climate change. The effects of climate change are predicted to be more severe in mountains of northern latitude. Heat and drought as a result of climate change has been found to severely impact tree mortality rates, putting forest ecosystems at high risk.
Changes in distributions
If climatic factors such as temperature and precipitation change in a region beyond the tolerance of a species phenotypic plasticity, then distribution changes of the species may be inevitable. There is already evidence that plant species are shifting their ranges in altitude and latitude as a response to changing regional climates. Yet it is difficult to predict how species ranges will change in response to climate and separate these changes from all the other man-made environmental changes such as eutrophication, acid rain and habitat destruction.
When compared to the reported past migration rates of plant species, the rapid pace of current change has the potential to not only alter species distributions, but also render many species as unable to follow the climate to which they are adapted. The environmental conditions required by some species, such as those in alpine regions may disappear altogether. The result of these changes is likely to be a rapid increase in extinction risk. Adaptation to new conditions may also be of great importance in the response of plants.
Predicting the extinction risk of plant species is not easy however. Estimations from particular periods of rapid climatic change in the past have shown relatively little species extinction in some regions, for example. Knowledge of how species may adapt or persist in the face of rapid change is still relatively limited.
It is now clear that the loss of some species would be harmful to humans, because those species would stop providing ecosystem services. Some species have unique characteristics that cannot be replaced by any other.
The distributions of many plant species are expected to narrow under climate change. Climate change can also affect areas such as the wintering and breeding grounds of birds. Migratory birds use wintering and breeding grounds to feed and recover after long migrations, so if these areas are damaged by climate change the birds will eventually be affected as well.
Lowland forests shrank during the last glacial period, and the remaining patches became islands dominated by drought-resistant plants. These small refugia also harboured many shade-dependent plants. As an example, the dynamics of calcareous grassland have been significantly impacted by climatic factors.
Changes in the suitability of a habitat for a species drive distributional changes by not only changing the area that a species can physiologically tolerate, but how effectively it can compete with other plants within this area. Changes in community composition are therefore also an expected product of climate change.
Changes in life-cycles
Plants typically reside in locations that are beneficial to their life histories. The timing of phenological events such as flowering and leaf production is often related to environmental variables, including temperature, which can be altered by climate change. Changing environments are therefore expected to lead to changes in life-cycle events, and these have been recorded for many species of plants; as a result, many plant species are considered to be adequate indicators of climate change. These changes have the potential to lead to asynchrony between species, or to change competition between plants. Climate-driven mismatches in timing can disrupt the relationship between plants and their insect pollinators, putting both populations at risk. Flowering times in British plants, for example, have changed, leading to annual plants flowering earlier than perennials, and insect-pollinated plants flowering earlier than wind-pollinated plants, with potential ecological consequences. Other observed effects include the lengthening of growing seasons of certain agricultural crops such as wheat and maize. A recently published study used data recorded by the writer and naturalist Henry David Thoreau to confirm effects of climate change on the phenology of some species in the area of Concord, Massachusetts. Warmer winters are another driver of life-cycle change and can be accompanied by changes such as summer rainfall or summer drought.
Ultimately, climate change can affect the phenology and interactions of many plant species, and depending on its effect, can make it difficult for a plant to be productive.
Indirect impacts
All species are likely to be directly impacted by the changes in environmental conditions discussed above, and also indirectly through their interactions with other species. While direct impacts may be easier to predict and conceptualise, it is likely that indirect impacts are equally important in determining the response of plants to climate change. A species whose distribution changes as a direct result of climate change may invade the range of another species or be invaded, for example, introducing a new competitive relationship or altering other processes such as carbon sequestration.
The range of a symbiotic fungi associated with plant roots (i.e., mycorrhizae) may directly change as a result of altered climate, resulting in a change in the plant's distribution.
Extinction risks
Challenges of modeling future impacts
Predicting the effects that climate change will have on plant biodiversity can be achieved using various models; bioclimatic models are the most commonly used.
Improvement of models is an active area of research, with new models attempting to take factors such as life-history traits of species or processes such as migration into account when predicting distribution changes; though possible trade-offs between regional accuracy and generality are recognised.
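To make the idea of a bioclimatic (climate-envelope) model concrete, the following minimal sketch classifies sites as suitable or unsuitable purely from the temperature range spanned by a species' current occurrences; the species data, the temperatures and the +2 °C scenario are invented for illustration and are far simpler than the models used in practice.

```python
import numpy as np

# Invented mean annual temperatures (deg C) at sites where the species is
# currently observed, and at candidate sites across a region.
presence_temps = np.array([6.1, 7.4, 5.8, 6.9, 7.0])
site_temps = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0])

# Simplest possible "climate envelope": the species is assumed to tolerate
# only the temperature range spanned by its current occurrences.
t_min, t_max = presence_temps.min(), presence_temps.max()

def suitable(temps, warming=0.0):
    """Boolean mask of sites still inside the envelope after a uniform warming."""
    shifted = temps + warming
    return (shifted >= t_min) & (shifted <= t_max)

print("suitable today:", int(suitable(site_temps).sum()), "of", site_temps.size, "sites")
print("suitable at +2 C:", int(suitable(site_temps, warming=2.0).sum()), "of", site_temps.size, "sites")
```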
Climate change is also predicted to interact with other drivers of biodiversity change, such as habitat destruction and fragmentation or the introduction of foreign species. These threats may act in synergy to increase extinction risk beyond that seen in periods of rapid climate change in the past.
See also
CO2 fertilization effect
Decline in insect populations
Effects of climate change on biomes
Mycorrhizae and changing climate
References
External links
The Millennium Ecosystem Assessment, including discussion of the effects of climate change on biodiversity (2005)
Biodiversity
Effects of climate change
Botany
Plant ecology | Effects of climate change on plant biodiversity | Biology | 1,508 |
2,297,912 | https://en.wikipedia.org/wiki/Mueller%20calculus | Mueller calculus is a matrix method for manipulating Stokes vectors, which represent the polarization of light. It was developed in 1943 by Hans Mueller. In this technique, the effect of a particular optical element is represented by a Mueller matrix—a 4×4 matrix that is an overlapping generalization of the Jones matrix.
Introduction
Disregarding coherent wave superposition, any fully polarized, partially polarized, or unpolarized state of light can be represented by a Stokes vector (S); and any optical element can be represented by a Mueller matrix (M).
If a beam of light is initially in the state S and then passes through an optical element M and comes out in a state S′, then it is written
S′ = M S.
If a beam of light passes through optical element M1 followed by M2 then M3 it is written
S′ = M3 (M2 (M1 S)) ;
given that matrix multiplication is associative it can be written
S′ = M3 M2 M1 S.
Matrix multiplication is not commutative, so in general
M3 M2 M1 S ≠ M1 M2 M3 S.
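The bookkeeping above is easy to check numerically. The sketch below applies two ideal polarizers to unpolarized light with NumPy; the explicit matrices are the standard textbook forms of elements listed later in this article, supplied here as assumptions rather than taken from the text.

```python
import numpy as np

# Standard textbook Mueller matrices for two ideal elements.
M_H  = 0.5 * np.array([[1, 1, 0, 0],   # linear polarizer, horizontal transmission
                       [1, 1, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])
M_45 = 0.5 * np.array([[1, 0, 1, 0],   # linear polarizer, +45 degree transmission
                       [0, 0, 0, 0],
                       [1, 0, 1, 0],
                       [0, 0, 0, 0]])

S_in = np.array([1.0, 0.0, 0.0, 0.0])  # Stokes vector of unit-intensity unpolarized light

# Light hits M_H first and M_45 second, so the matrices compose right-to-left.
S_out = M_45 @ M_H @ S_in
print(S_out)             # [0.25, 0.  , 0.25, 0.  ] : +45 degree polarized output

# Swapping the elements gives a different output state, because matrix
# multiplication is not commutative.
print(M_H @ M_45 @ S_in)  # [0.25, 0.25, 0.  , 0.  ] : horizontally polarized output
```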
Mueller vs. Jones calculi
With disregard for coherence, light which is unpolarized or partially polarized must be treated using the Mueller calculus, while fully polarized light can be treated with either the Mueller calculus or the simpler Jones calculus. Many problems involving coherent light (such as from a laser) must be treated with Jones calculus, however, because it works directly with the electric field of the light rather than with its intensity or power, and thereby retains information about the phase of the waves.
More specifically, the following can be said about Mueller matrices and Jones matrices:
Stokes vectors and Mueller matrices operate on intensities and their differences, i.e. incoherent superpositions of light; they are not adequate to describe either interference or diffraction effects.
(...)
Any Jones matrix [J] can be transformed into the corresponding Mueller–Jones matrix, M, using the following relation:
M = A (J ⊗ J*) A⁻¹ ,
where * indicates the complex conjugate [sic], [A is:]
A = [[1, 0, 0, 1], [1, 0, 0, −1], [0, 1, 1, 0], [0, i, −i, 0]]
and ⊗ is the tensor (Kronecker) product.
(...)
While the Jones matrix has eight independent parameters [two Cartesian or polar components for each of the four complex values in the 2-by-2 matrix], the absolute phase information is lost in the [equation above], leading to only seven independent matrix elements for a Mueller matrix derived from a Jones matrix.
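As a concrete check of the quoted relation, the following sketch converts the Jones matrix of an ideal horizontal polarizer into its Mueller matrix using NumPy's Kronecker product; the explicit form of A used here is the standard choice for this transformation and is an assumption, since it does not survive in the excerpt above.

```python
import numpy as np

# Standard transformation matrix A used in M = A (J kron J*) A^-1 (assumed form).
A = np.array([[1,  0,   0,  1],
              [1,  0,   0, -1],
              [0,  1,   1,  0],
              [0, 1j, -1j,  0]], dtype=complex)

def jones_to_mueller(J):
    """Convert a 2x2 Jones matrix to the corresponding 4x4 Mueller-Jones matrix."""
    M = A @ np.kron(J, J.conj()) @ np.linalg.inv(A)
    return M.real            # imaginary parts vanish up to rounding error

J_pol_h = np.array([[1, 0],
                    [0, 0]], dtype=complex)   # Jones matrix of a horizontal polarizer

print(np.round(jones_to_mueller(J_pol_h), 3))
# Expected: 0.5 * [[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]]
```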
Mueller matrices
Below are listed the Mueller matrices for some ideal common optical elements:
General expression for reference frame rotation from the local frame to the laboratory frame:
where is the angle of rotation. For rotation from the laboratory frame to the local frame, the sign of the sine terms inverts.
Linear polarizer (horizontal transmission)
The Mueller matrices for other polarizer rotation angles can be generated by reference frame rotation.
Linear polarizer (vertical transmission)
Linear polarizer (+45° transmission)
Linear polarizer (−45° transmission)
General linear polarizer matrix
where is the angle of rotation of the polarizer.
General linear retarder (wave plate calculations are made from this)
where is the phase difference between the fast and slow axis and is the angle of the slow axis.
Quarter-wave plate (fast-axis vertical)
Quarter-wave plate (fast-axis horizontal)
Half-wave plate (fast-axis horizontal and vertical; also, ideal mirror)
Attenuating filter (25% transmission)
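Because the matrices themselves are not reproduced above, the sketch below writes out the widely quoted ideal form of the horizontal polarizer and shows how a polarizer at an arbitrary angle can be generated by reference-frame rotation, as stated earlier in this section; the explicit matrix and the sign convention of the rotation are standard textbook assumptions, not values taken from this article.

```python
import numpy as np

def rotator(theta):
    """Mueller matrix for rotating the reference frame by angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1,  0, 0, 0],
                     [0,  c, s, 0],
                     [0, -s, c, 0],
                     [0,  0, 0, 1]])

M_H = 0.5 * np.array([[1, 1, 0, 0],          # ideal horizontal linear polarizer
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])

def polarizer(theta):
    """Linear polarizer with transmission axis at angle theta, built by frame rotation."""
    return rotator(-theta) @ M_H @ rotator(theta)

# Rotating the horizontal polarizer by +45 degrees reproduces the ideal
# +45 degree polarizer, 0.5 * [[1,0,1,0],[0,0,0,0],[1,0,1,0],[0,0,0,0]].
print(np.round(polarizer(np.pi / 4), 3))
```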
Mueller tensors
The Mueller/Stokes architecture can also be used to describe non-linear optical processes, such as multi-photon excited fluorescence and second harmonic generation. The Mueller tensor can be connected back to the laboratory-frame Jones tensor by direct analogy with Mueller and Jones matrices.
,
where is the rank three Mueller tensor describing the Stokes vector produced by a pair of incident Stokes vectors, and is the 2×2×2 laboratory-frame Jones tensor.
See also
Stokes parameters
Jones calculus
Polarization (waves)
References
Other sources
E. Collett (2005) Field Guide to Polarization, SPIE Field Guides vol. FG05, SPIE .
Eugene Hecht (1987) Optics, 2nd ed., Addison-Wesley .
N. Mukunda and others (2010) "A complete characterization of pre-Mueller and Mueller matrices in polarization optics", Journal of the Optical Society of America A 27(2): 188–199
William Shurcliff (1966) Polarized Light: Production and Use, chapter 8 Mueller Calculus and Jones Calculus, page 109, Harvard University Press.
Polarization (waves)
Matrices | Mueller calculus | Physics,Mathematics | 900 |
2,824,030 | https://en.wikipedia.org/wiki/In-system%20programming | In-system programming (ISP), also called in-circuit serial programming (ICSP), is the ability of some programmable logic devices, microcontrollers, chipsets and other embedded devices to be programmed while installed in a complete system, rather than requiring the chip to be programmed prior to installing it into the system. It also allows firmware updates to be delivered to the on-chip memory of microcontrollers and related processors without requiring specialist programming circuitry on the circuit board, and simplifies design work.
Overview
There is no standard for in-system programming protocols for programming microcontroller devices. Almost all manufacturers of microcontrollers support this feature, but all have implemented their own protocols, which often differ even for different devices from the same manufacturer. Up to 4 pins may be required for implementing a JTAG standard interface. In general, modern protocols try to keep the number of pins used low, typically to 2 pins. Some ISP interfaces manage to achieve the same with just a single pin. Newer ATtiny microcontrollers with UPDI can even reuse that programming pin also as a general-purpose input/output.
The primary advantage of in-system programming is that it allows manufacturers of electronic devices to integrate programming and testing into a single production phase, and save money, rather than requiring a separate programming stage prior to assembling the system. This may allow manufacturers to program the chips in their own system's production line instead of buying pre-programmed chips from a manufacturer or distributor, making it feasible to apply code or design changes in the middle of a production run. The other advantage is that production can always use the latest firmware, and new features as well as bug fixes can be implemented and put into production without the delay occurring when using pre-programmed microcontrollers.
Microcontrollers are typically soldered directly to a printed circuit board and usually do not have the circuitry or space for a large external programming cable to another computer.
Typically, chips supporting ISP have internal circuitry to generate any necessary programming voltage from the system's normal supply voltage, and communicate with the programmer via a serial protocol. Most programmable logic devices use a variant of the JTAG protocol for ISP, in order to facilitate easier integration with automated testing procedures. Other devices usually use proprietary protocols or protocols defined by older standards. In systems complex enough to require moderately large glue logic, designers may implement a JTAG-controlled programming subsystem for non-JTAG devices such as flash memory and microcontrollers, allowing the entire programming and test procedure to be accomplished under the control of a single protocol.
History
Starting in the early 1990s, an important technological evolution took place in the architecture of microcontrollers. At first they were produced in two possible variants: with OTP (one-time programmable) or with EPROM memory. Erasing an EPROM requires exposing the chip to ultraviolet light through a specific window above the package. In 1993 Microchip Technology introduced the first microcontroller with EEPROM memory, the PIC16C84. EEPROM memory can be erased electrically. This feature lowered production costs by removing the erasing window above the package and initiated in-system programming technology: with ISP, the flashing process can be performed directly on the board at the end of the production process. This evolution made it possible to unify the programming and functional-test phases in production environments, and to start preliminary production of boards even if firmware development had not yet been completed, so that bugs could be corrected or changes made at a later time. In the same year, Atmel developed the first microcontroller with flash memory, which is easier and faster to program and has a much longer life cycle than EEPROM memory.
Microcontrollers that support ISP are usually provided with pins used by the serial communication peripheral to interface with the programmer, a flash/EEPROM memory and the circuitry used to supply the voltage necessary to program the microcontroller. The communication peripheral is in turn connected to a programming peripheral which provides commands to operate on the flash or EEPROM memory.
When designing electronic boards for ISP programming, some guidelines must be taken into account to make the programming phase as reliable as possible. Some microcontrollers with a low number of pins share the programming lines with the I/O lines. This can be a problem if the necessary precautions are not taken in the design of the board: components connected to the I/O lines can be damaged during programming. Moreover, it is important to connect the ISP lines to high-impedance circuitry, both to avoid damage to components by the programmer and because the microcontroller often cannot supply enough current to drive the line. Many microcontrollers need a dedicated RESET line to enter Programming Mode. It is necessary to pay attention to the current available for driving this line and to check for watchdogs connected to the RESET line, which can generate an unwanted reset and thus lead to a programming failure. Moreover, some microcontrollers need a higher voltage to enter Programming Mode, and it is therefore necessary to check that this voltage is not attenuated and is not forwarded to other components on the board.
Industrial application
The in-system programming process takes place during the final stage of production and can be performed in two different ways, depending on production volume.
In the first method, a connector on the board is manually connected to the programmer with a cable. Because this solution requires an operator to connect the programmer to each electronic board, it is meant for low production volumes.
The second method uses test points on the board. These are specific areas placed on the printed circuit board, or PCB, that are electrically connected to some of the electronic components on the board. Test points are used to perform functional tests on components mounted on the board and, since they are connected directly to some microcontroller pins, they are very effective for ISP. For medium and high production volumes, using test points is the better solution, since it allows the programming phase to be integrated into an assembly line.
In production lines, boards are placed on a bed of nails called a fixture. Fixtures are integrated, depending on production volume, into semiautomatic or automatic test systems called ATE (automatic test equipment). Fixtures are specifically designed for each board, or at most for a few similar models, and are therefore interchangeable within the test system in which they are installed. Once the board and the fixture are in position, the test system has a mechanism that brings the needles of the fixture into contact with the test points on the board under test. The system is connected to, or directly integrates, an ISP programmer, which programs the device or devices mounted on the board: for example, a microcontroller and/or a serial memory.
Microchip ICSP
For most Microchip microcontrollers, ICSP programming is performed using two pins, clock (PGC) and data (PGD), while a high voltage (12 V) is present on the Vpp/MCLR pin. Low voltage programming (5 V or 3.3 V) dispenses with the high voltage, but reserves exclusive use of an I/O pin. However, for newer microcontrollers, specifically the PIC18F6XJXX/8XJXX microcontroller families from Microchip Technology, entering ICSP mode is a bit different. Entering ICSP Program/Verify mode requires the following three steps:
Voltage is briefly applied to the MCLR (master clear) pin.
A 32-bit key sequence is presented on PGD.
Voltage is reapplied to MCLR.
A separate piece of hardware, called a programmer, is required to connect to an I/O port of a PC on one side and to the PIC on the other side. The features of each major programmer connection type are:
Parallel port - large bulky cable, most computers have only one port and it may be inconvenient to swap the programming cable with an attached printer. Most laptops newer than 2010 do not support this port. Parallel port programming is very fast.
Serial port (COM port) - At one time the most popular method. Serial ports usually lack adequate circuit programming supply voltage. Most computers and laptops newer than 2010 lack support for this port.
Socket (in or out of circuit) - the chip must either be removed from the circuit board, or a clamp must be attached to the chip, making access an issue.
USB cable - Small and light weight, has support for voltage source and most computers have extra ports available. The distance between the circuit to be programmed and the computer is limited by the length of USB cable - it must usually be less than 180 cm. This can make programming devices deep in machinery or cabinets a problem.
ICSP programmers have many advantages, with size, computer port availability, and power source being major features. Due to variations in the interconnect scheme and the target circuit surrounding a micro-controller, there is no programmer that works with all possible target circuits or interconnects. Microchip Technology provides a detailed ICSP programming guide. Many sites provide programming and circuit examples.
PICs are programmed using five signals (a sixth pin 'aux' is provided but not used). The data is transferred using a two-wire synchronous serial scheme; three more wires provide programming and chip power. The clock signal is always controlled by the programmer.
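To make the two-wire scheme concrete, here is a purely illustrative host-side sketch of clocking bits out on PGD while toggling PGC. The gpio_write helper, pin numbers, key value, bit ordering and timing are hypothetical placeholders, not Microchip's specification; the only detail taken from this article is that the programmer drives the clock and the target latches data on the falling edge of PGC.

```python
import time

def gpio_write(pin, level):
    """Hypothetical stub; on real hardware this would wrap a GPIO library."""
    pass

PGC, PGD = 1, 2        # made-up pin identifiers
BIT_DELAY = 1e-6       # made-up per-half-cycle delay in seconds

def send_bits(value, nbits):
    """Clock nbits of value out on PGD, LSB first (ordering assumed here)."""
    for i in range(nbits):
        gpio_write(PGD, (value >> i) & 1)   # present the data bit
        gpio_write(PGC, 1)                  # clock high
        time.sleep(BIT_DELAY)
        gpio_write(PGC, 0)                  # falling edge: target latches the bit
        time.sleep(BIT_DELAY)

# Example: present a 32-bit key sequence on PGD, as in the Program/Verify
# entry procedure described above (0x12345678 is a placeholder, not the real
# key, which is given in the device programming specification).
send_bits(0x12345678, 32)
```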
Signals and pinout
Vpp - Programming mode voltage. This must be connected to the MCLR pin, or the Vpp pin of the optional ICSP port available on some large-pin-count PICs. To put the PIC into programming mode, this line must be in a specified range that varies from PIC to PIC. For 5V PICs, this is always some amount above Vdd, and can be as high as 13.5 V. The 3.3 V only PICs like the 18FJ, 24H, and 33F series use a special signature to enter programming mode and Vpp is a digital signal that is either at ground or Vdd. There is no one Vpp voltage that is within the valid Vpp range of all PICs. In fact, the minimum required Vpp level for some PICs can damage other PICs.
Vdd - This is the positive power input to the PIC. Some programmers require this to be provided by the circuit (circuit must be at least partially powered up), some programmers expect to drive this line themselves and require the circuit to be off, while others can be configured either way (like the Microchip ICD2). The Embed Inc programmers expect to drive the Vdd line themselves and require the target circuit to be off during programming.
Vss - Negative power input to the PIC and the zero volts reference for the remaining signals. Voltages of the other signals are implicitly with respect to Vss.
ICSPCLK - Clock line of the serial data interface. This line swings from GND to Vdd and is always driven by the programmer. Data is transferred on the falling edge.
ICSPDAT - Serial data line. The serial interface is bi-directional, so this line can be driven by either the programmer or the PIC depending on the current operation. In either case this line swings from GND to Vdd. A bit is transferred on the falling edge of PGC.
AUX/PGM - Newer PIC controllers use this pin to enable low voltage programming (LVP). By holding PGM high, the micro-controller will enter LVP mode. PIC micro-controllers are shipped with LVP enabled, so a brand-new chip can be used in LVP mode directly. The only way to change the mode is by using a high voltage programmer. If the micro-controller is programmed with no connection to this pin, the mode is left unchanged.
RJ11 pinout
An industry standard for using RJ11 sockets with an ICSP programmer is supported by Microchip. The illustration represents information provided in their data sheets. However, there is room for confusion. The PIC data sheets show an inverted socket and do not provide a pictorial view of pinouts, so it is unclear what side of the socket Pin 1 is located on. The illustration provided here is untested but uses the phone industry standard pinout (the RJ11 plug/socket was originally developed for wired desktop phones).
References
See also
Device Programmers
Digital electronics
Microcontrollers | In-system programming | Engineering | 2,644 |
39,515,134 | https://en.wikipedia.org/wiki/Mass%20cytometry | Mass cytometry is a mass spectrometry technique based on inductively coupled plasma mass spectrometry and time of flight mass spectrometry used for the determination of the properties of cells (cytometry). In this approach, antibodies are conjugated with isotopically pure elements, and these antibodies are used to label cellular proteins. Cells are nebulized and sent through an argon plasma, which ionizes the metal-conjugated antibodies. The metal signals are then analyzed by a time-of-flight mass spectrometer. The approach overcomes limitations of spectral overlap in flow cytometry by utilizing discrete isotopes as a reporter system instead of traditional fluorophores which have broad emission spectra.
Commercialization
Tagging technology and instrument development occurred at the University of Toronto and DVS Sciences, Inc. CyTOF (cytometry by time of flight) was initially commercialized by DVS Sciences in 2009. In 2014, Fluidigm acquired DVS Sciences to become a reference company in single-cell technology. In 2022 Fluidigm received a capital infusion and changed its name to Standard BioTools. The CyTOF, CyTOF2, Helios (CyTOF3) and CyTOF XT (4th generation) have been commercialized to date. Fluidigm sells a variety of commonly used metal-antibody conjugates, and an antibody conjugation kit.
Imaging Mass Cytometry (IMC)
Imaging mass cytometry (IMC) is a relatively new imaging technique that emerged from the previously available CyTOF technology (cytometry by time of flight) and combines mass spectrometry with UV laser ablation to generate pseudo-images of tissue samples. This approach adds spatial resolution to the data, which enables simultaneous analysis of multiple cell markers at subcellular resolution and of their spatial distribution in tissue sections. The IMC approach, in the same way as CyTOF, relies on detection of metal-tagged antibodies using time-of-flight mass spectrometry, allowing for quantification of up to 40 markers simultaneously.
Data analysis
CyTOF mass cytometry data is recorded in tables that list, for each cell, the signal detected per channel, which is proportional to the number of antibodies tagged with the corresponding channel's isotope bound to that cell. These data are formatted as FCS files, which are compatible with traditional flow cytometry software. Due to the high-dimensional nature of mass cytometry data, novel data analysis tools have been developed as well.
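As an illustration of what such a per-cell table looks like, and of a common first processing step, the sketch below builds a small synthetic events-by-channels table and applies the inverse hyperbolic sine (arcsinh) transform widely used on mass cytometry intensities; the channel names, the data and the cofactor of 5 are illustrative conventions, not values from any particular instrument or study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic "one row per cell, one column per isotope channel" table,
# mimicking what is stored in an FCS file after acquisition.
events = pd.DataFrame({
    "CD45_89Y":  rng.exponential(50, size=1000),
    "CD3_170Er": rng.exponential(20, size=1000),
    "DNA_191Ir": rng.normal(300, 40, size=1000),
})

# An arcsinh transform with a cofactor (5 is a common convention for CyTOF
# data) compresses the heavy right tail of ion-count intensities before
# gating or clustering.
cofactor = 5.0
transformed = np.arcsinh(events / cofactor)

# A crude example "gate": keep events with a plausible DNA signal.
gated = transformed[events["DNA_191Ir"] > 150]
print(gated.describe().round(2))
```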
Imaging mass cytometry data analysis has its own specifics owing to the different nature of the data obtained. In terms of data analysis, both IMC and CyTOF generate large datasets with high dimensionality that require specialized computational methods for analysis. However, data generated by IMC can be more challenging to analyze due to additional data complexity and the need for tools and pipelines specific to digital image analysis, whereas the data generated by CyTOF is generally analyzed using conventional flow cytometry software. A comprehensive overview of IMC data analysis techniques has been given by Milosevic.
Advantages and disadvantages
Advantages include minimal overlap between metal signals, meaning the instrument is theoretically capable of detecting 100 parameters per cell; entire cell-signaling networks can be inferred without reliance on prior knowledge; and one well-constructed experiment produces large amounts of data.
Disadvantages of CyTOF include a practical flow rate of around 500 cells per second, versus several thousand in flow cytometry, and currently available reagents limit the instrument to around 50 parameters per cell. Additionally, mass cytometry is a destructive method, and cells cannot be sorted for further analysis. In the case of IMC, the resolution of the data is relatively low (1 μm²/pixel), the technique is likewise destructive, data acquisition is slow, and it requires specialized, expensive equipment and expertise.
Applications
Mass cytometry has research applications in medical fields including immunology, hematology, and oncology. It has been used in studies of hematopoiesis, cell cycle, cytokine expression, and differential signaling responses.
MC has been used in various research fields, such as cancer biology, immunology, and neuroscience, to provide a more comprehensive understanding of tissue architecture and cellular interactions.
References
Scientific techniques
Mass spectrometry
Analytical chemistry
Cell biology
Clinical pathology
Flow cytometry
Laboratory techniques | Mass cytometry | Physics,Chemistry,Biology | 937 |
3,444,238 | https://en.wikipedia.org/wiki/Comparison%20of%20assemblers | This is an incomplete comparison of assemblers. Some assemblers are components of a compiler system for a high-level programming language and may have limited or no usable functionality outside of the compiler system. Some assemblers are hosted on the target processor and operating system, while other assemblers (cross-assemblers) may run under an unrelated operating system or processor. For example, assemblers for embedded systems are not usually hosted on the target system since it would not have the storage and terminal I/O to permit entry of a program from a keyboard. An assembler may have a single target processor or may have options to support multiple processor types.
As part of a compiler suite
GNU Assembler (GAS): GPL: many target instruction sets, including ARM architecture, Atmel AVR, x86, x86-64, Freescale 68HC11, Freescale v4e, Motorola 680x0, MIPS, PowerPC, IBM System z, TI MSP430, Zilog Z80.
SDAS (fork of ASxxxx Cross Assemblers and part of the Small Device C Compiler project): GPL: several target instruction sets including Intel 8051, Zilog Z80, Freescale 68HC08, PIC microcontroller.
The Amsterdam Compiler Kit (ACK) targets many architectures of the 1980s, including 6502, 6800, 680x0, ARM, x86, Zilog Z80 and Z8000.
LLVM targets many platforms, however its main focus is not machine-dependent code generation; instead a more high-level typed assembly-like intermediate representation is used. Nevertheless for the most common targets the LLVM MC (machine code) project provides an assembler both as an integrated component of the compilers and as an external tool.
Some other self-hosted native-targeted language implementations (like Go, Free Pascal, SBCL) have their own assemblers with multiple targets. They may be used for inline assembly inside the language, or even included as a library, but aren't always suitable for being used outside of their framework - no command-line tool exists, or only the intermediate representation can be used as their input, or the set of supported targets is very limited.
Single target assemblers
6502 assemblers
680x0 assemblers
ARM assemblers
Mainframe Assemblers
POWER, PowerPC, and Power ISA assemblers
x86 assemblers
Part of the MINIX 3 source tree, but without obvious development activity.
Developed by Interactive Systems Corporation in 1986 when they ported UNIX System V to Intel iAPX286 and 80386 architectures. Archetypal of AT&T syntax because it was used as a reference for GAS. Still used for The SCO Group's products, UnixWare and OpenServer.
Active, supported, but unadvertised.
Part of the C++Builder Tool Chain, but not sold as a stand-alone product, or marketed since the CodeGear spin-off; Borland was still selling it until then. Version 5.0, the last, is dated 1996.
Turbo Assembler was developed as Turbo Editasm by Uriah Barnett from Speedware Inc (Sacramento, CA) between 1984 and 1987, then later sold to, or marketed by, Borland as their Turbo Assembler.
Last stable version 1.3.0 was released in August 2014, and low maintenance since then: https://github.com/yasm/yasm
Z80 assemblers
Other single target assemblers
Retargetable/cross-assemblers
Notes and references
External links
List of assemblers running on S100 bus hardware, including CP/M hosted assemblers for 8080 and Z80
Assembler
Assemblers | Comparison of assemblers | Technology | 770 |
56,954,329 | https://en.wikipedia.org/wiki/Metallacyclopentanes | In organometallic chemistry, metallacyclopentanes are compounds with the formula LnM(CH2)4 (Ln = ligands, and M = metal). They are a type of metallacycle. Metallacyclopentanes are intermediates in some metal-catalysed reactions in homogeneous catalysis.
Synthesis
Traditionally, metallacyclopentanes are prepared by dialkylation of metal dihalides with 1,4‐bis(bromomagnesio)butane or the related dilithio reagent. The complex Ni(bipyridine)C4H8 is prepared by the oxidative addition of 1,4-dibromobutane to Ni(0) precursors. Metallacyclopentanes also arise via the dimerization of ethylene within the coordination sphere of a low-valence metal center. This reaction is relevant to the catalytic production of butenes and related alkenes.
Structure
Unsubstituted metallacyclopentanes adopt conformations related to cyclopentane itself: open-envelope conformation and a twisted open-envelope structure.
Occurrence
Early examples of metallacyclopentanes come from studies of the Ni-catalyzed linear- and cyclo-dimerization of ethylenes. Linear dimerization proceeds via beta-hydride elimination of the nickelacyclopentane (Ph3P)Ni(CH2)4 whereas cyclodimerization to give cyclobutane proceeds by reductive elimination from the related (Ph3P)2Ni(CH2)4. Another example of a metallacyclopentane is the titanocene derivative Cp2Ti(CH2)4.
Metallacyclopentanes are intermediates in the metal-catalysed dimerization, trimerization, and tetramerization of ethylene to give 1-butene, 1-hexene, and 1-octene, respectively. These compounds are of commercial interest as comonomers, used in the production of polyethylene.
In the evolution of heterogeneous alkene metathesis catalysts, metallacyclopentanes are invoked as intermediates in the formation of metal alkylidenes from ethylene. Thus, metallacyclopentane intermediates are proposed to isomerize to metallacyclobutanes, which can eliminate alkene giving the alkylidene.
See also
Metallole
References
External links
Five-membered rings
Heterocyclic compounds | Metallacyclopentanes | Chemistry | 545 |
24,431,452 | https://en.wikipedia.org/wiki/Leccinum%20manzanitae | Leccinum manzanitae is an edible species of bolete fungus in the family Boletaceae. Described as new to science in 1971, it is commonly known as the manzanita bolete for its usual mycorrhizal association with manzanita trees. Its fruit bodies (mushrooms) have sticky reddish to brown caps up to , and its stipes are up to long and thick. They have a whitish background color punctuated with small black scales known as scabers. Found only in the Pacific Northwest region of the United States and Canada, it is the most common Leccinum species in California. The mushroom is edible, although opinions vary as to its quality. L. manzanitae can be usually distinguished from other similar bolete mushrooms by its large size, reddish cap, dark scabers on a whitish stipe, and association with manzanita and madrone.
Taxonomy
Leccinum manzanitae was first described by the American mycologist Harry Delbert Thiers in 1971, from collections made in San Mateo County, California, the previous year. In that state, it is known as the manzanita bolete because of its close association with manzanita trees. It is classified in subsection Versicolores of the section Leccinum in the genus Leccinum. Closely related species in this section include L. piceinum, L. monticola, L. albostipitatum, and L. versipelle.
Description
The fruit bodies of Leccinum manzanitae are sometimes massive, occasionally reaching weights of several pounds. The cap is in diameter, spherical to convex when young, and broadly convex to flattened or cushion-shaped (pulvinate). The surface of the cap is often shallowly to deeply pitted or reticulate, sticky, and covered with pressed-down hairs that are more conspicuous toward the edge of the cap. Its color is dark red during all stages of development. The cap's flesh is thick, white when first exposed, but slowly and irregularly changing to dark brownish-gray with no reddish intermediate state. The change in color upon bruising or injury is often more pronounced in young specimens.
The tubes comprising the hymenophore are long, with an adnate attachment to the stipe; their color is pale olive when young and darkens when bruised. Pores are up to 1 mm in diameter, angular, and the same color as the tubes. The stipe is long, and thick at the apex, and either club-shaped or swollen in the middle. It is solid (i.e., not hollow), with a dry surface, and covered with small, stiff, granular projections called scabers. The scabers are usually whitish when young, but eventually turn dark brownish-grey with age. The stipe flesh stains a bluish color when bruised, although this reaction is variable and sometimes slow to occur. It has no distinctive taste or odor.
Leccinum manzanitae mushroom produce a cinnamon-brown spore print. Spores are 13–17 by 4–5.5 μm, somewhat elliptical to cylindrical, and tapered on each end (fusoid); their walls are smooth and moderately thick. The spore-bearing cells, the basidia, are 27–32 by 6–9 μm, club-shaped to pear-shaped (pyriform) and four-spored. The cystidia are 23–32 by 4–6 μm, fusoid to club-shaped with narrow, elongated apices. Caulocystidia (found on the stipe surface) are thin-walled, club-shaped to somewhat fusoid, and sometimes end in a sharp point; they measure 35–45 by 9–14 μm. Clamp connections are absent in the hyphae of Leccinum manzanitae. The hyphae of the cap cuticle are arranged in the form of a trichoderm (wherein the outermost hyphae emerge roughly parallel, perpendicular to the cap surface).
Several chemical tests can be used to help confirm the identity of the mushroom: a drop of dilute (3–10%) potassium hydroxide (KOH) solution will turn the tubes pale red, whereas nitric acid (HNO3) on the tubes produces orange-yellow; a solution of iron(II) sulfate (FeSO4) applied to the flesh results in a pale grey color.
Thiers also described the variety L. manzanitae var. angustisporae from Mendocino County. Similar to the main form in appearance and habitat, it has smaller, narrowly elongated spores, typically 3–4 μm wide and 1–2 μm longer.
Edibility
Leccinum manzanitae is edible, and its taste is sometimes rated highly, although others have described the flavor as bland. Drying the mushroom may improve the flavor. One field guide advises caution when selecting this species for the table, as there have been poisonings reported with similar-looking mushrooms found in the Rocky Mountains and Pacific Northwest region of the United States.
Similar species
In the field, Leccinum manzanitae fruit bodies can be usually distinguished from those of other similar bolete species by its large size, reddish cap, dark scabers on a whitish stipe, and association with manzanita and madrone. L. ponderosum also has a dark red sticky cap, but its flesh does not darken upon exposure, and its cap is smooth when young. L. armeniacum also grows with manzanita and madrone, but its cap is more orange.
L. aeneum, known only from California, is another species that associates with manzanita and madrone. It has an orangish cap and whitish scabers on the stipe that do not darken significantly as the mushroom matures. L. insigne, found in coniferous forests with aspens, has a coloration similar to L. manzanitae. The brown-capped L. scabrum is associated with ornamental birch, usually in cultivated landscapes. L. constans, also found exclusively in California, is paler, and does not undergo color changes when the cut flesh is exposed to air; it is found near madrone in coastal regions. The species L. largentii, found in northern regions of the West Coast, has a dry cap with a fibrillose to scaly edge, dark olive pores, and densely arranged scabers on the stipe. It associates with toyon (Heteromeles arbutifolia).
Habitat and distribution
Leccinum manzanitae is a mycorrhizal species. Its fruit bodies grow singly to scattered in soil under madrone and manzanita. Known to occur only in North America, it is commonly found from central California to southern Oregon, but has also been reported further north in Washington and British Columbia (Canada). Thiers considered it the most abundant Leccinum in California.
See also
List of Leccinum species
List of North American boletes
References
External links
Images @ Mushroom Observer
manzanitae
Edible fungi
Fungi described in 1971
Fungi of North America
Taxa named by Harry Delbert Thiers
Fungus species | Leccinum manzanitae | Biology | 1,502 |
21,714,705 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Science%20Education%20and%20Research%2C%20Bhopal | Indian Institute of Science Education and Research, Bhopal (IISERB or IISER Bhopal) is an autonomous research institute in Bhauri, Bhopal district, Madhya Pradesh, India. It was established by the Ministry of Education (India), Government of India in 2008 to integrate research in fundamental science with higher education at the undergraduate and graduate levels, with equal emphasis on research and on education in science. It is an autonomous institution awarding its own degrees.
History
Indian Institutes of Science Education and Research (IISERs) were created in 2006 through a proclamation of Ministry of Human Resource Development, Government of India, under the category of institutes of national importance, to promote quality education and research in basic sciences. Soon after the announcement, two of these institutes at Pune and Kolkata, respectively, were started in 2006. This was followed by institutes at Mohali (2007), Bhopal and Thiruvananthapuram (2008), Tirupati (2015) and Berhampur (2016). Each IISER is a degree granting autonomous institution with a prime focus to integrate science, education and research.
Departments
Natural Sciences Stream
Department of Biological Sciences
Department of Chemistry
Department of Earth and Environmental Sciences
Department of Mathematics
Department of Physics
Engineering Sciences Stream
Department of Chemical Engineering
Department of Data Science and Engineering
Department of Electrical Engineering and Computer Science
Humanities and Social Sciences Stream
Department of Economic Sciences
Department of Humanities and Social Sciences
Academics
Academic Programmes
IISER Bhopal offers Bachelor of Science - Master of Science (BS-MS) Dual Degree, Bachelor of Science (BS), Master of Science (M.Sc.), Integrated PhD and PhD programmes.
Bachelor of Science – Master of Science (BS-MS) Dual Degree
The five year BS-MS (Dual Degree) programme is offered in Biological Sciences, Chemistry, Mathematics, Physics, and Earth & Environmental Sciences (EES). The institute also offers BS-MS (Dual Degree) to students from Engineering Sciences disciplines and Economic Sciences after spending four years in the BS programme.
The first year of the programme consists of mandatory, common courses (core courses) from Natural and Engineering Sciences as well as interdisciplinary courses from Humanities and Social Sciences. The second year of the programme enables students to choose three pre-major disciplines that guide them to make an informed choice of major/minor discipline in the third year. From the third year onwards, students register for professional courses consisting of discipline-dependent mandatory and elective courses. During the final year, students are required to register for an MS thesis with a faculty supervisor, relevant to their major discipline.
In addition to majoring in one of the disciplines, students can also minor in another discipline by fulfilling criteria specified by individual departments.
The Institute encourages and rewards academic excellence exhibited by its students. To this end, the Professor C. N. R. Rao Education Foundation Prize, with a prize amount of 5000/- per semester, is awarded to the student scoring the highest CPI during the first year (first and second semesters) of the BS-MS (Dual Degree) programme.
In addition, the Institute awards the President's Gold Medal for the best academic performance in the graduating class in all disciplines of the BS-MS programme. Proficiency Medals are awarded for the best academic performance in each discipline of the BS-MS programme in the graduating class. The Director's Gold Medal is awarded for outstanding all-round achievement and leadership in the graduating class amongst all disciplines.
Bachelor of Science (BS)
The BS programme consists of core courses in basic sciences and introductory courses in Economics/Engineering Sciences during the first year. A fixed set of courses are to be taken in the second year for Engineering Sciences, while students of Economics Sciences can choose multiple courses (open electives) from other departments. The next two years are dedicated to specialization in the Major disciplines offered by their respective stream. The three major disciplines offered in Engineering Sciences are:
Chemical Engineering
Electrical Engineering and Computer Science
Data Science and Engineering
The programme integrates classroom learning with research. The training during the programme enables students to pursue careers in academia, industry or government organizations.
Master of Science (M.Sc.)
The two-year M.Sc. programme is offered to students with a bachelor's degree in a relevant discipline. The programme is currently offered in the Departments of Biological Sciences, Chemistry and Mathematics, with an aim to integrate classroom learning with research, and provides ample scope for multidisciplinary interactions. The training during the programme enables students to pursue a career in academia, R&D institutes, and science-based industries.
Integrated Doctoral Programme (Integrated PhD)
The Institute offers Integrated PhD programme in Chemistry, Mathematics, and Physics. All admitted students will receive a fellowship of 10,000/- per month. Subsequently, upon successful completion of PhD candidacy requirements, a revised fellowship will be provided as per the Ministry of Education (MoE) norms/guidelines.
Doctoral programme (PhD)
Admission to the doctoral programme is after a master's degree in science. Besides the students of the Integrated Master's programme, postgraduate students with a master's degree in science from other Universities/Institutes are also admitted to the doctoral programme.
The institute also offers PhD programmes in all disciplines. All selected candidates not receiving an external fellowship are awarded an Institute fellowship. Currently, the institute offers PhD programmes in the following disciplines: Biological Sciences, Chemistry, Chemical Engineering, Earth and Environmental Sciences, Electrical Engineering & Computer Science, Mathematics, Physics and English. All doctoral students are also expected to participate in the undergraduate teaching programme of the Institute as a part of their training. The programme involves course work, a qualifying examination, a state-of-the-art seminar, thesis work, an open seminar and a thesis examination, leading to the award of a PhD degree.
Postdoctoral Studies
Postdoctoral studies are possible in Biology, Chemistry, Earth and Environmental Sciences, Mathematics, Physics, Chemical Engineering, Electrical Engineering & Computer Science.
Ranking
Indian Institute of Science Education and Research, Bhopal ranked 78th by the National Institutional Ranking Framework in overall ranking in 2024.
Admissions
BS-MS (Dual Degree)
The process of admissions to all IISERs is managed by the Joint Admissions Committee (JAC), headed by a chairman and having members from all IISERs. Details regarding admissions to all IISERs can be obtained at the IISER admissions website.
Channels of Admission
Students are admitted to the programme through the following channels:
Kishore Vaigyanik Protsahan Yojana (KVPY) Basic Science Stream: Candidates having a valid KVPY fellowship are eligible to apply for admissions. Additional cut off criteria may be applicable.
JEE Advanced was one of the channels for joining the IISERs until 2023, but this channel has since been removed.
State and Central Boards (IISER Aptitude Test): Students who have passed the (10+2) level with the science stream during the current or previous year, with marks equal to or above the cut-off percentage in their respective boards, are eligible to apply.
Indian nationals and students belonging to PIO or OCI category are eligible to apply provided they satisfy the eligibility criteria described above.
Scholarship
KVPY scholars admitted to IISERs would draw fellowship as per KVPY norms. In addition, a limited number of INSPIRE scholarships will be available for candidates admitted through JEE Advanced and SCB channels as per the norms prescribed by DST INSPIRE scheme.
Reservation
IISERs follow Government of India rules regarding reservation for seat allocation in Central institutes of higher education.
Bachelor of Science (BS) programme
IISER Bhopal offers four years BS programme for bright and motivated science students who have passed (10+2) with Mathematics as one of the subjects. The programme integrates classroom learning with research. The training during the programme enables students to pursue careers in academia, industry or government organizations.
BS programme is offered in two broad streams:
Engineering sciences disciplines
After completing a common curriculum in the first two years, students can choose from one amongst the three major disciplines currently being offered:
Chemical engineering
Data science and engineering
Electrical engineering and computer science
The courses in the above disciplines are designed to enable students to pursue careers in the industry, academia, or government organizations.
Humanities and social science discipline (economic sciences)
The first year of the programme consists of core courses in basic sciences and introductory courses in Economics. The next three years are dedicated to specialization in Economic Sciences.
The students pursuing the BS programme can also earn a 'minor' degree in other disciplines after fulfilling the prerequisites laid down by the respective department. After spending four years in the BS programme, students have the option to obtain BS-MS (Dual Degree) by spending an additional year devoted to courses and research thesis.
Integrated PhD (I-PhD)
Admission to the Integrated PhD programme is made once a year during May/June, with the advertisement floated in March/April. Candidates must have a bachelor's degree. A written or oral exam is conducted at the campus. A prospective candidate should have completed a graduate programme (BSc/BTech/B.E.) in a discipline relevant to his/her choice of Integrated PhD programme. Candidates seeking admission in Chemistry should have a valid JAM score. Candidates seeking admission in Physics should have a valid JEST rank.
PhD
Applications are invited twice a year. A prospective candidate should have completed a postgraduate programme (M.S./MSc/MTech/MBBS) in a discipline relevant to his/her choice of PhD programme along with qualifying a national entrance exam. Students should have a valid rank in the Graduate Aptitude Test in Engineering (GATE) and/or have qualified the Council of Scientific and Industrial Research (CSIR)/University Grants Commission (UGC)/National Eligibility Test – Junior Research Fellowship (NET-JRF), or other equivalent examinations. All selected candidates not receiving external fellowship will be awarded an Institute fellowship.
Student life
Students Activity Council
Students Activity Council (SAC) is a union of the students for organisation of extracurricular activities in the Institute and to address their concerns. It is totally managed, organised and maintained by the students themselves. SAC contains 8 different Activity Councils:
Computing and Networking Council.
Cultural Council.
Fine Arts and Literary Council.
Science Council.
Sports Council.
Student Development Council.
Environmental and Social Initiative Council.
Representatives' Council.
Centre for Science and Society
IISER Bhopal hosted the 4th Inter-IISER Sports Meet (IISM 2015); nine institutes participated in the event (six IISERs, IISc, NISER and CBS Mumbai). The institute also organised a Summer Outreach Camp from 15 to 21 May 2022, which roughly 95 students from all over India attended.
IISER Bhopal hosts two annual festivals. Enthuzia is the cultural festival of IISER Bhopal. It was started in 2010. Enthuzia '19 witnessed Zakir Khan, the Indian comedian, as one of the performers. Singularity is the annual science festival of IISER Bhopal. Singularity'16, the third edition, witnessed K. Radhakrishnan as keynote speaker.
References
External links
The IISER System
Universities and colleges in Bhopal
Universities and colleges in Madhya Pradesh
Science education in India
Bhopal, Indian Institutes of Science Education and Research
Research institutes established in 2008
Materials science institutes
2008 establishments in Madhya Pradesh
Research institutes in Bhopal
Education research institutes | Indian Institute of Science Education and Research, Bhopal | Materials_science | 2,294 |
29,967,061 | https://en.wikipedia.org/wiki/Resupinatus%20applicatus | Resupinatus applicatus, commonly known as the smoked oysterling or the black jelly oyster, is a species of fungus in the family Tricholomataceae, and the type species of the genus Resupinatus. First described in 1786 as Agaricus applicatus by August Johann Georg Karl Batsch, it was transferred to Resupinatus by Samuel Frederick Gray in 1821.
Description
The cuplike to convex fruit bodies of the fungus are in diameter, and grayish-blue to grayish-black in color. The dry cap surface is covered with small, fine hairs. The mushrooms have no stem, and have a firm but gelatinous flesh. The mushrooms produce a white spore print.
Habitat and distribution
The fungus is saprobic, and grows on decaying wood. It is widely distributed in North America, Europe, and Australia.
References
External links
Tricholomataceae
Fungi of Australia
Fungi of North America
Fungi of Europe
Taxa named by August Batsch
Fungus species | Resupinatus applicatus | Biology | 204 |
17,055 | https://en.wikipedia.org/wiki/Kotoamatsukami | In Shinto, is the collective name for the first gods which came into existence at the time of the creation of the universe. They were born in Takamagahara, the world of Heaven at the time of the creation. Unlike the later gods, these deities were born without any procreation.
The three deities that first appeared were:
Amenominakanushi - Central Master
Takamimusubi - High Creator
Kamimusuhi - Divine Creator
A bit later, two more deities came into existence:
Umashiashikabihikoji - Energy
Amenotokotachi (天之常立神) - Heaven
The next generation of gods that followed was the Kamiyonanayo, which included Izanagi-no-Mikoto and Izanami-no-Mikoto, the patriarch and matriarch of all other Japanese gods, respectively. Afterward, the Kotoamatsukami "hides away" as hitorigami.
Though the Zōkasanshin (three deities of creation) are thought to be genderless, another theory holds that Kamimusuhi was female and Takamimusubi male, comparing them with water and fire or with yin and yang.
The theologian Hirata Atsutane identified Amenominakanushi as the spirit of the North Star, master of the seven stars of the Big Dipper.
Takamimusubi later reappeared together with Amaterasu as one of the central gods in Takamagahara, and his daughter was the mother of the god Ninigi-no-Mikoto. He also played important roles in the events of the founding of Japan, such as selecting the gods who would accompany Ninigi and sending the Yatagarasu, the three-legged solar crow, to help Emperor Jimmu. Jimmu in turn greatly worshipped him, playing the role of a medium priest and taking on Takamimusubi's identity in the ceremonies before his imperial enthronement. Later, Takamimusubi was worshipped by the Jingi-kan and considered the god of matchmaking. Some Japanese clans, such as the Saeki clan, also claimed descent from this god, and he is also counted as an Imperial ancestor.
As for Kamimusuhi, he (or she) has strong ties with both the Amatsukami (heavenly gods) and the Kunitsukami (earthly gods) of Izumo mythology. Kamimusuhi is also said to have transformed the grains produced by the food goddess Ōgetsuhime (Ukemochi no kami) after she was slain by Amaterasu's angered brother.
See also
Cosmic Man
Japanese creation myth
Creation myth
Japanese mythology
Notes
Creation myths
Japanese deities
Shinto kami
Amatsukami | Kotoamatsukami | Astronomy | 535 |
237,876 | https://en.wikipedia.org/wiki/Deformation%20%28engineering%29 | In engineering, deformation (the change in size or shape of an object) may be elastic or plastic.
If the deformation is negligible, the object is said to be rigid.
Main concepts
Occurrence of deformation in engineering applications is based on the following background concepts:
Displacements are any change in position of a point on the object, including whole-body translations and rotations (rigid transformations).
Deformation is a change in the relative positions of internal points of the object, excluding rigid transformations, causing the body to change shape or size.
Strain is the relative internal deformation, the dimensionless change in shape of an infinitesimal cube of material relative to a reference configuration. Mechanical strains are caused by mechanical stress, see stress-strain curve.
The relationship between stress and strain is generally linear and reversible up to the yield point, and the deformation is elastic. Elasticity in materials occurs when applied stress does not surpass the energy required to break molecular bonds, allowing the material to deform reversibly and return to its original shape once the stress is removed. The constant of proportionality in this linear relationship is known as Young's modulus. Above the yield point, some degree of permanent distortion remains after unloading and is termed plastic deformation. The determination of the stress and strain throughout a solid object is given by the field of strength of materials and for a structure by structural analysis.
In the above figure, it can be seen that the compressive loading (indicated by the arrow) has caused deformation in the cylinder so that the original shape (dashed lines) has changed (deformed) into one with bulging sides. The sides bulge because the material, although strong enough to not crack or otherwise fail, is not strong enough to support the load without change. As a result, the material is forced out laterally. Internal forces (in this case at right angles to the deformation) resist the applied load.
Types of deformation
Depending on the type of material, size and geometry of the object, and the forces applied, various types of deformation may result. The image to the right shows the engineering stress vs. strain diagram for a typical ductile material such as steel. Different deformation modes may occur under different conditions, as can be depicted using a deformation mechanism map.
Permanent deformation is irreversible; the deformation stays even after removal of the applied forces, while the temporary deformation is recoverable as it disappears after the removal of applied forces.
Temporary deformation is also called elastic deformation, while the permanent deformation is called plastic deformation.
Elastic deformation
The study of temporary or elastic deformation in the case of engineering strain is applied to materials used in mechanical and structural engineering, such as concrete and steel, which are subjected to very small deformations. Engineering strain is modeled by infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small.
For some materials, e.g. elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%, thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain. Elastomers and shape memory metals such as Nitinol exhibit large elastic deformation ranges, as does rubber. However, elasticity is nonlinear in these materials.
Normal metals, ceramics and most crystals show linear elasticity and a smaller elastic range.
Linear elastic deformation is governed by Hooke's law, which states:
σ = Eε
where
σ is the applied stress;
E is a material constant called Young's modulus or elastic modulus;
ε is the resulting strain.
This relationship only applies in the elastic range and indicates that the slope of the stress vs. strain curve can be used to find Young's modulus (E). Engineers often use this calculation in tensile tests. The area under this elastic region is known as resilience.
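As a small numerical illustration of using the elastic slope, the sketch below fits a straight line to synthetic stress-strain points in the elastic region to recover Young's modulus, and integrates the curve to estimate the modulus of resilience; the material values are invented for the example.

```python
import numpy as np

# Synthetic elastic-region data for an invented steel-like material:
# E = 200 GPa, with a little measurement noise added to the stress.
E_true = 200e9                                   # Pa
strain = np.linspace(0, 0.002, 21)               # dimensionless
rng = np.random.default_rng(1)
stress = E_true * strain + rng.normal(0, 2e6, strain.size)   # Pa

# Hooke's law says stress = E * strain, so the slope of a straight-line fit
# in the elastic region estimates Young's modulus.
E_fit, _ = np.polyfit(strain, stress, 1)
print(f"estimated E = {E_fit / 1e9:.1f} GPa")

# The area under the elastic part of the curve is the modulus of resilience
# (energy absorbed per unit volume up to the end of the elastic range).
resilience = np.trapz(stress, strain)            # J/m^3
print(f"resilience ~ {resilience / 1e6:.3f} MJ/m^3")
```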
Note that not all elastic materials undergo linear elastic deformation; some, such as concrete, gray cast iron, and many polymers, respond in a nonlinear fashion. For these materials Hooke's law is inapplicable.
Plastic deformation
This type of deformation is not undone simply by removing the applied force. An object in the plastic deformation range, however, will first have undergone elastic deformation, which is undone simply by removing the applied force, so the object will return part way to its original shape. Soft thermoplastics have a rather large plastic deformation range, as do ductile metals such as copper, silver, and gold. Steel does, too, but not cast iron. Hard thermosetting plastics, rubber, crystals, and ceramics have minimal plastic deformation ranges. An example of a material with a large plastic deformation range is wet chewing gum, which can be stretched to dozens of times its original length.
Under tensile stress, plastic deformation is characterized by a strain hardening region and a necking region and finally, fracture (also called rupture). During strain hardening the material becomes stronger through the movement of atomic dislocations. The necking phase is indicated by a reduction in cross-sectional area of the specimen. Necking begins after the ultimate strength is reached. During necking, the material can no longer withstand the maximum stress and the strain in the specimen rapidly increases. Plastic deformation ends with the fracture of the material.
Failure
Compressive failure
Usually, compressive stress applied to bars, columns, etc. leads to shortening.
Loading a structural element or specimen will increase the compressive stress until it reaches its compressive strength. According to the properties of the material, failure modes are yielding for materials with ductile behavior (most metals, some soils and plastics) or rupturing for brittle behavior (geomaterials, cast iron, glass, etc.).
In long, slender structural elements — such as columns or truss bars — an increase of compressive force F leads to structural failure due to buckling at lower stress than the compressive strength.
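For a rough quantitative picture, Euler's classical result for an ideal pin-ended column (an idealization, not a design rule) gives the critical buckling load as

P_\mathrm{cr} = \frac{\pi^2 E I}{L^2}

where E is Young's modulus, I is the minimum area moment of inertia of the cross section, and L is the unsupported length. For a sufficiently slender column the average stress P_\mathrm{cr}/A at buckling can be far below the material's compressive strength.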
Fracture
A break occurs after the material has reached the end of the elastic, and then plastic, deformation ranges. At this point forces accumulate until they are sufficient to cause a fracture. All materials will eventually fracture, if sufficient forces are applied.
Types of stress and strain
Engineering stress and engineering strain are approximations to the internal state that may be determined from the external forces and deformations of an object, provided that there is no significant change in size. When there is a significant change in size, the true stress and true strain can be derived from the instantaneous size of the object.
Engineering stress and strain
Consider a bar of original cross-sectional area A_0 being subjected to equal and opposite forces F pulling at the ends so the bar is under tension. The material is experiencing a stress defined to be the ratio of the force to the cross-sectional area of the bar, as well as an axial elongation:

\sigma = \frac{F}{A_0}, \qquad \varepsilon = \frac{L - L_0}{L_0} = \frac{\Delta L}{L_0}
Subscript 0 denotes the original dimensions of the sample. The SI derived unit for stress is newtons per square metre, or pascals (1 pascal = 1 Pa = 1 N/m2), and strain is unitless. The stress–strain curve for this material is plotted by elongating the sample and recording the stress variation with strain until the sample fractures. By convention, the strain is set to the horizontal axis and stress is set to vertical axis. Note that for engineering purposes we often assume the cross-section area of the material does not change during the whole deformation process. This is not true since the actual area will decrease while deforming due to elastic and plastic deformation. The curve based on the original cross-section and gauge length is called the engineering stress–strain curve, while the curve based on the instantaneous cross-section area and length is called the true stress–strain curve. Unless stated otherwise, engineering stress–strain is generally used.
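As a minimal illustration (with invented force-elongation readings and specimen dimensions), engineering stress and strain follow directly from the original area and gauge length:

    import numpy as np

    # Hypothetical raw tensile-test data (illustrative values only).
    A0 = 50e-6      # original cross-sectional area, m^2
    L0 = 0.050      # original gauge length, m
    force = np.array([0.0, 5.0, 10.0, 15.0, 18.0]) * 1e3            # N
    elongation = np.array([0.0, 0.025, 0.050, 0.300, 0.900]) * 1e-3  # m

    # Engineering stress and strain are referred to the ORIGINAL dimensions.
    eng_stress = force / A0        # Pa
    eng_strain = elongation / L0   # dimensionless

    for s, e in zip(eng_stress, eng_strain):
        print(f"strain = {e:.4f}, stress = {s / 1e6:7.1f} MPa")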
True stress and strain
In the above definitions of engineering stress and strain, two behaviors of materials in tensile tests are ignored:
the shrinking of section area
compounding development of elongation
True stress and true strain are defined differently than engineering stress and strain to account for these behaviors. They are given as

\sigma_\mathrm{T} = \frac{F}{A}, \qquad \varepsilon_\mathrm{T} = \int_{L_0}^{L} \frac{dL}{L} = \ln\frac{L}{L_0}
Here the dimensions are instantaneous values. Assuming the volume of the sample is conserved and deformation happens uniformly,

A_0 L_0 = A L

The true stress and strain can be expressed by engineering stress and strain. For true stress,

\sigma_\mathrm{T} = \frac{F}{A} = \frac{F}{A_0}\,\frac{A_0}{A} = \frac{F}{A_0}\,\frac{L}{L_0} = \sigma(1+\varepsilon)

For the strain,

d\varepsilon_\mathrm{T} = \frac{dL}{L}

Integrate both sides and apply the boundary condition \varepsilon_\mathrm{T} = 0 at L = L_0,

\varepsilon_\mathrm{T} = \ln\frac{L}{L_0} = \ln(1+\varepsilon)
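A small conversion routine makes the relationship explicit; it is a sketch that assumes uniform deformation and constant volume, so it is meaningful only up to the onset of necking (the sample numbers are invented):

    import numpy as np

    def to_true(eng_stress, eng_strain):
        """Convert engineering stress/strain to true stress/strain.

        Uses sigma_T = sigma*(1 + eps) and eps_T = ln(1 + eps), which assume
        uniform deformation and constant volume, so they are valid only up to
        the onset of necking.
        """
        eng_stress = np.asarray(eng_stress, dtype=float)
        eng_strain = np.asarray(eng_strain, dtype=float)
        true_stress = eng_stress * (1.0 + eng_strain)
        true_strain = np.log1p(eng_strain)
        return true_stress, true_strain

    # Example with hypothetical values (stress in MPa).
    ts, te = to_true([300.0, 360.0, 400.0], [0.02, 0.10, 0.20])
    print(ts)   # true stress is larger than engineering stress
    print(te)   # true strain is smaller than engineering strain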
So in a tension test, true stress is larger than engineering stress and true strain is less than engineering strain. Thus, a point on the true stress–strain curve is displaced upwards and to the left relative to the corresponding point on the engineering stress–strain curve. The difference between the true and engineering stresses and strains increases with plastic deformation. At low strains (such as elastic deformation), the difference between the two is negligible. The tensile strength point is the maximal point on the engineering stress–strain curve, but it is not a special point on the true stress–strain curve. Because engineering stress is proportional to the force applied along the sample, the criterion for necking formation can be set as

dF = 0 = \sigma_\mathrm{T}\,dA + A\,d\sigma_\mathrm{T} \quad\Longrightarrow\quad \frac{d\sigma_\mathrm{T}}{d\varepsilon_\mathrm{T}} = \sigma_\mathrm{T}
This analysis explains the nature of the ultimate tensile strength (UTS) point: at the UTS, the work-strengthening effect is exactly balanced by the shrinking of the cross-sectional area.
After the formation of necking, the sample undergoes heterogeneous deformation, so the equations above are no longer valid. The stress and strain at the neck can be expressed as:

\sigma_\mathrm{T} = \frac{F}{A_\mathrm{neck}}, \qquad \varepsilon_\mathrm{T} = \ln\frac{A_0}{A_\mathrm{neck}}

where A_\mathrm{neck} is the instantaneous cross-sectional area at the neck.
An empirical equation is commonly used to describe the relationship between true stress and true strain:

\sigma_\mathrm{T} = K (\varepsilon_\mathrm{T})^n

Here, n is the strain-hardening exponent and K is the strength coefficient. n is a measure of a material's work hardening behavior. Materials with a higher n have a greater resistance to necking. Typically, metals at room temperature have n ranging from 0.02 to 0.5.
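Because the power-law relation is linear in log-log coordinates, K and n are commonly estimated by a straight-line fit; the sketch below uses invented data points from the uniform plastic region of a hypothetical test:

    import numpy as np

    # Hypothetical true stress (MPa) / true strain pairs from the uniform
    # plastic region of a tensile test (illustrative values only).
    true_strain = np.array([0.02, 0.05, 0.10, 0.15, 0.20])
    true_stress = np.array([310.0, 380.0, 450.0, 500.0, 540.0])

    # The power-law relation sigma_T = K * eps_T**n is linear in log-log
    # coordinates: log(sigma_T) = log(K) + n * log(eps_T).
    n, logK = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
    K = np.exp(logK)
    print(f"strain-hardening exponent n ~ {n:.3f}")
    print(f"strength coefficient K ~ {K:.0f} MPa")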
Discussion
Since we disregarded the change of area during deformation above, the true stress and strain curve should be re-derived. For deriving the stress–strain curve, we can assume that the volume change is zero even as the material deforms, i.e. we can assume that:

A_i L_i = A_f L_f

Then, the true stress can be expressed as below:

\sigma_\mathrm{T} = \frac{F}{A_f} = \frac{F}{A_i}\,\frac{A_i}{A_f} = \sigma\,\frac{L_f}{L_i} = \sigma(1+\varepsilon)

Additionally, the true strain \varepsilon_\mathrm{T} can be expressed as below:

\varepsilon_\mathrm{T} = \int_{L_i}^{L_f}\frac{dl}{l} = \ln\frac{L_f}{L_i}

Then, we can express the value in terms of the engineering strain as

\varepsilon_\mathrm{T} = \ln\frac{L_f}{L_i} = \ln(1+\varepsilon)

Thus, we can construct the plot in terms of \sigma_\mathrm{T} and \varepsilon_\mathrm{T}, as in the figure on the right.
Additionally, based on the true stress–strain curve, we can estimate the region where necking starts to happen. Since necking starts to appear after the ultimate tensile stress, where the maximum force is applied, we can express this situation as below:

dF = 0 = \sigma_\mathrm{T}\,dA + A\,d\sigma_\mathrm{T}

so this form can be expressed as below:

\frac{d\sigma_\mathrm{T}}{\sigma_\mathrm{T}} = -\frac{dA}{A} = d\varepsilon_\mathrm{T} \quad\Longrightarrow\quad \frac{d\sigma_\mathrm{T}}{d\varepsilon_\mathrm{T}} = \sigma_\mathrm{T}

It indicates that necking starts to appear where the reduction of area becomes more significant than the increase in stress. The stress then becomes localized to the specific region where the necking appears.
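As a worked consequence of this criterion, if the power-law hardening relation \sigma_\mathrm{T} = K(\varepsilon_\mathrm{T})^n introduced above is assumed, then

\frac{d\sigma_\mathrm{T}}{d\varepsilon_\mathrm{T}} = nK\varepsilon_\mathrm{T}^{\,n-1} = \sigma_\mathrm{T} = K\varepsilon_\mathrm{T}^{\,n} \quad\Longrightarrow\quad \varepsilon_\mathrm{T} = n

so the uniform (pre-necking) true strain equals the strain-hardening exponent, which is one way to see why a larger n delays necking.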
Additionally, we can derive various relations based on the true stress–strain curve.
1) The true stress and strain curve can be expressed by an approximately linear relationship by taking the logarithm of true stress and strain. The relation can be expressed as below:

\sigma_\mathrm{T} = K(\varepsilon_\mathrm{T})^n \quad\Longleftrightarrow\quad \log\sigma_\mathrm{T} = \log K + n\log\varepsilon_\mathrm{T}

where K is the strength (stress) coefficient and n is the strain-hardening coefficient. Usually, the value of n ranges from about 0.02 to 0.5 at room temperature. If n is 1, the relation is linear in strain and the material can be described as perfectly elastic.
2) In reality, stress is also highly dependent on the rate of strain variation. Thus, we can introduce an empirical equation based on the strain-rate variation:

\sigma_\mathrm{T} = K' (\dot{\varepsilon}_\mathrm{T})^m

where K' is a constant related to the material's flow stress, \dot{\varepsilon}_\mathrm{T} is the derivative of strain with respect to time, also known as the strain rate, and m is the strain-rate sensitivity. The value of m is related to the resistance toward necking. Usually, m is in the range of 0–0.1 at room temperature and can be as high as 0.8 when the temperature is increased.
By combining 1) and 2), we can write the combined relation as below:

\sigma_\mathrm{T} = K'' (\varepsilon_\mathrm{T})^n (\dot{\varepsilon}_\mathrm{T})^m

where K'' is a global constant relating strain, strain rate and stress.
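A one-line function is enough to show how the combined relation behaves; the constants below are hypothetical and serve only to illustrate the functional form:

    def flow_stress(true_strain, strain_rate, K=500.0, n=0.2, m=0.05):
        """Combined power-law flow stress sigma = K * eps**n * eps_dot**m.

        K (MPa), n and m are hypothetical material constants chosen only to
        illustrate the functional form; real values come from fitting test data.
        """
        return K * true_strain**n * strain_rate**m

    # Same strain, two strain rates: a positive m raises the flow stress at the
    # faster rate, which is what gives rate-sensitive materials their resistance
    # to necking.
    print(flow_stress(0.1, 1e-3))   # slow test
    print(flow_stress(0.1, 1.0))    # fast test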
3) Based on the true stress–strain curve and its derivative form, we can estimate the strain necessary to start necking. This can be calculated from the intersection between the true stress–strain curve and its derivative curve, i.e. the point where

\frac{d\sigma_\mathrm{T}}{d\varepsilon_\mathrm{T}} = \sigma_\mathrm{T}

as shown in the figure on the right.
This figure also shows the dependency of the necking strain on temperature. In the case of FCC metals, both the stress–strain curve and its derivative are highly dependent on temperature. Therefore, at higher temperatures, necking starts to appear even at lower strain values.
All of these properties indicate the importance of calculating the true stress–strain curve for further analysis of the behavior of materials in suddenly changing environments.
4) A graphical method, the so-called "Considère construction", can help determine whether necking or drawing happens in the sample. Setting \lambda = L/L_0 as the deformation variable, the true stress and strain can be expressed with engineering stress and strain as below:

\sigma_\mathrm{T} = \sigma\,\lambda, \qquad \varepsilon_\mathrm{T} = \ln\lambda

Therefore, the value of engineering stress is given by the slope of the secant line drawn from the origin (\lambda = 0) to the point (\lambda, \sigma_\mathrm{T}) on the true stress versus \lambda curve. By analyzing the shape of the \sigma_\mathrm{T}–\lambda diagram and its secant and tangent lines, we can determine whether the material shows drawing or necking.

In figure (a), the Considère plot is concave upward everywhere. It indicates that there is no yield drop, so the material will fracture before it yields. In figure (b), there is a specific point where the tangent from the origin matches the secant line, at \lambda = \lambda_Y. Beyond this value the slope becomes smaller than the secant line, and necking starts to appear. In figure (c), there is a point where yielding starts to appear, but when \lambda = \lambda_d, drawing happens. After drawing, all the material will stretch and eventually fracture. Between \lambda_Y and \lambda_d, the material itself does not stretch; rather, only the neck stretches out.
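The construction can also be carried out numerically. The sketch below assumes a hypothetical power-law material and finds the necking point as the maximum of the secant slope (the engineering stress); the variable names and constants are invented for the example:

    import numpy as np

    # Considere construction sketch for a hypothetical power-law material.
    # True stress as a function of draw ratio lambda = L/L0, using
    # sigma_T = K * (ln lambda)**n with illustrative constants.
    K, n = 500.0, 0.25
    lam = np.linspace(1.001, 2.0, 2000)
    sigma_true = K * np.log(lam)**n

    # Engineering stress equals the slope of the secant from the origin (lambda = 0)
    # to a point on the sigma_T vs lambda curve: sigma = sigma_T / lambda.
    sigma_eng = sigma_true / lam

    # Necking begins where the engineering stress is maximal, i.e. where the
    # tangent to the sigma_T(lambda) curve passes through the origin.
    i = np.argmax(sigma_eng)
    print(f"necking at lambda ~ {lam[i]:.3f}, "
          f"true strain ~ {np.log(lam[i]):.3f} (expected ~ n = {n})")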
Misconceptions
A popular misconception is that all materials that bend are "weak" and those that do not are "strong". In reality, many materials that undergo large elastic and plastic deformations, such as steel, are able to absorb stresses that would cause brittle materials, such as glass, with minimal plastic deformation ranges, to break.
See also
Artificial cranial deformation
Buff strength
Creep (deformation)
Deflection (engineering)
Deformation (mechanics)
Deformation mechanism maps
Deformation monitoring
Deformation retract
Deformation theory
Elasticity
Malleability
Planar deformation features
Plasticity (physics)
Poisson's ratio
Strain tensor
Strength of materials
Wood warping
References
Solid mechanics
Deformation (mechanics)
| Deformation (engineering) | Physics,Materials_science,Engineering | 2,943
50,568,483 | https://en.wikipedia.org/wiki/Read-once%20function | In mathematics, a read-once function is a special type of Boolean function that can be described by a Boolean expression in which each variable appears only once.
More precisely, the expression is required to use only the operations of logical conjunction, logical disjunction, and negation. By applying De Morgan's laws, such an expression can be transformed into one in which negation is used only on individual variables (still with each variable appearing only once). By replacing each negated variable with a new positive variable representing its negation, such a function can be transformed into an equivalent positive read-once Boolean function, represented by a read-once expression without negations.
Examples
For example, for three variables a, b, and c, the expressions

a \wedge (b \vee c)

, and

a \vee (b \wedge c)

are all read-once (as are the other functions obtained by permuting the variables in these expressions). However, the Boolean median operation, given by the expression

(a \wedge b) \vee (a \wedge c) \vee (b \wedge c)

is not read-once: this formula has more than one copy of each variable, and there is no equivalent formula that uses each variable only once.
Characterization
The disjunctive normal form of a (positive) read-once function is not generally itself read-once. Nevertheless, it carries important information about the function. In particular, if one forms a co-occurrence graph in which the vertices represent variables, and edges connect pairs of variables that both occur in the same conjunction (prime implicant) of the disjunctive normal form, then the co-occurrence graph of a read-once function is necessarily a cograph. More precisely, a positive Boolean function is read-once if and only if its co-occurrence graph is a cograph, and in addition every maximal clique of the co-occurrence graph forms one of the conjunctions (prime implicants) of the disjunctive normal form. That is, when interpreted as a function on sets of vertices of its co-occurrence graph, a read-once function is true for sets of vertices that contain a maximal clique, and false otherwise.
For instance, the median function has the same co-occurrence graph as the conjunction of three variables, a triangle graph, but the three-vertex maximal clique of this graph (the whole vertex set) forms a prime implicant of the conjunction and not of the median.
Two variables of a positive read-once expression are adjacent in the co-occurrence graph if and only if their lowest common ancestor in the expression is a conjunction, so the expression tree can be interpreted as a cotree for the corresponding cograph.
Another alternative characterization of positive read-once functions combines their disjunctive and conjunctive normal form. A positive function of a given system of variables, that uses all of its variables, is read-once if and only if every prime implicant of the disjunctive normal form and every clause of the conjunctive normal form have exactly one variable in common.
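This characterization can be checked mechanically for small positive functions. The following brute-force sketch (exponential in the number of variables, so purely illustrative; the function and variable names are invented for the example) computes the prime implicants and CNF clauses of a monotone function given as a Python predicate, and tests whether every implicant and every clause share exactly one variable:

    from itertools import combinations

    def minimal_true_sets(f, variables):
        """Prime implicants of a positive (monotone) Boolean function:
        minimal variable sets whose being true forces f to be true."""
        out = []
        for r in range(len(variables) + 1):
            for s in combinations(variables, r):
                s = frozenset(s)
                if f(s) and not any(p <= s for p in out):
                    out.append(s)
        return out

    def minimal_blocking_sets(f, variables):
        """Clauses of the conjunctive normal form of a positive function:
        minimal variable sets whose being false forces f to be false."""
        varset = frozenset(variables)
        # A set T blocks f if f is false whenever every variable in T is false,
        # i.e. f(varset - T) is false; reuse the search on this dual function.
        return minimal_true_sets(lambda s: not f(varset - s), variables)

    def read_once_check(f, variables):
        """Characterization: every prime implicant and every CNF clause of a
        positive function (using all its variables) share exactly one variable."""
        implicants = minimal_true_sets(f, variables)
        clauses = minimal_blocking_sets(f, variables)
        return all(len(p & c) == 1 for p in implicants for c in clauses)

    V = frozenset("abc")
    read_once = lambda s: "a" in s and ("b" in s or "c" in s)   # a AND (b OR c)
    median = lambda s: len(s & V) >= 2                          # median(a, b, c)

    print(read_once_check(read_once, V))   # True
    print(read_once_check(median, V))      # False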
Recognition
It is possible to recognize read-once functions from their disjunctive normal form expressions in polynomial time.
It is also possible to find a read-once expression for a positive read-once function, given access to the function only through a "black box" that allows its evaluation at any truth assignment, using only a quadratic number of function evaluations.
Notes
References
Boolean algebra | Read-once function | Mathematics | 682 |
37,165,192 | https://en.wikipedia.org/wiki/Algebraic%20matroid | In mathematics, an algebraic matroid is a matroid, a combinatorial structure, that expresses an abstraction of the relation of algebraic independence.
Definition
Given a field extension L/K, Zorn's lemma can be used to show that there always exists a maximal algebraically independent subset of L over K. Further, all the maximal algebraically independent subsets have the same cardinality, known as the transcendence degree of the extension.
For every finite set S of elements of L, the algebraically independent subsets of S satisfy the axioms that define the independent sets of a matroid. In this matroid, the rank of a set of elements is its transcendence degree, and the flat generated by a set T of elements is the intersection of L with the algebraic closure of the field K(T). A matroid that can be generated in this way is called algebraic or algebraically representable. No good characterization of algebraic matroids is known, but certain matroids are known to be non-algebraic; the smallest is the Vámos matroid.
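As an illustrative sketch of this matroid structure (not part of the standard treatment), ranks can be computed for small examples over the rationals using the Jacobian criterion, which in characteristic zero says that rational functions are algebraically independent exactly when their Jacobian matrix has full rank; the element names and sample functions below are invented for the example:

    import sympy as sp

    # Algebraic matroid on a few polynomials over Q, via the Jacobian criterion
    # (characteristic zero): f1, ..., fk are algebraically independent iff the
    # Jacobian matrix of partial derivatives has rank k.
    x, y = sp.symbols("x y")
    elements = {
        "e1": x + y,
        "e2": x * y,
        "e3": x**2 + y**2,   # algebraic over the first two: e3 = e1**2 - 2*e2
    }

    def rank_of(subset):
        """Matroid rank of a subset = transcendence degree over Q, computed as
        the rank of the Jacobian with respect to (x, y)."""
        funcs = [elements[name] for name in subset]
        jac = sp.Matrix([[sp.diff(f, v) for v in (x, y)] for f in funcs])
        return jac.rank()

    print(rank_of(["e1", "e2"]))         # 2: algebraically independent
    print(rank_of(["e1", "e2", "e3"]))   # 2: e3 is dependent on e1 and e2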
Relation to linear matroids
Many finite matroids may be represented by a matrix over a field K, in which the matroid elements correspond to matrix columns, and a set of elements is independent if the corresponding set of columns is linearly independent. Every matroid with a linear representation of this type over a field F may also be represented as an algebraic matroid over F, by choosing an indeterminate for each row of the matrix, and by using the matrix coefficients within each column to assign each matroid element a linear combination of these transcendentals. For fields of characteristic zero (such as the real numbers) linear and algebraic matroids coincide, but for other fields there may exist algebraic matroids that are not linear; indeed the non-Pappus matroid is algebraic over any finite field, but not linear and not algebraic over any field of characteristic zero. However, if a matroid is algebraic over a field F of characteristic zero then it is linear over F(T) for some finite set of transcendentals T over F and over the algebraic closure of F.
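For comparison, the following sketch realizes a small linear matroid directly from a matrix (the matrix entries are arbitrary illustrative values); the matroid elements are the columns, and independence is tested by comparing the rank of a column submatrix with its size:

    import numpy as np
    from itertools import combinations

    # A linear matroid over the reals: elements are the columns of M, and a set
    # of elements is independent exactly when the corresponding columns are
    # linearly independent.
    M = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 2]], dtype=float)   # 4 elements, rank-2 matroid

    def independent(cols):
        """A column set is independent iff the submatrix rank equals its size."""
        sub = M[:, list(cols)]
        return np.linalg.matrix_rank(sub) == len(cols)

    # Enumerate all independent sets (the empty set is always independent).
    for r in range(M.shape[1] + 1):
        for cols in combinations(range(M.shape[1]), r):
            if independent(cols):
                print(cols)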
Closure properties
If a matroid is algebraic over a simple extension F(t) then it is algebraic over F. It follows that the class of algebraic matroids is closed under contraction, and that a matroid algebraic over F is algebraic over the prime field of F.
The class of algebraic matroids is closed under truncation and matroid union. It is not known whether the dual of an algebraic matroid is always algebraic and there is no excluded minor characterisation of the class.
Characteristic set
The (algebraic) characteristic set K(M) of a matroid M is the set of possible characteristics of fields over which M is algebraically representable.
If 0 is in K(M) then all sufficiently large primes are in K(M).
Every prime occurs as the unique characteristic for some matroid.
If M is algebraic over F then any contraction of M is algebraic over F and hence so is any minor of M.
Notes
References
Matroid theory | Algebraic matroid | Mathematics | 631 |