id | url | text | source | categories | token_count
|---|---|---|---|---|---|
18,365 | https://en.wikipedia.org/wiki/Luminance | Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle.
The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO.
Brightness is the term for the subjective impression of the objective luminance measurement standard, and the two should not be conflated.
The SI unit for luminance is candela per square metre (cd/m2). A non-SI term for the same unit is the nit. The unit in the Centimetre–gram–second system of units (CGS) (which predated the SI system) is the stilb, which is equal to one candela per square centimetre or 10 kcd/m2.
Description
Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. Luminance levels indicate how much luminous power could be detected by the human eye looking at a particular surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil.
Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m2. The sun has a luminance of about 1.6×10^9 cd/m2 at noon.
Luminance is invariant in geometric optics. This means that for an ideal optical system, the luminance at the output is the same as the input luminance.
For real, passive optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle so the luminance comes out to be the same assuming there is no loss at the lens. The image can never be "brighter" than the source.
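The conservation argument above can be checked numerically. The sketch below uses made-up flux, area, and solid-angle values (all assumptions, not measurements) for an ideal lossless lens that demagnifies the image by a factor of 2 linearly:

```python
# Numeric check of luminance invariance through an ideal lossless lens.
# All values are illustrative assumptions, not measurements.
luminous_flux = 1.0        # lm, flux carried from source patch to image
source_area = 4.0e-4       # m^2, emitting patch
source_solid_angle = 0.01  # sr, cone accepted by the lens

# A 2x linear demagnification shrinks the area 4x, while etendue
# conservation makes the solid angle grow by the same factor of 4.
image_area = source_area / 4
image_solid_angle = source_solid_angle * 4

L_source = luminous_flux / (source_area * source_solid_angle)
L_image = luminous_flux / (image_area * image_solid_angle)
E_source = luminous_flux / source_area  # lx, illuminance at the source plane
E_image = luminous_flux / image_area    # lx, 4x higher at the image

print(L_source == L_image)   # True: the luminance is unchanged
print(E_image / E_source)    # 4.0: the illuminance is concentrated
```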
Health effects
Retinal damage can occur when the eye is exposed to high luminance. Damage can occur because of local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.
The IEC 60825 series gives guidance on safety relating to exposure of the eye to lasers, which are high luminance sources. The IEC 62471 series gives guidance for evaluating the photobiological safety of lamps and lamp systems including luminaires. Specifically, it specifies the exposure limits, reference measurement technique and classification scheme for the evaluation and control of photobiological hazards from all electrically powered incoherent broadband sources of optical radiation, including LEDs but excluding lasers, in the wavelength range from 200 nm through 3000 nm. This standard was prepared as Standard CIE S 009:2002 by the International Commission on Illumination.
Luminance meter
A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle. The simplest devices measure the luminance in a single direction while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images.
Formulation
The luminance of a specified point of a light source, in a specified direction, is defined by the mixed partial derivative

$$L_\mathrm{v} = \frac{\mathrm{d}^2 \Phi_\mathrm{v}}{\mathrm{d}\Sigma \, \mathrm{d}\Omega_\Sigma \cos\theta_\Sigma}$$

where
$L_\mathrm{v}$ is the luminance (cd/m2);
$\mathrm{d}^2 \Phi_\mathrm{v}$ is the luminous flux (lm) leaving the area $\mathrm{d}\Sigma$ in any direction contained inside the solid angle $\mathrm{d}\Omega_\Sigma$;
$\mathrm{d}\Sigma$ is an infinitesimal area (m2) of the source containing the specified point;
$\mathrm{d}\Omega_\Sigma$ is an infinitesimal solid angle (sr) containing the specified direction; and
$\theta_\Sigma$ is the angle between the normal $n_\Sigma$ to the surface $\mathrm{d}\Sigma$ and the specified direction.
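As a concrete illustration of the definition, the following sketch evaluates the finite-difference analogue of the formula for a small surface patch; all numerical values are illustrative assumptions:

```python
import math

# Finite patch standing in for the infinitesimals in the definition above;
# every value here is an illustrative assumption.
flux = 0.05               # lm, flux leaving the patch into the cone
area = 1.0e-4             # m^2, emitting patch
solid_angle = 0.02        # sr, cone containing the specified direction
theta = math.radians(30)  # angle between the surface normal and that direction

luminance = flux / (area * solid_angle * math.cos(theta))
print(f"L_v ≈ {luminance:.0f} cd/m2")  # ≈ 28868 cd/m2
```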
If light travels through a lossless medium, the luminance does not change along a given light ray. As the ray crosses an arbitrary surface $S$, the luminance is given by

$$L_\mathrm{v} = \frac{\mathrm{d}^2 \Phi_\mathrm{v}}{\mathrm{d}S \, \mathrm{d}\Omega_S \cos\theta_S}$$

where
$\mathrm{d}S$ is the infinitesimal area of $S$ seen from the source inside the solid angle $\mathrm{d}\Omega_\Sigma$;
$\mathrm{d}\Omega_S$ is the infinitesimal solid angle subtended by $\mathrm{d}\Sigma$ as seen from $\mathrm{d}S$; and
$\theta_S$ is the angle between the normal $n_S$ to $\mathrm{d}S$ and the direction of the light.
More generally, the luminance along a light ray can be defined as

$$L_\mathrm{v} = n^2 \frac{\mathrm{d}\Phi_\mathrm{v}}{\mathrm{d}G}$$

where
$\mathrm{d}G$ is the etendue of an infinitesimally narrow beam containing the specified ray;
$\mathrm{d}\Phi_\mathrm{v}$ is the luminous flux carried by this beam; and
$n$ is the index of refraction of the medium.
Relation to illuminance
The luminance of a reflecting surface is related to the illuminance it receives:

$$\int_{\Omega_\Sigma} L_\mathrm{v} \,\mathrm{d}\Omega_\Sigma \cos\theta_\Sigma = M_\mathrm{v} = E_\mathrm{v} R$$

where the integral covers all the directions of emission $\Omega_\Sigma$,
$M_\mathrm{v}$ is the surface's luminous exitance;
$E_\mathrm{v}$ is the received illuminance; and
$R$ is the reflectance.

In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply

$$L_\mathrm{v} = \frac{E_\mathrm{v} R}{\pi}$$
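As a worked example of the Lambertian relation, a matte surface with an assumed reflectance of 0.8 under an assumed illuminance of 500 lx has a luminance of about 127 cd/m2:

```python
import math

E_v = 500.0  # lx, received illuminance (assumed value, e.g. office lighting)
R = 0.8      # reflectance of a white, matte (Lambertian) surface (assumed)

L_v = E_v * R / math.pi
print(f"L_v ≈ {L_v:.0f} cd/m2")  # ≈ 127 cd/m2
```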
Units
A variety of units have been used for luminance, besides the candela per square metre. Luminance is essentially the same as surface brightness, the term used in astronomy. This is measured with a logarithmic scale, magnitudes per square arcsecond (MPSAS).
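For readers who want to move between the two scales, the following sketch uses a commonly quoted V-band approximation in which 10.8×10^4 cd/m2 corresponds to 0 magnitudes per square arcsecond; the anchoring constant is an approximation, not an exact definition:

```python
import math

def cd_per_m2_to_mpsas(luminance: float) -> float:
    """Approximate V-band conversion to magnitudes per square arcsecond,
    anchored at 10.8e4 cd/m2 = 0 MPSAS (a common approximation)."""
    return -2.5 * math.log10(luminance / 108_000.0)

def mpsas_to_cd_per_m2(mpsas: float) -> float:
    return 108_000.0 * 10 ** (-0.4 * mpsas)

# A dark rural night sky is often quoted near 21.5-22 MPSAS:
print(f"{cd_per_m2_to_mpsas(2.5e-4):.1f}")  # ≈ 21.6
```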
See also
Relative luminance
Orders of magnitude (luminance)
Diffuse reflection
Etendue
Lambertian reflectance
Lightness (color)
Luma, the representation of luminance in a video monitor
Lumen (unit)
Radiance, radiometric quantity analogous to luminance
Brightness, the subjective impression of luminance
Glare (vision)
Table of SI light-related units
References
External links
A Kodak guide to Estimating Luminance and Illuminance using a camera's exposure meter. Also available in PDF form.
Autodesk Design Academy Measuring Light Levels
Photometry
Physical quantities | Luminance | Physics,Mathematics | 1,219 |
3,633,250 | https://en.wikipedia.org/wiki/Meiotic%20drive | Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome will affect a manipulation of the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of its phenotypic expression. More simply, meiotic drive is when one copy of a gene is passed on to offspring more than the expected 50% of the time. According to Buckler et al., "Meiotic drive is the subversion of meiosis so that particular genes are preferentially transmitted to the progeny. Meiotic drive generally causes the preferential segregation of small regions of the genome".
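The departure from the Mendelian 50% expectation can be made concrete with a toy simulation. The sketch below assumes an idealized randomly mating diploid population with no selection, and a driver allele transmitted from heterozygotes with probability k (k = 0.5 recovers Mendelian segregation); all parameter values are illustrative:

```python
import random

def next_generation(freq: float, k: float, pop: int = 10_000) -> float:
    """One generation of random mating in an idealized diploid population.
    k is the probability that a heterozygote transmits the driver allele
    (k = 0.5 recovers Mendelian segregation)."""
    driver_gametes = 0
    for _offspring in range(pop):
        for _parent in range(2):
            # Parent genotype drawn from Hardy-Weinberg proportions.
            a1 = random.random() < freq
            a2 = random.random() < freq
            if a1 and a2:
                driver_gametes += 1                    # homozygous driver
            elif a1 or a2:
                driver_gametes += random.random() < k  # biased heterozygote
    return driver_gametes / (2 * pop)

freq = 0.05
for _ in range(10):
    freq = next_generation(freq, k=0.9)
print(f"driver frequency after 10 generations ≈ {freq:.2f}")  # near fixation
```

Even a modest transmission bias sweeps the driver toward fixation within a few generations, which is why unchecked drivers can be so consequential for populations.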
Meiotic drive in plants
The first report of meiotic drive came from Marcus Rhoades, who in 1942 observed a violation of Mendelian segregation ratios for the R locus - a gene controlling the production of the purple pigment anthocyanin in maize kernels - in a maize line carrying abnormal chromosome 10 (Ab10). Ab10 differs from the normal chromosome 10 by the presence of a heterochromatic region called a 'knob', composed of tandem 150-base-pair repeats, which functions as a centromere during division (hence called a 'neocentromere') and moves to the spindle poles faster than the centromeres during meiosis I and II. The mechanism for this was later found to involve the activity of a kinesin-14 gene called Kinesin driver (Kindr). Kindr protein is a functional minus-end directed motor, displaying quicker minus-end directed motility than an endogenous kinesin-14 such as Kin11. As a result, Kindr outperforms the endogenous kinesins, pulling the knobs to the poles faster than the centromeres and causing Ab10 to be preferentially inherited during meiosis.
Meiotic drive in animals
The unequal inheritance of gametes has been observed since the 1950s, in contrast to Gregor Mendel's First and Second Laws (the law of segregation and the law of independent assortment), which dictate that there is a random chance of each allele being passed on to offspring. Examples of selfish drive genes in animals have primarily been found in rodents and flies. These drive systems could play important roles in the process of speciation. For instance, it has been proposed that hybrid sterility (Haldane's rule) may arise from the divergent evolution of sex chromosome drivers and their suppressors.
Meiotic drive in mice
Early observations of mouse t-haplotypes by Mary Lyon described numerous genetic loci on chromosome 17 that suppress X-chromosome sex ratio distortion. If a driver is left unchecked, it may lead to population extinction, as the population would fix for the driver (e.g. a selfish X chromosome), removing the Y chromosome (and therefore males) from the population. The idea that meiotic drivers and their suppressors may govern speciation is supported by observations that mouse Y chromosomes lacking certain genetic loci produce female-biased offspring, implying that these loci encode suppressors of drive. Moreover, matings of certain mouse strains used in research result in unequal offspring ratios. One gene responsible for sex ratio distortion in mice is r2d2 (responder to meiotic drive 2), whose genotype predicts which strains of mice can successfully breed without offspring sex ratio distortion.
Meiotic drive in flies
Selfish chromosomes of stalk-eyed flies have had ecological consequences. Driving X chromosomes lead to reductions in male fecundity and mating success, leading to frequency dependent selection maintaining both the driving alleles and wild-type alleles.
Multiple species of fruit fly are known to have driving X chromosomes, of which the best-characterized are found in Drosophila simulans. Three independent driving X chromosomes are known in D. simulans, called Paris, Durham, and Winters. In Paris, the driving gene encodes a heterochromatin-associated protein ("heterochromatin protein 1 D2" or HP1D2), where the allele of the driving copy fails to prepare the male Y chromosome for meiosis. In Winters, the gene responsible ("Distorter on the X" or Dox) has been identified, though the mechanism by which it acts is still unknown. The strong selective pressure imposed by these driving X chromosomes has given rise to suppressors of drive, whose genes are at least partly known for Winters, Durham, and Paris. These suppressors encode hairpin RNAs that match the sequence of driver genes (such as Dox), leading host RNA interference pathways to degrade the Dox sequence. Autosomal suppressors of drive are known in Drosophila mediopunctata, Drosophila paramelanica, Drosophila quinaria, and Drosophila testacea, emphasizing the importance of these drive systems in natural populations.
See also
Fixed allele
References
Genetics | Meiotic drive | Biology | 1,010 |
8,434,240 | https://en.wikipedia.org/wiki/European%20Programme%20for%20Intervention%20Epidemiology%20Training | The European Programme for Intervention Epidemiology Training (EPIET) Fellowship provides training and practical experience in intervention epidemiology at the national centres for surveillance and control of communicable diseases in the European Union. The fellowship is aimed at EU medical practitioners, public-health nurses, microbiologists, veterinarians and other health professionals with previous experience in public health and a keen interest in epidemiology.
Aims
The aims of the programme are:
To strengthen the surveillance of infectious diseases in EU member states and at Community level
To develop response capacity at national and Community level to meet communicable disease threats through rapid and effective field investigation and control
To develop a European network of public health epidemiologists using standard methods, and sharing common objectives
To contribute to the development of the Community network for the surveillance and control of communicable diseases.
Structure
The EPIET Fellowship lasts two years. Ten percent of this time is taken up by formal training courses and the remainder by a placement at a training site in a European country. The fellowship starts with a three-week introductory course in infectious disease epidemiology. This course provides basic knowledge of intervention epidemiology, including outbreak investigation, surveillance and applied research.
Following the introductory course, fellows spend 23 months at a training site in an EU member state, Norway, Switzerland, the WHO or at the European Centre for Disease Prevention and Control (ECDC). During the training period, fellows will:
analyse, design, or implement a surveillance system
assist in the development of international surveillance networks
perform outbreak investigations
develop a research project on a relevant public health issue
develop knowledge of laboratory techniques
assist in the analysis of public health decisions
teach epidemiology
present the results of their work at the annual scientific conference ESCAIDE
In addition to the introductory course, 4-5 one-week modules are organised throughout the fellowship. The modules focus on one or several specific public health topics, such as: computer tools in outbreak investigations; multivariable regression; time series analysis; vaccinology; laboratory methods for epidemiologists.
Funding
The Fellowship is funded by the European Centre for Disease Prevention and Control (ECDC) and the EU member states. The ECDC took over the coordination of the programme on November 1, 2007, as the European Commission funded project components ended in 2007.
See also
Eurosurveillance
References
External links
EPIET
Epidemiology
Health and the European Union
Schools of public health | European Programme for Intervention Epidemiology Training | Environmental_science | 504 |
8,334,646 | https://en.wikipedia.org/wiki/Delta%20Equulei | Delta Equulei, Latinized from δ Equulei, is the second brightest star in the constellation Equuleus. Delta Equulei is a binary star system about 60 light years away, with components of class G0 and F5. Their combined magnitude is 4.47, and their absolute magnitude is 3.14. There is controversy as to the exact masses of the stars. One study puts the larger at 1.22 solar masses and the smaller at 1.17, while another pegs them at 1.66 and 1.59. The luminosity of the larger star is calculated to be 2.23 solar, and the smaller to be 2.17.
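The quoted combined magnitude follows from adding the component fluxes rather than the magnitudes themselves. The sketch below uses hypothetical component magnitudes (5.20 and 5.25, chosen only to reproduce the quoted total; they are not published values):

```python
import math

def combined_magnitude(m1: float, m2: float) -> float:
    """Magnitudes combine through fluxes, not by averaging."""
    total_flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(total_flux)

# Hypothetical near-equal components reproducing the quoted total:
print(f"{combined_magnitude(5.20, 5.25):.2f}")  # ≈ 4.47
```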
System
William Herschel listed Delta Equulei as a wide binary. Friedrich Georg Wilhelm von Struve later showed this to be an unrelated optical double star. However, his son Otto Wilhelm von Struve, while making follow-up observations in 1852, found that while the separation of the optical double continued to increase, Delta Equulei itself appeared elongated. He concluded that it is a much more compact binary.
References
External links
Delta Equulei
Spectroscopic binaries
202275
Equulei, Delta
Equuleus
G-type main-sequence stars
F-type main-sequence stars
Equulei, 07
8123
0822
Durchmusterung objects
104858 | Delta Equulei | Astronomy | 287 |
4,385,708 | https://en.wikipedia.org/wiki/Setting%20circles | Setting circles are used on telescopes equipped with an equatorial mount to find celestial objects by their equatorial coordinates, often used in star charts and ephemerides.
Description
Setting circles consist of two graduated disks attached to the axes – right ascension (RA) and declination (DEC) – of an equatorial mount. The RA disk is graduated into hours, minutes, and seconds. The DEC disk is graduated into degrees, arcminutes, and arcseconds.
Since the RA coordinates are fixed onto the celestial sphere, the RA disk is usually driven by a clock mechanism in sync with sidereal time. Locating an object on the celestial sphere using setting circles is similar to finding a location on a terrestrial map using latitude and longitude. Sometimes the RA setting circle has two scales on it: one for the Northern Hemisphere and one for the Southern.
Application
Research telescopes
Historically, setting circles have rivaled the telescope's optics in difficulty of construction. Making a set of setting circles required a great deal of precision crafting on a dividing engine. Setting circles usually had a large diameter and, when combined with a vernier scale, could point a telescope to nearly an arcminute of accuracy. In the 20th century setting circles were replaced with electronic encoders on most research telescopes.
Portable telescopes
In amateur astronomy, setting up a portable telescope equipped with setting circles requires:
Polar alignment – The telescope must be aligned with either the north celestial pole or the south celestial pole. Polaris is roughly at the north pole, while Sigma Octantis is roughly at the south pole.
Setting Right Ascension – After polar alignment, the observer uses a calculator or a known star to synchronize the right ascension circle with Sidereal Time.
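A minimal calculation of the kind used to set the RA circle is sketched below; it uses a standard low-precision formula for Greenwich mean sidereal time, and the site longitude and target coordinates are illustrative assumptions:

```python
from datetime import datetime, timezone

def local_sidereal_time_deg(utc: datetime, east_longitude_deg: float) -> float:
    """Approximate local sidereal time in degrees (low-precision formula,
    adequate for setting an analog circle near the current epoch)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    days = (utc - j2000).total_seconds() / 86400.0
    gmst = 280.46061837 + 360.98564736629 * days  # Greenwich mean sidereal time
    return (gmst + east_longitude_deg) % 360.0

# Illustrative site (longitude 71.1 W) and target right ascension:
lst = local_sidereal_time_deg(datetime.now(timezone.utc), east_longitude_deg=-71.1)
ra_target_deg = 201.3
hour_angle = (lst - ra_target_deg) % 360.0  # offset of the RA axis from the meridian
print(f"LST ≈ {lst:.1f} deg, hour angle ≈ {hour_angle:.1f} deg")
```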
Accuracy of pointing the telescope can be hard to achieve. Some sources of error are:
Less-than-perfect polar alignment
The optical tube not being perpendicular to the declination axis
The declination and right ascension axis not being perpendicular
Errors in rotating the setting circles when setting up
Errors in reading the setting circles
Confusion between Northern and Southern hour angles (Right Ascension)
It is common to blame an unlevel tripod as a source of error; however, when a proper polar alignment is performed, any error the tripod induces is factored out.
These sources of error add up and cause the telescope to point far from the desired object. They are also hard to control; for example, Polaris is often used as the celestial north pole for alignment purposes, but it is over half a degree away from the true pole. Also, even the finest graduations on setting circles are usually more than a degree apart, which makes them difficult to read accurately, especially in the dark. Misalignment of the optical tube with the declination axis, or of the declination and right ascension axes with each other, is built into the mount and next to impossible for the user to correct.
In the southern hemisphere the Right Ascension scale operates in reverse of the Northern Hemisphere scale. The term Right Ascension took its name from early northern hemisphere observers for whom "ascending stars" were on the east or right-hand side. In the southern hemisphere the east is on the left when an equatorial mount is aligned on the south pole. Many Right Ascension setting circles therefore carry two sets of numbers, one showing the value if the telescope is aligned in the northern hemisphere, the other for the southern.
Even with some inaccuracies in polar alignment or the perpendicularity of the mount, setting circles can be used to roughly get to a desired object's coordinates, where a star chart can be used to apply the necessary correction. Alternatively, it is possible to point to a bright star very close to the object, rotate the circles to match the star's coordinates, and then point to the desired object's coordinates. Setting circles are also used in a modified version of star hopping where the observer points the telescope at a known object and then moves it a set distance in RA or declination to the location of a desired object.
Digital setting circles
Digital setting circles (DSC) consist of two rotary encoders, one on each axis of the telescope mount, and a digital readout. They give a highly accurate readout of where the telescope is pointed, and their lit display makes them easier to read in the dark. They have also been combined with microcomputers to give the observer a large database of celestial objects and even guide the observer in correctly pointing the telescope.
In contrast to a GOTO telescope mount, a mount equipped with DSC alone is sometimes called a "PUSH TO" mount.
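The underlying encoder arithmetic is simple; the sketch below assumes a hypothetical 8192-count-per-revolution encoder and shows how an offset captured on a known star makes the readout display right ascension directly:

```python
def encoder_to_degrees(counts: int, counts_per_rev: int = 8192) -> float:
    """Convert a raw encoder reading to an axis angle in degrees
    (8192 counts per revolution is an assumed, typical resolution)."""
    return (counts % counts_per_rev) * 360.0 / counts_per_rev

# Point at a known star (RA 120.5 deg, say) and capture the offset once;
# afterwards the readout can display right ascension directly.
ra_offset = 120.5 - encoder_to_degrees(3000)
print(encoder_to_degrees(3000) + ra_offset)  # 120.5
```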
References
External links
Telescopes and Optics
Sky & Telescope article on the use of setting circles
Accurate polar alignment
Telescopes | Setting circles | Astronomy | 937 |
1,840,504 | https://en.wikipedia.org/wiki/Twilight%20Sentinel | Twilight Sentinel is an automatic headlight control system developed by General Motors (GM) for use in their vehicles. The system uses a photoelectric cell to sense outside light conditions and automatically control the vehicle's exterior lights.
History and development
The development of automatic headlight systems at General Motors can be traced back to the early 1950s. In 1952, GM introduced the Autronic Eye, an automatic headlight dimming system, for Oldsmobile and Cadillac models.
Twilight Sentinel, which expanded on the concept of automatic lighting control, was introduced in the mid-1960s. By 1964, it was available as a feature in Cadillac vehicles. The system was later expanded to other GM makes, becoming a popular feature across various models throughout the 1970s and beyond.
Functionality
The Twilight Sentinel system operates based on the following principles:
1. Light sensing: A photoelectric cell, typically located in the dashboard, detects changes in ambient light conditions.
2. Automatic activation: In twilight or low-light conditions, the system automatically turns on the vehicle's exterior lights.
3. Automatic deactivation: When sufficient ambient light is detected, the system turns off the exterior lights.
4. Manual override: The system can be turned off or bypassed by manually operating the headlights.
Technical details
The Twilight Sentinel system uses a photoelectric cell to measure ambient light levels. This cell is typically mounted on the dashboard of the vehicle. The system is designed to activate the vehicle's exterior lights when the ambient light falls below a certain threshold, typically corresponding to twilight conditions.
Some versions of the Twilight Sentinel system included a delay feature to prevent unnecessary switching due to temporary light fluctuations, such as when driving under overpasses or through tunnels.
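The threshold-plus-delay behavior described above amounts to hysteresis with a debounce timer. The sketch below is a toy model only; the lux thresholds and the delay are assumptions, not GM specifications:

```python
class TwilightSentinelModel:
    """Toy model of threshold-plus-delay headlight logic (illustrative only;
    thresholds and delay are assumptions, not GM specifications)."""

    def __init__(self, on_below_lux=400.0, off_above_lux=800.0, delay_s=15.0):
        self.on_below = on_below_lux    # turn lights on below this level
        self.off_above = off_above_lux  # turn lights off above this level
        self.delay_s = delay_s          # ignore fluctuations shorter than this
        self.lights_on = False
        self._pending_since = None

    def update(self, lux: float, t: float) -> bool:
        wants_on = lux < self.on_below
        wants_off = lux > self.off_above
        target = True if wants_on else (False if wants_off else self.lights_on)
        if target == self.lights_on:
            self._pending_since = None          # no change requested
        elif self._pending_since is None:
            self._pending_since = t             # start the delay timer
        elif t - self._pending_since >= self.delay_s:
            self.lights_on = target             # change survived the delay
            self._pending_since = None
        return self.lights_on

# A brief dip in light, as under an overpass, never survives the delay:
model = TwilightSentinelModel()
for t, lux in [(0, 1000), (10, 300), (14, 300), (16, 1000)]:
    print(t, model.update(lux, t))  # lights stay off throughout
```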
Integration with other systems
In some GM vehicles, the Twilight Sentinel system was integrated with other automatic features:
1. Automatic transmission integration: On some early 1990s Canadian GM cars with automatic transmissions, the headlights were integrated with the transmission system, activating when the vehicle was put into drive.
2. Windshield wiper integration: In more recent implementations, the system may be linked to the windshield wipers, automatically activating the lights when the wipers are in use to improve visibility in rainy conditions.
Similar systems by other manufacturers
Other automotive manufacturers have developed similar automatic headlight systems:
1. Ford Motor Company: Ford engineers combined the features of their automatic headlight dimmer system with a Twilight Sentinel-like function, creating a comprehensive automatic lighting control system.
Impact on automotive safety
The development of automatic headlight systems like Twilight Sentinel has contributed to improved road safety. These systems ensure that vehicles have proper illumination in low-light conditions, reducing the risk of accidents caused by reduced visibility. As automotive lighting technology continues to advance, features like Twilight Sentinel remain an important part of vehicle safety systems.
References
Automotive electrics
General Motors | Twilight Sentinel | Engineering | 568 |
51,453,550 | https://en.wikipedia.org/wiki/Cyanonephron%20elegans | Cyanonephron elegans is a freshwater species of cyanobacteria in the family Synechococcaceae. It has been described from the Netherlands, Siberia (Russia), and Queensland, Australia.
References
McGregor, G. (2013). Freshwater Cyanobacteria from North-Eastern Australia: 2. Chroococcales. Phytotaxa 133: 1-130
External links
Cyanonephron elegans at algaebase
Synechococcales
Bacteria described in 2006
Biota of Queensland
Biota of the Netherlands
Biota of Siberia | Cyanonephron elegans | Biology | 117 |
65,804,611 | https://en.wikipedia.org/wiki/List%20of%20spyware%20programs | This is a list of spyware programs.
These common spyware programs illustrate the diversity of behaviours found in these attacks. Note that as with computer viruses, researchers give names to spyware programs which may not be used by their creators. Programs may be grouped into "families" based not on shared program code, but on common behaviors, or by "following the money" of apparent financial or business connections. For instance, a number of the spyware programs distributed by Claria are collectively known as "Gator". Likewise, programs that are frequently installed together may be described as parts of the same spyware package, even if they function separately.
Spyware programs
CoolWebSearch, a group of programs, takes advantage of Internet Explorer vulnerabilities. The package directs traffic to advertisements on Web sites including coolwebsearch.com. It displays pop-up ads, rewrites search engine results, and alters the infected computer's hosts file to direct DNS lookups to these sites.
FinFisher, sometimes called FinSpy is a high-end surveillance suite sold to law enforcement and intelligence agencies. Support services such as training and technology updates are part of the package.
Gator replaced banner ads on web sites with its own.
GO Keyboard, virtual Android keyboard apps (GO Keyboard - Emoji keyboard and GO Keyboard - Emoticon keyboard), transmit personal information to their remote servers without users' explicit consent. This information includes the user's Google account email, language, IMSI, location, network type, Android version and build, and the device's model and screen size. The apps also download and execute code from a remote server, breaching the Malicious Behavior section of the Google Play privacy policies. Some of these plugins are detected as Adware or PUP by many anti-virus engines, while the developer, the Chinese company GOMO Dev Team, claims in the apps' description that they will never collect personal data, including credit card information. The apps, with about 2 million users in total, were caught spying in September 2017 by security researchers from AdGuard, who then reported their findings to Google.
Hermit is a toolkit developed by RCS Lab for government agencies to spy on iOS and Android mobile phones.
HuntBar, aka WinTools or Adware.Websearch, was installed by an ActiveX drive-by download at affiliate Web sites, or by advertisements displayed by other spyware programs—an example of how spyware can install more spyware. These programs add toolbars to IE, track aggregate browsing behavior, redirect affiliate references, and display advertisements.
Internet Optimizer, also known as DyFuCa, redirects Internet Explorer error pages to advertising. When users follow a broken link or enter an erroneous URL, they see a page of advertisements. However, because password-protected Web sites (HTTP Basic authentication) use the same mechanism as HTTP errors, Internet Optimizer makes it impossible for the user to access password-protected sites.
Spyware such as Look2Me hides inside system-critical processes and starts up even in safe mode. With no process to terminate, such programs are harder to detect and remove; they combine spyware with a rootkit. Rootkit technology is also seeing increasing use, as newer spyware programs have specific countermeasures against well-known anti-malware products and may prevent them from running or being installed, or even uninstall them.
Movieland, also known as Moviepass.tv and Popcorn.net, is a movie download service that has been the subject of thousands of complaints to the Federal Trade Commission (FTC), the Washington State Attorney General's Office, the Better Business Bureau, and other agencies. Consumers complained they were held hostage by a cycle of oversized pop-up windows demanding payment of at least $29.95, claiming that they had signed up for a three-day free trial but had not cancelled before the trial period was over, and were thus obligated to pay. The FTC filed a complaint, since settled, against Movieland and eleven other defendants charging them with having "engaged in a nationwide scheme to use deception and coercion to extract payments from consumers."
Onavo Protect is used by Facebook to monetize usage habits within a privacy-focused environment, and was criticized because the app listing did not contain a prominent disclosure of Facebook's ownership. The app was removed from the Apple iOS App Store. Apple deemed it a violation of guidelines barring apps from harvesting data from other apps on a user's device.
Pegasus is spyware for iOS and Android mobile phones developed by NSO Group which received widespread publicity for its use by government agencies.
Zwangi redirects URLs typed into the browser's address bar to a search page at www.zwangi.com, and may also take screenshots without permission.
Programs distributed with spyware
Kazaa
Morpheus
WeatherBug
WildTangent
Programs formerly distributed with spyware
AOL Instant Messenger (AOL Instant Messenger still packages Viewpoint Media Player, and WildTangent)
DivX
FlashGet
magicJack
References
Spyware
Spyware
Types of malware
Rogue security software
Computer network security
Online advertising
Espionage techniques
Espionage devices
Identity theft
Security breaches
Deception | List of spyware programs | Technology,Engineering | 1,081 |
1,186,904 | https://en.wikipedia.org/wiki/Confocal%20microscopy | Confocal microscopy, most frequently confocal laser scanning microscopy (CLSM) or laser scanning confocal microscopy (LSCM), is an optical imaging technique for increasing optical resolution and contrast of a micrograph by means of using a spatial pinhole to block out-of-focus light in image formation. Capturing multiple two-dimensional images at different depths in a sample enables the reconstruction of three-dimensional structures (a process known as optical sectioning) within an object. This technique is used extensively in the scientific and industrial communities and typical applications are in life sciences, semiconductor inspection and materials science.
Light travels through the sample under a conventional microscope as far into the specimen as it can penetrate, while a confocal microscope only focuses a smaller beam of light at one narrow depth level at a time. The CLSM achieves a controlled and highly limited depth of field.
Basic concept
The principle of confocal imaging was patented in 1957 by Marvin Minsky and aims to overcome some limitations of traditional wide-field fluorescence microscopes. In a conventional (i.e., wide-field) fluorescence microscope, the entire specimen is flooded evenly in light from a light source. All parts of the sample can be excited at the same time and the resulting fluorescence is detected by the microscope's photodetector or camera including a large unfocused background part. In contrast, a confocal microscope uses point illumination (see Point Spread Function) and a pinhole in an optically conjugate plane in front of the detector to eliminate out-of-focus signal – the name "confocal" stems from this configuration. As only light produced by fluorescence very close to the focal plane can be detected, the image's optical resolution, particularly in the sample depth direction, is much better than that of wide-field microscopes. However, as much of the light from sample fluorescence is blocked at the pinhole, this increased resolution is at the cost of decreased signal intensity – so long exposures are often required. To offset this drop in signal after the pinhole, the light intensity is detected by a sensitive detector, usually a photomultiplier tube (PMT) or avalanche photodiode, transforming the light signal into an electrical one.
As only one point in the sample is illuminated at a time, 2D or 3D imaging requires scanning over a regular raster (i.e. a rectangular pattern of parallel scanning lines) in the specimen. The beam is scanned across the sample in the horizontal plane by using one or more (servo controlled) oscillating mirrors. This scanning method usually has a low reaction latency and the scan speed can be varied. Slower scans provide a better signal-to-noise ratio, resulting in better contrast.
The achievable thickness of the focal plane is defined mostly by the wavelength of the light used divided by the numerical aperture of the objective lens, but also by the optical properties of the specimen. The thin optical sectioning possible makes these types of microscopes particularly good at 3D imaging and surface profiling of samples.
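The scaling with wavelength and numerical aperture can be made concrete with textbook approximations (the Rayleigh criterion laterally and a common axial estimate; these are rules of thumb, not the exact confocal formulas):

```python
def rayleigh_lateral(wavelength_nm: float, na: float) -> float:
    """Rayleigh criterion for lateral resolution, d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

def axial_extent(wavelength_nm: float, na: float, n: float = 1.515) -> float:
    """Common textbook estimate of axial (depth) resolution, ~2 * n * lambda / NA^2."""
    return 2.0 * n * wavelength_nm / na**2

def airy_unit(wavelength_nm: float, na: float) -> float:
    """Diameter of the Airy disk in the sample plane: 1.22 * lambda / NA."""
    return 1.22 * wavelength_nm / na

wl, na = 488.0, 1.4  # GFP-like excitation with an oil-immersion objective (assumed)
print(f"lateral ≈ {rayleigh_lateral(wl, na):.0f} nm")  # ≈ 213 nm
print(f"axial   ≈ {axial_extent(wl, na):.0f} nm")      # ≈ 754 nm
print(f"1 AU    ≈ {airy_unit(wl, na):.0f} nm")         # ≈ 425 nm
```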
Successive slices make up a 'z-stack', which can either be processed to create a 3D image or be merged into a single 2D projection (predominantly by taking the maximum pixel intensity; other common methods include using the standard deviation or summing the pixels).
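On a z-stack stored as a (z, y, x) array, each of the projections mentioned above is a one-line reduction; the random array below merely stands in for real slices:

```python
import numpy as np

# A z-stack as a (z, y, x) array; random data stands in for real slices.
rng = np.random.default_rng(0)
z_stack = rng.random((25, 512, 512))

mip = z_stack.max(axis=0)       # maximum intensity projection (most common)
std_proj = z_stack.std(axis=0)  # standard-deviation projection
sum_proj = z_stack.sum(axis=0)  # sum projection

print(mip.shape)  # (512, 512): one 2D image per projection
```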
Confocal microscopy provides the capacity for direct, noninvasive, serial optical sectioning of intact, thick, living specimens with a minimum of sample preparation as well as a marginal improvement in lateral resolution compared to wide-field microscopy. Biological samples are often treated with fluorescent dyes to make selected objects visible. However, the actual dye concentration can be low to minimize the disturbance of biological systems: some instruments can track single fluorescent molecules. Also, transgenic techniques can create organisms that produce their own fluorescent chimeric molecules (such as a fusion of GFP, green fluorescent protein with the protein of interest). Confocal microscopes work on the principle of point excitation in the specimen (diffraction limited spot) and point detection of the resulting fluorescent signal. A pinhole at the detector provides a physical barrier that blocks out-of-focus fluorescence. Only the in-focus, or central spot of the Airy disk, is recorded.
Techniques used for horizontal scanning
Four types of confocal microscopes are commercially available:
Confocal laser scanning microscopes use multiple mirrors (typically 2 or 3 scanning linearly along the x- and the y- axes) to scan the laser across the sample and "descan" the image across a fixed pinhole and detector. This process is usually slow and does not work for live imaging, but can be useful to create high-resolution representative images of fixed samples.
Spinning-disk (Nipkow disk) confocal microscopes use a series of moving pinholes on a disc to scan spots of light. Since a series of pinholes scans an area in parallel, each pinhole is allowed to hover over a specific area for a longer amount of time thereby reducing the excitation energy needed to illuminate a sample when compared to laser scanning microscopes. Decreased excitation energy reduces phototoxicity and photobleaching of a sample often making it the preferred system for imaging live cells or organisms.
Microlens enhanced or dual spinning-disk confocal microscopes work under the same principles as spinning-disk confocal microscopes except a second spinning-disk containing micro-lenses is placed before the spinning-disk containing the pinholes. Every pinhole has an associated microlens. The micro-lenses act to capture a broad band of light and focus it into each pinhole significantly increasing the amount of light directed into each pinhole and reducing the amount of light blocked by the spinning-disk. Microlens enhanced confocal microscopes are therefore significantly more sensitive than standard spinning-disk systems. Yokogawa Electric invented this technology in 1992.
Programmable array microscopes (PAM) use an electronically controlled spatial light modulator (SLM) that produces a set of moving pinholes. The SLM is a device containing an array of pixels with some property (opacity, reflectivity or optical rotation) of the individual pixels that can be adjusted electronically. The SLM contains microelectromechanical mirrors or liquid crystal components. The image is usually acquired by a charge-coupled device (CCD) camera.
Each of these classes of confocal microscope have particular advantages and disadvantages. Most systems are either optimized for recording speed (i.e. video capture) or high spatial resolution. Confocal laser scanning microscopes can have a programmable sampling density and very high resolutions while Nipkow and PAM use a fixed sampling density defined by the camera's resolution. Imaging frame rates are typically slower for single point laser scanning systems than spinning-disk or PAM systems. Commercial spinning-disk confocal microscopes achieve frame rates of over 50 per second – a desirable feature for dynamic observations such as live cell imaging.
In practice, Nipkow and PAM allow multiple pinholes scanning the same area in parallel as long as the pinholes are sufficiently far apart.
Cutting-edge development of confocal laser scanning microscopy now allows better than standard video rate (60 frames per second) imaging by using multiple microelectromechanical scanning mirrors.
Confocal X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example, when analyzing buried layers in a painting.
Resolution enhancement
CLSM is a scanning imaging technique in which the resolution obtained is best explained by comparing it with another scanning technique like that of the scanning electron microscope (SEM). CLSM has the advantage of not requiring a probe to be suspended nanometers from the surface, as in an AFM or STM, for example, where the image is obtained by scanning with a fine tip over a surface. The distance from the objective lens to the surface (called the working distance) is typically comparable to that of a conventional optical microscope. It varies with the system optical design, but working distances from hundreds of micrometres to several millimeters are typical.
In CLSM a specimen is illuminated by a point laser source, and each volume element is associated with a discrete scattering or fluorescence intensity. Here, the size of the scanning volume is determined by the spot size (close to diffraction limit) of the optical system because the image of the scanning laser is not an infinitely small point but a three-dimensional diffraction pattern. The size of this diffraction pattern and the focal volume it defines is controlled by the numerical aperture of the system's objective lens and the wavelength of the laser used. This can be seen as the classical resolution limit of conventional optical microscopes using wide-field illumination. However, with confocal microscopy it is even possible to improve on the resolution limit of wide-field illumination techniques because the confocal aperture can be closed down to eliminate higher orders of the diffraction pattern. For example, if the pinhole diameter is set to 1 Airy unit then only the first order of the diffraction pattern makes it through the aperture to the detector while the higher orders are blocked, thus improving resolution at the cost of a slight decrease in brightness. In fluorescence observations, the resolution limit of confocal microscopy is often limited by the signal-to-noise ratio caused by the small number of photons typically available in fluorescence microscopy. One can compensate for this effect by using more sensitive photodetectors or by increasing the intensity of the illuminating laser point source. Increasing the intensity of the illumination laser risks excessive bleaching or other damage to the specimen of interest, especially for experiments in which comparison of fluorescence brightness is required. When imaging tissues that are differentially refractive, such as the spongy mesophyll of plant leaves or other air-space containing tissues, spherical aberrations that impair confocal image quality are often pronounced. Such aberrations, however, can be significantly reduced by mounting samples in optically transparent, non-toxic perfluorocarbons such as perfluorodecalin, which readily infiltrates tissues and has a refractive index almost identical to that of water. When imaging in a reflection geometry, it is also possible to detect the interference of the reflected and scattered light of an object like an intracellular organelle. The interferometric nature of the signal allows the pinhole diameter to be reduced down to 0.2 Airy units, which enables an ideal resolution enhancement of √2 without sacrificing the signal-to-noise ratio as in confocal fluorescence microscopy.
Uses
CLSM is widely used in various biological science disciplines, from cell biology and genetics to microbiology and developmental biology. It is also used in quantum optics and nano-crystal imaging and spectroscopy.
Biology and medicine
Clinically, CLSM is used in the evaluation of various eye diseases, and is particularly useful for imaging, qualitative analysis, and quantification of endothelial cells of the cornea. It is used for localizing and identifying the presence of filamentary fungal elements in the corneal stroma in cases of keratomycosis, enabling rapid diagnosis and thereby early institution of definitive therapy. Research into CLSM techniques for endoscopic procedures (endomicroscopy) is also showing promise. In the pharmaceutical industry, it was recommended to follow the manufacturing process of thin film pharmaceutical forms, to control the quality and uniformity of the drug distribution. Confocal microscopy is also used to study biofilms — complex porous structures that are the preferred habitat of microorganisms. Some of temporal and spatial function of biofilms can be understood only by studying their structure on micro- and meso-scales. The study of microscale is needed to detect the activity and organization of single microorganisms.
Optics and crystallography
CLSM is used as the data retrieval mechanism in some 3D optical data storage systems and has helped determine the age of the Magdalen papyrus.
Audio preservation
The IRENE system makes use of confocal microscopy for optical scanning and recovery of damaged historical audio.
Material's surface characterization
Laser scanning confocal microscopes are used in the characterization of the surface of microstructured materials, such as silicon wafers used in solar cell production. During the first processing steps, wafers are wet-chemically etched with acid or alkaline compounds, rendering a texture to their surface. Laser confocal microscopy is then used to observe the state of the resulting surface at the micrometer level. Laser confocal microscopy can also be used to analyze the thickness and height of metallization fingers printed on top of solar cells.
Variants and enhancements
Improving axial resolution
The point spread function of the pinhole is an ellipsoid, several times as long as it is wide. This limits the axial resolution of the microscope. One technique of overcoming this is 4Pi microscopy where incident and or emitted light are allowed to interfere from both above and below the sample to reduce the volume of the ellipsoid. An alternative technique is confocal theta microscopy. In this technique the cone of illuminating light and detected light are at an angle to each other (best results when they are perpendicular). The intersection of the two point spread functions gives a much smaller effective sample volume. From this evolved the single plane illumination microscope. Additionally deconvolution may be employed using an experimentally derived point spread function to remove the out of focus light, improving contrast in both the axial and lateral planes.
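As an illustration of the deconvolution step, the sketch below applies scikit-image's Richardson–Lucy implementation to a synthetically blurred image; the Gaussian PSF is a stand-in assumption for an experimentally derived point spread function:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# Toy ground truth: a few bright points on a dark background.
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 45] = 1.0

# Assumed Gaussian PSF standing in for a measured microscope PSF.
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()

blurred = gaussian_filter(truth, sigma=1.5)               # simulated blur
restored = restoration.richardson_lucy(blurred, psf, 30)  # iterative deconvolution
print(restored.shape)  # (64, 64), with energy re-concentrated near the points
```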
Super resolution
There are confocal variants that achieve resolution below the diffraction limit such as stimulated emission depletion microscopy (STED). Besides this technique a broad variety of other (not confocal based) super-resolution techniques are available like PALM, (d)STORM, SIM, and so on. They all have their own advantages such as ease of use, resolution, and the need for special equipment, buffers, or fluorophores.
Low-temperature operability
To image samples at low temperatures, two main approaches have been used, both based on the laser scanning confocal microscopy architecture. One approach is to use a continuous flow cryostat: only the sample is at low temperature and it is optically addressed through a transparent window. Another possible approach is to have part of the optics (especially the microscope objective) in a cryogenic storage dewar. This second approach, although more cumbersome, guarantees better mechanical stability and avoids the losses due to the window.
Molecular interaction
To study molecular interactions using the CLSM Förster resonance energy transfer (FRET) can be used to confirm that two proteins are within a certain distance to one another.
Images
History
The beginnings: 1940–1957
In 1940 Hans Goldmann, ophthalmologist in Bern, Switzerland, developed a slit lamp system to document eye examinations. This system is considered by some later authors as the first confocal optical system.
In 1943 Zyun Koana published a confocal system.
In 1951 Hiroto Naora, a colleague of Koana, described a confocal microscope in the journal Science for spectrophotometry.
The first confocal scanning microscope was built by Marvin Minsky in 1955 and a patent was filed in 1957. The scanning of the illumination point in the focal plane was achieved by moving the stage. No scientific publication was submitted and no images made with it were preserved.
The Tandem-Scanning-Microscope
In the 1960s, the Czechoslovak Mojmír Petráň from the Medical Faculty of the Charles University in Plzeň developed the Tandem-Scanning-Microscope, the first commercialized confocal microscope. It was sold by a small company in Czechoslovakia and in the United States by Tracor-Northern (later Noran) and used a rotating Nipkow disk to generate multiple excitation and emission pinholes.
The Czechoslovak patent was filed 1966 by Petráň and Milan Hadravský, a Czechoslovak coworker. A first scientific publication with data and images generated with this microscope was published in the journal Science in 1967, authored by M. David Egger from Yale University and Petráň. As a footnote to this paper it is mentioned that Petráň designed the microscope and supervised its construction and that he was, in part, a "research associate" at Yale. A second publication from 1968 described the theory and the technical details of the instrument and had Hadravský and Robert Galambos, the head of the group at Yale, as additional authors. In 1970 the US patent was granted. It was filed in 1967.
1969: The first confocal laser scanning microscope
In 1969 and 1971, M. David Egger and Paul Davidovits from Yale University, published two papers describing the first confocal laser scanning microscope. It was a point scanner, meaning just one illumination spot was generated. It used epi-Illumination-reflection microscopy for the observation of nerve tissue. A 5 mW Helium-Neon-Laser with 633 nm light was reflected by a semi-transparent mirror towards the objective. The objective was a simple lens with a focal length of 8.5 mm. As opposed to all earlier and most later systems, the sample was scanned by movement of this lens (objective scanning), leading to a movement of the focal point. Reflected light came back to the semitransparent mirror, the transmitted part was focused by another lens on the detection pinhole behind which a photomultiplier tube was placed. The signal was visualized by a CRT of an oscilloscope, the cathode ray was moved simultaneously with the objective. A special device allowed to make Polaroid photos, three of which were shown in the 1971 publication.
The authors speculate about fluorescent dyes for in vivo investigations. They cite Minsky's patent, thank Steve Baer, at the time a doctoral student at the Albert Einstein School of Medicine in New York City where he developed a confocal line scanning microscope, for suggesting to use a laser with 'Minsky's microscope' and thank Galambos, Hadravsky and Petráň for discussions leading to the development of their microscope. The motivation for their development was that in the Tandem-Scanning-Microscope only a fraction of 10−7 of the illumination light participates in generating the image in the eye piece. Thus, image quality was not sufficient for most biological investigations.
1977–1985: Point scanners with lasers and stage scanning
In 1977 Colin J. R. Sheppard and Amarjyoti Choudhury, Oxford, UK, published a theoretical analysis of confocal and laser-scanning microscopes. It is probably the first publication using the term "confocal microscope".
In 1978, the brothers Christoph Cremer and Thomas Cremer published a design for a confocal laser-scanning-microscope using fluorescent excitation with electronic autofocus. They also suggested a laser point illumination by using a "4π-point-hologramme". This CLSM design combined the laser scanning method with the 3D detection of biological objects labeled with fluorescent markers for the first time.
In 1978 and 1980, the Oxford-group around Colin Sheppard and Tony Wilson described a confocal microscope with epi-laser-illumination, stage scanning and photomultiplier tubes as detectors. The stage could move along the optical axis (z-axis), allowing optical serial sections.
In 1979 Fred Brakenhoff and coworkers demonstrated that the theoretical advantages of optical sectioning and resolution improvement are indeed achievable in practice. In 1985 this group became the first to publish convincing images taken on a confocal microscope that were able to answer biological questions. Shortly after many more groups started using confocal microscopy to answer scientific questions that until then had remained a mystery due to technological limitations.
In 1983 I. J. Cox and C. Sheppard from Oxford published the first work whereby a confocal microscope was controlled by a computer. The first commercial laser scanning microscope, the stage-scanner SOM-25 was offered by Oxford Optoelectronics (after several take-overs acquired by BioRad) starting in 1982. It was based on the design of the Oxford group.
Starting 1985: Laser point scanners with beam scanning
In the mid-1980s, William Bradshaw Amos and John Graham White and colleagues working at the Laboratory of Molecular Biology in Cambridge built the first confocal beam scanning microscope. The stage with the sample was not moving, instead the illumination spot was, allowing faster image acquisition: four images per second with 512 lines each. Hugely magnified intermediate images, due to a 1–2 meter long beam path, allowed the use of a conventional iris diaphragm as a ‘pinhole’, with diameters ~1 mm. First micrographs were taken with long-term exposure on film before a digital camera was added. A further improvement allowed zooming into the preparation for the first time. Zeiss, Leitz and Cambridge Instruments had no interest in a commercial production. The Medical Research Council (MRC) finally sponsored development of a prototype. The design was acquired by Bio-Rad, amended with computer control and commercialized as 'MRC 500'. The successor MRC 600 was later the basis for the development of the first two-photon-fluorescent microscope developed 1990 at Cornell University.
Developments at the KTH Royal Institute of Technology in Stockholm around the same time led to a commercial CLSM distributed by the Swedish company Sarastro. The venture was acquired in 1990 by Molecular Dynamics, but the CLSM was eventually discontinued. In Germany, Heidelberg Instruments, founded in 1984, developed a CLSM, which was initially meant for industrial applications rather than biology. This instrument was taken over in 1990 by Leica Lasertechnik. Zeiss already had a non-confocal flying-spot laser scanning microscope on the market which was upgraded to a confocal. A report from 1990, mentioned some manufacturers of confocals: Sarastro, Technical Instrument, Meridian Instruments, Bio-Rad, Leica, Tracor-Northern and Zeiss.
In 1989, Fritz Karl Preikschat, with his son Ekhard Preikschat, invented the scanning laser diode microscope for particle-size analysis, and co-founded Lasentec to commercialize it. In 2001, Lasentec was acquired by Mettler Toledo. These instruments are used mostly in the pharmaceutical industry to provide in-situ control of the crystallization process in large purification systems.
2010s: Computational methods for removing the output pinhole
In standard confocal instruments, the second or "output" pinhole is utilized to filter out the emitted or scattered light. Traditionally, this pinhole is a passive component that blocks light to filter the illumination optically. However, newer designs have tried to perform this filtering digitally.
Recent approaches have replaced the passive pinhole with a compound detector element. Typically, after digital processing, this approach leads to better resolution and photon budget, as the resolution limit can approach that of an infinitely small pinhole.
Other researchers have attempted to digitally refocus the light from a point excitation source using deep convolutional neural networks.
See also
Charge modulation spectroscopy
Deconvolution
Fluorescence microscope
Focused ion beam
Focus stacking
Laser scanning confocal microscopy
Live cell imaging
Microscope objective lens
Microscope slide
Optical microscope
Optical sectioning
Photodetector
Point spread function
Stimulated emission depletion microscope
Super-resolution microscopy
Total internal reflection fluorescence microscope (TIRF)
Two-photon excitation microscopy
References
External links
Virtual CLSM (Java-based)
Animations and explanations on various types of microscopes including fluorescent and confocal microscopes (Université Paris Sud)
Microscopy
American inventions
Cell imaging
Scientific techniques
Laboratory equipment
Optical microscopy techniques
Fluorescence techniques | Confocal microscopy | Chemistry,Biology | 4,899 |
56,032,130 | https://en.wikipedia.org/wiki/Loubignac%20iteration | In applied mathematics, Loubignac iteration is an iterative method in finite element methods. It gives a continuous stress field. It is named after Gilles Loubignac, who published the method in 1977.
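Loubignac's paper defines the method precisely; the sketch below only illustrates the iteration as it is commonly summarized (solve, recover a continuous nodal-averaged stress field, re-balance it against the applied loads, correct the displacements), on a deliberately simple 1D bar. Geometry, loads, material values and tolerance are all illustrative assumptions:

```python
import numpy as np

# Loubignac-style iteration on a 1D clamped bar with a tip load (illustrative
# sketch, not a general code): E = A = 1, n_el linear elements.
n_el, L, E, A = 4, 1.0, 1.0, 1.0
h = L / n_el
n_nd = n_el + 1

K = np.zeros((n_nd, n_nd))
for e in range(n_el):
    K[e:e+2, e:e+2] += (E * A / h) * np.array([[1, -1], [-1, 1]])
f = np.zeros(n_nd)
f[-1] = 1.0                       # tip load

free = np.arange(1, n_nd)         # node 0 is clamped
u = np.zeros(n_nd)

for it in range(20):
    # Element (piecewise-constant) stresses from the current displacements.
    sig_el = E * (u[1:] - u[:-1]) / h
    # Nodal averaging gives the continuous ("smoothed") stress field.
    sig_nd = np.zeros(n_nd)
    counts = np.zeros(n_nd)
    for e in range(n_el):
        sig_nd[e:e+2] += sig_el[e]
        counts[e:e+2] += 1
    sig_nd /= counts
    # Internal force consistent with the smoothed stresses: f_int = ∫ B^T σ dx.
    f_int = np.zeros(n_nd)
    for e in range(n_el):
        s_mean = 0.5 * (sig_nd[e] + sig_nd[e+1])
        f_int[e] -= s_mean * A
        f_int[e+1] += s_mean * A
    r = f - f_int                 # residual driven to zero by the iteration
    if np.linalg.norm(r[free]) < 1e-12:
        break
    du = np.zeros(n_nd)
    du[free] = np.linalg.solve(K[np.ix_(free, free)], r[free])
    u += du

print("tip displacement:", u[-1], "after", it + 1, "iterations")
```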
References
Loubignac's paper
Continuum mechanics
Finite element method
Numerical differential equations
Partial differential equations
Structural analysis | Loubignac iteration | Physics,Mathematics,Engineering | 65 |
714,440 | https://en.wikipedia.org/wiki/Wolf%20Prize%20in%20Chemistry | The Wolf Prize in Chemistry is awarded annually by the Wolf Foundation in Israel. It is one of the six Wolf Prizes established by the Foundation and awarded since 1978; the others are in Agriculture, Mathematics, Medicine, Physics and Arts.
The Wolf Prize in Chemistry is considered to be one of the most prestigious international chemistry awards, behind only the Nobel Prize in Chemistry. Becoming a Wolf Prize laureate has been viewed as a potential precursor to receiving the Nobel Prize. As of 2022, 12 awardees have subsequently become Nobel laureates; the most recent of those is Carolyn Bertozzi, who received the Nobel Prize the same year.
Laureates
Laureates per country
Below is a chart of all laureates per country (updated to 2023 laureates). Some laureates are counted more than once if they have multiple citizenships.
See also
List of chemistry awards
Notes and references
External links
Jerusalempost Israel-Wolf-Prizes 2016
Jerusalempost Israel-Wolf-Prizes 2017
Jerusalempost Wolf-Prizes 2018
Wolf Prize 2019
Chemistry
Chemistry awards
Lists of Israeli award winners
Awards established in 1978
Israeli science and technology awards
1978 establishments in Israel | Wolf Prize in Chemistry | Technology | 224 |
22,216,147 | https://en.wikipedia.org/wiki/Harassment | Harassment covers a wide range of behaviors of an offensive nature. It is commonly understood as behavior that demeans, humiliates, and intimidates a person, and it is characteristically identified by its unlikelihood in terms of social and moral reasonableness. In the legal sense, these are behaviors that appear to be disturbing, upsetting, or threatening. Traditional forms evolve from discriminatory grounds, and have an effect of nullifying a person's rights or impairing a person from benefiting from their rights.
When harassing behaviors become repetitive, it is defined as bullying. The continuity or repetitiveness and the aspect of distressing, alarming or threatening may distinguish it from insult. It also constitutes a tactic of coercive control, which may be deployed by an abuser in the context of domestic violence. Harassment is a specific form of discrimination, and occurs when a person is the victim of unwanted intimidating, offensive, or humiliating behavior.
To qualify as harassment, there must be a connection between the harassing behavior and a person's protected personal characteristics or prohibited grounds of discrimination, and the harassment must occur in a protected area. Although harassment typically involves behavior that persists over time, serious and malicious one-off incidents are also considered harassment in some cases.
Etymology
Attested in English from 1753, harassment derives from the English verb harass plus the suffix -ment. The verb harass is in turn a loan word from the French, attested from 1572 in the sense of torment, annoyance, bother or trouble, and from 1609 also referring to the condition of being exhausted or overtired. The first records of the French verb harasser itself are found in a 1527 Latin-to-French translation of Thucydides' History of the war that was between the Peloponnesians and the Athenians both in the countries of the Greeks and the Romans and the neighboring places, wherein the translator uses harasser allegedly to mean harceler (to exhaust the enemy by repeated raids), and in the military chant Chanson du franc archer of 1562, where the term is applied to a gaunt jument (de poil fauveau, tant maigre et harassée: of fawn horsehair, so meagre and ...), where the verb is supposed to mean overtired.
One hypothesis about the origin of the verb harasser is harace/harache, which was used in the 14th century in expressions like courre à la harache (to pursue) and prendre aucun par la harache (to take somebody under constraint). The Französisches Etymologisches Wörterbuch, a German etymological dictionary of the French language (1922–2002), compares both harace and harache phonetically and syntactically to the interjections hare and haro, treating them as pejorative and augmentative forms. The latter was an exclamation indicating distress and emergency (recorded since 1180), and is also reported later, in 1529, in the expression crier haro sur (to arouse indignation against somebody). The use of hare is already reported in 1204 as an order to end public activities such as fairs or markets, and later (1377) still as a command, but directed at dogs. This dictionary suggests a relation of haro/hare to the Old Low Franconian *hara (here) (as in bringing a dog to heel).
While a pejorative derivation from an exclamation, and in particular from such an exclamation, is theoretically possible for the first word (harace) and perhaps phonetically plausible for harache, the semantic, syntactic and phonetic similarity between the verb harasser, as used in its first popular attestation (the chant mentioned above), and the word haras should be kept in mind: already in 1160, haras denoted a group of horses constrained together for the purpose of reproduction, and in 1280 it also denoted the enclosure facility itself in which those horses are constrained. The origin of haras itself is thought to be the Old Scandinavian hârr with the Romance suffix -as, which meant grey or dimmish horsehair. The etymological relation to the Arabic word for horse, whose Roman transliteration is faras, is controversial.
Although the French origin of the word harassment is considered beyond all question in the Oxford English Dictionary and the dictionaries based on it, these derive the French verb harasser from a supposed Old French verb harer, despite the fact that such a verb cannot be found in French etymological dictionaries like that of the Centre national de ressources textuelles et lexicales or the Trésor de la langue française informatisé (see also their corresponding websites as indicated in the interlinks); since the entry further alleges a derivation from hare, as in the German etymological dictionary of the French language mentioned above, a misprint of harer = har/ass/er = harasser is plausible or at least cannot be excluded. In those dictionaries the relationship with harassment rests on an interpretation of the interjection hare as a cry urging a dog to attack, despite the fact that it should indicate a shout to come and not to go (hare = hara = here; cf. above). The American Heritage Dictionary prudently indicates this origin only as possible.
Types
Electronic
Electronic harassment is the unproven belief that electromagnetic waves are being used to harass a victim. Psychologists have identified evidence of auditory hallucinations, delusional disorders, or other mental disorders in online communities supporting those who claim to be targeted.
Landlord
Landlord harassment is the willing creation, by a landlord or his agents, of conditions that are uncomfortable for one or more tenants in order to induce willing abandonment of a rental contract. Such a strategy is often sought because it avoids costly legal expenses and potential problems with eviction. This kind of activity is common in regions where rent control laws exist, but which do not allow the direct extension of rent-controlled prices from one tenancy to the subsequent tenancy, thus allowing landlords to set higher prices. Landlord harassment carries specific legal penalties in some jurisdictions, but enforcement can be very difficult or even impossible in many circumstances. However, when a crime is committed in the process and motives similar to those described above are subsequently proven in court, then those motives may be considered an aggravating factor in many jurisdictions, thus subjecting the offender(s) to a stiffer sentence.
Online
Online harassment directs multiple repeated obscenities and derogatory comments at specific individuals, focusing, for example, on the targets' race, religion, gender, nationality, disability, or sexual orientation. This often occurs in chat rooms, through newsgroups, and by sending hate e-mail to interested parties. This may also include stealing photos of the victim and their families, doctoring these photos in offensive ways, and then posting them on social media with the aim of causing emotional distress (see cyberbullying, cyberstalking, hate crime, online predator, Online Gender-Based Violence, and stalking).
Herd mentality and cyberbullying are common on social media platforms. A "social media mob", once formed, may evolve to "bullying anyone who didn't align with their beliefs or conclusions".
Police
Unfair treatment conducted by law enforcement officials, including but not limited to excessive force, profiling, threats, coercion, and racial, ethnic, religious, gender/sexual, age-based, or other forms of discrimination.
Power
Power harassment is harassment or unwelcome attention of a political nature, often occurring in the environment of a workplace including hospitals, schools and universities. It includes a range of behavior from mild irritation and annoyances to serious abuses which can even involve forced activity beyond the boundaries of the job description. Power harassment is considered a form of illegal discrimination and is a form of political and psychological abuse, and bullying.
Psychological
This is humiliating, intimidating or abusive behavior which is often difficult to detect, leaving no evidence other than victim reports or complaints. It characteristically lowers a person's self-esteem or causes one overwhelming torment. This can take the form of verbal comments, engineered episodes of intimidation, aggressive actions or repeated gestures. Workplace harassment by individuals or by groups (mobbing) falls into this category.
Racial
The targeting of an individual because of their race or ethnicity. The harassment may include words, deeds, and actions that are specifically designed to make the target feel degraded due to their race or ethnicity.
Religious
Religious persecution is verbal, psychological or physical harassment against targets because they choose to practice a specific religion. Religious abuse is abuse due to religious settings. Religious harassment can include coercion into forced conversion.
Sexual
Sexual harassment is offensive or humiliating behavior that is related to a person's sex. It can be a subtle or overt expression of sexuality (sexual annoyance, e.g. flirting) that results in miscommunication, or implied sexual conditions of a job (sexual coercion). It includes unwanted and unwelcome words, facial expressions, sexual attention, deeds, actions, symbols, or behaviors of a sexual nature that make the target feel uncomfortable. This can involve suggestive looks or comments, staring at a person's body, or the showing of inappropriate photos. It can happen anywhere, but is most common in the workplace, schools, and the military. Even if certain civility codes were relevant in the past, changing cultural norms call for policies that avoid intentional fallacies between the sexes and among the same sexes. Women are substantially more likely to be affected than men. The main focus of groups working against sexual harassment has been the protection of women, but in recent years awareness has grown of the need to protect LGBTQ people (for the right of gender expression), including transgender women and men.
Workplace
Workplace harassment is the offensive, belittling or threatening behavior directed at an individual worker or a group of workers. Workplace harassment can be verbal, physical, sexual, racial, or bullying.
Recently, matters of workplace harassment have gained interest among practitioners and researchers as it is becoming one of the most sensitive areas of effective workplace management. In some East Asian countries, it has attracted substantial attention from researchers and governments since the 1980s, because aggressive behaviors have become a significant source of work stress, as reported by employees. Under occupational health and safety laws around the world, workplace harassment and workplace bullying are identified as being core psychosocial hazards.
Laws
United States
Harassment, under the laws of the United States, is defined as any repeated or continuing uninvited contact that serves no useful purpose beyond creating alarm, annoyance, or emotional distress. In 1964, the United States Congress passed Title VII of the Civil Rights Act which prohibited discrimination at work on the basis of race, color, religion, national origin and sex. This later became the legal basis for early harassment law. The practice of developing workplace guidelines prohibiting harassment was pioneered in 1969, when the U.S. Department of Defense drafted a Human Goals Charter, establishing a policy of equal respect for both sexes. In Meritor Savings Bank v. Vinson, the U.S. Supreme Court recognized harassment suits against employers for promoting a sexually hostile work environment. In 2006, President George W. Bush signed a law which prohibited the transmission of annoying messages over the Internet (aka spamming) without disclosing the sender's true identity. An important standard in U.S. federal harassment law is that to be unlawful, the offending behavior either must be "severe or pervasive enough to create a work environment that a reasonable person would consider intimidating, hostile, or abusive," or that enduring the offensive conduct becomes a condition of continued employment; e.g. if the employee is fired or threatened with firing upon reporting the conduct.
New Jersey's Law Against Discrimination ("LAD")
The LAD prohibits employers from discriminating in any job-related action, including recruitment, interviewing, hiring, promotions, discharge, compensation and the terms, conditions and privileges of employment on the basis of any of the law's specified protected categories. These protected categories are race, creed, color, national origin, nationality, ancestry, age, sex (including pregnancy and sexual harassment), marital status, domestic partnership status, affectional or sexual orientation, atypical hereditary cellular or blood trait, genetic information, liability for military service, or mental or physical disability, including HIV/AIDS and related illnesses. The LAD prohibits intentional discrimination based on any of these characteristics. Intentional discrimination may take the form of differential treatment or statements and conduct that reflect discriminatory animus or bias.
Canada
In 1984, the Canadian Human Rights Act prohibited sexual harassment in workplaces under federal jurisdiction.
United Kingdom
In the UK, there are a number of laws protecting people from harassment, including the Protection from Harassment Act 1997 and the Criminal Justice and Public Order Act 1994.
See also
References
External links
1750s neologisms
Abuse | Harassment | Biology | 2,663 |
60,261,481 | https://en.wikipedia.org/wiki/Government%20formation | Government formation is the process in a parliamentary system of selecting a prime minister and cabinet members. If no party controls a majority of seats, it can also involve deciding which parties will be part of a coalition government. It usually occurs after an election, but can also occur after a vote of no confidence in an existing government.
The equivalent phenomenon in presidential republics is a presidential transition.
Delays or failures in forming a government
A failure to form a government is a type of cabinet crisis where a functional cabinet (whether a majority or a minority government ruling with a confidence and supply agreement) cannot be formed. Such a problem typically occurs after an inconclusive election, but can also happen if a formerly-stable government falls apart mid-term and new elections are not called.
The process of government formation can sometimes be lengthy. For example, following the 2013 German federal election, Germany engaged in 85 days of government formation negotiations, the longest in the nation's post-war history. The outcome was the third Merkel cabinet, another grand coalition led by Angela Merkel.
During the formation process, the outgoing ministers typically remain in office as a caretaker government. If the cabinet formation process is lengthy, this can result in a substantial extension of their term; Dutch Prime Minister Mark Rutte did not run for re-election in the 2023 Dutch general election, but remained in office for 7 months during the cabinet formation.
Belgium
Belgian governments are typically coalition governments due to the split between the Flemish and French-speaking parts of the country. On occasion, this has led to a situation where no party is able to form a government but the Parliament does not vote to return to the polls. This occurred most notably in 2010–11, when Belgium was ruled by a caretaker government for a year and a half. Though there were calls for drastic measures to resolve the issue, including via a partition of Belgium, government functions continued without interruption under the caretaker government.
See also
Formateur
Dutch cabinet formation
German governing coalition
References
Beginnings | Government formation | Physics | 401 |
34,900,000 | https://en.wikipedia.org/wiki/Digital%20firm | The Digital Firm is a kind of organization that has enabled core business relationships through digital networks. These digital networks are supported by enterprise-class technology platforms that have been leveraged within an organization to support critical business functions and services. Some examples of these technology platforms are Customer Relationship Management (CRM), Supply Chain Management (SCM), Enterprise Resource Planning (ERP), Knowledge Management System (KMS), Enterprise Content Management (ECM), and Warehouse Management System (WMS) among others. The purpose of these technology platforms is to digitally enable seamless integration and information exchange within the organization to employees and outside the organization to customers, suppliers, and other business partners.
History
Origin of "The Digital Firm"
The term "Digital Firm" originated, as a concept in a series of Management Information Systems (MIS) books authored by Kenneth C. Laudon. It provides a new way to describe organizations that operate differently than the traditional brick and mortar business as a result of broad sweeping changes in technology and global markets. Digital firms place an emphasis on the digitization of business processes and services through sophisticated technology and information systems. These information systems create opportunities for digital firms to decentralize operations, accelerate market readiness and responsiveness, enhance customer interactions, as well as increase efficiencies across a variety of business functions.
Acceleration of technology adoption
Technology adoption has been increasing as digital firms continually look to achieve greater levels of cost savings, competitive advantage, and operational performance optimization. As organizations adopt technology, the internal appetite for additional technologies increases and in some cases accelerates. This acceleration of technology adoption by digital firms creates a "digital divide". Emerging technology is absorbed at varying rates across organizations. This technology divergence can affect competitive dynamics in the marketplace between firms that achieve operational benefits from the technology and firms which have yet to adapt.
While the growth of new technology consumption is not uniform across organizations, the trend for business-driven investment in technology across all markets has increased and continues to increase. During the span of 1990 to 2006, the gross U.S. domestic investment in information and communications technology, as measured by the U.S. Census Bureau, increased by 170%.
The market for Enterprise Resource Planning (ERP) systems and other packaged applications started to grow substantially during the 1990s, to the point that the ERP market alone accounts for approximately $25 billion. According to surveys conducted in 2002, nearly "75% of global Fortune 1000 firms had implemented SAP's ERP suite".
Many businesses are aware of the need to further digitalize in the post-COVID-19 climate, but those in less developed areas have traditionally been slower to do so.
Advantages
Through digital networks and information systems, the digital firm is able to operate core business services and functions continuously and more efficiently. This digital enablement of business processes creates highly dynamic information systems allowing for more efficient and productive management of an organization.
According to the European Commission, as of 2022, across all economic categories, digital enterprises have been more likely than their non-digital counterparts to increase employment. Manufacturing (33%) and infrastructure (30%) are the two industries in Europe with the most digital and environmentally friendly businesses.
Additionally, digital enablement of core business functions and services provides an organization with opportunities to:
Operate business continuously ("Time Shifting")
Operate business in a global workplace ("Space Shifting")
Adapt business strategies to meet market demands
Create business value from technology investments
Drive efficiency improvements in inventory and supply chain
Enhance the management of customer relationships
Improve organizational productivity
Effects on organizational performance
Technology and information systems serve many critical roles in a digital firm by providing technology-driven capabilities that increase operational performance. For example, digital networks and information systems allow organizations to connect and integrate supply chains in ways that are real-time, uninterrupted and highly responsive to market conditions.
Another example of an information system that can increase an organization's performance awareness and management capabilities is a Real-Time Business Intelligence (RTBI) system. A RTBI system can provide a highly responsive and strategic decision support platform for an organization to analyze operational events as they occur. RTBI systems often work closely with Organizational Risk Management (ORM) systems in this capacity to increase capabilities around monitoring operational performance and assessing operational risks. These types of information systems can increase an organization's capabilities to effectively manage performance and productivity.
The three main enterprise information systems that can positively affect an organization's performance and productivity are:
Enterprise Resource Planning (ERP)
ERP deployments can be complex and require a significant shift in business operations for an organization but the benefits can be substantial.
After implementation of an ERP system within an organization, there are measurable performance and productivity gains that can be directly correlated to the ERP system go-live event. One study conducted a detailed analysis of the ERP data produced and found a direct causal relationship between ERP systems and performance gains in an organization.
Organizations that deploy ERP systems, based on performance and productivity gains, typically also implement both of the following enterprise platforms.
Customer Relationship Management (CRM)
Organizations leverage CRM systems to improve the overall management of their relationships with customers. CRM systems operate as enterprise platforms that provide digital firms with opportunities to closely manage all aspects of interactions with customers through customer-oriented business processes.
Studies suggest that organizations which implement CRM systems may encounter some lag time until the CRM productivity effects are fully realized in the firm. The lag effects are difficult to measure, however, and depend in part on the organization's ability to leverage the new CRM system and adapt to the resulting changes in business operations.
Supply Chain Management (SCM)
Studies of organizations that implemented SCM systems to improve supply chain management capabilities found that those systems had a significant impact on productivity and performance within the organization. Additionally, the implementation of SCM and CRM systems differed from an ERP implementation in that organizational performance could be directly correlated "with both the initial purchase and go-live event".
SCM and CRM systems are often viewed as "extended enterprise systems" due to the way that they integrate with ERP systems and the benefits that they bring to organizations.
See also
Information Systems
Management Information System (MIS)
Enterprise Resource Planning (ERP)
Customer Relationship Management (CRM)
Supply Chain Management (SCM)
Warehouse Management System (WMS)
Real-Time Business Intelligence (RTBI)
Organizational Risk Management (ORM)
Knowledge Management System (KMS)
Enterprise Content Management (ECM)
Office Automation Systems (OAS)
Expert Systems
Software as a Service (SaaS)
References
Information
Information systems
Information technology management
Management systems
Business software | Digital firm | Technology | 1,351 |
39,580,391 | https://en.wikipedia.org/wiki/Academic%20Health%20Science%20Networks | Academic Health Science Networks (AHSNs) are membership organisations within the NHS in England. They were created in May 2013 with the aim of bringing together health services, and academic and industry members. Their stated purpose is to improve patient outcomes and generate economic benefits by promoting and encouraging the adoption of innovation in healthcare. In 2019 the AHSNs were issued with a fresh five-year licence to continue their work.
Background and history
A report in 2008 by Lord Ara Darzi noted that the NHS was poor at innovating, and suggested wider collaboration between industry, education and all aspects of healthcare. The NHS is one of the world's largest employers and with the UK's spending on healthcare at over £140b in 2010 or 9.6% of national GDP, it is a key component of the national economy. There is a generally recognised need to improve the NHS's ability to identify and adopt innovation.
AHSNs were first proposed by name in the 2011 report "Innovation Health and Wealth" by Sir David Nicholson, chief executive of NHS England, and launched by the Prime Minister, David Cameron. A request for expressions of interest was issued in June 2012 and, on 23 May 2013, the 15 designated AHSNs were announced. They are regional, with non-overlapping territories covering the whole of England.
AHSNs take their place in the "fragmented, cluttered and confusing" landscape of NHS innovation. As part of the "Sunset Review" a number of initiatives closed in 2013 including the NHS National Innovation Centre, NHS Institute for Innovation and Improvement, and Health Innovation and Education Clusters (HIECs). There is still a range of active initiatives including NHS Innovation Hubs, NHS Supply Chain Innovation and NHS Improvement.
Funding
Core funding comes from NHS England and work was "in hand to identify the funding" when expressions of interest were invited. A briefing paper assumed funding to be in the region of £2 per head of population served. With a population averaging 3m people, a typical AHSN might have expected roughly £6m per AHSN per year. These figures reflect early expectations but were neither clarified nor confirmed with the designation announcement.
When contracts were signed with NHS England in November 2013, the 15 AHSNs shared around £60 million of funding (an average of about £4 million per network, below the earlier £6 million expectation).
Operation and activity
Although their purpose is clear, the structure and approach of individual AHSNs is a matter for local decision. This is apparent in the contrasting approaches taken and the variety of opinions expressed by network founders.
As membership organisations, AHSNs do not have any direct authority over their members, but the Innovation Health and Wealth report states: "all NHS organisations will aspire to be affiliated to their local AHSN where the AHSN will operate as a gateway for the NHS on innovation and working with the life sciences industry on the evaluation, commercialisation and rapid adoption of health technologies". They will be seen to be successful if and only if they can demonstrably improve the rate of adoption of medical technologies and ICTs.
In April 2014, it emerged that NHS England's 2014–15 business plan showed that AHSNs would receive £53.6m that financial year, a 5 per cent cut on the previous year's budget. However, it represents a larger 23 per cent cut on the £70m NHS England announced in May 2013. A senior AHSN figure told the Health Service Journal that NHS England risked "castrating" the programme by cutting the budget and by a perceived lack of promotion of the networks. "They are trying to save a comparatively small amount of money [by cutting the budget] but in doing so they risk castrating the AHSNs. Commissioners are not going to sign up to us if they are thinking that we are not going to be around in two years' time".
In 2019, the AHSNs received a new five-year licence, running to 2023, funded by NHS England, NHS Improvement and the Office of Life Sciences.
See also
National Institute for Clinical Excellence
National Institute for Health and Care Research
Academic health science centre
References
External links
List of the 15 AHSNs – West of England AHSN, accessed 20 August 2020
Innovation in the United Kingdom
Medical and health organisations based in England
Medical education in the United Kingdom
Life sciences industry | Academic Health Science Networks | Biology | 867 |
64,530,838 | https://en.wikipedia.org/wiki/Lactobacillus%20vaccine | Lactobacillus vaccines are used in the therapy and prophylaxis of non-specific bacterial vaginitis and trichomoniasis. The vaccines consist of specific inactivated strains of Lactobacilli, called "aberrant" strains in the relevant literature dating from the 1980s. These strains were isolated from the vaginal secretions of patients with acute colpitis. The lactobacilli in question are polymorphic, often shortened or coccoid in shape and do not produce an acidic, anti-pathogenic vaginal environment. A colonization with aberrant lactobacilli has been associated with an increased susceptibility to vaginal infections and a high rate of relapse following antimicrobial treatment. Intramuscular administration of inactivated aberrant lactobacilli provokes a humoral immune response. The production of specific antibodies both in serum and in the vaginal secretion has been demonstrated. As a result of the immune stimulation, the abnormal lactobacilli are inhibited, the population of normal, rod-shaped lactobacilli can grow and exert its defense functions against pathogenic microorganisms.
Medical uses
Lactobacillus vaccines are primarily used in the therapy and prophylaxis of dysbiotic conditions of the vaginal ecosystem (bacterial vaginitis, vaginal trichomoniasis, and to a lesser extent, vaginal candidiasis). Secondarily, they are used in the prophylaxis and complementary treatment of various urogenital diseases, if vaginal dysbiosis is suspected to be the root cause of the condition. These include (chronic) upper genital tract infections, urinary tract infections and cervical dysplasias. The prophylactic use in patients with a history of late miscarriage and preterm labor is practiced preferably before conception.
Effectiveness
Bacterial vaginitis
Rüttgers studied the benefit of vaccination with Gynatren in preventing bacterial vaginitis in a patient group with frequent vaginal infections. All of the 192 patients participating in the prospective, randomized, double-blind, placebo-controlled study received local treatment with a tetracycline-amphotericin B vaginal suppository. 95 patients additionally received vaccination with Gynatren, whereas 97 patients were treated with a placebo preparation of identical outward appearance. One month after the start of the treatment 85% of the patients in the active treatment group and 83% in the placebo group were cured (asymptomatic and free from pathogenic bacteria). After 3 months 78% of the verum group and 60% of the placebo group remained free from infection. After 6 months 76% and 40%, and after 12 months 75% and 37% of women in the respective groups were still free from infection.
Another study by Boos and Rüttgers investigated the therapeutic effect of SolcoTrichovac when used as a sole therapeutic agent. The 182 patients enrolled into the study showed symptoms of acute vaginitis, and most of them had been treated for months with topical or oral antibiotics or antimycotics without success. For the course of the study they were advised to refrain from using such preparations. Six months after the first injection 71% of the patients showed a normal vaginal flora according to the classification of Jirovec and Peter. Further studies on the therapeutic and preventive efficacy of lactobacillus vaccines alone or in combination to antimicrobial treatment in bacterial vaginitis have produced similar results.
Vaginal trichomoniasis
Litschgi has investigated the use of SolcoTrichovac both as a therapeutic and as a recurrence prophylactic measure. On the latter subject he reported enrolling 114 women with trichomoniasis into a randomized, double-blind, placebo-controlled study, 66% of whom had case histories of recurrent vulvovaginitis. All patients as well as their sexual partners received systemic and/or local nitroimidazole treatment. 61 patients were additionally vaccinated with SolcoTrichovac, 53 patients with placebo. At the first follow-up check, 6 weeks after the first injection, 3 patients in each group still had motile trichomonads. Among the patients that were pronounced cured at this visit, a total of 15 reinfections (33.3%) were recorded in the placebo group during the follow-up period from month 4 to month 12 after the first injection, whilst in the verum group there were no new infections. Harris designed a similar randomized, double-blind, placebo-controlled study with 198 participants and reported a reinfection rate of 21.6% in the placebo group, in contrast to 3.1% in the SolcoTrichovac group 8 months after completing the course of three injections. Further studies have confirmed the efficacy of lactobacillus vaccines as a powerful complementary treatment and recurrence prophylactic measure in trichomoniasis.
Vaginal candidiasis
Vaginal mycoses are considered a weak indicator that the lactobacillus flora is compromised, since Candida albicans and Lactobacilli can coexist symbiotically. Consequently, immunotherapeutic modulation of the lactobacillus flora has a lower success rate in this condition than in bacterial and trichomonal vaginitis. Verling reported vaccinating with SolcoTrichovac 42 patients with candida-induced chronic colpo-vaginitis who had shown resistance to usual fungicidal treatments such as topical amphotericin B, nystatin and povidone-iodine. Of these, 7 patients (17%) were healed and another 18 patients (43%) showed only mild symptoms one month after the third injection.
Urinary tract infections
In many women prone to recurrent urinary tract infections, the mucosal surfaces of the vaginal introitus are colonized by Escherichia coli and Enterococci, rather than Lactobacilli. Reid and Burton have postulated that the vagina may act as a reservoir for uropathogens. In the proposed scenario, the dysbiotic vaginal environment continually seeds the bladder with infectious microbes, leading to a persistent or recurrent urinary tract infection. As they suggested, by recolonizing the vagina with lactobacilli and displacing the pathogens, the infection of the bladder may resolve. No studies have been conducted on the use of modern formulations of lactobacillus vaccines in recurrent urinary tract infections. The inventor of lactobacillus vaccines, Újhelyi, reported initial success in preventing uropoietic infections in pregnant women under therapy with experimental single-strain vaccines.
Intrauterine infections during pregnancy
The relationship between intrauterine infections and second-trimester pregnancy loss as well as early preterm delivery has been established. Most bacteria found in the uterus in association with preterm labor are of vaginal origin, with only a small minority originating from the abdominal cavity or from an inadvertent needle contamination at the time of amniocentesis. Pathogenic bacteria may ascend through the cervix and maintain a subacute infection of the upper genital tract and the fetal membranes for months before the infection is eventually detected. These infections tend to remain asymptomatic and are not associated with fever, a tender uterus, or peripheral-blood leukocytosis. Often the first symptoms are the rupture of membranes and preterm labor, at which point the conservation of pregnancy becomes difficult. Early treatment and prophylaxis of vaginal infections are crucially important, especially in those patients that have already experienced a second-trimester miscarriage, which is associated with a 27% rate of recurrence (pregnancy loss between 14 and weeks of gestation), a 10% rate of extremely preterm delivery (24 to weeks), and a further 23% rate of very, moderate or late preterm delivery (28 to weeks) in the subsequent pregnancy.
A study performed by Lázár and her coworkers examined the incidence of low-birth-weight offspring among therapeutically and preventively vaccinated women. Out of 413 pregnant women presenting with acute urogenital infections, 209 were vaccinated with Gynevac additionally to conventional antimicrobial treatment, whereas 204 women only received antimicrobial therapy. A birth-weight below 2500 g was recorded in 10.4% of vaccinated patients compared to 24.1% among patients that had not received lactobacillus vaccination. The rate of perinatal mortality was 1.42% in the vaccinated group in contrast to 3.86% among non-vaccinated patients. On average the gestational period was longer in vaccinated patients, 81.3% of whom reached full term, in contrast to only 66.7% of the non-vaccinated patients. Preventive lactobacillus vaccination with Gynevac was performed on 1396 healthy women, partly before conception and partly during early pregnancy. The reported incidence of low birth weight was 7.9% among vaccinated women compared to 14.0% among healthy controls. In a subsequent prospective study with the participation of 1852 vaccinated pregnant women and 1418 controls, Lázár and coworkers reported a preterm birth rate of 7.1% among vaccinated women and 12.2% among those that declined lactobacillus vaccination.
Formulation
Each ampoule of Gynatren contains at least inactivated microorganisms of eight Lactobacillus strains in approximately equal amounts ( microorganisms per strain). Three strains belong to the species L. vaginalis, three strains to L. rhamnosus, one strain to L. fermentum and one to L. salivarius. The eight specific aberrant polymorphous Lactobacillus strains have been deposited at the Westerdijk Institute (Centraalbureau voor Schimmelcultures) in 1977 under the strain numbers CBS 465.77 to CBS 472.77. Inactivated material from the eight strains is mixed and diluted with physiological sodium chloride solution. Phenol is added as a preservative. The vaccine usually has a total nitrogen content of 3.68 mg in 100 ml solution (based on the dry material, using the Kjeldahl method). Using the conversion factor of 6.25 to convert nitrogen concentration to protein concentration, this means that there is on average 0.115 mg bacterial proteins in each ampoule of 0.5 ml.
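The stated protein figure follows directly from the numbers above; as a worked check of the nitrogen-to-protein conversion:

$\frac{3.68\ \text{mg N}}{100\ \text{ml}} \times 6.25 = \frac{23.0\ \text{mg protein}}{100\ \text{ml}} = 0.230\ \text{mg/ml}, \qquad 0.230\ \text{mg/ml} \times 0.5\ \text{ml} = 0.115\ \text{mg protein per ampoule}.$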
Gynevac is composed of five specific aberrant polymorphous Lactobacillus strains, four belonging to the species L. fermentum and one to the species L. reuteri. The further ingredients are formaldehyde and sodium ethylmercuric thiosalicylate (Thiomersal) as preservatives and sodium chloride solution as a diluent. Each ampoule of 1 ml contains between 0.08 mg and 0.32 mg bacterial proteins.
Schedule
The usual vaccination schedule of Gynatren is 3 intramuscular injections of 0.5 ml vaccine at intervals of 2 weeks, followed by a booster dose of 0.5 ml 6–12 months after the first injection. The booster injection raises the serum antibody titres in most cases back to similar levels to those found shortly after primary vaccination and ensures renewed immune protection for about 2 further years. Grčić et al. recommends the periodic administration of booster doses every 2 years to maintain protective immunity for many years.
The schedule of Gynevac includes 5 intragluteal injections of 1 ml vaccine at intervals of 10 days. Protective immunity is conferred for about a year. The primary immunization program may be repeated, if reinfection or relapse occurs.
Side effects
Common side effects include pain, redness and swelling or hardening of the tissues at the injection site. Systemic vaccination reactions commonly include fatigue, flu-like symptoms, a raised temperature between , shivering, headache, dizziness, nausea and a swelling of the inguinal lymph nodes. Symptoms usually subside within days after injection and are less pronounced or absent at subsequent injections.
Contraindications
Gynatren is contraindicated in patients with a history of allergic reaction to the bacterial antigens or phenol contained in the vaccine. Further contraindications are acute fever, active tuberculosis, severe hematopoietic disorders, decompensated cardiac or renal insufficiency, autoimmune and immunoproliferative diseases. Gynevac is additionally contraindicated in arthritides affecting several joints, and during immunosuppressive therapy or radiotherapy.
Pregnancy
Lactobacillus vaccines are not contraindicated during pregnancy and breastfeeding. Both Gynatren and Gynevac may be prescribed during pregnancy upon careful individual consideration of the potential risks and benefits. Lázár reported vaccinating 3457 pregnant patients with Gynevac between 1976 and 1982, usually starting the vaccination schedule at the first prenatal care visit, and did not observe any impairment of pregnancy or teratogenic effect in association with the lactobacillus vaccine. Rüttgers made similar observations about Gynatren when administered in the second trimester.
Mechanism of action
The mechanism of action of lactobacillus vaccines is far from being completely understood. At least three theories have been proposed. The most commonly accepted one, as formulated by Påhlson and Larsson, suggests that the vaccine breaks the immune tolerance of the host and makes it possible for the immune defense to attack aberrant, "ecologically wrong" lactobacilli and create an environment for beneficial strains to become dominant. Rüttgers on the other hand described SolcoTrichovac as an anti-adhesive vaccine, suggesting that the induced antibodies and perhaps other mechanisms inhibit the adhesion of microbes to epithelial cells in a largely nonspecific manner. A third hypothesis, advanced by Goisis among others, involves the possibility of an immunomodulation resulting in tolerance, rather than defense against the bacterial antigens used in the vaccine.
Multiple authors have proposed cellular immunological phenomena as the primary mediators of protective effect of lactobacillus vaccines. Studies into cellular immunity are technically challenging in humans owing to the difficulty of sampling lymphoid tissues as opposed to secretions, and none has been performed so far on lactobacillus vaccines. A number of studies have been published on the humoral responses to primary and booster immunization in serum and in the vaginal secretions. Rüttgers identified mucosal secretory IgA as a strong immune correlate of vaccine efficacy.
Humoral immune response
Mucosal surfaces are a major portal of entry for pathogens into the body. Antibodies in mucosal secretions represent the first line of immune defense of the mucosae. They are capable to bind to specific pathogens and prevent their adherence to the epithelial cell lining of the mucous membranes. Neutralized pathogens can then be eliminated from the mucosal surfaces by means of conveyance by the mucus stream. Mucosae throughout the body have been described as parts of a common mucosal immune system (CMIS). The basis for this concept is the observation that precursor lymphocytes sensitized to a certain antigen at a specific mucosal site can migrate and assume effector function at distant mucosal tissues. Although the female genital tract is thought of as part of the CMIS, it shows some characteristics that set it apart from other mucosal immune sites. One of these features is the relative inefficacy of local antigenic stimulation owing to a sparsity of mucosal lymphoepithelial inductive sites. A further distinctive characteristic is the significant contribution of the systemic immune compartment to the pool of antibodies. In most external secretions, like tears, saliva or milk, the dominant antibody class is secretory IgA (sIgA), whereas in the cervicovaginal secretions IgG levels equal or exceed the levels of sIgA. A large portion of this IgG is thought to originate from the circulation and appear in vaginal fluids via transudation through the uterine tissues. There are reports that systemic immunization can stimulate humoral immune protection in vaginal secretions more efficiently than in other mucosal secretions, where serum-derived IgG concentrations remain lower.
Milovanović and coworkers studied the serum antibody response of 97 women with trichomonad colpitis to primary immunization with SolcoTrichovac (3 intramuscular injections of 0.5 ml vaccine at intervals of 2 weeks) and a booster dose of 0.5 ml administered 12 months after the first injection. The agglutination titres were determined by preparing two-fold serial dilutions of the serum samples in isotonic saline (dilutions of 1:10 to 1:1280), using 0.5 ml concentrated lactobacillus vaccine as an agglutinogen. An at least threefold elevation of the agglutination titres following primary immunization was detected in the serum of 93.8% of patients; the rest of the patients were considered non-responders or poor responders to the vaccination. The geometric mean of the agglutination titres increased from the basal level of 1:56 before vaccination to 1:320 after finishing the primary immunization program, and it was still 1:140 one year later. Two weeks after the booster injection the mean titres were raised back to 1:343.
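For illustration, a minimal sketch of how a geometric mean titre is computed from the reciprocal titres of a two-fold serial dilution series; the individual titre values below are hypothetical and not taken from the study:

import math

def geometric_mean_titre(reciprocal_titres):
    # Each value is the reciprocal of the highest serum dilution still
    # showing agglutination (e.g. 320 for a titre of 1:320).
    logs = [math.log(t) for t in reciprocal_titres]
    return math.exp(sum(logs) / len(logs))

# Hypothetical reciprocal titres from a 1:10 ... 1:1280 dilution series
titres = [80, 160, 320, 640, 320, 160]
print("geometric mean titre: 1:%.0f" % geometric_mean_titre(titres))

The geometric mean (the exponential of the arithmetic mean of the logarithms) is preferred over the arithmetic mean here because titres form a geometric series of dilution steps.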
Rüttgers quantified the total concentration of secretory IgA antibodies in the vaginal secretions of 192 women with bacterial vaginitis participating in a randomized, double-blind, placebo-controlled study. 95 patients were treated with SolcoTrichovac and 97 with placebo, according to the primary immunization scheme described above. The samples were tested using the enzyme-linked immunosorbent assay according to Åkerlund et al. The mean baseline concentrations were similar in the two comparative groups. One month after the start of the therapy the sIgA concentration in the active-treatment group had risen significantly compared to the baseline and also in comparison to the placebo group. This difference gradually decreased over the subsequent months. After 12 months the sIgA concentration in the SolcoTrichovac group had fallen back to the baseline value. About 35% of the actively treated patients had not developed a pronounced mucosal immune response. In these patients the sIgA concentration of the vaginal secretion remained unchanged or showed only a short-lived elevation. Rüttgers observed that this group of patients by and large overlapped with those that had had reinfections during the follow-up period of 12 months, and concluded that vaginal sIgA concentration is a better correlate of immune protection than serum antibody titres.
On the question of the mechanism underlying the induction of IgA-secreting plasma cells in the vaginal mucosa, Pavić and Stojković suggested that intramuscularly administered antigens may be transported to the local immunocompetent organ, in this case the vagina, and provoke a local secretory immune response. Patrolling dendritic cells exposed to killed bacterial antigens at a muscular injection site however typically do not migrate further than the local draining lymph nodes, where antigen presentation and the activation of T and B cells occur. Effector and memory lymphocytes in turn preferentially home back to the tissue where they were first activated, in this case the secondary lymph nodes. This is the reason why parenteral immunization with non-replicating antigens is generally considered ineffective in eliciting a mucosal immune response. Another possible explanation for an increased level of anti-aberrant-lactobacillus sIgA in vaginal secretions involves natural priming by mucosal infection at this site. Similarly to how subcutaneously administered killed whole-cell cholera vaccines reportedly only provoke substantial mucosal secretory antibody response in cholera‐endemic countries, vaginal priming with aberrant lactobacilli may be necessary for the generation of mucosal IgA-secreting plasma cells following parenteral vaccination.
Effect on the vaginal ecology
Protective lactobacilli inhibit the growth of other microorganisms by competing for adherence to epithelial cells and by producing antimicrobial compounds. These compounds include lactic acid, which lowers the vaginal pH, hydrogen peroxide and bacteriocins. Aberrant strains of Lactobacilli are incapable of effectively controlling the vaginal microbiota, leading to an overgrowth of a mixed flora of aerobic, anaerobic and microaerophilic bacterial species. Antibodies and cellular defense mechanisms directed against aberrant lactobacilli induced by vaccination have been shown to change the composition of the vaginal flora. Milovanović and his coworkers found a marked reduction in the prevalence of Klebsiella and Proteus infestations in 36 trichomoniasis patients under therapy with SolcoTrichovac, while normal, metabolically active Lactobacillus species that could initially be found in only 11% of patients were present in 72% after finishing treatment. Karkut observed a significant reduction in the incidence of Escherichia coli (55% to 23%), Group B Streptococci (37% to 10%), Enterococci (36% to 12%), Bacteroides (25% to 3%) and Gardnerella vaginalis (37% to 9%) in 94 patients treated for recurrent bacterial vaginitis eight weeks after the initial injection. The incidence of aberrant lactobacilli fell from 17% to 3%, while that of normal lactobacilli rose from 31% to 72% during the course of the eight weeks. Harris reported a significant reduction of the number of microbial species (other than lactobacilli) found in post-treatment cultures from 77 patients. Litschgi found that the incidence of mixed bacterial infections characterized by the presence of G. vaginalis, haemolytic Streptococci and Staphylococci was reduced by two-thirds four weeks after finishing therapy in 120 patients treated for bacterial colpitis. He observed a similar reduction of the less frequent Klebsiella- and Proteus-dominant infections.
A quantitative bacteriological analysis has been performed by Milovanović and coworkers in a group of 36 trichomoniasis patients. The study aimed at quantifying locally unusual and mostly pathogenic organisms, whereby anaerobes were excluded for methodological reasons. Bacterial counts of aerobes excluding lactobacilli reportedly dropped from 18,900 organisms per 0.1 ml vaginal secretion on the day of the first SolcoTrichovac injection to 5800 organisms 112 days thereafter. Goisis and his coworkers reported a mean count of lactobacilli of organisms per ml vaginal secretion before vaccination with SolcoTrichovac in 19 trichomoniasis patients. One month after the start of the treatment the count increased to bacilli per ml. In 46 patients with bacterial vaginitis the lactobacillus counts were significantly higher during the entire course of treatment with bacilli per ml before and bacilli per ml after vaccination. While this study summed the counts of normal and aberrant lactobacilli, microscopic study of the fixed, Gram-stained smears of vaginal secretions revealed lactobacilli of differing lengths, with a predominance of short forms in trichomoniasis patients before vaccination; the bacilli retained this tendency even in cultures started from the secretion samples. The morphology of lactobacilli shifted towards normal rod-shaped forms under therapy in most patients, which property was once again retained in culture. Müller and Salzer have confirmed the quantitative increase in physiological lactobacilli under vaccination therapy of 28 patients with recurrent bacterial infections.
The decreasingly diverse and numerous populations of non-lactic acid-producing bacteria and the concurrent growth of normal, metabolically active lactobacilli lead to a gradual decrease of vaginal pH. Goisis and his coworkers reported in trichomoniasis patients a mean pH value of 6.14 at the time of the first injection, 5.64 two weeks later, and 5.23 on the day of treatment completion, two weeks after the second visit. In patients with vaginitis not caused by trichomonads a mean initial pH of 5.81 was documented, which dropped to 5.39 two weeks later and finally to 4.98. Karkut has published very similar results. Boos and Rüttgers measured in 182 patients with bacterial vaginitis a vaginal pH of 4.90 before therapy and 4.26 six months after the start of therapy.
History
Invention
In 1969 a research project was started in Budapest, Hungary to develop a vaccine against trichomoniasis, initiated by György Philipp, a Hungarian gynaecologist, and led by Károly Újhelyi, head of the Vaccine Production and Research Department of the Hungarian Institute of Public Health, one of the most distinguished Hungarian physician-scientists of the 20th century and a pioneer of vaccine research and technology. In 1972 the research group reported vaccinating 300 patients with acute trichomonal colpitis with autovaccines consisting primarily of inactivated Trichomonas vaginalis strains cultured from vaginal samples of the patients themselves, along with some residual amounts of the accompanying bacterial flora, inadvertently present in the cultures. Despite a marked alleviation of clinical symptoms, all trichomoniasis patients still tested positive upon completion of the autovaccine therapy.
Újhelyi and his coworkers attributed the partial therapeutic effect to the bacterial residue in the T. vaginalis cultures used for the vaccine. They identified a Gram-positive Lactobacillus with a tendency to polymorphism commonly present in the accompanying flora of trichomoniasis patients. To test their assumption, a further 700 patients each received treatment with an inactivated bacterial vaccine composed of one of 16 such polymorphic Lactobacillus strains. The effect was studied on eight patient groups with the following conditions: (1) colpitis, including trichomonal colpitis (2) erythroplakia (3) endocervicitis (4) upper genital tract infection (5) urinary tract infection (6) infertility (7) genital lesions and tumors (8) trichomoniasis during pregnancy, childbirth and postpartum period. Treatment with the experimental bacterial vaccines was capable of eliminating trichomoniasis in 28% of infected patients and resolved or alleviated many of the examined urogenital conditions. After this initial breakthrough, Újhelyi and his coworkers directed their efforts into the development and optimization of Gynevac, a composite bacterial vaccine containing five aberrant, polymorphic Lactobacillus strains. Erika Lázár, a Hungarian gynaecologist and specialist in the field of reproductive medicine, and her coworkers performed many of the clinical trials on Gynevac, focusing clinical and research interest on the prevention of ascending infections during pregnancy. In two prospective studies performed between 1976 and 1982 in rural, socioeconomically disadvantaged Kazincbarcika with the enrollment of nearly 3500 pregnant women, lactobacillus vaccination appeared to reduce the incidence of preterm birth by about 40%.
1980-2012
In 1975 the research group of Újhelyi sold the unpatented technology to Solco Basel AG, a Swiss pharmaceutical company, with the agreement that Solco would manufacture and market the vaccine in Western Europe, whereas the Hungarian company HUMÁN Oltóanyagtermelő Vállalat (later Vakcina Kft.) would supply the Eastern markets ("Soviet Bloc"). In 1980 Solco's researchers patented the vaccine; in 1981 the company obtained regulatory approval and started marketing the vaccine under the trade name SolcoTrichovac. After prolonged clinical trials, mainly driven by Lázár, the production and marketing of Gynevac started in Hungary in 1997.
After Solco's acquisition of the technology, mainly Swiss and German researchers joined the investigations. In 1980 Mario Litschgi reported a cure rate of trichomoniasis of 92.5% in a clinical study with 427 female participants. Following this initial success, a number of studies were conducted on the vaccine. Most of the reports can be found in the proceedings of two symposia: the Symposium on Trichomoniasis (1981) featured investigations with Trichomonas vaginalis-infected women and mainly clinical results, whereas the Symposia on the Immunotherapy of Vaginal Infections (1983) focused on the therapy of bacterial infections and delved into the mechanism of action. Solco continued to develop the formulation, during the course of which the new species Lactobacillus vaginalis was identified in 1989. In the same year, the Hamburg-based pharmaceutical company Strathmann GmbH & Co. KG took over production of the vaccines SolcoUrovac (now named Strovac) and SolcoTrichovac (now named Gynatren).
2012-today
In 2012 Gynevac was withdrawn from the market, not due to any unexpected adverse effects, but rather due to Vakcina Kft. failing to obtain regulatory compliance upon the EU-accession of Hungary. Today Gynatren is the only lactobacillus vaccine marketed for the treatment of non-specific bacterial vaginitis and trichomoniasis, and it is mostly only prescribed by a select few gynaecologists in the DACH countries and Hungary. In Germany the vaccine may be covered by health insurance upon individual deliberation of the attending gynaecologist.
Research
Research interest in lactobacillus vaccines peaked in the 1980s. The technical and theoretical advances in the fields of microbiology, immunology and vaccinology of the past few decades could help shed new light on the still not fully clarified mode of action of these clinically promising vaccines. More research is warranted to elucidate the distinct properties of "aberrant" strains of Lactobacilli, the exact mechanism by which they contribute to or accompany pathologies, the determinants of colonization in different groups of individuals. A further point of interest is the specificity of the immune stimulation – whether vaccination induces cross-reacting antibodies with any other microorganism. A comparative study on lactobacillus heterovaccines like Gynatren and gynaecological autovaccines such as GynVaccine has yet to be performed.
Lactobacillus strains used in the vaccines
Characteristics
It has not been clarified by what mechanism the lactobacilli used in the vaccines ("aberrant" lactobacilli) fail to confer protection against vaginal pathogens. At the time of invention, available knowledge of the various health-promoting mechanisms of lactobacilli was very limited. For example, Eschenbach's seminal work on hydrogen peroxide-producing lactobacilli was not published until 1989; at this point scientific efforts to clarify the vaccine's mechanism of action had already subsided.
The nutrient medium, carbohydrate fermentation profile, and microscopic appearance of the strains used in SolcoTrichovac have been described. Growth on an iron-enriched medium containing 0.12 mM FeSO4·7H2O is rather unusual for a lactobacillus species, and resembles the nutrient needs of L. iners, a vaginal lactobacillus associated with bacterial vaginosis and preterm birth, known for its ambiguous morphology, including coccobacillar cells (L. iners itself is not used in the vaccine).
Påhlson and Larsson hypothesized that the defining characteristic of the lactobacilli used in SolcoTrichovac is a missing hydrogen peroxide production, which has not been confirmed. Moreover, the correlation they found between bacterial cell morphology and health benefits pointed towards an association between long uniform lactobacilli and decreased protection against vaginal infections, whereas polymorphic/shortened lactobacilli were described as innocuous inhabitants of the vaginal econiche. It seems that the authors equated the strains used in SolcoTrichovac with those responsible for cytolytic vaginosis, which is generally considered a different condition, characterized by a lactobacillus overgrowth rather than the depletion seen in patients colonized by the strains used in the vaccine.
Various other properties that could potentially play a role in the (lack of) protective effect – such as the ratio of D-lactic acid to L-lactic acid production (correlated with MMP-8 concentrations in the vaginal fluid), adhesion competition, self- and co-aggregation ability, production of bacteriocins, organic acids or biosurfactants, immunomodulatory properties, or toxin production such as seen in L. iners – remain obscure for the time being.
Risk factors of colonization
Újhelyi, the inventor of lactobacillus vaccines, described the strains used in Gynevac as pathobionts to Trichomonas vaginalis. He considered colonization with "aberrant", unprotective strains of lactobacilli, and their persistence even after the protozoan infection has been cleared, a chronic post-infectious complication, and introduced the term "lactobacillus syndrome" for the condition (not to be confused with the distinct pathologies of cytolytic vaginosis and vaginal lactobacillosis). Scattered reports suggest that a minority of Lactobacillus strains found in humans indeed enhance rather than inhibit parasite adhesion to the vaginal epithelium. In vitro preincubation of vaginal epithelial cells (VECs) with physiological concentrations (– CFU/ml) of Lactobacillus CBI3 (a human isolate of L. plantarum or L. pentosus) increased the number of T. vaginalis cells able to adhere to the VEC monolayer up to eightfold. McGrory and Garber reported a significant prolongation of T. vaginalis infection in estrogenized BALB/c mice intravaginally preinoculated with cells of L. acidophilus ATCC 4356 (originating from the human pharynx) in comparison to animals that had not been pretreated. Although initial infectivity in the two groups was comparable, at day 24 post-infection 69% of L. acidophilus-inoculated mice still showed positive T. vaginalis cultures, compared with only 11% of mice not harboring lactobacilli.
Other hypothesized risk factors of colonization by lactobacilli of low protective value in general include prior antimicrobial treatment and congenital factors.
Association with T. vaginalis
Soszka and Kuczyńska described the appearance of morphological variations of Lactobacilli when grown in the presence of a high concentration of Trichomonas vaginalis. The authors interpreted the observed atypical (coccoid) cell morphology as an involution (senescent, dying) form. Goisis et al. have shown that shortened and coccoidal lactobacilli are not only present in the primary secretion samples of trichomoniasis patients, but also in the cultures started from these samples, free from competing microorganisms and under optimal culture conditions, suggesting that the coccoid bacteria may represent a distinct viable phenotype. Contrastingly, the isolates from vaccinated patients tended to assume a bacillary form in culture as well. The general consensus remains that at least some of the morphology variants seen under trichomoniasis versus health are to be interpreted as representations of "true commensal" versus more pathogenic strains (genotypes), although a possible relationship between morphotype and distinct environment-driven proteome profiles has not been excluded.
Immunological cross-reaction with T. vaginalis
The antigenic material responsible for the effect of lactobacillus vaccines most likely consists of surface antigens of the aberrant lactobacilli. The anti-trichomonal effect of SolcoTrichovac has led multiple researchers to investigate the possibility of shared surface antigens between the specific strains used in the vaccine and T. vaginalis. The theory of antigenic cross-reactivity was put to the test by Stojković. Indirect immunofluorescence was performed on trichomonads treated with rabbit antisera against aberrant lactobacilli and against T. vaginalis. Specific immunofluorescence was observed on those protozoa which had been treated with anti-lactobacillus serum and anti-trichomonas serum, but not on those treated with serum from non-vaccinated animals. Bonilla-Musoles performed an electron microscopic study on trichomonads treated with serum from women who were previously vaccinated with SolcoTrichovac. After three days the trichomonads exposed to antibody-containing serum showed marked signs of destruction, similar to those observed under the influence of metronidazole. The electron micrographs revealed cytoplasmic swelling, dilation of the reticuloendothelial lamellae and formation of vacuoles as well as evaginations and invaginations of cellular membranes. Alderete, Gombošová and others however described contrary findings, and attributed any anti-trichomonal activity of lactobacillus vaccines to non-specific immune mechanisms. The question of an immunological relationship between aberrant lactobacilli and T. vaginalis has not been answered conclusively.
Phylogenetic relationships to T. vaginalis
An intriguing hypothesis, advanced by Alain de Weck, suggests horizontal gene transfer between the specific aberrant strains of Lactobacilli used in SolcoTrichovac and T. vaginalis, which would explain their possible cross-immunogenicity. Phylogenetic relationships between T. vaginalis and aberrant lactobacilli have not been studied. Nevertheless, multiple examples of gene transfer between the parasite and bacteria have been documented. Audrey de Koning argues that lateral transfer of the N-acetylneuraminate lyase gene from Pasteurellaceae to T. vaginalis may have been a key factor in the adaptation of Trichomonas to parasitism. In an analogous manner, Buret et al. suggest gene exchanges between enteropathogens and normal microbiota during acute enteric infection as one of the possible causative factors behind post-infectious intestinal inflammatory disorders.
Alternative theory of the mechanism of action
Goisis and his colleagues proposed an alternative hypothesis on the mechanism of action of SolcoTrichovac, suggesting that anti-lactobacillus antibodies may stimulate proliferation of lactobacilli rather than their (strain-specific) damage or inhibition. Among the circumstances they cited to support this theory is their opinion that antibodies specific to one strain of Lactobacillus would most likely cross-react with several antigens present on various other strains (yet both the concentration of anti-lactobacillus sIgA antibodies and lactobacillus counts have been demonstrated to increase in vaccinated women). Further, they referred to the inconspicuous metabolic profile and the lack of a verified pathomechanism of the strains used in SolcoTrichovac, suggesting that they may represent mere morphotypes rather than pathogenic/unprotective biotypes.
The proposed theory relies on analogies with other known examples of non-classical, stimulatory/homeostatic antibody-antigen interactions. Notably, the majority of intestinal bacterial cells in healthy individuals are bound by sIgA. The sIgA-coating of commensal enteric bacteria is believed to promote intestinal microbial homeostasis by a number of mechanisms. Secreted IgA anchors commensal bacteria to the mucus and facilitates biofilm formation, thereby limiting their translocation from the lumen into mucosal tissues. This minimizes activation of the innate immune system, a process termed "immune exclusion". Furthermore, the selective uptake of sIgA-microbe immune complexes by dendritic cells (DCs) in lymphoid follicles has been shown to induce semimaturation of the DCs. The resulting, so-called tolerogenic DCs downregulate the expression of T cell costimulatory molecules and proinflammatory cytokines. The altered immune signaling favours local processing of antigens and a rapid induction of low-affinity, broad-specificity IgA, leaving the systemic immune compartment ignorant of these organisms. In contrast, direct translocation of non-sIgA-coated microbes or microbial products across the epithelium preferentially results in proinflammatory signalling and a systemic response against the invading agent, involving affinity-matured serum antibodies of the classes IgA, IgE and IgG. Lastly, binding by sIgA can downregulate the expression of virulence factors, e.g. those involved in adhesion or nutrient acquisition, by commensal bacteria. If this homeostasis breaks down, innate immune responses directed against commensal enteric bacteria lead to a shift in the species composition (dysbiosis). Invasive species are better equipped to resist or take advantage of host inflammatory mechanisms and, in the perturbed niche, successfully compete with the resident microbiota. Hypersensitivity responses to commensal enteric microbiota and a perturbation of microbial ecology are observed in many patients with chronic enterocolitis.
This alternative theory is consistent with the observation that women without a history of urinary tract or vaginal infections harbor higher antibody levels against vaginal lactobacilli than women with a history of these infections. Alvarez-Olmos and her coworkers reported an approximately fourfold elevation of total IgG and a threefold elevation of total IgA concentration in the cervicovaginal secretions of adolescent women colonized with hydrogen peroxide-producing lactobacilli (associated with vaginal health) in comparison to those colonized with non-hydrogen peroxide-producing lactobacilli.
Goisis et al. described lactobacillus vaccination as a means to systemically boost a diminished pool of lactobacillus-specific vaginal antibodies, likely increasing the potential for immune exclusion and tolerogenic responses to the microorganisms. They added a further hypothetical notion: loss of lactobacillus-specific sIgA may be characteristic of patients co-colonized by bacteria capable of gradually desialylating and finally proteolytically degrading sIgA, a known impairment of the vaginal defense system, established in the context of Gardnerella vaginalis-specific antibodies. This contrasts with other proposed mechanisms of sIgA deficiency, such as the loss of immunomodulatory strains or host immunodeficiency.
Although Goisis et al. announced ongoing experiments and preliminary results to prove this theory, as well as the possible cross-reactivity of "normal", ecologically beneficial lactobacilli with antibodies directed against the strains used in SolcoTrichovac, no conclusive report has been published to date.
References
External links
Vaccines
Hungarian inventions | Lactobacillus vaccine | Biology | 9,245 |
37,621,805 | https://en.wikipedia.org/wiki/Stenella%20paulliniae | Stenella paulliniae is a species of anamorphic fungi.
References
External links
paulliniae
Fungi described in 2005
Fungus species | Stenella paulliniae | Biology | 28 |
49,213,402 | https://en.wikipedia.org/wiki/Tubaria%20moseri | Tubaria moseri is a species of agaric fungus in the family Tubariaceae. Found in Argentina, it was described as new to science in 1974 by Jörg H. Raithelhuber. The specific epithet moseri honours Austrian mycologist Meinhard Moser.
References
External links
Tubariaceae
Fungi described in 1974
Fungi of Argentina
Fungus species | Tubaria moseri | Biology | 77 |
73,163,376 | https://en.wikipedia.org/wiki/PLAC-Seq | Proximity ligation-assisted chromatin immunoprecipitation sequencing (PLAC-seq) is a chromatin conformation capture (3C)-based technique to detect and quantify genomic chromatin structure from a protein-centric approach. PLAC-seq combines in situ Hi-C and chromatin immunoprecipitation (ChIP), which allows for the identification of long-range chromatin interactions at high resolution with low sequencing costs. Mapping long-range 3-dimensional (3D) chromatin interactions is important in identifying transcription enhancers and non-coding variants that can be linked to human diseases.
Different 3C-based techniques have been used to study the higher-order 3D chromatin structure, and they have been combined with high-throughput sequencing to determine the chromatin structure on a genome-wide level. Hi-C is one of the most widely used 3C-based techniques because it allows for high-resolution (kilobase-scale) genome-topology identification. However, it requires billions of sequencing reads, which has limited its application. Another commonly used 3C-based technique is chromatin interaction analysis by paired-end tag sequencing (ChIA-PET). ChIA-PET can identify long-range interactions of transcription promoters and enhancers at a high resolution but requires millions of cells.
PLAC-seq alleviates these issues by using in situ Hi-C, which creates long-range DNA contacts in situ in the nucleus before lysis. Unlike ChIA-PET, which performs ChIP and proximity ligation after chromatin shearing, performing proximity ligation in the nuclei first prevents large disruptions of protein/DNA complexes. This decreases false-positive interactions and improves DNA contact capture efficiency, meaning that PLAC-seq is more accurate and requires fewer cells.
History
PLAC-seq was developed in 2016, and an almost identical technique called HiChIP was developed in the same year. Both methods combine in situ Hi-C and ChIP but have different library preparation methods. While PLAC-seq uses biotin pull-down followed by end-repair, adapter ligation, and PCR, HiChIP uses Tn5 tagmentation, biotin pull-down, and PCR. However, both techniques can use the same quality control and data analysis techniques.
Different computational software tools can be used to analyze the data from PLAC-seq, for example Fit-Hi-C, HiCCUPS, Mango, hichipper, MAPS, and FitHiChIP. Many of the earlier software tools were developed for other 3C-based technologies and were not optimized for PLAC-seq/HiChIP data. Fit-Hi-C and HiCCUPS, both developed in 2014, were mainly developed for Hi-C data, and utilize a matrix-balancing-based normalization approach. Mango was developed in 2015 and is mainly used for ChIA-PET data, but has high false-positive rates in analyzing PLAC-seq/HiChIP data due to the different biases. Hichipper was developed in 2018 to alleviate this issue and introduced a bias-correcting algorithm, but it still has difficulties identifying protein interactions between protein-binding and non-protein-binding regions on the chromosome. MAPS and FitHiChIP were developed in 2019 as PLAC-seq/HiChIP-specific analysis pipelines, and are generally thought to be more effective than the earlier models at analyzing PLAC-seq/HiChIP data.
Procedure
The general workflow of PLAC-seq involves cell harvesting and crosslinking, in situ digestion and proximity ligation, ChIP, library construction, sequencing, and data analysis. The first step of PLAC-seq includes the preparation and crosslinking of cell and tissue samples, which typically begins with cell collection through centrifugation. The next step involves the use of a DNA crosslinking agent such as formaldehyde (HCHO), followed by the addition of glycine to stop the crosslinking reaction. The cross-linked cells can then be pelleted by centrifugation and either stored at −80 °C or used in the next step of the procedure. In situ digestion involves cell lysis with the use of a lysis buffer followed by digestion with the restriction enzyme MboI. This step allows for uniform digestion of genetic material while keeping the crosslinked regions of the chromosome intact. After inactivation of the digestion reaction, dNTPs and biotin are added in order to repair overhangs and mark the DNA for pull-down, respectively. In situ proximity ligation occurs when the biotinylated ends of the crosslinked DNA are ligated with each other. Chromatin fragmentation by sonication allows for the shearing of non-crosslinked fragments of DNA. This is followed by immunoprecipitation of biotinylated DNA through the use of antibody-coated beads. The DNA is then reverse-crosslinked and purified using column-based DNA purification or phenol-chloroform extraction. The library construction step first involves the pull-down of biotinylated DNA and the addition of sequencing adapters. The cycle number for amplification needs to be determined prior to the final amplification and library purification. Data analysis of PLAC-seq sequencing data can be carried out in multiple ways; common methods involve the use of Fit-Hi-C, FitHiChIP, and MAPS. Data analysis involves mapping to a reference genome, using software tools such as hichipper to identify peaks, and downstream analysis involving peak comparison and functional enrichment analysis. The resulting data can also be integrated with other genomic data such as Hi-C or RNA-seq in order to identify potential regulatory networks.
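Such downstream filtering is typically scripted. A minimal sketch with pandas, assuming a hypothetical tab-separated loop table whose columns (start1, start2, count, fdr) are illustrative and do not correspond to the exact output format of any particular pipeline:

import pandas as pd

# Load a hypothetical loop table produced by a PLAC-seq/HiChIP interaction caller.
loops = pd.read_csv("loops.tsv", sep="\t")

# Keep interactions passing a 5% false-discovery-rate cutoff.
significant = loops[loops["fdr"] < 0.05].copy()

# Anchor-to-anchor genomic distance of each loop.
significant["distance"] = (significant["start2"] - significant["start1"]).abs()

# Restrict to long-range contacts, here anchors more than 20 kb apart.
long_range = significant[significant["distance"] > 20_000]
print(long_range.sort_values("count", ascending=False).head())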
Applications
PLAC-seq was developed to map and analyze long-range chromatin interactions. These interactions have important implications when it comes to the transcriptional regulation of genes.
One challenge for mammalian cells is fitting around two meters of genetic material into a nucleus that is around a few microns in diameter, and at the same time organizing the genetic material to be able to access and use the genetic and epigenetic information. To do this, DNA is compacted around histone octamers into 2D structures, and then further packaged into 3D compartments by various mechanisms such as cis-regulatory interactions and repressive interactions. Therefore, chromosomal regions distant in 2D may have intra- and interchromosomal long-range interactions in 3D. These 3D structures are involved in the induction and repression of genes that have biological implications on basic cell functions such as cell cycle, replication, and development. Aberrant 3D structures have roles in the development of diseases and abnormalities such as cancer. This can involve interactions between promoters and terminators/enhancers through the formation of long-range chromatin loops.
PLAC-seq has been utilized to study H3K4me3 and H3K27ac PLACE (PLAC-Enriched) interactions. It has also been used to call for significant H3K4me3-mediated chromatin interactions, thereby allowing for the identification of differential epigenetic modification in different cell types such as those found in the developing human cortex.
Use
Advantages: Compared to ChIA-PET, PLAC-seq requires a significantly smaller amount of starting biological material. With shearing being one of the first steps in ChIA-PET, protein and DNA complexes are disrupted early; PLAC-seq avoids this by having the proximity ligation precede the shearing process. Furthermore, PLAC-seq requires fewer sequencing reads than Hi-C. While ChIA-PET requires 100 million starting cells, PLAC-seq only requires 5 million cells. Even with 20-fold fewer cells, PLAC-seq was able to produce more reads (175 million) with a lower PCR duplication rate (33%) than ChIA-PET (16 million and 44%, respectively). PLAC-seq was also nearly 100 times more cost-effective than ChIA-PET.
Disadvantages: While many of the 3C-based techniques have different biases arising from their protocols, PLAC-seq (and HiChIP) data have biases from immunoprecipitation efficiencies that need to be corrected for in the computational step. Effective ways of reducing and/or removing the different biases in 3C-based technologies are still being studied.
References
Molecular biology techniques | PLAC-Seq | Chemistry,Biology | 1,784 |
36,234,132 | https://en.wikipedia.org/wiki/Trichophaea%20woolhopeia | Trichophaea woolhopeia is a species complex of ectomycorrhizal fungi belonging to the family Pyronemataceae. There are at least 4 well-resolved cryptic species within the complex, including Quercirhiza quadratum and AD (Angle Droit). They are European species that appear on damp ground, with apothecial fruiting bodies that appear as tiny (up to 6 mm across) whitish cups with brown hairs on the margin and outer surface.
References
Trichophaea woolhopeia at Species Fungorum
Trichophaea woolhopeia at GBIF
Pyronemataceae
Fungi described in 1875
Taxa named by Mordecai Cubitt Cooke
Fungus species | Trichophaea woolhopeia | Biology | 148 |
7,802,744 | https://en.wikipedia.org/wiki/Secret%20admirer | A secret admirer is an individual who feels adoration, fondness or love for another person without disclosing their identity to that person, and who might send gifts or love letters to their crush.
Grade school
The goal of a secret admirer is to woo the object of their affections, and then to reveal their identity, paving the way for a real relationship – a reveal that at school age usually occurs on Valentine's Day, the day of love. Reactions to a gushy Valentine may range from approval to gross-out.
Many elementary schools, and sometimes schools up to the secondary level, have children complete Valentine's Day projects on February 14, crafting and sending "secret admirer" letters to classmates. Such letters may not reflect a real "crush": they may be written neutrally or arbitrarily – or, if done under duress from the class project requirement, reluctantly.
Office
Notes from a secret admirer may feature in office dating, but are not recommended as a means of approaching a colleague, and may border on sexual harassment.
Youthful passion for a celebrity stands on the boundary between secret admirer and fan; while the secret or concealed admiration of 'having eyes for' may also feature as a preliminary phase in the process of initially approaching the opposite sex.
Cultural examples
The adolescent Mendelssohn wrote a song – Frage (Question) – about his own suspected secret admirer.
See also
Puppy love
Secret dating
Quasimodo
References
Love
Interpersonal relationships | Secret admirer | Biology | 300 |
23,909,605 | https://en.wikipedia.org/wiki/Cyanophora | Cyanophora is a genus of glaucophytes, a group of rare but evolutionarily significant freshwater microalgae.
It includes the following species:
Cyanophora biloba
Cyanophora cuspidata
Cyanophora kugrensii
Cyanophora paradoxa
Cyanophora sudae
Cyanophora tetracyanea
These species are differentiated based on cell shape, number and position of cyanelles, and molecular data.
The species Cyanophora paradoxa is well-studied as a model organism.
References
Archaeplastida
Taxa described in 1924
Glaucophyta genera | Cyanophora | Biology | 135 |
4,093,697 | https://en.wikipedia.org/wiki/Monte%20Carlo%20localization | Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot.
Basic description
Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization.
Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is.
State representation
The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple (x, y, θ) for position and orientation θ. For a robotic arm with 10 joints, it may be a tuple containing the angle at each joint: (θ_1, θ_2, ..., θ_10).
The belief, which is the robot's estimate of its current state, is a probability density function distributed over the state space. In the MCL algorithm, the belief at a time t is represented by a set of M particles X_t = {x_t[1], x_t[2], ..., x_t[M]}. Each particle contains a state, and can thus be considered a hypothesis of the robot's state. Regions in the state space with many particles correspond to a greater probability that the robot will be there—and regions with few particles are unlikely to be where the robot is.
The algorithm assumes the Markov property that the current state's probability distribution depends only on the previous state (and not any ones before that), i.e., X_t depends only on X_{t-1}. This only works if the environment is static and does not change with time. Typically, on start up, the robot has no information on its current pose so the particles are uniformly distributed over the configuration space.
Overview
Given a map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment.
At every time t the algorithm takes as input the previous belief X_{t-1}, an actuation command u_t, and data received from sensors z_t; and the algorithm outputs the new belief X_t.
Algorithm MCL(X_{t-1}, u_t, z_t):
    X̄_t = X_t = ∅
    for m = 1 to M:
        x_t[m] = motion_update(u_t, x_{t-1}[m])
        w_t[m] = sensor_update(z_t, x_t[m])
        X̄_t = X̄_t + ⟨x_t[m], w_t[m]⟩
    endfor
    for m = 1 to M:
        draw x_t[m] from X̄_t with probability ∝ w_t[m]
        X_t = X_t + x_t[m]
    endfor
    return X_t
Example for 1D robot
Consider a robot in a one-dimensional circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door.
At the end of the three iterations, most of the particles are converged on the actual position of the robot as desired.
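The corridor scenario is small enough to simulate directly. Below is a minimal, self-contained sketch of the full algorithm for this example; the corridor length, door positions, noise levels, step size, and particle count are illustrative choices rather than values from any particular source:

import random

CORRIDOR = 100.0                  # length of the circular corridor
DOORS = [20.0, 45.0, 80.0]        # door positions (unevenly spaced)
DOOR_WIDTH = 2.0                  # the sensor reads "door" within this distance
M = 1000                          # number of particles

def at_door(x):
    # True if position x is in front of a door (circular distance).
    return any(abs((x - d + CORRIDOR / 2) % CORRIDOR - CORRIDOR / 2) < DOOR_WIDTH
               for d in DOORS)

def motion_update(particles, u, noise=0.5):
    # Shift every particle by the commanded motion u plus actuator noise.
    return [(x + u + random.gauss(0.0, noise)) % CORRIDOR for x in particles]

def sensor_update(particles, z, p_hit=0.9):
    # Weight each particle by how well it explains the boolean door reading z,
    # then resample M particles with probability proportional to the weights.
    weights = [p_hit if at_door(x) == z else 1.0 - p_hit for x in particles]
    return random.choices(particles, weights=weights, k=M)

# Uniform initial belief: the robot has no idea where it is.
particles = [random.uniform(0.0, CORRIDOR) for _ in range(M)]

true_pos = 10.0
for _ in range(30):               # drive forward in 2-unit steps
    true_pos = (true_pos + 2.0) % CORRIDOR
    particles = motion_update(particles, 2.0)
    particles = sensor_update(particles, at_door(true_pos))

print("true position:", true_pos)                 # 70.0
print("estimated position:", sum(particles) / M)  # crude mean estimate

Because the three doors are unevenly spaced, the sequence of door/no-door readings eventually disambiguates the position, and the particle cloud collapses around the true state.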
Motion update
During the motion update, the robot predicts its new location based on the actuation command given, by applying the simulated motion to each of the particles. For example, if a robot moves forward, all particles move forward in their own directions no matter which way they point. If a robot rotates 90 degrees clockwise, all particles rotate 90 degrees clockwise, regardless of where they are. However, in the real world, no actuator is perfect: they may overshoot or undershoot the desired amount of motion. When a robot tries to drive in a straight line, it inevitably curves to one side or the other due to minute differences in wheel radius. Hence, the motion model must compensate for noise. Inevitably, the particles diverge during the motion update as a consequence. This is expected since a robot becomes less sure of its position if it moves blindly without sensing the environment.
Sensor update
When the robot senses its environment, it updates its particles to more accurately reflect where it is. For each particle, the robot computes the probability that, had it been at the state of that particle, it would perceive what its sensors have actually sensed. It assigns a weight w_t[m] to each particle proportional to that probability. Then, it randomly draws new particles from the previous belief, with probability proportional to w_t[m]. Particles consistent with sensor readings are more likely to be chosen (possibly more than once) and particles inconsistent with sensor readings are rarely picked. As such, particles converge towards a better estimate of the robot's state. This is expected since a robot becomes increasingly sure of its position as it senses its environment.
Properties
Non-parametricity
The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at. In such situations, the particle filter can give better performance than parametric filters.
Another non-parametric approach to Markov localization is the grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, the Monte Carlo localization is more accurate because the state represented in samples is not discretized.
Computational requirements
The particle filter's time complexity is linear with respect to the number of particles M. Naturally, the more particles, the better the accuracy, so there is a compromise between speed and accuracy, and it is desired to find an optimal value of M. One strategy to select M is to continuously generate additional particles until the next pair of command and sensor reading has arrived. This way, the greatest possible number of particles is obtained while not impeding the function of the rest of the robot. As such, the implementation is adaptive to available computational resources: the faster the processor, the more particles can be generated and therefore the more accurate the algorithm is.
Compared to grid-based Markov localization, Monte Carlo localization has reduced memory usage since memory usage only depends on number of particles and does not scale with size of the map, and can integrate measurements at a much higher frequency.
The algorithm can be improved using KLD sampling, as described below, which adapts the number of particles to use based on how sure the robot is of its position.
Particle deprivation
A drawback of the naive implementation of Monte Carlo localization occurs in a scenario where a robot sits at one spot and repeatedly senses the environment without moving. Suppose the particles all converge towards an erroneous state, or an occult hand picks up the robot and moves it to a new location after the particles have already converged. As particles far away from the converged state are rarely selected for the next iteration, they become scarcer on each iteration until they disappear altogether. At this point, the algorithm is unable to recover. This problem is more likely to occur when the number of particles is small and the particles are spread over a large state space. In fact, any particle filter algorithm may accidentally discard all particles near the correct state during the resampling step.
One way to mitigate this issue is to randomly add extra particles on every iteration. This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, thus causing a fraction of random states in the motion model. By guaranteeing that no area in the map is totally deprived of particles, the algorithm is now robust against particle deprivation.
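A minimal sketch of this mitigation, reusing the corridor setting from the example above (the 1% injection fraction is an illustrative choice):

import random

def resample_with_injection(particles, weights, M, corridor=100.0, frac=0.01):
    # Resample M particles by weight, but replace a small random fraction with
    # uniformly random states so that no region is ever fully deprived.
    n_random = int(frac * M)
    kept = random.choices(particles, weights=weights, k=M - n_random)
    injected = [random.uniform(0.0, corridor) for _ in range(n_random)]
    return kept + injected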
Variants
The original Monte Carlo localization algorithm is fairly simple. Several variants of the algorithm have been proposed, which address its shortcomings or adapt it to be more effective in certain situations.
KLD sampling
Monte Carlo localization may be improved by sampling the particles in an adaptive manner based on an error estimate using the Kullback–Leibler divergence (KLD). Initially, it is necessary to use a large M due to the need to cover the entire map with a uniformly random distribution of particles. However, when the particles have converged around the same location, maintaining such a large sample size is computationally wasteful.
KLD–sampling is a variant of Monte Carlo localization where at each iteration, a sample size M_x is calculated. The sample size M_x is chosen such that, with probability 1 − δ, the error between the true posterior and the sample-based approximation is less than ε. The variables ε and δ are fixed parameters.
The main idea is to create a grid (a histogram) overlaid on the state space. Each bin in the histogram is initially empty. At each iteration, a new particle is drawn from the previous (weighted) particle set with probability proportional to its weight. Instead of the resampling done in classic MCL, the KLD–sampling algorithm draws particles from the previous, weighted, particle set and applies the motion and sensor updates before placing the particle into its bin. The algorithm keeps track of the number of non-empty bins, k. If a particle is inserted into a previously empty bin, the value of M_x is recalculated, which increases mostly linearly in k. This is repeated until the sample size is the same as M_x.
It is easy to see that KLD–sampling culls redundant particles from the particle set, by only increasing M_x when a new location (bin) has been filled. In practice, KLD–sampling consistently outperforms and converges faster than classic MCL.
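A sketch of the sample-size bound, assuming the chi-square quantile formulation (via the Wilson–Hilferty approximation) used in Fox's original KLD-sampling paper; the default ε and δ values are illustrative:

from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Particles needed so that, with probability 1 - delta, the KL divergence
    between the particle approximation and the true posterior stays below
    epsilon, given k non-empty histogram bins (requires k >= 2)."""
    z = norm.ppf(1.0 - delta)          # upper quantile of the standard normal
    d = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - d + (d ** 0.5) * z) ** 3) + 1

print(kld_sample_size(10))   # the required sample size grows roughly linearly in k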
References
Robot navigation
Monte Carlo methods | Monte Carlo localization | Physics | 2,113 |
64,019,016 | https://en.wikipedia.org/wiki/Amplified%20magnetic%20resonance%20imaging | Amplified magnetic resonance imaging (aMRI) is an MRI method that is coupled with video magnification processing methods to amplify the subtle spatial variations in MRI scans and to enable better visualization of tissue motion. aMRI can enable better visualization of tissue motion to aid the in vivo assessment of the biomechanical response in pathology. It is thought to have potential for helping with diagnosing and monitoring a range of clinical implications in the brain and other organs, including in Chiari Malformation, brain injury, hydrocephalus, other conditions associated with abnormal intracranial pressure, cerebrovascular, and neurodegenerative disease.
The aMRI method takes high temporal-resolution MRI data as input, applies a spatial decomposition, followed by temporal filtering and frequency-selective amplification of the MRI frames before synthesizing a motion-amplified MRI data set. This approach can reveal deformations of the brain parenchyma and displacements of arteries due to cardiac pulsatility and CSF flow. aMRI has thus far been demonstrated to amplify motion in brain tissue to a more visible scale, however, can in theory be applied to visualize motion induced by other endogenous or exogenous sources in other tissues.
aMRI uses video magnification processing methods, which employ Eulerian Video Magnification and phase-based motion processing, the latter thought to be less prone to noise and less sensitive to non-motion-induced, voxel-intensity changes. Both video-processing methods use a series of mathematical operations known in image processing as the steerable-pyramid wavelet transformation to amplify motion without the accompanying noise. The aMRI temporal data undergo spatial decomposition, followed by temporal filtering and frequency-selective amplification, which allows one to visualize in vivo tissue and vascular motion that is smaller than the image resolution.
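As a rough illustration of the temporal filtering and frequency-selective amplification steps, the sketch below applies the simplest intensity-based Eulerian scheme: band-pass filtering each voxel's time series around an assumed cardiac frequency and adding the amplified band back. The phase-based, steerable-pyramid variant preferred for aMRI is more involved; the frame rate, pass band, and gain here are illustrative assumptions:

import numpy as np
from scipy.signal import butter, filtfilt

def amplify(frames, fs=10.0, lo=0.8, hi=1.5, alpha=10.0):
    """Amplify subtle periodic signal in a (time, y, x) image stack.
    fs: frame rate in Hz; [lo, hi]: temporal pass band in Hz; alpha: gain."""
    b, a = butter(2, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    band = filtfilt(b, a, frames, axis=0)   # temporal filtering, per voxel
    return frames + alpha * band            # frequency-selective amplification

# Synthetic demo: 100 frames with a faint 1 Hz oscillation buried in noise.
t = np.arange(100) / 10.0
frames = 0.01 * np.random.rand(100, 32, 32) + 0.001 * np.sin(2 * np.pi * t)[:, None, None]
amplified = amplify(frames)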
References
Magnetic resonance imaging | Amplified magnetic resonance imaging | Chemistry | 390 |
59,784,486 | https://en.wikipedia.org/wiki/Marie-Paule%20Kieny | Marie-Paule Kieny is a French virologist, vaccinologist, public health expert and science writer. She is currently director of research at INSERM and chief of the board at DNDi.
Education
Kieny completed her PhD in microbiology in 1980 at the University of Montpellier, and received her Habilitation à Diriger des Recherches in 1995 from the University of Strasbourg.
Kieny was presented with an honorary doctorate from the Autonomous University of Barcelona in 2019 for her commitment to public health and worldwide universal health care.
Career
After completing her PhD, Kieny took up a position at Transgene SA, Strasbourg, as assistant scientific director under scientific director Jean-Pierre Lecocq until 1988.
Kieny became director of research at Inserm for the first time between 1999 and 2000. Kieny was a member of the European Vaccine Initiative until 2010.
Kieny was vaccine research director of WHO from 2002 to 2010, most notably during the 2009 swine flu pandemic. She was promoted to assistant director-general, playing a major leadership role during the Ebola virus epidemic in West Africa and the 2015–16 Zika virus epidemic. Kieny herself signed up as a test subject in safety trials of new Ebola vaccines, amidst concerns over the logistics and ethics of testing early-stage therapeutics and preventatives for Ebola in the context of an ongoing epidemic. Given the slow development of new therapies in response to the outbreak, Kieny and colleagues began making a framework to speed up development and encourage researchers to share data openly without worrying about being scooped. She commented on the successful development of efficacious vaccines for deployment in a future outbreak. In the aftermath of the 2014 Democratic Republic of the Congo Ebola virus outbreak, Kieny stated that the vaccine might not be required since the outbreak was not as serious as previously feared. Kieny was also involved in addressing the ongoing antimicrobial resistance crisis with WHO, overseeing the first WHO Model List of Essential Medicines to include guidance on the proper use of antibiotics within the framework of universal health care. She helped prepare a list of antibiotic-resistant bacteria that should be prioritised for research beyond extensively drug-resistant tuberculosis, such as Acinetobacter baumannii and Pseudomonas aeruginosa strains resistant to carbapenem.
Kieny was one of seven vaccine experts interviewed in 2012 by Wired about what the next decade held for vaccine innovation.
In 2017 she joined the board of directors for the Human Vaccines Project, was made interim director of the Medicines Patent Pool (MPP), and joined the Drugs for Neglected Diseases Initiative (DNDi) as chief of the board. As head of MPP, Kieny oversaw the licensing of the Hepatitis C drug Mavyret for generic production.
Other activities
BioMérieux, Independent Member of the Board of Directors
Fondation Mérieux, Chair of the Scientific Advisory Board
GISAID, Member of the Nomination Committee, Advisor
Global Antibiotic Research and Development Partnership (GARDP), Member of the Board of Directors
Joint Programming Initiative on Antimicrobial Resistance (JPIAMR), Member of the Management Board
Solthis – Therapeutic Solidarity and Initiatives for Health, Member of the Board of Directors
Vanke School of Public Health at Tsinghua University, Member of the International Advisory Board
Wellcome Trust, Chair of the Strategic Advisory Board on Vaccines and Drug-resistant Infections
Wellcome Trust, Member of the Strategic Advisory Board on Innovation
Awards
2017 Inserm International Prize.
1994 Prix Génération 2000-Impact Médecin
1991 Prix de l'Innovation Rhône-Poulenc
Decorations
2021 Officier de l'Ordre national du Mérite (chevalier in 2000)
2016 Chevalier de la Legion d'honneur.
References
University of Montpellier alumni
University of Strasbourg alumni
French women biologists
Vaccinologists
French virologists
Women virologists
World Health Organization officials
Inserm directors | Marie-Paule Kieny | Biology | 819 |
252,009 | https://en.wikipedia.org/wiki/Radical%20of%20an%20ideal | In ring theory, a branch of mathematics, the radical of an ideal I of a commutative ring is another ideal defined by the property that an element x is in the radical if and only if some power of x is in I. Taking the radical of an ideal is called radicalization. A radical ideal (or semiprime ideal) is an ideal that is equal to its radical. The radical of a primary ideal is a prime ideal.
This concept is generalized to non-commutative rings in the semiprime ring article.
Definition
The radical of an ideal I in a commutative ring R, denoted by rad(I) or √I, is defined as
√I = {r ∈ R : r^n ∈ I for some positive integer n}
(note that I ⊆ √I).
Intuitively, √I is obtained by taking all roots of elements of I within the ring R. Equivalently, √I is the preimage of the ideal of nilpotent elements (the nilradical) of the quotient ring R/I (via the natural map π : R → R/I). The latter proves that √I is an ideal.
If the radical of I is finitely generated, then some power of √I is contained in I. In particular, if I and J are ideals of a Noetherian ring, then I and J have the same radical if and only if I contains some power of J and J contains some power of I.
If an ideal I coincides with its own radical, then I is called a radical ideal or semiprime ideal.
Examples
Consider the ring Z of integers.
The radical of the ideal 4Z of integer multiples of 4 is 2Z (the evens).
The radical of 5Z is 5Z.
The radical of 12Z is 6Z.
In general, the radical of mZ is rZ, where r is the product of all distinct prime factors of m, the largest square-free factor of m (see Radical of an integer). In fact, this generalizes to an arbitrary ideal (see the Properties section).
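The integer case is easy to compute directly; a small sketch using sympy:

from sympy import factorint

def radical(m):
    """Largest square-free factor of a positive integer m;
    the radical of the ideal mZ is radical(m)Z."""
    r = 1
    for p in factorint(m):   # distinct prime factors of m
        r *= p
    return r

print(radical(12))  # 6: the radical of 12Z is 6Z, as in the example above
print(radical(4))   # 2: the radical of 4Z is 2Z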
Consider the ideal I = (y^4) ⊆ C[x, y]. It is trivial to show √I = (y) (using the basic property √(I^n) = √I), but we give some alternative methods: The radical √I corresponds to the nilradical √0 of the quotient ring R = C[x, y]/(y^4), which is the intersection of all prime ideals of the quotient ring. This is contained in the Jacobson radical, which is the intersection of all maximal ideals, which are the kernels of homomorphisms to fields. Any ring homomorphism R → C must have y in the kernel in order to have a well-defined homomorphism (if we said, for example, that the kernel should be (x, y − 1), the composition C[x, y] → R → C would have kernel (x, y^4, y − 1), which is the same as trying to force 1 = 0). Since C is algebraically closed, every homomorphism R → F to a field must factor through C, so we only have to compute the intersection of {ker(Φ) : Φ ∈ Hom(R, C)} to compute the radical of (0). We then find that √0 = (y) ⊆ R.
Properties
This section will continue the convention that I is an ideal of a commutative ring R:
It is always true that √(√I) = √I, i.e. radicalization is an idempotent operation. Moreover, √I is the smallest radical ideal containing I.
√I is the intersection of all the prime ideals of R that contain I, and thus the radical of a prime ideal is equal to itself. Proof: On one hand, every prime ideal is radical, and so this intersection contains √I. Suppose r is an element of R that is not in √I, and let S be the set {r^n : n = 0, 1, 2, ...}. By the definition of √I, S must be disjoint from I. S is also multiplicatively closed. Thus, by a variant of Krull's theorem, there exists a prime ideal P that contains I and is still disjoint from S (see Prime ideal). Since P contains I, but not r, this shows that r is not in the intersection of prime ideals containing I. This finishes the proof. The statement may be strengthened a bit: the radical of I is the intersection of all prime ideals of R that are minimal among those containing I.
Specializing the last point, the nilradical (the set of all nilpotent elements) is equal to the intersection of all prime ideals of R. This property is seen to be equivalent to the former via the natural map π : R → R/I, which yields a bijection u between the prime ideals of R containing I and the prime ideals of R/I, defined by u(P) = π(P) = P/I.
An ideal in a ring is radical if and only if the quotient ring is reduced.
The radical of a homogeneous ideal is homogeneous.
The radical of an intersection of ideals is equal to the intersection of their radicals: √(I ∩ J) = √I ∩ √J (a worked instance is given after this list).
The radical of a primary ideal is prime. If the radical of an ideal I is maximal, then I is primary.
If I is an ideal, √(I^n) = √I. Since prime ideals are radical ideals, √(P^n) = P for any prime ideal P.
Let I, J be ideals of a ring R. If √I and √J are comaximal, then I and J are comaximal.
Let M be a finitely generated module over a Noetherian ring R. Then √(ann_R(M)) = ⋂_{P ∈ supp(M)} P = ⋂_{P ∈ ass(M)} P, where supp(M) is the support of M and ass(M) is the set of associated primes of M.
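As a worked check of the intersection property above, in the ring of integers (an illustrative computation):

\sqrt{4\mathbb{Z} \cap 6\mathbb{Z}} = \sqrt{12\mathbb{Z}} = 6\mathbb{Z},
\qquad
\sqrt{4\mathbb{Z}} \cap \sqrt{6\mathbb{Z}} = 2\mathbb{Z} \cap 6\mathbb{Z} = 6\mathbb{Z}.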
Applications
The primary motivation in studying radicals is Hilbert's Nullstellensatz in commutative algebra. One version of this celebrated theorem states that for any ideal J in the polynomial ring k[x_1, ..., x_n] over an algebraically closed field k, one has
I(V(J)) = √J
where
V(J) = {x ∈ k^n : f(x) = 0 for all f ∈ J}
and
I(S) = {f ∈ k[x_1, ..., x_n] : f(x) = 0 for all x ∈ S}.
Geometrically, this says that if a variety V is cut out by the polynomial equations f_1 = 0, ..., f_m = 0, then the only other polynomials that vanish on V are those in the radical of the ideal (f_1, ..., f_m).
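A small worked instance of the theorem (an illustrative check over the complex numbers): for the ideal J = (x^2, y) in C[x, y], the variety V(J) is the single point (0, 0), and

\mathbf{I}(V(J)) = (x, y) = \sqrt{(x^2, y)} = \sqrt{J}.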
Another way of putting it: the composition I(V(−)) = √(−) is a closure operator on the set of ideals of a ring.
See also
Jacobson radical
Nilradical of a ring
Real radical
Notes
Citations
References
Ideals (ring theory)
Closure operators | Radical of an ideal | Mathematics | 1,039 |
38,439,167 | https://en.wikipedia.org/wiki/Revolution%20in%20the%20Valley | Revolution in the Valley: The Insanely Great Story of How the Mac Was Made is a nonfiction book written by Andy Hertzfeld about the birth of the Apple Macintosh personal computer. The author was a core member of the team that built the Macintosh system software and the chief creator of the Mac's radical new user interface software. The book is a collection of anecdotes tracing the development of the Macintosh from a secret project in 1979 through its "triumphant introduction" in 1984. These anecdotes were originally published on the author's Folklore.org web site.
Content
The book focuses on the hardware design and software development by the original Macintosh team at Apple Computer, including sometimes technical details of ports and cards and code. It describes the Mac's introduction by Steve Jobs, and improvements made shortly thereafter. Steve Wozniak wrote the foreword.
The author reveals that both Steve Jobs and Bill Gates had first seen the innovative graphical user interface at the offices of Xerox's Palo Alto Research Center (PARC), which had prototyped the "desktop computer" concept by 1978.
References
Books about Apple Inc.
2004 non-fiction books
O'Reilly Media books | Revolution in the Valley | Technology | 239 |
26,103,936 | https://en.wikipedia.org/wiki/Integrated%20Biological%20Detection%20System | The Integrated Biological Detection System is a system used by the British Army and Royal Air Force for detecting chemical, biological, radiological, and nuclear (CBRN) agents or elements.
The Integrated Biological Detection System can provide early warning of a chemical or biological warfare attack and is in service with the United Kingdom Joint NBC Regiment. It can be installed in a container which can be mounted on a vehicle or ground dumped. It is also able to be transported by either a fixed-wing aircraft or by helicopter.
The system comprises
A detection suite, including equipment for atmospheric sampling
Meteorological station and GPS
CBRN filtration and environmental control for use in all climates
Chemical agent detection
An independent power supply
Cameras for 360 degree surveillance
A U.S. military system with a similar purpose and a similar name is the Biological Integrated Detection System (BIDS).
References
Biological warfare
British Army equipment
Chemical warfare
Toxicology in the United Kingdom | Integrated Biological Detection System | Chemistry,Biology,Environmental_science | 178 |
74,839,988 | https://en.wikipedia.org/wiki/Lanifibranor | Lanifibranor is a pan-PPAR (peroxisome proliferator-activated receptor) agonist and is the first medication that targets PPAR-alpha, PPAR-beta, and PPAR-gamma simultaneously. As of 2023, it is in a phase III trial for nonalcoholic steatohepatitis; its advantage over other drugs in phase III trials for the same condition is that it has shown improvements in both steatohepatitis and fibrosis.
References
PPAR agonists
Indoles
Benzothiazoles
Sulfonamides
Carboxylic acids
Chloroarenes
Experimental drugs developed for non-alcoholic fatty liver disease | Lanifibranor | Chemistry | 146 |
28,826,058 | https://en.wikipedia.org/wiki/La%20Mort%20de%20la%20Terre | The Death of the Earth (French: La Mort de la Terre) is a 1910 Belgian novel by J.-H. Rosny aîné.
Plot summary
In the far future, the Earth has become an immense, dry desert. Small communities of future humans, partially adapted to the harsher climate, survive united by the "Great Planetarium" communications web. The means for human survival are rapidly diminishing beyond repair, with the remaining supplies of water failing or becoming increasingly hard to find. Along with this, a barely comprehensible form of life – "ferromagnetals" ("les ferromagnétaux") – have begun to develop and spread within and throughout the Earth itself.
The narrative focuses mainly on a group of humans led by Targ, who at the beginning of the story is the "watchman" ("veilleur") of the Great Planetarium.
See also
1910 in science fiction
Dying Earth (genre)
External links
1910 novels
1910 science fiction novels
Belgian speculative fiction novels
French-language novels
Post-apocalyptic novels
Works by J.-H. Rosny aîné
Fiction set in the 7th millennium or beyond | La Mort de la Terre | Biology | 240 |
11,178,102 | https://en.wikipedia.org/wiki/HD%20192310 | HD 192310 (also known as 5 G. Capricorni or Gliese 785) is a star in the southern constellation of Capricornus. It is located in the solar neighborhood at a distance of , and is within the range of luminosity needed to be viewed from the Earth with the unaided eye. (According to the Bortle scale, it can be viewed from dark suburban skies.) HD 192310 is suspected of being a variable star, but this is unconfirmed.
Description
This is a K-type main sequence star with a stellar classification of K2+ V. HD 192310 has about 78% of the Sun's mass and, depending on the estimation method, 79% to 85% of the radius of the Sun. The effective temperature of the photosphere is about 5069 K, giving it the orange-hued glow of a K-type star. It is older than the Sun, with age estimates in the range 7.5–8.9 billion years. The proportion of elements other than hydrogen and helium, known as the metallicity, is similar to that of the Sun. It is spinning slowly, completing a rotation roughly every 48 days.
The space velocity components of this star are (U, V, W) = . It is following an orbit through the Milky Way galaxy that has an orbital eccentricity of 0.18 at a mean galactocentric distance of 8.1 kpc. The star will achieve perihelion in around 82,200 years when it comes within of the Sun.
Planetary system
The system has a Neptune-mass planet "b", discovered in 2010. A second planet "c" was found in this system in 2011 by the HARPS GTO program, along with the now-doubtful HD 85512 b and the planets of 82 G. Eridani. The uncertainty in the mass of the second planet was much higher than for the first because of the lack of coverage around the full orbit. Both planets may be similar in composition to Neptune. They are orbiting along the inner and outer edges of the habitable zone for this star.
A study in 2023 updated the parameters of these two planets, and identified a number of additional radial velocity signals. While most of these signals were attributed to stellar activity, one was considered a planet candidate. If real, this third planet would be a super-Earth orbiting closer to the star than the two known planets. However, another 2023 study did not find this candidate signal and also attributed it to stellar activity.
See also
List of star systems within 25–30 light-years
References
External links
K-type main-sequence stars
HR, 7722
Capricorni, 5
099825
192310
0785
Durchmusterung objects
7722
Planetary systems with two confirmed planets
Capricornus | HD 192310 | Astronomy | 584 |
198,711 | https://en.wikipedia.org/wiki/Rancidification | Rancidification is the process of complete or incomplete autoxidation or hydrolysis of fats and oils when exposed to air, light, moisture, or bacterial action, producing short-chain aldehydes, ketones and free fatty acids.
When these processes occur in food, undesirable odors and flavors can result. In processed meats, these flavors are collectively known as warmed-over flavor. In certain cases, however, the flavors can be desirable (as in aged cheeses).
Rancidification can also detract from the nutritional value of food, as some vitamins are sensitive to oxidation. Similar to rancidification, oxidative degradation also occurs in other hydrocarbons, such as lubricating oils, fuels, and mechanical cutting fluids.
Pathways
Three pathways for rancidification are recognized: hydrolytic, oxidative, and microbial.
Hydrolytic
Hydrolytic rancidity refers to the odor that develops when triglycerides are hydrolyzed and free fatty acids are released. This reaction of lipid with water may require a catalyst (such as a lipase, or acidic or alkaline conditions) leading to the formation of free fatty acids and glycerol. In particular, short-chain fatty acids, such as butyric acid, are malodorous. When short-chain fatty acids are produced, they serve as catalysts themselves, further accelerating the reaction, a form of autocatalysis.
Oxidative
Oxidative rancidity is associated with the degradation by oxygen in the air.
Free-radical oxidation
The double bonds of an unsaturated fatty acid can be cleaved by free-radical reactions involving molecular oxygen. This reaction causes the release of malodorous and highly volatile aldehydes and ketones. Because of the nature of free-radical reactions, the reaction is catalyzed by sunlight. Oxidation primarily occurs with unsaturated fats. For example, even though meat is held under refrigeration or in a frozen state, the poly-unsaturated fat will continue to oxidize and slowly become rancid. The fat oxidation process, potentially resulting in rancidity, begins immediately after the animal is slaughtered and the muscle, intra-muscular, inter-muscular and surface fat becomes exposed to oxygen of the air. This chemical process continues during frozen storage, though more slowly at lower temperature. Oxidative rancidity can be prevented by light-proof packaging, oxygen-free atmosphere (air-tight containers) and by the addition of antioxidants.
Enzyme-catalysed oxidation
A double bond of an unsaturated fatty acid can be oxidised by oxygen from the air in reactions catalysed by plant or animal lipoxygenase enzymes, producing a hydroperoxide as a reactive intermediate, as in free-radical peroxidation. The final products depend on conditions: the lipoxygenase article shows that if a hydroperoxide lyase enzyme is present, it can cleave the hydroperoxide to yield short-chain fatty acids and dicarboxylic acids (several of which were first discovered in rancid fats).
Microbial
Microbial rancidity refers to a water-dependent process in which microorganisms, such as bacteria or molds, use their enzymes, such as lipases, to break down fat. Pasteurization and/or addition of antioxidant ingredients such as vitamin E can reduce this process by destroying or inhibiting microorganisms.
Food safety
Despite concerns among the scientific community, there is little data on the health effects of rancidity or lipid oxidation in humans. Animal studies show evidence of organ damage, inflammation, carcinogenesis, and advanced atherosclerosis, although typically the dose of oxidized lipids is larger than what would be consumed by humans.
Antioxidants are often used as preservatives in fat-containing foods to delay the onset or slow the development of rancidity due to oxidation. Natural antioxidants include ascorbic acid (vitamin C) and tocopherols (vitamin E). Synthetic antioxidants include butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), TBHQ, propyl gallate and ethoxyquin. The natural antioxidants tend to be short-lived, so synthetic antioxidants are used when a longer shelf-life is preferred. The effectiveness of water-soluble antioxidants is limited in preventing direct oxidation within fats, but they are valuable in intercepting free radicals that travel through the aqueous parts of foods. A combination of water-soluble and fat-soluble antioxidants is ideal, usually in a ratio matched to the food's proportions of fat and water.
In addition, rancidification can be decreased by storing fats and oils in a cool, dark place with little exposure to oxygen or free radicals, since heat and light accelerate the rate of reaction of fats with oxygen. Antimicrobial agents can also delay or prevent rancidification by inhibiting the growth of bacteria or other micro-organisms that affect the process.
Oxygen scavenging technology can be used to remove oxygen from food packaging and therefore prevent oxidative rancidification.
Oxidative stability measurement
Oxidative stability is a measure of oil or fat resistance to oxidation. Because the process takes place through a chain reaction, the oxidation reaction has a period when it is relatively slow, before it suddenly speeds up. The time for this to happen is called the "induction time", and it is repeatable under identical conditions (temperature, air flow, etc.). There are a number of ways to measure the progress of the oxidation reaction. One of the most popular methods currently in use is the Rancimat method.
The Rancimat method is carried out using an air current at temperatures between 50 and 220 °C. The volatile oxidation products (largely formic acid) are carried by the air current into the measuring vessel, where they are absorbed (dissolved) in the measuring fluid (distilled water). By continuously measuring the conductivity of this solution, oxidation curves can be generated. The cusp point of the oxidation curve (the point where a rapid rise in the conductivity starts) gives the induction time of the rancidification reaction, and can be taken as an indication of the oxidative stability of the sample.
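The evaluation of the conductivity curve can be sketched in a few lines of code. The following is a minimal illustration, not an instrument's actual algorithm: it assumes hypothetical time and conductivity arrays and uses one common convention, locating the induction time where the baseline tangent meets the tangent at the steepest rise.

```python
import numpy as np

def induction_time(t, k):
    """Estimate the induction time (h) from a conductivity curve.

    t : sample times in hours; k : conductivity in uS/cm.
    The induction time is taken as the intersection of the baseline
    tangent with the tangent drawn at the steepest conductivity rise.
    """
    slope = np.gradient(k, t)
    i = int(np.argmax(slope))                 # steepest point of the curve
    base_n = max(3, len(t) // 10)             # initial, nearly flat region
    b_slope, b_icept = np.polyfit(t[:base_n], k[:base_n], 1)
    # Solve b_slope*t + b_icept = slope[i]*t + (k[i] - slope[i]*t[i]).
    return (k[i] - slope[i] * t[i] - b_icept) / (b_slope - slope[i])

# Synthetic demo: flat baseline, then a rapid rise after roughly 10 h.
t = np.linspace(0, 16, 200)
k = 5 + 0.1 * t + 40 / (1 + np.exp(-2 * (t - 12)))
print(f"estimated induction time: {induction_time(t, k):.1f} h")
```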
The Rancimat method, the oxidative stability instrument (OSI) and the oxidograph were all developed as automatic versions of the more complicated AOM (active oxygen method), which is based on measuring peroxide values for determining the induction time of fats and oils. Over time, the Rancimat method has become established, and it has been accepted into a number of national and international standards, for example AOCS Cd 12b-92 and ISO 6886.
See also
Fermentation
Food preservation
Lipid peroxidation
Preservative
Putrefaction
References
Further reading
Food preservation
Edible oil chemistry | Rancidification | Chemistry | 1,460 |
898,023 | https://en.wikipedia.org/wiki/Grivna | The grivna was a currency as well as a measure of weight used in Kievan Rus' and other states in Eastern Europe from the 11th century.
Name
The word grivna is derived from the Proto-Slavic word griva, meaning 'mane' or 'neck'. In Old East Slavic, it had the form grivĭna. In the modern East Slavic languages it has the forms grivna (Russian), hryvnia (Ukrainian) and hryŭnia (Belarusian).
The name of the contemporary currency of Ukraine, hryvnia, is derived from the grivna.
History
Early history
As its etymology implies, the word originally meant a necklace or a torque. Why it took on the meaning of a unit of weight is unclear. The grivnas that have been found at various archaeological sites are not necklaces but ingots of precious metal, usually silver. The weight and the shape of grivnas were not uniform, but varied by region. The grivnas of Novgorod and Pskov were thin, long, round-edged or three-edged ingots, while Kievan grivnas have rather the shape of an elongated rhombus. The material was either gold or silver, but silver was predominant. Originally the weight of a grivna was close to the Roman or Byzantine pound; the Kievan grivna weighed around 160 grams. The Novgorod grivna weighed about 204 grams and became the basis for monetary systems in northeastern Rus', including the emerging Grand Duchy of Moscow.
Alongside the "grivna of silver" there was the accounting "grivna of kuna". The latter originally signified a certain number of marten furs (куна kuna was the word for "marten" in Church Slavonic). From the 12th century, the "grivna of kuna" became another, smaller unit of weight, and signified as well a certain number of silver coins: the 2.5-gram nogata (from Arabic naqd, 'money; a coin') and the rezan (a cut dirham).
1 grivna of silver (204 g) = 4 grivnas of kuna (51 g) = 80 nogata coins = 100 kunas (marten furs) = 200 rezans = 400–600 vekshas (squirrel furs)
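These equalities form a simple conversion chain. A minimal sketch of the arithmetic (the unit table and function are illustrative; the veksha figure uses the midpoint of the quoted 400–600 range):

```python
# Number of each accounting unit that equals one grivna of silver.
UNITS_PER_GRIVNA = {
    "grivna of silver": 1,
    "grivna of kuna": 4,
    "nogata": 80,
    "kuna": 100,
    "rezan": 200,
    "veksha": 500,   # midpoint of the 400-600 range quoted above
}

def convert(amount, from_unit, to_unit):
    """Convert between Kievan Rus' accounting units via the grivna of silver."""
    grivnas = amount / UNITS_PER_GRIVNA[from_unit]
    return grivnas * UNITS_PER_GRIVNA[to_unit]

print(convert(40, "nogata", "rezan"))          # 40 nogatas = 100.0 rezans
print(convert(1, "grivna of kuna", "kuna"))    # 1 grivna of kuna = 25.0 kunas
```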
Later history
From the 14th century, when coins started to be minted in northeastern Rus' (first in Moscow), the currency system of silver bullion and furs was becoming obsolete. The grivna came to mean not a weight but rather a particular number of silver coins, by then called denga. At the same time, as early as the 13th century, the word ruble (rubl) had started to be used alongside the word grivna to mean a certain amount of either silver or silver coins. One account ruble was equal to 216 denga coins (each weighing about 0.8 gram). The grivna of kuna became simply the grivna and was equal to 14 dengas. Thus one ruble was equal to 15 new grivnas and 6 denga coins (15 × 14 + 6 = 216).
The weight of a denga coin in Moscow and Novgorod was different. In the 15th century, the Moscow denga fell as low as 0.4 gram, while the Novgorod denga remained the same. When in Moscow one ruble had been revalued to 200 denga coins, the exchange rate between Moscow and Novgorod denga coins was set to 2 to 1. Thus since the later 15th – the early 16th centuries one account ruble was equal to 100 Novgorod dengas (later known as kopeks) or to 200 Moscow dengas. In this system one grivna was equal to 10 kopeks or 20 dengas.
This last meaning survived into the 18th to 20th centuries when one grivennik or grivenka meant a 10-kopek coin.
Weight
The grivna as a silver bullion currency did not survive, but its meaning as a unit of weight became predominant. In the 15th–17th centuries there were two weight grivnas (or grivenkas): the lesser grivna of about 204 grams and the greater grivna of about 409 grams. From the middle of the 17th century the latter became known as the Russian pound (Фунт, funt). 40 Russian pounds, or 80 lesser grivnas (grivenkas), are equal to one pood.
See also
Obsolete Russian units of measurement
Manilla
History of Ukrainian hryvnia
References
Further reading
Units of mass
Obsolete units of measurement
Medieval currencies
Society of Kievan Rus'
Economy of Kievan Rus'
Pound (currency) | Grivna | Physics,Mathematics | 939 |
77,465,643 | https://en.wikipedia.org/wiki/Desmethylselegiline | Desmethylselegiline (DMS), also known as norselegiline or as N-propargyl-L-amphetamine, is an active metabolite of selegiline, a medication used in the treatment of Parkinson's disease and depression.
Like selegiline, DMS is a monoamine oxidase inhibitor (MAOI); specifically, it is a selective and irreversible inhibitor of monoamine oxidase B (MAO-B). In addition, it is a catecholaminergic activity enhancer (CAE) similarly to selegiline. The drug also produces levoamphetamine as an active metabolite, which is a norepinephrine–dopamine releasing agent with sympathomimetic and psychostimulant effects.
DMS has been studied much less extensively than selegiline and has not been developed or approved for medical use.
Pharmacology
Pharmacodynamics
DMS is a monoamine oxidase inhibitor (MAOI), similarly to selegiline. It is specifically a selective and irreversible inhibitor of monoamine oxidase B (MAO-B). The compound is also a catecholaminergic activity enhancer (CAE) like selegiline. The potency of DMS as a CAE appears to be similar to that of selegiline.
Aside from being an active metabolite of selegiline, DMS itself has been studied clinically. A single 10 mg oral dose of DMS inhibited platelet MAO-B activity by 68 ± 16%, relative to 94 ± 9% with a single 10 mg dose of selegiline. Subsequently, platelet MAO-B activity returned to baseline after 2 weeks. Hence, although less potent than selegiline, DMS is also an effective MAO-B inhibitor.
DMS has been found to be 60-fold less potent than selegiline as an MAO-B inhibitor in vitro. However, it was only 3-fold less potent than selegiline orally in vivo in rats with repeated administration. In other research, DMS was 6-fold less potent than selegiline in inhibition of platelet MAO-B activity.
Selegiline produces levomethamphetamine and levoamphetamine as active metabolites, whereas DMS produces only levoamphetamine as a metabolite. Unlike DMS and selegiline, levoamphetamine and levomethamphetamine are not active as MAO-B inhibitors at concentrations up to 100 μM in vitro. However, levoamphetamine is a releaser of norepinephrine and dopamine and has sympathomimetic and psychostimulant effects. Similarly to selegiline, but unlike levoamphetamine and levomethamphetamine, DMS itself is not a monoamine releasing agent.
DMS shows neuroprotective, antioxidant, and antiapoptotic activity similarly to selegiline. DMS is more potent in some of these effects than selegiline. The neuroprotective and antioxidant properties of DMS and selegiline appear to be independent of MAO-B inhibition. Both selegiline and DMS have been found to bind to and inhibit glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which may be involved in their neuroprotective effects.
Pharmacokinetics
Selegiline and DMS were compared in a clinical study in which 10 mg of each drug was administered orally. DMS showed 27-fold higher peak levels and 33-fold higher area-under-the-curve levels than selegiline in this study, suggesting that it has much greater oral bioavailability than selegiline.
Levoamphetamine is an active metabolite of DMS. Conversely, in contrast to selegiline, which metabolizes into both levomethamphetamine and levoamphetamine, levomethamphetamine is not a metabolite of DMS.
Selegiline is metabolized into DMS in the liver. With use of oral selegiline in humans, 86% of a dose is excreted in urine, with 1.1% of this being DMS, 59.2% being levomethamphetamine, and 26.3% being levoamphetamine. Levoamphetamine is formed with selegiline from both DMS and levomethamphetamine. However, levoamphetamine is only a minor metabolite of levomethamphetamine (2–3%). As a metabolite of selegiline, DMS has an elimination half-life ranging from 2.6 to 11 hours. The half-lives of both selegiline and DMS increase with continuous use of selegiline.
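Elimination half-lives like those quoted above are typically derived from concentration-time data by log-linear regression on the terminal phase. A minimal sketch with hypothetical numbers (not measured DMS data):

```python
import numpy as np

def elimination_half_life(t, c):
    """Estimate an elimination half-life from terminal-phase samples.

    Assumes first-order elimination, C(t) = C0 * exp(-k * t), so
    ln C falls linearly with slope -k, and t_half = ln(2) / k.
    """
    k = -np.polyfit(t, np.log(c), 1)[0]
    return np.log(2) / k

# Hypothetical terminal-phase samples consistent with a ~4 h half-life.
t = np.array([2.0, 4.0, 6.0, 8.0, 12.0])   # hours after dosing
c = 80.0 * np.exp(-0.173 * t)               # concentration, arbitrary units
print(f"t1/2 = {elimination_half_life(t, c):.1f} h")   # -> ~4.0 h
```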
Chemistry
Prodrugs
Prodrugs of DMS have been synthesized and studied.
Notes
References
Antiparkinsonian agents
Drugs with unknown mechanisms of action
Enantiopure drugs
Human drug metabolites
Monoamine oxidase inhibitors
Monoaminergic activity enhancers
Neuroprotective agents
Norepinephrine-dopamine releasing agents
Phenethylamines
Prodrugs
Propargyl compounds
Stimulants
Substituted amphetamines
TAAR1 agonists
Selegiline | Desmethylselegiline | Chemistry | 1,177 |
25,896,348 | https://en.wikipedia.org/wiki/Copper%E2%80%93tungsten | Copper–tungsten (tungsten–copper, CuW, or WCu) is a mixture of copper and tungsten. As copper and tungsten are not mutually soluble, the material is composed of distinct particles of one metal dispersed in a matrix of the other one. The microstructure is therefore rather a metal matrix composite instead of a true alloy.
The material combines the properties of both metals, resulting in a material that is heat-resistant, ablation-resistant, highly thermally and electrically conductive, and easy to machine.
Parts are made from the CuW composite by pressing the tungsten particles into the desired shape, sintering the compacted part, then infiltrating with molten copper. Sheets, rods, and bars of the composite mixture are available as well.
Commonly used copper–tungsten mixtures contain 10–50 wt.% copper, the remaining portion being mostly tungsten. The typical properties depend on the composition: mixtures with a lower weight fraction of copper have higher density, higher hardness, and higher resistivity. The typical density of CuW90, with 10% copper, is 16.75 g/cm3, compared with 11.85 g/cm3 for CuW50. CuW90 also has higher hardness (260 HB kgf/mm2) and higher resistivity (6.5 μΩ·cm) than CuW50.
Typical properties of commonly used copper tungsten compositions
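A first-order estimate of such densities follows from the inverse rule of mixtures for weight fractions. The sketch below is illustrative only; it assumes fully dense material, which is why real infiltrated parts (such as the CuW90 and CuW50 values quoted above) come out a few percent lower due to residual porosity.

```python
RHO_CU, RHO_W = 8.96, 19.25   # g/cm^3, densities of pure Cu and W

def cuw_density(wt_frac_cu):
    """Inverse rule of mixtures: 1/rho = w_Cu/rho_Cu + w_W/rho_W."""
    return 1.0 / (wt_frac_cu / RHO_CU + (1.0 - wt_frac_cu) / RHO_W)

for cu in (0.10, 0.50):
    print(f"CuW{int((1 - cu) * 100)}: {cuw_density(cu):.2f} g/cm^3")
# CuW90: 17.27 g/cm^3 (16.75 quoted); CuW50: 12.23 g/cm^3 (11.85 quoted)
```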
Applications
CuW composites are used where the combination of high heat resistance, high electrical and thermal conductivity, and low thermal expansion are needed. Some of the applications are in electric resistance welding, as electrical contacts, and as heat sinks. As contact material, the composite is resistant to erosion by electric arc. WCu alloys are also used in electrodes for electrical discharge machining and electrochemical machining.
The CuW75 composite, with 75% tungsten, is widely used in chip carriers, substrates, flanges, and frames for power semiconductor devices. The high thermal conductivity of copper together with the low thermal expansion of tungsten allows thermal expansion matching to silicon, gallium arsenide, and some ceramics. Other materials for these applications are copper–molybdenum alloy, AlSiC, and Dymalloy.
Composites with 70–90% tungsten are used in liners of some specialty shaped charges. Against homogeneous steel targets, penetration is enhanced by a factor of about 1.3 relative to copper, as both the density and the break-up time are increased. Tungsten-powder-based shaped charge liners are especially suitable for oil well completion. Other ductile metals can be used as a binder in place of copper as well. Graphite can be added to the powder as a lubricant.
CuW can also be used as a contact material in a vacuum. When the contact is very fine grained (VFG), the electrical conductivity is much higher than that of ordinary copper–tungsten. Copper–tungsten is a good choice for a vacuum contact due to its low cost, resistance to arc erosion, good conductivity, and resistance to mechanical wear and contact welding. CuW is usually used as a contact for vacuum, oil, and gas systems. It is not a good contact for air, since the surface oxidizes when exposed. CuW is less likely to erode in air when the concentration of copper in the material is higher. In air, CuW is used for arc tips, arc plates, and arc runners.
Copper–tungsten materials are often used for arcing contacts in medium- to high-voltage sulfur hexafluoride (SF6) circuit breakers in environments that can reach temperatures above 20,000 K. The copper–tungsten material's resistance to arc erosion can be increased by modifying the grain size and chemical composition.
The spark erosion (EDM) process also calls for copper–tungsten. This process usually uses graphite, but tungsten's high melting point (3420 °C) gives CuW electrodes a longer service life than graphite electrodes. This is crucial when the electrodes have undergone complex machining. Because the electrodes wear slowly, they provide more geometrical accuracy than other electrodes. These properties also let rods and tubes manufactured for spark erosion be made smaller in diameter and longer, since the material is less likely to chip and warp.
Properties
The electrical and thermal properties of the composites vary with the proportions. An increase in copper increases the thermal conductivity, which is important when the material is used in circuit breakers. Electrical resistivity increases with the percentage of tungsten in the composite, ranging from 3.16 μΩ·cm at 55% tungsten to 6.1 μΩ·cm when the composite contains 90% tungsten. An increase in tungsten raises the ultimate tensile strength up to a composition of 80% tungsten and 20% copper, which has an ultimate tensile strength of 663 MPa. Beyond this mixture, the ultimate tensile strength begins to decrease fairly rapidly.
References
Copper alloys
Tungsten compounds
Metal matrix composites
Chip carriers | Copper–tungsten | Chemistry | 1,059 |
70,030,960 | https://en.wikipedia.org/wiki/Paleo-inspiration | Paleo-inspiration is a paradigm shift that leads scientists and designers to draw inspiration from ancient materials (from art, archaeology, natural history or paleo-environments) to develop new systems or processes, particularly with a view to sustainability.
Paleo-inspiration has already contributed to numerous applications in fields as varied as green chemistry, the development of new artist materials, composite materials, microelectronics, and construction materials.
Semantics and definitions
While this type of application has long been known, the concept itself was coined by teams from the French National Centre for Scientific Research, the Massachusetts Institute of Technology and the Bern University of Applied Sciences, on the model of the term bioinspiration. They introduced the concept in a seminal paper published online in 2017 by the journal Angewandte Chemie.
Different names have been used to designate the corresponding systems, in particular: paleo-inspired, antiqua-inspired, antiquity-inspired or archaeomimetic. The use of these different names illustrates the extremely large time gap between the sources of inspiration, from millions of years ago when considering palaeontological systems and fossils, to much more recent archaeological or artistic material systems.
Properties sought
Distinct physico-chemical and mechanical properties are sought.
They may concern intrinsic properties of the paleo-inspired materials:
durability (materials found in certain contexts, having resisted alteration in these environments) and resistance to corrosion or alteration
electronic or magnetic properties
optical properties (especially from pigments or dyes, materials used for ceramic manufacture)
They can also concern processes:
processes with low energy or resource consumption, with a view to chemical processes favouring sustainable development
soft chemistry processes
The paleo-inspired approach
This approach combines several key stages.
Observation: This phase concerns materials, their properties, or the manufacturing processes (in relation in particular to the study of chaînes opératoires in archaeology, or the history of techniques, in particular that of artistic techniques), and the processes of alteration (or even the work carried out in experimental taphonomy). This is therefore a first phase of reverse engineering. Some of these studies fall within the field of anthropology. As in the case of bioinspiration, this phase is fundamental and is based on an approach that favours creative exploration of objects, with few preconceived ideas (serendipity).
Re-creation: A second phase follows aimed at simplifying materials, systems and processes in order to identify the fundamental mechanisms at the origin of the observed properties. This stage requires a back and forth between the synthesis of simplified systems and the characterisation of the new objects of study.
Design: Finally, there follows a conception or design phase, concerning materials, systems or processes, and aiming at their concrete implementation for applications.
Practical applications
Sustainable building materials
Emblematic examples include the microscopic study of the mineral phases present in Roman concretes to reproduce their durability in aggressive environments, particularly in the marine environment.
Durable colouring materials
A notable discovery is the elucidation of the atomic structure of Maya blue, a composite pigment combining a clay with an organic dye, which has led teams to produce pigments of other colours by combining clays with distinct organic dyes, such as "Maya violet".
References
Materials science
Archaeology
Paleontology | Paleo-inspiration | Physics,Materials_science,Engineering | 666 |
35,239,732 | https://en.wikipedia.org/wiki/Cyril%20Beeson | Cyril Frederick Cherrington Beeson CIE, D.Sc. (1889–1975) was an English entomologist and forest conservator who worked in India. Beeson was an expert on forest entomology who wrote numerous papers on insects, and whose book on Indian forest insects remains a standard work on the subject. After his retirement and return to England he became an antiquarian horologist.
Family
Beeson was born in Oxford on 10 February 1889 to Walter Thomas Beeson and Rose Eliza Beeson, née Clacey. Walter Beeson was Surveyor to St John's College, Oxford.
In 1922, Beeson married Marion Cossentine, daughter of Samuel Fitze. They had a daughter, Barbara Rose, who was born about 1925. Marion died in 1946 after a long period of ill-health. In 1971, aged 82, Beeson married his second wife, Mrs Margaret Athalie Baldwin Carbury, formerly of Kenya, daughter of Cecil William Allen.
Beeson died on 3 November 1975.
Education
Beeson attended City of Oxford High School for Boys, where his best friend was T. E. Lawrence (known to Beeson as "Ned", later coming to be known as Lawrence of Arabia). Lawrence called him by his nickname of "Scroggs".
At the age of 15 Beeson and Lawrence bicycled around Berkshire, Buckinghamshire and Oxfordshire, visited almost every village's parish church, studied their monuments and antiquities and made rubbings of their monumental brasses. The two schoolboys monitored building sites in Oxford and presented their finds to the Ashmolean Museum. The Ashmolean's Annual Report for 1906 said that the pair "by incessant watchfulness secured everything of antiquarian value which has been found". In the summers of 1906 and 1907 Beeson and Lawrence toured France by bicycle, collecting photographs, drawings and measurements of medieval castles. Beeson made many of the drawings that Lawrence used in his thesis The influence of the Crusades on European Military Architecture – to the end of the 12th century, which was published in 1936 as Crusader Castles.
Beeson entered the University of Oxford in 1907 to read geology. He was a non-collegiate student until 1908, when he won an exhibition that enabled him to enter St John's College. He graduated in 1910 but then changed disciplines to forestry, in which he obtained a diploma. He received his MA in 1917 and an Oxford D.Sc. in 1923.
Army service
Beeson was a captain in the Royal Army Medical Corps in the First World War.
Forest entomologist
From 1911 until 1941 Beeson worked for the Imperial Forest Service (IFS) as a research officer, forest conservator and forest entomologist. The IFS seconded him to study tropical and forest entomology in London and Germany, after which he first served in the Punjab. In 1913 he was appointed Forest Zoologist of India. Beeson was closely involved with the development of the Forest Research Institute at Dehradun. In 1922 his post was renamed Forest Entomologist. He served in the same position until his retirement in 1941, when he was made a Companion of the Order of the Indian Empire.
Beeson's first book, The Ecology and Control of the Forest Insects of India and the Neighbouring Countries was published in 1941. It remained the standard work in its field, being republished in 1961 and 1993.
Beeson returned to Oxford, where he became Director of the Imperial Forestry Bureau from 1945 to 1947. While he was Director of the IFB, Beeson and his wife moved to Adderbury in North Oxfordshire.
The scale insect genus Beesonia was named after Beeson, who collected the specimens described by Edward Ernest Green in 1926. It is placed in the family Beesoniidae.
Antiquarian horologist
When the couple moved to Adderbury, Beeson began to collect antique clocks, many of which originated from Oxfordshire. Beeson turned his scholarly and scientific approach to antiquarian horology, and in 1953 became a founder member of the Antiquarian Horological Society. He contributed many articles to the AHS's quarterly academic journal Antiquarian Horology, and edited it for the year 1959–60.
Beeson became a published authority on the prominent clockmakers Joseph Knibb (1640–1711) and John Knibb (1650–1722). Beeson's own collection included five clocks and three watches by John Knibb. He also developed a special interest in turret clocks and made an influential study of the clock installed in 1669 at Wadham College, Oxford, which he proposed was made by Joseph Knibb.
Beeson joined the Banbury Historical Society soon after its foundation in 1958. He was Chairman of the BHS 1959–60 and founding editor of its journal Cake and Cockhorse 1959–62. In 1962 the AHS and BHS jointly published the first edition of Beeson's monograph, Clockmaking in Oxfordshire 1400–1850.
In 1924 the Museum of the History of Science, Oxford started a small collection of historic clocks and watches. In 1966 Beeson greatly expanded this by presenting the Museum with his own historic collection, which included 42 longcase clocks, 24 other clocks and 13 watches. In 1967 the Museum published a second, enlarged edition of his book Clockmaking in Oxfordshire 1400–1850. In 1971 the Museum published a broader study by Beeson, English Church Clocks 1280–1850: History and Classification. This led the AHS in 1973 to form its turret clock section, of which Beeson became chairman. In 1972 Lord Bullock, Vice-Chancellor of the University of Oxford opened the Museum of the History of Science's Beeson Room to house its horological collection.
For his final book Beeson returned to one of the castles in France that had interested him and T.E. Lawrence as teenagers. Perpignan 1356: The Making of a Tower Clock and Bell for the King's Castle is a substantial account of the tower clock and bell made in 1356 for the Palace of the Kings of Majorca at Perpignan.
Published works
Over a period of more than 30 years Beeson published more than 60 scientific articles on tropical forest insects. He also edited the Indian forestry journal. Listed below are only the books that Beeson wrote, including journal articles that were republished as books.
References
Sources
1889 births
1975 deaths
Military personnel from Oxford
Alumni of St John's College, Oxford
British Army personnel of World War I
Royal Army Medical Corps officers
English entomologists
Companions of the Order of the Indian Empire
Horology
English antiquarians
Historians of technology
Imperial Forestry Service officers
Naturalists from British India
20th-century English historians
People from Oxford
20th-century British zoologists
20th-century antiquarians | Cyril Beeson | Physics | 1,377 |
7,900,534 | https://en.wikipedia.org/wiki/Glaspalast%20%28Munich%29 | The Glaspalast (Glass Palace) was a glass and iron exhibition building located in the Old botanical garden in Munich modeled after the Crystal Palace in London. The Glaspalast opened for the first General German Industrial Exhibition on 15 July 1854.
Planning
Following other examples around Europe, the Glaspalast was ordered by Maximilian II, King of Bavaria, in order to hold the Erste Allgemeine Deutsche Industrieausstellung (First General German Industrial Exhibition) on 15 July 1854.
Originally it was planned to erect the building on a different site, but the relevant Commission decision preferred an area near the railway station. Designed by architect August von Voit and built by MAN AG, the building was erected in 1854 to the north of the Old Botanical Garden close to the Stachus.
Construction
Following the completion of the Schrannenhalle in 1853 and the planned conservatory of the Munich Residence, a glass and cast-iron design was used, drawing on the experience gained from those structures for this modern building.
As with the Crystal Palace in London, initial designs were relatively complex. Due to the short time available for construction, the design was significantly simplified and relied on use of standard components. Conventional construction methods were not possible due to the large amount of building materials required.
The elongated rectangular glass palace took the form of a five-nave, two-storey main hall with a transept in the middle and rectangular extensions at the ends of the longitudinal nave; it was 234 metres long, 67 metres wide and 25 metres high.
The building was built entirely of glass and cast iron; load-bearing walls were completely omitted. The 1,700 tons of prefabricated iron parts were made by Cramer-Klett in Nuremberg. Cramer-Klett was at this time the leading company in southern Germany in the field of iron construction, having previously built the Schrannenhalle in Munich as well as the Maximilian II conservatory. The glass for this construction was produced in the more traditional Schmidsfelden glass works.
Construction took a mere six months, from 31 December 1853 to 7 June 1854, during which time 37,000 panes of glass were installed. The total cost of construction was 800,000 guldens.
The Erste Allgemeine Deutsche Industrieausstellung opened five weeks later, only three years after the completion of the Crystal Palace in London, which had served as its model.
Use
First General German industry exhibition
Just three years after the completion of the Crystal Palace in London, which served as a model, the First General German Industrial Exhibition opened at the newly built glass palace on 15 July 1854. However, the opening was overshadowed when first the staff and later the exhibition guests were struck by cholera.
Electrification
In 1882 the first electrically lit international electrotechnical exhibition took place in the Glass Palace. The German engineer Oskar von Miller had built a 2000 volt DC overhead power line from Miesbach, 50 km distant, to bring power to Munich. At the exhibition, an electrically powered pump for an artificial waterfall demonstrated the feasibility of bringing electrical power over long distances.
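The significance of the demonstration lies in how transmission losses scale with voltage. The sketch below uses assumed round figures, not the actual 1882 line data: for a fixed power fed into a line of fixed resistance, the current, and hence the I²R loss, falls with the square of the transmission voltage.

```python
P = 1_500.0   # watts fed into the line (assumed)
R = 20.0      # ohms of total line resistance (assumed)

for v in (200.0, 2_000.0):
    i = P / v                 # line current at this transmission voltage
    loss = i ** 2 * R         # power dissipated in the line itself
    print(f"{v:>6.0f} V: {i:.2f} A, line loss {loss:7.1f} W "
          f"({100 * loss / P:.1f}% of input)")
# 200 V loses 75% of the input power; 2000 V loses under 1%.
```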
The Glass Palace as a venue of art exhibitions
In 1858, the "First German general and historical art exhibition" was organized in the palace, followed in 1869 by the "I. International Art Exhibition" and in 1888 by the "III. International Art Exhibition".
From 1889, the Glass Palace was used almost exclusively for art exhibitions, becoming a forum and marketplace of the international art trade.
Other uses
When it was planned, it was assumed that following the industrial exhibition the Glaspalast would be used as a greenhouse. However, it was used almost exclusively for international art exhibitions and artists' festivals.
Fire
The building was destroyed in a fire on 6 June 1931, a fate shared with the other crystal palaces. The cause of the fire was later determined to be arson. The fire in the Glaspalast irretrievably destroyed more than 3,000 artworks, among them more than 110 paintings from the early 19th century, including many by Caspar David Friedrich, Moritz von Schwind, Karl Blechen, and Philipp Otto Runge. A further 1,000 works by artists contemporary at that time were heavily damaged, and only 80 artworks were recovered unharmed.
The daily newspaper Neues Wiener Tagblatt reported on the fire in a telegram the following day, 7 June 1931 ("The fire of the Munich Glass Palace", p. 4).
Other
After the fire, plans were made to rebuild the Glaspalast. However, the plans were abandoned in 1933 after the seizure of power by the new Nazi government. Instead of rebuilding the palace, the government built the Haus der Kunst (House of Art) on the Prinzregentenstraße near the Englischer Garten (a public park).
In 1936 a small exhibition pavilion was built, but it was destroyed in World War II. It was rebuilt by artists after the war.
The Park Cafe now stands on the site of the Glaspalast.
The fountain of the Glaspalast, which remained intact, today stands in the center of the Weißenburger Platz in the Haidhausen quarter of Munich.
Footnotes
Sources
German Wikipedia
External links
Historisches Lexikon Bayerns: Glaspalast, München Several photographs of the interior and exterior of the Glaspalast
Buildings and structures in Munich
1854 establishments in Bavaria
Glass architecture
Demolished buildings and structures in Munich
Maxvorstadt
Buildings and structures destroyed by arson
Buildings and structures completed in 1854
Former palaces in Germany | Glaspalast (Munich) | Materials_science,Engineering | 1,119 |
50,702,547 | https://en.wikipedia.org/wiki/Selective%20PPAR%20modulator | A selective PPAR modulator (SPPARM) is a selective receptor modulator of the peroxisome proliferator-activated receptor (PPAR). Examples include SPPARMs of the PPARγ, BADGE, EPI-001, INT-131, MK-0533, and S26948.
See also
PPAR agonist
References
PPAR agonists | Selective PPAR modulator | Chemistry | 80 |
22,219,418 | https://en.wikipedia.org/wiki/955%20acorn%20triode | The type 955 triode "acorn tube" is a small triode thermionic valve (vacuum tube in USA) designed primarily to operate at high frequency. Although data books specify an upper limit of 400–600 MHz, some circuits may obtain gain up to about 900 MHz. Interelectrode capacitances and Miller capacitances are minimized by the small dimensions of the device and the widely separated pins. The connecting pins are placed around the periphery of the bulb and project radially outward: this maintains short internal leads with low inductance, an important property allowing operation at high frequency. The pins fit a special socket fabricated as a ceramic ring in which the valve itself occupies the central space. The 955 was developed by RCA and was commercially available in 1935.
The 955 is one of about a dozen types of "acorn valve", so called because their size and shape are similar to the acorn (the nut of the oak tree), introduced starting in 1935 and designed to work in the VHF range. The 954 and 956 types are sharp and remote cut-off pentodes, respectively, both also with indirect 6.3 V, 150 mA heaters. Types 957, 958 and 959 are for portable equipment and have 1.25 V NiCd battery heaters. The 957 is a medium-μ signal triode, the 958 is a transmitting triode with dual, paralleled filaments for increased emission, and the 959 is a sharp cut-off pentode like the 954. The 957 and 959 draw 50 mA heater current, the 958 twice as much. In 1942, the 958A with tightened emission specifications was introduced, after it turned out that 958s with excessively high emission kept working after the filament power was turned off, the anode current alone heating the filament sufficiently.
Pin connections
When viewing the device from above (the end without the exhaust tip), the pins are arranged in a group of three and a group of two. Starting with the centre pin of the group of three and going clockwise, the pins are cathode, heater, grid, anode, heater.
Ratings
The 955 is an indirectly heated triode with heater electrically isolated from the cathode. The heater has a 6.3 volt rating, which it shares with many other common thermionic valves/electron tubes, and it draws about 150 mA.
The maximum anode voltage is 250 V, with an anode current of 420 microamperes and an anode load of 250 kilohms; the maximum anode current is 4.5 mA at an anode voltage of 180 V with an anode load of 20 kilohms. The 955 is designed to be used in the frequency range of 60–600 MHz (5–0.5 metres wavelength).
The amplification factor obtained is between 20 and 25, depending on the details of the specific stage design and the operating voltage.
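The stage gain's dependence on the load can be illustrated with the standard formula for a resistance-loaded triode stage, A = μ·RL/(rp + RL). In the sketch below, μ = 25 is the amplification factor quoted above, while the plate (anode) resistance rp is an assumed illustrative value, not a published 955 specification:

```python
MU = 25.0       # amplification factor of the 955 (upper quoted value)
R_P = 12e3      # plate resistance in ohms (assumed, for illustration)

def stage_gain(r_load):
    """Voltage gain of a resistance-loaded triode stage."""
    return MU * r_load / (R_P + r_load)

for r_load in (20e3, 250e3):
    print(f"R_L = {r_load / 1e3:5.0f} kohm: gain ~ {stage_gain(r_load):.1f}")
# The gain approaches mu only when R_L is much larger than r_p.
```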
See also
Micropup
References
Products introduced in 1935
Vacuum tubes
Acorns | 955 acorn triode | Physics | 639 |
54,536,319 | https://en.wikipedia.org/wiki/HPA-23 | HPA-23, sometimes known as antimonium tungstate, is an antiretroviral drug that was used for the treatment of HIV infection. It achieved widespread publicity as an effective treatment for HIV and AIDS beginning in 1984, just one year after HIV was first identified. Later testing failed to demonstrate any efficacy and some patients suffered serious side effects from the drug, including liver failure.
History
HPA-23 was developed by Rhône-Poulenc at the Pasteur Institute in the 1970s and used in France on an experimental basis to treat HIV and AIDS patients beginning in 1984. The inventors of the drug, as listed in its patent, were Jean-Claude Chermann, Dominique Dormont, Etienne Vilmer, Bruno Spire, Françoise Barré-Sinoussi, Luc Montagnier, and Willy Rozenbaum. While the drug was not presented as a cure for HIV/AIDS, it was suggested it could arrest replication and spread of the virus.
The United States, which had a more stringent drug approval process than France, delayed authorizing use of HPA-23 even for clinical trials, prompting an angry outcry and an exodus of more than 100 American AIDS patients to France to seek treatment, encouraged in part by a French call for American volunteers.
Bill Kraus, who received HPA-23 dosages in France as a medical tourist, "pinned his entire hope for survival" on the drug, even to the exclusion of other experimental medications then in development. After actor Rock Hudson received treatment at a Paris hospital with HPA-23, a representative of the National Gay Task Force declared that "something is wrong with the health-care system when a wealthy man and a friend of the President has to go to Europe for treatment". At the same time, however, some within the American scientific community cautioned AIDS sufferers against putting too much hope in HPA-23 and generally supported the Food and Drug Administration's (FDA) conservative approach to certification. William A. Haseltine commented that reports of the drug's success in France were based on "the crummiest kind of anecdotal stories – they don't do the scientifically controlled trials". Physicians at San Francisco General Hospital's AIDS Clinic echoed Haseltine's concerns, noting that French testing of the drug was done without any type of control group and that the drug's high toxicity made it potentially dangerous to patients already suffering serious infections. Public Citizen, which was often critical of FDA decisions, also came out in support of the agency's timeline for certification.
In August 1985, under increasing public pressure to fast track approval of the drug, the United States Food and Drug Administration permitted the use of HPA-23 in extremely limited human testing. In the ensuing clinical trials no improvement in the condition of the test subjects was observed, with some even showing increased levels of HIV replication and three patients suffering liver failure triggered by the drug. By 1986, the National Academy of Sciences had concluded that no therapeutic benefits for persons infected with HIV could be attributed to HPA-23. It was subsequently abandoned as a treatment option.
See also
Ammonium paratungstate
References
HIV/AIDS
French inventions
Tungstates
Withdrawn drugs
Antimony compounds
Sodium compounds
Ammonium compounds
1984 in science | HPA-23 | Chemistry | 667 |
14,256,746 | https://en.wikipedia.org/wiki/Alpha%20Fornacis | Alpha Fornacis (α Fornacis, abbreviated Alpha For, α For) is a triple star system in the southern constellation of Fornax. It is the brightest star in the constellation and the only one brighter than magnitude 4.0. Based on parallax measurements obtained during the Hipparcos mission, it is approximately 46 light-years (14 parsecs) distant from the Sun.
Its three components are designated Alpha Fornacis A (officially named Dalim), Alpha Fornacis Ba and Alpha Fornacis Bb.
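The quoted distance follows directly from the parallax relation d [pc] = 1 / p [arcsec]. A minimal sketch, using a rounded parallax of about 0.070 arcseconds (roughly the Hipparcos value for this system):

```python
parallax_arcsec = 0.070          # rounded value, for illustration
d_pc = 1.0 / parallax_arcsec     # distance in parsecs
d_ly = d_pc * 3.2616             # light-years per parsec
print(f"{d_pc:.1f} pc ~ {d_ly:.0f} ly")   # -> 14.3 pc ~ 47 ly
```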
Nomenclature
α Fornacis (Latinised to Alpha Fornacis) is the system's Bayer designation. The designations of the three components as Alpha Fornacis A, Ba and Bb derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). It was formerly designated 12 Eridani (12 Eri) by Flamsteed.
Indigenous Arabs had named both Alpha Eridani and Fomalhaut ظَلِيم al-ẓalīm, a local word for 'ostrich'. Later, Arabian astronomers transferred the appellation to Theta Eridani, as they could not see those other stars from their location.
In recent times, Italian astronomer Giuseppe Piazzi applied it with the spelling Dalim to his "III 13" (= α For) in his Palermo Catalogue, and Elijah Burritt labeled it Fornacis in his Atlas.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Dalim for the component Alpha Fornacis A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
Properties
Alpha Fornacis has a high proper motion, and the system displays an excess of infrared emission, which may indicate the presence of circumstellar material such as a debris disk. The space velocity components of this star are (U, V, W) = (−35, +20, +30) km/s. Approximately 350,000 years ago, Alpha Fornacis experienced a close encounter with the A-type main-sequence star Nu Horologii. The two passed close by one another, and both stars have debris disks.
Alpha Fornacis A has a stellar classification of F8IV, where the luminosity class IV indicates this is a subgiant star that has just evolved off the main sequence. It has 33% more mass than the Sun and is an estimated 2.9 billion years old.
The secondary, Alpha Fornacis B, has been identified as a blue straggler, and has likely accumulated material from a third star in the past. It is a strong source of X-rays and is 78% as massive as the Sun. Such a third star, designated Alpha Fornacis Bb, was detected in 2016. Alpha Fornacis Ba is enriched in barium relative to star A, suggesting that its companion is likely a white dwarf.
References
External links
α For (Dalim) – SKY-MAP.ORG
LHL Digital Collections – Linda Hall library
F-type subgiants
Fornacis, Alpha
Blue stragglers
Triple star systems
Fornax
Fornacis, Alpha
CD−29 1177
Circumstellar disks
Eridani, 12
0127
020010
014879
0127
Dalim
J03120443-2859156 | Alpha Fornacis | Astronomy | 748 |
1,031,816 | https://en.wikipedia.org/wiki/Chemokine | Chemokines (), or chemotactic cytokines, are a family of small cytokines or signaling proteins secreted by cells that induce directional movement of leukocytes, as well as other cell types, including endothelial and epithelial cells. In addition to playing a major role in the activation of host immune responses, chemokines are important for biological processes, including morphogenesis and wound healing, as well as in the pathogenesis of diseases like cancers.
Cytokine proteins are classified as chemokines according to behavior and structural characteristics. In addition to being known for mediating chemotaxis, chemokines are all approximately 8–10 kilodaltons in mass and have four cysteine residues in conserved locations that are key to forming their 3-dimensional shape.
These proteins have historically been known under several other names including the SIS family of cytokines, SIG family of cytokines, SCY family of cytokines, Platelet factor-4 superfamily or intercrines. Some chemokines are considered pro-inflammatory and can be induced during an immune response to recruit cells of the immune system to a site of infection, while others are considered homeostatic and are involved in controlling the migration of cells during normal processes of tissue maintenance or development. Chemokines are found in all vertebrates, some viruses and some bacteria, but none have been found in other invertebrates.
Chemokines have been classified into four main subfamilies: CXC, CC, CX3C and C. All of these proteins exert their biological effects by interacting with G protein-linked transmembrane receptors called chemokine receptors, that are selectively found on the surfaces of their target cells.
Function
The major role of chemokines is to act as a chemoattractant to guide the migration of cells. Cells that are attracted by chemokines follow a signal of increasing chemokine concentration towards the source of the chemokine. Some chemokines control cells of the immune system during processes of immune surveillance, such as directing lymphocytes to the lymph nodes so they can screen for invasion of pathogens by interacting with antigen-presenting cells residing in these tissues. These are known as homeostatic chemokines and are produced and secreted without any need to stimulate their source cells. Some chemokines have roles in development; they promote angiogenesis (the growth of new blood vessels), or guide cells to tissues that provide specific signals critical for cellular maturation. Other chemokines are inflammatory and are released from a wide variety of cells in response to bacterial infection, viruses and agents that cause physical damage such as silica or the urate crystals that occur in gout. Their release is often stimulated by pro-inflammatory cytokines such as interleukin 1. Inflammatory chemokines function mainly as chemoattractants for leukocytes, recruiting monocytes, neutrophils and other effector cells from the blood to sites of infection or tissue damage. Certain inflammatory chemokines activate cells to initiate an immune response or promote wound healing. They are released by many different cell types and serve to guide cells of both innate immune system and adaptive immune system.
Types by function
Chemokines are functionally divided into two groups:
Homeostatic: these are constitutively produced in certain tissues and are responsible for basal leukocyte migration. They include CCL14, CCL19, CCL20, CCL21, CCL25, CCL27, CXCL12 and CXCL13. This classification is not strict; for example, CCL20 can also act as a pro-inflammatory chemokine.
Inflammatory: these are formed under pathological conditions (on pro-inflammatory stimuli, such as IL-1, TNF-alpha, LPS, or viruses) and actively participate in the inflammatory response, attracting immune cells to the site of inflammation. Examples are CXCL8, CCL2, CCL3, CCL4, CCL5, CCL11 and CXCL10.
Homing
The main function of chemokines is to manage the migration of leukocytes (homing) to the appropriate anatomical locations in inflammatory and homeostatic processes.
Basal: homeostatic chemokines are produced basally in the thymus and lymphoid tissues. Their homeostatic function in homing is best exemplified by the chemokines CCL19 and CCL21 (expressed within lymph nodes and on lymphatic endothelial cells) and their receptor CCR7 (expressed on cells destined for homing to these organs). These ligands make it possible to route antigen-presenting cells (APCs) to lymph nodes during the adaptive immune response. Other homeostatic chemokine receptors include CCR9, CCR10, and CXCR5, which are important as part of the cell addresses for tissue-specific homing of leukocytes. CCR9 supports the migration of leukocytes into the intestine, CCR10 to the skin, and CXCR5 supports the migration of B cells to follicles of lymph nodes. Likewise, CXCL12 (SDF-1), constitutively produced in the bone marrow, promotes the proliferation of progenitor B cells in the bone marrow microenvironment.
Inflammatory: inflammatory chemokines are produced in high concentrations during infection or injury and determine the migration of inflammatory leukocytes into the damaged area. Typical inflammatory chemokines include CCL2, CCL3 and CCL5, and CXCL1, CXCL2 and CXCL8. A typical example is CXCL8, which acts as a chemoattractant for neutrophils. In contrast to the homeostatic chemokine receptors, there is significant promiscuity (redundancy) in the binding of inflammatory chemokines to their receptors. This often complicates research on receptor-specific therapeutics in this area.
Types by cell attracted
Monocytes / macrophages: the key chemokines that attract these cells to the site of inflammation include: CCL2, CCL3, CCL5, CCL7, CCL8, CCL13, CCL17 and CCL22.
T-lymphocytes: the four key chemokines involved in the recruitment of T lymphocytes to the site of inflammation are CCL2, CCL1, CCL22 and CCL17. Furthermore, CXCR3 expression by T cells is induced following T-cell activation, and activated T cells are attracted to sites of inflammation where the IFN-γ-inducible chemokines CXCL9, CXCL10 and CXCL11 are secreted.
Mast cells: these express several chemokine receptors on their surface: CCR1, CCR2, CCR3, CCR4, CCR5, CXCR2, and CXCR4. The ligands of these receptors CCL2 and CCL5 play an important role in mast cell recruitment and activation in the lung. There is also evidence that CXCL8 may be inhibitory to mast cells.
Eosinophils: the migration of eosinophils into various tissues involved several chemokines of CC family: CCL11, CCL24, CCL26, CCL5, CCL7, CCL13, and CCL3. Chemokines CCL11 (eotaxin) and CCL5 (RANTES) acts through a specific receptor CCR3 on the surface of eosinophils, and eotaxin plays an essential role in the initial recruitment of eosinophils into the lesion.
Neutrophils: these are regulated primarily by CXC chemokines. For example, CXCL8 (IL-8) is a chemoattractant for neutrophils and also activates their metabolism and degranulation.
Structural characteristics
Proteins are classified into the chemokine family based on their structural characteristics, not just their ability to attract cells. All chemokines are small, with a molecular mass of between 8 and 10 kDa. They are approximately 20-50% identical to each other; that is, they share gene sequence and amino acid sequence homology. They all also possess conserved amino acids that are important for creating their 3-dimensional or tertiary structure, such as (in most cases) four cysteines that interact with each other in pairs to create a Greek key shape that is a characteristic of chemokines. Intramolecular disulfide bonds typically join the first to third, and the second to fourth cysteine residues, numbered as they appear in the protein sequence of the chemokine. Typical chemokine proteins are produced as pro-peptides, beginning with a signal peptide of approximately 20 amino acids that gets cleaved from the active (mature) portion of the molecule during the process of its secretion from the cell. The first two cysteines, in a chemokine, are situated close together near the N-terminal end of the mature protein, with the third cysteine residing in the centre of the molecule and the fourth close to the C-terminal end. A loop of approximately ten amino acids follows the first two cysteines and is known as the N-loop. This is followed by a single-turn helix, called a 310-helix, three β-strands and a C-terminal α-helix. These helices and strands are connected by turns called 30s, 40s and 50s loops; the third and fourth cysteines are located in the 30s and 50s loops.
Types by structure
Members of the chemokine family are divided into four groups depending on the spacing of their first two cysteine residues. Thus the nomenclature for chemokines is, e.g.: CCL1 for the ligand 1 of the CC-family of chemokines, and CCR1 for its respective receptor.
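The cysteine-spacing rule lends itself to a simple illustration. The sketch below (hypothetical sequence fragments; real curation also checks the remaining cysteines and the overall fold) classifies a mature sequence by the gap between its first two cysteines, and checks for the ELR motif that marks neutrophil-attracting CXC chemokines, discussed below:

```python
def chemokine_class(seq):
    """Classify a chemokine by the spacing of its first two cysteines."""
    cys = [i for i, aa in enumerate(seq) if aa == "C"]
    if len(cys) < 2:
        return "not a typical chemokine"
    gap = cys[1] - cys[0] - 1          # residues between Cys1 and Cys2
    if len(cys) == 2 and gap > 3:
        return "C"                      # only two cysteines, far apart
    return {0: "CC", 1: "CXC", 3: "CX3C"}.get(gap, "unclassified")

def has_elr_motif(seq):
    """True if Glu-Leu-Arg immediately precedes the first cysteine."""
    i = seq.find("C")
    return i >= 3 and seq[i - 3:i] == "ELR"

frag = "ASVATELRCQCLQTLQ"              # hypothetical CXC-like fragment
print(chemokine_class(frag), has_elr_motif(frag))   # -> CXC True
print(chemokine_class("APYGADTPTACCFSYT"))          # -> CC (hypothetical)
```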
CC chemokines
The CC chemokine (or β-chemokine) proteins have two adjacent cysteines (amino acids), near their amino terminus.
There have been at least 27 distinct members of this subgroup reported for mammals, called CC chemokine ligands (CCL)-1 to -28; CCL10 is the same as CCL9. Chemokines of this subfamily usually contain four cysteines (C4-CC chemokines), but a small number of CC chemokines possess six cysteines (C6-CC chemokines). C6-CC chemokines include CCL1, CCL15, CCL21, CCL23 and CCL28. CC chemokines induce the migration of monocytes and other cell types such as NK cells and dendritic cells.
Examples of CC chemokine include monocyte chemoattractant protein-1 (MCP-1 or CCL2) which induces monocytes to leave the bloodstream and enter the surrounding tissue to become tissue macrophages.
CCL5 (or RANTES) attracts cells such as T cells, eosinophils and basophils that express the receptor CCR5.
Increased CCL11 levels in blood plasma are associated with aging (and reduced neurogenesis) in mice and humans.
CXC chemokines
The two N-terminal cysteines of CXC chemokines (or α-chemokines) are separated by one amino acid, represented in this name with an "X". There have been 17 different CXC chemokines described in mammals, that are subdivided into two categories, those with a specific amino acid sequence (or motif) of glutamic acid-leucine-arginine (or ELR for short) immediately before the first cysteine of the CXC motif (ELR-positive), and those without an ELR motif (ELR-negative). ELR-positive CXC chemokines specifically induce the migration of neutrophils, and interact with chemokine receptors CXCR1 and CXCR2. An example of an ELR-positive CXC chemokine is interleukin-8 (IL-8), which induces neutrophils to leave the bloodstream and enter into the surrounding tissue. Other CXC chemokines that lack the ELR motif, such as CXCL13, tend to be chemoattractant for lymphocytes. CXC chemokines bind to CXC chemokine receptors, of which seven have been discovered to date, designated CXCR1-7.
C chemokines
The third group of chemokines is known as the C chemokines (or γ chemokines), and is unlike all other chemokines in that it has only two cysteines; one N-terminal cysteine and one cysteine downstream. Two chemokines have been described for this subgroup and are called XCL1 (lymphotactin-α) and XCL2 (lymphotactin-β).
CX3C chemokines
A fourth group has also been discovered, whose members have three amino acids between the two cysteines; it is termed the CX3C chemokine (or δ-chemokine) group. The only CX3C chemokine discovered to date is called fractalkine (or CX3CL1). It is both secreted and tethered to the surface of the cell that expresses it, thereby serving as both a chemoattractant and as an adhesion molecule.
Receptors
Chemokine receptors are G protein-coupled receptors containing 7 transmembrane domains that are found on the surface of leukocytes. Approximately 19 different chemokine receptors have been characterized to date, which are divided into four families depending on the type of chemokine they bind; CXCR that bind CXC chemokines, CCR that bind CC chemokines, CX3CR1 that binds the sole CX3C chemokine (CX3CL1), and XCR1 that binds the two XC chemokines (XCL1 and XCL2). They share many structural features; they are similar in size (with about 350 amino acids), have a short, acidic N-terminal end, seven helical transmembrane domains with three intracellular and three extracellular hydrophilic loops, and an intracellular C-terminus containing serine and threonine residues important for receptor regulation. The first two extracellular loops of chemokine receptors each has a conserved cysteine residue that allow formation of a disulfide bridge between these loops. G proteins are coupled to the C-terminal end of the chemokine receptor to allow intracellular signaling after receptor activation, while the N-terminal domain of the chemokine receptor determines ligand binding specificity.
Signal transduction
Chemokine receptors associate with G proteins to transmit cell signals following ligand binding. Activation of G proteins by chemokine receptors causes the subsequent activation of an enzyme known as phospholipase C (PLC). PLC cleaves a molecule called phosphatidylinositol (4,5)-bisphosphate (PIP2) into two second-messenger molecules known as inositol trisphosphate (IP3) and diacylglycerol (DAG) that trigger intracellular signaling events; DAG activates another enzyme called protein kinase C (PKC), and IP3 triggers the release of calcium from intracellular stores. These events promote many signaling cascades (such as the MAP kinase pathway) that generate responses like chemotaxis, degranulation, release of superoxide anions and changes in the avidity of cell adhesion molecules called integrins within the cell harbouring the chemokine receptor.
Infection control
The discovery that the β chemokines RANTES, MIP (macrophage inflammatory protein) 1α and 1β (now known as CCL5, CCL3 and CCL4, respectively) suppress HIV-1 provided the initial connection, and indicated that these molecules might control infection as part of immune responses in vivo, and that sustained delivery of such inhibitors has the capacity for long-term infection control. The association of chemokine production with antigen-induced proliferative responses, more favorable clinical status in HIV infection, as well as with an uninfected status in subjects at risk for infection, suggests a positive role for these molecules in controlling the natural course of HIV infection.
See also
Paracrine signalling
References
External links
The cytokine family database – Chemokines at kumamoto-u.ac.jp
The correct chemokine nomenclature at rndsystems.com
Cytokines
Signal transduction | Chemokine | Chemistry,Biology | 3,655 |
448,518 | https://en.wikipedia.org/wiki/Heisenberg%20group | In mathematics, the Heisenberg group H, named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form

$$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}$$

under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group").
The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space.
Three-dimensional case
In the three-dimensional case, the product of two Heisenberg matrices is given by

$$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & a' & c' \\ 0 & 1 & b' \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & a+a' & c+c'+ab' \\ 0 & 1 & b+b' \\ 0 & 0 & 1 \end{pmatrix}.$$

As one can see from the term ab′, the group is non-abelian.

The neutral element of the Heisenberg group is the identity matrix, and inverses are given by

$$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -a & ab-c \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{pmatrix}.$$
The group is a subgroup of the 2-dimensional affine group Aff(2): the matrix

$$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}$$

acting on the column vector $(x, y, 1)^T$ corresponds to the affine transform $(x, y) \mapsto (x + ay + c,\ y + b)$.
There are several prominent examples of the three-dimensional case.
Continuous Heisenberg group
If a, b, c are real numbers (in the ring R), then one has the continuous Heisenberg group H3(R).
It is a nilpotent real Lie group of dimension 3.
In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By the Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions.
Discrete Heisenberg group
If a, b, c are integers (in the ring Z), then one has the discrete Heisenberg group H3(Z). It is a non-abelian nilpotent group. It has two generators,
$$x = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$
and relations
$$z = x y x^{-1} y^{-1}, \qquad xz = zx, \qquad yz = zy,$$
where
$$z = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
is the generator of the center of H3. (Note that the inverses of x, y, and z replace the 1 above the diagonal with −1.)
By Bass's theorem, it has a polynomial growth rate of order 4.
One can generate any element through
$$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} = y^b z^c x^a.$$
Heisenberg group modulo an odd prime p
If one takes a, b, c in Z/p Z for an odd prime p, then one has the Heisenberg group modulo p. It is a group of order p3 with generators x, y and relations
$$z = x y x^{-1} y^{-1}, \qquad x^p = y^p = z^p = 1, \qquad xz = zx, \qquad yz = zy.$$
Analogues of Heisenberg groups over finite fields of odd prime order p are called extra special groups, or more properly, extra special groups of exponent p. More generally, if the derived subgroup of a group G is contained in the center Z of G, then the map G/Z × G/Z → Z is a skew-symmetric bilinear operator on abelian groups.
However, requiring G/Z to be a finite vector space requires the Frattini subgroup of G to be contained in the center, and requiring Z to be a one-dimensional vector space over Z/p Z requires that Z have order p, so if G is not abelian, then G is extra special. If G is extra special but does not have exponent p, then the general construction below applied to the symplectic vector space G/Z does not yield a group isomorphic to G.
Heisenberg group modulo 2
The Heisenberg group modulo 2 is of order 8 and is isomorphic to the dihedral group D4 (the symmetries of a square). Observe that if
$$x = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$
then
$$xy = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$
and
$$yx = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
The elements x and y correspond to reflections (with 45° between them), whereas xy and yx correspond to rotations by 90°. The other reflections are xyx and yxy, and rotation by 180° is xyxy (= yxyx).
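The order-8 claim and the rotation structure can be checked by brute force over Z/2 — a minimal illustrative sketch:

import itertools
import numpy as np

def heis_mod2(a, b, c):
    # Heisenberg matrix with entries taken mod 2
    return np.array([[1, a, c], [0, 1, b], [0, 0, 1]])

def mul(g, h):
    # Matrix product with entries reduced mod 2
    return (g @ h) % 2

elements = [heis_mod2(a, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3)]
assert len(elements) == 8          # order p^3 with p = 2

x = heis_mod2(1, 0, 0)
y = heis_mod2(0, 1, 0)
assert not np.array_equal(mul(x, y), mul(y, x))    # non-abelian

r = mul(x, y)                       # xy, a 90-degree rotation in D4
r2 = mul(r, r)
assert not np.array_equal(r2, np.eye(3))           # (xy)^2 = z, not the identity
assert np.array_equal(mul(r2, r2), np.eye(3))      # (xy)^4 = identity: order 4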
Heisenberg algebra
The Lie algebra of the Heisenberg group (over the real numbers) is known as the Heisenberg algebra.
It may be represented using the space of 3×3 matrices of the form
$$\begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \end{pmatrix},$$
with $a, b, c \in \mathbb{R}$.
The following three elements form a basis for the algebra:
$$p = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad q = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \qquad z = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
These basis elements satisfy the commutation relations
$$[p, q] = z, \qquad [p, z] = [q, z] = 0.$$
The name "Heisenberg group" is motivated by the preceding relations, which have the same form as the canonical commutation relations in quantum mechanics:
where is the position operator, is the momentum operator, and is the Planck constant.
The Heisenberg group has the special property that the exponential map is a one-to-one and onto map from the Lie algebra $\mathfrak{h}$ to the group H:
$$\exp \colon \mathfrak{h} \to H.$$
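Since $u^3 = 0$ for every element of the algebra, the exponential series is a finite sum and the map can be verified directly. A minimal sketch:

import numpy as np

p = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
q = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

comm = lambda a, b: a @ b - b @ a
assert np.array_equal(comm(p, q), z)                 # [p, q] = z
assert np.array_equal(comm(p, z), np.zeros((3, 3)))  # [p, z] = 0
assert np.array_equal(comm(q, z), np.zeros((3, 3)))  # [q, z] = 0

def exp_nilpotent(u):
    # Exact exponential: the series stops because u @ u @ u == 0
    return np.eye(3) + u + (u @ u) / 2

u = 1.0 * p + 2.0 * q + 3.0 * z     # algebra element with a=1, b=2, c=3
g = exp_nilpotent(u)
# exp lands on the group element with entries (a, b, c + ab/2) = (1, 2, 4)
assert np.allclose(g, [[1., 1., 4.], [0., 1., 2.], [0., 0., 1.]])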
In conformal field theory
In conformal field theory, the term Heisenberg algebra is used to refer to an infinite-dimensional generalization of the above algebra. It is spanned by elements $a_n$, $n \in \mathbb{Z}$, with commutation relations
$$[a_n, a_m] = n\, \delta_{n+m,0}.$$
Under a rescaling, this is simply a countably-infinite number of copies of the above algebra.
Higher dimensions
More general Heisenberg groups may be defined for higher dimensions in Euclidean space, and more generally on symplectic vector spaces. The simplest general case is the real Heisenberg group of dimension 2n+1, for any integer n ≥ 1. As a group of matrices, H2n+1 (or H2n+1(R) to indicate that this is the Heisenberg group over the field of real numbers) is defined as the group of (n+2)×(n+2) matrices with entries in R and having the form
$$\begin{pmatrix} 1 & a & c \\ 0 & I_n & b \\ 0 & 0 & 1 \end{pmatrix},$$
where
a is a row vector of length n,
b is a column vector of length n,
In is the identity matrix of size n.
Group structure
This is indeed a group, as is shown by the multiplication:
$$\begin{pmatrix} 1 & a & c \\ 0 & I_n & b \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & a' & c' \\ 0 & I_n & b' \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & a + a' & c + c' + ab' \\ 0 & I_n & b + b' \\ 0 & 0 & 1 \end{pmatrix}$$
and
$$\begin{pmatrix} 1 & a & c \\ 0 & I_n & b \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -a & ab - c \\ 0 & I_n & -b \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & I_n & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Lie algebra
The Heisenberg group is a simply-connected Lie group whose Lie algebra consists of matrices
$$\begin{pmatrix} 0 & a & c \\ 0 & 0_n & b \\ 0 & 0 & 0 \end{pmatrix},$$
where
a is a row vector of length n,
b is a column vector of length n,
0n is the zero matrix of size n.
By letting e1, ..., en be the canonical basis of Rn and setting
$$p_i = \begin{pmatrix} 0 & e_i^{\mathrm T} & 0 \\ 0 & 0_n & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad q_j = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0_n & e_j \\ 0 & 0 & 0 \end{pmatrix}, \qquad z = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0_n & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
the associated Lie algebra can be characterized by the canonical commutation relations
$$[p_i, q_j] = \delta_{ij}\, z, \qquad [p_i, z] = [q_j, z] = 0,$$
where p1, ..., pn, q1, ..., qn, z are the algebra generators.
In particular, z is a central element of the Heisenberg Lie algebra. Note that the Lie algebra of the Heisenberg group is nilpotent.
Exponential map
Let
$$u = \begin{pmatrix} 0 & a & c \\ 0 & 0_n & b \\ 0 & 0 & 0 \end{pmatrix},$$
which fulfills $u^3 = 0$. The exponential map evaluates to
$$\exp(u) = \sum_{k=0}^{2} \frac{u^k}{k!} = \begin{pmatrix} 1 & a & c + \tfrac{1}{2} ab \\ 0 & I_n & b \\ 0 & 0 & 1 \end{pmatrix}.$$
The exponential map of any nilpotent Lie algebra is a diffeomorphism between the Lie algebra and the unique associated connected, simply-connected Lie group.
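A complementary standard observation, added here for clarity (it is not a claim specific to the source): all iterated brackets of length three vanish in the Heisenberg algebra, so the Baker–Campbell–Hausdorff series truncates exactly,
$$\exp(u)\exp(v) = \exp\!\left(u + v + \tfrac{1}{2}[u, v]\right) \qquad \text{for all } u, v \in \mathfrak{h}_n,$$
since $[u, [u, v]] = [v, [u, v]] = 0$. In particular, the group law can be read off directly from the Lie algebra.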
This discussion (aside from statements referring to dimension and Lie group) further applies if we replace R by any commutative ring A. The corresponding group is denoted Hn(A).
Under the additional assumption that the prime 2 is invertible in the ring A, the exponential map is also defined, since it reduces to a finite sum and has the form above (e.g. A could be a ring Z/p Z with an odd prime p or any field of characteristic 0).
Representation theory
The unitary representation theory of the Heisenberg group is fairly simple – it was later generalized by Mackey theory – and was the motivation for its introduction in quantum physics, as discussed below.
For each nonzero real number $\hbar$, we can define an irreducible unitary representation $\Pi_\hbar$ of $H_3(\mathbb{R})$ acting on the Hilbert space $L^2(\mathbb{R})$ by the formula
$$\left[\Pi_\hbar(a, b, c)\psi\right](x) = e^{i\hbar c}\, e^{i\hbar b x}\, \psi(x + a).$$
This representation is known as the Schrödinger representation. The motivation for this representation is the action of the exponentiated position and momentum operators in quantum mechanics. The parameter a describes translations in position space, the parameter b describes translations in momentum space, and the parameter c gives an overall phase factor. The phase factor is needed to obtain a group of operators, since translations in position space and translations in momentum space do not commute.
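With the convention written above, the homomorphism property can be verified symbolically — a minimal sketch with SymPy (the helper Pi is an illustrative name):

import sympy as sp

x, hbar = sp.symbols('x hbar', real=True)
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2', real=True)
psi = sp.Function('psi')

def Pi(a, b, c, f):
    # Action of Pi_hbar(a, b, c) on an expression f in the variable x
    return sp.exp(sp.I * hbar * c) * sp.exp(sp.I * hbar * b * x) * f.subs(x, x + a)

lhs = Pi(a1, b1, c1, Pi(a2, b2, c2, psi(x)))
# Group law of H3: (a1, b1, c1)(a2, b2, c2) = (a1+a2, b1+b2, c1+c2+a1*b2)
rhs = Pi(a1 + a2, b1 + b2, c1 + c2 + a1 * b2, psi(x))
assert sp.simplify(lhs - rhs) == 0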
The key result is the Stone–von Neumann theorem, which states that every (strongly continuous) irreducible unitary representation of the Heisenberg group in which the center acts nontrivially is equivalent to $\Pi_\hbar$ for some $\hbar \neq 0$. Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension 2n.
Since the Heisenberg group is a one-dimensional central extension of $\mathbb{R}^{2n}$, its irreducible unitary representations can be viewed as irreducible unitary projective representations of $\mathbb{R}^{2n}$. Conceptually, the representation given above constitutes the quantum-mechanical counterpart to the group of translational symmetries on the classical phase space, $\mathbb{R}^{2n}$. The fact that the quantum version is only a projective representation of $\mathbb{R}^{2n}$ is suggested already at the classical level. The Hamiltonian generators of translations in phase space are the position and momentum functions. The span of these functions does not form a Lie algebra under the Poisson bracket, however, because $\{x_i, p_j\} = \delta_{ij}$. Rather, the span of the position and momentum functions and the constants forms a Lie algebra under the Poisson bracket. This Lie algebra is a one-dimensional central extension of the commutative Lie algebra $\mathbb{R}^{2n}$, isomorphic to the Lie algebra of the Heisenberg group.
On symplectic vector spaces
The general abstraction of a Heisenberg group is constructed from any symplectic vector space. For example, let (V, ω) be a finite-dimensional real symplectic vector space (so ω is a nondegenerate skew symmetric bilinear form on V). The Heisenberg group H(V) on (V, ω) (or simply V for brevity) is the set V×R endowed with the group law
$$(v_1, t_1) \cdot (v_2, t_2) = \left(v_1 + v_2,\ t_1 + t_2 + \tfrac{1}{2}\omega(v_1, v_2)\right).$$
The Heisenberg group is a central extension of the additive group V. Thus there is an exact sequence
$$0 \to \mathbb{R} \to H(V) \to V \to 0.$$
Any symplectic vector space admits a Darboux basis {ej, fk}1 ≤ j,k ≤ n satisfying ω(ej, fk) = δjk, where 2n is the dimension of V (the dimension of V is necessarily even). In terms of this basis, every vector decomposes as
$$v = q^a e_a + p_a f^a,$$
with implicit summation over a = 1, ..., n.
The qa and pa are canonically conjugate coordinates.
If {ej, fk}1 ≤ j,k ≤ n is a Darboux basis for V, then let {E} be a basis for R, and {ej, fk, E}1 ≤ j,k ≤ n is the corresponding basis for V×R. A vector in H(V) is then given by
$$v = q^a e_a + p_a f^a + t E,$$
and the group law becomes
$$(p, q, t) \cdot (p', q', t') = \left(p + p',\ q + q',\ t + t' + \tfrac{1}{2}(q p' - p q')\right).$$
Because the underlying manifold of the Heisenberg group is a linear space, vectors in the Lie algebra can be canonically identified with vectors in the group. The Lie algebra of the Heisenberg group is given by the commutation relation
$$\left[(v_1, t_1), (v_2, t_2)\right] = \left(0,\ \omega(v_1, v_2)\right),$$
or written in terms of the Darboux basis
$$[e_j, f_k] = \delta_{jk}\, E,$$
and all other commutators vanish.
It is also possible to define the group law in a different way, one which yields a group isomorphic to the group we have just defined. To avoid confusion, we will use u instead of t, so a vector is given by
$$v = q^a e_a + p_a f^a + u E,$$
and the group law is
$$(p, q, u) \cdot (p', q', u') = \left(p + p',\ q + q',\ u + u' + q\, p'\right).$$
An element of the group
$$v = q^a e_a + p_a f^a + u E$$
can then be expressed as a matrix
$$\begin{pmatrix} 1 & q & u \\ 0 & I_n & p \\ 0 & 0 & 1 \end{pmatrix},$$
which gives a faithful matrix representation of H(V). The u in this formulation is related to t in our previous formulation by $u = t + \tfrac{1}{2} p\, q$, so that the t value for the product comes to
$$t + t' + \tfrac{1}{2}\left(q\, p' - p\, q'\right),$$
as before.
The isomorphism to the group using upper triangular matrices relies on the decomposition of V into a Darboux basis, which amounts to a choice of isomorphism V ≅ U ⊕ U*. Although the new group law yields a group isomorphic to the one given higher up, the group with this law is sometimes referred to as the polarized Heisenberg group as a reminder that this group law relies on a choice of basis (a choice of a Lagrangian subspace of V is a polarization).
To any finite-dimensional real Lie algebra, there is a unique connected, simply connected Lie group G. All other connected Lie groups with the same Lie algebra as G are of the form G/N where N is a central discrete subgroup of G. In this case, the center of H(V) is R and the only discrete subgroups are isomorphic to Z. Thus H(V)/Z is another Lie group which shares this Lie algebra. Of note about this Lie group is that it admits no faithful finite-dimensional representations; it is not isomorphic to any matrix group. It does however have a well-known family of infinite-dimensional unitary representations.
Connection with the Weyl algebra
The Lie algebra $\mathfrak{h}_n$ of the Heisenberg group was described above as a Lie algebra of matrices. The Poincaré–Birkhoff–Witt theorem applies to determine the universal enveloping algebra $U(\mathfrak{h}_n)$. Among other properties, the universal enveloping algebra is an associative algebra into which $\mathfrak{h}_n$ injectively embeds.
By the Poincaré–Birkhoff–Witt theorem, it is thus the free vector space generated by the monomials
$$z^j p_1^{k_1} p_2^{k_2} \cdots p_n^{k_n} q_1^{\ell_1} q_2^{\ell_2} \cdots q_n^{\ell_n},$$
where the exponents are all non-negative.
Consequently, $U(\mathfrak{h}_n)$ consists of real polynomials
$$\sum_{j, \vec{k}, \vec{\ell}} c_{j \vec{k} \vec{\ell}}\; z^j p_1^{k_1} \cdots p_n^{k_n} q_1^{\ell_1} \cdots q_n^{\ell_n},$$
with the commutation relations
$$p_k q_\ell - q_\ell p_k = \delta_{k\ell}\, z, \qquad p_k z - z p_k = 0, \qquad q_\ell z - z q_\ell = 0.$$
The algebra $U(\mathfrak{h}_n)$ is closely related to the algebra of differential operators on $\mathbb{R}^n$ with polynomial coefficients, since any such operator has a unique representation in the form
$$P = \sum_{\vec{k}, \vec{\ell}} c_{\vec{k} \vec{\ell}}\; x^{\vec{\ell}} \partial_x^{\vec{k}}.$$
This algebra is called the Weyl algebra. It follows from abstract nonsense that the Weyl algebra Wn is a quotient of $U(\mathfrak{h}_n)$. However, this is also easy to see directly from the above representations; viz. by the mapping
$$z \mapsto 1, \qquad p_k \mapsto \frac{\partial}{\partial x_k}, \qquad q_\ell \mapsto x_\ell.$$
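On functions this quotient map is just the statement that differentiation and multiplication by x satisfy the same relation — a minimal SymPy check:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# Under the mapping above: p acts as d/dx, q as multiplication by x, z as 1
p = lambda g: sp.diff(g, x)
q = lambda g: x * g

# The relation p q - q p = z becomes the operator identity [d/dx, x] = 1
assert sp.simplify(p(q(f)) - q(p(f)) - f) == 0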
Applications
Weyl's parameterization of quantum mechanics
The application that led Hermann Weyl to an explicit realization of the Heisenberg group was the question of why the Schrödinger picture and Heisenberg picture are physically equivalent. Abstractly, the reason is the Stone–von Neumann theorem: there is a unique unitary representation with given action of the central Lie algebra element z, up to a unitary equivalence: the nontrivial elements of the algebra are all equivalent to the usual position and momentum operators.
Thus, the Schrödinger picture and Heisenberg picture are equivalent – they are just different ways of realizing this essentially unique representation.
Theta representation
The same uniqueness result was used by David Mumford for discrete Heisenberg groups, in his theory of equations defining abelian varieties. This is a large generalization of the approach used in Jacobi's elliptic functions, which is the case of the modulo 2 Heisenberg group, of order 8. The simplest case is the theta representation of the Heisenberg group, of which the discrete case gives the theta function.
Fourier analysis
The Heisenberg group also occurs in Fourier analysis, where it is used in some formulations of the Stone–von Neumann theorem. In this case, the Heisenberg group can be understood to act on the space of square integrable functions; the result is a representation of the Heisenberg group sometimes called the Weyl representation.
As a sub-Riemannian manifold
The three-dimensional Heisenberg group H3(R) on the reals can also be understood to be a smooth manifold, and specifically, a simple example of a sub-Riemannian manifold. Given a point p = (x, y, z) in R3, define a differential 1-form Θ at this point as
$$\Theta = dz - \tfrac{1}{2}\left(x\, dy - y\, dx\right).$$
This one-form belongs to the cotangent bundle of R3; that is,
$$\Theta_p \colon T_p\mathbb{R}^3 \to \mathbb{R}$$
is a map on the tangent bundle. Let
$$H = \left\{ v \in T\mathbb{R}^3 \;:\; \Theta(v) = 0 \right\}.$$
It can be seen that H is a subbundle of the tangent bundle TR3. A cometric on H is given by projecting vectors to the two-dimensional space spanned by vectors in the x and y direction. That is, given vectors $v = (v_1, v_2, v_3)$ and $w = (w_1, w_2, w_3)$ in TR3, the inner product is given by
$$\langle v, w \rangle = v_1 w_1 + v_2 w_2.$$
The resulting structure turns H into the manifold of the Heisenberg group. An orthonormal frame on the manifold is given by the Lie vector fields
$$X = \partial_x - \tfrac{1}{2} y\, \partial_z, \qquad Y = \partial_y + \tfrac{1}{2} x\, \partial_z, \qquad Z = \partial_z,$$
which obey the relations [X, Y] = Z and [X, Z] = [Y, Z] = 0. Being Lie vector fields, these form a left-invariant basis for the group action. The geodesics on the manifold are spirals, projecting down to circles in two dimensions. That is, if
$$\gamma(t) = (x(t), y(t), z(t))$$
is a geodesic curve, then the curve $c(t) = (x(t), y(t))$ is an arc of a circle, and
$$z(t) = \tfrac{1}{2}\int_c \left(x\, dy - y\, dx\right)$$
with the integral limited to the two-dimensional plane. That is, the height of the curve is proportional to the area of the circle subtended by the circular arc, which follows by Green's theorem.
Heisenberg group of a locally compact abelian group
It is more generally possible to define the Heisenberg group of a locally compact abelian group K, equipped with a Haar measure. Such a group has a Pontrjagin dual $\hat{K}$, consisting of all continuous $U(1)$-valued characters on K, which is also a locally compact abelian group if endowed with the compact-open topology. The Heisenberg group associated with the locally compact abelian group K is the subgroup of the unitary group of $L^2(K)$ generated by translations from K and multiplications by elements of $\hat{K}$.
In more detail, the Hilbert space $L^2(K)$ consists of square-integrable complex-valued functions f on K. The translations in K form a unitary representation of K as operators on $L^2(K)$:
$$(T_x f)(y) = f(x + y)$$
for $x, y \in K$. So too do the multiplications by characters:
$$(M_\chi f)(y) = \chi(y)\, f(y)$$
for $\chi \in \hat{K}$. These operators do not commute, and instead satisfy
$$T_x M_\chi T_x^{-1} M_\chi^{-1} = \chi(x)\, \mathrm{Id},$$
multiplication by a fixed unit modulus complex number.
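For the finite cyclic group K = Z/N the operators are N×N matrices and the relation can be checked directly — a minimal sketch:

import numpy as np

N, x, k = 8, 3, 2   # group size, a translation x in Z/N, a character index k

# Translation: (T f)(y) = f(x + y), acting on vectors indexed by Z/N
T = np.zeros((N, N))
for y in range(N):
    T[y, (x + y) % N] = 1

# Character chi(y) = exp(2*pi*i*k*y / N); M is pointwise multiplication by chi
chi = np.exp(2j * np.pi * k * np.arange(N) / N)
M = np.diag(chi)

# T M T^{-1} M^{-1} equals the scalar chi(x) times the identity
comm = T @ M @ np.linalg.inv(T) @ np.linalg.inv(M)
assert np.allclose(comm, chi[x] * np.eye(N))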
So the Heisenberg group associated with K is a type of central extension of $K \times \hat{K}$, via an exact sequence of groups:
$$1 \to U(1) \to H(K) \to K \times \hat{K} \to 1.$$
More general Heisenberg groups are described by 2-cocycles in the cohomology group $H^2(K \times \hat{K},\, U(1))$. The existence of a duality between K and $\hat{K}$ gives rise to a canonical cocycle, but there are generally others.
The Heisenberg group acts irreducibly on $L^2(K)$. Indeed, the continuous characters separate points, so any unitary operator of $L^2(K)$ that commutes with them is an $L^\infty(K)$ multiplier. But commuting with translations implies that the multiplier is constant.
A version of the Stone–von Neumann theorem, proved by George Mackey, holds for the Heisenberg group H(K). The Fourier transform is the unique intertwiner between the representations of H(K) on $L^2(K)$ and on $L^2(\hat{K})$. See the discussion at Stone–von Neumann theorem#Relation to the Fourier transform for details.
See also
Canonical commutation relations
Wigner–Weyl transform
Stone–von Neumann theorem
Projective representation
Geometrization conjecture
Notes
References
External links
Groupprops, The Group Properties Wiki Unitriangular matrix group UT(3,p)
Group theory
Lie groups
Mathematical quantization
Mathematical physics
Werner Heisenberg
3-manifolds | Heisenberg group | Physics,Mathematics | 3,820 |
65,853,439 | https://en.wikipedia.org/wiki/Piezospectroscopy | Piezospectroscopy (also known as photoluminescence piezospectroscopy) is an analytical technique that reveals internal stresses in alumina-containing materials, particularly thermal barrier coatings (TBCs). A typical procedure involves illuminating the sample with laser light of a known wavelength, causing the material to release its own radiation in response (see fluorescence). By measuring the emitted radiation and comparing the location of the peaks to a stress-free sample, stresses in the material can be revealed without any destructive interaction.
Piezospectroscopy can be used on any material that exhibits fluorescence, but is almost exclusively used on samples containing alumina because of the presence of chromium ions, either as part of the composition or as an impurity, that greatly increase the fluorescent response. As opposed to other methods of stress measurement, such as powder diffraction or the use of a strain gauge, piezospectroscopy can measure internal stresses at higher resolution, on the order of 1 μm, and can measure very quickly, with most systems taking less than one second to acquire data.
Theory
Piezospectroscopy takes advantage of both the microstructure and composition of TBCs to generate accurate results.
A typical candidate for piezospectroscopy contains three layers:
Ceramic topcoat – A thick, highly porous layer, usually composed of yttria-stabilized zirconia (YSZ), which displays low thermal conductivity and stability at high operating temperatures
Thermally grown oxide (TGO) – A thin layer that results from oxidation of the bond coat. Because oxidation is inevitable at high temperatures, the goal of an effective TBC is slow and uniform growth of an oxide.
Metallic bond coat – A metallic layer directly above the substrate intended to prevent corrosion and oxidation
Coating failure is usually a result of spalling or cracking of the TGO layer. Because the TGO is buried beneath a thick layer of ceramic, subsurface stresses are generally difficult to detect. The use of an argon-ion laser makes this possible. The optical band gap (threshold for photon absorption) of the ceramic topcoat is much greater than the energy of argon laser light, effectively making the topcoat translucent and allowing for interaction with the TGO layer. Within the TGO, it is the chromium (Cr3+) ions that produce strong emission spectra and allow for piezospectroscopic analysis.
At the subatomic level, the laser light of known wavelength (usually 5145 Å) causes the outer electron in the Cr3+ ions to absorb the incoming radiation, which raises it to a higher energy level. Upon returning to a lower energy state, the electron releases its own radiation. Because the energy levels are discrete, the spectrum for stress-free aluminum oxide always exhibits two peaks at wavenumbers 14,402 cm−1 and 14,432 cm−1. The wavelength and frequency are related through:
$$\nu = \frac{c}{\lambda},$$
where ν is the frequency, λ is the wavelength, and c is the speed of light. If the coating is under a compressive stress, the peaks will be shifted downward while a tensile stress will shift them upward.
The frequency shift is given by the equation:
$$\Delta\nu = \Pi_{ij}\, \sigma_{ij},$$
where $\Pi_{ij}$ is the piezospectroscopic tensor and $\sigma_{ij}$ is the residual stress within the coating.
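In practice the tensor relation is often collapsed to a scalar calibration for a polycrystalline coating under equi-biaxial stress. The sketch below assumes that simplification and an illustrative trace coefficient of 7.6 cm−1/GPa for Cr3+ in alumina (a commonly quoted literature value, not a figure from this article):

# Estimate residual stress from a measured R2 fluorescence peak position.
# Assumptions (illustrative): equi-biaxial stress state and the scalar
# relation delta_nu = (2/3) * Pi_trace * sigma.

PI_TRACE = 7.6               # cm^-1 per GPa, assumed coefficient
STRESS_FREE_PEAK = 14432.0   # cm^-1, R2 line of stress-free alumina

def residual_stress(measured_peak):
    # Returns the in-plane stress in GPa; negative values are compressive
    delta_nu = measured_peak - STRESS_FREE_PEAK
    return delta_nu / ((2.0 / 3.0) * PI_TRACE)

# A peak shifted 10 cm^-1 downward indicates roughly 2 GPa of compression
print(residual_stress(14422.0))   # about -1.97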
Instrumentation
In order to obtain accurate results, a few finely tuned instruments must work in tandem:
Laser
A light source, such as a laser, is instrumental to piezospectroscopy. Narrow bandwidth lasers are preferred due to the increased resolution of the resulting spectrum. The fluorescent response is stronger at lower frequencies, but excessively low frequency light can cause sample degradation and interference with the ceramic surface of the coating.
Microscope
A microscope is generally used to isolate a certain section of a sample. Because TBC failure can begin at microscopic scales, magnification is often essential to accurately detect stresses.
Monochromator
A monochromator is used to filter out weakly scattered light and permit the strong emission peaks from the fluorescent response. In addition, notch or long-pass optical filters are used to filter the peak from the laser wavelength itself.
Detector
Many types of detectors are used with piezospectroscopy, the two most common being dispersion through a spectrograph or an interferometer. The resulting signal can be analyzed through Fourier Transform (FT) methods. Array detectors such as CCDs are also common, with many different types being suited for different ranges of wavelengths.
Procedure
The laser beam is directed through a lens and focused on the sample
The reflected beam is sent through a set of filters, which remove signal noise and isolate the desired range of the signal
The filtered beam is once again focused through a lens and split into several beams with a diffraction grating
The diffracted signal is reflected onto a detector, which converts the optical information into digital samples that are sent to a computer for further analysis
Applications
Piezospectroscopy is used in industry to ensure safe operation of TBCs.
Quality control
It is critical that TBCs be applied properly in order to prevent premature microfractures, delamination, and other structural failure. Through piezospectroscopy, parts can be put into service with the assurance of a properly protected substrate.
Nondestructive inspection/remaining lifetime assessment
Piezospectroscopy can accurately describe the extent of any discovered damage and provide accurate lifetime estimates in actual use. In addition, piezospectroscopy can be set up in situ. This, along with its noninvasive nature, makes piezospectroscopy an efficient method of onsite damage assessment.
References
Materials science | Piezospectroscopy | Physics,Materials_science,Engineering | 1,167 |
72,671,951 | https://en.wikipedia.org/wiki/Cis-action | Cis-action or cis-acting is a vague term that, in general, means "an action on the same" in contrast to trans-action "an action on a different". In other words, the initiator of the action is affected by it. Cis-actions occur wherever circular dependencies are present. Most notably in:
biology, where it refers to life itself as in the selfish gene, cis-acting genetic elements and self-maintenance as a trait of self-replicating entities;
chemistry, where it is known as an autocatalytic set;
software engineering, as in computer viruses.
References
Genetics terms
Molecular biology | Cis-action | Chemistry,Biology | 130 |
13,150,801 | https://en.wikipedia.org/wiki/Client-side%20persistent%20data | Client-side persistent data or CSPD is a term used in computing for storing data required by web applications to complete internet tasks on the client-side as needed rather than exclusively on the server. As a framework it is one solution to the needs of Occasionally connected computing or OCC.
A major challenge for HTTP as a stateless protocol has been asynchronous tasks. The AJAX pattern using XMLHttpRequest was first introduced by Microsoft in the context of the Outlook e-mail product.
The first CSPD were the 'cookies' introduced by the Netscape Navigator. ActiveX components which have entries in the Windows registry can also be viewed as a form of client-side persistence.
See also
Occasionally connected computing
Curl (programming language)
AJAX
HTTP
Web storage
External links
CSPD
Safari preview
Netscape on persistent client state
Clients (computing)
Data management
Web applications | Client-side persistent data | Technology | 177 |
1,962,960 | https://en.wikipedia.org/wiki/Two-photon%20physics | Two-photon physics, also called gamma–gamma physics, is a branch of particle physics that describes the interactions between two photons. Normally, beams of light pass through each other unperturbed. Inside an optical material, and if the intensity of the beams is high enough, the beams may affect each other through a variety of non-linear effects. In pure vacuum, some weak scattering of light by light exists as well. Also, above some threshold of the center-of-mass energy of the system of the two photons, matter can be created.
Astronomy
Cosmological/intergalactic gamma rays
Photon–photon interactions limit the spectrum of observed gamma-ray photons at moderate cosmological distances to a photon energy below around 20 GeV, that is, to a wavelength greater than approximately 6×10−17 m. This limit reaches up to around 20 TeV at merely intergalactic distances.
An analogy would be light traveling through a fog: at near distances a light source is more clearly visible than at long distances due to the scattering of light by fog particles. Similarly, the further a gamma-ray travels through the universe, the more likely it is to be scattered by an interaction with a low energy photon from the extragalactic background light.
At those energies and distances, very high energy gamma-ray photons have a significant probability of a photon-photon interaction with a low energy background photon from the extragalactic background light resulting in either the creation of particle-antiparticle pairs via direct pair production or (less often) by photon-photon scattering events that lower the incident photon energies. This renders the universe effectively opaque to very high energy photons at intergalactic to cosmological distances.
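The opacity has a simple kinematic origin; the following standard threshold estimate is added for clarity and is not taken from the source. For a gamma ray of energy $E$ meeting a background photon of energy $\varepsilon$ head-on, pair production requires
$$s = 4E\varepsilon \geq (2 m_e c^2)^2 \quad\Longrightarrow\quad \varepsilon \geq \frac{(m_e c^2)^2}{E},$$
so a 20 TeV photon can pair-produce on background photons of energy above roughly $(0.511\ \text{MeV})^2 / 20\ \text{TeV} \approx 1.3\times10^{-2}\ \text{eV}$, i.e. on the far-infrared part of the extragalactic background light.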
Experiments
Two-photon physics can be studied with high-energy particle accelerators, where the accelerated particles are not the photons themselves but charged particles that will radiate photons. The most significant studies so far were performed at the Large Electron–Positron Collider (LEP) at CERN. If the transverse momentum transfer and thus the deflection is large, one or both electrons can be detected; this is called tagging. The other particles that are created in the interaction are tracked by large detectors to reconstruct the physics of the interaction.
Frequently, photon-photon interactions will be studied via ultraperipheral collisions (UPCs) of heavy ions, such as gold or lead. These are collisions in which the colliding nuclei do not touch each other; i.e., the impact parameter is larger than the sum of the radii of the nuclei. The strong interaction between the quarks composing the nuclei is thus greatly suppressed, making the weaker electromagnetic interaction much more visible. In UPCs, because the ions are heavily charged, it is possible to have two independent interactions between a single ion pair, such as production of two electron-positron pairs. UPCs are studied with the STARlight simulation code.
Light-by-light scattering, as predicted by quantum electrodynamics, can be studied using the strong electromagnetic fields of the hadrons collided at the LHC; it was first seen in 2016 by the ATLAS collaboration and was then confirmed by the CMS collaboration, including at high two-photon energies. The best previous constraint on the elastic photon–photon scattering cross section was set by PVLAS, which reported an upper limit far above the level predicted by the Standard Model. Observation of a cross section larger than that predicted by the Standard Model could signify new physics such as axions, the search of which is the primary goal of PVLAS and several similar experiments.
Processes
From quantum electrodynamics it can be found that photons cannot couple directly to each other – they carry no charge, and no 2-fermion + 2-boson vertex exists, due to the requirements of renormalizability (see also the Landau–Yang theorem) – but they can interact through higher-order processes, or couple directly to each other in a vertex with an additional two W bosons:
a photon can, within the bounds of the uncertainty principle, fluctuate into a virtual charged fermion–antifermion pair, to either of which the other photon can couple. This fermion pair can be leptons or quarks. Thus, two-photon physics experiments can be used as ways to study the photon structure, or, somewhat metaphorically, what is "inside" the photon.
There are three interaction processes:
Direct or pointlike: The photon couples directly to a quark inside the target photon. If a lepton–antilepton pair is created, this process involves only quantum electrodynamics (QED), but if a quark–antiquark pair is created, it involves both QED and perturbative quantum chromodynamics (QCD).
The intrinsic quark content of the photon is described by the photon structure function, experimentally analyzed in deep-inelastic electron–photon scattering.
Single resolved: The quark pair of the target photon form a vector meson. The probing photon couples to a constituent of this meson.
Double resolved: Both target and probe photon have formed a vector meson. This results in an interaction between two hadrons.
For the latter two cases, the scale of the interaction is such that the strong coupling constant is large. This is called vector meson dominance (VMD) and has to be modelled in non-perturbative QCD.
See also
Channelling radiation has been considered as a method to generate polarized high energy photon beams for gamma–gamma colliders.
Matter creation
Pair production
Delbrück scattering
Breit–Wheeler process
References
External links
Lauber,J A, 1997, A small tutorial in gamma–gamma Physics Archive
Two-photon physics at LEP
Two-photon physics at CESR Archive
Particle physics
Quantum electrodynamics
Experimental particle physics | Two-photon physics | Physics | 1,210 |
38,679,864 | https://en.wikipedia.org/wiki/Jumping%20library | Jumping libraries or junction-fragment libraries are collections of genomic DNA fragments generated by chromosome jumping. These libraries allow the analysis of large areas of the genome and overcome distance limitations in common cloning techniques.
A jumping library clone is composed of two stretches of DNA that are usually located many kilobases away from each other. The stretch of DNA located between these two "ends" is deleted by a series of biochemical manipulations carried out at the start of this cloning technique.
Invention and early improvements
Origin
Chromosome jumping (or chromosome hopping) was first described in 1984 by Collins and Weissman.
At the time, cloning techniques allowed for generation of clones of limited size (up to 240kb), and cytogenetic techniques allowed for mapping such clones to a small region of a particular chromosome to a resolution of around 5-10Mb. Therefore, a major gap remained in resolution between available technologies, and no methods were available for mapping larger areas of the genome.
Basic principle and original method
This technique is an extension of "chromosome walking" that allows larger "steps" along the chromosome.
If steps of length N kb are desired, very high molecular weight DNA is necessary. Once isolated, it is partially digested with a frequent-cutting restriction enzyme (such as MboI or BamHI). Next, obtained fragments are selected for size which should be around N kb in length. DNA must then be ligated at low concentration to favour ligation into circles rather than formation of multimers. A DNA marker (such as the amber suppressor tRNA gene supF) can be included at this time point within the covalently linked circle to allow for selection of junction fragments. Circles are subsequently fully digested with a second restriction enzyme (such as EcoRI) to generate a large number of fragments. Such fragments are ligated into vectors (such as a λ vector) which should be selected for using the DNA marker introduced earlier. The remaining fragments thus represent the library of junction fragments, or "jumping library".
The next step is to screen this library with a probe that represents a "starting point" of the desired "chromosome hop", i.e. determining the location of the genome that is being interrogated. Clones obtained from this final selection step will consist of DNA that is homologous to our probe, separated by our DNA marker from another DNA sequence that was originally located N kb away (thus being called "jumping").
By generating several libraries of different N values, eventually the entire genome can be mapped, allowing movement from one location to another, while controlling direction, by any value of N desired.
Early challenges and improvements
The original technique of chromosome jumping was developed in the laboratories of Collins and Weissman at Yale University in New Haven, U.S. and the laboratories of Poustka and Lehrach at the European Molecular Biology Laboratory in Heidelberg, Germany.
Collins and Weissman's method described above encountered some early limitations. The main concern was with avoiding non-circularized fragments. Two solutions were suggested: either screening junction fragments with a given probe or adding a second size-selection step after the ligation to separate single circular clones (monomers) from clones ligated to each other (multimers). The authors also suggested that other markers such as the λ cos site or antibiotic resistance genes should be considered (instead of the amber suppressor tRNA gene) to facilitate selection of junction clones.
Poustka and Lehrach suggested that full digestion with rare-cutting restriction enzymes (such as NotI) should be used for the first step of the library construction instead of partial digestion with a frequently cutting restriction enzyme. This would significantly reduce the number of clones from millions to thousands. However, this could create problems with circularizing the DNA fragments since these fragments would be very long, and would also lose the flexibility in choice of end points that one gets in partial digests. One suggestion for overcoming these problems would be to combine the two methods, i.e. to construct a jumping library from DNA fragments digested partially with a commonly cutting restriction enzyme and completely with a rare cutting restriction enzyme and circularizing them into plasmids cleaved with both enzymes. Several of these "combination" libraries were completed in 1986.
In 1991, Zabarovsky et al. proposed a new approach for construction of jumping libraries. This approach included the use of two separate λ vectors for library construction, and a partial filling-in reaction that removes the need for a selectable marker. This filling-in reaction worked by destroying the specific cohesive ends (resulting from restriction digests) of the DNA fragments that were nonligated and noncircularized, thus preventing them from cloning into the vectors, in a more energy-efficient and accurate manner. Furthermore, this improved technique required less DNA to start with, and also produced a library that could be transferred into a plasmid form, making it easier to store and replicate. Using this new approach, they successfully constructed a human NotI jumping library from a lymphoblastoid cell line and a human chromosome 3-specific NotI jumping library from a human chromosome 3 and mouse hybrid cell line.
Current method
Second-generation or "Next-Gen" sequencing (NGS) techniques have evolved radically: the sequencing capacity has increased more than ten thousandfold and the cost has dropped by over one million-fold since 2007 (National Human Genome Research Institute). NGS has revolutionized the genetic field in many ways.
Library construction
A library is often prepared by random fragmentation of DNA and ligation of common adaptor sequences. However, the generated short reads challenge the identification of structural variants, such as indels, translocations, and duplication. Large regions of simple repeats can further complicate the alignment. Alternatively, a jumping library can be used with NGS for the mapping of structural variation and scaffolding of de novo assemblies.
Jumping libraries can be categorized according to the length of the incorporated DNA fragments.
Short-jump library
In a short-jump library, 3 kb genomic DNA fragments are ligated with biotinylated ends and circularized. The circular segments are then sheared into small fragments and the biotinylated fragments are selected by affinity assay for paired-end sequencing.
There are two issues related to short-jump libraries. First, a read can pass through the biotinylated circularization junction and reduce the effective read length. Second, reads from non-jumped fragments (i.e. fragments without the circularization junction) are sequenced and reduce genomic coverage. It has been reported that non-jumped fragments range from 4% to 13%, depending on the size selection. The first problem might be solved by shearing circles into a larger size and selecting for those larger fragments. The second problem can be addressed by using a custom barcoded jumping library.
Custom barcoded jumping library
This jumping library uses adaptors containing markers for fragment selection in combination with barcodes for multiplexing. The protocol was developed by Talkowski et al. and based on mate-pair library preparation for SOLiD sequencing. The selected DNA fragment size is 3.5 – 4.5 kb. Two adaptors were involved: one containing an EcoP15I recognition site and an AC overhang; the other containing a GT overhang, a biotinylated thymine, and an oligo barcode. The circularized DNA was digested and the fragments with biotinylated adaptors were selected for. The EcoP15I recognition site and barcode help to distinguish junction fragments from nonjump fragments. These targeted fragments should contain 25 to 27bp of genomic DNA, the EcoP15I recognition site, the overhang, and the barcode.
Long-jump library
This library construction process is similar to that of the short-jump library except that the condition is optimized for longer fragments (5 kb).
Fosmid-jump library
This library construction process is also similar to that of the short-jump library except that transfection using the E. coli vector is required for amplification of large (40 kb) DNA fragments. In addition, the fosmids can be modified to facilitate the conversion into a jumping library compatible with certain next generation sequencers.
Paired-end sequencing
The segments resulting from circularization during constructing jumping library are cleaved, and DNA fragments with markers will be enriched and subjected to paired-end sequencing.
These DNA fragments are sequenced from both ends and generate pairs of reads. The genomic distance between the reads in each pair is approximately known and used for the assembly process.
For example, a DNA clone generated by random fragmentation is about 200 bp, and a read from each end is around 180 bp, overlapping each other.
This should be distinguished from mate-pair sequencing, which is basically a combination of next generation sequencing with jumping libraries.
Computational analysis
Different assembly tools have been developed to handle jumping library data. One example is DELLY. DELLY was developed to discover genomic structural variants and "integrates short insert paired-ends, long-range mate-pairs and split-read alignments" to detect rearrangements at sequence level.
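The basic signal such callers exploit is the discordant read pair: mates mapping to different chromosomes or at a distance inconsistent with the library's jump size. A minimal illustrative sketch (names and thresholds are hypothetical; real tools like DELLY use far richer models):

# Flag read pairs suggesting a structural variant. Each read is a
# (chromosome, position) tuple; names and thresholds are illustrative.

EXPECTED_JUMP = 3500   # bp, e.g. a 3.5 kb jumping library
TOLERANCE = 1000       # bp, allowed deviation from the expected distance

def is_discordant(read1, read2):
    if read1[0] != read2[0]:
        return True                      # inter-chromosomal: translocation?
    implied_insert = abs(read2[1] - read1[1])
    return abs(implied_insert - EXPECTED_JUMP) > TOLERANCE

pairs = [(("chr1", 1000), ("chr1", 4600)),    # concordant, ~3.6 kb apart
         (("chr1", 1000), ("chr7", 52000)),   # discordant: translocation
         (("chr2", 500), ("chr2", 90500))]    # discordant: distance outlier
print([is_discordant(r1, r2) for r1, r2 in pairs])   # [False, True, True]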
An example of joint development of new experimental design and algorithm development is demonstrated by the ALLPATHS-LG assembler.
Confirmation
When used for detection of genetic and genomic changes, jumping clones require validation by Sanger sequencing.
Applications
Early applications
In the early days, chromosome walking from genetically linked DNA markers was used to identify and clone disease genes. However, the large molecular distance between known markers and the gene of interest was complicating the cloning process. In 1987, a human chromosome jumping library was constructed to clone the cystic fibrosis gene. Cystic fibrosis is an autosomal recessive disease affecting 1 in 2000 Caucasians. This was the first disease in which the usefulness of the jumping libraries was demonstrated. Met oncogene was a marker tightly linked to the cystic fibrosis gene on human chromosome 7, and the library was screened for a jumping clone starting at this marker. The cystic fibrosis gene was determined to localize 240kb downstream of the met gene. Chromosome jumping helped reduce the mapping "steps" and bypass the highly repetitive regions in the mammalian genome. Chromosome jumping also allowed the production of probes required for faster diagnosis of this and other diseases.
New applications
Characterizing chromosomal rearrangements
Balanced chromosomal rearrangements can have a significant contribution to diseases, as demonstrated by the studies of leukemia. However, many of them are undetected by chromosomal microarray. Karyotyping and FISH can identify balanced translocations and inversions but are labor-intensive and provide low resolution (small genomic changes are missed).
A jumping library NGS combined approach can be applied to identify such genomic changes. For example, Slade et al. applied this method to fine map a de novo balanced translocation in a child with Wilms' tumor. For this study, 50 million reads were generated, but only 11.6% of these could be mapped uniquely to the reference genome, which represents approximately a sixfold coverage.
Talkowski et al. compared different approaches to detect balanced chromosome alterations, and showed that modified jumping library in combination with next generation DNA sequencing is an accurate method for mapping chromosomal breakpoints. Two varieties of jumping libraries (short-jump libraries and custom barcoded jumping libraries) were tested and compared to standard sequencing libraries.
For standard NGS, 200-500bp fragments are generated. About 0.03%–0.54% of fragments represent chimeric pairs, which are pairs of end-reads that are mapped to two different chromosomes. Therefore, very few fragments cover the breakpoint area.
When using short-jump libraries with fragments of 3.2–3.8kb, the percentage of chimeric pairs increased to 1.3%. With Custom Barcoded Jumping Libraries, the percentage of chimeric pairs further increased to 1.49%.
Prenatal diagnosis
Conventional cytogenetic testing cannot offer the gene-level resolution required to predict the outcome of a pregnancy, and whole genome deep sequencing is not practical for routine prenatal diagnosis. A whole-genome jumping library could complement conventional prenatal testing. This novel method was successfully applied to identify a case of CHARGE syndrome.
De novo assembly
In metagenomics, regions of the genomes that are shared between strains are typically longer than the reads. This complicates the assembly process and makes reconstructing individual genomes for a species a daunting task. Chimeric pairs that are mapped far apart in the genome can facilitate the de novo assembly process. By using a longer-jump library, Ribeiro et al. demonstrated that the assemblies of bacterial genomes were of high quality while reducing both cost and time.
Limitation
The cost of sequencing has dropped dramatically while the cost of construction of jumping libraries has not. Therefore, as new sequencing technologies and bioinformatic tools are developed, jumping libraries may become redundant.
See also
Chromosome jumping
Bioinformatics
DNA sequencing
References
External links
DELLY: Structural variant discovery by integrated paired-end and split-read analysis
ALLPATHS-LG: de novo assembly of whole-genome shotgun microreads
Molecular biology techniques
Laboratory techniques
Molecular biology
DNA
DNA sequencing | Jumping library | Chemistry,Biology | 2,743 |
4,372,153 | https://en.wikipedia.org/wiki/The%20Conquest%20of%20Space | The Conquest of Space is a 1949 speculative science book written by Willy Ley and illustrated by Chesley Bonestell. The book contains a portfolio of paintings by Bonestell depicting the possible future exploration of the Solar System, with explanatory text by Ley. Most of the 58 illustrations by Bonestell in Conquest were previously published, in color, in popular magazines.
Influences on fiction
Some of Bonestell's designs inspired the look of George Pal's 1955 science fiction movie Conquest of Space, which also takes its title from the book, but uses it as a framework on which to hang a melodramatic plot.
Bonestell's illustrations of the Moon in The Conquest of Space were used by Hergé as a basis for his illustrations of the lunar surface in his 1952–53 The Adventures of Tintin comic, Explorers on the Moon.
Arthur C. Clarke was also an admirer of The Conquest of Space; in his novel 2001: A Space Odyssey, Clarke refers to Saturn's moon Iapetus as "Japetus" due to that being the spelling used by Ley in The Conquest of Space.
Larry Niven's 1967 short story "The Soft Weapon" is set on a planet around Beta Lyrae; Niven's description of Beta Lyrae is actually a meticulous retelling of the details of Bonestell's painting rather than any kind of portrayal of the Beta Lyrae system itself, which is now understood to look quite different.
References
Notes
Bibliography
Ley, Willy. The Conquest of Space. New York: Viking, 1949. Pre-ISBN era.
Astronomy books
Spaceflight books | The Conquest of Space | Astronomy | 331 |
650,677 | https://en.wikipedia.org/wiki/Molluscicide | Molluscicides – also known as snail baits, snail pellets, or slug pellets – are pesticides against molluscs, which are usually used in agriculture or gardening in order to control gastropod pests, specifically slugs and snails, which damage crops or other valued plants by feeding on them.
A number of chemicals can be employed as a molluscicide:
Metal salts such as iron(III) phosphate, aluminium sulfate, and ferric sodium EDTA, relatively non-toxic, most are approved for use in organic gardening
Metaldehyde
Niclosamide
Acetylcholinesterase inhibitors (e.g. methiocarb), highly toxic to other animals and humans with a quick onset of toxic symptoms.
Accidental poisonings
Metal salt-based molluscicides have relatively low toxicity to higher animals. However, metaldehyde-based and especially acetylcholinesterase inhibitor-based products are toxic and have resulted in many deaths of pets and humans. Some products contain a bittering agent that reduces but does not eliminate the risk of accidental poisoning. Anticholinergic drugs such as atropine can be used as an antidote for acetylcholinesterase inhibitor poisoning. There is no antidote for metaldehyde; the treatment is symptomatic.
Slug pellets contain a carbohydrate source (e.g. durum flour) as a bulking agent.
See also
Pest control
Biological pest control
References
External links
Overview of potential piscicides and molluscicides for controlling aquatic pest species in New Zealand
National Pesticide Information Center (NPIC) Information about pesticide-related topics.
Get Rid of Slugs and Snails, Not Puppy Tails! Case Profile - National Pesticide Information Center
Slugs and Snails - National Pesticide Information Center
Snail bait and dogs
Snail Bait Poisoning
in the Garden Safety in the Garden
Metaldehyde toxicity
Iron phosphate: The first honestly effective snail & slug bait | Molluscicide | Biology | 402 |
53,248,722 | https://en.wikipedia.org/wiki/NGC%206801 | NGC 6801 is a spiral galaxy in the constellation of Cygnus. It was discovered by Lewis A. Swift on August 5, 1886.
Supernovae
In May 2011, a Type Ia supernova, 2011df, was detected in NGC 6801. In August 2015, a Type II supernova, 2015af, was discovered.
References
Unbarred spiral galaxies
Cygnus (constellation)
6801
08981
50063 | NGC 6801 | Astronomy | 88 |
35,205,666 | https://en.wikipedia.org/wiki/Drinfeld%20reciprocity | In mathematics, Drinfeld reciprocity, introduced by Vladimir Drinfeld, is a correspondence between eigenforms of the moduli space of Drinfeld modules and factors of the corresponding Jacobian variety, such that all twisted L-functions are the same.
References
Drinfeld, V. G. English translation in Math. USSR Sbornik 23 (1974) 561–592.
Modular forms | Drinfeld reciprocity | Mathematics | 75 |
61,594,511 | https://en.wikipedia.org/wiki/Spheniscid%20alphaherpesvirus%201 | Spheniscid alphaherpesvirus 1 (SpAHV-1) is a species of virus in the genus Mardivirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
Alphaherpesvirinae | Spheniscid alphaherpesvirus 1 | Biology | 57 |
70,847,519 | https://en.wikipedia.org/wiki/Import%20One-Stop%20Shop | Import One-Stop Shop (IOSS or Import OSS) is an electronic one-stop shop (OSS) portal in the European Union (EU) which serves as a point of contact for the import of goods from third countries into the European Union. The scheme aims to simplify the declaration and payment of value-added tax when importing goods into the European Union.
IOSS became available from 1 July 2021, and applies to distance sales of items imported from third territories or third countries with a value from 0 to 150 euros. Participation in the IOSS portal is voluntary.
History
A system change in the VAT procedure was proposed by the European Commission in two stages. The first stage came into effect on 1 January 2015 under the name Mini One-Stop Shop (MOSS), and related to telecommunications, radio and television services as well as electronically provided services to end customers.
The second package of measures was adopted by the European Council in December 2017, and extended the VAT system change to distance sales and any type of cross-border service provided to a final customer in the EU.
Goals
Changes in trade patterns of the world economy and the creation of new technologies have opened up new trading opportunities, and it is expected that e-commerce through distance sales will continue to grow, in particular via electronic marketplaces or platforms (electronic interfaces). To keep up with these e-commerce changes, EU VAT regulations have also changed.
Some aims of the EU VAT packages for e-commerce are:
VAT is paid when the consumption of goods and services takes place
Create a uniform VAT regulation for cross-border deliveries and services, thus simplifying cross-border trade
Fight against VAT fraud
Ensure fair conditions of competition for EU entrepreneurs and e-commerce traders from third countries, as well as between e-commerce and traditional shops
Higher revenues of the union member states as a result of fairer taxation
Until the introduction of the IOSS system, there was a VAT exemption on goods imported to the EU with a value from 0 to 22 euros, which meant that sellers in the EU were disadvantaged because they had to charge end customers VAT, while sellers from a third country did not have to add value added tax (import value added tax) to the purchase price for distance purchases. However, for imported products from 22 to 150 euros the customer had to pay the VAT themselves, which resulted in sellers from third countries being disadvantaged because the customer often would have to pay high customs clearance fees when VAT was collected.
For goods valued above 22 and up to 150 euros, the shipment could be inspected at customs, with VAT and the delivery company's customs clearance fee added. This is still the case when the seller isn't registered in the IOSS or when the value is above 150 euros.
The IOSS lets the buyer see the total cost at checkout: the VAT is included, and the delivery company doesn't charge a customs clearance fee. Customs clearance is also much faster.
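The difference for the buyer can be illustrated with a toy checkout calculation; the VAT rate and courier clearance fee below are assumed example values, not figures from the regulation:

# Toy comparison of the buyer's total cost with and without IOSS.

VAT_RATE = 0.21          # destination-country VAT rate (example value)
CLEARANCE_FEE = 15.00    # courier handling fee without IOSS (example value)

def total_with_ioss(price):
    # Seller registered in IOSS: VAT collected at checkout, no extra fee
    return price * (1 + VAT_RATE)

def total_without_ioss(price):
    # VAT collected at import, plus the courier's clearance fee
    return price * (1 + VAT_RATE) + CLEARANCE_FEE

print(total_with_ioss(100.0))     # about 121.0 - known in full at checkout
print(total_without_ioss(100.0))  # about 136.0 - partly charged on delivery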
By abolishing the 22 euro VAT exemption for deliveries from third countries, it is estimated that more than 7 billion euros in additional taxes will be collected from the EU member states yearly.
Implementation with IOSS
The IOSS allows suppliers selling imported goods to buyers in the EU (distance selling of goods) to collect the VAT applicable in the country of destination and pay it to the relevant tax authorities. Thus, under the IOSS the buyer is no longer obliged to pay VAT themselves at the time of importing the goods into the EU, as was the case before (for products over 22 euros).
IOSS thus facilitates the collection, declaration and payment of VAT for third-country sellers making distance sales of imported goods to buyers residing in the EU. As a result, the buyer is no longer surprised by hidden fees (taxes), such as high customs clearance fees for collecting VAT. However, if the third-country seller is not registered with the IOSS (which is voluntary), the buyer will still have to pay VAT and possibly a customs duty levied by the transport company (e.g. post office) at the time of importing the goods into the EU.
Registration
Registration for companies has been possible since 1 April 2021 on the IOSS portal of any EU member state, and registration in a single union member state is sufficient. The company receives an IOSS identification number (also simply called an IOSS number). If a company is not based in the EU, it must also use an EU-based intermediary (a fiscal representative) to meet and guarantee its VAT obligations under the IOSS. The IOSS VAT number is issued by a tax authority in a union member state and is made available electronically to all other customs authorities in the EU. However, this database of IOSS VAT registration numbers is not public. When making a customs declaration, the customs authorities check the IOSS VAT identification number of the package against the database of IOSS VAT identification numbers. If the IOSS number is valid and the real value of the shipment does not exceed 150 euros, customs authorities do not require immediate payment of VAT on low-value goods registered through the IOSS.
Non-participation in IOSS
If a company does not participate in IOSS, the customer must pay the VAT themselves when importing the goods into the EU. Postal operators or courier services can also charge the customer a handling fee to cover the costs then required for (customs) formalities when importing goods. Customers in the EU will only receive the ordered goods after paying the VAT. This can result in the customer refusing to accept the package in question because of the additional costs.
A seller which does not participate in IOSS must fulfill any customs and tax obligations separately in each EU member state to which it delivers, which may include registering there.
Exceptions
If several goods are sold to the same buyer and if these goods exceed a value of 150 euro per package, these goods are taxed when imported into the EU member state (import sales tax). In the case of distance selling of goods through an electronic interface such as a marketplace or platform, the electronic interface is responsible for the overdue VAT. Goods that are subject to excise duties (e.g. alcohol or tobacco products) cannot be processed through the IOSS. Even if these excise goods are the subject of a distance sale from third countries, they are not covered by the IOSS regulations.
See also
VOEC, a similar, but unrelated scheme implemented in Norway from 2020
References
Further reading
Council Implementing Regulation (EU) No 282/2011
Council Implementing Regulation (EU) No 282/2011
Council Regulation (EU) No 904/2010
Council Directive (EU) 2017/2455
Regulation (EU) 2017/2454,
Implementing Regulation (EU) 2017/2459
Council Directive (EU) 2019/1995
Council Implementing Regulation (EU) 2019/2026
Implementing Regulation (EU) 2020/194
Council Decision (EU) 2020/1109
Council Regulation (EU) 2020/1108
Council Implementing Regulation (EU) 2020/1112
Commission Implementing Regulation (EU) 2020/1318
E-commerce | Import One-Stop Shop | Technology | 1,455 |
14,323,258 | https://en.wikipedia.org/wiki/Hepatocyte%20nuclear%20factor%204%20gamma |
Hepatocyte nuclear factor 4 gamma (HNF4G) also known as NR2A2 (nuclear receptor subfamily 2, group A, member 2) is a nuclear receptor that in humans is encoded by the HNF4G gene.
Function
HNF4G is a transcription factor that has been shown to play a significant role in intestinal epithelial cell differentiation and function. Research using integrative multi-omics analysis of intestinal organoid differentiation has revealed that HNF4G acts as a master regulator of gene regulation in differentiation towards the enterocyte lineage. The study demonstrated widespread binding to promoters and enhancers that are activated in enterocytes, and that the loss of Hnf4g results in a partial loss of enterocyte differentiation, indicating its importance in maintaining the enterocyte lineage.
See also
Hepatocyte nuclear factor 4
Hepatocyte nuclear factors
References
Further reading
External links
Intracellular receptors
Transcription factors | Hepatocyte nuclear factor 4 gamma | Chemistry,Biology | 202 |
51,906,100 | https://en.wikipedia.org/wiki/Ts15 | Ts15 (Tityustoxin-15; α-KTx 21.1) is produced by the Brazilian yellow scorpion Tityus serrulatus. It targets voltage-gated potassium channels, primarily the subtypes Kv1.2 and Kv1.3.
Sources
Ts15 can be isolated from the venom of Tityus serrulatus, otherwise known as the Brazilian yellow scorpion. This scorpion is the deadliest in Brazil, with a lethality rate of 0.15%. Ts15 is only one of many neurotoxins that can be found in the venom of Tityus serrulatus.
Chemistry
Ts15 is a peptide with a length of 36 amino acids, which are crosslinked by three disulfide bridges. The 27th position in the amino acid sequence undergoes N-linked glycosylation. Ts15 is a scorpion short toxin. The rest of the structure of the toxin remains unknown so far. Due to its low structural similarity to other members of the α-family (<30%), it cannot be easily compared to them.
Target
Kv channels are the main targets of Ts15. While other members of the α-family generally target both Kv channels and sodium channels (Nav channels), Ts15 only targets Kv channels. It mainly targets Kv1.2 and Kv1.3, blocking their currents by 73% and 50%, respectively. Of all the channels it targets, Ts15 has the highest affinity for Kv1.2. Besides these channels, Ts15 also targets Shaker IR channels, Kv1.6 channels and Kv2.1 channels. The effects of Ts15 are voltage-independent, meaning it can bind to a channel in any state of activation, and are reversible.
Toxicity and treatment
Toxicity
The LD50 of Ts15 is unknown, and the symptoms caused solely by Ts15 have not been studied extensively. However, Ts15 is known to block the Kv1.3 channels on autoreactive effector memory T-cells; binding of the toxin triggers immunosuppression by decreasing the calcium influx into the cell.
Treatment
In general, a sting by Tityus serrulatus is treated with scorpion antivenom serum, named Soro antiscorpionico. A human antibody fragment, serrumab, neutralises most of the venom. However, this antibody works on the entire venomous cocktail of the scorpion. Information about treating Ts15 alone is currently unavailable.
References
Neurotoxins
Ion channel toxins
Scorpion toxins | Ts15 | Chemistry | 533 |
956,089 | https://en.wikipedia.org/wiki/Marburgvirus | The genus Marburgvirus is the taxonomic home of Marburg marburgvirus, whose members are the two known marburgviruses, Marburg virus (MARV) and Ravn virus (RAVV). Both viruses cause Marburg virus disease in humans and nonhuman primates, a form of viral hemorrhagic fever. Both are select agents, World Health Organization Risk Group 4 Pathogens (requiring Biosafety Level 4-equivalent containment), National Institutes of Health/National Institute of Allergy and Infectious Diseases Category A Priority Pathogens, Centers for Disease Control and Prevention Category A Bioterrorism Agents, and are listed as Biological Agents for Export Control by the Australia Group.
Use of term
The genus Marburgvirus is a virological taxon included in the family Filoviridae, order Mononegavirales. The genus currently includes a single virus species, Marburg marburgvirus. The members of the genus (i.e. the actual physical entities) are called marburgviruses. The name Marburgvirus is derived from the city of Marburg in Hesse, West Germany (where Marburg virus was first discovered), and the taxonomic suffix -virus (which denotes a virus genus). Even though the virus was named after the city of Marburg, Dr. Ana Gligić, lead virologist at a laboratory in Belgrade, was the first to isolate the virus.
Previous designations
Until 1998, the family Filoviridae contained only one genus, Filovirus. Once it became clear that marburgviruses and ebolaviruses are fundamentally different, this genus was abolished and a genus "Marburg-like viruses" was established for marburgviruses. In 2002, the genus name was changed to Marburgvirus, and in 2010 and 2011 the genus was emended.
Genus inclusion criteria
A virus that fulfills the criteria for being a member of the family Filoviridae is a member of the genus Marburgvirus if
its genome has one gene overlap
its fourth gene (GP) encodes only one protein (GP1,2) and cotranscriptional editing is not necessary for its expression
peak infectivity of its virions is associated with particles ≈665 nm in length
its genome differs from that of Marburg virus by <50% at the nucleotide level
its virions show almost no antigenic cross reactivity with ebolavirions
Genus organization
Marburg marburgvirus
The species was introduced in 1998 as Marburg virus.
Because of easy confusion with its virus member Marburg virus, the species name was changed to Lake Victoria marburgvirus in 2005.
In 2010, it was proposed to change the name to Marburg marburgvirus, and this proposal was accepted in early 2012 by the ICTV.
Marburg marburgvirus (also referred to as Lake Victoria marburgvirus) is a virological taxon (i.e. a man-made concept) included in the genus Marburgvirus, family Filoviridae, order Mononegavirales. The species has two virus members, Marburg virus (MARV) and Ravn virus (RAVV). The members of the species (i.e. the actual physical entities) are called Marburg marburgviruses. The name Marburg marburgvirus is derived from the city of Marburg in Hesse, West Germany (where Marburg virus was first discovered), and the taxonomic suffix marburgvirus (which denotes a marburgvirus species).
Species inclusion criteria
A virus that fulfills the criteria for being a member of the genus Marburgvirus is a member of the species Marburg marburgvirus if it has the properties of marburgviruses (because there is currently only one marburgvirus species) and if its genome differs from that of Marburg virus (variant Musoke) by <30% at the nucleotide level.
References
Further reading
External links
ICTV Report: Filoviridae
International Committee on Taxonomy of Viruses (ICTV)
Primate diseases
Animal viral diseases
Arthropod-borne viral fevers and viral haemorrhagic fevers
Biological agents
Hemorrhagic fevers
Tropical diseases
Zoonoses
Chiroptera-borne diseases
Filoviridae
Virus genera | Marburgvirus | Biology,Environmental_science | 876 |
62,480,458 | https://en.wikipedia.org/wiki/Sidorenko%27s%20conjecture | Sidorenko's conjecture is a major conjecture in the field of extremal graph theory, posed by Alexander Sidorenko in 1986. Roughly speaking, the conjecture states that for any bipartite graph $H$ and graph $G$ on $n$ vertices with average degree $d$, there are at least $d^{e(H)} n^{v(H)-e(H)}$ labeled copies of $H$ in $G$, up to a small error term. Formally, it provides an intuitive inequality about graph homomorphism densities in graphons. The conjectured inequality can be interpreted as a statement that the density of copies of $H$ in a graph is asymptotically minimized by a random graph, as one would expect a $p^{e(H)}$ fraction of possible subgraphs to be a copy of $H$ if each edge exists with probability $p$.
Statement
Let $H$ be a graph. Then $H$ is said to have Sidorenko's property if, for all graphons $W$, the inequality
$$t(H, W) \ge t(K_2, W)^{e(H)}$$
is true, where $t(H, W)$ is the homomorphism density of $H$ in $W$.
Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property.
If $W$ is a graph $G$, this means that the probability of a uniform random mapping from $V(H)$ to $V(G)$ being a homomorphism is at least the product over each edge in $H$ of the probability of that edge being mapped to an edge in $G$. This roughly means that a randomly chosen graph with a fixed number of vertices and average degree has the minimum number of labeled copies of $H$. This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is the limit point of some sequence of graphs.
The requirement that $H$ is bipartite to have Sidorenko's property is necessary — if $W$ is the graphon of a bipartite graph $G$, then $t(C_3, W) = 0$ since $G$ is triangle-free. But $n^2 t(K_2, W)$ is twice the number of edges in $G$, hence positive, so Sidorenko's property does not hold for $C_3$. A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs.
Equivalent formulation
Sidorenko's property is equivalent to the following reformulation:
For all graphs $G$, if $G$ has $n$ vertices and an average degree of $d$, then $\hom(H, G) \ge d^{e(H)} n^{v(H) - e(H)}$.
This is equivalent because the number of homomorphisms from $K_2$ to $G$ is twice the number of edges in $G$ (so $t(K_2, G) = d/n$), and the inequality only needs to be checked when $W$ is a graph, as previously mentioned.
In this formulation, since the number of non-injective homomorphisms from $H$ to $G$ is at most a constant times $n^{v(H)-1}$, Sidorenko's property would imply that there are at least $(1-o(1))\, d^{e(H)} n^{v(H)-e(H)}$ labeled copies of $H$ in $G$.
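Written out, the equivalence is the following routine unpacking of the definitions:
$$t(H, G) \ge t(K_2, G)^{e(H)} \iff \frac{\hom(H, G)}{n^{v(H)}} \ge \left(\frac{d}{n}\right)^{e(H)} \iff \hom(H, G) \ge d^{e(H)}\, n^{v(H) - e(H)}.$$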
Examples
As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphs $G$. Throughout this section, $G$ is a graph on $n$ vertices with average degree $d$. The quantity $\hom(H, G)$ refers to the number of homomorphisms from $H$ to $G$. This quantity is the same as $n^{v(H)} t(H, G)$.
Elementary proofs of Sidorenko's property for some graphs follow from the Cauchy–Schwarz inequality or Hölder's inequality. Others can be done by using spectral graph theory, especially noting the observation that the number of walks of length $\ell$ from vertex $i$ to vertex $j$ in $G$ is the entry in the $i$th row and $j$th column of the matrix $A^\ell$, where $A$ is the adjacency matrix of $G$.
Cauchy–Schwarz: The 4-cycle C4
By fixing two vertices $u$ and $v$ of $G$, each copy of $C_4$ that has $u$ and $v$ on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of $u$ and $v$. Letting $\operatorname{codeg}(u, v)$ denote the codegree of $u$ and $v$ (i.e. the number of common neighbors), this implies:
$$\hom(C_4, G) = \sum_{u, v \in V(G)} \operatorname{codeg}(u, v)^2 \ge \frac{1}{n^2} \left( \sum_{u, v \in V(G)} \operatorname{codeg}(u, v) \right)^2$$
by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So:
$$\sum_{u, v \in V(G)} \operatorname{codeg}(u, v) = \sum_{x \in V(G)} \deg(x)^2 \ge \frac{1}{n} \left( \sum_{x \in V(G)} \deg(x) \right)^2 = \frac{(nd)^2}{n} = nd^2$$
by Cauchy–Schwarz again. So:
$$\hom(C_4, G) \ge \frac{1}{n^2} \left( nd^2 \right)^2 = d^4 = d^{e(C_4)} n^{v(C_4) - e(C_4)},$$
as desired.
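The bound $\hom(C_4, G) \ge d^4$ can also be checked numerically on small graphs by brute-force enumeration. The following C sketch is illustrative only; the test graph (a 5-cycle) is an arbitrary choice:

```c
#include <stdio.h>

#define N 5 /* vertices of G: here a 5-cycle as a small test graph */

static const int adj[N][N] = { /* adjacency matrix of C5 */
    {0,1,0,0,1},
    {1,0,1,0,0},
    {0,1,0,1,0},
    {0,0,1,0,1},
    {1,0,0,1,0},
};

/* Count homomorphisms C4 -> G: tuples (a,b,c,d) with each consecutive
 * pair adjacent, wrapping around from d back to a. */
static long hom_c4(void)
{
    long count = 0;
    for (int a = 0; a < N; a++)
        for (int b = 0; b < N; b++)
            for (int c = 0; c < N; c++)
                for (int d = 0; d < N; d++)
                    if (adj[a][b] && adj[b][c] && adj[c][d] && adj[d][a])
                        count++;
    return count;
}

int main(void)
{
    long deg_sum = 0; /* sum of degrees = twice the number of edges */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            deg_sum += adj[i][j];
    double d = (double)deg_sum / N; /* average degree */
    printf("hom(C4, G) = %ld, d^4 = %.2f\n", hom_c4(), d * d * d * d);
    return 0;
}
```

On this test graph the program prints $\hom(C_4, G) = 30$ against the bound $d^4 = 16$, consistent with the inequality.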
Spectral graph theory: The 2k-cycle C2k
Although the Cauchy–Schwarz approach for $C_4$ is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite.
Using the observation about walks, it follows that $\hom(C_{2k}, G)$ is the sum of the diagonal entries in $A^{2k}$. This is equal to the trace of $A^{2k}$, which in turn is equal to the sum of the $2k$th powers of the eigenvalues of $A$. If $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$ are the eigenvalues of $A$, then the min-max theorem implies that:
$$\lambda_1 \ge \frac{\mathbf{1}^\intercal A \mathbf{1}}{\mathbf{1}^\intercal \mathbf{1}} = d,$$
where $\mathbf{1}$ is the vector with $n$ components, all of which are $1$. But then:
$$\hom(C_{2k}, G) = \operatorname{tr}\left(A^{2k}\right) = \sum_{i=1}^{n} \lambda_i^{2k} \ge \lambda_1^{2k} \ge d^{2k}$$
because the eigenvalues of a real symmetric matrix are real. So:
$$\hom(C_{2k}, G) \ge d^{2k} = d^{e(C_{2k})} n^{v(C_{2k}) - e(C_{2k})},$$
as desired.
Entropy: Paths of length 3
J.L. Xiang Li and Balázs Szegedy (2011) introduced the idea of using entropy to prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property. While Szegedy's proof wound up being abstract and technical, Tim Gowers and Jason Long reduced the argument to a simpler one for specific cases such as paths of length 3. In essence, the proof chooses a nice probability distribution of choosing the vertices in the path and applies Jensen's inequality (i.e. convexity) to deduce the inequality.
Partial results
Here is a list of some bipartite graphs which have been shown to have Sidorenko's property. Let $H$ have bipartition $A \cup B$.
Paths have Sidorenko's property, as shown by Mulholland and Smith in 1959 (before Sidorenko formulated the conjecture).
Trees have Sidorenko's property, generalizing paths. This was shown by Sidorenko in a 1991 paper.
Cycles of even length have Sidorenko's property as previously shown. Sidorenko also demonstrated this in his 1991 paper.
Complete bipartite graphs have Sidorenko's property. This was also shown in Sidorenko's 1991 paper.
Bipartite graphs with have Sidorenko's property. This was also shown in Sidorenko's 1991 paper.
Hypercube graphs (generalizations of ) have Sidorenko's property, as shown by Hatami in 2008.
More generally, norming graphs (as introduced by Hatami) have Sidorenko's property.
If there is a vertex in $A$ that is adjacent to every vertex in $B$ (or vice versa), then $H$ has Sidorenko's property, as shown by Conlon, Fox, and Sudakov in 2010. This proof used the dependent random choice method.
For all bipartite graphs , there is some positive integer such that the -blow-up of has Sidorenko's property. Here, the -blow-up of is formed by replacing each vertex in with copies of itself, each connected with its original neighbors in . This was shown by Conlon and Lee in 2018.
Some recursive approaches have been attempted, which take a collection of graphs that have Sidorenko's property to create a new graph that has Sidorenko's property. The main progress in this manner was done by Sidorenko in his 1991 paper, Li and Szegedy in 2011, and Kim, Lee, and Lee in 2013.
Li and Szegedy's paper also used entropy methods to prove the property for a class of graphs called "reflection trees."
Kim, Lee, and Lee's paper extended this idea to a class of graphs with a tree-like substructure called "tree-arrangeable graphs."
However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graph $K_{5,5} \setminus C_{10}$, formed by removing a 10-cycle from the complete bipartite graph with parts of size 5.
László Lovász proved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" to random graphs in the sense of the cut norm.
Forcing conjecture
A sequence of graphs $\{G_n\}$ is called quasi-random with density $p$ for some density $0 < p < 1$ if for every graph $H$:
$$t(H, G_n) = (1 + o(1))\, p^{e(H)}.$$
The sequence of graphs would thus have properties of the Erdős–Rényi random graph $G(n, p)$.
If the edge density is fixed at $p$, then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graph $H$.
From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for the $C_4$ count to match what would be expected of a random graph (i.e. the condition holds for $H = C_4$). The paper also asks which graphs $H$ have this property besides $C_4$. Such graphs are called forcing graphs as their count controls the quasi-randomness of a sequence of graphs.
The forcing conjecture states the following:
A graph is forcing if and only if it is bipartite and not a tree.
It is straightforward to see that if is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing.
Sidorenko's conjecture for graphs of fixed positive density follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions.
See also
Common graph
References
Graph theory
Conjectures | Sidorenko's conjecture | Mathematics | 1,988 |
3,110,216 | https://en.wikipedia.org/wiki/Omicron%20Piscium | Omicron Piscium (ο Piscium, abbreviated Omi Psc, ο Psc) is a binary star in the constellation of Pisces. It is visible to the naked eye, having an apparent visual magnitude of 4.27. Based upon an annual parallax shift of 11.67 mas as seen from the Earth, the system is located roughly 280 light-years from the Sun. It is positioned near the ecliptic, so is subject to occultation by the Moon. It is a member of the thin disk population of the Milky Way.
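Using the standard parallax relation $d[\mathrm{pc}] = 1/p[\mathrm{arcsec}]$, the quoted figures are consistent (a routine check, not a value from the article's sources):
$$d = \frac{1}{0.01167}\ \text{pc} \approx 85.7\ \text{pc} \approx 85.7 \times 3.26\ \text{ly} \approx 280\ \text{ly}.$$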
The two components are designated Omicron Piscium A (formally named Torcular ) and B.
Nomenclature
ο Piscium (Latinised to Omicron Piscium) is the system's Bayer designation. The designations of the two components as Omicron Piscium A and B derives from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The system bore the traditional name Torcularis septentrionalis, taken from the 1515 Almagest. The name is translated from the Greek ληνός ('full'), which was "erroneously written for λίνος" ('linen'). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Torcular for the component Omicron Piscium A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, the name of the asterism meaning Official in Charge of the Pasturing refers to a group consisting of Omicron Piscium, Eta Piscium, Rho Piscium, Pi Piscium and 104 Piscium; the Chinese name for Omicron Piscium itself derives from its membership of this asterism.
Properties
This is a probable astrometric binary system. The visible component, Omicron Piscium A, is an evolved K-type giant star with a stellar classification of K0 III. At the estimated age of 390 million years, it is most likely (76% chance) on the horizontal branch, rather than the red-giant branch. As such, it is a red clump star that is generating energy through helium fusion at its core. The star has three times the mass of the Sun and has expanded to over 14 times the Sun's radius. It is radiating 132 times the Sun's luminosity from its photosphere at an effective temperature of 5,004 K.
References
K-type giants
Horizontal-branch stars
Astrometric binaries
Torcular
Piscium, Omicron
Pisces (constellation)
Piscium, 110
Durchmusterung objects
010761
008198
0510 | Omicron Piscium | Astronomy | 613 |
1,840,072 | https://en.wikipedia.org/wiki/Computer%20rage | Computer rage refers to negative psychological responses towards a computer due to heightened anger or frustration. Examples of computer rage include cursing or yelling at a computer, slamming or throwing a keyboard or a mouse, and assaulting the computer or monitor with an object or weapon.
Notable cases
In April 2015, a Colorado man was cited for firing a gun within a residential area when he took his computer into a back alley and shot it eight times with a 9mm pistol. When questioned, he told police that he had become so frustrated with his computer that he had "reached critical mass," and stated that after he had shot his computer, "the angels sung on high." In 2007, a German man threw his computer out the window in the middle of the night, startling his neighbors. German police were sympathetic and did not press charges, stating, "Who hasn't felt like doing that?" In 2006, the staged surveillance video "Bad Day", showing a man assaulting his computer at work, became a viral hit on the Internet, reaching over two million views. Other instances of reported computer rage have ranged from a restaurant owner who threw his laptop into a deep fryer, to an individual who attempted to throw his computer out the window, but forgot that the window was closed.
The Angry German Kid is a popular Internet meme that stems from a viral video from the mid-2000s where the protagonist screams at his computer for loading too slowly, and repeatedly hits the table with the keyboard, causing keys to fall off.
Prevalence
In 1999, it was speculated that computer rage had become more common than road rage in traffic, but in a 2015 study, it was found that reported rates of anger when using a computer were lower than reported rates of anger while driving. However, reports of anger while driving or using computers were found to be far more common than anger in other situations.
In a 2013 survey of American adults, 36% of respondents who reported experiencing computer issues also reported that they had screamed, yelled, cursed, or physically assaulted their computers within the last six months.
In 2009, a survey was conducted with British computer users about their experiences with computers. This survey found that 54% of respondents reported verbally abusing their computers, and 40% reported that they had become physically violent toward their computers. The survey also found that most users experienced computer rage three to four times a month.
Differences in types of computer rage have also been found between different geographical regions. For example, one survey found that individuals from London have been found to be five times more likely to physically assault their computers, while those from Yorkshire and the Humber were found to be more likely to yell at their computers. Differences have also been observed for age groups, as younger adults (18–24 years old) have reported more abusive behaviors in the face of computer frustration when compared to older adults (over 35 years old). Individuals with less computer experience in particular have also been reported to experience increased feelings of anger and helplessness when it comes to computers, but other research has argued that it is the self-efficacy beliefs about computers that are predictive of computer frustration, not the amount of computer experience or use.
In 1999, Professor Robert J. Edelmann, a chartered clinical, forensic and health psychologist and a fellow of the British Psychological Society, was offering a special helpline in the UK for those with technology-related anger.
Causes
Computer factors
Users can experience computer anger and frustration for a number of reasons. American adults surveyed in 2013 reported that almost half (46%) of their computer problems were due to malware or computer viruses, followed by software issues (10%) and not enough memory (8%). In another survey, users reported email, word processors, web browsing, operating system crashes, inability to locate features, and program crashes as frequent initiators of computer frustration. These technical issues, paired with tight timelines, poor work progress, and failure to complete a computer task can create heightened computer anger and frustration. When this anger and frustration exceeds a person's control, it can turn into rage.
Psychological factors
Research on emotion has shown that anger is often caused by interruptions of plans and expectations, especially through the violation of social norms. This sense of anger can be magnified when the individual does not understand why they are unable to meet their goal or task at hand or why there was a violation of social norms. Psychologists have argued that this is particularly relevant to computer rage, as computer users interact with computers in a manner similar to how they interact with other people. Thus, when computers fail to function in the face of incoming deadlines or an important task to accomplish, users can feel betrayed by the computer in the same way they can feel betrayed by other people. Specifically, when users fail to understand why their computer will not work properly, often at the times they need it most, it can invoke a sense of hostility as it is interpreted as a breach of social norms or a personal attack. Consistent with this finding, perceived betrayal by the computer can also elicit other negative emotions. One survey of US adults reported that 10% of users who experience computer issues reported feelings of helplessness, and 4% reported feeling victimized. In the same survey, 7% of adults ages 18–34 reported that they had cried over their computer problems within the previous six months.
Dangers and potential benefits
Computer rage can result in damaged property or physical injuries, as well as psychological harm. Some experts have suggested that venting frustrations on the computer may have some benefits, but other experts disagree. For example, yelling at the computer has been suggested as a way to moderate one's anger to avoid the ill effects of anger suppression, but new research has suggested that yelling can negatively affect health in itself. Alternatively, releasing anger on a computer has been viewed as advantageous as it directs this rage at an object as opposed to another person, and can make individuals feel better afterwards.
Prevention and management
In response to computer issues that invoke frustration, some experts have suggested walking away from the computer for 15 minutes to "cool off". Other methods to prevent computer rage can be backing up computer data often, increasing memory of the computer, and even imagining pleasant images, such as petting an animal. Adopting a goal of improving computer knowledge may also be beneficial, as users are less likely to report computer rage when they view the issue as a challenge and not as a setback. If computer rage cannot be avoided, guidelines on how to rage with minimal consequences, such as wearing safety goggles and taking frustration out on older equipment, can be followed to reduce the likelihood of injury and significant property loss.
Employers of staff who work with computers, often in situations where time is crucial, can take steps to prevent computer rage, such as making sure there is adequate software, and providing employees with anger management strategies. Some computer technician companies have reported that, to reduce computer rage, their technicians are trained on how to work with customers in sensitive psychological states just as much as how to diagnose and fix technical issues.
Designing computer interfaces to display more emotional support when errors occur, or provide therapy strategies, has also been suggested as a way to mitigate computer anger and rage. The application of affective computing has been shown to effectively mitigate negative emotions connected to computer use. One study found that an interface that sought the user's feelings, provided empathy, and validated reported emotional states significantly reduced negative emotions associated with computer frustration for users. Another study found that when error messages contain positive wording ("Great that the computer will soon work again") compared to negative wording ("This is frustrating") or a neutral error message, users exhibited more signs of happiness.
See also
Rage (emotion)
Air rage
Bike rage
Road rage
Roid rage
Wrap rage
Technostress
The Media Equation
Debugging
Hang (computing)
Digital media use and mental health
References
Rage (emotion)
Computers
Digital media use and mental health | Computer rage | Technology | 1,615 |
1,274,727 | https://en.wikipedia.org/wiki/Bridgman%20seal | A Bridgman seal, invented by and named after Percy Williams Bridgman, can be used to seal a pressure chamber and compress its contents to high pressures (up to 40,000 MPa) without the seal leaking and releasing the pressure.
A cylindrical driving piston is mounted within a cylindrical channel that is closed at its far end. This piston presses against a hard steel ring, followed by a softer steel ring, and then a ring of more viscous or elastic material such as rubber, copper or soap stone, all within the channel. These three intermediate stages provide an inner cylinder, and the third stage bears on a specially shaped final steel piston that applies the force to pressurize material held at the end of the outer, enclosing channel. The final piston consists of a wider portion that fills the main channel, and a narrower cylindrical extension that leads back through the inner channel formed by the three ring-shaped intermediate stages, ending within the hard steel ring without making direct contact with the driving piston. This arrangement ensures that higher pressures create tighter seals that resist any leakage from the material at the end, since the pressure within the last and softest ring is greater than that in the material at the end.
This arrangement has much in common with the earlier de Bange breech obturator system used to prevent the escape of gases from breech-loading artillery, whether inspired by that or independently invented, with the important further feature that Bridgman's system does not merely resist the escape of material under pressure while stationary: it applies that pressure by movement within the pressurising equipment.
Bridgman's previous pressurizing arrangement, a screw with a 2-m spanner, allowed him to get pressures of up to 400 MPa; the Bridgman seal allowed pressures up to 40,000 MPa. These are typical pressures expected in the Earth's internal structure. This advance allowed him to make many discoveries, including the high-pressure phases of ice (still known by the Bridgman nomenclature), high-pressure minerals, and novel high-pressure material properties. These discoveries were important in many fields of science and engineering. Bridgman won a Nobel Prize for his work.
References
Seals (mechanical) | Bridgman seal | Physics | 456 |
4,579,933 | https://en.wikipedia.org/wiki/Linear%20energy%20transfer | In dosimetry, linear energy transfer (LET) is the amount of energy that an ionizing particle transfers to the material traversed per unit distance. It describes the action of radiation into matter.
It is identical to the retarding force acting on a charged ionizing particle travelling through the matter. By definition, LET is a positive quantity. LET depends on the nature of the radiation as well as on the material traversed.
A high LET will slow down the radiation more quickly, generally making shielding more effective and preventing deep penetration. On the other hand, the higher concentration of deposited energy can cause more severe damage to any microscopic structures near the particle track. If a microscopic defect can cause larger-scale failure, as is the case in biological cells and microelectronics, the LET helps explain why radiation damage is sometimes disproportionate to the absorbed dose. Dosimetry attempts to factor in this effect with radiation weighting factors.
Linear energy transfer is closely related to stopping power, since both equal the retarding force. The unrestricted linear energy transfer is identical to linear electronic stopping power, as discussed below. But the stopping power and LET concepts are different in the respect that total stopping power has the nuclear stopping power component, and this component does not cause electronic excitations. Hence nuclear stopping power is not contained in LET.
The appropriate SI unit for LET is the newton, but it is most typically expressed in units of kiloelectronvolts per micrometre (keV/μm) or megaelectronvolts per centimetre (MeV/cm). While medical physicists and radiobiologists usually speak of linear energy transfer, most non-medical physicists talk about stopping power.
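For orientation, the customary units relate to each other and to the SI unit by routine unit algebra (using $1\ \mathrm{keV} = 1.602\times10^{-16}\ \mathrm{J}$; a check added for clarity, not a quoted value):
$$1\ \mathrm{keV/\mu m} = \frac{10^{-3}\ \mathrm{MeV}}{10^{-4}\ \mathrm{cm}} = 10\ \mathrm{MeV/cm}, \qquad 1\ \mathrm{keV/\mu m} = \frac{1.602\times10^{-16}\ \mathrm{J}}{10^{-6}\ \mathrm{m}} \approx 1.6\times10^{-10}\ \mathrm{N}.$$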
Restricted and unrestricted LET
The secondary electrons produced during the process of ionization by the primary charged particle are conventionally called delta rays, if their energy is large enough so that they themselves can ionize. Many studies focus upon the energy transferred in the vicinity of the primary particle track and therefore exclude interactions that produce delta rays with energies larger than a certain value Δ. This energy limit is meant to exclude secondary electrons that carry energy far from the primary particle track, since a larger energy implies a larger range. This approximation neglects the directional distribution of secondary radiation and the non-linear path of delta rays, but simplifies analytic evaluation.
In mathematical terms, the restricted linear energy transfer is defined by
$$L_\Delta = \frac{dE_\Delta}{dx},$$
where $dE_\Delta$ is the energy loss of the charged particle due to electronic collisions while traversing a distance $dx$, excluding all secondary electrons with kinetic energies larger than Δ. If Δ tends toward infinity, then there are no electrons with larger energy, and the restricted linear energy transfer becomes the unrestricted linear energy transfer, which is identical to the linear electronic stopping power. Here, the use of the term "infinity" is not to be taken literally; it simply means that no energy transfers, however large, are excluded.
Application to radiation types
During his investigations of radioactivity, Ernest Rutherford coined the terms alpha rays, beta rays and gamma rays for the three types of emissions that occur during radioactive decay.
Alpha particles and other positive ions
Linear energy transfer is best defined for monoenergetic ions, i.e. protons, alpha particles, and the heavier nuclei called HZE ions found in cosmic rays or produced by particle accelerators. These particles cause frequent direct ionizations within a narrow diameter around a relatively straight track, thus approximating continuous deceleration. As they slow down, the changing particle cross section modifies their LET, generally increasing it to a Bragg peak just before achieving thermal equilibrium with the absorber, i.e., before the end of range. At equilibrium, the incident particle essentially comes to rest or is absorbed, at which point LET is undefined.
Since the LET varies over the particle track, an average value is often used to represent the spread. Averages weighted by track length or weighted by absorbed dose are present in the literature, with the latter being more common in dosimetry. These averages are not widely separated for heavy particles with high LET, but the difference becomes more important in the other type of radiations discussed below.
Often overlooked for alpha particles is the recoil nucleus of the alpha emitter, which carries significant ionization energy, roughly 5% of that of the alpha particle, but which, because of its high electric charge and large mass, has an ultra-short range of only a few angstroms. This can skew results significantly if one is examining the Relative Biological Effectiveness of the alpha particle in the cytoplasm while ignoring the contribution of the recoil nucleus, which, the alpha parent being one of numerous heavy metals, is typically adhered to chromatin material such as chromosomes.
Beta particles
Electrons produced in nuclear decay are called beta particles. Because of their low mass relative to atoms, they are strongly scattered by nuclei (Coulomb or Rutherford scattering), much more so than heavier particles. Beta particle tracks are therefore crooked. In addition to producing secondary electrons (delta rays) while ionizing atoms, they also produce bremsstrahlung photons. A maximum range of beta radiation can be defined experimentally which is smaller than the range that would be measured along the particle path.
Gamma rays
Gamma rays are photons, whose absorption cannot be described by LET. When a gamma quantum passes through matter, it may be absorbed in a single process (photoelectric effect, Compton effect or pair production), or it continues unchanged on its path. (Only in the case of the Compton effect does another gamma quantum of lower energy proceed.) Gamma ray absorption therefore obeys an exponential law (see Gamma rays); the absorption is described by the absorption coefficient or by the half-value thickness.
LET has therefore no meaning when applied to photons. However, many authors speak of "gamma LET" anyway, where they are actually referring to the LET of the secondary electrons, i.e., mainly Compton electrons, produced by the gamma radiation. The secondary electrons will ionize far more atoms than the primary photon. This gamma LET has little relation to the attenuation rate of the beam, but it may have some correlation to the microscopic defects produced in the absorber. Even a monoenergetic gamma beam will produce a spectrum of electrons, and each secondary electron will have a variable LET as it slows down, as discussed above. The "gamma LET" is therefore an average.
The transfer of energy from an uncharged primary particle to charged secondary particles can also be described by using the mass energy-transfer coefficient.
Biological effects
Many studies have attempted to relate linear energy transfer to the relative biological effectiveness (RBE) of radiation, with inconsistent results. The relationship varies widely depending on the nature of the biological material, and the choice of endpoint to define effectiveness. Even when these are held constant, different radiation spectra that shared the same LET have significantly different RBE.
Despite these variations, some overall trends are commonly seen. The RBE is generally independent of LET for any LET less than 10 keV/μm, so a low LET is normally chosen as the reference condition where RBE is set to unity. Above 10 keV/μm, some systems show a decline in RBE with increasing LET, while others show an initial increase to a peak before declining. Mammalian cells usually experience a peak RBE for LETs around 100 keV/μm. These are very rough numbers; for example, one set of experiments found a peak at 30 keV/μm.
The International Commission on Radiation Protection (ICRP) proposed a simplified model of RBE-LET relationships for use in dosimetry. They defined a quality factor of radiation as a function of dose-averaged unrestricted LET in water, and intended it as a highly uncertain, but generally conservative, approximation of RBE. Different iterations of their model are shown in the graph to the right. The 1966 model was integrated into their 1977 recommendations for radiation protection in ICRP 26. This model was largely replaced in the 1991 recommendations of ICRP 60 by radiation weighting factors that were tied to the particle type and independent of LET. ICRP 60 revised the quality factor function and reserved it for use with unusual radiation types that did not have radiation weighting factors assigned to them.
Application fields
When used to describe the dosimetry of ionizing radiation in the biological or biomedical setting, the LET (like linear stopping power) is usually expressed in units of keV/μm.
In space applications, electronic devices can be disturbed by the passage of energetic electrons, protons or heavier ions that may alter the state of a circuit, producing "single event effects". The effect of the radiation is described by the LET (which is here taken as synonymous with stopping power), typically expressed in units of MeV·cm2/mg of material, the units used for mass stopping power (the material in question is usually Si for MOS devices). The units of measurement arise from a combination of the energy lost by the particle to the material per unit path length (MeV/cm) divided by the density of the material (mg/cm3).
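Explicitly, the unit algebra behind MeV·cm2/mg works out as follows (a routine check):
$$\frac{\mathrm{MeV/cm}}{\mathrm{mg/cm^3}} = \mathrm{MeV}\cdot\frac{\mathrm{cm}^3}{\mathrm{cm}\cdot\mathrm{mg}} = \mathrm{MeV\cdot cm^2/mg}.$$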
"Soft errors" of electronic devices due to cosmic rays on earth are, however, mostly due to neutrons which do not directly interact with the material and whose passage can therefore not be described by LET. Rather, one measures their effect in terms of neutrons per cm2 per hour, see Soft error.
References
Nuclear physics
Radiation effects
Radiobiology | Linear energy transfer | Physics,Chemistry,Materials_science,Engineering,Biology | 1,927 |
72,440,211 | https://en.wikipedia.org/wiki/Erik%20Ernst | Erik Ernst is a prominent computer scientist and an associate professor at the University of Aarhus in Denmark. In 2010, he won the Dahl-Nygaard Prize.
External links
References
Danish computer scientists
Living people
Dahl–Nygaard Prize
Academic staff of Aarhus University
Year of birth missing (living people) | Erik Ernst | Technology | 59 |
47,701,231 | https://en.wikipedia.org/wiki/Integer%20set%20library | isl (integer set library) is a portable C library for manipulating sets and relations of integer points bounded by linear constraints.
The following operations are supported:
intersection, union, set difference
emptiness check
convex hull
(integer) affine hull
integer projection
computing the lexicographic minimum using parametric integer programming
coalescing
parametric vertex enumeration
It also includes an ILP solver based on generalized basis reduction, transitive closures on maps (which may encode infinite graphs), dependence analysis and bounds on piecewise step-polynomials.
All computations are performed in exact integer arithmetic using GMP or imath.
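A minimal usage sketch in C, assuming isl and its headers are installed (the set descriptions are made-up examples; the calls follow isl's documented C API, in which most operations consume their arguments):

```c
#include <stdio.h>
#include <isl/ctx.h>
#include <isl/set.h>

int main(void)
{
    isl_ctx *ctx = isl_ctx_alloc();

    /* Two sets of integer points bounded by linear constraints. */
    isl_set *s1 = isl_set_read_from_str(ctx, "{ [i] : 0 <= i < 10 }");
    isl_set *s2 = isl_set_read_from_str(ctx, "{ [i] : 5 <= i < 20 }");

    /* Intersection; copies are taken because isl operations consume
     * their arguments. */
    isl_set *meet = isl_set_intersect(isl_set_copy(s1), isl_set_copy(s2));
    printf("empty? %d\n", isl_set_is_empty(meet) == isl_bool_true);
    isl_set_dump(meet); /* prints { [i] : 5 <= i <= 9 } to stderr */

    isl_set_free(s1);
    isl_set_free(s2);
    isl_set_free(meet);
    isl_ctx_free(ctx);
    return 0;
}
```

If isl's pkg-config file is installed, this would typically be built with something like gcc example.c $(pkg-config --cflags --libs isl).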
Many program analysis techniques are based on integer set manipulations. The integers typically represent iterations of a loop nest or elements of an array.
isl uses parametric integer programming to obtain an explicit representation in terms of integer divisions.
It is used as the backend polyhedral library in the GCC Graphite framework and in the LLVM Polly framework for loop optimizations.
See also
Frameworks supporting the polyhedral model
Integer programming
References
External links
Official ISL web site
ISL source repository
Integer sets and relations: from high-level modeling to low-level implementation (Sven Verdoolaege)
Computer arithmetic
C (programming language) libraries
Numerical libraries
Free software programmed in C
Software using the MIT license | Integer set library | Mathematics | 267 |
31,385,296 | https://en.wikipedia.org/wiki/Bacterial%20taxonomy | Bacterial taxonomy is the subfield of taxonomy devoted to the classification of bacteria specimens into taxonomic ranks. Archaeal taxonomy is governed by the same rules.
In the scientific classification established by Carl Linnaeus, each species is assigned to a genus resulting in a two-part name. This name denotes the two lowest levels in a hierarchy of ranks, increasingly larger groupings of species based on common traits. Of these ranks, domains are the most general level of categorization. Presently, scientists classify all life into just three domains, Eukaryotes, Bacteria and Archaea.
Bacterial taxonomy is the classification of strains within the domain Bacteria into hierarchies of similarity. This classification is similar to that of plants, mammals, and other taxonomies. However, biologists specializing in different areas have developed differing taxonomic conventions over time. For example, bacterial taxonomists name types based on descriptions of strains. Zoologists among others use a type specimen instead.
Diversity
Bacteria (prokaryotes, together with Archaea) share many common features. These commonalities include the lack of a nuclear membrane, unicellularity, division by binary-fission and generally small size. The various species can be differentiated through the comparison of several characteristics, allowing their identification and classification. Examples include:
Phylogeny: All bacteria stem from a common ancestor and diversified since, and consequently possess different levels of evolutionary relatedness (see Bacterial phyla and Timeline of evolution)
Metabolism: Different bacteria may have different metabolic abilities (see Microbial metabolism)
Environment: Different bacteria thrive in different environments, such as high/low temperature and salt (see Extremophiles)
Morphology: There are many structural differences between bacteria, such as cell shape, Gram stain (number of lipid bilayers) or bilayer composition (see Bacterial cellular morphologies, Bacterial cell structure)
History
First descriptions
Bacteria were first observed by Antonie van Leeuwenhoek in 1676, using a single-lens microscope of his own design. He called them "animalcules" and published his observations in a series of letters to the Royal Society.
Early described genera of bacteria include Vibrio and Monas, by O. F. Müller (1773, 1786), then classified as Infusoria (however, many species before included in those genera are regarded today as protists); Polyangium, by H. F. Link (1809), the first bacterium still recognized today; Serratia, by Bizio (1823); and Spirillum, Spirochaeta and Bacterium, by Ehrenberg (1838).
The term Bacterium, introduced as a genus by Ehrenberg in 1838, became a catch-all for rod-shaped cells.
Early formal classifications
Bacteria were first classified as plants constituting the class Schizomycetes, which along with the Schizophyceae (blue green algae/Cyanobacteria) formed the phylum Schizophyta.
Haeckel in 1866 placed the group in the phylum Moneres (from μονήρης: simple) in the kingdom Protista and defined them as completely structureless and homogeneous organisms, consisting only of a piece of plasma. He subdivided the phylum into two groups:
(no envelope)
Protogenes – such as Protogenes primordialis, now classed as a eukaryote and not a bacterium
Protamaeba – now classed as a eukaryote and not a bacterium
Vibrio – a genus of comma shaped bacteria first described in 1854)
Bacterium – a genus of rod shaped bacteria first described in 1828, which later gave its name to the members of the Monera, formerly referred to as "a moneron" (plural "monera") in English
Bacillus – a genus of spore-forming rod shaped bacteria first described in 1835
Spirochaeta – thin spiral shaped bacteria first described in 1835
Spirillum – spiral shaped bacteria first described in 1832
etc.
(with envelope)
Protomonas – now classed as a eukaryote and not a bacterium. The name was reused in 1984 for an unrelated genus of Bacteria
Vampyrella – now classed as a eukaryote and not a bacterium
The classification of Ferdinand Cohn (1872) was influential in the nineteenth century, and recognized six genera: Micrococcus, Bacterium, Bacillus, Vibrio, Spirillum, and Spirochaeta.
The group was later reclassified as the Prokaryotes by Chatton.
The classification of Cyanobacteria (colloquially "blue green algae") has been disputed between being algae or bacteria (for example, Haeckel classified Nostoc in the phylum Archephyta of Algae).
In 1905, Erwin F. Smith accepted 33 valid different names of bacterial genera and over 150 invalid names, and Vuillemin, in a 1913 study, concluded that all species of the Bacteria should fall into the genera Planococcus, Streptococcus, Klebsiella, Merista, Planomerista, Neisseria, Sarcina, Planosarcina, Metabacterium, Clostridium, Serratia, Bacterium, and Spirillum.
Cohn recognized four tribes: Spherobacteria, Microbacteria, Desmobacteria, and Spirobacteria. Stanier and van Neil recognized the kingdom Monera with two phyla, Myxophyta and Schizomycetae, the latter comprising classes Eubacteriae (three orders), Myxobacteriae (one order), and Spirochetae (one order). Bisset distinguished 1 class and 4 orders: Eubacteriales, Actinomycetales, Streptomycetales, and Flexibacteriales. Walter Migula's system, which was the most widely accepted system of its time and included all then-known species but was based only on morphology, contained the three basic groups Coccaceae, Bacillaceae, and Spirillaceae, but also Trichobacterinae for filamentous bacteria. Orla-Jensen established two orders: Cephalotrichinae (seven families) and Peritrichinae (presumably with only one family). Bergey et al. presented a classification which generally followed the 1920 Final Report of the Society of American Bacteriologists Committee (Winslow et al.), which divided class Schizomycetes into four orders: Myxobacteriales, Thiobacteriales, Chlamydobacteriales, and Eubacteriales, with a fifth group being four genera considered intermediate between bacteria and protozoans: Spirocheta, Cristospira, Saprospira, and Treponema.
However, different authors often reclassified the genera due to the lack of visible traits to go by, resulting in a poor state of affairs that was summarised in 1915 by Robert Earle Buchanan. By then, the whole group had received different ranks and names from different authors, namely:
Schizomycetes (Naegeli 1857)
Bacteriaceae (Cohn 1872 a)
Bacteria (Cohn 1872 b)
Schizomycetaceae (DeToni and Trevisan 1889)
Furthermore, the families into which the class was subdivided changed from author to author and for some, such as Zipf, the names were in German and not in Latin.
The first edition of the Bacteriological Code in 1947 sorted out several problems.
A. R. Prévot's system had four subphyla and eight classes, as follows:
Eubacteriales (classes Asporulales and Sporulales)
Mycobacteriales (classes Actinomycetales, Myxobacteriales, and Azotobacteriales)
Algobacteriales (classes Siderobacteriales and Thiobacteriales)
Protozoobacteriales (class Spirochetales)
Informal groups based on Gram staining
Despite there being little agreement on the major subgroups of the Bacteria, Gram staining results were most commonly used as a classification tool. Consequently, until the advent of molecular phylogeny, the kingdom Prokaryota was divided into four divisions, a classification scheme still formally followed by Bergey's Manual of Systematic Bacteriology for tome order.
Gracilicutes (gram-negative)
Photobacteria (photosynthetic): class Oxyphotobacteriae (water as electron donor, includes the order Cyanobacteriales=blue-green algae, now phylum Cyanobacteria) and class Anoxyphotobacteriae (anaerobic phototrophs, orders: Rhodospirillales and Chlorobiales
Scotobacteria (non-photosynthetic, now the Proteobacteria and other gram-negative nonphotosynthetic phyla)
Firmacutes [sic] (gram-positive, subsequently corrected to Firmicutes)
several orders such as Bacillales and Actinomycetales (now in the phylum Actinobacteria)
Mollicutes (gram variable, e.g. Mycoplasma)
Mendocutes (uneven gram stain, "methanogenic bacteria", now known as the Archaea)
Molecular era
"Archaic bacteria" and Woese's reclassification
Woese argued that the bacteria, archaea, and eukaryotes represent separate lines of descent that diverged early on from an ancestral colony of organisms. However, a few biologists argue that the Archaea and Eukaryota arose from a group of bacteria. In any case, it is thought that viruses and archaea began relationships approximately two billion years ago, and that co-evolution may have been occurring between members of these groups. It is possible that the last common ancestor of the bacteria and archaea was a thermophile, which raises the possibility that lower temperatures are "extreme environments" in archaeal terms, and organisms that live in cooler environments appeared only later. Since the Archaea and Bacteria are no more related to each other than they are to eukaryotes, the term prokaryote's only surviving meaning is "not a eukaryote", limiting its value.
With improved methodologies it became clear that the methanogenic bacteria were profoundly different and were (erroneously) believed to be relics of ancient bacteria. Thus Carl Woese, regarded as the forerunner of the molecular phylogeny revolution, identified three primary lines of descent: the Archaebacteria, the Eubacteria, and the Urkaryotes, the latter now represented by the nucleocytoplasmic component of the Eukaryotes. These lineages were formalised into the rank of domain (regio in Latin), dividing life into three domains: the Eukaryota, the Archaea and the Bacteria.
In 2023, the Prokaryotic Code added the ranks of domain and kingdom to the prokaryotic nomenclature. The names of Bacteria and Archaea are validly-published taxa following Oren and Goker's publication that use these new rules.
Subdivisions
In 1987 Carl Woese divided the Eubacteria into 11 divisions based on 16S ribosomal RNA (SSU) sequences, which with several additions are still used today.
Oren and Goker have also validly published a number of kingdoms as a rank above the division/phylum level:
Domain Bacteria
Kingdom Bacillati (divisions Firmicutes and ‘Tenericutes’, ‘Terrabacteria’, ‘Terrabacterida’, monoderms pro parte, subkingdom ‘Unibacteria’ pro parte)
Kingdom Fusobacteriati (‘Fusobacterida’)
Kingdom Pseudomonadati (division Gracilicutes, ‘Hydrobacteria’, ‘Hydrobacterida’ and ‘Aquificida’, diderms, subkingdom ‘Negibacteria’)
Kingdom Thermotogati (‘Thermotogida’)
Domain Archaea
Kingdom Methanobacteriati (phylum ‘Euryarchaeota’ sensu lato, ‘Euryarchaeida’)
Kingdom Nanobdellati (DPANN superphylum)
Kingdom Thermoproteati (TACK superphylum, ‘Crenarchaeida’)
Kingdom Promethearchaeati (Asgard, proposed by Imachi et al. later)
Opposition
While the three domain system is widely accepted, some authors have opposed it for various reasons.
One prominent scientist who opposes the three domain system is Thomas Cavalier-Smith, who proposed that the Archaea and the Eukaryotes (the Neomura) stem from Gram-positive bacteria (Posibacteria), which in turn derive from Gram-negative bacteria (Negibacteria), based on several logical arguments. These arguments are highly controversial and generally disregarded by the molecular biology community (one reviewer, Eric Bapteste, was "agnostic" regarding the conclusions), and are often not mentioned in reviews due to the subjective nature of the assumptions made.
However, despite there being a wealth of statistically supported studies towards the rooting of the tree of life between the Bacteria and the Neomura by means of a variety of methods, including some that are impervious to accelerated evolution—which is claimed by Cavalier-Smith to be the source of the supposed fallacy in molecular methods—there are a few studies which have drawn different conclusions, some of which place the root in the phylum Firmicutes with nested archaea.
Radhey Gupta's molecular taxonomy, based on conserved signature sequences of proteins, includes a monophyletic Gram negative clade, a monophyletic Gram positive clade, and a polyphyletic Archeota derived from Gram positives. Hori and Osawa's molecular analysis indicated a link between Metabacteria (=Archeota) and eukaryotes. The only cladistic analyses for bacteria based on classical evidence largely corroborate Gupta's results (see comprehensive mega-taxonomy).
James Lake presented a 2 primary kingdom arrangement (Parkaryotae + eukaryotes and eocytes + Karyotae) and suggested a 5 primary kingdom scheme (Eukaryota, Eocyta, Methanobacteria, Halobacteria, and Eubacteria) based on ribosomal structure and a 4 primary kingdom scheme (Eukaryota, Eocyta, Methanobacteria, and Photocyta), bacteria being classified according to 3 major biochemical innovations: photosynthesis (Photocyta), methanogenesis (Methanobacteria), and sulfur respiration (Eocyta). He has also discovered evidence that Gram-negative bacteria arose from a symbiosis between 2 Gram-positive bacteria.
Authorities
Classification is the grouping of organisms into progressively more inclusive groups based on phylogeny and phenotype, while nomenclature is the application of formal rules for naming organisms.
Nomenclature authority
Despite there being no official and complete classification of prokaryotes, the names (nomenclature) given to prokaryotes are regulated by the International Code of Nomenclature of Prokaryotes (Prokaryotic Code), a book which contains general considerations, principles, rules, and various notes, and advises in a similar fashion to the nomenclature codes of other groups.
Classification authorities
As taxa proliferated, computer-aided taxonomic systems were developed. Early non-networked identification software entering widespread use was produced by Edwards 1978, Kellogg 1979, Schindler, Duben, and Lysenko 1979, Beers and Lockhard 1962, Gyllenberg 1965, Holmes and Hill 1985, Lapage et al. 1970 and Lapage et al. 1973.
Today the taxa which have been correctly described are reviewed in Bergey's Manual of Systematic Bacteriology, which aims to aid in the identification of species and is considered the highest authority. An online version of the taxonomic outline of bacteria and archaea (TOBA) is available.
List of Prokaryotic names with Standing in Nomenclature (LPSN) is an online database based on the International Code of Nomenclature of Prokaryotes which currently contains over two thousand accepted names with their references, etymologies and various notes.
Description of new species
The International Journal of Systematic Bacteriology/International Journal of Systematic and Evolutionary Microbiology (IJSB/IJSEM) is a peer-reviewed journal which acts as the official international forum for the publication of new prokaryotic taxa. If a species is published in a different peer-reviewed journal, the author can submit a request to IJSEM with the appropriate description; if the description is correct, the new species will be featured in the Validation List of IJSEM.
Distribution
Microbial culture collections are depositories of strains which aim to safeguard them and to distribute them.
Alternative systems
A few other nomenclatural systems have been proposed to correct for perceived shortcomings in the Prokaryotic Code system:
SeqCode is a separate set of rules that govern prokaryotic nomenclature. Instead of using cultured strains as type material, it uses genome sequences. The SeqCode organization maintains its own database of names.
GTDB is a computer database that gives a prokaryotic nomenclature based on marker-gene phylogeny and its own rules. Some of its results have been adapted into the Prokaryotic Code and SeqCode systems.
These following systems provide a taxonomy database under more ad hoc rules:
The GenBank taxonomy browser includes all taxa that were used in GenBank submissions, with significant changes made by the curator. It's not limited to prokaryotes.
'The All-Species Living Tree' Project (SILVA LTP) provides a database of 16S rRNA sequences annotated with its own type of taxonomy. Ribosomal database project (RDP) is a similar project.
Greengenes is a system that combines the Web of Life phylogeny with 16S data and names from GTDB and LTP, as of version 2. It offers the 16S V4 region sequences with their placement in the tree.
Open Tree of Life aims to be phylogenetic and is not limited to prokaryotes.
Analyses
Bacteria were at first classified based solely on their shape (vibrio, bacillus, coccus etc.), presence of endospores, Gram stain, aerobic conditions and motility. This system changed with the study of metabolic phenotypes, where metabolic characteristics were used. Recently, with the advent of molecular phylogeny, several genes are used to identify species, the most important of which is the 16S rRNA gene, followed by the 23S gene, the ITS region, gyrB and others to obtain better resolution. The quickest way today to match an isolated strain to a species or genus is to amplify its 16S gene with universal primers, sequence the 1.4 kb amplicon, and submit it to a specialised web-based identification database: either the Ribosomal Database Project, which aligns the sequence to other 16S sequences using Infernal, a secondary-structure based global alignment, or ARB SILVA, which aligns sequences via SINA (the SILVA incremental aligner), which does a local alignment of a seed and extends it.
Several identification methods exist:
Phenotypic analyses
Fatty acid analyses
Growth conditions (Agar plate, Biolog multiwell plates)
Genetic analyses
DNA-DNA hybridization
DNA profiling
Sequence
GC ratios
Phylogenetic analyses
16S-based phylogeny
Phylogeny based on other genes
Multi-gene sequence analysis
Whole-genome sequence based analysis
New species
The minimal standards for describing a new species depend on the group to which the species belongs.
Candidatus
Candidatus is a component of the taxonomic name for a bacterium that cannot be maintained in a bacteriology culture collection. It is an interim taxonomic status for non-cultivable organisms, e.g. "Candidatus Pelagibacter ubique".
Species concept
Bacteria divide asexually and for the most part do not show regionalisms ("everything is everywhere"); therefore the concept of species, which works best for animals, becomes largely a matter of judgment.
The number of named species of bacteria and archaea (approximately 13,000) is surprisingly small considering their early evolution, genetic diversity and residence in all ecosystems. The reason for this is the differences in species concepts between the bacteria and macro-organisms, the difficulties in growing/characterising in pure culture (a prerequisite to naming new species, vide supra) and extensive horizontal gene transfer blurring the distinction of species.
The most commonly accepted definition is the polyphasic species definition, which takes into account both phenotypic and genetic differences.
However, a quicker diagnostic ad hoc threshold to separate species is less than 70% DNA–DNA hybridisation, which corresponds roughly to less than 97% 16S rRNA sequence identity. It has been noted that, were this threshold applied to animal classification, the order Primates would be a single species.
For this reason, more stringent species definitions based on whole genome sequences have been proposed.
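As an illustration of how the 97% 16S identity cut-off mentioned above is applied in practice, the following minimal sketch computes the percent identity of two pre-aligned toy fragments and compares it to the threshold; the sequences, the gap convention and the function name are illustrative assumptions, not part of any validated identification pipeline:

```python
# Minimal sketch: percent identity between two pre-aligned 16S rRNA
# fragments, compared against the commonly cited ~97% species cut-off.
# Real pipelines align full-length sequences first (e.g. with Infernal
# or SINA, as described above); this toy assumes alignment is done.

def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    # Count columns where both sequences carry the same non-gap base.
    matches = sum(a == b and a != "-" for a, b in zip(seq_a, seq_b))
    compared = sum(a != "-" and b != "-" for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / compared

a = "ACGTGGCTA-CGTACGTTAG"   # toy aligned fragments, gaps as '-'
b = "ACGTGGCTAACGTACGATAG"
pid = percent_identity(a, b)
print(f"identity: {pid:.1f}%")                 # ~94.7% for these toys
print("same species by the 97% rule?", pid >= 97.0)
```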
Pathology vs. phylogeny
Ideally, taxonomic classification should reflect the evolutionary history of the taxa, i.e. the phylogeny. Exceptions are made, however, when the phenotype differs within the group, especially from a medical standpoint. Some examples of problematic classifications follow.
Escherichia coli: overly large and polyphyletic
In the family Enterobacteriaceae of the class Gammaproteobacteria, the species of the genus Shigella (S. dysenteriae, S. flexneri, S. boydii, S. sonnei) are, from an evolutionary point of view, strains of the species Escherichia coli (making it polyphyletic), but due to genetic differences the pathogenic strains cause different medical conditions. Confusingly, there are also E. coli strains that produce Shiga toxin, known as STEC.
Escherichia coli is a poorly classified species, as some strains share only 20% of their genome. Being so diverse, it should be given a higher taxonomic ranking. However, due to the medical conditions associated with the species, the classification will not be changed, to avoid confusion in medical contexts.
Bacillus cereus group: close and polyphyletic
In a similar way, the Bacillus species (phylum Firmicutes) belonging to the "B. cereus group" (B. anthracis, B. cereus, B. thuringiensis, B. mycoides, B. pseudomycoides, B. weihenstephanensis and B. medusa) have 99–100% similar 16S rRNA sequences (97% is a commonly cited adequate species cut-off) and are polyphyletic, but for medical reasons (anthrax etc.) remain separate.
Yersinia pestis: extremely recent species
Yersinia pestis is in effect a strain of Yersinia pseudotuberculosis, but with a pathogenicity island that confers a drastically different pathology (plague and tuberculosis-like symptoms, respectively); it arose 15,000 to 20,000 years ago.
Nested genera in Pseudomonas
In the gammaproteobacterial order Pseudomonadales, the genus Azotobacter and the species Azomonas macrocytogenes are actually members of the genus Pseudomonas, but were misclassified due to their nitrogen-fixing capabilities and the large size of the genus Pseudomonas, which renders classification problematic. This will probably be rectified in the near future.
Nested genera in Bacillus
Another example of a large genus with nested genera is the genus Bacillus, in which the genera Paenibacillus and Brevibacillus are nested clades. There is insufficient genomic data at present to fully and effectively correct taxonomic errors in Bacillus.
Agrobacterium: resistance to name change
Based on molecular data it was shown that the genus Agrobacterium is nested in Rhizobium, and the Agrobacterium species were transferred to the genus Rhizobium (resulting in the following comb. nov.: Rhizobium radiobacter (formerly known as A. tumefaciens), R. rhizogenes, R. rubi, R. undicola and R. vitis). Given the plant-pathogenic nature of Agrobacterium species, it was proposed to maintain the genus Agrobacterium, a proposal that was in turn counter-argued.
Other examples of name-change resistance
Gupta et al. 2018 proposed to split Mycobacterium into five genera. The medical community opposed this change. Either taxonomic opinion can be considered valid, according to LPSN, as the Gupta names appeared in Validation List 181.
Mycoplasma was split into six genera in three families by Gupta et al. 2018. The changes were made valid in Validation List 184. Medical researchers firmly opposed the renaming and seek to have the ICSP reject the new names, but the ICSP Judicial Commission did not grant this request. (As with the above case, the older names remain validly published, so it is still acceptable to use these names under the Prokaryotic Code.)
Nomenclature
Taxonomic names are written in italics (or underlined when handwritten) with a majuscule first letter, with the exception of epithets for species and subspecies. Although common in zoology, tautonyms (e.g. Bison bison) are not acceptable, and names of taxa used in zoology, botany or mycology cannot be reused for Bacteria (botany and zoology do, however, share names with each other).
Nomenclature is the set of rules and conventions which govern the names of taxa. Differences in nomenclature between the various kingdoms/domains have been reviewed in the literature.
For Bacteria, valid names must have a Latin or Neo-Latin form and can only use basic Latin letters (w and j inclusive; see History of the Latin alphabet for these); consequently, hyphens, accents and other letters are not accepted and should be transliterated correctly (e.g. ß = ss). Ancient Greek, being written in the Greek alphabet, needs to be transliterated into the Latin alphabet.
When compound words are created, a connecting vowel is needed, chosen according to the origin of the preceding word regardless of the word that follows, unless the latter starts with a vowel, in which case no connecting vowel is added. If the first element is Latin the connecting vowel is -i-, whereas if it is Greek the connecting vowel is -o- (e.g. the Latin-derived acidi- in Acidiphilium versus the Greek-derived halo- in Halobacterium).
For etymologies of names consult LPSN.
Rules for higher taxa
For the prokaryotes (Bacteria and Archaea) the rank of kingdom was not used until 2024 (although some authors referred to phyla as kingdoms). The category of kingdom was included in the Bacteriological Code in November 2023, and the first four proposals (Bacillati, Fusobacteriati, Pseudomonadati, Thermotogati) were validly published in January 2024.
If a new or amended species is placed in new ranks, then according to Rule 9 of the Bacteriological Code the name is formed by the addition of an appropriate suffix to the stem of the name of the type genus. For subclass and class a published recommendation is generally followed, resulting in a neuter plural; however, a few names do not follow this and instead take into account Graeco-Latin grammar (e.g. the feminine plurals Thermotogae, Aquificae and Chlamydiae, the masculine plurals Chloroflexi, Bacilli and Deinococci, and the Greek plurals Spirochaetes, Gemmatimonadetes and Chrysiogenetes).
Phyla endings
Until 2021, phyla were not covered by the Bacteriological Code, so they were named informally. This resulted in a variety of approaches to naming phyla. Some phyla, like Firmicutes, were named according to features shared across the phylum; others, like Chlamydiae, were named using a class name or genus name as the stem (e.g., Chlamydia). In 2021, the decision was made to include phylum names under the Bacteriological Code. Consequently, many phylum names were updated according to the new nomenclatural rules. The higher taxa proposed by Cavalier-Smith are generally disregarded by the molecular phylogeny community (vide supra).
Under the new rules, the name of a phylum is derived from the type genus:
Acidobacteriota (from Acidobacterium)
Actinomycetota (from Actinomyces)
Aquificota (from Aquifex)
Armatimonadota (from Armatimonas)
Atribacterota (from Atribacter)
Bacillota (from Bacillus)
Bacteroidota (from Bacteroides)
Balneolota (from Balneola)
Bdellovibrionota (from Bdellovibrio)
Caldisericota (from Caldisericum)
Calditrichota (from Caldithrix)
Campylobacterota (from Campylobacter)
Chlamydiota (from Chlamydia)
Chlorobiota (from Chlorobium)
Chloroflexota (from Chloroflexus)
Chrysiogenota (from Chrysiogenes)
Coprothermobacterota (from Coprothermobacter)
Deferribacterota (from Deferribacter)
Deinococcota (from Deinococcus)
Dictyoglomota (from Dictyoglomus)
Elusimicrobiota (from Elusimicrobium)
Fibrobacterota (from Fibrobacter)
Fusobacteriota (from Fusobacterium)
Gemmatimonadota (from Gemmatimonas)
Ignavibacteriota (from Ignavibacterium)
Kiritimatiellota (from Kiritimatiella)
Lentisphaerota (from Lentisphaera)
Mycoplasmatota (from Mycoplasma)
Myxococcota (from Myxococcus)
Nitrospinota (from Nitrospina)
Nitrospirota (from Nitrospira)
Planctomycetota (from Planctomyces)
Pseudomonadota (from Pseudomonas)
Rhodothermota (from Rhodothermus)
Spirochaetota (from Spirochaeta)
Synergistota (from Synergistes)
Thermodesulfobacteriota (from Thermodesulfobacterium)
Thermomicrobiota (from Thermomicrobium)
Thermotogota (from Thermotoga)
Verrucomicrobiota (from Verrucomicrobium)
Names after people
Several species are named after people, either the discoverer or a famous person in the field of microbiology; for example, Salmonella is named after D. E. Salmon, who discovered it (albeit as "Bacillus typhi").
For the generic epithet, all names derived from people must be in the female nominative case, either by changing the ending to -a or to the diminutive -ella, depending on the name.
For the specific epithet, the names can be converted into either adjectival form (adding -nus (m.), -na (f.), -num (n.) according to the gender of the genus name) or the genitive of the Latinised name.
Names after places
Many species (the specific epithet) are named after the place where they occur or were found (e.g. Thiospirillum jenense). Their names are created by forming an adjective, joining the locality's name with the ending -ensis (m. or f.) or -ense (n.) in agreement with the gender of the genus name, unless a classical Latin adjective exists for the place. However, names of places should not be used as nouns in the genitive case.
Vernacular names
Despite the fact that some hetero-/homogeneous colonies or biofilms of bacteria have names in English (e.g. dental plaque or star jelly), no bacterial species has a vernacular/trivial/common name in English.
For names in the singular form, plurals cannot be made (singulare tantum), as a plural would imply multiple groups with the same label rather than multiple members of that group (by analogy, in English, chairs and tables are types of furniture, but "furnitures" cannot be used to describe both); conversely, names in the plural form are pluralia tantum. However, a partial exception to this is made by the use of vernacular names.
However, to avoid repetition of taxonomic names, which breaks the flow of prose, vernacular names of members of a genus or higher taxon are often used and recommended. These are formed by writing the name of the taxon in sentence-case roman ("standard" in MS Office) type, thereby treating the proper noun as an English common noun (e.g. the salmonellas). There is some debate about the grammar of such plurals, which can be formed either as regular plurals by adding -(e)s (the salmonellas) or by using the ancient Greek or Latin plural form of the noun (the salmonellae); the latter is problematic, as the plural of -bacter would be -bacteres, while the plural of myces (N.L. masc. n., from Gr. masc. n. mukes) is mycetes.
Conventions exist for certain names; for example, those ending in -monas are converted into -monad (one pseudomonad, two aeromonads, not -monades).
Bacteria which are the etiological cause of a disease are often referred to by the disease name followed by a describing noun (bacterium, bacillus, coccus, agent, or the name of their phylum), e.g. the cholera bacterium (Vibrio cholerae) or the Lyme disease spirochete (Borrelia burgdorferi); note also rickettsialpox (Rickettsia akari).
Treponema is converted into treponeme and the plural is treponemes and not treponemata.
Some unusual bacteria and archaea have special names such as Quin's oval (Quinella ovalis) and Walsby's square (Haloquadratum walsbyi).
Before the advent of molecular phylogeny, many higher taxonomic groupings had only trivial names, which are still used today, some of which are polyphyletic, such as Rhizobacteria. Some higher taxonomic trivial names are:
Blue-green algae are members of the phylum "Cyanobacteria"
Green non-sulfur bacteria are members of the phylum Chloroflexota
Green sulfur bacteria are members of the Chlorobiota
Purple bacteria are some, but not all, members of the phylum Pseudomonadota
Purple sulfur bacteria are members of the order Chromatiales
low G+C Gram-positive bacteria are members of the phylum Bacillota, regardless of GC content
high G+C Gram-positive bacteria are members of the phylum Actinomycetota, regardless of GC content
Rhizobia are members of various genera of Pseudomonadota
Lactic acid bacteria are members of the order Lactobacillales
Coryneform bacteria are members of the family Corynebacteriaceae
Fruiting gliding bacteria or myxobacteria are members of the phylum Myxococcota
Enterics are members of the order Enterobacteriales (although the term is avoided if they do not live in the intestines, such as Pectobacterium)
Acetic acid bacteria are members of the family Acetobacteraceae
Terminology
The abbreviation for species is sp. (plural spp.) and is used after a genus name to indicate a species of that genus. It is often used to denote a strain of a genus for which the species is not known, either because the organism has not yet been described as a species or because insufficient tests were conducted to identify it; for example, Halomonas sp. GFAJ-1 – see also open nomenclature.
If a bacterium is known and well-studied but not culturable, it is given the term Candidatus in its name.
A basonym is the original name of a new combination, namely the first name given to a taxon before it was reclassified.
A synonym is an alternative name for a taxon, i.e. when a taxon has been erroneously described twice.
When a taxon is transferred it becomes a new combination (comb. nov.) or new name (nom. nov.)
paraphyly, monophyly, and polyphyly
See also
Branching order of bacterial phyla (Woese, 1987)
Branching order of bacterial phyla (Gupta, 2001)
Branching order of bacterial phyla (Cavalier-Smith, 2002)
Branching order of bacterial phyla (Rappe and Giovanoni, 2003)
Branching order of bacterial phyla (Battistuzzi et al.,2004)
Branching order of bacterial phyla (Ciccarelli et al., 2006)
Branching order of bacterial phyla after ARB Silva Living Tree
Branching order of bacterial phyla (Genome Taxonomy Database, 2018)
Bacterial phyla, a complicated classification
List of Archaea genera
List of Bacteria genera
List of bacterial orders
List of Latin and Greek words commonly used in systematic names
List of sequenced archaeal genomes
List of sequenced prokaryotic genomes
List of clinically important bacteria
Species problem
Evolutionary grade
Cryptic species complex
Synonym (taxonomy)
Taxonomy
LPSN, list of accepted bacterial and archaeal names
Cyanobacteria, a phylum of common bacteria but poorly classified at present
Human microbiome project
Microbial ecology
References
Biological nomenclature | Bacterial taxonomy | Biology | 8,070 |
7,132,704 | https://en.wikipedia.org/wiki/Fibre%20multi-object%20spectrograph | Fibre multi-object spectrograph (FMOS) is a facility instrument for the Subaru Telescope on Mauna Kea in Hawaii. The instrument consists of a complex fibre-optic positioning system mounted at the prime focus of the telescope. Fibres are then fed to a pair of large spectrographs, each weighing nearly 3000 kg. The instrument will be used to look at the light from up to 400 stars or galaxies simultaneously over a field of view of 30 arcminutes (about the size of the full moon on the sky). The instrument will be used for a number of key programmes, including galaxy formation and evolution, and dark energy via a measurement of the rate at which the universe is expanding.
Design, construction, operation
It is currently being built by a consortium of institutes led by Kyoto University and Oxford University with parts also being manufactured by the Rutherford Appleton Laboratory, Durham University and the Anglo-Australian Observatory. The instrument is scheduled for engineering first-light in late 2008.
OH-suppression
The spectrographs use a technique called OH-suppression to increase the sensitivity of the observations: The incoming light from the fibres is dispersed to a relatively high resolution and this spectrum forms an image on a pair of spherical mirrors which have been etched at the positions corresponding to the bright OH-lines. This spectrum is then re-imaged through a second diffraction grating to allow the full spectrum (without the OH lines) to be imaged onto a single infrared detector.
References
FMOS
FMOS Project
Telescope instruments
Spectrographs
Electronic test equipment
Signal processing
Laboratory equipment | Fibre multi-object spectrograph | Physics,Chemistry,Astronomy,Technology,Engineering | 317 |
63,584,864 | https://en.wikipedia.org/wiki/Copulas%20in%20signal%20processing | A copula is a mathematical function that provides a relationship between the marginal distributions of random variables and their joint distribution. Copulas are important because they represent a dependence structure without using marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, for change detection in remote sensing applications, and for EEG signal processing in medicine. In this article, a short introduction to copulas is presented, followed by a mathematical derivation to obtain copula density functions, and then a section listing copula density functions with applications in signal processing.
Introduction
Using Sklar's theorem, a copula can be described as a cumulative distribution function (CDF) on the unit square with uniform marginal distributions on the interval (0, 1). The CDF of a random variable X, evaluated at x, is the probability that X will take a value less than or equal to x. A copula can represent a dependence structure without using marginal distributions; the uniformly distributed copula variables (u, v, and so on) are easily transformed into the marginal variables (x, y, and so on) by the inverse marginal cumulative distribution functions. Using the chain rule, the copula distribution function can be partially differentiated with respect to its uniformly distributed variables, making it possible to express the multivariate probability density function (PDF) as a product of a multivariate copula density function and the marginal PDFs. The mathematics for converting a copula distribution function into a copula density function is shown below for the bivariate case, and a family of copulas used in signal processing is listed in Table 1.
Mathematical derivation
For any two random variables X and Y, the continuous joint probability distribution function can be written as

$F_{XY}(x, y) = \Pr\{X \le x,\, Y \le y\},$

where $F_X(x) = \Pr\{X \le x\}$ and $F_Y(y) = \Pr\{Y \le y\}$ are the marginal cumulative distribution functions of the random variables X and Y, respectively.

The copula distribution function $C(u, v)$ can then be defined using Sklar's theorem as

$F_{XY}(x, y) = C(F_X(x), F_Y(y)) = C(u, v),$

where $u = F_X(x)$ and $v = F_Y(y)$ are the marginal distribution functions, $F_{XY}(x, y)$ the joint distribution function, and $u, v \in (0, 1)$.

We start by using the relationship between the joint probability density function (PDF) and the joint cumulative distribution function (CDF) and its partial derivatives:

$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x\, \partial y} = \frac{\partial^2 C(u, v)}{\partial u\, \partial v} \cdot \frac{dF_X(x)}{dx} \cdot \frac{dF_Y(y)}{dy} = c(u, v)\, f_X(x)\, f_Y(y). \quad \text{(Equation 1)}$
where $c(u, v) = \partial^2 C(u, v) / (\partial u\, \partial v)$ is the copula density function, and $f_X(x)$ and $f_Y(y)$ are the marginal probability density functions of X and Y, respectively. It is important to understand that there are four elements in Equation 1, and if three of the four are known, the fourth element can be calculated. For example, Equation 1 may be used (a numerical sketch follows this list)
when the joint probability density function of the two random variables is known, the copula density function is known, and one of the two marginal functions is known, then the other marginal function can be calculated; or
when the two marginal functions and the copula density function are known, then the joint probability density function of the two random variables can be calculated; or
when the two marginal functions and the joint probability density function of the two random variables are known, then the copula density function can be calculated.
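The sketch below illustrates the second of these uses with a bivariate Gaussian copula: the copula density is evaluated on the unit square and multiplied by two marginal PDFs, exactly as in Equation 1. SciPy is assumed, and the choice of exponential and Rayleigh marginals and the correlation value are arbitrary illustrative assumptions:

```python
# Sketch of Equation 1 with a Gaussian copula: the joint PDF equals the
# copula density c(u, v) times the two marginal PDFs. SciPy is assumed;
# rho and the marginals are illustrative choices, not prescribed values.
import numpy as np
from scipy.stats import norm, expon, rayleigh

def gaussian_copula_density(u, v, rho):
    # c(u, v) = phi2(x1, x2; rho) / (phi(x1) * phi(x2)),  xi = Phi^{-1}(.)
    x1, x2 = norm.ppf(u), norm.ppf(v)
    num = rho**2 * (x1**2 + x2**2) - 2.0 * rho * x1 * x2
    return np.exp(-num / (2.0 * (1.0 - rho**2))) / np.sqrt(1.0 - rho**2)

def joint_pdf(x, y, rho):
    u, v = expon.cdf(x), rayleigh.cdf(y)   # u = F_X(x), v = F_Y(y)
    return gaussian_copula_density(u, v, rho) * expon.pdf(x) * rayleigh.pdf(y)

print(joint_pdf(1.0, 1.5, rho=0.6))        # joint density at one point
```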
Summary table
The use of copulas in signal processing is fairly new compared to finance. Here, a family of bivariate copula density functions important in the area of signal processing is listed. In the table, $F_X(x)$ and $F_Y(y)$ are the marginal distribution functions and $f_X(x)$ and $f_Y(y)$ the marginal density functions.
TABLE 1: Copula density function of a family of copulas used in signal processing.
References
Signal processing | Copulas in signal processing | Technology,Engineering | 703 |
17,134,535 | https://en.wikipedia.org/wiki/Silicon%20tetrabromide | Silicon tetrabromide, also known as tetrabromosilane, is the inorganic compound with the formula SiBr4. This colorless liquid has a suffocating odor due to its tendency to hydrolyze with release of hydrogen bromide. The general properties of silicon tetrabromide closely resemble those of the more commonly used silicon tetrachloride.
Comparison of SiX4
The properties of the silicon tetrahalides, all of which are tetrahedral, are significantly affected by the nature of the halide. These trends apply also to the mixed halides. Melting points, boiling points, and bond lengths increase with the atomic mass of the halide. The opposite trend is observed for the Si–X bond energies.
Lewis acidity
Covalently saturated silicon complexes like SiBr4, along with tetrahalides of germanium (Ge) and tin (Sn), are Lewis acids. Although silicon tetrahalides obey the octet rule, they add Lewis basic ligands to give adducts with the formula SiBr4L and SiBr4L2 (where L is a Lewis base). The Lewis acidic properties of the tetrahalides tend to increase as follows: SiI4 < SiBr4 < SiCl4 < SiF4. This trend is attributed to the relative electronegativities of the halogens.
The strength of the Si-X bonds decrease in the order: Si-F > Si-Cl > Si-Br > Si-I.
Synthesis
Silicon tetrabromide is synthesized by the reaction of silicon with hydrogen bromide at 600 °C (Schumb, W. B., "Silicobromoform", Inorganic Syntheses, 1939, vol. 1, pp. 38–42):
Si + 4 HBr → SiBr4 + 2 H2
Side products include dibromosilane (SiH2Br2) and tribromosilane (SiHBr3).
Si + 2 HBr → SiH2Br2
Si + 3 HBr → SiHBr3 + H2
It can also be produced by treating a silicon–copper mixture with bromine:
Si + 2 Br2 → SiBr4
Reactivity
Like other halosilanes, SiBr4 can be converted to hydrides, alkoxides, amides, and alkyls, i.e., products with the following functional groups: Si-H, Si-OR, Si-NR2, Si-R, and Si-X bonds respectively.
Silicon tetrabromide can be readily reduced by hydrides or complex hydrides.
4 R2AlH + SiBr4 → SiH4 + 4 R2AlBr
Reactions with alcohols and amines proceed as follows:
SiBr4 + 4 ROH → Si(OR)4 + 4 HBr
SiBr4 + 8 HNR2 → Si(NR2)4 + 4 HNR2HBr
Grignard reactions with metal alkyl halides are particularly important reactions due to their production of organosilicon compounds which can be converted to silicones.
SiBr4 + n RMgX → RnSiBr4−n + n MgXBr
Redistribution reactions occur between two different silicon tetrahalides (as well as halogenated polysilanes) when heated to 100 °C, resulting in various mixed halosilanes. The melting points and boiling points of these mixed halosilanes generally increase with their molecular weights. (This can occur with X = H, F, Cl, Br, and I.)
2 SiBr4 + 2 SiCl4 → SiBr3Cl + 2 SiBr2Cl2 + SiBrCl3
Si2Cl6 + Si2Br6 → Si2ClnBr6−n
Silicon tetrabromide hydrolyzes readily when exposed to air causing it to fume:
SiBr4 + 2 H2O → SiO2 + 4 HBr
Silicon tetrabromide is stable in the presence of oxygen at room temperature, but bromosiloxanes form at 670–695 °C:
2 SiBr4 + 1⁄2 O2 → Br3SiOSiBr3 + Br2
Uses
Due to its close similarity to silicon tetrachloride, there are few applications unique to SiBr4. The pyrolysis of SiBr4 does have the advantage of depositing silicon at faster rates than that of SiCl4, however SiCl4 is usually preferred due to its availability in high purity. Pyrolysis of SiBr4 followed by treatment with ammonia yields silicon nitride (Si3N4) coatings, a hard compound used for ceramics, sealants, and the production of many cutting tools.
References
Bromides
Inorganic silicon compounds | Silicon tetrabromide | Chemistry | 990 |
58,647,164 | https://en.wikipedia.org/wiki/Katharina%20Boll-Dornberger | Katharina Boll-Dornberger (2 November 1909 – 27 July 1981), also known as Käte Dornberger-Schiff, was an Austrian-German physicist and crystallographer. She is known for her work on order-disorder structures.
Life
Katharina Boll-Dornberger was born in Vienna in 1909, the daughter of a university professor and Alice Friederike (Gertrude) Schiff. She studied physics and mathematics in Vienna and Göttingen. She wrote her dissertation, on the crystal structure of water-free zinc sulfate, under the supervision of V. M. Goldschmidt in Göttingen, and submitted it in Vienna in 1934. Afterwards, she conducted research in Philipp Gross's lab in Vienna. In 1937 she emigrated to England, where she worked with John D. Bernal, Nevill F. Mott, and Dorothy Hodgkin. She married Paul Dornberger in 1939. Her sons were born in 1943 and 1946. In 1946, she and her family returned to Germany. At first, she worked as a lecturer in physics and mathematics at the Hochschule für Baukunst in Weimar; she then moved to East Berlin. Starting in 1948, she was the head of a department at the Institut für Biophysik of the German Academy of Sciences at Berlin. In 1952, she married Ludwig Boll (1911–1984), a German mathematician. In 1956, she became a professor at the Humboldt University. In 1958, the Institut für Strukturforschung was created, and she was head of the institute until 1968. She died in 1981 in Berlin.
Research
Her research focused on the crystallographic investigation of order-disorder structures. She introduced groupoids to crystallography to describe disordered structures. Roughly 2/3 of her 60 publications focused on order-disorder. The other publications dealt with structure determination of organic and inorganic crystals, methods development in single-crystal diffraction, and the development of equipment for this purpose.
Awards
For her work in crystallography, she was awarded two national awards by the German Democratic Republic:
Patriotic Order of Merit in 1959
National Prize of the German Democratic Republic in 1960
A street in Berlin is named after her.
Notes
References
Further reading
https://fakultaeten.hu-berlin.de/de/sprachlit/frauenbeauftragte/weitere-informationen/der_lange_weg_z_chancengleichheit_2014.pdf
1981 deaths
1909 births
Recipients of the National Prize of East Germany
Recipients of the Patriotic Order of Merit
Scientists from Vienna
Academic staff of the Humboldt University of Berlin
German women physicists
20th-century German physicists
Crystallographers | Katharina Boll-Dornberger | Chemistry,Materials_science | 560 |
44,163,266 | https://en.wikipedia.org/wiki/Klee%20diagram | Klee diagrams, named for their resemblance to paintings by Paul Klee, are false-colour maps that represent a way of assembling and viewing large genomic datasets. Contemporary research has produced genomic databases for an enormous range of life forms, inviting insights into the genetic basis of biodiversity.
Indicator vectors are used to depict nucleotide sequences. This technique produces correlation matrices, or Klee diagrams. Researchers Lawrence Sirovich, Mark Y. Stoeckle and Yu Zhang (2010) used their improved algorithm on a set of some 17,000 DNA barcode sequences from 12 disparate animal taxa, finding that indicator vectors were a viable taxonomic tool and that discontinuities corresponded with taxonomic divisions.
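One heavily simplified way to build such a matrix from indicator vectors is sketched below; the toy sequences, the one-hot encoding and the colour map are illustrative assumptions, not the published algorithm of Sirovich, Stoeckle and Zhang:

```python
# Toy Klee diagram: encode short DNA sequences as indicator (one-hot)
# vectors, correlate all pairs, and render the matrix as a false-colour
# map. NumPy and Matplotlib assumed; illustrative only.
import numpy as np
import matplotlib.pyplot as plt

BASES = "ACGT"

def indicator_vector(seq: str) -> np.ndarray:
    # One binary slot per (position, base) pair: 4 * len(seq) entries.
    v = np.zeros(4 * len(seq))
    for i, ch in enumerate(seq):
        if ch in BASES:
            v[4 * i + BASES.index(ch)] = 1.0
    return v

seqs = ["ACGTACGT", "ACGTACGA", "TTGCAGCA", "TTGCAGCC"]
vecs = np.array([indicator_vector(s) for s in seqs])
corr = np.corrcoef(vecs)            # pairwise correlation matrix

plt.imshow(corr, cmap="viridis")    # the false-colour "Klee diagram"
plt.colorbar(label="correlation")
plt.show()
```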
External links
Klee diagram
References
DNA
Genomics
Nucleotides
Phylogenetics | Klee diagram | Biology | 160 |
1,793,092 | https://en.wikipedia.org/wiki/Fabric%20softener | A fabric softener (American English) or fabric conditioner (British English) is a conditioner applied to laundry after it has been washed in a washing machine. A similar, more dilute preparation meant to be applied to dry fabric is known as a wrinkle releaser.
Fabric softeners reduce the harsh feel of items dried in open air, add fragrance to laundry, and/or impart anti-static properties to textiles. In contrast to laundry detergents, fabric softeners are considered a type of after-treatment laundry aid.
Fabric softeners are available either in the form of a liquid, typically added during the washing machine's rinse cycle, or as dryer sheets that are added to a tumble dryer before drying begins. Liquid fabric softeners may be added manually during the rinse cycle, automatically if the machine has a dispenser designed for this purpose, through the use of a dispensing ball, or poured onto a piece of laundry to be dried (such as a washcloth) which is then placed into the dryer.
Washing machines exert significant mechanical stress on textiles, particularly natural fibers such as cotton and wool. The fibers at the fabric's surface become squashed and frayed, and this condition hardens into place when drying the laundry in open air, giving the textiles a harsh feel. Using a tumble dryer results in a softening effect, but it is less than what can be achieved through the use of a fabric softener.
As of 2009, nearly 80% of households in the United States had a mechanical clothes dryer. Consequently, fabric softeners are primarily used there to impart anti-static properties and fragrance to laundry.
Mechanism of action
Fabric softeners coat the surface of a fabric with chemical compounds that are electrically charged, causing threads to "stand up" from the surface and thereby imparting a softer and fluffier texture. Cationic softeners bind by electrostatic attraction to the negatively charged groups on the surface of the fibers and neutralize their charge. The long aliphatic chains then line up towards the outside of the fiber, imparting lubricity.
Fabric softeners impart anti-static properties to fabrics, and thus prevent the build-up of electrostatic charges on synthetic fibers, which in turn eliminates fabric cling during handling and wearing, crackling noises, and dust attraction. Also, fabric softeners make fabrics easier to iron and help reduce wrinkles in garments. In addition, they reduce drying times so that energy is saved when softened laundry is tumble-dried. Additionally, they can also impart a pleasant fragrance to the laundry.
Fabric softeners
Early cotton softeners were typically based on a water emulsion of soap and olive oil, corn oil, or tallow oil. Softening compounds differ in affinity to various fabrics. Some work better on cellulose-based fibers (i.e., cotton), others have higher affinity to hydrophobic materials like nylon, polyethylene terephthalate, polyacrylonitrile, etc. New silicone-based compounds, such as polydimethylsiloxane, work by lubricating the fibers. Manufacturers use derivatives with amine- or amide-containing functional groups as well. These groups improve the softener's binding to fabrics.
As softeners are often hydrophobic, they commonly occur in the form of an emulsion. In the early formulations, manufacturers used soaps as emulsifiers. The emulsions are usually opaque, milky fluids. However, there are also microemulsions, where the droplets of the hydrophobic phase are substantially smaller. Microemulsions provide the advantage of increased ability of smaller particles to penetrate into the fibers. Manufacturers often use a mixture of cationic and non-ionic surfactants as an emulsifier. Another approach is a polymeric network, an emulsion polymer.
In addition to fabric softening chemicals, fabric softeners may include acids or bases to maintain optimal pH for absorption, silicone-based anti-foaming agents, emulsion stabilizers, fragrances, and colors.
Cationic fabric softeners
Rinse-cycle softeners usually contain cationic surfactants of the quaternary ammonium type as the main active ingredient. Cationic surfactants adhere well to natural fibers (wool, cotton), but less so to synthetic fibers. Cationic softeners are incompatible with anionic surfactants in detergents because they combine with them to form a solid precipitate. This requires that the softener be added in the rinse cycle. Fabric softener reduces the absorbency of textiles, which adversely affects the function of towels and microfiber cloth.
Formerly, the active material of most softeners in Europe, the United States, and Japan, was distearyldimethylammonium chloride (DSDMAC) or related quat salts. Due to their poor biodegradability, such tallow-derived compounds were replaced by the more labile ester-quats in the 1980s and 1990s.
Conventional softeners, which contain 4–6% active material, have been partially replaced in many countries by softener concentrates having some 12–30% active material.
Anionic fabric softeners
Anionic softeners and antistatic agents can be, for example, salts of monoesters and diesters of phosphoric acid and fatty alcohols. These are often used together with conventional cationic softeners. Whereas cationic softeners must be added in the rinse cycle because they form a solid precipitate with the anionic surfactants in detergents, anionic softeners can be combined with anionic surfactants directly. Other anionic softeners can be based on smectite clays. Some compounds, such as ethoxylated phosphate esters, have softening, anti-static, and surfactant properties.
Risks
As with soaps and detergents, fabric softeners may cause irritant contact dermatitis. Manufacturers produce some fabric softeners without dyes and perfumes to reduce the risk of skin irritation. Fabric softener overuse may make clothes more flammable, due to the fat-based nature of most softeners. Some deaths have been attributed to this phenomenon, and fabric softener makers recommend not using them on clothes labeled as flame-resistant.
References
Laundry substances
Cleaning products | Fabric softener | Chemistry | 1,366 |
2,338,241 | https://en.wikipedia.org/wiki/Lanczos%20resampling | Lanczos filtering and Lanczos resampling are two applications of a certain mathematical formula. It can be used as a low-pass filter or used to smoothly interpolate the value of a digital signal between its samples. In the latter case, it maps each sample of the given signal to a translated and scaled copy of the Lanczos kernel, which is a sinc function windowed by the central lobe of a second, longer, sinc function. The sum of these translated and scaled kernels is then evaluated at the desired points.
Lanczos resampling is typically used to increase the sampling rate of a digital signal, or to shift it by a fraction of the sampling interval. It is often used also for multivariate interpolation, for example to resize or rotate a digital image. It has been considered the "best compromise" among several simple filters for this purpose.
The filter was invented by Claude Duchon, who named it after Cornelius Lanczos due to Duchon's use of Sigma approximation in constructing the filter, a technique created by Lanczos.
Definition
Lanczos kernel
The effect of each input sample on the interpolated values is defined by the filter's reconstruction kernel $L(x)$, called the Lanczos kernel. It is the normalized sinc function $\operatorname{sinc}(x)$, windowed (multiplied) by the Lanczos window, or sinc window, which is the central lobe of a horizontally stretched sinc function $\operatorname{sinc}(x/a)$ for $-a \le x \le a$:

$L(x) = \begin{cases} \operatorname{sinc}(x)\, \operatorname{sinc}(x/a) & \text{if } |x| < a, \\ 0 & \text{otherwise}. \end{cases}$

Equivalently,

$L(x) = \begin{cases} 1 & \text{if } x = 0, \\ \dfrac{a \sin(\pi x)\, \sin(\pi x / a)}{\pi^2 x^2} & \text{if } 0 < |x| < a, \\ 0 & \text{otherwise}. \end{cases}$
The parameter $a$ is a positive integer, typically 2 or 3, which determines the size of the kernel. The Lanczos kernel has $2a - 1$ lobes: a positive one at the center, and $a - 1$ alternating negative and positive lobes on each side.
Interpolation formula
Given a one-dimensional signal with samples $s_i$, for integer values of $i$, the value $S(x)$ interpolated at an arbitrary real argument $x$ is obtained by the discrete convolution of those samples with the Lanczos kernel:

$S(x) = \sum_{i = \lfloor x \rfloor - a + 1}^{\lfloor x \rfloor + a} s_i\, L(x - i),$

where $a$ is the filter size parameter and $\lfloor x \rfloor$ is the floor function. The bounds of this sum are such that the kernel is zero outside of them.
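A direct transcription of the kernel and the interpolation formula into NumPy might look like the following sketch; the edge handling (simply dropping taps that fall outside the signal) and the test signal are illustrative choices:

```python
# Lanczos kernel and 1-D interpolation, transcribed from the formulas
# above. Pure NumPy; a = 2 or 3 are the usual choices.
import numpy as np

def lanczos_kernel(x, a=3):
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)     # np.sinc is the normalized sinc
    return np.where(np.abs(x) < a, out, 0.0)

def lanczos_interpolate(samples, x, a=3):
    i0 = int(np.floor(x))
    total = 0.0
    # Only the 2a samples nearest x contribute; the kernel is zero elsewhere.
    for i in range(i0 - a + 1, i0 + a + 1):
        if 0 <= i < len(samples):         # crude edge handling: drop outside taps
            total += samples[i] * lanczos_kernel(x - i, a)
    return total

s = np.sin(np.arange(16) * 0.4)           # coarse samples of a smooth signal
print(lanczos_interpolate(s, 7.5))        # interpolated value at x = 7.5
print(np.sin(7.5 * 0.4))                  # ground truth for comparison
```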
Properties
As long as the parameter $a$ is a positive integer, the Lanczos kernel is continuous everywhere, and its derivative is defined and continuous everywhere (even at $x = \pm a$, where both sinc functions go to zero). Therefore, the reconstructed signal too will be continuous, with continuous derivative.
The Lanczos kernel is zero at every integer argument $x$, except at $x = 0$, where it has value 1. Therefore, the reconstructed signal exactly interpolates the given samples: we will have $S(x) = s_x$ for every integer argument $x$.
Lanczos resampling is one form of a general method developed by Lanczos to counteract the Gibbs phenomenon by multiplying the coefficients of a truncated Fourier series by $\operatorname{sinc}(k/m)$, where $k$ is the coefficient index and $m$ is the number of coefficients being kept. The same reasoning applies in the case of truncated functions if we wish to remove Gibbs oscillations from their spectrum.
Multidimensional interpolation
Lanczos filter's kernel in two dimensions is

$L(x, y) = L(x)\, L(y).$
Evaluation
Advantages
The theoretically optimal reconstruction filter for band-limited signals is the sinc filter, which has infinite support. The Lanczos filter is one of many practical (finitely supported) approximations of the sinc filter. Each interpolated value is the weighted sum of consecutive input samples. Thus, by varying the parameter one may trade computation speed for improved frequency response. The parameter also allows one to choose between a smoother interpolation or a preservation of sharp transients in the data. For image processing, the trade-off is between the reduction of aliasing artefacts and the preservation of sharp edges. Also as with any such processing, there are no results for the borders of the image. Increasing the length of the kernel increases the cropping of the edges of the image.
The Lanczos filter has been compared with other interpolation methods for discrete signals, particularly other windowed versions of the sinc filter. Turkowski and Gabriel claimed that the Lanczos filter (with $a = 2$) is the "best compromise in terms of reduction of aliasing, sharpness, and minimal ringing", compared with truncated sinc and the Bartlett, cosine-, and Hann-windowed sinc, for decimation and interpolation of 2-dimensional image data. According to Jim Blinn, the Lanczos kernel (with $a = 3$) "keeps low frequencies and rejects high frequencies better than any (achievable) filter we've seen so far."
Lanczos interpolation is a popular filter for "upscaling" videos in various media utilities, such as AviSynth and FFmpeg.
Limitations
Since the kernel assumes negative values for $a > 1$, the interpolated signal can be negative even if all samples are positive. More generally, the range of values of the interpolated signal may be wider than the range spanned by the discrete sample values. In particular, there may be ringing artifacts just before and after abrupt changes in the sample values, which may lead to clipping artifacts. However, these effects are reduced compared to the (non-windowed) sinc filter. For a = 2 (a three-lobed kernel) the ringing is < 1%.
When using the Lanczos filter for image resampling, the ringing effect will create light and dark halos along any strong edges. While these bands may be visually annoying, they help increase the perceived sharpness, and therefore provide a form of edge enhancement. This may improve the subjective quality of the image, given the special role of edge sharpness in vision.
In some applications, the low-end clipping artifacts can be ameliorated by transforming the data to a logarithmic domain prior to filtering. In this case the interpolated values will be a weighted geometric mean, rather than an arithmetic mean, of the input samples.
The Lanczos kernel does not have the partition of unity property. That is, the sum of all integer-translated copies of the kernel is not always 1. Therefore, the Lanczos interpolation of a discrete signal with constant samples does not yield a constant function. This defect is most evident when $a = 1$. Also, for $a = 1$ the interpolated signal has zero derivative at every integer argument. This is rather academic, since using a single-lobe kernel (a = 1) loses all the benefits of the Lanczos approach and provides a poor filter. There are many better single-lobe, bell-shaped windowing functions.
The partition of unity can be introduced by a normalization,

$\tilde{L}(x - i) = \frac{L(x - i)}{\sum_{k = \lfloor x \rfloor - a + 1}^{\lfloor x \rfloor + a} L(x - k)},$

for non-integer $x$.
See also
Bicubic interpolation
Bilinear interpolation
Spline interpolation
Nearest-neighbor interpolation
Sinc filter
References
External links
Anti-Grain Geometry examples: image_filters.cpp shows comparisons of repeatedly resampling an image with various kernels.
imageresampler: A public domain image resampling class in C++ with support for several windowed Lanczos filter kernels.
Signal processing
Multivariate interpolation | Lanczos resampling | Technology,Engineering | 1,433 |
29,172,014 | https://en.wikipedia.org/wiki/Gamma%20Comae%20Berenices | Gamma Comae Berenices, Latinized from γ Comae Berenices, is a single, orange-hued star in the northern constellation of Coma Berenices. It is faintly visible to the naked eye, having an apparent visual magnitude of 4.36. Based upon an annual parallax shift of 19.50 mas as seen from Earth, its distance can be estimated as around 167 light years from the Sun. The star is moving away from the Sun with a radial velocity of +3 km/s.
This is an evolved K-type giant star; the suffix notation in its spectral classification indicates that the star displays an overabundance of iron in its spectrum. It is most likely (91% chance) on the horizontal branch, with an age of 2.7 billion years. If so, it has an estimated 1.65 times the mass of the Sun and has expanded to nearly 12 times the Sun's radius. The star is radiating 58 times the Sun's luminosity from its enlarged photosphere at an effective temperature of around 4,652 K. Gamma Comae Berenices appears as part of the Coma Star Cluster, although it is probably not actually a member of this cluster.
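For reference, the quoted distance follows from the parallax by simple arithmetic, as the short sketch below shows (the parsec-to-light-year factor is the standard value):

```python
# Distance from parallax: p milliarcseconds -> 1000/p parsecs,
# with 1 parsec ~ 3.2616 light-years.
p_mas = 19.50
d_pc = 1000.0 / p_mas      # ~51.3 pc
d_ly = d_pc * 3.2616       # ~167 light-years, matching the value above
print(f"{d_pc:.1f} pc = {d_ly:.0f} ly")
```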
References
K-type giants
Coma Berenices
Comae Berenices, Gamma
Durchmusterung objects
Comae Berenices, 15
108381
060742
4737 | Gamma Comae Berenices | Astronomy | 292 |
898,605 | https://en.wikipedia.org/wiki/Rain%20sensor | A rain sensor or rain switch is a switching device activated by rainfall. There are two main applications for rain sensors. The first is a water conservation device connected to an automatic irrigation system that causes the system to shut down in the event of rainfall. The second is a device used to protect the interior of an automobile from rain and to support the automatic mode of windscreen wipers.
Principle of operation
The rain sensor works on the principle of total internal reflection. An infrared light shone at a 45-degree angle on a clear area of the windshield is reflected and is sensed by the sensor inside the car. When it rains, the wet glass causes the light to scatter and a lesser amount of light gets reflected back to the sensor.
An additional application in professional satellite communications antennas is to trigger a rain blower on the aperture of the antenna feed, to remove water droplets from the mylar cover that keeps pressurized and dry air inside the wave-guides.
Irrigation sensors
Rain sensors for irrigation systems are available in both wireless and hard-wired versions, most employing hygroscopic disks that swell in the presence of rain and shrink back down again as they dry out; an electrical switch is in turn depressed or released by the hygroscopic disk stack, and the rate of drying is typically adjusted by controlling the ventilation reaching the stack. However, some electrical-type sensors are also marketed that use tipping-bucket or conductance-type probes to measure rainfall. Wireless and wired versions both use similar mechanisms to temporarily suspend watering by the irrigation controller: specifically, they are connected to the irrigation controller's sensor terminals, or are installed in series with the solenoid valve common circuit such that they prevent the opening of any valves when rain has been sensed.
Some irrigation rain sensors also contain a freeze sensor to keep the system from operating in freezing temperatures, particularly where irrigation systems are still used over the winter.
Some type of sensor is required on new lawn sprinkler systems in Florida, New Jersey, Minnesota, Connecticut and most parts of Texas.
Automotive sensors
In 1958, the Cadillac Motor Car Division of General Motors experimented with a water-sensitive switch that triggered various electric motors to close the convertible top and raise the open windows of a specially-built Eldorado Biarritz model, in case of rain. The first such device appears to have been used for that same purpose in a concept vehicle designated Le Sabre and built around 1950–51.
General Motors' automatic rain sensor for convertible tops was available as a dealer-installed option during the 1950s for vehicles such as the Chevrolet Bel Air.
For the 1996 model year, Cadillac once again equipped cars with an automatic rain sensor; this time to automatically trigger the windshield wipers and adjust their speed to conditions as necessary.
In December 2017 Tesla started rolling out an OTA update (2017.52.3) enabling their AP2.x cars to utilize the onboard cameras to passively detect rain without the use of a dedicated sensor.
Most vehicles with this feature have an auto position on the control column.
Physics of rain sensor
The most common modern rain sensors are based on the principle of total internal reflection. At all times, an infrared light is beamed at a 45-degree angle into the windshield from the interior. If the glass is dry, the critical angle for total internal reflection is around 42°. This value is obtained from the total internal reflection formula

$\theta_c = \arcsin\left(\frac{n_1}{n_2}\right),$

where $n_1 \approx 1.0$ is the approximate value of air's refraction index for infrared and $n_2 \approx 1.5$ is the approximate value of the glass refraction index, also for infrared. In that case, since the incident angle of light is 45°, which exceeds the critical angle, all the light is reflected and the detector receives maximum intensity.
If the glass is wet, the critical angle changes to around 60° because the refraction index of water ($n \approx 1.33$) is higher than that of air. In that case, because the incident angle of 45° is now below the critical angle, total internal reflection is not obtained. Part of the light beam is transmitted through the glass and the intensity measured for reflection is lower: the system detects water and the wipers turn on.
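A small numerical check of these angles, assuming typical refractive indices of about 1.0 for air, 1.33 for water and 1.5 for glass (the exact values vary with the glass type and wavelength, so the wet-glass angle comes out near, not exactly at, the quoted 60°):

```python
# Critical angles for total internal reflection at the windshield,
# using theta_c = arcsin(n1 / n2) with typical refractive indices.
import math

n_glass = 1.5                         # assumed index for windshield glass
for name, n_outside in [("air", 1.0), ("water", 1.33)]:
    theta_c = math.degrees(math.asin(n_outside / n_glass))
    print(f"glass/{name}: critical angle ~ {theta_c:.0f} deg")
# glass/air: ~42 deg  -> a 45-deg beam is totally reflected (dry glass)
# glass/water: ~62 deg -> a 45-deg beam partly escapes (wet glass)
```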
See also
List of sensors
Rain gauge
References
Irrigation
Sensors
Meteorological instrumentation and equipment
Windscreen wiper | Rain sensor | Technology,Engineering | 836 |
6,104,628 | https://en.wikipedia.org/wiki/Gothenburg%20International%20Bioscience%20Business%20School | Gothenburg International Bioscience Business School (GIBBS) is in Gothenburg, Sweden.
The education is a collaboration between the Sahlgrenska Academy at Göteborg University and Chalmers University of Technology and is part of the Center for Intellectual Property Studies (CIP). Most of the collaboration takes place during the first year, with a shared curriculum in which students study alongside peers from various backgrounds such as law, engineering, life sciences, and management. In the second year, students work in groups on an innovation project with the aim of commercializing an innovation.
University of Gothenburg
Chalmers University of Technology
Life sciences industry | Gothenburg International Bioscience Business School | Biology | 114 |
38,041,270 | https://en.wikipedia.org/wiki/Mercury%20Systems | Mercury Systems, Inc. is a technology company serving the aerospace and defense industry. It designs, develops and manufactures open architecture computer hardware and software products, including secure embedded processing modules and subsystems, avionics mission computers and displays, rugged secure computer servers, and trusted microelectronics components, modules and subsystems.
Mercury sells its products to defense prime contractors, the US government and original equipment manufacturer (OEM) commercial aerospace companies.
Mercury is based in Andover, Massachusetts, with more than 2300 employees and annual revenues of approximately US$988 million for its fiscal year ended June 30, 2022.
History
Founded on July 14, 1981 as Mercury Computer Systems by Jay Bertelli.
Went public on the Nasdaq stock exchange on January 30, 1998, listed under the symbol MRCY.
In July 2005, Mercury Computer Systems acquired Echotek Corporation for approximately US$49 million.
In January 2011, Mercury Computer Systems acquired LNX Corporation.
In December 2011, Mercury Computer Systems acquired KOR Electronics for US$70 million.
In August 2012, Mercury Computer Systems acquired Micronetics for US$74.9 million.
In November 2012, the company changed its name from Mercury Computer Systems to Mercury Systems.
In December 2015, Mercury Systems acquired Lewis Innovative Technologies, Inc. (LIT).
In November 2016, Mercury Systems acquired Creative Electronic Systems for US$38 million.
In April 2017, Mercury Systems acquired Delta Microwave, LLC (“Delta”) for US$40.5 million, enabling the Company to expand into the satellite communications (SatCom), datalinks and space launch markets.
In July 2017, Mercury Systems acquired Richland Technologies, LLC (RTL), increasing the Company's market penetration in commercial aerospace, defense platform management, C4I, and mission computing.
In January 2019, Mercury Systems acquired GECO Avionics, LLC for US$36.5 million.
In December 2020, Mercury Systems acquired Physical Optics Corporation (POC) for $310 million.
In May 2021, Mercury Systems acquired Pentek for $65.0 million.
In November 2021, Mercury Systems acquired Avalex Technologies Corporation and Atlanta Micro, Inc.
In August 2023, Mercury Systems appointed William L. Ballhaus as president and CEO.
Facilities
Manufacturing centers
Mercury manufactures in New England, New York Metro-area, Southern California and a trusted DMEA facility in the Southwest, which has Missile Defense Agency approval and AS9100 certification. Four Mercury sites have been awarded the James S. Cogswell Award for Outstanding Industrial Security Achievement Award by the Defense Counterintelligence and Security Agency (DCSA).
References
External links
Manufacturing companies based in Massachusetts
Companies based in Essex County, Massachusetts
Andover, Massachusetts
Signal processing
Signals intelligence
Radio frequency propagation
Middleware
Cell BE architecture
Aerospace companies of the United States
Defense companies of the United States
American companies established in 1981
Technology companies established in 1981
1981 establishments in Massachusetts
Companies listed on the Nasdaq | Mercury Systems | Physics,Technology,Engineering | 600 |
52,149,903 | https://en.wikipedia.org/wiki/Citrix%20Cloud | Citrix Cloud is a cloud management platform that allows organizations to deploy cloud-hosted desktops and apps to end users. It was developed by Citrix Systems and released in 2015.
Overview
Citrix Cloud is a cloud-based platform for managing and deploying Citrix products and desktops and applications to end users using any type of cloud, whether public, private or hybrid, or on-premises hardware. The product supports cloud-based versions of every major Citrix product. These can be accessed together as an integrated "workspace" or independently.
Features
Citrix Cloud enables cloud services for Citrix products XenApp, XenDesktop, XenMobile, ShareFile, and NetScaler. In addition, Citrix has developed several cloud-native services, including its Secure Browser Service.
Citrix Cloud is compatible with any device and any cloud or data center, and can be synced via Citrix Cloud Connector. As of May 2016, Citrix states that Microsoft Azure is its preferred cloud partner. Citrix platforms reside in Citrix Cloud; however, other applications and resources may make use of other clouds and infrastructures. A company's IT department retains the ability to choose a custom combination of data centers and cloud providers. Citrix continuously updates Citrix Cloud so that users are automatically running the most current version.
As of 2015, Citrix Cloud offers four different service packages.
History
Citrix Workspace Cloud was announced in May 2015 at the company's industry conference, Citrix Synergy. The offering launched in August 2015 with four core services: App and Desktop Service, Lifecycle Management, Secure Documents, and Mobility. The company positioned Workspace Cloud as an alternative to XenDesktop and XenApp, the company's traditional desktop and application virtualization platforms.
The company renamed Citrix Workspace Cloud to Citrix Cloud in May 2016. In addition, cloud services were renamed to match cloud-based versions of other Citrix products: XenDesktop and XenApp Service, ShareFile, and XenMobile Service replaced Desktop and App Service, Secure Documents Service, and Mobility Service, respectively. The company also announced in 2016 that Citrix Cloud users who are Windows 10 Enterprise customers would be able to access Windows 10 images on Azure via XenDesktop without having to pay an additional license fee.
Reception
Prior to its release, Citrix Workspace Cloud was praised by desktop virtualization blogger Brian Madden for its concept and CMSWire noted that it stood out among competitors as the only product of its kind.
Following its release, TechTarget stated that the platform was "intriguing" and that it "provide[s] something IT professionals have wanted for a very long time: centralized management of on-premises and cloud desktop and application workloads", but was "also surprisingly expensive". A review in Computerworld suggested the hybrid nature of the product was compatible with the rising use of hybrid cloud implementations by businesses, but that Citrix would need to ensure "adequate support for critical applications and [make] sure that company policies, such as access rules, are followed properly".
See also
Amazon Web Services
Microsoft Azure
Oracle Cloud
External links
References
Citrix Systems
Cloud infrastructure
Cloud platforms
Cloud computing
Cloud computing providers | Citrix Cloud | Technology | 657 |
30,553,668 | https://en.wikipedia.org/wiki/HD%2084117 | HD 84117 is a F-type main sequence star in the constellation of Hydra. It has an apparent visual magnitude of approximately 4.94.
References
Hydra (constellation)
F-type main-sequence stars
HD, 084117
084117
047592
3862
0364
Durchmusterung objects | HD 84117 | Astronomy | 67 |
22,245,489 | https://en.wikipedia.org/wiki/Kodama%20state | The Kodama state, in physics and loop quantum gravity, is a zero-energy solution to the Schrödinger equation (the linear partial differential equation that governs the wave function of a quantum-mechanical system).
In 1988, Hideo Kodama wrote down the equations of the Kodama state, but because it described a spacetime with a positive cosmological constant (a de Sitter universe), which at the time was believed to be inconsistent with observation, it was largely ignored.
In 2002, Lee Smolin suggested that the Kodama state is a ground state with a good semiclassical limit that reproduces the dynamics of general relativity with a positive (de Sitter) cosmological constant, four dimensions, and gravitons. It is an exact solution to the ordinary constraints of background-independent quantum gravity, providing evidence that loop quantum gravity is indeed a quantum theory of gravity with the correct semiclassical description. In 2003, Edward Witten published a paper in response to Smolin's, arguing that the Kodama state is unphysical, by analogy with a state in Chern–Simons theory whose wave function yields negative energies. In 2006, Andrew Randono published two papers addressing these objections by generalizing the Kodama state. Randono concluded that the generalized Kodama state, with a real value of the Immirzi parameter fixed by matching black hole entropy, describes parity violation in quantum gravity, is CPT invariant, normalizable, and chiral, consistent with known observations of both gravity and quantum field theory. Randono claims that Witten's conclusions rest on the Immirzi parameter taking an imaginary value, which simplifies the equation. The physical inner product may resemble the MacDowell–Mansouri action formulation of gravity.
References
Loop quantum gravity
Quantum states | Kodama state | Physics | 371 |
1,804,457 | https://en.wikipedia.org/wiki/Bornological%20space | In mathematics, particularly in functional analysis, a bornological space is a type of space which, in some sense, possesses the minimum amount of structure needed to address questions of boundedness of sets and linear maps, in the same way that a topological space possesses the minimum amount of structure needed to address questions of continuity.
Bornological spaces are distinguished by the property that a linear map from a bornological space into any locally convex space is continuous if and only if it is a bounded linear operator.
Bornological spaces were first studied by George Mackey. The name was coined by Bourbaki after borné, the French word for "bounded".
Bornologies and bounded maps
A bornology on a set $X$ is a collection $\mathcal{B}$ of subsets of $X$ that satisfies all of the following conditions:
$\mathcal{B}$ covers $X$; that is, $X = \bigcup \mathcal{B}$;
$\mathcal{B}$ is stable under inclusions; that is, if $B \in \mathcal{B}$ and $A \subseteq B$, then $A \in \mathcal{B}$;
$\mathcal{B}$ is stable under finite unions; that is, if $B_1, \ldots, B_n \in \mathcal{B}$, then $B_1 \cup \cdots \cup B_n \in \mathcal{B}$.
Elements of the collection $\mathcal{B}$ are called bounded sets, or simply bounded, if $\mathcal{B}$ is understood.
The pair $(X, \mathcal{B})$ is called a bounded structure or a bornological set.
A base or fundamental system of a bornology $\mathcal{B}$ is a subset $\mathcal{B}_0$ of $\mathcal{B}$ such that each element of $\mathcal{B}$ is a subset of some element of $\mathcal{B}_0$. Given a collection $\mathcal{S}$ of subsets of $X$, the smallest bornology containing $\mathcal{S}$ is called the bornology generated by $\mathcal{S}$.
If $(X, \mathcal{B})$ and $(Y, \mathcal{C})$ are bornological sets, then their product bornology on $X \times Y$ is the bornology having as a base the collection of all sets of the form $B \times C$, where $B \in \mathcal{B}$ and $C \in \mathcal{C}$.
A subset of $X \times Y$ is bounded in the product bornology if and only if its images under the canonical projections onto $X$ and $Y$ are both bounded.
Bounded maps
If $(X, \mathcal{B})$ and $(Y, \mathcal{C})$ are bornological sets, then a function $f : X \to Y$ is said to be a locally bounded map, or simply a bounded map (with respect to these bornologies), if it maps $\mathcal{B}$-bounded subsets of $X$ to $\mathcal{C}$-bounded subsets of $Y$; that is, if $f(B) \in \mathcal{C}$ for every $B \in \mathcal{B}$.
If in addition $f$ is a bijection and $f^{-1}$ is also bounded, then $f$ is called a bornological isomorphism.
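For a concrete feel (an illustrative sketch, not from the article): with the usual bornology on subsets of the real line, where the bounded sets are exactly the norm-bounded ones, a map is bounded precisely when it sends norm-bounded sets to norm-bounded sets. The helper name below is hypothetical.

```python
# Bounded-map sketch for the usual bornology on the real line, where the
# bounded sets are exactly the norm-bounded ones.  f(x) = x**2 maps bounded
# sets to bounded sets (no linearity needed), while g(x) = 1/x sends the
# bounded set (0, 1] to the unbounded set [1, oo), so g is not a bounded map.
def image_sup(f, points):
    """Supremum of |f| over a finite sample of a bounded set."""
    return max(abs(f(x)) for x in points)

sample = [i / 1000 for i in range(1, 1001)]   # finite sample of (0, 1]
print(image_sup(lambda x: x * x, sample))     # 1.0    -> image stays bounded
print(image_sup(lambda x: 1 / x, sample))     # 1000.0 -> image blows up
```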
Vector bornologies
Let $X$ be a vector space over a field $\mathbb{K}$, where $\mathbb{K}$ has a bornology $\mathcal{B}_{\mathbb{K}}$.
A bornology $\mathcal{B}$ on $X$ is called a vector bornology on $X$ if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. the sum of two bounded sets is bounded, etc.).
If $X$ is a topological vector space (TVS) and $\mathcal{B}$ is a bornology on $X$, then the following are equivalent:
$\mathcal{B}$ is a vector bornology;
Finite sums and balanced hulls of $\mathcal{B}$-bounded sets are $\mathcal{B}$-bounded;
The scalar multiplication map $\mathbb{K} \times X \to X$ defined by $(s, x) \mapsto s x$ and the addition map $X \times X \to X$ defined by $(x, y) \mapsto x + y$ are both bounded when their domains carry their product bornologies (i.e. they map bounded subsets to bounded subsets).
A vector bornology $\mathcal{B}$ is called a convex vector bornology if it is stable under the formation of convex hulls (i.e. the convex hull of a bounded set is bounded).
And a vector bornology $\mathcal{B}$ is called separated if the only bounded vector subspace of $X$ is the 0-dimensional trivial space $\{0\}$.
Usually, $\mathbb{K}$ is either the field of real or complex numbers, in which case a vector bornology $\mathcal{B}$ on $X$ is called a convex vector bornology if $\mathcal{B}$ has a base consisting of convex sets.
Bornivorous subsets
A subset $A$ of $X$ is called bornivorous, and a bornivore, if it absorbs every bounded set.
In a vector bornology, $A$ is bornivorous if it absorbs every bounded balanced set, and in a convex vector bornology $A$ is bornivorous if it absorbs every bounded disk.
Two TVS topologies on the same vector space have the same bounded subsets if and only if they have the same bornivores.
Every bornivorous subset of a locally convex metrizable topological vector space is a neighborhood of the origin.
Mackey convergence
A sequence $x_\bullet = (x_i)_{i=1}^{\infty}$ in a TVS $X$ is said to be Mackey convergent to $0$ if there exists a sequence of positive real numbers $r_\bullet = (r_i)_{i=1}^{\infty}$ diverging to $\infty$ such that $(r_i x_i)_{i=1}^{\infty}$ converges to $0$ in $X$.
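For instance (an illustrative numeric check, not from the article), in $X = \mathbb{R}$ the null sequence $x_i = 1/i^2$ is Mackey convergent to $0$: the scalars $r_i = i$ diverge to $\infty$ while $r_i x_i = 1/i \to 0$.

```python
# Mackey convergence check in X = R: x_i = 1/i**2 with scalars r_i = i.
x = [1 / i**2 for i in range(1, 10_001)]     # a null sequence in R
r = [float(i) for i in range(1, 10_001)]     # positive reals diverging to infinity
scaled = [ri * xi for ri, xi in zip(r, x)]   # r_i * x_i = 1/i
print(max(scaled[-100:]))                    # ~1e-4: the scaled tail still tends to 0
```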
Bornology of a topological vector space
Every topological vector space $X$, at least over a non-discrete valued field, gives a bornology on $X$ by defining a subset $B \subseteq X$ to be bounded (or von Neumann bounded) if and only if for every open set $U \subseteq X$ containing zero there exists an $r > 0$ with $B \subseteq r U$.
If $X$ is a locally convex topological vector space, then $B \subseteq X$ is bounded if and only if all continuous seminorms on $X$ are bounded on $B$.
The set of all bounded subsets of a topological vector space $X$ is called the bornology or the von Neumann bornology of $X$.
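A minimal numeric sketch, assuming $X = \mathbb{R}^2$ with its Euclidean topology (where the balls $B(0, \varepsilon)$ form a neighborhood basis at the origin): a set is von Neumann bounded exactly when each such ball, suitably scaled, absorbs it, which for a finite sample reduces to comparing norms.

```python
import numpy as np

# Von Neumann boundedness sketch in X = R^2: for the basic neighborhood
# U = B(0, eps), the smallest r with B inside r*U is sup{|x| : x in B} / eps,
# which is finite precisely when B is norm-bounded.
def absorbing_scalar(B, eps):
    """Smallest r > 0 with the finite sample B contained in r * B(0, eps)."""
    return float(np.linalg.norm(B, axis=1).max()) / eps

B = np.array([[1.0, 2.0], [-3.0, 0.5], [0.0, -4.0]])  # a bounded sample set
for eps in (1.0, 0.1, 0.01):
    print(eps, absorbing_scalar(B, eps))  # an absorbing r exists for every eps
```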
If $X$ is a locally convex topological vector space, then an absorbing disk $D$ in $X$ is bornivorous (resp. infrabornivorous) if and only if its Minkowski functional is locally bounded (resp. infrabounded).
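For reference, the Minkowski functional of an absorbing disk $D \subseteq X$ used in this criterion is the standard gauge:

```latex
p_D(x) \;=\; \inf\,\{\, t > 0 \;:\; x \in t\,D \,\}, \qquad x \in X.
```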
Induced topology
If $\mathcal{B}$ is a convex vector bornology on a vector space $X$, then the collection $\mathcal{N}_{\mathcal{B}}(0)$ of all convex balanced subsets of $X$ that are bornivorous forms a neighborhood basis at the origin for a locally convex topology on $X$ called the topology induced by $\mathcal{B}$.
If $(X, \tau)$ is a TVS, then the bornological space associated with $X$ is the vector space $X$ endowed with the locally convex topology induced by the von Neumann bornology of $(X, \tau)$.
Quasi-bornological spaces
Quasi-bornological spaces were introduced by S. Iyahen in 1968.
A topological vector space (TVS) $(X, \tau)$ with continuous dual $X'$ is called a quasi-bornological space if any of the following equivalent conditions holds:
Every bounded linear operator from $X$ into another TVS is continuous.
Every bounded linear operator from $X$ into a complete metrizable TVS is continuous.
Every knot in a bornivorous string is a neighborhood of the origin.
Every pseudometrizable TVS is quasi-bornological.
A TVS $(X, \tau)$ in which every bornivorous set is a neighborhood of the origin is a quasi-bornological space.
If $(X, \tau)$ is a quasi-bornological TVS, then the finest locally convex topology on $X$ that is coarser than $\tau$ makes $X$ into a locally convex bornological space.
Bornological space
In functional analysis, a locally convex topological vector space is a bornological space if its topology can be recovered from its bornology in a natural way.
Every locally convex quasi-bornological space is bornological, but there exist quasi-bornological spaces that are not bornological.
A topological vector space (TVS) $(X, \tau)$ with continuous dual $X'$ is called a bornological space if it is locally convex and any of the following equivalent conditions holds:
Every convex, balanced, and bornivorous set in $X$ is a neighborhood of zero.
Every bounded linear operator from $X$ into a locally convex TVS is continuous.
Recall that a linear map is bounded if and only if it maps any sequence converging to $0$ in the domain to a bounded subset of the codomain. In particular, any linear map that is sequentially continuous at the origin is bounded.
Every bounded linear operator from $X$ into a seminormed space is continuous.
Every bounded linear operator from $X$ into a Banach space is continuous.
If $X$ is a Hausdorff locally convex space, then we may add to this list:
The locally convex topology induced by the von Neumann bornology on $X$ is the same as $\tau$, $X$'s given topology.
Every bounded seminorm on $X$ is continuous.
Any other Hausdorff locally convex topological vector space topology on $X$ that has the same (von Neumann) bornology as $(X, \tau)$ is necessarily coarser than $\tau$.
$X$ is the inductive limit of normed spaces.
$X$ is the inductive limit of the normed spaces $X_D$ as $D$ varies over the closed and bounded disks of $X$ (or as $D$ varies over the bounded disks of $X$).
$X$ carries the Mackey topology $\tau(X, X')$ and all bounded linear functionals on $X$ are continuous.
$X$ has both of the following properties:
$X$ is convex-sequential or C-sequential, which means that every convex sequentially open subset of $X$ is open;
$X$ is sequentially bornological or S-bornological, which means that every convex and bornivorous subset of $X$ is sequentially open;
where a subset $A$ of $X$ is called sequentially open if every sequence converging to $0$ eventually belongs to $A$.
Every sequentially continuous linear operator from a locally convex bornological space into a locally convex TVS is continuous, where recall that a linear operator is sequentially continuous if and only if it is sequentially continuous at the origin.
Thus for linear maps from a bornological space into a locally convex space, continuity is equivalent to sequential continuity at the origin. More generally, we even have the following:
Any linear map $F : X \to Y$ from a locally convex bornological space $X$ into a locally convex space $Y$ that maps null sequences in $X$ to bounded subsets of $Y$ is necessarily continuous.
Sufficient conditions
As a consequence of the Mackey–Ulam theorem, "for all practical purposes, the product of bornological spaces is bornological."
The following topological vector spaces are all bornological:
Any locally convex pseudometrizable TVS is bornological.
Thus every normed space and Fréchet space is bornological.
Any strict inductive limit of bornological spaces, in particular any strict LF-space, is bornological.
This shows that there are bornological spaces that are not metrizable.
A countable product of locally convex bornological spaces is bornological.
Quotients of Hausdorff locally convex bornological spaces are bornological.
The direct sum and inductive limit of Hausdorff locally convex bornological spaces is bornological.
Fréchet Montel spaces have bornological strong duals.
The strong dual of every reflexive Fréchet space is bornological.
If the strong dual of a metrizable locally convex space is separable, then it is bornological.
A vector subspace of a Hausdorff locally convex bornological space $X$ that has finite codimension in $X$ is bornological.
The finest locally convex topology on a vector space is bornological.
Counterexamples
There exists a bornological LB-space whose strong bidual is not bornological.
A closed vector subspace of a locally convex bornological space is not necessarily bornological.
There exists a closed vector subspace of a locally convex bornological space that is complete (and so sequentially complete) but neither barrelled nor bornological.
Bornological spaces need not be barrelled and barrelled spaces need not be bornological. Because every locally convex ultrabornological space is barrelled, it follows that a bornological space is not necessarily ultrabornological.
Properties
The strong dual space of a locally convex bornological space is complete.
Every locally convex bornological space is infrabarrelled.
Every Hausdorff sequentially complete bornological TVS is ultrabornological.
Thus every complete Hausdorff bornological space is ultrabornological.
In particular, every Fréchet space is ultrabornological.
The finite product of locally convex ultrabornological spaces is ultrabornological.
Every Hausdorff bornological space is quasi-barrelled.
Given a bornological space $X$ with continuous dual $X'$, the topology of $X$ coincides with the Mackey topology $\tau(X, X')$.
In particular, bornological spaces are Mackey spaces.
Every quasi-complete (i.e. all closed and bounded subsets are complete) bornological space is barrelled. There exist, however, bornological spaces that are not barrelled.
Every bornological space is the inductive limit of normed spaces (and Banach spaces if the space is also quasi-complete).
Let $X$ be a metrizable locally convex space with continuous dual $X'$. Then the following are equivalent:
$X'_b$ (the strong dual of $X$) is bornological.
$X'_b$ is quasi-barrelled.
$X'_b$ is barrelled.
$X$ is a distinguished space.
If $F : X \to Y$ is a linear map between locally convex spaces and if $X$ is bornological, then the following are equivalent:
$F$ is continuous.
$F$ is sequentially continuous.
For every set $B \subseteq X$ that is bounded in $X$, $F(B)$ is bounded.
If $(x_i)$ is a null sequence in $X$, then $(F(x_i))$ is a null sequence in $Y$.
If $(x_i)$ is a Mackey convergent null sequence in $X$, then $(F(x_i))$ is a bounded subset of $Y$.
Suppose that $X$ and $Y$ are locally convex TVSs and that the space of continuous linear maps $L_b(X; Y)$ is endowed with the topology of uniform convergence on bounded subsets of $X$. If $X$ is a bornological space and if $Y$ is complete, then $L_b(X; Y)$ is a complete TVS.
In particular, the strong dual of a locally convex bornological space is complete. However, it need not be bornological.
Subsets
In a locally convex bornological space, every convex bornivorous set $B$ is a neighborhood of $0$ ($B$ is not required to be a disk).
Every bornivorous subset of a locally convex metrizable topological vector space is a neighborhood of the origin.
Closed vector subspaces of a bornological space need not be bornological.
Ultrabornological spaces
A disk in a topological vector space $X$ is called infrabornivorous if it absorbs all Banach disks.
If $X$ is locally convex and Hausdorff, then a disk is infrabornivorous if and only if it absorbs all compact disks.
A locally convex space is called ultrabornological if any of the following equivalent conditions hold:
Every infrabornivorous disk is a neighborhood of the origin.
$X$ is the inductive limit of the spaces $X_D$ as $D$ varies over all compact disks in $X$.
A seminorm on $X$ that is bounded on each Banach disk is necessarily continuous.
For every locally convex space $Y$ and every linear map $u : X \to Y$, if $u$ is bounded on each Banach disk, then $u$ is continuous.
For every Banach space $Y$ and every linear map $u : X \to Y$, if $u$ is bounded on each Banach disk, then $u$ is continuous.
Properties
The finite product of ultrabornological spaces is ultrabornological. Inductive limits of ultrabornological spaces are ultrabornological.
See also
References
Bibliography
Topological vector spaces | Bornological space | Mathematics | 2,570 |
27,546,886 | https://en.wikipedia.org/wiki/Value%20sensitive%20design | Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. VSD originated within the field of information systems design and human-computer interaction to address design issues within the fields by emphasizing the ethical values of direct and indirect stakeholders. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. Later, in 2019, Batya Friedman and David Hendry wrote a book on this topic called "Value Sensitive Design: Shaping Technology with Moral Imagination". Value Sensitive Design takes human values into account in a well-defined manner throughout the whole process. Designs are developed using an investigation consisting of three phases: conceptual, empirical and technical. These investigations are intended to be iterative, allowing the designer to modify the design continuously.
The VSD approach is often described as an approach that is fundamentally predicated on its ability to be modified depending on the technology, value(s), or context of use. Some examples of modified VSD approaches are Privacy by Design which is concerned with respecting the privacy of personally identifiable information in systems and processes. Care-Centered Value Sensitive Design (CCVSD) proposed by Aimee van Wynsberghe is another example of how the VSD approach is modified to account for the values central to care for the design and development of care robots.
Design process
VSD uses an iterative design process that involves three types of investigations: conceptual, empirical and technical. Conceptual investigations aim at understanding and articulating the various stakeholders of the technology, as well as their values and any value conflicts that might arise for these stakeholders through the use of the technology. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies, or the design of systems to support values identified in the conceptual and empirical investigations. Friedman and Hendry list seventeen methods, including their main purpose, an overview of their function, and key references:
Stakeholder Analysis (Purpose: Stakeholder identification and legitimation): Identification of individuals, groups, organizations, institutions, and societies that might reasonably be affected by the technology under investigation and in what ways. Two overarching stakeholder categories: (1) those who interact directly with the technology, direct stakeholders; and (2) those indirectly affected by the technology, indirect stakeholders.
Stakeholder Tokens (Purpose: Stakeholder identification and interaction): Playful and versatile toolkit for identifying stakeholders and their interactions. Stakeholder tokens facilitate identifying stakeholders, distinguishing core from peripheral stakeholders, surfacing excluded stakeholders, and articulating relationships among stakeholders.
Value Source Analysis (Purpose: Identify value sources): Distinguish among the explicitly supported project values, designers’ personal values, and values held by other direct and indirect stakeholders.
Co-evolution of Technology and Social Structure (Purpose: Expand design space): Expanding the design space to include social structures integrated with technology may yield new solutions not possible when considering the technology alone. As appropriate, engage with the design of both technology and social structure as part of the solution space. Social structures may include policy, law, regulations, organizational practices, social norms, and others.
Value Scenario (Purpose: Values representation and elicitation): Narratives, comprising stories of use, intended to surface human and technical aspects of technology and context. Value scenarios emphasize implications for direct and indirect stakeholders, related key values, widespread use, indirect impacts, longer-term use, and similar systemic effects.
Value Sketch (Purpose: Values representation and elicitation): Sketching activities as a way to tap into stakeholders’ non-verbal understandings, views, and values about a technology.
Value-oriented Semi-structured Interview (Purpose: Values elicitation): Semi-structured interview questions as a way to tap into stakeholders’ understandings, views and values about a technology. Questions typically emphasize stakeholders’ evaluative judgments (e.g., all right or not all right) about a technology as well as rationale (e.g., why?). Additional considerations introduced by the stakeholder are pursued.
Scalable Information Dimensions (Purpose: Values elicitation): Sets of questions constructed to tease apart the impact of pervasiveness, proximity, granularity of information, and other scalable dimensions. Can be used in interview or survey formats.
Value-oriented Coding Manual (Purpose: Values analysis): Hierarchically structured categories for coding qualitative responses to the value representation and elicitation methods. Coding categories are generated from the data and a conceptualization of the domain. Each category contains a label, definition, and typically up to three sample responses from empirical data. Can be applied to oral, written, and visual responses.
Value-oriented Mockup, Prototype or Field Deployment (Purpose: Values representation and elicitation): Development, analysis, and co-design of mockups, prototypes and field deployments to scaffold the investigation of value implications of technologies that are yet to be built or widely adopted. Mock-ups, prototypes or field deployments emphasize implications for direct and indirect stakeholders, value tensions, and technology situated in human contexts.
Ethnographically Informed Inquiry regarding Values and Technology (Purpose: Values, technology and social structure framework and analysis): Framework and approach for data collection and analysis to uncover the complex relationships among values, technology and social structure as those relationships unfold. Typically involves indepth engagement in situated contexts over longer periods of time.
Model for Informed Consent Online (Purpose: Design principles and values analysis): Model with corresponding design principles for considering informed consent in online contexts. The construct of informed encompasses disclosure and comprehension; that of consent encompasses voluntariness, competence, and agreement. Furthermore, implementations of informed consent.
Value Dams and Flows (Purpose: Values analysis): Analytic method to reduce the solution space and resolve value tensions among design choices. First, design options that even a small percentage of stakeholders strongly object to are removed from the design space—the value dams. Then of the remaining design options, those that a good percentage of stakeholders find appealing are foregrounded in the design—the value flows. Can be applied to the design of both technology and social structures.
Value Sensitive Action-Reflection Model (Purpose: Values representation and elicitation): Reflective process for introducing value sensitive prompts into a co-design activity. Prompts can be designer or stakeholder generated.
Multi-lifespan timeline (Purpose: Priming longer-term and multi-generational design thinking): Priming activity for longer-term design thinking. Multi-lifespan timelines prompt individuals to situate themselves in a longer time frame relative to the present, with attention to both societal and technological change.
Multi-lifespan co-design (Purpose: Longer-term design thinking and envisioning): Co-design activities and processes that emphasize longer-term anticipatory futures with implications for multiple and future generations.
Envisioning Cards (Purpose: Value sensitive design toolkit for industry, research, and educational practice): Value sensitive envisioning toolkit. A set of 32 cards, the Envisioning Cards build on four criteria: stakeholders, time, values, and pervasiveness. Each card contains on one side a title and an evocative image related to the card theme; on the flip side, the envisioning criterion, card theme, and a focused design activity. Envisioning Cards can be used for ideation, co-design, heuristic critique, and evaluation. The second edition of the Envisioning Cards was published online under the CC BY-NC-ND 4.0 license in 2024. This second edition of the Envisioning Cards brings together under one cohesive design the original set of 32 Envisioning Cards published in 2011 including the four suits—Stakeholders, Time, Values, and Pervasiveness—with the supplementary set of 13 Envisioning Cards published in 2018 with the Multi-lifespan suit.
Criticisms
VSD is not without its criticisms. Two commonly cited criticisms are critiques of the heuristics of values on which VSD is built. These critiques have been forwarded by Le Dantec et al. and Manders-Huits. Le Dantec et al. argue that formulating a pre-determined list of implicated values runs the risk of ignoring important values that can be elicited from any given empirical case by mapping those values a priori. Manders-Huits instead takes on the concept of ‘values’ itself within VSD as the central issue. She argues that the traditional VSD definition of values as “what a person or group of people consider important in life” is nebulous and runs the risk of conflating stakeholders' preferences with moral values.
Wessel Reijers and Bert Gordijn have built upon the criticisms of Le Dantec et alia and Manders-Huits that the value heuristics of VSD are insufficient given their lack of moral commitment. They propose that a heuristic of virtues stemming from a virtue ethics approach to technology design, mostly influenced by the works of Shannon Vallor, provides a more holistic approach to technology design. Steven Umbrello has criticized this approach arguing that not only can the heuristic of values be reinforced but that VSD does make moral commitments to at least three universal values: human well-being, justice and dignity. Batya Friedman and David Hendry, in "Value Sensitive Design: Shaping Technology with Moral Imagination", argue that although earlier iterations of the VSD approach did not make explicit moral commitments, it has since evolved over the past two decades to commit to at least those three fundamental values.
VSD as a standalone approach has also been criticized as being insufficient for the ethical design of artificial intelligence. This criticism is predicated on the self-learning and opaque artificial intelligence techniques like those stemming from machine learning and, as a consequence, the unforeseen or unforeseeable values or disvalues that may emerge after the deployment of an AI system. Steven Umbrello and Ibo van de Poel propose a modified VSD approach that uses the Artificial Intelligence for Social Good (AI4SG) factors as norms to translate abstract philosophical values into tangible design requirements. What they propose is that full-lifecycle monitoring is necessary to encourage redesign in the event that unwanted values manifest themselves during the deployment of a system.
See also
UrbanSim
Batya Friedman
Jeroen van den Hoven
Ibo van de Poel
Aimee van Wynsberghe
Alan Borning
Behnam Taebi
Steven Umbrello
Values-based innovation
Positive computing
References
External links
Value Sensitive Design Research Lab
Blackwell Reference Online: Value Sensitive Design
Information systems
Architectural design
Human–computer interaction | Value sensitive design | Technology,Engineering | 2,217 |
70,395,393 | https://en.wikipedia.org/wiki/HD%20179886 | HD 179886 (HR 7289) is a binary star located in the southern constellation Telescopium. It has a combined apparent magnitude of 5.37, making it faintly visible to the naked eye if viewed under ideal conditions. The system is situated at a distance of 700 light years but is receding with a heliocentric radial velocity of .
As of 2018, the two stars have a separation of along a position angle of
The brighter component has a stellar classification of K3 III, indicating that the object is an ageing K-type giant. Models show it to be on the red giant branch, a stage of stellar evolution where the star is fusing hydrogen in a shell around an inert core of helium. It has an angular diameter of , yielding a diameter 37 times that of the Sun at its estimated distance. At present it has 111% the mass of the Sun and radiates at from its enlarged photosphere at an effective temperature of , giving it an orange glow. HD 179886A has a metallicity 141% that of the Sun and spins modestly with a projected rotational velocity of .
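The quoted size follows from the small-angle relation: physical diameter ≈ distance × angular diameter (in radians). A minimal sketch, assuming an angular diameter of about 1.6 mas (an illustrative value consistent with the quoted 700 light-year distance and 37-solar-diameter result; the measured figure is elided above):

```python
import math

LY_KM = 9.4607e12            # kilometres per light year
SUN_DIAMETER_KM = 1.3927e6   # diameter of the Sun in kilometres

def physical_diameter_km(theta_mas, distance_ly):
    """Small-angle estimate of physical diameter from angular size and distance."""
    theta_rad = theta_mas / 1000.0 / 3600.0 * math.pi / 180.0  # mas -> radians
    return theta_rad * distance_ly * LY_KM

d = physical_diameter_km(1.6, 700)     # assumed ~1.6 mas at 700 ly
print(d / SUN_DIAMETER_KM)             # ~37 solar diameters
```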
References
K-type giants
Telescopium
179886
094712
7289
Telescopii, 51
Durchmusterung objects
Double stars | HD 179886 | Astronomy | 263 |
44,970,384 | https://en.wikipedia.org/wiki/Electrical%20transcription | Electrical transcriptions are special phonograph recordings made exclusively for radio broadcasting, which were widely used during the "Golden Age of Radio". They provided material—from station-identification jingles and commercials to full-length programs—for use by local stations, which were affiliates of one of the radio networks.
Physically, electrical transcriptions look much like long-playing records, but differ from consumer-oriented recordings in two major respects, which gave longer playing time and reduced the likelihood of diversion to private use: they are usually larger than 12 inches in diameter (often 16 inches), so they did not fit on consumer playback equipment, and they were recorded with a hill-and-dale, or vertical, cutting action, as distinct from the lateral modulation of ordinary monophonic discs. They were distributed only to radio stations for the purpose of broadcast, and not for sale to the public. The ET had higher quality audio than was available on consumer records, largely because they had less surface noise than commercial recordings. Electrical transcriptions were often pressed on vinylite, instead of the more common shellac.
Emergence of electrical transcriptions
Electrical transcriptions were made practical by the development of electrical recording, which superseded Thomas Edison's original purely mechanical recording method in the mid-1920s. Marsh Laboratories in Chicago began issuing electrical recordings on its obscure Autograph label in 1924, but it was Western Electric's superior technology, adopted by the leading labels Victor and Columbia in 1925, which launched the then-new microphone-based method into general use in the recording industry.
Electrical transcriptions were often used for recording programs of genres which would come to be known later as old-time radio.
Although the earliest transcriptions ran at 78.26 rpm (or 80 rpm if recorded on a lathe run from three-phase power), were often 12 inches across, and were laterally recorded with a conventional 3-mil standard-groove stylus carrying a maximum of 6 minutes per side, the format gave way very quickly to the 33⅓ rpm speed that would come to be used for Vitaphone talking pictures two years later, which could carry a maximum of 15 minutes per side.
Later ETs had their groove size reduced, first to 2.7 mil and then to the then-standard 1-mil monaural groove used in LPs of the period, to squeeze 30 minutes per side onto a transcription.
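These capacities follow from simple groove arithmetic: each revolution advances the cutter by one groove pitch, so playing time per side is roughly (usable band width × grooves per inch) ÷ rpm. A rough sketch with illustrative pitch values (assumptions, not measured specifications):

```python
# Playing time per side = band width [inches] * grooves per inch / rpm.
# The pitch figures below are illustrative assumptions, not measured values.
def minutes_per_side(band_width_in, grooves_per_inch, rpm):
    return band_width_in * grooves_per_inch / rpm

print(minutes_per_side(4.5, 112, 100 / 3))  # ~15 min: 16-inch standard-groove ET
print(minutes_per_side(4.5, 224, 100 / 3))  # ~30 min: halving the pitch doubles the time
```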
Freeman Gosden and Charles Correll are credited with being the first to produce electrical transcriptions. In 1928, they began distributing their Amos 'n' Andy program to stations other than their 'home' station, WMAQ in Chicago, by using 12-inch 78 rpm discs that provided two five-minute segments with a commercial break between.
One audio historian wrote: "new methods of electronic reproduction and improved record material that produced very little background noise were developed ... by the end of the decade, the use of old phonograph music had largely been replaced by the new electrical transcription ... with the fidelity available, it was difficult to tell a transcription from the original artist." A 1948 ad for a disc manufacturer touted the use of transcriptions on the Voice of America, saying; "a substantial part of these daily programs is recorded and, due to the excellent quality of these transcriptions, such recorded portions cannot be distinguished from the live transmissions."
WOR in New York City was one of the first radio stations to broadcast transcriptions, starting in 1929. Other stations followed, until more than 100 were doing so, largely because "this new kind of recording made programming more flexible and improved sound." John R. Brinkley is generally credited with being the first performer to provide electrical transcriptions to radio stations. Brinkley's use of the then-new technology arose out of necessity when agencies of the federal government prevented him from crossing from Mexico into the United States to use telephone lines to connect to U.S. stations remotely. "Brinkley began recording ... onto electrical transcription discs and sending them across the border for later broadcast."
WOR used transcriptions for repeat broadcasts of programs. In 1940, for example, the station repeated episodes of Glenn Miller's and Kay Kyser's orchestras, The Goldbergs and Sherlock Holmes.
"Electrical transcriptions were indispensable from the mid '30s to the late '40s," wrote Walter J. Beaupre, who worked in radio before moving into academia.
Transcription services
As radio stations' demand for transcriptions grew, companies specializing in transcriptions grew to meet those demands. In October 1933, 33 companies competed in producing transcriptions. Such companies included Langlois & Wentworth, Inc., RCA Thesaurus, SESAC, World Broadcasting System, the Ziv Company, and the Associated Broadcasting Company transcription service, a former division of the Muzak Corporation (Muzak sold its Manhattan studios, but not its transcription service, to RCA Victor in 1951). Subscribing to a major transcription service meant a station received an initial group of transcriptions plus periodically issued new discs and a license, which allowed use of the material on-air. Typically, a station did not own the discs; "they were leased for as long as [the] station paid the necessary fees." Those fees typically ranged from $40 to $150 per week for eight 15-minute programs.
Customers for transcriptions were primarily smaller stations. Brewster and Broughton, in their book Last Night a DJ Saved My Life, wrote; (transcriptions) "lessened the reliance on the announcer/disc jockey and, because [a transcription] was made specifically for broadcast, it avoided record company litigation." They quoted Ben Selvin, who worked for a transcription company, as saying, "Most stations could not afford the orchestras and productions that went into the network radio shows, and so we supplied nearly 300 stations with transcriptions that frequently – but not always – featured the most popular bands and vocalists." A slogan used in an advertisement for one transcription service might well have been applied to the industry as a whole, "TRANSCRIBED ... so that advertisers everywhere may have 'radio at its commercial best.'"
A 1948 ad for the transcription service World Broadcasting System contained a letter which praised the company. S.A. Vetter, assistant to the owner of WWPB, AM and FM stations in Miami, Florida, wrote: "you will be interested in knowing that I consider the purchase of the World Feature Library as the best 'buy' I have made in my twenty-one years in Miami radio." The popularity of at least one library was indicated in another 1948 ad. One for Standard Radio Transcription Services, Inc. ad boasted of its Standard Program Library as: "now serving over 700 stations." That same year, an ad for another transcription service, World Broadcasting System, said, "over 640 stations now use this great world library." Another supply company, Associated Program Service, advertised its transcription library as being "not the usual one-shot recording date ... not the routine disc or two ... but real continuity of performance ... a dependable, steady supply of fresh music ... great depth of titles."
Among the companies providing transcription services were radio networks. NBC began its electrical transcription service in 1934. Lloyd C. Egner, manager of electrical transcriptions at NBC wrote that with the NBC Syndicated Recorded Program Service (later named the RCA/NBC Thesaurus Library) the company sought "to make available to stations associated with NBC our extensive programming resources to help in the sale of their facilities to local advertisers." He added: "each program series ... will be as completely programmed as if it were to be for a network client. In other words they will be designed to sell a sponsor's product or service." A 1948 ad for NBC's service touted: "now 25 better shows tailored for better programming at lower cost," adding that the company's material was "programmed and proven over 1000 radio stations." CBS also had a transcription division, called Columbia Recording Corporation.
Capitol Records, better known for its popular recordings, also had a transcription service. An ad in the trade publication Broadcasting asked in a headline if the reader was "finding it tough to sell time?" The ad's text promoted 3,000 selections – with more added monthly – from Peggy Lee, Jan Garber, Johnny Mercer and other "top stars", adding, "more than 300 stations already use it."
One source estimated: "by the end of the 1930s, [transcription] services had built up a market of $10 million."
Transcription services' programming was not limited to music. Mystery, drama and other genres of programming were distributed via transcription. At least two transcribed dramas, I Was a Communist for the FBI and Bold Venture, were distributed to more than 500 stations each. NBC's transcription offerings included Aunt Mary (a soap opera), The Haunting Hour (a psychological mystery), The Playhouse of Favorites (a drama) and Modern Romances.
Use by advertisers
Advertisers found electrical transcriptions useful for distributing their messages to local stations. Spot advertising is said to have begun in the 1930s. "The spot announcements were easily produced and distributed throughout the country via electrical transcription" as an alternative to network advertising. In 1944, the spot jingle segment of transcriptions was estimated to have an annual value of $10 million.
Benefits for performers
Transcriptions proved advantageous for performers, especially musicians in the Big-Band Era. Using transcriptions helped them reach one audience via radio while making personal appearances in front of other audiences. Additionally, if more stations used their transcriptions, that increased the audience for their music even more. An item in a 1946 issue of Radio Mirror magazine noted: "Bing Crosby's transcription deal with Philco has started a rush of other sought-after radio performers for deals of a similar nature. Their advantages from such a setup include more free time and corporate setups to relieve their tax costs."
Recording commercial jingles for spot announcements was a source of income for performers and writers. In 1944, Cliff Edwards received $1,500 for recording a 30-second gum jingle.
Government use of transcriptions
World War II brought a new use for electrical transcriptions—storage of audio material for broadcasting to people in the military. The American Forces Network began using ETs during that war and continued using them through 1998. More than 300,000 AFRTS electrical transcription discs are stored in a collection at the Library of Congress.
Transcriptions "were often used for ... government-issued programs which were sent to the individual stations for broadcast on designated dates. Recruiting shows for the branches of military service arrived on such discs ... the United States Government shipped many programs during wartime as transcriptions."
During the war, the federal government, in conjunction with the Intercollegiate Broadcasting System, provided "approximately eight 15-minute transcribed programs every week to each of ... 35 college stations." The United States Department of War, United States Department of the Navy, United States Department of the Treasury and United States Office of Education contributed to production of programs related to the war effort, such as The Treasury Star Parade and You Can't Do Business with Hitler.
The Voice of America also used transcriptions, with one disc manufacturer noting in an ad, "A substantial part of these daily programs is recorded ..."
Other notable uses
The network ban on prerecorded material was temporarily lifted on the occasion of the crash of the airship Hindenburg in Lakehurst, New Jersey, on 6 May 1937. A recording of the crash made for Chicago radio station WLS by announcer Herbert Morrison was allowed to be broadcast over the network by NBC. This is the well-known "oh, the humanity!" recording, usually heard only as a brief excerpt and reproduced at a speed which differs significantly from the original recording speed, causing Morrison's voice to sound unnaturally high-pitched and excessively frantic. When heard in its entirety and at the correct speed, the report is still powerful.
Transcription recordings from major American radio networks became commonplace during World War II as pressed vinyl copies of them were distributed worldwide by the U.S. Armed Forces Radio Service for rebroadcast to troops in the field. Disc-to-disc editing procedures were used to delete the commercials included in the original broadcasts, and when a sponsor's name was attached to the name of the program, it was removed as well—Lux Radio Theater, for example, became Your Radio Theater. Although the discs were government property and were supposed to be destroyed after they had served their purpose, some were saved as souvenirs and countless thousands of them were simply dumped rather than actually destroyed. Many of the dumped discs ended up in the hands of scavengers and collectors. Often, these discs are the only form in which the broadcasts on them have survived, and they are one of the reasons why recordings of entertainment broadcasts from the 1940s still exist in abundance.
Many long classical works performed live onstage were captured in a succession of transcription discs. With only 15 minutes per side at rpm not only did it become necessary to change discs in the middle of a performance, but a careful track needed to be kept of whether sides were recorded in the conventional outside-in format or the reverse style of inside-out, starting near the label and finishing near the edge.
This was due to the large fidelity difference between revolutions near the edge of a disc and those near the center. Therefore, odd-numbered sides (1, 3, 5, etc.) were always recorded outside-in while even-numbered sides (2, 4, 6, etc.) were recorded inside-out. Producers would often work with engineers to ensure that loud, active, or bombastic selections, or selections requiring a wide dynamic range to be reproduced faithfully, would always fall either near the beginning of odd sides or near the end of even sides. Often a small amount of overlap occurred, which upon transfer to tape years later would have to be discarded, except in cases where the beginning of an even side or the end of an odd side (or vice versa) had been damaged during the recording process or subsequent handling. This is why on some CD reissues of this material, a noticeable difference in quality can be heard between the two sections.
This practice was preserved for hours-long radio shows up until the 1990s, when multiple-disc sets would be pressed in radio sequence to allow for rapid changing of sides:
A) Manual sequence: Side 1 is backed by Side 2, Side 3 is backed by Side 4, Side 5 is backed by Side 6, etc.
B) Automatic sequence: Side 1 is backed by Side 6, Side 2 is backed by Side 5, and Side 3 remains backed by Side 4.
C) Radio sequence: Side 1 is backed by Side 4, Side 2 remains backed by Side 5, and Side 3 is backed by Side 6, so the next side can be cued up next to the side playing instead of having to turn a record over in the middle.
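The three pairing rules generalize to a set of n discs (2n sides); a short sketch, with a hypothetical function name:

```python
# Disc side pairings for an n-disc set (2n sides total).
def pairings(n, sequence):
    if sequence == "manual":     # 1/2, 3/4, 5/6, ...
        return [(2 * k - 1, 2 * k) for k in range(1, n + 1)]
    if sequence == "automatic":  # 1/6, 2/5, 3/4 for n = 3 (record changers)
        return [(k, 2 * n + 1 - k) for k in range(1, n + 1)]
    if sequence == "radio":      # 1/4, 2/5, 3/6 for n = 3 (cue next side on a second turntable)
        return [(k, k + n) for k in range(1, n + 1)]
    raise ValueError(f"unknown sequence: {sequence}")

for seq in ("manual", "automatic", "radio"):
    print(seq, pairings(3, seq))  # reproduces the three 6-side layouts above
```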
Well-known live broadcasts which were preserved on lacquer transcription discs include The War of the Worlds dramatized as breaking news by the Orson Welles anthology program The Mercury Theatre on the Air, heard over the CBS radio network on 30 October 1938.
Before magnetic tape recorders became available in the U.S., NBC Symphony Orchestra concert broadcasts were preserved on transcription discs. After its conductor Arturo Toscanini retired, he transferred many of these recordings to tape, with the assistance of his son Walter, and most were eventually released on LP or CD.
In the United States, NBC Radio continued to use the 16-inch disc format for archiving purposes into the early 1970s.
Transcription discs
A transcription disc is a special phonograph record intended for, or recorded from, a radio broadcast. Sometimes called a broadcast transcription or radio transcription or nicknamed a platter, it is also sometimes just referred to as an electrical transcription, usually abbreviated to E.T. among radio professionals.
Transcription discs are most commonly 16 inches (40 cm) in diameter and recorded at 33⅓ rpm. That format was standard from approximately 1930 to 1960 and physically distinguishes most transcriptions from records intended for home use, which were rarely more than 12 inches (30 cm) in diameter and until 1948 were nearly all recorded at approximately 78 rpm. However, some very early (c. 1928–1931) radio programs were on sets of 12-inch or even 10-inch (25 cm) 78 rpm discs, and some later (circa 1960–1990) syndicated radio programs were distributed on 12-inch 33⅓ rpm microgroove vinyl discs visually indistinguishable from ordinary records except by their label information.
Some unusual records which are not broadcast-related are sometimes mistakenly described as "transcription discs" because they were recorded on the so-called acetate recording blanks used for broadcast transcriptions or share some other physical characteristic with them. Transcription discs should not be confused with the 16-inch 33⅓ rpm shellac soundtrack discs used from 1926 into the early 1930s to provide the audio for some motion picture sound systems. Also a potential source of confusion are RCA Victor's "Program Transcription" discs, 10- or 12-inch 33⅓ rpm records pressed in shellac and "Victrolac" vinyl in the early 1930s. Despite their suggestive name, they were not recorded from broadcasts or intended for broadcast use, but were an early and unsuccessful attempt to introduce longer-playing records at the 33⅓ rpm speed for home use.
Disc types
Transcription discs are of two basic types: pressings and instantaneous discs.
Pressings were created in the same way as ordinary records. A master recording was cut into a blank wax or acetate disc. This was electroplated to produce a metal stamper from which a number of identical discs were pressed in shellac or vinyl in a record press. Although the earliest transcription discs were pressed in shellac, in the mid-1930s quieter vinyl compounds were substituted. These discs were used to distribute syndicated programming to individual radio stations. Their use for this purpose persisted long after the advent of magnetic tape recording because it was cheaper to cut and plate a master disc and press 100 identical high-quality discs than to make 100 equally high-quality tape dubs.
Instantaneous discs are so called because they can be played immediately after recording without any further processing, unlike the delicate wax master discs which had to be plated and replicated as pressings before they could be played non-destructively. By late 1929, instantaneous recordings were being made by indenting, as opposed to engraving, a groove into the surface of a bare aluminum disc. The sound quality of these discs was inadequate for broadcast purposes, but they were made for sponsors and performers who wanted to have recordings of their broadcasts, a luxury which was impractically expensive to provide by the wax mastering, plating and pressing procedure. Only a very few pre-1930 live broadcasts were deemed important enough to preserve as pressings, and many of the bare aluminum discs perished in the scrap metal drives of World War II, so that these early years of radio are mostly known today by the syndicated programs on pressed discs, typically recorded in a small studio without an audience, rather than by recordings of live network and local broadcasts.
In late 1934, a new type of instantaneous disc was commercially introduced. It consisted of an aluminum core disc coated with black cellulose nitrate lacquer, although for reasons which are unclear it soon came to be called an "acetate" disc by radio professionals. Later, during World War II, when aluminum was a critical war material, glass core discs were used. A recording lathe and chisel-like cutting stylus like those used to record in wax would be used to engrave the groove into this lacquer surface instead. Given a top-quality blank disc, cutting stylus, lathe, electronics and recording engineer, the result was a broadcast-quality recording which could be played several times before the effects of wear started to become apparent. The new medium was soon applied to a number of purposes by local stations, but not by the networks, which had a policy against broadcasting prerecorded material and mainly used the discs for archiving "reference recordings" of their broadcasts.
Standard 16-inch transcription discs of the 1930s and 1940s usually held about 15 minutes of audio on each side, but this was occasionally pushed to as much as 20 minutes. Unlike ordinary records, some were recorded inside out, with the start of the recording near the label and the end near the edge of the disc. The label usually noted whether the disc was "outside start" or "inside start". If there was no such notation, an outside start was assumed. Beginning in the mid-1950s, some transcription discs started employing the "microgroove" groove dimensions used by the 12- and 10-inch 33⅓ rpm vinyl LP records introduced for home use in 1948. This allowed 30 minutes to fit comfortably on each side of a 16-inch disc. These later discs can be played with an ordinary modern stylus or a vintage "LP" stylus. The earlier discs used a larger groove, nearer in size to the groove of a typical 78 rpm shellac record. Using a "78" stylus to play these "standard groove" discs usually produces much better results, and also insures against the groove damage that can be caused by the point of a too-small stylus skating around in the groove and scoring its surface. Some specialist audio transfer engineers keep a series of custom-ground styli of intermediate sizes and briefly test-play the disc with each in order to find the one that produces the best possible results.
The demise of transcriptions
Beginning in the 1940s, two factors caused radio stations' use of transcriptions to diminish. After World War II, use of transcriptions diminished as disc jockeys became more popular. That increased popularity meant that stations began to use commercial recordings more than they had in the past. The trade magazine Billboard reported in a November 22, 1952, article, "Transcription libraries have come upon rough times, owing to the fact that records have largely taken the place of the old-fashioned E.T.'s."
In the 1940s, decreased demand caused transcription services to reduce the royalty they paid copyright owners from $15 per track per year to $10 per track per year. By 1952, still less demand resulted in negotiations for a percentage of gross sales to replace the flat fee.
By late 1959, at least two transcription service companies had gone out of business, selling their libraries to a company that provided recorded background music on tapes and discs. The purchaser acquired a total of approximately 12,000 selections from the two companies.
Magnetic tape and tape recorders became popular at radio stations after World War II, taking over the functions that in-house transcription disc recording had served. Tape's advantages included lower cost, higher fidelity, more recording time, possibility of re-use after erasing, and ease of editing.
See also
Notes
References
External links
Transcriptions archive of the California Historical Radio Society
Elizabeth McLeod's Broadcasting History
Music Electrically Transcribed! Walter J. Beaupre
Electrical Transcription - Canadian Communication Foundation
Old Time Radio Researchers Group
Internet Archive's Old Time Radio Collection
A full day's broadcast on September 21, 1939 on the Washington, D.C. CBS affiliate station WJSV
The John R. Hickman Collection from American University Library
Fybush, Scott. Frequently-Asked Question. The Archives@BostonRadio.org.
Armed Forces Radio Services broadcasts. Bing Crosby Internet Museum.
Bensman, Marvin R.. A History of Radio Program Collecting . Radio Archive of the University of Memphis.
Vintage Radio and Communications Museum of Connecticut
Audio storage
Radio broadcasting
Sound recording technology | Electrical transcription | Technology | 4,829 |
74,454,680 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Watch%206 | The Samsung Galaxy Watch 6 (stylized as Samsung Galaxy Watch6) is a series of Wear OS-based smartwatches developed by Samsung Electronics. It was announced on July 26, 2023 at Samsung's biannual Galaxy Unpacked event in Seoul, South Korea, making it the first such release held in the company's home country. The watches were released on August 11, 2023.
Specifications
References
External links
Consumer electronics brands
Products introduced in 2023
Smartwatches
Samsung wearable devices
Watch 6
Wear OS devices | Samsung Galaxy Watch 6 | Technology | 108 |
4,774,419 | https://en.wikipedia.org/wiki/Herpes%20simplex%20virus | Herpes simplex virus 1 (cold sores) and 2 (genital herpes) (HSV-1 and HSV-2), also known by their taxonomic names Human alphaherpesvirus 1 and Human alphaherpesvirus 2, are two members of the human Herpesviridae family, a set of viruses that produce viral infections in the majority of humans. Both HSV-1 and HSV-2 are very common and contagious. They can be spread when an infected person begins shedding the virus.
As of 2016, about 67% of the world population under the age of 50 had HSV-1. In the United States, about 47.8% and 11.9% are estimated to have HSV-1 and HSV-2, respectively, though actual prevalence may be much higher. Because it can be transmitted through any intimate contact, it is one of the most common sexually transmitted infections.
Symptoms
Many of those who are infected never develop symptoms. Symptoms, when they occur, may include watery blisters in the skin of any location of the body, or in mucous membranes of the mouth, lips, nose, genitals, or eyes (herpes simplex keratitis). Lesions heal with a scab characteristic of herpetic disease. Sometimes, the viruses cause mild or atypical symptoms during outbreaks. However, they can also cause more troublesome forms of herpes simplex. As neurotropic and neuroinvasive viruses, HSV-1 and -2 persist in the body by hiding from the immune system in the cell bodies of neurons, particularly in sensory ganglia. After the initial or primary infection, some infected people experience sporadic episodes of viral reactivation or outbreaks. In an outbreak, the virus in a nerve cell becomes active and is transported via the neuron's axon to the skin, where virus replication and shedding occur and may cause new sores.
Transmission
HSV-1 and HSV-2 are transmitted by contact with an infected person who has reactivations of the virus.
HSV-1 and HSV-2 are periodically shed, most often asymptomatically.
In a study of people with first-episode genital HSV-1 infection from 2022, genital shedding of HSV-1 was detected on 12% of days at 2 months and declined significantly to 7% of days at 11 months. Most genital shedding was asymptomatic; genital and oral lesions and oral shedding were rare.
Most sexual transmissions of HSV-2 occur during periods of asymptomatic shedding. Asymptomatic reactivation means that the virus causes atypical, subtle, or hard-to-notice symptoms that are not identified as an active herpes infection, so acquiring the virus is possible even if no active HSV blisters or sores are present. In one study, daily genital swab samples detected HSV-2 at a median of 12–28% of days among those who had an outbreak, and 10% of days among those with asymptomatic infection (no prior outbreaks), with many of these episodes occurring without visible outbreak ("subclinical shedding").
In another study, 73 subjects were randomized to receive valaciclovir 1 g daily or placebo for 60 days each in a two-way crossover design. A daily swab of the genital area was self-collected for HSV-2 detection by polymerase chain reaction, to compare the effect of valaciclovir versus placebo on asymptomatic viral shedding in immunocompetent, HSV-2 seropositive subjects without a history of symptomatic genital herpes infection. The study found that valaciclovir significantly reduced shedding during subclinical days compared to placebo, showing a 71% reduction; 84% of subjects had no shedding while receiving valaciclovir versus 54% of subjects on placebo. About 88% of patients treated with valaciclovir had no recognized signs or symptoms versus 77% for placebo.
For HSV-2, subclinical shedding may account for most of the transmission. Studies on discordant partners (one infected with HSV-2, one not) show that the transmission rate is approximately 5–8.9 per 10,000 sexual contacts, with condom usage greatly reducing the risk of acquisition. Atypical symptoms are often attributed to other causes, such as a yeast infection. HSV-1 is often acquired orally during childhood. It may also be sexually transmitted, including contact with saliva, such as kissing and oral sex. Historically HSV-2 was primarily a sexually transmitted infection, but rates of HSV-1 genital infections have been increasing for the last few decades.
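To put the per-contact figure in perspective, a constant per-contact probability compounds over repeated exposures. A minimal sketch assuming independent, identically risky contacts (a simplification, not an epidemiological model):

```python
# Cumulative acquisition risk after n contacts, each with probability p:
# risk = 1 - (1 - p)**n.
def cumulative_risk(p, n):
    return 1 - (1 - p) ** n

for p in (5e-4, 8.9e-4):                          # 5-8.9 per 10,000 contacts, as above
    print(p, round(cumulative_risk(p, 100), 4))   # ~0.049-0.085 over 100 contacts
```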
Both viruses may also be transmitted vertically during natural childbirth. However, the risk of transmission is minimal if the mother has no symptoms nor exposed blisters during delivery. The risk is considerable when the mother is infected with the virus for the first time during late pregnancy, reflecting a high viral load. While most viral STDs can not be transmitted through objects as the virus dies quickly outside of the body, HSV can survive for up to 4.5 hours on surfaces and can be transmitted through use of towels, toothbrushes, cups, cutlery, etc.
Herpes simplex viruses can affect areas of skin exposed to contact with an infected person. An example of this is herpetic whitlow, which is a herpes infection on the fingers; it was commonly found on dental surgeon's hands before the routine use of gloves when treating patients. Shaking hands with an infected person does not transmit this disease. Genital infection of HSV-2 increases the risk of acquiring HIV.
Virology
HSV has been a model virus for many studies in molecular biology. For instance, one of the first functional promoters in eukaryotes was discovered in HSV (of the thymidine kinase gene) and the virion protein VP16 is one of the most-studied transcriptional activators.
Viral structure
Animal herpes viruses all share some common properties. The structure of herpes viruses consists of a relatively large, double-stranded, linear DNA genome encased within an icosahedral protein cage called the capsid, which is wrapped in a lipid bilayer called the envelope. The envelope is joined to the capsid through a tegument. This complete particle is known as the virion. HSV-1 and HSV-2 each contain at least 74 genes (or open reading frames, ORFs) within their genomes, although speculation over gene crowding allows as many as 84 unique protein coding genes by 94 putative ORFs. These genes encode a variety of proteins involved in forming the capsid, tegument and envelope of the virus, as well as controlling the replication and infectivity of the virus. These genes and their functions are summarized in the table below.
The genomes of HSV-1 and HSV-2 are complex and contain two unique regions called the long unique region (UL) and the short unique region (US). Of the 74 known ORFs, UL contains 56 viral genes, whereas US contains only 12. Transcription of HSV genes is catalyzed by RNA polymerase II of the infected host. Immediate early genes, which encode proteins that regulate the expression of early and late viral genes (for example, ICP22), are the first to be expressed following infection. Early gene expression follows, allowing the synthesis of enzymes involved in DNA replication and the production of certain envelope glycoproteins. Expression of late genes occurs last; this group of genes predominantly encodes proteins that form the virion particle.
Five proteins from UL form the viral capsid: UL6, UL18, UL35, UL38, and the major capsid protein UL19.
Cellular entry
Entry of HSV into a host cell involves several glycoproteins on the surface of the enveloped virus binding to their transmembrane receptors on the cell surface. Many of these receptors are then pulled inwards by the cell, which is thought to open a ring of three gHgL heterodimers stabilizing a compact conformation of the gB glycoprotein so that it springs out and punctures the cell membrane. The envelope covering the virus particle then fuses with the cell membrane, creating a pore through which the contents of the viral envelope enter the host cell.
The sequential stages of HSV entry are analogous to those of other viruses. At first, complementary receptors on the virus and the cell surface bring the viral and cell membranes into proximity. Interactions of these molecules then form a stable entry pore through which the viral envelope contents are introduced to the host cell. The virus can also be endocytosed after binding to the receptors, in which case fusion may occur at the endosome. In electron micrographs, the outer leaflets of the viral and cellular lipid bilayers have been seen merged; this hemifusion may lie on the usual path to entry, or it may be an arrested state that is simply more likely to be captured than a transient entry intermediate.
In the case of a herpes virus, initial interactions occur when two viral envelope glycoproteins called glycoprotein C (gC) and glycoprotein B (gB) bind to a cell surface polysaccharide called heparan sulfate. Next, the major receptor binding protein, glycoprotein D (gD), binds specifically to at least one of three known entry receptors. These cell receptors include herpesvirus entry mediator (HVEM), nectin-1 and 3-O sulfated heparan sulfate. The nectin receptors usually produce cell-cell adhesion, to provide a strong point of attachment for the virus to the host cell. These interactions bring the membrane surfaces into mutual proximity and allow for other glycoproteins embedded in the viral envelope to interact with other cell surface molecules. Once bound to the HVEM, gD changes its conformation and interacts with viral glycoproteins H (gH) and L (gL), which form a complex. The interaction of these membrane proteins may result in a hemifusion state. gB interaction with the gH/gL complex creates an entry pore for the viral capsid. gB interacts with glycosaminoglycans on the surface of the host cell.
Genetic inoculation
After the viral capsid enters the cellular cytoplasm, it starts to express viral protein ICP27. ICP27 is a regulator protein that causes disruption in host protein synthesis and utilizes it for viral replication. ICP27 binds with a cellular enzyme Serine-Arginine Protein Kinase 1, SRPK1. Formation of this complex causes the SRPK1 shift from the cytoplasm to the nucleus, and the viral genome gets transported to the cell nucleus. Once attached to the nucleus at a nuclear entry pore, the capsid ejects its DNA contents via the capsid portal. The capsid portal is formed by 12 copies of the portal protein, UL6, arranged as a ring; the proteins contain a leucine zipper sequence of amino acids, which allow them to adhere to each other. Each icosahedral capsid contains a single portal, located in one vertex.
The DNA exits the capsid in a single linear segment.
Immune evasion
HSV evades the immune system through interference with MHC class I antigen presentation on the cell surface, by blocking the transporter associated with antigen processing (TAP) induced by the secretion of ICP-47 by HSV. In the host cell, TAP transports digested viral antigen epitope peptides from the cytosol to the endoplasmic reticulum, allowing these epitopes to be combined with MHC class I molecules and presented on the surface of the cell. Viral epitope presentation with MHC class I is a requirement for the activation of cytotoxic T-lymphocytes (CTLs), the major effectors of the cell-mediated immune response against virally infected cells. ICP-47 prevents the initiation of a CTL-response against HSV, allowing the virus to survive for a protracted period in the host. HSV usually produces cytopathic effect (CPE) within 24–72 hours post-infection in permissive cell lines which is observed by classical plaque formation. However, HSV-1 clinical isolates have also been reported that did not show any CPE in Vero and A549 cell cultures over several passages with low levels of virus protein expression. Probably these HSV-1 isolates are evolving towards a more "cryptic" form to establish chronic infection thereby unravelling yet another strategy to evade the host immune system, besides neuronal latency.
Replication
Following the infection of a cell, a cascade of herpes virus proteins, called immediate-early, early, and late, is produced. Research using flow cytometry on another member of the herpes virus family, Kaposi's sarcoma-associated herpesvirus, indicates the possibility of an additional lytic stage, delayed-late. These stages of lytic infection, particularly late lytic, are distinct from the latency stage. In the case of HSV-1, no protein products are detected during latency, whereas they are detected during the lytic cycle.
The early proteins transcribed are used in the regulation of genetic replication of the virus. On entering the cell, an α-TIF protein joins the viral particle and aids in immediate-early transcription. The virion host shutoff protein (VHS or UL41) is very important to viral replication. This enzyme shuts off protein synthesis in the host, degrades host mRNA, helps in viral replication, and regulates gene expression of viral proteins. The viral genome immediately travels to the nucleus, but the VHS protein remains in the cytoplasm.
The late proteins form the capsid and the receptors on the surface of the virus. Packaging of the viral particles (including the genome, core, and capsid) occurs in the nucleus of the cell. Here, concatemers of the viral genome are separated by cleavage and are placed into formed capsids. HSV-1 undergoes a process of primary and secondary envelopment. The primary envelope is acquired by budding into the inner nuclear membrane of the cell. This then fuses with the outer nuclear membrane. The virus acquires its final envelope by budding into cytoplasmic vesicles.
Latent infection
HSVs may persist in a quiescent but persistent form known as latent infection, notably in neural ganglia. The HSV genome circular DNA resides in the cell nucleus as an episome. HSV-1 tends to reside in the trigeminal ganglia, while HSV-2 tends to reside in the sacral ganglia, but these are historical tendencies only. During latent infection of a cell, HSVs express latency-associated transcript (LAT) RNA. LAT regulates the host cell genome and interferes with natural cell death mechanisms. By maintaining the host cells, LAT expression preserves a reservoir of the virus, which allows subsequent, usually symptomatic, periodic recurrences or "outbreaks" characteristic of non-latency. Whether or not recurrences are symptomatic, viral shedding occurs to infect a new host.
A protein found in neurons may bind to herpes virus DNA and regulate latency. Herpes virus DNA contains a gene for a protein called ICP4, which is an important transactivator of genes associated with lytic infection in HSV-1. Elements surrounding the gene for ICP4 bind a protein known as the human neuronal protein neuronal restrictive silencing factor (NRSF) or human repressor element silencing transcription factor (REST). When bound to the viral DNA elements, histone deacetylation occurs atop the ICP4 gene sequence to prevent initiation of transcription from this gene, thereby preventing transcription of other viral genes involved in the lytic cycle. Another HSV protein reverses the inhibition of ICP4 protein synthesis. ICP0 dissociates NRSF from the ICP4 gene and thus prevents silencing of the viral DNA.
Genome
The HSV genome spans about 150,000 bp and consists of two unique segments, named unique long (UL) and unique short (US), as well as terminal inverted repeats found at the two ends of them, named repeat long (RL) and repeat short (RS). There are also minor "terminal redundancy" (α) elements found on the further ends of RS. The overall arrangement is RL-UL-RL-α-RS-US-RS-α with each pair of repeats inverting each other. The whole sequence is then encapsulated in a terminal direct repeat. The long and short parts each have their own origins of replication, with OriL located between UL28 and UL30 and OriS located in a pair near the RS. As the L and S segments can be assembled in any direction, they can be inverted relative to each other freely, forming various linear isomers.
Gene expression
HSV genes are expressed in 3 temporal classes: immediate early (IE or α), early (E or β), and late (γ) genes. However, the progression of viral gene expression is gradual rather than proceeding in clearly distinct stages. Immediate early genes are transcribed right after infection and their gene products activate transcription of the early genes. Early gene products help to replicate the viral DNA. Viral DNA replication, in turn, stimulates the expression of the late genes, encoding the structural proteins.
Transcription of the immediate early (IE) genes begins right after virus DNA enters the nucleus. All virus genes are transcribed by the host RNA polymerase II. Although host proteins are sufficient for virus transcription, viral proteins are necessary for the transcription of certain genes. For instance, VP16 plays an important role in IE transcription and the virus particle brings it into the host cell, so that it does not need to be produced first. Similarly, the IE proteins RS1 (ICP4), UL54 (ICP27), and ICP0 promote the transcription of the early (E) genes. Like IE genes, early gene promoters contain binding sites for cellular transcription factors. One early protein, ICP8, is necessary for both transcription of late genes and DNA replication.
Later in the life cycle of HSV, the expression of immediate early and early genes is shut down. This is mediated by specific virus proteins, e.g. ICP4, which represses itself by binding to elements in its promoter. As a consequence, the down-regulation of ICP4 levels leads to a reduction of early and late gene expression, as ICP4 is important for both.
Importantly, HSV shuts down host cell RNA, DNA, and protein synthesis to direct cellular resources to virus production. First, the virus protein vhs induces the degradation of existing mRNAs early in infection. Other viral genes impede cellular transcription and translation. For instance, ICP27 inhibits RNA splicing, so that virus mRNAs (which are usually not spliced) gain an advantage over host mRNAs. Finally, virus proteins destabilize certain cellular proteins involved in the host cell cycle, so that both cell division and host cell DNA replication are disturbed in favor of virus replication.
Evolution
The herpes simplex 1 genomes can be classified into six clades. Four of these occur in East Africa, one in East Asia and one in Europe and North America. This suggests that the virus may have originated in East Africa. The most recent common ancestor of the Eurasian strains appears to have evolved ~60,000 years ago. The East Asian HSV-1 isolates have an unusual pattern that is currently best explained by the two waves of migration responsible for the peopling of Japan.
Herpes simplex 2 genomes can be divided into two groups: one is globally distributed and the other is mostly limited to sub-Saharan Africa. The globally distributed genotype has undergone four ancient recombinations with herpes simplex 1. It has also been reported that HSV-1 and HSV-2 can have contemporary and stable recombination events in hosts simultaneously infected with both pathogens. In all of the cases, HSV-2 acquires parts of the HSV-1 genome, sometimes changing parts of its antigen epitope in the process.
The mutation rate has been estimated to be ~1.38×10⁻⁷ substitutions/site/year. In the clinical setting, mutations in either the thymidine kinase gene or DNA polymerase gene have caused resistance to aciclovir. However, most of the mutations occur in the thymidine kinase gene rather than the DNA polymerase gene.
Another analysis has estimated the mutation rate in the herpes simplex 1 genome to be 1.82×10⁻⁸ nucleotide substitutions per site per year. This analysis placed the most recent common ancestor of this virus ~710,000 years ago.
Herpes simplex 1 and 2 diverged about 6 million years ago.
Treatment
Similar to other herpesviridae, the herpes simplex viruses establish latent lifelong infection, and thus cannot be eradicated from the body with current treatments.
Treatment usually involves general-purpose antiviral drugs that interfere with viral replication, reduce the physical severity of outbreak-associated lesions, and lower the chance of transmission to others. Studies of vulnerable patient populations have indicated that daily use of antivirals such as aciclovir and valaciclovir can reduce reactivation rates. The extensive use of antiherpetic drugs has led to the development of some drug resistance, which in turn may lead to treatment failure. Therefore, new sources of drugs are broadly investigated to address the problem. In January 2020, a comprehensive review article was published that demonstrated the effectiveness of natural products as promising anti-HSV drugs. Pyrithione, a zinc ionophore, has shown antiviral activity against herpes simplex.
Alzheimer's disease
In 1979, it was reported that there is a possible link between HSV-1 and Alzheimer's disease, in people with the epsilon4 allele of the gene APOE. HSV-1 appears to be particularly damaging to the nervous system and increases one's risk of developing Alzheimer's disease. The virus interacts with the components and receptors of lipoproteins, which may lead to the development of Alzheimer's disease. This research identifies HSVs as the pathogen most clearly linked to the establishment of Alzheimer's. According to a study done in 1997, without the presence of the gene allele, HSV-1 does not appear to cause any neurological damage or increase the risk of Alzheimer's. However, a more recent prospective study published in 2008 with a cohort of 591 people showed a statistically significant difference between patients with antibodies indicating recent reactivation of HSV and those without these antibodies in the incidence of Alzheimer's disease, without direct correlation to the APOE-epsilon4 allele.
The trial had a small sample of patients who did not have the antibody at baseline, so the results should be viewed as highly uncertain. In 2011, Manchester University scientists showed that treating HSV1-infected cells with antiviral agents decreased the accumulation of β-amyloid and tau protein and also decreased HSV-1 replication.
A 2018 retrospective study from Taiwan on 33,000 patients found that being infected with herpes simplex virus increased the risk of dementia 2.56 times (95% CI: 2.3-2.8) in patients not receiving anti-herpetic medications (2.6 times for HSV-1 infections and 2.0 times for HSV-2 infections). However, HSV-infected patients who were receiving anti-herpetic medications (e.g., acyclovir, famciclovir, ganciclovir, idoxuridine, penciclovir, tromantadine, valaciclovir, or valganciclovir) showed no elevated risk of dementia compared to patients uninfected with HSV.
Multiplicity reactivation
Multiplicity reactivation (MR) is the process by which viral genomes containing inactivating damage interact within an infected cell to form a viable viral genome. MR was originally discovered with the bacterial virus bacteriophage T4 but was subsequently also found with pathogenic viruses including influenza virus, HIV-1, adenovirus, simian virus 40, vaccinia virus, reovirus, poliovirus, and herpes simplex virus.
When HSV particles are exposed to doses of a DNA-damaging agent that would be lethal in single infections but are then allowed to undergo multiple infections (i.e. two or more viruses per host cell), MR is observed. Enhanced survival of HSV-1 due to MR occurs upon exposure to different DNA damaging agents, including methyl methanesulfonate, trimethylpsoralen (which causes inter-strand DNA cross-links), and UV light. After treatment of genetically marked HSV with trimethylpsoralen, recombination between the marked viruses increases, suggesting that trimethylpsoralen damage stimulates recombination. MR of HSV appears to partially depend on the host cell recombinational repair machinery since skin fibroblast cells defective in a component of this machinery (i.e. cells from Bloom's syndrome patients) are deficient in MR.
These observations suggest that MR in HSV infections involves genetic recombination between damaged viral genomes resulting in the production of viable progeny viruses. HSV-1, upon infecting host cells, induces inflammation and oxidative stress. Thus it appears that the HSV genome may be subjected to oxidative DNA damage during infection, and that MR may enhance viral survival and virulence under these conditions.
Use as an anti-cancer agent
Modified herpes simplex virus is considered a potential therapy for cancer and has been extensively clinically tested to assess its oncolytic (cancer-killing) ability. Interim overall survival data from Amgen's phase 3 trial of a genetically attenuated herpes virus suggests efficacy against melanoma.
Use in neuronal connection tracing
Herpes simplex virus is also used as a transneuronal tracer defining connections among neurons by traversing synapses.
Other related outcomes
HSV-2 is the most common cause of Mollaret's meningitis. HSV-1 can lead to potentially fatal cases of herpes simplex encephalitis. Herpes simplex viruses have also been studied in central nervous system disorders such as multiple sclerosis, but research has been conflicting and inconclusive.
Following a diagnosis of genital herpes simplex infection, patients may develop an episode of profound depression. In addition to offering antiviral medication to alleviate symptoms and shorten their duration, physicians must also address the mental health impact of a new diagnosis. Providing information on the very high prevalence of these infections, their effective treatments, and future therapies in development may provide hope to patients who are otherwise demoralized.
HSV infection was found to increase all-cause mortality in Denmark: 19.3% excess one-year mortality for HSV-1 and 5.3% for HSV-2 in the first year of infection. Additionally, lower employment rates and higher disability pension rates were observed.
Research
There exist commonly used vaccines to some herpesviruses, such as the veterinary vaccine HVT/LT (Turkey herpesvirus vector laryngotracheitis vaccine). This vaccine also prevents atherosclerosis (which histologically mirrors atherosclerosis in humans) in the target animals vaccinated.
The only human vaccines available for herpesviruses are for Varicella zoster virus, given to children around their first birthday to prevent chickenpox (varicella), or to adults to prevent an outbreak of shingles (herpes zoster). There is, however, no human vaccine for herpes simplex viruses. As of 2022, there are active pre-clinical and clinical studies underway on herpes simplex in humans; vaccines are being developed for both treatment and prevention.
References
External links
Herpes simplex: Host viral protein interactions: A database of HSV-1 interacting host proteins
3D macromolecular structures of the Herpes simplex virus archived in the EM Data Bank (EMDB)
Simplexviruses
Unaccepted virus taxa
Sexually transmitted diseases and infections
Articles containing video clips | Herpes simplex virus | Biology | 5,994 |
13,778,073 | https://en.wikipedia.org/wiki/Developmental-behavioral%20surveillance%20and%20screening | Early detection of children with developmental-behavioral delays and disabilities is essential to ensure that the benefits of early intervention are maximized.
Background
Early intervention has been proven to help prevent school failure and reduce the need for expensive special education services, and it is associated with graduating from high school, avoiding teen pregnancy and violent crime, and becoming employed as an adult. Recent research from Head Start showed that for every $1 spent on early intervention, society as a whole saves $17.00. In the US, early intervention is guaranteed under the Individuals with Disabilities Education Act (IDEA) beginning at birth.
Because almost all children receive health care, primary care providers (e.g., nurses, family medicine physicians, and pediatricians) are charged by their various professional societies, by the Centers for Medicare and Medicaid Services, the Centers for Disease Control, and by IDEA to search for difficulties and make needed referrals. So what are the methods used to detect children with difficulties and how effective are they?
Developmental-behavioral screening
Screening tools are brief measures designed to sort those who probably have problems from those who do not. Screens are meant to be used on the asymptomatic and are not necessary when problems are obvious. Screens do not lead to a diagnosis but rather to a probability of a problem. The kind of problem that may exist is generally not defined by a screening test. The screens used in primary care are generally broad-band in nature, meaning that they tap a range of developmental domains, typically expressive and receptive language, fine and gross motor skills, self-help, social-emotional, and for older children pre-academic and academic skills. In contrast, narrow-band screens focus only on a single condition, such as mental health problems, and may parse, via factor scores, the probability of, for example, depression and anxiety versus attention deficits versus disorders of conduct. Typically, broad-band screens are used first and may be the only type of measure used to make referrals in primary care, referrals which are then followed up by in-depth or diagnostic testing, often with narrow-band screens used alongside them.
Screening measures require careful construction, research, and a high level of proof. High quality screens are ones that have been standardized (meaning administered in exactly the same way every time) on a large current (meaning in the last decade) nationally representative sample. Screens must be shown to be reliable (meaning that two different examiners get virtually the same results, and that measuring the same child over a short period of time, e.g., two weeks, returns nearly the same result). Screens must have proven validity, meaning that they are given alongside lengthier measures and found to have a strong relationship (usually via correlations). Validity studies should also view which problems are detected (e.g., movement disorders, language impairment, autism spectrum disorder, learning disabilities).
But the acid test of a quality screen, and what sets apart the psychometry of screens from any other type of test, is proof of accuracy. This means that test developers must show proof of sensitivity, i.e., the percentage of children with problems correctly detected, and specificity, meaning the percentage of children without problems correctly identified, usually with passing or negative test results. The standards for sensitivity and specificity are 70% to 80% at any single administration. While this may seem low, development is a moving target and repeated screening is needed to identify all in need. This also means that even quality screens make errors, but one study of four different screens showed that over-referrals (meaning children who fail screens but who are not found to be eligible for services upon more in-depth testing) are children with psychosocial risk factors and below-average performance. This is helpful information for marshalling non-special-education services, such as Head Start, after-school tutoring, Boys and Girls Clubs, parent training, etc. See the resources cited below for a description of quality measures and links to publishers. Screens are expensive to produce, translate, support, etc., and so all developmental screens are copyrighted products that must be purchased from publishers. However, most are inexpensive to deliver, with time and material costs between $1.00 and $4.00 per visit.
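The interplay between sensitivity, specificity, and prevalence described above can be illustrated with a short calculation. The following Python sketch uses hypothetical counts (1,000 children screened, 16% prevalence, and 75% sensitivity and specificity, consistent with the figures above); the function name and the numbers are illustrative only, not drawn from any published validation study.

def screening_metrics(true_pos, false_neg, true_neg, false_pos):
    # Sensitivity: share of affected children who fail (are flagged by) the screen.
    sensitivity = true_pos / (true_pos + false_neg)
    # Specificity: share of unaffected children who pass the screen.
    specificity = true_neg / (true_neg + false_pos)
    # Positive predictive value: chance that a failed screen reflects a true problem.
    ppv = true_pos / (true_pos + false_pos)
    return sensitivity, specificity, ppv

# 1,000 children, 160 with true delays (16% prevalence), 75% accuracy both ways.
sens, spec, ppv = screening_metrics(true_pos=120, false_neg=40,
                                    true_neg=630, false_pos=210)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, PPV={ppv:.0%}")

Under these assumed numbers the positive predictive value comes out to roughly 36%, which makes concrete why over-referrals are expected even from screens that meet the 70%–80% accuracy standards.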
Developmental-behavioral surveillance
Surveillance is the longitudinal process of getting "the big picture" of children's lives and intervening in potential problems preferably before they develop. Surveillance includes eliciting and addressing parents' concerns, and monitoring and addressing psychosocial risk factors that may deter development (e.g., limited parental education, more than 3 children in the home, single parenting, poverty, parental depression or other mental health problems, problematic parenting style such as not talking much with children, reading to them, etc.).
Surveillance involves the periodic use of broad-band developmental-behavioral screens but typically other kinds of measures are also deployed (preferably with quality tools enjoying psychometric support). Surveillance measures include tools eliciting and addressing parents' concerns, measures of psychosocial risk, parenting style, autism spectrum disorder, mental health, etc. Some available measures offer both surveillance and screening via longitudinal tracking forms for monitoring issues and progress. A combination of surveillance and screening is recommended by the American Academy of Pediatrics in their July 2006 policy statement.
Efficacy
Studies on the effectiveness of early detection show that when quality screening tests are used routinely, early detection and early intervention enrollment rates rise to meet prevalence figures identified by the Centers for Disease Control (e.g., see The National Library of Medicine for supporting studies and an example of an effective initiative conducted by The Center for Health Care Strategies). But, in the absence of quality measurement, only about one-quarter of eligible children ages 0–3 are detected and enrolled in early intervention. So why are detection rates typically so low?
Challenges to early detection in primary care
There are 8 major reasons why children with difficulties are not identified in primary care:
The tendency to use informal milestones checklists. These lack criteria and their items are not well-defined. For example, age-specific encounter forms typically used at well-child visits may include an item such as "knows colors". What does that mean? Must the child name colors? If so, how many? Does he or she have to point to colors when named? Or does he or she simply need to match them? The skill levels required for each of these tasks correspond to ages spanning several years of development. Further, informal checklists lack psychometric scrutiny, so we have no proof that asking about color knowledge is even a good predictor of developmental delays. In contrast, quality screening tools use questions proven to predict developmental status, and because such measures are standardized, the same task is presented the same way every time along with clear criteria for performance.
Over-reliance on clinical observation without supporting measurement. Clinical judgment is helpful (e.g., for identifying pallor, clamminess, fussiness and other symptoms of illness) but development and developmental problems are usually far too subtle to simply observe. Most children with difficulties are not dysmorphic and so lack any visible physical differences from other children. Most walk and talk but how well they do these things requires careful measurement. We do not put a hand to a forehead to detect a fever. We measure. Development and behavior require measurement with quality instruments if we are to detect delays and disabilities.
Failing to measure at each well-visit. Development develops, and developmental problems do too. A child may be developing normally at 9 months, but will she be at 18 months if she is not using words? Or at 24 months if she is not combining words? We cannot predict outcomes very well (except when problems are severe). Repeated measurement, and measurement with quality tools, is essential.
Difficulties communicating with families. Many parents don't raise concerns about their children. Those with limited education often do not know that primary care providers are interested in development and behavior, child-rearing, etc. Many informal questions to parents do not work well. For example, "Do you have worries about your child's development?" What is wrong with that question? The word "worries" is too strong, and only about 50% of parents know what "development" means. Only about 2% of families will answer affirmatively, even though the prevalence of problems in the 0–21 year age range is 16%–18% (www.cdc.gov). In contrast, quality tools use questions proven to work and are far more likely to detect difficulties.
Limited awareness of referral resources. Many children, even if administered a good screening tool and found to have problematic results, are not referred. Why? Many primary care providers are unaware of referral resources in their communities. Why? Early interventionists have not consistently informed providers of their services. They may not respond like the ideal sub-specialist (e.g., calling back, informing about results, engaging in collaborative decision making about treatment, etc.). See www.DBPeds.org for links to referral resources.
Failure to use a quality screening instrument. Unfortunately, the most famous and well known of screens, the Denver-II, lacks psychometric support. It under-identifies by about 50% or vastly over-refers, depending on how questionable scores are handled. That it is also a hands-on measure, taking longer to give than the usual 15–20 minute well-visit, means that most professionals use only selected items, which may further degrade what little accuracy there is. More accurate options, and ones more workable for primary care in that they can be completed by parents in waiting or exam rooms, include Parents' Evaluation of Developmental Status (PEDS), Ages and Stages Questionnaire (ASQ) and PEDS:Developmental Milestones (PEDS:DM), with all three tools offering compliance with the tenets of both surveillance and screening. Practices with nurse practitioners or developmental specialists, and early intervention intake services, may have the time to administer accurate but lengthier measures that elicit skills directly from children, e.g., the Brigance Screens (developed by Albert Brigance), Bayley Infant Neurodevelopmental Screener (BINS), or Battelle Developmental Inventory Screening Test (BDIST).
Failing to monitor referral rates. Many providers are unaware of the prevalence of disabilities and delays and get little feedback when they've failed to identify a child with difficulties. Families often leave the practice or stop showing up for well-visits. So, there is an acute need to consider the prevalence of difficulties in light of personal referral rates: Overall about 1 in 6 children between 0 and 21 will need special assistance: about 4% of children 0 – 2, 8% of children 0 – 3, 12% of children 0 – 4, and 16% of children 0 – 8.
Constraints of time and money. Many health care providers feel there is little time for screening during busy well visits. Generally this complaint reflects lack of awareness of screening measures that can be completed in waiting rooms (e.g., paper-pencil tools that families can self-administer independently, thus saving providers substantive time). Reimbursement for early detection has been notoriously poor. However, in 2005 the Centers for Medicare and Medicaid Services enabled providers to add the -25 modifier to their preventive service code and to bill separately from the well-visit for 96110 (the developmental-behavioral screening code). Nationally, reimbursement now averages about $10. Some states have handled this mandate differently (e.g., North Carolina provides higher reimbursement for well care but does not allow screening to be unbundled from the well-visit for separate billing). Typically private payers honor Medicaid mandates and follow suit with billing and coding, although this has not always occurred. The American Academy of Pediatrics has a Coding Hotline and advocates with private payers to provide reimbursement for screening.
Conclusion
The challenges of early detection in primary care are surmountable. But health care providers need to be better engaged by the early childhood community, trained in the use of tools that are accurate and effective in primary care, and reimbursed appropriately for their time. A number of model initiatives demonstrate that challenges of early detection are not insurmountable. Early detection initiatives that have encouraged greater contact between early childhood programs and primary care providers have greatly increased the likelihood of referral (see www.dbpeds.org for information on programs such as First Signs, ABCD, Pride, etc.)
See also
Denver Scale
Trivandrum Developmental Screening Chart
Notes
External links
American Academy of Pediatrics
The American Academy of Pediatrics' Section on Developmental and Behavioral Pediatrics website
the National Library of Medicine
Pediatrics
Developmental psychology
Developmental disabilities
Screening and assessment tools in child and adolescent psychiatry | Developmental-behavioral surveillance and screening | Biology | 2,614 |
3,094,328 | https://en.wikipedia.org/wiki/Tight%20binding | In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations.
Introduction
The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited.
Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist.
In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory.
The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations. In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional.
Historical background
By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finkelstein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independently of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster, sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch theorem; rather, first-principles calculations are carried out only at high-symmetry points, and the band structure is interpolated over the remainder of the Brillouin zone between these points.
In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites and atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions.
In recent research on strongly correlated materials, the tight binding approach is a basic approximation, because highly localized electrons like 3d transition metal electrons sometimes display strongly correlated behavior. In this case, the role of electron-electron interaction must be considered using the many-body physics description.
The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied. In 2019, Bannwarth et al. introduced the GFN2-xTB method, primarily for the calculation of structures and non-covalent interaction energies.
Mathematical formulation
We introduce the atomic orbitals $\varphi_m(\mathbf{r})$, which are eigenfunctions of the Hamiltonian $H_{\mathrm{at}}$ of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so is not a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". Any corrections to the atomic potential $\Delta U$ required to obtain the true Hamiltonian $H$ of the system are assumed small:

$H(\mathbf{r}) = H_{\mathrm{at}}(\mathbf{r}) + \Delta U(\mathbf{r}), \qquad \Delta U(\mathbf{r}) = \sum_{\mathbf{R}_n \neq \mathbf{0}} U_{\mathrm{at}}(\mathbf{r} - \mathbf{R}_n),$

where $U_{\mathrm{at}}(\mathbf{r} - \mathbf{R}_n)$ denotes the atomic potential of one atom located at site $\mathbf{R}_n$ in the crystal lattice. A solution $\psi(\mathbf{r})$ to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals $\varphi_m(\mathbf{r} - \mathbf{R}_n)$:

$\psi(\mathbf{r}) = \sum_{m, \mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n),$

where $m$ refers to the m-th atomic energy level.
Translational symmetry and normalization
The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor:

$\psi(\mathbf{r} + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, \psi(\mathbf{r}),$

where $\mathbf{k}$ is the wave vector of the wave function. Consequently, the coefficients satisfy

$\sum_{\mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell} \sum_{\mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n).$

By substituting $\mathbf{R}_p = \mathbf{R}_n - \mathbf{R}_\ell$, we find

$b_m(\mathbf{R}_p + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, b_m(\mathbf{R}_p)$

(where in the RHS we have replaced the dummy index $\mathbf{R}_n$ with $\mathbf{R}_p$), or

$b_m(\mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, b_m(\mathbf{0}).$

Normalizing the wave function to unity:

$\int d^3 r\; \psi^*(\mathbf{r})\, \psi(\mathbf{r}) = 1 = N\, b_m^*(\mathbf{0})\, b_m(\mathbf{0}) \sum_{\mathbf{R}_p} e^{i \mathbf{k} \cdot \mathbf{R}_p}\, \alpha_m(\mathbf{R}_p),$

so the normalization sets $b_m(\mathbf{0})$ as

$b_m^*(\mathbf{0})\, b_m(\mathbf{0}) = \frac{1}{N} \cdot \frac{1}{\sum_{\mathbf{R}_p} e^{i \mathbf{k} \cdot \mathbf{R}_p}\, \alpha_m(\mathbf{R}_p)},$

where $\alpha_m(\mathbf{R}_p) = \int d^3 r\; \varphi_m^*(\mathbf{r})\, \varphi_m(\mathbf{r} - \mathbf{R}_p)$ are the atomic overlap integrals, which frequently are neglected, resulting in

$b_m(\mathbf{0}) \approx \frac{1}{\sqrt{N}}$

and

$\psi(\mathbf{r}) \approx \frac{1}{\sqrt{N}} \sum_{\mathbf{R}_n} e^{i \mathbf{k} \cdot \mathbf{R}_n}\, \varphi_m(\mathbf{r} - \mathbf{R}_n).$
The tight binding Hamiltonian
Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies $\varepsilon_m$ are of the form

$\varepsilon_m(\mathbf{k}) = E_m - \frac{\beta_m + \sum_{\mathbf{R}_n \neq \mathbf{0}} \gamma_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n}}{1 + \sum_{\mathbf{R}_n \neq \mathbf{0}} \alpha_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n}} \approx E_m - \beta_m - \sum_{\mathbf{R}_n \neq \mathbf{0}} \gamma_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n}.$

Here in the last step it was assumed that the overlap integrals are zero, and thus $\alpha_m(\mathbf{R}_n) = \delta_{\mathbf{R}_n, \mathbf{0}}$. The energy then becomes

$\varepsilon_m(\mathbf{k}) = E_m - \beta_m - \sum_{\mathbf{R}_n \neq \mathbf{0}} \gamma_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n},$

where $E_m$ is the energy of the m-th atomic level, and $\beta_m$, $\gamma_m$ and $\alpha_m$ are the tight binding matrix elements discussed below.
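As a concrete illustration of this dispersion relation, consider (as an assumed example, not part of the original formulation) a single orbital per site on a simple cubic lattice with lattice constant $a$ and nearest-neighbour matrix elements $\gamma_m$ only. The sum over lattice vectors then reduces to

$\varepsilon_m(\mathbf{k}) = E_m - \beta_m - 2\gamma_m \left[ \cos(k_x a) + \cos(k_y a) + \cos(k_z a) \right],$

a band of total width $12\gamma_m$, directly proportional to the interatomic matrix element.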
The tight binding matrix elements
The elements $\beta_m = -\int d^3 r\; \varphi_m^*(\mathbf{r})\, \Delta U(\mathbf{r})\, \varphi_m(\mathbf{r})$ are the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large it means that potentials on neighboring atoms have a large influence on the energy of the central atom.

The next class of terms, $\gamma_{m,l}(\mathbf{R}_n) = -\int d^3 r\; \varphi_m^*(\mathbf{r})\, \Delta U(\mathbf{r})\, \varphi_l(\mathbf{r} - \mathbf{R}_n)$, is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two center integral, and it is the dominant term in the tight binding model.

The last class of terms, $\alpha_{m,l}(\mathbf{R}_n) = \int d^3 r\; \varphi_m^*(\mathbf{r})\, \varphi_l(\mathbf{r} - \mathbf{R}_n)$, denotes the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom.
Evaluation of the matrix elements
As mentioned before, the values of the $\beta_m$ matrix elements are not so large in comparison with the ionization energy, because the potentials of neighboring atoms on the central atom are limited. If $\beta_m$ is not relatively small, it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is not a very good model for the description of the band structure for some reason. The interatomic distances can be too small, or the charges on the atoms or ions in the lattice may be wrong, for example.
The interatomic matrix elements $\gamma_{m,l}$ can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates at some high-symmetry points in the Brillouin zone can be evaluated, and the values of the integrals in the matrix elements can be matched with band structure data from other sources.
The interatomic overlap matrix elements should be rather small or neglectable. If they are large it is again an indication that the tight binding model is of limited value for some purposes. Large overlap is an indication for too short interatomic distance for example. In metals and transition metals the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals but fits like that don't yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model.
The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, like in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model.
Connection to Wannier functions
Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series

$\psi_m(\mathbf{k}, \mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{n} a_m(\mathbf{r} - \mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n},$

where $\mathbf{R}_n$ denotes an atomic site in a periodic crystal lattice, $\mathbf{k}$ is the wave vector of the Bloch function, $\mathbf{r}$ is the electron position, $m$ is the band index, and the sum is over all $N$ atomic sites. The Bloch function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy $E_m(\mathbf{k})$, and is spread over the entire crystal volume.

Using the Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch functions:

$a_m(\mathbf{r} - \mathbf{R}_n) = \frac{1}{\sqrt{N}} \sum_{\mathbf{k}} e^{-i \mathbf{k} \cdot \mathbf{R}_n}\, \psi_m(\mathbf{k}, \mathbf{r}).$

These real space wave functions $a_m(\mathbf{r} - \mathbf{R}_n)$ are called Wannier functions, and are fairly closely localized to the atomic site $\mathbf{R}_n$. Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform.
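This Fourier relation can be checked numerically. The Python sketch below is a minimal illustration, assuming a single ideal band on a chain of N sites with periodic boundary conditions and the lattice constant set to 1; the values of N and R0 are arbitrary choices for the demonstration.

import numpy as np

N = 16                                    # number of lattice sites (illustrative)
k = 2 * np.pi * np.arange(N) / N          # allowed wave vectors of the finite chain
n = np.arange(N)                          # site positions R_n (a = 1)

# Site-basis Bloch coefficients of one tight-binding band: psi[n, k] = e^{ikn}/sqrt(N)
psi = np.exp(1j * np.outer(n, k)) / np.sqrt(N)

# Wannier coefficients at site R0, built from all Bloch states:
# w[n] = (1/sqrt(N)) * sum_k e^{-i k R0} psi[n, k]
R0 = 7
w = psi @ np.exp(-1j * k * R0) / np.sqrt(N)

print(np.round(np.abs(w), 8))             # ~1 at index R0 = 7, ~0 elsewhere

In this idealized single-band limit the Wannier coefficients collapse exactly onto one site; for real multi-band solids, the k-dependent phases (the gauge) of the Bloch functions must be fixed consistently before the inverse transform yields well-localized functions.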
However it is not easy to calculate directly either Bloch functions or Wannier functions. An approximate approach is necessary in the calculation of electronic structures of solids. If we consider the extreme case of isolated atoms, the Wannier function would become an isolated atomic orbital. That limit suggests the choice of an atomic wave function as an approximate form for the Wannier function, the so-called tight binding approximation.
Second quantization
Modern explanations of electronic structure, like the t-J model and the Hubbard model, are based on the tight binding model. Tight binding can be understood by working under a second quantization formalism.
Using the atomic orbitals as basis states, the second quantization Hamiltonian operator in the tight binding framework can be written as

$H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i,\sigma}\, c_{j,\sigma} + \mathrm{h.c.} \right),$

where:

$c^{\dagger}_{i,\sigma}$, $c_{j,\sigma}$ - creation and annihilation operators
$\sigma$ - spin polarization
$t$ - hopping integral
$\langle i,j \rangle$ - nearest neighbor index
$\mathrm{h.c.}$ - the hermitian conjugate of the other term(s)
Here, the hopping integral $t$ corresponds to the transfer integral $\gamma$ in the tight binding model. Considering the extreme case of $t = 0$, it is impossible for an electron to hop into neighboring sites. This case corresponds to the isolated atomic system. If the hopping term is turned on ($t > 0$), electrons can stay on both sites, lowering their kinetic energy.
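A minimal numerical sketch of this Hamiltonian (assuming a ring of N sites with one orbital per site, spin omitted, and illustrative values for N and t) builds the hopping matrix in the site basis and confirms that its eigenvalues follow the band form −2t cos(ka) derived in the one-dimensional example below.

import numpy as np

N, t = 8, 1.0
# Hopping Hamiltonian on a ring in the site basis: H = -t over nearest-neighbour pairs
H = np.zeros((N, N))
for i in range(N):
    j = (i + 1) % N                       # periodic boundary conditions
    H[i, j] = H[j, i] = -t

evals = np.sort(np.linalg.eigvalsh(H))
k = 2 * np.pi * np.arange(N) / N          # allowed wave vectors (a = 1)
print(np.allclose(evals, np.sort(-2 * t * np.cos(k))))   # True: band energies -2t cos(ka)

Setting t = 0 returns a fully degenerate spectrum, the isolated-atom limit described above.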
In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written as a general two-body operator,

$H_{ee} = \frac{1}{2} \sum_{i,j,k,l} \sum_{\sigma, \sigma'} \langle i j |\, v\, | k l \rangle\; c^{\dagger}_{i,\sigma}\, c^{\dagger}_{j,\sigma'}\, c_{l,\sigma'}\, c_{k,\sigma}.$

This interaction Hamiltonian includes the direct Coulomb interaction energy and the exchange interaction energy between electrons. Several novel physical phenomena are induced by this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions.
Example: one-dimensional s-band
Here the tight binding model is illustrated with an s-band model for a string of atoms with a single s-orbital in a straight line with spacing a and σ bonds between atomic sites.
To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals

$|k\rangle = \frac{1}{\sqrt{N}} \sum_{n=1}^{N} e^{i n k a}\, |n\rangle,$

where N = total number of sites and $k$ is a real parameter with $-\frac{\pi}{a} \leq k \leq \frac{\pi}{a}$. (This wave function is normalized to unity by the leading factor $1/\sqrt{N}$ provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as

$\langle n | H | n \rangle = E_0 = E_i - U,$
$\langle n \mp 1 | H | n \rangle = -\Delta,$
$\langle n | n \rangle = 1; \qquad \langle n \mp 1 | n \rangle = S.$

The energy $E_i$ is the ionization energy corresponding to the chosen atomic orbital, and $U$ is the energy shift of the orbital as a result of the potential of neighboring atoms. The $\langle n \mp 1 | H | n \rangle = -\Delta$ elements, which are the Slater and Koster interatomic matrix elements, are the bond energies. In this one dimensional s-band model we only have σ-bonds between the s-orbitals, with bond energy $E_{s,s,\sigma} = -\Delta$. The overlap between states on neighboring atoms is S. We can derive the energy of the state using the above equation:

$\langle k | H | k \rangle = \frac{1}{N} \sum_{n,m} e^{i (n - m) k a}\, \langle m | H | n \rangle,$

where, for example,

$\frac{1}{N} \sum_{n} \langle n | H | n \rangle = E_0$

and

$\frac{1}{N} \sum_{n} e^{\pm i k a}\, \langle n \mp 1 | H | n \rangle = -\Delta\, e^{\pm i k a}.$

Thus the energy of this state can be represented in the familiar form of the energy dispersion:

$E(k) = E_0 - 2\Delta \cos(k a).$
For $k = 0$ the energy is $E = E_0 - 2\Delta$ and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals.

For $k = \pi / (2a)$ the energy is $E = E_0$ and the state consists of a sum of atomic orbitals which are a factor $e^{i\pi/2}$ out of phase. This state can be viewed as a chain of non-bonding orbitals.

Finally for $k = \pi / a$ the energy is $E = E_0 + 2\Delta$ and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals.
This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a. Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished.
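A sketch of that extension in Python (the function, lattice constant, and parameter values below are illustrative assumptions, not fixed conventions) evaluates the nearest-neighbour s-band energy for an arbitrary set of neighbour vectors:

import numpy as np

def sband_energy(kvec, neighbors, E0=0.0, delta=1.0):
    # Nearest-neighbour s-band energy: E(k) = E0 - delta * sum_d exp(i k.d);
    # the imaginary parts cancel because neighbours come in +/-d pairs.
    phase = sum(np.exp(1j * np.dot(kvec, d)) for d in neighbors)
    return E0 - delta * phase.real

a = 1.0
# 1D chain: neighbours at +/-a reproduce E(k) = E0 - 2*delta*cos(ka)
chain = [np.array([a]), np.array([-a])]
# Simple cubic lattice: six neighbours along +/-x, +/-y, +/-z
sc = [s * a * np.eye(3)[i] for i in range(3) for s in (+1, -1)]

print(sband_energy(np.array([np.pi / a]), chain))  # band top of the chain: E0 + 2*delta
print(sband_energy(np.zeros(3), sc))               # band bottom of the cube: E0 - 6*delta

Swapping in a different neighbour list is all that is needed to handle body-centered or face-centered cubic lattices in the same way.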
Table of interatomic matrix elements
In 1954 J. C. Slater and G. F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements

$E_{i,j}(l, m, n),$

which can also be derived from the cubic harmonic orbitals straightforwardly. The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are, for example, the $V_{ss\sigma}$, $V_{pp\pi}$ and $V_{dd\delta}$ for sigma, pi and delta bonds. (Notice that these integrals should also depend on the distance between the atoms, i.e. are a function of $d$, even though it is not explicitly stated every time.)
The interatomic vector is expressed as

$\mathbf{R} = d\, (l, m, n),$

where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom.
Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices amounts to taking $(l, m, n) \to (-l, -m, -n)$, i.e. $E_{\beta,\alpha}(l, m, n) = E_{\alpha,\beta}(-l, -m, -n)$. For example, $E_{x,s}(l, m, n) = E_{s,x}(-l, -m, -n) = -l\, V_{sp\sigma}$.
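To illustrate how such a table is used in practice, the Python sketch below codes a few of the standard s- and p-orbital entries directly from the direction cosines; the function name is hypothetical and the numerical bond-integral values are arbitrary placeholders.

import numpy as np

def sk_sp_elements(l, m, n, V_sss, V_sps, V_pps, V_ppp):
    """A few Slater-Koster two-centre matrix elements for s and p orbitals,
    given direction cosines (l, m, n) and the bond integrals V_ss_sigma,
    V_sp_sigma, V_pp_sigma, V_pp_pi evaluated at the interatomic distance d."""
    return {
        ('s', 's'): V_sss,
        ('s', 'x'): l * V_sps,
        ('x', 'x'): l * l * V_pps + (1 - l * l) * V_ppp,
        ('x', 'y'): l * m * (V_pps - V_ppp),
        ('x', 'z'): l * n * (V_pps - V_ppp),
    }

# Bond along +x: (l, m, n) = (1, 0, 0), so x-x is pure sigma and x-y, x-z vanish
print(sk_sp_elements(1.0, 0.0, 0.0, V_sss=-1.0, V_sps=1.2, V_pps=2.0, V_ppp=-0.4))

Entries with swapped orbital order then follow the permutation rule above; for instance, the ('x', 's') element is obtained from the ('s', 'x') expression with the direction cosines negated.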
See also
Electronic band structure
Nearly-free electron model
Bloch's theorems
Kronig-Penney model
Fermi surface
Wannier function
Hubbard model
t-J model
Effective mass
Anderson's rule
Dynamical theory of diffraction
Solid state physics
Linear combination of atomic orbitals molecular orbital method (LCAO)
Holstein–Herring method
Peierls substitution
Hückel method
References
N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976).
Stephen Blundell, Magnetism in Condensed Matter (Oxford, 2001).
S. Maekawa et al., Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004).
John Singleton Band Theory and Electronic Properties of Solids (Oxford, 2001).
Further reading
External links
Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012
Tight-Binding Studio: A Technical Software Package to Find the Parameters of Tight-Binding Hamiltonian
Electronic structure methods
Electronic band structures | Tight binding | Physics,Chemistry,Materials_science | 3,471 |
1,363,291 | https://en.wikipedia.org/wiki/Medical%20device | A medical device is any device intended to be used for medical purposes. Significant potential for hazards are inherent when using a device for medical purposes and thus medical devices must be proved safe and effective with reasonable assurance before regulating governments allow marketing of the device in their country. As a general rule, as the associated risk of the device increases the amount of testing required to establish safety and efficacy also increases. Further, as associated risk increases the potential benefit to the patient must also increase.
Discovery of what would be considered a medical device by modern standards dates as far back as Neolithic Baluchistan, where dentists used flint-tipped drills and bowstrings. Study of archeology and Roman medical literature also indicates that many types of medical devices were in widespread use during the time of ancient Rome. In the United States it was not until the Federal Food, Drug, and Cosmetic Act (FD&C Act) in 1938 that medical devices were regulated. Later, in 1976, the Medical Device Amendments to the FD&C Act established medical device regulation and oversight as we know it today in the United States. Medical device regulation in Europe as we know it today came into effect in 1993 by what is collectively known as the Medical Device Directive (MDD). On May 26, 2017, the Medical Device Regulation (MDR) replaced the MDD.
Medical devices vary in both their intended use and indications for use. Examples range from simple, low-risk devices such as tongue depressors, medical thermometers, disposable gloves, and bedpans to complex, high-risk devices that are implanted and sustain life. Examples of high-risk devices include those with embedded software, such as pacemakers, as well as devices that assist in the conduct of medical testing, implants, and prostheses. The design of medical devices constitutes a major segment of the field of biomedical engineering.
The global medical device market was estimated to be between $220 and US$250 billion in 2013. The United States controls ≈40% of the global market followed by Europe (25%), Japan (15%), and the rest of the world (20%). Although collectively Europe has a larger share, Japan has the second largest country market share. The largest market shares in Europe (in order of market share size) belong to Germany, Italy, France, and the United Kingdom. The rest of the world comprises regions like (in no particular order) Australia, Canada, China, India, and Iran. This article discusses what constitutes a medical device in these different regions and throughout the article these regions will be discussed in order of their global market share.
Definition
A global definition for medical device is difficult to establish because there are numerous regulatory bodies worldwide overseeing the marketing of medical devices. Although these bodies often collaborate and discuss the definition in general, there are subtle differences in wording that prevent a global harmonization of the definition of a medical device, thus the appropriate definition of a medical device depends on the region. Often a portion of the definition of a medical device is intended to differentiate between medical devices and drugs, as the regulatory requirements of the two are different. Definitions also often recognize In vitro diagnostics as a subclass of medical devices and establish accessories as medical devices.
Definitions by region
United States (Food and Drug Administration)
Section 201(h) of the Federal Food Drug & Cosmetic (FD&C) Act defines a device as an "instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is:
recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them
Intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or
Intended to affect the structure or any function of the body of man or other animals, and
which does not achieve its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of its primary intended purposes. The term 'device' does not include software functions excluded pursuant to section 520(o)."
European Union
According to Article 1 of Council Directive 93/42/EEC, 'medical device' means any "instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, including the software intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and necessary for its proper application, intended by the manufacturer to be used for human beings for the purpose of:
diagnosis, prevention, monitoring, treatment or alleviation of disease,
diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap,
investigation, replacement or modification of the anatomy or of a physiological process,
control of conception,
and which does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but which may be assisted in its function by such means;"
EU Legal framework
Based on the New Approach, rules that relate to safety and performance of medical devices were harmonised in the EU in the 1990s. The New Approach, defined in a European Council Resolution of May 1985, represents an innovative way of technical harmonisation. It aims to remove technical barriers to trade and dispel the consequent uncertainty for economic operators, to facilitate free movement of goods inside the EU.
The previous core legal framework consisted of three directives:
Directive 90/385/EEC regarding active implantable medical devices
Directive 93/42/EEC regarding medical devices
Directive 98/79/EC regarding in vitro diagnostic medical devices (replaced, with effect from May 2022, by the In Vitro Diagnostic Medical Devices Regulation (IVDR)).
They aim at ensuring a high level of protection of human health and safety and the good functioning of the Single Market. These three main directives have been supplemented over time by several modifying and implementing directives, including the last technical revision brought about by Directive 2007/47 EC.
The government of each Member State must appoint a competent authority responsible for medical devices. The competent authority (CA) is a body with authority to act on behalf of the member state to ensure that member state government transposes requirements of medical device directives into national law and applies them. The CA reports to the minister of health in the member state. The CA in one Member State has no jurisdiction in any other member state, but exchanges information and tries to reach common positions.
In the UK, for example, the Medicines and Healthcare products Regulatory Agency (MHRA) acted as a CA; in Italy it is the Ministero della Salute (Ministry of Health). Medical devices must not be confused with medicinal products. In the EU, all medical devices must be identified with the CE mark. The conformity of a medium or high risk medical device with relevant regulations is also assessed by an external entity, the Notified Body, before it can be placed on the market.
In September 2012, the European Commission proposed new legislation aimed at enhancing safety, traceability, and transparency. The regulation was adopted in 2017.
The current core legal framework consists of two regulations, replacing the previous three directives:
The Medical Devices Regulation (MDR (EU) 2017/745)
The In Vitro Diagnostic medical devices regulation (IVDR (EU) 2017/746)
The two regulations are supplemented by several guidances developed by the Medical Devices Coordination Group (MDCG).
Japan
Article 2, Paragraph 4, of the Pharmaceutical Affairs Law (PAL) defines medical devices as "instruments and apparatus intended for use in diagnosis, cure or prevention of diseases in humans or other animals; intended to affect the structure or functions of the body of man or other animals."
Rest of the world
Canada
The term medical device, as defined in the Food and Drugs Act, is "any article, instrument, apparatus or contrivance, including any component, part or accessory thereof, manufactured, sold or represented for use in: the diagnosis, treatment, mitigation or prevention of a disease, disorder or abnormal physical state, or its symptoms, in a human being; the restoration, correction or modification of a body function or the body structure of a human being; the diagnosis of pregnancy in a human being; or the care of a human being during pregnancy and at and after the birth of a child, including the care of the child. It also includes a contraceptive device but does not include a drug."
The term covers a wide range of health or medical instruments used in the treatment, mitigation, diagnosis or prevention of a disease or abnormal physical condition. Health Canada reviews medical devices to assess their safety, effectiveness, and quality before authorizing their sale in Canada. According to the Act, medical device does not include any device that is intended for use in relation to animals.
India
India introduced the National Medical Device Policy in 2023. However, certain medical devices are notified as drugs under the Drugs & Cosmetics Act. Section 3(b)(iv), relating to the definition of "drugs", holds that "devices intended for internal or external use in the diagnosis, treatment, mitigation or prevention of disease or disorder in human beings or animals" are also drugs. As of April 2022, 14 classes of devices are classified as drugs.
Regulation and oversight
Risk classification
The regulatory authorities recognize different classes of medical devices based on their potential for harm if misused, design complexity, and their use characteristics. Each country or region defines these categories in different ways. The authorities also recognize that some devices are provided in combination with drugs, and regulation of these combination products takes this factor into consideration.
Classifying medical devices based on their risk is essential for maintaining patient and staff safety while simultaneously facilitating the marketing of medical products. By establishing different risk classifications, lower risk devices, for example, a stethoscope or tongue depressor, are not required to undergo the same level of testing that higher risk devices such as artificial pacemakers undergo. Establishing a hierarchy of risk classification allows regulatory bodies to provide flexibility when reviewing medical devices.
Classification by region
United States
Under the Food, Drug, and Cosmetic Act, the U.S. Food and Drug Administration recognizes three classes of medical devices, based on the level of control necessary to assure safety and effectiveness.
Class I
Class II
Class III
The classification procedures are described in the Code of Federal Regulations, Title 21, part 860 (usually known as 21 CFR 860).
Class I devices are subject to the least regulatory control and are not intended to help support or sustain life or be substantially important in preventing impairment to human health, and may not present an unreasonable risk of illness or injury. Examples of Class I devices include elastic bandages, examination gloves, and hand-held surgical instruments.
Class II devices are subject to special labeling requirements, mandatory performance standards and postmarket surveillance. Examples of Class II devices include acupuncture needles, powered wheelchairs, infusion pumps, air purifiers, surgical drapes, stereotaxic navigation systems, and surgical robots.
Class III devices are usually those that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury and require premarket approval. Examples of Class III devices include implantable pacemakers, pulse generators, HIV diagnostic tests, automated external defibrillators, and endosseous implants.
European Union (EU) and European Free Trade Association (EFTA)
The classification of medical devices in the European Union is outlined in Annex IX of the Council Directive 93/42/EEC and Annex VIII of the EU medical device regulation. There are basically four classes, ranging from low risk to high risk: Classes I, IIa, IIb, and III (this excludes in vitro diagnostics, including software, which fall into four classes of their own: from A (lowest risk) to D (highest risk)):
Class I Devices: Non-invasive, everyday devices or equipment. Class I devices are generally low risk and can include bandages, compression hosiery, or walking aids. Such devices require only that the manufacturer complete a Technical File.
Class Is Devices: Class Is devices are similarly non-invasive devices, however this sub-group extends to include sterile devices. Examples of Class Is devices include stethoscopes, examination gloves, colostomy bags, or oxygen masks. These devices also require a technical file, with the added requirement of an application to a European Notified Body for certification of manufacturing in conjunction with sterility standards.
Class Im Devices: This refers chiefly to similarly low-risk measuring devices. Included in this category are: thermometers, droppers, and non-invasive blood pressure measuring devices. Once again the manufacturer must provide a technical file and be certified by a European Notified Body for manufacturing in accordance with metrology regulations.
Class IIa Devices: Class IIa devices generally constitute low to medium risk and pertain mainly to devices installed within the body in the short term, for between 60 minutes and 30 days. Examples include hearing aids, blood transfusion tubes, and catheters. Requirements include technical files and a conformity test carried out by a European Notified Body.
Class IIb Devices: Slightly more complex than IIa devices, class IIb devices are generally medium to high risk and will often be devices installed within the body for periods of 30 days or longer. Examples include ventilators and intensive care monitoring equipment. Identical compliance route to Class IIa devices with an added requirement of a device type examination by a Notified Body.
Class III Devices: Class III devices are strictly high risk devices. Examples include balloon catheters, prosthetic heart valves, pacemakers, etc. The steps to approval here include a full quality assurance system audit, along with examination of both the device's design and the device itself by a European Notified Body.
The authorization of medical devices is guaranteed by a Declaration of Conformity. This declaration is issued by the manufacturer itself, but for products in Class Is, Im, Ir, IIa, IIb or III, it must be verified by a Certificate of Conformity issued by a Notified Body. A Notified Body is a public or private organisation that has been accredited to validate the compliance of the device to the European Directive. Medical devices that pertain to class I (on condition they do not require sterilization or do not measure a function) can be marketed purely by self-certification.
The European classification depends on rules that involve the medical device's duration of body contact, invasive character, use of an energy source, effect on the central circulation or nervous system, diagnostic impact, or incorporation of a medicinal product. Certified medical devices should have the CE mark on the packaging, insert leaflets, and so on. The packaging should also show harmonised pictograms and EN standardised logos to indicate essential features such as instructions for use, expiry date, manufacturer, sterile, and do not reuse.
In November 2018, the Federal Administrative Court of Switzerland decided that the "Sympto" app, used to analyze a woman's menstrual cycle, was a medical device because it calculates a fertility window for each woman using personal data. The manufacturer, Sympto-Therm Foundation, argued that this was a didactic, not a medical, process. The court held that an app is a medical device if it is to be used for any of the medical purposes provided by law and creates or modifies health information by calculations or comparison, providing information about an individual patient.
Japan
Medical devices (excluding in vitro diagnostics) in Japan are classified into four classes based on risk:
Classes I and II distinguish between extremely low and low risk devices. Classes III and IV, moderate and high risk respectively, are highly and specially controlled medical devices. In vitro diagnostics have three risk classifications.
Rest of the world
For the remaining regions in the world, the risk classifications are generally similar to the United States, European Union, and Japan or are a variant combining two or more of the three countries' risk classifications.
ASEAN
The ASEAN Medical Device Directive (AMDD) has been adopted by several southeast Asian countries. The nations are at varying stages of adopting and implementing the Directive. The AMDD classification is risk-based and defines four levels: A – Low Risk, B – Low to Moderate Risk, C – Moderate to High Risk, and D – High Risk.
Australia
The classification of medical devices in Australia is outlined in section 41BD of the Therapeutic Goods Act 1989 and Regulation 3.2 of the Therapeutic Goods Regulations 2002, under the control of the Therapeutic Goods Administration. Similarly to the EU classification, devices are ranked in several categories, in order of increasing risk and the associated required level of control. Various rules identify the device's category.
Canada
The Medical Devices Bureau of Health Canada recognizes four classes of medical devices based on the level of control necessary to assure the safety and effectiveness of the device. Class I devices present the lowest potential risk and do not require a licence. Class II devices require the manufacturer's declaration of device safety and effectiveness, whereas Class III and IV devices present a greater potential risk and are subject to in-depth scrutiny. A guidance document for device classification is published by Health Canada.
Canadian classes of medical devices correspond to the European Council Directive 93/42/EEC (MDD) devices:
Class I (Canada) generally corresponds to Class I (ECD)
Class II (Canada) generally corresponds to Class IIa (ECD)
Class III (Canada) generally corresponds to Class IIb (ECD)
Class IV (Canada) generally corresponds to Class III (ECD)
Examples include surgical instruments (Class I), contact lenses and ultrasound scanners (Class II), orthopedic implants and hemodialysis machines (Class III), and cardiac pacemakers (Class IV).
India
Medical devices in India are regulated by the Central Drugs Standard Control Organisation (CDSCO). Under the Medical Devices Rules, 2017, medical devices are classified according to the Global Harmonization Task Force (GHTF) framework, based on their associated risks.
The CDSCO classification operates alongside regulatory approval and registration by the CDSCO, which falls under the Drugs Controller General of India (DCGI). Every medical device in India follows a regulatory framework based on the Drugs and Cosmetics Act, 1940 and the Drugs and Cosmetics Rules, 1945. The CDSCO classification assigns risk classes to the numerous products notified and regulated as medical devices.
Iran
Iran produces about 2,000 types of medical devices and medical supplies, such as appliances, dental supplies, disposable sterile medical items, laboratory machines, various biomaterials, and dental implants. About 400 medical products are produced in the C and D risk classes, all licensed by the Iranian Health Ministry in terms of safety and performance based on EU standards.
Some producers in Iran manufacture medical devices and supplies according to European Union standards and export them to applicant countries, including some 40 Asian and European countries.
United Kingdom
Following Brexit, UK medical device regulation remained closely aligned with the EU medical device regulation, including classification. Regulation 7 of the Medical Devices Regulations 2002 (SI 2002 No 618, as amended) (the UK medical devices regulations) classifies general medical devices into four classes of increasing risk: Class I, IIa, IIb or III, in accordance with criteria in Annex IX of the UK medical devices regulations (as modified by Schedule 2A).
Validation and verification
Validation and verification of medical devices ensure that they fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests.
The main difference between the two is that validation is focused on ensuring that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification is focused on ensuring that the device meets its specified design requirements.
Standardization and regulatory concerns
The ISO standards for medical devices are covered by ICS 11.100.20 and 11.040.01. Quality and risk management for regulatory purposes are addressed by ISO 13485 and ISO 14971. ISO 13485:2016 is applicable to all providers and manufacturers of medical devices, components, contract services and distributors of medical devices; the standard is the basis for regulatory compliance in local markets and most export markets. Additionally, ISO 9001:2008 is relevant because it signifies that a company engages in the creation of new products: it requires that the development of manufactured products have an approval process and a set of rigorous quality standards and development records before the product is distributed. Further standards are IEC 60601-1 for electrical devices (mains-powered as well as battery-powered), EN 45502-1 for active implantable medical devices, and IEC 62304 for medical device software. The US FDA has also published a series of guidances for industry regarding this topic against 21 CFR 820 Subchapter H—Medical Devices. Subpart B includes quality system requirements, an important component of which are design controls (21 CFR 820.30). To meet the demands of these industry regulation standards, a growing number of medical device distributors are putting the complaint management process at the forefront of their quality management practices. This approach further mitigates risks and increases visibility of quality issues.
Starting in the late 1980s, the FDA increased its involvement in reviewing the development of medical device software. The precipitant for change was a radiation therapy device (Therac-25) that overdosed patients because of software coding errors. FDA is now focused on regulatory oversight on medical device software development process and system-level testing.
A 2011 study by Dr. Diana Zuckerman and Paul Brown of the National Center for Health Research, and Dr. Steven Nissen of the Cleveland Clinic, published in the Archives of Internal Medicine, showed that most medical devices recalled in the last five years for "serious health problems or death" had been previously approved by the FDA using the less stringent, and cheaper, 510(k) process. In a few cases, the devices had been deemed so low-risk that they did not undergo any FDA regulatory review. Of the 113 devices recalled, 35 were for cardiovascular issues. This study was the topic of Congressional hearings re-evaluating FDA procedures and oversight.
A 2014 study by Dr. Diana Zuckerman, Paul Brown, and Dr. Aditi Das of the National Center for Health Research, published in JAMA Internal Medicine, examined the scientific evidence that is publicly available about medical implants that were cleared by the FDA 510(k) process from 2008 to 2012. They found that scientific evidence supporting "substantial equivalence" to other devices already on the market was required by law to be publicly available, but the information was available for only 16% of the randomly selected implants, and only 10% provided clinical data. Of the more than 1,100 predicate implants that the new implants were substantially equivalent to, only 3% had any publicly available scientific evidence, and only 1% had clinical evidence of safety or effectiveness. The researchers concluded that publicly available scientific evidence on implants was needed to protect the public health.
In 2014–2015, a new international agreement, the Medical Device Single Audit Program (MDSAP), was put in place with five participant countries: Australia, Brazil, Canada, Japan, and the United States. The aim of this program was to "develop a process that allows a single audit, or inspection to ensure the medical device regulatory requirements for all five countries are satisfied".
In 2017, a study by Dr. Jay Ronquillo and Dr. Diana Zuckerman published in the peer-reviewed policy journal Milbank Quarterly found that electronic health records and other device software were recalled due to life-threatening flaws. The article pointed out the lack of safeguards against hacking and other cybersecurity threats, stating "current regulations are necessary but not sufficient for ensuring patient safety by identifying and eliminating dangerous defects in software currently on the market". They added that legislative changes resulting from the law entitled the 21st Century Cures Act "will further deregulate health IT, reducing safeguards that facilitate the reporting and timely recall of flawed medical software that could harm patients".
A study by Dr. Stephanie Fox-Rawlings and colleagues at the National Center for Health Research, published in 2018 in the policy journal Milbank Quarterly, investigated whether studies reviewed by the FDA for high-risk medical devices are proven safe and effective for women, minorities, or patients over 65 years of age. The law encourages patient diversity in clinical trials submitted to the FDA for review, but does not require it. The study determined that most high-risk medical devices are not tested and analyzed to ensure that they are safe and effective for all major demographic groups, particularly racial and ethnic minorities and people over 65. Therefore, they do not provide information about safety or effectiveness that would help patients and physicians make well informed decisions.
In 2018, an investigation involving journalists across 36 countries coordinated by the International Consortium of Investigative Journalists (ICIJ) prompted calls for reform in the United States, particularly around the 510(k) substantial equivalence process; the investigation prompted similar calls in the UK and European Union.
Packaging standards
Medical device packaging is highly regulated. Often medical devices and products are sterilized in the package.
Sterility must be maintained throughout distribution to allow immediate use by physicians. A series of special packaging tests measure the ability of the package to maintain sterility. Relevant standards include:
ASTM F2097 – Standard Guide for Design and Evaluation of Primary Flexible Packaging for Medical Products
ASTM F2475-11 – Standard Guide for Biocompatibility Evaluation of Medical Device Packaging Materials
EN 868 Packaging materials and systems for medical devices to be sterilized, General requirements and test methods
ISO 11607 Packaging for terminally sterilized medical devices
Package testing is part of a quality management system including verification and validation. It is important to document and ensure that packages meet regulations and end-use requirements. Manufacturing processes must be controlled and validated to ensure consistent performance. EN ISO 15223-1 defines symbols that can be used to convey important information on packaging and labeling.
Biocompatibility standards
ISO 10993 - Biological Evaluation of Medical Devices
Cleanliness standards
Medical device cleanliness has come under greater scrutiny since 2000, when Sulzer Orthopedics recalled several thousand metal hip implants that contained a manufacturing residue. Based on this event, ASTM established a new task group (F04.15.17) to establish test methods, guidance documents, and other standards to address cleanliness of medical devices. This task group has issued three standards for permanent implants to date:
ASTM F2459: Standard Test Method for Extracting Residue from Metallic Medical Components and Quantifying via Gravimetric Analysis
ASTM F2847: Standard Practice for Reporting and Assessment of Residues on Single Use Implants
ASTM F3172: Standard Guide for Validating Cleaning Processes Used During the Manufacture of Medical Devices
In addition, the cleanliness of re-usable devices has led to a series of standards, including:
ASTM E2314: Standard Test Method for Determination of Effectiveness of Cleaning Processes for Reusable Medical Instruments Using a Microbiologic Method (Simulated Use Test)"
ASTM D7225: Standard Guide for Blood Cleaning Efficiency of Detergents and Washer-Disinfectors
ASTM F3208: Standard Guide for Selecting Test Soils for Validation of Cleaning Methods for Reusable Medical Devices
The ASTM F04.15.17 task group is working on several new standards that involve designing implants for cleaning, selection and testing of brushes for cleaning reusable devices, and cleaning assessment of medical devices made by additive manufacturing. Additionally, the FDA is establishing new guidelines for reprocessing reusable medical devices, such as orthoscopic shavers, endoscopes, and suction tubes. New research on keeping medical tools pathogen-free has been published in ACS Applied Materials & Interfaces.
Safety standards
Design, prototyping, and product development
Medical device manufacturing requires a level of process control according to the classification of the device: the higher the risk, the more controls are required. In the initial R&D phase, manufacturers are now beginning to design for manufacturability. This means products can be more precisely engineered for production, resulting in shorter lead times, tighter tolerances, and more advanced specifications and prototypes. With the aid of CAD or modelling platforms, the work is now much faster, and these tools can also act as a means of strategic design generation as well as a marketing tool.
Failure to meet cost targets will lead to substantial losses for an organisation. In addition, with global competition, the R&D of new devices is not just a necessity but an imperative for medical device manufacturers. The realisation of a new design can be very costly, especially with shorter product life cycles. As technology advances, the quality, safety, and reliability of devices typically increase substantially over time.
For example, initial models of the artificial cardiac pacemaker were external support devices that transmitted pulses of electricity to the heart muscles via electrode leads on the chest. The electrodes contacted the heart directly through the chest, allowing stimulation pulses to pass through the body. Recipients typically developed an infection at the entrance of the electrodes, which led to the subsequent trial of the first internal pacemaker, with electrodes attached to the myocardium by thoracotomy. Future developments led to an isotope power source that would last for the lifespan of the patient.
Software
Mobile medical applications
With the rise of smartphone usage in the medical space, the FDA issued guidance in 2013 to regulate mobile medical applications and protect users from their unintended use, soon followed by European and other regulatory agencies. This guidance distinguishes the apps subject to regulation based on the marketing claims made for the apps. Incorporating the guidelines during the development phase of such apps can be considered developing a medical device; the regulations have to adapt, and proposals for expedited approval may be required, because of the "versioned" nature of mobile application development.
On September 25, 2013, the FDA released a draft guidance document for regulation of mobile medical applications, to clarify what kind of mobile apps related to health would not be regulated, and which would be.
Cybersecurity
Medical devices such as pacemakers, insulin pumps, operating room monitors, defibrillators, and surgical instruments, including deep-brain stimulators, can incorporate the ability to transmit vital health information from a patient's body to medical professionals. Some of these devices can be remotely controlled. This has engendered concern about privacy and security issues, human error, and technical glitches with this technology. While only a few studies have looked at the susceptibility of medical devices to hacking, there is a risk.

In 2008, computer scientists proved that pacemakers and defibrillators can be hacked wirelessly via radio hardware, an antenna, and a personal computer. These researchers showed they could shut down a combination heart defibrillator and pacemaker and reprogram it to deliver potentially lethal shocks or run down its battery. Jay Radcliff, a security researcher interested in the security of medical devices, raised fears about the safety of these devices, sharing his concerns at the Black Hat security conference. Radcliff fears that the devices are vulnerable and has found that a lethal attack is possible against those with insulin pumps and glucose monitors.

Some medical device makers downplay the threat from such attacks, arguing that the demonstrated attacks have been performed by skilled security researchers and are unlikely to occur in the real world. At the same time, other makers have asked software security experts to investigate the safety of their devices. As recently as June 2011, security experts showed that by using readily available hardware and a user manual, a scientist could tap into the information on the system of a wireless insulin pump in combination with a glucose monitor; with the PIN of the device, the scientist could wirelessly control the dosage of the insulin. Anand Raghunathan, a researcher in this study, explains that medical devices are getting smaller and lighter so that they can be easily worn. The downside is that additional security features would put an extra strain on the battery and size and drive up prices.

Dr. William Maisel offered some thoughts on the motivation to engage in this activity: acquisition of private information for financial gain or competitive advantage, damage to a device manufacturer's reputation, sabotage, intent to inflict financial or personal injury, or mere satisfaction for the attacker. Researchers suggest a few safeguards. One would be to use rolling codes. Another solution is to use a technology called "body-coupled communication" that uses the human skin as a waveguide for wireless communication. On 28 December 2016, the US Food and Drug Administration released its recommendations, which are not legally enforceable, for how medical device manufacturers should maintain the security of Internet-connected devices.
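To make the rolling-code safeguard concrete, the sketch below shows one common construction: the transmitter derives each one-time code from a shared secret key and an incrementing counter, and the receiver accepts a code only if it corresponds to a counter value ahead of the last one it accepted, so replayed transmissions are rejected. This is a minimal illustration only, not a medical-device implementation; the HMAC-SHA256 construction, the window size, and the names (rolling_code, Receiver) are assumptions for the example.

```python
import hmac
import hashlib

def rolling_code(key: bytes, counter: int) -> bytes:
    """Derive a short one-time code from a shared key and a message counter."""
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:8]

class Receiver:
    """Accept a code only if it matches a counter ahead of the last accepted one."""

    def __init__(self, key: bytes, window: int = 16):
        self.key = key
        self.window = window        # tolerate a few lost transmissions, but bound search
        self.last_counter = -1

    def accept(self, code: bytes) -> bool:
        for c in range(self.last_counter + 1, self.last_counter + 1 + self.window):
            if hmac.compare_digest(code, rolling_code(self.key, c)):
                self.last_counter = c   # advancing the counter invalidates replays
                return True
        return False

key = b"shared-device-secret"
receiver = Receiver(key)
command = rolling_code(key, 0)          # transmitter sends its first command
assert receiver.accept(command)         # fresh code: accepted
assert not receiver.accept(command)     # identical replay: rejected
```

The acceptance window lets the receiver resynchronize after a few missed transmissions while still ensuring that any previously used code can never be accepted again.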
Similar to hazards, cybersecurity threats and vulnerabilities cannot be eliminated but must be managed and reduced to a reasonable level. When designing medical devices, the tier of cybersecurity risk should be determined early in the process in order to establish a cybersecurity vulnerability and management approach (including a set of cybersecurity design controls). The medical device design approach employed should be consistent with the NIST Cybersecurity Framework for managing cybersecurity-related risks.
In August 2013, the FDA released over 20 regulations aiming to improve the security of data in medical devices, in response to growing cybersecurity risks.
Artificial intelligence
The number of approved medical devices using artificial intelligence or machine learning (AI/ML) is increasing. As of 2020, there were several hundred AI/ML medical devices approved by the US FDA or CE-marked in Europe. Most AI/ML devices focus on radiology. As of 2020, there was no specific regulatory pathway for AI/ML-based medical devices in the US or Europe. However, in January 2021, the FDA published a proposed regulatory framework for AI/ML-based software, and the EU Medical Device Regulation, which replaced the EU Medical Device Directive in May 2021, defines regulatory requirements for medical devices, including AI/ML software.
Medical equipment
Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions.
Types
There are several basic types:
Diagnostic equipment includes medical imaging machines, used to aid in diagnosis. Examples are ultrasound and MRI machines, PET and CT scanners, and x-ray machines.
Treatment equipment includes infusion pumps, medical lasers and LASIK surgical machines.
Life support equipment is used to maintain a patient's bodily function. This includes medical ventilators, incubators, anaesthetic machines, heart-lung machines, ECMO, and dialysis machines.
Medical monitors allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG, EEG, and blood pressure.
Medical laboratory equipment automates or helps analyze blood, urine, genes, and dissolved gases in the blood.
Diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus, such as in the case of continuous glucose monitoring.
Therapeutic equipment includes physical therapy machines such as continuous passive range of motion (CPM) machines.
Air purifying equipment may be used in the periphery of the operating room or at point sources including near the surgical site for the removal of surgical plume.
The identification of medical devices has been recently improved by the introduction of Unique Device Identification (UDI) and standardised naming using the Global Medical Device Nomenclature (GMDN) which have been endorsed by the International Medical Device Regulatory Forum (IMDRF).
A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment. BMETs mainly act as an interface between doctors and equipment.
Medical equipment donation
There are challenges surrounding the availability of medical equipment from a global health perspective, with low-resource countries unable to obtain or afford essential and life-saving equipment. In these settings, well-intentioned equipment donation from high- to low-resource settings, by individuals, organisations, manufacturers and charities, is a frequently used strategy to address this. However, issues with maintenance, availability of biomedical equipment technicians (BMETs), supply chains, user education and the appropriateness of donations mean these donations frequently fail to deliver the intended benefits. The WHO estimates that 95% of medical equipment in low- and middle-income countries (LMICs) is imported and 80% of it is funded by international donors or foreign governments. While up to 70% of medical equipment in sub-Saharan Africa is donated, only 10%–30% of donated equipment becomes operational. A review of current practice and guidelines for the donation of medical equipment for surgical and anaesthesia care in LMICs has demonstrated a high level of complexity within the donation process and numerous shortcomings. Greater collaboration and planning between donors and recipients is required, together with evaluation of donation programs and concerted advocacy to educate donors and recipients on existing equipment donation guidelines and policies.
The circulation of medical equipment is not limited to donations. The rise of reuse- and recycle-based solutions, where gently used medical equipment is donated and redistributed to communities in need, is another form of equipment distribution. Interest in reusing and recycling emerged in the 1980s, when the potential health hazards of medical waste on East Coast beaches were highlighted by the media. The large demand for medical equipment and single-use medical devices, the need for waste reduction, and the problem of unequal access for low-income communities led Congress to enact the Medical Waste Tracking Act of 1988. Medical equipment can be donated either by governments or by non-governmental organizations, domestic or international. Donated equipment ranges from bedside assistance to radiological equipment.
Medical equipment donation has come under scrutiny with regard to donated-device failure and loss of warranty in the case of previous ownership. Most production companies' warranties do not extend to reused or donated devices, or to devices donated by initial owners/patients. Such reuse raises matters of patient autonomy, medical ethics, and legality. These concerns conflict with the importance of equal access to healthcare resources and the goal of serving the greatest good for the greatest number.
Academic resources
Medical & Biological Engineering & Computing journal
Expert Review of Medical Devices journal
University-based research packaging institutes
University of Minnesota - Medical Devices Center (MDC)
University of Strathclyde - Strathclyde Institute of Medical Devices (SIMD)
Flinders University - Medical Device Research Institute (MDRI)
Michigan State University - School of Packaging (SoP)
IIT Bombay - Biomedical Engineering and Technology (incubation) Centre (BETiC)
See also
Assistive technology
Clinical engineer
Design history file
Durable medical equipment
Electromagnetic compatibility
Electronic health record
Federal Institute for Drugs and Medical Devices
GHTF
Health Level 7
Home medical equipment
Instruments used in general medicine
Instruments used in obstetrics and gynecology
List of common EMC test standards
Medical grade silicone
Medical logistics
Medical technology
Pharmacopoeia
Safety engineer
Telemedicine
References
Further reading
External links
Health care | Medical device | Biology | 8,136 |
76,844,678 | https://en.wikipedia.org/wiki/UGC%209684 | UGC 9684 is a barred spiral galaxy with a ring structure in the Boötes constellation. It is located 250 million light-years from the Solar System and has an approximate diameter of 90,000 light-years.
The luminosity class of UGC 9684 is I-II, and a study published in 2022 classified it as an active star-forming galaxy, producing roughly one solar mass of stars every few years.
Study of the star formation rate of UGC 9684
Scientists who studied UGC 9684 sought to determine its star-formation rate. To do this, they used the Fitting and Assessment of Synthetic Templates (FAST) code, together with ultraviolet, optical, and near-infrared luminosity measurements from GALEX, SDSS, and the final release of the 2MASS extended source catalog (Jarrett et al. 2000), all retrieved from the NASA/IPAC Extragalactic Database.
For the star formation history, they employed an exponentially declining function (SFR ∝ e−t/τ) and a delayed function (SFR ∝ t × e−t/τ), together with the stellar population libraries of Bruzual & Charlot and Conroy et al. Several metallicity estimates, published by Prieto et al. 2008 and Kelly & Kirshner 2012, mostly agree that the galaxy is slightly above solar, with an oxygen abundance of 12 + log(O/H) ≈ 9.0, corresponding to ~2 Z⊙.
The scientists found the star-formation rate of UGC 9684 to be 0.25–0.39 M⊙ yr−1. They also found the total stellar mass of the galaxy to be M⋆ = (2.0–3.5) × 1010 M⊙, which corresponds to a current specific star formation rate of sSFR ≈ 0.01 Gyr−1. This is higher than typical literature values but compatible with the large number of recent supernova events in UGC 9684.
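As an arithmetic check (using the midpoints of the quoted ranges, 0.32 M⊙ yr−1 and 2.75 × 1010 M⊙, which are illustrative choices rather than values singled out by the study), the specific star formation rate follows directly from dividing the SFR by the stellar mass:

$$\mathrm{sSFR} = \frac{\mathrm{SFR}}{M_\star} \approx \frac{0.32\,M_\odot\,\mathrm{yr}^{-1}}{2.75\times 10^{10}\,M_\odot} \approx 1.2\times 10^{-11}\,\mathrm{yr}^{-1} \approx 0.012\ \mathrm{Gyr}^{-1},$$

which rounds to the quoted sSFR ≈ 0.01 Gyr−1.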
Supernovae
Three supernovae and one astronomical transient have been discovered in UGC 9684: SN 2006ed, SN 2012ib, AT 2017cgh, and SN 2020pni. This makes it one of the most active supernova-producing galaxies.
SN 2006ed
SN 2006ed was discovered on September 18, 2006, in unfiltered CCD images, by N. Joubert, D. R. Madison, R. Mostardi, H. Khandrika and W. Li of the University of California, Berkeley, on behalf of the Lick Observatory Supernova Search (LOSS) program. SN 2006ed had a magnitude of 19.0. It was located 1".8 east and 7".2 south of the nucleus. This supernova was Type II.
SN 2012ib
SN 2012ib was discovered on December 20, 2012, by amateur astronomer V. Shumkov of the Sternberg Astronomical Institute (SAI), on four 60-second unfiltered images from the MASTER-Amur robotic telescope, a 0.40-m f/2.5 reflector. The supernova was located 48".7 east and 0".4 south of the nucleus and had a magnitude of 18.9. The supernova was Type Ib/c.
AT 2017cgh
AT 2017cgh was discovered on March 15, 2017, by the Pan-STARRS1 Science Consortium. It was located 0".0 east and 0".0 north of the nucleus (coincident with the nucleus), with a magnitude of 17.7. This astronomical transient was of unknown type and was never officially classified as a supernova.
SN 2020pni
SN 2020pni was discovered on July 16, 2020, by a team of astronomers on behalf of the ALeRCE broker, in ZTF r-band images taken with the Palomar 1.2-m Oschin telescope. It was located 5".7 west and 5".0 south of the nucleus with a magnitude of 17.0. The supernova was Type II; its progenitor, a massive star, was enriched in helium and nitrogen, with relative abundances in mass fractions of 0.30–0.40 and 8.2 × 10−3, respectively.
A first study, one day after the discovery, showed significant He II emission with strong flash features. Another study showed that over the following 4 days there was an increase in the velocity of the hydrogen lines (from ~250 to ~1000 km/s), suggesting a complex circumstellar medium (CSM). The presence of dense and confined CSM, as well as its inhomogeneous structure, indicates a phase of enhanced mass loss of the SN 2020pni progenitor in the year before the explosion. As of 2023, the supernova has faded from view.
References
9684
Boötes
053758
053758
Barred spiral galaxies
SDSS objects
2MASS objects
IRAS catalogue objects
Starburst galaxies
+07-31-024 | UGC 9684 | Astronomy | 1,046 |
30,659,735 | https://en.wikipedia.org/wiki/National%20Terrorism%20Advisory%20System | The National Terrorism Advisory System (NTAS) is a terrorism threat advisory scale used by the US Department of Homeland Security since April 26, 2011.
The NTAS is the replacement for the often-criticized, color-coded Homeland Security Advisory System introduced by the George W. Bush administration in 2002. Homeland Security Secretary Janet Napolitano said that the color-coded system often presented "little practical information" to the public and that the NTAS would provide alerts "specific to the threat" with "a specified end date."
On December 16, 2015, Secretary Jeh Johnson activated the bulletin capability for the first time.
The Department of Homeland Security added an intermediate threat level in 2015, after the department identified a "new phase" in the global terrorist threat against the homeland.
Background
The Homeland Security Advisory System was created in response to the 9/11 attacks by the George W. Bush administration. After its announcement, Peter T. King, a Republican Representative from New York, said that the color-based assessments were useful at the time of their creation but that a more specific system was now needed. The five-level color system has been criticized as being vague and ineffective, and alert levels have rarely changed from the yellow ("elevated") and orange ("high") levels.
Mississippi Democratic Representative Bennie Thompson said that the color codes were often better at causing "Americans to be scared" rather than at telling citizens "the reason, how to proceed, or for how long to be on alert." The color-coded system has also been ridiculed by television comedians and shows such as Saturday Night Live.
In July 2009, Napolitano created a task force to reassess the scale and concluded that the Homeland Security Advisory System was unclear and lacked public support, and the task force recommended discontinuing the scale. In November 2010, the Department of Homeland Security submitted a draft plan to overhaul the color system and create what one official called "a system that communicates precise, actionable information based on the latest intelligence."
Launch
The system was announced on January 27, 2011, by Secretary of Homeland Security Napolitano, during a speech at George Washington University. Her official announcement followed reports about the NTAS that had surfaced the day before.
Introducing the National Terrorism Advisory System, Napolitano said, "Today I announce the end of the old system of color-coded alerts. In its place, we will implement a new system that's built on a clear and simple premise: When a threat develops that could impact you—the public—we will tell you. We will provide whatever information we can so you know how to protect yourselves, your families, and your communities." Her speech was timed to complement US President Barack Obama's 2011 State of the Union Address, two days earlier.
Modifications
In 2011, DHS added bulletins to NTAS to distribute information about trends and non-specific threats.
On December 7, 2015, a day after an Address to the Nation by the President from the Oval Office, a plan to add a new "intermediate" threat level to the NTAS was announced by DHS Secretary Johnson to reflect a "new phase" in the global terrorist threat against the homeland following the November 2015 Paris attacks and the 2015 San Bernardino attack.
The DHS Secretary under the Obama administration stated that what is now understood as the normal alert level would have been considered a higher level years ago, but because of the continual threat, a high threat level is now considered the "baseline."
Description
Alerts are issued under the categories of "elevated," "intermediate," or "imminent." According to Napolitano, "When [the Department of Homeland Security has] information about a specific, credible threat, [it] will issue a formal alert providing as much information as [it] can." When an alert is provided to the public it includes the following information if available: geographic region, mode of transportation, critical infrastructure potentially affected by the threat, protective actions authorities are taking, and steps individuals or communities should be taking to protect themselves and families. That includes providing government agencies and emergency officials with threat assessments as well as using news outlets and social networking resources to notify the public.
It also outlines steps to take in response to a particular terrorist threat. Individual threat alerts are issued for a specific amount of time, and the threat alert then automatically expires.
If new information becomes available, the threat alert may be extended. Information on whether the threat has been extended or is expiring is distributed to the public in the same way that the original notification was made.
References
External links
NTAS alerts list
National Terrorism Advisory System Public Guide, Department of Homeland Security
Alert measurement systems
United States Department of Homeland Security
Disaster preparedness in the United States
2011 introductions | National Terrorism Advisory System | Technology | 971 |
76,222 | https://en.wikipedia.org/wiki/Flaviviridae | Flaviviridae is a family of enveloped positive-strand RNA viruses which mainly infect mammals and birds. They are primarily spread through arthropod vectors (mainly ticks and mosquitoes). The family gets its name from the yellow fever virus; flavus is Latin for "yellow", and yellow fever in turn was named because of its propensity to cause jaundice in victims. There are 89 species in the family divided among four genera. Diseases associated with the group include: hepatitis (hepaciviruses), hemorrhagic syndromes, fatal mucosal disease (pestiviruses), hemorrhagic fever, encephalitis, and the birth defect microcephaly (flaviviruses).
Structure
Virus particles are enveloped and spherical with icosahedral-like geometries that have pseudo T=3 symmetry. They are about 40–60 nm in diameter.
Genome
Members of the family Flaviviridae have monopartite, linear, single-stranded RNA genomes of positive polarity, 9.6 to 12.3 kilobases in total length. The 5'-termini of flaviviruses carry a methylated nucleotide cap, while other members of this family are uncapped and encode an internal ribosome entry site.
The genome encodes a single polyprotein with multiple transmembrane domains that is cleaved, by both host and viral proteases, into structural and non-structural proteins. Among the non-structural protein products (NS), the locations and sequences of NS3 and NS5, which contain motifs essential for polyprotein processing and RNA replication respectively, are relatively well conserved across the family and may be useful for phylogenetic analysis.
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by attachment of the viral envelope protein E to host receptors, which mediates clathrin-mediated endocytosis. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. Translation takes place by viral initiation. The virion assembles by budding through intracellular membranes and exits the host cell by exocytosis.
Host range and evolutionary history
A wide variety of natural hosts are used by different members of the Flaviviridae, including fish, mammals (including humans), and various invertebrates such as mollusks and crustaceans. The genomes of these flaviviruses show close synteny with that of the flavivirus type species, yellow fever virus. One flavivirus, the Wenzhou shark flavivirus, infects both Pacific spadenose sharks (Scoliodon macrorhynchos) and gazami crabs (Portunus trituberculatus) with overlapping ranges, raising the possibility of a two-host marine lifecycle. However, another clade of flaviviruses, the insect-specific flaviviruses, have genomes that do not show strong synteny with any of these groups, suggesting a complex evolutionary history.
Flavivirus endogenous viral elements, traces of flavivirus genomes integrated into the host's DNA, are found in many species, including the tadpole shrimp Lepidurus arcticus, the water flea Daphnia magna and the freshwater jellyfish Craspedacusta sowerbii, suggesting ancient coevolution between animal and flavivirus lineages. Many of the well-known members of the family causing disease in vertebrates are transmitted via arthropod vectors (ticks and mosquitoes).
Taxonomy
The Flaviviridae are part of RNA virus supergroup II, which includes certain plant viruses and bacterial viruses.
The family has four genera:
Genus Flavivirus, renamed Orthoflavivirus in 2023 (includes Dengue virus, Japanese encephalitis, Kyasanur Forest disease, Powassan virus, West Nile virus, Yellow fever virus, and Zika virus)
Genus Hepacivirus (includes Hepacivirus C (hepatitis C virus) and Hepacivirus B (GB virus B))
Genus Pegivirus (includes Pegivirus A (GB virus A), Pegivirus C (GB virus C), and Pegivirus B (GB virus D))
Genus Pestivirus (includes Pestivirus A (bovine viral diarrhea virus 1) and Pestivirus C (classical swine fever virus, previously hog cholera virus)). Viruses in this genus infect nonhuman mammals.
Unclassified
Other Orthoflaviviruses are known that have yet to be classified. These include Wenling shark virus.
Jingmenvirus is a group of unclassified viruses in the family which includes Alongshan virus, Guaico Culex virus, Jingmen tick virus and Mogiana tick virus. These viruses have a segmented genome of four or five pieces. Two of these segments are derived from flaviviruses.
A number of viruses may be related to the flaviviruses, but have features that are atypical of the flaviviruses. These include citrus Jingmen-like virus, soybean cyst nematode virus 5, Toxocara canis larva agent, Wuhan cricket virus, and possibly Gentian Kobu-sho-associated virus.
Clinical importance
Major diseases caused by members of the family Flaviviridae include yellow fever, dengue fever, Japanese encephalitis, West Nile fever, Zika fever, hepatitis C, and classical swine fever.
References
External links
ICTV Report: Flaviviridae
Flaviviridae Genomes database search results from the Viral Bioinformatics Resource Center
Viralzone: Flaviviridae
Virus Pathogen Database and Analysis Resource (ViPR): Flaviviridae
Virus families
Riboviria | Flaviviridae | Biology | 1,190 |
19,249,767 | https://en.wikipedia.org/wiki/Hip%20hip%20hooray | Hip hip hooray (also hippity hip hooray; hooray may also be spelled and pronounced hoorah, hurrah, hurray etc.) is a cheer called out to express congratulation toward someone or something, in the English-speaking world and elsewhere, usually given three times.
By a sole speaker, it is a form of interjection. In a group, it takes the form of call and response: the cheer is initiated by one person exclaiming "Three cheers for...[someone or something]" (or, more archaically, "Three times three"), then calling out "hip hip" (archaically, "hip hip hip") three times, each time being responded by "hooray" or "hurrah".
The cheer continues to be used to express congratulations. In Australia, South Africa, and to a lesser extent the United Kingdom, the cheer is usually expressed after the singing of "Happy Birthday to You". In Canada and the United Kingdom, the cheer has been used to greet and salute the monarch at public events.
History
The call was recorded in England at the beginning of the 19th century in connection with making a toast. Eighteenth-century dictionaries list "hip" as an attention-getting interjection, and in an example from 1790 it is repeated. "Hip-hip" was added as a preparatory call before making a toast or cheer in the early 19th century, probably after 1806. By 1813, it had reached its modern form, hip-hip-hurrah.
It has been suggested that the word "hip" stems from a medieval Latin acronym, "Hierosolyma Est Perdita", meaning "Jerusalem is lost", a term that gained notoriety in the German Hep hep riots of August to October 1819. Cornell's Michael Fontaine disputes this etymology, tracing it to a single letter in an English newspaper published August 28, 1819, some weeks after the riots. He concludes that the "acrostic interpretation ... has no basis in fact." Ritchie Robertson also disputes the "folk etymology" of the acronym interpretation, citing Jacob Katz.
One theory about the origin of "hurrah" is that the Europeans picked up the Mongol exclamation "hooray" as an enthusiastic cry of bravado and mutual encouragement. See Jack Weatherford's book Genghis Khan and the Making of the Modern World.
See also
Huzzah
References
English phrases
Interjections
Etiquette | Hip hip hooray | Biology | 522 |
5,168,898 | https://en.wikipedia.org/wiki/Choi%27s%20theorem%20on%20completely%20positive%20maps | In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon–Nikodym" theorem for completely positive maps.
Statement
Choi's theorem. Let Φ : Cn×n → Cm×m be a linear map. The following are equivalent:
(i) Φ is n-positive (i.e. (In ⊗ Φ)(A) is positive whenever A ∈ Cn×n ⊗ Cn×n is positive).
(ii) The matrix with operator entries
CΦ = (Φ(Eij))i,j ∈ Cnm×nm
is positive, where Eij ∈ Cn×n is the matrix with 1 in the (i, j)-th entry and 0s elsewhere. (The matrix CΦ is sometimes called the Choi matrix of Φ.)
(iii) Φ is completely positive.
Proof
(i) implies (ii)
We observe that if
E = ∑k,l Ekl ⊗ Ekl,
then E = E* and E² = nE, so E = (1/n)EE*, which is positive. Therefore CΦ = (In ⊗ Φ)(E) is positive by the n-positivity of Φ.
(iii) implies (i)
This holds trivially.
(ii) implies (iii)
This mainly involves chasing the different ways of looking at Cnm×nm:
Let the eigenvector decomposition of CΦ be
CΦ = ∑i λi vi vi*, with i = 1, …, nm,
where the vectors vi lie in Cnm. By assumption, each eigenvalue λi is non-negative, so we can absorb the eigenvalues into the eigenvectors and redefine vi so that
CΦ = ∑i vi vi*.
The vector space Cnm can be viewed as the direct sum ⊕k=1,…,n Cm, compatibly with the above identification and the standard basis of Cn.
If Pk ∈ Cm×nm is the projection onto the k-th copy of Cm, then Pk* ∈ Cnm×m is the inclusion of Cm as the k-th summand of the direct sum, and
Pk CΦ Pl* = Φ(Ekl).
Now if the operators Vi ∈ Cm×n are defined on the k-th standard basis vector ek of Cn by
Vi ek = Pk vi,
then
Φ(Ekl) = Pk CΦ Pl* = ∑i (Pk vi)(Pl vi)* = ∑i Vi Ekl Vi*.
Extending by linearity gives us
Φ(A) = ∑i Vi A Vi*
for any A ∈ Cn×n. Any map of this form is manifestly completely positive: the map A ↦ Vi A Vi* is completely positive, and a sum (across i) of completely positive maps is again completely positive. Thus Φ is completely positive, the desired result.
The above is essentially Choi's original proof. Alternative proofs have also been known.
Consequences
Kraus operators
In the context of quantum information theory, the operators {Vi} are called the Kraus operators (after Karl Kraus) of Φ. Notice, given a completely positive Φ, its Kraus operators need not be unique. For example, any "square root" factorization of the Choi matrix gives a set of Kraus operators.
Let
CΦ = B*B,
where the bi* are the row vectors of B; then
CΦ = Σi bi bi*.
The corresponding Kraus operators can be obtained by exactly the same argument as in the proof.
When the Kraus operators are obtained from the eigenvector decomposition of the Choi matrix, because the eigenvectors form an orthogonal set, the corresponding Kraus operators are also orthogonal in the Hilbert–Schmidt inner product. This is not true in general for Kraus operators obtained from square-root factorizations. (Positive semidefinite matrices do not generally have a unique square-root factorization.)
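The construction in the proof can be carried out numerically. The following NumPy sketch is illustrative only: the depolarizing-style channel, the dimensions, and the tolerances are assumptions made for the example, not part of the theorem. It builds the Choi matrix of a map, reads Kraus operators off its eigendecomposition exactly as above, and contrasts this with the transpose map, whose Choi matrix is not positive.

```python
import numpy as np

n = m = 2  # small illustrative dimensions (maps C^(n x n) -> C^(m x m))

def unit(i, j):
    """Matrix unit E_ij: 1 in the (i, j) entry, 0 elsewhere."""
    out = np.zeros((n, n))
    out[i, j] = 1.0
    return out

def choi(phi):
    """Choi matrix C_Phi = sum_ij E_ij (tensor) Phi(E_ij)."""
    return sum(np.kron(unit(i, j), phi(unit(i, j)))
               for i in range(n) for j in range(n))

# Hypothetical test map: a depolarizing-style channel, which is completely positive.
p = 0.5
phi = lambda a: p * a + (1 - p) * np.trace(a) * np.eye(m) / m

vals, vecs = np.linalg.eigh(choi(phi))
assert vals.min() >= -1e-12  # positive Choi matrix, as the theorem predicts

# Absorb each eigenvalue into its eigenvector, then reshape: column k of V_i is
# the k-th m-block of the eigenvector, giving Kraus operators V_i in C^(m x n).
kraus = [np.sqrt(v) * w.reshape(n, m).T
         for v, w in zip(vals, vecs.T) if v > 1e-12]

# Verify the Kraus representation Phi(A) = sum_i V_i A V_i* on a random input.
a = np.random.default_rng(0).standard_normal((n, n))
assert np.allclose(sum(V @ a @ V.conj().T for V in kraus), phi(a))

# Contrast: the transpose map is positive but not completely positive, and
# indeed its Choi matrix (the swap operator) has eigenvalue -1.
assert np.linalg.eigvalsh(choi(np.transpose)).min() < 0
```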
If two sets of Kraus operators {Ai}1nm and {Bi}1nm represent the same completely positive map Φ, then there exists a unitary operator matrix taking one set of Kraus operators to the other.
This can be viewed as a special case of the result relating two minimal Stinespring representations.
Alternatively, there is an isometry scalar matrix {uij}ij ∈ Cnm×nm such that
Ai = Σj uij Bj.
This follows from the fact that for two square matrices M and N, M M* = N N* if and only if M = N U for some unitary U.
Completely copositive maps
It follows immediately from Choi's theorem that Φ is completely copositive if and only if it is of the form
Φ(A) = Σi Vi A^T Vi*,
where A^T denotes the transpose of A.
Hermitian-preserving maps
Choi's technique can be used to obtain a similar result for a more general class of maps. Φ is said to be Hermitian-preserving if A is Hermitian implies Φ(A) is also Hermitian. One can show Φ is Hermitian-preserving if and only if it is of the form
Φ(A) = Σi=1..nm λi Vi A Vi*,
where the λi are real numbers, the eigenvalues of CΦ, and each Vi corresponds to an eigenvector of CΦ. Unlike the completely positive case, CΦ may fail to be positive. Since Hermitian matrices do not admit factorizations of the form B*B in general, a Kraus representation is no longer possible for a given Φ.
See also
Stinespring factorization theorem
Quantum operation
Holevo's theorem
References
M.-D. Choi, Completely Positive Linear Maps on Complex Matrices, Linear Algebra and its Applications, 10, 285–290 (1975).
V. P. Belavkin, P. Staszewski, Radon-Nikodym Theorem for Completely Positive Maps, Reports on Mathematical Physics, v.24, No 1, 49–55 (1986).
J. de Pillis, Linear Transformations Which Preserve Hermitian and Positive Semidefinite Operators, Pacific Journal of Mathematics, 23, 129–137 (1967).
Linear algebra
Operator theory
Articles containing proofs
Theorems in functional analysis | Choi's theorem on completely positive maps | Mathematics | 1,073 |
38,143,080 | https://en.wikipedia.org/wiki/Mobileye | Mobileye Global Inc. is a United States-domiciled, Israel-headquartered autonomous driving company. It is developing self-driving technologies and advanced driver-assistance systems (ADAS) including cameras, computer chips, and software. Mobileye was acquired by Intel in 2017 and went public again in 2022.
History
Mobileye was founded in 1999 by Hebrew University professor Amnon Shashua. He evolved his academic research into a vision system that could detect vehicles using a camera and software. It developed into a supplier of automotive safety technologies based on adding "intelligence" to inexpensive cameras for commercialization.
Mobileye established its first research center in 2004. It launched the first generation EyeQ1 processor in 2008. The technology offered driver assistance including automatic emergency braking. One of the first vehicles to use this technology was the fifth-generation BMW 7 Series. Versions of the chip were released in 2010, 2014 and 2018.
In 2013, Mobileye announced the sale of a 25% stake to investors for $400 million, valuing the company at approximately $1.5 billion.
Mobileye went public on the New York Stock Exchange in 2014. It raised $890 million, and became the largest Israeli IPO in U.S. history. By the end of the year, Mobileye's technology had been implemented in 160 car models made by 18 different OEMs.
In 2017, Mobileye unveiled a mathematical model for safe self-driving cars based on research by CEO Amnon Shashua and VP of Technology Shai Shalev-Shwartz. Their study outlined a system called Responsibility-Sensitive Safety (RSS) which redefines fault and caution and could potentially be used to inform insurers and driving laws. Shalev-Shwartz was promoted to CTO in 2019.
In March 2017, Intel announced that it would acquire Mobileye for $15.3 billion — the biggest-ever acquisition of an Israeli tech company. Following the acquisition, Reuters reported that the U.S. Securities and Exchange Commission had charged two Israelis, Ariel Darvasi and Amir Waldman, with insider trading prior to the announcement. Both had connections to Mobileye through the Hebrew University of Jerusalem, where Mobileye's technology was first developed. The SEC obtained an emergency court order, freezing certain assets of Virginia residents Lawrence F. Cluff, Jr. and Roger E. Shaoul, who allegedly used insider information to make approximately $1 million on the announcement. Neither Intel nor Mobileye were accused by the SEC of violating the law.
In October 2018, Mobileye and Volkswagen released plans to commercialize Mobility-as-a-Service (MaaS) in Israel. Mobileye instead began "robotaxi" trials with Nio electric vehicles in Israel in May 2020 due to Volkswagen delays, and unveiled its robotaxi in 2021 at the IAA Mobility show in Munich.
Mobileye demonstrated an autonomous car equipped only with cameras in Jerusalem in January 2020. It later tested the cars in Munich and New York City.
In December 2021, Intel announced its plan to take Mobileye public via an initial public offering in 2022, while maintaining its majority ownership. In October 2022, Intel offered 5–6% of outstanding shares, raising $861 million on 41 million shares. This valued Mobileye at around $17 billion, more than what Intel had paid in 2017. Intel continued to hold all Class B shares, giving itself an overall 99.4% of voting power.
Partnerships
Mobileye formed partnerships with various automakers. Mobileye launched multiple series productions for lane departure warning (LDW) on GM Cadillac STS and DTS vehicles, and on BMW 5 and 6 Series vehicles. In 2016, Mobileye and Delphi formed a partnership to develop an autonomous driving system. In early 2017, Mobileye announced a partnership with BMW to integrate Mobileye technology into vehicles going to market in 2018. In 2018, Mobileye announced partnerships with BMW, Nissan and Volkswagen. In 2019, Mobileye and NIO announced that they would partner on the development of AVs for consumer markets in China and other major territories. In July 2020, Mobileye and Ford announced a deal in which Mobileye would supply its EyeQ camera-based gear and software across Ford's global product line. Also in 2020, Mobileye partnered with WILLER to launch a robotaxi service in Japan, Taiwan and Southeast Asia, and with Geely for ADAS. The same year, Intel announced that it had acquired Moovit, a mobility-as-a-service (MaaS) company, to enhance Mobileye's MaaS offering.
In February 2021, Mobileye, Transdev Autonomous Transport System (ATS) and Lohr Group formed a partnership to develop and deploy autonomous shuttles, and in April Mobileye announced a partnership with Udelv on the company's Transporter electric self-driving delivery vehicle. In 2021, Toyota Motor Corp. selected Mobileye and German supplier ZF to develop and supply ADAS and Mobileye began a partnership with Mahindra.
Porsche
In May 2023, Porsche and Mobileye launched a collaboration to provide Mobileye’s SuperVision™ in future Porsche production models.
Tesla
In August 2015, Tesla Motors announced that it would incorporate Mobileye's technology in Model S cars. Tesla reportedly did not share its plans with Mobileye, and after the first deadly crash of a self-driving Model S with Autopilot became public in June 2016, Mobileye ended their partnership. The two companies expressed disagreement over what caused the accident, with Shashua claiming that Tesla "was pushing the envelope in terms of safety" and that Autopilot is a "driver assistance system" and not a "driverless system". Mobileye issued a statement that its systems did not recognize a "lateral turn across path".
Technology
EyeQ
The EyeQ system-on-chip (SoC) utilizes a single camera sensor to provide passive/active ADAS features including automatic emergency braking (AEB), adaptive cruise control (ACC), lane keeping assist (LKA), traffic jam assist (TJA) and forward collision warning (FCW). Mobileye's fifth-generation EyeQ supports fully autonomous vehicles. More than 27 automobile manufacturers utilize EyeQ for their assisted-driving technologies.
Road Experience Management (REM)
Mobileye's Road Experience Management, or REM, uses real-time data from Mobileye-equipped vehicles to maintain its 3D maps. The data collected amounts to about 10 kilobytes per kilometer. It is compiled in a map called Mobileye RoadBook that leverages anonymized, crowdsourced data from vehicle cameras for navigation and localization. According to Mobileye, REM had mapped more than 7.5 billion kilometers of roads by January 2021.
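A rough back-of-envelope reading of those two figures, assuming (purely for illustration) that each mapped kilometer corresponds to a single 10-kilobyte upload:

```python
km_mapped = 7.5e9  # kilometers of road mapped by January 2021, per the text
kb_per_km = 10     # approximate data collected per kilometer, per the text

total_tb = km_mapped * kb_per_km / 1e9  # 1 TB = 1e9 kB in decimal units
print(f"~{total_tb:.0f} TB")            # ~75 TB under this one-pass assumption
```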
Responsibility-Sensitive Safety Model (RSS)
RSS, or the Responsibility-Sensitive Safety Model, is a mathematical safety model first proposed by Mobileye in 2017. RSS models AV decision-making and digitizes the implicit rules of safe driving for AVs to prevent self-driving vehicles from causing accidents. RSS is defined in software.
True Redundancy
True Redundancy is an integrated autonomous driving system that utilizes data streams from 360-surround view cameras, lidar, and radar. This approach adds a lidar/radar subsystem to its computer-vision subsystem for redundancy.
Mobileye SuperVision
SuperVision uses EyeQ5 SoC data from 11 cameras. The system uses cameras only and is designed for hands-off cars. Geely's Zeekr electric vehicle is equipped with Mobileye SuperVision ADAS and began road trials in 2021.
Mobileye Drive
Mobileye Drive is a Level 4 self-driving system. The sensor suite includes 13 cameras, 3 long-range LiDARs, 6 short-range LiDARs and 6 radars. Mobileye Drive was first fitted to vehicles used for ride-hailing services in 2021, with plans for public testing in Germany and Israel in 2022.
Mobileye Chauffeur
Mobileye Chauffeur is a full-featured autonomous driving system offering hands-off/eyes-off operation on highways and hands-off/eyes-on operation on surface streets. As of August 2023, it was planned for initial release on the Polestar 4.
Aftermarket
Mobileye's aftermarket vision-based ADAS systems are based on the same core technology as for production models. These systems offer lane departure warning, forward collision warning, headway monitoring and warning, intelligent headlamp control and speed limit indication (tsr). These systems have been integrated with fleet management systems.
Operating system
Mobileye created the DXP operating system for autonomous vehicles.
Business
Mobileye has sales and marketing offices in Midtown Manhattan, U.S.; Shanghai, China; Tokyo, Japan; and Düsseldorf, Germany.
See also
Science and technology in Israel
Economy of Israel
Start-up Nation
OrCam device
Automatic emergency braking
ADAS
Self-driving car
References
External links
Vehicle safety technologies
Automotive electronics
Automotive technology tradenames
Israeli brands
Commercial computer vision systems
Applications of artificial intelligence
Software companies of Israel
Software companies established in 1999
Intelligent transportation systems
Warning systems
Companies based in Jerusalem
Companies formerly listed on the New York Stock Exchange
Companies listed on the Nasdaq
Intel acquisitions
Mergers and acquisitions of Israeli companies
Israeli inventions
2017 mergers and acquisitions
1999 establishments in Israel
2014 initial public offerings
2022 initial public offerings
Self-driving_car_companies | Mobileye | Technology,Engineering | 1,915 |
63,346,757 | https://en.wikipedia.org/wiki/Npj%202D%20Materials%20and%20Applications | npj 2D Materials and Applications, is an open access peer-reviewed scientific journal published by Nature Publishing Group. It focuses on 2D materials (such as thin films), including fundamental behaviour, synthesis, properties and applications.
According to the Journal Citation Reports, npj 2D Materials and Applications has a 2022 impact factor of 9.7. The current editor-in-chief is Andras Kis (École Polytechnique Fédérale de Lausanne).
Scope
npj 2D Materials and Applications publishes articles, brief communications, comments, matters arising, perspectives, and editorials on 2D materials in their entirety, including fundamental behaviour, synthesis, properties and applications. Specific areas of interest include, but are not limited to:
2D materials in all their forms: graphene, transition metal dichalcogenides, phosphorene and molecular systems, including relevant allotropes and compounds, and topological materials
fundamental understanding of their basic science
synthesis by physical and chemical approaches
behavior and properties: electronic, magnetic, spintronic, photonic, mechanical, including in heterostructures and other architectures
applications: sensors, memory, high-frequency electronics, energy harvesting and storage, flexible electronics, water treatment, biomedical, thermal management.
References
External links
Nature Research academic journals
Materials science journals
English-language journals | Npj 2D Materials and Applications | Materials_science,Engineering | 265 |
15,462 | https://en.wikipedia.org/wiki/Integral%20domain | In mathematics, an integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Integral domains are generalizations of the ring of integers and provide a natural setting for studying divisibility. In an integral domain, every nonzero element a has the cancellation property, that is, if , an equality implies .
"Integral domain" is defined almost universally as above, but there is some variation. This article follows the convention that rings have a multiplicative identity, generally denoted 1, but some authors do not follow this, by not requiring integral domains to have a multiplicative identity. Noncommutative integral domains are sometimes admitted. This article, however, follows the much more usual convention of reserving the term "integral domain" for the commutative case and using "domain" for the general case including noncommutative rings.
Some sources, notably Lang, use the term entire ring for integral domain.
Some specific kinds of integral domains are given with the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields
Definition
An integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Equivalently:
An integral domain is a nonzero commutative ring with no nonzero zero divisors.
An integral domain is a commutative ring in which the zero ideal {0} is a prime ideal.
An integral domain is a nonzero commutative ring for which every nonzero element is cancellable under multiplication.
An integral domain is a ring for which the set of nonzero elements is a commutative monoid under multiplication (because a monoid must be closed under multiplication).
An integral domain is a nonzero commutative ring in which for every nonzero element r, the function that maps each element x of the ring to the product xr is injective. Elements r with this property are called regular, so it is equivalent to require that every nonzero element of the ring be regular.
An integral domain is a ring that is isomorphic to a subring of a field. (Given an integral domain, one can embed it in its field of fractions.)
Examples
The archetypical example is the ring of all integers.
Every field is an integral domain. For example, the field of all real numbers is an integral domain. Conversely, every Artinian integral domain is a field. In particular, all finite integral domains are finite fields (more generally, by Wedderburn's little theorem, finite domains are finite fields). The ring of integers provides an example of a non-Artinian infinite integral domain that is not a field, possessing infinite descending sequences of ideals such as:
Z ⊃ 2Z ⊃ 4Z ⊃ 8Z ⊃ ⋯
Rings of polynomials are integral domains if the coefficients come from an integral domain. For instance, the ring of all polynomials in one variable with integer coefficients is an integral domain; so is the ring of all polynomials in n-variables with complex coefficients.
The previous example can be further exploited by taking quotients by prime ideals. For example, the ring C[x, y]/(y² − x(x − 1)(x − 2)) corresponding to a plane elliptic curve is an integral domain. Integrality can be checked by showing y² − x(x − 1)(x − 2) is an irreducible polynomial.
The ring Z[√n] is an integral domain for any non-square integer n. If n > 0, then this ring is always a subring of R; otherwise, it is a subring of C.
The ring of p-adic integers is an integral domain.
The ring of formal power series of an integral domain is an integral domain.
If is a connected open subset of the complex plane , then the ring consisting of all holomorphic functions is an integral domain. The same is true for rings of analytic functions on connected open subsets of analytic manifolds.
A regular local ring is an integral domain. In fact, a regular local ring is a UFD.
Non-examples
The following rings are not integral domains.
The zero ring (the ring in which 0 = 1).
The quotient ring Z/mZ when m is a composite number. Indeed, choose a proper factorization m = xy (meaning that x and y are not equal to 1 or m). Then x ≢ 0 (mod m) and y ≢ 0 (mod m), but xy ≡ 0 (mod m).
A product of two nonzero commutative rings. In such a product R × S, one has (1, 0) · (0, 1) = (0, 0).
The quotient ring Z[x]/(x² − n²) for any n ≥ 1. The images of x + n and x − n are nonzero, while their product is 0 in this ring.
The ring of n × n matrices over any nonzero ring when n ≥ 2. If M and N are matrices such that the image of N is contained in the kernel of M, then MN = 0. For example, this happens for M = N = [[0, 1], [0, 0]].
The quotient ring k[x1, ..., xn]/(fg) for any field k and any non-constant polynomials f and g. The images of f and g in this quotient ring are nonzero elements whose product is 0. This argument shows, equivalently, that (fg) is not a prime ideal. The geometric interpretation of this result is that the zeros of fg form an affine algebraic set that is not irreducible (that is, not an algebraic variety) in general. The only case where this algebraic set may be irreducible is when fg is a power of an irreducible polynomial, which defines the same algebraic set.
The ring of continuous functions on the unit interval. Consider the functions
f(x) = max(1/2 − x, 0) and g(x) = max(x − 1/2, 0).
Neither f nor g is everywhere zero, but fg is.
The tensor product C ⊗R C. This ring has two non-trivial idempotents, e1 = (1 ⊗ 1 − i ⊗ i)/2 and e2 = (1 ⊗ 1 + i ⊗ i)/2. They are orthogonal, meaning that e1e2 = 0, and hence C ⊗R C is not a domain. In fact, there is an isomorphism C × C → C ⊗R C defined by (z, w) ↦ z·e1 + w·e2. Its inverse is defined by z ⊗ w ↦ (zw, z̄w). This example shows that a fiber product of irreducible affine schemes need not be irreducible.
Divisibility, prime elements, and irreducible elements
In this section, R is an integral domain.
Given elements a and b of R, one says that a divides b, or that a is a divisor of b, or that b is a multiple of a, if there exists an element x in R such that b = ax.
The units of R are the elements that divide 1; these are precisely the invertible elements in R. Units divide all other elements.
If a divides b and b divides a, then a and b are associated elements or associates. Equivalently, a and b are associates if a = ub for some unit u.
An irreducible element is a nonzero non-unit that cannot be written as a product of two non-units.
A nonzero non-unit p is a prime element if, whenever p divides a product ab, then p divides a or p divides b. Equivalently, an element p is prime if and only if the principal ideal (p) is a nonzero prime ideal.
Both notions of irreducible elements and prime elements generalize the ordinary definition of prime numbers in the ring Z, if one considers as prime the negative primes.
Every prime element is irreducible. The converse is not true in general: for example, in the quadratic integer ring Z[√−5] the element 3 is irreducible (if it factored nontrivially, the factors would each have to have norm 3, but there are no norm-3 elements since a² + 5b² = 3 has no integer solutions), but not prime (since 3 divides (2 + √−5)(2 − √−5) = 9 without dividing either factor). In a unique factorization domain (or more generally, a GCD domain), an irreducible element is a prime element.
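The norm argument can be checked mechanically. A minimal brute-force sketch (the pair encoding (a, b) for a + b√−5 is a representation chosen here for illustration):

```python
def norm(x):
    """Multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5*b^2."""
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    """Multiply (a + b*r) by (c + d*r) where r = sqrt(-5), so r*r = -5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# 3 divides 9 = (2 + sqrt(-5))(2 - sqrt(-5)) without dividing either factor.
assert mul((2, 1), (2, -1)) == (9, 0)

# No element has norm 3 (norm 3 forces |a| <= 1 and b = 0, and then a^2 = 3
# is impossible), so 3, whose norm is 9, cannot split into two non-units.
assert all(norm((a, b)) != 3 for a in range(-2, 3) for b in range(-1, 2))
```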
While unique factorization does not hold in Z[√−5], there is unique factorization of ideals. See the Lasker–Noether theorem.
Properties
A commutative ring R is an integral domain if and only if the ideal (0) of R is a prime ideal.
If R is a commutative ring and P is an ideal in R, then the quotient ring R/P is an integral domain if and only if P is a prime ideal.
Let R be an integral domain. Then the polynomial rings over R (in any number of indeterminates) are integral domains. This is in particular the case if R is a field.
The cancellation property holds in any integral domain: for any a, b, and c in an integral domain, if a ≠ 0 and ab = ac then b = c. Another way to state this is that the function x ↦ ax is injective for any nonzero a in the domain.
The cancellation property holds for ideals in any integral domain: if xI = xJ, then either x is zero or I = J.
An integral domain is equal to the intersection of its localizations at maximal ideals.
An inductive limit of integral domains is an integral domain.
If A, B are integral domains over an algebraically closed field k, then A ⊗k B is an integral domain. This is a consequence of Hilbert's Nullstellensatz, and, in algebraic geometry, it implies the statement that the coordinate ring of the product of two affine algebraic varieties over an algebraically closed field is again an integral domain.
Field of fractions
The field of fractions K of an integral domain R is the set of fractions a/b with a and b in R and b ≠ 0, modulo an appropriate equivalence relation, equipped with the usual addition and multiplication operations. It is "the smallest field containing R" in the sense that there is an injective ring homomorphism R → K such that any injective ring homomorphism from R to a field factors through K. The field of fractions of the ring of integers Z is the field of rational numbers Q. The field of fractions of a field is isomorphic to the field itself.
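Python's fractions module gives a concrete model of this construction for R = Z; a small sketch:

```python
from fractions import Fraction

# Q as the field of fractions of Z: pairs a/b modulo the usual equivalence.
assert Fraction(2, 4) == Fraction(1, 2)                   # 2/4 and 1/2 identified
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)  # usual addition
assert Fraction(3, 7) * Fraction(7, 3) == 1               # nonzero elements invert
```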
Algebraic geometry
Integral domains are characterized by the condition that they are reduced (that is, x² = 0 implies x = 0) and irreducible (that is, there is only one minimal prime ideal). The former condition ensures that the nilradical of the ring is zero, so that the intersection of all the ring's minimal primes is zero. The latter condition is that the ring have only one minimal prime. It follows that the unique minimal prime ideal of a reduced and irreducible ring is the zero ideal, so such rings are integral domains. The converse is clear: an integral domain has no nonzero nilpotent elements, and the zero ideal is the unique minimal prime ideal.
This translates, in algebraic geometry, into the fact that the coordinate ring of an affine algebraic set is an integral domain if and only if the algebraic set is an algebraic variety.
More generally, a commutative ring is an integral domain if and only if its spectrum is an integral affine scheme.
Characteristic and homomorphisms
The characteristic of an integral domain is either 0 or a prime number.
If R is an integral domain of prime characteristic p, then the Frobenius endomorphism x ↦ x^p is injective.
See also
Dedekind–Hasse norm – the extra structure needed for an integral domain to be principal
Zero-product property
Notes
Citations
References
External links
Commutative algebra
Ring theory | Integral domain | Mathematics | 2,191 |
285,967 | https://en.wikipedia.org/wiki/Situated%20ethics | Situated ethics, often confused with situational ethics, is a view of applied ethics in which abstract standards from a culture or theory are considered to be far less important than the ongoing processes in which one is personally and physically involved, e.g. climate, ecosystem, etc. It is one of several theories of ethics within the philosophy of action.
There are also situated theories of economics, e.g. most green economics, and of knowledge, usually based on some situated ethics. All emphasize the actual physical, geographical, ecological and infrastructural state the actor is in, which determines that actor's actions or range of actions - all deny that there is any one point of view from which to apply standards of or by authority. This makes such theories unpopular with authority, and popular with those who advocate political decentralisation.
Embodiment
Humans pass through Kohlberg/Gilligan's stages of moral development. Up to stage 3 (Conventional morality: Good Interpersonal Relationships), these stages are compatible with embodiment. Most philosophy of law emphasizes that, because bodies take risks to enforce laws, laws are embodied at least to the degree that they are enforced.
However, the stages become problematic when Lawrence Kohlberg posits a universal ethics - that is, a disembodied ethic. All ethical decisions are necessarily situated in a world. Carol Gilligan's view is closer to an embodied view and emphasizes ethical relationships - necessarily between bodies - over universal ethical principles that require a "God's Eye view". Some ethicists emphasize the role of the ethicist to sort out right versus right in a given context. This is stage 4 but assumes that the ethicist is hesitant to damage relationships or violate principles, e.g. that survival or human rights take precedence over property rights.
References
Helen Simons and Robin Usher (2000), Situated Ethics in Educational Research.
See also
Situational ethics
Applied ethics | Situated ethics | Biology | 398 |
62,548,396 | https://en.wikipedia.org/wiki/History%20of%20metallurgy%20in%20Mosul | During the thirteenth century, Mosul, Iraq became home to a school of luxury metalwork which rose to international renown. Artifacts classified as Mosul are some of the most intricately designed and revered pieces of the Middle Ages.
Background
The school of metalwork in Mosul is believed to have been founded in the early 13th century under Zengid patronage. During this time, the Zengid region was operating as a vassal under the Ayyubid Sultanate. Control over Mosul as a city central to trade between China, the Mediterranean, Anatolia, and Mesopotamia was contested between the Zengids and the Ayyubid sultan, Saladin, throughout the early acquisitions of the Ayyubid Sultanate in Syria and Iraq after the decline of Fatimid rule. However, the Zengids remained in Mosul and were allowed some degree of authority under the Sultanate.
Around 1256, the Mongol occupation of Iraq began, and the region became a part of the Ilkhanate. Of the artifacts agreed to be "nabish al-Mawsili" (of Mosul), approximately 80% were produced after the commencement of Mongol rule in Mosul. However, it is unclear as to whether or not all of these artifacts were produced within Mosul and later exported as esteemed gifts, or created elsewhere by Mosulian artisans who relocated but maintained the "al-Mawsili" signature.
Design
The process of creating these luxuriously inlaid objects is somewhat complicated and has multiple stages. First, designs are formed on the surface of the metal (usually copper or brass) by relief, piercing, engraving, or chasing. Color is then added to the crevices of the surface by encrustation, overlay or, most commonly, inlay of precious metals. These metal inlays could be sheets or wires hammered into place. The area around the inlaid design was often roughened or covered with some sort of black material. Each craftsman in the industry had their own personal specialization. This specialization could be in a particular metal, technique, object, or step in the process. There are two reasons the casting step of the process usually took place in an urban workshop. The first is simply because most patrons were located in these urban areas. The second is because it would be too difficult to move all of the heavy equipment necessary for casting from one rural location to the next. Inlayers and precious metalworkers were able to travel with ease and were not confined to the workshops as casters were. There were three main inlay innovations that are believed to have originated in Mosul in the thirteenth century- gold inlays, black inlay, and background scrolls inlaid with silver.
The designs themselves are quite varied in subject matter. Some of the popular motifs include: astrology, hunting, enthronements, battles, court life, and genre scenes. Genre scenes, images of everyday life are particularly prominent. Among the original design traditions there is evidence that can trace them to East Asia through the designs within textiles. Mosul was a great textile industry during the same period that they were producing these inlaid objects and they happened to specialize in reproductions of Chinese silks. It is speculated that many of the traditional metalwork designs were heavily influenced or even direct copies of these silk reproductions.
Historically, many scholars have argued that the Mongol sack of Mosul led to the demise of the luxury metalworking industry; however, modern scholarship and an abundance of evidence disprove this. For example, it is known that Mosul metalworkers received an imperial commission from Il-Khan Abu Sa'id in the last years of the Ilkhanate. Not only did Mosul continue to produce elaborate inlaid objects after the Mongol sack, its artisans also altered their traditional stylistic choices to coalesce with Mongol taste. There was a new emphasis on the minuscule style, the figures represented reflect the Ilkhanid fashion of the period, and more emphasis was placed on pattern over figuration.
One of the finest examples of the Mosul school of metalworking is the Blacas Ewer.
Another item tentatively attributed to Mosul is the Courtauld bag, which is thought to be the world's oldest surviving handbag.
Scholarship
The scholarship surrounding Mosul metalwork has been ongoing for a very long time, since these became among the first Islamic objets d'art studied in Europe, owing to their early arrival on the continent. The diverse opinions on what constitutes Mosul metalwork arise from the style's dispersion across lands and from the signatures identifying creators as "al-Mawsili", meaning "of Mosul". Within the body of signed metalwork, twenty-seven out of thirty-five pieces identify their makers as "al-Mawsili". Of those, eight state their provenance through the name of the people for whom they were created along with statements declaring their making within Mosul. Some notable scholars that have helped shape the basis of this study include: Joseph Toussaint Reinaud, Henri Lavoix, Gaston Migeon, Max van Berchem, Mehmed Aga-Oglu, and David Storm Rice.
In the early years of Mosul metalwork scholarship, around 1828, Joseph Toussaint Reinaud published a collection that included the first item to clearly state its creation in Mosul, the 'Blacas ewer', an artifact consistently scrutinized by scholars exploring the Mosul style. Then in the 1860s, when the credibility of Mosul was being questioned by scholars, Henri Lavoix declared that Damascus, Aleppo, Mosul, and Egypt all created inlaid metalwork, but specifically singled out Mosul as the source of a unique style unseen elsewhere in the medium.
A critical point in the scholarship came at the beginning of the 20th century through Gaston Migeon, whose claims about the precedence of Mosul caused objection and an urgency for reliability. Migeon also wrote the first comprehensive article introducing inlaid Islamic metalwork. In the following years, the fluctuation of the precedence of Mosul and the lack of it continued, leading up to David Storm Rice, who released the first series of articles exploring the complexities of multiple objects, a process similar to that of Max van Berchem and Mehmed Aga-Oglu, two scholars who impacted the relevance and viability of Mosul metalwork; the objects studied included the Blacas Ewer, the Louvre basin and the Munich Tray.
At present, Mosul metalwork remains elusive and lacks a sustained body of scholarship, but scholars continue to construct a field that utilizes substantiated evidence through designs, inscriptions, and other items engendered specifically in Mosul around the 13th century. An example of this is an article by Ruba Kana'an, who utilizes iconography and description to argue that the Freer Ewer is one of many metalworks constructed in Mosul.
See also
Timeline of materials technology
Nonferrous archaeometallurgy of the Southern Levant
History of metallurgy in the Indian subcontinent
Further reading
References
Metalworking
Mosul
History of Mosul | History of metallurgy in Mosul | Chemistry,Materials_science | 1,472 |
735,661 | https://en.wikipedia.org/wiki/Deterrence%20theory | Deterrence theory refers to the scholarship and practice of how threats of using force by one party can convince another party to refrain from initiating some other course of action. The topic gained increased prominence as a military strategy during the Cold War with regard to the use of nuclear weapons and is related to but distinct from the concept of mutual assured destruction, according to which a full-scale nuclear attack on a power with second-strike capability would devastate both parties. The central problem of deterrence revolves around how to credibly threaten military action or nuclear punishment on the adversary despite its costs to the deterrer. Deterrence in an international relations context is the application of deterrence theory to avoid conflict.
Deterrence is widely defined as any use of threats (implicit or explicit) or limited force intended to dissuade an actor from taking an action (i.e. maintain the status quo). Deterrence is unlike compellence, which is the attempt to get an actor (such as a state) to take an action (i.e. alter the status quo). Both are forms of coercion. Compellence has been characterized as harder to successfully implement than deterrence. Deterrence also tends to be distinguished from defense or the use of full force in wartime.
Deterrence is most likely to be successful when a prospective attacker believes that the probability of success is low and the costs of attack are high. Central problems of deterrence include the credible communication of threats and assurance. Deterrence does not necessarily require military superiority.
"General deterrence" is considered successful when an actor who might otherwise take an action refrains from doing so due to the consequences that the deterrer is perceived likely to take. "Immediate deterrence" is considered successful when an actor seriously contemplating immediate military force or action refrains from doing so. Scholars distinguish between "extended deterrence" (the protection of allies) and "direct deterrence" (protection of oneself). Rational deterrence theory holds that an attacker will be deterred if they believe that:(Probability of deterrer carrying out deterrent threat × Costs if threat carried out) > (Probability of the attacker accomplishing the action × Benefits of the action)This model is frequently simplified in game-theoretic terms as:Costs × P(Costs) > Benefits × P(Benefits)
History
By November 1945 general Curtis LeMay, who led American air raids on Japan during World War II, was thinking about how the next war would be fought. He said in a speech that month to the Ohio Society of New York that since "No air attack, once it is launched, can be completely stopped", his country needed an air force that could immediately retaliate: "If we are prepared it may never come. It is not immediately conceivable that any nation will dare to attack us if we are prepared".
Most of the innovative work on deterrence theory occurred from the late 1940s to mid-1960s. Historically, scholarship on deterrence has tended to focus on nuclear deterrence. Since the end of the Cold War, there has been an extension of deterrence scholarship to areas that are not specifically about nuclear weapons.
NATO was founded in 1949 with deterring aggression as one of its goals.
A distinction is sometimes made between nuclear deterrence and "conventional deterrence."
The two most prominent deterrent strategies are "denial" (denying the attacker the benefits of attack) and "punishment" (inflicting costs on the attacker).
The lesson of Munich, where appeasement failed, contributed to deterrence theory. In the words of scholars Frederik Logevall and Kenneth Osgood, "Munich and appeasement have become among the dirtiest words in American politics, synonymous with naivete and weakness, and signifying a craven willingness to barter away the nation's vital interests for empty promises." They claimed that the success of US foreign policy often depends upon a president withstanding "the inevitable charges of appeasement that accompany any decision to negotiate with hostile powers."
Concept
The use of military threats as a means to deter international crises and war has been a central topic of international security research for at least 2000 years.
The concept of deterrence can be defined as the use of threats in limited force by one party to convince another party to refrain from initiating some course of action. In Arms and Influence (1966), Schelling offers a broader definition of deterrence, as he defines it as "to prevent from action by fear of consequences." Glenn Snyder also offers a broad definition of deterrence, as he argues that deterrence involves both the threat of sanction and the promise of reward.
A threat serves as a deterrent to the extent that it convinces its target not to carry out the intended action because of the costs and losses that target would incur. In international security, a policy of deterrence generally refers to threats of military retaliation directed by the leaders of one state to the leaders of another in an attempt to prevent the other state from resorting to the use of military force in pursuit of its foreign policy goals.
As outlined by Huth, a policy of deterrence can fit into two broad categories: preventing an armed attack against a state's own territory (known as direct deterrence) or preventing an armed attack against another state (known as extended deterrence). Situations of direct deterrence often occur if there is a territorial dispute between neighboring states in which major powers like the United States do not directly intervene. On the other hand, situations of extended deterrence often occur when a great power becomes involved. The latter case has generated most interest in academic literature. Building on the two broad categories, Huth goes on to outline that deterrence policies may be implemented in response to a pressing short-term threat (known as immediate deterrence) or as strategy to prevent a military conflict or short-term threat from arising (known as general deterrence).
A successful deterrence policy must be considered in military terms but also political terms: International relations, foreign policy and diplomacy. In military terms, deterrence success refers to preventing state leaders from issuing military threats and actions that escalate peacetime diplomatic and military co-operation into a crisis or militarized confrontation that threatens armed conflict and possibly war. The prevention of crises of wars, however, is not the only aim of deterrence. In addition, defending states must be able to resist the political and the military demands of a potential attacking nation. If armed conflict is avoided at the price of diplomatic concessions to the maximum demands of the potential attacking nation under the threat of war, it cannot be claimed that deterrence has succeeded.
Furthermore, as Jentleson et al. argue, two key sets of factors for successful deterrence are important: a defending state strategy that balances credible coercion and deft diplomacy consistent with the three criteria of proportionality, reciprocity, and coercive credibility and minimizes international and domestic constraints and the extent of an attacking state's vulnerability as shaped by its domestic political and economic conditions. In broad terms, a state wishing to implement a strategy of deterrence is most likely to succeed if the costs of noncompliance that it can impose on and the benefits of compliance it can offer to another state are greater than the benefits of noncompliance and the costs of compliance.
Deterrence theory holds that nuclear weapons are intended to deter other states from attacking with their nuclear weapons, through the promise of retaliation and possibly mutually assured destruction. Nuclear deterrence can also be applied to an attack by conventional forces. For example, the doctrine of massive retaliation threatened to launch US nuclear weapons in response to Soviet attacks.
A successful nuclear deterrent requires a country to preserve its ability to retaliate by responding before its own weapons are destroyed or ensuring a second-strike capability. A nuclear deterrent is sometimes composed of a nuclear triad, as in the case of the nuclear weapons owned by the United States, Russia, China and India. Other countries, such as the United Kingdom and France, have only sea-based and air-based nuclear weapons.
Proportionality
Jentleson et al. provides further detail in relation to those factors. Proportionality refers to the relationship between the defending state's scope and nature of the objectives being pursued and the instruments available for use to pursue them. The more the defending state demands of another state, the higher that state's costs of compliance and the greater need for the defending state's strategy to increase the costs of noncompliance and the benefits of compliance. That is a challenge, as deterrence is by definition a strategy of limited means. George (1991) goes on to explain that deterrence sometimes goes beyond threats to the actual use of military force, but if force is actually used, it must be limited and fall short of full-scale use to succeed.
The main source of disproportionality is an objective that goes beyond policy change to regime change, which has been seen in Libya, Iraq, and North Korea. There, defending states have sought to change the leadership of a state and to policy changes relating primarily to their nuclear weapons programs.
Reciprocity
Secondly, Jentleson et al. outlines that reciprocity involves an explicit understanding of linkage between the defending state's carrots and the attacking state's concessions. The balance lies in not offering too little, too late or for too much in return and not offering too much, too soon, or for too little return.
Coercive credibility
Finally, coercive credibility requires that in addition to calculations about costs and benefits of co-operation, the defending state convincingly conveys to the attacking state that failure to co-operate has consequences. Threats, uses of force, and other coercive instruments such as economic sanctions must be sufficiently credible to raise the attacking state's perceived costs of noncompliance. A defending state having a superior military capability or economic strength in itself is not enough to ensure credibility. Indeed, all three elements of a balanced deterrence strategy are more likely to be achieved if other major international actors like the UN or NATO are supportive, and opposition within the defending state's domestic politics is limited.
The other important considerations outlined by Jentleson et al. that must be taken into consideration is the domestic political and economic conditions in the attacking state affecting its vulnerability to deterrence policies and the attacking state's ability to compensate unfavourable power balances. The first factor is whether internal political support and regime security are better served by defiance, or there are domestic political gains to be made from improving relations with the defending state. The second factor is an economic calculation of the costs that military force, sanctions, and other coercive instruments can impose and the benefits that trade and other economic incentives may carry. That is partly a function of the strength and flexibility of the attacking state's domestic economy and its capacity to absorb or counter the costs being imposed. The third factor is the role of elites and other key domestic political figures within the attacking state. To the extent that such actors' interests are threatened with the defending state's demands, they act to prevent or block the defending state's demands.
Rational deterrence theory
One approach to theorizing about deterrence has entailed the use of rational choice and game-theoretic models of decision making (see game theory). Rational deterrence theory entails:
Rationality: actors are rational
Unitary actor assumption: actors are understood as unitary
Dyads: interactions tend to be between dyads (or triads) of states
Strategic interactions: actors consider the choices of other actors
Cost-benefit calculations: outcomes reflect actors' cost-benefit calculations
Deterrence theorists have consistently argued that deterrence success is more likely if a defending state's deterrent threat is credible to an attacking state. Huth outlines that a threat is considered credible if the defending state possesses both the military capabilities to inflict substantial costs on an attacking state in an armed conflict, and the attacking state believes that the defending state is resolved to use its available military forces. Huth goes on to explain the four key factors for consideration under rational deterrence theory: the military balance, signaling and bargaining power, reputations for resolve, interests at stake.
The American economist Thomas Schelling brought his background in game theory to the subject of studying international deterrence. Schelling's (1966) classic work on deterrence presents the concept that military strategy can no longer be defined as the science of military victory. Instead, it is argued that military strategy was now equally, if not more, the art of coercion, intimidation and deterrence. Schelling says the capacity to harm another state is now used as a motivating factor for other states to avoid it and influence another state's behavior. To be coercive or deter another state, violence must be anticipated and avoidable by accommodation. It can therefore be summarized that the use of the power to hurt as bargaining power is the foundation of deterrence theory and is most successful when it is held in reserve.
In an article celebrating Schelling's Nobel Memorial Prize for Economics, Michael Kinsley, Washington Post op‑ed columnist and one of Schelling's former students, anecdotally summarizes Schelling's reorientation of game theory thus: "[Y]ou're standing at the edge of a cliff, chained by the ankle to someone else. You'll be released, and one of you will get a large prize, as soon as the other gives in. How do you persuade the other guy to give in, when the only method at your disposal—threatening to push him off the cliff—would doom you both? Answer: You start dancing, closer and closer to the edge. That way, you don't have to convince him that you would do something totally irrational: plunge him and yourself off the cliff. You just have to convince him that you are prepared to take a higher risk than he is of accidentally falling off the cliff. If you can do that, you win."
Military balance
Deterrence is often directed against state leaders who have specific territorial goals that they seek to attain either by seizing disputed territory in a limited military attack or by occupying disputed territory after the decisive defeat of the adversary's armed forces. In either case, the strategic orientation of potential attacking states generally is for the short term and is driven by concerns about military cost and effectiveness. For successful deterrence, defending states need the military capacity to respond quickly and strongly to a range of contingencies. Deterrence often fails if either a defending state or an attacking state underestimates or overestimates the other's ability to undertake a particular course of action.
Signaling and bargaining power
The central problem for a state that seeks to communicate a credible deterrent threat by diplomatic or military actions is that all defending states have an incentive to act as if they are determined to resist an attack in the hope that the attacking state will back away from military conflict with a seemingly resolved adversary. If all defending states have such incentives, potential attacking states may discount statements made by defending states along with any movement of military forces as merely bluffs. In that regard, rational deterrence theorists have argued that costly signals are required to communicate the credibility of a defending state's resolve. Those are actions and statements that clearly increase the risk of a military conflict and also increase the costs of backing down from a deterrent threat. States that bluff are unwilling to cross a certain threshold of threat and military action for fear of committing themselves to an armed conflict.
Reputations for resolve
There are three different arguments that have been developed in relation to the role of reputations in influencing deterrence outcomes. The first argument focuses on a defending state's past behavior in international disputes and crises, which creates strong beliefs in a potential attacking state about the defending state's expected behaviour in future conflicts. The credibility of a defending state's policies is arguably linked over time, and reputations for resolve have a powerful causal impact on an attacking state's decision whether to challenge either general or immediate deterrence. The second approach argues that reputations have a limited impact on deterrence outcomes because the credibility of deterrence is heavily determined by the specific configuration of military capabilities, interests at stake, and political constraints faced by a defending state in a given situation of attempted deterrence. The argument of that school of thought is that potential attacking states are not likely to draw strong inferences about a defending state's resolve from prior conflicts because potential attacking states do not believe that a defending state's past behaviour is a reliable predictor of future behavior. The third approach is a middle ground between the first two approaches and argues that potential attacking states are likely to draw reputational inferences about resolve from the past behaviour of defending states only under certain conditions. The insight is the expectation that decisionmakers use only certain types of information when drawing inferences about reputations, and an attacking state updates and revises its beliefs when a defending state's unanticipated behavior cannot be explained by case-specific variables.
An example shows that the problem extends to the perception of the third parties as well as main adversaries and underlies the way in which attempts at deterrence can fail and even backfire if the assumptions about the others' perceptions are incorrect.
Interests at stake
Although costly signaling and bargaining power are more well established arguments in rational deterrence theory, the interests of defending states are not as well known. Attacking states may look beyond the short-term bargaining tactics of a defending state and seek to determine what interests are at stake for the defending state that would justify the risks of a military conflict. The argument is that defending states that have greater interests at stake in a dispute are more resolved to use force and more willing to endure military losses to secure those interests. Even less well-established arguments are the specific interests that are more salient to state leaders such as military interests and economic interests.
Furthermore, Huth argues that both supporters and critics of rational deterrence theory agree that an unfavorable assessment of the domestic and international status quo by state leaders can undermine or severely test the success of deterrence. In a rational choice approach, if the expected utility of not using force is reduced by a declining status quo position, deterrence failure is more likely since the alternative option of using force becomes relatively more attractive.
Tripwires
International relations scholars Dan Reiter and Paul Poast have argued that so-called "tripwires" do not deter aggression. Tripwires entail that small forces are deployed abroad with the assumption that an attack on them will trigger a greater deployment of forces. Dan Altman has argued that tripwires do work to deter aggression, citing the Western deployment of forces to Berlin in 1948–1949 to deter Soviet aggression as a successful example.
A 2022 study by Brian Blankenship and Erik Lin-Greenberg found that high-resolve, low-capability signals (such as tripwires) were not viewed as more reassuring to allies than low-resolve, high-capability alternatives (such as forces stationed offshore). Their study cast doubt on the reassuring value of tripwires.
Nuclear deterrence theory
In 1966, Schelling was prescriptive in outlining the impact of the development of nuclear weapons on the analysis of military power and deterrence. In his analysis, before the widespread use of assured second strike capability, or immediate reprisal, in the form of SSBN submarines, Schelling argues that nuclear weapons give nations the potential to destroy their enemies but also the rest of humanity without drawing immediate reprisal because of the lack of a conceivable defense system and the speed with which nuclear weapons can be deployed. A nation's credible threat of such severe damage empowers their deterrence policies and fuels political coercion and military deadlock, which can produce proxy warfare.
According to Kenneth Waltz, there are three requirements for successful nuclear deterrence:
Part of a state's nuclear arsenal must appear to be able to survive an attack by the adversary and be used for a retaliatory second strike
The state must not respond to false alarms of a strike by the adversary
The state must maintain command and control
The stability–instability paradox is a key concept in rational deterrence theory. It states that when two countries each have nuclear weapons, the probability of a direct war between them greatly decreases, but the probability of minor or indirect conflicts between them increases. This occurs because rational actors want to avoid nuclear wars, and thus they neither start major conflicts nor allow minor conflicts to escalate into major conflicts—thus making it safe to engage in minor conflicts. For instance, during the Cold War the United States and the Soviet Union never engaged each other in warfare, but fought proxy wars in Korea, Vietnam, Angola, the Middle East, Nicaragua and Afghanistan and spent substantial amounts of money and manpower on gaining relative influence over the third world.
Bernard Brodie wrote in 1959 that a credible nuclear deterrent must be always ready. An extended nuclear deterrence guarantee is also called a nuclear umbrella.
Scholars have debated whether having a superior nuclear arsenal provides a deterrent against other nuclear-armed states with smaller arsenals. Matthew Kroenig has argued that states with nuclear superiority are more likely to win nuclear crises, whereas Todd Sechser, Matthew Fuhrmann and David C. Logan have challenged this assertion. A 2023 study found that a state with nuclear weapons is less likely to be targeted by non-nuclear states, but that a state with nuclear weapons is not less likely to target other nuclear states in low-level conflict. A 2022 study by Kyungwon Suh suggests that nuclear superiority may not reduce the likelihood that nuclear opponents will initiate nuclear crises.
Proponents of nuclear deterrence theory argue that newly nuclear-armed states may pose a short- or medium-term risk, but that "nuclear learning" occurs over time as states learn to live with new nuclear-armed states. Mark S. Bell and Nicholas L. Miller have however argued that there is a weak theoretical and empirical basis for notions of "nuclear learning."
Stages of US policy of deterrence
The US policy of deterrence during the Cold War underwent significant variations.
Containment
The early stages of the Cold War were generally characterized by the containment of communism, an aggressive stance by the US, especially toward developing nations under its sphere of influence. The period was characterized by numerous proxy wars throughout most of the globe, particularly Africa, Asia, Central America, and South America. One notable conflict was the Korean War. George F. Kennan, who is taken to be the founder of this policy in his Long Telegram, asserted that he never advocated military intervention, merely economic support, and that his ideas were misinterpreted as espoused by the general public.
Détente
With the US drawdown from Vietnam, the normalization of US relations with China, and the Sino-Soviet split, the policy of containment was abandoned and a new policy of détente was established, under which peaceful coexistence between the United States and the Soviet Union was sought. Although all of those factors contributed to the shift, the most important was probably the rough parity achieved in nuclear stockpiles, with the clear capability of mutual assured destruction (MAD). The period of détente was therefore characterized by a general reduction in tension between the Soviet Union and the United States and a thawing of the Cold War, lasting from the late 1960s until the start of the 1980s. The doctrine of mutual nuclear deterrence then characterized relations between the United States and the Soviet Union, and later with Russia, until the onset of the New Cold War in the early 2010s. Since then, relations have been less clear.
Reagan era
A third shift occurred with US President Ronald Reagan's arms build-up during the 1980s. Reagan attempted to justify the policy by pointing to growing Soviet influence in Latin America and the post-1979 revolutionary government of Iran. Similar to the old policy of containment, the US funded several proxy wars, including support for Saddam Hussein of Iraq during the Iran–Iraq War, support for the mujahideen in Afghanistan, who were fighting against the Soviet occupation, and several anticommunist movements in Latin America, such as efforts to overthrow the Sandinista government in Nicaragua. The funding of the Contras in Nicaragua led to the Iran–Contra Affair, while overt support led to a ruling from the International Court of Justice against the United States in Nicaragua v. United States.
The final expression of deterrence's full impact during the Cold War can be seen in the 1985 agreement between Reagan and Mikhail Gorbachev. They "agreed that a nuclear war cannot be won and must never be fought. Recognizing that any conflict between the USSR and the U.S. could have catastrophic consequences, they emphasized the importance of preventing any war between them, whether nuclear or conventional. They will not seek to achieve military superiority."
Post-Cold War period
With the breakup of the Soviet Union and the spread of nuclear technology to other nations beyond the United States and Russia, the concept of deterrence took on a broader multinational dimension. US policy on deterrence after the Cold War was outlined in 1995 in the document "Essentials of Post–Cold War Deterrence". It explains that while relations with Russia continued to follow the traditional characteristics of MAD, US deterrence policy toward nations with minor nuclear capabilities should use the threat of immense retaliation (or even pre-emptive action) to ensure that they do not threaten the United States, its interests, or its allies. The document explains that such threats must also be used to ensure that nations without nuclear technology refrain from developing nuclear weapons and that a universal ban precludes any nation from maintaining chemical or biological weapons. Current tensions with Iran and North Korea over their nuclear programs are caused partly by the continuation of this policy of deterrence.
From the beginning of the 2022 Russian invasion of Ukraine, many Western hawks expressed the view that deterrence worked in that war, but only in one direction: in favor of Russia. Former US national security advisor John Bolton said: "Deterrence is working in the Ukraine crisis, just not for the right side. The United States and its allies failed to deter Russia from invading. The purpose of deterrence strategy is to prevent the conflict entirely, and there Washington failed badly. On the other hand, Russian deterrence is enjoying spectacular success. Russia has convinced the West that even a whisper of NATO military action in Ukraine would bring disastrous consequences. Putin threatens, blusters, uses the word 'nuclear', and the West wilts."
When Elon Musk prevented Ukraine from carrying out drone attacks on the Russian Black Sea Fleet by declining to enable the needed Starlink communications in Crimea, Anne Applebaum argued that Musk had been deterred by Russia after the country's ambassador warned him that an attack on Crimea would be met with a nuclear response. Later Ukrainian attacks on the same fleet using a different communications system had a deterrent effect of their own, this time on the Russian Navy.
Timo S. Koster, who served at NATO as Director of Defence Policy & Capabilities, argued similarly: "A massacre is taking place in Europe and the strongest military alliance in the world is staying out of it. We are deterred and Russia is not." Philip Breedlove, a retired four-star US Air Force general and a former SACEUR, said that Western fears about nuclear weapons and World War III have left the West "fully deterred" and Putin "completely undeterred", and that the West has "ceded the initiative to the enemy". Another expert observed that NATO made no attempt to deter Moscow with the threat of military force; on the contrary, it was Russia's deterrence that proved successful.
Cyber deterrence
Since the early 2000s, there has been an increased focus on cyber deterrence. Cyber deterrence has two meanings:
The use of cyber actions to deter other states
The deterrence of an adversary's cyber operations
Scholars have debated how cyber capabilities alter traditional understandings of deterrence, given that it may be harder to attribute responsibility for cyber attacks, barriers to entry may be lower, the risks and costs may be lower for actors who conduct cyber attacks, it may be harder to signal and interpret intentions, offense may hold an advantage over defense, and weak actors and non-state actors can develop considerable cyber capabilities. Scholars have also debated the feasibility of launching highly damaging cyber attacks and engaging in destructive cyber warfare, with most expressing skepticism that cyber capabilities have enhanced the ability of states to launch highly destructive attacks. The most prominent cyber attack to date is the Stuxnet attack on Iran's nuclear program. By 2019, the only publicly acknowledged case of a cyber attack causing a power outage was the 2015 Ukraine power grid hack.
There are various ways to engage in cyber deterrence:
Denial: preventing adversaries from achieving military objectives by defending against them
Punishment: the imposition of costs on the adversary
Norms: the establishment and maintenance of norms that establish appropriate standards of behavior
Escalation: raising the probability that costs will be imposed on the adversary
Entanglement and interdependence: interdependence between actors can have a deterrent effect
There is a risk of unintended escalation in cyberspace due to difficulties in discerning the intent of attackers, and complexities in state-hacker relationships. According to political scientists Joseph Brown and Tanisha Fazal, states frequently neither confirm nor deny responsibility for cyber operations so that they can avoid the escalatory risks (that come with public credit) while also signaling that they have cyber capabilities and resolve (which can be achieved if intelligence agencies and governments believe they were responsible).
According to Lennart Maschmeyer, cyber weapons have limited coercive effectiveness due to a trilemma "whereby speed, intensity, and control are negatively correlated. These constraints pose a trilemma for actors because a gain in one variable tends to produce losses across the other two variables."
Intrawar deterrence
Intrawar deterrence is deterrence within a war context. It means that war has broken out but actors still seek to deter certain forms of behavior. In the words of Caitlin Talmadge, "intra-war deterrence failures... can be thought of as causing wars to get worse in some way." Examples of intrawar deterrence include deterring adversaries from resorting to nuclear, chemical and biological weapons attacks or attacking civilian populations indiscriminately. Broadly, it involves any prevention of escalation.
Criticism
Deterrence failures
Deterrence theory has been criticized by numerous scholars for various reasons, the most basic being skepticism that decision makers are rational. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions: misestimations of perceived costs and benefits contribute to deterrence failures, as exemplified by the Russian invasion of Ukraine. Frozen conflicts can likewise be seen as rewarding aggression.
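The role of misestimation can be made concrete with the textbook expected-cost condition for deterrence, under which a threat deters when the challenger's perceived gain is smaller than the perceived probability of retaliation times its perceived cost. The following is a minimal sketch under that simplification; all variable names and numbers are hypothetical illustrations, not drawn from any particular case.

```python
# Simplified deterrence condition: the challenger refrains from attacking
# when perceived gain < perceived probability of retaliation * perceived cost,
# i.e. G < p * C. All values below are hypothetical illustrations.

def attack_is_deterred(gain, p_retaliate, cost):
    """True if the challenger's expected net payoff from attacking is negative."""
    return gain - p_retaliate * cost < 0

# Defender's view: the threat looks credible and costly enough to deter.
print(attack_is_deterred(gain=50, p_retaliate=0.8, cost=100))  # True: deterred

# Challenger's (mis)estimate: a higher perceived gain and doubts that the
# defender will actually retaliate. The same situation now yields an attack.
print(attack_is_deterred(gain=80, p_retaliate=0.3, cost=100))  # False: failure
```

A shift in perceived probabilities or payoffs, rather than any change in actual capabilities, is enough to flip the outcome, which is why misperception features so prominently in explanations of deterrence failure.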
Misprediction of behavior
Scholars have argued that leaders do not behave in ways consistent with the predictions of nuclear deterrence theory, and that rational deterrence theory does not grapple sufficiently with the emotions and psychological biases that make accidents, loss of self-control, and loss of control over others likely. Frank C. Zagare has argued that deterrence theory is logically inconsistent and empirically inaccurate. In place of classical deterrence, rational choice scholars have argued for perfect deterrence, which assumes that states may vary in their internal characteristics, especially in the credibility of their threats of retaliation.
Suicide attacks
Advocates of nuclear disarmament, such as Global Zero, have criticized nuclear deterrence theory. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have all called upon governments to embrace the vision of a world free of nuclear weapons and created the Nuclear Security Project to advance that agenda. In 2010, the four were featured in the documentary film Nuclear Tipping Point, in which they proposed steps to achieve nuclear disarmament. Kissinger has argued, "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. In a world of suicide bombers, that calculation doesn't operate in any comparable way." Shultz said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable."
Stronger deterrent
Paul Nitze argued in 1994 that nuclear weapons were obsolete in the "new world disorder" after the dissolution of the Soviet Union, and he advocated reliance on precision-guided munitions to secure a permanent military advantage over future adversaries.
Minimum deterrence
As opposed to the extreme form of deterrence represented by mutual assured destruction, minimum deterrence, in which a state possesses no more nuclear weapons than are necessary to deter an adversary from attacking, is presently the most common form of deterrence practiced by nuclear weapon states such as China, India, Pakistan, Britain, and France. Pursuing minimum deterrence during arms negotiations allows the United States and Russia to reduce their nuclear stockpiles without becoming vulnerable, but it has been noted that once minimum deterrence is reached, further reductions become undesirable, since they increase a state's vulnerability and give an adversary an incentive to expand its nuclear arsenal secretly.
France has developed and maintained its own nuclear deterrent under the belief that the United States will refuse to risk its own cities by assisting Western Europe in a nuclear war.
Ethical objections
In the post-Cold War era, philosophical objections to reliance upon deterrence theories have also been raised on purely ethical grounds. Scholars such as Robert L. Holmes have argued that the implementation of such theories is inconsistent with the fundamental deontological presumption that prohibits the killing of innocent life, and that they are consequently prima facie immoral. He also observes that deterrence theories serve to perpetuate a state of mutual assured destruction between nations over time, and argues that it is therefore both irrational and immoral to pursue international peace through a method that relies exclusively on the continuous development of new iterations of the very weapons it is designed to prohibit.
See also
Balance of terror
Chainstore paradox
Confidence-building measures
Decapitation strike
International relations
Launch on warning
Long Peace
N-deterrence
Nuclear blackmail
Nuclear ethics
Nuclear peace
Nuclear strategy
Nuclear terrorism
Nuclear warfare
Peace through strength
Prisoner's dilemma
Reagan Doctrine
Security dilemma
Tripwire force
Wargaming
Notes
References
Further reading
Shultz, George P., and James E. Goodby. 2015. The War That Must Never Be Fought. Hoover Press.
Freedman, Lawrence. 2004. Deterrence. New York: Polity Press.
Jervis, Robert, Richard N. Lebow, and Janice G. Stein. 1985. Psychology and Deterrence. Baltimore: Johns Hopkins University Press.
Morgan, Patrick. 2003. Deterrence Now. Cambridge University Press.
Paul, T. V., Patrick M. Morgan, and James J. Wirtz. 2009. Complex Deterrence: Strategy in the Global Age. University of Chicago Press.
Waltz, Kenneth N. 1990. "Nuclear Myths and Political Realities". The American Political Science Review 84(3): 731–746.
External links
Cold War policies
Cold War terminology
Geopolitical terminology
International relations theory
International security
Military strategy
Military ethics
Nuclear strategy
Nuclear warfare
Peace and conflict studies
Subfields of political science