| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,516,611 | https://en.wikipedia.org/wiki/Shadow%20Copy | Shadow Copy (also known as Volume Snapshot Service, Volume Shadow Copy Service or VSS) is a technology included in Microsoft Windows that can create backup copies or snapshots of computer files or volumes, even when they are in use. It is implemented as a Windows service called the Volume Shadow Copy service. A software VSS provider service is also included as part of Windows to be used by Windows applications. Shadow Copy technology requires either the Windows NTFS or ReFS filesystems in order to create and store shadow copies. Shadow Copies can be created on local and external (removable or network) volumes by any Windows component that uses this technology, such as when creating a scheduled Windows Backup or automatic System Restore point.
Overview
VSS operates at the block level of volumes.
A snapshot is a read-only point-in-time copy of the volume. Snapshots allow the creation of consistent backups of a volume, ensuring that the contents do not change and are not locked while the backup is being made.
The core component of shadow copy is the Volume Shadow Copy service, which initiates and oversees the snapshot creation process. The components that perform all the necessary data transfer are called providers. While Windows comes with a default System Provider, software and hardware vendors can create their own software or hardware providers and register them with Volume Shadow Copy service. Each provider has a maximum of 10 seconds' time to complete the snapshot generation.
Other components that are involved in the snapshot creation process are writers. The aim of Shadow Copy is to create consistent reliable snapshots. But sometimes, this cannot simply be achieved by completing all pending file change operations. Sometimes, it is necessary to complete a series of inter-related changes to several related files. For example, when a database application transfers a piece of data from one file to another, it needs to delete it from the source file and create it in the destination file. Hence, a snapshot must not be between the first deletion and the subsequent creation, or else it is worthless; it must either be before the deletion or after the creation. Enforcing this semantic consistency is the duty of writers. Each writer is application-specific and has 60 seconds to establish a backup-safe state before providers start snapshot creation. If the Volume Shadow Copy service does not receive acknowledgement of success from the corresponding writers within this time-frame, it fails the operation.
By default, snapshots are temporary; they do not survive a reboot. The ability to create persistent snapshots was added from Windows Server 2003 onward. Windows 8 removed the GUI portion necessary to browse them, but it was restored in later Windows versions.
Windows software and services that support VSS include Windows Failover Cluster, Windows Server Backup, Hyper-V, Virtual Server, Active Directory, SQL Server, Exchange Server and SharePoint.
The end result is similar to a versioning file system, allowing any file to be retrieved as it existed at the time any of the snapshots was made. Unlike a true versioning file system, however, users cannot trigger the creation of new versions of an individual file, only the entire volume. As a side-effect, whereas the owner of a file can create new versions in a versioning file system, only a system administrator or a backup operator can create new snapshots (or control when new snapshots are taken), because this requires control of the entire volume rather than an individual file. Also, many versioning file systems (such as the one in VMS) implicitly save a version of files each time they are changed; systems using a snapshotting approach like Windows only capture the state periodically.
History
Windows XP and Server 2003
Volume Snapshot Service was first added to Microsoft Windows in Windows XP. It can only create temporary snapshots, used for accessing stable on-disk versions of files that are open for editing (and therefore locked). This version of VSS is used by NTBackup.
The creation of persistent snapshots (which remain available across reboots until specifically deleted) was added in Windows Server 2003, allowing up to 512 snapshots to exist simultaneously for the same volume. In Windows Server 2003, VSS is used to create periodic incremental snapshots of changed files over time. A maximum of 64 snapshots are stored on the server and are accessible to clients over the network. This feature is known as Shadow Copies for Shared Folders and is designed for a client–server model. Its client component is included with Windows XP SP2 or later, and is available for installation on Windows 2000 SP3 or later, as well as Windows XP RTM or SP1.
Windows XP and later include a command line utility called vssadmin that can list, create or delete volume shadow copies and list installed shadow copy writers and providers.
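For illustration, a few representative invocations (run from an elevated command prompt; creating shadow copies with vssadmin is limited to the server editions of Windows, so this is a sketch rather than a complete reference):

vssadmin list shadows
vssadmin list writers
vssadmin list providers
vssadmin create shadow /for=C:
vssadmin delete shadows /for=C: /oldest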
Windows Vista, 7 and Server 2008
Microsoft updated a number of Windows components to make use of Shadow Copy. Backup and Restore in Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2 use shadow copies of files in both file-based and sector-by-sector backup. The System Protection component uses VSS when creating and maintaining periodic copies of system and user data on the same local volume (similar to the Shadow Copies for Shared Folders feature in Windows Server); VSS allows such data to be locally accessed by System Restore.
System Restore allows reverting to an entire previous set of shadow copies called a restore point.
Prior to Windows Vista, System Restore depended on a file-based filter that watched for changes to files with a certain set of extensions, and then copied files before they were overwritten. In addition, a part of Windows Explorer called Previous Versions allows restoring individual files or folders locally from restore points as they existed at the time of the snapshot, thus retrieving an earlier version of a file or recovering a file deleted by mistake.
Finally, Windows Server 2008 introduces the diskshadow utility which exposes VSS functionality through 20 different commands.
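As a rough sketch of an interactive session (assuming a server edition where diskshadow is present), creating and then listing a persistent shadow copy might look like:

diskshadow
DISKSHADOW> set context persistent
DISKSHADOW> add volume C:
DISKSHADOW> create
DISKSHADOW> list shadows all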
The system creates shadow copies automatically once per day, or when triggered by the backup utility or installer applications which create a restore point. The "Previous Versions" feature is available in the Business, Enterprise, and Ultimate editions of Windows Vista and in all Windows 7 editions. The Home Editions of Vista lack the "Previous Versions" feature, even though the Volume Snapshot Service is included and running. Using third-party tools it is still possible to restore previous versions of files on the local volume.
Some of these tools also allow users to schedule snapshots at user-defined intervals, configure the storage used by volume-shadow copies and compare files or directories from different points-in-time using snapshots.
Windows 7 also adds native support through a GUI to configure the storage used by volume-shadow copies.
Windows 8 and Server 2012
While supporting persistent shadow copies, Windows 8 lacks the GUI portion necessary to browse them; therefore the ability to browse, search or recover older versions of files via the Previous Versions tab of the Properties dialog of files was removed for local volumes. However, using third party tools (such as ShadowExplorer) it is possible to recover that functionality. The feature is fully available in Windows Server 2012.
Windows 10
Windows 10 restored the Previous Versions tab that was removed in Windows 8; however, in earlier builds it depended upon the File History feature instead of Volume Shadow copy. Current builds now allow restoration from both File History and System Protection (System Restore) points, which use Volume Shadow Copy.
Windows 11
Windows 11 retains the same Previous Versions and File History feature introduced in Windows 10, although it is disabled by default.
Samba Server
Samba on Linux can provide a Shadow Copy Service on LVM-backed storage or on an underlying ZFS or Btrfs filesystem.
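As a rough illustration only (the exact parameters depend on the Samba version and on how snapshots are created and named on the underlying filesystem), a share can expose snapshots to Windows clients' Previous Versions tab through the shadow_copy2 VFS module in smb.conf:

[data]
    path = /srv/data
    vfs objects = shadow_copy2
    shadow:snapdir = .snapshots
    shadow:sort = desc
    shadow:format = @GMT-%Y.%m.%d-%H.%M.%S

Here the snapshot directory name and timestamp format are assumptions; they must match the snapshot layout actually produced by the LVM, ZFS or Btrfs tooling in use.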
Compatibility
While the different NTFS versions have a certain degree of both forward and backward compatibility, there are certain issues when mounting newer NTFS volumes containing persistent shadow copies in older versions of Windows. This affects dual-booting, and external portable hard drives. Specifically, the persistent shadow copies created by Windows Vista on an NTFS volume are deleted when Windows XP or Windows Server 2003 mount that NTFS volume. This happens because the older operating system does not understand the newer format of persistent shadow copies. Likewise, System Restore snapshots created by Windows 8 are deleted if they are exposed to a previous version of Windows.
See also
List of Microsoft Windows components
Snapshot (computer storage)
Copy-on-write
References
Further reading
Windows services
Windows administration | Shadow Copy | [
"Technology"
] | 1,757 | [
"Windows commands",
"Computing commands"
] |
1,516,624 | https://en.wikipedia.org/wiki/Chemical%20table%20file | Chemical table file (CT file) is a family of text-based chemical file formats that describe molecules and chemical reactions. One format, for example, lists each atom in a molecule, the x-y-z coordinates of that atom, and the bonds among the atoms.
File formats
There are several file formats in the family.
The formats were created by MDL Information Systems (MDL), which was acquired by Symyx Technologies; Symyx in turn merged with Accelrys Corp., now called BIOVIA, a subsidiary of Dassault Systèmes of the Dassault Group.
The CT file is an open format. BIOVIA publishes its specification. BIOVIA requires users to register to download the CT file format specifications.
Molfile
An MDL Molfile is a file format for holding information about the atoms, bonds, connectivity and coordinates of a molecule.
The molfile consists of some header information, the Connection Table (CT) containing atom info, then bond connections and types, followed by sections for more complex information.
The molfile is sufficiently common that most, if not all, cheminformatics software systems/applications are able to read the format, though not always to the same degree. It is also supported by some computational software such as Mathematica.
The current de facto standard version is molfile V2000, although, more recently, the V3000 format has been circulating widely enough to present a potential compatibility issue for those applications that are not yet V3000-capable.
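As an illustration, a hand-written V2000 molfile for water could look roughly as follows (the real format uses fixed-width fields, so the column positions here are approximate and should be checked against the published specification before being relied on):

water
  example

  3  2  0  0  0  0  0  0  0  0999 V2000
    0.0000    0.0000    0.0000 O   0  0  0  0  0  0  0  0  0  0  0  0
    0.7580    0.5860    0.0000 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.7580    0.5860    0.0000 H   0  0  0  0  0  0  0  0  0  0  0  0
  1  2  1  0  0  0  0
  1  3  1  0  0  0  0
M  END

The three header lines give the molecule name, a (usually program-generated) metadata line and a comment; the counts line declares 3 atoms and 2 bonds; the atom block lists x, y, z coordinates and element symbols; the bond block lists pairs of atom indices with a bond type (1 = single).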
Counts line block specification
Bond block specification
The Bond Block is made up of bond lines, one line per bond, with the following format:
111 222 ttt sss xxx rrr ccc
where the values are described in the following table:
Extended Connection Table (V3000)
The extended (V3000) molfile consists of a regular molfile “no structure” followed by a single molfile appendix that contains the body of the connection table (Ctab). The following figure shows both an alanine structure and the extended molfile corresponding to it.
Note that the “no structure” is flagged with the “V3000” instead of the “V2000” version stamp. There are two other changes to the header in addition to the version:
The number of appendix lines is always written as 999, regardless of how many there actually are. (All current readers will disregard the count and stop at M END.)
The “dimensional code” is maintained more explicitly. Thus “3D” really means 3D, although “2D” will be interpreted as 3D if any non-zero Z-coordinates are found.
Unlike the V2000 molfile, the V3000 extended Rgroup molfile has the same header format as a non-Rgroup molfile.
Counts line
A counts line is required, and must be first. It specifies the number of atoms, bonds, 3D objects, and Sgroups. It also specifies whether or not the CHIRAL flag is set. Optionally, the counts line can specify molregno. This is only used when the regno exceeds 999999 (the limit of the format in the molfile header line). The format of the counts line is:
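Paraphrased from memory of the published CTfile specification (verify against the BIOVIA documentation before relying on it), the V3000 counts line has the shape:

M  V30 COUNTS na nb nsg n3d chiral [REGNO=regno]

where na, nb, nsg and n3d are the numbers of atoms, bonds, Sgroups and 3D objects, and chiral is 1 when the CHIRAL flag is set and 0 otherwise.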
SDF
SDF is one of a family of chemical-data file formats developed by MDL; it is intended especially for structural information. "SDF" stands for structure-data format, and SDF files actually wrap the molfile (MDL Molfile) format. Multiple records are delimited by lines consisting of four dollar signs ($$$$). A key feature of this format is its ability to include associated data.
Associated data items are denoted as follows:
> <Unique_ID>
XCA3464366
> <ClogP>
5.825
> <Vendor>
Sigma
> <Molecular Weight>
499.611
Multiple-line data items are also supported. The MDL SDF-format specification requires that a hard-carriage-return character be inserted if a single line of any text field exceeds 200 characters. This requirement is frequently violated in practice, as many SMILES and InChI strings exceed that length.
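Because the record delimiter ($$$$) and the tag syntax are simple, an SD file can be split with ordinary string handling. The following Python sketch assumes a well-formed file and a hypothetical file name compounds.sdf; it is illustrative only, and production code would normally use a cheminformatics toolkit instead:

def read_sdf(path):
    """Yield (molfile_text, properties) for each record of an SD file."""
    with open(path) as handle:
        content = handle.read()
    # Records are separated by a line of four dollar signs.
    for record in content.split("$$$$"):
        if not record.strip():
            continue
        # The molfile part ends at "M  END"; associated data items follow it.
        mol_part, _, data_part = record.partition("M  END")
        properties = {}
        lines = iter(data_part.splitlines())
        for line in lines:
            if line.startswith(">") and "<" in line:
                tag = line[line.find("<") + 1:line.rfind(">")]
                values = []
                for value_line in lines:  # read the value up to the blank line
                    if not value_line.strip():
                        break
                    values.append(value_line)
                properties[tag] = "\n".join(values)
        yield mol_part + "M  END", properties

# Example: print two of the data items shown above for every record.
for mol, props in read_sdf("compounds.sdf"):
    print(props.get("Unique_ID"), props.get("Molecular Weight"))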
Other formats of the family
There are other, less commonly used formats of the family:
RXNFile - for representing a single chemical reaction;
RDFile - for representing a list of records with associated data. Each record can contain chemical structures, reactions, textual and tabular data;
RGFile - for representing the Markush structures (deprecated, Molfile V3000 can represent Markush structures);
XDFile - for representing chemical information in XML format.
See also
Chemical file format#Converting between formats
References
External links
Adroit Repository paid software to process SD files (SDF) from Adroit DI.
SDF Toolkit free software to process SD files (SDF).
NCI/CADD Chemical Identifier Resolver generates SD files (SDF) from chemical names, CAS Registry Numbers, SMILES, InChI, InChIKey, ....
KNIME free software to manipulate data and do datamining, can also read and write SD files (SDF).
Comparative Toxicology Dashboard service provided by the Environmental Protection Agency (EPA) which generates SD files (SDF) from chemical names, CAS Registry Numbers, SMILES, InChI, InChIKey, ...
Computational chemistry
Chemical file formats | Chemical table file | [
"Chemistry"
] | 1,120 | [
"Theoretical chemistry",
"Computational chemistry",
"Chemistry software",
"Chemical file formats"
] |
1,516,694 | https://en.wikipedia.org/wiki/Long-range%20dependence | Long-range dependence (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence of two points with increasing time interval or spatial distance between the points. A phenomenon is usually considered to have long-range dependence if the dependence decays more slowly than an exponential decay, typically a power-like decay. LRD is often related to self-similar processes or fields. LRD has been used in various fields such as internet traffic modelling, econometrics, hydrology, linguistics and the earth sciences. Different mathematical definitions of LRD are used for different contexts and purposes.
Short-range dependence versus long-range dependence
One way of characterising long-range and short-range dependent stationary process is in terms of their autocovariance functions. For a short-range dependent process, the coupling between values at different times decreases rapidly as the time difference increases. Either the autocovariance drops to zero after a certain time-lag, or it eventually has an exponential decay. In the case of LRD, there is much stronger coupling. The decay of the autocovariance function is power-like and so is slower than exponential.
A second way of characterizing long- and short-range dependence is in terms of the variance of the partial sum of consecutive values. For short-range dependence, the variance typically grows in proportion to the number of terms. For LRD, the variance of the partial sum increases more rapidly, often as a power function of the number of terms with exponent greater than 1. A way of examining this behavior uses the rescaled range. This aspect of long-range dependence is important in the design of dams on rivers for water resources, where the summations correspond to the total inflow to the dam over an extended period.
The above two ways are mathematically related to each other, but they are not the only ways to define LRD. In the case where the autocovariance of the process does not exist (heavy tails), one has to find other ways to define what LRD means, and this is often done with the help of self-similar processes.
The Hurst parameter H is a measure of the extent of long-range dependence in a time series (while it has another meaning in the context of self-similar processes). H takes on values from 0 to 1. A value of 0.5 indicates the absence of long-range dependence. The closer H is to 1, the greater the degree of persistence or long-range dependence. Values of H less than 0.5 correspond to anti-persistency which, as the opposite of LRD, indicates strong negative correlation, so that the process fluctuates violently.
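One common way of making this precise (one of several definitions in use) relates the autocovariance and the partial sums to the Hurst parameter. For a stationary LRD process with 1/2 < H < 1,

\gamma(k) \sim c_{\gamma}\, k^{2H-2} \quad (k \to \infty), \qquad \operatorname{Var}\!\left( \sum_{i=1}^{n} X_i \right) \sim c\, n^{2H} \quad (n \to \infty),

so that the autocovariances are not summable; for a short-range dependent process they are absolutely summable and the variance of the partial sum grows only linearly in n.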
Estimation of the Hurst parameter
Slowly decaying variances, LRD, and a spectral density obeying a power-law are different manifestations of the property of the underlying covariance of a stationary process. Therefore, it is possible to approach the problem of estimating the Hurst parameter from three different angles (a rough numerical sketch of the first approach appears after the list below):
Variance-time plot: based on the analysis of the variances of the aggregate processes
R/S statistics: based on the time-domain analysis of the rescaled adjusted range
Periodogram: based on a frequency-domain analysis
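A minimal numerical sketch of the first approach (aggregated variances), assuming an evenly sampled series and ignoring the bias corrections used in serious estimators:

import numpy as np

def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Crude Hurst estimate: the variance of block means scales like m**(2H - 2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        block_means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(block_means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)  # slope is approximately 2H - 2
    return 1 + slope / 2

# For uncorrelated noise the estimate should sit near H = 0.5.
print(hurst_aggregated_variance(np.random.normal(size=100_000)))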
Relation to self-similar processes
Given a stationary LRD sequence, the partial sum, viewed as a process indexed by the number of terms and properly scaled, is asymptotically a self-similar process with stationary increments; the most typical example is fractional Brownian motion. Conversely, given a self-similar process with stationary increments and Hurst index H > 0.5, its increments (consecutive differences of the process) form a stationary LRD sequence.
This also holds true if the sequence is short-range dependent, but in this case the self-similar process resulting from the partial sum can only be Brownian motion (H = 0.5).
Models
Among stochastic models that are used for long-range dependence, some popular ones are autoregressive fractionally integrated moving average models, which are defined for discrete-time processes, while continuous-time models might start from fractional Brownian motion.
See also
Long-tail traffic
Traffic generation model
Detrended fluctuation analysis
Tweedie distributions
Fractal dimension
Hurst exponent
Notes
Further reading
Autocorrelation
Teletraffic
Time series
Spatial analysis | Long-range dependence | [
"Physics"
] | 917 | [
"Spacetime",
"Space",
"Spatial analysis"
] |
1,516,712 | https://en.wikipedia.org/wiki/Arcetri%20Observatory | The Arcetri Observatory () is an astrophysical observatory located in the hilly area of Arcetri on the outskirts of Florence, Italy. It is located close to Villa Il Gioiello, the residence of Galileo Galilei from 1631 to 1642.
Observatory staff carry out theoretical and observational astronomy as well as designing and constructing astronomical instrumentation. The observatory has been heavily involved with the following instrumentation projects:
The MMT 6.5 m telescope
The LBT 2x 8.4 m telescopes
The Telescopio Nazionale Galileo 3.5 m telescope
The VLT telescope adaptive secondary mirror
The 1.5 m Gornergrat Infrared Telescope (TIRGO)
See also
List of solar telescopes
References
External links
Osservatorio Astrofisico di Arcetri English website
The observatory on Google maps
Arcetri
Buildings and structures in Florence | Arcetri Observatory | [
"Astronomy"
] | 177 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
1,516,915 | https://en.wikipedia.org/wiki/Swine%20influenza | Swine influenza is an infection caused by any of several types of swine influenza viruses. Swine influenza virus (SIV) or swine-origin influenza virus (S-OIV) refers to any strain of the influenza family of viruses that is endemic in pigs. As of 2009, identified SIV strains include influenza C and the subtypes of influenza A known as H1N1, H1N2, H2N1, H3N1, H3N2, and H2N3.
The swine influenza virus is common throughout pig populations worldwide. Transmission of the virus from pigs to humans is rare and does not always lead to human illness, often resulting only in the production of antibodies in the blood. If transmission causes human illness, it is called a zoonotic swine flu. People with regular exposure to pigs are at increased risk of swine flu infections.
Around the mid-20th century, the identification of influenza subtypes was made possible, allowing accurate diagnosis of transmission to humans. Since then, only 50 such transmissions have been confirmed. These strains of swine flu rarely pass from human to human. Symptoms of zoonotic swine flu in humans are similar to those of influenza and influenza-like illness and include chills, fever, sore throat, muscle pains, severe headache, coughing, weakness, shortness of breath, and general discomfort.
It is estimated that, in the 2009 flu pandemic, 11–21% of the then global population (of about 6.8 billion), equivalent to around 700 million to 1.4 billion people, contracted the illness—more, in absolute terms, than the Spanish flu pandemic. There were 18,449 confirmed fatalities. However, in a 2012 study, the CDC estimated more than 284,000 possible fatalities worldwide, with numbers ranging from 150,000 to 575,000.
In August 2010, the World Health Organization declared the swine flu pandemic officially over.
Subsequent cases of swine flu were reported in India in 2015, with over 31,156 positive test cases and 1,841 deaths.
Signs and symptoms
In pigs, a swine influenza infection produces fever, lethargy, discharge from the nose or eyes, sneezing, coughing, difficulty breathing, eye redness or inflammation, and decreased appetite. In some cases, the infection can cause miscarriage. However, infected pigs may not exhibit any symptoms. Although mortality is usually low (around 1–4%), the virus can cause weight loss and poor growth, in turn causing economic loss to farmers. Infected pigs can lose up to 12 pounds of body weight over a three- to four-week period. Influenza A is responsible for infecting swine and was first identified in 1918. Because both avian and mammalian influenza viruses can bind to receptors in pigs, pigs have often been seen as "mixing vessels", facilitating the evolution of strains that can be passed on to other mammals, such as humans.
Humans
Direct transmission of a swine flu virus from pigs to humans is possible (zoonotic swine flu). Fifty cases are known to have occurred since the first report in medical literature in 1958, which have resulted in a total of six deaths. Of these six people, one was pregnant, one had leukemia, one had Hodgkin's lymphoma, and two were known to be previously healthy. No medical history was reported for the remaining case. The true rate of infection may be higher, as most cases only cause a very mild disease and may never be reported or diagnosed.
According to the United States Centers for Disease Control and Prevention (CDC), in humans the symptoms of the 2009 "swine flu" H1N1 virus are similar to influenza and influenza-like illness. Symptoms include fever, cough, sore throat, watery eyes, body aches, shortness of breath, headache, weight loss, chills, sneezing, runny nose, coughing, dizziness, abdominal pain, lack of appetite, and fatigue. During the 2009 outbreak, an elevated percentage of patients reported diarrhea and vomiting.
Because these symptoms are not specific to swine flu, a differential diagnosis of probable swine flu requires not only symptoms, but also a high likelihood of swine flu due to the person's recent and past medical history. For example, during the 2009 swine flu outbreak in the United States, the CDC advised physicians to "consider swine influenza infection in the differential diagnosis of patients with acute febrile respiratory illness who have either been in contact with persons with confirmed swine flu, or who were in one of the five U.S. states that have reported swine flu cases or in Mexico during the seven days preceding their illness onset." A diagnosis of confirmed swine flu requires laboratory testing of a respiratory sample (a simple nose and throat swab).
The most common cause of death is respiratory failure. Other causes of death are pneumonia (leading to sepsis), high fever (leading to neurological problems), dehydration (from excessive vomiting and diarrhea), electrolyte imbalance and kidney failure. Fatalities are more likely in young children and the elderly.
Virology
Transmission
Between pigs
Influenza is common in pigs. About half of breeding pigs in the USA have been exposed to the virus. Antibodies to the virus are also common in pigs in other countries.
The main route of transmission is through direct contact between infected and uninfected animals. These close contacts are particularly common during animal transport. Intensive farming may also increase the risk of transmission, as the pigs are raised in very close proximity to each other. Direct transfer of the virus probably occurs through pigs touching noses or through dried mucus. Airborne transmission through the aerosols produced by pigs coughing or sneezing is also an important means of infection. The virus usually spreads quickly through a herd, infecting all the pigs within just a few days. Transmission may also occur through wild animals, such as wild boar, which can spread the disease between farms.
To humans
People who work with poultry and swine, especially those with intense exposures, are at increased risk of zoonotic infection with influenza virus endemic in these animals, and constitute a population of human hosts in which zoonosis and reassortment can co-occur. Vaccination of these workers against influenza and surveillance for new influenza strains among this population may therefore be an important public health measure. Transmission of influenza from swine to humans who work with swine was documented in a small surveillance study performed in 2004 at the University of Iowa. This study, among others, forms the basis of a recommendation that people whose jobs involve handling poultry and swine be the focus of increased public health surveillance. Other professions at particular risk of infection are veterinarians and meat processing workers, although the risk of infection for both of these groups is lower than that of farm workers.
Interaction with avian H5N1 in pigs
Pigs are unusual because they can be infected with influenza strains that usually infect three different species: pigs, birds, and humans. Within pigs, influenza viruses may exchange genes and produce novel strains. Avian influenza virus H3N2 is endemic in pigs in China and has been detected in pigs in Vietnam, increasing fears of the emergence of new variant strains. H3N2 evolved from H2N2 by antigenic shift. In August 2004, researchers in China found H5N1 in pigs.
These H5N1 infections may be common. In a survey of 10 apparently healthy pigs housed near poultry farms in West Java, where avian flu had broken out, five of the pig samples contained the H5N1 virus. The Indonesian government found similar results in the same region, though additional tests of 150 pigs outside the area were negative.
Structure
The influenza virion is roughly spherical. It is an enveloped virus; the outer layer is a lipid membrane which is taken from the host cell in which the virus multiplies. Inserted into the lipid membrane are glycoprotein "spikes" of hemagglutinin (HA) and neuraminidase (NA). The combination of HA and NA proteins determine the subtype of influenza virus (A/H1N1, for example). HA and NA are important in the immune response against the virus, and antibodies against these spikes may protect against infection. The antiviral drugs Relenza and Tamiflu target NA by inhibiting neuraminidase and preventing the release of viruses from host cells. Also embedded in the lipid membrane is the M2 protein, which is the target of the antiviral adamantanes amantadine and rimantadine.
Classification
Of the three genera of influenza viruses that cause human flu, two also cause influenza in pigs, with influenza A being common in pigs and influenza C being rare. Influenza B has not been reported in pigs. Within influenza A and influenza C, the strains found in pigs and humans are largely distinct, although because of reassortment there have been transfers of genes among strains crossing swine, avian, and human species boundaries.
Influenza C
Influenza C viruses infect both humans and pigs, but do not infect birds. Transmission between pigs and humans has occurred in the past. For example, influenza C caused small outbreaks of a mild form of influenza amongst children in Japan and California. As a result of the limited host range and lack of genetic diversity in influenza C, this form of influenza does not cause pandemics in humans.
Influenza A
Swine influenza is caused by influenza A subtypes H1N1, H1N2, H2N3, H3N1, and H3N2. In pigs, four influenza A virus subtypes (H1N1, H1N2, H3N2 and H7N9) are the most common strains worldwide. In the United States, the H1N1 subtype was exclusively prevalent among swine populations before 1998. Since late August 1998, H3N2 subtypes have been isolated from pigs. As of 2004, H3N2 virus isolates in US swine and turkey stocks were triple reassortants, containing genes from human (HA, NA, and PB1), swine (NS, NP, and M), and avian (PB2 and PA) lineages. In August 2012, the Center for Disease Control and Prevention confirmed 145 human cases (113 in Indiana, 30 in Ohio, one in Hawaii and one in Illinois) of H3N2v since July 2012. The death of a 61-year-old Madison County, Ohio woman is the first in the USA associated with a new swine flu strain. She contracted the illness after having contact with hogs at the Ross County Fair.
Diagnosis
The CDC recommends real-time PCR as the method of choice for diagnosing H1N1. The oral or nasal fluid collection and RNA virus-preserving filter-paper card is commercially available. This method allows a specific diagnosis of novel influenza (H1N1) as opposed to seasonal influenza. Near-patient point-of-care tests are in development.
Prevention
Prevention of swine influenza has three components: prevention in pigs, prevention of transmission to humans, and prevention of its spread among humans. Proper handwashing techniques can prevent the virus from spreading. Individuals can prevent infection by not touching the eyes, nose, or mouth, distancing from others who display symptoms of the cold or flu, and avoiding contact with others when displaying symptoms.
Swine
Methods of preventing the spread of influenza among swine include facility management, herd management, and vaccination. Because much of the illness and death associated with swine flu involves secondary infection by other pathogens, control strategies that rely on vaccination may be insufficient.
Control of swine influenza by vaccination has become more difficult in recent decades, as the evolution of the virus has resulted in inconsistent responses to traditional vaccines. Standard commercial swine flu vaccines are effective in controlling the infection when the virus strains match enough to have significant cross-protection, and custom (autogenous) vaccines made from the specific viruses isolated are created and used in the more difficult cases.
Present vaccination strategies for SIV control and prevention in swine farms typically include the use of one of several bivalent SIV vaccines commercially available in the United States. Of the 97 recent H3N2 isolates examined, only 41 isolates had strong serologic cross-reactions with antiserum to three commercial SIV vaccines. Since the protective ability of influenza vaccines depends primarily on the closeness of the match between the vaccine virus and the epidemic virus, the presence of nonreactive H3N2 SIV variants suggests current commercial vaccines might not effectively protect pigs from infection with a majority of H3N2 viruses. The United States Department of Agriculture researchers say while pig vaccination keeps pigs from getting sick, it does not block infection or shedding of the virus.
Facility management includes using disinfectants and ambient temperature to control viruses in the environment. They are unlikely to survive outside living cells for more than two weeks, except in cold (but above freezing) conditions, and are readily inactivated by disinfectants. Herd management includes not adding pigs carrying influenza to herds that have not been exposed to the virus. The virus survives in healthy carrier pigs for up to three months and can be recovered from them between outbreaks. Carrier pigs are usually responsible for the introduction of SIV into previously uninfected herds and countries, so new animals should be quarantined. After an outbreak, as immunity in exposed pigs wanes, new outbreaks of the same strain can occur.
Humans
Prevention of pig-to-human transmission
Swine can be infected by both avian and human flu strains of influenza, and therefore are hosts where the antigenic shifts can occur that create new influenza strains.
The transmission from swine to humans is believed to occur mainly in swine farms, where farmers are in close contact with live pigs. Although strains of swine influenza are usually not able to infect humans, it may occasionally happen, so farmers and veterinarians are encouraged to use face masks when dealing with infected animals. The use of vaccines on swine to prevent their infection is a major method of limiting swine-to-human transmission. Risk factors that may contribute to the swine-to-human transmission include smoking and, especially, not wearing gloves when working with sick animals, thereby increasing the likelihood of subsequent hand-to-eye, hand-to-nose, or hand-to-mouth transmission.
Prevention of human-to-human transmission
Influenza spreads between humans when infected people cough or sneeze, then other people breathe in the virus or touch something with the virus on it and then touch their own face. The CDC warned against touching mucosal membranes such as the eyes, nose, or mouth during the 2009 H1N1 pandemic, as these are common entry points for flu viruses. Swine flu cannot be spread by pork products, since the virus is not transmitted through food. The swine flu in humans is most contagious during the first five days of the illness, although some people, most commonly children, can remain contagious for up to ten days. Diagnosis can be made by sending a specimen, collected during the first five days, for analysis.
Recommendations to prevent the spread of the virus among humans include using standard infection control, which includes frequent washing of hands with soap and water or with alcohol-based hand sanitizers, especially after being out in public. Chance of transmission is also reduced by disinfecting household surfaces, which can be done effectively with a diluted chlorine bleach solution.
Influenza can spread in coughs or sneezes, but an increasing body of evidence shows small droplets containing the virus can linger on tabletops, telephones, and other surfaces and be transferred via the fingers to the eyes, nose, or mouth. Alcohol-based gel or foam hand sanitizers work well to destroy viruses and bacteria. Anyone with flu-like symptoms, such as a sudden fever, cough, or muscle aches, should stay away from work or public transportation and should contact a doctor for advice.
Social distancing can be another infection control tactic. Individuals should avoid other people who might be infected or if infected themselves isolate from others for the duration of the infection. During active outbreaks, avoiding large gatherings, increasing physical distance in public places, or if possible remaining at home as much as is feasible can prevent further spread of disease. Public health and other responsible authorities have action plans which may request or require social distancing actions, depending on the severity of the outbreak.
Vaccination
Vaccines are available for different kinds of swine flu. The U.S. Food and Drug Administration (FDA) approved the new swine flu vaccine for use in the United States on September 15, 2009. Studies by the National Institutes of Health show a single dose creates enough antibodies to protect against the virus within about 10 days.
In the aftermath of the 2009 pandemic, several studies were conducted to see which population groups were most likely to have received an influenza vaccine. These studies demonstrated that caucasians are much more likely to be vaccinated for seasonal influenza and for the H1N1 strain than African Americans. This could be due to several factors. Historically, there has been mistrust of vaccines and of the medical community from African Americans. Many African Americans do not believe vaccines or doctors to be effective. This mistrust stems from the exploitation of the African American communities during studies like the Tuskegee study. Additionally, vaccines are typically administered in clinics, hospitals, or doctor's offices. Many people of lower socioeconomic status are less likely to receive vaccinations because they do not have health insurance.
Surveillance
Although there is no formal national surveillance system in the United States to determine what viruses are circulating in pigs, an informal surveillance network in the United States is part of a world surveillance network.
Treatment
Swine
As swine influenza is rarely fatal to pigs, little treatment beyond rest and supportive care is required. Instead, veterinary efforts are focused on preventing the spread of the virus throughout the farm or to other farms. Vaccination and animal management techniques are most important in these efforts. Antibiotics are also used to treat the disease, which, although they have no effect against the influenza virus, do help prevent bacterial pneumonia and other secondary infections in influenza-weakened herds.
In Europe the avian-like H1N1 and the human-like H3N2 and H1N2 are the most common influenza subtypes in swine, of which avian-like H1N1 is the most frequent. Since 2009 another subtype, pdmH1N1(2009), emerged globally and also in European pig population. The prevalence varies from country to country but all of the subtypes are continuously circulating in swine herds.
In the EU region whole-virus vaccines are available which are inactivated and adjuvanted. Vaccination of sows is common practice and reveals also a benefit to young pigs by prolonging the maternally level of antibodies. Several commercial vaccines are available including a trivalent one being used in sow vaccination and a vaccine against pdmH1N1(2009). In vaccinated sows multiplication of viruses and virus shedding are significantly reduced.
Humans
If a human becomes sick with swine flu, antiviral drugs can make the illness milder and make the patient feel better faster. They may also prevent serious flu complications. For treatment, antiviral drugs work best if started soon after getting sick (within two days of symptoms). Beside antivirals, supportive care at home or in a hospital focuses on controlling fevers, relieving pain and maintaining fluid balance, as well as identifying and treating any secondary infections or other medical problems. The U.S. Centers for Disease Control and Prevention recommends the use of oseltamivir (Tamiflu) or zanamivir (Relenza) for the treatment and/or prevention of infection with swine influenza viruses; however, the majority of people infected with the virus make a full recovery without requiring medical attention or antiviral drugs. The virus isolated in the 2009 outbreak have been found resistant to amantadine and rimantadine.
History
Pandemics
Swine influenza was first proposed to be a disease related to human flu during the 1918 flu pandemic, when pigs became ill at the same time as humans. The first identification of an influenza virus as a cause of disease in pigs occurred about ten years later, in 1930. For the following 60 years, swine influenza strains were almost exclusively H1N1. Then, between 1997 and 2002, new strains of three different subtypes and five different genotypes emerged as causes of influenza among pigs in North America. In 1997–1998, H3N2 strains emerged. These strains, which include genes derived by reassortment from human, swine and avian viruses, have become a major cause of swine influenza in North America. Reassortment between H1N1 and H3N2 produced H1N2. In 1999 in Canada, a strain of H4N6 crossed the species barrier from birds to pigs, but was contained on a single farm.
The H1N1 form of swine flu is one of the descendants of the strain that caused the 1918 flu pandemic. As well as persisting in pigs, the descendants of the 1918 virus have also circulated in humans through the 20th century, contributing to the normal seasonal epidemics of influenza. However, direct transmission from pigs to humans is rare, with only 12 recorded cases in the U.S. since 2005. Nevertheless, the retention of influenza strains in pigs after these strains have disappeared from the human population might make pigs a reservoir where influenza viruses could persist, later emerging to reinfect humans once human immunity to these strains has waned.
Swine flu has been reported numerous times as a zoonosis in humans, usually with limited distribution, rarely with a widespread distribution. Outbreaks in swine are common and cause significant economic losses in industry, primarily by causing stunting and extended time to market. For example, this disease costs the British meat industry about £65 million every year.
1918
The 1918 flu pandemic in humans was associated with H1N1 and influenza appearing in pigs; this may reflect a zoonosis either from swine to humans, or from humans to swine. Although it is not certain in which direction the virus was transferred, some evidence suggests that in this case pigs caught the disease from humans. For instance, swine influenza was only noted as a new disease of pigs in 1918 after the first large outbreaks of influenza amongst people. Although a recent phylogenetic analysis of more recent strains of influenza in humans, birds, and other animals including swine suggests the 1918 outbreak in humans followed a reassortment event within a mammal, the exact origin of the 1918 strain remains elusive. It is estimated that anywhere from 50 to 100 million people were killed worldwide.
U.S. 2009
The swine flu was initially seen in the US in April 2009, where the particular virus strain was a mixture of three types of strains. Six of its genes are very similar to the H1N2 influenza virus that was found in pigs around 2000.
Outbreaks
1976 U.S.
On February 5, 1976, a United States army recruit at Fort Dix said he felt tired and weak. He died the next day, and four of his fellow soldiers were later hospitalized. Two weeks after his death, health officials announced the cause of death was a new strain of swine flu. The strain, a variant of H1N1, is known as A/New Jersey/1976 (H1N1). It was detected only from January 19 to February 9 and did not spread beyond Fort Dix.
This new strain appeared to be closely related to the strain involved in the 1918 flu pandemic. Moreover, the ensuing increased surveillance uncovered another strain in circulation in the U.S.: A/Victoria/75 (H3N2), which spread simultaneously, also caused illness, and persisted until March. Alarmed public health officials decided action must be taken to head off another major pandemic, and urged President Gerald Ford that every person in the U.S. be vaccinated for the disease.
The vaccination program was plagued by delays and public relations problems. On October 1, 1976, immunizations began, and three senior citizens died soon after receiving their injections. This resulted in a media outcry that linked these deaths to the immunizations, despite the lack of any proof the vaccine was the cause. According to science writer Patrick Di Justo, however, by the time the truth was known—that the deaths were not proven to be related to the vaccine—it was too late. "The government had long feared mass panic about swine flu—now they feared mass panic about the swine flu vaccinations." This became a strong setback to the program.
There were reports of Guillain–Barré syndrome (GBS), a paralyzing neuromuscular disorder, affecting some people who had received swine flu immunizations. Although whether a link exists is still not clear, this syndrome may be a side effect of influenza vaccines. As a result, Di Justo writes, "the public refused to trust a government-operated health program that killed old people and crippled young people." In total, 48,161,019 Americans, or just over 22% of the population, had been immunized by the time the National Influenza Immunization Program was effectively halted on December 16, 1976.
Overall, there were 1098 cases of GBS recorded nationwide by CDC surveillance, 532 of which occurred after vaccination and 543 before vaccination. About one to two cases of GBS per 100,000 people occur every year, whether or not people have been vaccinated. The vaccination program seems to have increased this normal risk of developing GBS by about one extra case per 100,000 vaccinations.
Compensation claims were filed for over 4,000 cases of severe vaccination damage, including 25 deaths, totaling US$3.5 billion, by 1979.
The CDC stated most studies on modern influenza vaccines have seen no link with GBS. Although one review gives an incidence of about one case per million vaccinations, a large study in China, reported in the New England Journal of Medicine and covering close to 100 million doses of H1N1 flu vaccine, found only 11 cases of GBS, which is lower than the normal rate of the disease in China: "The risk-benefit ratio, which is what vaccines and everything in medicine is about, is overwhelmingly in favor of vaccination."
1988 U.S.
In September 1988, a swine flu virus killed one woman and infected others. A 32-year-old woman, Barbara Ann Wieners, was eight months pregnant when she and her husband, Ed, became ill after visiting the hog barn at a county fair in Walworth County, Wisconsin. Barbara died eight days later, after developing pneumonia. The only pathogen identified was an H1N1 strain of swine influenza virus. Doctors were able to induce labor and deliver a healthy daughter before she died. Her husband recovered from his symptoms.
Influenza-like illness (ILI) was reportedly widespread among the pigs exhibited at the fair. Of the 25 swine exhibitors aged 9 to 19 at the fair, 19 tested positive for antibodies to SIV, but no serious illnesses were seen. The virus was able to spread between people, since one to three health care personnel who had cared for the pregnant woman developed mild, influenza-like illnesses, and antibody tests suggested they had been infected with swine flu, but there was no community outbreak.
In 1998, swine flu was found in pigs in four U.S. states. Within a year, it had spread through pig populations across the United States. Scientists found this virus had originated in pigs as a recombinant form of flu strains from birds and humans. This outbreak confirmed that pigs can serve as a crucible where novel influenza viruses emerge as a result of the reassortment of genes from different strains. Genetic components of these 1998 triple-hybrid strains would later form six out of the eight viral gene segments in the 2009 flu outbreak.
2007 Philippines
On August 20, 2007, Department of Agriculture officers investigated the outbreak of swine flu in Nueva Ecija and central Luzon, Philippines. The mortality rate is less than 10% for swine flu, unless there are complications like hog cholera. On July 27, 2007, the Philippine National Meat Inspection Service (NMIS) raised a hog cholera "red alert" warning over Metro Manila and five regions of Luzon after the disease spread to backyard pig farms in Bulacan and Pampanga, even if they tested negative for the swine flu virus.
2009 Northern Ireland
Since November 2009, 14 deaths as a result of swine flu in Northern Ireland have been reported. The majority of the deceased were reported to have pre-existing health conditions which had lowered their immunity. This closely corresponds to the 19 patients who had died in the year prior due to swine flu, where 18 of the 19 were determined to have lowered immune systems. Because of this, many mothers who have just given birth are strongly encouraged to get a flu shot because their immune systems are vulnerable. Also, studies have shown that people between the ages of 15 and 44 have the highest rate of infection. Although most people now recover, having any conditions that lower one's immune system increases the risk of having the flu become potentially lethal. In Northern Ireland now, approximately 56% of all people under 65 who are entitled to the vaccine have gotten the shot, and the outbreak is said to be under control.
2015 and 2019 India
Swine flu outbreaks were reported in India in late 2014 and early 2015. As of March 19, 2015, the disease had affected 31,151 people and claimed over 1,841 lives. The largest number of reported cases and deaths due to the disease occurred in the western part of India, in states and territories including Delhi, Madhya Pradesh, Rajasthan, Gujarat, and Andhra Pradesh.
Researchers of MIT have claimed that the swine flu has mutated in India to a more virulent version with changes in Hemagglutinin protein, contradicting earlier research by Indian researchers.
There was another outbreak in India in 2017. The states of Maharashtra and Gujarat were the worst affected. The Gujarat High Court gave the state government instructions to control deaths from swine flu. By August 31, 2019, 1,090 people had died of swine flu in India that year.
2015 Nepal
Swine flu outbreaks were reported in Nepal in the spring of 2015. Up to April 21, 2015, the disease had claimed 26 lives in the most severely affected district, Jajarkot in Northwest Nepal. Cases were also detected in the districts of Kathmandu, Morang, Kaski, and Chitwan. As of 22 April 2015 the Nepal Ministry of Health reported that 2,498 people had been treated in Jajarkot, of whom 552 were believed to have swine flu, and acknowledged that the government's response had been inadequate. The Jajarkot outbreak had just been declared an emergency when the April 2015 Nepal earthquake struck on 25 April 2015, diverting all medical and emergency resources to quake-related rescue and recovery.
2016 Pakistan
Seven cases of swine flu were reported in Punjab province of Pakistan, mainly in the city of Multan, in January 2017. Cases of swine flu were also reported in Lahore and Faisalabad.
2017 Maldives
As of March 16, 2017, over a hundred confirmed cases of swine flu and at least six deaths were reported in the Maldivian capital of Malé and some other islands. Makeshift flu clinics were opened in Malé. Schools in the capital were closed, prison visitations suspended, several events cancelled, and all non-essential travel to other islands outside the capital was advised against by the HPA. An influenza vaccination program focusing on pregnant women was initiated thereafter. An official visit by Saudi King Salman bin Abdulaziz Al Saud to the Maldives during his Asian tour was also cancelled last minute amidst fears over the outbreak of swine flu.
2020 G4 EA H1N1 publication
G4 EA H1N1, also known as the G4 swine flu virus (G4) is a swine influenza virus strain discovered in China. The virus is a variant genotype 4 (G4) Eurasian avian-like (EA) H1N1 virus that mainly affects pigs, but there is some evidence of it infecting people. A peer-reviewed paper from the Proceedings of the National Academy of Sciences (PNAS) stated that "G4 EA H1N1 viruses possess all the essential hallmarks of being highly adapted to infect humans ... Controlling the prevailing G4 EA H1N1 viruses in pigs and close monitoring of swine working populations should be promptly implemented."
Michael Ryan, executive director of the World Health Organization (WHO) Health Emergencies Program, stated in July 2020 that this strain of influenza virus was not new and had been under surveillance since 2011. Almost 30,000 swine had been monitored via nasal swabs between 2011 and 2018. While other variants of the virus have appeared and diminished, the study claimed the G4 variant has sharply increased since 2016 to become the predominant strain. The Chinese Ministry of Agriculture and Rural Affairs rebutted the study, saying that the media had interpreted the study "in an exaggerated and nonfactual way" and that the number of pigs sampled was too small to demonstrate G4 had become the dominant strain.
Between 2016 and 2018, a serum surveillance program screened 338 swine production workers in China for exposure (presence of antibodies) to G4 EA H1N1 and found 35 (10.4%) positive. Among another 230 people screened who did not work in the swine industry, 10 (4.4%) were serum positive for antibodies indicating exposure. Two cases of infection caused by the G4 variant have been documented as of July 2020, with no confirmed cases of human-to-human transmission.
Health officials (including Anthony Fauci) say the virus should be monitored, particularly among those in close contact with pigs, but it is not an immediate threat. There are no reported cases or evidence of the virus outside of China as of July 2020.
See also
COVID-19 pandemic
Risk assessment for organic swine health
Notes
Further reading
External links
Official swine flu advice and latest information from the UK National Health Service
on fora.tv
Swine flu charts and maps Numeric analysis and approximation of current active cases
"Swine Influenza" disease card on World Organisation for Animal Health
Worried about swine flu? Then you should be terrified about the regular flu.
Centers for Disease Control and Prevention (CDC) – Swine Flu
Center for Infectious Disease Research and Policy – Novel H1N1 influenza resource list
Pandemic Flu US Government Site
World Health Organization (WHO): Swine influenza
Medical Encyclopedia Medline Plus: Swine Flu
Health-EU portal EU response to influenza
European Commission – Public Health EU coordination on Pandemic (H1N1) 2009
Combating H3N2 Virus
Animal viral diseases
Zoonoses
Health disasters
Swine diseases
Influenza
Pandemics
Articles containing video clips
Vaccine-preventable diseases | Swine influenza | [
"Biology"
] | 7,318 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
1,516,916 | https://en.wikipedia.org/wiki/Magnetic%20core | A magnetic core is a piece of magnetic material with a high magnetic permeability used to confine and guide magnetic fields in electrical, electromechanical and magnetic devices such as electromagnets, transformers, electric motors, generators, inductors, loudspeakers, magnetic recording heads, and magnetic assemblies. It is made of ferromagnetic metal such as iron, or ferrimagnetic compounds such as ferrites. The high permeability, relative to the surrounding air, causes the magnetic field lines to be concentrated in the core material. The magnetic field is often created by a current-carrying coil of wire around the core.
The use of a magnetic core can increase the strength of magnetic field in an electromagnetic coil by a factor of several hundred times what it would be without the core. However, magnetic cores have side effects which must be taken into account. In alternating current (AC) devices they cause energy losses, called core losses, due to hysteresis and eddy currents in applications such as transformers and inductors. "Soft" magnetic materials with low coercivity and hysteresis, such as silicon steel, or ferrite, are usually used in cores.
Core materials
An electric current through a wire wound into a coil creates a magnetic field through the center of the coil, due to Ampere's circuital law. Coils are widely used in electronic components such as electromagnets, inductors, transformers, electric motors and generators. A coil without a magnetic core is called an "air core" coil. Adding a piece of ferromagnetic or ferrimagnetic material in the center of the coil can increase the magnetic field by hundreds or thousands of times; this is called a magnetic core. The field of the wire penetrates the core material, magnetizing it, so that the strong magnetic field of the core adds to the field created by the wire. The amount that the magnetic field is increased by the core depends on the magnetic permeability of the core material. Because side effects such as eddy currents and hysteresis can cause frequency-dependent energy losses, different core materials are used for coils used at different frequencies.
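As a rough worked example of the field increase described above (an idealized long solenoid, with an assumed relative permeability and no saturation):

B = \mu_r \mu_0 n I

where n is the number of turns per unit length and I is the current. With n = 1000 turns per metre and I = 1 A, an air core (\mu_r \approx 1) gives B \approx 1.3 mT; a soft-iron core with an assumed \mu_r of a few thousand would nominally multiply this by the same factor, although in practice the gain is capped once the core approaches saturation (about 2 T for iron) and \mu_r itself varies with the applied field.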
In some cases the losses are undesirable and with very strong fields saturation can be a problem, and an 'air core' is used. A former may still be used; a piece of material, such as plastic or a composite, that may not have any significant magnetic permeability but which simply holds the coils of wires in place.
Solid metals
Soft iron
"Soft" (annealed) iron is used in magnetic assemblies, direct current (DC) electromagnets and in some electric motors; and it can create a concentrated field that is as much as 50,000 times more intense than an air core.
Iron is desirable to make magnetic cores, as it can withstand high levels of magnetic field without saturating (up to 2.16 teslas at ambient temperature.) Annealed iron is used because, unlike "hard" iron, it has low coercivity and so does not remain magnetised when the field is removed, which is often important in applications where the magnetic field is required to be repeatedly switched.
Due to the electrical conductivity of the metal, when a solid one-piece metal core is used in alternating current (AC) applications such as transformers and inductors, the changing magnetic field induces large eddy currents circulating within it, closed loops of electric current in planes perpendicular to the field. The current flowing through the resistance of the metal heats it by Joule heating, causing significant power losses. Therefore, solid iron cores are not used in transformers or inductors; they are replaced by laminated or powdered iron cores, or by nonconductive cores such as ferrite.
Laminated silicon steel
In order to reduce the eddy current losses mentioned above, most low frequency power transformers and inductors use laminated cores, made of stacks of thin sheets of silicon steel:
Lamination
Laminated magnetic cores are made of stacks of thin iron sheets coated with an insulating layer, lying as much as possible parallel with the lines of flux. The layers of insulation serve as a barrier to eddy currents, so eddy currents can only flow in narrow loops within the thickness of each single lamination. Since the current in an eddy current loop is proportional to the area of the loop, this prevents most of the current from flowing, reducing eddy currents to a very small level. Since power dissipated is proportional to the square of the current, breaking a large core into narrow laminations reduces the power losses drastically. From this, it can be seen that the thinner the laminations, the lower the eddy current losses.
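The scaling argument above can be made quantitative with the classical thin-lamination eddy-current loss formula, P = (π f t Bpeak)² / (6 ρ) per unit volume, valid for sinusoidal flux and laminations much thinner than the skin depth. The formula and the material numbers below are standard textbook values assumed for illustration, not data from this article; note how halving the thickness quarters the loss.

    import math

    def eddy_loss_per_volume(f_hz, thickness_m, b_peak_t, resistivity_ohm_m):
        # Classical thin-sheet eddy-current loss density, W/m^3:
        #   P = (pi * f * t * B_peak)^2 / (6 * rho)
        return (math.pi * f_hz * thickness_m * b_peak_t) ** 2 / (6.0 * resistivity_ohm_m)

    rho = 4.7e-7           # ohm*m, assumed resistivity of ~3% silicon steel
    f, b_peak = 50.0, 1.5  # Hz, tesla (assumed operating point)

    for t_mm in (0.50, 0.35, 0.23):
        p = eddy_loss_per_volume(f, t_mm * 1e-3, b_peak, rho)
        print(f"lamination {t_mm} mm -> eddy loss ~ {p / 1e3:.1f} kW/m^3")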
Silicon alloying
A small addition of silicon to iron (around 3%) results in a dramatic increase of the resistivity of the metal, up to four times higher. The higher resistivity reduces the eddy currents, so silicon steel is used in transformer cores. Further increase in silicon concentration impairs the steel's mechanical properties, causing difficulties for rolling due to brittleness.
Among the two types of silicon steel, grain-oriented (GO) and grain non-oriented (GNO), GO is most desirable for magnetic cores. It is anisotropic, offering better magnetic properties than GNO in one direction. As the magnetic field in inductor and transformer cores is always along the same direction, it is an advantage to use grain oriented steel in the preferred orientation. Rotating machines, where the direction of the magnetic field can change, gain no benefit from grain-oriented steel.
Special alloys
A family of specialized alloys exists for magnetic core applications. Examples are mu-metal, permalloy, and supermalloy. They can be manufactured as stampings or as long ribbons for tape wound cores. Some alloys, e.g. Sendust, are manufactured as powder and sintered to shape.
Many materials require careful heat treatment to reach their magnetic properties, and lose them when subjected to mechanical or thermal abuse. For example, the permeability of mu-metal increases about 40 times after annealing in hydrogen atmosphere in a magnetic field; subsequent sharper bends disrupt its grain alignment, leading to localized loss of permeability; this can be regained by repeating the annealing step.
Vitreous metal
Amorphous metal is a variety of alloys (e.g. Metglas) that are non-crystalline or glassy. These are being used to create high-efficiency transformers. The materials can be highly responsive to magnetic fields for low hysteresis losses, and they can also have lower conductivity to reduce eddy current losses. Power utilities are currently making widespread use of these transformers for new installations. High mechanical strength and corrosion resistance are also common properties of metallic glasses which are positive for this application.
Powdered metals
Powder cores consist of metal grains mixed with a suitable organic or inorganic binder, and pressed to the desired density. Higher density is achieved with higher pressure and a lower amount of binder. Higher-density cores have higher permeability, but lower resistance and therefore higher losses due to eddy currents. Finer particles allow operation at higher frequencies, as the eddy currents are mostly restricted to within the individual grains. Coating the particles with an insulating layer, or separating them with a thin layer of binder, lowers the eddy current losses. The presence of larger particles can degrade high-frequency performance. Permeability is influenced by the spacing between the grains, which forms a distributed air gap; the smaller the gap, the higher the permeability and the less gradual ("soft") the saturation. Due to the large difference in densities, even a small amount of binder by weight can significantly increase the volume and therefore the intergrain spacing.
Lower permeability materials are better suited for higher frequencies, due to balancing of core and winding losses.
The surface of the particles is often oxidized and coated with a phosphate layer, to provide them with mutual electrical insulation.
Iron
Powdered iron is the cheapest material. It has higher core loss than the more advanced alloys, but this can be compensated for by making the core bigger; it is advantageous where cost is more important than mass and size. Saturation flux of about 1 to 1.5 tesla. Relatively high hysteresis and eddy current loss, operation limited to lower frequencies (approx. below 100 kHz). Used in energy storage inductors, DC output chokes, differential mode chokes, triac regulator chokes, chokes for power factor correction, resonant inductors, and pulse and flyback transformers.
The binder used is usually epoxy or other organic resin, susceptible to thermal aging. At higher temperatures, typically above 125 °C, the binder degrades and the core magnetic properties may change. With more heat-resistant binders the cores can be used up to 200 °C.
Iron powder cores are most commonly available as toroids. Sometimes as E, EI, and rods or blocks, used primarily in high-power and high-current parts.
Carbonyl iron is significantly more expensive than hydrogen-reduced iron.
Carbonyl iron
Powdered cores made of carbonyl iron, a highly pure iron, have high stability of parameters across a wide range of temperatures and magnetic flux levels, with excellent Q factors between 50 kHz and 200 MHz. Carbonyl iron powders are basically constituted of micrometer-size spheres of iron coated in a thin layer of electrical insulation. This is equivalent to a microscopic laminated magnetic circuit (see silicon steel, above), hence reducing the eddy currents, particularly at very high frequencies. Carbonyl iron has lower losses than hydrogen-reduced iron, but also lower permeability.
A popular application of carbonyl iron-based magnetic cores is in high-frequency and broadband inductors and transformers, especially higher power ones.
Carbonyl iron cores are often called "RF cores".
The as-prepared particles, called "E-type", have an onion-like skin, with concentric shells separated by gaps. They contain a significant amount of carbon, and they behave magnetically as if they were much smaller than their outer size would suggest. The "C-type" particles can be prepared by heating the E-type ones in a hydrogen atmosphere at 400 °C for a prolonged time, resulting in carbon-free powders.
Hydrogen-reduced iron
Powdered cores made of hydrogen reduced iron have higher permeability but lower Q than carbonyl iron. They are used mostly for electromagnetic interference filters and low-frequency chokes, mainly in switched-mode power supplies.
Hydrogen-reduced iron cores are often called "power cores".
MPP (molypermalloy)
An alloy of about 2% molybdenum, 81% nickel, and 17% iron. Very low core loss, low hysteresis and therefore low signal distortion. Very good temperature stability. High cost. Maximum saturation flux of about 0.8 tesla. Used in high-Q filters, resonant circuits, loading coils, transformers, chokes, etc.
The material was first introduced in 1940, used in loading coils to compensate capacitance in long telephone lines. It is usable up to about 200 kHz to 1 MHz, depending on vendor. It is still used in above-ground telephone lines, due to its temperature stability. Underground lines, where temperature is more stable, tend to use ferrite cores due to their lower cost.
High-flux (Ni-Fe)
An alloy of about 50–50% of nickel and iron. High energy storage, saturation flux density of about 1.5 tesla. Residual flux density near zero. Used in applications with high DC current bias (line noise filters, or inductors in switching regulators) or where low residual flux density is needed (e.g. pulse and flyback transformers, the high saturation is suitable for unipolar drive), especially where space is constrained. The material is usable up to about 200 kHz.
Sendust, KoolMU
An alloy of 6% aluminium, 9% silicon, and 85% iron. Core losses higher than MPP. Very low magnetostriction, makes low audio noise. Loses inductance with increasing temperature, unlike the other materials; can be exploited by combining with other materials as a composite core, for temperature compensation. Saturation flux of about 1 tesla. Good temperature stability. Used in switching power supplies, pulse and flyback transformers, in-line noise filters, swing chokes, and in filters in phase-fired controllers (e.g. dimmers) where low acoustic noise is important.
Absence of nickel results in easier processing of the material and its lower cost than both high-flux and MPP.
The material was invented in Japan in 1936. It is usable up to about 500 kHz to 1 MHz, depending on vendor.
Nanocrystalline
A nanocrystalline alloy based on a standard iron-boron-silicon composition, with additions of smaller amounts of copper and niobium. The grain size of the material reaches down to 10–100 nanometers. The material has very good performance at lower frequencies. It is used in chokes for inverters and in high-power applications. It is available under trade names such as Nanoperm, Vitroperm, Hitperm and Finemet.
Ceramics
Ferrite
Ferrite ceramics are used for high-frequency applications. The ferrite materials can be engineered with a wide range of parameters. As ceramics, they are essentially insulators, which prevents eddy currents, although losses such as hysteresis losses can still occur.
Air
A coil not containing a magnetic core is called an air core. This includes coils wound on a plastic or ceramic form in addition to those made of stiff wire that are self-supporting and have air inside them. Air core coils generally have a much lower inductance than similarly sized ferromagnetic core coils, but are used in radio frequency circuits to prevent energy losses called core losses that occur in magnetic cores. The absence of normal core losses permits a higher Q factor, so air core coils are used in high frequency resonant circuits, such as up to a few megahertz. However, losses such as proximity effect and dielectric losses are still present. Air cores are also used when field strengths above around 2 Tesla are required as they are not subject to saturation.
Commonly used structures
Straight cylindrical rod
Most commonly made of ferrite or powdered iron, and used in radios especially for tuning an inductor. The coil is wound around the rod, or a coil form with the rod inside. Moving the rod in or out of the coil changes the flux through the coil, and can be used to adjust the inductance. Often the rod is threaded to allow adjustment with a screwdriver. In radio circuits, a blob of wax or resin is used once the inductor has been tuned to prevent the core from moving.
The presence of the high permeability core increases the inductance, but the magnetic field lines must still pass through the air from one end of the rod to the other. The air path ensures that the inductor remains linear. In this type of inductor radiation occurs at the end of the rod and electromagnetic interference may be a problem in some circumstances.
Single "I" core
Like a cylindrical rod but is square, rarely used on its own.
This type of core is most likely to be found in car ignition coils.
"C" or "U" core
U and C-shaped cores are used with I or another C or U core to make a square closed core, the simplest closed core shape. Windings may be put on one or both legs of the core.
"E" core
E-shaped core are more symmetric solutions to form a closed magnetic system. Most of the time, the electric circuit is wound around the center leg, whose section area is twice that of each individual outer leg. In 3-phase transformer cores, the legs are of equal size, and all three legs are wound.
(Image gallery: the classical E core; the EFD core, which allows construction of inductors or transformers with a lower profile; the ETD core, which has a cylindrical central leg; and the EP core, which is halfway between an E core and a pot core.)
"E" and "I" core
Sheets of suitable iron stamped out in shapes like the (sans-serif) letters "E" and "I", are stacked with the "I" against the open end of the "E" to form a 3-legged structure. Coils can be wound around any leg, but usually the center leg is used. This type of core is frequently used for power transformers, autotransformers, and inductors.
Pair of "E" cores
Again used for iron cores. Similar to using an "E" and "I" together, a pair of "E" cores will accommodate a larger coil former and can produce a larger inductor or transformer. If an air gap is required, the centre leg of the "E" is shortened so that the air gap sits in the middle of the coil to minimize fringing and reduce electromagnetic interference.
Planar core
A planar core consists of two flat pieces of magnetic material, one above and one below the coil. It is typically used with a flat coil that is part of a printed circuit board. This design is excellent for mass production and allows a high-power, small-volume transformer to be constructed for low cost. It is not as ideal as either a pot core or toroidal core but costs less to produce.
Pot core
Usually ferrite or similar. This is used for inductors and transformers. The shape of a pot core is round with an internal hollow that almost completely encloses the coil. Usually a pot core is made in two halves which fit together around a coil former (bobbin). This design of core has a shielding effect, preventing radiation and reducing electromagnetic interference.
Toroidal core
This design is based on a toroid (the same shape as a doughnut). The coil is wound through the hole in the torus and around the outside. An ideal coil is distributed evenly all around the circumference of the torus. The symmetry of this geometry creates a magnetic field of circular loops inside the core, and the lack of sharp bends will constrain virtually all of the field to the core material. This not only makes a highly efficient transformer, but also reduces the electromagnetic interference radiated by the coil.
It is popular for applications where the desirable features are: high specific power per mass and volume, low mains hum, and minimal electromagnetic interference. One such application is the power supply for a hi-fi audio amplifier. The main drawback that limits their use for general purpose applications is the inherent difficulty of winding wire through the center of a torus.
Unlike a split core (a core made of two elements, like a pair of E cores), specialized machinery is required for automated winding of a toroidal core. Toroids have less audible noise, such as mains hum, because the magnetic forces do not exert bending moment on the core. The core is only in compression or tension, and the circular shape is more stable mechanically.
Ring or bead
The ring is essentially identical in shape and performance to the toroid, except that the conductor commonly passes only through the center of the core, without wrapping around the core multiple times.
The ring core may also be composed of two separate C-shaped hemispheres secured together within a plastic shell, permitting it to be placed on finished cables with large connectors already installed, that would prevent threading the cable through the small inner diameter of a solid ring.
AL value
The AL value of a core configuration is frequently specified by manufacturers. The relationship between inductance and the AL number in the linear portion of the magnetisation curve is defined to be:

    L = AL · n²

where n is the number of turns, L is the inductance (e.g. in nH) and AL is expressed in inductance per turn squared (e.g. in nH/n²).
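A small helper based on that relation converts between turns and inductance; the AL figure used is an assumed example rather than a value from any particular datasheet.

    import math

    def inductance_nh(a_l_nh_per_turn2, turns):
        # L = AL * n^2, with AL in nH per turn squared
        return a_l_nh_per_turn2 * turns ** 2

    def turns_for_inductance(a_l_nh_per_turn2, target_l_nh):
        # Invert the relation: n = sqrt(L / AL), rounded up to a whole turn
        return math.ceil(math.sqrt(target_l_nh / a_l_nh_per_turn2))

    a_l = 250.0  # nH per turn^2, assumed example core
    print(inductance_nh(a_l, 20))            # 100,000 nH (100 uH) with 20 turns
    print(turns_for_inductance(a_l, 470e3))  # turns needed for about 470 uH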
Core loss
When the core is subjected to a changing magnetic field, as it is in devices that use AC current such as transformers, inductors, and AC motors and alternators, some of the power that would ideally be transferred through the device is lost in the core, dissipated as heat and sometimes noise. Core loss is commonly termed iron loss in contradistinction to copper loss, the loss in the windings. Iron losses are often described as being in three categories:
Hysteresis losses
When the magnetic field through the core changes, the magnetization of the core material changes by expansion and contraction of the tiny magnetic domains it is composed of, due to movement of the domain walls. This process causes losses, because the domain walls get "snagged" on defects in the crystal structure and then "snap" past them, dissipating energy as heat. This is called hysteresis loss. It can be seen in the graph of the B field versus the H field for the material, which has the form of a closed loop.
The net energy that flows into the inductor, expressed in relationship to the B–H characteristic of the core, is shown (per unit of core volume, over one cycle of the applied field) by the equation

    W = ∮ H dB

This equation shows that the amount of energy lost in the material in one cycle of the applied field is proportional to the area inside the hysteresis loop. Since the energy lost in each cycle is constant, hysteresis power losses increase proportionally with frequency. The final equation for the hysteresis power loss is

    Ph = f · V · ∮ H dB

where f is the frequency of the applied field and V is the volume of the core.
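If the B–H loop is available as sampled points, the statements above can be checked numerically: the enclosed loop area (in joules per cubic metre per cycle) can be integrated with the shoelace formula, and the hysteresis power follows by multiplying by frequency and core volume. The loop below is a crude synthetic ellipse used purely for demonstration, not measured material data.

    import math

    def loop_area(h_values, b_values):
        # Enclosed area of a closed B-H loop via the shoelace formula.
        # With H in A/m and B in tesla, the result is J/m^3 per cycle.
        n = len(h_values)
        area = 0.0
        for i in range(n):
            j = (i + 1) % n
            area += h_values[i] * b_values[j] - h_values[j] * b_values[i]
        return abs(area) / 2.0

    def hysteresis_power(h_values, b_values, frequency_hz, core_volume_m3):
        # P_h = f * V_core * (loop area per cycle)
        return frequency_hz * core_volume_m3 * loop_area(h_values, b_values)

    # Synthetic elliptical loop (assumed shape, for illustration only)
    theta = [2.0 * math.pi * k / 200 for k in range(200)]
    H = [80.0 * math.cos(t) for t in theta]                       # A/m
    B = [0.30 * math.sin(t) + 0.25 * math.cos(t) for t in theta]  # tesla

    print(f"energy per cycle: {loop_area(H, B):.1f} J/m^3")
    print(f"power at 50 Hz in a 1e-4 m^3 core: {hysteresis_power(H, B, 50.0, 1e-4):.2f} W")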
Eddy-current losses
If the core is electrically conductive, the changing magnetic field induces circulating loops of current in it, called eddy currents, due to electromagnetic induction. The loops flow perpendicular to the magnetic field axis. The energy of the currents is dissipated as heat in the resistance of the core material. The power loss is proportional to the area of the loops and inversely proportional to the resistivity of the core material. Eddy current losses can be reduced by making the core out of thin laminations which have an insulating coating, or alternatively, making the core of a magnetic material with high electrical resistance, like ferrite. Most magnetic cores intended for power converter application use ferrite cores for this reason.
Anomalous losses
By definition, this category includes any losses in addition to eddy-current and hysteresis losses. This can also be described as broadening of the hysteresis loop with frequency. Physical mechanisms for anomalous loss include localized eddy-current effects near moving domain walls.
Legg's equation
An equation known as Legg's equation models the magnetic material core loss at low flux densities. The equation has three loss components: hysteresis, residual, and eddy current, and it is given by

    Rac / (μ f L) = a Bmax + c + e f

where
Rac is the effective core loss resistance (ohms),
μ is the material permeability,
L is the inductance (henrys),
a is the hysteresis loss coefficient,
Bmax is the maximum flux density (gauss),
c is the residual loss coefficient,
f is the frequency (hertz), and
e is the eddy loss coefficient.
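Taking the reconstructed form above at face value, the helper below evaluates the effective loss resistance and separates the three terms. All coefficient values are invented placeholders for illustration; they do not describe any real material, and the equation form used is the reconstruction given above rather than a formula quoted directly from a reference.

    def legg_loss_resistance(mu, f_hz, l_henry, a, b_max_gauss, c, e):
        # Reconstructed Legg form: R_ac = mu * f * L * (a * B_max + c + e * f)
        # Returns (total, hysteresis term, residual term, eddy term) in ohms.
        base = mu * f_hz * l_henry
        hyst = base * a * b_max_gauss
        resid = base * c
        eddy = base * e * f_hz
        return hyst + resid + eddy, hyst, resid, eddy

    # Placeholder coefficients, assumed purely for illustration
    total, hyst, resid, eddy = legg_loss_resistance(
        mu=125.0, f_hz=10e3, l_henry=1e-3, a=3e-6, b_max_gauss=20.0, c=1e-7, e=5e-12)
    print(f"R_ac = {total:.4f} ohm (hysteresis {hyst:.4f}, residual {resid:.4f}, eddy {eddy:.4f})")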
Steinmetz coefficients
Losses in magnetic materials can be characterized by the Steinmetz coefficients, which however do not take into account temperature variability. Material manufacturers provide data on core losses in tabular and graphical form for practical conditions of use.
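The Steinmetz characterization is commonly written as a loss density P = k · f^α · B^β. The sketch below evaluates that form with made-up coefficients; real values of k, α and β must be fitted to a manufacturer's loss curves and, as noted above, drift with temperature.

    def steinmetz_loss(k, alpha, beta, f_hz, b_peak_t):
        # Core loss density P = k * f^alpha * B_peak^beta (units set by k)
        return k * (f_hz ** alpha) * (b_peak_t ** beta)

    # Made-up coefficients, loosely in the range quoted for power ferrites
    k, alpha, beta = 1.2e-3, 1.4, 2.5  # assumed fit giving kW/m^3 with f in Hz, B in T

    for f in (25e3, 50e3, 100e3):
        p = steinmetz_loss(k, alpha, beta, f, 0.2)
        print(f"{f / 1e3:.0f} kHz at 0.2 T -> about {p:.0f} kW/m^3")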
See also
Balun
Magnetic-core memory
Pole piece
Toroidal inductors and transformers
References
External links
Online calculator for ferrite coil winding calculations
What are the bumps at the end of computer cables?
How to use ferrites for EMI suppression via Wayback Machine by Murata Manufacturing
Electromagnetic components
Radio electronics
Electromagnetic radiation | Magnetic core | [
"Physics",
"Engineering"
] | 4,958 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation",
"Radio electronics"
] |
1,516,949 | https://en.wikipedia.org/wiki/Kinematic%20determinacy | Kinematic determinacy is a term used in structural mechanics to describe a structure where material compatibility conditions alone can be used to calculate deflections. A kinematically determinate structure can be defined as a structure where, if it is possible to find nodal displacements compatible with member extensions, those nodal displacements are unique. The structure has no possible mechanisms, i.e. nodal displacements, compatible with zero member extensions, at least to a first-order approximation. Mathematically, the mass matrix of the structure must have full rank. Kinematic determinacy can be loosely used to classify an arrangement of structural members as a structure (stable) instead of a mechanism (unstable). The principles of kinematic determinacy are used to design precision devices such as mirror mounts for optics, and precision linear motion bearings.
See also
Statical determinacy
Precision engineering
Kinematic coupling
References
Mechanical engineering | Kinematic determinacy | [
"Physics",
"Engineering"
] | 191 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,517,049 | https://en.wikipedia.org/wiki/Lidstone%20series | In mathematics, a Lidstone series, named after George James Lidstone, is a kind of polynomial expansion that can express certain types of entire functions.
Let ƒ(z) be an entire function of exponential type less than (N + 1)π, as defined below. Then ƒ(z) can be expanded in terms of polynomials An as follows:

    ƒ(z) = Σ_{n≥0} [ An(z) ƒ^(2n)(1) + An(1 − z) ƒ^(2n)(0) ] + Σ_{k=1}^{N} Ck sin(kπz)
Here An(z) is a polynomial in z of degree n, Ck a constant, and ƒ(n)(a) the nth derivative of ƒ at a.
A function is said to be of exponential type of less than t if the function

    h(θ; ƒ) = limsup_{r→∞} r⁻¹ log |ƒ(r e^{iθ})|

is bounded above by t. Thus, the constant N used in the summation above is given by

    t = sup_{0 ≤ θ < 2π} h(θ; ƒ)

with

    Nπ ≤ t < (N + 1)π.
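A short symbolic sketch of the polynomials involved, assuming that the An above are the standard Lidstone polynomials defined by A0(z) = z and, for n ≥ 1, An''(z) = An−1(z) with An(0) = An(1) = 0; that identification is an assumption, since the article does not spell the definition out here.

    import sympy as sp

    z = sp.symbols("z")

    def lidstone_polynomials(count):
        # Assumed standard recurrence: A_0 = z; A_n'' = A_{n-1}, A_n(0) = A_n(1) = 0.
        polys = [z]
        for _ in range(1, count):
            c0, c1 = sp.symbols("c0 c1")
            candidate = sp.integrate(sp.integrate(polys[-1], z), z) + c0 * z + c1
            constants = sp.solve([candidate.subs(z, 0), candidate.subs(z, 1)], [c0, c1])
            polys.append(sp.expand(candidate.subs(constants)))
        return polys

    for n, p in enumerate(lidstone_polynomials(3)):
        print(f"A_{n}(z) = {p}")
    # A_1 should come out as (z**3 - z)/6 and A_2 as (3*z**5 - 10*z**3 + 7*z)/360 (up to expansion)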
References
Ralph P. Boas, Jr. and C. Creighton Buck, Polynomial Expansions of Analytic Functions, (1964) Academic Press, NY. Library of Congress Catalog 63-23263. Issued as volume 19 of Moderne Funktionentheorie ed. L.V. Ahlfors, series Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag
Mathematical series | Lidstone series | [
"Mathematics"
] | 235 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Series (mathematics)",
"Mathematical analysis stubs",
"Calculus"
] |
7,126,735 | https://en.wikipedia.org/wiki/Heater%20core | A heater core is a radiator-like device used in heating the cabin of a vehicle. Hot coolant from the vehicle's engine is passed through a winding tube of the core, a heat exchanger between coolant and cabin air. Fins attached to the core tubes serve to increase surface area for heat transfer to air that is forced past them by a fan, thereby heating the passenger compartment.
How it works
The internal combustion engine in most cars and trucks is cooled by a water and antifreeze mixture that is circulated through the engine and radiator by a water pump to enable the radiator to give off engine heat to the atmosphere. Some of that coolant can be diverted through the heater core to give some engine heat to the cabin, or adjust the temperature of the conditioned air.
A heater core is a small radiator located under the dashboard of the vehicle, and it consists of conductive aluminium or brass tubing with cooling fins to increase surface area. Hot coolant passing through the heater core gives off heat before returning to the engine cooling circuit.
The squirrel cage fan of the vehicle's ventilation system forces air through the heater core to transfer heat from the coolant to the cabin air, which is directed into the vehicle through vents at various points.
Control
Once the engine has warmed up, the coolant is kept at a more or less constant temperature by the thermostat. The temperature of the air entering the vehicle's interior can be controlled by using a valve limiting the amount of coolant that goes through the heater core. Another method is blocking off the heater core with a door, directing part (or all) of the incoming air around the heater core completely, so it does not get heated (or re-heated if the air conditioning compressor is active). Some cars use a combination of these systems.
Simpler systems allow the driver to control the valve or door directly (usually by means of a rotary knob, or a lever). More complicated systems use a combination of electromechanical actuators and thermistors to control the valve or doors to deliver air at a precise temperature value selected by the user.
Cars with dual climate function (allowing driver and passenger to each set a different temperature) may use a heater core split in two, where different amounts of coolant flow through the heater core on either side to obtain the desired heating.
Air conditioning
In a car equipped with air conditioning, outside air, or cabin air if the recirculation flap has been set to close the external air passages, is first forced, often after being filtered by a cabin air filter, through the air conditioner's evaporator coil. This can be thought of as a heater core filled with very cold liquid that is undergoing a phase change to gas (the evaporation), a process which cools rather than heats the incoming air. In order to obtain the desired temperature, incoming air may first be cooled by the air conditioning and then heated again by the heater core. In a vehicle fitted with manual controls for the heater and air conditioning compressor, using both systems together will dehumidify the air in the cabin, as the evaporator coil removes moisture from the air due to condensation. This can result in increased air comfort levels inside the vehicle. Automatic temperature control systems can take the best course of action in regulating the compressor operation, amount of reheating and blower speed depending upon the external air temperature, the internal one and the cabin air temperature value or a rapid defrost effect requested by the user.
Engine cooling function
Because the heater core cools the heated coolant from the engine by transferring its heat to the cabin air, it can also act as an auxiliary radiator for the engine. If the radiator is working improperly, the operator may turn the heat on (together with the cabin blower fan placed on full speed, and with the windows opened) in the passenger cabin, resulting in a certain cooling effect on the overheated engine coolant. This idea only works to a certain degree, as the heater core is not large enough nor does it have enough cold air going through it to cool large amounts of coolant significantly.
Possible problems
The heater core is made up of small piping that has numerous bends. Clogging of the piping may occur if the coolant system is not flushed or if the coolant is not changed regularly. If clogging occurs the heater core will not work properly. If coolant flow is restricted, heating capacity will be reduced or even lost altogether if the heater core becomes blocked. Control valves may also clog or get stuck. Where a blend door is used instead of a control valve as a method of controlling the air's heating amount, the door itself or its control mechanism can become stuck due to thermal expansion. If the climate control unit is automatic, actuators can also fail.
Another possible problem is a leak in one of the connections to the heater core. This may first be noticeable by smell (ethylene glycol is widely used as coolant and has a sweet smell); it may also cause (somewhat greasy) fogging of the windshield above the windshield heater vent. Glycol may also leak directly into the car, causing wet upholstery or carpeting.
Electrolysis can cause excessive corrosion leading to the heater core rupturing. Coolant will then spray directly into the passenger compartment, followed by white-coloured smoke, a significant driving hazard.
Because the heater core is usually located under the dashboard inside of the vehicle and is enclosed in the ventilation system's ducting, servicing it often requires disassembling a large part of the dashboard, which can be labour-intensive and therefore expensive.
Since the heater core relies on the coolant's heat to warm the cabin air up, it will not begin working until the engine's coolant warms up enough. This problem can be resolved by equipping the vehicle with an auxiliary heating system, which can either use electricity or burn the vehicle's fuel in order to rapidly bring the engine's coolant to operating temperatures.
Air cooled engines
Engines that do not have a water cooling system cannot heat the cabin via a heater core; one alternative is to guide air around the (very hot) engine exhaust manifold and then into the vehicle's interior. Temperature control is achieved by mixing with unheated outside air. Air-cooled Volkswagen engines use this method. Another example is the air-cooled Briggs & Stratton Vanguard, used in the ultra and microlight flight amateur construction scene. This method for cockpit heating is a simple option for the Spacek SD-1 Minisport and other homebuilt sportplanes. However, depending on the design, this can cause a safety issue where a leak in the exhaust system will begin to fill the passenger cabin with deadly fumes.
Reuse for other purposes
Car heat cores are also used for Do-It-Yourself projects, such as for cooling homemade liquid cooling systems for computers.
See also
Air cooling
Internal combustion engine cooling
Radiator (engine cooling)
Radiator (heating)
Water cooling
References
External links
entire cooling system
Automotive technologies
Cooling technology
Engine cooling systems
Heating, ventilation, and air conditioning
Heat exchangers | Heater core | [
"Chemistry",
"Engineering"
] | 1,500 | [
"Chemical equipment",
"Heat exchangers"
] |
7,127,168 | https://en.wikipedia.org/wiki/Friction%20loss | In fluid dynamics, friction loss (or frictional loss) is the head loss that occurs in a containment such as a pipe or duct due to the effect of the fluid's viscosity near the surface of the containment.
Engineering
Friction loss is a significant engineering concern wherever fluids are made to flow, whether entirely enclosed in a pipe or duct, or with a surface open to the air.
Historically, it is a concern in aqueducts of all kinds, throughout human history. It is also relevant to sewer lines. Systematic study traces back to Henry Darcy, an aqueduct engineer.
Natural flows in river beds are important to human activity; friction loss in a stream bed has an effect on the height of the flow, particularly significant during flooding.
The economies of pipelines for petrochemical delivery are highly affected by friction loss. The Yamal–Europe pipeline carries methane at a volume flow rate of 32.3 × 10⁹ m³ of gas per year, at Reynolds numbers greater than 50 × 10⁶.
In hydropower applications, the energy lost to skin friction in flume and penstock is not available for useful work, say generating electricity.
In refrigeration applications, energy is expended pumping the coolant fluid through pipes or through the condenser. In split systems, the pipes carrying the coolant take the place of the air ducts in HVAC systems.
Calculating volumetric flow
In the following discussion, we define volumetric flow rate V̇ (i.e. volume of fluid flowing per time) as

    V̇ = A · v = π r² · v
where
r = radius of the pipe (for a pipe of circular section, the internal radius of the pipe).
v = mean velocity of fluid flowing through the pipe.
A = cross sectional area of the pipe.
In long pipes, the loss in pressure (assuming the pipe is level) is proportional to the length of pipe involved.
Friction loss is then the change in pressure Δp per unit length of pipe L
When the pressure is expressed in terms of the equivalent height of a column of that fluid, as is common with water, the friction loss is expressed as S, the "head loss" per length of pipe, a dimensionless quantity also known as the hydraulic slope:

    S = hf / L = Δp / (ρ g L)

where
ρ = density of the fluid (SI: kg/m³)
g = the local acceleration due to gravity.
Characterizing friction loss
Friction loss, which is due to the shear stress between the pipe surface and the fluid flowing within, depends on the conditions of flow and the physical properties of the system. These conditions can be encapsulated into a dimensionless number Re, known as the Reynolds number

    Re = V D / ν

where V is the mean fluid velocity and D the diameter of the (cylindrical) pipe. In this expression, the properties of the fluid itself are reduced to the kinematic viscosity ν

    ν = μ / ρ
where
μ = viscosity of the fluid (SI: kg/(m·s))
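A tiny helper tying these two definitions together and classifying the flow regime; the fluid properties used are assumed water-like values, and the laminar/turbulent thresholds are those quoted below.

    def reynolds_number(velocity_m_s, diameter_m, mu_pa_s, rho_kg_m3):
        # Re = V * D / nu, with kinematic viscosity nu = mu / rho
        nu = mu_pa_s / rho_kg_m3
        return velocity_m_s * diameter_m / nu

    # Assumed example: water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 kg/(m*s)) at 2 m/s in a 50 mm pipe
    re = reynolds_number(2.0, 0.050, 1.0e-3, 1000.0)
    regime = "laminar" if re < 2000 else ("turbulent" if re > 4000 else "transitional")
    print(f"Re = {re:.0f} ({regime})")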
Friction loss in straight pipe
The friction loss in uniform, straight sections of pipe, known as "major loss", is caused by the effects of viscosity, the movement of fluid molecules against each other or against the (possibly rough) wall of the pipe. Here, it is greatly affected by whether the flow is laminar (Re < 2000) or turbulent (Re > 4000):
In laminar flow, losses are proportional to fluid velocity, V; that velocity varies smoothly between the bulk of the fluid and the pipe surface, where it is zero. The roughness of the pipe surface influences neither the fluid flow nor the friction loss.
In turbulent flow, losses are proportional to the square of the fluid velocity, V²; here, a layer of chaotic eddies and vortices near the pipe surface, called the viscous sub-layer, forms the transition to the bulk flow. In this domain, the effects of the roughness of the pipe surface must be considered. It is useful to characterize that roughness as the ratio of the roughness height ε to the pipe diameter D, the "relative roughness". Three sub-domains pertain to turbulent flow:
In the smooth pipe domain, friction loss is relatively insensitive to roughness.
In the rough pipe domain, friction loss is dominated by the relative roughness and is insensitive to Reynolds number.
In the transition domain, friction loss is sensitive to both.
For Reynolds numbers 2000 < Re < 4000, the flow is unstable, varying with time as vortices within the flow form and vanish randomly. This domain of flow is not well modeled, nor are the details well understood.
Form friction
Factors other than straight pipe flow induce friction loss; these are known as "minor loss":
Fittings, such as bends, couplings, valves, or transitions in hose or pipe diameter, or
Objects intruded into the fluid flow.
For the purposes of calculating the total friction loss of a system, the sources of form friction are sometimes reduced to an equivalent length of pipe.
Surface roughness
The roughness of the surface of the pipe or duct affects the fluid flow in the regime of turbulent flow. Usually denoted by ε, values used for calculations of water flow, for some representative materials are:
Values used in calculating friction loss in ducts (for, e.g., air) are:
Calculating friction loss
Hagen–Poiseuille Equation
Laminar flow is encountered in practice with very viscous fluids, such as motor oil, flowing through small-diameter tubes, at low velocity. Friction loss under conditions of laminar flow follows the Hagen–Poiseuille equation, which is an exact solution to the Navier–Stokes equations. For a circular pipe with a fluid of density ρ and viscosity μ, the hydraulic slope S can be expressed

    S = (64/Re) · V² / (2 g D) = 32 μ V / (ρ g D²)
In laminar flow (that is, with Re < ~2000), the hydraulic slope is proportional to the flow velocity.
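Evaluating the laminar expression above is straightforward; the sketch below uses assumed motor-oil-like properties and also checks that the Reynolds number really is in the laminar range before trusting the result.

    def laminar_hydraulic_slope(velocity_m_s, diameter_m, mu_pa_s, rho_kg_m3, g=9.81):
        # Hagen-Poiseuille head loss per unit pipe length:
        #   S = 32 * mu * V / (rho * g * D^2), valid only for laminar flow (Re < ~2000)
        return 32.0 * mu_pa_s * velocity_m_s / (rho_kg_m3 * g * diameter_m ** 2)

    # Assumed example: oil-like fluid, mu = 0.2 kg/(m*s), rho = 880 kg/m^3,
    # flowing at 0.5 m/s through a 10 mm tube
    mu, rho, v, d = 0.2, 880.0, 0.5, 0.010
    re = rho * v * d / mu
    print(f"Re = {re:.0f} ({'laminar' if re < 2000 else 'NOT laminar'})")
    print(f"S = {laminar_hydraulic_slope(v, d, mu, rho):.2f} m of head per m of pipe")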
Darcy–Weisbach Equation
In many practical engineering applications, the fluid flow is more rapid, therefore turbulent rather than laminar. Under turbulent flow, the friction loss is found to be roughly proportional to the square of the flow velocity and inversely proportional to the pipe diameter, that is, the friction loss follows the phenomenological Darcy–Weisbach equation in which the hydraulic slope S can be expressed

    S = fD · V² / (2 g D)
where we have introduced the Darcy friction factor fD (but see Confusion with the Fanning friction factor);
fD = Darcy friction factor
Note that the value of this dimensionless factor depends on the pipe diameter D and the roughness of the pipe surface ε. Furthermore, it varies as well with the flow velocity V and on the physical properties of the fluid (usually cast together into the Reynolds number Re). Thus, the friction loss is not precisely proportional to the flow velocity squared, nor to the inverse of the pipe diameter: the friction factor takes account of the remaining dependency on these parameters.
From experimental measurements, the general features of the variation of fD are, for fixed relative roughness ε / D and for Reynolds number Re = V D / ν > ~2000,
With relative roughness ε / D < 10⁻⁶, fD declines in value with increasing Re in an approximate power law, with one order of magnitude change in fD over four orders of magnitude in Re. This is called the "smooth pipe" regime, where the flow is turbulent but not sensitive to the roughness features of the pipe (because the vortices are much larger than those features).
At higher roughness, with increasing Reynolds number Re, fD climbs from its smooth pipe value, approaching an asymptote that itself varies logarithmically with the relative roughness ε / D; this regime is called "rough pipe" flow.
The point of departure from smooth flow occurs at a Reynolds number roughly inversely proportional to the value of the relative roughness: the higher the relative roughness, the lower the Re of departure. The range of Re and ε / D between smooth pipe flow and rough pipe flow is labeled "transitional". In this region, the measurements of Nikuradse show a decline in the value of fD with Re, before approaching its asymptotic value from below, although Moody chose not to follow those data in his chart, which is based on the Colebrook–White equation.
At values of 2000 < Re < 4000, there is a critical zone of flow, a transition from laminar to turbulent flow, where the value of fD increases from its laminar value of 64 / Re to its smooth pipe value. In this regime, the fluid flow is found to be unstable, with vortices appearing and disappearing within the flow over time.
The entire dependence of fD on the pipe diameter D is subsumed into the Reynolds number Re and the relative roughness ε / D, likewise the entire dependence on fluid properties density ρ and viscosity μ is subsumed into the Reynolds number Re. This is called scaling.
The experimentally measured values of fD are fit to reasonable accuracy by the (recursive) Colebrook–White equation, depicted graphically in the Moody chart which plots friction factor fD versus Reynolds number Re for selected values of relative roughness ε / D.
Calculating friction loss for water in a pipe
In a design problem, one may select pipe for a particular hydraulic slope S based on the candidate pipe's diameter D and its roughness ε.
With these quantities as inputs, the friction factor fD can be expressed in closed form in the Colebrook–White equation or other fitting function, and the flow volume Q and flow velocity V can be calculated therefrom.
In the case of water (ρ = 1 g/cc, μ = 1 g/m/s) flowing through a 12-inch (300 mm) Schedule-40 PVC pipe (ε = 0.0015 mm, D = 11.938 in.), a hydraulic slope S = 0.01 (1%) is reached at a flow rate Q = 157 lps (liters per second), or at a velocity V = 2.17 m/s (meters per second).
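That worked example can be checked numerically: fix the hydraulic slope S, iterate between the Darcy–Weisbach relation and the Colebrook–White equation for fD, and read off V and Q. The sketch below does this for the 12-inch Schedule-40 PVC case, using the pipe dimensions quoted above and the kinematic viscosity implied by the stated water properties (about 1.0e-6 m²/s), and lands close to V ≈ 2.17 m/s and Q ≈ 157 L/s.

    import math

    def colebrook_friction_factor(re, rel_roughness, iterations=50):
        # Fixed-point solution of the Colebrook-White equation:
        #   1/sqrt(f) = -2 * log10( eps/(3.7*D) + 2.51/(Re*sqrt(f)) )
        f = 0.02  # initial guess
        for _ in range(iterations):
            f = (-2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (re * math.sqrt(f)))) ** -2
        return f

    def flow_for_slope(slope, diameter_m, roughness_m, nu_m2_s=1.0e-6, g=9.81):
        # Find the velocity giving a target hydraulic slope S, using
        # Darcy-Weisbach S = f * V^2 / (2*g*D) with Colebrook-White for f.
        v = 1.0  # initial velocity guess, m/s
        for _ in range(50):
            re = v * diameter_m / nu_m2_s
            f = colebrook_friction_factor(re, roughness_m / diameter_m)
            v = math.sqrt(2.0 * g * diameter_m * slope / f)
        q = v * math.pi * diameter_m ** 2 / 4.0
        return v, q

    d = 11.938 * 0.0254  # internal diameter of 12-inch Schedule-40 PVC, m
    v, q = flow_for_slope(0.01, d, 0.0015e-3)
    print(f"V = {v:.2f} m/s, Q = {q * 1000:.0f} L/s")  # close to 2.17 m/s and 157 L/s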
The following table gives Reynolds number Re, Darcy friction factor fD, flow rate Q, and velocity V such that hydraulic slope S = hf / L = 0.01, for a variety of nominal pipe (NPS) sizes.
Note that the cited sources recommend that flow velocity be kept below 5 feet / second (~1.5 m/s).
Also note that the given fD in this table is actually a quantity adopted by the NFPA and the industry, known as C, which has the customary units psi/(100 gpm² ft) and can be calculated using the following relation:

    C = Δp / (Q² L)

where Δp is the pressure loss in psi, Q is the flow in units of 100 gpm, and L is the length of the pipe in units of 100 ft
Calculating friction loss for air in a duct
Friction loss takes place as a gas, say air, flows through duct work.
The difference in the character of the flow from the case of water in a pipe stems from the differing Reynolds number Re and the roughness of the duct.
The friction loss is customarily given as pressure loss for a given duct length, Δp / L, in units of (US) inches of water for 100 feet or (SI) kg / m² / s².
For specific choices of duct material, and assuming air at standard temperature and pressure (STP), standard charts can be used to calculate the expected friction loss. The chart exhibited in this section can be used to graphically determine the required diameter of duct to be installed in an application where the volume of flow is determined and where the goal is to keep the pressure loss per unit length of duct S below some target value in all portions of the system under study. First, select the desired pressure loss Δp / L, say 1 kg / m² / s² (0.12 in H2O per 100 ft) on the vertical axis (ordinate). Next scan horizontally to the needed flow volume Q, say 1 m³ / s (2000 cfm): the choice of duct with diameter D = 0.5 m (20 in.) will result in a pressure loss rate Δp / L less than the target value. Note in passing that selecting a duct with diameter D = 0.6 m (24 in.) will result in a loss Δp / L of 0.02 kg / m² / s² (0.02 in H2O per 100 ft), illustrating the great gains in blower efficiency to be achieved by using modestly larger ducts.
The following table gives flow rate Q such that friction loss per unit length Δp / L (SI kg / m² / s²) is 0.082, 0.245, and 0.816, respectively, for a variety of nominal duct sizes. The three values chosen for friction loss correspond to, in US units inch water column per 100 feet, 0.01, 0.03, and 0.1. Note that, in approximation, for a given value of flow volume, a step up in duct size (say from 100mm to 120mm) will reduce the friction loss by a factor of 3.
Note that, for the chart and table presented here, flow is in the turbulent, smooth pipe domain, with R* < 5 in all cases.
Notes
Further reading
– In translation, NACA TT F-10 359. The data are available in digital form.
Cited by Moody, L. F. (1944)
– In English translation, as NACA TM 1292, 1950. The data show in detail the transition region for pipes with high relative roughness (ε/D > 0.001).
Cited by Moody, L. F. (1944)
Exhibits Nikuradse data.
Large amounts of field data on commercial pipes. The Colebrook–White equation was found inadequate over a wide range of flow conditions.
Shows friction factor in the smooth flow region for 1 < Re < 10⁸ from two very different measurements.
References
External links
Pipe pressure drop calculator for single phase flows.
Pipe pressure drop calculator for two phase flows.
Open source pipe pressure drop calculator.
Friction
Fluid dynamics
Fluid mechanics
Mechanical engineering
Piping | Friction loss | [
"Physics",
"Chemistry",
"Engineering"
] | 2,900 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Applied and interdisciplinary physics",
"Building engineering",
"Chemical engineering",
"Surface science",
"Civil engineering",
"Mechanical engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"... |
7,127,409 | https://en.wikipedia.org/wiki/BC%20Research | BC Research Inc. is a Canadian process technology incubator, specializing in R&D, chemical process development, clean technology innovation, and technology commercialization. BC Research Inc. (BCRI) is part of the NORAM Group, a vertically integrated group of companies under common Canadian ownership and located in the Vancouver, B.C area that specialize in the development, scale-up and full-scale commercialization of chemical processes. The group provides a wide range of services from technical consulting to complete, turn-key chemical plants. We have a 35+ year track-record in taking novel technologies from the laboratory to the marketplace. Headquartered in downtown Vancouver, British Columbia, BC Research operates primarily from their "Technology Innovation and Commercialization Centre" on Mitchell Island in the Vancouver suburb of Richmond, BC.
Technology Portfolio
BCRI focuses on the development of innovative chemical processes related to Clean Technology. Key areas of focus include:
Concept development, design, fabrication, commissioning and operation of sophisticated pilot plants and demonstration plants.
Hydrogen production and hydrogen purification.
Electrochemical processes including electrolysis and electrodialysis.
Thermochemical processes including pyrolysis, gasification, reforming, and combustion.
Water treatment.
Bioproducts and green chemistry processes.
Carbon capture and storage.
Mineral processing.
Clean fuels and hydrocarbon processing.
Lithium processing and production of lithium hydroxide.
Multiphase flow reactors, catalytic reactors, gas–liquid contactors.
Process design, process and equipment modelling, and chemical process engineering.
BCRI has an extensive Intellectual Property portfolio of international patents and can work with partners to develop new technologies in a collaborative fashion.
Current Facilities
BCRI's Technology Innovation and Commercialization Centre contains a pilot plant development and operations area with 30 ft (9.2 m) of vertical clearance, wet laboratory space, state-of-the-art analytical chemistry equipment, engineering and design office space, as well as a machine shop and fenced outdoor piloting space. Technologies are scaled up from concept to pilot or demonstration scale in preparation for commercialization.
Industries Served
As part of the NORAM Group, BC Research works in the advancement of process and equipment technologies, in mature industries such as mononitrobenzene production, sulfuric acid manufacture, pulp and paper industry, green chemistry, water treatment, mineral processing, and environmental industry.
BCRI executes projects worldwide and works in close collaboration with other NORAM Group companies including: NORAM Engineering and Constructors Ltd, NORAM Electrolysis Systems Inc. (NESI), NORAM International AB, Axton Inc, and ECOfluid Inc.
Analytical Capabilities
Chromatography: HPLC (High-Performance Liquid Chromatography), GCMS (Gas Chromatography-Mass Spectrometry), GCFID (Gas Chromatography-Flame Ionization Detector), HPIC (High-Performance Ion Chromatography).
Spectroscopy: ICP-MS (Inductively Coupled Plasma Mass Spectrometry), FAAS (Flame Atomic Absorption Spectroscopy), UV/VIS (Ultraviolet/Visible Spectroscopy), FTIR (Fourier-Transform Infrared Spectroscopy).
Thermal Analyzers: TGA (Thermogravimetric Analysis), DSC (Differential Scanning Calorimetry).
Surface Tension: Force tensiometer, SDT (Spinning Drop Tensiometer).
Physical properties: Rheology, particle analysis, density, high-speed cameras and sound velocity, laser PIV (Particle Image Velocimetry), and many others.
Autotitration, portable gas analyzers.
Microscopy including Confocal Microscope and SEM-EDXS (Scanning Electron Microscope with Energy-Dispersive X-ray Spectroscopy).
History
Previously, BC Research operated as a scientific research and development company located at the BC Research and Innovation Complex at the south end of the University of British Columbia campus, close to the TRIUMF particle accelerator centre. This facility closed in November 2007. The company specialized in consulting and applied research and development in the areas of plant biotechnology; environment, health and safety; process and analysis; and transportation and ship dynamics.
The company can be traced back to 1944 as it developed from the non-profit BC Research Council to a private company in 1993, founded by Dr. Hugh Wynne-Edwards, Ph.D, DSc., FRSC, a member of the Order of Canada, who served as the founding Chief Executive Officer and developed the facility into an incubator in the fields of biotechnology, drug discovery and alternative fuel technologies.
In 2000, part of BC Research was purchased by Immune Network Ltd and was sold to Cromedica (now PRA International) in July 2001. Its plant biotechnology team was mostly spun off into Silvagen Inc., which specialized in clonal reforestation and later became part of CellFor. In 1999, Azure Dynamics, a hybrid commercial vehicle systems developer, was formed with some of the transportation team; it went public in 2001 as Azure Dynamics Corporation and left the facility in 2004. Radient Technologies, specializing in microwave-assisted cannabis extraction, purification and isolation, was also spun off in 2001 as a joint venture with Environment Canada. The remaining laboratory and consulting business functions continued under the name Vizon SciTec until August 2006, when CANTEST Ltd. announced its acquisition from BC Research Inc., which continues as a privately held technology holding company.
Relaunch as Part of the NORAM Group of Companies
BC Research Inc. is now a wholly owned subsidiary of the NORAM Group, a private, vertically integrated portfolio of businesses serving process scale-up, engineering, R&D, pilot plants, demonstration plants, modular plants, custom fabrication, and site assistance. In 2010, BC Research Inc. (BCRI) opened again for business in Burnaby, B.C. under the NORAM Group of Companies. The company continued to provide specialized consulting and applied research and development in an expanding number of technologies and industries, including fluidized beds, energy storage in batteries, fuel cells, electrochemical cells, corrosion testing and analysis, hydrogen, sulfur, chlorine, nitration, water treatment, and pulp and paper chemistry.
In 2017, BCRI moved to a newly constructed Technology Innovation and Commercialization Centre on Mitchell Island in Richmond, BC to expand its capabilities. This current facility is described in detail above.
References
External links
BC Research Inc. - BCRI
BC Research facilities video
NORAM Group of Companies and NORAM Engineering
NORAM Electrolysis Systems Inc. - NESI
Axton Inc
NORAM International AB
ECOfluid
Hugh Wynne-Edwards http://www.legacy.com/obituaries/vancouversun/obituary.aspx?pid=165088497
Encyclopedia of British Columbia, Edited by Daniel Francis, Harbour Publishing 2000
Research institutes in Canada
Companies based in Vancouver
Canadian companies established in 2010 | BC Research | [
"Chemistry",
"Engineering"
] | 1,400 | [
"Chemistry laboratories",
"Chemical research institutes",
"Engineering research institutes"
] |
7,127,508 | https://en.wikipedia.org/wiki/Open%20Prosthetics%20Project | The Open Prosthetics Project (OPP) is an open design effort, dedicated to public domain prosthetics.
By creating an online collaboration between prosthetic users and designers, the project aims to make new technology available for anyone to use and customize. On the project's website, medical product designers can post new ideas for prosthetic devices as CAD files, which are then available to the public free of charge. Prosthetic users or other designers can download the Computer-aided design (CAD) data, customize or improve upon the prosthesis, and repost the modifications to the web site. Users are free to take 3D models to a fabricator and have the hardware built for less cost than buying a manufactured limb.
The project was started by Jonathon Kuniholm, a member of United States Marine Corps Reserve who lost part of his right arm to an improvised explosive device (IED) in Iraq. Upon returning home and receiving his first myoelectric hand, he decided there must be a better solution.
References
Sources
Public domain
Prosthetics
Medical and health organizations based in North Carolina
Open content projects
Open-source hardware | Open Prosthetics Project | [
"Engineering",
"Biology"
] | 240 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
7,127,688 | https://en.wikipedia.org/wiki/Nest%20%28magazine%29 | Nest: A Quarterly of Interiors was a magazine published from 1997 to 2004, for a total run of 26 issues. The first issue was Fall 1997, and the second issue was Fall 1998. Thereafter, the issues were Winter '98-'99, Spring '99, Summer '99, Fall '99, Winter '99-'00, and so on until Fall '04. The founder was Joseph Holtzman. It was published in Upper East Side, New York City.
Marketed as an interior design magazine, and edited by Joseph Holtzman, Nest generally eschewed the conventionally beautiful luxury interiors showcased in other magazines, and instead featured photographs of nontraditional, exceptional, and unusual environments. Fred A. Bernstein, writing in the New York Times, wrote that Joseph Holtzman "believed that an igloo, a prison cell or a child's attic room (adorned with Farrah Fawcett posters) could be as compelling as a room by a famous designer." During its run, Nest showed the room of a 40-year-old diaper lover, the lair of an Indonesian bird that decorates with colored stones and vomit, the final resting place of Napoleon's penis, the quarters of Navy seamen, a barbed-wire-trimmed bed that doubled as a tank, and a Gothic Christmas card from filmmaker John Waters. Noted architect Rem Koolhaas called it "an anti-materialistic, idealistic magazine about the hyperspecific in a world that is undergoing radical leveling, an 'interior design' magazine hostile to the cosmetic." Artist Richard Tuttle was quoted as saying that Mr. Holtzman "channeled the collective unconscious, to give us the pleasure of ornament before we even knew we wanted it."
Awards
2000, General Excellence Award, The American Society of Magazine Editors
2001, Best Design, The American Society of Magazine Editors
References
External links
The now defunct website of Nest: A Quarterly of Interiors
Commentary on Nest
Nest Magazine Closes
I miss Nest Magazine - commentary with pictures
Visual arts magazines published in the United States
Quarterly magazines published in the United States
Defunct magazines published in the United States
Design magazines
Independent magazines
Interior design
Magazines established in 1998
Magazines disestablished in 2004
Magazines published in New York City | Nest (magazine) | [
"Engineering"
] | 456 | [
"Design magazines",
"Design"
] |
7,127,872 | https://en.wikipedia.org/wiki/Flagpole | A flagpole, flagmast, flagstaff, or staff is a pole designed to support a flag. If it is taller than can be easily reached to raise the flag, a cord is used, looping around a pulley at the top of the pole with the ends tied at the bottom. The flag is fixed to one lower end of the cord, and is then raised by pulling on the other end. The cord is then tightened and tied to the pole at the bottom. The pole is usually topped by a flat plate or ball called a "truck" (originally meant to keep a wooden pole from splitting) or a finial in a more complex shape. Very high flagpoles may require more complex support structures than a simple pole, such as a guyed mast.
Dwajasthambam are flagpoles commonly found at the entrances of South Indian Hindu temples.
Design
Flagpoles are usually made of wood or metal. Flagpoles can be designed in one piece with a taper (typically a steel taper or a Greek entasis taper), or be made from multiple pieces to make them able to expand. In the United States, ANSI/NAAMM guide specification FP-1001-97 covers the engineering design of metal flagpoles to ensure safety.
Flag orientation
Most flags are flown horizontally, with the shorter edge attached to the pole (no. 1 in the following illustration.) Vertical flags, with the longer edge attached to the pole, are sometimes used in lieu of the standard horizontal flag in central and eastern Europe, particularly in the German-speaking countries. This practice came about because the relatively brisk wind needed to display horizontal flags is not common in these countries. Nevertheless, horizontal flags are still the most common even in these countries.
The standard vertical flag (German: Hochformatflagge or Knatterflagge; no. 2) is a vertical form of the standard flag. The flag's design may remain unchanged (No. 2a) or it may change, e.g. by altering horizontal stripes to vertical ones (no. 2b). If the flag carries an emblem, it may remain centred or may be shifted slightly upwards.
The vertical flag for hoisting from a beam (German: Auslegerflagge or Galgenflagge; no. 3) is additionally attached to a horizontal beam, ensuring that it is fully displayed even if there is no wind.
The vertical flag for hoisting from a horizontal pole (German: Hängeflagge; no. 4) is hoisted from a horizontal pole, normally attached to a building. The topmost stripe on the horizontal version of the flag faces away from the building.
The vertical flag for hoisting from a crossbar or banner (German: Bannerflagge; no. 5) is firmly attached to a horizontal crossbar from which it is hoisted, either by a vertical pole (no. 5a) or a horizontal one (no. 5b). The topmost stripe on the horizontal version of the flag normally faces to the left.
Record heights
Since 26 December 2021, the tallest free-standing flagpole in the world is the Cairo Flagpole, located in the New Administrative Capital, Egypt, at a height of , exceeding the former record holders, the Jeddah Flagpole in Saudi Arabia (height: ), the Dushanbe Flagpole in Tajikistan (height: ) and the National Flagpole in Azerbaijan (height: ). The flagpole in North Korea is the fourth-tallest flagpole in the world; however, it is not free-standing, as it is a radio-tower-supported flagpole. Many of these were built by the American company Trident Support: the Dushanbe Flagpole, the National Flagpole in Azerbaijan, the Ashgabat flagpole in Turkmenistan at ; the Aqaba Flagpole in Jordan at ; the Raghadan Flagpole in Jordan at ; and the Abu Dhabi Flagpole in the United Arab Emirates at .
The current tallest flagpole in India is the flagpole in Belgaum, Karnataka, where the flag was first hoisted on 12 March 2018. The tallest flagpole in the United Kingdom from 1959 until 2007 stood in Kew Gardens. It was made from a Canadian Douglas-fir tree and was in height.
The current tallest flagpole in the United States (and the tallest flying an American flag) is the pole completed before Memorial Day 2014 and custom-made with an base in concrete by wind turbine manufacturer Broadwind Energy. It is situated on the north side of the Acuity Insurance headquarters campus along Interstate 43 in Sheboygan, Wisconsin, and is visible from Cedar Grove. The pole can fly a 220-pound flag for in light wind conditions and a heavier 350-pound flag in higher wind conditions.
References
Vexillology | Flagpole | [
"Mathematics"
] | 981 | [
"Symbols",
"Flags"
] |
7,128,296 | https://en.wikipedia.org/wiki/SmartComputing | Smart Computing was a monthly computing and technology magazine published by Sandhills Publishing Company in Lincoln, Nebraska, USA. First released under the name PC Novice, it was published from 1990 to 2013.
Content
The magazine featured articles, reviews of hardware and software, editorial content and classified advertising. It was geared more toward newer users than its sister publications, Computer Power User and CyberTrend (previously known as PC Today).
Articles and Features
Technology News and Notes, by Christian Perry - News and a monthly Q/A help desk
Tech Diaries, various authors - Reviews
Software Head-to-Head, various authors - a comparison of software
September 2006: Anti-Spam: , SonicWALL Email Security Desktop, OnlyMyEmail, VQme Anti Spam with Webmail. Winner: SonicWALL Email Security Desktop
October 2006: Instant Messaging clients: Yahoo! Messenger 8, AIM Triton 1.5, Google Talk, ICQ 5.1, Trillian 3.1, Windows Live Messenger. Winner: Yahoo! Messenger
January 2007: Office suites: StarOffice 8, Microsoft Office 2007 Home and Student Edition, Corel WordPerfect X3 Standard Edition, Ability Office Standard Edition. Winner: StarOffice 8
Software Reviews, various
Staff Picks, various - staff's choices of hardware
Windows Tips & Tricks, various - helpful hints for using Microsoft Windows
General Computing, various - articles about no specific topic
Reader's Tips, by readers - readers give hints to other readers
Learning Linux, by Vince Cogley, NEW COLUMN - teach yourself using Linux with the Ubuntu distribution
Plugged In, various - tips on using the Internet
Mr. Modem's Desktop, by Mr. Modem - various tips and Internet links
Quick Studies, various - tips on and fixing problems with using very commonly used software
Tidbits, by Marty Sems - information on new stuff
Tech Support, various - consists of:
What to Do When... - a guide on fixing road-block problems
Examining Errors - the magazine helps readers with errors
Fast Fixes - information on new software updates
Q&A - answers to tech support questions
FAQ - answers to frequently asked questions; each month all questions are about the same topic
Action Editor, unknown - Action Editor comes to the rescue when companies deny service or give bad service
Tales From The Trenches, by Gregory Anderson - his bad experiences when using computers and what to do about them if they happen to you
Editorial License, by Rod Scher - description unknown
See also
Computer magazines
References
External links
Publisher's website
1990 establishments in Nebraska
2013 disestablishments in Nebraska
Monthly magazines published in the United States
Defunct computer magazines published in the United States
Home computer magazines
Magazines established in 1990
Magazines disestablished in 2013
Magazines published in Nebraska
Mass media in Lincoln, Nebraska | SmartComputing | [
"Technology"
] | 568 | [
"Computing stubs",
"Computer magazine stubs"
] |
7,128,334 | https://en.wikipedia.org/wiki/RCA%20clean | The RCA clean is a standard set of wafer cleaning steps which need to be performed before high-temperature processing steps (oxidation, diffusion, CVD) of silicon wafers in semiconductor manufacturing.
Werner Kern developed the basic procedure in 1965 while working for RCA, the Radio Corporation of America. It involves the following chemical processes performed in sequence:
Removal of the organic contaminants (organic clean + particle clean)
Removal of thin oxide layer (oxide strip, optional)
Removal of ionic contamination (ionic clean)
Standard recipe
The wafers are prepared by soaking them in deionized water. If they are grossly contaminated (visible residues), they may require a preliminary cleanup in piranha solution. The wafers are thoroughly rinsed with deionized water between each step.
Ideally, the steps below are carried out by immersing the wafers in solutions prepared in fused silica or fused quartz vessels (borosilicate glassware must not be used, as its impurities leach out and cause contamination). Likewise it is recommended that the chemicals used be of electronic grade (or "CMOS grade") to avoid impurities that will recontaminate the wafer.
First step (SC-1): organic clean + particle clean
The first step (called SC-1, where SC stands for Standard Clean) is performed with a solution of (ratios may vary)
5 parts of deionized water
1 part of ammonia water, (29% by weight of NH3)
1 part of aqueous H2O2 (hydrogen peroxide, 30%)
at 75 or 80 °C, typically for 10 minutes. This base-peroxide mixture removes organic residues. Particles are also very effectively removed, even insoluble particles, since SC-1 modifies the surface and particle zeta potentials and causes them to repel. This treatment results in the formation of a thin silicon dioxide layer (about 10 Angstrom) on the silicon surface, along with a certain degree of metallic contamination (notably iron) that will be removed in subsequent steps.
Second step (optional): oxide strip
The optional second step (for bare silicon wafers) is a short immersion in a 1:100 or 1:50 solution of aqueous HF (hydrofluoric acid) at 25 °C for about fifteen seconds, in order to remove the thin oxide layer and some fraction of ionic contaminants. If this step is performed without ultra high purity materials and ultra clean containers, it can lead to recontamination since the bare silicon surface is very reactive. In any case, the subsequent step (SC-2) dissolves and regrows the oxide layer.
Third step (SC-2): ionic clean
The third and last step (called SC-2) is performed with a solution of (ratios may vary)
6 parts of deionized water
1 part of aqueous HCl (hydrochloric acid, 37% by weight)
1 part of aqueous H2O2 (hydrogen peroxide, 30%)
at 75 or 80 °C, typically for 10 minutes. This treatment effectively removes the remaining traces of metallic (ionic) contaminants, some of which were introduced in the SC-1 cleaning step. It also leaves a thin passivating layer on the wafer surface, which protects the surface from subsequent contamination (bare exposed silicon is contaminated immediately).
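As a convenience, the SC-1 and SC-2 ratios quoted above can be turned into batch volumes. The following minimal Python sketch is illustrative only: the helper name and the 1 L batch size are this sketch's own choices, not part of the RCA procedure.

```python
# Split a target batch volume according to the parts ratios given in this article:
# SC-1 is 5:1:1 (water : ammonia water : H2O2) and SC-2 is 6:1:1 (water : HCl : H2O2).

def mix_volumes(total_ml, ratio):
    """Return the volume of each component for a batch of total_ml millilitres."""
    parts = sum(ratio.values())
    return {component: total_ml * share / parts for component, share in ratio.items()}

SC1 = {"deionized water": 5, "ammonia water (29% NH3)": 1, "H2O2 (30%)": 1}  # organic/particle clean
SC2 = {"deionized water": 6, "HCl (37%)": 1, "H2O2 (30%)": 1}                # ionic clean

if __name__ == "__main__":
    for name, ratio in (("SC-1", SC1), ("SC-2", SC2)):
        print(name, mix_volumes(1000, ratio))  # component volumes in mL for a 1 L batch
```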
Fourth step: rinsing and drying
Provided the RCA clean is performed with high-purity chemicals and clean glassware, it results in a very clean wafer surface while the wafer is still submersed in water. The rinsing and drying steps must be performed correctly (e.g., with flowing water) since the surface can be easily recontaminated by organics and particulates floating on the surface of water. A variety of procedures can be used to rinse and dry the wafer effectively.
Additions
The first step in the ex situ cleaning process is to ultrasonically degrease the wafer in trichloroethylene, acetone and methanol.
See also
Chemical-mechanical planarization
Piranha solution
Plasma etching
Silicon on insulator
Wafer (electronics)
References
External links
RCA Clean, School of Electrical and Computer Engineering, Georgia Institute of Technology
Semiconductor device fabrication | RCA clean | [
"Materials_science"
] | 880 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
7,128,429 | https://en.wikipedia.org/wiki/Fusicoccin | Fusicoccins are organic compounds produced by a fungus. It has detrimental effect on plants and causes their death.
Fusicoccins are diterpenoid glycosides produced by the fungus Fusicoccum amygdali, which is a parasite mainly of almond and peach trees. Fusicoccin stimulates a rapid acidification of the plant cell wall; this causes the stomata to open irreversibly, which brings about the death of the plant.
Fusicoccin contains three fused carbon rings and another ring that contains an oxygen atom and five carbons.
Fusicoccin was and is extensively used in research regarding the plant hormone auxin and its mechanisms.
Biosynthesis
Fusicoccin is a member of a diterpenoid class which shares a 5-8-5 ring structure and is called fusicoccane. In fungi, fusicoccin is biosynthesized via Phomopsis amygdali fusicoccadiene synthase (PaFS) from the universal C5 isoprene units dimethylallyl diphosphate (DMAPP) and isopentenyl diphosphate (IPP). PaFS has two domains: a C-terminal prenyltransferase domain, which converts the isoprene units into geranylgeranyl diphosphate (GGDP), and an N-terminal terpene cyclase domain, where GGDP is cyclized into fusicocca-2,10(14)-diene. It is also reported that a 2-oxoglutarate-dependent dioxygenase-like gene, a cytochrome P450 monooxygenase-like gene, a short-chain dehydrogenase/reductase-like gene, and an α-mannosidase-like gene located 3' downstream of PaFS are responsible for converting fusicocca-2,10(14)-diene into fusicoccin. Two enzymes, a dioxygenase and PAPT, catalyze a hydroxylation at the 3-position of fusicocca-2,10(14)-diene-8β,16-diol and the prenylation of the hydroxyl group of glucose in fusicoccin P, respectively.
References
Plant physiology
Cyclopentanes
Diterpene glycosides | Fusicoccin | [
"Biology"
] | 514 | [
"Plant physiology",
"Plants"
] |
7,128,467 | https://en.wikipedia.org/wiki/PC%20Today | PC Today (Later Cyber Trend) was a monthly mobile computing and technology computer magazine published by Sandhills Publishing Company in Lincoln, Nebraska, US.
History and profile
The article and editorial content focused primarily around mobile and wireless technologies, notebooks, mobile phones, PDAs, Windows, and office and home software. The magazine was renamed CyberTrend in 2014, which was distributed to business-class hotels, airline clubs, and fixed-base operators. The magazine also included classified advertising. Nancy Hammel served as the editor-in-chief of the magazine when it was published under the title of PC Today. The magazine ceased publication in July 2017.
References
External links
Publisher's website
Monthly magazines published in the United States
Defunct computer magazines published in the United States
Home computer magazines
Magazines with year of establishment missing
Magazines disestablished in 2017
Magazines published in Nebraska
Mass media in Lincoln, Nebraska | PC Today | [
"Technology"
] | 180 | [
"Computing stubs",
"Computer magazine stubs"
] |
7,128,600 | https://en.wikipedia.org/wiki/Gray%20baby%20syndrome | Gray baby syndrome (also termed gray syndrome or grey syndrome) is a rare but serious, even fatal, side effect that occurs in newborn infants (especially premature babies) following the accumulation of the antibiotic chloramphenicol.
Chloramphenicol is a broad-spectrum antibiotic that has been used to treat a variety of bacterial infections, such as those caused by Streptococcus pneumoniae, as well as typhoid fever, meningococcal sepsis, cholera, and eye infections. Chloramphenicol works by binding to ribosomal subunits, which blocks transfer ribonucleic acid (tRNA) and prevents the synthesis of bacterial proteins. Chloramphenicol has also been given prophylactically to neonates born before 37 weeks of gestation.
In 1958, newborns born prematurely after rupture of the amniotic sac were given chloramphenicol to prevent possible infections, and it was noticed that these newborns had a higher mortality rate than those not treated with the antibiotic. Over the years, chloramphenicol has been used less in clinical practice because of the risk of toxicity not only in neonates but also in adults, in whom it can cause aplastic anemia. Chloramphenicol is now reserved for certain severe bacterial infections that have not been successfully treated with other antibiotics.
Signs and symptoms
Since the syndrome is due to the accumulation of chloramphenicol, the signs and symptoms are dose related. According to Kasten's review published in the Mayo Clinic Proceedings, a serum concentration of more than 50 μg/mL is a warning sign, while Hammett-Stabler and John, in their review of antimicrobial drugs, state that the usual therapeutic peak level is 10-20 μg/mL and is expected to be reached 0.5-1.5 hours after intravenous administration. Signs and symptoms commonly begin 2 to 9 days after the medication is started, which allows the serum concentration to build up to the toxic level above. Common signs and symptoms include loss of appetite, fussiness, vomiting, ashen gray color of the skin, hypotension (low blood pressure), cyanosis (blue discoloration of lips and skin), hypothermia, cardiovascular collapse, hypotonia (decreased muscle tone), abdominal distension, irregular respiration, and increased blood lactate.
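Purely as an illustration of the dose-related thresholds quoted above, and not as clinical guidance, the following minimal sketch (the function name and return strings are invented for this example) classifies a measured serum concentration against those ranges:

```python
# Toy classification of a serum chloramphenicol concentration against the ranges
# quoted in this article: therapeutic peak roughly 10-20 ug/mL, warning above 50 ug/mL.

def classify_serum_level(conc_ug_per_ml: float) -> str:
    if conc_ug_per_ml > 50:
        return "above the toxicity warning level"
    if 10 <= conc_ug_per_ml <= 20:
        return "within the usual therapeutic peak range"
    return "outside the usual therapeutic peak range"

print(classify_serum_level(15))   # within the usual therapeutic peak range
print(classify_serum_level(62))   # above the toxicity warning level
```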
Pathophysiology
Two pathophysiologic mechanisms are thought to play a role in the development of gray baby syndrome after exposure to chloramphenicol. This condition is due to a lack of glucuronidation reactions occurring in the baby (phase II hepatic metabolism), thus leading to an accumulation of toxic chloramphenicol metabolites:
Metabolism: The UDP-glucuronyl transferase enzyme system in infants, especially premature infants, is not fully developed and is incapable of metabolizing the drug load; this conjugation step is needed before chloramphenicol can be excreted.
Elimination: Insufficient renal excretion of the unconjugated drug.
Insufficient metabolism and excretion of chloramphenicol leads to increased blood concentrations of the drug, causing blockade of electron transport in the liver, myocardium, and skeletal muscle. Since electron transport is an essential part of cellular respiration, its blockade can result in cell damage. In addition, the presence of chloramphenicol weakens the binding of bilirubin to albumin, so increased levels of the drug can lead to high levels of free bilirubin in the blood, resulting in brain damage or kernicterus. If left untreated, bleeding, renal (kidney) and/or hepatic (liver) failure, anemia, infection, confusion, weakness, blurred vision, and eventually death can be expected. Additionally, chloramphenicol is poorly soluble because its molecule lacks acidic and basic groups. As a result, larger amounts of the medication are required to achieve the desired therapeutic effect; the need for high doses of a drug that can cause various toxicities is another way chloramphenicol can lead to gray baby syndrome.
Diagnosis
Gray baby syndrome should be suspected in a newborn with abdominal distension, progressive pallid cyanosis, irregular respirations, and refusal to breastfeed. The syndrome can result from the direct use of intravenous or oral chloramphenicol in neonates. A direct chronological relation between the use of the medication and the signs and symptoms of the syndrome should be found in the medical history.
In terms of the possible route of exposure, gray baby syndrome does not result from the mother's use of chloramphenicol during pregnancy or breastfeeding. The Drugs and Lactation Database (LactMed) states that "milk concentrations are not sufficient to induce gray baby syndrome". It has also been reported that the syndrome may not develop in infants whose mothers used the medication late in pregnancy. According to the Oxford Review, gray baby syndrome was not caused by chloramphenicol given to mothers during pregnancy, but by infants receiving supra-therapeutic doses of chloramphenicol after birth.
The presentation of symptoms can depend on the level of exposure of the drug to the baby, given its dose-related nature. A broad diagnosis is usually needed for babies who present with cyanosis. To support the diagnosis, blood work should be done to determine the level of serum chloramphenicol, and to further evaluate chloramphenicol toxicity, a metabolic panel and a complete blood panel including levels of serum ketones and glucose (due to the risk of hypoglycemia) should be completed to help determine if an infant has the syndrome. Other tools used to help with diagnosis include CT scans, ultrasound, and electrocardiogram.
Prevention
Since the syndrome is a side effect of chloramphenicol, prevention is primarily a matter of proper use of the medication. The WHO Model Formulary for Children 2010 recommends reserving chloramphenicol for life-threatening infections. As well as being used only when necessary, chloramphenicol should be limited to short periods of time to prevent the potential for toxicity. In particular, the medication should not be prescribed to neonates less than one week old, due to the significant risk of toxicity, and preterm infants especially should not be administered chloramphenicol. Gray baby syndrome has been noted to be dose-dependent, as it typically occurs in neonates who have received a daily dose greater than 200 milligrams.
When chloramphenicol is necessary, the condition can be prevented by using the recommended doses and monitoring blood levels; alternatively, third-generation cephalosporins can be effectively substituted for the drug without the associated toxicity. Repeated administration and prolonged treatment should also be avoided. In terms of neonatal hepatic development, it takes only weeks after birth for UDPGT expression and function to reach an adult-like level, whereas the function is only about 1% of that level in late pregnancy, even right before birth. According to the MSD Manuals, the starting dose of chloramphenicol in neonates younger than 1 month of age should not exceed 25 mg/kg/day. The serum concentration of the medication should be monitored to titrate to a therapeutic level and to prevent toxicity.
Other medications the neonate may be taking that can decrease blood cell counts should be reviewed, because chloramphenicol can suppress bone marrow activity. Rifampicin and trimethoprim are examples of such medications and are contraindicated for concomitant use with chloramphenicol. Regarding bone marrow suppression, chloramphenicol has two major manifestations. The first affects hematopoiesis and is reversible, being an early sign of toxicity. The second is bone marrow aplasia, which is associated with terminal toxicity and is sometimes irreversible.
Chloramphenicol is contraindicated in breastfeeding due to the risk of toxic effects to the baby. However, if maternal use cannot be avoided, close monitoring of the baby's symptoms such as feeding difficulties and blood work is recommended.
Treatment
Chloramphenicol therapy should be stopped immediately if objective or subjective signs of gray baby syndrome are suspected, since the syndrome can be fatal for the infant if not diagnosed early, leading to anemia, shock, and end-organ damage. After discontinuing the antibiotic, the side effects caused by the toxicity should be treated. This includes treating hypoglycemia to help prevent hemodynamic instability, as well as warming the infant if hypothermia has developed.
Since symptoms of gray baby syndrome are correlated with elevated serum chloramphenicol concentrations, exchange transfusion may be required to remove the drug. Charcoal column hemoperfusion, an extracorporeal blood-purification technique, has shown significant effect but is associated with numerous side effects. The associated side effects are not the only reason this method is not a first-line therapy: according to the American Journal of Kidney Diseases, high cartridge prices and the limited viable lifespan of the product are additional deterring factors. Aside from its traditional indication for chronic aluminum toxicity in people with end-stage renal disease (ESRD), charcoal hemoperfusion has shown particular efficacy for phenobarbital and theophylline. Sometimes, phenobarbital is used to induce UDP-glucuronyl transferase enzyme function.
For hemodynamically unstable neonates, supportive care measures such as resuscitation, oxygenation, and treatment of hypothermia are common practice when cessation of chloramphenicol alone is insufficient. Because sepsis is a complication of severe gray baby syndrome, use of broad-spectrum antibiotics such as vancomycin is a recommended treatment option. Third-generation antibiotics have also proven effective in treating sepsis associated with gray baby syndrome.
References
Further reading
External links
Poisoning by drugs, medicaments and biological substances
Syndromes | Gray baby syndrome | [
"Environmental_science"
] | 2,174 | [
" medicaments and biological substances",
"Toxicology",
"Poisoning by drugs"
] |
7,128,952 | https://en.wikipedia.org/wiki/A36%20steel | A36 steel is a common structural steel alloy used in the United States. The A36 (UNS K02600) standard was established by the ASTM International. The standard was published in 1960 and has been updated several times since. Prior to 1960, the dominant standards for structural steel in North America were A7 (until 1967) and A9 (for buildings, until 1940). Note that SAE/AISI A7 and A9 tool steels are not the same as the obsolete ASTM A7 and A9 structural steels.
Chemical composition
Note: For shapes with a flange thickness of more than 3 in (76 mm), 0.85-1.35% manganese content and 0.15-0.40% silicon content are required.
Properties
As with most steels, A36 has a density of . Young's modulus for A36 steel is . A36 steel has a Poisson's ratio of 0.26 and a shear modulus of .
A36 steel in plates, bars, and shapes with a thickness of less than has a minimum yield strength of and ultimate tensile strength of . Plates thicker than 8 inches have a yield strength and the same ultimate tensile strength of . The electrical resistivity of A36 is 0.142 μΩm at . A36 bars and shapes maintain their ultimate strength up to . Above that temperature, the minimum strength drops off from : at ; at ; at .
Fabricated forms
A36 is produced in a wide variety of forms, including:
Plates
Structural Shapes
Bars
Girders
Angle iron
T iron
Methods of joining
A36 is readily welded by all welding processes. As a result, the most common welding methods for A36 are the cheapest and easiest: shielded metal arc welding (SMAW, or stick welding), gas metal arc welding (GMAW, or MIG welding), and oxyacetylene welding. A36 steel is also commonly bolted and riveted in structural applications, though high-strength bolts have largely replaced structural steel rivets; indeed, the latest steel construction specification published by AISC (the 15th Edition) no longer covers their installation.
See also
Structural steel
References
Steels
Structural steel | A36 steel | [
"Engineering"
] | 448 | [
"Steels",
"Structural engineering",
"Alloys",
"Structural steel"
] |
7,129,083 | https://en.wikipedia.org/wiki/TIA-MC-1 | The TIA-MC-1 () — Телевизионный Игровой Автомат Многокадровый Цветной (pronounced Televizionniy Igrovoi Automat Mnogokadrovyi Tcvetnoi; meaning Video Game Machine – Multiframe Colour) was a Soviet arcade machine with replaceable game programs and was one of the most famous arcade machines from the Soviet Union. The TIA-MC-1 was developed in Vinnytsia, Ukraine by the Ekstrema-Ukraina company in the mid-1980s under the leadership of V.B. Gerasimov. The machine was manufactured by the production association Terminal and some other factories.
Games
Some of the TIA-MC-1 based games are:
Автогонки (Avtogonki, Autoracing)
Биллиард (Billiard, a pool-like game)
Звёздный рыцарь (Zvezdnyi Rytsar, Star Knight)
Истребитель (Istrebiteli, Fighter Jet, Harrier)
Конёк-Горбунок (Konek-Gorbunok, The Humpbacked Horse by Pyotr Pavlovich Yershov)
Кот-рыболов (Kot-Rybolov, Cat the fisher)
Котигорошко (Kotigoroshko, title of a Russian fairy tale)
Остров дракона (Ostrov Drakona, Dragon Island)
Остров сокровищ (Ostrov Sokrovish, Treasure Island by Robert Louis Stevenson)
Снежная королева (Snezhnaja koroleva, The Snow Queen by Hans Christian Andersen)
S.O.S.
The Konek-Gorbunok game is comparable to The Legend of Zelda and included environments such as forests and castles.
Technical specifications
The arcade machine consists of several boards called BEIA (Russian:БЭИА, Блок Элементов Игрового Автомата, Blok Elementov Igrovogo Automata).
The boards have the following purposes:
BEIA-100: data processing; RGB DAC; sound generation; coin-op and game controller interface
BEIA-101: video sync and background generation
BEIA-102: sprite generation
BEIA-103: game ROM and main RAM
Games in a TIA-MC-1 arcade machine can be switched by replacing the BEIA-103 module, not unlike cartridges in video game consoles.
Main system characteristics are as follows (a short sketch after this list shows how the listed RAM sizes follow from the video geometry):
CPU: КР580ВМ80А (clone of Intel 8080), 1.78 MHz
Video resolution: 256×256, 4 bits per pixel selectable from a palette of 256 colors
Background: two video pages composed of 32×32 tiles, each tile is 8×8 pixels. Tile RAM can store 256 separate tiles.
Sprites: up to 16 simultaneously displayed hardware-generated sprites; total of 256 sprites can be stored in sprite ROM. Sprites can be vertically and horizontally mirrored in hardware.
Sound: two КР580ВИ53 interval timers (Intel 8253) driving a mono speaker.
Display: 20" (51 cm) TV screen
Main RAM — 8KiB.
Character RAM — 8KiB.
Video RAM — 2KiB.
Sprite ROM — 32KiB.
ROM with game code and background graphics — up to 56KiB.
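As a quick back-of-the-envelope check, the Character RAM and Video RAM figures above are consistent with the tile geometry listed. The sketch below is not from the original documentation; it assumes one byte per tile-map entry and that the Character RAM holds the tile pixel data.

```python
# Consistency check of the TIA-MC-1 memory figures against the listed video geometry.

TILE_W = TILE_H = 8          # each tile is 8x8 pixels
MAP_W = MAP_H = 32           # a background page is 32x32 tiles (256x256 pixels)
BITS_PER_PIXEL = 4           # 4 bpp, selectable from a 256-color palette
NUM_TILES = 256              # tile RAM can store 256 separate tiles
NUM_PAGES = 2                # two background video pages

tile_bytes = TILE_W * TILE_H * BITS_PER_PIXEL // 8        # 32 bytes per tile
character_ram = NUM_TILES * tile_bytes                     # 8192 bytes = 8 KiB
video_ram = NUM_PAGES * MAP_W * MAP_H * 1                  # 1-byte tile index -> 2048 bytes = 2 KiB

print(f"Character RAM: {character_ram // 1024} KiB")       # matches the 8 KiB listed above
print(f"Video RAM:     {video_ram // 1024} KiB")           # matches the 2 KiB listed above
```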
Emulation
For a long time the TIA-MC-1 hardware remained unemulated due to a lack of technical information and ROM dumps. Soon after the Russian emulation community obtained technical documentation and ROM dumps of one of the games, Konek-Gorbunok, the first emulator, named TIA-MC Emulator, was released on July 27, 2006. A TIA-MC-1 driver was included in MAME on August 21, 2006 (since version 0.108). To date, only five games (Konek-Gorbunok, S.O.S., Billiard, Snezhnaja koroleva, Kot-Rybolov) have been dumped and are supported by emulators. A search for other games is ongoing.
See also
Photon (arcade cabinet)
List of Soviet computer systems
References
External links
Extreme-Ukraine site, short history of TIA-MC-1 in English, as of October 23, 2007, courtesy of the Wayback Machine.
The Alternate Universe of Soviet Arcade Games (September 1, 2015. Kristin Winet.)
Arcade system boards
Computing in the Soviet Union
Soviet brands
Goods manufactured in the Soviet Union
Arcade-only video games
Arcade video games | TIA-MC-1 | [
"Technology"
] | 1,062 | [
"Computing in the Soviet Union",
"History of computing"
] |
7,129,616 | https://en.wikipedia.org/wiki/Mucicarmine%20stain | Mucicarmine stain is a staining procedure used for different purposes. In microbiology the stain aids in the identification of a variety of microorganisms based on whether or not the cell wall stains intensely red. Generally this is limited to microorganisms with a cell wall that is composed, at least in part, of a polysaccharide component. One of the organisms that is identified using this staining technique is Cryptococcus neoformans.
Another use is in surgical pathology where it can identify mucin. This is helpful, for example, in determining if the cancer is a type that produces mucin.
An example would be distinguishing high-grade mucoepidermoid carcinoma of the parotid, which stains positive, from squamous cell carcinoma of the parotid, which does not.
References
Carbohydrate methods
Staining dyes | Mucicarmine stain | [
"Chemistry",
"Biology"
] | 184 | [
"Biochemistry methods",
"Carbohydrate chemistry",
"Carbohydrate methods"
] |
7,130,280 | https://en.wikipedia.org/wiki/FKBP | The FKBPs, or FK506 binding proteins, constitute a family of proteins that have prolyl isomerase activity and are related to the cyclophilins in function, though not in amino acid sequence. FKBPs have been identified in many eukaryotes, ranging from yeast to humans, and function as protein folding chaperones for proteins containing proline residues. Along with cyclophilin, FKBPs belong to the immunophilin family.
FKBP1A (also known as FKBP12) is notable in humans for binding the immunosuppressant molecule tacrolimus (originally designated FK506), which is used in treating patients after organ transplant and patients with autoimmune disorders. Tacrolimus has been found to reduce episodes of organ rejection over a related treatment, the drug ciclosporin, which binds cyclophilin. Both the FKBP-tacrolimus complex and the cyclosporin-cyclophilin complex inhibit a phosphatase called calcineurin, thus blocking signal transduction in the T-lymphocyte transduction pathway. This therapeutic role is not related to its prolyl isomerase activity. FKBP25 is a nuclear FKBP which non-specifically binds with DNA and has a role in DNA repair.
Use as a biological research tool
FKBP (FKBP1A) does not normally form a dimer but will dimerize in the presence of FK1012, a derivative of the drug tacrolimus (FK506). This has made it a useful tool for chemically induced dimerization applications where it can be used to manipulate protein localization, signalling pathways and protein activation.
Examples
Human genes encoding proteins in this family include:
AIP; AIPL1
FKBPL; FKBP1A; FKBP1B; FKBP2; FKBP3; FKBP4; FKBP5; FKBP6; FKBP7; FKBP8; FKBP9; FKBP10; FKBP11; FKBP14; FKBP15;
Gene with unclear status (may be pseudogene):
FKBP1C
Pseudogenes in humans:
LOC541473; FKBP9L;
See also
Immunophilins
References
External links
Anti-rejection drugs
EC 5.2.1
Protein families | FKBP | [
"Biology"
] | 522 | [
"Protein families",
"Protein classification"
] |
7,130,614 | https://en.wikipedia.org/wiki/Muscular%20layer | The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used.
Structure
It usually has two layers of smooth muscle:
inner and "circular"
outer and "longitudinal"
However, there are some exceptions to this pattern.
In the stomach, there are three layers to the muscular layer: the stomach contains an additional oblique muscle layer just interior to the circular muscle layer.
In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle.
In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal.
In the ureter, the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer.
The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract:
in the pylorus of the stomach, it forms the pyloric sphincter.
in the anal canal, it forms the internal anal sphincter.
In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli.
The thickest muscularis layer is found in the stomach (triple layered), and thus maximum peristalsis occurs in the stomach. The thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs.
Function
The muscularis layer is responsible for the peristaltic movements and segmental contractions in the alimentary canal. The Auerbach's nerve plexus (myenteric nerve plexus) is found between the longitudinal and circular muscle layers; it starts the muscle contractions that initiate peristalsis.
References
External links
— "Muscle Tissue: smooth muscle, muscularis externa"
Membrane biology | Muscular layer | [
"Chemistry"
] | 433 | [
"Membrane biology",
"Molecular biology"
] |
7,130,691 | https://en.wikipedia.org/wiki/Signal%20recognition%20particle%20receptor | Signal recognition particle (SRP) receptor, also called the docking protein, is a dimer composed of 2 different subunits that are associated exclusively with the rough ER in mammalian cells. Its main function is to identify the SRP units. SRP (signal recognition particle) is a molecule that helps the ribosome-mRNA-polypeptide complexes to settle down on the membrane of the endoplasmic reticulum.
The eukaryotic SRP receptor (termed SR) is a heterodimer of SR-alpha (70 kDa; SRPRA) and SR-beta (25 kDa; SRPRB), both of which contain a GTP-binding domain, while the prokaryotic SRP receptor comprises only the monomeric loosely membrane-associated SR-alpha homologue FtsY ().
SRX domain
SR-alpha regulates the targeting of SRP-ribosome-nascent polypeptide complexes to the translocon. SR-alpha binds to the SRP54 subunit of the SRP complex. The SR-beta subunit is a transmembrane GTPase that anchors the SR-alpha subunit (a peripheral membrane GTPase) to the ER membrane. SR-beta interacts with the N-terminal SRX-domain of SR-alpha, which is not present in the bacterial FtsY homologue. SR-beta also functions in recruiting the SRP-nascent polypeptide to the protein-conducting channel.
The SRX family represents eukaryotic homologues of the alpha subunit of the SR receptor. Members of this entry consist of a central six-stranded anti-parallel beta-sheet sandwiched by helix alpha1 on one side and helices alpha2-alpha4 on the other. They interact with the small GTPase SR-beta, forming a complex that matches a class of small G protein-effector complexes, including Rap-Raf, Ras-PI3K(gamma), Ras-RalGDS, and Arl2-PDE(delta). On the C-terminal of SR-alpha and FtsY is the NG domain similar to SRP54.
NG domain
The receptor binds to SPR54/Ffh by the "NG domain", a combination of a 4-helical-bundle "N" domain () and a GTPase "G" domain (), shared by both proteins. The bound structure is a quasi-symmetric heterodimer termed a targeting complex.
Signal recognition particle (SRP)
The signal recognition particle (SRP) is a multimeric protein, which along with its conjugate receptor (SR), is involved in targeting secretory proteins to the rough endoplasmic reticulum (RER) membrane in eukaryotes, or to the plasma membrane in prokaryotes. SRP recognises the signal sequence of the nascent polypeptide on the ribosome, retards its elongation, and docks the SRP-ribosome-polypeptide complex to the RER membrane via the SR receptor. SRP consists of six polypeptides (SRP9, SRP14, SRP19, SRP54, SRP68 and SRP72) and a single 300 nucleotide 7S RNA molecule. The RNA component catalyses the interaction of SRP with its SR receptor. In higher eukaryotes, the SRP complex consists of the Alu domain and the S domain linked by the SRP RNA. The Alu domain consists of a heterodimer of SRP9 and SRP14 bound to the 5' and 3' terminal sequences of SRP RNA. This domain is necessary for retarding the elongation of the nascent polypeptide chain, which gives SRP time to dock the ribosome-polypeptide complex to the RER membrane.
References
Receptors
Protein targeting
Single-pass transmembrane proteins | Signal recognition particle receptor | [
"Chemistry",
"Biology"
] | 829 | [
"Receptors",
"Protein targeting",
"Cellular processes",
"Signal transduction"
] |
7,131,166 | https://en.wikipedia.org/wiki/Operations%2C%20administration%2C%20and%20management | Operations, administration, and management or operations, administration, and maintenance (OA&M or OAM) are the processes, activities, tools, and standards involved with operating, administering, managing and maintaining any system. This commonly applies to telecommunication, computer networks, and computer hardware.
In particular, Ethernet operations, administration and maintenance (EOAM) is the protocol for installing, monitoring and troubleshooting Ethernet metropolitan area network (MANs) and Ethernet WANs. The OAM features covered by this protocol are discovery, link monitoring, remote fault detection and remote loopback.
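The link-monitoring and remote-fault-detection features just mentioned boil down to each end of a link periodically announcing itself and raising an alarm when the peer goes quiet. The toy sketch below illustrates only that idea; the class name, interval, and loss threshold are invented for illustration and are not the 802.3ah or 802.1ag message formats.

```python
# Toy continuity-check monitor: declare a remote fault after several missed hellos.

from dataclasses import dataclass

@dataclass
class ContinuityMonitor:
    interval_s: float = 1.0      # how often the peer is expected to send a hello
    loss_threshold: int = 3      # consecutive missed hellos before declaring a fault
    last_rx: float = 0.0         # timestamp of the last hello received

    def on_hello(self, now: float) -> None:
        self.last_rx = now

    def fault(self, now: float) -> bool:
        return (now - self.last_rx) > self.loss_threshold * self.interval_s

mon = ContinuityMonitor()
mon.on_hello(now=10.0)
print(mon.fault(now=12.5))   # False: still within 3 intervals of the last hello
print(mon.fault(now=14.0))   # True: more than 3 intervals elapsed -> remote fault suspected
```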
Standards
Fault management and performance monitoring (ITU-T Y.1731) - Defines performance monitoring measurements such as frame loss ratio, frame delay and frame delay variation to assist with SLA assurance and capacity planning. For fault management the standard defines continuity checks, loopbacks, link trace, and alarm suppression (AIS, RDI) for effective fault detection, verification, isolation, and notification in carrier networks.
Connectivity fault management (IEEE 802.1ag) - Defines standardized continuity checks, loopbacks and link trace for fault management capabilities in enterprise and carrier networks. This standard also partitions the network into 8 hierarchical administrative domains.
Link layer discovery (IEEE 802.1AB) - Defines discovery for all provider edges (PEs) supporting a common service instance and/or discovery for all edge devices and P routers common to a single network domain.
Ethernet in the First Mile (IEEE 802.3ah) - Defines mechanisms for monitoring and troubleshooting Ethernet access links. Specifically, it defines tools for discovery, remote failure indication, remote and local loopbacks, and status and performance monitoring.
Ethernet protection switching (ITU G.8031) - Brings SONET APS / SDH MSP-like protection switching to Ethernet trunks.
OAMP
OAMP, traditionally OAM&P, stands for operations, administration, maintenance, and provisioning. The addition of 'T' in recent years stands for troubleshooting, and reflects its use in network operations environments. The term is used to describe the collection of disciplines generally, as well as whatever specific software package(s) or functionality a given company uses to track these things.
Though the term, and the concept, originated in the wired telephony world, the discipline (if not the term) has expanded to other spheres in which the same sorts of work are done, including cable television and many aspects of Internet services and network operations. 'Ethernet OAM' is another recent concept in which the terminology is used.
Operations encompass automatic monitoring of the environment, detecting and determining faults, and alerting administrators. Administration typically involves collecting performance statistics and accounting data for the purpose of billing, capacity planning using usage data, and maintaining system reliability. It can also involve maintaining the service databases which are used to determine periodic billing.
Maintenance involves upgrades, fixes, new feature enablement, backing up and restoring data, and monitoring the media health. The major task is Diagnostics and troubleshooting. Provisioning is the setting up of the user accounts, devices, and services.
Although they both target the same set of markets, OAMP covers much more than the five specific areas targeted by FCAPS (see FCAPS for more details; it is a terminology that has been more popular than OAMP in non-telecom environments in the past). In NOC environments, OAMP and OAMPT are increasingly used to describe the problem management life cycle, and especially with the dawn of Carrier-Grade Ethernet, telco terminology is becoming more and more embedded in traditionally IP-oriented settings.
O - Operations
A - Administration
M - Maintenance
P - Provisioning
T - Troubleshooting
Procedures
Operation
Basically, these are the procedures you use during normal network operations.
They are day-to-day organisational procedures: handover, escalation, major issue management, call out, support procedures, regular updates including emails and meetings. In this section group, you will find things like Daily Checklists, On-call and Shift rotas, Call response and ticket opening procedures, Manufacturer documentation like technical specifications and operator handbooks, OOB Procedures
Administration
These are support procedures that are necessary for day-to-day operations - things like common passwords, equipment and tools access, organisational forms and timesheets, meeting minutes and agendas, and customer Service Reports.
This is not necessarily 'network admin', but also 'network operations admin'.
Maintenance
Tasks that if not done will affect service or system operation, but are not necessarily as a result of a failure. Configuration and hardware changes that are a response to system deterioration. These involve scheduling provider maintenance, standard network equipment configuration changes as a result of policy or design, routine equipment checks, hardware changes, and software/firmware upgrades. Maintenance tasks can also involve the removal of administrative privileges as a security policy.
Provisioning
Introducing a new service, creating new circuits and setting up new equipment, installing new hardware. Provisioning processes will normally include 'how to' guides and checklists that need to be strictly adhered to and signed off. They can also involve integration and commissioning process which will involve sign-off to other parts of the business life cycle.
Troubleshooting
Troubleshooting is carried out as a result of a fault or failure, may result in maintenance procedures, or emergency workarounds until such time as a maintenance procedure can be carried out. Troubleshooting procedures will involve knowledge databases, guides, and processes to cover the role of network operations engineers from initial diagnostics to advanced troubleshooting. This stage often involves problem simulation and is the traditional interface to design.
See also
Carrier Ethernet
FCAPS
IEEE 802.1
International Telecommunication Union
Metro Ethernet Forum
NComm
Operations support system
Provider Backbone Transport
References
External links
Ethernet Operations, Administration, and Maintenance from Cisco
Operational Efficiency in ERP and CMMS with integration of AI
"EFM OAM Tutorial" presentation by Kevin Daines, IEEE
Ethernet
System administration
Telephony
Network management | Operations, administration, and management | [
"Technology",
"Engineering"
] | 1,223 | [
"Information systems",
"Computer networks engineering",
"Network management",
"System administration"
] |
7,131,175 | https://en.wikipedia.org/wiki/Climatiiformes | The Climatiiformes is an order of extinct fish belonging to the class Acanthodii. Like most other "spiny sharks", the Climatiiformes had sharp spines. These animals were often fairly small in size and lived from the Late Silurian to the Early Carboniferous period. The type genus is Climatius. The order used to be subdivided into the suborders Climatiida and Diplacanthida, but subsequently Diplacanthida has been elevated to a separate order, the Diplacanthiformes. The Diplacanthiformes take their name from Diplacanthus, first described by Agassiz in 1843. Family Gyracanthidae is sometimes rejected from this order.
References
Prehistoric cartilaginous fish
Prehistoric cartilaginous fish orders
Paraphyletic groups | Climatiiformes | [
"Biology"
] | 176 | [
"Phylogenetics",
"Paraphyletic groups"
] |
7,131,295 | https://en.wikipedia.org/wiki/Knockdown%20texture | Knockdown texture is a drywall finishing style. It is a mottled texture, it has more changes in textures than a simple flat finish, but less changes than orange peel, or popcorn, texture.
Knockdown texture is created by watering down joint compound to a soupy consistency. A trowel is then used to apply the joint compound. The joint compound will begin to form stalactites as it dries. The trowel is then run over the surface of the drywall, knocking off the stalactites and leaving the mottled finish.
A much more common, and faster technique is to apply the texture mud (which is slightly different from joint compound, in that it has less shrinkage upon drying) with a texture machine – a compressor and a texture spray hopper which sprays mud instead of paint. This applies what is referred to as a splatter coat. The use of a compressor allows this to be applied to walls as well as ceilings. When knocking this down, the mud is allowed to dry for a short period, then skimmed with a knockdown knife – a large, usually plastic (to reduce noticeable edges) knife.
Knockdown texture reduces construction costs because it conceals imperfections in the drywall that would otherwise require additional, more expensive sanding and priming work by drywall installers.
Construction | Knockdown texture | [
"Engineering"
] | 273 | [
"Construction"
] |
7,131,583 | https://en.wikipedia.org/wiki/MERCURE | Mercure can also refer to the chain of hotels run by Accor. See Mercure Hotels.
MERCURE is an atmospheric dispersion modeling CFD code developed by Électricité de France (EDF) and distributed by ARIA Technologies, a French company.
MERCURE is a version of the CFD software ESTET, developed by EDF's Laboratoire National d'Hydraulique. Thus, it has directly benefited from the improvements developed for ESTET. When requested, ARIA integrates MERCURE as a module into the ARIA RISK software for use in industrial risk assessments.
Features of the model
MERCURE is particularly well adapted to performing air pollution dispersion modelling on local or urban scales. Some of the model's capabilities and features are:
Pollution source types: Point or line sources, continuous or intermittent.
Pollution plume types: Buoyant or dense gas plumes.
Deposition: The model is capable of simulating the deposition or decay of plume pollutants.
Users of the model
There are many organizations that have used MERCURE. To name a few:
Électricité de France (EDF)
Laboratoire de Mécanique des Fluides et d’Acoustique (LMFA) de l'École Centrale de Lyon, France
Institut de radioprotection et de sûreté nucléaire (IRSN), Fontenay, France
The Italian National Agency for New Technology, Energy and the Environment (ENEA), Bologna, Italy
Queensland University of Technology, Brisbane, Australia
See also
Bibliography of atmospheric dispersion modeling
Atmospheric dispersion modeling
List of atmospheric dispersion models
Further reading
For those who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject, it is suggested that either one of the following books be read:
www.crcpress.com
www.air-dispersion.com
References
External links
ARIA Technologies web site (English version)
EDF website (English version)
Atmospheric dispersion modeling
Électricité de France | MERCURE | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 407 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
7,132,137 | https://en.wikipedia.org/wiki/Matteucci%20Medal | The Matteucci Medal is an Italian award for physicists, named after Carlo Matteucci from Forlì. It was established to award physicists for their fundamental contributions. Under an Italian Royal Decree dated July 10, 1870, the Italian Society of Sciences was authorized to receive a donation from Carlo Matteucci for the establishment of the Prize.
Recipients
1868 Hermann von Helmholtz
1875 Henri Victor Regnault
1876 Lord Kelvin
1877 Gustav Kirchhoff
1878 Gustav Wiedemann
1879 Wilhelm Eduard Weber
1880 Antonio Pacinotti
1881 Emilio Villari
1882 Augusto Righi
1887 Thomas Edison
1888 Heinrich Hertz
1894 John William Strutt, 3rd Baron Rayleigh
1895 Henry Augustus Rowland
1896 Wilhelm Röntgen and Philipp Lenard
1901 Guglielmo Marconi
1903 Albert Abraham Michelson
1904 Marie Curie and Pierre Curie
1905 Henri Poincaré
1906 James Dewar
1907 William Ramsay
1908 Antonio Garbasso
1909 Orso Mario Corbino
1910 Heike Kamerlingh Onnes
1911 Jean Baptiste Perrin
1912 Pieter Zeeman
1913 Ernest Rutherford
1914 Max von Laue
1915 Johannes Stark
1915 William Henry Bragg and Lawrence Bragg
1917 Antonino Lo Surdo
1918 Robert W. Wood
1919 Henry Moseley
1921 Albert Einstein
1923 Niels Bohr
1924 Arnold Sommerfeld
1925 Robert Andrews Millikan
1926 Enrico Fermi
1927 Erwin Schrödinger
1928 C. V. Raman
1929 Werner Heisenberg
1930 Arthur Compton
1931 Franco Rasetti
1932 Frédéric Joliot-Curie and Irène Joliot-Curie
1956 Wolfgang Pauli
1975 Bruno Touschek
1978 Abdus Salam
1979 Luciano Maiani
1980 Giancarlo Wick
1982 Rudolf Peierls
1985 Hendrik Casimir
1987 Pierre-Gilles De Gennes
1988 Lev Okun
1989 Freeman Dyson
1990 Jack Steinberger
1991 Bruno Rossi
1992 Anatole Abragam
1993 John Archibald Wheeler
1994 Claude Cohen-Tannoudji
1995 Tsung Dao Lee
1996 Wolfgang K.H. Panofsky
1998 Oreste Piccioni
2001 Theodor W. Hänsch
2002 Nicola Cabibbo
2003 Manuel Cardona
2004 David Ruelle
2005 John Iliopoulos
2006
2016 Adalberto Giazotto
2017
2018 Gianluigi Fogli
2019 Federico Capasso
2020
2021 Amos Maritan
2022 Jocelyn Bell Burnell
2023 Francesco Di Martini
2024 Helen Quinn
Source:
See also
List of physics awards
External links
Matteucci Medal at the Italian National Academy of Sciences
References
Physics awards
Awards established in 1868
Italian awards
1868 establishments in Italy | Matteucci Medal | [
"Technology"
] | 499 | [
"Science and technology awards",
"Physics awards"
] |
7,132,178 | https://en.wikipedia.org/wiki/Engineer%27s%20Day | Engineer's Day is observed in several countries on various dates of the year.
Country-wise list
See also
UNESCO World Engineering Day for Sustainable Development
References
External links
Engineering awards
Types of secular holidays
January observances
February observances
March observances
April observances
May observances
June observances
July observances
August observances
September observances
October observances
December observances
Holidays and observances by scheduling (nth weekday of the month)
Observances set by the Vikram Samvat calendar
Holidays and observances by scheduling (varies) | Engineer's Day | [
"Technology"
] | 119 | [
"Science and technology awards",
"Engineering awards"
] |
7,132,704 | https://en.wikipedia.org/wiki/Fibre%20multi-object%20spectrograph | Fibre multi-object spectrograph (FMOS) is facility instrument for the Subaru Telescope on Mauna Kea in Hawaii. The instrument consists of a complex fibre-optic positioning system mounted at the prime focus of the telescope. Fibres are then fed to a pair of large spectrographs, each weighing nearly 3000 kg. The instrument will be used to look at the light from up to 400 stars or galaxies simultaneously over a field of view of 30 arcminutes (about the size of the full moon on the sky). The instrument will be used for a number of key programmes, including galaxy formation and evolution and dark energy via a measurement of the rate at which the universe is expanding.
Design, construction, operation
It is currently being built by a consortium of institutes led by Kyoto University and Oxford University with parts also being manufactured by the Rutherford Appleton Laboratory, Durham University and the Anglo-Australian Observatory. The instrument is scheduled for engineering first-light in late 2008.
OH-suppression
The spectrographs use a technique called OH-suppression to increase the sensitivity of the observations: The incoming light from the fibres is dispersed to a relatively high resolution and this spectrum forms an image on a pair of spherical mirrors which have been etched at the positions corresponding to the bright OH-lines. This spectrum is then re-imaged through a second diffraction grating to allow the full spectrum (without the OH lines) to be imaged onto a single infrared detector.
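As a rough numerical illustration of the idea (not the actual FMOS optics or data pipeline; the wavelength range, line list, and line widths below are invented for this sketch), masking the channels where bright OH lines fall recovers the underlying faint-source level:

```python
# Toy OH-suppression: add bright narrow sky lines to a smooth source spectrum,
# then blank out the channels around the known line positions and keep the rest.

import numpy as np

wavelength = np.linspace(0.9, 1.8, 4000)                            # microns (illustrative range)
faint_source = 1.0 + 0.2 * np.sin(8 * wavelength)                   # arbitrary smooth source spectrum
oh_line_centres = np.random.default_rng(0).uniform(0.9, 1.8, 60)    # fake OH line list

sky = np.zeros_like(wavelength)
for c in oh_line_centres:                                           # narrow, bright OH emission lines
    sky += 50.0 * np.exp(-0.5 * ((wavelength - c) / 1e-4) ** 2)

observed = faint_source + sky
mask = np.ones_like(wavelength, dtype=bool)                         # True = keep this channel
for c in oh_line_centres:                                           # "etch away" the OH line positions
    mask &= np.abs(wavelength - c) > 5e-4

print("mean flux before suppression:", observed.mean())
print("mean flux after suppression: ", observed[mask].mean())       # close to the source level
```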
References
FMOS
FMOS Project
Telescope instruments
Spectrographs
Electronic test equipment
Signal processing
Laboratory equipment | Fibre multi-object spectrograph | [
"Physics",
"Chemistry",
"Astronomy",
"Technology",
"Engineering"
] | 317 | [
"Telecommunications engineering",
"Telescope instruments",
"Spectrum (physical sciences)",
"Computer engineering",
"Signal processing",
"Electronic test equipment",
"Measuring instruments",
"Spectrographs",
"Astronomical instruments",
"Spectroscopy"
] |
7,133,120 | https://en.wikipedia.org/wiki/Commercial%20astronaut | A commercial astronaut is a person who has commanded, piloted, or served as an active crew member of a privately-funded spacecraft. This is distinct from an otherwise non-government astronaut, for example Charlie Walker, who flies while representing a non-government corporation but with funding or training or both coming from government sources.
Criteria
The definition of "astronaut" and the criteria for determining who has achieved human spaceflight vary. The defines spaceflight as any flight over of altitude. In the United States, professional, military, and commercial astronauts who travel above an altitude of are eligible to be awarded astronaut wings. Until 2003, professional space travelers were sponsored and trained exclusively by governments, whether by the military or by civilian space agencies. However, with the first sub-orbital flight by the privately funded Scaled Composites Tier One program in 2004, the commercial astronaut category was created. The next commercial program to achieve sub-orbital flight was Virgin Galactic's SpaceShipTwo program in 2018. Criteria for commercial astronaut status in other countries have yet to be made public.
By 2021, with the substantial increase in commercial spaceflight—with the first suborbital passenger flight by both Virgin Galactic's SpaceShipTwo and Blue Origin's New Shepard in July, and with SpaceX's first orbital private spaceflight completed on September 18, 2021—the roles and functions of people going to space are expanding. Criteria for the broader designation "astronaut" have become open to interpretation. Even in the US alone, the "FAA, U.S. military and NASA all have different definitions of what it means to be designated as an 'astronaut' and none of them fit perfectly with the way Blue Origin or Virgin Galactic are doing business." It is even possible that by the FAA commercial astronaut definition, one company's July flight participants may receive FAA commercial astronaut wings while the other will not. SpaceNews reported that "Blue Origin awarded their version of astronaut wings" to the four participants of the first Blue Origin passenger flight but was unclear on whether these included the FAA astronaut designation.
FAA Commercial Astronaut rating
With the advent of private commercial space flight ventures in the U.S., the FAA has been faced with the task of developing a certification process for the pilots of commercial spacecraft. The Commercial Space Launch Act of 1984 established the FAA's Office of Commercial Space Transportation and required companies to obtain a launch license for vehicles, but at the time crewed commercial flight – and the licensing of crewmembers – was not considered. The Commercial Space Launch Amendments Act has led to the issuance of draft guidelines by the FAA in February 2005 for the administration of vehicle and crew certifications. Currently, the FAA has not issued formal regulatory guidance for the issuance of a Commercial Astronaut Certificate, but as an interim measure, has established the practice of awarding "Commercial Astronaut Wings" to commercial pilots who have demonstrated the requisite proficiency. The content of 14 CFR Part 460 implies that an instrument rating and second-class medical certificate issued within the 12 months prior to the proposed qualifying flight will be included as a minimum standard.
The FAA's Commercial Astronaut Wings Program is designed to recognize flight crewmembers who further the FAA's mission to promote the safety of vehicles designed to carry humans. Astronaut Wings are given to flight crew who have demonstrated a safe flight to and return from space on an FAA/AST licensed mission. To be eligible for FAA Commercial Space Astronaut Wings, commercial launch crewmembers must meet the following criteria:
Meet the requirements for flight crew qualifications and training under Title 14 of the Code of Federal Regulations (14 CFR) part 460.
Demonstrated flight beyond 50 statute miles above the surface of the Earth as flight crew on an FAA/AST licensed or permitted launch or reentry vehicle.
Demonstrated activities during flight that were essential to public safety, or contributed to human spaceflight safety.
Astronaut Wings
The emblem for the first set of FAA Commercial Astronaut Wings issued in 2004 has in its center a green globe on a blue background, with the three-prong astronaut symbol superimposed on top. In yellow block text around the globe are the words "Commercial Space Transportation" in all capital letters. In a gold ring outside the blue are the words "Department of Transportation Federal Aviation Administration" in black. Beginning with the wings awarded for flights in 2018, the design has been simplified to be the astronaut symbol, surrounded by the words "Commercial Space Transportation", all in gold on a black background. In December 2021, the FAA reconsidered the Commercial Astronaut Wings program as commercial space travel increased, and decided to end the program in January 2022. Despite this, the FAA will still continue to recognize future commercial astronauts and will maintain a list of commercial astronauts who have flown to an altitude of 50 miles or higher.
List of commercial astronauts
Beginning in January 2022, the FAA started to maintain a list of individuals who have received FAA human spaceflight recognition. As of July 2022, the list contained 45 individuals who qualified for FAA human spaceflight recognition, of whom only 30 had received FAA Commercial Space Astronaut Wings.
See also
List of commercial space stations
List of private spaceflight companies
NewSpace
Pilot certification in the United States
Private spaceflight
Space Adventures
Space colonization
Space tourism
Spaceport
Sub-orbital spaceflight
References
External links
FAA Commercial Human Spaceflight Recognition (includes list of commercial astronauts)
Astronauts
2004 introductions | Commercial astronaut | [
"Biology"
] | 1,094 | [
"Astronauts",
"Space-flown life"
] |
7,133,473 | https://en.wikipedia.org/wiki/Commutation%20matrix | In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn permutation matrix which, for any m × n matrix A, transforms vec(A) into vec(AT):
K(m,n) vec(A) = vec(AT) .
Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:
vec(A) = [A1,1, ..., Am,1, A1,2, ..., Am,2, ..., A1,n, ..., Am,n]T,
where A = [Ai,j]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(AT) is the vector obtained by vectorizing A in row-major order. The cycles and other properties of this permutation have been heavily studied for in-place matrix transposition algorithms.
In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.
Properties
The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is equal to , where is the permutation over for which
The determinant of K(m,n) is .
Replacing A with AT in the definition of the commutation matrix shows that K(n,m) = (K(m,n))T = (K(m,n))−1. Therefore, in the special case of m = n the commutation matrix is an involution and symmetric.
The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,
K(r,m) (A ⊗ B) K(n,q) = B ⊗ A.
This property is often used in developing the higher order statistics of Wishart covariance matrices.
The case of n = q = 1 for the above equation states that for any column vectors v, w of sizes m, r respectively,
K(r,m) (v ⊗ w) = w ⊗ v.
This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.
Two explicit forms for the commutation matrix are as follows: if er,j denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere) then
The commutation matrix may be expressed as the following block matrix:
where the p,q entry of the n × m block-matrix Ki,j is given by
For example,
Code
For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the code below.
Python
import numpy as np

def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    # apply this permutation to the rows (i.e. to each column) of the identity matrix and return the result
    return np.eye(m * n)[w, :]
Alternatively, a version without imports:
# Kronecker delta
def delta(i, j):
    return int(i == j)

def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]
    # apply this permutation to the rows (i.e. to each column) of the identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]
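A brief usage check (not part of the original article) confirms the defining identity K(m,n) vec(A) = vec(AT) on an arbitrary example matrix; it assumes one of the comm_mat definitions above has been executed.
import numpy as np

m, n = 3, 4
A = np.arange(m * n).reshape(m, n)        # an arbitrary 3 × 4 example matrix
K = np.asarray(comm_mat(m, n))            # either comm_mat definition above works here

vec_A = A.flatten(order="F")              # column-major vectorization vec(A)
vec_At = A.T.flatten(order="F")           # vec(A^T)

assert np.array_equal(K @ vec_A, vec_At)  # K(m,n) vec(A) = vec(A^T)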
MATLAB
function P = com_mat(m, n)
    % determine permutation applied by K
    A = reshape(1:m*n, m, n);
    v = reshape(A', 1, []);
    % apply this permutation to the rows (i.e. to each column) of the identity matrix
    P = eye(m*n);
    P = P(v,:);
R
# Sparse matrix version
comm_mat = function(m, n) {
  i = 1:(m * n)
  j = NULL
  for (k in 1:m) {
    j = c(j, m * 0:(n - 1) + k)
  }
  Matrix::sparseMatrix(
    i = i, j = j, x = 1
  )
}
Example
Let denote the following matrix:
has the following column-major and row-major vectorizations (respectively):
The associated commutation matrix is
(where each denotes a zero). As expected, the following holds:
References
Jan R. Magnus and Heinz Neudecker (1988), Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley.
Linear algebra
Matrices
Articles with example Python (programming language) code
Articles with example MATLAB/Octave code | Commutation matrix | [
"Mathematics"
] | 1,047 | [
"Matrices (mathematics)",
"Linear algebra",
"Mathematical objects",
"Algebra"
] |
7,133,648 | https://en.wikipedia.org/wiki/Covert%20conditioning | Covert conditioning is an approach to mental health treatment that utilizes the principles of applied behavior analysis, or cognitive-behavior therapies (CBTs) to help individuals improve their behavior or inner experience. This method relies on the individual's ability to use imagery for purposes such as mental rehearsal. In some populations, it has been found that an imaginary reward can be as effective as a real one. The effectiveness of covert conditioning is believed to depend on the careful application of behavioral treatment principles, including a comprehensive behavioral analysis.
Some clinicians include the mind's ability to spontaneously generate imagery that can provide intuitive solutions or even reprocessing that improves people's typical reactions to situations or inner material. However, this goes beyond the behavioristic principles on which covert conditioning is based.
Therapies and self-help methods have aspects of covert conditioning. This can be seen in focusing, some neuro-linguistic programming methods such as future pacing, and various visualization or imaginal processes used in behavior therapies, such as CBTs or clinical behavior analysis.
Therapeutic interventions
"Covert desensitization" associates an aversive stimulus with a behavior that the client wishes to reduce or eliminate. This is achieved by imagining the target behavior followed by imagining an aversive consequence. "Covert extinction" attempts to reduce a behavior by imagining the target behavior while imagining that the reinforcer does not occur. "Covert response cost" seeks to reduce a behavior by associating the loss of a reinforcer with the target behavior that is to be decreased.
"Contact desensitization" intends to increase a behavior by imagining a reinforcing experience in connection with modeling the correct behavior. "Covert negative reinforcement" attempts to increase a behavior by connecting the termination of an aversive stimulus with increased production of a target behavior.
"Dialectical behavior therapy" (DBT) and "Acceptance and commitment therapy" (ACT) uses positive reinforcement and covert conditioning through mindfulness.
Effectiveness
Previous research in the early 1990s has shown covert conditioning to be effective with sex offenders as part of a behavior modification treatment package. Clinical studies continue to find it effective with some generalization from office to natural environment with this population.
See also
Covert hypnosis
Notes
References
Cautela, Joseph R and Kearney, Albert J. (1990) "Behavior analysis, cognitive therapy, and covert conditioning", Journal of behavior therapy and experimental psychiatry, 21 (2), pp. 83–90.
Behaviorism | Covert conditioning | [
"Biology"
] | 502 | [
"Behavior",
"Behaviorism"
] |
7,133,995 | https://en.wikipedia.org/wiki/Stefan%20flow | The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on the evaporation rate and other factors such as droplet size and composition. In addition to this mean flow, a concentration gradient must exist in the neighborhood of the droplet (assuming an isolated droplet) since the flowing medium is mostly air far from the droplet and mostly vapor near the droplet. This gradient causes Fickian diffusion that transports the vapor away from the droplet and the air towards it, with respect to the mean flow. Thus, in the frame of the droplet, the flow of vapor away from the droplet is faster than for the pure Stefan flow, since diffusion is working in the same direction as the mean flow. However, the flow of air away from the droplet is slower than the pure Stefan flow, since diffusion is working to transport air back towards the droplet against the Stefan flow. Such flow from evaporating droplets is important in understanding the combustion of liquid fuels such as diesel in internal combustion engines, and in the design of such engines. The Stefan flow from evaporating droplets and subliming ice particles also plays prominently in meteorology as it influences the formation and dispersion of clouds and precipitation.
References
C. T. Bowman, Course Notes on Combustion, 2004, Stanford University course reference material for ME 371: Fundamentals of Combustion.
C. T. Bowman, Course Notes on Combustion, 2005, Stanford University course reference material for ME 372: Combustion Applications.
Transport phenomena
Flow regimes | Stefan flow | [
"Physics",
"Chemistry",
"Engineering"
] | 694 | [
"Transport phenomena",
"Physical phenomena",
"Chemical engineering",
"Flow regimes",
"Fluid dynamics"
] |
7,134,343 | https://en.wikipedia.org/wiki/Owney%20%28dog%29 | Owney (ca. 1887 – June 11, 1897) was a terrier mix adopted in the United States as a postal mascot by the Albany, New York, post office about 1888. The Albany mail professionals recommended the dog to their Railway Mail Service colleagues, and he became a nationwide mascot for nine years (1888–1897). He traveled over 140,000 miles throughout the 48 contiguous United States and around the world as a mascot of the Railway Post Office and the United States Postal Service. He was the subject of commemorative activities, including a 2011 U.S. postage stamp.
Story
Unofficial mascot
Owney belonged to a clerk at the Albany post office who would often come with him to work. Owney seemed to love the smell of the mail bags and would often sleep on them. The clerk quit the Albany post office but knew that Owney was happier there with the mail bags.
Owney continued to sleep on the bags and would ride on trains wherever they were taken. He was considered to be good luck by postal railway clerks, since no train he rode on was ever in a wreck. He was a welcome addition in any railway post office; he was a faithful guardian of railway mail and the bags holding it, and would not allow anyone other than mail clerks to touch the bags.
This was an important duty and Owney was well-situated for it, as the Albany train station was a key division point on the New York Central railroad system, one of the two largest railroads in the U.S. at that time. Mail trains from Albany rolled eastward to Boston, south to New York City, and westward to Buffalo, Cleveland, Toledo, Chicago, and points further west. As a contemporary book recounted: "The terrier 'Owney' travels from one end of the country to the other in the postal cars, tagged through, petted, talked to, looked out for, as a brother, almost. But sometimes, no matter what the attention, he suddenly departs for the south, the east, or the west, and is not seen again for months." In 1893 he was feared dead after having disappeared, but it turned out he was involved in an accident in Canada.
As Owney's trips grew longer, the postal clerks at Albany became concerned for his safety. To ensure that he could be returned if he became lost, they bought him a dog collar with a metal tag that read: "Owney, Post Office, Albany, New York". Other post offices would attach tags of their own to his collar as he visited them. The collar and tags made the mixed-breed terrier the unofficial mascot of the U.S. Railway Mail Service, and as shown by the 2011 postage stamp issued in his honor, his identifications became an essential element of his identity.
Owney received tags everywhere he went, and as he moved they jingled like sleigh bells. As the tags accumulated, he was given a jacket to hold them so that their weight would not injure his neck or shoulders. Once the tags became too heavy for Owney to carry even with the help of the jacket, clerks adding tags would remove others and forward them to Albany or Washington D.C. for safekeeping. One source suggests that 1,017 medals and tokens were bestowed upon the mascot, but the exact number is unknown. Some of these tags did not survive; the National Postal Museum currently has 372 Owney tags in its collections. Other Owney tokens, trinkets, and medals are also in the NPM collection and are displayed there.
International mail
One of Owney's more famous trips was to Montreal, Quebec, Canada. The city postmaster kept him in a kennel, incurring a total expense of $2.50 for his care and feeding, and sent a request to Albany for reimbursement. Once the money had been collected, Owney was sent home.
The Universal Postal Union was created by treaty in 1874 to standardize the shipping and handling of international mail; adherence to this pact by an increasing number of countries around what was then called the "civilized world" made it possible to extend Owney's horizons a bit. In 1895, the terrier enjoyed an around-the-world trip, riding with mail bags aboard trains and steamships. Starting from Tacoma, Washington, on August 19, he traveled for four months throughout Asia and across Europe, before returning to New York City on December 23 and from thence to Albany. Upon his return during Christmas week, the Los Angeles Times reported that he visited Asia, North Africa, and the Middle East. Another report claimed the Emperor of Japan awarded the dog two passports and several medals bearing the Japanese coat of arms. Owney's triumphant return to American shores was covered by newspapers nationwide. Owney became world famous after the trip, even though he broke no speed records in doing it.
Death and honors
As Owney aged, Post Office management came to believe that his traveling days were over. Mail clerk J. M. Elben, of St. Louis, agreed to take him in, and the influential Chicago manager of the Railway Mail Service, using insulting language to refer to the "mongrel cur", asked his employees not to allow him to ride on future mail trains. Owney had by this time traveled more than 140,000 miles in his lifetime.
The exact details of the incident which led to Owney's death are unclear. Newspapers around the country carried the story of Owney's death. They reported that Owney had been ill and had become aggressive in his old age. In June 1897, after allegedly attacking a postal clerk and a U.S. Marshal in Toledo, Ohio, Owney was shot and killed on the orders of the local postmaster. The Chicago Tribune termed it "an execution". The contemporary accounts suggest that a postal clerk in Toledo chained Owney to a post in the corner of a basement at a post office in Toledo, which was not his normal treatment. That clerk then called in a reporter for the local paper to get a story. Owney may not have been used to that treatment and that may have contributed to his aggression. Whatever the reason, it is not disputed that Owney was put down in Toledo on 11 June 1897.
Owney's death made public that a gap existed between the workplace attitudes of U.S. postal clerks and their management, with the deceased dog serving as a focus of this gap. The 1890s were a foundational decade for the new discipline of scientific management, with consultants like Frederick Winslow Taylor seeking to help managers reduce what they saw as industrial inefficiencies by examining workers' "wasted time" and "slacking". Postal clerks used Owney's death, and the expressions of sadness contained in press obituaries in honor of the dog, to make a statement: "Postal clerks refused to bury their beloved mascot. Clerks across the country asked that the dog receive the honor they considered he was due by being preserved and presented to the Post Office Department's headquarters." Owney's remains were preserved and sent for taxidermy. In 1904, Owney's effigy was displayed by the Postal Service at the St. Louis World's Fair. A commemorative silver spoon was commissioned by Cleveland, Ohio postal workers and fashioned by "Webb C. Ball Co. Cleveland.O."
Owney is the subject of an exhibit at the Smithsonian's National Postal Museum. He was sent there in 1911, and has been called one of the museum's "most interesting" artifacts. His remains deteriorated over the intervening century, and were (along with associated artifacts) given an extensive makeover in 2011. One of the Smithsonian's employees deemed the makeover a success, and called its culmination "the big reveal".
On July 27, 2011, the United States Postal Service issued a forever stamp honoring Owney. Artist Bill Bond said he wanted to render the dog "in a spirited and lively" presentation, and that he wound up working from the mounted remains, as numerous trips to dog parks left him uninspired. Owney was also honored locally at the Albany, New York post office. The stamp was also central to an augmented reality app for Windows, Apple iPhone, iPad 2 and iPod Touch.
Like his contemporary Australian counterpart Bob the Railway Dog, active from 1881 to 1894, he was the subject of poetry. One was from a clerk in Detroit:
"Owney is a tramp, as you can plainly see.
Only treat him kindly, and take him 'long wid ye."
Another was penned by a clerk in Minnesota:
"On'y one Owney, and this is he;
the dog is aloney, so let him be."
Owney has been the main character in five hardcover books, and one e-book published by the National Postal Museum (of the Smithsonian Institution) in 2012 titled, Owney: Tales from the Rails, written by Jerry Rees with songs by Stephen Michael Schwartz and illustrations by Fred Cline. The book is narrated and songs are performed by Trace Adkins.
See also
Bob the Railway Dog
List of individual dogs
Sergeant Stubby, a Boston bull terrier, the most decorated war dog of World War I and the only dog to be nominated for rank and then promoted to sergeant through combat. Among other exploits, he is said to have captured a German spy. Also on display at the Smithsonian. He was also a mascot at Georgetown University.
Station Jim – a popular and successful collector for the Widows' and Orphans' fund of the Great Western Railway.
Bibliography
Footnotes
Notes
Sources
Free children's E-book narrated and sung by Trace Adkins.
External links
Owney images from Smithsonian Institution
Free children's ebook narrated and sung by Trace Adkins
1887 animal births
1897 animal deaths
American mascots
Collection of the Smithsonian Institution
Dog mascots
Real-life animal mascots
Dog monuments
Missing or escaped animals
Postal history
Postal services
Postal systems
Rail transportation in the United States
United States Postal Service
Year of birth uncertain
Individual dogs in the United States
Individual taxidermy exhibits | Owney (dog) | [
"Technology"
] | 2,059 | [
"Transport systems",
"Postal systems"
] |
5,462,075 | https://en.wikipedia.org/wiki/Decision%20tree%20pruning | Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting.
One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information.
Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Techniques
Pruning processes can be divided into two types (pre- and post-pruning).
Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g. maximum tree depth or information gain(Attr) > minGain). Pre-pruning methods are considered to be more efficient because they do not induce an entire set, but rather trees remain small from the start. Prepruning methods share a common problem, the horizon effect, which is to be understood as the undesired premature termination of the induction by the stopping criterion.
Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy of unseen objects. It may be the case that the accuracy of the assignment on the training set deteriorates, but the accuracy of the classification properties of the tree increases overall.
The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up).
Bottom-up pruning
These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method.
These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), and Minimum Error Pruning (MEP).
Top-down pruning
In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which brings quite good results with unseen items.
Pruning algorithms
Reduced error pruning
One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected then the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
Cost complexity pruning
Cost complexity pruning generates a series of trees T0, ..., Tm where T0 is the initial tree and Tm is the root alone. At step i, the tree Ti is created by removing a subtree from tree Ti−1 and replacing it with a leaf node with value chosen as in the tree building algorithm. The subtree that is removed is chosen as follows:
Define the error rate of tree T over data set S as err(T, S).
The subtree t that minimizes (err(prune(Ti−1, t), S) − err(Ti−1, S)) / (|leaves(Ti−1)| − |leaves(prune(Ti−1, t))|) is chosen for removal.
The function prune(T, t) defines the tree obtained by pruning the subtree t from the tree T. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation.
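As an illustrative sketch (not from the original text), scikit-learn exposes a closely related procedure as minimal cost-complexity pruning: cost_complexity_pruning_path returns the effective alphas corresponding to the series of pruned trees, and a final tree can be picked by held-out accuracy. The dataset and split below are arbitrary placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Grow a full tree, then compute the sequence of effective alphas for pruning.
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

# Pick the alpha (and hence the pruned tree) that scores best on held-out data.
best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = pruned.score(X_val, y_val)
    if score > best_score:
        best_alpha, best_score = alpha, score
print(best_alpha, best_score)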
Examples
Pruning could be applied in a compression scheme of a learning algorithm to remove the redundant details without compromising the model's performances. In neural networks, pruning removes entire neurons or layers of neurons.
See also
Alpha–beta pruning
Artificial neural network
Null-move heuristic
Pruning (artificial neural network)
References
Further reading
MDL based decision tree pruning
Decision tree pruning using backpropagation neural networks
External links
Fast, Bottom-Up Decision Tree Pruning Algorithm
Introduction to Decision tree pruning
Decision trees
Machine learning | Decision tree pruning | [
"Engineering"
] | 1,025 | [
"Artificial intelligence engineering",
"Machine learning"
] |
5,463,497 | https://en.wikipedia.org/wiki/Tsung | Tsung (formerly known as idx-Tsunami) is a software load testing tool written in the Erlang language and distributed under the GPL license. It can currently stress test HTTP, WebDAV, LDAP, MySQL, PostgreSQL, SOAP and XMPP servers. Tsung can simulate hundreds of simultaneous users on a single system. It can also function in a clustered environment.
Features
Features include:
Several IP addresses can be used on a single machine using the underlying OS's IP Aliasing.
OS monitoring (CPU, memory, and network traffic) using SNMP, munin-node agents or Erlang agents on remote servers.
Different types of users can be simulated.
Dynamic sessions can be described in XML (to retrieve, at runtime, an ID from the server output and use it later in the session).
Simulated user thinktimes and the arrival rate can be randomized via probability distribution.
HTML reports can be generated during the load to view response time measurements, server CPU, and other statistics.
References
External links
Load Testing AWS Kinesis
Tsung Information Provided By Process One
Performance Measurement & Applications Benchmarking With Erlang. EUC05
Benchmarks (computing)
Erlang (programming language)
Load testing tools | Tsung | [
"Technology"
] | 260 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
5,463,517 | https://en.wikipedia.org/wiki/Shadow%20%28OS/2%29 | In the graphical Workplace Shell (WPS) of the OS/2 operating system, a shadow is an object that represents another object.
A shadow is a stand-in for any other object on the desktop, such as a document, an application, a folder, a hard disk, a network share or removable medium, or a printer. A target object can have an arbitrary number of shadows. When double-clicked, the desktop acts the same way as if the original object had been double-clicked. The shadow's context menu is the same as the target object's context menu, with the addition of an "Original" sub-menu, that allows the location of, and explicit operation upon, the original object.
A shadow is a dynamic reference to an object. The original may be moved to another place in the file system, without breaking the link. The WPS updates shadows of objects whenever the original target objects are renamed or moved. To do this, it requests notification from the operating system of all file rename operations. (Thus if a target filesystem object is renamed when the WPS is not running, the link between the shadow and the target object is broken.)
Similarities to and differences from other mechanisms
Shadows are similar in operation to aliases in Mac OS, although there are some differences:
Shadows in the WPS are not filesystem objects, as aliases are. They are derived from the WPAbstract class, and thus their backing storage is the user INI file, not a file in the file system. Thus shadows are invisible to applications that do not use the WPS API.
The WPS has no mechanism for re-connecting shadows when the link between them and the target object has been broken. (Although where the link has been broken because target objects are temporarily inaccessible, restarting the WPS after the target becomes accessible once more often restores the link.)
Shadows are different from symbolic links and shortcuts because they are not filesystem objects, and because shadows are dynamically updated as target objects are moved.
Shadows are different from hard links because unlike hard links they can cross volume boundaries and because their names are always the same as those of their target objects.
Distinguishing marks
On (and within nested folders on) the WPS desktop, shadows' "icon titles" can be set to one's preferred font color, independently of the preferred font-color assigned to other non-shadow WPS objects, although they share the font actually selected for that text.
Like the icons for all other 'open' objects on the WPS Desktop, whether for folders or applications, Shadows' icons become diagonally hatched on 'opening' and remain in that state until closed/exited respectively.
Managing shadows
There are several ways to create a shadow. One way is to select the target object and choose "Create Shadow" from its context menu. The desktop then prompts with a dialogue box allowing the user to specify where the shadow should be created. Another way is to employ drag-and-drop to create shadows, holding down the shift and control modifier keys whilst dragging.
The dialog box opens initially with a view of currently opened folders on the "Opened" Tab (page) of the dialog, first of which is the current Desktop folder, allowing direct selection of the destination. There are a further four tabs for "Related", "Desktop", "Drives" and "Path", this latter allowing textual path specification including drive (volume), whereas the other three options display an expandable hierarchical tree of folders to select from.
References
OS/2
Legacy systems | Shadow (OS/2) | [
"Technology"
] | 742 | [
"Computing platforms",
"Computer systems",
"OS/2",
"Legacy systems",
"History of computing"
] |
5,463,599 | https://en.wikipedia.org/wiki/Acrodynia | Acrodynia is a medical condition which occurs due to mercury poisoning. The condition of pain and dusky pink discoloration in the hands and feet is due to exposure or ingesting of mercury. It was known as pink disease (due to these symptoms) before it was accepted that it was just mercury poisoning.
The word acrodynia is derived from the Greek akros, which means end or extremity, and odynē, which means pain. As such, it might be (erroneously) used to indicate that a patient has pain in the hands or feet. The condition is known by various other names including hydrargyria, mercurialism, erythredema, erythredema polyneuropathy, Bilderbeck's, Selter's, Swift's and Swift-Feer disease.
Symptoms and signs
Besides peripheral neuropathy (presenting as paresthesia or itching, burning or pain) and discoloration, swelling (edema) and desquamation may occur. Since mercury blocks the degradation pathway of catecholamines, epinephrine excess causes profuse sweating (diaphoresis), tachycardia, salivation and elevated blood pressure. Mercury is suggested to inactivate S-adenosyl-methionine, which is necessary for catecholamine catabolism by catechol-O-methyltransferase. Affected children may show red cheeks and nose, red (erythematous) lips, loss of hair, teeth, and nails, transient rashes, hypotonia and photophobia. Other symptoms may include kidney dysfunction (e.g. Fanconi syndrome) or neuropsychiatric symptoms (emotional lability, memory impairment, insomnia).
Thus, the clinical presentation may resemble pheochromocytoma or Kawasaki disease.
There is some evidence that the same mercury poisoning may predispose to Young's syndrome (men with bronchiectasis and low sperm count).
Causes
Mercury compounds like calomel were historically used for various medical purposes: as laxatives, diuretics, antiseptics or antimicrobial drugs for syphilis, typhus and yellow fever.
Teething powders were a widespread source of mercury poisoning until the recognition of mercury toxicity in the 1940s.
However, mercury poisoning and acrodynia still exist today. Modern sources of mercury intoxication include broken thermometers.
Treatment
Removal of the inciting agent is the goal of treatment. Correcting fluid and electrolyte losses and rectifying any nutritional imbalances (vitamin-rich diets, vitamin-B complex) are of utmost importance in the treatment of the disease.
The chelating agent meso-2,3-dimercaptosuccinic acid has been shown to be the preferred treatment modality. It can almost completely prevent methylmercury uptake by erythrocytes and hepatocytes. In the past, dimercaprol (British antilewisite; 2,3-dimercapto-1-propanol) and D-penicillamine were the most popular treatment modalities. Disodium edetate (Versene) was also used. Neither disodium edetate nor British antilewisite has proven reliable. British antilewisite has now been shown to increase CNS mercury levels and exacerbate toxicity. N-acetyl-penicillamine has been successfully given to patients with mercury-induced neuropathies and chronic toxicity, although it is not approved for such uses. It has a less favorable adverse effect profile than meso-2,3-dimercaptosuccinic acid.
Hemodialysis with and without the addition of L-cysteine as a chelating agent has been used in some patients experiencing acute kidney injury from mercury toxicity. Peritoneal dialysis and plasma exchange also may be of benefit.
Tolazoline (Priscoline) has been shown to offer symptomatic relief from sympathetic overactivity. Antibiotics are necessary when massive hyperhidrosis, which may rapidly lead to miliaria rubra, is present. This can easily progress to bacterial secondary infection with a tendency for ulcerating pyoderma.
References
Occupational diseases
Toxicology
Pediatrics | Acrodynia | [
"Environmental_science"
] | 896 | [
"Toxicology"
] |
5,463,779 | https://en.wikipedia.org/wiki/11P/Tempel%E2%80%93Swift%E2%80%93LINEAR | 11P/Tempel–Swift–LINEAR is a periodic Jupiter-family comet in the Solar System.
In 1869 perihelion (closest approach to the Sun) was 1.063 AU. Ernst Wilhelm Leberecht Tempel (Marseille) originally discovered the comet on November 27, 1869, it was later observed by Lewis Swift (Warner Observatory) on October 11, 1880, and realised to be the same comet.
After 1908 the comet became an unobservable lost comet, but on December 7, 2001, an object was found by the Lincoln Near-Earth Asteroid Research (LINEAR) program, and confirmed by previous images from September 10 and October 17 as being the same comet. The comet was not observed during the 2008 unfavorable apparition because the perihelion passage occurred when the comet was on the far side of the Sun. The comet was observed during the 2014 and 2020 apparitions.
The comet will next come to perihelion on 9 November 2026 and then, two days later on the 11th, make a close approach to Earth.
References
External links
11P at Kronk's Cometography
Periodic comets
0011
011P
18691127
Discoveries by Wilhelm Tempel | 11P/Tempel–Swift–LINEAR | [
"Astronomy"
] | 260 | [
"Astronomy stubs",
"Comet stubs"
] |
5,463,848 | https://en.wikipedia.org/wiki/Dynamic%20program%20analysis | Dynamic program analysis is the act of analyzing software that involves executing a program as opposed to static program analysis, which does not execute it.
Analysis can focus on different aspects of the software including but not limited to: behavior, test coverage, performance and security.
To be effective, the target program must be executed with sufficient test inputs to address the ranges of possible inputs and outputs. Software testing measures, such as code coverage, and tools such as mutation testing, are used to identify where testing is inadequate.
Types
Functional testing
Functional testing includes relatively common programming techniques such as unit testing, integration testing and system testing.
Code coverage
Computing the code coverage of a test identifies code that is not tested, i.e. not covered by a test.
Although this analysis identifies code that is not tested, it does not determine whether tested code is adequately tested. Code can be executed even if the tests do not actually verify correct behavior.
Gcov is the GNU source code coverage program.
VB Watch injects dynamic analysis code into Visual Basic programs to monitor code coverage, call stack, execution trace, instantiated objects and variables.
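As a minimal illustration of how dynamic coverage measurement works (a toy sketch, not one of the tools above), Python's standard-library hook sys.settrace can record which source lines actually execute for a given input:
import sys

def covered_lines(func, *args):
    """Run func(*args) and return the set of (filename, line number) pairs executed."""
    hits = set()

    def tracer(frame, event, arg):
        if event == "line":
            hits.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def absolute(x):
    if x < 0:
        return -x
    return x

# Only one branch is exercised, so the other branch's line never appears in the report.
print(len(covered_lines(absolute, 5)))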
Dynamic testing
Dynamic testing involves executing a program on a set of test cases.
Memory error detection
AddressSanitizer: Memory error detection for Linux, macOS, Windows, and more. Part of LLVM.
BoundsChecker: Memory error detection for Windows based applications. Part of Micro Focus DevPartner.
Dmalloc: Library for checking memory allocation and leaks. Software must be recompiled, and all files must include the special C header file dmalloc.h.
Intel Inspector: Dynamic memory error debugger for C, C++, and Fortran applications that run on Windows and Linux.
Purify: Mainly memory corruption detection and memory leak detection.
Valgrind: Runs programs on a virtual processor and can detect memory errors (e.g., misuse of malloc and free) and race conditions in multithread programs.
Fuzzing
Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part). Gray-box fuzzers use code coverage to guide input generation.
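A toy fuzzer in this spirit (a sketch; the target function and the distinction between expected and unexpected exceptions are assumptions for illustration) feeds many random byte strings to the code under test and reports inputs that raise unexpected exceptions:
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy target: first byte is a length, followed by that many payload bytes."""
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

def fuzz(target, trials=10_000, max_len=8):
    random.seed(0)
    failures = []
    for _ in range(trials):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass                      # expected, documented error
        except Exception as exc:      # anything else is a finding
            failures.append((data, exc))
    return failures

print(fuzz(parse_length_prefixed)[:3])   # e.g. IndexError on the empty input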
Dynamic symbolic execution
Dynamic symbolic execution (also known as DSE or concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using a constraint solver (generally, an SMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing code coverage of the test suite. DSE can be considered a type of fuzzing ("white-box" fuzzing).
Dynamic data-flow analysis
Dynamic data-flow analysis tracks the flow of information from sources to sinks. Forms of dynamic data-flow analysis include dynamic taint analysis and even dynamic symbolic execution.
Invariant inference
Daikon is an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions.
Security analysis
Dynamic analysis can be used to detect security problems.
IBM Rational AppScan is a suite of application security solutions targeted for different stages of the development lifecycle. The suite includes two main dynamic analysis products: IBM Rational AppScan Standard Edition, and IBM Rational AppScan Enterprise Edition. In addition, the suite includes IBM Rational AppScan Source Edition—a static analysis tool.
Concurrency errors
Parasoft Jtest uses runtime error detection to expose defects such as race conditions, exceptions, resource and memory leaks, and security attack vulnerabilities.
Intel Inspector performs run-time threading and memory error analysis in Windows.
Parasoft Insure++ is a runtime memory analysis and error detection tool. Its Inuse component provides a graphical view of memory allocations over time, with specific visibility of overall heap usage, block allocations, possible outstanding leaks, etc.
Google's Thread Sanitizer is a data race detection tool. It instruments LLVM IR to capture racy memory accesses.
Program slicing
For a given subset of a program’s behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a “slice” and is a faithful representation of the original program within the domain of the specified behavior subset.
Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors.
Performance analysis
Most performance analysis tools use dynamic program analysis techniques.
Techniques
Most dynamic analysis involves instrumentation or transformation.
Since instrumentation can affect runtime performance, interpretation of test results must account for this to avoid misidentifying a performance problem.
Examples
DynInst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. Dyninst does not require source code or recompilation in general, however, non-stripped executables and executables with debugging symbols are easier to instrument.
Iroh.js is a runtime code analysis library for JavaScript. It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns and allows the interception and manipulation of the program's execution behavior.
See also
Abstract interpretation
Daikon
Dynamic load testing
Profiling (computer programming)
Runtime verification
Program analysis (computer science)
Static code analysis
Time Partition Testing
References
Program analysis
Software testing | Dynamic program analysis | [
"Engineering"
] | 1,163 | [
"Software engineering",
"Software testing"
] |
5,463,935 | https://en.wikipedia.org/wiki/Sycamore%20Canyon%20Test%20Facility | Sycamore Canyon Test Facility is a rocket and weapons test site located east of MCAS Miramar in northern San Diego, in Scripps Ranch. A number of weapons contractors have had facilities at the site including Lockheed-Martin, Hughes Aircraft and General Dynamics. The engines for the Atlas missile and Centaur rocket stage were tested at the site. The entire measuring equipment for the test site has been built by Baldwin-Lima-Hamilton.
References
External links
San Diego’s Secret Missile-Testing Sites
Buildings and structures in San Diego County, California
Rocketry
General Dynamics | Sycamore Canyon Test Facility | [
"Astronomy",
"Engineering"
] | 115 | [
"Rocketry",
"Rocketry stubs",
"Astronomy stubs",
"Aerospace engineering"
] |
5,463,978 | https://en.wikipedia.org/wiki/Topological%20entropy | In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.
Definition
A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f : X → X. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent.
Definition of Adler, Konheim, and McAndrew
Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let C ∨ D be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers.
For any continuous map f: X → X, the following limit exists:
H(f, C) = lim n→∞ (1/n) H(C ∨ f^−1C ∨ ... ∨ f^−(n−1)C).
Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f, C) over all possible finite covers C of X.
Interpretation
The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci . Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. H(C ∨ f^−1C ∨ ... ∨ f^−(n−1)C) then represents the logarithm of the minimal number of "words" of length n needed to encode the points of X according to the behavior of their first n − 1 iterates under f, or, put differently, the total number of "scenarios" of the behavior of these iterates, as "seen" by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f.
Definition of Bowen and Dinaburg
This definition uses a metric on X (actually, a uniform structure would suffice). This is a narrower definition than that of Adler, Konheim, and McAndrew, as it requires the additional metric structure on the topological space (but is independent of the choice of metrics generating the given topology). However, in practice, the Bowen-Dinaburg topological entropy is usually much easier to calculate.
Let (X, d) be a compact metric space and f: X → X be a continuous map. For each natural number n, a new metric dn is defined on X by the formula
dn(x, y) = max{ d(f^i(x), f^i(y)) : 0 ≤ i < n }.
Given any ε > 0 and n ≥ 1, two points of X are ε-close with respect to this metric if their first n iterates are ε-close. This metric allows one to distinguish in a neighborhood of an orbit the points that move away from each other during the iteration from the points that travel together. A subset E of X is said to be (n, ε)-separated if each pair of distinct points of E is at least ε apart in the metric dn.
Denote by N(n, ε) the maximum cardinality of an (n, ε)-separated set. The topological entropy of the map f is defined by
h(f) = lim ε→0 lim sup n→∞ (1/n) log N(n, ε).
Interpretation
Since X is compact, N(n, ε) is finite and represents the number of distinguishable orbit segments of length n, assuming that we cannot distinguish points within ε of one another. A straightforward argument shows that the limit defining h(f) always exists in the extended real line (but could be infinite). This limit may be interpreted as the measure of the average exponential growth of the number of distinguishable orbit segments. In this sense, it measures complexity of the topological dynamical system (X, f). Rufus Bowen extended this definition of topological entropy in a way which permits X to be non-compact under the assumption that the map f is uniformly continuous.
Properties
Topological entropy is an invariant of topological dynamical systems, meaning that it is preserved by topological conjugacy.
Let f: X → X be an expansive homeomorphism of a compact metric space X and let C be a topological generator. Then the topological entropy of f relative to C is equal to the topological entropy of f, i.e. h(f) = H(f, C).
Let f be a continuous transformation of a compact metric space X, let hμ(f) be the measure-theoretic entropy of f with respect to an f-invariant Borel probability measure μ, and let M(X, f) be the set of all f-invariant Borel probability measures on X. Then the variational principle for entropy states that
h(f) = sup{ hμ(f) : μ ∈ M(X, f) }.
In general the maximum of the quantities hμ(f) over the set M(X, f) is not attained, but if additionally the entropy map μ ↦ hμ(f) is upper semicontinuous, then a measure of maximal entropy (meaning a measure μ in M(X, f) with hμ(f) = h(f)) exists.
If f has a unique measure of maximal entropy μ, then f is ergodic with respect to μ.
Examples
Let by denote the full two-sided k-shift on symbols . Let denote the partition of into cylinders of length 1. Then is a partition of for all and the number of sets is respectively. The partitions are open covers and is a topological generator. Hence
. The measure-theoretic entropy of the Bernoulli -measure is also . Hence it is a measure of maximal entropy. Further on it can be shown that no other measures of maximal entropy exist.
Let A be an irreducible matrix with entries in {0, 1} and let ΣA be the corresponding subshift of finite type. Then the topological entropy of the shift on ΣA equals log λA, where λA is the largest positive eigenvalue of A.
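For the subshift-of-finite-type example, the entropy can be computed numerically as the logarithm of the spectral radius of the transition matrix. The sketch below is illustrative (not part of the original article) and uses an arbitrary 0–1 matrix:
import numpy as np

def sft_entropy(A):
    """Topological entropy of the subshift of finite type defined by 0-1 matrix A:
    the logarithm of the largest positive (Perron) eigenvalue."""
    eigenvalues = np.linalg.eigvals(np.asarray(A, dtype=float))
    return float(np.log(max(abs(eigenvalues))))

# Golden-mean shift (no two consecutive 1s): entropy is log of the golden ratio.
A = [[1, 1],
     [1, 0]]
print(sft_entropy(A))          # ~0.4812, i.e. log((1 + sqrt(5)) / 2)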
Notes
See also
Milnor–Thurston kneading theory
For the measure of correlations in systems with topological order see Topological entanglement entropy
Mean dimension
References
Roy Adler, Tomasz Downarowicz, Michał Misiurewicz, Topological entropy at Scholarpedia
External links
http://www.scholarpedia.org/article/Topological_entropy
Entropy and information
Ergodic theory
Topological dynamics | Topological entropy | [
"Physics",
"Mathematics"
] | 1,318 | [
"Physical quantities",
"Ergodic theory",
"Entropy and information",
"Entropy",
"Topology",
"Topological dynamics",
"Dynamical systems"
] |
5,463,983 | https://en.wikipedia.org/wiki/Ordinal%20date | An ordinal date is a calendar date typically consisting of a year and an ordinal number, ranging between 1 and 366 (starting on January 1), representing the multiples of a day, called day of the year or ordinal day number (also known as ordinal day or day number). The two parts of the date can be formatted as "YYYY-DDD" to comply with the ISO 8601 ordinal date format. The year may sometimes be omitted, if it is implied by the context; the day may be generalized from integers to include a decimal part representing a fraction of a day.
Nomenclature
Ordinal date is the preferred name for what was formerly called the "Julian date" or , or , which is still seen in old programming languages and spreadsheet software. The older names are deprecated because they are easily confused with the earlier dating system called 'Julian day number' or , which was in prior use and which remains ubiquitous in astronomical and some historical calculations.
The U.S. military sometimes uses a system they call the "Julian date format", which indicates the year and the day number (out of the 365 or 366 days of the year). For example, "11 December 1999" can be written as "1999345" or "99345", for the 345th day of 1999.
Calculation
Computation of the ordinal day within a year is part of calculating the ordinal day throughout the years from a reference date, such as the Julian date. It is also part of calculating the day of the week, though for this purpose modulo 7 simplifications can be made.
In the following text, several algorithms for calculating the ordinal day are presented. The inputs taken are integers , and , for the year, month, and day numbers of the Gregorian or Julian calendar date.
Trivial methods
The most trivial method of calculating the ordinal day involves counting up all days that have elapsed per the definition:
Let O be 0.
For each month from 1 to m − 1, add the length of that month to O, taking care of leap years according to the calendar used.
Add d to O.
Similarly trivial is the use of a lookup table, such as the one referenced.
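A minimal Python sketch of the counting method above (Gregorian calendar assumed, via the standard-library calendar module):
from calendar import monthrange

def ordinal_day(y, m, d):
    """Day-of-year for year y, month m, day d, by summing the preceding month lengths."""
    o = 0
    for month in range(1, m):
        o += monthrange(y, month)[1]   # number of days in each preceding month
    return o + d

print(ordinal_day(1999, 12, 11))       # 345, as in the military example above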
Zeller-like
The table of month lengths can be replaced following the method of encoding the month-length variation in Zeller's congruence. As in Zeller, the month number m is changed to m + 12 if m ≤ 2. It can be shown (see below) that for a month-number m, the total days of the preceding months is equal to ⌊(153(m − 3) + 2) / 5⌋. As a result, the March 1-based ordinal day number is ⌊(153(m − 3) + 2) / 5⌋ + d (a code sketch implementing this appears after the list below).
The formula reflects the fact that any five consecutive months in the range March–January have a total length of 153 days, due to a fixed pattern 31–30–31–30–31 repeating itself twice. This is similar to the encoding of the month offset (which would be the same sequence modulo 7) in Zeller's congruence. As 153/5 is 30.6, the sequence oscillates in the desired pattern with the desired period 5.
To go from the March 1 based ordinal day to a January 1 based ordinal day:
For m ≥ 3 (March through December), the January 1-based ordinal day is obtained by adding 59 + leap(y) to the March 1-based number, where leap(y) is a function returning 0 or 1 depending on whether the input year is a leap year.
For January and February, two methods can be used:
The trivial method is to skip the March 1-based calculation and go straight to O = d for January and O = d + 31 for February.
The less redundant method is to treat January and February as months 13 and 14 (as above) and subtract 306 from the March 1-based number, where 306 is the number of dates in March through December. This makes use of the fact that the formula correctly gives a month-length of 31 for January.
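A Python sketch of the formula-based computation just described (Gregorian leap-year rule assumed; January and February handled directly):
def ordinal_day_zeller(y, m, d):
    """Day-of-year using the 153-day pattern for the months March through December."""
    leap = 1 if (y % 4 == 0 and y % 100 != 0) or y % 400 == 0 else 0
    if m <= 2:                                   # January and February handled directly
        return d + 31 * (m - 1)
    march_based = (153 * (m - 3) + 2) // 5 + d   # March 1-based ordinal day
    return march_based + 59 + leap               # shift by the length of January + February

print(ordinal_day_zeller(2023, 4, 15), ordinal_day_zeller(2024, 4, 15))   # 105, 106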
"Doomsday" properties:
With and gives
giving consecutive differences of 63 (9 weeks) for 3, 4, 5, and 6, i.e., between 4/4, 6/6, 8/8, 10/10, and 12/12.
and gives
and with m and d interchanged
giving a difference of 119 (17 weeks) for (difference between 5/9 and 9/5), and also for (difference between 7/11 and 11/7).
Table
For example, the ordinal date of April 15 is 105 in a common year, and 106 in a leap year.
Month–day
The number of the month and date is given by
the term can also be replaced by with the ordinal date.
Day 100 of a common year:
April 10.
Day 200 of a common year:
July 19.
Day 300 of a leap year:
November - 5 = October 26 (31 - 5).
Helper conversion table
See also
Julian day
Zeller's congruence
ISO week date
References
Calendars
Ordinal numbers | Ordinal date | [
"Physics",
"Mathematics"
] | 972 | [
"Ordinal numbers",
"Calendars",
"Physical quantities",
"Time",
"Mathematical objects",
"Spacetime",
"Order theory",
"Numbers"
] |
5,464,288 | https://en.wikipedia.org/wiki/Telluric%20contamination | Telluric contamination is contamination of the astronomical spectra by the Earth's atmosphere.
Interference with astronomical observations
Most astronomical observations are conducted by measuring photons (electromagnetic waves) which originate beyond the sky. The molecules in the Earth's atmosphere, however, absorb and emit their own light, especially in the visible and near-IR portion of the spectrum, and any ground-based observation is subject to contamination from these telluric (earth-originating) sources. Water vapor and oxygen are two of the more important molecules in telluric contamination. Contamination by water vapor was particularly pronounced in the Mount Wilson solar Doppler measurements.
Many scientific telescopes have spectrographs, which measure photons as a function of wavelength or frequency, with typical resolution on the order of a nanometer of visible light. Spectroscopic observations can be used in myriad contexts, including measuring the chemical composition and physical properties of astronomical objects as well as measuring object velocities from the Doppler shift of spectral lines. Unless they are corrected for, telluric contamination can produce errors or reduce precision in such data.
Telluric contamination can also be important for photometric measurements.
Telluric correction
It is possible to correct for the effects of telluric contamination in an astronomical spectrum. This is done by preparing a telluric correction function, made by dividing a model spectrum of a star by an observation of an astronomical photometric standard star. This function can then be multiplied by an astronomical observation at each wavelength point.
While this method can restore the original shape of the spectrum, the regions affected can be prone to high levels of noise due to the low number of counts in that area of the spectrum.
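As a rough illustration of the division-and-multiplication step described above, the Python sketch below assumes three hypothetical flux arrays sampled on the same wavelength grid (model_std, observed_std, observed_science); these names are ours and do not come from the article.

```python
import numpy as np

def telluric_correction(model_std, observed_std):
    """Correction function: model spectrum of the standard star divided by
    the observed spectrum of the same photometric standard star."""
    return np.asarray(model_std) / np.asarray(observed_std)

def apply_correction(observed_science, correction):
    """Multiply the science observation by the correction at each wavelength point."""
    return np.asarray(observed_science) * correction

# correction = telluric_correction(model_std, observed_std)
# corrected_science = apply_correction(observed_science, correction)
```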
See also
Pollution and light pollution
Interferometry and astronomy
Spectroscopy and spectrograph
References
Further reading
Christopher S. Carter, Herschel B. Snodgrass, and Claia Bryja, "Telluric water vapor contamination of the Mount Wilson solar Doppler measurements". Solar Physics volume 139, pages 13–24 (1992).
Atmosphere of Earth
Measurement
Observational astronomy | Telluric contamination | [
"Physics",
"Astronomy",
"Mathematics"
] | 422 | [
"Physical quantities",
"Quantity",
"Observational astronomy",
"Measurement",
"Size",
"Astronomical sub-disciplines"
] |
5,464,396 | https://en.wikipedia.org/wiki/Principle%20%28chemistry%29 | Principle, in chemistry, refers to a historical concept of the constituents of a substance, specifically those that produce a certain quality or effect in the substance, such as a bitter principle, which is any one of the numerous compounds having a bitter taste.
The idea of chemical principles developed out of the classical elements. Paracelsus identified the tria prima as principles in his approach to medicine. In his book The Sceptical Chymist of 1661, Robert Boyle criticized the traditional understanding of the composition of materials and initiated the modern understanding of chemical elements. Nevertheless, the concept of chemical principles continued to be used. Georg Ernst Stahl published Philosophical Principles of Universal Chemistry in 1730 as an early effort to distinguish between mixtures and compounds. He writes, "the simple are Principles, or the first material causes of Mixts;..." To define a Principle, he wrote
A Principle is defined, à priori, that in a mix’d matter, which first existed; and a posteriori, that into which it is at last resolved. (...) chemical Principles are called Salt, Sulfur and Mercury (...) or Salt, Oil, and Spirit.
Stahl recounts theories of chemical principles according to Helmont and J. J. Becher. He says Helmont took Water to be the "first and only material Principle of all things." According to Becher, Water and Earth are principles, where Earth is distinguished into three kinds. Stahl also ascribes to Earth the "principle of rest and aggregation."
Historians have described how early analysts used Principles to classify substances:
The classification of substances varies from one author to the next, but it generally relied on tests to which materials could be submitted or procedures that could be applied to them. "Test" must be understood here in a double sense, experimental and moral: gold was considered noble because it resisted fire, humidity, and being buried underground. Camphor, like sulfur, arsenic, mercury, and ammonia, belonged to the "spirits" because it was volatile. Glass belonged among the metals because, like them, it could be melted. And since the seven known metals – gold, silver, iron, copper, tin, lead, and mercury – were characterized by their capacity to be melted, what made a metal a metal was defined by reference to the only metal that was liquid at room temperature, mercury or quicksilver. But "common" mercury differed from the mercuric principle, which was cold and wet. Like all other metals, it involved another "principle", which was hot and dry, sulfur.
Guillaume-François Rouelle "attributed two functions to principles: that of forming mixts and that of being an agent or instrument of chemical principles."
Thus the four principles, earth, air, fire, and water, were principles both of the chemist's operations and of the mixts they operated upon. As instruments they were, unlike specific chemical reagents, "natural and general," always at work in every chemical operation. As constituent elements, they did not contradict the chemistry of displacement but transcended it: the chemist could never isolate or characterize an element as he characterized a body; an element was not isolable, for it could not be separated from a mixt without re-creating a new mixt in the process.
See also
Sulfur-mercury theory of metals
References
History of chemistry
Alchemical substances | Principle (chemistry) | [
"Chemistry"
] | 697 | [
"Alchemical substances"
] |
5,464,635 | https://en.wikipedia.org/wiki/Tioguanine | Tioguanine, also known as thioguanine or 6-thioguanine (6-TG), and sold under the brand name Tabloid, is a medication used to treat acute myeloid leukemia (AML), acute lymphocytic leukemia (ALL), and chronic myeloid leukemia (CML). Long-term use is not recommended. It is given by mouth.
Common side effects include bone marrow suppression, liver problems and inflammation of the mouth. It is recommended that liver enzymes be checked weekly when on the medication. People with a genetic deficiency in thiopurine S-methyltransferase are at higher risk of side effects. Avoiding pregnancy when on the medication is recommended. Tioguanine is in the antimetabolite family of medications. It is a purine analogue of guanine and works by disrupting DNA and RNA.
Tioguanine was developed between 1949 and 1951. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Acute leukemias in both adults and children
Chronic myelogenous leukemia
Inflammatory bowel disease, especially ulcerative colitis
Psoriasis
Colorectal cancer in mice resistant to immunotherapy
Side effects
Leukopenia and neutropenia
Thrombocytopenia
Anemia
Anorexia
Nausea and vomiting
Hepatotoxicity: this manifests as:
Hepatic veno-occlusive disease
The major concern that has inhibited the use of thioguanine has been veno-occlusive disease (VOD) and its histological precursor nodular regenerative hyperplasia (NRH). The incidence of NRH with thioguanine was reported as between 33 and 76%. The risk of ensuing VOD is serious and frequently irreversible so this side effect has been a major concern.
However, recent evidence using an animal model for thioguanine-induced NRH/VOD has shown that, contrary to previous assumptions, NRH/VOD is dose dependent and the mechanism for this has been demonstrated. This has been confirmed in human trials, where thioguanine has proven to be both safe and efficacious for coeliac disease when used at doses below those commonly prescribed. This has led to a revival of interest in thioguanine because of its higher efficacy and faster action compared to other thiopurines and immunosuppressants such as mycophenolate.
Contraindications
Pregnancy
Lactation: The safety warning against breastfeeding may have been a conservative assessment, but research evidence suggests that thiopurines do not enter breastmilk.
Interactions
Cancers that do not respond to treatment with mercaptopurine do not respond to thioguanine. On the other hand, some cases of IBD that are resistant to mercaptopurine (or its pro-drug azathioprine) may be responsive to thioguanine.
Pharmacogenetics
The enzyme thiopurine S-methyltransferase (TPMT) is responsible for the direct inactivation of thioguanine to its methylthioguanine base – this methylation prevents thioguanine from further conversion into active, cytotoxic thioguanine nucleotide (TGN) metabolites. Certain genetic variations within the TPMT gene can lead to decreased or absent TPMT enzyme activity, and individuals who are homozygous or heterozygous for these types of genetic variations may have increased levels of TGN metabolites and an increased risk of severe bone marrow suppression (myelosuppression) when receiving thioguanine. In many ethnicities, TPMT polymorphisms that result in decreased or absent TPMT activity occur with a frequency of approximately 5%, meaning that about 0.25% of patients are homozygous for these variants. However, an assay of TPMT activity in red blood cells or a TPMT genetic test can identify patients with reduced TPMT activity, allowing for the adjustment of thiopurine dose or avoidance of the drug entirely. The FDA-approved drug label for thioguanine notes that patients who are TPMT-deficient may be prone to developing myelosuppression and that laboratories offer testing for TPMT deficiency. Indeed, testing for TPMT activity is currently one of the few examples of pharmacogenetics being translated into routine clinical care.
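The quoted figures are consistent with simple Hardy–Weinberg arithmetic (our illustration, not a calculation given in the source): if the low-activity TPMT alleles have a combined frequency of q ≈ 0.05, the expected frequency of homozygotes is q² = 0.05² = 0.0025, i.e. about 0.25% of patients, while roughly 2q(1 − q) ≈ 9.5% would be heterozygous carriers.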
Metabolism and pharmacokinetics
A single oral dose of thioguanine shows incomplete absorption and metabolism, with high interindividual variability. The bioavailability of thioguanine has an average of 30% (range 14–46%). The maximum concentration in plasma after a single oral dose is attained after 8 hours.
Thioguanine, like other thiopurines, is cytotoxic to white cells; as a result it is immunosuppressive at lower doses and anti-leukemic/anti-neoplastic at higher doses. Thioguanine is incorporated into human bone marrow cells, but like other thiopurines, it is not known to cross the blood-brain barrier. Thioguanine cannot be demonstrated in cerebrospinal fluid, similar to the closely related compound 6-mercaptopurine which also cannot penetrate to the brain.
The plasma half-life of thioguanine is short, due to the rapid uptake into liver and blood cells and conversion to 6-TGN. The median plasma half-life is 80 minutes, with a range of 25–240 minutes. Thioguanine is excreted primarily through the kidneys in urine, but mainly as a metabolite, 2-amino-6-methylthiopurine. However, the intra-cellular thio-nucleotide metabolites of thioguanine (6-TGN) have longer half-lives and can therefore be measured after thioguanine is eliminated from the plasma.
Thioguanine is catabolized (broken down) via two pathways. One route is through the deamination by the enzyme guanine deaminase to 6-thioxanthine, which has minimal anti-neoplastic activity, then by oxidation by xanthine oxidase of the thioxanthine to thiouric acid. This metabolic pathway is not dependent on the efficacy of xanthine oxidase, so that the inhibitor of xanthine oxidase, the drug allopurinol, does not block the breakdown of thioguanine, in contrast to its inhibition of the breakdown of the related thiopurine 6-mercaptopurine. The second pathway is the methylation of thioguanine to 2-amino-6-methylthiopurine, which is minimally effective as an anti-neoplastic and significantly less toxic than thioguanine. This pathway also is independent of the enzyme activity of xanthine oxidase.
Mechanism of action
6-Thioguanine is a thio analogue of the naturally occurring purine base guanine. 6-thioguanine utilises the enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRTase) to be converted to 6-thioguanosine monophosphate (TGMP). High concentrations of TGMP may accumulate intracellularly and hamper the synthesis of guanine nucleotides via the enzyme Inosine monophosphate dehydrogenase (IMP dehydrogenase), leading to DNA mutations.
TGMP is converted by phosphorylation to thioguanosine diphosphate (TGDP) and thioguanosine triphosphate (TGTP). Simultaneously deoxyribosyl analogs are formed, via the enzyme ribonucleotide reductase. The TGMP, TGDP and TGTP are collectively named 6-thioguanine nucleotides (6-TGN). 6-TGN are cytotoxic to cells by: (1) incorporation into DNA during the synthesis phase (S-phase) of the cell; and (2) through inhibition of the GTP-binding protein (G protein) Rac1, which regulates the Rac/Vav pathway.
Chemistry
It is a pale yellow, odorless, crystalline powder.
Names
Tioguanine (INN, BAN, AAN), or thioguanine (USAN).
Thioguanine is administered by mouth (as a tablet – 'Lanvis').
References
Further reading
Nucleobases
Purine antagonists
Purines
Thioketones
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Tioguanine | [
"Chemistry"
] | 1,852 | [
"Functional groups",
"Thioketones"
] |
5,464,960 | https://en.wikipedia.org/wiki/Enzyme%20inhibitor | An enzyme inhibitor is a molecule that binds to an enzyme and blocks its activity. Enzymes are proteins that speed up chemical reactions necessary for life, in which substrate molecules are converted into products. An enzyme facilitates a specific chemical reaction by binding the substrate to its active site, a specialized area on the enzyme that accelerates the most difficult step of the reaction.
An enzyme inhibitor stops ("inhibits") this process, either by binding to the enzyme's active site (thus preventing the substrate itself from binding) or by binding to another site on the enzyme such that the enzyme's catalysis of the reaction is blocked. Enzyme inhibitors may bind reversibly or irreversibly. Irreversible inhibitors form a chemical bond with the enzyme such that the enzyme is inhibited until the chemical bond is broken. By contrast, reversible inhibitors bind non-covalently and may spontaneously leave the enzyme, allowing the enzyme to resume its function. Reversible inhibitors produce different types of inhibition depending on whether they bind to the enzyme, the enzyme-substrate complex, or both.
Enzyme inhibitors play an important role in all cells, since they are generally specific to one enzyme each and serve to control that enzyme's activity. For example, enzymes in a metabolic pathway may be inhibited by molecules produced later in the pathway, thus curtailing the production of molecules that are no longer needed. This type of negative feedback is an important way to maintain balance in a cell. Enzyme inhibitors also control essential enzymes such as proteases or nucleases that, if left unchecked, may damage a cell. Many poisons produced by animals or plants are enzyme inhibitors that block the activity of crucial enzymes in prey or predators.
Many drug molecules are enzyme inhibitors that inhibit an aberrant human enzyme or an enzyme critical for the survival of a pathogen such as a virus, bacterium or parasite. Examples include methotrexate (used in chemotherapy and in treating rheumatoid arthritis) and the protease inhibitors used to treat HIV/AIDS. Since anti-pathogen inhibitors generally target only one enzyme, such drugs are highly specific and generally produce few side effects in humans, provided that no analogous enzyme is found in humans. (This is often the case, since such pathogens and humans are genetically distant.) Medicinal enzyme inhibitors often have low dissociation constants, meaning that only a minute amount of the inhibitor is required to inhibit the enzyme. A low concentration of the enzyme inhibitor reduces the risk for liver and kidney damage and other adverse drug reactions in humans. Hence the discovery and refinement of enzyme inhibitors is an active area of research in biochemistry and pharmacology.
Structural classes
Enzyme inhibitors are a chemically diverse set of substances that range in size from organic small molecules to macromolecular proteins.
Small molecule inhibitors include essential primary metabolites that inhibit upstream enzymes that produce those metabolites. This provides a negative feedback loop that prevents over production of metabolites and thus maintains cellular homeostasis (steady internal conditions). Small molecule enzyme inhibitors also include secondary metabolites, which are not essential to the organism that produces them, but provide the organism with an evolutionary advantage, in that they can be used to repel predators or competing organisms or immobilize prey. In addition, many drugs are small molecule enzyme inhibitors that target either disease-modifying enzymes in the patient or enzymes in pathogens which are required for the growth and reproduction of the pathogen.
In addition to small molecules, some proteins act as enzyme inhibitors. The most prominent examples are the serpins (serine protease inhibitors), which are produced by animals to protect against inappropriate enzyme activation and by plants to prevent predation. Another class of inhibitor proteins is the ribonuclease inhibitors, which bind to ribonucleases in one of the tightest known protein–protein interactions. A special case of protein enzyme inhibitors are zymogens, which contain an autoinhibitory N-terminal peptide that binds to the active site of the enzyme and intramolecularly blocks its activity as a protective mechanism against uncontrolled catalysis. The N-terminal peptide is cleaved (split) from the zymogen enzyme precursor by another enzyme to release an active enzyme.
The binding site of inhibitors on enzymes is most commonly the same site that binds the substrate of the enzyme. These active site inhibitors are known as orthosteric ("regular" orientation) inhibitors. The mechanism of orthosteric inhibition is simply to prevent substrate binding to the enzyme through direct competition which in turn prevents the enzyme from catalysing the conversion of substrates into products. Alternatively, the inhibitor can bind to a site remote from the enzyme active site. These are known as allosteric ("alternative" orientation) inhibitors. The mechanisms of allosteric inhibition are varied and include changing the conformation (shape) of the enzyme such that it can no longer bind substrate (kinetically indistinguishable from competitive orthosteric inhibition) or alternatively stabilise binding of substrate to the enzyme but lock the enzyme in a conformation which is no longer catalytically active.
Reversible inhibitors
Reversible inhibitors attach to enzymes with non-covalent interactions such as hydrogen bonds, hydrophobic interactions and ionic bonds. Multiple weak bonds between the inhibitor and the enzyme active site combine to produce strong and specific binding.
In contrast to irreversible inhibitors, reversible inhibitors generally do not undergo chemical reactions when bound to the enzyme and can be easily removed by dilution or dialysis. A special case is covalent reversible inhibitors that form a chemical bond with the enzyme, but the bond can be cleaved so the inhibition is fully reversible.
Reversible inhibitors are generally categorized into four types, as introduced by Cleland in 1963. They are classified according to the effect of the inhibitor on the Vmax (maximum reaction rate catalysed by the enzyme) and Km (the concentration of substrate resulting in half maximal enzyme activity) as the concentration of the enzyme's substrate is varied.
Competitive
In competitive inhibition the substrate and inhibitor cannot bind to the enzyme at the same time. This usually results from the inhibitor having an affinity for the active site of an enzyme where the substrate also binds; the substrate and inhibitor compete for access to the enzyme's active site. This type of inhibition can be overcome by sufficiently high concentrations of substrate (Vmax remains constant), i.e., by out-competing the inhibitor. However, the apparent Km will increase as it takes a higher concentration of the substrate to reach the Km point, or half the Vmax. Competitive inhibitors are often similar in structure to the real substrate (see for example the "methotrexate versus folate" figure in the "Drugs" section).
Uncompetitive
In uncompetitive inhibition the inhibitor binds only to the enzyme-substrate complex. This type of inhibition causes Vmax to decrease (maximum velocity decreases as a result of removing activated complex) and Km to decrease (due to better binding efficiency as a result of Le Chatelier's principle and the effective elimination of the ES complex thus decreasing the Km which indicates a higher binding affinity). Uncompetitive inhibition is rare.
Non-competitive
In non-competitive inhibition the binding of the inhibitor to the enzyme reduces its activity but does not affect the binding of substrate. This type of inhibitor binds with equal affinity to the free enzyme as to the enzyme-substrate complex. It can be thought of as having the ability of competitive and uncompetitive inhibitors, but with no preference to either type. As a result, the extent of inhibition depends only on the concentration of the inhibitor. Vmax will decrease due to the inability for the reaction to proceed as efficiently, but Km will remain the same as the actual binding of the substrate, by definition, will still function properly.
Mixed
In mixed inhibition the inhibitor may bind to the enzyme whether or not the substrate has already bound. Hence mixed inhibition is a combination of competitive and noncompetitive inhibition. Furthermore, the affinity of the inhibitor for the free enzyme and the enzyme-substrate complex may differ. By increasing concentrations of substrate [S], this type of inhibition can be reduced (due to the competitive contribution), but not entirely overcome (due to the noncompetitive component). Although it is possible for mixed-type inhibitors to bind in the active site, this type of inhibition generally results from an allosteric effect where the inhibitor binds to a different site on an enzyme. Inhibitor binding to this allosteric site changes the conformation (that is, the tertiary structure or three-dimensional shape) of the enzyme so that the affinity of the substrate for the active site is reduced.
These four types of inhibition can also be distinguished by the effect of increasing the substrate concentration [S] on the degree of inhibition caused by a given amount of inhibitor. For competitive inhibition the degree of inhibition is reduced by increasing [S], for noncompetitive inhibition the degree of inhibition is unchanged, and for uncompetitive (also called anticompetitive) inhibition the degree of inhibition increases with [S].
Quantitative description
Reversible inhibition can be described quantitatively in terms of the inhibitor's binding to the enzyme and to the enzyme-substrate complex, and its effects on the kinetic constants of the enzyme. In the classic Michaelis-Menten scheme (shown in the "inhibition mechanism schematic" diagram), an enzyme (E) binds to its substrate (S) to form the enzyme–substrate complex ES. Upon catalysis, this complex breaks down to release product P and free enzyme. The inhibitor (I) can bind to either E or ES with the dissociation constants Ki or Ki', respectively.
Competitive inhibitors can bind to E, but not to ES. Competitive inhibition increases Km (i.e., the inhibitor interferes with substrate binding), but does not affect Vmax (the inhibitor does not hamper catalysis in ES because it cannot bind to ES).
Uncompetitive inhibitors bind to ES. Uncompetitive inhibition decreases both Km and Vmax. The inhibitor affects substrate binding by increasing the enzyme's affinity for the substrate (decreasing Km) as well as hampering catalysis (decreasing Vmax).
Non-competitive inhibitors have identical affinities for E and ES (Ki = Ki'). Non-competitive inhibition does not change Km (i.e., it does not affect substrate binding) but decreases Vmax (i.e., inhibitor binding hampers catalysis).
Mixed-type inhibitors bind to both E and ES, but their affinities for these two forms of the enzyme are different (Ki ≠ Ki'). Thus, mixed-type inhibitors affect substrate binding (increase or decrease Km) and hamper catalysis in the ES complex (decrease Vmax).
When an enzyme has multiple substrates, inhibitors can show different types of inhibition depending on which substrate is considered. This results from the active site containing two different binding sites within the active site, one for each substrate. For example, an inhibitor might compete with substrate A for the first binding site, but be a non-competitive inhibitor with respect to substrate B in the second binding site.
Traditionally reversible enzyme inhibitors have been classified as competitive, uncompetitive, or non-competitive, according to their effects on Km and Vmax. These three types of inhibition result respectively from the inhibitor binding only to the enzyme E in the absence of substrate S, to the enzyme–substrate complex ES, or to both. The division of these classes arises from a problem in their derivation and results in the need to use two different binding constants for one binding event. It is further assumed that binding of the inhibitor to the enzyme results in 100% inhibition and fails to consider the possibility of partial inhibition. The common form of the inhibitory term also obscures the relationship between the inhibitor binding to the enzyme and its relationship to any other binding term, be it the Michaelis–Menten equation or a dose response curve associated with ligand receptor binding. To demonstrate the relationship the following rearrangement can be made:
Vmax/(1 + [I]/Ki) = Vmax − Vmax([I]/(Ki + [I]))
This rearrangement demonstrates that similar to the Michaelis–Menten equation, the maximal rate of reaction depends on the proportion of the enzyme population interacting with its substrate.
[S]/(Km + [S]) is the fraction of the enzyme population bound by substrate;
[I]/(Ki + [I]) is the fraction of the enzyme population bound by inhibitor;
the effect of the inhibitor is a result of the percent of the enzyme population interacting with inhibitor. The only problem with this equation in its present form is that it assumes absolute inhibition of the enzyme with inhibitor binding, when in fact there can be a wide range of effects anywhere from 100% inhibition of substrate turnover to no inhibition. To account for this the equation can be easily modified to allow for different degrees of inhibition by including a delta Vmax term:
V = Vmax1 − ΔVmax([I]/(Ki + [I]))
or
V = Vmax1 − (Vmax1 − Vmax2)([I]/(Ki + [I]))
This term can then define the residual enzymatic activity present when the inhibitor is interacting with individual enzymes in the population. However the inclusion of this term has the added value of allowing for the possibility of activation if the secondary Vmax term turns out to be higher than the initial term. To account for the possibility of activation as well, the notation can then be rewritten replacing the inhibitor "I" with a modifier term (stimulator or inhibitor) denoted here as "X":
V = Vmax1 − (Vmax1 − Vmax2)([X]/(Kx + [X]))
While this terminology results in a simplified way of dealing with kinetic effects relating to the maximum velocity of the Michaelis–Menten equation, it highlights potential problems with the term used to describe effects relating to the Km. The Km relating to the affinity of the enzyme for the substrate should in most cases relate to potential changes in the binding site of the enzyme which would directly result from enzyme inhibitor interactions. As such, a term similar to the delta Vmax term proposed above to modulate Vmax should be appropriate in most situations:
Km = Km1 − (Km1 − Km2)([X]/(Kx + [X]))
Dissociation constants
An enzyme inhibitor is characterised by its dissociation constant Ki, the concentration at which the inhibitor half occupies the enzyme. In non-competitive inhibition the inhibitor can also bind to the enzyme-substrate complex, and the presence of bound substrate can change the affinity of the inhibitor for the enzyme, resulting in a second dissociation constant Ki'. Hence Ki and Ki' are the dissociation constants of the inhibitor for the enzyme and to the enzyme-substrate complex, respectively. The enzyme-inhibitor constant Ki can be measured directly by various methods; one especially accurate method is isothermal titration calorimetry, in which the inhibitor is titrated into a solution of enzyme and the heat released or absorbed is measured. However, the other dissociation constant Ki' is difficult to measure directly, since the enzyme-substrate complex is short-lived and undergoing a chemical reaction to form the product. Hence, Ki' is usually measured indirectly, by observing the enzyme activity under various substrate and inhibitor concentrations, and fitting the data via nonlinear regression to a modified Michaelis–Menten equation.
The modified equation takes the form v = Vmax[S]/(αKm + α'[S]), where the modifying factors α and α' are defined by the inhibitor concentration and its two dissociation constants: α = 1 + [I]/Ki and α' = 1 + [I]/Ki'.
Thus, in the presence of the inhibitor, the enzyme's effective Km and Vmax become (α/α')Km and (1/α')Vmax, respectively. However, the modified Michaelis-Menten equation assumes that binding of the inhibitor to the enzyme has reached equilibrium, which may be a very slow process for inhibitors with sub-nanomolar dissociation constants. In these cases the inhibition becomes effectively irreversible, hence it is more practical to treat such tight-binding inhibitors as irreversible (see below).
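A compact way to see how the different inhibition types fall out of this equation is to compute rates directly. The Python sketch below uses the α/α' form given above; the numeric parameter values in the example calls are illustrative only and are not taken from the article.

```python
import math

def inhibited_rate(S, I, Vmax, Km, Ki, Ki_prime):
    """v = Vmax*[S] / (alpha*Km + alpha'*[S]), with alpha = 1 + [I]/Ki and
    alpha' = 1 + [I]/Ki' (the modified Michaelis-Menten equation above)."""
    alpha = 1 + I / Ki
    alpha_prime = 1 + I / Ki_prime
    return Vmax * S / (alpha * Km + alpha_prime * S)

# Competitive:     inhibitor binds only E,  so Ki' -> infinity (alpha' = 1)
# Uncompetitive:   inhibitor binds only ES, so Ki  -> infinity (alpha  = 1)
# Non-competitive: Ki == Ki'
v_comp = inhibited_rate(S=10.0, I=5.0, Vmax=1.0, Km=2.0, Ki=1.0, Ki_prime=math.inf)
v_unco = inhibited_rate(S=10.0, I=5.0, Vmax=1.0, Km=2.0, Ki=math.inf, Ki_prime=1.0)
v_nonc = inhibited_rate(S=10.0, I=5.0, Vmax=1.0, Km=2.0, Ki=1.0, Ki_prime=1.0)
```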
The effects of different types of reversible enzyme inhibitors on enzymatic activity can be visualised using graphical representations of the Michaelis–Menten equation, such as Lineweaver–Burk, Eadie-Hofstee or Hanes-Woolf plots. An illustration is provided by the three Lineweaver–Burk plots depicted in the Lineweaver–Burk diagrams figure. In the top diagram the competitive inhibition lines intersect on the y-axis, illustrating that such inhibitors do not affect Vmax. In the bottom diagram the non-competitive inhibition lines intersect on the x-axis, showing these inhibitors do not affect Km. However, since it can be difficult to estimate Ki and Ki' accurately from such plots, it is advisable to estimate these constants using more reliable nonlinear regression methods.
Special cases
Partially competitive
The mechanism of partially competitive inhibition is similar to that of non-competitive, except that the EIS complex has catalytic activity, which may be lower or even higher (partially competitive activation) than that of the enzyme–substrate (ES) complex. This inhibition typically displays a lower Vmax, but an unaffected Km value.
Substrate or product
Substrate or product inhibition is where either an enzymes substrate or product also act as an inhibitor. This inhibition may follow the competitive, uncompetitive or mixed patterns. In substrate inhibition there is a progressive decrease in activity at high substrate concentrations, potentially from an enzyme having two competing substrate-binding sites. At low substrate, the high-affinity site is occupied and normal kinetics are followed. However, at higher concentrations, the second inhibitory site becomes occupied, inhibiting the enzyme. Product inhibition (either the enzyme's own product, or a product to an enzyme downstream in its metabolic pathway) is often a regulatory feature in metabolism and can be a form of negative feedback.
Slow-tight
Slow-tight inhibition occurs when the initial enzyme–inhibitor complex EI undergoes conformational isomerism (a change in shape) to a second more tightly held complex, EI*, but the overall inhibition process is reversible. This manifests itself as slowly increasing enzyme inhibition. Under these conditions, traditional Michaelis–Menten kinetics give a false value for Ki, which is time–dependent. The true value of Ki can be obtained through more complex analysis of the on (kon) and off (koff) rate constants for inhibitor association with kinetics similar to irreversible inhibition.
Multi-substrate analogues
Multi-substrate analogue inhibitors are high affinity selective inhibitors that can be prepared for enzymes that catalyse reactions with more than one substrate by capturing the binding energy of each of those substrate into one molecule. For example, in the formyl transfer reactions of purine biosynthesis, a potent Multi-substrate Adduct Inhibitor (MAI) to glycinamide ribonucleotide (GAR) TFase was prepared synthetically by linking analogues of the GAR substrate and the N-10-formyl tetrahydrofolate cofactor together to produce thioglycinamide ribonucleotide dideazafolate (TGDDF), or enzymatically from the natural GAR substrate to yield GDDF. Here the subnanomolar dissociation constant (KD) of TGDDF was greater than predicted presumably due to entropic advantages gained and/or positive interactions acquired through the atoms linking the components. MAIs have also been observed to be produced in cells by reactions of pro-drugs such as isoniazid or enzyme inhibitor ligands (for example, PTC124) with cellular cofactors such as nicotinamide adenine dinucleotide (NADH) and adenosine triphosphate (ATP) respectively.
Examples
As enzymes have evolved to bind their substrates tightly, and most reversible inhibitors bind in the active site of enzymes, it is unsurprising that some of these inhibitors are strikingly similar in structure to the substrates of their targets. Inhibitors of dihydrofolate reductase (DHFR) are prominent examples. Other examples of these substrate mimics are the protease inhibitors, a therapeutically effective class of antiretroviral drugs used to treat HIV/AIDS. The structure of ritonavir, a peptidomimetic (peptide mimic) protease inhibitor containing three peptide bonds, is shown in the "competitive inhibition" figure above. As this drug resembles the peptide that is the substrate of the HIV protease, it competes with the substrate in the enzyme's active site.
Enzyme inhibitors are often designed to mimic the transition state or intermediate of an enzyme-catalysed reaction. This ensures that the inhibitor exploits the transition state stabilising effect of the enzyme, resulting in a better binding affinity (lower Ki) than substrate-based designs. An example of such a transition state inhibitor is the antiviral drug oseltamivir; this drug mimics the planar nature of the ring oxonium ion in the reaction of the viral enzyme neuraminidase.
However, not all inhibitors are based on the structures of substrates. For example, the structure of another HIV protease inhibitor tipranavir is not based on a peptide and has no obvious structural similarity to a protein substrate. These non-peptide inhibitors can be more stable than inhibitors containing peptide bonds, because they will not be substrates for peptidases and are less likely to be degraded.
In drug design it is important to consider the concentrations of substrates to which the target enzymes are exposed. For example, some protein kinase inhibitors have chemical structures that are similar to ATP, one of the substrates of these enzymes. However, drugs that are simple competitive inhibitors will have to compete with the high concentrations of ATP in the cell. Protein kinases can also be inhibited by competition at the binding sites where the kinases interact with their substrate proteins, and most proteins are present inside cells at concentrations much lower than the concentration of ATP. As a consequence, if two protein kinase inhibitors both bind in the active site with similar affinity, but only one has to compete with ATP, then the competitive inhibitor at the protein-binding site will inhibit the enzyme more effectively.
Irreversible inhibitors
Types
Irreversible inhibitors covalently bind to an enzyme, and this type of inhibition can therefore not be readily reversed. Irreversible inhibitors often contain reactive functional groups such as nitrogen mustards, aldehydes, haloalkanes, alkenes, Michael acceptors, phenyl sulfonates, or fluorophosphonates. These electrophilic groups react with amino acid side chains to form covalent adducts. The residues modified are those with side chains containing nucleophiles such as hydroxyl or sulfhydryl groups; these include the amino acids serine (that reacts with DFP, see the "DFP reaction" diagram), and also cysteine, threonine, or tyrosine.
Irreversible inhibition is different from irreversible enzyme inactivation. Irreversible inhibitors are generally specific for one class of enzyme and do not inactivate all proteins; they do not function by destroying protein structure but by specifically altering the active site of their target. For example, extremes of pH or temperature usually cause denaturation of all protein structure, but this is a non-specific effect. Similarly, some non-specific chemical treatments destroy protein structure: for example, heating in concentrated hydrochloric acid will hydrolyse the peptide bonds holding proteins together, releasing free amino acids.
Irreversible inhibitors display time-dependent inhibition and their potency therefore cannot be characterised by an IC50 value. This is because the amount of active enzyme at a given concentration of irreversible inhibitor will be different depending on how long the inhibitor is pre-incubated with the enzyme. Instead, kobs/[I] values are used, where kobs is the observed pseudo-first order rate of inactivation (obtained by plotting the log of % activity versus time) and [I] is the concentration of inhibitor. The kobs/[I] parameter is valid as long as the inhibitor does not saturate binding with the enzyme (in which case kobs = kinact) where kinact is the rate of inactivation.
Measuring
Irreversible inhibitors first form a reversible non-covalent complex with the enzyme (EI or ESI). Subsequently, a chemical reaction occurs between the enzyme and inhibitor to produce the covalently modified "dead-end complex" EI* (an irreversible covalent complex). The rate at which EI* is formed is called the inactivation rate or kinact. Since formation of EI may compete with ES, binding of irreversible inhibitors can be prevented by competition either with substrate or with a second, reversible inhibitor. This protection effect is good evidence of a specific reaction of the irreversible inhibitor with the active site.
The binding and inactivation steps of this reaction are investigated by incubating the enzyme with inhibitor and assaying the amount of activity remaining over time. The activity will be decreased in a time-dependent manner, usually following exponential decay. Fitting these data to a rate equation gives the rate of inactivation at this concentration of inhibitor. This is done at several different concentrations of inhibitor. If a reversible EI complex is involved the inactivation rate will be saturable and fitting this curve will give kinact and Ki.
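The incubation-and-assay analysis just described can be sketched numerically. The Python example below generates synthetic percent-activity data (all numbers invented for illustration), fits an exponential decay at each inhibitor concentration to obtain kobs, and then fits kobs against [I] to the saturating form kobs = kinact·[I]/(Ki + [I]).

```python
import numpy as np
from scipy.optimize import curve_fit

def activity_decay(t, kobs):
    """Percent activity remaining after pre-incubation time t."""
    return 100.0 * np.exp(-kobs * t)

def kobs_saturation(I, kinact, Ki):
    """Observed inactivation rate as a function of inhibitor concentration."""
    return kinact * I / (Ki + I)

t = np.linspace(0.0, 30.0, 7)                     # pre-incubation times (minutes)
true_kinact, true_Ki = 0.3, 5.0                   # invented "true" parameters
inhibitor_concs = np.array([1.0, 2.0, 5.0, 10.0, 20.0])

kobs_fitted = []
for I in inhibitor_concs:
    data = activity_decay(t, kobs_saturation(I, true_kinact, true_Ki))
    popt, _ = curve_fit(activity_decay, t, data, p0=[0.1])
    kobs_fitted.append(popt[0])

(kinact_hat, Ki_hat), _ = curve_fit(kobs_saturation, inhibitor_concs,
                                    np.array(kobs_fitted), p0=[0.1, 1.0])
print(kinact_hat, Ki_hat)                         # recovers ~0.3 and ~5.0
```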
Another method that is widely used in these analyses is mass spectrometry. Here, accurate measurement of the mass of the unmodified native enzyme and the inactivated enzyme gives the increase in mass caused by reaction with the inhibitor and shows the stoichiometry of the reaction. This is usually done using a MALDI-TOF mass spectrometer. In a complementary technique, peptide mass fingerprinting involves digestion of the native and modified protein with a protease such as trypsin. This will produce a set of peptides that can be analysed using a mass spectrometer. The peptide that changes in mass after reaction with the inhibitor will be the one that contains the site of modification.
Slow binding
Not all irreversible inhibitors form covalent adducts with their enzyme targets. Some reversible inhibitors bind so tightly to their target enzyme that they are essentially irreversible. These tight-binding inhibitors may show kinetics similar to covalent irreversible inhibitors. In these cases some of these inhibitors rapidly bind to the enzyme in a low-affinity EI complex and this then undergoes a slower rearrangement to a very tightly bound EI* complex (see the "irreversible inhibition mechanism" diagram). This kinetic behaviour is called slow-binding. This slow rearrangement after binding often involves a conformational change as the enzyme "clamps down" around the inhibitor molecule. Examples of slow-binding inhibitors include some important drugs, such as methotrexate, allopurinol, and the activated form of acyclovir.
Some examples
Diisopropylfluorophosphate (DFP) is an example of an irreversible protease inhibitor (see the "DFP reaction" diagram). The enzyme hydrolyses the phosphorus–fluorine bond, but the phosphate residue remains bound to the serine in the active site, deactivating it. Similarly, DFP also reacts with the active site of acetylcholine esterase in the synapses of neurons, and consequently is a potent neurotoxin, with a lethal dose of less than 100 mg.
Suicide inhibition is an unusual type of irreversible inhibition where the enzyme converts the inhibitor into a reactive form in its active site. An example is the inhibitor of polyamine biosynthesis, α-difluoromethylornithine (DFMO), which is an analogue of the amino acid ornithine, and is used to treat African trypanosomiasis (sleeping sickness). Ornithine decarboxylase can catalyse the decarboxylation of DFMO instead of ornithine (see the "DFMO inhibitor mechanism" diagram). However, this decarboxylation reaction is followed by the elimination of a fluorine atom, which converts this catalytic intermediate into a conjugated imine, a highly electrophilic species. This reactive form of DFMO then reacts with either a cysteine or lysine residue in the active site to irreversibly inactivate the enzyme.
Since irreversible inhibition often involves the initial formation of a non-covalent enzyme inhibitor (EI) complex, it is sometimes possible for an inhibitor to bind to an enzyme in more than one way. For example, in the figure showing trypanothione reductase from the human protozoan parasite Trypanosoma cruzi, two molecules of an inhibitor called quinacrine mustard are bound in its active site. The top molecule is bound reversibly, but the lower one is bound covalently as it has reacted with an amino acid residue through its nitrogen mustard group.
Applications
Enzyme inhibitors are found in nature and also produced artificially in the laboratory. Naturally occurring enzyme inhibitors regulate many metabolic processes and are essential for life. In addition, naturally produced poisons are often enzyme inhibitors that have evolved for use as toxic agents against predators, prey, and competing organisms. These natural toxins include some of the most poisonous substances known. Artificial inhibitors are often used as drugs, but can also be insecticides such as malathion, herbicides such as glyphosate, or disinfectants such as triclosan. Other artificial enzyme inhibitors block acetylcholinesterase, an enzyme which breaks down acetylcholine, and are used as nerve agents in chemical warfare.
Metabolic regulation
Enzyme inhibition is a common feature of metabolic pathway control in cells. Metabolic flux through a pathway is often regulated by a pathway's metabolites acting as inhibitors and enhancers for the enzymes in that same pathway. The glycolytic pathway is a classic example. This catabolic pathway consumes glucose and produces ATP, NADH and pyruvate. A key step for the regulation of glycolysis is an early reaction in the pathway catalysed by phosphofructokinase1 (PFK1). When ATP levels rise, ATP binds an allosteric site in PFK1 to decrease the rate of the enzyme reaction; glycolysis is inhibited and ATP production falls. This negative feedback control helps maintain a steady concentration of ATP in the cell. However, metabolic pathways are not just regulated through inhibition since enzyme activation is equally important. With respect to PFK1, fructose 2,6-bisphosphate and ADP are examples of metabolites that are allosteric activators.
Physiological enzyme inhibition can also be produced by specific protein inhibitors. This mechanism occurs in the pancreas, which synthesises many digestive precursor enzymes known as zymogens. Many of these are activated by the trypsin protease, so it is important to inhibit the activity of trypsin in the pancreas to prevent the organ from digesting itself. One way in which the activity of trypsin is controlled is the production of a specific and potent trypsin inhibitor protein in the pancreas. This inhibitor binds tightly to trypsin, preventing the trypsin activity that would otherwise be detrimental to the organ. Although the trypsin inhibitor is a protein, it avoids being hydrolysed as a substrate by the protease by excluding water from trypsin's active site and destabilising the transition state. Other examples of physiological enzyme inhibitor proteins include the barstar inhibitor of the bacterial ribonuclease barnase.
Natural poisons
Animals and plants have evolved to synthesise a vast array of poisonous products including secondary metabolites, peptides and proteins that can act as inhibitors. Natural toxins are usually small organic molecules and are so diverse that there are probably natural inhibitors for most metabolic processes. The metabolic processes targeted by natural poisons encompass more than enzymes in metabolic pathways and can also include the inhibition of receptor, channel and structural protein functions in a cell. For example, paclitaxel (taxol), an organic molecule found in the Pacific yew tree, binds tightly to tubulin dimers and inhibits their assembly into microtubules in the cytoskeleton.
Many natural poisons act as neurotoxins that can cause paralysis leading to death and function for defence against predators or in hunting and capturing prey. Some of these natural inhibitors, despite their toxic attributes, are valuable for therapeutic uses at lower doses. An example of a neurotoxin are the glycoalkaloids, from the plant species in the family Solanaceae (includes potato, tomato and eggplant), that are acetylcholinesterase inhibitors. Inhibition of this enzyme causes an uncontrolled increase in the acetylcholine neurotransmitter, muscular paralysis and then death. Neurotoxicity can also result from the inhibition of receptors; for example, atropine from deadly nightshade (Atropa belladonna) that functions as a competitive antagonist of the muscarinic acetylcholine receptors.
Although many natural toxins are secondary metabolites, these poisons also include peptides and proteins. An example of a toxic peptide is alpha-amanitin, which is found in relatives of the death cap mushroom. This is a potent enzyme inhibitor, in this case preventing the RNA polymerase II enzyme from transcribing DNA. The algal toxin microcystin is also a peptide and is an inhibitor of protein phosphatases. This toxin can contaminate water supplies after algal blooms and is a known carcinogen that can also cause acute liver haemorrhage and death at higher doses.
Proteins can also be natural poisons or antinutrients, such as the trypsin inhibitors (discussed in the "metabolic regulation" section above) that are found in some legumes. A less common class of toxins are toxic enzymes: these act as irreversible inhibitors of their target enzymes and work by chemically modifying their substrate enzymes. An example is ricin, an extremely potent protein toxin found in castor oil beans. This enzyme is a glycosidase that inactivates ribosomes. Since ricin is a catalytic irreversible inhibitor, this allows just a single molecule of ricin to kill a cell.
Drugs
The most common uses for enzyme inhibitors are as drugs to treat disease. Many of these inhibitors target a human enzyme and aim to correct a pathological condition. For instance, aspirin is a widely used drug that acts as a suicide inhibitor of the cyclooxygenase enzyme. This inhibition in turn suppresses the production of proinflammatory prostaglandins and thus aspirin may be used to reduce pain, fever, and inflammation.
An estimated 29% of approved drugs are enzyme inhibitors, of which approximately one-fifth are kinase inhibitors. A notable class of kinase drug targets is the receptor tyrosine kinases, which are essential enzymes that regulate cell growth; their over-activation may result in cancer. Hence kinase inhibitors such as imatinib are frequently used to treat malignancies. Janus kinases are another notable example of drug enzyme targets. Inhibitors of Janus kinases block the production of inflammatory cytokines, and hence these inhibitors are used to treat a variety of inflammatory diseases including arthritis, asthma, and Crohn's disease.
An example of the structural similarity of some inhibitors to the substrates of the enzymes they target is seen in the figure comparing the drug methotrexate to folic acid. Folic acid is the oxidised form of the substrate of dihydrofolate reductase, an enzyme that is potently inhibited by methotrexate. Methotrexate blocks the action of dihydrofolate reductase and thereby halts thymidine biosynthesis. This block of nucleotide biosynthesis is selectively toxic to rapidly growing cells, therefore methotrexate is often used in cancer chemotherapy.
A common treatment for erectile dysfunction is sildenafil (Viagra). This compound is a potent inhibitor of cGMP specific phosphodiesterase type 5, the enzyme that degrades the signalling molecule cyclic guanosine monophosphate. This signalling molecule triggers smooth muscle relaxation and allows blood flow into the corpus cavernosum, which causes an erection. Since the drug decreases the activity of the enzyme that halts the signal, it makes this signal last for a longer period of time.
Antibiotics
Drugs are also used to inhibit enzymes needed for the survival of pathogens. For example, bacteria are surrounded by a thick cell wall made of a net-like polymer called peptidoglycan. Many antibiotics such as penicillin and vancomycin inhibit the enzymes that produce and then cross-link the strands of this polymer together. This causes the cell wall to lose strength and the bacteria to burst. In the figure, a molecule of penicillin (shown in a ball-and-stick form) is shown bound to its target, the transpeptidase from the bacteria Streptomyces R61 (the protein is shown as a ribbon diagram).
Antibiotic drug design is facilitated when an enzyme that is essential to the pathogen's survival is absent or very different in humans. Humans do not make peptidoglycan, therefore antibiotics that inhibit this process are selectively toxic to bacteria. Selective toxicity is also produced in antibiotics by exploiting differences in the structure of the ribosomes in bacteria, or how they make fatty acids.
Antivirals
Drugs that inhibit enzymes needed for the replication of viruses are effective in treating viral infections. Antiviral drugs include protease inhibitors used to treat HIV/AIDS and Hepatitis C, reverse-transcriptase inhibitors targeting HIV/AIDS, neuraminidase inhibitors targeting influenza, and terminase inhibitors targeting human cytomegalovirus.
Pesticides
Many pesticides are enzyme inhibitors. Acetylcholinesterase (AChE) is an enzyme found in animals, from insects to humans. It is essential to nerve cell function through its mechanism of breaking down the neurotransmitter acetylcholine into its constituents, acetate and choline. This is somewhat unusual among neurotransmitters as most, including serotonin, dopamine, and norepinephrine, are absorbed from the synaptic cleft rather than cleaved. A large number of AChE inhibitors are used in both medicine and agriculture. Reversible competitive inhibitors, such as edrophonium, physostigmine, and neostigmine, are used in the treatment of myasthenia gravis and in anaesthesia to reverse muscle blockade. The carbamate pesticides are also examples of reversible AChE inhibitors. The organophosphate pesticides such as malathion, parathion, and chlorpyrifos irreversibly inhibit acetylcholinesterase.
Herbicides
The herbicide glyphosate is an inhibitor of 3-phosphoshikimate 1-carboxyvinyltransferase; other herbicides, such as the sulfonylureas, inhibit the enzyme acetolactate synthase. Both enzymes are needed for plants to make amino acids. Many other enzymes are inhibited by herbicides, including enzymes needed for the biosynthesis of lipids and carotenoids and the processes of photosynthesis and oxidative phosphorylation.
Discovery and design
New drugs are the products of a long drug development process, the first step of which is often the discovery of a new enzyme inhibitor. There are two principal approaches to discovering these inhibitors.
The first general method is rational drug design based on mimicking the transition state of the chemical reaction catalysed by the enzyme. The designed inhibitor often closely resembles the substrate, except that the portion of the substrate that undergoes chemical reaction is replaced by a chemically stable functional group that resembles the transition state. Since the enzyme has evolved to stabilise the transition state, transition state analogues generally possess higher affinity for the enzyme compared to the substrate, and therefore are effective inhibitors.
The second way of discovering new enzyme inhibitors is high-throughput screening of large libraries of structurally diverse compounds to identify hit molecules that bind to the enzyme. This method has been extended to include virtual screening of databases of diverse molecules using computers, which are then followed by experimental confirmation of binding of the virtual screening hits. Complementary approaches that can provide new starting points for inhibitors include fragment-based lead discovery and DNA Encoded Chemical Libraries (DEL).
Hits from any of the above approaches can be optimised to high affinity binders that efficiently inhibit the enzyme. Computer-based methods for predicting the binding orientation and affinity of an inhibitor for an enzyme such as molecular docking and molecular mechanics can be used to assist in the optimisation process. New inhibitors are used to obtain crystallographic structures of the enzyme in an inhibitor/enzyme complex to show how the molecule is binding to the active site, allowing changes to be made to the inhibitor to optimise binding in a process known as structure-based drug design. This test and improve cycle is repeated until a sufficiently potent inhibitor is produced.
See also
Activity-based proteomics – a branch of proteomics that uses covalent enzyme inhibitors as reporters to monitor enzyme activity.
Antimetabolite – an enzyme inhibitor that is used to interfere with cell growth and division
Transition state analogue – a type of enzyme inhibitor that mimics the transition state of the chemical reaction catalysed by the enzyme
References
External links
, Database of enzymes giving lists of known inhibitors for each entry
Database of drugs and enzyme inhibitors
Recommendations of the Nomenclature Committee of the International Union of Biochemistry (NC-IUB) on enzyme inhibition terminology
Biochemical reactions
Medicinal chemistry
Metabolism | Enzyme inhibitor | [
"Chemistry",
"Biology"
] | 8,660 | [
"Biochemical reactions",
"Cellular processes",
"nan",
"Medicinal chemistry",
"Biochemistry",
"Metabolism"
] |
5,464,994 | https://en.wikipedia.org/wiki/Reaction%20inhibitor | A reaction inhibitor is a substance that decreases the rate of, or prevents, a chemical reaction.
A catalyst or an enzyme activator, in contrast, is a substance that increases the rate of a chemical reaction.
Examples
Added acetanilide slows the decomposition of drug-store hydrogen peroxide solution, inhibiting the reaction 2 H2O2 → 2 H2O + O2, which is catalyzed by heat, light, and impurities.
Inhibition of a catalyst
An inhibitor can reduce the effectiveness of a catalyst in a catalysed reaction (either a non-biological catalyst or an enzyme). E.g., if a compound is so similar to (one of) the reactants that it can bind to the active site of a catalyst but does not undergo a catalytic reaction then that catalyst molecule cannot perform its job because the active site is occupied. When the inhibitor is released, the catalyst is again available for reaction.
Inhibition and catalyst poisoning
Inhibition should be distinguished from catalyst poisoning. An inhibitor only hinders the working of a catalyst without changing it, whilst in catalyst poisoning the catalyst undergoes a chemical reaction that is irreversible in the environment in question (the active catalyst may only be regained by a separate process).
Potency
Index inhibitors, often referred to simply as inhibitors, predictably inhibit metabolism via a given pathway and are commonly used in prospective clinical drug–drug interaction studies.
Inhibitors of CYP (cytochrome P450) enzymes can be classified by their potency, such as the following (a small numeric sketch of the AUC–clearance correspondence is given after the list):
A strong inhibitor is one that causes at least a 5-fold increase in the plasma AUC values, or more than an 80% decrease in clearance of substrates (i.e., clearance more than 5 times slower than usual).
A moderate inhibitor is one that causes at least a 2-fold increase in the plasma AUC values, or a 50–80% decrease in clearance of substrates (clearance 2 to 5 times slower than usual).
A weak inhibitor is one that causes at least a 1.25-fold but less than 2-fold increase in the plasma AUC values, or a 20–50% decrease in clearance of substrates (clearance 1.25 to 2 times slower than usual).
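These cut-offs are internally consistent if one assumes the standard pharmacokinetic relationship AUC ≈ Dose/CL, so that the AUC ratio equals 1/(1 − fractional decrease in clearance). The small Python sketch below is an illustration of that correspondence, not a formula quoted in the source.

```python
def auc_ratio(clearance_decrease):
    """AUC(inhibited) / AUC(baseline), assuming AUC = Dose / CL."""
    return 1.0 / (1.0 - clearance_decrease)

print(auc_ratio(0.80))   # 5.0  -> "strong"   (>= 5-fold AUC increase)
print(auc_ratio(0.50))   # 2.0  -> "moderate" (>= 2-fold)
print(auc_ratio(0.20))   # 1.25 -> "weak"     (>= 1.25-fold)
```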
See also
Enzyme inhibition
Catalyst poisoning
References
Catalysis | Reaction inhibitor | [
"Chemistry"
] | 448 | [
"Catalysis",
"Chemical reaction stubs",
"Chemical kinetics",
"Chemical process stubs"
] |
5,465,041 | https://en.wikipedia.org/wiki/Phycologia%20Australica | Phycologia Australica, written by William Henry Harvey, is one of the most important 19th-century works on phycology, the study of algae.
The work, published in five volumes between 1858 and 1863, is the result of Harvey's extensive collecting along the Australian shores during a three-year sabbatical. By the time Harvey set foot in Western Australia, he had already established himself as a leading phycologist, having published several large works on algae from the British Isles, North America and the Southern Ocean (Nereis Australica). The fact that Harvey travelled the globe on several occasions and himself collected the seaweeds which he described in his later publications set him apart from most of his contemporaries, who relied for the most part on specimens collected by others. In addition, Harvey's zest for work meant he sometimes pressed over 700 specimens in a single day, which were distributed to his colleagues as sets of Australian algae. Upon his return to Trinity College in Dublin, Harvey embarked on a mission: the illustration and description of over 300 species of Australian algae, for which he earned the title "father of Australian Phycology".
The dedications and specific epithets of the species commemorate his friend George Clifton, of Fremantle, who assisted Harvey as a collector.
References
Harvey, W.H. 1858. Phycologia australica Vol. 1. London. Pp. [i]-xi + v-viii [Index], pls. I-LX.
Harvey, W.H. 1859. Phycologia australica Vol. 2. London. viii pp., pls. LXI-CXX.
Harvey, W.H. 1860. Phycologia australica Vol. 3. London. viii pp., pls. CXXI-CLXXX.
Harvey, W.H. 1862. Phycologia australica Vol. 4. London. viii pp., pls. CLXXXI-CCXL.
Harvey, W.H. 1863. Phycologia australica Vol. 5. London. Pp. [i]-x + v-lxxiii [Synoptic catalogue], pls. CCXLI-CCC.
External links
Digitized copy of Phycologia australica Vol. 1-5 with metadata connections to the Encyclopedia of Life website at Biodiversity Heritage Library
Searchable database of Phycologia Australica
Biology books
1858 non-fiction books
1859 non-fiction books
1860 non-fiction books
1862 non-fiction books
1863 non-fiction books
1858 in science
1859 in science
1860 in science
1861 in science
1862 in science
1863 in science
Phycology
1850s in science
1860s in science | Phycologia Australica | [
"Biology"
] | 575 | [
"Algae",
"Phycology"
] |
5,465,118 | https://en.wikipedia.org/wiki/K%C5%91nig%27s%20theorem%20%28graph%20theory%29 | In the mathematical area of graph theory, Kőnig's theorem, proved by Dénes Kőnig in 1931, describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs.
Setting
A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges.
It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph, the minimum vertex cover set and the maximum matching set have in fact the same size.
Statement of the theorem
In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.
Example
The bipartite graph shown in the above illustration has 14 vertices; a matching with six edges is shown in blue, and a vertex cover with six vertices is shown in red. There can be no smaller vertex cover, because any vertex cover has to include at least one endpoint of each matched edge (as well as of every other edge), so this is a minimum vertex cover. Similarly, there can be no larger matching, because any matched edge has to include at least one endpoint in the vertex cover, so this is a maximum matching. Kőnig's theorem states that the equality between the sizes of the matching and the cover (in this example, both numbers are six) applies more generally to any bipartite graph.
Proofs
Constructive proof
The following proof provides a way of constructing a minimum vertex cover from a maximum matching. Let G = (V, E) be a bipartite graph and let A, B be the two parts of the vertex set V. Suppose that M is a maximum matching for G.
Construct the flow network G' derived from G in such a way that there are edges of capacity 1 from the source s to every vertex a in A and from every vertex b in B to the sink t, and an edge of capacity infinity from a to b for every edge (a, b) of G.
The size of the maximum matching in G is the size of a maximum flow in G', which, in turn, is the size of a minimum cut in the network G', as follows from the max-flow min-cut theorem.
Let (S, T) be a minimum cut. Partition A and B as A = A_S ∪ A_T and B = B_S ∪ B_T, where A_S = A ∩ S, A_T = A ∩ T, B_S = B ∩ S and B_T = B ∩ T. Then the minimum cut is composed only of edges going from s to A_T or from B_S to t, as any edge from A_S to B_T would make the size of the cut infinite.
Therefore, the size of the minimum cut is equal to |A_T| + |B_S|. On the other hand, A_T ∪ B_S is a vertex cover, as any edge that is not incident to vertices from A_T and B_S must be incident to a pair of vertices from A_S and B_T, which would contradict the fact that there are no edges between A_S and B_T.
Thus, A_T ∪ B_S is a minimum vertex cover of G.
Constructive proof without flow concepts
No vertex in a vertex cover can cover more than one edge of M (because the edge half-overlap would prevent M from being a matching in the first place), so if a vertex cover with |M| vertices can be constructed, it must be a minimum cover.
To construct such a cover, let U be the set of unmatched vertices in A (possibly empty), and let Z be the set of vertices that are either in U or are connected to U by alternating paths (paths that alternate between edges that are in the matching and edges that are not in the matching). Let K = (A \ Z) ∪ (B ∩ Z).
Every edge e in E either belongs to an alternating path (and has a right endpoint in Z), or it has a left endpoint in A \ Z. For, if e is matched but not in an alternating path, then its left endpoint cannot be in an alternating path (because two matched edges can not share a vertex) and thus belongs to A \ Z. Alternatively, if e is unmatched but not in an alternating path, then its left endpoint cannot be in an alternating path, for such a path could be extended by adding e to it. Thus, K forms a vertex cover.
Additionally, every vertex in K is an endpoint of a matched edge.
For, every vertex in A \ Z is matched because Z is a superset of U, the set of unmatched left vertices.
And every vertex in B ∩ Z must also be matched, for if there existed an alternating path to an unmatched vertex then changing the matching by removing the matched edges from this path and adding the unmatched edges in their place would increase the size of the matching. However, no matched edge can have both of its endpoints in K. Thus, K is a vertex cover of cardinality equal to |M|, and must be a minimum vertex cover.
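The construction above translates directly into code. The following Python sketch illustrates the same alternating-path argument (the function and variable names are choices made here, and the code assumes that the supplied matching really is maximum):

    from collections import deque

    def min_vertex_cover(left, right, edges, matching):
        """Kőnig's construction: given a bipartite graph and a maximum matching,
        return a minimum vertex cover.

        left, right : iterables of vertices on each side
        edges       : iterable of (l, r) pairs with l in `left`, r in `right`
        matching    : dict mapping each matched left vertex to its right partner
        """
        adj = {}                                   # left vertex -> right neighbours
        for l, r in edges:
            adj.setdefault(l, []).append(r)
        partner_of_right = {r: l for l, r in matching.items()}

        # Z: vertices reachable from unmatched left vertices by alternating paths.
        unmatched_left = [l for l in left if l not in matching]
        Z = set(unmatched_left)
        queue = deque(unmatched_left)
        while queue:
            l = queue.popleft()
            for r in adj.get(l, []):
                if matching.get(l) == r:           # leave a left vertex only on a non-matching edge
                    continue
                if r not in Z:
                    Z.add(r)
                    l2 = partner_of_right.get(r)   # leave a right vertex only on a matching edge
                    if l2 is not None and l2 not in Z:
                        Z.add(l2)
                        queue.append(l2)

        return (set(left) - Z) | (set(right) & Z)

    # Tiny example: path a -- x -- b with maximum matching {a: x}.
    print(min_vertex_cover(['a', 'b'], ['x'], [('a', 'x'), ('b', 'x')], {'a': 'x'}))
    # -> {'x'}: a single cover vertex, matching the single matched edge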
Proof using linear programming duality
To explain this proof, we first have to extend the notion of a matching to that of a fractional matching - an assignment of a weight in [0,1] to each edge, such that the sum of weights near each vertex is at most 1 (an integral matching is a special case of a fractional matching in which the weights are in {0,1}). Similarly we define a fractional vertex-cover - an assignment of a non-negative weight to each vertex, such that the sum of weights in each edge is at least 1 (an integral vertex-cover is a special case of a fractional vertex-cover in which the weights are in {0,1}).
The maximum fractional matching size in a graph G is the solution of the following linear program:

Maximize 1_E · x
subject to: x ≥ 0_E
and A_G · x ≤ 1_V,

where x is a vector of size |E| in which each element represents the weight of an edge in the fractional matching. 1_E is a vector of |E| ones, so the first line indicates the size of the matching. 0_E is a vector of |E| zeros, so the second line indicates the constraint that the weights are non-negative. 1_V is a vector of |V| ones and A_G is the incidence matrix of G, so the third line indicates the constraint that the sum of weights near each vertex is at most 1.
Similarly, the minimum fractional vertex-cover size in G is the solution of the following LP:

Minimize 1_V · y
subject to: y ≥ 0_V
and A_G^T · y ≥ 1_E,

where y is a vector of size |V| in which each element represents the weight of a vertex in the fractional cover. Here, the first line is the size of the cover, the second line represents the non-negativity of the weights, and the third line represents the requirement that the sum of weights near each edge must be at least 1.
Now, the minimum fractional cover LP is exactly the dual linear program of the maximum fractional matching LP. Therefore, by the LP duality theorem, both programs have the same solution. This fact is true not only in bipartite graphs but in arbitrary graphs:

In any graph, the largest size of a fractional matching equals the smallest size of a fractional vertex cover.

What makes bipartite graphs special is that, in bipartite graphs, both these linear programs have optimal solutions in which all variable values are integers. This follows from the fact that in the fractional matching polytope of a bipartite graph, all extreme points have only integer coordinates, and the same is true for the fractional vertex-cover polytope. Therefore the above theorem implies:
In any bipartite graph, the largest size of a matching equals the smallest size of a vertex cover.
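As a small numerical illustration of this duality (a sketch only: the three-edge example graph and the use of SciPy's linprog are choices made here, not part of the original presentation), one can solve the fractional matching LP for a tiny bipartite graph and observe that the optimum is already attained at an integer solution:

    import numpy as np
    from scipy.optimize import linprog

    # Tiny bipartite graph: left vertices {0, 1}, right vertices {2, 3},
    # edges e0 = (0, 2), e1 = (0, 3), e2 = (1, 2).
    edges = [(0, 2), (0, 3), (1, 2)]
    n_vertices = 4

    # Incidence matrix A_G: one row per vertex, one column per edge.
    A = np.zeros((n_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        A[u, j] = 1
        A[v, j] = 1

    # Maximum fractional matching: maximize sum(x) subject to A x <= 1, x >= 0.
    # linprog minimizes, so the objective is negated.
    res = linprog(c=-np.ones(len(edges)), A_ub=A, b_ub=np.ones(n_vertices),
                  bounds=[(0, 1)] * len(edges), method="highs")
    print(-res.fun)   # 2.0: equals the integer maximum matching and minimum vertex cover size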
Algorithm
The constructive proof described above provides an algorithm for producing a minimum vertex cover given a maximum matching. Thus, the Hopcroft–Karp algorithm for finding maximum matchings in bipartite graphs may also be used to solve the vertex cover problem efficiently in these graphs.
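Library implementations of both steps are available in practice; for instance, the following sketch (assuming a reasonably recent NetworkX release, with an arbitrary example graph) computes a maximum matching with Hopcroft–Karp and then derives a minimum vertex cover via Kőnig's construction:

    import networkx as nx
    from networkx.algorithms import bipartite

    # A tiny bipartite graph: left vertices {0, 1}, right vertices {2, 3}.
    G = nx.Graph([(0, 2), (0, 3), (1, 2)])
    top = {0, 1}

    matching = bipartite.hopcroft_karp_matching(G, top_nodes=top)  # maximum matching (stored in both directions)
    cover = bipartite.to_vertex_cover(G, matching, top_nodes=top)  # Kőnig's construction

    print(len(matching) // 2, len(cover))   # both 2, as Kőnig's theorem predicts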
Despite the equivalence of the two problems from the point of view of exact solutions, they are not equivalent for approximation algorithms. Bipartite maximum matchings can be approximated arbitrarily accurately in constant time by distributed algorithms; in contrast, approximating the minimum vertex cover of a bipartite graph requires at least logarithmic time.
Example
In the graph shown in the introduction, take A to be the set of vertices in the bottom layer of the diagram and B to be the set of vertices in the top layer of the diagram. From left to right label the vertices in the bottom layer with the numbers 1, …, 7 and label the vertices in the top layer with the numbers 8, …, 14. The set U of unmatched vertices from A is {1}. The alternating paths starting from U are 1–10–3–13–7, 1–10–3–11–5–13–7, 1–11–5–13–7, 1–11–5–10–3–13–7, and all subpaths of these starting from 1. The set Z is therefore {1,3,5,7,10,11,13}, resulting in K = (A \ Z) ∪ (B ∩ Z) = {2,4,6} ∪ {10,11,13}, and the minimum vertex cover {2,4,6,10,11,13}.
Non-bipartite graphs
For graphs that are not bipartite, the minimum vertex cover may be larger than the maximum matching. Moreover, the two problems are very different in complexity: maximum matchings can be found in polynomial time for any graph, while minimum vertex cover is NP-complete.
The complement of a vertex cover in any graph is an independent set, so a minimum vertex cover is complementary to a maximum independent set; finding maximum independent sets is another NP-complete problem. The equivalence between matching and covering articulated in Kőnig's theorem allows minimum vertex covers and maximum independent sets to be computed in polynomial time for bipartite graphs, despite the NP-completeness of these problems for more general graph families.
History
Kőnig's theorem is named after the Hungarian mathematician Dénes Kőnig. Kőnig had announced in 1914 and published in 1916 the results that every regular bipartite graph has a perfect matching, and more generally that the chromatic index of any bipartite graph (that is, the minimum number of matchings into which it can be partitioned) equals its maximum degree – the latter statement is known as Kőnig's line coloring theorem. However, some authors attribute Kőnig's theorem itself to a later paper of Kőnig (1931).
According to some accounts, Kőnig attributed the idea of studying matchings in bipartite graphs to his father, mathematician Gyula Kőnig. In Hungarian, Kőnig's name has a double acute accent, but his theorem is sometimes spelled (incorrectly) in German characters, with an umlaut.
Related theorems
Kőnig's theorem is equivalent to many other min-max theorems in graph theory and combinatorics, such as Hall's marriage theorem and Dilworth's theorem. Since bipartite matching is a special case of maximum flow, the theorem also results from the max-flow min-cut theorem.
Connections with perfect graphs
A graph is said to be perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Any bipartite graph is perfect, because each of its subgraphs is either bipartite or independent; in a bipartite graph that is not independent the chromatic number and the size of the largest clique are both two while in an independent set the chromatic number and clique number are both one.
A graph is perfect if and only if its complement is perfect, and Kőnig's theorem can be seen as equivalent to the statement that the complement of a bipartite graph is perfect. For, each color class in a coloring of the complement of a bipartite graph is of size at most 2 and the classes of size 2 form a matching, a clique in the complement of a graph G is an independent set in G, and as we have already described an independent set in a bipartite graph G is a complement of a vertex cover in G. Thus, any matching M in a bipartite graph G with n vertices corresponds to a coloring of the complement of G with n-|M| colors, which by the perfection of complements of bipartite graphs corresponds to an independent set in G with n-|M| vertices, which corresponds to a vertex cover of G with |M| vertices. Conversely, Kőnig's theorem proves the perfection of the complements of bipartite graphs, a result later proven in a more explicit form.
One can also connect Kőnig's line coloring theorem to a different class of perfect graphs, the line graphs of bipartite graphs. If G is a graph, the line graph L(G) has a vertex for each edge of G, and an edge for each pair of adjacent edges in G. Thus, the chromatic number of L(G) equals the chromatic index of G. If G is bipartite, the cliques in L(G) are exactly the sets of edges in G sharing a common endpoint. Now Kőnig's line coloring theorem, stating that the chromatic index equals the maximum vertex degree in any bipartite graph, can be interpreted as stating that the line graph of a bipartite graph is perfect.
Since line graphs of bipartite graphs are perfect, the complements of line graphs of bipartite graphs are also perfect. A clique in the complement of the line graph of G is just a matching in G. And a coloring in the complement of the line graph of G, when G is bipartite, is a partition of the edges of G into subsets of edges sharing a common endpoint; the endpoints shared by each of these subsets form a vertex cover for G. Therefore, Kőnig's theorem itself can also be interpreted as stating that the complements of line graphs of bipartite graphs are perfect.
Weighted variants
Kőnig's theorem can be extended to weighted graphs.
Egerváry's theorem for edge-weighted graphs
Jenő Egerváry (1931) considered graphs in which each edge e has a non-negative integer weight w_e. The weight vector is denoted by w. The w-weight of a matching is the sum of weights of the edges participating in the matching. A w-vertex-cover is a multiset of vertices ("multiset" means that each vertex may appear several times), in which each edge e is adjacent to at least w_e vertices. Egerváry's theorem says:

In any edge-weighted bipartite graph, the maximum w-weight of a matching equals the smallest number of vertices in a w-vertex-cover.

The maximum w-weight of a fractional matching is given by the LP:

Maximize w · x
subject to: x ≥ 0_E
and A_G · x ≤ 1_V.

And the minimum number of vertices in a fractional w-vertex-cover is given by the dual LP:

Minimize 1_V · y
subject to: y ≥ 0_V
and A_G^T · y ≥ w.

As in the proof of Kőnig's theorem, the LP duality theorem implies that the optimal values are equal (for any graph), and the fact that the graph is bipartite implies that these programs have optimal solutions in which all values are integers.
Theorem for vertex-weighted graphs
One can consider a graph in which each vertex v has a non-negative integer weight b_v. The weight vector is denoted by b. The b-weight of a vertex-cover is the sum of b_v for all v in the cover. A b-matching is an assignment of a non-negative integral weight to each edge, such that the sum of weights of edges adjacent to any vertex v is at most b_v. Egerváry's theorem can be extended, using a similar argument, to graphs that have both edge-weights w and vertex-weights b:
In any edge-weighted vertex-weighted bipartite graph, the maximum w-weight of a b-matching equals the minimum b-weight of vertices in a w-vertex-cover.
See also
Kőnig's property in hypergraphs
Notes
References
Theorems in graph theory
Articles containing proofs
Perfect graphs
Matching (graph theory)
Bipartite graphs | Kőnig's theorem (graph theory) | [
"Mathematics"
] | 3,370 | [
"Matching (graph theory)",
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Articles containing proofs",
"Theorems in graph theory"
] |
5,465,213 | https://en.wikipedia.org/wiki/Coefficient%20diagram%20method | In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design. The performance of the closed-loop system is monitored by the coefficient diagram.
The most considerable advantages of CDM can be listed as follows:
The design procedure is easily understandable, systematic and useful. Therefore, the coefficients of the CDM controller polynomials can be determined more easily than those of the PID or other types of controller. This creates the possibility of an easy realisation for a new designer to control any kind of system.
There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials, as described in the CDM literature. For this reason, the designer can easily realize many control systems having different performance properties for a given control problem in a wide range of freedom.
In PID control, different tuning methods must be developed for time-delay processes with different properties. In the CDM technique, by contrast, a single design procedure is sufficient, which is an outstanding advantage.
It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM.
It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an "improved LQG", because the order of the controller is smaller and weight selection rules are also given.
It is usually required that the controller for a given plant should be designed under some practical limitations.
The controller is desired to be of minimum degree, minimum phase (if possible) and stable. It must have sufficient bandwidth and respect power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. A CDM controller designed with all these limitations in mind is of the lowest possible degree, has a convenient bandwidth, and yields a unit step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and an economical design.
Although the main principles of CDM have been known since the 1950s, the first systematic method was proposed by Shunji Manabe. He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of the previous experience and knowledge of the controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems.
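In Manabe's formulation, the coefficient diagram is commonly read through two derived quantities: the stability indices γ_i = a_i^2 / (a_(i+1) · a_(i-1)) and the equivalent time constant τ = a_1 / a_0, computed from the coefficients a_i of the target characteristic polynomial. These definitions come from the general CDM literature rather than from this text, so the following Python sketch, with placeholder coefficients, should be read as illustrative only:

    def cdm_indices(a):
        """Stability indices and equivalent time constant used in the coefficient
        diagram method, for a characteristic polynomial
            P(s) = a[n]*s^n + ... + a[1]*s + a[0]
        given as the list a = [a0, a1, ..., an] in ascending powers of s."""
        n = len(a) - 1
        gamma = [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, n)]
        tau = a[1] / a[0]                      # equivalent time constant
        return gamma, tau

    # Placeholder third-order example: P(s) = s^3 + 5*s^2 + 10*s + 4.
    gamma, tau = cdm_indices([4.0, 10.0, 5.0, 1.0])
    print(gamma)   # [5.0, 2.5]  -> gamma_1, gamma_2
    print(tau)     # 2.5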
Many control systems have been designed successfully using CDM. It is very easy to design a controller under the conditions of stability, time domain performance and robustness. The close relations between these conditions and coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameters tuning.
See also
Polynomials
References
External links
Coefficient Diagram Method
Polynomials
Control theory | Coefficient diagram method | [
"Mathematics"
] | 684 | [
"Applied mathematics",
"Control theory",
"Polynomials",
"Algebra",
"Dynamical systems"
] |
5,466,491 | https://en.wikipedia.org/wiki/Xylan | Xylan (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant polysaccharide on Earth, after cellulose and chitin.
Composition
Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituted groups, xylan can be categorized into three classes: (i) glucuronoxylan (GX), (ii) neutral arabinoxylan (AX) and (iii) glucuronoarabinoxylan (GAX). In some cases, xylans contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues.
Occurrence
Plant cell structure
Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, the xylan content of hardwoods is 10-35%, whereas it is 10-15% in softwoods. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone.
Algae
Some macrophytic green algae contain xylan (specifically homoxylan) especially those within the Codium and Bryopsis genera where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae.
Food science
The quality of cereal flours and the hardness of dough are affected by their xylan content, which thus plays a significant role in the bread industry. Xylose, the main constituent of xylan, can be converted into xylitol, which is used as a natural food sweetener that helps to reduce dental cavities and acts as a sugar substitute for diabetic patients. Poultry feed has a high percentage of xylan.
Xylan is one of the foremost anti-nutritional factors in commonly used feedstuff raw materials. Xylooligosaccharides produced from xylan are considered "functional food" or dietary fibers due to their potential prebiotic properties.
Crystallinity
The regular branching patterns of xylans may facilitate their co-crystallization with cellulose in the plant cell wall. Xylan also tends to crystallize from aqueous solution. Additional polymorphs of (1→4)-β-D-xylan have been obtained by crystallization from non-aqueous environments.
Biosynthesis
Several glycosyltransferases are involved in the biosynthesis of xylans.
In eukaryotes, GTs represent about 1% to 2% of gene products. GTs are assembled into complexes existing in the Golgi apparatus. However, no xylan synthase complexes have been isolated from Arabidopsis tissues (dicot). The first gene involved in the biosynthesis of xylan was revealed on xylem mutants (irx) in Arabidopsis thaliana because of some mutation affecting xylan biosynthesis genes. As a result, abnormal plant growth due to thinning and weakening of secondary xylem cell walls was seen. Arabidopsis mutant irx9 (At2g37090), irx14 (At4g36890), irx10/gut2 (At1g27440), irx10-L/gut1 (At5g61840) showed defect in xylan backbone biosynthesis. Arabidopsis mutants irx7, irx8, and parvus are thought to be related to the reducing end oligosaccharide biosynthesis. Thus, many genes have been associated with xylan biosynthesis but their biochemical mechanism is still unknown. Zeng et al. (2010) immuno-purified xylan synthase activity from etiolated wheat (Triticum aestivum) microsomes. Jiang et al. (2016) reported a xylan synthase complex (XSC) from wheat that has a central core formed of two members of the GT43 and GT47 families (CAZy database). They purified xylan synthase activity from wheat seedlings through proteomics analysis and showed that two members of TaGT43 and TaGT47 are sufficient for the synthesis of a xylan-like polymer in vitro.
Breakdown
Xylanase converts xylan into xylose. Given that plants contain up to 30% xylan, xylanase is important to the nutrient cycle. The degradation of xylan and other hemicelluloses is relevant to the production of biofuels. Being less crystalline and more highly branched, these hemicelluloses are particularly susceptible to hydrolysis.
Research
As a major component of plants, xylan is potentially a significant source of renewable energy especially for second generation biofuels. However, xylose (backbone of xylan) is a pentose sugar that is hard to ferment during biofuel conversion because microorganisms like yeast cannot ferment pentose naturally.
References
Polysaccharides
Cell biology | Xylan | [
"Chemistry",
"Biology"
] | 1,276 | [
"Carbohydrates",
"Cell biology",
"Polysaccharides"
] |
5,466,576 | https://en.wikipedia.org/wiki/Recovery%20Console | The Recovery Console is a feature of the Windows 2000, Windows XP and Windows Server 2003 operating systems. It provides the means for administrators to perform a limited range of tasks using a command-line interface.
Its primary function is to enable administrators to recover from situations where Windows does not boot as far as presenting its graphical user interface. The Recovery Console provides a way to access the hard drive in an emergency through a command prompt. It can be started from the Windows 2000 / XP / 2003 Setup CD.
The Recovery Console can be accessed in two ways, either through the original installation media used to install Windows, or by installing it onto the hard drive and adding it to the NTLDR menu. However, the latter option is much more risky than the former one because it requires that the computer can boot to the point that NTLDR loads, or else the Recovery Console will not work at all.
Abilities
The Recovery Console has a simple command-line interpreter (or CLI). Many of the available commands closely resemble the commands that are normally available in cmd.exe, namely attrib, copy, del, and so forth.
From the Recovery Console an administrator can:
create and remove directories, and copy, erase, display, and rename files
enable and disable services (which modifies the service control database in the registry, to take effect when the system is next bootstrapped)
repair the boot configuration file, using the bootcfg command (see the example session below)
write a new master boot record to a disk, using the fixmbr command
write a new volume boot record to a volume, using the fixboot command
format volumes
expand files from the compressed format in which they are stored on the installation CD-ROM
perform a full chkdsk scan to repair corrupted disks and files, especially if the computer cannot be started properly
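For example, a typical repair session might look like the following (illustrative only: the prompt, the drive letter and the exact steps depend on the installation, and bootcfg is available only from Windows XP onwards):

    C:\WINDOWS> chkdsk /r
    C:\WINDOWS> fixboot C:
    C:\WINDOWS> fixmbr
    C:\WINDOWS> bootcfg /rebuild
    C:\WINDOWS> exit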
Filesystem access on the Recovery Console is by default severely limited. An administrator using the Recovery Console has only read-only access to all volumes except for the boot volume, and even on the boot volume only access to the root directory and to the Windows system directory (e.g. \WINNT). This can be changed by changing Security Policies to enable read/write access to the complete file system including copying files from removable media (i.e. floppy drives).
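With the security policy that allows floppy copy and access to all drives and folders enabled, the restrictions can then be relaxed from within the console itself using the set command on Windows XP and later. A sketch of the usual sequence (the variable names follow the documented Recovery Console environment variables, and the spaces around the equals sign are reportedly required; verify against the specific Windows version):

    C:\WINDOWS> set AllowAllPaths = TRUE
    C:\WINDOWS> set AllowRemovableMedia = TRUE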
Commands
The following is a list of the Recovery Console internal commands:
attrib
batch
bootcfg (introduced in Windows XP)
cd
chdir
chkdsk
cls
copy
del
delete
dir
disable
diskpart
enable
exit
expand
fixboot
fixmbr
format
help
listsvc
logon
map
md
mkdir
more
rd
ren
rename
rmdir
set (introduced in Windows XP)
systemroot
type
Although it appears in the list of commands available by using the help command, and in many articles about the Recovery Console (including those authored by Microsoft), the net command is not available. No protocol stacks are loaded, so there is no way to connect to a shared folder on a remote computer as implied.
See also
Emergency Repair Disk
Comparison of command shells
References
External links
Description of the Recovery Console
Windows components
Windows command shells
Live CD | Recovery Console | [
"Technology"
] | 642 | [
"Windows commands",
"Computing commands"
] |
5,466,649 | https://en.wikipedia.org/wiki/Stable%20manifold | In mathematics, and in particular the study of dynamical systems, the idea of stable and unstable sets or stable and unstable manifolds give a formal mathematical definition to the general notions embodied in the idea of an attractor or repellor. In the case of hyperbolic dynamics, the corresponding notion is that of the hyperbolic set.
Physical example
The gravitational tidal forces acting on the rings of Saturn provide an easy-to-visualize physical example. The tidal forces flatten the ring into the equatorial plane, even as they stretch it out in the radial direction. Imagining the rings to be sand or gravel particles ("dust") in orbit around Saturn, the tidal forces are such that any perturbations that push particles above or below the equatorial plane results in that particle feeling a restoring force, pushing it back into the plane. Particles effectively oscillate in a harmonic well, damped by collisions. The stable direction is perpendicular to the ring. The unstable direction is along any radius, where forces stretch and pull particles apart. Two particles that start very near each other in phase space will experience radial forces causing them to diverge, radially. These forces have a positive Lyapunov exponent; the trajectories lie on a hyperbolic manifold, and the movement of particles is essentially chaotic, wandering through the rings. The center manifold is tangential to the rings, with particles experiencing neither compression nor stretching. This allows second-order gravitational forces to dominate, and so particles can be entrained by moons or moonlets in the rings, phase locking to them. The gravitational forces of the moons effectively provide a regularly repeating small kick, each time around the orbit, akin to a kicked rotor, such as found in a phase-locked loop.
The discrete-time motion of particles in the ring can be approximated by the Poincaré map. The map effectively provides the transfer matrix of the system. The eigenvector associated with the largest eigenvalue of the matrix is the Frobenius–Perron eigenvector, which is also the invariant measure, i.e. the actual density of the particles in the ring. All other eigenvectors of the transfer matrix have smaller eigenvalues, and correspond to decaying modes.
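As a purely numerical illustration of that last point (a toy three-state transfer matrix chosen arbitrarily here, not a model of the rings), the invariant measure can be read off as the Frobenius–Perron eigenvector of a stochastic matrix:

    import numpy as np

    # A small column-stochastic transfer matrix: entry T[i, j] is the probability
    # of moving from state j to state i in one step of the discrete-time map.
    T = np.array([[0.90, 0.20, 0.10],
                  [0.05, 0.70, 0.30],
                  [0.05, 0.10, 0.60]])

    eigvals, eigvecs = np.linalg.eig(T)
    k = np.argmax(eigvals.real)          # leading (Frobenius-Perron) eigenvalue, equal to 1 here
    pi = np.abs(eigvecs[:, k].real)
    pi /= pi.sum()                       # normalise into a probability density

    print(eigvals.real[k])               # ~1.0
    print(pi)                            # the invariant measure; all other modes decay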
Definition
The following provides a definition for the case of a system that is either an iterated function or has discrete-time dynamics. Similar notions apply for systems whose time evolution is given by a flow.
Let X be a topological space, and f : X → X a homeomorphism. If p is a fixed point for f, the stable set of p is defined by

W^s(f, p) = {q ∈ X : f^n(q) → p as n → ∞}

and the unstable set of p is defined by

W^u(f, p) = {q ∈ X : f^(-n)(q) → p as n → ∞}.

Here, f^(-1) denotes the inverse of the function f, i.e.
f ∘ f^(-1) = f^(-1) ∘ f = id_X, where id_X is the identity map on X.

If p is a periodic point of least period k, then it is a fixed point of f^k, and the stable and unstable sets of p are defined by

W^s(f, p) = W^s(f^k, p)

and

W^u(f, p) = W^u(f^k, p).

Given a neighborhood U of p, the local stable and unstable sets of p are defined by

W^s_loc(f, p, U) = {q ∈ U : f^n(q) ∈ U for all n ≥ 0}

and

W^u_loc(f, p, U) = {q ∈ U : f^(-n)(q) ∈ U for all n ≥ 0}.

If X is metrizable, we can define the stable and unstable sets for any point p by

W^s(f, p) = {q ∈ X : d(f^n(q), f^n(p)) → 0 as n → ∞}

and

W^u(f, p) = {q ∈ X : d(f^(-n)(q), f^(-n)(p)) → 0 as n → ∞},

where d is a metric for X. This definition clearly coincides with the previous one when p is a periodic point.
Suppose now that X is a compact smooth manifold, and f is a C^k diffeomorphism, k ≥ 1. If p is a hyperbolic periodic point, the stable manifold theorem assures that for some neighborhood U of p, the local stable and unstable sets are embedded disks, whose tangent spaces at p are E^s and E^u (the stable and unstable spaces of the linearization of f at p), respectively; moreover, they vary continuously (in a certain sense) in a neighborhood of f in the C^k topology of Diff^k(X) (the space of all C^k diffeomorphisms from X to itself). Finally, the stable and unstable sets are injectively immersed disks. This is why they are commonly called stable and unstable manifolds. This result is also valid for nonperiodic points, as long as they lie in some hyperbolic set (stable manifold theorem for hyperbolic sets).
Remark
If X is a (finite-dimensional) vector space and f an isomorphism, its stable and unstable sets are called stable space and unstable space, respectively.
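For a concrete linear illustration (a minimal sketch; the matrix and sample points below are arbitrary choices, not taken from the text above), consider a hyperbolic linear map on R^2 that contracts the x-axis and expands the y-axis, so that the stable space is the x-axis and the unstable space is the y-axis:

    import numpy as np

    # Hyperbolic linear map f(v) = A v with eigenvalues 0.5 (stable) and 2.0 (unstable).
    A = np.array([[0.5, 0.0],
                  [0.0, 2.0]])
    A_inv = np.linalg.inv(A)

    def iterate(M, v, n):
        """Apply the linear map M to the point v, n times."""
        for _ in range(n):
            v = M @ v
        return v

    p_stable = np.array([1.0, 0.0])      # a point of the x-axis (stable space)
    p_unstable = np.array([0.0, 1.0])    # a point of the y-axis (unstable space)

    # Forward iteration sends points of the stable space to the fixed point 0 ...
    print(iterate(A, p_stable, 20))          # ~[9.5e-07, 0]
    # ... while backward iteration does the same for points of the unstable space.
    print(iterate(A_inv, p_unstable, 20))    # ~[0, 9.5e-07]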
See also
Invariant manifold
Center manifold
Limit set
Julia set
Slow manifold
Inertial manifold
Normally hyperbolic invariant manifold
Lagrangian coherent structure
References
Limit sets
Dynamical systems
Manifolds | Stable manifold | [
"Physics",
"Mathematics"
] | 870 | [
"Limit sets",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Mechanics",
"Manifolds",
"Dynamical systems"
] |
5,467,149 | https://en.wikipedia.org/wiki/Cheating%20%28biology%29 | Cheating is a term used in behavioral ecology and ethology to describe behavior whereby organisms receive a benefit at the cost of other organisms. Cheating is common in many mutualistic and altruistic relationships. A cheater is an individual who does not cooperate (or cooperates less than their fair share) but can potentially gain the benefit from others cooperating. Cheaters are also those who selfishly use common resources to maximize their individual fitness at the expense of a group. Natural selection favors cheating, but there are mechanisms to regulate it. The stress gradient hypothesis states that facilitation, cooperation or mutualism should be more common in stressful environments, while cheating, competition or parasitism are common in benign environments (i.e., nutrient excess).
Theoretical models
Organisms communicate and cooperate to perform a wide range of behaviors. Mutualism, or mutually beneficial interactions between species, is common in ecological systems. These interactions can be thought of "biological markets" in which species offer partners goods that are relatively inexpensive for them to produce and receive goods that are more expensive or even impossible for them to produce. However, these systems provide opportunities for exploitation by individuals that can obtain resources while providing nothing in return. Exploiters can take on several forms: individuals outside a mutualistic relationship who obtain a commodity in a way that confers no benefit to either mutualist, individuals who receive benefits from a partner but have lost the ability to give any in return, or individuals who have the option of behaving mutualistically towards their partners but chose not to do so.
Cheaters, who do not cooperate but benefit from others who do cooperate gain a competitive edge. In an evolutionary context, this competitive edge refers to a greater ability to survive or to reproduce. If individuals who cheat are able to gain survivorship and reproductive benefits while incurring no costs, natural selection should favor cheaters. However, mechanisms also exist that prevent cheaters from undermining mutualistic systems. One main factor is that the advantages of cheating are often frequency-dependent. Frequency-dependent selection occurs when the fitness of a phenotype depends on its frequency relative to other phenotypes in a population. Cheater phenotypes often display negative frequency-dependent selection, where fitness increases as a phenotype becomes less common and vice versa. In other words, cheaters do best (in terms of evolutionary benefits such as increased survival and reproduction) when there are relatively few of them, but as cheaters become more abundant, they do worse.
For example, in Escherichia coli colonies, there are antibiotic-sensitive "cheaters" that persist at low numbers on antibiotic-laced mediums when in a cooperative colony. These cheaters enjoy the benefit of others producing antibiotic-resistant agents while producing none themselves. However, as numbers increase, if they persist in not producing the antibiotic agent themselves, they are more likely to be negatively impacted by the antibiotic substrate because there is less antibiotic agent to protect everyone. Thus, cheaters can persist in a population because their exploitative behavior gives them an advantage when they exist at low frequencies but these benefits are diminished when they are greater in number.
Others have proposed that cheating (exploitive behavior) can stabilize cooperation in mutualistic systems. In many mutualistic systems, there will be feedback benefits to those that cooperate. For instance, the fitness of both partners may be improved. If there is a high reward or many benefits for the individual that initiated the cooperative behavior, mutualism should be selected for. When researchers investigated the co-evolution of cooperation and choice in a choosy host and its symbiont (an organism that lives in a relationship that benefits all parties involved), their model indicated that although choice and cooperation may be initially selected for, this would often be unstable. In other words, one cooperative partner will choose another cooperative partner if given a choice. However, if this choice is made over and over, variation is removed and this selection can no longer be maintained. This situation is similar to the lek paradox in female choice. For example, in lek paradox, if females consistently choose for a particular male trait, genetic variance for that trait should eventually be eliminated, removing the benefits of choice. However, that choice somehow still persists.
One theory states that cheating maintains genetic variation in the face of selection for mutualism. One study shows that a small influx of immigrants with a tendency to cooperate less can generate enough genetic variability to stabilize selection for mutualism. This suggests that the presence of exploitive individuals, otherwise known as cheaters, contributes enough genetic variation to maintain mutualism itself. Both this theory and the negative frequency-dependent theory suggest that cheating exists as part of a stable mixed evolutionary strategy with mutualism. In other words, cheating is a stable strategy used by individuals in a population where many other individuals cooperate. Another study, using a mathematical game model, supports the idea that cheating can exist as a mixed strategy with mutualism. Thus, cheating can arise and be maintained in mutualistic populations.
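The negative frequency dependence described above can be illustrated with a toy evolutionary game (a deliberately simplified sketch, not the model used in any of the studies mentioned here): with snowdrift-style payoffs, cheaters do best while they are rare, and replicator dynamics settle at a stable mixture in which cheaters persist at low frequency.

    # Toy replicator dynamics for a snowdrift-style game.  Cooperators pay a cost c
    # to produce a shared benefit b; cheaters pay nothing.  With these payoffs,
    # cheating pays off only while cheaters are rare (negative frequency dependence).
    b, c = 3.0, 1.0                        # benefit and cost, with b > c

    def fitness(p):
        """Average payoffs of cheaters and cooperators when a fraction p cheats."""
        w_cheat = (1 - p) * b                             # cheaters gain only against cooperators
        w_coop = (1 - p) * (b - c / 2) + p * (b - c)      # cooperators share or shoulder the cost
        return w_cheat, w_coop

    p = 0.01                               # start with 1% cheaters
    for _ in range(3000):
        w_cheat, w_coop = fitness(p)
        w_mean = p * w_cheat + (1 - p) * w_coop
        p += 0.01 * p * (w_cheat - w_mean) # discrete-time replicator update

    print(round(p, 2))                     # ~0.2 = c / (2*b - c): cheaters persist, but stay rare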
Examples
Studies of cheating and dishonest communication in populations presuppose an organismal system that cooperates. Without a collective population that has signaling and interactions among individuals, behaviors such as cheating do not manifest. In other words, in order to study cheating behavior, a model system that engages in cooperation is needed. Models that provide insight into cheating include the social amoeba Dictyostelium discoideum; eusocial insects, such as ants, bees, and wasps; and inter-specific interactions found in cleaning mutualisms. Common examples of cleaning mutualisms include cleaner fish such as wrasses and gobies, and some cleaner shrimp.
In Dictyostelium discoideum
Dictyostelium discoideum is a widely used model for cooperation and the development of multicellularity. This species of amoeba is most commonly found in a haploid, single-celled state that feeds independently and undergoes asexual reproduction. However, when the scarcity of food sources causes individual cells to starve, roughly 10⁴ to 10⁵ cells aggregate to form a mobile, multicellular structure dubbed a "slug". In the wild, aggregates generally contain multiple genotypes, resulting in chimeric mixtures. Unlike clonal (genetically identical) aggregates typically found in multicellular organisms, the potential for competition exists in chimeric aggregates. For example, because individuals in the aggregate contain different genomes, differences in fitness can result in conflict of interest among cells in the aggregate, where different genotypes could potentially compete against each other for resources and reproduction. In Dictyostelium discoideum, roughly 20% of the cells in the aggregate become dead to make the stalk of a fruiting body. The remaining 80% of cells become spores in the sorus of the fruiting body, which can germinate again once conditions are more favorable. In this case, 20% of the cells must give up reproduction so that the fruiting body forms successfully. This makes chimeric aggregates of Dictyostelium discoideum susceptible to cheating individuals that take advantage of the reproductive behavior without paying the fair price. In other words, if certain individuals tend to become a part of the sorus more frequently, they can gain increased benefit from the fruiting body system without sacrificing their own opportunities to reproduce. Cheating behavior in D. discoideum is well established, and many studies have attempted to elucidate the evolutionary and genetic mechanisms underlying the behavior. Having a 34Mb genome that is completely sequenced and well annotated makes D. discoideum a useful model in studying the genetic bases and molecular mechanisms of cheating, and in a broader sense, social evolution.
In eusocial insects
Eusocial insects also serve as valuable tools in studying cheating. Eusocial insects behave cooperatively, where members of the community forgo reproduction to assist a few individuals to reproduce. Such model systems have potential for conflict of interest to arise among individuals, and thus also have potential for cheating to occur. Eusocial insects in the order Hymenoptera, which includes bees and wasps, exhibit good examples of conflicts of interest present in insect societies. In these systems, queen bees and wasps can mate and lay fertilized eggs that hatch into females. On the other hand, workers of most species in Hymenoptera can produce eggs, but cannot produce fertilized eggs due to loss of mating ability. Workers that lay eggs represent a cost for the colony because workers that lay eggs often do significantly less work, and thus negatively impact the health of the colony (for example: decreased amount of collected food, or less attention given to tending the queen's eggs). In this case, a conflict of interest arises between the workers and the colony. The workers should lay eggs in order to pass on their genes; however, as a colony, having only the queen reproduce leads to better productivity. If workers sought to pass their own genes by laying eggs, foraging activities would diminish, leading to decreased resources for the entire colony. This, in turn, can cause a tragedy of the commons, where selfish behavior lead to the depletion of resources, with long-term negative consequences for the group. However, in natural bee and wasp societies, only 0.01–0.1% and 1%, respectively, of the workers lay eggs, suggesting that strategies exist to combat cheating to prevent tragedy of the commons. These insect systems have given scientists opportunities to study strategies that keep cheating in check. Such strategies are commonly referred to as "policing" strategies, generally where additional costs are imposed on cheaters to discourage or eliminate cheating behaviors. For example, honeybees and wasps may eat eggs produced by workers. In some ant species and yellowjackets, policing may occur via aggression towards or killing egg-laying individuals to minimize cheating.
In cleaning symbiosis
Cleaning symbioses that develop between small and larger marine organisms often represent models useful for studying the evolution of stable social interactions and cheating. In the cleaning fish Labroides dimidiatus (Bluestreak cleaner wrasse), as in many cleaner species, client fish seek to have ectoparasites removed by the cleaners. In these situations, instead of picking off the parasites on the surface of the client fish, the cleaner can cheat by feeding on the client's tissue (mucus layer, scales, etc.), thereby gaining additional benefit from the symbiotic system. It has been well documented that cleaners will feed on mucus when their clients are unable to control the cleaner's behavior; however, in natural settings, client fish often jolt, chase after cheating cleaners, or terminate interactions by swimming away, effectively controlling the cheating behavior. Studies on cleaning mutualisms generally suggest that cheating behavior is often adjusted depending on the species of the client. In cleaning shrimp, cheating is predicted to occur less often because shrimps bear a higher cost if the clients use aggression to control the cleaner's behavior. Studies have found that cleaner species can strategically adjust cheating behavior according to the potential associated risk. For example, predatory clients, which present a significantly high cost for cheating, experience less cheating behavior. On the other hand, nonpredatory clients present a lower cost for cheating, and thus experience more cheating behaviors from the cleaners. Some evidence suggests that physiological processes can mediate the cleaners' decision to switch from cooperating to cheating in mutualistic interactions. For example, in the bluestreak cleaner wrasse, changes in cortisol levels are associated with behavior changes. For smaller clients, increased cortisol levels in the water led to more cooperative behavior, while for larger clients, the same treatment led to more dishonest behavior. It has been suggested that "good behavior" toward smaller clients often allows wrasses to attract larger clients that are often cheated.
Other
Other models of cheating include the European tree frog, "Hyla arborea". In many sexually reproducing species such as this, some males can access mates by exploiting resources of more competitive males. Many species have dynamic reproductive strategies that can change in response to changes in the environment. In these instances, several factors contribute to the decision to switch between mating strategies. For example, in the European tree frog, a sexually competitive (as in, perceived to be attractive by females) male tend to call to attract mates. This is often referred to as the "bourgeois" tactic. On the other hand, a smaller male that would likely fail to attract mates using the bourgeois tactic will tend to hide near attractive males and attempt to access females. In this instance, the males can gain access to females without having to defend territories or acquiring additional resources (which often serve as the basis for attractiveness). This is referred to as the "parasitic" tactic, where the smaller male effectively cheats its way to accessing females, by reaping the benefit of sexual reproduction without contributing resources that normally attract females. Models such as this provide valuable tools for research aimed at energetic constraints and environmental cues involved in cheating. Studies find that mating strategies are highly adaptable and depend on a variety of factors, such as competitiveness, energetic costs involved in defending territory or acquiring resources.
Constraints and countermeasures
Environmental conditions and social interactions affecting microbial cheating
Like many other organisms, bacteria rely on iron intake for its biological processes. However, iron is sometimes difficult to access in certain environments, like soil. Some bacteria have evolved siderophores, iron-chelating particles that seek and bring back iron for the bacteria. Siderophores are not necessarily specific to its producer - sometimes another individual could take up the particles instead. Pseudomonas fluorescens is a bacterium commonly found in the soil. Under low-iron conditions, P. fluorescens produces siderophores, specifically pyoverdine, to retrieve the iron necessary for survival. However, when iron is readily available, either from freely diffusing in environment or another bacterium's siderophores, P. fluorescens ceases production, allowing the bacterium to devote its energy towards growth. One study showed that when P. fluorescens grew in association with Streptomyces ambofaciens, another bacterium that produces the siderophore coelichelin, no pyoverdine was detected. This result suggested that P. fluorescens ceased siderophore production in favor of taking up iron-bound coelichelin, an association also known as siderophore piracy.
More studies, however, suggested that the cheating behavior of P. fluorescens could be suppressed. In another study, two strains of P. fluorescens were studied in the soil, their natural environment. One strain, known as the producer, produced a higher level of siderophores, which meant that the other strain, known as the non-producer, ceased siderophore production in favor of using the other's siderophores. Although one would expect that the non-producer would outcompete the producer, like the P. fluorescens and S. ambofaciens association, the study demonstrated that the non-producer was unable to do so in soil conditions, suggesting that the two strains could coexist. Further experiments suggested that this cheating prevention may be due to interactions with other microbes in the soil influencing the relationship or the spatial structure of the soil preventing siderophore diffusion and therefore limiting the non-producer's ability to exploit the producer's siderophores.
Selection pressure in bacteria (intraspecies)
By definition, individuals cheat to gain benefits that their non-cheating counterparts do not receive. This raises the question of how a cooperative system can exist in face of these cheaters. One answer is that the cheaters actually have a reduced fitness compared to the non-cheaters.
In a study by Dandekar et al., the researchers examined the survival rates of cheating and non-cheating bacteria populations (Pseudomonas aeruginosa) under varying environmental conditions. These microorganisms, like many species of bacteria, use a cell-cell communication system called quorum sensing that detect their population density and prompt the transcription of various resources when needed. In this case, the resources are publicly shared proteases that break down a food source like casein, and privately used adenosine hydrolase, which breaks down another food source, adenosine. The problem arises when some individuals ("cheaters") do not respond to these quorum sensing signals and therefore do not contribute to the costly protease production yet enjoy the benefits of the broken down resources.
When P. aeruginosa populations are placed into growth conditions where cooperation (and responding to the quorum signal) is costly, the number of cheaters increases, and the public resources are depleted, which can lead to a tragedy of the commons. However, when P. aeruginosa populations are placed into growth conditions with a proportion of adenosine, the cheaters are suppressed because the bacteria that responds to the quorum signal now produces adenosine hydrolase that they privately use for themselves to digest adenosine food source. In wild populations where the presence of adenosine is common, this is an explanation for how individuals that cooperate could have higher fitness than those that cheat, thereby suppressing the cheaters and maintaining cooperation.
Policing/punishment in insects
Cheating is also commonly found in insects. The social and seemingly altruistic communities found in insects such as ants and bees provide ample opportunities for cheaters to take advantage of the system and accrue additional benefits at the expense of the community.
Sometimes, a colony of insects is called a "superorganism" for its ability to take on properties greater than those of the sum of individuals. A colony of insects in which different individuals are specialized for specific tasks means a greater colony production and greater efficiency. Moreover, based on the kin-selection theory, it is collectively beneficial for all the individuals in the community to have the queen to lay eggs rather than the workers lay eggs. This is because if the workers lay eggs, it benefits the egg-laying worker individually, but the rest of the workers are now twice removed from this worker's offspring. Therefore, though it is beneficial for one individual to have its own offspring, it is collectively beneficial to have the queen lay the eggs. Therefore, a system of worker and queen policing exists against worker-laid eggs.
One form of policing occurs by the oophagy of the worker-laid eggs, found in many ant and bee species. This can be done by the queen, the workers, or both. In a series of experiments with honeybees (Apis mellifera), Ratnieks & Visscher found that other workers effectively removed worker-laid eggs in all colonies, whether the eggs originated from the same colonies or not. An example of a combination of queen and worker policing is found in ants, in the genus Diacamma, in which worker-laid eggs are taken by other workers and fed to the "queen". In general, the signals that identify the eggs as queen-laid are likely incorruptible, since such a signal must be honest to be maintained and not be used by cheating workers.
The other form of policing occurs through aggression towards egg-laying workers. In a species of tree wasp Dolichovespula sylvestris, Wenseleers et al. found that a combination of aggressive behavior and destruction of worker-laid eggs kept the number of worker-laid eggs low. In fact, 91% of the worker-laid eggs were policed within one day. They also found that about 20% of workers laying eggs were prevented from doing so through both the queen's and workers' aggressive behavior. The workers and the queen would grab the egg-laying worker and try to sting her or push her off the cell. This usually results in the worker removing her abdomen and not depositing her eggs.
Policing/punishment in other organisms
Aggression and punishment are not just found in insects. For example, in naked mole rats, punishments by the queen are a way she motivates the lazier, less-related workers in their groups. The queen would shove the lazier workers, with the number of shoves increasing when there are fewer active workers. Reeve found that if the queen is removed when colonies are satiated, there is a significant drop in weight of the active workers because the lazier workers are taking advantage of the system.
Punishment is also a method used by cichlid Neolamprologous pulcher in their cooperative breeding systems. It is a pay-to-stay system where helper fish are allowed to stay in certain territories in exchange for their help. Similar to the naked mole rats, the helpers that were prevented from helping, the "idle helpers", receive more aggression than control helpers in the study. Researchers theorize that this system developed because the fish are usually not closely related (so kinship benefits have little impact), and because there is a high level of predation risk when the fish is outside the group (therefore a strong motivator for the helper fish to stay in the group).
Rhesus monkeys also use aggression as a punishment. These animals have five distinct calls that they can "decide" to produce upon finding food. Whether they call or not is related to their sex and number of kin: females call more often and females with more kin call more often. However, sometimes when food is found, the individual ("discoverer") does not call to attract its kin and, presumably, to share food. If lower ranked individuals find this discoverer to be in the food drop area of the experiment, they recruit coalition support against this individual by screaming. The formed coalition then chases this individual away. If higher ranked individuals find this discoverer, they either chase the discoverer away or became physically aggressive towards the individual. These results show that aggression as punishment is a way to encourage members to work together and share food when it is found.
Interspecific countermeasures
Cheating and constraints on cheating are not limited to intraspecific interactions; they can also occur in a mutualistic relationship between two species. A common example is the mutualistic relationship between the cleaner fish Labroides dimidiatus and reef fish. Bshary and Grutter found that cleaner wrasse prefer client tissue mucus over ectoparasites. This creates a conflict between the cleaner fish and reef fish, because the reef fish only benefit when the cleaner fish eat the ectoparasites. Further studies revealed that, in a lab setting, the cleaner fish change their behavior in the face of deterrents against eating their preferred food. In several trials, the plate holding their preferred food source was immediately removed when they ate from it, to mimic "client fleeing" in natural settings. In other trials, the plate of their preferred food source chased the cleaner fish when they ate from it, mimicking "client chasing" in natural settings. After only six learning trials, the cleaners learned to choose against their preference, indicating that punishment is potentially a very effective countermeasure against cheating in mutualistic relationships.
Finally, such countermeasures are not limited to relationships among animals. West et al. found a similar countermeasure against cheating in the legume-rhizobium mutualism. In this relationship, nitrogen-fixing rhizobium bacteria fix atmospheric N2 from inside the roots of leguminous plants, providing this essential source of nitrogen to the plants while receiving organic acids in return. However, some bacteria are more mutualistic, while others are more parasitic because they consume the plant's resources but fix little to no N2. Moreover, these plants cannot tell whether the bacteria are more or less parasitic until they are settled in the plant nodules. To prevent cheating, these plants appear to be able to punish the rhizobium bacteria. In a series of experiments, researchers forced non-cooperation between the bacteria and the plants by placing various nodules in a nitrogen-free atmosphere, and observed a decrease in rhizobium reproductive success of about 50%. West et al. created a model of legumes sanctioning the bacteria and hypothesize that these behaviors exist to stabilize mutualistic interactions.
Another well-known example of plant-organism interaction occurs between yuccas and yucca moths. The female yucca moth deposits her eggs one at a time into the yucca flower. At the same time, she also deposits a small amount of pollen from yucca flowers as nutrition for the developing moths. Because most of the pollen is not consumed by the larva, yucca moths are therefore also the active pollinators of the yucca plant. Moreover, sometimes the female moths do not successfully deposit their eggs on the first attempt, and may try again and again. The yucca plant receives scars from the multiple attempts, but it also receives more pollen, since the moth deposits pollen with every try.
"Cheating" sometimes happens when the yucca moth deposits too many eggs in one plant. In this case, the yucca plant has little to no benefits from this interaction. However, the plant has a unique way of constraining this behavior. While the constraint against cheating often occurs directly to the individual, in this case, the constraint occurs to the individual's offspring. The yucca plant can "abort" the moths by aborting the flowers. Pellmyr and Huth found that there is selective maturation for flowers that have low egg loads and high number of scars (and therefore a high amount of pollen). In this way, there is selection against the "cheaters" who try to use the yucca plant without providing the benefits of pollination.
References
Behavior
Biological interactions | Cheating (biology) | [
"Biology"
] | 5,301 | [
"Biological interactions",
"Ethology",
"Behavior",
"nan"
] |
5,467,598 | https://en.wikipedia.org/wiki/Ericsson%20R310s | The Ericsson R310s, produced by Ericsson Mobile Communications, now known as Sony Mobile, was a mobile phone produced in the early 2000s, designed for use in environments which might easily damage a standard handset.
Björn Andersson was the development lead, and the device had the internal working name of "Marina". Development work began in 1997 and went on for around two-and-a-half years before the product launch in 2000.
The outside of the body is reinforced with rubber inlays to withstand harsh treatment and to provide a good grip, preventing the phone from being slippery when wet.
It is water-resistant, the lid having silicone gaskets and Gore-Tex membranes which prevent water from leaking in.
At the time the R310s was designed, most handsets still had a vulnerable external antenna. The R310s had a so-called "shark fin" antenna which was short and almost flat, and could withstand flexing.
The software was similar to that in other Ericsson phones of the period and the package offered voice dialing, vibrating call alert, and data/fax capabilities.
Aimed at "active lifestyle" users as well as tradespeople and industry, the phone was available in both high-visibility and fashion colours: bright orange, bright yellow, blue and green.
The phone survived slightly longer than others launched at the same time due to the lack of a replacement model.
References
R310s
Ericsson
Mobile phones introduced in 2000 | Ericsson R310s | [
"Technology"
] | 300 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
5,468,083 | https://en.wikipedia.org/wiki/Weitzenb%C3%B6ck%20identity | In mathematics, in particular in differential geometry, mathematical physics, and representation theory, a Weitzenböck identity, named after Roland Weitzenböck, expresses a relationship between two second-order elliptic operators on a manifold with the same principal symbol. Usually Weitzenböck formulae are implemented for G-invariant self-adjoint operators between vector bundles associated to some principal G-bundle, although the precise conditions under which such a formula exists are difficult to formulate. This article focuses on three examples of Weitzenböck identities: from Riemannian geometry, spin geometry, and complex analysis.
Riemannian geometry
In Riemannian geometry there are two notions of the Laplacian on differential forms over an oriented compact Riemannian manifold M. The first definition uses the divergence operator δ defined as the formal adjoint of the de Rham operator d, so that ∫M ⟨dα, β⟩ dV = ∫M ⟨α, δβ⟩ dV,
where α is any p-form, β is any (p + 1)-form, and ⟨·,·⟩ is the metric induced on the bundle of (p + 1)-forms. The usual form Laplacian is then given by Δ = dδ + δd.
On the other hand, the Levi-Civita connection supplies a differential operator ∇ : ΩpM → T*M ⊗ ΩpM,
where ΩpM is the bundle of p-forms. The Bochner Laplacian is given by Δ′ = ∇*∇,
where ∇* is the adjoint of ∇. This is also known as the connection or rough Laplacian.
The Weitzenböck formula then asserts that Δ = Δ′ + A,
where A is a linear operator of order zero involving only the curvature.
The precise form of A is given, up to an overall sign depending on curvature conventions, by
where
R is the Riemann curvature tensor,
Ric is the Ricci tensor,
θ is the map that takes the wedge product of a 1-form and a p-form and gives a (p+1)-form,
is the universal derivation inverse to θ on 1-forms.
Spin geometry
If M is an oriented spin manifold with Dirac operator ð, then one may form the spin Laplacian Δ = ð² on the spin bundle. On the other hand, the Levi-Civita connection extends to the spin bundle to yield a differential operator
As in the case of Riemannian manifolds, let Δ′ = ∇*∇. This is another self-adjoint operator and, moreover, has the same leading symbol as the spin Laplacian. The Weitzenböck formula yields
Δ = Δ′ + Sc/4, where Sc is the scalar curvature. This result is also known as the Lichnerowicz formula.
Complex differential geometry
If M is a compact Kähler manifold, there is a Weitzenböck formula relating the ∂̄-Laplacian (see Dolbeault complex) and the Euclidean Laplacian on (p,q)-forms. Write Δ″ for the former and Δ′ for the latter, with both operators expressed
in a unitary frame at each point.
According to the Weitzenböck formula, if α is a (p,q)-form, then Δ″α = Δ′α + Aα,
where A is an operator of order zero involving the curvature; in a unitary frame, the components of A are built from the curvature tensor.
Other Weitzenböck identities
In conformal geometry there is a Weitzenböck formula relating a particular pair of differential operators defined on the tractor bundle. See Branson, T. and Gover, A.R., "Conformally Invariant Operators, Differential Forms, Cohomology and a Generalisation of Q-Curvature", Communications in Partial Differential Equations, 30 (2005) 1611–1669.
See also
Bochner identity
Bochner–Kodaira–Nakano identity
Laplacian operators in differential geometry
References
Mathematical identities
Differential operators
Differential geometry | Weitzenböck identity | [
"Mathematics"
] | 708 | [
"Mathematical analysis",
"Mathematical problems",
"Mathematical identities",
"Mathematical theorems",
"Differential operators",
"Algebra"
] |
5,468,606 | https://en.wikipedia.org/wiki/Ring-opening%20metathesis%20polymerisation | In polymer chemistry, ring-opening metathesis polymerization (ROMP) is a type of chain-growth polymerization involving olefin metathesis. The reaction is driven by relieving ring strain in cyclic olefins. A variety of heterogeneous and homogeneous catalysts have been developed for different polymers and mechanisms. Heterogeneous catalysts are typical in large-scale commercial processes, while homogeneous catalysts are used in finer laboratory chemical syntheses. Organometallic catalysts used in ROMP usually have transition metal centres, such as tungsten, rubidium, titanium, etc., with organic ligands.
Heterogeneous catalysis
Heterogeneous catalysis consists of catalysts and substrates in different physical states. The catalyst is typically in solid phase. The mechanism of heterogeneous ring-opening metathesis polymerization is still under investigation.
Ring-opening metathesis polymerization of cyclic olefins has been commercialized since the 1970s. Examples of polymers produced on an industrial level through ROMP catalysis are Vestenamer and Norsorex, among others.
Mechanism
The mechanism of homogeneous ring-opening metathesis polymerization is well-studied. It is similar to any olefin metathesis reaction. Initiation occurs by forming an open coordination site on the catalyst. Propagation happens via a metallacycle intermediate formed after a [2+2] cycloaddition. When using a G3 catalyst, the [2+2] cycloaddition is the rate-determining step.
Frontal ring-opening metathesis polymerization
Frontal ring-opening metathesis polymerization (FROMP) is a variation of ROMP. It is a polymerization system that only reacts on a localized zone. One example of this system is the FROMP of dicyclopentadiene with a Grubbs' catalyst initiated by heat.
See also
Acyclic diene metathesis
Ring-opening polymerization
Further reading
References
Polymerization reactions | Ring-opening metathesis polymerisation | [
"Chemistry",
"Materials_science"
] | 399 | [
"Polymerization reactions",
"Polymer chemistry"
] |
5,468,630 | https://en.wikipedia.org/wiki/Lobe%20pump | A lobe pump, or rotary lobe pump, is a type of positive displacement pump. It is similar to a gear pump except the lobes are designed to almost meet, rather than touch and turn each other. An early example of a lobe pump is the Roots Blower, patented in 1860 to blow combustion air to melt iron in blast furnaces, but now more commonly used as an engine supercharger.
Lobe pumps are used in a variety of industries including pulp and paper, chemical, food, beverage, pharmaceutical, and biotechnology. They are popular in these diverse industries because they offer superb sanitary qualities, high efficiency, reliability, corrosion resistance and good clean-in-place and sterilization-in-place (CIP/SIP) characteristics.
Rotary pumps can handle solids (e.g., cherries and olives), slurries, pastes, and a variety of liquids. If wetted, they offer self-priming performance. A gentle pumping action minimizes product degradation. They also offer continuous and intermittent reversible flows and can operate dry for brief periods of time. Flow is relatively independent of changes in process pressure, too, so output is relatively constant and continuous.
Function
Lobe pumps are similar to external gear pumps in operation in that fluid flows around the interior of the casing. Unlike external gear pumps, however, the lobes do not make contact. Lobe contact is prevented by external timing gears located in the gearbox. Pump shaft support bearings are located in the gearbox, and since the bearings are out of the pumped liquid, pressure is limited by bearing location and shaft deflection, which also reduces the noise level of this pump. The lobe pump is a rotary type of positive displacement pump.
1. As the lobes come out of mesh, they create an expanding volume on the inlet side of the pump. Material to be pumped (liquid, or gas, possibly containing small solid particles) flows into this cavity. Rotation of the lobes past the inlet port creates enclosed volumes of material between the rotors and the pump casing.
2. The material travels around the interior of the casing in these enclosed volumes between the rotor's lobes and the casing — it does not pass between the lobes.
3. Finally, the meshing of the lobes on the discharge side of the pump prevents the pumped material from returning to the inlet side. Continued pumping forces the pumped material out through the outlet port. If the discharge port is restricted - such as discharging a large volume of air into an engine's intake manifold - then pressure is created in the discharge space. A lobe pump itself does not compress the material it pumps.
Lobe pumps are frequently used in food applications because they handle solids without damaging the product. Particle size pumped can be much larger in lobe pumps than in other positive displacement types. Since the lobes do not make contact, and clearances are not as close as in other positive displacement pumps, this design handles low-viscosity liquids with diminished performance. Loading characteristics are not as good as in other designs, and suction ability is low. High-viscosity liquids require reduced speeds to achieve satisfactory performance. Reductions of 25% of rated speed and lower are common with high-viscosity liquids.
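Because a lobe pump is a positive displacement machine, its ideal delivery scales linearly with shaft speed. The sketch below uses the generic displacement-per-revolution relation (not a figure from this article; the displacement and slip-efficiency values are purely illustrative) to show how output varies with speed.

```python
# Generic positive-displacement relation: delivered flow = displacement per
# revolution x shaft speed x volumetric (slip) efficiency.
# The numbers below are illustrative only, not data from this article.

def lobe_pump_flow(disp_per_rev_litres, rpm, vol_efficiency):
    """Delivered flow in litres per minute."""
    return disp_per_rev_litres * rpm * vol_efficiency

disp = 0.5        # litres per revolution (hypothetical pump)
eta = 0.90        # volumetric efficiency; lower with thin, low-viscosity liquids
rated_rpm = 400   # hypothetical rated speed

for frac in (1.00, 0.75, 0.25):   # rated speed and two reduced speeds
    q = lobe_pump_flow(disp, rated_rpm * frac, eta)
    print(f"{frac:4.0%} of rated speed -> {q:.0f} L/min")
```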
See also
Positive displacement pump
Gear pump
Progressing Cavity Pump
Multistage Screw Pump
Applications of Lobe Pump
References
External links
PumpSchool.com Lobe Pump entry
Ace lobe pumps
Pumps | Lobe pump | [
"Physics",
"Chemistry"
] | 688 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
5,469,271 | https://en.wikipedia.org/wiki/14P/Wolf | 14P/Wolf is a periodic comet in the Solar System.
Max Wolf (Heidelberg, Germany) discovered the comet on September 17, 1884, before it passed 0.8 AU from Earth. It was later rediscovered by, but not credited to, Ralph Copeland (Dun Echt Observatory, Aberdeen, Scotland) on September 23.
Before approaching Jupiter in 1875, the comet had a perihelion of 2.74 AU and an orbital period of 8.84 years, and the approach dropped perihelion to 1.57 AU. An approach to Jupiter in September 1922 lifted perihelion to 2.43 AU. The current perihelion of 2.7 AU is from when the comet passed Jupiter on August 13, 2005. Another close approach to Jupiter on March 10, 2041 will return the comet to parameters similar to the period 1925–2000.
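For orientation, Kepler's third law links the quoted period to a semi-major axis and hence an aphelion near Jupiter's orbit, which is why the close approaches described above keep reshaping the orbit. The sketch below is a two-body idealisation that ignores those very perturbations.

```python
# Kepler's third law (P^2 = a^3, with P in years and a in AU) applied to the
# pre-1875 orbit quoted above. Two-body idealisation for orientation only.

P = 8.84                 # orbital period before 1875, years
q = 2.74                 # perihelion distance before 1875, AU

a = P ** (2.0 / 3.0)     # semi-major axis, ~4.3 AU
Q = 2 * a - q            # aphelion distance, ~5.8 AU
print(f"a ~ {a:.2f} AU, aphelion ~ {Q:.2f} AU (Jupiter orbits at ~5.2 AU)")
```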
The comet nucleus is estimated to be 4.7 kilometers in diameter. Its rotational period is estimated to be 9.02 ± 0.01 hours.
References
External links
14P at Kronk's Cometography
14P at Kazuo Kinoshita's Comets
14P at Seiichi Yoshida's Comet Catalog
Orbital simulation from JPL (Java) / Horizons Ephemeris
Periodic comets
0014
014P
18840917 | 14P/Wolf | [
"Astronomy"
] | 271 | [
"Astronomy stubs",
"Comet stubs"
] |
5,469,332 | https://en.wikipedia.org/wiki/305%20%28number%29 | 305 is the natural number following 304 and preceding 306.
In mathematics
305 is an odd composite number with two prime factors.
305 is the convolution of the first 7 primes with themselves.
305 is the fifth hexagonal prism number; hexagonal prism numbers are defined by (n + 1)(3n² + 3n + 1).
305 is the hypotenuse of two Pythagorean triples: 305² = 207² + 224² = 136² + 273².
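The numerical claims above are easy to verify directly; a minimal Python check (reading "convolution of the first 7 primes with themselves" as the middle coefficient of the self-convolution) is shown below.

```python
# Quick checks of the numerical claims about 305 (plain Python, no libraries).

primes = [2, 3, 5, 7, 11, 13, 17]   # the first 7 primes

# Middle coefficient of the self-convolution: sum of p[i] * p[n-1-i]
conv = sum(primes[i] * primes[len(primes) - 1 - i] for i in range(len(primes)))
assert conv == 305

# Hexagonal prism numbers (n + 1)(3n^2 + 3n + 1); the fifth (n = 4) is 305
hex_prisms = [(n + 1) * (3 * n * n + 3 * n + 1) for n in range(5)]
assert hex_prisms == [1, 14, 57, 148, 305]

# 305 as the hypotenuse of two Pythagorean triples
assert 207**2 + 224**2 == 305**2
assert 136**2 + 273**2 == 305**2
```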
References
Integers | 305 (number) | [
"Mathematics"
] | 106 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
5,469,882 | https://en.wikipedia.org/wiki/Behavioral%20enrichment | Behavioral enrichment is an animal husbandry principle that seeks to enhance the quality of captive animal care by identifying and providing the environmental stimuli necessary for optimal psychological and physiological well-being. Enrichment can either be active or passive, depending on whether it requires direct contact between the animal and the enrichment. A variety of enrichment techniques are used to create desired outcomes similar to an animal's individual and species' history. Each of the techniques used is intended to stimulate the animal's senses similarly to how they would be activated in the wild. Provided enrichment may be seen in the form of auditory, olfactory, habitat factors, food, research projects, training, and objects.
Purpose
Environmental enrichment can improve the overall welfare of animals in captivity and create a habitat similar to what they would experience in their wild environment. It aims to maintain an animal's physical and psychological health by increasing the range or number of species-specific behaviors, increasing positive interaction with the captive environment, preventing or reducing the frequency of abnormal behaviors, such as stereotypies, and increasing the individual's ability to cope with the challenges of captivity. Stereotypies are seen in captive animals due to stress and boredom. This includes pacing, self-harm, over-grooming, head-weaving, etc.
Environmental enrichment can be offered to any animal in captivity, including:
Animals in zoos and related facilities
Animals in sanctuaries
Animals in shelters and adoption centers
Animals used for research
Animals used for companionship, e.g. dogs, cats, rabbits, etc.
Environmental enrichment can be beneficial to a wide range of vertebrates and invertebrates such as land mammals, marine mammals, and amphibians. In the United States, specific regulations (Animal Welfare Act of 1966) must be followed for enrichment plans in order to guarantee, regulate, and provide appropriate living environments and stimulation for animals in captivity. Moreover, the Association of Zoos and Aquariums (also known as the AZA), requires that animal husbandry and welfare be a main concern for those caring for animals in captivity.
Passive enrichment
Passive enrichment provides sensory stimulation but no direct contact or control. This type of enrichment is commonly used for its potential to benefit several animals simultaneously as well as requiring limited direct animal contact.
Visual enrichment
Visual enrichment is typically provided by changing the layout of an animal's holding area. The type of visual enrichment can vary, from something as simple as adding pictures on walls to videotapes and television. Visual enrichment such as television can especially benefit animals housed in single cages.
Mirrors are also a potential form of enrichment, specifically for animals that display an understanding of self-recognition, such as non-human primates. In addition to using mirrors to reflect the animal's own image, mirrors can also be angled so the animal is able to see normally out-of-sight areas of the holding area.
Enclosures in modern zoos are often designed to facilitate environmental enrichment. For example, the Denver Zoo's exhibit Predator Ridge allows different African carnivores to be rotated among several enclosures, providing the animals with a differently sized environment.
Auditory enrichment
In the wild, animals are exposed to a variety of sounds that they normally do not encounter in captivity. Auditory enrichment can be used to mimic the animal's natural habitat. Types of nature-based auditory enrichment include rain forest sounds and con-specific vocalizations.
The most common form of auditory enrichment is music, the use of which stems primarily from its benefit to humans. The benefits of classical music have been widely studied in animals, from sows to non-human primates. Studies have also looked at various other genres, such as pop and rock, but their ability to provide effective enrichment remains inconclusive. Most types of music that are selected for enrichment are based on human preferences, causing anthropomorphic biases that may not translate to other animals. Therefore, music that is specifically attuned to the animal's auditory senses could be beneficial. Species-specific sounds require further research to find what pitch, frequency, and range are most suitable for the animal.
Active enrichment
Active enrichment often requires the animal to perform some sort of physical activity as well as direct interaction with the enrichment object. Active enrichment items can temporarily reduce stereotypic behaviors as their beneficial effects are usually limited to the short periods of active use.
Food-based enrichment
Food-based enrichment is meant to mimic what a captive animal would do in the wild for food. This is extremely important because in the wild, animals are adapted to work hard for what they eat. A lot of time and energy is spent finding food, which is why this tactic is used to make it more challenging for the animal rather than just feeding it simple food. Feeding enrichment techniques causes the animal to indulge in natural, active behaviors that allow for more stimulation and prevents boredom. This form of enrichment forms active behaviors that can also help with not only a captive animal's mental health, but the animal's physical health.
For example, food can be hidden and spread across an enclosure making the animal actively search for it. Other common manipulable tactile objects include rubber toys stuffed with treats. Instead of providing the food directly, foraging devices are useful in increasing the amount of searching and foraging of food, comparable to the amount of time they would spend in the wild. Most food-based enrichment occurs in the context of searching for food, such as cracking open a nut or digging holes in tree trunks for worms.
Structural enrichment
Structural enrichment involves adding objects to an enclosure to mimic an animal's natural habitat. These objects can be switched out occasionally or kept permanently. The environment of captive animals should be changed frequently, since their environment in the wild would continually present new objects and opportunities for exploration. Research into what constitutes the most beneficial and appropriate forms of enrichment must be used when considering the provision of enrichment options, especially for species where natural-like settings may be difficult to achieve. The animal should never become too familiar with its environment, because that can cause boredom, lack of stimulation, or stereotypical behavior. Examples of this could be swings or climbing structures. Stones have also been shown to encourage exploratory behavior in Japanese macaques. Interaction with the stones included behaviors such as gathering, rolling in hands, rubbing, and carrying.
Other common forms include cardboard, forage, and even the texture of the food (i.e. hard, smooth, cold, warm).
Olfactory enrichment
Olfactory enrichment can stimulate naturalistic behavior, enhance exploration, and reduce inactive behaviors. Olfactory enrichment can be utilized by itself, paired with novel toys, or paired with food-based enrichment. This type of enrichment is most commonly used with species that commonly utilize their olfactory senses in the wild. Although highly beneficial, it is important for researchers to analyze the long-term effects of certain odors on captive animals. Odors can be scattered on a novel toy such as a ball or semi-randomly throughout an enclosure. Various forms of odors can include catnip, odor of conspecific, perfume, feces of a prey species, or spices.
Cognitive enrichment
Cognitive enrichment is defined as improving animal welfare by providing opportunities for captive animals to use cognitive skills for problem solving and providing limited control over some aspects of their environment. In the wild, animals deal with ecological challenges in order to acquire the resources, such as food and shelter, that they require to survive. These challenges arise from interactions with other animals, or through changes to their environment that require the individuals to exercise their cognitive ability and to improve their behavioral strategies. Therefore, these challenges act as an important problem-solving element in the animals' day-to-day lives, and in turn increase their overall fitness. The animal anticipates positive benefits from a challenging situation, which can directly affect its emotional processes. Cognitive enrichment should be provided in addition to a diverse environment that is already structurally and socially enriched; it goes beyond the basic needs of the animals.
Social enrichment
Social enrichment can either involve housing a group of conspecifics or animals of different species that would naturally encounter each other in the wild. Social animals in particular (i.e. most primates, lions, flamingos, etc.), benefit from social enrichment because it has the positive effect of creating confidence in the group. Social enrichment can encourage social behaviors that are seen in the wild, including feeding, foraging, defense, territoriality, reproduction, and courtship.
Human-interaction enrichment
The most common form of human-interaction enrichment is training. The human and animal interaction during training builds trust, and increases the animal's cooperation during clinical and research procedures. In addition, training sessions have been shown to benefit the welfare of both individually housed animals and communally housed animals by providing cognitive stimulation, increasing social play, decreasing inactivity, and mitigating social aggression during feeding.
Assessing the success
A range of methods can be used to assess which environmental enrichment should be provided. These are based on the premises that captive animals should perform behaviors in a similar way to those in the ethogram of their ancestral species, animals should be allowed to perform the activities or interactions they prefer, i.e. preference test studies, and animals should be allowed to perform those activities for which they are highly motivated, i.e. motivation studies.
Environmental enrichment is a way to ensure that an animal's natural and instinctual behaviors are kept and able to be passed and taught from one generation to the next. Enrichment techniques that encourage species-specific behaviors, like those that are discovered in the wild, have been studied and found to help the process of reintroduction of endangered species into their natural habitats, as well as helping to create offspring with natural traits and behaviors.
The main way the success of environmental enrichment can be measured is by recognizing the behavioral changes that occur from the techniques used to shape desired behaviors of the animal, compared to the behaviors of those found in the wild. The success of environmental enrichment can also be assessed quantitatively by a range of behavioral and physiological indicators of animal welfare. In addition to those listed above, behavioral indicators include the occurrence of abnormal behaviours (e.g. stereotypies), cognitive bias studies, and the effects of frustration. Physiological indicators include heart rate, corticosteroids, immune function, neurobiology, eggshell quality and thermography.
It is very difficult for zookeepers to measure the effectiveness of enrichment in terms of stress, because animals in zoos are often on display and presented with abnormal conditions that can themselves cause uneasiness and stress. Measuring enrichment in terms of reproduction is easier because of our ability to record offspring numbers and fertility. By making necessary environmental changes and providing mental stimulation, animals in captivity have been seen to reproduce at a rate more similar to that of their wild counterparts, in comparison to those provided with less behavioral and environmental enrichment.
Issues and concerns
Habituation
Although environmental enrichment can provide sensory and social stimulations, it can also have limited efficacy if not changed frequently. Animals can become habituated to environmental enrichments, showing positive behaviors at onset of exposure and progressively declining with time. Environmental enrichments are effective primarily because it offers novelty stimuli, making the animal's daily routines less predictable, as would be in the wild. Therefore, maintaining novelty is important for the efficacy of the enrichment. Frequently changing the type of environmental enrichment will help prevent habituation.
Training
Usage of more highly advanced enrichment devices, such as computerized devices, requires training. This can lead to issues as training often consists of food as a reward. While food encourages the animal to participate with the device, the animal could associate the device with food. As a result, the interaction with the enrichment would bring about behaviors that are associated with training instead of the desired playful and voluntary behaviors.
Time and resources
The process of producing and providing environmental enrichment usually require a large allocation of time and resources. In a survey, "time taken by animal care staff to complete other tasks" was the most significant factor influencing environmental enrichment provisions and scheduling. Therefore, it is important to develop appropriate environmental enrichment programs that can be effectively carried out with the size of staff and time available.
References
External links
Laboratory Animal Refinement Database
Animals in Laboratories (awionline.org)
3R Research Foundation Switzerland (forschung3R.ch)
Environmental Enrichment, Animal Welfare Information Center
The Shape of Enrichment selected articles on enrichment for zoo animals.
Environmental Enrichment for Pet Cats (ASPCA)
Environmental Enrichment for Pet Dogs(ASPCA)
Environmental Enrichment for Horses(ASPCA)
Animal welfare
Ethology
Zoos | Behavioral enrichment | [
"Biology"
] | 2,551 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
5,470,038 | https://en.wikipedia.org/wiki/Vampire%20by%20Night | The Vampire by Night (Nina Price) is a fictional character that appears in comic books published by Marvel Comics. She is the niece to Jack Russell and has the ability to shapeshift into either a werewolf or a vampiress between dusk and dawn.
Publication history
Vampire by Night first appeared in Amazing Fantasy vol. 2 #10 (September 2005) and was created by writer Jeff Parker and artist Federica Manfredi.
Fictional character biography
Nina Price is the niece of Jack Russell / Werewolf by Night and inherited a long-running familial lycanthropic curse that originated when her ancestor Grigori was corrupted by the Darkhold and bitten by a werewolf who served Dracula. At some point Nina was attacked and bitten by a vampire, transforming her into a hybrid. As a result, she transforms into a vampire by night and a wolf during full moons.
Nina used her father's money and status to create a special area to cage herself and prevent herself from harming others. However, she had no problem using her supernatural abilities to kill criminals, seeing them as worthy prey to satiate her thirst for blood.
After being caught in a trap set by S.H.I.E.L.D. during an adventure with her uncle Jack, Nina joins the organization's Paranormal Containment Unit, nicknamed the Howling Commandos. During her time there, she is partnered with the werewolf Warwolf and the vampire Lilith, Dracula's daughter.
In the All-New, All-Different Marvel event, Nina joins S.T.A.K.E.'s Howling Commandos.
Nina later ends up under the thrall of Dracula at the time when Old Man Logan and the Howling Commandos arrived to rescue Jubilee, who is also under Dracula's thrall. Both of them end up freed from Dracula's thrall upon his defeat.
Powers and abilities
Nina is a hybrid of a vampire and a werewolf and possesses the abilities of both, but only after sunset. Her vampire powers give her superhuman physical abilities, a powerful healing factor, and an inability to be captured on film. Due to her hybrid nature, she is not affected by sunlight.
She is also cursed with the power of lycanthropy. When the moon is full, she transforms into a white wolf resembling an Arctic wolf. This gives her superhuman physical abilities and senses as well as overwhelming feral instincts.
Reception
In 2021, Screen Rant included Nina Price in their "Marvel: 10 Most Powerful Vampires" list.
In other media
The Vampire by Night appears in Hulk: Where Monsters Dwell, voiced by Chiara Zanni. This version is a member of the Howling Commandos.
The Vampire by Night appears as an unlockable playable character in Marvel Avengers Academy.
References
External links
Vampire by Night at Marvel.com
Vampire by Night at Comic Vine
Vampire by Night at Marvel Appendix
Comics characters introduced in 2005
Fictional half-vampires
Fictional hypnotists
Fictional hybrids
Fictional vampires
Fictional werewolves
Howling Commandos
Marvel Comics shapeshifters
Marvel Comics characters who can move at superhuman speeds
Marvel Comics characters with accelerated healing
Marvel Comics immortals
Marvel Comics characters with superhuman senses
Marvel Comics characters with superhuman strength
Marvel Comics female superheroes
Marvel Comics hybrids
Marvel Comics vampires
S.H.I.E.L.D. agents
Vampire superheroes | Vampire by Night | [
"Biology"
] | 666 | [
"Fictional hybrids",
"Hybrid organisms"
] |
5,470,137 | https://en.wikipedia.org/wiki/Nuclear%20astrophysics | Nuclear astrophysics studies the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars, supernovae, novae, and violent binary-star interactions.
It is an interdisciplinary part of both nuclear physics and astrophysics, involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution). Constraints from observations involve multiple messengers, all across the electromagnetic spectrum (nuclear gamma-rays, X-rays, optical, and radio/sub-mm astronomy), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays, and material deposits on the Earth and Moon. Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides into the realm of radioactive/unstable nuclei, almost to the limits of bound nuclei (the drip lines), and under high density (up to neutron star matter) and high plasma temperature. Theories and simulations are essential parts herein, as cosmic nuclear reaction environments cannot be realized, but at best partially approximated by experiments.
History
In the 1940s, geologist Hans Suess speculated that the regularity that was observed in the abundances of elements may be related to structural properties of the atomic nucleus. These considerations were seeded by the discovery of radioactivity by Becquerel in 1896 as an aside of advances in chemistry which aimed at production of gold. This remarkable possibility for transformation of matter created much excitement among physicists for the next decades, culminating in discovery of the atomic nucleus, with milestones in Ernest Rutherford's scattering experiments in 1911, and the discovery of the neutron by James Chadwick (1932). After Aston demonstrated that the mass of helium is less than four times that of the proton, Eddington proposed that, through an unknown process in the Sun's core, hydrogen is transmuted into helium, liberating energy. Twenty years later, Bethe and von Weizsäcker independently derived the CN cycle, the first known nuclear reaction that accomplishes this transmutation. The interval between Eddington's proposal and derivation of the CN cycle can mainly be attributed to an incomplete understanding of nuclear structure. The basic principles for explaining the origin of elements and energy generation in stars appear in the concepts describing nucleosynthesis, which arose in the 1940s, led by George Gamow and presented in a 2-page paper in 1948 as the Alpher–Bethe–Gamow paper. A complete concept of processes that make up cosmic nucleosynthesis was presented in the late 1950s by Burbidge, Burbidge, Fowler, and Hoyle, and by Cameron. Fowler is largely credited with initiating collaboration between astronomers, astrophysicists, and theoretical and experimental nuclear physicists, in a field that we now know as nuclear astrophysics (for which he won the 1983 Nobel Prize). During these same decades, Arthur Eddington and others were able to link the liberation of nuclear binding energy through such nuclear reactions to the structural equations of stars.
These developments were not without curious deviations. Many notable physicists of the 19th century, such as Mayer, Waterston, von Helmholtz, and Lord Kelvin, postulated that the Sun radiates thermal energy by converting gravitational potential energy into heat. Its lifetime as calculated from this assumption using the virial theorem, around 19 million years, was found inconsistent with the interpretation of geological records and the (then new) theory of biological evolution. Alternatively, if the Sun consisted entirely of a fossil fuel like coal, considering the rate of its thermal energy emission, its lifetime would be merely four or five thousand years, clearly inconsistent with records of human civilization.
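The 19-million-year figure quoted above is essentially the Kelvin–Helmholtz (gravitational contraction) timescale. A rough order-of-magnitude check using present-day solar values is sketched below; the exact prefactor depends on the assumed internal density profile, so the result should be read as indicative only.

```python
# Order-of-magnitude Kelvin-Helmholtz (gravitational contraction) timescale
# for the Sun: t_KH ~ G * M^2 / (R * L), up to a structure-dependent
# prefactor of order one half. Present-day solar values are assumed.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
L = 3.828e26    # solar luminosity, W
YEAR = 3.156e7  # seconds per year

t_kh = G * M**2 / (R * L) / YEAR
print(f"t_KH ~ {t_kh:.2g} years")   # ~3e7 yr; with the ~1/2 prefactor, ~1.6e7 yr,
                                    # the same order as the 19 million years above
```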
Basic concepts
During cosmic times, nuclear reactions re-arrange the nucleons that were left behind from the big bang (in the form of isotopes of hydrogen and helium, and traces of lithium, beryllium, and boron) into other isotopes and elements as we find them today. The driver is the release of nuclear binding energy, favoring nuclei whose nucleons are more tightly bound; the products are then lighter than their original components by the mass equivalent of the binding energy released. The most tightly-bound nucleus formed from symmetric matter of neutrons and protons is 56Ni. The release of nuclear binding energy is what allows stars to shine for up to billions of years, and may disrupt stars in stellar explosions in the case of violent reactions (such as 12C+12C fusion for thermonuclear supernova explosions). As matter is processed in this way within stars and stellar explosions, some of the products are ejected from the nuclear-reaction site and end up in interstellar gas. This gas may then form new stars and be processed further through nuclear reactions, in a cycle of matter. The result is a compositional evolution of cosmic gas in and between stars and galaxies, enriching such gas with heavier elements. Nuclear astrophysics is the science that describes and explains the nuclear and astrophysical processes behind this cosmic and galactic chemical evolution, linking them to knowledge from nuclear physics and astrophysics. Measurements are used to test our understanding: astronomical constraints are obtained from stellar and interstellar abundance data of elements and isotopes, and other multi-messenger astronomical measurements of the cosmic object phenomena help to understand and model these. Nuclear properties can be obtained from terrestrial nuclear laboratories such as accelerators and their experiments. Theory and simulations are needed to understand and complement such data, providing models for nuclear reaction rates under the variety of cosmic conditions, and for the structure and dynamics of cosmic objects.
Findings, current status, and issues
Nuclear astrophysics remains as a complex puzzle to science. The current consensus on the origins of elements and isotopes are that only hydrogen and helium (and traces of lithium) can be formed in a homogeneous Big Bang (see Big Bang nucleosynthesis), while all other elements and their isotopes are formed in cosmic objects that formed later, such as in stars and their explosions.
The Sun's primary energy source is hydrogen fusion to helium at about 15 million degrees. The proton–proton chain reactions dominate; they occur at much lower energies, although much more slowly, than catalytic hydrogen fusion through CNO cycle reactions. Nuclear astrophysics gives a picture of the Sun's energy source producing a lifetime consistent with the age of the Solar System derived from meteoritic abundances of lead and uranium isotopes – an age of about 4.5 billion years. The core hydrogen burning of stars, as it now occurs in the Sun, defines the main sequence of stars, illustrated in the Hertzsprung-Russell diagram that classifies stages of stellar evolution. The Sun's lifetime of H burning via pp-chains is about 9 billion years. This primarily is determined by the extremely slow production of deuterium,
p + p → ²H + e⁺ + ν, which is governed by the weak interaction.
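The roughly 9-billion-year figure can be recovered from a simple energy budget. The sketch below assumes, as a common rule of thumb rather than a figure from this article, that about 10% of the Sun's hydrogen is available for core burning and that hydrogen-to-helium fusion releases about 0.7% of the fused mass as energy.

```python
# Rough nuclear-burning lifetime of the Sun: energy released by fusing the
# hydrogen available in the core, divided by the present luminosity.
# The 10% core fraction and constant luminosity are simplifying assumptions.

c = 2.998e8      # speed of light, m/s
M = 1.989e30     # solar mass, kg
L = 3.828e26     # solar luminosity, W
YEAR = 3.156e7   # seconds per year

X = 0.7              # hydrogen mass fraction of the Sun
core_fraction = 0.1  # fraction of the hydrogen assumed available for burning
efficiency = 0.007   # fraction of fused mass released as energy (H -> He)

t_nuc = efficiency * core_fraction * X * M * c**2 / L / YEAR
print(f"t_nuclear ~ {t_nuc:.2g} years")   # ~7e9 yr, the same order as ~9 Gyr
```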
Work that led to discovery of neutrino oscillation (implying a non-zero mass for the neutrino absent in the Standard Model of particle physics) was motivated by a solar neutrino flux about three times lower than expected from theories — a long-standing concern in the nuclear astrophysics community colloquially known as the Solar neutrino problem.
The concepts of nuclear astrophysics are supported by observation of the element technetium (the lightest chemical element without stable isotopes) in stars, by galactic gamma-ray line emitters (such as 26Al, 60Fe, and 44Ti), by radioactive-decay gamma-ray lines from the 56Ni decay chain observed from two supernovae (SN1987A and SN2014J) coincident with optical supernova light, and by observation of neutrinos from the Sun and from supernova 1987A. These observations have far-reaching implications. 26Al has a lifetime of a million years, which is very short on a galactic timescale, proving that nucleosynthesis is an ongoing process within our Milky Way Galaxy in the current epoch.
Current descriptions of the cosmic evolution of elemental abundances are broadly consistent with those observed in the Solar System and galaxy.
The roles of specific cosmic objects in producing these elemental abundances are clear for some elements, and heavily debated for others. For example, iron is believed to originate mostly from thermonuclear supernova explosions (also called supernovae of type Ia), and carbon and oxygen are believed to originate mostly from massive stars and their explosions. Lithium, beryllium, and boron are believed to originate from spallation reactions of cosmic-ray nuclei such as carbon and heavier nuclei, breaking these apart. Elements heavier than nickel are produced via the slow and rapid neutron capture processes, each contributing roughly half the abundance of these elements. The s-process is believed to occur in the envelopes of dying stars, whereas some uncertainty exists regarding r-process sites. The r-process is believed to occur in supernova explosions and compact object mergers, though observational evidence is limited to a single event, GW170817, and relative yields of proposed r-process sites leading to observed heavy element abundances are uncertain.
The transport of nuclear reaction products from their sources through the interstellar and intergalactic medium also is unclear. Additionally, many nuclei that are involved in cosmic nuclear reactions are unstable and may only exist temporarily in cosmic sites, and their properties (e.g., binding energy) cannot be investigated in the laboratory due to difficulties in their synthesis. Similarly, stellar structure and its dynamics is not satisfactorily described in models and hard to observe except through asteroseismology, and supernova explosion models lack a consistent description based on physical processes, and include heuristic elements. Current research extensively utilizes computation and numerical modeling.
Future work
Although the foundations of nuclear astrophysics appear clear and plausible, many puzzles remain. These include understanding helium fusion (specifically the 12C(α,γ)16O reaction(s)), astrophysical sites of the r-process, anomalous lithium abundances in population II stars, the explosion mechanism in core-collapse supernovae, and progenitors of thermonuclear supernovae.
See also
Nuclear physics
Astrophysics
Nucleosynthesis
Abundance of the chemical elements
Joint Institute for Nuclear Astrophysics
References
astrophysics
Astronomical sub-disciplines
Astrophysics
Subfields of physics | Nuclear astrophysics | [
"Physics",
"Astronomy"
] | 2,176 | [
"nan",
"Astronomical sub-disciplines",
"Astrophysics",
"Nuclear physics"
] |
5,470,239 | https://en.wikipedia.org/wiki/Xylophagy | Xylophagy is a term used in ecology to describe the habits of an herbivorous animal whose diet consists primarily (often solely) of wood. The word derives from Greek ξυλοφάγος (xulophagos) "eating wood", from () "wood" and () "to eat". Animals feeding only on dead wood are called sapro-xylophagous or saproxylic.
Xylophagous insects
Most such animals are arthropods, primarily insects of various kinds, in which the behavior is quite common, and found in many different orders. It is not uncommon for insects to specialize to various degrees; in some cases, they limit themselves to certain plant groups (a taxonomic specialization), and in others, it is the physical characteristics of the wood itself (e.g., state of decay, hardness, whether the wood is alive or dead, or the choice of heartwood versus sapwood versus bark).
Many xylophagous insects have symbiotic protozoa and/or bacteria in their digestive system which assist in the breakdown of cellulose; others (e.g., the termite family Termitidae) possess their own cellulase. Others, especially among the groups feeding on decaying wood, derive much of their nutrition from the digestion of various fungi that are growing amidst the wood fibers. Such insects often carry the spores of the fungi in special structures on their bodies (called "mycangia"), and infect the host tree themselves when they are laying their eggs.
Examples of wood-eating animals
African forest elephants
Bark beetles
Beavers
Cossidae moths
Cryptocercus punctulatus, the brown-hooded cockroach
Dioryctria sylvestrella, the maritime pine borer, a snout moth in the Pyralidae family
Gribbles
Horntails
Panaque (catfish)
Panesthia cribrata, the Australian wood cockroach
Sesiidae moths
Shipworms
Termites
Wood-boring beetles
Woodlice
Amphipods
Squat lobster
References
Herbivory
Dead wood
Wood decomposition | Xylophagy | [
"Biology"
] | 444 | [
"Eating behaviors",
"Herbivory"
] |
15,935,893 | https://en.wikipedia.org/wiki/Turing%20Talk | The Turing Talk, previously known as the Turing Lecture, is an annual award lecture delivered by a noted speaker on the subject of Computer Science. Sponsored and co-hosted by the Institution of Engineering and Technology (IET) and the British Computer Society, the talk has been delivered at different locations in the United Kingdom annually since 1999. Venues for the talk have included Savoy Place, the Royal Institution in London, Cardiff University, The University of Manchester, Belfast City Hall and the University of Glasgow. The main talk is preluded with an insightful speaker, who performs an opening act for the main event.
The talk is named in honour of Alan Turing and should not be confused with the Turing Award lecture organised by the Association for Computing Machinery (ACM). Recent Turing talks are available as a live webcast and archived online.
Turing Talks
Previous speakers have included:
2022: Julie McCann, a day in the life of a smart city
2021: Cecilia Mascolo, Sounding out wearable and audio data for health diagnostics
2020: Mark Girolami, Digital Twins: The Next Phase of the AI Revolution
2019: Engineering a fair future: Why we need to train unbiased AI
2018: Andy Harter, Innovation and technology – art or science?
2017: Guruduth Banavar, Beneficial AI for the Advancement of Humankind
2016: Robert Schukai, The Internet of Me: It's all about my screens
2015: Robert Pepper, The Internet Paradox: How bottom-up beat(s) command and control
2014: Bernard S. Meyerson, Beyond silicon: Cognition and much, much more
2013: Suranga Chandratillake, What they didn't teach me: building a technology company and taking it to market
2012: Ray Dolan, From cryptoanalysis to cognitive neuroscience – a hidden legacy of Alan Turing
2011: Donald Knuth, An Evening with Donald Knuth – All Questions Answered
2010: Christopher Bishop, Embracing Uncertainty: the new machine intelligence
2009: Mike Brady, Information Engineering and its Future
2008: James Martin, Target Earth and the meaning of the 21st century
2007: Grady Booch, The Promise, the Limits and the Beauty of Software
2006: Chris Mairs, Lifestyle access for the disabled
2005: Fred Brooks, Collaboration and Telecollaboration in Design
2004: Fred Piper, Cyberspace Security, The Good, The Bad & The Ugly
2003: Caroline Kovac, Computing in the Age of the Genome
2002: Mark Welland, Smaller, faster, better – but is it nanotechnology?
2001: Nick Donofrio, Technology, Innovation and the New Economy
2000: Brian Randell, Facing up to Faults
1999: Samson Abramsky From Computation to Interaction – Towards a Science of Information
See also
Pinkerton Lecture
References
1998 establishments in the United Kingdom
Recurring events established in 1998
British lecture series
Computer science education
Academic awards
British Computer Society
Institution of Engineering and Technology
Alan Turing | Turing Talk | [
"Technology",
"Engineering"
] | 589 | [
"Institution of Engineering and Technology",
"Computer science stubs",
"Computer science education",
"Computer science",
"Computing stubs"
] |
15,936,128 | https://en.wikipedia.org/wiki/Pilobolus%20crystallinus | Pilobolus crystallinus, commonly known as the "dung cannon" or "hat thrower", is a species of fungus belonging to the Mucorales order. It is unique in that it adheres its spores to vegetation, so as to be eaten by grazing animals. It then passes through the animals' digestive systems and grows in their feces. Although these fungi only grow to be tall, they can shoot their sporangium, containing their spores, up to away. Due to an increase of pressure in the vesicle, the sporangium can accelerate 0–45 mph in the first millimeter of its flight, which corresponds to an acceleration of 20000 g. Using a mucus-like substance found in the vesicle of the fungus, the sporangium can adhere itself onto whatever it lands, thus completing its life cycle.
The basionym of this species is Hydrogera crystallina F.H. Wigg. 1780.
The ability of this fungus to cause problems for florists was noted in the scientific literature in 1881:
... this small fungus had proved this season to be an expensive annoyance to florists engaged in winter forcing flowers. Rose-growers especially had found it to interfere seriously with their profits. The injury was caused by the projection of the sporangia which covered the flowers and leaves of the roses as if profusely dusted with black pepper. The flowers were almost unsaleable as the first impression was that the black dots were aphids.
Description
This fungus normally grows beneath the surface – a sensitivity to oxygen inhibits radial growth at the hyphae.
According to McVickar (1942), and later amended by Ootaki et al. (1993), the development of P. crystallinus may be divided into six stages: In stage I, the sporangiophore initially elongates at the apex, but does not rotate. In stage II, the sporangiophore develops a sporangium. In stage III, after the development of the sporangium, there is a temporary cessation of growth. In stage IV, a subsporangial vesicle expands beneath the sporangium. This is followed by stage V, where the spore matures, and the region of hypha directly below the subsporangial vesicle continues elongating. Finally, in stage VI, the subsporangial vesicle bursts and throws the sporangium into the air.
Scanning and transmission electron microscopy has shown that the surface of the sporangium is covered with crystals of two distinct sizes. The larger crystals enclose spines having a central pore.
Host species
Pilobolus crystallinus has been reported to grow on the dung of cattle.
References
External links
BBC Nature
Fungi described in 1780
Zygomycota
Fungus species | Pilobolus crystallinus | [
"Biology"
] | 600 | [
"Fungi",
"Fungus species"
] |
15,936,151 | https://en.wikipedia.org/wiki/Patatin | Patatin is a family of glycoproteins found in potatoes (Solanum tuberosum) and is also known as tuberin as it is commonly found within vacuoles of parenchyma tissue in the tuber of the plant. They consist of about 366 amino acids all making up and isoelectric point of 4.9. They have a molecular weight ranging from 40 to 45 kDa, but are commonly found as a 80kDa dimer. The main function of patatin is as a storage protein but it also has lipase activity and can cleave fatty acids from membrane lipids. The patatin protein makes up about 40% of the soluble protein in potato tubers. Members of this protein family have also been found in animals.
Allergy
Patatin is identified as a major cause of potato allergy. It has been found to be similar to latex, and when it comes in contact with open skin, there is an increase in immunoglobulin E, which causes allergic reactions and symptoms such as asthmatic symptoms or atopic dermatitis. It is unclear why the plant does this; it could be a potential defense mechanism against insects.
Function
Functionally, patatin serves as a key contributor to the antioxidant activity in potato tubers, in order to keep the potato fresh. Additionally, they function as acyl hydrolases, which breaks down different types of substrates. Notably, patatins also demonstrate β-1,3-glucanase activity, suggesting their involvement in breaking down polysaccharides. This diverse enzymatic activity contributes to ensuring the nutritional composition of the potato. Beyond their role as storage proteins, patatins play a significant part in the plant's defense mechanisms against pests and fungal pathogens. The galactolipase and β-1,3-glucanase activities exhibited by patatins are believed to contribute to the plant's resistance to external threats. This dual functionality underscores the importance of patatins in safeguarding the potato plant against potential environmental challenges.
Beyond its role as a storage protein, patatin's functions extend to antioxidant activity and categorization as an esterase enzyme complex. It demonstrates enzymatic activity in lipid metabolism through lipid acyl hydrolases (LAHs) and acyl transferases. This activity varies across potato cultivars, extraction techniques, and fatty acid substrates.
Structure
The patatin genes are located at a single major locus, comprising both functional and non-functional genes. Patatin isoforms exhibit considerable variability among different potato cultivars. Patatin's primary residence in the vacuole, alongside protease inhibitor variants, positions it as a major player in potato tuber proteins. The ngLOC software predicts 296 vacuolar proteins, with 450 putative vacuolar proteins identified through mass spectrometric sequencing. Notably, the tuber vacuole is recognized as a protein storage vacuole, with a distinct absence of proteolytic or glycolytic enzymes. Structurally, patatin emerges as a tertiary stabilized protein, exhibiting stability up to 45 °C. Beyond this threshold, its secondary structure begins to unfold, with the α-helical portion denaturing at 55 °C. This vulnerability to temperature changes highlights the delicate balance in maintaining its structural integrity.
Patatin's hydrolase activity, attributed to its parallel β-sheet core with a catalytic serine located in the nucleophilic elbow loop, places it within the hydrolase family. This core structure is crucial for its lipid acyl hydrolase (LAH) activity, providing insights into its enzymatic functions and potential participation in plant defenses.
One study delves into the multifaceted properties of patatin, the predominant protein in potatoes, revealing its structural diversity through the identification of several isoforms. Notably rich in essential amino acids, patatin emerges as a valuable source of nutrition. The glycoprotein nature of patatin, characterized by O-linked glycosylation, incorporates various monosaccharides, including fucose, indicating a fucosylated glycan structural feature. The specific binding of patatin to AAL, a fucose-affine lectin, underscores its distinctive glycan composition. Moving beyond its molecular characteristics, the research explores the regulatory effects of patatin on lipid metabolism, fat catabolism, fat absorption, and lipase activity in zebrafish larvae subjected to high-fat feeding. Results suggest that patatin, at a concentration of 37.0 μg/mL, promotes lipid decomposition metabolism by 23% and exhibits inhibitory effects on lipase activity and fat absorption, positioning it as a potential natural constituent with anti-obesity properties. These findings illuminate the diverse facets of patatin, shedding light on its nutritional significance and its prospective role in combating obesity.
Isoforms
Patatin is a complex assembly of proteins represented by two multigene families: class I in large concentrations in the tuber and class II in smaller concentrations throughout the potato plant. Isoforms A, B, C, and D exhibit charge-based differences, with isoform A presenting the lowest surface charge. These isoforms, homologous in nature, differ in molecular masses and ratios, showcasing their structural diversity.
Glycosylation
Patatin isoforms undergo glycosylation, impacting their molecular masses and contributing to variations between isoforms. Experimental discrepancies in molar mass differences indicate potential glycosylation between protein and carbohydrates in potatoes. This glycosylation may play a role in the protein's functional characteristics.
Patatin-like phospholipase
The patatin-like phospholipase (PNPLA) domain, found in proteins encoding patatin, is widespread across diverse life forms, spanning eukaryotes and prokaryotes. These proteins are involved in a variety of biological functions, encompassing sepsis induction, host colonization, triglyceride metabolism, and membrane trafficking. Key features of PNPLA domain-containing proteins include their lipase and transacylase properties, signifying their significant roles in maintaining lipid and energy homeostasis across different organisms and biological contexts.
References
Potatoes | Patatin | [
"Chemistry"
] | 1,317 | [
"Biochemistry stubs",
"Protein stubs"
] |
15,936,520 | https://en.wikipedia.org/wiki/RNA%20extraction | RNA extraction is the purification of RNA from biological samples. This procedure is complicated by the ubiquitous presence of ribonuclease enzymes in cells and tissues, which can rapidly degrade RNA. Several methods are used in molecular biology to isolate RNA from samples, the most common of these being guanidinium thiocyanate-phenol-chloroform extraction. The phenol-chloroform solution used for RNA extraction usually has a lower pH, which aids in separating DNA from RNA and leads to a purer RNA preparation. The filter paper based lysis and elution method features high throughput capacity.
Grinding samples in liquid nitrogen, commonly using a mortar and pestle (or specialized steel devices known as tissue pulverizers), is also useful in preventing ribonuclease activity during RNA extraction.
RNase contamination
The extraction of RNA in molecular biology experiments is greatly complicated by the presence of ubiquitous and hardy RNases that degrade RNA samples. Certain RNases can be extremely hardy, and inactivating them is difficult compared to neutralizing DNases. In addition to the cellular RNases that are released, there are several RNases present in the environment. RNases have evolved to have many extracellular functions in various organisms. For example, RNase 7, a member of the RNase A superfamily, is secreted by human skin and serves as a potent antipathogen defence. For these secreted RNases, enzymatic activity may not even be necessary for the RNase's exapted function; for example, immune RNases act by destabilizing the cell membranes of bacteria.
To counter this, equipment used for RNA extraction is usually cleaned thoroughly, kept separate from common lab equipment and treated with various harsh chemicals that destroy RNases. For the same reason, experimenters take special care not to let their bare skin touch the equipment. Broad RNase inhibitors are also commercially available and are sometimes added to in vitro transcription (RNA synthesis) reactions.
See also
Column purification
DNA extraction
Ethanol precipitation
Phenol-chloroform extraction
References
External links
Two-phase wash to solve the ubiquitous contaminant-carryover problem in commercial nucleic-acid extraction kits; by Erik Jue, Daan Witters & Rustem F. Ismagilov; Nature, Scientific reports, 2020.
Biochemical separation processes
Genetics techniques | RNA extraction | [
"Chemistry",
"Engineering",
"Biology"
] | 486 | [
"Biochemistry methods",
"Genetics techniques",
"Separation processes",
"Genetic engineering",
"Biochemical separation processes"
] |
15,936,857 | https://en.wikipedia.org/wiki/Private%20landowner%20assistance%20program | Private landowner assistance program (PLAP) is a class of government assistance program available throughout the U.S. for landowners interested in maintaining, developing, improving and protecting wildlife on their property. Each state provides various programs that assist landowners in agriculture, forestry and conserving wildlife habitat. This helps landowners in the practice of good land stewardship and provides multiple benefits to the environment. Some states offer technical assistance which includes:
assisting the landowner to decide which programs will fit the landowner's needs,
assisting landowners with processes and procedures,
and assisting in coming up with a plan that will be beneficial to the species present on their land while preserving their natural habitat.
Landowner incentive programs
Landowner incentive programs work to financially assist landowners in the restoration and protection of endangered species. Generally, any private landowner or organization can apply for assistance, but preference is given to areas in greatest need of protection.
Wildlife Habitat Incentives Program (WHIP)
WHIP is a voluntary landowner program that is devoted to the improvement of upland wildlife habitat. It is available in all 50 states and has enrolled nearly 11,000 landowners totaling since its beginning in 1998. Eligibility is limited to privately owned, federal, tribal and government lands (Limited). Once approved, land management plans are designed with one of two primary agendas.
Habitat for declining species
Wildlife and fishery habitats and sustainable practices
Proposed management plans are considered for 5-, 10- or 15-year time spans, with increased cost-share benefits for longer commitments.
Forest Land Enhancement Program (FLEP)
FLEP is a USDA incentive program designed to maintain the long-term sustainability of non-industrial private forests. The program provides financial and educational assistance to landowners who compose a qualifying management plan. Initially proposed plans must be 10-year management strategies and can manage no more than (additional area can be added in special cases).
Tax incentives
Another way landowners can be persuaded to conserve their private land is through tax incentive programs. For example, Louisiana has a tax exemption program providing tax relief for landowners that commit to specific management plans.
Agricultural conversion programs
Conservation Reserve Program - State Acres for wildlife Enhancement (SAFE)
The United States Department of Agriculture (USDA) started the Conservation Reserve Program as part of the Food Security Act of 1985. The program is designed to provide assistance and incentives for farmers to maintain sustainable farming practices and to encourage the development of natural wildlife habitat.
The State Acres for Wildlife Enhancement (SAFE) program was approved by the USDA as an offshoot of the Conservation Reserve Program. The program is designed to further protect threatened and endangered species habitat through the restoration of eligible property. The overall goal of the program is to restore and enhance up to but no more than of wildlife habitat. Eligibility requirements, designated SAFE zones and sign-up practices vary from state to state.
Agricultural Management Assistance
Agricultural Management Assistance (AMA) can provide financial assistance to farming landowners willing to volunteer their land for conservation. Funding can be used in a variety of management plans, including windbreak planting, irrigation improvements, soil erosion control, sustainable pest management and the development of new organic farming operations. The AMA has a limited annual budget of $20 million, and individual landowners can qualify for up to $50,000 in AMA payments per year. AMA is available in 15 states, and interested landowners can apply via their local Natural Resources Conservation Service (NRCS) or conservation district office.
Grassland Reserve Program
The Grassland Reserve Program is a voluntary landowner program that provides financial and educational support to landowners wishing to maintain or enhance grasslands on their property. The program allows for restoration of multiple types of grasslands, including shrub-land, pasture and range. Its main goal is to prevent the conversion of native grasslands to other land uses such as development and agriculture. Once protected, the land does not necessarily remain untouched: easements may be applied for which allow temporary practices such as grazing, hay harvest, seed harvest or mowing to occur. All temporary easements are decided on while taking disturbance possibilities into account. In terms of land cover, grasslands have the highest percentage of coverage, with more than in the United States alone.
Grazing Land Conservation Initiative (GLCI)
The Grazing Land Conservation Initiative (GLCI) is set up to help improve grazing land that is privately owned. This program targets landowners and promotes the maintenance of private grazing land in order to produce higher quality grass than previously found in a specific location. The GLCI provides education materials for anyone who is interested in improving their private grazing land.
Conservation of Private Grazing Land Program (CPGL)
Conservation of Private Grazing Land Program (CPGL) provides private landowners with the necessary tools to maintain high quality grasslands. The primary agenda of the CPGL is to increase the diversity of the land and aid in water managing practices for grazing. No funding is available through this program.
Forest legacy program
The Forest Legacy Program (FLP) is a federal program, run in partnership with individual states, that protects forests which are environmentally sensitive or endangered. The program focuses on interests and issues that deal with privately owned forests. The FLP provides financial assistance for privately owned forest that is endangered due to anthropogenic development, or forest that has become fragmented due to previous practices. The Forest Legacy Program provides alternatives for landowners located in these troubled forested areas. The FLP also develops cooperative conservation plans that allow private landowners to retain land ownership without the need to negotiate property rights. This reduces the effort needed to maintain a sustainable management plan and ultimately increases the benefit to the forest. The Forest Legacy Program has two main goals: the first is to support property acquisition and the second is to acquire donated conservation easements. Participation in the FLP is limited to private landowners, and the federal government funds up to 75% of the costs involved; the remaining 25% comes from the landowners as well as other local and state resources. The FLP has partnered with the Montana Department of Fish, Wildlife and Parks in an effort to protect almost of forested terrain. The Forest Legacy Program has websites for specific states working together.
Forest stewardship programs
The Forest Stewardship Program (FSP) provides assistance to non-industrial private forest owners by encouraging and enabling long-term forest management. The program provides landowners with information on development and multi-source planning in an effort to manage private forests for goods and services. Increased economic output, along with increased output from the forest, is the main goal of the program. Since its introduction, the program has developed 270,000 management plans that cover more than 31,000,000 acres (130,000 km2) of private land. Stewardship plans promote forest health and development through active management while providing timber, wildlife habitat, natural watersheds, recreational opportunities and many other benefits. Stewardship plans also motivate landowners to become actively involved in planning and managing their land, which can eventually lead to healthier and more productive forests. Participation in forest stewardship programs is generally open to all private landowners who are committed to a management plan for at least ten years.
Forestry Contractors
Forestry contractors are local individuals and professionals who can provide landowners with general forest management information and assistance on a wide range of questions and projects. Forestry contractors assist private landowners on issues such as species identification, timber management, timber stand improvement, timber sales, wildlife management and habitat improvement, endangered and threatened species information, erosion management, recreational development, tree and shrub selection, hazard tree appraisal, forest inventory and damage appraisal. Contact information for forestry contractors and other service forestry experts can generally be found on local Department of Natural Resources websites.
Urban and community forestry programs
Urban and community forests are the trees, plants and ecosystems occurring in developed areas. Urban and community forestry programs work to create and maintain sustainable communities and improve overall urban aesthetics. Programs are designed to conserve natural resources by utilizing a variety of tools including property tax assessment and forest easement programs. They assist landowners with species identification and management of existing community forests with the main goal of creating healthy functional ecosystems within residential communities. Urban and community forestry programs are not only limited to trees and shrubs but also to the factors that contribute to the growth of these organisms. Additional factors include soil, water and air quality. These programs educate citizens on proper tree planting techniques, gardening, nature and how to utilize their land more efficiently. Investments in this program provide clean air and water, energy conservation, reduction in greenhouse gases and add beauty to urban areas.
Watershed forestry programs
A watershed or drainage basin is an area of land that drains into a common water body such as a stream, lake, estuary, aquifer or ocean. The watershed approach is an important framework for addressing today's water challenges. More than $450 billion in food and fiber, manufactured goods, and tourism depends on clean water and healthy watersheds. The watershed approach consists of three main strategies:
Hydrologically defined: which takes geography and all other factors into consideration
Involves the stakeholders: which includes the federal, state, local and private sectors
Strategically addresses water resource goals: which focuses on the water quality and habitat of a particular region. The strategy uses adaptive management and multiple programs which consist of mandatory and voluntary aspects
The Environmental Protection Agency (EPA) has created a website which contains information about sources of funding for practitioners and funders whose goal is to serve and protect watersheds.
Nursery and seedbank programs
Nursery and seedbank programs aid conservation programs by supplying trees and shrubs at different successive levels. Plant materials are available for both private and public conservation programs and must be used for the following conservation purposes:
Windbreaks
Shelterbelts
Woodlots
Erosion Control
Wildlife Habitat
Christmas Tree Farms
Streambank Stabilization
Greenstripping
Mine Reclamation
Northeastern forest legacy program
The Northeastern Forest Legacy Program is an alliance between the USDA Forest Service and the individual states to protect forests for future generations. The purpose of the program is to preserve forest areas that are threatened by conversion to non-forest uses. Seventy-five percent of the programs that belong to this alliance are funded by the federal government, and the other 25% comes from private, state and local communities or organizations. The technique used to protect the forests is the conservation easement. Land that has scenic value or fish and wildlife value, or that contains endangered or threatened species, is prioritized. Some of the main characteristics of the program are:
It is voluntary
The program helps state and local partners identify important areas that need immediate attention
The program is based on a “willing seller and willing buyer” concept
When conservation easements are used the land remains privately owned
FLP consists of protection tools such as full-fee purchase, voluntary deed restriction, and agreements
Illinois Acres for Wildlife
Illinois Acres for Wildlife is an Illinois Department of Natural Resources (IDNR) voluntary program designed to provide assistance to private landowners wishing to maintain their property. The ultimate goal of the program is to inform and educate landowners so they understand how their property fits into a broad management plan. The IDNR provides an initial resource assessment for participating landowners in order to design an effective management plan. No financial assistance is described or offered by the acres for wildlife program but the IDNR can provide seed and seedling stock for qualifying areas.
Note: This Illinois plan was discontinued in 2020 per Jeff Horn of the Illinois Department of Natural Resources.
American Tree Farm System
"Wood is a crop. Forestry is Tree Farming."
— Gifford Pinchot, First chief of the USDA Forest Service.
The American Tree Farm System is an organized collection of private landowners interested in effectively managing their woodland properties. Founded in 1941, the ATFS consists of more than of privately owned forest in 46 states. There are 4,400 volunteers who inspect the forest grounds, and there are 87,000 family forest owners. The ATFS is primarily known for continuous wood and timber production, but it also consists of many programs and committees that work to ensure the protection of wildlife habitats, watersheds, soil quality and recreation for communities. The habitat and resources that tree farms provide differ greatly based on their location and on the species of trees that are planted. Farms in the system attempt to maintain a healthy level of biodiversity by creating natural forest buffers, practicing sustainable harvesting techniques and minimizing land fragmentation. Tree farm systems in each state are self-governing and all work under specific guidelines developed by the ATFS's National Operating Committee. The term tree farming was introduced in 1940, linking trees with farming in an attempt to make it easier for the public to conceptualize trees as renewable resources.
Forest Landowners Association (FLA)
60% of the nation's forestlands are privately owned. In order to sustain private forests, FLA works to sustain the people who own them. The association works on behalf of all private landowners' interests, regardless of whether they are members or not. Since 1941, FLA has provided its members, who own and operate more than 40 million acres of forestland in 48 states, with education, information, and national grassroots advocacy, which enables them to sustain their forestlands across generations and helps protect the rights of America's private forest landowners, along with the diverse habitats, clean water and air, recreation and the other benefits that private forests provide. Outreach on behalf of private forest landowners nationwide enhances landowners' forestland management practices and stewardship.
Viable markets and reasonable regulations are fundamental to sustaining private forests, forestry related jobs and forest stewardship. FLA communicates advice, support and information to policy makers on behalf of all private landowners, on how proposed legislation could affect private forest management, stewardship and owners’ rights. FLA provides a voice for forest landowners on national and regional issues, and follows legislation appearing before Congress that affects forest landowners and their property.
Members of the Forest Landowners Association are a diverse group of individual & institutional landowners, consulting foresters, and corporations. Motives for their support are varied but FLA is an advocate of all private landowners-regardless of size, corporate structure, location, certification status, or tax classification. Forest Landowners Association works with many organizations.
Wetland Reserve Program (WRP)
The Wetland Reserve Program (WRP) funds landowners who volunteer their land for wetland development and provides opportunities for landowners to participate in the maintenance of the project. The land must meet specific requirements to receive funding, and the program is set up for each state in the United States.
The Landowner has up to three choices:
Permanent Easement
30-Year Easement
Restoration Cost-Share Agreement
References
U.S. Department of Natural Resource
External links
The following list is a collection of links to state department websites and other natural resource organizations. Each link is specific to the many private landowner services provided by different departments throughout the United States.
Alabama-http://www.dcnr.state.al.us/
Alaska-http://www.state.ak.us/adfg/
Arizona-http://www.gf.state.az.us/
Arkansas-http://www.agfc.com/index.html
California-http://www.dfg.ca.gov/
Colorado-http://wildlife.state.co.us/, http://coloradoriparian.org/
Connecticut-http://dep.state.ct.us/
Delaware-http://www.dnrec.state.de.us/fw/
Florida-http://www.floridaconservation.org//, http://www.floridaforestservice.com/services.html
Georgia-http://www.DNR.State.GA.US/
Hawaii-http://www.hawaii.gov/dlnr/
Idaho-http://www2.state.id.us/fishgame/
Illinois- http://dnr.state.il.us/OREP/C2000/Incentives.htm#PLWHP
Indiana- http://www.in.gov/dnr/forestry/
Iowa- https://web.archive.org/web/20080219134410/http://www.iowadnr.gov/forestry/private.html
Kansas- https://web.archive.org/web/20080216002459/http://www.kdwp.state.ks.us/news/other_services/private_landowner_assistance
Kentucky- https://web.archive.org/web/20080221220153/http://fw.ky.gov/navigation.asp?cid=647&NavPath=C100C366
Louisiana- https://web.archive.org/web/20070813173836/http://www.biodiversitypartners.org/state/la/incentives.shtml
Maine- http://www.swoam.org/
Maryland- https://web.archive.org/web/20110809172610/http://www.dnr.state.md.us/wildlife/Habitat/lip_intro.asp
Massachusetts- http://www.mass.gov/dfwele/dfw/habitat/grants/lip/lip_home.htm
Michigan- http://www.michigan.gov/dnr/0,1607,7-153-10370_36649---,00.html
Minnesota- https://web.archive.org/web/20061008132530/http://www.dnr.state.mn.us/lip/index.html - http://files.dnr.state.mn.us/forestry/urban/bmps.pdf
Mississippi- http://www.mdwfp.com/Level2/Wildlife/Lip/Introduction.asp
Missouri- http://www.mo.nrcs.usda.gov/programs/whip/whip.html
Montana - https://web.archive.org/web/20080224214819/http://dnrc.mt.gov/forestry/Assistance/Stewardship/fsp.asp
Nevada- https://web.archive.org/web/20080228062800/http://www.forestry.nv.gov/main/resource01.htm
New Hampshire - http://www.wildlife.state.nh.us/Wildlife/Landowner_LIP_program.htm
New Jersey - http://www.state.nj.us/dep/parksandforests/forest/njfs_private_lands_mgt.html
New Mexico - http://www.emnrd.state.nm.us/FD/ForestMgt/ForestStewardship.htm
New York - http://www.dec.ny.gov/lands/4972.html
North Carolina - http://www.dfr.state.nc.us/tending/tending_your_forest.htm
North Dakota - https://web.archive.org/web/20080224085612/http://gf.nd.gov/maps/pli-program.html
Ohio - http://www.dnr.state.oh.us/Home/landowner/default/tabid/5279/Default.aspx
Oklahoma - https://web.archive.org/web/20080312180554/http://www.wildlifedepartment.com/laprogrm4.htm
Oregon - https://web.archive.org/web/20080307050241/http://www.dfw.state.or.us/LIP/
Pennsylvania - http://www.dcnr.state.pa.us/forestry/privatelands.aspx
https://web.archive.org/web/20080305052933/http://www.treefarmsystem.org/cms/pages/69_1.html
Rhode Island- https://web.archive.org/web/20080509072341/http://www.dem.ri.gov/programs/bnatres/forest/index.htm
South Carolina-http://www.dnr.sc.gov/land/foreststeward.html
South Dakota-http://www.sdgfp.info/Wildlife/privatelands/Index.htm
Tennessee-http://www.state.tn.us/twra/tnlip.html
Texas-http://www.tpwd.state.tx.us/landwater/land/private/ (Private Land)
http://www.tpwd.state.tx.us/landwater/land/technical_guidance/ (Landowner Assistanship)
Utah-http://www.ffsl.utah.gov/mmlandownerforassist.php
Vermont-http://www.vtfpr.org/lands/index.cfm
Virginia-http://www.dgif.state.va.us/habitat/lip/
Washington-http://wdfw.wa.gov/lands/lip/
West Virginia- http://www.joe.org/joe/2004august/rb5.shtml
Wisconsin- https://web.archive.org/web/20080304062221/http://dnr.wi.gov/forestry/private/financial/costshare.htm
Wyoming- http://gf.state.wy.us/wildlife/nongame/LIP/index.asp
Agricultural economics
Nature conservation in the United States
Wildlife conservation
Federal assistance in the United States | Private landowner assistance program | [
"Biology"
] | 4,465 | [
"Wildlife conservation",
"Biodiversity"
] |
15,939,094 | https://en.wikipedia.org/wiki/JUGENE | JUGENE (Jülich Blue Gene) was a supercomputer built by IBM for Forschungszentrum Jülich in Germany. It was based on the Blue Gene/P design and succeeded JUBL, which was based on an earlier design. At its introduction it was the second fastest computer in the world, and the month before its decommissioning in July 2012 it still held the 25th position in the TOP500 list. The computer was owned by the "Jülich Supercomputing Centre" (JSC) and the Gauss Centre for Supercomputing.
With 65,536 PowerPC 450 cores, clocked at 850 MHz and housed in 16 cabinets, the computer reached a peak processing power of 222.8 TFLOPS (Rpeak). With an official Linpack rating of 167.3 TFLOPS (Rmax), JUGENE took second place overall and was the fastest civil/commercially used computer in the TOP500 list of November 2007.
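The quoted Rpeak follows directly from the core count and clock rate once the per-core throughput is fixed. The short Python sketch below assumes four floating-point operations per cycle per PowerPC 450 core (a dual-pipeline fused multiply-add unit) — an assumption added here rather than a figure stated above — and it also reproduces the roughly 1 PFLOPS figure of the later 294,912-core configuration.

```python
# Rough Rpeak check for JUGENE: cores x clock x FLOPs per cycle.
# Assumption: 4 floating-point operations per cycle per PowerPC 450 core.
def rpeak_tflops(cores, clock_ghz, flops_per_cycle=4):
    """Peak performance in TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

print(rpeak_tflops(65_536, 0.85))   # ~222.8 TFLOPS (original 16-rack system)
print(rpeak_tflops(294_912, 0.85))  # ~1002.7 TFLOPS, about 1 PFLOPS (2009 upgrade)
```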
The computer was financed by Forschungszentrum Jülich, the State of North Rhine-Westphalia, the Federal Ministry for Research and Education as well as the Helmholtz Association of German Research Centres. The head of the JSC, Thomas Lippert, said that "The unique thing about our JUGENE is its extremely low power consumption compared to other systems even at maximum computing power". A Blue Gene/P-System should reach about 0.35 GFLOPS/Watt and is therefore an order of magnitude more effective than a common x86 based supercomputer for a similar task.
In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance in June 2009, making it the first petascale supercomputer in Europe.
On May 26, 2009, the newly configured JUGENE was unveiled. It included 294,912 processor cores, 144 terabytes of memory and 6 petabytes of storage in 72 racks. With a peak performance of about one PetaFLOPS, it was at the time the third fastest supercomputer in the world, ranking behind IBM Roadrunner and Jaguar. The new configuration also incorporated a new water cooling system that reduced cooling costs substantially.
The two front nodes of JUGENE are operated with SUSE Linux Enterprise Server 10.
JUGENE was decommissioned on 31 July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
References
IBM supercomputers
Parallel computing
Petascale computers
Supercomputing in Europe
Jülich Research Centre | JUGENE | [
"Technology"
] | 515 | [
"Supercomputing in Europe",
"Supercomputing"
] |
15,939,934 | https://en.wikipedia.org/wiki/Rhodobacter%20sphaeroides | Rhodobacter sphaeroides is a species of purple bacteria, a group of bacteria that can obtain energy through photosynthesis. Its best growth conditions are anaerobic phototrophy (photoheterotrophic and photoautotrophic) and aerobic chemoheterotrophy in the absence of light. R. sphaeroides is also able to fix nitrogen. It is remarkably metabolically diverse, as it is able to grow heterotrophically via fermentation and via aerobic and anaerobic respiration. Such metabolic versatility has motivated the investigation of R. sphaeroides as a microbial cell factory for biotechnological applications.
Rhodobacter sphaeroides has been isolated from deep lakes and stagnant waters.
Rhodobacter sphaeroides is one of the most pivotal organisms in the study of bacterial photosynthesis. It requires no unusual conditions for growth and is incredibly efficient. The regulation of its photosynthetic machinery is of great interest to researchers, as R. sphaeroides has an intricate system for sensing O2 tensions. Also, when exposed to a reduction in the partial pressure of oxygen, R. sphaeroides develops invaginations in its cellular membrane. The photosynthetic apparatus is housed in these invaginations. These invaginations are also known as chromatophores.
The genome of R. sphaeroides is also somewhat intriguing. It has two chromosomes, one of 3 Mb (CI) and one of 900 Kb (CII), and five naturally occurring plasmids. Many genes are duplicated between the two chromosomes but appear to be differentially regulated. Moreover, many of the open reading frames (ORFs) on CII seem to code for proteins of unknown function. When genes of unknown function on CII are disrupted, many types of auxotrophy result, emphasizing that the CII is not merely a truncated version of CI.
Small non-coding RNA
Bacterial small RNAs have been identified as components of many regulatory networks. Twenty sRNAs were experimentally identified in Rhodobacter sphaeroides, and the abundant ones were shown to be affected by singlet oxygen (1O2) exposure. 1O2, which generates photooxidative stress, is produced by bacteriochlorophyll upon exposure to oxygen and light. One of the 1O2-induced sRNAs, SorY (1O2 resistance RNA Y), was shown to be induced under several stress conditions and conferred resistance against 1O2 by affecting a metabolite transporter. SorX is a second 1O2-induced sRNA that counteracts oxidative stress by targeting the mRNA for a transporter; it also has an impact on resistance against organic hydroperoxides. A cluster of four homologous sRNAs, called CcsR for conserved CCUCCUCCC motif stress-induced RNA, has also been shown to play a role in photo-oxidative stress resistance. PcrZ (photosynthesis control RNA Z), identified in R. sphaeroides, is a trans-acting sRNA which counteracts the redox-dependent induction of photosynthesis genes mediated by protein regulators.
Metabolism
R. sphaeroides encodes several terminal oxidases which allow electron transfer to oxygen and other electron acceptors (e.g. DMSO or TMAO). Therefore, this microorganism can respire under oxic, micro-oxic and anoxic conditions under both light and dark conditions.
Moreover, it is capable of using a variety of carbon substrates, including C1 to C4 molecules, sugars and fatty acids. Several pathways for glucose catabolism are present in its genome, such as the Embden–Meyerhof–Parnas pathway (EMP), the Entner–Doudoroff pathway (ED) and the pentose phosphate pathway (PP). The ED pathway is the predominant glycolytic pathway in this microorganism, with the EMP pathway contributing only to a smaller extent. Variation in nutrient availability has important effects on the physiology of this bacterium. For example, a decrease in oxygen tension activates the synthesis of the photosynthetic machinery (including photosystems, antenna complexes and pigments), and depletion of nitrogen in the medium triggers intracellular accumulation of polyhydroxybutyrate, a reserve polymer.
Biotechnological applications
A genome-scale metabolic model exists for this microorganism, which can be used for predicting the effect of gene manipulations on its metabolic fluxes. For facilitating genome editing in this species, a CRISPR/Cas9 genome editing tool was developed and expanded. Moreover, partitioning of intracellular fluxes has been studied in detail, also with the help of 13C-glucose isotopomers. Altogether, these tools can be employed for improving R. sphaeroides as cell factory for industrial biotechnology.
Knowledge of the physiology of R. sphaeroides allowed the development of biotechnological processes for the production of some endogenous compounds. These are hydrogen, polyhydroxybutyrate and isoprenoids (e.g. coenzyme Q10 and carotenoids). Moreover, this microorganism is also used for wastewater treatment. Hydrogen evolution occurs via the activity of the enzyme nitrogenase, whereas isoprenoids are synthesized naturally via the endogenous MEP pathway. The native pathway has been optimized via genetic engineering to improve coenzyme Q10 synthesis. Alternatively, improvement of isoprenoid synthesis was obtained via the introduction of a heterologous mevalonate pathway. Synthetic biology-driven engineering of the metabolism of R. sphaeroides, in combination with the functional replacement of the MEP pathway by the mevalonate pathway, allowed further increases in the bioproduction of isoprenoids in this species.
Accepted name
Rhodobacter sphaeroides (van Niel 1944) Imhoff et al., 1984
Synonyms
Rhodococcus minor Molisch 1907
Rhodococcus capsulatus Molisch 1907
Rhodosphaera capsulata (Molisch) Buchanan 1918
Rhodosphaera minor (Molisch) Bergey et al. 1923
Rhodorrhagus minor (Molisch) Bergey et al. 1925
Rhodorrhagus capsulatus (Molisch) Bergey et al. 1925
Rhodorrhagus capsulatus Bergey et al. 1939
Rhodopseudomonas sphaeroides van Niel 1944
Rhodopseudomonas spheroides van Niel 1944
Rhodorrhagus spheroides (van Niel) Brisou 1955
Reclassification
In 2020 it was recommended that Rhodobacter sphaeroides be moved to the genus Cereibacter. This is the name currently used by the NCBI taxonomy database.
References
Bibliography
Inomata Tsuyako, Higuchi Masataka (1976), Incorporation of tritium into cell materials of Rhodpseudomonas spheroides from tritiated water in the medium under aerobic conditions ; Journal of Biochemistry 80(3), p569-578, 1976-09
External links
Video recordings van R. sphaeroides
Type strain of Rhodobacter sphaeroides at BacDive - the Bacterial Diversity Metadatabase
Phototrophic bacteria
Rhodobacteraceae
Bacteria described in 1944 | Rhodobacter sphaeroides | [
"Chemistry",
"Biology"
] | 1,569 | [
"Bacteria",
"Photosynthesis",
"Phototrophic bacteria"
] |
15,939,973 | https://en.wikipedia.org/wiki/Tripartite%20motif%20family | The tripartite motif family (TRIM) is a protein family.
Function
Many TRIM proteins are induced by interferons, which are important components of resistance to pathogens, and several TRIM proteins are known to be required for the restriction of infection by lentiviruses. TRIM proteins are involved in pathogen recognition and in the regulation of transcriptional pathways in host defence.
Structure
The tripartite motif is always present at the N-terminus of the TRIM proteins. The TRIM motif includes the following three domains:
(1) a RING finger domain
(2) one or two B-box zinc finger domains
when only one B-box is present, it is always a type-2 B-box
when two B-boxes are present the type-1 B-Box always precedes the type-2 B-Box
(3) coiled coil region
The C-terminus of TRIM proteins contain either:
Group 1 proteins: a C-terminal domain selected from the following list:
NHL and IGFLMN domains, either in association or alone
PHD domain associated with a bromodomain
MATH domain (in e.g., TRIM37)
ARF domain (in e.g., TRIM23)
EXOIII domain (in e.g., TRIM19) or
Group 2 proteins: a SPRY C-terminal domain
e.g. TRIM21
Family members
The TRIM family is split into two groups that differ in domain structure and genomic organization:
Group 1 members possess a variety of C-terminal domains, and are represented in both vertebrate and invertebrates
Group 2 is absent in invertebrates, possess a C-terminal SPRY domain
Members of the family include:
Group 1
PHD-BROMO domain containing: TRIM24 (TIF1α), TRIM28 (TIF1β), TRIM33 (TIF1γ)– act as corepressors
1-10: TRIM1, TRIM2, TRIM3, TRIM8, TRIM9
11-20: TRIM12, TRIM13, TRIM14, TRIM16, TRIM18, TRIM19
21-30: TRIM23, TRIM25, TRIM29, TRIM30
31-40: TRIM32, TRIM36, TRIM37
41-50: TRIM42, TRIM44, TRIM45, TRIM46, TRIM47
51-60: TRIM51, TRIM53, TRIM54, TRIM55, TRIM56, TRIM57, TRIM59
61-70: TRIM62, TRIM63, TRIM65, TRIM66, TRIM67, TRIM69, TRIM70
71-75: TRIM71
Group 2
1-10: TRIM4, TRIM5, TRIM6, TRIM7, TRIM10
11-20: TRIM11, TRIM12, TRIM15, TRIM17, TRIM20
21-30: TRIM21, TRIM22, TRIM26, TRIM27, TRIM30
31-40: TRIM31, TRIM34, TRIM35, TRIM38, TRIM39, TRIM40
41-50: TRIM41, TRIM43, TRIM48, TRIM49, TRIM50
51-60: TRIM51, TRIM52, TRIM53, TRIM57, TRIM58, TRIM60
61-70: TRIM61, TRIM64, TRIM68, TRIM69, TRIM70
71-75: TRIM72, TRIM73, TRIM74, TRIM75
References
Gene expression
Transcription coregulators | Tripartite motif family | [
"Chemistry",
"Biology"
] | 685 | [
"Protein stubs",
"Gene expression",
"Biochemistry stubs",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
15,939,976 | https://en.wikipedia.org/wiki/Optical%20comparator | An optical comparator (often called just a comparator in context) or profile projector is a device that applies the principles of optics to the inspection of manufactured parts. In a comparator, the magnified silhouette of a part is projected upon the screen, and the dimensions and geometry of the part are measured against prescribed limits. It is a useful item in a small parts machine shop or production line for the quality control inspection team.
The measuring happens in any of several ways. The simplest way is that graduations on the screen, being superimposed over the silhouette, allow the viewer to measure, as if a clear ruler were laid over the image. Another way is that various points on the silhouette are lined up with the reticle at the centerpoint of the screen, one after another, by moving the stage on which the part sits, and a digital read out reports how far the stage moved to reach those points. Finally, the most technologically advanced methods involve software that analyzes the image and reports measurements. The first two methods are the most common; the third is newer and not as widespread, but its adoption is ongoing in the digital era.
History
The first commercial comparator was developed by James Hartness and Russell W. Porter. Hartness' long-continuing work as the Chairman of the U.S.'s National Screw-Thread Commission led him to apply his familiarity with optics (from his avocations of astronomy and telescope-building) to the problem of screw thread inspection. The Hartness Screw-Thread Comparator was for many years a profitable product for the Jones and Lamson Machine Company, of which he was president.
In subsequent decades optical comparators have been made by many companies and have been applied to the inspection of many kinds of parts. Today they may be found in many machine shops.
The idea of mixing optics and measurement, and the use of the term comparator for metrological equipment, had existed in other forms prior to Hartness's work; but they had remained in realms of pure science (such as telescopy and microscopy) and highly specialized applied science (such as comparing master measuring standards). Hartness's comparator, intended for the routine inspection of machined parts, was a natural next step in the era during which applied science became widely integrated into industrial production.
Usage
The profile projector is widely used for complex-shape stampings, gears, cams, threads and comparing the measured contour model. The profile projector is hence widely used in precision machinery manufacturing, including aviation, aerospace industry, watches and clocks, electronics, instrumentation industry, research institutes and detection metering stations at all levels, etc.
Work principle
The projector magnifies the profile of the specimen, and displays this on the built-in projection screen.
On this screen there is typically a grid that can be rotated 360 degrees so the X-Y axis of the screen can be aligned with a straight edge of the machined part to examine or measure. This projection screen displays the profile of the specimen and is magnified for better ease of calculating linear measurements.
An edge of the specimen to examine may be lined up with the grid on the screen. From there, simple measurements may be taken for distances to other points. This is being done on a magnified profile of the specimen. It can be simpler as well as reduce errors by measuring on the magnified projection screen of a profile projector.
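Since the screen shows the part at a known, fixed magnification, a distance read off the screen is converted back to the true part dimension by dividing by that magnification. The snippet below is a minimal illustration of this bookkeeping; the function name, the 20x lens value and the tolerance figures are illustrative assumptions rather than values from any particular instrument.

```python
# Convert a distance measured on the projection screen back to the true part
# dimension, then check it against a drawing tolerance.
def true_dimension(screen_mm, magnification):
    """Actual part dimension (mm) from a length measured on the screen (mm)."""
    return screen_mm / magnification

measured_on_screen = 52.4          # mm read against the screen graticule
lens_magnification = 20            # e.g. a 20x projection lens
actual = true_dimension(measured_on_screen, lens_magnification)

nominal, tolerance = 2.60, 0.05    # mm, illustrative drawing values
print(f"part dimension = {actual:.3f} mm,",
      "PASS" if abs(actual - nominal) <= tolerance else "FAIL")
```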
The typical method for lighting is by diascopic illumination, which is lighting from behind. This type of lighting is also called transmitted illumination when the specimen is translucent and light can pass through it. If the specimen is opaque, then the light will not go through it, but will form a profile of the specimen.
Measuring of the sample can be done on the projection screen. A profile projector may also have episcopic illumination (which is light shining from above). This is useful in displaying bores or internal areas that may need to be measured.
Features
Projection methods
Vertical projector: The main axis is parallel to the plane of the screen. They're most common, and suitable for flat parts or smaller work-pieces.
Horizontal projector: The main axis is perpendicular to the plane of the projection screen. Screens are thus made mainly in medium and large versions generally suited for examining shaft parts or heavy work-pieces with large profiles, although having a horizontal table below without a hole for light transmission can be convenient for small machines with a silhouette lighting arrangement.
Positive or inverted images
For the simplest type of profile projector, the part's inverted image, also known as its mirror image, will be displayed on the screen.
In order to facilitate the measurement, sometimes a plus-image system is deliberately added, changing the inverted image into a positive one, which increases the cost due to scale/material used, while somewhat reducing its measurement accuracy.
Screen size
As for selection of screen size, one should carefully consider whether the entire part must be imaged on the screen. If the inspection can readily be done at a modest scale, there is no need for a larger screen. Projector manufacturers offer multiple screen sizes to meet various needs.
Magnification
The projection lens magnification is fixed. Different views of measured pieces often require different magnifications. However, the usual projector factory configuration is with a single lens, so according to needs, additional lenses may be purchased and used.
Work table and accessories
The work table is used to place and hold the measured piece. Its own volume, X, Y travel and carrying capacity are critical. Meanwhile, for the convenience of holding the workpiece, a precision rotary table, a V-block part holder and other accessories are generally added.
Also, the projector must have a flexible and stable focusing mechanism and a large working distance (the distance from the top surface of the workpiece to the lens). The user selects appropriate data-processing modes: without exception, modern optical measuring projectors on the market have been digitized, so the relevant data-processing capabilities should also be considered when selecting an instrument.
See also
Shadowgraph
References
Bibliography
Industrial equipment
Metrology
Metalworking measuring instruments | Optical comparator | [
"Engineering"
] | 1,248 | [
"nan"
] |
15,940,275 | https://en.wikipedia.org/wiki/HEAO%20Program | The High Energy Astronomy Observatory Program was a NASA program of the late 1970s and early 1980s that included a series of three large low-Earth-orbiting spacecraft for X-ray and Gamma-Ray astronomy and Cosmic-Ray investigations. After launch, they were denoted HEAO 1, HEAO 2 (also known as The Einstein Observatory), and HEAO 3, respectively. The large (~3000 kg) satellites were 3-axis stabilized to arc-minute accuracy, with fixed solar panels. All three observatories were launched from Cape Canaveral, Florida on Atlas-Centaur SLV-3D launch vehicles into near-circular orbits with initial altitudes slightly above 500 km.
HEAO 1
HEAO 1, launched August 12, 1977, was a sky survey mission that included four large X-ray and gamma-ray astronomy instruments, known as A1, A2, A3, and A4, respectively. Inclination was about 22.7 degrees. It re-entered the Earth's atmosphere and burned up on March 15, 1979.
The A1, or Large-Area Sky Survey (LASS) instrument, was managed by the Naval Research Laboratory and used large proportional counters to cover the 0.25 to 25 keV energy range.
The A2, or Cosmic X-ray Experiment (CXE), from the Goddard Space Flight Center, covered the 2-60 keV energy range with high spatial and spectral resolution.
The A3, or Modulation Collimator (MC) instrument, provided high-precision positions of X-ray sources, accurate enough to permit follow-up observations to identify optical and radio counterparts. It was provided by the Center for Astrophysics (Smithsonian Astrophysical Observatory and the Harvard College Observatory, SAO/HCO).
The A4, Hard X-ray / Low Energy Gamma-ray experiment, used scintillation counters to cover the energy range from about 20 keV to 10 MeV. It was provided and managed by the University of California at San Diego, in collaboration with MIT.
HEAO 2 (Einstein Observatory)
HEAO 2, more commonly known as the Einstein Observatory, launched 13 November 1978 into a 23.5 deg inclination orbit. It carried a single large grazing-incidence focusing X-ray telescope, providing unprecedented levels of sensitivity (hundreds of times better than previously achieved) and arc-second angular resolution for pointed observations of known objects, and operated over the 0.2 to 3.5 keV energy range. HEAO 2 differed from HEAO 1 and HEAO 3 in that it was used for pointed, deep, small-field-of-view observations rather than sky-survey studies.
A suite of four focal plane instruments were provided:
HRI, or High Resolution Imaging camera, 0.15-3 keV.
IPC, or Imaging Proportional Counter, 0.4 to 4 keV.
SSS, or Solid State Spectrometer, 0.5 to 4.5 keV.
FPCS, or Bragg Focal Plane Crystal Spectrometer,
as well as a 1-20 keV Monitor Proportional Counter (MPC), a Broad Band Filter Spectrometer (BBFS), and an objective grating spectrometer (OGS). The observatory re-entered the Earth's atmosphere and burned up on March 25, 1982.
HEAO 3
HEAO 3, launched on 20 September 1979 into a 43.6-degree inclination orbit, carried three experiments, known as C1, C2, and C3. The first was a cryogenically cooled germanium (Ge) high-resolution gamma-ray spectrometer, while the C2 and C3 experiments were large cosmic-ray instruments. The satellite re-entered the Earth's atmosphere and burned up on December 7, 1981.
Program
The experiment designations A1, A2, A3, A4, for HEAO A, thru C1, C2, C3 for HEAO C, were most common before launch, but also often appear in the later scientific literature. The overall HEAO program was managed out of NASA's Marshall Space Flight Center in Huntsville, AL. NASA Program Manager was Mr. Richard E. Halpern; NASA Program Scientist was Dr. Albert G. Opp. All three satellites were built by TRW Systems of Redondo Beach, California, who won the Nelson P. Jackson Aerospace Award for their work. The total program cost was roughly $250 million.
References
External links
NASA programs
Space telescopes
TRW Inc. | HEAO Program | [
"Astronomy"
] | 910 | [
"Space telescopes"
] |
15,940,961 | https://en.wikipedia.org/wiki/Shortcut%20model | An important question in statistical mechanics is the dependence of model behaviour on the dimension of the system. The shortcut model was introduced in the course of studying this dependence. The model interpolates between discrete regular lattices of integer dimension.
Introduction
The behaviour of different processes on discrete regular lattices has been studied quite extensively. They show a rich diversity of behaviour, including a non-trivial dependence on the dimension of the regular lattice. In recent years the study has been extended from regular lattices to complex networks. The shortcut model has been used in studying several processes and their dependence on dimension.
Dimension of complex network
Usually, dimension is defined based on the scaling exponent of some property in the appropriate limit. One property one could use is the scaling of volume with distance. For regular lattices, the number of nodes within a distance r of a given node scales as r^d, where d is the dimension of the lattice.
For systems which arise in physical problems one usually can identify some physical space relations among the vertices. Nodes which are linked directly will have more influence on each other than nodes which are separated by several links. Thus, one could define the distance between two nodes as the length of the shortest path connecting them.
For complex networks one can define the volume as the number of nodes within a distance of node , averaged over , and the dimension may be defined as the exponent which determines the scaling behaviour of the volume with distance. For a vector , where is a positive integer, the Euclidean norm is defined as the Euclidean distance from the origin to , i.e.,
However, the definition which generalises to complex networks is the norm,
The scaling properties hold for both the Euclidean norm and the norm. The scaling relation is
where d is not necessarily an integer for complex networks. is a geometric constant which depends on the complex network. If the scaling relation Eqn. holds, then one can also define the surface area as the number of nodes which are exactly at a distance from a given node, and scales as
A definition based on the complex network zeta function generalises the definition based on the scaling property of the volume with distance and puts it on a mathematically robust footing.
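For readability, the two scaling relations this section refers to can be written out explicitly. The notation below (r for distance, d for the dimension, c for the network-dependent geometric constant) is a reconstruction based on the surrounding prose rather than a quotation of the original formulas:

$$V(r) \approx c\, r^{d}, \qquad S(r) \propto r^{\,d-1}.$$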
Shortcut model
The shortcut model starts with a network built on a one-dimensional regular lattice. One then adds edges to create shortcuts that join remote parts of the lattice to one another. The starting network is a one-dimensional lattice of vertices with periodic boundary conditions. Each vertex is joined to its neighbors on either side, which results in a system with edges. The network is extended by taking each node in turn and, with probability , adding an edge to a new location nodes distant.
The rewiring process allows the model to interpolate between a one-dimensional regular lattice and a two-dimensional regular lattice. When the rewiring probability , we have a one-dimensional regular lattice of size . When , every node is connected to a new location and the graph is essentially a two-dimensional lattice with and nodes in each direction. For between and , we have a graph which interpolates between the one and two dimensional regular lattices. The graphs we study are parametrized by
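The construction just described, together with the volume-scaling definition of dimension from the previous section, is easy to simulate. The sketch below is a minimal illustration under two assumptions not fixed by the text: the shortcut added from node i goes to the node a fixed distance ell ahead, with ell chosen near the square root of N so that p = 1 approximates a two-dimensional lattice, and the dimension is estimated by a least-squares fit of log V(r) against log r over a few sampled source nodes.

```python
# Minimal sketch of the shortcut model and a volume-scaling dimension estimate.
import math
import random
from collections import deque

def shortcut_graph(n, p, ell=None, seed=0):
    """Ring of n nodes; each node gets, with probability p, an extra edge ell nodes ahead."""
    rng = random.Random(seed)
    ell = ell or max(2, round(math.sqrt(n)))
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        if rng.random() < p:
            j = (i + ell) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def volume(adj, source, r_max):
    """V(r) = number of nodes within shortest-path distance r of `source`, for r = 0..r_max."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == r_max:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    counts = [0] * (r_max + 1)
    for d in dist.values():
        counts[d] += 1
    return [sum(counts[: r + 1]) for r in range(r_max + 1)]  # cumulative volume

def dimension_estimate(adj, r_max=20, samples=50, seed=1):
    """Least-squares slope of log V(r) against log r, averaged over sampled sources."""
    rng = random.Random(seed)
    nodes = list(adj)
    xs, ys = [], []
    for _ in range(samples):
        vol = volume(adj, rng.choice(nodes), r_max)
        for r in range(2, r_max + 1):
            xs.append(math.log(r))
            ys.append(math.log(vol[r]))
    x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)

if __name__ == "__main__":
    n = 10_000
    for p in (0.0, 0.1, 1.0):
        d = dimension_estimate(shortcut_graph(n, p))
        print(f"p = {p:.1f}: estimated dimension ~ {d:.2f}")  # close to 1 at p=0, close to 2 at p=1
```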
Application to extensiveness of power law potential
One application using the above definition of dimension was to the
extensiveness of statistical mechanics systems with a power law potential where the interaction varies with the distance as . In one dimension the system properties like the free energy do not behave extensively when , i.e., they increase faster than N as , where N is the number of spins in the system.
Consider the Ising model with the Hamiltonian (with N spins)
where are the spin variables, is the distance between node and node , and are the couplings between the spins. When the have the behaviour , we have the power law potential. For a general complex network the condition on the exponent which preserves extensivity of the Hamiltonian was studied. At zero temperature, the energy per spin is proportional to
and hence extensivity requires that be finite. For a general complex network is proportional to the Riemann zeta function . Thus, for the potential to be extensive, one requires
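A sketch of the bookkeeping behind this extensivity condition, using only the surface-area scaling quoted earlier in the article; the power-law exponent is written s here, and this is an illustrative reconstruction rather than the paper's exact statement:

$$\frac{E}{N} \;\propto\; \sum_{r \ge 1} S(r)\, r^{-s} \;\sim\; \sum_{r \ge 1} r^{\,d-1-s},$$

which is finite precisely when s > d. On a one-dimensional lattice (d = 1) the sum reduces to the Riemann zeta function ζ(s), which converges only for s > 1, consistent with the loss of extensivity described above.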
Other processes which have been studied are self-avoiding random walks, and the scaling of the mean path length with the network size. These studies lead to the interesting result that the dimension transitions sharply as the shortcut probability increases from zero. The sharp transition in the dimension has been explained in terms of the combinatorially large
number of available paths for points separated by distances large compared to 1.
Conclusion
The shortcut model is useful for studying the dimension dependence of different processes. The processes studied include the behaviour of the power law potential as a function of the dimension, the behaviour of self-avoiding random walks, and the scaling of the mean path length. It may be useful to compare the shortcut model with the small-world network, since the definitions have a lot of similarity. In the small-world network also one starts with a regular lattice and adds shortcuts with probability . However, the shortcuts are not constrained to connect to a node a fixed distance ahead. Instead, the other end of the shortcut can connect to any randomly chosen node. As a result, the small world model tends to a random graph rather than a two-dimensional graph as the shortcut probability is increased.
References
Networks
Statistical mechanics | Shortcut model | [
"Physics"
] | 1,075 | [
"Statistical mechanics"
] |
15,941,337 | https://en.wikipedia.org/wiki/Time-domain%20thermoreflectance | Time-domain thermoreflectance (TDTR) is a method by which the thermal properties of a material can be measured, most importantly thermal conductivity. This method can be applied most notably to thin film materials (up to hundreds of nanometers thick), which have properties that vary greatly when compared to the same materials in bulk. The idea behind this technique is that once a material is heated up, the change in the reflectance of the surface can be utilized to derive the thermal properties. The reflectivity is measured with respect to time, and the data received can be matched to a model with coefficients that correspond to thermal properties.
Experiment setup
The technique is based on monitoring acoustic waves that are generated with a pulsed laser. Localized heating of a material creates a localized temperature increase, which induces thermal stress. The stress built up in this localized region causes an acoustic strain pulse. At an interface the pulse is partially transmitted and partially reflected, and the characteristics of the interface may be monitored with the reflected waves. A probe laser detects the effects of the reflected acoustic waves by sensing the piezo-optic effect.
The amount of strain is related to the optical laser pulse as follows. The localized temperature increase due to the laser can be estimated as

$$\Delta T(z) = \frac{(1-R)\,Q}{C A \zeta}\, e^{-z/\zeta},$$

where R is the sample reflectivity, Q is the optical pulse energy, C is the specific heat (per unit volume), A is the optical spot area, ζ is the optical absorption length, and z is the distance into the sample. This temperature increase results in a strain that can be estimated by multiplying it with the linear coefficient of thermal expansion of the film. Usually, the typical magnitude of the acoustic pulse will be small, and for long propagation distances nonlinear effects could become important. But propagation of such short-duration pulses will suffer acoustic attenuation if the temperature is not very low. Thus, this method is most efficient with the utilization of surface acoustic waves, and studies investigating this method for lateral structures are being conducted.
To sense the piezo-optic effect of the reflected waves, fast monitoring is required because of the short time scales of acoustic travel and heat flow. Acoustic waves travel a few nanometers in a picosecond, whereas heat diffuses roughly a hundred nanometers in a nanosecond. Thus, lasers such as the titanium-sapphire (Ti:Al2O3) laser, with a pulse width of ~200 fs, are used to monitor the characteristics of the interface. Other types of lasers include Yb:fiber, Yb:tungstate, Er:fiber and Nd:glass. Second-harmonic generation may be utilized to double the frequency or reach higher harmonics.
The output of the laser is split into pump and probe beams by a half-wave plate followed by a polarizing beam splitter, leading to cross-polarized pump and probe beams. The pump beam is modulated on the order of a few megahertz by an acousto-optic or electro-optic modulator and focused onto the sample with a lens. The probe is directed into an optical delay line and is then focused with a lens onto the same spot on the sample as the pump. Both pump and probe have a spot size on the order of 10–50 μm. The reflected probe light is input to a high-bandwidth photodetector, whose output is fed into a lock-in amplifier whose reference signal has the same frequency as that used to modulate the pump. The voltage output from the lock-in is proportional to the change in reflectivity (ΔR). Recording this signal as the optical delay line is changed provides a measurement of ΔR as a function of optical probe-pulse time delay.
Modeling materials
The surface temperature of a single layer
The frequency-domain solution for a semi-infinite solid heated at its surface by a point source oscillating at angular frequency ω can be expressed as
ΔT(r) = p₀ exp(−qr) / (2πΛr), where q = √(iω/D).
Here p₀ is the amplitude of the heat absorbed at the surface, Λ is the thermal conductivity of the solid, D is the thermal diffusivity of the solid, and r is the radial coordinate. In a typical time-domain thermoreflectance experiment, the co-aligned laser beams have cylindrical symmetry; therefore, the Hankel transform can be used to simplify the computation of the convolution of this solution with the distributions of the laser intensities.
Here the temperature field is radially symmetric; the Hankel transform of a radially symmetric function g(r) is defined as
g(k) = ∫₀^∞ g(r) J₀(kr) r dr,
where J₀ is the zeroth-order Bessel function of the first kind.
The pump and probe beams used here have Gaussian intensity distributions, with 1/e² radii w₀ and w₁ respectively (the notation commonly used in the TDTR literature).
The surface is heated by the pump laser beam with the Gaussian intensity distribution
p(r) = (2A/πw₀²) exp(−2r²/w₀²),
where A is the amplitude of the heat absorbed by the sample at the modulation frequency ω. The Hankel transform of p(r) is then
P(k) = (A/2π) exp(−k²w₀²/8).
The distribution of temperature oscillations at the surface is then the inverse Hankel transform of the product of P(k) and the surface thermal response of the semi-infinite solid, 1/(Λq), where q now includes the radial wavevector, q = √(k² + iω/D), i.e.
ΔT(r) = ∫₀^∞ [P(k)/(Λq)] J₀(kr) k dk.
The surface temperature oscillations are detected through the change of the reflectivity with temperature, i.e. ΔR ≈ (dR/dT)·ΔT,
and this change is measured via the change in the reflected intensity of the probe laser beam.
The probe laser beam measures a weighted average of the temperature ΔT(r), with the weighting given by its own Gaussian intensity profile of radius w₁.
The resulting double integral can be reduced to a single integral over the transform variable k.
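In the standard frequency-domain treatment used in the TDTR literature, that single integral takes the form below; the symbols are as introduced above (A the absorbed pump power amplitude at modulation frequency ω, w₀ and w₁ the pump and probe 1/e² radii, Λ and D the conductivity and diffusivity), and the expression is quoted here as the commonly used form rather than as a definitive derivation.
ΔT(ω) = (A/2π) ∫₀^∞ k · exp[−k²(w₀² + w₁²)/8] / (Λ √(k² + iω/D)) dk
Fitting measured data then amounts to adjusting the unknown thermal parameters inside this integral (or its layered-structure generalization) until the calculated signal matches the measurement.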
The surface temperature of a layered structure
In a similar way, the frequency-domain solution for the surface temperature of a layered structure can be acquired. For a layered structure, the surface thermal response is built up recursively from the properties of the individual layers.
Here Λn is the thermal conductivity of the nth layer, Dn is the thermal diffusivity of the nth layer, and Ln is the thickness of the nth layer. The temperature oscillations of a layered structure can then be calculated as before, using the layered-structure response in place of that of the semi-infinite solid.
Modeling of data acquired in time-domain thermoreflectance
The data acquired in time-domain thermoreflectance experiments must be compared with a model of the measured signal; among the parameters entering this model is the quality factor Q of the resonant circuit. The calculated signal is then compared with the measured one.
Application
Through time-domain thermoreflectance, the thermal properties of many materials can be obtained. A common test setup joins multiple metal blocks together in a diffusion multiple; once subjected to high temperatures, various compounds are created by diffusion between adjacent metal blocks. An example would be a Ni-Cr-Pd-Pt-Rh-Ru diffusion multiple, which would have diffusion zones of Ni-Cr, Ni-Pd, Ni-Pt and so on. In this way, many different materials can be tested at the same time. The lowest thermal conductivity for a thin film of a solid, fully dense (i.e. not porous) material was also recently reported from measurements using this method.
Once such a test sample is obtained, time-domain thermoreflectance measurements can take place, with laser pulses of very short duration (<1 ps) for both the pump and the probe lasers. The thermoreflected signal is measured by a photodiode connected to an RF lock-in amplifier. The signal from the amplifier consists of in-phase and out-of-phase components, and their ratio allows the thermal data to be extracted for a specific delay time.
The data received from this process can then be compared to a thermal model, and the thermal conductivity and interface thermal conductance can be derived. These two parameters can be derived independently by using different delay times, with short delay times (0.1–0.5 ns) yielding the thermal conductivity and longer delay times (>2 ns) yielding the thermal conductance.
There is considerable room for error due to phase errors in the RF amplifier as well as noise from the lasers. Typically, however, an accuracy within about 8% can be achieved.
See also
Thermal conductivity measurement
References
Thermodynamics
Materials testing | Time-domain thermoreflectance | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,601 | [
"Materials testing",
"Materials science",
"Thermodynamics",
"Dynamical systems"
] |
15,942,369 | https://en.wikipedia.org/wiki/Fugacity%20capacity | The fugacity capacity constant (Z) is used to help describe the concentration of a chemical in a system (usually in mol/(m³·Pa)). Hemond and Fechner-Levy (2000) describe how to utilize the fugacity capacity to calculate the concentration of a chemical in a system. The fugacity capacity varies depending on the chemical. The concentration in medium 'm' equals the fugacity capacity in medium 'm' multiplied by the fugacity of the chemical.
For a chemical system at equilibrium, the fugacity of the chemical will be the same in each media/phase/compartment. Therefore equilibrium is sometimes called "equifugacity" in the context of these calculations.
That is, C = Z·f, where C is the concentration, f is the fugacity, and Z is a proportionality constant termed the fugacity capacity. This equation does not necessarily imply that C and f are always linearly related; non-linearity can be accommodated by allowing Z to vary as a function of C or f.
For a better understanding of the fugacity capacity concept, heat capacity provides a precedent: Z expresses the capacity of a phase to absorb a particular quantity of a chemical, much as heat capacity expresses the capacity of a phase to absorb heat. However, phases with a high fugacity capacity do not necessarily retain a high fugacity.
In calculations of fugacity capacity, the key factors are (a) the nature of the solute (chemical), (b) the nature of the medium or compartment, and (c) the temperature.
Expressions for fugacity capacity
The expression for Zm depends on the medium/phase/compartment. The following list gives the fugacity capacities for common media:
Air (under ideal gas assumptions): Zair = 1/RT
Water: Zwater = 1/H
Octanol: Zoct = Kow/H
Pure phase of target chemical: Zpure = 1/(Ps·v)
Where: R is the ideal gas constant (8.314 Pa·m³/(mol·K)); T is the absolute temperature (K); H is the Henry's law constant for the target chemical (Pa·m³/mol); Kow is the octanol-water partition coefficient for the target chemical (dimensionless ratio); Ps is the vapor pressure of the target chemical (Pa); and v is the molar volume of the target chemical (m³/mol).
Notice that the ratio between Z-values for different media (e.g. octanol and water) is the same as the ratio between the concentrations of the target chemical in each medium at equilibrium.
When using a fugacity capacity approach to calculate the concentrations of a chemical in each of several media/phases/compartments, it is often convenient to calculate the prevailing fugacity of the system, provided the total mass of target chemical (MT) and the volume of each compartment (Vm) are known, using
f = MT / Σm (Vm·Zm).
Alternatively, if the target chemical is present as a pure phase at equilibrium, its vapor pressure will be the prevailing fugacity of the system.
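As a minimal numerical sketch in GNU Octave, with the compartment volumes, Henry's law constant, Kow and total amount of chemical all assumed purely for illustration, the equifugacity calculation looks like this:
R  = 8.314;                 % ideal gas constant, Pa*m^3/(mol*K)
T  = 298;                   % absolute temperature, K
H  = 100;                   % Henry's law constant, Pa*m^3/mol (assumed)
Kow = 1000;                 % octanol-water partition coefficient (assumed)
Z = [1/(R*T), 1/H, Kow/H];  % Z_air, Z_water, Z_octanol, mol/(m^3*Pa)
V = [1000, 10, 0.01];       % compartment volumes, m^3 (assumed)
MT = 1;                     % total amount of chemical in the system, mol (assumed)
f = MT / sum(V .* Z);       % prevailing fugacity of the system, Pa
C = Z * f;                  % concentration in each compartment, mol/m^3
amount = C .* V;            % amount in each compartment; these sum back to MT
The same prevailing fugacity f applies to every compartment at equilibrium, and multiplying it by each Z-value recovers the compartment concentrations.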
See also
Multimedia fugacity model
References
Chemical thermodynamics
Environmental chemistry
Equilibrium chemistry | Fugacity capacity | [
"Chemistry",
"Environmental_science"
] | 630 | [
"Equilibrium chemistry",
"Chemical thermodynamics",
"Environmental chemistry",
"nan"
] |
15,944,225 | https://en.wikipedia.org/wiki/Ecological%20threshold | Ecological threshold is the point at which a relatively small change or disturbance in external conditions causes a rapid change in an ecosystem. When an ecological threshold has been passed, the ecosystem may no longer be able to return to its previous state by means of its inherent resilience. Crossing an ecological threshold often leads to a rapid change in ecosystem health. Ecological thresholds represent a non-linearity of the responses of ecological or biological systems to pressures caused by human activities or natural processes.
Critical load, regime shift, critical transition and tipping point are examples of other closely related terms.
Characteristics
Thresholds can be characterized as points or as zones. Zone-type thresholds imply a gradual shift or transition from one state to another rather than an abrupt change at a specific point. Ecological thresholds have caught attention because many cases of catastrophic worsening of conditions have proved to be difficult or nearly impossible to remedy (also known as points of no return). Ecological extinction is an example of a definitive point of no return.
Ecological thresholds are often characterized by hysteresis, which means the dependence of the state of a system on the history of its state. Even when the change is not irreversible, the return path from altered to original state can be drastically different from the development leading to the altered state.
Another related concept is panarchy. Panarchy views coupled human-natural systems as a cross-scale set of adaptive cycles that reflect the dynamic nature of human and natural structures across time and space. Sudden shifts in ecosystem state can induce changes in human understanding of the way the systems need to be managed. These changes, in turn, may alter the institutions that carry out that management and as a result, some new changes occur in ecosystems.
Detection
There are many different types of thresholds and detecting the occurrence of a threshold is not always straightforward. One approach is to process time series which are thought to display a shift in order to identify a possible jump. Methods have been developed to enhance and localize the jumps.
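One simple way to localize such a jump is sketched here in GNU Octave on synthetic data; the series, the jump size and the window length are assumed purely for illustration, and real analyses use more sophisticated change-point methods.
n = 200;
x = [randn(1,100), 3 + randn(1,100)];    % synthetic series with a shift at t = 100
w = 20;                                   % comparison window length (assumed)
score = zeros(1, n);
for t = (w+1):(n-w)
  % difference between the mean after and the mean before each candidate point
  score(t) = abs(mean(x(t+1:t+w)) - mean(x(t-w+1:t)));
end
[~, t_jump] = max(score);                 % most likely change point
The position where the between-window difference peaks is a candidate threshold crossing; its significance still has to be judged against the natural variability of the series.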
Examples
Some examples of ecological thresholds, such as clear lakes turning into turbid ones, are well documented but many more probably exist. The thresholds database by Resilience Alliance and Santa Fe Institute includes over one hundred examples.
See also
Carrying capacity
Catastrophe theory
Dual-phase evolution suggests a mechanism underlying ecological thresholds and zones.
Inflection point
Tipping point (climatology)
Gaia hypothesis
References
External links
Resilience Alliance A multidisciplinary research group that explores the dynamics of complex adaptive systems
Thresholds of environmental sustainability A research project focusing on ecological thresholds
Ecological economics
Ecology terminology | Ecological threshold | [
"Biology"
] | 525 | [
"Ecology terminology"
] |
15,944,818 | https://en.wikipedia.org/wiki/Single%20Table%20Inheritance | Single table inheritance is a way to emulate object-oriented inheritance in a relational database. When mapping from a database table to an object in an object-oriented language, a field in the database identifies what class in the hierarchy the object belongs to. All fields of all the classes are stored in the same table, hence the name "Single Table Inheritance". In Ruby on Rails the field in the table called 'type' identifies the name of the class. In Hibernate (Java) and Entity Framework this pattern is called Table-Per-Class-Hierarchy and Table-Per-Hierarchy (TPH) respectively, and the column containing the class name is called the Discriminator column.
Example
The table has a Url column, which is used by all blogs, but only blogs of type RssBlog have a value assigned in the RssUrl column; other rows have NULL there. An illustrative layout is sketched below.
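A hypothetical layout of such a table (the rows and URLs here are invented for illustration); the 'type' column plays the role of the Rails type field or the Hibernate/Entity Framework discriminator:
id | type    | Url                  | RssUrl
1  | Blog    | http://example.com/a | NULL
2  | RssBlog | http://example.com/b | http://example.com/b/feed
Loading row 1 would instantiate a Blog object and row 2 an RssBlog object, even though both rows live in the same table.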
See also
Object–relational mapping
ActiveRecord (Rails)
References
External links
Single Table Inheritance
Single Table Inheritance in Yii
Single Table Inheritance in Django
Database theory | Single Table Inheritance | [
"Engineering"
] | 213 | [
"Software engineering",
"Software engineering stubs"
] |
15,944,924 | https://en.wikipedia.org/wiki/Damien%20Sandras | Damien Sandras is known in the free software community due to his work on GNOME, more specifically on Ekiga, the leading open-source softphone for the Linux desktop. He is one of the founders of FOSDEM, an event dedicated to free software developers in Europe.
FOSDEM was initially created by Raphaël Bauduin under the name OSDEM. Sandras joined Bauduin and helped him set up the event, and was one of the driving forces behind the organization for seven years.
Ekiga was supported by the Free Software Foundation and more specifically by Richard Stallman as an alternative to the proprietary Skype. Stallman's e-mail signature contained a mention of the softphone for a few years.
Sandras is a graduate of the University of Louvain (UCLouvain). He is mentioned on the university portal in the University Success Stories.
He is now leading Be IP, a startup dealing with enterprise open-source VoIP software.
Sources
http://www.journaldunet.com/solutions/itws/050317_it_gnomemeeting.shtml
http://www.linuxdevcenter.com/pub/a/linux/2005/03/17/gnomemeeting.html
http://www.linuxtoday.com/news_story.php3?ltsn=2002-01-11-005-20-IN-GN
https://web.archive.org/web/20070629120232/http://ghj.sunsite.dk/index.php?1=articles%2F1%2Finterview_gnomemeeting.html&article=1
References
External links
Home page
Ekiga.org
Be IP
Belgian computer programmers
GNOME developers
Free software programmers
Living people
Year of birth missing (living people) | Damien Sandras | [
"Technology"
] | 395 | [
"Computing stubs",
"Computer specialist stubs"
] |
15,945,246 | https://en.wikipedia.org/wiki/Abolitionism%20%28animal%20rights%29 | Abolitionism or abolitionist veganism is the animal rights based opposition to all animal use by humans. Abolitionism intends to eliminate all forms of animal use by maintaining that all sentient beings, humans or nonhumans, share a basic right not to be treated as properties or objects. Abolitionists emphasize that the production of animal products requires treating animals as property or resources, and that animal products are not necessary for human health in modern societies. Abolitionists believe that everyone who can live vegan is therefore morally obligated to be vegan.
Abolitionists disagree on the strategy that must be used to achieve their goal. While some abolitionists, like Gary L. Francione, professor of law, argue that abolitionists should create awareness about the benefits of veganism through creative and nonviolent education (by also pointing to health and environmental benefits) and inform people that veganism is a moral imperative, others such as Tom Regan believe that abolitionists should seek to stop animal exploitation in society, and fight for this goal through political advocacy, without using the environmental or health arguments. Abolitionists such as Steven Best and David Nibert argue, respectively, that embracing alliance politics and militant direct action for change (including civil disobedience, mass confrontation, etc), and transcending capitalism are integral to ending animal exploitation.
Abolitionists generally oppose movements that seek to make animal use more humane or to abolish specific forms of animal use, since they believe this undermines the movement to abolish all forms of animal use. The objective is to secure a moral and legal paradigm shift, whereby animals are no longer regarded as things to be owned and used. The American philosopher Tom Regan writes that abolitionists want empty cages, not bigger ones. This is contrasted with animal welfare, which seeks incremental reform, and animal protectionism, which seeks to combine the first principles of abolitionism with an incremental approach, but which is regarded by some abolitionists as another form of welfarism or "New Welfarism".
Concepts
The word relates to the historical term abolitionism—a social movement to end slavery or human ownership of other humans. Based on the way of evaluating welfare reforms, abolitionists can be either radical or pragmatic. While the former maintain that welfare reforms can only be dubiously described as moral improvements, the latter consider welfare reforms as moral improvements even when the conditions they permit are unjust.
Gary L. Francione, professor of law and philosophy at Rutgers School of Law–Newark, argues from the abolitionist perspective that self-described animal-rights groups who pursue welfare concerns, such as People for the Ethical Treatment of Animals, risk making the public feel comfortable about its use of animals. He calls such groups the "new welfarists", arguing that, though their aim is an end to animal use, the reforms they pursue are indistinguishable from reforms agreeable to traditional welfarists, who he says have no interest in abolishing animal use. He argues that reform campaigns entrench the property status of animals, and validate the view that animals simply need to be treated better. Instead, he writes, the public's view that animals can be used and consumed ought to be challenged. His position is that this should be done by promoting ethical veganism. Others think that this should be done by creating a public debate in society.
Philosopher Steven Best of the University of Texas at El Paso has been critical of Francione for his denunciation of militant direct actions carried out by the underground animal liberation movement and organizations like the Animal Liberation Front, which Best compares favorably to the "nineteenth-century-abolitionist movement" to end slavery, and also for placing the onus on individual consumers rather than powerful institutions such as corporations, the state and the mass media along with ignoring the "constraints imposed by poverty, class, and social conditioning." In this, he says that Francione "exculpates capitalism" and fails to "articulate a structural theory of oppression." The "vague, elitist, asocial 'vegan education' approach," Best argues, is no substitute for "direct action, mass confrontation, civil disobedience, alliance politics, and struggle for radical change."
Sociologist David Nibert of Wittenberg University argues that attempting to create a vegan world under global capitalism is unrealistic given that "tens of millions of animals are tortured and brutally killed every year to produce profits for twenty-first century elites, who hold investments in the corporate equivalents of Genghis Khan" and that any real and meaningful change will only come by transcending capitalism. He writes that the contemporary entrenchment of capitalism and continued exploitation of animals by human civilization dovetail into the ongoing expansion of what he describes as the animal–industrial complex, with the number of CAFOs and the animals to fill them dramatically increasing, along with growing numbers of humans consuming animal products. He rhetorically asks, how can one hope to create some consumer base for this new vegan world when over a billion people live on less than a dollar a day? Nibert acknowledges that post-capitalism on its own will not automatically end animal exploitation or bring about a more just world, but that it is a "necessary precondition" for such changes.
New welfarists argue that there is no logical or practical contradiction between abolitionism and "welfarism". Welfarists think that they can be working toward abolition, but by gradual steps, pragmatically taking into account what most people can be realistically persuaded to do in the short as well as the long term, and reduce animal suffering as it is most urgent to relieve. People for the Ethical Treatment of Animals, for example, in addition to promoting local improvements in the treatment of animals, promote vegetarianism. Although some people believe that changing the legal status of nonhuman sentient beings is a first step in abolishing ownership or mistreatment, others argue that this will not succeed if the consuming public has not already begun to reduce or eliminate its exploitation of animals for food.
Personhood
In 1992, Switzerland amended its constitution to recognize animals as beings and not things. The dignity of animals is also protected in Switzerland.
New Zealand granted basic rights to five great ape species in 1999. Their use is now forbidden in research, testing or teaching.
Germany added animal welfare in a 2002 amendment to its constitution, becoming the first European Union member to do so.
In 2007, the parliament of the Balearic Islands, an autonomous province of Spain, passed the world's first legislation granting legal rights to all great apes.
In 2013, India officially recognized dolphins as non-human persons.
In 2014, France revised the legal status of animals from movable property to sentient beings.
In 2015, the province of Quebec in Canada adopted the Animal Welfare and Safety Act, which gave animals the legal status of "sentient beings with biological needs".
See also
Animal liberationist
Animal rights
List of animal rights advocates
References
Further reading
Francione, Gary. Rain Without Thunder: The Ideology of the Animal Rights Movement. Temple University Press, 1996.
Francione, Gary and Garner, Robert. The Animal Rights Debate: Abolition Or Regulation?. Columbia University Press, 2010.
Francione, Gary. Ingrid Newkirk on Principled Veganism: "Screw the principle", Animal Rights: The Abolitionist Approach, September 2010.
Francione, Gary. "Animal Rights: The Abolitionist Approach", accessed February 26, 2011.
Francione, Gary. Animals, Property, and the Law. Temple University Press, 1995.
Hall, Lee. "An Interview with Professor Gary L. Francione on the State of the U.S. Animal Rights Movement", Friends of Animals, accessed February 25, 2008.
Regan, Tom. Empty Cages. Rowman & Littlefield Publishers, Inc., 2004.
Regan, Tom. "The Torch of Reason, The Sword of Justice", animalsvoice.com, accessed May 29, 2012.
Regan, Tom. "On Achieving Abolitionist Goals", Animal Rights Zone, May 18, 2011, accessed May 24, 2011.
Regan, Tom. The Case for Animal Rights. University of California Press, 1980.
Animal ethics
Animal rights
Bioethics | Abolitionism (animal rights) | [
"Technology"
] | 1,696 | [
"Bioethics",
"Ethics of science and technology"
] |
15,945,403 | https://en.wikipedia.org/wiki/Lifetime%20Homes%20Standards | The Lifetime Homes Standard is a series of sixteen design criteria intended to make homes more easily adaptable for lifetime use at minimal cost. The concept was initially developed in 1991 by the Joseph Rowntree Foundation and Habinteg Housing Association.
The administration and technical support on Lifetime Homes is provided by Habinteg, who took on this responsibility from the Joseph Rowntree Foundation in 2008.
On 25 February 2008 the UK Government announced its intention to work towards all new homes being built to Lifetime Homes Standards by 2013.
Criteria
The sixteen criteria are:
Parking (width or widening capability)
Approach to dwelling from parking (distance, gradients and widths)
Approach to all entrances
Entrances
Communal stairs and lifts
Internal doorways and hallways
Circulation space
Entrance level living space
Potential for entrance level bed space
Entrance level WC and shower drainage
WC and bathroom walls
Stairs and potential through-floor lift in dwellings
Potential for fitting of hoists and bedroom / bathroom relationship
Bathrooms
Glazing and window handle
Location of service controls
Other standards
Part M of the Building Regulations includes requirements aimed in a similar direction to the Lifetime Homes Standards, but generally not going quite as far.
The Code for Sustainable Homes (Level 6) includes the Lifetime Homes Standard.
A revised version of the Lifetime Homes Standard was published on 5 July 2010 in response to a consultation, introduced to achieve a higher level of practicability for volume developers in meeting the requirements of the Code for Sustainable Homes. The revisions will also facilitate the adoption of Lifetime Homes design as a requirement for all future publicly funded housing developments.
The revisions are the result of work by the Lifetime Homes Technical Advisory Group representing a cross-section of practitioners involved in housing design, housing development, access consultancy and provision of adaptations.
Notes
External links
Joseph Rowntree Foundation introduction to Lifetime Homes
Lifetime Homes
Habinteg Housing Association
Houses | Lifetime Homes Standards | [
"Technology"
] | 367 | [
"Structural system",
"Houses"
] |
15,945,431 | https://en.wikipedia.org/wiki/Bone%20segment%20navigation | Bone segment navigation is a surgical method used to find the anatomical position of displaced bone fragments in fractures, or to position surgically created fragments in craniofacial surgery. Such fragments are later fixed in position by osteosynthesis. It has been developed for use in craniofacial and oral and maxillofacial surgery.
Bone segment navigation is a patented surgical procedure, using a frameless and markerless registration technique. It was the first to use natural registration surfaces instead of single artificial X-ray-visible markers, in order to achieve a higher precision (1 mm or better). The earlier methods of Cutting and Watzinger do not meet the criteria of bone segment navigation.
After an accident or injury, a fracture can be produced and the resulting bony fragments can be displaced. In the oral and maxillofacial area, such a displacement could have a major effect both on facial aesthetics and organ function: a fracture occurring in a bone that delimits the orbit can lead to diplopia; a mandibular fracture can induce significant modifications of the dental occlusion; in the same manner, a skull (neurocranium) fracture can produce an increased intracranial pressure.
In severe congenital malformations of the facial skeleton surgical creation of usually multiple bone segments is required with precise movement of these segments to produce a more normal face.
Surgical planning and surgical simulation
An osteotomy is a surgical intervention that consists of cutting through bone and repositioning the resulting fragments in the correct anatomical place. To ensure optimal repositioning of the bony structures by osteotomy, the intervention can be planned in advance and simulated. The surgical simulation is a key factor in reducing the actual operating time. Often, during this kind of operation, the surgical access to the bone segments is very limited by the presence of the soft tissues: muscles, fat tissue and skin - thus, the correct anatomical repositioning is very difficult or even impossible to assess. Preoperative planning and simulation on models of the bare bony structures can be done. An alternate strategy is to plan the procedure entirely on a CT scan generated model and output the movement specifications purely numerically.
Materials and devices needed for preoperative planning and simulation
The osteotomies performed in orthognathic surgery are classically planned on cast models of the tooth-bearing jaws, fixed in an articulator. For edentulous patients, the surgical planning may be made by using stereolithographic models. These tridimensional models are then cut along the planned osteotomy line, slid and fixed in the new position.
Since the 1990s, modern techniques of presurgical planning were developed – allowing the surgeon to plan and simulate the osteotomy in a virtual environment, based on a preoperative CT or MRI; this procedure reduces the costs and the duration of creating, positioning, cutting, repositioning and refixing the cast models for each patient.
Transferring the preoperative planning to the operating theatre
The usefulness of the preoperative planning, no matter how accurate, depends on the accuracy of the reproduction of the simulated osteotomy in the surgical field. The transfer of the planning was mainly based on the surgeon's visual skills. Different guiding headframes were further developed to mechanically guide bone fragment repositioning.
Such a headframe is attached to the patient's head, during CT or MRI, and surgery. There are certain difficulties in using this device. First, exact reproducibility of the headframe position on the patient's head is needed, both during CT or MRI registration, and during surgery. The headframe is relatively uncomfortable to wear, and very difficult or even impossible to use on small children, who can be uncooperative during medical procedures. For this reason headframes have been abandoned in favor of frameless stereotaxy of the mobilized segments with respect to the skull base. Intraoperative registration of the patient's anatomy with the computer model is done such that pre-CT placement of fiducial points is not necessary.
Surgical Segment Navigator
Initial bone fragment positioning efforts using an electro-magnetic system were abandoned due to the need for an environment without ferrous metals. In 1991 Taylor at IBM, working in collaboration with the craniofacial surgery team at New York University, developed a bone fragment tracking system based on an infrared (IR) camera and IR transmitters attached to the skull. This system was patented by IBM in 1994. At least three IR transmitters are attached in the neurocranium area to compensate for the movements of the patient's head. Three or more further IR transmitters are attached to the bones on which the osteotomy and bone repositioning are to be performed. The 3D position of each transmitter is measured by the IR camera, using the same principle as in satellite navigation. A computer workstation constantly visualizes the actual position of the bone fragments, compared with the predetermined position, and also makes real-time spatial determinations of the free-moving bony segments resulting from the osteotomy.
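A minimal sketch in GNU Octave of the geometric core of such tracking: given the planned positions of three or more fragment-mounted markers and their currently measured positions (both coordinate sets below are invented for illustration), a least-squares rigid transform indicates how far the fragment still is from its planned pose. The actual systems involve considerably more, such as calibration, filtering and head-motion compensation.
% planned (target) marker positions, one column per marker (assumed values, mm)
P = [0 30  0;
     0  0 25;
     0  0  0];
% currently tracked positions of the same markers (assumed values, mm)
Q = [5 34  4;
     2  3 28;
     1  0  1];
p0 = mean(P, 2);  q0 = mean(Q, 2);            % centroids of each point set
H = (P - p0) * (Q - q0)';                     % cross-covariance matrix
[U, ~, V] = svd(H);
Rm = V * diag([1 1 det(V*U')]) * U';          % least-squares rotation (Kabsch method)
t = q0 - Rm * p0;                             % translation
residual = Q - (Rm*P + t);                    % remaining per-marker mismatch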
Thus, fragments can be very accurately positioned into the target position, predetermined by surgical simulation. More recently a similar system, the Surgical Segment Navigator (SSN), was developed in 1997 at the University of Regensburg, Germany, with the support of the Carl Zeiss Company.
References
Oral and maxillofacial surgery
Computer-assisted surgery
Surgery
Health informatics | Bone segment navigation | [
"Biology"
] | 1,119 | [
"Health informatics",
"Medical technology"
] |
13,257,850 | https://en.wikipedia.org/wiki/Tyropanoic%20acid | Tyropanoic acid and its salt sodium tyropanoate are radiocontrast agents used in cholecystography (X-ray diagnosis of gallstones). Trade names include Bilopaque, Lumopaque, Tyropaque, and Bilopac. This molecule contains three heavy iodine atoms which obstruct X-rays in the same way as the calcium in bones to produce a visible image. After injection it is rapidly excreted into the bile.
References
Iodobenzene derivatives
Carboxylic acids
Butyramides
Anilides | Tyropanoic acid | [
"Chemistry"
] | 123 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
13,257,986 | https://en.wikipedia.org/wiki/Plane%20wave%20expansion%20method | Plane wave expansion method (PWE) refers to a computational technique in electromagnetics that solves Maxwell's equations by formulating an eigenvalue problem out of the equations. This method is popular in the photonic crystal community as a method of solving for the band structure (dispersion relation) of specific photonic crystal geometries. PWE is traceable to analytical formulations, and is useful in calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in time-harmonic form, with non-dispersive media (a reformulation of the method named Inverse dispersion allows frequency-dependent refractive indices).
Principles
Plane waves are solutions to the homogeneous Helmholtz equation, and form a basis to represent fields in the periodic media. PWE as applied to photonic crystals as described is primarily sourced from Dr. Danner's tutorial.
The electric or magnetic fields are expanded for each field component in terms of the Fourier series components along the reciprocal lattice vector. Similarly, the dielectric permittivity (which is periodic along reciprocal lattice vector for photonic crystals) is also expanded through Fourier series components.
with the Fourier series coefficients being the K numbers subscripted by m and n respectively, and with the corresponding reciprocal lattice vectors. In real modeling, the range of components considered is truncated to a finite set instead of the ideal, infinite expansion.
Using these expansions in either of the curl–curl relations, such as
∇ × (ε⁻¹(r) ∇ × H(r)) = (ω/c)² H(r),
and simplifying under the assumptions of a source-free, linear, and non-dispersive region, we obtain the eigenvalue relations which can be solved.
Example for 1D case
For a y-polarized, z-propagating electric wave incident on a 1D DBR that is periodic only in the z-direction and homogeneous along x and y, with a lattice period of a, we then have the following simplified relations:
The constitutive eigenvalue equation we finally have to solve becomes,
This can be solved by building a matrix for the terms in the left hand side, and finding its eigenvalue and vectors. The eigenvalues correspond to the modal solutions, while the corresponding magnetic or electric fields themselves can be plotted using the Fourier expansions. The coefficients of the field harmonics are obtained from the specific eigenvectors.
The resulting band structure is obtained from the eigenmodes of this structure; it can be reproduced with the example code below.
Example code
We can use the following code in MATLAB or GNU Octave to compute the same band structure,
%
% solve the DBR photonic band structure for a simple
% 1D DBR. air-spacing d, periodicity a, i.e, a > d,
% we assume an infinite stack of 1D alternating eps_r|air layers
% y-polarized, z-directed plane wave incident on the stack
% periodic in the z-direction;
%
% parameters
d = 8; % air gap
a = 10; % total periodicity
d_over_a = d / a;
eps_r = 12.2500; % dielectric constant, like GaAs,
% max F.S coefs for representing E field, and Eps(r), are
Mmax = 50;
% Q matrix is non-symmetric in this case, Qij != Qji
% Qmn = (2*pi*n + Kz)^2*Km-n
% Kn = delta_n / eps_r + (1 - 1/eps_r) (d/a) sinc(pi.n.d/a)
% here n runs from -Mmax to + Mmax,
freqs = [];
for Kz = - pi / a:pi / (10 * a): + pi / a
Q = zeros(2 * Mmax + 1);
for x = 1:2 * Mmax + 1
for y = 1:2 * Mmax + 1
X = x - Mmax;
Y = y - Mmax;
kn = (1 - 1 / eps_r) * d_over_a .* sinc((X - Y) .* d_over_a) + ((X - Y) == 0) * 1 / eps_r;
Q(x, y) = (2 * pi * (Y - 1) / a + Kz) .^ 2 * kn; % -Mmax<=(Y-1)<=Mmax
end
end
fprintf('Kz = %g\n', Kz)
omega_c = eig(Q);
omega_c = sort(sqrt(omega_c)); % eigenvalues of Q are (omega/c)^2, so take the square root; sorting keeps the bands in order across Kz values
freqs = [freqs; omega_c.'];
end
close
figure
hold on
idx = 1;
for idx = 1:length(- pi / a:pi / (10 * a): + pi / a)
plot(- pi / a:pi / (10 * a): + pi / a, freqs(:, idx), '.-')
end
hold off
xlabel('Kz')
ylabel('omega/c')
title(sprintf('PBG of 1D DBR with d/a=%g, Epsr=%g', d / a, eps_r))
Advantages
PWE expansions are rigorous solutions. PWE is extremely well suited to the modal solution problem. Large problems can be solved using iterative techniques such as the conjugate gradient method.
For both generalized and normal eigenvalue problems, just a few band-index plots in the band-structure diagrams are required, usually lying on the Brillouin zone edges. This corresponds to solving for eigenmodes using iterative techniques, as opposed to diagonalization of the entire matrix.
The PWEM is highly efficient for calculating modes in periodic dielectric structures. Being a Fourier space method, it suffers from the Gibbs phenomenon and slow convergence in some configuration when fast Fourier factorization is not used. It is the method of choice for calculating the band structure of photonic crystals. It is not easy to understand at first, but it is easy to implement.
Disadvantages
Sometimes spurious modes appear. Large problems scale as O(n³), where n is the number of plane waves used in the problem. This is both time-consuming and demanding in memory requirements.
Alternatives include the order-N spectral method, and methods using the finite-difference time-domain (FDTD) technique, which are simpler and can model transients.
If implemented correctly, spurious solutions are avoided. It is less efficient when index contrast is high or when metals are incorporated. It cannot be used for scattering analysis.
Being a Fourier-space method, Gibbs phenomenon affects the method's accuracy. This is particularly problematic for devices with high dielectric contrast.
See also
Photonic crystal
Computational electromagnetics
Finite-difference time-domain method
Finite element method
Maxwell's equations
References
Computational science
Electrodynamics
Computational electromagnetics | Plane wave expansion method | [
"Physics",
"Mathematics"
] | 1,482 | [
"Computational electromagnetics",
"Applied mathematics",
"Computational physics",
"Computational science",
"Electrodynamics",
"Dynamical systems"
] |
13,258,353 | https://en.wikipedia.org/wiki/Cosmographia%20%28Bernardus%20Silvestris%29 | The Cosmographia ("Cosmography"), also known as De mundi universitate ("On the totality of the world"), is a Latin philosophical allegory, dealing with the creation of the universe, by the twelfth-century author Bernardus Silvestris. In form, it is a prosimetrum, in which passages of prose alternate with verse passages in various classical meters. The philosophical basis of the work is the Platonism of contemporary philosophers associated with the cathedral school of Chartres—one of whom, Thierry of Chartres, is the dedicatee of the work. According to a marginal note in one early manuscript, the Cosmographia was recited before Pope Eugene III when he was traveling in France (1147–48).
Synopsis
The work is divided into two parts: "Megacosmus", which describes the ordering of the physical universe, and "Microcosmus", which describes the creation of man.
Megacosmus
1 (verse): Natura (Nature) complains to her mother Noys (Divine Providence; Greek ) that Hyle (Primordial Matter; Greek ), although held in check by Silva (the Latin equivalent of hyle), is chaotic and unformed and asks that Noys impose order and form on the confused matter.
2 (prose): Noys reveals her status as the daughter of God and asserts that the time is right for Natura's plea to be granted. She then separates out the four elements of fire, earth, water, and air from primordial matter. Seeing that the results are good, she begets the World Soul, or Endelechia (Greek ), as a bride for Mundus (World). Their marriage is the source of life in the universe.
3 (verse): This long poem in elegiac couplets presents the results of the ordering of the universe. Ether, the stars and sky, the earth, and the sea have become distinguished, and the nine orders of angels attend on the God who exists outside the universe. There follows a catalogue of the stars and constellations, along with the planets and their natures. Then the earth and its creatures are described, with catalogues of mountains, beasts, rivers, plants (which are treated in particular detail), fish, and birds.
4 (prose): The relationships between the powers operating in the universe are analyzed. All things under the heavens form part of a cosmic cycle, controlled by Natura, which will never cease, since its maker and cause are eternal. Hyle is the basis, whom the rational plan of God and Noys has ordered in an everlasting system, although subject to time: "For as Noys is forever pregnant of the divine will, she in turn informs Endelechia with the images she conceives of the eternal patterns, Endelechia impresses them upon Nature, and Nature imparts to Imarmene [Destiny; Greek ] what the well-being of the universe demands."
Microcosmus
1 (prose): Noys displays the created universe to Natura and points out its various features.
2 (verse): With the work of Noys, Silva has recovered her true beauty. Noys (still speaking to Natura) declares herself proud of the harmony she has brought to the universe.
3 (prose): Noys says that for the completion of the cosmic design, the creation of man is needed. For this it is necessary that Natura seek out Urania (the celestial principle) and Physis (the material principle). Natura sets forth and searches through various regions of the heavens. When she reaches the outermost limit of the heavens, she encounters the Genius whose responsibility it is to delineate the celestial forms on the individual objects of the universe. He greets Natura and points out Urania, whose brightness dazzles Natura.
4 (verse): Urania agrees to descend to Earth and collaborate in the creation of man. She will take with her the human soul, guiding it through all the heavens so that it may become acquainted with the laws of fate and learn the rules that govern its behavior.
5 (prose): To gain the sanction of the divine powers, Natura and Urania travel outside the cosmos, to the sanctuary of the supreme divinity, Tugaton (the Good; Greek ), whose favor they pray for. They then descend, one by one, through the planetary spheres.
6 (verse): Having reached the lower boundary of the sphere of the Moon, where the quintessence meets the terrestrial elements, Natura pauses to look about her.
7 (prose): Natura and Urania see thousands of spirits. Urania tells Natura that, in addition to the angels who dwell beyond the created universe and in the heavenly spheres, there are spirits below the Moon—some good, some evil.
8 (verse): Urania bids Natura to review the totality of the universe and note the principles of divine concord that it manifests.
9 (prose): Natura and Urania descend to Earth and reach a secluded locus amoenus (called Gramision or Granusion—the readings of the manuscripts are disputed). There they meet Physis, accompanied by her daughters Theorica (Contemplative Knowledge) and Practica (Active Knowledge), who is rapt in contemplation of created life in all its aspects. Suddenly, Noys appears.
10 (verse): Noys explains that Natura, Urania, and Physis can collaborate to complete the creation by fashioning a creature who participates in both the divine and earthly realms.
11 (prose): Noys assigns Urania, Physis, and Natura specific tasks in the creation of man, providing a model for each. Urania, using the Mirror of Providence, is to provide him with a soul derived from Endelechia; Physis, using the Book of Memory, is to provide him with a body; and Natura, using the Table of Destiny, is to unite the soul and the body.
12 (verse): Natura summons her two companions to begin the work. Physis, however, is somewhat angry, since she sees that matter is ill-suited for the fashioning of a being that requires intellect. Urania assists her by eliminating the evil taint from Silva and containing the matter within definite limits.
13 (prose): Physis—making use of the imperfect aspects of Silva that had (somewhat uncertainly) submitted to the will of God and had been left over from the rest of creation—fashions a body. The four humors are described, along with the tripartite division of the body into the head (seat of the brain and the sensory organs), the breast (seat of the heart) and the loins (seat of the liver).
14 (verse): The powers of the senses and the brain, heart, and liver are detailed. The organs of generation will prevent human life from wholly passing away and the universe from returning to chaos.
Platonic background
The ultimate source for much of Bernardus' allegory is the account of creation in Plato's Timaeus, as transmitted in the incomplete Latin translation, with lengthy commentary, by Calcidius. This was the only work of Plato's that was widely known in western Europe during the Middle Ages, and it was central to the renewed interest in natural science among the philosophers associated with the school of Chartres:
Chartres … would long remain the fertile soil in which this conception [of man as microcosm] would grow, and this the more as the Timaeus, itself constructed upon the parallelism between microcosm and macrocosm, became a central preoccupation of teaching at Chartres. This was the first age, the golden age, of Platonism as such in the West, an age which found in the Timaeus an entire physics, an anthropology, a metaphysics, and even a lofty spiritual teaching.
From the Timaeus Bernardus and the Chartrian thinkers, such as Thierry of Chartres and William of Conches, adopted three fundamental assumptions: "that the visible universe is a unified whole, a 'cosmos'; that it is the copy of an ideal exemplar; and that its creation was the expression of the goodness of its creator". Thierry had written a Tractatus de sex dierum operibus, in which he had essayed to elucidate the biblical account of creation iuxta physicas rationes tantum ("purely in terms of physical causes"); and this perhaps accounts for Bernardus' dedication of the Cosmographia to Thierry.
Along with the Timaeus and Calcidius' commentary, Bernardus' work also draws on Platonic themes diffused throughout a variety of works of late antiquity, such as Apuleius' philosophical treatises, Macrobius' commentary on Cicero's Dream of Scipio, the Hermetic Asclepius, the De nuptiis Philologiae et Mercurii of Martianus Capella, and Boethius' Consolation of Philosophy. In addition to their Platonic elements, the latter two works would have provided models of the prosimetrum form; and Macrobius' commentary had authorized the use of allegorical (fabulosa) methods in philosophers' treatment of certain subjects, since sciunt inimicam esse naturae apertam nudamque expositionem sui ("they realize that a frank, open exposition of herself is distasteful to Nature").
Reception
That the Cosmographia survives, in whole or in part, in about fifty manuscripts indicates that it enjoyed a good deal of popularity in the Middle Ages. Scholars have traced its influence on "a wide variety of medieval and renaissance authors, including Hildegard of Bingen, Vincent of Beauvais, Dante, Chaucer, Nicholas of Cusa, and Boccaccio—whose annotated copy of the work we possess". In particular, Bernardus' conceptions of Natura and Genius would be echoed and transformed in the works of Alain de Lille, in the Roman de la Rose, in Chaucer's Parlement of Foules, and in Gower's Confessio Amantis.
Although there is no evidence that medieval readers considered the Cosmographia incompatible with orthodox Christianity, some modern scholars, from the 18th century into the 20th century, have found it to be radically un-Christian, variously viewing the work as at bottom either pantheistic or pagan. These views were challenged by Étienne Gilson in the 1920s, though he himself thought that the Cosmographia had dualistic features. The theological implications of the work continue to be a subject of debate.
Editions and translations
Editions
De mundi universitate libri duo sive megacosmus et microcosmus, ed. C. S. Barach and J. Wrobel (Innsbruck, 1876).
, ed. André Vernet, in Bernardus Silvestris: Recherches sur l'auteur et l'oeuvre, suivies d'une édition critique de la 'Cosmographia (unpublished dissertation, École nationale des chartes, 1938). This is the only critical edition of the produced to date.
, ed. Peter Dronke (Leiden: Brill, 1978).
, in Bernardus Silvestris, Poetic Works, ed. and trans. Winthrop Wetherbee, Dumbarton Oaks Medieval Library 38 (Cambridge, Mass.: Harvard UP, 2015).
, ed. Marco Albertazzi (Lavis: La Finestra Editrice, 2020).
Translations
German: Über die allumfassende Einheit der Welt: Makrokosmos und Mikrokosmos, trans. Wilhelm Rath (Stuttgart: Mellinger, [1953]).
English: The Cosmographia of Bernardus Silvestris, trans. Winthrop Wetherbee (New York: Columbia UP, 1973). . A revised version of this translation appears in Wetherbee's edition of Bernardus' Poetic Works, cited above under "Editions".
French: Cosmographie, trans. Michel Lemoine (Paris: Cerf, 1998).
See also
Renaissance of the 12th century
Notes and references
Further reading
Kauntze, Mark. Authority and Imitation: A Study of the Cosmographia of Bernardus Silvestris. Mittellateinische Studien und Texte 47. Leiden: Brill, 2014. . Review
External links
Latin text (Barach & Wrobel edition) at the Internet Archive
12th-century books in Latin
1140s books
Cosmogony
Neoplatonic texts
Medieval philosophical literature | Cosmographia (Bernardus Silvestris) | [
"Astronomy"
] | 2,619 | [
"Cosmogony"
] |
13,258,442 | https://en.wikipedia.org/wiki/Phantom%20center | Phantom center refers to the psycho-acoustic phenomenon of a sound source appearing to emanate from a point between two speakers in a stereo configuration. When the same sound arrives at both ears at the same time with the same intensity, it appears to originate from a point in the center of the two speakers.
A difference in intensity (volume) will cause the sound to appear to come from the louder side. Similarly, if a sound arrives at one ear before the other (no later than approximately 30 ms, see Precedence effect), it will appear to originate from that side.
The ear–brain system evolved to use these cues to determine the location of sounds, an important evolutionary advantage.
Frequency variations can also affect the perceived directivity of sound. Therefore, the tightness of the stereo field (and hence of the phantom center image) depends heavily on how closely the frequency responses of the two speakers are matched.
These psycho-acoustic properties can be used to artificially place sounds within a stereo field as is done in stereo mixing, most frequently with the use of panning. In surround sound, vocals are often mapped to a dedicated center channel, eliminating the need to create a phantom center using the left and right channels.
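A minimal sketch in GNU Octave of constant-power panning, one common way a mono source is placed at a phantom position between the two speakers by weighting its level in each channel; the sample rate, test tone and pan position are assumed for illustration.
fs = 44100;                       % sample rate, Hz (assumed)
t  = 0:1/fs:1-1/fs;
s  = sin(2*pi*440*t);             % mono source: a 440 Hz tone (assumed)
pan = 0.5;                        % 0 = hard left, 0.5 = phantom center, 1 = hard right
theta = pan * pi/2;
L = cos(theta) * s;               % left-channel gain applied to the source
R = sin(theta) * s;               % right-channel gain applied to the source
% at pan = 0.5 both gains are cos(pi/4), about 0.707 (-3 dB per channel),
% so the summed acoustic power stays roughly constant as the source moves
stereo = [L(:), R(:)];            % two-column stereo buffer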
See also
Pan law
Stereo imaging
References
Stereophonic sound | Phantom center | [
"Engineering"
] | 256 | [
"Audio engineering",
"Stereophonic sound"
] |
13,258,459 | https://en.wikipedia.org/wiki/HP%2095LX | The HP 95LX Palmtop PC (F1000A, F1010A), also known as project Jaguar, is Hewlett Packard's first DOS-based pocket computer, or personal digital assistant, introduced in April 1991 in collaboration with Lotus Development Corporation. The abbreviation "LX" stood for "Lotus Expandable". The computer can be seen as successor to a series of larger portable PCs like the HP 110 and HP 110 Plus.
Hardware
HP 95LX has an Intel 8088-clone NEC V20 CPU running at 5.37 MHz with an Intel system on a chip (SoC) device. It cannot be considered completely PC-compatible because of its quarter-CGA (MDA)-resolution LCD screen.
The device includes a CR2032 lithium coin cell for memory backup when the two AA main batteries run out. For mass storage, HP 95LX has a single PCMCIA slot which can hold a static RAM card with its own CR2025 back-up coin cell. An RS-232-compatible serial port is provided, as well as an infrared port for printing on compatible models of Hewlett Packard printers.
Display
In character mode, the display shows 16 lines of 40 characters, and has no backlight. While most IBM-compatible PCs work with a hardware code page 437, HP 95LX's text mode font is hard-wired to code page 850 instead. Lotus 1-2-3 internally used the Lotus International Character Set (LICS), but characters are translated to code page 850 for display and printing purposes.
Software
The palmtop runs MS-DOS 3.22 and has a customized version of Lotus 1-2-3 Release 2.2 built in. Other software in read-only memory (ROM) includes a calculator, an appointment calendar, a telecommunications program, and a simple text editor.
Successors
Successor models to HP 95LX include HP 100LX, HP Palmtop FX, HP 200LX, HP 1000CX, and HP OmniGo 700LX.
See also
DIP Pocket PC
Atari Portfolio
Poqet PC
Poqet PC Prime
Poqet PC Plus
Sharp PC-3000
ZEOS Pocket PC
Yukyung Viliv N5
Sub-notebook
Netbook
Palmtop PC
Ultra-mobile PC
References
Further reading
External links
Hewlett Packard Web site on HP 95LX
HP 95LX technical information (contains PCB photos)
Skolob's Hewlett Packard 95LX Palmtop Page (Information and FAQ on HP 95LX)
95LX
Computer-related introductions in 1991
IBM PC compatibles
NEC V20 | HP 95LX | [
"Technology"
] | 542 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,259,064 | https://en.wikipedia.org/wiki/Downloading%20the%20Repertoire | Downloading the Repertoire is a 1996 album by American singer John "Jack" Mudurian (May 23, 1929 – September 30, 2013). It consists of an uncut, a cappella field recording of Mudurian, in a stream of consciousness, singing a battery of mostly show tunes, old country-western and folk music, and Tin Pan Alley standards.
Mudurian was a resident of Duplex Nursing Home in Boston, Massachusetts. In 1981, David Greenberger, an employee who also edited the zine The Duplex Planet, overheard Mudurian singing at a home talent show, and when Greenberger spoke to him about it, Mudurian boasted that he could sing as many songs as Frank Sinatra. Greenberger brought in a cassette tape recorder and asked him to sing; Mudurian proceeded to sing 129 songs, many from the Tin Pan Alley repertory (and several more than once), continuously over the next 47 minutes.
The recording was issued as Downloading the Repertoire on Arf! Arf! Records in 1996, and it became a cult novelty hit. Neil Strauss, writing about the recording for The New York Times, wrote: "What is most interesting about this CD is not Mr. Mudurian's slurred, rushed singing but the way his entire life story unfolds in his selection of material." In a review for AllMusic, Cub Koda commented: "[Mudurian's]... free association from tune to tune is downright astounding. No matter what kind of music you might have in your collection, it's a good bet you don't have anything that sounds quite like this." A reviewer for CMJ New Music Monthly described the album as "a hysterical, bizarre tour through the history of American popular song."
A shortened version of the music heard on Downloading appeared on Irwin Chusid's compilation of outsider music called Songs in the Key of Z, Vol. 1. Mudurian can also be heard on the compilations The Talent Show (1996), and The Tarquin Records All Star Holiday Extravaganza (2000). After meeting Mudurian, singer Jad Fair transcribed his version of "Chicago (That Toddlin' Town)" and performed it in his own live shows.
According to Greenberger, the nursing home at which Mudurian resided closed in 1987, and the two lost touch. Greenberger, who affectionately referred to the marathon recording session as "Jack's and my private Olympic event," recalled: "That June afternoon lives on for me. Planes flew overhead, birds chirped in the trees and another resident... could be heard singing in the background from time to time."
Songs sung on Downloading the Repertoire
(in order of songs sung)
Chicago (That Toddlin' Town)
It's Been a Long, Long Time
Why Am I Always Yearning for Theresa
The Halls of Montezuma
So Long It's Been Good to Know You
Step Right Up (and Help Old Uncle Sam)
It's Only a Paper Moon
Music! Music! Music! (Put Another Nickel In)
Take Me Out to the Ball Game
Some Sunday Morning
Any Bonds Today?
Red River Valley
My Bonnie
Jimmy Crack Corn
The Wabash Cannonball
I Wonder Who's Kissing Her Now
Ramona
Toot Toot Tootsie! (Goo' Bye)
If You Knew Susie Like I Know Susie
I Don't Care If the Sun Don't Shine
I Love My Baby (My Baby Loves Me)
I'll See You in My Dreams
Lucky Me
I Don't Know Why (I Just Do)
Near You
South of the Border (Down Mexico Way)
I've Been Working on the Railroad
Goody-Goody
Home on the Range
Joshua Fit the Battle of Jericho
Bell Bottom Trousers
Ragtime Cowboy Joe
Over the Rainbow
When You Wish Upon a Star
Pistol Packin' Mama
Frankie and Johnnie
Rudolph the Red-Nosed Reindeer
Jingle Bells
I Love You
Cuddle Up a Little Closer
Ain't She Sweet
Rose O'Day (The Filla-Ga-Dusha Song)
The Band Played On
Sparrow in the Treetop
"Pep Talk"/South of the Border (Down Mexico Way)
It's Only a Paper Moon
California, Here I Come
Row, Row, Row Your Boat
Singin' in the Rain
Five Foot Two, Eyes of Blue
Lullaby Of Broadway
I Wonder Who's Kissing Her Now
Some Sunday Morning
For Me and My Gal
Blue Skies
Smoke That Cigarette
Ain't Misbehavin'
Cheek to Cheek
Let's Call the Whole Thing Off
I've Got a Lovely Bunch of Coconuts (Roll or Bowl a Ball-A Penny a Pitch)
Michael Row the Boat Ashore
Row, Row, Row Your Boat
When You Wish Upon a Star
I'll See You in My Dreams
Chiquita Banana
Your Cheatin' Heart
Sparrow in the Treetop
Rock Around the Clock
That Old Flying Machine
The Man on the Flying Trapeze
School Days
Take Me Out to the Ball Game
Johnson Rag
Sugarfoot Rag
Chicago (That Toddling Town)
Pistol Packin' Mama
Boola Boola
Honeysuckle Rose
Volare
Quando Quando Quando (Tell Me When)
San Antonio Rose
Ragtime Cowboy Joe
Chattanooga Choo Choo
The Trolley Song
"Pep Talk"/Sing Sing Sing
Goody Goody
"Pep Talk"/Pistol Packin' Mama
Any Bonds Today
Music Music Music! (Put Another Nickel In)
It's Only a Paper Moon
Melody Time
When Irish Eyes Are Smiling
Heartaches
Night and Day
The Band Played On
Rose O'Day (The Filla-Ga-Dusha Song)
The Wabash Cannonball
"Pep Talk"/Pistol Packin' Mama
The Halls Of Montezuma
Jingle Bell Rock
I'll Never Say "Never Again" Again
Million Dollar Baby
Shine on Harvest Moon
Carolina in the Morning
You Must Have Been a Beautiful Baby
Jimmy Crack Corn
Any Bonds Today
Rose O'Day (The Filla-Ga-Dusha Song)
Michael Row the Boat Ashore
Three Blind Mice
Ramona
Mona Lisa
Bye Bye Baby
My Baby Just Cares for Me
Five Foot Two Eyes of Blue
If You Knew Susie Like I Know Susie
That's Amore
The Music Goes 'Round And Around
Jeepers Creepers
Some Sunday Morning
Alexander's Ragtime Band
Any Bonds Today
I Don't Want to Set the World on Fire
Oh What a Gal
The Wabash Cannonball
My Bonnie
Chicago (That Toddling Town)
Rose O'Day (The Filla-Ga-Dusha Song)
References
1996 albums
Outsider music albums
Novelty albums
A cappella albums
Field recording | Downloading the Repertoire | [
"Engineering"
] | 1,333 | [
"Audio engineering",
"Field recording"
] |
13,259,181 | https://en.wikipedia.org/wiki/Bollard%20pull | Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000–2010s having around of bollard pull, which is described as above "normal" tugboats. The world's strongest tug since its delivery in 2020 is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of . Island Victory is not a typical tug; rather, it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel.
For vessels that hold station by thrusting under power against a fixed object, such as crew transfer ships used in offshore wind turbine maintenance, an equivalent measure "bollard push" may be given.
Background
Unlike in ground vehicles, the statement of installed horsepower is not sufficient to understand how strong a tug is – this is because the tug operates mainly in very low or zero speeds, thus may not be delivering power (power = force × velocity; so, for zero speeds, the power is also zero), yet still absorbing torque and delivering thrust. Bollard pull values are stated in tonnes-force (written as t or tonne) or kilonewtons (kN).
Effective towing power is equal to total resistance times the velocity of the ship:

P_E = R_T · V

Total resistance is the sum of frictional resistance R_F, residual resistance R_R, and air resistance R_A:

R_T = R_F + R_R + R_A, with R_F = ½ · ρ_w · C_F · S · V², R_R = ½ · ρ_w · C_R · S · V², and R_A = ½ · ρ_a · C_A · A · V_a²

Where:
ρ_w is the density of water
ρ_a is the density of air
V is the velocity of the ship relative to the water
V_a is the velocity of the ship relative to the air
C_F is the resistance coefficient of frictional resistance
C_R is the resistance coefficient of residual resistance
C_A is the resistance coefficient of air resistance (usually quite high, >0.9, as ships are not designed to be aerodynamic)
S is the wetted area of the ship
A is the cross-sectional area of the ship above the waterline
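As a rough illustration of how these quantities combine, the sketch below computes the three resistance components and the effective towing power for a hypothetical hull. Every numerical value (densities, coefficients, areas, speed) is an assumption chosen for demonstration, not data for any real vessel.

```python
# Illustrative resistance and effective towing power calculation.
# All numerical values are assumptions for demonstration purposes only.

RHO_WATER = 1025.0   # seawater density, kg/m^3 (assumed)
RHO_AIR = 1.225      # air density, kg/m^3 (assumed)

def towing_resistance(v_water, v_air, c_f, c_r, c_a, wetted_area, frontal_area):
    """Return (R_F, R_R, R_A, R_T) in newtons for the given speeds and coefficients."""
    r_f = 0.5 * RHO_WATER * c_f * wetted_area * v_water ** 2   # frictional resistance
    r_r = 0.5 * RHO_WATER * c_r * wetted_area * v_water ** 2   # residual resistance
    r_a = 0.5 * RHO_AIR * c_a * frontal_area * v_air ** 2      # air resistance
    return r_f, r_r, r_a, r_f + r_r + r_a

v = 5.0  # ship speed relative to water, m/s (assumed)
r_f, r_r, r_a, r_t = towing_resistance(v_water=v, v_air=v, c_f=0.002, c_r=0.001,
                                       c_a=0.95, wetted_area=800.0, frontal_area=120.0)
effective_power = r_t * v  # P_E = R_T * V, in watts

print(f"R_T = {r_t / 1000:.1f} kN, P_E = {effective_power / 1000:.1f} kW")
# Note: at zero speed (the bollard pull condition) the effective towing power is zero,
# even though the propeller is still absorbing torque and delivering thrust.
```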
Measurement
Values for bollard pull can be determined in two ways.
Practical trial
This method is useful for one-off ship designs and smaller shipyards. It is limited in precision - a number of boundary conditions need to be observed to obtain reliable results. Summarizing the below requirements, practical bollard pull trials need to be conducted in a deep water seaport, ideally not at the mouth of a river, on a calm day with hardly any traffic.
The ship needs to be in undisturbed water. Currents or strong winds would falsify the measurement.
The static force that intends to move the ship forward must only be generated by the propeller discharge. If the ship were too close to a wall, water could rebound back, creating a propulsive wave. This would falsify the measurement.
The ship must be in deep water. If there were any ground effect, the measurement would be falsified. The same holds true for propeller walk.
Water salinity must have a well-defined value, as it influences the specific weight of the water and thereby the mass moved by the propeller per unit of time.
The geometry of the towing line must have a well-defined value. Ideally, one would expect it to be exactly horizontal and straight. This is impossible in reality, because
the line falls into a catenary due to its weight;
the two fixed points of the line, being the bollard on shore and the ship's towing hook or cleat, may not have the same height above water.
Conditions must be static. The engine power, the heading of the ship, the conditions of the propeller discharge race and the tension in the towing line must have settled to a constant or near-constant value for a reliable measurement.
One condition to watch out for is the formation of a short circuit in propeller discharge race. If part of the discharge race is sucked back into the propeller, efficiency decreases sharply. This could occur due to a trial that is performed in too shallow water or too close to a wall.
See Figure 2 for an illustration of error influences in a practical bollard pull trial. Note the difference in elevation of the ends of the line (the port bollard is higher than the ship's towing hook). Furthermore, there is the partial short circuit in propeller discharge current, the uneven trim of the ship and the short length of the tow line. All of these factors contribute to measurement error.
Simulation
This method eliminates much of the uncertainties of the practical trial. However, any numerical simulation also has an error margin. Furthermore, simulation tools and computer systems capable of determining bollard pull for a ship design are costly. Hence, this method makes sense for larger shipyards and for the design of a series of ships.
Both methods can be combined. Practical trials can be used to validate the result of numerical simulation.
Human-powered vehicles
Practical bollard pull tests under simplified conditions are conducted for human powered vehicles. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. Although conditions for such measurements are inaccurate in absolute terms, they are the same for all competitors. Hence, they can still be valid for comparing several craft.
See also
Azipod
Kort nozzle
Tractive force
Notes
Further reading
External links
International Standard for Bollard Pull trials - 2019
Bollard Pull by Capt. P. Zahalka, Association of Hanseatic Marine Underwriters
Physical quantities
Water transport
Nautical terminology
Force | Bollard pull | [
"Physics",
"Mathematics"
] | 1,333 | [
"Physical phenomena",
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Physical properties",
"Matter"
] |
13,259,237 | https://en.wikipedia.org/wiki/Parity%20of%20zero | In mathematics, zero is an even number. In other words, its parity—the quality of an integer being even or odd—is even. This can be easily verified based on the definition of "even": zero is an integer multiple of 2, specifically 0 × 2. As a result, zero shares all the properties that characterize even numbers: for example, 0 is neighbored on both sides by odd numbers, any decimal integer has the same parity as its last digit—so, since 10 is even, 0 will be even, and if y is even then y + x has the same parity as x—indeed, 0 + x and x always have the same parity.
Zero also fits into the patterns formed by other even numbers. The parity rules of arithmetic, such as even − even = even, require 0 to be even. Zero is the additive identity element of the group of even integers, and it is the starting case from which other even natural numbers are recursively defined. Applications of this recursion from graph theory to computational geometry rely on zero being even. Not only is 0 divisible by 2, it is divisible by every power of 2, which is relevant to the binary numeral system used by computers. In this sense, 0 is the "most even" number of all.
Among the general public, the parity of zero can be a source of confusion. In reaction time experiments, most people are slower to identify 0 as even than 2, 4, 6, or 8. Some teachers—and some children in mathematics classes—think that zero is odd, or both even and odd, or neither. Researchers in mathematics education propose that these misconceptions can become learning opportunities. Studying equalities like 0 × 2 = 0 can address students' doubts about calling 0 a number and using it in arithmetic. Class discussions can lead students to appreciate the basic principles of mathematical reasoning, such as the importance of definitions. Evaluating the parity of this exceptional number is an early example of a pervasive theme in mathematics: the abstraction of a familiar concept to an unfamiliar setting.
Why zero is even
The standard definition of "even number" can be used to directly prove that zero is even. A number is called "even" if it is an integer multiple of 2. As an example, the reason that 10 is even is that it equals 5 × 2. In the same way, zero is an integer multiple of 2, namely 0 × 2, so zero is even.
It is also possible to explain why zero is even without referring to formal definitions. The following explanations make sense of the idea that zero is even in terms of fundamental number concepts. From this foundation, one can provide a rationale for the definition itself—and its applicability to zero.
Basic explanations
Given a set of objects, one uses a number to describe how many objects are in the set. Zero is the count of no objects; in more formal terms, it is the number of objects in the empty set. The concept of parity is used for making groups of two objects. If the objects in a set can be marked off into groups of two, with none left over, then the number of objects is even. If an object is left over, then the number of objects is odd. The empty set contains zero groups of two, and no object is left over from this grouping, so zero is even.
These ideas can be illustrated by drawing objects in pairs. It is difficult to depict zero groups of two, or to emphasize the nonexistence of a leftover object, so it helps to draw other groupings and to compare them with zero. For example, in the group of five objects, there are two pairs. More importantly, there is a leftover object, so 5 is odd. In the group of four objects, there is no leftover object, so 4 is even. In the group of just one object, there are no pairs, and there is a leftover object, so 1 is odd. In the group of zero objects, there is no leftover object, so 0 is even.
There is another concrete definition of evenness: if the objects in a set can be placed into two groups of equal size, then the number of objects is even. This definition is equivalent to the first one. Again, zero is even because the empty set can be divided into two groups of zero items each.
Numbers can also be visualized as points on a number line. When even and odd numbers are distinguished from each other, their pattern becomes obvious, especially if negative numbers are included:
The even and odd numbers alternate. Starting at any even number, counting up or down by twos reaches the other even numbers, and there is no reason to skip over zero.
With the introduction of multiplication, parity can be approached in a more formal way using arithmetic expressions. Every integer is either of the form 2k or 2k + 1; the former numbers are even and the latter are odd. For example, 1 is odd because 1 = (2 × 0) + 1, and 0 is even because 0 = 2 × 0. Making a table of these facts then reinforces the number line picture above.
Defining parity
The precise definition of a mathematical term, such as "even" meaning "integer multiple of two", is ultimately a convention. Unlike "even", some mathematical terms are purposefully constructed to exclude trivial or degenerate cases. Prime numbers are a famous example. Before the 20th century, definitions of primality were inconsistent, and significant mathematicians such as Goldbach, Lambert, Legendre, Cayley, and Kronecker wrote that 1 was prime. The modern definition of "prime number" is "positive integer with exactly 2 factors", so 1 is not prime. This definition can be rationalized by observing that it more naturally suits mathematical theorems that concern the primes. For example, the fundamental theorem of arithmetic is easier to state when 1 is not considered prime.
It would be possible to similarly redefine the term "even" in a way that no longer includes zero. However, in this case, the new definition would make it more difficult to state theorems concerning the even numbers. Already the effect can be seen in the algebraic rules governing even and odd numbers. The most relevant rules concern addition, subtraction, and multiplication:
even ± even = even
odd ± odd = even
even × integer = even
Inserting appropriate values into the left sides of these rules, one can produce 0 on the right sides:
2 − 2 = 0
−3 + 3 = 0
4 × 0 = 0
The above rules would therefore be incorrect if zero were not even. At best they would have to be modified. For example, one test study guide asserts that even numbers are characterized as integer multiples of two, but zero is "neither even nor odd". Accordingly, the guide's rules for even and odd numbers contain exceptions:
even ± even = even (or zero)
odd ± odd = even (or zero)
even × nonzero integer = even
Making an exception for zero in the definition of evenness forces one to make such exceptions in the rules for even numbers. From another perspective, taking the rules obeyed by positive even numbers and requiring that they continue to hold for integers forces the usual definition and the evenness of zero.
Mathematical contexts
Countless results in number theory invoke the fundamental theorem of arithmetic and the algebraic properties of even numbers, so the above choices have far-reaching consequences. For example, the fact that positive numbers have unique factorizations means that one can determine whether a number has an even or odd number of distinct prime factors. Since 1 is not prime, nor does it have prime factors, it is a product of 0 distinct primes; since 0 is an even number, 1 has an even number of distinct prime factors. This implies that the Möbius function takes the value μ(1) = 1, which is necessary for it to be a multiplicative function and for the Möbius inversion formula to work.
Not being odd
A number n is odd if there is an integer k such that n = 2k + 1. One way to prove that zero is not odd is by contradiction: if 0 = 2k + 1 then k = −1/2, which is not an integer. Since zero is not odd, if an unknown number is proven to be odd, then it cannot be zero. This apparently trivial observation can provide a convenient and revealing proof explaining why an odd number is nonzero.
A classic result of graph theory states that a graph of odd order (having an odd number of vertices) always has at least one vertex of even degree. (The statement itself requires zero to be even: the empty graph has an even order, and an isolated vertex has an even degree.) In order to prove the statement, it is actually easier to prove a stronger result: any odd-order graph has an odd number of even degree vertices. The appearance of this odd number is explained by a still more general result, known as the handshaking lemma: any graph has an even number of vertices of odd degree. Finally, the even number of odd vertices is naturally explained by the degree sum formula.
Sperner's lemma is a more advanced application of the same strategy. The lemma states that a certain kind of coloring on a triangulation of a simplex has a subsimplex that contains every color. Rather than directly construct such a subsimplex, it is more convenient to prove that there exists an odd number of such subsimplices through an induction argument. A stronger statement of the lemma then explains why this number is odd: it naturally breaks down as (n + 1) + n when one considers the two possible orientations of a simplex.
Even-odd alternation
The fact that zero is even, together with the fact that even and odd numbers alternate, is enough to determine the parity of every other natural number. This idea can be formalized into a recursive definition of the set of even natural numbers:
0 is even.
(n + 1) is even if and only if n is not even.
This definition has the conceptual advantage of relying only on the minimal foundations of the natural numbers: the existence of 0 and of successors. As such, it is useful for computer logic systems such as LF and the Isabelle theorem prover. With this definition, the evenness of zero is not a theorem but an axiom. Indeed, "zero is an even number" may be interpreted as one of the Peano axioms, of which the even natural numbers are a model. A similar construction extends the definition of parity to transfinite ordinal numbers: every limit ordinal is even, including zero, and successors of even ordinals are odd.
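A direct transcription of this recursive definition into ordinary code — a minimal sketch in plain Python rather than a formalization in LF or Isabelle, with an arbitrarily chosen function name — looks like the following:

```python
def is_even(n: int) -> bool:
    """Parity of a natural number by the recursive definition: 0 is even,
    and n + 1 is even exactly when n is not."""
    if n == 0:
        return True            # base case: the evenness of zero is taken as given
    return not is_even(n - 1)  # successor case: parity alternates

assert is_even(0) and not is_even(1) and is_even(4)
```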
The classic point in polygon test from computational geometry applies the above ideas. To determine if a point lies within a polygon, one casts a ray from infinity to the point and counts the number of times the ray crosses the edge of polygon. The crossing number is even if and only if the point is outside the polygon. This algorithm works because if the ray never crosses the polygon, then its crossing number is zero, which is even, and the point is outside. Every time the ray does cross the polygon, the crossing number alternates between even and odd, and the point at its tip alternates between outside and inside.
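A minimal sketch of the crossing-number test described above follows. It assumes a simple polygon given as a list of vertices, casts the ray horizontally to the right, and ignores degenerate cases such as the ray passing exactly through a vertex.

```python
def point_in_polygon(x, y, vertices):
    """Ray-casting test: count how many polygon edges a rightward horizontal ray
    from (x, y) crosses. The count starts at zero (even = outside), and its parity
    flips at every crossing."""
    crossings = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge meets that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1  # odd number of crossings => inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert point_in_polygon(2, 2, square) and not point_in_polygon(5, 2, square)
```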
In graph theory, a bipartite graph is a graph whose vertices are split into two colors, such that neighboring vertices have different colors. If a connected graph has no odd cycles, then a bipartition can be constructed by choosing a base vertex v and coloring every vertex black or white, depending on whether its distance from v is even or odd. Since the distance between v and itself is 0, and 0 is even, the base vertex is colored differently from its neighbors, which lie at a distance of 1.
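A sketch of this construction, assuming the graph is given as an adjacency list: a breadth-first search from the base vertex colors each vertex by the parity of its distance, and an adjacent pair with the same color signals an odd cycle.

```python
from collections import deque

def two_color(adjacency, base):
    """Color the vertices of a connected graph by the parity of their distance
    from `base`: distance 0 is even, so the base vertex gets color 0, and its
    neighbors at distance 1 get color 1."""
    color = {base: 0}
    queue = deque([base])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in color:
                color[w] = 1 - color[v]   # neighbor lies one step farther: flip parity
                queue.append(w)
            elif color[w] == color[v]:
                raise ValueError("graph contains an odd cycle; not bipartite")
    return color

# A path a - b - c: distances from 'a' are 0, 1, 2, so the colors are 0, 1, 0.
print(two_color({"a": ["b"], "b": ["a", "c"], "c": ["b"]}, "a"))
```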
Algebraic patterns
In abstract algebra, the even integers form various algebraic structures that require the inclusion of zero. The fact that the additive identity (zero) is even, together with the evenness of sums and additive inverses of even numbers and the associativity of addition, means that the even integers form a group. Moreover, the group of even integers under addition is a subgroup of the group of all integers; this is an elementary example of the subgroup concept. The earlier observation that the rule "even − even = even" forces 0 to be even is part of a general pattern: any nonempty subset of an additive group that is closed under subtraction must be a subgroup, and in particular, must contain the identity.
Since the even integers form a subgroup of the integers, they partition the integers into cosets. These cosets may be described as the equivalence classes of the following equivalence relation: x ~ y if x − y is even. Here, the evenness of zero is directly manifested as the reflexivity of the binary relation ~. There are only two cosets of this subgroup—the even and odd numbers—so it has index 2.
Analogously, the alternating group is a subgroup of index 2 in the symmetric group on n letters. The elements of the alternating group, called even permutations, are the products of even numbers of transpositions. The identity map, an empty product of no transpositions, is an even permutation since zero is even; it is the identity element of the group.
The rule "even × integer = even" means that the even numbers form an ideal in the ring of integers, and the above equivalence relation can be described as equivalence modulo this ideal. In particular, even integers are exactly those integers k where This formulation is useful for investigating integer zeroes of polynomials.
2-adic order
There is a sense in which some multiples of 2 are "more even" than others. Multiples of 4 are called doubly even, since they can be divided by 2 twice. Not only is zero divisible by 4, zero has the unique property of being divisible by every power of 2, so it surpasses all other numbers in "evenness".
One consequence of this fact appears in the bit-reversed ordering of integer data types used by some computer algorithms, such as the Cooley–Tukey fast Fourier transform. This ordering has the property that the farther to the left the first 1 occurs in a number's binary expansion, or the more times it is divisible by 2, the sooner it appears. Zero's bit reversal is still zero; it can be divided by 2 any number of times, and its binary expansion does not contain any 1s, so it always comes first.
Although 0 is divisible by 2 more times than any other number, it is not straightforward to quantify exactly how many times that is. For any nonzero integer n, one may define the 2-adic order of n to be the number of times n is divisible by 2. This description does not work for 0; no matter how many times it is divided by 2, it can always be divided by 2 again. Rather, the usual convention is to set the 2-order of 0 to be infinity as a special case. This convention is not peculiar to the 2-order; it is one of the axioms of an additive valuation in higher algebra.
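As a small illustration of this convention, the sketch below computes the 2-adic order by repeated halving and returns infinity for zero, as described above.

```python
import math

def two_adic_order(n: int):
    """Number of times n is divisible by 2; by convention the order of 0 is infinity."""
    if n == 0:
        return math.inf  # 0 can be halved forever, so its 2-adic order is set to infinity
    order = 0
    while n % 2 == 0:
        n //= 2
        order += 1
    return order

print([two_adic_order(k) for k in (0, 1, 2, 4, 8, 12)])  # [inf, 0, 1, 2, 3, 2]
```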
The powers of two—1, 2, 4, 8, ...—form a simple sequence of numbers of increasing 2-order. In the 2-adic numbers, such sequences actually converge to zero.
Education
The subject of the parity of zero is often treated within the first two or three years of primary education, as the concept of even and odd numbers is introduced and developed.
Students' knowledge
The chart on the right depicts children's beliefs about the parity of zero, as they progress from Year 1 (age 5–6 years) to Year 6 (age 10–11 years) of the English education system. The data is from Len Frobisher, who conducted a pair of surveys of English schoolchildren. Frobisher was interested in how knowledge of single-digit parity translates to knowledge of multiple-digit parity, and zero figures prominently in the results.
In a preliminary survey of nearly 400 seven-year-olds, 45% chose even over odd when asked the parity of zero. A follow-up investigation offered more choices: neither, both, and don't know. This time the number of children in the same age range identifying zero as even dropped to 32%. Success in deciding that zero is even initially shoots up and then levels off at around 50% in Years 3 to 6. For comparison, the easiest task, identifying the parity of a single digit, levels off at about 85% success.
In interviews, Frobisher elicited the students' reasoning. One fifth-year decided that 0 was even because it was found on the 2 times table. A couple of fourth-years realized that zero can be split into equal parts. Another fourth-year reasoned "1 is odd and if I go down it's even." The interviews also revealed the misconceptions behind incorrect responses. A second-year was "quite convinced" that zero was odd, on the basis that "it is the first number you count". A fourth-year referred to 0 as "none" and thought that it was neither odd nor even, since "it's not a number". In another study, Annie Keith observed a class of 15 second-graders who convinced each other that zero was an even number based on even-odd alternation and on the possibility of splitting a group of zero things in two equal groups.
More in-depth investigations were conducted by Esther Levenson, Pessia Tsamir, and Dina Tirosh, who interviewed a pair of sixth-grade students in the USA who were performing highly in their mathematics class. One student preferred deductive explanations of mathematical claims, while the other preferred practical examples. Both students initially thought that 0 was neither even nor odd, for different reasons. Levenson et al. demonstrated how the students' reasoning reflected their concepts of zero and division.
Deborah Loewenberg Ball analyzed US third grade students' ideas about even and odd numbers and zero, which they had just been discussing with a group of fourth-graders. The students discussed the parity of zero, the rules for even numbers, and how mathematics is done. The claims about zero took many forms, as seen in the list on the right. Ball and her coauthors argued that the episode demonstrated how students can "do mathematics in school", as opposed to the usual reduction of the discipline to the mechanical solution of exercises.
One of the themes in the research literature is the tension between students' concept images of parity and their concept definitions. Levenson et al.'s sixth-graders both defined even numbers as multiples of 2 or numbers divisible by 2, but they were initially unable to apply this definition to zero, because they were unsure how to multiply or divide zero by 2. The interviewer eventually led them to conclude that zero was even; the students took different routes to this conclusion, drawing on a combination of images, definitions, practical explanations, and abstract explanations. In another study, David Dickerson and Damien Pitman examined the use of definitions by five advanced undergraduate mathematics majors. They found that the undergraduates were largely able to apply the definition of "even" to zero, but they were still not convinced by this reasoning, since it conflicted with their concept images.
Teachers' knowledge
Researchers of mathematics education at the University of Michigan have included the true-or-false prompt "0 is an even number" in a database of over 250 questions designed to measure teachers' content knowledge. For them, the question exemplifies "common knowledge ... that any well-educated adult should have", and it is "ideologically neutral" in that the answer does not vary between traditional and reform mathematics. In a 2000–2004 study of 700 primary teachers in the United States, overall performance on these questions significantly predicted improvements in students' standardized test scores after taking the teachers' classes. In a more in-depth 2008 study, the researchers found a school where all of the teachers thought that zero was neither odd nor even, including one teacher who was exemplary by all other measures. The misconception had been spread by a math coach in their building.
It is uncertain how many teachers harbor misconceptions about zero. The Michigan studies did not publish data for individual questions. Betty Lichtenberg, an associate professor of mathematics education at the University of South Florida, in a 1972 study reported that when a group of prospective elementary school teachers were given a true-or-false test including the item "Zero is an even number", they found it to be a "tricky question", with about two thirds answering "False".
Implications for instruction
Mathematically, proving that zero is even is a simple matter of applying a definition, but more explanation is needed in the context of education. One issue concerns the foundations of the proof; the definition of "even" as "integer multiple of 2" is not always appropriate. A student in the first years of primary education may not yet have learned what "integer" or "multiple" means, much less how to multiply with 0. Additionally, stating a definition of parity for all integers can seem like an arbitrary conceptual shortcut if the only even numbers investigated so far have been positive. It can help to acknowledge that as the number concept is extended from positive integers to include zero and negative integers, number properties such as parity are also extended in a nontrivial way.
Numerical cognition
Adults who do believe that zero is even can nevertheless be unfamiliar with thinking of it as even, enough so to measurably slow them down in a reaction time experiment. Stanislas Dehaene, a pioneer in the field of numerical cognition, led a series of such experiments in the early 1990s. A numeral is flashed to the subject on a monitor, and a computer records the time it takes the subject to push one of two buttons to identify the number as odd or even. The results showed that 0 was slower to process than other even numbers. Some variations of the experiment found delays as long as 60 milliseconds or about 10% of the average reaction time—a small difference but a significant one.
Dehaene's experiments were not designed specifically to investigate 0 but to compare competing models of how parity information is processed and extracted. The most specific model, the mental calculation hypothesis, suggests that reactions to 0 should be fast; 0 is a small number, and it is easy to calculate 0 × 2 = 0. (Subjects are known to compute and name the result of multiplication by zero faster than multiplication of nonzero numbers, although they are slower to verify proposed results like 2 × 0 = 0.) The results of the experiments suggested that something quite different was happening: parity information was apparently being recalled from memory along with a cluster of related properties, such as being prime or a power of two. Both the sequence of powers of two and the sequence of positive even numbers 2, 4, 6, 8, ... are well-distinguished mental categories whose members are prototypically even. Zero belongs to neither list, hence the slower responses.
Repeated experiments have shown a delay at zero for subjects with a variety of ages and national and linguistic backgrounds, confronted with number names in numeral form, spelled out, and spelled in a mirror image. Dehaene's group did find one differentiating factor: mathematical expertise. In one of their experiments, students in the École Normale Supérieure were divided into two groups: those in literary studies and those studying mathematics, physics, or biology. The slowing at 0 was "essentially found in the [literary] group", and in fact, "before the experiment, some L subjects were unsure whether 0 was odd or even and had to be reminded of the mathematical definition".
This strong dependence on familiarity again undermines the mental calculation hypothesis. The effect also suggests that it is inappropriate to include zero in experiments where even and odd numbers are compared as a group. As one study puts it, "Most researchers seem to agree that zero is not a typical even number and should not be investigated as part of the mental number line."
Everyday contexts
Some of the contexts where the parity of zero makes an appearance are purely rhetorical. Linguist Joseph Grimes muses that asking "Is zero an even number?" to married couples is a good way to get them to disagree. People who think that zero is neither even nor odd may use the parity of zero as proof that every rule has a counterexample, or as an example of a trick question.
Around the year 2000, media outlets noted a pair of unusual milestones: "1999/11/19" was the last calendar date composed of all odd digits that would occur for a very long time, and that "2000/02/02" was the first all-even date to occur in a very long time. Since these results make use of 0 being even, some readers disagreed with the idea.
In standardized tests, if a question asks about the behavior of even numbers, it might be necessary to keep in mind that zero is even. Official publications relating to the GMAT and GRE tests both state that 0 is even.
The parity of zero is relevant to odd–even rationing, in which cars may drive or purchase gasoline on alternate days, according to the parity of the last digit in their license plates. Half of the numbers in a given range end in 0, 2, 4, 6, 8 and the other half in 1, 3, 5, 7, 9, so it makes sense to include 0 with the other even numbers. However, in 1977, a Paris rationing system led to confusion: on an odd-only day, the police avoided fining drivers whose plates ended in 0, because they did not know whether 0 was even. To avoid such confusion, the relevant legislation sometimes stipulates that zero is even; such laws have been passed in New South Wales and Maryland.
On U.S. Navy vessels, even-numbered compartments are found on the port side, but zero is reserved for compartments that intersect the centerline. That is, the numbers read 6-4-2-0-1-3-5 from port to starboard.
In the game of roulette, the number 0 does not count as even or odd, giving the casino an advantage on such bets. Similarly, the parity of zero can affect payoffs in prop bets when the outcome depends on whether some randomized number is odd or even, and it turns out to be zero.
The game of "odds and evens" is also affected: if both players cast zero fingers, the total number of fingers is zero, so the even player wins. One teachers' manual suggests playing this game as a way to introduce children to the concept that 0 is divisible by 2.
References
Bibliography
Further reading
External links
Is Zero Even? - Numberphile, video with James Grime, University of Nottingham
Elementary arithmetic
Zero
0 (number) | Parity of zero | [
"Mathematics"
] | 5,511 | [
"Elementary mathematics",
"Arithmetic",
"Elementary arithmetic"
] |
13,260,616 | https://en.wikipedia.org/wiki/Krivine%E2%80%93Stengle%20Positivstellensatz | In real algebraic geometry, the Krivine–Stengle Positivstellensatz (German for "positive-locus-theorem") characterizes polynomials that are positive on a semialgebraic set, which is defined by systems of inequalities of polynomials with real coefficients, or more generally, coefficients from any real closed field.
It can be thought of as a real analogue of Hilbert's Nullstellensatz (which concerns complex zeros of polynomial ideals), and this analogy is at the origin of its name. It was proved by the French mathematician Jean-Louis Krivine and then rediscovered by the Canadian mathematician Gilbert Stengle.
Statement
Let R be a real closed field, and F = {f1, f2, ..., fm} and G = {g1, g2, ..., gr} finite sets of polynomials over R in n variables. Let W be the semialgebraic set

W = {x ∈ Rⁿ : f(x) ≥ 0 for every f ∈ F, and g(x) = 0 for every g ∈ G},

and define the preorder associated with W as the set

P(F, G) = { Σ_(e ∈ {0,1}^m) σ_e · f1^e1 ⋯ fm^em + Σ_(ℓ=1..r) φ_ℓ · g_ℓ : σ_e ∈ Σ²[X1, ..., Xn], φ_ℓ ∈ R[X1, ..., Xn] },

where Σ²[X1, ..., Xn] is the set of sum-of-squares polynomials. In other words, P(F, G) = C + I, where C is the cone generated by F (i.e., the subsemiring of R[X1, ..., Xn] generated by F and arbitrary squares) and I is the ideal generated by G.

Let p ∈ R[X1, ..., Xn] be a polynomial. The Krivine–Stengle Positivstellensatz states that

(i) p ≥ 0 on W if and only if there exist q1, q2 ∈ P(F, G) and an integer s ≥ 0 such that q1·p = p^(2s) + q2.

(ii) p > 0 on W if and only if there exist q1, q2 ∈ P(F, G) such that q1·p = 1 + q2.
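For a deliberately trivial illustration of statement (i) — an example constructed here for exposition, not taken from the cited literature — consider a single inequality constraint:

```latex
% Take F = \{x\}, G = \emptyset, so that W = \{x \in \mathbb{R} : x \ge 0\}.
% The polynomial p = x is nonnegative on W, and a certificate in the sense of (i),
%   q_1 \cdot p = p^{2s} + q_2,
% is obtained with q_1 = x \in P(F, G), s = 1, and q_2 = 0:
\[
  x \cdot x = x^{2} + 0 .
\]
```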
The weak Positivstellensatz is the following variant of the Positivstellensatz. Let R be a real closed field, and F, G, and H finite subsets of R[X1, ..., Xn]. Let C be the cone generated by F, and I the ideal generated by G. Then

{x ∈ Rⁿ : f(x) ≥ 0 for every f ∈ F, g(x) = 0 for every g ∈ G, and h(x) ≠ 0 for every h ∈ H} = ∅

if and only if

there exist f ∈ C, g ∈ I, and an integer s ≥ 0 such that f + g + (Π_(h ∈ H) h)^(2s) = 0.
(Unlike the situation with Hilbert's Nullstellensatz, the "weak" form actually includes the "strong" form as a special case, so the terminology is a misnomer.)
Variants
The Krivine–Stengle Positivstellensatz also has the following refinements under additional assumptions. It should be remarked that Schmüdgen's Positivstellensatz has a weaker assumption than Putinar's Positivstellensatz, but the conclusion is also weaker.
Schmüdgen's Positivstellensatz
Suppose that R = ℝ. If the semialgebraic set W is compact, then each polynomial p ∈ ℝ[X1, ..., Xn] that is strictly positive on W can be written as a polynomial in the defining functions of W with sums-of-squares coefficients, i.e. p ∈ P(F, G). Here p is said to be strictly positive on W if p(x) > 0 for all x ∈ W. Note that Schmüdgen's Positivstellensatz is stated for ℝ and does not hold for arbitrary real closed fields.
Putinar's Positivstellensatz
Define the quadratic module associated with F as the set

Q(F, G) = { σ0 + Σ_(j=1..m) σj·fj + Σ_(ℓ=1..r) φℓ·gℓ : σj ∈ Σ²[X1, ..., Xn], φℓ ∈ ℝ[X1, ..., Xn] }.

Assume there exists L > 0 such that the polynomial L − Σ_(i=1..n) xi² belongs to Q(F, G). If p(x) > 0 for all x ∈ W, then p ∈ Q(F, G).
See also
Positive polynomial for other positivstellensatz theorems.
Real Nullstellensatz
Notes
References
Real algebraic geometry
Algebraic varieties
German words and phrases
Theorems in algebraic geometry | Krivine–Stengle Positivstellensatz | [
"Mathematics"
] | 635 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
13,261,213 | https://en.wikipedia.org/wiki/Lentinan | Lentinan is a polysaccharide isolated from the fruit body of shiitake mushroom (Lentinula edodes).
Chemistry
Lentinan is a β-1,3 beta-glucan with β-1,6 branching. It has a molecular weight of 500,000 Da and specific rotation of +14-22° (NaOH).
Research
Preclinical studies
An in vitro experiment showed lentinan stimulated production of white blood cells in the human cell line U937. Lentinan is thought to be inactive in humans when given orally and is therefore administered intravenously. The authors of an in vivo study of lentinan suggested that the compound may be active when administered orally in mice.
Human clinical trials
Lentinan has been the subject of a limited number of clinical studies in cancer patients in Japan; however, evidence of efficacy is lacking.
Adverse effects
Lentinan has been reported to cause shiitake mushroom dermatitis.
See also
Medicinal mushrooms
References
External links
Lentinan effects (antitumor and others)
Memorial Sloan-Kettering Cancer Center's page for Lentinan.
Immunostimulants
Polysaccharides | Lentinan | [
"Chemistry"
] | 247 | [
"Carbohydrates",
"Polysaccharides"
] |
13,261,937 | https://en.wikipedia.org/wiki/PGG-glucan | Poly-[1-6]-β-D-glucopyranosyl-[1-3]-β-D-glucopyranose glucan (PGG glucan, proprietary name Betafectin) is an anti-infective agent and a type of beta-glucan.
Betafectin is a PGG-glucan, a novel β-(1,6) branched β-(1,3) glucan, purified from the cell walls of Saccharomyces cerevisiae.
It is a macrophage-specific immunomodulator.
References
External links
Clinical trial in surgical patients
Immunology | PGG-glucan | [
"Chemistry",
"Biology"
] | 148 | [
"Immunology",
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
13,262,036 | https://en.wikipedia.org/wiki/Death%20threat | A death threat is a threat, often made anonymously, by one person or a group of people to kill another person or group of people. These threats are often designed to intimidate victims in order to manipulate their behaviour, in which case a death threat could be a form of coercion. For example, a death threat could be used to dissuade a public figure from pursuing a criminal investigation or an advocacy campaign.
Legality
In most jurisdictions, death threats are a serious type of criminal offence. Death threats are often covered by coercion statutes. For instance, the coercion statute in Alaska says:
In the United States, some judges have made death threats during legal proceedings, stating that they hope the defendant will die in prison. An American judge was also removed from their position for making death threats towards children while off the bench.
Methods
A death threat can be communicated via a wide range of media, among these letters, newspaper publications, telephone calls, internet blogs, e-mail, and social media. If the threat is made against a political figure, it can also be considered treason. If a threat targets a location that is frequented by people (e.g. a building), it could be a terrorist threat. Sometimes, death threats are part of a wider campaign of abuse targeting a person or a group of people (see terrorism, mass murder).
Against a head of state
In many governments, including monarchies and republics of all levels of political freedom, threatening to kill the head of state or head of government (such as the sovereign, president, or prime minister) is considered a crime. Punishments for such threats vary. United States law provides for up to five years in prison for threatening any government official, especially the president. In the United Kingdom, under the Treason Felony Act 1848, it is illegal to attempt to kill or deprive the monarch of their throne; this offense was originally punished with penal transportation, and then was changed to the death penalty, and currently the penalty is life imprisonment.
Osman warning
Named after a high-profile case, Osman v United Kingdom, Osman warnings (also letters or notices) are warnings of a death threat or high risk of murder issued by British police or legal authorities to the possible victim. They are used when there is intelligence of the threat, but there is not enough evidence to justify the police arresting the potential murderer.
See also
Assassination
Bomb threat
Coercion
Contract killing
Extortion
Garda Information Message in Ireland
Murder
Stalking
Terroristic threat
Witness intimidation
References
External links
Judiciary Criminal Charges
The Forensic Linguistics Institute
Crimes
Death
Violence
Illegal speech in the United States
Terrorism
Aggression
Harassment and bullying
Speech crimes
Murder | Death threat | [
"Biology"
] | 537 | [
"Behavior",
"Violence",
"Harassment and bullying",
"Aggression",
"Human behavior"
] |
13,262,080 | https://en.wikipedia.org/wiki/Jyeshtha%20%28nakshatra%29 | Jyeshtha ("The Elder" or "Older" in Sanskrit) is the 18th nakshatra or lunar mansion in Hindu astronomy and Vedic astrology associated with the string of the constellation Scorpii, and the stars ε, ζ1 Sco, η, θ, ι1 Sco, κ, λ, μ and ν Scorpionis.
Astrology
The symbol of Jyeshtha is a circular amulet, umbrella, or earring, and it is associated with Indra, chief of the gods. The lord of Jyeshtha is Budha (Mercury). Jyeshtha is termed Thrikketta in Malayalam and Kēttai in Tamil. The nakshatra is honorifically called Trikkētta (Tiru + Kētta). Jyeshtha nakshatra corresponds to Antares.
The Ascendant/Lagna in Jyeshtha indicates a person with a sense of seniority and superiority, who is protective, responsible and a leader of their family. They are wise, profound, psychic, maybe with occult powers, and are courageous and inventive.
Under the traditional Hindu principle of naming individuals according to their Ascendant/Lagna, the following Sanskrit syllables correspond with this Nakshatra, and would belong at the beginning of a first name:
No
Ya
Yi
Yu
References
Nakshatra | Jyeshtha (nakshatra) | [
"Astronomy"
] | 280 | [
"Nakshatra",
"Constellations"
] |