Antidepressant
Two or more antidepressants may be taken together, from either the same or different classes (affecting the same area of the brain, often at a much higher level). An antipsychotic may be combined with an antidepressant, particularly atypical antipsychotics such as aripiprazole (Abilify), quetiapine (Seroquel), olanzapine (Zyprexa), and risperidone (Risperdal). It is unknown whether undergoing psychological therapy at the same time as taking antidepressants enhances the antidepressive effect of the medication.

Less common adjuncts

Lithium has been used to augment antidepressant therapy in those who have failed to respond to antidepressants alone. Furthermore, lithium dramatically decreases the suicide risk in recurrent depression. There is some evidence for the addition of a thyroid hormone, triiodothyronine, in patients with normal thyroid function. Psychopharmacologists have also tried adding a stimulant, in particular D-amphetamine. However, the use of stimulants in cases of treatment-resistant depression is relatively controversial. A review article published in 2007 found that psychostimulants may be effective in treatment-resistant depression with concomitant antidepressant therapy, but a more certain conclusion could not be drawn due to substantial deficiencies in the available studies and the somewhat contradictory nature of their results.

History

The idea of an antidepressant, if melancholy is taken as synonymous with depression, existed at least as early as the 1599 pamphlet A pil to purge melancholie or, A preprative to a pvrgation: or, Topping, copping, and capping: take either or whether: or, Mash them, and squash them, and dash them, and diddle come derrie come daw them, all together... Thomas d'Urfey's Wit and Mirth: Or Pills to Purge Melancholy, the title of a large collection of songs, was published between 1698 and 1720.
Before the 1950s, opioids and amphetamines were commonly used as antidepressants; amphetamine has been described as the first antidepressant. Their use for depression was later restricted due to their addictive nature and side effects. Extracts from the herb St John's wort have been used as a "nerve tonic" to alleviate depression. St John's wort fell out of favor in most countries through the 19th and 20th centuries, except in Germany, where Hypericum extracts were eventually licensed, packaged, and prescribed. Small-scale efficacy trials were carried out in the 1970s and 1980s, and attention grew in the 1990s following a meta-analysis. It remains an over-the-counter (OTC) supplement in most countries. Lead contamination associated with its use has raised concern, as lead levels in women in the United States taking St John's wort are elevated by about 20% on average. Research continues to investigate its active component hyperforin and to further understand its mode of action.

Isoniazid, iproniazid, and imipramine

In 1951, Irving Selikoff and Edward H. Robitzek, working out of Sea View Hospital on Staten Island, began clinical trials of two new anti-tuberculosis agents developed by Hoffmann-La Roche: isoniazid and iproniazid. Only patients with a poor prognosis were initially treated; nevertheless, their condition improved dramatically. Selikoff and Robitzek noted "a subtle general stimulation ... the patients exhibited renewed vigor and indeed this occasionally served to introduce disciplinary problems." The promise of a cure for tuberculosis in the Sea View Hospital trials was excitedly discussed in the mainstream press.
In 1952, learning of the stimulating side effects of isoniazid, the Cincinnati psychiatrist Max Lurie tried it on his patients. In the following year, he and Harry Salzer reported that isoniazid improved depression in two-thirds of their patients, and they coined the term antidepressant to refer to its action. A similar incident took place in Paris, where Jean Delay, head of psychiatry at Sainte-Anne Hospital, heard of this effect from his pulmonology colleagues at Cochin Hospital. In 1952 (before Lurie and Salzer), Delay, with the resident Jean-François Buisson, reported the positive effect of isoniazid on depressed patients. The mode of antidepressant action of isoniazid is still unclear; it is speculated that its effect is due to the inhibition of diamine oxidase, coupled with a weak inhibition of monoamine oxidase A.

Selikoff and Robitzek also experimented with another anti-tuberculosis drug, iproniazid; it showed a greater psychostimulant effect but more pronounced toxicity. Later, Jackson Smith, Gordon Kamman, George E. Crane, and Frank Ayd described the psychiatric applications of iproniazid. Ernst Zeller found iproniazid to be a potent monoamine oxidase inhibitor. Nevertheless, iproniazid remained relatively obscure until Nathan S. Kline, the influential head of research at Rockland State Hospital, began to popularize it in the medical and popular press as a "psychic energizer". Roche put a significant marketing effort behind iproniazid; its sales grew until it was recalled in 1961 due to reports of lethal hepatotoxicity.

The antidepressant effect of a tricyclic antidepressant, a three-ringed compound, was first discovered in 1957 by Roland Kuhn in a Swiss psychiatric hospital. Antihistamine derivatives were used to treat surgical shock and later as neuroleptics. Although in 1955 reserpine was shown to be more effective than placebo in alleviating anxious depression, neuroleptics were being developed as sedatives and antipsychotics.
Attempting to improve the effectiveness of chlorpromazine, Kuhn, in conjunction with the Geigy Pharmaceutical Company, discovered the compound "G 22355", later renamed imipramine. Imipramine had a beneficial effect on patients with depression who showed mental and motor retardation. In 1955–56, Kuhn described his new compound as a "thymoleptic" ("taking hold of the emotions"), in contrast with neuroleptics ("taking hold of the nerves"). These drugs gradually became established, resulting in their patent and manufacture in the US in 1951 by Häfliger and Schindler. Antidepressants became prescription drugs in the 1950s. It was estimated that no more than fifty to one hundred individuals per million had the kind of depression that these new drugs would treat, and pharmaceutical companies were not enthusiastic about marketing to this small market. Sales through the 1960s remained poor compared to those of tranquilizers, which were being marketed for different uses. Imipramine remained in common use, and numerous successors were introduced. The use of monoamine oxidase inhibitors (MAOIs) increased after the development and introduction of "reversible" forms that affect only the MAO-A subtype, making these drugs safer to use. By the 1960s, it was thought that the mode of action of tricyclics was to inhibit norepinephrine reuptake. However, norepinephrine reuptake became associated with stimulating effects. Later tricyclics were thought to affect serotonin, as proposed in 1969 by Carlsson and Lindqvist as well as Lapin and Oxenkrug.

Second-generation antidepressants

Researchers began a process of rational drug design to isolate antihistamine-derived compounds that would selectively target these systems. The first such compound to be patented was zimelidine in 1971, while the first released clinically was indalpine. Fluoxetine, developed at Eli Lilly and Company in the early 1970s by Bryan Molloy, Klaus Schmiegel, David T. Wong, and others, was approved for commercial use by the US Food and Drug Administration (FDA) in 1988, becoming the first blockbuster SSRI. SSRIs became known as "novel antidepressants", along with other newer drugs such as SNRIs and NRIs with various selective effects.
Rapid-acting antidepressants

Esketamine (brand name Spravato), the first rapid-acting antidepressant to be approved for clinical treatment of depression, was introduced for this indication in March 2019 in the United States.

Research

A 2016 randomized controlled trial evaluated the rapid antidepressant effects of the psychedelic ayahuasca in treatment-resistant depression, with a positive outcome. In 2018, the FDA granted Breakthrough Therapy Designation for psilocybin-assisted therapy for treatment-resistant depression, and in 2019 it granted Breakthrough Therapy Designation for psilocybin therapy for major depressive disorder.

Publication bias and aged research

A 2018 systematic review published in The Lancet, comparing the efficacy of 21 first- and second-generation antidepressants, found that antidepressant drugs tended to perform better and cause fewer adverse events when they were novel or experimental treatments than when they were evaluated again years later. Unpublished data were also associated with smaller positive effect sizes. However, the review did not find evidence of bias associated with industry-funded research.

Society and culture

Prescription trends

United Kingdom

In the UK, figures reported in 2010 indicated that the number of antidepressants prescribed by the National Health Service (NHS) had almost doubled over a decade. Further analysis published in 2014 showed that the number of antidepressants dispensed annually in the community rose by 25 million over the 14 years between 1998 and 2012, from 15 million to 40 million. Nearly 50% of this rise occurred in the four years after the Great Recession, during which time the annual increase in prescriptions rose from 6.7% to 8.5%. These sources also suggest that, aside from the recession, other factors that may influence changes in prescribing rates include improvements in diagnosis, a reduction of the stigma surrounding mental health, broader prescribing trends, GP characteristics, geographical location, and housing status. Another factor that may contribute to increasing consumption of antidepressants is that these medications are now used for other conditions, including social anxiety and post-traumatic stress disorder. Between 2005 and 2017, the number of adolescents (12 to 17 years) in England who were prescribed antidepressants doubled. On the other hand, antidepressant prescriptions for children aged 5–11 in England decreased between 1999 and 2017. From April 2015, prescriptions increased for both age groups (for people aged 0 to 17) and peaked during the first COVID lockdown in March 2020.
According to National Institute for Health and Care Excellence (NICE) guidelines, antidepressants for children and adolescents with depression or obsessive-compulsive disorder (OCD) should be prescribed together with therapy and only after assessment by a child and adolescent psychiatrist. However, between 2006 and 2017, only 1 in 4 of the 12–17 year-olds who were prescribed an SSRI by their GP had seen a specialist psychiatrist, and 1 in 6 had seen a pediatrician. Half of these prescriptions were for depression and 16% for anxiety, an indication for which antidepressants are not licensed in this group. Suggested reasons why GPs do not follow the guidelines include the difficulty of accessing talking therapies, long waiting lists, and the urgency of treatment. According to some researchers, strict adherence to treatment guidelines would limit access to effective medication for young people with mental health problems.

United States

In the United States, antidepressants were the most commonly prescribed medication in 2013. Of the estimated 16 million "long term" (over 24 months) users, roughly 70 percent are female. About 16.5% of white people in the United States took antidepressants, compared with 5.6% of black people in the United States. The most commonly prescribed antidepressants in the US retail market in 2010 were:

Netherlands

In the Netherlands, paroxetine is the most prescribed antidepressant, followed by amitriptyline, citalopram and venlafaxine.

Adherence

Worldwide, 30% to 60% of people did not follow their practitioner's instructions about taking their antidepressants, and in the US it appeared that around 50% of people did not take their antidepressants as directed by their practitioner. When people fail to take their antidepressants, there is a greater risk that the drug will not help, that symptoms worsen, that they miss work or are less productive at work, and that they may be hospitalized.
Social science perspective

Some academics have highlighted the need to examine the use of antidepressants and other medical treatments in cross-cultural terms, because various cultures prescribe and observe different manifestations, symptoms, meanings, and associations of depression and other medical conditions within their populations. These cross-cultural discrepancies, it has been argued, have implications for the perceived efficacy and use of antidepressants and other strategies in the treatment of depression in different cultures. In India, antidepressants are largely seen as tools to combat marginality, promising the individual the ability to reintegrate into society through their use, a view and association not observed in the West.

Environmental impacts

Because most antidepressants function by inhibiting the reuptake of the neurotransmitters serotonin, dopamine, and norepinephrine, these drugs can interfere with natural neurotransmitter levels in other organisms affected by indirect exposure. The antidepressants fluoxetine and sertraline have been detected in aquatic organisms residing in effluent-dominated streams. The presence of antidepressants in surface waters and aquatic organisms has caused concern because ecotoxicological effects on aquatic organisms due to fluoxetine exposure have been demonstrated. Coral reef fish have been demonstrated to modulate aggressive behavior through serotonin. Artificially increasing serotonin levels in crustaceans can temporarily reverse social status and turn subordinates into aggressive and territorial dominant males. Exposure to fluoxetine has been demonstrated to increase serotonergic activity in fish, subsequently reducing aggressive behavior. Perinatal exposure to fluoxetine at relevant environmental concentrations has been shown to lead to significant modifications of memory processing in 1-month-old cuttlefish. This impairment may disadvantage cuttlefish and decrease their survival. Somewhat less than 10% of orally administered fluoxetine is excreted from humans unchanged or as glucuronide.
Anode
An anode usually is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, which is usually an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow out of the anode of a galvanic cell into the outside or external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging).

In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result, anions will tend to move towards the anode, where they undergo oxidation. Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc.

Charge flow

The terms anode and cathode are not defined by the voltage polarity of electrodes, but are usually defined by the direction of current through the electrode. An anode usually is the electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode usually is the electrode through which conventional current flows out of the device.
In general, if the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed. However, the definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes, where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction, so the electrodes are named based on the direction of this "forward" current. In a diode, the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change when reverse current flows through the device. Similarly, in a vacuum tube only one electrode can thermionically emit electrons into the evacuated tube, so electrons can only enter the device from the external circuit through the heated electrode. This electrode is therefore permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode.

Conventional current depends not only on the direction the charge carriers move, but also on the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode.

Examples

The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power:

In a discharging battery or galvanic cell, the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards.

In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging. A code sketch of this role reversal follows below.
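As a rough illustration of the battery convention just described, here is a minimal Python sketch; the function and enum names are hypothetical helpers invented for this example, not from any library:

```python
# Minimal sketch of the anode/cathode naming convention described above.
# Function and enum names are illustrative, not from any real library.
from enum import Enum

class Mode(Enum):
    DISCHARGING = "discharging"   # cell provides power (galvanic operation)
    CHARGING = "charging"         # cell consumes power (electrolytic operation)

def anode_terminal(mode: Mode) -> str:
    """Return which battery terminal acts as the anode in the given mode.

    The anode is always the electrode where oxidation occurs and where
    conventional current enters the device from the external circuit.
    """
    if mode is Mode.DISCHARGING:
        return "negative terminal"  # galvanic cell: anode is negative
    return "positive terminal"      # charging / electrolytic: anode is positive

print(anode_terminal(Mode.DISCHARGING))  # negative terminal
print(anode_terminal(Mode.CHARGING))     # positive terminal
```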
In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged, despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal.

In a diode, the anode is the terminal represented by the tail of the arrow symbol (the flat side of the triangle), where conventional current flows into the device. Note that the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current.

In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube.

Etymology

The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises".
The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current run East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference.

In retrospect the name change was unfortunate, not only because the Greek roots alone no longer reveal the anode's function, but more importantly because, as we now know, the Earth's magnetic field direction on which the term anode is based is subject to reversals, whereas the current direction convention on which the term eisode was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember and, although historically false, more durably correct etymology has been suggested: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'.
Electrolytic anode

In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation), which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), AnOx Red Cat (Anode Oxidation, Reduction Cathode), OIL RIG (Oxidation Is Loss, Reduction Is Gain of electrons), Roman Catholic and Orthodox (Reduction – Cathode, Anode – Oxidation), and LEO the lion says GER (Losing Electrons is Oxidation, Gaining Electrons is Reduction).

This process is widely used in metal refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high-purity (99.99%) cathodes; the half-reactions are sketched below. Copper cathodes produced using this method are also described as electrolytic copper. Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions, but otherwise does not participate in the reaction.

Battery or galvanic cell anode

In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally, the positively charged cations flow away from the anode (even though it is negative and would therefore be expected to attract them; this is due to the electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are pushed out through the negative contact and thus through the circuit by the voltage potential, as would be expected.

Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though from an electrochemical viewpoint incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell: using the traditional definition, the anode switches ends between charge and discharge cycles.
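To make the copper-refining example above concrete, the two half-reactions can be written out; these are standard textbook electrochemistry, supplied here for illustration rather than taken from the original article:

```latex
% Copper electrorefining half-reactions: at the impure copper anode,
% copper is oxidized into solution; at the cathode it is re-deposited
% as high-purity metal.
\begin{align*}
\text{anode (oxidation):}\quad   & \mathrm{Cu(s) \longrightarrow Cu^{2+}(aq) + 2e^-} \\
\text{cathode (reduction):}\quad & \mathrm{Cu^{2+}(aq) + 2e^- \longrightarrow Cu(s)}
\end{align*}
```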
Vacuum tube anode

In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a positively charged plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons.

Diode anode

In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to the anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage).

Sacrificial anode

In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which dissolves into the seawater and prevents the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters.

In 1824, to reduce the impact of this destructive electrolytic action on ships' hulls, their fastenings, and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal, attached to the vessel hull and electrically connected to form a cathodic protection circuit.
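A sketch of the chemistry when a zinc anode protects a steel hull in aerated seawater; these are the standard corrosion half-reactions, given here as an illustrative assumption rather than quoted from the article:

```latex
% Sacrificial protection of steel by zinc (illustrative half-reactions):
% zinc, the more reactive metal, is oxidized in place of the iron,
% while oxygen is reduced on the protected steel surface.
\begin{align*}
\text{sacrificial anode:}\quad & \mathrm{Zn \longrightarrow Zn^{2+} + 2e^-} \\
\text{protected cathode:}\quad & \mathrm{O_2 + 2H_2O + 4e^- \longrightarrow 4OH^-}
\end{align*}
```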
A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes. If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron.

Impressed current anode

Another form of cathodic protection uses an impressed current anode, made from titanium and covered with mixed metal oxide. Unlike a sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current, provided by a DC source, to create the cathodic protection. Impressed current anodes are used in larger structures such as pipelines, boats, city water towers, and water heaters.

Related antonym

The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes the anode, for as long as the reversed current is applied. The exception is diodes, where electrode naming is always based on the forward current direction.
Analog television
Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by the amplitude, phase and frequency of an analog signal. Analog signals vary over a continuous range of possible values, which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent. Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television. All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, a digital television transition began in most countries of the world just after the year 2000, with different deadlines for the cessation of analog broadcasts. Several countries have already made the switch, while the remaining countries, mostly in Africa, Asia, and South America, are still in progress.

Development

The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution, and flickered severely. Analog television did not begin in earnest as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor-coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a mechanical spinning-disc system. All-electronic systems became popular with households after World War II.

Standards
Broadcasters of analog television encode their signal using different systems. The official systems of transmission were defined by the ITU in 1961 as: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. A color encoding scheme (NTSC, PAL, or SECAM) could be added to the base monochrome signal. Using RF modulation, the signal is then modulated onto a very high frequency (VHF) or ultra high frequency (UHF) carrier wave.

Each frame of a television image is composed of scan lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The process repeats and the next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal.

The first commercial television systems were black-and-white; color television began in the 1950s. A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection.

Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the chrominance information was added to the monochrome signals in a way that black-and-white televisions ignore; in this way backward compatibility was achieved. There are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC system. The European and Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC system's. SECAM, though, uses a different modulation approach than PAL or NTSC. PAL had a late evolution called PALplus, allowing widescreen broadcasts while remaining fully compatible with existing PAL equipment.
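A small arithmetic sketch of how the scan-line count and frame rate quoted above combine into a horizontal line rate; the 525/30 and 625/25 figures are the standard ones, and the NTSC color frame rate of 30/1.001 Hz is a well-known refinement:

```python
# Horizontal line rate = lines per frame x frames per second.
# 525/30 (NTSC-M) and 625/25 (most PAL/SECAM systems) shown;
# NTSC's color frame rate is 30/1.001 Hz, giving the familiar ~15.734 kHz.

def line_rate_hz(lines_per_frame: int, frames_per_second: float) -> float:
    return lines_per_frame * frames_per_second

print(line_rate_hz(525, 30 / 1.001))  # ~15734.27 Hz (NTSC color)
print(line_rate_hz(625, 25.0))        # 15625.0 Hz (625-line systems)
```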
In principle, all three color encoding systems can be used with any scan line/frame rate combination. Therefore, in order to describe a given signal completely, it is necessary to quote the color system plus the broadcast standard as a capital letter. For example, the United States, Canada, Mexico and South Korea used (or use) NTSC-M, Japan used NTSC-J, the UK used PAL-I, France used SECAM-L, much of Western Europe and Australia used (or use) PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K, and so on. Not all of the possible combinations exist. NTSC is only used with system M, even though there were experiments with NTSC-A (405 line) in the UK and NTSC-N (625 line) in part of South America. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards. For this reason, many people refer to any 625/25 type signal as PAL and to any 525/30 signal as NTSC, even when referring to digital signals; for example, on DVD-Video, which does not contain any analog color encoding, and thus no PAL or NTSC signals at all.

Although a number of different broadcast television systems are in use worldwide, the same principles of operation apply.

Displaying an image

A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line, the beam returns to the start of the next line; at the end of the last line, the beam returns to the beginning of the first line at the top of the screen. As it passes each point, the intensity of the beam is varied, varying the luminance of that point. A color television system is similar, except there are three beams that scan together and an additional signal known as chrominance controls the color of the spot.
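To make the raster-scanning idea concrete, here is a deliberately simplified Python sketch; the resolution and the `luminance` helper are invented for illustration, and a real CRT scans continuously and interlaces fields rather than visiting discrete points:

```python
# Simplified raster scan: visit every point line by line, top to bottom,
# setting the brightness of each point from the incoming luminance signal.
# 'luminance(line, x)' stands in for the demodulated video signal.

WIDTH, LINES = 640, 480  # illustrative resolution, not a broadcast standard

def luminance(line: int, x: int) -> float:
    """Placeholder for the received luminance at this point (0.0-1.0)."""
    return ((line + x) % 256) / 255.0  # dummy pattern

frame = [[0.0] * WIDTH for _ in range(LINES)]
for line in range(LINES):          # vertical scan, one line at a time
    for x in range(WIDTH):         # horizontal sweep across the line
        frame[line][x] = luminance(line, x)
    # horizontal retrace: beam returns to the start of the next line
# vertical retrace: beam returns to the top of the screen
```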
When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time as it was displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device for producing the signal) in exact synchronization with the scanning in the television.

The physics of the CRT require that a finite time interval be allowed for the spot to move back to the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The timing of the luminance signal must allow for this.

The human eye has a characteristic called the phi phenomenon: quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved using a long-persistence phosphor coating on the CRT, so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is rapid on-screen motion. The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a satisfactory compromise, while the process of interlacing two video fields per frame is used to build the image. This doubles the apparent number of video images per second and further reduces flicker and other defects in transmission.

Receiving signals

The television system for each country will specify a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a fixed offset (typically 4.5 to 6 MHz) from the picture carrier. The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution) and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be nearly twice the video bandwidth if pure AM were used.

Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The IF amplifier then raises the signal from the microvolt range to fractions of a volt.
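A hedged sketch of the superheterodyne frequency shift just described, assuming high-side injection (the common arrangement) and the 45.75 MHz video IF quoted later in this article; the channel 2 carrier value and the helper function are illustrative:

```python
# Superheterodyne tuning: the local oscillator (LO) is offset from the
# wanted RF carrier by the intermediate frequency (IF), so mixing shifts
# every received channel to the same fixed IF for amplification.

VIDEO_IF_MHZ = 45.75  # video IF used in US receivers (quoted later in text)

def local_oscillator_mhz(rf_carrier_mhz: float, if_mhz: float = VIDEO_IF_MHZ) -> float:
    """High-side injection: f_LO = f_RF + f_IF."""
    return rf_carrier_mhz + if_mhz

# Example: NTSC channel 2 video carrier at 55.25 MHz
f_lo = local_oscillator_mhz(55.25)
print(f_lo)            # 101.0 MHz
print(f_lo - 55.25)    # 45.75 MHz -> the fixed IF, independent of channel
```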
Extracting the sound

At this point the IF signal consists of a video carrier signal at one frequency and the sound carrier at a fixed offset in frequency. A demodulator recovers the video signal. Also at the output of the same demodulator is a new frequency-modulated sound carrier at the offset frequency. In some sets made before 1948, this was filtered out, and the sound IF of about 22 MHz was sent to an FM demodulator to recover the basic sound signal. In newer sets, this new carrier at the offset frequency was allowed to remain as intercarrier sound, and it was sent to an FM demodulator to recover the basic sound signal. One particular advantage of intercarrier sound is that when the front panel fine tuning knob is adjusted, the sound carrier frequency does not change with the tuning, but stays at the above-mentioned offset frequency; consequently, it is easier to tune the picture without losing the sound. The FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, television sound transmissions were monophonic.

Structure of a video signal

The video carrier is demodulated to give a composite video signal containing luminance, chrominance and synchronization signals. The result is identical to the composite video format used by analog video devices such as VCRs or CCTV cameras. To ensure good linearity, and thus fidelity consistent with affordable manufacturing costs of transmitters and receivers, the video carrier is never modulated to the extent that it is shut off altogether. When intercarrier sound was introduced later in 1948, not completely shutting off the carrier had the side effect of allowing intercarrier sound to be economically implemented.

Each line of the displayed image is transmitted using a signal of the standard format. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC, and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the colorburst and the chrominance signal are not present.
The front porch is a brief (about 1.5 microseconds) period inserted between the end of each transmitted line of picture and the leading edge of the next line's sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The front porch is the first component of the horizontal blanking interval, which also contains the horizontal sync pulse and the back porch.

The back porch is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse. In color television systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system, it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference.

In some professional systems, particularly satellite links between locations, the digital audio is embedded within the line sync pulses of the video signal, to save the cost of renting a second channel. The name for this proprietary system is Sound-in-Syncs.

Monochrome video signal extraction

The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the black level. In the NTSC system, there is a blanking signal level used during the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM these are identical. In a monochrome receiver, the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively.

Color video signal extraction

U and V signals

A color signal conveys picture information for each of the red, green, and blue components of an image. However, these are not simply transmitted as three separate signals, because such a signal would not be compatible with monochrome receivers, an important consideration when color broadcasting was first introduced. It would also occupy three times the bandwidth of existing television, requiring a decrease in the number of television channels available.
Instead, the RGB signals are converted into YUV form, where the Y signal represents the luminance of the colors in the image. Because the rendering of colors in this way is the goal of both monochrome film and television systems, the Y signal is ideal for transmission as the luminance signal. This ensures a monochrome receiver will display a correct picture in black and white, where a given color is reproduced by a shade of gray that correctly reflects how light or dark the original color is.

The U and V signals are color difference signals. The U signal is the difference between the B signal and the Y signal, also known as B minus Y (B-Y), and the V signal is the difference between the R signal and the Y signal, also known as R minus Y (R-Y). The U signal then represents how purplish-blue (or its complementary color, yellowish-green) the color is, and the V signal how purplish-red (or its complementary, greenish-cyan) it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to detail in luminance than in color, the U and V signals can be transmitted with reduced bandwidth with acceptable results.

In the receiver, a single demodulator can extract an additive combination of U plus V. An example is the X demodulator used in the X/Z demodulation system. In that same system, a second demodulator, the Z demodulator, also extracts an additive combination of U plus V, but in a different ratio. The X and Z color difference signals are further matrixed into three color difference signals, (R-Y), (B-Y), and (G-Y). The combinations of usually two, but sometimes three, demodulators varied between receiver designs; in the end, further matrixing of the resulting color-difference signals yielded the three color-difference signals (R-Y), (B-Y), and (G-Y).
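The luminance weighting and the derivation of (G-Y) from the other two difference signals can be written out directly. This sketch uses the standard NTSC/BT.601 luminance coefficients; the scale factors applied to U and V before transmission are omitted here for simplicity:

```python
# Forming Y and the color-difference signals from R, G, B, and
# recovering G-Y from R-Y and B-Y, using the standard luminance weights
# Y = 0.299 R + 0.587 G + 0.114 B.

KR, KG, KB = 0.299, 0.587, 0.114

def to_luma_and_differences(r: float, g: float, b: float):
    y = KR * r + KG * g + KB * b
    return y, r - y, b - y          # Y, (R-Y), (B-Y)

def g_minus_y(r_minus_y: float, b_minus_y: float) -> float:
    # Since KR*(R-Y) + KG*(G-Y) + KB*(B-Y) = 0, solve for (G-Y):
    return -(KR * r_minus_y + KB * b_minus_y) / KG

y, ry, by = to_luma_and_differences(0.5, 0.2, 0.8)
gy = g_minus_y(ry, by)
print(round(y + ry, 3), round(y + gy, 3), round(y + by, 3))  # 0.5 0.2 0.8
```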
The R, G, and B signals needed in the receiver for the display device (CRT, plasma display, or LCD) are electronically derived by matrixing as follows: R is the additive combination of (R-Y) with Y, G is the additive combination of (G-Y) with Y, and B is the additive combination of (B-Y) with Y. All of this is accomplished electronically. It can be seen that in the combining process, the low-resolution portion of the Y signals cancels out, leaving R, G, and B signals able to render a low-resolution image in full color. However, the higher-resolution portions of the Y signals do not cancel out, and so are equally present in R, G, and B, producing the higher-resolution image detail in monochrome, although it appears to the human eye as a full-color and full-resolution picture.

NTSC and PAL systems

In the NTSC and PAL color systems, U and V are transmitted by using quadrature amplitude modulation of a subcarrier. This kind of modulation applies two independent signals to one subcarrier, with the idea that both signals will be recovered independently at the receiving end. For NTSC, the subcarrier is at 3.58 MHz; for the PAL system it is at 4.43 MHz. The subcarrier itself is not included in the modulated signal (suppressed carrier); it is the subcarrier sidebands that carry the U and V information. The usual reason for using a suppressed carrier is that it saves on transmitter power. In this application a more important advantage is that the color signal disappears entirely in black-and-white scenes. The subcarrier is within the bandwidth of the main luminance signal and consequently can cause undesirable artifacts on the picture, all the more noticeable in black-and-white receivers.
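A minimal sketch of the quadrature amplitude modulation just described, assuming the NTSC subcarrier frequency; U and V ride on the sine and cosine phases of the same suppressed subcarrier:

```python
import math

# Quadrature amplitude modulation of the color subcarrier (sketch):
# U rides on the sine phase and V on the cosine phase of one subcarrier,
# so chroma vanishes entirely when U = V = 0 (a black-and-white scene).

F_SC = 3.579545e6  # NTSC color subcarrier, Hz

def chroma(u: float, v: float, t: float) -> float:
    w = 2 * math.pi * F_SC
    return u * math.sin(w * t) + v * math.cos(w * t)

print(chroma(0.0, 0.0, 1e-7))   # 0.0 -> no chroma signal on gray content
print(chroma(0.3, -0.2, 1e-7))  # nonzero chroma for a colored point
```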
A small sample of the subcarrier, the colorburst, is included in the horizontal blanking portion, which is not visible on the screen. This is necessary to give the receiver a phase reference for the modulated signal. Under quadrature amplitude modulation the modulated chrominance signal changes phase as compared to its subcarrier and also changes amplitude. The chrominance amplitude (when considered together with the Y signal) represents the approximate saturation of a color, and the chrominance phase against the subcarrier reference approximately represents the hue of the color. For particular test colors found in the test color bar pattern, exact amplitudes and phases are sometimes defined, for test and troubleshooting purposes only.

Due to the nature of the quadrature amplitude modulation process that created the chrominance signal, at certain times the signal represents only the U signal, and 70 nanoseconds (NTSC) later, it represents only the V signal. About 70 nanoseconds later still, -U; and another 70 nanoseconds later, -V. So to extract U, a synchronous demodulator is utilized, which uses the subcarrier to briefly gate the chroma every 280 nanoseconds, so that the output is only a train of discrete pulses, each having an amplitude that is the same as the original U signal at the corresponding time. In effect, these pulses are discrete-time analog samples of the U signal. The pulses are then low-pass filtered so that the original analog continuous-time U signal is recovered. For V, a 90-degree-shifted subcarrier briefly gates the chroma signal every 280 nanoseconds, and the rest of the process is identical to that used for the U signal.

Gating at any time other than those mentioned above will yield an additive mixture of any two of U, V, -U, or -V. One of these off-axis (that is, off the U and V axes) gating methods is called I/Q demodulation. Another, much more popular, off-axis scheme was the X/Z demodulation system; further matrixing recovered the original U and V signals. This was the most popular demodulator scheme throughout the 1960s.
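A toy numerical sketch of the gating process described above; the chroma content is held constant here for clarity, and simple averaging stands in for the low-pass filter:

```python
import math

# Synchronous demodulation by gating (sketch): sampling the chroma at the
# instants where the reconstituted subcarrier's sine phase peaks yields U;
# sampling a quarter period (about 70 ns for NTSC) later yields V.

F_SC = 3.579545e6            # NTSC subcarrier, Hz
T_SC = 1 / F_SC              # one subcarrier period, about 279 ns
U_TRUE, V_TRUE = 0.3, -0.2   # chroma content, held constant in this sketch

def chroma(t: float) -> float:
    w = 2 * math.pi * F_SC
    return U_TRUE * math.sin(w * t) + V_TRUE * math.cos(w * t)

# Gate at sine peaks (t = T/4 + n*T) to sample U, at cosine peaks (t = n*T) for V.
u_samples = [chroma(T_SC / 4 + n * T_SC) for n in range(8)]
v_samples = [chroma(n * T_SC) for n in range(8)]

# Averaging the pulse train stands in for the low-pass filter.
print(sum(u_samples) / len(u_samples))  # ~0.3  -> recovered U
print(sum(v_samples) / len(v_samples))  # ~-0.2 -> recovered V
```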
The above process uses the subcarrier. But as previously mentioned, it was deleted before transmission, and only the chroma is transmitted. Therefore, the receiver must reconstitute the subcarrier. For this purpose, a short burst of the subcarrier, known as the colorburst, is transmitted during the back porch (retrace blanking period) of each scan line. A subcarrier oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, resulting in the oscillator producing the reconstituted subcarrier.

NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal, caused sometimes by multipath, but mostly by poor implementation at the studio end. With the advent of solid-state receivers, cable TV, and digital studio equipment for conversion to an over-the-air analog signal, these NTSC problems have been largely fixed, leaving operator error at the studio end as the sole color rendition weakness of the NTSC system. In any case, the PAL D (delay) system mostly corrects these kinds of errors by reversing the phase of the signal on each successive line, and averaging the results over pairs of lines. This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration delay line. Phase-shift errors between successive lines are therefore canceled out, and the wanted signal amplitude is increased when the two in-phase (coincident) signals are recombined.

NTSC is more spectrum efficient than PAL, giving more picture detail for a given bandwidth. This is because sophisticated comb filters in receivers are more effective with NTSC's 4-field color sequence compared to PAL's 8-field sequence. However, in the end, the larger channel width of most PAL systems in Europe still gives PAL systems the edge in transmitting more picture detail.

SECAM system

In the SECAM television system, U and V are transmitted on alternate lines, using simple frequency modulation of two different color subcarriers.
In some analog color CRT displays, starting in 1956, the brightness control signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grid connections. This simple CRT matrix mixing technique was replaced in later solid-state designs of signal processing with the original matrixing method used in the 1954 and 1955 color TV receivers.

Synchronization

Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal, so that the image can be reconstructed on the receiver screen. A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync.

Horizontal synchronization

The horizontal sync pulse separates the scan lines. The horizontal sync signal is a single short pulse that indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse. The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 μs pulse at 0 V. In the 625-line PAL system the pulse is 4.7 μs at 0 V. This is lower than the amplitude of any video signal (blacker than black), so it can be detected by the level-sensitive sync separator circuit of the receiver.

Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line.

Vertical synchronization

Vertical synchronization separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of horizontal sync pulses through almost the entire length of the scan line.
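A toy sketch of the level-based sync separation described above, using the 0 V sync tip and 0.3 V black level quoted in the text; the threshold value and the pulse-width cutoff used to tell horizontal from vertical pulses are illustrative assumptions:

```python
# Toy sync separator: anything below a threshold between the sync tip (0 V)
# and black level (0.3 V) is treated as sync; pulse width then distinguishes
# normal horizontal sync (about 4.7 us) from the much longer vertical pulses.

SYNC_THRESHOLD_V = 0.15   # illustrative: midway between 0 V and 0.3 V

def classify_pulses(samples, dt_us):
    """Yield (start_us, width_us, kind) for each sync pulse in the signal."""
    pulses, start = [], None
    for i, v in enumerate(samples):
        if v < SYNC_THRESHOLD_V and start is None:
            start = i
        elif v >= SYNC_THRESHOLD_V and start is not None:
            width = (i - start) * dt_us
            kind = "horizontal" if width < 10.0 else "vertical"
            pulses.append((start * dt_us, width, kind))
            start = None
    return pulses

# 0.1 us sampling: a 4.7 us sync pulse, black video, then a 27 us vertical pulse
signal = [0.0] * 47 + [0.3] * 100 + [0.0] * 270 + [0.3] * 50
print(classify_pulses(signal, dt_us=0.1))
# [(0.0, 4.7, 'horizontal'), (14.7, 27.0, 'vertical')]
```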
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through). The format of such a signal in 525-line NTSC is:

pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines)

long-sync pulses (5 pulses)

post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines)

Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 μs at 0 V, followed by 30 μs at 0.3 V. Each long sync pulse consists of an equalizing pulse with timings inverted: 30 μs at 0 V, followed by 2 μs at 0.3 V.

In video production and computer graphics, changes to the image are often performed during the vertical blanking interval to avoid visible discontinuity of the image. If the image in the framebuffer is updated with a new image while the display is being refreshed, the display shows a mishmash of both frames, producing page tearing partway down the image.

Horizontal and vertical hold

The sweep (or deflection) oscillators were designed to run without a signal from the television station (or VCR, computer, or other composite video source). This allows the television receiver to display a raster and to allow an image to be presented during antenna placement. With sufficient signal strength, the receiver's sync separator circuit would split timebase pulses from the incoming video and use them to reset the horizontal and vertical oscillators at the appropriate time to synchronize with the signal from the station.
Analog television
Wikipedia
426
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
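The 525-line NTSC vertical interval format quoted in the preceding passage can be expressed programmatically. The sketch below builds the pulse train as (voltage, duration) segments from the pulse counts and timings given in the text; it is a minimal model for illustration, not broadcast-grade timing.

```python
# Sketch: build the 525-line NTSC vertical sync pulse train described above
# as a list of (voltage_V, duration_us) segments. Field parity selects the
# pulse counts quoted in the text.

def vertical_interval(odd_field: bool):
    pre  = 6 if odd_field else 5            # pre-equalizing pulses
    post = 5 if odd_field else 4            # post-equalizing pulses
    equalizing = [(0.0, 2.0), (0.3, 30.0)]  # 2 us at 0 V, then 30 us at 0.3 V
    long_sync  = [(0.0, 30.0), (0.3, 2.0)]  # same timings, inverted
    segments = []
    segments += equalizing * pre
    segments += long_sync * 5               # 5 long-sync pulses
    segments += equalizing * post
    return segments

train = vertical_interval(odd_field=True)
print(len(train), "segments,", sum(d for _, d in train), "us total")
```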
The free-running oscillation of the horizontal circuit is especially critical, as the horizontal deflection circuits typically power the flyback transformer (which provides acceleration potential for the CRT) as well as the filaments for the high voltage rectifier tube and sometimes the filament(s) of the CRT itself. Without the operation of the horizontal oscillator and output stages in these television receivers, there would be no illumination of the CRT's face. The lack of precision timing components in early equipment meant that the timebase circuits occasionally needed manual adjustment. If their free-run frequencies were too far from the actual line and field rates, the circuits would not be able to follow the incoming sync signals. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. Older analog television receivers often provide manual controls to adjust horizontal and vertical timing. The adjustment takes the form of horizontal hold and vertical hold controls, usually on the front panel along with other common controls. These adjust the free-run frequencies of the corresponding timebase oscillators. A slowly rolling vertical picture demonstrates that the vertical oscillator is nearly synchronized with the television station but is not locking to it, often due to a weak signal or a failure of the sync separator stage to reset the oscillator. Horizontal sync errors cause the image to be torn diagonally and repeated across the screen as if it were wrapped around a screw or a barber's pole; the greater the error, the more copies of the image will be seen at once wrapped around the barber pole. By the early 1980s the efficacy of the synchronization circuits, plus the inherent stability of the sets' oscillators, had been improved to the point where these controls were no longer necessary. Integrated circuits that eliminated the horizontal hold control began to appear as early as 1969. The final generations of analog television receivers used IC-based designs where the receiver's timebases were derived from accurate crystal oscillators. With these sets, adjustment of the free-running frequency of either sweep oscillator was unnecessary and unavailable. Horizontal and vertical hold controls were rarely used in CRT-based computer monitors, as the quality and consistency of components were quite high by the advent of the computer age, but might be found on some composite monitors used with 1970s–80s home or personal computers.
Analog television
Wikipedia
510
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
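The rolling picture described in the preceding passage can be quantified: an unlocked vertical oscillator free-running near, but not exactly at, the field rate makes the picture roll at the difference (beat) frequency. The rates below are assumed example values, not figures from the text.

```python
# Sketch: how fast a picture rolls when the vertical oscillator free-runs
# slightly off the transmitted field rate and fails to lock.

field_rate_hz = 60.0   # transmitted field rate (NTSC)
free_run_hz   = 59.5   # assumed mis-adjusted vertical hold setting

beat_hz = abs(field_rate_hz - free_run_hz)
print(f"picture rolls {beat_hz} times per second "
      f"(one full roll every {1 / beat_hz:.1f} s)")
```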
Other technical information Components of a television system The tuner is the stage which, with the aid of an antenna, isolates the television signals received over the air. There are two types of tuners in analog television, VHF and UHF tuners. The VHF tuner selects the VHF television frequency. This consists of a 4 MHz video bandwidth and about 100 kHz audio bandwidth. It then amplifies the signal and converts it to a 45.75 MHz intermediate-frequency (IF) amplitude-modulated video carrier and a 41.25 MHz IF frequency-modulated audio carrier. The IF amplifiers are centered at 44 MHz for optimal frequency transference of the audio and video carriers. Like radio, television has automatic gain control (AGC). This controls the gain of the IF amplifier stages and the tuner. The video amp and output amplifier are implemented using a pentode or a power transistor. The filter and demodulator separate the 45.75 MHz video from the 41.25 MHz audio; a diode then detects the video signal. After the video detector, the video is amplified and sent to the sync separator and then to the picture tube. The audio signal goes to a 4.5 MHz amplifier. This amplifier prepares the signal for the 4.5 MHz detector. It then goes through a 4.5 MHz IF transformer to the detector. In television there are two common ways of detecting FM signals. One is the ratio detector, which is simple but very hard to align. The other is the quadrature detector, a relatively simple design invented in 1954. The first tube designed for this purpose was the 6BN6 type. It is easy to align and simple in circuitry, and it was such a good design that it is still used today in integrated-circuit form. After the detector, the signal goes to the audio amplifier. Image synchronization is achieved by transmitting negative-going pulses. The horizontal sync signal is a single short pulse that indicates the start of every line. Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line.
Analog television
Wikipedia
499
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
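The IF arithmetic in the preceding passage is easy to make explicit: the 4.5 MHz sound channel is the difference between the 45.75 MHz video and 41.25 MHz audio IF carriers, and the tuner's local oscillator runs above the received channel by the video IF. The channel carrier frequency below is an assumed example for illustration.

```python
# Worked example of the IF relationships quoted above.

VIDEO_IF_MHZ = 45.75
AUDIO_IF_MHZ = 41.25

intercarrier = VIDEO_IF_MHZ - AUDIO_IF_MHZ
print(intercarrier)  # 4.5 -> matches the 4.5 MHz sound channel

# Assumed example: NTSC channel 3 video carrier at 61.25 MHz.
rf_video_mhz = 61.25
local_osc_mhz = rf_video_mhz + VIDEO_IF_MHZ  # high-side injection: LO = RF + IF
print(local_osc_mhz)  # 107.0 MHz
```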
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The vertical sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace. A sync separator circuit detects the sync voltage levels and extracts and conditions signals that the horizontal and vertical oscillators can use to keep in sync with the video. It also forms the AGC voltage. The horizontal and vertical oscillators form the raster on the CRT. They are driven by the sync separator. There are many ways to create these oscillators. The earliest is the thyratron oscillator. Although it is known to drift, it makes a sawtooth wave so good that no linearity control is needed. This oscillator was designed for electrostatic-deflection CRTs but also found some use in electromagnetically deflected CRTs. The next oscillator developed was the blocking oscillator, which uses a transformer to create a sawtooth wave. This was only used for a brief period and was never very popular. Finally, the multivibrator was probably the most successful. It needed more adjustment than the other oscillators, but it is very simple and effective. This oscillator was so popular that it was used from the early 1950s to the present day. Two oscillator amplifiers are needed. The vertical amplifier directly drives the yoke. Since it operates at 50 or 60 Hz and drives an electromagnet, it is similar to an audio amplifier. Because of the rapid deflection required, the horizontal oscillator requires a high-power flyback transformer driven by a high-powered tube or transistor. Additional windings on this flyback transformer typically power other parts of the system. Loss of horizontal synchronization usually results in a scrambled and unwatchable picture; loss of vertical synchronization produces an image rolling up or down the screen. Timebase circuits
Analog television
Wikipedia
452
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
In an analog receiver with a CRT display sync pulses are fed to horizontal and vertical timebase circuits (commonly called sweep circuits in the United States), each consisting of an oscillator and an amplifier. These generate modified sawtooth and parabola current waveforms to scan the electron beam. Engineered waveform shapes are necessary to make up for the distance variations from the electron beam source and the screen surface. The oscillators are designed to free-run at frequencies very close to the field and line rates, but the sync pulses cause them to reset at the beginning of each scan line or field, resulting in the necessary synchronization of the beam sweep with the originating signal. The output waveforms from the timebase amplifiers are fed to the horizontal and vertical deflection coils wrapped around the CRT tube. These coils produce magnetic fields proportional to the changing current, and these deflect the electron beam across the screen. In the 1950s, the power for these circuits was derived directly from the mains supply. A simple circuit consisted of a series voltage dropper resistance and a rectifier. This avoided the cost of a large high-voltage mains supply (50 or 60 Hz) transformer. It was inefficient and produced a lot of heat. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the UK, synchronous (with the scan line rate) power generation was introduced into solid state receiver designs. In the UK use of the simple (50 Hz) types of power, circuits were discontinued as thyristor based switching circuits were introduced. The reason for design changes arose from the electricity supply contamination problems arising from EMI, and supply loading issues due to energy being taken from only the positive half cycle of the mains supply waveform. CRT flyback power supply Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10–30 kV) for correct operation.
Analog television
Wikipedia
442
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
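The lock-and-reset behavior described in the preceding passage can be sketched as a phase accumulator: the timebase ramp free-runs slightly off the line rate, and each sync pulse snaps it back to the start of the sweep. All periods below are assumed illustrative values.

```python
# Sketch of a sync-reset sawtooth timebase. With sync lock, the ramp is reset
# at a consistent point; without lock, the reset point drifts line by line,
# which is what produces a torn or rolling picture.

def sweep_positions(n_lines, free_run_period, sync_period, locked=True):
    """Fraction of the ramp completed at the moment each line ends."""
    phase = 0.0
    out = []
    for _ in range(n_lines):
        phase += sync_period / free_run_period
        out.append(round(phase % 1.0, 4))
        if locked:
            phase = 0.0  # sync pulse resets the oscillator
    return out

# Assumed: free-run period 64.2 us vs. transmitted line period 64.0 us.
print(sweep_positions(3, 64.2e-6, 64.0e-6, locked=True))   # stable ~0.997 each line
print(sweep_positions(3, 64.2e-6, 64.0e-6, locked=False))  # drifts -> torn picture
```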
This voltage is not directly produced by the main power supply circuitry; instead, the receiver makes use of the circuitry used for horizontal scanning. Direct current (DC) is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line, the magnetic field which has built up in both the transformer and the scan coils is a source of latent electromagnetic energy. This stored energy from the collapsing magnetic field can be captured. The brief reverse-flow current (lasting about 10% of the line scan time) from both the line output transformer and the horizontal scan coils is discharged back into the primary winding of the flyback transformer through a rectifier which blocks this counter-electromotive force. A small-value capacitor is connected across the scan-switching device. This tunes the circuit inductances to resonate at a much higher frequency. This lengthens the flyback time from the extremely rapid decay rate that would result if they were electrically isolated during this short period. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft–Walton voltage multiplier. This produces the required high-voltage supply. A flyback converter is a power supply circuit operating on similar principles. A typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead, known as a diode-split line output transformer or an integrated high voltage transformer (IHVT), so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well-insulated high-voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used. Transition to digital In many countries, over-the-air broadcast television of analog audio and analog video signals has been discontinued to allow the re-use of the television broadcast radio spectrum for other services.
Analog television
Wikipedia
427
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
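Two of the quantitative relationships in the preceding passage can be worked through numerically: the retrace-tuning resonance f = 1/(2π√(LC)), with flyback lasting roughly half a resonant cycle, and the ideal Cockcroft–Walton multiplier output of about 2 × (stages) × (peak pulse voltage). All component values below are assumptions for illustration, not figures from the text.

```python
import math

# Assumed illustrative values -- not from the text.
L = 5e-3   # combined flyback/yoke inductance (H)
C = 1e-9   # retrace tuning capacitance (F)

f_res = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency
retrace = 0.5 / f_res                         # ~half a resonant cycle
print(f"resonant frequency: {f_res / 1e3:.0f} kHz, "
      f"retrace time: {retrace * 1e6:.1f} us")

# Ideal Cockcroft-Walton multiplier: each stage adds ~2x the peak input.
v_pulse_peak = 6e3   # assumed flyback secondary pulse peak (V)
stages = 2
v_out = 2 * stages * v_pulse_peak
print(f"ideal multiplier output: {v_out / 1e3:.0f} kV")  # within the 10-30 kV range
```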
The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands. The Digital television transition in the United States for high-powered transmission was completed on 12 June 2009, the date that the Federal Communications Commission (FCC) set. Almost two million households could no longer watch television because they had not prepared for the transition. The switchover had been delayed by the DTV Delay Act. While the majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations (which number about 1800), there are three other categories of television stations in the U.S.: low-power broadcasting stations, class A stations, and television translator stations. These were given later deadlines. In Japan, the switch to digital began in northeastern Ishikawa Prefecture on 24 July 2010 and ended in 43 of the country's 47 prefectures (including the rest of Ishikawa) on 24 July 2011, but in Fukushima, Iwate, and Miyagi prefectures, the conversion was delayed to 31 March 2012, due to complications from the 2011 Tōhoku earthquake and tsunami and its related nuclear accidents. In Canada, most of the larger cities turned off analog broadcasts on 31 August 2011. China had scheduled to end analog broadcasting between 2015 and 2021. Brazil switched to digital television on 2 December 2007 in São Paulo and planned to end analog broadcasting nationwide by 30 June 2016. However, the Ministry of Communications announced in 2012 that the deadline would be delayed. As of 2024, Brazil is in the process of implementing its next-generation digital television system, known as TV 3.0. In July 2024, the ATSC 3.0 standard was officially selected for the country's next-generation digital television system. The transition to TV 3.0 is expected to begin in 2025, with initial deployments planned for key cities such as São Paulo, Rio de Janeiro, and Brasília.
Analog television
Wikipedia
410
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
In Malaysia, the Malaysian Communications and Multimedia Commission advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial television broadcast channel. Large portions of Malaysia are covered by television broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo and Batam). Starting from 1 November 2019, all regions in Malaysia were no longer using the analog system after the states of Sabah and Sarawak finally turned it off on 31 October 2019. In Singapore, digital television under DVB-T2 began on 16 December 2013. The switchover was delayed many times until analog TV was switched off at midnight on 2 January 2019. In the Philippines, the National Telecommunications Commission required all broadcasting companies to end analog broadcasting on 31 December 2015 at 11:59 p.m. Due to the delayed release of the implementing rules and regulations for digital television broadcasting, the target date was moved to 2020. Full digital broadcast was expected in 2021 and all of the analog TV services were to be shut down by the end of 2023. However, in February 2023, the NTC postponed the ASO/DTV transition to 2025 due to many provincial television stations not being ready to start their digital TV transmissions. In the Russian Federation, the Russian Television and Radio Broadcasting Network (RTRS) disabled analog broadcasting of federal channels in five stages, shutting down broadcasting in multiple federal subjects at each stage. The first region to have analog broadcasting disabled was Tver Oblast on 3 December 2018, and the switchover was completed on 14 October 2019. During the transition, DVB-T2 receivers and monetary compensation for the purchase of terrestrial or satellite digital TV reception equipment were provided to disabled people, World War II veterans, certain categories of retirees, and households with income per member below the living wage.
Analog television
Wikipedia
401
2393
https://en.wikipedia.org/wiki/Analog%20television
Technology
Broadcasting
null
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation. The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, and welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by reactive or non-reactive, a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin. Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present. History The earliest evidence of human adhesive use was found in central Italy, where three stone implements bearing traces of birch bark tar were discovered. The tools were dated to about 200,000 years before present, in the Middle Paleolithic. It is the earliest example of tar-hafted stone tools. An experimental archeology study published in 2019 demonstrated how birch bark tar can be produced in an easier, more discoverable process. It involves directly burning birch bark under an overhanging rock surface in an open-air environment and collecting the tar that builds up on the rock.
Adhesive
Wikipedia
456
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
Although sticky enough, plant-based, single-component adhesives can be brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were found covered with an adhesive composed of plant gum and red ochre (natural iron oxide); adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools. A study of material from Le Moustier indicates that Middle Paleolithic people, possibly Neanderthals, used glue made from a mixture of ochre and bitumen to make hand grips for cutting and scraping stone tools. More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC. In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis.
Adhesive
Wikipedia
395
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum. From AD 1 to 500, the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaf, and glues of the period incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships. In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue.
Adhesive
Wikipedia
378
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in the Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. Decades into the following century, casein glues were manufactured in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue. The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first US patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867. Natural rubber was first used as a material for adhesives in 1830, which marked the starting point of the modern adhesive. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding. Natural rubber-based sticky adhesives were first used on a backing by Henry Day (US Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth-backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born. Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA). A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins.
Adhesive
Wikipedia
456
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever-evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used. Types Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized by whether the raw stock is of natural or synthetic origin, or by their starting physical phase. By reactiveness Non-reactive Drying There are two types of adhesives that harden by drying: solvent-based adhesives and polymer dispersion adhesives, also known as emulsion adhesives. Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the drying adhesive family. As the solvent evaporates, the adhesive hardens. Depending on the chemical composition of the adhesive, they will adhere to different materials to greater or lesser degrees. Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones. Pressure-sensitive Pressure-sensitive adhesives (PSA) form a bond by the application of light pressure to bind the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength.
Adhesive
Wikipedia
476
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days. Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and trans-dermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes. Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low-viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high-viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. The major raw materials for PSAs are acrylate-based polymers. Contact Contact adhesives form high shear-resistance bonds with a rapid cure time. They are often applied in thin layers for use with laminates, such as bonding Formica to countertops, and in footwear, as in attaching outsoles to uppers. Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization.
Adhesive
Wikipedia
470
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry completely before the surfaces are held together. Once the surfaces are pushed together, the bond forms very quickly. Clamps are typically not needed due to the rapid bond formation. Hot Hot adhesives, also known as hot melt adhesives, are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun is one method of applying hot adhesives. The glue gun melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies. Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. However, water-based adhesives are still of strong interest as they typically do not contain volatile solvents. Reactive Anaerobic Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in a close-fitting space, as when used as a thread-locking fluid. Multi-part Multi-component adhesives harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylics, urethanes, and epoxies. There are several commercial combinations of multi-component adhesives in use in industry. Some of these combinations are: Polyester resin & polyurethane resin Polyols & polyurethane resin Acrylic polymers & polyurethane resins The individual components of a multi-component adhesive are not adhesive by nature. The individual components react with each other after being mixed and show full adhesion only on curing. The multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or the polyurethane resin. The solvent is dried during the curing process.
Adhesive
Wikipedia
501
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
Pre-mixed and frozen adhesives Pre-mixed and frozen adhesives (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. As it is necessary for PMFs to remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and are required to be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure to curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense. One-part One-part adhesives harden via a chemical reaction with an external energy source, such as radiation, heat, or moisture. Ultraviolet (UV) light curing adhesives, also known as light curing materials (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but can also be used to seal and coat products. They are generally acrylic-based. Heat curing adhesives consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides. Moisture curing adhesives cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes. By origin Natural Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives.
Adhesive
Wikipedia
452
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
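The classification developed over the preceding Types passages can be mirrored as a small data structure. The sketch below only restates categories and examples named in the text; it is an illustrative summary, not an exhaustive or authoritative taxonomy.

```python
# The adhesive taxonomy described in the preceding passages, as a nested dict.
ADHESIVE_TAXONOMY = {
    "non-reactive": {
        "drying": ["solvent-based (white glue, rubber cement)",
                   "polymer dispersion / emulsion (PVAc)"],
        "pressure-sensitive": ["permanent", "removable"],
        "contact": ["natural rubber", "polychloroprene (Neoprene)"],
        "hot melt": ["ethylene-vinyl acetate"],
    },
    "reactive": {
        "anaerobic": ["thread-locking fluids"],
        "multi-part": ["polyester resin & polyurethane resin",
                       "polyols & polyurethane resin",
                       "acrylic polymers & polyurethane resins"],
        "pre-mixed and frozen (PMF)": [],
        "one-part": ["UV/light-curing", "heat-curing", "moisture-curing"],
    },
    "by origin": {
        "natural": ["starch (dextrin)", "casein", "animal hide glue"],
        "synthetic": ["epoxy", "polyurethane", "cyanoacrylate", "acrylic"],
    },
}

print(sorted(ADHESIVE_TAXONOMY["reactive"]))
```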
One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins. Synthetic Synthetic adhesives are made out of organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives are: epoxy, polyurethane, cyanoacrylate and acrylic polymers. The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s. Application Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles. Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., caulk gun). All of these can be used manually or automated as part of a machine. Mechanisms of adhesion For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered. Adhesion, the attachment between adhesive and substrate, may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs.
Adhesive
Wikipedia
495
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
In some cases, an actual chemical bond occurs between adhesive and substrate. Thiolated polymers, for example, form chemical bonds with endogenous proteins such as mucus glycoproteins, integrins or keratins via disulfide bridges. Because of their comparatively high adhesive properties, these polymers find numerous biomedical applications. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening. Methods to improve adhesion The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming. Failure There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. The major fracture types are the following: Cohesive fracture Cohesive fracture is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface". Adhesive fracture Adhesive fracture (sometimes referred to as interfacial fracture) is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive goes along with smaller fracture toughness.
Adhesive
Wikipedia
473
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
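The surface-energy rule stated in the preceding passage — an adhesive wets a substrate whose surface energy exceeds the adhesive's own — reduces to a one-line comparison. Rigorous wetting analysis uses contact angles (Young's equation); the sketch below encodes only the text's rule of thumb, and the example surface energies are assumed values.

```python
# Rule-of-thumb wetting check from the text: the adhesive spreads when the
# substrate's surface energy exceeds the adhesive's.

def wets(substrate_mj_m2: float, adhesive_mj_m2: float) -> bool:
    """True if the adhesive is expected to wet the substrate (rule of thumb)."""
    return substrate_mj_m2 > adhesive_mj_m2

# Assumed illustrative surface energies (mJ/m^2).
print(wets(substrate_mj_m2=44.0, adhesive_mj_m2=36.0))  # higher-energy plastic: True
print(wets(substrate_mj_m2=19.0, adhesive_mj_m2=36.0))  # low-energy polymer: False
# The False case is where the text's surface treatments (plasma activation,
# flame treatment, priming) would be used to raise the substrate's energy.
```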
Other types of fracture Other types of fracture include: The mixed type, which occurs if the crack propagates at some spots in a cohesive and in others in an interfacial manner. Mixed fracture surfaces can be characterised by a certain percentage of adhesive and cohesive areas. The alternating crack path type which occurs if the cracks jump from one interface to the other. This type of fracture appears in the presence of tensile pre-stresses in the adhesive layer. Fracture can also occur in the adherend if the adhesive is tougher than the adherend. In this case, the adhesive remains intact and is still bonded to one substrate and remnants of the other. For example, when one removes a price label, the adhesive usually remains on the label and the surface. This is cohesive failure. If, however, a layer of paper remains stuck to the surface, the adhesive has not failed. Another example is when someone tries to pull apart Oreo cookies and all the filling remains on one side; this is an adhesive failure, rather than a cohesive failure. Design of adhesive joints As a general design rule, the material properties of the object (geometry, loads, etc.) need to be greater than the forces anticipated during its use. The engineering work will consist of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherends are considered. Failure will also very much depend on the opening mode of the joint. Mode I is an opening or tensile mode where the loadings are normal to the crack. Mode II is a sliding or in-plane shear mode where the crack surfaces slide over one another in a direction perpendicular to the leading edge of the crack. This is typically the mode for which the adhesive exhibits the highest resistance to fracture. Mode III is a tearing or antiplane shear mode. As the loads are usually fixed, an acceptable design will result from combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes on the geometry.
Adhesive
Wikipedia
488
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
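The general design rule in the preceding passage — anticipated loads must stay below what the joint can carry — can be illustrated with the simplest possible model, the average shear stress in a single-lap joint. This deliberately ignores stress concentrations and the adhesive layer's own behavior, the same simplifications the text mentions; every number below is an assumption for illustration.

```python
# Minimal lap-joint design check: average shear stress vs. adhesive strength.
# Real design work uses fracture mechanics (stress concentration factors,
# strain energy release rates); this only illustrates the load-vs-capacity rule.

def average_shear_stress_mpa(load_n: float, width_mm: float, overlap_mm: float) -> float:
    area_mm2 = width_mm * overlap_mm   # bonded zone area
    return load_n / area_mm2           # N/mm^2 == MPa

load = 2000.0          # assumed service load (N)
tau = average_shear_stress_mpa(load, width_mm=25.0, overlap_mm=12.5)
shear_strength = 15.0  # assumed adhesive shear strength (MPa)
print(f"avg shear {tau:.1f} MPa vs strength {shear_strength} MPa ->",
      "OK" if tau < shear_strength else "redesign: enlarge the bonded zone")
```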
Increasing the joint resistance is usually obtained by designing its geometry so that: The bonded zone is large It is mainly loaded in mode II Stable crack propagation will follow the appearance of a local failure. Shelf life Some glues and adhesives have a limited shelf life. Shelf life is dependent on multiple factors, the foremost of which is temperature. Adhesives may lose their effectiveness at high temperatures, as well as become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor.
Adhesive
Wikipedia
99
2396
https://en.wikipedia.org/wiki/Adhesive
Technology
Material and chemical
null
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emission spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples.
Analytical chemistry
Wikipedia
454
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. An analytical chemist might, for example, be involved in discovering a chemical present in blood that increases the risk of cancer. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. Classical methods Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis
Analytical chemistry
Wikipedia
499
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. Volumetric analysis Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the number of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically recording the pH after each drop in order to understand different properties of the titrant. Instrumental methods Spectroscopy Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry
Analytical chemistry
Wikipedia
378
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
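The titration procedure described in the preceding passage reduces to a mole balance at the equivalence point. The Python sketch below works one through for a 1:1 acid–base titration; the volumes and concentrations are assumed example values.

```python
# Worked titration example: concentration of an analyte from the volume of
# standard titrant needed to reach the equivalence point.

def analyte_molarity(titrant_m, titrant_ml, analyte_ml, mole_ratio=1.0):
    """mole_ratio = moles of analyte per mole of titrant (1.0 for HCl/NaOH)."""
    moles_titrant = titrant_m * titrant_ml / 1000.0
    moles_analyte = moles_titrant * mole_ratio
    return moles_analyte / (analyte_ml / 1000.0)

# Assumed: 25.0 mL of HCl neutralized by 18.7 mL of 0.100 M NaOH.
print(f"{analyte_molarity(0.100, 18.7, 25.0):.4f} M")  # 0.0748 M
```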
Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, which are then separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Mass spectrometry is also categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Chromatographic assays
Analytical chemistry
Wikipedia
368
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
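One of the mass analyzers named in the preceding passage, time-of-flight, separates ions by a simple energy balance: an ion of charge z accelerated through potential U gains zeU of kinetic energy, so its flight time over a drift length L scales with √(m/z). The instrument parameters below are assumptions for illustration, not the specification of any real spectrometer.

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge (C)
AMU      = 1.66053906660e-27  # atomic mass unit (kg)

def tof_flight_time(mass_amu, charge, accel_volts, drift_m):
    """Ideal linear TOF analyzer: t = L * sqrt(m / (2 * z * e * U))."""
    m = mass_amu * AMU
    return drift_m * math.sqrt(m / (2 * charge * E_CHARGE * accel_volts))

# Assumed instrument: 1.0 m drift tube, 20 kV acceleration, singly charged ions.
for mass in (100, 500, 1000):
    t = tof_flight_time(mass, 1, 20e3, 1.0)
    print(f"m/z {mass}: {t * 1e6:.2f} us")  # heavier ions arrive later
```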
Chromatography can be used to determine the presence of substances in a sample, as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Different components of a mixture can therefore be identified by their respective Rƒ values — the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography. In combination with instrumental methods, chromatography can be used in the quantitative determination of substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. There are different types of chromatography that differ in the media they use to separate the analyte from the rest of the sample. In thin-layer chromatography, the analyte mixture moves up and separates along the coated sheet under the volatile mobile phase. In gas chromatography, a gaseous mobile phase separates the volatile analytes. A common method for chromatography using liquid as a mobile phase is high-performance liquid chromatography. Hybrid techniques Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. This field has recently been progressing rapidly because of developments in the computer and camera industries. Lab-on-a-chip
Analytical chemistry
Wikipedia
506
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
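The Rƒ definition in the preceding passage translates directly into code; the migration distances below are assumed measurements from a hypothetical thin-layer plate.

```python
# Retention factor from the definition in the text:
# Rf = (migration distance of substance) / (migration distance of solvent front).

def retention_factor(substance_cm: float, solvent_front_cm: float) -> float:
    return substance_cm / solvent_front_cm

# Assumed TLC measurements: solvent front at 8.0 cm, two spots at 2.4 and 5.6 cm.
for spot_cm in (2.4, 5.6):
    print(f"spot at {spot_cm} cm: Rf = {retention_factor(spot_cm, 8.0):.2f}")
```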
Devices that integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters. Errors Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment while random error results from uncontrolled or uncontrollable variables in the experiment. The true value and observed value in chemical analysis can be related to each other by the equation E = O − T, where E is the absolute error, T is the true value, and O is the observed value. The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively, as the relative error E_r = E / T = (O − T) / T. The percent error can then be calculated as %E = E_r × 100%. If these values are used in a function, the error of the function may also be needed. Let f be a function of the n variables x_1, …, x_n. The propagation of uncertainty must then be calculated in order to know the error in f: σ_f = √( Σ_i (∂f/∂x_i)² · σ_(x_i)² ). Standards Standard curve A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of an element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of standard addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte, which gives rise to the method of isotope dilution.
Analytical chemistry
Wikipedia
447
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
Standard addition The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise Thermal noise results from the motion of charge carriers (usually electrons) in an electrical circuit generated by their thermal motion. Thermal noise is white noise, meaning that its power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise voltage in a resistor is given by vRMS = √(4·kB·T·R·Δf), where kB is the Boltzmann constant, T is the temperature, R is the resistance, and Δf is the bandwidth over which the noise is measured. Shot noise Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by iRMS = √(2·e·I·Δf), where e is the elementary charge, I is the average current, and Δf is the measurement bandwidth. Shot noise is white noise. Flicker noise Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as ƒ increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example through the use of a lock-in amplifier. Environmental noise Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments.
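A minimal numerical sketch of the two noise formulas above, using standard physical constants; the resistor value, current, and bandwidth are hypothetical examples.

```python
import math

# Sketch: Johnson-Nyquist (thermal) and shot noise magnitudes for an example circuit.
# Component values (1 kOhm resistor, 1 nA current, 1 kHz bandwidth) are illustrative.
K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def thermal_noise_vrms(resistance_ohm, temperature_k, bandwidth_hz):
    """vRMS = sqrt(4 * kB * T * R * Δf)"""
    return math.sqrt(4 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

def shot_noise_irms(avg_current_a, bandwidth_hz):
    """iRMS = sqrt(2 * e * I * Δf)"""
    return math.sqrt(2 * Q_E * avg_current_a * bandwidth_hz)

print(f"thermal: {thermal_noise_vrms(1e3, 298, 1e3):.3e} V")
print(f"shot:    {shot_noise_irms(1e-9, 1e3):.3e} A")
```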
Analytical chemistry
Wikipedia
478
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
Noise reduction Noise reduction can be accomplished either in computer hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods. Applications Analytical chemistry has applications in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in the design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques, where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest in absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size (micro total analysis systems (μTAS) or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics, dealing with lipids and associated fields; peptidomics, dealing with peptides and associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules.
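To make two of the named software methods concrete, the following sketch applies ensemble averaging and boxcar averaging to a synthetic noisy scan; the signal shape, noise level, and scan count are invented for illustration only.

```python
import random

# Sketch of two software noise-reduction methods named above: ensemble averaging
# (point-by-point average of repeated scans) and boxcar averaging (a moving
# average within a single scan). The noisy "signal" is synthetic.
random.seed(0)

def noisy_scan(n=100):
    # A square peak between points 40 and 60, buried in Gaussian noise.
    return [(1.0 if 40 <= i < 60 else 0.0) + random.gauss(0, 0.3) for i in range(n)]

def ensemble_average(scans):
    return [sum(points) / len(points) for points in zip(*scans)]

def boxcar_average(scan, width=5):
    half = width // 2
    out = []
    for i in range(len(scan)):
        window = scan[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

averaged = ensemble_average([noisy_scan() for _ in range(64)])  # S/N improves ~sqrt(64) = 8x
smoothed = boxcar_average(noisy_scan(), width=5)
print(max(averaged), max(smoothed))
```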
Analytical chemistry
Wikipedia
487
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations.
Analytical chemistry
Wikipedia
180
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Physical sciences
Analytical chemistry
null
An analog computer or analogue computer is a type of computation machine (computer) that uses physical phenomena such as electrical, mechanical, or hydraulic quantities behaving according to the mathematical principles in question (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of analog computers are mechanical watches where the continuous and periodic rotation of interlinked gears drives the second, minute and hour needles in the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers Precursors This is a list of examples of early computation devices considered precursors of the modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by British physicist, information scientist, and historian of science Derek J. de Solla Price. It was discovered in 1901, in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to , during the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
Analog computer
Wikipedia
450
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
Analog computer
Wikipedia
395
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. Several systems followed, notably those of Spanish engineer Leonardo Torres Quevedo, who built various analog machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Modern era The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I. Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers were built by the end of the 1950s.
Analog computer
Wikipedia
460
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels are changed. Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program. The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949. Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi.
Analog computer
Wikipedia
346
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US . It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution was limited and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle. Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in Practical Electronics in the January 1968 edition. Another more modern hybrid computer design was published in Everyday Practical Electronics in 2002. An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. Electronic analog computers The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors is striking in terms of mathematics. They can be modeled using equations of the same form. However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty.
Analog computer
Wikipedia
479
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation m·ÿ + d·ẏ + k·y = −m·g, with y as the vertical position of a mass m, d the damping coefficient, k the spring constant and g the gravity of Earth. For analog computing, the equation is programmed as ÿ = −(d/m)·ẏ − (k/m)·y − g. The equivalent analog circuit consists of two integrators for the state variables ẏ (speed) and y (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply, for example the expected magnitudes of the velocity and the position of a spring pendulum. Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or, if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.)
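The two-integrator program described above can be mimicked digitally. The sketch below steps the same state variables in time; the parameter values for m, d, and k are arbitrary illustrative choices, and the digital stepping is only a rough stand-in for the continuous analog circuit.

```python
# Minimal sketch of the two-integrator setup described above, stepped digitally:
# one integrator accumulates acceleration into speed, the other speed into position.
# The parameter values (m, d, k) are arbitrary illustrative choices.
m, d, k, g = 1.0, 0.5, 4.0, 9.81   # mass, damping, spring constant, gravity
y, v = 0.0, 0.0                    # position and speed state variables
dt = 0.001

for step in range(int(20.0 / dt)):       # simulate 20 seconds
    a = -(d / m) * v - (k / m) * y - g   # programmed form of the equation
    v += a * dt                          # first integrator: speed
    y += v * dt                          # second integrator: position

print(round(y, 3), "vs static deflection", round(-m * g / k, 3))  # settles near -mg/k
```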
Analog computer
Wikipedia
494
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non dissipative reversible circuits. In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center, such as: In the US: NASA (Huntsville, Houston), Martin Marietta (Orlando), Lockheed, Westinghouse, Hughes Aircraft In Europe: CEA (French Atomic Energy Commission), MATRA, Aérospatiale, BAC (British Aircraft Corporation). Construction An analog computing machine consists of several main components:
Analog computer
Wikipedia
346
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Signal sources: These are blocks that generate analog signals, such as voltage or current, to represent input data and operations. Amplifiers: Amplifiers are used to boost analog signals and maintain their amplitudes throughout the system. They amplify weak input signals and compensate for signal losses during transmission. Filters: Filters are used to modify the spectrum of signals by suppressing or amplifying specific frequencies. They allow the isolation or suppression of certain signal components depending on the computational requirements. Modulators and demodulators: Modulators convert information into analog signals that can be transmitted through a communication channel, and demodulators perform the reverse transformation, recovering the original data from modulated signals. Adders, multipliers, log converters, and other calculation stages: These perform arithmetic operations on analog signals. They can be used for mathematical operations such as addition, multiplication, exponentiation, integration, and differentiation. Storage and memory: Analog computing machines can use various forms of information storage, such as capacitors or inductors, to store intermediate results and memory. Feedback and control: Feedback and control blocks are used to maintain the stability and accuracy of the analog computing machine. They may include regulation systems and error correction. Patch panel: Analog computing machines also feature a patch panel or patch field. A patch panel is a physical structure on which connectors or contacts are placed to interconnect various components and modules within the system. On the patch panel, various connections and routes can be set and switched to configure the machine and determine signal flows. This allows users to flexibly configure and reconfigure the analog computing system to perform specific tasks. Patch panels are used to control data flows, connect and disconnect connections between various blocks of the system, including signal sources, amplifiers, filters, and other components. They provide convenience and flexibility in configuring and experimenting with analog computations. Patch panels can be presented as a physical panel with connectors or, in more modern systems, as a software interface that allows virtual management of signal connections and routes. Hardware interfaces: Interfaces provide means of interaction with the machine, for example, for parameter control or data transmission. Output device: this device is designed to present the results of analog computations in a convenient form for the user or to transmit the obtained data to other systems.
Analog computer
Wikipedia
480
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Output devices in analog machines can vary depending on the specific goals of the system. For example, they could be graphical indicators, oscilloscopes, graphic recording devices, a TV connection module, voltmeters, etc. These devices allow for the visualization of analog signals and the representation of the results of measurements or mathematical operations. Power source and stabilizers. These are just general blocks that can be found in a typical analog computing machine. The actual configuration and components may vary depending on the specific implementation and the intended use of the machine. Analog–digital hybrids Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog–digital hybrid is to combine the two processes for the best efficiency. An example of such an elementary hybrid device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer that can be adjusted digitally. This kind of hybrid technique is mainly used for fast dedicated real-time computation when computing time is very critical, such as signal processing for radars and, generally, for controllers in embedded systems. In the early 1970s, analog computer manufacturers tried to tie together their analog computers with digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate in the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronic Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step where at the beginning everything is simulated, and progressively real components replace their simulated parts. Only one company was known to offer general commercial computing services on its hybrid computers, CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft.
Analog computer
Wikipedia
443
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Implementations Mechanical analog computers While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities. Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System. Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms. For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them. Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery.
Analog computer
Wikipedia
508
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change. Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side.
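As a rough sketch of the calculation the mechanical "component solver" performs, the following converts an angle and magnitude into rectangular (sine and cosine) components; the range and bearing values are illustrative only.

```python
import math

# Sketch of what the mechanical "component solver" computes: given a vector's
# angle and magnitude, produce its rectangular (cosine and sine) components.
# The sample magnitude and angle below are illustrative values only.
def component_solver(magnitude, angle_deg):
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

x, y = component_solver(magnitude=5000.0, angle_deg=30.0)
print(round(x, 1), round(y, 1))   # 4330.1, 2500.0
```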
Analog computer
Wikipedia
487
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of 1 the distance from the vertex, and 2 the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle.
Analog computer
Wikipedia
460
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Although they did not accomplish any computation, electromechanical position servos (aka. torque amplifiers) were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators.
Analog computer
Wikipedia
512
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make. The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Components Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos.
Analog computer
Wikipedia
490
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Key electrical/electronic components might include precision resistors and capacitors, operational amplifiers, multipliers, potentiometers, and fixed-function generators. The core mathematical operations used in an electric analog computer are addition, integration with respect to time, inversion, multiplication, exponentiation, logarithm, and division. In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an operational amplifier. Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. Limitations In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, the ranges of these aspects of input and output signals are always figures of merit. Decline From the 1950s to the 1970s, digital computers, based first on vacuum tubes, then on transistors, integrated circuits and finally microprocessors, became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still used as flight computers in flight training. Resurgence
Analog computer
Wikipedia
385
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computers design in standard CMOS process. Two VLSI chips have been developed, an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting at energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about 1–2 orders magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits. Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers have shown that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers. In 2021, the German company anabrid GmbH began to produce THE ANALOG THING (abbreviated THAT), a small low-cost analog computer mainly for educational and scientific use. The company is also constructing analog mainframes and hybrid computers. Practical examples These are examples of analog computers that have been constructed or practically used:
Analog computer
Wikipedia
391
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
Analog Paradigm, a modular analog computer produced by anabrid; Boeing B-29 Superfortress Central Fire Control System; Deltar; E6B flight computer; Ishiguro Storm Surge Computer; Kerrison Predictor; Leonardo Torres y Quevedo's analogue calculating machines based on the "fusee sans fin"; Librascope aircraft weight and balance computer; mechanical computer; mechanical watch; mechanical integrators, for example the planimeter; Mischgerät (V-2 guidance computer); MONIAC, economic modelling; nomogram; Norden bombsight; rangekeeper and related fire control computers; Scanimate; SR-71 inlet control system (fast adjustment of inlet geometry to prevent supersonic shock waves from causing engine flame-out at high Mach numbers); THE ANALOG THING, a small analog computer by anabrid; Torpedo Data Computer; torquetum; water integrator. Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in the US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry.
Analog computer
Wikipedia
286
2428
https://en.wikipedia.org/wiki/Analog%20computer
Technology
Computer hardware
null
A minute of arc, arcminute (arcmin), arc minute, or minute arc, denoted by the symbol ′, is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn, or complete rotation, one arcminute is 1/21,600 of a turn. The nautical mile (nmi) was originally defined as the arc length of a minute of latitude on a spherical Earth, so the actual Earth's circumference is very near 21,600 nmi. A minute of arc is π/10,800 of a radian. A second of arc, arcsecond (arcsec), or arc second, denoted by the symbol ″, is 1/60 of an arcminute, 1/3,600 of a degree, 1/1,296,000 of a turn, and π/648,000 (about 1/206,264.8) of a radian. These units originated in Babylonian astronomy as sexagesimal (base 60) subdivisions of the degree; they are used in fields that involve very small angles, such as astronomy, optometry, ophthalmology, optics, navigation, land surveying, and marksmanship. To express even smaller angles, standard SI prefixes can be employed; the milliarcsecond (mas) and microarcsecond (μas), for instance, are commonly used in astronomy. For a three-dimensional area such as on a sphere, square arcminutes or seconds may be used. Symbols and abbreviations The prime symbol (′) designates the arcminute, though a single quote (U+0027) is commonly used where only ASCII characters are permitted. One arcminute is thus written as 1′. It is also abbreviated as arcmin or amin. Similarly, the double prime symbol (″) (U+2033) designates the arcsecond, though a double quote (U+0022) is commonly used where only ASCII characters are permitted. One arcsecond is thus written as 1″. It is also abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference usually being for degrees, minutes, and decimals of a minute, for example written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS and aviation GPS receivers, which normally display latitude and longitude in the latter format by default. Common examples The average apparent diameter of the full Moon is about 31 arcminutes, or 0.52°.
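A short numerical sketch of the unit relationships above; nothing beyond the stated conversion factors is assumed.

```python
import math

# Sketch of the unit relationships stated above: arcminutes and arcseconds
# expressed as fractions of a degree, a turn, and a radian.
ARCMIN_DEG = 1 / 60      # one arcminute in degrees
ARCSEC_DEG = 1 / 3600    # one arcsecond in degrees

def arcmin_to_rad(m):
    return math.radians(m * ARCMIN_DEG)

def arcsec_to_rad(s):
    return math.radians(s * ARCSEC_DEG)

print(1 / arcsec_to_rad(1))     # ≈ 206264.8 arcseconds per radian
print(360 * 60)                 # 21600 arcminutes per turn
print(math.degrees(1) * 60)     # ≈ 3437.7 arcminutes per radian (i.e. 10800/π)
```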
Minute and second of arc
Wikipedia
492
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
One arcminute is the approximate distance two contours can be separated by, and still be distinguished by, a person with 20/20 vision. One arcsecond is the approximate angle subtended by a U.S. dime coin (18 mm) at a distance of . An arcsecond is also the angle subtended by an object of diameter at a distance of one astronomical unit, an object of diameter at one light-year, an object of diameter one astronomical unit () at a distance of one parsec, per the definition of the latter. One milliarcsecond is about the size of a half dollar, seen from a distance equal to that between the Washington Monument and the Eiffel Tower. One microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth. One nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth. Also notable examples of size in arcseconds are: Hubble Space Telescope has calculational resolution of 0.05 arcseconds and actual resolution of almost 0.1 arcseconds, which is close to the diffraction limit. At crescent phase, Venus measures between 60.2 and 66 seconds of arc. History The concepts of degrees, minutes, and seconds—as they relate to the measure of both angles and time—derive from Babylonian astronomy and time-keeping. Influenced by the Sumerians, the ancient Babylonians divided the Sun's perceived motion across the sky over the course of one full day into 360 degrees. Each degree was subdivided into 60 minutes and each minute into 60 seconds. Thus, one Babylonian degree was equal to four minutes in modern terminology, one Babylonian minute to four modern seconds, and one Babylonian second to (approximately 0.067) of a modern second. Uses Astronomy Since antiquity, the arcminute and arcsecond have been used in astronomy: in the ecliptic coordinate system as latitude (β) and longitude (λ); in the horizon system as altitude (Alt) and azimuth (Az); and in the equatorial coordinate system as declination (δ). All are measured in degrees, arcminutes, and arcseconds. The principal exception is right ascension (RA) in equatorial coordinates, which is measured in time units of hours, minutes, and seconds.
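The examples above all follow from the small-angle relation between an object's diameter, its distance, and the angle it subtends. The sketch below applies that relation to a dime at an assumed distance of about 4 km and to the full Moon; the round figures used are approximate, not taken from the source.

```python
import math

# Sketch: for small angles, the angle subtended by an object of diameter d at
# distance D is roughly d / D radians. Object and distance values are illustrative.
ARCSEC_PER_RAD = 648000 / math.pi   # ≈ 206265

def subtended_arcsec(diameter_m, distance_m):
    return (diameter_m / distance_m) * ARCSEC_PER_RAD

# A U.S. dime (about 18 mm across) at roughly 4 km subtends close to one arcsecond.
print(round(subtended_arcsec(0.018, 4000), 2))            # ≈ 0.93″
# The full Moon (about 3474 km across at about 384,400 km) subtends about half a degree.
print(round(subtended_arcsec(3.474e6, 3.844e8) / 60, 1))  # ≈ 31 arcminutes
```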
Minute and second of arc
Wikipedia
507
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
Contrary to what one might assume, minutes and seconds of arc do not directly relate to minutes and seconds of time, in either the rotation of the Earth around its own axis (day) or the Earth's revolution around the Sun (year). The Earth's rotational rate around its own axis is 15 minutes of arc per minute of time (360 degrees / 24 hours in a day); the Earth's rate of revolution around the Sun (not entirely constant) is roughly 24 minutes of time per minute of arc, which tracks the annual progression of the zodiac. Both of these factors affect which astronomical objects can be seen from surface telescopes (time of year) and when they can best be seen (time of day), but neither is in unit correspondence. For simplicity, the explanations given assume a degree per day in the Earth's annual revolution around the Sun, which is off by roughly 1%. The same ratios hold for seconds, due to the consistent factor of 60 on both sides. The arcsecond is also often used to describe small astronomical angles such as the angular diameters of planets (e.g. the angular diameter of Venus, which varies between 10″ and 60″); the proper motion of stars; the separation of components of binary star systems; and parallax, the small change of position of a star or Solar System body as the Earth revolves about the Sun. These small angles may also be written in milliarcseconds (mas), or thousandths of an arcsecond. The unit of distance called the parsec, abbreviated from the parallax angle of one arc second, was developed for such parallax measurements. The distance in parsecs from the Sun to a celestial object is the reciprocal of the object's parallax angle, measured in arcseconds, caused by the Earth's motion around the Sun. The European Space Agency's astrometric satellite Gaia, launched in 2013, can approximate star positions to 7 microarcseconds (μas).
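A minimal sketch of the parallax relation just described; Proxima Centauri's parallax of roughly 0.768 arcseconds is used as the example value.

```python
# Sketch of the parallax relation stated above: distance in parsecs is the
# reciprocal of the parallax angle in arcseconds. Example: Proxima Centauri,
# whose parallax is roughly 0.768 arcseconds.
def parallax_to_parsecs(parallax_arcsec: float) -> float:
    return 1.0 / parallax_arcsec

print(round(parallax_to_parsecs(0.768), 2))           # ≈ 1.30 pc
print(round(parallax_to_parsecs(0.768) * 3.26, 1))    # ≈ 4.2 light-years (1 pc ≈ 3.26 ly)
```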
Minute and second of arc
Wikipedia
418
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with a diameter of 0.05″. Because of the effects of atmospheric blurring, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5″; in poor conditions this increases to 1.5″ or even more. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images around 0.05″ on a 10 m class telescope. Space telescopes are not affected by the Earth's atmosphere but are diffraction limited. For example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″. Cartography Minutes (′) and seconds (″) of arc are also used in cartography and navigation. At sea level one minute of arc along the equator equals exactly one geographical mile (not to be confused with international mile or statute mile) along the Earth's equator or approximately . A second of arc, one sixtieth of this amount, is roughly . The exact distance varies along meridian arcs or any other great circle arcs because the figure of the Earth is slightly oblate (bulges a third of a percent at the equator). Positions are traditionally given using degrees, minutes, and seconds of arcs for latitude, the arc north or south of the equator, and for longitude, the arc east or west of the Prime Meridian. Any position on or above the Earth's reference ellipsoid can be precisely given with this method. However, when it is inconvenient to use base-60 for minutes and seconds, positions are frequently expressed as decimal fractional degrees to an equal amount of precision. Degrees given to three decimal places ( of a degree) have about the precision of degrees-minutes-seconds ( of a degree) and specify locations within about . For navigational purposes positions are given in degrees and decimal minutes, for instance The Needles lighthouse is at 50º 39.734’N 001º 35.500’W. Property cadastral surveying
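A small sketch of the coordinate conversions described above, reusing the Needles lighthouse position quoted in the text; the rounding choices are arbitrary.

```python
# Sketch of the conversions described above: degrees-minutes-seconds and
# degrees-decimal-minutes to decimal degrees, and back.
def dms_to_decimal(deg, minutes, seconds=0.0):
    return deg + minutes / 60 + seconds / 3600

def decimal_to_dm(decimal_deg):
    deg = int(decimal_deg)
    minutes = (decimal_deg - deg) * 60
    return deg, round(minutes, 3)

# The Needles lighthouse position quoted above, 50° 39.734′ N 001° 35.500′ W:
lat = dms_to_decimal(50, 39.734)
lon = -dms_to_decimal(1, 35.5)
print(round(lat, 6), round(lon, 6))   # 50.662233 -1.591667
print(decimal_to_dm(lat))             # (50, 39.734)
```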
Minute and second of arc
Wikipedia
447
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
Related to cartography, property boundary surveying using the metes and bounds system and cadastral surveying relies on fractions of a degree to describe property lines' angles in reference to cardinal directions. A boundary "mete" is described with a beginning reference point, the cardinal direction North or South followed by an angle less than 90 degrees and a second cardinal direction, and a linear distance. The boundary runs the specified linear distance from the beginning point, the direction of the distance being determined by rotating the first cardinal direction the specified angle toward the second cardinal direction. For example, North 65° 39′ 18″ West 85.69 feet would describe a line running from the starting point 85.69 feet in a direction 65° 39′ 18″ (or 65.655°) away from north toward the west. Firearms The arcminute is commonly found in the firearms industry and literature, particularly concerning the precision of rifles, though the industry refers to it as minute of angle (MOA). It is especially popular as a unit of measurement with shooters familiar with the imperial measurement system because 1 MOA subtends a circle with a diameter of 1.047 inches (which is often rounded to just 1 inch) at 100 yards (or 2.908 cm at 100 m), a traditional distance on American target ranges. The subtension is linear with the distance; for example, at 500 yards 1 MOA subtends 5.235 inches, and at 1000 yards 1 MOA subtends 10.47 inches. Since many modern telescopic sights are adjustable in half (1/2), quarter (1/4) or eighth (1/8) MOA increments, also known as clicks, zeroing and adjustments are made by counting 2, 4 and 8 clicks per MOA respectively.
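A short sketch of the subtension figures quoted above; it simply evaluates the tangent relation at the distances mentioned.

```python
import math

# Sketch of the MOA subtension figures quoted above: the linear size covered by
# one minute of angle grows linearly with distance.
def moa_subtension_inches(moa, distance_yards):
    distance_inches = distance_yards * 36
    return distance_inches * math.tan(math.radians(moa / 60))

for yards in (100, 500, 1000):
    print(yards, round(moa_subtension_inches(1, yards), 3))  # 1.047, 5.236, 10.471 inches
```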
Minute and second of arc
Wikipedia
363
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
For example, if the point of impact is 3 inches high and 1.5 inches left of the point of aim at 100 yards (which could be measured, for instance, by using a spotting scope with a calibrated reticle, or a target delineated for such purposes), the scope needs to be adjusted 3 MOA down and 1.5 MOA right. Such adjustments are trivial when the scope's adjustment dials have a MOA scale printed on them, and even figuring the right number of clicks is relatively easy on scopes that click in fractions of MOA. This makes zeroing and adjustments much easier: To adjust a 1/2 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 2 = 6 clicks down and 1.5 × 2 = 3 clicks right. To adjust a 1/4 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 4 = 12 clicks down and 1.5 × 4 = 6 clicks right. To adjust a 1/8 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 8 = 24 clicks down and 1.5 × 8 = 12 clicks right. Another common system of measurement in firearm scopes is the milliradian (mrad). Zeroing an mrad-based scope is easy for users familiar with base-ten systems. The most common adjustment value in mrad-based scopes is 0.1 mrad (which approximates 1/3 MOA). To adjust a 0.1 mrad scope 0.9 mrad down and 0.4 mrad right, the scope needs to be adjusted 9 clicks down and 4 clicks right (which equals approximately 3 and 1.5 MOA respectively).
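The click arithmetic above reduces to dividing the desired correction by the scope's per-click value. A minimal Python sketch (the helper name is illustrative):

```python
def clicks_needed(correction: float, per_click: float) -> int:
    """Clicks to dial a correction, given the turret's per-click value
    (both in the same unit, MOA or mrad)."""
    return round(correction / per_click)

# 3 MOA down and 1.5 MOA right on 1/2, 1/4 and 1/8 MOA turrets:
for per_click in (1/2, 1/4, 1/8):
    print(clicks_needed(3.0, per_click), clicks_needed(1.5, per_click))
# prints (6, 3), (12, 6), (24, 12)

# 0.9 mrad down and 0.4 mrad right on a 0.1 mrad turret:
print(clicks_needed(0.9, 0.1), clicks_needed(0.4, 0.1))   # prints 9 and 4
```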
Minute and second of arc
Wikipedia
358
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
One thing to be aware of is that some MOA scopes, including some higher-end models, are calibrated such that an adjustment of 1 MOA on the scope knobs corresponds to exactly 1 inch of impact adjustment on a target at 100 yards, rather than the mathematically correct 1.047 inches. This is commonly known as the Shooter's MOA (SMOA) or Inches Per Hundred Yards (IPHY). While the difference between one true MOA and one SMOA is less than half of an inch even at 1,000 yards, this error accumulates significantly on longer-range shots that may require adjustments upwards of 20–30 MOA to compensate for the bullet drop. If a shot requires an adjustment of 20 MOA or more, the difference between true MOA and SMOA will add up to 1 inch or more. In competitive target shooting, this might mean the difference between a hit and a miss. The physical group size equivalent to m minutes of arc can be calculated as follows: group size = tan(m/60°) × distance. In the example previously given, for 1 minute of arc, and substituting 3,600 inches for 100 yards, the group size is 3,600 × tan(1/60°) ≈ 1.047 inches. In metric units 1 MOA at 100 metres ≈ 2.908 centimetres. Sometimes, a precision-oriented firearm's performance will be measured in MOA. This simply means that under ideal conditions (i.e. no wind, high-grade ammunition, a clean barrel, and a stable mounting platform such as a vise or a benchrest used to eliminate shooter error), the gun is capable of producing a group of shots whose center points (measured center-to-center) fit into a circle whose average diameter, taken over several groups, is subtended by that angle of arc. For example, a 1 MOA rifle should be capable, under ideal conditions, of repeatably shooting 1-inch groups at 100 yards. Most higher-end rifles are warrantied by their manufacturer to shoot under a given MOA threshold (typically 1 MOA or better) with specific ammunition and no error on the shooter's part. For example, Remington's M24 Sniper Weapon System is required to shoot 0.8 MOA or better, or be rejected from sale by quality control.
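The group-size formula and the true-MOA/SMOA gap can be checked numerically; here is a short Python sketch under the same small-angle setup (the function name is my own):

```python
import math

def moa_subtension(moa: float, distance: float) -> float:
    """Linear size subtended by `moa` minutes of arc at `distance`
    (result is in the same unit as `distance`)."""
    return math.tan(math.radians(moa / 60)) * distance

print(moa_subtension(1, 3_600))    # 1 MOA at 100 yd (3,600 in) ≈ 1.047 in
print(moa_subtension(1, 10_000))   # 1 MOA at 100 m (10,000 cm) ≈ 2.908 cm

# SMOA treats 1 MOA as exactly 1 inch per 100 yd; over a 20 MOA adjustment
# the accumulated error at 100 yd is about 0.94 inch:
print(20 * (moa_subtension(1, 3_600) - 1.0))
```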
Minute and second of arc
Wikipedia
471
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
Rifle manufacturers and gun magazines often refer to this capability as sub-MOA, meaning a gun consistently shooting groups under 1 MOA. This means that a single group of 3 to 5 shots at 100 yards, or the average of several groups, will measure less than 1 MOA between the two furthest shots in the group, i.e. all shots fall within 1 MOA. If larger samples are taken (i.e., more shots per group), then group size typically increases; however, this will ultimately average out. If a rifle were truly a 1 MOA rifle, it would be just as likely that two consecutive shots land exactly on top of each other as that they land 1 MOA apart. For 5-shot groups, based on 95% confidence, a rifle that normally shoots 1 MOA can be expected to shoot groups between 0.58 MOA and 1.47 MOA, although the majority of these groups will be under 1 MOA. What this means in practice is that if a rifle that shoots 1-inch groups on average at 100 yards shoots a group measuring 0.7 inches followed by a group that is 1.3 inches, this is not statistically abnormal. The metric system counterpart of the MOA is the milliradian (mrad or 'mil'), equal to 1/1000 of the target range, laid out on a circle that has the observer as centre and the target range as radius. The number of milliradians on a full such circle therefore is always equal to 2 × π × 1000, regardless of the target range. Therefore, 1 MOA ≈ 0.2909 mrad. This means that an object which spans 1 mrad on the reticle is at a range that is in metres equal to the object's linear size in millimetres (e.g. an object of 100 mm subtending 1 mrad is 100 metres away), so no conversion factor is required, contrary to the MOA system. A reticle with markings (hashes or dots) spaced one mrad apart (or a fraction of a mrad) is collectively called a mrad reticle. If the markings are round they are called mil-dots. In the table below, conversions from mrad to metric values are exact (e.g. 0.1 mrad equals exactly 10 mm at 100 metres), while conversions of minutes of arc to both metric and imperial values are approximate.
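The mrad ranging relation described above (object size in millimetres divided by subtension in mrad gives the range in metres) and the MOA-to-mrad conversion follow directly from the definitions; a minimal Python sketch:

```python
import math

MOA_PER_MRAD = (60 * 180) / (1000 * math.pi)   # ≈ 3.4377 MOA per mrad

def range_m(size_mm: float, subtension_mrad: float) -> float:
    """Range in metres from a known object size (mm) and its
    apparent angular size on the reticle (mrad)."""
    return size_mm / subtension_mrad

print(range_m(100, 1.0))      # a 100 mm object spanning 1 mrad is 100 m away
print(1 / MOA_PER_MRAD)       # 1 MOA ≈ 0.2909 mrad
print(2 * math.pi * 1000)     # milliradians in a full circle ≈ 6283.2
```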
Minute and second of arc
Wikipedia
497
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
1′ at 100 yards is about 1.047 inches. 1′ ≈ 0.291 mrad (or 29.1 mm at 100 m, approximately 30 mm at 100 m). 1 mrad ≈ 3.44′, so 1/10 mrad ≈ 1/3′. 0.1 mrad equals exactly 1 cm at 100 m, or exactly 0.36 inches at 100 yards. Human vision In humans, 20/20 vision is the ability to resolve a spatial pattern separated by a visual angle of one minute of arc, from a distance of twenty feet. A 20/20 letter subtends 5 minutes of arc in total. Materials The deviation from parallelism between two surfaces, for instance in optical engineering, is usually measured in arcminutes or arcseconds. In addition, arcseconds are sometimes used in rocking-curve (ω-scan) x-ray diffraction measurements of high-quality epitaxial thin films. Manufacturing Some measurement devices make use of arcminutes and arcseconds to measure angles when the object being measured is too small for direct visual inspection. For instance, a toolmaker's optical comparator will often include an option to measure in "minutes and seconds".
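As a worked example of the 20/20 definition above, the linear size of a full 20/20 optotype (5 arcminutes at 20 feet) can be computed with the exact chord formula; a small Python sketch, with the function name invented for illustration:

```python
import math

def subtended_size(arcmin: float, distance: float) -> float:
    """Linear size that subtends `arcmin` minutes of arc at `distance`."""
    theta = math.radians(arcmin / 60)
    return 2 * distance * math.tan(theta / 2)

# A 20/20 letter subtends 5 arcminutes at 20 feet (240 inches):
print(subtended_size(5, 240))   # ≈ 0.35 in, i.e. about 8.9 mm
```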
Minute and second of arc
Wikipedia
245
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Physical sciences
Angle
Basics and measurement
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's second law, is the combined effect of two causes: the net balance of all external forces acting on the object, to which the magnitude is directly proportional, and the object's mass, which depends on the materials out of which it is made and to which the magnitude is inversely proportional. The SI unit for acceleration is the metre per second squared (m/s², or equivalently m·s⁻²). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential, during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal, during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative value, if the movement is one-dimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity is neutralised with respect to the vehicle. Definition and properties Average acceleration An object's average acceleration over a period of time is its change in velocity, Δv, divided by the duration of the period, Δt. Mathematically, a̅ = Δv/Δt. Instantaneous acceleration
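The average-acceleration definition is a one-line computation; a minimal Python sketch with illustrative sample values:

```python
def average_acceleration(v0: float, v1: float, dt: float) -> float:
    """Average acceleration: change in velocity divided by elapsed time."""
    return (v1 - v0) / dt

# A car accelerating from rest to 27 m/s (~100 km/h) in 9 s:
print(average_acceleration(0.0, 27.0, 9.0))   # 3.0 m/s²
```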
Acceleration
Wikipedia
490
2443
https://en.wikipedia.org/wiki/Acceleration
Physical sciences
Classical mechanics
null
Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = lim(Δt→0) Δv/Δt = dv/dt. As acceleration is defined as the derivative of velocity, v, with respect to time, and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = d²x/dt². (Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.) By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to the change of velocity: Δv = ∫ a dt. Likewise, the integral of the jerk function j(t), the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: Δa = ∫ j dt. Units Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L·T⁻². The SI unit of acceleration is the metre per second squared (m·s⁻²); or "metre per second per second", as the velocity in metres per second changes by the acceleration value every second. Other forms An object moving in a circular motion, such as a satellite orbiting the Earth, is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. the sum of all forces) acting on it (Newton's second law): F = ma, where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. Tangential and centripetal acceleration The velocity of a particle moving on a curved path as a function of time can be written as: v(t) = v(t) u_t(t), with v(t) equal to the speed of travel along the path, and
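The derivative and integral relations above can be checked numerically with finite differences. A short sketch using NumPy (the velocity profile is an invented example, not from the article):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)     # time samples, s
v = 3.0 * t + 0.5 * t**2             # example 1-D velocity profile, m/s

a = np.gradient(v, t)                # a = dv/dt by finite differences
# Trapezoidal integration of a(t) recovers the total change in velocity:
dv = float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t)))

print(a[0], a[-1])        # ≈ 3.0 and ≈ 13.0 m/s² (exact law: a = 3 + t)
print(dv, v[-1] - v[0])   # both ≈ 80.0 m/s
```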
Acceleration
Wikipedia
482
2443
https://en.wikipedia.org/wiki/Acceleration
Physical sciences
Classical mechanics
null
a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of u_t, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as: a = (dv/dt) u_t + (v²/r) u_n, where u_n is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. The components a_t = dv/dt and a_n = v²/r are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively. Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas. Special cases Uniform acceleration Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's second law the force F acting on a body is given by: F = mg. Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed: s(t) = s₀ + v₀t + (1/2)at², v(t) = v₀ + at, and v(t)² = v₀² + 2a(s(t) − s₀), where t is the elapsed time, s₀ is the initial displacement from the origin, s(t) is the displacement from the origin at time t, v₀ is the initial velocity, v(t) is the velocity at time t, and a is the uniform rate of acceleration. In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth. Circular motion
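The constant-acceleration formulas above translate directly into code; a minimal Python sketch, with free-fall values chosen for illustration:

```python
def constant_accel_state(s0: float, v0: float, a: float, t: float):
    """Displacement and velocity under uniform acceleration:
    s = s0 + v0*t + a*t²/2 and v = v0 + a*t."""
    s = s0 + v0 * t + 0.5 * a * t * t
    v = v0 + a * t
    return s, v

# Free fall from rest for 3 s with g ≈ 9.81 m/s² (downward positive):
print(constant_accel_state(0.0, 0.0, 9.81, 3.0))   # (44.145 m, 29.43 m/s)
```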
Acceleration
Wikipedia
421
2443
https://en.wikipedia.org/wiki/Acceleration
Physical sciences
Classical mechanics
null
In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, and thus orthogonal to the radius at this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in the radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent at the neighbouring point, thereby rotating the velocity vector along the circle. For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of this speed: a_c = v²/r. For a given angular velocity ω, the centripetal acceleration is directly proportional to the radius r. This is due to the dependence of the velocity v on the radius r: v = ωr. Expressing the centripetal acceleration vector in polar components, where r is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields a = −(v²/|r|²) r. As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance r as ω = v/r. Thus a = −ω² r. This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion. In nonuniform circular motion, i.e. when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle and determines the radius r for the centripetal acceleration. The tangential component is given by the angular acceleration α, i.e., the rate of change of the angular speed ω, times the radius r. That is, a_t = rα.
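The two equivalent forms of the centripetal acceleration, a_c = v²/r and a_c = ω²r, are easy to verify numerically. A Python sketch; the low-Earth-orbit figures are rough illustrative values, not from the article:

```python
def centripetal_v(v: float, r: float) -> float:
    """Centripetal acceleration magnitude from speed: a_c = v² / r."""
    return v * v / r

def centripetal_omega(omega: float, r: float) -> float:
    """Equivalent form from angular speed: a_c = ω² r, using v = ω r."""
    return omega * omega * r

v, r = 7.8e3, 6.77e6     # rough orbital speed (m/s) and radius (m) in low orbit
print(centripetal_v(v, r))            # ≈ 9.0 m/s²
print(centripetal_omega(v / r, r))    # same value via ω = v / r
```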
Acceleration
Wikipedia
481
2443
https://en.wikipedia.org/wiki/Acceleration
Physical sciences
Classical mechanics
null
The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration α, and the tangent is always directed at right angles to the radius vector. Coordinate systems In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, the corresponding acceleration components are defined as a_x = dv_x/dt and a_y = dv_y/dt. The two-dimensional acceleration vector is then defined as a = (a_x, a_y). The magnitude of this vector is found by the distance formula as |a| = √(a_x² + a_y²). In three-dimensional systems, where there is an additional z-axis, the corresponding acceleration component is defined as a_z = dv_z/dt. The three-dimensional acceleration vector is defined as a = (a_x, a_y, a_z), with its magnitude determined by |a| = √(a_x² + a_y² + a_z²). Relation to relativity Special relativity The special theory of relativity describes the behaviour of objects travelling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming vanishingly small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it. General relativity Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration: gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all, including the force of gravity, are justified in concluding that they are not accelerating. Conversions
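The component form and the magnitude formula above amount to a Euclidean norm; a minimal Python sketch:

```python
import math

def accel_magnitude(ax: float, ay: float, az: float = 0.0) -> float:
    """|a| = sqrt(ax² + ay² + az²) for Cartesian components."""
    return math.sqrt(ax * ax + ay * ay + az * az)

print(accel_magnitude(3.0, 4.0))        # 5.0 for the 2-D case
print(accel_magnitude(1.0, 2.0, 2.0))   # 3.0 for the 3-D case
```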
Acceleration
Wikipedia
341
2443
https://en.wikipedia.org/wiki/Acceleration
Physical sciences
Classical mechanics
null
Apoptosis (from Ancient Greek apóptōsis, "falling off") is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses 50 to 70 billion cells each day due to apoptosis. The average human child between 8 and 14 years old loses approximately 20 to 30 billion cells each day. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis. Discovery and etymology
Apoptosis
Wikipedia
463
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz. For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death. The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
Apoptosis
Wikipedia
413
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use, as it had had a medical meaning for the Greeks more than two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent and one with the second p pronounced. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: Diptera, Lepidoptera, etc. In the original Kerr, Wyllie and Currie paper, there is a footnote regarding the pronunciation: We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" (ἀπόπτωσις) is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid. Activation mechanisms The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
Apoptosis
Wikipedia
469
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell death. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids, and increased intracellular calcium concentration (for example, from damage to the membrane) can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single-cell fluctuations have been observed in experimental studies of stress-induced apoptosis. Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals either to cause cell death or to halt the process, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which can also cause apoptosis via the calcium-binding protease calpain. Intrinsic pathway The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
Apoptosis
Wikipedia
462
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers inserted into the outer membrane. Once cytochrome c is released, it binds with apoptotic protease activating factor-1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector caspase-3. Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondrial membranes. SMAC binds to inhibitor of apoptosis proteins (IAPs), thereby deactivating them and preventing them from arresting the process, which allows apoptosis to proceed. IAPs also normally suppress the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Extrinsic pathway Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
Apoptosis
Wikipedia
382
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
TNF pathway TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-alpha signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy; luminescent iridium complex-peptide hybrids (IPHs), which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis, have recently been designed. Fas pathway The Fas receptor (first apoptosis signal, also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8. Common components
Apoptosis
Wikipedia
500
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family. Caspases Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent, aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program. Caspase-independent apoptotic pathway There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). Apoptosis model in amphibians The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins during amphibian metamorphosis, and stimulate the evolution of their nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog.
Apoptosis
Wikipedia
489
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Negative regulators of apoptosis Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized into either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB. Proteolytic caspase cascade: Killing the cell Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized; mRNA decay is triggered very early in apoptosis. A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include the following: Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases. The cytoplasm appears dense, and the organelles appear tightly packed. Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis. The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. Apoptotic cell disassembly
Apoptosis
Wikipedia
476
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly: Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs; later they can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho-associated coiled-coil-containing protein kinase 1). Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia. Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes. Removal of dead cells The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis, cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; the separation of RNA is initiated as nucleolar segregation.
Apoptosis
Wikipedia
449
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Pathway knock-outs Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that activates caspase-9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene-trap strategy was used in order to generate an APAF-1 -/- mouse. This assay is used to disrupt gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brains of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
Apoptosis
Wikipedia
396
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
The caspase proteins are integral parts of the apoptosis pathway, so it follows that knocking them out has varied, damaging results. A caspase-9 knock-out leads to a severe brain malformation. A caspase-8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase-8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc., but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase-3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, Casp9, and APAF-1 KO mice show deformations of neural tissue, and FADD and Casp8 KO mice show defective heart development; however, in both types of KO, other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist. Methods for distinguishing apoptotic from necrotic cells
Apoptosis
Wikipedia
293
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references. Implication in disease Defective pathways The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept overlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased.
Apoptosis
Wikipedia
431
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in the regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis. Dysregulation of p53 The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in an increase of the p53 protein level and enhancement of cancer cell apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Apoptosis
Wikipedia
386
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
Inhibition Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors, e.g. cytochrome c, are vital during mitochondrial respiration. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis"). HeLa cell Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring. Treatments
Apoptosis
Wikipedia
374
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology
The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility to apoptosis in diseased cells, depending on whether the disease is caused by inhibited or excessive apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death, and to raise the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitors (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes displaces p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis at various places along the death signaling pathway. Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the ratio of apoptosis to cell division is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
Apoptosis
Wikipedia
300
2457
https://en.wikipedia.org/wiki/Apoptosis
Biology and health sciences
Cell processes
Biology