Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM Thumb. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.
Each instruction specifies some number of operands (registers, memory locations, or immediate values) explicitly. Some instructions give one or both operands implicitly, such as by being stored on top of the stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.
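To make the bit budget concrete, the following minimal C sketch decodes the fields of a MIPS R-type instruction, a real 32-bit, three-operand register format; the three 5-bit register fields alone take 15 bits, which is why a 16-bit instruction word rarely has room for a third register operand.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a MIPS R-type (three-operand) instruction word:
 * opcode[31:26] rs[25:21] rt[20:16] rd[15:11] shamt[10:6] funct[5:0].
 * The field positions are specific to MIPS; they are shown here only to
 * illustrate the encoding cost of naming three registers explicitly. */
int main(void) {
    uint32_t insn = 0x012A4020;              /* add $t0, $t1, $t2 */
    unsigned opcode = (insn >> 26) & 0x3F;
    unsigned rs     = (insn >> 21) & 0x1F;   /* first source register  */
    unsigned rt     = (insn >> 16) & 0x1F;   /* second source register */
    unsigned rd     = (insn >> 11) & 0x1F;   /* destination register   */
    unsigned shamt  = (insn >>  6) & 0x1F;   /* shift amount           */
    unsigned funct  =  insn        & 0x3F;
    printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=%u\n",
           opcode, rs, rt, rd, shamt, funct);
    return 0;
}
```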
Register pressure
Register pressure measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.
While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.
Instruction length
The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits). Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), instructions have a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as ARM with the Thumb extension, have mixed variable encoding, that is, two fixed encodings (usually 32-bit and 16-bit) where instructions cannot be mixed freely but must be switched between on a branch (or an exception boundary in ARMv8).
Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary, for instance), and are therefore somewhat easier to optimize for speed.
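As a rough illustration of the difference (the variable-length rule below is hypothetical, standing in for real schemes such as x86 where an instruction's length only emerges during decoding):

```c
#include <stdint.h>
#include <stdio.h>

/* Fixed-length fetch: the start of the next instruction is known without
 * decoding anything, and aligned instructions never straddle a cache line
 * or page boundary. */
static uint32_t next_pc_fixed(uint32_t pc) {
    return pc + 4;
}

/* Variable-length fetch (hypothetical encoding: the top bit of the first
 * byte selects a 2- or 4-byte instruction).  The length must be decoded
 * before the next fetch can begin, and straddling checks are needed. */
static uint32_t next_pc_variable(const uint8_t *code, uint32_t pc) {
    return pc + ((code[pc] & 0x80) ? 4u : 2u);
}

int main(void) {
    uint8_t code[8] = {0x81, 0, 0, 0, 0x01, 0};  /* one 4-byte, one 2-byte */
    printf("fixed:    next pc = %u\n", next_pc_fixed(0));
    printf("variable: next pc = %u\n", next_pc_variable(code, 0));
    return 0;
}
```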
Code density
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. It remained important on the initially-tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.
Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.
Reduced instruction-set computers, RISC, were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.
Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.
Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA or in a multi-core form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.
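A toy sketch of the packing idea, for a made-up stack machine rather than any particular MISC design: with single-byte opcodes, four instructions fit in one 32-bit word, but even a trivial computation consumes several of these primitive instructions.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical MISC-style stack machine: four one-byte instruction slots
 * are packed into each 32-bit word and executed from the most significant
 * slot down. */
enum { OP_NOP = 0, OP_PUSH1 = 1, OP_PUSH2 = 2, OP_ADD = 3, OP_PRINT = 4 };

static void run_word(uint32_t word) {
    int stack[8], sp = 0;
    for (int slot = 3; slot >= 0; slot--) {
        uint8_t op = (word >> (slot * 8)) & 0xFF;
        switch (op) {
        case OP_PUSH1: stack[sp++] = 1; break;
        case OP_PUSH2: stack[sp++] = 2; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        default: break;                           /* OP_NOP */
        }
    }
}

int main(void) {
    /* PUSH1, PUSH2, ADD, PRINT packed into a single machine word. */
    run_word((uint32_t)OP_PUSH1 << 24 | OP_PUSH2 << 16 | OP_ADD << 8 | OP_PRINT);
    return 0;
}
```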
There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.
In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance GCC has the option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code.
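For example, a simple function like the one below can be compiled both ways and the resulting object code compared; the exact size difference depends on the compiler version and target.

```c
/* sum.c -- compile for size or for speed:
 *   gcc -Os -c sum.c   # optimize for small machine code
 *   gcc -O3 -c sum.c   # optimize for speed (may unroll/vectorize; larger code)
 * Compare the generated code sizes, e.g. with `size sum.o`.
 */
#include <stddef.h>

long sum(const long *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```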
Representation
The instructions constituting a program are rarely specified using their internal, numeric form (machine code); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers.
Design
The design of instruction sets is a complex issue. The history of the microprocessor has two broad stages. The first was the CISC (Complex Instruction Set Computer), which had many different instructions. In the 1970s, however, researchers at places like IBM found that many instructions in the set could be eliminated. The result was the RISC (Reduced Instruction Set Computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming.
Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt. For example, MOS Technology 6502 uses 00H, Zilog Z80 uses the eight codes C7, CF, D7, DF, E7, EF, F7, FFH, while Motorola 68000 uses codes in the range A000..AFFFH.
Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.
The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.
On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something such as "fetch-and-add", "load-link/store-conditional" (LL/SC), or "atomic compare-and-swap".
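As an illustration, the C11 sketch below performs a lock-free counter update two ways: with a single atomic fetch-and-add, and with an explicit compare-and-swap retry loop. Which hardware primitives these map to (fetch-and-add, CAS, or an LL/SC loop) depends on the ISA.

```c
#include <stdatomic.h>
#include <stdio.h>

/* A lock-free counter. atomic_fetch_add maps onto a hardware fetch-and-add
 * (or an LL/SC loop) where the ISA provides one; the explicit
 * compare-and-swap loop below builds the same update from CAS. */
static _Atomic long counter = 0;

static void increment_with_cas(void) {
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
        /* On failure, 'old' was reloaded with the current value; retry. */
    }
}

int main(void) {
    atomic_fetch_add(&counter, 1);   /* single atomic read-modify-write */
    increment_with_cas();            /* equivalent update via a CAS loop */
    printf("%ld\n", atomic_load(&counter));
    return 0;
}
```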
Instruction set implementation
A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.
When designing the microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture.
There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):
Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
Other designs employ microcode routines or tables (or both) to do this, using ROMs or writable RAMs (writable control store), PLAs, or both.
Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip).
CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs).
An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load–store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite direction—forcing instructions to be implemented in a particular way. For example, to perform digital filters fast enough, the MAC instruction in a typical digital signal processor (DSP) must use a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply–accumulate multiplier.
Quality of life (QOL) is defined by the World Health Organization as "an individual's perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns".
Standard indicators of the quality of life include wealth, employment, the environment, physical and mental health, education, recreation and leisure time, social belonging, religious beliefs, safety, security and freedom. QOL has a wide range of contexts, including the fields of international development, healthcare, politics and employment. Health related QOL (HRQOL) is an evaluation of QOL and its relationship with health.
Engaged theory
One approach, called engaged theory, outlined in the journal Applied Research in Quality of Life, posits four domains in assessing quality of life: ecology, economics, politics and culture. In the domain of culture, for example, it includes the following subdomains of quality of life:
Beliefs and ideas
Creativity and recreation
Enquiry and learning
Gender and generations
Identity and engagement
Memory and projection
Well-being and health
Under this conception, other frequently related concepts include freedom, human rights, and happiness. However, since happiness is subjective and difficult to measure, other measures are generally given priority. It has also been shown that happiness, as much as it can be measured, does not necessarily increase correspondingly with the comfort that results from increasing income. As a result, standard of living should not be taken to be a measure of happiness. Also, sometimes considered related is the concept of human security, though the latter may be considered at a more basic level and for all people.
Quantitative measurement
Unlike per capita GDP or standard of living, both of which can be measured in financial terms, it is harder to make objective or long-term measurements of the quality of life experienced by nations or other groups of people. Researchers have begun in recent times to distinguish two aspects of personal well-being: emotional well-being, in which respondents are asked about the quality of their everyday emotional experiences (the frequency and intensity of their experiences of, for example, joy, stress, sadness, anger and affection), and life evaluation, in which respondents are asked to think about their life in general and evaluate it against a scale. Such and other systems and scales of measurement have been in use for some time. Research has attempted to examine the relationship between quality of life and productivity.
There are many different methods of measuring quality of life in terms of health care, wealth, and material goods. However, it is much more difficult to measure the meaningful expression of one's desires. One way to do so is to evaluate the extent to which individuals have fulfilled their own ideals. Quality of life can simply mean happiness, the subjective state of mind. By that measure, citizens of a developing country may express greater appreciation, since they are content once the basic necessities of health care, education and child protection are met.
According to ecological economist Robert Costanza:
Human Development Index
Perhaps the most commonly used international measure of development is the Human Development Index (HDI), which combines measures of life expectancy, education, and standard of living in an attempt to quantify the options available to individuals within a given society. The HDI is used by the United Nations Development Programme in its Human Development Report. Since 2010, however, the Human Development Report has also introduced an Inequality-adjusted Human Development Index (IHDI). While the original HDI remains useful, it stated that "the IHDI is the actual level of human development (accounting for inequality), while the original HDI can be viewed as an index of 'potential' human development (or the maximum level of HDI) that could be achieved if there was no inequality."
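Since 2010 the HDI has been calculated as the geometric mean of three dimension indices (health, education and income), each first normalized to a 0–1 scale using goalposts that vary by report year. The sketch below shows only that final combining step; the index values are made up for illustration, and the normalization itself is not shown.

```c
#include <math.h>
#include <stdio.h>

/* Combine three dimension indices, each already normalized to [0, 1]
 * using the report's goalposts (normalization not shown here), into the
 * HDI as their geometric mean. */
static double hdi(double health, double education, double income) {
    return cbrt(health * education * income);
}

int main(void) {
    /* Illustrative values only, not any country's actual indices. */
    printf("HDI = %.3f\n", hdi(0.90, 0.80, 0.85));
    return 0;
}
```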
World Happiness Report
The World Happiness Report is a landmark survey on the state of global happiness. It ranks 156 countries by their happiness levels, reflecting growing global interest in using happiness and substantial well-being as an indicator of the quality of human development. Its growing purpose has allowed governments, communities and organizations to use appropriate data to record happiness in order to enable policies to provide better lives. The reports review the state of happiness in the world today and show how the science of happiness explains personal and national variations in happiness.
Developed again by the United Nations and published recently along with the HDI, this report combines both objective and subjective measures to rank countries by happiness, which is deemed the ultimate outcome of a high quality of life. It uses surveys from Gallup, real GDP per capita, healthy life expectancy, having someone to count on, perceived freedom to make life choices, freedom from corruption, and generosity to derive the final score. Happiness is already recognized as an important concept in global public policy. The World Happiness Report indicates that some regions have in recent years been experiencing progressive inequality of happiness.
Other measures
The Physical Quality of Life Index (PQLI) is a measure developed by sociologist M. D. Morris in the 1970s, based on basic literacy, infant mortality, and life expectancy. Although not as complex as other measures, and now essentially replaced by the Human Development Index, the PQLI is notable for Morris's attempt to show a "less fatalistic pessimistic picture" by focusing on three areas where global quality of life was generally improving at the time, while ignoring gross national product and other possible indicators that were not improving.
The Happy Planet Index, introduced in 2006, is unique among quality of life measures in that, in addition to standard determinants of well-being, it uses each country's ecological footprint as an indicator. As a result, European and North American nations do not dominate this measure. The 2012 list is instead topped by Costa Rica, Vietnam and Colombia.
In 2010, Gallup researchers trying to find the world's happiest countries found Denmark to be at the top of the list. For the period 2014–2016, Norway surpassed Denmark at the top of the list.
A 2010 study by two Princeton University professors looked at 1,000 randomly selected U.S. residents over an extended period. It concludes that their life evaluations – that is, their considered evaluations of their life against a stated scale of one to ten – rise steadily with income. On the other hand, their reported quality of emotional daily experiences (their reported experiences of joy, affection, stress, sadness, or anger) levels off after a certain income level (approximately $75,000 per year in 2010); income above $75,000 does not lead to more experiences of happiness nor to further relief of unhappiness or stress. Below this income level, respondents reported decreasing happiness and increasing sadness and stress, implying the pain of life's misfortunes, including disease, divorce, and being alone, is exacerbated by poverty.
Gross national happiness and other subjective measures of happiness are being used by the governments of Bhutan and the United Kingdom. The World Happiness Report, issued by Columbia University, is a meta-analysis of happiness globally and provides an overview of countries and grassroots activists using GNH. The OECD issued a guide for the use of subjective well-being metrics in 2013. In the U.S., cities and communities are using a GNH metric at a grassroots level.
The Social Progress Index measures the extent to which countries provide for the social and environmental needs of their citizens. Fifty-two indicators in the areas of basic human needs, foundations of wellbeing, and opportunity show the relative performance of nations. The index uses outcome measures when there is sufficient data available or the closest possible proxies.
The day-reconstruction method was another way of measuring happiness, in which researchers asked their subjects to recall various things they did on the previous day and describe their mood during each activity. Simple and approachable, the method relied on memory, and experiments confirmed that the answers people gave were similar to those of people asked to repeatedly recall each activity. The method eventually declined as it called for more effort and thoughtful responses, which often included interpretations and outcomes that do not occur to people who are asked to record every action in their daily lives.
The Digital Quality of Life Index is a yearly study on digital well-being across 121 countries, created by Surfshark. It indexes each country according to five pillars that impact a population's digital quality of life: internet affordability, internet quality, electronic infrastructure, electronic security, and electronic government.
Livability
The term quality of life is also used by politicians and economists to measure the livability of a given city or nation. Two widely known measures of livability are the Economist Intelligence Unit's Where-to-be-born Index and Mercer's Quality of Living Reports. These two measures calculate the livability of countries and cities around the world, respectively, through a combination of subjective life-satisfaction surveys and objective determinants of quality of life such as divorce rates, safety, and infrastructure. Such measures relate more broadly to the population of a city, state, or country, not to individual quality of life. Livability has a long history and tradition in urban design, and neighborhood design standards such as LEED-ND are often used in an attempt to influence livability.
Crimes
Some crimes against property (e.g., graffiti and vandalism) and some "victimless crimes" have been referred to as "quality-of-life crimes". American sociologist James Q. Wilson encapsulated this argument as the broken windows theory, which asserts that relatively minor problems left unattended (such as litter, graffiti, or public urination by homeless individuals) send a subliminal message that disorder, in general, is being tolerated, and as a result, more serious crimes will end up being committed (the analogy being that a broken window left broken shows an image of general dilapidation).
Wilson's theories have been used to justify the implementation of zero tolerance policies by many prominent American mayors, most notably Oscar Goodman in Las Vegas, Richard Riordan in Los Angeles, Rudolph Giuliani in New York City and Gavin Newsom in San Francisco. Such policies refuse to tolerate even minor crimes; proponents argue that this will improve the quality of life of local residents. However, critics of zero tolerance policies believe that such policies neglect investigation on a case-by-case basis and may lead to unreasonably harsh penalties for crimes.
In healthcare
Within the field of healthcare, quality of life is often regarded in terms of how a certain ailment affects a patient on an individual level. This may be a debilitating weakness that is not life-threatening; life-threatening illness that is not terminal; terminal illness; the predictable, natural decline in the health of an elder; an unforeseen mental/physical decline of a loved one; or chronic, end-stage disease processes. Researchers at the University of Toronto's Quality of Life Research Unit define quality of life as "The degree to which a person enjoys the important possibilities of his or her life" (UofT). Their Quality of Life Model is based on the categories "being", "belonging", and "becoming"; respectively who one is, how one is connected to one's environment, and whether one achieves one's personal goals, hopes, and aspirations.
Experience sampling studies show substantial between-person variability in within-person associations between somatic symptoms and quality of life. Hecht and Shiel measure quality of life as "the patient's ability to enjoy normal life activities" since life quality is strongly related to wellbeing without suffering from sickness and treatment.
In international development
Quality of life has been deemed an important concept in the field of international development because it allows development to be analyzed on a measure that is generally accepted as more comprehensive than standard of living. Within development theory, however, there are varying ideas concerning what constitutes desirable change for a particular society. The different ways that quality of life is defined by institutions, therefore, shape how these organizations work for its improvement as a whole.
Organisations such as the World Bank, for example, declare a goal of "working for a world free of poverty", with poverty defined as a lack of basic human needs, such as food, water, shelter, freedom, access to education, healthcare, or employment. In other words, poverty is defined as a low quality of life. Using this definition, the World Bank works towards improving quality of life through the stated goal of lowering poverty and helping people afford a better quality of life.
Other organizations, however, may also work towards improved global quality of life using a slightly different definition and substantially different methods. Many NGOs do not focus at all on reducing poverty on a national or international scale, but rather attempt to improve the quality of life for individuals or communities. One example would be sponsorship programs that provide material aid for specific individuals. Although many organizations of this type may still talk about fighting poverty, the methods are significantly different.
Improving quality of life involves action not only by NGOs but also by governments. Global health has the potential to achieve greater political presence if governments were to incorporate aspects of human security into foreign policy. Stressing individuals' basic rights to health, food, shelter, and freedom addresses prominent inter-sectoral problems negatively impacting today's society, and may lead to greater action and resources. Integration of global health concerns into foreign policy may be hampered by approaches that are shaped by the overarching roles of defense and diplomacy.
The Voyager program is an American scientific program that employs two interstellar probes, Voyager 1 and Voyager 2. They were launched in 1977 to take advantage of a favorable planetary alignment to explore the two gas giants, Jupiter and Saturn, and potentially also the ice giants, Uranus and Neptune, flying near them while collecting data for transmission back to Earth. After Voyager 1 successfully completed its flyby of Saturn and its moon Titan, it was decided to send Voyager 2 on flybys of Uranus and Neptune.
After the planetary flybys were complete, decisions were made to keep the probes in operation to explore interstellar space and the outer regions of the solar system. On 25 August 2012, data from Voyager 1 indicated that it had entered interstellar space. On 5 November 2018, data from Voyager 2 indicated that it also had entered interstellar space; on 4 November 2019, scientists reported that Voyager 2 had officially reached the interstellar medium (ISM), a region of outer space beyond the influence of the solar wind, on that date, as Voyager 1 had in 2012. In August 2018, NASA confirmed, based on results from the New Horizons spacecraft, the existence of a "hydrogen wall" at the outer edges of the Solar System that was first detected in 1992 by the two Voyager spacecraft.
As of May 25, 2024, the Voyagers are still in operation beyond the outer boundary of the heliosphere in interstellar space. Voyager 1 is moving at about 17 km/s (10.5 miles per second) relative to the Sun, and Voyager 2 at about 15 km/s relative to the Sun; both continue to recede from the Sun and from Earth.
The two Voyagers are the only human-made objects to date that have passed into interstellar space — a record they will hold until at least the 2040s — and Voyager 1 is the farthest human-made object from Earth.
History
Mariner Jupiter-Saturn
The two Voyager space probes were originally conceived as part of the Planetary Grand Tour planned during the late 1960s and early 70s that aimed to explore Jupiter, Saturn, Saturn's moon, Titan, Uranus, Neptune, and Pluto. The mission originated from the Grand Tour program, conceptualized by Gary Flandro, an aerospace engineer at the Jet Propulsion Laboratory, in 1964, which leveraged a rare planetary alignment occurring once every 175 years. This alignment allowed a craft to reach all outer planets using gravitational assists. The mission was to send several pairs of probes and gained momentum in 1966 when it was endorsed by NASA's Jet Propulsion Laboratory. However, in December 1971, the Grand Tour mission was canceled when funding was redirected to the Space Shuttle program.
In 1972, a scaled-down (four planets, two identical spacecraft) mission was proposed, utilizing a spacecraft derived from the Mariner series, initially intended to be Mariner 11 and Mariner 12. The gravity-assist technique, successfully demonstrated by Mariner 10, would be used to achieve significant velocity changes by maneuvering through an intermediate planet's gravitational field to minimize the time towards Saturn. The spacecraft were then moved into a separate program named Mariner Jupiter-Saturn (also Mariner Jupiter-Saturn-Uranus, MJS, or MJSU), part of the Mariner program, later renamed because it was thought that the design of the two space probes had progressed sufficiently beyond that of the Mariner family to merit a separate name.
Voyager probes
On March 4, 1977, NASA announced a competition to rename the mission, believing the existing name was not appropriate as the mission had differed significantly from previous Mariner missions. Voyager was chosen as the new name, referencing an earlier suggestion by William Pickering, who had proposed the name Navigator. Due to the name change occurring close to launch, the probes were still occasionally referred to as Mariner 11 and Mariner 12, or even Voyager 11 and Voyager 12.
Two mission trajectories were established: JST, aimed at Jupiter and Saturn with an emphasis on a Titan flyby, and JSX, which served as a more flexible contingency plan. If JST succeeded, JSX could proceed with the Grand Tour, but in case of failure, JSX could be redirected for a separate Titan flyby, forfeiting the Grand Tour opportunity. The second probe, now Voyager 2, followed the JSX trajectory, granting it the option to continue on to Uranus and Neptune. Upon Voyager 1 completing its main objectives at Saturn, Voyager 2 received a mission extension, enabling it to proceed to Uranus and Neptune. This allowed Voyager 2 to diverge from the originally planned JST trajectory.
The probes would be launched in August or September 1977, with their main objective being to compare the characteristics of Jupiter and Saturn, such as their atmospheres, magnetic fields, particle environments, ring systems, and moons. They would fly by planets and moons in either a JST or JSX trajectory. After completing their flybys, the probes would communicate with Earth, relaying vital data using their magnetometers, spectrometers, and other instruments to detect interstellar, solar, and cosmic radiation. Their radioisotope thermoelectric generators (RTGs) would limit the maximum communication time with the probes to roughly a decade. Following their primary missions, the probes would continue to drift into interstellar space.
Voyager 2 was the first to be launched. Its trajectory was designed to allow flybys of Jupiter, Saturn, Uranus, and Neptune. Voyager 1 was launched after Voyager 2, but along a shorter and faster trajectory that was designed to provide an optimal flyby of Saturn's moon Titan, which was known to be quite large and to possess a dense atmosphere. This encounter sent Voyager 1 out of the plane of the ecliptic, ending its planetary science mission. Had Voyager 1 been unable to perform the Titan flyby, the trajectory of Voyager 2 could have been altered to explore Titan, forgoing any visit to Uranus and Neptune. Voyager 1 was not launched on a trajectory that would have allowed it to continue to Uranus and Neptune, but could have continued from Saturn to Pluto without exploring Titan.
During the 1990s, Voyager 1 overtook the slower deep-space probes Pioneer 10 and Pioneer 11 to become the most distant human-made object from Earth, a record that it will keep for the foreseeable future. The New Horizons probe, which had a higher launch velocity than Voyager 1, is travelling more slowly due to the extra speed Voyager 1 gained from its flybys of Jupiter and Saturn. Voyager 1 and Pioneer 10 are the most widely separated human-made objects anywhere since they are travelling in roughly opposite directions from the Solar System.
In December 2004, Voyager 1 crossed the termination shock, where the solar wind is slowed to subsonic speed, and entered the heliosheath, where the solar wind is compressed and made turbulent due to interactions with the interstellar medium. On 10 December 2007, Voyager 2 also reached the termination shock, about 10 AU closer to the Sun than where Voyager 1 first crossed it, indicating that the Solar System is asymmetrical.
In 2010 Voyager 1 reported that the outward velocity of the solar wind had dropped to zero, and scientists predicted it was nearing interstellar space. In 2011, data from the Voyagers determined that the heliosheath is not smooth, but filled with giant magnetic bubbles, theorized to form when the magnetic field of the Sun becomes warped at the edge of the Solar System.
In June 2012, scientists at NASA reported that Voyager 1 was very close to entering interstellar space, indicated by a sharp rise in high-energy particles from outside the Solar System. In September 2013, NASA announced that Voyager 1 had crossed the heliopause on 25 August 2012, making it the first spacecraft to enter interstellar space.
In December 2018, NASA announced that Voyager 2 had crossed the heliopause on 5 November 2018, making it the second spacecraft to enter interstellar space.
Voyager 1 and Voyager 2 continue to monitor conditions in the outer expanses of the Solar System. The Voyager spacecraft are expected to be able to operate science instruments through 2020, when limited power will require instruments to be deactivated one by one. Sometime around 2025, there will no longer be sufficient power to operate any science instruments.
In July 2019, a revised power management plan was implemented to better manage the two probes' dwindling power supply.
Spacecraft design
Each Voyager spacecraft was heavier at launch than it is now, owing to the propellant expended since then, and part of each spacecraft's mass is its complement of scientific instruments. The identical Voyager spacecraft use three-axis-stabilized guidance systems that use gyroscopic and accelerometer inputs to their attitude control computers to point their high-gain antennas towards the Earth and their scientific instruments towards their targets, sometimes with the help of a movable instrument platform for the smaller instruments and the electronic photography system.
The diagram shows the high-gain antenna (HGA), a 3.7 m diameter dish, attached to the hollow decagonal electronics container. There is also a spherical tank that contains the hydrazine monopropellant fuel.
The Voyager Golden Record is attached to one of the bus sides. The angled square panel to the right is the optical calibration target and excess heat radiator. The three radioisotope thermoelectric generators (RTGs) are mounted end-to-end on the lower boom.
The scan platform comprises: the Infrared Interferometer Spectrometer (IRIS) (largest camera at top right); the Ultraviolet Spectrometer (UVS) just above the IRIS; the two Imaging Science Subsystem (ISS) vidicon cameras to the left of the UVS; and the Photopolarimeter System (PPS) under the ISS.
Only five investigation teams are still supported, though data is collected for two additional instruments.
The Flight Data Subsystem (FDS) and a single eight-track digital tape recorder (DTR) provide the data handling functions.
The FDS configures each instrument and controls instrument operations. It also collects engineering and science data and formats the data for transmission. The DTR is used to record high-rate Plasma Wave Subsystem (PWS) data, which is played back every six months.
The Imaging Science Subsystem, made up of a wide-angle and a narrow-angle camera, is a modified version of the slow-scan vidicon camera designs that were used in the earlier Mariner flights. The Imaging Science Subsystem consists of two television-type cameras, each with eight filters in a commandable filter wheel mounted in front of the vidicons. One has a low-resolution 200 mm focal-length wide-angle lens with an aperture of f/3 (the wide-angle camera), while the other uses a higher-resolution 1,500 mm narrow-angle f/8.5 lens (the narrow-angle camera).
Three spacecraft were built, Voyager 1 (VGR 77-1), Voyager 2 (VGR 77-3), and test spare model (VGR 77-2).
Scientific instruments
Computers and data processing
There are three different computer types on the Voyager spacecraft, two of each kind, sometimes used for redundancy. They are proprietary, custom-built computers built from CMOS and TTL medium-scale integrated circuits and discrete components, mostly from the 7400 series of Texas Instruments. The total number of words among the six computers is about 32K. Voyager 1 and Voyager 2 have identical computer systems.
The Computer Command System (CCS), the central controller of the spacecraft, has two 18-bit word, interrupt-type processors with 4096 words each of non-volatile plated-wire memory. During most of the Voyager mission the two CCS computers on each spacecraft were used non-redundantly to increase the command and processing capability of the spacecraft. The CCS is nearly identical to the system flown on the Viking spacecraft.
The Flight Data System (FDS) is two 16-bit word machines with modular memories and 8198 words each.
The Attitude and Articulation Control System (AACS) is two 18-bit word machines with 4096 words each.
Unlike the other on-board instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). More recent space probes, since about 1990, usually have completely autonomous cameras.
The computer command subsystem (CCS) controls the cameras. The CCS contains fixed computer programs such as command decoding, fault detection, and correction routines, antenna-pointing routines, and spacecraft sequencing routines. This computer is an improved version of the one that was used in the Viking orbiter. The hardware in both custom-built CCS subsystems in the Voyagers is identical. There is only a minor software modification for one of them that has a scientific subsystem that the other lacks.
According to the Guinness Book of Records, the CCS holds the record for the "longest period of continual operation for a computer". It has been running continuously since 20 August 1977.
The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation (its attitude). It keeps the high-gain antenna pointing towards the Earth, controls attitude changes, and points the scan platform. The custom-built AACS systems on both craft are identical.
It has been erroneously reported on the Internet that the Voyager space probes were controlled by a version of the RCA 1802 (RCA CDP1802 "COSMAC" microprocessor), but such claims are not supported by the primary design documents. The CDP1802 microprocessor was used later in the Galileo space probe, which was designed and built years later. The digital control electronics of the Voyagers were not based on a microprocessor integrated-circuit chip.
Communications
The uplink communications are executed via S-band microwave communications. The downlink communications are carried out by an X-band microwave transmitter on board the spacecraft, with an S-band transmitter as a back-up. All long-range communications to and from the two Voyagers have been carried out using their high-gain antennas. The high-gain antenna has a beamwidth of 0.5° for X-band, and 2.3° for S-band. (The low-gain antenna has a 7 dB gain and 60° beamwidth.)
Because of the inverse-square law in radio communications, the digital data rates used in the downlinks from the Voyagers have been continually decreasing the farther they get from the Earth. For example, the data rate used from Jupiter was about 115,000 bits per second. That was halved at the distance of Saturn, and it has gone down continually since then. Some measures were taken on the ground along the way to reduce the effects of the inverse-square law. Between 1982 and 1985, the diameters of the three main parabolic dish antennas of the Deep Space Network were increased from 64 m to 70 m, dramatically increasing their areas for gathering weak microwave signals.
While the craft were between Saturn and Uranus, the onboard software was upgraded to perform a degree of image compression and to use more efficient Reed-Solomon error-correcting encoding.
Then, between 1986 and 1989, new techniques were brought into play to combine the signals from multiple antennas on the ground into one more powerful signal, forming a kind of antenna array. This was done at Goldstone, California, Canberra (Australia), and Madrid (Spain) using the additional dish antennas available there. Also, in Australia, the Parkes Radio Telescope was brought into the array in time for the fly-by of Neptune in 1989. In the United States, the Very Large Array in New Mexico was brought into temporary use along with the antennas of the Deep Space Network at Goldstone. Using this new technology of antenna arrays helped to compensate for the immense radio distance from Neptune to the Earth.
Power
Electrical power is supplied by three MHW-RTG radioisotope thermoelectric generators (RTGs). They are powered by plutonium-238 (distinct from the Pu-239 isotope used in nuclear weapons) and provided approximately 470 W at 30 volts DC when the spacecraft were launched. Plutonium-238 decays with a half-life of 87.74 years, so RTGs using Pu-238 lose 1 − 0.5^(1/87.74) ≈ 0.79% of their power output per year.
In 2011, 34 years after launch, the thermal power generated by such an RTG would be reduced to (1/2)^(34/87.74) ≈ 76% of its initial power. The RTG thermocouples, which convert thermal power into electricity, also degrade over time, reducing available electric power below this calculated level.
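The decay arithmetic above can be reproduced directly; this small sketch computes only the Pu-238 thermal decline and ignores the additional thermocouple degradation mentioned above.

```c
#include <math.h>
#include <stdio.h>

/* Fraction of the original Pu-238 thermal output remaining after t years,
 * given a half-life of 87.74 years: (1/2)^(t / 87.74). */
static double remaining_fraction(double years) {
    return pow(0.5, years / 87.74);
}

int main(void) {
    double per_year_loss  = 1.0 - remaining_fraction(1.0);   /* ~0.79% */
    double after_34_years = remaining_fraction(34.0);         /* ~76%   */
    printf("loss per year:          %.2f%%\n", per_year_loss * 100.0);
    printf("remaining after 34 yrs: %.0f%%\n", after_34_years * 100.0);
    return 0;
}
```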
By 7 October 2011 the power generated by Voyager 1 and Voyager 2 had dropped to 267.9 W and 269.2 W respectively, about 57% of the power at launch. The level of power output was better than pre-launch predictions based on a conservative thermocouple degradation model. As the electrical power decreases, spacecraft loads must be turned off, eliminating some capabilities. There may be insufficient power for communications by 2032.
Voyager Interstellar Mission
The Voyager primary mission was completed in 1989, with the close flyby of Neptune by Voyager 2. The Voyager Interstellar Mission (VIM) is a mission extension, which began when the two spacecraft had already been in flight for over 12 years. The Heliophysics Division of the NASA Science Mission Directorate conducted a Heliophysics Senior Review in 2008. The panel found that the VIM "is a mission that is absolutely imperative to continue" and that VIM "funding near the optimal level and increased DSN (Deep Space Network) support is warranted."
The main objective of the VIM was to extend the exploration of the Solar System beyond the outer planets to the heliopause (the farthest extent at which the Sun's radiation predominates over interstellar winds) and if possible even beyond. Voyager 1 crossed the heliopause boundary in 2012, followed by Voyager 2 in 2018. Passing through the heliopause boundary has allowed both spacecraft to make measurements of the interstellar fields, particles and waves unaffected by the solar wind. Two significant findings so far have been the discovery of a region of magnetic bubbles and no indication of an expected shift in the Solar magnetic field.
The entire Voyager 2 scan platform, including all of the platform instruments, was switched off in 1998. All platform instruments on Voyager 1, except for the ultraviolet spectrometer (UVS) have also been switched off.
The Voyager 1 scan platform was scheduled to go off-line in late 2000 but has been left on to investigate UV emission from the upwind direction.
UVS data are still captured but scans are no longer possible.
Gyro operations ended in 2016 for Voyager 2 and in 2017 for Voyager 1. Gyro operations are used to rotate the probe 360 degrees six times per year to measure the magnetic field of the spacecraft, which is then subtracted from the magnetometer science data.
The two spacecraft continue to operate, with some loss in subsystem redundancy but retain the capability to return scientific data from a full complement of Voyager Interstellar Mission (VIM) science instruments.
Both spacecraft also have adequate electrical power and attitude control propellant to continue operating until around 2025, after which there may not be electrical power to support science instrument operation; science data return and spacecraft operations will cease.
Mission details
By the start of VIM, Voyager 1 was at a distance of 40 AU from the Earth, while Voyager 2 was at 31 AU. VIM is in three phases: termination shock, heliosheath exploration, and interstellar exploration phase. The spacecraft began VIM in an environment controlled by the Sun's magnetic field, with the plasma particles being dominated by those contained in the expanding supersonic solar wind. This is the characteristic environment of the termination shock phase. At some distance from the Sun, the supersonic solar wind will be held back from further expansion by the interstellar wind. The first feature encountered by a spacecraft as a result of this interaction – between interstellar wind and solar wind – was the termination shock, where the solar wind slows to subsonic speed, and large changes in plasma flow direction and magnetic field orientation occur. Voyager 1 completed the phase of termination shock in December 2004 at a distance of 94 AU, while Voyager 2 completed it in August 2007 at a distance of 84 AU. After entering into the heliosheath, the spacecraft were in an area that is dominated by the Sun's magnetic field and solar wind particles. After passing through the heliosheath, the two Voyagers began the phase of interstellar exploration. The outer boundary of the heliosheath is called the heliopause. This is the region where the Sun's influence begins to decrease and interstellar space can be detected.
Voyager 1 is escaping the Solar System at the speed of 3.6 AU per year, 35° north of the ecliptic in the general direction of the solar apex in Hercules, while Voyager 2's speed is about 3.3 AU per year, heading 48° south of the ecliptic. The Voyager spacecraft will eventually go on to the stars. In about 40,000 years, Voyager 1 will be within 1.6 light years (ly) of AC+79 3888, also known as Gliese 445, which is approaching the Sun. In 40,000 years Voyager 2 will be within 1.7 ly of Ross 248 (another star which is approaching the Sun), and in 296,000 years it will pass within 4.6 ly of Sirius, the brightest star in the night sky. The spacecraft are not expected to collide with a star for 1 sextillion (10^20) years.
In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System, as detected by the Voyager space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose".
Voyager Golden Record
Both spacecraft carry a golden phonograph record that contains pictures and sounds of Earth, symbolic directions on the cover for playing the record, and data detailing the location of Earth. The record is intended as a combination time capsule and an interstellar message to any civilization, alien or far-future human, that may recover either of the Voyagers. The contents of this record were selected by a committee that included Timothy Ferris and was chaired by Carl Sagan.
Pale Blue Dot
Pale Blue Dot is a photograph of Earth taken on February 14, 1990, by the Voyager 1 space probe from a distance of approximately 6 billion kilometers (3.7 billion miles, 40.5 AU), as part of that day's Family Portrait series of images of the Solar System.
The Voyager program's discoveries during the primary phase of its mission, including new close-up color photos of the major planets, were regularly documented by print and electronic media outlets. Among the best-known of these is an image of the Earth as a Pale Blue Dot, taken in 1990 by Voyager 1 and popularized by Carl Sagan.
The trogons and quetzals are birds in the order Trogoniformes which contains only one family, the Trogonidae. The family Trogonidae contains 46 species in seven genera. The fossil record of the trogons dates back 49 million years to the Early Eocene. They might constitute a member of the basal radiation of the order Coraciiformes and order Passeriformes or be closely related to mousebirds and owls. The word trogon is Greek for "nibbling" and refers to the fact that these birds gnaw holes in trees to make their nests.
Trogons are residents of tropical forests worldwide. The greatest diversity is in the Neotropics, where four genera, containing 24 species, occur. The genus Apaloderma contains the three African species. The genera Harpactes and Apalharpactes, containing twelve species, are found in southeast Asia.
They feed on insects and fruit, and their broad bills and weak legs reflect their diet and arboreal habits. Although their flight is fast, they are reluctant to fly any distance. Trogons are generally not migratory, although some species undertake partial local movements. Trogons have soft, often colourful, feathers with distinctive male and female plumage. They are the only type of animal with a heterodactyl toe arrangement. They nest in holes dug into trees or termite nests, laying 2–4 white or pastel-coloured eggs.
Evolution and taxonomy
The position of the trogons within the class Aves has been a long-standing mystery. A variety of relations have been suggested, including the parrots, cuckoos, toucans, jacamars and puffbirds, rollers, owls and nightjars. More recent morphological and molecular evidence has suggested a relationship with the Coliiformes. The unique arrangement of the toes on the foot (see morphology and flight) has led many to consider the trogons to have no close relatives, and to place them in their own order, possibly with the similarly atypical mousebirds as their closest relatives.
The earliest formally described fossil specimen is a cranium from the Lower Eocene Fur Formation in Denmark (54 mya). Other trogoniform fossils have been found in the Messel pit deposits from the mid-Eocene in Germany (49 mya), and in Oligocene and Miocene deposits from Switzerland and France respectively. The oldest New World fossil of a trogon is from the comparatively recent Pleistocene (less than 2.588 mya).
The family had been thought to have an Old World origin notwithstanding the current richness of the family, which is more diverse in the Neotropical New World. DNA evidence seemed to support an African origin for the trogons, with the African genus Apaloderma seemingly basal in the family, and the other two lineages, the Asian and American, breaking off 20–36 million years ago. More recent studies show that the DNA evidence gives contradictory results concerning the basal phylogenetic relationships; so it is currently unknown if all extant trogons are descended from an African ancestor, an American ancestor or neither.
The trogons are split into three subfamilies, each reflecting one of these splits. Aplodermatinae is the African subfamily and contains a single genus, Apaloderma. Harpactinae is the Asian subfamily and contains two genera, Harpactes and Apalharpactes. Apalharpactes, consisting of two species in Java and Sumatra, has only recently been accepted as a separate genus from Harpactes. The remaining subfamily, the Neotropical Trogoninae, contains the remaining four genera, Trogon, Priotelus, Pharomachrus and Euptilotis.
The two Caribbean species of Priotelus were formerly placed in different genera (Temnotrogon on Hispaniola), and are extremely ancient. The two quetzal genera, Pharomachrus and Euptilotis, are possibly derived from the final and most numerous genus of trogons in the Neotropics, Trogon. A 2008 study of the genetics of Trogon suggested the genus originated in Central America and radiated into South America after the formation of the Isthmus of Panama (as part of the Great American Interchange), thus making trogons relatively recent arrivals in South America.
Distribution and habitat | Trogon | Wikipedia | 473 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
The majority of trogons are birds of tropical and subtropical forests. They have a cosmopolitan distribution in the world's wet tropics, being found in the Americas, Africa and Asia. A few species are distributed into the temperate zone, with one species, the elegant trogon, reaching the south of the United States, specifically southern Arizona and the surrounding area. The Narina trogon of Africa is slightly exceptional in that it utilises a wider range of habitats than any other trogon, ranging from dense forest to fairly open savannah, and from the Equator to southern South Africa. It is the most widespread and successful of all the trogons. The eared quetzal of Mexico is also able to use more xeric habitats, but preferentially inhabits forests. Most other species are more restricted in their habitat, with several species being restricted to undisturbed primary forest. Within forests they tend to be found in the mid-story, occasionally in the canopy.
Some species, particularly the quetzals, are adapted to cooler montane forest. There are a number of insular species: several are found in the Greater Sundas, one in the Philippines, and two are endemic to Cuba and Hispaniola respectively. Outside of Southeast Asia and the Caribbean, however, trogons are generally absent from islands, especially oceanic ones.
Trogons are generally sedentary, with no species known to undertake long migrations. A small number of species are known to make smaller migratory movements, particularly montane species which move to lower altitudes during different seasons. This has been demonstrated using radio tracking in the resplendent quetzal in Costa Rica, and evidence has been accumulated for a number of other species. The Narina trogon of Africa is thought to undertake some localised short-distance migrations over parts of its range; for example, birds of Zimbabwe's plateau savannah depart after the breeding season. A complete picture of these movements is, however, lacking. Trogons are difficult to study because their thick tarsi (leg bones) make ringing studies difficult.
Morphology and flight | Trogon | Wikipedia | 431 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
The trogons as a family are fairly uniform in appearance, having compact bodies and long tails (very long in the case of the quetzals), and short necks. Trogons range in size from the scarlet-rumped trogon, the smallest species, to the resplendent quetzal, the largest (not including the male quetzal's tail streamers). Their legs and feet are weak and short, and trogons are essentially unable to walk beyond a very occasional shuffle along a branch. They are even incapable of turning around on a branch without using their wings. The ratio of leg muscle to body weight in trogons is only 3%, the lowest known ratio of any bird. The arrangement of toes on the feet of trogons is also unique among birds: although it superficially resembles the zygodactyl two-forward, two-backward arrangement of parrots and other near-passerines, in trogons it is the second toe, rather than the fourth, that is turned backwards, an arrangement referred to as heterodactylous. The strong bill is short and the gape wide, particularly in the fruit-eating quetzals, with a slight hook at the end. There is also a notch at the end of the bill and many species have slight serrations in the mandibles. The skin is exceptionally tender, making preparation of study skins difficult for museum curators. The skeletons of trogons are surprisingly slender, particularly the skulls, which are very thin. The plumage of many species is iridescent, although not in most of the Asian species. The African trogons are generally green on the back with red bellies. The New World trogons similarly have green or deep blue upperparts but are more varied in their underparts. The Asian species tend towards red underparts and brown backs.
The wings are short but strong, with the wing muscle ratio being around 22% of the body weight. In spite of the strength of their flight, trogons do not fly often or for great distances, generally flying no more than a few hundred metres at a time. Only the montane species tend to make long-distance flights. Shorter flights tend to be direct and swift, but longer flights are slightly undulating. Their flight can be surprisingly silent (for observers), although that of a few species is reportedly quite noisy. | Trogon | Wikipedia | 482 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
Calls
The calls of trogons are generally loud and uncomplex, consisting of monosyllabic hoots and whistles delivered in varying patterns and sequences. The calls of the quetzals and the two Caribbean genera are the most complex. Among the Asian genera, the Sumatran trogon (Apalharpactes) has the most atypical call of any trogon; research has not yet established whether the closely related Javan trogon has a similar call. The calls of the other Asian genus, Harpactes, are remarkably uniform. In addition to the territorial and breeding calls given by males and females during the breeding seasons, trogons have been recorded as having aggression calls given by competing males, as well as alarm calls.
Behaviour
Trogons are generally inactive outside of infrequent feeding flights. Among birdwatchers and biologists it has been noted that "[a]part from their great beauty [they] are notorious ... for their lack of other immediately engaging qualities". Their lack of activity is possibly a defence against predation; trogons on all continents have been reported to shift about on branches to always keep their less brightly coloured backs turned towards observers, while their heads, which, like those of owls, can turn through 180 degrees, keep a watch on the watcher. Trogons have reportedly been preyed upon by hawks and predatory mammals; one report was of a resplendent quetzal taken by a margay while brooding young.
Diet and feeding
Trogons feed principally on insects, other arthropods, and fruit; to a lesser extent some small vertebrates such as lizards are taken. Among the insect prey taken, one of the more important types is caterpillars; along with cuckoos, trogons are one of the few bird groups to regularly prey upon them. Some caterpillars, however, such as those of Arsenura armida, are known to be poisonous to trogons. The extent to which each food type is taken varies depending on geography and species. The three African trogons are exclusively insectivorous, whereas the Asian and American genera consume varying amounts of fruit. Diet is somewhat correlated with size, with larger species feeding more on fruit and smaller species focusing on insects. | Trogon | Wikipedia | 459 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
Prey is almost always obtained on the wing. The most commonly employed foraging technique is a sally-glean flight, where a trogon flies from an observation perch to a target on another branch or in foliage. Once there, the bird hovers or stalls and snatches the item before returning to its perch to consume it. This type of foraging is commonly used by some types of bird to obtain insect prey; in trogons and quetzals it is also used to pluck fruit from trees. Insect prey may also be taken on the wing, with the trogon pursuing flying insects in a similar manner to drongos and Old World flycatchers. Frogs, lizards and large insects on the ground may also be pounced on from the air. More rarely, some trogons may shuffle along a branch to obtain insects, insect eggs and, very occasionally, nestling birds. Violaceous trogons will consume wasps and wasp larvae encountered while digging nests.
Breeding
Trogons are territorial and monogamous. Males will respond quickly to playbacks of their calls and will repel other members of the same species and even other hole-nesting species from around their nesting sites. Males attract females by singing, and, in the case of the resplendent quetzal, undertaking display flights. Some species have been observed in small flocks of 3–12 individuals prior to and sometimes during the breeding season, calling and chasing each other, but the function of these flocks is unclear.
Trogons are cavity nesters. Nests are dug into rotting wood or termite nests, with one species, the violaceous trogon, nesting in wasp nests. Nest cavities can either be deep upward slanting tubes that lead to fully enclosed chambers, or much shallower open niches (from which the bird is visible). Nests are dug with the beak, incidentally giving the family its name. Nest digging may be undertaken by the male alone or by both sexes. In the case of nests dug into tree trunks, the wood must be strong enough not to collapse but soft enough to dig out. Trogons have been observed landing on dead tree trunks and slapping the wood with their tails, presumably to test the firmness. | Trogon | Wikipedia | 455 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
The nests of trogons are thought to usually be unlined. Between two and four eggs are laid in a nesting attempt. These are round and generally glossy white or lightly coloured (buff, grey, blue or green), although they become increasingly dirty during incubation. Both parents incubate the eggs (except in the case of the bare-cheeked trogon, where apparently the male takes no part), with the male taking one long incubation stint a day and the female incubating the rest of the time. Incubation seems to begin after the last egg is laid. The incubation period varies by species, usually lasting 16–19 days. On hatching, the chicks are altricial, blind and naked. The chicks acquire feathers rapidly in some of the montane species, in the case of the mountain trogon in a week, but more slowly in lowland species like the black-headed trogon, which may take twice as long. The nestling period varies by species and size, with smaller species generally taking 16 to 17 days to fledge, whereas larger species may take as long as 30 days, although 23–25 days is more typical.
Relationship with humans
Trogons and quetzals are considered to be "among the most beautiful of birds", yet they are also often reclusive and seldom seen. Little is known about much of their biology, and much of what is known about them comes from the research of neotropical species by the ornithologist Alexander Skutch. Trogons are nevertheless popular birds with birdwatchers, and there is a modest ecotourism industry in particular to view quetzals in Central America.
Species list
Order Trogoniformes
Family Trogonidae | Trogon | Wikipedia | 360 | 47798 | https://en.wikipedia.org/wiki/Trogon | Biology and health sciences | Trogoniformes | null |
Huntington's disease (HD), also known as Huntington's chorea, is an incurable neurodegenerative disease that is mostly inherited. The earliest symptoms are often subtle problems with mood or mental/psychiatric abilities. A general lack of coordination and an unsteady gait often follow. It is also a basal ganglia disease causing a hyperkinetic movement disorder known as chorea. As the disease advances, the uncoordinated, involuntary body movements of chorea become more apparent. Physical abilities gradually worsen until coordinated movement becomes difficult and the person is unable to talk. Mental abilities generally decline into dementia, depression, apathy, and at times impulsivity. The specific symptoms vary somewhat between people. Symptoms can begin at any age but usually start between 30 and 50 years of age, most often around the age of 40. The disease may develop earlier in each successive generation. About eight percent of cases start before the age of 20 years and are known as juvenile HD, which typically presents with the slow-movement symptoms of Parkinson's disease rather than those of chorea.
HD is typically inherited from an affected parent, who carries a mutation in the huntingtin gene (HTT); however, up to 10% of cases are due to a new mutation. The huntingtin gene provides the genetic information for huntingtin protein (Htt). Expansion of the CAG (cytosine-adenine-guanine) repeat in the gene coding for the huntingtin protein, known as a trinucleotide repeat expansion, results in an abnormal mutant protein (mHtt), which gradually damages brain cells through a number of possible mechanisms. The mutation is genetically dominant, so inheriting a single expanded copy of the gene from either parent is sufficient to cause the disease. Diagnosis is by genetic testing, which can be carried out at any time, regardless of whether or not symptoms are present. This fact raises several ethical debates: the age at which an individual is considered mature enough to choose testing; whether parents have the right to have their children tested; and managing confidentiality and disclosure of test results. | Huntington's disease | Wikipedia | 432 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
No cure for HD is known, and full-time care is required in the later stages. Treatments can relieve some symptoms and, in some cases, improve quality of life. The best evidence for treatment of the movement problems is with tetrabenazine. HD affects about 4 to 15 in 100,000 people of European descent. It is rare among the Finnish and Japanese, while the occurrence rate in Africa is unknown. The disease affects males and females equally. Complications such as pneumonia, heart disease, and physical injury from falls reduce life expectancy, and fatal aspiration pneumonia is commonly cited as the ultimate cause of death for those with the condition. Suicide is the cause of death in about 9% of cases. Death typically occurs 15–20 years from when the disease was first detected.
The earliest known description of the disease was in 1841 by American physician Charles Oscar Waters. The condition was described in further detail in 1872 by American physician George Huntington. The genetic basis was discovered in 1993 by an international collaborative effort led by the Hereditary Disease Foundation. Research and support organizations began forming in the late 1960s to increase public awareness, provide support for individuals and their families and promote research. Research directions include determining the exact mechanism of the disease, improving animal models to aid with research, testing of medications and their delivery to treat symptoms or slow the progression of the disease, and studying procedures such as stem-cell therapy with the goal of replacing damaged or lost neurons.
Signs and symptoms
Signs and symptoms of Huntington's disease most commonly become noticeable between the ages of 30 and 50 years, but they can begin at any age and present as a triad of motor, cognitive, and psychiatric symptoms. When it develops at an early age, the condition is known as juvenile Huntington's disease. In 50% of cases, the psychiatric symptoms appear first. Progression is often described in terms of early, middle, and late stages, with an earlier prodromal phase. In the early stages, subtle personality changes, problems in cognition and physical skills, irritability, and mood swings occur, all of which may go unnoticed, and these usually precede the motor symptoms. Almost everyone with HD eventually exhibits similar physical symptoms, but the onset, progression, and extent of cognitive and behavioral symptoms vary significantly between individuals. | Huntington's disease | Wikipedia | 455 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
The most characteristic initial physical symptoms are jerky, random, and uncontrollable movements called chorea. Many people are not aware of their involuntary movements, nor are they impeded by them. Chorea may be initially exhibited as general restlessness, small unintentionally initiated or uncompleted motions, lack of coordination, or slowed saccadic eye movements. These minor motor abnormalities usually precede more obvious signs of motor dysfunction by at least three years. More obvious symptoms, such as rigidity, writhing motions, or abnormal posturing, appear as the disorder progresses. These are signs that the system in the brain that is responsible for movement has been affected. Psychomotor functions become increasingly impaired, such that any action that requires muscle control is affected. When impaired muscle control produces rigidity or sustained muscle contracture, this is known as dystonia, a neurological hyperkinetic movement disorder that results in twisting or repetitive movements which may resemble a tremor. Common consequences are physical instability, abnormal facial expression, and difficulties chewing, swallowing, and speaking. Sleep disturbances and weight loss are also associated symptoms. Eating difficulties commonly cause weight loss and may lead to malnutrition. Weight loss is common in people with Huntington's disease and progresses with the disease. Juvenile HD generally progresses at a faster rate with greater cognitive decline, and chorea is exhibited briefly, if at all; the Westphal variant of slowness of movement, rigidity, and tremors is more typical in juvenile HD, as are seizures.
Cognitive abilities are progressively impaired and tend to decline into dementia. Especially affected are executive functions, which include planning, cognitive flexibility, abstract thinking, rule acquisition, initiation of appropriate actions, and inhibition of inappropriate actions. Other cognitive impairments include difficulty focusing on tasks, a lack of flexibility, a lack of impulse control, a lack of awareness of one's own behaviors and abilities, and difficulty learning or processing new information. As the disease progresses, memory deficits tend to appear. Reported impairments range from short-term memory deficits to long-term memory difficulties, including deficits in episodic (memory of one's life), procedural (memory of the body of how to perform an activity), and working memory. | Huntington's disease | Wikipedia | 462 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Reported neuropsychiatric signs include anxiety, depression, a reduced display of emotions, egocentrism, aggression, compulsive behavior, hallucinations, and delusions. Other common psychiatric disorders include obsessive–compulsive disorder, mania, insomnia and bipolar disorder. Difficulties in recognizing other people's negative expressions have also been observed. The prevalence of these symptoms is highly variable between studies, with estimated rates for lifetime prevalence of psychiatric disorders between 33 and 76%. For many with the disease and their families, these symptoms are among the most distressing aspects of the disease, often affecting daily functioning and constituting reason for institutionalization. Early behavioral changes in HD result in an increased risk of suicide. Often, individuals have reduced awareness of their chorea and of their cognitive and emotional impairments.
Mutant huntingtin is expressed throughout the body and associated with abnormalities in peripheral tissues that are directly caused by such expression outside the brain. These abnormalities include muscle atrophy, cardiac failure, impaired glucose tolerance, weight loss, osteoporosis, and testicular atrophy.
Genetics
Everyone has two copies of the huntingtin gene (HTT), which codes for the huntingtin protein (Htt). HTT is also called the HD gene or the IT15 ("interesting transcript 15") gene. Part of this gene is a repeated section called a trinucleotide repeat, a short repeat which varies in length between individuals and may change length between generations. If the repeat is present in a healthy gene, a dynamic mutation may increase the repeat count and result in a defective gene. When the length of this repeated section reaches a certain threshold, it produces an altered form of the protein, called mutant huntingtin protein (mHtt). The differing functions of these proteins are the cause of pathological changes, which in turn cause the disease symptoms. The Huntington's disease mutation is genetically dominant and almost fully penetrant; mutation of either of a person's HTT alleles causes the disease. Inheritance does not depend on sex, but the severity of the disease depends on the length of the repeated section of the gene and hence can be influenced by the sex of the affected parent.
Genetic mutation | Huntington's disease | Wikipedia | 455 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
HD is one of several trinucleotide repeat disorders that are caused by the length of a repeated section of a gene exceeding a normal range. The HTT gene is located on the short arm of chromosome 4 at 4p16.3. HTT contains a sequence of three DNA bases—cytosine-adenine-guanine (CAG)—repeated multiple times (i.e. ... CAGCAGCAG ...), known as a trinucleotide repeat. CAG is the three-letter genetic code (codon) for the amino acid glutamine, so a series of them results in the production of a chain of glutamine known as a polyglutamine tract (or polyQ tract), and the repeated part of the gene, the polyQ region.
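The codon arithmetic described above can be illustrated with a short sketch (Python). The sequence fragment and function name here are hypothetical, and a real diagnostic assay sizes the repeat by PCR and fragment analysis rather than by inspecting a sequence string; this is only a minimal illustration of how a run of CAG codons corresponds to a polyglutamine tract.

def longest_cag_run(coding_sequence):
    """Return the longest run of consecutive 'CAG' codons read in frame."""
    longest = current = 0
    for i in range(0, len(coding_sequence) - 2, 3):  # walk codon by codon
        if coding_sequence[i:i + 3] == "CAG":
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Hypothetical fragment containing 23 CAG codons, a typical unaffected length:
fragment = "ATG" + "CAG" * 23 + "CCGCCA"
repeats = longest_cag_run(fragment)
print(repeats)        # 23
print("Q" * repeats)  # the encoded polyglutamine (polyQ) tract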
Generally, people have fewer than 36 repeated glutamines in the polyQ region, which results in the production of the cytoplasmic protein huntingtin. However, a sequence of 36 or more glutamines results in the production of a protein with different characteristics. This altered form, called mutant huntingtin (mHtt), increases the decay rate of certain types of neurons. Regions of the brain have differing amounts and reliance on these types of neurons and are affected accordingly. Generally, the number of CAG repeats is related to how much this process is affected, and accounts for about 60% of the variation of the age of the onset of symptoms. The remaining variation is attributed to the environment and other genes that modify the mechanism of HD. About 36 to 39 repeats result in a reduced-penetrance form of the disease, with a much later onset and slower progression of symptoms. In some cases, the onset may be so late that symptoms are never noticed. With very large repeat counts (more than 60), HD onset can occur below the age of 20, known as juvenile HD. Juvenile HD is typically of the Westphal variant that is characterized by slowness of movement, rigidity, and tremors. This accounts for about 7% of HD carriers.
Inheritance | Huntington's disease | Wikipedia | 432 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Huntington's disease has autosomal dominant inheritance, meaning that an affected individual typically inherits one copy of the gene with an expanded trinucleotide repeat (the mutant allele) from an affected parent. Since the penetrance of the mutation is very high, those who have a mutated copy of the gene will have the disease. In this type of inheritance pattern, each offspring of an affected individual has a 50% risk of inheriting the mutant allele and therefore of being affected by the disorder. This probability is independent of sex; sex-linked traits, by contrast, are those carried on the X or Y chromosomes.
Trinucleotide CAG repeats numbering over 28 are unstable during replication, and this instability increases with the number of repeats present. This usually leads to new expansions as generations pass (dynamic mutations) instead of reproducing an exact copy of the trinucleotide repeat. This causes the number of repeats to change in successive generations, such that an unaffected parent with an "intermediate" number of repeats (28–35), or "reduced penetrance" (36–40), may pass on a copy of the gene with an increase in the number of repeats that produces fully penetrant HD. The earlier age of onset and greater severity of disease in successive generations due to increases in the number of repeats is known as genetic anticipation. Instability is greater in spermatogenesis than oogenesis; maternally inherited alleles are usually of a similar repeat length, whereas paternally inherited ones have a higher chance of increasing in length. Rarely is Huntington's disease caused by a new mutation, where neither parent has over 36 CAG repeats.
In the rare situations where both parents have an expanded HD gene, the risk increases to 75%, and when either parent has two expanded copies, the risk is 100% (all children will be affected). Individuals with both genes affected are rare. For some time, HD was thought to be the only disease for which possession of a second mutated gene did not affect symptoms and progression, but it has since been found that it can affect the phenotype and the rate of progression. | Huntington's disease | Wikipedia | 445 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
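The risk figures quoted above follow from simple Mendelian segregation. The following is a minimal sketch of that arithmetic (Python); the function name is illustrative, full penetrance is assumed, and new mutations and repeat-length changes between generations are ignored.

def risk_of_inheriting_hd(parent_a_expanded, parent_b_expanded):
    """Probability that a child inherits at least one expanded HTT allele.

    Each argument is the number of expanded copies (0, 1 or 2) carried by
    that parent; each parent passes on one of their two copies at random.
    """
    p_a = parent_a_expanded / 2  # chance parent A transmits an expanded copy
    p_b = parent_b_expanded / 2  # chance parent B transmits an expanded copy
    return 1 - (1 - p_a) * (1 - p_b)

print(risk_of_inheriting_hd(1, 0))  # 0.5  - one heterozygous affected parent
print(risk_of_inheriting_hd(1, 1))  # 0.75 - both parents heterozygous
print(risk_of_inheriting_hd(2, 0))  # 1.0  - one parent with two expanded copies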
Mechanisms
Huntingtin protein interacts with over 100 other proteins, and appears to have multiple functions. The behavior of the mutated protein (mHtt) is not completely understood, but it is toxic to certain cell types, particularly brain cells. Early damage is most evident in the subcortical basal ganglia, initially in the striatum, but as the disease progresses, other areas of the brain are also affected, including regions of the cerebral cortex. Early symptoms are attributable to functions of the striatum and its cortical connections—namely control over movement, mood, and higher cognitive function. DNA methylation also appears to be changed in HD.
In 2025, scientists affiliated with Harvard and MIT reported an explanation for why symptoms of the disease start earlier in some people who inherit it and later in others: the repeated gene sequence becomes toxic once it reaches about 150 repeats.
Huntingtin function
Htt is expressed in all cells, with the highest concentrations found in the brain and testes, and moderate amounts in the liver, heart, and lungs. Its functions are unclear, but it does interact with proteins involved in transcription, cell signaling, and intracellular transport. In animals genetically modified to exhibit HD, several functions of Htt have been identified. In these animals, Htt is important for embryonic development, as its absence is related to embryonic death. Caspase, an enzyme which plays a role in catalyzing apoptosis, is thought to be activated by the mutated gene through damage to the ubiquitin-proteasome system. Htt also acts as an antiapoptotic agent preventing programmed cell death, and controls the production of brain-derived neurotrophic factor, a protein that protects neurons and regulates their creation during neurogenesis. Htt also facilitates synaptic vesicular transport and synaptic transmission, and controls neuronal gene transcription. If the expression of Htt is increased, brain cell survival is improved and the effects of mHtt are reduced, whereas when the expression of Htt is reduced, the resulting characteristics are more like those seen in the presence of mHtt. Accordingly, the disease is thought not to be caused by inadequate production of Htt, but by a toxic gain-of-function of mHtt in the body.
Cellular changes | Huntington's disease | Wikipedia | 473 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
The toxic action of mHtt may manifest and produce the HD pathology through multiple cellular changes. In its mutant (polyglutamine expanded) form, the protein is more prone to cleavage that creates shorter fragments containing the polyglutamine expansion. These protein fragments have a propensity to undergo misfolding and aggregation, yielding fibrillar aggregates in which non-native polyglutamine β-strands from multiple proteins are bonded together by hydrogen bonds. These aggregates share the same fundamental cross-beta amyloid architecture seen in other protein deposition diseases. Over time, the aggregates accumulate to form inclusion bodies within cells, ultimately interfering with neuronal function. Inclusion bodies have been found in both the cell nucleus and cytoplasm. Inclusion bodies in cells of the brain are one of the earliest pathological changes, and some experiments have found that they can be toxic for the cell, but other experiments have shown that they may form as part of the body's defense mechanism and help protect cells.
Several pathways by which mHtt may cause cell death have been identified. These include effects on chaperone proteins, which help fold proteins and remove misfolded ones; interactions with caspases, which play a role in the process of removing cells; the toxic effects of glutamine on nerve cells; impairment of energy production within cells; and effects on the expression of genes.
Mutant huntingtin protein has been found to play a key role in mitochondrial dysfunction. The impairment of mitochondrial electron transport can result in higher levels of oxidative stress and release of reactive oxygen species.
Glutamate is known to be excitotoxic when present in large amounts, causing damage to numerous cellular structures. Excessive glutamate is not found in HD, but the interactions of the altered huntingtin protein with numerous proteins in neurons lead to an increased vulnerability to glutamate. The increased vulnerability is thought to result in excitotoxic effects from normal glutamate levels.
Macroscopic changes | Huntington's disease | Wikipedia | 411 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Initially, damage to the brain is regionally specific with the dorsal striatum in the subcortical basal ganglia being primarily affected, followed later by cortical involvement in all areas. Other areas of the basal ganglia affected include the substantia nigra; cortical involvement includes cortical layers 3, 5, and 6; also evident is involvement of the hippocampus, Purkinje cells in the cerebellum, lateral tuberal nuclei of the hypothalamus and parts of the thalamus. These areas are affected according to their structure and the types of neurons they contain, reducing in size as they lose cells. Striatal medium spiny neurons are the most vulnerable, particularly ones with projections towards the external globus pallidus, with interneurons and spiny cells projecting to the internal globus pallidus being less affected. HD also causes an abnormal increase in astrocytes and activation of the brain's immune cells, microglia.
The basal ganglia play a key role in movement and behavior control. Their functions are not fully understood, but theories propose that they are part of the cognitive executive system and the motor circuit. The basal ganglia ordinarily inhibit a large number of circuits that generate specific movements. To initiate a particular movement, the cerebral cortex sends a signal to the basal ganglia that causes the inhibition to be released. Damage to the basal ganglia can cause the release or reinstatement of the inhibitions to be erratic and uncontrolled, so that movements may be started awkwardly, initiated unintentionally, or halted before or beyond their intended completion. The accumulating damage to this area causes the characteristic erratic movements associated with HD, known as chorea, a dyskinesia. Because the basal ganglia can no longer properly inhibit movements, affected individuals inevitably experience a reduced ability to produce speech and to swallow food and liquids (dysphagia).
Transcriptional dysregulation | Huntington's disease | Wikipedia | 419 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
CREB-binding protein (CBP), a transcriptional coregulator, is essential for cell function because, as a coactivator at a significant number of promoters, it activates the transcription of genes for survival pathways. CBP contains an acetyltransferase domain to which HTT binds through its polyglutamine-containing domain. Autopsied brains of those who had Huntington's disease have also been found to have markedly reduced amounts of CBP. In addition, when CBP is overexpressed, polyglutamine-induced death is diminished, further demonstrating that CBP plays an important role in Huntington's disease and in neurons in general.
Diagnosis
Diagnosis of the onset of HD can be made following the appearance of physical symptoms specific to the disease. Genetic testing can be used to confirm a physical diagnosis if no family history of HD exists. Even before the onset of symptoms, genetic testing can confirm if an individual or embryo carries an expanded copy of the trinucleotide repeat (CAG) in the HTT gene that causes the disease. Genetic counseling is available to provide advice and guidance throughout the testing procedure and on the implications of a confirmed diagnosis. These implications include the impact on an individual's psychology, career, family-planning decisions, relatives, and relationships. Despite the availability of pre-symptomatic testing, only 5% of those at risk of inheriting HD choose to do so.
Clinical | Huntington's disease | Wikipedia | 294 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
A physical examination, sometimes combined with a psychological examination, can determine whether the onset of the disease has begun. Excessive unintentional movements of any part of the body are often the reason for seeking medical consultation. If these are abrupt and have random timing and distribution, they suggest a diagnosis of HD. Cognitive or behavioral symptoms are rarely the first symptoms diagnosed; they are usually only recognized in hindsight or when they develop further. How far the disease has progressed can be measured using the unified Huntington's disease rating scale, which provides an overall rating system based on motor, behavioral, cognitive, and functional assessments. Medical imaging, such as a CT scan or MRI scan, can show atrophy of the caudate nuclei early in the disease, but these changes are not, by themselves, diagnostic of HD. Cerebral atrophy can be seen in the advanced stages of the disease. Functional neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), can show changes in brain activity before the onset of physical symptoms, but they are experimental tools and are not used clinically.
Predictive genetic testing | Huntington's disease | Wikipedia | 245 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Because HD follows an autosomal dominant pattern of inheritance, a strong motivation exists for individuals who are at risk of inheriting it to seek a diagnosis. The genetic test for HD consists of a blood test, which counts the number of CAG repeats in each of the HTT alleles. Cutoffs are given as follows (a minimal classification sketch follows the list):
At 40 or more CAG repeats, a full penetrance allele (FPA) is present. A "positive test" or "positive result" generally refers to this case. A positive result is not considered a diagnosis, since it may be obtained decades before the symptoms begin. However, a negative test means that the individual does not carry the expanded copy of the gene and will not develop HD. The test will tell a person who originally had a 50% chance of inheriting the disease whether their risk rises to 100% or is eliminated. Persons who test positive for the disease will develop HD sometime within their lifetimes, provided they live long enough for the disease to appear.
At 36 to 39 repeats, incomplete or reduced penetrance allele (RPA) may cause symptoms, usually later in the adult life. The maximum risk is 60% that a person with an RPA will be symptomatic at age 65, and 70% at 75.
At 27 to 35 repeats, intermediate allele (IA), or large normal allele, is not associated with symptomatic disease in the tested individual, but may expand upon further inheritance to give symptoms in offspring.
With 26 or fewer repeats, the result is not associated with HD. | Huntington's disease | Wikipedia | 320 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
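The cutoffs above amount to a simple threshold classification. The following is a minimal sketch (Python); the function name and category labels are illustrative only and are no substitute for clinical interpretation of a test result.

def classify_cag_repeats(repeats):
    """Map a CAG repeat count to the allele categories listed above."""
    if repeats >= 40:
        return "full penetrance allele (FPA): will develop HD"
    if repeats >= 36:
        return "reduced penetrance allele (RPA): may develop HD, often later in life"
    if repeats >= 27:
        return "intermediate allele (IA): unaffected, but may expand in offspring"
    return "26 or fewer repeats: not associated with HD"

for count in (18, 30, 38, 45):
    print(count, "->", classify_cag_repeats(count))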
Testing before the onset of symptoms is a life-changing event and a very personal decision. The main reason given for choosing to test for HD is to aid in career and family decisions. Predictive testing for Huntington's disease has been available via linkage analysis (which requires testing multiple family members) since 1986 and via direct mutation analysis since 1993. At that time, surveys indicated that 50–70% of at-risk individuals would have been interested in receiving testing, but since predictive testing has been offered, far fewer have chosen to be tested. Over 95% of individuals at risk of inheriting HD do not proceed with testing, mostly because the disease has no treatment. A key issue is the anxiety an individual experiences about not knowing whether they will eventually develop HD, compared to the impact of a positive result. Irrespective of the result, stress levels are lower two years after being tested, but the risk of suicide is increased after a positive test result. Individuals found to have not inherited the disorder may experience survivor guilt about family members who are affected. Other factors taken into account when considering testing include the possibility of discrimination and the implications of a positive result, which usually means a parent has an affected gene and that the individual's siblings will be at risk of inheriting it. In one study, genetic discrimination was found in 46% of individuals at risk for Huntington's disease. It occurred at higher rates within personal relationships than in health insurance or employment relations. Genetic counseling in HD can provide information, advice and support for initial decision-making, and then, if chosen, throughout all stages of the testing process. Because of the implications of this test, patients who wish to undergo testing must complete three counseling sessions which provide information about Huntington's disease.
Counseling and guidelines on the use of genetic testing for HD have become models for other genetic disorders, such as autosomal dominant cerebellar ataxia. Presymptomatic testing for HD has also influenced testing for other illnesses with genetic variants, such as polycystic kidney disease, familial Alzheimer's disease and breast cancer. The European Molecular Genetics Quality Network has published a yearly external quality assessment scheme for molecular genetic testing for this disease and has developed best practice guidelines for genetic testing for HD to assist in testing and reporting of results.
Preimplantation genetic diagnosis | Huntington's disease | Wikipedia | 468 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Embryos produced using in vitro fertilization may be genetically tested for HD using preimplantation genetic diagnosis. This technique, where one or two cells are extracted from a typically 4- to 8-cell embryo and then tested for the genetic abnormality, can then be used to ensure embryos affected with HD genes are not implanted, so any offspring will not inherit the disease. Some forms of preimplantation genetic diagnosis—non-disclosure or exclusion testing—allow at-risk people to have HD-free offspring without revealing their own parental genotype, giving no information about whether they themselves are destined to develop HD. In exclusion testing, the embryo's DNA is compared with that of the parents and grandparents to avoid inheritance of the chromosomal region containing the HD gene from the affected grandparent. In nondisclosure testing, only disease-free embryos are replaced in the uterus while the parental genotype and hence parental risk for HD are never disclosed.
Prenatal testing
Obtaining a prenatal diagnosis for an embryo or fetus in the womb is also possible, using fetal genetic material acquired through chorionic villus sampling. An amniocentesis can be performed if the pregnancy is further along, at 14–18 weeks. This procedure looks at the amniotic fluid surrounding the baby for indicators of the HD mutation. This, too, can be paired with exclusion testing to avoid disclosure of parental genotype. Prenatal testing can be done when a parent has been diagnosed with HD, has had genetic testing showing the expansion of the HTT gene, or has a 50% chance of inheriting the disease. The parents can be counseled on their options, which include termination of pregnancy, and on the difficulties of having a child with the identified gene.
In addition, in at-risk pregnancies due to an affected male partner, noninvasive prenatal diagnosis can be performed by analyzing cell-free fetal DNA in a blood sample taken from the mother (via venipuncture) between six and 12 weeks of pregnancy. It has no procedure-related risk of miscarriage.
Differential diagnosis | Huntington's disease | Wikipedia | 439 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
About 99% of HD diagnoses based on the typical symptoms and a family history of the disease are confirmed by genetic testing to have the expanded trinucleotide repeat that causes HD. Most of the remaining cases are called HD-like (HDL) syndromes. The cause of most HDL diseases is unknown, but those with known causes are due to mutations in the prion protein gene (HDL1), the junctophilin 3 gene (HDL2), a recessively inherited unknown gene (HDL3—only found in two families and poorly understood), and the gene encoding the TATA box-binding protein (SCA17, sometimes called HDL4). Other autosomal dominant diseases that can be misdiagnosed as HD are dentatorubral-pallidoluysian atrophy and neuroferritinopathy. Also, some autosomal recessive disorders resemble sporadic cases of HD. These include chorea acanthocytosis and pantothenate kinase-associated neurodegeneration. One X-linked disorder of this type is McLeod syndrome.
Management
Treatments are available to reduce the severity of some HD symptoms. For many of these treatments, evidence to confirm their effectiveness in treating symptoms of HD specifically is incomplete. As the disease progresses, the ability to care for oneself declines, and carefully managed multidisciplinary caregiving becomes increasingly necessary. Although relatively few studies have shown exercises and therapies to be helpful in rehabilitating the cognitive symptoms of HD, some evidence shows the usefulness of physical therapy, occupational therapy, and speech therapy.
Therapy
Weight loss and problems in eating due to dysphagia and other muscle discoordination are common, making nutrition management increasingly important as the disease advances. Thickening agents can be added to liquids, as thicker fluids are easier and safer to swallow. Reminding the affected person to eat slowly and to take smaller pieces of food into the mouth may also be of use to prevent choking. If eating becomes too hazardous or uncomfortable, the option of using a percutaneous endoscopic gastrostomy is available. This feeding tube, permanently attached through the abdomen into the stomach, reduces the risk of aspirating food and provides better nutritional management. Assessment and management by speech-language pathologists with experience in Huntington's disease is recommended. | Huntington's disease | Wikipedia | 479 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
People with Huntington's disease may see a physical therapist for noninvasive and nonmedication-based ways of managing the physical symptoms. Physical therapists may implement fall risk assessment and prevention, as well as strengthening, stretching, and cardiovascular exercises. Walking aids may be prescribed as appropriate. Physical therapists also prescribe breathing exercises and airway clearance techniques with the development of respiratory problems. Consensus guidelines on physiotherapy in Huntington's disease have been produced by the European HD Network. Goals of early rehabilitation interventions are prevention of loss of function. Participation in rehabilitation programs during the early to middle stage of the disease may be beneficial as it translates into long-term maintenance of motor and functional performance. Rehabilitation during the late stage aims to compensate for motor and functional losses. For long-term independent management, the therapist may develop home exercise programs for appropriate people.
Additionally, an increasing number of people with HD are turning to palliative care, which aims to improve quality of life through the treatment of the symptoms and stress of serious illness, in addition to their other treatments.
Medications
Tetrabenazine was approved in 2000 for treatment of chorea in Huntington's disease in the EU, and in 2008 in the US. Although other drugs had been used "off label", tetrabenazine was the first approved treatment for Huntington's disease in the U.S. The compound has been known since the 1950s. An alternative to tetrabenazine is amantadine but there is limited evidence for its safety and efficacy.
Other drugs that help to reduce chorea include antipsychotics and benzodiazepines. Hypokinesia and rigidity, especially in juvenile cases, can be treated with antiparkinsonian drugs, and myoclonic hyperkinesia can be treated with valproic acid. Tentative evidence has found ethyl eicosapentaenoic acid to improve motor symptoms at one year. In 2017, deutetrabenazine, a heavier form of tetrabenazine medication for the treatment of chorea in HD, was approved by the FDA. This is marketed as Austedo. | Huntington's disease | Wikipedia | 442 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Psychiatric symptoms can be treated with medications similar to those used in the general population. Selective serotonin reuptake inhibitors and mirtazapine have been recommended for depression, while atypical antipsychotics are recommended for psychosis and behavioral problems. Specialist neuropsychiatric input is recommended since people may require long-term treatment with multiple medications in combination.
Plant-based medications
A number of alternative plant-based therapies have been tried in ayurvedic medicine, although none has provided good evidence of efficacy. A recent study showed that the stromal processing peptidase (SPP), an enzyme found in plant chloroplasts, prevented the aggregation of proteins associated with Huntington's disease. However, repeat studies and clinical validation are needed to confirm its true therapeutic potential.
Education
The families of individuals who have inherited or are at risk of inheriting HD, and society at large, have generations of experience of HD but may be unaware of recent breakthroughs in understanding the disease, and of the availability of genetic testing. Genetic counseling benefits these individuals by updating their knowledge, seeking to dispel any unfounded beliefs that they may have, and helping them consider their future options and plans. The Patient Education Program for Huntington's Disease has been created to help educate family members, caretakers, and those diagnosed with Huntington's disease. Also covered is information concerning family planning choices, care management, and other considerations.
Prognosis
The length of the trinucleotide repeat accounts for 60% of the variation of the age of symptoms onset and their rate of progress. A longer repeat results in an earlier age of onset and a faster progression of symptoms. Individuals with more than sixty repeats often develop the disease before age 20, while those with fewer than 40 repeats may remain asymptomatic. The remaining variation is due to environmental factors and other genes that influence the mechanism of the disease. | Huntington's disease | Wikipedia | 396 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Life expectancy in HD is generally around 10 to 30 years following the onset of visible symptoms. In juvenile Huntington's disease, life expectancy is around 10 years after the onset of visible symptoms. Most life-threatening complications result from impaired muscle coordination and, to a lesser extent, from behavioral changes induced by declining cognitive function. The largest risk is pneumonia, which causes death in one third of those with HD. As the ability to synchronize movements deteriorates, difficulty clearing the lungs and an increased risk of aspirating food or drink both increase the risk of contracting pneumonia. The second-greatest risk is heart disease, which causes almost a quarter of fatalities of those with HD. Suicide is the third-greatest cause of fatalities, with 7.3% of those with HD taking their own lives and up to 27% attempting to do so. To what extent suicidal thoughts are influenced by behavioral symptoms is unclear, as they can signify a desire to avoid the later stages of the disease. The risk of suicide is greatest before a diagnosis is made and in the middle stages of the disease. Other associated risks include choking due to the inability to swallow, physical injury from falls, and malnutrition.
Epidemiology | Huntington's disease | Wikipedia | 250 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
The late onset of Huntington's disease means it does not usually affect reproduction. The worldwide prevalence of HD is 5–10 cases per 100,000 persons, but varies greatly geographically as a result of ethnicity, local migration and past immigration patterns. Prevalence is similar for men and women. The rate of occurrence is highest in peoples of Western European descent, averaging around seven per 100,000 people, and is lower in the rest of the world; e.g., one per million people of Asian and African descent. A 2013 epidemiological study of the prevalence of Huntington's disease in the UK between 1990 and 2010 found that the average prevalence for the UK was 12.3 per 100,000. Additionally, some localized areas have a much higher prevalence than their regional average. One of the highest incidences is in the isolated populations of the Lake Maracaibo region of Venezuela, where HD affects up to 700 per 100,000 persons. Other areas of high localization have been found in Tasmania and specific regions of Scotland, Wales and Sweden. Increased prevalence in some cases occurs due to a local founder effect, a historical migration of carriers into an area of geographic isolation. Some of these carriers have been traced back hundreds of years using genealogical studies. Genetic haplotypes can also give clues to the geographic variations of prevalence. Iceland, by contrast, has a rather low prevalence of 1 per 100,000, despite the fact that Icelanders are descended from the early Germanic tribes of Scandinavia which also gave rise to the Swedes; all cases but one derive from the offspring of a couple living in the early 19th century, nearly two centuries ago. Finland, as well, has a low incidence of only 2.2 per 100,000 people.
Until the discovery of a genetic test, statistics could only include clinical diagnoses based on physical symptoms and a family history of HD, excluding those who died of other causes before diagnosis. These cases can now be included in statistics, and as the test becomes more widely available, estimates of the prevalence and incidence of the disorder are likely to increase.
History
In centuries past, various kinds of chorea were at times called by names such as Saint Vitus' dance, with little or no understanding of their cause or type in each case. | Huntington's disease | Wikipedia | 467 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
The first definite mention of HD was in a letter by Charles Oscar Waters (1816–1892), published in the first edition of Robley Dunglison's Practice of Medicine in 1842. Waters described "a form of chorea, vulgarly called magrums", including accurate descriptions of the chorea, its progression, and the strong heredity of the disease. In 1846 Charles Rollin Gorman (1817–1879) observed how higher prevalence seemed to occur in localized regions. Independently of Gorman and Waters, both students of Dunglison at Jefferson Medical College in Philadelphia, the Norwegian physician Johan Christian Lund (1830–1906) also produced an early description in 1860. He specifically noted that in Setesdalen, a secluded mountain valley in Norway, the high prevalence of dementia was associated with a pattern of jerking movement disorders that ran in families.
The first thorough description of the disease was by George Huntington in 1872. Examining the combined medical history of several generations of a family exhibiting similar symptoms, he realized their conditions must be linked; he presented his detailed and accurate definition of the disease as his first paper. Huntington described the exact pattern of inheritance of autosomal dominant disease years before the rediscovery by scientists of Mendelian inheritance.
Sir William Osler was interested in the disorder and chorea in general, and was impressed with Huntington's paper, stating, "In the history of medicine, there are few instances in which a disease has been more accurately, more graphically or more briefly described." Osler's continued interest in HD, combined with his influence in the field of medicine, helped to rapidly spread awareness and knowledge of the disorder throughout the medical community. Great interest was shown by scientists in Europe, including Louis Théophile Joseph Landouzy, Désiré-Magloire Bourneville, Camillo Golgi, and Joseph Jules Dejerine, and until the end of the century, much of the research into HD was European in origin. By the end of the 19th century, research and reports on HD had been published in many countries and the disease was recognized as a worldwide condition. | Huntington's disease | Wikipedia | 421 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
During the rediscovery of Mendelian inheritance at the turn of the 20th century, HD was used tentatively as an example of autosomal dominant inheritance. English biologist William Bateson used the pedigrees of affected families to establish that HD had an autosomal dominant inheritance pattern. The strong inheritance pattern prompted several researchers, including Smith Ely Jelliffe, to attempt to trace and connect family members of previous studies. Jelliffe collected information from across New York and published several articles regarding the genealogy of HD in New England. Jelliffe's research roused the interest of his college friend, Charles Davenport, who commissioned Elizabeth Muncey to produce the first field study on the East Coast of the United States of families with HD and to construct their pedigrees. Davenport used this information to document the variable age of onset and range of symptoms of HD; he claimed that most cases of HD in the US could be traced back to a handful of individuals. This research was further embellished in 1932 by P. R. Vessie, who popularized the idea that three brothers who left England in 1630 bound for Boston were the progenitors of HD in the US. The claim that the earliest progenitors had been established, together with the eugenic bias of Muncey's, Davenport's, and Vessie's work, contributed to misunderstandings and prejudice about HD. Muncey and Davenport also popularized the idea that in the past, some with HD may have been thought to be possessed by spirits or victims of witchcraft, and were sometimes shunned or exiled by society. This idea has not been proven. Researchers have found contrary evidence; for instance, the community of the family studied by George Huntington openly accommodated those who exhibited symptoms of HD.
The search for the cause of this condition was enhanced considerably in 1968, when the Hereditary Disease Foundation (HDF) was created by Milton Wexler, a psychoanalyst based in Los Angeles, California, whose wife Leonore Sabin had been diagnosed earlier that year with Huntington's disease. The three brothers of Wexler's wife also had this disease. | Huntington's disease | Wikipedia | 435 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
The foundation was involved in the recruitment of more than 100 scientists in the US-Venezuela Huntington's Disease Collaborative Project, which over a 10-year period from 1979, worked to locate the genetic cause. This was achieved in 1983 when a causal gene was approximately located, and in 1993, the gene was precisely located at chromosome 4 (4p16.3). The study had focused on the populations of two isolated Venezuelan villages, Barranquitas and Lagunetas, where there was an unusually high prevalence of HD, and involved over 18,000 people, mostly from a single extended family, and resulted in making HD the first autosomal disease locus found using genetic linkage analysis. Among other innovations, the project developed DNA-marking methods which were an important step in making the Human Genome Project possible.
At the same time, key discoveries concerning the mechanisms of the disorder were being made, including the findings by Anita Harding's research group on the effects of the gene's length.
Modelling the disease in various types of animals, such as the transgenic mouse developed in 1996, enabled larger-scale experiments. As these animals have faster metabolisms and much shorter lifespans than humans, results from experiments are received sooner, speeding research. The 1997 discovery that mHtt fragments misfold led to the discovery of the nuclear inclusions they cause. These advances have led to increasingly extensive research into the proteins involved with the disease, potential drug treatments, care methods, and the gene itself.
The networks of care and support that had developed in Venezuela and Colombia during the research projects there in the 1970s through 2000s were eventually eroded by various forces, such as the ongoing crisis in Venezuela and the death of a lead researcher in Colombia (Jorge Daza Barriga). Doctors are working toward rekindling these networks because the people who have contributed to the science of Huntington's disease by participating in these studies deserve adequate follow-up care; societies elsewhere in the world that benefit from the scientific advances thus achieved owe at least that much to those who participated in the research.
The condition was formerly called Huntington's chorea, but this term has been replaced by Huntington's disease because not all patients develop chorea and because of the importance of cognitive and behavioral problems.
Society and culture
Ethics | Huntington's disease | Wikipedia | 460 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Genetic testing for Huntington's disease has raised several ethical issues. The issues for genetic testing include defining how mature an individual should be before being considered eligible for testing, ensuring the confidentiality of results, and whether companies should be allowed to use test results for decisions on employment, life insurance or other financial matters. There was controversy when Charles Davenport proposed in 1910 that compulsory sterilization and immigration control be used for people with certain diseases, including HD, as part of the eugenics movement. In vitro fertilization has some issues regarding its use of embryos. Some HD research has ethical issues due to its use of animal testing and embryonic stem cells.
The development of an accurate diagnostic test for Huntington's disease has caused social, legal, and ethical concerns over access to and use of a person's results.
Many guidelines and testing protocols include strict procedures for disclosure and confidentiality to allow individuals to decide when and how to receive their results and also to whom the results are made available. Insurance companies and businesses are faced with the question of whether to use genetic test results when assessing an individual, such as for life insurance or employment. The United Kingdom's insurance companies agreed with the Department of Health and Social Care that until 2017 customers would not need to disclose predictive genetic tests to them, but this agreement explicitly excluded the government-approved test for Huntington's when writing policies above a certain value. As with other untreatable genetic conditions with a later onset, it is ethically questionable to perform presymptomatic testing on a child or adolescent since there would be no medical benefit for that individual. There is consensus for testing only individuals who are considered cognitively mature, although there is a counter-argument that parents have a right to make the decision on their child's behalf. With the lack of effective treatment, testing a person under legal age who is not judged to be competent is considered unethical in most cases.
There are ethical concerns related to prenatal genetic testing or preimplantation genetic diagnosis to ensure a child is not born with a given disease. For example, prenatal testing raises the issue of selective abortion, a choice considered unacceptable by some. As it is a dominant disease, there are difficulties in situations in which a parent does not want to know his or her own diagnosis. This would require parts of the process to be kept secret from the parent.
Support organizations | Huntington's disease | Wikipedia | 487 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
In 1968, after experiencing HD in his wife's family, Dr. Milton Wexler was inspired to start the Hereditary Disease Foundation (HDF), with the aim of curing genetic illnesses by coordinating and supporting research. The foundation and Wexler's daughter, Nancy Wexler, were key parts of the research team in Venezuela which discovered the HD gene.
At roughly the same time as the HDF formed, Marjorie Guthrie helped to found the Committee to Combat Huntington's Disease (now the Huntington's Disease Society of America), after her husband, folk singer-songwriter Woody Guthrie, died from complications of HD.
Since then, support and research organizations have formed in many countries around the world and have helped to increase public awareness of HD. A number of these collaborate in umbrella organizations, like the International Huntington Association and the European HD network. Many support organizations hold an annual HD awareness event, some of which have been endorsed by their respective governments. For example, 6 June is designated "National Huntington's Disease Awareness Day" by the US Senate. Many organizations exist to support and inform those affected by HD, including the Huntington's Disease Association in the UK. The largest funder of research is the Cure Huntington's Disease Initiative Foundation (CHDI).
Research directions
Research into the mechanism of HD is focused on identifying the functioning of Htt, how mHtt differs or interferes with it, and the brain pathology that the disease produces. Research is conducted using in vitro methods, genetically modified animals (also called transgenic animal models), and human volunteers. Animal models are critical for understanding the fundamental mechanisms causing the disease, and for supporting the early stages of drug development. The identification of the causative gene has enabled the development of many genetically modified organisms, including nematodes (roundworms), Drosophila fruit flies, and genetically modified mammals including mice, rats, sheep, pigs and monkeys that express mutant huntingtin and develop progressive neurodegeneration and HD-like symptoms. | Huntington's disease | Wikipedia | 412 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Research is being conducted using many approaches to either prevent Huntington's disease or slow its progression. Disease-modifying strategies can be broadly grouped into three categories: reducing the level of the mutant huntingtin protein (including gene splicing and gene silencing); approaches aimed at improving neuronal survival by reducing the harm caused by the protein to specific cellular pathways and mechanisms (including protein homeostasis and histone deacetylase inhibition); and strategies to replace lost neurons. In addition, novel therapies to improve brain functioning are under development; these seek to produce symptomatic rather than disease-modifying therapies, and include phosphodiesterase inhibitors.
The CHDI Foundation funds a large number of research initiatives and has produced many publications. The CHDI Foundation is the largest funder of Huntington's disease research globally and aims to find and develop drugs that will slow the progression of HD. CHDI was formerly known as the High Q Foundation. In 2006, it spent $50 million on Huntington's disease research. CHDI collaborates with many academic and commercial laboratories globally and engages in oversight and management of research projects as well as funding.
Reducing huntingtin production | Huntington's disease | Wikipedia | 240 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Gene silencing aims to reduce the production of the mutant protein, since HD is caused by a single dominant gene encoding a toxic protein. Gene silencing experiments in mouse models have shown that when the expression of mHtt is reduced, symptoms improve. The safety of RNA interference and antisense oligonucleotide (ASO) methods of gene silencing has been demonstrated in mice and in the larger primate brain of macaques. Allele-specific silencing attempts to silence mutant htt while leaving wild-type Htt untouched. One way of accomplishing this is to identify polymorphisms present on only one allele and produce gene silencing drugs that target polymorphisms in only the mutant allele. The first gene silencing trial involving humans with HD began in 2015, testing the safety of IONIS-HTTRx, produced by Ionis Pharmaceuticals and led by UCL Institute of Neurology. Mutant huntingtin was detected and quantified for the first time in cerebrospinal fluid from Huntington's disease mutation-carriers in 2015 using a novel "single-molecule counting" immunoassay, providing a direct way to assess whether huntingtin-lowering treatments are achieving the desired effect. A phase 3 trial of this compound, renamed tominersen and sponsored by Roche Pharmaceuticals, began in 2019 but was halted in 2021 after the safety monitoring board concluded that the risk-benefit balance was unfavourable. A huntingtin-lowering gene therapy trial run by Uniqure began in 2019, and several trials of orally administered huntingtin-lowering splicing modulator compounds have been announced. Gene splicing techniques are being investigated to try to repair a genome with the erroneous gene that causes HD, using tools such as CRISPR/Cas9. PTC Therapeutics is evaluating small molecules that induce poison exon inclusion in HTT transcript as a therapeutic strategy to lower HTT expression. | Huntington's disease | Wikipedia | 402 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
Increasing huntingtin clearance
Another strategy to reduce the level of mutant huntingtin is to increase the rate at which cells are able to clear it. As mHtt (and many other protein aggregates) is degraded by autophagy, increasing the rate of autophagy has the potential to reduce levels of mHtt and thereby ameliorate disease. Pharmacological and genetic inducers of autophagy have been tested in a variety of Huntington's disease models; many have been shown to reduce mHtt levels and decrease toxicity.
Improving cell survival
Among the approaches aimed at improving cell survival in the presence of mutant huntingtin are correction of transcriptional regulation using histone deacetylase inhibitors, modulating aggregation of huntingtin, improving metabolism and mitochondrial function, and restoring the function of synapses.
Neuronal replacement
Stem-cell therapy aims to replace damaged neurons by transplantation of stem cells into affected regions of the brain. Experiments in animal models (rats and mice only) have yielded positive results.
Whatever their future therapeutic potential, stem cells are already a valuable tool for studying Huntington's disease in the laboratory.
Ferroptosis
Ferroptosis is a form of regulated cell death characterized by the iron-dependent accumulation of lipid hydroperoxides to lethal levels. ALOX5-mediated ferroptosis acts as a cell death pathway upon oxidative stress in Huntington's disease. Inhibitors of ferroptosis are protective in models of degenerative brain disorders, including
Parkinson's, Huntington's, and Alzheimer's diseases.
Clinical trials
In 2020, there were 197 clinical trials related to varied therapies and biomarkers for Huntington's disease listed as either underway, recruiting or newly completed. Compounds trialled that have failed to prevent or slow the progression of Huntington's disease include remacemide, coenzyme Q10, riluzole, creatine, minocycline, ethyl-EPA, phenylbutyrate and dimebon. | Huntington's disease | Wikipedia | 414 | 47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Biology and health sciences | Specific diseases | Health |
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. These two stochastic processes are considered the most important and central in the theory of stochastic processes, and were invented repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.
The term random function is also used to refer to a stochastic or random process, because a stochastic process can also be interpreted as a random element in a function space. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables. But often these two terms are used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects. | Stochastic process | Wikipedia | 429 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks, martingales, Markov processes, Lévy processes, Gaussian processes, random fields, renewal processes, and branching processes. The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis. The theory of stochastic processes is considered to be an important contribution to mathematics and it continues to be an active topic of research for both theoretical reasons and applications.
Introduction
A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set. The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or $n$-dimensional Euclidean space. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.
Classifications
A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space. | Stochastic process | Wikipedia | 374 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time. If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes. Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable. If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence.
If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is $n$-dimensional Euclidean space, then the stochastic process is called an $n$-dimensional vector process or $n$-vector process.
Etymology
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931. | Stochastic process | Wikipedia | 487 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888.
Terminology
The definition of a stochastic process varies, but a stochastic process is traditionally defined as a collection of random variables indexed by some set. The terms random process and stochastic process are considered synonyms and are used interchangeably, without the index set being precisely specified. Both "collection" and "family" are used, while instead of "index set", the terms "parameter set" or "parameter space" are sometimes used.
The term random function is also used to refer to a stochastic or random process, though sometimes it is only used when the stochastic process takes real values. This term is also used when the index sets are mathematical spaces other than the real line, while the terms stochastic process and random process are usually used when the index set is interpreted as time, and other terms are used such as random field when the index set is $n$-dimensional Euclidean space or a manifold.
Notation
A stochastic process can be denoted, among other ways, by $\{X(t)\}_{t \in T}$, $\{X_t\}_{t \in T}$, or simply as $X$. Some authors mistakenly write $X(t)$ even though it is an abuse of function notation. For example, $X(t)$ or $X_t$ are used to refer to the random variable with the index $t$, and not the entire stochastic process. If the index set is $T = [0, \infty)$, then one can write, for example, $(X_t, t \geq 0)$ to denote the stochastic process.
Examples
Bernoulli process | Stochastic process | Wikipedia | 428 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
One of the simplest stochastic processes is the Bernoulli process, which is a sequence of independent and identically distributed (iid) random variables, where each random variable takes either the value one or zero, say one with probability $p$ and zero with probability $1-p$. This process can be linked to an idealisation of repeatedly flipping a coin, where the probability of obtaining a head is taken to be $p$ and its value is one, while the value of a tail is zero. In other words, a Bernoulli process is a sequence of iid Bernoulli random variables, where each idealised coin flip is an example of a Bernoulli trial.
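To make the idealised coin flipping concrete, here is a minimal Python sketch (assuming NumPy is available; the parameter values and function name are illustrative only) that draws one realization of a Bernoulli process:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bernoulli_process(p, n):
    """Return n iid Bernoulli(p) values: 1 ("head") with probability p, else 0."""
    return (rng.random(n) < p).astype(int)

sample = bernoulli_process(p=0.3, n=20)
print(sample)          # one realization of the process, e.g. an array of 0s and 1s
print(sample.mean())   # empirical frequency of ones, close to p for large n
```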
Random walk
Random walks are stochastic processes that are usually defined as sums of iid random variables or random vectors in Euclidean space, so they are processes that change in discrete time. But some also use the term to refer to processes that change in continuous time, particularly the Wiener process used in financial models, which has led to some confusion, resulting in its criticism. There are various other types of random walks, defined so their state spaces can be other mathematical objects, such as lattices and groups, and in general they are highly studied and have many applications in different disciplines.
A classic example of a random walk is known as the simple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say, $p$, or decreases by one with probability $1-p$, so the index set of this random walk is the natural numbers, while its state space is the integers. If $p = 0.5$, this random walk is called a symmetric random walk.
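A minimal Python sketch of this simple random walk (assuming NumPy; the step count is arbitrary), with each step equal to +1 with probability $p$ and −1 otherwise, so that $p = 0.5$ gives the symmetric walk:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simple_random_walk(p, n_steps):
    """Partial sums of iid steps: +1 with probability p, -1 with probability 1 - p."""
    steps = np.where(rng.random(n_steps) < p, 1, -1)
    return np.concatenate(([0], np.cumsum(steps)))   # the walk starts at 0

path = simple_random_walk(p=0.5, n_steps=1_000)      # symmetric random walk on the integers
print(path[:10])
```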
Wiener process
The Wiener process is a stochastic process with stationary and independent increments that are normally distributed based on the size of the increments. The Wiener process is named after Norbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model for Brownian movement in liquids. | Stochastic process | Wikipedia | 451 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes. Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both continuous index set and state space. But the process can be defined more generally so its state space can be $n$-dimensional Euclidean space. If the mean of any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constant $\mu$, which is a real number, then the resulting stochastic process is said to have drift $\mu$.
Almost surely, a sample path of a Wiener process is continuous everywhere but nowhere differentiable. It can be considered as a continuous version of the simple random walk. The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled, which is the subject of Donsker's theorem or invariance principle, also known as the functional central limit theorem.
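A sketch (assuming NumPy; the horizon and discretisation are arbitrary choices) that approximates a sample path of the Wiener process directly from its independent, stationary, Gaussian increments, which is essentially a finely rescaled random walk in the spirit of Donsker's theorem:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def wiener_path(horizon, n_steps, drift=0.0):
    """Approximate sample path of a Wiener process on [0, horizon], with optional drift."""
    dt = horizon / n_steps
    # Independent, stationary increments: Normal(drift * dt, dt) on each small interval.
    increments = drift * dt + np.sqrt(dt) * rng.standard_normal(n_steps)
    times = np.linspace(0.0, horizon, n_steps + 1)
    values = np.concatenate(([0.0], np.cumsum(increments)))
    return times, values

t, w = wiener_path(horizon=1.0, n_steps=10_000)   # zero-drift Brownian motion on [0, 1]
```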
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes. The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black–Scholes–Merton model. The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.
Poisson process
The Poisson process is a stochastic process that has different forms and definitions. It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process. | Stochastic process | Wikipedia | 473 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process. The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes.
The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process. If the parameter constant of the Poisson process is replaced with some non-negative integrable function of $t$, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.
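A sketch (assuming NumPy; the rate and horizon are arbitrary) simulating a homogeneous Poisson process by summing iid exponential inter-arrival times; the number of points falling in $[0, t]$ is then Poisson distributed with mean equal to the rate times $t$:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def homogeneous_poisson_times(rate, horizon):
    """Event times of a homogeneous Poisson process with the given rate on [0, horizon]."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # iid exponential inter-arrival times
        if t > horizon:
            return np.array(times)
        times.append(t)

events = homogeneous_poisson_times(rate=2.0, horizon=10.0)
print(len(events))   # on average about rate * horizon = 20 events
```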
Defined on the real line, the Poisson process can be interpreted as a stochastic process, among other random objects. But it can also be defined on $n$-dimensional Euclidean space or other mathematical spaces, where it is often interpreted as a random set or a random counting measure, instead of a stochastic process. In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons. But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces.
Definitions
Stochastic process
A stochastic process is defined as a collection of random variables defined on a common probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra, and $P$ is a probability measure; and the random variables, indexed by some set $T$, all take values in the same mathematical space $S$, which must be measurable with respect to some $\sigma$-algebra $\Sigma$.
In other words, for a given probability space $(\Omega, \mathcal{F}, P)$ and a measurable space $(S, \Sigma)$, a stochastic process is a collection of $S$-valued random variables, which can be written as:
$$\{X(t) : t \in T\}.$$
Historically, in many problems from the natural sciences a point $t \in T$ had the meaning of time, so $X(t)$ is a random variable representing a value observed at time $t$. A stochastic process can also be written as $\{X(t, \omega) : t \in T\}$ to reflect that it is actually a function of two variables, $t \in T$ and $\omega \in \Omega$. | Stochastic process | Wikipedia | 485 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
There are other ways to consider a stochastic process, with the above definition being considered the traditional one. For example, a stochastic process can be interpreted or defined as an $S^T$-valued random variable, where $S^T$ is the space of all the possible functions from the set $T$ into the space $S$. However this alternative definition as a "function-valued random variable" in general requires additional regularity assumptions to be well-defined.
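A toy Python sketch (all names and values are illustrative) of the two equivalent views just described: a family of random variables indexed by $t$, or a map sending each outcome $\omega$ to an entire function of $t$:

```python
import numpy as np

rng = np.random.default_rng(seed=4)

index_set = range(5)     # a finite index set standing in for T = {0, 1, 2, 3, 4}
outcomes = range(3)      # three outcomes standing in for the sample space

# One realization per outcome: X[omega][t] plays the role of X(t, omega).
X = {omega: rng.standard_normal(len(index_set)) for omega in outcomes}

def random_variable_at(t):
    """View 1: fix t and look at the random variable X(t) across outcomes."""
    return np.array([X[omega][t] for omega in outcomes])

def sample_function(omega):
    """View 2: fix omega and look at the whole sample function t -> X(t, omega)."""
    return X[omega]

print(random_variable_at(2))   # values of X(2) for each outcome
print(sample_function(0))      # one entire sample path
```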
Index set
The set $T$ is called the index set or parameter set of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set the interpretation of time. In addition to these sets, the index set can be another set with a total order or a more general set, such as the Cartesian plane or $n$-dimensional Euclidean space, where an element $t \in T$ can represent a point in space. That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.
State space
The mathematical space $S$ of a stochastic process is called its state space. This mathematical space can be defined using integers, real lines, $n$-dimensional Euclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.
Sample function
A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process. More precisely, if $\{X(t, \omega) : t \in T\}$ is a stochastic process, then for any point $\omega \in \Omega$, the mapping
$$X(\cdot, \omega) : T \rightarrow S,$$
is called a sample function, a realization, or, particularly when $T$ is interpreted as time, a sample path of the stochastic process $\{X(t, \omega) : t \in T\}$. This means that for a fixed $\omega \in \Omega$, there exists a sample function that maps the index set $T$ to the state space $S$. Other names for a sample function of a stochastic process include trajectory, path function or path. | Stochastic process | Wikipedia | 388 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Increment
An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if $\{X(t) : t \in T\}$ is a stochastic process with state space $S$ and index set $T = [0, \infty)$, then for any two non-negative numbers $t_1$ and $t_2$ such that $t_1 \leq t_2$, the difference $X(t_2) - X(t_1)$ is an $S$-valued random variable known as an increment. When interested in the increments, often the state space $S$ is the real line or the natural numbers, but it can be $n$-dimensional Euclidean space or more abstract spaces such as Banach spaces.
Further definitions
Law
For a stochastic process $X \colon \Omega \rightarrow S^T$ defined on the probability space $(\Omega, \mathcal{F}, P)$, the law of stochastic process $X$ is defined as the pushforward measure:
$$\mu = P \circ X^{-1},$$
where $P$ is a probability measure, the symbol $\circ$ denotes function composition and $X^{-1}$ is the pre-image of the measurable function or, equivalently, the $S^T$-valued random variable $X$, where $S^T$ is the space of all the possible $S$-valued functions of $t \in T$, so the law of a stochastic process is a probability measure.
For a measurable subset $B$ of $S^T$, the pre-image of $X$ gives
$$X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\},$$
so the law of $X$ can be written as:
$$\mu(B) = P(\{\omega \in \Omega : X(\omega) \in B\}).$$
The law of a stochastic process or a random variable is also called the probability law, probability distribution, or the distribution.
Finite-dimensional probability distributions
For a stochastic process $X$ with law $\mu$, its finite-dimensional distribution for $t_1, \dots, t_n \in T$ is defined as:
$$\mu_{t_1, \dots, t_n} = P \circ (X(t_1), \dots, X(t_n))^{-1}.$$
This measure $\mu_{t_1, \dots, t_n}$ is the joint distribution of the random vector $(X(t_1), \dots, X(t_n))$; it can be viewed as a "projection" of the law $\mu$ onto a finite subset of $T$.
For any measurable subset $C$ of the $n$-fold Cartesian power $S^n = S \times \dots \times S$, the finite-dimensional distributions of a stochastic process $X$ can be written as:
$$\mu_{t_1, \dots, t_n}(C) = P(\{\omega \in \Omega : (X_{t_1}(\omega), \dots, X_{t_n}(\omega)) \in C\}).$$
The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions.
Stationarity
Stationarity is a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, if $\{X(t) : t \in T\}$ is a stationary stochastic process, then for any $t \in T$ the random variable $X(t)$ has the same distribution, which means that for any set of $n$ index set values $t_1, \dots, t_n$, the corresponding $n$ random variables $X(t_1), \dots, X(t_n)$ | Stochastic process | Wikipedia | 461 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
all have the same probability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line. But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time.
When the index set can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations. The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same. A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.
A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is when a discrete-time or continuous-time stochastic process $X$ is said to be stationary in the wide sense, then the process $X$ has a finite second moment for all $t \in T$ and the covariance of the two random variables $X(t)$ and $X(t+h)$ depends only on the number $h$ for all $t \in T$. Khinchin introduced the related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense.
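A small empirical illustration (assuming NumPy; the sizes are arbitrary) of wide-sense stationarity: for a sequence of iid standard normal variables, the sample covariance of $X(t)$ and $X(t+h)$, estimated across many realizations, depends only on the lag $h$ and not on $t$:

```python
import numpy as np

rng = np.random.default_rng(seed=5)

n_paths, n_times = 5_000, 50
X = rng.standard_normal((n_paths, n_times))   # iid noise, a stationary process

def autocov(t, h):
    """Sample covariance of X(t) and X(t + h), estimated across realizations."""
    a, b = X[:, t], X[:, t + h]
    return np.mean((a - a.mean()) * (b - b.mean()))

print(autocov(t=10, h=0), autocov(t=30, h=0))   # both roughly 1, independent of t
print(autocov(t=10, h=5), autocov(t=30, h=5))   # both roughly 0
```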
Filtration
A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration $\{\mathcal{F}_t\}_{t \in T}$, on a probability space $(\Omega, \mathcal{F}, P)$ is a family of sigma-algebras such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s \leq t$, where $s, t \in T$ and $\leq$ denotes the total order of the index set $T$. With the concept of a filtration, it is possible to study the amount of information contained in a stochastic process $X_t$ at $t \in T$, which can be interpreted as time $t$. The intuition behind a filtration $\mathcal{F}_t$ is that as time $t$ passes, more and more information on $X_t$ is known or available, which is captured in $\mathcal{F}_t$, resulting in finer and finer partitions of $\Omega$. | Stochastic process | Wikipedia | 462 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Modification
A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process $X$ that has the same index set $T$, state space $S$, and probability space $(\Omega, \mathcal{F}, P)$ as another stochastic process $Y$ is said to be a modification of $Y$ if for all $t \in T$ the following
$$P(X_t = Y_t) = 1$$
holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law and they are said to be stochastically equivalent or equivalent.
Instead of modification, the term version is also used; however, some authors use the term version when two stochastic processes have the same finite-dimensional distributions, but they may be defined on different probability spaces, so two processes that are modifications of each other are also versions of each other, in the latter sense, but not the converse.
If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version. The theorem can also be generalized to random fields so the index set is -dimensional Euclidean space as well as to stochastic processes with metric spaces as their state spaces.
Indistinguishable
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega, \mathcal{F}, P)$ with the same index set $T$ and state space $S$ are said to be indistinguishable if the following
$$P(X_t = Y_t \text{ for all } t \in T) = 1$$
holds. If two stochastic processes $X$ and $Y$ are modifications of each other and are almost surely continuous, then $X$ and $Y$ are indistinguishable.
Separability
Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space, which means that the index set has a dense countable subset. | Stochastic process | Wikipedia | 424 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
More precisely, a real-valued continuous-time stochastic process $X$ with a probability space $(\Omega, \mathcal{F}, P)$ is separable if its index set $T$ has a dense countable subset $U \subseteq T$ and there is a set $\Omega_0$ of probability zero, so $P(\Omega_0) = 0$, such that for every open set $G \subseteq T$ and every closed set $F \subseteq \mathbb{R}$, the two events $\{X_t \in F \text{ for all } t \in G \cap U\}$ and $\{X_t \in F \text{ for all } t \in G\}$ differ from each other at most on a subset of $\Omega_0$.
The definition of separability can also be stated for other index sets and state spaces, such as in the case of random fields, where the index set as well as the state space can be $n$-dimensional Euclidean space.
The concept of separability of a stochastic process was introduced by Joseph Doob. The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process. Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable. A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification. Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.
Independence
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega, \mathcal{F}, P)$ with the same index set $T$ are said to be independent if for all $n \in \mathbb{N}$ and for every choice of epochs $t_1, \dots, t_n \in T$, the random vectors $(X(t_1), \dots, X(t_n))$ and $(Y(t_1), \dots, Y(t_n))$ are independent.
Uncorrelatedness
Two stochastic processes $X$ and $Y$ are called uncorrelated if their cross-covariance is zero for all times. Formally:
$$\operatorname{K}_{XY}(t_1, t_2) = \operatorname{E}\left[(X(t_1) - \mu_X(t_1))(Y(t_2) - \mu_Y(t_2))\right] = 0 \quad \text{for all times } t_1, t_2.$$
Independence implies uncorrelatedness
If two stochastic processes $X$ and $Y$ are independent, then they are also uncorrelated.
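A one-line justification, sketched in LaTeX with the cross-covariance notation used above: under independence the expectation of the product factorizes, so the cross-covariance vanishes.

```latex
\operatorname{K}_{XY}(t_1, t_2)
  = \operatorname{E}\big[(X(t_1)-\mu_X(t_1))(Y(t_2)-\mu_Y(t_2))\big]
  = \operatorname{E}\big[X(t_1)-\mu_X(t_1)\big]\,
    \operatorname{E}\big[Y(t_2)-\mu_Y(t_2)\big]
  = 0 \cdot 0
  = 0.
```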
Orthogonality
Two stochastic processes $X$ and $Y$ are called orthogonal if their cross-correlation is zero for all times. Formally:
$$\operatorname{R}_{XY}(t_1, t_2) = \operatorname{E}\left[X(t_1)\overline{Y(t_2)}\right] = 0 \quad \text{for all times } t_1, t_2.$$
Skorokhod space | Stochastic process | Wikipedia | 385 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
A Skorokhod space, also written as Skorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as $[0, 1]$ or $[0, \infty)$, and take values on the real line or on some metric space. Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche. A Skorokhod function space, introduced by Anatoliy Skorokhod, is often denoted with the letter $D$, so the function space is also referred to as space $D$. The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, $D[0, 1]$ denotes the space of càdlàg functions defined on the unit interval $[0, 1]$.
Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space. Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space.
Regularity
In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues. For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.
Further examples
Markov processes and chains
Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.
The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. | Stochastic process | Wikipedia | 497 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
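A sketch (assuming NumPy; the stakes and win probability are arbitrary) of the gambler's ruin problem mentioned above as a discrete-time Markov chain: the next fortune depends only on the current fortune, and the chain stops at the absorbing states 0 and the target:

```python
import numpy as np

rng = np.random.default_rng(seed=6)

def gamblers_ruin(start, target, p, max_steps=100_000):
    """Simulate the gambler's fortune; True if the target is hit before ruin at 0."""
    fortune = start
    for _ in range(max_steps):
        if fortune in (0, target):           # absorbing states: ruin or success
            break
        fortune += 1 if rng.random() < p else -1
    return fortune == target

trials = 10_000
wins = sum(gamblers_ruin(start=5, target=10, p=0.5) for _ in range(trials))
print(wins / trials)   # close to start / target = 0.5 for the fair game
```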
A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it has also been common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers like Joseph Doob and Kai Lai Chung.
Markov processes form an important class of stochastic processes and have applications in many areas. For example, they are the basis for a general stochastic simulation method known as Markov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application in Bayesian statistics.
The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such as $n$-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.
Martingale
A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued, but they can also be complex-valued or even more general.
A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time. For a sequence of independent and identically distributed random variables with zero mean, the stochastic process formed from the successive partial sums is a discrete-time martingale. In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables. | Stochastic process | Wikipedia | 495 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
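A sketch (assuming NumPy) of the partial-sum martingale just described: with iid zero-mean steps, the expected next value given the history equals the current value, which the simulation checks by averaging many candidate next steps appended to one fixed history:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Discrete-time martingale: M_n = X_1 + ... + X_n with iid zero-mean steps X_i.
history = rng.choice([-1.0, 1.0], size=20)    # one fixed history of fair-coin steps
M_n = history.sum()                           # the current value M_20

# E[M_21 | history] = M_20 + E[X_21] = M_20, because the next step has mean zero.
candidate_next = M_n + rng.choice([-1.0, 1.0], size=100_000)
print(M_n, candidate_next.mean())             # the two values nearly coincide
```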
Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called the compensated Poisson process. Martingales can also be built from other martingales. For example, there are martingales based on the Wiener process, itself a martingale, forming continuous-time martingales.
Martingales mathematically formalize the idea of a 'fair game' where it is possible to form reasonable expectations for payoffs, and they were originally developed to show that it is not possible to gain an 'unfair' advantage in such a game. But now they are used in many areas of probability, which is one of the main reasons for studying them. Many problems in probability have been solved by finding a martingale in the problem and studying it. Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely to martingale convergence theorems.
Martingales have many applications in statistics, but it has been remarked that their use and application are not as widespread as they could be in the field of statistics, particularly statistical inference. They have found applications in areas in probability theory such as queueing theory and Palm calculus and other fields such as economics and finance.
Lévy process
Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time. These processes have many applications in fields such as finance, fluid mechanics, physics and biology. The main defining characteristics of these processes are their stationarity and independence properties, so they were known as processes with stationary and independent increments. In other words, a stochastic process $X$ is a Lévy process if for $n$ non-negative numbers, $0 \leq t_1 \leq \dots \leq t_n$, the corresponding $n-1$ increments
$$X_{t_2} - X_{t_1}, \dots, X_{t_n} - X_{t_{n-1}},$$
are all independent of each other, and the distribution of each increment only depends on the difference in time.
A Lévy process can be defined such that its state space is some abstract mathematical space, such as a Banach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, so $T = [0, \infty)$, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), and subordinators are all Lévy processes.
Random field | Stochastic process | Wikipedia | 489 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
A random field is a collection of random variables indexed by an $n$-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line. But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions. If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of stochastic process.
Point process
A point process is a collection of points randomly located on some mathematical space such as the real line, $n$-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field. There are different interpretations of a point process, such as a random counting measure or a random set. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear.
Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or $n$-dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.
History
Early probability theory
Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago, but very little analysis on them was done in terms of probability. The year 1654 is often considered the birth of probability theory when French mathematicians Pierre Fermat and Blaise Pascal had a written correspondence on probability, motivated by a gambling problem. But there was earlier mathematical work done on the probability of gambling games such as Liber de Ludo Aleae by Gerolamo Cardano, written in the 16th century but published posthumously in 1663. | Stochastic process | Wikipedia | 443 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
After Cardano, Jakob Bernoulli wrote Ars Conjectandi, which is considered a significant event in the history of probability theory. Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability. But despite some renowned mathematicians contributing to probability theory, such as Pierre-Simon Laplace, Abraham de Moivre, Carl Gauss, Siméon Poisson and Pafnuty Chebyshev, most of the mathematical community did not consider probability theory to be part of mathematics until the 20th century.
Statistical mechanics
In the physical sciences, scientists developed in the 19th century the discipline of statistical mechanics, where physical systems, such as containers filled with gases, are regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such as Rudolf Clausius, most of the work had little or no randomness.
This changed in 1859 when James Clerk Maxwell contributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he modelled the gas particles as moving in random directions at random velocities. The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius, Ludwig Boltzmann and Josiah Gibbs, which would later have an influence on Albert Einstein's mathematical model for Brownian movement.
Measure theory and probability theory
At the International Congress of Mathematicians in Paris in 1900, David Hilbert presented a list of mathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involving axioms. Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, where two of the founders were French mathematicians, Henri Lebesgue and Émile Borel. In 1925, another French mathematician Paul Lévy published the first probability book that used ideas from measure theory.
In the 1920s, fundamental contributions to probability theory were made in the Soviet Union by mathematicians such as Sergei Bernstein, Aleksandr Khinchin, and Andrei Kolmogorov. Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory. In the early 1930s, Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such as Eugene Slutsky and Nikolai Smirnov, and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line. | Stochastic process | Wikipedia | 511 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Birth of modern probability theory
In 1933, Andrei Kolmogorov published, in German, his book on the foundations of probability theory titled Grundbegriffe der Wahrscheinlichkeitsrechnung, where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics.
After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such as Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér.
Decades later, Cramér referred to the 1930s as the "heroic period of mathematical probability theory". World War II greatly interrupted the development of probability theory, causing, for example, the migration of Feller from Sweden to the United States of America and the death of Doeblin, considered now a pioneer in stochastic processes.
Stochastic processes after World War II
After World War II, the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas. Starting in the 1940s, Kiyosi Itô published papers developing the field of stochastic calculus, which involves stochastic integrals and stochastic differential equations based on the Wiener or Brownian motion process.
Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field of potential theory, with early ideas by Shizuo Kakutani and then later work by Joseph Doob. Further work, considered pioneering, was done by Gilbert Hunt in the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô. | Stochastic process | Wikipedia | 418 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
In 1953, Doob published his book Stochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability.
Doob also chiefly developed the theory of martingales, with later substantial contributions by Paul-André Meyer. Earlier work had been carried out by Sergei Bernstein, Paul Lévy and Jean Ville, the latter adopting the term martingale for the stochastic process. Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes.
Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations. The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s, fundamental work was done by Alexander Wentzell in the Soviet Union and Monroe D. Donsker and Srinivasa Varadhan in the United States of America, which would later result in Varadhan winning the 2007 Abel Prize. In the 1990s and 2000s the theories of Schramm–Loewner evolution and rough paths were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted in Fields Medals being awarded to Wendelin Werner in 2008 and to Martin Hairer in 2014.
The theory of stochastic processes still continues to be a focus of research, with yearly international conferences on the topic of stochastic processes.
Discoveries of specific stochastic processes
Although Khinchin gave mathematical definitions of stochastic processes in the 1930s, specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process. Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries. | Stochastic process | Wikipedia | 410 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Bernoulli process
The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied. The process is a sequence of independent Bernoulli trials, which are named after Jacob Bernoulli who used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens. Bernoulli's work, including the Bernoulli process, was published in his book Ars Conjectandi in 1713.
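As a concrete illustration, a Bernoulli process can be simulated by drawing independent biased coin flips. The sketch below is a minimal Python example; the success probability p and the number of trials are arbitrary choices made for illustration.

```python
import random

def bernoulli_process(p, n, seed=None):
    """Simulate n independent Bernoulli(p) trials (1 = success, 0 = failure)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process(p=0.3, n=10_000, seed=42)
print(sum(trials) / len(trials))  # empirical success frequency, close to 0.3
```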
Random walks
In 1905, Karl Pearson coined the term random walk while posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks. For example, the problem known as the Gambler's ruin is based on a simple random walk, and is an example of a random walk with absorbing barriers. Pascal, Fermat and Huygens all gave numerical solutions to this problem without detailing their methods, and then more detailed solutions were presented by Jacob Bernoulli and Abraham de Moivre.
For random walks in n-dimensional integer lattices, George Pólya published, in 1919 and 1921, work where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions.
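Pólya's recurrence result can be explored numerically. The sketch below, with an arbitrarily chosen step budget and number of trials, estimates how often a symmetric lattice walk revisits its starting point; the finite-horizon estimates only approximate the limiting return probabilities (one in dimensions one and two, strictly less than one in dimension three).

```python
import random

def returns_to_origin(dim, steps, rng):
    """One symmetric random walk on the integer lattice; True if it revisits the origin."""
    pos = [0] * dim
    for _ in range(steps):
        axis = rng.randrange(dim)
        pos[axis] += rng.choice((-1, 1))
        if all(c == 0 for c in pos):
            return True
    return False

rng = random.Random(0)
for dim in (1, 2, 3):
    trials = 200
    est = sum(returns_to_origin(dim, 5_000, rng) for _ in range(trials)) / trials
    print(f"d={dim}: estimated P(return within 5000 steps) = {est:.2f}")
```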
Wiener process
The Wiener process or Brownian motion process has its origins in different fields including statistics, finance and physics. In 1880, Danish astronomer Thorvald Thiele wrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis. The work is now considered as an early discovery of the statistical method known as Kalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time. | Stochastic process | Wikipedia | 457 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
The French mathematician Louis Bachelier used a Wiener process in his 1900 thesis in order to model price changes on the Paris Bourse, a stock exchange, without knowing the work of Thiele. It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him, and Bachelier's thesis is now considered pioneering in the field of financial mathematics.
It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas, which was cited by mathematicians including Doob, Feller and Kolmogorov. The book continued to be cited, but then starting in the 1960s, the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.
In 1905, Albert Einstein published a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from the kinetic theory of gases. Einstein derived a differential equation, known as a diffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement, Marian Smoluchowski published work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method.
Einstein's work, as well as experimental results obtained by Jean Perrin, later inspired Norbert Wiener in the 1920s to use a type of measure theory, developed by Percy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object.
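A Wiener process is commonly approximated on a computer by summing independent Gaussian increments whose variance equals the time step. The sketch below is one minimal way to do this; the horizon and step count are arbitrary illustrative choices.

```python
import math
import random

def wiener_path(T=1.0, n=1000, seed=0):
    """Approximate a standard Wiener process on [0, T] by cumulatively summing
    independent Gaussian increments with variance equal to the time step."""
    rng = random.Random(seed)
    dt = T / n
    times, values = [0.0], [0.0]
    for i in range(1, n + 1):
        times.append(i * dt)
        values.append(values[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return times, values

t, w = wiener_path()
print(w[-1])  # W(1), approximately N(0, 1) distributed
```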
Poisson process
The Poisson process is named after Siméon Poisson, due to its definition involving the Poisson distribution, but Poisson never studied the process. There are a number of claims for early uses or discoveries of the Poisson process.
At the beginning of the 20th century, the Poisson process would arise independently in different situations.
In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process. | Stochastic process | Wikipedia | 481 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Another discovery occurred in Denmark in 1909 when A.K. Erlang derived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the numbers of phone calls arriving in different intervals of time were independent of each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.
In 1910, Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Motivated by their work, Harry Bateman studied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process. After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists.
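A homogeneous Poisson process can be simulated from its independent, exponentially distributed inter-arrival times. The sketch below uses arbitrary rate and horizon values and checks that the mean number of arrivals matches rate times horizon.

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Arrival times of a homogeneous Poisson process with the given rate on
    [0, horizon], generated from independent exponential inter-arrival times."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return arrivals
        arrivals.append(t)

counts = [len(poisson_arrivals(rate=2.0, horizon=5.0, seed=s)) for s in range(10_000)]
print(sum(counts) / len(counts))  # close to rate * horizon = 10
```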
Markov processes
Markov processes and Markov chains are named after Andrey Markov who studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.
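Markov's convergence result can be illustrated numerically: repeatedly applying a transition matrix to an initial distribution drives it toward a fixed (stationary) vector. The two-state matrix below is purely illustrative.

```python
def step(dist, P):
    """One step of the chain: multiply the row distribution by the transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# A hypothetical two-state chain (illustrative numbers only).
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]           # start in state 0 with certainty
for _ in range(50):
    dist = step(dist, P)
print(dist)                  # converges to the stationary vector, roughly [0.833, 0.167]
```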
In 1912, Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. | Stochastic process | Wikipedia | 479 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.
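In discrete time, the Chapman–Kolmogorov equation says that the (m+n)-step transition matrix is the product of the m-step and n-step matrices. The sketch below checks this numerically for an arbitrary two-state chain; it is only a finite-state analogue of the continuous-time setting discussed above.

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matpow(P, k):
    """k-step transition matrix obtained by repeated multiplication."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(k):
        result = matmul(result, P)
    return result

P = [[0.7, 0.3],
     [0.2, 0.8]]
lhs = matpow(P, 5)                        # P^(2+3)
rhs = matmul(matpow(P, 2), matpow(P, 3))  # P^2 · P^3
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2)))
```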
Lévy processes
Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s, but they have connections to infinitely divisible distributions going back to the 1920s. In a 1932 paper, Kolmogorov derived a characteristic function for random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937. In addition to Lévy, Khinchin and Kolmogorov, early fundamental contributions to the theory of Lévy processes were made by Bruno de Finetti and Kiyosi Itô.
Mathematical construction
In mathematics, constructions of mathematical objects are needed to prove that they exist, and this is also the case for stochastic processes. There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions.
Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then using Kolmogorov's existence theorem to prove a corresponding stochastic process exists. This theorem, which is an existence theorem for measures on infinite product spaces, says that if any finite-dimensional distributions satisfy two conditions, known as consistency conditions, then there exists a stochastic process with those finite-dimensional distributions. | Stochastic process | Wikipedia | 507 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
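As a sketch of this second approach, the finite-dimensional distributions of standard Brownian motion are zero-mean Gaussian vectors with covariance min(s, t); one can prescribe and sample these distributions directly, and Kolmogorov's existence theorem guarantees that a process with exactly these distributions exists. The sample times below are arbitrary illustrative choices.

```python
import numpy as np

def brownian_fdd_sample(times, n_samples=5, seed=0):
    """Sample from the finite-dimensional distribution of standard Brownian motion
    at the given (strictly positive) times: a zero-mean Gaussian vector with
    covariance Cov(W_s, W_t) = min(s, t)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(times, dtype=float)
    cov = np.minimum.outer(t, t)
    return rng.multivariate_normal(np.zeros(len(t)), cov, size=n_samples)

print(brownian_fdd_sample([0.1, 0.5, 1.0]).shape)  # (5, 3): five joint samples at three times
```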
Construction issues
When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes. One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions. This means that the distribution of the stochastic process does not necessarily specify uniquely the properties of the sample functions of the stochastic process.
Another problem is that functionals of a continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined. For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable. For a continuous-time stochastic process X, other characteristics that depend on an uncountable number of points of the index set T include:
a sample function of a stochastic process X is a continuous function of t ∈ T;
a sample function of a stochastic process X is a bounded function of t ∈ T; and
a sample function of a stochastic process X is an increasing function of t ∈ T,
where the symbol ∈ can be read "a member of the set", as in t a member of the set T.
To overcome the two difficulties described above, i.e., "more than one..." and "functionals of...", different assumptions and approaches are possible.
Resolving construction issues
One approach for avoiding mathematical construction issues of stochastic processes, proposed by Joseph Doob, is to assume that the stochastic process is separable. Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set. Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied. | Stochastic process | Wikipedia | 442 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
Another approach is possible, originally developed by Anatoliy Skorokhod and Andrei Kolmogorov, for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, which is usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now used more often than the separability assumption, but a stochastic process constructed in this way will be automatically separable.
Although less used, the separability assumption is considered more general because every stochastic process has a separable version. It is also used when it is not possible to construct a stochastic process in a Skorokhod space. For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such as n-dimensional Euclidean space.
Application
Applications in Finance
Black-Scholes Model
One of the most famous applications of stochastic processes in finance is the Black-Scholes model for option pricing. Developed by Fischer Black, Myron Scholes, and Robert Merton, this model uses geometric Brownian motion, a specific type of stochastic process, to describe the dynamics of asset prices. The model assumes that the price of a stock follows a continuous-time stochastic process and provides a closed-form solution for pricing European-style options. The Black-Scholes formula has had a profound impact on financial markets, forming the basis for much of modern options trading.
The key assumption of the Black-Scholes model is that the price of a financial asset, such as a stock, follows a log-normal distribution, with its continuous returns following a normal distribution. Although the model has limitations, such as the assumption of constant volatility, it remains widely used due to its simplicity and practical relevance. | Stochastic process | Wikipedia | 407 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
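Under these assumptions the model admits the well-known closed-form price for a European call. The sketch below evaluates that formula, obtaining the standard normal CDF from the error function; the parameter values are purely illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money one-year call with illustrative rate and volatility: roughly 10.45.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```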
Stochastic Volatility Models
Another significant application of stochastic processes in finance is in stochastic volatility models, which aim to capture the time-varying nature of market volatility. The Heston model is a popular example, allowing for the volatility of asset prices to follow its own stochastic process. Unlike the Black-Scholes model, which assumes constant volatility, stochastic volatility models provide a more flexible framework for modeling market dynamics, particularly during periods of high uncertainty or market stress.
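A common way to work with the Heston model numerically is an Euler-type discretisation of the coupled price and variance equations. The sketch below uses a log-Euler step for the price and full truncation to keep the variance non-negative; all parameter values are hypothetical and chosen only for illustration.

```python
import math
import random

def heston_path(s0=100.0, v0=0.04, mu=0.05, kappa=1.5, theta=0.04,
                xi=0.3, rho=-0.7, T=1.0, n=252, seed=0):
    """Euler-Maruyama style simulation of one Heston path: the price uses a
    log-Euler step and the variance is truncated at zero (full truncation)."""
    rng = random.Random(seed)
    dt = T / n
    s, v = s0, v0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)  # correlated shock
        v_pos = max(v, 0.0)
        s *= math.exp((mu - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos * dt) * z2
    return s

print(heston_path())  # terminal price of one simulated path
```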
Applications in Biology
Population Dynamics
One of the primary applications of stochastic processes in biology is in population dynamics. In contrast to deterministic models, which assume that populations change in predictable ways, stochastic models account for the inherent randomness in births, deaths, and migration. The birth-death process, a simple stochastic model, describes how populations fluctuate over time due to random births and deaths. These models are particularly important when dealing with small populations, where random events can have large impacts, such as in the case of endangered species or small microbial populations.
Another example is the branching process, which models the growth of a population where each individual reproduces independently. The branching process is often used to describe population extinction or explosion, particularly in epidemiology, where it can model the spread of infectious diseases within a population.
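The extinction behaviour of a branching process can be estimated by simulation. The sketch below assumes Poisson-distributed offspring (a choice made only for illustration), caps the population to detect clear survival early, and contrasts a subcritical with a supercritical mean offspring number.

```python
import numpy as np

def extinction_probability(mean_offspring, trials=2000, generations=60, seed=0):
    """Monte Carlo estimate of the extinction probability of a Galton-Watson
    branching process with Poisson(mean_offspring) offspring, starting from one individual."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        population = 1
        for _ in range(generations):
            if population == 0 or population > 10_000:   # died out, or clearly surviving
                break
            population = rng.poisson(mean_offspring, size=population).sum()
        extinct += (population == 0)
    return extinct / trials

print(extinction_probability(0.8))   # subcritical: extinction is certain, estimate near 1.0
print(extinction_probability(1.5))   # supercritical: extinction probability well below 1
```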
Applications in Computer Science
Randomized Algorithms
Stochastic processes play a critical role in computer science, particularly in the analysis and development of randomized algorithms. These algorithms utilize random inputs to simplify problem-solving or enhance performance in complex computational tasks. For instance, Markov chains are widely used in probabilistic algorithms for optimization and sampling tasks, such as those employed in search engines like Google's PageRank. These methods balance computational efficiency with accuracy, making them invaluable for handling large datasets. Randomized algorithms are also extensively applied in areas such as cryptography, large-scale simulations, and artificial intelligence, where uncertainty must be managed effectively. | Stochastic process | Wikipedia | 418 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
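At its core, PageRank treats the link graph as a Markov chain and finds its stationary distribution by power iteration. The sketch below is a minimal version on a tiny hypothetical link graph, with dangling pages assumed to link uniformly to all pages; it is not the production algorithm, only an illustration of the underlying Markov-chain idea.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration for PageRank on a directed graph given as an adjacency
    matrix (adj[i][j] = 1 if page i links to page j)."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling pages link uniformly everywhere.
    P = np.where(out_deg[:, None] > 0, A / np.maximum(out_deg, 1)[:, None], 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * rank @ P
    return rank

links = [[0, 1, 1],
         [1, 0, 0],
         [0, 1, 0]]
print(pagerank(links))  # ranks sum to 1; the heavily linked-to page 1 ranks highest
```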
Queuing Theory
Another significant application of stochastic processes in computer science is in queuing theory, which models the random arrival and service of tasks in a system. This is particularly relevant in network traffic analysis and server management. For instance, queuing models help predict delays, manage resource allocation, and optimize throughput in web servers and communication networks. The flexibility of stochastic models allows researchers to simulate and improve the performance of high-traffic environments. For example, queueing theory is crucial for designing efficient data centers and cloud computing infrastructures. | Stochastic process | Wikipedia | 112 | 47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Mathematics | Statistics and probability | null |
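For the single-server M/M/1 queue these ideas can be made concrete: successive waiting times follow the Lindley recursion, and the long-run mean wait before service has the closed form ρ/(μ − λ). The sketch below compares a simulation against that formula; the arrival and service rates are arbitrary illustrative values.

```python
import random

def mm1_mean_wait(lam, mu, n_customers=200_000, seed=0):
    """Simulate an M/M/1 queue via the Lindley recursion and return the average
    waiting time in the queue (time from arrival until service starts)."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        service = rng.expovariate(mu)
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers

lam, mu = 0.8, 1.0                  # arrival and service rates
print(mm1_mean_wait(lam, mu))        # simulated mean wait
print(lam / (mu * (mu - lam)))       # theoretical M/M/1 mean wait: rho / (mu - lam) = 4.0
```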
In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other.
A nullary union refers to a union of zero (0) sets and it is by definition equal to the empty set.
For explanation of the symbols used in this article, refer to the table of mathematical symbols.
Union of two sets
The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. In set-builder notation,
A ∪ B = {x : x ∈ A or x ∈ B}.
For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:
A = {x : x is an even integer larger than 1}
B = {x : x is an odd integer larger than 1}
A ∪ B = {x : x is an integer larger than 1}
As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.
Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.
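In most programming languages these operations map directly onto built-in set types. The sketch below reproduces the examples above with Python sets, where duplicates collapse automatically.

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}

print(A | B)                   # {1, 2, 3, 4, 5, 6, 7}
print(A.union(B))              # equivalent method form
print({1, 2, 3} | {2, 3, 4})   # duplicates collapse: {1, 2, 3, 4}
```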
Algebraic properties
Binary union is an associative operation; that is, for any sets A, B, and C,
A ∪ (B ∪ C) = (A ∪ B) ∪ C.
Thus, the parentheses may be omitted without ambiguity: either of the above can be written as A ∪ B ∪ C. Also, union is commutative, so the sets can be written in any order.
The empty set is an identity element for the operation of union. That is, A ∪ ∅ = A, for any set A. Also, the union operation is idempotent: A ∪ A = A. All these properties follow from analogous facts about logical disjunction.
Intersection distributes over union,
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),
and union distributes over intersection,
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
The power set of a set U, together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula
A ∪ B = (Aᶜ ∩ Bᶜ)ᶜ,
where the superscript ᶜ denotes the complement in the universal set U. Alternatively, intersection can be expressed in terms of union and complementation in a similar way: A ∩ B = (Aᶜ ∪ Bᶜ)ᶜ. These two expressions together are called De Morgan's laws. | Union (set theory) | Wikipedia | 461 | 47949 | https://en.wikipedia.org/wiki/Union%20%28set%20theory%29 | Mathematics | Discrete mathematics | null |
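These identities can be checked mechanically on small finite sets. The sketch below does so with Python sets, taking an arbitrary finite universal set for the complements.

```python
U = set(range(10))                 # universal set used for the complements
A, B, C = {1, 2, 3}, {3, 4, 5}, {5, 6}

print(A | (B | C) == (A | B) | C)            # associativity
print(A | set() == A and A | A == A)         # identity element and idempotence
print(A & (B | C) == (A & B) | (A & C))      # intersection distributes over union
print(A | B == U - ((U - A) & (U - B)))      # De Morgan: union via intersection and complement
```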
Finite unions
One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C.
A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.
Arbitrary unions
The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. In symbols:
x ∈ ⋃M ⟺ ∃A ∈ M, x ∈ A.
This idea subsumes the preceding sections—for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
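The union of an arbitrary finite collection can likewise be computed in one step. The sketch below uses Python's set.union over a small hypothetical collection and shows that the union of the empty collection is the empty set.

```python
M = [{1, 2}, {2, 3}, {10}]
print(set().union(*M))      # union over the collection: {1, 2, 3, 10} (printed in arbitrary order)
print(set().union(*[]))     # union of the empty collection is the empty set: set()
```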
Notations
The notation for the general concept can vary considerably. For a finite union of sets S1, S2, ..., Sn one often writes S1 ∪ S2 ∪ ... ∪ Sn or ⋃_{i=1}^{n} S_i. Various common notations for arbitrary unions include ⋃M, ⋃_{A∈M} A, and ⋃_{i∈I} A_i. The last of these notations refers to the union of the collection {A_i : i ∈ I}, where I is an index set and A_i is a set for every i ∈ I. In the case that the index set I is the set of natural numbers, one uses the notation ⋃_{i=1}^{∞} A_i, which is analogous to that of the infinite sums in series.
When the symbol "∪" is placed before other symbols (instead of between them), it is usually rendered as a larger size.
Notation encoding
In Unicode, union is represented by the character ∪ (U+222A UNION). In TeX, ∪ is rendered from \cup and ⋃ is rendered from \bigcup. | Union (set theory) | Wikipedia | 368 | 47949 | https://en.wikipedia.org/wiki/Union%20%28set%20theory%29 | Mathematics | Discrete mathematics | null |
Authentication (from Greek authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.
Methods
Authentication is relevant to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems.
Authentication can be considered to be of three types:
The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or files and trust is established by known individuals signing each other's cryptographic key for instance. | Authentication | Wikipedia | 374 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury and are also vulnerable to being separated from the artifact and lost. | Authentication | Wikipedia | 505 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.
Consumer goods such as pharmaceuticals, perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.
Authentication factors
The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:
Knowledge: Something the user knows (e.g., a password, partial password, passphrase, personal identification number (PIN), challenge–response (the user must answer a question or pattern), security question).
Ownership: Something the user has (e.g., wrist band, ID card, security token, implanted device, cell phone with a built-in hardware token, software token, or cell phone holding a software token).
Inherence: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifiers). | Authentication | Wikipedia | 510 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
Single-factor authentication
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.
Multi-factor authentication
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.
For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still a two-factor authentication.
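As one concrete illustration of an ownership-factor element, the pseudorandom number displayed by many software tokens is a time-based one-time password: an HMAC of the current 30-second counter, truncated to a few digits. The sketch below follows the usual RFC 6238 recipe with a hypothetical shared secret; real deployments add secure secret provisioning, rate limiting, and tolerance for clock drift.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, period=30, digits=6):
    """Time-based one-time password in the RFC 6238 style: HMAC-SHA1 of the
    current 30-second counter, dynamically truncated to a short decimal code."""
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

shared_secret = b"hypothetical-shared-secret"   # provisioned to the user's token app
print(totp(shared_secret))                      # the code the token would currently display
```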
Authentication types
Strong authentication
The United States government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.
The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor and must also be incapable of being stolen off the Internet. In the European, as well as in the US-American understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeding those with more rigorous requirements.
The FIDO Alliance has been striving to establish technical specifications for strong authentication.
Continuous authentication
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method. | Authentication | Wikipedia | 484 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
Recent research has shown the possibility of using smartphone sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.
Digital authentication
The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant's identity, the CSP allows the applicant to become a subscriber.
Authentication – After becoming a subscriber, the user receives an authenticator, e.g., a token, and credentials, such as a user name. The subscriber is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof of possession of one or more authenticators.
Life-cycle maintenance – the CSP is charged with the task of maintaining the user's credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s).
The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting. | Authentication | Wikipedia | 483 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.
Products or their packaging can include a variable QR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies. To increase the security level, the QR Code can be combined with a digital watermark or copy detection pattern that are robust to copy attempts and can be authenticated with a smartphone.
A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified. | Authentication | Wikipedia | 289 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
Packaging
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms – graphics printed on seals, patches, foils or labels and used at the point of sale for visual verification
Micro-printing – second-line authentication often used on currencies
Serialized barcodes
UV printing – marks only visible under UV light
Track and trace systems – use codes to link products to the database tracking system
Water indicators – become visible when contacted with water
DNA tracking – genes embedded onto labels that can be traced
Color-shifting ink or film – visible marks that switch colors or texture when tilted
Tamper evident seals and tapes – destructible or graphically verifiable at point of sale
2d barcodes – data codes that can be tracked
RFID chips
NFC chips | Authentication | Wikipedia | 343 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
Information content
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message.
An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
Literacy and literature authentication
In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process. It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.
History and state-of-the-art | Authentication | Wikipedia | 459 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints are easily spoofable, with British Telecom's top computer security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.
In a computer data context, cryptographic methods have been developed which are not spoofable if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. However, it is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
Authorization
The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity. | Authentication | Wikipedia | 407 | 47967 | https://en.wikipedia.org/wiki/Authentication | Technology | Cryptography | null |
A polder () is a low-lying tract of land that forms an artificial hydrological entity, enclosed by embankments known as dikes. The three types of polder are:
Land reclaimed from a body of water, such as a lake or the seabed
Flood plains separated from the sea or river by a dike
Marshes separated from the surrounding water by a dike and subsequently drained; these are also known as koogs, especially in Germany
The ground level in drained marshes subsides over time. All polders will eventually be below the surrounding water level some or all of the time. Water enters the low-lying polder through infiltration and water pressure of groundwater, or rainfall, or transport of water by rivers and canals. This usually means that the polder has an excess of water, which is pumped out or drained by opening sluices at low tide. Care must be taken not to set the internal water level too low. Polder land made up of peat (former marshland) will sink in relation to its previous level, because of peat decomposing when exposed to oxygen from the air.
Polders are at risk of flooding at all times, and care must be taken to protect the surrounding dikes. Dikes are typically built with locally available materials, and each material has its own risks: sand is prone to collapse owing to saturation by water; dry peat is lighter than water and potentially unable to retain water in very dry seasons. Some animals dig tunnels in the barrier, allowing water to infiltrate the structure; the muskrat is known for this activity and hunted in certain European countries because of it. Polders are most commonly, though not exclusively, found in river deltas, former fenlands, and coastal areas.
Flooding of polders has also been used as a military tactic in the past. One example is the flooding of the polders along the Yser River during World War I. Opening the sluices at high tide and closing them at low tide turned the polders into an inaccessible swamp, which allowed the Allied armies to stop the German army. | Polder | Wikipedia | 425 | 48065 | https://en.wikipedia.org/wiki/Polder | Physical sciences | Artificial landforms | null |
The Netherlands has a large area of polders: as much as 20% of the land area has at some point in the past been reclaimed from the sea, thus contributing to the development of the country. IJsselmeer is the most famous polder project of the Netherlands. Some other countries which have polders are Bangladesh, Belgium, Canada and China. Some examples of Dutch polder projects are Beemster, Schermer, Flevopolder and Noordoostpolder.
Etymology
The Dutch word derives successively from Middle Dutch , from Old Dutch , and ultimately from pol-, a piece of land elevated above its surroundings, with the augmentative suffix -er and epenthetical -d-. The word has been adopted in thirty-six languages.
Netherlands
The Netherlands is frequently associated with polders, as its engineers became noted for developing techniques to drain wetlands and make them usable for agriculture and other development. This is illustrated by the saying "God created the world, but the Dutch created the Netherlands".
The Dutch have a long history of reclamation of marshes and fenland, resulting in some 3,000 polders nationwide. By 1961, about half of the country's land, , was reclaimed from the sea. About half the total surface area of polders in northwest Europe is in the Netherlands. The first embankments in Europe were constructed in Roman times. The first polders were constructed in the 11th century. The oldest extant polder is the Achtermeer polder, from 1533.
As a result of flooding disasters, water boards called waterschap (when situated more inland) or hoogheemraadschap (near the sea, mainly used in the Holland region) were set up to maintain the integrity of the water defences around polders, maintain the waterways inside a polder, and control the various water levels inside and outside the polder. Water boards hold separate elections, levy taxes, and function independently from other government bodies. Their function is basically unchanged even today. As such, they are the oldest democratic institutions in the country. The necessary cooperation among all ranks to maintain polder integrity gave its name to the Dutch version of third-way politics—the Polder Model. | Polder | Wikipedia | 455 | 48065 | https://en.wikipedia.org/wiki/Polder | Physical sciences | Artificial landforms | null |
The 1953 flood disaster prompted a new approach to the design of dikes and other water-retaining structures, based on an acceptable probability of overflowing. Risk is defined as the product of probability and consequences. The potential damage in lives, property, and rebuilding costs is compared with the potential cost of water defences. From these calculations follows an acceptable flood risk from the sea at one in 4,000–10,000 years, while it is one in 100–2,500 years for a river flood. The particular established policy guides the Dutch government to improve flood defences as new data on threat levels become available.
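The risk definition used here is simply the annual exceedance probability multiplied by the consequences. The sketch below works through that product for a few protection standards; the damage figure is entirely hypothetical and serves only to show the arithmetic.

```python
def expected_annual_damage(exceedance_probability, damage):
    """Risk as probability times consequences, per year."""
    return exceedance_probability * damage

# Hypothetical, illustrative damage figure (in billions of euros).
damage_if_flooded = 30.0
for return_period in (100, 4_000, 10_000):
    risk = expected_annual_damage(1 / return_period, damage_if_flooded)
    print(f"1-in-{return_period}-year standard: expected damage = {risk:.4f} bn/year")
```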
Major Dutch polders and the years they were laid dry include Beemster (1609–1612), Schermer (1633–1635), and Haarlemmermeerpolder (1852). Polders created as part of the Zuiderzee Works include Wieringermeerpolder (1930), Noordoostpolder (1942) and Flevopolder (1956–1968).
Examples of polders
Brazil
Several cities on the Paraíba Valley region (in the state of São Paulo) have polders on land claimed from the floodplains around the Paraíba do Sul river.
Bangladesh
Bangladesh has 139 polders, of which 49 are sea-facing, while the rest are along the numerous distributaries of the Ganges-Brahmaputra-Meghna River delta. These were constructed in the 1960s to protect the coast from tidal flooding and reduce salinity incursion. They reduce long-term flooding and waterlogging following storm surges from tropical cyclones. They are also cultivated for agriculture.
Belgium
De Moeren, near Veurne in West Flanders
Polders along the Yser river between Nieuwpoort and Diksmuide
Polders of Muisbroek and Ettenhoven, in Ekeren and Hoevenen
Polder of Stabroek, in Stabroek
Kabeljauwpolder, in Zandvliet
Scheldepolders on the left bank of the Scheldt
Uitkerkse polders, near Blankenberge in West Flanders
Prosperpolder, near Doel, Antwerp and Kieldrecht.
Canada
Tantramar Marshes
Holland Marsh
Pitt Polder Ecological Reserve
Grand Pré, Nova Scotia
Minas Basin
China
The city of Kunshan has over 100 polders. | Polder | Wikipedia | 490 | 48065 | https://en.wikipedia.org/wiki/Polder | Physical sciences | Artificial landforms | null |
History
The Jiangnan region, at the Yangtze River Delta, has a long history of constructing polders. Most of these projects were performed between the 10th and 13th centuries. The Chinese government also assisted local communities in constructing dikes for swampland water drainage. The Lijia (里甲) self-monitoring system of 110 households under a lizhang (里长) headman was used for the purposes of service administration and tax collection in the polder, with a liangzhang (粮长, grain chief) responsible for maintaining the water system and a tangzhang (塘长, dike chief) for polder maintenance.
Denmark
Filsø
Kolindsund
Lammefjorden
Finland
Söderfjärden
Munsmo
Two polders ( in total) near Vassor in Korsholm
France
Marais Poitevin
Les Moëres, adjacent to the Flemish polder De Moeren in Belgium.
Polders de Couesnon near Mont-Saint Michel in Normandy
Germany
In Germany, land reclaimed by diking is called a koog. The German Deichgraf system was similar to the Dutch and is widely known from Theodor Storm's novella The Rider on the White Horse.
Altes Land near Hamburg
Blockland and Hollerland near Bremen
Nordstrand, Germany
Bormerkoog and Meggerkoog near Friedrichstadt
36 koogs in the district of Nordfriesland
12 koogs in the district of Dithmarschen
In southern Germany, the term polder is used for retention basins recreated by opening dikes during river floodplain restoration, a meaning somewhat opposite to that in coastal context.
Guyana
Black Bush Polder, Corentyne, Berbice.
India
Kuttanad Region, Kerala
Ireland
Lough Swilly, County Donegal. Near Inch Island and Newtowncunningham.
Italy
Delta of the river Po, such as Bonifica Valle del Mezzano
Japan
Around the Ariake Sea in Kyushu, mainly in Saga but also in Fukuoka and Kumamoto Prefectures
Lithuania
Rusnė Island
Netherlands | Polder | Wikipedia | 416 | 48065 | https://en.wikipedia.org/wiki/Polder | Physical sciences | Artificial landforms | null |
Achtermeer, the oldest polder, from 1533
Alblasserwaard, containing the windmills of Kinderdijk, a World Heritage Site
Alkmaar
Andijk
Anna Paulownapolder
Beemster, a World Heritage Site
Bijlmermeer
Flevopolder, the largest artificial island in the world, last part drained in 1968
's-Gravesloot
Haarlemmermeer, containing Schiphol airport
Krimpenerwaard
Lauwersmeer
Mastenbroek, one of the oldest medieval polders, drained around 1363-1364.
Noordoostpolder
Prins Alexanderpolder
Purmer
Schermer
Watergraafsmeer
Wieringermeer
Wieringerwaard
Wijdewormer
Zestienhoven, home of the Rotterdam The Hague Airport (Overschie), in the city of Rotterdam.
Zuidplaspolder, along with Lammefjord in Denmark, the lowest point of the European Union
Poland
Vistula delta near Elbląg and Nowy Dwór Gdański
Warta delta near Kostrzyn nad Odrą
Romania
Danube Delta
Singapore
Parts of Pulau Tekong
Slovenia
The Ankaran/Ancarano Polder (), Semedela Polder (), and Škocjan Polder () in reclaimed land around Koper/Capodistria.
South Korea
Parts of the coast of Ganghwa Island, adjacent to the river Han in Incheon
Delta of the river Nakdong in Busan
Saemangeum in North Jeolla Province
Spain
Parts of Málaga were built on reclaimed land
United Kingdom
Traeth Mawr
Sunk Island, on the north shore of the Humber east of Hull
Caldicot and Wentloog Levels along the Severn Estuary in South Wales
Parts of The Fens
Branston Island, by the River Witham outside the conventional area of the fens but connected to them.
Parts of the coast of Essex
Some land along the River Plym in Plymouth
Some land around Meathop east of Grange-over-Sands, reclaimed as a side-effect of building a railway embankment
The Somerset Levels and North Somerset Levels
Romney Marsh
Sealand, Flintshire
Humberhead Levels
United States
New Orleans
Sacramento – San Joaquin River Delta | Polder | Wikipedia | 466 | 48065 | https://en.wikipedia.org/wiki/Polder | Physical sciences | Artificial landforms | null |