| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
70,138,102 | https://en.wikipedia.org/wiki/Ostreopsis%20lenticularis | Ostreopsis lenticularis is a species of dinoflagellate in the family Ostreopsidaceae described in 1981 by Yasuwo Fukuyo. O. lenticularis is known to produce toxins, including ostreotoxin.
Distribution
O. lenticularis was first identified on the Gambier and Society Islands of French Polynesia and in New Caledonia in the Pacific Ocean.
References
Gonyaulacales
Dinoflagellate species
Protists described in 1981 | Ostreopsis lenticularis | Biology | 102 |
13,700,969 | https://en.wikipedia.org/wiki/Sundiver%20%28space%20mission%29 | Sundiver was a proposed space mission to crash a probe into the Sun, sending back data to Earth before burning up. It was proposed as a design study by the Australian Academy of Science's National Committee for Space Science as a Flagship mission to kick-start an Australian space program. The design study was proposed as a five-year study from 2011 to 2015 with a complement of 10 PhDs, budgeted at a cost of A$10 million, leading to a go/no-go decision in 2015.
The mission would have been comparable, in its close approach to the Sun, to the NASA Parker Solar Probe mission, although it would have only made a single pass into the solar corona.
External links
Article in The Australian announcing plans
Decadal Plan for Australian Space Science (Sundiver proposal begins on page 90)
Spaceflight | Sundiver (space mission) | Astronomy | 173 |
19,536,528 | https://en.wikipedia.org/wiki/Engineering%20sample | Engineering samples are beta versions of integrated circuits, meant to be used for compatibility qualification or as demonstrators. They are usually loaned to original equipment manufacturers (OEMs) prior to the chip's commercial release to allow product development or display. Engineering samples are usually handed out under a non-disclosure agreement or another type of confidentiality agreement.
Some engineering samples, such as Pentium 4 processors, were rare and favoured for having unlocked base-clock multipliers. More recently, Core 2 engineering samples have become more common and popular, with Asian sellers offering the processors at a substantial profit.
While engineering sample CPUs do occasionally appear on secondhand markets such as eBay, they are generally not authorized for resale and can suffer from unpredictable performance issues, compatibility issues, and lack of warranty support options. This is due to their unfinished nature compared to the retail version of the chip.
References
See also
Integrated circuits | Engineering sample | Technology,Engineering | 182 |
12,999,450 | https://en.wikipedia.org/wiki/Discovery%20of%20disease-causing%20pathogens | The discovery of disease-causing pathogens is an important activity in the field of medical science. Many viruses, bacteria, protozoa, fungi, helminths (parasitic worms), and prions have been identified as confirmed or potential pathogens. In the United States, a Centers for Disease Control and Prevention program, begun in 1995, identified over a hundred patients with life-threatening illnesses that were considered to be of an infectious cause but that could not be linked to a known pathogen. The association of pathogens with disease can be a complex and controversial process, in some cases requiring decades or even centuries to achieve.
Factors impairing identification of pathogens
Factors which have been identified as impeding the identification of pathogens include the following:
1. Lack of animal models: Experimental infection in animals has been used as a criterion to demonstrate a disease-causing ability, but for some pathogens (such as Vibrio cholerae, which causes disease only in humans), animal models do not exist. In cases where animal models were not available, scientists have sometimes infected themselves or others to determine an organism's disease causing ability.
2. Pre-existing theories of disease: Before a pathogen is well-recognized, scientists may attribute the symptoms of infection to other causes, such as toxicological, psychological, or genetic causes. Once a pathogen has been associated with an illness, researchers have reported difficulty displacing these pre-existing theories.
3. Variable pathogenicity: Infection with pathogens can produce varying responses in hosts, complicating the process of showing a relationship between infection and the pathogen. In some infectious diseases, the severity of symptoms has been shown to be dependent on specific genetic traits of the host.
4. Organisms that look alike but behave differently: In some cases, a harmless organism exists that looks identical under a microscope to a disease-causing organism, which complicates the discovery process.
5. Lack of research effort: Slow progress has been attributed to the small numbers of researchers working on a pathogen.
19th-century discoveries
Vibrio cholerae (1849–1884)
Vibrio cholerae bacteria are transmitted through contaminated water. Once ingested, the bacteria colonize the intestinal tract of the host and produce a toxin which causes body fluids to flow across the lining of the intestine. Death can result in 2–3 hours from dehydration if no treatment is provided.
Before the discovery of an infectious cause, the symptoms of cholera were thought to be caused by an excess of bile in the patient; the disease cholera gets its name from the Greek word χολή, meaning bile. This theory was consistent with humorism, and led to such medical practices as bloodletting. The bacterium was first reported in 1849 by Gabriel Pouchet, who discovered it in stools from patients with cholera but did not appreciate the significance of its presence. The first scientist to understand the significance of Vibrio cholerae was the Italian anatomist Filippo Pacini, who published detailed drawings of the organism in "Microscopical observations and pathological deductions on cholera" in 1854. He published further papers in 1866, 1871, 1876, and 1880, which were ignored by the scientific community. He correctly described how the bacteria caused diarrhea, and developed treatments that were found to be effective. Whilst John Snow's epidemiological maps were well recognized and led to the removal of the Broad Street pump handle during the 1854 Broad Street cholera outbreak, in 1874 scientific representatives from 21 countries voted unanimously to resolve that cholera was caused by environmental toxins from miasmata, or clouds of unhealthy substances floating in the air. In 1884, Robert Koch re-discovered Vibrio cholerae as a causal element in cholera. Some scientists opposed the new theory, and even drank cholera cultures to disprove it.
Von Pettenkofer considered his experience proof that Vibrio cholerae was harmless, as he did not develop cholera from consuming the culture. Between 1849, when Pouchet discovered Vibrio cholerae, and 1891, over a million people died in cholera epidemics in Europe and Russia. In 1995, researchers published a study in Science explaining why some people can be infected with cholera without symptoms, possibly explaining why Pettenkofer did not get sick. The study showed that a series of genetic mutations in some people provides resistance to cholera toxin, but these mutations come at a price: if too many of them occur in a person, that person will develop cystic fibrosis, an incurable and often fatal genetic disorder.
20th-century discoveries
Giardia lamblia (1681–1975)
Giardiasis is a disease caused by infection with the protozoan Giardia lamblia. Infection with Giardia can produce diarrhea, gas, and abdominal pain in some people. If untreated, infection can be chronic. In children, chronic Giardia infection can cause stunting (stunted growth) and lowered intelligence. Infection with Giardia is now universally recognized as a disease and treated by physicians with antiprotozoal drugs. Since 2002, Giardia cases must be reported to the United States Centers for Disease Control and Prevention (CDC), according to the CDC's Reportable Disease Spreadsheet. The U.S. National Institutes of Health Gastrointestinal Parasites Lab studies Giardia almost exclusively.
However, Giardia experienced an extraordinarily long period of emergence, from its discovery in 1681 until the 1970s, when it was fully accepted that infection with Giardia was a treatable cause of chronic diarrhea.
Some of the first evidence in modern times of Giardia's pathogenicity came during World War II, when soldiers were treated for malaria with the antiprotozoal quinacrine and their diarrhea disappeared, as did the Giardia from their stool samples. In 1954, Dr. R.C. Rendtorff performed experiments on prisoner volunteers, infecting them with Giardia. Although some prisoners experienced changes in stool habits, he concluded that these could not be conclusively linked to Giardia infection, and also indicated that all prisoners experienced spontaneous clearance of Giardia. His experiments were described at the EPA Symposium on Waterborne Transmission of Giardiasis in 1978.
In 1954–1955, an outbreak of Giardia infection occurred in Oregon (United States), sickening 50,000 people. This was documented in a communication by Dr. Lyle Veazie, which was not published until 15 years later in The New England Journal of Medicine. In the communication, Veazie notes that he was unable to find a publisher for his account of the epidemic. The communication was re-published in the Proceedings of the EPA Symposium on Waterborne Transmission of Giardiasis in 1979; that version included a quote from the Director of the Oregon State Board of Health suggesting that diarrhea from Giardia was still being attributed to other causes by health authorities in 1954.
Helicobacter pylori (1892–1982)
Infection with the bacterium Helicobacter pylori is the cause of most stomach ulcers. The discovery is generally credited to Australian gastroenterologists Dr. Barry Marshall and Dr. J. Robin Warren, who published their findings in 1983. The pair received the Nobel Prize in Physiology or Medicine in 2005 for their work. Before this, the cause of stomach ulcers was unknown, though a popular belief was that stress played a role. Some researchers suggested that ulcers were a psychosomatic illness.
In H. pylori Pioneers, Dr. Marshall noted that other physicians had produced evidence of H. pylori infection as early as 1892. Marshall writes that earlier reports were disregarded because they conflicted with existing belief. The first description of H. pylori came in 1892 from Giulio Bizzozero, who identified acid-tolerant bacteria living in a dog's stomach. Later, a theory would develop that no bacteria could live in the stomach. Although the theory had no scientific basis, it became a stumbling block for scientists, discouraging them from searching for infectious causes of stomach ulcers. In 1940, two physicians, Dr. A. Stone Freedberg and Dr. Louis E. Barron, published a paper describing spiral bacteria found in about half of their gastroenterology patients who had stomach ulcers. Dr. John Lykoudis, a Greek physician, was one of the first physicians to treat stomach ulcers as an infectious disease. Between 1960 and 1970, he treated over 10,000 ulcer patients in Athens with antibiotics. Lykoudis tried to publish a paper on his findings, but they conflicted with traditional theory, and his work was never published. Lykoudis' experience was followed in 1975 by a publication by Steer in the journal Gut describing spiral bacteria living on the borders of duodenal ulcers. The medical significance of Steer's findings was disregarded, but he "continued to publish papers on H. pylori, mostly as a hobby."
H. pylori can infect the stomach of some people without causing stomach ulcers. In investigating asymptomatic carriers of H. pylori, researchers identified a genetic trait called interleukin-1 beta-31 which causes increased production of stomach acid, resulting in ulcers if patients become infected with H. pylori. Patients without the trait do not develop stomach ulcers in response to H. pylori infection, but instead have an increased risk of stomach cancer if they become infected. Investigation into other gastrointestinal infections has also shown that the symptoms are the result of interaction between the infection and specific genetic mutations in the host.
Pathogenic variants of Escherichia coli (1947–1983)
There are different types of E. coli, some of which are found in humans and are harmless. Enterotoxigenic Escherichia coli (ETEC) is a type found to cause illness in humans, possessing a gene that allows it to manufacture a substance toxic to humans. Cattle are immune to its effects, but when people eat food contaminated with cattle feces, the organism can cause disease. Reports of pathogenic E. coli appear in medical literature as early as 1947. Publications regarding variants of E. coli which cause disease appeared regularly in medical journals throughout the 1950s, '60s, and '70s, with fatalities in humans, including infants, reported starting in the 1970s. Despite the earlier reports, pathogenic E. coli did not rise to public prominence until 1983, when a CDC researcher published a paper identifying ETEC as the cause of a series of outbreaks of unexplained hemorrhagic gastrointestinal illness. Despite the earlier publication of pathogenic variants of E. coli, researchers encountered significant difficulties in establishing ETEC as a pathogen.
Human immunodeficiency virus (1959–1984)
AIDS was first reported on June 5, 1981, when the CDC recorded a cluster of Pneumocystis carinii pneumonia (still abbreviated PCP, but now known to be caused by Pneumocystis jirovecii) in five homosexual men in Los Angeles. The discovery of the virus took several years of research, and was announced in 1984 by Dr. Robert Gallo of the U.S. National Cancer Institute, Dr. Luc Montagnier at the Pasteur Institute in Paris, and Dr. Jay Levy at the University of California, San Francisco.
However, HIV existed long before the 1981 CDC report. Three of the earliest known instances of HIV infection are as follows:
A plasma sample taken in 1959 from an adult male living in what is now the Democratic Republic of the Congo.
HIV found in tissue samples from a 15-year-old African-American teenager who died in St. Louis in 1969.
HIV found in tissue samples from a Norwegian sailor who died around 1976.
Two species of HIV infect humans: HIV-1 and HIV-2. More virulent and more easily transmitted, HIV-1 is the source of the majority of HIV infections throughout the world, while HIV-2 is not as easily transmitted and is largely confined to West Africa. Both HIV-1 and HIV-2 are of primate origin. The origin of HIV-1 is the central common chimpanzee (Pan troglodytes troglodytes) found in southern Cameroon. It is established that HIV-2 originated from the sooty mangabey (Cercocebus atys), an Old World monkey of Guinea-Bissau, Gabon, and Cameroon.
It is hypothesized that HIV probably transferred to humans as a result of direct contact with primates, for instance during hunting, butchery, or inter-species sexual contact.
Cyclospora (1995)
Cyclospora is a gastrointestinal pathogen that causes fever, diarrhea, vomiting, and severe weight loss. Outbreaks of the disease occurred in Chicago in 1989 and in other areas of the United States, but investigation by the CDC could not identify an infectious cause. The discovery of the cause was made by Mr. Ramachandran Rajah, the head of a medical clinic's laboratory in Kathmandu, Nepal. Mr. Rajah was trying to discover why local residents and visitors were becoming ill every summer. He identified an unusual-looking organism in stool samples from patients who were sick, but when the clinic sent slides of the organism to the CDC, it was identified as blue-green algae, which is harmless. Many pathologists had seen the same thing before but dismissed it as irrelevant to the patient's disease. Later, the organism would be identified as a special kind of parasite, and treatment would be developed to help patients with the infection. In the United States, Cyclospora infection must be reported to the CDC according to the CDC's Reportable Disease Chart.
Present-day discoveries
The process of identifying new infectious agents continues. One study has suggested there are a large number of pathogens already causing illness in the population, but they have not yet been properly identified.
Gastrointestinal pathogens
Many recently emerged pathogens infect the gastrointestinal tract. For example, there are three gastrointestinal protozoal infections which must be reported to the CDC: Giardia, Cyclospora, and Cryptosporidium. None of these was known to be a significant pathogen in the 1970s.
Figure 1 shows the prevalence of gastrointestinal protozoa in studies from the United States and Canada. The most prevalent protozoa in these studies are considered emerging infectious diseases by some researchers, because a consensus does not yet exist in the medical and public health spheres concerning their role in human disease. Researchers have suggested that their treatment may be complicated by differing opinions regarding pathogenicity, lack of reliable testing procedures, and lack of reliable treatments. As with newly discovered pathogens before them, researchers are reporting that these organisms may be responsible for illnesses for which no clear cause has been found, such as irritable bowel syndrome.
Dientamoeba fragilis
Dientamoeba fragilis is a single-celled parasite which infects the large intestine, causing diarrhea, gas, and abdominal pain. An Australian study identified patients with symptoms of IBS who were actually infected with Dientamoeba fragilis; their symptoms resolved following treatment. A study in Denmark identified a high incidence of Dientamoeba fragilis infection in a group of patients suspected of having gastrointestinal illness of an infectious nature. The study also suggested that special methods may be required to identify the infection.
Blastocystis
Blastocystis is a single-celled protozoan which infects the large intestine. Physicians report that patients with infection show symptoms of abdominal pain, constipation, and diarrhea. One study found that 43% of IBS patients were infected with Blastocystis versus 7% of controls. An additional study found that many IBS patients from whom Blastocystis could not be identified showed a strong antibody reaction to the organism, a type of test used to diagnose certain difficult-to-detect infections. Other researchers have also reported that special testing techniques may be necessary to identify the infection in some people. While some scientists believe the finding that IBS patients carry a protozoal infection to be significant, other researchers have reported their belief that the presence of the infection is not medically significant. Researchers report that the infection can be resistant to common protozoal treatments in laboratory culture studies and in experience with patients; therefore, identifying Blastocystis infection may not be of immediate help to a patient. A 2006 study of gastrointestinal infections in the United States suggested that Blastocystis infection has become the leading cause of protozoal diarrhea in that country. Blastocystis was the most frequently identified protozoal infection found in patients in a 2006 Canadian study.
See also
Spanish flu
Black Death
Bubonic plague
Pandemic
Smallpox
References
Infectious diseases
Emerging infectious diseases
Pathology | Discovery of disease-causing pathogens | Biology | 3,537 |
22,389,658 | https://en.wikipedia.org/wiki/Geastrum%20saccatum | Geastrum saccatum, commonly known as the sessile earthstar, the rounded earthstar, or the star of the land, is a species of mushroom belonging to the genus Geastrum. The opening of the outer layer of the fruiting body in the characteristic star shape is thought to be due to a buildup of calcium oxalate crystals immediately prior to dehiscence. G. saccatum is distinguished from other earthstars by the distinct circular ridge or depression surrounding the central pore.
The species has a worldwide distribution and is found growing on rotting wood. It is a common mushroom, with collections at their peak during late summer. It is considered inedible but contains bioactive compounds.
Description
The immature fruiting body is in diameter and tall. Initially, the fruiting body is egg-shaped—similar in appearance to puffballs—and has strands of mycelia (rhizomorphs) at the base that attach it to the growing surface. The 'skin,' or peridium, is composed of two separate layers: the outer layer (exoperidium), which is a golden tan to yellowish-brown color, separates away from the inner basidiocarp and splits into five to eight rays that curve backward (recurve) to the base.
The mushroom is in diameter after the rays have expanded. Unlike some other members of the genus Geastrum (such as G. fornicatum) the arms do not push the basidiocarp off the ground; rather, it lies flat. The inner spore-bearing basidiocarp is broad, and has a central pore surrounded by a circular dull-brown apical disc; the disc is distinctly ridged or depressed. The inside of the interior sphere is white when young, but matures into a mass of brown, powdery spores mixed with thick-walled fibres known as capillitium. The flesh is bitter.
Spores
The spores are rounded, with warts, and have dimensions of 3.5–4.5 μm.
Mechanism of dehiscence
A study has shown that the formation of calcium oxalate crystals on the hyphae that form the endoperidial layer of the basidiocarp is responsible for the characteristic opening (dehiscence) of the outer peridial layers. Calcium oxalate is a common compound found in many fungi, including the earthstars. Curtis Gates Lloyd was the first to note the presence of these crystals on the endoperidium of Geaster calceus (now known as Geastrum minimum). The formation of calcium oxalate crystals stretches the layers of the outer walls, pushing the inner and outer layers of the peridium apart.
Similar species
The related species Geastrum fimbriatum does not have an apical disc, and its pores are slightly smaller. G. saccatum may be distinguished from G. indicum by the absence of loose tissue forming a collar around the base of the endoperidium. Other similar species include G. fornicatum and G. triplex. Astraeus earthstars usually have less orderly rays.
Habitat and distribution
Geastrum saccatum is saprobic, and grows scattered or clustered together in leaf litter or humus, usually in late summer and fall. It has a cosmopolitan distribution, and is well adapted to tropical regions. It is common in Hawaiian dry forests.
The species has been collected in North America (Canada, the United States, and Mexico), Central America (Panama), South America (Argentina, Uruguay, and Brazil), Cuba, Africa (the Congo, Tanzania, West Africa, and South Africa), Asia (China and India), and Tobago. In North America, it is found September–December on the west coast and July–October elsewhere.
Potential uses
The species is inedible.
Bioactive compounds
A β-glucan–protein complex extracted from G. saccatum has been isolated, analysed, and shown to have anti-inflammatory, antioxidant, and cytotoxic activities. It is suggested that the anti-inflammatory activity is due to inhibition of the enzymes nitric oxide synthase and cyclooxygenase.
In culture
In Brazil, its common name translates to "star of the land".
See also
Medicinal uses of fungi
References
Further reading
Mushrooms (Eyewitness Handbooks), by Thomas Laessoe, with Gary Lincoff, DK Publishing, New York, 1998, 304 pages, flexible vinyl.
Geastrum
Fungi of North America
Fungi of Europe
Fungi described in 1829
Inedible fungi
Taxa named by Elias Magnus Fries
Fungus species | Geastrum saccatum | Biology | 959 |
13,976,171 | https://en.wikipedia.org/wiki/Pair-rule%20gene | A pair-rule gene is a type of gene involved in the development of the segmented embryos of insects. Pair-rule genes are expressed as a result of differing concentrations of gap gene proteins, which encode transcription factors controlling pair-rule gene expression. Pair-rule genes are defined by the effect of a mutation in that gene, which causes the loss of the normal developmental pattern in alternating segments.
Pair-rule genes were first described by Christiane Nüsslein-Volhard and Eric Wieschaus in 1980. They used a genetic screen to identify genes required for embryonic development in the fruit fly Drosophila melanogaster. In normal unmutated Drosophila, each segment produces bristles called denticles in a band arranged on the side of the segment closer to the head (the anterior). They found five genes – even-skipped, hairy, odd-skipped, paired and runt – where mutations caused the deletion of a particular region of every alternate segment. For example, in even-skipped, the denticle bands of alternate segments are missing, which results in an embryo having half the number of denticle bands. Later work identified more pair-rule genes in the Drosophila early embryo – fushi tarazu, odd-paired and sloppy paired.
Once the pair-rule genes had been identified at the molecular level it was found that each gene is expressed in alternate parasegments – regions in the embryo that are closely related to segments, but are slightly out of register. Each parasegment includes the posterior part of one (future) segment, and an anterior part of the next (more posterior) segment. The bands of expression of the pair-rule genes correspond to the regions missing in the mutant. The expression of the pair-rule genes in bands is dependent both upon direct regulation by the gap genes and on regulatory interactions between the pair-rule genes themselves.
See also
Drosophila embryogenesis
Gap gene
References
External links
The Interactive Fly: http://www.sdbonline.org/fly/aignfam/gapnprl.htm
Developmental genes and proteins | Pair-rule gene | Biology | 436 |
9,705,828 | https://en.wikipedia.org/wiki/Technical%20features%20new%20to%20Windows%20Vista | Windows Vista (formerly codenamed Windows "Longhorn") has many significant new features compared with previous Microsoft Windows versions, covering most aspects of the operating system.
In addition to the new user interface, security capabilities, and developer technologies, several major components of the core operating system were redesigned, most notably the audio, print, display, and networking subsystems; while the results of this work will be visible to software developers, end-users will only see what appear to be evolutionary changes in the user interface.
As part of the redesign of the networking architecture, IPv6 has been incorporated into the operating system, and a number of performance improvements have been introduced, such as TCP window scaling. Prior versions of Windows typically needed third-party wireless networking software to work properly; this is no longer the case with Windows Vista, as it includes comprehensive wireless networking support.
For graphics, Windows Vista introduces a new display driver model as well as major revisions to Direct3D. The new display driver model facilitates the new Desktop Window Manager, which provides the tearing-free desktop and special effects that are the cornerstones of the Windows Aero graphical user interface. The new display driver model is also able to offload rudimentary tasks to the GPU, allow users to install drivers without requiring a system reboot, and seamlessly recover from rare driver errors due to illegal application behavior.
At the core of the operating system, many improvements have been made to the memory manager, process scheduler, heap manager, and I/O scheduler. A Kernel Transaction Manager has been implemented that can be used by data persistence services to enable atomic transactions. The service is being used to give applications the ability to work with the file system and registry using atomic transaction operations.
Audio
Windows Vista features a completely re-written audio stack designed to provide low-latency 32-bit floating point audio, higher-quality digital signal processing, bit-for-bit sample level accuracy, up to 144 dB of dynamic range and new audio APIs created by a team including Steve Ball and Larry Osterman.
The new audio stack runs at user level, thus reducing impact on system stability. Also, the new Universal Audio Architecture (UAA) model has been introduced, replacing WDM audio, which allows compliant audio hardware to automatically work under Windows without needing device drivers from the audio hardware vendor.
There are three major APIs in the Windows Vista audio architecture:
Windows Audio Session API – A very low-level API for rendering and capturing audio streams, adjusting volume, etc. This API also provides low latency for audio professionals through the WaveRT (wave real-time) port driver.
Multimedia Device API – For enumerating and managing audio endpoints.
Device Topology API – For discovering the internals of an audio card's topology.
Audio stack architecture
Applications communicate with the audio driver through sessions, and these sessions are programmed through the Windows Audio Session API (WASAPI). In general, WASAPI operates in two modes. In exclusive mode (also called DMA mode), unmixed audio streams are rendered directly to the audio adapter, no other application's audio will play, and signal processing has no effect. Exclusive mode is useful for applications that demand the least amount of intermediate processing of the audio data or those that want to output compressed audio data such as Dolby Digital, DTS or WMA Pro over S/PDIF. WASAPI exclusive mode is similar to kernel streaming in function, but no kernel-mode programming is required. In shared mode, audio streams are rendered by the application, and per-stream audio effects known as Local Effects (LFX), such as per-session volume control, may optionally be applied. The streams are then mixed by the global audio engine, where a set of global audio effects (GFX) may be applied. Finally, they are rendered on the audio device.
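The flow described above can be sketched with a few WASAPI calls. The following minimal C++ sketch (error handling elided, buffer size illustrative) opens a shared-mode render stream on the default endpoint, so its output is mixed by the global audio engine:

```cpp
// Minimal sketch: open a WASAPI shared-mode render stream on the default
// output device. Error handling is elided for brevity.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // Find the default render endpoint via the Multimedia Device API.
    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    // Activate an audio client and initialize it in shared mode, so this
    // session is mixed by the global audio engine with other sessions.
    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);
    WAVEFORMATEX* mixFormat = nullptr;
    client->GetMixFormat(&mixFormat);            // the engine's shared-mode format
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000,                 // 1-second buffer, in 100-ns units
                       0, mixFormat, nullptr);
    client->Start();
    // ... obtain an IAudioRenderClient here and submit samples ...
    client->Stop();

    CoTaskMemFree(mixFormat);
    client->Release(); device->Release(); enumerator->Release();
    CoUninitialize();
}
```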
After passing through WASAPI, all host-based audio processing, including custom audio processing, can take place. Host-based processing modules are referred to as Audio Processing Objects, or APOs. All these components operate in user mode, only the audio driver runs in kernel mode.
The Windows Kernel Mixer (KMixer) is completely gone. DirectSound and MME are emulated as Session instances rather than being directly connected to the audio driver. This does have the effect of preventing DirectSound from being hardware-accelerated, and completely removes support for DirectSound3D and EAX extensions; however, APIs such as ASIO and OpenAL are not affected.
Audio performance
Windows Vista also includes a new Multimedia Class Scheduler Service (MMCSS) that allows multimedia applications to register their time-critical processing to run at an elevated thread priority, thus ensuring prioritized access to CPU resources for time-sensitive DSP processing and mixing tasks.
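As a rough illustration of how an application opts in, a time-critical audio thread might register itself with MMCSS through the Avrt API. A minimal sketch, assuming the stock "Pro Audio" task profile:

```cpp
#include <windows.h>
#include <avrt.h>   // MMCSS client API; link with Avrt.lib

DWORD WINAPI AudioWorker(LPVOID) {
    // Register this thread with MMCSS under the built-in "Pro Audio"
    // task so the scheduler boosts it for time-sensitive DSP and mixing.
    DWORD taskIndex = 0;
    HANDLE mmcss = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);

    // ... time-critical processing loop runs at elevated priority ...

    if (mmcss) AvRevertMmThreadCharacteristics(mmcss);  // undo on exit
    return 0;
}
```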
For audio professionals, a new WaveRT port driver has been introduced that strives to achieve real-time performance by using the Multimedia Class Scheduler and supports audio applications that reduce the latency of audio streams. All the existing audio APIs have been re-plumbed and emulated to use these APIs internally; all audio goes through these three APIs, so most applications "just work".
Issues
A fault in the MME WaveIn/WaveOut emulation was introduced in Windows Vista: if sample rate conversion is needed, audible noise is sometimes introduced, such as when playing audio in a web browser that uses these APIs. This is because the internal resampler, which is no longer configurable, defaults to linear interpolation, which was the lowest-quality conversion mode that could be set in previous versions of Windows. The resampler can be set to a high-quality mode via a hotfix for Windows 7 and Windows Server 2008 R2 only.
Audio signal processing
New digital signal processing functionalities such as Room Correction, Bass Management, Loudness Equalization and Speaker Fill have been introduced. These adapt and modify an audio signal to take best advantage of the speaker configuration a given system has. Windows Vista also includes the ability to calibrate speakers to a given room's acoustics automatically using a software wizard.
Windows Vista also includes the ability for audio drivers to include custom DSP effects, which are presented to the user through user-mode System Effect Audio Processing Objects (sAPOs). These sAPOs are also reusable by third-party software.
Audio devices support
Windows Vista builds on the Universal Audio Architecture, a new class driver definition that aims to reduce the need for third-party drivers, and to increase the overall stability and reliability of audio in Windows.
Support for Intel High Definition Audio devices (which replaces Intel's previous AC'97 audio hardware standard)
Extended support for USB audio devices:
Built-in decoding of padded AC-3 (Dolby Digital), MP3, WMA and WMA Pro streams and outputting as S/PDIF.
Support for MIDI "Elements".
New support for asynchronous endpoints.
IEEE 1394 (aka FireWire) audio support was slated for a future release of Windows Vista, to be implemented as a full class driver, automatically supporting IEEE 1394 AV/C audio devices.
Support for audio jack sensing which can detect the audio devices that are plugged into the various audio jacks on a device and inform the user about their configuration.
Endpoint Discovery and Abstraction: Audio devices are expressed in terms of audio endpoints such as microphones, speakers, headphones. For example, each recording input (Microphone, Line in etc.) is treated as a separate device, which allows recording from both at the same time.
Other audio enhancements
A new set of user interface sounds have been introduced, including a new startup sound created with the help of King Crimson's Robert Fripp. The new sounds are intended to complement the Windows Aero graphical user interface, with the new startup sound consisting of two parallel melodies that are played in an intentional "Win-dows Vis-ta" rhythm. According to Jim Allchin, the new sounds are intended to be gentler and softer than the sounds used in previous versions of Windows.
The new Volume Mixer displays a percentage value showing the current system volume while the volume level is being changed. Previous versions of Windows only displayed a volume meter.
Windows Vista also allows controlling system-wide volume or volume of individual audio devices and individual applications separately. This feature can be used from the new Volume Control windows or programmatically using the overhauled audio API. Different sounds can be redirected to different audio devices as well.
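For instance, under the overhauled API an application holding an IAudioClient for its own session (as in the earlier WASAPI sketch) could adjust just that session's volume. The helper below is a hypothetical sketch, not library code:

```cpp
#include <audioclient.h>
#include <audiopolicy.h>   // ISimpleAudioVolume

// Hypothetical helper: set this application's session volume only;
// the system-wide master volume and other sessions are unaffected.
void SetSessionVolume(IAudioClient* client, float level) {
    ISimpleAudioVolume* volume = nullptr;
    if (SUCCEEDED(client->GetService(__uuidof(ISimpleAudioVolume),
                                     (void**)&volume))) {
        volume->SetMasterVolume(level, nullptr);  // 0.0f = silent, 1.0f = full
        volume->Release();
    }
}
```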
Windows Vista includes integrated microphone array support which is intended to increase the accuracy of the speech recognition feature and allow a user to connect multiple microphones to a system so that the inputs can be combined into a single, higher-quality source.
Microsoft has also included a new high quality voice capture DirectX Media Object (DMO) as part of DirectShow that allows voice capture applications such as instant messengers and speech recognition applications to apply Acoustic Echo Cancellation and microphone array processing to speech signals.
Speech recognition
Windows Vista is the first Windows operating system to include fully integrated support for speech recognition. Under Windows 2000 and XP, Speech Recognition was installed with Office 2003, or was included in Windows XP Tablet PC Edition.
A brief speech-driven tutorial is included to help familiarize a user with speech recognition commands. Training can also be completed to improve the accuracy of speech recognition.
Windows Vista includes speech recognition for 8 languages at release time: English (U.S. and British), Spanish, German, French, Japanese and Chinese (traditional and simplified). Support for additional languages is planned for post-release.
Speech recognition in Vista utilizes version 5.3 of the Microsoft Speech API (SAPI) and version 8 of the Speech Recognizer.
Speech synthesis
Speech synthesis was first introduced in Windows with Windows 2000, but it has been significantly enhanced for Windows Vista (code name Mulan). The old voice, Microsoft Sam, has been replaced with two new, more natural sounding voices of generally greater intelligibility: Anna and Lili, the latter of which is capable of speaking Chinese. The screen-reader Narrator which uses these voices has also been updated. Microsoft Agent and other text to speech applications now use the newer SAPI 5 voices.
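A minimal SAPI 5 sketch of synthesis through the default voice (Microsoft Anna on a stock English installation; the spoken string is arbitrary):

```cpp
#include <sapi.h>   // SAPI 5.x; link with Sapi.lib and Ole32.lib

int main() {
    CoInitialize(nullptr);
    ISpVoice* voice = nullptr;
    // Create the default SAPI 5 voice registered on the system.
    if (SUCCEEDED(CoCreateInstance(CLSID_SpVoice, nullptr, CLSCTX_ALL,
                                   IID_ISpVoice, (void**)&voice))) {
        voice->Speak(L"Welcome to Windows Vista.", SPF_DEFAULT, nullptr);
        voice->Release();
    }
    CoUninitialize();
    return 0;
}
```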
Print
Windows Vista includes a redesigned print architecture, built around Windows Presentation Foundation. It provides high-fidelity color printing through improved use of color management, removes limitations of the current GDI-based print subsystem, enhances support for printing advanced effects such as gradients, transparencies, etc., and for color laser printers through the use of XML Paper Specification (XPS).
The print subsystem in Windows Vista implements the new XPS print path as well as the legacy GDI print path for legacy support. Windows Vista transparently makes use of the XPS print path for those printers that support it, otherwise using the GDI print path. On documents with intensive graphics, XPS printers are expected to produce much greater quality prints than GDI printers.
In a networked environment with a print server running Windows Vista, documents will be rendered on the client machine, rather than on the server, using a feature known as Client Side Rendering. The rendered intermediate form will just be transferred to the server to be printed without additional processing, making print servers more scalable by offloading rendering computation to clients.
XPS print path
The XPS Print Path introduced in Windows Vista supports high quality 16-bit color printing. The XPS print path uses XML Paper Specification (XPS) as the print spooler file format, that serves as the page description language (PDL) for printers. The XPS spooler format is the intended replacement for the Enhanced Metafile (EMF) format which is the print spooler format in the Graphics Device Interface (GDI) print path. XPS is an XML-based (more specifically XAML-based) color-managed device and resolution independent vector-based paged document format which encapsulates an exact representation of the actual printed output. XPS documents are packed in a ZIP container along with text, fonts, raster images, 2D vector graphics and DRM information. For printers supporting XPS, this eliminates an intermediate conversion to a printer-specific language, increasing the reliability and fidelity of the printed output. Microsoft claims that major printer vendors are planning to release printers with built-in XPS support and that this will provide better fidelity to the original document.
At the core of the XPS print path is XPSDrv, the XPS-based printer driver which includes the filter pipeline. It contains a set of filters which are print processing modules and an XML-based configuration file to describe how the filters are loaded. Filters receive the spool file data as input, perform document processing, rendering and PDL post-processing, and then output PDL data for the printer to consume. Filters can perform a single function such as watermarking a page or doing color transformations or they can perform several print processing functions on specific document parts individually or collectively and then convert the spool file to the page description language supported by the printer.
Windows Vista also provides improved color support through the Windows Color System for higher color precision and dynamic range. It also supports CMYK colorspace and multiple ink systems for higher print fidelity. The print subsystem also has support for named colors simplifying color definition for images transmitted to printer supporting those colors.
The XPS print path can automatically calibrate color profile settings with those being used by the display subsystem. Conversely, XPS print drivers can express the configurable capabilities of the printer, by virtue of the XPS PrintCapabilities class, to enable more fine-grained control of print settings, tuned to the individual printing device.
Applications which use the Windows Presentation Foundation for the display elements can directly print to the XPS print path without the need for image or colorspace conversion. The XPS format used in the spool file, represents advanced graphics effects such as 3D images, glow effects, and gradients as Windows Presentation Foundation primitives, which are processed by the printer drivers without rasterization, preventing rendering artifacts and reducing computational load. When the legacy GDI Print Path is used, the XPS spool file is used for processing before it is converted to a GDI image to minimize the processing done at raster level.
Print schemas
Print schemas provide an XML-based format for expressing and organizing a large set of properties that describe either a job format or print capabilities in a hierarchically structured manner. Print schemas are intended to address the problems associated with internal communication between the components of the print subsystem, and external communication between the print subsystem and applications.
Networking
Windows Vista contains a new networking stack, which brings large improvements in all areas of network-related functionality. It includes a native implementation of IPv6, as well as complete overhaul of IPv4. IPv6 is now supported by all networking components, services, and the user interface. In IPv6 mode, Windows Vista can use the Link Local Multicast Name Resolution (LLMNR) protocol to resolve names of local hosts on a network which does not have a DNS server running. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after settings are changed. The new stack is also based on a strong host model and features an infrastructure to enable more modular components that can be dynamically inserted and removed.
The user interface for configuring, troubleshooting and working with network connections has changed significantly from prior versions of Windows as well. Users can make use of the new "Network Center" to see the status of their network connections, and to access every aspect of configuration. The network can be browsed using Network Explorer, which replaces Windows XP's "My Network Places". Network Explorer items can be a shared device such as a scanner, or a file share. Network Location Awareness uniquely identifies each network and exposes the network's attributes and connectivity type. Windows Vista graphically presents how different devices are connected over a network in the Network Map view, using the LLTD protocol. In addition, the Network Map uses LLTD to determine connectivity information and media type (wired or wireless). Any device can implement LLTD to appear on the Network Map with an icon representing the device, allowing users one-click access to the device's user interface. When LLTD is invoked, it provides metadata about the device that contains static or state information, such as the MAC address, IPv4/IPv6 address, signal strength etc.
Support for wireless networks is built into the network stack itself, and does not emulate wired connections, as was the case with previous versions of Windows. This allows implementation of wireless-specific features such as larger frame sizes and optimized error recovery procedures. Windows Vista uses various techniques like Receive Window Auto-scaling, Explicit Congestion Notification, TCP Chimney offload and Compound TCP to improve networking performance. Quality of service (QoS) policies can be used to prioritize network traffic, with traffic shaping available to all applications, even those that do not explicitly use QoS APIs. Windows Vista includes built-in support for peer-to-peer networks and SMB 2.0. For improved network security, support for 256-bit and 384-bit Diffie-Hellman (DH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES), is included in the network stack itself, and IPsec is integrated with Windows Firewall.
Kernel and core OS changes
The new Kernel Transaction Manager enables atomic transaction operations across different types of objects, most significantly file system and registry operations.
The memory manager and process scheduler have been improved. The scheduler was modified to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine, resulting in more deterministic application behaviour. Many kernel data structures and algorithms have been rewritten. Lookup algorithms now run in constant time, instead of linear time as with previous versions.
Windows Vista includes support for condition variables and reader-writer locks.
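A minimal sketch of how these primitives combine, using a one-slot producer/consumer handoff guarded by a slim reader/writer (SRW) lock; all names are illustrative:

```cpp
#include <windows.h>

// One-slot handoff between threads using the Vista-era primitives.
SRWLOCK lock = SRWLOCK_INIT;
CONDITION_VARIABLE itemReady = CONDITION_VARIABLE_INIT;
int slot = 0;
bool full = false;

DWORD WINAPI Consumer(LPVOID) {
    AcquireSRWLockExclusive(&lock);
    while (!full)  // loop guards against spurious wakeups
        SleepConditionVariableSRW(&itemReady, &lock, INFINITE, 0);
    int value = slot;
    full = false;
    ReleaseSRWLockExclusive(&lock);
    return (DWORD)value;
}

void Produce(int value) {
    AcquireSRWLockExclusive(&lock);
    slot = value;
    full = true;
    ReleaseSRWLockExclusive(&lock);
    WakeConditionVariable(&itemReady);  // rouse one waiting consumer
}
```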
Process creation overhead is reduced by significant improvements to DLL address-resolving schemes.
Windows Vista introduces a Protected Process, which differs from usual processes in the sense that other processes cannot manipulate the state of such a process, nor can threads from other processes be introduced into it. A Protected Process has enhanced access to DRM functions of Windows Vista. However, currently only applications using the Protected Video Path can create Protected Processes.
Thread Pools have been upgraded to support multiple pools per process, as well as to reduce performance overhead using thread recycling. It also includes Cleanup Groups that allow cleanup of pending thread-pool requests on process shutdown.
Threaded DPCs, in contrast to ordinary DPCs (Deferred Procedure Calls), decrease system latency, improving the performance of time-sensitive applications such as audio or video playback.
Data Redirection: Also known as data virtualization, this virtualizes the registry and certain parts of the file system for applications running in the protected user context if User Account Control is turned on, enabling legacy applications to run in non-administrator accounts. It automatically creates private copies of files that an application can use when it does not have permission to access the original files. This facilitates stronger file security and helps applications not written with the least-user-access principle in mind to run under stronger restrictions. Registry virtualization isolates write operations that have a global impact to a per-user location. Writes by user-mode applications running as a standard user to system-wide sections of the registry (such as HKEY_LOCAL_MACHINE\Software), as well as to folders such as "Program Files", are "redirected" to the user's profile. The process of reading and writing on the profile data and not on the application-intended location is completely transparent to the application.
Windows Vista supports the PCI Express 1.1 specification, including PCI Express Native Control and ASPM. PCI Express registers, including capability registers, are supported, along with save and restore of configuration data.
Native support and generic driver for Advanced Host Controller Interface (AHCI) specification for Serial ATA drives, SATA Native Command Queuing, Hot plugging, and AHCI Link Power Management.
Full support for the ACPI 2.0 specification, and parts of ACPI 3.0. Support for throttling power usage of individual devices has been improved.
Windows Vista SP1 supports Windows Hardware Error Architecture (WHEA).
Kernel-mode Plug-And-Play enhancements include support for PCI multilevel rebalance, partial arbitration of resources to support PCI subtractive bridges, asynchronous device start and enumeration operations to speed system startup, support for setting and retrieving custom properties on a device, an enhanced ejection API to allow the caller to determine if and when a device has been successfully ejected, and diagnostic tracing to facilitate improved reliability.
The startup process for Windows Vista has changed completely in comparison to earlier versions of Windows. The NTLDR boot loader has been replaced by a more flexible system, with NTLDR's functionality split between two new components: winload.exe and Windows Boot Manager. A notable change is that the Windows Boot Manager is invoked by pressing the space bar instead of the F8 function key. The F8 key still remains assigned for advanced boot options once the Windows Boot Manager menu appears.
On UEFI systems, beginning with Windows Vista Service Pack 1, the x64 version of Windows Vista has the ability to boot from a disk with a GUID Partition Table.
Windows Vista includes a completely overhauled and rewritten event logging subsystem, known as Windows Event Log, which is XML-based and allows applications to more precisely log events. It offers better views, filtering and categorization by criteria, automatic log forwarding, centralized logging and management of events from a single computer, and remote access.
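As a sketch of what the XPath-based filtering looks like to a client, the hypothetical query below selects error-level events from the System channel (buffer handling abbreviated):

```cpp
#include <windows.h>
#include <winevt.h>   // Windows Event Log API; link with Wevtapi.lib

int main() {
    // XPath filter: error-level events (Level=2) in the System channel.
    EVT_HANDLE results = EvtQuery(nullptr, L"System",
                                  L"*[System[(Level=2)]]",
                                  EvtQueryChannelPath);
    EVT_HANDLE event = nullptr;
    DWORD returned = 0;
    while (EvtNext(results, 1, &event, INFINITE, 0, &returned)) {
        DWORD used = 0, props = 0;
        // First call reports the needed buffer size; real code would
        // allocate and call EvtRender again to obtain the event XML.
        EvtRender(nullptr, event, EvtRenderEventXml, 0, nullptr, &used, &props);
        EvtClose(event);
    }
    EvtClose(results);
    return 0;
}
```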
Windows Vista includes an overhauled Task Scheduler that uses hierarchical folders of tasks. The Task Scheduler can run programs, send email, or display a message. It can also be triggered by an XPath expression for filtering events from the Windows Event Log, and can respond to a workstation being locked or unlocked, as well as to the connection or disconnection of a Remote Desktop session. Task Scheduler tasks can be scripted in VBScript, JScript, or PowerShell.
Restart Manager: The Restart Manager works with Microsoft's update tools and websites to detect processes that have files in use and to gracefully stop and restart services, reducing as far as possible the number of reboots required after applying updates to higher levels of the software stack. Kernel updates, logically, still require the system to be restarted. In addition, the Restart Manager provides a mechanism for applications to stop and then restart programs. Applications that are written specifically to take advantage of the new Restart Manager features using the API can be restarted and restored to the same state and with the same data as before the restart. Using the Application Recovery and Restart APIs in conjunction with the Restart Manager enables applications to control what actions are taken on their behalf by the system when they fail or crash, such as recovering unsaved data or documents, restarting the application, and diagnosing and reporting the problem using Windows Error Reporting.
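For example, an application might opt into automatic relaunch with a single call to the Application Recovery and Restart API; the "/restored" switch in this sketch is a hypothetical convention of the example, not part of the API:

```cpp
#include <windows.h>

// Ask the system to relaunch this application after a crash, hang, or
// update-driven restart. The command line is appended when relaunching;
// "/restored" is a made-up switch the app would check at startup to
// reload its saved state.
void EnableAutomaticRestart() {
    RegisterApplicationRestart(L"/restored", 0);  // 0 = restart in all cases
}
```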
When shutting down or restarting Windows, previous Windows versions either forcibly terminated applications after waiting a few seconds, or allowed applications to cancel shutdown entirely without informing the user. Windows Vista now informs the user in a full-screen interface if there are running applications when exiting Windows, and allows continuing with or cancelling the initiated shutdown. The reason registered, if any, for cancelling a shutdown by an application using the new ShutdownBlockReasonCreate API is also displayed.
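A sketch of how an application might register such a reason while it flushes unsaved work (the reason string and surrounding logic are illustrative):

```cpp
#include <windows.h>   // ShutdownBlockReason*; link with User32.lib

// 'hwnd' is assumed to be the application's top-level window. While the
// reason is registered, Vista's full-screen shutdown UI can display it
// next to the application instead of silently terminating it.
void SaveWithShutdownReason(HWND hwnd) {
    ShutdownBlockReasonCreate(hwnd, L"A document is still being saved.");
    // ... flush unsaved data to disk ...
    ShutdownBlockReasonDestroy(hwnd);
}
```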
Clean service shutdown: Services in Windows Vista have the capability of delaying the system shutdown in order to properly flush data and finish current operations. If the service stops responding, the system terminates it after 3 minutes. Crashes and restart problems are drastically reduced since the Service Control Manager is not terminated by a forced shutdown anymore.
Boot process
Windows Vista introduces an overhaul of the previous Windows NT operating system loader architecture NTLDR. Used by versions of Windows NT since its inception with Windows NT 3.1, NTLDR has been completely replaced with a new architecture designed to address modern firmware technologies such as the Unified Extensible Firmware Interface. The new architecture introduces a firmware-independent data store and is backward compatible with previous versions of the Windows operating system.
Memory management
Windows Vista features a Dynamic System Address Space that allocates virtual memory and kernel page tables on-demand. It also supports very large registry sizes.
Includes enhanced support for Non-Uniform Memory Access (NUMA) and systems with large memory pages. Windows Vista also exposes APIs for accessing the NUMA features.
Memory pages can be marked as read-only, to prevent data corruption.
New address mapping scheme called Rotate Virtual Address Descriptors (VAD). It is used for the advanced Video subsystem.
Swapping in of memory pages and the system cache now includes prefetching and clustering, to improve performance.
Performance of Address Translation Buffers has been enhanced.
Heap layout has been modified to provide higher performance on 64-bit and Symmetric multiprocessing (SMP) systems. The new heap structure is also more scalable and has low management overhead, especially for large heaps.
Windows Vista automatically tunes the heap layout for improved fragmentation management. The Low Fragmentation Heap (LFH) is enabled by default.
Heaps are initialized lazily, only when required, to improve performance.
The Windows Vista memory manager does not have the 64 KB read-ahead cache limitation of previous versions of Windows and can thus improve file system performance dramatically.
File systems
Transactional NTFS allows multiple file/folder operations to be treated as a single operation, so that a crash or power failure won't result in half-completed file writes. Transactions can also be extended to multiple machines.
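A minimal sketch of an all-or-nothing file write using the public Kernel Transaction Manager and Transactional NTFS APIs (the path and description strings are placeholders):

```cpp
#include <windows.h>
#include <ktmw32.h>   // CreateTransaction/CommitTransaction; link with KtmW32.lib

int main() {
    // Create a kernel transaction, write a file inside it, then commit.
    // If the process dies before CommitTransaction, the write is rolled
    // back and no half-completed file is left behind.
    HANDLE tx = CreateTransaction(nullptr, nullptr, 0, 0, 0, 0,
                                  const_cast<LPWSTR>(L"demo transaction"));
    HANDLE file = CreateFileTransactedW(L"C:\\temp\\report.txt",
                                        GENERIC_WRITE, 0, nullptr,
                                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                        nullptr, tx, nullptr, nullptr);
    DWORD written = 0;
    WriteFile(file, "all-or-nothing", 14, &written, nullptr);
    CloseHandle(file);

    CommitTransaction(tx);   // or RollbackTransaction(tx) to undo
    CloseHandle(tx);
    return 0;
}
```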
Image Mastering API (IMAPI v2) enables DVD burning support for applications, in addition to CD burning. IMAPI v2 supports multiple optical drives, even recording to multiple drives simultaneously, unlike IMAPI in Windows XP which only supported recording with one optical drive at a time. In addition, multiple filesystems are supported. Applications using IMAPI v2 can create and burn disc images—it is extensible in the sense that developers can write their own specific media formats and create their own file systems for its programming interfaces. IMAPI v2 is implemented as a DLL rather than as a service as was the case in Windows XP, and is also scriptable using VBScript. IMAPI v2 is also available for Windows XP. With the Windows Feature Pack for Storage installed, IMAPI 2.0 supports Recordable Blu-ray Disc (BD-R) and Rewritable Blu-ray Disc (BD-RE) media as well. Windows DVD Maker can burn DVD-Video discs, while Windows Explorer can burn data on DVDs (DVD±R, DVD±R DL, DVD±RW) in addition to DVD-RAM and CDs.
Live File System: A writable UDF file system. The Windows UDF file system (UDFS) implementation was read-only in OS releases prior to Windows Vista. In Windows Vista, Packet writing (incremental writing) is supported by UDFS, which can now format and write to all mainstream optical media formats (MO, CDR/RW, DVD+R/RW, DVD-R/RW/RAM). Write support is included for UDF format versions up to and including 2.50, with read support up to 2.60. UDF symbolic links, however, are not supported.
Common Log File System (CLFS) API provides a high-performance, general-purpose log-file subsystem that dedicated user-mode and kernel-mode client applications can use and multiple clients can share to optimize log access and for data and event management.
File encryption support is superior to that available in the Encrypting File System in Windows XP, making it easier and more automatic to prevent unauthorized viewing of files on stolen laptops or hard drives.
The File System Minifilter model, in which kernel-mode non-device drivers monitor file system activity, has been upgraded in Windows Vista. The Registry filtering model adds support for redirecting calls and modifying parameters, and introduces the concept of altitudes for filter registrations.
Registry notification hooks, introduced in Windows XP and enhanced in Windows Vista, allow software to participate in registry-related activities in the system.
Support for UNIX-style symbolic links. Previous Windows versions had support for a type of cross-volume reparse points known as junction points, and for hard links. However, junction points could be created only for directories and stored absolute paths, whereas hard links could be created for files but were not cross-volume. NTFS symbolic links can be created for any object and are cross-volume, cross-host (they work over UNC paths), and store relative paths. However, the cross-host functionality of symbolic links does not work over the network with previous versions of Windows or other operating systems, only with computers running Windows Vista or a later Windows operating system. Symbolic links can be created, modified and deleted using the Mklink utility which is included with Windows Vista. Microsoft has published some developer documentation on symbolic links in the MSDN documentation. In addition, Windows Explorer is now symbolic link-aware: deleting a symbolic link from Explorer deletes just the link itself and not the target object. Explorer also shows the symbolic link target in the object's properties and shows a shortcut icon overlay on a junction point.
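For illustration, the kind of link the Mklink utility creates can also be made programmatically through the CreateSymbolicLink API introduced in Windows Vista; the paths below are hypothetical:

```cpp
#include <windows.h>

int main() {
    // Roughly equivalent to: mklink C:\Users\Public\latest.log C:\logs\today.log
    // Creating symbolic links requires SeCreateSymbolicLinkPrivilege,
    // held by administrators by default on Windows Vista.
    CreateSymbolicLinkW(L"C:\\Users\\Public\\latest.log",  // link to create
                        L"C:\\logs\\today.log",            // existing target
                        0);  // 0 = file link; SYMBOLIC_LINK_FLAG_DIRECTORY for dirs
    return 0;
}
```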
A new tab, "Previous Versions", in the Properties dialog for any file or folder, provides read-only snapshots of files on local or network volumes from an earlier point in time. This feature is based on the Volume Shadow Copy technology.
A new file-based disk image format called Windows Imaging Format (WIM), which can be mounted as a partition, or booted from. An associated tool called ImageX provides facilities to create and maintain these image files.
Self-healing NTFS: In previous Windows versions, NTFS marked the volume "dirty" upon detecting file-system corruption, and CHKDSK was required to be run by taking the volume "offline". With self-healing NTFS, an NTFS worker thread is spawned in the background which performs a localized fix-up of damaged data structures, with only the corrupted files/folders remaining unavailable, without locking out the entire volume. The self-healing behavior can be turned on for a volume with the fsutil repair set C: 1 command, where C represents the volume letter.
New /B switch in CHKDSK for NTFS volumes which clears marked bad sectors on a volume and reevaluates them.
Windows Vista has support for hard disk drives with large physical sector sizes (more than 512 bytes per sector) if the drive exposes 512-byte logical sectors through emulation (called Advanced Format/512E). Drives with both 4K logical and 4K physical sectors are not supported.
The NLS casing table in NTFS has been updated so that partitions formatted with Windows Vista will be able to see the proper behavior for the 100+ mappings that have been added to Unicode but were not added to Windows.
Windows Vista Service Pack 1 and later have built-in support for exFAT.
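As referenced above, the IMAPI v2 objects are exposed through COM and can therefore be driven from scripts as well as native code. The following minimal sketch (an illustration, not Microsoft sample code) enumerates the attached recorders; it assumes the third-party pywin32 package and uses the documented IMAPI2 scripting ProgIDs.

import win32com.client

# Enumerate the optical recorders that IMAPI v2 knows about.
disc_master = win32com.client.Dispatch("IMAPI2.MsftDiscMaster2")
for unique_id in disc_master:
    recorder = win32com.client.Dispatch("IMAPI2.MsftDiscRecorder2")
    recorder.InitializeDiscRecorder(unique_id)  # bind the object to one device
    print(unique_id, list(recorder.VolumePathNames))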
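Likewise, the NTFS symbolic links described above can be created programmatically. This sketch uses Python's standard os.symlink, which calls the underlying Windows API on Vista and later; the paths are hypothetical, and the operation requires the create-symbolic-link privilege (held by elevated administrators by default).

import os

# Create a file symbolic link (hypothetical paths; run elevated on Vista).
os.symlink(r"C:\Targets\report.txt", r"C:\Links\report.txt")

print(os.path.islink(r"C:\Links\report.txt"))  # True: it is a link, not a copy
print(os.readlink(r"C:\Links\report.txt"))     # prints the stored target path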
Drivers
Windows Vista introduces an improved driver model, Windows Driver Foundation, which is an opt-in framework to replace the older Windows Driver Model. It includes:
Windows Display Driver Model (WDDM), previously referred to as Longhorn Display Driver Model (LDDM), designed for graphics performance and stability.
A new Kernel-Mode Driver Framework, which will also be available for Windows XP and Windows 2000.
A new user-mode driver model called the User-Mode Driver Framework. In Windows Vista, WDDM display drivers have two components, a kernel mode driver (KMD) that is very streamlined, and a user-mode driver that does most of the intense computations. With this model, most of the code is moved out of kernel mode. The audio subsystem also runs largely in user-mode to prevent impacting negatively on kernel performance and stability. Also, printer drivers in kernel mode are not supported. User-mode drivers are not able to directly access the kernel but use it through a dedicated API. User-mode drivers are supported for devices which plug into a USB or FireWire bus, such as digital cameras, portable media players, PDAs, mobile phones and mass storage devices, as well as "non-hardware" drivers, such as filter drivers and other software-only drivers. This also allows for drivers which would typically require a system reboot (video card drivers, for example) to install or update without needing a reboot of the machine. If the driver requires access to kernel-mode resources, developers can split the driver so that part of it runs in kernel-mode and part of it runs in user-mode. These features are significant because a majority of system crashes can be traced to improperly installed or unstable third-party device drivers. If an error occurs the new framework allows for an immediate restart of the driver and does not impact the system. User-Mode Driver Framework is available for Windows XP and is included in Windows Media Player 11.
Kernel-mode drivers on 64-bit versions of Windows Vista must be digitally signed; even administrators will not be able to install unsigned kernel-mode drivers. A boot-time option is available to disable this check for a single session of Windows. Installing user-mode drivers will still work without a digital signature.
Signed drivers are required for usage of PUMA, PAP (Protected Audio Path), and PVP-OPM subsystems.
Driver packages that are used to install driver software are copied in their entirety into a "Driver Store", which is a repository of driver packages. This ensures that drivers that need to be repaired or reinstalled won't need to ask for source media to get "fresh" files. The Driver Store can also be preloaded with drivers by an OEM or IT administrator to ensure that commonly used devices (e.g. external peripherals shipped with a computer system, corporate printers) can be installed immediately. Adding, removing and viewing drivers from the Driver Store is done using the PnPUtil (pnputil.exe) command-line tool. A new setting in Device Manager allows deleting the drivers from the Driver Store when uninstalling the hardware.
Since Windows Vista, there has been a "Delete the driver software for this device" checkbox in the confirmation dialog when uninstalling a hardware device in Device Manager.
Support for Windows Error Reporting; information on an "unknown device" is reported to Microsoft when a driver cannot be found on the system, via Windows Update, or supplied by the user. OEMs can hook into this system to provide information that can be returned to the user, such as a formal statement of non-support of a device for Windows Vista, or a link to a web site with support information, drivers, etc.
Processor Power Management
Windows Vista includes the following changes and enhancements in processor power management:
Native operating system support for PPM on multiprocessor systems, including systems using processors with multiple logical threads, multiple cores, or multiple physical sockets.
Support for all ACPI 2.0 and 3.0 processor objects.
User configurable system cooling policy, minimum and maximum processor states.
Operating system coordination of performance state transitions between dependent processors.
Elimination of the processor dynamic throttling policies used in Windows XP and Windows Server 2003.
More flexible use of the available range of processor performance states through system power policy.
The static use of any linear throttle state on systems that are not capable of processor performance states.
Exposure of multiple power policy parameters that original equipment manufacturers (OEMs) may tune to optimize Windows Vista use of PPM features.
In-box drivers for processors from all leading processor manufacturers at that time (Intel, AMD, VIA).
A generic processor driver that allows the use of processor-specific controls for performance state transitions.
An improved C3 entry algorithm, where a failed C3 entry does not cause demotion to C2.
Removal of support for legacy processor performance state interfaces.
Removal of support for legacy mobile processor drivers.
System performance
SuperFetch caches frequently used applications and documents in memory and keeps track of when commonly used applications are usually loaded, so that they can be pre-cached; it also prioritizes the programs currently in use over background tasks. SuperFetch aims to negate the performance impact of having anti-virus or backup software run when the user is not at the computer. SuperFetch can learn at what time of day a given application is used, so that it can be pre-cached accordingly.
ReadyBoost, makes PCs running Windows Vista more responsive by using flash memory on a USB drive (USB 2.0 only), SD card, Compact Flash, or other form of flash memory, in order to boost system performance. When such a device is plugged in, the Windows Autoplay dialog offers an additional option to use it to speed up the system; an additional "ReadyBoost" tab is added to the drive's properties dialog where the amount of space to be used can be configured.
ReadyBoot uses an in-RAM cache to optimize the boot process if the system has 700 MB or more of memory. The size of the cache depends on the total RAM available, but is large enough to create a reasonable cache while still leaving the system the memory it needs to boot smoothly. ReadyBoot uses the same service as ReadyBoost.
ReadyDrive is the name Microsoft has given to its support for hybrid drives, a new design of hard drive developed by Samsung and Microsoft. Hybrid drives incorporate non-volatile memory into the drive's design, resulting in lower power needs, as the drive's spindles do not need to be activated for every write operation. Windows Vista can also make use of the NVRAM to increase the speed of booting and returning from hibernation.
Windows Vista features prioritized I/O, which allows developers to set application I/O priorities for read/write disk operations, similar to how application processes and threads can be assigned CPU priorities. I/O has been enhanced with asynchronous I/O cancellation and I/O scheduling based on thread priority. Background applications running with low-priority I/O do not disturb foreground applications. Applications like Windows Defender, the automatic disk defragmenter and Windows Desktop Search (during indexing) already use this feature, as does Windows Media Player 11 to offer glitch-free multimedia playback. (A sketch of opting a thread into low-priority "background mode" follows this list.)
The Offline Files feature, which maintains a client-side cache of files shared over a network, has been significantly improved. When synchronizing the changes in the cached copy to the remote version, the Bitmap Differential Transfer protocol is used so that only the changed blocks in the cached version are transferred; when retrieving changes from the remote copy, however, the entire file is downloaded. Offline files are synchronized on a per-share basis and encrypted on a per-user basis, and users can force Windows to work in offline mode or online mode, or sync manually, from the Sync Center. The Sync Center can also report sync errors and resolve sync conflicts. Also, when network connectivity is restored, file handles are redirected to the remote share transparently.
Delayed service start allows services to start a short while after the system has finished booting and initial busy operations, so that the system boots up faster and performs tasks quicker than before.
Enable advanced performance option for hard disks: When enabled, the operating system may cache disk writes as well as disk reads. In previous Windows operating systems, only the disk's internal disk caching, if any, was utilised for disk write operations when the disk cache was enabled by the user. Enabling this option causes Windows to make use of its own local cache in addition to this, which speeds up performance, at the expense of a little more risk of data loss during a sudden loss of power.
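As referenced in the prioritized I/O item above, one documented way for an application to lower its own I/O priority on Windows Vista is the thread "background mode" of the SetThreadPriority API. The following sketch (illustrative only) wraps a batch of disk work with the documented flags using Python's ctypes.

import ctypes

THREAD_MODE_BACKGROUND_BEGIN = 0x00010000  # documented SetThreadPriority flags
THREAD_MODE_BACKGROUND_END = 0x00020000

kernel32 = ctypes.windll.kernel32
thread = kernel32.GetCurrentThread()

# Enter background mode: the system lowers this thread's I/O priority.
kernel32.SetThreadPriority(thread, THREAD_MODE_BACKGROUND_BEGIN)
try:
    pass  # perform low-priority disk work here
finally:
    kernel32.SetThreadPriority(thread, THREAD_MODE_BACKGROUND_END)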
Programmability
.NET Framework 3.0
Windows Vista is the first client version of Windows to ship with the .NET Framework. The .NET Framework is a set of managed code APIs that is slated to succeed Win32. The Win32 API is also present in Windows Vista, but does not give direct access to all the new functionality introduced with the .NET Framework. In addition, .NET Framework is intended to give programmers easier access to the functionality present in Windows itself.
.NET Framework 3.0 includes APIs such as ADO.NET, ASP.NET, Windows Forms, among others, and adds four core frameworks to the .NET Framework:
Windows Presentation Foundation (WPF)
Windows Communication Foundation (WCF)
Windows Workflow Foundation (WF)
Windows CardSpace
WPF
Windows Presentation Foundation (codenamed Avalon) is the overhaul of the graphical subsystem in Windows and the flagship resolution independent API for 2D and 3D graphics, raster and vector graphics (XAML), fixed and adaptive documents (XPS), advanced typography, animation (XAML), data binding, audio and video in Windows Vista. WPF enables richer control, design, and development of the visual aspects of Windows programs. Based on DirectX, it renders all graphics using Direct3D. Routing the graphics through Direct3D allows Windows to offload graphics tasks to the GPU, reducing the workload on the computer's CPU. This capability is used by the Desktop Window Manager to make the desktop, all windows and all other shell elements into 3D surfaces. WPF applications can be deployed on the desktop or hosted in a web browser (XBAP).
The 3D capabilities in WPF are limited compared to what's available in Direct3D. However, WPF provides tighter integration with other features like user interface (UI), documents, and media. This makes it possible to have 3D UI, 3D documents, and 3D media. A set of built-in controls is provided as part of WPF, containing items such as button, menu, and list box controls. WPF provides the ability to perform control composition, where a control can contain any other control or layout. WPF also has a built-in set of data services to enable application developers to bind data to the controls. Images are supported using the Windows Imaging Component. For media, WPF supports any audio and video formats which Windows Media Player can play. In addition, WPF supports time-based animations, in contrast to the frame-based approach. This delinks the speed of the animation from how slow or fast the system is performing. Text is anti-aliased and rendered using ClearType.
WPF uses Extensible Application Markup Language (XAML), which is a variant of XML, intended for use in developing user interfaces. Using XAML to develop user interfaces also allows for separation of model and view. In XAML, every element maps onto a class in the underlying API, and the attributes are set as properties on the instantiated classes. All elements of WPF may also be coded in a .NET language such as C#. The XAML code is ultimately compiled into a managed assembly in the same way all .NET languages are, which means that the use of XAML for development does not incur a performance cost.
WCF
Windows Communication Foundation (codenamed Indigo) is a new communication subsystem that enables applications, on one machine or across multiple machines connected by a network, to communicate. The WCF programming model unifies web services, .NET Remoting, distributed transactions, and message queues into a single service-oriented architecture model for distributed computing, where a server exposes a service via an interface, defined using XML, to which clients connect. WCF runs in a sandbox and provides the enhanced security model that all .NET applications provide.
WCF is capable of using SOAP for communication between two processes, thereby making WCF based applications interoperable with any other process that communicates via SOAP. When a WCF process communicates with a non-WCF process, XML based encoding is used for the SOAP messages but when it communicates with another WCF process, the SOAP messages are encoded in an optimized binary format, to optimize the communication. Both the encodings conform to the data structure of the SOAP format, called Infoset.
Windows Vista also incorporates Microsoft Message Queuing 4.0 (MSMQ) that supports subqueues, poison messages (messages which continually fail to be processed correctly by the receiver), and transactional receives of messages from a remote queue.
WF
Windows Workflow Foundation is a Microsoft technology for defining, executing and managing workflows. This technology is part of .NET Framework 3.0 and therefore targeted primarily for the Windows Vista operating system. The Windows Workflow Foundation runtime components provide common facilities for running and managing the workflows and can be hosted in any CLR application domain.
Workflows comprise 'activities'. Developers can write their own domain-specific activities and then use them in workflows. Windows Workflow Foundation also provides a set of general-purpose 'activities' that cover several control flow constructs. It also includes a visual workflow designer. The workflow designer can be used within Visual Studio 2005, including integration with the Visual Studio project system and debugger.
Windows CardSpace
Windows CardSpace (codenamed InfoCard), a part of .NET Framework 3.0, is an implementation of the Identity Metasystem, which centralizes acquiring, usage and management of digital identity. A digital identity is represented as logical Security Tokens that each consist of one or more Claims, which provide information about different aspects of the identity, such as name, address, etc.
Any identity system centers around three entities — the User who is to be identified, an Identity Provider who provides identifying information regarding the User, and Relying Party who uses the identity to authenticate the user. An Identity Provider may be a service like Active Directory, or even the user who provides an authentication password, or biometric authentication data.
A Relying Party issues a request to an application for an identity, by means of a Policy that states what Claims it needs and what will be the physical representation of the security token. The application then passes on the request to Windows CardSpace, which then contacts a suitable Identity Provider and retrieves the Identity. It then provides the application with the Identity along with information on how to use it.
Windows CardSpace also keeps track of all identities used, and represents them as visually identifiable virtual cards, accessible to the user from a centralized location. Whenever an application requests any identity, Windows CardSpace informs the user which identity is being used and requires confirmation before it provides the requestor with the identity.
Windows CardSpace presents an API that allows any application to use Windows CardSpace to handle authentication tasks. Similarly, the API allows Identity Providers to hook up with Windows CardSpace. To any Relying Party, it appears as a service which provides authentication credentials.
Other .NET Framework APIs
Microsoft UI Automation (UIA) is a managed code API replacing Microsoft Active Accessibility to drive user interfaces. UIA is designed to serve both assistive technology and test-automation requirements.
.NET Framework 3.0 also includes a managed code speech API which has similar functionality to SAPI 5 but is suitable to be used by managed code applications.
Media Foundation
Media Foundation is a set of COM-based APIs to handle audio and video playback that provides DirectX Video Acceleration 2.0 and better resilience to CPU, I/O, and memory stress for glitch-free low-latency playback of audio and video. It also enables high color spaces through the multimedia processing pipeline. DirectShow and Windows Media SDK will be gradually deprecated in future versions.
Search
The Windows Vista Instant Search index can also be accessed programmatically, using both managed and native code. Native code connects to the index catalog by using a Data Source Object retrieved from the Windows Vista shell's Indexing Service OLE DB provider. Managed code uses the MSIDXS ADO.NET provider with the index catalog name. A catalog on a remote machine can also be specified using a UNC path. The criteria for the search are specified using an SQL-like syntax.
The default catalog is called SystemIndex and it stores all the properties of indexed items with a predefined naming pattern. For example, the name and location of documents in the system are exposed as a table with the column names System.ItemName and System.ItemUrl respectively. An SQL query can refer directly to these tables and index catalogs and use the MSIDXS provider to run queries against them. The search index can also be used via OLE DB, using the CollatorDSO provider. However, the OLE DB provider is read-only, supporting only SELECT and GROUP ON SQL statements.
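As a hedged illustration of the read-only OLE DB route just described, the following sketch queries SystemIndex through the CollatorDSO provider with the classic ADO COM objects via the third-party pywin32 package. The connection string is the documented one; the scope path is only an example.

import win32com.client

conn = win32com.client.Dispatch("ADODB.Connection")
conn.Open("Provider=Search.CollatorDSO;"
          "Extended Properties='Application=Windows';")

rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open("SELECT System.ItemName, System.ItemUrl FROM SystemIndex "
        "WHERE SCOPE='file:C:/Users'", conn)

while not rs.EOF:  # walk the result set row by row
    print(rs.Fields("System.ItemName").Value,
          rs.Fields("System.ItemUrl").Value)
    rs.MoveNext()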
The Windows Search API can also be used to convert a search query written using Advanced Query Syntax (or Natural Query Syntax, the natural-language version of AQS) to an SQL query, via the GenerateSQLFromUserQuery method of the ISearchQueryHelper interface. Searches can also be performed using the search-ms: protocol, a pseudo-protocol that lets searches be exposed as a URI. It accepts all the operators and search terms specified in AQS and can refer to saved search folders as well. When such a URI is activated, Windows Search, which is registered as a handler for the protocol, parses the URI to extract the parameters and performs the search.
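For example, activating a search-ms: URI opens an Explorer search over the index. The one-liner below (an illustration; the query string is a hypothetical AQS example) launches such a search from Python.

import os

# Open an Explorer search for documents containing "vacation".
os.startfile("search-ms:query=kind%3Adocument%20vacation")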
Networking
Winsock Kernel (WSK) is a new transport-independent kernel-mode Network Programming Interface (NPI) that provides TDI client developers with a sockets-like programming model similar to the one supported in user-mode Winsock. While most of the same sockets programming concepts exist as in user-mode Winsock, such as socket creation, bind, connect, accept, send and receive, Winsock Kernel is a completely new programming interface with unique characteristics such as asynchronous I/O that uses IRPs and event callbacks to enhance performance. TDI is still supported in Windows Vista for backward compatibility.
Windows Vista includes a specialized QoS API called qWave (Quality Windows Audio/Video Experience), which is a pre-configured quality of service module for time-dependent multimedia data, such as audio or video streams. qWave uses different packet priority schemes for real-time flows (such as multimedia packets) and best-effort flows (such as file downloads or e-mails) to ensure that real-time data incurs as little delay as possible, while providing a high-quality channel for other data packets.
Windows Filtering Platform allows external applications to access and hook into the packet processing pipeline of the networking subsystem.
Cryptography
Windows Vista features an update to the Microsoft Crypto API known as Cryptography API: Next Generation (CNG). CNG is an extensible, user mode and kernel mode API that includes support for Elliptic curve cryptography and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It also integrates with the smart card subsystem by including a Base CSP module which encapsulates the smart card API so that developers do not have to write complex CSPs.
Other features and changes
Support for Unicode 5.0
A number of new fonts:
Latin fonts: Calibri, Cambria, Candara, Consolas (monospaced), Constantia, and Corbel. Segoe UI, previously used in Windows XP Media Center Edition, is also included, despite licensing issues with Linotype.
Meiryo, supporting the new and modified characters of the JIS X 0213:2004 standard
Non-Latin fonts: Microsoft JhengHei (Chinese Traditional), Microsoft YaHei (Chinese Simplified), Majalla UI (Arabic), Gisha (Hebrew), Leelawadee (Thai) and Malgun Gothic (Korean).
Support for Adobe CFF/Type2 fonts, which provides support for contextual and discretionary ligatures.
When accessing files with the ANSI character set, if the total path length is more than the maximum allowed 260 characters, Windows Vista automatically uses the alternate short names (which follow the 8.3 limit) to shorten the total path length. In Unicode mode this is not done, as the maximum allowed length is 32,000 characters.
The long "Documents and Settings" folder is now just "Users", although a symbolic link called "Documents and Settings" is kept for compatibility. The paths of several special folders under the user profile have changed.
New support for infrared receivers and Bluetooth 2.0 wireless standards; devices supporting these can transfer files and sync data wirelessly to a Windows Vista computer with no additional software.
A non-administrator user can share only the folders under his user profile. In addition, all users have a Public folder which is shared, though an administrator can override this.
Network Projection is used to detect and use network-connected projectors. It can be used to display a presentation, or share a presentation with the machine which hosts the projector. Users can do this over a network so multiple sources can be connected at different times without having to keep moving the sources or projectors around. The network projector can be connected to the network via wireless or cable (LAN) technology to make it even more flexible. Users can not only connect to the network projector remotely but can also remotely configure it. Network projectors are designed to transmit and display still images, such as photographs and slides—not high-bandwidth transmissions, such as video streams. The projector can transmit video, but the playback quality is often poor. The binary %windir%\system32\NetProj.exe implements the Network Projection feature.
New monitor configuration APIs make it possible to adjust the monitor's display area, save and restore display settings, calibrate color and use vendor-specific monitor features. Overall, Windows Vista is designed to be more resolution-independent than its predecessors, with a particular focus on higher resolutions and high-DPI displays. Windows Presentation Foundation (WPF) applications are fully resolution-independent. Also, Transient Multimon Manager, a new feature that uses the monitor's EDID, enables automatic detection, setup and proper configuration of additional or multiple displays as they are attached and removed, on the fly. The settings are saved on a per-display basis when possible, so that users can move among multiple displays with no manual configuration.
Windows Vista includes a WSD-WIA class driver that enables all devices compliant with Microsoft's Web Services for Scanner (WS-Scan) protocol to work with WIA without any additional driver or software.
The Fax service and model are fully account-based. Fax-aware applications such as Windows Fax and Scan can send multiple documents in a single fax submission. The Fax Service API generates TIFF files for each document and merges them into a single TIFF file. Users can right-click a document in Windows Explorer and select Send to Fax Recipient.
Windows Vista introduces the 'Assistance Platform' based on MAML. Help and Support is intended to be more meaningful and clear. Guided Help, or Active Content Wizard, is an automated tutorial and self-help system introduced with Windows Vista in which a series of animated steps shows users how to complete a particular task. It highlights only the options and the parts of the screen that are relevant to the task, darkening the rest of the screen. A separate file format is used for ACW help files. The Guided Help SDK was replaced in Windows 7 by the Windows Troubleshooting Platform.
All standard text editing controls and all versions of the 'RichEdit' control now support the Text Services Framework. Also, all Tablet/Ink API applications and all HTML applications which use Internet Explorer's Trident layout engine support the Text Services Framework.
Windows Data Access Components (Windows DAC) replace MDAC 2.81 which shipped with Windows XP Service Pack 2.
DFS Replication, the successor to File Replication Service, is a state-based replication engine for file replication among DFS shares, which supports replication scheduling and bandwidth throttling. It uses Remote Differential Compression to detect and replicate only the change to files, rather than replicating entire files, if changed. DFS-R is also included with Windows Server 2003 R2.
As with Windows XP Professional x64 Edition, in Windows Vista x64, old 16-bit Windows programs are not supported. If 16-bit software needs to be run in 64-bit Windows Vista, virtualization can be used for running a 32-bit operating system.
See also
Windows Server 2008
Notes and references
External links
Windows Vista Technical Library Roadmap
Making Your Application a Windows Vista Application: The Top Ten Things to Do — from MSDN.
New Networking Features in Windows Server 2008 and Windows Vista
A list of Vista ReadyBoost compatible devices
Windows Vista
Windows Vista | Technical features new to Windows Vista | Technology | 11,851 |
24,019,807 | https://en.wikipedia.org/wiki/C20H20O7 | {{DISPLAYTITLE:C20H20O7}}
The molecular formula C20H20O7 (molar mass: 372.36 g/mol, exact mass: 372.120903 u) may refer to:
Nephroarctin, a depside
Sinensetin, a methylated flavone
Tangeritin, a methylated flavone
Molecular formulas | C20H20O7 | Physics,Chemistry | 87 |
28,700,430 | https://en.wikipedia.org/wiki/NGC%2018 | NGC 18 is a double star system located in the constellation of Pegasus. It was first recorded by Herman Schultz on 15 October 1866. It was looked for but not found by Édouard Stephan on 2 October 1882. It was independently observed by Guillaume Bigourdan in November 1886.
Both stars are light-years away, and based on this distance have a minimum separation of approximately 2,700 astronomical units, an unusually wide separation for a binary system.
See also
Double star
Binary star
List of NGC objects (1–999)
References
External links
Pegasus (constellation)
Double stars
0018
18661015 | NGC 18 | Astronomy | 119 |
24,750,268 | https://en.wikipedia.org/wiki/Polymake | polymake is a software for the algorithmic treatment of convex polyhedra.
Albeit primarily a tool to study the combinatorics and the geometry of convex polytopes and polyhedra, it is by now also capable of dealing with simplicial complexes, matroids, polyhedral fans, graphs, tropical objects, toric varieties and other objects. In particular, its capability to compute the convex hull and lattice points of a polytope proved itself to be quite useful for different kinds of research.
polymake has been cited in over 300 recent articles indexed by Zentralblatt MATH as can be seen from its entry in the swMATH database.
Special features and applications
polymake exhibits a few particularities that make it distinctive to work with.
Firstly, polymake can be used within a Perl script. Moreover, users can extend polymake and define new objects, properties, rules for computing properties, and algorithms.
Secondly, it exhibits an internal client-server scheme to accommodate the usage of Perl for object management and interfaces as well as C++ for mathematical algorithms. The server holds information about each object (e.g., a polytope), and the client sends requests to compute properties. The server has the job of determining how to complete each request from information already known about each object using a rule-based system. For example, there are many rules on how to compute the facets of a polytope. Facets can be computed from a vertex description of the polytope, and from a (possibly redundant) inequality description. polymake builds a dependency graph outlining the steps to process each request and selects the best path via a Dijkstra-type algorithm.
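The following is an illustrative sketch (not actual polymake source code) of this rule-based idea, written in Python: each rule computes one property from properties that are already known, and the resolver keeps firing applicable rules until the requested property is available. The property names and placeholder computations are hypothetical.

def resolve(known, target, rules):
    """Compute `target` by repeatedly applying rules whose inputs are known."""
    fired = True
    while target not in known and fired:
        fired = False
        for inputs, output, compute in rules:
            if output not in known and all(i in known for i in inputs):
                known[output] = compute(*(known[i] for i in inputs))
                fired = True
    return known.get(target)

# Hypothetical rules for a polytope-like object.
rules = [
    (("VERTICES",), "N_VERTICES", lambda v: len(v)),
    (("N_VERTICES",), "SUMMARY", lambda n: f"{n} vertices"),
]
square = {"VERTICES": [(0, 0), (1, 0), (0, 1), (1, 1)]}
print(resolve(square, "SUMMARY", rules))  # -> "4 vertices"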
polymake divides its collection of functions and objects into 10 different groups called applications. They behave like C++ namespaces. The polytope application was the first one developed and it is the largest.
Common: "helper" functions used in other applications.
Fan: functions for polyhedral complexes (which generalize simplicial complexes), planar drawings of 3-polytopes, polyhedral fans, and subdivisions of points or vectors.
Fulton: computations with normal toric varieties. It is named after William Fulton, author of "Introduction to Toric Varieties".
Graph: manipulation of directed and undirected graphs.
Group: focus on finite permutation groups. Basic properties of a group can be calculated like characters and conjugacy classes.
Ideal: computations on polynomial ideals: Gröbner basis, Hilbert polynomial, and radicals.
Matroid: computation of standard properties of a matroid, like bases and circuits. This application can also compute more advanced properties like the Tutte polynomial of a matroid and realizing the matroid with a polytope.
Polytope: over 230 functions or calculations that can be done with a polytope. These functions range in complexity from simply calculating basic information about a polytope (e.g., number of vertices, number of facets, tests for simplicial polytopes, and converting a vertex description to an inequality description) to combinatorial or algebraic properties (e.g., H-vector, Ehrhart polynomial, Hilbert basis, and Schlegel diagrams). There are also many visualization options.
Topaz: functions relating to abstract simplicial complexes. Many advanced topological calculations over simplicial complexes can be performed like homology groups, orientation and fundamental group. There is also a combinatorial collection of properties that can be computed, like a shelling and Hasse diagrams.
Tropical: functions for exploring tropical geometry; in particular, tropical hypersurfaces and tropical cones.
Development history
polymake version 1.0 first appeared in the proceedings of the DMV seminar "Polytopes and Optimization" held in Oberwolfach, November 1997. Version 1.0 contained only the polytope application, and the system of "applications" was not yet developed. Version 2.0 was released in July 2003, and version 3.0 was released in 2016. The last big revision, version 4.0, was released in January 2020.
Interaction with other software packages
polymake is highly modularly built and, therefore, displays great interaction with third party software packages for specialized computations, thereby providing a common interface and bridge between different tools. A user can easily (and unknowingly) switch between using different software packages in the process of computing properties of a polytope.
Used within polymake
Below is a list of third-party software packages that polymake can interface with as of version 4.0. Users are also able to write new rule files for interfacing with any software package. Note that there is some redundancy in this list (e.g., a few different packages can be used for finding the convex hull of a polytope). Because polymake uses rule files and a dependency graph for computing properties, most of these software packages are optional. However, some become necessary for specialized computations.
4ti2: software package for algebraic, geometric and combinatorial problems on linear spaces
a-tint: tropical intersection theory
azove: enumeration of 0/1 vertices
barvinok: counting of integer points in parametrized and non-parametrized polytopes
cdd: double description method for converting between an inequality and vertex description of a polytope
Geomview: interactive 3D viewing program
Gfan: Gröbner fans and tropical varieties
GraphViz: graph visualization software
homology: computation of homology groups of simplicial complexes
LattE (Lattice point Enumeration): counting lattice points inside polytopes and integration over polytopes
libnormaliz: affine monoids, vector configurations, lattice polytopes, and rational cones
lrs: implementation of the reverse-search algorithm for the vertex enumeration problem and convex hull problems
mptopcom: computation of triangulations of point configurations and matroids using parallel reverse search
nauty: automorphism groups of graphs
plantri: planar triangulations
permlib: set stabilizer and in-orbit computations
PORTA: enumerate lattice points of a polytope
ppl: Parma Polyhedra Library
qhull: Quickhull algorithm for convex hulls
singular: computer algebra system for polynomial computations, with special emphasis on commutative and non-commutative algebra, algebraic geometry, and singularity theory
sketch: for making line drawings of two- or three-dimensional solid objects
SplitsTree4: phylogenetic networks
sympol: tool to work with symmetric polyhedra
threejs: JavaScript library for animated 3D computer graphics
tikz: TeX packages for creating graphics programmatically
TropLi: for computing tropical linear spaces of matroids
tosimplex: Dual simplex algorithm implemented by Thomas Opfer
Vinci: volumes of polytopes
Used in conjunction with polymake
jupyter-polymake: allows using polymake within Jupyter notebooks.
OSCAR: Open Source Computer Algebra Research system currently under development
PolymakeInterface: package for using polymake in GAP.
PolyViewer: GUI viewer for polymake files.
References
Mathematical software
Polyhedra
Computational geometry | Polymake | Mathematics | 1,511 |
52,628,572 | https://en.wikipedia.org/wiki/List%20of%20works%20by%20John%20Lautner | List of works by American architect John Lautner.
References
Architecture lists | List of works by John Lautner | Engineering | 15 |
84,029 | https://en.wikipedia.org/wiki/Euclidean%20planes%20in%20three-dimensional%20space | In Euclidean geometry, a plane is a flat two-dimensional surface that extends indefinitely.
Euclidean planes often arise as subspaces of three-dimensional space ℝ³.
A prototypical example is one of a room's walls, infinitely extended and assumed to be infinitesimally thin.
While a pair of real numbers suffices to describe points on a plane, the relationship with out-of-plane points requires special consideration for their embedding in the ambient space ℝ³.
Derived concepts
A plane segment or planar region (or simply "plane", in lay use) is a planar surface region; it is analogous to a line segment.
A bivector is an oriented plane segment, analogous to directed line segments.
A face is a plane segment bounding a solid object.
A slab is a region bounded by two parallel planes.
A parallelepiped is a region bounded by three pairs of parallel planes.
Background
Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms (called common notions) and postulates (or axioms) which he then used to prove various geometrical statements. Although the plane in its modern sense is not directly given a definition anywhere in the Elements, it may be thought of as part of the common notions. Euclid never used numbers to measure length, angle, or area. The Euclidean plane equipped with a chosen Cartesian coordinate system is called a Cartesian plane; a non-Cartesian Euclidean plane equipped with a polar coordinate system would be called a polar plane.
A plane is a ruled surface.
Euclidean plane
Representation
This section is solely concerned with planes embedded in three dimensions: specifically, in ℝ³.
Determination by contained points and lines
In a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following:
Three non-collinear points (points not on a single line).
A line and a point not on that line.
Two distinct but intersecting lines.
Two distinct but parallel lines.
Properties
The following statements hold in three-dimensional Euclidean space but not in higher dimensions, though they have higher-dimensional analogues:
Two distinct planes are either parallel or they intersect in a line.
A line is either parallel to a plane, intersects it at a single point, or is contained in the plane.
Two distinct lines perpendicular to the same plane must be parallel to each other.
Two distinct planes perpendicular to the same line must be parallel to each other.
Point–normal form and general form of the equation of a plane
In a manner analogous to the way lines in a two-dimensional space are described using a point-slope form for their equations, planes in a three dimensional space have a natural description using a point in the plane and a vector orthogonal to it (the normal vector) to indicate its "inclination".
Specifically, let r0 be the position vector of some point P0 = (x0, y0, z0), and let n = (a, b, c) be a nonzero vector. The plane determined by the point P0 and the vector n consists of those points P, with position vector r, such that the vector drawn from P0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be described as the set of all points r such that
\[ \mathbf{n} \cdot (\mathbf{r} - \mathbf{r}_0) = 0. \]
The dot here means a dot (scalar) product.
Expanded this becomes
\[ a(x - x_0) + b(y - y_0) + c(z - z_0) = 0, \]
which is the point–normal form of the equation of a plane. This is just a linear equation
\[ ax + by + cz + d = 0, \]
where
\[ d = -(a x_0 + b y_0 + c z_0), \]
which is the expanded form of \( -\mathbf{n} \cdot \mathbf{r}_0 \).
In mathematics it is a common convention to express the normal as a unit vector, but the above argument holds for a normal vector of any non-zero length.
Conversely, it is easily shown that if a, b, c, and d are constants and a, b, and c are not all zero, then the graph of the equation
\[ ax + by + cz + d = 0 \]
is a plane having the vector n = (a, b, c) as a normal. This familiar equation for a plane is called the general form of the equation of the plane or just the plane equation.
Thus for example a regression equation of the form y = d + ax + cz (with b = −1) establishes a best-fit plane in three-dimensional space when there are two explanatory variables.
Describing a plane with a point and two vectors lying on it
Alternatively, a plane may be described parametrically as the set of all points of the form
\[ \mathbf{r} = \mathbf{r}_0 + s\,\mathbf{v} + t\,\mathbf{w}, \]
where s and t range over all real numbers, v and w are given linearly independent vectors defining the plane, and r0 is the vector representing the position of an arbitrary (but fixed) point on the plane. The vectors v and w can be visualized as vectors starting at r0 and pointing in different directions along the plane. The vectors v and w can be perpendicular, but cannot be parallel.
Describing a plane through three points
Let p1 = (x1, y1, z1), p2 = (x2, y2, z2), and p3 = (x3, y3, z3) be non-collinear points.
Method 1
The plane passing through p1, p2 and p3 can be described as the set of all points (x, y, z) that satisfy the following determinant equation:
\[ \begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0. \]
Method 2
To describe the plane by an equation of the form \( ax + by + cz + d = 0 \), solve the following system of equations:
\[ a x_1 + b y_1 + c z_1 + d = 0, \quad a x_2 + b y_2 + c z_2 + d = 0, \quad a x_3 + b y_3 + c z_3 + d = 0. \]
This system can be solved using Cramer's rule and basic matrix manipulations. Let
\[ D = \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}. \]
If D is non-zero (so for planes not through the origin) the values for a, b and c can be calculated as follows:
\[ a = -\frac{d}{D} \begin{vmatrix} 1 & y_1 & z_1 \\ 1 & y_2 & z_2 \\ 1 & y_3 & z_3 \end{vmatrix}, \quad b = -\frac{d}{D} \begin{vmatrix} x_1 & 1 & z_1 \\ x_2 & 1 & z_2 \\ x_3 & 1 & z_3 \end{vmatrix}, \quad c = -\frac{d}{D} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}. \]
These equations are parametric in d. Setting d equal to any non-zero number and substituting it into these equations will yield one solution set.
Method 3
This plane can also be described by the point-and-normal prescription above. A suitable normal vector is given by the cross product
\[ \mathbf{n} = (\mathbf{p}_2 - \mathbf{p}_1) \times (\mathbf{p}_3 - \mathbf{p}_1), \]
and the point r0 can be taken to be any of the given points p1, p2 or p3 (or any other point in the plane).
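As a worked example (added for illustration), take p1 = (1, 0, 0), p2 = (0, 1, 0) and p3 = (0, 0, 1). Then
\[ \mathbf{n} = (p_2 - p_1) \times (p_3 - p_1) = (-1, 1, 0) \times (-1, 0, 1) = (1, 1, 1), \]
so the point–normal form \( 1(x - 1) + 1(y - 0) + 1(z - 0) = 0 \) simplifies to the plane \( x + y + z = 1 \).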
Operations
Distance from a point to a plane
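For reference (formula added here for illustration): the distance from a point \( (x_0, y_0, z_0) \) to the plane \( ax + by + cz + d = 0 \) is
\[ \operatorname{dist} = \frac{\lvert a x_0 + b y_0 + c z_0 + d \rvert}{\sqrt{a^2 + b^2 + c^2}}. \]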
Line–plane intersection
Line of intersection between two planes
Sphere–plane intersection
Occurrence in nature
A plane serves as a mathematical model for many physical phenomena, such as specular reflection in a plane mirror or wavefronts in a traveling plane wave.
The free surface of undisturbed liquids tends to be nearly flat (see flatness).
The flattest surface ever manufactured is a quantum-stabilized atom mirror.
In astronomy, various reference planes are used to define positions in orbit.
Anatomical planes may be lateral ("sagittal"), frontal ("coronal") or transversal.
In geology, beds (layers of sediments) often are planar.
Planes are involved in different forms of imaging, such as the focal plane, picture plane, and image plane.
Miller indices
The attitude of a lattice plane is the orientation of the line normal to the plane, and is described by the plane's Miller indices. In three-space a family of planes (a series of parallel planes) can be denoted by its Miller indices (hkl), so the family of planes has an attitude common to all its constituent planes.
Strike and dip
Many features observed in geology are planes or lines, and their orientation is commonly referred to as their attitude. These attitudes are specified with two angles.
For a line, these angles are called the trend and the plunge. The trend is the compass direction of the line, and the plunge is the downward angle it makes with a horizontal plane.
For a plane, the two angles are called its strike (angle) and its dip (angle). A strike line is the intersection of a horizontal plane with the observed planar feature (and therefore a horizontal line), and the strike angle is the bearing of this line (that is, relative to geographic north or from magnetic north). The dip is the angle between a horizontal plane and the observed planar feature as observed in a third vertical plane perpendicular to the strike line.
See also
Dihedral angle
Flat (geometry)
Half-plane
Hyperplane
Plane coordinates
Plane of incidence
Plane of rotation
Plane orientation
Polygon
Notes
Explanatory notes
Citations
References
External links
"Easing the Difficulty of Arithmetic and Planar Geometry" is an Arabic manuscript, from the 15th century, that serves as a tutorial about plane geometry and arithmetic. | Euclidean planes in three-dimensional space | Mathematics | 1,587 |
9,379,243 | https://en.wikipedia.org/wiki/Bitfrost | Bitfrost is the security design specification for the OLPC XO, a low cost laptop intended for children in developing countries and developed by the One Laptop Per Child (OLPC) project. Bitfrost's main architect is Ivan Krstić. The first public specification was made available in February 2007.
Bitfrost architecture
Passwords
No passwords are required to access or use the computer.
System of rights
Every program, when first installed, requests certain bundles of rights, for instance "accessing the camera" or "accessing the internet". The system keeps track of these rights, and the program is later executed in an environment which makes only the requested resources available. The implementation is not specified by Bitfrost, but dynamic creation of security contexts is required. The first implementation was based on vserver; the second and current implementation is based on user IDs and group IDs (/etc/passwd is edited when an activity is started), and a future implementation might involve SELinux or some other technology.
By default, the system denies certain combinations of rights; for instance, a program would not be granted both the right to access the camera and to access the internet. Anybody can write and distribute programs that request allowable right combinations. Programs that require normally unapproved right combinations need a cryptographic signature by some authority. The laptop's user can use the built-in security panel to grant additional rights to any application.
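As an illustration of this denial logic, here is a hypothetical sketch in Python (not OLPC code; the permission names and forbidden combinations are stand-ins for the bundles in the specification).

# Illustrative only: Bitfrost enforcement is part of the OS, not application code.
FORBIDDEN_COMBOS = [{"camera", "network"}, {"microphone", "network"}]

def may_install(requested, signed=False, user_override=False):
    """Deny unsigned programs whose requested rights hit a forbidden combination."""
    if signed or user_override:
        return True
    return not any(combo <= set(requested) for combo in FORBIDDEN_COMBOS)

print(may_install({"camera", "network"}))               # False: denied by default
print(may_install({"camera"}))                          # True
print(may_install({"camera", "network"}, signed=True))  # True: signed by an authority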
Modifying the system
The users can modify the laptop's operating system, a special version of Fedora Linux running the new Sugar graphical user interface and operating on top of Open Firmware. The original system remains available in the background and can be restored.
By acquiring a developer key from a central location, a user may even modify the background copy of the system and many aspects of the BIOS. Such a developer key is only given out after a waiting period (so that theft of the machine can be reported in time) and is only valid for one particular machine.
Theft-prevention leases
The laptops request a new "lease" from a central network server once a day. These leases come with an expiry time (typically a month), and the laptop stops functioning if all its leases have expired. Leases can also be given out from local school servers or via a portable USB device. Laptops that have been registered as stolen cannot acquire a new lease.
The deploying country decides whether this lease system is used and sets the lease expiry time.
Microphone and camera
The laptop's built-in camera and microphone are hard-wired to LEDs, so that the user always knows when they are operating. This cannot be switched off by software.
Privacy concerns
Len Sassaman, a computer security researcher at the Catholic University of Leuven in Belgium and his colleague Meredith Patterson at the University of Iowa in Iowa City claim that the Bitfrost system has inadvertently become a possible tool for unscrupulous governments or government agencies to definitively trace the source of digital information and communications that originated on the laptops. This is a potentially serious issue as many of the countries which have the laptops have governments with questionable human rights records.
Notes
The specification itself mentions that the name "Bitfrost" is a play on the Norse mythology concept of Bifröst, the bridge between the world of mortals and the realm of Gods. According to the Prose Edda, the bridge was built to be strong, yet it will eventually be broken; the bridge is an early recognition of the idea that there's no such thing as a perfect security system.
See also
CapDesk
References
External links
Ivan Krstić's homepage
OLPC Wiki: Bitfrost
Bitfrost specification, version Draft-19 - release 1, 7 February 2007
High Security for $100 Laptop, Wired News, 7 February 2007
Making antivirus software obsolete - Technology Review magazine recognized Ivan Krstić, Bitfrost's main architect, as one of the world's top innovators under the age of 35 (Krstić was 21 at the time of publication) for his work on the system.
One Laptop per Child
Cryptographic software | Bitfrost | Mathematics | 848 |
42,682,437 | https://en.wikipedia.org/wiki/Information%20technology%20generalist | An information technology generalist is a technology professional proficient in many facets of information technology without any specific specialty. Furthermore, an IT generalist is generally considered to possess general business knowledge and soft skills allowing them to be adaptable in a wide array of work environments. The IT Generalist is often able to fulfill many different roles within a company depending on specific technology needs. In a small business environment, budgets often delegate many different facets of technology to a single individual, especially considering a small business will often require an individual proficient in desktop support, web page design, databases, phone systems, and even server administration. The role of the IT Generalist within a larger company, however, often becomes more of a project leader or integrations specialist due to a project team consisting of a varying degree of IT specialists and interfacing with end-users requiring soft-skills.
Industry role
The information technology industry consists of many disparate technologies that each serve a critical piece of the total technology puzzle. As technology practices change new methods, techniques, and tools become available that often require human expertise in order to implement and maintain technology systems. The human expertise required in order to manage these new systems has given rise to what is defined as an IT specialist—someone with an expert level of competency and knowledge on a particular piece of technology. In comparison, the IT Generalist does not possess the expert knowledge to implement an advanced system but instead possesses the knowledge and experience to ensure all of the disparate technologies can function properly together.
The generalist has become an increasingly valuable asset to a company, especially when it comes to the rapidly changing field of technology. In many cases, companies are more apt to demand hiring and retaining employees that are multi-functional, especially those that have analytical abilities such as critical reasoning and statistical analysis.
Market trends
Market trends tend to be moving away from the hiring of IT specialists and toward individuals who possess a broader technical base along with soft skills such as enthusiasm, passion, and energy. In some cases, even IT specialists with years of experience may be passed over in favor of more adaptable hires, because as technology changes, innovation may be stifled by specialists who cannot adjust to new procedures and processes in the market. The attributes associated with an IT generalist have been observed in other occupational series as well. In a study conducted over five years, the political predictions of more than 250 political science specialists were recorded along with a large sample of generalists' predictions, to determine whether the specialists, with their expert knowledge, were more effective in their forecasts. The study discovered that those with limited political science exposure were more accurate in their predictions, most likely as a result of their more varied and unfocused exposure to political science compared to that of the specialists.
Arguments for and against
The debate between the IT generalist and specialist has been going on for years as companies are trying to determine how to best utilize their IT department skills and assets. Up until recently the general consensus of the market has been the hiring of IT specialists due to the implementation of advanced technologies such as cloud computing, next-generation mobile application development, and the virtualization of technology infrastructure. It has been found, however, that the hiring of specialists has led to the rise of silos of talent within a company leading to difficulty implementing new business processes and taking advantage of inter-departmental collaboration. This compartmentalization of knowledge and skills has led to companies shifting focus to the hiring and implementing of IT Generalists for the purposes of implementing a more mobile, adaptable, and diverse technology department.
According to multiple studies in 2010 companies who were surveyed noted their goal was to reduce the amount of specialists they hire and instead hiring “versatilists.” These versatilists are synonymous with IT Generalists because they have an overall general sense of technology as well as business knowledge while also possessing “soft skills” that are considered lacking in technology-minded individuals focused on specific technology skill-sets.
Further reading
Tetlock, Philip (2005). Expert Political Judgment: How Good Is It? How Can We Know?. Princeton, NJ: Princeton University Press.
Stewart, Jim (2009). Technology Jobs: Secrets to Landing your Next Job in Information Technology. Equity Press.
Rubenstein, Albert (2007). Managing Technology in the Decentralized Firm. Authors Choice Press.
References
Computer occupations | Information technology generalist | Technology | 876 |
16,638,498 | https://en.wikipedia.org/wiki/Increment%20borer | An increment borer is a specialized tool used to extract a section of wood tissue from a living tree with relatively minor injury to the plant itself. The tool consists of a handle, an auger bit and a small, half circular metal tray ( the core extractor) that fits into the auger bit; the last is usually manufactured from carbide steel. It is most often used by foresters, researchers and scientists to determine the age of a tree. This science is also called dendrochronology. The operation enables the user to count the rings in the core sample, to reveal the age of the tree being examined and its growth rate. After use the tool breaks down: auger bit and extractor fit within the handle, making it highly compact and easy to carry.
Use
Effective use of an increment borer requires specialized training. Samples are taken at breast height or stump height of the tree trunk, depending on the user's objectives; during use the borer should be well lubricated, thus making it easier to use and preventing it from becoming stuck in the wood.
Maintenance
As with any other tool, increment borers should be properly maintained to keep them in good working condition; they should be thoroughly cleaned after each use and dried before storing. Sharpening kits are available and should be used regularly, before the bits become dull.
Types
Increment borers come in different lengths and diameters and have different types of threads. Diameters range from 3.8 millimeters to 12 millimeters, the most common being 4, 4.3, and 5.15 millimeters, and borers are available with two or three threads. The two-threaded type is more appropriate for hard woods because it cuts at a slower rate, which applies more torque. The three-threaded auger penetrates the wood at a faster rate than the two-threaded type.
References
External links
Increment borer, Virginia Tech
Forest modelling
Dimensional instruments
Forestry tools | Increment borer | Physics,Mathematics | 405 |
66,369,269 | https://en.wikipedia.org/wiki/BiP%20%28software%29 | BiP is a freeware instant messaging application developed by Lifecell Ventures Cooperatief U.A., a subsidiary of Turkcell incorporated in the Netherlands. It allows users to send text messages, voice messages and video calling, and it can be downloaded from the App Store, Google Play, and Huawei AppGallery. BiP has over 53 million users worldwide, and was first released in 2013.
Functions
BiP is a secure and free communication platform. BiP allows making video and audio calls, and allows sharing images, videos and location. BiP includes instant translations into 106 languages and exchange rates.
Others
Banglalink announced a BiP messenger partnership in Bangladesh. The Communications Office of President Erdoğan opposed WhatsApp's enforcement of its updated privacy policy and announced that Erdoğan had left WhatsApp and opened accounts on Telegram and BiP. The Turkish Ministry of National Defense announced that it would move its information groups to BiP for the same reason.
The CEO of BiP is Burak Akinci.
The number of downloads of the app is 80 million globally.
See also
Comparison of instant messaging clients
Comparison of VoIP software
List of most-downloaded Google Play applications
Comparison of user features of messaging platforms
References
Instant messaging clients
Mobile applications
2013 software
Turkish brands
Android (operating system) software
IOS software
Social media
VoIP software
Cross-platform software
Communication software
Software companies of Turkey | BiP (software) | Technology | 346 |
56,061 | https://en.wikipedia.org/wiki/Discrete%20space | In topology, a discrete space is a particularly simple example of a topological space or similar structure, one in which the points form a discontinuous sequence, meaning they are isolated from each other in a certain sense. The discrete topology is the finest topology that can be given on a set. Every subset is open in the discrete topology, so that in particular, every singleton subset is an open set in the discrete topology.
Definitions
Given a set $X$:
the discrete topology on $X$ is defined by letting every subset of $X$ be open (and hence also closed), and $X$ is a discrete topological space if it is equipped with its discrete topology;
the discrete uniformity on $X$ is defined by letting every superset of the diagonal $\{(x,x) : x \in X\}$ in $X \times X$ be an entourage, and $X$ is a discrete uniform space if it is equipped with its discrete uniformity;
the discrete metric $\rho$ on $X$ is defined by $\rho(x,y) = 1$ if $x \neq y$ and $\rho(x,y) = 0$ if $x = y$, and $X$ is a discrete metric space if it is equipped with its discrete metric.
A metric space $(E, d)$ is said to be uniformly discrete if there exists a packing radius $r > 0$ such that, for any $x, y \in E$, one has either $x = y$ or $d(x, y) > r$. The topology underlying a metric space can be discrete, without the metric being uniformly discrete: for example, the usual metric on the set $\{1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots\}$.
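For concreteness, here is the standard computation (not specific to this article's sources) showing that the discrete metric induces the discrete topology: every singleton is an open ball, and every subset is a union of singletons, hence open.

```latex
\rho(x,y) \;=\; \begin{cases} 0, & x = y,\\ 1, & x \neq y, \end{cases}
\qquad\Longrightarrow\qquad
B_{1/2}(x) \;=\; \{\, y \in X : \rho(x,y) < \tfrac{1}{2} \,\} \;=\; \{x\}.
```

The same inequality shows that the discrete metric is uniformly discrete, with packing radius $r = 1/2$ (indeed any $r < 1$ works).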
Properties
The underlying uniformity on a discrete metric space is the discrete uniformity, and the underlying topology on a discrete uniform space is the discrete topology.
Thus, the different notions of discrete space are compatible with one another.
On the other hand, the underlying topology of a non-discrete uniform or metric space can be discrete; an example is the metric space $X := \{1/n : n \in \mathbb{N}\}$ (with metric inherited from the real line and given by $d(x,y) = |x - y|$).
This is not the discrete metric; also, this space is not complete and hence not discrete as a uniform space.
Nevertheless, it is discrete as a topological space.
We say that $X$ is topologically discrete but not uniformly discrete or metrically discrete.
Additionally:
The topological dimension of a discrete space is equal to 0.
A topological space is discrete if and only if its singletons are open, which is the case if and only if it does not contain any accumulation points.
The singletons form a basis for the discrete topology.
A uniform space is discrete if and only if the diagonal is an entourage.
Every discrete topological space satisfies each of the separation axioms; in particular, every discrete space is Hausdorff, that is, separated.
A discrete space is compact if and only if it is finite.
Every discrete uniform or metric space is complete.
Combining the above two facts, every discrete uniform or metric space is totally bounded if and only if it is finite.
Every discrete metric space is bounded.
Every discrete space is first-countable; it is moreover second-countable if and only if it is countable.
Every discrete space is totally disconnected.
Every non-empty discrete space is second category.
Any two discrete spaces with the same cardinality are homeomorphic.
Every discrete space is metrizable (by the discrete metric).
A finite space is metrizable only if it is discrete.
If $X$ is a topological space and $Y$ is a set carrying the discrete topology, then $X$ is evenly covered by $X \times Y$ (the projection map is the desired covering)
The subspace topology on the integers as a subspace of the real line is the discrete topology.
A discrete space is separable if and only if it is countable.
Any topological subspace of (with its usual Euclidean topology) that is discrete is necessarily countable.
Any function from a discrete topological space to another topological space is continuous, and any function from a discrete uniform space to another uniform space is uniformly continuous. That is, the discrete space is free on the set in the category of topological spaces and continuous maps or in the category of uniform spaces and uniformly continuous maps. These facts are examples of a much broader phenomenon, in which discrete structures are usually free on sets.
With metric spaces, things are more complicated, because there are several categories of metric spaces, depending on what is chosen for the morphisms. Certainly the discrete metric space is free when the morphisms are all uniformly continuous maps or all continuous maps, but this says nothing interesting about the metric structure, only the uniform or topological structure. Categories more relevant to the metric structure can be found by limiting the morphisms to Lipschitz continuous maps or to short maps; however, these categories don't have free objects (on more than one element). However, the discrete metric space is free in the category of bounded metric spaces and Lipschitz continuous maps, and it is free in the category of metric spaces bounded by 1 and short maps. That is, any function from a discrete metric space to another bounded metric space is Lipschitz continuous, and any function from a discrete metric space to another metric space bounded by 1 is short.
Going the other direction, a function from a topological space to a discrete space is continuous if and only if it is locally constant in the sense that every point in has a neighborhood on which is constant.
Every ultrafilter $\mathcal{U}$ on a non-empty set $X$ can be associated with the topology $\tau = \mathcal{U} \cup \{\varnothing\}$ on $X$, with the property that every non-empty proper subset $S$ of $X$ is either an open subset or else a closed subset, but never both. Said differently, every subset is open or closed, but (in contrast to the discrete topology) the only subsets that are both open and closed (i.e. clopen) are $\varnothing$ and $X$. In comparison, every subset of $X$ is open and closed in the discrete topology.
Examples and uses
A discrete structure is often used as the "default structure" on a set that doesn't carry any other natural topology, uniformity, or metric; discrete structures can often be used as "extreme" examples to test particular suppositions. For example, any group can be considered as a topological group by giving it the discrete topology, implying that theorems about topological groups apply to all groups. Indeed, analysts may refer to the ordinary, non-topological groups studied by algebraists as "discrete groups". In some cases, this can be usefully applied, for example in combination with Pontryagin duality. A 0-dimensional manifold (or differentiable or analytic manifold) is nothing but a discrete and countable topological space (an uncountable discrete space is not second-countable). We can therefore view any discrete countable group as a 0-dimensional Lie group.
A product of countably infinite copies of the discrete space of natural numbers is homeomorphic to the space of irrational numbers, with the homeomorphism given by the continued fraction expansion. A product of countably infinite copies of the discrete space is homeomorphic to the Cantor set; and in fact uniformly homeomorphic to the Cantor set if we use the product uniformity on the product. Such a homeomorphism is given by using ternary notation of numbers. (See Cantor space.) Every fiber of a locally injective function is necessarily a discrete subspace of its domain.
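The homeomorphism with the irrationals can be made concrete: a sequence of positive integers corresponds to the irrational number having that continued-fraction expansion. A small sketch (function name illustrative) evaluating finite prefixes exactly:

```python
from fractions import Fraction

def cf_value(terms):
    """Evaluate the finite continued fraction [a0; a1, ..., an] exactly."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# Prefixes of [1; 2, 2, 2, ...] converge to sqrt(2) ~ 1.41421...
for n in range(2, 7):
    approx = cf_value([1] + [2] * n)
    print(n, approx, float(approx))
```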
In the foundations of mathematics, the study of compactness properties of products of $\{0,1\}$ is central to the topological approach to the ultrafilter lemma (equivalently, the Boolean prime ideal theorem), which is a weak form of the axiom of choice.
Indiscrete spaces
In some ways, the opposite of the discrete topology is the trivial topology (also called the indiscrete topology), which has the fewest possible open sets (just the empty set and the space itself). Where the discrete topology is initial or free, the indiscrete topology is final or cofree: every function from a topological space to an indiscrete space is continuous, etc.
See also
Cylinder set
List of topologies
Taxicab geometry
References
General topology
Metric spaces
Topological spaces
Topology | Discrete space | Physics,Mathematics | 1,451 |
1,572,904 | https://en.wikipedia.org/wiki/Mecoptera | Mecoptera (from the Greek: mecos = "long", ptera = "wings") is an order of insects in the superorder Holometabola with about six hundred species in nine families worldwide. Mecopterans are sometimes called scorpionflies after their largest family, Panorpidae, in which the males have enlarged genitals raised over the body that look similar to the stingers of scorpions, and long beaklike rostra. The Bittacidae, or hangingflies, are another prominent family and are known for their elaborate mating rituals, in which females choose mates based on the quality of gift prey offered to them by the males. A smaller group is the snow scorpionflies, family Boreidae, adults of which are sometimes seen walking on snowfields. In contrast, the majority of species in the order inhabit moist environments in tropical locations.
The Mecoptera are closely related to the Siphonaptera (fleas), and a little more distantly to the Diptera (true flies). They are somewhat fly-like in appearance, being small to medium-sized insects with long slender bodies and narrow membranous wings. Most breed in moist environments such as leaf litter or moss, and the eggs may not hatch until the wet season arrives. The larvae are caterpillar-like and mostly feed on vegetable matter, and the non-feeding pupae may pass through a diapause until weather conditions are favorable.
Early Mecoptera may have played an important role in pollinating extinct species of gymnosperms before the evolution of other insect pollinators such as bees. Adults of modern species are overwhelmingly predators or consumers of dead organisms. In a few areas, some species are the first insects to arrive at a cadaver, making them useful in forensic entomology.
Diversity
Mecopterans vary considerably in length between species. There are about six hundred extant species known, divided into thirty-four genera in nine families. The majority of the species are contained in the families Panorpidae and Bittacidae. Besides these there are about four hundred known fossil species in about eighty-seven genera, which are more diverse than the living members of the order. The group is sometimes called the scorpionflies, from the turned-up "tail" of the male's genitalia in the Panorpidae.
Distribution of mecopterans is worldwide; the greatest diversity at the species level is in the Afrotropic and Palearctic realms, but there is greater diversity at the generic and family level in the Neotropic, Nearctic and Australasian realms. They are absent from Madagascar and many islands and island groups; this may demonstrate that their dispersal ability is low, with Trinidad, Taiwan and Japan, where they are found, having had recent land bridges to the nearest continental land masses.
Evolution and phylogeny
Taxonomic history
The European scorpionfly was named Panorpa communis by Linnaeus in 1758.
The Mecoptera were named by Alpheus Hyatt and Jennie Maria Arms in 1891. The name is from the Greek, mecos meaning long, and ptera meaning wings.
The families of Mecoptera are well accepted by taxonomists but their relationships have been debated. In 1987, R. Willman treated the Mecoptera as a clade, containing the Boreidae as sister to the Meropeidae, but in 2002 Michael F. Whiting declared the Mecoptera so-defined as paraphyletic, with the Boreidae as sister to another order, the Siphonaptera (fleas).
Fossil history
Among the earliest members of the Mecoptera are the Nannochoristidae of Upper Permian age. Fossil Mecoptera become abundant and diverse during the Cretaceous, for example in China, where panorpids such as Jurassipanorpa, hangingflies (Bittacidae and Cimbrophlebiidae), and Orthophlebiidae have been found.
Extinct Mecoptera species may have been important pollinators of early gymnosperm seed plants during the late Middle Jurassic to mid–Early Cretaceous periods before other pollinating groups such as the bees evolved. These were mainly wind-pollinated plants, but fossil mecopterans had siphon-feeding apparatus that could have fertilized these early gymnosperms by feeding on their nectar and pollen. The lack of iron enrichment in their fossilized probosces rules out their use for drinking blood. Eleven species have been identified from three families, Mesopsychidae, Aneuretopsychidae, and Pseudopolycentropodidae, within the clade Aneuretopsychina, ranging in size from the small Parapolycentropus burmiticus to the much larger Lichnomesopsyche gloriae, some with probosces strikingly long relative to the body. It has been suggested that these mecopterans transferred pollen on their mouthparts and head surfaces, as do bee flies and hoverflies today, but no such associated pollen has been found, even when the insects were finely preserved in Eocene Baltic amber. They likely pollinated plants such as Caytoniaceae, Cheirolepidiaceae, and Gnetales, which have ovulate organs that are either poorly suited for wind pollination or have structures that could support long-proboscid fluid feeding. The Aneuretopsychina were the most diverse group of mecopterans from the Latest Permian, when they took the place of the Permochoristidae, to the Middle Triassic. During the Late Triassic through the Middle Jurassic, Aneuretopsychina species were gradually replaced by species from the Parachoristidae and Orthophlebiidae. Modern mecopteran families are derived from the Orthophlebiidae.
External relationships
Mecoptera have special importance in the evolution of the insects. Two of the most important insect orders, Lepidoptera (butterflies and moths) and Diptera (true flies), along with Trichoptera (caddisflies), probably evolved from ancestors belonging to, or strictly related to, the Mecoptera. Evidence includes anatomical and biochemical similarities as well as transitional fossils, such as Permotanyderus and Choristotanyderus, which lie between the Mecoptera and Diptera. The group was once much more widespread and diverse than it is now, with four suborders during the Mesozoic.
It is unclear as of 2020 whether the Mecoptera form a single clade, or whether the Siphonaptera (fleas) are inside that clade, so that the traditional "Mecoptera" taxon is paraphyletic. However the earlier suggestion that the Siphonaptera are sister to the Boreidae is not supported; instead, there is the possibility that they are sister to another Mecopteran family, the Nannochoristidae. The two possible trees are shown below:
(a) Mecoptera (clades in boldface) is paraphyletic, containing Siphonaptera:
(b) Mecoptera is monophyletic, sister to Siphonaptera:
Internal relationships
All the families were formerly treated as part of a single order, Mecoptera. The relationships between the families are, however, a matter of debate. The cladogram, from Cracraft and Donoghue 2004, places the Nannochoristidae as a separate order, with the Boreidae, as the sister group to the Siphonaptera, also as its own order. The Eomeropidae is suggested to be the sister group to the rest of the Mecoptera, with the position of the Bittacidae unclear. Of those other families, the Meropeidae is the most basal, and the relationships of the rest are not completely clear.
Biology
Morphology
Mecoptera are small to medium-sized insects with long beaklike rostra, membranous wings and slender, elongated bodies. They have relatively simple mouthparts, with a long labium, long mandibles and fleshy palps, which resemble those of the more primitive true flies. Like many other insects, they possess compound eyes on the sides of their heads, and three ocelli on the top. The antennae are filiform (thread-shaped) and contain multiple segments.
The fore and hind wings are similar in shape, being long and narrow, with numerous cross-veins, and somewhat resembling those of primitive insects such as mayflies. A few genera, however, have reduced wings, or have lost them altogether. The abdomen is cylindrical with eleven segments, the first of which is fused to the metathorax. The cerci consist of one or two segments. The abdomen typically curves upwards in the male, superficially resembling the tail of a scorpion, the tip containing an enlarged structure called the genital bulb.
The caterpillar-like larvae have hard sclerotised heads with mandibles (jaws), short true legs on the thorax, prolegs on the first eight abdominal segments, and a suction disc or pair of hooks on the terminal tenth segment. The pupae have free appendages rather than being secured within a cocoon (they are exarate).
Ecology
Mecopterans mostly inhabit moist environments although a few species are found in semi-desert habitats. Scorpionflies, family Panorpidae, generally live in broad-leaf woodlands with plentiful damp leaf litter. Snow scorpionflies, family Boreidae, appear in winter and are to be seen on snowfields and on moss; the larvae being able to jump like fleas. Hangingflies, family Bittacidae, occur in forests, grassland and caves with high moisture levels. They mostly breed among mosses, in leaf litter and other moist places, but their reproductive habits have been little studied, and at least one species, Nannochorista philpotti, has aquatic larvae.
Adult mecopterans are mostly scavengers, feeding on decaying vegetation and the soft bodies of dead invertebrates. Panorpa raid spider webs to feed on trapped insects and even the spiders themselves, and hangingflies capture flies and moths with their specially modified legs. Some groups consume pollen, nectar, midge larvae, carrion and moss fragments. Most mecopterans live in moist environments; in hotter climates, the adults may therefore be active and visible only for short periods of the year.
Mating behaviour
Various courtship behaviours have been observed among mecopterans, with males often emitting pheromones to attract mates. The male may provide an edible gift such as a dead insect or a brown salivary secretion to the female. Some boreids have hook-like wings which the male uses to pick up and place the female on his back while copulating. Male panorpids vibrate their wings or even stridulate while approaching a female.
Hangingflies (Bittacidae) provide a nuptial meal in the form of a captured insect prey, such as a caterpillar, bug, or fly. The male attracts a female with a pheromone from vesicles on his abdomen; he retracts these once a female is nearby, and presents her with the prey. While she evaluates the gift, he locates her genitalia with his. If she stays to eat the prey, his genitalia attach to hers, and the female lowers herself into an upside-down hanging position and eats the prey while mating. Larger prey result in longer mating times. In Hylobittacus apicalis, prey of modest size give between 1 and 17 minutes of mating. Larger males of that species give prey as big as houseflies, earning up to 29 minutes of mating, maximal sperm transfer, more oviposition, and a refractory period during which the female does not mate with other males: all of these increase the number of offspring the male is likely to have.
Life-cycle
The female lays the eggs in close contact with moisture, and the eggs typically absorb water and increase in size after deposition. In species that live in hot conditions, the eggs may not hatch for several months, the larvae only emerging when the dry season has finished. More typically, however, they hatch after a relatively short period of time. The larvae are usually quite caterpillar-like, with short, clawed, true legs, and a number of abdominal prolegs. They have sclerotised heads with mandibulate mouthparts. Larvae possess compound eyes, which is unique among holometabolous insects. The tenth abdominal segment bears either a suction disc, or, less commonly, a pair of hooks. They generally eat vegetation or scavenge for dead insects, although some predatory larvae are known. The larva crawls into the soil or decaying wood to pupate, and does not spin a cocoon. The pupae are exarate, meaning the limbs are free of the body, and are able to move their mandibles, but are otherwise entirely nonmotile. In drier environments, they may spend several months in diapause, before emerging as adults once the conditions are more suitable.
Interaction with humans
Forensic entomology makes use of scorpionflies' habit of feeding on human corpses. In areas where the family Panorpidae occurs, such as the eastern United States, these scorpionflies can be the first insects to arrive at a donated human cadaver, and remain on a corpse for one or two days. The presence of scorpionflies thus indicates that a body must be fresh.
Scorpionflies are sometimes described as looking "sinister", particularly from the male's raised "tail" resembling a scorpion's sting. A popular but incorrect belief is that they can sting with their tails.
References
External links
Mecoptera at the Tree of Life
Mecoptera image gallery at myrmecos.net
Video of Mecoptera from Austria
Mecoptera in UK on BBC wildlife website (third image in)
Insect orders
Extant Permian first appearances
Paraphyletic groups | Mecoptera | Biology | 2,934 |
901,091 | https://en.wikipedia.org/wiki/Sodium%20triphosphate | Sodium triphosphate (STP), also sodium tripolyphosphate (STPP) or tripolyphosphate (TPP), is an inorganic compound with formula Na5P3O10. It is the sodium salt of the polyphosphate penta-anion, which is the conjugate base of triphosphoric acid. It is produced on a large scale as a component of many domestic and industrial products, especially detergents. Environmental problems associated with eutrophication are attributed to its widespread use.
Preparation and properties
Sodium tripolyphosphate is produced by heating a stoichiometric mixture of disodium phosphate, Na2HPO4, and monosodium phosphate, NaH2PO4, under carefully controlled conditions.
2 Na2HPO4 + NaH2PO4 → Na5P3O10 + 2 H2O
In this way, approximately 2 million tons are produced annually.
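The stoichiometry above is easy to verify with standard atomic masses; the sketch below (illustrative only) recovers the molar masses and the reactant tonnage required per tonne of product:

```python
# Mass balance for: 2 Na2HPO4 + NaH2PO4 -> Na5P3O10 + 2 H2O
ATOMIC_MASS = {"Na": 22.990, "H": 1.008, "P": 30.974, "O": 15.999}  # g/mol

def molar_mass(formula):
    """Molar mass (g/mol) from an element -> count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

m_dshp = molar_mass({"Na": 2, "H": 1, "P": 1, "O": 4})  # Na2HPO4, ~141.96
m_mshp = molar_mass({"Na": 1, "H": 2, "P": 1, "O": 4})  # NaH2PO4, ~119.98
m_stpp = molar_mass({"Na": 5, "P": 3, "O": 10})         # Na5P3O10, ~367.86

# Stoichiometric tonnage of each reactant per tonne of STPP.
print(f"Na2HPO4: {2 * m_dshp / m_stpp:.3f} t per t STPP")
print(f"NaH2PO4: {1 * m_mshp / m_stpp:.3f} t per t STPP")
```

The two figures sum to about 1.10 t; the extra ~0.10 t leaves as the two moles of water of condensation.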
STPP is a colourless salt, which exists both in anhydrous form and as the hexahydrate. The anion can be described as the pentanionic chain [O3POP(O)2OPO3]5−. Many related di-, tri-, and polyphosphates are known including the cyclic triphosphate (e.g. sodium trimetaphosphate). It binds strongly to metal cations as both a bidentate and tridentate chelating agent.
Uses
Detergents
The majority of STPP is consumed as a component of commercial detergents. It serves as a "builder", industrial jargon for a water softener. In hard water (water that contains high concentrations of Mg2+ and Ca2+), detergents are deactivated. Being a highly charged chelating agent, TPP5− binds to dications tightly and prevents them from interfering with the sulfonate detergent.
Food
STPP is a preservative for seafood, meats, poultry, and animal feeds. It is common in food production as E number E451. In foods, STPP is used as an emulsifier and to retain moisture. Many governments regulate the quantities allowed in foods, as it can substantially increase the sale weight of seafood in particular. The United States Food and Drug Administration lists STPP as Generally recognized as safe.
Other
Other uses (hundreds of thousands of tons/year) include ceramics (to decrease the viscosity of glazes up to a certain limit), leather tanning (as masking agent and synthetic tanning agent, SYNTAN), anticaking agents, setting retarders, flame retardants, paper, anticorrosion pigments, textiles, rubber manufacture, fermentation, and antifreeze. TPP is used as a polyanion crosslinker in polysaccharide-based drug delivery. Toothpaste may contain sodium triphosphate.
Health effects
High serum phosphate concentration has been identified as a predictor of cardiovascular events and mortality. While phosphate is present in the body and food in organic forms, inorganic forms of phosphate such as sodium triphosphate are readily absorbed and can result in elevated phosphate levels in serum. Salts of polyphosphate anions are moderately irritating to skin and mucous membranes because they are mildly alkaline.
Environmental effects
Because it is very water-soluble, STPP is not significantly removed by waste water treatment. STPP hydrolyses to phosphate, which is assimilated into the natural phosphorus cycle. Detergents containing phosphorus contribute to the eutrophication of many fresh waters.
See also
Sodium trimetaphosphate, a cyclic triphosphate
Acceptable daily intake
References
Food additives
Sodium compounds
Phosphates
E-number additives | Sodium triphosphate | Chemistry | 779 |
24,154,448 | https://en.wikipedia.org/wiki/C17H24N2O2 | The molecular formula C17H24N2O2 (molar mass: 288.39 g/mol) may refer to:
4,5-MDO-DiPT
5,6-MDO-DiPT
Phenglutarimide
Molecular formulas | C17H24N2O2 | Physics,Chemistry | 72 |
55,589,771 | https://en.wikipedia.org/wiki/BD%2B03%202562 | BD+03 2562 is a very-low-metallicity star in the constellation of Virgo. It is located about 8,500 light-years (2,600 parsecs) from Earth.
Planetary system
The star is orbited by a superjovian exoplanet, BD+03 2562 b, which was discovered in 2017 using the radial velocity method.
References
K-type giants
Virgo (constellation)
BD+03 2562
J11501555+0245365
Planetary systems with one confirmed planet | BD+03 2562 | Astronomy | 113 |
3,161,666 | https://en.wikipedia.org/wiki/TW%20Hydrae | TW Hydrae is a T Tauri star approximately 196 light-years away in the constellation of Hydra (the Sea Serpent). TW Hydrae is about 80% of the mass of the Sun, but is only about 5-10 million years old. The star appears to be accreting from a protoplanetary disk of dust and gas, oriented face-on to Earth, which has been resolved in images from the ALMA observatory. TW Hydrae is accompanied by about twenty other low-mass stars with similar ages and spatial motions, comprising the "TW Hydrae association" or TWA, one of the closest regions of recent "fossil" star-formation to the Sun.
Stellar characteristics
TW Hydrae is a pre-main-sequence star with approximately 80% of the mass and 111% of the radius of the Sun. It has a temperature of 4000 K and is about 8 million years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star's luminosity is 28% (0.28×) that of the Sun, equivalent to that of a main-sequence star of spectral type ~K2; its actual spectral class, however, is K6.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 11.27. It is too dim to be seen with the naked eye.
Planetary system
The star is known to host one likely exoplanet, TW Hydrae b.
Protoplanetary disk
Previously disproven protoplanet
In December 2007, a team led by Johny Setiawan of the Max Planck Institute for Astronomy in Heidelberg, Germany announced discovery of a planet orbiting TW Hydrae, dubbed "TW Hydrae b" with a minimum mass around 1.2 Jupiter masses, a period of 3.56 days, and an orbital radius of 0.04 astronomical units (inside the inner rim of the protoplanetary disk). Assuming it orbits in the same plane as the outer part of the dust disk (inclination 7±1°), it has a true mass of 9.8±3.3 Jupiter masses. However, if the inclination is similar to the inner part of the dust disk (4.3±1.0°), the mass would be 16 Jupiter masses, making it a brown dwarf. Since the star itself is so young, it was presumed this is the youngest extrasolar planet yet discovered, and essentially still in formation.
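The true-mass figures quoted above follow from radial-velocity geometry: the method measures only the minimum mass m sin i, so dividing by the sine of the assumed orbital inclination recovers the true mass. A quick check using the values from this paragraph:

```python
import math

m_min = 1.2  # minimum mass (m sin i) in Jupiter masses, from the RV fit

# Inclinations quoted above: outer disk (7 deg) and inner disk (4.3 deg).
for incl_deg in (7.0, 4.3):
    true_mass = m_min / math.sin(math.radians(incl_deg))
    print(f"i = {incl_deg:4.1f} deg -> true mass ~ {true_mass:.1f} M_Jup")
# 7.0 deg gives ~9.8 M_Jup; 4.3 deg gives ~16 M_Jup, matching the text.
```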
In 2008 a team of Spanish researchers concluded that the planet does not exist: the radial velocity variations were not consistent when observed at different wavelengths, which would not occur if the origin of the radial velocity variations was caused by an orbiting planet. Instead, the data was better modelled by starspots on TW Hydrae's surface passing in and out of view as the star rotates. "Results support the spot scenario rather than the presence of a hot Jupiter around TW Hya". Similar wavelength-dependent radial velocity variations, also caused by starspots, have been detected on other T Tauri stars.
New study of more distant planet
In 2016, ALMA found evidence that a possible Neptune-like planet was forming in its disk, at a distance of around 22 AU.
Outflow of an embedded protoplanet
In 2024, observations with ALMA revealed sulfur monoxide emission representing an outflow from an embedded protoplanet. The position of the brightest emission coincides with a planet-carved dust gap at 42 au. This gap had previously been associated with the formation of a super-Earth; from modelling of the outflow velocity, the researchers estimate a mass of about 4 Earth masses. The mass accretion rate of this embedded protoplanet is constrained to between 3×10−7 and 10−5 per year.
Detection of methanol
In 2016, methanol, one of the building blocks for life, was detected in the star's protoplanetary disk.
Gallery
Notes
References
External links
Hydra (constellation)
K-type main-sequence stars
T Tauri stars
Circumstellar disks
TW Hydrae association
Hydrae, TW
Durchmusterung objects
053911
Hypothetical planetary systems | TW Hydrae | Astronomy | 862 |
42,565,977 | https://en.wikipedia.org/wiki/Pharmaco-electroencephalography | Electroencephalography (EEG) is the science of recording the spontaneous rhythmic electrical activity of a living brain through electrodes on the scalp. Brain rhythms have origins similar to the electrical activity of the heart. The rhythmic activity varies in frequency and amplitude with age, attention, sleep, and chemical concentrations of oxygen, carbon dioxide, glucose, ammonia, and hormones. Chemicals that affect brain functions change brain rhythms in systematic and identifiable ways. As new psychoactive drugs were discovered that changed behavior, the basis for the science of psychopharmacology, the accompanying changes in the rhythms were found to be drug class specific. The measurement of the changes in rhythms became the basis for the science of pharmaco-EEG.
Definitions of the changes in EEG rhythms were developed that identified and classified psychoactive drugs, monitored the depth of anesthesia, and evaluated the efficacy of the seizures induced in convulsive therapy (electroshock).
History
The first recordings of electrical activity from the brain were reported from exposed animal brain tissues in the 1870s. In 1929 Hans Berger, a German psychiatrist, reported continuous electrical rhythms from the intact human head using electrodes on the scalp. The continuous electrical activity varied in frequencies and amplitude with drowsiness and sleep, and with mental problem solving. Episodic runs and bursts of high voltage slow frequencies were recorded in patients with epilepsy.
In his third report in 1931 Berger recorded changes in the rhythms with cocaine, morphine, scopolamine, and chloroform. Each chemical elicited different frequency and amplitude patterns and different behaviors.
The first clinical applications were in identifying the sudden bursts of high voltage slow frequencies during seizures, both spontaneous and induced by the chemical pentylenetetrazol (Metrazol), by electricity in electroshock, and in the coma induced by insulin. When reserpine was studied in 1953, chlorpromazine in 1954, and imipramine in 1957, individual rhythmic patterns were described.
The EEG patterns of new psychoactive drugs predicted their clinical activity. By the 1960s, EEG analysis of psychoactive drugs was a feature of the NIMH Early Clinical Drug Evaluation (ECDEU) program that evaluated and identified new psychiatric treatments. Proposed psychoactive drugs developed in chemical laboratories were first tested in animals and then tested in man. The changes in the EEG became the basis for a classification of new drugs.
Assessment methods in human volunteers were developed that recorded the changes in the resting subject at different dosages, both on acute single administrations and repeated daily dosing. The observed changes were compared to those for known drugs and predicted their behavioral effects. When no systematic changes were recorded, the agents were considered not to have a clinical use.
Dosing schedules were optimized. In patients who failed to respond to prescribed treatments, those who were considered "pharmacotherapy resistant," EEG studies showed that the chemicals did not elicit identifiable brain changes.
In pre-clinical animal trials, the changes recorded in the EEG bore little relation to vigilance and motor measures, leading to the conclusion that the EEG patterns were "dissociated" from behavior. In human trials, however, where the EEG measures could be related to vigilance, mood, memory, and psychological tests, a theory of "association of EEG and behavior" developed and sustained pharmaco-EEG studies of new drugs.
The technology was applied in anesthesia, identifying the efficacy of individual seizures in convulsive therapy, in studies of sleep patterns, and the relation of evoked potentials to speech and psychological tests.
Social changes in attitudes toward the ethics of testing drugs and treatments in patients, prisoners, children, and volunteers inhibited the continued development of the science and led to its abandonment.
Methodology
Polypharmacy and the widespread use of active psychiatric drugs made the study of individual compounds in psychiatric patients difficult. The science then successfully focused on alert male volunteers (since the EEG varied with menstrual cycles in women).
Vigilance. The scalp-recorded EEG is sensitive to changes in vigilance. Different methods were developed to sustain a monitored level of alertness, using hand-held buzzers that sounded when the subject relaxed and dozed.
Volunteer Baseline and Placebo training. As the EEG is sensitive to anxiety, an initial training session became standard procedure. The baseline recording identified subjects whose records were unique.
EEG recording. Different electrode placements were tested. Commonly the recordings were made using the frontal-occipital or the bifrontal leads. Standard EEG amplifiers were used.
Quantification and analyses. In the beginning the EEG recordings were made on paper and changes measured visually, scored by ruler and calipers. By the 1960s, electronic analyzers of 10 second epochs measured changes in "power." Digital computer methods using period analysis, power spectral density, and amplitude analyses followed.
The quantitative changes in mean frequency, mean amplitude, percent time in delta (1–3 Hz), theta (3.5–7.5 Hz), alpha (8–12.5 Hz), beta1 (13–21 Hz), and beta2 (>21 Hz) activity, and the presence of bursts in 10-second epochs, were commonly used to identify patterns.
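These epoch measures map directly onto a modern power-spectral-density computation. The sketch below is illustrative: the signal is synthetic, and the sampling rate and the 40 Hz upper edge used for beta2 are assumptions, while the other band edges follow the paragraph above.

```python
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # one 10-second epoch
# Synthetic "EEG": a dominant 10 Hz alpha rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # power spectral density

bands = {"delta": (1, 3), "theta": (3.5, 7.5), "alpha": (8, 12.5),
         "beta1": (13, 21), "beta2": (21, 40)}
total = psd[(freqs >= 1) & (freqs < 40)].sum()
for name, (lo, hi) in bands.items():
    band = psd[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name:>6}: {100 * band / total:5.1f} % of 1-40 Hz power")
```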
Predictive patterns. The measures related the EEG changes to the common classes of psychoactive drugs—antidepressant, anxiolytic, antipsychotic, hallucinogen, deliriant, euphoriant, and mood stabilizer being the most frequent. For a time, the pharmaco-EEG profiles of different classes of drugs were actively used to identify active psychotropic agents.
Applications
Psychopharmacology. Pharmaco-EEG studies were economically useful in clinically classifying new agents, dosage ranges and durations of effects, and separating active from inactive substances. The list of successful applications is extensive; some specific examples are the identification of mianserin (GB-94) and doxepin as antidepressants of the imipramine class; of the inactivity of flutroline as a proposed antipsychotic in man despite activity in dogs; and of equivalent EEG activity of the laevo and dextro enantiomers of 6-azamianserin (mirtazapine) despite differences in preclinical trials.
Studies of different cannabis formulations (hashish, marijuana, and tetrahydrocannabinol-Δ-9 extract) showed the same patterns in EEG, cardiovascular, and clinical profiles. Tolerance development was marked on acute administration of cannabis in chronic hashish users.
In testing narcotic antagonists (naloxone, cyclazocine) and opioid substitutes (methadone, levomethadyl) in the treatment of opioid dependence, the quantitative EEG experiments showed the efficiency of antagonistic and replacement activity of different dosing schedules. Dose finding trials of naloxone showed no specific CNS effect when administered alone but very effective antagonistic action in opioid dose and overdose.
Convulsive therapy. The grandmal seizure is the central event in electroshock (electroconvulsive therapy, ECT) and insulin coma. It was introduced in 1934 and by the 1940s EEG recordings during the treatment showed the classic sequence of epileptic seizure events recognized as the "grand mal seizure." In the 1950s, recordings of interseizure records, on days after an induced seizure, showed progressive slowing of mean frequencies and increased amplitudes during the treatment course. These changes were necessary accompaniments of effective courses of treatment—patients without progressive slowing failed to recover.
In the early 1980s, commercial ECT devices were equipped with a 2-channel EEG recorder that measured the EEG characteristics and duration. The quality of the EEG record became the standard for an "effective" treatment. The same quantitative measures used in psychopharmacology were established in clinical ECT.
Anesthesia. Specialized equipment to monitor ongoing identification of anesthesia stages are common in modern surgical units.
References
History.
Fink M. Pharmacoelectroencephalography: A note on its history. Neuropsychobiology 1984; 12:173-178
Fink M. A clinician-researcher and ECDEU: 1959-1980. In: T. Ban, D. Healy, E. Shorter (Eds.): The Triumph of Psychopharmacology and the Story of the CINP. Budapest, Animula, 82-96, 2000.
Fink M. Pharmaco-Electroencephalography: A Selective History of the Study of Brain Responses to Psychoactive Drugs. In: T. Ban, E. Shorter, D. Healy (Eds.): History of CINP, IV: 661-672, 2004.
Galderisi S, Sannita WG. Pharmaco-EEG: A history of progress and missed opportunity. Clinical EEG and Neuroscience 37:61-65, 2006;
Fink M. Remembering the lost science of pharmaco-EEG. Acta psychiatr Scand., 121:161-173. 2010
Methodology
Brazier MAB (Ed): Computer Techniques in EEG Analysis. EEG Journal, Supplement 20, 1-98, 1961.
Stille G, Herrmann W, Bente D, Fink M, Itil T, Koella WP, Kubicki S, Künkel H, Kugler J, Matejcek M, Petsche H. Guidelines for pharmaco-EEG studies in man. Pharmacopsychiatry 15:107-108;1982
Herrmann WM, Abt K, Coppola R, et al. International Pharmaco-EEG Group (IPEG). Recommendations for EEG and evoked potential mapping. Neuropsychobiology 22:170-176. 1989
Association or dissociation?
Wikler A. Pharmacologic dissociation of behavior and EEG 'sleep patterns' in dogs: Morphine, n-allynormorphine and atropine. Proc Soc exp Biol 79:261-264, 1952;
Wikler A. Clinical and electroencephalographic studies on the effect of mescaline, n-allyInormorphine and morphine in man. J nerv ment Dis 120:157-175, 1954.
Fink M. EEG classification of psychoactive compounds in man: Review and theory of behavioral association. In: Efron D, Cole JO, Levine J, Wittenborn JR. (Eds): Psychopharmacology: A Review of Progress 1957-1967 U.S.G.P.O., Washington, D.C., 497-507, 1968;
Fink M., Itil T. Neurophysiology of the phantastica: EEG and behavioral relations in man. In: Efron D, Cole JO, Levine J, Wittenborn JR. (Eds): Psychopharmacology: A Review of Progress 1957-1967. U.S.G.P.O., Washington, D.C., 1231-1239, 1968;
Fink M. Itil T. EEG and human psychopharmacology: IV: Clinical antidepressants. In: Efron D, Cole JO, Levine J, Wittenborn JR. (Eds): Psychopharmacology: A Review of Progress 1957-1967. U.S.G.P.O., Washington, D.C., 671-682, 1968;
Fink M. EEG and human psychopharmacology. Annu Rev Pharmacol 9:241-258, 1969;
Bradley P and Fink M. (Eds): Anticholinergic Drugs and Brain Functions in Animals and Man. Prog Brain Res 28, 184 pp., 1968.
Fink M. EEG and behavior: Association or dissociation in man? Integrative Psychiatry 9:108-123, 1993.
Examples of human studies.
Fink M, Kahn RL. Relation of EEG delta activity to behavioral response in electroshock: Quantitative serial studies. Arch Neurol & Psychiatry 78:516-525,1957
Fink M. Electroencephalographic and behavioral effects of Tofranil. Canad Psychiat Assoc J, 4 (suppl) 166-71, 1959.
Itil TM, Polvan N, Hsu W. Clinical and EEG effects of GB-94, a tetracyclic antidepressant: EEG model in the discovery of a new psychotropic drug. Curr Ther Res 14:395-413, 1972.
Volavka J, Levine R, Feldstein S, Fink M. Short-term effects of heroin in man. Arch Gen Psychiatry 30:677-684,1974.
Itil TM, Cora R, Akpinar S, Herrmann WH, Patterson CJ. "Psychotropic" action of sex hormones: Computerized EEG in establishing the immediate CNS effects of steroid hormones. Curr Therapeutic Res 16:1147-1170, 1974.
Fink M, Kety S, McGaugh J (Eds.): Psychobiology of Convulsive Therapy. Washington DC: VH Winston & Sons, 1974.
Volavka J, Fink M, Panayiotopoulos CP. Acute EEG effects of cannabis preparations in long-term users. In: C. Stefanis, R. Dornbush, M Fink (Eds): Hashish- A Study of Long-Term Use. NY: Raven Press, 1977.
American Psychiatric Association Electroconvulsive therapy. Task Force Report #14. Washington, DC: American Psychiatric Association. (1978). (200 pp.).
Fink M, Irwin P, Sannita W, Papakostas Y, Green MA. Phenytoin: EEG effects and plasma levels in volunteers. Therap Drug Monitoring 1: 93-104, 1979.
Fink M and Irwin P. EEG and behavioral profile of flutroline (CP-36,584), a novel antipsychotic drug. Psychopharmacology 72: 67-71, 1981.
Fink M and Irwin P. Pharmaco-EEG study of 6-azamianserin (ORG-3770): Dissociation of EEG and pharmacologic predictors of antidepressant activity. Psychopharmacology 78: 44-48, 1982.
Fink M, Irwin P. CNS effects of acetylsalicylic acid (Aspirin). Clin Pharm Therap 32:362-365, 1982.
Additional notes.
Meetings of interested scientists began at the World Congress of Psychiatry meeting in Montreal in 1961 and the CINP meeting in Washington D.C. in 1966. Thereafter, biennial meetings of the IPEG (International Pharmaco-EEG Group) were scheduled at different cities, mainly in Europe, <http://www.ipeg-society.org/>. The proceedings were published in volumes with different editors cited in Fink (1984).
The ACNP recorded interviews with leading scientists. Transcripts are published in Ban T, Fink M. (Eds.): Oral History of Neuropsychopharmacology: The First Fifty Years:Neurophysiology. Brentwood TN: ACNP. Volume 2: 319 pp.
Videotaped interviews with Enoch Callaway, Max Fink, Turan M. Itil, and A. Arthur Sugarman are on-line at <https://web.archive.org/web/20140507040813/http://www.acnp.org/programs/history.aspx>.
Fink M interviewed by Cole JO, in An Oral History of Neuropsychopharmacology - The First Fifty Years: Peer Interviews (Thomas A. Ban, editor), Volume 2- "Neurophysiology" (Max Fink, volume editor). Brentwood: American College of Neuropsychopharmacology; 2011. p. 7 - 20.
Fink M interviewed by Healy D, in An Oral History of Neuropsychopharmacology - The First Fifty Years: Peer Interviews (Thomas A. Ban, editor), Volume 9- "Update" (Barry Blackwell, volume editor). Brentwood: American College of Neuropsychopharmacology; 2011. p. 73 - 104.
SBU Library. Max Fink's archives from the 1950s to the present are at the Special Collections of the Frank Melville Memorial Library of Stony Brook University, Stony Brook, New York.
Psychopharmacology
Electroencephalography
Electroconvulsive therapy | Pharmaco-electroencephalography | Chemistry | 3,522 |
14,817,979 | https://en.wikipedia.org/wiki/KCNK1 | Potassium channel subfamily K member 1 is a protein that in humans is encoded by the KCNK1 gene.
This gene encodes K2P1.1, a member of the superfamily of potassium channel proteins containing two pore-forming P domains. The product of this gene has not been shown to be a functional channel, however, and it may require other non-pore-forming proteins for activity.
See also
Tandem pore domain potassium channel
References
Further reading
External links
Ion channels | KCNK1 | Chemistry | 99 |
11,322,119 | https://en.wikipedia.org/wiki/Ceratobasidium%20cornigerum | Ceratobasidium cornigerum is a species of fungus in the order Cantharellales. Basidiocarps (fruit bodies) are thin, spread out on the substrate like a film (effused), and web-like. An anamorphic state is frequently obtained when isolates are cultured. Ceratobasidium cornigerum is saprotrophic, but is also a facultative plant pathogen, causing a number of economically important crop diseases, and an orchid endomycorrhizal associate. The species is genetically diverse and is sometimes treated as a complex of closely related taxa. DNA research shows the species (or species complex) actually belongs within the genus Rhizoctonia.
Taxonomy
Corticium cornigerum was first described in 1922 by mycologist Hubert Bourdot, who found it growing in France on dead stems of Jerusalem artichoke. It was subsequently transferred to the genus Ceratobasidium by American mycologist Donald P. Rogers in 1935. Molecular research, based on cladistic analysis of DNA sequences, places Ceratobasidium cornigerum within the genus Rhizoctonia, but this taxonomic problem has yet to be resolved.
Anastomosis groups (AGs)
Ceratobasidium cornigerum is one of several species whose anamorphic states are sometimes referred to as "binucleate rhizoctonias". These binucleate rhizoctonias have been divided into genetically distinct "anastomosis groups" (AGs) based initially on hyphal anastomosis tests, subsequently supported by analyses of DNA sequences. At least six of these AGs (AG-A, AG-B(o), AG-C, AG-D, AG-P, and AG-Q) have been linked to Ceratobasidium cornigerum, which may therefore be considered as a variable species (comprising at least six genetically distinct populations) or as a complex of morphologically similar species. In the latter case, it is not clear which of these AGs (if any) should take the original name C. cornigerum.
Synonyms or associated species
The following taxa belong in the Ceratobasidium cornigerum complex and have been treated as synonyms or as closely related but independent species:
Ceratobasidium ramicola = AG-A (also includes several invalidly published names including Rhizoctonia candida, R. endophytica, and R. fragariae). This group contains a range of crop pathogens and orchid associates.
Ceratobasidium cereale = AG-D (also includes the dubious name Ceratobasidium gramineum). This group contains cereal and grass pathogens.
Ceratobasidium ochroleucum (= Corticium stevensii), Ceratobasidium lantanae-camarae, Corticium pervagum, Corticium invisum, and AG-P are all tropical or subtropical, web-blight pathogens.
Description
The basidiocarps (fruit bodies) are effused, thin, and whitish. Microscopically they have colourless hyphae, 3 to 9 μm wide, without clamp connections. The basidia are ellipsoid to broadly club-shaped, 9 to 14 by 8 to 12 μm, bearing four sterigmata. The basidiospores are ellipsoid and broadly fusiform (spindle-shaped), measuring 6 to 11 by 4 to 6 μm. Pale brown sclerotia are sometimes produced, measuring 0.5 to 3 mm across.
Habitat and distribution
If treated as a single species, Ceratobasidium cornigerum is cosmopolitan and has been reported from Asia, Australia, Europe, North & South America. It occurs as a soil saprotroph, producing basidiocarps on dead stems and fallen litter, but is also a facultative plant pathogen causing disease of crops and turf grass. It can also grow as a "web blight" pathogen on living leaves of trees and shrubs, particularly in the tropics and subtropics. It is one of the commonest endomycorrhizal associates of terrestrial orchids.
Hosts (specifically strawberries) and symptoms
Symptoms are most visible in the first fruiting year and are most apparent during the last couple of weeks before harvest. Early symptoms include reduced vigor and a decreased ability to survive high-water conditions. Plants may experience lodging when water demand is high. Infected plants may continue to grow but will show aboveground symptoms including stunting, decreased fruit size, and numerous dead older leaves. Belowground symptoms include the deterioration of roots. Infected plants may have feeder and main roots that are smaller and covered in black lesions, and feeder roots will appear water-soaked. In the early stages of infection, the core of the root appears white while the exterior begins to show black lesions; in severely affected roots, both the core and the outer tissue of the root are black. Stained feeder roots may reveal masses of moniliform cells of R. fragariae. Characteristics of R. fragariae include its hyphal branching pattern, dolipore septa, and moniliform resting cells. The binucleate hyphae directly penetrate the root.
Environment
Black root rot is commonly found in fields with a long history of strawberry production. Disease is more likely in the presence of stress factors such as herbicide injury, winter or cold injury, excessive soil moisture, soil compaction, or repeated freezing of roots. Black root rot is not usually introduced into a new planting through nursery stock or contaminated equipment; instead, one or more of the disease-causing fungi are usually already present in the soil. Black root rot is a disease complex on strawberry, meaning that more than one organism can infect the host. For strawberries, the common fungi are Pythium spp., Fusarium spp., and Rhizoctonia spp., along with several species of nematodes that function together to cause disease. Strawberries have been shown to have greater levels of rot when simultaneously exposed to both R. fragariae and P. penetrans (a nematode).
Importance
Black root rot is a common disease in North Carolina, a top strawberry-producing region, and in much of the southeastern United States, where it has been shown to reduce yields by 20 to 40%. This is the main reason growers in this region fumigate their fields. Pre-planting fumigation may suppress the disease during the year of planting, but it typically offers no lasting control, and cultivars resistant to black root rot are not currently available. Black root rot has challenged strawberry growers for at least a century, and probably longer: it was recorded as prevalent in Massachusetts, Michigan, and New York in 1902 and 1908. In 1920, a Rhizoctonia species was first identified as the causal pathogen responsible for "dying out" of strawberry beds in western Washington. By 1988, R. fragariae had been isolated from more than 70% of plants from commercial strawberry fields in Connecticut that had been in cultivation for more than one year.
Economic importance
Under various names, fungi in the Ceratobasidium cornigerum complex are known to cause a range of diseases in commercial crops.
The AG-A group (Ceratobasidium ramicola) causes various diseases, including "strawberry black root rot", diseases of soya bean, pea, and pak choy, and "silky threadblight" of Pittosporum and other shrubs.
The AG-D group (Ceratobasidium cereale) causes "sharp eyespot" of cereals and "yellow patch" in turf grass.
Corticium invisum was described as the causal agent of "black rot" of tea in Sri Lanka, whilst Corticium pervagum causes a leaf and stem blight of cocoa. Ceratobasidium ochroleucum (Corticium stevensii) was described causing a blight of apple and quince trees in Brazil, but the name is of uncertain application because of confusion with Rhizoctonia noxia.
Ceratobasidium lantanae-camarae was described from Brazil as the causal agent of a web blight of the invasive shrub Lantana camara, suggesting it has potential as a biocontrol agent.
References
Fungal plant pathogens and diseases
Fungal strawberry diseases
Cantharellales
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungi of South America
Fungi described in 1922
Fungus species | Ceratobasidium cornigerum | Biology | 1,803 |
1,529,966 | https://en.wikipedia.org/wiki/Zone%20Routing%20Protocol | Zone Routing Protocol, or ZRP is a hybrid wireless networking routing protocol that uses both proactive and reactive routing protocols when sending information over the network. ZRP was designed to speed up delivery and reduce processing overhead by selecting the most efficient type of protocol to use throughout the route.
How ZRP works
If a packet's destination is in the same zone as the origin, the proactive protocol using an already stored routing table is used to deliver the packet immediately.
If the route extends outside the packet's originating zone, a reactive protocol takes over to check each successive zone in the route to see whether the destination is inside that zone. This reduces the processing overhead for those routes. Once a zone is confirmed as containing the destination node, the proactive protocol, or stored route-listing table, is used to deliver the packet.
In this way packets with destinations within the same zone as the originating zone are delivered immediately using a stored routing table. Packets delivered to nodes outside the sending zone avoid the overhead of checking routing tables along the way by using the reactive protocol to check whether each zone encountered contains the destination node.
Thus ZRP reduces the control overhead for longer routes that would be necessary if using proactive routing protocols throughout the entire route, while eliminating the delays for routing within a zone that would be caused by the route-discovery processes of reactive routing protocols.
Details
The Intra-zone Routing Protocol (IARP) is the proactive routing protocol used inside routing zones, while the Inter-zone Routing Protocol (IERP) is the reactive routing protocol used between routing zones. IARP maintains a routing table in advance; because this table is already stored, it is considered proactive. IERP, by contrast, discovers routes on demand.
Any route to a destination that is within the same local zone is quickly established from the source's proactively cached routing table by IARP. Therefore, if the source and destination of a packet are in the same zone, the packet can be delivered immediately.
Most existing proactive routing algorithms can be used as the IARP for ZRP.
In ZRP a zone is defined around each node, called the node's k-neighborhood, which consists of all nodes within k hops of the node. Border nodes are nodes which are exactly k hops away from a source node.
For routes beyond the local zone, route discovery happens reactively. The source node sends a route request to the border nodes of its zone, containing its own address, the destination address and a unique sequence number. Each border node checks its local zone for the destination. If the destination is not a member of this local zone, the border node adds its own address to the route request packet and forwards the packet to its own border nodes. If the destination is a member of the local zone, it sends a route reply on the reverse path back to the source. The source node uses the path saved in the route reply packet to send data packets to the destination.
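The zone and border-node machinery above maps naturally onto a graph search: the routing zone is the k-hop breadth-first neighborhood, and bordercast targets are exactly the nodes at distance k. A minimal sketch, with the topology and function names invented for illustration:

```python
from collections import deque

def zone(adj, source, k):
    """Hop counts to every node within k hops of source (the routing zone)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == k:        # do not expand past the zone radius
            continue
        for neighbor in adj[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def route(adj, source, dest, k):
    """Deliver proactively (IARP) inside the zone; otherwise bordercast (IERP)."""
    d = zone(adj, source, k)
    if dest in d:
        return f"intra-zone: deliver via stored table ({d[dest]} hops)"
    border = sorted(n for n, hops in d.items() if hops == k)
    return f"inter-zone: bordercast route request to border nodes {border}"

# Toy six-node topology (illustrative).
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4, 6], 6: [5]}
print(route(adj, 1, 4, k=2))  # destination inside the zone -> proactive
print(route(adj, 1, 6, k=2))  # destination outside -> bordercast via node 4
```

A real IARP would maintain the zone table incrementally as links change, rather than recomputing it per packet.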
References
Haas, Z. J., 1997 (ps). A new routing protocol for the reconfigurable wireless networks. Retrieved 2011-05-06.
The ZRP internet-draft
The BRP internet-draft
Wireless networking
Ad hoc routing protocols | Zone Routing Protocol | Technology,Engineering | 664 |
43,410 | https://en.wikipedia.org/wiki/VHDL | VHDL (VHSIC Hardware Description Language) is a hardware description language that can model the behavior and structure of digital systems at multiple levels of abstraction, ranging from the system level down to that of logic gates, for design entry, documentation, and verification purposes. The language was developed for the US military VHSIC program in the 1980s, and has been standardized by the Institute of Electrical and Electronics Engineers (IEEE) as IEEE Std 1076; the latest version of which is IEEE Std 1076-2019. To model analog and mixed-signal systems, an IEEE-standardized HDL based on VHDL called VHDL-AMS (officially IEEE 1076.1) has been developed.
History
VHDL was originally developed in 1983 at the behest of the U.S. Department of Defense in order to document the behavior of the ASICs that supplier companies were including in equipment. The standard MIL-STD-454N, in Requirement 64 of section 4.5.1, "ASIC documentation in VHDL", explicitly requires documentation of "Microelectronic Devices" in VHDL.
The idea of being able to simulate the ASICs from the information in this documentation was so obviously attractive that logic simulators were developed that could read the VHDL files. The next step was the development of logic synthesis tools that read the VHDL and output a definition of the physical implementation of the circuit.
Because the Department of Defense required as much of the syntax as possible to be based on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in Ada's development, VHDL borrows heavily from the Ada programming language in both concept and syntax.
The initial version of VHDL, designed to IEEE standard IEEE 1076–1987, included a wide range of data types, including numerical (integer and real), logical (bit and Boolean), character and time, plus arrays of bit called bit_vector and of character called string.
A problem not solved by this edition, however, was "multi-valued logic", where a signal's drive strength (none, weak or strong) and unknown values are also considered. This required IEEE standard 1164, which defined the 9-value logic types: scalar std_logic and its vector version std_logic_vector. Being a resolved subtype of its std_ulogic parent type, std_logic allows multiple drivers on the same signal for modeling bus structures, whereby the connected resolution function handles conflicting assignments adequately.
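As a brief sketch of what resolution makes possible (the signal and condition names here are invented for illustration), two concurrent statements may legally drive the same std_logic signal, with the IEEE 1164 resolution function combining the contributions, e.g. a strong '0' overriding a weak pull-up 'H':

-- two drivers on one resolved signal, inside an architecture body
signal sda : std_logic;
...
sda <= '0' when pull_low = '1' else 'Z';  -- driver 1: pull down or release the line
sda <= 'H';                               -- driver 2: weak pull-up
-- resolved value: '0' while pull_low = '1', otherwise 'H'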
The updated IEEE 1076, in 1993, made the syntax more consistent, allowed more flexibility in naming, extended the character type to allow ISO-8859-1 printable characters, added the xnor operator, etc.
Minor changes in the standard (2000 and 2002) added the idea of protected types (similar to the concept of class in C++) and removed some restrictions from port mapping rules.
In addition to IEEE standard 1164, several child standards were introduced to extend functionality of the language. IEEE standard 1076.2 added better handling of real and complex data types. IEEE standard 1076.3 introduced signed and unsigned types to facilitate arithmetical operations on vectors. IEEE standard 1076.1 (known as VHDL-AMS) provided analog and mixed-signal circuit design extensions.
Some other standards support wider use of VHDL, notably VITAL (VHDL Initiative Towards ASIC Libraries) and microwave circuit design extensions.
In June 2006, the VHDL Technical Committee of Accellera (delegated by IEEE to work on the next update of the standard) approved so-called Draft 3.0 of VHDL-2006. While maintaining full compatibility with older versions, this proposed standard provides numerous extensions that make writing and managing VHDL code easier. Key changes include incorporation of child standards (1164, 1076.2, 1076.3) into the main 1076 standard, an extended set of operators, more flexible syntax of case and generate statements, incorporation of VHPI (VHDL Procedural Interface) (interface to C/C++ languages) and a subset of PSL (Property Specification Language). These changes should improve quality of synthesizable VHDL code, make testbenches more flexible, and allow wider use of VHDL for system-level descriptions.
In February 2008, Accellera approved VHDL 4.0, also informally known as VHDL 2008, which addressed more than 90 issues discovered during the trial period for version 3.0 and includes enhanced generic types. In 2008, Accellera released VHDL 4.0 to the IEEE for balloting for inclusion in IEEE 1076–2008. The VHDL standard IEEE 1076-2008 was published in January 2009.
Standardization
The IEEE Standard 1076 defines the VHSIC Hardware Description Language, or VHDL. It was originally developed under contract F33615-83-C-1003 from the United States Air Force awarded in 1983 to a team of Intermetrics, Inc. as language experts and prime contractor, Texas Instruments as chip design experts and IBM as computer-system design experts. The language has undergone numerous revisions and has a variety of sub-standards associated with it that augment or extend it in important ways.
1076 was and continues to be a milestone in the design of electronic systems.
Revisions
IEEE 1076-1987 First standardized revision of ver 7.2 of the language from the United States Air Force.
IEEE 1076-1993 (also published with ). Significant improvements resulting from several years of feedback. Probably the most widely used version with the greatest vendor tool support.
IEEE 1076–2000. Minor revision. Introduces the use of protected types.
IEEE 1076–2002. Minor revision of 1076–2000. Rules with regard to buffer ports are relaxed.
IEC 61691-1-1:2004. IEC adoption of IEEE 1076–2002.
IEEE 1076c-2007. Introduced VHPI, the VHDL procedural interface, which provides software with the means to access the VHDL model. The VHDL language required minor modifications to accommodate the VHPI.
IEEE 1076-2008 (previously referred to as 1076-200x). Major revision released on 2009-01-26. Among other changes, this standard incorporates a basic subset of PSL, allows for generics on packages and subprograms and introduces the use of external names.
IEC 61691-1-1:2011. IEC adoption of IEEE 1076–2008.
IEEE 1076–2019. Major revision.
Related standards
IEEE 1076.1 VHDL Analog and Mixed-Signal (VHDL-AMS)
IEEE 1076.1.1 VHDL-AMS Standard Packages (stdpkgs)
IEEE 1076.2 VHDL Math Package
IEEE 1076.3 VHDL Synthesis Package (vhdlsynth) (numeric std)
IEEE 1076.3 VHDL Synthesis Package – Floating Point (fphdl)
IEEE 1076.4 Timing (VHDL Initiative Towards ASIC Libraries: vital)
IEEE 1076.6 VHDL Synthesis Interoperability (withdrawn in 2010)
IEEE 1164 VHDL Multivalue Logic (std_logic_1164) Packages
Design
VHDL is generally used to write text models that describe a logic circuit. Such a model is processed by a synthesis program only if it is part of the logic design. A simulation program is used to test the logic design using simulation models to represent the logic circuits that interface to the design. This collection of simulation models is commonly called a testbench.
A VHDL simulator is typically an event-driven simulator. This means that each transaction is added to an event queue for a specific scheduled time. E.g. if a signal assignment should occur after 1 nanosecond, the event is added to the queue for time +1 ns. Zero delay is also allowed, but still needs to be scheduled: for these cases delta delay is used, which represents an infinitesimally small time step. The simulation alternates between two modes: statement execution, where triggered statements are evaluated, and event processing, where events in the queue are processed.
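A small sketch of these scheduling rules (the signal names are invented): even a zero-delay assignment only takes effect in the following delta cycle, so chained assignments settle over successive deltas at the same simulation time:

-- concurrent assignments inside an architecture body
B <= A;             -- updates one delta cycle after A changes; simulation time unchanged
C <= B;             -- updates one further delta cycle later, still at the same time
D <= A after 1 ns;  -- a real delay: the event is queued at current time + 1 ns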
VHDL has constructs to handle the parallelism inherent in hardware designs, but these constructs (processes) differ in syntax from the parallel constructs in Ada (tasks). Like Ada, VHDL is strongly typed and is not case sensitive. In order to directly represent operations which are common in hardware, there are many features of VHDL which are not found in Ada, such as an extended set of Boolean operators including nand and nor.
VHDL has file input and output capabilities, and can be used as a general-purpose language for text processing, but files are more commonly used by a simulation testbench for stimulus or verification data. There are some VHDL compilers which build executable binaries. In this case, it might be possible to use VHDL to write a testbench to verify the functionality of the design using files on the host computer to define stimuli, to interact with the user, and to compare results with those expected. However, most designers leave this job to the simulator.
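A minimal testbench sketch of such file-based stimulus using the standard textio package (the file name, the 8-bit unsigned DATA signal and the CLK signal are assumptions for this example, and IEEE.numeric_std is assumed to be visible):

use std.textio.all;
...
stimulus : process
  file vectors : text open read_mode is "stimulus.txt";  -- assumed stimulus file
  variable l : line;
  variable v : integer;
begin
  while not endfile(vectors) loop
    readline(vectors, l);       -- fetch one line of the file
    read(l, v);                 -- parse an integer from that line
    DATA <= to_unsigned(v, 8);  -- drive it as stimulus
    wait until rising_edge(CLK);
  end loop;
  wait;                         -- suspend forever once the file is exhausted
end process stimulus;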
It is relatively easy for an inexperienced developer to produce code that simulates successfully but that cannot be synthesized into a real device, or is too large to be practical. One particular pitfall is the accidental production of transparent latches rather than D-type flip-flops as storage elements.
One can design hardware in a VHDL IDE (for FPGA implementation such as Xilinx ISE, Altera Quartus, Synopsys Synplify or Mentor Graphics HDL Designer) to produce the RTL schematic of the desired circuit. After that, the generated schematic can be verified using simulation software which shows the waveforms of inputs and outputs of the circuit after generating the appropriate testbench. To generate an appropriate testbench for a particular circuit or VHDL code, the inputs have to be defined correctly. For example, for clock input, a loop process or an iterative statement is required.
A final point is that when a VHDL model is translated into the "gates and wires" that are mapped onto a programmable logic device such as a CPLD or FPGA, then it is the actual hardware being configured, rather than the VHDL code being "executed" as if on some form of a processor chip.
Advantages
The key advantage of VHDL, when used for systems design, is that it allows the behavior of the required system to be described (modeled) and verified (simulated) before synthesis tools translate the design into real hardware (gates and wires).
Another benefit is that VHDL allows the description of a concurrent system. VHDL is a dataflow language in which every statement is considered for execution simultaneously, unlike procedural computing languages such as BASIC, C, and assembly code, where a sequence of statements is run sequentially one instruction at a time.
A VHDL project is multipurpose. Being created once, a calculation block can be used in many other projects. However, many formational and functional block parameters can be tuned (capacity parameters, memory size, element base, block composition and interconnection structure).
A VHDL project is portable. Being created for one element base, a computing device project can be ported on another element base, for example VLSI with various technologies.
A big advantage of VHDL compared to original Verilog is that VHDL has a full type system. Designers can use the type system to write much more structured code (especially by declaring record types).
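For instance (a hypothetical declaration; the type and field names are invented), related signals of a bus interface can be grouped into a single record type and referenced field by field:

-- declarations as they might appear in a package or an architecture declarative part
type cpu_bus_t is record
  addr : std_logic_vector(15 downto 0);
  data : std_logic_vector(31 downto 0);
  we   : std_logic;                      -- write enable
end record cpu_bus_t;

signal cpu_bus : cpu_bus_t;

-- concurrent assignments referencing individual fields
cpu_bus.addr <= x"00FF";
cpu_bus.we   <= '1';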
Design examples
In VHDL, a design consists at a minimum of an entity which describes the interface and an architecture which contains the actual implementation. In addition, most designs import library modules. Some designs also contain multiple architectures and configurations.
A simple AND gate in VHDL would look something like
-- (this is a VHDL comment)
/*
this is a block comment (VHDL-2008)
*/
-- import std_logic from the IEEE library
library IEEE;
use IEEE.std_logic_1164.all;
-- this is the entity
entity ANDGATE is
port (
I1 : in std_logic;
I2 : in std_logic;
O : out std_logic);
end entity ANDGATE;
-- this is the architecture
architecture RTL of ANDGATE is
begin
O <= I1 and I2;
end architecture RTL;
(Notice that RTL stands for Register transfer level design.) While the example above may seem verbose to HDL beginners, many parts are either optional or need to be written only once. Generally simple functions like this are part of a larger behavioral module, instead of having a separate module for something so simple. In addition, use of elements such as the std_logic type might at first seem to be overkill. One could easily use the built-in bit type and avoid the library import in the beginning. However, using a form of many-valued logic, specifically 9-valued logic (U,X,0,1,Z,W,H,L,-), instead of simple bits (0,1) offers a very powerful simulation and debugging tool to the designer which currently does not exist in any other HDL.
In the examples that follow, you will see that VHDL code can be written in a very compact form. However, more experienced designers usually avoid these compact forms and use a more verbose coding style for the sake of readability and maintainability.
Synthesizable constructs and VHDL templates
VHDL is frequently used for two different goals: simulation of electronic designs and synthesis of such designs. Synthesis is a process where VHDL code is compiled and mapped into an implementation technology such as an FPGA or an ASIC.
Not all constructs in VHDL are suitable for synthesis. For example, most constructs that explicitly deal with timing such as wait for 10 ns; are not synthesizable despite being valid for simulation. While different synthesis tools have different capabilities, there exists a common synthesizable subset of VHDL that defines what language constructs and idioms map into common hardware for many synthesis tools. IEEE 1076.6 defines a subset of the language that is considered the official synthesis subset. It is generally considered a "best practice" to write very idiomatic code for synthesis as results can be incorrect or suboptimal for non-standard constructs.
MUX template
The multiplexer, or 'MUX' as it is usually called, is a simple construct very common in hardware design. The example below demonstrates a simple two to one MUX, with inputs A and B, selector S and output X. Note that there are many other ways to express the same MUX in VHDL.
X <= A when S = '1' else B;
A more complex example of a MUX with 4x3 inputs and a 2-bit selector:
library IEEE;
use IEEE.std_logic_1164.all;
entity mux4 is
port(
a1 : in std_logic_vector(2 downto 0);
a2 : in std_logic_vector(2 downto 0);
a3 : in std_logic_vector(2 downto 0);
a4 : in std_logic_vector(2 downto 0);
sel : in std_logic_vector(1 downto 0);
b : out std_logic_vector(2 downto 0)
);
end mux4;
architecture rtl of mux4 is
-- declarative part: empty
begin
p_mux : process(a1,a2,a3,a4,sel)
begin
case sel is
when "00" => b <= a1 ;
when "01" => b <= a2 ;
when "10" => b <= a3 ;
when others => b <= a4 ;
end case;
end process p_mux;
end rtl;
Latch template
A transparent latch is basically one bit of memory which is updated when an enable signal is raised. Again, there are many other ways this can be expressed in VHDL.
-- latch template 1:
Q <= D when Enable = '1' else Q;
-- latch template 2:
process(all)
begin
Q <= D when(Enable);
end process;
D-type flip-flops
The D-type flip-flop samples an incoming signal at the rising (or falling) edge of a clock. This example has an asynchronous, active-high reset, and samples at the rising clock edge.
DFF : process(all) is
begin
if RST then
Q <= '0';
elsif rising_edge(CLK) then
Q <= D;
end if;
end process DFF;
Another common way to write edge-triggered behavior in VHDL is with the 'event' signal attribute. A single apostrophe has to be written between the signal name and the name of the attribute.
DFF : process(RST, CLK) is
begin
if RST then
Q <= '0';
elsif CLK'event and CLK = '1' then
Q <= D;
end if;
end process DFF;
VHDL also lends itself to "one-liners" such as
DFF : Q <= '0' when RST = '1' else D when rising_edge(clk);
or
DFF : process(all) is
begin
if rising_edge(CLK) then
Q <= D;
end if;
if RST then
Q <= '0';
end if;
end process DFF;
or:
Library IEEE;
USE IEEE.Std_logic_1164.all;
entity RisingEdge_DFlipFlop_SyncReset is
port(
Q : out std_logic;
Clk : in std_logic;
sync_reset : in std_logic;
D : in std_logic
);
end RisingEdge_DFlipFlop_SyncReset;
architecture Behavioral of RisingEdge_DFlipFlop_SyncReset is
begin
process(Clk)
begin
if (rising_edge(Clk)) then
if (sync_reset='1') then
Q <= '0';
else
Q <= D;
end if;
end if;
end process;
end Behavioral;
This arrangement can be useful if not all signals (registers) driven by this process should be reset.
Example: a counter
The following example is an up-counter with asynchronous reset, parallel load and configurable width. It demonstrates the use of the 'unsigned' type, type conversions between 'unsigned' and 'std_logic_vector' and VHDL generics. The generics are very close to arguments or templates in other traditional programming languages like C++. The example is in VHDL 2008 language.
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all; -- for the unsigned type
entity COUNTER is
generic (
WIDTH : in natural := 32);
port (
RST : in std_logic;
CLK : in std_logic;
LOAD : in std_logic;
DATA : in std_logic_vector(WIDTH-1 downto 0);
Q : out std_logic_vector(WIDTH-1 downto 0));
end entity COUNTER;
architecture RTL of COUNTER is
begin
process(all) is
begin
if RST then
Q <= (others => '0');
elsif rising_edge(CLK) then
if LOAD='1' then
Q <= DATA;
else
Q <= std_logic_vector(unsigned(Q) + 1);
end if;
end if;
end process;
end architecture RTL;
More complex counters may add if/then/else statements within the rising_edge(CLK) elsif to add other functions, such as count enables, stopping or rolling over at some count value, generating output signals like terminal count signals, etc. Care must be taken with the ordering and nesting of such controls if used together, in order to produce the desired priorities and minimize the number of logic levels needed.
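A hedged sketch of such an extension (EN, TC and the natural constant MAX_COUNT are invented names, and the surrounding entity and architecture are omitted): a count enable and a roll-over with terminal-count flag could be nested inside the clocked branch, with LOAD keeping the highest priority:

process(all) is
begin
  if RST then
    Q <= (others => '0');
  elsif rising_edge(CLK) then
    if LOAD = '1' then                          -- highest priority: parallel load
      Q <= DATA;
    elsif EN = '1' then                         -- count only while enabled
      if unsigned(Q) = MAX_COUNT then           -- roll over at the terminal value
        Q <= (others => '0');
      else
        Q <= std_logic_vector(unsigned(Q) + 1);
      end if;
    end if;
  end if;
end process;
TC <= '1' when unsigned(Q) = MAX_COUNT else '0';  -- combinational terminal-count flag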
Simulation-only constructs
A large subset of VHDL cannot be translated into hardware. This subset is known as the non-synthesizable or the simulation-only subset of VHDL and can only be used for prototyping, simulation and debugging. For example, the following code will generate a clock with a frequency of 50 MHz. It can, for example, be used to drive a clock input in a design during simulation. It is, however, a simulation-only construct and cannot be implemented in hardware. In actual hardware, the clock is generated externally; it can be scaled down internally by user logic or dedicated hardware.
process
begin
CLK <= '1'; wait for 10 NS;
CLK <= '0'; wait for 10 NS;
end process;
The simulation-only constructs can be used to build complex waveforms in a very short time. Such waveforms can be used, for example, as test vectors for a complex design or as a prototype of some synthesizer logic that will be implemented in the future.
process
begin
wait until START = '1'; -- wait until START is high
for i in 1 to 10 loop -- then wait for a few clock periods...
wait until rising_edge(CLK);
end loop;
for i in 1 to 10 loop -- write numbers 1 to 10 to DATA, 1 every cycle
DATA <= to_unsigned(i, 8);
wait until rising_edge(CLK);
end loop;
-- wait until the output changes
wait on RESULT;
-- now raise ACK for clock period
ACK <= '1';
wait until rising_edge(CLK);
ACK <= '0';
-- and so on...
end process;
VHDL-2008 Features
Hierarchical Aliases
library ieee;
use ieee.std_logic_1164.all;
entity bfm is end entity;
architecture beh of bfm is
signal en :std_logic;
begin
-- insert implementation here
end architecture;
--------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
entity test1 is end entity;
architecture beh of test1 is
begin
ibfm: entity work.bfm;
-- The testbench process
process
alias probe_en is <<signal .test1.ibfm.en :std_logic>>;
begin
probe_en <= '1';
wait for 100 ns;
probe_en <= '0';
wait for 100 ns;
probe_en <= '1';
wait for 100 ns;
std.env.stop(0);
end process;
end architecture;
Standard libraries
Also referred to as standard packages.
IEEE Standard Package
The IEEE Standard Package includes the following (a short usage sketch follows the list):
numeric_std
std_logic_1164
std_logic_arith
std_logic_unsigned
std_logic_signed
std_logic_misc
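A minimal sketch of how these packages are typically made visible and used (the entity and signal names are invented; numeric_std is generally preferred over the non-standard std_logic_arith family for new code):

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;

entity adder8 is
  port (
    a, b : in  std_logic_vector(7 downto 0);
    s    : out std_logic_vector(7 downto 0));
end entity adder8;

architecture rtl of adder8 is
begin
  -- convert to unsigned for arithmetic, then back for the port type
  s <= std_logic_vector(unsigned(a) + unsigned(b));
end architecture rtl;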
VHDL simulators
Commercial:
Aldec Active-HDL
Cadence Incisive
Mentor Graphics ModelSim
Mentor Graphics Questa Advanced Simulator
Synopsys VCS-MX
Xilinx Vivado Design Suite (features the Vivado Simulator)
Other:
EDA Playground - Free web browser-based VHDL IDE (uses Synopsys VCS, Cadence Incisive, Aldec Riviera-PRO and GHDL for VHDL simulation)
GHDL is an open source VHDL compiler that can execute VHDL programs.
boot by freerangefactory.org is a VHDL compiler and simulator based on GHDL and GTKWave
VHDL Simili by Symphony EDA is a free commercial VHDL simulator.
nvc by Nick Gasson is an open source VHDL compiler and simulator
freehdl by Edwin Naroska was an open source VHDL simulator, abandoned since 2001.
See also
References
Notes
Further reading
Peter J. Ashenden, "The Designer's Guide to VHDL, Third Edition (Systems on Silicon)", 2008, . (The VHDL reference book written by one of the lead developers of the language)
Bryan Mealy, Fabrizio Tappero (February 2012). Free Range VHDL: The no-frills guide to writing powerful VHDL code for your digital implementations. Archived from the original on 2015-02-13.
— Sandstrom presents a table relating VHDL constructs to Verilog constructs.
Janick Bergeron, "Writing Testbenches: Functional Verification of HDL Models", 2000, . (The HDL Testbench Bible)
External links
VHDL Analysis and Standardization Group (VASG)
Hardware description languages
IEEE standards
IEC standards
Ada programming language family
Domain-specific programming languages
Programming languages created in 1983 | VHDL | Technology,Engineering | 5,324 |
64,733,514 | https://en.wikipedia.org/wiki/Michel%20Bercovier | Michel Bercovier (Hebrew: מישל ברקוביאר; born: 10 September 1941) is a French-Israeli Professor (Emeritus) of Scientific Computing and Computer Aided Design (CAD) in The Rachel and Selim Benin School of Computer Science and Engineering at the Hebrew University of Jerusalem. Bercovier is also the head of the School of Computer Science at the Hadassah Academic College, Jerusalem.
Early life and education
Michel Bercovier was born in Lyon, France. He received his B.Sc in Mathematics from Paris University in 1964. He was from 1964 to 1965 vice president of Union of French Jewish Students and co-principal editor of its magazine Kadima.
During the years 1965-67 he served in the French Army. He earned his D. Es Sc. in 1976 at the Faculté des Sciences de Rouen. Bercovier authored the thesis Régularisation duale des problèmes variationnels mixtes (Dual regularization of mixed variational problems), under the supervision of Jacques-Louis Lions. He belongs to the second generation of Lions' students.
Career
Bercovier was an assistant professor at the University of Rouen (1969 - 1972) where he created the Computation Center.
He emigrated to Israel in 1973 and was director of applications and services at the Hebrew University of Jerusalem Computer Center (1973 - 1976). He joined the School of Applied Sciences of the Hebrew University of Jerusalem as a Lecturer in 1977, becoming an associate professor in 1983, and moved to its Institute of Computer Science in 1986.
From 1997 until 2006 he held the Bertold Badler Chair of Computer Science as a full professor. In 1996-1998 Bercovier set up the Computer Science department of a new university at Paris-La Defense (Pôle universitaire Léonard de Vinci). From 1999 to 2007 he was in charge of the W3C office in Israel, and thus very active in the development of the Internet.
Retirement
He retired from the Hebrew University of Jerusalem in 2007 as an emeritus professor. Since 2010 he has also been a professor and head of the School of Computer Science at the Hadassah Academic College, Jerusalem.
Bercovier has advised more than 30 M.Sc. and 16 Ph.D. students.
Research
Professor Bercovier's research work focuses on Computer Aided Design and in Scientific Computation.
He developed new finite element methods for fluid flows and incompressible materials, based on penalty and reduced integration methods, that are universally implemented. Together with Olivier Pironneau he proved the optimality of the Hood–Taylor finite element for incompressible fluids. He has made contributions to the integration of Computer Aided Design (CAD) and analysis, developed new methods in surface design, and integrated optimal control methods in CAD and cloth simulation for animation.
He has been involved in multidisciplinary research, teaming with surgeons, biologists and pharmacologists (artificial heart valve modeling, Ca+ discharge in axons, keratotomical surgery, drug release models).
Bercovier carries joint research with INRIA, Pierre and Marie Curie University, EPFL, IMATI-Pavia, Institute of Applied Geometry (Johannes Kepler University Linz) and MIT, among others.
He is currently involved in several aspects of Isogeometric Analysis, such as smooth surfaces on arbitrary meshes and Domain Decomposition methods on arbitrarily overlapping domains. On the former subject, his book with Tanya Matskevitch is at the origin of much subsequent research on smooth surfaces over arbitrary quadrilateral meshes.
Professional experience
Parallel to his academic work, Bercovier has been involved in industrial research: he was Chief Consultant for Kleber and Michelin (1972 – 1985), Hutchinson (1990-2017), Pechiney (1992-2017), L’Oréal (1996-2013).
He also contributed to the creation of several hi-tech companies: FDI (now part of Ansys, a US company) was based on his research, as was Bercom, a leading Israeli CAD/CAE firm. Bercovier was also the chairman of Aleph Yissum (now Ex Libris) in the years 1986–1996, and was active in turning the small start-up into the leader in computer systems for libraries.
He contributed to the creation of the R&D team of Visiowave (Lausanne), which was acquired by General Electric.
He is on the editorial board of several scientific journals. He is a member of SIAM, European Mathematical Society and ACM, on the board of SIA (Automotive Engineering Society in France) and ECCOMAS. He was a visiting fellow for long periods at IBM, Digital Equipment Corporation and Matra. He was a member of the scientific council of AMIES, Agency for Interaction in Mathematics with Business and Society, a founding member of the Israel Association for Computational Methods in Mechanics and co-founder and chairman of the World User Association in CFD (Computational fluid dynamics).
Honors and awards
Chevalier des Palmes Académiques (1986)
Conseiller du Commerce Extérieur(fr) (1993, renewed 1995)
Publications
Bercovier is the author of over 80 papers and 3 books.
Books
Topics in computer aided geometric design. Barnhill R.F., Bercovier M., Boehm W., Capasso V., eds. Symposium on topics in computer aided geometric design held in Erice, 1990. RAIRO MMAN 26, Dunod, Paris, 1992.
Domain Decomposition Methods in Science and Engineering 18, Editor, With M.J Gander, Kornhubler and O Widlund, Springer, 2008.
(with Tanya Matskevich) Smooth Bézier Surfaces over Arbitrary Quadrilateral Meshes Lectures Notes of the UMI, 22, Springer, 2017.
Selected articles
Isogeometric analysis with geometrically continuous functions on multi-patch geometries, Kapl, Mario and Buchegger, Florian and Bercovier, Michel and Jüttler, Bert. Computer Methods in Applied Mechanics and Engineering (Vol 316, Pages 209-234), April 2017
Overlapping non Matching Meshes Domain Decomposition Method in Isogeometric Analysis, Bercovier, Michel and Soloveichik, Ilya. February 2015.
Efficient simulation of inextensible cloth. Goldenthal, Rony and Harmon, David and Fattal, Raanan and Bercovier, Michel and Grinspun, Eitan. ACM Transactions on Graphics (TOG) (Pages 49), 2007
Curve and surface fitting and design by optimal control methods. Alhanaty, Michal and Bercovier, Michel. Computer-Aided Design (Volume 33, Pages 167–182), 2001
Virtual topology operators for meshing. Sheffer, Alla and Bercovier, Michel and BLACKER, TED and Clements, Jan. International Journal of Computational Geometry & Applications (Volume 10, Pages 309–331), 2000
“Discrete” G1 assembly of patches over irregular meshes. Matskevich, T and Volpin, O and Bercovier, M. Proceedings of the international conference on Mathematical methods for curves and surfaces II, Lillehammer, 1997 (Pages 351–358), 1998
The development of a mechanical model for a tyre: a 15 years story. Bercovier, Michel and Jankovich, Etienne and Durand, Michel. Proceedings of the Second European Symposium on Mathematics in Industry: ESMI II, March 1–7, 1987, Oberwolfach (Pages 269), 1988
Computer simulation of lamellar keratectomy and laser myopic keratomileusis. Hanna, Khalil D and Jouve, Francois and Bercovier, Michel H and Waring, George O. Journal of refractive surgery (Volume 4, Pages 222–231), 1988
Finite elements and characteristics for some parabolic-hyperbolic problems. Bercovier, Michel and Pironneau, Olivier and Sastri, Vedala. Applied Mathematical Modelling (Volume 7, Pages 89–96), 1983
The vortex method with finite elements. Bardos, Claude and Bercovier, Michel and Pironneau, Olivier. Mathematics of Computation (Volume 36, Pages 119–136), 1981
A finite-element method for incompressible non-Newtonian flows. Bercovier, Michel and Engelman, Michael. Journal of Computational Physics (Volume 36, Pages 313–326), 1980
Error estimates for finite element method solution of the Stokes problem in the primitive variables. Bercovier, Michel and Pironneau, Olivier. Numerische Mathematik (Volume 33, Pages 211–224), 1979
A finite element for the numerical solution of viscous incompressible flows. Bercovier, Michel and Engelman, Michael. Journal of Computational Physics (Volume 30, Pages 181–201), 1979
Perturbation of mixed variational problems. Application to mixed finite element methods. Bercovier, Michel. RAIRO. Analyse numérique (Volume 12, Pages 211–236), 1978
Personal life
Bercovier is divorced, has three sons and lives in Jerusalem. His brother, Herve Bercovier, is a Professor (Emeritus) in the faculty of medicine at the Hebrew University of Jerusalem.
Michel Bercovier is a co-founder and Honorary President of the Association du Festival Lyrique de Montperreux(fr).
References
External links
Michel Bercovier, Hebrew University of Jerusalem
Michel Bercovier, Hadassah Academic College
Michel Bercovier, Mathematics Genealogy Project
My personal story of Aleph-Yissum (Ex Libris), Hebrew University of Jerusalem
1941 births
Living people
Academic staff of the Hebrew University of Jerusalem
Israeli computer scientists
Applied mathematicians
Numerical analysts
Computer graphics researchers
University of Paris alumni
University of Rouen Normandy alumni
French emigrants to Israel | Michel Bercovier | Mathematics | 2,017 |
24,718,548 | https://en.wikipedia.org/wiki/FPSO%20Noble%20Seillean | The FPSO Noble Seillean was a dynamically positioned floating oil production, storage and offloading vessel.
Etymology
The name Seillean means "honeybee" in Gaelic.
History
The vessel was designated as a single-well oil production ship (SWOPS) when constructed for BP by Harland and Wolff in 1986–1988.
The process plant, flare and the riser including subsea connection were designed and procured by Matthew Hall Engineering, which also provided construction assistance and commissioning of the oil production facilities. The original specification for the vessel was as follows:
Production capability - 15,000 barrels of oil per day, 10,000 barrels of produced water per day, 6 million standard cubic metres of gas per day
Production train - 1 train, 2 stages, 1st stage separator pressure 17 barg
Storage capacity - 318,000 barrels
Living accommodation - 45 berths
The vessel was originally designed for a generic North Sea field well, although it was later assigned the Cyrus oilfield on Block 16/28 in the UK sector of the North Sea. Later she served on the Donan field. Seillean was sold by BP in 1993 to Reading & Bates. Brazilian oil company Petrobras signed a four-year charter for Seillean to develop the Roncador field. Seillean was upgraded and arrived in Brazil in December 1998.
After acquisition of Reading & Bates by Transocean, the vessel was acquired by Frontier Drilling in 2002. She was moved to Jubarte field. In February 2006, the vessel started to operate in the Petrobras-operated Golfinho Field in the Espirito Santo Basin off Brazil. In 2007, Seillean was moved to Pipa 2 oilfield.
In June 2010, Seillean was contracted for oil collection and processing at the Macondo Prospect to deal with the Deepwater Horizon oil spill.
With the purchase of Frontier Drilling in 2010, Seillean was acquired by Noble Corporation. She was renamed Noble Seillean and her flag was changed from Panama to Liberia.
Seillean was sent for scrapping to Alang, India, on 26 September 2015.
Description
FPSO Seillean was a dynamically positioned monohull floating production, storage and offloading vessel. She was classed by Lloyd's Register of Shipping as a 100A1 Oil Processing Tanker. Seillean was equipped with a flare, two cargo-handling cranes, a process plant inside the hull, a completion tower, and crew accommodation. The vessel had a displacement of 79,600 tonnes, the capacity to process up to 15,000 barrels of oil per day, and the capacity to store up to 318,000 barrels of oil. The vessel was equipped with a helideck.
Seillean was powered by a hybrid system of three Ruston gas turbines of 3.3 MW each and three MAN diesel-driven generators of 4.2 MW each. These were operated such that when fuel gas was available from production operations, the gas turbine generators generated electrical power. When fuel gas was unavailable, the diesel-driven generators provided the power, supplemented by the gas turbine generators operating on diesel oil.
Seillean had the following production facilities:
processing plant. Maximum oil production was 15,000 barrels of oil per day and maximum produced water handling capability was 10,000 barrels per day.
Storage and transport facilities for 318,000 barrels of oil.
A 6-5/8" riser which can connect to a subsea wellhead.
The crude oil was pumped from the process plant to six cargo oil tanks. During the 1998 upgrade, an offtake reel system was installed which allows ship discharge of cargo to a dynamically positioned shuttle tanker.
References
External links
Seillean website
Floating production storage and offloading vessels
Ships of BP
1988 ships
Ships of Liberia
Ships built in Belfast
Ships built by Harland and Wolff | FPSO Noble Seillean | Chemistry | 770 |
48,040,170 | https://en.wikipedia.org/wiki/NGC%206286 | NGC 6286 is an interacting spiral galaxy located in the constellation Draco. It is designated as Sb/P in the galaxy morphological classification scheme and was discovered by the American astronomer Lewis A. Swift on 13 August 1885. NGC 6286 is located at about 252 million light years away from Earth. NGC 6286 and NGC 6285 form a pair of interacting galaxies, with tidal distortions, categorized as Arp 293 in the Arp Atlas of Peculiar Galaxies.
Gallery
See also
List of NGC objects (6001–7000)
List of NGC objects
References
External links
10647
Spiral galaxies
Draco (constellation)
6286
Interacting galaxies
Luminous infrared galaxies
59352
293 | NGC 6286 | Astronomy | 135 |
41,607,617 | https://en.wikipedia.org/wiki/IT%20resource%20performance%20management | IT RPM, or IT resource performance management, is a concept employed within the discipline of IT service management. In practice, it is a combination of technologies and processes which combine social collaboration, mobility tools and gamification in order to provide higher quality support in a service desk environment. It is typically under-pinned by best practices like ITIL, COBIT or Lean Six Sigma for Service Management.
Key concepts
Device-agnostic tools (e.g. anywhere access from nearly any type of computing device)
IT to IT use / Business to IT use, as well as crowdsourced support
Management reports and leaderboards to unite, encourage and reward staff for goal achievements
Capturing ad-hoc (out-of-band) communications to harness user knowledge
Game mechanic theories to identify latent skills in groups of staff or individuals
Predictive knowledge management to provide faster end-user support; typically deployed through self-service portals or decision tree technologies
Using surveys and peer feedback for continuous improvement
See also
Social media
Gamification
Incentive-centered design
Knowledge industry | IT resource performance management | Technology | 211 |
61,617,980 | https://en.wikipedia.org/wiki/Society%20of%20Environmental%20Toxicology%20and%20Chemistry | The Society of Environmental Toxicology and Chemistry (SETAC) is an international environmental toxicology and environmental chemistry organization.
History
It was set up to allow interdisciplinary communication between environmental scientists around the world. It was founded in 1979 in North America.
Function
SETAC promotes the environmental sciences by conducting meetings, workshops, and symposia; bestowing awards recognizing excellence; promoting education in the field by organizing training courses and supporting students; and through its publication program. It holds meetings and events around the world. It produces two scientific journals: Environmental Toxicology and Chemistry (ET&C), which it has published since 1982, originally yearly and then monthly from 1986; and Integrated Environmental Assessment and Management (IEAM). It also produces online books and easy-to-read Technical Issue Papers and Science Briefs, which are publicly available.
See also
European Association of Geochemistry
References
1979 establishments in the United States
Chemistry societies
Environmental chemistry
Environmental toxicology
International environmental organizations
Scientific organizations established in 1979
Pensacola, Florida
Toxicology organizations | Society of Environmental Toxicology and Chemistry | Chemistry,Environmental_science | 203 |
5,095,949 | https://en.wikipedia.org/wiki/42%20Cassiopeiae | 42 Cassiopeiae is a possible binary star system in the northern circumpolar constellation of Cassiopeia. It is visible to the naked eye as a dim, blue-white hued star with a baseline apparent visual magnitude of +5.18. The system is located approximately 291 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +7 km/s.
This is classified as a suspected eclipsing binary of the Algol type, with a period of 16.77 days and a magnitude decrease of 0.3. The primary is a B-type main-sequence star with a stellar classification of B9V. It is roughly 67 million years old and is spinning with a projected rotational velocity of 149 km/s. The star has 2.7 times the mass of the Sun and 2.6 times the Sun's radius. It is radiating 66 times the luminosity of the Sun from its photosphere at an effective temperature of 10,141 K.
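These quoted values are mutually consistent under the Stefan–Boltzmann relation (a back-of-the-envelope check added here, not a figure from the source; T_\odot \approx 5{,}772 K):

\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4} = (2.6)^2 \left(\frac{10\,141}{5\,772}\right)^{4} \approx 6.8 \times 9.5 \approx 64,

close to the quoted 66 solar luminosities.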
References
B-type main-sequence stars
Spectroscopic binaries
Algol variables
Cassiopeia (constellation)
BD+69 0114
Cassiopeiae, 42
010250
008016
0480 | 42 Cassiopeiae | Astronomy | 257 |
28,360,272 | https://en.wikipedia.org/wiki/Les%20Houches%20School%20of%20Physics | Les Houches School of Physics () is an international physics center dedicated to seasonal schools and workshops. It is located in Les Houches, France. The school was founded in 1951 by French scientist Cécile DeWitt-Morette.
Among its participants have been famous Nobel laureates in Physics, including Enrico Fermi, Wolfgang Pauli, Murray Gell-Mann and John Bardeen. According to a former director of the school, Jean Zinn-Justin, the school is the “mother of all modern schools of physics”.
Since 2017, it is a Joint Research Service (, UMS) of the French National Centre for Scientific Research (CNRS) and the Grenoble Alpes University. In 2020, it was recognized as a EPS Historic Site by the European Physical Society (EPS).
History
The school was founded by Cécile DeWitt-Morette in 1951. She was 29 years old at the time, had married physicist Bryce DeWitt a week before, and was still a postdoctoral researcher in the United States. The school was created as a post-World War II effort to improve the standard of modern physics in Europe, which was lagging behind the United States. She was inspired by her experience in the Girl Scouts and by Richard Feynman's lectures at the 1949 annual Summer Symposium at the University of Michigan in Ann Arbor, which DeWitt-Morette attended.
She quickly gathered the institutional and financial support of Pierre Victor Auger (then director of the Natural Sciences Department at UNESCO), the CNRS, Albert Châtelet (dean of faculty of physics of the University of Paris) and in charge of the French Ministry of Education. With a reduced budget, she settled to open the school in a rustic farm surrounded by chalets, a few kilometers from the village of Les Houches.
The school was publicized by her French colleagues: Yves Rocard at the École normale supérieure, Louis Leprince-Ringuet at École polytechnique, Louis de Broglie and Alexandre Proca at the Institut Henri Poincaré, and Francis Perrin at the Collège de France and CEA who hired a secretary to handle the paperwork. Louis Néel acquired the patronage of the Grenoble faculty of science in order for the school to be legally attached to the University of Grenoble. DeWitt-Morette also obtained international support from J. Robert Oppenheimer, Enrico Fermi, Julian Schwinger and Victor Weisskopf.
The first session in 1951 was attended by young French professors like , Alfred Kastler and , as well as by famous physicists from abroad including Walter Heitler, Léon van Hove, Emilio Segrè, Walter Kohn and Wolfgang Pauli. The first lessons were given by Van Hove on quantum mechanics.
Up until the 1960s, the students at the school were cut off from the outside world with the bare minimum in amenities. Nobel laureate Claude Cohen-Tannoudji, a student in 1955, recalled
Yves Rocard and Maurice Lévy, inspired by the school, founded a summer school in Cargèse, Corsica, which they called the “Les Houches on the beach”. Subsequently, a number of scientific summer schools opened all over Europe following the same model, partly with the support of the Advanced Study Institutes program of NATO.
In its early years, it caused some political controversy, with the French Communist Party accusing the school of US espionage and interference. A counter-school project against the allegedly Americanized Les Houches school was considered but was short-lived.
In 1977, a physics centre was created, specialised for shorter conferences which could take place all year round. In 1988, a pre-doctoral school was opened for young researchers entering into their PhD theses.
Attendees
This table records attendees who later went on to receive either the Nobel Prize in Physics or the Fields Medal.
Prize
The Cecile DeWitt-Morette, Ecole de Physique des Houches Prize has been awarded annually since 2019. It is awarded to scientists less than 55 years old, of any nationality, who have made a remarkable contribution to physics and have attended the school as a lecturer or student. The jury is composed of members of the French Academy of Sciences. Since 2023, it has been called the Cécile DeWitt-Morette / Ecole de Physique des Houches / Fondation CFM for Research prize.
The laureates are:
References
External links
École de Physique des Houches web site
Education in France
Summer schools
Physics education | Les Houches School of Physics | Physics | 919 |
11,118,768 | https://en.wikipedia.org/wiki/Parabolic%20induction | In mathematics, parabolic induction is a method of constructing representations of a reductive group from representations of its parabolic subgroups.
If G is a reductive algebraic group and P = MAN is the Langlands decomposition of a parabolic subgroup P, then parabolic induction consists of taking a representation of MA, extending it to P by letting N act trivially, and inducing the result from P to G.
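In symbols (a standard formulation supplied here for concreteness, not taken verbatim from the source): if \sigma is a representation of MA, extended to P = MAN by letting N act trivially, the parabolically induced representation is

\pi \;=\; \operatorname{Ind}_P^G\!\big(\sigma \otimes \mathbf{1}_N\big).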
There are some generalizations of parabolic induction using cohomology, such as cohomological parabolic induction and Deligne–Lusztig theory.
Philosophy of cusp forms
The philosophy of cusp forms was a slogan of Harish-Chandra, expressing his idea of a kind of reverse engineering of automorphic form theory, from the point of view of representation theory. The discrete group Γ fundamental to the classical theory disappears, superficially. What remains is the basic idea that representations in general are to be constructed by parabolic induction of cuspidal representations. A similar philosophy was enunciated by Israel Gelfand, and the philosophy is a precursor of the Langlands program. A consequence for thinking about representation theory is that cuspidal representations are the fundamental class of objects, from which other representations may be constructed by procedures of induction.
According to Nolan Wallach
Put in the simplest terms the "philosophy of cusp forms" says that for each Γ-conjugacy classes of Q-rational parabolic subgroups one should construct automorphic functions (from objects from spaces of lower dimensions) whose constant terms are zero for other conjugacy classes and the constant terms for [an] element of the given class give all constant terms for this parabolic subgroup. This is almost possible and leads to a description of all automorphic forms in terms of these constructs and cusp forms. The construction that does this is the Eisenstein series.
Notes
References
A. W. Knapp, Representation Theory of Semisimple Groups: An Overview Based on Examples, Princeton Landmarks in Mathematics, Princeton University Press, 2001. .
Representation theory | Parabolic induction | Mathematics | 416 |
16,881,256 | https://en.wikipedia.org/wiki/Lindemann%20index | The Lindemann index is a simple measure of thermally driven disorder in atoms or molecules.
Definition
The local Lindemann index of atom i is defined as:

q_i = \frac{1}{N-1} \sum_{j \neq i} \frac{\sqrt{\langle r_{ij}^2 \rangle - \langle r_{ij} \rangle^2}}{\langle r_{ij} \rangle}

where angle brackets indicate a time average, N is the number of atoms, and r_{ij} is the distance between atoms i and j. The global Lindemann index is a system average of this quantity.
In condensed matter physics a departure from linearity in the behaviour of the global Lindemann index or an increase above a threshold value related to the spacing between atoms (or micelles, particles, globules, etc.) is often taken as the indication that a solid-liquid phase transition has taken place. See Lindemann melting criterion.
Biomolecules often possess separate regions with different order characteristics. In order to quantify or illustrate local disorder, the local Lindemann index can be used.
Factors when using the Lindemann index
Care must be taken if the molecule possesses globally defined dynamics, such as about a hinge or pivot, because these motions will obscure the local motions which the Lindemann index is designed to quantify. An appropriate tactic in this circumstance is to sum the rij only over a small number of neighbouring atoms to arrive at each qi. A further variety of such modifications to the Lindemann index are available and have different merits, e.g. for the study of glassy vs crystalline materials.
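For instance, the neighbour-restricted variant just described could be written as (a sketch using notation introduced here, with \mathcal{N}_i a chosen set of near neighbours of atom i):

q_i^{\mathrm{local}} = \frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} \frac{\sqrt{\langle r_{ij}^2 \rangle - \langle r_{ij} \rangle^2}}{\langle r_{ij} \rangle}.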
References
Molecular physics
Condensed matter physics
Dimensionless numbers of physics | Lindemann index | Physics,Chemistry,Materials_science,Engineering | 280 |
50,982,946 | https://en.wikipedia.org/wiki/Zo%C3%A9%20Chatzidakis | Zoé Maria Chatzidakis (died January 22, 2025) was a mathematician who worked as a director of research at the École Normale Supérieure in Paris, France. Her research concerned model theory and difference algebra. She was invited to give the Tarski Lectures in 2020, though the lectures were postponed due to the COVID-19 pandemic.
Education and employment
Chatzidakis earned her Ph.D. in 1984 from Yale University, under the supervision of Angus Macintyre, with a dissertation on the model theory of profinite groups. She was a senior researcher and team director in Algebra and Geometry in the Département de mathématiques et applications de l'École Normale Supérieure.
Honors and awards
She was the 2013 winner of the Leconte Prize, and was an invited speaker at the International Congress of Mathematicians in 2014. She was named MSRI Chern Professor for Fall 2020.
References
External links
Home page
Year of birth missing
2025 deaths
Model theorists
French mathematicians
French women mathematicians
Yale Graduate School of Arts and Sciences alumni | Zoé Chatzidakis | Mathematics | 220 |
29,525,748 | https://en.wikipedia.org/wiki/Stormwater%20harvesting | Stormwater harvesting or stormwater reuse is the collection, accumulation, treatment or purification, and storage of stormwater for its eventual reuse. While rainwater harvesting collects precipitation primarily from rooftops, stormwater harvesting deals with collection of runoff from creeks, gullies, ephemeral streams and underground conveyance. It can also include catchment areas from developed surfaces, such as roads or parking lots, or other urban environments such as parks, gardens and playing fields.
Water that comes into contact with impervious surfaces, or saturated surfaces incapable of absorbing more water, is termed surface runoff. As the surface runoff travels greater distance over impervious surfaces it often becomes contaminated and collects an increasing amount of pollutants. A main challenge of stormwater harvesting is the removal of pollutants in order to make this water available for reuse.
Stormwater harvesting projects often have multiple objectives, such as reducing contaminated runoff to sensitive waters, promoting groundwater recharge, and non-potable applications such as toilet flushing and irrigation. Stormwater harvesting is also practiced in areas of the United States as a way to address rising water demands as population rises. Internationally, Australia is notable in its active pursuit of stormwater harvesting.
Systems
Ground catchments systems channel water from a prepared catchment area into storage. These systems are often considered in areas where rainwater is scarce and other sources of water are not available. If properly designed, ground catchment systems can collect large quantities of rainwater. In arid ranch land, a catchwater or cattle tank can be constructed across shallow ephemeral washes to impound and collect what little stormwater does generate there. This untreated water is easily accessed and utilized by livestock. More intricate collection and processing systems are necessary for stormwater harvest to be reused for human uses.
Stormwater capture
Five Core Steps: End Use, Collection, Treatment, Storage, and Distribution
End Use
Water resources become more scarce as the human population grows. Populations need to create systems and methods to minimize water consumption at all levels, while simultaneously engineering new methods of water reuse. For non-potable water purposes with lower water quality needs, people can use stormwater for toilet flushing, gardening, fire fighting, irrigation, etc. For potable water use of higher water quality, stormwater needs to be highly treated before final use. The latter has rarely been used around the world. Some stormwater collection systems aim to simply reduce the amount of stormwater runoff that flows to a nearby waterway. The benefits of these systems include reducing pollution in streams, lakes, and nearshore coastal environments, as well as promoting groundwater recharge. The intended end use of a system will determine the level of treatment and processing of collected stormwater.
Collection
Stormwater collection is a process of directing water into storage from stormwater gathering, such as urban runoff. Generally, there are two types, online storages and offline storages. Online storages are a conventional way of acquiring stormwater directly from waterways or drains. For instance, the urban drainage system of channels and pipes conduct stormwater into storage facilities, often with treatment at or just prior to storage. One drawback of this collection design is the required maintenance that systems may require for structural integrity to prevent conduit failure resulting in stormwater leakage. Water Sensitive Urban Design, or WSUD, is one comprehensive planning and design process that incorporates online stormwater storage into urban development models. Offline storages require additional facilities to conduct water from waterways indirectly, and can serve as storage for stormwater prior to treatment. For instance, weirs divert flows into stormwater containment and contribute to a large part of stormwater catchment for a city, where it is then stored for future treatment and distribution. Stormwater collection is widely practiced for purposes of urban runoff and flood mitigation as well.
Treatment
Stormwater treatment is the greatest challenge for stormwater harvesting. Water treatment processes depend on the intended end use and the catchment equipment, which determines the level of pollutants to be filtered and removed. For instance, construction uses may only require non-potable water where the water processing includes only filtration and disinfection. However, for potable uses of higher water quality, the treatment process requires screening, coagulation, filtration, carbon adsorption, and disinfection.
Storage
There are three factors to consider in terms of storage: function, location, and capacity. The planner is responsible for determining the end use of the stored stormwater, such as fire fighting, industrial water supply, farming and irrigation, recreation, flood mitigation, groundwater recharge, etc. Regarding location of a system and its storage, a water tank in proximity to the waters' end use may be the best design. If the collection system is intended to slow runoff and/or recharge an aquifer, an on-site, below ground infiltration systems may be considered. Choices between online and offline storages can affect the surrounding natural aquatic systems and yields different maintenance costs and flood mitigation effectiveness. The capacity of a storage system will be determined by the type of end use in a particular climate or period of time.
Distribution
Generally, there are two types of stormwater distribution systems. The first is open space irrigation systems. This application uses treated stormwater to irrigate open spaces such as parks, municipal green spaces, golf courses, etc., and can be implemented at a hyper-local scale (ie catchment and reuse occurs at the same park). Another system is a non-potable distribution system which distributes treated stormwater to be used for things like toilet flushing, fire fighting, and some industrial uses. This system may require additional infrastructure such as a third-pipe network for distribution.
Concerns
Major concerns for stormwater harvesting projects include cost effectiveness as well as quality, quantity, and reliability of the reclamation, as well as existing water management infrastructure and soil characteristics. Some projects have estimated stormwater harvesting to be twice as expensive per unit -when including operating costs- versus other potable water alternatives. New construction of third-pipe networks in urban settings can be prohibitively expensive; therefore the ideal project will produce recycled stormwater of potable quality in order to take advantage of existing distribution infrastructure. Attaining quality as well as useful quantity water from stormwater harvesting presents challenges of filtration efficacy as well as source reliability and predictability. However, other valuable (and hard-to-calculate) benefits include reducing soil erosion by slowing flow rates and reducing demands on local aquifers, as well as reduction of pollution into local waterways.
See also
Rainwater Hog
Stormwater detention vault
References
Stormwater management
Water supply
Water conservation
Hydrology and urban planning
Appropriate technology | Stormwater harvesting | Chemistry,Engineering,Environmental_science | 1,364 |
22,003,136 | https://en.wikipedia.org/wiki/Quantum%20triviality | In a quantum field theory, charge screening can restrict the value of the observable "renormalized" charge of a classical theory. If the only resulting value of the renormalized charge is zero, the theory is said to be "trivial" or noninteracting. Thus, surprisingly, a classical theory that appears to describe interacting particles can, when realized as a quantum field theory, become a "trivial" theory of noninteracting free particles. This phenomenon is referred to as quantum triviality. Strong evidence supports the idea that a field theory involving only a scalar Higgs boson is trivial in four spacetime dimensions, but the situation for realistic models including other particles in addition to the Higgs boson is not known in general. Nevertheless, because the Higgs boson plays a central role in the Standard Model of particle physics, the question of triviality in Higgs models is of great importance.
This Higgs triviality is similar to the Landau pole problem in quantum electrodynamics, where this quantum theory may be inconsistent at very high momentum scales unless the renormalized charge is set to zero, i.e., unless the field theory has no interactions. The Landau pole question is generally considered to be of minor academic interest for quantum electrodynamics because of the inaccessibly large momentum scale at which the inconsistency appears. This is not however the case in theories that involve the elementary scalar Higgs boson, as the momentum scale at which a "trivial" theory exhibits inconsistencies may be accessible to present experimental efforts such as at the Large Hadron Collider (LHC) at CERN. In these Higgs theories, the interactions of the Higgs particle with itself are posited to generate the masses of the W and Z bosons, as well as lepton masses like those of the electron and muon. If realistic models of particle physics such as the Standard Model suffer from triviality issues, the idea of an elementary scalar Higgs particle may have to be modified or abandoned.
The situation becomes more complex in theories that involve other particles however. In fact, the addition of other particles can turn a trivial theory into a nontrivial one, at the cost of introducing constraints. Depending on the details of the theory, the Higgs mass can be bounded or even calculable. These quantum triviality constraints are in sharp contrast to the picture one derives at the classical level, where the Higgs mass is a free parameter. Quantum triviality can also lead to a calculable Higgs mass in asymptotic safety scenarios.
Triviality and the renormalization group
Modern considerations of triviality are usually formulated in terms of the real-space renormalization group, largely developed by Kenneth Wilson and others. Investigations of triviality are usually performed in the context of lattice gauge theory. A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in Wilson's extensive important contributions. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
In more technical terms, let us assume that we have a theory described by a certain function $Z$ of the state variables $\{s_i\}$ and a certain set of coupling constants $\{J_k\}$. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables $\{s_i\} \to \{\tilde s_j\}$; the number of $\tilde s_j$ must be lower than the number of $s_i$. Now let us try to rewrite the function only in terms of the $\tilde s_j$. If this is achievable by a certain change in the parameters, $\{J_k\} \to \{\tilde J_l\}$, then the theory is said to be renormalizable. The most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to be trivial. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
Historical background
The first evidence of possible triviality of quantum field theories was obtained by Landau, Abrikosov, and Khalatnikov by finding the following relation of the observable charge $g_{\text{obs}}$ with the "bare" charge $g_0$,

$$g_{\text{obs}} = \frac{g_0}{1 + \beta_2 g_0 \ln(\Lambda/m)} \qquad (1)$$

where $m$ is the mass of the particle and $\Lambda$ is the momentum cut-off. If $g_0$ is finite, then $g_{\text{obs}}$ tends to zero in the limit of infinite cut-off $\Lambda$.
In fact, the proper interpretation of Eq. (1) consists in its inversion, so that $g_0$ (related to the length scale $\Lambda^{-1}$) is chosen to give a correct value of $g_{\text{obs}}$,

$$g_0 = \frac{g_{\text{obs}}}{1 - \beta_2 g_{\text{obs}} \ln(\Lambda/m)} \qquad (2)$$

The growth of $g_0$ with $\Lambda$ invalidates Eqs. (1) and (2) in the region $g_0 \approx 1$ (since they were obtained for $g_0 \ll 1$), and the existence of the "Landau pole" in Eq. (2) has no physical meaning.
The actual behavior of the charge $g(\mu)$ as a function of the momentum scale $\mu$ is determined by the full Gell-Mann–Low equation

$$\frac{dg}{d\ln\mu} = \beta(g) = \beta_2 g^2 + \beta_3 g^3 + \cdots$$

which gives Eqs. (1), (2) if it is integrated under the conditions $g(\mu) = g_{\text{obs}}$ for $\mu = m$ and $g(\mu) = g_0$ for $\mu = \Lambda$, when only the term with $\beta_2$ is retained on the right hand side.
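At one loop, this flow can be integrated in closed form, and the approach to the Landau pole is easy to reproduce numerically. A minimal Python sketch (the values of $\beta_2$ and $g_{\text{obs}}$ are illustrative assumptions, not taken from any particular theory):

```python
beta2 = 0.1   # assumed one-loop coefficient (illustrative)
g_obs = 0.5   # assumed renormalized charge at the reference scale mu = m
t_pole = 1.0 / (beta2 * g_obs)   # analytic pole: ln(mu0/m) = 1/(beta2 * g_obs)

def run_coupling(t_max, steps=200_000):
    """Forward-Euler integration of dg/dt = beta2 * g**2, with t = ln(mu/m)."""
    g, dt = g_obs, t_max / steps
    for _ in range(steps):
        g += beta2 * g * g * dt
    return g

for frac in (0.5, 0.9, 0.99):
    t = frac * t_pole
    analytic = g_obs / (1.0 - beta2 * g_obs * t)  # one-loop closed form, cf. Eq. (2)
    print(f"ln(mu/m) = {t:5.2f}: numeric g = {run_coupling(t):8.3f}, "
          f"analytic g = {analytic:8.3f}")

print(f"one-loop Landau pole at ln(mu0/m) = {t_pole:.1f}")
```

The coupling grows without bound as ln(mu/m) approaches 1/(β₂·g_obs), which is the one-loop image of the inconsistency discussed below.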
The general behavior of $g(\mu)$ relies on the appearance of the function $\beta(g)$. According to the classification by Bogoliubov and Shirkov, there are three qualitatively different situations: (a) if $\beta(g)$ has a zero at a finite value $g^*$, then the growth of $g$ is saturated, i.e. $g(\mu) \to g^*$ for $\mu \to \infty$; (b) if $\beta(g)$ is non-alternating and behaves as $\beta(g) \propto g^\alpha$ with $\alpha \le 1$ for large $g$, then the growth of $g(\mu)$ continues to infinity; (c) if $\beta(g) \propto g^\alpha$ with $\alpha > 1$ for large $g$, then $g(\mu)$ is divergent at a finite value $\mu_0$ and a real Landau pole arises: the theory is internally inconsistent due to the indeterminacy of $g(\mu)$ for $\mu > \mu_0$.
The latter case corresponds to quantum triviality in the full theory (beyond its perturbation context), as can be seen by reductio ad absurdum. Indeed, if $g_{\text{obs}}$ is finite, the theory is internally inconsistent. The only way to avoid the inconsistency is to let $\mu_0$ tend to infinity, which is possible only for $g_{\text{obs}} = 0$.
Conclusions
As a result, the question of whether the Standard Model of particle physics is nontrivial remains a serious unresolved question. Theoretical proofs of triviality of the pure scalar field theory exist, but the situation for the full standard model is unknown. The implied constraints on the standard model have been discussed.
See also
Hierarchy problem
References
Renormalization group
Quantum mechanics
Mathematical physics
Physical phenomena | Quantum triviality | Physics,Mathematics | 1,376 |
43,180,070 | https://en.wikipedia.org/wiki/Actinide%20chemistry | Actinide chemistry (or actinoid chemistry) is one of the main branches of nuclear chemistry that investigates the processes and molecular systems of the actinides. The actinides derive their name from the group 3 element actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. The actinide series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium.
Main branches
Organoactinide chemistry
In contrast to the relatively early flowering of organotransition-metal chemistry (1955 to the present), the corresponding development of actinide organometallic chemistry has taken place largely within the past 15 or so years. During this period, 5f organometallic science has blossomed, and it is now apparent that the actinides have a rich, intricate, and highly informative organometallic chemistry. Intriguing parallels to and sharp differences from the d-block elements have emerged. Actinides can coordinate organic functional groups or bind to carbon through covalent bonds.
Thermodynamics of actinides
The necessity of obtaining accurate thermodynamic quantities for the actinide elements and their compounds was recognized at the outset of the Manhattan Project, when a dedicated team of scientists and engineers initiated the program to exploit nuclear energy for military purposes. Since the end of World War II, both fundamental and applied objectives have motivated a great deal of further study of actinide thermodynamics.
Nanotechnology and supramolecular chemistry of actinides
The possibility of exploiting the unique properties of lanthanides in nanotechnology has been demonstrated. The origin of the linear and nonlinear optical properties of lanthanide compounds with phthalocyanines, porphyrins, naphthalocyanines, and their analogs in solution and in the condensed state, and the prospects of obtaining novel materials on their basis, have been discussed. Based on the electronic structure and properties of lanthanides and their compounds – namely, optical and magnetic characteristics, electronic and ionic conductivity, and fluctuating valence – molecular engines can be classified into several types: high-speed memory storage engines; photoconversion molecular engines based on Ln(II) and Ln(III); electrochemical molecular engines involving silicate and phosphate glasses; molecular engines whose operation is based on insulator–semiconductor, semiconductor–metal, and metal–superconductor conductivity phase transitions; solid electrolyte molecular engines; and miniaturized molecular engines for medical analysis. It has been shown that thermodynamically stable nanoparticles of LnxMy composition can be formed by d elements of the second halves of the series, i.e., those arranged after M = Mn, Tc, and Re.
Biological and environmental chemistry of actinides
Generally, ingested insoluble actinide compounds, such as high-fired uranium dioxide and mixed oxide (MOX) fuel, will pass through the digestive system with little effect since they cannot dissolve and be absorbed by the body. Inhaled actinide compounds, however, will be more damaging as they remain in the lungs and irradiate the lung tissue. Ingested low-fired oxides and soluble salts such as nitrates can be absorbed into the bloodstream. If they are inhaled, it is possible for the solid to dissolve and leave the lungs. Hence the dose to the lungs will be lower for the soluble form.
Radon and radium are not actinides—they are both radioactive daughters from the decay of uranium. Aspects of their biology and environmental behaviour are discussed at radium in the environment.
In India, a large amount of thorium ore can be found in the form of monazite in placer deposits of the Western and Eastern coastal dune sands, particularly in the Tamil Nadu coastal areas. The residents of this area are exposed to a naturally occurring radiation dose ten times higher than the worldwide average.
Thorium has been linked to liver cancer. In the past thoria (thorium dioxide) was used as a contrast agent for medical X-ray radiography but its use has been discontinued. It was sold under the name Thorotrast.
Uranium is about as abundant as arsenic or molybdenum. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from these sources).
Seawater contains about 3.3 parts per billion of uranium by weight, as uranium(VI) forms soluble carbonate complexes. The extraction of uranium from seawater has been considered as a means of obtaining the element. Because of uranium's very low specific activity, its chemical effects on living things can often outweigh the effects of its radioactivity.
Plutonium, like other actinides, readily forms a plutonium dioxide (plutonyl) core (PuO2). In the environment, this plutonyl core readily complexes with carbonate as well as other oxygen moieties (OH−, NO2−, NO3−, and SO42−) to form charged complexes which can be readily mobile with low affinities to soil.
Nuclear reactions
Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium (139Ba, with a half-life of 83 minutes and 140Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium.
PUREX
The PUREX process is a liquid–liquid extraction ion-exchange method used to reprocess spent nuclear fuel, in order to extract primarily uranium and plutonium, independent of each other, from the other constituents. The current method of choice is to use the PUREX liquid–liquid extraction process which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction.
Pu4+(aq) + 4 NO3−(aq) + 2 S(organic) → [Pu(NO3)4S2](organic)
A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrates and two triethyl phosphates has been characterised by X-ray crystallography. After the dissolution step it is normal to remove the fine insoluble solids, because otherwise they will disturb the solvent extraction process by altering the liquid-liquid interface. It is known that the presence of a fine solid can stabilize an emulsion. Emulsions are often referred to as third phases in the solvent extraction community.
An organic solvent composed of 30% tributyl phosphate (TBP) in a hydrocarbon solvent, such as kerosene, is used to extract the uranium as UO2(NO3)2·2TBP complexes, and plutonium as similar complexes, from other fission products, which remain in the aqueous phase. The transuranium elements americium and curium also remain in the aqueous phase. The nature of the organic soluble uranium complex has been the subject of some research. A series of complexes of uranium with nitrate and trialkyl phosphates and phosphine oxides have been characterized.
Plutonium is separated from uranium by treating the kerosene solution with aqueous ferrous sulphamate, which selectively reduces the plutonium to the +3 oxidation state. The plutonium passes into the aqueous phase. The uranium is stripped from the kerosene solution by back-extraction into nitric acid at a concentration of ca. 0.2 mol/dm3.
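The mass-balance arithmetic behind such an extraction stage can be sketched numerically: for a distribution ratio D = [M]org/[M]aq, the fraction of solute remaining in the aqueous phase after each contact with fresh solvent is 1/(1 + D·Vorg/Vaq). In the Python sketch below, the value of D and the phase volumes are illustrative assumptions, not measured PUREX data:

```python
def fraction_remaining(D, v_org, v_aq, n_contacts):
    """Fraction of solute left in the aqueous phase after n successive
    contacts with fresh organic solvent, given distribution ratio D."""
    per_stage = 1.0 / (1.0 + D * v_org / v_aq)  # aqueous fraction per contact
    return per_stage ** n_contacts

# Illustrative values: D = 20, equal organic and aqueous phase volumes
for n in (1, 2, 3):
    left = fraction_remaining(D=20.0, v_org=1.0, v_aq=1.0, n_contacts=n)
    print(f"{n} contact(s): {100 * (1 - left):.2f}% extracted")
```

Repeated contacts with fresh solvent are why industrial plants run such extractions in multi-stage counter-current contactors.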
See also
Nuclear chemistry
Actinides in the environment
Important publications in nuclear chemistry
References
Nuclear chemistry
Chemistry
Actinides | Actinide chemistry | Physics,Chemistry | 1,759 |
924,869 | https://en.wikipedia.org/wiki/Longitude%20of%20the%20ascending%20node | The longitude of the ascending node, also known as the right ascension of the ascending node, is one of the orbital elements used to specify the orbit of an object in space. Denoted with the symbol Ω, it is the angle from a specified reference direction, called the origin of longitude, to the direction of the ascending node (☊), as measured in a specified reference plane. The ascending node is the point where the orbit of the object passes through the plane of reference, as seen in the adjacent image.
Types
Commonly used reference planes and origins of longitude include:
For geocentric orbits, Earth's equatorial plane as the reference plane, and the First Point of Aries (FPA) as the origin of longitude. In this case, the longitude is also called the right ascension of the ascending node (RAAN). The angle is measured eastwards (or, as seen from the north, counterclockwise) from the FPA to the node. An alternative is the local time of the ascending node (LTAN), based on the local mean time at which the spacecraft crosses the equator. Similar definitions exist for satellites around other planets (see planetary coordinate systems).
For heliocentric orbits, the ecliptic as the reference plane, and the FPA as the origin of longitude. The angle is measured counterclockwise (as seen from north of the ecliptic) from the First Point of Aries to the node.
For orbits outside the Solar System, the plane tangent to the celestial sphere at the point of interest (called the plane of the sky) as the reference plane, and north (i.e. the perpendicular projection of the direction from the observer to the north celestial pole onto the plane of the sky) as the origin of longitude. The angle is measured eastwards (or, as seen by the observer, counterclockwise) from north to the node.
In the case of a binary star known only from visual observations, it is not possible to tell which node is ascending and which is descending. In this case the orbital parameter which is recorded is simply labeled longitude of the node, ☊, and represents the longitude of whichever node has a longitude between 0 and 180 degrees.
Calculation from state vectors
In astrodynamics, the longitude of the ascending node can be calculated from the specific relative angular momentum vector h as follows:

$$\mathbf{n} = \mathbf{k} \times \mathbf{h} = (-h_y, h_x, 0)$$
$$\Omega = \begin{cases} \arccos\left(\frac{n_x}{|\mathbf{n}|}\right), & n_y \ge 0 \\ 2\pi - \arccos\left(\frac{n_x}{|\mathbf{n}|}\right), & n_y < 0 \end{cases}$$
Here, n = ⟨nx, ny, nz⟩ is a vector pointing towards the ascending node. The reference plane is assumed to be the xy-plane, and the origin of longitude is taken to be the positive x-axis. k is the unit vector (0, 0, 1), which is the normal vector to the xy reference plane.
For non-inclined orbits (with inclination equal to zero), ☊ is undefined. For computation it is then, by convention, set equal to zero; that is, the ascending node is placed in the reference direction, which is equivalent to letting n point towards the positive x-axis.
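A minimal numerical sketch of this computation in Python (using NumPy; the example position and velocity vectors are arbitrary assumptions for illustration):

```python
import numpy as np

def longitude_of_ascending_node(h):
    """Omega in radians from the specific angular momentum vector h,
    with the xy-plane as reference plane and +x as origin of longitude."""
    k = np.array([0.0, 0.0, 1.0])
    n = np.cross(k, h)           # node vector, points toward the ascending node
    n_norm = np.linalg.norm(n)
    if n_norm == 0.0:            # non-inclined orbit: Omega undefined, set to 0
        return 0.0
    omega = np.arccos(n[0] / n_norm)
    return 2.0 * np.pi - omega if n[1] < 0 else omega

# h = r x v for an arbitrary illustrative state (position in km, velocity in km/s)
r = np.array([7000.0, 0.0, 1000.0])
v = np.array([0.0, 7.5, 1.0])
h = np.cross(r, v)
print(f"Omega = {np.degrees(longitude_of_ascending_node(h)):.1f} degrees")
```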
See also
Equinox
Kepler orbits
List of orbits
Orbital node
Perturbation of the orbital plane can cause precession of the ascending node.
References
Orbits
Angle | Longitude of the ascending node | Physics | 671 |
2,798,040 | https://en.wikipedia.org/wiki/Ferrocyanide | Ferrocyanide is the name of the anion [Fe(CN)6]4−. Salts of this coordination complex give yellow solutions. It is usually available as the salt potassium ferrocyanide, which has the formula K4Fe(CN)6. [Fe(CN)6]4− is a diamagnetic species, featuring low-spin iron(II) center in an octahedral ligand environment. Although many salts of cyanide are highly toxic, ferro- and ferricyanides are less toxic because they tend not to release free cyanide. It is of commercial interest as a precursor to the pigment Prussian blue and, as its potassium salt, an anticaking agent.
Reactions
Treatment of ferrocyanide with ferric-containing salts gives the intensely coloured pigment Prussian blue (sometimes called ferric ferrocyanide and ferrous ferricyanide).
Ferrocyanide is reversibly oxidized by one electron, giving ferricyanide:
[Fe(CN)6]4− ⇌ [Fe(CN)6]3− + e−
This conversion can be followed spectroscopically at 420 nm, since ferrocyanide has negligible absorption at this wavelength while ferricyanide has an extinction coefficient of 1040 M−1 cm−1.
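Since only ferricyanide absorbs appreciably at 420 nm, the Beer–Lambert law A = εcl converts a measured absorbance at that wavelength directly into a ferricyanide concentration. A minimal Python sketch (the absorbance reading and path length are assumed for illustration):

```python
EPSILON_420 = 1040.0  # M^-1 cm^-1, extinction coefficient of ferricyanide at 420 nm

def ferricyanide_concentration(absorbance, path_cm=1.0):
    """Beer-Lambert law A = epsilon * c * l, solved for c (mol/L)."""
    return absorbance / (EPSILON_420 * path_cm)

A = 0.52  # assumed absorbance reading in a 1 cm cuvette
print(f"[Fe(CN)6 3-] = {ferricyanide_concentration(A) * 1e3:.2f} mM")
```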
Applications
The dominant use of ferrocyanides is as precursors to the Prussian blue pigments. Sodium ferrocyanide is a common anti-caking agent. Specialized applications involve their use as precipitating agents in the production of citric acid and wine.
Research
Ferrocyanide and its oxidized product ferricyanide cannot freely pass through the plasma membrane. For this reason ferricyanide has been used as an extracellular electron acceptor in the study of redox reactions in cells. Ferricyanide is consumed in the process, so any increase in ferrocyanide can be attributed to the secretion of reductants or to transplasma membrane electron transport activity.
Nickel ferrocyanide (Ni2Fe(CN)6) is also used as catalyst in electro-oxidation (anodic oxidation) of urea. Aspirational applications range from hydrogen production for cleaner energy with lower CO2 emission to wastewater treatment.
Ferrocyanide is also studied as an electrolyte in flow batteries.
Nomenclature
According to the recommendations of IUPAC, ferrocyanide should be called "hexacyanidoferrate(II)". Cyanides as a chemical class were named because they were discovered in ferrocyanide. Ferrocyanide in turn was named in Latin to mean "blue substance with iron." The dye Prussian blue had been first made in the early 18th century. The word "cyanide" used in the name is from κύανος kyanos, Greek for "(dark) blue."
Gallery
See also
Ferricyanide
Perls' Prussian blue - a histology stain
Potassium ferrocyanide
Sodium ferrocyanide
References
Cyano complexes
Iron(II) compounds
Anions
Iron complexes
Cyanometallates | Ferrocyanide | Physics,Chemistry | 674 |
38,411 | https://en.wikipedia.org/wiki/Regression%20testing | Regression testing (rarely, non-regression testing) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. If not, that would be called a regression.
Changes that may require regression testing include bug fixes, software enhancements, configuration changes, and even substitution of electronic components (hardware). As regression test suites tend to grow with each found defect, test automation is frequently involved. The evident exception is GUI regression testing, which normally must be executed manually. Sometimes a change impact analysis is performed to determine an appropriate subset of tests (non-regression analysis).
Background
As software is updated or changed, or reused on a modified target, emergence of new faults and/or re-emergence of old faults is quite common.
Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one area inadvertently causes a software bug in another area.
It may happen that when a feature is redesigned some of the same mistakes that were made in the original implementation of the feature also occur in the redesign. In most software development situations, it is considered good coding practice, when a bug is located and fixed, to record a test that exposes the bug and re-run that test regularly after subsequent changes to the program.
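A minimal sketch of such a recorded test, written here with Python's built-in unittest (the parse_price function, its past bug, and the issue number are hypothetical):

```python
import unittest

def parse_price(text):
    """Hypothetical function that once crashed on inputs like '$1,299.00'."""
    return float(text.replace("$", "").replace(",", ""))

class PriceRegressionTest(unittest.TestCase):
    def test_issue_42_thousands_separator(self):
        # Recorded when the bug was fixed; re-run after every change so the
        # defect cannot silently reappear (i.e., regress).
        self.assertEqual(parse_price("$1,299.00"), 1299.00)

if __name__ == "__main__":
    unittest.main()
```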
Although this may be done through manual testing procedures using programming techniques, it is often done using automated testing tools. Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; many projects have automated Continuous integration systems to re-run all regression tests at specified intervals and report any failures (which could imply a regression or an out-of-date test).
Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week. Those strategies can be automated by an external tool.
Regression testing is an integral part of the extreme programming software development method. In this method, design documents are replaced by extensive, repeatable, and automated testing of the entire software package throughout each stage of the software development process. Regression testing is done after functional testing has concluded, to verify that the other functionalities are working.
In the corporate world, regression testing has traditionally been performed by a software quality assurance team after the development team has completed work. However, defects found at this stage are the most costly to fix. This problem is being addressed by the rise of unit testing. Although developers have always written test cases as part of the development cycle, these test cases have generally been either functional tests or unit tests that verify only intended outcomes. Developer testing compels a developer to focus on unit testing and to include both positive and negative test cases.
Techniques
The various regression testing techniques are:
Retest all
This technique checks all the test cases on the current program to check its integrity. Though it is expensive as it needs to re-run all the cases, it ensures that there are no errors because of the modified code.
Regression test selection
Unlike Retest all, this technique runs only a part of the test suite (owing to the cost of Retest all), provided the cost of selecting that part of the test suite is less than that of the Retest all technique.
Test case prioritization
Prioritize the test cases so as to increase a test suite's rate of fault detection. Test case prioritization techniques schedule test cases so that the test cases that are higher in priority are executed before the test cases that have a lower priority.
Types of test case prioritization
General prioritization – Prioritize test cases that will be beneficial on subsequent versions
Version-specific prioritization – Prioritize test cases with respect to a particular version of the software.
Hybrid
This technique is a hybrid of regression test selection and test case prioritization.
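A common baseline for the prioritization techniques above is a greedy "additional coverage" ordering, which repeatedly picks the test covering the most not-yet-covered code. A small Python sketch (the test names and coverage sets are hypothetical):

```python
def prioritize(tests):
    """Greedy ordering: each step picks the test that covers the most
    still-uncovered statements (the 'additional' strategy)."""
    remaining = dict(tests)       # test name -> set of covered statement ids
    covered, order = set(), []
    while remaining:
        name = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(name)
        covered |= remaining.pop(name)
    return order

tests = {
    "test_login":    {1, 2, 3, 4},
    "test_logout":   {3, 4},
    "test_checkout": {5, 6, 7},
    "test_search":   {2, 8},
}
print(prioritize(tests))
# -> ['test_login', 'test_checkout', 'test_search', 'test_logout']
```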
Benefits and drawbacks
Regression testing is performed when changes are made to the existing functionality of the software or if there is a bug fix in the software. Regression testing can be achieved through multiple approaches; if a test all approach is followed, it provides certainty that the changes made to the software have not affected the existing functionalities, which are unaltered.
In agile software development—where the software development life cycles are very short, resources are scarce, and changes to the software are very frequent—regression testing might introduce a lot of unnecessary overhead.
In a software development environment which tends to use black box components from a third party, performing regression testing can be tricky, as any change in the third-party component may interfere with the rest of the system (and performing regression testing on a third-party component is difficult, because it is an unknown entity).
Uses
Regression testing can be used not only for testing the correctness of a program but often also for tracking the quality of its output. For instance, in the design of a compiler, regression testing could track the code size and the time it takes to compile and execute the test suite cases.
Regression tests can be broadly categorized as functional tests or unit tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Both functional testing tools and unit-testing tools tend to be automated and are often third-party products that are not part of the compiler suite. A functional test may be a scripted series of program inputs, possibly even involving an automated mechanism for controlling mouse movements and clicks. A unit test may be a set of separate functions within the code itself or a driver layer that links to the code without altering the code being tested.
See also
Quality control
Test-driven development
References
Software testing
Extreme programming | Regression testing | Engineering | 1,200 |
1,102,836 | https://en.wikipedia.org/wiki/Kuroda%20normal%20form | In formal language theory, a noncontracting grammar is in Kuroda normal form if all production rules are of the form:
AB → CD or
A → BC or
A → B or
A → a
where A, B, C and D are nonterminal symbols and a is a terminal symbol. Some sources omit the A → B pattern.
It is named after Sige-Yuki Kuroda, who originally called it a linear bounded grammar, a terminology that was also used by a few other authors thereafter.
Every grammar in Kuroda normal form is noncontracting, and therefore, generates a context-sensitive language. Conversely, every noncontracting grammar that does not generate the empty string can be converted to Kuroda normal form.
A straightforward technique attributed to György Révész transforms a grammar in Kuroda normal form to a context-sensitive grammar: AB → CD is replaced by four context-sensitive rules AB → AZ, AZ → WZ, WZ → WD and WD → CD. This proves that every noncontracting grammar generates a context-sensitive language.
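A small sketch of this rewriting step in Python, representing each production as a pair of strings (Z and W stand for fresh nonterminals, assumed not to occur elsewhere in the grammar):

```python
def revesz_split(rule, fresh):
    """Replace one Kuroda rule AB -> CD by four context-sensitive rules;
    `rule` is a pair like ("AB", "CD"), `fresh` yields unused nonterminals."""
    (a, b), (c, d) = rule
    z, w = next(fresh), next(fresh)
    return [(a + b, a + z),   # AB -> AZ  (rewrites B in the context A_)
            (a + z, w + z),   # AZ -> WZ  (rewrites A in the context _Z)
            (w + z, w + d),   # WZ -> WD  (rewrites Z in the context W_)
            (w + d, c + d)]   # WD -> CD  (rewrites W in the context _D)

fresh = iter("ZW")            # assumed-unused nonterminal names
for lhs, rhs in revesz_split(("AB", "CD"), fresh):
    print(lhs, "->", rhs)
```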
There is a similar normal form for unrestricted grammars as well, which at least some authors call "Kuroda normal form" too:
AB → CD or
A → BC or
A → a or
A → ε
where ε is the empty string. Every unrestricted grammar is weakly equivalent to one using only productions of this form.
If the rule AB → CD is eliminated from the above, one obtains context-free grammars in Chomsky normal form. The Penttonen normal form (for unrestricted grammars) is a special case where the first rule above is AB → AD. Similarly, for context-sensitive grammars, the Penttonen normal form, also called the one-sided normal form (following Penttonen's own terminology), is:
AB → AD or
A → BC or
A → a
For every context-sensitive grammar, there exists a weakly equivalent one-sided normal form.
See also
Backus–Naur form
Chomsky normal form
Greibach normal form
References
Further reading
G. Révész, "Comment on the paper 'Error detection in formal languages,'" Journal of Computer and System Sciences, vol. 8, no. 2, pp. 238–242, Apr. 1974. (Révész' trick)
Formal languages | Kuroda normal form | Mathematics | 492 |
61,303,201 | https://en.wikipedia.org/wiki/Virginia%20Tucker | Virginia Layden Tucker (1909 – January 19, 1985) was an American mathematician whose work at the National Advisory Committee for Aeronautics (NACA), the precursor to NASA, allowed engineers to design and improve upon airplanes. Tucker was one of the first human computers at the NACA, served as a recruiter for the program, and later worked as an aerodynamicist and an advocate for women in mathematics.
Life and career
Tucker was born in Hertford, North Carolina in 1909. She was the valedictorian of Perquimans High School's first graduating class in 1926 and is an alumna of the North Carolina College for Women where she graduated in 1930 with a B.A. in mathematics and a minor in education.
Tucker spent the next four years as a high school mathematics teacher in her hometown. In 1935, she was recruited to work at the Langley Memorial Aeronautical Laboratory (now Langley Research Center), the main research center for the National Advisory Committee for Aeronautics, at the time. Tucker was one of five women from around the country recruited to be part of Langley's first "computer pool". As human computers, these women were responsible for processing the large amounts of data gathered from flight, wind tunnel, and aeronautical tests conducted at the facility, as the NACA did not have electrical computers at the time.
As World War II began in 1939, the rapid development of aeronautical technologies became a main priority of the U.S. Military and as a result, the demand for human computers at Langley grew rapidly. Tucker traveled across the country (particularly the South) recruiting and training female mathematicians for the program.
In 1946, Tucker was promoted to the position of Overall Supervisor for Computing at Langley overseeing around 400 female "human computers", many of whom she recruited.
In 1948, Tucker left the NACA to become a researcher at the Northrop Corporation. She also became an advocate for women in engineering, serving as director of the Los Angeles Section of the Society of Women Engineers (SWE), as chair of SWE's National Finance Committee from 1955 to 1956, and as SWE's representative to the Los Angeles Technical Societies Council in 1957.
Tucker left Northrop after 17 years and returned to North Carolina where she became the supervisor of a local school system until her retirement in 1974.
Virginia Tucker died on January 19, 1985, at the age of 75.
See also
Hidden Figures
Grace Hopper
Katherine Johnson
Women in computing
References
1909 births
1985 deaths
20th-century American mathematicians
Human computers
National Advisory Committee for Aeronautics
Mathematicians from North Carolina
People from Hertford, North Carolina
University of North Carolina at Greensboro alumni
20th-century American women mathematicians | Virginia Tucker | Technology | 533 |
72,811,144 | https://en.wikipedia.org/wiki/Buchwaldoboletus%20parvulus | Buchwaldoboletus parvulus is a species of bolete fungus in the family Boletaceae native to India. It grows on dead bamboo stumps, has a convex bright yellow cap, yellow to red-brown pores, and a yellow above, reddish below stipe.
Taxonomy and naming
Originally described by Natarajan & Purush. as Pulveroboletus parvulus in 1988, it was given its current name by Ernst Both and Beatriz Ortiz-Santana in A preliminary survey of the genus Buchwaldoboletus, published in the "Bulletin of the Buffalo Society of Natural Sciences" in 2011.
Description
The cap is bright yellow, convex, pulverulent, and can reach 7–13 mm in diameter. The pores are small, and the tubes are adnate, concolorous with the pileus, and 3–4 mm deep. The stipe is very short, excentric, and concolorous with the cap, becoming olive-brown when cut. Natarajan's description does not mention any bluing of the flesh, which is characteristic of the genus Buchwaldoboletus.
Spores measure 5–6 by 3–4 μm.
References
External links
Boletaceae
Fungi described in 1988
Fungi of Asia
Fungus species | Buchwaldoboletus parvulus | Biology | 255 |
36,143,197 | https://en.wikipedia.org/wiki/Claw%20tool | The claw tool (also known as the Hayward Claw Tool) is a forcible entry tool used by firefighters, made of steel, that has a hook on one end and a forked end on the other. The tool was a major component in the Fire Department of New York during the early 20th century. Over the last fifty years, the claw tool has lost prominence due to the advent of newer and more efficient forcible entry tools.
History
The exact origin of the claw tool, which later became the Halligan bar, is not well documented, but according to FDNY folklore, it was discovered by firefighters responding to a fire at a lower Manhattan bank. The fire was started to cover up a burglary, and during the investigation, firefighters found an unusual tool with a claw-like end that the burglars had used to break into the bank. The firefighters believed that if the tool was good enough to break into a bank, it was good enough for their use, so they labeled it the "claw tool" and reproduced it many times over. It became the primary forcible entry tool used by the FDNY and is thought to be the first tool designed solely for that purpose. FDNY Deputy Chief Hugh Halligan later incorporated the fork end of the claw tool into his design of the Halligan bar in 1948.
Design and use
The original claw tool weighed 12 pounds and was approximately 36 inches in length. It was designed with a claw on one end and a tapered fork on the other end.
See also
Halligan bar
Kelly tool
References
Firefighter tools
Hand tools | Claw tool | Engineering | 324 |
15,402,259 | https://en.wikipedia.org/wiki/MAIFI | The Momentary Average Interruption Frequency Index (MAIFI) is a reliability index used by electric power utilities. MAIFI is the average number of momentary interruptions that a customer would experience during a given period (typically a year). Electric power utilities may define momentary interruptions differently, with some considering a momentary interruption to be an outage of less than 1 minute in duration while others may consider a momentary interruption to be an outage of less than 5 minutes in duration.
Calculation
MAIFI is calculated as
Reporting
MAIFI has tended to be less reported than other reliability indicators, such as SAIDI, SAIFI, and CAIDI. However, MAIFI is useful for tracking momentary power outages, or "blinks," that can be hidden or misrepresented by an overall outage duration index like SAIDI or SAIFI.
Causes
Momentary power outages are often caused by transient faults, such as lightning strikes or vegetation contacting a power line, and many utilities use reclosers to automatically restore power quickly after a transient fault has cleared.
Comparisons
MAIFI is specific to the area (power utility, state, region, county, power line, etc.) because of the many variables that affect the measure: high/low lightning, number and type of trees, high/low winds, etc. Therefore, comparing the MAIFI of one power utility to another is not valid and should not be used in this type of benchmarking. It is also difficult to compare this measure of reliability within a single utility: one year may have had an unusually high number of thunderstorms and thus skew any comparison to another year's MAIFI.
References
Electric power
Reliability indices | MAIFI | Physics,Engineering | 338 |
9,787,268 | https://en.wikipedia.org/wiki/Connie%20Eaves | Constance Jean Eaves CorrFRSE (née Halperin; May 22, 1944 – March 7, 2024) was a Canadian biologist with significant contributions to cancer and stem cell research. Eaves was a professor of genetics at the University of British Columbia and was also the co-founder, with Allen C Eaves, of the Terry Fox Laboratory (Vancouver, Canada).
Life and career
In high school, Eaves was interested in becoming a physician but later decided to pursue research due to gender discrimination in medical school acceptance rates.
Eaves received a BA in Biology and Chemistry in 1964 and an MSc in Biology (Genetics) in 1966, working on oncogenic viruses, both from Queen's University. She then pursued doctoral training at the Paterson Laboratories of the Christie Hospital and Holt Radium Institute and obtained a PhD from the University of Manchester in Great Britain in 1969.
Eaves did postdoctoral work on hematopoiesis at the Ontario Cancer Institute in Toronto, Canada, as a member of the research team of James Till and Ernest McCulloch.
After completing her studies, she moved to British Columbia because she was offered an academic position at the University of British Columbia.
Her contributions to the professional and scholarly community include acting as the editor-in-chief of the journal Experimental Hematology, in addition to serving as the president of the National Cancer Institute (Canada), the associate scientific director of the Canadian Stem Cell Network, and president of the International Society of Experimental Hematology.
Connie Eaves died on March 7, 2024, at the age of 79.
Honors and recognition
1993, Fellow of the Royal Society of Canada
2003, Robert L. Noble Prize for Excellence in Cancer Research from the National Cancer Institute of Canada
2008, Donald Metcalf Lecture Award by the International Society for Experimental Hematology
2011, Canadian Blood Services' 2011 Lifetime Achievement Award.
2015, Corresponding Fellow of the Royal Society of Edinburgh
2016, Dr. Chew Wei Memorial Prize in Cancer Research from the University of British Columbia's Faculty of Medicine
2018, American Society of Hematology's E. Donnall Thomas Prize
2018, Tobias Award from the International Society for Stem Cell Research
2019, Canada Gairdner Wightman Award
2019, Inductee, Canadian Medical Hall of Fame
2021, Fellow of the Royal Society; Officer of the Order of Canada
Eaves was also a professor of Medical Genetics and an Associate Member of Medicine and Pathology & Laboratory Medicine at the University of British Columbia.
In recognition of the significant impact Drs. Connie and Allen Eaves have made on global cancer research and treatment over the past 50 years, the BC Cancer Foundation has unveiled the Eaves Stem Cell Assay Laboratory to honor their enduring legacy.
References
External links
List of publications available on PubMed
1944 births
2024 deaths
Alumni of the University of Manchester
Canadian cancer researchers
Fellows of the Royal Society of Canada
Canadian fellows of the Royal Society
Queen's University at Kingston alumni
Stem cell researchers
Officers of the Order of Canada
Members of the National Academy of Medicine
Scientists from British Columbia
Academics from Ottawa
Scientists from Ottawa | Connie Eaves | Biology | 615 |
11,023,988 | https://en.wikipedia.org/wiki/TSX-32 | TSX-32 is a general-purpose 32-bit multi-user multitasking operating system for the x86 architecture platform, with a command-line user interface. It is compatible with some 16-bit DOS applications and supports the FAT16 and FAT32 file systems. It was developed by S&H Computer Systems and has been available since 1989.
DEC-oriented columnist Kevin G. Barkes noted that TSX-32 is "not a port of the PDP-11 TSX-Plus" and that it runs well on 386, 486 and Pentium-based systems. He reported a limitation: since it supports the MS-DOS FAT file system, filenames are 8.3.
TSX-Plus
An earlier non-DEC operating system, also from S&H, was named TSX-Plus. Released in 1980, TSX-Plus was the successor to TSX, released in 1976.
The strength of TSX-Plus was its ability to provide the services of DEC's single-user RT-11 to multiple users simultaneously. Depending on the PDP-11 model and the amount of memory, the system could support a minimum of 12 users (14-18 users on a 2 MB 11/73, depending on workload). A productivity feature called "virtual lines" "allows a single user to control several tasks from a single terminal."
History
S&H wrote the original TSX because "Spending $25K on a computer that could only support one user bugged" founder Harry Sanders; the outcome was the initial four-user TSX in 1976.
For TSX-32, they said in an interview, "We started with a clean sheet of paper" rather than starting with a "port."
As of 2021, it appears to be defunct.
VAX
The company's product line was ported/expanded for the VAX line.
See also
Multiuser DOS Federation
References
External links
TSX-32 official description page
X86 operating systems
DOS variants
1989 software | TSX-32 | Technology | 410 |
17,851,209 | https://en.wikipedia.org/wiki/Conservation%20finance | Conservation finance is the practice of raising and managing capital to support land, water, and resource conservation. Conservation financing options vary by source from public, private, and nonprofit funders; by type from loans, to grants, to tax incentives, to market mechanisms; and by scale ranging from federal to state, national to local.
Conservationists have traditionally relied upon private, philanthropic capital in the form of solicited donations, foundation grants, etc., and public, governmental funds in the form of tax incentives, ballot measures, bonding, agency appropriations, etc., to fund conservation projects and initiatives.
Although governments and philanthropists provide a moderate amount of funds, conservationists believe there is a shortage of the capital required to preserve global ecosystems. As of 2018, they estimated that investors must allocate $300 to $400 billion annually to meet worldwide conservation needs. Of this amount, funders provide only approximately $52 billion per year to conservation finance. Increasingly, conservationists are embracing a broader range of funding and financing options, leveraging traditional "philanthropic and government resources with other sources of capital, including that from the capital markets." These non-traditional sources of conservation capital include debt-financing, emerging tax benefits, private equity investments, and project financing. These additional sources of leverage serve to enlarge the pool of financial capital available to fund conservation work worldwide and, as this financial capital is invested, the asset portfolio of conserved land, water and natural resources is grown.
Government Sources
Debt-for-Nature Swaps
Governments finance various forms of conservation finance. One such method involves establishing debt-for-nature swaps that aid environmental sustainability efforts in developing nations. Originating in the 1980s, this concept allows public and private interests to purchase debt from a developing country. Consequently, that nation's purchased debt is discharged in part or in full. The government then spends the money on domestic conservation projects. While developed nations participate in these transactions, private institutions purchase this debt as well. For example, commercial banks buy this debt and sell the portfolio at discounted prices to other investors or financial firms. Third-party organizations, particularly NGOs, participate in these swaps to secure currency or help develop governmental programs using the newly acquired funds. In 1987, Bolivia successfully implemented the first debt-for-nature swap: the Bolivian government sold $650,000 of its debt for $100,000 and, in exchange, agreed to provide funding for sustainability efforts in Beni's wildlife reserve. Since the world's most indebted nations also contain diverse ecosystems, debt-for-nature swaps draw significant attention towards conservation efforts in the most fragile parts of the biosphere.
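The arithmetic of such a swap is straightforward; the short Python sketch below reproduces the discount in the Bolivian example using the figures quoted above:

```python
face_value = 650_000   # USD of Bolivian debt retired
price_paid = 100_000   # USD paid by the conservation buyers

discount = 1 - price_paid / face_value
print(f"Debt bought at {price_paid / face_value:.1%} of face value "
      f"(an {discount:.1%} discount)")
# -> Debt bought at 15.4% of face value (an 84.6% discount)
```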
Foreign Aid
Foreign aid is instrumental in implementing global conservation finance efforts. USAID is a federal agency within the United States committed to foreign aid that emphasizes conservation for developmental purposes. The agency allocates $200 million per year towards worldwide efforts to conserve species. One focus is developing conservation zones, particularly in coastal wetlands. These zones preserve fish species, thus strengthening both the local ecosystem and the fishing industry's profitability. By directly providing resources to countries, foreign aid helps to facilitate conservation finance projects.
Private sector sources
Climate business
Climate business is a private-sector strategy for conservation finance that some organizations advocate. It would have businesses adopt clean technologies and services that promote efficiency standards, managing capital and using those funds to implement multiple business practices. Examples include investing in low-carbon energy generation for office buildings; such infrastructure would drastically reduce greenhouse gas emissions, carbon dioxide in particular. According to the World Bank Group, climate business would require accurate and scalable models to address a firm's environmental impact. In order for such models to remain relevant to firms, it is suggested that businesses remain cognizant of solutions throughout the global markets. One group that advocates this private-sector strategy is the International Finance Corporation (IFC), a member of the World Bank Group that facilitates private-sector investment in developing nations. The institution argues that such an approach should function on a global scale. According to the IFC, widespread adoption of climate business would lead to decreasing technological costs and favorable financial incentives for both the developing and developed world.
Payment for ecosystem services (PES)
A payment for ecosystem services (PES) broadly refers to any payment that is aimed to incentivize conserving and restoring ecological systems. These systems could include any ecosystem, such as a river or forest, that facilitates vital environmental processes. For instance, forests serve multiple functions in this regard. They provide environmental goods, such as food, facilitate nutrient cycling and other biological processes. Due to environmental degradation, these ecological systems are threatened. PES is a form of conservation finance that rewards people for maintaining these ecosystem services, often using financial incentives. In order to facilitate these transactions, the service provider must clearly define the service and secure an ecosystem which needs those particular resources. In addition, service purchasers carefully monitor the providers to ensure that conversation is efficiently carried out.
Many developing countries implement this market-based mechanism to address conservation needs in different ways. Nations that rely heavily on PES to improve conservation efforts include Vietnam, Brazil and Costa Rica. Parties in developing countries can facilitate PES in a variety of market types. Some PES markets exist with little to no regulations in place. Without a formal regulatory system, buyers must negotiate directly with sellers to obtain reasonable terms. Consequently, all PES transactions in these voluntary markets are priced and paid for privately. Formal regulatory markets require that legislators in respective countries determine how PES transactions are implemented. For instance, regulatory caps are placed on investments in specific forms of conservation. Buyers and sellers in the PES market are also strictly defined in legislation. While private parties are still encouraged to negotiate with each other, this formal system mandates legal boundaries intended to protect both buyers and sellers. Since the 1990s, Costa Rica has experimented with using PES to preserve the nation's ecosystems. Costa Rica uses a unique system in which the government pays service providers directly. Service providers are often farmers who own substantial properties containing forests and other sites that require conservation. However, many believe that these public funds should not disproportionately go to wealthy farmers and private companies. Instead, they conclude that the Costa Rican government should enable more service providers who live in poverty to compete and receive compensation.
Green bonds
Green bonds are liquid investment vehicles that raise capital for conservation efforts and environmentally stable practices in general. Investors commit their capital to these bonds and the money is then allocated towards green initiatives. Investors range from private corporations and firms to municipalities and even state governments. Conservation efforts include preserving endangered watersheds and rainforests. Over time, the investors would hypothetically receive a profitable return from these initial investments. Many financial professionals argue that these green bonds symbolize a historic shift from investing in fossil fuel-based industry to climate change mitigation. They speculate that this would attract more investors and create more diversified portfolios among this base. The first Green Bond initiative was San Francisco's solar bonds authority to finance both conservation and local renewables, placed on the ballot and approved by voters in 2001 and incorporated into its Community Choice Aggregation program. In the late 2000s, the World Bank Treasury and the IFC pioneered these investments. In 2013, the IFC issued about $3.7 billion worth of green bonds to the private sector. Green bonds also consistently achieve high security ratings from bond rating agencies. For instance, the bonds issued by the IFC and the World Bank generally receive AAA/Aaa ratings. This indicates a high level of quality and security for investors who seek to enter this market.
See also
Conservation movement
Payment for ecosystem services
Public bonds
Debt-for-Nature Swap
Conservation easement
Ballot initiative
References
Bibliography
External links
The Conservation Finance Network
Environmental economics
Environmentalism | Conservation finance | Environmental_science | 1,579
16,379,875 | https://en.wikipedia.org/wiki/Sun-Earth%20Day | Sun-Earth Day is a joint educational program established in 2000 by NASA and ESA. The goal of the program is to popularize the knowledge about the Sun, and the way it influences life on Earth, among students and the public. The day itself is mainly celebrated in the United States near the time of the spring equinox. However, the Sun-Earth Day event actually runs throughout the year, with a different theme being chosen each year.
Themes
The selection of each year's theme often corresponds to events for that year. Every theme is supported by free educational plans for both informal and formal educators. Here is a list of themes by year:
References
External links
Sun-Earth Day home page at NASA
Unofficial observances
March observances | Sun-Earth Day | Astronomy | 152 |
78,086,135 | https://en.wikipedia.org/wiki/REBELS-25 | REBELS-25 is a massive, star-forming rotating disc galaxy with a redshift of 7.31.
It was discovered using the Atacama Large Millimeter/submillimeter Array (ALMA); notice of its discovery was published in the Monthly Notices of the Royal Astronomical Society. REBELS-25 existed just 700 million years after the Big Bang.
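The quoted age follows directly from the redshift; a quick cross-check with astropy's built-in Planck 2018 cosmology (the choice of cosmology is an assumption, and slightly different parameters shift the figure by a few per cent):

```python
from astropy.cosmology import Planck18

z = 7.31                      # redshift of REBELS-25
age = Planck18.age(z)         # age of the universe at that redshift
print(f"Age of the universe at z = {z}: {age.to('Myr'):.0f}")
# -> roughly 700 Myr after the Big Bang, matching the figure above
```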
The discovery of such an ancient, orderly galaxy makes it the most distant strongly rotation-dominated disc galaxy known, and it is another piece of mounting evidence that suggests cosmologists need to revise their previous notions of galactic evolution.
REBELS-25 is very complex compared to what was expected for a galaxy of its age: researchers discovered that it rotates (detected through the blueshift and redshift of its emission), and it also shows traces of spiral arms, as "modern" galaxies like the Milky Way have. Lucie Rowland, lead author of the REBELS-25 discovery paper, said about this ancient galaxy that "Seeing a galaxy with such similarities to our own Milky Way, that is strongly rotation-dominated, challenges our understanding of how quickly galaxies in the early Universe evolve into the orderly galaxies of today's cosmos".
References
Galaxies
Sextans
James Webb Space Telescope
Astronomical objects discovered in 2024
Galaxies discovered in 2024 | REBELS-25 | Astronomy | 254 |
39,192,470 | https://en.wikipedia.org/wiki/Soluble%20adenylyl%20cyclase | Soluble adenylyl cyclase (sAC) is a regulatory cytosolic enzyme present in almost every cell. sAC is a source of cyclic adenosine 3’,5’ monophosphate (cAMP) – a second messenger that mediates cell growth and differentiation in organisms from bacteria to higher eukaryotes. sAC differs from the transmembrane adenylyl cyclases (tmACs) – another important source of cAMP – in that sAC is regulated by bicarbonate anions and is dispersed throughout the cell cytoplasm. sAC has been found to have various functions in physiological systems distinct from those of the tmACs.
Genomic context and summary
sAC is encoded in a single Homo sapiens gene identified as ADCY10 or Adenylate cyclase 10 (soluble). This gene packs 33 exons into more than 100 kb; it appears to utilize multiple promoters, and its mRNA undergoes extensive alternative splicing.
Structure
The functional mammalian sAC consists of two heterologous catalytic domains (C1 and C2), forming the 50 kDa amino terminus of the protein. The additional ~140 kDa C terminus of the enzyme includes an autoinhibitory region, a canonical P-loop, a potential heme-binding domain, and a leucine zipper-like sequence, which serve as putative regulatory domains.
A truncated form of the enzyme only includes the C1 and C2 domains and is referred to as the minimal functional sAC variant. This truncated form has cAMP-forming activity much higher than that of the full-length enzyme. These sAC variants are stimulated by HCO3− and respond to all known selective sAC inhibitors. Crystal structures of this sAC variant comprising only the catalytic core, in apo form and in complex with various substrate analogs, products, and regulators, reveal a generic Class III AC architecture with sAC-specific features. The structurally related domains C1 and C2 form the typical pseudo-heterodimer, with one active site. The pseudo-symmetric site accommodates the sAC-specific activator HCO3−, which activates by triggering a rearrangement of Arg176, a residue connecting both sites. The anionic sAC inhibitor 4,4′-diisothiocyanatostilbene-2,2′-disulfonic acid (DIDS) acts as a blocker of the entrance to the active site and the bicarbonate binding pocket.
Activation by bicarbonate (HCO−3) and calcium (Ca2+)
The binding and cyclizing of adenosine 5’ triphosphate (ATP) at the catalytic active site of the enzyme is coordinated by two metal cations. The catalytic activity of sAC is increased by the presence of manganese [Mn2+]. Its magnesium [Mg2+]-dependent activity is regulated by calcium [Ca2+], which increases the affinity of mammalian sAC for ATP. In addition, bicarbonate [HCO−3] relieves ATP-Mg2+ substrate inhibition and increases the Vmax of the enzyme.
The open conformation state of sAC is reached when ATP, with Ca2+ bound to its γ-phosphate, binds to specific residues in the catalytic center of the enzyme. When the second metal – a Mg2+ ion – binds to the α-phosphate of ATP, it leads to a conformational change of the enzyme: the closed state. The change in conformation from the open to the closed state induces esterification of the α-phosphate with the ribose in adenosine and the release of the β- and γ-phosphates; this leads to cyclizing. Hydrogencarbonate stimulates the enzyme's Vmax by promoting the allosteric change that leads to active site closure, recruitment of the catalytic Mg2+ ion, and readjustment of the phosphates in the bound ATP. The activator bicarbonate binds to a site pseudo-symmetric to the active site and triggers conformational changes by recruiting Arg176 from the active site (see above - "structure"). Calcium increases substrate affinity by replacing the magnesium in the ion B site, which provides an anchoring point for the beta- and gamma-phosphates of the ATP substrate.
Sources of bicarbonate (HCO−3)and calcium (Ca2+)
Bicarbonate derived from carbonic anhydrase (CA)-dependent hydration of CO2
CO2 metabolism
Bicarbonate entry through membrane-transporting proteins or cystic fibrosis transmembrane conductance regulators
Calcium enters by voltage-dependent Ca2+ channels or by release from the endoplasmic reticulum.
Hydrogencarbonate and calcium activates sAC in the nucleus.
sAC inside mitochondria is activated by metabolically generated CO2 through carbonic anhydrase.
Physiological effects
Brain and nervous system
Astrocytes express several sAC splice variants, which are involved in metabolic coupling between neurons and astrocytes. An increase of potassium [K+] in the extracellular space caused by neuronal activity depolarizes the cell membrane of nearby astrocytes and facilitates the entry of hydrogencarbonate through Na+/HCO−3 cotransporters. The increase in cytosolic hydrogencarbonate activates sAC; the result of this activation is the release of lactate for use as an energy source by the neurons.
Bone
Numerous sAC splice variants are present in osteoclasts and osteoblasts, and mutation in the human sAC gene is associated with low spinal bone density. Calcification by osteoblasts is intrinsically related to bicarbonate and calcium. Bone density experiments in cultured mouse calvaria indicate that HCO−3-sensing sAC is a physiologically relevant regulator of bone formation and/or resorption.
Sperm
sAC activation by bicarbonate is necessary for motility and other aspects of capacitation in the spermatozoa of mammals. In human males, mutations in the ADCY10 gene that lead to the inactivation of sAC have been linked to cases of sterility. Due to this essential role in male fertility, sAC has been explored as a potential target for non-hormonal male contraception.
References
Further reading
Signal transduction
Cell signaling
G protein-coupled receptors
Protein kinases | Soluble adenylyl cyclase | Chemistry,Biology | 1,293 |
627,828 | https://en.wikipedia.org/wiki/Double%20Mersenne%20number | In mathematics, a double Mersenne number is a Mersenne number of the form

$$M_{M_p} = 2^{2^p-1}-1$$

where p is prime.
Examples
The first four terms of the sequence of double Mersenne numbers are:

$M_{M_2} = M_3 = 7$
$M_{M_3} = M_7 = 127$
$M_{M_5} = M_{31} = 2147483647$
$M_{M_7} = M_{127} = 170141183460469231731687303715884105727$
Double Mersenne primes
A double Mersenne number that is prime is called a double Mersenne prime. Since a Mersenne number Mp can be prime only if p is prime (see Mersenne prime for a proof), a double Mersenne number $M_{M_p}$ can be prime only if Mp is itself a Mersenne prime. For the first values of p for which Mp is prime, $M_{M_p}$ is known to be prime for p = 2, 3, 5, 7, while explicit factors of $M_{M_p}$ have been found for p = 13, 17, 19, and 31.
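The four known double Mersenne primes are small enough to verify directly with the Lucas–Lehmer test; a short Python sketch:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s_{p-2} == 0, where s_0 = 4 and s_{k+1} = s_k**2 - 2 (mod M_p)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in (2, 3, 5, 7):
    mp = 2**p - 1            # the inner Mersenne prime M_p (3, 7, 31, 127)
    print(f"p = {p}: M_(M_{p}) = 2**{mp} - 1 is prime: {lucas_lehmer(mp)}")
```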
Thus, the smallest candidate for the next double Mersenne prime is $M_{M_{61}}$, or $2^{2305843009213693951} - 1$. Being approximately $1.695 \times 10^{694127911065419641}$, this number is far too large for any currently known primality test. It has no prime factor below $1 \times 10^{36}$.
There are probably no other double Mersenne primes than the four known.
The smallest prime factors of $M_{M_{p_n}}$ (where $p_n$ is the nth prime) are:
7, 127, 2147483647, 170141183460469231731687303715884105727, 47, 338193759479, 231733529, 62914441, 2351, 1399, 295257526626031, 18287, 106937, 863, 4703, 138863, 22590223644617, ... (next term is > $1 \times 10^{36}$)
Catalan–Mersenne number conjecture
The recursively defined sequence

$c_0 = 2, \qquad c_{n+1} = 2^{c_n} - 1 = M_{c_n}$

is called the sequence of Catalan–Mersenne numbers. The first terms of the sequence are:

$c_0 = 2,\; c_1 = 3,\; c_2 = 7,\; c_3 = 127,\; c_4 = 170141183460469231731687303715884105727,\; \ldots$
Catalan discovered this sequence after the discovery of the primality of $M_{127} = c_4$ by Lucas in 1876. Catalan conjectured that they are prime "up to a certain limit". Although the first five terms are prime, no known methods can prove that any further terms are prime (in any reasonable time) simply because they are too huge. However, if $c_5$ is not prime, there is a chance to discover this by computing $c_5$ modulo some small prime $p$ (using recursive modular exponentiation). If the resulting residue is zero, $p$ represents a factor of $c_5$ and thus would disprove its primality. Since $c_5$ is a Mersenne number, such a prime factor $p$ would have to be of the form $2kc_4 + 1$. Additionally, because $2^n - 1$ is composite when $n$ is composite, the discovery of a composite term in the sequence would preclude the possibility of any further primes in the sequence.
If $c_5$ were prime, it would also contradict the New Mersenne conjecture. It is known that $(2^{c_4} + 1)/3$ is composite, with an explicitly known factor.
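The factor search described above can be sketched in Python. This is an illustrative toy, not the actual search software: the primality test from sympy and the small search bound are arbitrary choices, and real searches cover far larger, heavily optimized ranges.

```python
from sympy import isprime  # illustrative choice of primality test

c4 = 2**127 - 1  # the Mersenne prime M_127, proved prime by Lucas in 1876

def search_factor_of_c5(max_k: int):
    """Trial-factor c5 = 2**c4 - 1 over the only admissible candidates
    for a Mersenne-number factor, q = 2*k*c4 + 1. A prime q divides c5
    exactly when 2**c4 is congruent to 1 (mod q), which pow() evaluates
    by fast (recursive) modular exponentiation."""
    for k in range(1, max_k + 1):
        q = 2 * k * c4 + 1
        if isprime(q) and pow(2, c4, q) == 1:
            return q  # a factor of c5, disproving its primality
    return None

print(search_factor_of_c5(10_000))  # prints None: no factor in this range
```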
In popular culture
In the Futurama movie The Beast with a Billion Backs, the double Mersenne number $M_{M_7}$ is briefly seen in "an elementary proof of the Goldbach conjecture". In the movie, this number is known as a "Martian prime".
See also
Cunningham chain
Double exponential function
Fermat number
Perfect number
Wieferich prime
References
Further reading
External links
Tony Forbes, A search for a factor of MM61.
Status of the factorization of double Mersenne numbers
Double Mersennes Prime Search
Operazione Doppi Mersennes
Eponymous numbers in mathematics
Integer sequences
Large integers
Unsolved problems in number theory
Mersenne primes | Double Mersenne number | Mathematics | 719 |
24,697,544 | https://en.wikipedia.org/wiki/Onset%20of%20action | Onset of action is the duration of time it takes for a drug's effects to come to prominence upon administration. With oral administration, it typically ranges anywhere from 20 minutes to over an hour, depending on the drug in question. Other routes of administration, such as smoking or injection, can take as little as seconds to minutes to take effect. The determination of the onset of action, however, is not completely dependent upon route of administration. There are several other factors that determine the onset of action for a specific drug, including drug formulation, dosage, and the patient receiving the drug.
Effect of Administration Route on the Onset of Action
A drug's pharmacological effects can only occur once it has been fully solubilized and has entered the bloodstream. For most drugs administered orally, the drug must be ingested, pass through the stomach, and enter the small intestine, where the drug molecules enter the bloodstream through the villi and microvilli. A few drugs, such as alcohol, are absorbed by the lining of the stomach, and therefore tend to take effect much more quickly than the vast majority of oral medications, which are absorbed in the small intestine. Gastric emptying time can vary from 0 to 3 hours, and therefore plays a major role in the onset of action for orally administered drugs. For intravenous administration, the pathway is much shorter because the drug is administered (usually already in solution) directly into the bloodstream.
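To make the route dependence concrete, here is a hedged sketch using the standard one-compartment model with first-order absorption (the Bateman function); the dose, rate constants, lag time, and effective-concentration threshold below are invented for illustration and are not taken from any specific drug:

```python
import numpy as np

def concentration_oral(t, dose=100.0, F=0.9, V=50.0, ka=1.5, ke=0.2, t_lag=0.5):
    """Plasma concentration after an oral dose (Bateman function).
    t_lag crudely models gastric emptying before absorption begins;
    all parameter values here are hypothetical."""
    t_eff = np.maximum(t - t_lag, 0.0)
    coeff = (F * dose * ka) / (V * (ka - ke))
    return coeff * (np.exp(-ke * t_eff) - np.exp(-ka * t_eff))

def onset_time(mec=0.5, dt=0.01):
    """First time the concentration crosses a minimum effective
    concentration (MEC) -- one operational definition of onset."""
    for t in np.arange(0.0, 24.0, dt):
        if concentration_oral(t) >= mec:
            return t
    return None

print(f"onset of action ~ {onset_time():.2f} h after the oral dose")
```

Shortening the lag time or increasing the absorption rate constant ka (as with an intravenous route, where absorption is nearly instantaneous) moves the onset correspondingly earlier.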
References
Pharmacokinetic metrics | Onset of action | Chemistry | 307 |
273,329 | https://en.wikipedia.org/wiki/Catenoid | In geometry, a catenoid is a type of surface, arising by rotating a catenary curve about an axis (a surface of revolution). It is a minimal surface, meaning that it occupies the least area when bounded by a closed space. It was formally described in 1744 by the mathematician Leonhard Euler.
Soap film attached to twin circular rings will take the shape of a catenoid. Because they are members of the same associate family of surfaces, a catenoid can be bent into a portion of a helicoid, and vice versa.
Geometry
The catenoid was the first non-trivial minimal surface in 3-dimensional Euclidean space to be discovered apart from the plane. The catenoid is obtained by rotating a catenary about its directrix. It was found and proved to be minimal by Leonhard Euler in 1744.
Early work on the subject was published also by Jean Baptiste Meusnier. There are only two minimal surfaces of revolution (surfaces of revolution which are also minimal surfaces): the plane and the catenoid.
The catenoid may be defined by the following parametric equations:

$x = c \cosh\left(\frac{v}{c}\right) \cos u, \qquad y = c \cosh\left(\frac{v}{c}\right) \sin u, \qquad z = v,$

where $u \in [-\pi, \pi)$, $v \in \mathbb{R}$, and $c$ is a non-zero real constant.
In cylindrical coordinates:

$\rho = c \cosh\left(\frac{z}{c}\right),$

where $c$ is a real constant.
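A short numerical sketch of this parametrization (with NumPy; the sampling grid is arbitrary) confirms that every horizontal cross-section is a circle whose radius follows the catenary $c \cosh(z/c)$:

```python
import numpy as np

def catenoid_point(u, v, c=1.0):
    """Point (x, y, z) on the catenoid, following the parametrization above."""
    x = c * np.cosh(v / c) * np.cos(u)
    y = c * np.cosh(v / c) * np.sin(u)
    z = v
    return x, y, z

u, v = np.meshgrid(np.linspace(-np.pi, np.pi, 60), np.linspace(-2.0, 2.0, 30))
x, y, z = catenoid_point(u, v)
print(np.allclose(np.hypot(x, y), np.cosh(z)))  # True: radius = cosh(z) for c = 1
```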
A physical model of a catenoid can be formed by dipping two circular rings into a soap solution and slowly drawing the circles apart.
The catenoid may also be defined approximately by the stretched grid method as a faceted 3D model.
Helicoid transformation
Because they are members of the same associate family of surfaces, one can bend a catenoid into a portion of a helicoid without stretching. In other words, one can make a (mostly) continuous and isometric deformation of a catenoid to a portion of the helicoid such that every member of the deformation family is minimal (having a mean curvature of zero). A parametrization of such a deformation is given by the system

$x(u,v) = \cos\theta \,\sinh v \,\sin u + \sin\theta \,\cosh v \,\cos u$
$y(u,v) = -\cos\theta \,\sinh v \,\cos u + \sin\theta \,\cosh v \,\sin u$
$z(u,v) = u \cos\theta + v \sin\theta$

for $(u,v) \in (-\pi, \pi] \times (-\infty, \infty)$, with deformation parameter $-\pi < \theta \le \pi$, where:
$\theta = \pi$ corresponds to a right-handed helicoid,
$\theta = \pm\pi/2$ corresponds to a catenoid, and
$\theta = 0$ corresponds to a left-handed helicoid.
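The deformation family can be evaluated numerically as follows; the check at $\theta = \pi/2$ recovers the catenoid parametrization given earlier (with $c = 1$):

```python
import numpy as np

def associate_family(u, v, theta):
    """Isometric deformation between helicoid and catenoid:
    theta = pi     -> right-handed helicoid
    theta = +-pi/2 -> catenoid
    theta = 0      -> left-handed helicoid"""
    x = np.cos(theta) * np.sinh(v) * np.sin(u) + np.sin(theta) * np.cosh(v) * np.cos(u)
    y = -np.cos(theta) * np.sinh(v) * np.cos(u) + np.sin(theta) * np.cosh(v) * np.sin(u)
    z = u * np.cos(theta) + v * np.sin(theta)
    return x, y, z

u, v = 0.7, 1.3
x, y, z = associate_family(u, v, np.pi / 2)
print(np.isclose(x, np.cosh(v) * np.cos(u)), np.isclose(z, v))  # True True
```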
References
Further reading
External links
Catenoid – WebGL model
Euler's text describing the catenoid at Carnegie Mellon University
Calculating the surface area of a Catenoid
Minimal Surface of Revolution
Minimal surfaces | Catenoid | Chemistry | 481 |
66,936,728 | https://en.wikipedia.org/wiki/Electromaterials | In physics, electrical engineering and materials science, electromaterials are the set of materials which store, controllably convert, exchange and conduct electrically charged particles. The term electromaterial can refer to any electronically or ionically active material. While this definition is quite broad, the term is typically used in the context of properties and/or applications in which atomic electronic transition is pertinent. The word electromaterials is a compound of the Ancient Greek ἤλεκτρον (ēlektron, "amber") and the Latin materia ("matter").
Properties
Electromaterials enable the transport of charged species (electrons and/or ions) as well as facilitate the exchange of charge to other materials. For atomic and molecular systems, this is observed as atomic electronic transition between discrete orbitals, while for bulk semiconductor materials electronic bands determine which transitions may occur. Metals, in which the conduction band is permanently populated, may also be considered electromaterials, although they typically fall outside the usual scope of the term, in contrast to other conduction mechanisms such as those of a degenerate semiconductor (transparent conductive oxides) or polaron hopping (organic conductors). Materials which can be ionised (i.e. have electrons either added or stripped away) may also be considered electronically active.
Electromaterials have a number of properties broadly, including:
Opto-electronic properties
Photoelectric properties
Exotic phenomena such as super-conductive properties
Partial charge transfer: adsorption of species leading to changes in the electronic properties of the material
Ion conductive materials
Applications
In the application of electromaterials, ions or electrons are used to carry out a specific function, for example the oxidation or reduction (loss or gain of electrons, respectively) of another species. Materials such as metals, metal particles, conducting polymers, conducting carbon, e.g. CNTs, graphene, carbon fibres, electrodes, electrolytes, electrocatalysts, light harvesting materials (e.g. dyes for DSSCs) find applications in which electromaterials are critical to their functionality:
Batteries
Super-capacitors
Fuel cells
Photovoltaics
Artificial muscles
Chemical sensors
LEDs
Energy conversion/storage devices
Systems that interact with living tissue and soft robotics (prosthetics)
Characterisation
Electromaterials can be explored by techniques such as (but not limited to) absorption spectroscopy, photoluminescence spectroscopy, electrochemistry, FTIR, Raman spectroscopy, or combinations of the above, such as Raman spectroelectrochemistry.
See also
Metal
Electrolyte
Electrical conductor
Piezoelectricity
References
Electricity
Materials science | Electromaterials | Physics,Materials_science,Engineering | 544 |
22,221,716 | https://en.wikipedia.org/wiki/Scholander%20pressure%20bomb | A pressure bomb, pressure chamber, or Scholander bomb is an instrument that can measure the approximate water potential of plant tissues. A leaf and petiole or stem segment is placed inside a sealed chamber. Pressurized gas (normally compressed nitrogen) is slowly added to the chamber. As the pressure increases, at some point the liquid contents of the sample will be forced out of the xylem and will be visible at the cut end of the stem or petiole. The pressure that is required to do so is equal and opposite to the water potential of the sample (Ψleaf or Ψtotal). Pressure bombs are field portable and mechanically simple, which make them the predominant method for water potential measurements in the fields of plant physiology and ecophysiology.
Measurements
Several water potential variables can be determined using pressure bomb analysis. The most common of these are predawn leaf water potential and midday leaf water potential. Measurements conducted on plants predawn are considered a good representation of the total water status of the plant. As no transpiration through stomata should be occurring at night, the plant's water potentials should be in equilibrium across the entire plant and be similar to the water potential of the soil around the roots. Midday leaf water potential is less commonly used as it is more variable and does not correlate well with other physiological measurements of water status. However, midday water potentials can be used to determine times of peak water stress or diurnal changes in plant water status. Additional variables and methods that involve pressure bombs for analysis include: stem conductance, xylem embolisms, and vulnerability curves.
Pressure-volume Curves
A more advanced method that uses the pressure bomb in plant physiology is pressure-volume curve (p-v curve) analysis. Through this method one measures the changes in leaf or stem water potential and relative water content to isolate the underlying components of total leaf or stem water potential. While the measurements can be time intensive, variables such as solute potential (Ψs), turgor loss point (Ψtlp), apoplastic water content and symplastic water content can all be determined using this method. The general protocol for measuring p-v curves involves repeated measurements of water potential and mass in succession. As water is forced out of the sample with each measurement in the pressure bomb, the mass is also reduced. Tracking these changes over many measurements should show a precipitous drop and then a steady linear decline after an inflection point.
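A hedged numerical sketch of this analysis follows; the measurements, the dry and turgid masses, and the choice of which points lie on the post-turgor-loss linear segment are all invented for illustration:

```python
import numpy as np

# Repeated pressure-bomb measurements of one leaf (hypothetical data):
psi = np.array([-0.5, -0.8, -1.1, -1.5, -1.9, -2.3, -2.8])   # water potential, MPa
mass = np.array([1.20, 1.17, 1.14, 1.11, 1.08, 1.05, 1.02])  # leaf mass, g
dry_mass, turgid_mass = 0.40, 1.22                            # assumed reference masses

rwc = (mass - dry_mass) / (turgid_mass - dry_mass)  # relative water content
inv_psi = -1.0 / psi

# After the turgor loss point, -1/psi declines linearly with RWC; fitting
# that segment and extrapolating to full turgor (RWC = 1) estimates the
# osmotic (solute) potential at full turgor.
slope, intercept = np.polyfit(rwc[-4:], inv_psi[-4:], 1)
psi_s_full_turgor = -1.0 / (slope * 1.0 + intercept)
print(f"estimated osmotic potential at full turgor: {psi_s_full_turgor:.2f} MPa")
```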
References
Further reading
Plant physiology | Scholander pressure bomb | Biology | 513 |
2,656,649 | https://en.wikipedia.org/wiki/Chi1%20Sagittarii |
Chi1 Sagittarii (χ1 Sagittarii) is a binary star system in the zodiac constellation of Sagittarius. The pair have a combined apparent visual magnitude of +5.03, which is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 12.95 mas as seen from Earth, it is located around 252 light years from the Sun. It is advancing through space in the general direction of the Earth with a radial velocity of −43.4 km/s.
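The quoted distance follows directly from the parallax (1 pc = 1/parallax in arcseconds, 1 pc ≈ 3.2616 ly), as this quick check shows:

```python
# Distance from the annual parallax quoted above.
parallax_mas = 12.95
distance_pc = 1000.0 / parallax_mas       # ~ 77.2 parsecs
distance_ly = distance_pc * 3.26156       # ~ 252 light years
print(f"{distance_ly:.0f} ly")
```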
This is a visual binary with an orbital period of 5.72 years, an eccentricity of 0.710, and an angular semimajor axis of 69 mas. The primary, component A, is an A-type star showing a mixed spectrum that matches a stellar classification of A3/5 IV/V. Helmut Abt classified it as an Am star with a spectral type of kA5hF0VmF0. This notation indicates it has the calcium K-lines of an A5 star, and the hydrogen and metal lines of an F0 star. It is around 393 million years old and is spinning with a projected rotational velocity of 54 km/s. The star has an estimated 1.6 times the mass of the Sun and is radiating 42.9 times the Sun's luminosity from its photosphere at an effective temperature of 7,859 K.
References
A-type main-sequence stars
Binary stars
Sagittarii, Chi01
Sagittarius (constellation)
Durchmusterung objects
Sagittarii, 47
182369
095477
7362 | Chi1 Sagittarii | Astronomy | 354 |
26,403,720 | https://en.wikipedia.org/wiki/Geniom%20RT%20Analyzer | Geniom RT Analyzer is an instrument used in molecular biology for diagnostic testing. The Geniom RT Analyzer utilizes the dynamic nature of tissue microRNA levels as a biomarker for disease progression. The Geniom analyzer incorporates microfluidic and biochip microarray technology in order to quantify microRNAs via a Microfluidic Primer Extension Assay (MPEA) technique.
Background
Many human diseases such as cancer are thought to be involved at a molecular level with dynamic microRNA levels. microRNAs are important in regulating gene expression and can therefore have important implications on the activation or inactivation of oncogenes or tumour suppressor genes respectively. Certain microRNAs have been detected at differing levels throughout the progression of particular types of cancer. Early diagnostic testing has proved to be a challenge for many human diseases, as symptomatic phenotypes can be either ambiguous or subtle in nature. As a result, an extensive research area dedicated to the characterization of molecular biomarkers has blossomed. Biomarkers allow for the accurate measurement of biological molecules at various disease stages; these measurements will eventually contribute to an abundant data source useful for future early diagnostic testing. The Geniom RT Analyzer carries out automated microRNA biomarker profiling that will in turn contribute to this growing data source.
The Biology
Detection of microRNA levels can be useful over space, over time, or over a combination of the two.
Spatial microRNA dynamics:
Recent discoveries have shown that free-floating microRNAs can be detected in the blood. These free-floating microRNAs are protected from endogenous RNase activity, in contrast to microRNAs within the cells of tissues. This protection renders free-floating microRNAs suitable as stable biomarkers. Such a stable biomarker can be used as a control when comparing microRNA levels between tissues. Comparing microRNA levels, and thus assessing expression patterns across tissues, can be used in a variety of applications such as cancer and disease classification.
Temporal microRNA dynamics:
A complication in designing and developing tumour-suppressor drugs for cancer patients is that the molecular environment within and around the tumour does not seem to be constant throughout the process of tumour development. MicroRNAs are one class of molecules that contribute to this environmental fluctuation around the tumour. Detecting microRNA levels in tumour or normal tissues over time can help to gain insight into the nature of these environmental changes. This information can in turn be used to decide the proper timing for various therapeutic interventions.
Work-Flow
The platform for the Geniom RT Analyzer consists of a biochip containing 8 separate microarrays. Tissue-derived microRNAs do not require treatment prior to introduction to the biochip. The biochip contains validated and optimized customizable capture probes, and the on-chip Microfluidic Primer Extension Assay (MPEA) enables direct microRNA-capture probe hybridization. After biotin labelling, primer extension and washing, the Geniom Analyzer undergoes automated processing of the arrays. A charge-coupled device (CCD) camera assists the biochip readout, which displays a pictorial image of the microRNA quantification. This image is supported by the use of biotinylated nucleotides and subsequent staining with a streptavidin-phycoerythrin conjugate. Due to the use of 8 separate microarrays per biochip, seven replicate intensity readings are made available, and the median value is generally applied to the graphical results. An analysis of biomarker assessment is then made, and data can be stored in the miRDBXP database.
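The replicate-aggregation step can be sketched as follows; the array dimensions and intensity values are invented, since the instrument's actual data format is not described in detail here:

```python
import numpy as np

# Hypothetical readout: rows = 7 replicate intensity readings,
# columns = the microRNA capture probes on the biochip.
rng = np.random.default_rng(42)
intensities = rng.normal(loc=1000.0, scale=50.0, size=(7, 800))

# The median across replicates is the value applied to the results;
# it is robust to a single outlier array.
median_signal = np.median(intensities, axis=0)
print(median_signal.shape)  # one aggregated intensity per capture probe
```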
Innovation
MPEA: the MPEA technique utilizes microfluidic technology to ensure the correct timing, and consequently the correct alignment, of the capture probes with the microRNA molecules. Conventional methods of probe-microRNA hybridization require prior treatment of microRNAs with potentially costly reagents. These conventional methods also require initial introduction of biotin prior to hybridization. Due to the application of microfluidics, neither initial biotin labelling nor reagent treatment is necessary prior to primer extension. Rather, the unlabelled, hybridized microRNA behaves as the primer for enzymatic elongation, a process in which biotinylated nucleotides are assembled.
References
Molecular biology laboratory equipment | Geniom RT Analyzer | Chemistry,Biology | 907 |
27,412,585 | https://en.wikipedia.org/wiki/Middle-third%20rule | In civil engineering, the middle-third rule states that no tension is developed in a wall or foundation if the resultant force lies within the middle third of the structure.
The rule is covered by various standard texts in the field of civil engineering, for instance Principles of Foundation Engineering by B.M. Das. The application of this rule is limited to foundations that are square or rectangular in plan. (For circular foundations, a different rule, known as the middle-quarter rule, applies.)
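The rule can be made concrete with the standard eccentric-load stress formula for a rectangular base (a worked sketch for illustration, not drawn from the cited text): a vertical load P acting at eccentricity e on a base of width B and length L produces edge pressures σ = (P/BL)(1 ± 6e/B), so the minimum pressure stays non-negative (no tension) exactly when |e| ≤ B/6, i.e. when the resultant lies within the middle third.

```python
def edge_stresses(P: float, B: float, L: float, e: float):
    """Edge bearing pressures under a rectangular footing of width B and
    length L carrying a vertical load P at eccentricity e (along B).
    sigma = P/(B*L) * (1 +- 6e/B); no tension develops iff |e| <= B/6."""
    A = B * L
    sigma_max = P / A * (1 + 6 * e / B)
    sigma_min = P / A * (1 - 6 * e / B)
    return sigma_max, sigma_min, abs(e) <= B / 6

# 500 kN on a 2 m x 3 m footing with the resultant 0.25 m off-centre
# (inside the middle third, since B/6 = 0.33 m): sigma_min stays positive.
print(edge_stresses(P=500e3, B=2.0, L=3.0, e=0.25))
```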
References
Structural engineering | Middle-third rule | Engineering | 100 |
59,104,434 | https://en.wikipedia.org/wiki/Fair%20division%20experiments | Various experiments have been made to evaluate various procedures for fair division, the problem of dividing resources among several people. These include case studies, computerized simulations, and lab experiments.
Case studies
Allocating indivisible heirlooms
1. Flood describes a division of a gift containing 5 parcels: whiskey, prunes, eggs, suitcase, etc. The division was done using the Knaster auction. The resulting division was fair, but in retrospect it was found that coalitions could gain from manipulation.
2. When Mary Anna Lee Paine Winsor died at the age of 93, her estate included two trunks of silver, that had to be divided among her 8 grandchildren. It was divided using a decentralized, fair and efficient allocation procedure, which combined market equilibrium and a Vickrey auction. Although most participants did not fully understand the algorithm or the preference information desired, it handled the major considerations well and was regarded as equitable.
Allocating unused classrooms
In California, the law says that public school classrooms should be shared fairly among all public school pupils, including those in charter schools. Schools have dichotomous preferences: each school demands a certain number of classes, it is happy if it got all of them and unhappy otherwise. A new algorithm allocates classrooms to schools using a non-trivial implementation of the randomized leximin mechanism. Unfortunately it was not deployed in practice, but it was tested using computer simulations based on real school data. While the problem is computationally-hard, simulations show that the implementation scales gracefully in terms of running time: even when there are 300 charter schools, it terminates in a few minutes on average. Moreover, while theoretically the algorithm guarantees only 1/4 of the maximum number of allocated classrooms, in the simulations it satisfies on average at least 98% of the maximum number of charter schools that can possibly be satisfied, and allocates on average at least 98% of the maximum number of classrooms that can possibly be allocated.
The partial collaboration with the school district led to several practical desiderata in deploying fair division solutions in practice. First, the simplicity of the mechanism, and the intuitiveness of the properties of proportionality, envy-freeness, Pareto optimality, and strategyproofness, have made the approach more likely to be adopted. On the other hand, the use of randomization, though absolutely necessary in order to guarantee fairness in allocating indivisible goods such as classrooms, has been a somewhat harder sell: the term "lottery" raised negative connotations and legal objections.
Resolving international conflicts
The adjusted winner procedure is a protocol for simultaneously resolving several issues under conflict, such that the agreement is envy-free, equitable, and Pareto efficient. While there are no account of it actually being used to resolve disputes, there are several counterfactual studies checking what would have been the results of using this procedure to solve international disputes:
For the Camp David Accords, the authors construct approximate numeric valuation functions for Israel and Egypt, based on the relative importance of each issue for each country. They then run the AW protocol. The theoretical results are very similar to the actual agreement, which leads the authors to conclude that the agreement is as fair as it could be.
For the Israeli-Palestinian conflict, the author constructs the valuation functions based on a survey of expert opinions, and describes the agreement that would result from running the AW protocol with these valuations.
For the Spratly Islands dispute, the authors construct a two-phase procedure for settling the dispute, and present its (hypothetic) outcome.
Allocating rooms and rent
Rental harmony is the problem of simultaneously allocating rooms in an apartment and the rent of the apartment among the housemates. It has several solutions. Some of these solutions were implemented in the Spliddit.org website and tested on real users.
Sharing cooperation surplus
When different agents cooperate, there is an economic surplus in welfare. Cooperative game theory studies the question of how this surplus should be allocated, taking into account the various coalitional options of the players. Several cases of such cooperation have been studied, in light of concepts such as the Shapley value.
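As a concrete illustration of one such concept, here is a minimal brute-force Shapley value computation (the two-player game at the end is invented):

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value by averaging each player's marginal contribution
    over all orderings; v maps frozensets of players to the surplus
    that coalition can secure on its own."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Two agents earn 1 and 2 alone, but 6 together:
v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 2, frozenset("AB"): 6}
print(shapley(["A", "B"], v))  # {'A': 2.5, 'B': 3.5}
```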
Fair Bargaining
Flood analyzed several cases of bargaining between a buyer and a seller on the price of purchasing a good (e.g. a car). He found that the "split-the-difference" principle was acceptable by both participants. The same cooperative principle was found in more abstract non-cooperative games. However, in some cases, bidders in an auction did not find a cooperative solution.
Fair Load-Shedding
Olabambo et al. develop heuristic algorithms for the fair allocation of electricity disconnections in developing countries. They test the fairness and welfare of their algorithms on electricity usage data from Texas, which they adapt to the situation in Nigeria.
Computerized simulations
Fair cake-cutting
Walsh developed several algorithms for online fair cake-cutting. He tested them using a computerized simulation: valuation functions for each agent were generated by dividing the cake into random segments, assigning a random value to each segment, and normalizing the total value of the cake. The egalitarian welfare and the utilitarian welfare of various algorithms were compared.
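A minimal sketch of such a valuation generator (the number of segments and the distributions are assumptions, chosen to mirror the description above):

```python
import numpy as np

def random_valuation(num_segments=10, rng=np.random.default_rng()):
    """Piecewise-constant valuation of the cake [0, 1]: random segment
    boundaries, a random value per segment, normalized to total 1."""
    cuts = np.sort(rng.uniform(0.0, 1.0, num_segments - 1))
    bounds = np.concatenate([[0.0], cuts, [1.0]])
    seg_values = rng.uniform(0.0, 1.0, num_segments)
    seg_values /= seg_values.sum()  # whole cake worth exactly 1
    return bounds, seg_values

def value_of_interval(bounds, seg_values, a, b):
    """The agent's value for the sub-cake [a, b], with value spread
    uniformly within each segment."""
    total = 0.0
    for i, v in enumerate(seg_values):
        lo, hi = bounds[i], bounds[i + 1]
        overlap = max(0.0, min(b, hi) - max(a, lo))
        if overlap > 0.0:
            total += v * overlap / (hi - lo)
    return total

bounds, vals = random_valuation()
print(value_of_interval(bounds, vals, 0.0, 1.0))  # 1.0 by construction
```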
Shtechman, Gonen and Segal-Halevi simulated two famous cake-cutting algorithms - Even–Paz and Last diminisher - on real land-value data from New Zealand and Israel. The agents' valuations were generated by taking the market value of each land-cell and adding a random "noise" based on two different noise models: uniform noise and hot-spot noise. They showed the algorithms perform better than two alternative processes for dividing land, namely selling the land and dividing the proceeds, and hiring a real-estate assessor.
Welfare redistribution mechanism
Cavallo developed an improvement of the Vickrey–Clarke–Groves mechanism in which money is redistributed in order to increase social welfare. He tested his mechanism using simulations. He generated piecewise-constant valuation functions, whose constants were selected at random from the uniform distribution. He also tried Gaussian distributions and got similar results.
Fair item assignment
Dickerson, Goldman, Karp and Procaccia use simulations to check under what conditions an envy-free assignment of discrete items is likely to exist. They generate instances by sampling the value of each item to each agent from two probability distributions: uniform and correlated. In the correlated sampling, they first sample an intrinsic value for each good, and then assign a random value to each agent drawn from a truncated nonnegative normal distribution around that intrinsic value. Their simulations show that, when the number of goods is larger than the number of agents by a logarithmic factor, envy-free allocations exist with high probability.
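The existence question can be checked by brute force for tiny instances, mirroring the uniform-sampling setup (exponential in the number of items, so only for illustration):

```python
from itertools import product
import numpy as np

def envy_free_exists(values):
    """values[i, j] = agent i's value for item j. Enumerate every
    assignment of items to agents and test envy-freeness: each agent
    must value its own bundle at least as much as any other bundle."""
    n, m = values.shape
    for assignment in product(range(n), repeat=m):  # item j -> agent
        bundles = [[j for j in range(m) if assignment[j] == i] for i in range(n)]
        if all(values[i, bundles[i]].sum() >= values[i, bundles[k]].sum()
               for i in range(n) for k in range(n)):
            return True
    return False

rng = np.random.default_rng(0)
print(envy_free_exists(rng.uniform(size=(2, 6))))  # 2 agents, 6 items
```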
Segal-Halevi, Aziz and Hassidim use simulations from similar distributions to show that, in many cases, there exist allocations that are necessarily fair based on a certain convexity assumption on the agents' preferences.
Laboratory experiments
Several experiments were conducted with people, in order to find out what is the relative importance of several desiderata in choosing an allocation.
Important concepts
James Konow reviewed hundreds of experiments, done by phone interviews or written surveys, aimed at eliciting people's preferences and ideas regarding "what is fair?". Most experiments were done by presenting short stories (vignettes) to people and asking them whether the outcome is fair or unfair. The experiments revolved around four aspects of justice:
Equality and Need: egalitarianism, Rawls' theory and the Social contract and Marxism. Konow claims that there is little evidence for these as a general fairness principle, except when considering the basic needs. He calls it the Principle of Need: just allocations provide for basic needs equally across individuals.
Utilitarian and Welfare economics: utilitarianism, Pareto efficiency, envy-freeness. There is evidence that people want to maximize total surplus, even when it comes at a personal cost to them. This leads to the Principle of Efficiency: aiming to maximize the sum of derived values.
Equity and moral Desert: Nozick's theory of choice, Buchanan's theory of moral desert, and the theory of equity, which says that the rewards should be proportional to the contributions. He defines the Principle of Equity, which generalizes the equity formula to the entitlement formula: the entitlement of each agent is based on his inputs, outputs, endowments and costs. His allocation should be proportional to the variables he controls, but not to exogenous variables over which he has no control.
Context: experiments show that the weighing of the above three principles depends on context. Aspects of context include past transactions (existing prices are usually considered "fair", particularly if they are stable and competitive). The endowment effect affects fairness: reducing someone's endowment is considered unfair. There are also information and framing effects: subjects may respond differently depending on what kind of information they are given on the situation. Theories of local justice say that people solve each instance of fair division locally, based on fairness principles relevant for that instance, emphasizing procedural fairness. Experiments find effects of scope, that is, determining the set of agents and the set of allocations to compare. There are differences between countries and cultures in the relative weight they assign to different fairness principles, as well as to related principles such as self-interest, love, altruism, and reciprocity.
Fairness vs. efficiency - what outcome is better?
Sometimes, there are only two possible allocations: one is fair (e.g. envy-free division) but inefficient, while the other is efficient (e.g. Pareto-optimal) but unfair. Which division do people prefer? This was tested in several lab experiments.
1. Subjects were given several possible allocations of money, and were asked which allocation they prefer. One experiment found that the most important factors were Pareto-efficiency and the Rawlsian motive for helping the poor (maximin principle). However, a later experiment found that these conclusions only hold for students of economics and business, who are trained to acknowledge the importance of efficiency. In the general population, the most important factors are selfishness and inequality aversion.
2. Subjects were asked to answer questionnaires regarding the division of indivisible items between two people. The subjects were shown the subjective value that each (virtual) person attaches to each item. The predominant aspect considered was equity - satisfying each individual's preferences. The efficiency aspect was secondary. This effect was slightly more pronounced in economics students, and less pronounced in law students (who chose a Pareto-efficient allocation more frequently).
3. Subjects were divided into pairs and asked to negotiate and decide how to divide a set of 4 items between them. Each combination of items had a pre-specified monetary value, which was different between the two subjects. Each subject knew both his own values and the partner's values. After the division, each subject could redeem the items for their monetary value. The items could be divided in several ways: some divisions were equitable (e.g., giving each partner a value of 45), while other divisions were Pareto efficient (e.g., giving one partner 46 and another partner 75). The interesting question was whether people prefer the equitable or the efficient division. The results showed that people preferred the more efficient division only if it was not "too unfair". A difference of 2-3 value units was considered sufficiently small for most subjects, so they preferred the efficient allocation. But a difference of 20-30 units (such as in the 45:45 vs. 46:75 example) was perceived as too large: 51% preferred the 45:45 division. The effect was less pronounced when the subjects were only shown the rank of the item combinations for each of them, rather than the full monetary value. This experiment also revealed a recurring process which was used during the negotiation: subjects first find the most equitable division of the goods. They take it as a reference point and try to find Pareto improvements. An improvement is implemented only if the inequality it causes is not too large. This process is called CPIES: Conditioned Pareto Improvement from Equal Split.
Intra-personal vs. inter-personal fairness - which is more important?
What is the importance of intra-personal fairness criteria (such as envy-freeness, where each person compares bundles based only on his own utility function) vs. inter-personal fairness criteria (such as equitability, where each person views the utilities of all other agents)? Using a free-form bargaining experiment, it was found that inter-personal fairness (e.g. equitability) is more important. Intra-personal fairness (such as envy-freeness) is relevant only as a secondary criterion.
Fairness vs. simplicity
Divide and choose (DC) is a fair and very simple procedure. There are more sophisticated procedures that have better fairness guarantees. The question of which were more satisfactory was tested in several lab experiments.
1. Divide-and-choose vs Knaster-Brams-Taylor. Several pairs of players had to divide among them 3 indivisible goods (a ballpoint pen, a lighter and a mug) and some money. Three procedures were used: the simple DC, and the more complicated Adjusted Knaster (an improvement of adjusted winner) and Proportional Knaster. The authors asked the subjects to select their favorite procedure. Then, they let them play the procedure in two modes: binding (strict adherence to the protocol rules) and non-binding (possible renegotiation afterwards). They compared the procedures performance in terms of efficiency, envy-freeness, equitability and truthfulness. Their conclusions are: (a) The sophisticated mechanisms are advantageous only in the binding case; when renegotiation is possible, their performance drops to the baseline level of DC. (b) The preference for a procedure depends not only on the expected utility calculations of the negotiators, but also on their psychological profile: the more "antisocial" a person is, the more likely he is to opt for a procedure with a compensatory mechanism. The more risk-averse a person is, the more likely he is to opt for a straightforward procedure like DC. (c) The final payoff of a participant in a procedure depends a lot on the implementation. If participants cannot divide the goods under a procedure of their own choice, they are more eager to maximize their payoff. A shortened time horizon is equally detrimental.
2. Structured procedures vs. Genetic algorithms. Two pairs of players had to divide between them 10 indivisible goods. A genetic algorithm was used to search for the best division candidates: out of the 1024 possible divisions, a subset of 20 divisions was shown to the players, and they were asked to grade their satisfaction about the candidate division on a scale ranging from 0 (not satisfied at all) to 1 (fully satisfied). Then, for each subject, a new population of 20 divisions was created using a genetic algorithm. This procedure continued for 15 iterations until a best surviving allocation was found. The results were compared to five provably-fair division algorithms: Sealed Bid Knaster, Adjusted Winner, Adjusted Knaster, Division by Lottery and Descending Demand. Often, the best divisions found by the genetic algorithm were rated as more mutually satisfactory than the ones derived from the algorithms. Two possible reasons for that were: (a) Temporal fluctuation of preferences - the valuations of humans change from the point they report their valuations to the point they see the final allocation. Most fair division procedures ignore this issue, but the genetic algorithm captures it naturally. (b) Non-additivity of preferences. Most division procedures assume that valuations are additive, but in reality they are not; the genetic algorithm works just as well with non-additive valuations.
3. Simple procedures vs. Strongly-fair procedures. 39 player-pairs were given 6 indivisible gift-certificates of the same value ($10) but from different vendors (e.g. Esso, Starbucks, etc.). Before the procedure, each participant was shown all the 64 possible allocations, and was asked to grade the satisfaction and fairness of each of them between 0 (bad) and 100 (good). Then, they were taught seven different procedures, with different levels of fairness guarantees: Strict Alternation and Balanced Alternation (no guarantees), Divide and Choose (only envy-freeness), Compensation Procedure and Price Procedure (envy-freeness and Pareto-efficiency), Adjusted Knaster and Adjusted Winner (envy-freeness, Pareto-efficiency and equitability). They practiced each of these against a computer. Then, they did an actual division against another human subject. After the procedure, they were asked again to grade the satisfaction and fairness of the outcome; the goal was to distinguish procedural fairness from distributional fairness. The results showed that: (a) procedural fairness had no significant impact; satisfaction was mainly determined by distributional fairness. (b) the results of simpler procedures (strict alternation, balanced alternation and DC) were considered fairer and more satisfactory. They explain this counter-intuitive result by showing that humans care about object equality - giving each agent the same number of objects (though this does not entail any mathematical fairness criterion).
Efficiency vs. strategy
Consider two agents that have to bargain on a deal, such as how to divide goods among them. Often, if they sincerely reveal their preferences, they can attain a win-win deal. However, if they strategically misrepresent their preferences in an attempt to gain, they might actually lose the deal. What negotiation procedure is most efficient in terms of attaining good deals? Several bargaining procedures were studied in the lab.
1. Sealed bid auction: a simple one-shot negotiation procedure. In the lab, information-advantaged players aggressively exploited asymmetric information, and drastically misrepresented their true valuation through strategic bidding. This often resulted in a reduced bargaining zone, forgone deals and low economic efficiency. In one experiment, deals were made on only 52% of all trials, while 77% of all trials had a positive bargaining zone.
2. Bonus procedure: a procedure in which a bonus is given to participants making a deal. This bonus is calculated such that it is optimal for players to reveal their true preferences. Lab experiments show that this does not help: subjects still strategize, although it is bad for them.
3. Adjusted Winner (AW): a procedure that allocates divisible objects in order to maximize the total utility. In the lab, subjects bargained in pairs over two divisible objects. Each of the two objects was assigned a random value drawn from a commonly known prior distribution. Each player had complete information about their own values, but incomplete information about their co-bargainer’s values. There were three information conditions: (1) Competing Preferences: Players know that the preferences of their co-bargainer are similar to their own; (2) Complementary Preferences: Players know that the preferences of their co-bargainer are diametrically opposed to their own; (3) Unknown (Random) Preferences: Players do not know what their co-bargainer values most relative to their own preferences. In condition (1), the bilateral decisions converge toward efficient outcomes, yet only one-third are "envy-free". In condition (2), while players dramatically misrepresent their true valuation for objects, both efficiency and envy-freeness approach maximum levels. In condition (3), pronounced strategic bidding emerges, yet the result is twice as many envy-free outcomes, with increased levels of efficiency (relative to condition 1). In all cases, the structured AW procedure was quite successful in attaining a win-win solution - about 3/2 times more than unstructured negotiation. The key to its success is that it forces players out of the ‘fixed pie myth’.
4. Conflict-resolution algorithm: Hortala-Vallve and Lorente-Saguer describe a simple mechanism for solving several issues simultaneously (analogous to Adjusted Winner). They observe that equilibrium play increases over time, and truthful play decreases over time - agents manipulate more often when they learn their partners' preferences. Fortunately, the deviations from equilibrium do not cause much damage to the social welfare - the final welfare is close to the theoretic optimum.
5. Fair cake-cutting algorithms: Ortega, Kyropoulou and Segal-Halevi tested algorithms such as Divide and choose, Last diminisher, Even–Paz and Selfridge–Conway between laboratory subjects. It is known that these procedures are not strategyproof, and indeed, they found that subjects often manipulate them. Moreover, the manipulation was often irrational - subjects often used dominated strategies. Despite the manipulations, the algorithms for envy-free cake-cutting produced outcomes with less envy, and were considered fairer.
Children
In the lab, children were paired as "rich" and "poor" and were asked to share objects. There were differences in the perception of "initial belongings" vs. "things that have to be shared": young children (up to 7) did not distinguish between them, while older children (above 11) did.
See also
Participatory budgeting experiments - experiments related to fairness and other issues, in the particular setting of participatory budgeting.
The ultimatum game and the dictator game - two very simple games in which subjects have to choose between insisting on fairness and increasing their own payoff. Many variants of this game were tested in lab.
The Moral Machine experiment - an experiment that collected millions of decisions on moral issues related to autonomous vehicles (e.g., if a vehicle must kill someone, who should it be?).
Experimental evidence on the question of "What is fair?"
References
Fair division
Experimental economics
Thought experiments in ethics | Fair division experiments | Mathematics | 4,508 |
13,567 | https://en.wikipedia.org/wiki/HyperCard | HyperCard is a software application and development kit for Apple Macintosh and Apple IIGS computers. It is among the first successful hypermedia systems predating the World Wide Web.
HyperCard combines a flat-file database with a graphical, flexible, user-modifiable interface. HyperCard includes a built-in programming language called HyperTalk for manipulating data and the user interface.
This combination of features – a database with simple form layout, flexible support for graphics, and ease of programming – suits HyperCard for many different projects such as rapid application development of applications and databases, interactive applications with no database requirements, command and control systems, and many examples in the demoscene.
HyperCard was originally released in 1987 for $49.95 and was included free with all new Macs sold afterwards. It was withdrawn from sale in March 2004, having received its final update in 1998 upon the return of Steve Jobs to Apple. HyperCard was not ported to Mac OS X, but can run in the Classic Environment on versions of Mac OS X that support it.
Overview
Design
HyperCard is based on the concept of a "stack" of virtual "cards". Cards hold data, just as they would in a Rolodex card-filing device. Each card contains a set of interactive objects, including text fields, check boxes, buttons, and similar common graphical user interface (GUI) elements. Users browse the stack by navigating from card to card, using built-in navigation features, a powerful search mechanism, or through user-created scripts.
Users build or modify stacks by adding new cards. They place GUI objects on the cards using an interactive layout engine based on a simple drag-and-drop interface. Also, HyperCard includes prototype or template cards called backgrounds; when new cards are created they can refer to one of these background cards, which causes all of the objects on the background to "show through" behind the new card. This way, a stack of cards with a common layout and functionality can be created. The layout engine is similar in concept to a form as used in most rapid application development (RAD) environments such as Borland Delphi, and Microsoft Visual Basic and Visual Studio.
The database features of the HyperCard system are based on the storage of the state of all of the objects on the cards in the physical file representing the stack. The database does not exist as a separate system within the HyperCard stack; no database engine or similar construct exists. Instead, the state of any object in the system is considered to be live and editable at any time. From the HyperCard runtime's perspective, there is no difference between moving a text field on the card and typing into it; both operations simply change the state of the target object within the stack. Such changes are immediately saved when complete, so typing into a field causes that text to be stored to the stack's physical file. The system operates in a largely stateless fashion, with no need to save during operation. This is in common with many database-oriented systems, although somewhat different from document-based applications.
The final key element in HyperCard is the script, a single code-carrying element of every object within the stack. The script is a text field whose contents are interpreted in the HyperTalk language. Like any other property, the script of any object can be edited at any time and changes are saved as soon as they were complete. When the user invokes actions in the GUI, like clicking on a button or typing into a field, these actions are translated into events by the HyperCard runtime. The runtime then examines the script of the object that is the target of the event, like a button, to see if its script object contains the event's code, called a handler. If it does, the HyperTalk engine runs the handler; if it does not, the runtime examines other objects in the visual hierarchy.
These concepts make up the majority of the HyperCard system; stacks, backgrounds and cards provide a form-like GUI system, the stack file provides object persistence and database-like functionality, and HyperTalk allows handlers to be written for GUI events. Unlike the majority of RAD or database systems of the era, however, HyperCard combines all of these features, both user-facing and developer-facing, in a single application. This allows rapid turnaround and immediate prototyping, possibly without any coding, allowing users to author custom solutions to problems with their own personalized interface. "Empowerment" became a catchword as this possibility was embraced by the Macintosh community, as was the phrase "programming for the rest of us", that is, anyone, not just professional programmers.
It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation system within the data elements of the UI, like text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web. Unlike the Web, programming, placement, and browsing are all the same tool. Similar systems have been created for HTML, but traditional Web services are considerably more heavyweight.
HyperTalk
HyperCard contains an object-oriented scripting language called HyperTalk, which was noted for having a syntax resembling casual English. HyperTalk language features were predetermined by the HyperCard environment, although they could be extended by the use of external functions (XFCN) and external commands (XCMD), written in a compiled language. The weakly typed HyperTalk supports most standard programming structures such as "if–then" and "repeat". HyperTalk is verbose, hence its ease of use and readability. HyperTalk code segments are referred to as "scripts", a term that is considered less daunting to beginning programmers.
Externals
HyperCard can be extended significantly through the use of external command (XCMD) and external function (XFCN) modules. These are code libraries packaged in a resource fork that integrate into either the system generally or the HyperTalk language specifically; this is an early example of the plug-in concept. Unlike conventional plug-ins, these do not require separate installation before they are available for use; they can be included in a stack, where they are directly available to scripts in that stack.
During HyperCard's peak popularity in the late 1980s, a whole ecosystem of vendors offered thousands of these externals such as HyperTalk compilers, graphing systems, database access, Internet connectivity, and animation. Oracle offered an XCMD that allows HyperCard to directly query Oracle databases on any platform, superseded by Oracle Card. BeeHive Technologies offered a hardware interface that allows the computer to control external devices. Connected via the Apple Desktop Bus (ADB), this instrument can read the state of connected external switches or write digital outputs to a multitude of devices.
Externals allow access to the Macintosh Toolbox, which contains many lower-level commands and functions not native to HyperTalk, such as control of the serial and ADB ports.
History
Development
HyperCard was created by Bill Atkinson following an LSD trip. Work for it began in March 1985 under the name of WildCard (hence its creator code of WILD). In 1986, Dan Winkler began work on HyperTalk and the name was changed to HyperCard for trademark reasons. It was released on 11 August 1987, timed to coincide with the first day of the MacWorld Conference & Expo in Boston, Massachusetts to guarantee maximum publicity, with the understanding that Atkinson would give HyperCard to Apple only if the company promised to release it for free on all Macs.
Launch
HyperCard was successful almost instantly. The Apple Programmer's and Developer's Association (APDA) said, "HyperCard has been an informational feeding frenzy. From August [1987, when it was announced] to October our phones never stopped ringing. It was a zoo." Within a few months of release, there were multiple HyperCard books and a 50 disk set of public domain stacks. Apple's project managers found HyperCard was being used by a huge number of people, internally and externally. Bug reports and upgrade suggestions continued to flow in, demonstrating its wide variety of users. Since it was also free, it was difficult to justify dedicating engineering resources to improvements in the software. Apple and its mainstream developers understood that HyperCard's user empowerment could reduce the sales of ordinary shrink-wrapped products. Stewart Alsop II speculated that HyperCard might replace Finder as the shell of the Macintosh graphical user interface.
HyperCard 2.0
In late 1989, Kevin Calhoun, then a HyperCard engineer at Apple, led an effort to upgrade the program. This resulted in HyperCard 2.0, released in 1990. The new version included an on-the-fly compiler that greatly increased performance of computationally intensive code, a new debugger and many improvements to the underlying HyperTalk language.
At the same time HyperCard 2.0 was being developed, a separate group within Apple developed and in 1991 released HyperCard IIGS, a version of HyperCard for the Apple IIGS system. Aimed mainly at the education market, HyperCard IIGS has roughly the same feature set as the 1.x versions of Macintosh HyperCard, while adding support for the color graphics abilities of the IIGS. Although stacks (HyperCard program documents) are not binary-compatible, a translator program (another HyperCard stack) allows them to be moved from one platform to the other.
Then, Apple decided that most of its application software packages, including HyperCard, would be the property of a wholly owned subsidiary called Claris. Many of the HyperCard developers chose to stay at Apple rather than move to Claris, causing the development team to be split. Claris attempted to create a business model where HyperCard could also generate revenues. At first the freely-distributed versions of HyperCard shipped with authoring disabled. Early versions of Claris HyperCard contain an Easter Egg: typing "magic" into the message box converts the player into a full HyperCard authoring environment. When this trick became nearly universal, they wrote a new version, HyperCard Player, which Apple distributed with the Macintosh operating system, while Claris sold the full version commercially. Many users were upset that they had to pay to use software that had traditionally been supplied free and which many considered a basic part of the Mac.
Even after HyperCard was generating revenue, Claris did little to market it. Development continued with minor upgrades, and the first failed attempt to create a third generation of HyperCard. During this period, HyperCard began losing market share. Without several important, basic features, HyperCard authors began moving to systems such as SuperCard and Macromedia Authorware. Nonetheless, HyperCard continued to be popular and used for a widening range of applications, from the game The Manhole, an earlier effort by the creators of Myst, to corporate information services.
Apple eventually folded Claris back into the parent company, returning HyperCard to Apple's core engineering group. In 1992, Apple released the eagerly anticipated upgrade of HyperCard 2.2 and included licensed versions of Color Tools and Addmotion II, adding support for color pictures and animations. However, these tools are limited and often cumbersome to use because HyperCard 2.0 lacks true, internal color support.
HyperCard 3.0
Several attempts were made to restart HyperCard development once it returned to Apple. Because of the product's widespread use as a multimedia-authoring tool it was rolled into the QuickTime group. A new effort to allow HyperCard to create QuickTime interactive (QTi) movies started, once again under the direction of Kevin Calhoun. QTi extended QuickTime's core multimedia playback features to provide true interactive facilities and a low-level programming language based on 68000 assembly language. The resulting HyperCard 3.0 was first presented in 1996 when an alpha-quality version was shown to developers at Apple's annual Apple Worldwide Developers Conference (WWDC). Under the leadership of Dan Crow development continued through the late 1990s, with public demos showing many popular features such as color support, Internet connectivity, and the ability to play HyperCard stacks (which were now special QuickTime movies) in a web browser. Development of HyperCard 3.0 stalled when, in 1998, the QuickTime team's focus shifted away from QuickTime interactive to the streaming features of QuickTime 4.0. Steve Jobs disliked the software because Atkinson had chosen to stay at Apple to finish it instead of joining Jobs at NeXT, and (according to Atkinson) "it had Sculley's stink all over it". In 2000, the HyperCard engineering team was reassigned to other tasks after Jobs decided to abandon the product. Calhoun and Crow both left Apple shortly after, in 2001.
Its final release was in 1998, and it was totally discontinued in March 2004.
HyperCard runs natively only in the classic Mac OS, but it can still be used in Mac OS X's Classic mode on PowerPC based machines (G5 and earlier). The last functional native HyperCard authoring environment is Classic mode in Mac OS X 10.4 (Tiger) on PowerPC-based machines.
Applications
HyperCard has been used for a range of hypertext and artistic purposes. Before the advent of PowerPoint, HyperCard was often used as a general-purpose presentation program. Examples of HyperCard applications include simple databases, "choose your own adventure"-type games, and educational teaching aids.
Due to its rapid application design facilities, HyperCard was also often used for prototyping applications and sometimes even for version 1.0 implementations. Inside Apple, the QuickTime team was one of HyperCard's biggest customers.
HyperCard has lower hardware requirements than Macromedia Director. Several commercial software products were created in HyperCard, most notably the original version of the graphic adventure game Myst, the Voyager Company's Expanded Books, multimedia CD-ROMs of Beethoven's Ninth Symphony, the Beatles' A Hard Day's Night, and the Voyager MacBeth. An early electronic edition of the Whole Earth Catalog was implemented in HyperCard and stored on CD-ROM.
The prototype and demo of the popular game You Don't Know Jack was written in HyperCard. The French auto manufacturer Renault used it to control their inventory system.
In Quebec, Canada, HyperCard was used to control a robot arm used to insert and retrieve video disks at the National Film Board CinéRobothèque.
In 1989, HyperCard was used to control the BBC Radiophonic Workshop Studio Network, using a single Macintosh.
HyperCard was used to build a fully functional prototype of SIDOCI (one of the first experiments in the world to develop an integrated electronic patient record system) and was heavily used by the Montreal consulting firm DMR to demonstrate what "a typical day in the life of a patient about to get surgery" would look like in a paperless age.
Activision, which was until then mainly a game company, saw HyperCard as an entry point into the business market. Changing its name to Mediagenic, it published several major HyperCard-based applications, most notably Danny Goodman's Focal Point, a personal information manager, and Reports For HyperCard, a program by Nine To Five Software that allows users to treat HyperCard as a full database system with robust information viewing and printing features.
The HyperCard-inspired SuperCard for a while included the Roadster plug-in that allowed stacks to be placed inside web pages and viewed by web browsers with an appropriate browser plug-in. There was even a Windows version of this plug-in allowing computers other than Macintoshes to use the plug-in.
Exploits
The first HyperCard virus was discovered in Belgium and the Netherlands in April 1991.
Because HyperCard executed scripts in stacks immediately on opening, it was also one of the first applications susceptible to macro viruses. The Merryxmas virus was discovered in early 1993 by Ken Dunham, two years before the Concept virus. Very few viruses were based on HyperCard, and their overall impact was minimal.
Reception
Compute!'s Apple Applications in 1987 stated that HyperCard "may make Macintosh the personal computer of choice". While noting that its large memory requirement made it best suited for computers with 2 MB of memory and hard drives, the magazine predicted that "the smallest programming shop should be able to turn out stackware", especially for using CD-ROMs. Compute! predicted in 1988 that most future Mac software would be developed using HyperCard, if only because using it was so addictive that developers "won't be able to tear themselves away from it long enough to create anything else". Byte in 1989 listed it as among the "Excellence" winners of the Byte Awards. While stating that "like any first entry, it has some flaws", the magazine wrote that "HyperCard opened up a new category of software", and praised Apple for bundling it with every Mac. In 2001 Steve Wozniak called HyperCard "the best program ever written".
Legacy
HyperCard is one of the first products that made use of and popularized the hypertext concept to a large popular base of users.
Jakob Nielsen has pointed out that HyperCard was really only a hypermedia program since its links started from regions on a card, not text objects; actual HTML-style text hyperlinks were possible in later versions, but were awkward to implement and seldom used. Deena Larsen programmed links into HyperCard for Marble Springs. Bill Atkinson later lamented that if he had only realized the power of network-oriented stacks, instead of focusing on local stacks on a single machine, HyperCard could have become the first Web browser.
HyperCard saw a loss in popularity with the growth of the World Wide Web, since the Web could handle and deliver data in much the same way as HyperCard without being limited to files on a local hard disk. HyperCard had a significant impact on the web as it inspired the creation of both HTTP (through its influence on Tim Berners-Lee's colleague Robert Cailliau), and JavaScript (whose creator, Brendan Eich, was inspired by HyperTalk). It was also a key inspiration for ViolaWWW, an early web browser.
The pointing-finger cursor used for navigating stacks was later used in the first web browsers, as the hyperlink cursor.
The Myst computer game franchise, initially released as a HyperCard stack and bundled with some Macs (for example the Performa 5300), still lives on, making HyperCard a facilitating technology for starting one of the best-selling computer games of all time.
According to Ward Cunningham, the inventor of Wiki, the wiki concept can be traced back to a HyperCard stack he wrote in the late 1980s.
In 2017 the Internet Archive established a project to preserve and emulate HyperCard stacks, allowing users to upload their own.
The GUI of the prototype Apple Wizzy Active Lifestyle Telephone was based on HyperCard.
World Wide Web
HyperCard influenced the development of the Web in late 1990 through its influence on Robert Cailliau, who assisted in developing Tim Berners-Lee's first Web browser. JavaScript was also inspired by HyperTalk.
Although HyperCard stacks do not operate over the Internet, by 1988, at least 300 stacks were publicly available for download from the commercial CompuServe network (which was not connected to the official Internet yet). The system can link phone numbers on a user's computer together and enable them to dial numbers without a modem, using a less expensive piece of hardware, the Hyperdialer.
In this sense, like the Web, it does form an association-based experience of information browsing via links, though not operating remotely over the TCP/IP protocol then. Like the Web, it also allows for the connections of many different kinds of media.
Similar systems
Other companies have offered their own versions. Two products that offer HyperCard-like abilities remain available:
HyperStudio, one of the first HyperCard clones, is developed and published by Software MacKiev.
LiveCode, published by LiveCode, Ltd., expands greatly on HyperCard's feature set and offers color and a GUI toolkit which can be deployed on many popular platforms (Android, iOS, Classic Macintosh system software, Mac OS X, Windows 98 through 10, and Linux/Unix). LiveCode directly imports extant HyperCard stacks and provides a migration path for stacks still in use.
Past products include:
SuperCard, the first HyperCard clone, was similar to HyperCard, but with many added features such as: full color support, pixel and vector graphics, a full GUI toolkit, and support for many modern macOS features. It could create both standalone applications and projects that run on the freeware SuperCard Player. SuperCard could also convert extant HyperCard stacks into SuperCard projects. It ran only on Macs.
SK8 was a "HyperCard killer" developed within Apple but never released. It extends HyperTalk to allow arbitrary objects which allowed it to build complete Mac-like applications (instead of stacks). The project was never released, although the source code was placed in the public domain.
Hyper DA by Symmetry was a Desk Accessory for classic single-tasked Mac OS that allows viewing HyperCard 1.x stacks as added windows in any extant application, and is also embedded into many Claris products (like MacDraw II) to display their user documentation.
HyperPad from Brightbill-Roberts was a clone of HyperCard, written for DOS. It makes use of ASCII linedrawing to create the graphics of cards and buttons.
Plus, later renamed WinPlus, was similar to HyperCard, for Windows and Macintosh.
Oracle purchased Plus and created a cross-platform version as Oracle Card, later renamed Oracle Media Objects, used as a 4GL for database access.
IBM LinkWay was a mouse-controlled HyperCard-like environment for DOS PCs. It has minimal system requirements and runs in CGA and VGA graphics modes. It even supported video disc control.
Asymetrix's Windows application ToolBook resembled HyperCard, and later included an external converter to read HyperCard stacks (the first was a third-party product from Heizer software).
TileStack was an attempt to create a web based version of HyperCard that is compatible with the original HyperCard files. The site closed down January 24, 2011.
In addition, many of the basic concepts of the original system were later re-used in other forms. Apple built its system-wide scripting engine AppleScript on a language similar to HyperTalk; it is often used for desktop publishing (DTP) workflow automation needs. In the 1990s FaceSpan provided a third-party graphical interface. AppleScript also has a native graphical programming front-end called Automator, released with Mac OS X Tiger in April 2005. One of HyperCard's strengths was its handling of multimedia, and many multimedia systems like Macromedia Authorware and Macromedia Director are based on concepts originating in HyperCard.
AppWare, originally named Serius Developer, is sometimes seen as similar to HyperCard, as both are rapid application development (RAD) systems. AppWare was sold in the early 1990s and worked on both Mac and Windows systems.
Zoomracks, a DOS application with a similar "stack" database metaphor, predates HyperCard by four years, a similarity that led to a contentious lawsuit against Apple.
See also
Apple Media Tool
MetaCard, LiveCode
Morphic (software)
mTropolis
NoteCards
Stagecast Creator
References
Bibliography
External links
Collection of emulated HyperCard stacks via the Internet Archive
HyperCard conversion utility
HyperCard online simulator
1987 software
Domain-specific programming languages
Hypertext
HyperCard products
Classic Mac OS-only software made by Apple Inc.
Classic Mac OS programming tools | HyperCard | Technology | 4,923 |
251,882 | https://en.wikipedia.org/wiki/Quart | The quart (symbol: qt) is a unit of volume equal to a quarter of a gallon. Three kinds of quarts are currently used: the liquid quart and dry quart of the US customary system and the of the British imperial system. All are roughly equal to one liter. It is divided into two pints or (in the US) four cups. Historically, the exact size of the quart has varied with the different values of gallons over time and in reference to different commodities.
Name
The term comes from the Latin quartus (meaning one-quarter) via the French quart. However, although the French word quart has the same root, it frequently means something entirely different. In Canadian French in particular, the quart is called pinte, whilst the pint is called chopine.
History
Since gallons of various sizes have historically been in use, the corresponding quarts have also existed with various sizes.
Definitions and equivalencies
US liquid quart
In the United States, traditional length and volume measures have been legally standardized for commerce by the international yard and pound agreement of 1959, which defines 1 yard as exactly equal to 0.9144 meters. From this definition are derived the metric equivalencies for inches, feet, and miles, area measures, and measures of volume. The US liquid quart equals 57.75 cubic inches, which is exactly 0.946352946 liters.
US dry quart
In the United States, the dry quart is equal to one quarter of a US dry gallon, or exactly 1.101220942715 liters.
Imperial quart
The imperial quart, which is used for both liquid and dry capacity, is equal to one quarter of an imperial gallon, or exactly 1.1365225 liters. In the United Kingdom goods may be sold by the quart if the equivalent metric measure is also given.
In Canadian French, by federal law, the imperial quart is called pinte.
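The three definitions above pin each quart to an exact figure in liters, so conversions reduce to simple ratios. The following is a minimal sketch of that arithmetic in Python; the dictionary keys are ad-hoc labels, not standard unit codes.

```python
# Exact liter values taken from the definitions above.
QUART_LITERS = {
    "us_liquid": 0.946352946,    # 57.75 cubic inches
    "us_dry": 1.101220942715,    # one quarter of a US dry gallon
    "imperial": 1.1365225,       # one quarter of an imperial gallon
}

def convert_quarts(value, from_kind, to_kind):
    """Convert a volume expressed in one kind of quart into another."""
    liters = value * QUART_LITERS[from_kind]
    return liters / QUART_LITERS[to_kind]

# Example: 1 imperial quart is about 1.201 US liquid quarts.
print(round(convert_quarts(1, "imperial", "us_liquid"), 3))  # 1.201
```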
Winchester quart
The Winchester quart is an archaic measure, roughly equal to 2 imperial quarts or 2.25 liters. The 2.5L bottles in which laboratory chemicals are supplied are sometimes referred to as Winchester quart bottles, although they contain slightly more than a traditional Winchester quart.
Reputed quart
The reputed quart was a measure equal to two-thirds of an imperial quart (or one-sixth of an imperial gallon), at about 0.7577 liters, which is very close to one US fifth (0.757 liters).
The reputed quart was previously recognized as a standard size of wine bottle in the United Kingdom, and is only about 1% larger than the current standard wine bottle of 0.75L.
Notes
References
External links
Alcohol measurement
Cooking weights and measures
Customary units of measurement in the United States
Imperial units
Units of volume | Quart | Mathematics | 558 |
38,571,997 | https://en.wikipedia.org/wiki/Mixed%20criticality | A mixed criticality system is a system containing computer hardware and software that can execute several applications of different criticality, such as safety-critical and non-safety critical, or of different safety integrity level (SIL). Different criticality applications are engineered to different levels of assurance, with high criticality applications being the most costly to design and verify. These kinds of systems are typically embedded in a machine such as an aircraft whose safety must be ensured.
Principle
Traditional safety-critical systems had to be tested and certified in their entirety to show that they were safe to use. However, many such systems are composed of a mixture of safety-critical and non-critical parts, as for example when an aircraft contains a passenger entertainment system that is isolated from the safety-critical flight systems. Some issues to address in mixed criticality systems include real-time behaviour, memory isolation, data and control coupling.
Computer scientists have developed techniques for handling systems which thus have mixed criticality, but there are many challenges remaining especially for multi-core hardware.
Priority and criticality
A common source of design errors is confusing priority assignment with criticality management. Priority defines an order between the different tasks or messages to be transmitted inside a system, whereas criticality defines classes of messages whose parameters can differ depending on the current use case. For example, during car crash avoidance or obstacle anticipation, camera sensors can suddenly emit messages more often and so create an overload in the system. Mixed-criticality mechanisms operate precisely in such overload cases: the system must select which messages it will still absolutely guarantee, as in the sketch below.
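A toy sketch of this distinction, assuming a simple table-driven scheduler; the field names, criticality classes and drop policy here are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Message:
    name: str
    priority: int     # transmission order: lower value is sent first
    criticality: int  # assurance class: higher value must be guaranteed

def schedule(messages, overload=False, min_criticality=2):
    # In an overload (e.g. a crash-avoidance burst), keep only messages whose
    # criticality class is still guaranteed, then order the rest by priority.
    pool = [m for m in messages if not overload or m.criticality >= min_criticality]
    return sorted(pool, key=lambda m: m.priority)

msgs = [Message("camera_frame", 1, 1), Message("brake_cmd", 2, 3)]
print([m.name for m in schedule(msgs, overload=True)])  # ['brake_cmd']
```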
Research projects
EU funded research projects on mixed criticality include:
MultiPARTES
DREAMS
PROXIMA
CONTREX
SAFURE
CERTAINTY
VIRTICAL
T-CREST
PROARTIS
ACROSS (Artemis)
EMC2 (Artemis)
RECOMP Artemis
ARAMIS and ARAMIS II
IMPReSS
UK EPSRC funded research projects on mixed criticality include:
MCC
Several research projects have decided to present their research results at the EU-funded Mixed-Criticality Forum
Workshops and seminars
Workshops and seminars on Mixed Criticality Systems include:
1st International Workshop on Mixed Criticality Systems (WMC 2013)
2nd International Workshop on Mixed Criticality Systems (WMC 2014)
3rd International Workshop on Mixed Criticality Systems (WMC 2015)
4th International Workshop on Mixed Criticality Systems (WMC 2016)
Dagstuhl Seminar on Mixed Criticality on Multicore/Manycore Platforms (2015)
Dagstuhl Seminar on Mixed Criticality on Multicore/Manycore Platforms (2017)
References
External links
Karlsruhe Institute of Technology: Mixed Criticality in Safety-Critical Systems
Washington University in St Louis: A Research Agenda for Mixed-Criticality Systems
Software engineering
Safety engineering | Mixed criticality | Technology,Engineering | 556 |
10,214,377 | https://en.wikipedia.org/wiki/GDF5 | Growth/differentiation factor 5 is a protein that in humans is encoded by the GDF5 gene.
The protein encoded by this gene is closely related to the bone morphogenetic protein (BMP) family and is a member of the TGF-beta superfamily. This group of proteins is characterized by a polybasic proteolytic processing site which is cleaved to produce a mature protein containing seven conserved cysteine residues. The members of this family are regulators of cell growth and differentiation in both embryonic and adult tissues. Mutations in this gene are associated with acromesomelic dysplasia, Hunter-Thompson type; brachydactyly, type C; and osteochondrodysplasia, Grebe type. These associations confirm that the gene product plays a role in skeletal development.
GDF5 is expressed in the developing central nervous system, and has a role in skeletal and joint development. It also increases the survival of neurones that respond to the neurotransmitter dopamine, and is a potential therapeutic molecule associated with Parkinson's disease.
See also
Chondrodysplasia, Grebe type
References
Further reading
Developmental genes and proteins
TGFβ domain | GDF5 | Chemistry,Biology | 254 |
480,927 | https://en.wikipedia.org/wiki/Transmittance | In optical physics, transmittance of the surface of a material is its effectiveness in transmitting radiant energy. It is the fraction of incident electromagnetic power that is transmitted through a sample, in contrast to the transmission coefficient, which is the ratio of the transmitted to incident electric field.
Internal transmittance refers to energy loss by absorption, whereas (total) transmittance is that due to absorption, scattering, reflection, etc.
Mathematical definitions
Hemispherical transmittance
Hemispherical transmittance of a surface, denoted T, is defined as
$$T = \frac{\Phi_\mathrm{e}^\mathrm{t}}{\Phi_\mathrm{e}^\mathrm{i}},$$
where
$\Phi_\mathrm{e}^\mathrm{t}$ is the radiant flux transmitted by that surface;
$\Phi_\mathrm{e}^\mathrm{i}$ is the radiant flux received by that surface.
Spectral hemispherical transmittance
Spectral hemispherical transmittance in frequency and spectral hemispherical transmittance in wavelength of a surface, denoted Tν and Tλ respectively, are defined as
$$T_\nu = \frac{\Phi_{\mathrm{e},\nu}^\mathrm{t}}{\Phi_{\mathrm{e},\nu}^\mathrm{i}}, \qquad T_\lambda = \frac{\Phi_{\mathrm{e},\lambda}^\mathrm{t}}{\Phi_{\mathrm{e},\lambda}^\mathrm{i}},$$
where
$\Phi_{\mathrm{e},\nu}^\mathrm{t}$ is the spectral radiant flux in frequency transmitted by that surface;
$\Phi_{\mathrm{e},\nu}^\mathrm{i}$ is the spectral radiant flux in frequency received by that surface;
$\Phi_{\mathrm{e},\lambda}^\mathrm{t}$ is the spectral radiant flux in wavelength transmitted by that surface;
$\Phi_{\mathrm{e},\lambda}^\mathrm{i}$ is the spectral radiant flux in wavelength received by that surface.
Directional transmittance
Directional transmittance of a surface, denoted TΩ, is defined as
$$T_\Omega = \frac{L_{\mathrm{e},\Omega}^\mathrm{t}}{L_{\mathrm{e},\Omega}^\mathrm{i}},$$
where
$L_{\mathrm{e},\Omega}^\mathrm{t}$ is the radiance transmitted by that surface;
$L_{\mathrm{e},\Omega}^\mathrm{i}$ is the radiance received by that surface.
Spectral directional transmittance
Spectral directional transmittance in frequency and spectral directional transmittance in wavelength of a surface, denoted Tν,Ω and Tλ,Ω respectively, are defined as
$$T_{\nu,\Omega} = \frac{L_{\mathrm{e},\Omega,\nu}^\mathrm{t}}{L_{\mathrm{e},\Omega,\nu}^\mathrm{i}}, \qquad T_{\lambda,\Omega} = \frac{L_{\mathrm{e},\Omega,\lambda}^\mathrm{t}}{L_{\mathrm{e},\Omega,\lambda}^\mathrm{i}},$$
where
$L_{\mathrm{e},\Omega,\nu}^\mathrm{t}$ is the spectral radiance in frequency transmitted by that surface;
$L_{\mathrm{e},\Omega,\nu}^\mathrm{i}$ is the spectral radiance in frequency received by that surface;
$L_{\mathrm{e},\Omega,\lambda}^\mathrm{t}$ is the spectral radiance in wavelength transmitted by that surface;
$L_{\mathrm{e},\Omega,\lambda}^\mathrm{i}$ is the spectral radiance in wavelength received by that surface.
Luminous transmittance
In the field of photometry (optics), the luminous transmittance of a filter is a measure of the amount of luminous flux or intensity transmitted by an optical filter. It is generally defined in terms of a standard illuminant (e.g. Illuminant A, Illuminant C, or Illuminant E). The luminous transmittance with respect to the standard illuminant is defined as
$$T_\mathrm{lum} = \frac{\int_0^\infty I(\lambda)\,T(\lambda)\,V(\lambda)\,\mathrm{d}\lambda}{\int_0^\infty I(\lambda)\,V(\lambda)\,\mathrm{d}\lambda},$$
where:
$I(\lambda)$ is the spectral radiant flux or intensity of the standard illuminant (unspecified magnitude);
$T(\lambda)$ is the spectral transmittance of the filter;
$V(\lambda)$ is the luminous efficiency function.
The luminous transmittance is independent of the magnitude of the flux or intensity of the standard illuminant used to measure it, and is a dimensionless quantity.
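The ratio above can be evaluated numerically once the three spectra are tabulated. A minimal sketch follows; the illuminant, filter and V(λ) curves are made-up placeholders, not standard CIE data.

```python
import numpy as np

wavelengths = np.linspace(380e-9, 780e-9, 401)               # visible band, meters
illuminant = np.ones_like(wavelengths)                        # flat dummy illuminant
filter_T = np.exp(-(((wavelengths - 550e-9) / 80e-9) ** 2))   # dummy band-pass filter
V = np.exp(-0.5 * (((wavelengths - 555e-9) / 45e-9) ** 2))    # crude stand-in for V(lambda)

# Luminous transmittance: numerator and denominator share the illuminant,
# so its absolute scale cancels out, as the text notes.
T_lum = np.trapz(illuminant * filter_T * V, wavelengths) / np.trapz(illuminant * V, wavelengths)
print(f"luminous transmittance ~ {T_lum:.3f}")
```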
Beer–Lambert law
By definition, internal transmittance is related to optical depth and to absorbance as
$$T = e^{-\tau} = 10^{-A},$$
where
τ is the optical depth;
A is the absorbance.
The Beer–Lambert law states that, for N attenuating species in the material sample,
$$T = \exp\left(-\sum_{i=1}^N \sigma_i \int_0^\ell n_i(z)\,\mathrm{d}z\right),$$
or equivalently that
$$T = 10^{-\sum_{i=1}^N \varepsilon_i \int_0^\ell c_i(z)\,\mathrm{d}z},$$
where
$\sigma_i$ is the attenuation cross section of the attenuating species i in the material sample;
$n_i$ is the number density of the attenuating species i in the material sample;
$\varepsilon_i$ is the molar attenuation coefficient of the attenuating species i in the material sample;
$c_i$ is the amount concentration of the attenuating species i in the material sample;
ℓ is the path length of the beam of light through the material sample.
Attenuation cross section and molar attenuation coefficient are related by
$$\sigma_i = \frac{\ln 10}{N_\mathrm{A}}\,\varepsilon_i,$$
and number density and amount concentration by
$$n_i = N_\mathrm{A}\,c_i,$$
where $N_\mathrm{A}$ is the Avogadro constant.
In case of uniform attenuation, these relations become
$$T = \exp\left(-\sum_{i=1}^N \sigma_i n_i \ell\right),$$
or equivalently
$$T = 10^{-\sum_{i=1}^N \varepsilon_i c_i \ell}.$$
Cases of non-uniform attenuation occur in atmospheric science applications and radiation shielding theory for instance.
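Returning to the uniform-attenuation relations above, transmittance follows directly from tabulated coefficients. A minimal sketch, with arbitrary example numbers rather than measured coefficients:

```python
import math

def transmittance(species, path_length_cm):
    """T = 10^(-sum_i eps_i * c_i * l), with molar attenuation coefficients
    eps_i in L/(mol*cm) and amount concentrations c_i in mol/L."""
    absorbance = sum(eps * c for eps, c in species) * path_length_cm
    return 10 ** (-absorbance)

species = [(1.2e3, 1e-4), (4.0e2, 5e-5)]  # (eps_i, c_i) pairs, example values
T = transmittance(species, path_length_cm=1.0)
print(f"T = {T:.3f}, A = {-math.log10(T):.3f}, tau = {-math.log(T):.3f}")
# T = 0.724, A = 0.140, tau = 0.322
```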
Other radiometric coefficients
See also
Opacity (optics)
Photometry (optics)
Radiometry
References
Physical quantities
Radiometry
Spectroscopy | Transmittance | Physics,Chemistry,Mathematics,Engineering | 789 |
5,373,006 | https://en.wikipedia.org/wiki/Capital%20adequacy%20ratio | Capital Adequacy Ratio (CAR) also known as Capital to Risk (Weighted) Assets Ratio (CRAR), is the ratio of a bank's capital to its risk. National regulators track a bank's CAR to ensure that it can absorb a reasonable amount of loss and complies with statutory Capital requirements.
It is a measure of a bank's capital. It is expressed as a percentage of a bank's risk-weighted credit exposures. The enforcement of regulated levels of this ratio is intended to protect depositors and promote stability and efficiency of financial systems around the world.
Two types of capital are measured:
tier one capital, which can absorb losses without a bank being required to cease trading; and
tier two capital, which can absorb losses in the event of a winding-up and so provides a lesser degree of protection to depositors.
Formula
Capital adequacy ratios (CARs) are a measure of the amount of a bank's core capital expressed as a percentage of its risk-weighted asset.
Capital adequacy ratio is defined as:
$$\mathrm{CAR} = \frac{\text{Tier 1 capital} + \text{Tier 2 capital}}{\text{Risk}}$$
TIER 1 CAPITAL = (paid up capital + statutory reserves + disclosed free reserves) - (equity investments in subsidiary + intangible assets + current & brought-forward losses)
TIER 2 CAPITAL = A) Undisclosed Reserves + B) General Loss reserves + C) hybrid debt capital instruments and subordinated debts
where Risk can either be risk-weighted assets or the respective national regulator's minimum total capital requirement. If using risk-weighted assets,
$$\mathrm{CAR} = \frac{\text{Tier 1 capital} + \text{Tier 2 capital}}{\text{risk-weighted assets}} \ge 10\%.$$
The percent threshold varies from bank to bank (10% in this case, a common requirement for regulators conforming to the Basel Accords) and is set by the national banking regulator of different countries.
Two types of capital are measured: tier one capital (Tier 1 above), which can absorb losses without a bank being required to cease trading, and tier two capital (Tier 2 above), which can absorb losses in the event of a winding-up and so provides a lesser degree of protection to depositors.
Use
Capital adequacy ratio is the ratio which determines the bank's capacity to meet the time liabilities and other risks such as credit risk, operational risk etc. In the most simple formulation, a bank's capital is the "cushion" for potential losses, and protects the bank's depositors and other lenders. Banking regulators in most countries define and monitor CAR to protect depositors, thereby maintaining confidence in the banking system.
CAR is similar to leverage; in the most basic formulation, it is comparable to the inverse of debt-to-equity leverage formulations (although CAR uses equity over assets instead of debt-to-equity; since assets are by definition equal to debt plus equity, a transformation is required). Unlike traditional leverage, however, CAR recognizes that assets can have different levels of risk.
Risk weighting
Since different types of assets have different risk profiles, CAR primarily adjusts for assets that are less risky by allowing banks to "discount" lower-risk assets. The specifics of CAR calculation vary from country to country, but general approaches tend to be similar for countries that apply the Basel Accords. In the most basic application, government debt is allowed a 0% "risk weighting" - that is, they are subtracted from total assets for purposes of calculating the CAR.
Risk weighting example
Risk-weighted assets – fund based: risk-weighted assets mean fund-based assets such as cash, loans, investments and other assets. Degrees of credit risk expressed as percentage weights have been assigned by the national regulator to each such asset.
Non-funded (off-balance sheet) items: the credit risk exposure attached to off-balance sheet items has to be calculated first by multiplying the face amount of each of the off-balance sheet items by the credit conversion factor. This then has to be multiplied again by the relevant weight.
Local regulations establish that cash and government bonds have a 0% risk weighting, and residential mortgage loans have a 50% risk weighting. All other types of assets (loans to customers) have a 100% risk weighting.
Bank "A" has assets totaling 100 units, consisting of:
Cash: 10 units
Government bonds: 15 units
Mortgage loans: 20 units
Other loans: 50 units
Other assets: 5 units
Bank "A" has debt of 95 units, all of which are deposits. By definition, equity is equal to assets minus debt, or 5 units.
Bank A's risk-weighted assets are calculated as follows: cash and government bonds carry a 0% weight and contribute nothing; mortgage loans contribute 20 × 50% = 10 units; other loans contribute 50 × 100% = 50 units; and other assets contribute 5 × 100% = 5 units, for a total of 65 units of risk-weighted assets.
Even though Bank A would appear to have a debt-to-equity ratio of 95:5, or equity-to-assets of only 5%, its CAR is substantially higher: 5 units of equity against 65 units of risk-weighted assets gives a CAR of about 7.7%. The bank is considered less risky because some of its assets are less risky than others.
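The same arithmetic in a short Python sketch; the asset names and figures are those of the example above.

```python
risk_weights = {"cash": 0.0, "government_bonds": 0.0,
                "mortgage_loans": 0.5, "other_loans": 1.0, "other_assets": 1.0}
assets = {"cash": 10, "government_bonds": 15,
          "mortgage_loans": 20, "other_loans": 50, "other_assets": 5}

rwa = sum(assets[k] * risk_weights[k] for k in assets)  # 65.0 units
equity = sum(assets.values()) - 95                      # assets minus deposits = 5
print(f"RWA = {rwa}, CAR = {equity / rwa:.1%}")         # RWA = 65.0, CAR = 7.7%
```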
Types of capital
The Basel rules recognize that different types of equity are more important than others. To recognize this, different adjustments are made:
Tier I Capital: Actual contributed equity plus retained earnings...
Tier II Capital: Preferred shares plus 50% of subordinated debt...
Different minimum CARs are applied. For example, the minimum Tier I equity allowed by statute for risk-weighted assets may be 6%, while the minimum CAR when including Tier II capital may be 8%.
There is usually a maximum of Tier II capital that may be "counted" towards CAR, which varies by jurisdiction.
See also
Capital requirement
Tier 1 capital
Tier 2 capital
Basel accords
Tier 1 Capital Ratio
TLAC, Total Loss Absorbency Capacity
LR, Leverage Ratio
NSFR, Net Stable Funding Ratio
LCR, Liquidity Coverage Ratio
References
External links
Capital Adequacy Ratio at Investopedia.
Financial economics
Financial ratios
Capital requirement | Capital adequacy ratio | Mathematics | 1,151 |
2,651,355 | https://en.wikipedia.org/wiki/Indexing%20head | An indexing head, also known as a dividing head or spiral head, is a specialized tool that allows a workpiece to be circularly indexed; that is, easily and precisely rotated to preset angles or circular divisions. Indexing heads are usually used on the tables of milling machines, but may be used on many other machine tools including drill presses, grinders, and boring machines. Common jobs for a dividing head include machining the flutes of a milling cutter, cutting the teeth of a gear, milling curved slots, or drilling a bolt hole circle around the circumference of a part.
The tool is similar to a rotary table except that it is designed to be tilted as well as rotated and often allows positive locking at finer gradations of rotation, including through differential indexing. Most adjustable designs allow the head to be tilted from 10° below horizontal to 90° vertical, at which point the head is parallel with the machine table.
The workpiece is held in the indexing head in the same manner as a metalworking lathe. This is most commonly a chuck but can include a collet fitted directly into the spindle on the indexing head, faceplate, or between centers. If the part is long then it may be supported with the help of an accompanying tailstock.
Manual indexing heads
Indexing is the operation of dividing the periphery of a cylindrical workpiece into an equal number of divisions with the help of the index crank and index plate. A manual indexing head includes a hand crank. Rotating the hand crank in turn rotates the spindle and therefore the workpiece. The hand crank uses a worm gear drive to provide precise control of the rotation of the work. The work may be rotated and then locked into place before the cutter is applied, or it may be rotated during cutting depending on the type of machining being done.
Most dividing heads operate at a 40:1 ratio; that is, 40 turns of the hand crank generate 1 revolution of the spindle or workpiece. In other words, 1 turn of the hand crank rotates the spindle by 9 degrees. Because the operator of the machine may want to rotate the part to an arbitrary angle, indexing plates are used to ensure the part is accurately positioned.
Direct indexing plate: Most dividing heads have an indexing plate permanently attached to the spindle. This plate is located at the end of the spindle, very close to where the work would be mounted. It is fixed to the spindle and rotates with it. This plate is usually equipped with a series of holes that enables rapid indexing to common angles, such as 30, 45, or 90 degrees. A pin in the base of the dividing head can be extended into the direct indexing plate to lock the head quickly into one of these angles. The advantage of the direct indexing plate is that it is fast and simple, and no calculations are required to use it. The disadvantage is that it can only be used for a limited number of angles.
Interchangeable indexing plates are used when the work must be rotated to an angle not available on the direct indexing plate. Because the hand crank is fixed to the spindle at a known ratio (commonly 40:1) the dividing plates mounted at the handwheel can be used to create finer divisions for precise orientation at irregular angles. These dividing plates are provided in sets of several plates. Each plate has rings of holes with different divisions. For example, an indexing plate might have three rows of holes with 24, 30, and 36 holes in each row. A pin on the hand crank engages these holes. Index plates with up to 400 holes are available. Only one such plate can be mounted to the dividing head at a time. The plate is selected by the machinist based on exactly what angle he wishes to index to.
For example, if a machinist wanted to index (rotate) his workpiece by 22.5 degrees, then he would turn the hand crank two full revolutions plus one-half of a turn. Since each full revolution is 9 degrees and a half-revolution is 4.5 degrees, the total is 22.5 (9 + 9 + 4.5 = 22.5). The one-half turn can easily be done precisely using any indexing plate with an even number of holes and rotating to the halfway point (Hole #8 on the 16-hole ring).
Brown and Sharpe indexing heads include a set of 3 indexing plates. The plates are marked #1, #2 and #3, or "A", "B" and "C". Each plate contains 6 rows of holes. Plate #1 or "A" has 15, 16, 17, 18, 19, and 20 holes. Plate #2 or "B" has 21, 23, 27, 29, 31, and 33 holes. Plate #3 or "C" has 37, 39, 41, 43, 47, and 49 holes.
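The plate selection described above is mechanical arithmetic: one division of the work needs 40/N crank turns, and the fractional part must land exactly on a hole ring. A small sketch of that search, using the Brown and Sharpe hole counts listed above:

```python
from fractions import Fraction

PLATES = {1: [15, 16, 17, 18, 19, 20],
          2: [21, 23, 27, 29, 31, 33],
          3: [37, 39, 41, 43, 47, 49]}

def indexing(divisions):
    """Return (full turns, holes, ring size, plate) for one division."""
    turns = Fraction(40, divisions)                 # crank turns per division
    whole, rem = divmod(turns.numerator, turns.denominator)
    frac = Fraction(rem, turns.denominator)
    for plate, rings in PLATES.items():
        for holes in rings:
            if (frac * holes).denominator == 1:     # ring divides the fraction evenly
                return whole, int(frac * holes), holes, plate
    raise ValueError("no ring indexes this division directly")

# Example: 27 divisions -> 1 full turn plus 13 holes on the 27-hole ring (plate 2).
print(indexing(27))  # (1, 13, 27, 2)
```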
Universal dividing heads: some manual indexing heads are equipped with a power drive provision. This allows the rotation of the dividing head to be connected to the table feed of the milling machine instead of using a hand crank. A set of change gears is provided to select the ratio between the table feed and rotation. This setup allows the machining of spiral or helical features such as spiral gears, worms, or screw type parts because the part is simultaneously rotated at the same time it is moved in the horizontal direction. This setup is called a "PTO dividing head".
CNC indexing heads
CNC indexing heads are similar in design to the manual variety, except that they have a servo motor coupled to the spindle instead of a hand crank and indexing plates. The servo motor is electronically controlled to index the work to the required position. The control can either be a simple keypad for the operator, or it may be fully CNC controlled.
CNC indexing heads may be controlled in two different modes. The most basic method of operation uses simple control functions built into the dividing head. It does not require a CNC machine. The operator enters the desired angle into a control box attached to the indexing head, and it automatically rotates to the desired position and locks into place for machining. Changing angles is as simple as typing a new angle value onto the control pad. This is simpler than setting up a manual indexing head because there is no need to interchange indexing plates or to calculate which hole positions to use. It is also faster for repetitive operations because the work can be indexed by simply pressing a button, eliminating the need to count rotations of the hand crank or specific hole positions on the indexing plate. A CNC dividing head may be used in this manner on either manual or CNC machinery.
Most CNC dividing heads are also able to function as a full CNC axis and may be wired into the control of a CNC machine. This enables the machine's main CNC controller to control the indexing head just like it would control the other axes of the machine. This can be used to machine complex 3D shapes, helices with a non-constant pitch, and similar exotic parts. This mode of operation cannot be used on a manual machine tool because it requires a full CNC controller to operate.
References
Bibliography
External links
A Machinery article about "differential indexing"
Machine tools | Indexing head | Engineering | 1,500 |
41,694,041 | https://en.wikipedia.org/wiki/TT%20Cygni | TT Cygni is a carbon star located away in the northern constellation of Cygnus. It is classified as a semiregular variable of subtype SRb that ranges in brightness from magnitude 7.26 down to 8.0 with a period of 118 days. This object is called a carbon star because it has a high ratio of carbon to oxygen in its surface layers. The carbon was produced by helium fusion, dredged up from inside the star by deep convection triggered by a flash from the helium shell.
In 1898 it was announced that Louisa Dennison Wells had discovered that the star, then known as BD +32°3522, is a variable star. It was listed with its variable star designation, TT Cygni, in Annie Jump Cannon's 1907 work Second Catalog of Variable Stars.
A thin spherical shell around the star, about half a light year across, was emitted 7,000 years ago. It was first detected from its carbon monoxide emission; about a tenth of its mass is dust. The dust is thought to be mostly amorphous carbon.
References
Carbon stars
Semiregular variable stars
Asymptotic-giant-branch stars
Cygnus (constellation)
Durchmusterung objects
186047
096836
Cygni, TT | TT Cygni | Astronomy | 264 |
9,408,017 | https://en.wikipedia.org/wiki/Symbols%20of%20Manitoba | There are several symbols of Manitoba, one of the ten provinces of Canada. These symbols are designated by The Coat of Arms, Emblems and the Manitoba Tartan Act, which came into force on Feb 1, 1988.
Symbols
References
Manitoba
Symbols
Canadian provincial and territorial symbols | Symbols of Manitoba | Mathematics | 55 |
60,646,773 | https://en.wikipedia.org/wiki/%E2%84%93-adic%20sheaf | In algebraic geometry, an ℓ-adic sheaf on a Noetherian scheme X is an inverse system consisting of -modules in the étale topology and inducing .
Bhatt–Scholze's pro-étale topology gives an alternative approach.
Motivation
The development of étale cohomology as a whole was fueled by the desire to produce a 'topological' theory of cohomology for algebraic varieties, i.e. a Weil cohomology theory that works in any characteristic. An essential feature of such a theory is that it admits coefficients in a field of characteristic 0. However, constant étale sheaves with no torsion have no interesting cohomology. For example, if $X$ is a smooth variety over a field $k$, then $H^i_{\text{ét}}(X, \mathbb{Q}) = 0$ for all positive $i$. On the other hand, the constant sheaves $\mathbb{Z}/\ell^n$ do produce the 'correct' cohomology, as long as $\ell$ is invertible in the ground field $k$. So one takes a prime $\ell$ for which this is true and defines ℓ-adic cohomology as $H^i(X, \mathbb{Z}_\ell) := \varprojlim_n H^i_{\text{ét}}(X, \mathbb{Z}/\ell^n)$.
This definition, however, is not completely satisfactory: As in the classical case of topological spaces, one might want to consider cohomology with coefficients in a local system of $\mathbb{Q}_\ell$-vector spaces, and there should be a category equivalence between such local systems and continuous $\mathbb{Q}_\ell$-representations of the étale fundamental group.
Another problem with the definition above is that it behaves well only when $k$ is separably closed. In this case, all the groups occurring in the inverse limit are finitely generated and taking the limit is exact. But if $k$ is for example a number field, the cohomology groups will often be infinite and the limit not exact, which causes issues with functoriality. For instance, there is in general no Hochschild–Serre spectral sequence relating the ℓ-adic cohomology of $X$ to the Galois cohomology of $k$.
These considerations lead one to consider the category of inverse systems of sheaves as described above. One has then the desired equivalence of categories with representations of the fundamental group (for $\mathbb{Z}_\ell$-local systems, and when $X$ is normal for $\mathbb{Q}_\ell$-systems as well), and the issue in the last paragraph is resolved by so-called continuous étale cohomology, where one takes the derived functor of the composite functor of taking the limit over global sections of the system.
Constructible and lisse ℓ-adic sheaves
An ℓ-adic sheaf is said to be
constructible if each $F_n$ is constructible.
lisse if each $F_n$ is constructible and locally constant.
Some authors (e.g., those of SGA 4) assume an ℓ-adic sheaf to be constructible.
Given a connected scheme X with a geometric point x, SGA 1 defines the étale fundamental group $\pi_1(X, x)$ of X at x to be the group classifying finite Galois coverings of X. Then the category of lisse ℓ-adic sheaves on X is equivalent to the category of continuous representations of $\pi_1(X, x)$ on finite free $\mathbb{Z}_\ell$-modules. This is an analog of the correspondence between local systems and continuous representations of the fundamental group in algebraic topology (because of this, a lisse ℓ-adic sheaf is sometimes also called a local system).
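A standard example, included here for illustration (the notation $\mu_{\ell^n}$ for the sheaf of $\ell^n$-th roots of unity is the usual one and is not taken from this article): the Tate twist is a lisse ℓ-adic sheaf whenever $\ell$ is invertible on $X$.

```latex
% Tate twist as an inverse system of etale sheaves; the transition maps
% are the l-th power maps between roots-of-unity sheaves.
\[
  \mathbb{Z}_\ell(1) \;:=\; \bigl(\mu_{\ell^n}\bigr)_{n \ge 1},
  \qquad
  \mu_{\ell^{n+1}} \xrightarrow{\; x \,\mapsto\, x^{\ell} \;} \mu_{\ell^n}.
\]
```

Over $X = \operatorname{Spec} k$ this corresponds to the continuous action of $\operatorname{Gal}(k^{\mathrm{sep}}/k)$ on $\varprojlim_n \mu_{\ell^n}(k^{\mathrm{sep}})$ through the cyclotomic character.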
ℓ-adic cohomology
An ℓ-adic cohomology group is an inverse limit of étale cohomology groups with certain torsion coefficients.
The "derived category" of constructible ℓ-adic sheaves
In a way similar to that for ℓ-adic cohomology, the derived category $D^b_c(X, \mathbb{Q}_\ell)$ of constructible $\mathbb{Q}_\ell$-sheaves is defined essentially as an inverse limit of the derived categories of constructible $\mathbb{Z}/\ell^n$-sheaves.
One author writes: "in daily life, one pretends (without getting into much trouble) that $D^b_c(X, \mathbb{Q}_\ell)$ is simply the full subcategory of some hypothetical derived category ..."
See also
Fourier–Deligne transform
References
Exposé V, VI of
External links
Mathoverflow: A nice explanation of what is a smooth (ℓ-adic) sheaf?
Number theory learning seminar 2016–2017 at Stanford
Algebraic geometry | ℓ-adic sheaf | Mathematics | 800 |
3,231,205 | https://en.wikipedia.org/wiki/Incongruent%20melting | Incongruent melting occurs when a solid substance being partially melted does not melt uniformly, so that the chemical composition of neither the resulting liquid nor the resulting solid is the same as that of the original solid. For example, melting of orthoclase (KAlSi3O8) produces leucite (KAlSi2O6) in addition to a melt. The melt produced is richer in silica (SiO2). The proportions of leucite and melt formed can be recombined to yield the bulk composition of the starting feldspar. Another mineral that can melt incongruently is enstatite (Mg2Si2O6), which produces forsterite (Mg2SiO4) in addition to a melt richer in SiO2 when melting at low pressure. Enstatite melts congruently at higher pressures between 2.5 and 5.5 kilobars.
See also
Congruent melting
Incongruent transition
Phase diagram
References
Incongruent melting from Eric Weisstein's World of Physics
Materials science
Phase transitions
Geochemistry | Incongruent melting | Physics,Chemistry,Materials_science,Engineering | 227 |
13,096,236 | https://en.wikipedia.org/wiki/Ziwei%20enclosure | The Purple Forbidden enclosure ( Zǐ wēi yuán) is one of the San Yuan ( Sān yuán) or Three Enclosures. Stars and constellations of this group lie near the north celestial pole and are visible all year from temperate latitudes in the Northern Hemisphere.
Asterisms
The asterisms are:
See also
Twenty-Eight Mansions
References
Chinese constellations
Chinese astrology
Purple | Ziwei enclosure | Astronomy | 77 |
3,196,336 | https://en.wikipedia.org/wiki/Tributyltin%20oxide | Tributyltin oxide (TBTO) is an organotin compound chiefly used as a biocide (fungicide and molluscicide), especially a wood preservative. Its chemical formula is [(C4H9)3Sn]2O. It is a colorless viscous liquid. It is poorly soluble in water (20 ppm) but highly soluble in organic solvents. It is a potent skin irritant.
Historically, tributyltin oxide's biggest application was as a marine anti-biofouling agent. Concerns over toxicity of these compounds have led to a worldwide ban by the International Maritime Organization. It is now considered a severe marine pollutant and a Substance of Very High Concern by the EU. Today, it is mainly used in wood preservation.
References
External links
National Pollutant Inventory Fact Sheet for organotins
PBT substances
Fungicides
Molluscicides
Organotin compounds
Oxides
Tin(IV) compounds
Butyl compounds | Tributyltin oxide | Chemistry,Biology | 208 |
31,019,664 | https://en.wikipedia.org/wiki/C16H19NO4 | {{DISPLAYTITLE:C16H19NO4}}
The molecular formula C16H19NO4 may refer to:
Benzoylecgonine, the main metabolite of cocaine
Norcocaine, a minor metabolite of cocaine
Norscopolamine, a tropane alkaloid isolated from Atropanthe sinensis | C16H19NO4 | Chemistry | 75 |
13,823,014 | https://en.wikipedia.org/wiki/Krener%27s%20theorem | In mathematics, Krener's theorem is a result attributed to Arthur J. Krener in geometric control theory about the topological properties of attainable sets of finite-dimensional control systems. It states that any attainable set of a bracket-generating system has nonempty interior or, equivalently, that any attainable set has nonempty interior in the topology of the corresponding orbit. Heuristically, Krener's theorem prohibits attainable sets from being hairy.
Theorem
Let
$$\dot q = f(q, u)$$
be a smooth control system, where the state $q$ belongs to a finite-dimensional manifold $M$ and the control $u$ belongs to a control set $U$. Consider the family of vector fields $\mathcal{F} = \{f(\cdot, u) \mid u \in U\}$.
Let $\operatorname{Lie} \mathcal{F}$ be the Lie algebra generated by $\mathcal{F}$ with respect to the Lie bracket of vector fields.
Given $q \in M$, if the vector space $\operatorname{Lie}_q \mathcal{F} = \{g(q) \mid g \in \operatorname{Lie} \mathcal{F}\}$ is equal to $T_q M$,
then $q$ belongs to the closure of the interior of the attainable set from $q$.
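A standard bracket-generating example (stated here for illustration; it is not taken from the article's references) on $M = \mathbb{R}^3$:

```latex
% Heisenberg-type system: two control vector fields whose single bracket
% completes a frame of R^3, so the hypothesis of Krener's theorem holds
% at every point.
\[
  \dot q = u_1 f_1(q) + u_2 f_2(q),
  \qquad
  f_1 = \partial_x, \quad f_2 = \partial_y + x\,\partial_z,
\]
\[
  [f_1, f_2] = \partial_z
  \quad\Longrightarrow\quad
  \operatorname{Lie}_q \mathcal{F} = T_q \mathbb{R}^3 \ \text{for every } q,
\]
```

so every attainable set of this system has nonempty interior.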
Remarks and consequences
Even if $\operatorname{Lie}_q \mathcal{F}$ is different from $T_q M$,
the attainable set from $q$ has nonempty interior in the orbit topology,
as it follows from Krener's theorem applied to the control system restricted to the orbit through $q$.
When all the vector fields in $\mathcal{F}$ are analytic, $\operatorname{Lie}_q \mathcal{F} = T_q M$ if and only if $q$ belongs to the closure of the interior of the attainable set from $q$. This is a consequence of Krener's theorem and of the orbit theorem.
As a corollary of Krener's theorem one can prove that if the system is bracket-generating and if the attainable set from $q$ is dense in $M$, then the attainable set from $q$ is actually equal to $M$.
References
Control theory
Theorems in dynamical systems | Krener's theorem | Mathematics | 316 |
14,441,078 | https://en.wikipedia.org/wiki/GNRHR2 | Putative gonadotropin-releasing hormone II receptor is a protein that in humans is encoded by the GNRHR2 gene.
Function
The receptor for gonadotropin-releasing hormone 2 (GnRH2) is encoded by the GnRH2 receptor (GnRHR2) gene. In non-hominoid primates and non-mammalian vertebrates, GnRHR2 encodes a seven-transmembrane G protein-coupled receptor. However, in humans, the N-terminus of the predicted protein contains a frameshift and premature stop codon. In humans, GnRHR2 transcription occurs but whether the gene produces a functional C-terminal multi-transmembrane protein is currently unresolved. Alternative splice variants have been reported. An untranscribed pseudogene of GnRHR2 is also on chromosome 14.
See also
Gonadotropin-releasing hormone receptor
References
Further reading
External links
G protein-coupled receptors | GNRHR2 | Chemistry | 207 |
63,649,139 | https://en.wikipedia.org/wiki/Sum%20of%20residues%20formula | In mathematics, the residue formula says that the sum of the residues of a meromorphic differential form on a smooth proper algebraic curve vanishes.
Statement
In this article, X denotes a proper smooth algebraic curve over a field k. A meromorphic (algebraic) differential form $\omega$ has, at each closed point $x$ in $X$, a residue, which is denoted $\operatorname{res}_x(\omega)$. Since $\omega$ has poles only at finitely many points, in particular the residue vanishes for all but finitely many points. The residue formula states:
$$\sum_{x \in X} \operatorname{res}_x(\omega) = 0.$$
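A standard illustration (a routine computation, not taken from the cited references): on the projective line, the form $dz/z$ has simple poles only at $0$ and $\infty$.

```latex
% In the coordinate w = 1/z at infinity, dz/z = -dw/w, so the residue
% at infinity is -1 and the two residues cancel, as the formula predicts.
\[
  X = \mathbb{P}^1_k, \quad \omega = \frac{dz}{z}:
  \qquad
  \operatorname{res}_0(\omega) = 1,
  \quad
  \operatorname{res}_\infty(\omega) = -1,
  \quad
  \sum_{x \in X} \operatorname{res}_x(\omega) = 0.
\]
```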
Proofs
A geometric way of proving the theorem is by reducing it to the case where X is the projective line and proving it in that case by explicit computations.
Tate proves the theorem using a notion of traces for certain endomorphisms of infinite-dimensional vector spaces. The residue of a differential form can be expressed in terms of traces of endomorphisms on the fraction field of the completed local rings, which leads to a conceptual proof of the formula. A more recent exposition along similar lines makes more explicit use of the notion of Tate vector spaces.
References
Algebraic geometry
Algebraic curves
Differential forms | Sum of residues formula | Mathematics,Engineering | 228 |
30,971,879 | https://en.wikipedia.org/wiki/C20H18O4 | {{DISPLAYTITLE:C20H18O4}}
The molecular formula C20H18O4 may refer to:
Glabrene, an isoflavonoid that is found in Glycyrrhiza glabra (licorice)
Phaseolin (pterocarpan), a prenylated pterocarpan found in French bean (Phaseolus vulgaris) seeds and in the stems of Erythrina subumbrans | C20H18O4 | Chemistry | 100 |
10,949,176 | https://en.wikipedia.org/wiki/Wildlife%20of%20the%20Gambia | The wildlife of the Gambia is dictated by several habitat zones over the Gambia's land area of about 10,000 km2. It is bound in the south by the savanna and on the north by the Sudanian woodlands. The habitats host abundant indigenous plants and animals, in addition to migrant species and newly planted species. They vary widely and consist of the marine system, coastal zone, estuary with mangrove vegetation coupled with Banto Faros (barren hypersaline flats), river banks with brackish and fresh water zones, swamps covered with forests and many wetlands.
According to the government of the Gambia, about 3.7% of the land area of the country has been brought under national parks or reserves, and the present wildlife policy is to extend this coverage to 5%. The seven areas included in the protected list are the Niumi National Park, Kiang West National Park, River Gambia National Park, Bao Bolong Wetland Reserve, Abuko Nature Reserve, Tanbi Wetland Complex and the Tanji Karinti River Bird Reserve. These are managed by the Department of Parks and Wildlife Management. The area covered by these parks is 38,000 ha.
The birdlife in the Gambia is colourful and rich, with 560 species inhabiting coastal saltwater, freshwater wetlands, Guinea and Sudan savanna, woodlands and forests, agricultural lands, towns and villages. It is thus a biodiversity hot spot for ornithologists.
Geography
The flat terrain of the Gambia, drained by the Gambia River is categorised under several habitat types. The habitat types are the coast, mangroves and Banto Faros, wetlands, farmlands, savanna and the Sahel habitats, gallery forests and urban habitats.
The coast extends into the sea, with extensive seagrass meadows where aquatic fauna dominate; the long stretch of beach here, with occasional hills or cliffs, is topped with salt-tolerant plants which bind the terrain. Above these are the very old beaches at an elevation where the rich vegetation consists of baobab (Adansonia digitata) and ron palm (Borassus aethiopum) species interspersed with shrubs and grassland. The seagrass meadows of the coastal areas are rich in green sea turtles (Chelonia mydas), dolphins, common minke whales (Balaenoptera acutorostrata) and Mediterranean monk seals (Monachus monachus), which feed on the fish.
Mangroves and Banto Faros are the mangrove swamp forests seen at the mouth of the Gambia River and extending along the river inland up to Kaur, into the brackish river stretch. It is a transitional zone between aquatic and terrestrial habitats. Two types of mangrove forests, namely the white mangrove colonies and the red mangroves, are of short height near the coast but rise to 15–20 m upstream. Banto Faros are found, after the mangroves, in flat lands which are barren and salt encrusted; however, succulent plants grow in some of the less saline thick mats. Fishes spawn in the mangroves before they move offshore into the sea. Mangrove oysters (Crassostrea gasar and Crassostrea tulipa) are also extensively found on mangrove trees in this habitat. The trees are used as firewood and as building material.
Wetlands consist of salt pans, lagoons, marshes, mangrove swamps, mudflats, saltwater rivers, fresh water reaches of rivers such as the Gambia River, flooded sand mines, watering holes for animals, paddy fields and ephemeral marshes with reed vegetation in flooded areas. Crustaceans, annelid worms and molluscs are the fauna in the wetlands where migrant birds and wading birds find their feed.
Farmlands, including savanna and woodland, form now a dominant habitat in the Gambia where crops were grown initially on a rotation system of 20 years with a fallow period. This practice has since changed to two or three years rotation. While the agricultural crops grown are sorghum, millet and ground nuts, the tree species retained, within the lands cleared for agricultural use, are the Acacia albida, baobab, ficus species, winterthorn and African locust bean (Parkia biglobosa).
Savanna and the Sahel habitats are of two types. One is the southern Guinea savanna which has rich and dense vegetation of over 50 tree species. The other is the Sudanian savanna which is contiguous to the Guinea savanna on the north bank of the Gambia River. These areas are dry woodlands with soils of laterite formations. The local tree species include silk cotton (Bombax costatum), dry-zone mahogany (Khaya senegalensis) in deeper soil areas and African rosewood (Guibourtia coleosperma). Short grasses and shrubs are seen thinly spread in the Sahelian habitats.
The gallery forests (moisture forests), unlike the rainforest, thrive on groundwater and are integral to savannas. It is a rare habitat found only in Abuko Nature Reserve, Pirang Forest Park and in some stretches of the Gambia River with different set of species with some degree of overlap with rainforests.
Urban habitats consist of numerous villages. The open areas in between have large green stretches with a profusion of tree species, particularly mango (Mangifera indica) trees.
Law of the land
Parts of the land area of the Gambia are protected under the Banjul Declaration of 1977, the country's wildlife law, which covers seven protected zones. The law prohibits all types of hunting, except of animals harmful to the environment, such as warthogs, giant pouched rats and francolins. Also, as a signatory to the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), the Gambia enforces laws prohibiting the export, or even possession within the country, of any animal skins, horns or turtle shells.
Fauna
Mammals
More than 100 species of mammals have been reported.
Bats
There are 30–40 species of bats. They are of two types, fruit-eating and insect-eating bats. Straw-coloured fruit bats (Eidolon helvum) and epauletted fruit bats (Epomophorus gambianus) are the most common species. Bats help by eating malaria-carrying insects, with each bat consuming about 3,000 mosquitoes per night. Eidolon helvum is a commonly seen bat during the rainy season when flowers and fruits (mangoes) are in full bloom.
Rodents
Rodents include the Gambian sun squirrel (Heliosciurus gambianus), striped ground squirrels (Xerus erythropus), nocturnal crested porcupines (Hystrix cristata) and the brush-tailed porcupine; small carnivores such as mongooses, civets and genets are also found.
Aquatic mammals
Aquatic mammals include two species of dolphin, the Atlantic bottlenose dolphin (Tursiops truncatus) and the Atlantic humpback dolphin (Sousa teuszii) and the West African manatee (Trichechus senegalensis).
Carnivores
Leopards and hyenas are still thought to occasionally cross into the more remote areas of East Gambia.
Herbivores
Bushbuck, Maxwell's duiker, warthog and hippopotamus.
Primates
Bijilo forests have the endangered western red colobus monkey and the callithrix or green monkey (Chlorocebus sabaeus). Kiang West National Park has baboons and patas monkeys (Erythrocebus patas), as well as the Senegal bushbaby (Galago senegalensis) and Campbell's mona monkey (Cercopithecus campbelli). The River Gambia National Park has chimpanzees (Pan troglodytes). The western red colobus (Procolobus badius) is a common sight in the Kiang West National Park, Bijilo Forest Park and Abuko Nature Reserve. The Guinea baboon (Papio papio), which is large in size and fierce in appearance, is found in the northern region and also in small numbers in the coastal Makasutu Culture Forest.
The aardvark is also still reported, although very rarely seen.
Reptiles and amphibians
There are 40 snake species, 9 of which are venomous, such as cobras, puff adders and mambas (genus Dendroaspis); the first two are common. The non-venomous reptiles include pythons and bush snakes, lizards such as agamas and skinks (with brown and orange flanks), Bosc's monitor and Nile monitors (Varanus niloticus), which are voracious predators, tree geckos, and chameleons.
The three species of crocodile reported are the slender-snouted crocodile (Mecistops cataphractus), the West African dwarf crocodile (Osteolaemus tetraspis) and the West African crocodile; the first two are on the endangered list.
Amphibians comprise 33 species, including toads, tree frogs, crowned bullfrogs, edible bullfrogs and reed frogs.
Birds
More than 500 species of birds live in the Gambia. The Bijilo Forest Park and the Abuko are important bird habitats. Birds seen here include pelicans, spoonbills, the yellow-billed stork (Mycteria ibis), Goliath heron (Ardea goliath), blue-cheeked bee-eater (Merops persicus), mouse-brown sunbird (Anthreptes gabonicus) and the African hawk-eagle (Hieraaetus spilogaster) (in the river valleys). Wetland bird species are the white-faced whistling duck (Dendrocygna viduata), sacred ibis (Threskiornis aethiopicus), palm-nut vulture (Gypohierax angolensis), crakes, greater painted snipe (Rostratula benghalensis) and African jacana (Actophilornis africanus).
Butterfly distribution in the Gambia is dictated by the boundary between two major biomes, the Sahelian and Guinean savanna; it differs distinctly between the rainy and dry seasons.
Flora
The vegetation of the Gambia is mostly savanna in the upland areas, inland swamp in the low-lying areas, and mangrove swamp along the banks of the lower Gambia River. The country is almost devoid of true forest cover, the most forested area being the Bijilo Forest. Nonetheless it is biologically rich, with an estimated 11,600 plant species, many of which are used for medicinal purposes. Many plants are grown for food. The cassava (Manihot esculenta) was brought to the Gambia by the Portuguese between the 17th and 18th centuries. It grows up to 4 metres high and is a staple of the national diet, with each person consuming an average of 100 kg per annum in 2002 according to the Food and Agriculture Organization. Coastal inland forest comprises part of Bijilo Forest Park, Abuko Nature Reserve, Pirang Forest Park, and the River Gambia National Park.
The gummy Combretum glutinosum, Combretum micranthum, Combretum paniculatum and Combretum racemosum are common shrubs in the savanna areas of the country. Combretum paniculatum may be found on the edges of the forests in the north of the country. These plants usually have red petals and the Combretum racemosum has red 4-part flowers, but with inflorescence rimmed by white bracts.
References
Bibliography
Further reading
Penney, D. 2009. Common Spiders and Other Arachnids of The Gambia, West Africa. Siri Scientific Press, Manchester. .
Gambia
Biota of the Gambia | Wildlife of the Gambia | Biology | 2,398 |
32,570,560 | https://en.wikipedia.org/wiki/Pay%20%28geology%29 | Pay is an expression used in hydrocarbon mining. It denotes a portion of a reservoir that contains economically recoverable hydrocarbons. The term derives from the possibility of "paying" an income surpassing the costs. Equivalent terms are pay sand or pay zone. Overall interval in which pay volumes occur is the gross pay; smaller portions of the reservoir that meet further criteria for pay (such as permeability and hydrocarbon saturation) are net pay.
Net pay is determined by placing cutoffs on properties such as permeability, porosity, water saturation or volume of shale, so as to isolate the part of the gross rock volume that both allows fluids to flow and actually stores hydrocarbons (see the sketch below). In other cases, when determining "net reservoir", the remaining rock is rock that can still store and flow hydrocarbons but does not necessarily contain them. When the cutoff is based only on the ability of the rock to store hydrocarbons, the result is called "net rock".
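A toy sketch of such cutoff screening in Python; the property names and cutoff values below are illustrative assumptions, not industry standards.

```python
def net_pay_thickness(intervals, phi_min=0.08, sw_max=0.6, vsh_max=0.4):
    """Sum the thickness of the intervals that pass porosity,
    water-saturation and shale-volume cutoffs."""
    return sum(iv["thickness_m"] for iv in intervals
               if iv["porosity"] >= phi_min
               and iv["water_saturation"] <= sw_max
               and iv["shale_volume"] <= vsh_max)

gross_interval = [
    {"thickness_m": 2.0, "porosity": 0.12, "water_saturation": 0.35, "shale_volume": 0.10},
    {"thickness_m": 1.5, "porosity": 0.05, "water_saturation": 0.80, "shale_volume": 0.55},
]
print(net_pay_thickness(gross_interval))  # 2.0 m of net pay out of 3.5 m gross
```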
References
Petroleum geology | Pay (geology) | Chemistry | 209 |
1,631,920 | https://en.wikipedia.org/wiki/Surface%20bundle | In mathematics, a surface bundle is a bundle in which the fiber is a surface. When the base space is a circle the total space is three-dimensional and is often called a surface bundle over the circle.
See also
Mapping torus
Geometric topology | Surface bundle | Mathematics | 51 |
3,026,353 | https://en.wikipedia.org/wiki/Adverse%20yaw | Adverse yaw is the natural and undesirable tendency for an aircraft to yaw in the opposite direction of a roll. It is caused by the difference in lift and drag of each wing. The effect can be greatly minimized with ailerons deliberately designed to create drag when deflected upward and/or mechanisms which automatically apply some amount of coordinated rudder. As the major causes of adverse yaw vary with lift, any fixed-ratio mechanism will fail to fully solve the problem across all flight conditions and thus any manually operated aircraft will require some amount of rudder input from the pilot in order to maintain coordinated flight.
History
Adverse yaw was first experienced by the Wright brothers when they were unable to perform controlled turns in their 1901 glider which had no vertical control surface. Orville Wright later described the glider's lack of directional control.
Causes
Adverse yaw is a secondary effect of the inclination of the lift vectors on the wing due to its rolling velocity and of the application of the ailerons. Some pilot training manuals focus mainly on the additional drag caused by the downward-deflected aileron and make only brief or indirect mention of roll effects. In fact the rolling of the wings usually causes a greater effect than the ailerons. Assuming a roll rate to the right, the causes are explained as follows:
Lift vector deflection during rolling
During a positive rolling motion, the left wing moves upward. If an aircraft were somehow suspended in air with no motion other than a positive roll, then from the point of view of the left wing, air will be coming from above and striking the upper surface of the wing. Thus, the left wing will experience a small amount of oncoming airflow merely from the rolling motion. This can be conceptualized as a vector originating from the left wing and pointing towards the oncoming air during the positive roll, i.e. perpendicularly upwards from the left wing's surface. If this positive-rolling aircraft were additionally moving forward in flight, then the vector pointing towards the oncoming air will be mostly forward due to forward-moving flight, but also slightly upward due to the rolling motion.
Thus, for the left wing of a forward-moving aircraft, a positive roll causes the oncoming air to be deflected slightly upwards. Equivalently, the left wing's effective angle of attack is decreased due to the positive roll. By definition, lift is perpendicular to the oncoming flow. The upward deflection of oncoming air causes the lift vector to be deflected backward. Conversely, as the right wing descends, its vector pointing towards the oncoming air is deflected downward and its lift vector is deflected forward. The backward deflection of lift for the left wing and the forward deflection of lift for the right wing results in an adverse yaw moment to the left, opposite to the intended right turn. This adverse yaw moment is present only while the aircraft is rolling relative to the surrounding air, and disappears when the aircraft's bank angle is steady.
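A standard small-angle estimate (not given in the article) makes this quantitative. At a spanwise station a distance $y$ from the centerline, a roll rate $p$ superimposes a vertical velocity $p\,y$ on the oncoming flow of speed $V$, so the local angle of attack changes by approximately

$$\Delta\alpha \approx \frac{p\,y}{V},$$

and the local lift vector tilts through the same angle: backward on the up-going wing, forward on the down-going wing.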
Induced drag
Initiating a roll to the right briefly requires greater lift on the left wing than on the right. This also causes greater induced drag on the left than on the right, which further adds to the adverse yaw, but only briefly. Once a steady roll rate is established the left/right lift imbalance dwindles, while the other mechanisms described above persist.
Profile drag
The downward aileron deflection on the left increases the airfoil camber, which will typically increase the profile drag. Conversely, the upward aileron deflection on the right will decrease the camber and profile drag. The profile drag imbalance adds to the adverse yaw. A Frise aileron reduces this imbalance drag, as described further below.
Minimizing the adverse yaw
There are a number of aircraft design characteristics which can be used to reduce adverse yaw to ease the pilot workload:
Yaw stability
Strong directional stability is the first way to reduce adverse yaw. It is influenced by the vertical tail moment (the fin area and its lever arm about the center of gravity).
Lift coefficient
As the tilting of the left/right lift vectors is the major cause to adverse yaw, an important parameter is the magnitude of these lift vectors, or the aircraft's lift coefficient to be more specific. Flight at low lift coefficient (or high speed compared to minimum speed) produces less adverse yaw.
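The underlying relation is the classical induced-drag formula (standard aerodynamics, not quoted in the article): for a wing of aspect ratio $AR$ and Oswald efficiency factor $e$,

$$C_{D_i} = \frac{C_L^2}{\pi\,e\,AR},$$

so induced drag, and with it the adverse-yaw contributions tied to lift, falls off quickly as the lift coefficient is reduced.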
Aileron to rudder mixing
As intended, the rudder is the most powerful and efficient means of managing yaw but mechanically coupling it to the ailerons is impractical. Electronic coupling is commonplace in fly-by-wire aircraft.
Differential aileron deflection
The geometry of most aileron linkages can be configured so as to bias the travel further upward than downward. By excessively deflecting the upward aileron, profile drag is increased rather than reduced and separation drag further aids in producing drag on the inside wing, producing a yaw force in the direction of the turn. Though not as efficient as rudder mixing, aileron differential is very easy to implement on almost any airplane and offers the significant advantage of reducing the tendency for the wing to stall at the tip first by limiting the downward aileron deflection and its associated effective increase in angle of attack.
Most airplanes use this method of adverse yaw mitigation — particularly noticeable on one of the first well-known aircraft to ever use them, the de Havilland Tiger Moth training biplane of the 1930s — due to the simple implementation and safety benefits.
Frise ailerons
Frise ailerons are designed so that when up aileron is applied, some of the forward edge of the aileron will protrude downward into the airflow, causing increased drag on this (down-going) wing. This will counter the drag produced by the other aileron, thus reducing adverse yaw.
Unfortunately, as well as reducing adverse yaw, Frise ailerons will increase the overall drag of the aircraft much more than applying rudder correction. Therefore, they are less popular in aircraft where minimizing drag is important (e.g. in a glider).
Note: Frise ailerons were primarily designed to reduce roll control forces. The aileron leading edge is in fact rounded to prevent flow separation and flutter at negative deflections, which also limits the differential drag forces.
Roll spoilers
On large aircraft where rudder use is inappropriate at high speeds or ailerons are too small at low speeds, roll spoilers (also called spoilerons) can be used to minimise adverse yaw or increase roll moment. To function as a lateral control, the spoiler is raised on the down-going wing (up aileron) and remains retracted on the other wing. The raised spoiler increases the drag, and so the yaw is in the same direction as the roll.
References and notes
Collection of Balanced-Aileron Test Data, F. M. Rogallo, NACA WR-L 419
Aerodynamics
Gliding technology | Adverse yaw | Chemistry,Engineering | 1,464 |
25,416,600 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20January%2027%2C%202093 | A total solar eclipse will occur at the Moon's descending node of orbit on Tuesday, January 27, 2093, with a magnitude of 1.034. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse will occur about 1.3 days after perigee (January 25, 2093, at 18:45 UTC), the Moon's apparent diameter will be larger than average.
The path of totality will be visible from parts of Australia, New Caledonia, and Vanuatu. A partial solar eclipse will also be visible for parts of Antarctica, Australia, Indonesia, and Oceania.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
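As a quick check of these figures (a sketch using the standard eclipse-year length of about 346.62 days, which the article does not state):

```python
# Illustrative arithmetic only: why eclipse seasons recur just short of
# six months apart and drift earlier through the calendar each year.
ECLIPSE_YEAR = 346.62    # days for the Sun to return to the same lunar node
TROPICAL_YEAR = 365.24   # days in a calendar year

print(f"season spacing: {ECLIPSE_YEAR / 2:.1f} days (just short of 6 months)")
print(f"annual drift: {TROPICAL_YEAR - ECLIPSE_YEAR:.2f} days earlier per year")
```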
Related eclipses
Eclipses in 2093
A penumbral lunar eclipse on January 12.
A total solar eclipse on January 27.
A partial lunar eclipse on July 8.
An annular solar eclipse on July 23.
Metonic
Preceded by: Solar eclipse of April 10, 2089
Followed by: Solar eclipse of November 15, 2096
Tzolkinex
Preceded by: Solar eclipse of December 16, 2085
Followed by: Solar eclipse of March 10, 2100
Half-Saros
Preceded by: Lunar eclipse of January 22, 2084
Followed by: Lunar eclipse of February 3, 2102
Tritos
Preceded by: Solar eclipse of February 27, 2082
Followed by: Solar eclipse of December 29, 2103
Solar Saros 142
Preceded by: Solar eclipse of January 16, 2075
Followed by: Solar eclipse of February 8, 2111
Inex
Preceded by: Solar eclipse of February 17, 2064
Followed by: Solar eclipse of January 8, 2122
Triad
Preceded by: Solar eclipse of March 29, 2006
Followed by: Solar eclipse of November 28, 2179
Solar eclipses of 2091–2094
Saros 142
Metonic series
Tritos series
Inex series
References
External links
2093 in science
2093 01 27 | Solar eclipse of January 27, 2093 | Astronomy | 598 |
2,513,891 | https://en.wikipedia.org/wiki/Kuleshov%20effect | The Kuleshov effect is a film editing (montage) effect demonstrated by Russian film-maker Lev Kuleshov in the 1910s and 1920s. It is a mental phenomenon by which viewers derive more meaning from the interaction of two sequential shots than from a single shot in isolation.
Kuleshov's experiment
Kuleshov edited a short film in which a shot of the expressionless face of Tsarist matinee idol Ivan Mosjoukine was alternated with various other shots (a bowl of soup, a girl in a coffin, a woman on a divan). The film was shown to an audience who believed that the expression on Mosjoukine's face was different each time he appeared, depending on whether he was "looking at" the bowl of soup, the girl in the coffin, or the woman on the divan, showing an expression of hunger, grief, or desire, respectively. The footage of Mosjoukine was actually the same shot each time. Vsevolod Pudovkin (who later claimed to have been the co-creator of the experiment) described in 1929 how the audience "raved about the acting ... the heavy pensiveness of his mood over the forgotten soup, were touched and moved by the deep sorrow with which he looked on the dead child, and noted the lust with which he observed the woman. But we knew that in all three cases the face was exactly the same."
Kuleshov used the experiment to indicate the usefulness and effectiveness of film editing. The implication is that viewers brought their own emotional reactions to this sequence of images, and then moreover attributed those reactions to the actor, investing his impassive face with their own feelings. Kuleshov believed this, along with montage, had to be the basis of cinema as an independent art form.
The experiment itself was created by assembling fragments of pre-existing film from the Tsarist film industry, with no new material. Mosjoukine had been the leading romantic "star" of Tsarist cinema, and familiar to the audience.
Impact
Kuleshov demonstrated the necessity of considering montage as the basic tool of cinema. In Kuleshov's view, the cinema consists of fragments and the assembly of those fragments, the assembly of elements which in reality are distinct. It is therefore not the content of the images in a film which is important, but their combination. The raw materials of such an art work need not be original, but are prefabricated elements which can be disassembled and reassembled by the artist into new juxtapositions.
The montage experiments carried out by Kuleshov in the late 1910s and early 1920s formed the theoretical basis of Soviet montage cinema, culminating in the famous films of the late 1920s by directors such as Sergei Eisenstein, Vsevolod Pudovkin and Dziga Vertov, among others. These films included The Battleship Potemkin, October, Mother, The End of St. Petersburg, and The Man with a Movie Camera.
The effect has also been studied by psychologists and is well-known among modern film-makers. Alfred Hitchcock refers to the effect in his conversations with François Truffaut, using actor James Stewart as the example. In the famous "Definition of Happiness" interview which was part of the CBC Telescope program, Hitchcock also explained in detail many types of editing to Fletcher Markle. The final form, which he calls "pure editing", is explained visually using the Kuleshov effect. In the first version of the example, Hitchcock is squinting, and the audience sees footage of a woman with a baby. The screen then returns to Hitchcock's face, now smiling. In effect, he is a kind old man. In the second example, the woman and baby are replaced with a woman in a bikini, Hitchcock explains: "What is he now? He's a dirty old man."
Research
The Kuleshov effect has been studied by psychologists only in recent years. Prince and Hensley (1992) recreated the original study design but did not find the alleged effect. The study had 137 participants but was a single-trial between-subject experiment, which is prone to noise in the data. Dean Mobbs et al. did a within-subject fMRI study in 2006 and found an effect for negative, positive, or neutral valence. When a neutral face was shown behind a sad scene, it seemed sad; when it was shown behind a happy scene, it seemed happy. In 2016, Daniel Barratt et al. tested 36 participants using 24 film sequences across five emotional conditions (happiness, sadness, hunger, fear, and desire) and a neutral control condition. Again, they showed that neutral faces were rated in accordance with the stimuli material, confirming the 2006 findings of Mobbs et al.
Thus, despite the initial problems in testing the Kuleshov effect experimentally, researchers now agree that the context in which a face is shown has a significant effect on how the face is perceived.
To find out whether the Kuleshov effect can also be induced auditorily, Andreas M. Baranowski and Heiko Hecht intercut different clips of faces with neutral scenes, featuring happy music, sad music, or no music at all. They found that the music significantly influenced participants’ emotional judgments of facial expression.
See also
Creative geography was another Kuleshov experiment demonstrating the usefulness of montage.
Neurocinema
Uncanny valley
References
Further reading
External links
("Kuleshov effect. The importance of film editing")
Cinema of the Soviet Union
Film editing
Cinematic techniques
Concepts in film theory
Articles containing video clips
Cognitive psychology | Kuleshov effect | Biology | 1,175 |
2,099,543 | https://en.wikipedia.org/wiki/Waterproofing | Waterproofing is the process of making an object, person or structure waterproof or water-resistant, so that it remains relatively unaffected by water and resists the ingress of water under specified conditions. Such items may be used in wet environments or underwater to specified depths.
Water-resistant and waterproof often refer to resistance to penetration of water in its liquid state and possibly under pressure, whereas damp proof refers to resistance to humidity or dampness. Permeation of water vapour through a material or structure is reported as a moisture vapor transmission rate (MVTR).
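As a formula (the standard definition, not spelled out in the article), the MVTR is the mass of water vapour passing through a unit area in a unit time, commonly reported in grams per square metre per 24 hours:

$$\mathrm{MVTR} = \frac{m_{\mathrm{vapour}}}{A\,\Delta t} \quad \left[\frac{\mathrm{g}}{\mathrm{m}^2 \cdot 24\,\mathrm{h}}\right].$$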
The hulls of boats and ships were once waterproofed by applying tar or pitch. Modern items may be waterproofed by applying water-repellent coatings or by sealing seams with gaskets or o-rings.
Waterproofing is used in reference to building structures (such as basements, decks, or wet areas), watercraft, canvas, clothing (raincoats or waders), electronic devices and paper packaging (such as cartons for liquids).
In construction
In construction, a building or structure is waterproofed with the use of membranes and coatings to protect contents and structural integrity. The waterproofing of the building envelope in construction specifications is listed under 07 - Thermal and Moisture Protection within MasterFormat 2004, by the Construction Specifications Institute, and includes roofing and waterproofing materials.
In building construction, waterproofing is a fundamental aspect of creating a building envelope, which is a controlled environment. The roof covering materials, siding, foundations, and all of the various penetrations through these surfaces must be water-resistant and sometimes waterproof. Roofing materials are generally designed to be water-resistant and shed water from a sloping roof, but in some conditions, such as ice damming and on flat roofs, the roofing must be waterproof. Many types of waterproof membrane systems are available, including felt paper or tar paper with asphalt or tar to make a built-up roof, other bituminous waterproofing, ethylene propylene diene monomer EPDM rubber, hypalon, polyvinyl chloride, liquid roofing, and more.
Walls are not subjected to standing water, and the water-resistant membranes used as housewraps are designed to be porous enough to let moisture escape. Walls also have vapor barriers or air barriers. Damp proofing is another aspect of waterproofing. Masonry walls are built with a damp-proof course to prevent rising damp, and the concrete in foundations needs to be damp-proofed or waterproofed with a liquid coating, basement waterproofing membrane (even under the concrete slab floor where polyethylene sheeting is commonly used), or an additive to the concrete.
Within the waterproofing industry, below-ground waterproofing is generally divided into two areas:
Tanking: This is waterproofing used where the below-ground structure will be sitting in the water table continuously or periodically. This causes hydrostatic pressure on both the membrane and structure and requires full encapsulation of the basement structure in a tanking membrane, under slab and walls.
Damp proofing: This is waterproofing used where the water table is lower than the structure and there is good free-draining fill. The membrane deals with the shedding of water and the ingress of water vapor only, with no hydrostatic pressure. Generally, this incorporates a damp proof membrane (DPM) on the walls with a polythene DPM under the slab. With a higher grade DPM, some protection from short-term hydrostatic pressure can be gained by transitioning the higher quality wall DPM to the slab polythene under the footing, rather than at the footing face.
In buildings using earth sheltering, too much humidity can be a potential problem, so waterproofing is critical. Water seepage can lead to mold growth, causing significant damage and air quality issues. Properly waterproofing foundation walls is required to prevent deterioration and seepage.
Another specialized area of waterproofing is rooftop decks and balconies. Waterproofing systems have become quite sophisticated and are a very specialized area. Failed waterproof decks, whether made of polymer or tile, are one of the leading causes of water damage to building structures and of personal injury. Major problems occur in the construction industry when improper products are used for the wrong application. While the term waterproof is used for many products, each of them has a very specific area of application, and when manufacturer specifications and installation procedures are not followed, the consequences can be severe. Another factor is the impact of expansion and contraction on waterproofing systems for decks. Decks constantly move with changes in temperature, putting stress on the waterproofing systems. One of the leading causes of waterproof deck system failures is the movement of underlying substrates (plywood) that causes too much stress on the membranes, resulting in failure of the system. While beyond the scope of this reference document, waterproofing of decks and balconies is a complex assembly of many complementary elements, including the waterproofing membrane used, adequate slope and drainage, proper flashing details, and proper construction materials.
The penetrations through a building envelope must be built in a way such that water does not enter the building, such as using flashing and special fittings for pipes, vents, wires, etc. Some caulkings are durable, but many are unreliable for waterproofing.
Also, many types of geomembranes are available to control water, gases, or pollution.
From the late 1990s to the 2010s, the construction industry has had technological advances in waterproofing materials, including integral waterproofing systems and more advanced membrane materials. Integral systems such as hycrete work within the matrix of a concrete structure, giving the concrete itself a waterproof quality. There are two main types of integral waterproofing systems: the hydrophilic and the hydrophobic systems. A hydrophilic system typically uses a crystallization technology that replaces the water in the concrete with insoluble crystals. Various brands available in the market claim similar properties, but not all can react with a wide range of cement hydration by-products and thus require caution. Hydrophobic systems use concrete sealers or even fatty acids to block pores within the concrete, preventing water passage.
Sometimes the same materials used to keep water out of buildings are used to keep water in, such as a pool or pond liners.
New membrane materials seek to overcome shortcomings in older methods like polyvinyl chloride (PVC) and high-density polyethylene (HDPE). Generally, new technology in waterproof membranes relies on polymer-based materials that are very adhesive to create a seamless barrier around the outside of a structure.
Waterproofing should not be confused with roofing, since roofing cannot necessarily withstand hydrostatic head while waterproofing can.
The standards for waterproofing bathrooms in domestic construction have improved over the years, due in large part to the general tightening of building codes.
In clothing
Some garments, and tents, are designed to give greater or lesser protection against rain. For urban use raincoats and jackets are used; for outdoor activities in rough weather there is a range of hiking apparel. Typical descriptions are "showerproof", "water resistant", and "waterproof". These terms are not precisely defined. A showerproof garment will usually be treated with a water-resisting coating, but is not rated to resist a specific hydrostatic head. This is suitable for protection against light rain, but after a short time water will penetrate. A water-resistant garment is similar, perhaps slightly more resistant to water, but also not rated to resist a specific hydrostatic head. A garment described as waterproof will have a water-repellent coating, with the seams also taped to prevent water ingress there. Better waterproof garments have a membrane lining designed to keep water out but allow trapped moisture to escape ("breathability")—a totally waterproof garment would retain body sweat and become clammy. Waterproof garments specify their hydrostatic rating in millimetres of water column, ranging from 1,500 mm for light rain to 20,000 mm for heavy rain.
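These head ratings convert directly to pressure via the hydrostatic relation P = ρgh; a minimal sketch (standard physics, with values not drawn from the article):

```python
# Convert hydrostatic-head ratings (mm of water column) to pressure.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

for head_mm in (1500, 20000):
    pressure_kpa = RHO_WATER * G * (head_mm / 1000.0) / 1000.0
    print(f"{head_mm:>6} mm head ≈ {pressure_kpa:.0f} kPa")
```

By this measure a 20,000 mm garment resists roughly 200 kPa, about twice atmospheric pressure.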
Waterproof garments are intended for use in weather conditions which are often windy as well as wet and are usually also wind resistant.
Footwear can also be made waterproof by using a variety of methods including but not limited to, the application of beeswax, waterproofing spray, or mink oil.
In other objects
Waterproofing methods have been implemented in many types of objects, including paper packaging, cosmetics, and more recently, consumer electronics. Electronic devices used in military and severe commercial environments are routinely conformally coated in accordance with IPC-CC-830 to resist moisture and corrosion but encapsulation is needed to become truly waterproof. Even though it is possible to find waterproof wrapping or other types of protective cases for electronic devices, a new technology enabled the release of diverse waterproof smartphones and tablets in 2013. This method is based on a special nanotechnology coating a thousand times thinner than a human hair which protects electronic equipment from damage due to the penetration of water. Several manufacturers use the nano coating method on their smartphones, tablets, and digital cameras.
A 2013 study found that nanotextured surfaces using cone forms produce highly water-repellent surfaces. These nanocone textures are superhydrophobic (extremely water-hating).
Standards
ASTM C1127 – Standard Guide for Use of High Solids Content, Cold Liquid-Applied Elastomeric Waterproofing Membrane with an Integral Wearing Surface
ASTM D779 – Standard Test Method for Determining the Water Vapor Resistance of Sheet Materials in Contact with Liquid Water by the Dry Indicator Method
ASTM D2099 – Standard Test Method for Dynamic Water Resistance of Shoe Upper Leather by the Maeser Water Penetration Tester
ASTM D3393 – Standard Specification for Coated Fabrics Waterproofness
ASTM D6135 – Standard Practice for Application of Self-Adhering Modified Bituminous Waterproofing
ASTM D7281 – Standard Test Method for Determining Water Migration Resistance Through Roof Membranes
British Standards Institution BS.8102:2009 – "Protection of Below Ground Structures against Water from the Ground".
IEC 60529 – Degrees of protection provided by enclosures (IP Code)
ISO 2281 – Horology — Water-resistant watches
See also
Saint-Gobain
Bituminous waterproofing
Building insulation
Durable water repellent (DWR) coatings
IP Code (used on mobile phones)
Sika AG
Soundproofing
Truscon Laboratories
Water Resistant mark
Waterproof fabric
Waterproof paper
References
External links
Moisture protection
Physical quantities
Water
Gardening aids | Waterproofing | Physics,Mathematics,Environmental_science | 2,268 |
58,651,092 | https://en.wikipedia.org/wiki/Aspergillus%20striatulus | Aspergillus striatulus (also named Aspergillus striatus) is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1985. It has been isolated from mangrove mud in the Kagh Islands. It has been reported to produce asperthecin, aurantioemestrin, cycloisoemericellin, desferritriacetylfusigen, dithiosilvatin, emindol SA, emindol SB, 7-Hydroxyemodin, paxillin, 1-O-acetylpaxillin, penicillin G, sterigmatocystin, violaceic acid, violaceol I, and violaceol II.
Growth and morphology
A. striatulus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
stellatus
Fungi described in 1985
Fungus species | Aspergillus striatulus | Biology | 235 |
23,014,685 | https://en.wikipedia.org/wiki/Al-Husayn%20%28missile%29 | al-Husayn () was a short-range ballistic missile developed in Ba'athist Iraq. An upgraded version of Scud missile, the al-Husayn was widely used by the Iraqi Army during the Iran–Iraq War (1980–1988) and the Persian Gulf War (1990–1991).
Development
The origins of the al-Husayn can be traced back to the first stages of the war with Iran. Iraq was the first belligerent to use long-range artillery rockets during the Iran–Iraq War, firing limited numbers of FROG-7s at the towns of Dezful and Ahvaz. Iran responded with Scud-Bs obtained from Libya. These missiles could hit targets 185 miles away; key Iraqi cities like Sulaymaniya, Kirkuk, and Baghdad itself therefore came within the range of this weapon.
Iraq, which also deployed the Scud-B, was conversely unable to strike the main Iranian industrial centers, including the capital, Tehran, because these are located more than 300 miles from the border. To surmount the Iranian advantage, Iraqi engineers designed a program to upgrade the original Scuds into a series of ballistic missiles whose range would surpass 500 miles. The assembly facility was located near Taji.
The first development, called al-Husayn, with a range of 400 miles, allowed the Iraqi army to attack deep inside Iranian territory. The Iraqis had initiated Project 1728 for indigenous Scud engine development and production. The range was extended by reducing the original 945 kg warhead to 500 kg and increasing the propellant capacity. The warhead carried HE, although it had chemical, biological and nuclear capabilities. According to UN inspectors' reports, the Iraqis were able to produce all the major components of the system by 1991. The al-Husayn was 12.46 meters long and had a diameter of 0.88 meters. The guidance was inertial, without terminal phase. The altitude at which the motor burnt out was 31 miles, while the trajectory's highest altitude, or apogee, was 94 miles. The accuracy at impact, or circular error probable, was estimated at a radius of 1,000 meters, and the missile launch weight was 6,400 kg.
Its flight time was about eight minutes at maximum range.
The propellant combination was typical of Cold War tactical missiles: kerosene fuel ignited by a nitric acid oxidizer known as IRFNA (inhibited red fuming nitric acid). Each missile loaded 4,500 kg of liquid propellant, composed of 22% kerosene and 78% IRFNA.
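The stated percentages imply the following split (derived arithmetic, not a published specification):

```python
# Propellant load implied by the figures above: 4,500 kg total,
# 22% kerosene fuel and 78% IRFNA oxidizer.
total_kg = 4500.0
kerosene_kg = 0.22 * total_kg
irfna_kg = 0.78 * total_kg

print(f"kerosene: {kerosene_kg:.0f} kg, IRFNA: {irfna_kg:.0f} kg")
print(f"oxidizer-to-fuel ratio ≈ {irfna_kg / kerosene_kg:.2f}")
```

This works out to 990 kg of kerosene and 3,510 kg of IRFNA, an oxidizer-to-fuel ratio of about 3.5.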
The Iraqis also extended the launch rail of 11 Soviet-produced MAZ-543 artillery trucks to fit them for the longer local-built missiles. The unit responsible for the maintenance and operation of the new missiles was initially the 224 Brigade, already established since 1976 to deal with the R-17 Scuds imported from the Soviet Union in 1972.
By 1989, a second army brigade was formed, the 223, equipped with 4 locally developed trailer launchers, known as the Al-Nida, which included azimuth identification systems (AzID) for targeting. There was also a second indigenous launcher, the Al-Waleed, but it apparently never became operational.
Some concrete silos were built west of Ar Rutba, near the border with Jordan. They were destroyed by precision bombings carried out by USAF F-15s during the first hours of Operation Desert Storm.
Operational history
Iran–Iraq War (1980–1988)
Up to 200 missiles were launched against Iran between 1987 and 1988, killing some 2,000 people. Tehran, Qom and Isfahan became the usual targets. Their poor accuracy, while mostly ineffective for conducting a major strategic campaign, made them basically weapons of terror, forcing thousands of refugees out of the main Iranian cities. This exchange of ballistic missiles became known as 'the war of the cities'. The full-scale campaign lasted from 29 February 1988 until 20 April, when a truce was agreed by both sides. Iraq, which had been looking for some kind of compromise gesture from Iran, is viewed as the 'winner' by some sources.
According to Iranian sources, the fuselage and warhead were prone to breaking into fragments while re-entering the atmosphere. This phenomenon later proved an advantage as a counter-measure against the Patriot missile during the 1991 Persian Gulf War.
Persian Gulf War (1991)
Eighty-eight of these modified Scuds were fired at Saudi Arabia (46) and Israel (42) during January and February 1991.
The greatest tactical achievement of the al-Husayn was the destruction of a US military barracks in Dhahran, Saudi Arabia, on 25 February 1991, at 8:30 p.m. local time, when 28 soldiers were killed and another 110 injured, mainly reservists from Pennsylvania.
One of the units involved in this incident, the 14th Quartermaster Detachment, specializing in water-purification, suffered the heaviest toll among US troops deployed in the Persian Gulf, with 81% of its soldiers killed or wounded.
The failure of the Patriot system in tracking the Iraqi missile over Dhahran was caused by a shift in the radar's range gate, which resulted from continuous operation of the system's software for more than 100 hours without resetting.
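The commonly cited reconstruction of this fault (from the GAO report IMTEC-92-26 and Robert Skeel's analysis, not detail given in this article) is a fixed-point rounding error: system time was counted in tenths of a second, and 0.1 has no exact binary representation, so a chopped 24-bit approximation drifted further from true time the longer the system ran:

```python
import math

# Chopped binary approximation of 0.1 (about 9.5e-8 too small per tick),
# accumulated over 100 hours of uptime at 10 ticks per second.
per_tick_err = 0.1 - math.floor(0.1 * 2**23) / 2**23
ticks = 100 * 3600 * 10
drift = ticks * per_tick_err

print(f"clock drift after 100 h ≈ {drift:.2f} s")        # ~0.34 s
print(f"range-gate error at ~1676 m/s ≈ {drift * 1676:.0f} m")
```

A third of a second of clock error translates to a tracking error of several hundred metres, enough for the range gate to miss the incoming missile.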
Only 10 of the 46 al-Husayn launched at Saudi Arabia caused significant damage: two strikes on US military bases (including the army barracks at Dhahran), one on a Saudi government building, and the remaining seven on civilian facilities. The following is a detailed list of these attacks:
Attacks assessment
Besides the American soldiers, Saudi authorities reported one security guard killed and about 70 civilians injured as result of the missile strikes.
Thirty-eight of the 42 missiles aimed at Israel landed within the boundaries of that country; the other four fell on the West Bank area. Although thousands of houses and apartments were damaged by the strikes, only two people died directly as consequence of the impacts. Another 12 died from indirect causes (suffocation while wearing gas-masks and heart attacks).
The threat posed by the al-Husayn and other Scud missiles forced the coalition air forces to divert 40% of their missions to hunt the launchers along with their support vehicles and supplies. The ground war was postponed one week for this reason.
End of the program
Under the terms of the ceasefire of March 1991, corroborated by the resolution 687 of the UN Security Council, a commission (UNSCOM) was established to assure the dismantling of the Iraqi missile program. They were only allowed to purchase or produce missiles with a range no longer than 150 km. At the end of the war, the Iraqi government declared it had only 61 al-Husayn and other ballistic missiles in its arsenal. These weapons were destroyed under UNSCOM supervision. This process was completed by July 1991. However, the western powers were suspicious that the Iraqi army may have hidden as many as 200 missiles. The Iraqis took advantage of the provisions of the ceasefire by developing two types of short-range ballistic missiles, the Ababil-100 (also called al Fat'h) and the Al-Samoud, which were in an experimental phase at the time of the Invasion of Iraq in 2003. These projects were part of the casus belli raised by the American administration against Saddam Hussein.
See also
Related articles
List of missiles
Iraqi ballistic missile attacks on Saudi Arabia
1991 Iraqi missile attacks against Israel
Iraqi missiles derived from al-Husayn missile
Al Abbas
Al Hijarah
References
Notes
Bibliography
Zaloga, Steven, Ray, Lee, Laurier, Jim: Scud Ballistic Missile and Launch Systems 1955–2005, New Vanguard, 2005.
Scales, Brigadier General Robert H. Jr: Certain Victory. Brassey's, 1994.
Lowry, Richard S.: The Gulf War Chronicles: A Military History of the First War with Iraq. iUniverse, inc., 2003.
Military history of Iraq
Tactical ballistic missiles of Iraq
Iraq–Israel relations
Iraq–Saudi Arabia relations
Chemical weapon delivery systems
Short-range ballistic missiles of Iraq
Weapons and ammunition introduced in 1987
Theatre ballistic missiles | Al-Husayn (missile) | Chemistry | 1,661 |
10,293,325 | https://en.wikipedia.org/wiki/Ariadne%20%28drug%29 | Ariadne, also known chemically as 4C-D or 4C-DOM, by its developmental code name BL-3912, and by its former tentative brand name Dimoxamine, is a little-known psychoactive drug of the phenethylamine, amphetamine, and phenylisobutylamine families. It is a homologue of the psychedelics 2C-D and DOM.
The drug is a serotonin receptor agonist, including of the serotonin 5-HT2A receptor. However, it is non-hallucinogenic in animals and humans, although it still has some psychoactive effects. It may be non-hallucinogenic due to lower-efficacy partial agonism of the serotonin 5-HT2A receptor.
Ariadne was developed by Alexander Shulgin. It was studied at Bristol Laboratories as an antidepressant and for various other uses but was never marketed. There has been renewed interest in Ariadne in the 2020s owing to increased interest in psychedelics for treatment of psychiatric disorders.
Effects
In his 1991 book PiHKAL, Alexander Shulgin reported testing Ariadne on himself up to a dose of 32mg, finding that it produced "the alert of a psychedelic, with none of the rest of the package". Very little published data exists about the human pharmacology of Ariadne apart from Shulgin's limited testing; unpublished human trials reportedly observed some psychoactive effects, but no hallucinations.
In his 2011 book The Shulgin Index, Volume One: Psychedelic Phenethylamines and Related Compounds, Shulgin described (R)-Ariadne as increasing mental alertness and producing feelings of well-being at doses of 25 to 50mg. It was claimed to improve symptoms of manic depression in psychotic individuals at doses of 50 to 100mg and to improve symptoms of Parkinson's disease at a dosage of 100mg/day. Doses of up to 300mg resulted in an altered state of consciousness but still no psychedelic effects. For comparison, DOM shows psychoactive sub-hallucinogenic effects at doses of 1 to 3mg and psychedelic effects at doses of more than 3mg.
Pharmacology
Pharmacodynamics
Ariadne is a potent and selective agonist of the serotonin 5-HT2 receptors, including of the serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptors. However, it is less efficacious in activating the serotonin 5-HT2A receptor, including the Gq, G11, and β-arrestin2 signaling pathways, compared to the related drug DOM, and this weaker partial agonism may be responsible for its lack of psychedelic effects. In addition to the serotonin 5-HT2 receptors, Ariadne is a lower-affinity agonist of the serotonin 5-HT1 receptors. Ariadne shows essentially no activity at the monoamine transporters.
Ariadne shows a markedly attenuated head-twitch response, a behavioral proxy of psychedelic effects, in animals. It is thought that the reduced efficacy of Ariadne in activating the serotonin 5-HT2A receptor is responsible for its non-hallucinogenic nature. Ariadne was also shown to produce stimulus generalization in rats trained to respond to MDMA or LSD. In monkeys, the drug was found to possibly increase motivation, as it caused monkeys that had stopped running mazes to begin running them again. Ariadne has also been found to be effective in an animal model of Parkinson's disease, where it reversed motor deficits similarly to levodopa.
Serotonin 5-HT2A receptor agonists have been found to increase dopamine levels in the nucleus accumbens and other mesolimbic areas and non-hallucinogenic serotonin 5-HT2A receptor agonists like Ariadne may do so without producing psychedelic effects. This action may underlie the preliminary observations of effectiveness of Ariadne in the treatment of parkinsonism in animals and humans.
Chemistry
Ariadne, also known as 4-methyl-2,5-dimethoxy-α-ethylphenethylamine, is a substituted phenethylamine and amphetamine derivative. It is the analogue of 2,5-dimethoxy-4-methylamphetamine (DOM) in which the α-methyl group has been replaced with an α-ethyl group and is the analogue of 2,5-dimethoxy-4-methylphenethylamine (2C-D) with an ethyl group substituted at the α carbon.
Ariadne's alternative name 4C-DOM or 4C-D stands for "four-carbon DOM", whereas the name of 2C-D stands for "two-carbon DOM". Another name of Ariadne is α-Et-2C-D, which stands for α-ethyl-2C-D. Racemic Ariadne is additionally known by the former developmental code name BL-3912, while the (R)-enantiomer of Ariadne is known by the former developmental code name BL-3912A.
Other related compounds include 4C-B (the α-ethyl homologue of 2C-B and DOB) and 4C-T-2 (the α-ethyl homologue of 2C-T-2 and Aleph-2).
History
Ariadne was first synthesized by Alexander Shulgin. Shulgin reported that the drug was tested by Bristol Laboratories as an antidepressant, in an anecdote where he was explaining how human testing is invaluable (compared to animal testing) on drugs that change the state of the mind. He said, "Before they launched into a full multi-clinic study to determine whether it's going to be worth the animal studies or not, every person on the board of directors took it." In The Shulgin Index, Volume One: Psychedelic Phenethylamines and Related Compounds (2011), he described it also being evaluated for increasing mental alertness in geriatric individuals, treating Parkinson's disease, and treating psychosis and manic depression. The tentative commercial name of Ariadne was Dimoxamine. (R)-Ariadne was said to have completed phase 2 clinical trials, but the actual clinical data were never disclosed and further development was halted due to strategic economic reasons.
See also
AAZ-A-154
BMB-201
ITI-1549
References
External links
ARIADNE Entry at PiHKAL·info
5-HT2B agonists
5-HT2C agonists
2,5-Dimethoxyphenethylamines
Abandoned drugs
Experimental non-hallucinogens
Non-hallucinogenic 5-HT2A receptor agonists
Phenylisobutylamines | Ariadne (drug) | Chemistry | 1,476 |
21,464,374 | https://en.wikipedia.org/wiki/Facesitting | Facesitting, also known as queening or kinging, is a sexual practice with one partner sitting over the other's face, sometimes allowing for oral–genital or oral–anal contact. The sitting partner may face in either direction.
Components
Facesitting is common among dominant and submissive individuals, for demonstrating superiority and for sexual gratification. The full-weight body-pressure, smothering, moisture, body odors, and darkness can be perceived as powerful sexual attractions or compulsions. The person sat upon may be in bondage, sexually submissive, or simply held down by the body-weight of the other person. In some cases, the submissive will consume the dominant's bodily waste(s) (urolagnia or coprophilia) or be subjected to the dominant's flatulence (eproctophilia). The woman is on top of the man in facesitting, unlike in sex positions where the man is on top of the woman. The man may also lick her anus.
Furniture
Sometimes special furniture is used, such as a "queening stool" or "smotherbox". A queening stool is a low seat which fits over the submissive's face and contains an opening to allow oral-genital and/or oral-anal stimulation of the domme while seated.
The position allows the pelvic floor muscles (PFM) to relax, thereby partly exposing the labia minora to intimate touch. The gluteus maximus and levator ani muscles, the major muscles of the crotch, can relax and sag, allowing easy access to the vagina and anus. Additionally, the stool provides greater comfort and allows the activity to continue for a longer period of time.
A smotherbox (or "smothering box") is a special form of queening stool which also allows the person under the seat to be locked in place, restrained by the neck as in a set of stocks. A smotherbox has two openings. One is in a vertical side of the box for the neck of the person who has their head inside the smotherbox. The other is in the top of the box to expose their face. The inside of a smotherbox is often padded to provide support for their neck and prevent their head from moving. The padding may also muffle noises from the outside, causing a relaxation effect and heightening their other senses. Smotherboxes are usually custom made pieces of furniture that may have a special significance for their users. They are sometimes made out of precious woods, with leather used for the seat.
The smotherbox is placed on a stable surface. The cover (top half of the smotherbox) is open while the submissive lies down on their back and places their head in the box. When they are in position the cover is closed. The cover can have hinges or be a separate part. Locks may be used to emphasize the submissive position or the submissive's hands may be fastened above their head to the box. Smotherboxes may be more permanently mounted to tables or other stable objects, and the submissive restrained to that surface instead. A Queening Chair or Facesitting Throne is another variation that is popular, designed to emphasize the relative place of the domme and submissive. They can be elegant and more formal than the smotherbox.
In popular culture
In 1980, Monty Python recorded a humorous song, "Sit on My Face", about the pleasures of facesitting. Written by Eric Idle, the song's lyrics are sung to the melody of "Sing As We Go" by Gracie Fields. The opening gives way to multiple male voices singing "Sit on my face and tell me that you love me." The remaining lyrics contain numerous references to fellatio and cunnilingus.
The Audiovisual Media Services Regulations 2014, introduced by the British government, brought about a ban on the depiction of various sex acts in hardcore pornography on the Internet. Ostensibly, the bill sought to protect women from sex acts that were considered "violent" or "unsafe", and banned a wide variety of sex practices, including facesitting, strangulation, and fisting. This law impacted only the production of pornographic videos as opposed to acts performed privately. Protests against the law were held outside the Palace of Westminster, with protesters saying the law did not reflect the wishes of the public. In 2019, the law was overturned.
See also
BDSM
Body worship
Cunnilingus
Dominatrix
Erotic humiliation
Namio Harukawa, Japanese artist who features this extensively in his work
Maledom
Oral sex
Sex position
Teabagging
References
External links
Sexual acts
Anal eroticism
Cunnilingus
BDSM activities
Oral eroticism
Pornography terminology
Sex positions
Anus
Vulva | Facesitting | Biology | 1,000 |
31,881,102 | https://en.wikipedia.org/wiki/Lead%20Finder | Lead Finder is a computational chemistry tool designed for modelling protein-ligand interactions. It is used for conducting molecular docking studies and quantitatively assessing ligand binding and biological activity. It offers free access to users in commercial, academic, or other settings.
About
The original docking algorithm integrated into Lead Finder can be tailored for either quick but less accurate virtual screening applications or slower but more in-depth analyses.
Lead Finder is used by computational and medicinal chemists for drug discovery, as well as by pharmacologists and toxicologists involved in the in silico assessment of ADME-Tox properties. Additionally, it is used by biochemists and enzymologists working on modeling protein-ligand interactions, enzyme specificity, and rational enzyme design. Lead Finder's specialization in ligand docking and binding energy estimation is a result of its advanced docking algorithm and the precision with which it represents protein-ligand interactions.
Docking algorithm
From a mathematical perspective, ligand docking involves modelling a multidimensional surface that describes the free energy of protein-ligand binding. This surface can be highly complex, with ligands possessing as many as 15–20 degrees of freedom, such as freely rotatable bonds.
Lead Finder's approach combines the use of genetic algorithm search, local optimization techniques, and knowledge gathered during the search process.
Scoring function
The Lead Finder scoring function aims to represent protein-ligand interactions precisely; its model considers various types of molecular interactions.
In this scoring function, individual energy contributions are adjusted with empirically derived coefficients tailored to particular objectives, such as predicting binding energies, ranking the energies of docked ligand poses, and separating active from inactive compounds during virtual screening experiments. To achieve these goals, Lead Finder employs three types of scoring functions, based on the same set of energy contributions but with different sets of energy-scaling coefficients.
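The general shape of such a function is easy to sketch; the terms and coefficients below are illustrative placeholders, not Lead Finder's actual parameters:

```python
# Minimal sketch of an empirical weighted-sum scoring function.
def score(terms: dict[str, float], weights: dict[str, float]) -> float:
    """Combine energy contributions with objective-specific coefficients."""
    return sum(weights[name] * value for name, value in terms.items())

# Hypothetical energy contributions for one docked pose (kcal/mol).
energy_terms = {"vdw": -12.4, "hbond": -3.1, "electrostatic": -2.2,
                "desolvation": 4.0, "torsional_entropy": 1.5}

# One of what would be three weight sets, here tuned for dG prediction.
dG_weights = {"vdw": 0.15, "hbond": 0.9, "electrostatic": 0.4,
              "desolvation": 0.5, "torsional_entropy": 0.3}

print(f"predicted dG ≈ {score(energy_terms, dG_weights):.2f} kcal/mol")
```

Swapping in a different coefficient set, as the text describes, re-purposes the same energy terms for pose ranking or virtual screening.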
Docking success rate
Docking success rate was benchmarked as the percentage of correctly docked ligands for sets of protein-ligand complexes extracted from the PDB. Results showed root mean squared deviations of 2 Å or less for 80–96% of the structures in the respective test sets (those of FlexX, Glide SP, Glide XP, Gold, LigandFit, MolDock, and Surflex).
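The success metric itself is straightforward to compute; a minimal sketch with made-up three-atom coordinates (not real docking output):

```python
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Root mean squared deviation between matched atom coordinates."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

pose_pred = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.1]])
pose_xtal = np.array([[0.1, 0.0, 0.1], [1.4, 0.1, 0.0], [2.9, 0.0, 0.0]])

print(f"RMSD = {rmsd(pose_pred, pose_xtal):.2f} Å (docked correctly if <= 2 Å)")
```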
References
Molecular modelling software
Molecular modelling | Lead Finder | Chemistry | 468 |
58,630,722 | https://en.wikipedia.org/wiki/Aspergillus%20spelunceus | Aspergillus spelunceus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1965. It has been isolated from dead cane crickets on the floor of Laurel Creek Cave in West Virginia, United States.
References
spelunceus
Fungi described in 1965
Fungus species | Aspergillus spelunceus | Biology | 75 |
15,071,137 | https://en.wikipedia.org/wiki/HOXC10 | Homeobox protein Hox-C10 is a protein that in humans is encoded by the HOXC10 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, which are located on different chromosomes and consist of 9 to 11 genes arranged in tandem. This gene is one of several homeobox HOXC genes located in a cluster on chromosome 12. The protein level is controlled during cell differentiation and proliferation, which may indicate this protein has a role in origin activation.
Pathology
HOXC10 is overexpressed in breast cancer and transcriptionally regulated by estrogen via involvement of histone methylases MLL3 and MLL4.
Methylation of the estrogen-repressed gene HOXC10 in breast cancer determines resistance to aromatase inhibitors. This epigenetic reprogramming of HOXC10 is observed in endocrine-resistant breast cancer.
References
Further reading
External links
Transcription factors | HOXC10 | Chemistry,Biology | 246 |
22,355,848 | https://en.wikipedia.org/wiki/Consed | Consed is a program for viewing, editing, and finishing DNA sequence assemblies. Originally developed for sequence assemblies created with phrap, recent versions also support other sequence assembly programs like Newbler.
History
Consed was originally developed as a contig editing and finishing tool for large-scale cosmid shotgun sequencing in the Human Genome Project. At genome sequencing centers, Consed was used to check assemblies generated by phrap, solve assembly problems like those caused by highly identical repeats, and finishing tasks like primer picking and gap closure.
Development of Consed has continued after the completion of the Human Genome Project. Current Consed versions support very large projects with millions of reads, enabling its use with newer sequencing methods like 454 sequencing and Solexa sequencing. Consed also has advanced tools for finishing tasks like automated primer picking.
See also
Phred
Phrap
References
External links
Consed homepage
Bioinformatics software
Computational science | Consed | Chemistry,Mathematics,Biology | 190 |
39,024,610 | https://en.wikipedia.org/wiki/CTI%20Electronics%20Corporation | CTI Electronics Corporation is a manufacturer of industrial computer peripherals such as rugged keyboards, pointing devices, motion controllers, analog joysticks, USB keypads and many other industrial, military, medical, or aerospace grade input devices. CTI Electronics Corporation products are made in the United States and it is a well-known supplier of input devices to some of the most notable private defense contractors in the world, including Lockheed Martin, DRS Technologies, Computer Sciences Corporation, General Dynamics, BAE Systems, L3 Communications, AAI, Northrop Grumman, Raytheon, Boeing, Thales Group and many more companies that provide security and defense around the world. CTI also supplies Homeland Security and United States Department of Defense supporting their efforts in protecting and serving the country and military personnel of the United States.
Background
CTI Electronics Corporation was started in 1986 and is currently located in Stratford, Connecticut.
Industries
CTI's products are used in a variety of industries; the company specializes in reliable industrial-grade input devices for use in harsh environments. Its products are currently being used in the control systems of UAVs, UUVs, and UGVs. CTI has also donated industrial joysticks to students at UW-Madison for research into the Standing Paraplegic Omni-directional Transport (SPOT).
Product certifications
NEMA 4
NEMA 4X
NEMA 12
IP54
IP66
RoHS
CE
ISO 9001:2008
References
External links
CTI Electronics Home Page
Computer companies of the United States
Computer hardware companies
Computer peripheral companies
Computer companies established in 1986
Electronics companies established in 1986
Companies based in Stratford, Connecticut
American companies established in 1986 | CTI Electronics Corporation | Technology | 334 |
42,264,469 | https://en.wikipedia.org/wiki/WHI3 | WHI3 is a developmental regulator in budding yeast. It influences cell size and the cell cycle by binding CLN3 mRNA and inhibiting its translation. This, in turn, inhibits the G1/S transition.
Function
WHI3 mediates many, often vital, processes such as the cell cycle, meiosis, filamentous growth and mating.
Regulation of the cell cycle is done by acting on the cyclin CLN3, a protein crucial to the G1/S transition in budding yeast. WHI3 acts by binding CLN3 mRNA, and then co-localizes, to form cytoplasmic foci. This locally restricts synthesis of the short-lived CLN3 protein, thus limiting its range. During G1, yeast has the ability to choose from a multitude of developmental options: meiosis, filamentation and mating. This is possible only when the cell arrests in G1, allowing it to continue down a different pathway.
It is also known that WHI3 directly interacts with Cdc28, and is needed to localize it to the cytoplasm during early G1. WHI3 forms a complex with the CLN3 protein, which is needed for the accumulation of Cdc28. In late G1, however, Cdc28 has been observed to localize to the nucleus.
Another, more recently discovered function of WHI3 is encoding memory in budding yeast cells. Budding yeast is capable of both sexual and asexual reproduction. During sexual reproduction, two yeast cells signal their presence by diffusing pheromones, and it has been shown that when a cell is exposed to mating pheromones but does not perform mating, it "remembers" the event and is less likely to undergo mating afterwards. When exposed to pheromones, yeast will undergo cell-cycle arrest and attempt to mate; however, within the first three hours it will escape the arrest, and the previously inhibited CLN3 will resume activity. The WHI3 protein then aggregates to form a super-assembly, which is inactive and partially insoluble. This then forces the cell to continue with budding, since it is now conditioned against cell-cycle arrest. The daughter cells obtained from this budding, however, are not conditioned against mating, unlike the mother: the WHI3 aggregates have been shown to localize within the mother cell. This results in a mother cell retaining the memory of the previous encounter over multiple generations, while the new daughter cells are still responsive to mating cues.
Structure
Using the WHI3 sequence, the protein is predicted to have a mass of 71,257 Da (about 71.3 kDa), an isoelectric point of 8.65, and a codon bias of 0.13.
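Predictions of this kind can be reproduced from a sequence with standard tools; a minimal sketch using Biopython's ProtParam module (the sequence below is a placeholder fragment, so its output will not match the cited values; the full Whi3 sequence would be fetched from a database such as UniProt or SGD):

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder fragment for illustration only, not the real Whi3 sequence.
seq = "MSTNNEDFSKLLNAGGRRQSTPAHHLQQQ"
analysis = ProteinAnalysis(seq)

print(f"predicted mass: {analysis.molecular_weight():.0f} Da")
print(f"predicted pI:   {analysis.isoelectric_point():.2f}")
```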
It also has been shown to have an RNA binding motif, similar to RNP-1 and RNP-2.
Its Cdc28-recruitment region has been shown to lie in its N-terminal region, spanning amino acids 121–220.
References
Proteins | WHI3 | Chemistry | 622 |
41,084 | https://en.wikipedia.org/wiki/Effective%20height | In telecommunications, the effective height of an antenna is the height of the antenna's center of radiation above the ground.
In low-frequency applications involving loaded or nonloaded vertical antennas, the effective height is the moment of the current distribution in the vertical section, divided by the input current. For an antenna with a symmetrical current distribution, the center of radiation is the center of the distribution. For an antenna with asymmetrical current distribution, the center of radiation is the center of current moments when viewed from points near the direction of maximum radiation.
In antenna theory, it is often used interchangeably with antenna effective length, where it is defined as the ratio of the open circuit voltage across the antenna's terminals to the incident electric field of a radio signal.
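In symbols (standard definitions consistent with the text, though not printed in the article): for a vertical antenna of physical height $H$ carrying a current distribution $I(z)$ with input current $I_0$, and for a receiving antenna developing open-circuit voltage $V_{\mathrm{oc}}$ in an incident field of strength $E$,

$$h_{\mathrm{eff}} = \frac{1}{I_0}\int_0^H I(z)\,\mathrm{d}z, \qquad h_{\mathrm{eff}} = \frac{V_{\mathrm{oc}}}{E}.$$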
See also
Antenna factor
References
Antennas | Effective height | Engineering | 161 |
59,642,940 | https://en.wikipedia.org/wiki/Clara%20Barker | Clara Michelle Barker is a British engineer and materials scientist. In 2017 she received the Points of Light award from the UK Prime Minister's Office for her volunteer work raising awareness of lesbian, gay, bisexual and transgender issues. As a result, she has become a significant role model for the LGBT+ community.
Career and research
Barker completed her thesis on thin film coating at Manchester Metropolitan University. She then held a post-doctoral position at the Swiss Federal Laboratories for Materials Science and Technology (EMPA) in Switzerland for four years, before moving to the University of Oxford, where she manages the Centre for Applied Superconductivity within the Materials Department. Her current research focuses on creating thin-film high-temperature superconductors that could be used as resonators for quantum computing devices.
Barker is currently a Daphne Jackson Trust research fellow and Dean for equality and diversity at Linacre College. She is a member of the Royal Society Diversity and Inclusion Committee. She was the vice-chair of the university's LGBT+ Advisory Group. In 2023, she was featured in place of the Seven Sisters tube station in the Engineering Icons Tube Map. In November 2023 she was appointed Inclusion and Diversity Representative by the Institute of Physics, taking over the position from Helen Gleeson.
LGBT+ advocacy
Barker is a transgender woman and an advocate for LGBT+ diversity and women in STEM. She works with a youth group in Oxfordshire, TOPAZ. She has also spoken at local schools on behalf of Stonewall and has helped Oxford City Council run an anti-HBT bullying initiative. In 2017, she was featured in a Stonewall poster campaign for Trans Day of Visibility. She also led the promotion of the Out in Oxford project, which highlights LGBT+ artefacts in museums. She has given numerous talks on LGBT+ visibility and diversity in STEM. In December 2018 Barker gave a TEDx talk entitled "Why we need to build trust to create diversity in institutions". She has also appeared on BBC Victoria Derbyshire and Sky News talking about transgender rights.
Barker has received several awards for her advocacy. In 2017 she became the 795th person to receive the Points of Light award for her work with Out in Oxford and her other volunteering. She believes that role models are necessary in all aspects of life, and her visibility in STEM has helped open the way for following generations.
In 2018, she won the staff Individual Champion/Role Model award in the Vice-Chancellor's Diversity Awards from the University of Oxford.
References
21st-century British women scientists
21st-century British engineers
Alumni of Manchester Metropolitan University
British materials scientists
British women activists
British women academics
British women's rights activists
Living people
British LGBTQ rights activists
British LGBTQ scientists
People associated with Linacre College, Oxford
Superconductivity scientists and engineers
Transgender academics
British transgender women
Transgender scientists
Women materials scientists and engineers
Year of birth missing (living people)
British women civil rights activists
21st-century British LGBTQ people | Clara Barker | Materials_science,Technology | 596 |