| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
65,444,294 | https://en.wikipedia.org/wiki/Microfluidic%20modulation%20spectroscopy | Microfluidic modulation spectroscopy (MMS) is an infrared spectroscopy technique used to characterize the secondary structure of proteins. Infrared (IR) spectroscopy is well known for this application; however, the lack of automation, repeatability and dynamic range of detection in conventional platforms such as FTIR has been a major limitation, and these shortcomings have been addressed with the development of microfluidic modulation spectroscopy.
Biophysical characterization analytical techniques
Circular dichroism spectroscopy (CD) is a technique for the characterization of secondary structure. CD is useful for α-helical protein analysis because α-helix structures give an intense signal in the CD spectrum. Fourier-transform infrared spectroscopy (FTIR) secondary structure deconvolution also relies on multivariate analysis techniques, including singular value decomposition, partial least squares, soft independent modeling of class analogy, and neural networks.
CD, like conventional FTIR, also has major drawbacks. Measurement must be carried out at low concentrations, typically 0.5 mg/mL and sometimes as low as 0.1 mg/mL, which can undermine the resulting data. The presence of some excipients in the formulation buffer can also interfere with the measurements. CD and conventional FTIR also lack sensitivity in the characterization of biopharmaceutical proteins such as the immunoglobulins IgG1 and IgG2. Microfluidic modulation spectroscopy is an automated technique that overcomes these challenges of both FTIR and CD for use in biopharmaceutical product characterization.
Applications
Higher order structure assessment
Characterization of protein higher order structure is routinely performed during the biologic product development life cycle. Because biological function is related to structure, it is important to establish that the biologic is manufactured with the expected structure (for example, that a monoclonal antibody is produced with the expected β-sheet and α-helix content). It is also important to demonstrate that the structure is not significantly affected by drug substance or drug product manufacturing changes that arise during product development. The sensitivity and accuracy of microfluidic modulation spectroscopy allow it to detect higher order structure changes in the formulation and at the concentration of interest, without the need for dilution or deuteration. The technique indicates which structural motifs in the protein molecule are changing, providing more guidance when developing stable protein molecules and formulations.
Biosimilarity
Biosimilar drug development is an important application for higher order structure comparisons. In analytical similarity studies, the higher order structure of the innovator product is compared to the biosimilar to establish similarity in the structures. Comparability and biosimilarity studies often use microfluidic modulation spectroscopy to assess the products for structural differences. The technique reveals very small conformational differences between different proteins, and provides information on where those differences occur. These capabilities make microfluidic modulation spectroscopy a powerful tool in the analysis and development of biosimilars.
Aggregation
Protein aggregation is the process by which proteins start to bind together under different conditions and formulations. If therapeutic proteins are to be safe and efficacious, their misfolding and aggregation behaviors must be well understood. Both upstream and downstream processing can cause aggregation, a common indicator of protein instability, which can result in a therapeutic product being unfit for launch.
Microfluidic modulation spectroscopy can measure previously undetectable changes in protein structural attributes, changes that are critical to drug efficacy and quality. It is one of the few techniques that can directly monitor the formation of aggregates, owing to its ability to measure intermolecular β-sheet structures.
Formulation development
A detailed understanding of the mechanisms of aggregation is essential to control stability and ensure a safe, effective drug product. Elucidating these mechanisms is a primary motivation in formulation work, and it depends on high-throughput analysis and intensive information gathering.
Formulation scientists use a core set of analytical techniques to quantify the colloidal, chemical and conformational stability parameters that define the stability of a biotherapeutic. However, this toolset has widely recognized gaps, notably an inability to measure conformational differences with high reproducibility in clinically representative formulations. For the reasons mentioned previously, microfluidic modulation spectroscopy provides the sample capacity, through 96-well-plate operation, and the technical capability to measure conformational stability, complementing the colloidal and chemical stability information available from existing techniques such as size exclusion chromatography (SEC), mass spectrometry and capillary electrophoresis.
Quality assurance (GMP/CFR compliant laboratories)
Effective quality testing acts as a safeguard of product quality, controlling critical changes in the structure of drug substances, drug products, raw materials, or excipients. Quality assurance (QA) is a systematic approach that establishes a set of guidelines for all facets of the manufacturing process that could affect product quality.
Biologic drugs are complex molecules that exhibit microheterogeneity: minor chemical variances such as glycan structural differences, deamidation, oxidation and glycation. Casting a wide analytical net helps establish the robust structure-function relationships that define the boundaries of unacceptable risk. The identification of all possible critical quality attributes (CQAs) underpins effective QA. Microfluidic modulation spectroscopy facilitates the measurement of secondary structure attributes of biopharmaceuticals at all stages of the manufacturing process. This helps establish quality parameters at stages not possible with traditional techniques.
Quantitation
The structure of proteins and how they behave in solution are affected by concentration. Accurate concentration quantitation yields better analysis and comparison of results between different proteins and formulations. There is no common analytical approach for quantitation because of the constraints of traditional techniques, such as the limited dynamic range of traditional spectroscopic tools (for example, limited resolution and detector linearity). Because the sample absorbance must fall within a very limited dynamic range, scientists are forced to take extra steps to adjust either the sample concentration or the cell path length to achieve accurate protein quantitation.
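The trade-off between concentration and path length follows from the Beer–Lambert law, which underlies absorbance-based quantitation. The numbers below are illustrative assumptions (a typical near-UV extinction coefficient for an IgG), not values from this article, but they show why a standard cuvette forces dilution while a short-path cell does not:

```latex
A = \varepsilon\, l\, c
\qquad
\underbrace{1.4\ \tfrac{\mathrm{mL}}{\mathrm{mg \cdot cm}} \times 1\ \mathrm{cm} \times 2\ \tfrac{\mathrm{mg}}{\mathrm{mL}} = 2.8}_{\text{1 cm cuvette: beyond typical detector linearity}}
\qquad
\underbrace{1.4 \times 0.0025\ \mathrm{cm} \times 200\ \tfrac{\mathrm{mg}}{\mathrm{mL}} = 0.7}_{25\ \mu\mathrm{m}\ \text{cell: measurable even at 200 mg/mL}}
```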
Microfluidic modulation spectroscopy provides direct, label-free protein quantitation over a wide concentration range and is more selective than traditional spectroscopy instrumentation, with less susceptibility to interferences. It increases sensitivity and significantly reduces the errors common with conventional spectroscopy.
Components
Microfluidic modulation spectroscopy features a tunable mid-infrared quantum cascade laser to generate an optical beam that is 1000 times brighter than those used in conventional FTIR. This enables the measurement of samples that are substantially more concentrated than is possible with other techniques, and the use of simpler detectors with no requirement for nitrogen cooling. The laser is run in continuous wave mode to generate a very high resolution (< 0.001 cm⁻¹ linewidth), low noise beam with minimal stray light that is focused through a microfluidic transmission cell with a short (25 μm) optical path length onto a thermoelectrically cooled mercury cadmium telluride (MCT) detector. This optical configuration delivers high sensitivity measurement over a concentration range of 0.1–200 mg/mL for structural characterization, and down to 0.01 mg/mL for protein quantitation, giving microfluidic modulation spectroscopy a far wider dynamic range than alternative protein characterization techniques.
In microfluidic modulation spectroscopy, the sample (protein-in-buffer) solution and a matching buffer reference stream are introduced into the transmission cell under continuous flow and then rapidly modulated (1–10 Hz) across the laser beam path to produce nearly drift-free, background-compensated, differential scans of the Amide I band. The complete optical system is sealed and purged with dry air to minimize interference from atmospheric water vapor, which absorbs across the 2000–1300 cm⁻¹ wavenumber range and can therefore compromise the use of IR spectroscopy for protein characterization. Advanced signal processing technology is the third key element of the instrument; it converts the raw spectra into fractional contribution data for specific secondary structure motifs, providing a structural fingerprint of the protein.
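As a rough illustration of the modulation idea, the sketch below computes a buffer-compensated absorbance spectrum from alternating sample and buffer scans. It is a minimal conceptual model, not the instrument's actual signal processing; the function name and data layout are hypothetical.

```python
import numpy as np

def differential_absorbance(scans, labels):
    """Buffer-compensated absorbance from interleaved scans.

    scans  : (n_scans, n_wavenumbers) single-beam intensity scans taken
             while sample and buffer streams alternate across the beam.
    labels : per-scan labels, 's' for sample, 'b' for buffer.
    """
    scans = np.asarray(scans, dtype=float)
    is_sample = np.array([l == "s" for l in labels])
    sample = scans[is_sample].mean(axis=0)
    reference = scans[~is_sample].mean(axis=0)
    # Interleaving the streams at 1-10 Hz means slow instrument drift
    # affects both nearly equally, so it cancels in the ratio.
    return -np.log10(sample / reference)

# Toy usage: four alternating scans over three wavenumber points.
scans = [[1.0, 0.9, 1.0], [1.0, 1.0, 1.0],
         [1.0, 0.9, 1.0], [1.0, 1.0, 1.0]]
print(differential_absorbance(scans, "sbsb"))  # peak of ~0.046 AU
```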
References
Infrared spectroscopy
Microfluidics | Microfluidic modulation spectroscopy | Physics,Chemistry,Materials_science | 1,641 |
14,815,615 | https://en.wikipedia.org/wiki/UBE2V1 | Ubiquitin-conjugating enzyme E2 variant 1 is a protein that in humans is encoded by the UBE2V1 gene.
Function
Ubiquitin-conjugating E2 enzyme variant proteins constitute a distinct subfamily within the E2 protein family. They have sequence similarity to other ubiquitin-conjugating enzymes but lack the conserved cysteine residue that is critical for the catalytic activity of E2s. The protein encoded by this gene is located in the nucleus and can cause transcriptional activation of the human FOS proto-oncogene. It is thought to be involved in the control of differentiation by altering cell cycle behavior. Multiple alternatively spliced transcripts encoding different isoforms have been described for this gene. A pseudogene has been identified which is also located on chromosome 20. Co-transcription of this gene and the neighboring upstream gene generates a rare transcript (Kua-UEV), which encodes a fusion protein consisting of sequence sharing identity with each individual gene product.
Interactions
UBE2V1 has been shown to interact with UBE2N.
References
Further reading | UBE2V1 | Chemistry | 231 |
56,664,867 | https://en.wikipedia.org/wiki/DingTalk | DingTalk is an enterprise communication and collaboration platform developed by Alibaba Group. It was founded in 2014 and is headquartered in Hangzhou. By 2018, it was one of the largest professional communication and management mobile apps in China, with over 100 million users. International market intentions were announced in 2018.
DingTalk provides iOS and Android apps as well as Mac and Windows clients. A version of the app for HarmonyOS was in development as of 2023.
History
On January 16, 2015, DingTalk launched the testing version 1.1.0.
On May 26, 2015, V2.0 was released, adding Ding Mail, Smart OA and shared storage.
On September 19, 2016, V3.0 was released, focusing on B2B communication and collaboration.
On January 15, 2018, DingTalk launched the English version of its application in Malaysia, its first market outside of China (although it can be downloaded and used in other markets such as Bangladesh, US, etc.). At the same time, DingTalk noted that in November 2017 it had launched hardware devices such as the “smart receptionist” that enables employee check-in by fingerprint or facial recognition.
During the COVID-19 pandemic in Wuhan, the app was the target of review bombing after it was used to send homework to quarantined school children.
On April 8, 2020, DingTalk Lite was released on various app stores across key Asian markets, including Japan, Indonesia, Malaysia, and other countries and regions. DingTalk Lite comes with essential features such as messaging, file sharing, and video conferencing. It supports video conferencing for over 300 people simultaneously and a live-broadcast function for more than 1000 participants. The app offers AI-enabled translation of messages in 14 languages, including Chinese, Japanese, and English.
Features
All messaging types, including text messages, voice messages, pictures, files and DingMails. A read/unread mode is designed to improve communication efficiency, and messages can be delivered with DING, which can alert recipients through a phone call, SMS or the app itself.
All organizational contacts are unified into one online platform
Audio conference call support for up to 30 parties
SmartWork OA for managing internal workflows such as employee leave, travel applications and reimbursement. Records can be aggregated and exported, and more third-party applications and functions are planned for integration.
DingTalk is one of the first Chinese apps to have obtained the ISO/IEC 27001:2013 standard. Data is encrypted to SSL/TLS security standards.
Smart hardware C1 and M2, "smart" applications
See also
Comparison of cross-platform instant messaging clients
Comparison of instant messaging protocols
Comparison of Internet Relay Chat clients
Comparison of LAN messengers
Comparison of VoIP software
List of SIP software
List of video telecommunication services and product brands
References
External links
Official Website
Alibaba Group
Instant messaging clients
IOS software
HarmonyOS software
Android (operating system) software | DingTalk | Technology | 612 |
45,021,102 | https://en.wikipedia.org/wiki/Acousto-electronics | Acousto-electronics (also spelled 'acoustoelectronics') is a branch of physics, acoustics and electronics that studies the interactions of ultrasonic and hypersonic waves in solids with electrons and with electromagnetic fields. Typical phenomena studied in acousto-electronics are the acousto-electric effect and the amplification of acoustic waves by flows of electrons in piezoelectric semiconductors, which occurs when the drift velocity of the electrons exceeds the velocity of sound. The term 'acousto-electronics' is often understood in a wider sense to include numerous practical applications of the interactions of electromagnetic fields with acoustic waves in solids. In particular, these include signal processing devices using surface acoustic waves (SAW) and various sensors of temperature, pressure, humidity, acceleration, etc.
See also
Acousto-optics
Rayleigh wave
Love wave
Interdigital transducer
Picosecond ultrasonics
Further reading
External links
Consortium for Applied Acoustoelectronic Technology - University of Central Florida
Acoustics
Electronics | Acousto-electronics | Physics | 213 |
12,780,114 | https://en.wikipedia.org/wiki/Adenylylation | Adenylylation, more commonly known as AMPylation, is a process in which an adenosine monophosphate (AMP) molecule is covalently attached to the amino acid side chain of a protein. This covalent addition of AMP to a hydroxyl side chain of the protein is a post-translational modification. Adenylylation involves a phosphodiester bond between a hydroxyl group of the molecule undergoing adenylylation, and the phosphate group of the adenosine monophosphate nucleotide (i.e. adenylic acid). Enzymes that are capable of catalyzing this process are called AMPylators.
The amino acids known to be targeted in proteins are tyrosine and threonine, and sometimes serine. When the charges on a protein change, its characteristics are affected, normally by altering its shape via the interactions of the amino acids that make up the protein. AMPylation can have various effects on protein properties, such as stability, enzymatic activity, co-factor binding, and many other functional capabilities. Another function of adenylylation is amino acid activation, which is catalyzed by aminoacyl-tRNA synthetases. The most commonly identified AMPylation targets are GTPases and glutamine synthetase.
Adenylylators
Enzymes responsible for AMPylation, called AMPylators or adenylyltransferases, fall into two families, distinguished by their structural properties and the mechanism used. An AMPylator is built from two homologous catalytic halves: one half catalyzes the adenylylation reaction, while the other catalyzes the phosphorolytic deadenylylation reaction. The two families are the DNA-β-polymerase-like family and the Fic family.
DNA-β-polymerase-like, is a family of Nucleotidyltransferase. It more specifically is known as the GlnE family. There is a specific motif that is used to clarify this particular family. The motif consists of a three stranded β-sheet which is part of magnesium ion coordination and phosphate binding. Aspartate is essential for the activity to occur in this family.
The Fic family (named for filamentation induced by cyclic AMP), which belongs to the Fido (Fic/Doc) superfamily, is known to perform AMPylation. The term AMPylation was coined when VopS from Vibrio parahaemolyticus was discovered to modify Rho GTPases with AMP on a threonine.
This family of proteins is found in all domains of life on earth. Its activity is mediated via an alpha helix motif at the ATP-binding site. Infectious bacteria use this domain to interrupt phagocytosis and cause cell death. Fic domains are evolutionarily conserved in prokaryotes and eukaryotes and belong to the Fido domain superfamily.
AMPylators have been shown to be comparable to kinases owing to their ATP hydrolysis activity and the reversible transfer of the metabolite to a hydroxyl side chain of the protein substrate. However, AMPylation proceeds by nucleophilic attack on the α-phosphate group, whereas a kinase in the phosphorylation reaction targets the γ-phosphate. The nucleophilic attack in AMPylation releases pyrophosphate, so pyrophosphate and the AMP-modified protein are the products of the AMPylation reaction.
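In summary form, the two transfers can be written side by side (a standard textbook rendering, not drawn from a specific cited source):

```latex
\begin{aligned}
\text{AMPylation:} &\quad \text{Protein–OH} + \text{ATP} \longrightarrow \text{Protein–O–AMP} + \text{PP}_\mathrm{i}\\
\text{Phosphorylation:} &\quad \text{Protein–OH} + \text{ATP} \longrightarrow \text{Protein–O–PO}_3^{2-} + \text{ADP}
\end{aligned}
```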
De-adenylylators
De-AMPylation is the reverse reaction, in which the AMP molecule is detached from the amino acid side chain of the protein.
There are three known mechanisms for this reaction.
The bacterial GS-ATase (GlnE) encodes a bipartite protein with separate N-terminal AMPylation and C-terminal de-AMPylation domains whose activity is regulated by PII and associated posttranslational modifications. De-AMPylation of its substrate AMPylated glutamine synthetase proceeds by a phosphorolytic reaction between the adenyl-tyrosine of GS and orthophosphate, leading to the formation of ADP and unmodified glutamine synthetase.
SidD, a protein introduced in the host cell by the pathogenic bacteria Legionella pneumophila, de-AMPylates Rab1 a host protein AMPylated by a different Legionella pneumophila enzyme, the AMPylase SidM. Whilst the benefit to the pathogen of introducing these two antagonistic effectors in the host remains unclear, the biochemical reaction carried out by SidD involves the use of a phosphatase-like domain to catalyse the hydrolytic removal of the AMP from tyrosine 77 of the host's Rab1.
In animal cells the removal of AMP from threonine 518 of BiP/Grp78 is catalysed by the same enzyme, FICD, that AMPylates BiP. Unlike the bacterial GS-ATase, FICD carries out both reactions with the same catalytic domain.
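The contrast between the phosphorolytic mechanism (GS-ATase) and the hydrolytic mechanisms (SidD, FICD) described above can be summarized schematically; the residue abbreviations are illustrative:

```latex
\begin{aligned}
\text{Phosphorolysis (GS-ATase):} &\quad \text{GS–Tyr–AMP} + \text{P}_\mathrm{i} \longrightarrow \text{GS–Tyr} + \text{ADP}\\
\text{Hydrolysis (SidD, FICD):} &\quad \text{Protein–AMP} + \text{H}_2\text{O} \longrightarrow \text{Protein} + \text{AMP}
\end{aligned}
```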
Adenylylation in Prokaryotes
Bacterial homeostasis
AMPylation is involved in bacterial homeostasis. The best-known example is the AMPylator GS-ATase (GlnE), which contributes to the complex regulation of nitrogen metabolism through AMPylation of glutamine synthetase, as introduced in the sections on AMPylation and de-AMPylation above.
Another example of AMPylators that play a role in bacterial homeostasis is the class I Fic AMPylator FicT, which modifies the GyrB subunit of DNA gyrase and the conserved ATP-binding tyrosine residue of the ParE subunit of topoisomerase IV. This inactivation of DNA gyrase by AMPylation leads to activation of the SOS response, the cellular response to DNA damage. FicT AMPylation is reversible and leads only to growth arrest, not cell death. FicT AMPylation therefore plays a role in regulating cell stress, as shown in the Wolbachia bacteria, where the level of FicT increases in response to doxycycline.
The class III Fic AMPylator NmFic of N. meningitidis is also found to AMPylate GyrB at the conserved ATP-binding tyrosine. This shows that Fic domains are highly conserved, indicating the important role of AMPylation in regulating cellular stress in bacteria. The regulation of NmFic involves concentration-dependent monomerization and auto-AMPylation for activation of NmFic activity.
Bacterial pathogenicity
Bacterial proteins known as effectors have been shown to use AMPylation. Effectors such as VopS, IbpA and DrrA have been shown to AMPylate host GTPases and cause actin cytoskeleton changes. GTPases are common targets of AMPylators. The Rho, Rab and Arf GTPase families are involved in actin cytoskeleton dynamics and vesicular trafficking, and they also play roles in cellular control mechanisms such as phagocytosis in the host cell.
The pathogen enhances or prevents its internalization by either inducing or inhibiting host cell phagocytosis. Vibrio parahaemolyticus is a Gram-negative bacterium that causes food poisoning in humans as a result of raw or undercooked seafood consumption. VopS, a type III effector found in Vibrio parahaemolyticus, contains a Fic domain with a conserved HPFx(D/E)GN(G/K)R motif whose histidine residue is essential for AMPylation. VopS blocks actin assembly by modifying a threonine residue in the switch 1 region of Rho GTPases. The transfer of an AMP moiety from ATP to the threonine residue results in steric hindrance, and thus prevents Rho GTPases from interacting with downstream effectors. VopS also adenylylates RhoA and cell division cycle 42 (CDC42), leading to disaggregation of the actin filament network. As a result, the host cell's control of the actin cytoskeleton is disabled, leading to cell rounding.
IbpA is secreted into eukaryotic cells from H. somni, a Gram-negative bacterium that causes respiratory epithelium infection in cattle. This effector contains two Fic domains at the C-terminal region. AMPylation of Rho family GTPases by the IbpA Fic domains is responsible for its cytotoxicity. Both Fic domains have effects on the host cell's cytoskeleton similar to those of VopS. AMPylation of a tyrosine residue in the switch 1 region blocks the interaction of the GTPases with downstream substrates such as PAK.
DrrA is a substrate of the Dot/Icm type IV translocation system of Legionella pneumophila. It is an effector secreted by L. pneumophila to modify GTPases of the host cells, a modification that increases the survival of the bacteria in host cells. DrrA is composed of a Rab1b-specific guanine nucleotide exchange factor (GEF) domain, a C-terminal lipid-binding domain, and an N-terminal domain with unclear cytotoxic properties. Research shows that N-terminal and full-length DrrA exhibit AMPylator activity toward the host's Rab1b protein (a Ras-related protein), which is also the substrate of the Rab1b GEF domain. Rab1b is a Rab GTPase that regulates vesicle transport and membrane fusion. Adenylylation by the bacterial AMPylator prolongs the GTP-bound state of Rab1b; the effector DrrA is thus connected to the benefits the bacterial vacuoles provide for replication during infection.
Adenylylation in Eukaryotes
Plants and yeasts have no known endogenous AMPylating enzymes, but animal genomes carry a single copy of a gene encoding a Fic-domain AMPylase, likely acquired by an early ancestor of animals via horizontal gene transfer from a prokaryote. The human protein, referred to commonly as FICD, had been previously identified as Huntingtin associated protein E (HypE; an assignment arising from a yeast two-hybrid screen, but of questionable relevance, as Huntingtin and HypE/FICD are localised to different cellular compartments). Homologues in Drosophila melanogaster (CG9523) and C. elegans (Fic-1) have also received attention. In all animals FICD has a similar structure: it is a type II transmembrane domain protein, with a short cytoplasmic domain followed by a membrane anchor that holds the protein in the endoplasmic reticulum (ER), and a long C-terminal portion that resides in the ER and encompasses tetratricopeptide repeats (TPRs) followed by a catalytic Fic domain.
Endoplasmic reticulum
The discovery of an animal cell AMPylase, followed by the discovery of its ER localisation and that BiP is a prominent substrate for its activity, were important breakthroughs. BiP (also known as Grp78) had long been known to undergo an inactivating post-translational modification, but its nature remained elusive. Widely assumed to be ADP-ribosylation, it turned out to be FICD-mediated AMPylation, as inactivating the FICD gene in cells abolished all measurable post-translational modification of BiP.
BiP is an ER-localised protein chaperone whose activity is tightly regulated at the transcriptional level via a gene-expression program known as the Unfolded Protein Response (UPR). The UPR is a homeostatic process that couples the transcription rate of BiP (and many other proteins) to the burden of unfolded proteins in the ER (so-called ER stress) to help maintain ER proteostasis. AMPylation adds another rapid post-translational layer of control of BiP's activity, as modification of Thr518 of BiP's substrate-binding domain with an AMP locks the chaperone into an inactive conformation. This modification is selectively deployed as ER stress wanes, to inactivate surplus BiP. However, as ER stress rises again, the same enzyme, FICD, catalyses the opposite reaction, BiP de-AMPylation.
An understanding of the structural basis of BiP AMPylation and de-AMPylation is gradually emerging, as are clues to the allostery that might regulate the switch in FICD's activity, but important details of this process as it occurs in cells remain to be discovered.
The role of FICD in BiP AMPylation (and de-AMPylation) on Thr518 is well supported by biochemical and structural studies. Evidence has also been presented that in some circumstances FICD may AMPylate a different residue, Thr366 in BiP's nucleotide binding domain.
Caenorhabditis elegans
Fic-1 is the only Fic protein encoded in the genome of C. elegans. It is primarily found in the ER and nuclear envelope of adult germline cells and embryonic cells, but small amounts may be found within the cytoplasm. This extra-ER pool of Fic-1 is credited with AMPylation of core histones and eEF1A-type translation factors within the nematode.
Though varying AMPylation levels did not create any noticeable effects on the nematode's behaviour or physiology, Fic-1 knockout worms were more susceptible to infection by Pseudomonas aeruginosa than counterparts with active Fic-1 domains, implying a link between AMPylation of cellular targets and immune responses in nematodes.
Drosophila melanogaster
Flies lacking FICD (CG9523) have been described as blind. Initially, this defect was attributed to a role for FICD on the cell surface of capitate projections, a putative site of neurotransmitter recycling; however, a later study implicated FICD-mediated AMPylation of BiP Thr366 in the visual defect.
Clinical significance
The presynaptic protein α-synuclein (αSyn) has been found to be a target for FICD AMPylation. Upon HypE-mediated adenylylation of αSyn, aggregation of αSyn decreases, and both neurotoxicity and ER stress were found to decrease in vitro. Thus, adenylylation of αSyn is possibly a protective response to ER stress and αSyn aggregation. However, as αSyn and FICD reside in different compartments, further research is needed to confirm the significance of these claims.
Detection
Chemical handles
Chemical handles are used to detect post-translationally modified proteins. Recently, N6pATP, an ATP analogue that carries an alkynyl tag (propargyl) at the N6 position of the adenine, has been combined with the click reaction to detect AMPylated proteins. To detect unrecognized modified proteins and to label VopS substrates, ATP derivatives with a fluorophore at the adenine N6-NH2 are used.
Antibody-based method
Antibodies are known for their high affinity and selectivity, so they are a good way to detect AMPylated proteins. Recently, α-AMP antibodies have been used to directly detect and isolate AMPylated proteins (especially AMPylated tyrosine and AMPylated threonine) from cells and cell lysates. AMPylation is a post-translational modification that alters protein properties by conferring the polar character of AMP along with hydrophobicity. Thus, rather than antibodies that recognize a whole peptide sequence, antibodies raised directly against AMP on specific amino acids are preferred.
Mass spectrometry
Many studies have used mass spectrometry (MS) in different fragmentation modes to detect AMPylated peptides. Depending on the fragmentation technique, AMPylated peptide sequences disintegrate at different parts of the AMP moiety. While electron transfer dissociation (ETD) creates a minimum of fragments and less complicated spectra, collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD) generate characteristic ions suitable for the identification of AMPylated proteins by producing multiple AMP fragments. Owing to AMP's stability, peptide fragmentation spectra are easy to read manually or with search engines.
Inhibitors
Inhibitors of protein AMPylation with inhibitory constants (Ki) ranging from 6 to 50 μM and at least 30-fold selectivity versus HypE have been discovered.
References
Biochemistry | Adenylylation | Chemistry,Biology | 3,559 |
28,846,154 | https://en.wikipedia.org/wiki/Copernicus%20%28film%29 | Copernicus () is a 1973 Polish historical film directed by Ewa Petelska and Czesław Petelski. The film was entered into the 8th Moscow International Film Festival where it won the Silver Prize. It was also selected as the Polish entry for the Best Foreign Language Film at the 46th Academy Awards, but was not accepted as a nominee.
Cast
Andrzej Kopiczyński as Mikołaj Kopernik
Barbara Wrzesińska as Anna Schilling – cousin
Czesław Wołłejko as Lukasz Watzenrode – bishop of Warmia
Andrzej Antkowiak as Andrzej Kopernik
Klaus-Peter Thiele as Georg Joachim von Lauchen gen. Rhetikus
Henryk Boukołowski as Cardinal Hipolit d'Este
Hannjo Hasse as Andreas Osiander – editor
Henryk Borowski as Tiedemann Giese – bishop of Chełmno
Jadwiga Chojnacka as Thief Kacper's mother
Aleksander Fogiel as Matz Schilling – Anna's father
Emilia Krakowska as Kacper's wife
Gustaw Lutkiewicz as Jan Dantyszek – bishop of Warmia
Leszek Herdegen as Mönch Mattheusz
Witold Pyrkosz as Prepozyt Płotowski
Wiktor Sadecki as Wojciech z Brudzewa
See also
List of submissions to the 46th Academy Awards for Best Foreign Language Film
List of Polish submissions for the Academy Award for Best Foreign Language Film
References
External links
1973 films
1970s biographical films
1970s historical films
Biographical films about scientists
Biographical films about mathematicians
Polish biographical films
Polish historical films
1970s Polish-language films
Films directed by Ewa Petelska
Films directed by Czesław Petelski
Films set in the 16th century
Cultural depictions of Nicolaus Copernicus
Films set in Poland
Films set in Warmian–Masurian Voivodeship
Films set in Kraków
Films set in Ferrara | Copernicus (film) | Astronomy | 408 |
5,551,810 | https://en.wikipedia.org/wiki/PELP-1 | Proline-, glutamic acid- and leucine-rich protein 1 (PELP1), also known as modulator of non-genomic activity of estrogen receptor (MNAR) and transcription factor HMX3, is a protein that in humans is encoded by the PELP1 gene. It is a transcriptional corepressor for nuclear receptors such as glucocorticoid receptors and a coactivator for estrogen receptors.
Proline-, glutamic acid-, and leucine-rich protein 1 (PELP1) is a transcription coregulator that modulates the functions of several hormone receptors and transcription factors. PELP1 plays essential roles in hormonal signaling, cell cycle progression, and ribosomal biogenesis. PELP1 expression is upregulated in several cancers, and its deregulation contributes to hormonal therapy resistance and metastasis; therefore, PELP1 represents a novel therapeutic target for many cancers.
Gene
PELP1 is located on chromosome 17p13.2 and is expressed in a wide variety of tissues; its highest expression levels are found in the brain, testes, ovaries, and uterus. Currently, there are two known isoforms (a long 3.8 kb form and a short 3.4 kb form), and the short isoform is widely expressed in cancer cells.
Structure
The PELP1 gene encodes a protein of 1130 amino acids that exhibits both cytoplasmic and nuclear localization depending on the tissue. PELP1 lacks known enzymatic activity and functions as a scaffolding protein. It contains 10 NR-interacting boxes (LXXLL motifs) and functions as a coregulator of several nuclear receptors via these motifs, including ESR1, ESR2, ERR-alpha, PR, GR, AR, and RXR. PELP1 also functions as a coregulator of several other transcription factors, including AP1, SP1, NFkB, STAT3, and FHL2.
PELP1 has a histone binding domain and interacts with chromatin-modifying complexes, including CBP/p300, histone deacetylase 2, histones, SUMO2, lysine-specific demethylase 1 (KDM1), PRMT6, and CARM1. PELP1 also interacts with cell cycle regulators such as pRb, E2F1, and p53.
PELP1 is phosphorylated in response to hormonal and growth factor signals. Its phosphorylation status is also influenced by cell cycle progression, and it is a substrate of CDKs. Further, PELP1 is phosphorylated by DNA damage-induced kinases (ATM, ATR, DNA-PKcs).
Function
PELP1 functions as a coactivator of several NRs and regulates genes involved in proliferation and cancer progression. PELP1 enhances the transcription functions of ESR1, ESR2, AR, GR, E2F and STAT3. PELP1 participates in the activation of ESR1 extra-nuclear actions by coupling ESR1 with Src kinase, PI3K, STAT3, ILK1 and mTOR. PELP1 participates in E2-mediated cell proliferation and is a substrate of the CDK4/cyclin D1, CDK2/cyclin E and CDK2/cyclin A complexes. Studies using a transgenic mouse model suggested the existence of an autocrine loop involving the CDK–cyclin D1–PELP1 axis in promoting mammary tumorigenesis.
PELP1 has a histone binding domain and functions as a reader of histone modifications; it interacts with epigenetic modifiers such as HDAC2, KDM1, PRMT6 and CARM1, and facilitates activation of genes involved in proliferation and cancer progression. PELP1 modulates the expression of miRNAs: PELP1-mediated epigenetic changes play an important role in the regulation of miRNA expression, and many PELP1-regulated miRNAs are involved in promoting metastasis. PELP1 is needed for an optimal DNA damage response, is phosphorylated by DDR kinases, and is important for p53 coactivation function. PELP1 also interacts with mutant p53 (MTp53), regulates its recruitment, and alters MTp53 target gene expression. PELP1 depletion contributes to increased stability of E2F1. PELP1 binds RNA and participates in RNA splicing. The PELP1-regulated genome includes several uniquely spliced isoforms. Mechanistic studies showed that PELP1's interaction with the arginine methyltransferase PRMT6 plays a role in RNA splicing.
PELP1 plays critical roles in 60S ribosomal subunit synthesis and ribosomal RNA transcription. The SENP3-associated complex comprising PELP1, TEX10 and WDR18 is involved in maturation and nucleolar release of the large ribosomal subunit. SUMO conjugation/deconjugation of PELP1 controls its dynamic association with the AAA ATPase MDN1, a key factor of pre-60S remodeling. Modification of PELP1 promotes the recruitment of MDN1 to pre-60S particles, while deSUMOylation is needed to release both MDN1 and PELP1 from pre-ribosomes.
PELP1 is widely expressed in many regions of the brain, including the hippocampus, hypothalamus, and cerebral cortex. PELP1 interacts with ESR1, Src, PI3K and GSK3β in the brain. It is essential for E2-mediated extra-nuclear signaling following global cerebral ischemia. PELP1 plays an essential role in E2-mediated rapid extranuclear signaling, neuroprotection, and cognitive function in the brain. The ability of E2 to exert anti-inflammatory effects was lost in PELP1 forebrain-specific knockout mice, indicating a key role for PELP1 in E2 anti-inflammatory signaling.
PELP1 is a proto-oncogene that provides cancer cells with a distinct growth and survival advantage. PELP1 interacts with various enzymes that modulate the cytoskeleton, cell migration, and metastasis. PELP1 deregulation in vivo promotes the development of mammary gland hyperplasia and carcinoma. PELP1 is implicated in the progression of breast, endometrial, ovarian, salivary, prostate, lung, pancreatic, and colon neoplasms.
PELP1 signaling contributes to hormonal therapy resistance. Altered localization of PELP1 contributes to tamoxifen resistance via excessive activation of the AKT pathway, and cytoplasmic PELP1 induces signaling pathways that converge on ERRγ to promote cell survival in the presence of tamoxifen. AR, PELP1 and Src form constitutive complexes in prostate cancer model cells that exhibit androgen independence. Cytoplasmic localization of PELP1 upregulates pro-tumorigenic IKKε and secreted inflammatory signals, which, through paracrine macrophage activation, regulate the migratory phenotype associated with breast cancer initiation.
Clinical significance
PELP1 is a proto-oncogene that provides cancer cells with a distinct growth and survival advantage, and PELP1 overexpression has been reported in many cancers. PELP1 expression is an independent prognostic predictor of shorter breast cancer-specific survival and disease-free interval. Patients whose tumors had high levels of cytoplasmic PELP1 exhibited a tendency to respond poorly to tamoxifen, and PELP1-deregulated tumors respond to Src kinase and mTOR inhibitors. Treatment of breast and ovarian cancer xenografts with liposomal PELP1-siRNA-DOPC formulations revealed that knockdown of PELP1 significantly reduces tumor growth. These results provided initial proof that PELP1 is a bona fide therapeutic target. Emerging data support a central role for PELP1 and its direct protein-protein interactions in cancer progression. Since PELP1 lacks known enzymatic activity, drugs that target PELP1 interactions with other proteins should have clinical utility. Recent studies described an inhibitor (D2) that blocks PELP1 interactions with AR. Since PELP1 interacts with histone modifications and epigenetic enzymes, drugs targeting epigenetic modifier enzymes may be useful against PELP1-deregulated tumors.
Notes
References
External links
Gene expression
Transcription coregulators | PELP-1 | Chemistry,Biology | 1,805 |
11,358,397 | https://en.wikipedia.org/wiki/VS%20ribozyme | The Varkud satellite (VS) ribozyme is an RNA enzyme that carries out the cleavage of a phosphodiester bond.
Introduction
The Varkud satellite (VS) ribozyme is the largest known nucleolytic ribozyme and is found embedded in VS RNA. VS RNA is a long non-coding RNA that exists as a satellite RNA and is found in the mitochondria of the Varkud-1C and a few other strains of Neurospora. The VS ribozyme contains features of both catalytic RNAs and group 1 introns. It has both cleavage and ligation activity and can perform both reactions efficiently in the absence of proteins. VS RNA undergoes horizontal gene transfer among Neurospora strains. Otherwise, the VS ribozyme has essentially nothing in common with the other nucleolytic ribozymes.
VS RNA has a unique primary, secondary, and tertiary structure. The secondary structure of the VS ribozyme consists of six helical domains. Stem-loop I forms the substrate domain, while stem-loops II-VI form the catalytic domain. When these two domains are synthesized separately in vitro, they can perform the cleavage reaction in trans. The substrate binds into a cleft formed by two helices. The likely active site of the ribozyme involves a critical nucleotide, A756. The A730 loop and the A756 nucleotide are critical to function, since they participate in the phosphoryl transfer chemistry of the ribozyme.
The Origin
VS RNA is transcribed as a multimeric transcript from VS DNA. VS DNA contains a region coding for the reverse transcriptase necessary for replication of the VS RNA. Once transcribed, VS RNA undergoes site-specific cleavage: it self-cleaves at a specific phosphodiester bond to produce monomeric and a few multimeric transcripts. These transcripts then undergo self-ligation and form circular VS RNA, which is the predominant form of VS found in Neurospora. The VS ribozyme is a small catalytic motif embedded within this circular VS RNA. The monomeric VS RNA is 881 nucleotides long.
Structure of the Ribozyme
In its natural state, the VS ribozyme motif contains 154 nucleotides that fold into six helices. Its RNA contains a self-cleavage element which is thought to act in the processing of intermediates made during replication. The H-shaped structure of the ribozyme is organized by two three-way junctions, which determine the overall fold of the ribozyme. A unique feature of the structure is that even if the majority of helix IV and the distal end of helix VI were deleted, there would be no significant loss of activity. However, if the lengths of helices III and V were changed, there would be a major loss of activity. The base bulges in helices II and IV have primarily structural roles, since replacing them with other nucleotides does not affect activity. Essentially, the VS ribozyme's activity is highly dependent on the local sequence of the two three-way junctions. The three-way junction present in the VS ribozyme is very similar to one seen in the 23S rRNA of the large ribosomal subunit.
The Active Site of the Ribozyme
The active site of the ribozyme is formed by the helical junctions, the bulges, and the critical helices III and V. One important region, found in the internal loop of helix VI, is called the A730 loop; a single base change in this loop leads to a marked loss of cleavage activity but no significant change in the folding of the ribozyme. Other modifications that affect the activity of the ribozyme include methylation and substitutions whose effects are suppressed by thiophilic manganese ions at the A730 site.
Possible Catalytic Mechanism
The A730 loop is very important in the catalytic activity of the ribozyme. The ribozyme functions like a docking station: it docks the substrate into the cleft between helices II and VI to facilitate an interaction between the cleavage site and the A730 loop. This interaction creates an environment in which catalysis can proceed, in a way similar to interactions seen in the hairpin ribozyme. Within the A730 loop, substitution of A756 by G, C or U leads to a 300-fold loss of cleavage and ligation activity.
The evidence that the A730 loop is the active site of the VS ribozyme is strong, and A756 plays an important role in its activity. The cleavage reaction proceeds by an SN2 mechanism: nucleophilic attack of the 2'-oxygen on the scissile phosphate creates a cyclic 2',3'-phosphate as the 5'-oxygen leaves. The ligation reaction occurs in reverse, with the 5'-oxygen attacking the phosphate of the cyclic 2',3'-phosphate. Both reactions are facilitated by general acid-base catalysis, which strengthens the oxygen nucleophile by removing its bonded proton and stabilizes the oxyanion leaving group through protonation. It follows that if a group behaves as a base in the cleavage reaction, it must act as an acid in the ligation reaction. Solvated metal ions can participate in general acid-base catalysis, and a metal ion might also act as a Lewis acid that polarizes phosphate oxygen atoms. Another important factor is the pH dependence of the ligation rate, which corresponds to a pKa of 5.6 and is not seen in the cleavage reaction. This dependence requires a protonated base at position A756 of the ribozyme.
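The inference of a protonated group from a kinetic pKa is ordinary acid-base arithmetic (a textbook relationship, not specific to this ribozyme). The protonated fraction of a group with pKa 5.6 at neutral pH is:

```latex
f_{\mathrm{protonated}} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}}
\qquad\Longrightarrow\qquad
\frac{1}{1 + 10^{\,7.0-5.6}} \approx 0.04
```

A reaction that requires the protonated form therefore slows roughly ten-fold per pH unit above the pKa, which is the kinetic signature used to implicate a protonated base such as A756.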
Another proposed catalytic strategy is the stabilization of a pentavalent phosphate of the reaction transition state. This mechanism would probably involve the formation of hydrogen bonds as seen in the hairpin ribozyme Furthermore, the proximity of active site groups to each other and their orientation in space would contribute to the catalytic mechanism taking place. This might bring the transition state and the substrate closer for the legation reaction to occur.
Catalysts
Very high concentrations of divalent and monovalent cations increase the efficiency of the cleavage reaction. These cations facilitate base pairing of the ribozyme with the substrate. The VS cleavage rate can be accelerated by high cation concentrations as well as by increasing the RNA concentration; therefore, a low concentration of either is rate-limiting. The cations' role is considered to be charge neutralization in the folding of the RNA rather than direct catalysis.
Hypotheses for the Evolution of the VS Ribozyme
1. The VS ribozyme is a molecular fossil of the RNA world that has retained both cleavage and ligation functions.
2. The VS ribozyme later acquired one or more of its enzymatic activities.
RNA-mediated cleavage and ligation are found in group 1 and group 2 self-splicing RNAs. VS RNA shares many conserved sequence characteristics with group 1 introns. However, the VS ribozyme splice site differs from the group 1 intron splice site, and the VS ribozyme self-cleavage site lies outside the core of the group 1 intron. In the cleavage reaction, the VS ribozyme produces a 2',3'-cyclic phosphate, whereas group 1 introns produce a 3'-hydroxyl. This functional similarity to group 1 introns, combined with the mechanistic differences from them, supports the hypothesis that the VS ribozyme is a chimera formed by insertion of a novel catalytic RNA into a group 1 intron.
External links
Nucleotide sequence and annotation of the VS DNA that encodes the VS ribozyme (at the National Center for Biotechnology Information Web site)
References
Non-coding RNA
Ribozymes | VS ribozyme | Chemistry | 1,619 |
36,481,247 | https://en.wikipedia.org/wiki/Industrias%20Licoreras%20de%20Guatemala | Industrias Licoreras de Guatemala is a Guatemalan alcohol distillery which produces different kinds of alcohol and which owns different brands. It was created at the beginning of the 20th century by Venancio, Andrés, Felipe, Jesús and Alejandro Botran, who emigrated from Spain to start a distillery business. It is a private company and it is the biggest of the three distilling companies operating in Guatemala.
Types of alcohols produced
Dark rum
White rum
Aguardiente
Bottled cocktail
Vodka
Brandy
Tequila
Whiskey
Liqueur coffee
Owned brands
Ron Zacapa Centenario XO
Ron Zacapa Centenario 23
Ron Botran Reserva
Botran Solera 1893
Ron Botran Añejo 12
Ron Botran añejo 8
Ron Botran Oro
Ron Botran XL
Sello de Oro Venado Especial
Venado Light
Venado Citron
Ron Caribbean Bay
Quetzalteca Edicion Especial
Quetzalteca Rosa de Jamaica y Tamarindo
Venado
Chaparrita
Barrilito
Anis Guaca
Valeroso Kuto
Jaguar
Tucan
Botran VIP Sabores frutales
Botran VIP Cocteles
Cubata Botran
Vodka Black by Botran
Vodka Red by Botran
Cafetto
References
External links
http://industriaslicorerasdeguatemala.com/
Distilleries
Food and drink companies of Guatemala | Industrias Licoreras de Guatemala | Chemistry | 284 |
4,557,375 | https://en.wikipedia.org/wiki/Loudness%20monitoring | Loudness monitoring of programme levels is needed in radio and television broadcasting, as well as in audio post production. Traditional methods of measuring signal levels, such as the peak programme meter and VU meter, do not give the subjectively valid measure of loudness that many would argue is needed to optimise the listening experience when changing channels or swapping disks.
The need for proper loudness monitoring is apparent in the loudness war that is now found everywhere in the audio field, and the extreme compression that is now applied to programme levels.
Loudness meters
Meters have been introduced that aim to measure human-perceived loudness by taking account of the equal-loudness contours and other factors, such as audio spectrum, duration, compression and intensity. One such device was developed by CBS Laboratories in the 1980s. Complaints to broadcasters about the intrusive level of interstitial programmes (advertisements, commercials) have resulted in projects to develop such meters. Based on loudness metering, many manufacturers have developed real-time audio processors that adjust the audio signal to match a specified target loudness level, preserving volume consistency for home listeners.
EBU Mode meters
In August 2010, the European Broadcasting Union published a new metering specification, EBU Tech 3341, which builds on ITU-R BS.1770. To make sure meters from different manufacturers provide the same reading in LUFS units, EBU Tech 3341 specifies the EBU Mode, which includes a Momentary (400 ms), Short-term (3 s) and Integrated (from start to stop) meter, together with a set of audio signals to test the meters.
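For orientation, here is a minimal sketch in Python of the BS.1770 loudness formula behind these meters. It assumes the input block has already passed through the standard's K-weighting pre-filter (omitted here for brevity), so the printed value differs slightly from a full meter; the function name and data layout are illustrative, not from the specification.

```python
import numpy as np

def momentary_loudness(kweighted_block, channel_weights=None):
    """Loudness in LUFS of one 400 ms block per ITU-R BS.1770.

    kweighted_block: (n_samples, n_channels) array, assumed to be
    ALREADY K-weighted (the spec's shelving + high-pass stage is
    omitted from this sketch).
    """
    z = np.mean(kweighted_block ** 2, axis=0)   # per-channel mean square
    if channel_weights is None:                 # BS.1770: 1.0 for L/R/C
        channel_weights = np.ones(kweighted_block.shape[1])
    return -0.691 + 10.0 * np.log10(np.sum(channel_weights * z))

# 0 dBFS 997 Hz sine in one channel of a stereo pair:
fs = 48000
t = np.arange(int(0.4 * fs)) / fs
block = np.stack([np.sin(2 * np.pi * 997 * t), np.zeros_like(t)], axis=1)
# Prints about -3.7; a full meter with the K-filter (~ +0.7 dB gain
# at 1 kHz) would read the calibrated -3.0 LUFS for this signal.
print(round(momentary_loudness(block), 1))
```

The Short-term meter applies the same core formula over a 3 s window, and the Integrated meter applies it from start to stop with the gating defined in the documents listed below.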
See also
Audio normalization
References
External links
EBU Tech 3341 audio test signals
ITU-R BS.1770: Algorithms to measure audio programme loudness and true-peak audio level
EBU publishes loudness test material
Audio engineering
Broadcast engineering
Sound production technology
Sound recording | Loudness monitoring | Engineering | 380 |
9,401,077 | https://en.wikipedia.org/wiki/Color%20LaserWriter | The Color LaserWriter was a line of PostScript four-color laser printers manufactured by Apple Computer, Inc. in the mid-1990s. These printers were compatible with PCs as well as Apple's own Macintosh line of computers, and they could connect to large networks through a 10BASE-T Ethernet port. Two models were released.
Color LaserWriter 12/600 PS
A PostScript printer, the Color LaserWriter 12/600 PS was intended for small businesses and consumers with high printing requirements. The Windows-compatible driver was of interest due to its ability to generate PostScript files (.ps) for later printing.
This printer was released in 1995, one year before its replacement with the Color LaserWriter 12/660 PS, which had the same specifications as the 12/600 PS, but was sold at a lower price.
Color LaserWriter 12/660 PS
The Color LaserWriter 12/660 PS is a color laser printer introduced by Apple in October 1996. The printer became a workhorse used in Kinko's copy stores across the United States. The printer's weight, size, speed of printing, and high cost of purchase, operation, and maintenance were its chief drawbacks.
References
External links
Driver for Windows 95
12/600 Technical Specifications on Apple.com
12/660 Technical Specifications on Apple.com
Laser printers
Apple Inc. printers
Computer-related introductions in 1995
Discontinued Apple Inc. products
Products and services discontinued in 1996 | Color LaserWriter | Technology | 297 |
73,327,906 | https://en.wikipedia.org/wiki/Nordic%20Institute%20for%20Interoperability%20Solutions | The Nordic Institute for Interoperability Solutions (NIIS) is a non-profit established in 2017 by Estonia and Finland, with the mission "to develop e-governance solutions...with the X-Road technology used nationwide in the Estonian X-tee and in the Finnish Suomi.fi Data Exchange Layer services". It is funded by both countries, with around €1 million annually. In 2019, Iceland was invited to join as well, followed later by the Faroe Islands.
The NIIS manages, develops, verifies, and audits X-Road's source code; administers documentation, business and technical requirements; conducts development; develops and implements principles of licensing and distribution; provides second-line support for members, and engages in international cooperation. It also shares vendor training and certifications on its technology.
The institute has been described as "a pioneer of cross-border e-governance solution" and "a key component of its digital diplomacy and digital foreign policy work", "unique in the world". In 2020, the Digital Public Goods Alliance found the X-Road technology managed by NIIS to be a digital public good in alignment with the Digital Public Goods Standard. Its CEO, Ville Sirviö, is often referenced in international publications.
References
External links
Internet in Estonia
Government agencies of Estonia
Government agencies of Finland
Government agencies of Iceland
Estonia–Finland relations
2017 establishments in Estonia
2017 establishments in Finland
Free and open-source software organizations
Non-profit technology | Nordic Institute for Interoperability Solutions | Technology | 298 |
8,582,684 | https://en.wikipedia.org/wiki/Reward%20system | The reward system (the mesocorticolimbic circuit) is a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), associative learning (primarily positive reinforcement and classical conditioning), and positively-valenced emotions, particularly ones involving pleasure as a core component (e.g., joy, euphoria and ecstasy). Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward". In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding. The reward system motivates animals to approach stimuli or engage in behaviour that increases fitness (sex, energy-dense foods, etc.). Survival for most animal species depends upon maximizing contact with beneficial stimuli and minimizing contact with harmful stimuli. Reward cognition serves to increase the likelihood of survival and reproduction by causing associative learning, eliciting approach and consummatory behavior, and triggering positively-valenced emotions. Thus, reward is a mechanism that evolved to help increase the adaptive fitness of animals. In drug addiction, certain substances over-activate the reward circuit, leading to compulsive substance-seeking behavior resulting from synaptic plasticity in the circuit.
Primary rewards are a class of rewarding stimuli which facilitate the survival of one's self and offspring, and they include homeostatic (e.g., palatable food) and reproductive (e.g., sexual contact and parental investment) rewards. Intrinsic rewards are unconditioned rewards that are attractive and motivate behavior because they are inherently pleasurable. Extrinsic rewards (e.g., money or seeing one's favorite sports team winning a game) are conditioned rewards that are attractive and motivate behavior but are not inherently pleasurable. Extrinsic rewards derive their motivational value as a result of a learned association (i.e., conditioning) with intrinsic rewards. Extrinsic rewards may also elicit pleasure (e.g., euphoria from winning a lot of money in a lottery) after being classically conditioned with intrinsic rewards.
Definition
In neuroscience, the reward system is a collection of brain structures and neural pathways that are responsible for reward-related cognition, including associative learning (primarily classical conditioning and operant reinforcement), incentive salience (i.e., motivation and "wanting", desire, or craving for a reward), and positively-valenced emotions, particularly emotions that involve pleasure (i.e., hedonic "liking").
Reward-related activities, such as feeding, exercise, sex, substance use, and social interactions, elevate levels of dopamine, ultimately altering the CNS (central nervous system). Dopamine is a chemical messenger that plays a role in regulating mood, motivation, reward, and pleasure.
Terms that are commonly used to describe behavior related to the "wanting" or desire component of reward include appetitive behavior, approach behavior, preparatory behavior, instrumental behavior, anticipatory behavior, and seeking. Terms that are commonly used to describe behavior related to the "liking" or pleasure component of reward include consummatory behavior and taking behavior.
The three primary functions of rewards are their capacity to:
produce associative learning (i.e., classical conditioning and operant reinforcement);
affect decision-making and induce approach behavior (via the assignment of motivational salience to rewarding stimuli);
elicit positively-valenced emotions, particularly pleasure.
Neuroanatomy
Overview
The brain structures that compose the reward system are located primarily within the cortico-basal ganglia-thalamo-cortical loop; the basal ganglia portion of the loop drives activity within the reward system. Most of the pathways that connect structures within the reward system are glutamatergic interneurons, GABAergic medium spiny neurons (MSNs), and dopaminergic projection neurons, although other types of projection neurons contribute (e.g., orexinergic projection neurons). The reward system includes the ventral tegmental area, ventral striatum (i.e., the nucleus accumbens and olfactory tubercle), dorsal striatum (i.e., the caudate nucleus and putamen), substantia nigra (i.e., the pars compacta and pars reticulata), prefrontal cortex, anterior cingulate cortex, insular cortex, hippocampus, hypothalamus (particularly, the orexinergic nucleus in the lateral hypothalamus), thalamus (multiple nuclei), subthalamic nucleus, globus pallidus (both external and internal), ventral pallidum, parabrachial nucleus, amygdala, and the remainder of the extended amygdala. The dorsal raphe nucleus and cerebellum appear to modulate some forms of reward-related cognition (i.e., associative learning, motivational salience, and positive emotions) and behaviors as well. The laterodorsal tegmental nucleus (LDT), pedunculopontine nucleus (PPTg), and lateral habenula (LHb) (both directly and indirectly via the rostromedial tegmental nucleus (RMTg)) are also capable of inducing aversive salience and incentive salience through their projections to the ventral tegmental area (VTA). The LDT and PPTg both send glutamatergic projections to the VTA that synapse on dopaminergic neurons, both of which can produce incentive salience. The LHb sends glutamatergic projections, the majority of which synapse on GABAergic RMTg neurons that in turn drive inhibition of dopaminergic VTA neurons, although some LHb projections terminate on VTA interneurons. These LHb projections are activated both by aversive stimuli and by the absence of an expected reward, and excitation of the LHb can induce aversion.
Most of the dopamine pathways (i.e., neurons that use the neurotransmitter dopamine to communicate with other neurons) that project out of the ventral tegmental area are part of the reward system; in these pathways, dopamine acts on D1-like receptors or D2-like receptors to either stimulate (D1-like) or inhibit (D2-like) the production of cAMP. The GABAergic medium spiny neurons of the striatum are components of the reward system as well. The glutamatergic projection nuclei in the subthalamic nucleus, prefrontal cortex, hippocampus, thalamus, and amygdala connect to other parts of the reward system via glutamate pathways. The medial forebrain bundle, which is a set of many neural pathways that mediate brain stimulation reward (i.e., reward derived from direct electrochemical stimulation of the lateral hypothalamus), is also a component of the reward system.
Two theories exist with regard to the activity of the nucleus accumbens and the generation of liking and wanting. The inhibition (or hyperpolarization) hypothesis proposes that the nucleus accumbens exerts tonic inhibitory effects on downstream structures such as the ventral pallidum, hypothalamus or ventral tegmental area, and that inhibition within the nucleus accumbens (NAcc) leaves these structures excited, "releasing" reward-related behavior. While GABA receptor agonists are capable of eliciting both "liking" and "wanting" reactions in the nucleus accumbens, glutamatergic inputs from the basolateral amygdala, ventral hippocampus, and medial prefrontal cortex can drive incentive salience. Furthermore, while most studies find that NAcc neurons reduce firing in response to reward, a number of studies find the opposite response. This has led to the disinhibition (or depolarization) hypothesis, which proposes that excitation of NAcc neurons, or at least certain subsets, drives reward-related behavior.
After nearly 50 years of research on brain-stimulation reward, researchers have established that dozens of sites in the brain will maintain intracranial self-stimulation. Regions include the lateral hypothalamus and medial forebrain bundle, which are especially effective. Stimulation there activates fibers that form the ascending pathways; the ascending pathways include the mesolimbic dopamine pathway, which projects from the ventral tegmental area to the nucleus accumbens. There are several explanations as to why the mesolimbic dopamine pathway is central to circuits mediating reward. First, there is a marked increase in dopamine release from the mesolimbic pathway when animals engage in intracranial self-stimulation. Second, experiments consistently indicate that brain-stimulation reward stimulates the reinforcement of pathways that are normally activated by natural rewards, and drug reward or intracranial self-stimulation can exert more powerful activation of central reward mechanisms because they activate the reward center directly rather than through the peripheral nerves. Third, when animals are administered addictive drugs or engage in naturally rewarding behaviors, such as feeding or sexual activity, there is a marked release of dopamine within the nucleus accumbens. However, dopamine is not the only reward compound in the brain.
Key pathway
Ventral tegmental area
The ventral tegmental area (VTA) is important in responding to stimuli and cues that indicate a reward is present. Rewarding stimuli (and all addictive drugs) act on the circuit by triggering the VTA to release dopamine signals to the nucleus accumbens, either directly or indirectly. The VTA has two important pathways: the mesolimbic pathway, projecting to limbic (striatal) regions and underpinning motivational behaviors and processes, and the mesocortical pathway, projecting to the prefrontal cortex and underpinning cognitive functions such as the learning of external cues.
Dopaminergic neurons in this region convert the amino acid tyrosine into DOPA using the enzyme tyrosine hydroxylase; DOPA is then converted into dopamine by the enzyme DOPA decarboxylase.
Striatum (Nucleus Accumbens)
The striatum is broadly involved in acquiring and eliciting learned behaviors in response to a rewarding cue. The VTA projects to the striatum, activating the GABAergic medium spiny neurons via D1 and D2 receptors within the ventral (nucleus accumbens) and dorsal striatum.
The ventral striatum (the nucleus accumbens) is broadly involved in acquiring behavior when fed into by the VTA, and in eliciting behavior when fed into by the PFC. The NAc shell projects to the pallidum and the VTA, regulating limbic and autonomic functions; it modulates the reinforcing properties of stimuli and the short-term aspects of reward. The NAc core projects to the substantia nigra and is involved in the development and expression of reward-seeking behaviors. It is involved in spatial learning, conditional responses, and impulsive choice: the long-term elements of reward.
The dorsal striatum is involved in learning: the dorsomedial striatum in goal-directed learning, and the dorsolateral striatum in the stimulus-response learning foundational to Pavlovian responses. On repeated activation by a stimulus, the nucleus accumbens can activate the dorsal striatum via an intrastriatal loop. The transition of signals from the NAc to the dorsal striatum allows reward-associated cues to activate the dorsal striatum without the reward itself being present. This can activate cravings and reward-seeking behaviors (and is responsible for triggering relapse during abstinence in addiction).
Prefrontal Cortex
The VTA dopaminergic neurons project to the PFC, activating glutamatergic neurons that project to multiple other regions, including the dorsal striatum and NAc, ultimately allowing the PFC to mediate salience and conditional behaviors in response to stimuli.
Notably, abstinence from addictive drugs activates the PFC's glutamatergic projection to the NAc, which leads to strong cravings and modulates the reinstatement of addiction behaviors. The PFC also interacts with the VTA through the mesocortical pathway, and helps associate environmental cues with the reward.
There are several parts of the brain related to the prefrontal cortex that help with decision-making in different ways. The dACC (dorsal anterior cingulate cortex) tracks effort, conflict, and mistakes. The vmPFC (ventromedial prefrontal cortex) focuses on what feels rewarding and helps make choices based on personal preferences. The OFC (orbitofrontal cortex) evaluates options and predicts their outcomes to guide decisions. Together, they work with dopamine signals to process rewards and actions.
Hippocampus
The hippocampus has multiple functions, including the creation and storage of memories. In the reward circuit, it serves to encode contextual memories and associated cues, and it ultimately underpins the cue- and context-triggered reinstatement of reward-seeking behaviors.
Amygdala
The amygdala (AMY) receives input from the VTA and outputs to the NAc. The amygdala is important in creating powerful emotional flashbulb memories, and likely underpins the creation of strong cue-associated memories. It is also important in mediating the anxiety effects of withdrawal and the increased drug intake seen in addiction.
Pleasure centers
Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned). Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards. Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward. In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.
The reward system contains pleasure centers, or hedonic hotspots: brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. Hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex. The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. In rats, microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking reactions in these hotspots. The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids in rats, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC. On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists.
Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot. Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of intense euphoria.
Wanting and liking
Incentive salience is the "wanting" or "desire" attribute, which includes a motivational component, that is assigned to a rewarding stimulus by the nucleus accumbens shell (NAcc shell). The degree of dopamine neurotransmission into the NAcc shell from the mesolimbic pathway is highly correlated with the magnitude of incentive salience for rewarding stimuli.
Activation of the dorsorostral region of the nucleus accumbens correlates with increases in wanting without concurrent increases in liking. However, dopaminergic neurotransmission into the nucleus accumbens shell is responsible not only for appetitive motivational salience (i.e., incentive salience) towards rewarding stimuli, but also for aversive motivational salience, which directs behavior away from undesirable stimuli. In the dorsal striatum, activation of D1 expressing MSNs produces appetitive incentive salience, while activation of D2 expressing MSNs produces aversion. In the NAcc, such a dichotomy is not as clear cut, and activation of both D1 and D2 MSNs is sufficient to enhance motivation, likely via disinhibiting the VTA through inhibiting the ventral pallidum.
Robinson and Berridge's 1993 incentive-sensitization theory proposed that reward contains separable psychological components: wanting (incentive) and liking (pleasure). To explain increasing contact with a certain stimulus such as chocolate, there are two independent factors at work – our desire to have the chocolate (wanting) and the pleasure effect of the chocolate (liking). According to Robinson and Berridge, wanting and liking are two aspects of the same process, so rewards are usually wanted and liked to the same degree. However, wanting and liking also change independently under certain circumstances. For example, rats that stop eating after dopamine-depleting lesions (losing their desire for food) still act as though they like food. In another example, activated self-stimulation electrodes in the lateral hypothalamus of rats increase appetite, but also cause more adverse reactions to tastes such as sugar and salt; apparently, the stimulation increases wanting but not liking. Such results demonstrate that the reward system of rats includes independent processes of wanting and liking. The wanting component is thought to be controlled by dopaminergic pathways, whereas the liking component is thought to be controlled by opiate-GABA-endocannabinoid systems.
Anti-reward system
Koob and Le Moal proposed that there exists a separate circuit responsible for the attenuation of reward-pursuing behavior, which they termed the anti-reward circuit. This component acts as a brake on the reward circuit, preventing the over-pursuit of food, sex, etc. The circuit involves multiple parts of the amygdala (the bed nucleus of the stria terminalis, the central nucleus), the nucleus accumbens, and signaling molecules including norepinephrine, corticotropin-releasing factor, and dynorphin. The circuit is also hypothesized to mediate the unpleasant components of stress, and is thus thought to be involved in addiction and withdrawal. While the reward circuit mediates the initial positive reinforcement involved in the development of addiction, it is the anti-reward circuit that later dominates, providing the negative reinforcement that motivates continued pursuit of the rewarding stimuli.
Learning
Rewarding stimuli can drive learning in both the form of classical conditioning (Pavlovian conditioning) and operant conditioning (instrumental conditioning). In classical conditioning, a reward can act as an unconditioned stimulus that, when associated with the conditioned stimulus, causes the conditioned stimulus to elicit both musculoskeletal (in the form of simple approach and avoidance behaviors) and vegetative responses. In operant conditioning, a reward may act as a reinforcer in that it increases or supports actions that lead to itself. Learned behaviors may or may not be sensitive to the value of the outcomes they lead to; behaviors that are sensitive to the contingency of an outcome on the performance of an action as well as the outcome value are goal-directed, while elicited actions that are insensitive to contingency or value are called habits. This distinction is thought to reflect two forms of learning, model-free and model-based. Model-free learning involves the simple caching and updating of values. In contrast, model-based learning involves the storage and construction of an internal model of events that allows inference and flexible prediction. Although Pavlovian conditioning is generally assumed to be model-free, the incentive salience assigned to a conditioned stimulus is flexible with regard to changes in internal motivational states.
Distinct neural systems are responsible for learning associations between stimuli and outcomes, actions and outcomes, and stimuli and responses. Although classical conditioning is not limited to the reward system, the enhancement of instrumental performance by stimuli (i.e., Pavlovian-instrumental transfer) requires the nucleus accumbens. Habitual and goal-directed instrumental learning are dependent upon the lateral striatum and the medial striatum, respectively.
During instrumental learning, opposing changes in the ratio of AMPA to NMDA receptors and in phosphorylated ERK occur in the D1-type and D2-type MSNs that constitute the direct and indirect pathways, respectively. These changes in synaptic plasticity, and the learning that accompanies them, are dependent upon activation of striatal D1 and NMDA receptors. The intracellular cascade activated by D1 receptors involves the recruitment of protein kinase A and, through the resulting phosphorylation of DARPP-32, the inhibition of phosphatases that deactivate ERK. NMDA receptors activate ERK through a different but interrelated Ras-Raf-MEK-ERK pathway. Alone, NMDA-mediated activation of ERK is self-limiting, since NMDA activation also inhibits the PKA-mediated inhibition of the phosphatases that deactivate ERK. However, when the D1 and NMDA cascades are co-activated, they work synergistically, and the resultant activation of ERK regulates synaptic plasticity in the form of spine restructuring, transport of AMPA receptors, regulation of CREB, and increased cellular excitability via inhibition of Kv4.2.
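The coincidence-detection logic of this cascade (strong ERK activation only when the D1 and NMDA inputs arrive together) can be sketched as a toy model. The Python snippet below is purely illustrative: the numbers are arbitrary stand-ins for the qualitative behavior described above, not measured kinetics.

```python
# Toy model of the D1/NMDA coincidence detection described above.
# All values are arbitrary illustrative units, not measured quantities.

def erk_activity(d1_active: bool, nmda_active: bool) -> float:
    """Return a notional steady-state ERK activity level."""
    drive = 1.0 if nmda_active else 0.0   # NMDA -> Ras-Raf-MEK-ERK drive
    phosphatase = 1.0                     # ERK-deactivating phosphatase tone
    if d1_active:
        # D1 -> PKA -> phospho-DARPP-32 suppresses the phosphatase ...
        phosphatase = 0.2
    if nmda_active and not d1_active:
        # ... but NMDA alone also blunts PKA's phosphatase suppression,
        # which is why NMDA-only ERK activation is self-limiting.
        phosphatase = 1.0
    return drive / phosphatase

for d1 in (False, True):
    for nmda in (False, True):
        print(f"D1={d1!s:5} NMDA={nmda!s:5} -> ERK ~ {erk_activity(d1, nmda):.1f}")
```

Only the co-activated case yields a large ERK value, mirroring the synergy described above.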
Disorders
Addiction
Overexpression of ΔFosB (DeltaFosB), a gene transcription factor, in the D1-type medium spiny neurons of the nucleus accumbens is the crucial common factor among virtually all forms of addiction (i.e., behavioral addictions and drug addictions); it induces addiction-related behavior and neural plasticity. In particular, ΔFosB promotes self-administration, reward sensitization, and reward cross-sensitization effects among specific addictive drugs and behaviors. Certain epigenetic modifications of histone protein tails (i.e., histone modifications) in specific regions of the brain are also known to play a crucial role in the molecular basis of addictions.
Addictive drugs and behaviors are rewarding and reinforcing (i.e., are addictive) due to their effects on the dopamine reward pathway.
The lateral hypothalamus and medial forebrain bundle has been the most frequently studied brain-stimulation reward site, particularly in studies of the effects of drugs on brain stimulation reward. The neurotransmitter system that has been most clearly identified with the habit-forming actions of drugs of abuse is the mesolimbic dopamine system, with its efferent targets in the nucleus accumbens and its local GABAergic afferents. The reward-relevant actions of amphetamine and cocaine are in the dopaminergic synapses of the nucleus accumbens and perhaps the medial prefrontal cortex. Rats also learn to lever-press for cocaine injections into the medial prefrontal cortex, which works by increasing dopamine turnover in the nucleus accumbens. Nicotine infused directly into the nucleus accumbens also enhances local dopamine release, presumably by a presynaptic action on the dopaminergic terminals of this region. Nicotinic receptors localize to dopaminergic cell bodies, and local nicotine injections increase dopaminergic cell firing that is critical for nicotinic reward. Some additional habit-forming drugs are also likely to decrease the output of medium spiny neurons as a consequence, despite activating dopaminergic projections. For opiates, the lowest-threshold site for reward effects involves actions on GABAergic neurons in the ventral tegmental area; a secondary site of opiate-rewarding actions is on the medium spiny output neurons of the nucleus accumbens. Thus the following form the core of the currently characterized drug-reward circuitry: the GABAergic afferents to the mesolimbic dopamine neurons (primary substrate of opiate reward), the mesolimbic dopamine neurons themselves (primary substrate of psychomotor stimulant reward), and the GABAergic efferents to the mesolimbic dopamine neurons (a secondary site of opiate reward).
Motivation
Dysfunctional motivational salience appears in a number of psychiatric symptoms and disorders. Anhedonia, traditionally defined as a reduced capacity to feel pleasure, has been re-examined as reflecting blunted incentive salience, as most anhedonic populations exhibit intact "liking". On the other end of the spectrum, heightened incentive salience that is narrowed to specific stimuli is characteristic of behavioral and drug addictions. In the case of fear or paranoia, dysfunction may lie in elevated aversive salience. In modern literature, anhedonia is associated with two proposed forms of pleasure, "anticipatory" and "consummatory".
Neuroimaging studies across diagnoses associated with anhedonia have reported reduced activity in the OFC and ventral striatum. One meta-analysis reported that anhedonia was associated with reduced neural response to reward anticipation in the caudate nucleus, putamen, nucleus accumbens and medial prefrontal cortex (mPFC).
Mood disorders
Certain types of depression are associated with reduced motivation, as assessed by willingness to expend effort for reward. These abnormalities have been tentatively linked to reduced activity in areas of the striatum, and while dopaminergic abnormalities are hypothesized to play a role, most studies probing dopamine function in depression have reported inconsistent results. Although postmortem and neuroimaging studies have found abnormalities in numerous regions of the reward system, few findings are consistently replicated. Some studies have reported reduced NAcc, hippocampus, medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC) activity, as well as elevated basolateral amygdala and subgenual cingulate cortex (sgACC) activity during tasks related to reward or positive stimuli. Only a little postmortem research complements these neuroimaging findings, but what has been done suggests reduced excitatory synapses in the mPFC. Reduced activity in the mPFC during reward-related tasks appears to be localized to more dorsal regions (i.e., the pregenual cingulate cortex), while the more ventral sgACC is hyperactive in depression.
Attempts to investigate the underlying neural circuitry in animal models have also yielded conflicting results. Two paradigms are commonly used to simulate depression, chronic social defeat stress (CSDS) and chronic mild stress (CMS), although many others exist. CSDS produces reduced preference for sucrose, reduced social interactions, and increased immobility in the forced swim test. CMS similarly reduces sucrose preference and produces behavioral despair, as assessed by the tail suspension and forced swim tests. Animals susceptible to CSDS exhibit increased phasic VTA firing, and inhibition of VTA-NAcc projections attenuates the behavioral deficits induced by CSDS. However, inhibition of a separate set of VTA projections exacerbates social withdrawal. On the other hand, CMS-associated reductions in sucrose preference and immobility were attenuated and exacerbated by VTA excitation and inhibition, respectively. Although these differences may be attributable to different stimulation protocols or poor translational paradigms, variable results may also stem from the heterogeneous functionality of reward-related regions.
Optogenetic stimulation of the mPFC as a whole produces antidepressant effects. This effect appears localized to the rodent homologue of the pgACC (the prelimbic cortex), as stimulation of the rodent homologue of the sgACC (the infralimbic cortex) produces no behavioral effects. Furthermore, deep brain stimulation in the infralimbic cortex, which is thought to have an inhibitory effect, also produces an antidepressant effect. This finding is congruent with the observation that pharmacological inhibition of the infralimbic cortex attenuates depressive behaviors.
Schizophrenia
Schizophrenia is associated with deficits in motivation, commonly grouped under other negative symptoms such as reduced spontaneous speech. The experience of "liking" is frequently reported to be intact, both behaviorally and neurally, although results may be specific to certain stimuli, such as monetary rewards. Furthermore, implicit learning and simple reward-related tasks are also intact in schizophrenia. Rather, deficits in the reward system are apparent during reward-related tasks that are cognitively complex. These deficits are associated with both abnormal striatal and OFC activity, as well as abnormalities in regions associated with cognitive functions such as the dorsolateral prefrontal cortex (DLPFC).
Attention deficit hyperactivity disorder
In those with ADHD, core aspects of the reward system are underactive, making it challenging to derive reward from regular activities. Those with the disorder experience a boost of motivation after a high-stimulation behavior triggers a release of dopamine. In the aftermath of that boost and reward, the return to baseline levels results in an immediate drop in motivation.
People with more ADHD-related behaviors show weaker brain responses to reward anticipation (not reward delivery), especially in the nucleus accumbens. While there is an initial boost of motivation and release of dopamine, as stated above, there is a higher risk of a noticeable drop in motivation afterwards. Research shows that for those who have ADHD, monetary rewards trigger the strongest brain activity, while verbal feedback triggers the least.
Impairments of dopaminergic and noradrenergic function are said to be key factors in ADHD. These impairments can lead to executive dysfunction such as dysregulation of reward processing and motivational dysfunction, including anhedonia.
History
The first clue to the presence of a reward system in the brain came with an accidental discovery by James Olds and Peter Milner in 1954. They discovered that rats would perform behaviors, such as pressing a bar, to administer a brief burst of electrical stimulation to specific sites in their brains. This phenomenon is called intracranial self-stimulation, or brain stimulation reward. Typically, rats will press a lever hundreds or thousands of times per hour to obtain this brain stimulation, stopping only when they are exhausted. While trying to teach rats to solve problems and run mazes, Olds and Milner found that stimulation of certain brain regions seemed to give pleasure to the animals. They tried the same thing with humans, and the results were similar. The explanation for why animals engage in a behavior that has no value to the survival of either themselves or their species is that the brain stimulation activates the system underlying reward.
In a fundamental discovery made in 1954, researchers James Olds and Peter Milner found that low-voltage electrical stimulation of certain regions of the brain of the rat acted as a reward in teaching the animals to run mazes and solve problems. It seemed that stimulation of those parts of the brain gave the animals pleasure, and in later work humans reported pleasurable sensations from such stimulation. When rats were tested in Skinner boxes where they could stimulate the reward system by pressing a lever, the rats pressed for hours. Research in the next two decades established that dopamine is one of the main chemicals aiding neural signaling in these regions, and dopamine was suggested to be the brain's "pleasure chemical".
Ivan Pavlov was a physiologist who used the reward system to study classical conditioning. Pavlov rewarded dogs with food after they heard a bell or another stimulus, so that the dogs came to associate the food (the reward) with the bell (the stimulus).
Edward L. Thorndike used the reward system to study operant conditioning. He began by putting cats in a puzzle box and placing food outside of the box, so that the cats were motivated to escape. The cats worked to get out of the puzzle box to reach the food. Although the cats ate the food after they escaped, Thorndike found that they also attempted to escape the box when no food reward was present. Thorndike used the rewards of food and freedom to stimulate the reward system of the cats and thereby observe how they learned to escape the box. More recently, Ivan De Araujo and colleagues used nutrients inside the gut to stimulate the reward system via the vagus nerve.
Other species
Animals quickly learn to press a bar to obtain an injection of opiates directly into the midbrain tegmentum or the nucleus accumbens. The same animals do not work to obtain the opiates if the dopaminergic neurons of the mesolimbic pathway are inactivated. From this perspective, animals, like humans, engage in behaviors that increase dopamine release.
Kent Berridge, a researcher in affective neuroscience, found that sweet (liked) and bitter (disliked) tastes produced distinct orofacial expressions, and these expressions were similarly displayed by human newborns, orangutans, and rats. This was evidence that pleasure (specifically, liking) has objective features and was essentially the same across various animal species. Most neuroscience studies have shown that the more dopamine released by the reward, the more effective the reward is. This is called the hedonic impact, which can be changed by the effort for the reward and the reward itself. Berridge discovered that blocking dopamine systems did not seem to change the positive reaction to something sweet (as measured by facial expression). In other words, the hedonic impact did not change based on the amount of sugar. This discounted the conventional assumption that dopamine mediates pleasure. Even with more-intense dopamine alterations, the data seemed to remain constant. However, a clinical study from January 2019 that assessed the effect of a dopamine precursor (levodopa), antagonist (risperidone), and a placebo on reward responses to music – including the degree of pleasure experienced during musical chills, as measured by changes in electrodermal activity as well as subjective ratings – found that the manipulation of dopamine neurotransmission bidirectionally regulates pleasure cognition (specifically, the hedonic impact of music) in human subjects. This research demonstrated that increased dopamine neurotransmission acts as a sine qua non condition for pleasurable hedonic reactions to music in humans.
Berridge developed the incentive salience hypothesis to address the wanting aspect of rewards. It explains the compulsive use of drugs by drug addicts even when the drug no longer produces euphoria, and the cravings experienced even after the individual has finished going through withdrawal. Some addicts respond to certain stimuli involving neural changes caused by drugs. This sensitization in the brain is similar to the effect of dopamine because wanting and liking reactions occur. Human and animal brains and behaviors experience similar changes regarding reward systems because these systems are so prominent.
See also
References
External links
Scholarpedia Reward
Scholarpedia Reward signals
Addiction
Cognitive neuroscience
Behavioral neuroscience
Behaviorism
Behavior modification
Dopamine
Motivation
Neuroanatomy
Neuropsychology | Reward system | Biology | 7,636 |
33,498 | https://en.wikipedia.org/wiki/Wire | A wire is a flexible, round bar of metal. Wires are commonly formed by drawing the metal through a hole in a die or draw plate. Wire gauges come in various standard sizes, as expressed in terms of a gauge number or cross-sectional area.
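As a concrete illustration of the gauge-to-size relationship, the American Wire Gauge (AWG) defines diameter by a geometric progression, so a gauge number converts directly to diameter and cross-sectional area. The short Python sketch below uses the standard AWG formula; other gauge systems (e.g., SWG) use different tables.

```python
import math

def awg_diameter_mm(gauge: int) -> float:
    """Diameter for an AWG gauge number (standard AWG definition)."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

def awg_area_mm2(gauge: int) -> float:
    """Cross-sectional area in square millimetres."""
    return math.pi * (awg_diameter_mm(gauge) / 2) ** 2

for g in (0, 10, 20, 30):
    print(f"AWG {g:2d}: d = {awg_diameter_mm(g):6.3f} mm, A = {awg_area_mm2(g):7.4f} mm^2")
```

A consequence of the geometric progression is that decreasing the gauge number by three roughly doubles the cross-sectional area.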
Wires are used to bear mechanical loads, often in the form of wire rope. In electricity and telecommunications signals, a "wire" can refer to an electrical cable, which can contain a "solid core" of a single wire or separate strands in stranded or braided forms.
Usually cylindrical in geometry, wire can also be made in square, hexagonal, flattened rectangular, or other cross-sections, either for decorative purposes, or for technical purposes such as high-efficiency voice coils in loudspeakers. Edge-wound coil springs, such as the Slinky toy, are made of special flattened wire.
History
In antiquity, jewelry often contains large amounts of wire in the form of chains and applied decoration that is accurately made and which must have been produced by some efficient, if not technically advanced, means. In some cases, strips cut from metal sheet were made into wire by pulling them through perforations in stone beads. This causes the strips to fold round on themselves to form thin tubes. This strip drawing technique was in use in Egypt by the 2nd Dynasty. From the middle of the 2nd millennium BCE most of the gold wires in jewelry are characterized by seam lines that follow a spiral path along the wire. Such twisted strips can be converted into solid round wires by rolling them between flat surfaces or the strip wire drawing method. The strip twist wire manufacturing method was superseded by drawing in the ancient Old World sometime between about the 8th and 10th centuries AD. There is some evidence for the use of drawing further East prior to this period.
Square and hexagonal wires were possibly made using a swaging technique. In this method a metal rod was struck between grooved metal blocks, or between a grooved punch and a grooved metal anvil. Swaging is of great antiquity, possibly dating to the beginning of the 2nd millennium BCE in Egypt and in the Bronze and Iron Ages in Europe for torcs and fibulae. Twisted square-section wires are a very common filigree decoration in early Etruscan jewelry.
In about the middle of the 2nd millennium BCE, a new category of decorative tube was introduced which imitated a line of granules. True beaded wire, produced by mechanically distorting a round-section wire, appeared in the Eastern Mediterranean and Italy in the seventh century BCE, perhaps disseminated by the Phoenicians. Beaded wire continued to be used in jewellery into modern times, although it largely fell out of favour in about the tenth century CE when two drawn round wires, twisted together to form what are termed 'ropes', provided a simpler-to-make alternative. A forerunner to beaded wire may be the notched strips and wires which first occur from around 2000 BCE in Anatolia.
Wire was drawn in England from the medieval period. The wire was used to make wool cards and pins, manufactured goods whose import was prohibited by Edward IV in 1463. The first wire mill in Great Britain was established at Tintern in about 1568 by the founders of the Company of Mineral and Battery Works, who had a monopoly on this. Apart from their second wire mill at nearby Whitebrook, there were no other wire mills before the second half of the 17th century. Despite the existence of mills, the drawing of wire down to fine sizes continued to be done manually.
According to a description in the early 20th century, "[w]ire is usually drawn of cylindrical form; but it may be made of any desired section by varying the outline of the holes in the draw-plate through which it is passed in the process of manufacture. The draw-plate or die is a piece of hard cast-iron or hard steel, or for fine work it may be a diamond or a ruby. The object of utilising precious stones is to enable the dies to be used for a considerable period without losing their size, and so producing wire of incorrect diameter. Diamond dies must be re-bored when they have lost their original diameter of hole, but metal dies are brought down to size again by hammering up the hole and then drifting it out to correct diameter with a punch."
Production
Wire is often reduced to the desired diameter and properties by repeated drawing through progressively smaller dies, or traditionally holes in draw plates. After a number of passes the wire may be annealed to facilitate more drawing or, if it is a finished product, to maximise ductility and conductivity.
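The cumulative effect of successive passes can be estimated from simple geometry. The sketch below assumes, purely for illustration, a 20% reduction in cross-sectional area per pass; real drawing schedules vary with the metal and die design.

```python
import math

# How many die passes to draw 5.0 mm rod down to 1.0 mm wire, assuming
# each pass removes 20% of the cross-sectional area (an assumed,
# illustrative figure; real per-pass reductions depend on the alloy).
d_start, d_final, reduction_per_pass = 5.0, 1.0, 0.20

area_ratio = (d_final / d_start) ** 2   # final/initial cross-sectional area
passes = math.ceil(math.log(area_ratio) / math.log(1 - reduction_per_pass))
print(f"about {passes} passes")         # -> about 15 passes
```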
Electrical wires are usually covered with insulating materials, such as plastic, rubber-like polymers, or varnish. Insulating and jacketing of wires and cables is nowadays done by passing them through an extruder. Formerly, materials used for insulation included treated cloth or paper and various oil-based products. Since the mid-1960s, plastic and polymers exhibiting properties similar to rubber have predominated.
Two or more wires may be wrapped concentrically, separated by insulation, to form coaxial cable. The wire or cable may be further protected with substances like paraffin, some kind of preservative compound, bitumen, lead, aluminum sheathing, or steel taping. Stranding or covering machines wind material onto wire which passes through quickly. Some of the smallest machines for cotton covering have a large drum, which grips the wire and moves it through toothed gears; the wire passes through the centre of disks mounted above a long bed, and the disks carry each a number of bobbins varying from six to twelve or more in different machines. A supply of covering material is wound on each bobbin, and the end is led on to the wire, which occupies a central position relatively to the bobbins; the latter being revolved at a suitable speed bodily with their disks, the cotton is consequently served on to the wire, winding in spiral fashion so as to overlap. If many strands are required the disks are duplicated, so that as many as sixty spools may be carried, the second set of strands being laid over the first.
For heavier cables that are used for electric light and power as well as submarine cables, the machines are somewhat different in construction. The wire is still carried through a hollow shaft, but the bobbins or spools of covering material are set with their spindles at right angles to the axis of the wire, and they lie in a circular cage which rotates on rollers below. The various strands coming from the spools at various parts of the circumference of the cage all lead to a disk at the end of the hollow shaft. This disk has perforations through which each of the strands pass, thence being immediately wrapped on the cable, which slides through a bearing at this point. Toothed gears having certain definite ratios are used to cause the winding drum for the cable and the cage for the spools to rotate at suitable relative speeds which do not vary. The cages are multiplied for stranding with many tapes or strands, so that a machine may have six bobbins on one cage and twelve on the other.
Forms
Solid
Solid wire, also called solid-core or single-strand wire, consists of one piece of metal wire. Solid wire is useful for wiring breadboards. It is cheaper to manufacture than stranded wire and is used where there is little need for flexibility. Solid wire also provides mechanical ruggedness and, because it has relatively little exposed surface area, resistance to corrosive attack.
Stranded
Stranded wire is composed of a number of small wires bundled or wrapped together to form a larger conductor. Stranded wire is more flexible than solid wire of the same total cross-sectional area. Stranded wire is used when higher resistance to metal fatigue is required. Such situations include connections between circuit boards in multi-printed-circuit-board devices, where the rigidity of solid wire would produce too much stress as a result of movement during assembly or servicing; A.C. line cords for appliances; musical instrument cables; computer mouse cables; welding electrode cables; control cables connecting moving machine parts; mining machine cables; trailing machine cables; and numerous others. At high frequencies, current travels near the surface of the wire because of the skin effect, resulting in increased power loss in the wire. Stranded wire might seem to reduce this effect, since the total surface area of the strands is greater than the surface area of the equivalent solid wire, but ordinary stranded wire does not reduce the skin effect because all the strands are short-circuited together and behave as a single conductor. A stranded wire will have higher resistance than a solid wire of the same diameter because the cross-section of the stranded wire is not all copper; there are unavoidable gaps between the strands (this is the circle packing problem for circles within a circle). A stranded wire with the same cross-section of conductor as a solid wire is said to have the same equivalent gauge and is always a larger diameter. However, for many high-frequency applications, proximity effect is more severe than skin effect, and in some limited cases, simple stranded wire can reduce proximity effect. For better performance at high frequencies, litz wire, which has the individual strands insulated and twisted in special patterns, may be used.
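The skin effect mentioned above can be quantified with the standard skin-depth formula, delta = sqrt(rho / (pi * f * mu)). The sketch below evaluates it for copper; the resistivity figure is approximate.

```python
import math

RHO_COPPER = 1.68e-8       # resistivity of copper, ohm-metres (approximate)
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability; copper is essentially non-magnetic

def skin_depth_mm(freq_hz: float) -> float:
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(RHO_COPPER / (math.pi * freq_hz * MU_0)) * 1e3

for f in (50.0, 1e3, 1e6, 1e8):
    print(f"{f:>12,.0f} Hz: skin depth ~ {skin_depth_mm(f):.4f} mm")
```

At mains frequency the skin depth (roughly 9 mm) exceeds most wire radii, but at radio frequencies it shrinks to tens of micrometres, which is why litz construction matters there.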
The more individual wire strands in a wire bundle, the more flexible, kink-resistant, break-resistant, and stronger the wire becomes. However, more strands increase manufacturing complexity and cost. For geometrical reasons, the lowest number of strands usually seen is 7: one in the middle, with 6 surrounding it in close contact. The next level up is 19, which is another layer of 12 strands on top of the 7. After that the number varies, but 37 and 49 are common, then numbers in the 70 to 100 range (the count is no longer exact). Larger numbers than that are typically found only in very large cables. For applications where the wire moves, 19 is the lowest that should be used (7 should only be used in applications where the wire is placed and then does not move), and 49 is much better. For applications with constant repeated movement, such as assembly robots and headphone wires, 70 to 100 is mandatory. For applications that need even more flexibility, even more strands are used (welding cables are the usual example, but also any application that needs to move wire in tight areas). One example is a 2/0 wire made from 5,292 strands of No. 36 gauge wire. The strands are organized by first creating a bundle of 7 strands. Then 7 of these bundles are put together into super bundles. Finally 108 super bundles are used to make the final cable. Each group of wires is wound in a helix so that when the wire is flexed, the part of a bundle that is stretched moves around the helix to a part that is compressed to allow the wire to have less stress.
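The "geometrical reasons" behind the common strand counts are those of close-packed circles: one central strand plus concentric hexagonal layers gives the centered hexagonal numbers. A minimal sketch:

```python
# Strand counts of 7, 19, 37, 61, ... arise from packing one central
# strand inside concentric hexagonal layers (centered hexagonal numbers).
def strand_count(layers: int) -> int:
    return 1 + 3 * layers * (layers + 1)

print([strand_count(k) for k in range(5)])  # [1, 7, 19, 37, 61]

# The 2/0 example above follows the same scheme at three levels:
print(7 * 7 * 108)  # 7 strands x 7 bundles x 108 super bundles -> 5292
```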
Prefused wire is stranded wire made up of strands that are heavily tinned, then fused together. Prefused wire has many of the properties of solid wire, except it is less likely to break.
Braided
A braided wire consists of a number of small strands of wire braided together. Braided wires do not break easily when flexed. Braided wires are often suitable as an electromagnetic shield in noise-reduction cables.
Uses
Wire has many uses. It forms the raw material of many important industries, such as the wire netting industry, engineered springs, wire-cloth making and wire rope spinning, in which it occupies a place analogous to a textile fiber. Wire-cloth of all degrees of strength and fineness of mesh is used for sifting and screening machinery, for draining paper pulp, for window screens, and for many other purposes. Vast quantities of aluminium, copper, nickel and steel wire are employed for telephone and data cables, and as conductors in electric power transmission and heating. It is in no less demand for fencing, and much is consumed in the construction of suspension bridges, and cages, etc. In the manufacture of stringed musical instruments and scientific instruments, wire is again largely used. Carbon and stainless spring steel wire have significant applications in engineered springs for critical automotive or industrial manufactured parts/components. Pin and hairpin making; the needle and fish-hook industries; nail, peg, and rivet making; and carding machinery consume large amounts of wire as feedstock.
Not all metals and metallic alloys possess the physical properties necessary to make useful wire. The metals must in the first place be ductile and strong in tension, the quality on which the utility of wire principally depends. The principal metals suitable for wire, possessing almost equal ductility, are platinum, silver, iron, copper, aluminium, and gold; and it is only from these and certain of their alloys with other metals, principally brass and bronze, that wire is prepared.
By careful treatment, extremely thin wire can be produced. Special purpose wire is however made from other metals (e.g. tungsten wire for light bulb and vacuum tube filaments, because of its high melting temperature). Copper wires are also plated with other metals, such as tin, nickel, and silver to handle different temperatures, provide lubrication, and provide easier stripping of rubber insulation from copper.
Metallic wires are often used for the lower-pitched sound-producing "strings" in stringed instruments, such as violins, cellos, and guitars, and percussive string instruments such as pianos, dulcimers, dobros, and cimbaloms. To increase the mass per unit length (and thus lower the pitch of the sound even further), the main wire may sometimes be helically wrapped with another, finer strand of wire. Such musical strings are said to be "overspun"; the added wire may be circular in cross-section ("round-wound"), or flattened before winding ("flat-wound").
Examples include:
Hook-up wire is small-to-medium gauge, solid or stranded, insulated wire, used for making internal connections inside electrical or electronic devices. It is often tin-plated to improve solderability.
Wire bonding is the application of microscopic wires for making electrical connections inside semiconductor components and integrated circuits.
Magnet wire is solid wire, usually copper, which, to allow closer winding when making electromagnetic coils, is insulated only with varnish, rather than the thicker plastic or other insulation commonly used on electrical wire. It is used for the winding of motors, transformers, inductors, generators, speaker coils, etc. (For further information about copper magnet wire, see: Copper wire and cable#Magnet wire (Winding wire).).
Coaxial cable is a cable consisting of an inner conductor, surrounded by a tubular insulating layer typically made from a flexible material with a high dielectric constant, all of which is then surrounded by another conductive layer (typically of fine woven wire for flexibility, or of a thin metallic foil), and then finally covered again with a thin insulating layer on the outside. The term coaxial comes from the inner conductor and the outer shield sharing the same geometric axis. Coaxial cables are often used as a transmission line for radio frequency signals. In a hypothetical ideal coaxial cable, the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors. Practical cables achieve this objective to a high degree. A coaxial cable provides extra protection of signals from external electromagnetic interference and effectively guides signals with low emission along the length of the cable; the characteristic impedance implied by this geometry is sketched after this list.
Speaker wire is used to make a low-resistance electrical connection between loudspeakers and audio amplifiers. Some high-end modern speaker wire consists of multiple electrical conductors individually insulated by plastic, similar to Litz wire.
Resistance wire is wire with higher than normal resistivity, often used for heating elements or for making wire-wound resistors. Nichrome wire is the most common type; a sizing sketch follows below.
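Following up on the coaxial cable example above: the geometry fixes the characteristic impedance through the standard ideal-coax formula Z0 = (60 / sqrt(eps_r)) * ln(D/d) ohms. The dimensions used below are assumed round numbers, not taken from any real cable datasheet.

```python
import math

def coax_impedance_ohms(shield_id: float, core_d: float, eps_r: float) -> float:
    """Characteristic impedance of an ideal coaxial line.

    shield_id: inner diameter of the outer conductor, core_d: diameter of
    the inner conductor (same units), eps_r: relative permittivity of the
    dielectric between them.
    """
    return (59.96 / math.sqrt(eps_r)) * math.log(shield_id / core_d)

# Assumed, illustrative dimensions (roughly cable-like, not a datasheet):
print(f"{coax_impedance_ohms(shield_id=3.0, core_d=0.9, eps_r=2.25):.1f} ohm")
```

With these toy dimensions the result is about 48 ohms, close to the common 50-ohm standard for RF cable.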
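And for the resistance wire example: since R = rho * L / A, an element can be sized with a few lines of arithmetic. The nichrome resistivity below is an approximate handbook figure and the dimensions are arbitrary examples.

```python
import math

RHO_NICHROME = 1.1e-6  # ohm-metres, approximate (varies by alloy)

length_m = 2.0         # example element length
diameter_m = 0.4e-3    # example wire diameter (0.4 mm)

area = math.pi * (diameter_m / 2) ** 2
resistance = RHO_NICHROME * length_m / area
power_at_230v = 230 ** 2 / resistance
print(f"R ~ {resistance:.1f} ohm, P at 230 V ~ {power_at_230v:.0f} W")
```

This gives roughly 17.5 ohms and about 3 kW, on the order of a kettle element; the same geometry in copper would have about 65 times less resistance, which is why ordinary copper wire is unsuited to heating duty.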
See also
High-voltage cable
Barbed wire
Chicken wire
Razor wire
Tinsel wire
Wollaston wire
References
External links
Wire
Hardware (mechanical) | Wire | Physics,Technology,Engineering | 3,322 |
64,403,658 | https://en.wikipedia.org/wiki/Quiet%20area | "Quiet area" or "quiet areas" is a concept used in landscape planning to highlight areas with good sound quality and limited noise disturbance. The concept is typically used in nature and nature-like areas with high experiential values and/or high accessibility. Despite the name, quiet areas are not "quiet" in the strictest meaning of the word. Rather, they imply a relative quietness, where sounds other than noise are given the chance to come forward. For instance, sounds of nature are often subtle in character, and require absence of noise to be heard. Quietness in its true sense hardly exists at all.
Background and history
In the planning processes for everyday landscapes, the sound environment has traditionally been given relatively low priority. If sound is at all considered, it is mostly in response to problems with environmental noise, dealt with through measurements of sound pressure levels and technical solutions.
Strategies to avoid noise have existed at least since ancient Greece and have been implemented on a wider scale since the 1970s in the western world. While playing a critical role in reducing noise and its associated health problems, noise management does not take account of the experiential qualities inherent in sound. With "quiet areas", it can be said that the focus started to shift from noise alone to include the potential qualities of the sound environment, like twittering birds, rustling vegetation and rippling water. This holistic way of thinking is in line with the discourse on soundscape, a research field that started to become influential around the same time as the concept of quiet areas was introduced.
In the EU, the notion of quiet areas can be traced to 1996 when it was mentioned in a Green Paper on "Future Noise Policy". Today, the concept is mostly associated with the influential directive on environmental noise from 2002 (2002/49/EC), where it is stipulated that member states should map their quiet areas as well as formulate strategies to protect them from future noise exposure. The instructions and definitions on quiet areas that were mentioned in the directive were vague, and clarifications and guidelines have been added subsequently.
Definitions and identification strategies
Definitions of what a "quiet area" is vary widely, which is partly a result of the formulations used in the END Directive. The directive distinguishes between two types of quiet areas, in "agglomerations" and in "open country": the former are areas, delimited by the competent authority, that are not exposed to noise levels above a limit value set by the member state, while the latter are areas undisturbed by noise from traffic, industry or recreational activities.
In other words, to a large extent the END directive leaves it to each member state to formulate its own definition of what qualifies as a quiet area. A number of different interpretations and definitions have come out as a result; many of these were collected in a subsequent EU publication entitled "Good Practice Guide on quiet areas". Definitions typically include a reference to a benchmark sound pressure level between 25 and 55 dBA.
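To make the benchmark figures concrete, sound pressure level is defined as L = 20 * log10(p / p0) with reference pressure p0 = 20 µPa. The sketch below converts the benchmark range back to pressures; note that real dBA figures also apply A-weighting across frequency, which plain SPL arithmetic omits.

```python
import math

P_REF = 20e-6  # reference sound pressure, 20 micropascals

def spl_db(pressure_pa: float) -> float:
    """Unweighted sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_pa(spl: float) -> float:
    """Inverse: RMS pressure for a given level."""
    return P_REF * 10 ** (spl / 20)

for level in (25, 55):
    print(f"{level} dB corresponds to about {pressure_pa(level) * 1e6:,.0f} uPa")
```

The 30 dB span thus covers a roughly 32-fold range of sound pressure.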
A method to identify potential quiet areas has also been brought forward by the EU: the so-called "Quietness Suitability Index" (QSI) uses existing data on noise and land use to indicate potential for quietness. Maps can be accessed through the European Environment Agency's homepage.
Examples and applications
The UK has seen several initiatives related to quiet areas including an interactive map from the Department for Environment, Food and Rural Affairs (DEFRA) depicting five quiet areas in Belfast.
A smartphone application, Hush City, has been developed as a means to aid the identification of quiet areas from a user perspective. The app was released in 2017 and is now used internationally by citizens and municipalities to map and assess quiet areas, and to share them via an open-access web platform.
In Sweden, the initiative "Guide to Silence" has been implemented in several municipalities in the Stockholm region. The initiative is noteworthy for its emphasis on marketing quiet areas and making them accessible to the public.
Initiatives have also been taken in Greece and the Netherlands, among other places.
References
Soundscape ecology
Landscape architecture
Environmental design
Noise control | Quiet area | Engineering,Biology | 791 |
5,612,656 | https://en.wikipedia.org/wiki/Bioluminescence%20imaging | Bioluminescence imaging (BLI) is a technology developed over the past decades (1990s and onward) that allows for the noninvasive study of ongoing biological processes. Recently, bioluminescence tomography (BLT) has become possible and several systems have become commercially available. In 2011, PerkinElmer acquired one of the most popular lines of optical imaging systems with bioluminescence from Caliper Life Sciences.
Background
Bioluminescence is the process of light emission in living organisms. Bioluminescence imaging utilizes the native light emission produced by luciferase enzymes from one of several organisms which bioluminesce. The three main sources are the North American firefly, the sea pansy (and related marine organisms), and bacteria like Photorhabdus luminescens and Vibrio fischeri. The DNA encoding the luminescent protein is incorporated into the laboratory animal either via a viral vector or by creating a transgenic animal. Rodent models of cancer spread can be studied through bioluminescence imaging, e.g. for mouse models of breast cancer metastasis.
Systems derived from the three groups above differ in key ways:
Firefly luciferase requires D-luciferin to be injected into the subject prior to imaging. The peak emission wavelength is about 560 nm. Due to the attenuation of blue-green light in tissues, the red-shift (compared to the other systems) of this emission makes detection of firefly luciferase much more sensitive in vivo.
Renilla luciferase (from the Sea pansy) requires its substrate, coelenterazine, to be injected as well. As opposed to luciferin, coelenterazine has a lower bioavailability (likely due to MDR1 transporting it out of mammalian cells). Additionally, the peak emission wavelength is about 480 nm.
Bacterial luciferase has an advantage in that the lux operon used to express it also encodes the enzymes required for substrate biosynthesis. Although originally believed to be functional only in prokaryotic organisms, where it is widely used for developing bioluminescent pathogens, it has been genetically engineered to work in mammalian expression systems as well. This luciferase reaction has a peak wavelength of about 490 nm.
While the total amount of light emitted from bioluminescence is typically small and not detected by the human eye, an ultra-sensitive CCD camera can image bioluminescence from an external vantage point.
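The wavelength dependence noted for the three luciferase systems matters because tissue attenuates blue-green light far more strongly than red. A rough Beer-Lambert sketch makes the point; the attenuation coefficients below are invented round numbers for illustration only, as real values vary widely with tissue type.

```python
import math

# Hypothetical effective attenuation coefficients (per cm) for soft tissue;
# invented round numbers that only preserve the qualitative trend that
# attenuation drops as emission shifts toward the red.
MU_EFF_PER_CM = {480: 10.0, 560: 6.0, 620: 2.0}

def transmitted_fraction(wavelength_nm: int, depth_cm: float) -> float:
    """Beer-Lambert estimate of the light surviving a given tissue depth."""
    return math.exp(-MU_EFF_PER_CM[wavelength_nm] * depth_cm)

for wl in sorted(MU_EFF_PER_CM):
    print(f"{wl} nm through 1 cm of tissue: {transmitted_fraction(wl, 1.0):.1e}")
```

Even with these toy numbers, the red-shifted emission survives orders of magnitude better, consistent with firefly luciferase being the most sensitive of the three in vivo.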
Applications
Common applications of BLI include in vivo studies of infection (with bioluminescent pathogens), cancer progression (using a bioluminescent cancer cell line), and reconstitution kinetics (using bioluminescent stem cells).
Researchers at UT Southwestern Medical Center have shown that bioluminescence imaging can be used to determine the effectiveness of cancer drugs that choke off a tumor's blood supply. The technique requires luciferin to be added to the bloodstream, which carries it to cells throughout the body. When luciferin reaches cells that have been altered to carry the firefly gene, those cells emit light.
The BLT inverse problem of 3D reconstruction of the distribution of bioluminescent molecules from data measured on the animal surface is inherently ill-posed. The first small animal study using BLT was conducted by researchers at the University of Southern California, Los Angeles, USA in 2005. Following this development, many research groups in USA and China have built systems that enable BLT.
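A toy example shows why the problem is ill-posed and why regularization is needed: there are far fewer surface measurements than internal source unknowns. The sketch below uses a random matrix as a stand-in forward model and synthetic data; real BLT systems use a physics-based light-transport model instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_vox = 40, 200                   # many fewer measurements than unknowns
A = rng.standard_normal((n_meas, n_vox))  # stand-in forward model
x_true = np.zeros(n_vox)
x_true[90:95] = 1.0                       # small localized "source"
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Tikhonov-regularized least squares: argmin ||A x - y||^2 + lam * ||x||^2.
# Without the lam term, A.T @ A is singular and the fit is underdetermined.
lam = 1.0
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)

inside = x_hat[90:95].mean()
outside = np.abs(np.delete(x_hat, range(90, 95))).mean()
print(f"mean recovered intensity in source: {inside:.3f}, elsewhere: {outside:.3f}")
```

The source region stands out only relative to a diffuse background: the regularizer, not the data alone, selects this particular solution, which is the practical meaning of ill-posedness here.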
Mustard plants have had the gene that makes fireflies' tails glow added to them so that the plants glow when touched. The effect lasts for an hour, but an ultra-sensitive camera is needed to see the glow.
Autoluminograph
An autoluminograph is a photograph produced by placing a light emitting object directly on a piece of film. A famous example is an autoluminograph published in Science magazine in 1986 of a glowing transgenic tobacco plant bearing the luciferase gene of fireflies placed on Kodak Ektachrome 200 film.
Induced metabolic bioluminescence imaging
Induced metabolic bioluminescence imaging (imBI) is used to obtain a metabolic snapshot of biological tissues. Metabolites that may be quantified through imBI include glucose, lactate, pyruvate, ATP, glucose-6-phosphate, or D-2-hydroxyglutarate. imBI can be used to determine the lactate concentration of tumors or to measure the metabolism of the brain.
References
Further reading
Photographic processes
Bioluminescence
Imaging | Bioluminescence imaging | Chemistry,Biology | 931 |
1,559,322 | https://en.wikipedia.org/wiki/Shoot%2C%20shovel%2C%20and%20shut%20up | Shoot, shovel, and shut up, also known as the 3-S treatment, refers to a method for dealing with unwanted or unwelcome animals primarily in rural areas. There have been reports of the frequently illegal triple-step procedure being used to dispatch mischievous pets, endangered species, and even sick livestock. Individuals often engage in this practice as a means to protect property or pets from predatory species that are protected by law, especially if other measures to protect their animals are unfeasible. For instance, eagles, a protected species, have been known to occasionally attack and kill young livestock on ranches. Similarly, there have been multiple incidents where hawks have attacked and killed small farm poultry and pets. Farmers and pet owners caught killing such animals have been prosecuted, regardless of their reasons for doing so.
When applied to marauding dogs, the implication is that the offending canine will be killed by firearm and, as far as the owner is concerned, disappear with no apparent clues because of the reticence of the person employing the method. The phrase was used in this sense in Living Well on Practically Nothing by Edward H. Romney, who pointed out that while one might get away with using the 3-S treatment in rural areas, suburban neighborhoods have different norms. This practice is more common in rural areas, where there is less population to witness illegal activities.
Pittsburgh Tribune-Review columnist Ralph R. Reiland wrote an essay called "Shoot, Shovel & Shut Up," describing landowners' reactions to finding red-cockaded woodpeckers on their property. Under the Endangered Species Act, landowners who have a population of such birds on their property may be subject to restrictions on building and other land uses that would interfere with the animals' habitat. Therefore, it was considered prudent to eliminate the birds before the government noticed their presence. The Fall 2001 issue of the Sierra Citizen notes, "'Shoot, shovel and shut up' is the mantra of many in the so-called 'property rights' movement ... It refers to the practice of killing and burying evidence of any plants or animals that might be threatened or endangered." The property rights movement argues that the Endangered Species Act should be amended to compensate property owners for protecting endangered species, rather than making an endangered species a financial drain on the owners, and that the current act actually hastens the decline of some endangered species when listed by causing property owners to "shoot, shovel, and shut up" to avoid expected losses.
Jim Robbins writes in Wolves Across the Border that Mon Teigen, director of the Montana Stockgrowers Association, "believes labyrinthine federal endangered species regulations may lead a few ranchers to control wolves with the three-S method". In 2005, after a court ruled that ranchers could not shoot wolves caught attacking livestock, the Associated Press reported that "Sharon Beck, an Eastern Oregon rancher and former president of the Oregon Cattlemen's Association, said the ruling leaves ranchers little recourse but to break the law -- known around the West as 'shoot, shovel and shut up' -- when wolves move into their areas".
The phrase has also been used in reference to mad cow disease. More than 30 countries banned beef imports from Canada after one of Albertan farmer Marwyn Peaster's cattle tested positive for the illness. Alberta Premier Ralph Klein, in frustration over the situation, said that any "self-respecting rancher would have shot, shovelled and shut up".
References
Animal welfare
Animal killing
Pest control techniques
Human–animal interaction
Canadian political phrases
Environmental crime
Endangered species | Shoot, shovel, and shut up | Biology | 727 |
33,835,593 | https://en.wikipedia.org/wiki/Gephi | Gephi is an open-source network analysis and visualization software package written in Java on the NetBeans platform.
History
Initially developed by students of the University of Technology of Compiègne (UTC) in France, Gephi was selected for the Google Summer of Code in 2009, 2010, 2011, 2012, and 2013.
Its latest version, 0.10.1, was released in 2023. Previous versions are 0.6.0 (2008), 0.7.0 (2010), 0.8.0 (2011), 0.8.1 (2012), 0.8.2 (2013), 0.9.0 (2015), 0.9.1 (2016) and 0.9.2 (2017).
The Gephi Consortium, created in 2010, is a French non-profit corporation which supports development of future releases of Gephi. Members include SciencesPo, Linkfluence, WebAtlas, and Quid. Gephi is also supported by a large community of users, structured on a discussion group and a forum and producing numerous blogposts, papers and tutorials.
In November 2022, a lightweight web version of Gephi, Gephi lite, was announced.
Applications
Gephi has been used in a number of research projects in academia, journalism and elsewhere, for instance in visualizing the global connectivity of New York Times content and examining Twitter network traffic during social unrest along with more traditional network analysis topics. Gephi is widely used within the digital humanities (in history, literature, political sciences, etc.), a community where many of its developers are involved.
Gephi inspired the LinkedIn InMaps and was used for the network visualizations for Truthy.
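As an illustration of a typical workflow feeding data into Gephi, the sketch below builds a toy graph with the NetworkX library and saves it as GEXF, the XML-based graph exchange format associated with the Gephi project; the graph data and attribute names are invented for the example.

```python
import networkx as nx

# Toy co-authorship graph; the names and weights are invented.
G = nx.Graph()
G.add_edge("Alice", "Bob", weight=3)
G.add_edge("Bob", "Carol", weight=1)
G.add_edge("Carol", "Alice", weight=2)

# Node attributes show up as columns in Gephi's Data Laboratory and can
# drive node color or size from the Appearance panel.
nx.set_node_attributes(G, dict(G.degree()), "degree")

# GEXF files can be opened directly in Gephi or Gephi lite.
nx.write_gexf(G, "coauthors.gexf")
```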
See also
Graph (discrete mathematics)
Graph drawing
Graph theory
Graph (data structure)
Social network analysis software
File formats
Dot language
GraphML
Graph Modelling Language
Related software
Cytoscape
Graph-tool
Graphviz
Tulip (software)
yEd
AllegroGraph
NetworkX
NodeXL
Pajek
NetMiner
References
External links
Gephi releases
Gephi wiki
2000 software
Network theory
Free application software
Graph drawing software
Free data and information visualization software | Gephi | Mathematics | 447 |
21,297,236 | https://en.wikipedia.org/wiki/Infamia | In ancient Rome, infamia (in-, "not", and fama, "reputation") was a loss of legal or social standing. As a technical term in Roman law, infamia was juridical exclusion from certain protections of Roman citizenship, imposed as a legal penalty by a censor or praetor. In more general usage during the Republic and Principate, infamia was damage to the esteem (aestimatio) in which a person was held socially; that is, to one's reputation. A person who suffered infamia was an infamis (plural infames).
As a legal penalty
Infamia was a form of censure more disgraceful than ignominia, which in its technical sense resulted from the censors' nota censoria, a figurative branding or marking of a citizen that included removal from the senate or other reduction of status. Ignominia, however, was an impermanent status that could be ameliorated, for instance by paying off a debt. A debtor who could not meet his obligations might eventually suffer infamia, a penalty that legislation passed under Julius Caesar sought to mitigate through payment options.
In addition to bankruptcy, a judgment of flagrant dishonesty over contractual relations and other business dealings could result in infamia. Examples of legal actions for which infamia was a penalty (called actiones famosae or actiones turpes) generally involved a betrayal of trust, at times as expressed by lack of respect for another's property rights. A successful lawsuit claiming theft (furtum) or seizure of movable goods by force (rapina) could result in infamia for the defendant. In 66 BC, a praetorian edict permitted lawsuits against "fraud by means of deception" (dolus) when no other contractual remedy was available. Dolus was so broadly defined that Cicero characterized this kind of lawsuit as a fishing expedition. A contractual obligation of mandatum was based on friendship and could not involve any payment, but a lawsuit could be brought to seek restitution for loss or damage; a depositum was the contractual placing of property in the keeping of someone who was not supposed to use it, and legal action could be undertaken to show that the depositary did not fulfill his obligation or refused to return it. A conviction for either an actio mandati or an actio depositi resulted in infamia primarily for breaking one's word, beyond material or financial loss.
Iniuria (from which English "injury" derives) was a broad category for a wrongful act that could be penalized by infamia, including bodily harm and damage against property or reputation, as well as "affronts to decency" and what would now be called sexual harassment.
Other grounds for infamia included dishonorable discharge from the military, bigamy, and "misbehavior in family life."
Consequences
Infames shared some conditions of status with slaves: they could not provide testimony in a court of law, and they were liable to corporal punishment. They could not bring lawsuits to the court on behalf of themselves or others, and they could not run for public office.
The infames
was an "inescapable consequence" for certain kinds of employment, including that of undertakers, executioners, prostitutes and pimps, entertainers such as actors and dancers, and gladiators. The collective infamia of stage performers, prostitutes, and gladiators arose from the uses to which they put their bodies: by subjecting themselves to public display, they had surrendered the right of privacy and bodily integrity that defined the citizen. The of entertainers did not exclude them from socializing among the Roman elite, and entertainers who were "stars", both men and women, sometimes became the lovers of such high-profile figures as Mark Antony and the dictator Sulla.
Charioteers may or may not have been infames; two jurists of the later Imperial era argue that athletic competitions were not mere entertainment but "seem useful" as instructive displays of Roman strength and virtus. The low status of those who competed in public games in Rome stands in striking contrast to athletics in Greece, where Olympic victors enjoyed high honors. A passive homosexual who was "outed" might be subject to social infamia in the colloquial sense without being socially ostracized, and if a citizen he might retain his legal standing.
Religious infamy
In late antiquity, when the Roman Empire had come under Christian rule, infamia was used to punish "religious deviants" such as heretics, apostates, and those who declined to give up their own religious practices and convert to Christianity.
The modern Roman Catholic Church has the similar concept of infamy.
See also
Sexuality in ancient Rome
Pittura infamante
References
External links
Roman law
Social classes in ancient Rome | Infamia | Biology | 996 |
20,526,814 | https://en.wikipedia.org/wiki/Wireless%20network%20organizations%20by%20size | Wireless network organizations sorted by size, giving an overview of the scale of these organizations around the globe and showing which networks are tiny and which are not.
Athens Wireless Metropolitan Network - 800
TWMN - 503
Pretoria Wireless Users Group - 455
Jawug - 330
Patras wireless metropolitan network - 250
Heraklion Student Wireless Network - 150
Patras Wireless Network - 150
Melbourne Wireless - 150
Personal Telco - 100
Wireless Leiden - 71
Cape Town Wireless User Group (CTWUG) - 70
Ioannina Wireless Network - 40
TasWireless - 37
Seattle Wireless - 30
Unknown
AirJaldi - 2000 computers linked
BWIC
Champaign-Urbana Community Wireless Network
NYCwireless
Outernet (network)
Wireless Toronto
Vancouver Community Network
Wireless network organizations | Wireless network organizations by size | Technology | 156 |
23,808,291 | https://en.wikipedia.org/wiki/Nekhoroshev%20estimates | The Nekhoroshev estimates are an important result in the theory of Hamiltonian systems concerning the long-time stability of solutions of integrable systems under a small perturbation of the Hamiltonian. The first paper on the subject was written by Nikolay Nekhoroshev in 1971.
The theorem complements both the Kolmogorov-Arnold-Moser theorem and the phenomenon of instability for nearly integrable Hamiltonian systems, sometimes called Arnold diffusion, in the following way: the KAM theorem tells us that many solutions to nearly integrable Hamiltonian systems persist under a perturbation for all time, while, as Vladimir Arnold first demonstrated in 1964, some solutions do not stay close to their integrable counterparts for all time. The Nekhoroshev estimates tell us that, nonetheless, all solutions stay close to their integrable counterparts for an exponentially long time. Thus, they restrict how quickly solutions can become unstable.
Statement
Let H(I, φ) = h(I) + εf(I, φ) be a nearly integrable n degree-of-freedom Hamiltonian, where (I, φ) are the action-angle variables. Ignoring the technical assumptions and details in the statement, Nekhoroshev estimates assert that:
|I(t) − I(0)| ≤ ε^(1/(2n)) for |t| ≤ exp(c ε^(−1/(2n))),
where c is a complicated constant.
See also
Arnold diffusion
References
Dynamical systems | Nekhoroshev estimates | Physics,Mathematics | 253 |
24,520,523 | https://en.wikipedia.org/wiki/C9H20 |
The molecular formula C9H20 (molar mass: 128.25 g/mol, exact mass: 128.1565 u) may refer to:
Nonane
List of isomers of nonane
Tetraethylmethane | C9H20 | Chemistry | 62 |
44,499,563 | https://en.wikipedia.org/wiki/Suillellus%20hypocarycinus | Suillellus hypocarycinus is a species of bolete fungus found in North America. Originally described as a species of Boletus by Rolf Singer in 1945, it was transferred to Suillellus by William Alphonso Murrill in 1948.
References
External links
hypocarycinus
Fungi described in 1945
Fungi of the United States
Fungi without expected TNC conservation status
Fungus species | Suillellus hypocarycinus | Biology | 83 |
2,648,478 | https://en.wikipedia.org/wiki/Lea%20test | The LEA Vision Test System is a series of pediatric vision tests designed specifically for children who do not know how to read the letters of the alphabet that are typically used in eye charts. There are numerous variants of the LEA test which can be used to assess the visual capabilities of near vision and distance vision, as well as several other aspects of occupational health, such as contrast sensitivity, visual field, color vision, visual adaptation, motion perception, and ocular function and accommodation.
History
The first version of the LEA test was developed in 1976 by Finnish pediatric ophthalmologist Lea Hyvärinen, MD, PhD. Dr. Hyvärinen completed her thesis on fluorescein angiography and helped start the first clinical laboratory in that area while serving as a fellow at the Wilmer Eye Institute of Johns Hopkins Hospital in 1967. During her time with the Wilmer Institute, she became interested in vision rehabilitation and assessment and has been working in that field since the 1970s, training rehabilitation teams, designing new visual assessment devices, and teaching. The first test within the LEA Vision Test System that Dr. Hyvärinen created was the classic LEA Symbols Test, followed shortly by the LEA Numbers Test, which was used in comparison studies within the field of occupational medicine.
Accuracy
Among the array of visual assessment picture tests that exist, the LEA symbols tests are the only tests that have been calibrated against the standardized Landolt C vision test symbol. The Landolt C is an optotype that is used throughout most of the world as the standardized symbol for measuring visual acuity. It is identical to the "C" that is used in the traditional Snellen chart.
In addition to this, the LEA symbols test has been experimentally verified to be both a valid and reliable measure of visual acuity. As is desirable of a good vision test, each of the four optotypes used in the symbols test has been proven to measure visual acuity similarly and blur equally as well, supporting the test's internal consistency.
A study published in Acta Ophthalmologica Scandinavica in 2006 showed that the Lea Symbols 15-line folding distance chart is clinically useful in detecting deficiencies in visual acuity in preschool children. The study, which compared visual acuity diagnoses from Lea symbols tests to those obtained via ophthalmological examination, revealed that the Lea symbols chart provided an accurate and sufficient assessment in 95.9% of the 149 preschool-age children tested. This suggests that Lea tests can be used confidently as an alternative to more costly and time-consuming pediatric tests of visual acuity.
Importance
The unique design of the LEA tests and their special optotypes allow for pediatric low vision to be diagnosed in children at much younger ages than standard vision tests allow. This is especially important in young children who possess other physical disabilities or mental disabilities and are entitled to receive early special education benefits. More than half of children who suffer from low vision also have other impairments or disabilities. Most of the LEA tests can also be used on children with significant brain damage and serve as one of the few methods that can accurately assess visual acuity in these situations.
Versions
The LEA Vision Test System currently contains over 40 different tests which target the assessment of many aspects of vision and communication deficiencies in both children and adults.
LEA Symbols Test
The oldest and most basic form of the LEA test is simply referred to as the "LEA Symbols Test". This test consists of four optotypes (test symbols): the outlines of an apple, a pentagon, a square, and a circle. Because these four symbols can be named and easily identified as everyday, concrete objects ("apple", "house", "window", and "ring"), they can be recognized at an earlier age than abstract letters or numbers can be. This enables preschool children to be tested for visual acuity long before they become familiar with the letter and numbers used in other standard vision charts.
The LEA Symbols Test is often used in the form of the three-dimensional (3-D) LEA Puzzle. This puzzle incorporates color along with the four standard optotypes to allow for measurement of visual acuity in children as young as fourteen months of age.
LEA Numbers Test
The "LEA Numbers Test" was the second of the LEA tests that was developed and can be used to test the visual acuity of older children and even adults. This test has a layout similar to a typical Snellen chart, with lines of numbers decreasing in size towards the bottom of the page. Like the optotypes of the LEA Symbols Test, these numbers are also calibrated against the Landolt C and blur equally.
LEA Grating Acuity Test
This test allows for the assessment of grating acuity, especially in children who possess severe or multiple visual deficiencies. The "LEA Gratings Test" has also been shown to be successful in vision testing of children with brain damage and is the only test that can reveal their limited capacity for the processing of large numbers of parallel lines.
LEA Contrast Sensitivity Test
Visual information that is presented in low contrast settings is very important to the process of visual communication. It is especially vital to assess a child's contrast sensitivity at a young age in order to determine the distance and accuracy with which the child can distinguish facial features. A very popular test designed specifically for this reason is the "Hiding Heidi Low Contrast Face Pictures" test (of which the LEA Vision Test System produces a version). This test uses a series of cards depicting cartoon faces of different contrast levels. The contrast sensitivity assessment obtained from this test is very important in educational settings because children with contrast deficiencies have extreme difficulty receiving visual cues from body language or facial expressions and often cannot read the blackboard or projector.
See also
Eye chart
Sloan letters
Snellen chart
Pediatric ophthalmology
Infant vision
Visual acuity testing in children
References
External links
Official website
Diagnostic ophthalmology
Optotypes | Lea test | Mathematics | 1,208 |
74,714,251 | https://en.wikipedia.org/wiki/X-ray%20diffraction%20computed%20tomography | X-ray diffraction computed tomography is an experimental technique that combines X-ray diffraction with the computed tomography data acquisition approach. X-ray diffraction (XRD) computed tomography (CT) was first introduced in 1987 by Harding et al. using a laboratory diffractometer and a monochromatic X-ray pencil beam. The first implementation of the technique at synchrotron facilities was performed in 1998 by Kleuker et al.
X-ray diffraction computed tomography can be divided into two main categories depending on how the XRD data are treated; specifically, the XRD data can be treated either as powder diffraction or as single crystal diffraction data, and this depends on the sample properties. If the sample contains small and randomly oriented crystals, then it generates smooth powder diffraction "rings" when using a 2D area detector. If the sample contains large crystals, then it generates "spotty" 2D diffraction patterns. The latter case can also be handled using a letterbox, cone or parallel X-ray beam and yields 2D or 3D images corresponding to maps of the crystallites or "grains" present in the sample and their properties, such as stress or strain. There exist several variations of this approach, including 3DXRD, X-ray diffraction contrast tomography (DCT) and high energy X-ray diffraction microscopy (HEDM).
X-ray diffraction computed tomography, often abbreviated as XRD-CT, typically refers to the technique invented by Harding et al. which assumes that the acquired data are powder diffraction data. For this reason, it has also been mentioned as powder diffraction computed tomography and diffraction scattering computed tomography (DSCT), however they both refer to the same method.
Data acquisition
XRD-CT employs a monochromatic pencil beam scanning approach and captures the diffraction signal in transmission geometry, producing a diffraction projection dataset. In this setup, the sample moves along an axis perpendicular to the beam's direction. It is illuminated with a monochromatic finely collimated or focused "pencil" X-ray beam. A 2D area detector then records the scattered X-rays, optimizing for best counting statistics and speed. Typically, the translational scan's size surpasses the sample's diameter, ensuring its full coverage at all measured angles. The size of the translation step is commonly matched to the X-ray beam's horizontal size. In a perfect scenario for any pencil-beam scanning tomographic method, the number of measured angles should equal the number of translation steps multiplied by π/2, adhering to the Nyquist sampling theorem. However, this number can often be reduced in practice to be equal to the number of translation steps without substantially compromising the quality of the reconstructed images. The usual angular range spans from 0 to π.
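For illustration, a short sketch of the sampling arithmetic described above, with hypothetical scan dimensions:

```python
import math

n_translations = 200   # translation steps across the sample (hypothetical)

# Nyquist-ideal number of projection angles for a pencil-beam scan:
n_angles_ideal = math.ceil(n_translations * math.pi / 2)   # 315

# Practical relaxation often used without much loss of image quality:
n_angles_practical = n_translations                        # 200

# Angular step for a 0-to-pi scan with the practical choice:
angular_step_deg = 180.0 / n_angles_practical              # 0.9 degrees
print(n_angles_ideal, n_angles_practical, angular_step_deg)
```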
Data reconstruction
In most studies, the predominant data reconstruction approach is the 'reverse analysis' introduced by Bleuet et al., where each sinogram is treated independently, yielding a new CT image. Most often the filtered back projection reconstruction algorithm is employed to reconstruct the XRD-CT images. The outcome is an image in which every pixel, or more accurately voxel, corresponds to a local diffraction pattern. The reconstructed data can also be seen as a stack of 2D square images, where each image corresponds to an X-ray scattering angle.
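A minimal sketch of this channel-by-channel filtered back projection, assuming the detector images have already been azimuthally integrated into a stack of sinograms (one per scattering-angle bin); the synthetic array, its shape convention, and the use of scikit-image's iradon routine are illustrative assumptions rather than part of any published pipeline:

```python
import numpy as np
from skimage.transform import iradon  # filtered back projection

# Synthetic stand-in for azimuthally integrated XRD-CT data:
# (n_channels, n_angles, n_translations) = one sinogram per 2theta bin.
rng = np.random.default_rng(0)
sinograms = rng.random((50, 180, 120))

n_channels, n_angles, n_translations = sinograms.shape
theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)

# Reconstruct each scattering-angle channel independently; the result is
# a (n_channels, n_translations, n_translations) stack in which each
# voxel column holds a local diffraction pattern.
volume = np.stack([
    iradon(sinograms[k].T, theta=theta, filter_name="ramp")
    for k in range(n_channels)
])
print(volume.shape)  # (50, 120, 120)
```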
Reconstruction artefacts
XRD-CT makes the following assumptions:
The sample is small and there are no significant parallax artefacts in the acquired diffraction data; when this assumption is not valid the reconstructed patterns contain a wide range of artefacts, such as inaccurate peak positions, distorted peak shapes and even artificial peak splitting
The acquired XRD data are powder diffraction-like and do not contain spotty data
The sample is not strongly absorbing the X-rays and there are no significant self-absorption problems in the acquired data
The chemistry of the sample is not changing significantly during the XRD-CT scan
In practice, one or more of these assumptions may not be valid and the data suffer from artefacts. There are strategies to remove or significantly suppress all of these artefacts:
Rather than employing the filtered back projection algorithm to reconstruct the XRD-CT images, it is possible to use another reconstruction approach, termed "Direct Least Squares Reconstruction" (DLSR), which performs peak fitting and tomographic reconstruction simultaneously, takes into account the geometry of the experimental setup, and yields parallax-artefact-free reconstructed images. Performing a 0 to 2π XRD-CT scan instead of 0 to π can lead to reconstructed patterns with accurate peak positions but not accurate peak shapes.
Spotty 2D XRD data acquired during the XRD-CT scan lead to streak or line artefacts in the reconstructed XRD-CT data; it is possible to remove or suppress these artefacts by applying filters during the azimuthal integration of the raw 2D diffraction patterns
The data can be corrected for self-absorption artefacts using an X-ray absorption-contrast CT scan of the same sample.
If the solid-state chemistry of the sample is changing during the XRD-CT scan, then other data acquisition approaches can be employed that can improve the temporal resolution of the method, such as the interlaced approach
Data analysis
Analyzing the local diffraction patterns can range from basic single-peak sequential batch fitting to a comprehensive one-step full-profile analysis, known as 'Rietveld-CT' (Wragg et al., 2015). The latter method stands out for its efficiency over the typical sequential method since it shares global parameters across all local models. Examples of these parameters include zero error and instrumental broadening, which enhance the refinement process's stability. To elaborate, each voxel in the reconstructed images is made up of a local model (like multi-phase scale factors, lattice parameters, and crystallite sizes) tailored to match the corresponding local diffraction pattern. This implies that only the overarching parameters are consistent across local models. However, the application of Rietveld-CT has been limited to small images, specifically those of 60 × 60 voxels, with the feasibility for larger images hinging on the computer memory available. Most often, though, full profile analysis of the local diffraction patterns is performed on a pixel-by-pixel or line-by-line basis using conventional XRD data analysis methods, such as Le Bail, Pawley and Rietveld. All these methods employ fitting based on the reconstructed diffraction patterns. Another approach, which is also computationally expensive, is the DLSR, which performs the tomographic data reconstruction and peak fitting in a single step. Regardless of the chosen analytical method, the final output comprises images filled with localized physico-chemical information. Each physico-chemical image corresponds to a refined parameter present in the local models, such as maps of scale factors, lattice parameters, and crystallite sizes.
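As a sketch of the simplest end of this spectrum (sequential single-peak batch fitting, not Rietveld-CT), the example below fits one Gaussian per voxel of a reconstructed data cube; the array shapes, the peak position, the synthetic data, and the use of scipy's curve_fit are assumptions for the example, standing in for whatever fitting engine is actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, sig, bkg):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2) + bkg

# Synthetic reconstructed XRD-CT stack: (n_channels, ny, nx), with a
# single diffraction peak near 7.5 degrees 2theta on a noisy background.
tth = np.linspace(5.0, 10.0, 200)
volume = (np.exp(-0.5 * ((tth[:, None, None] - 7.5) / 0.1) ** 2)
          + 0.05 * np.random.default_rng(0).random((200, 8, 8)))

peak_area = np.zeros(volume.shape[1:])  # map of fitted integrated intensities
for iy in range(volume.shape[1]):
    for ix in range(volume.shape[2]):
        pattern = volume[:, iy, ix]
        p0 = [pattern.max(), 7.5, 0.1, pattern.min()]
        popt, _ = curve_fit(gaussian, tth, pattern, p0=p0)
        # Integrated intensity of a Gaussian: amp * |sig| * sqrt(2*pi).
        peak_area[iy, ix] = popt[0] * abs(popt[2]) * np.sqrt(2.0 * np.pi)
```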
See also
X-ray diffraction
computed tomography
powder diffraction
3DXRD
synchrotron
References
Laboratory techniques in condensed matter physics
X-ray crystallography
X-ray computed tomography
1987 introductions | X-ray diffraction computed tomography | Physics,Chemistry,Materials_science | 1,480 |
310,238 | https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Vi%C3%A8te | François Viète (; 1540 – 23 February 1603), known in Latin as Franciscus Vieta, was a French mathematician whose work on new algebra was an important step towards modern algebra, due to his innovative use of letters as parameters in equations. He was a lawyer by trade, and served as a privy councillor to both Henry III and Henry IV of France.
Biography
Early life and education
Viète was born at Fontenay-le-Comte in present-day Vendée. His grandfather was a merchant from La Rochelle. His father, Etienne Viète, was an attorney in Fontenay-le-Comte and a notary in Le Busseau. His mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France.
Viète went to a Franciscan school and in 1558 studied law at Poitiers, graduating as a Bachelor of Laws in 1559. A year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots.
Serving Parthenay
In 1564, Viète entered the service of Antoinette d'Aubeterre, Lady Soubise, wife of Jean V de Parthenay-Soubise, one of the main Huguenot military leaders and accompanied him to Lyon to collect documents about his heroic defence of that city against the troops of Jacques of Savoy, 2nd Duke of Nemours just the year before.
The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay, Soubise's twelve-year-old daughter. He taught her science and mathematics and wrote for her numerous treatises on astronomy and trigonometry, some of which have survived. In these treatises, Viète used decimal numbers (twenty years before Stevin's paper) and he also noted the elliptic orbit of the planets, forty years before Kepler and twenty years before Giordano Bruno's death.
John V de Parthenay presented him to King Charles IX of France. Viète wrote a genealogy of the Parthenay family and following the death of Jean V de Parthenay-Soubise in 1566 his biography.
In 1568, Antoinette, Lady Soubise, married her daughter Catherine to Baron Charles de Quellenec and Viète went with Lady Soubise to La Rochelle, where he mixed with the highest Calvinist aristocracy, leaders like Coligny and Condé and Queen Jeanne d’Albret of Navarre and her son, Henry of Navarre, the future Henry IV of France.
In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron De Quellenec, where they claimed the Baron was unable (or unwilling) to provide an heir.
First steps in Paris
In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad Canonem mathematicum liber singularis and wrote new mathematical research by night or during periods of leisure. He was known to dwell on any one question for up to three days, his elbow on the desk, feeding himself without changing position (according to his friend, Jacques de Thou).
In 1572, Viète was in Paris during the St. Bartholomew's Day massacre. That night, Baron De Quellenec was killed after having tried to save Admiral Coligny the previous night. The same year, Viète met Françoise de Rohan, Lady of Garnache, and became her adviser against Jacques, Duke of Nemours.
In 1573, he became a councillor of the Parlement of Rennes, at Rennes, and two years later, he obtained the agreement of Antoinette d'Aubeterre for the marriage of Catherine of Parthenay to Duke René de Rohan, Françoise's brother.
In 1576, Henri, duc de Rohan took him under his special protection, recommending him in 1580 as "maître des requêtes". In 1579, Viète finished the printing of his Universalium inspectionum (Mettayer publisher), published as an appendix to a book of two trigonometric tables (Canon mathematicus, seu ad triangula, the "canon" referred to by the title of his Universalium inspectionum, and Canonion triangulorum laterum rationalium). A year later, he was appointed maître des requêtes to the parliament of Paris, committed to serving the king. That same year, his success in the trial between the Duke of Nemours and Françoise de Rohan, to the benefit of the latter, earned him the resentment of the tenacious Catholic League.
Exile in Fontenay
Between 1583 and 1585, the League persuaded King Henry III to release Viète, who had been accused of sympathy with the Protestant cause. Henry of Navarre, at Rohan's instigation, addressed two letters to King Henry III of France on March 3 and April 26, 1585, in an attempt to obtain Viète's restoration to his former office, but he failed.
Viète retired to Fontenay and Beauvoir-sur-Mer, with François de Rohan. He spent four years devoted to mathematics, writing his New Algebra (1591).
Code-breaker to two kings
In 1589, Henry III took refuge in Blois. He commanded the royal officials to be at Tours before 15 April 1589. Viète was one of the first who came back to Tours. He deciphered the secret letters of the Catholic League and other enemies of the king. Later, he had arguments with the classical scholar Joseph Juste Scaliger. Viète triumphed against him in 1590.
After the death of Henry III, Viète became a privy councillor to Henry of Navarre, now Henry IV of France. He was appreciated by the king, who admired his mathematical talents. Viète was given the position of councillor of the parlement at Tours. In 1590, Viète broke the key to a Spanish cipher, consisting of more than 500 characters, and this meant that all dispatches in that language which fell into the hands of the French could be easily read.
Henry IV published a letter from Commander Moreo to the King of Spain. The contents of this letter, read by Viète, revealed that the head of the League in France, Charles, Duke of Mayenne, planned to become king in place of Henry IV. This publication led to the settlement of the Wars of Religion. The King of Spain accused Viète of having used magical powers.
In 1593, Viète published his arguments against Scaliger. Beginning in 1594, he was appointed exclusively deciphering the enemy's secret codes.
Gregorian calendar
In 1582, Pope Gregory XIII published his bull Inter gravissimas and ordered Catholic kings to comply with the change from the Julian calendar, based on the calculations of the Calabrian doctor Aloysius Lilius, aka Luigi Lilio or Luigi Giglio. His work was resumed, after his death, by the scientific adviser to the Pope, Christopher Clavius.
Viète accused Clavius, in a series of pamphlets (1600), of introducing corrections and intermediate days in an arbitrary manner, and misunderstanding the meaning of the works of his predecessor, particularly in the calculation of the lunar cycle. Viète gave a new timetable, which Clavius cleverly refuted, after Viète's death, in his Explicatio (1603).
It is said that Viète was wrong. Without doubt, he believed himself to be a kind of "King of Times", as the historian of mathematics Dhombres claimed. It is true that Viète held Clavius in low esteem, as evidenced by De Thou.
The Adriaan van Roomen problem
In 1596, Scaliger resumed his attacks from the University of Leyden. Viète replied definitively the following year. In March that same year, Adriaan van Roomen sought the resolution, by any of Europe's top mathematicians, to a polynomial equation of degree 45. King Henri IV received a snub from the Dutch ambassador, who claimed that there was no mathematician in France. He said it was simply because some Dutch mathematician, Adriaan van Roomen, had not asked any Frenchman to solve his problem.
Viète came, saw the problem, and, after leaning on a window for a few minutes, solved it. It was the equation relating sin(x) and sin(x/45). He resolved this at once, and said he was able to give at the same time (actually the next day) the solutions to the other 22 problems to the ambassador. "Ut legit, ut solvit," he later said. Further, he sent a new problem back to Van Roomen, for resolution by Euclidean tools (ruler and compass) of the lost answer to the problem first set by Apollonius of Perga. Van Roomen could not overcome that problem without resorting to a trick (see detail below).
Final years
In 1598, Viète was granted special leave. Henry IV, however, charged him to end the revolt of the Notaries, whom the King had ordered to pay back their fees. Sick and exhausted by work, he left the King's service in December 1602 and received 20,000 écus, which were found at his bedside after his death.
A few weeks before his death, he wrote a final thesis on issues of cryptography, which essay made obsolete all encryption methods of the time. He died on 23 February 1603, as De Thou wrote, leaving two daughters, Jeanne, whose mother was Barbe Cottereau, and Suzanne, whose mother was Julienne Leclerc. Jeanne, the eldest, died in 1628, having married Jean Gabriau, a councillor of the parliament of Brittany. Suzanne died in January 1618 in Paris.
The cause of Viète's death is unknown. Alexander Anderson, student of Viète and publisher of his scientific writings, speaks of a "praeceps et immaturum autoris fatum" (meeting an untimely end).
Work and thought
New algebra
Background
At the end of the 16th century, mathematics was placed under the dual aegis of Greek geometry and the Arabic procedures for resolution. At the time of Viète, algebra therefore oscillated between arithmetic, which gave the appearance of a list of rules; and geometry, which seemed more rigorous. Meanwhile, Italian mathematicians Luca Pacioli, Scipione del Ferro, Niccolò Fontana Tartaglia, Gerolamo Cardano, Lodovico Ferrari, and especially Raphael Bombelli (1560) all developed techniques for solving equations of the third degree, which heralded a new era.
On the other hand, from the German school of Coss, the Welsh mathematician Robert Recorde (1550) and the Dutchman Simon Stevin (1581) brought an early algebraic notation: the use of decimals and exponents. However, complex numbers remained at best a philosophical way of thinking. Descartes, almost a century after their invention, used them as imaginary numbers. Only positive solutions were considered and using geometrical proof was common.
The mathematician's task was in fact twofold. It was necessary to produce algebra in a more geometrical way (i.e. to give it a rigorous foundation), and it was also necessary to make geometry more algebraic, allowing for analytical calculation in the plane. Viète and Descartes solved this dual task in a double revolution.
Viète's symbolic algebra
Firstly, Viète gave algebra a foundation as strong as that of geometry. He then ended the algebra of procedures (al-Jabr and al-Muqabala), creating the first symbolic algebra, and claiming that with it, all problems could be solved (nullum non problema solvere).
Viète dedicated the Isagoge to Catherine de Parthenay.
Viète did not know "multiplied" notation (given by William Oughtred in 1631) or the symbol of equality, =, an absence which is more striking because Robert Recorde had used the present symbol for this purpose since 1557, and Guilielmus Xylander had used parallel vertical lines since 1575. Note also the use of a 'u' like symbol with a number above it for an unknown to a given power by Rafael Bombelli in 1572.
Viète had neither much time, nor students able to brilliantly illustrate his method. He took years in publishing his work (he was very meticulous), and most importantly, he made a very specific choice to separate the unknown variables, using consonants for parameters and vowels for unknowns. In this notation he perhaps followed some older contemporaries, such as Petrus Ramus, who designated the points in geometrical figures by vowels, making use of consonants, R, S, T, etc., only when these were exhausted. This choice proved unpopular with future mathematicians and Descartes, among others, preferred the first letters of the alphabet to designate the parameters and the latter for the unknowns.
Viète also remained a prisoner of his time in several respects. First, he was heir of Ramus and did not address the lengths as numbers. His writing kept track of homogeneity, which did not simplify their reading. He failed to recognize the complex numbers of Bombelli and needed to double-check his algebraic answers through geometrical construction. Although he was fully aware that his new algebra was sufficient to give a solution, this concession tainted his reputation.
However, Viète created many innovations: the binomial formula, which would be taken up by Pascal and Newton, and the link between the coefficients of a polynomial and the sums and products of its roots, called Viète's formulas.
Geometric algebra
Viète was well skilled in most modern artifices, aiming at the simplification of equations by the substitution of new quantities having a certain connection with the primitive unknown quantities. Another of his works, Recensio canonica effectionum geometricarum, bears a modern stamp, being what was later called an algebraic geometry—a collection of precepts how to construct algebraic expressions with the use of ruler and compass only. While these writings were generally intelligible, and therefore of the greatest didactic importance, the principle of homogeneity, first enunciated by Viète, was so far in advance of his times that most readers seem to have passed it over. That principle had been made use of by the Greek authors of the classic age; but of later mathematicians only Hero, Diophantus, etc., ventured to regard lines and surfaces as mere numbers that could be joined to give a new number, their sum.
The study of such sums, found in the works of Diophantus, may have prompted Viète to lay down the principle that quantities occurring in an equation ought to be homogeneous, all of them lines, or surfaces, or solids, or supersolids — an equation between mere numbers being inadmissible. During the centuries that have elapsed between Viète's day and the present, several changes of opinion have taken place on this subject. Modern mathematicians like to make homogeneous such equations as are not so from the beginning, in order to get values of a symmetrical shape. Viète himself did not see that far; nevertheless, he indirectly suggested the thought. He also conceived methods for the general resolution of equations of the second, third and fourth degrees different from those of Scipione dal Ferro and Lodovico Ferrari, with which he had not been acquainted. He devised an approximate numerical solution of equations of the second and third degrees, wherein Leonardo of Pisa must have preceded him, but by a method which was completely lost.
Above all, Viète was the first mathematician who introduced notations for the problem (and not just for the unknowns). As a result, his algebra was no longer limited to the statement of rules, but relied on an efficient computational algebra, in which the operations act on the letters and the results can be obtained at the end of the calculations by a simple replacement. This approach, which is the heart of contemporary algebraic method, was a fundamental step in the development of mathematics. With this, Viète marked the end of medieval algebra (from Al-Khwarizmi to Stevin) and opened the modern period.
The logic of species
Being wealthy, Viète began to publish at his own expense, for a few friends and scholars in almost every country of Europe, the systematic presentation of his mathematic theory, which he called "species logistic" (from species: symbol) or art of calculation on symbols (1591).
He described in three stages how to proceed for solving a problem:
As a first step, he summarized the problem in the form of an equation. Viète called this stage the Zetetic. It denotes the known quantities by consonants (B, D, etc.) and the unknown quantities by the vowels (A, E, etc.)
In a second step, he made an analysis. He called this stage the Poristic. Here mathematicians must discuss the equation and solve it. It gives the characteristic of the problem, the porisma (corollary), from which we can move to the next step.
In the last step, the exegetical analysis, he returned to the initial problem which presents a solution through a geometrical or numerical construction based on porisma.
Among the problems addressed by Viète with this method is the complete resolution of quadratic equations of the form x² + xb = c and third-degree equations of the form x³ + ax = b (Viète reduced them to quadratic equations). He knew the connection between the positive roots of an equation (which, in his day, were alone thought of as roots) and the coefficients of the different powers of the unknown quantity (see Viète's formulas and their application on quadratic equations). He discovered the formula for deriving the sine of a multiple angle, knowing that of the simple angle with due regard to the periodicity of sines. This formula must have been known to Viète in 1593.
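As a quick numerical illustration of these root-coefficient relations in modern notation (not Viète's own), the sketch below checks them for a monic quadratic:

```python
import numpy as np

# Monic quadratic x^2 + bx + c with b = -5, c = 6 (roots 2 and 3).
b, c = -5.0, 6.0
roots = np.roots([1.0, b, c])

# Viete's formulas: sum of roots = -b, product of roots = c.
assert np.isclose(roots.sum(), -b)
assert np.isclose(roots.prod(), c)
print(roots)  # [3. 2.]
```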
Viète's formula
In 1593, based on geometrical considerations and through trigonometric calculations perfectly mastered, he discovered the first infinite product in the history of mathematics by giving an expression of π, now known as Viète's formula:
2/π = (√2/2) · (√(2 + √2)/2) · (√(2 + √(2 + √2))/2) · ⋯
He provides 10 decimal places of π by applying the Archimedes method to a polygon with 6 × 2^16 = 393,216 sides.
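The product converges quickly and is easy to evaluate numerically; the sketch below (a modern convenience, of course, not Viète's polygon-based computation) builds the nested radicals a₁ = √2, aₖ₊₁ = √(2 + aₖ) that generate the successive factors aₖ/2:

```python
import math

def viete_pi(n_factors: int = 20) -> float:
    """Approximate pi via Viete's product: 2/pi = product of a_k / 2,
    with a_1 = sqrt(2) and a_{k+1} = sqrt(2 + a_k)."""
    a, product = 0.0, 1.0
    for _ in range(n_factors):
        a = math.sqrt(2.0 + a)
        product *= a / 2.0
    return 2.0 / product

print(viete_pi(20))                 # 3.141592653589...
print(abs(viete_pi(20) - math.pi))  # on the order of 1e-12 after 20 factors
```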
Adriaan van Roomen's challenge and the problem of Apollonius
This famous controversy is told by Tallemant des Réaux in the 46th story of the first volume of Les Historiettes. Mémoires pour servir à l'histoire du XVIIe siècle.
When, in 1595, Viète published his response to the problem set by Adriaan van Roomen, he proposed finding the resolution of the old problem of Apollonius, namely to find a circle tangent to three given circles. Van Roomen proposed a solution using a hyperbola, with which Viète did not agree, as he was hoping for a solution using Euclidean tools.
Viète published his own solution in 1600 in his work Apollonius Gallus. In this paper, Viète made use of the center of similitude of two circles. His friend De Thou said that Adriaan van Roomen immediately left the University of Würzburg, saddled his horse and went to Fontenay-le-Comte, where Viète lived. According to De Thou, he stayed a month with him, and learned the methods of the new algebra. The two men became friends and Viète paid all van Roomen's expenses before his return to Würzburg.
This resolution had an almost immediate impact in Europe and Viète earned the admiration of many mathematicians over the centuries. Viète did not deal with the degenerate cases (coincident circles, tangent circles, etc.), but recognized that the number of solutions depends on the relative position of the three circles, and outlined the ten resulting situations. Descartes completed (in 1643) the theorem of the three circles of Apollonius, leading to a quadratic equation in 87 terms, each of which is a product of six factors (which, with this method, makes the actual construction humanly impossible).
Religious and political beliefs
Viète was accused of Protestantism by the Catholic League, but he was not a Huguenot. His father was, according to Dhombres. Indifferent in religious matters, he did not adopt the Calvinist faith of Parthenay, nor that of his other protectors, the Rohan family. His call to the parliament of Rennes proved the opposite. At the reception as a member of the court of Brittany, on 6 April 1574, he read in public a statement of Catholic faith.
Nevertheless, Viète defended and protected Protestants his whole life, and suffered, in turn, the wrath of the League. It seems that for him, the stability of the state was to be preserved and that under this requirement, the King's religion did not matter. At that time, such people were called "Politicals."
Furthermore, at his death, he did not want to confess his sins. A friend had to convince him that his own daughter would not find a husband, were he to refuse the sacraments of the Catholic Church. Whether Viète was an atheist or not is a matter of debate.
Publications
Chronological list
Between 1564 and 1568, Viète prepared for his student, Catherine de Parthenay, some textbooks of astronomy and trigonometry and a treatise that was never published: Harmonicon coeleste.
In 1579, the trigonometric tables Canon mathematicus, seu ad triangula, published together with a table of rational-sided triangles Canonion triangulorum laterum rationalium, and a book of trigonometry Universalium inspectionum ad canonem mathematicum – which he published at his own expense and with great printing difficulties. This text contains many formulas on the sine and cosine and is unusual in using decimal numbers. The trigonometric tables here exceeded those of Regiomontanus (De triangulis omnimodis, 1533) and Rheticus (1543, annexed to De revolutionibus of Copernicus). (Alternative scan of a 1589 reprint)
In 1589, Deschiffrement d'une lettre escripte par le Commandeur Moreo au Roy d'Espaigne son maître.
In 1590, Deschiffrement description of a letter by the Commander Moreo at Roy Espaigne of his master, Tours: Mettayer.
In 1591:
In artem analyticem isagoge (Introduction to the art of analysis), also known as Algebra Nova (New Algebra) Tours: Mettayer, in 9 folio; the first edition of the Isagoge.
Zeteticorum libri quinque. Tours: Mettayer, in 24 folio; which are the five books of Zetetics, a collection of problems from Diophantus solved using the analytical art.
Between 1591 and 1593, Effectionum geometricarum canonica recensio. Tours: Mettayer, in 7 folio.
In 1593:
Vietae Supplementum geometriae. Tours: Francisci, in 21 folio.
Francisci Vietae Variorum de rebus responsorum mathematics liber VIII. Tours: Mettaye, in 49 folio; about the challenges of Scaliger.
Variorum de rebus mathematicis responsorum liber VIII; the "Eighth Book of Varied Responses" in which he talks about the problems of the trisection of the angle (which he acknowledges that it is bound to an equation of third degree) of squaring the circle, building the regular heptagon, etc.
In 1594, Munimen adversus nova cyclometrica. Paris: Mettayer, in quarto, 8 folio; again, a response against Scaliger.
In 1595, Ad problema quod omnibus mathematicis totius orbis construendum proposuit Adrianus Romanus, Francisci Vietae responsum. Paris: Mettayer, in quarto, 16 folio; about the Adriaan van Roomen problem.
In 1600:
De numerosa potestatum ad exegesim resolutione. Paris: Le Clerc, in 36 folio; work that provided the means for extracting roots and solutions of equations of degree at most 6.
Francisci Vietae Apollonius Gallus. Paris: Le Clerc, in quarto, 13 folio; where he referred to himself as the French Apollonius.
Between 1600 and 1602:
Fontenaeensis libellorum supplicum in Regia magistri relatio Kalendarii vere Gregoriani ad ecclesiasticos doctores exhibita Pontifici Maximi Clementi VIII. Paris: Mettayer, in quarto, 40 folio.
Francisci Vietae adversus Christophorum Clavium expostulatio. Paris: Mettayer, in quarto, 8 folio; his theses against Clavius.
Posthumous publications
1612:
Supplementum Apollonii Galli edited by Marin Ghetaldi.
Supplementum Apollonii Redivivi sive analysis problematis bactenus desiderati ad Apollonii Pergaei doctrinam a Marino Ghetaldo Patritio Regusino hujusque non ita pridem institutam edited by Alexander Anderson.
1615:
Ad Angularum Sectionem Analytica Theoremata F. Vieta primum excogitata at absque ulla demonstratione ad nos transmissa, iam tandem demonstrationibus confirmata edited by Alexander Anderson.
Pro Zetetico Apolloniani problematis a se jam pridem edito in supplemento Apollonii Redivivi Zetetico Apolloniani problematis a se jam pridem edito; in qua ad ea quae obiter inibi perstrinxit Ghetaldus respondetur edited by Alexander Anderson
Francisci Vietae Fontenaeensis, De aequationum — recognitione et emendatione tractatus duo per Alexandrum Andersonum edited by Alexander Anderson
1617: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis edited by Alexander Anderson
1619: Exercitationum Mathematicarum Decas Prima edited by Alexander Anderson
1631: In artem analyticem isagoge. Eiusdem ad logisticem speciosam notae priores, nunc primum in lucem editae. Paris: Baudry, in 12 folio; the second edition of the Isagoge, including the posthumously published Ad logisticem speciosam notae priores.
Reception and influence
During the ascendancy of the Catholic League, Viète's secretary was Nathaniel Tarporley, perhaps one of the more interesting and enigmatic mathematicians of 16th-century England. When he returned to London, Tarporley became one of the trusted friends of Thomas Harriot.
Apart from Catherine de Parthenay, Viète's other notable students were: French mathematician Jacques Aleaume, from Orleans, Marino Ghetaldi of Ragusa, Jean de Beaugrand and the Scottish mathematician Alexander Anderson. They illustrated his theories by publishing his works and continuing his methods. At his death, his heirs gave his manuscripts to Peter Aleaume. We give here the most important posthumous editions:
In 1612: Supplementum Apollonii Galli of Marino Ghetaldi.
From 1615 to 1619: Animadversionis in Franciscum Vietam, Clemente a Cyriaco nuper by Alexander Anderson
Francisci Vietae Fontenaeensis ab aequationum recognitione et emendatione Tractatus duo Alexandrum per Andersonum. Paris, Laquehay, 1615, in 4, 135 p. The death of Alexander Anderson unfortunately halted the publication.
In 1630, an Introduction en l'art analytic ou nouvelle algèbre ('Introduction to the analytic art or modern algebra), translated into French and commentary by mathematician J. L. Sieur de Vaulezard. Paris, Jacquin.
The Five Books of François Viette's Zetetic (Les cinq livres des zététiques de François Viette), put into French, and commented increased by mathematician J. L. Sieur de Vaulezard. Paris, Jacquin, p. 219.
The same year, there appeared an Isagoge by Antoine Vasset (a pseudonym of Claude Hardy), and the following year, a translation into Latin of Beaugrand, which Descartes would have received.
In 1648, the corpus of mathematical works printed by Frans van Schooten, professor at Leiden University (Elzevirs presses). He was assisted by Jacques Golius and Mersenne.
The English mathematicians Thomas Harriot and Isaac Newton, and the Dutch physicist Willebrord Snellius, the French mathematicians Pierre de Fermat and Blaise Pascal all used Viète's symbolism.
About 1770, the Italian mathematician Targioni Tozzetti found in Florence Viète's Harmonicon coeleste. Viète had written in it: Describat Planeta Ellipsim ad motum anomaliae ad Terram. (That shows he adopted Copernicus' system and understood before Kepler the elliptic form of the orbits of planets.)
In 1841, the French mathematician Michel Chasles was one of the first to reevaluate his role in the development of modern algebra.
In 1847, a letter from François Arago, perpetual secretary of the Academy of Sciences (Paris), announced his intention to write a biography of François Viète.
Between 1880 and 1890, the polytechnician Fréderic Ritter, based in Fontenay-le-Comte, was the first translator of the works of François Viète and his first contemporary biographer with Benjamin Fillon.
Descartes' views on Viète
Thirty-four years after the death of Viète, the philosopher René Descartes published his method and a book of geometry that changed the landscape of algebra and built on Viète's work, applying it to geometry by removing its requirements of homogeneity. Descartes, accused by Jean Baptiste Chauveau, a former classmate from La Flèche, explained in a letter to Mersenne (February 1639) that he had never read those works. Descartes accepted Viète's view of mathematics, in which study stresses the self-evidence of results, which Descartes implemented by translating symbolic algebra into geometric reasoning. Descartes adopted the term mathesis universalis, which he called an "already venerable term with a received usage", which originated in van Roomen's book Mathesis Universalis.
Elsewhere, Descartes said that Viète's notations were confusing and used unnecessary geometric justifications. In some letters, he showed he understands the program of the Artem Analyticem Isagoge; in others, he shamelessly caricatured Viète's proposals. One of his biographers, Charles Adam, noted this contradiction:
Current research has not shown the extent of the direct influence of the works of Viète on Descartes. This influence could have been formed through the works of Adriaan van Roomen or Jacques Aleaume at the Hague, or through the book by Jean de Beaugrand.
In his letters to Mersenne, Descartes consciously minimized the originality and depth of the work of his predecessors. "I began," he says, "where Vieta finished". His views emerged in the 17th century and mathematicians won a clear algebraic language without the requirements of homogeneity. Many contemporary studies have restored the work of Parthenay's mathematician, showing he had the double merit of introducing the first elements of literal calculation and building a first axiomatic for algebra.
Although Viète was not the first to propose denoting unknown quantities by letters (Jordanus Nemorarius had done so earlier), it would be simplistic to reduce his innovations to that discovery alone; he stands rather at the junction of the algebraic transformations made during the late sixteenth and early seventeenth centuries.
See also
Vieta's formulas
Michael Stifel
Rafael Bombelli
Notes
Bibliography
Bailey Ogilvie, Marilyn; Harvey, Joy Dorothy. The Biographical Dictionary of Women in Science: L–Z. Google Books. p. 985.
Bachmakova, Izabella G.; Slavutin, E. I. "Genesis Triangulorum de François Viète et ses recherches dans l'analyse indéterminée", Archives for History of Exact Science, 16 (4), 1977, 289–306.
Bashmakova, Izabella Grigorievna; Smirnova Galina S; Shenitzer, Abe. The Beginnings and Evolution of Algebra. Google Books. pp. 75–.
Biard, Joel; Rāshid, Rushdī. Descartes et le Moyen Age. Paris: Vrin, 1998. Google Books
Burton, David M (1985). The History of Mathematics: An Introduction. Newton, Massachusetts: Allyn and Bacon, Inc.
Cajori, F. (1919). A History of Mathematics. pp. 152 and onward.
Calinger, Ronald (ed.) (1995). Classics of Mathematics. Englewood Cliffs, New Jersey: Prentice–Hall, Inc.
Calinger, Ronald. Vita mathematica. Mathematical Association of America. Google Books
Chabert, Jean-Luc; Barbin, Évelyne; Weeks, Chris. A History of Algorithms. Google Books
Derbyshire, John (2006). Unknown Quantity a Real and Imaginary History of Algebra. Scribd.com
Eves, Howard (1980). Great Moments in Mathematics (Before 1650). The Mathematical Association of America. Google Books
Grisard, J. (1968) François Viète, mathématicien de la fin du seizième siècle: essai bio-bibliographique (Thèse de doctorat de 3ème cycle) École Pratique des Hautes Études, Centre de Recherche d'Histoire des Sciences et des Techniques, Paris.
Godard, Gaston. François Viète (1540–1603), Father of Modern Algebra. Université de Paris-VII, France, Recherches vendéennes.
Hadden, Richard W. On the Shoulders of Merchants. Google Books
Hofmann, Joseph E (1957). The History of Mathematics, translated by F. Graynor and H. O. Midonick. New York, New York: The Philosophical Library.
Joseph, Anthony. Round tables. European Congress of Mathematics. Google Books
Michael Sean Mahoney (1994). The mathematical career of Pierre de Fermat (1601–1665). Google Books
Jacob Klein. Die griechische Logistik und die Entstehung der Algebra in: Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik, Abteilung B: Studien, Band 3, Erstes Heft, Berlin 1934, p. 18–105 and Zweites Heft, Berlin 1936, p. 122–235; translated into English by Eva Brann as: Greek Mathematical Thought and the Origin of Algebra. Cambridge, Mass., 1968.
Mazur, Joseph (2014). Enlightening Symbols: A Short History of Mathematical Notation and Its Hidden Powers. Princeton, New Jersey: Princeton University Press.
Nadine Bednarz, Carolyn Kieran, Lesley Lee. Approaches to algebra. Google Books
Otte, Michael; Panza, Marco. Analysis and Synthesis in Mathematics. Google Books
Pycior, Helena M. Symbols, Impossible Numbers, and Geometric Entanglements. Google Books
Francisci Vietae Opera Mathematica, collected by F. Van Schooten. Leyde, Elzévir, 1646, p. 554. Hildesheim–New York: Georg Olms Verlag (1970).
The complete corpus (excluding the Harmonicon) was published by Frans van Schooten, professor at Leiden, as Francisci Vietæ. Opera mathematica, in unum volumen congesta ac recognita, opera atque studio Francisci a Schooten, Officine de Bonaventure et Abraham Elzevier, Leyde, 1646. Gallica.bnf.fr (pdf).
Stillwell, John. Mathematics and its history. Google Books
Varadarajan, V. S. (1998). Algebra in Ancient and Modern Times. The American Mathematical Society. Google Books
Attribution
External links
New Algebra (1591) online
Francois Viète: Father of Modern Algebraic Notation
The Lawyer and the Gambler
About Tarporley
Site de Jean-Paul Guichard
L'algèbre nouvelle
1540 births
1603 deaths
People from Fontenay-le-Comte
16th-century cryptographers
16th-century French mathematicians
Algebraists
French cryptographers
University of Poitiers alumni
16th-century French lawyers | François Viète | Mathematics | 7,821 |
15,681,978 | https://en.wikipedia.org/wiki/Xft | Xft, the X FreeType interface library, is a free computer program library written by Keith Packard.
It uses the MIT/X license that The Open Group applied when it restored the traditional license after X11R6.4.
It is designed to allow the FreeType font rasterizer to be used with the X Rendering Extension; it is generally employed to use FreeType's anti-aliased fonts with the X Window System. Xft also depends on fontconfig for access to the system fonts.
References
External links
Xft homepage
A tutorial by the author
Fontconfig homepage
Computer libraries
Freedesktop.org
X-based libraries | Xft | Technology | 138 |
48,942,135 | https://en.wikipedia.org/wiki/Shadow%20Matching | Shadow matching (a.k.a. Shadow mapping) is a new positioning method that improves positioning accuracy of global navigation satellite systems (GNSS) in urban environments. The shadow matching positioning principle was first proposed and the name 'shadow matching' was first introduced by Paul D Groves. The principle of shadow matching combines two commonly known principles together: GNSS signal availability determination using 3D building models and the fingerprinting-like positioning techniques.
The principle of shadow matching is simple. Due to obstruction by buildings in urban canyons, some of the GNSS satellites will be receivable in some parts of a street, but not all of them. Whether each direct signal is receivable can be predicted using a 3D city model. Consequently, by determining whether a direct signal is being received from a given satellite, the user can localize their position to within one of two areas of the street. By considering other satellites, the position solution may be refined further. At each epoch, a set of candidate user positions is generated close to the user's low-accuracy conventional GNSS positioning solution. At each candidate user position, the predicted satellite visibility is matched with the real observations. The candidate position that has the best match between the prediction and the real observations can be deemed the shadow matching positioning solution. This process can be conducted epoch by epoch, so the GNSS user can be either static or dynamic.
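As a rough illustration of the matching loop described above, the following Python sketch scores a grid of candidate positions by counting the satellites whose predicted visibility agrees with what was actually observed. Everything here is a stand-in: a real implementation would back predicted_visible with ray-tracing against a 3D city model, and the satellite IDs, geometry, and grid spacing are invented for the example.

```python
import itertools

def shadow_match(initial_fix, observed, predicted_visible,
                 half_width=20.0, step=2.0):
    """Score candidate positions around a coarse GNSS fix.

    initial_fix       -- (east, north) of the conventional GNSS solution, metres
    observed          -- dict {satellite_id: bool}, True if a direct signal
                         was actually received
    predicted_visible -- callable (east, north, satellite_id) -> bool; in a
                         real system this would query a 3D city model
    """
    e0, n0 = initial_fix
    offsets = [i * step - half_width
               for i in range(int(2 * half_width / step) + 1)]
    best_score, best_pos = -1, initial_fix
    for de, dn in itertools.product(offsets, offsets):
        cand = (e0 + de, n0 + dn)
        # Count satellites whose predicted visibility matches the observation.
        score = sum(predicted_visible(cand[0], cand[1], sat) == seen
                    for sat, seen in observed.items())
        if score > best_score:
            best_score, best_pos = score, cand
    return best_pos, best_score

# Toy scene: a building shadow blocks satellite "G07" east of easting 105 m.
def predicted_visible(east, north, sat):
    return east <= 105.0 if sat == "G07" else True

observed = {"G07": True, "G12": True, "G19": True}
print(shadow_match((110.0, 50.0), observed, predicted_visible))
```

In a real receiver the binary received/not-received decision would itself be derived from signal-strength measurements, and the scoring would be repeated at every epoch, as the article notes.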
References
Global Positioning System | Shadow Matching | Technology,Engineering | 293 |
11,569,872 | https://en.wikipedia.org/wiki/Graphiola%20phoenicis | Graphiola phoenicis is a plant pathogen of the palm Phoenix canariensis.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ustilaginomycotina
Fungi described in 1823
Fungus species | Graphiola phoenicis | Biology | 51 |
41,479,303 | https://en.wikipedia.org/wiki/Judith%20Pipher | Judith Lynn Pipher (, June 18, 1940 – February 21, 2022) was a Canadian-born American astrophysicist and observational astronomer. She was Professor Emerita of Astronomy at the University of Rochester and directed the C. E. K. Mees Observatory from 1979 to 1994. She made important contributions to the development of infrared detector arrays in space telescopes.
Early life and education
Judith Lynn Bancroft was born on June 18, 1940, in Toronto, Ontario, to Earl Lester Alexander Bancroft and Agnes May Kathleen ( McGowan) Bancroft. She was named Junior Miss Homemaker of Ontario when she was sixteen years old. She graduated from Leaside High School in 1958 and earned a B.A. in astronomy from the University of Toronto in 1962. Following her graduation, she moved to the Finger Lakes region of upstate New York where she taught science and attended Cornell University. In the late 1960s, she worked as a graduate student of Martin Harwit on a cryogenic rocket telescope experiment. She received her Ph.D from Cornell in 1971. Her dissertation, Rocket Submillimeter Observations of the Galaxy and Background, led her into research in the nascent fields of submillimeter and infrared astronomy.
Career and research
Pipher joined the faculty of the University of Rochester's Physics and Astronomy Department in 1971 as an Instructor. From 1979 to 1994, Pipher was director of University of Rochester's C. E. K. Mees Observatory. In the 1970s and 1980s, she made observations from the Kuiper Airborne Observatory. Pipher and William J. Forrest achieved promising results with a 32×32-pixel array of indium antimonide (InSb) detectors at a NASA Ames workshop. They reported their results in 1983. That year Pipher and her colleagues were among the first to use an infrared array camera to capture starburst galaxies.
For the next two decades, Pipher developed ultra-sensitive infrared InSb arrays with the help of colleague William J. Forrest. The Infrared Array Camera (IRAC) for the Spitzer Space Telescope was launched in August 2003. She has also worked with Dan Watson and on the development of mercury cadmium telluride (HgCdTe) arrays. Pipher's observational research has concentrated on star formation studies and the arrays she designed have been used to observe astronomical phenomena such as planetary nebulae, brown dwarfs, and the Galactic Center. She has authored over 200 papers and scientific articles.
Pipher was a member of a team at the University of Rochester that developed the NEOCam sensor, a HgCdTe infrared-light sensor intended for the proposed Near-Earth Object Camera. The sensor improves the ability to detect potentially hazardous objects such as asteroids.
Honors and awards
Pipher received the Susan B. Anthony Lifetime Achievement Award from the University of Rochester in 2002. She was inducted into the National Women's Hall of Fame in 2007 and became involved with its administration. A 2009 article in Discover magazine indicated that Pipher was "considered by many to be the mother of infrared astronomy." Asteroid 306128 Pipher was named in her honor; the official naming citation was published by the Minor Planet Center on January 31, 2018.
She was elected a Legacy Fellow of the American Astronomical Society in 2020.
Personal life and death
While at Cornell, Judith met Robert E. Pipher (1934–2007); when the couple married in 1965, his four children became her stepchildren. The Piphers lived at Cayuga Lake in Seneca Falls, New York, where she was vice president of the Seneca Museum board of directors. Her 80th birthday, June 18, 2020, was proclaimed "Dr. Judy Pipher Day" in the Town of Seneca Falls. She died on February 21, 2022, at the age of 81.
References
Further reading
1940 births
2022 deaths
Scientists from Toronto
American astrophysicists
Cornell University alumni
University of Rochester faculty
University of Toronto alumni
Women astronomers
People from Seneca Falls, New York
Scientists from New York (state)
Fellows of the American Astronomical Society
20th-century American women scientists
21st-century American women scientists | Judith Pipher | Astronomy | 840 |
50,257,654 | https://en.wikipedia.org/wiki/WISE%20J1147%E2%88%922040 | WISEA J114724.10−204021.3 (abbreviated WISEA 1147) is a brown dwarf in the TW Hydrae association, a nearby group of very young stars and brown dwarfs. The object is notable because its estimate mass, 6±1 times the mass of Jupiter, places it in the mass range for rogue planets. Nevertheless, it is a free-floating object, unassociated with any star system.
The object was discovered using information from NASA's WISE (Wide-field Infrared Survey Explorer) and the 2MASS (Two Micron All-Sky Survey). Researchers inferred the young age for WISEA 1147 because it is a member of a group of stars that is only 10 million years old, and they estimated its mass using evolutionary models for brown dwarf cooling.
References
Brown dwarfs
Rogue planets
L-type brown dwarfs
Hydra (constellation)
WISE objects
TW Hydrae association | WISE J1147−2040 | Astronomy | 191 |
20,939,035 | https://en.wikipedia.org/wiki/SPKAC | SPKAC (Signed Public Key and Challenge, also known as Netscape SPKI) is a format for sending a certificate signing request (CSR): it encodes a public key, that can be manipulated using OpenSSL. It is created using the little documented HTML keygen element inside a number of Netscape compatible browsers.
Standardisation
There is an ongoing effort to standardise SPKAC through an Internet Draft in the Internet Engineering Task Force (IETF). The purpose of this work has been to formally define what previously existed as a de facto standard, and to address security deficiencies, particularly the historic use of MD5, which has since been declared unsafe for use with digital signatures per RFC 6151.
Implementations
HTML5 originally specified the <keygen> element to support SPKAC in the browser to make it easier to create client side certificates through a web service for protocols such as WebID; however, subsequent work for HTML 5.1 placed the keygen element "at-risk", and the first public working draft of HTML 5.2 removes the keygen element entirely. The removal of the keygen element is due to non-interoperability and non-conformity from a standards perspective in addition to security concerns.
The World Wide Web Consortium (W3C) Web Authentication Working Group developed the WebAuthn (Web Authentication) API to replace the keygen element.
Bouncy Castle provides a Java class.
An implementation for Erlang/OTP exists too.
An implementation for Python is named pyspkac.
PHP OpenSSL extension as of version 5.6.0.
Node.js implementation.
Deficiencies
The user interface needs to be improved in browsers, to make it more obvious to users when a server is asking for the client certificate.
See also
Simple public-key infrastructure (SPKI)
References
External links
IETF draft: Signed Public Key and Challenge
PHP v5.6 now supports SPKAC natively
Native SPKAC support in PHP OpenSSL extension with release of v5.6.0-Alpha3
Native SPKAC support in Node.js (with release of v0.11.8)
SPKAC demo in Node.js (requires node.js release > v0.11.8)
Cryptography | SPKAC | Mathematics,Engineering | 477 |
4,944,257 | https://en.wikipedia.org/wiki/Hutchinson%20Patent%20Stopper | Charles G. Hutchinson invented and patented the Hutchinson Patent Stopper in 1879 as a replacement for cork bottle stoppers which were commonly used as stoppers on soda water or pop bottles. His invention employed a wire spring attached to a rubber seal. Production of these stoppers was discontinued after 1912.
References
External links
Hutchinson Patent Stopper Guide
HBDHistoryAndPop
Seals (mechanical)
American inventions
Packaging | Hutchinson Patent Stopper | Physics | 82 |
40,713 | https://en.wikipedia.org/wiki/Amplitude%20distortion | Amplitude distortion is distortion occurring in a system, subsystem, or device when the output amplitude is not a linear function of the input amplitude under specified conditions.
Generally, the output is a linear function of the input only over a fixed portion of the transfer characteristic. For a bipolar transistor in this region, Ic = βIb, where Ic is the collector current and Ib is the base current, following the linear relation y = mx.
When the output falls outside this portion, two forms of amplitude distortion can arise:
Harmonic distortion: the creation of harmonics of the fundamental frequency of a sine wave input to a system.
Intermodulation distortion: this form of distortion occurs when two sine waves of frequencies X and Y are present at the input, resulting in the creation of several other frequency components, whose frequencies include (X+Y), (X-Y), (2X-Y), (2Y-X), and generally (mX ± nY) for integers m and n. Generally, the size of the unwanted output falls rapidly as m and n increase.
Because of these additional outputs, this form of distortion is unwanted in audio, radio, and telecommunication amplifiers; it also occurs when more than two input waves are present.
In a narrowband system such as a radio communication system, unwanted outputs such as X-Y and 2X+Y will be remote from the wanted band and so can be ignored by the system. In contrast, 2X-Y and 2Y-X will be close to the wanted signals. These so-called third-order distortion products (third order because m + n = 3) tend to dominate the nonlinear distortion of narrowband systems.
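The arithmetic of these products is easy to check numerically. In the illustrative Python sketch below, two arbitrary tones (X = 700 Hz, Y = 1000 Hz) pass through an arbitrary memoryless nonlinearity with small quadratic and cubic terms; the third-order products then appear at 2X-Y = 400 Hz and 2Y-X = 1300 Hz, alongside the second-order products.

```python
import numpy as np

fs, n = 8000, 8000                  # 1 s of samples -> 1 Hz bin spacing
t = np.arange(n) / fs
X, Y = 700.0, 1000.0                # two input tone frequencies (Hz)
x = np.sin(2 * np.pi * X * t) + np.sin(2 * np.pi * Y * t)

# Memoryless nonlinearity standing in for a curved transfer characteristic.
y = x + 0.1 * x**2 + 0.05 * x**3

spec = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
strongest = np.argsort(spec)[::-1][:10]
for k in sorted(strongest):
    print(f"{freqs[k]:6.0f} Hz  amplitude {spec[k]:.4f}")
# Expect lines at X and Y (wanted); X+Y, Y-X, 2X, 2Y (second order);
# and 2X-Y = 400 Hz, 2Y-X = 1300 Hz (third order), among others.
```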
Amplitude distortion is measured with the system operating under steady-state conditions with a sinusoidal input signal. When other frequencies are present, the term "amplitude" refers to that of the fundamental only.
See also
Audio quality measurement
Noise measurement
Headroom
References
External links
Arcane Radio Trivia amplitude distortion article w/ examples
RF article with trigonometry
Electrical parameters | Amplitude distortion | Engineering | 405 |
20,959,948 | https://en.wikipedia.org/wiki/PolyDADMAC | Polydiallyldimethylammonium chloride (shortened polyDADMAC or polyDDA), also commonly polyquaternium-6, is a homopolymer of diallyldimethylammonium chloride (DADMAC). The molecular weight of polyDADMAC is typically in the range of hundreds of thousands of grams per mole, and even up to a million for some products. PolyDADMAC is usually delivered as a liquid concentrate having a solids level in the range of 10 to 50%. It is a high charge density cationic polymer. The charge density makes it well suited for flocculation. Actually, pDADMAC or DMDAAC, is used as a coagulant - a charge neutralization process that precedes flocculation.
History
PolyDADMAC polymers were first prepared and studied in 1957 by Professor George Butler at the University of Florida. The result was remarkable because the polymer was soluble in water, in contrast to the other synthetic polymers then known that were formed by polymerizing monomers containing more than one vinyl functionality. The structure and reaction path were determined in 2002 with NMR studies. Much of the current knowledge of polyDADMAC in water treatment and personal care derives from the work of Jerry Boothe of Calgon Corporation.
Synthesis
The monomer DADMAC is formed by reacting two equivalents of allyl chloride with dimethylamine.
PolyDADMAC is then synthesized by radical polymerization of DADMAC, with an organic peroxide used as the initiator. Two polymeric structures are possible when polymerizing DADMAC: an N-substituted piperidine structure or an N-substituted pyrrolidine structure. The pyrrolidine structure is favored.
Applications
Effluent treatment
PolyDADMAC is used in waste water treatment as a primary organic coagulant which neutralizes negatively charged colloidal material and reduces sludge volume compared with inorganic coagulants.
Pulp and paper industry
PolyDADMAC is used for controlling disturbing substances in the papermaking process. It provides superior fixing of pitch from mechanical pulp and of latex from coated broke. It is used in the short circulation of a paper mill to enhance retention and dewatering. In addition, it can be used to improve the efficiency of disk filters and flotators, and for cationization of fillers to provide maximal filler retention.
Water purification
PolyDADMAC is used as a coagulant in water purification. It is effective in coagulating and flocculating inorganic and organic particles such as silt, clay, algae, bacteria and viruses. At high concentrations the organic polymer can remove natural organic matter such as humic and fulvic acids resulting in fewer disinfection byproduct precursors and less color.
References
Organic polymers
Quaternary ammonium compounds
Water treatment | PolyDADMAC | Chemistry,Engineering,Environmental_science | 564 |
36,478,644 | https://en.wikipedia.org/wiki/BX442 | BX442 (Q2343-BX442) is a grand design spiral galaxy of type Sc. It has a companion dwarf galaxy. It is the most distant known grand design spiral galaxy in the universe, with a redshift of z=2.1765 ± 0.0001.
It is commonly referred to as the oldest known grand design spiral galaxy in the universe, but it is more accurately the earliest such galaxy known to exist in the universe, with a lookback time (the difference between the age of the universe now and the age of the universe at the time light left the galaxy) of 10.7 billion years in the concordance cosmology. This time estimate means that the structure seen in BX442 developed roughly 3 billion years after the Big Bang. It is 15 kiloparsecs (50 kly) in diameter.
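The quoted lookback time can be reproduced approximately from the redshift alone. The short sketch below uses astropy's built-in Planck 2018 parameter set; the discovery paper may have assumed slightly different concordance parameters, which would account for small differences from the 10.7 billion years quoted above.

```python
from astropy.cosmology import Planck18

z = 2.1765                        # spectroscopic redshift of BX442
print(Planck18.lookback_time(z))  # roughly 10.8 Gyr with these parameters
print(Planck18.age(z))            # age of the universe at emission, ~3 Gyr
```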
Galaxy
The spiral morphology of BX442, while similar to many modern-day galaxies, makes it unusual in the young universe. According to study co-author Alice E. Shapley of UCLA, "The vast majority of old galaxies look like train wrecks. Our first thought was, why is this one so different, and so beautiful?"
The unusual spiral morphology of BX442 was discovered using images obtained from the Hubble Space Telescope by a team of astronomers led by David R. Law of the University of Toronto.
While the Hubble image suggested the galaxy's spiral structure, it did not conclusively prove that the galaxy rotated like modern-day spiral galaxies. The team therefore used an integral-field spectrograph called OSIRIS (OH-Suppressing Infrared Imaging Spectrograph) at the W. M. Keck Observatory in Hawaii to confirm the discovery. In combination with a laser-guide-star adaptive optics system, which corrects for distortions of incoming light caused by the Earth's turbulent atmosphere, the astronomers were able to sample the light from different parts of the galaxy. Small Doppler shifts of the light between different samples showed that BX442 was indeed a spiral disk, rotating roughly as fast as the Milky Way Galaxy, but much thicker and forming stars more rapidly.
Not only was BX442 revealed to be a genuine spiral galaxy, but also a member of a sub-class known as 'grand design' spirals. Most spiral galaxies have subtler features, and their arms are not necessarily well defined. A grand design spiral galaxy has very clearly formed, distinct arms that stretch far around the galaxy's center. Only about 10% of all spiral galaxies are classified as grand design spirals.
According to lead author David R. Law, "The fact that this galaxy exists is astounding. Current wisdom holds that such grand-design spiral galaxies simply didn't exist at such an early time in the history of the Universe."
The presence of a dwarf galaxy in the vicinity of BX442 offers a clue as to how the premature spiral structure may have emerged in what would otherwise be a somewhat chaotic, lumpy collection of stars, as is the case with most other early galaxies. A recent study of one of the satellite galaxies of the Milky Way, the Sagittarius Dwarf Elliptical Galaxy (SagDEG), suggests that SagDEG may have helped generate some of the Milky Way's spiral structure when it passed repeatedly through the plane of our galaxy over the past few hundred million years. Similarly, many of the most well-known grand design spiral galaxies (such as the Whirlpool Galaxy) also have nearby companions. A computer simulation has shown that the dwarf companion of BX442 could have had the same effect. However, the chaotic motions of the stars in the youthful BX442 suggest that, if this is the case, the present spiral structure will not be long-lived in cosmic terms, and may disappear within a hundred million years or so.
See also
A1689B11, an old and distant spiral galaxy located in the Abell 1689 galaxy cluster
BRI 1335-0417, another old and distant spiral galaxy
References
External links
on SIMBAD
Pegasus (constellation)
Unbarred spiral galaxies
4668406
Astronomical objects discovered in 2012 | BX442 | Astronomy | 878 |
9,070,810 | https://en.wikipedia.org/wiki/The%20longest%20suicide%20note%20in%20history | "The longest suicide note in history" is an epithet originally used by United Kingdom Labour MP Gerald Kaufman to describe his party's 1983 general election manifesto, which emphasised socialist policies in a more profound manner than previous such documents—and which Kaufman felt would ensure that the Labour Party (then in opposition) would fail to win the election.
Document
The New Hope for Britain was a 39-page booklet which called for unilateral nuclear disarmament; higher personal taxation for the rich; withdrawal from the European Economic Community; abolition of the House of Lords; and the re-nationalisation of recently privatised industries such as British Aerospace and the British Shipbuilders Corporation. The manifesto was based on an earlier and much longer policy paper with a similar title, Labour's Plan: The New Hope for Britain.
The epithet referred not only to the orientation of the policies, but also to their marketing. Labour leader Michael Foot decided, as a statement of internal democracy, that the manifesto would consist of all the resolutions arrived at in the party conference.
The document's more left-wing policy proposals, along with the popularity Conservative Prime Minister Margaret Thatcher had gained from the successful outcome of the Falklands War and the division of the opposition vote between the left-wing Labour Party and the centrist Social Democratic Party–Liberal Alliance (dominated by breakaway Labour MPs on the right wing of the party), contributed to a Conservative victory with a substantial majority in Parliament. The defeat, Labour's worst result since the 1918 general election, marked a turning point in the history of the party: Foot retired as leader, and the party subsequently moved towards the centre under the leaderships of Neil Kinnock and John Smith. Then, under the leadership of Tony Blair in the 1990s, it rebranded itself as "New Labour", embracing the Third Way. Blair led Labour back to government in a landslide victory at the 1997 general election, fourteen years and two further general election defeats later.
Other uses of the phrase
It has subsequently been used by Peter Gutmann in his paper "A Cost Analysis of Windows Vista Content Protection" to describe the digital rights management schemes in the Windows Vista operating system.
Dutch VVD politician Mark Rutte used the phrase in reference to the election programme of the Dutch Labour Party, during the May 2010 parliamentary election campaign, deliberately echoing Kaufman.
In the United States, The Washington Post columnist Charles Krauthammer compared the 2012 Republican House Budget to the manifesto (in terms of comparable unpopularity) and then remarked of the American House Budget, "At 37 footnotes, it might be the most annotated suicide note in history." Neoconservative writer David Frum compared The Path to Prosperity, proposed by congressman Paul Ryan, in a similar light, saying: "This is how a great political party was impelled to base a presidential campaign on the Ryan plan—a plan that has now replaced the 1983 manifesto of the British Labour Party as 'the longest suicide note in history'."
Labour's decision in 2015 to engrave promises for the upcoming election on a large stone monument nicknamed the "EdStone" (after leader Ed Miliband) was within hours dubbed the "heaviest suicide note in history". George Eaton predicted that, immediately after the snap 2017 election, politicians and political writers would dismiss For the Many, Not the Few, Labour's 128-page left-wing manifesto in that campaign, as "the new 'longest suicide note in history'"; in the event, its policy proposals remained popular with Labour members as well as voters, and Labour increased its vote share by 9.6%.
See also
List of Labour Party (UK) general election manifestos
References
External links
online copy
Labour Party Manifesto 1983 another copy archived.
English phrases
History of the Labour Party (UK)
Party platforms
British political phrases (1950–1999)
1983 United Kingdom general election
Michael Foot
1983 in British politics
1983 documents
1983 quotations
Suicide | The longest suicide note in history | Biology | 797 |
39,140,988 | https://en.wikipedia.org/wiki/Minkowski%20space%20%28number%20field%29 | In mathematics, specifically the field of algebraic number theory, a Minkowski space is a Euclidean space associated with an algebraic number field.
If K is a number field of degree d, then there are d distinct embeddings of K into C. Let K_C be the image of K in the product C^d, considered as equipped with the usual Hermitian inner product. If c denotes complex conjugation, let K_R denote the subspace of K_C fixed by c, equipped with the induced scalar product. This is the Minkowski space of K.
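For concreteness, the space admits the following standard coordinate description (textbook material, included here for clarity rather than taken from a specific source). If K has r_1 real embeddings and r_2 conjugate pairs of complex embeddings, so that d = r_1 + 2r_2, then:

```latex
% The Minkowski embedding of a number field K of degree d = r_1 + 2 r_2:
% \sigma_1,\dots,\sigma_{r_1} are the real embeddings and
% \sigma_{r_1+1},\dots,\sigma_{r_1+r_2} one embedding from each complex pair.
K_{\mathbb{R}} \;\cong\; \mathbb{R}^{r_1} \times \mathbb{C}^{r_2},
\qquad
x \longmapsto \bigl(\sigma_1(x), \dots, \sigma_{r_1}(x),\,
                    \sigma_{r_1+1}(x), \dots, \sigma_{r_1+r_2}(x)\bigr),
% a real vector space of dimension d, carrying the scalar product restricted
% from the Hermitian inner product on \mathbb{C}^d.
```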
See also
Geometry of numbers
Footnotes
References
Algebraic number theory | Minkowski space (number field) | Mathematics | 123 |
2,577,210 | https://en.wikipedia.org/wiki/Goody%27s%20Powder | Goody's Powder, also called Goody's Headache Powders, is an over-the-counter aspirin/paracetamol/caffeine–based pain reliever, in single-dose powder form, which is marketed and sold by Prestige Brands. The powder delivery saves the time needed for the patient's digestive system to break down a tablet or capsule, ostensibly causing the medication to work faster. Goody's Extra Strength Powder consists of aspirin, caffeine, and paracetamol (acetaminophen) in a formula identical to that of Excedrin, a product of Novartis, but in the no-digestion-needed powder form.
Goody's Powder is sold primarily in the southern United States. For many years, the face of Goody's has been NASCAR legend Richard Petty, who appears in advertisements for the brand. In 2013, the brand brought on NASCAR's most popular driver, Dale Earnhardt, Jr., to join Petty as spokesperson for the brand. Prior to that, wrestler Dusty Rhodes appeared in commercials for the product.
The company's website claims that "probably the most popular technique" to take the powder is to "dump" it on the tongue and then "chase" it with a liquid. Goody's Powder can also be blended into water and ingested as a drink.
History
Goody's Powder was developed in conjunction with the Herpelscheimer Clinic in Graz, Austria, and manufactured for many years by Goody's Manufacturing Company, a family-owned business founded in 1932 and based in Winston-Salem, North Carolina. The company also produced other medicinal products, including throat sprays and throat lozenges. The headache powder was introduced in 1936. Beginning in 1995 GlaxoSmithKline produced Goody's Powders in Memphis, Tennessee. The company sold Goody's and 16 other brands to Prestige Brands in 2012.
Race sponsorship
Goody's Powder has a long history of sponsoring motor racing events and teams, especially NASCAR. The Daytona Nationwide Race was sponsored by Goody's from 1982 to 1996. Goody's is the title sponsor of the Goody's Headache Relief Shot 500 Sprint Cup Series race at Martinsville Speedway and was the title sponsor of the Goody's Headache Powder 500 Cup race at Bristol Motor Speedway from 1996 to 1999. Goody's was the official pain reliever of NASCAR from 1977 until 2007, when Tylenol became the new pain reliever of NASCAR. Goody's was also the series sponsor of the Goody's Dash Series from 1992 until NASCAR's sanctioning ended in 2003.
Goody's sponsored Chad McCumbee's No. 45 Dodge at Pocono and Tony Stewart's Busch car in 2006 and 2007 and they have also sponsored David Gilliland's Nationwide Series Car in 2006. Goody's sponsored Bobby Labonte's Dodge at the 2009 fall Martinsville race.
Notes
External links
Official Goody's Powder site
Analgesics
Prestige Brands brands
Powders | Goody's Powder | Physics | 640 |
40,699,651 | https://en.wikipedia.org/wiki/List%20of%20airborne%20early%20warning%20aircraft | This is a list of airborne early warning aircraft. An AEW aircraft is an airborne radar system generally used to detect incoming aircraft, ships, vehicles, missiles, and other projectiles and provide guidance to fighter and attack aircraft strikes.
List of AEW aircraft
See also
List of AEW&C aircraft operators
List of maritime patrol aircraft
References
Citations
Bibliography
Early warning systems | List of airborne early warning aircraft | Technology | 74 |
58,210,781 | https://en.wikipedia.org/wiki/No-code%20development%20platform | No-code development platforms (NCDPs) allow creating application software through graphical user interfaces and configuration instead of traditional computer programming based on writing code.
As with low-code development platforms, it is meant to expedite application development, but unlike low-code, no-code development involves no code writing. This is usually done by offering prebuilt templates for building apps. In the 2010s, both of these types of platforms increased in popularity as companies dealt with a limited supply of competent software developers.
No-code development is closely related to visual programming languages.
Use
No-code tools are often designed with line-of-business users in mind, as opposed to traditional IT departments.
The potential benefits of using a NCDP include:
Agility - NCDPs typically provide some degree of templated user-interface and user experience functionality for common needs such as forms, workflows, and data display allowing creators to expedite parts of the app creation process.
Richness - NCDPs, which were at one point limited to more basic application functions, increasingly provide a level of feature-richness and integrations that allows users to design, develop, and deploy apps that meet specific business needs.
See also
Flow-based programming
List of online database creator apps
Low-code development platforms
Rapid application development
Lean software development
Platform as a service
References
External links
Pattani, Aneri (16 November 2016) "A coding revolution in the office cube sends message of change to IT". CNBC. Retrieved 15 November 2017.
Enterprise architecture
Software development | No-code development platform | Technology,Engineering | 310 |
8,398,157 | https://en.wikipedia.org/wiki/Moladi | Moladi is a South African company specializing in a reusable plastic formwork for use in construction of affordable housing projects worldwide. The process involves creating a mold of the form of the complete structure. This wall mold is then filled with an aerated form of Mortar. The construction process is faster than traditional methods of construction.
The technology has won the Design for Development award of the South African Bureau of Standards Design Institute in 1997, with the institute praising Moladi as:
...an interlocking and modular formwork or shutter system for molding complex monolithic reinforced structures. The modular plastic panels are lightweight and extremely robust. The building method is especially suited for affordable low-cost, mass housing schemes.
In addition to being part of the drive by the South African government to replace shantytowns with proper houses, Moladi also exports to 26 countries, including Panama, the Democratic Republic of the Congo, and Tanzania.
Moladi aims to solve key challenges in low-cost construction; the company's production plant is based in Port Elizabeth.
References
Future of Construction - World Economic Forum - World Bank
External links
Construction and civil engineering companies of South Africa
Home builders
Construction | Moladi | Engineering | 236 |
6,151,344 | https://en.wikipedia.org/wiki/Don%20Towsley%20%28computer%20scientist%29 | Donald Fred Towsley (born 1949) is an American computer scientist who has been a
distinguished university professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst.
His research interests include network measurement, modeling, and analysis. Towsley currently serves as editor-in-chief of the IEEE/ACM Transactions on Networking and on the editorial boards of Journal of the ACM and IEEE Journal of Selected Areas in Communications. He is currently the chair of the IFIP Working Group 7.3 on computer performance measurement, modeling, and analysis. He has also served on numerous editorial boards, including those of IEEE Transactions on Communications and Performance Evaluation. He has been active in the program committees for numerous conferences, including IEEE Infocom, ACM SIGCOMM, ACM SIGMETRICS, and IFIP Performance conferences for many years, and has served as technical program co-chair for ACM SIGMETRICS and Performance conferences. He has received the 2008 ACM SIGCOMM Award, the 2007 IEEE Koji Kobayashi Computers and Communications Award, the 2007 ACM SIGMETRICS Achievement Award, the 1999 IEEE Communications Society William Bennett Award, and several conference/workshop best paper awards. He is also the recipient of the University of Massachusetts Chancellor's Medal and the Outstanding Research Award from the College of Natural Science and Mathematics at the University of Massachusetts. He is one of the founders of the Computer Performance Foundation.
References
American computer scientists
University of Massachusetts Amherst faculty
Living people
Fellows of the IEEE
1949 births | Don Towsley (computer scientist) | Technology | 310 |
66,452,617 | https://en.wikipedia.org/wiki/Leucoagaricus%20gaillardii | Leucoagaricus gaillardii is a species of fungus belonging to the family Agaricaceae.
It is native to Northern Europe.
References
Agaricaceae
Fungus species | Leucoagaricus gaillardii | Biology | 38 |
41,131,289 | https://en.wikipedia.org/wiki/Ammophila%20urnaria | Ammophila urnaria is a species of hunting wasp in the family Sphecidae. It is a black and red insect native to the eastern United States. It feeds on nectar but catches and paralyses caterpillars to leave in underground chambers for its developing larvae to consume.
Description
Ammophila urnaria is a small wasp with a slender body. It is black apart from a band of red around the front portion of its abdomen.
Distribution
Ammophila urnaria is native to the eastern part of North America in locations including Nova Scotia, South Carolina and St John's Bluff in eastern Florida.
Behaviour
Ammophila urnaria feeds on nectar and can often be seen on the flower heads of sorrel or onion. The breeding season is in summer. The female wasp digs a succession of burrows in sandy soil, provisioning each burrow with one or more paralysed caterpillars, lays an egg on the first caterpillar in each and seals the hole. The caterpillars in each burrow should provide sufficient nourishment to allow the larva that hatches from the egg to grow and pupate. Some wasps are kleptoparasites. They may open the nests of other A. urnaria wasps, remove their eggs and lay their own on the caterpillars stored there. Some have also been observed to parasitise the nest burrows of other species of wasp, Ammophila kennedyi and Podalonia robusta.
George and Elizabeth Peckham, American ethologists and entomologists, described in 1898 how they watched a female A. urnaria wasp provisioning her burrow. She ran along the ground among purslane plants until she found a small green caterpillar, which she paralysed with a sting. She carried this heavy burden through their garden and out into an adjoining field of maize. Although all the cornstalks looked similar to the Peckhams, the wasp quickly located the right spot and laid the caterpillar down. She then moved two fragments of soil which had been concealing the entrance to a hole. Picking up the caterpillar, she reversed into the burrow, dragging the caterpillar behind her, and disappeared from view. The Peckhams could not see what happened next but knew she was laying an egg beside the caterpillar.
They later observed another A. urnaria wasp knock a caterpillar off a bean plant onto bare ground. The caterpillar writhed and struggled to escape, rolling and unrolling itself, and the wasp had difficulty getting hold of it. She eventually straddled it, grasped it with her mandibles, lifted it off the ground and curled her abdomen underneath. It continued to struggle but she managed to insert her sting into the junction between the third and fourth segments. At once the caterpillar became motionless and limp and she was able to sting it twice more, choosing the underside junctions between segment two and three and then between segments one and two. After a brief pause and circling flight she stung it again several times in joints in the posterior part of its body. On another occasion they saw a female wasp hammering the ground firm over the entrance to a burrow with a small pebble and reported what they thought was the first observation of an insect using a tool. They were unable to observe this behaviour again but later discovered that Samuel Wendell Williston had recorded similar behaviour in the wasp Ammophila yarrowi in 1892. Since then, several species of Ammophila have been observed to move dry soil, particles of grit, male pine cones and other suitable size fragments into their burrows, push these objects down with their heads, add more, pound the surface of the ground with chips of wood or fine pebbles and leave it smooth and level. The fact that these other wasp species also use tools makes it likely that this behaviour evolved a long time ago in a common ancestor.
References
Sphecidae
Insects described in 1860
Hymenoptera of North America
Tool-using animals
Taxa named by Anders Gustaf Dahlbom | Ammophila urnaria | Biology | 837 |
153,625 | https://en.wikipedia.org/wiki/IUCN%20Red%20List | The International Union for Conservation of Nature (IUCN) Red List of Threatened Species, also known as the IUCN Red List or Red Data Book, founded in 1964, is an inventory of the global conservation status and extinction risk of biological species. A series of Regional Red Lists, which assess the risk of extinction to species within a political management unit, are also produced by countries and organizations.
The goals of the Red List are to provide scientifically based information on the status of species and subspecies at a global level, to draw attention to the magnitude and importance of threatened biodiversity, to influence national and international policy and decision-making, and to provide information to guide actions to conserve biological diversity.
Major species assessors include BirdLife International, the Institute of Zoology (the research division of the Zoological Society of London), the World Conservation Monitoring Centre, and many Specialist Groups within the IUCN Species Survival Commission (SSC). Collectively, assessments by these organizations and groups account for nearly half the species on the Red List.
The IUCN aims to have the category of every species re-evaluated at least every ten years, and every five years if possible. This is done in a peer reviewed manner through IUCN Species Survival Commission Specialist Groups (SSC), which are Red List Authorities (RLA) responsible for a species, group of species or specific geographic area, or in the case of BirdLife International, an entire class (Aves). The red list unit works with staff from the IUCN Global Species Programme as well as current program partners to recommend new partners or networks to join as new Red List Authorities.
The number of species which have been assessed for the Red List has been increasing over time. Of the 150,388 species surveyed, 42,108 are considered at risk of extinction because of human activity, in particular overfishing, hunting, and land development.
History
The idea for a Red Data Book was suggested by Peter Scott in 1963.
1966–1977 Red Data Lists
Initially the Red Data Lists were designed for specialists and were issued in a loose-leaf format that could be easily changed.
The first two volumes of Red Lists were published in 1966 by conservationist Noel Simon, one for mammals and one for birds.
The third volume that appeared covered reptiles and amphibians. It was created by René E. Honegger in 1968.
In 1970, the IUCN published its fifth volume in this series. This was the first Red Data List which focused on plants (angiosperms only), compiled by Ronald Melville.
The final volume of Red Data List created in the older, loose leaf style was volume 4 on freshwater fishes. This was published in 1979 by Robert Rush Miller.
1969 Red Data Book
The first attempt to create a Red Data Book for a nonspecialist public came in 1969 with The Red Book: Wildlife in Danger. This book covered various groups but was predominantly about mammals and birds, with smaller sections on reptiles, amphibians, fishes, and plants.
2006 release
The 2006 Red List, released on 4 May 2006 evaluated 40,168 species as a whole, plus an additional 2,160 subspecies, varieties, aquatic stocks, and subpopulations.
2007 release
On 12 September 2007, the World Conservation Union (IUCN) released the 2007 IUCN Red List of Threatened Species. In this release, they have raised their classification of both the western lowland gorilla (Gorilla gorilla gorilla) and the Cross River gorilla (Gorilla gorilla diehli) from endangered to critically endangered, which is the last category before extinct in the wild, due to Ebola virus and poaching, along with other factors. Russ Mittermeier, chief of Swiss-based IUCN's Primate Specialist Group, stated that 16,306 species are endangered with extinction, 188 more than in 2006 (total of 41,415 species on the Red List). The Red List includes the Sumatran orangutan (Pongo abelii) in the Critically Endangered category and the Bornean orangutan (Pongo pygmaeus) in the Endangered category.
2008 release
The 2008 Red List was released on 6 October 2008 at the IUCN World Conservation Congress in Barcelona and "confirmed an extinction crisis, with almost one in four [mammals] at risk of disappearing forever". The study shows at least 1,141 of the 5,487 mammals on Earth are known to be threatened with extinction, and 836 are listed as Data Deficient.
2012 release
The Red List of 2012 was released on 19 July 2012 at the Rio+20 Earth Summit; nearly 2,000 species were added, with four species added to the extinct list and two to the rediscovered list. The IUCN assessed a total of 63,837 species, of which 19,817 are threatened with extinction. 3,947 were described as "critically endangered" and 5,766 as "endangered", while more than 10,000 species are listed as "vulnerable". At threat are 41% of amphibian species, 33% of reef-building corals, 30% of conifers, 25% of mammals, and 13% of birds. The IUCN Red List has listed 132 species of plants and animals from India as "Critically Endangered".
Categories
Species are classified by the IUCN Red List into nine groups, specified through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation. The criteria may be applied even in the absence of high-quality data, including on the basis of suspicion and potential future threats, "so long as these can reasonably be supported".
Extinct (EX) – beyond reasonable doubt that the species is no longer extant.
Extinct in the wild (EW) – survives only in captivity, cultivation and/or outside native range, as presumed after exhaustive surveys.
Critically endangered (CR) – in a particularly and extremely critical state.
Endangered (EN) – very high risk of extinction in the wild, meets any of criteria A to E for Endangered.
Vulnerable (VU) – meets one of the 5 Red List criteria and thus considered to be at high risk of unnatural (human-caused) extinction without further human intervention.
Near Threatened (NT) – close to being endangered in the near future.
Least Concern (LC) – unlikely to become endangered or extinct in the near future.
Data Deficient (DD)
Not Evaluated (NE)
In the IUCN Red List, "threatened" embraces the categories of Critically Endangered, Endangered, and Vulnerable.
1994 categories and 2001 framework
The older 1994 list has only a single "Lower Risk" category which contained three subcategories:
Conservation Dependent (LR/cd)
Near Threatened (LR/nt)
Least Concern (LR/lc)
In the 2001 framework, Near Threatened and Least Concern became their own categories, while Conservation Dependent was removed and its contents merged into Near Threatened.
Possibly extinct
The tag of "possibly extinct" (PE) is used by Birdlife International, the Red List Authority for birds for the IUCN Red List. BirdLife International has recommended PE become an official tag for Critically Endangered species, and this has now been adopted, along with a "Possibly Extinct in the Wild" tag for species with populations surviving in captivity but likely to be extinct in the wild.
Versions
There have been a number of versions, dating from 1991, including:
Version 1.0 (1991)
Version 2.0 (1992)
Version 2.1 (1993)
Version 2.2 (1994)
Version 2.3 (1994)
Version 3.0 (1999)
Version 3.1 (2001)
All new IUCN assessments since 2001 have used version 3.1 of the categories and criteria.
Criticism
In 1997, the IUCN Red List received criticism on the grounds of secrecy (or at least poor documentation) surrounding the sources of its data. These allegations have led to efforts by the IUCN to improve its documentation and data quality, and to include peer reviews of taxa on the Red List. The list is also open to petitions against its classifications, on the basis of documentation or criteria.
In the November 2002 issue of Trends in Ecology & Evolution, an article suggested that the IUCN Red List and similar works are prone to misuse by governments and other groups that draw possibly inappropriate conclusions on the state of the environment or to affect exploitation of natural resources.
In the November 2016 issue of Science Advances, a research article claims there are serious inconsistencies in the way species are classified by the IUCN. The researchers contend that the IUCN's process of categorization is "out-dated, and leaves room for improvement", and further emphasize the importance of readily available and easy-to-include geospatial data, such as satellite and aerial imaging. Their conclusion questioned not only the IUCN's method but also the validity of where certain species fall on the List. They believe that combining geographical data can significantly increase the number of species that need to be reclassified to a higher risk category.
See also
CITES
Conservation status
Red List Index
Regional Red List
Species by IUCN Red List category
Wildlife conservation
Citations
General and cited references
Hilton-Taylor, C. A history of the IUCN Red Data Book and Red List. Retrieved 2012-05-11.
IUCN Red List of Threatened Species, 2009. Summary Statistics. Retrieved 2009-12-19.
IUCN. 1994 IUCN Red List Categories and Criteria version 2.3. Retrieved 2009-12-19.
IUCN. 2001 IUCN Red List Categories and Criteria version 3.1. Retrieved 2009-12-19.
Rodrigues, A. S. L., Pilgrim, J.D., Lamoreux, J.F., Hoffmann, M. & Brooks, T.M. 2006. Trends in Ecology & Evolution 21(2): 71–76.
Sharrock, S. and Jones, M. 2009. Conserving Europe's threatened plants – Report on the lack of a European Red List and the creation of a consolidated list of the threatened plants of Europe. Retrieved 2011-03-23.
External links
1964 in the environment
Biological databases
Biota by conservation status system
Red List
Lists of biota
Species described in 1963 | IUCN Red List | Biology | 2,048 |
37,384,221 | https://en.wikipedia.org/wiki/Tungsten%28IV%29%20fluoride | Tungsten tetrafluoride is an inorganic compound with the formula WF4. This little studied solid has been invoked, together with tungsten pentafluoride, as an intermediate in the chemical vapor deposition of tungsten films using tungsten hexafluoride.
Structure
Mössbauer spectroscopy indicates that tungsten tetrafluoride has a polymeric structure.
Preparation
It has been prepared by treatment of the coordination complex WCl4(MeCN)2 with AsF3. It has also been produced from the reaction of WF6 with a tungsten filament at 600–800 °C.
Reactions
The compound can be re-oxidized to W(VI) compounds by treatment with the halogens fluorine or chlorine (X2 = F2, Cl2):
WF4 + X2 → WF4X2
Upon heating, it disproportionates to WF6 and tungsten metal: 3 WF4 → 2 WF6 + W.
References
Tungsten halides
Fluorides
Tungsten(IV) compounds | Tungsten(IV) fluoride | Chemistry | 202 |
46,597,292 | https://en.wikipedia.org/wiki/Cassaine | Cassaine is a toxic compound found within the tree genus Erythrophleum. This genus has a range from Senegal to Sudan and Kenya in the east, and south to Zimbabwe and Mozambique. Cassaine was first isolated by the G. Dalma group in 1935 from the Erythrophleum guinneese tree. Since ancient times cassaine has been used as an ordeal poison by African tribes. It has also been utilized extensively as an arrow poison by the Casamance people of Senegal.
Structure
The structure of cassaine and of the other alkaloids found within the genus Erythrophleum is similar to the structure of cardiac glycosides. This similarity accounts for the similarity in cardiac activity between cassaine and compounds such as digitoxin. Generally, cassaine and the other Erythrophleum derivatives are considered N-alkylaminoethyl esters of tricyclic diterpene acids containing a perhydrophenanthrene skeleton.
Mechanism of action
Cassaine's biological activity stems from its ability to interact with Na+-K+ ATPase through its cardiotonic steroid binding site. The binding of cassaine to this site inhibits the ATPase. Cassaine can be defined as a specific inhibitor of monovalent cation transport and of Na+-K+ ATPase; the result is an inotropic effect on cardiac muscle.
Toxicity
Cassaine is highly toxic because it acts strongly on the function of the heart by affecting Na+-K+ ATPase. High doses can cause blood pressure problems, bradycardia, violent fits of vomiting, arrhythmia, and death.
While used widely in Africa for various reasons, cassaine is usually utilized together with other alkaloidal compounds from its parent tree. Since mixtures are usually used, it is hard to gauge its toxicity directly from the use of these mixtures. Despite this difficulty, some examples of the toxicity of such mixtures are available. For instance, with the arrow poison used by the Casamance people of Senegal, it was found that any hunter who accidentally cut himself with a poisoned broad-head would soon die, after violent fits of vomiting.
The lack of widespread use and research in the Western world has resulted in a lack of information regarding how toxic cassaine is to the human body.
Uses
Ordeal poison
Native Africans have used cassaine and mixtures containing cassaine as an ordeal poison for hundreds of years. These uses included trials for supposed witches and warlocks. Cassaine was also used for mass cleansings after epidemics, wars, natural disasters, and the like: anything that could be deemed the work of evil forces brought about by an evil person could prompt an ordeal trial and the use of cassaine. Cassaine's use as an ordeal poison caused nearly 2,000 deaths in 1912. To this day cassaine is still used in Africa as an ordeal poison.
Traditional medicine
Cassaine and the other alkaloids from Erythrophleum were used on many occasions by native Africans to treat a variety of ailments, among them headaches, heart problems, and migraines. Cassaine was also used as a strong local anesthetic, a diuretic, and an antidote to the swallowing of other poisonous substances, owing to its powerful emetic effects. Until the 1930s, cassaine and its partner alkaloids were used as local anesthetics in dental and ophthalmic procedures.
Synthesis
Cassaine can be prepared by organic synthesis. One method starting from carvone uses anionic polycyclization as a key step.
References
Alkaloids
Plant toxins | Cassaine | Chemistry | 765 |
4,062,502 | https://en.wikipedia.org/wiki/Zero-product%20property | In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words,
This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics — the integers Z, the rational numbers Q, the real numbers R, and the complex numbers C — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain.
Algebraic context
Suppose A is an algebraic structure. We might ask, does A have the zero-product property? In order for this question to have meaning, A must have both additive structure and multiplicative structure. Usually one assumes that A is a ring, though it could be something else, e.g. the set of nonnegative integers with ordinary addition and multiplication, which is only a (commutative) semiring.
Note that if A satisfies the zero-product property, and if B is a subset of A, then B also satisfies the zero-product property: if a and b are elements of B such that ab = 0, then either a = 0 or b = 0, because a and b can also be considered as elements of A.
Examples
A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field.
If p is a prime number, then the ring of integers modulo p has the zero-product property (in fact, it is a field).
The Gaussian integers are an integral domain because they are a subring of the complex numbers.
In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not commutative.
The set of nonnegative integers is not a ring (being instead a semiring), but it does satisfy the zero-product property.
Non-examples
Let ℤ/6ℤ denote the ring of integers modulo 6. Then ℤ/6ℤ does not satisfy the zero-product property: 2 and 3 are nonzero elements, yet 2 · 3 ≡ 0 (mod 6).
In general, if n is a composite number, then ℤ/nℤ does not satisfy the zero-product property. Namely, if n = qm where 1 < q, m < n, then q and m are nonzero modulo n, yet qm ≡ 0 (mod n).
The ring of 2×2 matrices with integer entries does not satisfy the zero-product property: if A has a single 1 in its upper-left entry and zeros elsewhere, and B has a single 1 in its lower-right entry and zeros elsewhere, then AB = 0, yet neither A nor B is zero.
The ring of all functions f : [0, 1] → ℝ, from the unit interval to the real numbers, has nontrivial zero divisors: there are pairs of functions which are not identically equal to zero yet whose product is the zero function. In fact, it is not hard to construct, for any n ≥ 2, functions f₁, ..., fₙ, none of which is identically zero, such that fᵢfⱼ is identically zero whenever i ≠ j.
The same is true even if we consider only continuous functions, or even only infinitely smooth functions. On the other hand, analytic functions have the zero-product property.
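The first two non-examples above can be checked mechanically. The following Python sketch (an illustration added here, not part of the original article text) verifies the zero divisors in ℤ/6ℤ and in the ring of 2×2 integer matrices:

```python
import numpy as np

# Zero divisors in Z/6Z: 2 and 3 are nonzero, yet their product is 0 mod 6.
a, b, n = 2, 3, 6
assert a % n != 0 and b % n != 0
assert (a * b) % n == 0  # 2 * 3 = 6 = 0 (mod 6)

# Zero divisors in the ring of 2x2 integer matrices.
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])
assert not (A @ B).any()  # A @ B is the zero matrix, yet neither A nor B is zero
```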
Application to finding roots of polynomials
Suppose p and q are univariate polynomials with real coefficients, and x is a real number such that p(x)q(x) = 0. (Actually, we may allow the coefficients of p and q to come from any integral domain.) By the zero-product property, it follows that either p(x) = 0 or q(x) = 0. In other words, the roots of pq are precisely the roots of p together with the roots of q.
Thus, one can use factorization to find the roots of a polynomial. For example, the polynomial x³ − 2x² − 5x + 6 factorizes as (x − 3)(x − 1)(x + 2); hence, its roots are precisely 3, 1, and −2.
In general, suppose R is an integral domain and p is a monic univariate polynomial of degree d ≥ 1 with coefficients in R. Suppose also that p has d distinct roots r₁, ..., r_d. It follows (but we do not prove here) that p factorizes as p(x) = (x − r₁)···(x − r_d). By the zero-product property, it follows that r₁, ..., r_d are the only roots of p: any root of p must be a root of (x − rᵢ) for some i. In particular, p has at most d distinct roots.
If, however, R is not an integral domain, then the conclusion need not hold. For example, the cubic polynomial x³ + 3x² + 2x has six roots in ℤ/6ℤ (though it has only three roots in ℤ).
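A brute-force check of the last example (a sketch added here for illustration; it simply evaluates the polynomial at every residue):

```python
# Roots of p(x) = x^3 + 3x^2 + 2x = x(x + 1)(x + 2) over Z/6Z and over Z.
def p(x):
    return x**3 + 3 * x**2 + 2 * x

print([x for x in range(6) if p(x) % 6 == 0])  # [0, 1, 2, 3, 4, 5]: six roots
# Over the integers the zero-product property applies to x(x + 1)(x + 2),
# so the only roots are 0, -1 and -2.
print([x for x in range(-5, 6) if p(x) == 0])  # [-2, -1, 0]
```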
See also
Fundamental theorem of algebra
Integral domain and domain
Prime ideal
Zero divisor
Notes
References
David S. Dummit and Richard M. Foote, Abstract Algebra (3rd ed.), Wiley, 2003.
External links
PlanetMath: Zero rule of product
Abstract algebra
Elementary algebra
Real analysis
Ring theory
0 (number) | Zero-product property | Mathematics | 971 |
25,495,886 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%2012%2C%202072 | A total solar eclipse will occur at the Moon's ascending node of orbit on Monday, September 12, 2072, with a magnitude of 1.0558. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse occurs about 7 hours before perigee (reached on September 12, 2072, at 2:15 UTC), the Moon's apparent diameter will be larger than average.
The path of totality will be visible from much of northern and eastern Russia. A partial solar eclipse will also be visible for parts of Greenland, Europe, and Asia. This is the first of 56 umbral eclipses in Solar Saros 155.
The total phase of the eclipse will be visible only from Siberia, Russia. Large cities in which the total phase will be seen include Yakutsk, Neryungri, and Mirny in the Sakha Republic and Khatanga in Krasnoyarsk Krai (Norilsk will also experience 98% sun obscuration).
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2072
A total lunar eclipse on March 4.
A partial solar eclipse on March 19.
A total lunar eclipse on August 28.
A total solar eclipse on September 12.
Metonic
Preceded by: Solar eclipse of November 24, 2068
Followed by: Solar eclipse of July 1, 2076
Tzolkinex
Preceded by: Solar eclipse of August 2, 2065
Followed by: Solar eclipse of October 24, 2079
Half-Saros
Preceded by: Lunar eclipse of September 7, 2063
Followed by: Lunar eclipse of September 18, 2081
Tritos
Preceded by: Solar eclipse of October 13, 2061
Followed by: Solar eclipse of August 13, 2083
Solar Saros 155
Preceded by: Solar eclipse of September 2, 2054
Followed by: Solar eclipse of September 23, 2090
Inex
Preceded by: Solar eclipse of October 3, 2043
Followed by: Solar eclipse of August 24, 2101
Triad
Preceded by: Solar eclipse of November 12, 1985
Followed by: Solar eclipse of July 15, 2159
Solar eclipses of 2069–2072
Saros 155
Metonic series
Tritos series
Inex series
Notes
References
2072 in science
2072 09 12 | Solar eclipse of September 12, 2072 | Astronomy | 672 |
2,628,881 | https://en.wikipedia.org/wiki/Total%20expense%20ratio | The total expense ratio (TER) is a measure of the total cost of a fund to an investor. Total costs may include various fees (purchase, redemption, auditing) and other expenses. The TER, calculated by dividing the total annual cost by the fund's total assets averaged over that year, is denoted as a percentage. It will normally vary somewhat from year to year.
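As a worked illustration of the definition above, the following Python sketch computes a TER from hypothetical cost and asset figures (all numbers invented for illustration):

```python
def total_expense_ratio(total_annual_costs, assets_over_year):
    """TER = total annual cost / average total assets over the year, as a %."""
    average_assets = sum(assets_over_year) / len(assets_over_year)
    return 100.0 * total_annual_costs / average_assets

# Hypothetical fund: 1.5m in annual charges, assets sampled quarterly.
costs = 1_500_000
assets = [95_000_000, 100_000_000, 105_000_000, 100_000_000]
print(f"TER = {total_expense_ratio(costs, assets):.2f}%")  # TER = 1.50%
```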
Typically it consists of the annual management charge (AMC), the fee that the fund company charges annually to manage the fund (typically commission paid to fund managers), plus 'other' charges incurred with running the fund. These other charges can consist of share registration fees, fees payable to auditors, legal fees, and custodian fees. Not included in the total expense ratio are transaction costs as a result of trading of the fund's assets.
Because the TER is inclusive of these other charges, it is a more accurate measure of the 'drag' on a fund's performance than just using the annual management charge alone. In their advertisements and even their fact sheets, fund companies tend to give more emphasis to the AMC, making it difficult for a private investor (in the UK at least) to see the total expense ratio of the fund they are investing in. In the United States, however, it is mandatory not only to show it but also to make it as clear and as concise as possible.
Fund costs are very important: every dollar charged by a fund is a dollar that investors won't get, but costs can be offset to some extent – or even completely – by benefits.
Fund managers can benefit investors in a range of ways. These include:
investing in assets that smaller direct investors cannot access;
paying lower brokerage costs for buying and selling;
using a range of risk reducing techniques; and
taking advantage of managing a large pool of assets – often with regular inflows – to make ongoing adjustments to the fund efficiently and in ways that enhance returns, minimize losses, and/or reduce price volatility.
Fund managers also save investors time and effort by:
providing summarized return and tax details and
looking after the day-to-day paperwork and decision making that can be associated with holding a large number of investments.
Just as buying the cheapest car or house isn't always the best option, it could be a mistake just to invest in the lowest-cost fund. Some kinds of funds (e.g., cash funds) cost a lot less to run than others (e.g., diversified equity funds), but a good fund should do better – after fees – than any cash fund over the longer term. In general it seems that there is, at best, a positive correlation between the fees charged by a fund and the returns it provides to investors.
Once an investor has decided on a mix of assets (asset allocation) that suits their situation, needs, and goals, they need to know whether to invest through (more expensive) actively managed funds, cheaper ETFs (exchange traded funds), or directly. When considering using a managed fund, they should research what the manager does to earn their fees and the returns they are likely to achieve after fees.
Professional financial advisers who have a fiduciary duty towards their clients can help with determining the best trade-off between all of the different investment options available, looking at all of the characteristics, including the total expense ratio.
See also
Expense ratio
References
External links
Corrigendum to Commission Recommendation 2004/384/EC of 27 April 2004 on some contents of the simplified prospectus as provided for in Schedule C of Annex I to Council Directive 85/611/EEC (Official Journal of the European Union L 144 of 30 April 2004)
Compute a fund's total costs from the turnover ratio and the total expense ratio
Expense
Financial ratios
Investment fund indicators | Total expense ratio | Mathematics | 778 |
77,651,409 | https://en.wikipedia.org/wiki/High-pressure%20torsion | High-pressure torsion (HPT) is a severe plastic deformation technique used to refine the microstructure of materials by applying both high pressure and torsional strain. HPT involves compressing a material between two anvils while simultaneously rotating one of the anvils, inducing shear deformation. HPT is widely used in materials science to create ultrafine-grained and nanostructured metallic and non-metallic materials, control phase transformations, synthesize new materials or investigate mechanisms underlying some natural phenomena. This process leads to significant grain refinement, resulting in materials with enhanced mechanical properties such as increased tensile strength and hardness. It was introduced in 1935 by P.W. Bridgman, who developed early methods to apply extreme strain under high pressures in material processing.
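As background (the following expression is not stated in the article text above but is commonly quoted in the severe-plastic-deformation literature), the shear strain imposed by HPT is usually estimated as γ = 2πNr/h, where N is the number of anvil rotations, r the radial distance from the disc centre, and h the disc thickness; the von Mises equivalent strain is then ε ≈ γ/√3. A minimal sketch:

```python
import math

def hpt_equivalent_strain(turns, radius_mm, thickness_mm):
    """Von Mises equivalent strain for high-pressure torsion.

    Uses the commonly quoted estimate: shear strain = 2*pi*N*r/h,
    equivalent strain = shear strain / sqrt(3).
    """
    shear = 2.0 * math.pi * turns * radius_mm / thickness_mm
    return shear / math.sqrt(3.0)

# Example: 5 anvil rotations, 4 mm from the centre of a 0.8 mm thick disc.
print(round(hpt_equivalent_strain(5, 4.0, 0.8), 1))  # ~90.7
```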
HPT also has applications in producing metals with enhanced superplasticity, improving the toughness of alloys, and creating materials with unique properties like high wear resistance. Researchers use HPT to study fundamental aspects of deformation and phase transition under extreme conditions. Additionally, HPT is being explored for potential applications in the energy field. Progress in HPT science and technology opens new possibilities in the development of advanced materials with superior properties.
References
Metallurgy
Industrial processes | High-pressure torsion | Chemistry,Materials_science,Engineering | 254 |
4,500,115 | https://en.wikipedia.org/wiki/Proofs%20from%20THE%20BOOK | Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler. The book is dedicated to the mathematician Paul Erdős, who often referred to "The Book" in which God keeps the most elegant proof of each mathematical theorem. During a lecture in 1985, Erdős said, "You don't have to believe in God, but you should believe in The Book."
Content
Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory, geometry, analysis, combinatorics and graph theory. Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by Karl H. Hofmann. It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian and Spanish.
In November 2017 the American Mathematical Society announced the 2018 Leroy P. Steele Prize for Mathematical Exposition to be awarded to Aigner and Ziegler for this book.
The proofs include:
Six proofs of the infinitude of the primes, including Euclid's and Furstenberg's
Proof of Bertrand's postulate
Fermat's theorem on sums of two squares
Two proofs of the Law of quadratic reciprocity
Proof of Wedderburn's little theorem asserting that every finite division ring is a field
Four proofs of the Basel problem
Proof that e is irrational (also showing the irrationality of certain related numbers)
Hilbert's third problem
Sylvester–Gallai theorem and De Bruijn–Erdős theorem
Cauchy's theorem
Borsuk's conjecture
Schröder–Bernstein theorem
Wetzel's problem on families of analytic functions with few distinct values
The fundamental theorem of algebra
Monsky's theorem (4th edition)
Van der Waerden's conjecture
Littlewood–Offord lemma
Buffon's needle problem
Sperner's theorem, Erdős–Ko–Rado theorem and Hall's theorem
Lindström–Gessel–Viennot lemma and the Cauchy–Binet formula
Four proofs of Cayley's formula
Kakeya sets in vector spaces over finite fields
Bregman–Minc inequality
Dinitz problem
Steve Fisk's proof of the art gallery theorem
Five proofs of Turán's theorem
Shannon capacity and Lovász number
Chromatic number of Kneser graphs
Friendship theorem
Some proofs using the probabilistic method
References
Günter M. Ziegler's homepage, including a list of editions and translations.
Mathematical proofs
Mathematics books
Paul Erdős
1998 non-fiction books | Proofs from THE BOOK | Mathematics | 577 |
43,464,156 | https://en.wikipedia.org/wiki/Reversed-Field%20eXperiment | The Reversed-Field eXperiment (RFX) is the largest reversed field pinch device presently in operation, situated in Padua, Italy. It was constructed from 1985 to 1991, and has been in operation since 1992.
The experiments carried out over the last two decades with two large RFP machines (MST in Madison, Wisconsin, and RFX in Padua) have provided new insight into the physical phenomena taking place in magnetically confined plasma dynamics.
See also
Madison Symmetric Torus
References
External links
Consorzio RFX website
Magnetic confinement fusion devices
Science and technology in Italy | Reversed-Field eXperiment | Chemistry | 113 |
307,556 | https://en.wikipedia.org/wiki/Hypocotyl | The hypocotyl (short for "hypocotyledonous stem", meaning "below seed leaf") is the stem of a germinating seedling, found below the cotyledons (seed leaves) and above the radicle (root).
Eudicots
As the plant embryo grows at germination, it sends out the radicle, which becomes the primary root and penetrates down into the soil. After emergence of the radicle, the hypocotyl emerges and lifts the growing tip (usually including the seed coat) above the ground, bearing the embryonic leaves (called cotyledons) and the plumule that gives rise to the first true leaves. The hypocotyl is the primary organ of extension of the young plant and develops into the stem.
Monocots
The early development of a monocot seedling like cereals and other grasses is somewhat different. A structure called the coleoptile, essentially a part of the cotyledon, protects the young stem and plumule as growth pushes them up through the soil. A mesocotyl—that part of the young plant that lies between the seed (which remains buried) and the plumule—extends the shoot up to the soil surface, where secondary roots develop from just beneath the plumule. The primary root from the radicle may then fail to develop further. The mesocotyl is considered to be partly hypocotyl and partly cotyledon (see seed).
Not all monocots develop like the grasses. The onion develops in a manner similar to the first sequence described above, the seed coat and endosperm (stored food reserve) pulled upwards as the cotyledon extends. Later, the first true leaf grows from the node between the radicle and the sheath-like cotyledon, breaking through the cotyledon to grow past it.
Storage organ
In some plants, the hypocotyl becomes enlarged as a storage organ. Examples include cyclamen, gloxinia and celeriac. In cyclamen this storage organ is called a tuber.
Hypocotyl elongation assay
One of the widely used assays in the field of photobiology is the investigation of the effect of changes in light quantity and quality on hypocotyl elongation. It is frequently used to study the growth promoting vs. growth repressing effects of application of plant hormones like ethylene. Under normal light conditions, hypocotyl growth is controlled by a process called photomorphogenesis, while shading the seedlings evokes a rapid transcriptional response which negatively regulates photomorphogenesis and results in increased rates of hypocotyl growth. This rate is highest when plants are kept in darkness mediated by a process called skotomorphogenesis, which contrasts photomorphogenesis.
See also
Epicotyl
Monocotyledon
Dicotyledon
References
Plant anatomy
Plant morphology
Plant reproduction | Hypocotyl | Biology | 631 |
19,373,692 | https://en.wikipedia.org/wiki/Timeline%20of%20probability%20and%20statistics | The following is a timeline of probability and statistics.
Before 1600
8th century – Al-Khalil, an Arab mathematician studying cryptology, wrote the Book of Cryptographic Messages. The work has been lost, but based on the reports of later authors, it contained the first use of permutations and combinations to list all possible Arabic words with and without vowels.
9th century – Al-Kindi was the first to use frequency analysis to decipher encrypted messages and developed the first code-breaking algorithm. He wrote a book entitled Manuscript on Deciphering Cryptographic Messages, containing detailed discussions on statistics and cryptanalysis. Al-Kindi also made the earliest known use of statistical inference.
13th century – An important contribution of Ibn Adlan concerned the sample size needed for the use of frequency analysis.
13th century – the first known calculation of the probability for throwing three dice is published in the Latin poem De vetula.
1560s (published 1663) – Cardano's Liber de ludo aleae attempts to calculate probabilities of dice throws. He demonstrates the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
1577 – Bartolomé de Medina defends probabilism, the view that in ethics one may follow a probable opinion even if the opposite is more probable
17th century
1654 – Blaise Pascal and Pierre de Fermat create the mathematical theory of probability,
1657 – Christiaan Huygens's De ratiociniis in ludo aleae is the first book on mathematical probability,
1662 – John Graunt's Natural and Political Observations Made upon the Bills of Mortality makes inferences from statistical data on deaths in London,
1666 – In Le Journal des Sçavans xxxi, 2 August 1666 (359–370(=364)) appears a review of the third edition (1665) of John Graunt's Observations on the Bills of Mortality. This review gives a summary of 'plusieurs reflexions curieuses' ('several curious reflections'), of which the second is Graunt's data on life expectancy. This review is used by Nicolaus Bernoulli in his De Usu Artis Conjectandi in Jure (1709).
1669 – Between August and December of that year, Christiaan Huygens and his brother Lodewijk discuss Graunt's mortality table (Graunt 1662, p. 62) in letters #1755
1693 – Edmond Halley prepares the first mortality tables statistically relating death rate to age,
18th century
1710 – John Arbuthnot argues that the constancy of the ratio of male to female births is a sign of divine providence,
1713 – Posthumous publication of Jacob Bernoulli's Ars Conjectandi, containing the first derivation of a law of large numbers,
1724 – Abraham de Moivre studies mortality statistics and the foundation of the theory of annuities in Annuities upon Lives,
1733 – de Moivre introduces the normal distribution to approximate the binomial distribution in probability,
1739 – David Hume's Treatise of Human Nature argues that inductive reasoning is unjustified,
1761 – Thomas Bayes proves Bayes' theorem,
1786 – William Playfair's Commercial and Political Atlas introduces graphs and bar charts of data,
19th century
1801 – Carl Friedrich Gauss predicts the orbit of Ceres using a line of best fit
1805 – Adrien-Marie Legendre introduces the method of least squares for fitting a curve to a given set of observations,
1814 – Pierre-Simon Laplace's Essai philosophique sur les probabilités defends a definition of probabilities in terms of equally possible cases, introduces generating functions and Laplace transforms, uses conjugate priors for exponential families, proves an early version of the Bernstein–von Mises theorem on the asymptotic irrelevance of prior distributions on the limiting posterior distribution and the role of the Fisher information on asymptotically normal posterior modes.
1835 – Adolphe Quetelet's Treatise on Man introduces social science statistics and the concept of the "average man",
1866 – John Venn's Logic of Chance defends the frequency interpretation of probability.
1877–1883 – Charles Sanders Peirce outlines frequentist statistics, emphasizing the use of objective randomization in experiments and in sampling. Peirce also invented an optimally designed experiment for regression.
1880 – Thorvald N. Thiele gives a mathematical analysis of Brownian motion, introduces the likelihood function, and invents cumulants.
1888 – Francis Galton introduces the concept of correlation,
1900 – Louis Bachelier analyzes stock price movements as a stochastic process,
20th century
1908 – Student's t-distribution for the mean of small samples published in English (following earlier derivations in German).
1913 – Michel Plancherel states fundamental results in ergodic theory.
1920 – The central limit theorem in its modern form was formally stated.
1921 – John Maynard Keynes' Treatise on Probability defends a logical interpretation of probability. Sewall Wright develops path analysis.
1928 – L. H. C. Tippett and Ronald Fisher introduce extreme value theory,
1933 – Andrey Nikolaevich Kolmogorov publishes his book Basic notions of the calculus of probability (Grundbegriffe der Wahrscheinlichkeitsrechnung) which contains an axiomatization of probability based on measure theory,
1935 – Fisher's Design of Experiments (1st ed),
1937 – Jerzy Neyman introduces the concept of confidence interval in statistical testing,
1941 – Due to World War II, research on detection theory started, leading to the receiver operating characteristic
1946 – Cox's theorem derives the axioms of probability from simple logical assumptions,
1948 – Claude Shannon's Mathematical Theory of Communication defines capacity of communication channels in terms of probabilities,
1953 – Nicholas Metropolis introduces the idea of thermodynamic simulated annealing methods
See also
Founders of statistics
List of important publications in statistics
History of probability
History of statistics
References
Further reading
History of probability and statistics
Probability and Statistics
Statistics-related lists | Timeline of probability and statistics | Mathematics | 1,277 |
60,504,756 | https://en.wikipedia.org/wiki/List%20of%20implementations%20of%20differentially%20private%20analyses | Since the advent of differential privacy, a number of systems supporting differentially private data analyses have been implemented and deployed. This article tracks real-world deployments, production software packages, and research prototypes.
Real-world deployments
Production software packages
These software packages purport to be usable in production systems. They are split into two categories: those focused on answering statistical queries with differential privacy, and those focused on training machine learning models with differential privacy.
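As a generic illustration of what the statistical-query category provides, here is a textbook sketch of the Laplace mechanism (this is an illustration only, not the API of any package tracked by this article):

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 47, 51, 62, 29, 44]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count near 4
```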
Statistical analyses
Machine learning
Research projects and prototypes
See also
Differential Privacy
Secure multi-party computation
References
Differential privacy
Information privacy | List of implementations of differentially private analyses | Engineering | 115 |
60,002,578 | https://en.wikipedia.org/wiki/Korean%20Astronomical%20Society | The Korean Astronomical Society () is a non-profit learned society in South Korea that aims to support astronomical scholarship, technological development, education, and the spread of astronomical knowledge. It operates the Journal of the Korean Astronomical Society.
The KAS was founded on 21 March 1965, and presently has over 700 members. It holds meetings in spring and autumn of each year. The KAS is a member organisation of the International Astronomical Union, which it joined in 1973, and hosted the IAU Asia-Pacific Regional Meeting in 1996 and 2014. It also co-hosted the International Astronomy Olympiad in 2012.
References
Astronomy societies
1965 establishments in Korea
Scientific organizations established in 1965
Scientific organizations based in South Korea | Korean Astronomical Society | Astronomy | 139 |
26,465,658 | https://en.wikipedia.org/wiki/Yohimban | Yohimban is a chemical compound. It is the base chemical structure of various alkaloids in the Rauvolfia and Corynanthe plant genera, including yohimbine, rauwolscine, corynanthine, ajmalicine, reserpine, deserpidine, and rescinnamine, among others.
References
Tryptamine alkaloids
Quinolizidine alkaloids
Heterocyclic compounds with 5 rings | Yohimban | Chemistry | 96 |
1,429,597 | https://en.wikipedia.org/wiki/Phenol%20red | Phenol red (also known as phenolsulfonphthalein or PSP) is a pH indicator frequently used in cell biology laboratories.
Chemical structure and properties
Phenol red exists as a red crystal that is stable in air. Its solubility is 0.77 grams per liter (g/L) in water and 2.9 g/L in ethanol. It is a weak acid with pKa = 8.00.
A solution of phenol red is used as a pH indicator, often in cell culture. Its color exhibits a gradual transition from yellow (λmax = 443 nm) to red (λmax = 570 nm) over the pH range 6.8 to 8.2. Above pH 8.2, phenol red turns a bright pink (fuchsia) color.
In crystalline form, and in solution under very acidic conditions (low pH), the compound exists as a zwitterion as in the structure shown above, with the sulfonate group negatively charged and the ketone group carrying an additional proton. This form is sometimes symbolically written as H₂⁺PS⁻ and is orange-red. If the pH is increased (pKa = 1.2), the proton from the ketone group is lost, resulting in the yellow, negatively charged ion denoted as HPS⁻. At still higher pH (pKa = 7.7), the phenol's hydroxy group loses its proton, resulting in the red ion denoted as PS²⁻.
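The relative abundances of the three forms at a given pH follow from the two dissociation constants quoted above (pKa = 1.2 and 7.7). A minimal speciation sketch, using H2PS, HPS- and PS2- as shorthand for the orange-red, yellow, and red forms described in the text:

```python
def phenol_red_fractions(pH, pKa1=1.2, pKa2=7.7):
    """Fractions of the H2PS (orange-red), HPS- (yellow), PS2- (red) forms."""
    h2ps = 1.0                      # fully protonated reference species
    hps = 10 ** (pH - pKa1)         # after the first deprotonation
    ps = hps * 10 ** (pH - pKa2)    # after the second deprotonation
    total = h2ps + hps + ps
    return h2ps / total, hps / total, ps / total

for pH in (1.2, 6.8, 7.7, 8.2):
    h2ps, hps, ps = phenol_red_fractions(pH)
    print(f"pH {pH}: H2PS {h2ps:.3f}  HPS- {hps:.3f}  PS2- {ps:.3f}")
```

At pH 7.7 the yellow and red forms are present in equal amounts, matching the colour transition described above.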
In several sources, the structure of phenol red is shown with the sulfur atom being part of a cyclic group, similar to the structure of phenolphthalein. However, this cyclic structure could not be confirmed by X-ray crystallography.
Several indicators share a similar structure to phenol red, including bromothymol blue, thymol blue, bromocresol purple, thymolphthalein, and phenolphthalein. (A table of other common chemical indicators is available in the article on pH indicators.)
Uses
Phenolsulfonphthalein test
Phenol red was used by Leonard Rowntree and John Geraghty in the phenolsulfonphthalein test to estimate the overall blood flow through the kidney in 1911. It was the first test of kidney function and was used for almost a century but is now obsolete.
The test is based on the fact that phenol red is excreted almost entirely in the urine. Phenol red solution is administered intravenously; the urine produced is collected. By measuring the amount of phenol red excreted colorimetrically, kidney function can be determined.
Indicator for cell cultures
Most living tissues prosper at a near-neutral pH—that is, a pH close to 7. The pH of blood ranges from 7.35 to 7.45, for instance. When cells are grown in tissue culture, the medium in which they grow is held close to this physiological pH. A small amount of phenol red added to this growth medium will have a pink-red color under normal conditions. Typically, 15 mg/L are used for cell culture.
In the event of problems, waste products produced by dying cells or overgrowth of contaminants will cause a change in pH, leading to a change in indicator color. For example, a culture of relatively slowly dividing mammalian cells can be quickly overgrown by bacterial contamination. This usually results in an acidification of the medium, turning it yellow. Many biologists find this a convenient way to rapidly check on the health of tissue cultures. In addition, the waste products produced by the mammalian cells themselves will slowly decrease the pH, gradually turning the solution orange and then yellow. This color change is an indication that even in the absence of contamination, the medium needs to be replaced (generally, this should be done before the medium has turned completely orange).
Since the color of phenol red can interfere with some spectrophotometric and fluorescent assays, many types of tissue culture media are also available without phenol red.
Estrogen mimic
Phenol red is a weak estrogen mimic, and in cell cultures can enhance the growth of cells that express the estrogen receptor. It has been used to induce ovarian epithelial cells from post-menopausal women to differentiate into cells with properties of oocytes (eggs), with potential implications for both fertility treatment and stem cell research.
Use in swimming pool test kits
Phenol red, sometimes labelled with a different name, such as "Guardex Solution #2", is used as a pH indicator in home swimming pool test kits.
Chlorine can result in the bleaching of the dye in the absence of thiosulfate to inhibit the oxidizing chlorine. High levels of bromine can convert phenol red to bromophenol red (dibromophenolsulfonephthalein), whose lowered pKa results in an indicator with a range shifted in the acidic direction – water at pH 6.8 will appear to test at 7.5. Even higher levels of bromine (>20 ppm) can result in the secondary conversion of bromophenol red to bromophenol blue, with an even lower pKa, erroneously giving the impression that the water has an extremely high pH despite the pH actually being dangerously low.
References
External links
Video of phenol red activity demonstration and medium preparation
PH indicators
Triarylmethane dyes
4-Hydroxyphenyl compounds
Benzenesulfonates | Phenol red | Chemistry,Materials_science | 1,150 |
59,808,372 | https://en.wikipedia.org/wiki/Canadian%20Society%20for%20Pharmaceutical%20Sciences | The Canadian Society for Pharmaceutical Sciences (CSPS) advocates for excellence in pharmaceutical research, promotes the allocation of research funds, seeks involvement in decision- and policy-making processes, and provides a forum for early-career scientists. It was founded in 1997. The Journal of Pharmacy and Pharmaceutical Sciences is the official journal of the CSPS.
References
External links
Learned societies of Canada
Biology societies
Organizations established in 1997
1997 establishments in Canada
Pharmacological societies | Canadian Society for Pharmaceutical Sciences | Chemistry | 86 |
21,176,816 | https://en.wikipedia.org/wiki/Foam%20separation | Foam separation is a chemical process which falls into a category of separation techniques called "Adsorptive bubble separation methods". It is further divided into froth flotation and foam fractionation. Foam separation is essential in order to prevent contamination of fermentation medium through the foam by external microbes.
References
Chemical processes | Foam separation | Chemistry | 66 |
20,937,833 | https://en.wikipedia.org/wiki/Pol%20%28HIV%29 | Pol (DNA polymerase) refers to a gene in retroviruses, or the protein produced by that gene.
Products of pol include:
Reverse transcriptase
Common to all retroviruses, this enzyme transcribes the viral RNA into double-stranded DNA.
Integrase
This enzyme integrates the DNA produced by reverse transcriptase into the host's genome.
Protease
A protease is any enzyme that cuts proteins into segments. HIV's gag and pol genes do not produce their proteins in their final form, but as larger combination proteins; the specific protease used by HIV cleaves these into separate functional units. Protease inhibitor drugs block this step.
See also
Gag/pol translational readthrough site
External links
Viral structural proteins | Pol (HIV) | Biology | 158 |
46,811,216 | https://en.wikipedia.org/wiki/Nonsteroidal%20antiandrogen | A nonsteroidal antiandrogen (NSAA) is an antiandrogen with a nonsteroidal chemical structure. They are typically selective and full or silent antagonists of the androgen receptor (AR) and act by directly blocking the effects of androgens like testosterone and dihydrotestosterone (DHT). NSAAs are used in the treatment of androgen-dependent conditions in men and women. They are the converse of steroidal antiandrogens (SAAs), which are antiandrogens that are steroids and are structurally related to testosterone.
Medical uses
NSAAs are used in clinical medicine for the following indications:
Prostate cancer in men
Androgen-dependent skin and hair conditions like acne, hirsutism, seborrhea, and pattern hair loss (androgenic alopecia) in women
Hyperandrogenism, such as due to polycystic ovary syndrome or congenital adrenal hyperplasia, in women
As a component of hormone therapy for transgender women
Precocious puberty in boys
Priapism in men
Available forms
Pharmacology
Unlike SAAs, NSAAs have little or no capacity to activate the AR, show no off-target hormonal activity such as progestogenic, glucocorticoid, or antimineralocorticoid activity, and lack antigonadotropic effects. For these reasons, they have improved efficacy and selectivity as antiandrogens and do not lower androgen levels, instead acting solely by directly blocking the actions of androgens at the level of their biological target, the AR.
List of NSAAs
Marketed
First-generation
Flutamide (Eulexin): Marketed for the treatment of prostate cancer and also used in the treatment of acne, hirsutism, and hyperandrogenism in women. It has also been studied in the treatment of benign prostatic hyperplasia. Now little-used due to high incidence of elevated liver enzymes and hepatotoxicity and the availability of safer agents.
Nilutamide (Anandron, Nilandron): Marketed for the treatment of prostate cancer. Very little-used due to a high incidence of interstitial pneumonitis and high rates of several unique and unfavorable side effects such as nausea and vomiting, visual disturbances, and alcohol intolerance.
Bicalutamide (Casodex): Marketed for the treatment of prostate cancer and also used in the treatment of hirsutism in women, as a component of hormone therapy for transgender women, to delay precocious puberty in boys, to prevent or alleviate priapism, and for other indications. It has also been studied in the treatment of benign prostatic hyperplasia. By far the most widely used NSAA, due to its favorable profile of efficacy, tolerability, and safety.
Topilutamide (Eucapil): Also known as fluridil. Marketed as a topical medication for the treatment of pattern hair loss (androgenic alopecia) in the Czech Republic and Slovakia. Limited availability and lack of an oral formulation for systemic use make it a very little-known drug.
Second-generation
Apalutamide (Erleada): Marketed for the treatment of prostate cancer. Very similar to enzalutamide, but with reduced central nervous system distribution and hence is expected to have a reduced risk of seizures and other central side effects.
Enzalutamide (Xtandi): Marketed for the treatment of prostate cancer. More effective than the first-generation NSAAs due to increased efficacy and potency and shows no risk of elevated liver enzymes or hepatotoxicity. However, it has a small (1%) risk of seizures and has central nervous system side effects like anxiety and insomnia due to off-target inhibition of the GABAA receptor that the first-generation NSAAs do not have. In addition, it has prominent drug interactions due to moderate to strong induction of multiple cytochrome P450 enzymes. Currently on-patent with no generic availability and hence is very expensive.
Darolutamide (Nubeqa): Marketed for the treatment of prostate cancer. Structurally distinct from enzalutamide, apalutamide, and other NSAAs. Relative to enzalutamide and apalutamide, shows greater efficacy as an AR antagonist, improved activity against mutated AR variants in prostate cancer, little or no inhibition or induction of cytochrome P450 enzymes, and little or no central nervous system distribution. However, has a much shorter terminal half-life and lower potency.
Miscellaneous
Cimetidine (Tagamet): An over-the-counter histamine H2 receptor antagonist that also shows very weak activity as an AR antagonist. Also inhibits cytochrome P450 enzymes and thereby inhibits hepatic estradiol metabolism and increases circulating estradiol levels. It has been investigated in the treatment of hirsutism but showed minimal effectiveness. Sometimes causes gynecomastia as a rare side effect.
Nonsteroidal androgen synthesis inhibitors like ketoconazole can also be described as "NSAAs", although the term is usually reserved to describe AR antagonists.
Not marketed
Under development
Proxalutamide (GT-0918): A second-generation NSAA. It is under development for the treatment of prostate cancer. Similar to enzalutamide and apalutamide, but with increased efficacy as an AR antagonist, little or no central nervous system distribution, and no induction of seizures in animals.
Seviteronel (VT-464) is a nonsteroidal androgen biosynthesis inhibitor which is under development for the treatment of prostate cancer.
Development discontinued
Cioteronel (CPC-10997; Cyoctol, Ethocyn, X-Andron): A structurally unique first-generation NSAA. It was under development as an oral medication for the treatment of benign prostatic hyperplasia and as a topical medication for the treatment of acne and pattern hair loss. It reached phase II and phase III clinical trials for these indications prior to discontinuation due to insufficient effectiveness.
Inocoterone acetate (RU-38882, RU-882): A steroid-like NSAA. It was under development as a topical medication for the treatment of acne but was discontinued due to insufficient effectiveness in clinical trials.
RU-58841 (PSK-3841, HMR-3841): A first-generation NSAA related to nilutamide. It was under development as a topical medication for the treatment of acne and pattern hair loss but its development was discontinued during phase I clinical trials.
See also
Selective androgen receptor modulator
N-Terminal domain antiandrogen
Discovery and development of antiandrogens
Nonsteroidal estrogen
References
Further reading
External links
Anti-acne preparations
Antiandrogens
Hair loss medications
Hair removal
Hormonal antineoplastic drugs
Progonadotropins
Prostate cancer
Sex hormones | Nonsteroidal antiandrogen | Biology | 1,465 |
10,721,443 | https://en.wikipedia.org/wiki/Journal%20of%20Hydrologic%20Engineering | The Journal of Hydrologic Engineering is a monthly engineering journal, first published by the American Society of Civil Engineers in 1996. The journal provides information on the development of new hydrologic methods, theories, and applications to current engineering problems. It publishes papers on analytical, experimental, and numerical methods with regard to the investigation and modeling of hydrological processes. It also publishes technical notes, book reviews, and forum discussions. Though the journal is based in the United States, articles dealing with subjects from around the world are accepted and published. The journal requires the use of SI (metric) units, but allows authors to report values in other systems of measure alongside the SI values.
The journal is run by an editor-in-chief and a number of associate editors, who are respected professionals in the fields of hydrology and hydraulic engineering. The editors come from both academic and professional backgrounds and are responsible for screening submissions and forwarding articles to journal reviewers. The journal reviewers are subject matter experts who volunteer to review articles in order to determine if they should be published by the journal. The current editor-in-chief is R. S. Govindaraju of Purdue University.
G. V. Loganathan of Virginia Polytechnic Institute and State University (a victim of the Virginia Tech massacre on 16 April 2007) was an associate editor.
Editors
The following individuals have served as the editor-in-chief:
Rao S. Govindaraju (2013 – present)
Vijay P. Singh (2005 – 2013)
M. Levent Kavvas (1996–2005)
Indexes
The journal is indexed in Google Scholar, Baidu, Elsevier (Ei Compendex), Clarivate Analytics (Web of Science), ProQuest, Civil engineering database, TRDI, OCLC (WorldCat), IET/INSPEC, Crossref, Scopus, and EBSCOHost.
See also
List of scientific journals
References
External links
ASCE Library
Journal website
Academic journals established in 1996
Hydrology journals
Hydraulic engineering
Hydrologic Engineering
American Society of Civil Engineers academic journals | Journal of Hydrologic Engineering | Physics,Engineering,Environmental_science | 421 |
75,552,978 | https://en.wikipedia.org/wiki/Tremella%20erythrina | Tremella erythrina is a species of fungus in the family Tremellaceae. It produces orange to red, lobate to foliaceous, gelatinous basidiocarps (fruit bodies) and is parasitic on other fungi on wood of broad-leaved trees. It was originally described from China.
Taxonomy
Tremella erythrina was first published in 2019 by Chinese mycologists Xin-Zhan Liu and Feng-Yan Bai based on collections made in Guangxi Province, China. The species is considered to be close to Tremella mesenterica, the type species of the genus, and hence belongs in Tremella sensu stricto.
Description
Fruit bodies are gelatinous, red to brownish orange, up to 18 mm across, cerebriform (brain-like) to foliaceous, with undulating, hollow lobes. Microscopically, the basidia are tremelloid (globose to broadly ellipsoid, with oblique to vertical septa), 4-celled, 12 to 18 by 13 to 19 μm. The basidiospores are ellipsoid, smooth, 7 to 10 by 5 to 7 μm.
Similar species
Tremella dysenterica and T. rubromaculata are similarly coloured, but were described from Brazil and Guatemala respectively. Tremella samoensis, described from Samoa, and T. flammea, described from Japan, are also similar in colour, but differ microscopically.
Habitat and distribution
Tremella erythrina is a parasite on lignicolous fungi, but its host is unknown. It was originally described from wood of a deciduous tree.
The species is currently only known from China.
References
erythrina
Fungi described in 2019
Fungi of Asia
Fungus species | Tremella erythrina | Biology | 378 |
1,933,308 | https://en.wikipedia.org/wiki/Swivel | A swivel is a connection that allows the connected object, such as a gun, chair, swivel caster, or an anchor rode to rotate horizontally or vertically.
Swivel designs
A common design for a swivel is a cylindrical rod that can turn freely within a support structure. The rod is usually prevented from slipping out by a nut, washer or thickening of the rod. The device can be attached to the ends of the rod or the center. Another common design is a sphere that is able to rotate within a support structure. The device is attached to the sphere. A third design is a hollow cylindrical rod that has a rod that is slightly smaller than its inside diameter inside of it. They are prevented from coming apart by flanges. The device may be attached to either end.
A swivel joint for a pipe is often a threaded connection in between which at least one of the pipes is curved, often at an angle of 45 or 90 degrees. The connection is tightened enough to be water- or air-tight and then tightened further so that it is in the correct position.
Anchor rode swivel
Swivels are also used in the nautical sector as an element of the anchor rode and in boat mooring systems. With yachts, the swivel is most commonly used between the anchor and the chain. There is a school of thought that anchor swivels should not be connected to the anchor itself, but should be placed somewhere in the chain rode.
The anchor swivel is expected to fulfill two purposes:
If the boat swings in a circle the chain may become twisted and the swivel may alleviate this problem.
If the anchor comes up turned around, some swivels may right it.
Concerns
The biggest concern about anchor swivels is that they might introduce a weak link to the rode.
With most swivels the shaft is neatly embedded in the other half of the swivel, as in the example of the stainless steel anchor swivel shown here. When used in marine applications, and worse still in tropical climates, this is a cause of corrosion, even in stainless steel.
The chromium in stainless steel creates a passivation layer on the surface that protects the steel from rusting. In low-oxygen situations and/or warm water this passivation layer breaks down and corrosion will set in. Low oxygen will occur in crevices which stay wet (cracks, welds, shackle threads, keel bolts, etc.) or confined spaces (swivel shafts, etc.). Corrosion may also happen internally. Welding may cause the chromium to bind with carbon and thus indirectly lead to corrosion.
In some cases the shaft is threaded, with a nut welded onto it to hold the two parts together. First of all, a threaded bar is inherently weaker than a solid bar of the same diameter. Then there is the issue of the welds not holding.
When a boat swings on a well-embedded anchor, this can cause strong lateral loads on the swivel, prying its jaws open and thus disconnecting the chain from the swivel. Hence the advice from the anchor maker Rocna, noted above, that swivels should not be connected directly to the anchor.
Some boating schools teach that the anchor should be pulled tight against the bow roller by the windlass. This stresses the weakest link (the swivel) as the vessel pounds through waves and can thus hasten failure.
See also
Slewing bearing
References
Bibliography
Blackwell, Alex & Daria; Happy Hooking – the Art of Anchoring, 2008, 2011, 2019, White Seahorse.
Hinz, Earl R.; The Complete Book of Anchoring and Mooring, Rev. 2d ed., 1986, 1994, 2001, Cornell Maritime Press.
External links
Rohrdrehgelenk (German-language article on pipe swivel joints)
Joining
Mechanical engineering | Swivel | Physics,Engineering | 776 |
38,782,175 | https://en.wikipedia.org/wiki/GLIS1 | Glis1 (GLIS family zinc finger 1) is a gene encoding a Krüppel-like protein of the same name, whose locus is found on chromosome 1p32.3. The gene is enriched in unfertilised eggs and embryos at the one-cell stage, and it can be used to promote direct reprogramming of somatic cells to induced pluripotent stem cells, also known as iPS cells. Glis1 is a highly promiscuous transcription factor, regulating the expression of numerous genes, either positively or negatively. In organisms, Glis1 does not appear to have any directly important functions. Mice whose Glis1 gene has been removed have no noticeable change to their phenotype.
Structure
Glis1 is an 84.3 kDa proline rich protein composed of 789 amino acids. No crystal structure has yet been determined for Glis1, however it is homologous to other proteins in many parts of its amino acid sequence whose structures have been solved.
Zinc finger domain
Glis1 uses a zinc finger domain comprising five tandem Cys2His2 zinc finger motifs (meaning each zinc atom is coordinated by two cysteine and two histidine residues) to interact with target DNA sequences and regulate gene transcription. The domain interacts sequence-specifically with the DNA, following the major groove along the double helix. It has the consensus sequence GACCACCCAC. The individual zinc finger motifs are separated from one another by the amino acid sequence (T/S)GEKP(Y/F)X, where X can be any amino acid and (A/B) denotes either A or B. This domain is homologous to the zinc finger domain found in Gli1 and so is thought to interact with DNA in the same way. The alpha helices of the fourth and fifth zinc fingers are inserted into the major groove and make the most extensive contact of all the zinc fingers with the DNA. Very few contacts are made by the second and third fingers, and the first finger does not contact the DNA at all. The first finger does, however, make numerous protein-protein interactions with the second zinc finger.
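The linker motif quoted above maps directly onto a regular expression. The sketch below scans a protein sequence for it (the example sequence is invented for illustration and is not real Glis1):

```python
import re

# (T/S)GEKP(Y/F)X, where X is any amino acid, per the text above.
LINKER = re.compile(r"[TS]GEKP[YF].")

example = "MKCPDCGKSFTGEKPYACQWQ"  # invented sequence, not real Glis1
for match in LINKER.finditer(example):
    print(match.start(), match.group())  # prints: 10 TGEKPYA
```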
Termini
Glis1 has an activation domain at its C-terminus and a repressive domain at its N-terminus. The repressive domain is much stronger than the activation domain, meaning transcriptional activation is weak. The activation domain of Glis1 is four times stronger in the presence of CaM kinase IV, which may be due to a coactivator. A proline-rich region of the protein is also found towards the N-terminus. The protein's termini are fairly unusual and have no strong sequence similarity to other proteins.
Use in cell reprogramming
Glis1 can be used as one of the four factors used in reprogramming somatic cells to induced pluripotent stem cells. The three transcription factors Oct3/4, Sox2 and Klf4 are essential for reprogramming but are extremely inefficient on their own, fully reprogramming only roughly 0.005% of the cells treated with the factors. When Glis1 is introduced with these three factors, the efficiency of reprogramming is massively increased, producing many more fully reprogrammed cells. The transcription factor c-Myc can also be used as the fourth factor, and was the original fourth factor used by Shinya Yamanaka, who received the 2012 Nobel Prize in Physiology or Medicine for his work on the conversion of somatic cells to iPS cells. Yamanaka's work provides a way of bypassing the controversy surrounding embryonic stem cells.
Mechanism
Somatic cells are most often fully differentiated in order to perform a specific function, and therefore only express the genes required to perform their function. This means the genes that are required for differentiation to other types of cell are packaged within chromatin structures, so that they are not expressed.
Glis1 reprograms cells by promoting multiple pro-reprogramming pathways. These pathways are activated due to the up regulation of the transcription factors N-Myc, Mycl1, c-Myc, Nanog, ESRRB, FOXA2, GATA4, NKX2-5, as well as the other three factors used for reprogramming. Glis1 also up-regulates expression of the protein LIN28 which binds the let-7 microRNA precursor, preventing production of active let-7. Let-7 microRNAs reduce the expression of pro-reprogramming genes via RNA interference. Glis1 is also able to directly associate with the other three reprogramming factors which may help their function.
The result of the various changes in gene expression is the conversion of heterochromatin, which is very difficult to access, to euchromatin, which can be easily accessed by transcriptional proteins and enzymes such as RNA polymerase. During reprogramming, histones, which make up nucleosomes, the complexes used to package DNA, are generally demethylated and acetylated 'unpacking' the DNA by neutralising the positive charge of the lysine residues on the N-termini of histones.
Advantages over c-myc
Glis1 has a number of extremely important advantages over c-myc in cell reprogramming.
No risk of cancer: Although c-myc enhances the efficiency of reprogramming, its major disadvantage is that it is a proto-oncogene meaning the iPS cells produced using c-myc are much more likely to become cancerous. This is an enormous obstacle between iPS cells and their use in medicine. When Glis1 is used in cell reprogramming, there is no increased risk of cancer development.
Production of fewer 'bad' colonies: While c-myc promotes the proliferation of reprogrammed cells, it also promotes the proliferation of 'bad' cells which have not reprogrammed properly and make up the vast majority of cells in a dish of treated cells. Glis1 actively suppresses the proliferation of cells that have not fully reprogrammed, making the selection and harvesting of the properly reprogrammed cells less laborious. This is likely to be due to many of these 'bad' cells expressing Glis1 but not all four of the reprogramming factors. When expressed on its own, Glis1 inhibits proliferation.
More efficient reprogramming: The use of Glis1 reportedly produces more fully reprogrammed iPS cells than c-myc. This is an important quality given the inefficiency of reprogramming.
Disadvantages
Inhibition of Proliferation: Failure to stop Glis1 expression after reprogramming inhibits cell proliferation and ultimately leads to the death of the reprogrammed cell. Therefore, careful regulation of Glis1 expression is required. This explains why Glis1 expression is switched off in embryos after they have started to divide.
Roles in disease
Glis1 has been implicated to play a part in a number of diseases and disorders.
Psoriasis
Glis1 has been shown to be heavily up-regulated in psoriasis, a disease which causes chronic inflammation of the skin. Normally, Glis1 is not expressed in the skin at all. During inflammation, however, it is expressed in the spinous layer of the skin, the second layer from the bottom of four, as a response to the inflammation. This is the last layer in which the cells have nuclei and thus the last layer where gene expression occurs. The role of Glis1 in this disease is believed to be to promote cell differentiation in the skin by increasing the expression of multiple pro-differentiation genes such as IGFBP2, which inhibits proliferation and can also promote apoptosis. It also decreases the expression of Jagged1, a ligand of Notch in the Notch signaling pathway, and Frizzled10, a receptor in the Wnt signaling pathway.
Late onset Parkinson's Disease
A certain allele of Glis1 which exists due to a single nucleotide polymorphism, a change in a single nucleotide of the DNA sequence of the gene, has been implicated as a risk factor in the neurodegenerative disorder Parkinson's disease. The allele is linked to the late onset variety of Parkinson's, which is acquired in old age. The reason behind this link is not yet known.
References
Transcription factors | GLIS1 | Chemistry,Biology | 1,744 |
3,677,213 | https://en.wikipedia.org/wiki/Send%20track | Send tracks (sometimes simply called Sends) are the software audio routing equivalent to the aux-sends found on multitrack sound mixing/sequencing consoles.
In audio recording, a given song is almost always made up of multiple tracks, with each instrument or sound on its own track (for example, one track for the drums, one for the guitar, one for a vocal, and so on). Further, each track can be separately adjusted in many ways, such as changing the volume, adding effects, and so on. This can be done with individual hardware components, commonly known as working "outside the box", or via software applications known as DAWs (digital audio workstations), commonly known as working "inside the box".
Send tracks are tracks that aren't (normally) used to record sound themselves, but instead apply adjustments to multiple, perhaps even all, tracks in the same way. For example, if the drums are not on one track but are instead spread out across multiple tracks (which is common), there is often the desire to treat them all the same in terms of volume, effects, and so on. Instead of doing that for each track, you can set up a single send track that applies to all of them.
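A toy illustration of this routing in code (a hypothetical mixer sketch, not any particular DAW's API): scaled copies of each track are summed onto one send bus, the shared effect is applied once to the bus, and the processed bus is mixed back with the dry tracks.

```python
import numpy as np

def mix_with_send(tracks, send_levels, effect):
    """Sum scaled copies of each track onto one send bus, process the bus
    once with the shared effect, and mix it back with the dry tracks."""
    dry = sum(tracks)
    bus = sum(level * track for level, track in zip(send_levels, tracks))
    return dry + effect(bus)

def toy_reverb(signal, delay=2000, decay=0.4):
    """Crude single-tap delay standing in for a real reverb."""
    out = signal.copy()
    out[delay:] += decay * signal[:-delay]
    return out

rng = np.random.default_rng(0)
kick, snare, hats = (rng.standard_normal(44100) for _ in range(3))
master = mix_with_send([kick, snare, hats], [0.3, 0.5, 0.2], toy_reverb)
```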
Advantages
Because numerous tracks can be treated uniformly with a single send track, send tracks can save a lot of time and resources. They are also inherently more flexible than their hardware equivalent, since any number of send tracks can be created as needed. For more complicated effect chains, send tracks also allow their output to be routed to other send tracks, which can route their output to further send tracks in turn. The solutions offered by most multi-track software provide musicians with an easier (although arguably less hands-on) approach to controlling sends and their respective effects on the audio.
Audio engineering | Send track | Engineering | 367 |
41,828,617 | https://en.wikipedia.org/wiki/China%20Computer%20Education | China Computer Education () is a 50-page weekly computer magazine published in mainland China.
History and profile
China Computer Education was started in 1993. It is published by the China Center for Information Industry Development (中国电子信息产业发展研究院; CCID) and managed by the Ministry of Industry and Information Technology of the People's Republic of China. Commonly referred to as "CCE", the majority of its readers are teachers and students of information technology and computer science. The content of the magazine involves product testing, computer hardware, software applications, news and commentary, and the submission of student papers. The magazine has a single-issue circulation of 350,000 copies, and is the largest IT media publication in mass circulation within China.
From 20 May 2013 onwards, the magazine was renamed China Information Weekly.
References
External links
1993 establishments in China
Chinese-language magazines
Magazines published in China
Weekly magazines published in China
Computer magazines
Magazines established in 1993 | China Computer Education | Technology | 199 |
61,194,775 | https://en.wikipedia.org/wiki/Micro-spatially%20offset%20Raman%20spectroscopy | Micro-spatially offset Raman spectroscopy (micro-SORS) is an analytical technique developed in 2014 that combines SORS with microscopy. The technique derives its sublayer-resolving properties from its parent technique, SORS. The main difference between SORS and micro-SORS is the spatial resolution: while SORS is suited to the analysis of millimetric layers, micro-SORS is able to resolve thin, micrometric-scale layers. Like SORS, micro-SORS preferentially collects the Raman photons generated under the surface in turbid (diffusely scattering) media. In this way, it is possible to reconstruct the chemical makeup of micrometric multi-layered turbid systems in a non-destructive way. Micro-SORS is particularly useful when dealing with precious or unique objects, as in the cultural heritage field and forensic science, or in biomedical applications, where a non-destructive molecular characterization constitutes a great advantage.
To date, micro-SORS has mainly been used to characterize biological materials such as bone and blood, and cultural heritage materials, especially paint stratigraphies. Other materials studied with the technique include polymers, industrial paper and wheat seeds.
Micro-SORS was developed on a conventional micro-Raman instrument, and portable micro-SORS prototypes are currently being optimized further to enable in-situ measurements and avoid the need for sampling.
Working principle
In turbid media, the depth-resolving power of confocal Raman microscopy is restricted by the optical properties of these materials. In such materials, Raman photons generated at different depths emerge at the surface after a number of scattering events. Raman photons generated in the sub-surface emerge laterally displaced from the position of the incident light, and this displacement is statistically proportional to the depth at which the photon was generated. Micro-SORS preferentially collects these displaced photons from the sub-surface by enlarging (defocusing) or separating the laser excitation and collection zones (full micro-SORS).
Micro-SORS key-modalities
Defocusing micro-SORS
Defocusing is the most basic variant of the technique; it does not provide a complete separation between excitation and collection zones, which makes this variant less effective. Nonetheless, defocused measurements have the great advantage that they can easily be performed with a conventional micro-Raman instrument without any hardware or software modifications. Defocusing consists of enlarging the excitation and collection zones by moving the microscope objective out of focus (Δz movements) from the surface of the object or sample under analysis. The Δz movements typically range from a few tens of micrometres up to two millimetres, depending on the number and thickness of the layers.
Full micro-SORS
This more sophisticated micro-SORS variant provides a complete separation of the laser excitation and collection zones (Δx offset), which requires a hardware or software modification to a conventional Raman microscope. The separation can be achieved by using an external probe or fibre optics to deliver the laser, by displacing the laser spot with the beam-steering alignment mirrors, by using a spatially resolved CCD, by using a digital micro-mirror device (DMD), by moving the tip of the Raman detection fibre to perform off-confocal detection of the signal, or by combining hyperspectral SORS with defocusing micro-SORS. Full micro-SORS has proven more effective in terms of both penetration depth into the sample and relative enhancement of the sublayer signal.
Layers system reconstruction
To reconstruct the micro-layer succession, a conventional Raman spectrum and at least one micro-SORS spectrum must be collected; acquiring several spectra at gradually increasing defocusing distances or spatial offsets is usually the best way to approach unknown materials. Comparing the acquired spectra reveals the layer composition: in defocused or spatially offset spectra, the signals of the sub-surface layers appear or are intensified relative to the surface signal. Data treatment such as spectrum normalization or subtraction is commonly used to better visualize the layer sequence.
The thickness of the layers can be estimated after calibration on a well-characterized sample set of known thickness.
Micro-SORS in art
Non-destructiveness is a major goal for conservation scientists, owing to the intrinsic value of cultural heritage objects. Micro-SORS was developed to address the need for a non-destructive analytical technique with high chemical specificity for the analysis of thin painted layers. In painted artworks, the paint film is typically built up by superimposing turbid, thin (micrometric-scale) pigmented layers, and their chemical characterization is essential for detecting degradation products, gaining information about the artistic technique, and for dating and authentication purposes. To date, micro-SORS has been used successfully to characterize the paint stratigraphy of polychrome sculptures, painted plasters, painted cards and contemporary street-art mural paintings.
References
See also
Cultural Heritage
Conservation Science
Biomedicine
Forensic science
Raman spectroscopy
Microscopy | Micro-spatially offset Raman spectroscopy | Chemistry | 1,071 |
15,893,082 | https://en.wikipedia.org/wiki/Tui%20mine | The Tui mine is an abandoned mine on the western slopes of Mount Te Aroha in the Kaimai Range of New Zealand. It was considered to be the most contaminated site in the country, following the cleanup of the former Fruitgrowers Chemical Company site at Māpua, Nelson.
History
Tui mine was in production by 1881. An aerial ropeway on 12 towers was built in 1889. A road was built in 1950, when the mine was said to be above sea level.
In the 1960s, the Tui mine extracted copper, lead and zinc sulphides, but had a problem with them being contaminated with mercury. The mine was abandoned in 1973, after the mining company Norpac Mining went bankrupt. The machinery was sold to the Mineral Resources (NZ) mine at Waihi, but waste, rock ore dumps and mine tailings were left behind. The tailings have significant amounts of zinc and cadmium. The mine tailings are stored behind a dam in a large pool-like area which has an oxidised, solid surface layer. The dam contains over 100,000 cubic metres of very acidic, sulphide-rich tailings. In 1997, there had been no natural plant recolonisation on the tailings for more than 20 years.
Environmental issues
Waikato University had identified the problem of heavy metals contaminating water by 1984. The tailings dam was considered unstable and was leaching various minerals, including heavy metals, into neighbouring waterways, adversely affecting the stream ecology. According to Environment Waikato, the Tui mine had three major environmental impacts:
The heavy metals lead and cadmium were leaching from the tailings dam into the Tunakohoia stream, which flows through land managed by the Department of Conservation and through the centre of the town of Te Aroha. Four years after the mine closed, the Te Aroha town water supply was found to be contaminated with heavy metals leaching from the tailings.
The separate Tui catchment was also contaminated with heavy metals from the tailings dam.
The abandoned mine tailings dam in the Tui catchment was at risk of collapsing in a moderate seismic event or an extreme weather event. That could have caused 90,000 cubic metres of mine waste to liquefy and to flow down the Tui stream near to Te Aroha.
Remediation
In 2007, the New Zealand Government announced that $9.88 million would be made available to clean up the site, with the work scheduled to be completed by 2010. In April 2010 it was reported that the estimated cost of the clean-up had risen to $17.4 million, and in 2011 a sum of $16.2 million was allocated to the cleanup, with most of the funding from central government. Remediation of the mine site was completed in 2013, at a total cost of $21.7 million.
See also
Mining in New Zealand
Environmental issues in New Zealand
References
Environmental issues in New Zealand
Te Aroha
Underground mines in New Zealand
Environment of Waikato
Tailings dams | Tui mine | Technology,Engineering | 619 |
924,193 | https://en.wikipedia.org/wiki/Volterra%27s%20function | In mathematics, Volterra's function, named for Vito Volterra, is a real-valued function V defined on the real line R with the following curious combination of properties:
V is differentiable everywhere
The derivative V ′ is bounded everywhere
The derivative is not Riemann-integrable.
Definition and construction
The function is defined by making use of the Smith–Volterra–Cantor set and an infinite number of "copies" of sections of the function defined by f(x) = x^2 sin(1/x) for x ≠ 0 and f(0) = 0.
The construction of V begins by determining the largest value of x in the interval [0, 1/8] for which f ′(x) = 0. Once this value (say x0) is determined, extend the function to the right with a constant value of f(x0) up to and including the point 1/8. Once this is done, a mirror image of the function can be created starting at the point 1/4 and extending downward towards 0. This function will be defined to be 0 outside of the interval [0, 1/4]. We then translate this function to the interval [3/8, 5/8] so that the resulting function, which we call f1, is nonzero only on the middle interval of the complement of the Smith–Volterra–Cantor set.
To construct f2, f ′ is then considered on the smaller interval [0,1/32], truncated at the last place the derivative is zero, extended, and mirrored the same way as before, and two translated copies of the resulting function are added to f1 to produce the function f2. Volterra's function then results by repeating this procedure for every interval removed in the construction of the Smith–Volterra–Cantor set; in other words, the function V is the limit of the sequence of functions f1, f2, ...
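The first step of the construction can be made concrete numerically. The sketch below (plain Python; the scan step and bisection depth are arbitrary choices) locates the largest zero x0 of f ′(x) = 2x sin(1/x) − cos(1/x) in [0, 1/8], the point at which the first copy of f is truncated and extended by the constant value f(x0):

 import math

 def f(x):
     # The base function of the construction: x^2 sin(1/x), with f(0) = 0.
     return x * x * math.sin(1.0 / x) if x != 0 else 0.0

 def fprime(x):
     # Its derivative away from zero.
     return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

 def largest_zero_of_fprime(b=0.125, step=1e-6):
     # Scan [0, b] downward from b; the first sign change of f' marks
     # the largest zero, which is then refined by bisection.
     x = b
     while x > step:
         if fprime(x - step) * fprime(x) < 0:
             lo, hi = x - step, x
             for _ in range(60):
                 mid = (lo + hi) / 2
                 if fprime(mid) * fprime(hi) <= 0:
                     lo = mid
                 else:
                     hi = mid
             return (lo + hi) / 2
         x -= step
     return None

 x0 = largest_zero_of_fprime()
 print(x0, f(x0))  # the construction continues to the right of x0
                   # with the constant value f(x0)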
Further properties
Volterra's function is differentiable everywhere just as f (as defined above) is. One can show that f ′(x) = 2x sin(1/x) - cos(1/x) for x ≠ 0, which means that in any neighborhood of zero, there are points where f ′ takes values 1 and −1. Thus there are points where V ′ takes values 1 and −1 in every neighborhood of each of the endpoints of intervals removed in the construction of the Smith–Volterra–Cantor set S. In fact, V ′ is discontinuous at every point of S, even though V itself is differentiable at every point of S, with derivative 0. However, V ′ is continuous on each interval removed in the construction of S, so the set of discontinuities of V ′ is equal to S.
Since the Smith–Volterra–Cantor set S has positive Lebesgue measure, this means that V ′ is discontinuous on a set of positive measure. By Lebesgue's criterion for Riemann integrability, V ′ is not Riemann integrable. If one were to repeat the construction of Volterra's function with the ordinary measure-0 Cantor set C in place of the "fat" (positive-measure) Cantor set S, one would obtain a function with many similar properties, but the derivative would then be discontinuous on the measure-0 set C instead of the positive-measure set S, and so the resulting function would have a Riemann integrable derivative.
See also
Fundamental theorem of calculus
References
External links
Wrestling with the Fundamental Theorem of Calculus: Volterra's function , talk by David Marius Bressoud
Volterra's example of a derivative that is not integrable (PPT), talk by David Marius Bressoud
Fractals
Measure theory
General topology | Volterra's function | Mathematics | 774 |
36,119,905 | https://en.wikipedia.org/wiki/SrcML | srcML is a document-oriented XML representation of source code. It was created in a collaborative effort between Michael L. Collard and Jonathan I. Maletic. The abbreviation, srcML, is short for Source Markup Language. srcML wraps source code (text) with information from the abstract syntax tree, or AST (tags), into a single XML document. All original text is preserved so that the original source code document can be recreated from the srcML markup; the only exception is the possibility of newline normalization.
The purpose of srcML is to provide full access to the source code at the lexical, documentary, structural, and syntactic levels. The format also provides easy support for fact-extraction and transformation. It is supported by the srcML toolkit maintained on the srcML website and has been shown to perform scalable, lightweight fact-extraction and transformation.
srcML toolkit
The srcML toolkit consists of the command-line program srcml, which translates source code to srcML when given a source-code file on the command line, and translates srcML back to source code when given a srcML archive. The program also supports direct queries and transformations of srcML archives using tools like XPath, XSLT, and RELAX NG. The srcML toolkit is actively maintained and currently supports C, C++, C#, and Java.
srcML format
The srcML format consists of all text from the original source code file plus XML tags. Specifically, the text is wrapped with srcML elements that indicate the syntactic structure of the code. In short, this explicitly identifies all syntactic structures in the code.
The tags used in srcML are listed out below along with what category they fall within.
srcML uses XML namespaces. Below is a list of the prefix used to denote each namespace, and the namespaces themselves.
Note: for a srcML archive, the entire project will be contained within a single root unit element, and each individual file will be contained as a unit element within the root unit element.
Single file conversion
The following shows how srcml can be used on single files.
The following example converts the C++ file main.cpp to the srcML file main.cpp.xml:
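A plausible invocation of the srcml client (option spellings can vary between toolkit releases, so treat this as a sketch rather than the definitive syntax) is:

 srcml main.cpp -o main.cpp.xml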
The following command will extract the source code from the file main.cpp.xml and place it into the C++ file main.cpp:
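Given srcML input, the client extracts the source code by default, so a plausible form (again, options may vary by release) is:

 srcml main.cpp.xml -o main.cpp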
Project conversion
The following shows how src2srcml and srcml2src can be used with an entire project:
The following example converts the project 'project' to the srcML file project.xml
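A plausible form, assuming the sources live in a directory named project, is:

 srcml project -o project.xml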
The following command will extract the source code files from the file project.xml and place it into the directory project:
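A plausible form uses the client's to-dir option (present in recent releases; older src2srcml/srcml2src builds spelled such options differently):

 srcml project.xml --to-dir project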
Program transformation with srcML
srcML allows the use of most, if not all, current XML APIs and tools to write transformations. It also allows XSLT to be applied directly, using the argument --xslt={name}.xsl on the srcml2src command. Using srcML's markup with XSLT lets the user apply program transformations to the XML structure (srcML) and then write the transformed XML back to its source code representation using the srcml2src tool. The application of srcML to program transformation is explained, in detail, by Collard et al.
The following command will run the XSLT program program.xsl on the srcML archive project.xml
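 srcml --xslt=program.xsl project.xml -o project-transformed.xml

(a sketch using the srcml client's XSLT option; the output file name is an arbitrary choice here, and older releases invoked the same transformation through srcml2src as described above)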
Fact extraction with srcML
In its simplest form, fact extraction using srcML leverages XPath to address parts of the srcML document and pull information about various entities or characteristics of the source code. It is not limited to this, however: any standard XML API may be used. The application of srcML to fact extraction is explained, in detail, by Kagdi et al.
cpp:directive, cpp:file, cpp:include, cpp:define, cpp:undef, cpp:line, cpp:if, cpp:ifdef, cpp:ifndef, cpp:else, cpp:elif, cpp:endif, cpp:then, cpp:pragma, cpp:error; literal, operator, modifier
An example to create a srcML archive from an entire software project.
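 srcml project -o project.xml

(assuming the sources live in a directory named project; this is the same archive-creation form shown under Project conversion above)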
The following command runs the XPath path on a srcML archive project.xml
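 srcml --xpath="//src:class" project.xml -o result.xml

(a sketch: --xpath is the documented query option of the srcml client, shown here with the class query from the Examples section below; each match is returned as a unit in the result document, and exact behaviour may vary by release)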
Work is being done on providing convenient extension functions.
Source code difference analysis with srcML
srcML brings several advantages to difference analysis of source code. One of these is the ability to query for differences between specific sections of a codebase as well as across versions of the same codebase. The application of srcML to difference analysis is explained, in detail, by Maletic et al.
Examples
As an example of how srcML is used, here is an XPath expression that could be used to find all classes in a source document:
//src:class
Another example might be finding all comments within functions:
/src:function//src:comment
Because srcML is based on XML, all XML tools can be used with it, which provides rich functionality.
See also
DMS Software Reengineering Toolkit
Program Transformation
TXL programming language
References
External links
XML markup languages
Program transformation
Software maintenance | SrcML | Engineering | 1,166 |
2,497,795 | https://en.wikipedia.org/wiki/Acoustic%20metric | In acoustics and fluid dynamics, an acoustic metric (also known as a sonic metric) is a metric that describes the signal-carrying properties of a given particulate medium.
(Generally, in mathematical physics, a metric describes the arrangement of relative distances within a surface or volume, usually measured by signals passing through the region – essentially describing the intrinsic geometry of the region.)
A simple fluid example
For simplicity, we will assume that the underlying background geometry is Euclidean, and that this space is filled with an isotropic inviscid fluid at zero temperature (e.g. a superfluid). This fluid is described by a density field ρ and a velocity field v. The speed of sound at any given point depends upon the compressibility, which in turn depends upon the density at that point (compressing an already dense region requires more work). This can be specified by the "speed of sound field" c. Now, the combination of both isotropy and Galilean covariance tells us that the permissible velocities u of the sound waves at a given point x have to satisfy ‖u − v‖ = c.
This restriction can also arise if we imagine that sound is like "light" moving through a spacetime described by an effective metric tensor called the acoustic metric.
The acoustic metric is, up to a conformal factor, the quadratic form

ds^2 = −c^2 dt^2 + ‖dx − v dt‖^2.

"Light" moving with a velocity u = dx/dt (not the 4-velocity) has to satisfy ds^2 = 0.

If

ds^2 = α ( −c^2 dt^2 + ‖dx − v dt‖^2 ),

where α is some conformal factor which is yet to be determined (see Weyl rescaling), we get the desired velocity restriction. α may be some function of the density, for example.
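Under the reconstruction above, a one-line LaTeX check that the null condition reproduces the velocity restriction:

\[
ds^2 = 0
\iff c^2\,dt^2 = \lVert d\vec{x} - \vec{v}\,dt \rVert^2
\iff \left\lVert \frac{d\vec{x}}{dt} - \vec{v} \right\rVert = c,
\]

so sound rays propagate at speed c relative to the local fluid flow, independently of the conformal factor α.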
Acoustic horizons
An acoustic metric can give rise to "acoustic horizons" (also known as "sonic horizons"), analogous to the event horizons in the spacetime metric of general relativity. However, unlike the spacetime metric, in which the invariant speed is the absolute upper limit on the propagation of all causal effects, the invariant speed in an acoustic metric is not the upper limit on propagation speeds. For example, the speed of sound is less than the speed of light. As a result, the horizons in acoustic metrics are not perfectly analogous to those associated with the spacetime metric. It is possible for certain physical effects to propagate back across an acoustic horizon. Such propagation is sometimes considered to be analogous to Hawking radiation, although the latter arises from quantum field effects in curved spacetime.
See also
Acoustics
Analog models of gravity
Gravastar
Hawking radiation
Quantum gravity
Superfluid vacuum theory
References
Considers information leakage through a transsonic horizon as an "analogue" of Hawking radiation in black hole problems.
Indirect radiation effects in the physics of acoustic horizon explored as a case of Hawking radiation.
Huge review article of "toy models" of gravitation, 2005, currently on v2, 152 pages, 435 references, alphabetical by author.
External links
Acoustic black holes on arxiv.org
Acoustics
Quantum gravity | Acoustic metric | Physics | 597 |
58,565,157 | https://en.wikipedia.org/wiki/Tagging%20system | In occupational health and safety, a tagging system is a system of recording and displaying the status of a machine or piece of equipment, enabling staff to see whether it is in working order. It is a product of industry-specific legislation that sets safety standards, involving inspection, record-keeping and repair, for particular pieces of equipment, and it establishes standardized criteria under which equipment and machinery (e.g. scaffolding, forklifts, cherry pickers) can be deemed 'safe to use'.
Characteristics
A tagging system consists of a holder and insert, and is specifically designed for certain industries, machinery and equipment. For instance, a scaffold tagging system is designed to be used at the entrances and exits of erect scaffolding. A ladder tag system is designed to be permanently fixed onto the inside edge of all ladders that are used within the workplace or site.
The majority of tagging system holders are manufactured to withstand extreme weather conditions and remain attached to its equipment. Inserts are produced from polypropylene (PP) which is heat resistant and durable under adverse weather conditions.
All tagging system holders should come with an inspection warning print on the inside of the holder with space to write a reference number (ref no.), which is based on the company's system. The empty holder should have text notifying the user that the inspection record is missing and needs to be replaced (in essence, that the equipment needs to be inspected again before being deemed 'safe to use'). This notice is hidden when an insert is placed into the holder.
Tagging system inserts commonly include: a ref no.; inspection dates (due and complete); inspector name, signature, and contact number; weight class; structure type; and advice and warnings that are specific to its intended industry. Inserts are often custom designed by the company and broadly are the same format and size but include branding and contact details.
Use
Tagging systems are mainly used in manufacturing and construction industries, but any workplace that uses machinery, tools and equipment should ensure that all these items are in full working order and that they have been inspected and will continue to be inspected for the safety of the user.
Tagging system types and their uses include:
Scaffold – for scaffolding structures
Tower – for scaffold towers and platforms
Racking – for storage frameworks, such as in a warehouse
Chemical – for chemical labelling
Ladder – for ladders
LOLER – for equipment that lifts or lowers
MEWP – for mobile elevating work platforms such as cherry pickers and scissor lifts
Forklift – for forklifts
Universal equipment – for tools and equipment that need regular inspection but do not fit in any other category
Isolation – for warning regarding an isolated power source
Flange – for warning regarding the current state of a flange
Mini-tags – used to highlight inspection due dates, portable appliance testing (PAT), vibration control, harness inspections and safe working load
Legislation
United Kingdom
There is currently no legislation in the UK requiring use of a tagging system at a work site or workplace, though it is a legal obligation to inspect all machinery and tools and keep a valid record of said inspections. Equipment should be deemed 'safe to use' before use.
Major workplace hazards and legislation required to be met include:
Working at height – Falling from a height accounts for nearly half of fatal workplace injuries, the majority of these occurring in construction. Common hazards include falls from ladders, scaffolding, or through a weak roof; objects falling from scaffolding; shock from contact with power lines followed by a fall; and collapse of scaffolding or racking. If a scaffolding or tower scaffolding platform has a drop of 2 metres or greater, then inspections are mandatory. All equipment and scaffolding should be regularly inspected, including after any changes have been made or after it has experienced severe weather conditions. Regulation requires the work to be "properly planned, supervised and carried out by competent people" and that the correct equipment is used for the job.
MEWP – Mobile elevating work platforms are covered by the Working at Height Regulation, which recommends a work restraint system. This would include a full body harness tethered to the MEWP basket.
Forklifts – Lifting Operations and Lifting Equipment Regulations 1998 (LOLER) states "that all equipment used for lifting is fit for purpose, appropriate for the task, suitably marked and, in many cases, subject to statutory periodic 'thorough examination'". Records must be kept of all thorough examinations and any defects found must be reported to both the person responsible for the equipment and the relevant enforcing authority.
Working with Chemicals – The Control of Substances Hazardous to Health Regulations 2002 (COSHH) requires that harmful substances are labelled, accompanied by a data sheet, and that there is correctly placed safety signage. It further requires that personal protective equipment (PPE) is sufficiently strong and durable.
Confined Space – Safety within confined spaces is covered by the Approved Code of Practice (ACOP) and The Confined Space Regulations 1997. These describe potential health hazards from substances or conditions, such as fumes or combustion.
Electrical – Electrical hazards fall under the Health and Safety at Work Act 1974 (HASWA), which requires hazard warning signs and up-to-date PAT testing. Employers should also provide posters and leaflets with information about electric shock hazards.
Slips and falls – HASWA also states that employers should ensure that preventative action is taken to prevent slips, trips and falls.
Machinery and tools – Equipment within the workplace is covered by the Provision and Use of Work Equipment Regulations 1998 (PUWER; 1999 in Northern Ireland), which requires employers to make all machinery safe for use, including precautions such as safety barriers and PPE (also covered by the Personal Protective Equipment at Work Regulations 1992). Machinery should be inspected at regular intervals to ensure it remains in a 'safe to use' state.
The advice given above is often characterized as 'best practice' and may not always be legally binding.
See also
Construction site safety
References
Occupational safety and health
Maintenance | Tagging system | Engineering | 1,231 |
41,626,548 | https://en.wikipedia.org/wiki/Molybdenum%28IV%29%20fluoride | Molybdenum(IV) fluoride is a binary compound of molybdenum and fluorine with the chemical formula MoF4.
References
Molybdenum(IV) compounds
Fluorides
Molybdenum halides | Molybdenum(IV) fluoride | Chemistry | 55 |
42,699,051 | https://en.wikipedia.org/wiki/QIVICON | Qivicon is an alliance of companies from different industries that was founded in 2011 by Deutsche Telekom. These companies collaborate on a cross-vendor wireless home automation solution that has been available in the German market since the fall of 2013. It includes products in the areas of energy, security, and comfort. It connects and combines controllable devices made by different manufacturers, such as motion detectors, smoke detectors, water detectors, wireless adapters for power outlets, door and window contacts, temperature and humidity sensors, wireless switches, carbon monoxide sensors, thermostats, cameras, household appliances (e.g. washing machines, dryers, coffee machines), weather stations, sound systems, and lighting controls.
Qivicon has stated that it would like to take the "Smart Home" further forward around the world. The alliance uses Smart Home optimized wireless protocols to make solutions easy to install in any home without needing to lay cables. The technical platform is international and open for companies of all sizes and in all industries.
Members
The alliance currently consists of over 43 companies in different industries such as energy, electrical and household appliances, security and telecommunications. Qivicon partners include Deutsche Telekom, E wie Einfach, eQ-3, Miele, Samsung and Philips. In March 2018, Deutsche Telekom announced that it had integrated the Home Connect platform, which works with Bosch and Siemens connected devices, into Qivicon to enable greater functionality between the two; for example, connected Bosch and Siemens appliances can be controlled directly via the Home Connect app. Deutsche Telekom also announced a number of new compatible devices that broaden the Qivicon portfolio, such as the Nest Protect smoke and CO alarm.
EnBW
eQ-3
Miele
Samsung
Deutsche Telekom
Assa Abloy
bitronvideo
Centralite
Cosmote
digitalSTROM
D-Link
DOM Technologies
Entega
E WIE EINFACH
eww Gruppe
Gigaset
Google
Huawei
Hitch
Home Connect
Junkers
kpn
Logitech
Nest
Netatmo
Osram
PaX
Philips
Plugwise
RheinEnergie
Sengled
Smappee
Sonos
Stadtwerke Bonn
VW
History
The Qivicon platform has been around in the German market since the fall of 2013.
The platform's technical control unit, its home base, is connected to the Internet via a broadband connection in the house or apartment. In August 2016, Qivicon launched a new generation of the home base focusing on international markets. The range of different models will keep up with the diverse range of wireless protocols found throughout the international market. The models all have an identical outward appearance. But they differ in terms of their pre-installed protocols. For example, the model designed for the German market, and several other markets, already includes the protocols HomeMatic, ZigBee Pro and the inclusion of HomeMatic IP and DECT ULE has also been completed. Another model includes the ZigBee Pro and Z-Wave radio modules. All versions of the new home base can be connected to home DSL routers either by cable, wirelessly, via Wi-Fi or via Deutsche Telekom's Speedport Smart router.
The system can be expanded to include other wireless standards by means of USB sticks, for which there are four slots in the first-generation home base and two in the second generation. Qivicon partners' devices can be controlled and monitored via various partner apps for smartphones, tablets or PCs. Since November 2017, Qivicon has been compatible with Amazon's Alexa; users can control lights, blinds or alarm systems by voice via Amazon Echo or Google Home.
In March 2017, Deutsche Telekom launched a White Label Smart Home portfolio that includes platform, gateways, applications, compatible devices and services. The portfolio is designed to help telecommunications service providers, utility providers, hardware manufacturers and other enterprises create and offer smart home services.
Deutsche Telekom extended its international footprint within the smart home sector by partnering with Cosmote, the largest mobile operator in Greece and part of the OTE group, as well as Hitch in Norway, adding Greece and Norway to Qivicon's current footprint of Germany, Slovakia, the Netherlands, Austria, and Italy.
AV-Test, an IT security test institution, rates Qivicon as “secure”. It found that the Smart Home platform used encryption for communication and provided protection from unauthorized access.
Awards
Qivicon has won repeated awards from the international management consulting company Frost & Sullivan. In 2016, Frost & Sullivan presented Qivicon with the European Connected Home New Product Innovation Award. In 2014, the smart home platform received the European Visionary Innovation Leadership Award in recognition of what the consultancy saw as the most innovative smart home solution of the year.
References
Bibliography
Ohland, Günther. Smart-Living. Books on Demand, Norderstedt 2013. .
External links
Qivicon home page
Home automation
Building engineering organizations
Technology consortia
Environmental technology
Building automation | QIVICON | Technology,Engineering | 1,028 |
75,952,518 | https://en.wikipedia.org/wiki/NanoACE | NanoACE is a technology demonstration CubeSat by Tyvak Nano-Satellite Systems to validate the company's communications, navigation, guidance, and software technology. NanoACE was launched on July 14, 2017, aboard a Soyuz-2.1a with a Fregat-M upper stage, along with the Russian Earth-imaging satellite Kanopus-V-IK and 71 other CubeSats.
The satellite has two infrared and two visible-light cameras. It can maneuver via its cold gas propulsion system.
References
CubeSats
Satellites
Satellites in low Earth orbit
Spaceflight
2017 in spaceflight | NanoACE | Astronomy | 111 |
5,291,387 | https://en.wikipedia.org/wiki/Koszul%E2%80%93Tate%20resolution | In mathematics, a Koszul–Tate resolution or Koszul–Tate complex of the quotient ring R/M is a projective resolution of it as an R-module which also has a structure of a dg-algebra over R, where R is a commutative ring and M ⊂ R is an ideal. They were introduced by John Tate as a generalization of the Koszul resolution for the quotient R/(x1, ..., xn) of R by a regular sequence of elements. The Koszul–Tate resolution has been used to calculate BRST cohomology. The differential of this complex is called the Koszul–Tate derivation or Koszul–Tate differential.
Construction
First suppose for simplicity that all rings contain the rational numbers Q. Assume we have a graded supercommutative ring X, so that
ab = (−1)^(deg(a)deg(b)) ba,
with a differential d, with
d(ab) = d(a)b + (−1)^(deg(a)) a d(b),
and x ∈ X is a homogeneous cycle (dx = 0). Then we can form a new ring
Y = X[T]
of polynomials in a variable T, where the differential is extended to T by
dT = x.

(The polynomial ring is understood in the super sense, so if T has odd degree then T^2 = 0.) The result of adding the element T is to kill off the element of the homology of X represented by x, and Y is still a supercommutative ring with derivation.
A Koszul–Tate resolution of R/M can be constructed as follows. We start with the commutative ring R (graded so that all elements have degree 0). Then add new variables as above of degree 1 to kill off all elements of the ideal M in the homology. Then keep on adding more and more new variables (possibly an infinite number) to kill off all homology of positive degree. We end up with a supercommutative graded ring with derivation d whose homology is just R/M.
If we are not working over a field of characteristic 0, the construction above still works, but it is usually neater to use the following variation of it. Instead of using polynomial rings X[T], one can use a "polynomial ring with divided powers" X⟨T⟩, which has a basis of elements

T^(i) for i ≥ 0,

where

T^(i)T^(j) = ((i + j)!/(i!j!)) T^(i+j).

Over a field of characteristic 0,

T^(i) is just T^i/i!.
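A quick check, in LaTeX and valid only in characteristic 0, that the divided-power product rule matches the identification T^(i) = T^i/i!:

\[
T^{(i)}\,T^{(j)} = \frac{T^i}{i!}\cdot\frac{T^j}{j!}
= \frac{(i+j)!}{i!\,j!}\cdot\frac{T^{i+j}}{(i+j)!}
= \frac{(i+j)!}{i!\,j!}\,T^{(i+j)},
\]

which is exactly the defining relation above.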
See also
Lie algebra cohomology
References
M. Henneaux and C. Teitelboim, Quantization of Gauge Systems, Princeton University Press, 1992
Homological algebra
Commutative algebra | Koszul–Tate resolution | Mathematics | 604 |
23,001,042 | https://en.wikipedia.org/wiki/Robert%20John%20Jenkins%20Jr. | Robert John Jenkins Junior (born 1966 in Akron, Ohio), also known as Bob Jenkins, is an American computer professional and the author of several fast pseudorandom number generators, such as ISAAC, and of hash functions such as the Jenkins hash.
References
1966 births
Living people
People from Akron, Ohio | Robert John Jenkins Jr. | Technology | 56 |
44,814,319 | https://en.wikipedia.org/wiki/List%20of%20largest%20machines | This is a list of the world's largest machines in history, both static and movable.
Building structure
Ground vehicles
Mining vehicles
Engineering and transport vehicles
Military vehicles
Air vehicles
Lighter-than-air vehicles
Heavier-than-air vehicles
Sea vehicles
Industrial and cargo vessels
Passenger vessels
Military vessels
Space vehicles
Space stations
Launch vehicles
See also
List of largest passenger vehicles
List of large aircraft
References
Machines
Largest
Technology-related lists
Transport-related lists of superlatives | List of largest machines | Physics,Technology,Engineering | 90 |
61,349,807 | https://en.wikipedia.org/wiki/Mollisquama%20mississippiensis | Mollisquama mississippiensis or the American pocket shark is a species of pocket shark native to the Gulf of Mexico. It is the second species of pocket shark to be described.
Discovery
The shark was first discovered in 2010 by scientists from Tulane University who were conducting a study on sperm whales. In 2013, the National Oceanic and Atmospheric Administration identified it as a pocket shark, the first to be found in its region. The only previously known pocket shark specimen, belonging to a different species, was caught off the coast of Chile in 1979; differences in size, vertebrae and the presence of numerous light-producing photophores were used to distinguish the two species.
Description
The head is bulbous, resembling that of a whale. The shark is very small; the single known specimen measures about 14 cm (5.5 in). Near the gills are two "pockets" that secrete a luminous fluid, which may enable the shark to hunt. The body is grey, with the fins being darker, and the areas around the gills are cream colored. There are clusters of light-producing photophores around the body.
References
Fish described in 2019
Fish of the Gulf of Mexico
Dalatiidae
Species known from a single specimen | Mollisquama mississippiensis | Biology | 231 |
752,942 | https://en.wikipedia.org/wiki/Ty%20Treadway | Tyrus Richard Treadway (born February 11, 1967) is an American game show host, actor, and talk show host. Treadway co-hosted Soap Talk with Lisa Rinna.
Biography
Ty was born and raised in Trenton, New Jersey, to Richard and Mary Lou Treadway. Ty was the youngest of six siblings.
After high school, Ty received a scholarship for soccer and attended a couple of colleges before graduating with a degree in accounting. He went to work for the New Jersey Office of the State Auditor as an auditor and computer systems engineer but took part in bodybuilding competitions on the side. He eventually won the title of Mr. Natural Pennsylvania. He often had a "Ty Training" segment on his later talk show, Soap Talk.
However, Ty found his job boring and depressing so he decided to start modeling and acting. He appeared in several magazines, theater productions, and commercials.
In May 2000, Ty landed the role of Dr. Colin MacIver on the ABC soap opera One Life to Live. His character was killed off after a year, but he returned as his nicer twin brother, Troy MacIver, (who eventually went insane) and appeared off and on from 2001 to 2004.
In June 2002, Ty pulled double duty as soap star and talk show host. He began co-hosting Soapnet's Soap Talk with Lisa Rinna.
That year, he and his girlfriend Monica got engaged at the Eiffel Tower. They married in 2003, and their first child, a daughter named Samantha Raine, was born in February 2005. Their second child, a son named Ryder, was born March 7, 2006.
In March 2006, the series American Idol Extra debuted, featuring Treadway as host. Treadway interviewed various American Idol personalities, including producers, contestants, vocal coaches, and celebrity guests.
On April 30, 2007, Ty was chosen to host the game show Merv Griffin's Crosswords, created by Merv Griffin.
In 2009, he and his family moved to Frisco, Texas.
In 2010, he was offered a recurring role as Dr. Ben Walters on Days of Our Lives that lasted from September to February 2011.
In 2012, he resumed his role as Troy MacIver for the final episodes of One Life to Live. He also had a short-lived stint as a co-host on "Good Morning Texas," which airs throughout the Dallas–Fort Worth metroplex on WFAA. He has since worked as a real estate agent.
Soap Talk
In 2002, the American cable network SOAPnet launched a soap opera talk show called Soap Talk, and Treadway became a co-host along with Lisa Rinna. He and Rinna received a Daytime Emmy nomination for outstanding talk show host, alongside such shows as Live with Regis and Kelly, The View and Dr. Phil. Ultimately, they lost to Wayne Brady. Further nominations followed in 2005 and 2006.
In 2004, Treadway and Rinna hosted a TV special that pre-empted The View for one day during the 2004 Summer Olympics called SOAPnet Reveals ABC Soap Secrets. This special gave scoops about what was going to happen in the near future on All My Children, General Hospital, and One Life to Live. ABC commissioned this special in an attempt to keep its daytime Nielsen ratings up during the Olympic season. Statistically soap operas lose ten percent of their viewership in Olympic years.
Soap Talk was canceled in 2006 but returned periodically for specials.
References
External links
Soapnet
1967 births
American male bodybuilders
American game show hosts
Male models from New Jersey
American male soap opera actors
Computer systems engineers
Living people
American television talk show hosts
Male actors from Trenton, New Jersey
The College of New Jersey alumni
20th-century American sportsmen | Ty Treadway | Technology | 754 |
363,246 | https://en.wikipedia.org/wiki/Raceme | A raceme or racemoid is an unbranched, indeterminate type of inflorescence bearing flowers on short floral stalks along the shoots that bear the flowers. The oldest flowers grow close to the base and new flowers are produced as the shoot grows in height, with no predetermined growth limit. Examples of racemes occur on mustard (genus Brassica), radish (genus Raphanus), and orchid (genus Phalaenopsis) plants.
Definition
A raceme or racemoid is an unbranched, indeterminate type of inflorescence bearing pedicellate flowers (flowers having short floral stalks called pedicels) along its axis. In botany, an axis means a shoot, in this case one bearing the flowers. In indeterminate inflorescences such as racemes, the oldest flowers grow close to the base and new flowers are produced as the shoot grows in height, with no predetermined growth limit. A plant that flowers on a showy raceme may have this reflected in its scientific name, e.g. the species Actaea racemosa. A compound raceme, also called a panicle, has a branching main axis. Examples of racemes occur on mustard (genus Brassica) and radish (genus Raphanus) plants.
Spike
A spike is an unbranched, indeterminate inflorescence, similar to a raceme, but bearing sessile flowers (sessile flowers are attached directly, without stalks). Examples occur on Malabar nut (Justicia adhatoda) and chaff flowers (genus Achyranthes). A spikelet can refer to a small spike, although it primarily refers to the ultimate flower cluster unit in grasses (family Poaceae) and sedges (family Cyperaceae), in which case the stalk supporting the cluster becomes the pedicel. A true spikelet comprises one or more florets enclosed by two glumes (sterile bracts), with flowers and glumes arranged in two opposite rows along the spikelet. Examples occur on rice (species Oryza sativa) and wheat (genus Triticum), both grasses.
Catkin
An ament or catkin is very similar to a spike or raceme "but with subtending bracts so conspicuous as to conceal the flowers until pollination, as in the pussy–willow, alder, [and] birch...". These are sometimes called amentaceous plants.
Spadix
A spadix is a form of spike in which the florets are densely crowded along a fleshy axis and enclosed by one or more large, brightly–colored bracts called spathes. Usually the female flowers grow at the base, and male flowers grow above. They are a characteristic of the family Araceae, for example jack–in–the–pulpit (species Arisaema triphyllum) and wild calla (genus Calla).
Examples
Etymology
From classical Latin, a racemus is a cluster of grapes.
See also
Inflorescence
Glossary of botanical terms
References
Flowers
Plant morphology
26,263,536 | https://en.wikipedia.org/wiki/Generalized%20context-free%20grammar | Generalized context-free grammar (GCFG) is a grammar formalism that expands on context-free grammars by adding potentially non-context-free composition functions to rewrite rules. Head grammar (and its weak equivalents) is an instance of such a GCFG which is known to be especially adept at handling a wide variety of non-CF properties of natural language.
Description
A GCFG consists of two components: a set of composition functions that combine string tuples, and a set of rewrite rules. The composition functions all have the form f(x1, ..., xm) = γ, where each xi is a string tuple and γ is either a single string tuple or some use of a (potentially different) composition function which reduces to a string tuple. Rewrite rules look like X → f(α, β, ...), where α, β, ... are string tuples or non-terminal symbols.
The rewrite semantics of GCFGs is fairly straightforward. An occurrence of a non-terminal symbol is rewritten using rewrite rules as in a context-free grammar, eventually yielding just compositions (composition functions applied to string tuples or other compositions). The composition functions are then applied, successively reducing the tuples to a single tuple.
Example
A simple translation of a context-free grammar into a GCFG can be performed in the following fashion. Given the grammar in (1), which generates the palindrome language { w w^R : w ∈ {a, b}* }, where w^R is the string reverse of w, we can define the composition function conc as in (2a) and the rewrite rules as in (2b).

(1) S → ε | aSa | bSb

(2a) conc(⟨x⟩, ⟨y⟩, ⟨z⟩) = ⟨xyz⟩

(2b) S → conc(⟨ε⟩, ⟨ε⟩, ⟨ε⟩)
S → conc(⟨a⟩, S, ⟨a⟩)
S → conc(⟨b⟩, S, ⟨b⟩)

The CF production S → aSa of (1) and the corresponding GCFG production S → conc(⟨a⟩, S, ⟨a⟩) of (2b) build the same string, with the concatenation made explicit by the composition function.
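The rewriting just described is small enough to execute directly. Here is a minimal sketch in Python (all names invented for this illustration) that encodes conc and the three rules of (2a)–(2b) and enumerates the string tuples derivable from S in a bounded number of steps:

 # GCFG sketch for the palindrome grammar above.
 # A string tuple is a Python tuple of strings; conc reduces three
 # 1-tuples to a single 1-tuple by concatenation.
 def conc(x, y, z):
     return (x[0] + y[0] + z[0],)

 # Each rule rewrites S as conc applied to three arguments, each of
 # which is either a string tuple or the nonterminal "S".
 RULES = [
     (("",), ("",), ("",)),    # S -> conc(<e>, <e>, <e>)
     (("a",), "S", ("a",)),    # S -> conc(<a>, S, <a>)
     (("b",), "S", ("b",)),    # S -> conc(<b>, S, <b>)
 ]

 def derive(symbol, depth):
     # Yield every string tuple derivable from `symbol` within `depth` steps.
     if symbol != "S":
         yield symbol          # already a string tuple
         return
     if depth == 0:
         return
     for a, b, c in RULES:
         for x in derive(a, depth - 1):
             for y in derive(b, depth - 1):
                 for z in derive(c, depth - 1):
                     yield conc(x, y, z)

 print(sorted({t[0] for t in derive("S", 3)}))
 # ['', 'aa', 'aaaa', 'abba', 'baab', 'bb', 'bbbb'] -- even-length
 # palindromes over {a, b}, as the grammar promises.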
Linear Context-free Rewriting Systems (LCFRSs)
Weir (1988) describes two properties of composition functions, linearity and regularity. A function is linear if and only if each variable appears at most once on either side of the =, making f(x) = ⟨x, ε⟩ linear but not f(x) = ⟨x, x⟩. A function is regular if the left-hand side and right-hand side have exactly the same variables, making f(x, y) = ⟨y, x⟩ regular but not f(x, y) = ⟨x⟩ or f(x) = ⟨x, y⟩.
A grammar in which all composition functions are both linear and regular is called a Linear Context-free Rewriting System (LCFRS). LCFRS is a proper subclass of the GCFGs, i.e. it has strictly less computational power than the GCFGs as a whole.
On the other hand, LCFRSs are strictly more expressive than linear-indexed grammars and their weakly equivalent variant tree adjoining grammars (TAGs). Head grammar is another example of an LCFRS that is strictly less powerful than the class of LCFRSs as a whole.
LCFRSs are weakly equivalent to (set-local) multicomponent TAGs (MCTAGs), to multiple context-free grammars (MCFGs) and to minimalist grammars (MGs). The languages generated by LCFRSs (and their weak equivalents) can be parsed in polynomial time.
See also
Range concatenation grammar
References
Formal languages
Grammar frameworks | Generalized context-free grammar | Mathematics | 608 |
68,193,807 | https://en.wikipedia.org/wiki/Mustankallio%20water%20tower | Mustankallio water tower lies in the Kiveriö district of Lahti, Finland, and stands tall. Completed by a local company in 1963, it includes two water reservoirs, a penthouse meeting facility complete with sauna, and a viewing platform. The design, which features pre-stressed concrete elements and asbestos cement cladding, was a departure from the steel water-tower structures commonly built in the region. When commissioned, its original name was the Metelinmäki Water Tower. It has been described as crocus-like in appearance and praised for its elegance.
Design and construction
Water towers in the region, many of which were built in the 19th century, had previously been built from steel; they have been described as "ugly and uninspiring" and as "marring the skyline". This structure, designed by Ing. Büro Paavo Simula and Company, was intended to be more aesthetically pleasing, to minimise its effect on the visual environment. It stands on Mustankallio hill.
The tower was completed in 1963 by local construction firm B&K. It is constructed of pre-stressed concrete elements built locally, under license from the German Dyckerhoff & Widmann (Dywidag) company. The main body contains two drinking water reservoirs with a combined capacity of , protected from freezing by a lining of mineral wool. The exterior is clad with asbestos cement panels.
The structure stands tall; the top of the tower is above the nearby Lake Vesijärvi and above sea level. The uppermost portion of the tower is around in diameter. The structure has received praise for its "striking appearance" and "elegant lines". The design has been described as crocus-like.
Use
Lahti Aqua, a company owned by the city of Lahti, owns the tower and uses it to store water for distribution to the town. At night, water is pumped into the tank from lower-level storage; during the day, water drains from the tank under gravity and hydrostatic pressure for distribution to Mustankallio, Kiveriö, Tonttila and Pyhättömänmäki.
The tower contains a viewing platform with good views across the town, as well as a meeting room for 30 to 40 people, a sauna that can accommodate 15, a lounge, a kitchen, a toilet and showers. The facilities are available for rent between 8 a.m. and midnight, and small lifts provide access to them.
In 2015, the facade exterior panels were replaced.
It forms part of the motif on the medal of the 2021 Ironman 70.3 Finland triathlon.
See also
Kuwait Water Towers, for an organic "mushroom farm" look by Swedish designers and engineers
List of tallest towers
References
External links
Asbestos
Buildings and structures in Päijät-Häme
Towers completed in 1963
Water towers in Finland | Mustankallio water tower | Environmental_science | 585 |
34,205,577 | https://en.wikipedia.org/wiki/Tapinella%20panuoides | Tapinella panuoides, also known as oyster rollrim, and as fan pax from its former binomial Paxillus panuoides, is a fungus species in the genus Tapinella.
Atromentin is a phenolic compound. The first enzymes in its biosynthesis have been characterised in T. panuoides.
Despite its pleasant taste, the species is poisonous. In North America it can be confused with the poisonous western jack-o'-lantern, edible chanterelle mushrooms, false chanterelles (Hygrophoropsis aurantiaca), and species of Crepidotus or Phyllotopsis.
It grows on wood or in "lignin-rich humus," has little or no stalk where it emerges from the substrate, and the gills appear to be crimped, forked, or crosshatched close to the base.
References
External links
Boletales
Fungi described in 1931
Poisonous fungi
Fungus species | Tapinella panuoides | Biology,Environmental_science | 197 |
18,704,926 | https://en.wikipedia.org/wiki/Megaphone%20%28molecule%29 | Megaphone is a cytotoxic neolignan obtained from Aniba megaphylla, a flowering plant of the laurel family, which gave the compound its name. Megaphone has also been prepared synthetically.
Studies carried out in the 1960s demonstrated that an alcoholic extract of the ground root of Aniba megaphylla inhibited, in vitro, the growth of cells derived from human carcinoma of the nasopharynx. In 1978, the active components of the extract were isolated using silica gel chromatography, characterized and named megaphone (C22H30O6, a solid), megaphone acetate (C24H32O7, an oily liquid) and megaphyllone acetate (C23H28O7, an oily liquid). For comparison, megaphone acetate was also produced synthetically by reacting megaphone with acetic anhydride at 50 °C for 6 hours. Stirring an alcoholic solution of megaphone (or megaphone acetate) with added palladium catalyst in a hydrogen atmosphere, followed by evaporation of the solvent, yields tetrahydromegaphone (or tetrahydromegaphone acetate) as an oil. Millimeter-sized crystals of megaphone can be grown from an ether-chloroform solution. They have monoclinic symmetry with space group P21, lattice constants a = 0.8757 nm, b = 1.1942 nm and c = 1.0177 nm, and two formula units per unit cell. Megaphone and megaphone acetate molecules are chiral, and the reported extraction and synthesis procedures yielded their racemic mixtures. Megaphone acetate was also isolated from the root of Endlicheria dysodantha, another plant of the laurel family, using chromatography of an ethanolic solution. It showed inhibitory activity against cells of crown gall tumor and human lung, breast and colon carcinomas.
References
Lignans
O-methylated natural phenols
Total synthesis
Cyclohexenes
Plant toxins | Megaphone (molecule) | Chemistry | 419 |
44,621,880 | https://en.wikipedia.org/wiki/New%20York%20City%20agar | The NYC (New York City) medium or GC (Neisseria gonorrhoeae) medium agar is a type of selective medium used for isolating gonococci and N. meningitidis.
Composition
The agar base is composed of:
Final pH (at 25 °C): 7.4 ± 0.2
Background and principles
NYC Agar Base was originally developed by Fauer, Weisburd and Wilson at the New York City Department of Health for selective isolation of pathogenic Neisseria species from clinical specimens. It consists of primarily a peptone-corn starch agar-base buffered with phosphates and supplemented with horse plasma, horse haemoglobin, dextrose, yeast autolysate and antibiotics. This medium is superior to other media generally employed for the isolation of Neisseria species. The transparent nature of the medium helps in studying the colonial types.
Proteose peptone, horse plasma and haemoglobin provide nutrients for the growth of N. gonorrhoeae and N. meningitidis. Phosphate buffers the medium. The selective supplement added contains the antibiotics vancomycin, colistin, nystatin and trimethoprim, to suppress the accompanying flora. Vancomycin is inhibitory for gram-positive bacteria. Colistin inhibits gram-negative bacteria, including Pseudomonas species, while Proteus is inhibited by trimethoprim. The combination of trimethoprim and colistin acts synergistically against gram-negative bacilli. Starch neutralizes the toxic metabolites produced by Neisseria. The yeast autolysate supplement fulfils the requirements needed to enhance Neisseria growth. Yeast contains oxaloacetic acid, which is metabolized by gonococci to produce sufficient carbon dioxide for the growth of capnophilic gonococci. Also, the presence of yeast autolysate reduces the lag phase of Neisseria growth, thus enhancing both the size and number of colonies. The specimen can be streaked directly on the medium to obtain maximum isolation.
Procedure
Streak the specimen as soon as possible after it is received in the laboratory. If material is being cultured directly from a swab, proceed as follows:
Roll swab directly on the medium in a large “Z” to provide adequate exposure of swab to the medium for transfer of organisms.
Cross-streak the “Z” pattern with a sterile wire loop, preferably in the clinic. If not done previously, cross-streaking should be done in the laboratory.
Place the culture as soon as possible in an aerobic environment enriched with carbon dioxide.
Incubate at 35 ± 2 °C and examine after overnight incubation and again after approximately 48 hours.
Subculture for identification of N. gonorrhoeae should be made within 18–24 hours. If shipped after incubation, colonies should be subcultured before performing biochemical identification tests in order to ensure that adequate viability is achieved.
Expected results
Typical colonial morphology is as follows:
N. gonorrhoeae may appear as small (0.5–1.0 mm) grayish white to colorless mucoid colonies.
N. meningitidis appears as large colorless to bluish-gray mucoid colonies.
Colonies may be selected for Gram-staining, subculturing or other diagnostic procedures.
References
Microbiological media
Cell culture media | New York City agar | Biology | 713 |
16,111,341 | https://en.wikipedia.org/wiki/Edinburgh%20Phrenological%20Society | The Edinburgh Phrenological Society was founded in 1820 by George Combe, an Edinburgh lawyer, with his physician brother Andrew Combe. The Edinburgh Society was the first and foremost phrenology grouping in Great Britain; more than forty phrenological societies followed in other parts of the British Isles. The Society's influence was greatest over its first two decades and declined in the 1840s; the final meeting was recorded in 1870.
The central concept of phrenology is that the brain is the organ of the mind and that human behaviour can be usefully understood in broadly neuropsychological rather than philosophical or religious terms. Phrenologists discounted supernatural explanations and stressed the modularity of mind. The Edinburgh phrenologists also acted as midwives to evolutionary theory and inspired a renewed interest in psychiatric disorder and its moral treatment. Phrenology claimed to be scientific but is now regarded as a pseudoscience as its formal procedures did not conform to the usual standards of scientific method.
Edinburgh phrenologists included George and Andrew Combe; asylum doctor and reformer William A.F. Browne, father of James Crichton-Browne; Robert Chambers, author of the 1844 proto-Darwinian book Vestiges of the Natural History of Creation; William Ballantyne Hodgson, economist and pioneer of women's education; astronomer John Pringle Nichol; and botanist and evolutionary thinker Hewett Cottrell Watson. Charles Darwin, a medical student in Edinburgh in 1825–7, took part in phrenological discussions at the Plinian Society and returned to Edinburgh in 1838 when formulating his concepts concerning natural selection.
Background
Phrenology emerged from the views of the medical doctor and scientific researcher Franz Joseph Gall in 18th-century Vienna. Gall suggested that facets of the mind corresponded to regions of the brain, and that it was possible to determine character traits by examining the shape of a person's skull. This "craniological" aspect was greatly extended by his one-time disciple, Johann Spurzheim, who coined the term phrenology and saw it as a means of advancing society by social reform (improving the material conditions of human life).
In 1815, the Edinburgh Review published a hostile article by anatomist John Gordon, who called phrenology a "mixture of gross errors" and "extravagant absurdities". In response, Spurzheim went to Edinburgh to take part in public debates and to perform brain dissections in public. Although he was received politely by the scientific and medical community there, many were troubled by the philosophical materialism implicit in phrenology. George Combe, a lawyer who had previously been skeptical, became a convert to phrenology after listening to Spurzheim's commentary as he dissected a human brain.
Founding and function
The Edinburgh Phrenological Society was founded on 22 February 1820, by the Combe brothers with the support of the Evangelical minister David Welsh. The Society grew rapidly; in 1826, it had 120 members, an estimated one third of whom had a medical background.
The Society acquired large numbers of phrenological artefacts, such as marked porcelain heads indicating the location of cerebral organs, and endocranial casts of individuals with unusual personalities. Their museum was located on Chambers Street.
Members published articles, gave lectures, and defended phrenology. Critics included philosopher Sir William Hamilton and the editor of the Edinburgh Review, Francis Jeffrey, Lord Jeffrey. The hostility of other critics, including Alexander Monro tertius, anatomy professor at the University of Edinburgh Medical School, actually added to the glamour of phrenological concepts. Some anti-religionists, including the anatomist Robert Knox and the evolutionist Robert Edmond Grant, while sympathetic to phrenology's materialist implications, rejected it as unscientific and did not embrace its speculative and reformist aspects.
In 1823, Andrew Combe addressed the Royal Medical Society in a debate, arguing that phrenology explained the intellectual and moral abilities of mankind. Both sides claimed victory after the lengthy debate, but the Medical Society refused to publish an account. This prompted the Edinburgh Phrenological Society to establish its own journal in 1824: The Phrenological Journal and Miscellany, later renamed Phrenological Journal and Magazine of Moral Science.
In the mid-1820s, a split emerged between the Christian phrenologists and Combe's closer associates. Matters came to a head when Combe and his supporters passed a motion banning the discussion of theology in the Society, effectively silencing their critics. In response, David Welsh and other evangelical members left the Society.
In December 1826, the atheistic phrenologist William A.F. Browne caused a sensation at the university's Plinian Society with an attack on the recently republished theories of Charles Bell concerning the expression of the human emotions. Bell believed that human anatomy uniquely allowed the expression of the human moral self while Browne argued that there were no absolute distinctions between human and animal anatomy. Charles Darwin, then a 17-year-old student at the university, was there to listen. On 27 March 1827, Browne advanced phrenological theories concerning the human mind in terms of the Lamarckist evolution of the brain. This attracted the opposition of almost all members of the Plinian Society and, again, Darwin observed the ensuing outrage. In his private notebooks, including the M Notebook written ten years later, Darwin commented sympathetically on the views of the phrenologists.
George Combe published The Constitution of Man in 1828. After a slow start, it became an international bestseller in the 19th century, with around 350,000 copies sold. Almost a century later, psychiatrist Sir James Crichton-Browne said of the book: "The Constitution of Man on its first appearance was received in Edinburgh with an odium theologicum, analogous to that afterwards stirred up by the Vestiges of Creation and On The Origin of Species. It was denounced as an attack on faith and morals.... read today, it must be regarded as really rather more orthodox in its teaching than some of the lucubrations of the Dean of St Paul's and the Bishop of Durham".
Phrenologists from the Society applied their methods to the Burke and Hare murders in Edinburgh. Over the course of ten months in 1828, Burke and Hare murdered sixteen people and sold the bodies for dissection in the private anatomy schools. Burke was executed on 28 January 1829, while Hare turned King's evidence; Burke was publicly dissected by Professor Monro the next day, and the phrenologists were permitted to examine his skull. Face masks of both men – a death-mask for Burke and a life-mask for Hare – form part of the Edinburgh phrenology collection.
Scotswoman Agnes Sillars Hamilton made a living as a "practical phrenologist", travelling throughout Britain and Ireland. Her son, Archibald Sillars Hamilton, left for Australia in 1854, developed a successful phrenology practice there, and published an account of Ned Kelly's skull.
Society co-founder and president Andrew Combe had two successful publications in the early 1830s: Observations on Mental Derangement in 1831 and Physiology applied to Health and Education in 1834. The latter, especially, sold well in Great Britain and the United States, with numerous editions and reprintings.
The Edinburgh Phrenological Society received a financial boost by the death of a wealthy supporter in 1832. William Ramsay Henderson left a large bequest to the Edinburgh Society to promote phrenology as it saw fit. The Henderson Trust enabled the society to publish an inexpensive edition of The Constitution of Man, which went on to become one of the best-selling books of the 19th century. However, despite the widespread interest in phrenology in the 1820s and 1830s, the Phrenological Journal always struggled to make a profit.
Influences from the society
W.A.F. Browne: In 1832–1834, Browne published a paper in The Phrenological Journal in three serialised episodes On Morbid Manifestations of the Organ of Language, as connected with Insanity, relating mental disorder to a disturbance in the neurological organization of language. Browne went on to a distinguished career as an asylum doctor and his internationally influential 1837 publication What Asylums Were, Are and Ought To Be was dedicated to Andrew Combe. In 1866, after his twenty years of leadership at The Crichton asylum in Dumfries, Browne was elected President of the Medico-Psychological Association. In his later years, Browne returned to the relationships between psychosis, brain injury and language in his 1872 paper Impairment of Language, The Result of Cerebral Disease, published in the West Riding Lunatic Asylum Medical Reports, edited by his son James Crichton-Browne.
Robert Chambers: Although not formally admitted to the Society, Chambers occasionally acted as George Combe's publisher and became an enthusiast for phrenological thinking. In 1844, Chambers anonymously published Vestiges of the Natural History of Creation, written as he recovered from depression at his holiday home in St Andrews. Chambers' wife, Anne Kirkwood, transcribed the manuscript (dictated by her husband) for the publishers so that they would not recognise its origins. In a strange parallel, Prince Albert read it aloud to Queen Victoria in the summer of 1845. It became an international bestseller and a powerful public influence, situated midway between Combe's The Constitution of Man (1828) and Darwin's On the Origin of Species in 1859.
Charles Darwin: Darwin attended the University of Edinburgh Medical School and, as an active member of Plinian Society, observed the 1826-1827 controversies with phrenologist William A.F. Browne. In 1838, some eleven years after his hurried departure, Darwin revisited Edinburgh and his undergraduate haunts, recording his psychological speculations in the M Notebook and teasing out the details of his theory of natural selection. At this time, Darwin was preparing for marriage with his religiously minded cousin Emma Wedgwood, and was in some emotional turmoil: on 21 September, after his return to England, he recorded a vivid and disturbing dream in which he seemed to be involved in an execution at which the corpse came to life and joked about having died as a hero. Darwin committed his "gigantic blunder" concerning the parallel roads of Glen Roy while on this Scottish trip, suggesting an element of mental distraction. He published On the Origin of Species some twenty years later, in 1859; the book was translated into many languages, and became a staple scientific text and a key fixture of modern scientific culture.
William Ballantyne Hodgson: Hodgson joined the phrenology movement as a student at Edinburgh University and later supported himself as a professional lecturer on literature, education, and phrenology. He became an educational reformer, a pioneering proponent of women's education and – in 1871 – the first Professor of Political Economy (and Mercantile Law) at Edinburgh University. In later life, Hodgson lived at Bonaly Tower outside Edinburgh, and was elected President of the Educational Institute of Scotland.
Thomas Laycock: Laycock was one of George Combe's "influential disciples". He was a pioneering neurophysiologist. In 1855, Laycock was appointed to the Chair of Medicine in Edinburgh University. In 1860, Laycock published his Mind and Brain, an extended essay on the neurological foundations of psychological life. Laycock was friendly with asylum reformer William A.F. Browne and was an important influence on Browne's son, Sir James Crichton-Browne.
John Pringle Nichol: Nichol was originally educated and licensed as a preacher, but the impact of phrenological thinking pushed him into education. He became a celebrated lecturer and Regius Professor of Astronomy in Glasgow University, and his 1837 book The Architecture of the Heavens was a classic of popular science. In the 1840s, Nichol became addicted to prescription opiates, and he recorded his successful hydropathic rehabilitation in his autobiographical correspondence Memorials from Ben Rhydding.
Hewett Cottrell Watson: In 1836, Watson published a paper in The Phrenological Journal entitled What Is The Use of the Double Brain? in which he speculated on the differential development of the two human cerebral hemispheres. This theme of cerebral asymmetry was picked up rather casually by the London society physician Sir Henry Holland in 1840, and then much more extensively by the eccentric Brighton medical practitioner Arthur Ladbroke Wigan in his 1844 treatise A New View of Insanity: On the Duality of Mind. It did not achieve scientific status until Paul Broca, encouraged by the French phrenologist/physician Jean-Baptiste Bouillaud, published his research into the speech centres of the brain in 1861. In 1868, Broca presented his findings at the Norwich meeting of the British Association for the Advancement of Science. In 1889, Henry Maudsley published a searching review of this topic entitled The Double Brain in the philosophical journal Mind. Like Robert Chambers, Watson later turned his energies to the question of the transmutation of species, and, having bought the Phrenological Journal with the proceeds of a large inheritance, appointed himself as its editor in 1837. In the 1850s, Watson conducted an extensive correspondence with Charles Darwin concerning the geographical distribution of British plant species, and Darwin made generous acknowledgement of Watson's scientific assistance in On The Origin of Species (second edition). Watson was unusual amongst phrenologists in explicitly disavowing phrenological ideas in later life.
Decline
Interest in phrenology declined in Edinburgh in the 1840s. Some of the phrenologists' concerns drifted into the related fields of anthropometry, psychiatry and criminology, and also into degeneration theory as set out by Bénédict Morel, Arthur de Gobineau and Cesare Lombroso. In the 1870s, the eminent social psychologist Gustave Le Bon (1841–1931) invented a cephalometer which facilitated the measurement of cranial capacity and variation. In 1885, the German medical scientist Rudolf Virchow launched a large-scale craniometric investigation of the supposed racial stereotypes with decisively negative results for the proponents of racial science. Worldwide, interest in phrenology remained high throughout the nineteenth century, with George Combe's The Constitution of Man being much in demand. Combe devoted his later years to international travel, lecturing on phrenology. He was preparing the ninth edition of The Constitution of Man when he died while receiving hydrotherapy treatment at Moor Park, Farnham.
The last recorded meeting of the Society took place in 1870. The Society's museum closed in 1886.
Legacy of the Society
Together with mesmerism, phrenology exerted an extraordinary influence on the Victorian literary imagination in the later 19th century, especially in the fin-de-siècle aesthetic, an influence comparable to the later cultural impact of spiritualism and psychoanalysis. Examples of phrenology's literary legacy feature in the works of Sir Arthur Conan Doyle, George du Maurier, Bram Stoker, Robert Louis Stevenson and H. G. Wells.
On 29 February 1924, Sir James Crichton-Browne (the son of William A.F. Browne) delivered the Ramsay Henderson Bequest Lecture entitled The Story of the Brain in which he recorded a generous appreciation of the role of the Edinburgh phrenologists in the later development of neurology and neuropsychiatry. Crichton-Browne did not remark, however, on his father's having joined the Society a century earlier, almost to the day.
The Henderson Trust was wound up in 2012. Many of the society's phrenological artefacts survive today, having passed to the University of Edinburgh's Anatomical Museum under the direction of Professor Matthew Kaufman, and some are now on display at the Scottish National Portrait Gallery.
The activities of the Edinburgh phrenologists have enjoyed an unusual afterlife in the history and sociology of scientific knowledge (science studies), as an example of a discarded cultural production.
References
External links
Anatomical Museum at the University of Edinburgh
Organizations established in 1820
1870 disestablishments in Scotland
Phrenology
Organisations based in Edinburgh
History of Edinburgh
History of psychology
History of neuroscience
Clubs and societies in Edinburgh
History of mental health in the United Kingdom
Former mental health organisations in the United Kingdom
Charles Darwin
1820 establishments in Scotland
Organizations disestablished in 1870 | Edinburgh Phrenological Society | Biology | 3,301 |
6,199,604 | https://en.wikipedia.org/wiki/Hydrogen%20technologies | Hydrogen technologies are technologies that relate to the production and use of hydrogen as a part hydrogen economy. Hydrogen technologies are applicable for many uses.
Some hydrogen technologies are carbon neutral and could have a role in preventing climate change and a possible future hydrogen economy. Hydrogen is a chemical widely used in various applications including ammonia production, oil refining and energy. The most common methods for producing hydrogen on an industrial scale are steam reforming, oil reforming, coal gasification and water electrolysis.
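For reference, the dominant industrial route, steam methane reforming, can be summarized with standard textbook chemistry (the thermochemical values below are standard figures, not drawn from this article's sources): methane is first reformed with steam, and the resulting carbon monoxide is then shifted with more steam to yield additional hydrogen.

\[ \mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \qquad \Delta H^\circ \approx +206\ \mathrm{kJ\,mol^{-1}} \]
\[ \mathrm{CO + H_2O \rightarrow CO_2 + H_2} \qquad \Delta H^\circ \approx -41\ \mathrm{kJ\,mol^{-1}} \]

The net result is up to four moles of hydrogen per mole of methane, along with one mole of carbon dioxide, which is why reformed hydrogen is not carbon neutral without carbon capture.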
Hydrogen is not a primary energy source, because it does not occur naturally as a fuel. It is, however, widely regarded as an ideal energy storage medium, due to the ease with which electricity can convert water into hydrogen and oxygen through electrolysis, and with which that hydrogen can be converted back to electrical power using a fuel cell or hydrogen turbine. There are many different types of fuel cells and electrolysis cells.
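As an illustrative sketch of the storage round trip (using standard electrochemical constants; the efficiency remark is a general observation, not a figure from this article): splitting water electrically requires at least the Gibbs free energy of the reaction, which fixes the ideal cell voltage.

\[ \mathrm{H_2O(l) \rightarrow H_2(g) + \tfrac{1}{2}O_2(g)} \qquad \Delta G^\circ \approx 237\ \mathrm{kJ\ per\ mol\ H_2} \]
\[ E^\circ = \frac{\Delta G^\circ}{nF} = \frac{237{,}000\ \mathrm{J\,mol^{-1}}}{2 \times 96{,}485\ \mathrm{C\,mol^{-1}}} \approx 1.23\ \mathrm{V} \]

The same 1.23 V is the ideal open-circuit voltage of a hydrogen fuel cell running the reaction in reverse; practical electrolysers operate above this voltage and practical fuel cells below it, so an electricity-to-hydrogen-to-electricity round trip returns only a fraction of the input energy.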
The potential environmental impact depends primarily on the methods used to generate hydrogen as a fuel.
Fuel cells
Alkaline fuel cell (AFC)
Direct borohydride fuel cell (DBFC)
Direct carbon fuel cell (DCFC)
Direct ethanol fuel cell (DEFC)
Direct methanol fuel cell (DMFC)
Electro-galvanic fuel cell (EGFC)
Flow battery (RFC)
Formic acid fuel cell (FAFC)
Metal hydride fuel cell (MHFC)
Microbial fuel cell (MFC)
Molten carbonate fuel cell (MCFC)
Phosphoric acid fuel cell (PAFC)
Photoelectrochemical cell (PEC)
Proton-exchange membrane fuel cell (PEMFC)
Protonic ceramic fuel cell (PCFC)
Regenerative fuel cell (RFC)
Solid oxide fuel cell (SOFC)
Hydrogen infrastructure
Hydrogen plant (Steam reformer)
Hydrogen pipeline transport
Hydrogen pressure letdown station
Compressed hydrogen tube trailer
Liquid hydrogen tank truck
Hydrogen piping
Hydrogen station
HCNG
Homefueler
Home Energy Station
Hydrogen highway
Zero Regio
Hydrogen compressor
Electrochemical hydrogen compressor
Guided rotor compressor
Hydride compressor
Ionic liquid piston compressor
Linear compressor
Hydrogen turboexpander-generator
Hydrogen leak testing
Hydrogen sensor
Hydrogen purifier
Hydrogen analyzer
Hydrogen valve
Hydrogen storage
Compressed hydrogen
Cryo-adsorption
Liquid hydrogen
Slush hydrogen
Underground hydrogen storage
Hydrogen tank
Power to gas
Hydrogen vehicles
Historic hydrogen filled airships
Hindenburg (airship)
Zeppelin
Hydrogen powered cars
Audi:
2004 – Audi A2H2-hybrid vehicle
2009 – Audi Q5-FCEV
BMW:
2002 – BMW 750hl
2010 – BMW 1 Series Fuel-cell hybrid electric
Chrysler:
2000 – Jeep Commander II-hybrid vehicle-Commercial
2001 – Chrysler Natrium-hybrid vehicle
2003 – Jeep Treo-Fuel cell
Daimler:
1994 – Mercedes-Benz NECAR 1
1996 – Mercedes-Benz NECAR 2
1997 – Mercedes-Benz NECAR 3
1999 – Mercedes-Benz NECAR 4
2000 – Mercedes-Benz NECAR 5
2002 – Mercedes-Benz F-Cell based on the Mercedes-Benz A-Class
2005 – Mercedes-Benz F600 Hygenius
2009 – Mercedes-Benz F-CELL Roadster
2009 – Mercedes-Benz F-Cell based on the Mercedes-Benz B-Class
Fiat:
2001 – Fiat Seicento Elettra H2 Fuel Cell-hybrid vehicle
2003 – Fiat Seicento Hydrogen-hybrid vehicle
2005 – Fiat Panda Hydrogen-Fuel cell
2008 – Fiat Phyllis-Fuel cell
2008 – Fiat Panda-Fiat Panda HyTRAN
Ford:
2000 – Ford Focus FCV-Fuel cell. Note however that Ford Motor Company has dropped its plans to develop hydrogen cars, stating that "The next major step in Ford’s plan is to increase over time the volume of electrified vehicles".
2006 – F-250 Super Chief a Tri-Flex engine concept pickup.
Forze Hydrogen-Electric Racing Team Delft
2016 – Forze-Fuel cell
General Motors:
1966 – GM Electrovan-Fuel cell
2001 – HydroGen3-Fuel cell
2002 – GM HyWire-Fuel cell
2005 – GM Sequel-hybrid vehicle
2006 – Chevrolet Equinox Fuel Cell
2007 – HydroGen4
Honda:
2002 – Honda FCX – hybrid vehicle
2007 – Honda FCX Clarity – Hydrogen Fuel cell – Production model
Hyundai:
2001 – Hyundai Santa Fe FCEV
2010 – Hyundai ix35 FCEV
Lotus Engineering:
2010 – Black Cab-Fuel cell
Kia:
2009 – Kia Borrego FCEV-Fuel cell
Mazda:
1991 – Mazda HR-X Hydrogen Wankel Rotary.
1993 – Mazda HR-X2 Hydrogen Wankel Rotary.
1993 – Mazda MX-5 Miata Hydrogen Wankel Rotary.
1995 – Mazda Capella Cargo, first public street test of the hydrogen Wankel Rotary engine.
1997 – Mazda Demio FC-EV Methanol-Reducing Fuel Cell
2001 – Mazda Premacy FC-EV – First public street test of the Methanol-Reducing Fuel Cell vehicle in Japan
2003 – Mazda RX-8 Hydrogen RE Hydrogen \ Gasoline hybrid Wankel Rotary.
2007 – Mazda Premacy Hydrogen RE Hybrid
2009 – Mazda 5 Hydrogen RE Hybrid
Mitsubishi:
2004 – Mitsubishi FCV
Morgan:
2005 – Morgan LIFEcar-hybrid vehicle-concept car
Nissan:
2002 – Nissan X-Trail FCHV-hybrid vehicle. Note, however, that in 2009, Nissan announced that it was cancelling its hydrogen car R&D efforts.
Peugeot:
2004 – Peugeot Quark
2006 – Peugeot 207 Epure
2008 – H2Origin-Fuel cell
Renault:
Scenic ZEV H2 is a hydrogen-electric MPV co-developed with Nissan.
Riversimple:
2009 – Riversimple Urban Car
Ronn Motor Company:
2008 – Ronn Motor Scorpion
Toyota:
2002 – Toyota FCHV-hybrid vehicle
2003 – Toyota Fine-S-concept car
2003 – Toyota Fine-N-concept car
2005 – Toyota Fine-T-concept car
2005 – Toyota Fine-X-concept car
2008 – Toyota FCHV-adv-preproduction vehicle (expected public release 2015)
Volkswagen:
2000 – VW Bora Hy-motion-Fuel cell
2002 – VW Bora Hy-power-Fuel cell
2004 – VW Touran Hy-motion-Fuel cell
2007 – VW space up! blue
Hydrogen powered planes
Hyfish
Smartfish
Tupolev Tu-155-hydrogen-powered version of Tu-154
Antares DLR-H2 – the first aircraft capable of performing a complete flight on fuel-cell power only
Possible future aircraft using precooled jet engines include Reaction Engines Skylon and the Reaction Engines A2.
Hydrogen powered rockets
The following rockets were/are partially or completely propelled by hydrogen fuel:
Saturn V (upper stage)
Space Shuttle
Ariane 5
Delta IV
Atlas V (Centaur upper stage)
CE-20 (cryogenic rocket engine for upper stage of GSLV-III)
Related technologies
Environmental
Anaerobic digestion
Dark fermentation
Photofermentation
Syngas
Nuclear
Generation IV reactor
Hydrogen bomb
Organic chemistry
Dehydrogenation
Hydrogenation
Hydrogenolysis
Miscellaneous
Hydrogen odorant
Atomic hydrogen welding
Hydrogen-cooled turbogenerator
Oxyhydrogen flame
Low hydrogen annealing
Hydrogen decrepitation process (HD)
Hydrogenation disproportionation desorption and recombination (HDDR)
Standard hydrogen electrode
Reversible hydrogen electrode
Dynamic hydrogen electrode
Palladium-Hydrogen electrode
Cathodic protection
Iron-hydrogen resistor
Hydrogen pinch
Hofmann voltameter
Hydrox
Hydreliox
Joule-Thomson effect
Hydrogen ion
Bussard ramjet
Döbereiner's lamp
Nickel hydrogen battery
Gas-absorption refrigerator
Electroosmotic pump
Sodium silicide
Temperature-programmed reduction
Hydrogen damage
Hydrogen embrittlement
See also
National Center for Hydrogen Technology
Methane pyrolysis
Laura Maersk (2023) - first methanol container ship
References
Hydrogen economy
Industrial gases
Hydrogenation | Hydrogen technologies | Chemistry | 1,578 |
893,362 | https://en.wikipedia.org/wiki/Galactic%20corona | The terms galactic corona and gaseous corona have been used in the first decade of the 21st century to describe a hot, ionised, gaseous component in the galactic halo of the Milky Way. A similar body of very hot and tenuous gas in the halo of any spiral galaxy may also be described by these terms.
Current hypothetical scenario
The hypothetical source of the galactic halo of coronal gas may be the cumulative output of many “galactic fountains” in the galactic disc ejecting hot gas.
The hypothesis is that a single supernova and then its supernova remnant both produce hot ionized gas that supplies an individual “galactic fountain”. The expelled material forms a giant bubble of high-pressure, low-density, hot gas in the denser, cooler gas and dust of the galactic disc. At least some of those bubbles extend high or low enough, vertically, to pierce through the denser disc, and form “chimneys” which exhaust the hot gas into the halo, analogous to a terrestrial geyser spewing out water and steam that is much hotter and much less dense than the surrounding earth, heated by a source hidden deep below.
As the expelled gas in the galactic corona cools, it falls back into the galactic disc, guided by the disc's own gravitational attraction, enriching the gas and dust in the disc with the heavy elements (loosely termed “metals” by astronomers) which were produced in supernova precursors, and during supernova explosions.
Current research
Galactic coronas have been and are currently being studied extensively, in the hope of gaining a further understanding of galaxy formation.
However, considering how galaxies differ in shape and size, no particular theory has been able to adequately explain how all galactic coronas are formed and maintained.
See also
References
External links
Corona
Corona | Galactic corona | Astronomy | 362 |
21,591,347 | https://en.wikipedia.org/wiki/Gratification%20disorder | Gratification disorder is a rare and often misdiagnosed form of masturbatory behavior, or the behavior of stimulating of one's own genitals, seen predominantly in infants and toddlers. Most pediatricians agree that masturbation is both normal and common behavior in children at some point in their childhood. The behavior is labeled a disorder when the child forms a habit, and misdiagnoses of the behavior can lead to unnecessary and invasive testing for other severe health conditions, including multiple neurological or motor disorders.
Signs and symptoms
The behavior seen in gratification disorder closely mimics that of a seizure, though its exact appearance varies. It often involves flushing (reddening of the facial skin), sweating, grunting, and erratic movements of the body. The child remains conscious during episodes of infantile masturbation and can be distracted from the behavior, which can help rule out the suspicion of a serious condition. Additional symptoms can include: rhythmic rubbing of the genitals against objects or hands; a fixated or dazed gaze; straightening or crossing of the legs; and a pleasant feeling after the episode.
Duration and frequency of the episodes vary widely, from as little as 5–10 minutes to reported episodes lasting 30–40 minutes. Some episodes occur weekly, while other reports document episodes occurring multiple times throughout a single day. In general, parents of children affected by gratification disorder noted an increase in both duration and frequency over time until an intervention or remedy, such as behavioral therapy, was introduced.
Because this behavior can be worrisome, the possibility of sexual abuse of the child should be thoroughly examined by parents and/or health care professionals to help determine that abuse is not the likely reason for the behavior. This masturbatory behavior tends to diminish with age, and as of 2023, there were no clinical trials exploring medical approaches or defined treatment options for gratification disorder.
Diagnosis
Gratification disorder may go unrecognized by both families and clinicians, possibly due to the absence of genital manipulation or physical touching of the genitals. Because the disorder and its characteristics are largely misunderstood, children whose condition is not correctly recognized are put at higher risk of more invasive testing. Failure to diagnose correctly can lead to an increased risk of unnecessary testing or the use of potentially harmful medications, such as those used for seizures or other neurological disorders.
Differential diagnosis
Little research has been published regarding this early childhood condition, but it is likely misdiagnosed when the child's bodily movements are of concern. The behavior can look different from case to case and does not always involve direct stimulation of the genitals, so the movements exhibited by the child can also resemble conditions such as epilepsy, a neurological condition that causes unprovoked and recurrent seizures; paroxysmal dystonia, a neurological disorder causing episodes of spastic movements that cause muscles to contract involuntarily; dyskinesia, a disorder involving the involuntary contraction of muscles; and gastrointestinal disorders, which would be health issues relating to the stomach or GI tract.
A strategy for differentiating gratification disorder, or infantile masturbation, from other movement disorders or seizure disorders is via direct observation. Usually in cases of gratification disorder, the physical and laboratory examination results are normal. Consciousness is also not altered in gratification disorder, which can be another key element in the differential diagnosis. Children with gratification disorder are likely responsive and should stop an episode upon distraction, which is not something that would be seen in movement or seizure disorders. Several studies stress the importance of direct observation and identifying features of gratification disorder to prevent unnecessary invasive testing and diagnoses.
Epidemiology
Most instances of gratification disorder occur between the ages of 3 months and 3 years, but it can sometimes resurface in older adolescence.
References
Child sexuality
Pediatrics
Sexology | Gratification disorder | Biology | 820 |
27,687,935 | https://en.wikipedia.org/wiki/Environmental%20issues%20with%20coral%20reefs | Human activities have substantial impact on coral reefs, contributing to their worldwide decline. Damaging activities encompass coral mining, pollution (both organic and non-organic), overfishing, blast fishing, as well as the excavation of canals and access points to islands and bays. Additional threats comprise disease, destructive fishing practices, and the warming of oceans. Furthermore, the ocean's function as a carbon dioxide sink, alterations in the atmosphere, ultraviolet light, ocean acidification, viral infections, the repercussions of dust storms transporting agents to distant reefs, pollutants, and algal blooms represent some of the factors exerting influence on coral reefs. Importantly, the jeopardy faced by coral reefs extends far beyond coastal regions. The ramifications of climate change, notably global warming, induce an elevation in ocean temperatures that triggers coral bleaching—a potentially lethal phenomenon for coral ecosystems.
Scientists estimate that over the next 20 years, about 70 to 90% of all coral reefs will disappear, the primary causes being warming ocean waters, ocean acidity, and pollution. In 2008, a worldwide study estimated that 19% of the existing area of coral reefs had already been lost. Only 46% of the world's reefs could currently be regarded as in good health, and about 60% of the world's reefs may be at risk due to destructive, human-related activities. The threat to the health of reefs is particularly strong in Southeast Asia, where 80% of reefs are endangered. By the 2030s, 90% of reefs are expected to be at risk from both human activities and climate change; by 2050, it is predicted that all coral reefs will be in danger.
Issues
Competition
In the Caribbean Sea and tropical Pacific Ocean, direct contact between coral and common seaweeds causes bleaching and death of coral tissue via allelopathic competition. The lipid-soluble extracts of seaweeds that harmed coral tissues also produced rapid bleaching. At these sites, bleaching and mortality were limited to areas of direct contact with seaweed or their extracts. The seaweed then expanded to occupy the dead coral's habitat. However, as of 2009, only 4% of coral reefs worldwide had more than 50% algal coverage, which means that there is no recent global trend towards algal dominance over coral reefs.
Competitive seaweed and other algae thrive in nutrient-rich waters in the absence of sufficient herbivorous predators. Herbivores include the urchin Diadema antillarum and fish such as parrotfish, surgeonfishes, tangs and unicornfishes.
Predation
Overfishing, particularly selective overfishing, can unbalance coral ecosystems by encouraging the excessive growth of coral predators. Predators that eat living coral, such as the crown-of-thorns starfish, are called corallivores. Coral reefs are built from stony coral, which evolved with large amounts of the wax cetyl palmitate in their tissues. Most predators find this wax indigestible. The crown-of-thorns starfish is a large (up to one meter) starfish protected by long, venomous spines. Its enzyme system dissolves the wax in stony corals and allows the starfish to feed on the living animal. Starfish face predators of their own, such as the giant triton sea snail. However, the giant triton is valued for its shell and has been overfished. As a result, crown-of-thorns starfish populations can periodically grow unchecked, devastating reefs.
Fishing practices
Although some marine aquarium fish species can reproduce in aquaria (such as Pomacentridae), most (95%) are collected from coral reefs. Intense harvesting, especially in maritime Southeast Asia (including Indonesia and the Philippines), damages the reefs. This is aggravated by destructive fishing practices, such as cyanide and blast fishing. Most (80–90%) aquarium fish from the Philippines are captured with sodium cyanide. This toxic chemical is dissolved in sea water and released into areas where fish shelter. It narcotizes the fish, which are then easily captured. However, most fish collected with cyanide die a few months later from liver damage. Moreover, many non-marketable specimens die in the process. It is estimated that 4,000 or more Filipino fish collectors have used large cumulative quantities of cyanide on Philippine reefs, about 150,000 kg per year. A major catalyst of cyanide fishing is poverty within fishing communities. In countries like the Philippines that regularly employ cyanide, more than thirty percent of the population lives below the poverty line.
Dynamite fishing is another destructive method for gathering fish. Sticks of dynamite, grenades, or homemade explosives are detonated in the water. This method of fishing kills the fish within the main blast area, along with many unwanted reef animals. The blast also kills the corals in the area, eliminating the reef's structure, destroying habitat for the remaining fish and other animals important for reef health. Muro-ami is the destructive practice of covering reefs with nets and dropping large stones onto the reef to produce a flight response among the fish. The stones break and kill the coral. Muro-ami was generally outlawed in the 1980s.
Fishing gear damages reefs via direct physical contact with the reef structure and substrate. It is typically made of synthetic materials that do not deteriorate in the ocean, causing a lasting effect on the ecosystem and reefs. Gill nets, fish traps, and anchors break branching coral and cause coral death through entanglement. When fishers drop lines near coral reefs, the lines entangle the coral; the fisher cuts the line and abandons it, leaving it attached to the reef. The discarded lines abrade coral polyps and upper tissue layers. Corals are able to recover from small lesions, but larger and recurrent damage complicates recovery.
Bottom-dragging gear such as beach seines can damage corals by abrasion and fracturing. A beach seine is a long, small-meshed net with a weighted line that holds the net down while it is dragged across the substrate; it is one of the most destructive types of fishing gear on Kenya's reefs.
Bottom trawling in deep oceans destroys cold-water and deep-sea corals. Historically, industrial fishers avoided coral because their nets would get caught on the reefs. In the 1980s, "rock-hopper" trawls attached large tires and rollers to allow the nets to roll over rough surfaces. Fifty-five percent of Alaskan cold-water coral that was damaged by one pass from a bottom trawl had not recovered a year later. Northeast Atlantic reefs bear long trawl scars. In Southern Australia, 90 percent of the surfaces on coral seamounts are now bare rock. Even in the Great Barrier Reef World Heritage Area, seafloor trawling for prawns and scallops is causing localized extinction of some coral species.
Marine pollution
Reefs in close proximity to human populations are subject to poor water quality from land- and marine-based sources. In 2006, studies suggested that approximately 80 percent of ocean pollution originates from activities on land. Pollution arrives from land via runoff, the wind and "injection" (deliberate introduction, e.g., drainpipes).
Runoff brings with it sediment from erosion and land-clearing, nutrients and pesticides from agriculture, wastewater, industrial effluent and miscellaneous material such as petroleum residue and trash that storms wash away. Some pollutants consume oxygen and lead to eutrophication, killing coral and other reef inhabitants.
An increasing fraction of the global population lives in coastal areas. Without appropriate precautions, development (e.g., buildings and paved roads) increases the fraction of rainfall and other water sources that enter the ocean as runoff by decreasing the land's ability to absorb it.
Pollution can introduce pathogens. For example, Aspergillus sydowii has been associated with a disease in sea fans, and Serratia marcescens has been linked to the coral disease white pox.
Reefs near human populations can be faced with local stresses, including poor water quality from land-based sources of pollution. Copper, a common industrial pollutant, has been shown to interfere with the life history and development of coral polyps.
In addition to runoff, wind blows material into the ocean. This material may be local or from other regions. For example, dust from the Sahara moves to the Caribbean and Florida. Dust also blows from the Gobi and Taklamakan deserts across Korea, Japan, and the Northern Pacific to the Hawaiian Islands. Since 1970, dust deposits have grown due to drought periods in Africa. Dust transport to the Caribbean and Florida varies from year to year with greater flux during positive phases of the North Atlantic Oscillation. The USGS links dust events to reduced health of coral reefs across the Caribbean and Florida, primarily since the 1970s. Dust from the 1883 eruption of Krakatoa in Indonesia appeared in the annular bands of the reef-building coral Montastraea annularis from the Florida Reef Tract.
Sediment smothers corals and interferes with their ability to feed and reproduce. Pesticides can interfere with coral reproduction and growth. Some studies present evidence that chemicals in sunscreens contribute to coral bleaching by lowering the resistance of zooxanthellae to viruses, though these studies had significant methodological flaws and did not attempt to replicate the complex environment found in coral reefs.
Nutrient pollution
Nutrient pollution, particularly of nitrogen and phosphorus, can cause eutrophication, upsetting the balance of the reef by enhancing algal growth and crowding out corals. This nutrient-rich water can enable blooms of fleshy algae and phytoplankton to thrive off coasts, and these blooms can create hypoxic conditions by using all available oxygen. Biologically available nitrogen (nitrate plus ammonia) needs to be below 1.0 micromole per liter (less than 0.014 parts per million of nitrogen), and biologically available phosphorus (orthophosphate plus dissolved organic phosphorus) needs to be below 0.1 micromole per liter (less than 0.003 parts per million of phosphorus). In addition, concentrations of chlorophyll (in the microscopic plants called phytoplankton) need to be below 0.5 parts per billion. Both kinds of bloom also obscure sunlight, killing both fish and coral. High nitrate levels are specifically toxic to corals, while phosphates slow down skeletal growth.
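The stated thresholds can be cross-checked with a simple unit conversion (a back-of-the-envelope check using standard molar masses, not a calculation from the cited studies, and treating 1 mg per liter as 1 ppm):

\[ 1.0\ \mu\mathrm{mol\,L^{-1}\ N} \times 14.0\ \mathrm{g\,mol^{-1}} = 14\ \mu\mathrm{g\,L^{-1}} \approx 0.014\ \mathrm{ppm\ nitrogen} \]
\[ 0.1\ \mu\mathrm{mol\,L^{-1}\ P} \times 31.0\ \mathrm{g\,mol^{-1}} \approx 3.1\ \mu\mathrm{g\,L^{-1}} \approx 0.003\ \mathrm{ppm\ phosphorus} \]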
Excess nutrients can intensify existing disease, potentially doubling the spread of Aspergillosis, a fungal infection that kills soft corals such as sea fans, and increasing by fifty percent the spread of yellow band disease, a bacterial infection that kills reef-building hard corals.
Air pollution
A study released in April 2013 showed that air pollution can also stunt the growth of coral reefs; researchers from Australia, Panama and the UK used coral records (between 1880 and 2000) from the western Caribbean to show the threat of factors such as coal burning and volcanic eruptions. The researchers state that the study marks the first time that the relationship between air pollution and coral reefs has been elucidated, while former chair of the Great Barrier Reef Marine Park Authority Ian McPhail referred to the report as "fascinating" upon the public release of its findings.
Marine debris
Marine debris is defined as any persistent solid material that is manufactured or processed and directly or indirectly, intentionally or unintentionally, disposed of or abandoned into the marine environment or the Great Lakes. Debris may arrive directly from a ship or indirectly when washed out to sea via rivers, streams, and storm drains. Human-made items tend to be the most harmful, such as plastics (from bags to balloons, hard hats to fishing line), glass, metal, rubber (millions of waste tires), and even entire vessels.
Plastic debris can kill and harm multiple reef species. Corals and coral reefs are at particular risk because they are immobile: when water quality or other habitat conditions change, corals cannot move to a different place; they must adapt or they will not survive. Plastics fall into two classes, macroplastics and microplastics, and both can cause damage in a number of ways. Macroplastics such as derelict (abandoned) fishing nets and other gear, often called "ghost nets", can still catch fish and other marine life, killing those organisms and breaking or damaging reefs. Microplastics are plastic fragments typically 5 mm or less in length, and they have primarily been found to damage coral reefs through ingestion by corals. (A video frame sequence of the capture and ingestion of a microplastic particle by a polyp of Astroides calycularis appears in Savinelli et al. (2020).) Researchers have found that ingestion of microplastics harms corals, and subsequently coral reefs, because the ingested fragments reduce coral food intake and fitness, since corals expend considerable time and energy handling the plastic particles. Even remote reef systems suffer the effects of marine debris, especially plastic pollution; reefs in the Northwestern Hawaiian Islands are particularly prone to the accumulation of marine debris because of their central location in the North Pacific Gyre.

Because little to no research exists on specific medicinal ways to help corals recover from plastic exposure, the best solution is to prevent plastics from entering the marine environment at all. This can be accomplished in a number of ways, some of which are already being enacted: measures to ban microplastics from products such as cosmetics and toothpaste, and measures demanding that products containing microplastics be labeled as such, so as to reduce their consumption. In addition, newer and better detection methods for microplastics are needed at wastewater treatment facilities to prevent these particles from entering the marine environment and damaging marine life, especially coral reefs. Many people have recognized the problem of plastic pollution and other marine debris and have taken steps to mitigate it; for example, from 2000 to 2006, NOAA and partners removed over 500 tons of marine debris from the reefs in the Northwestern Hawaiian Islands.
Cigarette butts also damage aquatic life. To reduce cigarette butt litter, proposed solutions include banning cigarette filters and implementing a deposit system for e-cigarette pods.
Dredging
Dredging operations sometimes cut a path through a coral reef, directly destroying the reef structure and killing any organisms that live on it. Operations that directly destroy coral are usually intended to deepen or otherwise enlarge shipping channels or canals; because coral removal requires a permit in many areas, it is generally more cost-effective and simpler to route around coral reefs where possible.
Dredging also releases plumes of suspended sediment, which can settle on coral reefs, damaging them by starving them of food and sunlight. Continued exposure to dredging spoil has been shown to increase rates of diseases such as white syndrome, bleaching and sediment necrosis, among others. A study conducted in the Montebello and Barrow Islands showed that the number of coral colonies with signs of poor health more than doubled in transects with high exposure to dredging sediment plumes.
Sunscreen
Sunscreen can enter the ocean indirectly, through wastewater systems when it is washed off, or directly, when it comes off swimmers and divers in the ocean. Some 14,000 tons of sunscreen end up in the ocean each year, with 4,000 to 6,000 tons entering reef areas annually. An estimated 90% of snorkeling and diving tourism is concentrated on 10% of the world's coral reefs, meaning that popular reefs are especially vulnerable to sunscreen exposure. Certain formulations of sunscreen are a serious danger to coral health. The common sunscreen ingredient oxybenzone causes coral bleaching and has an impact on other marine fauna. In addition to oxybenzone, other sunscreen ingredients, known as chemical UV filters, can also be harmful to corals, coral reefs and other marine life: Benzophenone-1, Benzophenone-8, OD-PABA, 4-Methylbenzylidene camphor, 3-Benzylidene camphor, nano-Titanium dioxide, nano-Zinc oxide, Octinoxate, and Octocrylene.
In Akumal, Mexico, visitors are warned not to use sunscreen and are kept out of some areas to prevent damage to the coral. In several other tourist destinations, authorities recommend the use of sunscreens prepared with the naturally occurring chemicals titanium dioxide or zinc oxide, or suggest the use of clothing rather than chemicals to screen the skin from the sun. The city of Miami Beach, Florida rejected calls for a ban on sunscreen in 2019 due to lack of evidence. In 2020, Palau enacted a ban on sunscreen and skincare products containing 10 chemicals including oxybenzone. The US state of Hawaii enacted a similar ban, which came into effect in 2021. Skin protection remains important, as sun exposure causes 90% of premature skin aging and can cause skin cancer; it is possible to protect both the skin and the marine environment.
Climate change
Rising sea levels due to climate change require corals to grow upward so they can stay close enough to the surface to continue photosynthesis. Changes in water temperature or coral disease can induce coral bleaching, as happened during the 1998 and 2004 El Niño years, in which sea surface temperatures rose well above normal, bleaching and killing many reefs. Bleaching may be caused by different triggers, including high sea surface temperature (SST), pollution, or disease. High SST coupled with high irradiance (light intensity) triggers the loss of zooxanthellae, the symbiotic single-celled dinoflagellate algae that give coral its pigmentation; when they are expelled the coral turns white, and this can kill the coral. Zooxanthellae provide up to 90% of their hosts' energy supply. Healthy reefs can often recover from bleaching if water temperatures cool. However, recovery may not be possible if atmospheric carbon dioxide levels rise to 500 ppm, because concentrations of carbonate ions may then be too low. In summary, ocean warming is the primary cause of mass coral bleaching and mortality (very high confidence), which, together with ocean acidification, deteriorates the balance between coral reef construction and erosion (high confidence).
Warming seawater may also exacerbate the emerging problem of coral disease. Weakened by warm water, coral is much more prone to diseases including black band disease, white band disease and skeletal eroding band. If global temperatures increase by 2 °C during the twenty-first century, coral may not be able to adapt quickly enough to survive.
Warming seawater is also expected to cause migrations in fish populations to compensate for the change. This puts coral reefs and their associated species at risk of invasion and may cause their extinction if they are unable to compete with the invading populations.
A 2010 report by the Institute of Physics predicts that unless the national targets set by the Copenhagen Accord are amended to eliminate loopholes, global temperatures could rise by 4.2 °C by 2100 and result in an end to coral reefs. Even with a temperature rise of just 2 °C, currently very likely to happen within the next 50 years (by about 2068), there would be a more than 99% chance that tropical corals would be eradicated.
Warm-water coral reef ecosystems house one-quarter of marine biodiversity and provide services in the form of food, income and shoreline protection to coastal communities around the world. These ecosystems are threatened by climate and non-climate drivers, especially ocean warming, marine heatwaves (MHWs), ocean acidification, sea level rise (SLR), tropical cyclones, fisheries and overharvesting, land-based pollution, disease spread and destructive shoreline practices. Warm-water coral reefs face near-term threats to their survival, but research on observed and projected impacts is very advanced.
Anthropogenic climate change has exposed ocean and coastal ecosystems to conditions that are unprecedented over millennia (high confidence), and this has greatly impacted life in the ocean and along its coasts (very high confidence).
Ocean acidification
Ocean acidification results from increases in atmospheric carbon dioxide. Oceans absorb around one-third of the increase. The dissolved gas reacts with the water to form carbonic acid, and thus acidifies the ocean. This decreasing pH is another issue for coral reefs.

Ocean surface pH is estimated to have decreased from about 8.25 to 8.14 since the beginning of the industrial era, and a further drop of 0.3–0.4 units is expected. This drop corresponds to roughly a 30% increase in hydrogen ion concentration. Before the industrial age, the conditions for calcium carbonate production were typically stable in surface waters, since the carbonate ion was at supersaturated concentrations. As the ionic concentration falls, however, carbonate becomes under-saturated, making calcium carbonate structures vulnerable to dissolution. Corals experience reduced calcification or enhanced dissolution when exposed to elevated CO2. This weakens coral skeletons or, in extreme cases, prevents them from forming at all. Ocean acidification may also affect the coral sexes differently, as spawning female corals are significantly more susceptible to its negative effects than spawning male corals.
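The roughly 30% figure follows directly from the logarithmic definition of pH; as a back-of-the-envelope check (not a calculation taken from the cited sources):

\[ \frac{[\mathrm{H^+}]_{\mathrm{pH}\,8.14}}{[\mathrm{H^+}]_{\mathrm{pH}\,8.25}} = 10^{\,8.25 - 8.14} = 10^{0.11} \approx 1.29 \]

so a 0.11-unit drop corresponds to a ~30% rise in hydrogen ion concentration, and the projected further drop of 0.3–0.4 units would multiply it by another factor of about 2 to 2.5 (\(10^{0.3} \approx 2.0\), \(10^{0.4} \approx 2.5\)). Whether skeletal calcium carbonate precipitates or dissolves is governed by the saturation state, in the standard textbook formulation

\[ \Omega = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{CO_3^{2-}}]}{K'_{\mathrm{sp}}} \]

with \( \Omega > 1 \) favoring precipitation and \( \Omega < 1 \) favoring dissolution.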
Bamboo coral is a deep water coral which produces growth rings similar to trees. The growth rings illustrate growth rate changes as deep sea conditions change, including changes due to ocean acidification. Specimens as old as 4,000 years have given scientists "4,000 years worth of information about what has been going on in the deep ocean interior".
Rising carbon dioxide levels could confuse brain signaling in fish. In 2012, researchers reported results from several years of studying the behavior of juvenile clownfish and damselfish in water with elevated levels of dissolved carbon dioxide, in line with what may exist by the end of the century. They found that the higher carbon dioxide disrupted a key brain receptor in the fish, interfering with neurotransmitter functions. The damaged central nervous systems affected fish behavior, diminishing their sensory capacity to a point "likely to impair their chances of survival". The fish were less able to locate reefs by smell or "detect the warning smell of a predator fish". Nor could they hear the sounds made by other reef fish, compromising their ability to locate safe reefs and avoid dangerous ones. They also lost their usual tendencies to turn to the left or right, damaging their ability to school with other fish.
Although previous experiments found several detrimental effects on coral fish behavior from projected end-of-21st-century ocean acidification, a 2020 replication study found that "end-of-century ocean acidification levels have negligible effects on [three] important behaviors of coral reef fishes" and, with "data simulations, [showed] that the large effect sizes and small within-group variances that have been reported in several previous studies are highly improbable". In 2021, allegations emerged that some of the previous studies were fraudulent. Furthermore, effect sizes of studies assessing ocean acidification effects on fish behavior have declined dramatically over a decade of research on this topic, with effects appearing negligible since 2015.
Ocean deoxygenation
Mass mortality events associated with low oxygen (mass hypoxia) have increased severely, with the majority occurring in the last two decades. Rising water temperature increases oxygen demand while also driving ocean deoxygenation, which together produce these large coral reef dead zones. For many coral reefs, the response to hypoxia depends strongly on the magnitude and duration of the deoxygenation. The symptoms can range from reduced photosynthesis and calcification to bleaching. Hypoxia can also have indirect effects, such as increasing the abundance of algae and the spread of coral diseases in these ecosystems. While coral is unable to handle such low levels of oxygen, algae are quite tolerant; in interaction zones between algae and coral, increased hypoxia will therefore cause more coral death and a greater spread of algae. The increase in mass coral dead zones is reinforced by the spread of coral diseases, which can spread easily at high sulfide concentrations and under hypoxic conditions. Because of this loop of hypoxia and coral reef mortality, the fish and other marine life that inhabit coral reefs change their behavior in response to hypoxia: some fish move upward to find more oxygenated water, while others enter a phase of metabolic and ventilatory depression. Invertebrates migrate out of their homes to the surface of the substratum or move to the tips of arborescent coral colonies.
Around 6 million people, the majority living in developing countries, depend on coral reef fisheries. Mass die-offs due to extreme hypoxic events can therefore have severe impacts on reef fish populations. Coral reef ecosystems offer a variety of essential ecosystem services, including shoreline protection, nitrogen fixation, waste assimilation, and tourism opportunities. The continued decline of oxygen on coral reefs is concerning because it takes many years (decades) for corals to repair and regrow.
Disease
Disease is a serious threat to many coral species. Coral diseases may consist of bacterial, viral, fungal, or parasitic infections. Due to stressors like climate change and pollution, coral can become more vulnerable to disease. Examples of coral diseases include Vibrio infections, white syndrome, white band disease, and rapid wasting disease, among many others. These diseases have varying effects on corals, ranging from damaging and killing individual corals to wiping out entire reefs.
In the Caribbean, white band disease is one of the primary causes of the death of over eighty percent of staghorn and elkhorn coral (Reef Resilience). The disease can rapidly destroy miles of coral reef.
A disease such as white plague can spread over a coral colony by half an inch a day. By the time the disease has fully taken over the colony, it leaves behind a dead skeleton. Dead standing coral structures are what most people see after disease has taken over a reef.
Recently, the Florida Reef Tract in the United States has been plagued by stony coral tissue loss disease. The disease was first identified in 2014 and as of 2018 had been reported in every part of the reef except the lower Florida Keys and the Dry Tortugas. The cause of the disease is unknown but is thought to be bacterial and to be transmitted through direct contact and water circulation. This disease event is unique due to its large geographic range, extended duration, rapid progression, high rates of mortality and the number of species affected.
Recreational diving
During the 20th century recreational scuba diving was considered to have generally low environmental impact, and was consequently one of the activities permitted in most marine protected areas. Since the 1970s diving has changed from an elite activity to a more accessible recreation, marketed to a very wide demographic. To some extent better equipment has been substituted for more rigorous training, and the reduction in perceived risk has shortened minimum training requirements by several training agencies. Training has concentrated on an acceptable risk to the diver, and paid less attention to the environment. The increase in the popularity of diving and in tourist access to sensitive ecological systems has led to the recognition that the activity can have significant environmental consequences.
Scuba diving has grown in popularity during the 21st century, as is shown by the number of certifications issued worldwide, which has increased to about 23 million by 2016 at about one million per year. Scuba diving tourism is a growth industry, and it is necessary to consider environmental sustainability, as the expanding impact of divers can adversely affect the marine environment in several ways, and the impact also depends on the specific environment. Tropical coral reefs are more easily damaged by poor diving skills than some temperate reefs, where the environment is more robust due to rougher sea conditions and fewer fragile, slow-growing organisms. The same pleasant sea conditions that allow development of relatively delicate and highly diverse ecologies also attract the greatest number of tourists, including divers who dive infrequently, exclusively on vacation and never fully develop the skills to dive in an environmentally friendly way. Low impact diving training has been shown to be effective in reducing diver contact to more sustainable levels. Experience appears to be the most important factor in explaining divers' underwater behaviour, followed by their attitude towards diving and the environment, and personality type.
Other issues
Within the last 20 years, once-prolific seagrass meadows and mangrove forests, which absorb massive amounts of nutrients and sediment, have been destroyed. The loss of wetlands, mangrove habitats, and seagrass meadows affects the water quality of inshore reefs.
Coral mining is another threat. Both small-scale harvesting by villagers and industrial-scale mining by companies are serious threats. Mining is usually done to produce construction material, valued because it can be as much as 50% cheaper than other rock, such as quarried stone. The rock is ground and mixed with other materials, such as cement, to make concrete. Ancient coral used for construction is known as coral rag. Building directly on the reef also takes its toll, altering water circulation and the tides that bring nutrients to the reef. The pressing reason for building on reefs is simply lack of space, and partly because of this, some areas with heavily mined coral reefs have still not recovered. Coral collecting is a further issue: many corals are deemed so beautiful that they are harvested to make jewelry, home decorations, and other items. The breakage of coral branches is unhealthy for the reefs, so tourists and those who purchase such items contribute substantially to the degradation of already devastated reefs.
Boats and ships require access points into bays and islands to load and unload cargo and people, and parts of reefs are often chopped away to clear a path. Negative consequences can include altered water circulation and tidal patterns, which can disrupt the reef's nutrient supply and sometimes destroy a great part of the reef. Fishing vessels and other large boats occasionally run aground on a reef, causing two types of damage: collision damage, when a coral reef is crushed and split by a vessel's hull into multiple fragments, and scarring, when boat propellers tear off live coral and expose the skeleton, leaving physical damage visible as striations. Mooring also causes damage, which can be reduced by using mooring buoys; buoys can attach to the seafloor using concrete blocks as weights or by penetrating the seafloor, which further reduces damage. Reef docks can also be used to transfer goods from large seagoing vessels to small, flat-bottomed ones.
Coral in Taiwan is threatened by human population growth. Since 2007, several local environmental groups have conducted research and found that many coral populations are being affected by untreated sewage and by an influx of tourists who take corals as souvenirs without understanding the destructive impact on the coral's ecological system. Researchers reported to the Taiwanese government that many coral populations along the southeast coast of Taiwan have turned black. Potentially, this could lead to the loss of food supply, medicinal sources, and tourism through the breakdown of the food chain.
Oil
Causes and effects of oil spills
The causes of oil spills can be separated into two categories: natural and anthropogenic.
Natural causes include oil that leaks from the ocean floor into the water, erosion of the seafloor, and even climate change. About 181 million gallons of oil seep naturally into the ocean per year, though the amount varies yearly.
Anthropogenic causes involve human activities and account for most of the oil that enters the ocean. Sources include drilling rigs, pipelines, refineries, and wars. Anthropogenic spills, which release about 210 million gallons of petroleum each year, are more harmful than natural seepage: they cause abrupt changes to ecosystems, with long-term effects and even longer remediation.
When oil spills occur, the effects can be felt in an area for decades and can cause massive damage to aquatic life. For aquatic plants, an oil spill can affect the availability of light and oxygen for photosynthesis.
Two other examples of the many ways oil harms wildlife are oil toxicity and fouling. Oil toxicity occurs when the toxic compounds that make up oil enter the body, damaging internal organs and eventually causing death. Fouling occurs when oil physically coats an animal or plant.
Oil impacts on coral reef communities
Oil pollution is hazardous to living marine habitats because of its toxic constituents. Oil spills occur through natural seepage and during activities such as transportation and handling, and they harm marine and coastal wildlife. Organisms exposed to oil spills can suffer skin irritation, decreased immunity, and gastrointestinal damage.
Oil floating above a coral reef has little effect on the coral below; the problem arises when the oil begins to sink toward the ocean floor. Even then, the physical effect of oil-sediment particles has been found to be less harmful than direct contact between the coral and the toxic oil itself.
When oil comes into contact with corals, not only the reef system is affected but also fish, crabs, and many other marine invertebrates. Just a few drops of oil can cause coral reef fish to make poor decisions, impairing their judgment in ways that endanger both the fish and the reefs they choose as homes.
Oil exposure can negatively affect fishes' growth, survival, and settlement behavior, and it increases predation. Larval fish exposed to oil have been found to develop heart problems and physical irregularities later in life.
Oil impacts on coral life and symbiotic relationships
Evidence for the damaging effects of oil spills on coral reef structures can be seen at a reef site a few kilometers southwest of the Macondo well. Corals at this site, covered in crude oil chemicals and brown flocculent particles, were found dying just seven months after the Deepwater Horizon blowout.
Gorgonian octocorals (soft coral communities) are highly susceptible to damage from oil spills because of the structure and function of their polyps, which are specialized for filtering tiny particles from the water.
Corals have a complex relationship with many different prokaryotic organisms, including probiotic microorganisms that protect the corals from harmful environmental pollutants. Research has shown, however, that oil spills damage these organisms, weakening their ability to protect reef structures in the presence of oil pollution.
Oil cleanup methods
Booms are floating barriers placed around a spreading slick to restrict the movement of floating oil. Booms are often used alongside skimmers (sponges and oil-absorbent ropes that collect oil from the water). In-situ burning and chemical dispersion can also be used during an oil spill. In-situ burning refers to burning oil that has been gathered into one location with a fire-resistant containment boom; however, the combustion does not fully remove the oil but instead breaks it down into other chemicals that can negatively affect marine reefs.
Chemical dispersants, which consist of emulsifiers and solvents that break oil into small droplets, are the most common form of oil removal. However, they can reduce corals' resilience to environmental stressors and can physically harm coral species on exposure. Dispersants also harm the early life stages of coral and reduce settlement on reef systems, and many have since been banned, though one formulation, Corexit 9427, remains in use.
Microbial biosurfactants can be used as an eco-friendly method to reduce damage to reef ecosystems, but their effectiveness is limited. The method is still being studied and is not yet a proven means of oil cleanup.
Threatened species
The global standard for recording threatened marine species is the IUCN Red List of Threatened Species. This list is the foundation for marine conservation priorities worldwide. A species is listed in the threatened category if it is considered critically endangered, endangered, or vulnerable; other categories are near threatened and data deficient. By 2008, the IUCN had assessed all 845 known reef-building coral species, marking 27% as threatened, 20% as near threatened, and 17% as data deficient.
The Coral Triangle (Indo-Malay-Philippine archipelago) region has both the highest number of reef-building coral species in the threatened category and the highest coral species diversity. The loss of coral reef ecosystems will have devastating effects on many marine species, as well as on the people who depend on reef resources for their livelihoods.
Issues by region
Australia
The Great Barrier Reef is the world's largest coral reef system. The reef is located in the Coral Sea and a large part of the reef is protected by the Great Barrier Reef Marine Park. Particular environmental pressures include surface runoff, salinity fluctuations, climate change, cyclic crown-of-thorns outbreaks, overfishing, and spills or improper ballast discharge. According to the 2014 report of the Government of Australia's Great Barrier Reef Marine Park Authority (GBRMPA), climate change is the most significant environmental threat to the Great Barrier Reef. 50% of the coral on the Great Barrier Reef has been lost.
Southeast Asia
Southeast Asian coral reefs are at risk from damaging fishing practices (such as cyanide and blast fishing), overfishing, sedimentation, pollution and bleaching. Activities including education, regulation and the establishment of marine protected areas help protect these reefs.
Indonesia
Indonesia is home to one-third of the world's coral reefs, which host one-quarter of its fish species. Indonesia's coral reefs are located in the heart of the Coral Triangle and have fallen victim to destructive fishing, tourism, and bleaching. Data from LIPI in 1998 found that only 7 percent of the reefs were in excellent condition, 24 percent in good condition, and approximately 69 percent in poor-to-fair condition. According to one source, Indonesia will lose 70 percent of its coral reefs by 2050 if restoration does not occur.
Philippines
In 2007, Reef Check, the world's largest reef conservation organization, stated that only 5% of the Philippines' coral reefs were in "excellent condition": Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and the Verde Island Passage off Batangas. The Philippines' coral reefs are Asia's second largest.
Taiwan
Coral reefs in Taiwan are threatened by human population growth. Many corals are affected by untreated sewage and by souvenir-hunting tourists who do not know that this practice destroys habitat and causes disease. Many corals off Taiwan's southeast coast have turned black from disease.
Caribbean
Coral disease was first recognized as a threat to Caribbean reefs in 1972, when black band disease was discovered. Since then, diseases have occurred with increasing frequency.
It has been estimated that 50% of the Caribbean Sea's coral cover has disappeared since the 1960s. According to a United Nations Environment Programme report, Caribbean coral reefs might face extirpation within the next 20 years due to population expansion along the coastlines, overfishing, pollution of coastal areas, global warming, and invasive species.
In 2005, the Caribbean lost about 50% of its reefs in a single year due to coral bleaching, caused by warm water that travelled south from around Puerto Rico and the Virgin Islands.
Jamaica
Jamaica is the third-largest Caribbean island. The Caribbean's coral reefs will cease to exist within 20 years if no conservation effort is made. In 2005, 34 percent of Jamaica's coral reefs were bleached due to rising sea temperatures. Jamaica's coral reefs are also threatened by overfishing, pollution, natural disasters, and reef mining. In 2009, researchers concluded that many of the corals were recovering only very slowly.
United States
Southeastern Florida's reef tract is 300 miles long. Florida's coral reefs are currently undergoing an unprecedented outbreak of stony coral tissue loss disease, which covers a large geographic range and affects many species of coral.
In January 2019, scientific divers confirmed that the outbreak of stony coral tissue loss disease had extended south and west of Key West. In December 2018, the disease was spotted at Maryland Shoals, near the Saddlebunch Keys. By mid-January, five more sites between American Shoal and Eastern Dry Rocks were confirmed diseased.
Puerto Rico is home to over 5,000 square kilometers of shallow coral reef ecosystems. Puerto Rico's coral reefs and associated ecosystems have an average economic value of nearly $1.1 billion per year.
The U.S. Virgin Islands’ coral reefs and associated ecosystems have an average economic value of $187 million per year.
Pacific
United States
Hawaii's coral reefs (e.g. French Frigate Shoals) are a major factor in Hawaii's $800-million-a-year marine tourism industry and are being negatively affected by coral bleaching and increased sea surface temperatures, which in turn lead to coral reef diseases. The first large-scale coral bleaching occurred in 1996; in 2004 it was found that sea surface temperatures had been steadily increasing, and if this pattern continues, bleaching events will occur more frequently and more severely.
See also
Marine cloud brightening
References
Further reading
Barber, Charles V. and Vaughan R. Pratt. 1998. Poison and Profit: Cyanide Fishing in the Indo-Pacific. Environment, Heldref Publications.
Martin, Glen. 2002. "The depths of destruction: Dynamite fishing ravages Philippines' precious coral reefs". San Francisco Chronicle, 30 May 2002.
External links
NOAA Report: The State of Coral Reef Ecosystems of the United States and Pacific Freely Associated States: 2008
A special report on the plight of the planet's coral reefs—and how you can help—from Mother Jones magazine
Coral reefs
Environmental conservation
Great Barrier Reef | Environmental issues with coral reefs | Biology | 8,762 |
34,261,699 | https://en.wikipedia.org/wiki/Database%20for%20bacterial%20group%20II%20introns | The Database for Bacterial Group II Introns is a repository of full-length, non-redundant group II introns present in bacterial DNA sequences. The database was first established in 2002 with roughly 40 introns; in less than 10 years it expanded to 400. The current database includes a wealth of information on the properties, structures, and classification of group II introns. In addition, it contains a list of intron insertion sites, DNA sequences, protein-coding sequences, and RNA secondary structures.
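As an illustration of the kind of record such a repository holds, the sketch below models one intron entry in Python. The field names and the by_class helper are hypothetical, chosen only to mirror the categories of information listed above; they are not the database's actual schema.

```python
from dataclasses import dataclass

@dataclass
class GroupIIIntron:
    """Hypothetical record mirroring the information the database describes."""
    name: str             # identifier, typically based on the host organism
    intron_class: str     # structural class, e.g. "IIA", "IIB", "IIC"
    insertion_site: str   # description of the genomic insertion site
    dna_sequence: str     # full-length intron DNA sequence
    orf_protein: str      # protein encoded by the intron ORF, if any
    rna_structure: str    # secondary structure in dot-bracket notation

def by_class(introns: list[GroupIIIntron], wanted: str) -> list[GroupIIIntron]:
    """Filter a collection of records by structural class."""
    return [i for i in introns if i.intron_class == wanted]
```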
See also
group II intron
References
External links
https://web.archive.org/web/20120425142811/http://webapps2.ucalgary.ca/~groupii/index.html
Biological databases
RNA
Ribozymes
RNA splicing | Database for bacterial group II introns | Chemistry,Biology | 171 |
38,518,053 | https://en.wikipedia.org/wiki/RailRadar | RailRadar GPS (by RailYatri) is a live tracker allowing users to watch the movements of passenger trains running in India on an interactive map. All passenger trains in India are operated by state-owned Indian Railways. In the first release, the locations and statuses of trains shown on the map were typically delayed 15 to 30 minutes from real time. RailRadar was created when Indian Railways' Centre for Railway Information Systems (CRIS) and RailYatri joined forces, and the service was launched on 10 October 2012. RailRadar uses Google Maps as its web mapping software and is accessible as a website and a mobile app. RailRadar was discontinued by Indian Railways on 6 September 2013, before RailYatri relaunched it in November 2013. However, this relaunched service did not provide the actual running status or location of trains; locations were instead plotted from the regular scheduled timetable.
RailYatri relaunched the site as RailRadar GPS in November 2015. RailRadar GPS determines train locations by analyzing the pattern of locations transmitted by the smartphones of travelers sitting on the train, similar to how Google Maps determines traffic density on roads. RailRadar GPS shows train tracking data on a Google Map, which also indicates the delay status of trains: trains running on time have green indicators, while those running late are marked red.
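A crowd-sourced scheme of this kind can be sketched in a few lines: aggregate recent passenger pings robustly, so that a phone that is not actually on the train does not skew the estimate, then compare the estimated position against the timetable. The sketch below is illustrative only; the ping format, schedule table, speed constant, and delay threshold are assumptions, not RailYatri's actual implementation.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Ping:
    km_mark: float   # reported position along the route, km from origin
    minutes: float   # minutes since the train's scheduled departure

# Assumed timetable: km mark the train should have reached at each minute offset.
SCHEDULE = {0: 0.0, 30: 42.0, 60: 88.0, 90: 130.0}

def estimate_position(pings: list[Ping]) -> float:
    """Median of recent passenger pings; robust to a few off-train phones."""
    return median(p.km_mark for p in pings)

def delay_minutes(est_km: float, now_min: float, speed_kmpm: float = 1.2) -> float:
    """Convert the shortfall behind the last scheduled checkpoint into minutes."""
    sched_km = max(v for t, v in SCHEDULE.items() if t <= now_min)
    return max(0.0, (sched_km - est_km) / speed_kmpm)

# Four pings 45 minutes into the journey; the last phone is not on the train.
pings = [Ping(34.5, 45), Ping(35.2, 45), Ping(36.0, 45), Ping(80.0, 45)]
est = estimate_position(pings)
delay = delay_minutes(est, 45)
print(f"km mark ~{est:.1f}: {'green' if delay <= 5 else 'red'} ({delay:.0f} min late)")
```

The median, rather than the mean, is the key design choice: a single stationary or off-route phone barely moves the estimate, which matters when only a handful of passengers are transmitting.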
See also
Centre for Railway Information Systems (CRIS)
References
External links
Official website
Indian Railways
Rail transport operations
Tracking | RailRadar | Technology | 300 |
47,615,730 | https://en.wikipedia.org/wiki/Torbj%C3%B6rn%20Sj%C3%B6strand | Torbjörn Sjöstrand (born 13 November 1954) is a Swedish theoretical physicist and a professor at Lund University in Sweden, where he also received his PhD in 1982. He is one of the main authors of PYTHIA, a program for the generation of high-energy physics events.
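To give a sense of what an event generator is used for, the sketch below drives PYTHIA 8 through its optional Python bindings to generate a handful of proton-proton events and count final-state charged particles. The beam energy, process switches, and event count are illustrative choices, not settings drawn from this article, and the sketch assumes the pythia8 bindings are built and importable.

```python
import pythia8  # optional Python interface shipped with PYTHIA 8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 8000.")          # 8 TeV proton-proton beams
pythia.readString("HardQCD:all = on")           # enable hard QCD 2 -> 2 processes
pythia.readString("PhaseSpace:pTHatMin = 20.")  # minimum hard-process pT
pythia.init()

n_events, n_charged = 100, 0
for _ in range(n_events):
    if not pythia.next():      # skip events that fail to generate
        continue
    # Count final-state charged particles in this event's record.
    for i in range(pythia.event.size()):
        prt = pythia.event[i]
        if prt.isFinal() and prt.isCharged():
            n_charged += 1

pythia.stat()                  # print cross sections and error summary
print("mean charged multiplicity:", n_charged / n_events)
```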
In his early career, Sjöstrand spent shorter postdoc periods at DESY (Germany) and Fermilab (USA). From 1989 to 1995 he was a staff member in the CERN Theory division.
Honours
In 2012, he was awarded the J. J. Sakurai Prize for Theoretical Particle Physics by the American Physical Society. The citation reads:
In 2021, he was awarded the High Energy and Particle Physics Prize of the European Physical Society for the development of PYTHIA. He received the award together with Bryan Webber, who was also a co-recipient of the Sakurai Prize.
References
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Academic staff of Lund University
Theoretical physicists
Swedish physicists
Living people
1954 births
People associated with CERN | Torbjörn Sjöstrand | Physics | 216 |
7,724,083 | https://en.wikipedia.org/wiki/Penicillium%20crustosum | Penicillium crustosum is a blue-green or blue-grey mold that can cause food spoilage, particularly of protein-rich foods such as meats and cheeses. It is identified by its complex biseriate conidiophores on which phialides produce asexual spores. It can grow at fairly low temperatures (it is a psychrophile), and in low water activity environments.
Penicillium crustosum produces mycotoxins, most notoriously the neurotoxic penitrems A through G, of which penitrem A is the best known. Penitrem G has been shown to have insecticidal activity. In addition, P. crustosum can produce thomitrems A and E, and roquefortine C. Consumption of foods spoiled by this mold can cause transient neurological symptoms such as tremors; in dogs, symptoms can include vomiting, convulsions, tremors, ataxia, and tachycardia.
References
crustosum
Fungi described in 1930
Taxa named by Charles Thom
Fungus species | Penicillium crustosum | Biology | 237 |
20,877,791 | https://en.wikipedia.org/wiki/Design%20By%20Numbers | Design By Numbers (DBN) was an influential experiment in teaching programming initiated at the MIT Media Lab during the 1990s. Led by John Maeda, he and his students created software aimed at allowing designers, artists, and other non-programmers to start computer programming easily. The software itself could be run in a browser, and a book and courseware were published alongside it.
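DBN programs drew on a small square canvas using a handful of commands such as paper, pen, and line. The sketch below emulates that drawing model in Python for illustration; the 101x101 canvas and the 0 (white) to 100 (black) gray scale follow DBN's published conventions, but this implementation is a stand-in, not Maeda's original code.

```python
# Emulation of DBN's drawing model: a 101x101 canvas whose cells hold
# gray values from 0 (white) to 100 (black), plus the three core commands.
SIZE = 101
canvas = [[0] * SIZE for _ in range(SIZE)]
pen_value = 100

def paper(value: int) -> None:
    """Fill the whole canvas with one gray value (DBN: 'paper 0')."""
    for row in canvas:
        for x in range(SIZE):
            row[x] = value

def pen(value: int) -> None:
    """Set the gray value used by subsequent drawing (DBN: 'pen 100')."""
    global pen_value
    pen_value = value

def line(x1: int, y1: int, x2: int, y2: int) -> None:
    """Draw a straight line by sampling points (DBN: 'line 0 0 100 100')."""
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    for i in range(steps + 1):
        x = round(x1 + (x2 - x1) * i / steps)
        y = round(y1 + (y2 - y1) * i / steps)
        canvas[SIZE - 1 - y][x] = pen_value  # DBN's origin is bottom-left

# The canonical first DBN program: white paper, black pen, one diagonal line.
paper(0)
pen(100)
line(0, 0, 100, 100)
```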
Design By Numbers is no longer an active project but has influenced many other projects aimed at making computer programming more accessible to non-technical people. Its most visible result is Processing, created by Maeda's students Casey Reas and Ben Fry, which built on the work of DBN and has gone on to international success.
See also
Processing
Smile software
Further reading
External links
Educational programming languages | Design By Numbers | Technology | 155 |