| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
46,968,364 | https://en.wikipedia.org/wiki/Inferring%20horizontal%20gene%20transfer | Horizontal or lateral gene transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate investigations of the evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages.
Inferring horizontal gene transfer through computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events.
Overview
Horizontal gene transfer was first observed in 1928, in Frederick Griffith's experiment: showing that virulence was able to pass from virulent to non-virulent strains of Streptococcus pneumoniae, Griffith demonstrated that genetic information can be horizontally transferred between bacteria via a mechanism known as transformation. Similar observations in the 1940s and 1950s showed evidence that conjugation and transduction are additional mechanisms of horizontal gene transfer.
To infer HGT events, which may not necessarily result in phenotypic changes, most contemporary methods are based on analyses of genomic sequence data. These methods can be broadly separated into two groups: parametric and phylogenetic methods. Parametric methods search for sections of a genome that significantly differ from the genomic average, such as GC content or codon usage. Phylogenetic methods examine evolutionary histories of genes involved and identify conflicting phylogenies. Phylogenetic methods can be further divided into those that reconstruct and compare phylogenetic trees explicitly, and those that use surrogate measures in place of the phylogenetic trees.
The main feature of parametric methods is that they rely only on the genome under study to infer HGT events that may have occurred on its lineage. This was a considerable advantage in the early days of the sequencing era, when few closely related genomes were available for comparative methods. However, because they rely on the uniformity of the host's signature to infer HGT events, not accounting for the host's intra-genomic variability will result in overpredictions, flagging native segments as possible HGT events. Similarly, the transferred segments need to exhibit the donor's signature and to be significantly different from the recipient's. Furthermore, genomic segments of foreign origin are subject to the same mutational processes as the rest of the host genome, and so the difference between the two tends to vanish over time, a process referred to as amelioration. This limits the ability of parametric methods to detect ancient HGTs.
Phylogenetic methods benefit from the recent availability of many sequenced genomes. Indeed, as for all comparative methods, phylogenetic methods can integrate information from multiple genomes, and in particular integrate them using a model of evolution. This lends them the ability to better characterize the HGT events they infer—notably by designating the donor species and time of the transfer. However, models have limits and need to be used cautiously. For instance, the conflicting phylogenies can be the result of events not accounted for by the model, such as unrecognized paralogy due to duplication followed by gene losses. Also, many approaches rely on a reference species tree that is supposed to be known, when in many instances it can be difficult to obtain a reliable tree. Finally, the computational costs of reconstructing many gene/species trees can be prohibitively expensive. Phylogenetic methods tend to be applied to genes or protein sequences as basic evolutionary units, which limits their ability to detect HGT in regions outside or across gene boundaries.
Because of their complementary approaches—and often non-overlapping sets of HGT candidates—combining predictions from parametric and phylogenetic methods can yield a more comprehensive set of HGT candidate genes. Indeed, combining different parametric methods has been reported to significantly improve the quality of predictions. Moreover, in the absence of a comprehensive set of true horizontally transferred genes, discrepancies between different methods might be resolved through combining parametric and phylogenetic methods. However, combining inferences from multiple methods also entails a risk of an increased false-positive rate.
Parametric methods
Parametric methods to infer HGT use characteristics of the genome sequence specific to particular species or clades, also called genomic signatures. If a fragment of the genome strongly deviates from the genomic signature, this is a sign of a potential horizontal transfer. For example, because bacterial GC content falls within a wide range, GC content of a genome segment is a simple genomic signature. Commonly used genomic signatures include nucleotide composition, oligonucleotide frequencies, or structural features of the genome.
To detect HGT using parametric methods, the host's genomic signature needs to be clearly recognizable. However, the host's genome is not always uniform with respect to the genome signature: for example, GC content of the third codon position is lower close to the replication terminus and GC content tends to be higher in highly expressed genes. Not accounting for such intra-genomic variability in the host can result in over-predictions, flagging native segments as HGT candidates. Larger sliding windows can account for this variability at the cost of a reduced ability to detect smaller HGT regions.
Just as importantly, horizontally transferred segments need to exhibit the donor's genomic signature. This might not be the case for ancient transfers, where transferred sequences are subjected to the same mutational processes as the rest of the host genome, potentially causing their distinct signatures to "ameliorate" and become undetectable through parametric methods. For example, Bdellovibrio bacteriovorus, a predatory δ-Proteobacterium, has homogeneous GC content, and it might be concluded that its genome is resistant to HGT. However, subsequent analysis using phylogenetic methods identified a number of ancient HGT events in the genome of B. bacteriovorus. Similarly, if the inserted segment was previously ameliorated to the host's genome, as is the case for prophage insertions, parametric methods may fail to detect these HGT events. Also, the donor's composition must differ significantly from the recipient's to be identified as abnormal, a condition that might be missed in the case of short- to medium-distance HGTs, which are the most prevalent. Furthermore, it has been reported that recently acquired genes tend to be more AT-rich than the recipient's average, which indicates that differences in GC-content signature may result from unknown post-acquisition mutational processes rather than from the donor's genome.
Nucleotide composition
Bacterial GC content falls within a wide range, with Ca. Zinderia insecticola having a GC content of 13.5% and Anaeromyxobacter dehalogenans having a GC content of 75%. Even within a closely related group of α-Proteobacteria, values range from approximately 30% to 65%. These differences can be exploited when detecting HGT events as a significantly different GC content for a genome segment can be an indication of foreign origin.
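As a minimal illustration of this idea (a sketch, not any particular published tool), the following Python flags sliding windows whose GC content deviates strongly from the genome-wide distribution; the window size, step and z-score cutoff are illustrative assumptions.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def gc_outlier_windows(genome, window=5000, step=500, z_cutoff=3.0):
    """Return (start, end, gc) for sliding windows whose GC content is a z-score outlier."""
    coords, gcs = [], []
    for start in range(0, max(1, len(genome) - window + 1), step):
        win = genome[start:start + window]
        coords.append((start, start + len(win)))
        gcs.append(gc_content(win))
    mean = sum(gcs) / len(gcs)
    sd = (sum((g - mean) ** 2 for g in gcs) / len(gcs)) ** 0.5 or 1e-9
    return [(s, e, g) for (s, e), g in zip(coords, gcs) if abs(g - mean) / sd >= z_cutoff]
```

Windows reported this way are only candidates; as discussed above, intra-genomic variability and amelioration both limit what such a scan can detect.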
Oligonucleotide spectrum
The oligonucleotide spectrum (or k-mer frequencies) measures the frequency of all possible nucleotide sequences of a particular length in the genome. It tends to vary less within genomes than between genomes and therefore can also be used as a genomic signature. A deviation from this signature suggests that a genomic segment might have arrived through horizontal transfer.
The oligonucleotide spectrum owes much of its discriminatory power to the number of possible oligonucleotides: if n is the size of the vocabulary and w is the oligonucleotide size, the number of possible distinct oligonucleotides is n^w; for example, there are 4^5 = 1024 possible pentanucleotides. Some methods can capture the signal recorded in motifs of variable size, thus capturing both rare and discriminative motifs along with frequent, but more common ones.
Codon usage bias, a measure related to codon frequencies, was one of the first detection methods used in methodical assessments of HGT. This approach requires a host genome which contains a bias towards certain synonymous codons (different codons which code for the same amino acid) which is clearly distinct from the bias found within the donor genome. The simplest oligonucleotide used as a genomic signature is the dinucleotide, for example the third nucleotide in a codon and the first nucleotide in the following codon represent the dinucleotide least restricted by amino acid preference and codon usage.
It is important to optimise the size of the sliding window in which to count the oligonucleotide frequency: a larger sliding window will better buffer variability in the host genome at the cost of being worse at detecting smaller HGT regions. A good compromise has been reported using tetranucleotide frequencies in a sliding window of 5 kb with a step of 0.5 kb.
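The same windowing idea extends to oligonucleotide spectra. The sketch below assumes tetranucleotides with the 5 kb / 0.5 kb window setting mentioned above, a simple mean absolute difference as the distance measure, and an arbitrary fraction of top-ranked windows reported; these last two choices are illustrative, not taken from any specific study.

```python
from itertools import product

def kmer_spectrum(seq, k=4):
    """Normalised frequency vector over the 4**k possible k-mers (k-mers containing other symbols, e.g. N, are skipped)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    seq = seq.upper()
    total = 0
    for i in range(len(seq) - k + 1):
        mer = seq[i:i + k]
        if mer in counts:
            counts[mer] += 1
            total += 1
    total = total or 1
    return [counts[m] / total for m in kmers]

def spectrum_distance(a, b):
    """Mean absolute difference between two k-mer frequency vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def deviant_windows(genome, k=4, window=5000, step=500, top_fraction=0.01):
    """Rank windows by the distance of their k-mer spectrum from the whole-genome spectrum."""
    genome_sig = kmer_spectrum(genome, k)
    scored = []
    for start in range(0, max(1, len(genome) - window + 1), step):
        win = genome[start:start + window]
        scored.append((spectrum_distance(kmer_spectrum(win, k), genome_sig), start))
    scored.sort(reverse=True)
    return scored[:max(1, int(len(scored) * top_fraction))]
```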
A convenient method of modelling oligonucleotide genomic signatures is to use Markov chains. The transition probability matrix can be derived for endogenous vs. acquired genes, from which Bayesian posterior probabilities for particular stretches of DNA can be obtained.
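One way to realise this, sketched below under simplifying assumptions (a first-order chain, flat pseudocounts and a fixed prior probability of foreign origin), is to train separate transition matrices on sequences believed to be native and on sequences believed to be acquired, and then ask which model better explains a query segment.

```python
import math
from collections import defaultdict

def fit_markov(seqs, pseudocount=1.0):
    """First-order Markov model: P(next base | previous base), with pseudocounts."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in seqs:
        seq = seq.upper()
        for i in range(1, len(seq)):
            counts[seq[i - 1]][seq[i]] += 1
    model = {}
    for prev, nxt in counts.items():
        total = sum(nxt.values()) + 4 * pseudocount
        model[prev] = {b: (nxt.get(b, 0.0) + pseudocount) / total for b in "ACGT"}
    return model

def log_likelihood(seq, model, floor=1e-6):
    """Log-probability of a sequence under a first-order Markov model."""
    seq = seq.upper()
    ll = 0.0
    for i in range(1, len(seq)):
        probs = model.get(seq[i - 1], {})
        ll += math.log(probs.get(seq[i], floor))
    return ll

def posterior_foreign(segment, host_model, foreign_model, prior_foreign=0.05):
    """Posterior probability that a segment follows the 'foreign' model."""
    lf = log_likelihood(segment, foreign_model) + math.log(prior_foreign)
    lh = log_likelihood(segment, host_model) + math.log(1.0 - prior_foreign)
    m = max(lf, lh)
    return math.exp(lf - m) / (math.exp(lf - m) + math.exp(lh - m))
```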
Structural features
Just as the nucleotide composition of a DNA molecule can be represented by a sequence of letters, its structural features can be encoded in a numerical sequence. The structural features include interaction energies between neighbouring base pairs, the angle of twist that makes two bases of a pair non-coplanar, or DNA deformability induced by the proteins shaping the chromatin.
Autocorrelation analysis of some of these numerical sequences shows characteristic periodicities in complete genomes. After archaea-like regions were detected in the thermophilic bacterium Thermotoga maritima, the periodicity spectra of these regions were compared to the periodicity spectra of the homologous regions in the archaeon Pyrococcus horikoshii. The similarities in periodicity provided strong supporting evidence for a case of massive HGT between bacteria and archaea.
Genomic context
The existence of genomic islands, short (typically 10–200 kb long) regions of a genome which have been acquired horizontally, lends support to the ability to identify non-native genes by their location in a genome. For example, a gene of ambiguous origin which forms part of a non-native operon could be considered to be non-native. Alternatively, flanking repeat sequences or the presence of nearby integrases or transposases can indicate a non-native region. A machine-learning approach combining oligonucleotide frequency scans with context information was reported to be effective at identifying genomic islands. In another study, the context was used as a secondary indicator, after removal of genes which are strongly thought to be native or non-native through the use of other parametric methods.
Phylogenetic methods
The use of phylogenetic analysis in the detection of HGT was advanced by the availability of many newly sequenced genomes. Phylogenetic methods detect inconsistencies in gene and species evolutionary history in two ways: explicitly, by reconstructing the gene tree and reconciling it with the reference species tree, or implicitly, by examining aspects that correlate with the evolutionary history of the genes in question, e.g., patterns of presence/absence across species, or unexpectedly short or distant pairwise evolutionary distances.
Explicit phylogenetic methods
The aim of explicit phylogenetic methods is to compare gene trees with their associated species trees. While weakly supported differences between gene and species trees can be due to inference uncertainty, statistically significant differences can be suggestive of HGT events. For example, if two genes from different species share the most recent ancestral connecting node in the gene tree, but the respective species are spaced apart in the species tree, an HGT event can be invoked. Such an approach can produce more detailed results than parametric approaches because the involved species, time and direction of transfer can potentially be identified.
As discussed in more detail below, phylogenetic methods range from simple methods merely identifying discordance between gene and species trees to mechanistic models inferring probable sequences of HGT events. An intermediate strategy entails deconstructing the gene tree into smaller parts until each matches the species tree (genome spectral approaches).
Explicit phylogenetic methods rely upon the accuracy of the input rooted gene and species trees, yet these can be challenging to build. Even when there is no doubt in the input trees, the conflicting phylogenies can be the result of evolutionary processes other than HGT, such as duplications and losses, causing these methods to erroneously infer HGT events when paralogy is the correct explanation. Similarly, in the presence of incomplete lineage sorting, explicit phylogeny methods can erroneously infer HGT events. That is why some explicit model-based methods test multiple evolutionary scenarios involving different kinds of events, and compare their fit to the data given parsimonious or probabilistic criteria.
Tests of topologies
To detect sets of genes that fit poorly to the reference tree, one can use statistical tests of topology, such as the Kishino–Hasegawa (KH), Shimodaira–Hasegawa (SH), and Approximately Unbiased (AU) tests. These tests assess the likelihood of the gene sequence alignment when the reference topology is given as the null hypothesis.
The rejection of the reference topology is an indication that the evolutionary history for that gene family is inconsistent with the reference tree. When these inconsistencies cannot be explained using a small number of non-horizontal events, such as gene loss and duplication, an HGT event is inferred.
One such analysis checked for HGT in groups of homologs of the γ-Proteobacterial lineage. Six reference trees were reconstructed using either the highly conserved small subunit ribosomal RNA sequences, a consensus of the available gene trees or concatenated alignments of orthologs. The failure to reject the six evaluated topologies, and the rejection of seven alternative topologies, was interpreted as evidence for a small number of HGT events in the selected groups.
Tests of topology identify differences in tree topology while taking into account the uncertainty in tree inference, but they make no attempt at inferring how the differences came about. To infer the specifics of particular events, genome spectral or subtree pruning and regrafting methods are required.
Genome spectral approaches
In order to identify the location of HGT events, genome spectral approaches decompose a gene tree into substructures (such as bipartitions or quartets) and identify those that are consistent or inconsistent with the species tree.
Bipartitions
Removing one edge from a reference tree produces two unconnected sub-trees, each a disjoint set of nodes—a bipartition. If a bipartition is present in both the gene and the species trees, it is compatible; otherwise, it is conflicting. These conflicts can indicate an HGT event or may be the result of uncertainty in gene tree inference. To reduce uncertainty, bipartition analyses typically focus on strongly supported bipartitions such as those associated with branches with bootstrap values or posterior probabilities above certain thresholds. Any gene family found to have one or several conflicting, but strongly supported, bipartitions is considered as an HGT candidate.
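The sketch below illustrates the idea on rooted trees written as nested tuples: each internal node defines a clade (one side of a bipartition), and clades found in the gene tree but not in the species tree are reported as conflicts. Branch-support filtering, unrooted bipartitions and unequal leaf sets, all of which real analyses must handle, are deliberately left out.

```python
# Trees as nested tuples, e.g. ((("A", "B"), "C"), ("D", "E")).
def clades(tree):
    """Leaf sets induced by the internal nodes of a nested-tuple tree (root clade excluded)."""
    out = set()
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            out.add(leaves)
            return leaves
        return frozenset([node])
    out.discard(walk(tree))   # the clade of all leaves carries no information
    return out

def conflicting_bipartitions(gene_tree, species_tree):
    """Clades of the gene tree that are absent from the species tree (candidate HGT signal)."""
    return clades(gene_tree) - clades(species_tree)

species = ((("A", "B"), "C"), ("D", "E"))
gene = ((("A", "D"), "C"), ("B", "E"))      # the gene tree groups A with D
print(conflicting_bipartitions(gene, species))
```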
Quartet decomposition
Quartets are trees consisting of four leaves. In bifurcating (fully resolved) trees, each internal branch induces a quartet whose leaves are either subtrees of the original tree or actual leaves of the original tree. If the topology of a quartet extracted from the reference species tree is embedded in the gene tree, the quartet is compatible with the gene tree. Conversely, incompatible strongly supported quartets indicate potential HGT events. Quartet mapping methods are much more computationally efficient and naturally handle heterogeneous representation of taxa among gene families, making them a good basis for developing large-scale scans for HGT, looking for highways of gene sharing in databases of hundreds of complete genomes.
Subtree pruning and regrafting
A mechanistic way of modelling an HGT event on the reference tree is to first cut an internal branch—i.e., prune the tree—and then regraft it onto another edge, an operation referred to as subtree pruning and regrafting (SPR). If the gene tree was topologically consistent with the original reference tree, the editing results in an inconsistency. Similarly, when the original gene tree is inconsistent with the reference tree, it is possible to obtain a consistent topology by a series of one or more prune and regraft operations applied to the reference tree. By interpreting the edit path of pruning and regrafting, HGT candidate nodes can be flagged and the host and donor genomes inferred. To avoid reporting false positive HGT events due to uncertain gene tree topologies, the optimal "path" of SPR operations can be chosen among multiple possible combinations by considering the branch support in the gene tree. Weakly supported gene tree edges can be ignored a priori or the support can be used to compute an optimality criterion.
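Below is a toy illustration of a single SPR edit on the same nested-tuple representation used earlier: the pruned subtree is identified by exact match and regrafted as the sister of a named clade or leaf. Real implementations operate on branches of large rooted or unrooted trees, search over many candidate moves and weight them by branch support; none of that is modelled here.

```python
def prune(tree, target):
    """Remove the subtree equal to `target` and collapse the leftover degree-one node."""
    if tree == target:
        return None
    if not isinstance(tree, tuple):
        return tree
    kept = [p for p in (prune(child, target) for child in tree) if p is not None]
    return kept[0] if len(kept) == 1 else tuple(kept)

def regraft(tree, subtree, attach_to):
    """Insert `subtree` as the sister of the clade or leaf `attach_to`."""
    if tree == attach_to:
        return (tree, subtree)
    if not isinstance(tree, tuple):
        return tree
    return tuple(regraft(child, subtree, attach_to) for child in tree)

def spr(tree, subtree, attach_to):
    return regraft(prune(tree, subtree), subtree, attach_to)

species = ((("A", "B"), "C"), ("D", "E"))
print(spr(species, "B", "D"))   # (('A', 'C'), (('D', 'B'), 'E')): B regrafted as sister of D
```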
Because conversion of one tree to another by a minimum number of SPR operations is NP-hard, solving the problem becomes considerably more difficult as more nodes are considered. The computational challenge lies in finding the optimal edit path, i.e., the one that requires the fewest steps, and different strategies are used to solve the problem. For example, the HorizStory algorithm reduces the problem by first eliminating the consistent nodes; recursive pruning and regrafting then reconciles the reference tree with the gene tree, and the optimal edits are interpreted as HGT events. The SPR methods included in the supertree reconstruction package SPRSupertrees substantially decrease the time of the search for the optimal set of SPR operations by considering multiple localised sub-problems in large trees through a clustering approach. The T-REX web server includes a number of HGT detection methods (mostly SPR-based) and allows users to calculate the bootstrap support of the inferred transfers.
Model-based reconciliation methods
Reconciliation of gene and species trees entails mapping evolutionary events onto gene trees in a way that makes them concordant with the species tree. Different reconciliation models exist, differing in the types of events they consider to explain the incongruences between gene and species tree topologies. Early methods exclusively modelled horizontal transfers (T). More recent ones also account for duplication (D), loss (L), incomplete lineage sorting (ILS) or homologous recombination (HR) events. The difficulty is that, by allowing for multiple types of events, the number of possible reconciliations increases rapidly. For instance, a conflicting gene tree topology might be explained in terms of a single HGT event or in terms of multiple duplication and loss events. Both alternatives can be considered plausible reconciliations, depending on the frequency of these respective events along the species tree.
Reconciliation methods can rely on a parsimonious or a probabilistic framework to infer the most likely scenario(s), where the relative costs or probabilities of D, T and L events can be fixed a priori or estimated from the data. The space of DTL reconciliations and their parsimony costs, which can be extremely vast for large multi-copy gene family trees, can be efficiently explored through dynamic programming algorithms. In some programs, uncertain parts of the gene tree topology can be refined to better fit both an evolutionary scenario and the initial sequence alignment. More refined models account for the biased frequency of HGT between closely related lineages, reflecting the loss of efficiency of HR with phylogenetic distance, for ILS, or for the fact that the actual donors of most HGTs belong to extinct or unsampled lineages. Further extensions of DTL models are being developed towards an integrated description of genome evolution processes. In particular, some of them consider horizontal transfer at multiple scales, modelling independent evolution of gene fragments or recognising co-evolution of several genes (e.g., due to co-transfer) within and across genomes.
Implicit phylogenetic methods
In contrast to explicit phylogenetic methods, which compare the agreement between gene and species trees, implicit phylogenetic methods compare evolutionary distances or sequence similarity. Here, an unexpectedly short or long distance from a given reference compared to the average can be suggestive of an HGT event. Because tree construction is not required, implicit approaches tend to be simpler and faster than explicit methods.
However, implicit methods can be limited by disparities between the underlying correct phylogeny and the evolutionary distances considered. For instance, the most similar sequence as obtained by the highest-scoring BLAST hit is not always the evolutionarily closest one.
Top sequence match in a distant species
A simple way of identifying HGT events is by looking for high-scoring sequence matches in distantly related species. For example, an analysis of the top BLAST hits of protein sequences in the bacteria Thermotoga maritima revealed that most hits were in archaea rather than closely related bacteria, suggesting extensive HGT between the two; these predictions were later supported by an analysis of the structural features of the DNA molecule.
However, this method is limited to detecting relatively recent HGT events. Indeed, if the HGT occurred in the common ancestor of two or more species included in the database, the closest hit will reside within that clade and the HGT will therefore not be detected by the method. Thus, the threshold on the minimum number of foreign top BLAST hits required before a gene is called transferred depends strongly on the taxonomic coverage of sequence databases, and experimental settings may need to be defined in an ad hoc way.
Discrepancy between gene and species distances
The molecular clock hypothesis posits that homologous genes evolve at an approximately constant rate across different species. If one only considers homologous genes related through speciation events (referred to as "orthologous" genes), their underlying tree should by definition correspond to the species tree. Therefore, assuming a molecular clock, the evolutionary distance between orthologous genes should be approximately proportional to the evolutionary distances between their respective species. If a putative group of orthologs contains xenologs (pairs of genes related through an HGT), the proportionality of evolutionary distances may only hold among the orthologs, not the xenologs.
Simple approaches compare the distribution of similarity scores of particular sequences and their orthologous counterparts in other species; HGT are inferred from outliers. The more sophisticated DLIGHT ('Distance Likelihood-based Inference of Genes Horizontally Transferred') method considers simultaneously the effect of HGT on all sequences within groups of putative orthologs: if a likelihood-ratio test of the HGT hypothesis versus a hypothesis of no HGT is significant, a putative HGT event is inferred. In addition, the method allows inference of potential donor and recipient species and provides an estimation of the time since the HGT event.
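The outlier idea can be sketched as follows. Assuming a rough molecular clock, the ratio of gene distance to reference (species) distance should be roughly constant across species pairs, so pairs whose ratio falls far from the median are candidate xenologs. The 0.5 and 2.0 cutoffs are arbitrary illustrative thresholds; likelihood-based methods such as DLIGHT replace this heuristic with a formal test.

```python
def flag_xenolog_pairs(gene_dist, species_dist, low=0.5, high=2.0):
    """
    gene_dist and species_dist map (species1, species2) pairs to evolutionary distances.
    Flags pairs whose gene/species distance ratio deviates strongly from the median,
    e.g. genes that are unexpectedly similar between distantly related species.
    """
    ratios = {pair: gene_dist[pair] / species_dist[pair]
              for pair in gene_dist if species_dist.get(pair)}
    if not ratios:
        return []
    median = sorted(ratios.values())[len(ratios) // 2]
    return [pair for pair, r in ratios.items() if r < low * median or r > high * median]
```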
Phylogenetic profiles
A group of orthologous or homologous genes can be analysed in terms of the presence or absence of group members in the reference genomes; such patterns are called phylogenetic profiles. To find HGT events, phylogenetic profiles are scanned for an unusual distribution of genes. Absence of a homolog in some members of a group of closely related species is an indication that the examined gene might have arrived via an HGT event. For example, the three facultatively symbiotic Frankia sp. strains are of strikingly different sizes: 5.43 Mbp, 7.50 Mbp and 9.04 Mbp, depending on their range of hosts. A marked proportion of strain-specific genes had no significant hit in the reference database and were possibly acquired by HGT from other bacteria. Similarly, the three phenotypically diverse Escherichia coli strains (uropathogenic, enterohemorrhagic and benign) share about 40% of the total combined gene pool, with the other 60% being strain-specific genes and consequently HGT candidates. Further evidence that these genes resulted from HGT was their strikingly different codon usage patterns compared with the core genes, and a lack of gene order conservation (order conservation is typical of vertically evolved genes). The presence/absence of homologs (or their effective count) can thus be used by programs to reconstruct the most likely evolutionary scenario along the species tree. Just as with reconciliation methods, this can be achieved through parsimonious or probabilistic estimation of the number of gene gain and loss events. Models can be extended by adding processes, such as the truncation of genes, as well as by modelling the heterogeneity of rates of gain and loss across lineages and/or gene families.
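A minimal sketch of profile screening (the species and family names below are hypothetical): a family present in the focal genome but absent from its close relatives has a patchy distribution and is a candidate for acquisition by HGT. As noted above, real programs go further and reconstruct gain and loss scenarios along the species tree.

```python
def patchy_families(profiles, focal, close_relatives, max_close_hits=0):
    """profiles: {family: set of species with a member}; return families present in
    the focal species but (nearly) absent from its close relatives."""
    candidates = []
    for family, present_in in profiles.items():
        if focal not in present_in:
            continue
        close_hits = sum(1 for sp in close_relatives if sp in present_in)
        if close_hits <= max_close_hits:
            candidates.append(family)
    return candidates

profiles = {
    "famA": {"E_coli_K12", "E_coli_O157", "Salmonella"},
    "famB": {"E_coli_K12", "Vibrio", "Pseudomonas"},   # patchy: absent in close relatives
}
print(patchy_families(profiles, "E_coli_K12", ["E_coli_O157", "Salmonella"]))   # ['famB']
```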
Clusters of polymorphic sites
Genes are commonly regarded as the basic units transferred through an HGT event. However, it is also possible for HGT to occur within genes. For example, it has been shown that horizontal transfer between closely related species results in more exchange of ORF fragments, a type of transfer called gene conversion, mediated by homologous recombination. The analysis of a group of four Escherichia coli and two Shigella flexneri strains revealed that the sequence stretches common to all six strains contain polymorphic sites, consequences of homologous recombination. Clusters with an excess of polymorphic sites can thus be used to detect tracts of DNA recombined with a distant relative. This method of detection is, however, restricted to the sites common to all analysed sequences, limiting the analysis to a group of closely related organisms.
Evaluation
The existence of the numerous and varied methods to infer HGT raises the question of how to validate individual inferences and of how to compare the different methods.
A main problem is that, as with other types of phylogenetic inferences, the actual evolutionary history cannot be established with certainty. As a result, it is difficult to obtain a representative test set of HGT events. Furthermore, HGT inference methods vary considerably in the information they consider and often identify inconsistent groups of HGT candidates: it is not clear to what extent taking the intersection, the union, or some other combination of the individual methods affects the false positive and false negative rates.
Parametric and phylogenetic methods draw on different sources of information; it is therefore difficult to make general statements about their relative performance. Conceptual arguments can however be invoked. While parametric methods are limited to the analysis of single or pairs of genomes, phylogenetic methods provide a natural framework to take advantage of the information contained in multiple genomes. In many cases, segments of genomes inferred as HGT based on their anomalous composition can also be recognised as such on the basis of phylogenetic analyses or through their mere absence in genomes of related organisms. In addition, phylogenetic methods rely on explicit models of sequence evolution, which provide a well-understood framework for parameter inference, hypothesis testing, and model selection. This is reflected in the literature, which tends to favour phylogenetic methods as the standard of proof for HGT. The use of phylogenetic methods thus appears to be the preferred standard, especially given that the increase in computational power coupled with algorithmic improvements has made them more tractable, and that the ever denser sampling of genomes lends more power to these tests.
Considering phylogenetic methods, several approaches to validating individual HGT inferences and benchmarking methods have been adopted, typically relying on various forms of simulation. Because the truth is known in simulation, the numbers of false positives and false negatives are straightforward to compute. However, simulation does not trivially resolve the problem, because the true extent of HGT in nature remains largely unknown and specifying rates of HGT in the simulation model is always hazardous. Nonetheless, studies comparing several phylogenetic methods in a simulation framework can provide quantitative assessments of their respective performances and thus help the biologist choose appropriate tools objectively.
Standard tools to simulate sequence evolution along trees, such as INDELible or PhyloSim, can be adapted to simulate HGT. HGT events cause the relevant gene trees to conflict with the species tree. Such HGT events can be simulated through subtree pruning and regrafting rearrangements of the species tree. However, it is important to simulate data that are realistic enough to be representative of the challenge posed by real datasets, and simulation under complex models is thus preferable. A model was developed to simulate gene trees with heterogeneous substitution processes in addition to the occurrence of transfer, accounting for the fact that transfers can come from now-extinct donor lineages. Alternatively, the genome evolution simulator ALF directly generates gene families subject to HGT, accounting for a whole range of evolutionary forces at the base level in the context of a complete genome. Given simulated sequences containing HGT, analysing them with the methods of interest and comparing the results with the known truth permits study of their performance. Similarly, testing the methods on sequences known not to contain HGT enables the study of false positive rates.
Simulation of HGT events can also be performed by manipulating the biological sequences themselves. Artificial chimeric genomes can be obtained by inserting known foreign genes into random positions of a host genome. The donor sequences are inserted into the host unchanged or can be further evolved by simulation, e.g., using the tools described above.
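A sketch of this chimeric-genome construction is shown below; it records the true insertion coordinates so that predictions can later be scored against them. Further evolution (amelioration) of the inserted genes, mentioned above, is not modelled here.

```python
import random

def make_chimeric_genome(host, donor_genes, seed=None):
    """Insert foreign genes at random host positions; return the chimeric genome and
    the true (start, end) coordinates of each insertion in the final sequence."""
    rng = random.Random(seed)
    cuts = sorted(rng.randrange(len(host) + 1) for _ in donor_genes)
    pieces, truth, prev, offset = [], [], 0, 0
    for cut, gene in zip(cuts, donor_genes):
        pieces.append(host[prev:cut])
        truth.append((cut + offset, cut + offset + len(gene)))
        pieces.append(gene)
        offset += len(gene)
        prev = cut
    pieces.append(host[prev:])
    return "".join(pieces), truth

genome, truth = make_chimeric_genome("A" * 50, ["GGGG", "CCCC"], seed=1)
```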
One important caveat to simulation as a way to assess different methods is that simulation is based on strong simplifying assumptions which may favour particular methods.
See also
Index of evolutionary biology articles
Horizontal gene transfer
Horizontal gene transfer in evolution
Phylogenetic tree
Phylogenetic network
Bioinformatics
Comparative genomics
Homology (biology)
References
Computational biology | Inferring horizontal gene transfer | Biology | 6,356 |
6,416,001 | https://en.wikipedia.org/wiki/Dimethyl%20acetylenedicarboxylate | Dimethyl acetylenedicarboxylate (DMAD) is an organic compound with the formula CH3O2CC≡CCO2CH3. It is a diester in which the ester groups are conjugated with a C-C triple bond. As such, the molecule is highly electrophilic, and is widely employed as a dienophile in cycloaddition reactions, such as the Diels-Alder reaction. It is also a potent Michael acceptor. This compound exists as a colorless liquid at room temperature. This compound was used in the preparation of nedocromil.
Preparation
Although inexpensively available, DMAD is prepared today as it was originally. Maleic acid is brominated and the resulting dibromosuccinic acid is dehydrohalogenated with potassium hydroxide yielding acetylenedicarboxylic acid. The acid is then esterified with methanol and sulfuric acid as a catalyst:
Safety
DMAD is a lachrymator and a vesicant.
References
Alkyne derivatives
Methyl esters
Carboxylate esters | Dimethyl acetylenedicarboxylate | Chemistry | 232 |
65,434,892 | https://en.wikipedia.org/wiki/NGC%204365 | NGC 4365 is an elliptical galaxy located in the constellation Virgo. It was discovered by William Herschel on April 13, 1784.
NGC 4365 is the central galaxy of W' cloud, a cloud of galaxies about 6 megaparsecs behind (further from us than) the Virgo Supercluster.
NGC 4365 has a kinematically distinct, counter-rotating stellar core region, which provides strong evidence for the theory that elliptical galaxies grow through mergers. The mean age of its stellar population is greater than 12 billion years, and it retains a triaxial structure that has remained largely unchanged for 12 billion years. Because supermassive black holes in the centers of galaxies tend to scatter stars into chaotic new orbits, the longevity of NGC 4365's triaxial structure and kinematically distinct stellar populations indicates that it cannot have a supermassive black hole with a mass greater than .
There is a stream of globular clusters connecting NGC 4365 to the neighboring compact S0 galaxy NGC 4342. It appears that NGC 4365 is stripping globular clusters and stars from its neighbor via tidal interaction.
References
External links
Virgo (constellation)
Elliptical galaxies
4365
Astronomical objects discovered in 1784
040375 | NGC 4365 | Astronomy | 252 |
3,992,164 | https://en.wikipedia.org/wiki/Oldest%20people | This is a list of tables of the oldest people in the world in ordinal ranks. To avoid including false or unconfirmed claims of old age, names here are restricted to those people whose ages have been validated by an international body dealing in longevity research, such as the Gerontology Research Group or Guinness World Records, and others who have otherwise been reliably sourced.
The longest documented and verified human lifespan is that of Jeanne Calment of France (1875–1997), a woman who lived to the age of 122 years and 164 days. As women live longer than men on average, women predominate in combined records. The longest lifespan for a man is that of Jiroemon Kimura of Japan (1897–2013), who lived to the age of 116 years and 54 days.
The oldest living person in the world whose age has been validated is -year-old Inah Canabarro Lucas of Brazil, born 8 June 1908. The oldest living verified man is -year-old João Marinho Neto of Brazil, born 5 October 1912.
Ten oldest verified people ever
Systematic verification of longevity has only been practiced since the 1950s and only in certain parts of the world. All ten oldest verified people ever are female.
The longest documented and verified human lifespan is that of Jeanne Calment of France, a woman who lived to age 122 years and 164 days. She received news media attention in 1985, after turning 110. Calment's claim was investigated and authenticated by Jean-Marie Robine and Dr. Michel Allard for the Gerontology Research Group (GRG). Her longevity claim was put into question in 2018, but the original assessing team stood by their judgement.
Oldest people (all women)
Oldest men
aBranyas was born in the United States.
bMortensen was born in Denmark.
Ten oldest living people
Oldest living people (all women)
Oldest living men
Chronological list of the oldest known living person since 1951
This table lists the sequence of the world's oldest known living person from 1951 to present, according to GRG research and the Guinness World Records. Due to the life expectancy difference between sexes, nearly all the oldest living people have been women (thus the maximum life span is guided by the female numbers); a sequence of the oldest living men is provided below the main list.
Chronological list of the oldest living man since 1951
This table lists the sequence of the world's oldest known living man from 1951 to present.
References
Gerontology
Oldest organisms
Record progressions | Oldest people | Biology | 528 |
1,114,367 | https://en.wikipedia.org/wiki/Optical%20parametric%20amplifier | An optical parametric amplifier, abbreviated OPA, is a laser light source that emits light of variable wavelengths by an optical parametric amplification process. It is essentially the same as an optical parametric oscillator, but without the optical cavity (i.e., the light beams pass through the apparatus just once or twice, rather than many times).
Optical parametric generation (OPG)
Optical parametric generation (OPG) (also called "optical parametric fluorescence", or "spontaneous parametric down conversion") often precedes optical parametric amplification.
In optical parametric generation, the input is one light beam of frequency ωp, and the output is two light beams of lower frequencies ωs and ωi, with the requirement ωp=ωs+ωi. These two lower-frequency beams are called the "signal" and "idler", respectively.
This light emission is based on the nonlinear optical principle. The photon of an incident laser pulse (pump) is, by a nonlinear optical crystal, divided into two lower-energy photons. The wavelengths of the signal and the idler are determined by the phase matching condition, which is changed, e.g. by temperature or, in bulk optics, by the angle between the incident pump laser ray and the optical axes of the crystal. The wavelengths of the signal and the idler photons can, therefore, be tuned by changing the phase matching condition.
Optical parametric amplification (OPA)
The output beams in optical parametric generation are usually relatively weak and have relatively spread-out direction and frequency. This problem is solved by using optical parametric amplification (OPA), also called difference frequency generation, as a second stage after the OPG.
In an OPA, the input is two light beams, of frequency ωp and ωs. The OPA will make the pump beam (ωp) weaker, and amplify the signal beam (ωs), and also create a new, so-called idler beam at the frequency ωi with ωp=ωs+ωi.
In the OPA, the pump and signal beams usually travel collinearly through a nonlinear optical crystal. Phase matching is required for the process to work well.
Because the wavelengths of an OPG+OPA system can be varied (unlike most lasers which have a fixed wavelength), they are used in many spectroscopic methods.
As an example of OPA, the incident pump pulse is the 800 nm (12500 cm⁻¹) output of a Ti:sapphire laser, and the two outputs, signal and idler, are in the near-infrared region, with wavenumbers that sum to 12500 cm⁻¹.
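Since photon energies add (ωp = ωs + ωi), vacuum wavenumbers (reciprocal wavelengths) add as well, so the idler wavelength follows directly from the pump and signal wavelengths. The snippet below pairs the 800 nm pump from the example above with a hypothetical 1300 nm signal.

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    """Energy conservation: 1/lambda_pump = 1/lambda_signal + 1/lambda_idler (vacuum wavelengths)."""
    inv_idler = 1.0 / pump_nm - 1.0 / signal_nm
    if inv_idler <= 0:
        raise ValueError("the signal must have a longer wavelength (lower energy) than the pump")
    return 1.0 / inv_idler

print(round(idler_wavelength_nm(800.0, 1300.0), 1))   # ~2080 nm; 7692 + 4808 = 12500 cm^-1
```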
Noncollinear OPA (NOPA)
Because most nonlinear crystals are birefringent, beams that are collinear inside a crystal may not be collinear outside of it. The phase fronts (wave vector) do not point in the same direction as the energy flow (Poynting vector) because of walk-off.
The phase matching angle makes possible any gain at all (0th order). In a collinear setup, the freedom to choose the center wavelength allows a constant gain up to first order in wavelength. Noncollinear OPAs were developed to have an additional degree of freedom, allowing constant gain up to second order in wavelength. The optimal parameters are 4 degrees of noncollinearity, β-barium borate (BBO) as the material, a 400-nm pump wavelength, and signal around 800 nm (and can be tunable in the range 605-750 nm with sub-10 fs pulse width which allows exploring the ultrafast dynamics of large molecules) This generates a bandwidth 3 times as large of that of a Ti-sapphire-amplifier. The first order is mathematically equivalent to some properties of the group velocities involved, but this does not mean that pump and signal have the same group velocity. After propagation through 1-mm BBO, a short pump pulse no longer overlaps with the signal. Therefore, chirped pulse amplification must be used in situations requiring large gain amplification in long crystals. Long crystals introduce such a large chirp that a compressor is needed anyway. An extreme chirp can lengthen a 20-fs seed pulse to 50 ps, making it suitable for use as the pump. Unchirped 50-ps pulses with high energy can be generated from rare earth-based lasers.
The optical parametric amplifier has a wider bandwidth than a Ti:sapphire amplifier, which in turn has a wider bandwidth than an optical parametric oscillator, because of white-light generation that can be even one octave wide (for example using nonlinear self-phase modulation in neon gas). Therefore, a subband can be selected and fairly short pulses can still be generated.
The higher gain per mm of BBO compared to Ti:sapphire and, more importantly, the lower amplified spontaneous emission allow for higher overall gain.
Interlacing compressors and OPA leads to tilted pulses.
Multipass OPA
Multipass can be used for
walk-off and group-velocity (dispersion) compensation;
keeping the intensity constant as the signal power increases, which requires an exponentially growing beam cross-section; this can be achieved by means of lenses, which also refocus the beams so that the beam waist lies in the crystal;
reduction of OPG by increasing the pump power in proportion to the signal and splitting the pump across the passes of the signal;
broadband amplification by dumping the idler and optionally individually detuning the crystals;
complete pump depletion by offsetting the pump and signal in time and space at every pass and feeding one pump pulse through all passes;
high gain with BBO, since BBO is only available in small dimensions.
Since the direction of the beams is fixed, multiple passes cannot be overlapped into a single small crystal as in a Ti:Sa amplifier, unless one uses noncollinear geometry and aligns the amplified beams onto the parametric fluorescence cone produced by the pump pulse.
Relationship to parametric amplifiers in electronics
The idea of parametric amplification first arose at much lower frequencies: AC circuits, including radio frequency and microwave frequency (in the earliest investigations, sound waves were also studied). In these applications, typically a weak "signal" wave at frequency fs passes through a circuit element whose parameters are modulated by a strong pump signal (or "local oscillator") at frequency f (for example, the pump might modulate the capacitance of a varactor diode). The result is that some of the energy of the local oscillator gets transferred to the signal frequency fs, as well as to the difference ("idler") frequency f-fs. The term parametric amplifier is used because the parameters of the circuit are varied.
The optical case uses the same basic principle—transferring energy from a wave at the pump frequency to waves at the signal and idler frequencies—so it took the same name.
See also
Optical parametric oscillator
SU(1,1) interferometry
Footnotes and references
Boichenko, V.L.; Zasavitskii, I.I.; Kosichkin, Yu.V.; Tarasevich, A.P.; Tunkin, V.G.; Shotov, A.P. (1984) "A picosecond optical parametric oscillator with amplification of the tunable semiconductor laser radiation", Soviet Journal of Quantum Electronics 11 (1): 141–143.
Magnitskii, S.A.; Malakhova, V.I.; Tarasevich, A.P.; Tunkin, V.G.; Yakubovich, S.D. (1986) "Generation of bandwidth-limited tunable picosecond pulses by injection-locked optical parametric oscillator", Optics Letters 11 (1): 18–20.
External links
NOPA and Group Velocity
Rainbow in photo
Nonlinear optics
Electronic amplifiers
Laser science | Optical parametric amplifier | Technology | 1,677 |
3,601,611 | https://en.wikipedia.org/wiki/Visual%20pollution | Visual pollution is the degradation of the visual environment due to unattractive or disruptive elements that negatively impact the aesthetic quality of an area. It can affect urban, suburban, and natural landscapes. It also refers to the impairment of landscape quality that results when multiple sources of pollution compound one another. Visual pollution disturbs the functionality and enjoyment of a given area, limiting the ability of the wider ecological system, from humans to animals, to prosper and thrive within it because of disruptions to their natural and human-made habitats. Although visual pollution can be caused by natural sources (e.g. wildfires), it predominantly stems from human sources.
As such, visual pollution is not considered a primary source of pollution but a secondary symptom of intersecting pollution sources. Its secondary nature and subjective aspect sometimes make it difficult to measure and engage with (e.g. within quantitative figures for policymakers). However, the history of the word pollution, and of pollution's effects over time, shows that every form of pollution can be categorised and studied through its three main characteristics, namely being contextual, subjective and complex. Frameworks for measurement have been established and include public opinion polling and surveys, visual comparison, spatial metrics, and ethnographic work.
Visual pollution can manifest across levels of analysis, from micro instances that affect the individual to macro issues that impact society as a whole. Instances of visual pollution range from plastic bags stuck in trees and advertisements with contrasting colors and content, which create an oversaturation of anthropogenic visual information within a landscape, to community-wide impacts of overcrowding, overhead power lines, or congestion. Poor urban planning and irregular built-up environments contrast with natural spaces, creating alienating landscapes. Using Pakistan as a case study, a detailed analysis of all visual pollution objects was published in 2022.
Visual pollution has primary effects such as distraction, eye fatigue, decreases in opinion diversity, and loss of identity. It has also been shown to increase biological stress responses and impair balance. As a secondary form of pollution, these effects also compound with the impact of the primary sources, such as light or noise pollution, which can create multi-layered public health concerns and crises.
Sources
Local managers of urban areas sometimes lack control over what is built and assembled in public places. As businesses look for ways to increase profits, the cleanliness, architecture, logic and use of space in urban areas suffer from visual clutter. Variations in the built environment are determined by the placement of street furniture such as public transport stations, garbage cans, large panels and stalls. Insensitivity of local administrations is another cause of visual pollution. For example, poorly planned buildings and transportation systems create visual pollution. High-rise buildings, if not planned properly or sufficiently, can bring adverse change to the visual and physical characteristics of a city, which may reduce that city's readability.
A frequent criticism of advertising is that there is too much of it. Billboards, for example, have been alleged to distract drivers, corrupt public taste, promote meaningless and wasteful consumerism and clutter the land. See highway beautification. Vandalism, in the form of graffiti, is defined as street markings, offensive, inappropriate, and tasteless messages made without the owner's consent. Graffiti adds to visual clutter as it disturbs the view.
Visual pollution assessment
The process of measuring, quantifying or assessing the level of visual pollution at any place is called a visual pollution assessment (VPA). In the past few years, the demand for methods to assess visual pollution in communities has increased. Recently, a tool was introduced that measures the presence of various visual pollution objects (VPOs) and the resulting level of visual pollution. A detailed analysis of visual pollution, its context, case studies and analysis using the tool is discussed in Visual Pollution: Concepts, Practices and Management Framework by Nawaz et al.
Prevention
United States
In the United States, there are several initiatives gradually taking place to prevent visual pollution. The Federal Highway Beautification Act of 1965 limits placement of billboards on Interstate highways and federally aided roads. It has dramatically reduced the amount of billboards placed on these roads. Another highway bill, the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 has made transportation facilities sync with the needs of communities. This bill created a system of state and national scenic byways and provided funds for biking trails, historic preservation and scenic conservation.
Businesses situated near an interstate can create problems of advertising through large billboards; however, now an alternative solution for advertisers is gradually eliminating the problem. For example, logo signs that provide directional information for travelers without disfiguring the landscape are increasing and are a step toward decreasing visual pollution on highways in America.
Brazil
In September 2006, São Paulo passed the Cidade Limpa (Clean City Law), outlawing the use of all outdoor advertisements, including on billboards, transit, and in front of stores.
See also
Clutter (marketing)
Eyesore
Light pollution
Noise pollution
Cidade Limpa
References
External links
Pollution
Advertising
Urban planning | Visual pollution | Engineering | 1,065 |
21,028,038 | https://en.wikipedia.org/wiki/Rapid%20Boot | Rapid Boot is an EFI BIOS alternative using a Linux kernel (in the BIOS flash part) developed by Intel Corporation, primarily intended for computer clusters.
See also
Coreboot
Das U-Boot
References
Anton Borisov (6 January 2009) The Open Source BIOS is Ten. An interview with the coreboot developers, The H
External links
BIOS | Rapid Boot | Technology | 73 |
18,618,136 | https://en.wikipedia.org/wiki/Chichen%20Itza | Chichén Itzá (often spelled Chichen Itza in English and traditional Yucatec Maya) was a large pre-Columbian city built by the Maya people of the Terminal Classic period. The archeological site is located in Tinúm Municipality, Yucatán State, Mexico.
Chichén Itzá was a major focal point in the Northern Maya Lowlands from the Late Classic (c. AD 600–900) through the Terminal Classic (c. AD 800–900) and into the early portion of the Postclassic period (c. AD 900–1200). The site exhibits a multitude of architectural styles, reminiscent of styles seen in central Mexico and of the Puuc and Chenes styles of the Northern Maya lowlands. The presence of central Mexican styles was once thought to have been representative of direct migration or even conquest from central Mexico, but most contemporary interpretations view the presence of these non-Maya styles more as the result of cultural diffusion.
Chichén Itzá was one of the largest Maya cities and it was likely to have been one of the mythical great cities, or Tollans, referred to in later Mesoamerican literature. The city may have had the most diverse population in the Maya world, a factor that could have contributed to the variety of architectural styles at the site.
The ruins of Chichén Itzá are federal property, and the site's stewardship is maintained by Mexico's Instituto Nacional de Antropología e Historia (National Institute of Anthropology and History). The land under the monuments had been privately owned until 29 March 2010, when it was purchased by the state of Yucatán.
Chichén Itzá is one of the most visited archeological sites in Mexico with over 2.6 million tourists in 2017.
Name and orthography
The Maya name "Chichen Itza" means "At the mouth of the well of the Itza." This derives from chi, meaning "mouth" or "edge", and chʼen or chʼeʼen, meaning "well". Itzá is the name of an ethnic-lineage group that gained political and economic dominance of the northern peninsula. One possible translation for Itza is "enchanter (or enchantment) of the water," from its (itz), "sorcerer", and ha, "water".
The name is spelled Chichén Itzá in Spanish, and the accents are sometimes maintained in other languages to show that both parts of the name are stressed on their final syllable. Other references prefer the modern Maya orthography, Chichʼen Itzaʼ. This form preserves the phonemic distinction between chʼ and ch, since the base word chʼeʼen (which, however, is not stressed in Maya) begins with a postalveolar ejective affricate consonant. Traditional Yucatec Maya spelling in Latin letters, used from the 16th through mid 20th century, spelled it as "Chichen Itza" (as accents on the last syllable are usual for the language, they are not indicated as they are in Spanish). The word "Itzaʼ" has a high tone on the "a" followed by a glottal stop (indicated by the apostrophe).
Evidence in the Chilam Balam books indicates another, earlier name for this city prior to the arrival of the Itza hegemony in northern Yucatán. While most sources agree the first word means seven, there is considerable debate as to the correct translation of the rest. This earlier name is difficult to define because of the absence of a single standard of orthography, but it is represented variously as Uuc Yabnal ("Seven Great House"), Uuc Hab Nal ("Seven Bushy Places"), Uucyabnal ("Seven Great Rulers") or Uc Abnal ("Seven Lines of Abnal"). This name, dating to the Late Classic Period, is recorded both in the book of Chilam Balam de Chumayel and in hieroglyphic texts in the ruins.
Location
Chichén Itzá is located in the eastern portion of Yucatán state in Mexico. The northern Yucatán Peninsula is karst, and the rivers in the interior all run underground. There are four visible, natural sink holes, called cenotes, that could have provided plentiful water year round at Chichen, making it attractive for settlement. Of these cenotes, the "Cenote Sagrado" or "Sacred Cenote" (also variously known as the Sacred Well or Well of Sacrifice), is the most famous. In 2015, scientists determined that there is a hidden cenote under the Temple of Kukulkan, which has never been seen by archeologists.
According to post-Conquest sources (Maya and Spanish), pre-Columbian Maya sacrificed objects and human beings into the cenote as a form of worship to the Maya rain god Chaac. Edward Herbert Thompson dredged the Cenote Sagrado from 1904 to 1910, and recovered artifacts of gold, jade, pottery and incense, as well as human remains. A study of human remains taken from the Cenote Sagrado found that they had wounds consistent with human sacrifice.
Political organization
Several archeologists in the late 1980s suggested that unlike previous Maya polities of the Early Classic, Chichén Itzá may not have been governed by an individual ruler or a single dynastic lineage. Instead, the city's political organization could have been structured by a "multepal" system, which is characterized as rulership through council composed of members of elite ruling lineages.
This theory was popular in the 1990s, but in recent years, the research that supported the concept of the "multepal" system has been called into question, if not discredited. The current belief trend in Maya scholarship is toward the more traditional model of the Maya kingdoms of the Classic Period southern lowlands in Mexico.
Economy
Chichén Itzá was a major economic power in the northern Maya lowlands during its apogee. Participating in the water-borne circum-peninsular trade route through its port site of Isla Cerritos on the north coast, Chichen Itza was able to obtain locally unavailable resources from distant areas such as obsidian from central Mexico and gold from southern Central America.
Between AD 900 and 1050 Chichén Itzá expanded to become a powerful regional capital controlling north and central Yucatán. It established Isla Cerritos as a trading port.
History
The layout of Chichén Itzá site core developed during its earlier phase of occupation, between 750 and 900 AD. Its final layout was developed after 900 AD, and the 10th century saw the rise of the city as a regional capital controlling the area from central Yucatán to the north coast, with its power extending down the east and west coasts of the peninsula. The earliest hieroglyphic date discovered at Chichen Itza is equivalent to 832 AD, while the last known date was recorded in the Osario temple in 998.
Establishment
The Late Classic city was centered upon the area to the southwest of the Xtoloc cenote, with the main architecture represented by the substructures now underlying the Las Monjas and Observatorio and the basal platform upon which they were built.
Ascendancy
Chichén Itzá rose to regional prominence toward the end of the Early Classic period (roughly 600 AD). It was, however, toward the end of the Late Classic and into the early part of the Terminal Classic that the site became a major regional capital, centralizing and dominating political, sociocultural, economic, and ideological life in the northern Maya lowlands. The ascension of Chichen Itza roughly correlates with the decline and fragmentation of the major centers of the southern Maya lowlands.
As Chichén Itzá rose to prominence, the cities of Yaxuna (to the south) and Coba (to the east) were suffering decline. These two cities had been mutual allies, with Yaxuna dependent upon Coba. At some point in the 10th century Coba lost a significant portion of its territory, isolating Yaxuna, and Chichen Itza may have directly contributed to the collapse of both cities.
Decline
According to some colonial Mayan sources (e.g., the Book of Chilam Balam of Chumayel), Hunac Ceel, ruler of Mayapan, conquered Chichén Itzá in the 13th century. Hunac Ceel supposedly prophesied his own rise to power. According to custom at the time, individuals thrown into the Cenote Sagrado were believed to have the power of prophecy if they survived. During one such ceremony, the chronicles state, there were no survivors, so Hunac Ceel leaped into the Cenote Sagrado, and when removed, prophesied his own ascension.
While there is some archeological evidence that indicates Chichén Itzá was at one time looted and sacked, there appears to be greater evidence that it could not have been by Mayapan, at least not when Chichén Itzá was an active urban center. Archeological data now indicates that Chichen Itza declined as a regional center by 1100, before the rise of Mayapan. Ongoing research at the site of Mayapan may help resolve this chronological conundrum.
After Chichén Itzá elite activities ceased, the city may not have been abandoned. When the Spanish arrived, they found a thriving local population, although it is not clear from Spanish sources if these Maya were living in Chichen Itza proper, or a nearby settlement. The relatively high population density in the region was a factor in the conquistadors' decision to locate a capital there. According to post-Conquest sources, both Spanish and Maya, the Cenote Sagrado remained a place of pilgrimage.
Spanish conquest
In 1526, Spanish Conquistador Francisco de Montejo (a veteran of the Grijalva and Cortés expeditions) successfully petitioned the King of Spain for a charter to conquer Yucatán. His first campaign in 1527, which covered much of the Yucatán Peninsula, decimated his forces but ended with the establishment of a small fort at Xaman Haʼ, south of what is today Cancún. Montejo returned to Yucatán in 1531 with reinforcements and established his main base at Campeche on the west coast. He sent his son, Francisco Montejo The Younger, in late 1532 to conquer the interior of the Yucatán Peninsula from the north. The objective from the beginning was to go to Chichén Itzá and establish a capital.
Montejo the Younger eventually arrived at Chichén Itzá, which he renamed Ciudad Real. At first he encountered no resistance, and set about dividing the lands around the city and awarding them to his soldiers. The Maya became more hostile over time, and eventually they laid siege to the Spanish, cutting off their supply line to the coast, and forcing them to barricade themselves among the ruins of the ancient city. Months passed, but no reinforcements arrived. Montejo the Younger attempted an all-out assault against the Maya and lost 150 of his remaining troops. He was forced to abandon Chichén Itzá in 1534 under cover of darkness. By 1535, all Spanish had been driven from the Yucatán Peninsula.
Montejo eventually returned to Yucatán and, by recruiting Maya from Campeche and Champoton, built a large Indio-Spanish army and conquered the peninsula. The Spanish crown later issued a land grant that included Chichen Itza and by 1588 it was a working cattle ranch.
Modern history
Chichén Itzá entered the popular imagination in 1843 with the book Incidents of Travel in Yucatan by John Lloyd Stephens (with illustrations by Frederick Catherwood). The book recounted Stephens' visit to Yucatán and his tour of Maya cities, including Chichén Itzá. The book prompted other explorations of the city. In 1860, Désiré Charnay surveyed Chichén Itzá and took numerous photographs that he published in Cités et ruines américaines (1863).
Visitors to Chichén Itzá during the 1870s and 1880s came with photographic equipment and recorded more accurately the condition of several buildings. In 1875, Augustus Le Plongeon and his wife Alice Dixon Le Plongeon visited Chichén, and excavated a statue of a figure on its back, knees drawn up, upper torso raised on its elbows with a plate on its stomach. Augustus Le Plongeon called it "Chaacmol" (later renamed "Chac Mool", which has been the term to describe all types of this statuary found in Mesoamerica). Teobert Maler and Alfred Maudslay explored Chichén in the 1880s and both spent several weeks at the site and took extensive photographs. Maudslay published the first long-form description of Chichen Itza in his book, Biologia Centrali-Americana.
In 1894, the United States Consul to Yucatán, Edward Herbert Thompson, purchased the Hacienda Chichén, which included the ruins of Chichen Itza. For 30 years, Thompson explored the ancient city. His discoveries included the earliest dated carving upon a lintel in the Temple of the Initial Series and the excavation of several graves in the Osario (High Priest's Temple). Thompson is most famous for dredging the Cenote Sagrado (Sacred Cenote) from 1904 to 1910, where he recovered artifacts of gold, copper and carved jade, as well as the first-ever examples of what were believed to be pre-Columbian Maya cloth and wooden weapons. Thompson shipped the bulk of the artifacts to the Peabody Museum at Harvard University.
In 1913, the Carnegie Institution accepted the proposal of archeologist Sylvanus G. Morley and committed to conduct long-term archeological research at Chichen Itza. The Mexican Revolution and the following government instability, as well as World War I, delayed the project by a decade.
In 1923, the Mexican government awarded the Carnegie Institution a ten-year permit (later extended by another ten years) to allow U.S. archeologists to conduct extensive excavation and restoration of Chichen Itza. Carnegie researchers excavated and restored the Temple of Warriors and the Caracol, among other major buildings. At the same time, the Mexican government excavated and restored El Castillo (Temple of Kukulcán) and the Great Ball Court.
In 1926, the Mexican government charged Edward Thompson with theft, claiming he stole the artifacts from the Cenote Sagrado and smuggled them out of the country. The government seized the Hacienda Chichén. Thompson, who was in the United States at the time, never returned to Yucatán. He wrote about his research and investigations of the Maya culture in a book People of the Serpent published in 1932. He died in New Jersey in 1935. In 1944 the Mexican Supreme Court ruled that Thompson had broken no laws and returned Chichen Itza to his heirs. The Thompsons sold the hacienda to tourism pioneer Fernando Barbachano Peon.
There have been two later expeditions to recover artifacts from the Cenote Sagrado, in 1961 and 1967. The first was sponsored by the National Geographic, and the second by private interests. Both projects were supervised by Mexico's National Institute of Anthropology and History (INAH). INAH has conducted an ongoing effort to excavate and restore other monuments in the archeological zone, including the Osario, Akab Dzib, and several buildings in Chichén Viejo (Old Chichen).
In 2009, to investigate construction that predated El Castillo, Yucatec archeologists began excavations adjacent to El Castillo under the direction of Rafael (Rach) Cobos.
Site description
Chichen Itza was one of the largest Maya cities, with the relatively densely clustered architecture of the site core covering an area of at least . Smaller scale residential architecture extends for an unknown distance beyond this. The city was built upon broken terrain, which was artificially levelled in order to build the major architectural groups, with the greatest effort being expended in the levelling of the areas for the Castillo pyramid, and the Las Monjas, Osario and Main Southwest groups.
The site contains many fine stone buildings in various states of preservation, and many have been restored. The buildings were connected by a dense network of paved causeways, called sacbeob. Archeologists have identified over 80 sacbeob criss-crossing the site, and extending in all directions from the city. Many of these stone buildings were originally painted in red, green, blue and purple colors, with pigments chosen according to what was most easily available in the area. The site would therefore have appeared far more colorful than it does today. As with Gothic cathedrals in Europe, the colors gave the buildings a greater sense of completeness and contributed greatly to their symbolic impact.
The architecture encompasses a number of styles, including the Puuc and Chenes styles of the northern Yucatán Peninsula. The buildings of Chichen Itza are grouped in a series of architectonic sets, and each set was at one time separated from the other by a series of low walls. The three best known of these complexes are the Great North Platform, which includes the monuments of the Temple of Kukulcán (El Castillo), Temple of Warriors and the Great Ball Court; The Osario Group, which includes the pyramid of the same name as well as the Temple of Xtoloc; and the Central Group, which includes the Caracol, Las Monjas, and Akab Dzib.
South of Las Monjas, in an area known as Chichén Viejo (Old Chichén) and only open to archeologists, are several other complexes, such as the Group of the Initial Series, Group of the Lintels, and Group of the Old Castle.
Architectural styles
The Puuc-style architecture is concentrated in the Old Chichen area, and also in the earlier structures in the Nunnery Group (including the Las Monjas, Annex and La Iglesia buildings); it is also represented in the Akab Dzib structure. The Puuc-style buildings feature the usual mosaic-decorated upper façades characteristic of the style but differ from the architecture of the Puuc heartland in their block masonry walls, as opposed to the fine veneers of the Puuc region proper.
At least one structure in the Las Monjas Group features an ornate façade and masked doorway that are typical examples of Chenes-style architecture, a style centered upon a region in the north of Campeche state, lying between the Puuc and Río Bec regions.
Those structures with sculpted hieroglyphic script are concentrated in certain areas of the site, with the most important being the Las Monjas group.
Architectural groups
Great North Platform
Temple of Kukulcán (El Castillo)
Dominating the North Platform of Chichen Itza is the Temple of Kukulcán (a Maya feathered serpent deity similar to the Aztec Quetzalcoatl). The temple was identified by the first Spaniards to see it as El Castillo ("the castle"), and it is regularly referred to as such. This step pyramid stands about high and consists of a series of nine square terraces, each approximately high, with a high temple upon the summit.
The sides of the pyramid are approximately at the base and rise at an angle of 53°, although that varies slightly for each side. The four faces of the pyramid have protruding stairways that rise at an angle of 45°. The talud walls of each terrace slant at an angle of between 72° and 74°. At the base of the balustrades of the northeastern staircase are carved heads of a serpent.
Mesoamerican cultures periodically superimposed larger structures over older ones, and the Temple of Kukulcán is one such example. In the mid-1930s, the Mexican government sponsored an excavation of the temple. After several false starts, they discovered a staircase under the north side of the pyramid. By digging from the top, they found another temple buried below the current one.
Inside the temple chamber was a Chac Mool statue and a throne in the shape of a jaguar, painted red and with spots made of inlaid jade. The Mexican government excavated a tunnel from the base of the north staircase, up the earlier pyramid's stairway to the hidden temple, and opened it to tourists. In 2006, INAH closed the throne room to the public.
Around the Spring and Autumn equinoxes, in the late afternoon, the northwest corner of the pyramid casts a series of triangular shadows against the western balustrade on the north side that evokes the appearance of a serpent wriggling down the staircase, which some scholars have suggested is a representation of the feathered-serpent deity, Kukulcán. It is a widespread belief that this light-and-shadow effect was achieved on purpose to record the equinoxes, but the idea is highly unlikely: it has been shown that the phenomenon can be observed, without major changes, during several weeks around the equinoxes, making it impossible to determine any date by observing this effect alone.
Great Ball Court
Archeologists have identified in Chichen Itza thirteen ballcourts for playing the Mesoamerican ballgame, but the Great Ball Court about to the north-west of the Castillo is the most impressive. It is the largest and best preserved ball court in ancient Mesoamerica. It measures .
The parallel platforms flanking the main playing area are each long. The walls of these platforms stand high; set high up in the center of each of these walls are rings carved with intertwined feathered serpents. A popular explanation is that the objective of the game was to pass a ball through one of the rings; however, in other, smaller ball courts there is no ring, only a post.
At the base of the high interior walls are slanted benches with sculpted panels of teams of ball players. In one panel, one of the players has been decapitated; the wound emits streams of blood in the form of wriggling snakes.
At one end of the Great Ball Court is the North Temple, also known as the Temple of the Bearded Man (Templo del Hombre Barbado). This small masonry building has detailed bas relief carving on the inner walls, including a center figure that has carving under his chin that resembles facial hair. At the south end is another, much bigger temple, but in ruins.
Built into the east wall are the Temples of the Jaguar. The Upper Temple of the Jaguar overlooks the ball court and has an entrance guarded by two, large columns carved in the familiar feathered serpent motif. Inside there is a large mural, much destroyed, which depicts a battle scene.
In the entrance to the Lower Temple of the Jaguar, which opens behind the ball court, is another Jaguar throne, similar to the one in the inner temple of El Castillo, except that it is well worn and missing paint or other decoration. The outer columns and the walls inside the temple are covered with elaborate bas-relief carvings.
Additional structures
The Tzompantli, or Skull Platform (Plataforma de los Cráneos), shows the clear cultural influence of the central Mexican Plateau. Unlike the tzompantli of the highlands, however, the skulls were impaled vertically rather than horizontally as at Tenochtitlan.
The Platform of the Eagles and the Jaguars (Plataforma de Águilas y Jaguares) is immediately to the east of the Great Ballcourt. It is built in a combination of Maya and Toltec styles, with a staircase ascending each of its four sides. The sides are decorated with panels depicting eagles and jaguars consuming human hearts.
The Platform of Venus is dedicated to the planet Venus. In its interior archeologists discovered a collection of large cones carved out of stone, the purpose of which is unknown. This platform is located north of El Castillo, between it and the Cenote Sagrado.
The Temple of the Tables is the northernmost of a series of buildings to the east of El Castillo. Its name comes from a series of altars at the top of the structure that are supported by small carved figures of men with upraised arms, called "atlantes."
The Steam Bath is a unique building with three parts: a waiting gallery, a water bath, and a steam chamber that operated by means of heated stones.
Sacbe Number One, a causeway that leads to the Cenote Sagrado, is the largest and most elaborate at Chichen Itza. This "white road" is long with an average width of . It begins at a low wall a few meters from the Platform of Venus. According to archeologists there once was an extensive building with columns at the beginning of the road.
Sacred Cenote
The Yucatán Peninsula is a limestone plain, with no rivers or streams. The region is pockmarked with natural sinkholes, called cenotes, which expose the water table to the surface. One of the most impressive of these is the Cenote Sagrado, which is in diameter and surrounded by sheer cliffs that drop to the water table some below.
The Cenote Sagrado was a place of pilgrimage for ancient Maya people who, according to ethnohistoric sources, would conduct sacrifices during times of drought. Archeological investigations support this as thousands of objects have been removed from the bottom of the cenote, including material such as gold, carved jade, copal, pottery, flint, obsidian, shell, wood, rubber, cloth, as well as skeletons of children and men.
Chultun of Children
In 1967, while building an airstrip 200 meters north of the Cenote Sagrado, workers found a small cave system that contained the remains of more than 100 children, a majority between the ages of three and six. DNA testing in the 2020s found that the remains exclusively came from males. Archaeologists have concluded that, because the remains came from individuals of a narrow range of age and sex and because DNA testing found some were related (including two pairs of identical twins), the remains had been part of a "ritual event." Although the remains show no evidence of sacrifice, some researchers believe sacrifice may have been part of the ritual.
Temple of the Warriors
The Temple of the Warriors complex consists of a large stepped pyramid fronted and flanked by rows of carved columns depicting warriors. This complex is analogous to Temple B at the Toltec capital of Tula, and indicates some form of cultural contact between the two regions. The one at Chichen Itza, however, was constructed on a larger scale. At the top of the stairway on the pyramid's summit (and leading toward the entrance of the pyramid's temple) is a Chac Mool.
This temple encases or entombs a former structure called The Temple of the Chac Mool. The archeological expedition and restoration of this building was done by the Carnegie Institution of Washington from 1925 to 1928. A key member of this restoration was Earl H. Morris, who published the work from this expedition in two volumes entitled Temple of the Warriors. Watercolors were made of murals in the Temple of the Warriors that were deteriorating rapidly following exposure to the elements, after enduring for centuries in their protected enclosures before being discovered. Many depict battle scenes and some even have tantalizing images that lend themselves to speculation and debate by prominent Maya scholars, such as Michael D. Coe and Mary Miller, regarding possible contact with Viking sailors.
Group of a Thousand Columns
Along the south wall of the Temple of Warriors are a series of what are today exposed columns, although when the city was inhabited these would have supported an extensive roof system. The columns are in three distinct sections: a west group, which extends the lines of the front of the Temple of Warriors; a north group, which runs along the south wall of the Temple of Warriors and contains pillars with carvings of soldiers in bas-relief; and a northeast group, which apparently formed a small temple at the southeast corner of the Temple of Warriors and contains a rectangular decorated with carvings of people or gods, as well as animals and serpents. The northeast column temple also covers a small marvel of engineering, a channel that funnels all the rainwater from the complex some away to a rejollada, a former cenote.
To the south of the Group of a Thousand Columns is a group of three, smaller, interconnected buildings. The Temple of the Carved Columns is a small elegant building that consists of a front gallery with an inner corridor that leads to an altar with a Chac Mool. There are also numerous columns with rich, bas-relief carvings of some 40 personages.
A section of the upper façade with a motif of x's and o's is displayed in front of the structure. The Temple of the Small Tables is an unrestored mound. Thompson's Temple (referred to in some sources as the Palace of Ahau Balam Kauil), a small building with two levels, has friezes depicting jaguars (balam in Maya) as well as glyphs of the Maya god Kahuil.
El Mercado
This square structure anchors the southern end of the Temple of Warriors complex. It is so named for the shelf of stone that surrounds a large gallery and patio that early explorers theorized was used to display wares as in a marketplace. Today, archeologists believe that its purpose was more ceremonial than commercial.
Osario Group
South of the North Group is a smaller platform that has many important structures, several of which appear to be oriented toward the second largest cenote at Chichen Itza, Xtoloc.
The Osario itself, like the Temple of Kukulkan, is a step-pyramid temple dominating its platform, only on a smaller scale. Like its larger neighbor, it has four sides with staircases on each side. There is a temple on top, but unlike Kukulkan, at the center is an opening into the pyramid that leads to a natural cave below. Edward H. Thompson excavated this cave in the late 19th century, and because he found several skeletons and artifacts such as jade beads, he named the structure The High Priests' Temple. Archeologists today believe neither that the structure was a tomb nor that the personages buried in it were priests.
The Temple of Xtoloc is a recently restored temple outside the Osario Platform. It overlooks the other large cenote at Chichen Itza, named after the Maya word for iguana, "Xtoloc." The temple contains a series of pilasters carved with images of people, as well as representations of plants, birds, and mythological scenes.
Between the Xtoloc temple and the Osario are several aligned structures: The Platform of Venus, which is similar in design to the structure of the same name next to Kukulkan (El Castillo), the Platform of the Tombs, and a small, round structure that is unnamed. These three structures were constructed in a row extending from the Osario. Beyond them the Osario platform terminates in a wall, which contains an opening to a sacbe that runs several hundred feet to the Xtoloc temple.
South of the Osario, at the boundary of the platform, there are two small buildings that archeologists believe were residences for important personages. These have been named as the House of the Metates and the House of the Mestizas.
Casa Colorada Group
South of the Osario Group is another small platform that has several structures that are among the oldest in the Chichen Itza archeological zone.
The Casa Colorada (Spanish for "Red House") is one of the best preserved buildings at Chichen Itza. Significant red paint was still present in the days of the 19th century explorers. Its Maya name is Chichanchob, which according to INAH may mean "small holes". In one chamber there are extensive carved hieroglyphs that mention rulers of Chichen Itza and possibly of the nearby city of Ek Balam, and contain a Maya date inscribed which correlates to 869 AD, one of the oldest such dates found in all of Chichen Itza.
In 2009, INAH restored a small ball court that adjoined the back wall of the Casa Colorada.
While the Casa Colorada is in a good state of preservation, other buildings in the group, with one exception, are decrepit mounds. One building is half standing, named La Casa del Venado (House of the Deer). This building's name has been long used by the local Maya, and some authors mention that it was named after a deer painting over stucco that doesn't exist anymore.
Central Group
Las Monjas is one of the more notable structures at Chichen Itza. It is a complex of Terminal Classic buildings constructed in the Puuc architectural style. The Spanish named this complex Las Monjas ("The Nuns" or "The Nunnery"), but it was a governmental palace. Just to the east is a small temple (known as the La Iglesia, "The Church") decorated with elaborate masks.
The Las Monjas group is distinguished by its concentration of hieroglyphic texts dating to the Late to Terminal Classic. These texts frequently mention a ruler by the name of Kʼakʼupakal.
El Caracol ("The Snail") is located to the north of Las Monjas. It is a round building on a large square platform. It gets its name from the stone spiral staircase inside. The structure, with its unusual placement on the platform and its round shape (the others are rectangular, in keeping with Maya practice), is theorized to have been a proto-observatory with doors and windows aligned to astronomical events, specifically around the path of Venus as it traverses the heavens.
Akab Dzib is located to the east of the Caracol. The name means, in Yucatec Mayan, "Dark Writing"; "dark" in the sense of "mysterious". An earlier name of the building, according to a translation of glyphs in the Casa Colorada, is Wa(k)wak Puh Ak Na, "the flat house with the excessive number of chambers", and it was the home of the administrator of Chichén Itzá, kokom Yahawal Choʼ Kʼakʼ.
INAH completed a restoration of the building in 2007. It is relatively short, only high, and is in length and wide. The long, western-facing façade has seven doorways. The eastern façade has only four doorways, broken by a large staircase that leads to the roof. This apparently was the front of the structure, and looks out over what is today a steep, dry, cenote.
The southern end of the building has one entrance. The door opens into a small chamber and on the opposite wall is another doorway, above which on the lintel are intricately carved glyphs—the "mysterious" or "obscure" writing that gives the building its name today. Under the lintel in the doorjamb is another carved panel of a seated figure surrounded by more glyphs. Inside one of the chambers, near the ceiling, is a painted hand print.
Old Chichen
Old Chichen (or Chichén Viejo in Spanish) is the name given to a group of structures to the south of the central site, where most of the Puuc-style architecture of the city is concentrated. It includes the Initial Series Group, the Phallic Temple, the Platform of the Great Turtle, the Temple of the Owls, and the Temple of the Monkeys.
This section of the site has been closed to tourism for years while archaeological excavations and restorations were ongoing, and is planned to reopen to visitors in 2024.
Other structures
Chichen Itza also has a variety of other structures densely packed in the ceremonial center of about and several outlying subsidiary sites.
Caves of Balankanche
Approximately south east of the Chichen Itza archeological zone is a network of sacred caves known as Balankanche (), or Balamkaʼancheʼ in Yucatec Maya. In the caves, a large selection of ancient pottery and idols may be seen still in the positions where they were left in pre-Columbian times.
The location of the cave has been well known in modern times. Edward Thompson and Alfred Tozzer visited it in 1905. A.S. Pearse and a team of biologists explored the cave in 1932 and 1936. E. Wyllys Andrews IV also explored the cave in the 1930s. Edwin Shook and R.E. Smith explored the cave on behalf of the Carnegie Institution in 1954, and dug several trenches to recover potsherds and other artifacts. Shook determined that the cave had been inhabited over a long period, at least from the Preclassic to the post-conquest era.
On 15 September 1959, José Humberto Gómez, a local guide, discovered a false wall in the cave. Behind it he found an extended network of caves with significant quantities of undisturbed archeological remains, including pottery and stone-carved censers, stone implements and jewelry. INAH converted the cave into an underground museum, and the objects after being catalogued were returned to their original place so visitors can see them in situ.
Tourism
Chichen Itza is one of the most visited archeological sites in Mexico; in 2017 it was estimated to have received 2.1 million visitors.
Tourism has been a factor at Chichen Itza for more than a century. John Lloyd Stephens, who popularized the Maya Yucatán in the public's imagination with his book Incidents of Travel in Yucatan, inspired many to make a pilgrimage to Chichén Itzá. Even before the book was published, Benjamin Norman and Baron Emanuel von Friedrichsthal traveled to Chichen after meeting Stephens, and both published the results of what they found. Friedrichsthal was the first to photograph Chichen Itza, using the recently invented daguerreotype.
After Edward Thompson in 1894 purchased the Hacienda Chichén, which included Chichen Itza, he received a constant stream of visitors. In 1910 he announced his intention to construct a hotel on his property, but abandoned those plans, probably because of the Mexican Revolution.
In the early 1920s, a group of Yucatecans, led by writer/photographer Francisco Gomez Rul, began working toward expanding tourism to Yucatán. They urged Governor Felipe Carrillo Puerto to build roads to the more famous monuments, including Chichen Itza. In 1923, Governor Carrillo Puerto officially opened the highway to Chichen Itza. Gomez Rul published one of the first guidebooks to Yucatán and the ruins.
Gomez Rul's son-in-law, Fernando Barbachano Peon (a grandnephew of former Yucatán Governor Miguel Barbachano), started Yucatán's first official tourism business in the early 1920s. He began by meeting passengers who arrived by steamship at Progreso, the port north of Mérida, and persuading them to spend a week in Yucatán, after which they would catch the next steamship to their next destination. In his first year Barbachano Peon reportedly was only able to convince seven passengers to leave the ship and join him on a tour. In the mid-1920s Barbachano Peon persuaded Edward Thompson to sell next to Chichen for a hotel. In 1930, the Mayaland Hotel opened, just north of the Hacienda Chichén, which had been taken over by the Carnegie Institution.
In 1944, Barbachano Peon purchased all of the Hacienda Chichén, including Chichen Itza, from the heirs of Edward Thompson. Around that same time the Carnegie Institution completed its work at Chichen Itza and abandoned the Hacienda Chichén, which Barbachano turned into another seasonal hotel.
In 1972, Mexico enacted the Ley Federal Sobre Monumentos y Zonas Arqueológicas, Artísticas e Históricas (Federal Law over Monuments and Archeological, Artistic and Historic Sites) that put all the nation's pre-Columbian monuments, including those at Chichen Itza, under federal ownership. There were now hundreds, if not thousands, of visitors every year to Chichen Itza, and more were expected with the development of the Cancún resort area to the east.
In the 1980s, Chichen Itza began to receive an influx of visitors on the day of the spring equinox. Today several thousand show up to see the light-and-shadow effect on the Temple of Kukulcán during which the feathered serpent appears to crawl down the side of the pyramid. Tour guides will also demonstrate a unique acoustical effect at Chichen Itza: a handclap in front of the staircase of the El Castillo pyramid produces an echo that resembles the chirp of a bird, similar to that of the quetzal, as investigated by Declercq.
Chichen Itza, a UNESCO World Heritage Site, is the second-most visited of Mexico's archeological sites. The archeological site draws many visitors from the popular tourist resorts in Cancún, who make a day trip on tour buses.
In 2007, Chichen Itza's Temple of Kukulcán (El Castillo) was named one of the New Seven Wonders of the World after a worldwide vote. Despite the fact that the vote was sponsored by a commercial enterprise, and that its methodology was criticized, the vote was embraced by government and tourism officials in Mexico who projected that as a result of the publicity the number of tourists to Chichen would double by 2012. The ensuing publicity re-ignited debate in Mexico over the ownership of the site, which culminated on 29 March 2010 when the state of Yucatán purchased the land upon which the most recognized monuments rest from owner Hans Juergen Thies Barbachano.
INAH, which manages the site, has closed a number of monuments to public access. While visitors can walk around them, they can no longer climb them or go inside their chambers. Climbing access to El Castillo was closed after a San Diego, California, woman fell to her death in 2006.
Photograph gallery
See also
Asteroid 100456 Chichen Itza
List of archeoastronomical sites sorted by country
List of Mesoamerican pyramids
Maya–Toltec controversy at Chichen Itza
Tikal
Uxmal
Notes
References
Bibliography
Cobos, Rafael. "Chichén Itzá", in Davíd Carrasco (ed). The Oxford Encyclopedia of Mesoamerican Cultures. Oxford University Press, 2001.
Further reading
Wren, Linnea, et al., eds. Landscapes of the Itza: Archeology and Art History at Chichen Itza and Neighboring Sites''. Gainesville: University of Florida Press 2018.
External links
Encyclopædia Britannica: Article on Chichen Itza
Chichen Itza Digital Media Archive (creative commons-licensed photos, laser scans, panoramas), with particularly detailed information on El Caracol and el Castillo, using data from a National Science Foundation/CyArk research partnership
UNESCO page about Chichen Itza World Heritage site
Ancient Observatories page on Chichen Itza
Chichen Itza reconstructed in 3D
Archaeological documentation for Chichen Itza created by non-profit group INSIGHT and funded by the National Science Foundation and Chabot Space and Science Center
Maya sites in Yucatán
Archaeoastronomy
Archaeological museums in Mexico
Museums in Yucatán
Itza
National Monuments of Mexico
Former populated places in Mexico
Populated places established in the 8th century
8th-century establishments in the Maya civilization
750s establishments
13th-century disestablishments in the Maya civilization
13th-century disestablishments in North America
Articles containing video clips
Tourist attractions in Yucatán
World Heritage Sites in Mexico
Maya sites that survived the end of the Classic Period | Chichen Itza | Astronomy | 8,931 |
43,154,679 | https://en.wikipedia.org/wiki/Factory%20automation%20infrastructure | Factory automation infrastructure describes the process of incorporating automation into the manufacturing environment and processing input goods into final products. Factory automation intends to decrease risks associated with laborious and dangerous work faced by human workers.
The manufacturing environment is defined by its ability to manufacture and/or assemble goods by machines, integrated assembly lines, and robotic arms. Automated environments are also defined by their coordination with (and usually their systematic integration with) the required automatic equipment to form a complete system.
Automation
Automation has produced sophisticated parts with similar or higher output quality and only minor quality fluctuation. It can also help cut overall manufacturing costs and create safer working environments for workers.
The use of automation in manufacturing began with technologies such as pneumatic and hydraulic systems, applied where their mechanical advantages could raise output quality and production efficiency. Complex and highly integrated systems have since evolved, composed of procedures with sophisticated operation drivers. These drivers often run languages that support 6-, 7-, and 8-axis control for sophisticated robotics.
Robotic arm
A robotic arm is a type of mechanical arm, usually programmable, with functions similar to a human arm; the arm may be the total of the mechanism or may be part of a more complex robot. The links of such a manipulator are connected by joints allowing either rotational motion (such as in an articulated robot) or translational (linear) displacement. The links of the manipulator can be considered to form a kinematic chain. The terminus of the kinematic chain of the manipulator is called the end effector and is analogous to the human hand.
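The kinematic chain description lends itself to a direct computation: composing each joint rotation and link offset in turn yields the position of the end effector. The following sketch is purely illustrative, assuming a planar arm with rotational joints only; the link lengths and joint angles are made-up values, not taken from any particular robot.

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Return the (x, y) position of the end effector of a planar arm.

    The arm is modeled as a serial kinematic chain: each joint adds its
    rotation to the running orientation, and each link extends the chain.
    """
    x = y = theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                      # accumulate joint rotations
        x += length * math.cos(theta)       # advance along the current link
        y += length * math.sin(theta)
    return x, y

# Hypothetical 3-link arm: 0.5 m, 0.3 m, 0.1 m links at 30°, -45°, 10°
print(forward_kinematics([0.5, 0.3, 0.1],
                         [math.radians(a) for a in (30, -45, 10)]))
```

An industrial controller must also solve the inverse problem (finding joint angles that reach a target pose), but the forward map above is the basic building block of that calculation.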
References
Industrial automation | Factory automation infrastructure | Engineering | 329 |
35,887,615 | https://en.wikipedia.org/wiki/Psi%20Lupi | The Bayer designation Psi Lupi (ψ Lup / ψ Lupi) is shared by two stars, in the constellation Lupus:
ψ¹ Lupi
ψ² Lupi
Lupi, Psi
Lupus (constellation) | Psi Lupi | Astronomy | 46 |
37,265,653 | https://en.wikipedia.org/wiki/Tune%20shift%20with%20amplitude | The tune shift with amplitude is an important concept in circular accelerators or synchrotrons. The machine may be described via a symplectic one turn map at each position, which may be thought of as the Poincaire section of the dynamics.
A simple harmonic oscillator has a constant tune for all initial positions in phase space. Adding some non-linearity results in a variation of the tune with amplitude.
Amplitude may refer to either the initial position, or more formally, the initial action of the particle.
Definition
Consider dynamics in phase space. These dynamics are assumed to be determined by a Hamiltonian, or a symplectic map. For each initial position, we follow the particle as it traces out its orbit. After transformation into action-angle coordinates, one computes the tune and the action . The tune shift with amplitude is then given by . The transformation to action-angle variables out of which the tune may be derived may be considered as a transformation to normal form.
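A compact way to state the definition, under the common action-angle conventions (an assumption here, since notation varies between texts), writes J for the action and ν(J) for the amplitude-dependent tune:

```latex
% Assuming an effective one-turn Hamiltonian H(J) with phase advance 2*pi*nu per turn:
\nu(J) = \frac{1}{2\pi}\,\frac{\partial H}{\partial J},
\qquad
\text{tune shift with amplitude} \;\equiv\; \frac{\partial \nu}{\partial J}.
```

For a linear (harmonic) system, H is proportional to J, so ν is constant and the shift vanishes, consistent with the remarks in the following section.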
Significance
The tune shift with amplitude is important as a measure of non-linearity of a system. A linear system will have no tune shift with amplitude. Further, it can be important regarding the stability of the system. When the tune reaches resonant values, it can be unstable, and thus a tune-shift with amplitude can limit the stability region, or dynamic aperture.
Examples of systems with tune shift with amplitude
In classical mechanics, a simple example of a system with tune shift with amplitude is a pendulum. In accelerator physics, both the transverse and the longitudinal dynamics show tune shift with amplitude. A simple model of the transverse dynamics is an oscillator with a single sextupole, referred to as the Hénon map. Another model for this case is the Standard Map.
An important example is the typical case of distributed sextupoles in a storage ring.
Computation
The tune shift with amplitude may be computed in numerous ways. One involves the use of the normal form method. See for the use of this method for the pendulum.
It may also be computed by tracking the orbit through phase space, and then Fourier transforming the projections onto the different planes. For computation in the Elegant code, see
The tune may also be computed by a refinement over the Fourier transform method, called NAFF. e.g.
It may also be computed analytically via a formula, derived using the normal form method or otherwise. For the storage ring case with distributed sextupoles, one can see
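As a rough illustration of the tracking-and-Fourier-transform approach mentioned above, the sketch below iterates a Hénon-type map (a linear rotation followed by a single sextupole kick) for several starting amplitudes and takes the strongest line of the turn-by-turn spectrum as the tune. The bare tune, kick strength, and amplitudes are arbitrary made-up values chosen only for illustration.

```python
import numpy as np

def henon_track(x0, p0, nu0, k, n_turns):
    """Turn-by-turn x positions for a rotation-plus-sextupole-kick map."""
    mu = 2.0 * np.pi * nu0
    c, s = np.cos(mu), np.sin(mu)
    x, p = x0, p0
    xs = np.empty(n_turns)
    for n in range(n_turns):
        xs[n] = x
        p_kicked = p + k * x**2                  # nonlinear (sextupole) kick
        x, p = c * x + s * p_kicked, -s * x + c * p_kicked  # linear rotation
    return xs

def tune_from_fft(xs):
    """Estimate the tune as the dominant frequency of the turn-by-turn data."""
    spectrum = np.abs(np.fft.rfft(xs - xs.mean()))
    freqs = np.fft.rfftfreq(len(xs), d=1.0)      # frequency in tune units
    return freqs[np.argmax(spectrum)]

# Bare tune 0.28 and kick strength 1.0 (arbitrary); the estimated tune
# typically moves away from the bare tune as the starting amplitude grows.
for amplitude in (0.1, 0.2, 0.3):
    tune = tune_from_fft(henon_track(amplitude, 0.0, 0.28, 1.0, 8192))
    print(f"amplitude {amplitude:4.2f}  tune {tune:.5f}")
```

A refinement such as NAFF interpolates the location of the spectral peak rather than taking the nearest FFT bin, giving much finer tune resolution for the same number of tracked turns.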
See also
anharmonicity
References
Accelerator physics | Tune shift with amplitude | Physics | 516 |
822,294 | https://en.wikipedia.org/wiki/Drain-waste-vent%20system | A drain-waste-vent system (or DWV) is the combination of pipes and plumbing fittings that captures sewage and greywater within a structure and routes it toward a water treatment system. It includes venting to the exterior environment to prevent a vacuum from forming and impeding fixtures such as sinks, showers, and toilets from draining freely, and employs water-filled traps to block dangerous sewer gasses from entering a plumbed structure.
Overview
DWV systems capture both sewage and greywater within a structure and safely route it out via the low point of its "soil stack" to a waste treatment system, either via a municipal sanitary sewer system, or to a septic tank and leach field. (Cesspits are generally prohibited in developed areas.) For such drainage systems to work properly it is crucial that neutral air pressure be maintained within all pipes, allowing free gravity flow of water and sewage through drains. It is critical that a sufficient fall gradient (downward slope) be maintained throughout the drain pipes to keep liquids and entrained solids flowing freely from a building towards the main drain. In situations where a downward slope out of a building en route to a treatment system cannot be created, a special collection sump pit and grinding lift "sewage ejector" pump are needed. By contrast, potable water supply systems are pressurized up to or more and so do not require a continuous downward slope in their piping to distribute water through buildings.
Every fixture is required to have an internal or external trap to prevent sewer gases from entering a structure. Double trapping is prohibited by plumbing codes due to its susceptibility to clogging. In the U.S., every plumbing fixture must also be coupled to the system's vent piping. Without a vent, negative pressure can slow the flow of water leaving the system, resulting in clogs, or cause siphonage to empty a trap. The high point of the vent system (the top of its "soil stack") must be open to the exterior at atmospheric pressure. On large systems, separate parallel vent stacks may also be run to ensure sufficient airflow, because the number of devices linked to an atmospheric vent, and their distances from it, are regulated by plumbing code.
Operation
A sewer pipe is normally at neutral air pressure compared to the surrounding atmosphere. When a column of waste water flows through a pipe, it compresses air ahead of it in the system, creating a positive pressure that must be released so it does not push back on the waste stream and downstream traps, slow drainage, and induce potential clogs. As the column of water passes, air must also freely flow in behind the waste stream, or negative pressure results, which can siphon water out of a trap after it is passed and allow noxious sewer gases to enter a building. The extent of these pressure fluctuations is determined by the fluid volume of the waste discharge.
Generally, a toilet outlet has the shortest trap seal, making it most vulnerable to being emptied by induced siphonage.
An additional risk of pressurizing a system ahead of a waste stream is the potential for it to overwhelm a downstream trap and force tainted water into its fixture. Serious hygiene and health consequences can result. Tall buildings of three or more stories are particularly susceptible to this problem. Adequate supplementary vent stacks are installed in parallel to waste stacks to allow proper venting in large and tall buildings and eliminate these pressure-related venting problems.
External venting
DWV systems are vented directly through the building roof. Increasingly DWV pipe is ABS or PVC DWV-rated plastic pipe equipped with a flashing at the roof penetration to prevent rainwater from entering the buildings. Older structures may use asbestos, copper, iron, lead or clay pipes, in rough order of era of use.
Under many older building codes, a vent stack (a pipe leading to the main roof vent) is required to be within approx. a radius of the draining fixture it serves (sink, toilet, shower stall, etc.). To allow a single roof penetration as permitted by local building code, sub-vents may be tied together inside the building and exit via a common vent stack, frequently the "main" vent. Adding a vent connection within a long horizontal run with little slope will aid flow, and when used with a cleanout allows for better serviceability.
Unlike traps for other fixtures, toilet traps are usually designed to self-siphon to ensure complete evacuation of their contents; toilet bowls are then automatically refilled by a special valve mechanism.
Internal venting
In exceptional cases it is either not possible or inconvenient to vent a fixture or fixtures externally. In such cases a resort to "internal venting" may be viable, where compliant with local plumbing codes. Such alternatives include mechanical vents (also called cheater vents) such as air admittance valves and check vents, and "plumb-arounds" such as an inline vent employed in kitchen islands and similar applications:
Air admittance valves (AAVs, or commonly referred to in the UK as Durgo valves and in the US as Studor vents and Sure-Vent®) are negative-pressure-activated, one-way mechanical valves, used in a plumbing or drainage venting system to eliminate the need for conventional pipe venting and roof penetrations. A discharge of wastewater causes the AAV to open, releasing the vacuum and allowing air to enter the plumbing vent pipe for proper pressure equalization.
Since AAVs will only operate under negative pressure situations, they are not suitable for all venting applications, such as venting a sump, where positive pressures are created when the sump fills. Also, where positive drainage pressures are found in larger buildings or multi-story buildings, an air admittance valve could be used in conjunction with a positive pressure reduction device such as the PAPA positive air pressure attenuator to provide a complete venting solution for more complicated drainage venting systems.
Using AAVs can significantly reduce the amount of venting materials needed in a plumbing system, increase plumbing labor efficiency, allow greater flexibility in the layout of plumbing fixtures, and reduce long-term roof maintenance problems associated with conventional vent stack roofing penetrations.
While some state and local building departments prohibit AAVs, the International Residential and International Plumbing Codes allow it to be used in place of a vent through the roof. AAVs are certified to reliably open and close a minimum of 500,000 times, (approximately 30 years of use) with no release of sewer gas; some manufacturers claim their units are tested for up to 1.5 million cycles, or at least 80 years of use. AAVs have been effectively used in Europe for more than two decades.
Check vents
In-line vent (also known as an island fixture vent, and, colloquially, a "Chicago Loop", "Boston loop" or "Bow Vent") is an alternate method permissible in some jurisdictions of venting the trap installed on an under counter island sink or other similar applications where a conventional vertical vent stack or air admittance valve is not feasible or allowed.
As with all drains, ventilation must be provided to allow the flowing waste water to displace the sewer gas in the drain, and then to allow air (or some other fluid) to fill the vacuum which would otherwise form as the water flows down the pipe.
In an island fixture vent, as water displaces the sewer gas up to the sanitary tee, the water flows downward while sewer gas is displaced upward and toward the vent. The vent can also provide air to fill any vacuum created.
The key to a functional island fixture vent is that the top elbow must be at least as high as the "flood level" (the peak possible drain water level in the sink), allowing it to serve as a de facto vacuum breaker preventing the loop from becoming a siphon for an overfilled sink, as from a clogged drain (rather than vent) line.
Fittings
All DWV systems require various sizes of pipes and fittings, both measured by internal diameter. In most cases these are Schedule 40 PVC wyes, tees, and elbows ranging from 90 degrees to 22.5 degrees, in both inside-diameter (street) and outer-diameter (hub) fitment, along with repair and slip couplings, reducer couplings, and pipe, which is typically ten feet in length. Sizes for hub fittings such as wyes and tees are based on the inside diameter of the pipe that goes into their hubs. Items such as washer boxes and Studor vents are also measured by the internal diameter of the fittings.
Cost of materials, ease of installation, and resistance to corrosion all have come to favor Schedule 40 PVC DWV systems, which are replacing cast iron "hub" and "no-hub" DWV systems in many municipalities, while parts and skills associated with installing and maintaining cast iron systems are becoming increasingly scarce and costly.
The advent of PVC and solvent welding adhesives, which secure fittings against leakage and separation by melting the material into itself, has profoundly simplified installing a DWV system and made it less expensive. As with pressurized water "supply" plumbing, all lines must be run through holes bored where they will not compromise structural framing, must be properly supported inline, and all external penetrations must be properly sealed and flashed.
See also
Fuel gas piping
Plumber
Potable cold and hot water supply
Rainwater, surface, and subsurface water drainage
References
Further reading
Building engineering | Drain-waste-vent system | Engineering | 1,952 |
78,478,987 | https://en.wikipedia.org/wiki/Ammonium%20dicyanoaurate | Ammonium dicyanoaurate is a chemical compound with the chemical formula . This is a salt of ammonium as cation with an anion composed of a gold atom bearing two cyanide ligands.
Synthesis
Ammonium dicyanoaurate can be synthesised by dissolution of gold(I) cyanide in ammonium cyanide solution:
AuCN + NH4CN → NH4[Au(CN)2]
Physical properties
The compound forms colorless crystals which are soluble in water and ethanol.
References
Cyanides
Ammonium compounds
Aurates | Ammonium dicyanoaurate | Chemistry | 100 |
928,282 | https://en.wikipedia.org/wiki/Minelayer | A minelayer is any warship, submarine, military aircraft or land vehicle deploying explosive mines. Since World War I the term "minelayer" refers specifically to a naval ship used for deploying naval mines. "Mine planting" was the term for installing controlled mines at predetermined positions in connection with coastal fortifications or harbor approaches that would be detonated by shore control when a ship was fixed as being within the mine's effective range.
An army's special-purpose combat engineering vehicles used to lay landmines are sometimes called "minelayers".
Etymology
Before World War I, mine ships were termed mine planters generally. For example, in an address to the United States Navy ships of Mine Squadron One at Portland, England, Admiral Sims used the term "mine layer" while the introduction speaks of the men assembled from the "mine planters". During and after that war the term "mine planter" became particularly associated with defensive coastal fortifications. The term "minelayer" was applied to vessels deploying both defensive- and offensive mine barrages and large scale sea mining. "Minelayer" lasted well past the last common use of "mine planter" in the late 1940s.
Naval minelayers
The most common use of the term "minelayer" is a naval ship used for deploying sea mines. Russian minelayers proved highly effective, sinking the Japanese battleships and in 1904 during the Russo-Japanese War. In the Gallipoli Campaign of World War I, mines laid by the Ottoman Navy's Nusret sank , , and the in the Dardanelles on 18 March 1915.
In World War II, the British employed the Abdiel minelayers both as minelayers and as transports to isolated garrisons, such as Malta and Tobruk. Their combination of high speed (up to 40 knots) and carrying capacity was highly valued. The French used the same concept for the cruiser .
A naval minelayer can vary considerably in size, from coastal boats of several hundred tonnes in displacement to destroyer-like ships of several thousand tonnes displacement. Apart from their loads of sea mines, most would also carry other weapons for self-defense, with some armed well enough to carry out other combat operations besides minelaying, such as the World War II Romanian minelayer Amiral Murgescu, which was successfully employed as a convoy escort due to her armament (2 × 105 mm, 2 × 37 mm, 4 × 20 mm, 2 machine guns, 2 depth charge throwers).
Submarines can also be minelayers. The first submarine to be designed as such was the . was another such minelaying submarine. Although there are no modern dedicated submarine minelayers, mines sized to be deployed from a submarine's torpedo tubes, such as the Stonefish, allow any submarine to be a minelayer.
In modern times, few navies worldwide still possess minelaying vessels. The United States Navy, for example, uses aircraft to lay sea mines instead. Mines themselves have evolved from purely passive to active; for example the US CAPTOR (enCAPsulated TORpedo) that sits as a mine until detecting a target, then launches a torpedo.
A few navies still have dedicated minelayers in commission, including those of South Korea, Poland, Sweden and Finland; countries with long, shallow coastlines where sea mines are most effective. Other navies have plans to create improvised minelayers in times of war, for example by rolling sea-mines into the sea from the vehicle deck through the open aft doors of a Roll-on/roll-off ferry. In 1984, the Libyan Navy was suspected of having mined the Red Sea a few nautical miles south of the Suez Canal using the Ro-Ro ferry Ghat, other nations suspected of having similar wartime plans include Iran and North Korea.
Aerial minelaying
Beginning in World War II, military aircraft were used to deliver naval mines by dropping them, attached to a parachute. Germany, Britain and the United States made significant use of aerial minelaying.
A new type of magnetic mine dropped by a German aircraft in a campaign of mining the Thames Estuary in 1939 landed in a mudflat, where disposal experts determined how it worked, which allowed Britain to fashion appropriate mine countermeasures.
The British Royal Air Force minelaying operations were codenamed "Gardening". As well as mining the North Sea and approaches to German ports, mines were laid in the Danube River near Belgrade, Yugoslavia, starting on 8 April 1944, to block the shipments of petroleum products from the refineries at Ploiești, Romania.
"Gardening" operations by the RAF were also sometimes used to assist in code breaking activities at Bletchley Park. Mines would be laid, at Bletchley Park's request, in specific locations. Resulting German radio transmissions were then monitored for clues which could help deciphering messages encoded by the Germans using Enigma machines.
In the Pacific, the US dropped thousands of mines in Japanese home waters, contributing to that country's defeat.
Aerial mining was also used in the Korean and Vietnam Wars. In Vietnam, rivers and coastal waters were extensively mined with a modified bomb called a destructor that proved very successful.
Landmine laying
Some examples of minelaying vehicles:
Shielder minelaying system
Zemledeliye (minelaying system)
GMZ family of minelayers, which the 2S4 Tyulpan is based on, using TM-62 series mines
Minenwerfer Skorpion
Type 94 Minelayer
Istrice (M113 variant)
See also
List of minelayer ship classes
List of mine warfare vessels of the US Navy in the Second World War
Mine Planter Service (U.S. Army)
Minesweeper (ship)
Submarine mines in United States harbor defense
Notes
References
External links
Mine warfare
Minelayers | Minelayer | Engineering | 1,194 |
63,842,793 | https://en.wikipedia.org/wiki/Tapchan | A Tapchan () is a type of outdoor furniture unique to Central Asia especially Tajikistan and Uzbekistan, combining a large bed capable of holding 4-8 adults with a table at which meals can be eaten.. It is similar or identical to the Malay bale-bale, 'wooden raised platform'.
Variants
Although typically an outdoor fixture, they are also found indoors, for instance at roadside restaurants, since they allow the customer to both rest and eat. Private homes with a tapchan in the yard often build canopy posts with either a fixed shade or curtains.
External links
A custom-built tapchan in North America
Footnotes
Central Asia
Tajikistani design
Tables (furniture)
Beds | Tapchan | Biology | 138 |
4,658,305 | https://en.wikipedia.org/wiki/PyBOP | PyBOP (benzotriazol-1-yloxytripyrrolidinophosphonium hexafluorophosphate) is a reagent used to prepare amides from carboxylic acids and amines in the context of peptide synthesis. It can be prepared from 1-hydroxybenzotriazole and a chlorophosphonium reagent under basic conditions. It is a substitute for the BOP reagent that avoids the formation of the carcinogenic waste product HMPA. Thermal hazard analysis by differential scanning calorimetry (DSC) shows PyBOP is potentially explosive.
See also
BOP reagent
DEPBT, a related reagent that contains no phosphorus-nitrogen bonds
HATU
HBTU
References
Hexafluorophosphates
Peptide coupling reagents
Benzotriazoles
1-Pyrrolidinyl compounds
Biochemistry
Biochemistry methods
Reagents for biochemistry
Quaternary phosphonium compounds
Organophosphorus compounds | PyBOP | Chemistry,Biology | 213 |
4,555,947 | https://en.wikipedia.org/wiki/Search%20engine%20results%20page | A search engine results page (SERP) is a webpage that is displayed by a search engine in response to a query by a user. The main component of a SERP is the listing of results that are returned by the search engine in response to a keyword query.
The results are of two general types:
organic search: retrieved by the search engine's algorithm;
sponsored search: advertisements.
The results are normally ranked by relevance to the query. Each result displayed on the SERP normally includes a title, a link that points to the actual page on the Web, and a short description, known as a snippet, showing where the keywords have matched content within the page for organic results. For sponsored results, the advertiser chooses what to display.
A single search query can yield a very large number of results, but search engines and user preferences typically limit how many results are displayed per page, and subsequent pages are generally less relevant or lower-ranked than the first. As in traditional print media and its advertising, this enables competitive pricing for page real estate, but the comparison is complicated by the dynamics of consumer expectations and intent: whereas the content and advertising of static print media are the same for all viewers (apart from some geographic localisation by state, metro area, city, or neighbourhood), search engine results can vary based on individual factors such as browsing habits.
Components
The organic search results, the query, and advertisements are the three main components of a SERP. However, the SERPs of major search engines, like Google, Yahoo!, Bing, and Sogou, may include many different types of enhanced results (both organic and sponsored), such as rich snippets, images, maps, definitions, answer boxes, videos or suggested search refinements. A study revealed that 97% of queries in Google returned at least one rich feature. Another study on the evolution of SERP interfaces from 2000 to 2020 shows that SERPs are becoming more diverse in terms of elements, aggregating content from different verticals and including more features that provide direct answers.
The major search engines visually differentiate specific content types such as images, news, and blogs. Many content types have specialized SERP templates and visual enhancements on the first search results page.
Search query
Also known as 'user search string', this is the word or set of words that are typed by the user in the search bar of the search engine. The search box is located on all major search engines like Google, Yahoo, Bing, and Sogou. Users indicate the topic desired based on the keywords they enter into the search box in the search engine.
Organic results
Organic SERP listings are the natural listings generated by search engines; they list webpages matching the query. The pages are ranked by a relevance score derived from metrics such as the quality and relevance of the content; the expertise, authoritativeness, and trustworthiness of the website and author on a given topic; user experience; and backlinks.
Each of the matching web pages is presented as a visual element composed of attribution, a title link, and a snippet of the matching webpage showing how the query matched on the page.
Search results pages typically contain numerous organic results, and users tend to view only the first results on the first page. According to a 2019 study, the click-through rates (CTRs) drop significantly after the first few results.
Sponsored results
Several major search engines offer "sponsored results" to companies, who may pay the search engine to have their products or services appear above other search hits. This is often done in the form of bidding between companies, where the highest bidder gets the top result. A 2018 report from the European Commission showed that consumers generally avoid these top results, as there is an expectation that the topmost results on a search engine page will be sponsored, and thus less relevant.
Rich snippets
Rich snippets are displayed by Google in the search results pages when a website contains content in structured data markup. Structured data markup helps the Google algorithm to index and understand the content better. Google supports rich snippets for various data types, including products, recipes, reviews, events, news articles, and job postings.
Featured snippets
A featured snippet is a summary of an answer to a user's query. This snippet appears at the top of the list of search hits. Google supports the following types of featured snippets: Paragraph Featured Snippet, Numbered List Featured Snippet, Bulleted List Featured Snippet, Table Featured Snippet, YouTube Featured Snippet, Carousel Snippet, Double Featured Snippet, and Two-for-One Featured Snippet.
Knowledge graph
Search engines like Google, Bing and Sogou have started to draw on encyclopedias and other rich sources of information.
Google, for example, calls this sort of information the "Google Knowledge Graph"; if a search query matches, an additional panel is displayed on the right-hand side with information drawn from its sources. Such panels may offer the user a zero-click result to their query.
Google Discover
Google Discover, formerly known as Google Feed, is a way of delivering topics and news to users on the homepage below the search box.
Generation
Major search engines like Google, Yahoo!, Bing and Sogou primarily use content contained within the page, falling back to its metadata tags, to generate the content that makes up a search snippet. Generally, the HTML title tag will be used as the title of the snippet while the most relevant or useful contents of the web page (description tag or page copy) will be used for the description.
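A minimal sketch of this fallback behaviour, using only Python's standard html.parser; the helper names are illustrative, and real search engines apply far richer extraction and ranking logic:

```python
from html.parser import HTMLParser

class SnippetExtractor(HTMLParser):
    """Collects the <title> text and the description <meta> tag of a page."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""
        self.meta_description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "description":
                self.meta_description = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def build_snippet(html_page: str) -> dict:
    """Title comes from the <title> tag; the description falls back to the
    meta description when no better page copy is available."""
    parser = SnippetExtractor()
    parser.feed(html_page)
    return {"title": parser.title.strip(),
            "description": parser.meta_description.strip() or "(no description)"}

page = ("<html><head><title>Example Page</title>"
        "<meta name='description' content='A short page summary.'></head>"
        "<body><p>Body text.</p></body></html>")
print(build_snippet(page))
```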
Scraping and automated access
Search engine results pages are protected from automated access by a range of defensive mechanisms and terms of service. These result pages are the primary data source for search engine optimization (SEO), in which website placement for competitive keywords has become an important field of business and interest.
The process of harvesting data from search engine results pages is usually called "search engine scraping" or, more generally, "web crawling"; it generates the data that SEO-related companies need to evaluate websites' competitive organic and sponsored rankings. This data can be used to track the position of websites and show the effectiveness of SEO, as well as keywords that may need more SEO investment to rank higher.
There is no evidence that Google has made any public announcement declaring scraping to be a breach of its terms of service. Any such warnings could not, by their nature, apply universally to its users, including users in countries where Google does not operate, nor would they apply to a private individual in the same way as to one of Google's advertising partners. Furthermore, crawling itself remains one of the core elements of Google's search functionality and tools; purported warnings against scraping previously attributed to Google were in reality posts on third-party platforms such as Twitter by individuals not necessarily associated with or employed by Google, and in any event made in a personal capacity rather than in a Google-endorsed formal capacity.
See also
User intent
Search Engine Optimisation
References
Search engine optimization
Internet search engines
Internet terminology
Digital marketing | Search engine results page | Technology | 1,539 |
78,141,952 | https://en.wikipedia.org/wiki/Bufenadrine | Bufenadrine (developmental code name B.S. 6534), also known as 2-tert-butyldiphenhydramine, is a drug described as an antiemetic, antihistamine, anticholinergic, and antiparkinsonian agent which was never marketed. It is the 2-tert-butyl analogue of diphenhydramine. The drug was found to produce stereoselective hepatotoxicity in animals, which led to the discontinuation of its development. Bufenadrine was first described in the literature by 1967. The suffix "-drine" is generally used for sympathomimetics, but bufenadrine itself is not a sympathomimetic or related agent.
References
Abandoned drugs
Anticholinergics
Antiemetics
Antihistamines
Antiparkinsonian agents
Hepatotoxins
Tert-butyl compounds
Dimethylamino compounds
Ethanolamines | Bufenadrine | Chemistry | 207 |
67,255,612 | https://en.wikipedia.org/wiki/Einsteinium%28III%29%20chloride | Einsteinium(III) chloride is a chloride of einsteinium.
Preparation
Einsteinium(III) chloride is prepared by reacting einsteinium metal with dry hydrogen chloride gas at 500 °C for 20 minutes; the product crystallizes at around 425 °C.
2 Es + 6 HCl → 2 EsCl3 + 3 H2
Chemical properties
The compound can be reduced to obtain einsteinium(II) chloride.
References
Chlorides
Einsteinium compounds
Actinide halides | Einsteinium(III) chloride | Chemistry | 84 |
44,640,690 | https://en.wikipedia.org/wiki/Gap%20reduction | In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems. In short, a gap problem refers to one wherein the objective is to distinguish cases where the best solution is above one threshold from cases where the best solution is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results: if a problem can be approximated to a factor better than the size of the gap, then the approximation algorithm can be used to solve the corresponding gap problem.
c-gap problem
We define a c-gap problem as follows: given an optimization (maximization or minimization) problem X, the equivalent c-gap problem distinguishes between two cases, for an input k and an instance x of problem X:
OPT(x) ≤ k. Here, the best solution to instance x of problem X has a cost, or score, below k.
OPT(x) ≥ c⋅k. Here, the best solution to instance x of problem X has cost above c⋅k. The gap between the two thresholds is thus c.
Note that whenever OPT(x) falls between the thresholds, there is no requirement on what the output should be. A valid algorithm for the c-gap problem may answer anything if OPT(x) is in the middle of the gap. The value c does not need to be constant; it can depend on the size of the instance x of X. Note that c-approximating the solution to an instance of X is at least as hard as solving the c-gap version of X.
One can define an (a, b)-gap problem similarly. The difference is that the thresholds do not depend on the input; instead, the lower threshold is a and the upper threshold is b.
Gap-producing reduction
A gap-producing reduction is a reduction from an optimization problem to a c-gap problem, so that solving the c-gap problem quickly would enable solving the optimization problem quickly. The term gap-producing arises from the nature of the reduction: the optimal solution in the optimization problem maps to the opposite side of the gap from every other solution via reduction. Thus, a gap is produced between the optimal solution and every other solution.
A simple example of a gap-producing reduction is the nonmetric Traveling Salesman problem (i.e. where the graph's edge costs need not satisfy the conditions of a metric). We can reduce from the Hamiltonian path problem on a given graph G = (V, E) to this problem as follows: we construct a complete graph G' = (V, E'), for the traveling salesman problem. For each edge e ∈ G', we let the cost of traversing it be 1 if e is in the original graph G and ∞ otherwise. A Hamiltonian path in the original graph G exists if and only if there exists a traveling salesman solution with weight (|V|-1). However, if no such Hamiltonian path exists, then the best traveling salesman tour must have weight at least |V|. Thus, Hamiltonian Path reduces to |V|/(|V|-1)-gap nonmetric traveling salesman.
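A small sketch of this gap-producing construction, assuming the graph is given as a vertex list and an undirected edge list; the function name and the use of a large finite weight in place of ∞ are illustrative choices rather than part of the formal reduction:

```python
from itertools import combinations

def hamiltonian_path_to_gap_tsp(vertices, edges):
    """Build a complete weighted graph for the nonmetric TSP gap instance.

    Edges of the original graph G get cost 1; all other vertex pairs get a
    cost larger than |V|, standing in for "infinity". A Hamiltonian path in G
    exists iff the cheapest traveling salesman solution has weight |V| - 1;
    otherwise every solution costs at least |V|.
    """
    edge_set = {frozenset(e) for e in edges}
    big = len(vertices) + 1            # plays the role of infinity in this sketch
    costs = {}
    for u, v in combinations(vertices, 2):
        costs[frozenset((u, v))] = 1 if frozenset((u, v)) in edge_set else big
    return costs

# Example: a path graph on 4 vertices clearly has a Hamiltonian path.
verts = ["a", "b", "c", "d"]
g_edges = [("a", "b"), ("b", "c"), ("c", "d")]
tsp_costs = hamiltonian_path_to_gap_tsp(verts, g_edges)
print(tsp_costs[frozenset(("a", "b"))], tsp_costs[frozenset(("a", "c"))])  # 1 5
```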
Gap-preserving reduction
A gap-preserving reduction is a reduction from a c-gap problem to a c'-gap problem. More specifically, we are given an instance x of a problem A with |x| = n and want to reduce it to an instance x' of a problem B with |x'| = n'. A gap-preserving reduction from A to B is a set of functions (k(n), k'(n'), c(n), c'(n')) such that
For minimization problems:
OPTA(x) ≤ k ⇒ OPTB(x') ≤ k', and
OPTA(x) ≥ c⋅k ⇒ OPTB(x') ≥ c'⋅k'
For maximization problems:
OPTA(x) ≥ k ⇒ OPTB(x') ≥ k', and
OPTA(x) ≤ k/c ⇒ OPTB(x') ≤ k'/c'
If c' > c, then this is a gap-amplifying reduction.
Examples
Max E3SAT
This problem is a form of the Boolean satisfiability problem (SAT), where each clause contains three distinct literals and we want to maximize the number of clauses satisfied.
Håstad showed that the (1/2+ε, 1-ε)-gap version of a similar problem, MAX E3-X(N)OR-SAT, is NP-hard. The MAX E3-X(N)OR-SAT problem is a form of SAT where each clause is the XOR of three distinct literals, exactly one of which is negated. We can reduce from MAX E3-X(N)OR-SAT to MAX E3SAT as follows:
A clause xi ⊕ xj ⊕ xk = 1 is converted to (xi ∨ xj ∨ xk) ∧ (¬xi ∨ ¬xj ∨ xk) ∧ (¬xi ∨ xj ∨ ¬xk) ∧ (xi ∨ ¬xj ∨ ¬xk)
A clause xi ⊕ xj ⊕ xk = 0 is converted to (¬xi ∨ ¬xj ∨ ¬xk) ∧ (¬xi ∨ xj ∨ xk) ∧ (xi ∨ ¬xj ∨ xk) ∧ (xi ∨ xj ∨ ¬xk)
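The clause expansion can be written mechanically; the sketch below uses a DIMACS-style signed-integer convention for literals (an assumption for the illustration, not part of the original reduction):

```python
def xor_clause_to_e3sat(i, j, k, parity):
    """Expand the constraint x_i XOR x_j XOR x_k = parity into four 3-CNF clauses.

    Literals use a DIMACS-like convention: +n means x_n, -n means NOT x_n.
    Assignments violating the XOR constraint falsify exactly one of the four
    clauses; every other assignment satisfies all four.
    """
    clauses = []
    for a in (1, -1):
        for b in (1, -1):
            for c in (1, -1):
                # Keep sign patterns with an even number of negations for
                # parity 1, and an odd number of negations for parity 0.
                negations = (a < 0) + (b < 0) + (c < 0)
                if negations % 2 == (0 if parity == 1 else 1):
                    clauses.append((a * i, b * j, c * k))
    return clauses

print(xor_clause_to_e3sat(1, 2, 3, 1))
# [(1, 2, 3), (1, -2, -3), (-1, 2, -3), (-1, -2, 3)]
```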
If a clause is not satisfied in the original instance of MAX E3-X(N)OR-SAT, then at most three of the four corresponding clauses in our MAX E3SAT instance can be satisfied. Using a gap argument, it follows that a YES instance of the problem has at least a (1-ε) fraction of the clauses satisfied, while a NO instance of the problem has at most a (1/2+ε)(1) + (1/2-ε)(3/4) = (7/8 + ε/4)-fraction of the clauses satisfied. Thus, it follows that (7/8 + ε, 1 - ε)-gap MAX E3SAT is NP-hard. Note that this bound is tight, as a random assignment of variables gives an expected 7/8 fraction of satisfied clauses.
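For reference, the NO-instance arithmetic written out step by step:

```latex
\[
\underbrace{\left(\tfrac{1}{2}+\varepsilon\right)\cdot 1}_{\text{satisfied XOR clauses}}
+ \underbrace{\left(\tfrac{1}{2}-\varepsilon\right)\cdot \tfrac{3}{4}}_{\text{unsatisfied XOR clauses}}
= \tfrac{1}{2}+\varepsilon+\tfrac{3}{8}-\tfrac{3\varepsilon}{4}
= \tfrac{7}{8}+\tfrac{\varepsilon}{4}.
\]
```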
Label Cover
The label cover problem is defined as follows: given a bipartite graph G = (A∪B, E), with
A = A1 ∪ A2 ∪ ... ∪ Ak, |A| = n, and |Ai| = n/k
B = B1 ∪ B2 ∪ ... ∪ Bk, |B| = n, and |Bi| = n/k
We define a "superedge" between Ai and Bj if at least one edge exists from Ai to Bj in G, and define the superedge to be covered if at least one edge from Ai to Bj is covered.
In the max-rep version of the problem, we are allowed to choose one vertex from each Ai and each Bi, and we aim to maximize the number of covered superedges. In the min-rep version, we are required to cover every superedge in the graph, and want to minimize the number of vertices we choose. Manurangsi and Moshkovitz show that the (O(n^(1/4)), 1)-gap version of both problems is solvable in polynomial time.
See also
Approximation-preserving reduction
Optimization problem
Approximation algorithm
PTAS reduction
References
Approximation algorithms
Computational problems | Gap reduction | Mathematics | 1,533 |
65,973,341 | https://en.wikipedia.org/wiki/Intermediate%20luminosity%20optical%20transient | An Intermediate Luminosity Optical Transient (ILOT) is an astronomical object which undergoes an optically detectable explosive event with an absolute magnitude (M) brighter than a classical nova (M ~ −8) but fainter than that of a supernova (M ~ −17). That nine magnitude range corresponds to a factor of nearly 4000 in luminosity, so the ILOT class may include a wide variety of objects. The term ILOT first appeared in a 2009 paper discussing the nova-like event NGC 300 OT2008-1. As the term has gained more widespread use, it has begun to be applied to some objects like KjPn 8 and CK Vulpeculae for which no transient event has been observed, but which may have been dramatically affected by an ILOT event in the past. The number of ILOTs known is expected to increase substantially when the Vera C. Rubin Observatory becomes operational.
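The factor of nearly 4000 follows from the standard magnitude–luminosity relation; for a nine-magnitude difference:

```latex
\[
\frac{L_{\text{bright}}}{L_{\text{faint}}} = 10^{0.4\,\Delta M} = 10^{0.4 \times 9} = 10^{3.6} \approx 3981 \approx 4000.
\]
```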
A very wide variety of objects have been classified as ILOTs in the astronomical literature. Kashi and Soker proposed a model for the outburst of ASASSN-15qi, in which a Jupiter-mass planet is tidally destroyed and accreted onto a young main sequence star. Luminous red novae, believed to be caused by the merger of two stars, are classified as ILOTs. Some luminous blue variables, such as η Car, have been classified as ILOTs. Some objects which have been classified as failed supernovae may be ILOTs. The common thread tying all of these objects together is a transfer of a large amount of mass (0.001 M⊙ to a few M⊙) from a planet or star to a companion star over a short period of time, leading to a massive eruption. That large range in accretion mass explains the large range in ILOT event brightness.
See also
Fast blue optical transient
References
External links
The ILOT Club
Stellar phenomena
Astronomical events | Intermediate luminosity optical transient | Physics,Astronomy | 389 |
28,966,352 | https://en.wikipedia.org/wiki/Sustainability%20and%20environmental%20management | At the global scale sustainability and environmental management involves managing the oceans, freshwater systems, land and atmosphere, according to sustainability principles.
Land use change is fundamental to the operations of the biosphere because alterations in the relative proportions of land dedicated to urbanisation, agriculture, forest, woodland, grassland and pasture have a marked effect on the global water, carbon and nitrogen biogeochemical cycles. Management of the Earth's atmosphere involves assessment of all aspects of the carbon cycle to identify opportunities to address human-induced climate change and this has become a major focus of scientific research because of the potential catastrophic effects on biodiversity and human communities. Ocean circulation patterns have a strong influence on climate and weather and, in turn, the food supply of both humans and other organisms.
Atmosphere
In March 2009, at a meeting of the Copenhagen Climate Council, 2,500 climate experts from 80 countries issued a keynote statement that there is now "no excuse" for failing to act on global warming and without strong carbon reduction targets "abrupt or irreversible" shifts in climate may occur that "will be very difficult for contemporary societies to cope with". Management of the global atmosphere now involves assessment of all aspects of the carbon cycle to identify opportunities to address human-induced climate change and this has become a major focus of scientific research because of the potential catastrophic effects on biodiversity and human communities.
Other human impacts on the atmosphere include the air pollution in cities, the pollutants including toxic chemicals like nitrogen oxides, sulphur oxides, volatile organic compounds and airborne particulate matter that produce photochemical smog and acid rain, and the chlorofluorocarbons that degrade the ozone layer. Anthropogenic particulates such as sulfate aerosols in the atmosphere reduce the direct irradiance and reflectance (albedo) of the Earth's surface. Known as global dimming, the decrease is estimated to have been about 4% between 1960 and 1990, although the trend has subsequently reversed. Global dimming may have disturbed the global water cycle by reducing evaporation and rainfall in some areas. It also creates a cooling effect and this may have partially masked the effect of greenhouse gases on global warming.
Oceans
Ocean circulation patterns have a strong influence on climate and weather and, in turn, the food supply of both humans and other organisms. Scientists have warned of the possibility, under the influence of climate change, of a sudden alteration in circulation patterns of ocean currents that could drastically alter the climate in some regions of the globe. Major human environmental impacts occur in the more habitable regions of the ocean fringes – the estuaries, coastline and bays. About 8.5 per cent of the world's population – about 600 million people – live in low-lying areas vulnerable to sea level rise. Trends of concern that require management include: over-fishing (beyond sustainable levels); coral bleaching due to ocean warming, and ocean acidification due to increasing levels of dissolved carbon dioxide; and sea level rise due to climate change. Because of their vastness oceans also act as a convenient dumping ground for human waste. Remedial strategies include: more careful waste management, statutory control of overfishing by adoption of sustainable fishing practices and the use of environmentally sensitive and sustainable aquaculture and fish farming, reduction of fossil fuel emissions and restoration of coastal and other marine habitats.
Freshwater
Water covers 71% of the Earth's surface. Of this, 97.5% is the salty water of the oceans and only 2.5% freshwater, most of which is locked up in the Antarctic ice sheet. The remaining freshwater is found in lakes, rivers, wetlands, the soil, aquifers and atmosphere. All life depends on the solar-powered global water cycle, the evaporation from oceans and land to form water vapour that later condenses from clouds as rain, which then becomes the renewable part of the freshwater supply. Awareness of the global importance of preserving water for ecosystem services has only recently emerged: during the 20th century, more than half of the world's wetlands were lost along with their valuable environmental services. Biodiversity-rich freshwater ecosystems are currently declining faster than marine or land ecosystems making them the world's most vulnerable habitats. Increasing urbanization pollutes clean water supplies and much of the world still does not have access to clean, safe water. In the industrial world demand management has slowed absolute usage rates but increasingly water is being transported over vast distances from water-rich natural areas to population-dense urban areas and energy-hungry desalination is becoming more widely used. Greater emphasis is now being placed on the improved management of blue (harvestable) and green (soil water available for plant use) water, and this applies at all scales of water management.
Land
Loss of biodiversity originates largely from the habitat loss and fragmentation produced by artificial land development, forestry and agriculture as natural capital is progressively converted to man-made capital. Land-use change is fundamental to the operations of the biosphere because alterations in the relative proportions of land dedicated to urbanisation, agriculture, forest, woodland, grassland and pasture have a marked effect on the global water, carbon and nitrogen biogeochemical cycles and this can negatively impact both natural and human systems. At the local human scale major sustainability benefits accrue from the pursuit of green cities and sustainable parks and gardens.
Forests
Since the Neolithic Revolution, human consumption has reduced the world's forest cover by about 47%. Present-day forests occupy about a quarter of the world's ice-free land with about half of these occurring in the tropics. In temperate and boreal regions forest area is gradually increasing (with the exception of Siberia), but deforestation in the tropics is of major concern.
Forests moderate the local climate and the global water cycle through their light reflectance (albedo) and evapotranspiration. They also conserve biodiversity, protect water quality, preserve soil and soil quality, provide fuel and pharmaceuticals, and purify the air. These free ecosystem services are not given a market value under most current economic systems, and so forest conservation has little appeal when compared with the economic benefits of logging and clearance which, through soil degradation and organic decomposition returns carbon dioxide to the atmosphere. The United Nations Food and Agriculture Organization (FAO) estimates that about 90% of the carbon stored in land vegetation is locked up in trees and that they sequester about 50% more carbon than is present in the atmosphere. Changes in land use currently contribute about 20% of total global carbon emissions (heavily logged Indonesia and Brazil are a major source of emissions). Climate change can be mitigated by sequestering carbon in reafforestation schemes, plantations and timber products. Also wood biomass can be utilized as a renewable carbon-neutral fuel. The FAO has suggested that, over the period 2005–2050, effective use of tree planting could absorb about 10–20% of man-made emissions – so monitoring the condition of the world's forests must be part of a global strategy to mitigate emissions and protect ecosystem services. However, climate change may preempt this FAO scenario as a study by the International Union of Forest Research Organizations in 2009 concluded that the stress of a temperature rise above pre-industrial levels could result in the release of vast amounts of carbon so the potential of forests to act as carbon "sinks" is "at risk of being lost entirely".
Cultivated land
Feeding more than seven billion human bodies takes a heavy toll on the Earth's resources. This begins with the appropriation of about 38% of the Earth's land surface and about 20% of its net primary productivity. Added to this are the resource-hungry activities of industrial agribusiness – everything from the crop need for irrigation water, synthetic fertilizers and pesticides to the resource costs of food packaging, transport (now a major part of global trade) and retail. Food is essential to life. But the list of environmental costs of food production is a long one: topsoil depletion, erosion and conversion to desert from constant tillage of annual crops; overgrazing; salinization; sodification; waterlogging; high levels of fossil fuel use; reliance on inorganic fertilisers and synthetic organic pesticides; reductions in genetic diversity by the mass use of monocultures; water resource depletion; pollution of waterbodies by run-off and groundwater contamination; social problems including the decline of family farms and weakening of rural communities.
All of these environmental problems associated with industrial agriculture and agribusiness are now being addressed through such movements as sustainable agriculture, organic farming and more sustainable business practices.
Extinctions
Although biodiversity loss can be monitored simply as loss of species, effective conservation demands the protection of species within their natural habitats and ecosystems. Following human migration and population growth, species extinctions have progressively increased to a rate unprecedented since the Cretaceous–Paleogene extinction event. Known as the Holocene extinction event, this current human-induced extinction of species ranks as one of the world's six mass extinction events. Some scientific estimates indicate that up to half of presently existing species may become extinct by 2100. Current extinction rates are 100 to 1000 times their prehuman levels, with more than 10% of birds and mammals threatened, about 8% of plants, 5% of fish and more than 20% of freshwater species.
The 2008 IUCN Red List warns that long-term droughts and extreme weather put additional stress on key habitats and, for example, lists 1,226 bird species as threatened with extinction, which is one eighth of all bird species. The Red List Index also identifies 44 tree species in Central Asia as under threat of extinction due to over-exploitation and human development, threatening the region's forests, which are home to more than 300 wild ancestors of modern domesticated fruit and nut cultivars.
Biological invasions
In many parts of the industrial world land clearing for agriculture has diminished and here the greatest threat to biodiversity, after climate change, has become the destructive effect of invasive species. Increasingly efficient global transport has facilitated the spread of organisms across the planet. The potential danger of this aspect of globalization is starkly illustrated through the spread of human diseases like HIV/AIDS, mad cow disease, bird flu and swine flu, but invasive plants and animals are also having a devastating impact on native biodiversity. Non-indigenous organisms can quickly occupy disturbed land and natural areas where, in the absence of their natural predators, they are able to thrive. At the global scale this issue is being addressed through the Global Invasive Species Information Network and through improved international biosecurity legislation to minimise the transmission of pathogens and invasive organisms. Also, through CITES legislation, the trade in rare and threatened species is controlled. Increasingly at the local level public awareness programs are alerting communities, gardeners, the nursery industry, collectors, and the pet and aquarium industries, to the harmful effects of potentially invasive species.
Resistance to change
The environmental sustainability problem has proven difficult to solve. The modern environmental movement has attempted to solve the problem in a large variety of ways, but little progress has been made, as shown by severe ecological footprint overshoot and the lack of sufficient progress on the climate change problem. Something within the human system is preventing change to a sustainable mode of behavior. That system trait is systemic change resistance. Change resistance is also known as organizational resistance, barriers to change, or policy resistance.
See also
Environmental management
Integrated landscape management
Natural resource management
Planetary management
References
Sources
Blood, K. (2001). Environmental Weeds. Mt Waverley, Victoria: C.H. Jerram & Associates. An example of a local guide to invasive plants.
Clarke, R. & King, J. (2006). The Atlas of Water. London: Earthscan.
Groombridge, B. & Jenkins, M.D. (2002). World Atlas of Biodiversity. Berkeley: University of California Press.
Krebs, C.J. (2001). Ecology: the Experimental Analysis of Distribution and Abundance. Sydney: Benjamin Cummings.
Leakey, R. & Lewin, R. (1995). The Sixth Extinction: Patterns of Life and the Future of Humankind. New York: Bantam Dell Publishing Group.
Lindenmayer, D. & Burgman, M. (2005). Practical Conservation Biology. Collingwood, Victoria: CSIRO Publishing.
Huttmanová, E. The Possibilities of Sustainable Development Evaluation in the European Union Area. European Journal of Sustainable Development. ISSN 2239-5938.
Randall, R. (2002). A Global Compendium of Weeds. Meredith, Victoria, Australia: R.G. & F.J. Richardson.
Tudge, C. (2004). So Shall We Reap. London: Penguin Books.
Wilson, E.O. (2002). The Future of Life. New York: Knopf.
External links
Master education in Environmental Management & Sustainability Science at Aalborg University in Denmark
Sustainable development
Systems ecology
Natural resource management | Sustainability and environmental management | Environmental_science | 2,653 |
63,033,632 | https://en.wikipedia.org/wiki/Protocol%20Wars | The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s when it became the dominant protocol suite through rapid adoption of the Internet.
In the late 1960s and early 1970s, the pioneers of packet switching technology built computer networks providing data communication, that is the ability to transfer data between points or nodes. As more of these networks emerged in the mid to late 1970s, the debate about communication protocols became a "battle for access standards". An international collaboration between several national postal, telegraph and telephone (PTT) providers and commercial operators led to the X.25 standard in 1976, which was adopted on public data networks providing global coverage. Separately, proprietary data communication protocols emerged, most notably IBM's Systems Network Architecture in 1974 and Digital Equipment Corporation's DECnet in 1975.
The United States Department of Defense (DoD) developed TCP/IP during the 1970s in collaboration with universities and researchers in the US, UK and France. IPv4 was released in 1981 and was made the standard for all DoD computer networking. By 1984, the international OSI reference model, which was not compatible with TCP/IP, had been agreed upon. Many European governments (particularly France, West Germany and the UK) and the United States Department of Commerce mandated compliance with the OSI model, while the US Department of Defense planned to transition from TCP/IP to OSI.
Meanwhile, the development of a complete Internet protocol suite by 1989, and partnerships with the telecommunication and computer industry to incorporate TCP/IP software into various operating systems laid the foundation for the widespread adoption of TCP/IP as a comprehensive protocol suite. While OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking and as the core component of the emerging Internet.
Early computer networking
Packet switching vs circuit switching
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. In the early 1960s, J. C. R. Licklider proposed the idea of a universal computer network while working at Bolt Beranek & Newman (BBN) and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA, later, DARPA) of the US Department of Defense (DoD). Independently, Paul Baran at RAND in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK invented new approaches to the design of computer networks.
Baran published a series of papers between 1960 and 1964 about dividing information into "message blocks" and dynamically routing them over distributed networks. Davies conceived of and named the concept of packet switching using high-speed interface computers for data communication in 1965–1966. He proposed a national commercial data network in the UK, and designed the local-area NPL network to demonstrate and research his ideas. The first use of the term protocol in a modern data-communication context occurs in an April 1967 memorandum A Protocol for Use in the NPL Data Communications Network written by two members of Davies' team, Roger Scantlebury and Keith Bartlett.
Licklider, Baran and Davies all found it hard to convince incumbent telephone companies of the merits of their ideas. AT&T held a monopoly on communications infrastructure in the United States, as did the General Post Office (GPO) in the United Kingdom, which was the national postal, telegraph and telephone service (PTT). They both believed speech traffic would continue to dominate and continued to invest in traditional telegraphic techniques. Telephone companies were operating on the basis of circuit switching, alternatives to which are message switching or packet switching.
Bob Taylor became the director of the IPTO in 1966 and set out to achieve Licklider's vision to enable resource sharing between remote computers. Taylor hired Larry Roberts to manage the programme. Roberts brought Leonard Kleinrock into the project; Kleinrock had applied mathematical methods to study communication networks in his doctoral thesis. At the October 1967 Symposium on Operating Systems Principles, Roberts presented the early "ARPA Net" proposal, based on Wesley Clark's idea for a message switching network using Interface Message Processors (IMPs). Roger Scantlebury presented Davies' work on a digital communication network and referenced the work of Paul Baran. At this seminal meeting, the NPL paper articulated how the data communications for such a resource-sharing network could be implemented.
Larry Roberts incorporated Davies' and Baran's ideas on packet switching into the proposal for the ARPANET. The network was built by BBN. Designed principally by Bob Kahn, it departed from the NPL's connectionless network model in an attempt to avoid the problem of network congestion. The service offered to hosts by the network was connection oriented. It enforced flow control and error control (although this was not end-to-end). With the constraint that, for each connection, only one message may be in transit in the network, the sequential order of messages is preserved end-to-end. This made the ARPANET what would come to be called a virtual circuit network.
Datagrams vs virtual circuits
Packet switching can be based on either a connectionless or connection-oriented mode, which are different approaches to data communications. A connectionless datagram service transports data packets between two hosts independently of any other packet. Its service is best effort (meaning out-of-order packet delivery and data losses are possible). With a virtual circuit service, data can be exchanged between two host applications only after a virtual circuit has been established between them in the network. After that, flow control is imposed to sources, as much as needed by destinations and intermediate network nodes. Data are delivered to destinations in their original sequential order.
Both concepts have advantages and disadvantages depending on their application domain. Where a best effort service is acceptable, an important advantage of datagrams is that a subnetwork may be kept very simple. A drawback is that, under heavy traffic, no subnetwork is per se protected against congestion collapse. In addition, for users of the best effort service, use of network resources does not enforce any definition of "fairness"; that is, relative delay among user classes.
Datagram services include the information needed for looking up the next link in the network in every packet. In these systems, routers examine each arriving packet, look at their routing information, and decide where to route it. This approach has the advantage that there is no inherent overhead in setting up the circuit, meaning that a single packet can be transmitted as efficiently as a long stream. Generally, this makes routing around problems simpler as only the single routing table needs to be updated, not the information for every virtual circuit. It also requires less memory, as only one route needs to be stored for any destination, not one per virtual circuit. On the downside, there is a need to examine every datagram, which makes them (theoretically) slower.
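A toy sketch of the two forwarding disciplines, showing only the state a switch must keep; all node, link and table names here are hypothetical:

```python
# Datagram forwarding: one routing table entry per destination; every packet
# carries its destination address and is looked up independently.
routing_table = {"host_b": "link_2", "host_c": "link_3"}

def forward_datagram(packet):
    return routing_table[packet["dst"]]          # per-packet lookup

# Virtual-circuit forwarding: a circuit is set up once, and each switch keeps
# per-connection state; data packets then carry only a short circuit id.
circuit_table = {}                               # circuit id -> outgoing link

def setup_circuit(circuit_id, dst):
    circuit_table[circuit_id] = routing_table[dst]   # route chosen at setup time

def forward_on_circuit(packet):
    return circuit_table[packet["vc"]]           # no routing decision per packet

print(forward_datagram({"dst": "host_b", "data": "hello"}))   # link_2
setup_circuit(7, "host_c")
print(forward_on_circuit({"vc": 7, "data": "hello"}))         # link_3
```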
On the ARPANET, the starting point in 1969 for connecting a host computer (i.e., a user) to an IMP (i.e., a packet switch) was the 1822 protocol, which was written by Bob Kahn. Steve Crocker, a graduate student at the University of California Los Angeles (UCLA) formed a Network Working Group (NWG) that year. He said "While much of the development proceeded according to a grand plan, the design of the protocols and the creation of the RFCs was largely accidental." Under the auspices of Leonard Kleinrock at UCLA, Crocker led other graduate students, including Jon Postel, in designing a host-host protocol known as the Network Control Program (NCP). They planned to use separate protocols, Telnet and the File Transfer Protocol (FTP), to run functions across the ARPANET. After approval by Barry Wessler at ARPA, who had ordered certain more exotic elements to be dropped, the NCP was finalized and deployed in December 1970 by the NWG. NCP codified the ARPANET network interface, making it easier to establish, and enabling more sites to join the network.
Roger Scantlebury was seconded from the NPL to the British Post Office Telecommunications division (BPO-T) in 1969. There, engineers developed a packet-switching protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Davies described them as "esoteric".
Rémi Després started work in 1971, at the CNET (the research center of the French PTT), on the development of an experimental packet switching network, later known as RCP. Its purpose was to put into operation a prototype packet switching service to be offered on a future public data network. Després simplified and improved on the virtual call approach, introducing the concept of "graceful saturated operation" in 1972. He coined the term "virtual circuit" and validated the concepts on the RCP network. Once set up, the data packets do not have to contain any routing information, which can simplify the packet structure and improve channel efficiency. The routers are also faster as the route setup is only done once; from then on, packets are simply forwarded down the existing link. One downside is that the equipment has to be more complex as the routing information has to be stored for the length of the connection. Another disadvantage is that the virtual connection may take some time to set up end-to-end, and for small messages, this time may be significant.
TCP vs CYCLADES and INWG vs X.25
Davies had conceived and described datagram networks, done simulation work on them, and built a single packet switch with local lines. Louis Pouzin thought it looked technically feasible to employ a simpler approach to wide-area networking than that of the ARPANET. In 1972, Pouzin launched the CYCLADES project, with cooperation provided by the French PTT, including free lines and modems. He began to research what would later be called internetworking; at the time, he coined the term "catenet" for concatenated network. The name "datagram" was coined by Halvor Bothner-By. Hubert Zimmermann was one of Pouzin's principal researchers and the team included Michel Elie, Gérard Le Lann, and others. While building the network, they were advised by BBN as consultants. Pouzin's team was the first to tackle the highly-complex problem of providing user applications with a reliable virtual circuit while using a best-effort service. The network used unreliable, standard-sized, datagrams in the packet-switched network and virtual circuits for the transport layer. First demonstrated in 1973, it pioneered the use of the datagram model, functional layering, and the end-to-end principle. Le Lann proposed the sliding window scheme for achieving reliable error and flow control on end-to-end connections. However, the sliding window scheme was never implemented on the CYCLADES network and it was never interconnected with other networks (except for limited demonstrations using traditional telegraphic techniques).
Louis Pouzin's ideas to facilitate large-scale internetworking caught the attention of ARPA researchers through the International Network Working Group (INWG), an informal group established by Steve Crocker, Pouzin, Davies, and Peter Kirstein in June 1972 in Paris, a few months before the International Conference on Computer Communication (ICCC) in Washington demonstrated the ARPANET. At the ICCC, Pouzin first presented his ideas on internetworking, and Vint Cerf was approved as INWG's Chair on Steve Crocker's recommendation. INWG grew to include other American researchers, members of the French CYCLADES and RCP projects, and the British teams working on the NPL network, EPSS and the proposed European Informatics Network (EIN), a datagram network. Like Baran in the mid-1960s, when Roberts approached AT&T about taking over the ARPANET to offer a public packet-switched service, they declined.
Bob Kahn joined the IPTO in late 1972. Although initially expecting to work in another field, he began work on satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In Spring 1973, Vint Cerf moved to Stanford University. With funding from DARPA, he began collaborating with Kahn on a new protocol to replace NCP and enable internetworking. Cerf built a research team at Stanford studying the use of fragmentable datagrams. Gérard Le Lann joined the team during the period 1973-4 and Cerf incorporated his sliding windows scheme into the research work.
Also in the United States, Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. INWG met in Stanford in June 1973. Zimmermann and Metcalfe dominated the discussions. Notes from the meetings were recorded by Cerf and Alex McKenzie, from BBN, and published as numbered INWG Notes (some of which were also RfCs). Building on this, Kahn and Cerf presented a paper at a networking conference at the University of Sussex in England in September 1973. Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmerman. Most of the work was done by Kahn and Cerf working as a duet.
Peter Kirstein put internetworking into practice at University College London (UCL) in June 1973, connecting the ARPANET to British academic networks, the first international heterogeneous computer network. By 1975, there were 40 British academic and research groups using the link.
The seminal paper, A Protocol for Packet Network Intercommunication, published by Cerf and Kahn in 1974 addressed the fundamental challenges involved in interworking across datagram networks with different characteristics, including routing in interconnected networks, and packet fragmentation and reassembly. The paper drew upon and extended their prior research, developed in collaboration and competition with other American, British and French researchers. DARPA sponsored work to formulate the first version of the Transmission Control Program (TCP) later that year. At Stanford, its specification was written in December by Cerf with Yogen Dalal and Carl Sunshine as a monolithic (single layer) design. The following year, testing began through concurrent implementations at Stanford, BBN and University College London, but it was not installed on the ARPANET at this time.
A protocol for internetworking was also being pursued by INWG. There were two competing proposals, one based on the early Transmission Control Program proposed by Cerf and Kahn (using fragmentable datagrams), and the other based on the CYCLADES transport protocol proposed by Pouzin, Zimmermann and Elie (using standard-sized datagrams). A compromise was agreed and Cerf, McKenzie, Scantlebury and Zimmermann authored an "international" end-to-end protocol. It was presented to the CCITT by Derek Barber in 1975 but was not adopted by the CCITT nor by the ARPANET.
The fourth biennial Data Communications Symposium later that year included presentations from Davies, Pouzin, Derek Barber, and Ira Cotten about the current state of packet-switched networking. The conference was covered by Computerworld magazine which ran a story on the "battle for access standards" between datagrams and virtual circuits, as well as a piece describing the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". At the conference, Pouzin said pressure from European PTTs forced the Canadian DATAPAC network to change from a datagram to virtual circuit approach, although historians attribute this to IBM's rejection of their request for modification to their proprietary protocol. Pouzin was outspoken in his advocacy for datagrams and attacks on virtual circuits and monopolies. He spoke about the "political significance of the [datagram versus virtual circuit] controversy," which he saw as "initial ambushes in a power struggle between carriers and the computer industry. Everyone knows in the end, it means IBM vs. Telecommunications, through mercenaries."
After Larry Roberts and Barry Wessler left ARPA in 1973 to found Telenet, a commercial packet-switched network in the US, they joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized. With contributions from the French, British, and Japanese PTTs, particularly the work of Rémi Després on RCP and TRANSPAC, along with concepts from DATAPAC in Canada, and Telenet in the US, the X.25 standard was agreed by the CCITT in 1976. X.25 virtual circuits were easily marketed because they permit simple host protocol support. They also satisfy the INWG expectation of 1972 that each subnetwork can exercise its own protection against congestion (a feature missing with datagrams).
Larry Roberts adopted X.25 on Telenet and found that "datagram packets are now more expensive than VC packets" in 1978. Vint Cerf said Roberts turned down his suggestion to use TCP when he built Telenet, saying that people would only buy virtual circuits and he could not sell datagrams. Roberts predicted that "As part of the continuing evolution of packet switching, controversial issues are sure to arise." Pouzin remarked that "the PTT's are just trying to drum up more business for themselves by forcing you to take more service than you need."
Common host protocol vs translating between protocols
Internetworking protocols were still in their infancy. Various groups, including ARPA researchers, the CYCLADES team, and others participating in INWG, were researching the issues involved, including the use of gateways to connect between two networks. At the National Physical Laboratory in the UK, Davies' team studied the "basic dilemma" involved in interconnecting networks: a common host protocol requires restructuring existing networks that use different protocols. To explore this dilemma, the NPL network connected with the EIN by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the EPSS used a common host protocol in both networks. NPL research confirmed establishing a common host protocol would be more reliable and efficient.
The CYCLADES project, however, was shut down in the late 1970s for budgetary, political and industrial reasons and Pouzin was "banished from the field he had inspired and helped to create".
DoD model vs X.25/X.75 vs proprietary standards
The design of the Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. A DARPA internetworking experiment in July 1977 linking the ARPANET, SATNET and PRNET demonstrated its viability. Subsequently, DARPA and collaborating researchers at Stanford, UCL and BBN, among others, began work on the Internet, publishing a series of Internet Experiment Notes. Bob Kahn's efforts led to the absorption of MIT's proposal by Dave Clark and Dave Reed for a Data Stream Protocol (DSP) into version 3 of TCP in January 1978 written by Cerf, now at DARPA, and Jon Postel at the Information Sciences Institute of the University of Southern California (USC). Following discussions with Yogen Dalal and Bob Metcalfe at Xerox PARC, in version 4 of TCP, first drafted in September 1978, Postel split the Transmission Control Program into two distinct protocols, the Transmission Control Protocol (TCP) as a reliable connection-oriented service and the Internet Protocol (IP) as connectionless service. For applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP. Referred to as TCP/IP from December 1978, Version 4 was made standard for all military computer networking in March 1982. It was installed on SATNET and adopted by NORSAR/NDRE in March and Peter Kirstein's group at UCL in November. On January 1, 1983, known as "flag day", TCP/IP was installed on the ARPANET. This resulted in a networking model that became known as the DoD internet architecture model (DoD model for short) or DARPA model. Leonard Kleinrock's theoretical work published in the mid-1970s on the performance of the ARPANET was referred to during the development of the protocol.
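The TCP/UDP split is still visible in present-day socket APIs. A minimal Python sketch of the contrast (no data is actually sent, and the commented calls are illustrative only):

```python
import socket

# TCP: a reliable, connection-oriented byte stream (SOCK_STREAM).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# A real client would call tcp_sock.connect(("example.org", 80)) before sending;
# the stack then handles ordering, retransmission and flow control.

# UDP: direct access to IP's best-effort datagram service (SOCK_DGRAM).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A real client could call udp_sock.sendto(b"ping", ("example.org", 9999))
# immediately; each datagram is independent and may be lost or reordered.

print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()
```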
The Coloured Book protocols, developed by British Post Office Telecommunications and the academic community at UK universities, gained some acceptance internationally as the first complete X.25 standard. First defined in 1975, they gave the UK "several years lead over other countries" but were intended as "interim standards" until international agreement was reached. The X.25 standard gained political support in European countries and from the European Economic Community (EEC). The EIN, which was based on datagrams, was replaced with Euronet, which used X.25. Peter Kirstein wrote that European networks tended to be short-term projects with smaller numbers of computers and users. As a result, the European networking activities did not lead to any strong standards except X.25, which became the main European data protocol for fifteen to twenty years. Kirstein said his group at University College London was widely involved, partly because they were one of the groups with the most expertise, and partly to try to ensure that the British activities, such as the JANET NRS, did not diverge too far from the US. The construction of public data networks based on the X.25 protocol suite continued through the 1980s; international examples included the International Packet Switched Service (IPSS) and the SITA network. Complemented by the X.75 standard, which enabled internetworking across national PTT networks in Europe and commercial networks in North America, this led to a global infrastructure for commercial data transport.
Computer manufacturers developed proprietary protocol suites such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's (DEC's) DECnet, Xerox's Xerox Network Systems (XNS, based on PUP) and Burroughs' BNA. By the end of the 1970s, IBM's networking activities were, by some measures, two orders of magnitude larger in scale than the ARPANET. During the late 1970s and most of the 1980s, there remained a lack of open networking options. Therefore, proprietary standards, particularly SNA and DECnet, as well as some variants of XNS (e.g., Novell NetWare and Banyan VINES), were commonly used on private networks, becoming somewhat "de facto" industry standards. Ethernet, promoted by DEC, Intel, and Xerox, outcompeted MAP/TOP, promoted by General Motors and Boeing. DEC was an exception among the computer manufacturers in supporting the peer-to-peer approach.
In the US, the National Science Foundation (NSF), NASA, and the United States Department of Energy (DoE) all built networks variously based on the DoD model, DECnet, and IP over X.25.
Internet–OSI Standards War
The early research and development of standards for data networks and protocols culminated in the Internet–OSI Standards War in the 1980s and early 1990s. Engineers, organizations and nations became polarized over the issue of which standard would result in the best and most robust computer networks. Both standards are open and non-proprietary in addition to being incompatible, although "openness" may have worked against OSI while being successfully employed by Internet advocates.
OSI reference model
Researchers in the UK and elsewhere identified the need for defining higher-level protocols. The UK National Computing Centre publication 'Why Distributed Computing', which was based on extensive research into future potential configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.
Hubert Zimmermann and, as chairman, Charles Bachman played key roles in the development of the Open Systems Interconnection reference model. They considered it too early to define a set of binding standards while technology was still developing, since irreversible commitment to a particular standard might prove sub-optimal or constraining in the long run. Although the effort was dominated by computer manufacturers, it had to contend with many competing priorities and interests. The rate of technological change made it necessary to define a model that new systems could converge to, rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was an architectural framework that could accommodate existing and future standards.
Beginning in 1978, international work led to a draft proposal in 1980. In developing the proposal, there were clashes of opinions between computer manufacturers and PTTs, and of both against IBM. The final OSI model was published in 1984 by the International Organization for Standardization (ISO) in alliance with the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), which was dominated by the PTTs.
The most fundamental idea of the OSI model was that of a "layered" architecture. The layering concept was simple in principle but very complex in practice. The OSI model redefined how engineers thought about network architectures.
Internet protocol suite
The DoD model and other existing protocols, such as X.25 and SNA, all quickly adopted a layered approach in the late 1970s. Although the OSI model shifted power away from the PTTs and IBM towards smaller manufacturers and users, the "strategic battle" remained the competition between the ITU's X.25 and proprietary standards, particularly SNA. Neither was fully OSI compliant. Proprietary protocols were based on closed standards and struggled to adopt layering, while X.25 was limited in terms of speed and the higher-level functionality that would become important for applications. As early as 1982, "zealous" advocates of the OSI reference model were criticised, as was the functionality of the X.25 protocol and its use as an "end-to-end" protocol in the sense of a transport or host-to-host protocol.
Vint Cerf formed the Internet Configuration Control Board (ICCB) in 1979 to oversee the network's architectural evolution and field technical questions. However, DARPA was still in control and, outside the nascent Internet community, TCP/IP was not even a candidate for universal adoption. The implementation in 1985 of the Domain Name System proposed by Paul Mockapetris at USC, which enabled network growth by facilitating cross-network access, and the development of TCP congestion control by Van Jacobson in 1986–88, led to a complete protocol suite, as outlined in and in 1989. This laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became known as the Internet protocol suite.
DARPA studied and implemented gateways, which helped to neutralize X.25 as a rival networking paradigm. The computer science historian Janet Abbate explained: "by running TCP/IP over X.25, [D]ARPA reduced the role of X.25 to providing a data conduit, while TCP took over responsibility for end-to-end control. X.25, which had been intended to provide a complete networking service, would now be merely a subsidiary component of [D]ARPA's own networking scheme. The OSI model reinforced this reinterpretation of X.25's role. Once the concept of a hierarchy of protocols had been accepted, and once TCP, IP, and X.25 had been assigned to different layers in this hierarchy, it became easier to think of them as complementary parts of a single system, and more difficult to view X.25 and the Internet protocols as distinct and competing systems."
The DoD reduced research funding for networks, responsibilities for governance shifted to the National Science Foundation and the ARPANET was shut down in 1990.
Philosophical and cultural aspects
Historian Andrew L. Russell wrote that Internet engineers such as Danny Cohen and Jon Postel were accustomed to continual experimentation in a fluid organizational setting through which they developed TCP/IP. They viewed OSI committees as overly bureaucratic and out of touch with existing networks and computers. This alienated the Internet community from the OSI model. A dispute broke out within the Internet community after the Internet Architecture Board (IAB) proposed replacing the Internet Protocol in the Internet with the OSI Connectionless Network Protocol (CLNP). In response, Vint Cerf performed a striptease in a three-piece suit while presenting to the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything". According to Cerf, his intention was to reiterate that a goal of the IAB was to run IP on every underlying transmission medium. At the same meeting, David Clark summarized the IETF approach with the famous saying "We reject: kings, presidents, and voting. We believe in: rough consensus and running code." The Internet Society (ISOC) was chartered that year.
Cerf later said the social culture (group dynamics) that first evolved during the work on the ARPANET was as important as the technical developments in enabling the governance of the Internet to adapt to the scale and challenges involved as it grew.
François Flückiger wrote that "firms that win the Internet market, like Cisco, are small. Simply, they possess the Internet culture, are interested in it and, notably, participate in IETF."
Furthermore, the Internet community was opposed to a homogeneous approach to networking, such as one based on a proprietary standard such as SNA. They advocated for a pluralistic model of internetworking where many different network architectures could be joined into a network of networks.
Technical aspects
Russell notes that Cohen, Postel and others were frustrated with technical aspects of OSI. The model defined seven layers of computer communications, from physical media in layer 1 to applications in layer 7, which was more layers than the network engineering community had anticipated. In 1987, Steve Crocker said that although they envisaged a hierarchy of protocols in the early 1970s, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required." Some sources, however, say this was an acknowledgement that the four layers of the Internet Protocol Suite were inadequate.
Strict layering in OSI was viewed by Internet advocates as inefficient because it did not allow trade-offs ("layer violations") to improve performance. The OSI model allowed what some saw as too many transport protocols (five compared with two for TCP/IP). Furthermore, OSI allowed for both the datagram and the virtual circuit approach at the network layer, which are non-interoperable options.
By the early 1980s, the conference circuit became more acrimonious. Carl Sunshine summarized in 1989: "In hindsight, much of the networking debate has resulted from differences in how to prioritize the basic network design goals such as accountability, reliability, robustness, autonomy, efficiency, and cost effectiveness. Higher priority on robustness and autonomy led to the DoD Internet design, while the PDNs have emphasized accountability and controllability."
Richard des Jardins, an early contributor to the OSI reference model, captured the intensity of the rivalry in a 1992 article by saying "Let's continue to get the people of good will from both communities to work together to find the best solutions, whether they are two-letter words or three-letter words, and let's just line up the bigots against a wall and shoot them."
In 1996, described the "Architectural Principles of the Internet" by saying "in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."
Practical and commercial aspects
Beginning in the early 1980s, DARPA pursued commercial partnerships with the telecommunication and computer industry which enabled the adoption of TCP/IP. In Europe, CERN purchased UNIX machines with TCP/IP for their intranet between 1984 and 1988. Nonetheless, Paul Bryant, the UK representative on the European Academic and Research Network (EARN) Board of Directors, said "By the time JNT [the UK academic network JANET] came along [in 1984] we could demonstrate X25… and we firmly believed that BT [British Telecom] would provide us with the network infrastructure and we could do away with leased lines and experimental work. If we had gone with DARPA then we would not have expected to be able to use a public service. In retrospect the flaws in that argument are clear but not at the time. Although we were fairly proud of what we were doing, I don't think it was national pride or anti USA that drove us, it was a belief that we were doing the right thing. It was the latter that translated to religious dogma." JANET was a free X.25-based network for academic use, not research; experiments and other protocols were forbidden.
The DARPA Internet was still a research project that did not allow commercial traffic or for-profit services. The NSFNET initiated operations in 1986 using TCP/IP but, two years later, the US Department of Commerce mandated compliance with the OSI model and the Department of Defense planned to transition away from TCP/IP to OSI. Carl Sunshine wrote in 1989 that "by the mid-1980s ... serious performance problems were emerging [with TCP/IP], and it was beginning to look like the critics of "stateless" datagram networking might have been right on some points".
The major European countries and the EEC endorsed OSI. They founded RARE and associated national network operators (such as DFN, SURFnet, SWITCH) to promote OSI protocols, and restricted funding for non-OSI compliant protocols. However, by 1988, the Internet community had defined the Simple Network Management Protocol (SNMP) to enable management of network devices (such as routers) on multi-vendor networks and the Interop '88 trade show showcased new products for implementing networks based on TCP/IP. The same year, EUnet, the European UNIX Network, announced its conversion to Internet technology. By 1989, the OSI advocate Brian Carpenter made a speech at a technical conference entitled "Is OSI Too Late?" which received a standing ovation. OSI was formally defined, but vendor products from computer manufacturers and network services from PTTs were still to be developed. TCP/IP by comparison was not an official standard (it was defined in unofficial RFCs) but UNIX workstations with both Ethernet and TCP/IP included had been available since 1983 and now served as a de facto interoperability standard. Carl Sunshine notes that "research is underway on how to optimize TCP/IP performance over variable delay and/or very-high-speed networks". However, Bob Metcalfe said "it has not been worth the ten years wait to get from TCP to TP4, but OSI is now inevitable" and Sunshine expected "OSI architecture and protocols ... will dominate in the future." The following year, in 1990, Cerf said: "You can't pick up a trade press article anymore without discovering that somebody is doing something with TCP/IP, almost in spite of the fact that there has been this major effort to develop international standards through the international standards organization, the OSI protocol, which eventually will get there. It's just that they are taking a lot of time."
By the beginning of the 1990s, some smaller European countries had adopted TCP/IP. In February 1990, RARE stated "without putting into question its OSI policy, [RARE] recognizes the TCP/IP family of protocols as an open multivendor suite, well adapted to scientific and technical applications." In the same month, CERN established a transatlantic TCP/IP link with Cornell University in the United States. Conversely, starting in August 1990, the NSFNET backbone supported the OSI CLNP in addition to TCP/IP. CLNP was demonstrated in production on NSFNET in April 1991, and OSI demonstrations, including interconnections between US and European sites, were planned at the Interop '91 conference in October that year.
At the Rutherford Appleton Laboratory (RAL) in the United Kingdom in January 1991, DECnet represented 75% of traffic, attributed to Ethernet between VAXs. IP was the second most popular set of protocols with 20% of traffic, attributed to UNIX machines for which "IP is the natural choice". Paul Bryant, Head of Communications and Small Systems at RAL, wrote "Experience has shown that IP systems are very easy to mount and use, in contrast to such systems as SNA and to a lesser extent X.25 and Coloured Books where the systems are rather more complex." The author continued "The principal network within the USA for academic traffic is now based on IP. IP has recently become popular within Europe for inter-site traffic and there are moves to try and coordinate this activity. With the emergence of such a large combined USA/Europe network there are great attractions for UK users to have good access to it. This can be achieved by gatewaying Coloured Book protocols to IP or by allowing IP to penetrate the UK. Gateways are well known to be a cause of loss of quality and frustration. Allowing IP to penetrate may well upset the networking strategy of the UK." Similar views were shared by others at the time, including Louis Pouzin. At CERN, Flückiger reflected "The technology is simple, efficient, is integrated into UNIX-type operating systems and costs nothing for the users' computers. The first companies that commercialize routers, such as Cisco, seem healthy and supply good products. Above all, the technology used for local campus networks and research centres can also be used to interconnect remote centers in a simple way."
Beginning in March 1991, the JANET IP Service (JIPS) was set up as a pilot project to host IP traffic on the existing network. Within eight months, the IP traffic had exceeded the levels of X.25 traffic, and the IP support became official in November. Also in 1991, Dai Davies introduced Internet technology over X.25 into the pan-European NREN, EuropaNet, although he experienced personal opposition to this approach. The EARN and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. OSI usage on the NSFNET remained low when compared to TCP/IP. In the UK, the JANET community talked about a transition to OSI protocols, which was to begin with moving to X.400 mail as the first step, but this never happened. The X.25 service was closed in August 1997.
Mail was commonly delivered via the Unix-to-Unix Copy Program (UUCP) in the 1980s, which was well suited for handling message transfers between machines that were intermittently connected. The Government Open Systems Interconnection Profile (GOSIP), developed in the late 1980s and early 1990s, would have led to X.400 adoption. Proprietary commercial systems offered an alternative. In practice, use of the Internet suite of email protocols (SMTP, POP and IMAP) grew rapidly.
The invention of the World Wide Web in 1989 by Tim Berners-Lee at CERN, as an application on the Internet, brought many social and commercial uses to what was previously a network of networks for academic and research institutions. The Web began to enter everyday use in 1993–4. The US National Institute for Standards and Technology proposed in 1994 that GOSIP should incorporate TCP/IP and drop the requirement for compliance with OSI, which was adopted into Federal Information Processing Standards the following year. NSFNET had altered its policies to allow commercial traffic in 1991, and was shut down in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic. Subsequently, the Internet backbone was provided by commercial Internet service providers and Internet connectivity became ubiquitous.
Legacy
As the Internet evolved and expanded exponentially, an enhanced protocol was developed, IPv6, to address IPv4 address exhaustion. In the 21st century, the Internet of things is leading to the connection of new types of devices to the Internet, bringing reality to Cerf's vision of "IP on Everything". Nonetheless, shortcomings exist with today's Internet; for example, insufficient support for multihoming. Alternatives have been proposed, such as Recursive Network Architecture, and Recursive InterNetwork Architecture.
The seven-layer OSI model is still used as a reference for teaching and documentation; however, the OSI protocols conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model doesn't fit today's networking protocols and have suggested instead a simplified approach.
Other standards such as X.25 and SNA remain niche players.
Historiography
Katie Hafner and Matthew Lyon published one of the earliest in-depth and comprehensive histories of the ARPANET and how it led to the Internet. Where Wizards Stay Up Late: The Origins of the Internet (1996) explores the "human dimension" of the development of the ARPANET covering the "theorists, computer programmers, electronic engineers, and computer gurus who had the foresight and determination to pursue their ideas and affect the future of technology and society".
Roy Rosenzweig suggested in 1998 that no one single account of the history of the Internet is sufficient and there will need to be a more adequate history written that includes aspects of many books.
Janet Abbate's 1999 book Inventing the Internet was widely reviewed as an important work on the history of computing and networking, particularly in highlighting the role of social dynamics and of non-American participation in early networking development. The book was also praised for its use of archival resources to tell the history. She has since written about the need for historians to be aware of the perspectives they take in writing about the history of the Internet and explored the implications of defining the Internet in terms of "technology, use and local experience" rather than through the lens of the spread of technologies from the United States.
In his many publications on the "histories of networking", Andrew L. Russell argues scholars could and should look differently at the history of the Internet. His work shifts scholarly and popular understanding about the origins of the Internet and contemporary work in Europe that both competed and cooperated with the push for TCP/IP. James Pelkey conducted interviews with Internet pioneers in the late 1980s and completed his book with Andrew Russell in 2022.
Martin Campbell-Kelly and Valérie Schafer have focused on British and French contributions as well as global and international considerations in the development of packet switching, internetworking and the Internet.
See also
History of the Internet
History of email
History of the World Wide Web
List of Internet pioneers
Notes
References
Sources
Primary sources
In chronological order:
, private papers.
Crocker, Steve; McKenzie, Alex; Postel, Jon (January 1972), Host-Host Protocol for the Arpa Network. NIC 8246.
Pouzin, Louis (May 1974), A Proposal for Interconnecting Packet Switching Networks, Proceedings of EUROCOMP, Brunel University, pp. 1023-36.
Further reading
External links
Roger Scantlebury: Intro to the Protocol Wars, Computer History Museum
Computer Freaks Podcasts, Inc. magazine
Internet Histories: Digital Technology, Culture and Society, Routledge
Internet protocols
History of the Internet
Communications protocols
Network protocols
OSI model
X.25 | Protocol Wars | Technology | 9,051 |
40,810,996 | https://en.wikipedia.org/wiki/Generic%20Product%20Identifier | The Generic Product Identifier (GPI) is a 14-character hierarchical classification system created by Wolters Kluwer's Medi-Span that identifies drugs from their primary therapeutic use down to the unique interchangeable product regardless of manufacturer or package size. The code consists of seven subsets, each providing increasingly specific information about a drug available with a prescription in the United States. The GPI is created and maintained by UpToDate, Inc., a Wolters Kluwer company.
The GPI defines Drug Group, Drug Class, Drug Subclass, Drug Base Name, Drug Name, Dose Form, and GPI Name in a codified manner. The first six characters of the GPI define the therapeutic class code, the next two pairs define the drug name, and the last four define route, dosage or strength. For example, GPI 58-20-00-60-10-01-05 is for the drug nortriptyline HCl cap 10 mg (an antidepressant) and can be further classified as follows:
Alternate drug classification systems include the AHFS Drug Information brand run by the American Society of Health-System Pharmacists and First DataBank's Generic Sequence Number (GSN), also known as the Clinical Formulation ID or, formerly, as the Generic Code Number Sequence Number (GCN Seq No).
Wolters Kluwer provides a database under their Medi-Span brand called Medi-Span Electronic Drug File v2.5 that provides this therapeutic classification system which can be mapped to other prescription drug classification codes commonly used for payment and analysis in the United States Health Care System. This classification system is used in conjunction with other embedded drug information like adverse drug effects, drug interactions, drug dosing, and more.
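As a rough illustration of the hierarchy described above, the sketch below splits a 14-character GPI into seven two-character levels. It is not an official Medi-Span parser, and the level labels are assumptions inferred from the field names mentioned in this article rather than Medi-Span's published definitions:

```python
# Minimal sketch (not an official Medi-Span parser): split a 14-character GPI
# into its seven two-character levels. The labels below are assumptions based
# on the field names in the text above.
GPI_LEVELS = [
    "Drug Group",        # characters 1-2
    "Drug Class",        # characters 3-4
    "Drug Subclass",     # characters 5-6  (first six = therapeutic class code)
    "Drug Base Name",    # characters 7-8
    "Drug Name",         # characters 9-10
    "Dose Form",         # characters 11-12
    "Strength/Route",    # characters 13-14 (assumed label)
]

def parse_gpi(gpi: str) -> dict:
    digits = gpi.replace("-", "")
    if len(digits) != 14:
        raise ValueError("A GPI is a 14-character code")
    pairs = [digits[i:i + 2] for i in range(0, 14, 2)]
    return dict(zip(GPI_LEVELS, pairs))

# Example from the article: nortriptyline HCl cap 10 mg
print(parse_gpi("58-20-00-60-10-01-05"))
```

Run on the example GPI above, this yields "58" as the top-level pair, with each successive pair narrowing the classification down to the specific interchangeable product.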
References
Pharmacological classification systems | Generic Product Identifier | Chemistry | 370 |
20,214,001 | https://en.wikipedia.org/wiki/M.%20Lynne%20Markus | M. Lynne Markus (born 1950) is an American information systems researcher and the John W. Poduska, Sr. Chair of Information and Process Management at Bentley University, who has made fundamental contributions to the study of enterprise systems and inter-enterprise systems, IT and organizational change, and knowledge management.
Education
Markus received her B.S. in 1972 from the University of Pittsburgh, and her PhD in Organizational Behavior in 1979 from the Case Western Reserve University.
Career and research
She was formerly a member of the Faculty of Business at the City University of Hong Kong (as Chair Professor of Electronic Business), the Peter F. Drucker Graduate School of Management at Claremont Graduate University, the Anderson Graduate School of Management (UCLA), and the MIT Sloan School of Management.
Markus' research interests are in the fields of "effective design, implementation and use of information systems within and across organizations; the risks and unintended consequences of information technology use; and innovations in the governance and management of information technology."
Her work in these areas has been published in several high-impact peer-reviewed journals, and set the stage for much of the future work in these areas. She is one of the most widely cited researchers in the field of information systems.
Her article "The Technology Shaping Effects of E-Collaboration Technologies – Bugs and Features" was selected as the best article published in 2005 in the International Journal of e-Collaboration. The article "Industry-Wide Information Systems Standardization as Collective Action: The Case of the U.S. Residential Mortgage Industry", which she co-authored, was selected as the paper of the year for 2006 in the journal MIS Quarterly.
Awards and honours
Best article published in 2005 in the International Journal of e-Collaboration.
Paper of the year for 2006 in the journal MIS Quarterly.
2008 Leo Award for Exceptional Lifetime Achievement in Information Systems by the Association for Information Systems.
Selected publications
Markus, M. Lynne, and Robert I. Benjamin. 1997. The Magic Bullet Theory In IT-Enabled Transformation, Sloan Management Review, 38(2): 55-68.
Markus, M. Lynne. 1983. Power, Politics, and MIS Implementation, Communications of the ACM, 26(6): 430-444.
Markus, M. Lynne. 1987. Toward a 'Critical Mass' Theory of Interactive Media: Universal Access, Interdependence, and Diffusion, Communications Research, 14(5): 491-511.
Markus, M. Lynne and Daniel Robey. 1988. Information Technology and Organizational Change: Causal Structure in Theory and Research, Management Science, 34(5): 583-598.
Ortiz de Guinea, Ana and M. Lynne Markus. 2009. Why break the habit of a lifetime? Rethinking the roles of intention, habit, and emotion in continuing information technology use, MIS Quarterly, 33(3): 433-444.
References
External links
M. Lynne Markus at bentley.edu
Living people
American sociologists
American women sociologists
Information systems researchers
MIT Sloan School of Management faculty
Year of birth uncertain
21st-century American women
Year of birth missing (living people) | M. Lynne Markus | Technology | 638 |
10,156,258 | https://en.wikipedia.org/wiki/Rectoanal%20inhibitory%20reflex | The rectoanal inhibitory reflex (RAIR), also known as the anal sampling mechanism, anal sampling reflex, rectosphincteric reflex, or anorectal sampling reflex, is a reflex characterized by a transient involuntary relaxation of the internal anal sphincter in response to distention of the rectum. The RAIR provides the upper anal canal with the ability to discriminate between flatus and fecal material.
The ability of the rectum to discriminate between gaseous, liquid and solid contents is essential to the ability to voluntarily control defecation. The RAIR allows for voluntary flatulation to occur without also eliminating solid waste, irrespective of the presence of fecal material in the anal canal.
Reflex arc
The physiological basis for the RAIR is poorly understood, but it is thought to involve a coordinated response by the internal anal sphincter to rectal distention with recovery of anal pressure from the distal to the proximal sphincter. Mediated by the autonomic nervous system, the afferent limb of this reflex depends upon an intact network of interstitial cells of Cajal in the internal anal sphincter. These cells, which are mediated at least in part by nitric oxide, provide inhibitory innervation of the internal anal sphincter.
Clinical significance
Impairment of this reflex can result in fecal incontinence. The absence of a RAIR is pathognomonic for Hirschsprung's disease.
See also
External anal sphincter
Levator ani
References
Rectum
Excretion | Rectoanal inhibitory reflex | Biology | 326 |
7,997,999 | https://en.wikipedia.org/wiki/Sheet%20moulding%20compound | Sheet moulding compound (SMC) or sheet moulding composite is a ready to mould glass-fibre reinforced polyester material primarily used in compression moulding. The sheet is provided in rolls weighing up to 1000 kg. Alternatively the resin and related materials may be mixed on site when a producer wants greater control over the chemistry and filler.
SMC is both a process and a reinforced composite material. It is manufactured by dispersing long strands (usually >1") of chopped fiber, commonly glass fibers or carbon fibers, on a bath of thermoset resin (typically polyester resin, vinyl ester resin or epoxy resin). The longer fibers in SMC result in better strength properties than standard bulk moulding compound (BMC) products. Typical applications include demanding electrical applications, corrosion-resistant components, low-cost structural components, automotive, and transit.
Process
A paste reservoir dispenses a measured amount of the specified resin paste onto a plastic carrier film. This carrier film passes underneath a chopper, which cuts the fibers onto the surface. Once these have drifted through the depth of the resin paste, another sheet is added on top, sandwiching the glass. The sheets are compacted and then wound onto a take-up roll, which is used to store the product while it matures. The carrier film is later removed and the material is cut into charges. The required part shape determines the shape of the charge and of the steel die to which it is then added. Heat and pressure act on the charge and, once fully cured, the part is removed from the mould as the finished product. Fillers both reduce weight and change the physical properties, typically adding strength. Production challenges include wetting the filler, which could consist of glass microspheres or aligned fibers rather than random chopped fibers; adjusting die temperature and pressure to provide the proper geometry; and adjusting the chemistry to the end use.
Advantages
Compared to similar methods, SMC benefits from very high-volume production capability and excellent part reproducibility. It is cost-effective because labor requirements per part are low and industry scrap is reduced substantially. Weight reduction, due to lower dimensional requirements and the ability to consolidate many parts into one, is also advantageous. The level of flexibility also exceeds that of many counterpart processes.
Physical properties
Properties vary depending upon filler and resin types, with compounds using aligned fibers (especially long fibers) being subject to greater anisotropy. Typical ranges are listed below.
See also
Bulk moulding compound
Fiberglass
Fibre-reinforced plastic
Thermosetting polymer
Thermoset polymer matrix
Forged composite
CFSMC
References
Plastics industry
Composite materials
Fibre-reinforced polymers | Sheet moulding compound | Physics | 542 |
591,375 | https://en.wikipedia.org/wiki/Willem%20de%20Sitter | Willem de Sitter (6 May 1872 – 20 November 1934) was a Dutch mathematician, physicist, and astronomer. The De Sitter universe is a cosmological model named after him.
Life and work
Born in Sneek, De Sitter studied mathematics at the University of Groningen and then joined the Groningen astronomical laboratory. He worked at the Cape Observatory in South Africa (1897–1899). Then, in 1908, De Sitter was appointed to the chair of astronomy at Leiden University. He was director of the Leiden Observatory from 1919 until his death.
De Sitter made major contributions to the field of physical cosmology. He co-authored a paper with Albert Einstein in 1932 in which they discussed the implications of cosmological data for the curvature of the universe. He also came up with the concepts of De Sitter space and the De Sitter universe, a solution of Einstein's general relativity containing no matter and a positive cosmological constant. This results in an exponentially expanding, empty universe. De Sitter was also well known for his research on the motions of the moons of Jupiter, and was invited to give the George Darwin Lecture to the Royal Astronomical Society in 1931.
Willem de Sitter died after a brief illness in November 1934.
Honours
In 1912, he became a member of the Royal Netherlands Academy of Arts and Sciences.
Awards
James Craig Watson Medal (1929)
Bruce Medal (1931)
Gold Medal of the Royal Astronomical Society (1931)
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1934)
Named after him
The crater De Sitter on the Moon
Asteroid 1686 De Sitter
De Sitter universe
De Sitter space
Anti-de Sitter space
De Sitter invariant special relativity
Einstein–de Sitter universe
De Sitter double star experiment
De Sitter precession
De Sitter–Schwarzschild metric
Family
One of his sons, Ulbo de Sitter (1902 – 1980), was a Dutch geologist, and one of Ulbo's sons was the Dutch sociologist Ulbo de Sitter (1930 – 2010).
Another son of Willem, Aernout de Sitter (1905 – 15 September 1944), was the director of the Bosscha Observatory in Lembang, Indonesia (then the Dutch East Indies), where he studied the Messier 4 globular cluster.
Selected publications
On Einstein's theory of gravitation and its astronomical consequences:
See also
De Sitter double star experiment
De Sitter precession
De Sitter relativity
De Sitter space
De Sitter universe
Anti-de Sitter space
The Dreams in the Witch House, a story by H. P. Lovecraft featuring de Sitter, and inspired by his lecture The Size of the Universe
References
External links
P.C. van der Kruit Willem de Sitter (1872 – 1934) in: History of science and scholarship in the Netherlands.
A. Blaauw, Sitter, Willem de (1872–1934), in Biografisch Woordenboek van Nederland.
Bruce Medal page
Awarding of Bruce Medal: PASP 43 (1931) 125
Awarding of RAS gold medal: MNRAS 91 (1931) 422
de Sitter's binary star arguments against Ritz's relativity theory (1913) (four articles)
Obituaries
AN 253 (1934) 495/496 (one line)
JRASC 29 (1935) 1
MNRAS 95 (1935) 343
Obs 58 (1935) 22
PASP 46 (1934) 368 (one paragraph)
PASP 47 (1935) 65
1872 births
1934 deaths
19th-century Dutch astronomers
19th-century Dutch mathematicians
20th-century Dutch astronomers
Dutch relativity theorists
20th-century Dutch mathematicians
Cosmologists
People from Sneek
Academic staff of Leiden University
University of Groningen alumni
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Recipients of the Gold Medal of the Royal Astronomical Society
Presidents of the International Astronomical Union | Willem de Sitter | Astronomy | 813 |
11,540,801 | https://en.wikipedia.org/wiki/HD%20125612 | HD 125612 is a binary star system with three exoplanetary companions in the equatorial constellation of Virgo. It is too dim to be visible to the naked eye, having an apparent visual magnitude of 8.31. The system is located at a distance of 188 light years from the Sun based on parallax measurements, but it is drifting closer with a radial velocity of −18 km/s.
The yellow-hued primary component, designated HD 125612 A, is an ordinary G-type main-sequence star with a stellar classification of G3V, which indicates it is generating energy through hydrogen fusion at its core. It is about 1.4 billion years old and is rich in heavy elements, having a 70% greater abundance of iron compared to the Sun. The star has 109% of the mass and 105% of the girth of the Sun. It is radiating 109% of the luminosity of the Sun from its photosphere at an effective temperature of 5,900 K.
A red dwarf companion star, HD 125612 B, was detected in 2009 at a projected separation of 4750 AU. The possibility of a much closer companion to the primary star was also suggested, though this will need more observation to better define.
Planetary system
There are three known exoplanets in orbit around HD 125612 A. The first was reported in 2007 and designated HD 125612 b, but it did not fully resolve the stellar velocity variations and it was clear there were other companions. Two additional companions, HD 125612 c and d, were reported in 2009. In 2022, the inclination and true mass of the outer planet HD 125612 d were measured via astrometry.
See also
HD 170469
HD 231701
HD 17156
HD 11506
List of extrasolar planets
References
G-type main-sequence stars
M-type main-sequence stars
Planetary systems with three confirmed planets
Virgo (constellation)
Durchmusterung objects
125612
070123
Binary stars | HD 125612 | Astronomy | 413 |
1,926,432 | https://en.wikipedia.org/wiki/Circumcircle | In geometry, the circumscribed circle or circumcircle of a triangle is a circle that passes through all three vertices. The center of this circle is called the circumcenter of the triangle, and its radius is called the circumradius. The circumcenter is the point of intersection between the three perpendicular bisectors of the triangle's sides, and is a triangle center.
More generally, an n-sided polygon with all its vertices on the same circle, also called the circumscribed circle, is called a cyclic polygon, or in the special case n = 4, a cyclic quadrilateral. All rectangles, isosceles trapezoids, right kites, and regular polygons are cyclic, but not every polygon is.
Straightedge and compass construction
The circumcenter of a triangle can be constructed by drawing any two of the three perpendicular bisectors. For three non-collinear points, these two lines cannot be parallel, and the circumcenter is the point where they cross. Any point on the bisector is equidistant from the two points that it bisects, from which it follows that this point, on both bisectors, is equidistant from all three triangle vertices.
The circumradius is the distance from it to any of the three vertices.
Alternative construction
An alternative method to determine the circumcenter is to draw any two lines each one departing from one of the vertices at an angle with the common side, the common angle of departure being 90° minus the angle of the opposite vertex. (In the case of the opposite angle being obtuse, drawing a line at a negative angle means going outside the triangle.)
In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies.
Circumcircle equations
Cartesian coordinates
In the Euclidean plane, it is possible to give explicitly an equation of the circumcircle in terms of the Cartesian coordinates of the vertices of the inscribed triangle. Suppose that
A = (A_x, A_y), B = (B_x, B_y), C = (C_x, C_y)
are the coordinates of points A, B and C. The circumcircle is then the locus of points v = (v_x, v_y) in the Cartesian plane satisfying the equations
|v − u|² = r², |A − u|² = r², |B − u|² = r², |C − u|² = r²,
guaranteeing that the points A, B, C and v are all the same distance r from the common center u of the circle. Using the polarization identity, these equations reduce to the condition that the matrix with rows (|v|², v_x, v_y, 1), (|A|², A_x, A_y, 1), (|B|², B_x, B_y, 1) and (|C|², C_x, C_y, 1)
has a nonzero kernel. Thus the circumcircle may alternatively be described as the locus of points v at which the determinant of this matrix vanishes.
Using cofactor expansion, let
we then have where and – assuming the three points were not in a line (otherwise the circumcircle is that line that can also be seen as a generalized circle with at infinity) – giving the circumcenter and the circumradius A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron.
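As an illustration of the determinant condition above, the following sketch (function and variable names are illustrative, not from the original article) builds the 4 × 4 matrix for a test point v and the three vertices; the determinant is zero exactly when v lies on the circumcircle:

```python
import numpy as np

# Sketch of the determinant formulation: a point v lies on the circumcircle
# of A, B, C exactly when this 4x4 determinant vanishes.
def circumcircle_det(v, A, B, C):
    rows = [(p[0]**2 + p[1]**2, p[0], p[1], 1.0) for p in (v, A, B, C)]
    return np.linalg.det(np.array(rows))

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(circumcircle_det((1.0, 1.0), A, B, C))  # ~0: (1, 1) lies on the circumcircle
print(circumcircle_det((2.0, 2.0), A, B, C))  # nonzero: (2, 2) does not
```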
Parametric equation
A unit vector perpendicular to the plane containing the circle is given by
Hence, given the radius, , center, , a point on the circle, and a unit normal of the plane containing the circle, one parametric equation of the circle starting from the point and proceeding in a positively oriented (i.e., right-handed) sense about is the following:
Trilinear and barycentric coordinates
An equation for the circumcircle in trilinear coordinates x : y : z is a/x + b/y + c/z = 0. An equation for the circumcircle in barycentric coordinates x : y : z is a²/x + b²/y + c²/z = 0.
The isogonal conjugate of the circumcircle is the line at infinity, given in trilinear coordinates by ax + by + cz = 0 and in barycentric coordinates by x + y + z = 0.
Higher dimensions
Additionally, the circumcircle of a triangle embedded in three dimensions can be found using a generalized method. Let be three-dimensional points, which form the vertices of a triangle. We start by transposing the system to place at the origin:
The circumradius is then
where is the interior angle between and . The circumcenter, , is given by
This formula only works in three dimensions as the cross product is not defined in other dimensions, but it can be generalized to the other dimensions by replacing the cross products with following identities:
This gives us the following equation for the circumradius :
and the following equation for the circumcenter :
which can be simplified to:
Circumcenter coordinates
Cartesian coordinates
The Cartesian coordinates of the circumcenter U = (U_x, U_y) are
U_x = [(A_x² + A_y²)(B_y − C_y) + (B_x² + B_y²)(C_y − A_y) + (C_x² + C_y²)(A_y − B_y)] / D
U_y = [(A_x² + A_y²)(C_x − B_x) + (B_x² + B_y²)(A_x − C_x) + (C_x² + C_y²)(B_x − A_x)] / D
with
D = 2[A_x(B_y − C_y) + B_x(C_y − A_y) + C_x(A_y − B_y)].
Without loss of generality this can be expressed in a simplified form after translation of the vertex A to the origin of the Cartesian coordinate system, i.e., when A′ = A − A = (0, 0). In this case, the coordinates of the vertices B′ = B − A and C′ = C − A represent the vectors from vertex A′ to these vertices. Observe that this trivial translation is possible for all triangles, and the circumcenter u′ = (u′_x, u′_y) of the triangle A′B′C′ follows as
u′_x = [C′_y(B′_x² + B′_y²) − B′_y(C′_x² + C′_y²)] / D′
u′_y = [B′_x(C′_x² + C′_y²) − C′_x(B′_x² + B′_y²)] / D′
with
D′ = 2(B′_x C′_y − B′_y C′_x).
Due to the translation of vertex A to the origin, the circumradius r can be computed as
r = |u′| = √(u′_x² + u′_y²)
and the actual circumcenter of ABC follows as
U = u′ + A.
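A minimal sketch implementing the translated-vertex formulas above (function and variable names are illustrative); it moves A to the origin, computes the circumcenter there, and translates back:

```python
import math

# Circumcenter and circumradius of triangle ABC via the translated-vertex
# formulas: translate so A is at the origin, solve there, translate back.
def circumcircle(A, B, C):
    bx, by = B[0] - A[0], B[1] - A[1]
    cx, cy = C[0] - A[0], C[1] - A[1]
    d = 2.0 * (bx * cy - by * cx)          # zero iff A, B, C are collinear
    if d == 0.0:
        raise ValueError("Collinear points have no circumcircle")
    ux = (cy * (bx**2 + by**2) - by * (cx**2 + cy**2)) / d
    uy = (bx * (cx**2 + cy**2) - cx * (bx**2 + by**2)) / d
    radius = math.hypot(ux, uy)
    center = (ux + A[0], uy + A[1])
    return center, radius

print(circumcircle((0, 0), (1, 0), (0, 1)))  # ((0.5, 0.5), 0.7071...)
```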
Trilinear coordinates
The circumcenter has trilinear coordinates
cos A : cos B : cos C
where A, B, C are the angles of the triangle.
In terms of the side lengths a, b, c, the trilinears are
a(b² + c² − a²) : b(c² + a² − b²) : c(a² + b² − c²).
Barycentric coordinates
The circumcenter has barycentric coordinates
a²(b² + c² − a²) : b²(c² + a² − b²) : c²(a² + b² − c²)
where a, b, c are edge lengths (BC, CA, AB respectively) of the triangle.
In terms of the triangle's angles A, B, C, the barycentric coordinates of the circumcenter are
sin 2A : sin 2B : sin 2C.
Circumcenter vector
Since the Cartesian coordinates of any point are a weighted average of those of the vertices, with the weights being the point's barycentric coordinates normalized to sum to unity, the circumcenter vector can be written as
U = [a²(b² + c² − a²) A + b²(c² + a² − b²) B + c²(a² + b² − c²) C] / [a²(b² + c² − a²) + b²(c² + a² − b²) + c²(a² + b² − c²)].
Here U is the vector of the circumcenter and A, B, C are the vertex vectors. The divisor here equals 16S², where S is the area of the triangle. As stated previously
Cartesian coordinates from cross- and dot-products
In Euclidean space, there is a unique circle passing through any given three non-collinear points . Using Cartesian coordinates to represent these points as spatial vectors, it is possible to use the dot product and cross product to calculate the radius and center of the circle. Let
Then the radius of the circle is given by
The center of the circle is given by the linear combination
where
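The coefficient formulas of this subsection are not reproduced in this copy of the text, but the same construction can be sketched directly with cross and dot products; the following is an illustrative implementation for three non-collinear points in three dimensions (variable names are assumptions, not taken from the original article):

```python
import numpy as np

# Circumcircle of three non-collinear 3-D points via cross and dot products.
def circumcircle_3d(P1, P2, P3):
    P1, P2, P3 = (np.asarray(P, dtype=float) for P in (P1, P2, P3))
    a, b = P1 - P3, P2 - P3                 # edge vectors meeting at P3
    axb = np.cross(a, b)
    denom = 2.0 * np.dot(axb, axb)
    if denom == 0.0:
        raise ValueError("Collinear points have no circumcircle")
    radius = (np.linalg.norm(a) * np.linalg.norm(b)
              * np.linalg.norm(a - b)) / (2.0 * np.linalg.norm(axb))
    center = P3 + np.cross(np.dot(a, a) * b - np.dot(b, b) * a, axb) / denom
    return center, radius

print(circumcircle_3d((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # center ~(0.5, 0.5, 0), r ~0.707
```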
Location relative to the triangle
The circumcenter's position depends on the type of triangle:
For an acute triangle (all angles smaller than a right angle), the circumcenter always lies inside the triangle.
For a right triangle, the circumcenter always lies at the midpoint of the hypotenuse. This is one form of Thales' theorem.
For an obtuse triangle (a triangle with one angle bigger than a right angle), the circumcenter always lies outside the triangle.
These locational features can be seen by considering the trilinear or barycentric coordinates given above for the circumcenter: all three coordinates are positive for any interior point, at least one coordinate is negative for any exterior point, and one coordinate is zero and two are positive for a non-vertex point on a side of the triangle.
Angles
The angles which the circumscribed circle forms with the sides of the triangle coincide with the angles at which the sides meet each other. The side opposite angle A meets the circle twice: once at each end; in each case at angle A (similarly for the other two angles). This is due to the alternate segment theorem, which states that the angle between the tangent and chord equals the angle in the alternate segment.
Triangle centers on the circumcircle
In this section, the vertex angles are labeled and all coordinates are trilinear coordinates:
Steiner point: the non-vertex point of intersection of the circumcircle with the Steiner ellipse.
(The Steiner ellipse, with center = centroid (), is the ellipse of least area that passes through . An equation for this ellipse is
Tarry point: antipode of the Steiner point
Focus of the Kiepert parabola:
Other properties
The diameter of the circumcircle, called the circumdiameter and equal to twice the circumradius, can be computed as the length of any side of the triangle divided by the sine of the opposite angle:
diameter = a / sin A = b / sin B = c / sin C.
As a consequence of the law of sines, it does not matter which side and opposite angle are taken: the result will be the same.
The diameter of the circumcircle can also be expressed as
diameter = abc / (2 · area) = abc / (2√(s(s − a)(s − b)(s − c)))
where a, b, c are the lengths of the sides of the triangle and s = (a + b + c)/2 is the semiperimeter. The expression √(s(s − a)(s − b)(s − c)) above is the area of the triangle, by Heron's formula. Trigonometric expressions for the diameter of the circumcircle include
diameter = √(2 · area / (sin A sin B sin C)).
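As a quick worked example (illustrative, not taken from the original article): a right triangle with sides 3, 4 and 5 has area 6, so the circumdiameter is abc / (2 · area) = 60 / 12 = 5, equal to the hypotenuse — consistent with Thales' theorem and with c / sin C = 5 / sin 90° = 5 from the law-of-sines form.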
The triangle's nine-point circle has half the diameter of the circumcircle.
In any given triangle, the circumcenter is always collinear with the centroid and orthocenter. The line that passes through all of them is known as the Euler line.
The isogonal conjugate of the circumcenter is the orthocenter.
The useful minimum bounding circle of three points is defined either by the circumcircle (where three points are on the minimum bounding circle) or by the two points of the longest side of the triangle (where the two points define a diameter of the circle). It is common to confuse the minimum bounding circle with the circumcircle.
The circumcircle of three collinear points is the line on which the three points lie, often referred to as a circle of infinite radius. Nearly collinear points often lead to numerical instability in computation of the circumcircle.
Circumcircles of triangles have an intimate relationship with the Delaunay triangulation of a set of points.
By Euler's theorem in geometry, the distance between the circumcenter O and the incenter I is
OI = √(R(R − 2r)),
where r is the incircle radius and R is the circumcircle radius; hence the circumradius is at least twice the inradius (Euler's triangle inequality), with equality only in the equilateral case.
The distance between O and the orthocenter H is
OH = R√(1 − 8 cos A cos B cos C) = √(9R² − (a² + b² + c²)).
For the centroid G and nine-point center N we have
OG = √(R² − (a² + b² + c²)/9) and ON = OH/2.
The product of the incircle radius r and the circumcircle radius R of a triangle with sides a, b, c is
rR = abc / (2(a + b + c)).
With circumradius , sides , and medians , we have
If median , altitude , and internal bisector all emanate from the same vertex of a triangle with circumradius , then
Carnot's theorem states that the sum of the distances from the circumcenter to the three sides equals the sum of the circumradius and the inradius. Here a segment's length is considered to be negative if and only if the segment lies entirely outside the triangle.
If a triangle has two particular circles as its circumcircle and incircle, there exist an infinite number of other triangles with the same circumcircle and incircle, with any point on the circumcircle as a vertex. (This is the case of Poncelet's porism). A necessary and sufficient condition for such triangles to exist is the above equality
Cyclic polygons
A set of points lying on the same circle are called concyclic, and a polygon whose vertices are concyclic is called a cyclic polygon. Every triangle is concyclic, but polygons with more than three sides are not in general.
Cyclic polygons, especially four-sided cyclic quadrilaterals, have various special properties. In particular, the opposite angles of a cyclic quadrilateral are supplementary angles (adding up to 180° or π radians).
See also
Circumcenter of mass
Circumscribed sphere
Circumcevian triangle
Inscribed circle
Kosnita theorem
Lester's theorem
Problem of Apollonius
References
External links
Derivation of formula for radius of circumcircle of triangle at Mathalino.com
Semi-regular angle-gons and side-gons: respective generalizations of rectangles and rhombi at Dynamic Geometry Sketches, interactive dynamic geometry sketch.
Weisstein, Eric W. "Circumcircle", "Cyclic Polygon". MathWorld.
Triangle circumcircle and circumcenter With interactive animation
An interactive Java applet for the circumcenter
Circles defined for a triangle
Compass and straightedge constructions | Circumcircle | Mathematics | 2,654 |
36,904,331 | https://en.wikipedia.org/wiki/Ricardo%20L%C3%B3pez%20%28stalker%29 | Ricardo López (January 14, 1975 – September 12, 1996) was a Uruguayan-born American stalker who attempted to murder the Icelandic singer Björk.
López was born in Montevideo, Uruguay, and moved to Lawrenceville, Georgia, with his family at a young age, and began working as a pest exterminator. He had poor self-esteem, was socially reclusive, and eventually developed an obsession with Björk in 1993. Though he did not hope to be sexually intimate with her, he was particularly angry over her brief relationship with the English jungle producer Goldie due to his race. Over the course of nearly nine months in 1996, he made video diaries about her and other topics, at his apartment in Hollywood, Florida.
On September 12, 1996, López mailed a letter bomb, rigged with sulfuric acid, to Björk's residence in London. He recorded a final video diary explaining his motivations, and ended it by filming his suicide by gunshot. Hollywood police found his body and the videos four days after his death and contacted Scotland Yard, who located the bomb in a London postal sorting office. The parcel was safely detonated, and Björk was unharmed.
Early life
Ricardo López was born in Montevideo, Uruguay, on January 14, 1975, into a middle-class family, which moved to the United States and settled in Lawrenceville, Georgia. He had a good relationship with his family, and was described as easygoing but introverted. He had a few male friends, but no female friends, nor a girlfriend. In a diary found by police, López expressed feelings of shame and inadequacy, as well as feelings of social awkwardness around women. Journalist Paolo Pellegrini wrote that López had been diagnosed with Klinefelter syndrome.
With aspirations to become a famous artist, López dropped out of high school. However, he did not seriously pursue an artistic career due to his feelings of inferiority, and fear of being denied entry into art school. He intermittently worked for his brother's pest control business to support himself. By the age of 17, López had become reclusive and, as a means of escape, retreated into a world of fantasies and became enthralled by celebrities.
Obsession with Björk
In 1993, López became fixated on the Icelandic singer Björk. He began gathering information about her life, followed her career, and wrote her numerous fan letters. Initially, López cited her as his muse and said that his infatuation gave him a "euphoric feeling". As time passed, his fixation became all-consuming and he grew more disconnected from reality. In his diary, López wrote of longing to be accepted by Björk and to be a person who had "an effect on her life". He fantasized about inventing a time machine to travel to the 1970s and befriending her as a child. His fantasies about Björk were not sexual. In his diary, he wrote, "I couldn't have sex with Björk because I love her."
López's diary grew to 803 pages, with passages about his thoughts on Björk and his feelings of inadequacy due to being overweight, his disgust and embarrassment about his gynecomastia, and his inability to get a girlfriend. He wrote that he considered himself "a loser who never even learned to drive" and complained about his menial job as an exterminator that earned little money. The diary contained 168 references to López's feelings of failure, 34 references to suicide, and 14 references to murder. He made 408 references to Björk and 52 references to other celebrities.
Letter bomb plot
In 1996, López was living alone in an apartment in Hollywood, Florida. Around that time he read in Entertainment Weekly that Björk was in a romantic relationship with another musician, the English jungle producer Goldie. López was angered by the perceived betrayal, and the fact that she was involved with a black man, writing in his diary: "I wasted eight months and she has a fucking lover." He began fantasizing about how he could "punish" Björk.
López stopped writing his diary and began filming a video diary in his apartment. According to López, the diary's purpose was "... a documentation of my life, of my art and of my plans", and that "comfort is what I seek in speaking to you”. He recorded eleven video tapes containing approximately two hours of footage each. The tapes contain footage of López preparing his "revenge" and discussing his "crush [that] ended up as an obsession". López's anger over Björk's relationship with Goldie intensified and he decided to kill her. In one entry, he said: "I'm just going to have to kill her. I'm going to send a package. I'm going to be sending her to Hell."
López initially intended to construct a bomb filled with hypodermic needles containing HIV-tainted blood, which satisfied his desire to have a lasting effect on Björk's life. When he realized it would not be feasible to build such a device, López began constructing a letter bomb, comprising sulfuric acid in a hollowed-out book, which he planned to send to Björk's home in London, England. The device was designed to explode and kill or disfigure Björk as she opened the book. He was going to commit suicide after mailing the bomb, hoping that he and Björk would be united in heaven.
Death
On the morning of September 12, 1996, López started filming his final video diary entry. The final tape, titled "Last Day – Ricardo López", began with López preparing to go to the post office to mail the letter bomb. He said that he was "very, very nervous", but that he would kill himself rather than be arrested if he aroused suspicion. After returning from the post office, he resumed filming. While Björk's music played in the background, a naked López shaved his head and eyebrows and painted his face red and green. Police speculated that López's reason for doing so was to make himself less recognizable, so it would be easier for him to take his own life. He examined himself in a mirror and told the camera that he was "a little nervous now". He then stated, "I'm definitely not drunk. I am completely am not depressed. I know exactly what I am doing. [The gun]'s cocked back. It's ready to roll." As Björk's song "I Remember You" finished playing, López shouted "Victory!" and shot himself in the mouth with a .38 caliber revolver. He groaned and his body fell out of view. Shortly after, he began to bleed out on the floor, which was audible. At that point, the camera stopped filming. A sign bearing the hand-painted words "The best of me - Sep 12" was propped on an upturned mattress behind him. Police theorized that López intended to cover the sign with his blood and brain matter with the gunshot, but the gun was not powerful enough to cause that to happen.
Four days later, on September 16, a foul odor and blood were noticed coming from López's apartment. The Hollywood Police Department entered and discovered López's decomposing corpse. Written on the wall was a message: "The 8mm tapes are a documentation of a crime, terrorist matter, they are for the FBI." The Broward County Sheriff's Office evacuated the building while the bomb squad searched for further explosives, and found none. After viewing López's final tape, police contacted Scotland Yard to warn them that the potentially explosive package was en route to Björk's residence in London. The package had yet to be delivered, and the Metropolitan Police intercepted it at a South London post office, after which it was safely detonated. There had been little danger of Björk receiving the bomb because her mail was vetted through her management's office. Unbeknownst to López, Björk and Goldie had ended their relationship a few days before he killed himself.
Aftermath
On September 18, outside her home in London, Björk gave a statement to the press, saying that she was very distressed by the incident. She described it as "terrible" and "very sad", and said that people should not "take me too literally and get involved in my personal life". She sent a card and flowers to López's family. She left for Spain, where she recorded the remainder of her third album, Homogenic, away from media attention. She also hired security for her son, Sindri, who was escorted to school with a minder. A year after López's death, Björk discussed the incident in an interview: "I was very upset that somebody had died. I couldn't sleep for a week. And I'd be lying if I said it didn't scare the fuck out of me. That I could get hurt and, most of all, that my son could get hurt."
López's family and friends were aware of his obsession with Björk. They maintained that they had no idea that he harbored violent thoughts or was capable of violence. At one point, his brother had told him to "get a real woman, you're obsessed". A psychiatrist who treated López for anxiety shortly before his death also stated that he did not appear dangerous. López's videotapes, including his suicide, were confiscated by the FBI and then were released to journalists.
In popular culture
In 2000, Sami Saif released a 70-minute documentary, The Video Diary of Ricardo López, comprising a condensed version of López's 22-hour video diary. Saif decided to limit its availability as "I want to be there when people see the film, because there are all sorts of things about Ricardo López on the internet. I like to be able to talk to people about what it is they've actually seen."
In 2004, a number of episodes of season 6 of the television show Third Watch were based on the case. They involve a man obsessed with a schoolteacher who makes a video diary about her before sending a letter bomb to her and killing himself on video with a shotgun.
In 2019, independent Italian director Domiziano Cristopharo released an 87-minute erotic horror film titled The Obsessed, with the working title of Last Day: The Best of Me. López's video diary was adapted as the subject material of The Obsessed in what Cristopharo described as "Albania's first horror film" and "a body horror freely inspired to the real story of Ricardo López, Bjork’s stalker".
See also
Mark David Chapman, stalked and murdered John Lennon in 1980.
Yolanda Saldívar, president of the fan club and manager of boutiques for singer Selena, who murdered her in 1995.
Robert John Bardo, stalked and murdered actress Rebecca Schaeffer in 1989.
John Hinckley Jr., stalker of actress Jodie Foster, who tried to kill President Ronald Reagan in an attempt to impress her.
Christina Grimmie, American singer, murdered by Kevin James Loibl, who subsequently shot and killed himself.
R. Budd Dwyer, Pennsylvania Treasurer who shot and killed himself January 22, 1987, at the end of a press conference at which he had been expected to resign.
Daniel V. Jones, who set his truck on fire in the middle of a highway in 1998 and was captured shooting and killing himself on live television.
Ronnie McNutt, who shot and killed himself with a rifle on a Facebook livestream in 2020.
Christine Chubbuck, newswoman who shot and killed herself on live television in 1974 while doing a report.
References
External links
1996 suicides
1996 deaths
1975 births
American failed assassins
Criminals from Florida
Filmed deaths in the United States
People from Hollywood, Florida
Stalking
Suicides by firearm in Florida
Uruguayan criminals
Uruguayan emigrants to the United States
American intersex men
Filmed suicides
American diarists
Björk | Ricardo López (stalker) | Biology | 2,496 |
350,581 | https://en.wikipedia.org/wiki/Paddy%20Chayefsky | Sidney Aaron "Paddy" Chayefsky (January 29, 1923 – August 1, 1981) was an American playwright, screenwriter and novelist. He is the only person to have won three solo Academy Awards for writing both adapted and original screenplays.
He was one of the most renowned dramatists of the Golden Age of Television. His intimate, realistic scripts provided a naturalistic style of television drama for the 1950s, dramatizing the lives of ordinary Americans. Martin Gottfried wrote in All His Jazz that Chayefsky was "the most successful graduate of television's slice of life school of naturalism."
Following his critically acclaimed teleplays, Chayefsky became a noted playwright and novelist. As a screenwriter, he received three Academy Awards for Marty (1955), The Hospital (1971) and Network (1976). The movie Marty was based on his own television drama about two lonely people finding love. Network was a satire of the television industry and The Hospital was also satiric. Film historian David Thomson called The Hospital "years ahead of its time.… Few films capture the disaster of America's self-destructive idealism so well." His screenplay for Network is often regarded as his masterpiece, and has been hailed as "the kind of literate, darkly funny and breathtakingly prescient material that prompts many to claim it as the greatest screenplay of the 20th century."
Chayefsky's early stories were frequently influenced by the author's childhood in The Bronx. Chayefsky was part of the inaugural class of inductees into the Academy of Television Arts & Sciences' Television Hall of Fame. He received this honor three years after his death, in 1984.
Early life
Sidney Aaron Chayefsky was born in the Bronx, New York City, to Russian-Jewish immigrants Harry and Gussie (Stuchevsky) Chayefsky. Harry Chayefsky's father served for twenty-five years in the Russian army so the family was allowed to live in Moscow, while Gussie Stuchevsky lived in a village near Odessa. Harry and Gussie immigrated to the United States in 1907 and 1909 respectively.
Harry Chayefsky worked for a New Jersey milk distribution company in which he eventually took a controlling interest and renamed Dellwood Dairies. The family lived in Perth Amboy, New Jersey, and Mount Vernon, New York, moving temporarily to Bailey Avenue in the West Bronx at the time of Sidney Chayefsky's birth while a larger house in Mount Vernon was being completed. He had two older brothers, William and Winn.
As a toddler Chayefsky showed signs of being gifted, and could "speak intelligently" at two and a half. His father suffered a financial reversal during the Wall Street Crash of 1929, and the family moved back to the Bronx. Chayefsky attended a public elementary school. As a boy, Chayefsky was noted for his verbal ability, which won him friends. He attended DeWitt Clinton High School, where he served as editor of the school's literary magazine The Magpie. He graduated from Clinton in 1939 at age 16 and attended the City College of New York, graduating with a degree in social sciences in 1943. While at City College he played for the semi-professional football team Kingsbridge Trojans. He studied languages at Fordham University during his Army service.
Military service
In 1943, two weeks before his graduation from City College, Chayefsky was drafted into the United States Army, and served in combat in Europe. While in the Army he adopted the nickname "Paddy." The nickname was given spontaneously when he was awakened at dawn for kitchen duty. Although actually Jewish, he asked to be excused to attend Mass. "Sure you do, Paddy," said the officer, and the name stuck.
Chayefsky was wounded by a land mine while serving with the 104th Infantry Division in the European Theatre near Aachen, Germany. He was awarded the Purple Heart. The wound left him badly scarred, contributing to his shyness around women. While recovering from his injuries in the Army Hospital near Cirencester, England, he wrote the book and lyrics to a musical comedy, No T.O. for Love. First produced in 1945 by the Special Services Unit, the show toured European Army bases for two years.
The London opening of No T.O. for Love at the Scala Theatre in the West End was the beginning of Chayefsky's theatrical career. During the London production of this musical, Chayefsky encountered Joshua Logan, a future collaborator, and Garson Kanin, who invited Chayefsky to collaborate with him on a documentary of the Allied invasion, The True Glory.
Career
1940s
Returning to the United States, Chayefsky worked in his uncle's print shop, Regal Press, an experience which provided a background for his later teleplay, Printer's Measure (1953), as well as his story for the movie As Young as You Feel (1951). Kanin enabled Chayefsky to spend time working on his second play, Put Them All Together (later known as M is for Mother), but it was never produced. Producers Mike Gordon and Jerry Bressler gave him a junior writer's contract. He wrote a story, The Great American Hoax, which sold to Good Housekeeping but was never published.
Chayefsky went to Hollywood in 1947 with the aim of becoming a screenwriter. His friends Garson Kanin and Ruth Gordon found him a job in the accounting office of Universal Pictures. He studied acting at the Actor's Lab and Kanin got him a bit part in the film A Double Life. He returned to New York, submitted scripts, and was hired as an apprentice scriptwriter by Universal. His script outlines were not accepted and he was fired after six weeks. After returning to New York, Chayefsky wrote the outline for a play that he submitted to the William Morris Agency. The agency, treating it as a novella, submitted it to Good Housekeeping magazine. Movie rights were purchased by Twentieth Century Fox, Chayefsky was hired to write the script, and he returned to Hollywood in 1948. But Chayefsky was discouraged by the studio system, which involved rewrites and relegated writers to inferior roles, so he quit and moved back to New York, vowing not to return.
During the late 1940s, he began working full-time on short stories and radio scripts, and during that period, he was a gagwriter for radio host Robert Q. Lewis. Chayefsky later recalled, "I sold some plays to men who had an uncanny ability not to raise money."
Early 1950s
During 1951–52, Chayefsky wrote adaptations for radio's Theater Guild on the Air: The Meanest Man in the World (with James Stewart), Cavalcade of America, Tommy (with Van Heflin and Ruth Gordon) and Over 21 (with Wally Cox).
His play The Man Who Made the Mountain Shake was noticed by Elia Kazan, and his wife, Molly Kazan, helped Chayefsky with revisions. It was retitled Fifth From Garibaldi but was never produced. In 1951, the movie As Young as You Feel was adapted from a Chayefsky story.
He moved into television with scripts for Danger, The Gulf Playhouse and Manhunt. Philco Television Playhouse producer Fred Coe saw the Danger and Manhunt episodes and enlisted Chayefsky to adapt the story It Happened on the Brooklyn Subway about a photographer on a New York City Subway train who reunites a concentration camp survivor with his long-lost wife. Chayefsky's first script to be telecast was a 1949 adaptation of Budd Schulberg's What Makes Sammy Run? for Philco.
Since he had always wanted to use a synagogue as backdrop, he wrote Holiday Song, telecast in 1952 and also in 1954. He submitted more work to Philco, including Printer's Measure, The Bachelor Party (1953) and The Big Deal (1953).
The seventh season of Philco Television Playhouse began September 19, 1954 with E. G. Marshall and Eva Marie Saint in Chayefsky's Middle of the Night, a play which relocated to Broadway theaters 15 months later. In 1956, Middle of the Night opened on Broadway with Edward G. Robinson and Gena Rowlands, and its success led to a national tour. It was filmed by Columbia Pictures in 1959 with Kim Novak and Fredric March.
Marty and fame
In 1953, Chayefsky wrote Marty, which was premiered on The Philco Television Playhouse, with Rod Steiger and Nancy Marchand. Marty is about a decent, hard-working Bronx butcher, pining for the company of a woman in his life but despairing of ever finding true love in a relationship. Fate pairs him with a plain, shy schoolteacher named Clara whom he rescues from the embarrassment of being abandoned by her blind date in a local dance hall. The production, the actors and Chayefsky's naturalistic dialogue received much critical acclaim and influenced subsequent live television dramas.
Chayefsky was initially uninterested when producer Harold Hecht sought to buy film rights for Marty for Hecht-Hill-Lancaster. Chayefsky, still upset by his treatment years before, demanded creative control, consultation on casting, and the same director as in the TV version, Delbert Mann. Surprisingly, Hecht agreed to all of Chayefsky's demands, and named Chayefsky "associate producer" of the film. Chayefsky then requested and was granted "co-director" status, so that he could take over production if Mann were fired.
The screenplay was little changed from the teleplay, but with Clara's role expanded. Chayefsky was involved in all casting decisions and had a cameo role, playing one of Marty's friends, unseen, in a car. Actress Betsy Blair, playing Clara, faced difficulties because of her affiliation with left-wing causes, and United Artists demanded that she be removed. Chayefsky refused, and her husband Gene Kelly also intervened on her behalf. Blair remained in the cast.
In September 1954, after most of the movie had been filmed, the studio ceased production due to accounting and financial difficulties. Producer Harold Hecht encountered resistance to the Marty project from his partner Burt Lancaster from the beginning, with Lancaster "only tolerating" it. The film had a limited publicity budget. But reviews were glowing, and the film won the Palme d'Or at the 1955 Cannes Film Festival and the Academy Award for Best Picture, greatly boosting Chayefsky's career.
Late 1950s
After his success with Marty, Chayefsky continued to write for TV and theater as well as films. Chayefsky's The Great American Hoax was broadcast May 15, 1957 during the second season of The 20th Century Fox Hour.
His TV play The Bachelor Party was bought by United Artists and The Catered Affair was acquired by Metro-Goldwyn-Mayer. MGM hired Gore Vidal to write the screenplay for The Catered Affair, while Chayefsky wrote The Bachelor Party. The Catered Affair did well in Europe but poorly in U.S. theaters, and was not a success.
The Bachelor Party was budgeted at $750,000, twice the budget of Marty, but received far less acclaim and was viewed by United Artists as artistically inferior. The studio chose instead to promote another Hecht-Hill-Lancaster film, Sweet Smell of Success, which it believed to be better. The Bachelor Party was a commercial failure, and never made a profit.
Chayefsky wrote a film adaptation of his Broadway play Middle of the Night, originally writing the female lead role for Marilyn Monroe. She passed on the part, which went to Kim Novak. He also commenced work on The Goddess, the story of the rise and fall of a movie star resembling Monroe. The star of The Goddess, Kim Stanley, despised the film and refused to publicize it. She and Chayefsky clashed during production of the film, in which Chayefsky served as producer as well as screenwriter. Despite her requests, Chayefsky refused to change any aspect of the script. Monroe's husband, Arthur Miller, believed that the film was based on his wife's life and protested to Chayefsky. The film received positive reviews, and Chayefsky received an Academy Award nomination for his script. A New York Herald Tribune reviewer called the film "a substantial advance in the work of Chayefsky."
Chayefsky denied for years that the film was based on Monroe, but Chayefsky's biographer Shaun Considine observes that not only was she the prototype but the film "captured her longing and despair" accurately.
In 1958 Chayefsky began adapting Middle of the Night as a film, and he decided not to use the star of the Broadway version, Edward G. Robinson, with whom he had clashed, choosing instead Fredric March. Elizabeth Taylor initially agreed to appear in the female lead, but dropped out. Kim Novak was ultimately cast in the part. The film was chosen as the American entry at the Cannes Film Festival, but reviews were mixed and the film had only a short run in theaters.
The Tenth Man (1959) marked Chayefsky's second Broadway theatrical success, garnering 1960 Tony Award nominations for Best Play, Best Director (Tyrone Guthrie) and Best Scenic Design. Guthrie received another nomination for Chayefsky's Gideon, as did actor Fredric March. Chayefsky's final Broadway theatrical production, a play based on the life of Joseph Stalin, The Passion of Josef D, received unfavorable reviews and ran for only 15 performances.
Although Chayefsky was an early writer for the television medium, he eventually abandoned it, "decrying the lack of interest the networks demonstrated toward quality programming". As a result, during the course of his career, he constantly toyed with the idea of lampooning the television industry, which he succeeded in doing with Network.
The Americanization of Emily
Although Chayefsky wished only to do original screenplays, he was persuaded by producer Martin Ransohoff to adapt William Bradford Huie's 1959 novel, which was eventually filmed under the book's title as The Americanization of Emily (1964). The novel dealt with interservice rivalries prior to the Normandy landings during World War II, with a love story at the center of the plot. Chayefsky agreed to adapt the novel but only if he could fundamentally change the story. He made the titular character, Emily, more sophisticated, with her refusing to be "Americanized" by accepting material goods.
William Wyler was initially brought in as the director, but his relationship with Chayefsky deteriorated when he sought to change the script. William Holden was initially cast in the male lead, but that led to conflict when he asked that Julie Andrews be replaced by his then-girlfriend, Capucine. James Garner, adept at comedy with sophisticated dialogue but originally slated to play a supporting role, replaced Holden and delivered a critically acclaimed performance while James Coburn took over the part originally meant for Garner. Both James Garner and Julie Andrews always maintained that The Americanization of Emily was their favorite film of their own work. The film opened in August 1964 to superlative reviews but was a box office failure, possibly due to its extremely controversial anti-war stance at the dawn of the Vietnam War. The studio changed the title in the middle of its release, calling it Emily...she's super! to avoid confusing part of the public with a seven-syllable word in the title. The film has since been praised as a "vanguard anti-war film."
1960s 'fallow period'
The failure of The Americanization of Emily and Josef D. on Broadway shook Chayefsky's confidence, and was the beginning of what his biographer Shaun Considine calls a "fallow period." He agreed to do novel adaptations, which he had previously shunned, and was hired to adapt the Richard Jessup novel The Cincinnati Kid. Director Sam Peckinpah rejected the script, and Chayefsky was fired. Peckinpah was replaced by Norman Jewison shortly after the film began production.
Chayefsky worked for a time on adapting Huie's book Three Lives for Mississippi, about the murders of three civil rights workers in 1964, and in 1967 was hired to adapt the Broadway musical Paint Your Wagon. He was fired from the film after producing a script that Alan Jay Lerner, the playwright and producer, felt lacked "a musical structure." Chayefsky had his name removed as screenwriter but remained as adapter.
Comeback with The Hospital
In 1969 and 1970, Chayefsky began to consider a film that would be set amid the civil unrest taking place at the time. When his wife Susan received poor care at a hospital, he pitched to United Artists a story set at a hospital. To ensure that he had the same kind of creative control given to playwrights, he formed Simcha Productions, named after the Hebrew version of his given name, Sidney. He then commenced research, reading medical books and visiting hospitals.
The leading character in the film, Dr. Herbert Bock, included many of Chayefsky's personal traits. Bock had been a "boy genius" who now felt bitter and believed that his life was over. One of the monologues of George C. Scott as Bock in the film, in which Bock says he is miserable and considering suicide, was repeated verbatim from a conversation that Chayefsky had with a business associate during that time.
The long speeches written for Bock and other characters by Chayefsky, later praised by critics, met resistance from United Artists executives during the making of the film. The script was described as "too talky" and containing excessive medical terminology. But Chayefsky, as producer, prevailed. He also vetoed the studio's suggestion that Walter Matthau or Burt Lancaster be hired for the lead role, insisting on Scott. Chayefsky worked on the dialogue with Diana Rigg, the female lead, but Scott rejected his input.
After filming, Chayefsky himself recorded the opening narration once several actors had been rejected for the job. It was supposed to be temporary, but his recording became the one used in the film. Although some initial reviews were negative, the film received rave reviews from leading critics, and was a box office hit. Chayefsky won an Academy Award for his script, and his career was revived.
Network
Chayefsky believed that television news desensitized viewers to violence and murder, and he was shocked one day when a respected news anchorman "rattled off inanities." He asked his friend, the NBC News anchor John Chancellor, if it was possible for an anchorman to go crazy on the air, and Chancellor replied "Every day." Within a week of that conversation, Chayefsky had written the rough draft of a script, centering on Howard Beale, an elderly, disillusioned anchor who announces he will commit suicide on the air. In 1974, a local news anchor, Christine Chubbuck, committed suicide during a broadcast.
Chayefsky researched the project by watching hours of television and consulting with NBC executive David Tebet, who allowed Chayefsky to attend programming meetings. He later conducted research at CBS and met with Walter Cronkite. The completed script reflected his research and his personal view, prevalent at the time, that Arabs were "buying up" U.S. corporations. The "mad as hell" speech was a deeply personal statement reflecting the core of Chayefsky's beliefs during the early 1970s. Chayefsky later called it an easy speech to write, reflecting his view that people had a right to get mad.
The script encountered difficulty because of film industry concerns that it was too tough on television. Ultimately it was decided that the film would be a co-production of MGM and United Artists, with Chayefsky having complete creative control. The deal was announced in July 1975. George C. Scott was offered the supporting role of Max Schumacher (Beale's friend and a traditional journalist representing integrity in the media) but rejected it, and the role went to William Holden. Chayefsky refused requests by UA and MGM to give the film a "softer" ending, feeling that the actual ending – with the Howard Beale character assassinated at the order of the network's executives – would alienate audiences.
Aside from the expected negative reviews from television network film critics, the film was a critical and box office success, winning ten Academy Award nominations, and Chayefsky won his third Academy Award, making him the only three-time solo recipient of a screenwriting Oscar; all the other three-time winners (Francis Ford Coppola, Charles Brackett, Woody Allen, and Billy Wilder) shared at least one of their awards with co-writers. When Peter Finch posthumously won Best Actor for playing Beale, Chayefsky was to accept on his behalf, but he defied the show's producer, William Friedkin, and called Finch's wife Eletha to the stage to accept the award.
The film is said to have "presaged the advent of reality television by twenty years" and was a "sardonic satire" of the television industry, dealing with the "dehumanization of modern life."
Altered States
After Network Chayefsky explored an offer from Warren Beatty to write a film based on the life of John Reed and his book Ten Days That Shook the World. He agreed to do research, and spent three months exploring the subject of what eventually became the Beatty film Reds. Negotiations with Beatty's lawyers failed.
In the spring of 1977, Chayefsky began work on a project delving into "man's search of his true self." The genesis of the idea was a joke with his friends Bob Fosse and Herb Gardner. The three cooked up a joke project to remake King Kong, in which Kong becomes a movie star. The comic project got Chayefsky interested in exploring the origins of the human spirit. That evolved into a project updating the theme of Dr. Jekyll and Mr. Hyde.
Chayefsky conducted research on genetic regression, speaking to doctors and professors of anthropology and human genetics. He then began a rough outline of a story in which the lead character immerses himself in an isolation tank, and with the aid of hallucinogens regresses to become a prehuman creature. Chayefsky wrote an eighty-seven-page treatment and, at the suggestion of Columbia executive Daniel Melnick, adapted it into a novel.
Film rights were bought by Columbia Pictures for nearly $1 million, with the same creative control and financial terms as for Network. Chayefsky suffered greatly from stress while working on the novel, resulting in a heart attack in 1977. The heart attack led to strict dietary and lifestyle restrictions. The novel, titled Altered States, was published by HarperCollins in June 1978 and received mixed reviews. Chayefsky did not promote the book, which he viewed only as a blueprint for the screenplay.
Since his contract gave him creative control, Chayefsky participated in the selection of William Hurt and Blair Brown as the leads. Arthur Penn was initially hired as director, but left after disagreements with Chayefsky. He was replaced by Ken Russell.
Chayefsky made it clear that he would allow no input into the dialogue or narrative, which Russell felt was too "soppy." Russell was confident that he could get rid of Chayefsky, but found that "the monkey on my back was always there and wouldn't let go." Russell was polite and deferential prior to production but after rehearsals began in 1979 "began to treat Paddy as a nonentity" and was "mean and sarcastic," according to the film's producer Howard Gottfried.
Chayefsky had the power to fire Russell, but was told by Gottfried that he could only do so if he took over direction himself. He left for New York and continued to monitor production. The actors were not permitted to alter the dialogue. Chayefsky later said that in retaliation the actors were instructed to speak their lines while eating or talking too fast. Russell stated that the fast pace and overlapping dialogue was Chayefsky's idea.
Upset by the filming of his screenplay, Chayefsky withdrew from the production of Altered States and took his name off the credits, substituting the pseudonym Sidney Aaron.
Personality and characteristics
In his book Mad as Hell: The Making of Network and the Fateful Vision of the Angriest Man in Movies, journalist Dave Itzkoff wrote that the Howard Beale character in Network was a product of Chayefsky's many frustrations. Itzkoff wrote: "Where others avoided conflict, he cultivated it and embraced it. His fury nourished him, making him intense and unpredictable, but also keeping him focused and productive." Itzkoff describes Chayefsky as "intensely troubled, a huge egomaniac and control freak, dispirited about the world, wryly comic, and a both present and absent family man."
In his biography of Chayefsky's friend Bob Fosse, drama critic Martin Gottfried said Chayefsky was compact and burly in the bulky way of a schoolyard athlete, with thick dark hair and a bent nose that could pass for a streetfighter's. He was a grown-up with one foot in the boys' clubs of his city youth, a street snob who would not allow the loss of his nostalgia. He was an intellectual competitor, always spoiling for a political argument or a philosophical argument, or any exchange over any issue, changing sides for the fun of the fray. A liberal, he was annoyed by liberals; a proud Jew, he wouldn't let anyone call him a "Jewish writer".
In his biography Mad as Hell, author Shaun Considine says that Chayefsky had a "dual personality". Chayefsky's "Paddy" persona had "character, caprice; it appealed to his sense of swagger" and gave him confidence to stand up for his rights. "Sidney" was the "silent creator" who had the talent and genius.
Chayefsky was under psychoanalysis for years, beginning in the late 1950s, to deal with his volatile behavior and rage, which at times was difficult to control.
Political activism
Opposition to McCarthyism
Early in his career, Chayefsky was an opponent of McCarthyism. Along with other writers and performers, he signed a telegram protesting federal inaction after a concert featuring Paul Robeson in Peekskill, New York, prompted violence in which 150 persons were injured. As a result, his name appeared in the anti-Communist vigilante publication The Firing Line, published by the American Legion. Although Chayefsky feared being subpoenaed and his career ruined, that never happened. Actress Betsy Blair described Chayefsky as a Social Democrat and as an anti-Marxist.
He opposed the Vietnam War as a "stupid and utterly unnecessary war whose principal victim would be the United States" and sent a letter to President Richard Nixon decrying the My Lai Massacre, saying Americans were in danger of turning into "a nation of bad Germans."
Soviet Jews and Israel
In the 1970s Chayefsky worked for the cause of Soviet Jews, and in 1971 went to Brussels as part of a U.S. delegation to the International Conference on Soviet Jewry. Believing that the conference was insufficiently aggressive, he founded a new activist organization in New York, Writers and Artists for Peace in the Middle East. Co-founders included Colleen Dewhurst, Frank Gervasi, Leon Uris, Gerold Frank and Elie Wiesel. Chayefsky believed that "Zionists" was used by Marxist anti-Semites as a code word for "Jews".
Chayefsky was increasingly interested in Israel at that time. In an interview with Women's Wear Daily in 1971, he said that he believed that Jews around the world were in imminent danger of genocide. Journalist Dave Itzkoff writes that in the 1970s his views on Israel possessed a "more aggressive and admittedly paranoid streak." He believed that anti-Semitism was rife in the U.S., especially in the New Left, and once physically confronted a heckler who used an anti-Semitic slur during a David Steinberg performance. While filming The Hospital, Chayefsky commenced work on a film project called "The Habbakuk Conspiracy," which he described as a "study of life within an Arab guerrilla cell on the West Bank of the Jordan." The project was sold to United Artists but never filmed, which resulted in lingering resentment toward the studio.
Chayefsky composed, without credit, pro-Israel ads for the Anti-Defamation League at the time of the Yom Kippur War in 1973. In the late 1970s Writers and Artists for Peace in the Middle East placed full-page newspaper ads written by Chayefsky attacking the Palestine Liberation Organization for the massacre of Israeli athletes at the 1972 Summer Olympics.
He rejected Jane Fonda and Vanessa Redgrave for the role of the female lead in Network because of what he alleged were their "anti-Israel leanings," even though Redgrave was director Sidney Lumet's first choice. Accepting the Best Supporting Actress Academy Award for Julia at the 1978 Academy Awards, Redgrave used her acceptance speech to denounce protestors from the Jewish Defense League (JDL), led by Rabbi Meir Kahane, who had burned an effigy of Redgrave outside the Awards site, picketed the ceremony, and earlier called on 20th Century Fox to denounce her and promise never to hire her again. She said, "You should be very proud that in the last few weeks you have stood firm and you have refused to be intimidated by the threats of a small bunch of Zionist hoodlums whose behavior is an insult to the stature of Jews all over the world, and to their great and heroic record of struggle against fascism and oppression." Chayefsky, appearing later, upbraided Redgrave and said "a simple 'Thank you' would have sufficed." The Redgrave and Chayefsky remarks prompted controversy.
Family
Chayefsky met his future wife Susan Sackler during his 1940s stay in Hollywood. The couple married in February 1949. Their son Dan was born in 1955.
Chayefsky's relationship with his wife was strained for much of their marriage, and she became withdrawn and unwilling to appear with him as he became more prominent. Gwen Verdon, wife of his friend Bob Fosse, only saw Susan Chayefsky five times in her life.
Susan Chayefsky suffered from muscular dystrophy, and Dan Chayefsky described himself to author Dave Itzkoff as "a self-destructive teen who brought more pressure to the family home." Despite an alleged affair with Kim Novak, which resulted in his asking his wife for a divorce, Paddy Chayefsky remained married to Susan Chayefsky until his death, and sought her opinion on his screenplays, including Network. She died in 2000.
Death
Chayefsky contracted pleurisy in 1980 and again in 1981. Tests revealed cancer, but he refused surgery out of fear that surgeons would "cut me up because of that movie I wrote about them," referring to The Hospital. He opted for chemotherapy. He died in a New York hospital on August 1, 1981, aged 58, and was interred in the Sharon Gardens Division of Kensico Cemetery in Valhalla, Westchester County, New York.
Longtime friend Bob Fosse performed a tap dance at the funeral, as part of a deal he and Chayefsky had made when Fosse was in the hospital for open-heart surgery: if Fosse died first, Chayefsky would deliver a tedious eulogy, and if Chayefsky died first, Fosse would dance at his memorial. Fosse would dedicate his final film Star 80 to Chayefsky in 1983. Chayefsky's personal papers are at the Wisconsin Historical Society and the New York Public Library for the Performing Arts, Billy Rose Theatre Division.
Filmography
The True Glory (1945) (uncredited)
As Young as You Feel (1951) (story)
Marty (1955)
The Catered Affair (1956)
The Bachelor Party (1957)
The Goddess (1958)
Middle of the Night (1959)
The Americanization of Emily (1964)
Paint Your Wagon (1969) (adaptation)
The Hospital (1971)
Network (1976)
Altered States (1980) (as "Sidney Aaron")
Television and stage plays
Television (selection)
1950–1955 Danger
1951–1952 Manhunt
1951–1960 Goodyear Playhouse
1952–1954 Philco Television Playhouse
1952 Holiday Song
1952 The Reluctant Citizen
1953 Printer's Measure
1953 Marty
1953 The Big Deal
1953 The Bachelor Party
1953 The Sixth Year
1953 Catch My Boy On Sunday
1954 The Mother
1954 Middle of the Night
1955 The Catered Affair
1956 The Great American Hoax
Stage
No T.O. for Love (1945)
Middle of the Night (1956)
The Tenth Man (1959)
Gideon (1961)
The Passion of Josef D. (1964)
The Latent Heterosexual (originally titled The Accountant's Tale or The Case of the Latent Heterosexual) (1968)
Novels
Altered States: A Novel (1978)
Academy Awards
References
Bibliography
External links
The Angry Man WNYC: On The Media audio profile of Paddy Chayefsky, October 27, 2006
Paddy Chayefsky papers at the New York Public Library for the Performing Arts
Paddy Chayefsky Papers at the Wisconsin Center for Film and Theater Research.
Museum of Broadcast Communications: Paddy Chayefsky
Paddy Chayefsky, on Enciclopedia Britannica, Encyclopædia Britannica, Inc
Paddy Chayefsky, on The Encyclopedia of Science Fiction
Paddy Chayefsky, on Open Library, Internet Archive
Paddy Chayefsky, on Internet Speculative Fiction Database, Al von Ruff
Paddy Chayefsky, on MusicBrainz, MetaBrainz Foundation
1923 births
1981 deaths
United States Army personnel of World War II
American male screenwriters
Best Original Screenplay Academy Award winners
City College of New York alumni
DeWitt Clinton High School alumni
Fordham University alumni
Writers from the Bronx
Novelists from New York City
Screenwriters from New York City
Jewish American dramatists and playwrights
Burials at Kensico Cemetery
Jewish American military personnel
Jewish American screenwriters
American people of Ukrainian-Jewish descent
Best Screenplay Golden Globe winners
Best Screenplay BAFTA Award winners
Best Adapted Screenplay Academy Award winners
20th-century American dramatists and playwrights
American male dramatists and playwrights
20th-century American male writers
20th-century American screenwriters
Landmine victims
Military personnel from New York City
United States Army soldiers
20th-century American Jews
Writers Guild of America Award winners
American Zionists
American satirists | Paddy Chayefsky | Chemistry | 7,255 |
52,062,473 | https://en.wikipedia.org/wiki/Spinning%20bee | Spinning bees were 18th-century public events where women in the American Colonies produced homespun cloth to help the colonists reduce their dependence on British goods. They emerged in the decade prior to the American Revolution as a way for women to protest British policies and taxation.
Historical background
Great Britain enforced the 1765 Stamp Act on its American colonies, which taxed official documents throughout the colonies. The British Crown viewed these measures as a legitimate way to raise revenue. In contrast, many colonists viewed these acts as tyrannical, arguing that taxation without consent violated their rights as Englishmen. One common way that colonists protested this act of Parliament was through non-importation agreements and boycotts. Though the Stamp Act was repealed in 1766, the following year Parliament passed the Townshend Acts, imposing a new tax on goods such as glass and paper. Non-importation movements and boycotts resumed in protest of these additional taxes. Spinning bees were among these acts of defiance of the Townshend Acts, encouraging local production of cloth instead of the purchase of imported English textiles that bore the new tax.
Political significance
The homespun cloth and garments that these spinning bees produced became a political symbol as well as a material boycott. Wearing homespun showed other colonists that the wearer was protesting the British by refusing to buy British clothes. In addition to average colonists, prominent colonial leaders and politicians also donned homespun clothing as a show of rebellion against the British Crown. One year prior to the outbreak of the Revolution, the entirety of Harvard's graduating class wore homespun garments.
Spinning bees also held personal importance for women, involving them in the resistance to Great Britain when previously they had been excluded from public displays of resistance against the Crown.
Process of spinning bees
The spinning bees sponsored by Rebel groups such as the Daughters of Liberty represented one way that colonial women could get involved in the protest of imperial policies. The colonies relied on Great Britain for textiles, meaning that a successful boycott would require alternate sources for many goods that colonists imported. The task of enacting the boycott fell to women, providing them an opportunity to join men in the public side of the protest against the British Crown. Women began to compete publicly against one another to see who could make the most homemade cloth, known as homespun. These contests became known as spinning bees.
The Sons of Liberty often co-hosted these events with the Daughters of Liberty as a way to publicly support the Patriot cause against the British. Like other local festivities of the time, spinning bees included songs, picnics, and friendly competitions. Newspaper accounts, for example those from Rhode Island, also demonstrate that spinning bees attempted to use the spirit of competition to bridge the gap between married and unmarried women as well as lower- and upper-class women. The spinning bees would often be community events, taking place in the center of town or in the town minister's home, depending upon the class status of the women involved. It was more likely for poorer women to spin as part of a bigger festivity than upper-class women, who spun at their minister's house.
Legacy
Spinning bees were a predecessor to women's paid work outside the home. Since the spinning bees required women to spin and weave out in public, they presented an opportunity for women to participate in the colonial economy in a public setting. The ability for women to spin as well as weave in public paved the way for women's eventual role in the United States factory system. Factory work became one of the few occupations open to women in the 19th century.
In other countries
Before the advent of electric lighting in Europe, rural and urban women in Germany would gather to do their spinning and other handicrafts in a single house or room in order to preserve firewood, candles, and lantern oil, thus collectively saving supplies for heating and lighting. Depending on the dialect, the gathering was known by various names, including terms meaning "light room" and "distaff room". While the spinning rooms were nominally segregated by gender, it was common for young men to visit the spinning rooms to accompany young women home in the evenings. As such, it was one of the few places that a relationship could be started away from the watchful eyes of church authorities and family members. From the 16th century onwards, this practice drew outrage from Catholics and Protestants alike due to accusations of sexual debauchery. In response, a so-called "light man" could be assigned to a spinning room to hold the members responsible to spiritual authorities.
Ernest Borneman mentions the following obscene terms from spinning room jargon:
The "naughty bride" (variants include "flax queen", "commercial bride", and "rough bride"): the prettiest girl was chosen to be the "naughty bride" at the time of the flax breaking.
The "shaggy bush": a distaff coated in flax. It resembled a fir tree decorated with ribbons, and a girl threw it among the boys so that they could fight for it; whoever conquered it won her favor.
A custom in which the girl wore a flax wreath on the back of her smock, which the boys tried to soak with a bucket of water so that she would have to hang up her skirt and petticoats to dry.
A game in which the girls stuffed flax waste into the boys' waistbands, serving as a playful excuse to quickly grope the male genitals.
The "meat pile": after dancing, all participants dropped to the floor, creating the largest possible crowd, in which there was an opportunity for mutual contact. This custom was considered particularly offensive and was condemned in numerous sermons.
The "flax break": to "tell nonsense, make stupid jokes".
"Hair drying": drying flax, or coitus.
A term for children born in autumn who may have been conceived in the spinning room during the flax crumbling of the previous winter months.
References
American Revolutionary War
Quilting
Weaving
Civil disobedience in the United States
Protest tactics
Manufacturing
18th century in economic history
History of fashion
Textile and clothing labor disputes in the United States | Spinning bee | Engineering | 1,238 |
293,870 | https://en.wikipedia.org/wiki/Natural%20Bridges%20National%20Monument | Natural Bridges National Monument is a U.S. National Monument located northwest of the Four Corners boundary of southeast Utah, in the western United States, at the junction of White Canyon and Armstrong Canyon, part of the Colorado River drainage. It features the thirteenth largest natural bridge in the world, carved from the white Permian sandstone of the Cedar Mesa Formation that gives White Canyon its name.
The three bridges in the park are named Kachina, Owachomo, and Sipapu (the largest), which are all Hopi names. A natural bridge is formed through erosion by water flowing in the stream bed of the canyon. During periods of flash floods, particularly, the stream undercuts the walls of rock that separate the meanders (or "goosenecks") of the stream until the rock wall within the meander is undercut and the meander is cut off and the new stream bed then flows underneath the bridge. Eventually, as erosion and gravity enlarge the bridge's opening, the bridge collapses under its own weight. There is evidence of at least two collapsed natural bridges within the Monument.
History
Humans have lived in the area around Natural Bridges since as early as 7500 BCE, as shown by rock art and stone tools found at nearby sites. Around 700 CE ancestors of modern Puebloan people moved to the site, constructing stone and mortar buildings and granaries. These structures share similarities with those found in Mesa Verde National Park, which can be seen distantly, to the east, from the Bears Ears on the park's eastern border. Like the people of Mesa Verde, the residents of Natural Bridges seem to have left the region around the year 1270.
Europeans first visited the area in 1883, when gold prospector Cass Hite followed White Canyon upstream from the Colorado River and found the bridges near the junction of White and Armstrong canyons. In 1904, National Geographic Magazine publicized the bridges, and the area was designated a National Monument on April 16, 1908, by President Theodore Roosevelt. It is Utah's first National Monument.
The Monument was nearly inaccessible for many decades, as reflected by the visitor log kept by the Monument's superintendents. Reaching the site from Blanding, Utah, the nearest settlement, took a three-day horseback ride. The park received little visitation until after the uranium boom of the 1950s, which resulted in the creation of new roads in the area, including modern-day Utah State Route 95, which was paved in 1976.
Geology
Located within the Colorado Plateau, the monument has three distinct bridges in White and Armstrong Canyons. These canyons were formed when the Colorado River eroded the Permian Cedar Mesa Sandstone. The Sipapu, Kachina, and Owachomo bridges were formed through rock decay, weathering and erosion, as water cut through narrow canyon walls. The monument is also the location of significant biological soil crust. In places, desert varnish darkens the lighter white Cedar Mesa Sandstone. The Monument's elevation ranges up to .
Attractions
The main attractions are the natural bridges, accessible from Bridge View Drive, which winds along the park and goes by all three bridges, and by hiking trails leading down to the bases of the bridges. There are also a campground, picnic areas, and a visitor center within the park. Electricity in the park comes entirely from a large solar array near the visitor center.
In 2007, the International Dark-Sky Association named Natural Bridges the first International Dark-Sky Park, a designation that recognizes not only that the park has some of the darkest and clearest skies in all of the United States, but also that the park has made every effort to conserve the natural dark as a resource worthy of protection. Natural Bridges has the only night sky monitored by the NPS Night Sky Team that rates a Class 2 on the Bortle Dark-Sky Scale, giving it the darkest sky ever assessed.
Horsecollar Ruin is an Ancestral Puebloan ruin visible from an overlook a short hike from Bridge View Drive. The site was abandoned more than 700 years ago but is in a remarkable state of preservation, including an undisturbed rectangular kiva with the original roof and interior, and two granaries with unusual oval-shaped doors whose shape resembles horse collars (hence the site's name).
Biology
Animal species found in the National Monument include birds such as pinyon jays, canyon wrens, and turkeys (which were reintroduced by the State of Utah to the table-lands above the Monument), and mammals such as rabbits, pack rats, bobcats, coyotes, bears, mule deer, and mountain lions. The Monument's pygmy rattlesnakes have been the subject of occasional study; several lizard species common to southern Utah are abundant. In May 2006, KSL Newsradio reported a case of plague found in dead field mice and chipmunks at Natural Bridges.
Native plant species include willow, cottonwood, Douglas fir, ponderosa pine, pinyon pine, juniper, grasses, annuals, and perennials such as asters, penstemons, buckwheats, and Indian paintbrush, and various shrubs such as dwarf oaks, bayberry, manzanita, buffaloberry, rabbitbrush, black brush, brittle brush, Apache's plume, sage, yucca, and Mormon tea. Invasive species include tumbleweeds, certain thistles, dandelions, and tamarisk.
Much of the Monument's ground is covered in communities of cryptobiotic soil crusts, which prevents soil erosion and promotes the retention of soil nutrients.
Climate
Natural Bridges National Monument has a cold semi-arid climate (Köppen: BSk) with cold winters and hot summers.
See also
List of national monuments of the United States
Arches National Park
Dark Canyon Wilderness
Glen Canyon National Recreation Area
Manti-La Sal National Forest
Muley Point
Rainbow Bridge National Monument
Bears Ears National Monument
Hite, Utah
References
External links
by the National Park Service
Information by AmericanSouthwest.net
National Park Service national monuments in Utah
Colorado Plateau
Natural arches of Utah
Protected areas of San Juan County, Utah
Parks in Utah
Protected areas established in 1908
Dark-sky preserves in the United States
1908 establishments in Utah
Natural arches of San Juan County, Utah
Geotopes | Natural Bridges National Monument | Astronomy | 1,269 |
1,709,422 | https://en.wikipedia.org/wiki/IBM%20Lightweight%20Third-Party%20Authentication | Lightweight Third-Party Authentication (LTPA) is an authentication technology used in IBM WebSphere and Lotus Domino products. When accessing web servers that use the LTPA technology, a web user can re-use their login across physical servers.
A Lotus Domino server or an IBM WebSphere server that is configured to use LTPA authentication will challenge the web user for a name and password. When the user has been authenticated, their browser will have received a session cookie - a cookie that is only available for one browsing session. This cookie contains the LTPA token.
If the user – after having received the LTPA token – accesses a server that is a member of the same authentication realm as the first server, and if the browsing session has not been terminated (the browser was not closed down), then the user is automatically authenticated and will not be challenged for a name and password. Such an environment is also called a single sign-on environment.
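As a rough illustration of that flow, the sketch below shows a server that challenges the browser only when no session cookie is present. It is not IBM's implementation: a real LTPA-enabled server decrypts and validates the token with keys shared across the authentication realm, whereas this sketch only checks that the cookie exists. The cookie name "LtpaToken", the /login path, and the port are assumptions made for the example.

```typescript
// Minimal single sign-on sketch, not IBM's implementation: a real WebSphere or
// Domino server decrypts and validates the LTPA token using keys shared across
// the authentication realm; here the presence of the cookie alone is trusted.
// The cookie name "LtpaToken", the /login path and the port are assumptions.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Pull a single named cookie out of the Cookie request header.
function getCookie(req: IncomingMessage, name: string): string | undefined {
  const header = req.headers.cookie ?? "";
  for (const part of header.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return rest.join("=");
  }
  return undefined;
}

createServer((req: IncomingMessage, res: ServerResponse) => {
  const token = getCookie(req, "LtpaToken");
  if (!token) {
    // No token yet: challenge the user by redirecting to a login form.
    res.writeHead(302, { Location: "/login" });
    res.end();
    return;
  }
  // Token present: a real server would now decrypt it and check its expiry
  // and signature before treating the request as authenticated.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Authenticated via the shared session cookie\n");
}).listen(8080);
```

Because every server in the realm shares the keys used to create the token, any of them can validate it without contacting the server that issued it, which is what makes the single sign-on behaviour described above possible.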
See also
Access control
List of single sign-on implementations
References
DeveloperToolbox Technical Magazine: WebSphere and Domino single sign-on
DominoTomcatSSO at OpenNTF.org: An open source implementation of LTPA for Tomcat
Websphere
Websphere Liberty Profile
Lightweight Third-Party Authentication
Computer access control | IBM Lightweight Third-Party Authentication | Technology,Engineering | 264 |
36,796,573 | https://en.wikipedia.org/wiki/Foundation%20%28framework%29 | Foundation is a free responsive front-end framework, providing a responsive grid and HTML and CSS UI components, templates, and code snippets, including typography, forms, buttons, navigation and other interface elements, as well as optional functionality provided by JavaScript extensions. Foundation is an open source project, and was formerly maintained by ZURB. Since 2019, Foundation has been maintained by volunteers.
Origin
Foundation emerged as a ZURB project to develop front-end code more efficiently. In October 2011, ZURB released Foundation 2.0 as open source under the MIT License. ZURB released Foundation 3.0 in June 2012, 4.0 in February 2013, 5.0 in November 2013, and 6.0 in November 2015. The team then started working on the next version, Foundation for Sites 7, which will most likely drop support for older browsers and adopt newer technologies such as flexbox or a calculated grid system.
Foundation for Emails, formerly known as ZURB Ink, was released in September 2013.
Foundation for Apps was released in December 2014.
Features
Foundation was designed for and tested on numerous browsers and devices. It is a responsive framework built with Sass/SCSS. The framework includes most common patterns needed to prototype a responsive site.
Since version 2.0 it also supports responsive design. This means the graphic design of web pages adjusts dynamically, taking into account the characteristics of the device used (PC, tablet, mobile phone). Version 4.0 has taken a mobile-first approach, designing and developing for mobile devices first, and enhancing the web pages and applications for larger screens.
Foundation is open source and available on GitHub. Developers are encouraged to participate in the project and make their own contributions to the platform.
Structure and function
Foundation is modular and consists essentially of a series of Sass stylesheets that implement the various components of the toolkit. Component stylesheets can be included via Sass or by customizing the initial Foundation download. Developers can adapt the Foundation file itself, selecting the components they wish to use in their project.
Grid system and responsive design
Foundation comes standard with a 940-pixel-wide, flexible grid layout. The toolkit is fully responsive, making use of different resolutions and types of devices: mobile phones in portrait and landscape format, tablets, and PCs with low and high (widescreen) resolutions. The framework adjusts the width of the columns automatically.
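To make the grid behaviour concrete, the snippet below assembles a two-column row using class names that follow the conventions documented for Foundation 5 ("row", "columns", and breakpoint prefixes such as "small-" and "medium-"); the helper function itself is only an illustration and is not part of Foundation.

```typescript
// Sketch only: assembles markup using Foundation-style grid classes.
// The class names ("row", "small-12", "medium-6", "columns") follow the
// naming documented for Foundation 5; other versions differ slightly.
function twoColumnRow(main: string, sidebar: string): string {
  return `
<div class="row">
  <!-- full width on small screens, half width from the medium breakpoint up -->
  <div class="small-12 medium-6 columns">${main}</div>
  <div class="small-12 medium-6 columns">${sidebar}</div>
</div>`;
}

console.log(twoColumnRow("<p>Main content</p>", "<p>Sidebar</p>"));
```

On a phone, each cell spans the full 12-column width and the two stack vertically; from the medium breakpoint up they sit side by side, which is the automatic column adjustment described above.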
Understanding CSS stylesheet
Foundation includes a set of stylesheets that provide basic style definitions for all key HTML components. These give a uniform, modern appearance across browsers and systems for formatting text, tables and form elements.
Reusable components
In addition to the regular HTML elements, Foundation contains other commonly used interface elements. These include buttons with advanced features (for example, grouped buttons or buttons with a drop-down option), navigation lists, horizontal and vertical tabs, breadcrumb navigation, pagination, labels, advanced typographic capabilities, and formatting for messages such as warnings.
JavaScript components and plug-ins
The JavaScript components of Foundation 4 were moved from the jQuery JavaScript library to Zepto, on the presumption that the physically smaller but API-compatible alternative to jQuery would prove faster for the user. However, Foundation 5 moved back to the newer jQuery 2. "jQuery 2.x has the same API as jQuery 1.x, but does not support Internet Explorer 6, 7, or 8," the official ZURB blog explains; the unsigned writer attributes the switch back to compatibility issues with customized efforts, and to performance in user testing that was found to be not as good as with the newer jQuery 2.
Use
There are three levels of integration for Foundation: CSS, SASS, and Ruby on Rails with the Foundation Rails Gem.
CSS
To use Foundation CSS, default or custom CSS packages can be downloaded from the download page and installed into the appropriate web server folders. Foundation is then integrated into HTML page markup.
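As a minimal sketch of that integration, the script below writes out a page that links the downloaded stylesheet and scripts. The file paths (css/foundation.css, js/foundation.min.js, the bundled jQuery) are assumptions about how the downloaded package was unpacked rather than fixed requirements, and the closing $(document).foundation() call is the initialisation step commonly shown in Foundation's documentation.

```typescript
// Sketch: generates a minimal page wired to a downloaded Foundation package.
// The asset paths below are assumptions about where the download was unpacked,
// not fixed requirements of the framework.
import { writeFileSync } from "node:fs";

const page = `<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="css/foundation.css">
  </head>
  <body>
    <div class="row">
      <div class="small-12 columns"><h1>Hello, Foundation</h1></div>
    </div>
    <script src="js/vendor/jquery.js"></script>
    <script src="js/foundation.min.js"></script>
    <script>$(document).foundation();</script>
  </body>
</html>
`;

writeFileSync("index.html", page);
```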
Sass
The Foundation Sass install uses Ruby, Node.js, and Git to install Foundation sources. Foundation then provides a command line interface to modify and compile source to CSS for use in HTML page markup.
Foundation Rails gem
The Foundation Rails gem can be installed by adding "gem 'foundation-rails'" to the Rails Application Gemfile.
References
External links
Official documentation for the JavaScript components
CSS frameworks
Software using the MIT license
Web frameworks
Web design | Foundation (framework) | Engineering | 944 |
33,973,815 | https://en.wikipedia.org/wiki/Online%20doctor | Online doctor is a term that emerged during the 2000s, used by both the media and academics, to describe a generation of physicians and health practitioners who deliver healthcare, including drug prescription, over the internet.
Emergence of online doctoring
In the 2000s, many people came to treat the internet as a first, or at least a major, source of information and communication. Health advice is now the second-most popular topic, after pornography, that people search for on the internet. With the advent of broadband and videoconferencing, many individuals have turned to online doctors to receive online consultations and purchase prescription drugs. Use of this technology has many advantages for both the doctor and the patient, including cost savings, convenience, accessibility, and improved privacy and communication.
In the US, a 2006 study found that searching for information on prescription or over-the-counter drugs was the fifth most popular search topic, and a 2004 study found that 4% of Americans had purchased prescription medications online. A 2009 survey conducted by Geneva-based Health On the Net Foundation found one-in-ten Europeans buys medicines from websites and one-third claim to use online consultation. In Germany, approximately seven million people buy from mail-order pharmacies, and mail-order sales account for approximately 8–10% of total pharmaceutical sales. In 2008, the Royal Pharmaceutical Society of Great Britain reported that approximately two million people in Great Britain were regularly purchasing pharmaceuticals online (both with a prescription from registered online UK doctors and without prescriptions from other websites). A recent survey commissioned by Pfizer, the Medicines and Healthcare products Regulatory Agency, RPSGB, the Patients Association and HEART UK found that 15% of the British adults asked had bought a prescription-only medicine online.
In developed countries, many online doctors prescribe so-called ‘lifestyle drugs’, such as for weight loss, hair loss or erectile dysfunction. The RPSGB has identified the most popular products prescribed online as Prozac (an antidepressant), Viagra (for erectile dysfunction), Valium (a tranquilliser), Ritalin (a psychostimulant), Serostim (a synthetic growth hormone) and Provigil (a psychostimulant). A study in the USA has also shown that antibiotics are commonly available online without prescription.
Potential harm
Traditionalist critics of online doctors argue that an online doctor cannot provide proper examinations or diagnosis either by email or video call. Such consultations, they argue, will always be dangerous, with the potential for serious disease to be missed. There are also concerns that the absence of proximity leads to treatment by unqualified doctors or patients using false information to secure dangerous drugs.
Proponents argue there is little difference between an e-mail consultation and the sort of telephone assessment and advice that doctors regularly make out of hours or in circumstances where doctors cannot physically examine a patient (e.g., jungle medicine).
Laurence Buckman, chairman of the British Medical Association’s GPs’ committee, says that online consultations make life easier for doctors and patients when used properly. "Many GPs will be very happy with it and it could be useful. When it’s a regular patient you know well, it follows on from telephone consulting. Voice is essential, vision is desirable. The problem comes when I don’t know the patient".
Niall Dickson, chief executive of the General Medical Council, says: "We trust doctors to use their judgement to decide whether they should see a patient in person. Online consultations will be appropriate for some patients, whereas other patients will need a physical examination or may benefit from seeing their doctor in person".
Past and future developments
The first medical consulting website in the US was WebMD, founded in 1996 by Jim Clark (one of the founders of Netscape) and Pavan Nigam as Healthscape. Currently, its website carries information regarding health and health care, including a symptom checklist, pharmacy information, drug information, blogs of physicians with specific topics, and a place to store personal medical information. As of February 2011, WebMD's network of sites reaches an average of 86.4 million visitors per month and is the leading health portal in the United States.
Many US healthcare and medical consulting sites have experienced dramatic growth. (Healthline, launched in 2005, grew by 269% to 2.7 million average monthly unique visitors in Q1 2007 from 0.8 million average monthly unique visitors in Q1 2006). Several American online doctor companies provide consultations with doctors over the phone or the Internet. Prominent San Francisco-based venture capital firm Founders Fund called such services "extraordinarily fast" and predicted that they will "bring relief to thousands of people with immediate medical needs".
In the UK, e-med was the first online health site to offer both a diagnosis and prescriptions to patients over the Internet. It was established in March 2000 by Dr. Julian Eden. In 2010, DrThom claimed to have 100,000 patients visit their site. NHS Direct (currently NHS Choices) is the free health advice and information service provided by the National Health Service (NHS) for residents and visitors in the UK, with advice offered 24 hours a day via telephone and web contact. Over 1.5 million patients visit the website every month. More recently, a number of online doctor services have emerged in the country; firms such as Now Healthcare Group, Dr Fox Pharmacy, Push Doctor and Lloyds Pharmacy offer consultations and prescriptions via the Internet.
In Australia HealthDirect is the free health advice and information service provided by the government with advice offered 24 hours a day via telephone. Medicare began funding online consultations for specialists on 1 July 2011 which has seen a slow but steady increase in volumes.
In India, Lybrate is an online healthcare platform that connects doctors and patients through their mobile devices. The platform allows a patient to consult a doctor online through a video call or live message chat, to schedule an appointment, and to obtain medication information instantly.
New advances in digital information technology mean that in future online doctors and healthcare websites may offer advanced scanning and diagnostic services over the internet. The Nuffield Council on Bioethics identifies such services as direct-to-consumer body imaging (such as CT and MRI scans) and personal genetic profiling for individual susceptibility to disease. Professor Sir Bruce Keogh, the medical director of the UK NHS, is drawing up plans to introduce electronic consultation via Skype and has said IT will "completely change the way [doctors] deliver medicine".
See also
eHealth
e-Patient
Health informatics
mHealth
Telemedicine
Telehealth
References
External links
NHS Choices The UK government's medical advice and treatment portal
FDA Federal Drug Administration's guidelines to consumers dealing with online doctors
FSMB Federation of State Medical Boards, body of the state medical boards that regulate online doctors in the US
CQC Quality Care Commission Board, body that regulates online doctors in the UK
Health informatics
Telemedicine
Health care occupations
Physicians | Online doctor | Biology | 1,440 |
18,416,104 | https://en.wikipedia.org/wiki/YM%20%28selective%20medium%29 | YM Agar and Broth is a selective growth medium with low pH, useful for cultivating yeasts, molds, and other acid-tolerant or acidophilic organisms while deterring the growth of most bacteria and other acid-intolerant organisms. It is a malt extract medium modified by the addition of yeast extract and peptone.
The 'YM' of the name stands either for 'Yeast and Mold', or 'Yeast extract-Malt extract' depending on the source.
Variations include YMG agar/broth, which contains yeast, malt, and glucose.
References
Microbiological media
Cell culture media | YM (selective medium) | Biology | 131 |
27,848,005 | https://en.wikipedia.org/wiki/Spatial%20distribution | A spatial distribution in statistics is the arrangement of a phenomenon across the Earth's surface and a graphical display of such an arrangement is an important tool in geographical and environmental statistics. A graphical display of a spatial distribution may summarize raw data directly or may reflect the outcome of a more sophisticated data analysis. Many different aspects of a phenomenon can be shown in a single graphical display by using a suitable choice of different colours to represent differences.
One example of such a display could be observations made to describe the geographic patterns of features, both physical and human across the earth.
The information included can cover where the units of a phenomenon are located, how many units there are per unit of area, and how sparsely or densely packed they are.
Patterns of spatial distribution
Usually, for a phenomenon that changes in space, there is a pattern that determines the location of the subject of the phenomenon and its intensity or size, in X and Y coordinates. The scientific challenge is trying to identify the variables that affect this pattern. The issue can be demonstrated with several simple examples:
The spatial distribution of the human population
The spatial distribution of the population and development are closely related to each other, especially in the context of sustainability. The challenges related to the spatial spread of a population include: rapid urbanization and population concentration, rural population, urban management and poverty housing, displaced persons and refugees. Migration is a basic element in the spatial distribution of a population, and it may remain a key driver in the coming decades, especially as an element of urbanization in developing countries.
The spatial distribution of economic activity in the world
In a pair of studies from Brown University by urban economist J. Vernon Henderson, with co-authors Adam Storeygard and David Weil, the spatial distribution of the economic activity in the world was examined by mapping the artificial lights at night from space over 250,000 grid cells, the average area of each of which is 560 square kilometers. They found that 50% of the variation in this activity can be explained through a system of physical geographic features.
The spatial distribution of the seismic intensities of an earthquake
The seismic intensities of an earthquake are distributed across space with an elementary regularity: towns located close to the epicenter experience high intensities, while settlements far from the epicenter experience low ones. The distance of each settlement from the epicenter, expressed in XY coordinates, is therefore one variable that affects the intensity observed there. Other variables, such as the geological structure beneath each settlement and its topography, also affect the intensities and complicate the simple effect of distance. If the contribution of most of these variables to the intensity observed at a given settlement can be identified, the pattern underlying the spatial organization of seismic intensity in a specific earthquake can be understood, which in turn supports seismic risk surveys and assessments. A minimal synthetic sketch of this kind of analysis is given below.
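As a purely illustrative sketch of this kind of analysis (not drawn from any real earthquake), the following Python example generates synthetic settlements at X-Y coordinates, assigns each an intensity that decays with distance from an assumed epicentre plus noise standing in for geology and topography, and then quantifies how much of the spatial variation distance alone explains. The attenuation law, coefficients and variable names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed epicentre and 200 synthetic settlements on an X-Y plane (arbitrary units).
epicentre = np.array([50.0, 50.0])
xy = rng.uniform(0, 100, size=(200, 2))
dist = np.linalg.norm(xy - epicentre, axis=1)          # distance of each settlement

# Illustrative attenuation law: intensity falls with log-distance, plus noise
# standing in for local geology, topography and other unmodelled variables.
intensity = 9.0 - 2.0 * np.log10(dist + 1.0) + rng.normal(0, 0.3, size=dist.size)

# "Identify the variable that affects the pattern": regress intensity on log-distance.
slope, intercept = np.polyfit(np.log10(dist + 1.0), intensity, deg=1)
fitted = intercept + slope * np.log10(dist + 1.0)
r2 = 1 - np.sum((intensity - fitted) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

print(f"fitted attenuation slope: {slope:.2f} intensity units per decade of distance")
print(f"share of spatial variation explained by distance alone: {r2:.2f}")
```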
The spatial distribution of a population with health impairments related to vitamin A deficiency
Vitamin A deficiency is a major public health problem in poor societies, and dietary consumption of foods rich in vitamin A has been low in Ethiopia. In 2021, a study was published that evaluated the spatial distribution of dietary consumption of foods rich (or poor) in vitamin A among children aged 6–23 months in Ethiopia, and the spatial variables affecting it.
More examples
Many police departments colour-code a city map based on crime statistics.
The two-step floating catchment area (2SFCA) method has been used to prepare maps showing the relative accessibility of individuals (demand units) to physicians (supply units), with shading that distinguishes many different degrees of accessibility.
Notes
Demographics
Spatial analysis
Statistical charts and diagrams | Spatial distribution | Physics | 768 |
67,319,458 | https://en.wikipedia.org/wiki/A.%20W.%20Faber%20Model%20366 | The A. W. Faber Model 366 was an unusual model of slide rule, manufactured in Germany by the A. W. Faber Company around 1909, with scales that followed a system invented by Johannes Schumacher (1858-1930) that used discrete logarithms to calculate products of integers without approximation.
The Model 366 is notable for its table of numbers, mapping the numbers 1 to 100 to a permutation of the numbers 0 to 99 in a pattern based on discrete logarithms. The markings on the table are:
{| style="text-align: right; font-size: 90%;"
| N || || 0 || 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9
|-
|
|-
| || || || 0 || 1 || 69 || 2 || 24 || 70 || 9 || 3 || 38
|-
| 1 || || 25 || 13 || 71 || 66 || 10 || 93 || 4 || 30 || 39 || 96
|-
| 2 || || 26 || 78 || 14 || 86 || 72 || 48 || 67 || 7 || 11 || 91
|-
| 3 || || 94 || 84 || 5 || 82 || 31 || 33 || 40 || 56 || 97 || 35
|-
| 4 || || 27 || 45 || 79 || 42 || 15 || 62 || 87 || 58 || 73 || 18
|-
| 5 || || 49 || 99 || 68 || 23 || 8 || 37 || 12 || 65 || 92 || 29
|-
| 6 || || 95 || 77 || 85 || 47 || 6 || 90 || 83 || 81 || 32 || 55
|-
| 7 || || 34 || 44 || 41 || 61 || 57 || 17 || 98 || 22 || 36 || 64
|-
| 8 || || 28 || 76 || 46 || 89 || 80 || 54 || 43 || 60 || 16 || 21
|-
| 9 || || 63 || 75 || 88 || 53 || 59 || 20 || 74 || 52 || 19 || 51
|-
| 10 || || 50
|}
The slide rule has two scales on each side of the upper edge of the slider marked with the integers 1 to 100 in a different permuted order, evenly spaced apart. The ordering of the numbers on these scales is
1, 2, 4, 8, 16, 32, 64, 27, 54, 7, 14, 28, 56, 11, 22, 44, 88, 75, 49, 98, 95, 89, 77, 53, 5, 10, 20, 40, 80, 59, 17, 34, 68, 35, 70, 39, 78, 55, 9, 18, 36, 72, 43, 86, 71, 41, 82, 63, 25, 50, 100, 99, 97, 93, 85, 69, 37, 74, 47, 94, 87, 73, 45, 90, 79, 57, 13, 26, 52, 3, 6, 12, 24, 48, 96, 91, 81, 61, 21, 42, 84, 67, 33, 66, 31, 62, 23, 46, 92, 83, 65, 29, 58, 15, 30, 60, 19, 38, 76, 51
which corresponds to the inverse permutation to the one given by the number table.
There are also two scales on each side of the lower edge of the slider, consisting of the integers 0 to 100 similarly spaced, but in ascending order, with the zero on the lower scales lining up with the 1 on the upper scales.
Schumacher's indices are an example of Jacobi indices, generated with p = 101 and g = 2. Schumacher's system of indices correctly generates the desired products, but is not unique: several other similar systems have been created by others, including systems by Ludgate, Remak and Korn.
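The construction described above can be reproduced directly. The following Python sketch rebuilds Schumacher's index table as discrete logarithms to the base g = 2 modulo p = 101 and shows how indices are added modulo 100 to multiply; this is a modern reconstruction for illustration only, not code associated with the original instrument.

```python
# Rebuild Schumacher's indices: index[n] is the discrete log of n to base 2 modulo 101.
P, G = 101, 2

index = {}
value, exponent = 1, 0
while len(index) < 100:            # powers of 2 run through every residue 1..100
    index[value] = exponent
    value = (value * G) % P
    exponent += 1

# Spot-check against the printed table: 1->0, 2->1, 3->69, 4->2, 5->24.
assert [index[n] for n in (1, 2, 3, 4, 5)] == [0, 1, 69, 2, 24]

def multiply(a, b):
    """Multiply via index addition; valid when the true product is at most 100."""
    assert a * b <= 100, "larger products needed Schumacher's additional rules"
    total = (index[a] + index[b]) % 100
    # Invert the table: find the number whose index equals the sum.
    return next(n for n, k in index.items() if k == total)

print(multiply(3, 5))    # 15: index 69 + index 24 = 93, which is the index of 15
print(multiply(7, 14))   # 98: index 9 + index 10 = 19, which is the index of 98
```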
An elaborate system of rules had to be used to compute products larger than 100, since adding indices only yields the product modulo 101.
Very few of the Model 366 slide rules remain, with only five known to have survived.
See also
Irish logarithms, a similar scheme intended for use in a mechanical calculation machine, introduced in 1909 by Percy Ludgate
Canon arithmeticus, a table of indices and powers with respect to primitive roots originally published by Carl Gustav Jacob Jacobi
References
External links
Ein Rechenschieber mit Teilung in gleiche Intervalle auf der Grundlage der zahlentheoretischen Indizes. Für den Unterricht konstruiert (English: "A slide rule with division into equal intervals based on number theoretic indices. Designed for teaching."), Dr. Joh. Schumacher, Munich, 1909 (in German)
Rechnerlexikon article on discrete logarithms, including use in the Schumacher slide rule (in German)
High resolution images of the Model 366 slide rule at the Oughtred Society
A Model 366 slide rule made in 1921
Close-up of the number table attached to the cursor
Mechanical calculators
Discrete mathematics
Multiplication | A. W. Faber Model 366 | Mathematics | 1,151 |
2,388,915 | https://en.wikipedia.org/wiki/Gunnar%20Nordstr%C3%B6m | Gunnar Nordström (12 March 1881 – 24 December 1923) was a Finnish theoretical physicist best remembered for his theory of gravitation, which was an early competitor of general relativity. Nordström is often designated by modern writers as The Einstein of Finland due to his novel work in similar fields with similar methods to Einstein.
Education and career
Nordström graduated from the Brobergska school in central Helsinki in 1899. He first went on to study mechanical engineering, graduating in 1903 from the Polytechnic Institute in Helsinki, later renamed the Helsinki University of Technology and today part of Aalto University. During his studies he developed an interest in more theoretical subjects, and after graduation he continued studying for a master's degree in natural science, mathematics and economics at the University of Helsinki (1903–1907).
Nordström then moved to Göttingen, Germany, where he had been recommended to go to study physical chemistry. However, he soon lost interest in the intended field and moved to study electrodynamics, a field the University of Göttingen was renowned for at the time. He returned to Finland to complete his doctoral dissertation at the University of Helsinki in 1910, and become a docent at the university. Subsequently, he became fascinated with the very novel and soon burgeoning field of gravitation and wanted to move to the Netherlands where scientists with contributions to that fields such as Hendrik Lorentz, Paul Ehrenfest and Willem de Sitter were active. Nordström was able to move to Leiden in 1916 to work under Ehrenfest, in the midst of the First World War, due to his Russian passport. Nordström spent considerable time in Leiden where he met a Dutch physics student, Cornelia van Leeuwen, with whom he went on to have several children. After the war he declined a professorship at the University of Berlin, a post awarded instead to Max Born, in order to return to Finland in 1918 and hold at first the professorship of physics and later the professorship of mechanics at the Helsinki University of Technology.
One of the keys to Nordström's success as a scientist was his ability to learn to apply differential geometry to physics, a new approach that also would eventually lead Albert Einstein to the theory of general relativity. Few other scientists of the time in the world were able to make effective use of this new analytical tool, with the notable exception of Ernst Lindelöf.
Contributions to theory
During his time in Leiden, Nordström solved Einstein's field equations outside a spherically symmetric charged body. The solution was also found by Hans Reissner, Hermann Weyl and George Barker Jeffery, and it is nowadays known as the Reissner–Nordström metric. Nordström maintained frequent contact with many of the other great physicists of the era, including Niels Bohr and Albert Einstein. For example, it was Bohr's contributions that helped Nordström to circumvent the Russian censorship of German post to Finland, which at the time was a grand duchy in personal union with the Russian Empire.
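For reference, the line element now known as the Reissner–Nordström metric can be written, in geometrized units (G = c = 1) with Gaussian units for the charge, for a body of mass M and charge Q as below; this is the standard modern form rather than the notation used in the original papers.

```latex
ds^{2} = -\left(1 - \frac{2M}{r} + \frac{Q^{2}}{r^{2}}\right) dt^{2}
         + \left(1 - \frac{2M}{r} + \frac{Q^{2}}{r^{2}}\right)^{-1} dr^{2}
         + r^{2}\left(d\theta^{2} + \sin^{2}\theta \, d\varphi^{2}\right)
```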
The theory for which Nordström was arguably most famous in his own lifetime, his theory of gravitation, was for a long time considered as a competitor to Einstein's theory of general relativity, which was published in 1915, after Nordström's theory. In 1914 Nordström introduced an additional space dimension to his theory, which provided coupling to electromagnetism. This was the first of the extra dimensional theories, which later came to be known as Kaluza–Klein theory. Kaluza and Klein, whose names are commonly used today for the theory, did not publish their work until the 1920s. Some speculations as to why Nordström's contribution fell into obscurity are that his theory was partly published in Swedish and that Einstein in a later publication referenced to Kaluza alone. Today extra dimensions and theories thereof are widely researched, debated and even looked for experimentally.
Nordström's theory of gravitation was subsequently experimentally found to be inferior to Einstein's, as it did not predict the bending of light which was observed during the solar eclipse in 1919. However, Nordström and Einstein were friendly competitors, and by some measure even collaborators, rather than rivals. This can be seen from Nordström's public admiration of Einstein's work, as demonstrated by the two occasions on which Nordström nominated Einstein for the Nobel Prize in physics for his theory of relativity. Einstein never received the Nobel prize for the theory, as the first experimental evidence presented in 1919 could at the time still be disputed and there was not yet a consensus or even general understanding in the scientific community of the complex mathematical models that Einstein, Nordström and others had developed. Nordström's scalar theory is today mainly used as a pedagogical tool when learning general relativity.
Today, there is limited public knowledge of Nordström's contributions to science, even in Finland. However, after his death a number of Finnish physicists and mathematicians devoted their time to the theory of relativity and differential geometry, presumably due to the legacy he left. On the other hand, the most notable opponent of general relativity in the Finnish scientific world was Hjalmar Mellin, the previous rector of the Helsinki University of Technology where Nordström held professorship.
Personal life
At the outbreak of WWI, Nordström moved to the Netherlands, where he met and married his wife Cornelia van Leeuwen. They moved back to Finland in 1918.
Death
Nordström died in December 1923, at the age of 42, from pernicious anemia. The illness was perhaps caused by exposure to radioactive substances. Nordström was known for experimenting with radioactive substances and for enjoying the Finnish sauna tradition using water from a spring rich in radium. Among his publications there is one from 1913 regarding the measurement of the radioactive emanation power of different springs and ground waters in Finland.
Selected publications
During Nordström's career he published 34 articles and research papers in languages including German, Dutch, Finnish, and his mother-tongue Swedish. Nordström is probably the first person to write about the theory of relativity in the languages of Finland.
Die Energiegleichung für das elektromagnetische Feld bewegter Körper, 1908, Doctoral dissertation
Rum och tid enligt Einstein och Minkowski, 1909, published in a series of the Finnish Society of Sciences and Letters: Öfversigt af Finska Vetenskaps-Societetens Förhandlingar
Relativitätsprinzip und Gravitation, 1912, in Physikalische Zeitschrift
Träge und Schwere Masse in der Relativitätsmechanik, 1913, in Annalen der Physik
Über die Möglichkeit, das Elektromagnetische Feld und das Gravitationsfeld zu vereiningen, 1914, in Physikalische Zeitschrift
Zur Elektrizitäts- und Gravitationstheorie, 1914, in the series Öfversigt
Über eine mögliche Grundlage einer Theorie der Materie, 1915, in the series Öfversigt
Een en ander over de energie van het zwaarte krachtsveld volgens de theorie van Einstein, 1918
See also
Nordström's theory of gravitation
Notes
References
Offers some historical information regarding Nordström's contributions to physics.
External links
Relativity theorists
1881 births
1923 deaths
20th-century Finnish physicists
Swedish-speaking Finns | Gunnar Nordström | Physics | 1,537 |
68,363,689 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Z%20Fold%203 | The Samsung Galaxy Z Fold 3 (stylized as Samsung Galaxy Z Fold3, sold as Samsung Galaxy Fold 3 in certain territories) is a foldable smartphone that is part of the Samsung Galaxy Z series. It was revealed by Samsung Electronics on August 11, 2021 at the Samsung Unpacked event alongside the Z Flip 3. It is the successor to the Samsung Galaxy Z Fold 2.
In March 2022, Samsung rebranded the device as "Galaxy Fold 3" in certain Eastern European territories, potentially due to the Russian invasion of Ukraine and Russia's use of Z for military vehicles.
Specifications
Design
The Z Fold 3's outer display and back panel use Gorilla Glass Victus, whilst the foldable inner display is made of Samsung's proprietary "Ultra-Thin Glass" with two protective PET plastic layers covering it, the top of which is a replaceable screen protector.
The Z Fold 3 has an IPX8 ingress protection rating for water resistance up to and including full submersion for 30 minutes up to a maximum depth of 1.5 meters, with dust resistance not being rated. The outer frame is constructed from aluminum, marketed as 'Armor Frame' by Samsung, which is claimed to be 10% stronger than the Z Fold 2's aluminum frame.
Hardware
The Galaxy Z Fold 3 has two screens: its external cover screen which is a 6.23-inch 120 Hz display, and its foldable inner screen which is a 7.6-inch 120 Hz display featuring support for the S Pen Pro and the S Pen Fold Edition; both of which feature support for variable refresh rate to help maximize power efficiency.
While the inner display is unchanged in size and shape from the previous-generation Z Fold 2, the external cover display was made about 1 mm wider through a slight reduction in its bezel, changing its aspect ratio from roughly 25:9 to roughly 24.5:9 and making it slightly larger overall (a diagonal of about 15.8 cm versus about 15.7 cm).
The device has 12 GB of RAM, and either 256 or 512 GB of UFS 3.1 flash storage, with no support for expanding the device's storage capacity via micro-SD cards.
The Z Fold 3 is powered by the Qualcomm Snapdragon 888, upgraded from the Z Fold 2's Qualcomm Snapdragon 865+.
The device's included battery is a 4400 mAh dual-cell that fast charges via a USB-C cable up to 25W, or via wireless charging up to 10W.
The Z Fold 3 features 3 rear cameras, including a 12MP wide-angle camera, 12MP ultra-wide camera, and a 12MP telephoto camera, and features two front facing cameras, a 10MP selfie camera on the outer display, and a 4MP under-screen camera on the foldable inner display.
Samsung initially disabled camera-related functions if the user attempted to unlock the bootloader; this behaviour was changed in later updates.
References
External links
Samsung Galaxy
Foldable smartphones
Mobile phones introduced in 2021
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones
Discontinued Samsung Galaxy smartphones
Samsung smartphones | Samsung Galaxy Z Fold 3 | Technology | 675 |
18,866,087 | https://en.wikipedia.org/wiki/HD%20152082 | HD 152082 is an A-type shell star in the southern constellation of Ara. This is a double star with a thirteenth magnitude companion at an angular separation of 6.8″ along a position angle of 329° (as of 2000).
References
External links
HR 6253
Image HD 152082
Ara (constellation)
152082
Double stars
A-type giants
6253
082806
Durchmusterung objects | HD 152082 | Astronomy | 91 |
15,092,456 | https://en.wikipedia.org/wiki/ASReml | ASReml is a statistical software package for fitting linear mixed models using restricted maximum likelihood, a technique commonly used in plant and animal breeding and quantitative genetics as well as other fields. It is notable for its ability to fit very large and complex data sets efficiently, due to its use of the average information algorithm and sparse matrix methods.
It was originally developed by Arthur Gilmour.
ASReml runs on Windows and Linux, and is also available as an add-on to S-PLUS and R.
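ASReml's own model syntax is not reproduced here. As a hedged illustration of what fitting a linear mixed model by REML looks like, the following Python sketch uses the statsmodels package (not ASReml) on a small synthetic data set with a fixed treatment effect and a random block effect; the data, variable names and model are assumptions made purely for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic data: yields for 3 treatments measured in each of 20 blocks.
n_blocks, n_treat = 20, 3
block = np.repeat(np.arange(n_blocks), n_treat)
treatment = np.tile(np.arange(n_treat), n_blocks)
block_effect = rng.normal(0, 2.0, size=n_blocks)[block]        # random block effects
y = 10.0 + 1.5 * treatment + block_effect + rng.normal(0, 1.0, size=block.size)

df = pd.DataFrame({"y": y, "treatment": treatment, "block": block.astype(str)})

# Fit y = fixed treatment effect + random block intercept + residual, by REML.
model = smf.mixedlm("y ~ treatment", data=df, groups=df["block"])
result = model.fit(reml=True)
print(result.summary())
```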
References
External links
ASReml home page
ASReml "Cook book"
Review at Scientific Computing World
Statistical software | ASReml | Mathematics | 123 |
1,699,254 | https://en.wikipedia.org/wiki/Networking%20hardware | Networking hardware, also known as network equipment or computer networking devices, comprises the electronic devices required for communication and interaction between devices on a computer network. Specifically, such devices mediate data transmission in a computer network. Units that are the final receivers of data, or that generate data, are called hosts, end systems or data terminal equipment.
Range
Networking devices include a broad range of equipment, which can be classified as core network components that interconnect other network components; hybrid components that can be found in the core or at the border of a network; and hardware or software components that typically sit at the connection point between different networks.
One of the most common types of networking hardware today is a copper-based Ethernet adapter which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices.
Other networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery.
Taking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints.
Specific devices
Network hardware can be classified by its location and role in the network.
Core
Core network components interconnect other network components.
Gateway: an interface providing compatibility between networks by converting transmission speeds, protocols, codes, or security measures.
Router: a networking device that forwards data packets between computer networks. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node. It works on OSI layer 3. A minimal forwarding-table lookup illustrating this behaviour is sketched after this list.
Switch: a multi-port device that connects devices together at the same or different speeds on a computer network, by using packet switching to receive, process and forward data to the destination device. Unlike less advanced network hubs, a network switch forwards data only to one or multiple devices that need to receive it, rather than broadcasting the same data out of each of its ports. It works on OSI layer 2.
Bridge: a device that connects multiple network segments. It works on OSI layers 1 and 2.
Repeater: an electronic device that receives a signal and retransmits it at a higher level or higher power, or onto the other side of an obstruction, so that the signal can cover longer distances.
Repeater hub: for connecting multiple Ethernet devices together at the same speed, making them act as a single network segment. It has multiple input/output (I/O) ports, in which a signal introduced at the input of any port appears at the output of every port except the original incoming. A hub works at the physical layer (layer 1) of the OSI model and all devices form a single collision domain. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if they detect a collision. Hubs are now largely obsolete, having been replaced by network switches except in very old installations or specialized applications.
Wireless access point
Structured cabling
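A minimal sketch of the forwarding-table lookup performed by a router, as described in the list above: the Python example below does a longest-prefix match against a hand-written table. The prefixes and next-hop addresses are made-up illustrative values, and real routers use specialized data structures and hardware rather than a linear scan.

```python
import ipaddress

# A toy forwarding table: destination prefix -> next hop (illustrative values only).
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("10.1.2.0/24"): "192.0.2.3",
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.254",   # default route
}

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.55"))    # matched by the /24 entry -> 192.0.2.3
print(next_hop("10.9.9.9"))     # only the /8 (and default) match -> 192.0.2.1
print(next_hop("203.0.113.7"))  # falls through to the default route
```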
Hybrid
Hybrid components can be found in the core or border of a network.
Multilayer switch: a switch that, in addition to switching on OSI layer 2, provides functionality at higher protocol layers.
Protocol converter: a hardware device that converts between two different types of transmission, for interoperation.
Bridge router (brouter): a device that works as a bridge and as a router. The brouter routes packets for known protocols and simply forwards all other packets as a bridge would.
Border
Hardware or software components which typically sit on the connection point of different networks (for example, between an internal network and an external network) include:
Proxy server: computer network service which allows clients to make indirect network connections to other network services.
Firewall: a piece of hardware or software put on the network to prevent some communications forbidden by the network policy. A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, that is assumed to not be secure or trusted.
Network address translator (NAT): network service (provided as hardware or as software) that converts internal to external network addresses and vice versa.
Residential gateway: interface between a WAN connection to an Internet service provider and the home network.
Terminal server: connects devices with a serial port to a local area network.
End stations
Other hardware devices used for establishing networks or dial-up connections include:
Network interface controller (NIC): a device connecting a computer to a computer network.
Wireless network interface controller: a device connecting the attached computer to a radio-based computer network.
Modem: device that modulates an analog "carrier" signal (such as sound) to encode digital information, and that also demodulates such a carrier signal to decode the transmitted information. Used (for example) when a computer communicates with another computer over a telephone network.
ISDN terminal adapter (TA): a specialized gateway for ISDN.
Line driver: a device to increase transmission distance by amplifying the signal; used in base-band networks only.
See also
Computer hardware
Data circuit-terminating equipment
List of networking hardware vendors
Network simulation
Node (networking)
Telecommunications equipment
References
External links
USF Explanation of network hardware
Computer networking | Networking hardware | Technology,Engineering | 1,123 |
35,891,416 | https://en.wikipedia.org/wiki/SpiNNaker | SpiNNaker (spiking neural network architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project).
The completed design is housed in 10 19-inch racks, with each rack holding over 100,000 cores. The cards holding the chips are held in 5 blade enclosures, and each core emulates 1,000 neurons. In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. This machine requires about 100 kW from a 240 V supply and an air-conditioned environment.
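The headline figures quoted above follow from straightforward multiplication, collected below for reference (using the decimal convention 1 TB = 10^6 MB):

```latex
\begin{aligned}
57{,}600 \text{ nodes} \times 18 \text{ cores per node} &= 1{,}036{,}800 \text{ cores} \\
57{,}600 \text{ nodes} \times 128 \text{ MB per node} &= 7{,}372{,}800 \text{ MB} \approx 7.4 \text{ TB} \\
1{,}036{,}800 \text{ cores} \times 1{,}000 \text{ neurons per core} &\approx 1.04 \times 10^{9} \text{ neurons}
\end{aligned}
```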
SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project.
On 14 October 2018 the HBP announced that the million core milestone had been achieved.
On 24 September 2019 HBP announced that an 8 million euro grant, that will fund construction of the second generation machine, (called SpiNNcloud) has been given to TU Dresden.
References
Cybernetics
Supercomputers
Computational neuroscience
Computational fields of study
AI accelerators
Computer architecture
Department of Computer Science, University of Manchester
Science and technology in Greater Manchester | SpiNNaker | Technology,Engineering | 321 |
42,920,560 | https://en.wikipedia.org/wiki/Zeoform | Zeoform is a material made from water and cellulose, developed by the Australian company Zeo IP Pty. Polymeric lignocellulosic fibres from industrial biomass are used to produce a structural material suitable for various applications in the industrial sector. Depending on the source material it is non-toxic and biodegradable, and it has been proposed as a replacement for many forms of hard plastics, synthetic compounds and other polymers.
History
Production is based on a process developed in 1897 by the German company M.M.Rotten in Berlin to produce a natural material utilizing cellulose. Almost 100 years later, three material researchers advanced the process and, in 2005, created a company that manufactured artisanal products from the material.
Production
Zeoform is derived from lignocellulosic biomass, such as hemp, cotton, bamboo, sisal, jute, palm, coconut and other cellulose feedstock. It is made without any glues, binders, chemicals or synthetics. The fundamental chemistry (and patented formula) causes a fibrillation (feathering) of cellulose micro-fibres (in water), then physical ‘entanglement’ and hydroxyl bonding through evaporation. Done correctly, it results in a super-strong, highly durable, consistent material that emulates wood & wood composites, resin composites, fibreglass and many hard plastics. Zeoform can be produced with various qualities – from light styrofoam to dense ebony. The material is sustainable, compostable and sequesters carbon.
Applications
Zeoform can be used as a replacement for conventional materials in hundreds of industries, including construction grade flat sheets and curved panels to replace MDF, Masonite, Formica, Corian and other synthetic composites. Zeoform can be sprayed, molded, pressed, laminated or formed using manual and mechanical processes. It can be produced in quantities ranging from small cottage industry to fully automated and robotic mass production.
See also
Similar materials are marketed as Zelfo and Hempstone
Biodegradable plastic
References
Composite materials | Zeoform | Physics | 436 |
28,952,727 | https://en.wikipedia.org/wiki/Aminoacyl%20tRNA%20synthetases%2C%20class%20II | Aminoacyl-tRNA synthetases, class II is a family of proteins. These proteins catalyse the attachment of an amino acid to its cognate transfer RNA molecule in a highly specific two-step reaction. These proteins differ widely in size and oligomeric state, and have a limited sequence homology.
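The highly specific two-step reaction mentioned above is the standard aminoacylation reaction, shown schematically below: the amino acid is first activated with ATP to form an aminoacyl-adenylate, and the aminoacyl group is then transferred to the cognate tRNA.

```latex
\begin{aligned}
\text{amino acid} + \text{ATP} &\longrightarrow \text{aminoacyl-AMP} + \text{PP}_{i} \\
\text{aminoacyl-AMP} + \text{tRNA} &\longrightarrow \text{aminoacyl-tRNA} + \text{AMP}
\end{aligned}
```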
The 20 aminoacyl-tRNA synthetases are divided into two classes, I and II. Class I aminoacyl-tRNA synthetases contain a characteristic Rossman fold catalytic domain and are mostly monomeric. Class II aminoacyl-tRNA synthetases share an anti-parallel beta-sheet fold flanked by alpha-helices, and are mostly dimeric or multimeric, containing at least three conserved regions. However, tRNA binding involves an alpha-helical structure that is conserved between class I and class II synthetases. In reactions catalysed by the class I aminoacyl-tRNA synthetases, the aminoacyl group is coupled to the 2'-hydroxyl of the tRNA, while, in class II reactions, the 3'-hydroxyl site is preferred. The synthetases specific for arginine, cysteine, glutamic acid, glutamine, isoleucine, leucine, methionine, tyrosine, tryptophan and valine belong to class I synthetases; these synthetases are further divided into three subclasses, a, b and c, according to sequence homology. The synthetases specific for alanine, asparagine, aspartic acid, glycine, histidine, lysine, phenylalanine, proline, serine, and threonine belong to class-II synthetases.
Human proteins containing this domain
DARS
DARS2
KARS
NARS
NARS2
References
Protein families
EC 3.1.1 | Aminoacyl tRNA synthetases, class II | Chemistry,Biology | 406 |
72,783,720 | https://en.wikipedia.org/wiki/10%20Cassiopeiae | 10 Cassiopeiae (10 Cas) is a blue-white giant star in the constellation Cassiopeia, about 960 light years away.
10 Cassiopeiae is a B9 giant star. It shows emission lines in its spectrum and is classified as a Be star. It shows slight variations in its brightness, between magnitudes 5.54 and 5.59.
At an age of 218 million years, 10 Cassiopeiae has expanded away from the main sequence after exhausting its core hydrogen and now has a radius about eight times that of the Sun. With an effective temperature of about , it emits nearly a thousand times the luminosity of the Sun.
References
B-type giants
Cassiopeia (constellation)
Cassiopeiae, 10
0007
000144
000531
BD+63 2107
Be stars
Suspected variables | 10 Cassiopeiae | Astronomy | 175 |
64,888,864 | https://en.wikipedia.org/wiki/History%20of%20arcade%20video%20games | An arcade video game is an arcade game where the player's inputs from the game's controllers are processed through electronic or computerized components and displayed to a video device, typically a monitor, all contained within an enclosed arcade cabinet. Arcade video games are often installed alongside other arcade games such as pinball and redemption games at amusement arcades. Up until the late 1990s, arcade video games were the largest and most technologically advanced sector of the video game industry.
The first arcade game, Computer Space, was created by Nolan Bushnell and Ted Dabney, the founders of Atari, Inc., and released in 1971; the company followed up on its success the next year with Pong. The industry grew modestly until the release of Taito's Space Invaders in 1978 and Namco's Pac-Man in 1980, creating a golden age of arcade video games that lasted through about 1983. At this point, saturation of the market with arcade games led to a rapid decline in both the arcade game market and the arcades that supported them. The arcade market began recovering in the mid-1980s, with the help of software conversion kits, new genres such as beat 'em ups, and advanced motion simulator cabinets. There was a resurgence in the early 1990s, with the birth of the fighting game genre with Capcom's Street Fighter II in 1991 and the emergence of 3D graphics, before arcades began declining in the West during the late 1990s. After several traditional companies closed or migrated to other fields, arcades lost much of their relevance in the West but have remained popular in Eastern and Southeastern Asia.
Early arcade games
Since the early 20th century, skee ball and other pin-based games had been a popular arcade game. The first pinball machines had been introduced in the 1930s but gained a reputation as games of chance and had been banned from many venues from the 1940s through the 1960s. Instead, newer coin-operated electro-mechanical games (EM games), classified as games of skill took their place in amusement arcades by the 1960s.
Following the arrival of Sega's EM game Periscope (1966), the arcade industry was experiencing a "technological renaissance" driven by "audio-visual" EM novelty games, establishing the arcades as a healthy environment for the introduction of commercial video games in the early 1970s. In the late 1960s, a college student, Nolan Bushnell, had a part-time job at an arcade where he became familiar with EM games through watching customers play and helping to maintain the machinery, while learning how it worked and developing his understanding of how the game business operated.
Arrival of arcade video games (1971−1977)
While early video games running on computers had been developed as far back as 1950, the first video game to spread beyond a single computer installation, Spacewar!, was developed by students and staff at MIT on a PDP-1 mainframe computer in 1962. As the group that developed it migrated across the country to other schools, they took Spacewar!s source code to run on other mainframe machines at those schools. It inspired two different groups to attempt to develop a coin-operated version of the game.
Around 1970, Nolan Bushnell was invited by a colleague to see Spacewar! running on Stanford University's PDP-6 computer. Bushnell got the idea of recreating the game on a smaller computer, a Data General Nova, connected to multiple coin-operated terminals. He and fellow Ampex employee Ted Dabney, under the company name Syzygy, worked with Nutting Associates to create Computer Space, the first commercial arcade game, with location tests in August 1971 and production starting in November. More than 1300 units of the game were sold, and while not as large of a hit game as hoped, it proved the potential for the coin-operated computer game. At Stanford University, students Bill Pitts and Hugh Tuck used a PDP-11 mainframe to build two prototypes of Galaxy Game, which they demonstrated at the university starting in November 1971, but were unable to turn into a commercial game.
Bushnell got the idea for his next game after seeing a demonstration of a table tennis game on the Magnavox Odyssey, the first home video game console that was based on the designs of Ralph H. Baer. Deciding to go on their own, Bushnell and Dabney left Nutting and reformed their company as Atari Inc., and brought on Allan Alcorn to help design an arcade game based on the Odyssey game. After a well-received trial run of a demo unit at Andy Capp's Tavern in San Jose, California in August 1972, Pong was first released in limited numbers in November 1972 with a wider release by March 1973. Pong was highly successful, with each machine earning over a day, far greater than most other coin-operated machine at the time.
With Pong success, numerous other coin-operated manufacturers, most who were making electro-mechanical games and pinball machines, attempted to capitalize on the success of arcade games; such companies included Bally Manufacturing, Midway Manufacturing, and Williams Electronics, as well as Japanese companies Taito and Sega. Most took to trying to copy the games that Atari had already made with small alterations, leading to a wave of clones. Bushnell, having failed to patent on the idea, considered these competitors "jackals" but rather than seeking legal action, continued to have Atari devise new games. Separately, Magnavox and Sanders Associates, through which Baer had developed the basics of the Odyssey, sued Atari, among the other manufacturers, for patent violations of the basic patents behind the electronic game concepts. Bushnell opted to settle out of court, negotiating for perpetual licensing rights to Baer's patents for Atari as part of the settlement fee, which allowed Atari to pursue the development of additional arcade games and bringing Pong in a home console form, while Magnavox continued legal against the other manufacturers. It is estimated that Magnavox collected over in awards and settlements from these suits over the Baer patents.
By the end of 1974, more than fifteen companies, both in the United States and Japan, were in the development of arcade games. A key milestone was the introduction of microprocessor technology to arcade games with Midway's Gun Fight (an adaptation of Taito's Western Gun as released in Japan), which could be programmed more directly rather than relying on the complex interaction of integrated circuitry (IC) chips.
Video games were still considered to be adult entertainment at this point and, like pinball machines, were treated as games of skill "For Amusement Only" and placed in locations that children were unlikely to frequent, such as bars and lounges. However, the same stigma that pinball machines had attracted in prior decades began to appear around video games. Notably, the release of Death Race in 1976, which involved driving over gremlins on screen, drew criticism in the United States for being too violent, and created the first major debate on violence and video games.
After the "paddle game" trend came to an end around 1975, the arcade video game industry entered a period of stagnation in the "post paddle game era" over the next several years up until 1977.
Golden age of arcade games (1978–1983)
In 1978, Taito released Space Invaders, first in Japan, followed by its North American release. Among the novel gameplay features that drove its popularity, the game was the first to maintain a persistent high score and, though simplistic, used an interactive audio system that sped up with the pace of the game. The game was extremely popular in both regions. In Japan, specialty arcades were established that featured only Space Invaders machines, and Taito estimated that it had sold over 100,000 machines in the country alone by the end of 1978, while in the United States, over 60,000 machines had been sold by 1980. The game was considered the best-selling video game and highest-grossing "entertainment product" of its time. Many arcade games since then have been based on "the multiple life, progressively difficult level paradigm" established by Space Invaders.
Space Invaders led to a string of popular arcade games over the next five years that are considered the "Golden Years" for arcade games. Among these titles include:
Asteroids (Atari, 1979)
Galaxian (Namco, 1979)
Berzerk (Stern Electronics, 1980)
Missile Command (Atari, 1980)
Pac-Man (Namco, 1980)
Rally-X (Namco, 1980)
Centipede (Atari, 1980)
Defender (Williams Electronics, 1981)
Donkey Kong (Nintendo, 1981)
Frogger (Konami, 1981)
Scramble (Konami, 1981)
Zaxxon (Sega, 1981)
Of these, Pac-Man had an even larger impact on popular culture when it arrived in 1980; the game itself was popular but people took to Pac-Man as a mascot, leading to merchandise and an animated series of the same name in 1982. The game also inspired the Gold-certified song "Pac-Man Fever" by Buckner & Garcia. Pac-Man sold about 400,000 cabinets overall by 1982. Donkey Kong was also significant as not only being the first recognized platform game but also bringing a cute, more fantastical concept that was well-founded in Japanese culture but new to Western regions, compared to prior arcade games. Western audiences became accustomed to this level of abstraction, making later Japanese-made arcade games and titles for the Nintendo Entertainment System easily accepted by these players.
These games, along with numerous others, created video game arcades around the world. The construction boom of shopping malls in the United States during the 1970s and 1980s gave rise to dedicated arcade storefronts such as Craig Singer's Tilt Arcades. Other arcades were featured in bowling alleys and skating rinks, as well as standalone facilities, such as Bushnell's chain of Chuck E. Cheese pizzerias and arcades. Time reported in January 1982 that there were over 13,000 arcades in the United States, with the most popular machines bringing in over $400 in profit each day. Twin Galaxies, an arcade opened by Walter Day in Ottumwa, Iowa, became known for tracking the high scores of many these top video games, and in 1982, Life featured the arcade, Day, and several of the top players at the time in a cover story, bringing the idea of a professional video game player to public consciousness. The formation of video game tournaments around arcade games in the 1980s was the predecessor of modern esports.
Arcade machines also found their way into any area where they could be placed and would be able to draw children or young adults, such as supermarkets and drug stores. The Golden Age was also buoyed by the growing home console market which had just entered the second generation with the introduction of game cartridges. Atari had been able to license Space Invaders for the Atari 2600 which became the system's killer application. Similarly, Coleco beat Atari in licensing Donkey Kong from Nintendo, and among other ports, included their conversion of the game as a pack-in for the ColecoVision, which helped to boost sales of the console and compete against the Atari 2600. Licensing of arcade hits became a major business for the home markets, which in turn spurred further growth in the arcade field. By 1981, the US arcade game market had an estimated value of .
Jonathan Greenberg of Forbes predicted in early 1981 that Japanese companies would eventually dominate the North American video game industry, as American video game companies were increasingly licensing products from Japanese companies, who in turn were opening up North American branches. By 1982–1983, Japanese manufacturers had captured a large share of the North American arcade market, which Gene Lipkin of Data East USA partly attributed to Japanese companies having more finances to invest in new ideas.
End of the golden age (1984)
Though 1982 was recognized as the height of success of the video game arcade, many in the industry knew the success could not last too long. Walter Day had commented in 1982 that there were "too many arcades" by that point for what was really needed. Additionally, players required novelty and new games, and thus required older games to be discontinued and replaced with new ones, but not all new games were as successful as those at the height of the Golden Age. Knowing that players were seeking more challenge, game manufacturers designed the newer games to be harder, but this caused less-skilled mainstream players to be turned away.
Coupled with this was increased concern about the possible harmful impacts of video games on children. Arcades had taken steps to present their venues as "family fun centers" to alleviate some concerns, but parents and activists still saw video games as potentially addictive and as leading to aggressive behavior. The U.S. Surgeon General C. Everett Koop spoke in November 1982 about the potential for video game addiction among young children, as part of general moral concerns around youth in the early 1980s. These fears affected not only video game arcades but also other places where youth could hang out without adult supervision, such as shopping malls and skating rinks. There were also reports of increased crime associated with arcades due to the lack of adult supervision. By the mid-1980s, many cities and towns had implemented bans on arcades or limited businesses to only a few machines. Several of these bans were challenged by arcade owners on First Amendment grounds, asserting that video games merit protection as an art form, but the bulk of these cases ruled against arcades, favoring local regulations that limited conduct rather than restricted speech. Further impacting the arcades, the rising popularity of home consoles threatened them, since players did not have to repeatedly spend money to play at arcades when they could play at home. With the 1983 video game crash, which drastically affected the home console market, the arcade market also felt the impact, as it was already waning from oversaturation, the loss of players, and the moral concerns over video games, all compounded by the early 1980s recession.
Arcade games became relatively dormant in the United States for a while, declining from the peak financial success of the golden age. The US arcade industry had declined from a peak of in 1982 down to in 1984. The US arcade video game market was sluggish in 1984, but Sega president Hayao Nakayama was confident that good games "can surely be sold in the U.S. market, if done adequately." Sega announced plans to open a new US subsidiary for early 1985, which Game Machine magazine predicted would "most probably enliven" the American video game business. Despite the downturn in 1984, John Lotz of Betson Pacific Distributing predicted that another arcade boom could potentially happen by the early 1990s.
Market recovery (1985−1990)
The arcade industry began recovering in 1985 and made a comeback by 1986, with the arcade industry experiencing several years of growth during the late 1980s. A major factor in its recovery was the arrival of software conversion kit systems, such as Sega's Convert-a-Game system, the Atari System 1, and the Nintendo VS. System, the latter being the Western world's introduction to the Famicom (NES) hardware in 1984, prior to the official release of the NES console; the success of the VS. System in arcades was instrumental to the release and success of the NES in North America. Other major factors that helped revive arcades were the arrival of popular martial arts action games (including fighting games such as Karate Champ and Yie Ar Kung-Fu, and beat 'em ups such as Kung-Fu Master and Renegade), advanced motion simulator games (such as Sega's "taikan" games including Hang-On, Space Harrier and Out Run), and the resurgence of sports video games (such as Track & Field, Punch-Out and Tehkan World Cup).
By 1985, the arcade industry was largely dominated by Japanese manufacturers, with the number of American manufacturers having declined. By 1988, annual US arcade video game revenue had increased to . However, competition from new home consoles, like the Nintendo Entertainment System (NES) that had revitalized the home video game industry, were drawing players away from the arcades. After the NES took off in North America, home consoles kept many children at home and under parental supervision, keeping them away from arcades. The US arcade video game market experienced another decline from 1989. RePlay magazine partly attributed the decline to the rise of home consoles following the success of the NES. In Japan, on the other hand, the arcade market grew while home video game sales declined. Overall, the worldwide arcade market continued to grow, remaining larger than the console market.
Various technological advances were made in arcades during this era. Sega's Hang-On, designed by Yu Suzuki and running on the Sega Space Harrier hardware, was the first of Sega's 16-bit "Super Scaler" arcade system boards that pushed pseudo-3D sprite-scaling at high frame rates. Hang-On also used a motion-controlled arcade cabinet that included a mounted motorbike-like control unit on a hydraulic system, which the player used to control the game by tilting their body to the left or right, two decades before motion controls became popular on consoles. This game began the "taikan" ("body sensation") trend, the use of motion simulator arcade cabinets in many arcade games of the late 1980s, such as Sega's Space Harrier (1985), Out Run (1986) and After Burner (1987). SNK also launched its Neo Geo line in 1990 to try to bridge the arcade and home console gap. The launch consisted of the Neo Geo Multi Video System (MVS) arcade system and the Neo Geo Advanced Entertainment System (AES). Both units shared the same game cartridges, with the MVS able to support up to six different games at the same time selectable by the player. Further, players could use a memory card to transfer save game information from the MVS to their home AES and back. Arcade systems dedicated to flat-shaded 3D polygon graphics also began emerging, with the Namco System 21 used for Winning Run (1988) and the related Atari Games hardware for Hard Drivin' (1989), as well as the Taito Air System used for amateur flight simulations such as Top Landing (1988) and Air Inferno (1990).
One format of arcade video games that briefly expanded during this period were quiz-style or trivia-based arcade games. Besides the other avenues of technical advances, the hardware for arcade machines had shrunk small enough that the core electronics could be fitted into cocktail-style cabinets or half-height bartop or countertop versions, making them ideal for placement in more adult venues. Coupled with waning interest in traditional arcade games due to the 1983 video game crash and the rising popularity of the board game Trivial Pursuit first introduced in 1981, several manufacturers turned to quiz style games to be sold to bars in these smaller formats, including more risque titles. Manufacturers also saw similar opportunities to promote these games for family-oriented entertainment and potential use as educational tools. The rush of arcade-based trivia games waned around 1986 as the general interest in trivia waned, but arcades and other entertainment businesses managed to find ways to keep trivia-style games going within arcades since, often based on multiplayer trivia challenges played out on multiple screens. These trivia games also influenced the creation of trivia games on consoles and computers such as the You Don't Know Jack series of games and Trivia HQ.
Resurgence and 3D revolution (1991−1999)
Fighting game boom
Arcade games gained a resurgence with the introduction of Street Fighter II by Capcom in 1991. The original Street Fighter in 1987 had already introduced a fighting game format that allowed two players to challenge each other, but its characters were generic combatants. Street Fighter II introduced modern elements to the genre and created the fundamental one-on-one fighting game template, featuring numerous characters with backgrounds and personalities to select from and a wide range of special moves to use. Street Fighter II sold more than 200,000 cabinets worldwide, and drew other arcade manufacturers to make similar fighting games, including Mortal Kombat in 1992, Virtua Fighter in 1993, and Tekken in 1994. The period was referred to as a "boom" or "renaissance" for the arcade industry, with the success of Street Fighter II drawing comparisons to that of arcade golden age blockbusters Space Invaders and Pac-Man.
By 1993, arcade games in the United States were generating an annual revenue of , larger than both the home video game market () and the film box office (). Worldwide arcade video game revenue also maintained its lead over consoles. In 1993, Electronic Games noted that when "historians look back at the world of coin-op during the early 1990s, one of the defining highlights of the video game art form will undoubtedly focus on fighting/martial arts themes", which it described as "the backbone of the industry" at the time. Mortal Kombat, however, led to further controversy over violence in video games due to its gruesome-looking finishing moves. When the game was ported to home consoles in 1993, it led to U.S. Congressional hearings on violence in video games, which resulted in the formation of the Entertainment Software Rating Board (ESRB) in 1994 to avoid government oversight of video games. Despite this, fighting games remained the dominant style of game in arcades through the 1990s.
3D revolution
Another factor that contributed to the arcade "renaissance" was increasingly realistic games, notably the "3D Revolution" in which arcade games made the transition from 2D and pseudo-3D graphics to true real-time 3D polygon graphics, largely driven by a technological arms race between Sega and Namco. The Namco System 21, originally developed for racing games in the late 1980s, was adapted by Namco for new 3D action games in the early 1990s, such as the rail shooters Galaxian 3 (1990) and Solvalou (1991). Sega responded with the Sega Model 1, which further popularized 3D polygons with Sega AM2 games including Virtua Racing (1992) and the fighting game Virtua Fighter (1993), which popularized 3D polygon human characters. Namco then responded with the Namco System 22, capable of 3D polygon texture mapping and Gouraud shading, used for Ridge Racer (1993). The Sega Model 2 took it further with 3D polygon texture filtering, used from 1994 for racers such as Daytona USA, fighting games such as Virtua Fighter 2, and light gun shooters such as Virtua Cop. Namco responded with 3D fighters such as Tekken (1994) and 3D light gun shooters such as Time Crisis (1995), the latter running on the Super System 22.
Other arcade manufacturers were also producing 3D arcade hardware by this time, including Midway, Konami, and Taito, as well as Mesa Logic with the light gun shooter Area 51 (1995). The new, more realistic 3D games gained considerable popularity in arcades, especially in family-friendly fun centers. Virtual reality (VR) also began appearing in arcades during the early 1990s. The Amusement & Music Operators Association (AMOA) in the United States held its second-largest show ever in 1994, behind only the 1982 show.
Home console competition
Around the mid-1990s, the fifth-generation home consoles, the Sega Saturn, PlayStation, and Nintendo 64, also began offering true 3D graphics, along with improved sound and better 2D graphics than the previous fourth generation of video game consoles. By 1995, personal computers followed, with 3D accelerator cards. While arcade systems such as the Sega Model 3 remained considerably more advanced than home systems in the late 1990s, the technological advantage that arcade games had, in their ability to customize and use the latest graphics and sound chips, slowly began narrowing, and the convenience of home games eventually caused a decline in arcade gaming. Sega's sixth-generation console, the Dreamcast, could produce 3D graphics comparable to the Sega NAOMI arcade system in 1998. Sega went on to produce more powerful arcade systems, such as the Sega NAOMI Multiboard and Sega Hikaru in 1999 and the Sega NAOMI 2 in 2000, before eventually stopping the manufacture of expensive proprietary arcade system boards; its subsequent arcade boards were based on more affordable commercial console or PC components.
During the late 1990s, arcade video games declined, and console games overtook arcade video games in revenue for the first time around 1997–1998; up until then, the arcade video game market had generated larger revenue than consoles. In 1997, Konami began releasing a number of music-based games that used unique peripherals to control the game in time to music, including Beatmania and GuitarFreaks, culminating in the 1998 release of Dance Dance Revolution (DDR) in Japan, a new style of arcade game that used a dance pad and required players to tap their feet on the appropriate squares on the pad in time to notes on screen in synchronization with popular music. DDR was released in the West in 1999, and while it did not initially enjoy the same popularity there as in Japan, it led the trend of rhythm games in arcades.
Regional divergences (2000−2019)
Worldwide arcade video game revenue stabilized in the early 2000s after years of declining revenue in the late 1990s, during which time it had been surpassed in revenue by the console, handheld and PC game industries. Arcade video games continue to be a thriving industry in Eastern Asian countries such as Japan and China, where arcades are widespread across the region.
United States
Since the 2000s, arcade games and arcades in the United States have generally had to adapt as niche markets to remain profitable against the allure of home consoles. Most arcades were unable to sustain themselves on arcade video games alone, and have since added back redemption games that award prizes along with non-arcade attractions, as at chains such as Dave & Buster's. Arcade games were developed to try to create experiences that could not be had on home consoles, such as motion simulator games, but their expense and space requirements were difficult to justify over more traditional games. The US market has experienced a slight resurgence, with the number of video game arcades across the nation increasing from 2,500 in 2003 to 3,500 in 2008, though this is significantly less than the 10,000 arcades of the early 1980s. As of 2009, a successful arcade game usually sells around 4,000 to 6,000 units worldwide. Since around 2018, arcades specializing in virtual reality games have also become popular, allowing players to experience these games without the hardware investment in VR headsets.
The relative simplicity yet solid gameplay of many of these early games has inspired a new generation of fans who can play them on mobile phones or with emulators such as MAME. Some classic arcade games are reappearing in commercial settings, such as Namco's Ms. Pac-Man/Galaga: Class of 1981 two-in-one game, or integrated directly into controller hardware (joysticks) with replaceable flash drives storing game ROMs. Arcade classics have also been reappearing as mobile games, with Pac-Man in particular selling over 30 million downloads in the United States by 2010. Arcade classics also began to appear on replica multi-game arcade machines for home users, using emulation on modern hardware.
Japan
In the Japanese gaming industry, arcades have remained popular since the 2000s. Much of this consistent popularity and growth is attributed to several factors, such as continued support for innovation and the fact that the developers of the machines also own the arcades. Additionally, Japanese arcade machines are notably more distinctive than US machines, offering experiences that players could not get at home, a constant throughout Japanese arcade history. As of 2009, arcades generated US$6 billion of Japan's US$20 billion gaming market, representing its largest sector, followed by home console games at US$3.5 billion and mobile games at US$2 billion. In 2005, for example, arcade ownership and operation reportedly accounted for a majority of Namco's business. With considerable withdrawal from the arcade market by companies such as Capcom, Sega became the strongest player in the arcade market, with a 60% market share in 2006. Despite the global decline of arcades, Japanese companies hit record revenue for three consecutive years during this period. However, due to the country's economic recession, the Japanese arcade industry has also been steadily declining, from ¥702.9 billion (US$8.7 billion) in 2007 to ¥504.3 billion (US$6.2 billion) in 2010. Revenue was estimated at ¥470 billion in 2013.
The layout of an arcade in Japan greatly differs from that of an arcade in America. Japanese arcades are multi-floor complexes (often taking up entire buildings), split into sections by game type. The ground level typically hosts physically demanding games that draw crowds of onlookers, like music rhythm games. Another floor is often a maze of multi-player games and battle simulators. These multi-player games often have online connectivity tracking the rankings and reputation of each player; top players are revered and respected in arcades. The top floor of the arcade is typically for rewards, where players can trade credits or tickets for prizes.
In the Japanese market, network and card features introduced by Virtua Fighter 4 and World Club Champion Football, and novelty cabinets such as Gundam Pod machines, have revitalized arcade profitability in Japan. The reasons for the continued popularity of arcades in comparison to the West are high population density and an infrastructure similar to casino facilities.
Former rivals in the Japanese arcade industry, Konami, Taito, Bandai Namco Entertainment and Sega, collaborated during the period. Approaching the end of the 2010s, the typical business of the Japanese arcade shifted further as arcade video games were less predominant, accounting for only 13% of revenue in arcades in 2017, while redemption games like claw crane machines were the most popular. By 2019, only about four thousand arcades remained in Japan, down from the height of 22,000 in 1989.
COVID-19 pandemic and decline (2020–present)
The COVID-19 pandemic from March 2020 onward financially harmed many arcades that were still operating. In Japan, arcades did not qualify for Japanese government funding to recover lost revenue. In the wake of the pandemic, several long-standing arcades were forced to close; notably, Sega sold off most of its arcade business. The financial analysis firm Teikoku Databank estimated in 2024 that over 8,000 arcades had closed in the previous decade, with arcade video games being shifted out in favor of redemption games. Large game companies view the remaining arcade businesses "as a rapidly sinking ship" and regard future investment in arcade titles as "fruitless". The decline was felt most strongly among gambling-oriented games such as pachislot. A UK arcade owner described a similar situation there, saying that "All arcades are either closed or suffering hardships."
See also
History of mobile games
History of online games
History of video games
List of arcade video games
References
Arcade video games
Arcade
Arcade | History of arcade video games | Technology | 6,395 |
51,847,293 | https://en.wikipedia.org/wiki/CO2%20fertilization%20effect |
The CO2 fertilization effect or carbon fertilization effect causes an increased rate of photosynthesis while limiting leaf transpiration in plants. Both processes result from increased levels of atmospheric carbon dioxide (CO2). The carbon fertilization effect varies depending on plant species, air and soil temperature, and availability of water and nutrients. Net primary productivity (NPP) might positively respond to the carbon fertilization effect, although evidence shows that enhanced rates of photosynthesis in plants due to CO2 fertilization do not directly enhance all plant growth, and thus carbon storage. The carbon fertilization effect has been reported to be the cause of 44% of gross primary productivity (GPP) increase since the 2000s. Earth System Models, Land System Models and Dynamic Global Vegetation Models are used to investigate and interpret vegetation trends related to increasing levels of atmospheric CO2. However, the ecosystem processes associated with the CO2 fertilization effect remain uncertain and therefore are challenging to model.
Terrestrial ecosystems have reduced atmospheric CO2 concentrations and have partially mitigated climate change effects. The response by plants to the carbon fertilization effect is unlikely to significantly reduce atmospheric CO2 concentration over the next century due to the increasing anthropogenic influences on atmospheric CO2. Earth's vegetated lands have shown significant greening since the early 1980s largely due to rising levels of atmospheric CO2.
Theory predicts the tropics to have the largest uptake due to the carbon fertilization effect, but this has not been observed. The amount of uptake from fertilization also depends on how forests respond to climate change, and if they are protected from deforestation.
Changes in atmospheric carbon dioxide may reduce the nutritional quality of some crops, with for instance wheat having less protein and less of some minerals. Common food crops could see a reduction of 3 to 17% in protein, iron, and zinc content.
Mechanism
Through photosynthesis, plants use CO2 from the atmosphere, water from the ground, and energy from the sun to create sugars used for growth and fuel. While using these sugars as fuel releases carbon back into the atmosphere (photorespiration), growth stores carbon in the physical structures of the plant (i.e. leaves, wood, or non-woody stems). With about 19 percent of Earth's carbon stored in plants, plant growth plays an important role in storing carbon on the ground rather than in the atmosphere. In the context of carbon storage, growth of plants is often referred to as biomass productivity. This term is used because researchers compare the growth of different plant communities by their biomass, the amount of carbon they contain.
Increased biomass productivity directly increases the amount of carbon stored in plants. Because researchers are interested in carbon storage, they are also interested in where most of the biomass is found, whether in individual plants or across an ecosystem. Plants first use their available resources for survival and to support the growth and maintenance of the most important tissues, like leaves and fine roots, which have short lives. With more resources available, plants can grow more permanent but less necessary tissues, like wood.
If the air surrounding plants has a higher concentration of carbon dioxide, they may be able to grow better, store more carbon, and store that carbon in more permanent structures like wood. Evidence has shown this occurring for a few different reasons. First, plants that were otherwise limited by carbon or light availability benefit from a higher concentration of carbon. Another reason is that plants are able to use water more efficiently because of reduced stomatal conductance. Plants experiencing higher CO2 concentrations may benefit from a greater ability to gain nutrients from mycorrhizal fungi in the sugar-for-nutrients transaction. The same interaction may also increase the amount of carbon stored in the soil by mycorrhizal fungi.
Observations and trends
From 2002 to 2014, plants appear to have gone into overdrive, pulling more CO2 out of the air than they had done before. The result was that the rate at which CO2 accumulated in the atmosphere did not increase during this time period, although previously it had grown considerably in concert with growing greenhouse gas emissions.
A 1993 review of scientific greenhouse studies found that a doubling of CO2 concentration would stimulate the growth of 156 different plant species by an average of 37%. Response varied significantly by species, with some showing much greater gains and a few showing a loss. For example, a 1979 greenhouse study found that with doubled CO2 concentration the dry weight of 40-day-old cotton plants doubled, but the dry weight of 30-day-old maize plants increased by only 20%.
In addition to greenhouse studies, field and satellite measurements attempt to understand the effect of increased CO2 in more natural environments. In free-air carbon dioxide enrichment (FACE) experiments, plants are grown in field plots and the CO2 concentration of the surrounding air is artificially elevated. These experiments generally use lower CO2 levels than the greenhouse studies. They show lower gains in growth than greenhouse studies, with the gains depending heavily on the species under study. A 2005 review of 12 experiments at 475–600 ppm showed an average gain of 17% in crop yield, with legumes typically showing a greater response than other species and C4 plants generally showing less. The review also stated that the experiments have their own limitations: the studied CO2 levels were lower, and most of the experiments were carried out in temperate regions. Satellite measurements found increasing leaf area index for 25% to 50% of Earth's vegetated area over the past 35 years (i.e., a greening of the planet), providing evidence for a positive CO2 fertilization effect.
Depending on the environment, there are differential responses to elevated atmospheric CO2 between major 'functional types' of plant, such as C3 and C4 plants, or more or less woody species, which has the potential among other things to alter competition between these groups. Increased CO2 can also lead to increased carbon-to-nitrogen ratios in the leaves of plants or in other aspects of leaf chemistry, possibly changing herbivore nutrition. Studies show that doubled concentrations of CO2 will produce an increase in photosynthesis in C3 plants but not in C4 plants. However, C4 plants have also been shown to persist in drought better than C3 plants.
Experimentation by CO2 enrichment
The effects of CO2 enrichment can be most simply attained in a greenhouse. However, for experimentation, the results obtained in a greenhouse are doubted because it introduces too many confounding variables. Open-air chambers have been similarly doubted, with some critiques attributing, for example, a decline in mineral concentrations found in these CO2-enrichment experiments to constraints put on the root system. The current state of the art is the FACE methodology, in which CO2 is released directly in the open field. Even then, there are doubts over whether the results of FACE in one part of the world apply to another.
Free-Air CO2 Enrichment (FACE) experiments
The ORNL conducted FACE experiments in which CO2 levels were increased above ambient levels in forest stands. These experiments showed:
Increased root production stimulated by increased CO2, resulting in more soil carbon.
An initial increase of net primary productivity, which was not sustained.
Faster decline in nitrogen availability in CO2-enriched forest plots.
Change in plant community structure, with minimal change in microbial community structure.
Enhanced CO2 cannot significantly increase the leaf carrying capacity or leaf area index of an area.
FACE experiments have been criticized as not being representative of the entire globe. These experiments were not meant to be extrapolated globally. Similar experiments are being conducted in other regions such as in the Amazon rainforest in Brazil.
Pine trees
Duke University did a study in which a loblolly pine plantation was dosed with elevated levels of CO2. The studies showed that the pines did indeed grow faster and stronger. They were also less prone to damage during ice storms, which is a factor that limits loblolly growth farther north. The forest did relatively better during dry years. The hypothesis is that the limiting factors in the growth of the pines are nutrients such as nitrogen, which is in deficit on much of the pine land in the Southeast. In dry years, however, the trees do not bump up against those factors, since they are growing more slowly because water is the limiting factor. When rain is plentiful, trees reach the limits of the site's nutrients and the extra CO2 is not beneficial. Most forest soils in the Southeastern region are deficient in nitrogen and phosphorus as well as trace minerals. Pine forests often sit on land that was used for cotton, corn or tobacco. Since these crops depleted the originally shallow and infertile soils, tree farmers must work to improve the soil.
Impacts on human nutrition
See also
Effects of climate change on agriculture
References
External links
4. The CO2 fertilization effect: higher carbohydrate production and retention as biomass and seed yield
CO2 fertilization
Atmosphere of Earth
Carbon dioxide
Greenhouse gases
Mineral deficiencies | CO2 fertilization effect | Chemistry,Environmental_science | 1,816 |
35,580,791 | https://en.wikipedia.org/wiki/Valperinol | Valperinol (INN; GA 30-905) is a drug which acts as a calcium channel blocker. It was patented as a possible sedative, antiepileptic, and/or antiparkinsonian agent, but was never marketed.
References
Ethers
1-Piperidinyl compounds | Valperinol | Chemistry | 67 |
21,319,714 | https://en.wikipedia.org/wiki/FleetBroadband | FleetBroadband is maritime satellite internet, telephony, SMS texting, and ISDN network for ocean-going vessels using portable domed terminal antennas.
These antennas and corresponding indoor controllers are used to connect phones and laptop computers from sailing vessels to the Internet. All antennas require line-of-sight (LOS) to one of three geosynchronous orbit satellites, thus allowing the terminal to be used on land as well.
Details
The FleetBroadband network was developed by Inmarsat and is composed of three geosynchronous orbiting satellites known as I-4s that allow contiguous global coverage, except for the poles. FleetBroadband systems installed on vessels may travel from ocean to ocean without human interaction. Line-of-sight to the I-4 satellites is required for connectivity, which can be achieved even in rough rolling seas. Since the FleetBroadband network uses the L band, it is more resistant to rain fade than VSAT or C Band systems.
The FleetBroadband service was modeled after terrestrial Internet services where IP (Internet Protocol)-based traffic dominated over ISDN and other earlier communication protocols.
Terminals
There are three terminal antenna types available: the FB150 antenna (291 × 275 mm), commercially launched in 2009, is capable of 150 kbit/s; the FB250 antenna (329 × 276 mm) is capable of 284 kbit/s; and the FB500 antenna (605 × 630 mm) is capable of up to 432 kbit/s. The latter two were commercially launched in 2007. Current manufacturers of FleetBroadband systems include Thrane & Thrane (Sailor Systems), Wideye (Skipper), KVH, and JRC.
See also
SES Broadband for Maritime
Stratos Global Corporation, makers of AmosConnect
References
External links
Inmarsat FleetBroadband website
Satellite telephony
Satellite Internet access
Maritime communication | FleetBroadband | Technology | 396 |
46,177,355 | https://en.wikipedia.org/wiki/S%20Microscopii | S Microscopii is a star in the constellation Microscopium. It is a red giant star of spectral type M3e-M5.5 that is also a Mira variable, with an apparent magnitude ranging between 7.4 and 14.8 over 210 days. The Astronomical Society of Southern Africa in 2003 reported that observations of S Microscopii were very urgently needed as data on its light curve was incomplete.
References
Microscopium
M-type giants
Mira variables
Microscopii, S
Durchmusterung objects
204045
102096
Emission-line stars | S Microscopii | Astronomy | 122 |
70,620,578 | https://en.wikipedia.org/wiki/HD%2020104 | HD 20104 (HR 967) is a visual binary in the northern circumpolar constellation Camelopardalis. The system has a combined apparent magnitude of 6.41, making it near naked eye visibility. When resolved in a large telescope, HD 20104 appears to be a pair of 7th magnitude A-type main-sequence stars with a separation of about . Located approximately 550 light years away, the system is approaching the Sun with a heliocentric radial velocity of .
The system's stars have masses twice that of the Sun and effective temperatures ranging from 8,100 to 8,700 K, typical of stars of their type. The primary radiates at , making it overluminous for its class, and spins with a projected rotational velocity of . HD 20104 has an age of 313 million years.
References
Camelopardalis
A-type main-sequence stars
Binary stars
020104
0967
BD+65 388
015309 | HD 20104 | Astronomy | 195 |
39,338,633 | https://en.wikipedia.org/wiki/Nested%20triangles%20graph | In graph theory, a nested triangles graph with n vertices is a planar graph formed from a sequence of n/3 triangles, by connecting pairs of corresponding vertices on consecutive triangles in the sequence. It can also be formed geometrically, by gluing together n/3 − 1 triangular prisms on their triangular faces.
This graph, and graphs closely related to it, have been frequently used in graph drawing to prove lower bounds on the area requirements of various styles of drawings.
Polyhedral representation
The nested triangles graph with two triangles is the graph of the triangular prism, and the nested triangles graph with three triangles is the graph of the triangular bifrustum.
More generally, because the nested triangles graphs are planar and 3-vertex-connected, it follows from Steinitz's theorem that they all can be represented as convex polyhedra.
Another geometric representation of these graphs may be given by gluing triangular prisms end-to-end on their triangular faces; the number of nested triangles is one more than the number of glued prisms. However, using right prisms, this gluing process will cause the rectangular faces of adjacent prisms to be coplanar, so the result will not be strictly convex.
Area lower bounds for graph drawings
The nested triangles graph was named by , who used it to show that drawing an n-vertex planar graph in the integer lattice (with straight line-segment edges) may require a bounding box of size at least n/3 × n/3. In such a drawing, no matter which face of the graph is chosen to be the outer face, some subsequence of at least n/6 of the triangles must be drawn nested within each other, and within this part of the drawing each triangle must use two rows and two columns more than the next inner triangle. If the outer face is not allowed to be chosen as part of the drawing algorithm, but is specified as part of the input, the same argument shows that a bounding box of size 2n/3 × 2n/3 is necessary, and a drawing with these dimensions exists.
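To make the arithmetic of this bound explicit (a restatement of the argument above, not an additional result): if some n/6 of the triangles are nested and each consumes two more grid rows and two more grid columns than the triangle nested inside it, then the width W and height H of any such drawing satisfy

```latex
W \;\ge\; 2\cdot\frac{n}{6} \;=\; \frac{n}{3},
\qquad
H \;\ge\; 2\cdot\frac{n}{6} \;=\; \frac{n}{3},
```

so the bounding box must contain at least (n/3) × (n/3) = n²/9 grid cells.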
For drawings in which the outer face may be freely chosen, the area lower bound of may not be tight.
showed that this graph, and any graph formed by adding diagonals to its quadrilaterals, can be drawn within a box of dimensions n/3 × 2n/3. When no extra diagonals are added the nested triangles graph itself can be drawn in even smaller area, approximately n/3 × n/2. Closing the gap between the 2n²/9 upper bound and the n²/9 lower bound on drawing area for completions of the nested triangle graph remains an open problem.
Variants of the nested triangles graph have been used for many other lower bound constructions in graph drawing, for instance on area of rectangular visibility representations, area of drawings with right angle crossings or relative area of planar versus nonplanar drawings.
References
Planar graphs
Parametric families of graphs | Nested triangles graph | Mathematics | 625 |
55,485,184 | https://en.wikipedia.org/wiki/ATX-II | ATX-II, also known as neurotoxin 2, Av2, Anemonia viridis toxin 2 or δ-AITX-Avd1c, is a neurotoxin derived from the venom of the sea anemone Anemonia sulcata. ATX-II slows down the inactivation of different voltage-gated sodium channels, including Nav1.1 and Nav1.2, thus prolonging action potentials.
Sources
ATX-II is the main component of the venom of the Mediterranean snakelocks sea anemone, Anemonia sulcata. ATX-II is produced by the nematocysts in the sea anemone's tentacles, and the anemone uses this venom to paralyze its prey.
Etymology
"ATX-II" is an acronym for "anemone toxin".
Chemistry
Structure
ATX-II is a protein comprising 47 amino acids crosslinked by three disulfide bridges. The molecular mass of the protein is 4.94 kDa (calculated with ExPASy ProtParam).
Family and homology
ATX-II belongs to the sea anemone neurotoxin family. Purification studies of ATX-II and the two other sea anemone neurotoxins, I and III, have revealed the polypeptide nature of these toxins. Toxins I and II are very potent paralyzing toxins that act on crustaceans, fish and mammals and have cardiotoxic and neurotoxic effects. Toxin III has been shown to cause muscular contraction with subsequent paralysis in the crab Carcinus maenas. All three toxins are highly homologous and block neuromuscular transmission in crabs.
Four other sea anemone toxins purified from Condylactis aurantiaca show close sequence similarities with toxins I, II and III of Anemonia sulcata. The effect of these different toxins on Carcinus maenas is visually indistinguishable, namely cramp followed by paralysis and death. However, their mode of action differs. Toxin IV of Condylactis aurantiaca causes a repetitive firing of the excitatory axon for several minutes, resulting in muscle contraction without causing a detectable change in the amplitude of the excitatory junction potentials (EJPs). In contrast, the application of Toxin II from Anemonia sulcata results in an increase of the EJPs of up to 40 mV, causing large action potentials at the muscle fibers. Other toxins with a similar mode of action to ATX-II are α-scorpion toxins. Although both sea anemone and α-scorpion toxins bind to common overlapping elements on the extracellular surface of sodium channels, they belong to distinct families and share no sequence homology. The toxins AFT-II (from Anthopleura fuscoviridis) and ATX-II differ by only one amino acid, L36A, and the protein sequence of BcIII (from Bunodosoma caissarum) is 70% similar to ATX-II.
Target
ATX-II is highly potent at voltage-gated sodium channel subtypes 1.1 and 1.2 (Nav1.1 and Nav1.2), with an EC50 of approximately 7 nM when tested in human embryonic kidney 293 cell lines. Moreover, studies suggest that ATX-II interacts with glutamic acid residues (Glu-1613 and Glu-1616 in Nav1.2) on the extracellular loop between the third and fourth transmembrane segments (S3-S4) of domain IV on the alpha-subunit of the neuronal channel Nav1.2 in rats.
The KD of type IIa Na+ channels for ATX II is 76 ± 6 nM. In small and large dorsal root ganglion cells mainly Nav1.1, Nav1.2 and Nav1.6 are sensitive to ATX-II. The binding of the toxin can only occur when the sodium channel is open.
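To illustrate what a dissociation constant of this magnitude implies, the Python sketch below applies the standard single-site binding relation, fraction bound = [toxin] / ([toxin] + KD), with KD = 76 nM. This is a generic equilibrium-binding illustration and a deliberate simplification, not a calculation from the cited studies; among other things it ignores the state dependence of binding noted above.

```python
def fraction_bound(concentration_nM, kd_nM=76.0):
    """Single-site (Langmuir) binding: fraction of channels occupied at equilibrium."""
    return concentration_nM / (concentration_nM + kd_nM)

for c in (7.6, 76.0, 760.0):  # illustrative toxin concentrations in nM
    print(f"{c:7.1f} nM -> {fraction_bound(c):.0%} of type IIa channels bound")
```

At a concentration equal to KD (76 nM), exactly half of the channels are occupied; a tenfold lower or higher concentration gives roughly 9% and 91% occupancy, respectively.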
Mode of action
The major action of ATX-II is to delay sodium channel inactivation. Studies using giant crayfish axons and myelinated fibers from frogs indicate that ATX-II acts at low doses, without changing the opening mechanism or steady-state potassium conductance. This mode of action is caused by binding of ATX-II across the extracellular loop. ATX-II slows conformational changes or translocation that are necessary for closing the sodium channel. When applied externally in high concentrations (100 μM range), ATX-II reduces potassium conductance, yet without modifying the kinetic properties of the potassium channel.
ATX-II prolongs the duration of the cardiac action potential, as demonstrated in cultured embryonic chicken cardiac muscle cells. ATX-II also selectively activates A-fibers of peripheral nerves projecting to the sensory neuron of the dorsal root ganglia (DRG) by enhancing resurging currents in DRGs. This mechanism can thereby induce itch-like sensations and pain.
Toxicity
People who got in contact with Anemonia sulcata reported symptoms such as pain and itch. The same symptoms were found in human research subjects after injection of ATX-II into their skin.
In cardiac muscle tissue of various mammals, ATX-II has been shown to produce large and potentially lethal increases in heart rate. The lethal dose of ATX-II for the crab Carcinus maenas is 2 μg/kg.
References
Neurotoxins
Ion channel toxins
Sea anemone toxins | ATX-II | Chemistry | 1,178 |
3,922,789 | https://en.wikipedia.org/wiki/Titanium%20aluminide | Titanium aluminide (chemical formula TiAl), commonly gamma titanium, is an intermetallic chemical compound. It is lightweight and resistant to oxidation and heat, but has low ductility. The density of γ-TiAl is about 4.0 g/cm3. It finds use in several applications including aircraft, jet engines, sporting equipment and automobiles. The development of TiAl based alloys began circa 1970. The alloys have been used in these applications only since about 2000.
Titanium aluminide has three major intermetallic compounds: gamma titanium aluminide (gamma TiAl, γ-TiAl), alpha 2-Ti3Al and TiAl3. Among the three, gamma TiAl has received the most interest and applications.
Applications of gamma-TiAl
Gamma TiAl has excellent mechanical properties and oxidation and corrosion resistance at elevated temperatures (over 600 °C), which makes it a possible replacement for traditional Ni-based superalloy components in aircraft turbine engines.
TiAl-based alloys have potential to increase the thrust-to-weight ratio in aircraft engines. This is especially the case with the engine's low-pressure turbine blades and the high-pressure compressor blades. These are traditionally made of Ni-based superalloy, which is nearly twice as dense as TiAl-based alloys. Some gamma titanium aluminide alloys retain strength and oxidation resistance to 1000 °C, which is 400 °C higher than the operating temperature limit of conventional titanium alloys.
General Electric uses gamma TiAl for the low-pressure turbine blades on its GEnx engine, which powers the Boeing 787 and Boeing 747-8 aircraft. This was the first large-scale use of this material on a commercial jet engine when it entered service in 2011. The TiAl LPT blades are cast by Precision Castparts Corp. and Avio s.p.a. Machining of the Stage 6, and Stage 7 LPT blades is performed by Moeller Manufacturing. An alternate pathway for production of the gamma TiAl blades for the GEnx and GE9x engines using additive manufacturing is being explored.
In 2019 a new 55g lightweight version of the Omega Seamaster wristwatch was made, using gamma titanium aluminide for the case, backcase and crown, and a titanium dial and mechanism in Ti 6/4 (grade 5). The retail price of this watch at £37,240 was nine times that of the basic Seamaster and comparable to the top of the range platinum-cased version with a moonphase complication.
Alpha 2-Ti3Al
TiAl3
TiAl3 has the lowest density of the three at 3.4 g/cm3, the highest microhardness at 465–670 kg/mm2, and the best oxidation resistance, even at 1,000 °C. However, the applications of TiAl3 in the engineering and aerospace fields are limited by its poor ductility. In addition, the loss of ductility at ambient temperature is usually accompanied by a change of fracture mode from ductile transgranular to brittle intergranular or to brittle cleavage. Although many toughening strategies have been developed to improve its toughness, machining quality is still a difficult problem to tackle. Near-net shape manufacturing technology is considered one of the best choices for preparing such materials.
References
External links
Machining Gamma Titanium Aluminide Components - Moeller Manufacturing
Titanium Aluminide Applications in the HighSpeed Civil Transport
Titanium Aluminides - Intermetallics on azom.com.
Power House (GEnx TiAl LPT Blade Announcement)
Aluminides
Titanium alloys
Intermetallics | Titanium aluminide | Physics,Chemistry,Materials_science | 752 |
8,968,314 | https://en.wikipedia.org/wiki/GatorBox | The GatorBox is a LocalTalk-to-Ethernet bridge, a router used on Macintosh-based networks to allow AppleTalk communications between clients on LocalTalk and Ethernet physical networks. The GatorSystem software also allowed TCP/IP and DECnet protocols to be carried to LocalTalk-equipped clients via tunneling, providing them with access to these normally Ethernet-only systems. The GatorBox was designed and manufactured by Cayman Systems, Inc.
When the GatorBox is running GatorPrint software, computers on the Ethernet network can send print jobs to printers on the LocalTalk network using the 'lpr' print spool command. When the GatorBox is running GatorShare software, computers on the LocalTalk network can access Network File System (NFS) hosts on Ethernet.
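As a concrete illustration of the Ethernet-to-LocalTalk printing path, the short Python sketch below submits a file to a print queue with the standard Unix 'lpr' command. The queue name is hypothetical, and the host is assumed to have its printing system already configured to forward that queue to the GatorBox's LPD service; this is an illustrative sketch, not a documented GatorPrint procedure.

```python
import subprocess

def print_via_gatorbox(path, queue="localtalk_lw"):
    """Submit a file to a print queue that the host's spooler forwards to the
    GatorBox's GatorPrint LPD service (the queue name is a hypothetical example)."""
    subprocess.run(["lpr", "-P", queue, path], check=True)

print_via_gatorbox("report.ps")
```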
Specifications
The original GatorBox (model: 10100) is a desktop model that has a 10 MHz Motorola 68000 CPU, 1 MB RAM, 128 KB EPROM for boot program storage, 2 KB NVRAM for configuration storage, LocalTalk Mini-DIN-8 connector, Serial port Mini-DIN-8 connector, BNC connector, AUI connector, and is powered by an external power supply (16 VAC 1 A transformer that is connected by a 2.5 mm plug). This model requires a software download when it is powered on to be able to operate.
The GatorBox CS (model: 10101) is a desktop model that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz).
The GatorMIM CS is a media interface module that fits in a Cabletron Multi-Media Access Center (MMAC).
The GatorBox CS/Rack (model: 10104) is a rack-mountable version of the GatorBox CS that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz).
The GatorStar GXM integrates the GatorMIM CS with a 24 port LocalTalk repeater.
The GatorStar GXR integrates the GatorBox CS/Rack with a 24 port LocalTalk repeater. This model does not have a BNC connector and the serial port is a female DE-9 connector.
All "CS" models have 2 MB of memory and can boot from images of the software that have been downloaded into the EPROM using the GatorInstaller application.
Software
There are three disks in the GatorBox software package. Note that the content of the disks for an original GatorBox is different from that of the GatorBox CS models.
Configuration - contains GatorKeeper, MacTCP folder and either GatorInstaller (for CS models) or GatorBox TFTP and GatorBox UDP-TFTP (for original GatorBox model)
Application - contains GatorSystem, GatorPrint or GatorShare, which is the software that runs in the GatorBox. The application software for the GatorBox CS product family has a "CS" at the end of the filename. GatorPrint includes GatorSystem functionality. GatorShare includes GatorSystem and GatorPrint functionality.
Network Applications - NCSA Telnet, UnStuffit
Software Requirements
The GatorKeeper 2.0 application requires Macintosh System 6.0.2 up to 7.5.1 and Finder version 6.1 (or later)
MacTCP (not Open Transport)
See also
Kinetics FastPath
Line Printer Daemon protocol – Print spooling
LocalTalk-to-Ethernet bridge – Other LocalTalk-to-Ethernet bridges
MacIP – A tunneling protocol carrying Internet Protocol in AppleTalk
References
External links
GatorBox CS configuration information
Internet Archive copy of a configuration guide produced by the University of Illinois
Juiced.GS magazine Volume 10, Issue 4 (Dec 2005) – contains an article on how to set up a GatorBox for use with an Apple IIGS
Software and scanned manuals for the GatorBox and GatorBox CS
Networking hardware | GatorBox | Engineering | 841 |
32,740,552 | https://en.wikipedia.org/wiki/Team%20Building%20%28Align%29 | Team Building (Align) is a public artwork by American artist collective Type A, located on the Indianapolis Museum of Art, which is in Indianapolis, Indiana, United States. It was commissioned by the Indianapolis Museum of Art for their 100 Acres Park sculpture garden, which opened in 2010. It consists of two 30' aluminum rings suspending in midair, aligned such that their shadows merge at noon on the summer solstice.
Description
Team Building (Align) consists of two 30' aluminum rings suspended with steel cables from telephone poles. They are carefully oriented so that their two shadows merge into one at noon on the summer solstice.
Creation
This work was created by Type A in collaboration with a team of IMA staff from nearly every department (security, curatorial, grounds, conservation, etc.), a variety chosen to serve as a microcosm of Indianapolis. This is the reason for the first half of the artwork's title: Type A wished to examine the subject of team-building, now so prevalent in popular culture. Type A frequently described the artwork as a "gesture" rather than a sculpture, as their primary interest was the examination of the team members' engagement with both one another and the artistic process. For a year, beginning in the spring of 2007, the team would meet for five-day rope course workshops of the sort favored for corporate retreats. These were held at the High 5 Adventure Learning Center in Brattleboro, Vermont.
One of the members of the team was Lisa Freiman, chairwoman of the contemporary art department of the IMA. Her focus on collaborative art, dating back to her doctoral dissertation, has been cited as a main reason for the preponderance of artistic duos and collectives featured at 100 Acres. Freiman's interest in the give-and-take between artist and curator was tested by the degree of collaboration experienced during the creation of Team Building (Align), which resulted in a "fraught" relationship.
Even though the team was focused on the process rather than the result, Type A still had to design a physical sculpture representing that process in order to fulfill the IMA's commission. The original plan was a 40' climbing tower with handholds shaped from casts of the teammates' hands, suspended in midair so as to be inaccessible and absurd. However, the team disliked this idea, as they had used ropes rather than climbing towers in their sessions, and furthermore did not appreciate the implication that their work was useless or silly. In response to this, Type A designed a second sculpture, the one that was ultimately created. This design was inspired by many factors, such as the emotional resonance of a circle; the concept of two bodies joining to form a third, distinct entity without losing their own identities; and a specific teamwork exercise called the Bull Ring Initiative involving a tennis ball, two rings, and the careful application of tension to a number of cords. The team was much more enthusiastic about this concept. They selected the summer solstice as the appropriate date for the alignment in order to coincide with the opening of 100 Acres. Type A determined the correct orientation of the rings in consultation with Prof. Brian Murphy from the Holcomb Observatory and Planetarium of Butler University. The 30' diameter was based on the size of the circle produced by all the teammates holding hands.
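The alignment problem Prof. Murphy helped solve can be approximated with elementary solar geometry. The sketch below is a simplification using assumed values (Indianapolis's latitude of roughly 39.8° N, a solar declination of +23.44° at the June solstice, and illustrative hanging heights); it ignores atmospheric refraction and the exact site coordinates, so it indicates the scale of the calculation rather than the values actually used for the installation.

```python
import math

LATITUDE_DEG = 39.8        # assumed: approximate latitude of Indianapolis
DECLINATION_DEG = 23.44    # solar declination at the June solstice

def noon_sun_altitude(latitude_deg, declination_deg):
    """Altitude of the sun above the horizon at local solar noon (degrees)."""
    return 90.0 - abs(latitude_deg - declination_deg)

def shadow_offset(height_m, altitude_deg):
    """Horizontal distance, toward the north at solar noon, by which a point
    at the given height casts its shadow along the ground."""
    return height_m / math.tan(math.radians(altitude_deg))

alt = noon_sun_altitude(LATITUDE_DEG, DECLINATION_DEG)   # about 73.6 degrees
for h in (10.0, 12.0):  # illustrative ring-centre heights in metres
    print(f"height {h} m -> shadow centre displaced {shadow_offset(h, alt):.1f} m north")
```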
The actual construction of the sculpture was handled by the local company Indianapolis Fabrications. iFab roll formed 3" diameter aluminum tubing to form the 30' diameter rings, then covered the rings with a thin aluminum plate, which was then ground and sanded smooth. iFab was also responsible for the placement of the three 55' telephone poles and the rigging of the rings according to Prof. Murphy's specifications.
Artist
Type A is an artist collective consisting of the New York-based duo Adam Ames and Andrew Bordwin. Their partnership began in 1998 with a five-minute video of the two of them wrestling, entitled Dance. Since then, they have created many artworks in a variety of media, including videos, photographs, sculptures, and drawings. Their work examines masculinity, identity, intimacy, power, individuality, and collaboration, and frequently stars the artists themselves. Humor and absurdity also often play a large role in their art. A major turning point in the history of the collective occurred in 2006 when the Addison Gallery of American Art at Phillips Academy in Andover, Massachusetts offered them a residency. That enabled them to recruit others to participate in their artworks, although they maintained strict control of the process. The logical extension of that experience was the team-based process that led to Team Building (Align).
See also
List of outdoor artworks at the Indianapolis Museum of Art
References
5. Team Building (Align). Retrieved 15 May 2012.
External links
Artist website
High 5 Adventure Learning Center
Flickr account for the team
Outdoor sculptures in Indianapolis
Sculptures in the Indianapolis Museum of Art
Installation art works
2010 sculptures
Aluminum sculptures in Indiana
2010 establishments in Indiana
Summer solstice | Team Building (Align) | Astronomy | 1,024 |
10,548,862 | https://en.wikipedia.org/wiki/Epichlo%C3%AB%20coenophiala | Epichloë coenophiala is a systemic and seed-transmissible endophyte of tall fescue, a grass endemic to Eurasia and North Africa, but widely naturalized in North America, Australia and New Zealand. The endophyte has been identified as the cause of the "fescue toxicosis" syndrome sometimes suffered by livestock that graze the infected grass. Possible symptoms include poor weight gain, elevated body temperature, reduced conception rates, agalactia, rough hair coat, fat necrosis, loss of switch and ear tips, and lameness or dry gangrene of the feet. Because of the resemblance to symptoms of ergotism in humans, the most likely agents responsible for fescue toxicosis are thought to be the ergot alkaloids, principally ergovaline produced by E. coenophiala.
Continued popularity of tall fescue with this endophyte, despite episodic livestock toxicosis, is attributable to the exceptional productivity and stress tolerance of the grass in pastures and hay fields. The endophyte produces two classes of alkaloids, loline alkaloids and the pyrrolopyrazine, peramine, which are insecticidal and insect deterrent, respectively, and presence of the fungus increases drought tolerance, nitrogen utilization, phosphate acquisition, and resistance to nematodes. Recently, natural strains of E. coenophiala with little or no ergot alkaloid production have been introduced into tall fescue for new cultivar development. These strains are apparently not toxic to livestock, and also provide some, but not necessarily all, of the benefits attributable to the "common toxic" strains in the older tall fescue cultivars.
Epichloë coenophiala was originally described as an Acremonium species and later moved to the anamorphic form genus Neotyphodium. Today, it is classified in Epichloë. Molecular phylogenetic analysis indicates that E. coenophiala is an interspecific hybrid with three ancestors: E. festucae, a strain from the Epichloë typhina complex (from Poa nemoralis), and a third, undescribed or extinct species similar to the Lolium-associated clade of Epichloë baconii that also contributed a genome to the hybrid endophyte E. occultans, among others.
References
coenophiala
Fungi described in 1982
Fungus species | Epichloë coenophiala | Biology | 508 |
14,270,204 | https://en.wikipedia.org/wiki/Bertram%20Fraser-Reid | Bertram Oliver "Bert" Fraser-Reid (23 February 1934 – 25 May 2020) was a Jamaican synthetic organic chemist who has been widely recognised for his work using carbohydrates as starting materials for chiral materials and on the role of oligosaccharides in immune response.
Early life
Fraser-Reid was born in Coleyville, Jamaica, to William, an elementary school principal, and Laura, a teacher. He had five older siblings. Laura died when Fraser-Reid was only nine months old. He attended Excelsior High School and Clarendon College before moving to Canada to earn a BSc (1959) and MSc (1961) at Queen's University in Ontario. He went to the University of Alberta to earn a PhD in 1964 under the supervision of Raymond Lemieux, and then to Imperial College London to do postdoctoral work for Nobel Laureate Sir Derek Barton from 1964 to 1966.
Academic career
From 1966 to 1980 Fraser-Reid was on the faculty of the University of Waterloo in Waterloo, Ontario where he established a research group known as "Fraser-Reid's Rowdies". The primary emphasis of his work at this point was the synthesis of chiral natural products using carbohydrates as the starting materials. In 1975, Fraser-Reid was the first to publish a method for making nonsugar compounds with simple sugars. In 1980, he was hired at the University of Maryland, College Park, and then at Duke University in North Carolina in 1983. In 1985 he was appointed the James B. Duke Professor of Chemistry. At Duke University, his research shifted to exploring the role of oligosaccharides in immune responses, and particularly on the effect of molecules on human diseases like malaria and AIDS. After retiring from Duke in 1996, due to an undisclosed harassment claim, he established the Natural Products & Glycotechnology Research Institute, a nonprofit, to study the carbohydrate chemistry/biology of tropical parasitic diseases in developing countries and to develop a carbohydrate-based malaria vaccine. Fraser-Reid and his team achieved a milestone in oligosaccharide synthesis by assembling a molecule consisting of 28 monosaccharide units.
Achievements
Several sources have reported that Fraser-Reid was nominated in 1998 for a Nobel Prize in chemistry for his work on oligosaccharides and immune responses. This statement cannot be verified since the names of the nominees are never publicly announced, and neither are they told that they have been considered for the Prize. Nomination records are sealed for fifty years.
The Institute of Jamaica awarded Fraser-Reid the 2007 Musgrave Medal (Gold) for his work in chemistry, noting that during his career he co-authored over 330 peer-reviewed publications and supervised 85 post-doctoral fellows and 55 PhD students.
Other interests
Along with his interest in science, Fraser-Reid was an accomplished pianist and organist who gave recitals at notable venues such as St. George's Cathedral, Kingston, Jamaica (December 1986) and Cathedral de Seville, Spain (August 1995).
In the 1970s Fraser-Reid filed a lawsuit against a building contractor who had not followed municipal building codes. The case went all the way to the Supreme Court of Canada where Fraser-Reid prevailed, and "Fraser-Reid v Droumtsekas" is often cited in Canadian civil law.
See also
List of University of Waterloo people
References
1934 births
2020 deaths
Organic chemists
Jamaican academics
Jamaican emigrants to Canada
Canadian chemists
Academic staff of the University of Waterloo
Duke University faculty
Recipients of the Musgrave Medal
Male organists
Queen's University at Kingston alumni
University of Alberta alumni
21st-century Canadian pianists
21st-century organists
21st-century chemists
21st-century Canadian male musicians
People from Manchester Parish | Bertram Fraser-Reid | Chemistry | 761 |
2,101,846 | https://en.wikipedia.org/wiki/Epsilon%20Tauri | Epsilon Tauri or ε Tauri, formally named Ain (), is an orange giant star located approximately from the Sun in the constellation of Taurus. An exoplanet (designated Epsilon Tauri b, later named Amateru) is believed to be orbiting the star.
It is a member of the Hyades open cluster. As such its age is well constrained at 625 million years. It is claimed to be the heaviest among planet-harboring stars with reliable initial masses. Given its large mass, this star, though presently of spectral type K0 III, was formerly an A-type star and has now evolved off the main sequence into the giant phase. It is regarded as a red clump giant; that is, a core-helium-burning star.
Since Epsilon Tauri lies near the plane of the ecliptic, it is sometimes occulted by the Moon and (very rarely) by planets.
It has an 11th magnitude companion 182 arcseconds from the primary, although this is an unrelated background star.
Nomenclature
ε Tauri (Latinised to Epsilon Tauri) is the star's Bayer designation; it also bears the Flamsteed designation of 74 Tauri. On discovery, the planet was designated Epsilon Tauri b (or Ain b).
The star bore the traditional name Ain (Arabic عين for "eye") and was given the name Oculus Boreus (Latin for "Northern eye") by John Flamsteed. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Ain for this star.
In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Amateru for this planet.
The winning name was based on that submitted by the Kamagari Astronomical Observatory of Kure, Hiroshima Prefecture, Japan: namely 'Amaterasu', the Shinto goddess of the Sun, born from the left eye of the god Izanagi. The IAU substituted 'Amateru' – which is a common Japanese appellation for shrines when they enshrine Amaterasu – because 'Amaterasu' is already used for an asteroid (10385 Amaterasu).
In Chinese, (), meaning Net, refers to an asterism consisting ε Tauri, δ3 Tauri, δ1 Tauri, γ Tauri, Aldebaran, θ2 Tauri, 71 Tauri and λ Tauri. Consequently, the Chinese name for ε Tauri itself is (), "the First Star of Net".
Planetary system
In 2007, a massive exoplanet was reported orbiting the star with a period of 1.6 years in a somewhat eccentric orbit. It was the first planet ever discovered in an open cluster. A 2023 study updated this planet's parameters, and detected additional radial velocity variations that are likely caused by stellar activity.
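For readers who want to see how an orbital period translates into an orbital distance, the sketch below applies Kepler's third law in solar units. The stellar mass used (about 2.7 solar masses) is an assumption made for illustration; it is a value commonly quoted for Epsilon Tauri but is not stated in this article, and the planet's mass is neglected.

```python
def semi_major_axis_au(period_years, stellar_mass_msun):
    """Kepler's third law with P in years, M in solar masses, result in AU
    (planet mass neglected): a**3 = M * P**2."""
    return (stellar_mass_msun * period_years ** 2) ** (1.0 / 3.0)

# Assumed stellar mass of ~2.7 solar masses; 1.6-year period from the text.
print(f"a ~ {semi_major_axis_au(1.6, 2.7):.2f} AU")
```

With these assumed inputs the planet would orbit at roughly 1.9 AU from the star.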
References
External links
K-type giants
Horizontal-branch stars
Planetary systems with one confirmed planet
Hyades (star cluster)
Taurus (constellation)
Tauri, Epsilon
1409
BD+18 0640
Tauri, 074
028305
020889
17554529
Ain | Epsilon Tauri | Astronomy | 727 |
1,516,371 | https://en.wikipedia.org/wiki/Cavendish%20Astrophysics%20Group | The Cavendish Astrophysics Group (formerly the Radio Astronomy Group) is based at the Cavendish Laboratory at the University of Cambridge. The group operates all of the telescopes at the Mullard Radio Astronomy Observatory except for the 32m MERLIN telescope, which is operated by Jodrell Bank.
The group is the second largest of three astronomy departments in the University of Cambridge.
Instruments under development by the group
The Atacama Large Millimeter Array (ALMA) - several modules of this international project
The Magdalena Ridge Observatory Interferometer (MRO Interferometer)
The SKA
The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH)
Instruments in service
The Arcminute Microkelvin Imager (AMI)
A Heterodyne Array Receiver for B-band (HARP-B) at the James Clerk Maxwell Telescope
The Planck Surveyor
Previous instruments
The CLOVER telescope
The Very Small Array
The 5 km Ryle Telescope
The Cambridge Optical Aperture Synthesis Telescope (COAST)
The Cosmic Anisotropy Telescope
The Cambridge Low Frequency Synthesis Telescope
The Half-Mile Telescope
The One-Mile Telescope
The Interplanetary Scintillation Array which discovered the first pulsar
The 4C Array which made the 4C catalogue
The Cambridge Interferometer
The Long Michelson Interferometer
Various aperture masking instruments for optical aperture synthesis
Catalogues published by the group
Preliminary survey of the radio stars in the Northern Hemisphere (sometimes called the 1C catalogue) at 81.5-MHz (unreliable at low flux levels)
2C catalogue 81.5-MHz (unreliable at low flux levels)
3C catalogue 159 MHz
4C catalogue 178 MHz
5C catalogue 408 MHz and 1407 MHz
6C catalogue 151 MHz
7C catalogue 151 MHz
8C catalogue 38 MHz
9C catalogue 15 GHz
10C catalogue 14–18 GHz
Cambridge Interplanetary Scintillation survey
Famous Group Members
Sir Martin Ryle, 1918–1984, Nobel Prize for Physics, founder of the group, former British Astronomer Royal
Tony Hewish, Nobel Prize for Physics, designed the telescope which discovered the first pulsars
Malcolm Longair, Jacksonian Professor of Natural Philosophy, former head of the Cavendish Laboratory
Jocelyn Bell Burnell, detected the first signal from a pulsar
John E. Baldwin
Richard Edwin Hills
F. Graham Smith - early co-worker with Ryle, later Astronomer Royal
David Saint-Jacques, Canadian astronaut
External links
Cavendish Astrophysics Group webpage
Cavendish Laboratory
Astronomy institutes and departments | Cavendish Astrophysics Group | Astronomy | 485 |
272,231 | https://en.wikipedia.org/wiki/ISDB | Integrated Services Digital Broadcasting (ISDB; Japanese: 統合デジタル放送サービス, Tōgō dejitaru hōsō sābisu) is a Japanese broadcasting standard for digital television (DTV) and digital radio.
ISDB supersedes both the NTSC-J analog television system and the previously used MUSE Hi-vision analog HDTV system in Japan. An improved version of ISDB-T (ISDB-T International) will soon replace the NTSC, PAL-M, and PAL-N broadcast standards in South America and the Philippines. Digital Terrestrial Television Broadcasting (DTTB) services using ISDB-T started in Japan in December 2003, and since then, many countries have adopted ISDB over other digital broadcasting standards.
A newer and "advanced" version of the ISDB standard (that will eventually allow up to 8K terrestrial broadcasts and 1080p mobile broadcasts via the VVC codec, including HDR and HFR) is currently under development.
Countries and territories using ISDB-T
Asia
(officially adopted ISDB-T, started broadcasting in digital)
(officially adopted ISDB-T)
(officially adopted ISDB-T HD)
Americas
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started broadcasting in digital)
(officially adopted ISDB-T International, started pre-implementation stage)
(officially adopted ISDB-T International, started pre-implementation stage, briefly experimented with ATSC)
(officially adopted ISDB-T International, briefly experimented with ATSC, started broadcasting in digital)
(officially adopted ISDB-T international, started pre-implementation stage)
(currently assessing digital platform)
(officially adopted ISDB-T international, started pre-implementation stage)
Africa
(officially adopted ISDB-T International (SBTVD), started pre-implementation stage)
(In 2013, decided on European digital terrestrial TV. However, Angola reviewed the adoption to ISDB-T International system in March 2019.)
Countries and territories where ISDB-T is available
Americas
Asia
Africa
Europe
Introduction
ISDB is maintained by the Japanese organization ARIB. The standards can be obtained for free at the Japanese organization DiBEG website and at ARIB.
The core standards of ISDB are ISDB-S (satellite television), ISDB-T (terrestrial), ISDB-C (cable) and the 2.6 GHz band mobile broadcasting standard. All are based on the MPEG-2 or MPEG-4 transport stream structure for multiplexing, use MPEG-2, H.264, or HEVC video and audio coding, and are capable of UHD, high-definition television (HDTV) and standard-definition television. ISDB-T and ISDB-Tsb are designed for mobile reception in TV bands. 1seg is the name of an ISDB-T component that allows viewers to watch TV channels on cell phones, laptop computers, and in vehicles.
The concept was named for its similarity to ISDN as both allow multiple channels of data to be transmitted together (a process called multiplexing). This broadcast standard is also much like another digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble; this is very much like the multi-channel digital TV standard DVB-T. ISDB-T operates on unused TV channels, an approach that was taken by other countries for TV but never before for radio.
Transmission
The various flavors of ISDB differ mainly in the modulations used, due to the requirements of different frequency bands. The 12 GHz band ISDB-S uses PSK modulation, 2.6 GHz band digital sound broadcasting uses CDM, and ISDB-T (in VHF and/or UHF band) uses COFDM with PSK/QAM.
Interaction
Besides audio and video transmission, ISDB also defines data connections (Data broadcasting) with the internet as a return channel over several media (10/100 Ethernet, telephone line modem, mobile phone, wireless LAN (IEEE 802.11), etc.) and with different protocols. This component is used, for example, for interactive interfaces like data broadcasting (ARIB STD-B24) and electronic program guides (EPG).
Interfaces and Encryption
The ISDB specification describes many (network) interfaces, most importantly the Common Interface for the Conditional Access System (CAS). While ISDB includes examples of implementing various kinds of CAS systems, in Japan a CAS system called "B-CAS" is used. ARIB STD-B25 defines the Common Scrambling Algorithm (CSA) system, called MULTI2, required for (de-)scrambling television.
The ISDB CAS system in Japan is operated by a company named B-CAS; the CAS card is called the B-CAS card. The Japanese ISDB signal is always encrypted by the B-CAS system, even for free television programs, which is why it is commonly described as a "pay-per-view system without charge". An interface for mobile reception is under consideration.
ISDB supports RMP (Rights management and protection). Since all digital television (DTV) systems carry digital data content, a DVD or high-definition (HD) recorder could easily copy content losslessly.
Major US film studios requested copy protection; this was the main reason for RMP being mandated. Content carries one of three modes: "copy once", "copy free" and "copy never". In "copy once" mode, a program can be stored on a hard disk recorder but cannot be further copied; it can only be moved to another copy-protected medium, and the stored copy is marked so that further copying is permanently prevented. "Copy never" programs may only be timeshifted and cannot be permanently stored. In 2006, the Japanese government was evaluating the Digital Transmission Content Protection (DTCP) "Encryption plus Non-Assertion" mechanism to allow making multiple copies of digital content between compliant devices.
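A minimal sketch of this copy-control logic is given below; the three broadcast modes follow the description above, while the "no more copies" state, the class and the function names are illustrative assumptions rather than terms defined by the ARIB standards.

```python
# Illustrative sketch only: mode names follow the text above; the class,
# state and function names are hypothetical, not taken from ARIB STD-B25.
from enum import Enum

class CopyControl(Enum):
    COPY_FREE = "copy free"            # unrestricted copying
    COPY_ONCE = "copy once"            # one generation of copies allowed
    NO_MORE_COPIES = "no more copies"  # stored copy may only be moved, not copied again
    COPY_NEVER = "copy never"          # timeshift only, no permanent storage

def record(mode):
    """Return the flag carried by a stored recording, or None if storage is refused."""
    if mode is CopyControl.COPY_FREE:
        return CopyControl.COPY_FREE
    if mode is CopyControl.COPY_ONCE:
        return CopyControl.NO_MORE_COPIES
    return None  # COPY_NEVER (and already-restricted) content is not stored permanently

print(record(CopyControl.COPY_ONCE))   # CopyControl.NO_MORE_COPIES
```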
Receiver
There are two types of ISDB receiver: Television and set-top box. The aspect ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specs are called Hi-Vision TV. There are four TV types: Cathode-ray tube (CRT), plasma display panel (PDP), organic light-emitting diode (OLED) and liquid crystal display (LCD), with LCD being the most popular Hi-Vision TV on the Japanese market nowadays.
The LCD share, as measured by JEITA in November 2004, was about 60%. While PDP sets occupy the high-end market with units that are over 50 inches (1270 mm), PDP and CRT set shares are about 20% each. CRT sets are considered low end for Hi-Vision. An STB is sometimes referred to as a digital tuner.
Typical middle to high-end ISDB receivers marketed in Japan have several interfaces:
F connectors for RF input.
HDMI or D4 connector for an HDTV monitor in a home cinema.
Optical digital audio interface for an audio amplifier and speakers for 5.1 surround audio in a home cinema.
IEEE 1394 (aka FireWire) interface for digital data recorders (like DVD recorders) in a home cinema.
RCA video jack provides SDTV signal that is sampled down from the HDTV signal for analog CRT television sets or VCRs.
RCA audio jacks provide stereo audio for analog CRT television sets or VCRs.
S video is for VCRs or analog CRT television sets.
10/100 Ethernet and modular jack telephone line modem interfaces are for an internet connection.
B-CAS card interface to de-scramble.
IR interface jack for controlling a VHS or DVD player.
Services
A typical Japanese broadcast service consists of the following:
One HDTV or up to three SDTV services within one channel.
Provides interactive television through datacasting.
Interactive services such as games or shopping, via telephone line or broadband internet.
Equipped with an electronic program guide.
Ability to send firmware patches for the TV/tuner over the air.
During emergencies, the service utilizes Emergency Warning Broadcast system to quickly inform the public of various threats for the areas at risk.
There are examples providing more than 10 SDTV services with H.264 coding in some countries.
ISDB-S
History
Japan started digital satellite broadcasting with the DVB-S standard, used by PerfecTV from October 1996 and DirecTV from December 1997 on communication satellites. However, DVB-S did not satisfy the requirements of Japanese broadcasters such as NHK, the key commercial broadcasting stations (Nippon Television, TBS, Fuji Television, TV Asahi and TV Tokyo), and WOWOW (a movie-only pay-TV broadcaster). Consequently, ARIB developed a new broadcast standard called ISDB-S. The requirements included HDTV capability, interactive services, network access, effective frequency utilization, and other technical requirements. The DVB-S standard allows the transmission of a bitstream of roughly 34 Mbit/s with one satellite transponder, meaning a transponder can carry one HDTV channel. The NHK broadcasting satellite, however, had only four vacant transponders, which led ARIB and NHK to work on ISDB-S: the new standard can transmit at 51 Mbit/s with a single transponder, making ISDB-S about 1.5 times more efficient than DVB-S and allowing one transponder to carry two HDTV channels along with other independent audio and data. Digital satellite broadcasting (BS digital) was started by NHK on 1 December 2000 and was followed by the commercial broadcasting stations. Today, SKY PerfecTV! (the successor of Skyport TV, Sky D, CS burn, Platone, EP, DirecTV, J Sky B, and PerfecTV!) uses the ISDB-S system on the 110-degree (east longitude) wide-band communication satellite.
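The transponder capacity figures quoted above can be sanity-checked with simple arithmetic; the roughly 24 Mbit/s rate assumed here for one MPEG-2 HDTV service is an illustrative value, not a figure from the standard.

```python
# Rough check of the transponder figures quoted above.
dvb_s_rate = 34.0     # Mbit/s per transponder (DVB-S, as stated above)
isdb_s_rate = 51.0    # Mbit/s per transponder (ISDB-S, as stated above)
hdtv_rate = 24.0      # Mbit/s, assumed rate of one MPEG-2 HDTV service (illustrative)

print(isdb_s_rate / dvb_s_rate)        # 1.5 -> "1.5 times more efficient"
print(int(dvb_s_rate // hdtv_rate))    # 1 HDTV channel per DVB-S transponder
print(int(isdb_s_rate // hdtv_rate))   # 2 HDTV channels per ISDB-S transponder
```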
Technical specification
This table shows the summary of ISDB-S (satellite digital broadcasting).
Channel
Frequency and channel specification of Japanese Satellites using ISDB-S
ISDB-S3
ISDB-S3 is a satellite digital broadcasting specification supporting 4K, 8K, HDR, HFR, and 22.2 audio.
ISDB-C
ISDB-C is a cable digital broadcasting specification. The technical specification J.83/C is developed by JCTEA. ISDB-C is identical to DVB-C but has a different channel bandwidth of 6 MHz (instead of 8 MHz) and roll-off factor.
ISDB-T
History
HDTV was invented at NHK Science & Technology Research Laboratories (Japan Broadcasting Corporation's Science & Technical Research Laboratories). The research for HDTV started as early as the 1960s, though a standard was proposed to the ITU-R (CCIR) only in 1973.
By the 1980s, a high definition television camera, cathode-ray tube, videotape recorder, and editing equipment, among others, had been developed. In 1982 NHK developed MUSE (Multiple sub-Nyquist sampling encoding), the first HDTV video compression and transmission system. MUSE used digital video compression, but the compressed signal was converted back to analog by a digital-to-analog converter and transmitted using frequency modulation.
In 1987, NHK demonstrated MUSE in Washington D.C. as well as NAB. The demonstration made a great impression in the U.S., leading to the development of the ATSC terrestrial DTV system. Europe also developed a DTV system called DVB. Japan began R&D of a completely digital system in the 1980s that led to ISDB. Japan began terrestrial digital broadcasting, using ISDB-T standard by NHK and commercial broadcasting stations, on 1 December 2003.
Features
ISDB-T is characterized by the following features:
ISDB-T (Integrated Services Digital Broadcasting-Terrestrial) in Japan uses the UHF band from 470 MHz to 710 MHz, a total bandwidth of 240 MHz allocated to 40 channels, namely channels 13 to 52 (the range 710–770 MHz, channels 53 to 62, was previously also used but has been re-assigned to cell phones). Each channel is 6 MHz wide (actually 5.572 MHz of effective bandwidth with a 430 kHz guard band between channels). These channels are called "physical channels" (物理チャンネル). In other countries, the US or European channel tables are used instead.
For channel tables with 6 MHz channel width, a single ISDB-T channel has an effective bandwidth of 5.572 MHz with 5,617 carriers spaced 0.99206 kHz apart (a worked example of this arithmetic is given after this feature list). For a 7 MHz channel the effective bandwidth is 6.50 MHz, and for an 8 MHz channel it is 7.42 MHz.
ISDB-T can accommodate any combination of HDTV (roughly 8 Mbit/s in H.264) and SDTV (roughly 2 Mbit/s in H.264) within the bitrate determined by the transmission parameters such as bandwidth, code rate and guard interval. Typically, among the 13 segments, the centre segment is used for 1seg with QPSK modulation and the remaining 12 segments carry the HDTV or SDTV payloads with 64QAM modulation. The bitstreams of the 12 segments are combined into one transport stream, within which any combination of programs can be carried based on the MPEG-2 transport stream definition.
ISDB-T transmits an HDTV channel and a mobile TV channel (1seg) within one channel. 1seg is a mobile terrestrial digital audio/video broadcasting service in Japan. Although 1seg is designed for mobile usage, reception is sometimes problematic in moving vehicles: at high speed the UHF transmission is frequently shadowed by buildings and hills, although good reception has been reported on the Shinkansen when running through flat or rural areas.
ISDB-T provides interactive services with data broadcasting, such as electronic program guides. ISDB-T supports internet access as a return channel to support the data broadcasting; internet access is also provided on mobile phones.
ISDB-T provides single-frequency network (SFN) and on-channel repeater technology. SFN makes efficient use of the frequency resource (spectrum). For example, the Kanto area (the greater Tokyo area, including most of Tokyo prefecture and parts of Chiba, Ibaraki, Tochigi, Saitama and Kanagawa prefectures) is covered by an SFN with roughly 10 million population coverage.
ISDB-T can be received indoors with a simple indoor antenna.
ISDB-T provides robustness to multipath interference ("ghosting"), co-channel analog television interference, and electromagnetic interference from motor vehicles and power lines in urban environments.
ISDB-T is claimed to allow HDTV to be received on moving vehicles at over 100 km/h; DVB-T can only receive SDTV on moving vehicles, and it is claimed that ATSC can not be received on moving vehicles at all (however, in early 2007 there were reports of successful reception of ATSC on laptops using USB tuners in moving vehicles).
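As a worked example of the carrier figures quoted in the feature list above, the arithmetic below reproduces the 6 MHz-channel numbers. The value of 432 carriers per segment corresponds to ISDB-T transmission Mode 3 and is an assumption consistent with the 5,617-carrier total cited above; it is not the only mode defined by the standard.

```python
# Mode 3 carrier arithmetic for a 6 MHz ISDB-T channel (sketch).
channel_raster_khz = 6000.0
segment_bw_khz = channel_raster_khz / 14        # one segment is 1/14 of the raster (~428.57 kHz)
carriers_per_segment = 432                      # Mode 3 (assumed; Modes 1 and 2 use 108 and 216)
carrier_spacing_khz = segment_bw_khz / carriers_per_segment   # ~0.99206 kHz
total_carriers = 13 * carriers_per_segment + 1  # 13 segments plus one extra pilot carrier
effective_bw_khz = total_carriers * carrier_spacing_khz       # ~5572 kHz of the 6 MHz channel

print(round(carrier_spacing_khz, 5), total_carriers, round(effective_bw_khz, 1))
# 0.99206 5617 5572.4
```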
Adoption
ISDB-T was adopted for commercial transmissions in Japan in December 2003. It currently comprises a market of about 100 million television sets, and had 10 million subscribers by the end of April 2005. With the wide use of ISDB-T, receiver prices have fallen: the price of a low-end ISDB-T STB was ¥19,800 as of 19 April 2006. By November 2007 only a few older, low-end STB models could be found on the Japanese market (average price US$180), showing a tendency towards replacement by mid- to high-end equipment such as PVRs and TV sets with built-in tuners. In November 2009 the retail chain AEON introduced an STB for US$40, followed by a variety of low-cost tuners. The DiBEG web page confirms this tendency by showing the low significance of the digital tuner STB market in Japan.
Brazil, which used an analogue TV system (PAL-M) slightly different from those of other countries, chose ISDB-T as the base for its DTV format, calling it ISDB-Tb or, internally, SBTVD (Sistema Brasileiro de Televisão Digital-Terrestre). The Japanese DiBEG group incorporated the advancements made by Brazil (the MPEG-4 video codec instead of ISDB-T's MPEG-2, and a powerful interaction middleware called Ginga) and renamed the standard "ISDB-T International". Besides Argentina, Brazil, Peru, Chile and Ecuador, which selected ISDB-Tb, other South American countries, mainly from Mercosur, such as Venezuela, also chose ISDB-Tb, which provides economies of scale and common-market benefits from regional South American manufacturing instead of importing ready-made STBs as with the other standards. Extensive tests carried out by the Brazilian Association of Radio and Television Broadcasters (ABERT), the Brazilian Television Engineering Society (SET) and Universidade Presbiteriana Mackenzie also confirmed the insufficient quality of ATSC for indoor reception and, between DVB-T and ISDB-T, found that the latter gave superior performance in indoor reception and more flexibility in accessing digital services and TV programs through fixed, mobile or portable receivers.
The ABERT–SET group in Brazil did system comparison tests of DTV under the supervision of the CPqD foundation. The comparison tests were done under the direction of a work group of SET and ABERT. The ABERT/SET group selected ISDB-T as the best choice in digital broadcasting modulation systems among ATSC, DVB-T and ISDB-T. Another study found that ISDB-T and DVB-T performed similarly, and that both were outperformed by DVB-T2.
ISDB-T was singled out as the most flexible standard for meeting the needs of mobility and portability, and the most efficient for mobile and portable reception. On June 29, 2006, Brazil announced ISDB-T-based SBTVD as the chosen standard for digital TV transmissions, to be fully implemented by 2016. By November 2007 (one month prior to the DTTV launch), a few suppliers started to announce zapper STBs for the new Nippon-Brazilian SBTVD-T standard, at that time without interactivity.
As of 2019, the implementation rollout in Brazil had proceeded successfully, with terrestrial analog services (PAL-M) phased out in most of the country (for some less populated regions, analog signal shutdown was postponed to 2023).
Adoption by country
This lists the other countries who adopted the ISDB-T standard, chronologically arranged.
On June 30, 2006, Brazil announced its decision to adopt ISDB-T as the digital terrestrial television standard, by means of presidential decree 5820/2006.
On April 23, 2009, Peru announced its decision to adopt ISDB-T as the digital terrestrial television standard. This decision was taken on the basis of the recommendations by the Multi-sectional Commission to assess the most appropriate standard for the country.
On August 28, 2009, Argentina officially adopted the ISDB-T system calling it internally SATVD-T (Sistema Argentino de Televisión – Terrestre).
On September 14, 2009, Chile announced it was adopting the ISDB-T standard because it adapts better to the geographical makeup of the country, while allowing signal reception in cell phones, high-definition content delivery and a wider variety of channels.
On October 6, 2009, Venezuela officially adopted the ISDB-T standard.
On March 26, 2010, Ecuador announced its decision to adopt ISDB-T standard. This decision was taken on the basis of the recommendations by the Superintendent of Telecommunications.
On April 29, 2010, Costa Rica officially announced the adoption of ISDB-Tb standard based upon a commission in charge of analyzing which protocol to accept.
On June 1, 2010, Paraguay officially adopted ISDB-T International, via a presidential decree #4483.
On June 11, 2010, the Philippines (NTC) officially adopted the ISDB-T standard.
On July 6, 2010, Bolivia announced its decision to adopt ISDB-T standard as well.
On December 27, 2010, the Uruguayan Government adopted the ISDB-T standard, voiding a previous 2007 decree which had adopted the European DVB system.
On November 15, 2011, the Maldivian Government adopted the ISDB-T standard, becoming the first country in the region to use the European channel table, in which one channel has a bandwidth of 8 MHz.
On February 26, 2013, the Botswana government adopted the ISDB-T standard, becoming the first country within the SADC region and indeed the first country on the continent of Africa to do so.
On September 12, 2013, Honduras adopted the ISDB-T standard.
On May 20, 2014, the Government of Sri Lanka officially announced its decision to adopt the ISDB-T standard, and on September 7, 2014, Japanese Prime Minister Shinzo Abe signed an agreement with Sri Lankan President Mahinda Rajapakse covering the construction of infrastructure such as ISDB-T networks for a smooth conversion to ISDB-T, as well as cooperation in the field of content and the development of human resources.
On January 23, 2017, El Salvador adopted the ISDB-T standard.
On March 20, 2019, Angola adopted the ISDB-T standard.
Technical specification
Segment structure
ARIB has developed a segment structure called BST-OFDM (band segmented transmission OFDM).
ISDB-T divides the frequency band of one channel into thirteen segments. The broadcaster can select which combination of segments to use; this choice of segment structure allows for service flexibility. For example, ISDB-T can transmit both LDTV (1seg) and HDTV using one TV channel, or switch to three SDTV services, a change that can be performed at any time. ISDB-T can also change the modulation scheme at the same time.
In the 13-segment spectrum of ISDB-T, segment 0 is generally used for 1seg, while segments 1 to 12 carry one HDTV service or three SDTV services.
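A minimal sketch of the bookkeeping this flexibility implies is shown below; the per-service segment counts (one segment for 1seg, four per SDTV service, twelve for HDTV) are illustrative assumptions consistent with the allocations described above, not values fixed by the standard.

```python
# Sketch: checking that a requested service mix fits into the 13 segments.
SEGMENTS_TOTAL = 13
SEGMENT_COST = {"1seg": 1, "SDTV": 4, "HDTV": 12}   # assumed, illustrative costs

def fits(services):
    """Return True if the listed services fit within one 13-segment channel."""
    return sum(SEGMENT_COST[s] for s in services) <= SEGMENTS_TOTAL

print(fits(["1seg", "HDTV"]))                  # True  (1 + 12 = 13)
print(fits(["1seg", "SDTV", "SDTV", "SDTV"]))  # True  (1 + 3*4 = 13)
print(fits(["HDTV", "SDTV"]))                  # False (12 + 4 = 16)
```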
Summary of ISDB-T
The H.264 Baseline profile is used in one-segment (1seg) broadcasting for portable devices and mobile phones.
The H.264 High profile is used in ISDB-Tb for high-definition broadcasts.
Channel
Specification of Japanese terrestrial digital broadcasting using ISDB-T.
ISDB-Tsb
ISDB-Tsb is the terrestrial digital sound broadcasting specification. The technical specification is the same as that of ISDB-T, using coded OFDM (COFDM) transmission.
ISDB-Tmm
ISDB-Tmm (Terrestrial mobile multi-media) used a suitable number of segments per station, with MPEG-4 AVC/H.264 video coding. With multiple channels, ISDB-Tmm served dedicated channels such as sport, movie and music channels with CD-quality sound, allowing for better broadcast quality compared to 1seg. The service used the VHF band, 207.5–222 MHz, which became available after Japan's switchover to digital television in July 2011.
Japan's Ministry of Internal Affairs and Communications licensed the NTT Docomo subsidiary mmbi, Inc. to use the ISDB-Tmm method on September 9, 2010. The competing MediaFLO method offered by KDDI was not licensed.
The ISDB-Tmm broadcasting service by mmbi, Inc. was named モバキャス (pronounced mobakyasu, literally a short form of "mobile casting") on July 14, 2011, and was branded as NOTTV from October 4, 2011. The Minister of Internal Affairs and Communications approved the start of operations of NOTTV on October 13, 2011. The service was planned to start on April 1, 2012, with a monthly subscription fee of 420 yen, covering the southern Kanto Plain, Aichi, Osaka, Kyoto and some other prefectures. The deployment plan was to cover approximately 73% of households by the end of 2012 and approximately 91% by the end of 2014, with 125 stations or repeaters to be installed by 2016 to cover cities nationwide. Android smartphones and tablets with ISDB-Tmm receiving capability were sold mainly by NTT DoCoMo, although a separate tuner (the TV BoX manufactured by Huawei, or StationTV manufactured by Pixela) could be purchased for iPhones and iPads, as well as for Android smartphones and tablets sold by au by KDDI and SoftBank Mobile, to receive ISDB-Tmm broadcasts.
Due to the continued unprofitability of NOTTV, mmbi, Inc. shut down the service on June 30, 2016.
2.6 GHz Mobile satellite digital audio/video broadcasting
MobaHo! was the name of the service that used the mobile satellite digital audio broadcasting specifications. MobaHo! started its service on 20 October 2004 and ended on 31 March 2009.
Standards
ARIB and JCTEA developed the following standards. Some parts of the standards are available on the pages of ITU-R and ITU-T.
Table of terrestrial HDTV transmission systems
See also
General category
DiBEG – The Digital Broadcasting Experts Group
Digital television
Digital terrestrial television
Digital radio
Digital multimedia broadcasting (DMB)
1seg
B-CAS
Datacasting
SDTV, EDTV, HDTV
ISDB-T International (SBTVD) – Brazilian Digital Television System based on ISDB-T
Tokyo Skytree – ISDB-T broadcasting for Kanto Plain
Transmission technology
ATSC Standards – Advanced Television Systems Committee Standard
DMB-T – Digital Multimedia Broadcast-Terrestrial
DVB-T – Digital Video Broadcasting-Terrestrial
MPEG
Single-frequency network (SFN), multi-frequency network (MFN)
References
External links
Welcome to ISDB-T Official Web Site! Digital Broadcasting Experts Group (DiBEG)
ISDB-T International Web Site!
Outline of the Specification for ISDB – NHK
The ISDB-T System – ITU (link is down, 2012/10/28)
Comparison Test Results in Brazil, Clear Superiority of the ISDB-T system – NHK
Digital Television Laboratory and Field Test Results - Brazil – ITU
ISDB-T: Japanese Digital Terrestrial Television Broadcasting (DTTB), (PDF) Asian Institute of Technology
Final report of the Digital Terrestrial Television Peruvian Commission (In Spanish)
Digital Broadcasting, the Launching by Country – Digital Broadcasting Experts Group (DiBEG)
ISDB-C – Cable Television Transmission for Digital Broadcasting in Japan – NHK
ISDB-S – Satellite Transmission System for Advanced Multimedia Services Provided by Integrated Services Digital Broadcasting – NHK
The Association for Promotion of Digital Broadcasting (Dpa)
ISDB-T – Digital Terrestrial Television/Sound/Data Broadcasting in Japan – NHK
Switching On to ISDB-T Digital Highlighting Japan September 2010 (Public Relations Office Government of Japan)
ISDB-Tmm
Introducing ISDB-Tmm mobile multimedia broadcasting system – ITU (May 2010)
Deployment of Mobile Multimedia Broadcasting based on ISDB-Tmm technology in Japan – ITU (May 23, 2011)
Broadband
Broadcast engineering
Digital television
High-definition television
Radio broadcasting
Mass media companies established in 1981
Satellite television
Television transmission standards
Japanese inventions
Standards of Japan
Mass media companies of Japan
1981 establishments in Japan
2000 introductions
2003 introductions | ISDB | Engineering | 5,721 |
4,691,922 | https://en.wikipedia.org/wiki/Phased%20adoption | Phased adoption or phased implementation is a strategy of implementing an innovation (i.e., information systems, new technologies, processes, etc.) in an organization in a phased way, so that different parts of the organization are implemented in different subsequent time slots. Phased implementation is a method of System Changeover from an existing system to a new one that takes place in stages. Other concepts that are used are: phased conversion, phased approach, phased strategy, phased introduction and staged conversion. Other methods of system changeover include direct changeover and parallel running.
Overview
Information Technology has revolutionized the way of working in organizations. With the introduction of high-tech Enterprise Resource Planning Systems (ERP), Content Management Systems (CMS), and Customer and Supplier Relationship Management Systems (CRM and SRM) came the task of implementing these systems in the organizations that are about to use them. The following entry discusses just a small fraction of what has to be done, or can be done, when implementing such a system in an organization.
The phased approach takes the conversion one step at a time. The implementation requires a thoroughly thought-out scenario for starting to use the new system, and at every milestone the employees and other users have to be instructed. The old system is taken over by the new system in predefined steps until it is completely abandoned. The actual installation of the new system can be done in several ways, per module or per product, and may be carried out in several instances. This may be done by introducing some of the functionalities of the system before the rest, or by introducing some functionalities to certain users before introducing them to all users. This gives the users time to cope with the changes caused by the system.
It is common to organize an implementation team that moves from department to department. By moving, the team learns and so gains expertise and knowledge, so that each subsequent implementation will be a lot faster than the first one.
The Process Data Diagram
The visualizing technique used in this entry is a technique developed by the O&I group of the University of Utrecht. The technique is described in the following Wiki: Meta-modeling technique.
As can be seen in figure 1, phased adoption contains a loop: every department that is to be connected to the system goes through the same process. First, based on the previous training sessions, security levels are set (see ITIL); in this way every user has a profile describing which parts of the system are visible and/or usable to that specific user. Then the documents and policies are documented: all processes and procedures are described in process descriptions, either on paper or on the intranet. Then the actual conversion takes place. As described above, departments or other parts of an organization may be implemented in different time slots; in figure 1 this is depicted by implementing an additional module or even a complete product. HRM needs different modules of an ERP system than Finance, or Finance may need an additional accounting software package (product). Tuning of the system occurs to solve existing problems. After a department has been converted, the loop starts over and another department or user group can be converted. When all departments or organization parts have been converted and the system is fully implemented, the system is officially delivered to the organization and the implementation team may be dissolved.
Phased adoption makes it possible to introduce modules that are ready while the other, future modules are still being programmed. This does make the implementation scenario more critical, since certain modules depend on one another (a small ordering sketch follows below). Project management techniques can be adopted to tackle these problems; see the techniques section below.
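One simple way to derive a feasible roll-out order from such dependencies is a topological sort, sketched below; the module names and the dependency graph are purely hypothetical, and the sketch ignores the resource and scheduling constraints that real project management techniques would add.

```python
# Hypothetical sketch: order module roll-outs so that each phase only
# introduces modules whose prerequisite modules are already live.
from graphlib import TopologicalSorter  # Python 3.9+

# module -> modules it depends on (illustrative names only)
dependencies = {
    "Finance": set(),
    "HRM": set(),
    "Accounting add-on": {"Finance"},
    "Payroll": {"HRM", "Finance"},
}

rollout_order = list(TopologicalSorter(dependencies).static_order())
print(rollout_order)
# e.g. ['Finance', 'HRM', 'Accounting add-on', 'Payroll']
```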
However, the actual adoption of the system by the users can be more problematic. The system may work just fine, but if it is not used it is worthless. Users base their attitude towards the system on their first experience. As this places extra weight on the first interaction, the implementers should be concerned with making that first interaction an especially pleasant one.
In the technique used in this entry each CONCEPT requires a proper definition which is preferably copied from a standard glossary of which the source is given, if applicable. All CONCEPT names in the text are with capital characters. In Table 1 the concept definition list is presented.
Table 1: Concept Diagram
Advantages, disadvantages and risks of Phased Adoption
The Phased adoption method has certain pros, cons and risks
Pros:
The conversion will be done in parts. Time is available for adjustments
Negative influences that arise at the start are less critical
No ‘catch-up’ period is needed.
Time for the users to adapt is longer
Technical staff can concentrate on part of the system or some of the users.
Cons:
Several adjustments are needed
Training sessions are confusing for users as they are asked to work with the new and the old system
Several changes in documentation
The duration of the project
System delivery milestone is unclear
Correctness and completeness of the dataset has to be checked several times
A ‘fall back’ to the old system is becoming more difficult every new phase.
The implementation may appear unclear to the employees and other users.
Risks:
Complexity of the implementation
Prone to make mistakes
Fall back impossible in later phases
Hardware and software installation
The following sections are supplemental to the entry about adoption (software implementation) and are specific to phased adoption:
The configuration and specification of the hardware in place used by the legacy system and to run the new system is delivered in the hardware specifications. The hardware configuration is tested to assure proper functioning. This is reported in the hardware configuration report.
The configuration and specification of the software in place, i.e., the legacy system and the future new system is made clear to assure proper functioning once the system is installed. The act of specifying the system already installed is key to the implementation. Which parts or even total systems will be taken over by the new system? All this is reported in the software installation and software test reports.
The actual installation of the software of the new system is also done here in a confined area to support the training sessions described in the following section.
Training
The system training teaches users the keystrokes and transactions required to run the system. The pilot exercises the system and tests the users' understanding of it. The project team creates a skeletal business-case test environment which takes the business processes from the beginning, when a customer order is received, to the end, when the customer order is shipped.
Training as such is not enough for adopting an information system. The users have learning needs; a known learning need is emotional guidance. Users need to make emotional steps in order to make cognitive steps. If they fear the system due to its difficult handling, they may not be able to understand the cognitive steps needed to successfully carry out their tasks.
Techniques
In the implementation field several techniques are used. A well-known method, specifically oriented towards the implementation field, is the Regatta method by Sogeti. Another technique is the SAP implementation method, which is adapted to implementing SAP systems. Systems are installed in several different ways, and different organizations may have their own methods. When implementing a system, it is considered a project and thus must be handled as such. Well-known theories and methods are used in the field, such as the PRINCE2 method with all of its underlying techniques, such as PERT diagrams, Gantt charts and critical path methods.
Examples
Electronic medical records
The EMR implementation at the University Physicians Group (UPG) in Staten Island and Brooklyn, New York.
The University Physicians Group in New York went with a complete technical installation of an EMR (Electronic Medical Record) software package. The UPG found that some vendors of the EMR package recommended a roll-out done all at once, also called the Big Bang. However, they found that the Big Bang would have overwhelmed the physicians and staff due to the following factors:
Ongoing workload during the key lessons prevented them from fully paying attention.
The urgent need to complete some records caused the users to fall back to the old system.
Information overload on the physicians' side.
No time to play around with the system.
100% availability was not assured by the vendor.
Thus they chose a phased approach: “Hence, a phased adoption to us, offered the greatest chance of success, staff adoption, and opportunity for the expected return-on-investment once the system was completely adopted.” (J. Hyman, M.D.)
There was also a group who were somewhat reluctant about any new system. By introducing the system to certain early adopters, the late majority would be able to get to know the system as it was phased in throughout the organisation. In each loop (see figure 5, A) another part of the UPG was introduced to the system.
Supermarket checkout system
As an example, think of a supermarket. In this supermarket the checkout system is being upgraded to a newer version. Imagine that only the checkout counters of the vegetable section are changed over to the new system, while the other counters carry on with the old system. If the new system does not work properly, it would not matter because only a small portion of the supermarket has been computerised. If it does work, staff can take turns working on the vegetable counters to get some practice using the new system.
After the vegetables section is working perfectly, the meat section might be next, then the confectionery section, and so on. Eventually all the various counters in the supermarket system would have been phased in, and everything would be running. This takes a long time as there are two systems working until the changeover is completed. However, the supermarket is never in danger of having to close and the staff are all able to get plenty of training in operating the new system, so it is a much friendlier method.
See also
PRINCE2
Regatta method by Sogeti
Parallel adoption
ERP
SRM
CRM
Software package
References
Further reading
Gallivan, M.J., (1996) Strategies for implementing new software processes: An evaluation of a contingency framework, SIGCPR/SIGMIS ’96, Denver Colorado
Rooimans, R., Theye, M. de, & Koop, R. (2003). Regatta: ICT-implementaties als uitdaging voor een vier-met-stuurman. The Hague: Ten Hagen en Stam Uitgevers.
Information systems | Phased adoption | Technology | 2,104 |
2,115,435 | https://en.wikipedia.org/wiki/LITNET | LITNET is the Lithuanian research and education network. It was established in 1991 and had X.25 satellite connectivity to the University of Oslo.
LITNET NOC is located in Kaunas University of Technology (KTU).
References
External links
Educational organizations based in Lithuania
Internet in Lithuania
National research and education networks | LITNET | Technology | 63 |
77,102,780 | https://en.wikipedia.org/wiki/William%20J.%20Pietro | William Joseph Pietro (born 1956) is an American/Canadian research scientist working in quantum chemistry, molecular electronics, and molecular machines.
Education
Pietro was born in Jersey City, New Jersey. His education includes a B.S. in chemistry from the Brooklyn Polytechnic Institute of New York, a Ph.D. in chemistry from the University of California, Irvine, and a postdoctoral fellowship at Northwestern University.
Career
Pietro was one of the founding authors of both Gaussian and Spartan electronic structure software packages. Pietro and co-workers Robert Hout and Warren Hehre invented the first algorithm for the high-resolution visualization of molecular orbitals. Working in collaboration with John Pople and Warren Hehre, Pietro developed the first split-valence basis sets for transition metals and higher-row main-group elements.
Between 1985 and 1991, Pietro was a professor of chemistry at the University of Wisconsin–Madison, where his research group pioneered the first working molecular diode.
Pietro is a professor of chemistry at York University, researching theoretical aspects of electron transfer reactions in transition metal complexes and the quantum dynamics of molecular and biomolecular machines.
References
1956 births
Living people
Northwestern University alumni
University of California, Irvine alumni
Scientists from New Jersey
People from Jersey City, New Jersey
Theoretical chemists
University of Wisconsin–Madison faculty
Academic staff of York University | William J. Pietro | Chemistry | 267 |
62,321,646 | https://en.wikipedia.org/wiki/BugsXLA | BugsXLA is a Microsoft Excel add-in that provides a graphical user interface for WinBUGS, OpenBUGS and JAGS, developed by Phil Woodward. BugsXLA allows a wide range of Bayesian models to be fitted to data stored in Excel using model statements similar to those used in R, SAS or Genstat. It has been used to analyse data in a variety of application areas, for example quality engineering, pharmaceutical research, organisational sciences and ecology. The primary purpose of BugsXLA is to reduce the learning curve associated with using Bayesian software. It does this by removing the need to know how to code in the BUGS language, how to create the other files needed, as well as providing reasonable default initial values and prior distributions.
References
External links
BugsXLA page
BugsXLA YouTube
Statistical software | BugsXLA | Mathematics | 169 |
822,575 | https://en.wikipedia.org/wiki/Biogeochemistry | Biogeochemistry is the scientific discipline that involves the study of the chemical, physical, geological, and biological processes and reactions that govern the composition of the natural environment (including the biosphere, the cryosphere, the hydrosphere, the pedosphere, the atmosphere, and the lithosphere). In particular, biogeochemistry is the study of biogeochemical cycles, the cycles of chemical elements such as carbon and nitrogen, and their interactions with and incorporation into living things transported through earth scale biological systems in space and time. The field focuses on chemical cycles which are either driven by or influence biological activity. Particular emphasis is placed on the study of carbon, nitrogen, oxygen, sulfur, iron, and phosphorus cycles. Biogeochemistry is a systems science closely related to systems ecology.
History
Early Greek
Early Greeks established the core idea of biogeochemistry that nature consists of cycles.
18th-19th centuries
Agricultural interest in 18th-century soil chemistry led to better understanding of nutrients and their connection to biochemical processes. This relationship between the cycles of organic life and their chemical products was further expanded upon by Dumas and Boussingault in an 1844 paper that is considered an important milestone in the development of biogeochemistry. Jean-Baptiste Lamarck first used the term biosphere in 1802, and others continued to develop the concept throughout the 19th century. Early climate research by scientists like Charles Lyell, John Tyndall, and Joseph Fourier began to link glaciation, weathering, and climate.
20th century
The founder of modern biogeochemistry was Vladimir Vernadsky, a Russian and Ukrainian scientist whose 1926 book The Biosphere, in the tradition of Mendeleev, formulated a physics of the Earth as a living whole. Vernadsky distinguished three spheres, where a sphere was a concept similar to the concept of a phase-space. He observed that each sphere had its own laws of evolution, and that the higher spheres modified and dominated the lower:
Abiotic sphere – all the non-living energy and material processes
Biosphere – the life processes that live within the abiotic sphere
Nöesis or noosphere – the sphere of human cognitive process
Human activities (e.g., agriculture and industry) modify the biosphere and abiotic sphere. In the contemporary environment, the amount of influence humans have on the other two spheres is comparable to a geological force (see Anthropocene).
The American limnologist and geochemist G. Evelyn Hutchinson is credited with outlining the broad scope and principles of this new field. More recently, the basic elements of the discipline of biogeochemistry were restated and popularized by the British scientist and writer, James Lovelock, under the label of the Gaia Hypothesis. Lovelock emphasized a concept that life processes regulate the Earth through feedback mechanisms to keep it habitable. The research of Manfred Schidlowski was concerned with the biochemistry of the Early Earth.
Biogeochemical cycles
Biogeochemical cycles are the pathways by which chemical substances cycle (are turned over or moved through) the biotic and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, hydrosphere and lithosphere. There are biogeochemical cycles for chemical elements, such as for calcium, carbon, hydrogen, mercury, nitrogen, oxygen, phosphorus, selenium, iron and sulfur, as well as molecular cycles, such as for water and silica. There are also macroscopic cycles, such as the rock cycle, and human-induced cycles for synthetic compounds such as polychlorinated biphenyls (PCBs). In some cycles there are reservoirs where a substance can remain or be sequestered for a long period of time.
Research
Biogeochemistry research groups exist in many universities around the world. Since this is a highly interdisciplinary field, these are situated within a wide range of host disciplines including: atmospheric sciences, biology, ecology, geomicrobiology, environmental chemistry, geology, oceanography and soil science. These are often bracketed into larger disciplines such as earth science and environmental science.
Many researchers investigate the biogeochemical cycles of chemical elements such as carbon, oxygen, nitrogen, phosphorus and sulfur, as well as their stable isotopes. The cycles of trace elements, such as the trace metals and the radionuclides, are also studied. This research has obvious applications in the exploration of ore deposits and oil, and in the remediation of environmental pollution.
Some important research fields for biogeochemistry include:
modelling of natural systems
soil and water acidification recovery processes
eutrophication of surface waters
carbon sequestration
environmental remediation
global change
climate change
biogeochemical prospecting for ore deposits
soil chemistry
chemical oceanography
Evolutionary Biogeochemistry
Evolutionary biogeochemistry is a branch of modern biogeochemistry that applies the study of biogeochemical cycles to the geologic history of the Earth. This field investigates the origin of biogeochemical cycles and how they have changed throughout the planet's history, specifically in relation to the evolution of life.
See also
Acid rain
Atlantic Data Base for Exchange Processes at the Deep Sea Floor
Carbon sink
Ecosystem model
Edaphology
Environmental engineering science
Geochemistry
Geophysiology
GEOTRACES
Hydrogen isotope biogeochemistry
IMBER
Marine biogeochemical cycles
Pedology
Physical impacts of climate change
References
Representative books and publications
Vladimir I. Vernadsky, 2007, Essays on Geochemistry and the Biosphere, tr. Olga Barash, Santa Fe, NM, Synergetic Press (originally published in Russian in 1924)
Schlesinger, W. H. 1997. Biogeochemistry: An Analysis of Global Change, 2nd edition. Academic Press, San Diego, Calif.
Schlesinger, W. H., 2005. Biogeochemistry. Vol. 8 in: Treatise on Geochemistry. Elsevier Science.
Vladimir N. Bashkin, 2002, Modern Biogeochemistry. Kluwer.
Samuel S. Butcher et al. (Eds.), 1992, Global Biogeochemical Cycles. Academic.
Susan M. Libes, 1992, Introduction to Marine Biogeochemistry. Wiley.
Dmitrii Malyuga, 1995, Biogeochemical Methods of Prospecting. Springer.
Global Biogeochemical Cycles. A journal published by the American Geophysical Union.
Woolman, T. A., & John, C. Y., 2013, An Analysis of the Use of Predictive Modeling with Business Intelligence Systems for Exploration of Precious Metals Using Biogeochemical Data. International Journal of Business Intelligence Research (IJBIR), 4(2), 39–53.
Biogeochemistry. A journal published by Springer.
External links
Treatise on Geochemistry Volume 8. Biogeochemistry
International Geosphere-Biosphere Programme
Chemical oceanography
Limnology
Systems ecology | Biogeochemistry | Chemistry,Environmental_science | 1,466 |
10,207,364 | https://en.wikipedia.org/wiki/Kentucky%20statistical%20areas | The U.S. state of Kentucky currently has 32 statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated 8 combined statistical areas, 9 metropolitan statistical areas, and 15 micropolitan statistical areas in Kentucky. As of 2023, the largest of these is the Louisville-Jefferson County--Elizabethtown, KY-IN CSA, comprising greater Louisville, Kentucky's largest city.
Table
Primary statistical areas
Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 32 statistical areas of Kentucky, 16 are PSAs comprising eight combined statistical areas, two metropolitan statistical areas and six micropolitan statistical areas.
See also
Geography of Kentucky
Demographics of Kentucky
Notes
References
External links
Office of Management and Budget
United States Census Bureau
United States statistical areas
Statistical Areas Of Kentucky | Kentucky statistical areas | Mathematics | 193 |
587,247 | https://en.wikipedia.org/wiki/Classical%20planet | A classical planet is an astronomical object that is visible to the naked eye and moves across the sky and its backdrop of fixed stars (the common stars which seem still in contrast to the planets). Visible to humans on Earth there are seven classical planets (the seven luminaries). They are from brightest to dimmest: the Sun, the Moon, Venus, Jupiter, Mars, Mercury and Saturn.
Greek astronomers such as Geminus and Ptolemy recorded these classical planets during classical antiquity, introducing the term planet, which means 'wanderer' in Greek (πλανήτης, planḗtēs), expressing the fact that these objects move across the celestial sphere relative to the fixed stars. Therefore, the Greeks were the first to document the astrological connections to the planets' visual detail.
Through the use of telescopes, other celestial objects moving like the classical planets were found, starting with the Galilean moons in 1610. Today the term planet is used considerably differently, with a planet defined as a body directly orbiting the Sun (or another star) that has cleared its own orbit. Therefore, only five of the seven classical planets remain recognized as planets, alongside Earth, Uranus, and Neptune.
History
Babylonian
The Babylonians recognized seven planets. A bilingual list in the British Museum records the seven Babylonian planets in the following order:
The Moon, Sin.
The Sun, Shamash.
Jupiter, Merodach.
Venus, Ishtar.
Saturn, Ninip.
Mercury, Nebo.
Mars, Nergal.
Mandaean
In Mandaeism, the names of the seven planets are derived from the seven Babylonian planets. Overall, the seven classical planets, known collectively as the "Seven Planets", are generally not viewed favorably in Mandaeism, since they constitute part of the entourage of Ruha, the Queen of the World of Darkness, who is also their mother. However, individually, some of the planets can be associated with positive qualities. The names of the seven planets in Mandaic are borrowed from Akkadian. Some of the names are ultimately derived from Sumerian, since Akkadian had borrowed many deity names from Sumerian.
Each planet is said to be carried in a ship. Drawings of these ships are found in various Mandaean scriptures, such as the Scroll of Abatur. The planets are listed according to the traditional Mandaean order of the planets as mentioned in Masco (2012).
Symbols
The astrological symbols for the classical planets appear in the medieval Byzantine codices in which many ancient horoscopes were preserved. In the original papyri of these Greek horoscopes, the Sun is represented by a circle with one ray and the Moon by a crescent.
The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Greek papyri. The symbols for Jupiter and Saturn are identified as monograms of the initial letters of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus.
A. S. D. Maunder finds antecedents of the planetary symbols in earlier sources, used to represent the gods associated with the classical planets. Bianchini's planisphere, produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols: Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached.
A diagram in Johannes Kamateros' 12th century Compendium of Astrology shows the Sun represented by the circle with a ray, Jupiter by the letter zeta (the initial of Zeus, Jupiter's counterpart in Greek mythology), Mars by a shield crossed by a spear, and the remaining classical planets by symbols resembling the modern ones, without the cross-mark seen in modern versions of the symbols. The modern Sun symbol, pictured as a circle with a dot (☉), first appeared in the Renaissance.
Planetary hours
The Ptolemaic system used in ancient Greek astronomy placed the planets in order of proximity to Earth in the then-current geocentric model, from closest to furthest: the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn. In addition, each hour of the day was assigned a ruling planet, the seven planets taking the hours in turn, although the resulting order of daily rulers was staggered (see below).
The first hour of each day was named after the ruling planet, giving rise to the names and order of the Roman seven-day week. Modern Latin-based cultures, in general, directly inherited the days of the week from the Romans and they were named after the classical planets; for example, in Spanish Miércoles is Mercury, and in French mardi is Mars-day.
The modern English days of the week were mostly inherited from gods of the old Germanic Norse culture – Wednesday is Wōden's day (Wōden or Wettin, equivalent to Mercury), Thursday is Thor's day (Thor, equivalent to Jupiter), Friday is Frige's day (Frige, equivalent to Venus). The equivalence here is by the gods' roles; for instance, Venus and Frige were both goddesses of love. The Norse gods were probably matched to the Roman planets and their gods through Roman influence rather than coincidentally by the naming of the planets. A vestige of the Roman convention remains in the English name Saturday.
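The staggered order mentioned above follows mechanically from cycling the seven planets through the 24 hours of each day; the short sketch below assumes the furthest-to-closest (Saturn-first) ordering and starts the count on Saturday.

```python
# Sketch: the planet ruling the first hour of each successive day, assuming the
# planets cycle hour by hour in furthest-to-closest order, starting with Saturn.
order = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

day_rulers = [order[(24 * day) % 7] for day in range(7)]
print(day_rulers)
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
# i.e. Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday
```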
Alchemy
In alchemy, each classical planet (Moon, Mercury, Venus, Sun, Mars, Jupiter, and Saturn) was associated with one of the seven metals known to the classical world (silver, mercury/quicksilver, copper, gold, iron, tin and lead respectively). As a result, the alchemical glyphs for the metal and associated planet coincide. Alchemists believed the other elemental metals were variants of these seven (e.g. zinc was known as "Indian tin" or "mock silver").
Alchemy in the Western world, and in other locations where it was widely practiced, was (and in many cases still is) allied and intertwined with traditional Babylonian-Greek style astrology; in numerous ways they were built to complement each other in the search for hidden knowledge (knowledge that is not common, i.e. the occult). Astrology has used the concept of classical elements from antiquity up until the present day. Most modern astrologers use the four classical elements extensively, and indeed they are still viewed as a critical part of interpreting the astrological chart.
Traditionally, each of the seven planets in the Solar System as known to the ancients was associated with, held dominion over, and "ruled" a certain metal.
The list of rulership is as follows:
The Sun rules Gold ()
The Moon, Silver ()
Mercury, Quicksilver/Mercury ()
Venus, Copper ()
Mars, Iron ()
Jupiter, Tin ()
Saturn, Lead ()
Some alchemists (e.g. Paracelsus) adopted the Hermetic Qabalah assignment between the vital organs and the planets.
Contemporary astrology
Western astrology
Indian astrology
Indian astronomy and astrology (jyotiṣa) recognises seven visible planets (including the Sun and Moon) and two additional invisible planets (tamo'graha): Rahu and Ketu.
Naked-eye planets
Mercury and Venus are visible only in twilight hours because their orbits are interior to that of Earth. Venus is the third-brightest object in the sky and the most prominent planet. Mercury is more difficult to see due to its proximity to the Sun. Lengthy twilight and an extremely low angle at maximum elongations make optical filters necessary to see Mercury from extreme polar locations. Mars is at its brightest when it is in opposition, which occurs approximately every twenty-five months. Jupiter and Saturn are the largest of the five planets, but are farther from the Sun, and therefore receive less sunlight. Nonetheless, Jupiter is often the next brightest object in the sky after Venus. Saturn's luminosity is often enhanced by its rings, which reflect light to varying degrees, depending on their inclination to the ecliptic; however, the rings themselves are not visible to the naked eye from the Earth.
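The roughly twenty-five-month interval between Mars oppositions mentioned above follows from the synodic period formula; the short check below uses rounded standard orbital periods, which are assumptions rather than values drawn from this article.

```python
# Sketch: synodic period of Mars from approximate sidereal orbital periods.
earth_period_days = 365.256
mars_period_days = 686.980

synodic_days = 1.0 / (1.0 / earth_period_days - 1.0 / mars_period_days)
print(round(synodic_days))              # ~780 days between oppositions
print(round(synodic_days / 30.44, 1))   # ~25.6 months on average
```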
See also
Antikythera mechanism
Behenian fixed star
List of former planets
Monas Hieroglyphica of John Dee
Olympian spirits
Worship of heavenly bodies
Wufang Shangdi
References
Further reading
External links
Chronology of Solar System Discovery
Ancient astronomy
Planets of the Solar System
Solar System | Classical planet | Astronomy | 1,776 |
41,055,466 | https://en.wikipedia.org/wiki/Nam%20Chang-hee | Nam Chang-hee (; born February 14, 1957) is a South Korean plasma physicist. Nam specializes in the exploration of relativistic laser-matter interactions using femtosecond PW lasers. He is currently professor of physics at Gwangju Institute of Science and Technology and director of the Center for Relativistic Laser Science, a part of the Institute for Basic Science (IBS).
Biography
Nam studied nuclear engineering at Seoul National University, where he obtained his B.Sc. in 1977. After that he received a M.Sc. in physics from the Korea Advanced Institute of Science and Technology (KAIST) in 1979. Entering the classroom as an instructor, he taught at Pusan National University until 1982. Enrolling at Princeton University, he moved to the United States, where he later received a Ph.D. in plasma physics in 1988. He stayed in Princeton for a year working as a staff research physicist at the Princeton Plasma Physics Laboratory.
In 1989, he began working as an assistant professor in the Department of Physics in KAIST, where he was promoted to associate professor in 1992 and full professor in 1998. From 1999, he was also director of the Coherent X-ray Research Center at KAIST where he researched ultrafast laser science. He left KAIST in 2012 to become a professor at the Department of Physics and Photon Science at Gwangju Institute of Science and Technology (GIST) and the founding director of the Center for Relativistic Laser Science, a research center at GIST with funding provided by the Institute for Basic Science.
Academic work
Nam has published more than 120 journal papers and has given invited talks at international conferences. He served as a steering committee member of the OECD Global Science Forum on Compact High-Intensity Short-Pulse Lasers (2001–2003), which eventually became ICUIL (the International Committee on Ultra-High Intensity Lasers), a working group of IUPAP, and is a scientific advisory committee member of ELI-ALPS (Extreme Light Infrastructure / Attosecond Light Pulse Source), the EU program for the PW laser facility in Hungary, which started in 2011.
He has served as conference chair (ICXRL in 2010, ISUILS in 2012), organizing chair (APLS in 2008, 2010, 2012) and program chair (CLEO-Pacific Rim in 2007, 2009). He has been on the editorial board of J. Phys. B as an international advisory committee member since 2007 and as a guest editor in 2012, and served on the IEEE Photonics Journal as an associate editor from 2009 to 2011. He has represented Korea on international committees (ICQE from 2005 to 2012; CLEO-PR from 2012; the Commission on Quantum Electronics of IUPAP since 2008). He was instrumental in launching the Asian Intense Laser Network in 2004, serving as its first secretary.
Awards
2020: Presidential Citation, Government Science Day Award, Ministry of Science and ICT
2011: Sungdo Optical Science Award of the Optical Society Korea
2010: National Academy of Sciences Award, Korea
2009: Fellow of the Optical Society of America
2008: Fellow of the American Physical Society
Further reading
External links
IBS Center for Relativistic Laser Science home page: http://corels.ibs.re.kr
http://www.gist.ac.kr/
References
21st-century physicists
Optical physicists
South Korean physicists
Academic staff of KAIST
Academic staff of Gwangju Institute of Science and Technology
Princeton University alumni
KAIST alumni
Seoul National University alumni
People from Gwangju
Living people
Institute for Basic Science
Plasma physicists
1957 births | Nam Chang-hee | Physics | 734 |
31,586,461 | https://en.wikipedia.org/wiki/Ingres%20paper | Ingres paper is a type of drawing paper. It is a laid finish paper of light to medium weight, and it is not as strong or as durable as Bristol paper. Laid finish refers to the imprint of regular screen pattern of a papermaker's mould. Ingres is not necessarily a handmade paper, but is produced to replicate the properties of laid paper. Ingres is often used for charcoal and pastel drawing. It is also used as an endpaper in books.
The development of Ingres paper for drawing is ascribed to the French Neoclassical artist Dominique Ingres (1780-1867), although modern Ingres papers can differ from those actually used by Ingres. Ingres paper's pattern is a laid mesh. The laid effect creates a toothy grain of close lines on one side and a mottled surface on the reverse. The toothiness allows the paper to take charcoal easily and evenly. Ingres paper is favored in book arts for its antique appearance and pH neutrality.
Prominent manufacturers include Canson, Hahnemühle (sometimes called "German Ingres"), and Fabriano. Ingres paper has a high rag content (around 65%) and is gelatin sized. It is available in a variety of colors.
References
Paper
Pastel | Ingres paper | Physics | 260 |
44,794,241 | https://en.wikipedia.org/wiki/Tylopilus%20oradivensis | Tylopilus oradivensis is a bolete fungus in the family Boletaceae. Found in the Talamanca Mountains of Costa Rica, it was described as new to science in 2010 by mycologists Todd Osmundson and Roy Halling. The bolete fruits scattered or in groups under oak trees, at elevations ranging between . The specific epithet combines the words ora ("coast"), dives ("rich"), and the suffix ensis ("from a place") to refer to the type locality.
Description
Fruit bodies have convex to flattened caps measuring in diameter. The cap surface is tomentose with an inrolled margin, and ranges in color from brown to orange to red. Flesh is white to cream colored, and does not change color with injury. The tubes on the cap underside are up to 6 mm deep; the pores stain light brown with injury. The stipe measures long by thick, and is roughly the same color as the cap or paler. The fusiform (spindle-shaped), thin-walled spores typically measure 8.2–12 by 3–4 μm, and contain a single oil droplet. T. oradivensis fruit bodies are similar in morphology to the eastern North American bolete Tylopilus balloui.
References
External links
oradivensis
Fungi described in 2010
Fungi of Central America
Fungus species | Tylopilus oradivensis | Biology | 289 |
40,994,286 | https://en.wikipedia.org/wiki/Leo%20Pharma | LEO Pharma A/S is a multinational Danish pharmaceutical company, founded in 1908, with a presence in about 100 countries. Its headquarters are in Ballerup, near Copenhagen. The company is wholly owned by the private LEO Foundation. LEO Pharma develops and markets products for dermatology, bone remodeling, thrombosis and coagulation. In 1945, it was the first producer of penicillin outside the US and UK.
History
Formation & the 20th Century
In 1908, pharmacists August Kongsted and Anton Antons bought the LEO Pharmacy in Copenhagen, Denmark. With the purchase, they established 'Københavns Løveapoteks kemiske Fabrik', today known as LEO Pharma. LEO Pharma celebrated its centennial in 2008; flags bearing the LEO logo were flown in every country where LEO products are available, more than a hundred flags in total. Today, LEO Pharma has an ever-growing pipeline, with over 4,800 specialists focusing on dermatology and thrombosis.
1912 – The company launched its own Aspirin headache tablet
1917 – The company exported Denmark's first drug, Digisolvin
1940 – The company launched its own heparin product.
1958 – Patent filed for bendrofluazide.
1962 – The company launched Fucidin to be used to treat staphylococcus infections.
21st Century & onwards
In 2015, the company announced it would acquire Astellas Pharma's dermatology business for $725 million.
In 2018, the company acquired Bayer's dermatology unit for an undisclosed amount.
In April 2022, the company appointed Christophe Bourdon as its new CEO. Prior to this, he served as the CEO of Orphazyme A/S.
In January 2023, the company began extensive layoffs (of about 300 employees, or roughly 5% of its workforce) as part of a major restructuring and reorganization in anticipation of a possible IPO. Because of the slimming down of the company's R&D program, new early-stage drug candidates will have to be sourced externally.
In August 2023, it was announced LEO Pharma had entered into a definitive agreement to acquire key assets of the Basking Ridge-headquartered biopharma company, Timber Pharmaceuticals, for $36 million. This transaction included TMB-001, a topical isotretinoin ointment currently under development for the treatment of moderate to severe subtypes of Congenital Ichthyosis (CI), which has no treatment options.
In September 2023, the company announced the implementation of a new capital structure with over 4 billion Danish kroner (approximately $587 million) allocated for business development and mergers and acquisitions. The company is focused on acquiring assets aimed at treating rare dermatological diseases with unmet medical needs.
In February 2024, LEO Pharma announced a net loss of 3.6 billion Danish kroner (equivalent to $528 million) for 2023 due to non-recurring project impairments, tax asset adjustments, and rising interest expenses. It also reported that it had cut its operating costs by 14% and increased its revenues by 7% in 2023.
Controversies
LEO Pharma, along with 21 other Danish companies, was accused of bribery and corruption in connection with the Oil-for-Food Programme, allegations that came to light in 2005. The accusation was that LEO Pharma had acted outside the UN system during the first Gulf War by bribing employees in the relief program and thereby helping Saddam Hussein. LEO Pharma quickly settled with the police and paid 8.5 million. The new CEO then cracked down on corruption both abroad and internally, a step that can affect employee flexibility and cause delays in production. In Berlingske Business on June 6, 2015, Gitte Aabo spoke about her personal responsibility and said that LEO was prepared for a few years of lower earnings as a possible consequence of her intervention in employee relations.
References
Pharmaceutical companies of Denmark
Companies based in Ballerup Municipality
Pharmaceutical companies established in 1908
Danish companies established in 1908 | Leo Pharma | Chemistry | 850 |
42,609,271 | https://en.wikipedia.org/wiki/Centre%20for%20Environmental%20Policy | The Centre for Environmental Policy (CEP) is a department at Imperial College London in the Faculty of Natural Sciences. Its aim is to influence a wide range of environmental issues through research on the environmental, energy and health aspects of global problems. CEP's current director is Professor Mark Burgman.
History
The Centre for Environmental Policy was first established in 1977 as the Interdepartmental Centre for Environmental Technology (ICCET). ICCET was the first of many interdisciplinary centres within Imperial College London to cross traditional boundaries between departments. It aims “to produce quality research, teaching and advice on environmental matters”. Whilst Imperial College is known for its scientific and technological activities, the centre was established to combine these with the legal, medical, economic and sociological aspects of the environment, with particular emphasis on cross-linkages between the disciplines. CEP has since evolved to cover a wide range of science, technology and policy research, teaching within the broad disciplines of physical and natural environment and more specifically in the energy, agriculture and international development fields.
CEP focuses primarily on evidence-based policy making with an emphasis on social sciences relevant to the environment and to the interface between science and policy in key environmental subjects. This work is often carried out in collaboration with other departments at Imperial College London.
In addition to research opportunities, the department offers two Postgraduate Taught and Research degrees – PhDs and the MSc in Environmental Technology.
CEP is organized in functional groups around research topics, and teaching is related closely to the research groups by subject.
Notable Alumni, Faculty and Staff
Helen ApSimon
Professor Sir Gordon Conway
Lord Flowers
Kaveh Madani
Ian Scoones
James Skea, Professor of Sustainable Energy
Sources
Imperial College London official website
Imperial College Faculty of Natural Sciences website
Centre for Environmental Policy website
References
Research institutes of Imperial College London
Environmental studies institutions in the United Kingdom
Environmental research institutes
Imperial College Faculty of Natural Sciences | Centre for Environmental Policy | Environmental_science | 383 |
37,208,491 | https://en.wikipedia.org/wiki/Thermus%20antranikianii | Thermus antranikianii is a thermophilic bacterium belonging to the phylum Deinococcota, known to occur in extreme, high-temperature environments. The species was identified in Iceland, together with Thermus igniterrae.
References
External links
Type strain of Thermus antranikianii at BacDive - the Bacterial Diversity Metadatabase
Deinococcota
Bacteria described in 2000 | Thermus antranikianii | Biology | 80 |
14,882,861 | https://en.wikipedia.org/wiki/Overburden%20Conveyor%20Bridge%20F60 | F60 is the series designation of five overburden conveyor bridges used in brown coal (lignite) opencast mining in the Lusatian coalfields in Germany. They were built by the former Volkseigener Betrieb TAKRAF in Lauchhammer and are the largest movable technical industrial machines in the world. As overburden conveyor bridges, they transport the overburden which lies over the coal seam. The cutting height is , hence the name F60. In total, the F60 is up to high and wide; with a length of , it has been described as a lying Eiffel Tower. This makes the F60 not only the longest vehicle ever made, surpassing Prelude FLNG, the longest ship, but also the largest vehicle by physical dimensions ever built. In operating condition it weighs 13,600 metric tons, making it one of the heaviest land vehicles ever made, exceeded only by Bagger 293, a giant bucket-wheel excavator. Despite its immense size, it is operated by a crew of only 14.
The first conveyor bridge was built from 1969 to 1972, being equipped with a feeder bridge in 1977. The second was built from 1972 to 1974, having been equipped with a feeder bridge during construction. The third conveyor bridge was built from 1976 to 1978, being provided with a feeder bridge in 1985. The fourth and fifth conveyor bridges were built 1986–1988 and 1988–1991 respectively.
There are still four F60s in operation in the Lusatian coalfields today: in the brown coal opencast mines in Jänschwalde (Brandenburg, near Jänschwalde Power Station), Welzow-Süd (Brandenburg, near Schwarze Pumpe Power Station), Nochten and Reichwalde (Saxony, both near Boxberg Power Station). The fifth F60, the last one built, is in Lichterfeld-Schacksdorf and is accessible to visitors.
Technology
The F60 has two bogies, one on the dumping side (front) and one on the excavating side (back), which each run on two rails (). In addition to the two rails on the excavating side, there are another two rails for the transformer and cable cars. There are a total of 760 wheels on the bogies, of which 380 are powered. The maximum speed of the F60 is and the operating speed is . The F60 is driven by two large Siemens Type 1DM6536-4AA14-Z electric motors, which together deliver more than 1,800 horsepower; the motors are fed through 6,000 meters of cable rated for up to 30,000 volts. However, like most ultra-heavy mining machines, the F60 draws its power from an external source, in this case a nearby coal-fired power plant. Because of the physical limitations of the cable, the F60 has an operational range of only 6 km.
The F60 has two bucket chain excavators of Type Es 3750 on the sides to do preparatory work (see the panoramic photograph from the Jänschwalde mine), one each on the northern and southern crosswise conveyor. They each have an output of (), which corresponds to a volume the size of a soccer field with a depth of . There are nine overburden conveyor belts with a speed of .
The F60, including the two excavators, requires of power. The bridge needs of electricity to convey of overburden, from the crosswise conveyors up to the dumping at a height of .
The Lichterfeld F60
The overburden conveyor bridge of Lichterfeld-Schacksdorf, now shut down, was used from 1991 until 1992 in the brown coal mine Klettwitz-Nord near Klettwitz. It is open for visitors today as a project of the Internationale Bauausstellung Fürst-Pückler-Land (International Mining Exhibition Fürst-Pückler-Land) and is an anchor of the European Route of Industrial Heritage (ERIH).
This F60 is the last of the five F60s. It was installed between 1988 and 1991 in the Klettwitz-Nord opencast mine and began operation in March 1991. Between its commissioning and its shutdown in June 1992, it moved around of overburden. After German reunification, the mine became the responsibility of the Lausitzer und Mitteldeutsche Bergbau-Verwaltungsgesellschaft (Lusatian and Middle-German Mining Administrative Society, LMBV), which closed the mine on the orders of the German federal government and remediated it economically and in an environmentally sound way.
Between 2000 and 2010, the Internationale Bauausstellung Fürst-Pückler-Land pursued the goal of giving new momentum to the region, and the former opencast mine of Klettwitz-Nord was integrated into that concept. The mine has been converted into a 'visitors' mine' and the conveyor bridge has been accessible since 1998. Various sound and light installations help make the facility an attraction for visitors.
References
External links
Web site about the F60 of the Internationalen Bauausstellung Fürst-Pückler-Land
F60 via Google Maps in Welzow
European Route of Industrial Heritage Anchor Points
Engineering vehicles
Mining equipment
Surface mining
Surface mines in Germany
Takraf GmbH | Overburden Conveyor Bridge F60 | Engineering | 1,130 |
49,978,438 | https://en.wikipedia.org/wiki/Vibrio%20holin%20family | The Vibrio Holin Family (TC# 1.E.30) consists of small proteins 50 to 65 amino acyl residues in length that exhibit a single N-terminal transmembrane domain. A representative list of proteins belonging to the Vibrio Holin family can be found in the Transporter Classification Database.
See also
Holin
Lysin
Transporter Classification Database
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Vibrio holin family | Chemistry,Biology | 100 |
92,839 | https://en.wikipedia.org/wiki/Bourne%20shell | The Bourne shell (sh) is a shell command-line interpreter for computer operating systems. It first appeared on Version 7 Unix, as its default shell. Unix-like systems continue to have /bin/sh—which will be the Bourne shell, or a symbolic link or hard link to a compatible shell—even when other shells are used by most users.
The Bourne shell was once standard on all branded Unix systems, although historically BSD-based systems had many scripts written in csh. As the basis of POSIX sh syntax, Bourne shell scripts can typically be run with Bash or dash on Linux or other Unix-like systems; Bash itself is a free clone of Bourne.
History
Origins
Work on the Bourne shell initially started in 1976. Developed by Stephen Bourne at Bell Labs, it was a replacement for the Thompson shell, whose executable file had the same name, sh. The Bourne shell was also preceded by the Mashey shell. The Bourne shell was released in 1979 in the Version 7 Unix release distributed to colleges and universities. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered necessary to produce structured programs.
It gained popularity with the publication of The Unix Programming Environment by Brian Kernighan and Rob Pike—the first commercially published book that presented the shell as a programming language in a tutorial form.
Some of the primary goals of the shell were:
To allow shell scripts to be used as filters.
To provide programmability including control flow and variables.
Control over all input/output file descriptors.
Control over signal handling within scripts.
No limits on string lengths when interpreting shell scripts.
Rationalize and generalize string quoting mechanism.
The environment mechanism. This allowed context to be established at startup and provided a way for shell scripts to pass context to sub scripts (processes) without having to use explicit positional parameters.
Features of the original version
Features of the Version 7 UNIX Bourne shell include:
Scripts can be invoked as commands by using their filename
May be used interactively or non-interactively
Allows both synchronous and asynchronous execution of commands
Supports input and output redirection and pipelines
Provides a set of built-in commands
Provides flow control constructs and quotation facilities.
Typeless variables
Provides local and global variable scope
Scripts do not require compilation before execution
Does not have a goto facility, so code restructuring may be necessary
Command substitution using backquotes: `command`.
Here documents using << to embed a block of input text within a script.
for ~ do ~ done loops, in particular the use of $* to loop over arguments, as well as for ~ in ~ do ~ done loops for iterating over lists.
case ~ in ~ esac selection mechanism, primarily intended to assist argument parsing.
sh provided support for environment variables using keyword parameters and exportable variables.
Contains strong provisions for controlling input and output and in its expression matching facilities.
The Bourne shell also was the first to feature the convention of using file descriptor 2> for error messages, allowing much greater programmatic control during scripting by keeping error messages separate from data.
Stephen Bourne's coding style was influenced by his experience with the ALGOL 68C compiler that he had been working on at Cambridge University. In addition to the style in which the program was written, Bourne reused portions of ALGOL 68's if ~ then ~ elif ~ then ~ else ~ fi, case ~ in ~ esac and for/while ~ do ~ od (using done instead of od) clauses in the common Unix Bourne shell syntax. Moreover, although the v7 shell is written in C, Bourne took advantage of some macros to give the C source code an ALGOL 68 flavor. These macros (along with the finger command distributed in Unix version 4.2BSD) inspired the International Obfuscated C Code Contest (IOCCC).
Features introduced after 1979
Over the years, the Bourne shell was enhanced at AT&T. The variants are therefore named after the respective AT&T Unix version they were released with (some important variants being Version 7, System III, SVR2, SVR3, SVR4). As the shell itself was never versioned, the only way to identify a particular variant was to test its features.
Features of the Bourne shell versions since 1979 include:
Built-in command – System III shell (1981)
# as comment character – System III shell (1981)
Colon in parameter substitutions "${parameter:=word}" – System III shell (1981)
with argument – System III shell (1981)
for indented here documents – System III shell (1981)
Functions and the builtin – SVR2 shell (1984)
Built-ins , , – SVR2 shell (1984)
Source code de-ALGOL68-ized – SVR2 shell (1984)
Modern "" – SVR3 shell (1986)
Built-in – SVR3 shell (1986)
Cleaned up parameter handling allows recursively callable functions – SVR3 shell (1986)
8-bit clean – SVR3 shell (1986)
Job control – SVR4 shell (1989)
Multi-byte support – SVR4 shell (1989)
Variants
DMERT shell
Duplex Multi-Environment Real-Time (DMERT) is a hybrid time-sharing/real-time operating system developed in the 1970s at the Bell Labs Indian Hill location in Naperville, Illinois. It uses a 1978 snapshot of the Bourne Shell, "VERSION sys137 DATE 1978 Oct 12 22:39:57". The DMERT shell runs on 3B21D computers still in use in the telecommunications industry.
Korn shell
The Korn shell (ksh) written by David Korn based on the original Bourne Shell source code, was a middle road between the Bourne shell and the C shell. Its syntax was chiefly drawn from the Bourne shell, while its job control features resembled those of the C shell. The functionality of the original Korn Shell (known as ksh88 from the year of its introduction) was used as a basis for the POSIX shell standard. A newer version, ksh93, has been open source since 2000 and is used on some Linux distributions. A clone of ksh88 known as pdksh is the default shell in OpenBSD.
Schily Bourne Shell
Jörg Schilling's Schily-Tools includes three Bourne Shell derivatives.
Relationship to other shells
C shell
Bill Joy, the author of the C shell, criticized the Bourne shell as being unfriendly for interactive use, a task for which Stephen Bourne himself acknowledged C shell's superiority. Bourne stated, however, that his shell was superior for scripting and was available on any Unix system, and Tom Christiansen also criticized C shell as being unsuitable for scripting and programming.
Almquist shells
Due to copyright issues surrounding the Bourne Shell as it was used in historic CSRG BSD releases, Kenneth Almquist developed a clone of the Bourne Shell, known by some as the Almquist shell and available under the BSD license, which is in use today on some BSD descendants and in low-memory situations. The Almquist Shell was ported to Linux, and the port renamed the Debian Almquist shell, or dash. This shell provides faster execution of standard sh (and POSIX-standard sh, in modern descendants) scripts with a smaller memory footprint than its counterpart, Bash. Its use tends to expose bashisms – bash-centric assumptions made in scripts meant to run on sh.
Other shells
Bash (the Bourne-Again shell) was developed in 1989 for the GNU project and incorporates features from the Bourne shell, csh, and ksh. It is meant to be POSIX-compliant.
rc was created at Bell Labs by Tom Duff as a replacement for sh for Version 10 Unix. It is the default shell for Plan 9 from Bell Labs. It has been ported to UNIX as part of Plan 9 from User Space.
Z shell, developed by Paul Falstad in 1990, is an extended Bourne shell with a large number of improvements, including some features of Bash, ksh, and tcsh.
See also
Comparison of command shells
Unix shell
References
External links
The individual members of "The Traditional Bourne Shell Family"
"Characteristical common properties of the traditional Bourne shells"
Historical C source code for the Bourne shell using mac.h macros from 1979
Original Bourne Shell documentation from 1978
A port of the "heirloom" SVR4 Bourne shell from OpenSolaris to some other Unix-like systems
Migrating from the System V (SVR4) Shell to the POSIX Shell
Bourne Shell Tutorial (syntax)
Faqs shell differences
Howard Dahdah, The A–Z of Programming Languages: Bourne shell, or sh – An in-depth interview with Steve Bourne, creator of the Bourne shell, or sh, Computerworld, 5 March 2009.
1979 software
POSIX
Scripting languages
Text-oriented programming languages
Unix shells
Unix SUS2008 utilities
de:Unix-Shell#Die Bourne-Shell | Bourne shell | Technology | 1,879 |
77,079,654 | https://en.wikipedia.org/wiki/Pilbaria | Pilbaria is a genus of fossil stromatolite-forming cyanobacteria from the Paleoproterozoic era 2.3 to 1.7 billion years ago. It is named after the Pilbara region of Western Australia where the type specimen was found.
Description
The type species, Pilbaria perplexa is characterised by long, mostly straight, subparallel and mostly smooth columns with proportionately small transversely elongate niches with projections. Near the bases of beds, branching varies from parallel to markedly divergent, but above that level, it is α-β-parallel or slightly divergent. Laminae are predominantly steeply convex and form a patchy wall.
Pilbaria is similar to Inzeria and Nordia in that it has well-developed niches and projections.
Distribution and age
Fossils of P. perplexa, the type species, have been found in the Wyloo Group of the Pilbara region in Western Australia, aged 1.7 to 2 billion years old. It has also been found in the Epworth Group of the Coronation and Pine Creek Geosynclines in Canada, aged 1.865 to 2.2 and 2 to 2.3 billion years old.
P. boetsapia and P. inzeriaformis are from Schmidtsdrift Formation of the Northern Cape Province in South Africa, aged approximately 2.2 billion years old.
P. deverella is from the Yelma and Frere formations of the Earaheedy Group of the Nabberu Basin in Western Australia. It is the youngest of the four species, with fossils from these areas aged to about 1.7 billion years old. Other stromatolite genera found in the Yelma formation include Ephyaltes, Externia, Murgurra and Yelma.
See also
List of fossil stromatolites
References
Proterozoic life
Prehistoric bacteria
Cyanobacteria genera
Fossil taxa described in 1972 | Pilbaria | Biology | 409 |
8,593,762 | https://en.wikipedia.org/wiki/300B | In electronics, the 300B is a directly-heated power triode vacuum tube with a four-pin base, introduced in 1938 by Western Electric to amplify telephone signals. It measures high and wide, and the anode can dissipate 40 watts thermal. In the 1980s it began to be used increasingly by audiophiles in home audio equipment. The 300B has good linearity, low noise and good reliability; it is often used in single-ended triode (SET) audio amplifiers of about eight watts output. A push-pull pair can output 20 watts.
Manufacturers of the 300B and other tubes of similar characteristics included EkspoPUL (Electro Harmonix brand), ELROG, Emission Labs - EML, JJ Electronic, KR Audio, TJ FullMusic, Hengyang Electronics (Psvane brand), Linlai, Takatsuki Electric and Western Electric. Prices for new 300B tubes ranged from US$175 to $2,000 per matched pair.
Western Electric (tube manufacturer), a small, privately owned company in Rossville, Georgia resumed production of the original 300B in 2018 using the original, 1938 manufacturing standards on a modernized assembly line housed at the Rossville Works.
See also
List of vacuum tubes
References
http://www.aes.org/e-lib/browse.cfm?elib=6058
The 300B's history
300B data sheet
External links
Stereophile: In Search of the Perfect 300B Tube
Reviews of 300B tubes.
Vacuum tubes | 300B | Physics | 316 |
7,393,485 | https://en.wikipedia.org/wiki/Lecithinase | Lecithinase is a type of phospholipase that acts upon lecithin.
It can be produced by Clostridium perfringens, Staphylococcus aureus, Pseudomonas aeruginosa or Listeria monocytogenes. C. perfringens alpha toxin (a lecithinase) causes myonecrosis and hemolysis. The lecithinase of S. aureus is used in the detection of coagulase-positive strains because of the strong association between lecithinase activity and coagulase activity.
References
EC 3.1.4 | Lecithinase | Chemistry,Biology | 130 |
24,398,716 | https://en.wikipedia.org/wiki/C17H27N3O4S | The molecular formula C17H27N3O4S may refer to:
Amisulpride
SEP-4199 | C17H27N3O4S | Chemistry | 44 |
1,459,075 | https://en.wikipedia.org/wiki/Parallel%20tempering | Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures, one may have a stable state different from low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states.
More specifically, parallel tempering (also known as replica exchange MCMC sampling) is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang, then extended by Charles J. Geyer, and later developed further by Giorgio Parisi, Koji Hukushima and Koji Nemoto, and others. Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa. This results in a very robust ensemble which is able to sample both low and high energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
Background
Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature T. At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down.
If we were to run two simulations at temperatures separated by a ΔT, we would find that if ΔT is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For ΔT = 0 the overlap should approach 1.
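As an illustrative sketch, the overlap defined above can be estimated numerically as the shared area of two normalized energy histograms, i.e. the sum of the bin-wise minima; the Gaussian toy energies below are assumed purely for demonstration and do not come from any particular physical system.

```python
import numpy as np

def histogram_overlap(energies_1, energies_2, bins=50):
    """Shared area of two normalized energy histograms (1.0 means identical)."""
    lo = min(energies_1.min(), energies_2.min())
    hi = max(energies_1.max(), energies_2.max())
    h1, _ = np.histogram(energies_1, bins=bins, range=(lo, hi))
    h2, _ = np.histogram(energies_2, bins=bins, range=(lo, hi))
    p1 = h1 / h1.sum()   # normalize by the total number of samples
    p2 = h2 / h2.sum()
    return np.minimum(p1, p2).sum()

# Toy energy samples standing in for runs at two nearby temperatures (assumed values)
rng = np.random.default_rng(0)
e_cold = rng.normal(loc=10.0, scale=1.0, size=10_000)
e_hot = rng.normal(loc=11.0, scale=1.2, size=10_000)
print(f"overlap ~ {histogram_overlap(e_cold, e_hot):.2f}")
```

Temperature ladders are commonly tuned so that this overlap, and hence the swap acceptance rate, stays roughly constant between neighbouring replicas.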
Another way to interpret this overlap is to say that system configurations sampled at temperature T1 are likely to appear during a simulation at T2. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at T1 and T2. At a given Monte Carlo step we can update the global system by swapping the configuration of the two systems, or alternatively trading the two temperatures. The update is accepted according to the Metropolis–Hastings criterion with probability p = min(1, exp[(β1 − β2)(E1 − E2)]), where βi = 1/(kB Ti) is the inverse temperature and Ei is the instantaneous energy of the configuration at temperature Ti, and otherwise the update is rejected. The detailed balance condition has to be satisfied by ensuring that the reverse update is equally likely, all else being equal. This can be ensured by appropriately choosing regular Monte Carlo updates or parallel tempering updates with probabilities that are independent of the configurations of the two systems or of the Monte Carlo step.
This update can be generalized to more than two systems.
By a careful choice of temperatures and number of systems one can achieve an improvement in the mixing properties of a set of Monte Carlo simulations that exceeds the extra computational cost of running parallel simulations.
Other considerations to be made: increasing the number of different temperatures can have a detrimental effect, as one can think of the 'lateral' movement of a given system across temperatures as a diffusion process.
The setup is important, as there must be a practical histogram overlap to achieve a reasonable probability of lateral moves.
The parallel tempering method can be used as a super simulated annealing that does not need restart, since a system at high temperature can feed new local optimizers to a system at low temperature, allowing tunneling between metastable states and improving convergence to a global optimum.
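A minimal sketch of the scheme, assuming a toy one-dimensional double-well potential: each replica performs ordinary Metropolis updates at its own temperature, and neighbouring replicas periodically attempt to swap configurations using the acceptance rule quoted above. The potential, the temperature ladder and the proposal width are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # Toy double-well potential with minima at x = -1 and x = +1 (illustrative only)
    return (x**2 - 1.0)**2

temperatures = np.array([0.05, 0.1, 0.2, 0.5, 1.0])  # assumed ladder, cold to hot
betas = 1.0 / temperatures
x = rng.normal(size=len(temperatures))                # one configuration per replica

for step in range(20_000):
    # Ordinary Metropolis update within each replica at its own temperature
    for i, beta in enumerate(betas):
        proposal = x[i] + rng.normal(scale=0.3)
        dE = energy(proposal) - energy(x[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            x[i] = proposal
    # Occasionally attempt to swap the configurations of two neighbouring temperatures
    if step % 10 == 0:
        i = rng.integers(len(betas) - 1)
        delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if delta >= 0 or rng.random() < np.exp(delta):
            x[i], x[i + 1] = x[i + 1], x[i]

print("final position of the coldest replica:", x[0])
```

Without the swap moves, the coldest replica would tend to stay trapped in whichever well it started in; the exchanges let configurations equilibrated at high temperature percolate down the ladder.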
Implementations
See also
Bennett acceptance ratio
References
Markov chain Monte Carlo
Molecular dynamics
Heuristics
Statistical mechanics
Stochastic optimization | Parallel tempering | Physics,Chemistry | 887 |
73,047,762 | https://en.wikipedia.org/wiki/Cobalt%28II%29%20stearate | Cobalt(II) stearate is a metal-organic compound, a salt of cobalt and stearic acid with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis
An exchange reaction of sodium stearate and cobalt dichloride:
2 CH3(CH2)16COONa + CoCl2 → Co(CH3(CH2)16COO)2 + 2 NaCl
Physical properties
Cobalt(II) stearate forms a violet substance, occurring in several crystal structures.
It is insoluble in water.
Uses
Cobalt(II) stearate is a high-performance bonding agent for rubber. The compound is suitable for applications in natural rubber, cisdene, styrene-butadiene rubber, and their compounds to bond easily with brass- or zinc-plated steel cord or metal plates as well as various bare steel, especially for bonding with brass plating of various thicknesses.
References
Stearates
Cobalt(II) compounds | Cobalt(II) stearate | Chemistry | 184 |
5,171,645 | https://en.wikipedia.org/wiki/F%20Centauri | F Centauri is a suspected astrometric binary star system in the southern constellation of Centaurus. It has a reddish hue and is visible to the naked eye with an apparent visual magnitude that fluctuates around +5.01. The system is located at a distance of approximately 450 light years from the Sun based on parallax, and it has an absolute magnitude of −0.87. O. J. Eggen flagged this star as a member of the Hyades Supercluster.
The visible component is an aging red giant star on the asymptotic giant branch with a stellar classification of M1III, indicating it has exhausted the supply of both hydrogen and helium at its core and is cooling and expanding. It is a suspected variable star of unknown type that has been measured ranging in brightness from visual magnitude 4.94 down to 5.07. At present it has 48 times the radius of the Sun. It is radiating 502 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 3,948 K.
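As a quick consistency check, the quoted luminosity follows from the Stefan–Boltzmann law, L/L_sun = (R/R_sun)² (T_eff/T_sun)⁴, applied to the quoted radius and temperature; the solar effective temperature used below is the standard nominal value and is an assumption of this sketch rather than a figure from the article.

```python
# Stefan–Boltzmann scaling: L / L_sun = (R / R_sun)**2 * (T_eff / T_sun)**4
R_RATIO = 48        # radius in solar radii (figure quoted above)
T_EFF = 3948        # effective temperature in kelvin (figure quoted above)
T_SUN = 5772        # assumed nominal solar effective temperature in kelvin

l_ratio = R_RATIO ** 2 * (T_EFF / T_SUN) ** 4
print(f"L is roughly {l_ratio:.0f} L_sun")   # about 504, consistent with the quoted 502
```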
References
M-type giants
Asymptotic-giant-branch stars
Suspected variables
Astrometric binaries
Centaurus
Centauri, F
Durchmusterung objects
107079
60059
4682 | F Centauri | Astronomy | 263 |
4,061,679 | https://en.wikipedia.org/wiki/Pieter%20Van%20den%20Abeele | Pieter Van den Abeele is a computer programmer and the founder of the PowerPC version of Gentoo Linux, a distribution of the Linux computer operating system. He founded Gentoo for OS X, for which he received a scholarship from Apple Computer. In 2004 Pieter was invited to the OpenSolaris pilot program and assisted Sun Microsystems with building a development ecosystem around Solaris. Pieter was nominated for the OpenSolaris Community Advisory Board and managed a team of developers to make Gentoo available on the Solaris operating system as well. Pieter is a co-author of the Gentoo handbook.
The teams managed by Pieter Van den Abeele have shaped the PowerPC landscape with several "firsts". Gentoo/PowerPC was the first distribution to introduce PowerPC Live CDs. Gentoo also beat Apple to releasing a full 64-bit PowerPC userland environment for the IBM PowerPC 970 (G5) processor.
His Gentoo-based Home Media and Communication System, based on a Freescale Semiconductor PowerPC 7447 processor won the Best of Show award at the inaugural 2005 Freescale Technology Forum in Orlando, Florida. Pieter is also a member of the Power.org consortium and participates in committees and workgroups focusing on disruptive business plays around the Power Architecture.
References
People in information technology
Gentoo Linux people
Living people
Year of birth missing (living people) | Pieter Van den Abeele | Technology | 296 |
434,221 | https://en.wikipedia.org/wiki/James%20Webb%20Space%20Telescope | The James Webb Space Telescope (JWST) is a space telescope designed to conduct infrared astronomy. As the largest telescope in space, it is equipped with high-resolution and high-sensitivity instruments, allowing it to view objects too old, distant, or faint for the Hubble Space Telescope. This enables investigations across many fields of astronomy and cosmology, such as observation of the first stars and the formation of the first galaxies, and detailed atmospheric characterization of potentially habitable exoplanets.
Although the Webb's mirror diameter is 2.7 times larger than that of the Hubble Space Telescope, it produces images of comparable sharpness because it observes in the longer-wavelength infrared spectrum. The longer the wavelength of the spectrum, the larger the information-gathering surface required (mirrors in the infrared spectrum or antenna area in the millimeter and radio ranges) for an image comparable in clarity to the visible spectrum of the Hubble Space Telescope.
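The comparison can be made concrete with the Rayleigh diffraction criterion, θ ≈ 1.22 λ/D; the 2.4 m and 6.5 m apertures and the representative wavelengths below are commonly quoted values assumed for this sketch rather than figures taken from the text.

```python
ARCSEC_PER_RADIAN = 206_265

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Rayleigh criterion: theta ~ 1.22 * wavelength / diameter, in arcseconds."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RADIAN

# Hubble: assumed 2.4 m mirror observing visible light at 0.5 micrometres
print(f"Hubble: {diffraction_limit_arcsec(0.5e-6, 2.4):.3f} arcsec")
# Webb: assumed 6.5 m mirror observing near-infrared light at 1.35 micrometres
print(f"Webb:   {diffraction_limit_arcsec(1.35e-6, 6.5):.3f} arcsec")
```

Because the assumed wavelength is about 2.7 times longer while the mirror is about 2.7 times wider, the two effects cancel and both instruments resolve roughly 0.05 arcseconds.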
The Webb was launched on 25 December 2021 on an Ariane 5 rocket from Kourou, French Guiana. In January 2022 it arrived at its destination, a solar orbit near the Sun–Earth L2 Lagrange point, about from Earth. The telescope's first image was released to the public on 11 July 2022.
The U.S. National Aeronautics and Space Administration (NASA) led Webb's design and development and partnered with two main agencies: the European Space Agency (ESA) and the Canadian Space Agency (CSA). The NASA Goddard Space Flight Center in Maryland managed telescope development, while the Space Telescope Science Institute in Baltimore on the Homewood Campus of Johns Hopkins University operates Webb. The primary contractor for the project was Northrop Grumman.
The telescope is named after James E. Webb, who was the administrator of NASA from 1961 to 1968 during the Mercury, Gemini, and Apollo programs.
Webb's primary mirror consists of 18 hexagonal mirror segments made of gold-plated beryllium, which together create a mirror, compared with Hubble's . This gives Webb a light-collecting area of about , about six times that of Hubble. Unlike Hubble, which observes in the near ultraviolet and visible (0.1 to 0.8 μm), and near infrared (0.8–2.5 μm) spectra, Webb observes a lower frequency range, from long-wavelength visible light (red) through mid-infrared (0.6–28.5 μm). The telescope must be kept extremely cold, below , so that the infrared light emitted by the telescope itself does not interfere with the collected light. Its five-layer sunshield protects it from warming by the Sun, Earth, and Moon.
Initial designs for the telescope, then named the Next Generation Space Telescope, began in 1996. Two concept studies were commissioned in 1999, for a potential launch in 2007 and a US$1 billion budget. The program was plagued with enormous cost overruns and delays. A major redesign was accomplished in 2005, with construction completed in 2016, followed by years of exhaustive testing, at a total cost of US$10 billion.
Features
The mass of the James Webb Space Telescope (JWST) is about half that of the Hubble Space Telescope. Webb has a gold-coated beryllium primary mirror made up of 18 separate hexagonal mirrors. The mirror has a polished area of , of which is obscured by the secondary support struts, giving a total collecting area of . This is over six times larger than the collecting area of Hubble's diameter mirror, which has a collecting area of . The mirror has a gold coating to provide infrared reflectivity and this is covered by a thin layer of glass for durability.
Webb is designed primarily for near-infrared astronomy, but can also see orange and red visible light, as well as the mid-infrared region, depending on the instrument being used. It can detect objects up to 100 times fainter than Hubble can, and objects much earlier in the history of the universe, back to redshift z≈20 (about 180 million years cosmic time after the Big Bang). For comparison, the earliest stars are thought to have formed between z≈30 and z≈20 (100–180 million years cosmic time), and the first galaxies may have formed around redshift z≈15 (about 270 million years cosmic time). Hubble is unable to see further back than very early reionization at about z≈11.1 (galaxy GN-z11, 400 million years cosmic time).
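To see why such redshifts demand an infrared telescope, one can compute where a rest-frame ultraviolet line lands after cosmological stretching, λ_obs = λ_rest (1 + z); the choice of the hydrogen Lyman-alpha line as the tracer is an assumption of this illustration.

```python
LYMAN_ALPHA_NM = 121.6   # assumed rest-frame ultraviolet line of neutral hydrogen

def observed_wavelength_um(rest_wavelength_nm, z):
    """Cosmological redshift: lambda_observed = lambda_rest * (1 + z), in micrometres."""
    return rest_wavelength_nm * (1 + z) / 1000.0

for z in (11, 15, 20):
    print(f"z = {z}: Lyman-alpha observed at {observed_wavelength_um(LYMAN_ALPHA_NM, z):.2f} micrometres")
# Roughly 1.5, 1.9 and 2.6 micrometres: well into the near-infrared part of Webb's range
```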
The design emphasizes the near to mid-infrared for several reasons:
high-redshift (very early and distant) objects have their visible emissions shifted into the infrared, and therefore their light can be observed only via infrared astronomy;
infrared light passes more easily through dust clouds than visible light;
colder objects such as debris disks and planets emit most strongly in the infrared;
these infrared bands are difficult to study from the ground or by existing space telescopes such as Hubble.
Ground-based telescopes must look through Earth's atmosphere, which is opaque in many infrared bands (see figure at right). Even where the atmosphere is transparent, many of the target chemical compounds, such as water, carbon dioxide, and methane, also exist in the Earth's atmosphere, vastly complicating analysis. Existing space telescopes such as Hubble cannot study these bands since their mirrors are insufficiently cool (the Hubble mirror is maintained at about ) which means that the telescope itself radiates strongly in the relevant infrared bands.
Webb can also observe objects in the Solar System at an angle of more than 85° from the Sun and having an apparent angular rate of motion less than 0.03 arc seconds per second. This includes Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, their satellites, and comets, asteroids and minor planets at or beyond the orbit of Mars. Webb has the near-IR and mid-IR sensitivity to be able to observe virtually all known Kuiper Belt Objects. In addition, it can observe opportunistic and unplanned targets within 48 hours of a decision to do so, such as supernovae and gamma ray bursts.
Location and orbit
Webb operates in a halo orbit, circling around a point in space known as the Sun–Earth L2 Lagrange point, approximately beyond Earth's orbit around the Sun. Its actual position varies between about from L2 as it orbits, keeping it out of both Earth and Moon's shadow. By way of comparison, Hubble orbits above Earth's surface, and the Moon is roughly from Earth. Objects near this Sun–Earth point can orbit the Sun in synchrony with the Earth, allowing the telescope to remain at a roughly constant distance with continuous orientation of its sunshield and equipment bus toward the Sun, Earth and Moon. Combined with its wide shadow-avoiding orbit, the telescope can simultaneously block incoming heat and light from all three of these bodies and avoid even the smallest changes of temperature from Earth and Moon shadows that would affect the structure, yet still maintain uninterrupted solar power and Earth communications on its sun-facing side. This arrangement keeps the temperature of the spacecraft constant and below the necessary for faint infrared observations.
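The distance scale of L2 can be reproduced to first order with the standard restricted three-body approximation r ≈ a (m / 3M)^(1/3); the astronomical unit and the Earth-to-Sun mass ratio below are assumed textbook values rather than figures from the article.

```python
AU_KM = 1.496e8                    # assumed Sun–Earth distance in kilometres
EARTH_TO_SUN_MASS_RATIO = 3.0e-6   # assumed approximate mass ratio

r_l2_km = AU_KM * (EARTH_TO_SUN_MASS_RATIO / 3) ** (1 / 3)
print(f"L2 lies roughly {r_l2_km:,.0f} km beyond Earth's orbit")   # about 1.5 million km
```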
Sunshield protection
To make observations in the infrared spectrum, Webb must be kept under ; otherwise, infrared radiation from the telescope itself would overwhelm its instruments. Its large sunshield blocks light and heat from the Sun, Earth, and Moon, and its position near the Sun–Earth keeps all three bodies on the same side of the spacecraft at all times. Its halo orbit around the L2 point avoids the shadow of the Earth and Moon, maintaining a constant environment for the sunshield and solar arrays. The resulting stable temperature for the structures on the dark side is critical to maintaining precise alignment of the primary mirror segments.
The sunshield consists of five layers, each approximately as thin as a human hair. Each layer is made of Kapton E film, coated with aluminum on both sides. The two outermost layers have an additional coating of doped silicon on the Sun-facing sides, to better reflect the Sun's heat back into space. Accidental tears of the delicate film structure during deployment testing in 2018 led to further delays to the telescope deployment.
The sunshield was designed to be folded twelve times so that it would fit within the Ariane 5 rocket's payload fairing, which is in diameter, and long. The shield's fully deployed dimensions were planned as .
Keeping within the shadow of the sunshield limits the field of regard of Webb at any given time. The telescope can see 40 percent of the sky from any one position, but can see all of the sky over a period of six months.
Optics
Webb's primary mirror is a -diameter gold-coated beryllium reflector with a collecting area of . If it had been designed as a single, large mirror, it would have been too large for existing launch vehicles. The mirror is therefore composed of 18 hexagonal segments (a technique pioneered by Guido Horn d'Arturo), which unfolded after the telescope was launched. Image plane wavefront sensing through phase retrieval is used to position the mirror segments in the correct location using precise actuators. Subsequent to this initial configuration, they only need occasional updates every few days to retain optimal focus. This is unlike terrestrial telescopes, for example the Keck telescopes, which continually adjust their mirror segments using active optics to overcome the effects of gravitational and wind loading. The Webb telescope uses 132 small actuation motors to position and adjust the optics. The actuators can position the mirror with 10 nanometer accuracy.
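As a sketch of how the segmented design adds up, the area of a regular hexagon of flat-to-flat width d is (√3/2)·d²; the 1.32 m segment size used below is a commonly quoted figure assumed here for illustration, and gaps between segments and the secondary-mirror obstruction are ignored.

```python
import math

def hexagon_area_m2(flat_to_flat_m):
    """Area of a regular hexagon of flat-to-flat width d: A = (sqrt(3) / 2) * d**2."""
    return math.sqrt(3) / 2 * flat_to_flat_m ** 2

SEGMENT_FLAT_TO_FLAT_M = 1.32   # assumed, commonly quoted segment size
N_SEGMENTS = 18

per_segment = hexagon_area_m2(SEGMENT_FLAT_TO_FLAT_M)
print(f"one segment is about {per_segment:.2f} m^2")
print(f"{N_SEGMENTS} segments give about {N_SEGMENTS * per_segment:.1f} m^2 before gaps and obstruction are subtracted")
```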
Webb's optical design is a three-mirror anastigmat, which makes use of curved secondary and tertiary mirrors to deliver images that are free from optical aberrations over a wide field. The secondary mirror is in diameter. In addition, there is a fine steering mirror which can adjust its position many times per second to provide image stabilization. Point light sources in images taken by Webb have six diffraction spikes plus two fainter ones, due to the hexagonal shape of the primary mirror segments.
Scientific instruments
The Integrated Science Instrument Module (ISIM) is a framework that provides electrical power, computing resources, cooling capability as well as structural stability to the Webb telescope. It is made with bonded graphite-epoxy composite attached to the underside of Webb's telescope structure. The ISIM holds the four science instruments and a guide camera.
NIRCam (Near Infrared Camera) is an infrared imager which has spectral coverage ranging from the edge of the visible (0.6 μm) through to the near infrared (5 μm). There are 10 sensors each of 4 megapixels. NIRCam serves as the observatory's wavefront sensor, which is required for wavefront sensing and control activities, used to align and focus the main mirror segments. NIRCam was built by a team led by the University of Arizona, with principal investigator Marcia J. Rieke.
NIRSpec (Near Infrared Spectrograph) performs spectroscopy over the same wavelength range. It was built by the European Space Agency (ESA) at ESTEC in Noordwijk, Netherlands. The leading development team includes members from Airbus Defence and Space, Ottobrunn and Friedrichshafen, Germany, and the Goddard Space Flight Center; with Pierre Ferruit (École normale supérieure de Lyon) as NIRSpec project scientist. The NIRSpec design provides three observing modes: a low-resolution mode using a prism, an R~1000 multi-object mode, and an R~2700 integral field unit or long-slit spectroscopy mode. Switching of the modes is done by operating a wavelength preselection mechanism called the Filter Wheel Assembly, and selecting a corresponding dispersive element (prism or grating) using the Grating Wheel Assembly mechanism. Both mechanisms are based on the successful ISOPHOT wheel mechanisms of the Infrared Space Observatory. The multi-object mode relies on a complex micro-shutter mechanism to allow for simultaneous observations of hundreds of individual objects anywhere in NIRSpec's field of view. There are two sensors, each of 4 megapixels.
MIRI (Mid-Infrared Instrument) measures the mid-to-long-infrared wavelength range from 5 to 27 μm. It contains both a mid-infrared camera and an imaging spectrometer. MIRI was developed as a collaboration between NASA and a consortium of European countries, and is led by George Rieke (University of Arizona) and Gillian Wright (UK Astronomy Technology Centre, Edinburgh, Scotland). The temperature of the MIRI must not exceed : a helium gas mechanical cooler sited on the warm side of the environmental shield provides this cooling.
FGS/NIRISS (Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph), led by the Canadian Space Agency (CSA) under project scientist John Hutchings (Herzberg Astronomy and Astrophysics Research Centre), is used to stabilize the line-of-sight of the observatory during science observations. Measurements by the FGS are used both to control the overall orientation of the spacecraft and to drive the fine steering mirror for image stabilization. The CSA also provided a Near Infrared Imager and Slitless Spectrograph (NIRISS) module for astronomical imaging and spectroscopy in the 0.8 to 5 μm wavelength range, led by principal investigator René Doyon at the Université de Montréal. Although they are often referred together as a unit, the NIRISS and FGS serve entirely different purposes, with one being a scientific instrument and the other being a part of the observatory's support infrastructure.
NIRCam and MIRI feature starlight-blocking coronagraphs for observation of faint targets such as extrasolar planets and circumstellar disks very close to bright stars.
Spacecraft bus
The spacecraft bus is the primary support component of the JWST, hosting a multitude of computing, communication, electric power, propulsion, and structural parts. Along with the sunshield, it forms the spacecraft element of the space telescope. The spacecraft bus is on the Sun-facing "warm" side of the sunshield and operates at a temperature of about .
The structure of the spacecraft bus has a mass of , and must support the space telescope. It is made primarily of graphite composite material. The assembly was completed in California in 2015. It was integrated with the rest of the space telescope leading to its 2021 launch. The spacecraft bus can rotate the telescope with a pointing precision of one arcsecond, and isolates vibration to two milliarcseconds.
Webb has two pairs of rocket engines (one pair for redundancy) to make course corrections on the way to L2 and for station keeping (maintaining the correct position in the halo orbit). Eight smaller thrusters are used for attitude control (the correct pointing of the spacecraft). The engines use hydrazine fuel ( at launch) and dinitrogen tetroxide as oxidizer ( at launch).
Servicing
Webb is not intended to be serviced in space. A crewed mission to repair or upgrade the observatory, as was done for Hubble, would not be possible, and according to NASA Associate Administrator Thomas Zurbuchen, despite best efforts, an uncrewed remote mission was found to be beyond available technology at the time Webb was designed. During the long Webb testing period, NASA officials referred to the idea of a servicing mission, but no plans were announced. Since the successful launch, NASA has stated that nevertheless limited accommodation was made to facilitate future servicing missions. These accommodations included precise guidance markers in the form of crosses on the surface of Webb, for use by remote servicing missions, as well as refillable fuel tanks, removable heat protectors, and accessible attachment points.
Software
Ilana Dashevsky and Vicki Balzano write that Webb uses a modified version of JavaScript, called Nombas ScriptEase 5.00e, for its operations; it follows the ECMAScript standard and "allows for a modular design flow, where on-board scripts call lower-level scripts that are defined as functions". "The JWST science operations will be driven by ASCII (instead of binary command blocks) on-board scripts, written in a customized version of JavaScript. The script interpreter is run by the flight software, which is written in the programming language C++. The flight software operates the spacecraft and the science instruments."
Comparison with other telescopes
The desire for a large infrared space telescope traces back decades. In the United States, the Space Infrared Telescope Facility (later called the Spitzer Space Telescope) was planned while the Space Shuttle was in development, and the potential for infrared astronomy was acknowledged at that time. Unlike ground telescopes, space observatories are free from atmospheric absorption of infrared light. Space observatories opened a "new sky" for astronomers.
However, there is a challenge involved in the design of infrared telescopes: they need to stay extremely cold, and the longer the wavelength of infrared, the colder they need to be. If not, the background heat of the device itself overwhelms the detectors, making it effectively blind. This can be overcome by careful design. One method is to put the key instruments in a dewar with an extremely cold substance, such as liquid helium. The coolant will slowly vaporize, limiting the lifetime of the instrument from as short as a few months to a few years at most.
It is also possible to maintain a low temperature by designing the spacecraft to enable near-infrared observations without a supply of coolant, as with the extended missions of the Spitzer Space Telescope and the Wide-field Infrared Survey Explorer, which operated at reduced capacity after coolant depletion. Another example is Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) instrument, which started out using a block of nitrogen ice that depleted after a couple of years, but was then replaced during the STS-109 servicing mission with a cryocooler that worked continuously. The Webb Space Telescope is designed to cool itself without a dewar, using a combination of sunshields and radiators, with the mid-infrared instrument using an additional cryocooler.
Webb's delays and cost increases have been compared to those of its predecessor, the Hubble Space Telescope. When Hubble formally started in 1972, it had an estimated development cost of US$300 million, but by the time it was sent into orbit in 1990, the cost was about four times that. In addition, new instruments and servicing missions increased the cost to at least US$9 billion by 2006.
Development history
Background (development to 2003)
Discussions of a Hubble follow-on started in the 1980s, but serious planning began in the early 1990s. The Hi-Z telescope concept was developed between 1989 and 1994: a fully baffled aperture infrared telescope that would recede to an orbit at 3 astronomical units (AU). This distant orbit would have benefited from reduced light noise from zodiacal dust. Other early plans called for a NEXUS precursor telescope mission.
Correcting the flawed optics of the Hubble Space Telescope (HST) in its first years played a significant role in the birth of Webb. In 1993, NASA conducted STS-61, the Space Shuttle mission that replaced HST's camera and installed a retrofit for its imaging spectrograph to compensate for the spherical aberration in its primary mirror.
The HST & Beyond Committee was formed in 1994 "to study possible missions and programs for optical-ultraviolet astronomy in space for the first decades of the 21st century." Emboldened by HST's success, its 1996 report explored the concept of a larger and much colder, infrared-sensitive telescope that could reach back in cosmic time to the birth of the first galaxies. This high-priority science goal was beyond the HST's capability because, as a warm telescope, it is blinded by infrared emission from its own optical system. In addition to recommendations to extend the HST mission to 2005 and to develop technologies for finding planets around other stars, NASA embraced the chief recommendation of HST & Beyond for a large, cold space telescope (radiatively cooled far below 0 °C), and began the planning process for the future Webb telescope.
Preparation for the 2000 Astronomy and Astrophysics Decadal Survey (a literature review produced by the United States National Research Council that includes identifying research priorities and making recommendations for the upcoming decade) included further development of the scientific program for what became known as the Next Generation Space Telescope, and advancements in relevant technologies by NASA. As it matured, studying the birth of galaxies in the young universe and searching for planets around other stars – the prime goals coalesced as "Origins" by HST & Beyond – became prominent.
As hoped, the NGST received the highest ranking in the 2000 Decadal Survey.
NASA Administrator Dan Goldin coined the phrase "faster, better, cheaper" and opted for the next big paradigm shift for astronomy, namely breaking the barrier of a single mirror. That meant going from "eliminate moving parts" to "learn to live with moving parts" (i.e. segmented optics). With the goal of reducing mass density tenfold, silicon carbide with a very thin layer of glass on top was considered first, but beryllium was selected in the end.
The mid-1990s era of "faster, better, cheaper" produced the NGST concept, with an aperture to be flown to , roughly estimated to cost US$500 million. In 1997, NASA worked with the Goddard Space Flight Center, Ball Aerospace & Technologies, and TRW to conduct technical requirement and cost studies of the three different concepts, and in 1999 selected Lockheed Martin and TRW for preliminary concept studies. Launch was at that time planned for 2007, but the launch date was pushed back many times (see table further down).
In 2002, the project was renamed after NASA's second administrator (1961–1968), James E. Webb (1906–1992). Webb led the agency during the Apollo program and established scientific research as a core NASA activity.
In 2003, NASA awarded TRW the US$824.8 million prime contract for Webb. The design called for a de-scoped primary mirror and a launch date of 2010. Later that year, TRW was acquired by Northrop Grumman in a hostile bid and became Northrop Grumman Space Technology.
Early development and replanning (2003–2007)
Development was managed by NASA's Goddard Space Flight Center in Greenbelt, Maryland, with John C. Mather as its project scientist. The primary contractor was Northrop Grumman Aerospace Systems, responsible for developing and building the spacecraft element, which included the satellite bus, the sunshield, the Deployable Tower Assembly (DTA), which connects the Optical Telescope Element (OTE) to the spacecraft bus, and the Mid Boom Assembly (MBA), which helps to deploy the large sunshields on orbit. Ball Aerospace & Technologies was subcontracted to develop and build the OTE itself and the Integrated Science Instrument Module (ISIM).
Cost growth revealed in spring 2005 led to an August 2005 re-planning. The primary technical outcomes of the re-planning were significant changes in the integration and test plans, a 22-month launch delay (from 2011 to 2013), and elimination of system-level testing for observatory modes at wavelengths shorter than 1.7 μm. Other major features of the observatory were unchanged. Following the re-planning, the project was independently reviewed in April 2006.
In the 2005 re-plan, the life-cycle cost of the project was estimated at US$4.5 billion. This comprised approximately US$3.5 billion for design, development, launch and commissioning, and approximately US$1.0 billion for ten years of operations. The ESA agreed in 2004 to contribute about €300 million, including the launch. The CSA pledged CA$39 million in 2007 and in 2012 delivered its contributions in equipment to point the telescope and detect atmospheric conditions on distant planets.
Detailed design and construction (2007–2021)
In January 2007, nine of the ten technology development items in the project successfully passed a Non-Advocate Review. These technologies were deemed sufficiently mature to retire significant risks in the project. The remaining technology development item (the MIRI cryocooler) completed its technology maturation milestone in April 2007. This technology review represented the beginning step in the process that ultimately moved the project into its detailed design phase (Phase C). By May 2007, costs were still on target. In March 2008, the project successfully completed its Preliminary Design Review (PDR). In April 2008, the project passed the Non-Advocate Review. Other passed reviews include the Integrated Science Instrument Module review in March 2009, the Optical Telescope Element review completed in October 2009, and the Sunshield review completed in January 2010.
In April 2010, the telescope passed the technical portion of its Mission Critical Design Review (MCDR). Passing the MCDR signified that the integrated observatory could meet all science and engineering requirements for its mission. The MCDR encompassed all previous design reviews. The project schedule underwent review during the months following the MCDR, in a process called the Independent Comprehensive Review Panel, which led to a re-plan of the mission aiming for a 2015 launch, but possibly as late as 2018. By 2010, cost over-runs were impacting other projects, though Webb itself remained on schedule.
By 2011, the Webb project was in the final design and fabrication phase (Phase C).
Assembly of the hexagonal segments of the primary mirror, which was done via robotic arm, began in November 2015 and was completed on 3 February 2016. The secondary mirror was installed on 3 March 2016. Final construction of the Webb telescope was completed in November 2016, after which extensive testing procedures began.
In March 2018, NASA delayed Webb's launch an additional two years to May 2020 after the telescope's sunshield ripped during a practice deployment and the sunshield's cables did not sufficiently tighten. In June 2018, NASA delayed the launch by an additional 10 months to March 2021, based on the assessment of the independent review board convened after the failed March 2018 test deployment. The review identified 344 potential single-point failures in Webb's launch and deployment – tasks that had no alternative or means of recovery if unsuccessful, and therefore had to succeed for the telescope to work. In August 2019, the mechanical integration of the telescope was completed, a milestone originally scheduled for 2007, twelve years earlier.
After construction was completed, Webb underwent final tests at Northrop Grumman's historic Space Park in Redondo Beach, California. A ship carrying the telescope left California on 26 September 2021, passed through the Panama Canal, and arrived in French Guiana on 12 October 2021.
Cost and schedule issues
NASA's lifetime cost for the project is expected to be US$9.7 billion, of which US$8.8 billion was spent on spacecraft design and development and US$861 million is planned to support five years of mission operations. Representatives from ESA and CSA stated their project contributions amount to approximately €700 million and CA$200 million, respectively.
A study in 1984 by the Space Science Board estimated that building a next-generation infrared observatory in orbit would cost US$4 billion (US$7B in 2006 dollars, or $10B in 2020 dollars). While this came close to the final cost of Webb, the first NASA design considered in the late 1990s was more modest, aiming for a $1 billion price tag over 10 years of construction. Over time the design expanded, contingency funding was added, and the schedule slipped.
By 2008, when the project entered preliminary design review and was formally confirmed for construction, over US$1 billion had already been spent on developing the telescope, and the total budget was estimated at US$5 billion. In summer 2010, the mission passed its Critical Design Review (CDR) with excellent grades on all technical matters, but schedule and cost slips at that time prompted Maryland U.S. Senator Barbara Mikulski to call for an external review of the project. The Independent Comprehensive Review Panel (ICRP), chaired by J. Casani (JPL), found that the earliest possible launch date was in late 2015 at an extra cost of US$1.5 billion (for a total of US$6.5 billion). They also pointed out that this would have required extra funding in FY2011 and FY2012 and that any later launch date would lead to a higher total cost.
On 6 July 2011, the United States House of Representatives' appropriations committee on Commerce, Justice, and Science moved to cancel the James Webb project by proposing an FY2012 budget that removed US$1.9 billion from NASA's overall budget, of which roughly one quarter was for Webb. US$3 billion had been spent and 75% of its hardware was in production. This budget proposal was approved by subcommittee vote the following day. The committee charged that the project was "billions of dollars over budget and plagued by poor management". In response, the American Astronomical Society issued a statement in support of Webb, as did Senator Mikulski. A number of editorials supporting Webb appeared in the international press during 2011 as well. In November 2011, Congress reversed plans to cancel Webb and instead capped additional funding to complete the project at US$8 billion.
While similar issues had affected other major NASA projects such as the Hubble telescope, some scientists expressed concerns about growing costs and schedule delays for the Webb telescope, worrying that its budget might be competing with those of other space science programs. A 2010 Nature article described Webb as "the telescope that ate astronomy". NASA continued to defend the budget and timeline of the program to Congress.
In 2018, Gregory L. Robinson was appointed as the new director of the Webb program. Robinson was credited with raising the program's schedule efficiency (how many measures were completed on time) from 50% to 95%. For his role in improving the performance of the Webb program, Robinson's supervisor, Thomas Zurbuchen, called him "the most effective leader of a mission I have ever seen in the history of NASA." In July 2022, after Webb's commissioning process was complete and it began transmitting its first data, Robinson retired following a 33-year career at NASA.
On 27 March 2018, NASA pushed back the launch to May 2020 or later, with a final cost estimate to come after a new launch window was determined with the ESA. In 2019, its mission cost cap was increased by US$800 million. After launch windows were paused in 2020 due to the COVID-19 pandemic, Webb was launched at the end of 2021, with a total cost of just under US$10 billion.
No single area drove the cost. For future large telescopes, there are five major areas critical to controlling overall cost:
System complexity
Critical path and overhead
Verification challenges
Programmatic constraints
Early integration and test considerations
Partnership
NASA, ESA and CSA have collaborated on the telescope since 1996. ESA's participation in construction and launch was approved by its members in 2003 and an agreement was signed between ESA and NASA in 2007. In exchange for full partnership, representation and access to the observatory for its astronomers, ESA is providing the NIRSpec instrument, the Optical Bench Assembly of the MIRI instrument, an Ariane 5 ECA launcher, and manpower to support operations. The CSA provided the Fine Guidance Sensor and the Near-Infrared Imager Slitless Spectrograph and manpower to support operations.
Several thousand scientists, engineers, and technicians spanning 15 countries have contributed to the build, test and integration of Webb. A total of 258 companies, government agencies, and academic institutions participated in the pre-launch project; 142 from the United States, 104 from 12 European countries (including 21 from the U.K., 16 from France, 12 from Germany and 7 international), and 12 from Canada. Other countries, such as Australia, were involved as NASA partners in post-launch operation.
Participating countries:
Naming concerns
In 2002, NASA administrator (2001–2004) Sean O'Keefe made the decision to name the telescope after James E. Webb, the administrator of NASA from 1961 to 1968 during the Mercury, Gemini, and much of the Apollo programs.
In 2015, concerns were raised around Webb's possible role in the lavender scare, the mid-20th-century persecution by the U.S. government targeting homosexuals in federal employment. In 2022, NASA released a report of an investigation, based on an examination of more than 50,000 documents. The report found "no available evidence directly links Webb to any actions or follow-up related to the firing of individuals for their sexual orientation", either in his time in the State Department or at NASA.
Mission goals
The James Webb Space Telescope has four key goals:
to search for light from the first stars and galaxies that formed in the universe after the Big Bang
to study galaxy formation and evolution
to understand star formation and planet formation
to study planetary systems and the origins of life
These goals can be accomplished more effectively by observation in near-infrared light rather than light in the visible part of the spectrum. For this reason, Webb's instruments will not measure visible or ultraviolet light as the Hubble Space Telescope does, but will have a much greater capacity to perform infrared astronomy. Webb will be sensitive to a range of wavelengths from 0.6 to 28 μm (corresponding respectively to orange light and deep infrared radiation at about ).
Webb may be used to gather information on the dimming light of the star KIC 8462852, which was discovered in 2015 and has some abnormal light-curve properties.
Additionally, it will be able to tell if an exoplanet has methane in its atmosphere, allowing astronomers to determine whether or not the methane is a biosignature.
Orbit design
Webb orbits the Sun near the second Lagrange point (L2) of the Sun–Earth system, which is farther from the Sun than the Earth's orbit, and about four times farther than the Moon's orbit. Normally an object circling the Sun farther out than Earth would take longer than one year to complete its orbit. But near the L2 point, the combined gravitational pull of the Earth and the Sun allows a spacecraft to orbit the Sun in the same time that it takes the Earth. Staying close to Earth allows data rates to be much faster for a given size of antenna.
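A standard back-of-the-envelope way to see this (a textbook approximation that neglects the Sun–Earth barycenter offset and the eccentricity of Earth's orbit): a spacecraft a distance d beyond Earth on the Sun–Earth line matches Earth's angular velocity ω only if the combined pull of both bodies supplies the required centripetal acceleration,

$$\omega^{2}(R+d) = \frac{GM_{\odot}}{(R+d)^{2}} + \frac{GM_{\oplus}}{d^{2}},$$

where R is the Sun–Earth distance. The Sun's gravity alone would be too weak at R + d to close the orbit in one year; the extra term from Earth makes up the difference, which is what defines the L2 point.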
The telescope circles about the Sun–Earth L2 point in a halo orbit, which is inclined with respect to the ecliptic, has a radius varying between about and , and takes about half a year to complete. Since L2 is just an equilibrium point with no gravitational pull, a halo orbit is not an orbit in the usual sense: the spacecraft is actually in orbit around the Sun, and the halo orbit can be thought of as controlled drifting to remain in the vicinity of the L2 point. This requires some station-keeping: around per year from the total ∆v budget of . Two sets of thrusters constitute the observatory's propulsion system. Because the thrusters are located solely on the Sun-facing side of the observatory, all station-keeping operations are designed to slightly undershoot the required amount of thrust in order to avoid pushing Webb beyond the semi-stable L2 point, a situation which would be unrecoverable. Randy Kimble, the Integration and Test Project Scientist for the JWST, compared the precise station-keeping of Webb to "Sisyphus [...] rolling this rock up the gentle slope near the top of the hill – we never want it to roll over the crest and get away from him".
Infrared astronomy
Webb is the formal successor to the Hubble Space Telescope (HST), and since its primary emphasis is on infrared astronomy, it is also a successor to the Spitzer Space Telescope. Webb will far surpass both those telescopes, being able to see many more and much older stars and galaxies. Observing in the infrared spectrum is a key technique for achieving this, because of cosmological redshift, and because it better penetrates obscuring dust and gas. This allows observation of dimmer, cooler objects. Since water vapor and carbon dioxide in the Earth's atmosphere strongly absorb most infrared, ground-based infrared astronomy is limited to narrow wavelength ranges where the atmosphere absorbs less strongly. Additionally, the atmosphere itself radiates in the infrared spectrum, often overwhelming light from the object being observed. This makes a space telescope preferable for infrared observation.
The more distant an object is, the younger it appears; its light has taken longer to reach human observers. Because the universe is expanding, as the light travels it becomes red-shifted, and objects at extreme distances are therefore easier to see if viewed in the infrared. Webb's infrared capabilities are expected to let it see back in time to the first galaxies forming just a few hundred million years after the Big Bang.
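Quantitatively, observed and emitted wavelengths are related by the redshift z:

$$1 + z = \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}}.$$

For example, the hydrogen Lyman-alpha line emitted at about 121.6 nm (ultraviolet) by a galaxy at z = 10 arrives stretched to roughly 1.3 μm, squarely within Webb's near-infrared range.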
Infrared radiation can pass more freely through regions of cosmic dust that scatter visible light. Observations in infrared allow the study of objects and regions of space which would be obscured by gas and dust in the visible spectrum, such as the molecular clouds where stars are born, the circumstellar disks that give rise to planets, and the cores of active galaxies.
Relatively cool objects (temperatures less than several thousand degrees) emit their radiation primarily in the infrared, as described by Planck's law. As a result, most objects that are cooler than stars are better studied in the infrared. This includes the clouds of the interstellar medium, brown dwarfs, planets both in our own and other solar systems, comets, and Kuiper belt objects that will be observed with the Mid-Infrared Instrument (MIRI).
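Wien's displacement law makes the same point quantitatively: a blackbody at temperature T emits most strongly near

$$\lambda_{\text{peak}} \approx \frac{2898\ \mu\text{m}\cdot\text{K}}{T},$$

so an object near room temperature (300 K) peaks around 10 μm, in the mid-infrared range covered by MIRI, while only objects hotter than a few thousand kelvin peak in visible light.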
Some of the missions in infrared astronomy that impacted Webb development were Spitzer and the Wilkinson Microwave Anisotropy Probe (WMAP). Spitzer showed the importance of mid-infrared, which is helpful for tasks such as observing dust disks around stars. Also, the WMAP probe showed the universe was "lit up" at redshift 17, further underscoring the importance of the mid-infrared. Both these missions were launched in the early 2000s, in time to influence Webb development.
Ground support and operations
The Space Telescope Science Institute (STScI), in Baltimore, Maryland, on the Homewood Campus of Johns Hopkins University, was selected in 2003 as the Science and Operations Center (S&OC) for Webb with an initial budget of US$162.2 million intended to support operations through the first year after launch. In this capacity, STScI was to be responsible for the scientific operation of the telescope and delivery of data products to the astronomical community. Data was to be transmitted from Webb to the ground via the NASA Deep Space Network, processed and calibrated at STScI, and then distributed online to astronomers worldwide. Similar to how Hubble is operated, anyone, anywhere in the world, will be allowed to submit proposals for observations. Each year several committees of astronomers will peer review the submitted proposals to select the projects to observe in the coming year. The authors of the chosen proposals will typically have one year of private access to the new observations, after which the data will become publicly available for download by anyone from the online archive at STScI.
The bandwidth and digital throughput of the satellite are designed to operate at 458 gigabits of data per day for the length of the mission (equivalent to a sustained rate of 5.42 Mbps). Most of the data processing on the telescope is done by conventional single-board computers. The digitization of the analog data from the instruments is performed by the custom SIDECAR ASIC (System for Image Digitization, Enhancement, Control And Retrieval Application Specific Integrated Circuit). NASA stated that the SIDECAR ASIC will include all the functions of an instrument box in a package and consume only 11 milliwatts of power. Since this conversion must be done close to the detectors, on the cold side of the telescope, the low power dissipation is crucial for maintaining the low temperature required for optimal operation of Webb.
The telescope is equipped with a solid-state drive (SSD) with a capacity of 68 GB, used as temporary storage for data collected from its scientific instruments. By the end of the 10-year mission, the usable capacity of the drive is expected to decrease to 60 GB due to the effects of radiation and read/write operations.
Micrometeoroid strike
Between 23 and 25 May 2022, the C3 mirror segment suffered a micrometeoroid strike from a large, dust-mote-sized particle – the fifth and largest strike since launch, reported on 8 June 2022 – which required engineers to compensate using a mirror actuator. Despite the strike, a NASA characterization report states that "all JWST observing modes have been reviewed and confirmed to be ready for science use" as of 10 July 2022.
From launch through commissioning
Launch
The launch (designated Ariane flight VA256) took place as scheduled at 12:20 UTC on 25 December 2021 on an Ariane 5 rocket that lifted off from the Guiana Space Centre in French Guiana. The telescope was confirmed to be receiving power, starting a two-week deployment phase of its parts and traveling to its target destination. The telescope was released from the upper stage 27 minutes 7 seconds after launch, beginning a 30-day adjustment to place the telescope in a Lissajous orbit around the L2 Lagrange point.
The telescope was launched with slightly less speed than needed to reach its final orbit, and slowed down as it travelled away from Earth, in order to reach L2 with only the velocity needed to enter its orbit there. The telescope reached L2 on 24 January 2022. The flight included three planned course corrections to adjust its speed and direction. This is because the observatory could recover from underthrust (going too slowly), but could not recover from overthrust (going too fast) – to protect highly temperature-sensitive instruments, the sunshield must remain between telescope and Sun, so the spacecraft could not turn around or use its thrusters to slow down.
An L2 orbit is unstable, so JWST needs to use propellant to maintain its halo orbit around L2 (known as station-keeping) to prevent the telescope from drifting away from its orbital position. It was designed to carry enough propellant for 10 years, but the precision of the Ariane 5 launch and the first midcourse correction were credited with saving enough onboard fuel that JWST may be able to maintain its orbit for around 20 years instead. Space.com called the launch "flawless".
Transit and structural deployment
Webb was released from the rocket upper stage 27 minutes after a flawless launch. Starting 31 minutes after launch, and continuing for about 13 days, Webb began the process of deploying its solar array, antenna, sunshield, and mirrors. Nearly all deployment actions were commanded by the Space Telescope Science Institute in Baltimore, Maryland, except for two early automatic steps, solar panel unfolding and communication antenna deployment. The mission was designed to give ground controllers flexibility to change or modify the deployment sequence in case of problems.
At 7:50 p.m. EST on 25 December 2021, about 12 hours after launch, the telescope's pair of primary rockets began firing for 65 minutes to make the first of three planned mid-course corrections. On day two, the high-gain communication antenna deployed automatically.
On 27 December 2021, at 60 hours after launch, Webb's rockets fired for nine minutes and 27 seconds to make the second of three mid-course corrections for the telescope to arrive at its L2 destination. On 28 December 2021, three days after launch, mission controllers began the multi-day deployment of Webb's all-important sunshield. On 30 December 2021, controllers successfully completed two more steps in unpacking the observatory. First, commands deployed the aft "momentum flap", a device that provides balance against solar pressure on the sunshield, saving fuel by reducing the need for thruster firing to maintain Webb's orientation.
On 31 December 2021, the ground team extended the two telescoping "mid booms" from the left and right sides of the observatory. The left side deployed in 3 hours and 19 minutes; the right side took 3 hours and 42 minutes. Commands to separate and tension the membranes followed between 3 and 4 January and were successful. On 5 January 2022, mission control successfully deployed the telescope's secondary mirror, which locked itself into place to a tolerance of about one and a half millimeters.
The last step of structural deployment was to unfold the wings of the primary mirror. Each panel consists of three primary mirror segments and had to be folded to allow the space telescope to be installed in the fairing of the Ariane rocket for the launch of the telescope. On 7 January 2022, NASA deployed and locked in place the port-side wing, and on 8 January, the starboard-side mirror wing. This successfully completed the structural deployment of the observatory.
On 24 January 2022, at 2:00 p.m. Eastern Standard Time, nearly a month after launch, a third and final course correction took place, inserting Webb into its planned halo orbit around the Sun–Earth L2 point.
The MIRI instrument has four observing modes – imaging, low-resolution spectroscopy, medium-resolution spectroscopy and coronagraphic imaging. "On Aug. 24, a mechanism that supports medium-resolution spectroscopy (MRS), exhibited what appears to be increased friction during setup for a science observation. This mechanism is a grating wheel that allows scientists to select between short, medium, and longer wavelengths when making observations using the MRS mode," said NASA in a press statement.
Commissioning and testing
On 12 January 2022, while still in transit, mirror alignment began. The primary mirror segments and secondary mirror were moved away from their protective launch positions. This took about 10 days, because the 132 actuator motors are designed to fine-tune the mirror positions at microscopic accuracy (10 nanometer increments) and must each move over 1.2 million increments (12.5 mm) during initial alignment.
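As a quick consistency check of those figures, a full travel of 12.5 mm in 10 nm steps corresponds to

$$\frac{12.5\ \text{mm}}{10\ \text{nm}} = \frac{12.5\times10^{-3}\ \text{m}}{10\times10^{-9}\ \text{m}} = 1.25\times10^{6}$$

increments, in line with the "over 1.2 million increments" quoted above.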
Mirror alignment requires each of the 18 mirror segments, and the secondary mirror, to be positioned to within 50 nanometers. NASA compares the required accuracy by analogy: "If the Webb primary mirror were the size of the United States, each [mirror] segment would be the size of Texas, and the team would need to line the height of those Texas-sized segments up with each other to an accuracy of about 1.5 inches".
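The analogy follows from simple scaling (taking the contiguous United States to be very roughly 4,400 km across and the primary mirror 6.5 m in diameter, both assumed figures): the 50 nm tolerance scales to

$$50\ \text{nm}\times\frac{4.4\times10^{6}\ \text{m}}{6.5\ \text{m}} \approx 3.4\ \text{cm} \approx 1.3\ \text{in},$$

which is of the same order as the "about 1.5 inches" in NASA's comparison.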
Mirror alignment was a complex operation split into seven phases that had been repeatedly rehearsed using a 1:6 scale model of the telescope. Once the mirrors reached , NIRCam targeted the 6th magnitude star HD 84406 in Ursa Major. To do this, NIRCam took 1560 images of the sky and used these wide-ranging images to determine where in the sky each segment of the main mirror initially pointed. At first, the individual primary mirror segments were greatly misaligned, so the image contained 18 separate, blurry images of the star field, each containing an image of the target star. The 18 images of HD 84406 were matched to their respective mirror segments, and the 18 segments were brought into approximate alignment centered on the star ("Segment Image Identification"). Each segment was then individually corrected for its major focusing errors, using a technique called phase retrieval, resulting in 18 separate good-quality images from the 18 mirror segments ("Segment Alignment"). The 18 images, one from each segment, were then moved so that they precisely overlapped to create a single image ("Image Stacking").
With the mirrors positioned for almost correct images, they had to be fine-tuned to their operational accuracy of 50 nanometers, less than one wavelength of the light that will be detected. A technique called dispersed fringe sensing was used to compare images from 20 pairings of mirrors, allowing most of the errors to be corrected ("Coarse Phasing"); a slight defocus was then introduced to each segment's image, allowing detection and correction of almost all remaining errors ("Fine Phasing"). These two processes were repeated three times, and Fine Phasing will be routinely checked throughout the telescope's operation. After three rounds of Coarse and Fine Phasing, the telescope was well aligned at one place in the NIRCam field of view. Measurements were then made at various points in the captured image, across all instruments, and corrections were calculated from the detected variations in intensity, giving a well-aligned outcome across all instruments ("Telescope Alignment Over Instrument Fields of View"). Finally, a last round of Fine Phasing and checks of image quality on all instruments was performed, to ensure that any small residual errors remaining from the previous steps were corrected ("Iterate Alignment for Final Correction"). The telescope's mirror segments were then aligned and able to capture precisely focused images.
In preparation for alignment, NASA announced at 19:28 UTC on 3 February 2022 that NIRCam had detected the telescope's first photons (although not yet complete images). On 11 February 2022, NASA announced the telescope had almost completed phase 1 of alignment, with every segment of its primary mirror having located and imaged the target star HD 84406, and all segments brought into approximate alignment. Phase 1 alignment was completed on 18 February 2022, and a week later, phases 2 and 3 were also completed. This meant the 18 segments were working in unison; however, until all seven phases were complete, the segments were still acting as 18 smaller telescopes rather than one larger one. At the same time as the primary mirror was being commissioned, hundreds of other instrument commissioning and calibration tasks were also ongoing.
Allocation of observation time
Webb observing time is allocated through a General Observers (GO) program, a Guaranteed Time Observations (GTO) program, and a Director's Discretionary Early Release Science (DD-ERS) program. The GTO program provides guaranteed observing time for scientists who developed hardware and software components for the observatory. The GO program provides all astronomers the opportunity to apply for observing time and will represent the bulk of the observing time. GO programs are selected through peer review by a Time Allocation Committee (TAC), similar to the proposal review process used for the Hubble Space Telescope.
Early Release Science program
In November 2017, the Space Telescope Science Institute announced the selection of 13 Director's Discretionary Early Release Science (DD-ERS) programs, chosen through a competitive proposal process. The observations for these programs – Early Release Observations (ERO) – were to be obtained during the first five months of Webb science operations after the end of the commissioning period. A total of 460 hours of observing time was awarded to these 13 programs, which span science topics including the Solar System, exoplanets, stars and star formation, nearby and distant galaxies, gravitational lenses, and quasars. These 13 ERS programs were to use a total of 242.8 hours of observing time on the telescope (not including Webb observing overheads and slew time).
General Observer Program
For GO Cycle 1 there were 6,000 hours of observation time available to allocate, and 1,173 proposals were submitted requesting a total of 24,500 hours of observation time. Selection of Cycle 1 GO programs was announced on 30 March 2021, with 266 programs approved. These included 13 large programs and treasury programs producing data for public access. The Cycle 2 GO program was announced on May 10, 2023. Webb science observations are nominally scheduled in weekly increments. The observation plan for every week is published on Mondays by the Space Telescope Science Institute. In Cycle 4 the telescope showed its continued popularity in the astronomy community by garnering 2,377 proposals for 78,000 hours of observing time, nine times more than the available amount.
Scientific results
The JWST completed its commissioning and was ready to begin full scientific operations on 11 July 2022. With some exceptions, most experiment data is kept private for one year for the exclusive use of scientists running that particular experiment, and then the raw data will be released to the public. JWST observations substantially advanced understanding of exoplanets, the first billion years of the universe, and many other astrophysical and cosmological phenomena.
First full-color images
The first full-color images and spectroscopic data were released on 12 July 2022, which also marked the official beginning of Webb's general science operations. U.S. President Joe Biden revealed the first image, Webb's First Deep Field, on 11 July 2022. Additional releases around this time include:
Carina Nebula – a young, star-forming region called NGC 3324, about 8,500 light-years from Earth, described by NASA as "Cosmic Cliffs".
WASP-96b – an analysis of the atmosphere of a giant gas planet orbiting a distant star 1,120 light-years from Earth, with evidence of water.
Southern Ring Nebula – clouds of gas and dust expelled by a dying star 2,500 light-years from Earth.
Stephan's Quintet – a visual display of five galaxies with colliding gas and dust clouds creating new stars; the four central galaxies are 290 million light-years from Earth.
SMACS J0723.3-7327 – a galaxy cluster at redshift 0.39, with distant background galaxies whose images are distorted and magnified due to gravitational lensing by the cluster. This image has been called Webb's First Deep Field. It was later discovered that in this picture the JWST had also revealed three ancient galaxies that existed shortly after the Big Bang. Its images of these distant galaxies are views of the universe 13.1 billion years ago.
On 14 July 2022, NASA presented images of Jupiter and related areas by the JWST, including infrared views.
In a preprint released around the same time, NASA, ESA and CSA scientists stated that "almost across the board, the science performance of JWST is better than expected". The document described a series of observations during the commissioning, when the instruments captured spectra of transiting exoplanets with a precision better than 1000 ppm per data point, and tracked moving objects with speeds up to 67 milliarcseconds/second, more than twice as fast as the requirement. It also obtained the spectra of hundreds of stars simultaneously in a dense field towards the Milky Way's Galactic Center. Other targets included:
Moving targets: Jupiter's rings and moons (particularly Europa, Thebe and Metis), and asteroids 2516 Roman, 118 Peitho, 6481 Tenzing, 1773 Rumpelstilz, 216 Kleopatra, 2035 Stearns, and 4015 Wilson-Harrington
NIRCam grism time-series, NIRISS SOSS and NIRSpec BOTS mode: the Jupiter-sized planet HAT-P-14b
NIRISS aperture masking interferometry (AMI): A clear detection of the very low-mass companion star AB Doradus C, which had a separation of only 0.3 arcseconds to the primary. This observation was the first demonstration of AMI in space.
MIRI low-resolution spectroscopy (LRS): a hot super-Earth planet L 168-9 b (TOI-134) around a bright M-dwarf star (red dwarf star)
Bright early galaxies
Within two weeks of the first Webb images, several preprint papers described a wide range of high redshift and very luminous (presumably large) galaxies believed to date from 235 million years (z=16.7) to 280 million years after the Big Bang, far earlier than previously known. On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the NIRCam on Webb of numerous very early galaxies. Some early galaxies observed by Webb like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates. In September 2022, primordial black holes were proposed as explaining these unexpectedly large and early galaxies. In May 2024, the JWST identified the most distant known galaxy, JADES-GS-z14-0, seen just 290 million years after the Big Bang, corresponding to a redshift of 14.32. Part of the JWST Advanced Deep Extragalactic Survey (JADES), this discovery highlights a galaxy significantly more luminous and massive than expected for such an early period. Detailed analysis using JWST's NIRSpec and MIRI instruments revealed this galaxy's remarkable properties, including its significant size and dust content, challenging current models of early galaxy formation.
Subsequent noteworthy observations and interpretations
In June 2023, the detection with the Webb telescope of organic molecules in a galaxy 12 billion light-years away, called SPT0418-47, was announced.
On 12 July 2023, NASA celebrated the first year of operations with the release of Webb's image of a small star-forming region in the Rho Ophiuchi cloud complex, 390 light years away.
In September 2023, two astrophysicists questioned the accepted Standard Model of Cosmology, based on the latest JWST studies.
In December 2023, NASA released Christmas holiday-related images by JWST, including the Christmas Tree Galaxy Cluster and others.
In May 2024, the JWST detected the farthest known black hole merger. Occurring within the galaxy system ZS7, 740 million years after the Big Bang, this discovery suggests a fast growth rate for black holes through mergers, even in the young Universe.
Gallery
See also
Collier Trophy – to JWST in 2023
List of largest infrared telescopes
List of largest optical reflecting telescopes
List of space telescopes
Nancy Grace Roman Space Telescope – planned launch no later than May 2027
New Worlds Mission – proposed occulter for the JWST
Timeline of the James Webb Space Telescope
Unknown: Cosmic Time Machine, Netflix documentary about JWST
Notes
References
Further reading
The formal case for JWST science presented in 2006.
A review of JWST capabilities and scientific opportunities.
External links
Official NASA / STScI / ESA / French website
JWST NASA – Tracking Page − Launch to Final Calibrations (and more)
JWST NASA – About page − Timeline details / Webb orbit / L2 / Communicating
Chronological List of James Webb Telescope Discoveries
JWST Text – Most Critical Events – Launching and Deployment (2021)
JWST Video (031:22): Highlights − Technical Engineering Details (2021)
JWST Video (012:02): 1st Month – Launching and Deployment (animation; 2017)
JWST Video (008:06): 1st Month − Launching and Deployment (update; 2021)
JWST Video (003:00): 2nd Month − Mirror Alignment details (2/11/2022)
JWST Videos (Mission Control Live) – Deployment Events − Now Successfully Completed (2022):
James Webb Space Telescope: Sunshield Deployment – Mission Control Live
James Webb Space Telescope: Secondary Mirror Deployment – Mission Control Live
James Webb Space Telescope: Primary Mirror Deployment – Mission Control Live
News Update on James Webb Space Telescope's Full Deployment
Media Briefing: What's Next for the James Webb Space Telescope
JAMES WEBB TELESCOPE First Photos, Data & Calibrations Explained
The First Thing That James Webb Will See
2021 in French Guiana
2021 in science
Articles containing video clips
Artificial satellites at Earth-Sun Lagrange points
European Space Agency space probes
Exoplanet search projects
Goddard Space Flight Center
Infrared telescopes
NASA programs
NASA space probes
Northrop Grumman spacecraft
Space program of Canada
Space telescopes
Spacecraft launched by Ariane rockets
Spacecraft launched in 2021
Spacecraft using halo orbits | James Webb Space Telescope | Astronomy | 12,299 |
8,647,599 | https://en.wikipedia.org/wiki/Frangible%20nut | The frangible nut is a component used in many industries, but most commonly by NASA, to sever mechanical connections. It is, by definition, an explosively-splittable nut. The bolt remains intact while the nut itself is split into two or more parts.
Space Shuttle
Solid Rocket Booster Holddown System
Frangible nuts secured the solid rocket boosters (SRB) of the Space Shuttle, which were bolted to the mobile launcher platform (MLP) until liftoff. On the Shuttle, they were separated using NASA standard detonators (NSDs) and explosive booster cartridges. The space shuttle used two NSDs and booster cartridges for the frangible nut atop each of the four studs holding each SRB to the MLP. Once detonation occurred, the shuttle lifted free of the MLP. The broken nut and any fragments from detonation were captured by energy absorption material, such as metal foam, enclosed in a blast container to prevent damage to the shuttle. In case of NSD failure, or incomplete clearance of the nut from the bolt, the SRB had ample thrust to break the bolt itself and launch unhindered.
At launch, two pyrotechnic, or explosive, devices "break" a frangible nut into two halves, allowing the stud, which is under high tension, to eject into the hold-down post system and release the space shuttle from the MLP. A number of factors work to slow or interrupt the stud's ejection velocity. At liftoff, a stud not ejected prior to the first space shuttle movement, which occurs approximately 200–250 milliseconds after ignition, becomes bound and/or pinched and results in a hang-up.
Each frangible nut has two recesses 180 degrees apart, where a pyrotechnic device, or booster cartridge, and detonator are installed. At liftoff, each detonator receives a "fire" signal, which in turn initiates the booster cartridges, causing the frangible nut to fracture. Although only one is actually required to fire and break the frangible nut, two booster cartridges/detonators are used for redundancy. The difference in booster cartridge function time between the two sides has been determined to decrease initial stud velocity and to be a major contributor to stud hang-ups.
The frangible nut has been modified to incorporate a crossover assembly which pyrotechnically "links" the two booster cartridges/detonators in each frangible nut, resulting in detonation of both sides within 50 microseconds or less, versus a typical difference of approximately 250 microseconds experienced prior to this design modification. With the time reduction, a greater initial velocity is achieved, thereby reducing the probability of a stud hang-up. After completion of extensive component qualification and system certification testing to prove the design goal of 50 microseconds or less had been achieved, the crossover system design was approved for flight. The first flight using this new design occurred on STS-126. The crossover system was installed in all eight holddown locations on the solid rocket boosters.
External Tank Separation
Frangible nuts were also used for separation of the two aft structural attachments of the external tank prior to orbital insertion. The attach bolts were driven by the explosive force of the NSDs and a spring into a cavity in the tank strut. The nuts and all residual pieces of the NSDs were caught in a cover assembly within the shuttle.
References
Nuts (hardware)
Spacecraft pyrotechnics | Frangible nut | Engineering | 724 |
57,672,909 | https://en.wikipedia.org/wiki/Thiosilicate | In chemistry and materials science, thiosilicate refers to materials containing anions of the formula . Derivatives where some sulfide is replaced by oxide are also called thiosilicates, examples being materials derived from the oxohexathiodisilicate . Silicon is tetrahedral in all thiosilicates and sulfur is bridging or terminal. Formally, such materials are derived from silicon disulfide in analogy to the relationship between silicon dioxide and silicates. Thiosilicates are typically encountered as colorless solids. They are characteristically sensitive to hydrolysis. They belong to the class of chalcogenidotetrelates.
Materials science
LISICON (LIthium Super Ionic CONductor) materials include thiosilicates, which are fast ion conductors. Thiosilicates and related thiogermanates are also of interest for infrared optics, since they absorb only low-frequency IR modes.
References
Inorganic silicon compounds
Sulfides
Inorganic polymers
Sulfur ions | Thiosilicate | Physics,Chemistry | 200 |
41,470,199 | https://en.wikipedia.org/wiki/Kappa%20Hydrae | κ Hydrae, Latinised as Kappa Hydrae, is a solitary star in the equatorial constellation of Hydra. Its apparent visual magnitude is 5.06, which is bright enough to be faintly visible to the naked eye. The distance to this star is around , based upon an annual parallax shift of 7.48 mas. It may be a variable star, meaning it undergoes repeated fluctuations in brightness by at least 0.1 magnitude.
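The distance follows directly from the quoted parallax through the standard relation between a parallax p (in arcseconds) and distance in parsecs:

$$d = \frac{1}{p} = \frac{1}{0.00748} \approx 134\ \text{pc} \approx 436\ \text{light-years}.$$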
This is an evolving B-type star with a stellar classification of B4 IV/V, having a luminosity class intermediate between a subgiant and a giant star. It has an estimated five times the mass of the Sun and 3.4 times the Sun's radius. Kappa Hydrae has a high rate of spin with a projected rotational velocity of 115.0 km/s, and is only about 31 million years old. The star radiates 328 times the solar luminosity from its outer atmosphere at an effective temperature of 16,150 K.
Name
This star was one of the set assigned by the 16th century astronomer Al Tizini to Al Sharāsīf (ألشراسيف), the Ribs (of Hydra), which included the stars from β Crateris westward through κ Hydrae.
According to the catalogue of stars in the Technical Memorandum 33-507 – A Reduced Star Catalog Containing 537 Named Stars, Al Sharāsīf was the title for two stars: β Crateris as Al Sharasīf II and κ Hydrae as Al Sharasīf I.
In Chinese, the asterism meaning Extended Net consists of Kappa Hydrae, Upsilon1 Hydrae, Lambda Hydrae, Mu Hydrae, HD 87344, and Phi1 Hydrae. Consequently, Kappa Hydrae itself is known as the Fifth Star of Extended Net.
References
B-type main-sequence stars
B-type subgiants
Suspected variables
Hydra (constellation)
Hydrae, Kappa
Durchmusterung objects
Hydrae, 38
083754
047452
3849 | Kappa Hydrae | Astronomy | 428 |
91,820 | https://en.wikipedia.org/wiki/Instructional%20design | Instructional design (ID), also known as instructional systems design and originally known as instructional systems development (ISD), is the practice of systematically designing, developing and delivering instructional materials and experiences, both digital and physical, in a consistent and reliable fashion toward an efficient, effective, appealing, engaging and inspiring acquisition of knowledge. The process consists broadly of determining the state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models, but many are based on the ADDIE model with the five phases: analysis, design, development, implementation, and evaluation.
History
Origins
As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology, though recently constructivism has influenced thinking in the field. This can be attributed to the way it emerged during a period when the behaviorist paradigm was dominating American psychology. There are also those who cite that, aside from behaviorist psychology, the origin of the concept could be traced back to systems engineering. While the impact of each of these fields is difficult to quantify, it is argued that the language and the "look and feel" of the early forms of instructional design and their progeny were derived from this engineering discipline. Specifically, they were linked to the training development model used by the U.S. military, which was based on a systems approach and was explained as "the idea of viewing a problem or situation in its entirety with all its ramifications, with all its interior interactions, with all its exterior connections and with full cognizance of its place in its context."
The role of systems engineering in the early development of instructional design was demonstrated during World War II when a considerable amount of training materials for the military were developed based on the principles of instruction, learning, and human behavior. Tests for assessing a learner's abilities were used to screen candidates for the training programs. After the success of military training, psychologists began to view training as a system and developed various analysis, design, and evaluation procedures. In 1946, Edgar Dale outlined a hierarchy of instructional methods, organized intuitively by their concreteness. The framework first migrated to the industrial sector to train workers before it finally found its way to the education field.
1950s
B. F. Skinner's 1954 article "The Science of Learning and the Art of Teaching" suggested that effective instructional materials, called programmed instructional materials, should include small steps, frequent questions, and immediate feedback; and should allow self-pacing. Robert F. Mager popularized the use of learning objectives with his 1962 article "Preparing Objectives for Programmed Instruction". The article describes how to write objectives including desired behavior, learning condition, and assessment.
In 1956, a committee led by Benjamin Bloom published an influential taxonomy with three domains of learning: cognitive (what one knows or thinks), psychomotor (what one does, physically) and affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.
1960s
Robert Glaser introduced "criterion-referenced measures" in 1962. In contrast to norm-referenced tests in which an individual's performance is compared to group performance, a criterion-referenced test is designed to test an individual's behavior in relation to an objective standard. It can be used to assess the learners' entry level behavior, and to what extent learners have developed mastery through an instructional program.
In 1965, Robert Gagné described three domains of learning outcomes (cognitive, affective, psychomotor), five learning outcomes (Verbal Information, Intellectual Skills, Cognitive Strategy, Attitude, Motor Skills), and nine events of instruction in The Conditions of Learning, which remain foundations of instructional design practices. Gagne's work in learning hierarchies and hierarchical analysis led to an important notion in instruction – to ensure that learners acquire prerequisite skills before attempting superordinate ones.
In 1967, after analyzing the failure of training material, Michael Scriven suggested the need for formative assessment – e.g., to try out instructional materials with learners (and revise accordingly) before declaring them finalized.
1970s
During the 1970s, the number of instructional design models greatly increased and prospered in different sectors in military, academia, and industry. Many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill for instance developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).
1980s
Although interest in instructional design continued to be strong in business and the military, there was little evolution of ID in schools or higher education.
However, educators and researchers began to consider how the personal computer could be used in a learning environment or a learning space. PLATO (Programmed Logic for Automatic Teaching Operation) is one example of how computers began to be integrated into instruction. Many of the first uses of computers in the classroom were for "drill and skill" exercises. There was a growing interest in how cognitive psychology could be applied to instructional design.
1990s
The influence of constructivist theory on instructional design became more prominent in the 1990s as a counterpoint to the more traditional cognitive learning theory. Constructivists believe that learning experiences should be "authentic" and produce real-world learning environments that allow learners to construct their own knowledge. This emphasis on the learner was a significant departure from traditional forms of instructional design.
Performance improvement was also seen as an important outcome of learning that needed to be considered during the design process. The World Wide Web emerged as an online learning tool with hypertext and hypermedia being recognized as good tools for learning. As technology advanced and constructivist theory gained popularity, technology's use in the classroom began to evolve from mostly drill and skill exercises to more interactive activities that required more complex thinking on the part of the learner. Rapid prototyping was first seen during the 1990s. In this process, an instructional design project is prototyped quickly and then vetted through a series of try and revise cycles. This is a big departure from traditional methods of instructional design that took far longer to complete.
The concept of learning design arrived in the literature of technology for education in the late 1990s and early 2000s with the idea that "designers and instructors need to choose for themselves the best mixture of behaviourist and constructivist learning experiences for their online courses". But the concept of learning design is probably as old as the concept of teaching. Learning design might be defined as "the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson or any other designed learning event)".
As summarized by Britain, learning design may be associated with:
The concept of learning design
The implementation of the concept in learning design specifications such as PALO, IMS Learning Design, LDL, SLD 2.0, etc.
The technical realisations around the implementation of the concept like TELOS, RELOAD LD-Author, etc.
2000 - 2010
Online learning became common. Technology advances permitted sophisticated simulations with authentic and realistic learning experiences.
In 2008, the Association for Educational Communications and Technology (AECT) changed the definition of Educational Technology to "the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources".
2010 - 2020
Academic degrees focused on integrating technology, the internet, and human–computer interaction with education gained momentum with the introduction of Learning Design and Technology (LDT) majors. Universities such as Bowling Green State University, Pennsylvania State University, Purdue, San Diego State University, Stanford, Harvard, the University of Georgia, California State University, Fullerton, and Carnegie Mellon University have established undergraduate and graduate degrees in technology-centered methods of designing and delivering education.
Informal learning became an area of growing importance in instructional design, particularly in the workplace. A 2014 study showed that formal training makes up only 4 percent of the 505 hours per year an average employee spends learning. It also found that the learning output of informal learning is equal to that of formal training. As a result of this and other research, more emphasis was placed on creating knowledge bases and other supports for self-directed learning.
Timeline
Models
ADDIE model
Perhaps the most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the five phases contained in the model: Analyze, Design, Develop, Implement, and Evaluate.
The ADDIE model was initially developed by Florida State University to explain "the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job, and which can also be applied to any interservice curriculum development activity." The model originally contained several steps under its five original phases (Analyze, Design, Develop, Implement, and [Evaluation and] Control), each of which was expected to be completed before movement to the next phase could occur. Over the years the steps were revised, and the model itself became more dynamic and interactive than its original hierarchical rendition, until the most popular version, as it is understood today, appeared in the mid-1980s.
Connecting all phases of the model are external and reciprocal revision opportunities. As with the internal Evaluation phase, revisions can and should be made throughout the entire process.
Most of the current instructional design models are variations of the ADDIE model.
Rapid prototyping
An adaptation of the ADDIE model, which is used sometimes, is a practice known as rapid prototyping.
Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains including software design, architecture, transportation planning, product development, message design, user experience design, etc. In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front. In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive.
However, some consider rapid prototyping to be a somewhat simplistic model. As this argument goes, the analysis phase is at the heart of instructional design; only after conducting a thorough analysis can a model be chosen based on the findings. That is where most people get snagged: they simply do not do a thorough enough analysis. (From an article by Chris Bressi on LinkedIn)
Dick and Carey
Another well-known instructional design model is the Dick and Carey Systems Approach Model. The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction.
Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction, in contrast to defining instruction as the sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes". The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:
Identify Instructional Goal(s): A goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire
Conduct Instructional Analysis: Identify what a learner must recall and what a learner must be able to do to perform a particular task
Analyze Learners and Contexts: Identify general characteristics of the target audience, including prior skills, prior experience, and basic demographics; identify characteristics directly related to the skill to be taught; and perform analysis of the performance and learning settings.
Write Performance Objectives: An objective consists of a description of the behavior, the conditions, and the criteria. The criteria component of an objective is used to judge the learner's performance.
Develop Assessment Instruments: Purpose of entry behavior testing, purpose of pretesting, purpose of post-testing, purpose of practice items/practice problems
Develop Instructional Strategy: Pre-instructional activities, content presentation, Learner participation, assessment
Develop and Select Instructional Materials
Design and Conduct Formative Evaluation of Instruction: Designers try to identify areas of the instructional materials that need improvement.
Revise Instruction: To identify poor test items and to identify poor instruction
Design and Conduct Summative Evaluation
With this model, components are executed iteratively and in parallel, rather than linearly.
Guaranteed Learning
The instructional design model, Guaranteed Learning, was formerly known as the Instructional Development Learning System (IDLS). The model was originally published in 1970 by Peter J. Esseff, PhD and Mary Sullivan Esseff, PhD in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials.
Peter (1968) and Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Gabriel Ofiesh, a founding father of the Military Model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, "Guaranteed Learning", also known as the "Instructional Development Learning System" (IDLS). In 2015, the Esseffs created an eLearning version so that participants can take the GL course online under their direction.
The components of the Guaranteed Learning Model are the following:
Design a task analysis
Develop criterion tests and performance measures
Develop interactive instructional materials
Validate the interactive instructional materials
Create simulations or performance activities (Case Studies, Role Plays, and Demonstrations)
Other
Other useful instructional design models include the Smith/Ragan Model, the Morrison/Ross/Kemp Model, and the OAR Model of instructional design in higher education, as well as Wiggins' theory of backward design.
Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning, and cognitivism help shape and define the outcome of instructional materials.
Also see: Managing Learning in High Performance Organizations, by Ruth Stiehl and Barbara Bessey, from The Learning Organization, Corvallis, Oregon.
Motivational design
Motivation is defined as an internal drive that activates behavior and gives it direction. Motivation theory is concerned with the processes that describe why and how human behavior is activated and directed.
Motivation concepts include intrinsic motivation and extrinsic motivation.
John M. Keller has devoted his career to researching and understanding motivation in instructional systems. These decades of work constitute a major contribution to the instructional design field: first, in applying motivation theories systematically to design theory, and second, in developing a unique problem-solving process he calls the ARCS model.
Although Keller's ARCS model currently dominates instructional design with respect to learner motivation, in 2006 Hardré and Miller proposed the need for a new design model that includes current research in human motivation, treats motivation comprehensively, integrates various fields of psychology, and provides designers the flexibility to apply it to a myriad of situations.
Hardré proposes an alternate model for designers called the Motivating Opportunities Model or MOM. Hardré's model incorporates cognitive, needs, and affective theories as well as social elements of learning to address learner motivation. MOM has seven key components spelling the acronym 'SUCCESS' – Situational, Utilization, Competence, Content, Emotional, Social, and Systemic.
Influential researchers and theorists
Alphabetic by last name
Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1950s
Bransford, John D. – How People Learn: Bridging Research and Practice – 1990s
Bruner, Jerome – Constructivism - 1950s-1990s
Gagné, Robert M. – The Conditions of Learning has had a great influence on the discipline.
Gibbons, Andrew S – developed the Theory of Model Centered Instruction, a theory rooted in cognitive psychology.
Heinich, Robert – Instructional Media and the new technologies of instruction 3rd ed. – Educational Technology – 1989
Jonassen, David – problem-solving strategies – 1990s
Kemp, Jerold E. – Created a cognitive learning design model - 1980s
Mager, Robert F. – ABCD model for instructional objectives – 1962 - Criterion-Referenced Instruction and Learning Objectives
Marzano, Robert J. - "Dimensions of Learning", Formative Assessment - 2000s
Mayer, Richard E. - Multimedia Learning - 2000s
Merrill, M. David – Component Display Theory / Knowledge Objects / First Principles of Instruction
Osguthorpe, Russell T. – Overview of Instructional Design – The education of the heart: rediscovering the spiritual roots of learning
Papert, Seymour – Constructionism, LOGO – 1970s-1980s
Piaget, Jean – Cognitive development – 1960s
Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1990s–2010s
Rita Richey - instructional design theory and research methods
Schank, Roger – Constructivist simulations – 1990s
Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
Skinner, B.F. – Radical Behaviorism, Programmed Instruction - 1950s-1970s
Vygotsky, Lev – Learning as a social activity – 1930s
Wiley, David A. - influential work on open content, open educational resources, and informal online learning communities
See also
References
External links
Instructional Design – An overview of Instructional Design
ISD Handbook
Edutech wiki: Instructional design model
ATD: What Is Instructional Design?
Applied psychology
Educational technology
Learning
Pedagogy
Communication design
Curricula | Instructional design | Engineering | 3,642 |
12,024,873 | https://en.wikipedia.org/wiki/G12/G13%20alpha%20subunits |
G12/G13 alpha subunits are alpha subunits of heterotrimeric G proteins that link cell surface G protein-coupled receptors primarily to guanine nucleotide exchange factors for the Rho small GTPases to regulate the actin cytoskeleton. Together, these two proteins comprise one of the four classes of G protein alpha subunits. G protein alpha subunits bind to guanine nucleotides and function in a regulatory cycle, and are active when bound to GTP but inactive and associated with the G beta-gamma complex when bound to GDP. G12/G13 are not targets of pertussis toxin or cholera toxin, as are other classes of G protein alpha subunits.
G proteins G12 and G13 regulate actin cytoskeletal remodeling in cells during movement and migration, including cancer cell metastasis. G13 is also essential for receptor tyrosine kinase-induced migration of fibroblast and endothelial cells.
Genes
GNA12
GNA13
See also
Second messenger system
G protein-coupled receptor
Heterotrimeric G protein
Gs alpha subunit
Gi alpha subunit
Gq alpha subunit
Rho family of GTPases
References
External links
Peripheral membrane proteins | G12/G13 alpha subunits | Chemistry | 272 |
8,732,238 | https://en.wikipedia.org/wiki/Crowdreviewing | Crowdreviewing is the practice of gathering opinion or feedback from a large number of people, typically via the internet or an online community; the term is a portmanteau of "crowd" and "reviews". Crowdreviewing is often viewed as a form of crowd voting, which occurs when a website gathers a large group's opinions and judgment. The concept is based on the principles of crowdsourcing and lets users submit online reviews to participate in building online metrics that measure performance. By harnessing social collaboration in the form of feedback, individuals are generally able to form a more informed opinion.
Role of the crowd
In crowdreviewing, the crowd becomes the source of information used to determine the relative performance of products and services. Because crowdreviewing draws input from a large number of parties, the resulting collaboration produces more credible feedback than that left by a single party. The responsibility of identifying strengths and weaknesses falls to multiple individuals, each of whom has had their own experience, rather than to a single individual. Buyers will therefore be more likely to trust the feedback of a collective group of people rather than a single individual.
Common Parties
The crowd consists of a number of different parties which have various interests in regards to the outcome produced.
Potential Customers
A potential customer of a product or service would have an interest in viewing information on how a particular product or service stands in terms of subjective or objective quality before making a purchasing decision. Potential customers may also be interested in leaving feedback on a particular product or service to explain why they did not make their purchase.
Customers
Customers of products and services are a primary party in the reviewing process. Customers are closely connected to the process, as they have first-hand experience with a product or service. Their primary role is to detail their experiences with the product or service. A customer's interest in crowdreviewing stems from a desire to show appreciation for the quality of a product or service, or to voice concerns or disappointment with it.
Sellers
Sellers usually get their satisfied customers involved in leaving reviews for their products and services. A seller has an interest in having positive feedback on display as a means to influence potential buyers.
Competitors
Competitors would have an interest in reviewing feedback from the crowd as a means of obtaining competitive intelligence.
There may be other audiences involved in the process such as employees, suppliers, partners, and other relevant parties.
Benefits and risks
There are a number of benefits to the different parties that make up the crowd. Potential buyers are able to obtain information on products and services prior to making a purchase. Those who have already bought or used the product or service are able to post experiences, both positive and negative, to inform other potential buyers. As an additional benefit, buyers may also post negative reviews in the hope of resolving their negative experiences with the seller. Sellers have the benefit of receiving positive feedback and also of potentially resolving issues with dissatisfied customers. Competitors are able to learn more about what their competition is doing in order to improve their own products and services.
In addition to the benefits associated with crowdreviewing, there are a number of risks and challenges to overcome. For potential buyers there is always the risk that reviews may be sourced by the vendors themselves or by other parties paid to leave a specific type of feedback on a product or service. Sellers may receive negative reviews, which can damage their reputation and affect their bottom-line revenue. Competitors, while enjoying the benefit of being able to learn from their competition, are also subject to their competitors learning about their own strengths and weaknesses.
Limitations and controversies
Size of the Crowd
One of the major factors influencing crowdreviewing is the size of the crowd involved. A crowdreviewing venture is positively influenced by having a large number of parties leave reviews and feedback on products and services. Where only a small number of individuals leave feedback, more weight is placed on each individual reviewer or opinion, and the resulting metric can therefore be of minimal value to potential customers. A smaller sample of reviews may also exhibit bias towards or against the product or service.
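A common way to keep a handful of reviewers from dominating a score is to dampen the raw average toward a prior. The Python sketch below is only an illustration of that idea; the function name, prior mean, and prior weight are assumptions chosen for the example and are not taken from any particular crowdreviewing platform.

def damped_average(ratings, prior_mean=3.5, prior_weight=10):
    # Bayesian-style average: with few ratings the score stays near the
    # prior; with many ratings it converges to the observed mean.
    n = len(ratings)
    if n == 0:
        return prior_mean
    observed_mean = sum(ratings) / n
    return (prior_weight * prior_mean + n * observed_mean) / (prior_weight + n)

# Two five-star ratings barely move the score (about 3.75),
# while two hundred of them pull it close to 5 (about 4.93).
print(damped_average([5, 5]))
print(damped_average([5, 5] * 100))

Damping of this kind is one way a crowdreviewing site might limit the weight of any single opinion when the sample is small, though platforms differ in how, or whether, they do this.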
Industry Knowledge
A common limitation of allowing all parties to review a product or service is that reviews may be written without a minimal or meaningful understanding of the product or service. A lack of industry or specialized knowledge may in turn reduce the value of a review or adversely affect what would be considered a fair review.
Seller Manipulation
With allowing multiple parties to review a product or service there is a possibility that a seller may attempt to manipulate reviews in a number of ways. Sellers may hire third parties or create fake identities in order to leave positive reviews on their product or service. They may also do the same to create negative reviews on competing products and services.
Balance of Negative and Positive Reviews
Customers who have a negative experience with a product or service are more likely to offer a review, in an effort to resolve buyer's remorse, than those who have had a positive experience.
One Side of the Story
Those reading reviews of products and services are likely to encounter reviews that tell only one side of the story. This is a disadvantage to both the potential customer and the seller, as the review may omit the other side of a story that may be based on a misunderstanding.
See also
Distributed thinking
Collective consciousness
Participatory monitoring
Crowdfunding
Crowdsourcing
References
Further reading
Is There an eBay for Ideas? European Management Review, 2011
Herding Behavior as a Network Externality, Proceedings of the International Conference on Information Systems, Shanghai, December 2011
The Geography of Crowdfunding, NET Institute Working Paper No. 10-08, Oct 2010
The micro-price of micropatronage, The Economist, September 27, 2010
Putting your money where your mouse is, The Economist, September 2, 2010
Cash-strapped entrepreneurs get creative in BBC News
Harter, J.K., Schmidt, F.L., & Keyes, C.L. (2002). Well-Being in the Workplace and its Relationship to Business Outcomes: A Review of the Gallup Studies. In C.L. Keyes & J. Haidt (Eds.), Flourishing: The Positive Person and the Good Life (pp. 205–224). Washington D.C.: American Psychological Association.
Surowiecki, James, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, 2004
Internet terminology
Customer experience
Collaboration
Collective intelligence
Human-based computation
Social information processing
Crowd psychology | Crowdreviewing | Technology | 1,315 |
74,296,425 | https://en.wikipedia.org/wiki/Cystobasidium | Cystobasidium is a genus of fungi in the order Cystobasidiales. The type species is a fungal parasite forming small gelatinous basidiocarps (fruit bodies) on various ascomycetous fungi (including Lasiobolus and Thelebolus spp.) on dung. Microscopically, it has auricularioid (laterally septate) basidia producing basidiospores that germinate by budding off yeast cells. Other species are known only from their yeast states. The yeasts Cystobasidium minutum and C. calyptogenae are rare but known human pathogens.
Taxonomy
The genus was originally described in 1898 by Swedish mycologist Gustaf Lagerheim as a subgenus of Jola and later (1924) raised to a full genus by the German mycologist Walther Neuhoff. Its main distinguishing feature (microscopically) was the swollen, cyst-like probasidia from which the basidia emerge. Only one species, Cystobasidium lasioboli, was originally described, but two further species with probasidia were added by subsequent authors. In 1999, British mycologist Peter Roberts noted that Tremella fimetaria Schum. (1803) was an earlier name for Cystobasidium lasioboli and proposed the new combination Cystobasidium fimetarium.
Molecular research, based on cladistic analysis of DNA sequences, has shown that Cystobasidium (based on the type species) is a monophyletic (natural) genus. An additional 20 or so yeast species have been added to the genus, most of which were formerly placed in Rhodotorula.
Species
Species Fungorum (in the Catalogue of Life) accepts 21 species of Cystobasidium:
Cystobasidium alpinum
Cystobasidium benthicum
Cystobasidium calyptogenae
Cystobasidium cunninghamiae
Cystobasidium fimetarium
Cystobasidium halotolerans
Cystobasidium iriomotense
Cystobasidium keelungense
Cystobasidium laryngis
Cystobasidium lysinophilum
Cystobasidium minutum
Cystobasidium ongulense
Cystobasidium onofrii
Cystobasidium pinicola
Cystobasidium portillonense
Cystobasidium proliferans
Cystobasidium psychroaquaticum
Cystobasidium raffinophilum
Cystobasidium ritchiei
Cystobasidium sebaceum
Cystobasidium slooffiae
Cystobasidium terricola
Cystobasidium tubakii
References
Pucciniomycotina
Basidiomycota genera
Taxa described in 1924
Fungal diseases | Cystobasidium | Biology | 574 |
14,733,839 | https://en.wikipedia.org/wiki/Telbivudine | Telbivudine is an antiviral drug used in the treatment of hepatitis B infection. It is marketed by the Swiss pharmaceutical company Novartis under the trade names Sebivo (European Union) and Tyzeka (United States). Clinical trials have shown it to be significantly more effective than lamivudine or adefovir, and less likely to cause resistance. However, the HBV signature resistance mutation M204I (a change from methionine to isoleucine at position 204 in the reverse transcriptase domain of the hepatitis B polymerase), as well as L180M+M204V, has been associated with telbivudine resistance.
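The shorthand used above for resistance mutations (wild-type residue, position, substituted residue, e.g. M204I) is compact enough to parse programmatically. The following Python sketch is purely illustrative; the function name and error handling are assumptions and are not drawn from any particular bioinformatics library.

import re

def parse_substitution(code):
    # Split a substitution such as 'M204I' into (wild-type, position, mutant).
    match = re.fullmatch(r"([A-Z])(\d+)([A-Z])", code)
    if match is None:
        raise ValueError("not a single-residue substitution: " + code)
    wild_type, position, mutant = match.groups()
    return wild_type, int(position), mutant

print(parse_substitution("M204I"))   # ('M', 204, 'I')
print(parse_substitution("L180M"))   # ('L', 180, 'M')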
Telbivudine is a synthetic thymidine β-L-nucleoside analogue; it is the L-isomer of thymidine. Telbivudine impairs hepatitis B virus (HBV) DNA replication by causing chain termination. It differs from the natural nucleoside only with respect to the location of the sugar and base moieties, taking on a levorotatory configuration rather than the dextrorotatory configuration of the natural deoxynucleosides. It is taken orally in a dose of 600 mg once daily, with or without food.
Telbivudine has no in vitro activity against HIV-1, and in a case-series of three HIV-HBV co-infected patients, telbivudine did not produce sustained HIV-1 virologic suppression or induce any resistance mutations in HIV-1.
Phase III clinical trials suggested that telbivudine put patients at greater risk of myopathy and peripheral neuropathy than the comparator drug lamivudine. The FDA required a risk evaluation and mitigation strategy (REMS) aimed at increasing awareness of peripheral neuropathy by requiring distribution of a medication guide.
In 2016, Novartis posted a discontinuation notice. Efficacy or safety concerns were not cited as rationale for discontinuation, but rather "availability of alternative medications"; presumably this refers to tenofovir disoproxil, which became available as a generic medication in 2017, and is a safe and effective treatment for chronic HBV infection.
References
External links
Antiviral drugs
Pyrimidinediones
Nucleosides
Drugs developed by Novartis
Withdrawn drugs | Telbivudine | Chemistry,Biology | 491 |
44,308,184 | https://en.wikipedia.org/wiki/Tricholoma%20sejunctum | Tricholoma sejunctum (colloquially yellow blusher in the eastern regions of North America) is a mushroom that appears across much of the Northern Hemisphere and is associated with pine forests.
Description
The cap is greenish-brownish yellow, slightly moist, and has dark fibrils near the center. The gills and stipe are whitish-yellow. The odor is mild to mealy and the taste mild to unpleasant.
Edibility
There is some confusion over certain identification of the species, so it is considered unsafe to eat. While classified as inedible by some field guides, it appears to have been traditionally consumed in much of the world without noted ill effects. More recently, in Europe it has been identified as responsible for poisonings.
The species is reportedly consumed in China's Yunnan province, where it is generally known as 荞面菌 (Pinyin: qiao mian jun; lit. 'Buckwheat Noodle Mushroom') on account of this property, despite the fact that its proper name is 黄绿口蘑 (lit. 'Yellow Green Mouth Mushroom').
Similar species
Tricholoma flavovirens is usually larger and fleshier, with gills and stipe of a more solid yellow and a less fibrillose cap. Other similar species include Tricholoma arvernense and T. viridilutescens.
See also
List of North American Tricholoma
List of Tricholoma species
References
sejunctum
Fungi described in 1799
Fungi of Asia
Fungi of Europe
Taxa named by James Sowerby
Fungus species | Tricholoma sejunctum | Biology | 319 |
59,013,378 | https://en.wikipedia.org/wiki/Microcrystal%20electron%20diffraction | Microcrystal electron diffraction, or MicroED, is a CryoEM method that was developed by the Gonen laboratory in late 2013 at the Janelia Research Campus of the Howard Hughes Medical Institute. MicroED is a form of electron crystallography in which thin 3D crystals are used for structure determination by electron diffraction. Prior to this demonstration, macromolecular (protein) electron crystallography was mainly used on 2D crystals. The method is one of several modern approaches to determining atomic structures using electron diffraction, first demonstrated for the positions of hydrogen atoms in NH4Cl crystals by W. E. Laschkarew and I. D. Usykin in 1933. Electron diffraction has since been used for surfaces and via precession electron diffraction, with much of the early work described by Boris Vainshtein and Douglas L. Dorset.
The method was developed for structure determination of proteins from nanocrystals that are typically not suitable for X-ray diffraction because of their size. Crystals one billionth the size needed for X-ray crystallography can yield high-quality data. The samples are frozen-hydrated, as for all other CryoEM modalities, but instead of using the transmission electron microscope (TEM) in imaging mode, it is used in diffraction mode with a low electron exposure (typically < 0.01 e−/Å2/s). The nanocrystal is exposed to the diffracting beam and continuously rotated while diffraction is collected on a fast camera as a movie. MicroED data are then processed using software for X-ray crystallography for structure analysis and refinement. The hardware and software used in a MicroED experiment are standard and broadly available.
Development
The use of electron diffraction to solve crystal structures dates back to the earliest days of electron diffraction. The first successful demonstration of MicroED was reported in 2013 by the Gonen laboratory for the structure of lysozyme, a classic test protein in X-ray crystallography.
Experimental setup
Detailed protocols for setting up the electron microscope and for data collection have been published.
Instrumentation
Microscope
MicroED data is collected using transmission electron (cryogenic) microscopy. The microscope can be equipped with a selected area aperture but MicroED can also be done without a selected area aperture. While some structures have been reported without freezing, radiation damage is sometimes minimized and higher resolution obtained by using cryo cooling even for small molecules.
Detectors
A variety of detectors have been used to collected electron diffraction data in MicroED experiments. Detectors utilizing charge-coupled device (CCD) and complementary metal–oxide–semiconductor (CMOS) technology have been used. With CMOS detectors, individual electron counts can be interpreted. More recently, direct electron detectors have been successfully used in both linear and counting modes. In these examples electron counting allowed ab initio phasing and visualization of hydrogens in proteins.
Data collection
Still diffraction
The initial proof-of-concept publication on MicroED used lysozyme crystals. Up to 90 degrees of data were collected from a single nanocrystal, with discrete 1-degree steps between frames. Each diffraction pattern was collected at an ultra-low dose rate of ~0.01 e−/Å2/s. Data from three crystals were merged to yield a 2.9 Å resolution structure with good refinement statistics, demonstrating that the structure of a dose-sensitive protein could be determined from 3D microcrystals under cryogenic conditions.
Continuous rotation
MicroED uses continuous rotation during data collection. Here the crystal is slowly rotated in a single direction while diffraction is recorded on a fast camera as a movie. This led to several improvements in data quality and allowed data processing using standard X-ray crystallographic software. Continuous-rotation MicroED improves the sampling of reciprocal space.
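Because the beam stays on while the crystal rotates, the total electron exposure a crystal receives in continuous-rotation MicroED is simply the dose rate multiplied by the time needed to sweep the angular wedge. The short Python sketch below is a back-of-the-envelope illustration only; the dose rate echoes the ~0.01 e−/Å2/s figure quoted above, while the rotation rate and wedge size are assumed example values, not parameters from any specific published protocol.

def total_fluence(dose_rate_e_per_A2_s, rotation_rate_deg_per_s, wedge_deg):
    # Exposure time is the angular wedge divided by the rotation rate;
    # total fluence is the dose rate multiplied by that exposure time.
    exposure_time_s = wedge_deg / rotation_rate_deg_per_s
    return dose_rate_e_per_A2_s * exposure_time_s

# Assumed example: 0.01 e-/A^2/s dose rate, 0.3 deg/s rotation, 60 degree wedge
# -> 200 s of exposure and a total fluence of about 2 e-/A^2.
print(total_fluence(0.01, 0.3, 60))

Keeping this cumulative fluence low is what lets a single crystal tolerate an entire rotation series before radiation damage degrades the diffraction.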
Data processing
Detailed protocols for MicroED data processing have been published. When MicroED data is collected using continuous stage rotation, standard crystallography software can be used.
Differences between MicroED and other electron diffraction methods
Other electron diffraction methods that have been developed and demonstrated to work include Automated Diffraction Tomography (ADT) and Rotation Electron Diffraction (RED). These methods differ slightly from MicroED: In ADT discrete steps of goniometer tilt are used to cover reciprocal space in combination with beam precession to reduce dynamical diffraction effects. ADT uses hardware and software for precession and scanning transmission electron microscopy for crystal tracking. RED is done in TEM but the goniometer is tilted in discrete steps and beam tilting is used to fill in the gaps. Software is used to process ADT and RED data.
Milestones
Method scope
MicroED has been used to determine the structures of large globular proteins, small proteins, peptides, membrane proteins, organic molecules, and inorganic compounds. In many of these examples hydrogens and charged ions were observed.
Novel structures of α-synuclein of Parkinson's disease
The first structures solved by MicroED were published in late 2015. These structures were of peptide fragments that form the toxic core of α-synuclein, the protein responsible for Parkinson's disease, and they provided insight into the aggregation mechanism of its toxic aggregates. The structures were solved at 1.4 Å resolution.
Novel protein structure of R2lox
The first novel structure of a protein solved by MicroED was published in 2019. The protein is the metalloenzyme R2-like ligand-binding oxidase (R2lox) from Sulfolobus acidocaldarius. The structure was solved at 3.0 Å resolution by molecular replacement using a model of 35% sequence identity built from the closest homolog with a known structure.
References
Further reading
Background on MicroED from ThermoFisher Scientific, a major producer of transmission electron microscopes
Video interview about the development of MicroED and its applications
The Janelia Archives background on MicroED
Background and publications on MicroED from the Gonen Laboratory
Electron microscopy | Microcrystal electron diffraction | Chemistry | 1,234 |