Columns: id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64)
6,793,679 | https://en.wikipedia.org/wiki/Pointwise%20mutual%20information | In statistics, probability theory and information theory, pointwise mutual information (PMI), or point mutual information, is a measure of association. It compares the probability of two events occurring together to what this probability would be if the events were independent.
PMI (especially in its positive pointwise mutual information variant) has been described as "one of the most important concepts in NLP", where it "draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in [a] corpus than we would have expected them to appear by chance."
The concept was introduced in 1961 by Robert Fano under the name of "mutual information", but today that term is instead used for a related measure of dependence between random variables: The mutual information (MI) of two discrete random variables refers to the average PMI of all possible events.
Definition
The PMI of a pair of outcomes x and y belonging to discrete random variables X and Y quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions, assuming independence. Mathematically:

$$\operatorname{pmi}(x;y) \equiv \log_2 \frac{p(x,y)}{p(x)\,p(y)} = \log_2 \frac{p(x\mid y)}{p(x)} = \log_2 \frac{p(y\mid x)}{p(y)}$$
(with the latter two expressions being equal to the first by Bayes' theorem). The mutual information (MI) of the random variables X and Y is the expected value of the PMI (over all possible outcomes).
The measure is symmetric ($\operatorname{pmi}(x;y) = \operatorname{pmi}(y;x)$). It can take positive or negative values, but is zero if X and Y are independent. Note that even though PMI may be negative or positive, its expected outcome over all joint events (MI) is non-negative. PMI maximizes when X and Y are perfectly associated (i.e. $p(x\mid y) = 1$ or $p(y\mid x) = 1$), yielding the following bounds:

$$-\infty \le \operatorname{pmi}(x;y) \le \min\left(-\log_2 p(x),\, -\log_2 p(y)\right)$$
Finally, $\operatorname{pmi}(x;y)$ will increase if $p(x\mid y)$ is fixed but $p(x)$ decreases.
Here is an example to illustrate:

| x | y | p(x, y) |
|---|---|---|
| 0 | 0 | 0.1 |
| 0 | 1 | 0.7 |
| 1 | 0 | 0.15 |
| 1 | 1 | 0.05 |

Using this table we can marginalize to get the following additional table for the individual distributions:

| | p(x) | p(y) |
|---|---|---|
| 0 | 0.8 | 0.25 |
| 1 | 0.2 | 0.75 |

With this example, we can compute four values for $\operatorname{pmi}(x;y)$. Using base-2 logarithms:

$$\operatorname{pmi}(x{=}0;y{=}0) = -1$$
$$\operatorname{pmi}(x{=}0;y{=}1) \approx 0.222392$$
$$\operatorname{pmi}(x{=}1;y{=}0) \approx 1.584963$$
$$\operatorname{pmi}(x{=}1;y{=}1) \approx -1.584963$$
(For reference, the mutual information would then be 0.2141709.)
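These values are easy to verify programmatically. A minimal Python sketch (ours, not part of the original article) recomputes the four PMI values and the mutual information from the joint distribution above:

```python
import math

# Joint distribution p(x, y) from the example table
p_xy = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.15, (1, 1): 0.05}

# Marginal distributions obtained by summing out the other variable
p_x = {x: sum(p for (xi, _), p in p_xy.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in p_xy.items() if yi == y) for y in (0, 1)}

def pmi(x, y):
    """Pointwise mutual information in bits (base-2 logarithm)."""
    return math.log2(p_xy[(x, y)] / (p_x[x] * p_y[y]))

for (x, y) in p_xy:
    print(f"pmi(x={x}; y={y}) = {pmi(x, y):+.6f}")

# Mutual information is the expectation of PMI over the joint distribution
mi = sum(p * pmi(x, y) for (x, y), p in p_xy.items())
print(f"MI = {mi:.7f}")  # 0.2141709
```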
Similarities to mutual information
Pointwise mutual information has many of the same relationships as the mutual information. In particular,

$$\operatorname{pmi}(x;y) = h(x) + h(y) - h(x,y) = h(x) - h(x\mid y) = h(y) - h(y\mid x)$$

where $h(x)$ is the self-information, or $-\log_2 p(x)$.
Variants
Several variations of PMI have been proposed, in particular to address what has been described as its "two main limitations":
PMI can take both positive and negative values and has no fixed bounds, which makes it harder to interpret.
PMI has "a well-known tendency to give higher scores to low-frequency events", but in applications such as measuring word similarity, it is preferable to have "a higher score for pairs of words whose relatedness is supported by more evidence."
Positive PMI
The positive pointwise mutual information (PPMI) measure is defined by setting negative values of PMI to zero:

$$\operatorname{ppmi}(x;y) \equiv \max\left(\log_2 \frac{p(x,y)}{p(x)\,p(y)},\, 0\right)$$
This definition is motivated by the observation that "negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous" and also by a concern that "it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgment". It also avoids having to deal with $-\infty$ values for events that never occur together ($p(x,y) = 0$), by setting PPMI for these to 0.
Normalized pointwise mutual information (npmi)
Pointwise mutual information can be normalized between [-1, +1], resulting in -1 (in the limit) for never occurring together, 0 for independence, and +1 for complete co-occurrence:

$$\operatorname{npmi}(x;y) = \frac{\operatorname{pmi}(x;y)}{h(x,y)}$$

where $h(x,y)$ is the joint self-information $-\log_2 p(x,y)$.
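A small sketch of these two variants on the same toy distribution as above (again ours, not from the article):

```python
import math

p_xy = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.15, (1, 1): 0.05}
p_x = {0: 0.8, 1: 0.2}
p_y = {0: 0.25, 1: 0.75}

def pmi(x, y):
    return math.log2(p_xy[(x, y)] / (p_x[x] * p_y[y]))

def ppmi(x, y):
    """Positive PMI: negative associations are clipped to zero."""
    return max(pmi(x, y), 0.0)

def npmi(x, y):
    """PMI normalized by the joint self-information; lies in [-1, +1]."""
    return pmi(x, y) / -math.log2(p_xy[(x, y)])

print(ppmi(1, 0))   # 1.584963 -- positive association kept
print(ppmi(0, 0))   # 0.0      -- negative association clipped
print(npmi(1, 1))   # -0.367   -- negative, bounded below by -1
```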
PMI^k family
The PMI^k measure (for k = 2, 3, etc.), which was introduced by Béatrice Daille around 1994, and as of 2011 was described as being "among the most widely used variants", is defined as

$$\operatorname{pmi}^k(x;y) \equiv \log_2 \frac{p(x,y)^k}{p(x)\,p(y)}$$

In particular, $\operatorname{pmi}^1(x;y) = \operatorname{pmi}(x;y)$. The additional factors of $p(x,y)$ inside the logarithm are intended to correct the bias of PMI towards low-frequency events, by boosting the scores of frequent pairs. A 2011 case study demonstrated the success of PMI^3 in correcting this bias on a corpus drawn from English Wikipedia. Taking x to be the word "football", its most strongly associated words y according to the PMI measure (i.e. those maximizing $\operatorname{pmi}(x;y)$) were domain-specific ("midfielder", "cornerbacks", "goalkeepers") whereas the terms ranked most highly by PMI^3 were much more general ("league", "clubs", "england").
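On the same toy distribution, pmi^k for k = 2 differs from PMI by exactly log2 p(x,y), so rarer pairs are penalized more (sketch ours):

```python
import math

p_xy = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.15, (1, 1): 0.05}
p_x = {0: 0.8, 1: 0.2}
p_y = {0: 0.25, 1: 0.75}

def pmi(x, y):
    return math.log2(p_xy[(x, y)] / (p_x[x] * p_y[y]))

def pmik(x, y, k=2):
    """PMI^k: k factors of p(x, y) in the numerator of the log ratio."""
    return math.log2(p_xy[(x, y)] ** k / (p_x[x] * p_y[y]))

# pmik - pmi = log2 p(x, y) <= 0: the rarer the pair, the larger the
# penalty, which counteracts PMI's bias towards low-frequency events.
print(pmik(0, 1) - pmi(0, 1))  # log2(0.7) ~ -0.51 (frequent pair)
print(pmik(0, 0) - pmi(0, 0))  # log2(0.1) ~ -3.32 (rare pair)
```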
Specific Correlation
Total correlation is an extension of mutual information to multiple variables. Analogously to the definition of total correlation, the extension of PMI to multiple variables is "specific correlation."
The SI of the results of $n$ random variables is expressed as:

$$\operatorname{SI}(a_1, \ldots, a_n) = \log_2 \frac{p(a_1, \ldots, a_n)}{p(a_1)\, p(a_2) \cdots p(a_n)}$$
Chain-rule
Like mutual information, pointwise mutual information follows the chain rule, that is,

$$\operatorname{pmi}(x;yz) = \operatorname{pmi}(x;y) + \operatorname{pmi}(x;z\mid y)$$

This is proven through application of Bayes' theorem:

$$\operatorname{pmi}(x;y) + \operatorname{pmi}(x;z\mid y) = \log_2 \frac{p(x,y)}{p(x)\,p(y)} + \log_2 \frac{p(x,y,z)\,p(y)}{p(x,y)\,p(y,z)} = \log_2 \frac{p(x,y,z)}{p(x)\,p(y,z)} = \operatorname{pmi}(x;yz)$$
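The identity can also be checked numerically on an arbitrary three-variable distribution; the sketch below (with a made-up random joint distribution, not from the article) verifies it for every outcome triple:

```python
import itertools
import math
import random

random.seed(0)

# A made-up joint distribution p(x, y, z) over binary variables
outcomes = list(itertools.product((0, 1), repeat=3))
weights = [random.random() for _ in outcomes]
p = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

def marg(**fixed):
    """Marginal probability of the given variable assignment."""
    keys = {'x': 0, 'y': 1, 'z': 2}
    return sum(pr for o, pr in p.items()
               if all(o[keys[k]] == v for k, v in fixed.items()))

for x, y, z in outcomes:
    lhs = math.log2(marg(x=x, y=y, z=z) / (marg(x=x) * marg(y=y, z=z)))
    pmi_xy = math.log2(marg(x=x, y=y) / (marg(x=x) * marg(y=y)))
    # pmi(x; z | y), rewritten in terms of unconditional marginals
    pmi_xz_given_y = math.log2(marg(x=x, y=y, z=z) * marg(y=y)
                               / (marg(x=x, y=y) * marg(y=y, z=z)))
    assert abs(lhs - (pmi_xy + pmi_xz_given_y)) < 1e-12
```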
Applications
PMI could be used in various disciplines, e.g. in information theory, linguistics or chemistry (in profiling and analysis of chemical compounds). In computational linguistics, PMI has been used for finding collocations and associations between words. For instance, counts of occurrences and co-occurrences of words in a text corpus can be used to approximate the probabilities $p(x)$ and $p(x,y)$ respectively. The following table shows counts of pairs of words getting the most and the least PMI scores in the first 50 million words in Wikipedia (dump of October 2015), filtering by 1,000 or more co-occurrences. The frequency of each count can be obtained by dividing its value by 50,000,952. (Note: natural log is used to calculate the PMI values in this example, instead of log base 2.)
Good collocation pairs have high PMI because the probability of co-occurrence is only slightly lower than the probabilities of occurrence of each word. Conversely, a pair of words whose probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score.
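A minimal sketch of this count-based estimation (the toy corpus and the adjacency window are illustrative assumptions, not from the article), using the natural logarithm as in the table described above:

```python
import math
from collections import Counter

# Toy corpus standing in for the Wikipedia dump (illustrative only)
corpus = "puerto rico is an island puerto rico has beaches".split()

N = len(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))  # co-occurrence = adjacency here

def pmi(w1, w2):
    """PMI of an ordered word pair estimated from counts (natural log)."""
    p_xy = bigrams[(w1, w2)] / N
    p_x, p_y = unigrams[w1] / N, unigrams[w2] / N
    return math.log(p_xy / (p_x * p_y))

print(pmi("puerto", "rico"))  # ln(9/2) ~ 1.5: the words almost always co-occur
```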
References
External links
Demo at Rensselaer MSR Server (PMI values normalized to be between 0 and 1)
Information theory
Summary statistics for contingency tables
Entropy and information | Pointwise mutual information | Physics,Mathematics,Technology,Engineering | 1,281 |
33,685,545 | https://en.wikipedia.org/wiki/ESPRIT%20project | ESPRIT, or Elite Sport Performance Research in Training, is a UK EPSRC and UK Sport funded research project that aims to develop pervasive sensing technologies to better understand the physiology and biomechanics of athletes in training, and to apply these technologies to enhance the wellbeing and healthcare of the general public.
Key research themes
Generalised Body Sensor Networks - Imperial College London
Optimised Sensor Design and Embodiment - Queen Mary University of London
Learning, Data Modelling and Performance Optimisation - UK Sport, Imperial College London
Device and Technology Innovation (GOLD) in elite sports - Loughborough University
Proof of concept projects
Application of a solid-state saliva-based system to monitoring circadian rhythms in elite athletes - Swansea University
Real-time wireless localisation for team sports using body-centric communications - Queen's University Belfast
Optimized athlete body sensor networks for simulation-based performance analysis - Southampton University
Showcase/secondment projects
Monitor the effects of a warm-up on power production and wheelchair performance - Loughborough University
Using interleukin-6 (IL-6) as a measurement of exercise-induced inflammation - Loughborough University
Improvement of Powerwheel for racing wheelchairs - Frazer-Nash Consultancy Ltd.
Ankle and Foot Modelling in Elite Cycling - Paul Francis
Sports exemplars
A number of sports exemplars have been selected in the ESPRIT Programme to demonstrate and validate the application of pervasive sensing technology in elite sport performance monitoring.
Healthcare exemplars
One of the main objectives of the ESPRIT project is to extend the developed sensing technology for wellbeing and healthcare applications. To demonstrate the application of the technology, a number of healthcare exemplars have been selected.
Fall detection
Post-operative care
Rehabilitation after knee-replacement surgery
COPD patient monitoring
Elderly care
Key Partners
See also
Wireless sensor networks
References
External links
Imperial College London
Loughborough University
Science and technology in Leicestershire
Sports science | ESPRIT project | Technology | 389 |
42,888,123 | https://en.wikipedia.org/wiki/Genomatica | Genomatica is a San Diego–based biotechnology company that develops and licenses biological manufacturing processes for the production of intermediate and basic chemicals. Genomatica’s process technology for the chemical 1,4-Butanediol (BDO) is now commercial. Genomatica produced 5 million pounds of renewable BDO in five weeks at a DuPont Tate & Lyle plant in Tennessee. Its GENO BDO process has been licensed by BASF and by Novamont.
History
Genomatica was founded in San Diego in 1998 by Christophe Schilling and Bernhard Palsson. Schilling's goal was to use biotechnology to make more sustainable choices in manufacturing. In 2021, Lululemon partnered with Genomatica to create a plant-based nylon material, which was launched in 2023.
In 2023, L'Oréal along with Unilever and Kao Corporation invested in Genomatica. The investment will go toward developing plant-based personal care and cosmetics products.
References
Biotechnology companies of the United States | Genomatica | Biology | 207 |
23,699,292 | https://en.wikipedia.org/wiki/Distributed%20source%20coding | Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems regard the compression of multiple correlated information sources that do not communicate with each other. By modeling the correlation between multiple sources at the decoder side together with channel codes, DSC is able to shift the computational complexity from the encoder side to the decoder side, thereby providing appropriate frameworks for applications with complexity-constrained senders, such as sensor networks and video/multimedia compression (see distributed video coding). One of the main properties of distributed source coding is that the computational burden in encoders is shifted to the joint decoder.
History
In 1973, David Slepian and Jack Keil Wolf proposed the information theoretical lossless compression bound on distributed compression of two correlated i.i.d. sources X and Y. After that, this bound was extended to cases with more than two sources by Thomas M. Cover in 1975, while the theoretical results in the lossy compression case are presented by Aaron D. Wyner and Jacob Ziv in 1976.
Although the theorems on DSC were proposed in the 1970s, it was about 30 years later that attempts were made at practical techniques, based on the idea that DSC is closely related to channel coding, proposed in 1974 by Aaron D. Wyner. The asymmetric DSC problem was addressed by S. S. Pradhan and K. Ramchandran in 1999, focusing on statistically dependent binary and Gaussian sources and using scalar and trellis coset constructions to solve the problem. They further extended the work to the symmetric DSC case.
Syndrome decoding technology was first used in distributed source coding by the DISCUS system of S. S. Pradhan and K. Ramchandran (Distributed Source Coding Using Syndromes). They compress binary block data from one source into syndromes and transmit data from the other source uncompressed as side information. This kind of DSC scheme achieves asymmetric compression rates per source and results in asymmetric DSC. This asymmetric DSC scheme can be easily extended to the case of more than two correlated information sources. There are also some DSC schemes that use parity bits rather than syndrome bits.
The correlation between two sources in DSC has been modeled as a virtual channel, usually referred to as a binary symmetric channel.
Starting from DISCUS, DSC has attracted significant research activity and more sophisticated channel coding techniques have been adopted into DSC frameworks, such as Turbo Code, LDPC Code, and so on.
Similar to the previous lossless coding framework based on the Slepian–Wolf theorem, efforts have been made on lossy cases based on the Wyner–Ziv theorem. Theoretical results on quantizer designs were provided by R. Zamir and S. Shamai, while different frameworks have been proposed based on this result, including a nested lattice quantizer and a trellis-coded quantizer.
Moreover, DSC has been used in video compression for applications which require low complexity video encoding, such as sensor networks, multiview video camcorders, and so on.
With deterministic and probabilistic discussions of the correlation model of two correlated information sources, DSC schemes with more general compression rates have been developed. In these non-asymmetric schemes, both of the two correlated sources are compressed.
Under a certain deterministic assumption of correlation between information sources, a DSC framework in which any number of information sources can be compressed in a distributed way has been demonstrated by X. Cao and M. Kuijper. This method performs non-asymmetric compression with flexible rates for each source, achieving the same overall compression rate as repeatedly applying asymmetric DSC for more than two sources. Then, by investigating the unique connection between syndromes and complementary codewords of linear codes, they have translated the major steps of DSC joint decoding into syndrome decoding followed by channel encoding via a linear block code and also via its complement code, which theoretically illustrated a method of assembling a DSC joint decoder from linear code encoders and decoders.
Theoretical bounds
The information theoretical lossless compression bound on DSC (the Slepian–Wolf bound) was first proposed by David Slepian and Jack Keil Wolf in terms of entropies of correlated information sources in 1973. They also showed that two isolated sources can compress data as efficiently as if they were communicating with each other. This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975.
Similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of joint Gaussian sources.
Slepian–Wolf bound
Distributed coding is the coding of two or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent i.i.d. finite-alphabet random sequences $X$ and $Y$, the Slepian–Wolf theorem gives a theoretical bound for the lossless coding rate for distributed coding of the two sources as below:

$$R_X \ge H(X\mid Y),\qquad R_Y \ge H(Y\mid X),\qquad R_X + R_Y \ge H(X,Y)$$

If both the encoder and decoder of the two sources are independent, the lowest rate we can achieve for lossless compression is $H(X)$ and $H(Y)$ for $X$ and $Y$ respectively, where $H(X)$ and $H(Y)$ are the entropies of $X$ and $Y$. However, with joint decoding, if vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved. As long as the total rate of $X$ and $Y$ is larger than their joint entropy $H(X,Y)$ and none of the sources is encoded with a rate larger than its entropy, distributed coding can achieve arbitrarily small error probability for long sequences.
A special case of distributed coding is compression with decoder side information, where source $Y$ is available at the decoder side but not accessible at the encoder side. This can be treated as the condition that $R_Y = H(Y)$ has already been used to encode $Y$, while we intend to use $H(X \mid Y)$ to encode $X$. The whole system is operating in an asymmetric way (the compression rates for the two sources are asymmetric).
Wyner–Ziv bound
Shortly after the Slepian–Wolf theorem on lossless distributed compression was published, the extension to lossy compression with decoder side information was proposed as the Wyner–Ziv theorem. Similarly to the lossless case, two statistically dependent i.i.d. sources $X$ and $Y$ are given, where $Y$ is available at the decoder side but not accessible at the encoder side. Instead of the lossless compression in the Slepian–Wolf theorem, the Wyner–Ziv theorem looked into the lossy compression case.
The Wyner–Ziv theorem presents the achievable lower bound for the bit rate of $X$ at given distortion $D$. It was found that for Gaussian memoryless sources and mean-squared error distortion, the lower bound for the bit rate of $X$ remains the same no matter whether side information is available at the encoder or not.
Virtual channel
Deterministic model
Probabilistic model
Asymmetric DSC vs. symmetric DSC
Asymmetric DSC means that different bitrates are used in coding the input sources, while the same bitrate is used in symmetric DSC. Taking a DSC design with two sources as an example, where $X$ and $Y$ are two discrete, memoryless, uniformly distributed sources which generate sets of variables $x$ and $y$ of length 7 bits, and the Hamming distance between $x$ and $y$ is at most one, the Slepian–Wolf bound for them is:

$$R_X + R_Y \ge 10, \qquad R_X \ge 3, \qquad R_Y \ge 3$$

This means the theoretical bound is $R_X + R_Y = 10$, and symmetric DSC means 5 bits for each source. Other pairs with $R_X + R_Y = 10$ are asymmetric cases with different bit rate distributions between $X$ and $Y$, where $R_X = 7, R_Y = 3$ and $R_X = 3, R_Y = 7$ represent two extreme cases called decoding with side information.
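These entropies can be checked by direct enumeration, assuming all valid (x, y) pairs are equally likely (an assumption consistent with the uniform-source model above); a minimal Python sketch:

```python
import math

n = 7
# All valid pairs: y equals x, or differs from x in exactly one bit.
pairs = [(x, x ^ e) for x in range(2 ** n)
         for e in [0] + [1 << i for i in range(n)]]

p = 1 / len(pairs)                        # uniform joint distribution
H_XY = -sum(p * math.log2(p) for _ in pairs)
print(H_XY)                               # 10.0 bits = H(X,Y)

# X is uniform over 2^7 values, so H(X) = 7 bits and
# H(Y|X) = H(X,Y) - H(X) = 3 bits, matching the bound above.
```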
Practical distributed source coding
Slepian–Wolf coding – lossless distributed coding
It was understood in 1974 that Slepian–Wolf coding is closely related to channel coding, and after about 30 years practical DSC started to be implemented with different channel codes. The motivation behind the use of channel codes is that, in the two-source case, the correlation between the input sources can be modeled as a virtual channel which has source $X$ as input and source $Y$ as output. The DISCUS system proposed by S. S. Pradhan and K. Ramchandran in 1999 implemented DSC with syndrome decoding, which worked for the asymmetric case and was further extended to the symmetric case.
The basic framework of syndrome based DSC is that, for each source, its input space is partitioned into several cosets according to the particular channel coding method used. Every input of each source gets an output indicating which coset the input belongs to, and the joint decoder can decode all inputs by received coset indices and dependence between sources. The design of channel codes should consider the correlation between input sources.
A group of codes can be used to generate coset partitions, such as trellis codes and lattice codes. Pradhan and Ramchandran designed rules for the construction of sub-codes for each source, and presented results of trellis-based coset constructions in DSC, based on convolutional codes and set-partitioning rules as in trellis modulation, as well as lattice-code-based DSC. After this, an embedded trellis code was proposed for asymmetric coding as an improvement over their results.
After DISCUS system was proposed, more sophisticated channel codes have been adapted to the DSC system, such as Turbo Code, LDPC Code and Iterative Channel Code. The encoders of these codes are usually simple and easy to implement, while the decoders have much higher computational complexity and are able to get good performance by utilizing source statistics. With sophisticated channel codes which have performance approaching the capacity of the correlation channel, corresponding DSC system can approach the Slepian–Wolf bound.
Although most research has focused on DSC with two dependent sources, Slepian–Wolf coding has been extended to the case of more than two input sources, and sub-code generation methods from one channel code were proposed by V. Stankovic, A. D. Liveris, etc., given particular correlation models.
General theorem of Slepian–Wolf coding with syndromes for two sources
Theorem: Any pair of correlated uniformly distributed sources, $X, Y \in \{0,1\}^n$, with $d_H(X, Y) \le t$, can be compressed separately at a rate pair $(R_1, R_2)$ such that $n - k \le R_1, R_2 \le n$, $R_1 + R_2 \ge 2n - k$, where $R_1$ and $R_2$ are integers and $k \le n - \log_2\!\left(\sum_{i=0}^{t} \binom{n}{i}\right)$. This can be achieved using an $(n, k, 2t+1)$ binary linear code.
Proof: The Hamming bound for an $(n, k, 2t+1)$ binary linear code is $k \le n - \log_2\!\left(\sum_{i=0}^{t} \binom{n}{i}\right)$, and we have Hamming codes achieving this bound, therefore we have such a binary linear code with $k \times n$ generator matrix $\mathbf{G}$. Next we will show how to construct syndrome encoding based on this linear code.
Let $\mathbf{G}_1$ be formed by taking the first $n - R_1$ rows of $\mathbf{G}$, while $\mathbf{G}_2$ is formed using the remaining $n - R_2$ rows of $\mathbf{G}$. $\mathbf{C}_1$ and $\mathbf{C}_2$ are the subcodes of the Hamming code generated by $\mathbf{G}_1$ and $\mathbf{G}_2$ respectively, with $\mathbf{H}_1$ and $\mathbf{H}_2$ as their parity check matrices.
For a pair of inputs $(x, y)$, the encoder is given by $s_1 = \mathbf{H}_1 x$ and $s_2 = \mathbf{H}_2 y$. That means we can represent $x$ and $y$ as $x = u_1 \mathbf{G}_1 + \bar{x}$, $y = u_2 \mathbf{G}_2 + \bar{y}$, where $\bar{x}, \bar{y}$ are the representatives of the cosets of $s_1, s_2$ with regard to $\mathbf{C}_1, \mathbf{C}_2$ respectively. Since $y = x + e$ with $w(e) \le t$, we can get $x + y = u\mathbf{G} + c$, where $u = [u_1 \; u_2]$ and $c = \bar{x} + \bar{y}$.
Suppose there are two different input pairs with the same syndromes; that means there are two different strings $u^1 \ne u^2$ such that $x^1 + y^1 = u^1\mathbf{G} + c$ and $x^2 + y^2 = u^2\mathbf{G} + c$. Thus we will have $(x^1 + y^1) + (x^2 + y^2) = (u^1 + u^2)\mathbf{G}$, a nonzero codeword. Because the minimum Hamming weight of the code is $2t + 1$, the distance between $x^1 + y^1$ and $x^2 + y^2$ is at least $2t + 1$. On the other hand, $w(x^1 + y^1) \le t$ together with $w(x^2 + y^2) \le t$ implies that their distance is at most $2t$, which is a contradiction. Therefore, we cannot have more than one input pair with the same syndromes.
Therefore, we can successfully compress the two dependent sources with constructed subcodes from an $(n, k, 2t+1)$ binary linear code, with a rate pair $(R_1, R_2)$ such that $n - k \le R_1, R_2 \le n$ and $R_1 + R_2 \ge 2n - k$, where $R_1$ and $R_2$ are integers. All logarithms above are base 2.
Slepian–Wolf coding example
Take the same example as in the previous Asymmetric DSC vs. symmetric DSC part; this part presents the corresponding DSC schemes with coset codes and syndromes, including the asymmetric case and the symmetric case. The Slepian–Wolf bound for the DSC design is shown in the previous part.
Asymmetric case
In the case where $R_X = 7$ and $R_Y = 3$, the length of an input variable $x$ from source $X$ is 7 bits, therefore it can be sent losslessly with 7 bits independent of any other bits. Based on the knowledge that $x$ and $y$ have Hamming distance at most one, for input $y$ from source $Y$, since the receiver already has $x$, the only possible $y$ are those at distance at most 1 from $x$. If we model the correlation between the two sources as a virtual channel, which has input $x$ and output $y$, then as long as we get $x$, all we need to successfully "decode" $y$ are "parity bits" with the appropriate error correction ability, taking the difference between $x$ and $y$ as channel error. We can also model the problem with coset partition. That is, we want to find a channel code which is able to partition the space of input $y$ into several cosets, where each coset has a unique syndrome associated with it. With a given coset and $x$, there is only one $y$ that is possible to be the input, given the correlation between the two sources.
In this example, we can use the binary $(7, 4, 3)$ Hamming code, with parity check matrix $\mathbf{H}$. For an input $y$ from source $Y$, only the syndrome given by $s = \mathbf{H}y$ is transmitted, which is 3 bits. With received $x$ and $s$, suppose there are two inputs $y_1$ and $y_2$ with the same syndrome $s$. That means $\mathbf{H}y_1 = \mathbf{H}y_2$, which is $\mathbf{H}(y_1 + y_2) = 0$. Since the minimum Hamming weight of the $(7, 4, 3)$ Hamming code is 3, $d_H(y_1, y_2) \ge 3$. Therefore, the input $y$ can be recovered since $d_H(x, y) \le 1$.
Similarly, the bit distribution with $R_X = 3$, $R_Y = 7$ can be achieved by reversing the roles of $X$ and $Y$.
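A minimal Python sketch of this asymmetric scheme (the parity-check matrix below is one standard form of the (7,4) Hamming code; the decoder implements the coset argument above):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the
# binary representation of j+1, so a single-bit difference is
# located directly by its syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode_y(y):
    """Compress y (7 bits) to its 3-bit syndrome."""
    return H @ y % 2

def decode_y(s, x):
    """Recover y from its syndrome s and side information x,
    assuming Hamming distance d(x, y) <= 1."""
    d = (s + H @ x) % 2                   # = H (x XOR y)
    if not d.any():
        return x.copy()                   # y == x
    j = int(''.join(map(str, d)), 2) - 1  # differing bit position
    y = x.copy()
    y[j] ^= 1
    return y

x = np.array([0, 1, 1, 0, 1, 0, 1])
y = np.array([0, 1, 1, 1, 1, 0, 1])       # differs from x in one bit
assert (decode_y(encode_y(y), x) == y).all()
```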
Symmetric case
In the symmetric case, what we want is an equal bitrate for the two sources: 5 bits each, with separate encoders and a joint decoder. We still use linear codes for this system, as we did for the asymmetric case. The basic idea is similar, but in this case we need to do coset partition for both sources, while for a pair of received syndromes (corresponding to one pair of cosets), only one pair of input variables is possible given the correlation between the two sources.
Suppose we have a pair of linear codes $\mathbf{C}_1$ and $\mathbf{C}_2$ and an encoder-decoder pair based on linear codes which can achieve symmetric coding. The encoder output is given by $s_1 = \mathbf{H}_1 x$ and $s_2 = \mathbf{H}_2 y$. If there exist two pairs of valid inputs $(x_1, y_1)$ and $(x_2, y_2)$ generating the same syndromes, i.e. $\mathbf{H}_1 x_1 = \mathbf{H}_1 x_2$ and $\mathbf{H}_2 y_1 = \mathbf{H}_2 y_2$, we can get the following ($w$ represents Hamming weight):

$$x_1 + x_2 = c_1, \text{ where } c_1 \in \mathbf{C}_1$$

$$y_1 + y_2 = c_2, \text{ where } c_2 \in \mathbf{C}_2$$

Thus:

$$w(c_1 + c_2) = w\big((x_1 + y_1) + (x_2 + y_2)\big) \le w(x_1 + y_1) + w(x_2 + y_2) \le 2$$

where $c_1 \in \mathbf{C}_1$ and $c_2 \in \mathbf{C}_2$. That means, as long as the minimum distance between the two codes is larger than 2, we can achieve error-free decoding.
The two codes $\mathbf{C}_1$ and $\mathbf{C}_2$ can be constructed as subcodes of the $(7, 4, 3)$ Hamming code and thus have minimum distance of 3. Given the generator matrix $\mathbf{G}$ of the original Hamming code, the generator matrix $\mathbf{G}_1$ for $\mathbf{C}_1$ is constructed by taking any two rows from $\mathbf{G}$, and $\mathbf{G}_2$ is constructed from the remaining two rows of $\mathbf{G}$. The corresponding $5 \times 7$ parity-check matrix for each sub-code can be generated according to its generator matrix and used to produce the 5-bit syndromes.
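At this small block length the whole symmetric scheme can be verified by brute force. The sketch below is a toy illustration (coset labels stand in for the 5-bit syndromes): it splits a standard (7,4) Hamming generator matrix between the two sources and checks that joint decoding is unique:

```python
import numpy as np
from itertools import product

# Generator matrix of the (7,4,3) Hamming code, split between the sources
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

def span(rows):
    """All GF(2) linear combinations of the given rows."""
    words = {(0,) * 7}
    for r in rows:
        words |= {tuple((np.array(w) + r) % 2) for w in words}
    return words

C1, C2 = span(G[:2]), span(G[2:])   # two (7,2) subcodes, 32 cosets each

def coset(v, code):
    """Coset label of v: its lexicographically smallest member.
    With 2^7 / 4 = 32 cosets, each label is indexable with 5 bits."""
    return min(tuple((np.array(v) + np.array(c)) % 2) for c in code)

def joint_decode(l1, l2):
    """All pairs (x, y) with d_H(x, y) <= 1 matching both labels."""
    errors = [(0,) * 7] + [tuple(int(i == j) for i in range(7))
                           for j in range(7)]
    found = []
    for x in product((0, 1), repeat=7):
        if coset(x, C1) != l1:
            continue
        for e in errors:
            y = tuple((np.array(x) + np.array(e)) % 2)
            if coset(y, C2) == l2:
                found.append((x, y))
    return found

x = (1, 0, 1, 1, 0, 0, 1)
y = (1, 0, 1, 1, 0, 1, 1)           # differs from x in one position
assert joint_decode(coset(x, C1), coset(y, C2)) == [(x, y)]
```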
Wyner–Ziv coding – lossy distributed coding
In general, a Wyner–Ziv coding scheme is obtained by adding a quantizer and a de-quantizer to the Slepian–Wolf coding scheme. Therefore, a Wyner–Ziv coder design could focus on the quantizer and corresponding reconstruction method design. Several quantizer designs have been proposed, such as a nested lattice quantizer, trellis code quantizer and Lloyd quantization method.
Large scale distributed quantization
Unfortunately, the above approaches do not scale (in design or operational complexity requirements) to sensor networks of large sizes, a scenario where distributed compression is most helpful. If there are N sources transmitting at R bits each (with some distributed coding scheme), the number of possible reconstructions scales as $2^{NR}$. Even for moderate values of N and R (say N = 10, R = 2), prior design schemes become impractical. Recently, an approach using ideas borrowed from fusion coding of correlated sources has been proposed, where design and operational complexity are traded against decoder performance. This has allowed distributed quantizer design for network sizes reaching 60 sources, with substantial gains over traditional approaches.
The central idea is the presence of a bit-subset selector which maintains a certain subset of the received (NR bits, in the above example) bits for each source. Let $\mathcal{B}$ be the set of all subsets of the NR bits, i.e.

$$\mathcal{B} = 2^{\{1,\ldots,NR\}}$$

Then, we define the bit-subset selector mapping to be

$$\mathcal{S}: \{1,\ldots,N\} \to \mathcal{B}$$
Note that each choice of the bit-subset selector imposes a storage requirement (C) that is exponential in the cardinality of the set of chosen bits.
This allows a judicious choice of bits that minimizes the distortion, given the constraints on decoder storage. Additional limitations on the set of allowable subsets are still needed. The effective cost function that needs to be minimized is a weighted sum of distortion and decoder storage:

$$J = D + \lambda C$$
The system design is performed by iteratively (and incrementally) optimizing the encoders, decoder and bit-subset selector till convergence.
Non-asymmetric DSC
Non-asymmetric DSC for more than two sources
The syndrome approach can still be used for more than two sources. Consider $s$ binary sources of length $n$, $\mathbf{x}_1, \ldots, \mathbf{x}_s \in \{0,1\}^n$. Let $\mathbf{H}_1, \ldots, \mathbf{H}_s$ be the corresponding coding matrices of sizes $m_1 \times n, \ldots, m_s \times n$. Then the input binary sources are compressed into $\mathbf{s}_1 = \mathbf{H}_1\mathbf{x}_1, \ldots, \mathbf{s}_s = \mathbf{H}_s\mathbf{x}_s$, of $m = m_1 + \cdots + m_s$ total bits. Apparently, two source tuples cannot be recovered at the same time if they share the same syndrome. In other words, if all source tuples of interest have different syndromes, then one can recover them losslessly.
A general theoretical result does not seem to exist. However, for a restricted kind of source, the so-called Hamming source, which has at most one source different from the rest and at most one bit location not all identical, practical lossless DSC is shown to exist in some cases. For the case when there are more than two sources, the number of source tuples in a Hamming source is $2^n(ns + 1)$. Therefore, a packing bound $2^n(ns + 1) \le 2^m$ obviously has to be satisfied. When the packing bound is satisfied with equality, we may call such a code perfect (an analogue of perfect codes in error-correcting codes).
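The counting behind this packing bound is easy to check for the parameter sets discussed below (s denotes the number of sources; the sketch is ours):

```python
def hamming_source_tuples(n, s):
    """Tuples in a Hamming source: all s words identical (2^n ways),
    plus one source differing in exactly one bit (2^n * n * s ways)."""
    return 2 ** n * (n * s + 1)

# Packing bound 2^n (ns + 1) <= 2^m, with equality for a perfect code:
assert hamming_source_tuples(5, 3) == 2 ** 9     # n = 5  -> m = 9
assert hamming_source_tuples(21, 3) == 2 ** 27   # n = 21 -> m = 27
```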
A simplest set of parameters satisfying the packing bound with equality is $n = 5$, $m = 9$. However, it turns out that such a syndrome code does not exist. The simplest (perfect) syndrome code with more than two sources has $n = 21$ and $m = 27$: coding matrices $\mathbf{H}_1$, $\mathbf{H}_2$ and $\mathbf{H}_3$ whose rows are taken from a partition of the rows of a common parity-check structure can compress a Hamming source (i.e., sources that have no more than one bit different will all have different syndromes). For example, in the symmetric case, each of the three coding matrices has size $9 \times 21$.
See also
Linear code
Syndrome decoding
Low-density parity-check code
Turbo Code
References
Information theory
Coding theory
Wireless sensor network
Data transmission | Distributed source coding | Mathematics,Technology,Engineering | 3,832 |
69,191,814 | https://en.wikipedia.org/wiki/Brenda%20Almond | Brenda Margaret Almond (19 September 1937 – 14 January 2023) was a British philosopher, known for her work on philosophy of education and applied ethics. She was an elected member of the Austrian Academy of Sciences.
Biography
Almond co-founded the Society for Applied Philosophy in 1982 with her then colleague at Surrey University, Anthony O'Hear, and co-founded the International Journal of Applied Philosophy in 1983, as part of a conscious strategy of moving philosophy away from abstract and abstruse debates towards issues that affect people in their everyday lives. Almond's writing highlights issues like health, family and social relations. In 1987, at a time when HIV/AIDS was still barely understood, she wrote in The Times on the difficult balance of health and safety over risk and freedom. "What is clear", she wrote, "is that in the absence of a vaccine or cure, the virus will increasingly move towards the centre of the world stage". Almond went on to write a book setting out the key debates in the area, AIDS: A Moral Issue (MacMillan), in 1990. Among the topics discussed are confidentiality, autonomy and welfare, the role of the media, legal implications of infection in Britain and the US, and coping with the threat of death, along with some theological reflections.
Almond also organised and reported on academic conferences on the issue including one held at Surrey University in 1986 focussing on medical confidentiality and discrimination and the Third International Conference on AIDS in Washington in 1987.
In later years, Almond moved on to issues such as biotechnologies and even debates about who and what constituted a “legitimate target” during a war. In an opinion piece for the magazine Philosophy Now she accused fellow philosophers of still preferring to “stick to tired and familiar academic debates while the world burns”.
Almond was later a professor emeritus at Hull University.
In Education and the Individual (written when she was in her thirties, under her married name), Almond argued that ultimately the freedom to opt out of the education system altogether must be protected, as well as the freedom to choose a religious education in a secular state, or a secular education in a religious state; she went on to write Moral Concerns, The Philosophical Quest, Exploring Ethics: A Traveller's Tale and The Fragmenting Family. As part of a personal profile of Almond, the Times Higher Education Supplement says "she argues that the family is about more than stability in the present: it is about the past and the future" and notes that the book emphasises G. K. Chesterton's description of the family as "this frail cord, flung from the forgotten hills of yesterday to the invisible mountains of tomorrow".
As well as being a philosophy professor, Almond sought to present her particular view of individual rights to a wider public. She argued regularly for maintenance of the “welfare of the child provision” when legislation was crafted to reflect the changing technologies of birth and raised ethical issues surrounding the use of human embryos.
Ailsa Stevens wrote in an article that appeared in BioNews that Almond, "felt that anxieties over hybrid embryo research had been fuelled by confusion over the definition of an embryo".
Almond died in Sussex on 14 January 2023, at the age of 85. In an appreciation published by The Guardian, her son Martin Cohen noted that her "authentic voice" was to be found in her best-known title, The Philosophical Quest (1990), a mix of conventional, essentially educational, summaries of the core themes of philosophy, alongside more fluid, creative passages in which the narrator records receiving philosophical letters from a mysterious correspondent called Sophia, even as her later writing centred on defence of the "traditional family" from both social and technological changes.
Selected publications
Awards and honors
She was awarded an Honorary doctorate by the University of Utrecht in 1998. In 1999 she was named an elected member of the Austrian Academy of Sciences.
References
1937 births
2023 deaths
20th-century British philosophers
21st-century British philosophers
Alumni of University College London
British women philosophers
English philosophers
Environmental ethicists
Members of the Austrian Academy of Sciences
Academics of the University of Hull
People from Liverpool | Brenda Almond | Environmental_science | 829 |
26,830,333 | https://en.wikipedia.org/wiki/Water-energy%20nexus | The water-energy nexus is the relationship between the water used for energy production, including both electricity and sources of fuel such as oil and natural gas, and the energy consumed to extract, purify, deliver, heat/cool, treat and dispose of water (and wastewater), sometimes referred to as the energy intensity (EI). Energy is needed in every stage of the water cycle, from producing, moving, treating and heating water to collecting and treating wastewater. The relationship is not truly a closed loop, as the water used for energy production need not be the same water that is processed using that energy, but all forms of energy production require some input of water, making the relationship inextricable.
Among the first studies to evaluate the water and energy relationship was a life-cycle analysis conducted by Peter Gleick in 1994 that highlighted the interdependence and initiated the joint study of water and energy. In 2014 the US Department of Energy (DOE) released their report on the water-energy nexus citing the need for joint water-energy policies and better understanding of the nexus and its susceptibility to climate change as a matter of national security. The hybrid Sankey diagram in the DOE's 2014 water-energy nexus report summarizes water and energy flows in the US by sector, demonstrating interdependence as well as singling out thermoelectric power as the single largest user of water, used mainly for cooling.
Water used in the energy sector
All types of power generation consume water, either to process the raw materials used in the facility, to construct and maintain the plant, or to generate the electricity itself. Renewable power sources such as photovoltaic solar and wind power, which require little water to produce energy, still require water to process the raw materials used to build them. Water can either be used or consumed, and can be categorized as fresh, ground, surface, blue, grey or green, among others. Water is considered used if it does not reduce the supply of water to downstream users, i.e. water that is taken and returned to the same source (instream use), such as in thermoelectric plants that use water for cooling and are by far the largest users of water. While used water is returned to the system for downstream uses, it has usually been degraded in some way, mainly due to thermal or chemical pollution, and the natural flow has been altered, which does not factor into an assessment if only the quantity of water is considered. Water is consumed when it is removed completely from the system, such as by evaporation or consumption by crops or humans. When assessing water use, all these factors must be considered, along with spatiotemporal considerations, making precise determination of water use very difficult. According to the International Energy Agency (IEA), water stress also poses risks to the transport of fuels and materials. In 2022, droughts and severe heatwaves led to low water levels in key European rivers such as the Rhine, limiting barge transport of coal, chemicals and other materials.
Spang et al. (2014) conducted a study of the water consumption for electricity production (WCEP) internationally, which showed both the variation in energy types produced across countries and the vast differences in the efficiency of power production per unit of water use. Operation of water distribution systems and power distribution systems under emergency conditions of limited power and water availability is an important consideration for improving the overall resilience of the water-energy nexus. Khatavkar and Mays (2017a) present a methodology for the control of water distribution and power distribution systems under emergency conditions of drought and limited power availability, to ensure at least a minimal supply of cooling water to power plants. Khatavkar and Mays (2017) applied an optimization model to a hypothetical regional-level water-energy nexus system, which showed improved resilience for several contingency scenarios.
Increasingly controversial has been the use of water resources for hydraulic fracturing of shale gas and tight oil reserves. Many environmentalists are deeply concerned about the potential for such operations to exacerbate local water scarcity (since the water volumes required are large) and to produce considerable volumes of polluted water (both directly through pollution of fracking water, and indirectly through contamination of groundwater). With rising energy prices in North America and Europe in the 2020s it is likely that government and industry interest in hydraulic fracturing will grow.
Energy intensity
The operation of urban water systems requires substantial energy support. Key processes such as water transfer, consumption, and wastewater treatment consume significant amounts of energy, sparking discussions about the energy intensity and carbon emissions of water systems.
US (California)
In 2001, operating water systems in the US consumed approximately 3% of the total annual electricity (~75 TWh). The California's State Water Project (SWP) and Central Valley Project (CVP) are together the largest water system in the world with the highest water lift, over 2000 ft. across the Tehachapi Mountains, delivering water from the wetter and relatively rural north of the state, to the agriculturally intensive central valley, and finally to the arid and heavily populated south. Consequently, the SWP and CVP are the single largest consumers of electricity in California consuming approximately 5 TWh of electricity each per year. In 2001, 19% of the state's total electricity use (~48 TWh/year) was used in processing water, including end uses, with the urban sector accounting for 65% of this. In addition to electricity, 30% of California's natural gas consumption was due to water-related processes, mainly residential water heating, and 88 million gallons of diesel was consumed by groundwater pumps for agriculture. The residential sector alone accounted for 48% of the total combined electricity and natural gas consumed for water-related processes in the state.
According to the California Public Utilities Commission (CPUC) Energy Division's Embedded Energy in Water Studies report: "'Energy Intensity' refers to the average amount of energy needed to transport or treat water or wastewater on a per unit basis."
Energy intensity is sometimes used synonymously with embedded or embodied energy. In 2005, water deliveries to Southern California were assessed to have an average EI of 12.7 MWh/MG, nearly two-thirds of which was due to transportation. Following the findings that a fifth of California's electricity is consumed in water-related processes, including end use, the CPUC responded by authorising a statewide study into the relationship between energy and water, conducted by the California Institute for Energy and Environment (CIEE), and developed programs to save energy through water conservation.
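As a worked illustration of the unit (the conversion is ours, not the report's), 12.7 MWh per million gallons is roughly 3.4 kWh per cubic metre:

```python
# Average energy intensity of 2005 water deliveries to Southern California
ei_mwh_per_mg = 12.7                 # megawatt-hours per million gallons

KWH_PER_MWH = 1_000
M3_PER_MILLION_GALLONS = 3_785.41    # 1 US million gallons in cubic metres

ei_kwh_per_m3 = ei_mwh_per_mg * KWH_PER_MWH / M3_PER_MILLION_GALLONS
print(f"{ei_kwh_per_m3:.2f} kWh/m^3")  # ~3.35 kWh per cubic metre
```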
Arab region
According to the World Energy Outlook 2016, in the Middle East the water sector's share of total electricity consumption is expected to increase from 9% in 2015 to 16% by 2040, because of a rise in desalination capacity. The Arab region includes the following countries: Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, the Palestinian Territories, Algeria, Bahrain, Egypt, Iraq, Jordan, Qatar, Sudan, Saudi Arabia, Syria, Tunisia, the United Arab Emirates, and Yemen. The region is one of the most water-stressed in the world: rainfall is mostly rare, or falls in an unpredictable pattern. The cumulative area of the Arab region is approximately 10.2% of the world's area, but the region only receives 2.1% of the world's average annual precipitation. Further, the region holds 0.3% of the world's annual renewable water resources (ACSAD 1997). Consequently, the region has experienced a declining fresh water supply per capita, with a shortage of roughly 42 cubic kilometers against water demand. This shortage is expected to grow three times by 2030, and four times by 2050, which is particularly alarming given that the world's economic stability depends heavily on the Arab region.
There are numerous methods to mitigate the growing gap in fresh water supply per capita. One applicable method is desalination, which is ubiquitous particularly in the GCC region. Approximately 50% of the world's desalination capacity is contained in the Arab region, and almost all of it is held in the GCC countries. Bahrain provides 79% of its fresh water through desalination, Qatar around 75%, Kuwait around 70%, Saudi Arabia 15%, and the UAE about 67%. These Persian Gulf countries built enormous desalination plants to fulfill the water supply shortages as their economies have developed. Agriculture in the GCC region accounts for approximately 2% of its GDP; however, it utilizes 80% of the water produced. It should also be noted that an immense amount of energy, mostly from oil, is required to operate these desalination plants. Countries such as Saudi Arabia, Bahrain, and Kuwait will face difficulty meeting the demand for desalination if the current trend continues. The GCC spends 10-25% of its generated electric power to desalinate water.
Hydroelectricity
Hydroelectricity is a special case of water used for energy production, mainly because hydroelectric power generation is regarded as clean and renewable energy, and dams (the main source of hydroelectric production) serve multiple purposes besides energy generation, including flood prevention, storage, control and recreation, which make justifiable allocation analyses difficult. Furthermore, the impacts of hydroelectric power generation can be hard to quantify, both in terms of evaporative consumptive losses and the altered quality of water, since damming results in flows that are much colder than free-flowing streams. In some cases the moderation of flows can be seen as a rivalry of water use in time, which may also need to be accounted for in impact analysis. Willingness to pay can be used as an estimate to determine the value of the cost.
Retrofitting existing dams to produce electricity has been one approach to hydroelectricity. While using dams to produce electricity is seen as a cleaner form of energy, it does not come without its own challenges to the environment. Hydroelectric power has typically been seen as a lower carbon emission strategy for generating power; however, recent studies have linked dams to greenhouse gas emissions. Galy-Lacaux et al. conducted a study to measure the emissions produced by the Petit Saut Dam on the Sinnamary River in French Guiana over a two-year period. The researchers found that about 10% of the carbon stored in soil and vegetation was released in gaseous form within 2 years.
Water Availability
Because of the shift toward developing new renewable energy technologies, there is new added stress on water availability. Renewable energy methods such as biofuels, concentrating solar power (CSP), carbon capture, utilization and storage, or nuclear power are quite water intensive. Water scarcity has a huge impact on energy production and reliability.
See also
Climate and energy
Water, energy and food security nexus
References
External links
California's Water – Energy Relationship
WaterEnergyNEXUS – Advanced Technologies and Best Practices
Embedded Energy in Water Studies Study 1: Statewide and Regional Water-Energy Relationship
Embedded Energy in Water Studies Study 2: Water Agency and Function Component Study and Embedded Energy- Water Load Profiles
The Water-Energy Nexus: Challenges and Opportunities
Thirsty Energy
Water supply
Water and the environment
Energy | Water-energy nexus | Physics,Chemistry,Engineering,Environmental_science | 2,303 |
30,607,352 | https://en.wikipedia.org/wiki/Amy%20Alexander%20%28artist%29 | Amy Alexander is an artist and researcher working in audio/visual performance, interactive art and software art, under a number of pseudonyms including VJ Übergeek and Cue P. Doll. She is a professor at the Department of Visual Arts at the University of California, San Diego.
Biography
Alexander is a digital artist, in the areas of software art and live coding. Her works have been exhibited and performed at museums, festivals, and conferences including the Whitney Museum, Transmediale, Ars Electronica, and SIGGRAPH. She has also performed in non-art venues including nightclubs and street performances.
Alexander's first widely exhibited new media work was the net art project, The Multi-Cultural Recycler (1996/7), which was nominated for a Webby Award in 1999. She then developed the plagiarist.org website, which was known for its humorous projects related to Internet culture. Since 2012, her work has been in video installation and visual performance, most notably SVEN, Discotrope: The Secret Nightlife of Solar Cells with Annina Ruest, and CyberSpaceLand. She has also written texts on historical and contemporary audiovisual performance, including a chapter in the book See This Sound - Audiovisuology: Compendium.
Notable works
In 2005, Alexander's piece theBot was included in an exhibit at the New Museum in New York City, as part of 40 works selected by Rhizome, an organization and platform for Internet art. In 2022, Alexander's What the robot saw, a continuously updated livestream of "low engagement" YouTube videos and channels, was included in the Rencontres d'Arles film festival in Berlin, Germany and Paris, France. The livestream grabbed clips of videos with no or few views and added machine-generated subtitles.
Education
Alexander attended Rowan University from 1988 to 1991 and received her BA in Communications: Radio/TV/Film. She then attended the California Institute of the Arts from 1993 to 1996, and received her MFA in Film/Video and New Media.
Career
Alexander developed a background in programming, music, and visual media at her alma maters. She taught at the California Institute of the Arts and the University of Southern California. She also worked in television, animation, information technology and new media.
Amy Alexander is currently Professor of Visual Arts: Computing at the University of California, San Diego. Her teaching focuses on contemporary expanded cinema, visual performance, abstract cinema history, and process-based digital media art.
References
External links
Pau Alsina interviews Amy Alexander
Academic profile
American digital artists
American women digital artists
American performance artists
Living people
Net.artists
Year of birth missing (living people)
21st-century American women artists | Amy Alexander (artist) | Technology | 556 |
54,292,928 | https://en.wikipedia.org/wiki/Sentinus | Sentinus is an educational charity based in Lisburn, Northern Ireland that provides educational programs for young people interested in science, technology, engineering and mathematics (STEM).
History
Northern Ireland produces around 2,000 qualified IT workers each year; there are around 16,000 IT jobs in the Northern Ireland economy.
Function
It works with EngineeringUK and the Council for the Curriculum, Examinations & Assessment (CCEA), and with primary and secondary schools in Northern Ireland.
It runs summer placements and IT workshops for those of sixth-form age (16-18). It offers Robotics Roadshows for primary school children.
Sentinus Young Innovators
Sentinus hosts the annual Big Bang Northern Ireland Fair, which incorporates Sentinus Young Innovators. This is a one-day science and engineering project exhibition for post-primary students. It is one of the largest such events in the United Kingdom. In 2019 over 3,000 students participated from 130 schools across both Northern Ireland and the Republic of Ireland.
The competition is affiliated with the International Science and Engineering Fair (ISEF) and the Broadcom MASTERS program. The overall winner represents Northern Ireland at the following year's ISEF.
Past Overall Winners
See also
Discover Science & Engineering, equivalent in the Republic of Ireland
Science Week Ireland
The Big Bang Fair
Young Scientist and Technology Exhibition
References
External links
Sentinus
Computer science education in the United Kingdom
Educational charities based in the United Kingdom
Educational organisations based in Northern Ireland
Engineering education in the United Kingdom
Engineering organizations
Learning programs in Europe
Mathematics education in the United Kingdom
Science and technology in Northern Ireland
Science events in the United Kingdom | Sentinus | Engineering | 321 |
39,008,595 | https://en.wikipedia.org/wiki/Corps%20of%20Canadian%20Railway%20Troops | The Corps of Canadian Railway Troops were part of the Canadian Expeditionary Force (CEF) during World War I. Although Canadian railway units had been arriving in France since August 1915, it was not until March 1917 that the units were placed under a unified headquarters named the Canadian Railway Troops. They were redesignated as the "Corps of ..." on 23 April 1918. The corps was disbanded along with the rest of the CEF on 1 November 1920.
Organization
The initial 500 men came from the Canadian Pacific Railway, but overall the railway troops had 13,000 members.
Canadian Overseas Railway Construction Corps
1st Construction Battalion
2nd CRT Battalion – formed from 127th Battalion (12th York Rangers), CEF
3rd CRT Battalion – 239th Battalion
4th CRT Battalion – Depot unit
5th CRT Battalion – Depot unit
6th CRT Battalion – 228th Battalion
7th CRT Battalion – 257th Battalion
8th CRT Battalion – 218th and 211th Battalions
9th CRT Battalion – 1st Pioneer
10th CRT Battalion – 256th Battalion
11th CRT Battalion – 3rd Labour Battalion
12th CRT Battalion – 2nd Labour Battalion
13th CRT Battalion – Depot unit
Further reading
References
Canadian Expeditionary Force
Administrative corps of the Canadian Army
Railway troops | Corps of Canadian Railway Troops | Engineering | 242 |
68,749,668 | https://en.wikipedia.org/wiki/Active%20Asteroids%20%28citizen%20science%20project%29 | Active Asteroids is a NASA partner citizen science project that has successfully discovered active asteroids, including main-belt comets, quasi-Hilda objects, and Jupiter family comets. The project is hosted on the Zooniverse platform and is funded by an NSF Graduate Research Fellowship Program. It uses images from the Dark Energy Camera (DECam) to search for tails around asteroids and other minor planets. The research team is led by Colin Orion Chandler. As of April 2024, about 8,300 volunteers had carried out 6.7 million classifications of 430 thousand images. At the time only 60 active asteroids were known, and the 16 new active objects discovered by this project significantly increased the sample of known objects.
Pre-launch preparation
Before the team launched the project, they gained experience with DECam and published three papers. These include the detection of activity around the previously known active asteroid (62412) 2000 SY178, the finding of 6 years of activity on (6478) Gault, and activity discovered on the centaur 2014 OG392.
Discoveries
The project uses a pipeline called HARVEST, which compares metadata from astronomical image archives with data from the Minor Planet Center and produces images at the positions of minor planets. It also excludes images with no detection, or images in which asteroids cannot be detected. Since February 2024 the team has also used a Convolutional Neural Network (CNN), called TailNet, to filter out bad images before they are shown to volunteers and to identify highly likely candidates. This CNN uses classification labels made by the volunteers and is constantly improved with new classifications. One of the first discoveries was made in September 2022, when the team published a paper describing how 282P/(323137) 2003 BM80 showed sustained activity over 15 months in 2021-2022. Activity had previously been reported in 2012-2013, and the team analysed the orbit, finding that it is an outbursting quasi-Hilda object.
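HARVEST's code is not published in the article; the sketch below is a purely hypothetical illustration of the kind of cross-match such a pipeline performs (all names and fields are invented):

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """Metadata for one archival image (hypothetical schema)."""
    ra_min: float   # field-of-view bounds, degrees
    ra_max: float
    dec_min: float
    dec_max: float
    mjd: float      # observation epoch

def object_in_exposure(exp: Exposure, ephemeris) -> bool:
    """Return True if a minor planet's predicted sky position at the
    exposure epoch falls inside the exposure's field of view.
    `ephemeris(mjd) -> (ra, dec)` is assumed to be derived from
    Minor Planet Center orbit data."""
    ra, dec = ephemeris(exp.mjd)
    return (exp.ra_min <= ra <= exp.ra_max and
            exp.dec_min <= dec <= exp.dec_max)

# Matching exposures would then be cut into thumbnails centred on
# (ra, dec) and screened (e.g. by TailNet) before volunteers see them.
```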
List of discoveries
See also
other citizen science projects researching minor planets:
Asteroid Zoo (inactive)
Stardust@home
Catalina Outer Solar System Survey (inactive)
other citizen science projects
Zooniverse citizen science platform
BOINC volunteer computing platform
Planet Hunters: exoplanet discovery project
Backyard Worlds: brown dwarf discovery project
References
Astronomy websites
Astronomy projects
Human-based computation
Citizen science
Internet properties established in 2021 | Active Asteroids (citizen science project) | Astronomy,Technology | 463 |
7,365,406 | https://en.wikipedia.org/wiki/Tell%20Hammeh | Tell Hammeh is a relatively small tell in the central Jordan Valley, Hashemite Kingdom of Jordan, located where the Zarqa River valley opens into the Jordan Valley.
It is the site of some of the earliest bloomery smelting of iron, from around 930 BC.
It is close to several of the larger tells in this part of the Jordan Valley (e.g. Tell Deir 'Alla, Tell al-Sa'idiyeh) as well as to the natural resources desirable in metal production: access to water, outcrops of marly clays (see Veldhuijzen 2005b, 297), and above all the only iron ore deposit of the wider region at Mugharet al-Warda.
Excavation
The excavations at Hammeh are part of the Deir 'Alla Regional Project, a joint undertaking of Yarmouk University in Irbid, Jordan, and Leiden University in the Netherlands, in collaboration with the Jordanian Department of Antiquities.
The site's most intriguing feature is the presence of a substantial and very early iron smelting operation, as evidenced by large quantities of slag, technical ceramics, furnace remnants etc. This activity dates to 930 BC.
Fieldwork at Tell Hammeh took place in 1996, 1997, and 2000. The first two (rescue) seasons were directed by Dr E.J. van der Steen; the third season was directed by Dr H.A. Veldhuijzen. A fourth season, planned in 2003, had to be abandoned due to the invasion of Iraq. As with the third season, the focus of new excavation would primarily be on the iron smelting evidence. A new excavation was to start in May 2009.
Research
Extensive research has been carried out on the metallurgical material from Tell Hammeh. Both excavation and archaeometric analyses were carried out by Dr H.A. Veldhuijzen, first at Leiden University, then since 2001 at the UCL Institute of Archaeology, as a part of the joint excavations conducted by Yarmouk University and Leiden University and co-directed by Prof. Dr. Zeidan Kafafi and Dr. Gerrit Van der Kooij.
Chronology and iron smelting activities
Several periods are attested at Hammeh. From bedrock upward, remains of Chalcolithic (ca. 4500-3000 BC) and Early Bronze Age (ca. 3000-2000 BC) occupation were found, followed by more substantial layers of Late Bronze Age (ca. 1600-1150 BC) material. Hammeh appears continuously settled through the Late Bronze Age and Iron Age I (ca. 1150-1000 BC), up to the moment when iron production started in the early Iron Age II (see van der Steen 2004).
At that point in time, domestic structures, at least in the excavated areas, cease to exist, and are covered, without a clear interruption, by a stratigraphically well defined phase of iron production. This phase has a complex internal layering, likely reflecting seasonal activity over an extended period of time. (Veldhuijzen 2005a).
This phase consists of large quantities of various types of slag, most belonging to a bloomery iron smelting operation, and a fraction to primary smithing (i.e. bloom-smithing or bloom consolidation).
Very soon or immediately after iron production ceased, habitation of the site resumed. This later Iron Age II phase seems to form the last extensive occupation of Tell Hammeh. Based on examination of the extensive pottery finds from this post-smelting phase, it can be assumed that the iron production activities must have ended no later than 750 BC. No settlement structures contemporary to the iron smelting phase are presently known from Tell Hammeh.
See also
Hama (disambiguation)
References
External links
Information on Hammeh and iron smelting
Archaeological sites in Jordan
History of metallurgy
Prehistoric mines
Mines in Jordan | Tell Hammeh | Chemistry,Materials_science | 820 |
477,808 | https://en.wikipedia.org/wiki/Portico | A portico is a porch leading to the entrance of a building, or extended as a colonnade, with a roof structure over a walkway, supported by columns or enclosed by walls. This idea was widely used in ancient Greece and has influenced many cultures, including most Western cultures.
Porticos are sometimes topped with pediments.
Palladio was a pioneer of using temple-fronts for secular buildings. In the UK, the temple-front at The Vyne, Hampshire, was the first portico applied to an English country house.
A pronaos is the inner area of the portico of a Greek or Roman temple, situated between the portico's colonnade or walls and the entrance to the cella, or shrine. Roman temples commonly had an open pronaos, usually with only columns and no walls, and the pronaos could be as long as the cella. The word pronaos (πρόναος) is Greek for "before a temple". In Latin, a pronaos is also referred to as an anticum or prodomus. The pronaos of a Greek and Roman temple is typically topped with a pediment.
Types
The different variants of porticos are named by the number of columns they have. The "style" suffix comes from the Greek στῦλος (stylos), "column".
Tetrastyle
The tetrastyle has four columns; it was commonly employed by the Greeks and the Etruscans for small structures such as public buildings and amphiprostyles.
The Romans favoured the four columned portico for their pseudoperipteral temples like the Temple of Portunus, and for amphiprostyle temples such as the Temple of Venus and Roma, and for the prostyle entrance porticos of large public buildings like the Basilica of Maxentius and Constantine. Roman provincial capitals also manifested tetrastyle construction, such as the Capitoline Temple in Volubilis.
The North Portico of the White House is perhaps the most notable four-columned portico in the United States.
Hexastyle
Hexastyle buildings had six columns and were the standard façade in canonical Greek Doric architecture from the Archaic period (600–550 BCE) to the Age of Pericles (450–430 BCE).
Greek hexastyle
Some well-known examples of classical Doric hexastyle Greek temples:
The group at Paestum comprising the Temple of Hera (c. 550 BCE), the Temple of Apollo (c. 450 BCE), the first Temple of Athena ("Basilica") (c. 500 BCE) and the second Temple of Hera (460–440 BCE)
The Temple of Aphaea at Aegina c. 495 BCE
Temple E at Selinus (465–450 BCE) dedicated to Hera
The Temple of Zeus at Olympia, now a ruin
Temple F or the so-called "Temple of Concordia" at Agrigentum (c. 430 BCE), one of the best-preserved classical Greek temples, retaining almost all of its peristyle and entablature
The "unfinished temple" at Segesta (c. 430 BCE)
The Temple of Hephaestus below the Acropolis at Athens, long known as the "Theseum" (449–444 BCE), also one of the most intact Greek temples surviving from antiquity
The Temple of Poseidon on Cape Sunium (c. 449 BCE)
Hexastyle was also applied to Ionic temples, such as the prostyle porch of the sanctuary of Athena on the Erechtheum, at the Acropolis of Athens.
Roman hexastyle
With the colonization by the Greeks of Southern Italy, hexastyle was adopted by the Etruscans and subsequently acquired by the ancient Romans. Roman taste favoured narrow pseudoperipteral and amphiprostyle buildings with tall columns, raised on podiums for the added pomp and grandeur conferred by considerable height. The Maison Carrée at Nîmes, France, is the best-preserved Roman hexastyle temple surviving from antiquity.
Octastyle
Octastyle buildings had eight columns; they were considerably rarer than the hexastyle ones in the classical Greek architectural canon. The best-known octastyle buildings surviving from antiquity are the Parthenon in Athens, built during the Age of Pericles (450–430 BCE), and the Pantheon in Rome (125 CE). The destroyed Temple of Divus Augustus in Rome, the centre of the Augustan cult, is shown on Roman coins of the 2nd century CE as having been built in octastyle.
Decastyle
The decastyle has ten columns, as in the temple of Apollo Didymaeus at Miletus and the portico of University College London.
The only known Roman decastyle portico is on the Temple of Venus and Roma, built by Hadrian in about 130 CE.
Gallery
See also
Citations
General and cited references
External links
Ancient Roman architectural elements
Architectural elements
Columns and entablature | Portico | Technology,Engineering | 1,043 |
27,343,427 | https://en.wikipedia.org/wiki/World%20Renewable%20Energy%20Network | WREN is a major non-profit organization registered in the United Kingdom with charitable status and affiliated to UNESCO, the Deputy Director General of which is its honorary President. It has a Governing Council, an Executive Committee and a Director General. It maintains links with many United Nations, governmental and non-governmental organisations.
Established in 1992 during the second World Renewable Energy Congress in Reading, UK, WREN supports and enhances the utilisation and implementation of renewable energy sources that are both environmentally safe and economically sustainable. This is done through a worldwide network of agencies, laboratories, institutions, companies and individuals, all working together towards the international diffusion of renewable energy technologies and applications. Representing most countries in the world, it aims to promote the communication and technical education of scientists, engineers, technicians and managers in this field and to address itself to the energy needs of both developing and developed countries.
Over two billion dollars have now been allocated to projects dealing with renewable energy and the environment by the World Solar Summit and World Solar Decade, together with the World Bank.
Global Activities of WREC/WREN
The global activities of the World Renewable Energy Congress / Network encompass:
Newsletter
Regional meetings
Scientific publications
Targeted books and annual magazine
Workshops on renewable energy topics
Journal publication "Renewable Energy"
Competitions and awards promoting renewable energy
International congresses (World Renewable Energy Congress, WREC)
Mission statement
With the accelerated approach of the global climate-change point-of-no-return, the need to address the pivotal role of renewable energy in the formation of coping strategies, rather than prevention, is more crucial than ever. Sustainability, green buildings, and the development of the large-scale renewable energy industry must be at the top of all development, economic, financial and political agendas. The time for action has arrived. Prevention, and questioning how and why we face this great challenge, is a luxury we can no longer indulge. We welcome the establishment of the long-overdue International Renewable Energy Agency, which we hope will work side-by-side with similar intergovernmental agencies striving for the adoption of renewable energies.
Major events
The major event organised by WREC/WREN is the biennial congress, normally held during the summer of every even year. The congresses are mostly run and organised by the WREC headquarters in Brighton, UK. All members of WREC/WREN are entitled to bid to host the Congress. The WREC/WREN Council meets and decides the location based on: availability of local funding and sponsorship; ease of travel to the location; extent of host government and institutional support; and benefits to the local country. All local organisation and services must be provided by the host country.
The first three congresses were held in the UK (Reading), followed by a move to Denver (United States) and then to Florence (Italy). In the year 2000 the congress returned to the UK (Brighton) with every effort being made to ensure that this event enhanced the recognition of Renewable Energies in the new millennium. In 2002 the congress took place in Cologne (Germany) and 2004 once more in Denver (USA). In 2006 the congress was held in Florence (Italy) and in 2008 in Glasgow (UK). The next congresses will be in Abu Dhabi (UAE) in 2010 and in Denver (USA) in 2012 respectively.
The following table shows the statistics for the previous WREC conferences:
Purpose of WREC
At no time in modern history has energy played a more crucial role in the development and well-being of nations than at present. The source and nature of energy, the security of supply and the equity of distribution, and the environmental impact of its supply and utilization are all crucial matters to be addressed by suppliers, consumers, governments, industry, academia, and financial institutions.
The World Renewable Energy Congress (WREC), a major recognised forum for networking between these sectors, addresses these issues through regular meetings and exhibitions, bringing together representatives of all those involved in the supply, distribution, consumption and development of energy sources which are benign, sustainable, accessible and economically viable. WREC enables policy makers, researchers, manufacturers, economists, financiers, sociologists, environmentalists and others to present their views in Plenary and Technical Sessions and to participate in discussions, both formal and informal, thus facilitating the transfer of knowledge between nations, institutions, disciplines and individuals.
WREC Renewable Energy Awards
The WREC Renewable Energy Awards were established in 1998, during the 5th edition of the WREC Congress in Florence as a way to recognize outstanding achievement and vision in the global renewable energy sector.
The WREC Renewable Energy Awards aim to highlight the best-implemented policies, projects and research worldwide in the following topics:
Fuel Cells and Hydrogen
Low Energy Architecture
Solar Energy
Wind Technology
Biomass
Sustainable Transport
Green Energy Business
WREC/WREN Aims and Objectives
WREN is a non-profit UK company (reg. no. 1874667) limited by guarantee and not having a share capital, incorporated in 1990 as a registered charity (No. 1009879), with registered offices in England. The aims and objectives of WREC/WREN are as follows:
Ensuring renewable energy takes its proper place in the sustainable supply and use of energy for greatest benefit of all, taking due account of research requirements, energy efficiency, conservation, and cost criteria.
Assisting and promoting the real local, regional and global environmental benefits of renewable energy.
Promoting the innovation, diffusion and efficient application of economic renewable energy technologies.
Enhancing energy supply security without damage to the environment.
Widening energy availability, especially in developing countries and rural areas.
Promoting business opportunities for renewable energy projects and their successful implementation.
Ensuring the financing of, and institutional support for, economic renewable energy projects.
Encouraging improved information and education on renewable energy.
Involving young people in information and education on renewable energy with a parallel, closely integrated programme.
Providing a technical exhibition where manufacturers and others can display their products and services.
Strengthening and expanding the effectiveness of networking among nations, institutions, agencies, organizations and individuals in the research, application, commercialization and education of renewable energy technology.
Providing a forum within which participants from various parts of the world can voice their achievements and ideas.
References
External links
Official Website
Renewable Energy Expo
International Renewable Energy Congress
The International Solar Energy Society (ISES)
Solar Energy and Renewable Energy Events, Fairs and Conferences
Renewable energy organizations | World Renewable Energy Network | Engineering | 1,269 |
25,517,423 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20March%2021%2C%202080 | A partial solar eclipse will occur at the Moon's ascending node of orbit on Thursday, March 21, 2080, with a magnitude of 0.8734. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
The partial solar eclipse will be visible for parts of Antarctica and Southern Africa.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year, each containing two or (occasionally) three eclipses. In the sequence below, each eclipse is separated by a fortnight.
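The fortnight spacing within a season, and the roughly six-month gap between the year's two seasons, can be checked directly from the 2080 dates listed under "Eclipses in 2080" below; here is a minimal sketch using Python's standard datetime module (the dates are taken from this article):

```python
from datetime import date

# Dates from the "Eclipses in 2080" list: each eclipse season pairs a
# solar and a lunar eclipse roughly a fortnight apart, and the two
# seasons fall just short of six months apart.
spring = [date(2080, 3, 21), date(2080, 4, 4)]    # partial solar, total lunar
autumn = [date(2080, 9, 13), date(2080, 9, 29)]   # partial solar, total lunar

print((spring[1] - spring[0]).days)  # 14 -> a fortnight between the spring pair
print((autumn[1] - autumn[0]).days)  # 16 -> just over a fortnight in autumn
print((autumn[0] - spring[0]).days)  # 176 -> roughly six months between seasons
```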
Related eclipses
Eclipses in 2080
A partial solar eclipse on March 21.
A total lunar eclipse on April 4.
A partial solar eclipse on September 13.
A total lunar eclipse on September 29.
Metonic
Preceded by: Solar eclipse of June 1, 2076
Followed by: Solar eclipse of January 7, 2084
Tzolkinex
Preceded by: Solar eclipse of February 7, 2073
Followed by: Solar eclipse of May 2, 2087
Half-Saros
Preceded by: Lunar eclipse of March 16, 2071
Followed by: Lunar eclipse of March 26, 2089
Tritos
Preceded by: Solar eclipse of April 21, 2069
Followed by: Solar eclipse of February 18, 2091
Solar Saros 121
Preceded by: Solar eclipse of March 11, 2062
Followed by: Solar eclipse of April 1, 2098
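The saros interval of about 6,585.32 days (18 years and 10–11 days) can be verified from the Saros 121 dates just listed: counting 18 ordinary years, the five intervening leap days (2064, 2068, 2072, 2076 and 2080), and the ten extra days from March 11 to March 21 gives

```latex
% Interval from the March 11, 2062 eclipse to the March 21, 2080 eclipse:
\[ 18 \times 365 + 5 + 10 = 6585 \ \text{days} \approx 1 \ \text{saros} \ (6585.32\ \text{days}) \]
```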
Inex
Preceded by: Solar eclipse of April 11, 2051
Followed by: Solar eclipse of March 1, 2109
Triad
Preceded by: Solar eclipse of May 21, 1993
Followed by: Solar eclipse of January 21, 2167
Solar eclipses of 2080–2083
Saros 121
Metonic series
Tritos series
Inex series
References
External links
2080 in science
2080 3 21
2080 3 21 | Solar eclipse of March 21, 2080 | Astronomy | 499 |
4,558,584 | https://en.wikipedia.org/wiki/Precast%20concrete | Precast concrete is a construction product produced by casting concrete in a reusable mold or "form" which is then cured in a controlled environment, transported to the construction site and maneuvered into place; examples include precast beams, and wall panels, floors, roofs, and piles. In contrast, cast-in-place concrete is poured into site-specific forms and cured on site.
Recently, lightweight expanded polystyrene foam has been used as the core of precast wall panels, saving weight and increasing thermal insulation.
Precast stone is distinguished from precast concrete by the finer aggregate used in the mixture, so the result approaches the natural product.
Overview
Precast concrete is employed in both interior and exterior applications, from highway, bridge, and high-rise projects to parking structures, K-12 schools, warehouses, mixed-use, and industrial building construction. By producing precast concrete in a controlled environment (typically referred to as a precast plant), the precast concrete is afforded the opportunity to properly cure and be closely monitored by plant employees. Using a precast concrete system offers many potential advantages over onsite casting. Precast concrete production can be performed on ground level, which maximizes safety in its casting. There is greater control over material quality and workmanship in a precast plant compared to a construction site. The forms used in a precast plant can be reused hundreds to thousands of times before they have to be replaced, often making it cheaper than onsite casting in terms of cost per unit of formwork.
Precast concrete forming systems for architectural applications differ in size, function, and cost. Precast architectural panels are also used to clad all or part of a building facade or erect free-standing walls for landscaping, soundproofing, and security. In appropriate instances precast products – such as beams for bridges, highways, and parking structure decks – can be prestressed structural elements. Stormwater drainage, water and sewage pipes, and tunnels also make use of precast concrete units.
Precast concrete molds can be made of timber, steel, plastic, rubber, fiberglass, or other synthetic materials, with each giving a unique finish. In addition, many surface finishes for the four precast wall panel types – sandwich, plastered sandwich, inner layer and cladding panels – are available, including those creating the looks of horizontal boards and ashlar stone. Color may be added to the concrete mix, and the proportions and size aggregate also affect the appearance and texture of finished concrete surfaces.
History
Ancient Roman builders made use of concrete and soon poured the material into moulds to build their complex network of aqueducts, culverts, and tunnels. Modern uses for precast technology include a variety of architectural and structural applications – including individual parts, or even entire building systems.
In the modern world, precast panelled buildings were pioneered in Liverpool, England, in 1905. The process was invented by city engineer John Alexander Brodie. The tram stables at Walton in Liverpool followed in 1906. The idea was not taken up extensively in Britain. However, it was adopted all over the world, particularly in Central and Eastern Europe as well as in Million Programme in Scandinavia.
In the US, precast concrete has evolved as two sub-industries, each represented by a major association. The precast concrete structures industry, represented primarily by the Precast/Prestressed Concrete Institute (PCI), focuses on prestressed concrete elements and on other precast concrete elements used in above-ground structures such as buildings, parking structures, and bridges, while the precast concrete products industry produces utility, underground, and other non-prestressed products, and is represented primarily by the National Precast Concrete Association (NPCA).
In Australia, the New South Wales Government Railways made extensive use of precast concrete construction for its stations and similar buildings. Between 1917 and 1932, it erected 145 such buildings.
Beyond cladding panels and structural elements, entire buildings can be assembled from precast concrete. Precast assembly enables fast completion of commercial shops and offices with minimal labor. For example, the Jim Bridger Building in Williston, North Dakota, was precast in Minnesota with air, electrical, water, and fiber utilities preinstalled into the building panels. The panels were transported over 800 miles to the Bakken oilfields, and the commercial building was assembled by three workers in minimal time. The building houses over 40,000 square feet of shops and offices. Virtually the entire building was fabricated in Minnesota.
Reinforcement
Reinforcing concrete with steel improves strength and durability. On its own, concrete has good compressive strength, but lacks tensile and shear strength and can be subject to cracking when bearing loads for long periods of time. Steel offers high tensile and shear strength to make up for what concrete lacks. Steel behaves similarly to concrete in changing environments, which means it will shrink and expand with concrete, helping avoid cracking.
Rebar is the most common form of concrete reinforcement. It is typically made from steel, manufactured with ribbing to bond with concrete as it cures. Rebar is versatile enough to be bent or assembled to support the shape of any concrete structure. Carbon steel is the most common rebar material. However, stainless steel, galvanized steel, and epoxy coatings can prevent corrosion.
Products
The following is a sampling of the numerous products that utilize precast/prestressed concrete. While this is not a complete list, the majority of precast/prestressed products typically fall under one or more of the following categories.
Agricultural products
Since precast concrete products can withstand the most extreme weather conditions and will hold up for many decades of constant usage they have wide applications in agriculture. These include bunker silos, cattle feed bunks, cattle grid, agricultural fencing, H-bunks, J-bunks, livestock slats, livestock watering trough, feed troughs, concrete panels, slurry channels, and more. Prestressed concrete panels are widely used in the UK for a variety of applications including agricultural buildings, grain stores, silage clamps, slurry stores, livestock walling and general retaining walls. Panels can be used horizontally and placed either inside the webbings of RSJs (I-beam) or in front of them. Alternatively panels can be cast into a concrete foundation and used as a cantilever retaining wall.
Building and site amenities
Precast concrete building components and site amenities are used architecturally as fireplace mantels, cladding, trim products, accessories and curtain walls. Structural applications of precast concrete include foundations, beams, floors, walls and other structural components. It is essential that each structural component be designed and tested to withstand both the tensile and compressive loads that the member will be subjected to over its lifespan. Expanded polystyrene cores are now used in precast concrete panels for structural applications, making them lighter and serving as thermal insulation.
Multi-storey car parks are commonly constructed using precast concrete. The constructions involve putting together precast parking parts which are multi-storey structural wall panels, interior and exterior columns, structural floors, girders, wall panels, stairs, and slabs. These parts can be large; for example, double-tee structural floor modules need to be lifted into place with the help of precast concrete lifting anchor systems.
Retaining walls
Precast concrete is employed in a wide range of engineered earth retaining systems. Products include commercial and residential retaining walls, sea walls, mechanically stabilized earth panels, and other modular block systems.
Sanitary and stormwater
Sanitary and stormwater management products are structures designed for underground installation that have been specifically engineered for the treatment and removal of pollutants from sanitary and stormwater run-off. These precast concrete products include stormwater detention vaults, catch basins, and manholes.
Utility structures
For communications, electrical, gas or steam systems, precast concrete utility structures protect the vital connections and controls for utility distribution. Precast concrete is nontoxic and environmentally safe. Products include: hand holes, hollow-core products, light pole bases, meter boxes, panel vaults, pull boxes, telecommunications structures, transformer pads, transformer vaults, trenches, utility buildings, utility vaults, utility poles, controlled environment vaults (CEVs), and other utility structures.
Water and wastewater products
Precast water and wastewater products hold or contain water, oil or other liquids for the purpose of further processing into non-contaminating liquids and soil products. Products include: aeration systems, distribution boxes, dosing tanks, dry wells, grease interceptors, leaching pits, sand-oil/oil-water interceptors, septic tanks, water/sewage storage tanks, wet wells, fire cisterns, and other water and wastewater products.
Transportation and traffic-related products
Precast concrete transportation products are used in the construction, safety, and site protection of roads, airports, and railroad transportation systems. Products include: box culverts, 3-sided culverts, bridge systems, railroad crossings, railroad ties, sound walls/barriers, Jersey barriers, tunnel segments, concrete barriers, TVCBs, central reservation barriers, bollards, and other transportation products. Precast concrete can also be used to make underpasses, surface crossings, and pedestrian subways. Precast concrete is also used for the roll ways of some rubber-tyred metros.
Modular paving
Modular paving is available in a rainbow of colors, shapes, sizes, and textures. These versatile precast concrete pieces can be designed to mimic brick, stone or wood.
Specialized products
Cemetery products
Underground vaults or mausoleums require watertight structures that withstand natural forces for extended periods of time.
Hazardous materials containment
Storage of hazardous material, whether short-term or long-term, is an increasingly important environmental issue, calling for containers that not only seal in the materials, but are strong enough to stand up to natural disasters or terrorist attacks.
Marine products
Seawalls, floating docks, underwater infrastructure, decking, railings, and a host of amenities are among the uses of precast along the waterfront. When designed with heavy weight in mind, precast products counteract the buoyant forces of water significantly better than most materials.
Structures
Prestressed concrete
Prestressing is a technique of introducing stresses into a structural member during fabrication and/or construction to improve its strength and performance. This technique is often employed in concrete beams, columns, spandrels, single and double tees, wall panels, segmental bridge units, bulb-tee girders, I-beam girders, and others. Many projects find that prestressed concrete provides the lowest overall cost, considering production and lifetime maintenance.
Precast concrete sandwich wall (or insulated double-wall) panels
Origin
The precast concrete double-wall panel has been in use in Europe for decades. The original double-wall design consisted of two wythes of reinforced concrete separated by an interior void, held together with embedded steel trusses. With recent concerns about energy use, it is recognized that using steel trusses creates a "thermal bridge" that degrades thermal performance. Also, since steel does not have the same thermal expansion coefficient as concrete, as the wall heats and cools any steel that is not embedded in the concrete can create thermal stresses that cause cracking and spalling.
Development
To achieve better thermal performance, insulation was added in the void, and in many applications today the steel trusses have been replaced by composite (fibreglass, plastic, etc.) connection systems. These systems, which are specially developed for this purpose, also eliminate the differential thermal expansion problem. The best thermal performance is achieved when the insulation is continuous throughout the wall section, i.e., the wythes are thermally separated completely to the ends of the panel. Using continuous insulation and modern composite connection systems, R-values up to R-28.2 can be achieved.
Characteristics
The overall thickness of sandwich wall panels in commercial applications is typically 8 inches, but their designs are often customized to the application. In a typical 8-inch wall panel the concrete wythes are each 2-3/8 inches thick, sandwiching 3-1/4 inches of high R-value insulating foam. The interior and exterior wythes of concrete are held together (through the insulation) with some form of connecting system that is able to provide the needed structural integrity. Sandwich wall panels can be fabricated to the length and width desired, within practical limits dictated by the fabrication system, the stresses of lifting and handling, and shipping constraints. Panels of 9-foot clear height are common, but heights up to 12 feet can be found.
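As a check on the layer arithmetic, the quoted wythe and foam thicknesses sum to the typical 8-inch panel, and the panel's overall thermal resistance is approximately the series sum of its layers. The second equation below is a sketch; per-inch R-values are not given in this article, but concrete's is roughly an order of magnitude below that of insulating foam, which is why the foam dominates and why thermal bridges through the connectors matter so much:

```latex
% Layer arithmetic for the typical 8-inch sandwich panel:
\[ 2\tfrac{3}{8}\ \text{in} + 3\tfrac{1}{4}\ \text{in} + 2\tfrac{3}{8}\ \text{in} = 8\ \text{in} \]
% Series thermal resistance, with t = thickness and r = per-inch R-value
% (subscripts c and f for concrete and foam):
\[ R_{\text{panel}} \approx 2\, t_c r_c + t_f r_f \]
```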
The fabrication process for precast concrete sandwich wall panels allows them to be produced with finished surfaces on both sides. Such finishes can be very smooth, with the surfaces painted, stained, or left natural; for interior surfaces, the finish is comparable to drywall in smoothness and can be finished using the same prime and paint procedure as is common for conventional drywall construction. If desired, the concrete can be given an architectural finish, where the concrete itself is colored and/or textured. Colors and textures can provide the appearance of brick, stone, wood, or other patterns through the use of reusable formliners, or, in the most sophisticated applications, actual brick, stone, glass, or other materials can be cast into the concrete surface.
Window and door openings are cast into the walls at the manufacturing plant as part of the fabrication process. In many applications, electrical and telecommunications conduit and boxes are cast directly into the panels in the specified locations. In some applications, utilities, plumbing and even heating components have been cast into the panels to reduce on-site construction time. The carpenters, electricians and plumbers do need to make some slight adjustments when first becoming familiar with some of the unique aspects of the wall panels. However, they still perform most of their job duties in the manner to which they are accustomed.
Applications and benefits
Precast concrete sandwich wall panels have been used on virtually every type of building, including schools, office buildings, apartment buildings, townhouses, condominiums, hotels, motels, dormitories, and single-family homes. Although typically considered part of a building's enclosure or "envelope," they can be designed to also serve as part of the building's structural system, eliminating the need for beams and columns on the building perimeter. Besides their energy efficiency and aesthetic versatility, they also provide excellent noise attenuation, outstanding durability (resistant to rot, mold, etc.), and rapid construction.
In addition to the good insulation properties, sandwich panels require fewer work phases to complete. Compared to double-walls, for example, which have to be insulated and filled with concrete on site, sandwich panels require much less labor and scaffolding.
Precast Concrete Market
The precast concrete industry is largely dominated by government-initiated projects for infrastructural development. However, precast products are also extensively used for residential (low- and high-rise) and commercial construction because of their various favourable attributes. The efficiency, durability, ease, cost-effectiveness, and sustainability of these products have brought a revolutionary shift in the time consumed in the construction of any structure. The construction industry is a huge consumer of energy, and precast concrete products are, and will continue to be, more energy efficient than their counterparts. The wide range of designs, colours, and structural options that these products provide also makes them a favourable choice for consumers.
Regulations
Many state and federal transportation projects in the United States require precast concrete suppliers to be certified by either the Architectural Precast Association, National Precast Concrete Association or Precast Prestressed Concrete Institute.
See also
Cast in place concrete
Million Programme
Prestressed concrete
Roll way
Structural robustness
Tilt up
References
External links
A Comfortable House for $1,000, Popular Science monthly, February 1919, page 39, Scanned by Google Books: Popular Science
Concrete
Concrete buildings and structures
Building engineering
Reuse | Precast concrete | Engineering | 3,261 |
30,988,321 | https://en.wikipedia.org/wiki/Geopora%20sumneriana | Geopora sumneriana is a species of European fungus belonging to the family Pyronemataceae.
This fungus forms a rounded brown, roughly hairy ascocarp underground. This fruit body remains subterranean for most of the year but breaks the surface in the spring to form a cream-coloured cup (apothecium) up to 7 cm across and 5 cm tall. This species occurs in small groups and is exclusively found associated with cedar trees.
References
External links
Pyronemataceae
Fungi described in 1876
Fungi of Europe
Fungus species | Geopora sumneriana | Biology | 110 |
80,248 | https://en.wikipedia.org/wiki/Time%20immemorial | Time immemorial () is a phrase meaning time extending beyond the reach of memory, record, or tradition, indefinitely ancient, "ancient beyond memory or record". The phrase is used in legally significant contexts as well as in common parlance.
In law
In law, time immemorial denotes "a period of time beyond which legal memory cannot go", and "time out of mind". Most frequently, the phrase "time immemorial" appears as a legal term of art in judicial discussion of common law development and, in the United States, the property rights of Native Americans.
English and American common law
"Time immemorial" is frequently used to describe the time required for a custom to mature into common law. Medieval historian Richard Barber describes this as "the watershed between a primarily oral culture and a world where writing was paramount". Common law is a body of law identified by judges in judicial proceedings, rather than created by the legislature. Judges determine the common law by pinpointing the legal principles consistently reiterated in previous legal cases over a long period of time.
In English law, time immemorial ends and legal memory begins at 1189, the end of the reign of King Henry II, who is associated with the invention of the English common law. Because common law is found to have a non-historical, "immemorial" advent, it is distinct from laws created by monarchs or legislative bodies on a fixed date. In English law, "time immemorial" has also been used to specify the time required to establish a prescriptive right. The Prescription Act 1832, which noted that the full expression was "time immemorial, or time whereof the memory of man runneth not to the contrary", replaced the burden of proving "time immemorial" for the enjoyment of particular land rights with statutory fixed time periods of up to 60 years.
American law inherited the English common law tradition. Unlike English law, American law does not set "time immemorial", and American courts vary in their demands to establish "immemoriality" for the purposes of common law. In Knowles v. Dow, a New Hampshire court found that a regular usage for twenty years, unexplained and uncontradicted, is sufficient to warrant a jury in finding the existence of an immemorial custom. More often than not, however, American courts identify common law without any reference to the phrase "time immemorial".
US federal Indian law
Water rights
"Time Immemorial" is sometimes used to describe the priority date of water rights holders. In the western United States, water rights are administered under the doctrine of prior appropriation. Under prior appropriation, water rights are acquired by making a beneficial use of water. Water rights that are acquired earlier are senior, and have priority over later, junior water rights during water shortages due to drought or over-appropriation. Generally, the priority date of water rights held by Native American tribes, also called Winters rights, is the date the tribe's reservation was established. However, courts occasionally find that the tribe's water rights carry a "time immemorial" priority date, the most senior date conceivable, for aboriginal uses of water on reserved land that overlaps with the tribe's aboriginal land. For example, in U.S. v. Adair, the court reasoned that the Klamath Tribe necessarily had water rights with a priority date of "time immemorial" because they had lived and used the waters in Central Oregon and Northern California for more than a thousand uninterrupted years prior to entering a treaty with the United States in 1864.
Aboriginal title
Aboriginal title refers to the land rights Native Americans possess over lands they have continuously and exclusively occupied since long before the intrusion of other occupants. When claiming or finding aboriginal title, plaintiff tribes and courts sometimes describe that occupancy as dating back to "time immemorial".
Oral tradition evidence
Historically, American judges lacked confidence in the use of Native American oral traditional evidence, oral histories shared between past and present generations, in court. Since the Pueblo de Zia decision of the United States Court of Federal Claims in 1964, oral traditional evidence has received increased judicial endorsement. In affirming the use of Native American oral traditional evidence to establish title to land, the Pueblo de Zia court described the testimony as having been handed down between tribal council members from "time immemorial".
See also
Acquiescence
Legal fiction
Prehistory
Royal lives clause
Uradel
Usucaption
References
Common law legal terminology
English law
English legal terminology
Past
Time in government
Henry II of England | Time immemorial | Physics | 964 |
11,531,966 | https://en.wikipedia.org/wiki/Ascochyta%20pisi | Ascochyta pisi is a fungal plant pathogen that causes ascochyta blight on pea, causing lesions of stems, leaves, and pods. These same symptoms can also be caused by Ascochyta pinodes, and the two fungi are not easily distinguishable.
Hosts and symptoms
The host of Ascochyta pisi is the field pea (Pisum sativum L.). Ascochyta pisi also infects 20 genera of plants and more than 50 plant species including soybean, sweet pea, lentil, alfalfa, common bean, clover, black-eyed-pea, and broad bean.
Field pea is an annual, cool season legume that is native to northwest and southwest Asia. Ascochyta blight of peas is one of the most important diseases of pea in terms of acreage affected. Yield losses of 5 to 15% are common during wet conditions.
Symptoms include:
spots on stems, leaves, tendrils, and pods (can be purplish, black, or brown in color)
lesions on stems, leaves, tendrils, and pods (can be purplish, black, or brown in color)
pod lesions become sunken
black spore-producing structures form in the lesions
with high humidity spots can enlarge and coalesce, resulting in the lower leaves being completely blighted
stem girdling
pod lesions can happen in moist conditions or if pea has lodged
infected seeds can appear discolored and purplish-brown
lightly infected seeds often appear healthy
Disease cycle
Ascochyta blight of peas is caused by a fungus. More than one fungal species can cause this disease. Other pathogens that cause Ascochyta blight, besides Ascochyta pisi, include: Mycosphaerella pinodes, Phoma medicaginis var. pinodella, and Phoma koolunga. Mycosphaerella pinodes is the only species that develops a sexual spore stage on infected residue. This stage results in the production of wind-blown ascospores. Ascospores can be dispersed several kilometers. Ascospore release begins in the spring and can continue into the summer if there is enough moisture. Didymella pisi is the teleomorph (sexual stage) of Ascochyta pisi.
All above ground parts of the pea plant and all growth stages are susceptible to Ascochyta pisi. The fungus overwinters in seed, soil, or infected crop residues. Infected crop residue is the primary source of infection in the main pea producing areas. The fungus survives on seeds and in the soil as resting spores, called chlamydospores. The seed to seedling transmission rate is low. Infected seeds turn purplish-brown and are often shriveled and smaller in size. The pathogen survives as hyphae in the seed coat and embryo. New disease is established when spores of the fungus are carried to a new, healthy crop by wind or rain splash. These fungal spores then penetrate the leaf. In the spring, it produces conidia in pycnidia. The release of these spores begins in spring and can continue into the summer if moist conditions persist. The conidia are spread short distances by wind and rain. Disease can also be established by planting infected seed. Symptoms appear within 2–4 days after initial infection.
The Ascochyta pisi spores are viable on crop debris, although they do not survive for more than a year. Other Ascochyta blight pathogens have thick walled chlamydospores, which can survive for up to a few years in the soil.
Management
Crop rotation: In order to reduce the risk of infection of pea crops from infected residue and soil-borne survival structures in a field, pea crops should be grown only every three to four years in the same field. It is important to plant pea crops as far from the previous year's field as possible in order to limit the spread of infection, since conidia disperse only short distances during the growing season. Crop rotation alone is not a recommended management tactic, however, because ascospores can travel several kilometers.
Stubble management: Practices include straw-chopping and harrowing to spread out the crop residue on the soil surface. This can be important in helping to speed up crop residue decomposition.
Variety selection: It is important to know the disease and lodging ratings of certain pea varieties in order to choose a variety that is most likely to resist disease.
Agronomics: Seed rate and planting date can have a major effect on exposure of the crop to disease and on susceptibility. Agronomic practices promoting varieties and conditions that limit lodging and avoiding fields with excess nitrogen can reduce the spread and intensity of disease.
Seed quality: It is suggested that farmers have their seed tested for germination levels and seed-borne disease levels. Seed with infection levels of 10% or more should be treated with fungicide. It is advised to plant seed with less than 10% ascochyta infection if that quality of seed can be sourced.
Seed treatments: Treatments provide protection against seed and soil-borne diseases. Apron Maxx RTA® and Vitaflo 280® are products registered for seed-borne ascochyta.
Scouting: Disease scouting is critical to catch ascochyta blight early. It is recommended to begin scouting during the vegetative stage and to continue scouting into the early flowering stage. The reason for this is to observe whether disease symptoms are moving upwards and are present on tendrils and flowers. If symptoms of ascochyta blight are present in at least 50% of the bottom third of the crop canopy and are progressing into the middle third of the canopy, fungicide control may be recommended. A few other reasons to use fungicides are if the weather has been humid, if there is a forecast for rain, and if a high yield of peas would justify the cost of spraying fungicides.
Foliar Fungicides: The registered fungicides used for field peas to control ascochyta blight are Bravo 500®, Headline EC®, Lance®, and Quadris®. Early flowering is the ideal time to apply these fungicides. They work by protecting the healthy green plant material, but will not repair plants affected by foot rot. High water volumes are necessary for full coverage of leaves and penetration of the plant canopy.
Before planting, some recommended management practices include destroying infected crop residues, crop rotation, and planting the current crop far from the previously infected crops' field or residues. Disease can be managed in multiple ways during and after planting. One method to manage disease is to follow the recommended seeding dates and rates to avoid fostering an ideal environment for the pathogen. If the seed density is too high and planted too early, there is increased exposure to the plant pathogen. This seeding practice also creates an ideal environment for the pathogen because the plants often produce larger canopies and experience more lodging, which creates a close, high-humidity environment ideal for the pathogen. Long term crop rotation with non-host crops is recommended. Chemical control with fungicidal seed dressings is another effective method of control.
Environment
This pathogen needs cool, moist conditions, and development occurs more quickly as plant tissues age. An increase in severity of infection is often noted when the crop canopy closes due to the dense growth that prevents dry air from penetrating the canopy. This creates a cool, humid, moist environment under the canopy, and as a result, the disease symptoms are most prevalent at the base of the canopy and spread up the plant. Plant lodging also creates a dense, humid environment favorable for the pathogen. The optimal temperature for disease establishment and development is around 20 °C. Spore dispersal and the development of the disease are slowed in the absence of high levels of moisture.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Eudicot diseases
pisi
Fungi described in 1830
Fungus species | Ascochyta pisi | Biology | 1,626 |
49,968,788 | https://en.wikipedia.org/wiki/Masreliez%27s%20theorem | Masreliez theorem describes a recursive algorithm within the technology of extended Kalman filter, named after the Swedish-American physicist John Masreliez, who is its author. The algorithm estimates the state of a dynamic system with the help of often incomplete measurements marred by distortion.
Masreliez's theorem produces estimates that are quite good approximations to the exact conditional mean in non-Gaussian additive outlier (AO) situations. Some evidence for this is provided by Monte Carlo simulations.
The key approximation property used to construct these filters is that the state prediction density is approximately Gaussian. Masreliez discovered in 1975 that this approximation yields intuitively appealing non-Gaussian filter recursions with data-dependent covariance (unlike the Gaussian case); this derivation also provides one of the nicest ways of establishing the standard Kalman filter recursions. Some theoretical justification for use of the Masreliez approximation is provided by the "continuity of state prediction densities" theorem in Martin (1979).
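The shape of such a recursion can be illustrated with a short sketch. The exact filter uses the score function of the true (non-Gaussian) prediction-error density; since that density is rarely available in closed form, the sketch below substitutes a Huber-type bounded score, a common practical surrogate, and handles a scalar measurement. The function name, interface, and clipping constant are illustrative assumptions, not Masreliez's exact construction:

```python
import numpy as np

def robust_update(x_pred, P_pred, y, H, R, clip=1.345):
    """One Masreliez-style robust measurement update (scalar measurement).

    Sketch only: the exact filter evaluates the score function of the
    true innovation density; a Huber psi-function stands in for it here.
    """
    H = np.atleast_2d(H)                       # row vector, shape (1, n)
    S = float(H @ P_pred @ H.T + R)            # innovation variance
    nu = (y - float(H @ x_pred)) / np.sqrt(S)  # standardized innovation
    psi = float(np.clip(nu, -clip, clip))      # bounded score: resists outliers
    psi_prime = 1.0 if abs(nu) <= clip else 0.0
    K = (P_pred @ H.T) / np.sqrt(S)            # gain direction, shape (n, 1)
    x_new = x_pred + (K * psi).ravel()         # state update
    P_new = P_pred - psi_prime * (K @ K.T)     # data-dependent covariance update
    return x_new, P_new
```

When the standardized innovation lies inside the clipping band, psi(nu) = nu and the recursion reduces exactly to the standard Kalman measurement update; when an outlier drives it outside the band, the correction is capped and the derivative of the score vanishes, so the predicted covariance is retained. That data-dependence of the covariance is precisely what distinguishes these recursions from the Gaussian case described above.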
See also
Control engineering
Hidden Markov model
Bayes' theorem
Robust optimization
Probability theory
Nyquist–Shannon sampling theorem
References
Control theory
Signal processing
Control engineering | Masreliez's theorem | Mathematics,Technology,Engineering | 247 |
29,765,241 | https://en.wikipedia.org/wiki/Coma%20Berenicids | Comae Berenicids [sic] (formerly Coma Berenicids, IMO designation: COM; IAU shower number: 20) is a minor meteor shower with a radiant in the constellation Coma Berenices. The shower appears from December 12 to December 23 with the estimated maximum around December 16. The radiant at that time is located at α=175°, δ=+18°. The shower's population index is 3.0 with the speed of .
The Comae Berenicids were first detected within the framework of the Harvard Radio Meteor Project. The shower's existence was discovered by Richard Eugene McCrosky and A. Posen in 1959. The Comae Berenicids have an orbit very similar to that of the December Leo Minorids, often leading to confusion between the two meteor showers.
Notes
External links
Comae Berenicids data (IAU Meteor Data Center)
Meteor showers
Coma Berenices
Astronomical objects discovered in 1959
December
January | Coma Berenicids | Astronomy | 197 |
77,697,156 | https://en.wikipedia.org/wiki/Carthage%20tower%20model | The Carthage tower model is a limestone model of a tower with a Punic inscription, found in Carthage by Nathan Davis in 1856–58 in Husainid Tunisia.
It has a diameter of 13.3 cm and a height of 41.1 cm. It is in the British Museum, with ID number 125324.
Of all the inscriptions found by Davis, it was one of just three that were not traditional Carthaginian tombstones - the other two being number 71 (the Son of Baalshillek marble base) and number 90 (the Carthage Tariff), which contained bevelled architectural ornamentation.
Donald Harden wrote that it may represent a lighthouse or a watch tower, and may provide evidence for a type of multistory building in Carthaginian architecture. The model appears to show three stories, and may have originally been more; the bottom arch is considered to be a door, the middle story contains three shallow round arched windows, and part of a top story with five deeper and narrower windows with their tops missing. CIS wrote that: “The cippus is round, rising in the form of a tower, in the lower part of which is an arched gate, and above it three windows are shaped in the same manner as a vault. The top of the tower is finned.”
The inscription states:
To the lady Tanit face of Baal and to the lord to Baal Hammon which vowed Bodmelqart son of 'Abdmelqart son of Himilkot for he heard his voice, and blessed him.
Bibliography
References
Archaeological artifacts
KAI inscriptions
Punic inscriptions
Archaeological discoveries in Tunisia
Phoenician architecture | Carthage tower model | Physics | 340 |
55,386,527 | https://en.wikipedia.org/wiki/School%20of%20Industrial%20Art%20and%20Technical%20Design%20for%20Women | School of Industrial Art and Technical Design for Women was an American school of industrial design founded in 1881 and located in New York City. Pupils were made familiar with the practicality of design, with the workings of machinery, and the technicalities of design as applied to various industries. In its day, it was said to be the only school of practical design for industrial manufacture in the world.
Florence Elizabeth Cory organized her first class of five pupils in autumn 1881, instructing them in the principles of design and the practical application of those principles to industrial art. From that nucleus sprang the prosperous school which, by 1890, included 490 students, including correspondence pupils, all of whom were striving to attain a degree of proficiency in several departments of practical designing and industrial handicraft that would enable them to become self-supporting. Among these students were representatives of every State and Territory in the United States, several Canadian cities, and the Sandwich Islands. Cory died in 1902. The school had closed by 1908.
History
The organization of this particular school grew out of a forceful necessity for its existence. The schools then existing taught the principles of design only, without regard to the practical application, and consequently the young women who graduated from such schools found great difficulty in obtaining employment or in disposing of their designs.
In other schools of design, the teachers taught a woman to make a wall paper design; seated with paper, brushes and colors, she could make a beautiful design, but would not know (and neither would the teachers) whether that design could be printed by machinery or not. She would not know how many colors she should use, how the colors should fall, the dimensions, or anything of the kind; the teachers did not know either. A design may be well executed, faultlessly correct, and beautiful, yet worthless to the manufacturer, because it cannot be woven or printed. Machinery has its requirements and its limitations, all of which must be considered when making a design, and without the practical knowledge necessary to do this an acceptable working design could not be made.
The school was managed by a president and a board of directors. There were 8 instructors, all of whom were graduates of the school. The number of pupils in the elementary class of 1901 was 35, and in the advanced class, 40. The number of graduates at that time was 600.
Numerous invitations were extended by manufacturers in New York and vicinity to visit their factories, prizes amounting to several hundred dollars were offered for various designs, and a variety of valuable art specimens were presented. Many designs were made and sold to manufacturers since the establishment of the school. The work done included carpets of all grades, oil cloths, linoleums, wall papers, stained glass, carved and inlaid wood panels, printed silks and silkalines, ribbons, upholstery fabrics, portieres, table linen of all kinds, calicoes, prints, awnings, lace, fan mounts, book covers, china, Christmas, Easter, and menu cards. Not only were orders filled for American manufacturers, but there were international opportunities as well: to Leeds and York, England, patterns for ingrains; to Carlsbad, Austria, designs for china; to Dundee, Scotland, patterns for table linen and towel borders; and to Japan, designs for printed and embroidered silks.
Objective
The chief object of this school was to give instruction in the practical application of art designs, so that when a pupil had completed the course, she would be competent to do practical work which would have both an artistic and a commercial value.
Admission requirements
The school was open to any young woman of good moral character upon payment of the required tuition fee. Pupils could enter the school at any time. Pupils who had not become practically familiar with drawing were obliged to enter the elementary class. Pupils desiring to enter the advanced classes were required to present specimens of their work in free-hand drawing: flowers from nature or conventionalized ornamental figures, scrolls, and so forth.
Tuition and commission
The charges for tuition were as follows: Elementary classes, US$25 per term, or $75 for four consecutive terms; advanced classes, $30 per term, or $85 for four consecutive terms. In addition to the standard tuition, there were a number of special courses costing from $10 to $25 per term. Arrangements were also made to receive special students at a rate of $10 per month in elementary work and $15 per month in advanced work.
All drawings made in the school were the property of the pupil who made them, with the exception of one sheet from each set made, which was retained as the property of the school. Pupils had the privilege of disposing of all salable designs to manufacturers while still under instruction. Many pupils were thus able to wholly or partly pay their expenses at the school. A commission of 10 per cent was required on all sales made by pupils while still under instruction in the school.
Program
During the first two or three years of the school's existence, lectures were given to the students by prominent artists and designers, but these were discontinued because the classes soon assumed such proportions that there was not room enough to accommodate all who wished to hear them.
The full course of instruction required two years. The school year was divided into four terms of three months each. Sessions were held every day from 10 AM to 4 PM. The courses of instruction included elementary and advanced work in ornamental and practical designing as applied to carpets, rugs, wall paper, oilcloth, stained glass, lace, silk, calico, book covers, and so forth.
The first year classes were taught simple designing for calico, muslin, stained glass, inlaid woods, and jewelry. The elementary class also included flower painting.
In the second year, the pupils learned advanced designs for oil-cloth, silk, carpets, and other mediums.
The optional third year was passed in the practice and design room, where no regular instruction was given, but where orders were received and work done under the supervision of the principal, and well-known designers.
In addition to the regular classes, there was a department of home study and a correspondence class for those who could not conveniently attend the school.
Building and fittings
The rooms occupied by the school were rented. The equipment cost about US$1,000, and was provided by Cory. The school was maintained by tuition fees. The cost of maintenance was $3,000 per annum.
Graduates
The graduates were fitted to do practical work before leaving the school, and were not required to undergo a period of apprenticeship.
In Cory's opinion, there was hardly any branch of industry in which artistic skill and taste was a component part which had not benefited by this class of schools. They have trained up a distinctively American class of designers, illustrators, and decorators whose talents have contributed to the development and success of many establishments, especially those engaged in the printing and textile industries. The effect upon those who have been under instruction is said to be beneficial in every respect. Many of the graduates are earning much higher wages than they could possibly command in other occupations where women are employed. She wrote in 1891: "By far the greater number of graduates are at work in their own homes, and are not employed regularly at a stated salary by any manufacturer. When their designs are finished they are sold to whichever manufactory pays the highest price."
References
Bibliography
Defunct schools in New York City
1881 establishments in New York (state)
Schools in Manhattan
Industrial design
Educational institutions established in 1881
Vocational schools in the United States
Women's education in the United States | School of Industrial Art and Technical Design for Women | Engineering | 1,543 |
72,800,198 | https://en.wikipedia.org/wiki/Lysine%20acetylsalicylate | Lysine acetylsalicylate, also known as aspirin DL-lysine or lysine aspirin, is a more soluble form of acetylsalicylic acid (aspirin). As with aspirin itself, it is a nonsteroidal anti-inflammatory drug (NSAID) with analgesic, anti-inflammatory, antithrombotic and antipyretic properties. It is composed of the ammonium form of the amino acid lysine paired with the conjugate base of aspirin.
Lysine acetylsalicylate was developed for intravenous administration in acute pain management, enabling faster onset of action compared to oral aspirin. Adverse effects are similar to those of orally administered aspirin, including upset stomach and heartburn. In more serious cases, it can cause peptic ulcers and gastric bleeding, and exacerbate asthma. Due to its antithrombotic properties, patients using lysine acetylsalicylate or oral aspirin have an increased risk of bleeding, especially patients on blood-thinning medications. It should not be used in children with infections, as it poses a risk of Reye syndrome, nor should it be used in the final trimester of pregnancy due to the risk of premature closure of the ductus arteriosus in the fetal heart.
The therapeutic effects of salicylic acids were first documented in 1763 by Edward Stone, with acetylsalicylic acid being synthesized by Felix Hoffmann, a chemist working under Bayer, in 1897. Acetylsalicylic acid-derived salt compounds were first discovered in 1970, and the synthesis of lysine acetylsalicylate was first documented in 1978.
Mechanism of action
Lysine acetylsalicylate is considered a prodrug: it must be metabolized before displaying its therapeutic properties. After administration, lysine acetylsalicylate is hydrolyzed, separating into lysine and acetylsalicylate.
Cyclo-oxygenase enzyme (COX) inhibition
Two forms of COX enzymes have been identified, COX-1 and COX-2. COX enzymes are responsible for catalyzing the conversion of arachidonic acid to prostaglandins, which are used as precursors for other substances, in particular thromboxane A2. Thromboxane A2 is a potent platelet activator, inducing changes in platelets that ultimately promote aggregation and the formation of clots. Thromboxane A2 also displays vasoconstrictor properties by acting on vascular smooth muscle cells. Prostaglandins are also important mediators of the inflammatory response, with high levels of prostaglandins being seen in inflamed tissues.
Acetylsalicylate compounds act as inhibitors of COX-1 and COX-2 enzyme activity, enabling the drug to display its antiplatelet and anti-inflammatory properties. The compound irreversibly suppresses COX-1 activity by addition of an acetyl group to a serine amino acid. This disables the binding mechanism of arachidonic acid, inhibiting the synthesis of prostaglandins and thromboxane A2 which stops platelet aggregation and inflammation. The same mechanism is also shown in COX-2 enzymes, albeit with lower efficiency of binding.
Other proposed mechanisms
Acetylsalicylate compounds are also thought to have other mechanisms that exert anti-inflammatory effects on cells, which are mainly prostaglandin-independent. Acetylsalicylate inhibits neutrophil activation by desensitizing them to endogenous chemical signals such as leukotrienes, stopping the inflammatory cascade. Acetylsalicylate also reduces the expression of nitric oxide synthase, obstructing the synthesis of nitric oxide compounds. Nitric oxide plays a key role in inflammation by activating macrophages and regulating apoptosis. Acetylsalicylate also inhibits the activation of nuclear factor kappa-B, which decreases the expression of pro-inflammatory molecules such as interleukins.
Chemical properties
Lysine acetylsalicylate exists as a white, crystalline substance displaying weakly acidic properties. Lysine acetylsalicylate is generally unstable in a basic medium, readily undergoing a multi-step hydrolysis reaction that is catalyzed by the presence of negatively charged hydroxide ions. The primary target of the hydrolysis reaction is the ester group, dissociating into a carboxylic acid and aromatic alcohol.
Synthesis
The synthesis of lysine acetylsalicylate requires the precursor sodium acetylsalicylate, the sodium salt of acetylsalicylic acid. Sodium acetylsalicylate is prepared by adding acetylsalicylic acid to a solution of sodium hydrogen carbonate. The solution is then stirred and filtered to produce sodium acetylsalicylate crystals, which are dried to remove water.
Sodium acetylsalicylate can be converted into lysine acetylsalicylate through two methods. The first method is to mix a 30% sodium acetylsalicylate solution with lysine and heat the mixture under reflux for 40 minutes. Next, the solution is cooled and heated again to evaporate the resulting water. When a precipitate appears, the solution is refrigerated until fully crystallized, the resulting crystals being lysine acetylsalicylate. The second method involves the same process, but the mixture is not initially heated and is instead left at room temperature for 48 hours. The first method is noted to obtain a greater yield of lysine acetylsalicylate.
Pharmacokinetics
Lysine acetylsalicylate is normally administered intravenously due to its high water solubility compared to acetylsalicylic acid alone. This enables aspirin to be released directly into the blood circulation, bypassing the need for absorption through the stomach as well as liver metabolism.
When compared to oral doses of aspirin, lysine acetylsalicylate displays a greater antiplatelet and anti-inflammatory response. Additionally, lysine acetylsalicylate shows a faster onset of action when compared to oral aspirin of an equivalent dose. Lysine acetylsalicylate also displays a shorter mean residence time in the body (0.37 hours) as well as a shorter elimination half-life (17 minutes) when administered intravenously, which could indicate that it displays a shorter duration of exposure. Lysine acetylsalicylate also provides less interpatient variability in antiplatelet properties.
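As a back-of-the-envelope consistency check (a one-compartment, first-order elimination model; the arithmetic is illustrative and not taken from the pharmacokinetic literature), the reported half-life and mean residence time roughly agree:
$$k = \frac{\ln 2}{t_{1/2}} = \frac{\ln 2}{17\ \text{min}} \approx 0.041\ \text{min}^{-1}, \qquad \text{MRT} = \frac{1}{k} \approx 24.5\ \text{min} \approx 0.41\ \text{h},$$
which is close to the reported mean residence time of 0.37 h.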
Acetylsalicylate is predominantly metabolized through a conjugation reaction with glycine to form salicyluric acid. Salicyluric acid is also the main compound of aspirin excretion, with 98% of aspirin being excreted via this pathway by the kidney. Salicyluric acid can undergo further metabolism to form glucuronide compounds, or hydroxylation to form gentisic acid (1% of total aspirin).
Medical uses
Lysine acetylsalicylate is used acutely in an inpatient setting, for conditions presenting with severe pain, particularly acute migraine attacks and severe headache. It is also used as an ultra-rapid platelet blockade agent for intra-procedural clearance of thrombus, and among patients with an urgent need for antiplatelet therapy without feasible nasogastric or oral access. These include, but are not limited to: patients with acute ischemic stroke, arterial dissection, and those undergoing endovascular stent placement.
Pain
Lysine acetylsalicylate has been shown to be safe and effective in the inpatient management of severe headache and migraine. Two randomized trials found that, when combined with metoclopramide, lysine acetylsalicylate has comparable efficacy to sumatriptan for migraine.
Antiplatelet
Clinical trials on the use of lysine acetylsalicylate as an antiplatelet agent for acute coronary syndrome and chronic coronary syndrome find comparable efficacy between lysine acetylsalicylate and oral aspirin. The economic efficiency of using lysine acetylsalicylate in the secondary prevention of ischemic stroke and myocardial infarction has also been demonstrated in one pharmacoeconomic study. Lysine acetylsalicylate is generally reserved for patients with an urgent need for antiplatelet therapy and no oral or nasogastric access. However, its rapid onset through IV administration makes it applicable for thrombus clearance during stent placement and other surgical procedures.
Diagnosis of NSAID-exacerbated respiratory disease
NSAID-exacerbated respiratory disease refers to the combination of NSAID intolerance, asthma, and chronic rhinosinusitis with nasal polyposis. Lysine acetylsalicylate is used as a challenge test to diagnose NSAID-exacerbated respiratory disease: drops of lysine acetylsalicylate are instilled via pipette or spray into both nostrils. Patients with NSAID-exacerbated respiratory disease show a significant increase in symptoms compared to those without the condition.
Side effects
Contraindications
All NSAIDs, including aspirin, should be avoided at 20 weeks of pregnancy or later to prevent the risk of kidney problems in unborn babies. Due to its linkage with Reye syndrome, aspirin should not be used in children under the age of 16 who are recovering from an infection. Those who are allergic to, or intolerant of, NSAIDs such as ibuprofen and naproxen should not use lysine acetylsalicylate. Lysine acetylsalicylate is avoided in patients with glucose-6-phosphate dehydrogenase deficiency due to the risk of hemolytic anemia.
Gastrointestinal
Non-selective blockade of COX by NSAIDs such as lysine acetylsalicylate results in the attenuation of gastric defense, resulting in an increased risk of gastrointestinal bleeding. As such, lysine acetylsalicylate should be used with caution in patients with peptic ulcer or gastritis. Combining aspirin with other NSAIDs has been shown to drastically increase the risk of gastrointestinal bleeding and should be done with caution.
Asthma and NSAID-exacerbated respiratory disease
Blockade of COX-1 increases activation of the leukotriene pathway, resulting in the release of cysteinyl leukotrienes which are potent bronchoconstrictors. Leukotriene is also a major factor in the pathogenesis of asthma. As such, caution should be applied in the use of lysine acetylsalicylate in patients with asthma.
Similarly, the increase in cysteinyl leukotrienes can also cause hyperreactivity in healthy patients, leading to NSAID-exacerbated respiratory disease. Lysine acetylsalicylate should be used with caution in patients diagnosed with NSAID-exacerbated respiratory disease.
Bleeding
Owing to its antiplatelet properties, use of oral aspirin and lysine acetylsalicylate will increase the risk of bleeding. As such, patients with hemophilia or other bleeding tendencies should not use oral aspirin nor lysine acetylsalicylate. The risk of bleeding is increased for those using warfarin and alcohol.
References
Nonsteroidal anti-inflammatory drugs
Ammonium compounds
Acetylsalicylic acids
Salicylates | Lysine acetylsalicylate | Chemistry | 2,469 |
27,908,643 | https://en.wikipedia.org/wiki/Shen-su%20Sun | Shen-su Sun (; 27 October 1943 – 5 March 2005) was a Chinese-born Australian geochemist.
Sun was born in Fuzhou, Fujian, China. He earned his bachelor's degree in geology from National Taiwan University and obtained his Ph.D. from Columbia University in 1973. From 1981 to 1999, he was a research professor at the Bureau of Mineral Resources of Australia. He did significant work in lead, oxygen and sulfur isotope geochemistry. He died in Canberra, Australia.
References
1943 births
2005 deaths
Chinese geochemists
People from Fuzhou
Chemists from Fujian
Educators from Fujian
National Taiwan University alumni
Columbia University alumni
Chinese emigrants to Australia
Taiwanese emigrants to Australia
Taiwanese geochemists
Australian geochemists
Taiwanese people from Fujian
20th-century Chinese chemists
20th-century Australian chemists | Shen-su Sun | Chemistry | 164 |
39,487,700 | https://en.wikipedia.org/wiki/Boletus%20rufomaculatus | Boletus rufomaculatus is a fungus of the genus Boletus native to North America. It was described scientifically by Ernst Both in 1998.
See also
List of Boletus species
List of North American boletes
References
External links
rufomaculatus
Fungi described in 1998
Fungi of North America
Fungus species | Boletus rufomaculatus | Biology | 68 |
1,585,155 | https://en.wikipedia.org/wiki/Weierstrass%20factorization%20theorem | In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root.
The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
Motivation
It is clear that any finite set of points in the complex plane has an associated polynomial whose zeroes are precisely at the points of that set. The converse is a consequence of the fundamental theorem of algebra: any polynomial function $p(z)$ in the complex plane has a factorization
$$p(z) = c \prod_n (z - c_n),$$
where $c$ is a non-zero constant and $\{c_n\}$ is the set of zeroes of $p(z)$.
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of additional terms in the product is demonstrated when one considers $\prod_n (z - c_n)$ where the sequence $\{c_n\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that for each z, the factors must approach 1 as $n \to \infty$. So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point, and furthermore introduce no more zeroes than those prescribed.
Weierstrass' elementary factors have these properties and serve the same purpose as the factors above.
The elementary factors
Consider the functions of the form $\exp\left(-\tfrac{z^{n+1}}{n+1}\right)$ for $n \in \mathbb{N}$. At $z = 0$, they evaluate to $1$ and have a flat slope at order up to $n$. Right after $z = 1$, they sharply fall to some small positive value. In contrast, consider the function $1 - z$, which has no flat slope but, at $z = 1$, evaluates to exactly zero. Also note that for $|z| < 1$,
$$1 - z = \exp(\ln(1 - z)) = \exp\left(-z - \tfrac{z^2}{2} - \tfrac{z^3}{3} - \cdots\right).$$
[[File:First_5_Weierstrass_factors_on_the_unit_interval.svg|thumb|right|alt=First 5 Weierstrass factors on the unit interval.|Plot of $E_n(x)$ for n = 0,...,4 and x in the interval [-1,1].]]
The elementary factors, also referred to as primary factors, are functions that combine the properties of zero slope and zero value (see graphic):
$$E_n(z) = \begin{cases} 1 - z & \text{if } n = 0, \\ (1 - z) \exp\left(\dfrac{z}{1} + \dfrac{z^2}{2} + \cdots + \dfrac{z^n}{n}\right) & \text{otherwise.} \end{cases}$$
For $|z| < 1$ and $n > 0$, one may express it as
$$E_n(z) = \exp\left(-\sum_{k=n+1}^{\infty} \frac{z^k}{k}\right),$$
and one can read off how those properties are enforced.
The utility of the elementary factors lies in the following lemma:
Lemma (15.8, Rudin): for $|z| \le 1$ and $n \in \mathbb{N}$,
$$\left|1 - E_n(z)\right| \le |z|^{n+1}.$$
The two forms of the theorem
Existence of entire function with specified zeroes
Let $\{a_n\}$ be a sequence of non-zero complex numbers such that $|a_n| \to \infty$.
If $\{p_n\}$ is any sequence of nonnegative integers such that for all $r > 0$,
$$\sum_{n=1}^{\infty} \left(\frac{r}{|a_n|}\right)^{1 + p_n} < \infty,$$
then the function
$$f(z) = \prod_{n=1}^{\infty} E_{p_n}\!\left(\frac{z}{a_n}\right)$$
is entire with zeros only at points $a_n$. If a number $z_0$ occurs in the sequence $\{a_n\}$ exactly $m$ times, then the function $f$ has a zero at $z = z_0$ of multiplicity $m$.
The sequence $\{p_n\}$ in the statement of the theorem always exists. For example, we could always take $p_n = n$ and have the convergence. Such a sequence is not unique: changing it at a finite number of positions, or taking another sequence $p'_n \ge p_n$, will not break the convergence.
The theorem generalizes to the following: sequences in open subsets (and hence regions) of the Riemann sphere have associated functions that are holomorphic in those subsets and have zeroes at the points of the sequence.
Also the case given by the fundamental theorem of algebra is incorporated here. If the sequence $\{a_n\}$ is finite then we can take $p_n = 0$ and obtain $f(z) = c\,z^m \prod_n \left(1 - \tfrac{z}{a_n}\right)$.
The Weierstrass factorization theorem
Let $f$ be an entire function, and let $\{a_n\}$ be the non-zero zeros of $f$ repeated according to multiplicity; suppose also that $f$ has a zero at $z = 0$ of order $m \ge 0$ (a zero of order $m = 0$ means $f(0) \neq 0$).
Then there exists an entire function $g$ and a sequence of integers $\{p_n\}$ such that
$$f(z) = z^m e^{g(z)} \prod_{n=1}^{\infty} E_{p_n}\!\left(\frac{z}{a_n}\right).$$
Examples of factorization
The trigonometric functions sine and cosine have the factorizations
$$\sin \pi z = \pi z \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right), \qquad \cos \pi z = \prod_{n=1}^{\infty} \left(1 - \frac{4z^2}{(2n-1)^2}\right),$$
while the gamma function has factorization
$$\frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^{\infty} \left(1 + \frac{z}{n}\right) e^{-z/n},$$
where $\gamma$ is the Euler–Mascheroni constant. The cosine identity can be seen as a special case of
$$\frac{1}{\Gamma(s - z)\,\Gamma(s + z)} = \frac{1}{\Gamma(s)^2} \prod_{n=0}^{\infty} \left(1 - \frac{z^2}{(n + s)^2}\right)$$
for $s = \tfrac{1}{2}$.
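As a quick numerical illustration of the sine factorization (a sketch; the truncation depth and test point are arbitrary choices, not from the source):

```python
import math

def sine_product(z: float, terms: int = 2000) -> float:
    """Truncation of the Weierstrass product pi*z*prod(1 - z^2/n^2)."""
    result = math.pi * z
    for n in range(1, terms + 1):
        result *= 1.0 - (z * z) / (n * n)
    return result

z = 0.3
print(sine_product(z))        # ~0.80898...
print(math.sin(math.pi * z))  # ~0.80902...
```

The truncated product converges like O(1/N), so a few thousand factors already give four to five correct digits.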
Hadamard factorization theorem
A special case of the Weierstraß factorization theorem occurs for entire functions of finite order. In this case the $p_n$ can be taken independent of $n$ and the function $g(z)$ is a polynomial. Thus
$$f(z) = z^m e^{P(z)} \prod_{n=1}^{\infty} E_p\!\left(\frac{z}{a_n}\right),$$
where $a_n$ are those roots of $f$ that are not zero ($a_n \neq 0$), $m$ is the order of the zero of $f$ at $z = 0$ (the case $m = 0$ being taken to mean $f(0) \neq 0$), $P$ is a polynomial (whose degree we shall call $q$), and $p$ is the smallest non-negative integer such that the series
$$\sum_{n=1}^{\infty} \frac{1}{|a_n|^{p+1}}$$
converges. This is called Hadamard's canonical representation. The non-negative integer $g = \max\{p, q\}$ is called the genus of the entire function $f$. The order $\rho$ of $f$ satisfies $g \le \rho \le g + 1$.
In other words: if the order $\rho$ is not an integer, then $g = [\rho]$ is the integer part of $\rho$. If the order is a positive integer, then there are two possibilities: $g = \rho - 1$ or $g = \rho$.
For example, $\sin$, $\cos$ and $\exp$ are entire functions of genus $g = 1$.
See also
Mittag-Leffler's theorem
Wallis product, which can be derived from this theorem applied to the sine function
Blaschke product
Notes
External links
Theorems in complex analysis | Weierstrass factorization theorem | Mathematics | 1,146 |
37,873,772 | https://en.wikipedia.org/wiki/Peroxynitric%20acid | Peroxynitric acid or peroxonitric acid is a chemical compound with the formula . It is an oxyacid of nitrogen, after peroxynitrous acid.
Preparation
Peroxynitrate, the conjugate base of peroxynitric acid, is formed rapidly during decomposition of peroxynitrite in neutral conditions.
Atmospheric chemistry
Peroxynitric acid is formed in the atmosphere. Although it is unstable, it is important as a reservoir for NO2 through the reversible radical reaction:
HO2• + NO2• ⇌ HO2NO2
References
Nitrogen oxoacids
Peroxy acids
Nitrogen(V) compounds | Peroxynitric acid | Chemistry | 127 |
25,062,268 | https://en.wikipedia.org/wiki/Van%20Norman%20Dams | The Van Norman Dams, also known as the San Fernando Dams, were the terminus of the Los Angeles Aqueduct, supplying about 80 percent of Los Angeles' water, until they were damaged in the 1971 San Fernando earthquake and were subsequently decommissioned due to the inherent instability of the site and their location directly above heavily populated areas.
Construction
The Upper Van Norman Dam initially was constructed with of hydraulic fill. In 1922, the dam was raised with rolled fill.
The Lower Van Norman Dam was constructed with hydraulic and rolled fill. Hydraulic fill height was about , while rolled fill was added at least five times in the dam's history, each time increasing the dam's height, totaling rolled fill. The last addition was made in 1929–30.
1971 San Fernando earthquake
The 1971 San Fernando earthquake significantly damaged the dams, resulting in evacuation of thousands of people from the San Fernando Valley immediately below. 80,000 were evacuated for three days. Later, it was estimated that a dam failure could have killed 123,400.
Upper Van Norman dam
The Upper Van Norman reservoir was operating at about one-third capacity at the time of the earthquake. The quake lowered dam height and displaced the dam laterally .
Lower Van Norman dam
Originally, the Lower Van Norman reservoir was operated near full capacity of . However, the maximum operating height was reduced to in 1966 following seismic hazard review. Fortuitously, at the time of the 1971 San Fernando earthquake the water height was (about half capacity: of water) as a large landslide fell into the reservoir along with of the crest and upstream face reducing the freeboard to about . This failure was predominantly due to liquefaction of the hydraulic fill. To reduce the risk of catastrophic failure, the water level was lowered as rapidly as possible, in days, at the rate of . This rate was limited by earthquake damage to the outlet lines and drainage towers.
Aftermath
Reconstruction was proposed, but abandoned after geologic evaluation showed the inherent instability of the dams' foundations.
As a replacement, the Los Angeles Dam was constructed between the original Lower and Upper Van Norman Dam structures in a more stable location. During the 1994 Northridge earthquake, the Lower Van Norman reservoir area was again severely damaged, but as it was by then in use only as a holding basin, the consequences were minor.
Lessons learned
The near failure of the Lower Van Norman Dam brought about major changes in the way public agencies and engineers viewed seismic safety, particularly regarding embankments of fine sands and silts and numeric dynamic analysis of dams. Also, it resulted in many mandated dam safety reassessments.
See also
List of dams and reservoirs in California
Dam failure
Baldwin Hills Reservoir
St. Francis Dam
References
External links
Dam failures in the United States
Dams in Los Angeles County, California
History of Los Angeles
History of Los Angeles County, California
Los Angeles Aqueduct
Reservoirs in California
Earthquake and seismic risk mitigation
Dams completed in 1921
San Fernando Valley
1971 in Los Angeles
1971 in California
1971 earthquakes
History of the San Fernando Valley
Geology of Los Angeles County, California | Van Norman Dams | Engineering | 603 |
11,036 | https://en.wikipedia.org/wiki/Fin | A fin is a thin component or appendage attached to a larger body or structure. Fins typically function as foils that produce lift or thrust, or provide the ability to steer or stabilize motion while traveling in water, air, or other fluids. Fins are also used to increase surface areas for heat transfer purposes, or simply as ornamentation.
Fins first evolved on fish as a means of locomotion. Fish fins are used to generate thrust and control the subsequent motion. Fish and other aquatic animals, such as cetaceans, actively propel and steer themselves with pectoral and tail fins. As they swim, they use other fins, such as dorsal and anal fins, to achieve stability and refine their maneuvering.
The fins on the tails of cetaceans, ichthyosaurs, metriorhynchids, mosasaurs and plesiosaurs are called flukes.
Thrust generation
Foil shaped fins generate thrust when moved, the lift of the fin sets water or air in motion and pushes the fin in the opposite direction. Aquatic animals get significant thrust by moving fins back and forth in water. Often the tail fin is used, but some aquatic animals generate thrust from pectoral fins. Fins can also generate thrust if they are rotated in air or water. Turbines and propellers (and sometimes fans and pumps) use a number of rotating fins, also called foils, wings, arms or blades. Propellers use the fins to translate torquing force to lateral thrust, thus propelling an aircraft or ship. Turbines work in reverse, using the lift of the blades to generate torque and power from moving gases or water.
Cavitation can be a problem with high power applications, resulting in damage to propellers or turbines, as well as noise and loss of power. Cavitation occurs when negative pressure causes bubbles (cavities) to form in a liquid, which then promptly and violently collapse. It can cause significant damage and wear. Cavitation damage can also occur to the tail fins of powerful swimming marine animals, such as dolphins and tuna. Cavitation is more likely to occur near the surface of the ocean, where the ambient water pressure is relatively low. Even if they have the power to swim faster, dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are too painful. Cavitation also slows tuna, but for a different reason. Unlike dolphins, these fish do not feel the bubbles, because they have bony fins without nerve endings. Nevertheless, they cannot swim faster because the cavitation bubbles create a vapor film around their fins that limits their speed. Lesions have been found on tuna that are consistent with cavitation damage.
Scombrid fishes (tuna, mackerel and bonito) are particularly high-performance swimmers. Along the margin at the rear of their bodies is a line of small rayless, non-retractable fins, known as finlets. There has been much speculation about the function of these finlets. Research done in 2000 and 2001 by Nauen and Lauder indicated that "the finlets have a hydrodynamic effect on local flow during steady swimming" and that "the most posterior finlet is oriented to redirect flow into the developing tail vortex, which may increase thrust produced by the tail of swimming mackerel".
Fish use multiple fins, so it is possible that a given fin can have a hydrodynamic interaction with another fin. In particular, the fins immediately upstream of the caudal (tail) fin may be proximate fins that can directly affect the flow dynamics at the caudal fin. In 2011, researchers using volumetric imaging techniques were able to generate "the first instantaneous three-dimensional views of wake structures as they are produced by freely swimming fishes". They found that "continuous tail beats resulted in the formation of a linked chain of vortex rings" and that "the dorsal and anal fin wakes are rapidly entrained by the caudal fin wake, approximately within the timeframe of a subsequent tail beat".
Motion control
Once motion has been established, the motion itself can be controlled with the use of other fins. Boats control direction (yaw) with fin-like rudders, and roll with stabilizer and keel fins. Airplanes achieve similar results with small specialised fins that change the shape of their wings and tail fins.
Stabilising fins are used as fletching on arrows and some darts, and at the rear of some bombs, missiles, rockets and self-propelled torpedoes. These are typically planar and shaped like small wings, although grid fins are sometimes used. Static fins have also been used for one satellite, GOCE.
Temperature regulation
Engineering fins are also used as heat transfer fins to regulate temperature in heat sinks or fin radiators.
Ornamentation and other uses
In biology, fins can have an adaptive significance as sexual ornaments. During courtship, the female cichlid, Pelvicachromis taeniatus, displays a large and visually arresting purple pelvic fin. "The researchers found that males clearly preferred females with a larger pelvic fin and that pelvic fins grew in a more disproportionate way than other fins on female fish."
Reshaping the human foot with swim fins, rather like the tail fin of a fish, adds thrust and efficiency to the kicks of a swimmer or underwater diver. Surfboard fins provide surfers with means to maneuver and control their boards. Contemporary surfboards often have a centre fin and two cambered side fins.
The bodies of reef fishes are often shaped differently from open water fishes. Open water fishes are usually built for speed, streamlined like torpedoes to minimise friction as they move through the water. Reef fish operate in the relatively confined spaces and complex underwater landscapes of coral reefs. For this manoeuvrability is more important than straight line speed, so coral reef fish have developed bodies which optimize their ability to dart and change direction. They outwit predators by dodging into fissures in the reef or playing hide and seek around coral heads.
The pectoral and pelvic fins of many reef fish, such as butterflyfish, damselfish and angelfish, have evolved so they can act as brakes and allow complex maneuvers. Many reef fish, such as butterflyfish, damselfish and angelfish, have evolved bodies which are deep and laterally compressed like a pancake, and will fit into fissures in rocks. Their pelvic and pectoral fins are designed differently, so they act together with the flattened body to optimise maneuverability. Some fishes, such as puffer fish, filefish and trunkfish, rely on pectoral fins for swimming and hardly use tail fins at all.
Evolution
There is an old theory, proposed by anatomist Carl Gegenbaur, which has been often disregarded in science textbooks, "that fins and (later) limbs evolved from the gills of an extinct vertebrate". Gaps in the fossil record had not allowed a definitive conclusion. In 2009, researchers from the University of Chicago found evidence that the "genetic architecture of gills, fins and limbs is the same", and that "the skeleton of any appendage off the body of an animal is probably patterned by the developmental genetic program that we have traced back to formation of gills in sharks". Recent studies support the idea that gill arches and paired fins are serially homologous and thus that fins may have evolved from gill tissues.
Fish are the ancestors of all mammals, reptiles, birds and amphibians. In particular, terrestrial tetrapods (four-legged animals) evolved from fish and made their first forays onto land 400 million years ago. They used paired pectoral and pelvic fins for locomotion. The pectoral fins developed into forelegs (arms in the case of humans) and the pelvic fins developed into hind legs. Much of the genetic machinery that builds a walking limb in a tetrapod is already present in the swimming fin of a fish.
In 2011, researchers at Monash University in Australia used primitive but still living lungfish "to trace the evolution of pelvic fin muscles to find out how the load-bearing hind limbs of the tetrapods evolved." Further research at the University of Chicago found bottom-walking lungfishes had already evolved characteristics of the walking gaits of terrestrial tetrapods.
In a classic example of convergent evolution, the pectoral limbs of pterosaurs, birds and bats further evolved along independent paths into flying wings. Even with flying wings there are many similarities with walking legs, and core aspects of the genetic blueprint of the pectoral fin have been retained.
About 200 million years ago the first mammals appeared. A group of these mammals started returning to the sea about 52 million years ago, thus completing a circle. These are the cetaceans (whales, dolphins and porpoises). Recent DNA analysis suggests that cetaceans evolved from within the even-toed ungulates, and that they share a common ancestor with the hippopotamus. About 23 million years ago another group of bearlike land mammals started returning to the sea. These were the pinnipeds (seals). What had become walking limbs in cetaceans and seals evolved further, independently in a reverse form of convergent evolution, back to new forms of swimming fins. The forelimbs became flippers and, in pinnipeds, the hind limbs became a tail terminating in two fins (the cetacean fluke, conversely, is an entirely new organ). Fish tails are usually vertical and move from side to side. Cetacean flukes are horizontal and move up and down, because cetacean spines bend the same way as in other mammals.
Ichthyosaurs are ancient reptiles that resembled dolphins. They first appeared about 245 million years ago and disappeared about 90 million years ago.
"This sea-going reptile with terrestrial ancestors converged so strongly on fishes that it actually evolved a dorsal fin and tail in just the right place and with just the right hydrological design. These structures are all the more remarkable because they evolved from nothing — the ancestral terrestrial reptile had no hump on its back or blade on its tail to serve as a precursor."
The biologist Stephen Jay Gould said the ichthyosaur was his favorite example of convergent evolution.
Robotics
The use of fins for the propulsion of aquatic animals can be remarkably effective. It has been calculated that some fish can achieve a propulsive efficiency greater than 90%. Fish can accelerate and maneuver much more effectively than boats or submarines, and produce less water disturbance and noise. This has led to biomimetic studies of underwater robots which attempt to emulate the locomotion of aquatic animals. An example is the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion. In 2005, the Sea Life London Aquarium displayed three robotic fish created by the computer science department at the University of Essex. The fish were designed to be autonomous, swimming around and avoiding obstacles like real fish. Their creator claimed that he was trying to combine "the speed of tuna, acceleration of a pike, and the navigating skills of an eel".
The AquaPenguin, developed by Festo of Germany, copies the streamlined shape and propulsion by front flippers of penguins. Festo also developed AquaRay, AquaJelly and AiraCuda, respectively emulating the locomotion of manta rays, jellyfish and barracuda.
In 2004, Hugh Herr at MIT prototyped a biomechatronic robotic fish with a living actuator by surgically transplanting muscles from frog legs to the robot and then making the robot swim by pulsing the muscle fibers with electricity.
Robotic fish offer some research advantages, such as the ability to examine part of a fish design in isolation from the rest, and variance of a single parameter, such as flexibility or direction. Researchers can directly measure forces more easily than in live fish. "Robotic devices also facilitate three-dimensional kinematic studies and correlated hydrodynamic analyses, as the location of the locomotor surface can be known accurately. And, individual components of a natural motion (such as outstroke vs. instroke of a flapping appendage) can be programmed separately, which is certainly difficult to achieve when working with a live animal."
See also
Aquatic locomotion
Fin and flipper locomotion
Fish locomotion
Robot locomotion
RoboTuna
Sail (submarine)
Surfboard fin
References
Further reading
Blake, Robert William (1983) Fish Locomotion. CUP Archive.
Tangorra JL, Esposito CJ and Lauder GV (2009) "Biorobotic fins for investigations of fish locomotion" In: Intelligent Robots and Systems, pages 2120–2125.
Tu X and Terzopoulos D (1994) "Artificial fishes: Physics, locomotion, perception, behavior" In: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 43–50.
External links
Locomotion in Fish Earthlife.
Computational fluid dynamics tutorial Many examples and images, with references to robotic fish.
Fish Skin Research University of British Columbia.
A fin-tuned design The Economist, 19 November 2008.
Animal anatomy
Watercraft components
Rocketry | Fin | Engineering | 2,755 |
7,652,409 | https://en.wikipedia.org/wiki/Szilassi%20polyhedron | In geometry, the Szilassi polyhedron is a nonconvex polyhedron, topologically a torus, with seven hexagonal faces.
Coloring and symmetry
The 14 vertices and 21 edges of the Szilassi polyhedron form an embedding of the Heawood graph onto the surface of a torus.
Each face of this polyhedron shares an edge with each other face. As a result, it requires seven colours to colour all adjacent faces. This example shows that, on surfaces topologically equivalent to a torus, some subdivisions require seven colours, providing the lower bound for the seven colour theorem. The other half of the theorem states that all toroidal subdivisions can be coloured with seven or fewer colours.
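For context, this is the equality case of the Heawood bound (a standard result quoted here for orientation, not stated in the source text), which for an orientable surface of genus $g > 0$ gives the chromatic number
$$\chi(g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor, \qquad \chi(1) = \left\lfloor \frac{7 + \sqrt{49}}{2} \right\rfloor = 7.$$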
The Szilassi polyhedron has an axis of 180-degree symmetry. This symmetry swaps three pairs of congruent faces, leaving one unpaired hexagon that has the same rotational symmetry as the polyhedron.
Complete face adjacency
The tetrahedron and the Szilassi polyhedron are the only two known polyhedra in which each face shares an edge with each other face.
If a polyhedron with f faces is embedded onto a surface with h holes, in such a way that each face shares an edge with each other face, it follows by some manipulation of the Euler characteristic that
$$h = \frac{(f - 4)(f - 3)}{12}.$$
This equation is satisfied for the tetrahedron with h = 0 and f = 4, and for the Szilassi polyhedron with h = 1 and f = 7.
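A minimal sketch of that manipulation, assuming every vertex has degree three (as in both known examples): since each pair of the $f$ faces shares exactly one edge, $e = \binom{f}{2}$, and counting edge-vertex incidences gives $v = 2e/3$. Substituting into the Euler formula $v - e + f = 2 - 2h$:
$$2 - 2h = f - \frac{1}{3}\binom{f}{2} = f - \frac{f(f-1)}{6} \quad\Longrightarrow\quad h = \frac{(f-3)(f-4)}{12}.$$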
The next possible solution, h = 6 and f = 12, would correspond to a polyhedron with 44 vertices and 66 edges. However, it is not known whether such a polyhedron can be realized geometrically without self-crossings (rather than as an abstract polytope). More generally this equation can be satisfied precisely when f is congruent to 0, 3, 4, or 7 modulo 12.
History
The Szilassi polyhedron is named after Hungarian mathematician Lajos Szilassi, who discovered it in 1977. The dual to the Szilassi polyhedron, the Császár polyhedron, was discovered earlier by Ákos Császár; it has seven vertices, 21 edges connecting every pair of vertices, and 14 triangular faces. Like the Szilassi polyhedron, the Császár polyhedron has the topology of a torus.
References
External links
.
.
Szilassi Polyhedron – Papercraft model at CutOutFoldUp.com
Github repo containing an attempted solution
Nonconvex polyhedra
Toroidal polyhedra
Unsolved problems in mathematics | Szilassi polyhedron | Mathematics | 536 |
1,549,929 | https://en.wikipedia.org/wiki/173%20%28number%29 | 173 (one hundred [and] seventy-three) is the natural number following 172 and preceding 174.
In mathematics
173 is:
an odd number.
a deficient number.
an odious number.
a balanced prime.
an Eisenstein prime with no imaginary part.
a Sophie Germain prime.
a Pythagorean prime.
a Higgs prime.
an isolated prime.
a regular prime.
a sexy prime.
a truncatable prime.
an inconsummate number.
the sum of two squares: 2² + 13² (see the verification sketch after this list).
the sum of three consecutive prime numbers: 53 + 59 + 61.
Palindromic number in bases 3 (20102₃) and 9 (212₉).
the 40th prime number following 167 and preceding 179.
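The arithmetic claims above can be verified with a short script (a sketch; the helper names are ad hoc):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def to_base(n: int, b: int) -> str:
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits

assert is_prime(173)                # prime
assert 2 ** 2 + 13 ** 2 == 173      # sum of two squares
assert 53 + 59 + 61 == 173          # sum of three consecutive primes
assert is_prime(173 + 6)            # sexy prime: 179 is also prime
assert all(to_base(173, b) == to_base(173, b)[::-1] for b in (3, 9))  # palindromes
print("all checks pass")
```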
External links
Number Facts and Trivia: 173
Prime curiosities: 173
Number Gossip: 173
References
Integers | 173 (number) | Mathematics | 174 |
70,053,543 | https://en.wikipedia.org/wiki/Misha%20%28Mandaeism%29 | In Mandaeism, misha () is anointing sesame oil used during rituals such as the masbuta (baptism) and masiqta (death mass), both of which are performed by Mandaean priests.
Etymology
The Mandaic word miša shares the same root with Mšiha ("Messiah"). However, Mandaeans do not use the word mšiha to refer to Mandaeans who have been anointed during rituals, in order to distance themselves from Christianity.
In the Qulasta
Several prayers in the Qulasta are recited over the oil, including prayers 48, 63, and 73. In some prayers, misha is referred to as misha dakia, or "pure oil."
See also
Holy anointing oil
Oil of catechumens
Riha (incense)
References
Mandaic words and phrases
Oils
Mandaean religious objects | Misha (Mandaeism) | Chemistry | 186 |
40,783,144 | https://en.wikipedia.org/wiki/Pursuing%20Stacks | Pursuing Stacks () is an influential 1983 mathematical manuscript by Alexander Grothendieck. It consists of a 12-page letter to Daniel Quillen followed by about 600 pages of research notes.
The topic of the work is a generalized homotopy theory using higher category theory. The word "stacks" in the title refers to what are nowadays usually called "∞-groupoids", one possible definition of which Grothendieck sketches in his manuscript. (The stacks of algebraic geometry, which also go back to Grothendieck, are not the focus of this manuscript.) Among the concepts introduced in the work are derivators and test categories.
Some parts of the manuscript were later developed in subsequent publications.
Overview of manuscript
I. The letter to Daniel Quillen
Pursuing Stacks started out as a letter from Grothendieck to Daniel Quillen. In this letter he discusses Quillen's progress on the foundations for homotopy theory and remarks on the lack of progress since then. He remarks how some of his friends at Bangor University, including Ronald Brown, were studying higher fundamental groupoids for a topological space, and how the foundations for such a topic could be laid down and relativized using topos theory, making way for higher gerbes. Moreover, he was critical of using strict groupoids for laying down these foundations, since they would not be sufficient for developing the full theory he envisioned.
He laid down his ideas of what such an ∞-groupoid should look like, and gave some axioms sketching out how he envisioned them. Essentially, they are categories with objects, arrows, arrows between arrows, and so on, analogous to the situation for higher homotopies. It's conjectured this could be accomplished by looking at a successive sequence of categories and functors $C_0 \to C_1 \to C_2 \to \cdots$ that are universal with respect to any kind of higher groupoid. This allows for an inductive definition of an ∞-groupoid that depends on the objects and the inclusion functors $C_n \hookrightarrow C_{n+1}$, where the categories $C_n$ keep track of the higher homotopical information up to level $n$. Such a structure was later called a coherator, since it keeps track of all higher coherences. This structure has been formally studied by Georges Maltsiniotis, making some progress on setting up these foundations and showing the homotopy hypothesis.
II. Test categories and test functors
Grothendieck's motivation for higher stacks
As a matter of fact, the description is formally analogous, and nearly identical, to the description of the homology groups of a chain complex – and it would seem therefore that stacks (more specifically, Gr-stacks) are in a sense the closest possible non-commutative generalization of chain complexes, the homology groups of the chain complex becoming the homotopy groups of the "non-commutative chain complex" or stack. - Grothendieck (pg. 23)
This is later explained by the intuition provided by the Dold–Kan correspondence: simplicial abelian groups correspond to chain complexes of abelian groups, so a higher stack modeled as a simplicial group should correspond to a "non-abelian" chain complex. Moreover, these should have an abelianization given by homology and cohomology, since there should be an associated six functor formalism (pg. 24). Moreover, there should be an associated theory of Lefschetz operations, similar to the thesis of Raynaud.
Because Grothendieck envisioned an alternative formulation of higher stacks using globular groupoids, and observed there should be a corresponding theory using cubical sets, he came up with the idea of test categories and test functors (pg. 42). Essentially, test categories should be categories with a class of weak equivalences such that there is a geometric realization functor
and a weak equivalence
where Hot denotes the homotopy category.
See also
Homotopy hypothesis
∞-groupoid
Derivator
N-group (category theory)
References
External links
Pursuing stacks, A Grothendieck 1983
Conjectures in Grothendieck's “Pursuing stacks”, Mathoverflow.net
Cat as a closed model category
Is there a high-concept explanation for why “simplicial” leads to “homotopy-theoretic”?, Mathoverflow.net
What's special about the Simplex category?
R. Brown, The Origins of `Pursuing Stacks' by Alexander Grothendieck
Algebraic geometry | Pursuing Stacks | Mathematics | 913 |
22,939,577 | https://en.wikipedia.org/wiki/Belinostat | Belinostat (trade name Beleodaq, previously known as PXD101) is a histone deacetylase inhibitor drug developed by TopoTarget for the treatment of hematological malignancies and solid tumors.
It was approved in July 2014 by the US FDA to treat peripheral T-cell lymphoma.
In 2007 preliminary results were released from the Phase II clinical trial of intravenous belinostat in combination with carboplatin and paclitaxel for relapsed ovarian cancer. Final results in late 2009 of a phase II trial for T-cell lymphoma were encouraging.
Belinostat has been granted orphan drug and fast track designation by the FDA, and was approved in the US for use against peripheral T-cell lymphoma on 3 July 2014. It is not approved in Europe.
The approved pharmaceutical formulation is given intravenously. Belinostat is primarily metabolized by UGT1A1; the initial dose should be reduced if the recipient is known to be homozygous for the UGT1A1*28 allele.
References
Acrylamides
Histone deacetylase inhibitors
Sulfonamides
Hydroxamic acids | Belinostat | Chemistry | 257 |
461,227 | https://en.wikipedia.org/wiki/Superconducting%20magnet | A superconducting magnet is an electromagnet made from coils of superconducting wire. They must be cooled to cryogenic temperatures during operation. In its superconducting state the wire has no electrical resistance and therefore can conduct much larger electric currents than ordinary wire, creating intense magnetic fields. Superconducting magnets can produce stronger magnetic fields than all but the strongest non-superconducting electromagnets, and large superconducting magnets can be cheaper to operate because no energy is dissipated as heat in the windings. They are used in MRI instruments in hospitals, and in scientific equipment such as NMR spectrometers, mass spectrometers, fusion reactors and particle accelerators. They are also used for levitation, guidance and propulsion in a magnetic levitation (maglev) railway system being constructed in Japan.
Construction
Cooling
During operation, the magnet windings must be cooled below their critical temperature, the temperature at which the winding material changes from the normal resistive state and becomes a superconductor, which is in the cryogenic range far below room temperature. The windings are typically cooled to temperatures significantly below their critical temperature, because the lower the temperature, the better superconductive windings work—the higher the currents and magnetic fields they can stand without returning to their non-superconductive state. Two types of cooling systems are commonly used to maintain magnet windings at temperatures sufficient to maintain superconductivity:
Liquid-cooled
Liquid helium is used as a coolant for many superconductive windings. It has a boiling point of 4.2 K, far below the critical temperature of most winding materials. The magnet and coolant are contained in a thermally insulated container (dewar) called a cryostat. To keep the helium from boiling away, the cryostat is usually constructed with an outer jacket containing (significantly cheaper) liquid nitrogen at 77 K. Alternatively, a thermal shield made of conductive material and maintained in 40 K – 60 K temperature range, cooled by conductive connections to the cryocooler cold head, is placed around the helium-filled vessel to keep the heat input to the latter at acceptable level. One of the goals of the search for high temperature superconductors is to build magnets that can be cooled by liquid nitrogen alone. At temperatures above about 20 K cooling can be achieved without boiling off cryogenic liquids.
Mechanical cooling
Because of increasing cost and the dwindling availability of liquid helium, many superconducting systems are cooled using two stage mechanical refrigeration. In general two types of mechanical cryocoolers are employed which have sufficient cooling power to maintain magnets below their critical temperature. The Gifford–McMahon cryocooler has been commercially available since the 1960s and has found widespread application. The G-M regenerator cycle in a cryocooler operates using a piston type displacer and heat exchanger. Alternatively, 1999 marked the first commercial application using a pulse tube cryocooler. This design of cryocooler has become increasingly common due to low vibration and long service interval as pulse tube designs use an acoustic process in lieu of mechanical displacement. In a typical two-stage refrigerator, the first stage will offer higher cooling capacity but at higher temperature (≈ 77 K) with the second stage reaching ≈ 4.2 K and < of cooling power. In use, the first stage is used primarily for ancillary cooling of the cryostat with the second stage used primarily for cooling the magnet.
Coil winding materials
The maximal magnetic field achievable in a superconducting magnet is limited by the field at which the winding material ceases to be superconducting, its "critical field", Hc, which for type-II superconductors is its upper critical field. Another limiting factor is the "critical current", Ic, at which the winding material also ceases to be superconducting. Advances in magnets have focused on creating better winding materials.
The superconducting portions of most current magnets are composed of niobium–titanium. This material has critical temperature of and can superconduct at up to about . More expensive magnets can be made of niobium–tin (Nb3Sn). These have a Tc of 18 K. When operating at 4.2 K they are able to withstand a much higher magnetic field intensity, up to 25 T to 30 T. Unfortunately, it is far more difficult to make the required filaments from this material. This is why sometimes a combination of Nb3Sn for the high-field sections and NbTi for the lower-field sections is used. Vanadium–gallium is another material used for the high-field inserts.
High-temperature superconductors (e.g. BSCCO or YBCO) may be used for high-field inserts when required magnetic fields are higher than Nb3Sn can manage. BSCCO, YBCO or magnesium diboride may also be used for current leads, conducting high currents from room temperature into the cold magnet without an accompanying large heat leak from resistive leads.
Conductor structure
The coil windings of a superconducting magnet are made of wires or tapes of Type II superconductors (e.g. niobium–titanium or niobium–tin). The wire or tape itself may be made of tiny filaments (about 20 micrometres thick) of superconductor in a copper matrix. The copper is needed to add mechanical stability, and to provide a low resistance path for the large currents in case the temperature rises above Tc or the current rises above Ic and superconductivity is lost. These filaments need to be this small because in this type of superconductor the current only flows in a surface layer whose thickness is limited to the London penetration depth (see Skin effect). The coil must be carefully designed to withstand (or counteract) magnetic pressure and Lorentz forces that could otherwise cause wire fracture or crushing of insulation between adjacent turns.
Operation
Power supply
The current to the coil windings is provided by a high current, very low voltage DC power supply, since in steady state the only voltage across the magnet is due to the resistance of the feeder wires. Any change to the current through the magnet must be done very slowly, first because electrically the magnet is a large inductor and an abrupt current change will result in a large voltage spike across the windings, and more importantly because fast changes in current can cause eddy currents and mechanical stresses in the windings that can precipitate a quench (see below). So the power supply is usually microprocessor-controlled, programmed to accomplish current changes gradually, in gentle ramps. It usually takes several minutes to energize or de-energize a laboratory-sized magnet.
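As a rough worked example of the voltage constraint (the inductance and ramp rate are illustrative assumptions, not values from the source): by the inductor law
$$V = L \frac{dI}{dt},$$
a magnet with $L = 10\ \text{H}$ ramped at $dI/dt = 100\ \text{A/s}$ would develop $1000\ \text{V}$ across its terminals, which is why laboratory magnets are ramped at a few amperes per second or slower.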
Persistent mode
An alternate operating mode used by most superconducting magnets is to short-circuit the windings with a piece of superconductor once the magnet has been energized. The windings become a closed superconducting loop, the power supply can be turned off, and persistent currents will flow for months, preserving the magnetic field. The advantage of this persistent mode is that stability of the magnetic field is better than is achievable with the best power supplies, and no energy is needed to power the windings. The short circuit is made by a 'persistent switch', a piece of superconductor inside the magnet connected across the winding ends, attached to a small heater. When the magnet is first turned on, the switch wire is heated above its transition temperature, so it is resistive. Since the winding itself has no resistance, no current flows through the switch wire. To go to persistent mode, the supply current is adjusted until the desired magnetic field is obtained, then the heater is turned off. The persistent switch cools to its superconducting temperature, short-circuiting the windings. Then the power supply can be turned off. The winding current, and the magnetic field, will not actually persist forever, but will decay slowly according to a normal inductive time constant (L/R):
$$H(t) = H_0 \, e^{-(R/L)t},$$
where $R$ is a small residual resistance in the superconducting windings due to joints or a phenomenon called flux motion resistance. Nearly all commercial superconducting magnets are equipped with persistent switches.
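For a feel for how slow this decay is, a back-of-the-envelope sketch (the 10 H inductance and 1 nΩ joint resistance are assumed values for illustration, not specifications from the source):

```python
import math

L = 10.0   # winding inductance in henries (assumed)
R = 1e-9   # residual joint resistance in ohms (assumed)

tau = L / R                        # inductive time constant, seconds
year = 365.25 * 24 * 3600          # seconds per year
loss_per_year = 1 - math.exp(-year / tau)

print(f"time constant: {tau / year:.0f} years")     # about 317 years
print(f"field lost per year: {loss_per_year:.2%}")  # about 0.32%
```

Even a nano-ohm of residual resistance thus costs only a fraction of a percent of field per year, which is why persistent-mode drift matters mainly for precision applications such as NMR.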
Magnet quench
A quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil enters the normal (resistive) state. This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a defect in the magnet can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the enormous current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal (this can take several seconds, depending on the size of the superconducting coil). This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and rapid boil-off of the cryogenic fluid. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when the beginning of a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air.
A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, necessitating the replacement of a number of magnets. In order to mitigate against potentially destructive quenches, the superconducting magnets that form the LHC are equipped with fast-ramping heaters that are activated once a quench event is detected by the complex quench protection system. As the dipole bending magnets are connected in series, each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into dumps that are massive blocks of metal which heat up to several hundreds of degrees Celsius due to the resistive heating in a matter of seconds. Although undesirable, a magnet quench is a "fairly routine event" during the operation of a particle accelerator.
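The scale of the energy involved follows from the magnetic stored energy formula (the inductance and current below are illustrative of LHC dipole magnitudes, not official figures):
$$E = \tfrac{1}{2} L I^2 \approx \tfrac{1}{2} \times (0.1\ \text{H}) \times (12{,}000\ \text{A})^2 \approx 7\ \text{MJ}$$
per dipole, so a series circuit of 154 magnets must dump on the order of a gigajoule, consistent with the dump blocks heating by several hundred degrees Celsius.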
Magnet "training"
In certain cases, superconducting magnets designed for very high currents require extensive bedding in, to enable the magnets to function at their full planned currents and fields. This is known as "training" the magnet, and involves a type of material memory effect. One situation this is required in is the case of particle colliders such as CERN's Large Hadron Collider. The magnets of the LHC were planned to run at 8 TeV (2 × 4 TeV) on its first run and 14 TeV (2 × 7 TeV) on its second run, but were initially operated at a lower energy of 3.5 TeV and 6.5 TeV per beam respectively. Because of initial crystallographic defects in the material, they will initially lose their superconducting ability ("quench") at a lower level than their design current. CERN states that this is due to electromagnetic forces causing tiny movements in the magnets, which in turn cause superconductivity to be lost when operating at the high precision needed for their planned current. By repeatedly running the magnets at a lower current and then slightly increasing the current until they quench under control, the magnet will gradually both gain the required ability to withstand the higher currents of its design specification without quenches occurring, and have any such issues "shaken" out of them, until they are eventually able to operate reliably at their full planned current without experiencing quenches.
History
Although the idea of making electromagnets with superconducting wire was proposed by Heike Kamerlingh Onnes shortly after he discovered superconductivity in 1911, a practical superconducting electromagnet had to await the discovery of superconducting materials that could support large critical supercurrent densities in high magnetic fields. The first successful superconducting magnet was built by G.B. Yntema in 1955 using niobium wire and achieved a field of 0.7 T at 4.2 K. Then, in 1961, J.E. Kunzler, E. Buehler, F.S.L. Hsu, and J.H. Wernick made the discovery that a compound of niobium and tin could support critical-supercurrent densities greater than 100,000 amperes per square centimetre in magnetic fields of 8.8 teslas. Despite its brittle nature, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields up to 20 T.
The persistent switch was invented in 1960 by Dwight Adams while a postdoctoral associate at Stanford University. The second persistent switch was constructed at the University of Florida by M.S. student R.D. Lichti in 1963. It has been preserved in a showcase in the UF Physics Building.
In 1962, T.G. Berlincourt and R.R. Hake discovered the high-critical-magnetic-field, high-critical-supercurrent-density properties of niobium–titanium alloys. Although niobium–titanium alloys possess less spectacular superconducting properties than niobium–tin, they are highly ductile, easily fabricated, and economical. Useful in supermagnets generating magnetic fields up to 10 teslas, niobium–titanium alloys are the most widely used supermagnet materials.
In 1986, the discovery of high temperature superconductors by Georg Bednorz and Karl Müller energized the field, raising the possibility of magnets that could be cooled by liquid nitrogen instead of the more difficult-to-work-with helium.
In 2007, a magnet with windings of YBCO achieved a new world-record field for a superconducting magnet. The US National Research Council has a goal of creating a 30-tesla superconducting magnet.
Globally in 2014, almost six billion US dollars' worth of economic activity resulted from applications for which superconductivity is indispensable. MRI systems, most of which employ niobium–titanium, accounted for about 80% of that total.
In 2016, Yoon et al. reported a 26 T no-insulation superconducting magnet that they built out of GdBa2Cu3O7–x, using a technique which was previously reported in 2013.
In 2017, a YBCO magnet created by the National High Magnetic Field Laboratory (NHMFL) broke the previous world record with a strength of 32 T. This is an all-superconducting user magnet, designed to last for many decades; it held the record as of March 2018.
In 2019, a new world record of 32.35 T for an all-superconducting magnet was achieved by the Institute of Electrical Engineering, Chinese Academy of Sciences (IEE, CAS), again using the no-insulation technique for the HTS insert magnet.
In 2019, the NHMFL also developed a non-insulated YBCO test coil combined with a resistive magnet and broke the lab's own world record for highest continuous magnetic field for any configuration of magnet at 45.5 T.
A 1.2 GHz (28.2 T) NMR magnet using HTS coils was achieved in 2020.
In 2022, the Hefei Institutes of Physical Science, Chinese Academy of Sciences (HFIPS, CAS) claimed a new world record for the strongest steady magnetic field, at 45.22 T; the previous NHMFL record of 45.5 T in 2019 was reached only momentarily, as the magnet failed immediately in a quench.
Uses
Superconducting magnets have a number of advantages over resistive electromagnets. They can generate much stronger magnetic fields than ferromagnetic-core electromagnets, which are limited to fields of around 2 T. The field is generally more stable, resulting in less noisy measurements. They can be smaller, and the area at the center of the magnet where the field is created is empty rather than being occupied by an iron core. Large magnets can consume much less power. In the persistent mode described above, the only power the magnet consumes is that needed for refrigeration equipment. Even higher fields, however, can be achieved with cooled resistive electromagnets, since superconducting coils enter the non-superconducting state at high fields. Steady fields of over 40 T can be achieved, usually by combining a Bitter electromagnet with a superconducting magnet (often as an insert).
Superconducting magnets are widely used in MRI scanners, NMR equipment, mass spectrometers, magnetic separation processes, and particle accelerators.
Rail transport
In Japan, after decades of research and development into superconducting maglev by Japanese National Railways and later Central Japan Railway Company (JR Central), the Japanese government gave permission to JR Central to build the Chūō Shinkansen, linking Tokyo to Nagoya and later to Osaka.
Particle accelerator
One of the most challenging uses of superconducting magnets is in the LHC particle accelerator. Its niobium–titanium (Nb–Ti) magnets operate at 1.9 K to allow them to run safely at 8.3 T. Each magnet stores 7 MJ; in total, the dipole magnets store several gigajoules of energy. Once or twice a day, as protons are accelerated from 450 GeV to 7 TeV, the field of the superconducting bending magnets is increased from 0.54 T to 8.3 T.
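As a rough sense of the scale involved when a circuit quenches, the stored energy per dipole power circuit can be estimated from the figures quoted above (154 dipoles in series, about 7 MJ each). The snippet below is illustrative arithmetic only, not an official CERN figure; the variable names are made up for the example.

```python
# Rough stored-energy estimate per LHC dipole power circuit, using only
# the figures quoted in this article (154 magnets in series, ~7 MJ each).
n_magnets_per_circuit = 154
energy_per_magnet_mj = 7.0

circuit_energy_gj = n_magnets_per_circuit * energy_per_magnet_mj / 1000.0
print(f"~{circuit_energy_gj:.1f} GJ must be dumped if one circuit quenches")
# -> ~1.1 GJ
```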
Fusion reactor
The central solenoid and toroidal field superconducting magnets designed for the ITER fusion reactor use niobium–tin (Nb3Sn) as a superconductor. The central solenoid coil carries a current of 46 kA and produces a magnetic field of 13.5 T. The 18 toroidal field coils operate at a maximum field of 11.8 T and store a combined energy of 41 GJ. They have been tested at a record current of 80 kA. Other, lower-field ITER magnets use niobium–titanium. Most of the ITER magnets have their field varied many times per hour.
Mass spectrometer
One high-resolution mass spectrometer was planned to use a 21-tesla superconducting magnet.
See also
Fault current limiter
Flux pumping
References
Further reading
Martin N. Wilson, Superconducting Magnets (Monographs on Cryogenics), Oxford University Press, new edition, 1987.
Yukikazu Iwasa, Case Studies in Superconducting Magnets: Design and Operational Issues (Selected Topics in Superconductivity), Kluwer Academic / Plenum Publishers, October 1994.
Habibo Brechna, Superconducting Magnet Systems, New York: Springer-Verlag, 1973.
External links
Making Superconducting Magnets From the National High Magnetic Field Laboratory
1986 evaluation of NbTi and Nb3Sn for particle accelerator magnets.
Types of magnets
Superconductivity
| Superconducting magnet | Physics,Materials_science,Engineering | 4,132 |
24,164,843 | https://en.wikipedia.org/wiki/Environmental%20Health%20Criteria%20%28WHO%29 | Environmental Health Criteria (EHC) is a series of monographs prepared by the International Programme on Chemical Safety (IPCS) and published by the World Health Organization (WHO). They aim to give "comprehensive data from scientific sources for the establishment of safety standards and regulations." More than 230 EHCs have been published.
Many EHCs cover the properties of individual chemicals or of groups of related chemicals (see, e.g., EHC 65: Butanols). Since 1998, this role has mostly been taken over by the related Concise International Chemical Assessment Documents (CICADs), also prepared by the IPCS and published by the WHO. EHCs can also cover non-chemical (potential) hazards (see, e.g., EHC 35: Extremely low frequency (ELF) fields) and methodology (see, e.g., EHC 144: Aged Population, principles for evaluating the effects of chemicals).
EHCs are based on a search of the scientific literature, and do not include new experimentation (unlike, e.g., SIDS or EU-RARs) although they may contain recommendations for further studies. A typical monograph on a chemical substance would include:
the physical and chemical properties of the substance and analytical methods for determining concentrations and exposure;
sources of environmental and industrial exposure and environmental transport;
chemobiokinetics and metabolism including absorption, distribution, transformation and elimination;
short- and long-term effects on animals, including carcinogenicity, mutagenicity, and teratogenicity;
an evaluation of risks for human health and of the effects on the environment.
Monographs do not contain specific guidelines for regulations (although they might contain examples of national exposure limits, for example), and they do not constitute an official position of the WHO or of any of the other organizations participating in the IPCS.
References
External links
List of Environmental Health Criteria monographs
Chemical safety | Environmental Health Criteria (WHO) | Chemistry | 393 |
51,408,489 | https://en.wikipedia.org/wiki/Air-Ink | AIR-INK is a proprietary brand of ink and composite products made by condensing carbon-based gaseous effluents generated by air pollution due to the incomplete combustion of fossil fuels. Founded by Graviky Labs, a spin-off group of the MIT Media Lab, AIR-INK produces its materials through a step-by-step process which primarily involves capturing emissions, separating carbon from the emissions, and then mixing this carbon with different types of oils and solutions to achieve advanced material properties. It uses a patented device and technique called 'KAALINK' to carry out the filtration of soot, which contains carbon and other polluting agents like heavy metals and polycyclic aromatic hydrocarbons.
AIR-INK is marketed as a solution to air pollution and its negative effects on human life, by allowing the print industry to offset its carbon. Dubbed "the first ink made out of recycled air pollution," its products were used in June 2016, in association with Heineken, to create street art and murals in Hong Kong's Sheung Wan district. 30–50 minutes of car pollution can supply enough carbon to fill one AIR-INK pen.
History
Anirudh Sharma, the founder of Graviky Labs, first conceived the idea of AIR-INK during an experiment at MIT, while designing a printer that could print with carbon nanoparticles. Sharma and his team spent close to three years researching how to purify and repurpose carbon soot from auto and factory emissions, a major contributor to air pollution and global carbon footprint. In 2013, the Fluid Interfaces research group, at the Massachusetts Institute of Technology demonstrated the process of converting carbon residue into ink for use in an inkjet cartridge.
In 2016, AIR-INK products were given to graphic artists in Hong Kong, a city known for its high air pollution, who were asked to paint murals. An artist who participated in this campaign described the product as "genius, and deserves a chance."
Technology
Soot composed of 2.5-micrometer black carbon particles found in petrol or diesel emissions is captured from the tailpipes of cars and diesel generators through a device called 'Kaalink.' A separate purification process ensures that the carbon particulate is recycled into safe inks without heavy metals or toxins. A single AIR-INK pen contains the carbon captured from 30–50 minutes of air pollution. The emissions from 2,500 hours of driving one standard diesel vehicle produce about 150 litres of ink.
'Kaalink'
Kaalink is a cylindrical device that is retrofitted onto a diesel generator's exhaust system or a vehicle's exhaust pipe to collect the emissions. It can collect up to 93% of the total exhaust, which is then processed to remove heavy metals and carcinogens. The end product from this device is a purified carbon-based pigment. Kaalink has been tested on cars, trucks, motorcycles and fishing boats in Bangalore and Hong Kong. The company has now started to work on capturing pollution from static sources of emission such as diesel generators. Third-party polluters also send their PM2.5 pollution to Graviky's recycling warehouses.
Some critics have proposed that this device will act similarly to a diesel particulate filter, which has been shown to increase back pressure on the engine, thereby marginally affecting its efficiency and resulting in a loss of power, decreased mileage, and increased emissions.
References
Further reading
External links
Companies based in Bengaluru
Emissions reduction
Indian brands
Inks
Kickstarter-funded products
2016 establishments in Karnataka | Air-Ink | Chemistry | 719 |
50,931,109 | https://en.wikipedia.org/wiki/11-Aminoundecanoic%20acid | 11-Aminoundecanoic acid is an organic compound with the formula H2N(CH2)10CO2H. This white solid is classified as an amine and a fatty acid. 11-Aminoundecanoic acid is a precursor to Nylon-11.
Production
As practiced by Arkema, 11-aminoundecanoic acid is prepared industrially from undecylenic acid, which is derived from castor oil. The synthesis proceeds in five separate reactions:
Transesterification of castor oil to methyl ricinoleate
Crude castor oil consists of about 80% triglycerides of ricinoleic acid, with ricinoleic acid itself representing about 90% of the oil's fatty acids. It is quantitatively transesterified with methanol to methyl ricinoleate (the methyl ester of ricinoleic acid) in the presence of basic sodium methoxide at 80 °C within a 1 h reaction time in a stirred reactor. At the end of the reaction, the resulting glycerol separates and the liquid methyl ester is washed with water to remove residual glycerol.
Pyrolysis of methylricinoleate to heptanal and methyl undecenoate
Methyl ricinoleate is evaporated at 250 °C, mixed 1:1 with superheated steam (600 °C) and decomposed in a cracking furnace at 400–575 °C, with a residence time of about 10 seconds, into its cleavage products heptanal and methyl undecenoate. In this variant of steam cracking, cleavage of the aliphatic chain occurs selectively between the hydroxymethylene and the allylic methylene group. Besides heptanal and methyl undecenoate, a mixture of methyl esters of saturated and unsaturated C18-carboxylic acids is obtained. This mixture is known under the trade name Esterol and is used as a lubricant additive.
Hydrolysis of methyl undecenoate to 10-undecenoic acid
The hydrolysis of the methyl ester with sodium hydroxide proceeds at 25 °C within 30 min with quantitative yield. After acidification with hydrochloric acid, solid 10-undecenoic acid (undecylenic acid) is obtained.
Hydrobromination of 10-undecenoic acid to 11-bromoundecanoic acid
The undecenoic acid is dissolved in toluene and, in the presence of the radical initiator benzoyl peroxide (BPO), gaseous hydrogen bromide is added contrary to the Markovnikov rule ("anti-Markovnikov"). When cooled to 0 °C, the fast and highly exothermic reaction produces 11-bromoundecanoic acid in 95% yield; the Markovnikov product, 10-bromoundecanoic acid, is produced in small quantities as a by-product. Toluene and unreacted hydrogen bromide are removed under reduced pressure and reused.
Bromine exchange of 11-bromoundecanoic acid to 11-aminoundecanoic acid
11-Bromoundecanoic acid is mixed at 30 °C with a large excess of 40% aqueous ammonia solution. When the reaction is complete, water is added and the mixture is heated to 100 °C to remove the excess ammonia.
The acid can be recrystallized from water. For further purification, the hydrochloride of 11-aminoundecanoic acid, which is available by acidification with hydrochloric acid, can be recrystallized from a methanol/ethyl acetate mixture.
Properties
11-Aminoundecanoic acid is a white crystalline and odorless solid with low solubility in water.
Use
By acylation of 11-aminoundecanoic acid with chloroacetyl chloride, chloroacetylamino-11-undecanoic acid can be produced, which acts as a fungicide and insecticide.
N-acyl derivatives of 11-aminoundecanoic acid in the form of oligomeric amides have remarkable properties as gelling agents for water and organic solvents.
Monomer for polyamide 11
By far the most important application of 11-aminoundecanoic acid is its use as a monomer for polyamide 11 (also: nylon-11). Wallace Carothers, the inventor of polyamide (nylon 66), is said to have polymerized 11-aminoundecanoic acid as early as 1931.
Although polyamide 11 is derived from a renewable raw material (i.e. biobased), it is not biodegradable. Nevertheless, it has the most advantageous ecological profile of comparable thermoplastics. Due to its excellent toughness at low temperatures, polyamide 11 can be used at temperatures as low as -70 °C. Its relatively non-polar molecular structure due to the low frequency of amide bonds in the molecule results in low moisture absorption compared to polyamide 6 or polyamide 66. In addition, polyamide 11 has very good chemical stability, e.g. against hydrocarbons, low density, good thermal stability, weather resistance and is easy to process.
References
Amino acids | 11-Aminoundecanoic acid | Chemistry | 1,099 |
2,885,832 | https://en.wikipedia.org/wiki/Geschwind%E2%80%93Galaburda%20hypothesis | The Geschwind–Galaburda hypothesis is a neurological theory proposed by Norman Geschwind and Albert Galaburda in 1987. The hypothesis posits that there are sex differences in cognitive abilities, relating them to the lateralisation of brain function. The maturation rates of the cerebral hemispheres differ and are mediated by circulating testosterone levels, which exert substantial influence during the foetal and post-puberty stages of development.
According to the hypothesis, testosterone delays the maturation of the brain, particularly the left hemisphere, so that corresponding regions of the right hemisphere and unaffected areas of the left hemisphere develop more rapidly. This leads to reduced verbal skills and an increased risk of developing language disorders such as dyslexia, while promoting rapid development of the right hemisphere and the skills corresponding to it, such as attention and problem-solving.
A rise in foetal testosterone levels hinders the individual's neurological and immune development, potentially explaining how cerebral lateralisation links to learning disorders, giftedness, and immune deficits. In cases of an underdeveloped or functionally impaired left hemisphere, the neuroanatomical asymmetries may lead to compensatory activity in other areas of the brain.
The field of "neuropsychology of individual differences" is concerned with understanding the relationship between brain lateralisation and behavioural variation. In their work, Geschwind and Galaburda proposed that differences in cognitive abilities depend on prenatal exposure to testosterone. This comprehensive theoretical framework links brain development, testosterone levels, and cognitive abilities, gathering a wide range of neuropsychological phenomena under a single theoretical umbrella.
Relation to dyslexia
Dyslexic individuals have varying degrees of reading, writing, and verbal impairments. The development of dyslexia has been explicitly highlighted in those who have a specific cerebral lateralisation pattern, which have shown difficulties in language processing. Typically, the left hemisphere of the brain is dominant in language processing; however, individuals with dyslexia may have an underdeveloped or functionally impaired left hemisphere, leading to language processing difficulties. In response to this, the right hemisphere and posterior parietal cortex compensate to undertake language processing tasks, resulting in inefficiencies in language processing. This compensatory activity in other areas of the brain may explain the variability in the degree of impairment experienced by dyslexic individuals. Understanding the relationship between cerebral lateralisation and language processing amongst dyslexic individuals could be an effective method of diagnosis and treatment. Further research could design appropriate schemes to enhance the language processing and communication abilities of the individuals.
Studies
The Geschwind–Galaburda hypothesis has garnered empirical support from a number of studies. For instance, Witelson et al. discovered that Einstein's brain exhibited an atypical pattern of cerebral lateralisation, which supports the idea that brain lateralisation is related to cognitive abilities. Regarding the influence of testosterone, increased prenatal exposure to testosterone is thought to produce an increased glial cell density in the left hemisphere and to disrupt the development of cerebral lateralisation. The increased cell density in Einstein's left hemisphere therefore suggests that prenatal testosterone may have influenced his cognitive abilities and delayed his language processing.
Another study highlighted how an individual's neuroanatomy can lead to developmental dyslexia. Researchers note that while the study of sex differences in dyslexia is still in its early stages, hormonal differences have been shown to cause cerebral asymmetry, which is followed by language and speech difficulties. The regions of the brain involved in language processing and phonological awareness, such as the planum temporale and the inferior parietal lobule, have been found to differ between individuals with and without dyslexia. Impairments in these brain regions can lead to dyslexia, which is characterised by difficulties in reading and writing.
Contradictions
Although the Geschwind–Galaburda hypothesis has been cited in mainstream media and publications as a cause of left-handedness, very little research evidence (if any) has been presented to substantiate the theory. In fact, evidence has emerged suggesting that high prenatal estrogen exposure is just as likely to enhance the gene expression for left-handedness. In a study endorsed by the Centers for Disease Control and Prevention (CDC), it is suggested that men who were prenatally exposed to diethylstilbestrol (a synthetic estrogen-based fertility drug) are more likely to be left-handed than unexposed men. A study by Cornish found no association between sex and handedness, contradicting the expectation that there should be more left-handers among males. While the Geschwind–Galaburda hypothesis suggests that higher levels of testosterone should lead to cerebral lateralisation asymmetries, the theory has not been definitively proven or disproven. Further research is needed to fully understand the complexity of this theory and its implications.
Moreover, the theory has the potential to oversimplify the relationship between brain lateralisation and cognitive abilities. The impacts of brain lateralisation on cognitive abilities are highly complex, and there may be multiple causes that are not fully explained by cerebral lateralisation or circulating testosterone. A more holistic approach may be needed to fully understand the relationship between brain lateralisation and cognitive abilities. The controversial evidence supporting and not supporting the theory suggests that it may not be a reliable explanation for these developments.
References and further reading
Sex differences in humans
Motor skills | Geschwind–Galaburda hypothesis | Biology | 1,155 |
21,038,459 | https://en.wikipedia.org/wiki/Goldbeter%E2%80%93Koshland%20kinetics | The Goldbeter–Koshland kinetics describe a steady-state solution for a 2-state biological system. In this system, the interconversion between these two states is performed by two enzymes with opposing effect. One example would be a protein Z that exists in a phosphorylated form ZP and in an unphosphorylated form Z; the corresponding kinase Y and phosphatase X interconvert the two forms. In this case we would be interested in the equilibrium concentration of the protein Z (Goldbeter–Koshland kinetics only describe equilibrium properties, thus no dynamics can be modeled). It has many applications in the description of biological systems.
The Goldbeter–Koshland kinetics is described by the Goldbeter–Koshland function:

$$z = \frac{[Z]}{[Z]_0} = G(v_1, v_2, J_1, J_2) = \frac{2 v_1 J_2}{B + \sqrt{B^2 - 4 (v_2 - v_1)\, v_1 J_2}}, \qquad B = v_2 - v_1 + J_1 v_2 + J_2 v_1,$$

with the constants

$$v_1 = k_1 [X], \qquad v_2 = k_2 [Y], \qquad J_1 = \frac{K_{M1}}{[Z]_0}, \qquad J_2 = \frac{K_{M2}}{[Z]_0}.$$
Graphically the function takes values between 0 and 1 and has a sigmoid behavior. The smaller the parameters J1 and J2 the steeper the function gets and the more of a switch-like behavior is observed. Goldbeter–Koshland kinetics is an example of ultrasensitivity.
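For readers who want to experiment with the steepness, the following is a minimal Python/NumPy sketch of the Goldbeter–Koshland function as written above; the function name and the parameter values in the example sweep are illustrative choices only.

```python
import numpy as np

def goldbeter_koshland(v1, v2, J1, J2):
    """Steady-state fraction z = [Z]/[Z]0 (Goldbeter-Koshland function)."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

# Sweep the phosphatase activity v1 at fixed kinase activity v2 = 1:
v1 = np.linspace(0.0, 2.0, 9)
print(goldbeter_koshland(v1, 1.0, 1.0, 1.0))    # large J: graded response
print(goldbeter_koshland(v1, 1.0, 0.01, 0.01))  # small J: switch-like response
```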
Derivation
Since equilibrium properties are sought, one can write

$$\frac{d[Z]}{dt} = 0.$$

From Michaelis–Menten kinetics the rate at which ZP is dephosphorylated is known to be $r_1 = \frac{k_1 [X][Z_P]}{K_{M1} + [Z_P]}$ and the rate at which Z is phosphorylated is $r_2 = \frac{k_2 [Y][Z]}{K_{M2} + [Z]}$. Here the $K_M$ values stand for the Michaelis–Menten constants, which describe how well the enzymes X and Y bind and catalyze the conversion, whereas the kinetic parameters $k_1$ and $k_2$ denote the rate constants for the catalyzed reactions. Assuming that the total concentration of Z is constant, one can additionally write that $[Z]_0 = [Z_P] + [Z]$, and setting $r_1 = r_2$ one thus gets:

$$(v_1 - v_2)\,z^2 + (v_2 - v_1 + J_1 v_2 + J_2 v_1)\,z - v_1 J_2 = 0 \qquad (1)$$

with the constants

$$v_1 = k_1 [X], \qquad v_2 = k_2 [Y], \qquad J_1 = \frac{K_{M1}}{[Z]_0}, \qquad J_2 = \frac{K_{M2}}{[Z]_0}, \qquad z = \frac{[Z]}{[Z]_0}. \qquad (2)$$

If we thus solve the quadratic equation (1) for z we get:

$$z = \frac{2 v_1 J_2}{B + \sqrt{B^2 - 4 (v_2 - v_1)\, v_1 J_2}}, \qquad B = v_2 - v_1 + J_1 v_2 + J_2 v_1. \qquad (3)$$

Thus (3) is a solution to the initial equilibrium problem and describes the equilibrium concentration of [Z] and [ZP] as a function of the kinetic parameters of the phosphorylation and dephosphorylation reactions and the concentrations of the kinase and phosphatase. The solution is the Goldbeter–Koshland function with the constants from (2):

$$z = \frac{[Z]}{[Z]_0} = G(v_1, v_2, J_1, J_2).$$
Ultrasensitivity of Goldbeter–Koshland modules
The ultrasensitivity (sigmoidality) of a Goldbeter–Koshland module can be measured by its Hill coefficient:

$$n_H = \frac{\ln(81)}{\ln\left(EC_{90}/EC_{10}\right)},$$
where EC90 and EC10 are the input values needed to produce 90% and 10% of the maximal response, respectively.
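One rough way to obtain this coefficient numerically is to tabulate the response on a dense input grid and interpolate for EC10 and EC90. The sketch below does this for the Goldbeter–Koshland function defined earlier; the grid bounds and parameter values are arbitrary illustrative choices.

```python
import numpy as np

def gk(v1, v2, J1, J2):
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

def hill_coefficient(J1, J2, v2=1.0, v1_max=50.0):
    """Estimate n_H = ln(81)/ln(EC90/EC10) on a dense grid of inputs v1."""
    v1 = np.linspace(1e-6, v1_max, 200001)
    z = gk(v1, v2, J1, J2)                 # monotonically increasing in v1
    ec10 = np.interp(0.1 * z[-1], z, v1)   # input giving 10% of max response
    ec90 = np.interp(0.9 * z[-1], z, v1)   # input giving 90% of max response
    return np.log(81.0) / np.log(ec90 / ec10)

print(hill_coefficient(J1=10.0, J2=10.0))  # ~1: graded, Michaelian
print(hill_coefficient(J1=0.01, J2=0.01))  # >>1: ultrasensitive switch
```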
In a living cell, Goldbeter–Koshland modules are embedded in a bigger network with upstream and downstream components. These components may constrain the range of inputs that the module receives, as well as the range of the module's outputs that the network is able to detect. Altszyler et al. (2014) studied how the effective ultrasensitivity of a modular system is affected by these restrictions. They found that Goldbeter–Koshland modules are highly sensitive to dynamic range limitations imposed by downstream components. However, in the case of asymmetric Goldbeter–Koshland modules, a moderate downstream constraint can produce effective sensitivities much larger than that of the original module when considered in isolation.
References
Enzyme kinetics
Chemical kinetics
Ordinary differential equations
Catalysis | Goldbeter–Koshland kinetics | Chemistry | 699 |
24,133,572 | https://en.wikipedia.org/wiki/C25H22O10 |
The molecular formula C25H22O10 may refer to:
Silibinin, major active constituent of silymarin
Umbilicaric acid, an organic polyphenolic carboxylic acid
Molecular formulas | C25H22O10 | Physics,Chemistry | 61 |
16,614,215 | https://en.wikipedia.org/wiki/Reed%20Odourless%20Earth%20Closet | The Reed odourless earth closet (ROEC) is a variation on the ventilated improved pit (VIP) toilet in which the pit is fully offset from the outhouse and is connected to the squatting plate by a curved chute.
The ROEC is fitted with a vent pipe to control odour and insect nuisance. It is claimed that the chute, in conjunction with the ventilation stack, encourages vigorous air circulation down the toilet, thereby removing odours and discouraging flies.
This type of latrine is common in southern Africa.
Design consideration for ROEC
Design life
The design life is most likely 4 to 15 years, but it should be as long as possible; at least 10 years is desirable. The longer the design life, the longer the interval between relocating or emptying the latrine.
Dimensions
Usually the pit cross-sectional area should not be more than 2 m² in order to avoid covers with large spans. In practice, a ROEC serving one household commonly has a diameter of 1–1.5 m or, in the case of square or rectangular pits, a width of 1–1.5 m.
Vent pipe
Vent pipe of a wide variety of materials are used, for example polyvinyl chloride (PVC), unplasticized PVC, bricks, etc. Whatever material is used, durability (including corrosion resistance), availability, cost and ease of construction are important factors. The vent pipe is sufficiently long such that the roof does not interfere with the action of wind across the top of the vent pipe. For both flat and sloped roofs, the top of the vent pipe should be at least 500 mm higher than highest point of the roof.
The internal diameter of the vent pipe depends on the venting velocity needed to achieve the recommended ventilation rate of at least 20 m³/h. This in turn depends on factors such as the internal surface roughness of the pipe, its length (which determines the friction losses), the head loss through the flyscreen and the wind direction.
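As a back-of-the-envelope sizing aid, the pipe bore needed for a given venting velocity follows from the flow relation A = Q/v. The sketch below computes the minimum internal diameter for the 20 m³/h rate mentioned above; the function name and the 1 m/s example velocity are illustrative assumptions, not values taken from design guidance.

```python
import math

def min_vent_diameter(flow_m3_per_h, velocity_m_per_s):
    """Minimum internal pipe diameter (m) so that the given air velocity
    delivers the target ventilation rate (A = Q / v, d = 2*sqrt(A/pi))."""
    q = flow_m3_per_h / 3600.0      # convert flow to m^3/s
    area = q / velocity_m_per_s     # required cross-sectional area
    return 2.0 * math.sqrt(area / math.pi)

# Example: delivering 20 m^3/h at 1 m/s needs roughly an 84 mm bore.
print(round(min_vent_diameter(20.0, 1.0) * 1000), "mm")
```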
Flyscreen specification
The purpose of the flyscreen is to prevent the passage of flies and mosquitoes; the mesh openings must therefore be small enough to exclude them. The flyscreen is made of a corrosion-resistant material that is able to withstand intense rainfall, high temperatures and strong sunlight. Stainless steel screens are preferred.
Advantages
As the pit is offset from the squatting hole excreta will not be seen, thus convenient to the users.
References
Workshop on community management of waste water (treatment and disposal)
Toilets | Reed Odourless Earth Closet | Biology | 510 |
21,495,178 | https://en.wikipedia.org/wiki/Seismic%20inversion | In geophysics (primarily in oil-and-gas exploration/development), seismic inversion is the process of transforming seismic reflection data into a quantitative rock-property description of a reservoir. Seismic inversion may be pre- or post-stack, deterministic, random or geostatistical; it typically includes other reservoir measurements such as well logs and cores.
Introduction
Geophysicists routinely perform seismic surveys to gather information about the geology of an oil or gas field. These surveys record sound waves which have traveled through the layers of rock and fluid in the earth. The amplitude and frequency of these waves can be estimated so that any side-lobe and tuning effects introduced by the wavelet may be removed.
Seismic data may be inspected and interpreted on its own without inversion, but this does not provide the most detailed view of the subsurface and can be misleading under certain conditions. Because of its efficiency and quality, most oil and gas companies now use seismic inversion to increase the resolution and reliability of the data and to improve estimation of rock properties including porosity and net pay.
There are many different techniques used in seismic inversion. These can be roughly grouped into two categories:
pre-stack or post-stack
seismic resolution or well-log resolution
The combination of these categories yields four technical approaches to the inversion problem, and the selection of a specific technique depends on the desired objective and the characteristics of the subsurface rocks. Although the order presented reflects advances in inversion techniques over the past 20 years, each grouping still has valid uses in particular projects or as part of a larger workflow.
Wavelet estimation
All modern seismic inversion methods require seismic data and a wavelet estimated from the data. Typically, a reflection coefficient series from a well within the boundaries of the seismic survey is used to estimate the wavelet phase and frequency. Accurate wavelet estimation is critical to the success of any seismic inversion. The inferred shape of the seismic wavelet may strongly influence the seismic inversion results and, thus, subsequent assessments of the reservoir quality.
Wavelet amplitude and phase spectra are estimated statistically from either the seismic data alone or from a combination of seismic data and well control using wells with available sonic and density curves. After the seismic wavelet is estimated, it is used to estimate seismic reflection coefficients in the seismic inversion.
When the estimated (constant) phase of the statistical wavelet is consistent with the final result, the wavelet estimation converges more quickly than when starting with a zero phase assumption. Minor edits and "stretch and squeeze" may be applied to the well to better align the events. Accurate wavelet estimation requires the accurate tie of the impedance log to the seismic. Errors in well tie can result in phase or frequency artifacts in the wavelet estimation. Once the wavelet is identified, seismic inversion computes a synthetic log for every seismic trace. To ensure quality, the inversion result is convolved with the wavelet to produce synthetic seismic traces which are compared to the original seismic.
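The QC step described above rests on the convolutional forward model: a synthetic trace is the reflection-coefficient series convolved with the estimated wavelet. The following is a minimal NumPy sketch of that forward model; the Ricker wavelet and the toy reflectivity spikes are common illustrative choices, not something this article prescribes.

```python
import numpy as np

def ricker(f0, dt, n):
    """Ricker wavelet with peak frequency f0 (Hz), sampled every dt seconds."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(reflectivity, wavelet):
    """Convolutional forward model: trace = reflectivity convolved with wavelet."""
    return np.convolve(reflectivity, wavelet, mode="same")

# QC idea: compare the synthetic against the recorded trace, e.g.
# residual = observed_trace - synthetic_trace(rc_from_well_log, wavelet)
rc = np.zeros(251)
rc[80], rc[140] = 0.12, -0.08                       # toy reflectivity spikes
trace = synthetic_trace(rc, ricker(f0=30.0, dt=0.002, n=81))
```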
Components of inversion
Inversion includes both seismic field data and well data, where well data serves to add the high frequency below the seismic band and to constrain the inversion. Well logs are first conditioned and edited to ensure there is a suitable relationship between impedance logs and the desired properties. The logs are then converted to time, filtered to approximate the seismic bandwidth and edited for borehole effects, balanced and classified by quality.
Seismic data is band-limited, reducing resolution and quality. To extend the frequency band available, low-frequency data is derived from log data, pre-stack depth or time migrated velocities and/or a regional gradient. High frequency can be derived from well control or geostatistical analysis.
Initial inversions are often run with relaxed constraints, starting with the seismic and then adding limited-trend data from the wells. This provides a rough overview of the reservoir in an unbiased manner. It is critical at this point to evaluate the accuracy of the tie between the inversion results and the wells, and between the original seismic data and the derived synthetics. It is also important to ensure that the wavelet matches the phase and frequency of seismic data.
Without a wavelet, the solution is not unique. Deterministic inversions address this problem by constraining the answer in some way, usually to well log data. Stochastic inversions address this problem by generating a range of plausible solutions, which can then be narrowed through testing for best fit against various measurements (including production data).
Post-stack seismic resolution inversion
An example of a post-stack seismic resolution inversion technique is the constrained sparse-spike inversion (CSSI). This assumes a limited number of reflection coefficients, with larger amplitude. The inversion results in acoustic impedance (AI), which is the product of rock density and p-wave velocity. Unlike seismic reflection data (which is an interface property) AI is a rock property. The model generated is of higher quality, and does not suffer from tuning and interference caused by the wavelet.
CSSI transforms seismic data to a pseudo-acoustic impedance log at every trace. Acoustic impedance is used to produce more accurate and detailed structural and stratigraphic interpretations than can be obtained from seismic (or seismic attribute) interpretation. In many geological environments acoustic impedance has a strong relationship to petrophysical properties such as porosity, lithology, and fluid saturation.
A good (CSSI) algorithm will produce four high-quality acoustic impedance volumes from full or post-stack seismic data: full-bandwidth impedance, bandlimited Impedance, reflectivity model, and low-frequency component. Each of these components can be inspected for its contribution to the solution and to check the results for quality. To further adapt the algorithm mathematics to the behavior of real rocks in the subsurface, some CSSI algorithms use a mixed-norm approach and allow a weighting factor between minimizing the sparsity of the solution and minimizing the misfit of the residual traces.
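As a sketch of the mixed-norm idea mentioned above, the quantity traded off in such an algorithm can be written as an L2 data-misfit term plus a weighted L1 sparsity term on the reflectivity. The function below is illustrative only; the names and the exact form of the penalty are assumptions, not the objective of any particular commercial CSSI code.

```python
import numpy as np

def mixed_norm_objective(reflectivity, observed, wavelet, lam):
    """L2 misfit of the residual trace plus lam-weighted L1 sparsity penalty."""
    residual = observed - np.convolve(reflectivity, wavelet, mode="same")
    return float(residual @ residual) + lam * float(np.sum(np.abs(reflectivity)))
```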
Pre-stack seismic resolution inversion
Pre-stack inversion is often used when post-stack inversion fails to sufficiently differentiate geologic features with similar P-impedance signatures. Simultaneous inversion solves for S-impedance and density, in addition to P-impedance. While many geologic features can express similar P-impedance characteristics, few will share combined P-impedance and S-impedance traits (allowing improved separation and clarity). Often a feasibility study using the wells logs will indicate whether separation of the desired lithotype can be achieved with P-impedance alone or whether S-impedance is also required. This will dictate whether a pre- or post-stack inversion is needed.
Simultaneous Inversion (SI) is a pre-stack method that uses multiple offset or angle seismic sub-stacks and their associated wavelets as input; it generates P-impedance, S-impedance and density as outputs (although the density output resolution is rarely as high as the impedances). This helps improve discrimination between lithology, porosity and fluid effects. For each input partial stack, a unique wavelet is estimated. All models, partial stacks and wavelets are input to a single inversion algorithm — enabling inversion to effectively compensate for offset-dependent phase, bandwidth, tuning and NMO stretch effects.
The inversion algorithm works by first estimating angle-dependent P-wave reflectivities for the input-partial stacks. Next, these are used with the full Zoeppritz equations (or approximations, such as Aki–Richards, for some algorithms) to find band-limited elastic reflectivities. These are in turn merged with their low-frequency counterparts from the model and integrated to elastic properties. This approximate result is then improved in a final inversion for P-impedance, S-impedance and density, subject to various hard and soft constraints. One constraint can control the relation between density and compressional velocity; this is necessary when the range of angles is not great enough to be diagnostic of density.
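The angle dependence exploited here is often written in the commonly quoted three-term Aki–Richards linearization of the Zoeppritz equations. The sketch below implements that small-contrast form; the interface properties and angles in the example are made up for illustration.

```python
import numpy as np

def aki_richards(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term Aki-Richards approximation to the P-wave reflection
    coefficient versus incidence angle (small-contrast assumption; theta
    should strictly be the average of incidence and transmission angles)."""
    theta = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2.0, (vs1 + vs2) / 2.0, (rho1 + rho2) / 2.0
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    return (0.5 * (dvp / vp) / np.cos(theta) ** 2
            - 4.0 * k * (dvs / vs) * np.sin(theta) ** 2
            + 0.5 * (drho / rho) * (1.0 - 4.0 * k * np.sin(theta) ** 2))

# Example: shale over a softer sand, angles 0-40 degrees.
print(aki_richards(3000.0, 1500.0, 2.40, 3200.0, 1900.0, 2.20,
                   np.arange(0.0, 41.0, 10.0)))
```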
An important part in the inversion procedure is the estimation of the seismic wavelets. This is accomplished by computing a filter that best shapes the angle-dependent well log reflection coefficients in the region of interest to the corresponding offset stack at the well locations. Reflection coefficients are calculated from P-sonic, S-sonic and density logs using the Zoeppritz equations. The wavelets, with amplitudes representative of each offset stack, are input directly into the inversion algorithm. Since a different wavelet is computed for each offset volume, compensation is automatically done for offset-dependent bandwidth, scaling and tuning effects. A near-stack wavelet can be used as the starting point for estimating the far-angle (or offset) wavelet.
No prior knowledge of the elastic parameters and density beyond the solution space defined by any hard constraints is provided at the well locations. This makes comparison of the filtered well logs and the inversion outputs at these locations a natural quality control. The lowest frequencies from the inversion are replaced with information from the geologic model since they are poorly constrained by the seismic data. When applied in global mode a spatial control term is added to the objective function and large subsets of traces are inverted simultaneously. The simultaneous inversion algorithm takes multiple angle-stacked seismic data sets and generates three elastic parameter volumes as output.
The resulting elastic parameters are real-rock properties that can be directly related to reservoir properties. The more advanced algorithms use the full Knott–Zoeppritz equations and there is full allowance for amplitude and phase variations with offset. This is done by deriving unique wavelets for each input-partial stack. The elastic parameters themselves can be directly constrained during the seismic inversion and rock-physics relationships can be applied, constraining pairs of elastic parameters to each other. Final elastic-parameter models optimally reproduce the input seismic, as this is part of the seismic inversion optimization.
Post stack geostatistical inversion
Geostatistical inversion integrates high resolution well data with low resolution 3-D seismic, and provides a model with high vertical detail near and away from well control. This generates reservoir models with geologically-plausible shapes, and provides a clear quantification of uncertainty to assess risk. Highly detailed petrophysical models are generated, ready for input to reservoir-flow simulation.
Geostatistics differs from statistics in that it recognizes that only certain outcomes are geologically plausible. Geostatistical inversion integrates data from many sources and creates models that have greater resolution than the original seismic, match known geological patterns, and can be used for risk assessment and reduction.
Seismic, well logs and other input data are each represented as a probability density function (PDF), which provides a geostatistical description based on histograms and variograms. Together these define the chances of a particular value at a particular location, and the expected geological scale and composition throughout the modeled area.
Unlike conventional inversion and geomodeling algorithms, geostatistical inversion takes a one-step approach, solving for impedance and discrete property types or lithofacies at the same time. Taking this approach speeds the process and improves accuracy.
Individual PDFs are merged using Bayesian inference techniques, resulting in a posterior PDF conditioned to the whole data set. The algorithm determines the weighting of each data source, eliminating potential bias. The posterior PDF is then input to a Markov chain Monte Carlo algorithm to generate realistic models of impedance and lithofacies, which are then used to co-simulate rock properties such as porosity. These processes are typically iterated until a model emerges that matches all information. Even with the best model, some uncertainty remains. Uncertainty can be estimated using random seeds to generate a range of realizations. This is especially useful when dealing with parameters that are sensitive to change; an analysis of this sort enables greater understanding of development risk.
Pre-stack log-detail inversion
Amplitude versus offset (AVO) (AVA) geostatistical inversion incorporates simultaneous AVO (AVA) inversion into the geostatistical inversion algorithm so high resolution, geostatistics, and AVO may be attained in a single method. The output model (realizations) are consistent with well log information, AVO seismic data, and honor rock property relationships found in the wells. The algorithm also simultaneously produces elastic properties (P-impedance, S-impedance and density) and lithology volumes, instead of sequentially solving for lithology first and then populating the cell with impedance and density values. Because all output models match all input data, uncertainty can be quantitatively assessed to determine the range of reservoir possibilities within the constraining data.
AVA geostatistical inversion software uses leading-edge geostatistical techniques, including Markov chain Monte Carlo (MCMC) sampling and pluri-Gaussian lithology modeling. It is thus possible to exploit "informational synergies" to retrieve details that deterministic inversion techniques blur out or omit. As a result, geoscientists are more successful in reconstructing both the overall structure and the fine details of the reservoir. The use of multiple-angle-stack seismic volumes in AVA geostatistical inversion enables further evaluation of elastic rock properties and probable lithology or seismic facies and fluid distributions with greater accuracy.
The process begins with a detailed petrophysical analysis and well log calibration. The calibration process replaces unreliable and missing sonic and density measurements with synthesized values from calibrated petrophysical and rock-physics models. Well log information is used in the inversion process to derive wavelets, supply the low frequency component not present in the seismic data, and to verify and analyze the final results. Next, horizon and log data are used to construct the stratigraphic framework for the statistical information to build the models. In this way, the log data is only used for generating statistics within similar rock types within the stratigraphic layers of the earth.
Wavelet analysis is conducted by extracting a filter from each of the seismic volumes using the well elastic (angle or offset) impedance as the desired output. The quality of the inversion result is dependent upon the extracted seismic wavelets. This requires accurate p-sonic, s-sonic and density logs tied to the appropriate events on the seismic data. Wavelets are extracted individually for each well. A final "multi-well" wavelet is then extracted for each volume using the best individual well ties and used as input to the inversion.
Histograms and variograms are generated for each stratigraphic layer and lithology, and preliminary simulations are run on small areas. The AVA geostatistical inversion is then run to generate the desired number of realizations, which match all the input data. The results are quality controlled by direct comparison of the inverted rock property volumes against the well logs. Further QC involves review by a multidisciplinary team of all input parameters and the results of the simulation. Analysis of multiple realizations produces mean (P50) property cubes or maps. Most often these are lithology or seismic facies cubes and predicted lithology or facies probabilities, but other outputs are also possible. Selected lithology and facies cubes are also generated for P15 and P85 probabilities (for example). Reservoir 3-D bodies of hydrocarbon-bearing units are captured with their corresponding rock properties, and the uncertainty in reservoir size and properties is quantified.
See also
Linear seismic inversion
Seismic to simulation
Exploration geophysics
Full waveform inversion
Model inversion
References
Further reading
Caulfield, C., Feroci, M., Yakiwchuk, K. "Seismic Inversion for Horizontal Well Planning in Western Saskatchewan", Evolving Geophysics Through Innovation, pp. 213–214.
Chakrabarty, C., Fossey, J., Renard, G., Gadelle, C. "SAGD Process in the East Senlac Field: From Reservoir Characterization to Field Application", No. 1998.192.
Contreras, A., Torres-Verdin, C., Chesters, W., Kvien, K., Globe, M., "Joint Stochastic Inversion of Petrophysical Logs and 3D Pre-Stack Seismic Data to Assess the Spatial Continuity of Fluid Units Away from Wells: Application to a Gulf-of-Mexico Deepwater Hydrocarbon Reservoir", SPWLA 46th Annual Logging Symposium, June 26–29, 2005.
De Barros, Dietrich, M., "Full Waveform Inversion of Shot Gathers in Terms of Poro-elastic Parameters", EAGE, London, June 2007.
Deutsch, C., Geostatistical Reservoir Modeling, New York: Oxford University Press, 2002, 376 pages.
Francis, A., "Limitations of Deterministic and Advantages of Stochastic Seismic Inversion", CSEG Recorder, February 2005, pp. 5–11.
Hasanusi, D., Adhitiawan, E., Baasir, A., Lisapaly, L., van Eykenhof, R., "Seismic Inversion as an Exciting Tool to Delineate Facies Distribution in Tiaka Carbonate Reservoirs, Sulawesi – Indonesia", Proceedings, Indonesian Petroleum Association, Thirty-First Annual Convention and Exhibition, May 2007.
Russell, B., Hampson, D., "The Old and the New in Seismic Inversion", CSEG Recorder, December 2006, pp. 5–11.
Stephen, K., MacBeth, C., "Reducing Reservoir Prediction Uncertainty by Updating a Stochastic Model Using Seismic History Matching", SPE Reservoir Evaluation & Engineering, December 2008.
Vargas-Meleza, L., Megchun, J., Vazquez, G., "Petrophysical Properties Estimation by Integrating AVO, Seismic Inversion and Multiattribute Analysis in a 3-D Volume of Playuela, Veracruz", AAPG International Conference: October 24–27, 2004, Cancun, Mexico.
Wang, X., Wu, S., Xu, N., Zhang, G., "Estimation of Gas Hydrate Saturation Using Constrained Sparse Spike Inversion: Case Study from the Northern South China Sea", Terr. Atmos. Ocean. Sci., Vol. 17, No. 4, 799–813, December 2006.
Watson, I., Lines, L., "Seismic Inversion at Pike’s Peak, Saskatchewan", CREWES Research Report, Volume 12, 2000.
Whitfield, J., "The Relation of Net Pay to Amplitude Versus Offset Gradients: A Gulf of Mexico Case Study", University of Houston Master's Thesis, 1993.
Zou, Y., Bentley, L., Lines, L., "Integration of Reservoir Simulation with Time-Lapse Seismic Modeling", 2004 CSEG National Convention.
External links
Earthworks Seismic Inversion company
Understanding Stochastic Inversion
Jason Geoscience Workbench (JGW)
CGGVERITAS, Simultaneous Elastic Inversion
Society of Petrophysicists and Well Log Analysts (SPWLA)
The University of Texas at Austin Petroleum and Geoscience Engineering Reading Room
Geological Survey Publications Warehouse
Geostatistics
Geophysics
Economic geology
Geology software
Petroleum geology
Seismology | Seismic inversion | Physics,Chemistry | 3,963 |
3,362,378 | https://en.wikipedia.org/wiki/Merrifield%20resin | Merrifield resin is a cross-linked polystyrene resin that carries a chloromethyl functional group. Merrifield resin is named after its inventor, Robert Bruce Merrifield (1984 winner of the Nobel Prize in Chemistry), and is used in solid-phase synthesis. The material is typically available as white beads. These beads are allowed to swell in suitable solvents (ethyl acetate, DMF, DMSO), which then allows the reagents to substitute the chloride substituents.
Merrifield resin can be prepared by chloromethylation of polystyrene or by the copolymerization of styrene and 4-vinylbenzyl chloride.
References
Copolymers
Plastics | Merrifield resin | Physics | 155 |
12,951,110 | https://en.wikipedia.org/wiki/Topcolor | Topcolor is a model in theoretical physics, of dynamical electroweak symmetry breaking in which the top quark and anti-top quark form a composite Higgs boson by a new force arising from massive "top gluons". The solution to composite Higgs models was actually anticipated in 1981, and found to be the Infrared fixed point for the top quark mass.
Analogy with known physics
The composite Higgs boson made from a bound pair of top-anti-top quarks is analogous to the phenomenon of superconductivity, where Cooper pairs are formed by the exchange of phonons. The pairing dynamics and its solution was treated in the Bardeen-Hill-Lindner model.
The original topcolor naturally involved an extension of the standard model color gauge group to a product group SU(3)×SU(3)×SU(3)×... One of the gauge groups contains the top and bottom quarks, and has a sufficiently large coupling constant to cause the condensate to form. The topcolor model anticipates the idea of dimensional deconstruction and extra space dimensions, as well as the large mass of the top quark.
In 2019 this was revisited ("scalar democracy") in which many composite Higgs bosons may form at very high energies, composed of the known quarks and leptons, perhaps bound by universal force (e.g., gravity, or an extension of topcolor). The standard model Higgs boson is then a top-anti-top boundstate. The theory predicts many new Higgs doublets, starting at the TeV mass scale, with couplings to the known fermions, that may explain their masses and mixing angles. The first sequential new Higgs bosons should be accessible to the LHC.
See also
Fermion condensate
Technicolor (physics)
Hierarchy problem
Top quark condensate
References
Physics beyond the Standard Model | Topcolor | Physics | 409 |
583,532 | https://en.wikipedia.org/wiki/Function%20type | In computer science and mathematical logic, a function type (or arrow type or exponential) is the type of a variable or parameter to which a function has or can be assigned, or an argument or result type of a higher-order function taking or returning a function.
A function type depends on the type of the parameters and the result type of the function (it, or more accurately the unapplied type constructor , is a higher-kinded type). In theoretical settings and programming languages where functions are defined in curried form, such as the simply typed lambda calculus, a function type depends on exactly two types, the domain A and the range B. Here a function type is often denoted , following mathematical convention, or , based on there existing exactly (exponentially many) set-theoretic functions mappings A to B in the category of sets. The class of such maps or functions is called the exponential object. The act of currying makes the function type adjoint to the product type; this is explored in detail in the article on currying.
The function type can be considered to be a special case of the dependent product type, which among other properties, encompasses the idea of a polymorphic function.
Programming languages
The syntax used for function types varies among programming languages; a convenient point of comparison is the type signature of the higher-order function composition function.
In C#, for example, the type of the composition function is Func<Func<A,B>, Func<B,C>, Func<A,C>>.
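The same signature can be sketched with Python's typing generics; the names below are illustrative, and the example at the end shows the composed function being applied.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Function composition: compose(f, g) is the map x -> g(f(x))."""
    return lambda x: g(f(x))

# (A -> B) -> (B -> C) -> (A -> C), mirroring the C# Func<...> form above.
length_is_even: Callable[[str], bool] = compose(len, lambda n: n % 2 == 0)
print(length_is_even("abcd"))  # True
```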
Due to type erasure in C++11's std::function, it is more common to use templates for higher order function parameters and type inference (auto) for closures.
Denotational semantics
The function type in programming languages does not correspond to the space of all set-theoretic functions. Given the countably infinite type of natural numbers as the domain and the booleans as the range, there are uncountably many (2^ℵ0 = 𝔠) set-theoretic functions between them. Clearly this space of functions is larger than the number of functions that can be defined in any programming language, as there exist only countably many programs (a program being a finite sequence of a finite number of symbols), and some of the set-theoretic functions, such as one that solves the halting problem, are not computable.
Denotational semantics concerns itself with finding more appropriate models (called domains) to model programming language concepts such as function types. It turns out that restricting expression to the set of computable functions is not sufficient either if the programming language allows writing non-terminating computations (which is the case if the programming language is Turing complete). Expression must be restricted to the so-called continuous functions (corresponding to continuity in the Scott topology, not continuity in the real analytical sense). Even then, the set of continuous functions contains the parallel-or function, which cannot be correctly defined in all programming languages.
See also
Cartesian closed category
Currying
Exponential object, category-theoretic equivalent
First-class function
Function space, set-theoretic equivalent
References
Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study. See section 1.2.
Data types
Subroutines
Type theory | Function type | Mathematics | 695 |
26,848,503 | https://en.wikipedia.org/wiki/Motorola%20Pageboy%20II | Motorola Pageboy II was a pager and the successor to the Motorola Pageboy.
History
The Motorola pager was a small radio receiver that delivered a message, individually or to a widespread group, to those carrying the device.
The first successful consumer pager was Motorola's Pageboy I, introduced in 1974. This type had no display and could not store messages; however, it was small and portable, and it notified its wearer that a message had been sent.
Pageboy II
Motorola's Pageboy II was launched in 1975 for the United States and in 1976 for Europe, in various types:
Pb II 5-tone only, 68–88 MHz / 146–174 MHz (US and Eur)
Pb II tone only for 5-tone, 80.6–88 MHz / 146–174 MHz (US)
Pb II tone & voice radio for 2-tone signalling systems, 68–88 MHz / 146–174 MHz (US)
Pb II A04FNC Series radio pager, 450–512 MHz (Eur)
Pb II MAA04FNC1568AA, 440–470 MHz, for five-tone sequential calling systems (Eur)
Pb II radio pager H04BNC Series, 406–420 MHz, 450–470 MHz (US and Eur)
The variety and reliability made the system popular worldwide.
The European system worked strictly in the 85–87 MHz and 150–170 MHz bands, and later also in the 450–512 MHz band, and was based on the ZVEI codes. ZVEI is the abbreviation of Zentralverband der Elektrotechnischen Industrie (Central Association of the Electrical Engineering Industry) in West Germany. This organization was responsible for a standardized sequence of five selective calling tones.
The device was in use to alert individuals or groups of persons within fire brigades or civil protection organizations. Though the device was even smaller than the Pageboy I, its speaker was pointed upward so that the alerting beeps followed by a voice message always came through. For its time, the device had an outstanding receiver sensitivity, which was reached by using IC circuitry.
Motorola's Pageboy II is still in use even now. As many fire brigades switched over to digital equipment, their outdated Pageboy IIs found their way to industrial safety and medical organizations.
References
External links
Firebrigade ZVEI testalert video
ZVEI (MP3 sound) testalert
Pageboy 2
Pagers | Motorola Pageboy II | Technology | 505 |
19,934,708 | https://en.wikipedia.org/wiki/Chia-Shun%20Yih | Chia-Shun Yih (; July 25, 1918 – April 25, 1997) was the Stephen P. Timoshenko Distinguished University Professor Emeritus at the University of Michigan. He made many significant contributions to fluid mechanics. Yih was also a seal artist.
Biography
Yih was born on July 25, 1918, in Guiyang, Guizhou province of China. Yih received his junior middle school education in Zhenjiang, and entered Suzhou High School in 1934 in Suzhou, Jiangsu Province.
In 1937, Yih entered the National Central University and studied civil engineering. Yih graduated in 1941 then did research at a hydrodynamics laboratory in Guanxian (or Guan County; 灌县; current Dujiangyan) of Sichuan province. Yih also worked in a bridge construction company in Guizhou. Later, Yih taught at Guizhou University.
In 1945, Yih went to study at the University of Iowa in the United States, where he obtained his PhD in 1948. Yih served as a professor of the University of Michigan for most of his academic career.
Yih was also well known for his talent for languages, which was already apparent when he was a high school student. In his first classes in high school, he could talk fluently with his American English teacher. Yih mastered German soon after entering college and was able to communicate smoothly with the local German missionaries in Chongqing. Later Yih learned French as well, and lectured on mechanics in French at the University of Paris and the University of Grenoble.
He died on April 25, 1997, of heart failure, in his sleep, while in an airplane over Japan.
Honors and awards
Theodore von Kármán Medal, 1981
Fluid Dynamics Prize, from the American Physical Society, 1985
Otto Laporte Award, 1989
Yih was a member of the National Academy of Engineering (elected 1980), and an academician of the Academia Sinica (elected 1970), and a fellow of the American Physical Society (elected 1959).
Books
C.-S. Yih, Fluid Mechanics: A Concise Introduction to Theory, West River Press, Ann Arbor (1979).
C.-S. Yih, Stratified Flows, 2nd ed., Academic Press (1980).
C.-S. Yih, Fluid Mechanics, West River Press, Ann Arbor (1988).
C.-S. Yih (Ed.), Advances in Applied Mechanics, Academic Press (1982).
References
Fluid dynamicists
1918 births
1997 deaths
Chinese emigrants to the United States
People from Guiyang
20th-century American engineers
Members of the United States National Academy of Engineering
University of Iowa alumni
National Central University alumni
Nanjing University alumni
University of Michigan faculty
Members of Academia Sinica
Artists from Guizhou
Educators from Guizhou
Chinese seal artists
Physicists from Guizhou
Fellows of the American Physical Society | Chia-Shun Yih | Chemistry | 584 |
41,964,064 | https://en.wikipedia.org/wiki/Small%20stellated%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the small stellated 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5/2,5,3,3}, it has three small stellated 120-cells around each face. It is dual to the pentagrammic-order 600-cell honeycomb.
It can be seen as a stellation of the 120-cell honeycomb, and is thus analogous to the three-dimensional small stellated dodecahedron {5/2,5} and four-dimensional small stellated 120-cell {5/2,5,3}. It has density 5.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
5-polytopes | Small stellated 120-cell honeycomb | Physics,Chemistry,Materials_science,Mathematics | 246 |
62,456,128 | https://en.wikipedia.org/wiki/Junctional%20adhesion%20molecule | A junctional adhesion molecule (JAM) is a protein that is a member of the immunoglobulin superfamily, and is expressed in a variety of different tissues, such as leukocytes, platelets, and epithelial and endothelial cells. JAMs have been shown to regulate signal complex assembly on their cytoplasmic and extracellular domains through interaction with PDZ domain-containing scaffolding proteins and with receptors on adjacent cells, respectively. JAMs adhere to adjacent cells through interactions with the integrins LFA-1 and Mac-1, which belong to the leukocyte β2 integrin family, and with α4β1, which belongs to the β1 family. JAMs influence leukocyte-endothelial cell interactions in many ways, primarily mediated by the integrins discussed above. Through their cytoplasmic domain, JAMs interact with scaffold proteins that contain a PDZ domain (a common protein interaction module that targets short amino acid sequences at the C-terminus of proteins) to form tight junctions in both epithelial and endothelial cells as the cell gains polarity.
Structure
JAMs are usually around 40 kDa in size. Based on crystallographic studies conducted with recombinant extracellular mouse JAMs (rsJAM) and human JAMs (hJAM), it has been shown that JAM consists of immunoglobulin-like V-set domain followed by a second immunoglobulin domain that are linked together by a short linker sequence. The linker makes extensive hydrogen bonds to both domains, and the side chain of one of the main linker residues, Leu128, is commonly embedded in a hydrophobic cleft between each immunoglobulin-like domain. Two JAM molecules contain N-terminal domains that react in a highly complementary fashion due to prolific ionic and hydrophobic interactions. These two molecules form U-shaped dimers and salt bridges are then formed by a R(V,I,L)E motif. This motif has been proven to be important in dimer formation and is common among different types of JAMs. It commonly consists of Arg58-Val59-Glu60 located on the N-terminus and can dissociate into monomers based on the conditions of the solution it is exposed to. This motif has been shown to be present in many common variants of JAMs, including rsJAM, hJAM, JAM-1, JAM-2, and JAM-3.
Types
Three major JAM molecules interact with various molecules and receptors within the body:
JAM-1
JAM-1 was the first of the junctional adhesion molecules to be discovered, and is located in the tight junctions of both epithelial and endothelial cells. JAM-1 interacts with cells in a homophilic manner in order to preserve the structure of the junction while moderating its permeability. It can also interact with receptors as a heterophilic structure by acting as a ligand for LFA-1 and facilitating leukocyte transmigration. JAM-1 also plays a significant role in many different cellular functions, including acting as both a reovirus receptor and a platelet receptor.
JAM-2
Like JAM-1, JAM-2 also is a member of the immunoglobulin superfamily. JAM-2 localization is moderated by serine phosphorylation at tight junctions as the molecule adheres to other tight junction proteins like PAR-3 and ZO-1. JAM-2 has been shown to interact with these proteins, primarily through the PDZ1 domain, and also through the PDZ3 domain. JAM-2 has also shown to act as a ligand for many immune cells, and plays a role in lymphocyte attraction to specific organs.
JAM-3
JAM-3 functions similarly to JAM-2 in that it is localized around the tight junctions of epithelial and endothelial cells, but it has been shown to be unable to adhere to leukocytes in the manner that other JAMs can. Mutations of JAM-3 introns have been shown to lead to brain hemorrhages and the development of cataracts. Like JAM-2, JAM-3 has been shown to associate with tight junction proteins like PAR-3 and ZO-1. JAM-3 has also been shown to interact with PARD3 (partitioning defective 3 homolog).
Function
JAMs serve many different functions within the cell:
Cell motility
JAMs play a critical role in the regulation of cell movement in multiple different cell types, such as epithelial, endothelial, leukocyte, and germ cells. JAM-1 regulates motility in epithelial cells by moderating expression of β1 integrin protein downstream of Rap1. JAM-1 has been shown to be able to cause cell adhesion, spreading and movement along β1 ligands, like collagen IV and fibronectin. JAM-1 also acts to moderate migration of vitronectin in endothelial cells. Vitronectin is a ligand for integrins αvβ3 and αvβ5, which exhibit selective cooperativity with bFGF and VEGF in the activation of the MAPK pathway. JAM-1 and JAM-3 allow leukocytes to migrate into connective tissue by freeing polymorphonuclear leukocytes from entrapment in endothelial cells and basement membranes. In the absence of JAM-1, these leukocytes cannot moderate β1 integrin endocytosis, and cannot be effectively expressed on the surface of the cell (which is essential for motility).
Cell polarity
JAM-1 and JAM-3 have significant roles in regulating cell polarity through their interactions with cell polarity proteins. JAM-1, JAM-2, and JAM-3 all interact with PAR-3 to influence cell polarity. PAR-3 is a significant factor in a cell's polarity-regulating complex, and regulates polarity in different cell types in many different organisms. All components of the PAR complex are required for tight junction formation between cells, but premature adherens junctions can form without PAR complex components being present. However, these junctions cannot efficiently develop into mature epithelial cell junctions. JAM-3 has also shown to affect cell polarity in spermatids by regulating the localization of cytosolic polarity.
Cell proliferation
In order to preserve homeostasis of adult tissue, aged cells must be replaced with new cells at varying frequency, depending on the organ. Organs that require high rates of cellular turnover include the small intestine and the colon. JAM-1 has been shown to regulate the proliferation of cells in the colon. In JAM-1-deficient mice, the number of proliferating cells in the colon greatly increases due to the increased proliferation of TA cells. JAM-1 acts to suppress cell proliferation by restricting Akt activity. Recent studies have also pointed to JAM-1 preserving the structural integrity of tissues more so than regulating cell number.
Role in physiological processes
JAMs play a significant role in many diverse physiological processes within the human body, including:
Tight junction formation
Tight junctions provide most of the barrier function present at epithelial cell surfaces. Both JAM-1 and JAM-3 localize at tight junctions, and JAM-3 localizes there exclusively. JAM-1 contributes to tight junction formation partly by mediating the localization of the Par-αPKC complex at adherens junctions during junction creation. Once the tight junction is formed, many JAM-1 proteins are present, many of them now phosphorylated at Ser285. JAM-1 also regulates the activity of many different claudins within different epithelial cells.
Angiogenesis
Angiogenesis is the generation of blood vessels from old blood vessels. Studies have shown that proteins found in tight junctions serve as intermediaries that moderate angiogenic signaling pathways. JAM-1 induces proliferation of endothelial cells, which begins the process of angiogenesis. An analysis of JAM-1 showed a correlation between JAM-1 activity and FGF2-induced angiogenesis in both cancerous proliferation or vascular repair.
Male fertility
JAM-3 has been shown to be a primary regulator of the development of spermatids as well as the rest of the male reproductive system. Within the Sertoli cells of the male reproductive system, JAM-3 interacts with JAM-2 to influence the polarity of both round and elongated spermatids. JAM-1 and JAM-2 are also present in and contribute to the polarity of the blood-testis barrier. Studies have also shown that inactivation of JAM-3 has been shown to significantly impede fertility by blocking male germ cell development and proliferation.
References
Proteins
Immunoglobulin superfamily | Junctional adhesion molecule | Chemistry | 1,858 |
78,813,714 | https://en.wikipedia.org/wiki/Titanium%20tetraazide | Titanium tetraazide is an inorganic chemical compound with the formula Ti(N3)4. It is a highly sensitive explosive, and has been prepared from titanium tetrafluoride and trimethylsilyl azide via the corresponding fluoride-azide exchange.
Properties
Titanium tetraazide has been characterized by vibrational spectroscopy and single-crystal X-ray diffraction. The compound was predicted in 2003 to be vibrationally stable and was expected to have a tetrahedral structure containing linear bond angles, in contrast to other metal azides, which generally feature bent bond angles. After its synthesis in 2004, however, the resulting titanium tetraazide did not exhibit linear bond angles, as the coordination numbers exceeded 4.
References
azide
titanium | Titanium tetraazide | Chemistry | 146 |
3,372,103 | https://en.wikipedia.org/wiki/Pneudraulics | Derived from the words hydraulics and pneumatics, pneudraulics is the term used when discussing systems on military aircraft that use hydraulic systems, pneumatic systems, or some combination of the two.
The term also refers to the science of fluids composed of both gas and liquid.
Pneudraulic systems
Landing gear
Flaps and slats
Rudder
Ailerons
Speed brake
Wheel brakes
Nose wheel steering
References
Fluid power | Pneudraulics | Physics | 82 |
358,364 | https://en.wikipedia.org/wiki/Dungeon | A dungeon is a room or cell in which prisoners are held, especially underground. Dungeons are generally associated with medieval castles, though their association with torture probably derives more from the Renaissance period. An oubliette (from the French oublier, meaning 'to forget') or bottle dungeon is a basement room which is accessible only from a hatch or hole (an angstloch) in a high ceiling.
Etymology
The word dungeon comes from French donjon (also spelled dongeon), which means "keep", the main tower of a castle. The first recorded instance of the word in English was near the beginning of the 14th century when it held the same meaning as donjon. The earlier meaning of "keep" is still in use for academics, although in popular culture, it has come to mean a cell or "oubliette". Though it is uncertain, both dungeon and donjon are thought to derive from the Middle Latin word dominus, meaning "lord" or "master".
In French, the term donjon still refers to a "keep", and the English term "dungeon" refers mostly to oubliette in French. Donjon is therefore a false friend to dungeon (although the game Dungeons & Dragons is titled Donjons et Dragons in its French editions).
An oubliette (same origin as the French oublier, meaning "to forget") is a basement room which is accessible only from a hatch or hole (an angstloch) in a high ceiling.
The use of "donjons" evolved over time, sometimes to include prison cells, which could explain why the meaning of "dungeon" in English evolved over time from being a prison within the tallest, most secure tower of the castle into meaning a cell, and by extension, in popular use, an oubliette or even a torture chamber.
The earliest use of oubliette in French dates back to 1374, but its earliest adoption in English is Walter Scott's Ivanhoe in 1819: "The place was utterly dark—the oubliette, as I suppose, of their accursed convent."
History
Few Norman keeps in English castles originally contained prisons, which were more common in Scotland. Imprisonment was not a usual punishment in the Middle Ages, with most prisoners awaiting an imminent trial, sentence or a political solution. Noble prisoners were not generally held in dungeons, but lived in some comfort in castle apartments. The Tower of London is famous for housing political prisoners, and Pontefract Castle at various times held Thomas of Lancaster (1322), Richard II (1400), Earl Rivers (1483), Richard Scrope, Archbishop of York (1405), James I of Scotland (1405–1424) and Charles, Duke of Orléans (1417–1430). Purpose-built prison chambers in castles became more common after the 12th century, when they were built into gatehouses or mural towers. Some castles had larger provision for prisoners, such as the prison tower at Caernarfon Castle.
Features
Although many real dungeons are simply a single plain room with a heavy door or with access only from a hatchway or trapdoor in the floor of the room above, the use of dungeons for torture, along with their association to common human fears of being trapped underground, have made dungeons a powerful metaphor in a variety of contexts. Dungeons, as a whole, have become associated with underground complexes of cells and torture chambers. As a result, the number of true dungeons in castles is often exaggerated to interest tourists. Many chambers described as dungeons or oubliettes were in fact water-cisterns or even latrines.
An example of what might be popularly termed an "oubliette" is the particularly claustrophobic cell in the dungeon of Warwick Castle's Caesar's Tower, in central England. The access hatch consists of an iron grille. Even turning around (or moving at all) would be nearly impossible in this tiny chamber.
However, the tiny chamber that is described as the oubliette, is in reality a short shaft which opens up into a larger chamber with a latrine shaft entering it from above. This suggests that the chamber is in fact a partially back-filled drain. The positioning of the supposed oubliette within the larger dungeon, situated in a small alcove, is typical of garderobe arrangement within medieval buildings. These factors perhaps point to this feature being the remnants of a latrine rather than a cell for holding prisoners. Footage of the inside of this chamber can be seen in episode 3 of the first series of Secrets of Great British Castles.
A "bottle dungeon" is sometimes simply another term for an oubliette. It has a narrow entrance at the top and sometimes the room below is even so narrow that it would be impossible to lie down but in other designs the actual cell is larger.
The identification of dungeons and rooms used to hold prisoners is not always a straightforward task. Alnwick Castle and Cockermouth Castle, both near England's border with Scotland, had chambers in their gatehouses which have often been interpreted as oubliettes. However, this has been challenged. These underground rooms (accessed by a door in the ceiling) were built without latrines, and since the gatehouses at Alnwick and Cockermouth provided accommodation, it is unlikely that the rooms would have been used to hold prisoners. An alternative explanation was proposed, suggesting that these were strong-rooms where valuables were stored. Folklore often has it that one mode of use for oubliettes in the Borders, which would obviate latrines anyway, was to throw attackers into the oubliette, close the latch, and leave them to die. It seems likely that this gruesome act was threatened more often than it was carried out; the real aim was to deter potential attackers through the notoriety of the rumor that such a fate could await anyone who dared to attack.
In fiction
Oubliettes and dungeons were a favorite topic of nineteenth century gothic novels or historical novels, where they appeared as symbols of hidden cruelty and tyrannical power. Usually found under medieval castles or abbeys, they were used by villainous characters to persecute blameless characters. In Alexandre Dumas's La Reine Margot, Catherine de Medici is portrayed gloating over a victim in the oubliettes of the Louvre.
Dungeons are common elements in modern fantasy literature and in related tabletop and video games. The most famous examples are the various Dungeons & Dragons media. In this context, the word "dungeon" has come to be used broadly to describe any labyrinthine complex (castle, cave system, etc.) rather than a prison cell or torture chamber specifically. A role-playing game involving dungeon exploration is called a dungeon crawl.
Near the beginning of Jack Vance's high-fantasy Lyonesse Trilogy (1983–1989), King Casmir of Lyonesse commits Prince Aillas of Troicinet, who he believes to be a vagabond, to an oubliette for the crime of having seduced his daughter. After some months, the resourceful prince fashions a ladder from the bones of earlier prisoners and the rope by which he had been lowered, and escapes.
In the musical fantasy film Labyrinth, director Jim Henson includes a scene in which the heroine Sarah is freed from an oubliette by the dwarf Hoggle, who defines it for her as "a place you put people... to forget about 'em!"
In the Thomas Harris novel The Silence of the Lambs, Clarice makes a descent into Gumb's basement dungeon labyrinth in the narrative's climactic scene, where the killer is described as having an oubliette.
In the Robert A. Heinlein novel Stranger in a Strange Land, the term "oubliette" is used to refer to a trash disposal much like the "memory holes" in Nineteen Eighty-Four.
See also
Immurement
Keep
References
Further reading
Castle architecture
Rooms
Imprisonment and detention | Dungeon | Engineering | 1,675 |
5,534,001 | https://en.wikipedia.org/wiki/Matching%20pursuit | Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary $D$. The basic idea is to approximately represent a signal $f$ from Hilbert space $H$ as a weighted sum of finitely many functions $g_{\gamma_n}$ (called atoms) taken from $D$. An approximation with $N$ atoms has the form
$$f(t) \approx \hat{f}_N(t) := \sum_{n=1}^{N} a_n g_{\gamma_n}(t),$$
where $g_{\gamma_n}$ is the $\gamma_n$-th column of the matrix $D$ and $a_n$ is the scalar weighting factor (amplitude) for the atom $g_{\gamma_n}$. Normally, not every atom in $D$ will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small,
where the residual after calculating $\gamma_N$ and $a_N$ is denoted by
$$R_{N+1} = f - \hat{f}_N.$$
If $R_n$ converges quickly to zero, then only a few atoms are needed to get a good approximation to $f$. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is
$$\min_x \|f - D x\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le N,$$
where $\|x\|_0$ is the $L_0$ pseudo-norm (i.e. the number of nonzero elements of $x$). In the previous notation, the nonzero entries of $x$ are $x_{\gamma_n} = a_n$. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used.
For comparison, consider the Fourier transform representation of a signal: this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals.
By taking an extremely redundant dictionary, we can look in it for atoms (functions) that best match a signal $f$.
The algorithm
If $D$ contains a large number of vectors, searching for the most sparse representation of $f$ is computationally unacceptable for practical applications.
In 1993, Mallat and Zhang proposed a greedy solution that they named "Matching Pursuit."
For any signal $f$ and any dictionary $D$, the algorithm iteratively generates a sorted list of atom indices and weighting scalars, which form the sub-optimal solution to the problem of sparse signal representation.
Input: signal $f(t)$, dictionary $D$ with normalized columns $g_i$.
Output: list of coefficients $(a_n)_{n=1}^{N}$ and indices $(\gamma_n)_{n=1}^{N}$ for the corresponding atoms.
Initialization:
$R_1 \leftarrow f(t)$;
$n \leftarrow 1$;
Repeat:
find $g_{\gamma_n} \in D$ with maximum inner product $|\langle R_n, g_{\gamma_n} \rangle|$;
$a_n \leftarrow \langle R_n, g_{\gamma_n} \rangle$;
$R_{n+1} \leftarrow R_n - a_n g_{\gamma_n}$;
$n \leftarrow n + 1$;
until the stop condition is met (for example: $\|R_n\| < \epsilon$);
return the pairs $(a_n, \gamma_n)$
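The following is a minimal NumPy sketch of the loop above. It is illustrative rather than a reference implementation: the function name `matching_pursuit`, the tolerance `eps`, and the iteration cap `max_iter` are choices made here, and the dictionary is assumed to have unit-norm columns as in the pseudocode.

```python
import numpy as np

def matching_pursuit(f, D, eps=1e-6, max_iter=100):
    """Greedily approximate f as a sparse combination of columns of D.

    f : (m,) signal; D : (m, k) dictionary whose columns have unit norm.
    Returns the coefficients a_n and column indices gamma_n in the
    order they were selected.
    """
    residual = np.asarray(f, dtype=float).copy()   # R_1 = f
    coeffs, indices = [], []
    for _ in range(max_iter):
        inner = D.T @ residual                     # <R_n, g_i> for every atom
        gamma = int(np.argmax(np.abs(inner)))      # best-matching atom
        a = inner[gamma]
        coeffs.append(a)
        indices.append(gamma)
        residual -= a * D[:, gamma]                # R_{n+1} = R_n - a_n g_gamma
        if np.linalg.norm(residual) < eps:         # stop condition
            break
    return np.array(coeffs), np.array(indices)
```

Because each update subtracts the orthogonal projection of the residual onto the selected atom, the residual norm can never increase, matching the monotonic error decrease noted under Properties below.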
In signal processing, the concept of matching pursuit is related to statistical projection pursuit, in which "interesting" projections are found; ones that deviate more from a normal distribution are considered to be more interesting.
Properties
The algorithm converges (i.e. $R_n \to 0$) for any $f$ that is in the space spanned by the dictionary.
The approximation error $\|R_n\|$ decreases monotonically.
As the residual at each step is orthogonal to the selected atom, the energy conservation equation is satisfied for each $N$:
$$\|f\|^2 = \|R_{N+1}\|^2 + \sum_{n=1}^{N} |a_n|^2.$$
In the case that the vectors in $D$ are orthonormal, rather than being redundant, MP is a form of principal component analysis.
Applications
Matching pursuit has been applied to signal, image and video coding, shape representation and recognition, 3D object coding, and in interdisciplinary applications like structural health monitoring. It has been shown that it performs better than DCT-based coding at low bit rates, in both coding efficiency and image quality.
The main problem with matching pursuit is the computational complexity of the encoder. In the basic version of an algorithm, the large dictionary needs to be searched at each iteration. Improvements include the use of approximate dictionary representations and suboptimal ways of choosing the best match at each iteration (atom extraction).
The matching pursuit algorithm is used in MP/SOFT, a method of simulating quantum dynamics.
MP is also used in dictionary learning. In this algorithm, atoms are learned from a database (in general, natural scenes such as usual images) and not chosen from generic dictionaries.
A more recent application of MP is its use in linear computation coding to speed up the computation of matrix-vector products.
Extensions
A popular extension of Matching Pursuit (MP) is its orthogonal version: Orthogonal Matching Pursuit (OMP). The main difference from MP is that after every step, all the coefficients extracted so far are updated, by computing the orthogonal projection of the signal onto the subspace spanned by the set of atoms selected so far. This can lead to results better than standard MP, but requires more computation. OMP was shown to have stability and performance guarantees under certain restricted isometry conditions. The incremental multi-parameter algorithm (IMP), published three years before MP, works in the same way as OMP.
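As a hedged sketch of the difference just described (again with illustrative names, and assuming the same unit-norm NumPy dictionary as in the earlier sketch), OMP replaces the single-coefficient update with a least-squares refit over all atoms selected so far:

```python
import numpy as np

def orthogonal_matching_pursuit(f, D, eps=1e-6, max_iter=100):
    """Like matching_pursuit, but re-solves for all coefficients each step."""
    f = np.asarray(f, dtype=float)
    residual = f.copy()
    indices = []
    coeffs = np.array([])
    for _ in range(max_iter):
        gamma = int(np.argmax(np.abs(D.T @ residual)))  # select new atom
        indices.append(gamma)
        subdict = D[:, indices]                         # atoms chosen so far
        # Orthogonal projection of f onto span(subdict) via least squares.
        coeffs, *_ = np.linalg.lstsq(subdict, f, rcond=None)
        residual = f - subdict @ coeffs
        if np.linalg.norm(residual) < eps:
            break
    return coeffs, np.array(indices)
```

The refit makes the residual orthogonal to every selected atom, not just the most recent one, which is the source of OMP's improved accuracy and its extra computational cost.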
Extensions such as Multichannel MP and Multichannel OMP allow one to process multicomponent signals. An obvious extension of Matching Pursuit is over multiple positions and scales, by augmenting the dictionary to be that of a wavelet basis. This can be done efficiently using the convolution operator without changing the core algorithm.
Matching pursuit is related to the field of compressed sensing and has been extended by researchers in that community. Notable extensions are Orthogonal Matching Pursuit (OMP), Stagewise OMP (StOMP), compressive sampling matching pursuit (CoSaMP), Generalized OMP (gOMP), and Multipath Matching Pursuit (MMP).
See also
CLEAN algorithm
Image processing
Least-squares spectral analysis
Principal component analysis (PCA)
Projection pursuit
Signal processing
Sparse approximation
Stepwise regression
References
Multivariate statistics
Signal processing | Matching pursuit | Technology,Engineering | 1,171 |
71,975,313 | https://en.wikipedia.org/wiki/ReRites | ReRites (also known as RERITES, ReadingRites, Big Data Poetry) is a literary work of "Human + A.I. poetry" by David Jhave Johnston that used neural network models trained to generate poetry which the author then edited. ReRites won the Robert Coover Award for a Work of Electronic Literature in 2022.
About the project
The ReRites project began as a daily rite of writing with a neural network, expanded into a series of performances from which video documentation has been published online, and concluded with a set of 12 books and an accompanying book of essays published by Anteism Books in 2019. In Electronic Literature, Scott Rettberg describes the early phases of the project in 2016, when it bore the preliminary name Big Data Poetry.
Jhave (the artist name that David Jhave Johnston goes by) describes the process of writing ReRites as a rite: "Every morning for 2 hours (normally 6:30–8:30am) I get up and edit the poetic output of a neural net. Deleting, weaving, conjugating, lineating, cohering. Re-writing. Re-wiring authorship: hybrid augmented enhanced evolutionary". There is video documentation of the writing process.
The human editing of the neural network's output is fundamental to this project, and Jhave gives examples of both unedited text extracts and his edited versions in publications about the project. Kyle Booten describes ReRites as "simultaneously dusty and outrageously verdant, monotonously sublime and speckled with beautiful and rare specimens".
Performances
ReRites was first shared with an audience through a series of performances where audience members and poets would participate in reading the automatically generated texts, which appeared on screen so fast that human readers could barely keep up. This has been described as allowing participants to "re-discover[..] the peculiar pleasures of being embodied", or, in Jhave's own words, as a space where human participants were "playing their wits and voices against an evocative infinite deep-learning muse".
The first performance was at Brown University's Interrupt Festival in 2019. It has been performed many times since, including at the Barbican Centre in London and Anteism Books.
Print publications
For a single year Jhave published one book of poetry from the ReRites project each month. These twelve volumes are accompanied by a book of essays, all published by Anteism Books. The accompanying essays provide critical responses to the project from poets and scholars including Allison Parrish, Johanna Drucker, Kyle Booten, Stephanie Strickland, John Cayley, Lai-Tze Fan, Nick Montfort, Mairéad Byrne, and Chris Funkhouser. Allison Parrish notes elsewhere that these paratexts to ReRites serve a legitimising function for a genre of poetry that is not yet institutionally acknowledged.
Technical details
Starting in 2016 under the name Big Data Poetry, Jhave generated poems using, in his own words, "neural network code (..) adapted from three corporate github-hosted machine-learning libraries: TensorFlow (Google), PyTorch (Facebook), and AWSD (SalesForce)". He explains that the "models were trained on a customised corpus of 600,000 lines of poetry ranging from the romantic epoch to the 20th century avant garde". Jhave maintains a GitHub repository with some of the code supporting ReRites.
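ReRites' actual code lives in the GitHub repository mentioned above; purely as an illustration of the general workflow the section describes (train a language model on a poetry corpus, sample raw lines, then edit by hand), a generic PyTorch sampling loop might look like the following. The model here is a placeholder with an assumed RNN-style `model(x, hidden)` interface; none of these names come from the project.

```python
import torch

def sample_draft(model, idx_to_char, char_to_idx, seed="the ", n_chars=400,
                 temperature=0.9):
    """Sample raw characters from a trained char-level model (placeholder API)."""
    model.eval()
    out = list(seed)
    hidden = None
    x = torch.tensor([[char_to_idx[c] for c in seed]])  # shape (1, len(seed))
    with torch.no_grad():
        for _ in range(n_chars):
            logits, hidden = model(x, hidden)           # assumed (1, t, vocab)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = int(torch.multinomial(probs, 1))
            out.append(idx_to_char[nxt])
            x = torch.tensor([[nxt]])                   # feed back last char
    return "".join(out)   # raw machine draft, to be edited by a human
```

The human editing stage that defines ReRites happens entirely after this point, on the sampled text.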
Reception
ReRites is described by John Cayley as "one of the most thorough and beautiful" poetic responses to machine learning. The work's influence on the field of electronic literature was acknowledged in 2022, when the work won the Electronic Literature Organization's Robert Coover Award for a Work of Electronic Literature. The jury described ReRites as particularly poignant in the time of the pandemic, as it was "a documentation of the performance of the private ritual of writing and the obsessive-compulsive need for writers to communicate — even when no one else is reading".
The question of authorship and voice in ReRites has been raised by several critics. Although generated poetry is an established genre in electronic literature, Cayley notes that unlike the combinatory poems created by authors like Nick Montfort, where the author explicitly defines which words and phrases will be recombined, ReRites has "not been directed by literary preconceptions inscribed in the program itself, but only by patterns and rhythms pre-existing in the corpora". In an essay for the Australian journal TEXT, David Thomas Henry Wright asks how to understand authorship and authority in ReRites: "Who or what is the authority of the work? The original data fed into the machine, that is not currently retrievable or discernible from the final works? The code that was taken and adapted for his purposes? Or Jhave, the human editor?" Wright concludes that Jhave is the only actor with any intentionality and therefore the authority of the work. The centrality of the human editor is also emphasised by other scholars. In a chapter analysing ReRites Malthe Stavning Erslev argues that the machine learning misrepresents the dataset it is trained on.
While ReRites uses 21st century neural networks, it has been compared to earlier literary traditions. Poet Victoria Stanton, who read at one of the ReRites performances, has compared ReRites to found poetry, while David Thomas Henry Wright compares it to the Oulipo movement and Mark Amerika to the cut-up technique. Scholars also position ReRites firmly within the long tradition of generative poetry both in electronic literature and print, stretching from the I Ching, Queneau's Cent Mille Milliards de Poemes and Nabokov's Pale Fire to computer-generated poems like Christopher Strachey's Love Letter Generator (1952) and more contemporary examples.
Jhave describes the process of working with the output from the neural network as "carving". In his book My Life as an Artificial Creative Intelligence, Mark Amerika writes that the "method of carving the digital outputs provided by the language model as part of a collaborative remix jam session with GPT-2, where the language artist and the language model play off each other’s unexpected outputs as if caught in a live postproduction set, is one I share with electronic literature composer David Jhave Johnston, whose AI poetry experiments precede my own investigations."
References
2010s electronic literature works
New media
21st-century poetry
Canadian poetry
Natural language processing
Generative literature
Canadian electronic literature works | ReRites | Technology | 1,358 |
65,627 | https://en.wikipedia.org/wiki/V%C3%B6lusp%C3%A1 | Völuspá (also Vǫluspá, Vǫlospá, or Vǫluspǫ́; Old Norse: 'Prophecy of the völva, a seeress') is the best-known poem of the Poetic Edda. It dates back to the tenth century and tells the story from Norse mythology of the creation of the world, its coming end, and its subsequent rebirth, as related to the audience by a völva addressing Odin. Her name is given twice as Heiðr. The poem is one of the most important primary sources for the study of Norse mythology. Parts of the poem appear in the Prose Edda, but the earliest known wholly-preserved version of the poem is in the Codex Regius and Hauksbók manuscripts.
Preservation
Many of the stanzas of Völuspá appear first in the Prose Edda (composed c. 1220, of which the oldest extant manuscript dates from the beginning of the fourteenth century), in which the stanzas are quoted or paraphrased. The full poem is found in the Icelandic Codex Regius manuscript (c. 1270) and in Haukr Erlendsson's Hauksbók codex (early fourteenth century), and the later thirteenth-century Codex Regius version is usually taken as the base for editions of the poem.
The order and number of the stanzas varies in the existing sources. Some editors and translators have further rearranged the material.
Synopsis
The poem starts with the völva requesting silence from "the sons of Heimdallr" (human beings) and she then asks Odin whether he wants her to recite ancient lore based on her memory. She says she remembers jötnar born in antiquity who reared her, nine worlds, and the tree of life (Mjötviður mær, or axis mundi).
The völva proceeds to recite a creation myth, mentioning Ymir and that the world was nothing but the magical void, Ginnungagap, until the sons of Burr lifted the earth out of the sea. The Æsir then established order in the cosmos by finding places for the sun, the moon, and the stars, thereby starting the cycle of day and night. A golden age ensued in which the Æsir had plenty of gold and they happily constructed temples and made tools. But then three mighty maidens came from Jötunheimar and the golden age came to an end. The Æsir then created the dwarfs, of whom Mótsognir and Durinn are the mightiest.
At this point ten of the poem's stanzas are considered complete. A section then appears in some versions that usually is considered an interpolation. It is entitled the "Dvergatal" ("Catalogue of Dwarfs") and it contains six stanzas with names of dwarves. The antiquity and role of this section in the poem is not clear and sometimes is omitted by editors and translators.
The poem continues with the creations of the first humans that are recounted along with a description of the world-tree, Yggdrasil. The völva recalls the burning of Gullveig that led to the first "folk" war, where Heiðr is a name assumed by Gullveig in connection with the war of the deities, and what occurred in the struggle between the Æsir and Vanir. She then recalls the time the goddess Freyja was given to the jötnar, which is commonly interpreted as a reference to the myth of the jötunn builder, as told in Gylfaginning 42.
The völva then reveals to Odin that she knows some of his own secrets and that he sacrificed an eye in pursuit of knowledge. She tells him that she knows where his eye is hidden and how he gave it up in exchange for knowledge. In several refrains she asks him whether he understands or whether he would like to hear more.
In the Codex Regius version, the völva goes on to describe the slaying of Baldr, best and fairest of the deities and the enmity of Loki, and of others. Then the völva prophesies the destruction of the deities where fire and flood overwhelm heaven and earth as the deities fight their final battles with their enemies. This is the "fate of the gods", Ragnarök. She describes the summons to battle, the deaths of many of the deities, including the death of Odin, who is slain by Fenrir, the great wolf. The god of thunder and sworn protector of the earth, Thor, faces the world serpent Jörmungandr and wins, but Thor is only able to take nine steps afterward before collapsing due to the serpent's venom. Víðarr faces Fenrir and kicks his jaw open before stabbing the wolf in the heart with his spear. The god Freyr fights the giant Surtr, who wields a fiery sword that shines brighter than the sun, and Freyr falls.
Finally, the völva prophesies that a beautiful reborn world will rise from the ashes of death and destruction where Baldr and Höðr will live again in a new world and where the earth sprouts abundance without sowing seed. The surviving Æsir reunite with Hœnir and meet together at the field of Iðavöllr, discussing Jörmungandr, great events of the past, and the runic alphabet. A final stanza describes the sudden appearance of the dragon Nidhogg, bearing corpses in his wings, after which the völva emerges from her trance.
Reception
Völuspá is one of the most discussed poems of the Poetic Edda and dates to the tenth century, the century before the Christianization of Iceland. In March 2018, a team of medieval historians and scientists from the University of Cambridge suggested that the poem, estimated to date from 961, was a roughly contemporary chronicle of the eruption of the volcano Eldgjá in 939. These researchers suggested that the dramatic imagery of the Eldgjá eruption was purposefully invoked in order to accelerate the Christianization of Iceland.
Some scholars hold that there are Christian influences in the text, emphasizing parallels with the Sibylline Prophecies. Henry Adams Bellows stated in 1936 that the author of Völuspá would have had knowledge of Christianity and infused it into the poem. Bellows dates the poem to the tenth century that was a transitional period between paganism and Christianity and the two religions would have co-existed before Christianity was declared the official religion of Iceland and after which the old paganism was tolerated if practiced in private. He suggests that this infusion allowed the pagan traditions to survive to an extent in Iceland, unlike in mainland Scandinavia. Several researchers have suggested that the entire Dvergatal section and references to the "mighty one who rules over all" are later insertions. Although some have identified the latter figure with Jesus, Bellows thought this was not necessarily the case.
In popular culture
J. R. R. Tolkien, a philologist familiar with the Völuspá, used names from the Dvergatal for the Dwarves and for the Wizard Gandalf in his 1937 fantasy novel The Hobbit.
Stanzas from Völuspá are performed in song form in the television series Vikings and used as battle chants.
The 2012 atmospheric black metal album Umskiptar by Burzum takes lyrics from Völuspá.
Various stanzas from Völuspá are used in the song “Twilight of the Gods” in the 2020 video game Assassin's Creed Valhalla.
References
Relevant literature
Bugge, Sophus (1867). Norræn fornkvæði. Christiania: Malling. Available online
Dronke, Ursula (1997). The Poetic Edda Volume II Mythological Poems. Oxford: Clarendon Press.
Eysteinn Björnsson (ed.). Völuspá. Available online
Gunnell, Terry and Annette Lassen, eds. 2013. The Nordic Apocalypse: Approaches to Völuspa and Nordic Days of Judgement. Brepols Publishers. 240 pages.
McKinnell, John (2008). "Völuspá and the Feast of Easter," Alvíssmál 12:3–28. (PDF)
Sigurður Nordal (1952). Völuspá. Reykjavík: Helgafell.
Ólason, Vésteinn. "Vǫluspá and time." In The Nordic Apocalypse: Approaches to Vǫluspá and Nordic Days of Judgement, pp. 25–44. 2013.
Thorpe, Benjamin (tr.) (1866). Edda Sæmundar Hinns Froða: The Edda Of Sæmund The Learned. (2 vols.) London: Trübner & Co. Norroena Society edition available online at Google Books
External links
MyNDIR (My Norse Digital Image Repository) Illustrations of Völuspá from manuscripts and early print books
English translations
Voluspo Translation and commentary by Henry Adams Bellows
Völuspâ Translation by Benjamin Thorpe
Old Norse editions
Völuspá Sophus Bugge edition and commentary with manuscript texts
Völuspá Eysteinn Björnsson edition with manuscript texts
Völuspá Guðni Jónsson edition
13th-century poems
Eschatology in Norse mythology
Eddic poetry
Old Norse philosophy
Creation myths
Ymir | Völuspá | Astronomy | 1,891 |
23,693,706 | https://en.wikipedia.org/wiki/Remote%20dispensing | Remote dispensing is used in health care environments to describe the use of automated systems to dispense (package and label) prescription medications without an on-site pharmacist. This practice is most common in long-term care facilities and correctional institutions that do not find it practical to operate a full-service in-house pharmacy.
Remote dispensing can also be used to describe pharmacist-controlled remote prescription dispensing units, which connect patients to a remotely located pharmacist over a video interface to receive counseling and medication dispensing. Because these units are pharmacist-controlled, they can be located outside of typical healthcare settings, such as employer sites, universities and remote locations, thus offering pharmacy services where they have never existed before.
A typical remote-dispensing system
A typical remote-dispensing system is monitored remotely by a central pharmacy and includes secure, automated medication dispensing hardware that is capable of producing patient-specific packages of medications on demand. The secure medication dispensing unit is placed on-site at the care facility or non-healthcare locations (such as Universities, workplaces and retail locations) and filled with pharmacist-checked medication canisters.
When patient medications are needed, the orders are submitted to a pharmacist at the central pharmacy, the pharmacist reviews the orders and, when approved, the medications are subsequently dispensed from the on-site dispensing unit at the remote care facility. Medications come out of the dispensing machine printed with the patient’s name, medication name, and other relevant information.
If the medication stock in a canister is low, the central pharmacy is alerted to fill a canister from their bulk stock. New canisters are filled, checked by the pharmacist, security sealed, and delivered to the remote care facility.
Perceived advantages
In theory, remote dispensing provides access to dispensing services 24 hours a day in locations previously unable to support full pharmacy operations. Advocates for remote dispensing additionally claim that the service provides focused, uninterrupted and personalized time with a pharmacist, as the system manages the physical dispensing process while the pharmacist simply oversees it. Certain prescription dispensing units can carry over 2000 different medications tailored to the prescribing habits of local healthcare providers. Furthermore, remote dispensing terminal manufacturers state that this technology can facilitate patient continuity of care between prescriber and pharmacist.
Disadvantages
While some may claim that travel time to pharmacies is reduced, this point has been challenged by an Ontario study published in the journal Healthcare Policy, which found that over 90% of Ontarians live within a 5 km radius of a pharmacy.
Remote dispensing also places a physical barrier between the patient and pharmacist, limiting the pharmacist's ability to detect a patient's nonverbal cues. A patient with alcohol on his or her breath would go undetected via remote dispensing, increasing the risk for dangerous interactions with drugs such as tranquilizers, sleeping pills, narcotics, and warfarin to name a few. This problem may be amplified through telecommunication service disruptions, which were reported in previous studies examining the utility of remote dispensing technology.
Remote dispensing has the potential to undermine the services offered by physically present pharmacists. Hands-on patient training on inhalers and glucose meters is not feasible with remote dispensing, and administration of injections is impossible without a physically present pharmacist. Other cognitive services such as in-depth medication consultations are also impractical to conduct over such audiovisual technology, which do not provide acoustic privacy for the patient, nor do they meet mandatory criteria for conducting such services that require an “in-person discussion” to occur.
Furthermore, the variety of drugs offered by remote dispensing is limited in comparison to traditional pharmacies, which in the province of Ontario are required to maintain a dispensary of at least 9.3 m2 in area, far greater than that of any remote dispensing machine.
See also
Telepharmacy
Telemedicine
References
Automated teller machines
Pharmacies | Remote dispensing | Engineering | 892 |
8,591,462 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Crater | This is the list of notable stars in the constellation Crater, sorted by decreasing brightness.
References
List
Crater | List of stars in Crater | Astronomy | 22 |
9,923,134 | https://en.wikipedia.org/wiki/Chrome%20orange | Chrome orange is a mixed oxide with the chemical formula Pb2CrO5. It can be made by treating a lead(II) salt with an alkaline solution of a chromate or by treating chrome yellow (PbCrO4) with strongly basic solution.
Synthesis and nanoparticles
Pb2CrO5 can be synthesized with a gas-liquid precipitation process. Changing the pH controls whether PbCrO4 or Pb2CrO5 is created.
Orthorhombic Pb2CrO5 nanocrystals can be selectively synthesized by a facile room-temperature solution method.
Using a microwave-assisted ionic liquid (MAIL) method, bundle- and rod-like nanocrystals of Pb2CrO5 have been formed. The bundles look like sheaves of straw, bound in the middle. In basic solution, single-crystalline Pb2CrO5 could be formed by heating lead acetate and potassium dichromate under microwave radiation for only 10 minutes at 90 °C. The MAIL process is simple, fast, and does not employ surfactants. The presence of hydroxide changes the phase that is formed: using NaOH, monoclinic Pb2CrO5 is formed. The bundle- and rod-shaped structures are sensitive to electron beam irradiation, which turns them into many small particles.
Properties
The Gibbs free energy of formation of Pb2CrO5 was determined in 2010 and is given as
$$\Delta_f G^\circ_m(\mathrm{Pb_2CrO_5,\,s}) \pm 0.30\ /\ (\mathrm{kJ\,mol^{-1}}) = -1161.3 + 0.4059\,(T/\mathrm{K}) \qquad (859 \le T/\mathrm{K} \le 1021).$$
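As a quick worked example of reading this fit (the choice of T = 1000 K is arbitrary but inside the stated validity range):
$$\Delta_f G^\circ_m(\mathrm{Pb_2CrO_5,\,s}) \approx -1161.3 + 0.4059 \times 1000 = -755.4\ \mathrm{kJ\,mol^{-1}} \qquad (T = 1000\ \mathrm{K}).$$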
Visible light activity up to 550 nanometers has been recorded for Pb2CrO5.
Pigment synthesis
In a catalog published c. 1835, the Winsor and Newton paint company identified ten synthetic pathways for producing chrome orange, also called deep yellow. Chrome orange is made of PbCrO4 mixed with basic lead chromate (Pb2CrO5). It has been described as a "yellowish red or sometimes a beautiful deep red" in alkaline conditions. A deep yellow can be created using PbCrO4 and lead sulfate. The ten synthetic methods for preparing deep chrome yellow (that made with Pb2CrO5) require a chromate source, a basic lead source, additives, and a sulfate source. The overall synthesis, at a pH of approximately seven, is CrO42− + H2SO4 + Pb(Ac)2·2Pb(OH)2 → PbCrO4 + Pb2CrO5.
Controlling the pH was Winsor and Newton's method for creating pigments from the pale yellow to the deep chrome orange. The resulting product has a high stability to light, which is always coveted by artists and collectors.
History
The natural mineral crocoite was discovered in 1797 by Louis Vauquelin and chrome orange was synthesized as a pigment for the first time in 1809. Pb2CrO5 is found in mineral form as phoenicochroite, which is a monoclinic, red, translucent mineral found in various places across the world, including Russia, the US, and Chile.
Use as a pigment
Chrome orange can range in color from light to deep orange and is no longer in production as a pigment. It has also been known as Derby red, Persian red, and Victoria red. It was first recorded as a pigment in 1809 and suited some impressionist painters in the nineteenth century; it provides the yellow-orange pigment of the boat in Renoir's 1879 painting The Seine at Asnières (The Skiff) at the National Gallery, London. Chrome orange was also used extensively in Frederic Leighton's Flaming June (1895; Museo de Arte de Ponce).
See also
List of inorganic pigments
References
Further reading
Kühn, H. and Curran, M., Chrome Yellow and Other Chromate Pigments, in Artists’ Pigments. A Handbook of Their History and Characteristics, Vol. 1, L. Feller, Ed., Cambridge University Press, London 1986, p. 208 – 211.
Chrome Orange at ColourLex
Inorganic pigments
Lead(II) compounds
Chromates
Shades of orange | Chrome orange | Chemistry | 891 |
58,022,488 | https://en.wikipedia.org/wiki/Blakiston%27s%20Line | The Blakiston Line or Blakiston's Line is a faunal boundary line drawn between two of the four largest islands of Japan: Hokkaidō in the north and Honshū, south of it. It can be compared with faunal boundary lines like the Wallace Line. Certain animal species can only be found north of Blakiston's Line, while certain other species can only be found south of it.
Thomas Blakiston, who lived in Japan from 1861 to 1884 and who spent much of that time in Hakodate, Hokkaido, was the first person to notice that animals in Hokkaidō, Japan's northern island, were related to northern Asian species, whereas those on Honshū to the south were related to those from southern Asia. The Tsugaru Strait between the two islands was therefore established as a zoogeographical boundary, and became known as Blakiston's Line. This finding was first published to the Asiatic Society of Japan in a paper of 14 February 1883, named Zoological Indications of Ancient Connection of the Japan islands with the Continent.
Explanations and hypotheses on the existence of Blakiston's Line
The difference in the fauna can probably be attributed to land bridges that may have existed in the past. While Hokkaido may have had land bridges to northern Asia, via Sakhalin and the Kuril Islands, the other islands of Japan, like Honshu, Shikoku and Kyushu, may have been connected to the Asian continent via the Korean Peninsula. Scientists do not agree on when these land bridges existed: it may have been between 26,000 and 18,000 years ago, or it may have been later than that. Sakhalin, the island just north of Japan, and Hokkaido may even have been connected to the mainland as recently as 10,000 years ago or less. Apart from these former land bridges, more factors play a role in why the fauna differs north and south of the line:
The Tsugaru Strait is relatively deep, the maximum depth is 449 m.
The narrowest part of the Tsugaru Strait is 12.1 miles (19.5 km).
Currents in the Tsugaru Strait are strong; tidal currents coincide with ocean currents
The climate in the north is generally far colder than that in the south.
The influence of the human population in the north is different from that in the south, the north is more sparsely populated and there are more Ainu living in the north.
Species north and south of Blakiston's Line
Besides birds, animals that are of different origins north and south of the Blakiston Line include wolves, bears and chipmunks. Monkeys do not even live north of Blakiston's Line. The following table gives some examples of animal species involved:
Species that can only be found on the Ryukyu Islands are excluded from this table, even though their habitat is clearly south of Blakiston's Line, as other factors account for their distribution. Besides animal species that can only be found either north or south of Blakiston's Line, there are very many that can be found on either side of the line, in part because of human involvement. Examples of the latter are the Japanese Shorthorn, a breed of small Japanese beef cattle that is distributed in northern Honshu and also in Hokkaido, and the Japanese weasel, which was introduced to Hokkaido by human intervention.
It has also been studied whether or not this biogeographic boundary applies to far smaller organisms like soil microbes.
Possible trans-Blakiston's Line movements by land animals
Contrary to the major hypothesis that terrestrial animals could not move across Blakiston's Line, excavations of fossils of Palaeoloxodon naumanni and Sinomegaceros yabei from Hokkaidō, and of moose and Ussuri brown bear from Honshū, suggest that terrestrial animals may have crossed the strait periodically.
Other faunal boundary lines in Japan
Apart from Blakiston's Line, other faunal boundary lines have been proposed for Japan, like Watase's line (Tokara Straits): for mammals, reptiles, amphibians and spiders, Hatta's line (Soya line): for reptiles, amphibians and freshwater invertebrates, Hachisuka's line for birds and Miyake's line for insects.
The role of Thomas Blakiston
There has been speculation about why it was Blakiston who discovered this faunal boundary line and why no one before him had done so. Andrew Davis, who spent four years as a professor at Hokkaido University, argued that this may have been because of Blakiston's unusual position in Japanese society as a European.
Blakiston spent much time researching bird species in Japan. At that time, Japanese ornithology was in its infancy; in 1886 Leonhard Stejneger remarked: "Our knowledge of Japanese ornithology is only fragmentary." In the years after his stay in Japan, Blakiston published on birds in Japan in general and on the water birds of Japan in particular. Bird species like Blakiston's Fish Owl (Bubo blakistoni) and Regulus regulus japonensis Blakiston have been named after him. In Hakodate, Blakiston assembled a large collection of birds, which is currently located at the museum of Hakodate. The distributions of many bird species observe the Blakiston line, since many birds do not cross even the shortest stretches of open ocean water.
For his discovery of Blakiston's Line, a monument was erected in his honor on Mount Hakodate.
References
Biogeography
Geography of Japan
1883 establishments in Japan
1883 introductions | Blakiston's Line | Biology | 1,185 |
2,431,323 | https://en.wikipedia.org/wiki/Mudflow | A mudflow, also known as mudslide or mud flow, is a form of mass wasting involving fast-moving flow of debris and dirt that has become liquified by the addition of water. Such flows can move at speeds ranging from 3 meters/minute to 5 meters/second. Mudflows contain a significant proportion of clay, which makes them more fluid than debris flows, allowing them to travel farther and across lower slope angles. Both types of flow are generally mixtures of particles with a wide range of sizes, which typically become sorted by size upon deposition.
Mudflows are often called mudslides, a term applied indiscriminately by the mass media to a variety of mass wasting events. Mudflows often start as slides, becoming flows as water is entrained along the flow path; such events are often called mud failures.
Other types of mudflows include lahars (involving fine-grained pyroclastic deposits on the flanks of volcanoes) and jökulhlaups (outbursts from under glaciers or icecaps).
A statutory definition of "flood-related mudslide" appears in the United States' National Flood Insurance Act of 1968, as amended, codified at 42 USC Sections 4001 and following.
Triggering of mudflows
Heavy rainfall, snowmelt, or high levels of groundwater flowing through cracked bedrock may trigger a movement of soil or sediments in landslides that continue as mudflows. Floods and debris flows may also occur when strong rains on hill or mountain slopes cause extensive erosion and/or mobilize loose sediment that is located in steep mountain channels. The 2006 Sidoarjo mud flow may have been caused by rogue drilling.
The point where a muddy material begins to flow depends on its grain size, the water content, and the slope of the topography. Fine grained material like mud or sand can be mobilized by shallower flows than a coarse sediment or a debris flow. Higher water content (higher precipitation/overland flow) also increases the potential to initiate a mudflow.
After a mudflow forms, coarser sediment may be picked up by the flow. Coarser sediment picked up by the flow often forms the front of a mudflow surge and is pushed by finer sediment and water that pools up behind the coarse-grained moving mudflow-front. Mudflows may contain multiple surges of material as the flow scours channels and destabilizes adjacent hillslopes (potentially nucleating new mudflows). Mudflows have mobilized boulders 1–10 m across in mountain settings.
Some broad mudflows are rather viscous and therefore slow; others begin very quickly and continue like an avalanche. They are composed of at least 50% silt and clay-sized materials and up to 30% water. Because mudflows mobilize a significant amount of sediment, they have higher flow heights than a clear water flood for the same water discharge; the sediment also increases granular friction within the flow, which further raises the flow depth. The difficulty of predicting the amount and type of sediment that will be included in a mudflow makes it much more challenging to forecast and engineer structures to protect against mudflow hazards compared to clear water flood hazards.
Mudflows are common even in the hills around Los Angeles, California, where they have destroyed many homes built on hillsides without sufficient support after fires destroy vegetation holding the land.
On 14 December 1999 in Vargas, Venezuela, a mudflow known as The Vargas tragedy significantly altered more than 60 kilometers (37 mi) of the coastline. It was triggered by heavy rainfall and caused estimated damages of US$1.79 to US$3.5 billion, killed between 10,000 and 30,000 people, forced 85,000 people to evacuate, and led to the complete collapse of the state's infrastructure.
Mudflows and landslides
Landslide is a more general term than mudflow. It refers to the gravity-driven failure and subsequent downslope movement of soil, rock, or other debris. The term incorporates earth slides, rock falls, flows, and mudslides, amongst other categories of hillslope mass movements, which do not have to be as fluid as a mudflow.
Mudflows can be caused by unusually heavy rains or a sudden thaw. They consist mainly of mud and water plus fragments of rock and other debris, so they often behave like floods. They can move houses off their foundations or bury a place within minutes because of incredibly strong currents.
Mudflow geography
When a mudflow occurs it is given four named areas, the 'main scarp', in bigger mudflows the 'upper and lower shelves' and the 'toe'. The main scarp will be the original area of incidence, the toe is the last affected area(s). The upper and lower shelves are located wherever there is a large dip (due to mountain or natural drop) in the mudflow's path. A mudflow can have many shelves.
Largest recorded mudflow
The world's largest historic subaerial (on land) landslide occurred during the 1980 eruption of Mount St. Helens, a volcano in the Cascade Mountain Range in the State of Washington, US. The volume of material displaced was . Directly in the path of the huge mudflow was Spirit Lake. Normally a chilly , the lahar instantly raised the temperature to near . Today the bottom of Spirit Lake is above the original surface, and it has two and a half times more surface area than it did before the eruption.
The largest known of all prehistoric landslides was an enormous submarine landslide that disintegrated 60,000 years ago and produced the longest flow of sand and mud yet documented on Earth. The massive submarine flow travelled – the distance from London to Rome.
By volume, the largest submarine landslide (the Agulhas slide off South Africa) occurred approximately 2.6 million years ago. The volume of the slide was .
Areas at risk
The areas most generally recognized as being at risk of a dangerous mudflow are:
Areas where wildfires or human modification of the land have destroyed vegetation
Areas where landslides have occurred before
Steep slopes and areas at the bottom of slopes or canyons
Slopes that have been altered for the construction of buildings and roads
Channels along streams and rivers
Areas where surface runoff is directed
See also
Quick clay, also known as Leda clay
Osceola Mudflow, occurred on Mt. Rainier's White River drainage.
Citations
References
Further reading
External links
United States Department of Homeland Security: Facts about Mudflows/Landslides.
Weather hazards
Geology articles needing expert attention
Landslide types
Natural disasters | Mudflow | Physics | 1,389 |
44,638,758 | https://en.wikipedia.org/wiki/Identity%20type | In type theory, the identity type represents the concept of equality. It is also known as propositional equality to differentiate it from "judgemental equality". Equality in type theory is a complex topic and has been the subject of research, such as the field of homotopy type theory.
Comparison with Judgemental Equality
The identity type is one of 2 different notions of equality in type theory. The more fundamental notion is "judgemental equality", which is a judgement.
Beyond Judgemental Equality
The identity type can do more than what judgemental equality can do. It can be used to show "for all x : nat, x + 1 = 1 + x", which is impossible to show with judgemental equality. This is accomplished by using the eliminator (or "recursor") of the natural numbers, known as "R".
The "R" function lets us define a new function on the natural numbers. That new function "P" is defined to be "(λ x:nat . x+1 = 1+x)". The other arguments act like the parts of an induction proof. The argument "PZ : P 0" becomes the base case "0+1 = 1+0", which is the term "refl nat 1". The argument "PS : P n → P (S n)" becomes the inductive case. Essentially, this says that when "x+1 = 1+x" has "x" replaced with a canonical value, the expression will be the same as "refl nat (x+1)".
Versions of the Identity Type
The identity type is complex and is the subject of research in type theory. While every version agrees on the constructor "refl", their properties and eliminator functions differ dramatically.
For "extensional" versions, any identity type can be converted into a judgemental equality. A computational version is known as "Axiom K" due to Thomas Streicher. These are not very popular lately.
Complexity of Identity Type
Martin Hofmann and Thomas Streicher refuted the idea that type theory requires all terms of the identity type to be the same.
Popular branches of research into the identity type are homotopy type theory and its cubical variant, cubical type theory.
References
Type theory | Identity type | Mathematics | 460 |
2,784,497 | https://en.wikipedia.org/wiki/Evolutionarily%20significant%20unit | An evolutionarily significant unit (ESU) is a population of organisms that is considered distinct for purposes of conservation. Delineating ESUs is important when considering conservation action. An ESU is not always equivalent to a biological species but can be also a subspecies, variety, geographic race, or population. In marine animals the term "stock" is often used as well.
Definition
Definitions of an ESU generally include at least one of the following criteria:
Current geographic separation,
Genetic differentiation at neutral markers among related ESUs caused by past restriction of gene flow, or
Locally adapted phenotypic traits caused by differences in selection.
Criterion 2 considers the gene flow between populations, measured by FST. A high degree of differentiation between two populations among genes that provide no adaptive advantage to either population (known as neutral markers) implies a lack of gene flow, showing that random drift has occurred in isolation from other populations. Very few migrants per generation are needed to prevent strong differentiation of neutral markers. Even a single migrant per generation may be enough for neutral markers to show gene flow between populations, making it difficult to differentiate the populations through neutral markers.
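As a worked illustration of criterion 2, the sketch below (hypothetical helper code, not from any cited study) computes Wright's fixation index FST for a single biallelic neutral marker from allele frequencies in two equally sized populations, using the common estimator FST = (HT − HS)/HT:

```python
def expected_het(p: float) -> float:
    """Expected heterozygosity 2p(1-p) for a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

def fst(p1: float, p2: float) -> float:
    """F_ST between two equally sized populations at one neutral locus."""
    h_s = (expected_het(p1) + expected_het(p2)) / 2  # mean within-population
    h_t = expected_het((p1 + p2) / 2)                # pooled total population
    return (h_t - h_s) / h_t

print(fst(0.9, 0.1))    # ~0.64: strong differentiation, restricted gene flow
print(fst(0.55, 0.45))  # ~0.01: weak differentiation, ongoing gene flow
```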
Criterion 3 does not consider neutral genetic markers, instead looking at locally adapted traits of the population. Local adaptations may be present even with some gene flow from other populations, and even when there is little differentiation at neutral markers among ESUs. Reciprocal transplantation experiments are necessary to test for genetic differentiation for phenotypic traits, and differences in selection gradients across habitats. Such experiments are generally more difficult than the fixation index tests of criterion 2, and may be impossible for very rare or endangered species.
For example, Cryan's buckmoth (Hemileuca maia) feeds only on the herb Menyanthes trifoliata, commonly known as buckbean, and while indistinguishable morphologically from related buckmoths, and not differentiated at the genetic markers tested, the moth is highly adapted to its host plant, having 100% survivorship on Menyanthes, while close genetic relatives all died when reared on the plant. In this case gene flow was sufficient to reduce differentiation at neutral markers, but did not prevent local host adaptation.
Both criteria 2 and 3 have the problem that there is no clear dichotomy between ESU and not-ESU, as genetic differentiation between populations forms a continuum; this has prompted arguments for considering both genetic and ecological processes in identifying ESUs. Because the different approaches to designating ESUs each have their benefits, and the need and form of management prescriptions may vary across contexts, some support an "adaptive" approach to identification of ESUs, for instance suggesting consideration of facets from numerous designation methods.
United States Endangered Species Act
For the purposes of the Endangered Species Act a "species" is defined to include "any distinct population segment of any species of vertebrate fish or wildlife which interbreeds when mature." However, the act does not define what constitutes a "distinct population segment", but this is generally considered to be synonymous with an evolutionarily significant unit, so that it must:
be substantially reproductively isolated from other conspecific populations, and
represent an important component in the evolutionary legacy of the biological species
Other equivalent terms
The equivalent term used by COSEWIC is "Wildlife Species", which is used to refer to biological species, subspecies, varieties, or geographically or genetically distinct populations of organisms.
See also
The species problem
FSTAT (software)
References
Conservation biology
Environmental law in the United States | Evolutionarily significant unit | Biology | 716 |
606,000 | https://en.wikipedia.org/wiki/Papillomaviridae | Papillomaviridae is a family of non-enveloped DNA viruses whose members are known as papillomaviruses. Several hundred species of papillomaviruses, traditionally referred to as "types", have been identified infecting all carefully inspected mammals, but also other vertebrates such as birds, snakes, turtles and fish. Infection by most papillomavirus types, depending on the type, is either asymptomatic (e.g. most Beta-PVs) or causes small benign tumors, known as papillomas or warts (e.g. human papillomavirus 1, HPV6 or HPV11). Papillomas caused by some types, however, such as human papillomaviruses 16 and 18, carry a risk of becoming cancerous.
Papillomaviruses are usually considered as highly host- and tissue-tropic, and are thought to rarely be transmitted between species. Papillomaviruses replicate exclusively in the basal layer of the body surface tissues. All known papillomavirus types infect a particular body surface, typically the skin or mucosal epithelium of the genitals, anus, mouth, or airways. For example, human papillomavirus (HPV) type 1 tends to infect the soles of the feet, and HPV type 2 the palms of the hands, where they may cause warts. Additionally, there are descriptions of the presence of papillomavirus DNA in the blood and in the peripheral blood mononuclear cells.
Papillomaviruses were first identified in the early 20th century, when it was shown that skin warts, or papillomas, could be transmitted between individuals by a filterable infectious agent. In 1935 Francis Peyton Rous, who had previously demonstrated the existence of a cancer-causing sarcoma virus in chickens, went on to show that a papillomavirus could cause skin cancer in infected rabbits. This was the first demonstration that a virus could cause cancer in mammals.
Taxonomy of papillomaviruses
There are over 100 species of papillomavirus recognised, though the ICTV officially recognizes a smaller number, categorized into 53 genera, as of 2019. All papillomaviruses (PVs) have similar genomic organizations, and any pair of PVs contains at least five homologous genes, although the nucleotide sequence may diverge by more than 50%. Phylogenetic algorithms that permit the comparison of homologies led to phylogenetic trees that have a similar topology, independent of the gene analyzed.
Phylogenetic studies strongly suggest that PVs normally evolve together with their mammalian and bird host species, but adaptive radiations, occasional zoonotic events and recombinations may also impact their diversification. Their basic genomic organization appears maintained for a period exceeding 100 million years, and these sequence comparisons have laid the foundation for a PV taxonomy, which is now officially recognized by the International Committee on Taxonomy of Viruses. All PVs form the family Papillomaviridae, which is distinct from the Polyomaviridae thus eliminating the term Papovaviridae. Major branches of the phylogenetic tree of PVs are considered genera, which are identified by Greek letters. Minor branches are considered species and unite PV types that are genomically distinct without exhibiting known biological differences. This new taxonomic system does not affect the traditional identification and characterization of PV "types" and their independent isolates with minor genomic differences, referred to as "subtypes" and "variants", all of which are taxa below the level of "species". Additionally, phylogenetic groupings at higher taxonomic level have been proposed.
This classification may need revision in the light of the existence of papilloma–polyoma virus recombinants. Additional species have also been described. Sparus aurata papillomavirus 1 has been isolated from fish.
Human papillomaviruses
Over 170 human papillomavirus types have been completely sequenced. They have been divided into 5 genera: Alphapapillomavirus, Betapapillomavirus, Gammapapillomavirus, Mupapillomavirus and Nupapillomavirus. At least 200 additional viruses have been identified that await sequencing and classification.
Animal papillomaviruses
Individual papillomavirus types tend to be highly adapted to replication in a single animal species. In one study, researchers swabbed the forehead skin of a variety of zoo animals and used PCR to amplify any papillomavirus DNA that might be present. Although a wide variety of papillomavirus sequences were identified in the study, the authors found little evidence for inter-species transmission. One zookeeper was found to be transiently positive for a chimpanzee-specific papillomavirus sequence. However, the authors note that the chimpanzee-specific papillomavirus sequence could have been the result of surface contamination of the zookeeper's skin, as opposed to productive infection.
Cottontail rabbit papillomavirus (CRPV) can cause protuberant warts in its native host, the North American rabbit genus Sylvilagus. These horn-like warts may be the original basis for the urban legends of the American antlered rabbit the Jackalope and European Wolpertinger. European domestic rabbits (genus Oryctolagus) can be transiently infected with CRPV in a laboratory setting. However, since European domestic rabbits do not produce infectious progeny virus, they are considered an incidental or "dead-end" host for CRPV.
Inter-species transmission has also been documented for bovine papillomavirus (BPV) type 1. In its natural host (cattle), BPV-1 induces large fibrous skin warts. BPV-1 infection of horses, which are an incidental host for the virus, can lead to the development of benign tumors known as sarcoids. The agricultural significance of BPV-1 spurred a successful effort to develop a vaccine against the virus.
A few reports have identified papillomaviruses in smaller rodents, such as Syrian hamsters, the African multimammate rat and the Eurasian harvest mouse. However, there are no papillomaviruses known to be capable of infecting laboratory mice. The lack of a tractable mouse model for papillomavirus infection has been a major limitation for laboratory investigation of papillomaviruses.
Four papillomaviruses are known to infect birds: Fringilla coelebs papillomavirus 1, Francolinus leucoscepus papillomavirus 1, Psittacus erithacus papillomavirus 1 and Pygoscelis adeliae papillomavirus 1. All these species have a gene (E9) of unknown function, suggesting a common origin.
Evolution
The evolution of papillomaviruses is thought to be slow compared to many other virus types, but there are no experimental measurements currently available. This is probably because the papillomavirus genome is composed of genetically stable double-stranded DNA that is replicated with high fidelity by the host cell's DNA replication machinery.
It is believed that papillomaviruses generally co-evolve with a particular species of host animal over many years, although there is strong evidence against the hypothesis of coevolution. In a particularly speedy example, HPV-16 has evolved slightly as human populations have expanded across the globe and now varies in different geographic regions in a way that probably reflects the history of human migration. Cutaneotropic HPV types are occasionally exchanged between family members throughout life, but other donors must also be considered in viral transmission.
Other HPV types, such as HPV-13, vary relatively little in different human populations. In fact, the sequence of HPV-13 closely resembles a papillomavirus of bonobos (also known as pygmy chimpanzees). It is not clear whether this similarity is due to recent transmission between species or because HPV-13 has simply changed very little in the six or so million years since humans and bonobos diverged.
The most recent common ancestor of this group of viruses has been estimated to have existed .
There are five main genera infecting humans (Alpha, Beta, Gamma, Mu and Nu). The most recent common ancestor of these genera evolved -. The most recent ancestor of the gamma genus was estimated to have evolved between and .
Structure
Papillomaviruses are non-enveloped, meaning that the outer shell or capsid of the virus is not covered by a lipid membrane. A single viral protein, known as L1, is necessary and sufficient for formation of a 55–60 nanometer capsid composed of 72 star-shaped capsomers (see figure). Like most non-enveloped viruses, the capsid is geometrically regular and presents icosahedral symmetry. Self-assembled virus-like particles composed of L1 are the basis of a successful group of prophylactic HPV vaccines designed to elicit virus-neutralizing antibodies that protect against initial HPV infection. As such, papillomaviridæ are stable in the environment.
The papillomavirus genome is a double-stranded circular DNA molecule ~8,000 base pairs in length. It is packaged within the L1 shell along with cellular histone proteins, which serve to wrap and condense DNA.
The papillomavirus capsid also contains a viral protein known as L2, which is less abundant. Although not clear how L2 is arranged within the virion, it is known to perform several important functions, including facilitating the packaging of the viral genome into nascent virions as well as the infectious entry of the virus into new host cells. L2 is of interest as a possible target for more broadly protective HPV vaccines.
The viral capsid consists of 72 capsomeres of which 12 are five-coordinated and 60 are six-coordinated capsomeres, arranged on a T = 7d icosahedral surface lattice.
Tissue specificity
Papillomaviruses replicate exclusively in keratinocytes. Keratinocytes form the outermost layers of the skin, as well as some mucosal surfaces, such as the inside of the cheek or the walls of the vagina. These surface tissues, which are known as stratified squamous epithelia, are composed of stacked layers of flattened cells. The cell layers are formed through a process known as cellular differentiation, in which keratinocytes gradually become specialized, eventually forming a hard, crosslinked surface that prevents moisture loss and acts as a barrier against pathogens. Less-differentiated keratinocyte stem cells, replenished on the surface layer, are thought to be the initial target of productive papillomavirus infections. Subsequent steps in the viral life cycle are strictly dependent on the process of keratinocyte differentiation. As a result, papillomaviruses can only replicate in body surface tissues.
Life cycle
Infectious entry
Papillomaviruses gain access to keratinocyte stem cells through small wounds, known as microtraumas, in the skin or mucosal surface. Interactions between L1 and sulfated sugars on the cell surface promote initial attachment of the virus. The virus is then able to get inside from the cell surface via interaction with a specific receptor, likely via the alpha-6 beta-4 integrin, and transported to membrane-enclosed vesicles called endosomes. The capsid protein L2 disrupts the membrane of the endosome through a cationic cell-penetrating peptide, allowing the viral genome to escape and traffic, along with L2, to the cell nucleus.
Viral persistence and latency
After successful infection of a keratinocyte, the virus expresses E1 and E2 proteins, which are responsible for replicating and maintaining the viral DNA as a circular episome. The viral oncogenes E6 and E7 promote cell growth by inactivating the tumor suppressor proteins p53 and pRb. Keratinocyte stem cells in the epithelial basement layer can maintain papillomavirus genomes for decades.
Production of progeny virus
The current understanding is that viral DNA replication likely occurs in the G2 phase of the cell cycle and relies on recombination-dependent replication, supported by DNA damage response mechanisms (activated by the E7 protein), to produce progeny viral genomes. Papillomavirus genomes are sometimes integrated into the host genome, most noticeably with oncogenic HPVs, but integration is not a normal part of the virus life cycle and is a dead end that eliminates the potential for production of viral progeny.
The expression of the viral late genes, L1 and L2, is exclusively restricted to differentiating keratinocytes in the outermost layers of the skin or mucosal surface. The increased expression of L1 and L2 is typically correlated with a dramatic increase in the number of copies of the viral genome. Since the outer layers of stratified squamous epithelia are subject to relatively limited surveillance by cells of the immune system, it is thought that this restriction of viral late gene expression represents a form of immune evasion.
New infectious progeny viruses are assembled in the cell nucleus. Papillomaviruses have evolved a mechanism for releasing virions into the environment. Other kinds of non-enveloped animal viruses utilize an active lytic process to kill the host cell, allowing release of progeny virus particles. Often this lytic process is associated with inflammation, which might trigger immune attack against the virus. Papillomaviruses exploit desquamation as a stealthy, non-inflammatory release mechanism.
Association with cancer
Although some papillomavirus types can cause cancer in the epithelial tissues they inhabit, cancer is not a typical outcome of infection. The development of papillomavirus-induced cancers typically occurs over the course of many years. Papillomaviruses have been associated with the development of cervical cancer, penile cancer and oral cancers. An association with vulval cancer and urothelial carcinoma with squamous differentiation in patients with neurogenic bladder has also been noted. Cancer-causing papillomavirus genomes encode two small proteins, E6 and E7, that mimic oncogenes: they stimulate unnatural growth of cells, block their natural defenses, and act on many signaling proteins that control proliferation and apoptosis.
Laboratory study
The fact that the papillomavirus life cycle strictly requires keratinocyte differentiation has posed a substantial barrier to the study of papillomaviruses in the laboratory, since it has precluded the use of conventional cell lines to grow the viruses. Because infectious BPV-1 virions can be extracted from the large warts the virus induces on cattle, it has been a workhorse model papillomavirus type for many years. CRPV, rabbit oral papillomavirus (ROPV) and canine oral papillomavirus (COPV) have also been used extensively for laboratory studies. Once researchers discovered that these viruses cause cancer, they worked to develop vaccines against them. Currently, the most effective approach is to mimic the virus with particles composed of the L1 protein but lacking the viral DNA: the immune system builds defenses against these particles, which cannot cause disease. PDB entry 6bt3 shows how antibodies attack the surface of the virus to disable it.
Some sexually transmitted HPV types have been propagated using a mouse "xenograft" system, in which HPV-infected human cells are implanted into immunodeficient mice. More recently, some groups have succeeded in isolating infectious HPV-16 from human cervical lesions. However, isolation of infectious virions using this technique is arduous and the yield of infectious virus is very low.
The differentiation of keratinocytes can be mimicked in vitro by exposing cultured keratinocytes to an air/liquid interface. The adaptation of such "raft culture" systems to the study of papillomaviruses was a significant breakthrough for in vitro study of the viral life cycle. However, raft culture systems are relatively cumbersome and the yield of infectious HPVs can be low.
The development of a yeast-based system that allows stable episomal HPV replication provides a convenient, rapid and inexpensive means to study several aspects of the HPV lifecycle (Angeletti 2002). For example, E2-dependent transcription, genome amplification and efficient encapsidation of full-length HPV DNAs can be easily recreated in yeast (Angeletti 2005).
Recently, transient high-yield methods for producing HPV pseudoviruses carrying reporter genes has been developed. Although pseudoviruses are not suitable for studying certain aspects of the viral life cycle, initial studies suggest that their structure and initial infectious entry into cells is probably similar in many ways to authentic papillomaviruses.
Human papillomavirus binds to heparin molecules on the surface of the cells that it infects. Crystallographic studies of isolated L1 capsomeres show that the heparin chains are recognized by lysines lining grooves on the surface of the virus, and that antibodies against these sites can block this recognition.
Genetic organization and gene expression
The papillomavirus genome is divided into an early region (E), encoding six open reading frames (ORF) (E1, E2, E4, E5, E6, and E7) that are expressed immediately after initial infection of a host cell, and a late region (L) encoding a major capsid protein L1 and a minor capsid protein L2. All viral ORFs are encoded on one DNA strand (see figure). This represents a dramatic difference between papillomaviruses and polyomaviruses, since the latter virus type expresses its early and late genes by bi-directional transcription of both DNA strands. This difference was a major factor in establishment of the consensus that papillomaviruses and polyomaviruses probably never shared a common ancestor, despite the striking similarities in the structures of their virions.
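Because all papillomavirus ORFs lie on a single DNA strand, a conceptual annotation pass only needs to scan one strand; the toy sketch below (illustrative only, not a real annotation tool, with an arbitrary minimum length) makes that single-strand constraint concrete:

```python
# Illustrative single-strand ORF scan; a polyomavirus-style genome would
# require scanning both strands, a papillomavirus-style genome only one.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_len: int = 300):
    """Yield (start, end) of ATG-initiated ORFs on the given strand only."""
    seq = seq.upper()
    for frame in range(3):                   # three reading frames, one strand
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOP_CODONS:
                if i + 3 - start >= min_len:
                    yield (start, i + 3)
                start = None

# Toy usage; a real circular ~8 kb genome would also need origin-spanning
# frames to be handled.
print(list(find_orfs("ATG" + "AAA" * 100 + "TGA", min_len=30)))  # [(0, 306)]
```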
After the host cell is infected, the HPV16 early promoter is activated and a polycistronic primary RNA containing all six early ORFs is transcribed. This polycistronic RNA contains three exons and two introns and undergoes active RNA splicing to generate multiple isoforms of mRNAs. One of the spliced isoform RNAs, E6*I, serves as an E7 mRNA to translate E7 oncoprotein. In contrast, an intron in the E6 ORF that remains intact without splicing is necessary for translation of E6 oncoprotein. However, viral early transcription is subject to regulation by viral E2, and high E2 levels repress the transcription. HPV genomes integrate into the host genome by disruption of the E2 ORF, preventing E2 repression of E6 and E7. Thus, viral genome integration into the host DNA genome increases E6 and E7 expression to promote cellular proliferation and the chance of malignancy.
A major viral late promoter in viral early region becomes active only in differentiated cells and its activity can be highly enhanced by viral DNA replication. The late transcript is also a polycistronic RNA which contains two introns and three exons. Alternative RNA Splicing of this late transcript is essential for L1 and L2 expression and can be regulated by RNA cis-elements and host splicing factors.
Technical discussion of papillomavirus gene functions
Genes within the papillomavirus genome are usually identified by similarity with other previously identified genes. However, some spurious open reading frames might have been mistaken for genes simply because of their position in the genome, and might not be true genes. This applies especially to certain E3, E4, E5 and E8 open reading frames.
E1
Encodes a protein that binds to the viral origin of replication in the long control region of the viral genome. E1 uses ATP to exert a helicase activity that forces apart the DNA strands, thus preparing the viral genome for replication by cellular DNA replication factors.
E2
The E2 protein serves as a master transcriptional regulator for viral promoters located primarily in the long control region. The protein has a transactivation domain linked by a relatively unstructured hinge region to a well-characterized DNA binding domain. E2 facilitates the binding of E1 to the viral origin of replication. E2 also utilizes a cellular protein known as Bromodomain-4 (Brd4) to tether the viral genome to cellular chromosomes. This tethering to the cell's nuclear matrix ensures faithful distribution of viral genomes to each daughter cell after cell division. It is thought that E2 serves as a negative regulator of expression for the oncogenes E6 and E7 in latently HPV-infected basal layer keratinocytes. Genetic changes, such as integration of the viral DNA into a host cell chromosome, that inactivate E2 expression tend to increase the expression of the E6 and E7 oncogenes, resulting in cellular transformation and possibly further genetic destabilization.
E3
This small putative gene exists only in a few papillomavirus types. The gene is not known to be expressed as a protein and does not appear to serve any function.
E4
Although E4 proteins are expressed at low levels during the early phase of viral infection, expression of E4 increases dramatically during the late phase of infection. In other words, its "E" appellation may be something of a misnomer. In the case of HPV-1, E4 can account for up to 30% of the total protein at the surface of a wart. The E4 protein of many papillomavirus types is thought to facilitate virion release into the environment by disrupting intermediate filaments of the keratinocyte cytoskeleton. Viral mutants incapable of expressing E4 do not support high-level replication of the viral DNA, but it is not yet clear how E4 facilitates DNA replication. E4 has also been shown to participate in arresting cells in the G2 phase of the cell cycle.
E5
The E5 are small, very hydrophobic proteins that destabilise the function of many membrane proteins in the infected cell. The E5 protein of some animal papillomavirus types (mainly bovine papillomavirus type 1) functions as an oncogene primarily by activating the cell growth-promoting signaling of platelet-derived growth factor receptors. The E5 proteins of human papillomaviruses associated to cancer, however, seem to activate the signal cascade initiated by epidermal growth factor upon ligand binding. HPV16 E5 and HPV2 E5 have also been shown to down-regulate the surface expression of major histocompatibility complex class I proteins, which may prevent the infected cell from being eliminated by killer T cells.
E6
E6 is a 151 amino-acid peptide that incorporates a type 1 motif with a consensus sequence –(T/S)-(X)-(V/I)-COOH. It also has two zinc finger motifs.
E6 is of particular interest because it appears to have multiple roles in the cell and to interact with many other proteins. Its major role, however, is to mediate the degradation of p53, a major tumor suppressor protein, reducing the cell's ability to respond to DNA damage.
E6 has also been shown to target other cellular proteins, thereby altering several metabolic pathways. One such target is NFX1-91, which normally represses production of telomerase, a protein that allows cells to divide an unlimited number of times. When NFX1-91 is degraded by E6, telomerase levels increase, inactivating a major mechanism keeping cell growth in check. Additionally, E6 can act as a transcriptional cofactor—specifically, a transcription activator—when interacting with the cellular transcription factor, E2F1/DP1.
E6 can also bind to PDZ-domains, short sequences which are often found in signaling proteins. E6's structural motif allows for interaction with PDZ domains on DLG (discs large) and hDLG (Drosophila large) tumor suppressor genes. Binding at these locations causes transformation of the DLG protein and disruption of its suppressor function. E6 proteins also interact with the MAGUK (membrane-associated guanylate kinase family) proteins. These proteins, including MAGI-1, MAGI-2, and MAGI-3 are usually structural proteins, and can help with signaling. More significantly, they are believed to be involved with DLG's suppression activity. When E6 complexes with the PDZ domains on the MAGI proteins, it distorts their shape and thereby impedes their function. Overall, the E6 protein serves to impede normal protein activity in such a way as to allow a cell to grow and multiply at the increased rate characteristic of cancer.
Since the expression of E6 is strictly required for maintenance of a malignant phenotype in HPV-induced cancers, it is an appealing target of therapeutic HPV vaccines designed to eradicate established cervical cancer tumors.
E7
In most papillomavirus types, the primary function of the E7 protein is to inactivate members of the pRb family of tumor suppressor proteins. Together with E6, E7 serves to prevent cell death (apoptosis) and promote cell cycle progression, thus priming the cell for replication of the viral DNA. E7 also participates in immortalization of infected cells by activating cellular telomerase. Like E6, E7 is the subject of intense research interest and is believed to exert a wide variety of other effects on infected cells. As with E6, the ongoing expression of E7 is required for survival of cancer cell lines, such as HeLa, that are derived from HPV-induced tumors.
E8
Only a few papillomavirus types encode a short protein from the E8 gene. In the case of BPV-4 (papillomavirus genus Xi), the E8 open reading frame may substitute for the E6 open reading frame, which is absent in this papillomavirus genus. These E8 genes are chemically and functionally similar to the E5 genes from some human papillomaviruses, and are also called E5/E8.
L1
L1 spontaneously self-assembles into pentameric capsomers. Purified capsomers can go on to form capsids, which are stabilized by disulfide bonds between neighboring L1 molecules. L1 capsids assembled in vitro are the basis of prophylactic vaccines against several HPV types. Compared to other papillomavirus genes, the amino acid sequences of most portions of L1 are well-conserved between types. However, the surface loops of L1 can differ substantially, even for different members of a particular papillomavirus species. This probably reflects a mechanism for evasion of neutralizing antibody responses elicited by previous papillomavirus infections.
L2
L2 exists in an oxidized state within the papillomavirus virion, with the two conserved cysteine residues forming an intramolecular disulfide bond. In addition to cooperating with L1 to package the viral DNA into the virion, L2 has been shown to interact with a number of cellular proteins during the infectious entry process. After the initial binding of the virion to the cell, L2 must be cleaved by the cellular protease furin. The virion is internalized, probably through a clathrin-mediated process, into an endosome, where acidic conditions are thought to lead to exposure of membrane-destabilizing portions of L2. The cellular proteins beta-actin and syntaxin-18 may also participate in L2-mediated entry events. After endosome escape, L2 and the viral genome are imported into the cell nucleus where they traffic to a sub-nuclear domain known as an ND-10 body that is rich in transcription factors. Small portions of L2 are well-conserved between different papillomavirus types, and experimental vaccines targeting these conserved domains may offer protection against a broad range of HPV types.
See also
Deer cutaneous fibroma
References
External links
ICTV Report Papillomaviridae
Viralzone: Papillomaviridae
Los Alamos National Laboratory maintains a comprehensive (albeit somewhat dated) papillomavirus sequence database. This useful database provides detailed descriptions and references for various papillomavirus types.
A short video which shows the effects of papillomavirus on the skin of an Indonesian man with epidermodysplasia verruciformis, the genetic inability to defend against some types of cutaneous HPV.
de Villiers, E.M., Bernard, H.U., Broker, T., Delius, H. and zur Hausen, H. Index of Viruses – Papillomaviridae (2006). In: ICTVdB – The Universal Virus Database, version 4. Büchen-Osmond, C. (Ed), Columbia University, New York, USA.
00.099. Papillomaviridae description In: ICTVdB – The Universal Virus Database, version 4. Büchen-Osmond, C. (Ed), Columbia University, New York, USA
Human papillomavirus particle and genome visualization
ICTV
Virus families | Papillomaviridae | Biology | 6,261 |
314,960 | https://en.wikipedia.org/wiki/Oxime | In organic chemistry, an oxime is an organic compound belonging to the imines, with the general formula , where R is an organic side-chain and R' may be hydrogen, forming an aldoxime, or another organic group, forming a ketoxime. O-substituted oximes form a closely related family of compounds. Amidoximes are oximes of amides () with general structure .
Oximes are usually generated by the reaction of hydroxylamine with aldehydes () or ketones (). The term oxime dates back to the 19th century, a combination of the words oxygen and imine.
Structure and properties
If the two side-chains on the central carbon are different from each other—either an aldoxime, or a ketoxime with two different "R" groups—the oxime can often have two different geometric stereoisomeric forms according to the E/Z configuration. An older terminology of syn and anti was used to identify especially aldoximes according to whether the R group was closer or further from the hydroxyl. Both forms are often stable enough to be separated from each other by standard techniques.
Oximes have three characteristic bands in the infrared spectrum, whose wavenumbers correspond to the stretching vibrations of its three types of bonds: 3600 cm−1 (O−H), 1665 cm−1 (C=N) and 945 cm−1 (N−O).
In aqueous solution, aliphatic oximes are 10²- to 10³-fold more resistant to hydrolysis than analogous hydrazones.
Preparation
Oximes can be synthesized by condensation of an aldehyde or a ketone with hydroxylamine. The condensation of aldehydes with hydroxylamine gives aldoximes, and ketoximes are produced from ketones and hydroxylamine. In general, oximes exist as colorless crystals or as thick liquids and are poorly soluble in water. Therefore, oxime formation can be used for the identification of ketone or aldehyde functional groups.
Oximes can also be obtained from the reaction of nitrites such as isoamyl nitrite with compounds containing an acidic hydrogen atom. Examples are the reaction of ethyl acetoacetate and sodium nitrite in acetic acid, the reaction of methyl ethyl ketone with ethyl nitrite in hydrochloric acid and a similar reaction with propiophenone, the reaction of phenacyl chloride, and the reaction of malononitrile with sodium nitrite in acetic acid.
A conceptually related reaction is the Japp–Klingemann reaction.
Reactions
The hydrolysis of oximes proceeds easily by heating in the presence of various inorganic acids, and the oximes decompose into the corresponding ketones or aldehydes, and hydroxylamines. The reduction of oximes by sodium metal, sodium amalgam, hydrogenation, or reaction with hydride reagents produces amines. Typically the reduction of aldoximes gives both primary amines and secondary amines; however, reaction conditions can be altered (such as the addition of potassium hydroxide in a 1/30 molar ratio) to yield solely primary amines.
In general, oximes can be changed to the corresponding amide derivatives by treatment with various acids. This reaction is called Beckmann rearrangement. In this reaction, a hydroxyl group is exchanged with the group that is in the anti position of the hydroxyl group. The amide derivatives that are obtained by Beckmann rearrangement can be transformed into a carboxylic acid by means of hydrolysis (base or acid catalyzed). Beckmann rearrangement is used for the industrial synthesis of caprolactam (see applications below).
The Ponzio reaction (1906) concerning the conversion of m-nitrobenzaldoxime to m-nitrophenyldinitromethane using dinitrogen tetroxide was the result of research into TNT analogues:
In the Neber rearrangement certain oximes are converted to the corresponding alpha-amino ketones.
Oximes can be dehydrated using acid anhydrides to yield corresponding nitriles.
Certain amidoximes react with benzenesulfonyl chloride to make substituted ureas in the Tiemann rearrangement:
Uses
In their largest application, an oxime is an intermediate in the industrial production of caprolactam, a precursor to Nylon 6. About half of the world's supply of cyclohexanone, more than a million tonnes annually, is converted to the oxime. In the presence of sulfuric acid catalyst, the oxime undergoes the Beckmann rearrangement to give the cyclic amide caprolactam:
Metal extractant
Oximes are commonly used as ligands and sequestering agents for metal ions. Dimethylglyoxime (dmgH2) is a reagent for the analysis of nickel and a popular ligand in its own right. In the typical reaction, a metal reacts with two equivalents of dmgH2 concomitant with ionization of one proton. Salicylaldoxime is a chelator in hydrometallurgy.
Amidoximes such as polyacrylamidoxime can be used to capture trace amounts of uranium from sea water. In 2017 researchers announced a configuration that absorbed up to nine times as much uranyl as previous fibers without saturating.
Other applications
Oxime compounds are used as antidotes for nerve agents. A nerve agent inactivates acetylcholinesterase by phosphorylation. Oxime compounds can reactivate acetylcholinesterase by attaching to phosphorus, forming an oxime-phosphonate, which then splits away from the acetylcholinesterase molecule. Oxime nerve-agent antidotes are pralidoxime (also known as 2-PAM), obidoxime, methoxime, HI-6, Hlo-7, and TMB-4. The effectiveness of the oxime treatment depends on the particular nerve agent used.
Perillartine, the oxime of perillaldehyde, is used as an artificial sweetener in Japan. It is 2000 times sweeter than sucrose.
Diaminoglyoxime is a key precursor to various compounds containing the highly reactive furazan ring.
Methyl ethyl ketoxime is an anti-skinning additive in many oil-based paints.
Buccoxime and 5-methyl-3-heptanone oxime ("Stemone") are perfume ingredients.
Fluvoxamine is used as an antidepressant.
See also
:Category:Oximes – specific chemicals containing this functional group
Nitrone – the N-oxide of an imine
References
Functional groups
Organic compounds
Chelating agents | Oxime | Chemistry | 1,448 |
4,706,825 | https://en.wikipedia.org/wiki/Cyclic%20code | In coding theory, a cyclic code is a block code, where the circular shifts of each codeword give another word that belongs to the code. They are error-correcting codes that have algebraic properties that are convenient for efficient error detection and correction.
Definition
Let C be a linear code over a finite field GF(q) of block length n. C is called a cyclic code if, for every codeword c = (c_1, …, c_n) from C, the word (c_n, c_1, …, c_{n−1}) in GF(q)^n obtained by a cyclic right shift of components is again a codeword. Because one cyclic right shift is equal to n − 1 cyclic left shifts, a cyclic code may also be defined via cyclic left shifts. Therefore, the linear code C is cyclic precisely when it is invariant under all cyclic shifts.
Cyclic codes have an additional structural constraint beyond linearity. They are based on Galois fields, and because of their structural properties they are very useful for error control. Their structure is strongly related to Galois fields, which makes the encoding and decoding algorithms for cyclic codes computationally efficient.
Algebraic structure
Cyclic codes can be linked to ideals in certain rings. Let R = GF(q)[x]/(x^n − 1) be a polynomial ring over the finite field GF(q). Identify the elements of the cyclic code C with polynomials in R such that (c_1, …, c_n) maps to the polynomial c_1 + c_2 x + ⋯ + c_n x^{n−1}: thus multiplication by x corresponds to a cyclic shift. Then C is an ideal in R, and hence principal, since R is a principal ideal ring. The ideal is generated by the unique monic element in C of minimum degree, the generator polynomial g(x). This must be a divisor of x^n − 1. It follows that every cyclic code is a polynomial code.
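As a concrete check of these two facts — every codeword is a multiple of the generator polynomial modulo x^n − 1, and the code is invariant under cyclic shifts — the sketch below (hypothetical helper code) enumerates the binary (7,4) cyclic code generated by g(x) = x³ + x + 1, which reappears as the Hamming code later in this article:

```python
from itertools import product

n, g = 7, [1, 1, 0, 1]                 # g(x) = 1 + x + x^3 over GF(2)

def polymul_mod(a, b):
    """Multiply two GF(2) polynomials (bit lists, index = degree) mod x^n - 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj:
                out[(i + j) % n] ^= 1  # x^n folds back to 1
    return out

# The code is all multiples a(x) g(x) with deg a(x) < 4 (k = 4 message bits).
code = {tuple(polymul_mod(list(msg), g)) for msg in product([0, 1], repeat=4)}
assert len(code) == 16                 # 2^k distinct codewords

for c in code:                         # a cyclic right shift stays in the code
    assert (c[-1],) + c[:-1] in code
```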
If the generator polynomial g(x) has degree d then the rank of the code C is n − d.
The idempotent of C is a codeword e such that e² = e (that is, e is an idempotent element of C) and e is an identity for the code, that is e c = c for every codeword c. If n and q are coprime such a word always exists and is unique; it is a generator of the code.
An irreducible code is a cyclic code in which the code, as an ideal, is irreducible, i.e. is minimal in R, so that its check polynomial is an irreducible polynomial.
Examples
For example, if GF(q) = GF(2) and n = 3, the set of codewords contained in the cyclic code generated by (1, 1, 0) is precisely
{(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}.
It corresponds to the ideal in GF(2)[x]/(x³ − 1) generated by 1 + x.
The polynomial 1 + x is irreducible in the polynomial ring, and hence the code is an irreducible code.
The idempotent of this code is the polynomial x + x², corresponding to the codeword (0, 1, 1).
Trivial examples
Trivial examples of cyclic codes are GF(q)^n itself and the code containing only the zero codeword. These correspond to generators 1 and x^n − 1 respectively: these two polynomials must always be factors of x^n − 1.
Over GF(2) the parity bit code, consisting of all words of even weight, corresponds to generator x + 1. Again over GF(2) this must always be a factor of x^n − 1.
Quasi-cyclic codes and shortened codes
Before delving into the details of cyclic codes, we first discuss quasi-cyclic and shortened codes, which are closely related to cyclic codes and can all be converted into one another.
Definition
Quasi-cyclic codes:
An [n, k] quasi-cyclic code is a linear block code such that, for some s which is coprime to n, the polynomial x^s c(x) (mod x^n − 1) is a codeword polynomial whenever c(x) is a codeword polynomial.
Here, a codeword polynomial is an element of a linear code whose codewords are polynomials that are divisible by a polynomial of shorter length called the generator polynomial. Every codeword polynomial can be expressed in the form c(x) = a(x) g(x), where g(x) is the generator polynomial. Any codeword of a cyclic code can be associated with a codeword polynomial, namely, c_1 + c_2 x + ⋯ + c_n x^{n−1}. A quasi-cyclic code with s equal to 1 is a cyclic code.
Definition
Shortened codes:
An [n, k] linear code is called a proper shortened cyclic code if it can be obtained by deleting b positions from an [n + b, k + b] cyclic code.
In shortened codes information symbols are deleted to obtain a desired blocklength smaller than the design blocklength. The missing information symbols are usually imagined to be at the beginning of the codeword and are considered to be 0. Therefore, n − k is fixed, and then k is decreased, which eventually decreases n. It is not necessary to delete the starting symbols. Depending on the application, sometimes consecutive positions are considered as 0 and are deleted.
All the symbols which are dropped need not be transmitted, and at the receiving end they can be reinserted. To convert an [n + b, k + b] cyclic code to a shortened [n, k] code, set b symbols to zero and drop them from each codeword. Any cyclic code can be converted to a quasi-cyclic code by dropping every bth symbol where b is a factor of n. If the dropped symbols are not check symbols then this cyclic code is also a shortened code.
For correcting errors
Cyclic codes can be used to correct errors: Hamming codes in cyclic form, for example, can be used for correcting single errors. Likewise, cyclic codes are also used to correct double errors and burst errors. All these types of error correction are covered briefly in the following subsections.
The (7,4) Hamming code has a generator polynomial g(x) = x³ + x + 1. This polynomial has a zero in the Galois extension field GF(8) at the primitive element α, and all codewords satisfy c(α) = 0. Cyclic codes can also be used to correct double errors over the field GF(2). The blocklength will be equal to n = 2^m − 1, with the primitive elements α and α³ as zeros in GF(2^m), because we are considering the case of two errors here, so each one will represent one error.
The received word is a polynomial of degree n − 1 given as
v(x) = a(x) g(x) + e(x),
where e(x) can have at most two nonzero coefficients corresponding to 2 errors.
We define the syndrome polynomial, s(x), as the remainder of the polynomial v(x) when divided by the generator polynomial g(x), i.e.
s(x) = v(x) mod g(x) = e(x) mod g(x).
For correcting two errors
Let the field elements X_1 and X_2 be the two error location numbers. If only one error occurs then X_2 is equal to zero, and if none occurs, both are zero.
Let S_1 = v(α) and S_3 = v(α³).
These field elements are called "syndromes". Now because g(x) is zero at the primitive elements α and α³, we can write S_1 = e(α) and S_3 = e(α³). If say two errors occur at locations i and i′, then
S_1 = α^i + α^{i′} and
S_3 = α^{3i} + α^{3i′}.
And these can be considered as a pair of equations in GF(2^m) with two unknowns, and hence we can write
S_1 = X_1 + X_2 and
S_3 = X_1³ + X_2³.
Hence if this pair of nonlinear equations can be solved, cyclic codes can be used to correct two errors.
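The sketch below (a hypothetical brute-force decoder, assuming the field GF(16) built from the primitive polynomial x⁴ + x + 1, which the text leaves unspecified) computes the syndromes S₁ = v(α), S₃ = v(α³) and searches for error locations satisfying the two equations above:

```python
M, PRIM = 4, 0b10011                    # GF(2^4) via x^4 + x + 1 (assumption)
N = (1 << M) - 1                        # blocklength 15

EXP, LOG = [0] * (2 * N), [0] * (N + 1)
x = 1
for i in range(N):                      # exp/log tables for GF(16)
    EXP[i] = EXP[i + N] = x
    LOG[x] = i
    x <<= 1
    if x & (1 << M):
        x ^= PRIM

def gf_pow(a, e):
    return 0 if a == 0 else EXP[(LOG[a] * e) % N]

def syndromes(v):
    """S1 = v(alpha), S3 = v(alpha^3) for a received binary vector v."""
    s1 = s3 = 0
    for i, bit in enumerate(v):
        if bit:
            s1 ^= EXP[i % N]            # alpha^i
            s3 ^= EXP[(3 * i) % N]      # alpha^{3i}
    return s1, s3

def locate_errors(v):
    """Brute-force search for 0, 1 or 2 locations matching both syndromes."""
    s1, s3 = syndromes(v)
    if s1 == 0 and s3 == 0:
        return ()
    for i in range(N):
        xi = EXP[i]
        if s1 == xi and s3 == gf_pow(xi, 3):
            return (i,)                 # single error
        for j in range(i + 1, N):
            xj = EXP[j]
            if s1 == (xi ^ xj) and s3 == (gf_pow(xi, 3) ^ gf_pow(xj, 3)):
                return (i, j)           # double error
    return None

v = [0] * N                             # the all-zero codeword...
v[2] ^= 1; v[9] ^= 1                    # ...corrupted in two positions
print(locate_errors(v))                 # -> (2, 9)
```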
Hamming code
The Hamming(7,4) code may be written as a cyclic code over GF(2) with generator x³ + x + 1. In fact, any binary Hamming code of the form Ham(r, 2) is equivalent to a cyclic code, and any Hamming code of the form Ham(r, q) with r and q − 1 relatively prime is also equivalent to a cyclic code. Given a Hamming code of the form Ham(r, 2) with r ≥ 3, the set of even codewords forms a cyclic [2^r − 1, 2^r − r − 2] code.
Hamming code for correcting single errors
A code whose minimum distance is at least 3 has a check matrix all of whose columns are distinct and nonzero. If a check matrix for a binary code has m rows, then each column is an m-bit binary number. There are 2^m − 1 possible nonzero columns. Therefore, if a check matrix of a binary code with minimum distance at least 3 has m rows, then it can only have 2^m − 1 columns, not more than that. This defines a [2^m − 1, 2^m − 1 − m, 3] code, called the Hamming code.
It is easy to define Hamming codes for large alphabets of size q. We need to define one check matrix with pairwise linearly independent columns. For any nonzero column there are q − 1 nonzero multiples of it, so to get pairwise linear independence all nonzero m-tuples with one as the topmost nonzero element are chosen as columns. Then two columns are never linearly dependent, although three columns can be, giving the code a minimum distance of 3.
So, there are (q^m − 1)/(q − 1) nonzero columns with one as the topmost nonzero element. Therefore, a Hamming code is a [(q^m − 1)/(q − 1), (q^m − 1)/(q − 1) − m] code.
Now, for cyclic codes, let α be a primitive element in GF(q^m), and let β = α^{q−1}, so that β has order n = (q^m − 1)/(q − 1). Then β is a zero of the polynomial x^n − 1, and its minimal polynomial is a generator polynomial for the cyclic code of block length n.
But for q = 2, n = 2^m − 1 and β = α. The received word is a polynomial of degree n − 1 given as
v(x) = a(x) g(x) + e(x),
where e(x) = 0 or e(x) = x^i, where i represents the error location.
But we can also use α^i as an element of GF(2^m) to index the error location. Because α has order n = 2^m − 1, all powers of α from 0 to n − 1 are distinct. Therefore, we can easily determine the error location i from v(α) = α^i, unless v(α) = 0, which represents no error. So, a Hamming code is a single error correcting code over GF(2) with n = 2^m − 1 and n − k = m.
For correcting burst errors
From the Hamming distance concept, a code with minimum distance 2t + 1 can correct any t errors. But in many channels the error pattern is not very arbitrary; errors occur within a very short segment of the message. Such errors are called burst errors. So, for correcting such errors we get a more efficient code of higher rate because of the fewer constraints. Cyclic codes are used for correcting burst errors. In fact, cyclic codes can also correct cyclic burst errors along with ordinary burst errors. Cyclic burst errors are defined as:
A cyclic burst of length t is a vector whose nonzero components are among t (cyclically) consecutive components, the first and the last of which are nonzero.
In polynomial form a cyclic burst of length t can be described as e(x) = x^i b(x) mod (x^n − 1), with b(x) a polynomial of degree t − 1 with nonzero coefficient b_0. Here b(x) defines the pattern and x^i defines the starting point of the error. The length of the pattern is given by deg b(x) + 1. The syndrome polynomial is unique for each pattern and is given by
s(x) = e(x) mod g(x)
A linear block code that corrects all burst errors of length t or less must have at least 2t check symbols. Proof: any linear code that can correct all burst patterns of length t or less cannot have a burst of length 2t or less as a codeword, because if it did then a burst of length t could change the codeword to a burst pattern of length t, which also could be obtained by making a burst error of length t in the all-zero codeword. Now, any two vectors that are nonzero in the first 2t components must be from different cosets of a standard array to avoid their difference being a codeword of burst length 2t or less. Therefore, the number of such cosets is equal to the number of such vectors, which is q^{2t}. Hence there are at least q^{2t} cosets and hence at least 2t check symbols.
This property is also known as the Rieger bound, and it is similar to the Singleton bound for random error correcting codes.
Fire codes as cyclic bounds
In 1959, Philip Fire presented a construction of cyclic codes generated by a product of a binomial and a primitive polynomial. The binomial has the form x^{2t−1} + 1 for some positive odd integer 2t − 1. The Fire code is a cyclic burst error correcting code over GF(q) with the generator polynomial
g(x) = (x^{2t−1} + 1) p(x),
where p(x) is a prime polynomial with degree not smaller than t that does not divide x^{2t−1} + 1. The block length of the Fire code is the smallest integer n such that g(x) divides x^n − 1.
A fire code can correct all burst errors of length t or less if no two bursts and appear in the same co-set. This can be proved by contradiction. Suppose there are two distinct nonzero bursts and of length or less and are in the same co-set of the code. So, their difference is a codeword. As the difference is a multiple of it is also a multiple of . Therefore,
.
This shows that is a multiple of , So
for some . Now, as is less than and is less than so is a codeword. Therefore,
.
Since degree of is less than degree of , cannot divide . If is not zero, then also cannot divide as is less than and by definition of , divides for no smaller than . Therefore both equal zero. That means the two bursts are the same, contrary to the assumption.
Fire codes are the best single burst correcting codes with high rate and they are constructed analytically. They are of very high rate and when and are equal, redundancy is least and is equal to . By using multiple fire codes longer burst errors can also be corrected.
For error detection, cyclic codes are widely used and are called cyclic redundancy check (CRC) codes.
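The sketch below (hypothetical code, using the common CRC-8 generator x⁸ + x² + x + 1 as an arbitrary example) illustrates the error-detection idea: the check value is the remainder of the shifted message polynomial divided by g(x), so the transmitted word is divisible by g(x) and any nonzero remainder at the receiver signals an error:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise MSB-first CRC-8 with zero initial value (remainder mod g(x))."""
    crc = 0
    for byte in data:
        crc ^= byte                     # bring in the next 8 message bits
        for _ in range(8):
            if crc & 0x80:              # top bit set: subtract (xor) g(x)
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

msg = b"cyclic code"
check = crc8(msg)
# Appending the check byte makes the whole word divisible by g(x):
assert crc8(msg + bytes([check])) == 0
```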
On Fourier transform
Applications of the Fourier transform are widespread in signal processing. But their applications are not limited to the complex fields only; Fourier transforms also exist in the Galois field GF(q). Cyclic codes using the Fourier transform can be described in a setting closer to signal processing.
Fourier transform over finite fields
The discrete Fourier transform of a vector v = (v_0, …, v_{n−1}) is a vector V = (V_0, …, V_{n−1}) where
V_j = Σ_{i=0}^{n−1} ω^{ij} v_i, for j = 0, …, n − 1,
where ω = exp(−2πi/n) is an nth root of unity. Similarly, in a finite field an nth root of unity is an element α of order n. Therefore,
if v is a vector over GF(q), and α is an element of GF(q) of order n, then the Fourier transform of the vector v is the vector V whose components are given by
V_j = Σ_{i=0}^{n−1} α^{ij} v_i, for j = 0, …, n − 1.
Here i is the time index, j is the frequency and V is the spectrum. One important difference between the Fourier transform in the complex field and in a Galois field is that in the complex field ω exists for every value of n, while in a Galois field α exists only if n divides q − 1. In the case of extension fields, there will be a Fourier transform in the extension field GF(q^m) if n divides q^m − 1 for some m.
In the Galois-field case, the time-domain vector v is over the field GF(q), but the spectrum V may be over the extension field GF(q^m).
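As a small worked sketch (the field, length, and vector are illustrative assumptions, not from the article), the finite-field Fourier transform can be computed directly over a prime field GF(p) whenever n divides p - 1; here p = 5, n = 4, and ω = 2, an element of order 4 modulo 5:

    p = 5                  # work in GF(5)
    n = 4                  # n must divide p - 1; here 4 | 4
    omega = 2              # powers of 2 mod 5: 2, 4, 3, 1 -> order 4

    def gf_dft(v):
        """V_j = sum_i omega^(i*j) * v_i over GF(p)."""
        return [sum(pow(omega, i * j, p) * vi for i, vi in enumerate(v)) % p
                for j in range(len(v))]

    v = [1, 2, 0, 3]       # a time-domain vector over GF(5)
    print(gf_dft(v))       # its spectrum (V_0, V_1, V_2, V_3)

Because n divides p - 1 here, the spectrum stays inside GF(5) itself; in general, as noted above, it may lie in an extension field GF(q^m).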
Spectral description
Any codeword of a cyclic code of blocklength n can be represented by a polynomial c(x) of degree at most n - 1. Its encoder can be written as c(x) = a(x) g(x). Therefore, in the frequency domain, the encoder can be written as C_j = A_j G_j. Here the codeword spectrum C_j has values in GF(q^m), but all the components in the time domain are from GF(q). As the data spectrum A_j is arbitrary, the role of G_j is to specify those j where C_j will be zero.
Thus, cyclic codes can also be defined as
Given a set of spectral indices A = {j_1, ..., j_{n-k}}, whose elements are called check frequencies, the cyclic code C is the set of words over GF(q) whose spectrum is zero in the components indexed by A. Any such spectrum has components of the form C_j = A_j G_j.
So, cyclic codes are vectors in the field GF(q), and their spectra, given by the Fourier transform, are over the field GF(q^m) and constrained to be zero at certain components. But not every spectrum over GF(q^m) that is zero at those components has an inverse transform with components in the field GF(q); such spectra cannot be used as cyclic codes.
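Continuing the hypothetical GF(5) sketch above, the spectral definition can be checked directly: a word belongs to the cyclic code with a given set of check frequencies exactly when its spectrum vanishes at those indices. With the single check frequency j = 1, the condition V_1 = v(ω) = 0 corresponds to the generator g(x) = x - ω = x - 2:

    p, n, omega = 5, 4, 2       # same illustrative parameters as above

    def gf_dft(v):
        return [sum(pow(omega, i * j, p) * vi for i, vi in enumerate(v)) % p
                for j in range(len(v))]

    check_freqs = {1}           # hypothetical set of check frequencies

    def is_codeword(v):
        """True iff the spectrum of v is zero at every check frequency."""
        V = gf_dft(v)
        return all(V[j] == 0 for j in check_freqs)

    # (x - 2)(x + 1) = x^2 + 4x + 3 over GF(5); coefficients in ascending order:
    print(is_codeword([3, 4, 1, 0]))    # True: V_1 = v(omega) = 0
    print(is_codeword([1, 0, 0, 0]))    # False: V_1 = 1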
The following are a few bounds on the spectrum of cyclic codes.
BCH bound
Let n be a factor of q^m - 1 for some m. The only vector in GF(q)^n of weight d - 1 or less that has d - 1 consecutive components of its spectrum equal to zero is the all-zero vector.
Hartmann-Tzeng bound
Let n be a factor of q^m - 1 for some m, and let b be an integer coprime with n. The only vector in GF(q)^n of weight d + s - 1 or less whose spectral components V_j equal zero for j = (ℓ_1 + ℓ_2 b) mod n, where ℓ_1 = 0, ..., d - 2 and ℓ_2 = 0, ..., s, is the all-zero vector.
Roos bound
Let n be a factor of q^m - 1 for some m, and let b be an integer coprime with n. The only vector in GF(q)^n of weight d + s - 1 or less whose spectral components V_j equal zero for j = (ℓ_1 + ℓ_2 b) mod n, where ℓ_1 = 0, ..., d - 2 and ℓ_2 takes at least s + 1 values in the range 0, ..., d + s - 1, is the all-zero vector.
Quadratic residue codes
When the prime ℓ is a quadratic residue modulo the prime p, there is a quadratic residue code, a cyclic code over GF(ℓ) of length p, dimension (p + 1)/2, and minimum weight at least √p.
Generalizations
A constacyclic code is a linear code with the property that, for some constant λ, if (c_1, c_2, ..., c_n) is a codeword then so is (λc_n, c_1, ..., c_{n-1}). A negacyclic code is a constacyclic code with λ = -1. A quasi-cyclic code has the property that, for some s, any cyclic shift of a codeword by s places is again a codeword. A double circulant code is a quasi-cyclic code of even length with s = 2. Quasi-twisted codes and multi-twisted codes are further generalizations of constacyclic codes.
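As a direct transcription of this definition into code (a minimal sketch; the word, constant, and field below are arbitrary), a single constacyclic shift over GF(p) can be written as:

    def consta_shift(c, lam, p):
        """(c_1, ..., c_n) -> (lam * c_n, c_1, ..., c_(n-1)) over GF(p)."""
        return [(lam * c[-1]) % p] + c[:-1]

    print(consta_shift([1, 2, 3], 1, 5))    # cyclic shift: [3, 1, 2]
    print(consta_shift([1, 2, 3], -1, 5))   # negacyclic shift: [2, 1, 2]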
See also
BCH code
Binary Golay code
Cyclic redundancy check
Eugene Prange
Reed–Muller code
Ternary Golay code
Notes
References
Further reading
Ranjan Bose, Information Theory, Coding and Cryptography.
Irving S. Reed and Xuemin Chen, Error-Control Coding for Data Networks, Boston: Kluwer Academic Publishers, 1999.
Scott A. Vanstone and Paul C. Van Oorschot, An Introduction to Error Correcting Codes with Applications.
External links
John Gill's (Stanford) class notes – Notes #3, October 8, Handout #9, EE 387.
Jonathan Hall's (MSU) class notes – Chapter 8. Cyclic codes – pp. 100–123
Coding theory
Finite fields | Cyclic code | Mathematics | 3,285 |
43,714,458 | https://en.wikipedia.org/wiki/Information%20causality | Information causality is a physical principle suggested in 2009. Information causality states that the information gain a receiver (Bob) can reach about data, previously unknown to him, from a sender (Alice), by using all his local resources and m classical bits communicated by the sender, is at most m bits; and that this limitation should hold even in the case where Alice and Bob pre-share a physical non-signaling resource, such as an entangled quantum state.
The principle assumes classical communication: if quantum bits were allowed to be transmitted, the information gain could be higher (for example if Alice and Bob pre-share some entangled qubits) as demonstrated in the quantum superdense coding protocol.
The principle is respected by all correlations accessible with quantum physics, while it excludes all correlations which violate the quantum Tsirelson bound for the CHSH inequality. However, it does not exclude beyond-quantum correlations in multipartite situations. The principle has also been related to a principle called thermodynamic sufficiency.
See also
Tsirelson's bound
Quantum nonlocality
References
Quantum information science | Information causality | Physics | 232 |
961,237 | https://en.wikipedia.org/wiki/TransUnion | TransUnion LLC is an American consumer credit reporting agency. TransUnion collects and aggregates information on over one billion individual consumers in over thirty countries including "200 million files profiling nearly every credit-active consumer in the United States". Its customers include over 65,000 businesses. Based in Chicago, Illinois, TransUnion's 2014 revenue was US$1.3 billion. It is the smallest of the three largest credit agencies, along with Experian and Equifax (known as the "Big Three").
TransUnion also markets credit reports and other credit and fraud-protection products directly to consumers. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year.
Additionally, a growing segment of TransUnion's business is its business offerings that use advanced big data, particularly its TLOxp product.
History
TransUnion was originally formed in 1968 as a holding company for Union Tank Car Company, making TransUnion a descendant of Standard Oil through Union Tank Car Company. The following year, it acquired the Credit Bureau of Cook County, which possessed and maintained 3.6 million credit accounts. In 1981, a Chicago-based holding company, The Marmon Group, acquired TransUnion for approximately $688 million.
In 2010, Goldman Sachs Capital Partners and Advent International acquired it from Madison Dearborn Partners. In 2014, TransUnion acquired Hank Asher's data company TLO. On June 25, 2015, TransUnion became a publicly traded company for the first time, trading under the symbol TRU.
TransUnion eventually began to offer products and services for both businesses and consumers. For businesses, TransUnion updated its traditional credit score offering to include trended data that helps predict consumer repayment and debt behavior. This product, referred to as CreditVision, launched in October 2013.
Its SmartMove™ service facilitates credit and background checks for landlords. The service also provides credit and background checks for partner companies, such as RentSpree.
In September 2013, the company acquired eScan Data Systems of Austin, Texas, to provide post-service eligibility determination support to hospitals and healthcare systems. The technology was integrated into TransUnion's ClearIQ platform, which tracks patients' demographic and insurance-related information to support benefit verification.
In November 2013, TransUnion acquired TLO LLC, a company that leverages data in support of its investigative and risk management tools. Its TLOxp technology aggregates data sets and uses a proprietary algorithm to uncover relationships between data. TLOxp also allows licensed investigators and law enforcement professionals to access personally identifiable information from credit header data.
In 2014, a TransUnion analysis found that reporting rental payment information to credit bureaus can positively affect credit scores. As a result, TransUnion initiated a service called ResidentCredit, making it easy for property owners to report data about their tenants on a monthly basis. These reports include the amount each tenant pays, the timeliness of their last payment, and any remaining balance the tenant currently owes. As a result, some companies have started reporting rent payment information to TransUnion.
In 2015, TransUnion acquired Trustev, a digital verification company specializing in online fraud for $21 million, minus debts.
In 2017, TransUnion acquired FactorTrust, a consumer reporting agency specializing in alternative credit data.
In mid-April 2018, TransUnion announced it intended to buy UK-based CallCredit Information Group for $1.4 billion, subject to regulatory approval.
In December 2021, TransUnion completed the acquisitions of Neustar, initially announced in September 2021 for $3.1 billion, and Sontiq which included IdentityForce, initially announced in October 2021 for $638 million.
In February 2023, TransUnion announced it was rebranding its "thousands of existing B2B products into seven business lines." These include: TruAudience, TruValidate, TruContact (all based on former offerings from Neustar), TruVision, TruIQ, TruEmpower, and TruLookup.
Legal and regulatory issues
In 2003, Judy Thomas of Klamath Falls, Oregon, was awarded $5.3 million in a successful lawsuit against TransUnion. The award was made on the grounds that it took her six years to get TransUnion to remove incorrect information in her credit report.
In 2006, after spending two years trying to correct erroneous credit information that resulted from being a victim of identity theft, a fraud victim named Sloan filed suit against all three of the US's largest credit agencies. TransUnion and Experian settled out of court for an undisclosed amount. In Sloan v. Equifax, a jury awarded Sloan $351,000. "She wrote letters. She called them. They saw the problem. They just didn't fix it," said her attorney, A. Hugo Blankingship III.
TransUnion has also been criticized for concealing charges. Many users complained of not being made aware of a $17.95 monthly charge for holding a TransUnion account.
In March 2015, following a settlement with the New York Attorney-General, TransUnion, along with other credit reporting companies, Experian and Equifax, agreed to help consumers with errors and red flags on credit reports. Under the new settlement, credit-reporting firms are required to use trained employees to respond when a consumer flags a mistake on their file. These employees are responsible for communicating with the lender and resolving the dispute.
In January 2017, TransUnion was fined $5.5 million and ordered to pay $17.6 million in restitution, along with Equifax, by the Consumer Financial Protection Bureau (CFPB). The federal agency fined the companies "for deceiving consumers about the usefulness and actual cost of credit scores they sold to consumers". The CFPB also said the companies "lured consumers into costly recurring payments for credit-related products with false promises". Credit bureaus had the most complaints of all companies filed with the CFPB by consumers in 2018, with 34% of all complaints directed at TransUnion, Equifax, and Experian that year.
In June 2017, a California jury ruled against TransUnion with a $60 million verdict in the largest Fair Credit Reporting Act (FCRA) verdict in history.
The San Francisco federal court jury awarded $60 million in damages to consumers who were falsely reported on a government list of terrorists and other security threats. The plaintiffs' team of attorneys at Francis & Mailman, P.C. partnered with another California-based firm in the class action.
Following up on this, in April 2022, the Consumer Financial Protection Bureau (CFPB) said TransUnion was "incapable of operating its businesses lawfully".
Security issues
On 13 October 2017, the website for TransUnion's Central American division was reported to have been redirecting visitors to websites that attempted drive-by downloads of malware disguised as Adobe Flash updates. The attack had been performed by hijacking third-party analytics JavaScript from Digital River brand FireClick.
On 17 March 2022, TransUnion South Africa disclosed that hackers had breached one of its servers and allegedly stolen the data of 54 million customers, demanding a ransom not to release it; the group N4ughtysecTU claimed responsibility.
See also
Equifax
Experian
TransUnion Canada
TransUnion CIBIL
References
External links
Financial services companies of the United States
Companies listed on the New York Stock Exchange
Companies based in Chicago
American companies established in 1968
Financial services companies established in 1968
2015 initial public offerings
Credit scoring
Data collection
1968 establishments in Illinois
Data companies
Data brokers | TransUnion | Technology | 1,607 |
301,647 | https://en.wikipedia.org/wiki/Barrier%20island | Barrier islands are a coastal landform, a type of dune system and sand island, where an area of sand has been formed by wave and tidal action parallel to the mainland coast. They usually occur in chains, consisting of anything from a few islands to more than a dozen. They are subject to change during storms and other action, but absorb energy and protect the coastlines and create areas of protected waters where wetlands may flourish. A barrier chain may extend for hundreds of kilometers, with islands periodically separated by tidal inlets. The largest barrier island in the world is Padre Island of Texas, United States. Sometimes an important inlet may close permanently, transforming an island into a peninsula, thus creating a barrier peninsula, which often includes a barrier beach.
Though many are long and narrow, the length and width of barriers and overall morphology of barrier coasts are related to parameters including tidal range, wave energy, sediment supply, sea-level trends, and basement controls. The amount of vegetation on the barrier has a large impact on the height and evolution of the island.
Chains of barrier islands can be found along approximately 13-15% of the world's coastlines. They display different settings, suggesting that they can form and be maintained in a variety of environments. Numerous theories have been given to explain their formation.
A human-made offshore structure constructed parallel to the shore is called a breakwater. In terms of coastal morphodynamics, it acts similarly to a naturally occurring barrier island by dissipating and reducing the energy of the waves and currents striking the coast. Hence, it is an important aspect of coastal engineering.
Constituent parts
Upper shoreface
The shoreface is the part of the barrier where the ocean meets the shore of the island. The barrier island body itself separates the shoreface from the backshore and lagoon/tidal flat area. Characteristics common to the upper shoreface are fine sands with mud and possibly silt. Further out into the ocean the sediment becomes finer. The effect of waves at this point is weak because of the depth. Bioturbation is common and many fossils can be found in upper shoreface deposits in the geologic record.
Middle shoreface
The middle shoreface lies between the upper and lower shoreface. The middle shoreface is strongly influenced by wave action because of its depth. Closer to shore the sand is medium-grained, with shell pieces common. Since wave action is heavier, bioturbation is not likely.
Lower shoreface
The lower shoreface is constantly affected by wave action. This results in development of herringbone sedimentary structures because of the constant differing flow of waves. The sand is coarser.
Foreshore
The foreshore is the area on land between high and low tide. Like the upper shoreface, it is constantly affected by wave action. Cross-bedding and lamination are present and coarser sands are present because of the high energy present by the crashing of the waves. The sand is also very well sorted.
Backshore
The backshore is always above the highest water level point. The berm is also found here which marks the boundary between the foreshore and backshore. Wind is the important factor here, not water. During strong storms high waves and wind can deliver and erode sediment from the backshore.
Dunes
Coastal dunes, created by wind, are typical of a barrier island. They are located at the top of the backshore. The dunes will display characteristics of typical aeolian wind-blown dunes. The difference is that dunes on a barrier island typically contain coastal vegetation roots and marine bioturbation.
Lagoon and tidal flats
The lagoon and tidal flat area is located behind the dune and backshore area. Here the water is still, which allows fine silts, sands, and mud to settle out. Lagoons can become host to an anaerobic environment. This will allow high amounts of organic-rich mud to form. Vegetation is also common.
Location
Barrier islands can be observed on every continent on Earth except Antarctica. They occur primarily in areas that are tectonically stable, such as "trailing edge coasts" facing (moving away from) ocean ridges formed by divergent boundaries of tectonic plates, and around smaller marine basins such as the Mediterranean Sea and the Gulf of Mexico. Areas with relatively small tides and an ample sand supply favor barrier island formation.
Australia
Moreton Bay, on the east coast of Australia and directly east of Brisbane, is sheltered from the Pacific Ocean by a chain of very large barrier islands. Running north to south they are Bribie Island, Moreton Island, North Stradbroke Island and South Stradbroke Island (the last two used to be a single island until a storm created a channel between them in 1896). North Stradbroke Island is the second largest sand island in the world and Moreton Island is the third largest.
Fraser Island, another barrier island lying 200 km north of Moreton Bay on the same coastline, is the largest sand island in the world.
United States
Barrier islands are found most prominently on the United States' East and Gulf Coasts, where every state, from Maine to Florida (East Coast) and from Florida to Texas (Gulf Coast), features at least part of a barrier island. Many have large numbers of barrier islands; Florida, for instance, had 29 (in 1997) along the west (Gulf) coast of the Florida peninsula alone, plus about 20 others on the east coast and several barrier islands and spits along the panhandle coast. Padre Island, in Texas, is the world's longest barrier island; other well-known islands on the Gulf Coast include Galveston Island in Texas and Sanibel and Captiva Islands in Florida. Those on the East Coast include Miami Beach and Palm Beach in Florida; Hatteras Island in North Carolina; Assateague Island in Virginia and Maryland; Absecon Island in New Jersey, where Atlantic City is located; and Jones Beach Island and Fire Island, both off Long Island in New York. No barrier islands are found on the Pacific Coast of the United States due to the rocky shore and short continental shelf, but barrier peninsulas can be found. Barrier islands can also be seen on Alaska's Arctic coast.
Canada
Barrier islands can also be found in Maritime Canada and other places along the coast. A good example is found at Miramichi Bay, New Brunswick, where Portage Island, Fox Island, and Hay Island protect the inner bay from storms in the Gulf of Saint Lawrence.
Mexico
Mexico's Gulf of Mexico coast has numerous barrier islands and barrier peninsulas.
New Zealand
Barrier islands are more prevalent in the north of both of New Zealand's main islands. Notable barrier islands in New Zealand include Matakana Island, which guards the entrance to Tauranga Harbour, and Rabbit Island, at the southern end of Tasman Bay. See also Nelson Harbour's Boulder Bank, below.
India
Vypin Island, off the southwest coast of India in Kerala, is 27 km long. It is also one of the most densely populated islands in the world.
Indonesia
The Indonesian Barrier Islands lie off the western coast of Sumatra. From north to south along this coast they include Simeulue, the Banyak Islands (chiefly Tuangku and Bangkaru), Nias, the Batu Islands (notably Pini, Tanahmasa and Tanahbala), the Mentawai Islands (mainly Siberut, Sipura, North Pagai and South Pagai Islands) and Enggano Island.
Europe
Barrier islands can be observed in the Baltic Sea from Poland to Lithuania as well as distinctly in the Wadden Islands, which stretch from the Netherlands to Denmark. Lido di Venezia and Pellestrina are notable barrier islands of the Lagoon of Venice which have for centuries protected the city of Venice in Italy. Chesil Beach on the south coast of England developed as a barrier beach. Barrier beaches are also found in the north of the Azov and Black seas.
Processes
Migration and overwash
Water levels may be higher than the island during storm events. This situation can lead to overwash, which brings sand from the front of the island to the top and/or landward side of the island. This process leads to the evolution and migration of the barrier island.
Critical width concept
Barrier islands tend to maintain a certain equilibrium width. The term "critical width concept" has been discussed with reference to barrier islands, overwash, and washover deposits since the 1970s. The concept states that overwash processes are effective in migrating a barrier only where the barrier width is less than a critical value. An island does not narrow below this value because overwash then transports sediment over the barrier island efficiently, keeping pace with the rate of ocean shoreline recession. Sections of the island with greater widths experience washover deposits that do not reach the bayshore, and the island narrows by ocean shoreline recession until it reaches the critical width. The only process that widens the barrier beyond the critical width is breaching, formation of a partially subaerial flood shoal, and subsequent inlet closure.
Critical barrier width can be defined as the smallest cross-shore dimension that minimizes net loss of sediment from the barrier island over the defined project lifetime. The magnitude of critical width is related to sources and sinks of sand in the system, such as the volume stored in the dunes and the net long-shore and cross-shore sand transport, as well as the island elevation. The concept of critical width is important for large-scale barrier island restoration, in which islands are reconstructed to optimum height, width, and length for providing protection for estuaries, bays, marshes and mainland beaches.
Formation theories
Scientists have proposed numerous explanations for the formation of barrier islands for more than 150 years. There are three major theories: offshore bar, spit accretion, and submergence. No single theory can explain the development of all barriers, which are distributed extensively along the world's coastlines. Scientists accept the idea that barrier islands, including other barrier types, can form by a number of different mechanisms.
There appears to be some general requirements for formation. Barrier island systems develop most easily on wave-dominated coasts with a small to moderate tidal range. Coasts are classified into three groups based on tidal range: microtidal, 0–2 meter tidal range; mesotidal, 2–4 meter tidal range; and macrotidal, >4 meter tidal range. Barrier islands tend to form primarily along microtidal coasts, where they tend to be well developed and nearly continuous. They are less frequently formed in mesotidal coasts, where they are typically short with tidal inlets common. Barrier islands are very rare along macrotidal coasts. Along with a small tidal range and a wave-dominated coast, there must be a relatively low gradient shelf. Otherwise, sand accumulation into a sandbar would not occur and instead would be dispersed throughout the shore. An ample sediment supply is also a requirement for barrier island formation. This often includes fluvial deposits and glacial deposits. The last major requirement for barrier island formation is a stable sea level. It is especially important for sea level to remain relatively unchanged during barrier island formation and growth. If sea level changes are too drastic, time will be insufficient for wave action to accumulate sand into a dune, which will eventually become a barrier island through aggradation. The formation of barrier islands requires a constant sea level so that waves can concentrate the sand into one location.
Offshore bar theory
In 1845 the Frenchman Elie de Beaumont published an account of barrier formation. He believed that waves moving into shallow water churned up sand, which was deposited in the form of a submarine bar when the waves broke and lost much of their energy. As the bars developed vertically, they gradually rose above sea level, forming barrier islands.
Several barrier islands have been observed forming by this process along the Gulf coast of the Florida peninsula, including: the North and South Anclote Bars associated with Anclote Key, Three Rooker Island, Shell Key, and South Bunces Key.
Spit accretion theory
American geologist Grove Karl Gilbert first argued in 1885 that the barrier sediments came from longshore sources. He proposed that sediment moving in the breaker zone through agitation by waves in longshore drift would construct spits extending from headlands parallel to the coast. The subsequent breaching of spits by storm waves would form barrier islands.
Submergence theory
William John McGee reasoned in 1890 that the East and Gulf coasts of the United States were undergoing submergence, as evidenced by the many drowned river valleys that occur along these coasts, including Raritan, Delaware and Chesapeake bays. He believed that during submergence, coastal ridges were separated from the mainland, and lagoons formed behind the ridges. He used the Mississippi–Alabama barrier islands (consisting of Cat, Ship, Horn, Petit Bois and Dauphin Islands) as an example where coastal submergence formed barrier islands. His interpretation was later shown to be incorrect when the ages of the coastal stratigraphy and sediment were more accurately determined.
Along the coast of Louisiana, former lobes of the Mississippi River delta have been reworked by wave action, forming beach ridge complexes. Prolonged sinking of the marshes behind the barriers has converted these former vegetated wetlands to open-water areas. In a period of 125 years, from 1853 to 1978, two small semi-protected bays behind the barrier developed as the large water body of Lake Pelto, leading to Isles Dernieres's detachment from the mainland.
Boulder Bank
An unusual natural structure in New Zealand may give clues to the formation processes of barrier islands. The Boulder Bank, at the entrance to Nelson Haven at the northern end of the South Island, is a unique 13 km-long stretch of rocky substrate a few metres in width. It is not strictly a barrier island, as it is linked to the mainland at one end. The Boulder Bank is composed of granodiorite from Mackay Bluff, which lies close to the point where the bank joins the mainland. It is still debated what process or processes have resulted in this odd structure, though longshore drift is the most accepted hypothesis. Studies have been conducted since 1892 to determine the speed of boulder movement. Rates of the top-course gravel movement have been estimated at 7.5 metres a year.
Types
Richard Davis distinguishes two types of barrier islands, wave-dominated and mixed-energy.
Wave-dominated
Wave-dominated barrier islands are long, low, and narrow, and usually are bounded by unstable inlets at either end. Longshore currents caused by waves approaching the island at an angle carry sediment along the shore, extending the island. Longshore currents, and the resultant extension, usually run in one direction, but in some circumstances the currents and extensions can occur towards both ends of the island (as occurs on Anclote Key, Three Rooker Bar, and Sand Key, on the Gulf Coast of Florida). Washover fans on the lagoon side of barriers, where storm surges have over-topped the island, are common, especially on younger barrier islands. Wave-dominated barriers are also susceptible to being breached by storms, creating new inlets. Such inlets may close as sediment is carried into them by longshore currents, but may become permanent if the tidal prism (the volume and force of tidal flow) is large enough. Older barrier islands that have accumulated dunes are less subject to washover and the opening of inlets. Wave-dominated islands require an abundant supply of sediment to grow and develop dunes. If a barrier island does not receive enough sediment to grow, repeated washovers from storms will migrate the island towards the mainland.
Mixed-energy
Wave-dominated barrier islands may eventually develop into mixed-energy barrier islands. Mixed-energy barrier islands are molded by both wave energy and tidal flux. The flow of a tidal prism moves sand. Sand accumulates at both the inshore and offshore sides of an inlet, forming a flood delta or shoal on the bay or lagoon side of the inlet (from sand carried in on a flood tide), and an ebb delta or shoal on the open-water side (from sand carried out by an ebb tide). Large tidal prisms tend to produce large ebb shoals, which may rise enough to be exposed at low tide. Ebb shoals refract waves approaching the inlet, locally reversing the longshore current moving sand along the coast. This can modify the ebb shoal into swash bars, which migrate onto the end of the island upcurrent from the inlet, adding to the barrier's width near the inlet (creating a "drumstick" barrier island). This process captures sand that is carried by the longshore current, preventing it from reaching the downcurrent side of the inlet and starving that island.
Many of the Sea Islands in the U.S. state of Georgia are relatively wide compared to their shore-parallel length. Siesta Key, Florida has a characteristic drumstick shape, with a wide portion at the northern end near the mouth of Phillipi Creek.
Ecological importance
Barrier islands are critically important in mitigating ocean swells and other storm events for the water systems on the mainland side of the barrier island, as well as protecting the coastline. This effectively creates a unique environment of relatively low energy, brackish water. Multiple wetland systems such as lagoons, estuaries, and/or marshes can result from such conditions depending on the surroundings. They are typically rich habitats for a variety of flora and fauna. Without barrier islands, these wetlands could not exist; they would be destroyed by daily ocean waves and tides as well as ocean storm events. One of the most prominent examples is the Louisiana barrier islands.
See also
North Frisian Barrier Island
Outer Banks
Virginia Barrier Islands
New York Barrier Islands
Texas barrier islands
Sea Islands
Long Beach Island
Bald Head Island
Notes
References
Sources
External links
Physical oceanography
Coastal geography
Hydrology
Coastal and oceanic landforms
Oceanographical terminology
Islands by type | Barrier island | Physics,Chemistry,Engineering,Environmental_science | 3,650 |
74,510,963 | https://en.wikipedia.org/wiki/Spheroidene | Spheroidene is a carotenoid pigment. It is a component of the photosynthetic reaction center of certain purple bacteria of the Rhodospirillaceae family, including Rhodobacter sphaeroides and Rhodopseudomonas sphaeroides. Like other carotenoids, it is a tetraterpenoid. In purified form, it is a brick-red solid soluble in benzene.
Spheroidene was discovered by microbiologist C. B. van Niel, who named it "pigment Y". It was renamed by Basil Weedon, who in the mid-1960s was the first to prepare it synthetically and to determine its structure.
Function
Spheroidene is bound to the type II photosynthetic reaction center of purple bacteria, and together with the bacteriochlorophyll forms part of the light-harvesting complex. Spheroidene has two major functions in the complex. First, it absorbs visible light in the blue-green part of the visible spectrum (320–500 nm), where bacteriochlorophyll has little absorbance. It then transfers energy to the bacteriochlorophyll via singlet–singlet energy transfer. In this manner the reaction center is able to harness more of the visible light spectrum than would be possible with bacteriochlorophyll alone. Second, spheroidene quenches excited singlet states of bacteriochlorophyll by forming a stable triplet state. This quenching helps to prevent the formation of harmful singlet oxygen. Other functions of spheroidene may include scavenging of singlet oxygen, nonradiative dissipation of excess light energy, and structural stabilization of the photosystem proteins.
Spheroidene is thought to exist as the 15,15'-cis isomer, and not the all-trans isomer commonly shown in the literature, in native photosynthetic reaction centers.
Biosynthesis
The proteins involved in spheroidene biosynthesis are encoded by a gene cluster. Geranylgeranyl pyrophosphate (GGPP) is the precursor to spheroidene and the other carotenoids; two molecules of GGPP condense to form the symmetric tetraterpene phytoene. This molecule then undergoes three desaturations to form neurosporene, which is then hydroxylated, desaturated again, and methoxylated to produce spheroidene. In some species, spheroidene is further oxygenated to produce the ketone spheroidenone.
See also
Photosynthesis
Förster resonance energy transfer
Antioxidant
References
Carotenoids
Photosynthetic pigments
Methoxy compounds | Spheroidene | Chemistry,Biology | 595 |
34,408,890 | https://en.wikipedia.org/wiki/WeChat | WeChat, or Weixin in Chinese, is a Chinese instant messaging, social media, and mobile payment app developed by Tencent. First released in 2011, it became the world's largest standalone mobile app in 2018, with over 1 billion monthly active users. WeChat has been described as China's "app for everything" and a super-app because of its wide range of functions. WeChat provides text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video conferencing, video games, mobile payment, sharing of photographs and videos, and location sharing.
Accounts registered using Chinese phone numbers are managed under the Weixin brand, and their data is stored in mainland China and subject to Weixin's terms of service and privacy policy, which forbids content which "endanger[s] national security, divulge[s] state secrets, subvert[s] state power and undermine[s] national unity". Non-Chinese numbers are registered under WeChat, and WeChat users are subject to a different, less strict terms of service and stricter privacy policy, and their data is stored in the Netherlands for users in the European Union, and in Singapore for other users. User activity on Weixin, the Chinese version of the app, is analyzed, tracked and shared with Chinese authorities upon request as part of the mass surveillance network in China. Chinese-registered Weixin accounts censor politically sensitive topics. Any interactions between Weixin and WeChat users are subject to the terms of service and privacy policies of both services.
History
By 2010, Tencent had already attained a massive user base with their desktop messenger app QQ. Recognizing smart phones were likely to disrupt this status quo, CEO Pony Ma sought to proactively invest in alternatives to their own QQ messenger app.
WeChat began as a project at Tencent's Guangzhou Research and Project center in October 2010. The original version of the app was created by Allen Zhang, named "Weixin" by Pony Ma, and launched in 2011. User adoption of WeChat was initially very slow, with users wondering why key features were missing; however, after the release of a walkie-talkie-like voice messaging feature in May of that year, growth surged. By 2012, when the number of users reached 100 million, Weixin was re-branded "WeChat" by President Martin Lau for the international market.
During a period of government support of e-commerce development—for example in the 12th five-year plan (2011–2015)—WeChat also saw new features enabling payments and commerce in 2013, which saw massive adoption after their virtual Red envelope promotion for Chinese New Year 2014.
WeChat had over 889 million monthly active users by 2016, and as of 2019 WeChat's monthly active users had risen to an estimate of one billion. As of January 2022, it was reported that WeChat has more than 1.2 billion users. After the launch of WeChat payment in 2013, its users reached 400 million the next year, 90 percent of whom were in China. By comparison, Facebook Messenger and WhatsApp had about one billion monthly active users in 2016 but did not offer most of the other services available on WeChat. For example, in Q2 2017, WeChat's revenues from social media advertising were about US$0.9 billion (RMB6 billion) compared with Facebook's total revenues of US$9.3 billion, 98% of which were from social media advertising. WeChat's revenues from its value-added services were US$5.5 billion.
By 2018, WeChat had been used by 93.5% of Chinese internet users.
In response to a border dispute between India and China, WeChat was banned in India in June 2020 along with several other Chinese apps, including TikTok. U.S. president Donald Trump sought to ban U.S. "transactions" with WeChat through an executive order but was blocked by a preliminary injunction issued in the United States District Court for the Northern District of California in September 2020. Joe Biden officially dropped Trump's efforts to ban WeChat in the U.S. in June 2021.
Features
Messaging
WeChat provides a variety of features, including text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video calls and conferencing, video games, photograph and video sharing, and location sharing. WeChat also allows users to exchange contacts with people nearby via Bluetooth, and provides various features for contacting people at random if desired (if people are open to it). It can also integrate with other social networking services such as Facebook and Tencent QQ. Photographs may be embellished with filters and captions, and an automatic translation service is available that can translate conversations during messaging.
WeChat supports different instant messaging methods, including text messages, voice messages, walkie talkie, and stickers. Users can send previously saved or live pictures and videos, profiles of other users, coupons, lucky money packages, or current GPS locations with friends either individually or in a group chat.
WeChat also provides a message recall feature that allows users to withdraw messages and files (e.g. images, documents) within 2 minutes of sending them. To use this feature, users long-press the message or file to be recalled, select "Recall" in the menu that appears, and confirm with "OK". The selected messages or files are then removed from the chat on both the sender's and the recipient's phones.
WeChat also provides a voice-to-text feature, which is convenient when it is not practical to listen to voice messages, as well as a basic ability to recognize emotions from different tones of voice; the privacy implications of such analysis are also thought-provoking.
A distance-sensing feature is implemented in WeChat: when the phone is brought close to the ear, voice-message playback switches to the earpiece, and when the receiver holds the phone at a certain distance from the ear, the sensor automatically disables the phone's speaker. This feature eliminates the risk of the user's voice messages being inadvertently broadcast to the general public.
Public accounts
WeChat users can register as a public account, which enables them to push feeds to subscribers, interact with subscribers, and provide subscribers with services. Users can also create an official account, which falls under one of three types: service, subscription, or enterprise accounts. Once users as individuals or organizations set up a type of account, they cannot change it to another type. By the end of 2014, the number of WeChat official accounts had reached 8 million. Official accounts of organizations can apply to be verified (at a cost of 300 RMB, or about US$45). Official accounts can be used as a platform for services such as hospital pre-registration or credit card services. To create an official account, the applicant must register with Chinese authorities, which discourages "foreign companies". In April 2022, WeChat announced that it would start displaying the location of users in China every time they post on a public account; overseas users posting on public accounts instead have a country displayed based on their IP address.
Moments
"Moments" () is WeChat's brand name for its social feed of friends' updates. "Moments" is an interactive platform that allows users to post images, text, and short videos taken by users. It also allows users to share articles and music (associated with QQ Music or other web-based music services). Friends in the contact list can like the content and leave comments, functioning similarly to a private social network.
In 2017 WeChat had a policy of a maximum of two advertisements per day per Moments user.
Privacy in WeChat works by groups of friends: only the friends from the user's contact are able to view their Moments' contents and comments. The friends of the user will only be able to see the likes and comments from other users only if they are in a mutual friend group. For example, friends from high school are not able to see the comments and likes from friends from a university. When users post their moments, they can separate their friends into a few groups, and they can decide whether this Moment can be seen by particular groups of people. Contents posted can be set to "Private", and then only the user can view it. Unlike Weibo or Instagram, these are only shared to the user's friends. These are unlikely to go viral.
Recently, WeChat launched a feature that lets users pin posts to the top of their own Moments. However long a post is set to remain visible, a pinned post can be seen the whole time. This makes it easy to mark and find important posts while ensuring overall privacy.
Weixin Pay digital payment services
Users who have provided bank account information may use the app to pay bills, order goods and services, transfer money to other users, and pay in stores if the stores have a Weixin payment option. Vetted third parties, known as "official accounts", offer these services by developing lightweight "apps within the app". Users can link their Chinese bank accounts, as well as Visa, MasterCard and JCB.
WeChat Pay, officially referred to as Weixin Pay in China, is a digital wallet service incorporated into Weixin, which allows users to perform mobile payments and send money between contacts.
Although users receive immediate notification of the transaction, the Weixin Pay system is not an instant payment instrument, because the funds transfer between counterparts is not immediate. The settlement time depends on the payment method chosen by the customer.
All Weixin users have their own Weixin Pay accounts. Users can acquire a balance by linking their Weixin account to their debit cards, or by receiving money from other users. For non-Chinese users of Weixin Pay, an additional identity verification process of providing a photo of a valid ID is required before certain functions of Weixin Pay become available. Users who link their credit card can only make payments to vendors, and cannot use this to top up WeChat balances. Weixin Pay can be used for digital payments, as well as payments from participating vendors. As of March 2016, Weixin Pay had over 300 million users.
Weixin Pay's main competitor in China and the market leader in online payments is Alibaba Group's Alipay. Alibaba company founder Jack Ma considered Weixin's red envelope feature to be a "Pearl Harbor moment", as it began to erode Alipay's historic dominance in the online payments industry in China, especially in peer-to-peer money transfer. The success prompted Alibaba to launch its own version of virtual red envelopes in its competing Laiwang service. Other competitors, Baidu Wallet and Sina Weibo, also launched similar features.
In 2019 it was reported that Weixin had overtaken Alibaba with 800 million active Weixin mobile payment users versus 520 million for Alibaba's Alipay. However Alibaba had a 54 per cent share of the Chinese mobile online payments market in 2017 compared to Weixin's 37 per cent share. In the same year, Tencent introduced "WeChat Pay HK", a payment service for users in Hong Kong. Transactions are carried out with the Hong Kong dollar. In 2019 it was reported that Chinese users can use WeChat Pay in 25 countries outside China, including, Italy, South Africa and the UK.
Enterprise WeChat
For work purposes, companies, and business communication, a special version of WeChat called WeCom (formerly known as Enterprise WeChat, or Qiye Weixin, and as WeChat Work before November 2020) was launched in 2016. The app was meant to help employees separate work from private life. In addition to the usual chat features, the program let companies and their employees keep track of annual leave days and expenses that needed to be reimbursed; employees could ask for time off or clock in to show they were at work.
WeChat Mini Program
In 2017, WeChat launched a feature called "Mini Programs". A mini program is an app within an app. Business owners can create mini apps in the WeChat system, implemented using proprietary versions of CSS, JavaScript, and templated XML with proprietary APIs. Users may install these inside the WeChat app. In January 2018, WeChat announced a record of 580,000 mini programs. With one mini program, consumers could scan a Quick Response code using their mobile phone at a supermarket counter and pay the bill through the user's WeChat mobile wallet. WeChat games have received huge popularity, with the "Jump Jump" game attracting 400 million players in less than 3 days and attaining 100 million daily active users within two weeks of its launch, as of January 2018. Since the launch of Mini Programs, their daily active user count has increased dramatically: there were only 160 million daily active users in 2017, but the number reached 450 million in 2021.
WeChat Channels
In 2020, WeChat Channels was launched: a short-video platform within WeChat that allows users to create and share short video clips and photos on their own WeChat Channel. Users can also discover content posted to other Channels via the built-in feed. Each post can include hashtags, a location tag, a short description, and a link to a WeChat official account article. In September 2021, it was reported that WeChat Channels had begun allowing users to upload hour-long videos, twice the duration limit previously imposed on all Channels videos. Comparisons are often drawn between WeChat Channels and TikTok (or Douyin) for their similarity in features.
In January 2022, there were reports that WeChat is set to diversify further and place more emphasis on new products and services like WeChat Channels, amid new regulatory restrictions imposed in China.
By June 2021, WeChat Channels had accumulated over 200 million users, and WeChat Channels reached 500 million DAU (daily active users), growing at 79% year-on-year. More than 27 million people used the platform to watch Irish boy band Westlife's online concert in 2021, and 15 million users also viewed the Shenzhou 12 spaceflight launch using the app service.
Easy Mode
In September 2021, WeChat introduced a brand-new feature on its platform called Easy Mode. Mainly designed for elderly people, it improves readability by providing a larger font size, sharper colours, and bigger buttons. Another feature provided in this update was the ability to listen to text messages. Easy Mode was released in version 8.0.14 for both iOS and Android.
Guardian Mode
Guardian Mode is a function in WeChat for protecting users under 14 years old, introduced to promote safety and provide a secure environment for WeChat users. With Guardian Mode enabled, the "People Nearby", "Games", and "Search" functions are not accessible in the interface, and Channels, WeChat's short-video feature, shows only content suitable for adolescents. Additionally, WeChat users who turn on Guardian Mode can add friends only through QR codes and group chats. Moreover, under Guardian Mode's privacy settings, they can view only the 10 latest Moments posts of friends and cannot view the 10 latest Moments posts of non-friend users.
Others
In January 2016, Tencent launched WeChat Out, a VOIP service allowing users to call mobile phones and landlines around the world. The feature allowed purchasing credit within the app using a credit card. WeChat Out was originally only available in the United States, India, and Hong Kong, but later coverage was expanded to Thailand, Macau, Laos, and Italy.
In March 2017, Tencent released WeChat Index. By inserting a search term in the WeChat Index page, users could check the popularity of this term in the past 7, 30, or 90 days. The data was mined from data in official WeChat accounts and metrics such as social sharing, likes and reads were used in the evaluation.
In May 2017, Tencent started news feed and search functions for its WeChat app. The Financial Times reported this was a "direct challenge to Chinese search engine Baidu".
In 2017, WeChat was reported to be developing an augmented reality (AR) platform as part of its service offering. Its artificial intelligence team was working on a 3D rendering engine to create a realistic appearance of detailed objects in smartphone-based AR apps. They were also developing a simultaneous localization and mapping technology, which would help calculate the position of virtual objects relative to their environment, enabling AR interactions without the need for markers, such as Quick Response codes or special images.
Chinese courts allow the parties to communicate with the courts via WeChat, through which parties can file lawsuits, participate in proceedings, present evidence, and listen to verdicts. As of December 2019, more than 3 million parties had used WeChat for litigation.
In spring 2020, WeChat began allowing users to change their WeChat ID once per year. Prior to this, a WeChat ID could not be changed more than once.
On 17 June 2020, WeChat released a new add-on called "WeChat Nudge". The feature was first introduced in MSN Messenger 7.0 in 2005; in Yahoo! Messenger it was called Buzz, and it was interoperable with MSN Messenger's Nudge. As in Messenger and Yahoo, users can access WeChat Nudge by double-clicking another user's profile in a chat. This virtually shakes the user's profile photo and sends a vibration notification. Both users must have the latest WeChat update; a user without the latest update cannot nudge another user but can still receive nudges. A user can only nudge another user with whom they have previous conversations, so newly added friends without prior messages cannot nudge each other.
On January 16, 2022, a new version of WeChat added seven major functions for users on iOS (version 8.0.17), Android (version 8.0.18), or newer. With the Personal Information Authority function, users can check the number of times their personal information has been accessed in the past year through the personal information collection list, covering profile photo, name, mobile number, gender, region, personalized signature, and address.
On March 30, 2022, in accordance with relevant Chinese laws and regulations and to prevent the risk of speculative hype in virtual currency transactions, the WeChat public platform moved to regulate official accounts and mini programs engaged in the secondary sale of digital collectibles.
WeChat Business
WeChat Business is one of the latest mobile social-network business models after e-commerce, utilizing business relationships and friendships to maintain customer relationships. Compared with traditional e-businesses such as JD.com and Alibaba, WeChat Business offers a wide range of influence and profit with less input and a lower entry threshold, which attracts many people to join WeChat Business.
Marketing modes
B2C Mode
This is the main profit mode of WeChat Business. The first approach is to launch advertisements and provide services through a WeChat Official Account, which is a B2C mode. This mode has been used by many hospitals, banks, fashion brands, internet companies, and personal blogs, because an Official Account can access online payment, location sharing, voice messages, and mini-games. It is like a "mini app", so the company has to hire dedicated staff to manage the account. By 2015, there were more than 100 million WeChat Official Accounts on the platform.
C2C Mode
The WeChat salesperson mode consists of individuals promoting products, which belongs to the C2C mode. In this mode, individual sellers post photos and messages about the products they represent on WeChat Moments or in WeChat groups and sell products to their WeChat friends. They also develop friendships with their customers by sending messages during festivals or commenting under updates on WeChat Moments to increase trust. Continuing to communicate with regular customers also raises word-of-mouth (WOM) communication, which influences decision-making. Some WeChat businesspeople already have an online shop on Taobao but use WeChat to maintain existing customers.
Existing problems
As more and more people have joined WeChat Business, it has brought many problems. For example, some sellers have begun to sell counterfeit luxury goods such as bags, clothes, and watches. Some sellers have disguised themselves as international flight attendants or overseas students, posting fake stylish photos on WeChat Moments and claiming to provide overseas purchasing services while selling counterfeit luxury goods at the same price as the authentic ones. Other popular products sold on WeChat are facial masks. The marketing model is like that of Amway, but most goods are unbranded products from illegal factories containing excess hormones, which could seriously affect customers' health. However, it is difficult for customers to defend their rights because many sellers' identities are unverified. Additionally, the lack of any supervision mechanism in WeChat Business provides opportunities for criminals to continue this illegal behavior. In early 2022, WeChat suspended more than a dozen NFT (non-fungible token) public accounts to curb crypto speculation and scalping. The crackdown on NFT-related content concerns domestic digital collectibles, which cannot be resold for profit.
Marketing
Campaigns
In a 2016 campaign, users could upload a paid photo on "Moments" and other users could pay to see the photo and comment on it. The photos were taken down each night.
Collaborations
In 2014, Burberry partnered with WeChat to create its own WeChat apps around its fall 2014 runway show, giving users live streams from the shows. Another brand, Michael Kors used WeChat to give live updates from their runway show, and later to run a photo contest "Chic Together WeChat campaign".
In 2016, L'Oréal China cooperated with Papi Jiang to promote their products. Over one million people watched her first video promoting L'Oreal's beauty brand MG.
In 2016, WeChat partnered with 60 Italian companies (WeChat had an office in Milan), which were able to sell their products and services on the Chinese market without having to obtain a license to operate a business in China. In 2017, Andrea Ghizzoni, European director of Tencent, said that 95 percent of global luxury brands used WeChat.
In 2020 Burberry and WeChat collaborated to design a shop in Shenzhen where Burberry has a flagship store, as well as an app allowing shoppers to interact with the shop digitally.
Platforms
WeChat's mobile phone app is available only for Android, HarmonyOS, and iOS. BlackBerry, Windows Phone, and Symbian phones were supported previously; however, as of 22 September 2017, WeChat no longer worked on Windows Phone, and the company ceased development of the app for Windows Phone before the end of 2017. Although web-based OS X and Windows clients exist, these require the user to have the app installed on a supported mobile phone for authentication, and neither message roaming nor "Moments" are provided. Thus, without the app on a supported phone, it is not possible to use the web-based WeChat clients on a computer.
The company also provides WeChat for Web, a web-based client with messaging and file transfer capabilities. Other functions cannot be used on it, such as the detection of nearby people, or interacting with Moments or Official Accounts. To use the Web-based client, it is necessary to first scan a QR code using the phone app. This means it is not possible to access the WeChat network if a user does not possess a suitable smartphone with the app installed.
WeChat could be accessed on Windows using BlueStacks until December 2014. After that, WeChat blocked Android emulators and accounts that have signed in from emulators may be frozen.
There have been some reported issues with the Web client. Specifically, when typing in English, some users have experienced autocorrect, autocomplete, auto-capitalization, and auto-delete behavior as they type messages, and even after the message was sent. For example, "gonna" was autocorrected to "go", the E's were auto-deleted in "need", "wechat" was auto-capitalized to "Wechat" but not "WeChat", and after the message was sent, "don't" was auto-corrected to "do not". However, the auto-corrected words after the message was sent appeared on the phone app as the user had originally typed them ("don't" was seen on the phone app, whereas "do not" was seen on the Web client). Users could translate a foreign language during a conversation, and the words were posted on Moments.
WeChat allows group video calls.
Controversies
State surveillance and intelligence gathering
Weixin, the Chinese version of WeChat, operates from China under Chinese law, which includes strong censorship provisions and interception protocols. Its parent company is obliged to share data with the Chinese government under the China Internet Security Law and National Intelligence Law. Weixin can access and expose the text messages, contact books, and location histories of its users. Due to Weixin's popularity, the Chinese government uses Weixin as a data source to conduct mass surveillance in China.
Some states and regions, such as India, Australia, the United States, and Taiwan, fear that the app poses a threat to national or regional security for various reasons. In June 2013, the Indian Intelligence Bureau flagged WeChat over security concerns. India has debated whether it should ban WeChat because of the possibility that too much personal information and data could be collected from its users. In Taiwan, legislators were concerned that the potential exposure of private communications was a threat to regional security.
In 2016, Tencent was awarded a score of zero out of 100 in an Amnesty International report ranking technology companies on the way they implement encryption to protect the human rights of their users. The report placed Tencent last out of a total of 11 companies, including Facebook, Apple, and Google, for the lack of privacy protections built into Weixin and QQ. The report found that Tencent did not make use of end-to-end encryption, which is a system that allows only the communicating users to read the messages. It also found that Tencent did not recognize online threats to human rights, did not disclose government requests for data, and did not publish specific data about its use of encryption.
A September 2017 update to the platform's privacy policy detailed that log data collected by Weixin included search terms, profiles visited, and content that had been viewed within the app. Additionally, metadata related to the communications between Weixin users—including call times, duration, and location information—was also collected. This information, which was used by Tencent for targeted advertising and marketing purposes, might be disclosed to representatives of the Chinese government:
To comply with an applicable law or regulations.
To comply with a court order, subpoena, or other legal process.
In response to a request by a government authority, law enforcement agency, or similar body.
In May 2020, Citizen Lab published a study which claimed that WeChat monitors foreign chats to hone its censorship algorithms.
On August 14, 2020, Radio Free Asia reported that in 2019, Gao Zhigang, a citizen of Taiyuan, Shanxi Province, China, had used Weixin to forward a video to his friend Geng Guanjun in the United States. Gao was later convicted on the charge of picking quarrels and provoking trouble, and sentenced to ten months' imprisonment. The court documents show that China's network management and propaganda departments directly monitor Weixin users, and that Chinese police used big-data facial recognition technology to identify Geng Guanjun as an overseas democracy activist.
In September 2020, Chevron Corporation mandated that its employees delete WeChat from company-issued phones.
Privacy issues
Users inside and outside of China have also expressed concern about the privacy issues of the app. Human rights activist Hu Jia, who was jailed for three years for sedition, speculated that officials of the Internal Security Bureau of the Ministry of Public Security had listened to the voicemail messages he sent to his friends, as they repeated back to him the words contained in those messages. Chinese authorities have further accused the Weixin app of threatening individual safety. China Central Television (CCTV), a state-run broadcaster, featured a piece in which Weixin was described as an app that helped criminals due to its location-reporting features. CCTV illustrated such accusations by reporting the murder of a single woman by a man she had met on Weixin, who killed her after attempting to rob her; according to reports, the location-reporting feature was how the man knew the victim's whereabouts. Authorities within China have linked Weixin to numerous crimes. The city of Hangzhou, for example, reported over twenty crimes related to Weixin in the span of three months.
XcodeGhost malware
In 2015, Apple published a list of the top 25 most popular apps infected with the XcodeGhost malware, confirming earlier reports that version 6.2.5 of WeChat for iOS was infected with it. The malware originated in a counterfeit version of Xcode (dubbed "XcodeGhost"), Apple's software development tools, and made its way into the compiled app through a modified framework. Despite Apple's review process, WeChat and other infected apps were approved and distributed through the App Store. Even though the cybersecurity company Palo Alto Networks claims that the malware was capable of prompting the user for their account credentials, opening URLs and reading the device's clipboard, Apple responded that the malware was not capable of doing "anything malicious" or transmitting any personally identifiable information beyond "apps and general system information" and that it had no information that suggested that this had happened. In 2015 internet security company Malwarebytes considered this to be the largest security breach in the App Store's history.
Ban in India
In June 2020, the Government of India banned WeChat along with 58 other Chinese apps, citing data and privacy issues, in response to a border clash between India and China earlier in the year. India's Ministry of Electronics and Information Technology claimed that the banned Chinese apps were "stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India" and were "hostile to national security and defense of India".
Previous ban in Russia
On 6 May 2017, Russia blocked access to WeChat for failing to give its contact details to the Russian communications watchdog. The ban was swiftly lifted on 11 May 2017 after Tencent provided "relevant information" for registration to Roskomnadzor.
In March 2023, Russia banned government officials from using messaging apps operated by foreign companies, including WeChat.
Ban and injunction against ban in the United States
On August 6, 2020, U.S. President Donald Trump signed an executive order, invoking the International Emergency Economic Powers Act, seeking to ban WeChat in the U.S. in 45 days, due to its connections with the Chinese-owned Tencent. This was signed alongside a similar executive order targeting TikTok and its Chinese-owned ByteDance.
The Department of Commerce issued orders on September 18, 2020, to enact the ban on WeChat and TikTok by the end of September 20, 2020, citing national security and data privacy concerns. The measures ban the transferring of funds or processing through WeChat in the U.S. and ban any company from offering hosting, content delivery networks or internet transit to WeChat.
Magistrate Judge Laurel Beeler of the United States District Court for the Northern District of California issued a preliminary injunction blocking the Department of Commerce order on both TikTok and WeChat on September 20, 2020, based on respective lawsuits filed by TikTok and the US WeChat Users Alliance, citing the merits of the plaintiffs' First Amendment claims. The Justice Department had previously asked Beeler not to block the order to ban the apps, saying it would undermine the president's ability to deal with threats to national security. In her ruling, Beeler said that while the government had established that Chinese government activities raised significant national security concerns, it showed little evidence that the WeChat ban would address those concerns.
On June 9, 2021, U.S. President Joe Biden signed an executive order revoking the ban on WeChat and TikTok. Instead, he directed the commerce secretary to investigate foreign influence enacted through the apps.
Montana has banned the installation of WeChat on government devices since June 1, 2023.
Ban in Canada
In October 2023, Canada banned WeChat on all government devices.
Notorious Markets list
In 2022, the Office of the United States Trade Representative (USTR) added WeChat's ecommerce ecosystem to its list of Notorious Markets for Counterfeiting and Piracy. In January 2025, USTR removed WeChat from its list of notorious markets.
2023 Australian Indigenous Voice referendum
In the lead-up to the 2023 Australian Indigenous Voice referendum, an unsuccessful attempt to enshrine an Indigenous Voice to Parliament in the Constitution, WeChat and other popular Chinese social media platforms were criticised by both Yes and No supporters, and by both Chinese and non-Chinese Australians, for an excessive amount of misleading content about the referendum, as well as numerous posts that allegedly promoted anti-Indigenous racism. Researchers from Monash University in Melbourne found that fewer than one in 10 referendum-related WeChat posts supported the Yes case, most of which were paid advertisements from the official Yes campaign. The study also found that the vast majority of comments on Voice-related WeChat posts were explicitly supportive of the No case.
Chinese Australians are a very large minority group in Australia, and many use WeChat as a social media platform. While the usage of Chinese apps such as WeChat in Australia has long been controversial over their potential links to the Chinese government, WeChat is nevertheless seen as a major social media platform in Australia, directly competing with Western platforms among Chinese speakers. As voting is compulsory for all Australian citizens over the age of 18, social media advertising is crucial for election campaigns in Australia. The sheer volume of No campaign material, some of which contained misinformation that even most No supporters did not agree with, therefore had the potential to sway the votes of Chinese Australians towards the ultimately successful No case.
Censorship
Censorship of global issues and separation into two separate platforms
Starting in 2013, reports arose that Chinese-language searches even outside China were being keyword filtered and then blocked. This occurred on incoming traffic to China from foreign countries but also exclusively between foreign parties (the service had already censored its communications within China). In the international example of blocking, a message was displayed on users' screens: "The message "南方周末" you sent contains restricted words. Please check it again." These are the Chinese characters for a Guangzhou-based paper called Southern Weekly (or, alternatively, Southern Weekend). The next day Tencent released a statement addressing the issue, saying "A small number of WeChat international users were not able to send certain messages due to a technical glitch this Thursday. Immediate actions have been taken to rectify it. We apologize for any inconvenience it has caused to our users. We will continue to improve the product features and technological support to provide a better user experience." WeChat eventually built two different platforms to avoid this problem: one for the Chinese mainland (Weixin) and one for the rest of the world (WeChat). The problem existed because WeChat's servers were all located in China and thus subject to its censorship rules.
Following the overwhelming victory of pro-democracy candidates in the 2019 Hong Kong local elections, Weixin censored messages related to the election and disabled the accounts of posters in other countries, such as the U.S. and Canada. Many of those targeted were of Chinese ancestry.
In 2020, Weixin started censoring messages concerning the COVID-19 pandemic.
In December 2020, Weixin blocked a post by Australian Prime Minister Scott Morrison during a diplomatic spat between Australia and China. In his Weixin post, Morrison had criticized a doctored image posted by a Chinese diplomat and praised the Chinese-Australian community. According to Reuters, the company claimed to have blocked the post because it "violated regulations, including distorting historical events and confusing the public."
Two censorship systems
In 2016, the Citizen Lab published a report saying that WeChat was using different censorship policies in mainland China and other areas. They found that:
Keyword filtering was only enabled for users who registered via phone numbers from mainland China;
Users no longer received notices when messages were blocked;
Filtering was more strict on group chat;
Keywords were not static; some newly observed censored keywords were responses to current news events;
The internal browser in WeChat blocked Chinese accounts from accessing some websites, such as gambling sites, Falun Gong sites, and critical reports on China. International users were not blocked, except from some gambling and pornography websites.
Later, WeChat was split into Weixin (the Chinese version) and WeChat (the international version), as described in the previous section, with only Weixin being subject to censorship. Accounts registered using Chinese phone numbers are now managed under the Weixin brand, and their data is stored in mainland China and subject to Weixin's terms of service and privacy policy, which forbid content that "endanger[s] national security, divulge[s] state secrets, subvert[s] state power and undermine[s] national unity". Non-Chinese numbers are registered under WeChat, and WeChat users are subject to different, less strict terms of service and a stricter privacy policy; their data is stored in the Netherlands for users in the European Union, and in Singapore for other users.
Censorship in Iran
In September 2013, WeChat was blocked in Iran. The Iranian authorities cited WeChat Nearby (Friend Radar) and the spread of pornographic content as the reason of censorship.
The Committee for Determining Instances of Criminal Content (a working group under the supervision of the attorney general) website FAQ says:
Because WeChat collects phone data and monitors member activity and because app developers are outside of the country and not cooperating, this software has been blocked, so you can use domestic applications for cheap voice calls, video calls and messaging.
On 4 January 2018, WeChat was unblocked in Iran.
Crackdown on LGBTQ accounts in China
On July 6, 2021, several Weixin accounts associated with the LGBTQ movement on China's university campuses were blocked and then deleted without warning; official media said they had no knowledge of this. Some of the accounts, a mix of registered student clubs and unofficial grassroots groups, had operated for years as safe spaces for China's LGBTQ youth, with tens of thousands of followers. Many of the closed Weixin accounts displayed messages saying that they had "violated" Internet regulations, without giving further details; account names were deleted and replaced with "unnamed", with a notice claiming that all content had been blocked and the accounts suspended after relevant complaints were received. The U.S. State Department expressed concern that the accounts were deleted when they were merely expressing their views, exercising their right to freedom of expression and freedom of speech. Several groups that had their accounts deleted spoke out against the ban, with one stating "[W]e hope to use this opportunity to start again with a continued focus on gender and society, and to embrace courage and love".
In August 2023, immediately prior to the Qixi Festival, Weixin launched a mass closure of accounts related to LGBT rights and feminism.
Notes
References
External links
Software companies established in 2011
2011 software
Chinese brands
Linux software
Android (operating system) software
BlackBerry software
Communication software
Instant messaging clients
IOS software
HarmonyOS software
Universal Windows Platform apps
Mobile applications
Mobile telecommunication services
Proprietary cross-platform software
Symbian software
Super-apps
Tencent software
WatchOS software
Delisted applications
Internet properties established in 2011
Tencent
Notorious markets
Internet censorship in India | WeChat | Technology | 8,498 |
53,236,762 | https://en.wikipedia.org/wiki/Epitranscriptomic%20sequencing | In epitranscriptomic sequencing, most methods focus on either (1) enrichment and purification of the modified RNA molecules before running them on the RNA sequencer, or (2) improving or modifying bioinformatics analysis pipelines to call the modification peaks. Most methods have been adapted and optimized for mRNA molecules, except for modified bisulfite sequencing for profiling 5-methylcytidine, which was optimized for tRNAs and rRNAs.
There are seven major classes of chemical modifications found in RNA molecules: N6-methyladenosine, 2'-O-methylation, N6,2'-O-dimethyladenosine, 5-methylcytidine, 5-hydroxylmethylcytidine, inosine, and pseudouridine. Various sequencing methods have been developed to profile each type of modification. The scale, resolution, sensitivity, and limitations associated with each method and the corresponding bioinformatics tools used will be discussed.
Methods for profiling N6-methyladenosine
Methylation of adenosine does not affect its ability to base-pair with thymidine or uracil, so N6-methyladenosine (m6A) cannot be detected using standard sequencing or hybridization methods. This modification is marked by the methylation of the adenosine base at the nitrogen-6 position. m6A is abundant in polyA+ mRNA and is also found in tRNA, rRNA, snRNA, and long ncRNA.
m6A-seq and MeRIP-seq
In 2012, the first two methods for m6A sequencing were published, enabling transcriptome-wide profiling of m6A in mammalian cells. These two techniques, called m6A-seq and MeRIP-seq (m6A-specific methylated RNA immunoprecipitation), were also the first methods to allow any type of RNA modification sequencing. These methods detected 10,000 m6A peaks in the mammalian transcriptome; the peaks were found to be enriched in 3'UTR regions, near stop codons, and within long exons.
The two methods were optimized to detect methylation peaks in poly(A)+ mRNA, but the protocol can be adapted to profile any type of RNA. The collected RNA sample is fragmented into ~100-nucleotide-long oligonucleotides using a fragmentation buffer, followed by immunoprecipitation with a purified anti-m6A antibody, then elution and collection of the antibody-bound RNA molecules. The immunoprecipitation procedure in MeRIP-seq produces >130-fold enrichment of m6A-containing sequences. Random-primed cDNA library generation is then performed, followed by adaptor ligation and Illumina sequencing. Since the RNA strands are randomly fragmented, the m6A site should, in principle, lie somewhere near the center of the regions to which sequence reads align. At the extremes, the region would be roughly 200 nt wide (100 nt up- and downstream of the m6A site).
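The enrichment logic behind peak calling in these methods can be illustrated with a minimal sketch in Python: it slides a window along per-base coverage arrays for the IP and input libraries and reports windows where depth-normalized IP coverage exceeds the input by a fold threshold. The array names, normalization, and thresholds are illustrative assumptions, not the pipeline used in the original studies.

```python
import numpy as np

def call_m6a_peaks(ip_cov, input_cov, window=100, step=50, min_fold=4.0):
    """Toy MeRIP-seq/m6A-seq peak caller: slide a window along a transcript
    and report windows where depth-normalized IP coverage is enriched over
    the input control; the putative m6A site is near the window center."""
    ip = np.asarray(ip_cov, dtype=float)
    inp = np.asarray(input_cov, dtype=float)
    # Normalize each library by its total coverage as a crude depth correction.
    ip_norm = ip / max(ip.sum(), 1.0)
    in_norm = inp / max(inp.sum(), 1.0)
    peaks = []
    for start in range(0, len(ip) - window + 1, step):
        fold = (ip_norm[start:start + window].mean() + 1e-9) / \
               (in_norm[start:start + window].mean() + 1e-9)
        if fold >= min_fold:
            peaks.append({"start": start, "end": start + window,
                          "center": start + window // 2, "fold": fold})
    return peaks
```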
When the first nucleotide of a transcript is an adenosine, in addition to the ribose 2’-O-methylation, this base can be further methylated at the N6 position.
m6A-seq was confirmed to be able to detect m6Am peaks at transcription start sites. Adapter ligation at both ends of the RNA fragments results in reads tending to pile up at the 5' terminus of the transcript. Schwartz et al. (2015) leveraged this to detect methylated transcription start sites (mTSS) by picking out sites with a high ratio of pileup size in the IP samples compared to the input sample. As confirmation, >80% of the highly enriched pileup sites contained adenosine.
The resolution of these methods is 100-200 nt, corresponding to the fragment size.
These two methods had several drawbacks: (1) they required substantial input material; (2) their low resolution made pinpointing the actual m6A site difficult; and (3) they could not directly assess false positives.
In MeRIP-seq especially, currently available bioinformatics tools can only call one site per ~100-200 nt peak, so a substantial portion of clustered m6As (~64 nt between individual sites within a cluster) is missed. Each cluster can contain up to 15 m6A residues.
In 2013, a modified version of m6A-seq, based on the two previous methods m6A-seq and MeRIP-seq, was published, aiming to increase resolution; this was demonstrated in the yeast transcriptome. The authors achieved this by decreasing the fragment size and employing a ligation-based, strand-specific library preparation protocol capturing both ends of the fragmented RNA, ensuring that the methylated position lies within the sequenced fragment. By additionally referencing the m6A consensus motif and eliminating false-positive m6A peaks using negative control samples, m6A profiling in yeast was achieved at single-base resolution.
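A simple way to see how a consensus motif refines a broad peak down to candidate single bases is to scan the called region for motif-matching adenosines. The sketch below assumes the commonly cited DRACH consensus (the yeast work used a closely related consensus); it is an illustration, not the published pipeline.

```python
import re

# IUPAC-expanded DRACH consensus (D = A/G/T, R = A/G, H = A/C/T); RNA 'U' is
# handled by converting to 'T'. The methylated adenosine is at motif offset 2.
DRACH = re.compile(r"[AGT][AG]AC[ACT]")

def refine_peak_to_sites(sequence, peak_start, peak_end):
    """Return candidate single-base m6A positions inside a called peak by
    locating consensus-motif adenosines in the peak region."""
    region = sequence[peak_start:peak_end].upper().replace("U", "T")
    return [peak_start + m.start() + 2 for m in DRACH.finditer(region)]

print(refine_peak_to_sites("GGTGGACTTTAAGGACAGG", 0, 19))  # [5, 14]
```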
UV-based Methods
PA-m6A-seq
UV-induced RNA-antibody crosslinking was added on top of m6A-seq to produce PA-m6A-seq (photo-crosslinking-assisted m6A-seq), which increases resolution to ~23 nt. First, 4-thiouridine (4SU) is incorporated into the RNA by adding 4SU to the growth media, with some incorporation sites presumably near m6A locations. Immunoprecipitation is then performed on full-length RNA using an m6A-specific antibody. UV light at 365 nm is then applied to activate crosslinking of the antibody to 4SU. Crosslinked RNA is isolated via competitive elution and fragmented further to ~25-30 nt; proteinase K is used to dissociate the covalent bond between the crosslinking site and the antibody. Peptide fragments that remain after antibody removal cause the base to be read as a C as opposed to a T during reverse transcription, effectively inducing a point mutation at the 4SU crosslinking site. The short fragments are subjected to library construction and Illumina sequencing, followed by finding the consensus methylation sequence.
The presence of the T-to-C mutation increases the signal-to-noise ratio of methylation site detection and provides greater resolution for locating the methylation site.
One shortcoming of this method is that m6A sites that did not incorporate 4SU nearby cannot be detected.
Another caveat is that the position of 4SU incorporation can vary relative to any single m6A residue, so it remains challenging to precisely locate the m6A site using the T-to-C mutation.
m6A-CLIP and miCLIP
m6A-CLIP (crosslinking immunoprecipitation) and miCLIP (m6A individual-nucleotide-resolution crosslinking and immunoprecipitation) are UV-based sequencing techniques. These two methods activate crosslinking at 254 nm, fragment RNA molecules before immunoprecipitation with the antibody, and do not depend on the incorporation of photoactivatable ribonucleosides: the antibody directly crosslinks with a base close to the m6A site, at a very predictable location. These UV-based strategies use antibodies that induce consistent and predictable mutational and truncation patterns in the cDNA strand during reverse transcription, which can be leveraged to more precisely locate the m6A site. Though both m6A-CLIP and miCLIP rely on UV-induced mutations, m6A-CLIP is distinct in exploiting the fact that m6A alone can induce cDNA truncation during reverse transcription, generating single-nucleotide mapping of more than tenfold more precise m6A sites (MITS, m6A-induced truncation sites) and permitting comprehensive and unbiased precise m6A mapping. In contrast, the UV-mapped m6A sites from miCLIP are only a small subset of all precise m6A sites. The precise location of tens of thousands of m6A sites in human and mouse mRNAs by m6A-CLIP reveals that m6A is enriched in the last exon but not around the stop codon.
In m6A-CLIP and miCLIP, RNA is first fragmented to ~20-80 nt, then a covalent RNA/m6A-antibody complex is formed by 254 nm UV in the fragments containing m6A. The antibody is removed with proteinase K before reverse transcription, library construction, and sequencing. Remnants of peptides at the crosslinking site on the RNA after antibody removal lead to insertions, truncations, and C-to-T mutations during reverse transcription to cDNA, especially at the +1 position (5' of the m6A site) in the sequence reads.
Positive sites identified using m6A-CLIP and miCLIP showed a high percentage of matches with those detected using SCARLET (see below), which has higher local resolution around a specific site, indicating that m6A-CLIP and miCLIP have high spatial resolution and low false discovery rates.
miCLIP has been used to detect m6Am by looking at crosslinking-induced truncation sites at the 5’UTR.
Methods for quantifying m6A modification status
Although m6A sites can be profiled at high resolution using UV-based methods, the stoichiometry of m6A sites (the methylation status, or the ratio of m6A+ to m6A- molecules for each individual site within a type of RNA) is still unknown. SCARLET (2013) and m6A-LAIC-seq (2016) allow the stoichiometry to be quantified at a specific locus and transcriptome-wide, respectively.
Bioinformatics methods used to analyze m6A peaks do not make prior assumptions about the sequence motifs within which m6A sites are usually found, and take into consideration all possible motifs; they are therefore less likely to miss sites.
SCARLET
SCARLET (site-specific cleavage and radioactive-labeling followed by ligation-assisted extraction and thin-layer chromatography) is used to determine the fraction of RNA in a sample that carries a methylated adenosine at a specific site. One can start with total RNA without having to enrich for the target RNA molecule, making it an especially suitable method for quantifying methylation status in low-abundance RNAs such as tRNAs. However, it is not suitable or practical for large-scale localization of m6A sites.
The procedure begins with a chimeric DNA oligonucleotide annealing to the target RNA around the candidate modification site. The chimeric ssDNA carries 2'OMe/2'H modifications and is complementary to the target sequence. The chimeric oligonucleotide serves as a guide that allows RNase H to cleave the RNA strand precisely at the 5' end of the candidate site. The cut site is then radiolabeled with phosphorus-32 and splint-ligated to a 116-nt ssDNA oligonucleotide using DNA ligase. RNase T1/A is introduced to the sample to digest all RNA except the molecules with the 116-mer DNA attached. This radiolabeled product is then isolated and digested by nuclease to generate a mixture of modified and unmodified adenosines (5'P-m6A and 5'P-A), which is separated using thin-layer chromatography. The relative proportions of the two groups can be determined using UV absorption levels.
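The final quantitation step reduces to simple arithmetic on the two thin-layer chromatography spot intensities; a minimal sketch with made-up intensity values:

```python
def m6a_fraction(spot_m6a, spot_a):
    """Fraction of transcripts methylated at the interrogated site, from the
    intensities of the two TLC spots (5'P-m6A vs 5'P-A)."""
    total = spot_m6a + spot_a
    return spot_m6a / total if total > 0 else float("nan")

print(m6a_fraction(320, 480))  # 0.4, i.e. 40% of molecules carry m6A here
```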
m6A-LAIC-seq
m6A-LAIC-seq (m6A-level and isoform-characterization sequencing) is a high-throughput approach to quantifying methylation status on a whole-transcriptome scale. Full-length RNA samples are used in this method. RNAs are first subjected to immunoprecipitation with an anti-m6A antibody. Excess antibody is added to the mixture to ensure all m6A-containing RNAs are pulled down. The mixture is separated into eluate (m6A+ RNAs) and supernatant (m6A- RNAs) pools. External RNA Controls Consortium (ERCC) spike-ins are added to the eluate and supernatant, as well as to an independent control arm consisting of ERCC spike-ins alone. After antibody cleavage in the eluate pool, each of the three mixtures is sequenced on a next-generation sequencing platform. The m6A level per site or gene can be quantified from the ERCC-normalized RNA abundances in the different pools. Since full-length RNA is used, it is possible to directly compare alternatively spliced isoforms between the m6A+ and m6A- fractions, as well as to compare isoform abundance within the m6A+ fraction.
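The quantitation implied here can be sketched as follows, assuming per-gene abundances and a total ERCC spike-in signal for each pool; the function and variable names are illustrative, not from the published analysis:

```python
def m6a_level(eluate_abund, supern_abund, eluate_ercc, supern_ercc):
    """Per-gene m6A level: spike-in-normalized abundance in the m6A+ eluate
    divided by the summed normalized abundance of both pools."""
    plus = eluate_abund / eluate_ercc    # rescale m6A+ pool by its ERCC signal
    minus = supern_abund / supern_ercc   # rescale m6A- pool likewise
    return plus / (plus + minus)

# e.g. a gene with equal raw abundance in both pools, but the eluate was
# sequenced at half the ERCC-implied depth -> m6A level ~0.67
print(round(m6a_level(50.0, 50.0, 0.5, 1.0), 2))
```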
Despite the advances in m6A sequencing, several challenges remain: (1) a method has yet to be developed that characterizes the stoichiometry between different sites in the same transcript; (2) analysis results depend heavily on the bioinformatics algorithm used to call the peaks; and (3) current methods all use m6A-specific antibodies to tag m6A sites, but the antibodies have been reported to carry intrinsic bias for certain RNA sequences.
Methods for 2'-O-methylation Profiling
The 2'-O-methylation of the ribose moiety is one of the most common RNA modifications and is present in diverse, highly abundant non-coding RNAs (ncRNAs) and at the 5' cap of mRNAs. Moreover, many studies have revealed that Nm at the 3' end is present in some ncRNAs, such as microRNAs (miRNAs) in plants and PIWI-interacting RNAs (piRNAs) in animals. This modification can perturb the function of ribosomes and disrupt tRNA decoding, regulate alternative splicing fidelity, protect ncRNAs from 3'-5' exonucleolytic degradation, and provide a molecular signature for discriminating self from non-self mRNA.
Nm-REP-seq
A novel method, Nm-REP-seq, was developed for the transcriptome-wide identification of 2'-O-methylation sites at single-base resolution, using an RNA exoribonuclease (Mycoplasma genitalium RNase R, MgR) and periodate oxidation reactivity to eliminate 2'-hydroxylated (2'-OH) nucleosides. Nm-REP-seq discovered the telomerase RNA component (TERC), scaRNAs and snoRNAs as new classes of Nm-containing ncRNAs, and identified many 2'-O-methylation sites in various ncRNAs and mRNAs. Furthermore, Nm-REP-seq revealed 2'-O-methylation located at the 3' end of snoRNAs, snRNAs, tRNAs and fragments derived from them, as well as of piRNAs and miRNAs.
Methods for N6,2'-O-dimethyladenosine (m6Am) Profiling
N6,2'-O-dimethyladenosine, abundant in polyA+ mRNAs, occurs at the first nucleotide after the 5' cap, when an additional methyl group is added to a 2ʹ-O-methyladenosine residue at the ‘capped’ 5ʹ end of mRNA.
Since m6Am can be recognized by anti-m6A antibodies at transcription start sites, the methods used for m6A profiling can be and were adapted for m6Am profiling, namely m6A-seq, and miCLIP (see m6A-seq and miCLIP descriptions above).
Methods for 5-methylcytidine profiling
5-methylcytidine, m5C, is abundantly found in mRNA and ncRNAs, especially tRNA and rRNAs. In tRNAs, this modification stabilizes the secondary structure and influences anticodon stem-loop conformation. In rRNAs, m5C affects translational fidelity.
Two principles have been used to develop m5C sequencing methods. The first is direct detection of the modified cytosine, by chemical conversion (bisulfite sequencing) or antibody-based enrichment (m5C-RIP), similar to m6A sequencing. The second is detecting the targets of m5C RNA methyltransferases by covalently linking the enzyme to its target, and then using an IP specific to that enzyme to enrich for the RNA molecules containing the mark (Aza-IP and miCLIP).
Modified bisulfite sequencing
Modified bisulfite sequencing was optimized for rRNA, tRNA, and miRNA molecules from Drosophila.
Bisulfite treatment has been most widely used to detect dm5C (DNA m5C). The treatment essentially converts a cytosine to a uridine, but methylated cytosines would be unchanged by the treatment.
Previous attempts to develop m5C sequencing protocols using bisulfite treatment could not effectively address the harshness of the treatment, which causes significant degradation of the RNA molecules. Specifically, the high pH of bisulfite deamination treatment is detrimental to the stability of phosphodiester bonds in RNA. As a result, it is difficult to pre-enrich RNA molecules or to obtain enough PCR product of the correct size for deep sequencing.
A modified version of bisulfite sequencing was developed by Schaefer et al. (2009), which decreased the temperature of the bisulfite treatment of RNA from 95 °C to 60 °C. The rationale behind the modification was that since RNA, unlike DNA, is not fully double-stranded, but rather consists of single-stranded regions, double-stranded stem structures, and loops, it might be possible to unwind RNA at a much lower temperature. Indeed, RNA could be treated for 180 minutes at 60 °C without significant loss of PCR amplicons of the expected size. Deamination rates were determined to be 99% after 180 min of treatment.
After bisulfite treatment of the fragmented RNA, reverse transcription is performed, followed by PCR amplification of the cDNA products and deep sequencing on the Roche 454 platform.
Since the developers of the method used the Roche platform, they also used GS Amplicon Variant Analyzer (Roche) for analyzing deep sequencing data to quantify sequence-specific cytosine content.
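Whatever the platform, the per-site call reduces to counting unconverted (C) versus converted (T) reads at each cytosine position. A minimal sketch follows, with a crude correction for the ~99% deamination rate reported above; the correction formula is a standard adjustment assumed here, not necessarily the one used by the authors:

```python
def site_methylation(c_reads, t_reads, deamination_rate=0.99):
    """Per-cytosine m5C estimate from bisulfite reads: unconverted (C) reads
    over total, with a crude correction for incomplete deamination."""
    total = c_reads + t_reads
    if total == 0:
        return float("nan")
    raw = c_reads / total
    background = 1.0 - deamination_rate  # unmethylated C's expected to survive
    return max(0.0, (raw - background) / (1.0 - background))

print(round(site_methylation(45, 55), 3))  # ~0.444 methylated at this site
```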
However, recent papers have suggested that the method has several flaws: (1) incomplete conversion of regular cytosines in double-stranded regions of RNA; (2) areas containing other modifications that confer bisulfite resistance; and (3) sites containing potential false positives due to (1) and (2). In addition, the sequencing depth may still not be high enough to correctly detect all methylated sites.
Aza-IP
Aza-IP (5-azacytidine-mediated RNA immunoprecipitation) has been optimized for and used to detect targets of methyltransferases, particularly NSUN2 and DNMT2, the two main enzymes responsible for laying down the m5C mark.
First, the cell is made to overexpress an epitope-tagged m5C-RNA methyltransferase derivative, so that the antibody used later for immunoprecipitation can recognize the enzyme. Second, 5-aza-C is introduced into the cells so that it is incorporated into nascent RNA in place of cytosine. Normally, the methyltransferase is released (i.e., the covalent bond between the cytosine and the methyltransferase is broken) following methylation of the residue. With 5-aza-C, due to the nitrogen substitution at the C5 position of cytosine, the RNA methyltransferase remains covalently bound to the target RNA molecule at the C6 position.
Third, the cell is lysed and the m5C-RNA methyltransferase of interest is immunoprecipitated along with the RNA molecules that are covalently linked to the protein. The IP step enabled >200-fold enrichment of RNA targets, which were mainly tRNAs. The enriched molecules are then fragmented and purified. A cDNA library is then constructed and sequencing is performed.
An important additional feature is that the covalent linkage of the RNA methyltransferase to the C5 position of 5-aza-C induces rearrangement and ring opening. This ring opening results in preferential pairing with cytosine, so the site is read as guanosine during sequencing. This C-to-G transversion allows base-resolution detection of m5C sites.
One caveat is that m5C sites whose cytosine was not replaced by 5-aza-C will be missed.
miCLIP
miCLIP (methylation-induced crosslinking immunoprecipitation) was used to detect NSUN2 targets, which were found to be mostly non-coding RNAs such as tRNA. An induced C271A mutation in NSUN2 inhibits release of the enzyme from its RNA target. This mutant was over-expressed in the cells of interest, and the mutated NSUN2 was also tagged with the Myc epitope. The covalently linked RNA-protein complexes are isolated via immunoprecipitation with a Myc-specific antibody. These complexes are confirmed and detected by radiolabeling with phosphorus-32. The RNA is then extracted from the complexes, reverse-transcribed, amplified by PCR, and sequenced on next-generation platforms.
Both miCLIP and Aza-IP, though limited by specific targeting of enzymes, can allow for the detection of low-abundance methylated RNA without deep sequencing.
Methods for Inosine Profiling
Inosine is created enzymatically when an adenosine residue is deaminated.
Analysis of base-pairing properties
Since inosine is chemically a deaminated adenosine, this is one of the few modifications with an accompanying alteration in base pairing, which can be capitalised on. The original adenosine nucleotide pairs with thymine, whereas inosine pairs with cytosine. cDNA sequences obtained by RT-PCR can therefore be compared to the corresponding genomic sequences; at sites where A residues are repeatedly read as G, an editing event can be inferred. At high enough accuracy, the fraction of mRNA molecules in the population that have been edited can be calculated as a percentage. This method potentially has single-nucleotide resolution. In fact, the abundance of RNA-seq data that is now publicly available can be leveraged to investigate G (in cDNA) versus A (in genome). One particular pipeline, called RNA and DNA differences (RDD), claims to exclude false positives, but only 56.8% of its A-to-I sites were found to be valid by ICE-seq (see below).
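The core of such comparative analyses is a per-position count of A-to-G mismatches between RNA-seq reads and the reference genome. A minimal sketch, ignoring the SNP and error filtering that real pipelines such as RDD perform:

```python
def editing_level(pileup_bases, genome_base="A"):
    """A-to-I editing estimate at one position: fraction of RNA-seq reads
    showing G where the reference genome has A (inosine is read as G)."""
    if genome_base != "A":
        return 0.0
    a, g = pileup_bases.count("A"), pileup_bases.count("G")
    return g / (a + g) if (a + g) > 0 else float("nan")

print(editing_level("AAAGGAAGAA"))  # 0.3 -> 30% of reads appear edited
```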
Limitations
The background noise caused by single-nucleotide polymorphisms (SNPs), somatic mutations, pseudogenes, and sequencing errors reduces the reliability of the signal, especially in a single-cell context.
Chemical methods
Inosine-specific cleavage
The first method to detect A-to-I RNA modifications, developed in 1997, was inosine-specific cleavage. RNA samples are treated with glyoxal and borate to specifically modify all G bases, and subsequently digested enzymatically by RNase T1, which cleaves after I sites. The amplification of these fragments then allows analysis of cleavage sites and inference of A-to-I modification.
It was used to prove the position of inosine at specific sites rather than to identify novel sites or transcriptome-wide profiles.
Limitations
The existence of two A-to-I modifications in relatively close proximity, which is common in Alu elements, means the downstream modification is less likely to be detected, since cDNA synthesis is truncated at the prior nucleotide. The throughput is low, and the initial method required specific primers; the protocol is complicated and labour-intensive.
ICE and ICE-seq
Inosine chemical erasing (ICE) refers to a process in which acrylonitrile is reacted with inosine to form N1-cyanoethylinosine (ce1I). This stalls reverse transcriptase and leads to truncated cDNA molecules. Combined with deep sequencing, this forms the method called ICE-seq. Computational methods for automated analysis of the data are available; the main premise is the comparison of treated and untreated samples to identify truncated transcripts and thus infer an inosine modification from read counts, with a step to reduce false positives by comparison with the online database dbSNP.
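The treated-versus-untreated comparison at a candidate site can be sketched as a simple read-through ratio test; the threshold and names here are illustrative, not from the published protocol:

```python
def inosine_call(readthrough_treated, readthrough_untreated, max_ratio=0.5):
    """ICE-style inference at a candidate site: cyanoethylated inosine stalls
    reverse transcription, so a marked drop in read-through in the
    acrylonitrile-treated sample relative to untreated suggests inosine."""
    if readthrough_untreated == 0:
        return False
    return readthrough_treated / readthrough_untreated <= max_ratio

print(inosine_call(12, 100))  # True: read-through collapsed after treatment
```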
Limitations
The original ICE protocol involved an RT-PCR amplification step and therefore required primers and knowledge of the locations or regions to be investigated, alongside a maximum cDNA length of 300-500 bp.
The ICE-seq method is complicated, as well as labour-, reagent- and time-intensive; one protocol from 2015 took 22 days. It shares a limitation with inosine-specific cleavage: if there are two A-to-I modifications in relatively close proximity, the downstream modification is less likely to be detected, since cDNA synthesis is truncated at the prior nucleotide.
Both ICE and ICE-seq suffer from a lack of sensitivity to infrequently edited locations: it becomes difficult to distinguish a modification with a frequency of <10% from a false positive. Increasing read depth and quality can improve sensitivity, but then suffers from further amplification bias.
Biological methods
ADAR knockdown
The modification of A to I is effected by adenosine deaminases acting on RNA (ADARs), of which mice have three. Knocking these down in cells, and then comparing the RNA content of ADAR+ and ADAR- cells, would therefore be anticipated to provide a basis for A-to-I modification profiling. However, ADAR enzymes have further functions within the cell (for example, roles in RNA processing and in miRNA biogenesis) that would also be likely to change the landscape of cellular mRNA. Recently, a map of A-to-I editing in mice was generated using editing-deficient ADAR1 and ADAR2 double-knockout mice as a negative control; thereby, A-to-I editing was detected with high confidence.
Methods for Pseudouridine Methylation Profiling
Pseudouridine, or Ψ, the most abundant post-transcriptional RNA modification overall, is created when a uridine base is isomerised. In eukaryotes this can occur by either of two distinct mechanisms; pseudouridine is sometimes referred to as the ‘fifth RNA nucleotide’. It is incorporated into stable non-coding RNAs such as tRNA, rRNA, and snRNA, with roles in ribosomal ligand binding and translational fidelity in tRNA, and in fine-tuning branching and splicing events in snRNAs. Pseudouridine has one more hydrogen-bond donor, from an imino group, and a more stable C–C bond, since a C-glycosidic linkage replaces the N-glycosidic linkage found in its counterpart (regular uridine). As neither of these changes affects its base-pairing properties, both nucleotides give the same output when sequenced directly; methods for detecting pseudouridine therefore involve prior biochemical modification.
Biochemical methods
CMCT methods
There are multiple pseudouridine detection methods that begin with the addition of N-cyclohexyl-N′-β-(4-methylmorpholinium) ethylcarbodiimide metho-p-toluene-sulfonate (CMCT; also known as CMC), since its reaction with pseudouridine produces CMC-Ψ. CMC-Ψ causes reverse transcriptase to stall one nucleotide in the 3’ direction. These methods have single-nucleotide resolution.
In an optimisation step, azido-CMC confers the ability to biotinylate; subsequent biotin pulldown enriches Ψ-containing transcripts, allowing identification of even low-abundance transcripts.
Limitations
As with other procedures predicated on biochemical alteration followed by sequencing, the development of high-throughput sequencing has removed the limitations requiring prior knowledge of sites of interest and primer design. The method causes substantial RNA degradation, so it is necessary to start with a large amount of sample, or to use effective normalisation techniques to account for amplification biases. One final limitation is that, while CMC labelling of pseudouridine is specific, it is not complete, and therefore not quantitative. A new reactant that could achieve higher sensitivity with retained specificity would be beneficial.
Methods for 5-hydroxylmethylcytidine Profiling
Cytidine residues, modified once to m5C (discussed above), can be further modified: either oxidised once to give 5-hydroxymethylcytidine (hm5C), or oxidised twice to give 5-formylcytidine (f5C). Arising from the oxidative processing of m5C, enacted in mammals by ten-eleven translocation (TET) family enzymes, hm5C is known to occur in all three kingdoms and to have roles in regulation. While 5-hydroxymethyldeoxycytidine (hm5dC) is known to be found in DNA in a widespread manner, hm5C is also found in organisms for which no hm5dC has been detected, indicating that it arises from a separate process with distinct regulatory stipulations. To observe the in vivo addition of methyl groups to cytosine RNA residues followed by oxidative processing, mice can be fed a diet incorporating particular isotopes, which can then be traced by LC-MS/MS analysis. Since the metabolic pathway from nutritional intake to nucleotide incorporation is known to progress from dietary methionine → S-adenosylmethionine (SAM) → methyl group on the RNA base, labelling dietary methionine with 13C and deuterium (D) means these isotopes end up in hm5C residues generated after their addition to the diet. In contrast to m5C, a large proportion of hm5C modifications have been recorded within coding sequences.
hMeRIP-seq
hMeRIP-seq is an immunoprecipitation method, in which RNA–protein complexes are crosslinked for stability, and antibodies specific to hm5C are added. Using this method, over 3,000 hm5C peaks have been called in Drosophila melanogaster S2 cells.
Limitations
Despite two distinct base-resolution methods being available for hm5dC, there are no base-resolution methods for detection of hm5C.
Biophysical validation of RNA modifications
Apart from mass spectrometry and chromatography, two other validation techniques have been developed, namely:
Pre- and post-labelling techniques:
Pre-labelling → involves the use of 32P: cells are grown in 32P-containing medium, allowing the incorporation of [α-32P]NTPs during transcription by T7 RNA polymerase. The modified RNA is then extracted, and each RNA species is isolated and subsequently digested by RNase T2. Next, the RNA is hydrolyzed into 5' nucleoside monophosphates, which are analyzed by 2D-TLC (two-dimensional thin-layer chromatography). This method can detect and quantify every modification but does not contribute to the characterization of the sequence.
Post-labelling → involves the selective labelling of a specific position within the sequence. These techniques rely on the principles of the Stanley-Vassilenko approach, adjusted to achieve better validation quality. First, RNA is cleaved into fragments with free 5'-OH ends, either by RNase H or by DNAzymes, via sequence-specific hydrolysis. Polynucleotide kinase (PNK) then performs the 5' radioactive post-labelling phosphorylation using [γ-32P]ATP. At this point, the labelled fragments undergo size fragmentation, performed either with Nuclease P1 or according to the SCARLET method. In both cases, the final product is a group of 5' nucleoside monophosphates (5' NMPs) that are analyzed by TLC.
SCARLET: this more recent approach exploits not just one but two sequence-selection steps, the second of which occurs during the splinted ligation of the radioactively labelled fragment to a long DNA oligonucleotide at its 3' end. After degradation, the labelled residue is purified together with the ligated DNA oligonucleotide and finally hydrolyzed, and thereby released, by the activity of Nuclease P1.
This method has proven very useful in the validation of modified residues in mRNAs and lncRNAs, such as m6A and Ψ.
Oligonucleotide-based techniques: these include several variants:
Splinted ligation of particular modified DNAs, which exploits the ligase's sensitivity to the 3' and 5' nucleotides (so far used for m6A, 2'-O-Me, and Ψ);
Microarray-based modification identification using a DNA chip, which exploits the decrease in duplex stability of cDNA oligonucleotides caused by the impediment to conventional base pairing from modifications (e.g. m1A, m1G, m22G);
RT primer extension at low dNTP concentrations, for mapping RT arrest signals.
Single-Molecule Real-Time Sequencing for epitranscriptome sequencing
Single-molecule real-time sequencing (SMRT) is used in the epigenomic and epitranscriptomic fields. In epigenomics, thousands of zero-mode waveguides (ZMWs) are used to capture a DNA polymerase: when a modified base is present, the biophysical dynamics of the polymerase's movement change, creating a unique kinetic signature before, during, and after base incorporation.
SMRT sequencing can be used to detect modified bases in RNA, including m6A sites. In this case, a reverse transcriptase is used as the enzyme in the ZMWs to observe cDNA synthesis in real time. Synthetically designed m6A sites leave a kinetic signature and increase the interpulse duration (IPD).
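The kinetic-signature test at a single template position can be sketched as an IPD ratio between native and unmodified control reads; the threshold and input values here are illustrative assumptions:

```python
import numpy as np

def ipd_ratio(native_ipds, control_ipds, threshold=2.0):
    """Kinetic test at one template position: mean interpulse duration (IPD)
    in native reads versus an unmodified control; ratios well above 1 flag
    candidate modified bases such as m6A."""
    ratio = float(np.mean(native_ipds) / np.mean(control_ipds))
    return ratio, ratio >= threshold

print(ipd_ratio([1.8, 2.2, 2.0], [0.9, 1.1, 1.0]))  # (2.0, True)
```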
Some issues remain: reading homonucleotide stretches, and resolving m6A within them at base resolution, is difficult owing to the stuttering of the reverse transcriptase; in addition, the throughput is too low for transcriptome-wide approaches.
One of the most commonly used platforms is the SMRT sequencing technology by Pacific Biosciences.
Nanopore sequencing in epitranscriptomics
A possible alternative to the detection of epitranscriptomic modifications by SMRT sequencing is direct detection using nanopore sequencing technologies. This technique exploits nanometer-sized protein channels embedded in a membrane or in solid materials, coupled to sensors able to detect the amplitude and duration of the variations in the ionic current passing through the pore. As the RNA passes through the nanopore, the blockage disrupts the current stream differently for different bases, including modified ones, and can therefore be used to identify possible modifications. By producing single-molecule reads, without previous RNA amplification and conversion to cDNA, these techniques can lead to the production of quantitative transcriptome-wide maps.
In particular, nanopore technology has proved effective in detecting the presence of two nucleotide analogs in RNA: N6-methyladenosine (m6A) and 5-methylcytosine (5-mC). Using hidden Markov models (HMMs) or recurrent neural networks (RNNs) trained on known sequences, it was possible to demonstrate that the modified nucleotides produce a characteristic disruption in the ionic current when passing through the pore, and that these data can be used to identify the nucleotide.
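The per-event emission step underlying such models can be illustrated with a toy Gaussian classifier over event mean currents; the model parameters below are invented for the example and would in practice be learned from training data:

```python
from math import log, pi

def gaussian_loglik(x, mu, sigma):
    """Log-likelihood of an event mean current x under a Gaussian model."""
    return -0.5 * log(2 * pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify_event(mean_current, models):
    """Assign a nanopore current event to the base model (canonical or
    modified) with the highest likelihood -- the per-event emission step
    that an HMM or RNN basecaller builds on."""
    return max(models, key=lambda name: gaussian_loglik(mean_current, *models[name]))

# Hypothetical single-base models: (mean pA, std pA), learned from training data
models = {"A": (100.0, 2.5), "m6A": (104.0, 2.5)}
print(classify_event(103.0, models))  # 'm6A'
```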
References
RNA
Nucleosides
Bioinformatics
Molecular biology | Epitranscriptomic sequencing | Chemistry,Engineering,Biology | 7,662 |
24,703,027 | https://en.wikipedia.org/wiki/Sketch%20recognition | Sketch recognition describes the process by which a computer or artificial intelligence can interpret hand-drawn sketches created by a human being or another machine. Sketch recognition is a key frontier in the fields of artificial intelligence and human-computer interaction, similar to natural language processing or conversational artificial intelligence.
Uses and Applications
Research in sketch recognition lies at the crossroads of artificial intelligence and human–computer interaction. Recognition algorithms are usually gesture-based, appearance-based, geometry-based, or a combination thereof.
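As a concrete illustration of the gesture- and geometry-based family, the sketch below implements a toy template matcher in the spirit of the $1 Unistroke Recognizer: strokes are resampled to a fixed number of points, normalized for position and scale, and compared to templates by average point-to-point distance. It is a simplified sketch (real recognizers add rotation invariance and better search), and all names and parameters are illustrative:

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def resample(points, n=32):
    """Resample a stroke to n points evenly spaced along its arc length."""
    step = sum(_dist(points[i - 1], points[i])
               for i in range(1, len(points))) / (n - 1)
    pts, out, acc = list(points), [points[0]], 0.0
    i = 1
    while i < len(pts):
        d = _dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # keep measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(points[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to a unit bounding box."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def recognize(stroke, templates):
    """Return the template whose resampled, normalized shape is closest to
    the input stroke by average point-to-point distance."""
    probe = normalize(resample(stroke))
    def score(name):
        cand = normalize(resample(templates[name]))
        return sum(_dist(a, b) for a, b in zip(probe, cand)) / len(probe)
    return min(templates, key=score)

templates = {"line": [(0, 0), (1, 0)], "caret": [(0, 0), (0.5, 1), (1, 0)]}
print(recognize([(0, 0), (0.4, 0.9), (1.1, 0.1)], templates))  # 'caret'
```

Template matchers of this kind trade some accuracy for simplicity and training-free operation; appearance- and geometry-based systems instead extract features such as corners, curvature, or primitive shapes.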
Advances in the field of sketch recognition would have significant application in the field of forensic science, in which sketches are often used to identify suspects associated with a crime.
In 2023, two developers used OpenAI's DALL-E 2 image generation platform to create a forensic sketch program. The program's results were described as "hyper-realistic", with a purported potential to greatly decrease the creation time of a forensic sketch while increasing accuracy.
Sketch recognition technology has also been linked to applications in the fields of architecture, video game production, animation, construction, and academia, among others.
See also
Gesture recognition
Handwriting recognition
Human–computer interaction
Multi-touch gestures
Pen computing
Sketch-based modeling
Tablet computer
References
External links
Notes on the History of Pen-based Computing (YouTube)
Annotated Bibliography in Tablets, Gesture and Handwriting Recognition, and Pen Computing
Human–computer interaction
History of human–computer interaction
Applications of computer vision | Sketch recognition | Technology,Engineering | 285 |
54,516,626 | https://en.wikipedia.org/wiki/Direction%20%28geometry%29 | In geometry, direction, also known as spatial direction or vector direction, is the common characteristic of all rays which coincide when translated to share a common endpoint; equivalently, it is the common characteristic of vectors (such as the relative position between a pair of points) which can be made equal by scaling (by some positive scalar multiplier).
Two vectors sharing the same direction are said to be codirectional or equidirectional.
All codirectional line segments sharing the same size (length) are said to be equipollent. Two equipollent segments are not necessarily coincident; for example, a given direction can be evaluated at different starting positions, defining different unit directed line segments (as a bound vector instead of a free vector).
A direction is often represented as a unit vector, the result of dividing a vector by its length. A direction can alternately be represented by a point on a circle or sphere, the intersection between the sphere and a ray in that direction emanating from the sphere's center; the tips of unit vectors emanating from a common origin point lie on the unit sphere.
A Cartesian coordinate system is defined in terms of several oriented reference lines, called coordinate axes; any arbitrary direction can be represented numerically by finding the direction cosines (a list of cosines of the angles) between the given direction and the directions of the axes; the direction cosines are the coordinates of the associated unit vector.
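For example (a vector chosen only for the arithmetic), the direction of v = (2, 3, 6) has

```latex
\|\mathbf{v}\| = \sqrt{2^2 + 3^2 + 6^2} = 7, \qquad
\hat{\mathbf{v}} = \left(\tfrac{2}{7}, \tfrac{3}{7}, \tfrac{6}{7}\right), \qquad
\cos\alpha = \tfrac{2}{7},\ \cos\beta = \tfrac{3}{7},\ \cos\gamma = \tfrac{6}{7},
```

and the direction cosines satisfy \cos^2\alpha + \cos^2\beta + \cos^2\gamma = 49/49 = 1, as they must for a unit vector.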
A two-dimensional direction can also be represented by its angle, measured from some reference direction, the angular component of polar coordinates (ignoring or normalizing the radial component). A three-dimensional direction can be represented using a polar angle relative to a fixed polar axis and an azimuthal angle about the polar axis: the angular components of spherical coordinates.
Non-oriented straight lines can also be considered to have a direction, the common characteristic of all parallel lines, which can be made to coincide by translation to pass through a common point. The direction of a non-oriented line in a two-dimensional plane, given a Cartesian coordinate system, can be represented numerically by its slope.
A direction is used to represent linear objects such as axes of rotation and normal vectors. A direction may be used as part of the representation of a more complicated object's orientation in physical space (e.g., axis–angle representation).
Two directions are said to be opposite if the unit vectors representing them are additive inverses, or if the points on a sphere representing them are antipodal, at the two opposite ends of a common diameter. Two directions are parallel (as in parallel lines) if they can be brought to lie on the same straight line without rotations; parallel directions are either codirectional or opposite.
Two directions are obtuse or acute if they form, respectively, an obtuse angle (greater than a right angle) or an acute angle (smaller than a right angle); equivalently, obtuse directions have a negative scalar product (or scalar projection) and acute directions have a positive one.
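A small worked check of this equivalence, with vectors chosen purely for illustration:

```latex
\mathbf{u} = (1, 0),\quad \mathbf{v} = (1, 1):\quad \mathbf{u}\cdot\mathbf{v} = 1 > 0 \quad (\text{acute, } 45^\circ);\qquad
\mathbf{w} = (-1, 1):\quad \mathbf{u}\cdot\mathbf{w} = -1 < 0 \quad (\text{obtuse, } 135^\circ).
```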
See also
Body relative direction
Euclidean vector
Tangent direction
Notes
References
Elementary mathematics
Euclidean geometry | Direction (geometry) | Mathematics | 656 |
52,820 | https://en.wikipedia.org/wiki/Resident%20Evil | Resident Evil, known as Biohazard in Japan, is a Japanese horror game series and media franchise created by Capcom. It consists of survival horror, third-person shooter and first-person shooter games, with players typically surviving in environments inhabited by zombies and other mutated creatures. The franchise has expanded into other media, including a live-action film series, animated films, television series, comic books, novels, audiobooks, and merchandise. Resident Evil is the highest-grossing horror franchise.
The first Resident Evil game was created by Shinji Mikami and Tokuro Fujiwara for PlayStation, and released in 1996. It is credited for defining the survival horror genre and returning zombies to popular culture. With Resident Evil 4 (2005), the franchise shifted to more dynamic shooting action, popularizing the "over-the-shoulder" third-person view in action-adventure games.
The franchise returned to survival horror with Resident Evil 7: Biohazard (2017) and Resident Evil Village (2021), which used a first-person perspective. Capcom has also released four Resident Evil remakes: Resident Evil (2002), Resident Evil 2 (2019), Resident Evil 3 (2020) and Resident Evil 4 (2023). Resident Evil is Capcom's best-selling franchise and the best-selling horror game series, with more than copies sold worldwide as of December 2024.
The first Resident Evil film was released in 2002, starring Milla Jovovich, followed by five sequels and a reboot, Welcome to Raccoon City (2021). The films received mostly negative reviews, but have grossed more than $1.2 billion, making Resident Evil the third-highest-grossing video game film series.
History
The development of the first Resident Evil, released as Biohazard in Japan, began in 1993 when Capcom's Tokuro Fujiwara told Shinji Mikami and other co-workers to create a game using elements from Fujiwara's 1989 game Sweet Home on the Family Computer (Famicom) in Japan. When marketing executives were setting up to release Biohazard in the United States in late 1994, it was pointed out that securing the rights to the name Biohazard would be very difficult: a DOS game had already been registered under that name, and there was also a New York hardcore punk band called Biohazard. A contest was held among company personnel to choose a new name; this competition turned up Resident Evil, the name under which it was released in the West. Resident Evil made its debut on the PlayStation in 1996 and was later ported to the Sega Saturn.
The first entry in the series was the first game to be dubbed a "survival horror", a term coined for the new genre it initiated, and its critical and commercial success led to the production of two sequels, Resident Evil 2 in 1998 and Resident Evil 3: Nemesis in 1999, both for the PlayStation. A port of Resident Evil 2 was released for the Nintendo 64. In addition, ports of all three were released for Windows. The fourth game in the series, Resident Evil – Code: Veronica, was developed for the Dreamcast and released in 2000, followed by ports of Resident Evil 2 and Resident Evil 3: Nemesis. Resident Evil – Code: Veronica was later re-released for Dreamcast in Japan in an updated form as Code: Veronica Complete, which included slight changes, many of which revolved around story cutscenes. This updated version was later ported to the PlayStation 2 and GameCube as Code: Veronica X.
Despite earlier announcements that the next game in the series would be released for the PlayStation 2, which resulted in the creation of an unrelated game, Devil May Cry, Mikami decided to make the series exclusively for the GameCube. The next three games in the series—a remake of the original Resident Evil and the prequel Resident Evil Zero, both released in 2002, as well as Resident Evil 4 (2005)—were all released initially as GameCube exclusives. Resident Evil 4 was later released for Windows, PlayStation 2, and Wii.
A trilogy of GunCon-compatible light gun games known as the Gun Survivor series featured first-person gameplay. The first, Resident Evil Survivor, was released in 2000 for the PlayStation and PC but received mediocre reviews. The subsequent games, Resident Evil Survivor 2 – Code: Veronica and Resident Evil: Dead Aim, fared somewhat better. Dead Aim is the fourth Gun Survivor game in Japan, with Gun Survivor 3 being the Dino Crisis spin-off Dino Stalker. In a similar vein, the Chronicles series features first-person gameplay, albeit on an on-rails path. Resident Evil: The Umbrella Chronicles was released in 2007 for the Wii, with a sequel, Resident Evil: The Darkside Chronicles released in 2009 (both were later ported to the PlayStation 3 in 2012).
Resident Evil Outbreak is an online game for the PlayStation 2, released in 2003, depicting a series of episodic storylines in Raccoon City set during the same period as Resident Evil 2 and Resident Evil 3: Nemesis. It was the first in the series and the first survival horror game to feature cooperative gameplay. It was followed by a sequel, Resident Evil Outbreak: File #2. Raccoon City is a metropolis located in the Arklay Mountains of the Midwestern United States that succumbed to the deadly T-virus outbreak and was consequently destroyed via a nuclear missile attack issued by the United States government. The town served as a critical junction for the series' progression as one of the main catalysts to Umbrella's downfall and the entry point for some of the series' most notable characters.
Resident Evil Gaiden is an action-adventure game for the Game Boy Color featuring a role-playing-style combat system. There have been several downloadable mobile games based on the Resident Evil series in Japan. Some of these mobile games have been released in North America and Europe through T-Mobile. At the Sony press conference during E3 2009, Resident Evil Portable was announced for the PlayStation Portable, described as a new game being developed with "the PSP Go in mind" and "totally different for a Resident Evil game". No further announcements have been made, and the game is considered to have been canceled.
In 2009, Resident Evil 5 was released for PlayStation 3, Windows and Xbox 360, becoming the best-selling game of the franchise despite mixed fan reception. Capcom revealed the third-person shooter Resident Evil: Operation Raccoon City, which was developed by Slant Six Games for the PlayStation 3, Xbox 360 and Windows and released in March 2012. A survival horror game for the Nintendo 3DS, Resident Evil: Revelations, was released in February 2012. In October of the same year, the next numbered entry in the main series, Resident Evil 6, was released to mixed reviews, but enthusiastic pre-order sales.
In 2013, producer Masachika Kawata said the Resident Evil franchise would return to focus on elements of horror and suspense over action, adding that "survival horror as a genre is never going to be on the same level, financially, as shooters and much more popular, mainstream games. At the same time, I think we need to have the confidence to put money behind these projects, and it doesn't mean we can't focus on what we need to do as a survival horror game to meet fan's needs." Resident Evil: Revelations 2, an episodic game set between Resident Evil 5 and Resident Evil 6, was released in March 2015. A series of team-based multiplayer games were developed beginning with the poorly received Umbrella Corps, which was released in June 2016. Resident Evil: Resistance was released in April 2020, followed by Resident Evil Re:Verse in October 2022, with both being available for free to those who bought Resident Evil 3 and Village respectively.
Using the new RE Engine, which would power the next generation of Resident Evil games, the series continued to shift back towards more horror elements. The next mainline game, Resident Evil 7: Biohazard, was released for Windows, PlayStation 4 and Xbox One in January 2017. Set in a dilapidated mansion in Louisiana, the game uses a first-person perspective and emphasizes horror and exploration over action, unlike previous installments. The first-person perspective continued in the eighth mainline game, Resident Evil Village. Released in May 2021, the game, set in a mysterious European village, is a direct sequel to Resident Evil 7: Biohazard, although it incorporates more action elements inspired by Resident Evil 4. The game also marked the franchise's debut on PlayStation 5 and Xbox Series X/S.
A new generation of remakes of older entries began in 2019 with a remake of Resident Evil 2, released for the PlayStation 4, Windows, and Xbox One. The remake outsold the original game within a year, selling over five million copies. Following the success of the Resident Evil 2 remake, Capcom revealed a remake of Resident Evil 3: Nemesis in December 2019, known as Resident Evil 3. It was released in April 2020. In June 2022, a remake of Resident Evil 4 was announced; it was released on March 24, 2023, for PlayStation 4, PlayStation 5, Xbox Series X/S, and PC.
Story overview
The early Resident Evil games focused on the Umbrella Corporation, an international pharmaceutical company that secretly develops mutagenic viruses to further their "bio-organic weapons" (BOW) research. The company's viruses can transform humans into mindless zombies while also mutating plants and animals into horrifying monstrosities. The Umbrella Corporation uses its vast resources to effectively control Raccoon City, a fictional midwestern American city. In the original Resident Evil, members of an elite police task force, Special Tactics and Rescue Service (STARS), are lured to a derelict mansion on the outskirts of Raccoon City. The STARS team is largely wiped out by zombies and other BOWs, leaving only a handful of survivors, including Chris Redfield, Jill Valentine, and Albert Wesker. Chris and Jill explore the zombie-infested mansion and uncover a secret underground Umbrella research facility. Wesker reveals himself to be a double agent for Umbrella and betrays his comrades. However, Wesker is seemingly killed by a Tyrant, a special BOW that is the culmination of the Umbrella Corporation's research.
Chris and Jill escape the mansion, but their testimony is ridiculed by Raccoon City's officials due to Umbrella's influence. Meanwhile, a separate viral outbreak occurs in another Umbrella research facility underneath Raccoon City. Most of the city's residents are infected and become zombies. Resident Evil 2 introduces two new protagonists: Leon S. Kennedy, a rookie police officer, and Claire Redfield, the younger sister of Chris. Leon and Claire arrive in Raccoon City amidst the chaos of the viral outbreak. Leon is aided by Ada Wong, a corporate spy posing as an FBI agent, while Claire rescues Sherry Birkin, the daughter of two prominent Umbrella researchers. At the same time, Jill makes her escape from the city in Resident Evil 3: Nemesis. She is relentlessly pursued by a new Tyrant, Nemesis, who is deployed by Umbrella to eliminate all surviving STARS members. The U.S. Government destroys Raccoon City with a missile strike to sterilize the viral outbreak. Leon, Claire, Sherry, Ada, and Jill escape the city before its eradication. Claire continues to look for Chris, whereas Leon is recruited to work for the U.S. Government. Resident Evil – Code: Veronica follows Claire as she escapes from a prison camp in the Southern Ocean and later reunites with Chris at an Umbrella research facility in Antarctica. Resident Evil 4 is set six years after the Raccoon City incident and focuses on Leon as he tries to rescue the U.S. President's daughter from a cult in Spain.
A government investigation into the Umbrella Corporation reveals its involvement in the Raccoon City disaster and leads to the company's dissolution. Despite the downfall of the Umbrella Corporation, the company's research and BOWs proliferate across the black market and lead to the rise of bioterrorism. Chris and Jill establish the Bioterrorism Security Assessment Alliance (BSAA) to combat these ever-growing threats on a global scale. Wesker is revealed to be alive and involved in the development of new potent viral agents and BOWs. In Resident Evil 5, Wesker seeks to unleash a highly mutagenic virus that will infect all of humanity. Chris and the BSAA confront and kill Wesker in Africa before he can fulfill his mission. Resident Evil 6 features Leon and Chris meeting for the first time in the video game series. The two work separately to triage bioterrorist attacks in the United States, Eastern Europe, and China. They are assisted by Sherry, Wesker's illegitimate son Jake Muller, Ada, and many members of the BSAA and U.S. government.
Resident Evil 7: Biohazard and Resident Evil Village introduce a new protagonist, Ethan Winters, who becomes entangled in a bioterrorism incident while searching for his missing wife. He encounters Chris and the BSAA, who help him rescue his wife and defeat Eveline, a powerful BOW. Ethan, Mia, and their newborn daughter, Rosemary, are relocated to Eastern Europe but are abducted by a cult. Ethan ultimately sacrifices himself to destroy a fungal colony being weaponized by bioterrorists and save his family.
Gameplay
The Resident Evil franchise has had a variety of control schemes and gameplay mechanics throughout its history. Puzzle-solving has figured prominently throughout the series.
Tank controls
The first game introduced a control scheme that the player community has come to refer to as "tank controls" to the series. In a game with tank controls, players control movement relative to the position of the player character, rather than relative to the fixed virtual camera from which the player views the current scene. Pressing up (for example on a D-pad, analog stick, or cursor movement keys) on the game controller moves the character in the direction being faced, pressing down backpedals, and left and right rotates the character. This can feel counter-intuitive when the character is facing the camera, as the controls are essentially reversed in this state. This differs from many 3D games, in which characters move in the direction the player pushes the controls from the perspective of the camera. Some critics have posited that the control scheme is intentionally clumsy, meant to enhance stress and exacerbate difficulty.
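The contrast between the two schemes can be sketched in a few lines of Python (an illustration only; the function names and the 2D simplification are ours, not Capcom's code). Under tank controls the camera never enters the movement calculation, which is exactly why input feels reversed when the character faces the camera:

```python
import math

def tank_move(x, y, heading, stick_x, stick_y, speed=1.0, turn_rate=0.05):
    """Tank controls: the stick is interpreted relative to the character.

    stick_y > 0 walks forward along the current heading, stick_y < 0
    backpedals, and stick_x only rotates the character in place.
    """
    heading += stick_x * turn_rate               # left/right rotates
    x += math.cos(heading) * stick_y * speed     # up/down moves along heading
    y += math.sin(heading) * stick_y * speed
    return x, y, heading

def camera_relative_move(x, y, cam_angle, stick_x, stick_y, speed=1.0):
    """Modern scheme: the stick is interpreted relative to the camera,
    so pushing up always moves the character away from the viewer."""
    x += (math.cos(cam_angle) * stick_y - math.sin(cam_angle) * stick_x) * speed
    y += (math.sin(cam_angle) * stick_y + math.cos(cam_angle) * stick_x) * speed
    return x, y
```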
While the first three entries in the series featured this control scheme, the third, Resident Evil 3: Nemesis, saw some action-oriented additions. These included a 180 degree turn and dodge command that, according to GameSpot, "hinted at a new direction that the series would go in." Later games in the series, like Resident Evil 4, would feature a more fluid over-the-shoulder third-person camera instead of a fixed camera for each room, while Resident Evil 7 and Resident Evil Village are played from the first-person perspective.
Third-person shooter gameplay
Resident Evil 4 saw significant changes to the established gameplay, including switching from fixed camera perspectives to a tracking camera, and more action-oriented gameplay and mechanics. This was complemented by an abundance of ammunition and revised aiming and melee mechanics. Some critics claimed that this overhauled control scheme "made the game less scary." The next two games in the franchise furthered the action-oriented mechanics: Resident Evil 5 featured cooperative play and added strafing, while Resident Evil 6 allowed players to move while aiming and shooting for the first time, fully abandoning the series' signature tank controls.
First-person shooter gameplay and VR
Resident Evil 7 is the first main Resident Evil game to use the first-person perspective and to use virtual reality. It drew comparisons to modern survival horror games such as Outlast and PT. The eighth main-series game, Resident Evil Village, also features a first-person perspective. A VR version of Resident Evil 4 was released on the Oculus Quest 2 on October 21, 2021.
Other media
The Resident Evil franchise features video games and tie-in merchandise and products, including various live-action and animated films, comic books, and novels.
Films
Live-action films
From 2002 to 2016, six live-action Resident Evil films were produced, all written and produced by Paul W. S. Anderson. The films do not follow the games' premise but feature some game characters. The series' protagonist is Alice, an original character created for the films, portrayed by Milla Jovovich. Despite a negative reaction from critics, the live-action film series has made over $1 billion worldwide. It is, to date, the only video game film series to have increased its gross with each successive film. The series holds the record for the "Most Live-Action Film Adaptations of a Video Game" in the 2012 Guinness World Records Gamer's Edition, which also described it as "the most successful movie series to be based on a video game."
A reboot, Resident Evil: Welcome to Raccoon City, was released on November 24, 2021, with Johannes Roberts as writer/director.
Animated films
The first computer-animated film for the franchise was Biohazard 4D-Executer. It was a short 3D film produced for Japanese theme parks and did not feature any characters from the games.
Starting in 2008, a series of feature-length computer-animated films have been released. These films take place in the same continuity as the games of the series, and feature characters such as Leon Kennedy, Claire Redfield, Ada Wong, Chris Redfield, Jill Valentine and Rebecca Chambers.
Television
Resident Evil: Infinite Darkness, a four-part CG anime series, premiered on July 8, 2021, on Netflix. Starring the Resident Evil 2 protagonists Leon S. Kennedy and Claire Redfield, the series features both uncovering a worldwide plot.
Resident Evil, an eight-episode live-action series, premiered on July 14, 2022, on Netflix. Its two plotlines, set in 2022 and 2036, follow Albert Wesker and his daughters as they navigate Umbrella's experiments in New Raccoon City.
Merchandise
Over the years, various toy companies have acquired the Resident Evil license, with each producing their own unique line of Resident Evil action figures or models. These include, but are not limited to, Toy Biz, Palisades Toys, NECA, and Hot Toys.
Tokyo Marui also produced replicas of the guns used in the Resident Evil series in the form of gas blow-back airsoft guns. Some models included the STARS Beretta featured in Resident Evil 3, and the Desert Eagle in a limited edition that came with other memorabilia in a wooden case, along with the Gold Lugers from Code: Veronica and the "Samurai Edge" pistol from the Resident Evil remake. Other merchandise includes an energy drink called "T-virus Antidote".
Resident Evil Archives is a reference guide of the Resident Evil series written by staff members of Capcom. It was translated into English and published by BradyGames. The guide describes and summarizes all of the key events that occur in Resident Evil Zero, Resident Evil, Resident Evil 2, Resident Evil 3: Nemesis, and Code: Veronica. The main plot analysis also contains character relationship charts, artwork, item descriptions, and file transcripts for all five games. A second Archives book was later released in December 2011 and covers Resident Evil 4, Resident Evil 5, the new scenarios detailed in Resident Evil: The Umbrella Chronicles and Resident Evil: The Darkside Chronicles, and the 2008 CGI movie, Resident Evil: Degeneration. The second Archives volume was also translated by Capcom and published by BradyGames.
A Resident Evil theme restaurant called Biohazard Cafe & Grill S.T.A.R.S. opened in Tokyo in 2012. Halloween Horror Nights 2013, held at Universal Orlando, featured a haunted house titled Resident Evil: Escape from Raccoon City, based on Resident Evil 2 and Resident Evil 3: Nemesis.
Novels
The first Resident Evil novel was Hiroyuki Ariga's novella Biohazard: The Beginning, published in 1997 as a portion of the book The True Story of Biohazard, which was given away as a pre-order bonus with the Sega Saturn version of Biohazard. The story serves as a prelude to the original Resident Evil, in which Chris investigates the disappearance of his missing friend, Billy Rabbitson.
S. D. Perry has written novelizations of the first five games, as well as two original novels taking place between games. The novels often take liberties with the games' plots by exploring events occurring outside and beyond the games, which meant that later games contradicted the books on several occasions. One notable addition from the novels is the original character Trent, who often served as a mysterious behind-the-scenes string-puller who aided the main characters. Perry's novels were translated and released in Japan with new cover art by Wolfina. Perry's novels, particularly The Umbrella Conspiracy, also alluded to events in Biohazard: The Beginning, such as the disappearance of Billy Rabbitson and Brian Irons' bid to run for mayor. A reprinting of Perry's novels with new cover artwork began in 2012 to coincide with the release of Resident Evil: Retribution and its respective novelization.
There is a trilogy of original Biohazard novels in Japan. The first was published in 1998 and was written by Kyū Asakura and the staff of Flagship. Two additional novels were published in 2002, To the Liberty by Sudan Kimura and Rose Blank by Tadashi Aizawa. While no official English translation of these novels has been published yet, the last two books were translated into German and published in 2006.
Novelizations of the films Genesis, Apocalypse, and Extinction were written by Keith DeCandido. Afterlife did not receive a novelization because Capcom discontinued working with Pocket Books, which had been its primary publisher up to that point; Capcom later made Titan Books its primary publisher. Retribution was written by John Shirley, while The Final Chapter was written by Tim Waggoner. Genesis was published over two years after that film's release and coincided with the publication of Apocalypse, Genesis being marketed as a prequel to Apocalypse, while the Extinction novel was released in late July 2007, two months before the film's release. The Final Chapter was published in December 2016 alongside the film's theatrical release. There was also a Japanese novelization of the first film, unrelated to DeCandido's version, written by Osamu Makino. Makino also wrote two novels based on the game Resident Evil: The Umbrella Chronicles. The books are a two-part direct novelization of the game and were published in Japanese and German only. The first novel, titled Biohazard: The Umbrella Chronicles Side A in Japan and Resident Evil: The Umbrella Chronicles 1 in Germany, was released on December 22, 2007. The second novel, titled Biohazard: The Umbrella Chronicles Side B in Japan and Resident Evil: The Umbrella Chronicles 2 in Germany, was published in January 2008.
Comics
In 1997, Marvel Comics published a single-issue prologue comic based on the original Resident Evil, released through a promotional giveaway alongside the original PlayStation game.
In 1998, WildStorm began producing a monthly comic book series based on the first two games, Resident Evil: The Official Comic Magazine, which lasted five issues. The first four issues were published by Image, while WildStorm itself published the fifth and final issue. Each issue was a compilation of short stories that were both adaptations of events from the games and related side stories. Like the Perry novels, the comics also explored events occurring beyond Resident Evil 2 (the latest game during the series' publication) and thus were contradicted by later games. WildStorm also published a four-issue miniseries, Resident Evil: Fire & Ice, which depicted the ordeal of Charlie Team, a third STARS team created specifically for the comic. In 2009, WildStorm reprinted Fire & Ice in a trade paperback collection.
In Hong Kong, there have been officially licensed Biohazard manhua adaptations: Biohazard 0 by publisher Yulang Group, Biohazard 2 by Kings Fountain, Biohazard 3 Supplemental Edition by Cao Zhihao, and Biohazard 3 The Last Escape and Biohazard Code: Veronica by Lee Chung Hing, published by Tinhangse Publishing. The Code: Veronica manhua was translated into English, formatted to look like an American comic and distributed by WildStorm as a series of four graphic novel collections.
In 2009, WildStorm began publishing a Resident Evil comic book prequel to Resident Evil 5, which centers on two original members of the BSAA, Mina Gere and Holiday Sugarman. Written by Ricardo Sanchez and illustrated by Kevin Sharpe and Jim Clark, the first issue was published on March 11, 2009. The third issue was released on November 11, 2009, and the fourth on March 24, 2010. The sixth and final book was published in February 2011.
Plays
In the summer of 2000, Bioroid: Year Zero was performed in Japan. It was a musical horror-comedy told from the perspective of the infected. Super Eccentric Theater put on the production under the direction of Osamu Yagihashi. The stage play was performed from early July to late August.
Biohazard The Stage premiered in Japan in 2015. The play focuses on the iconic characters Chris Redfield and Rebecca Chambers as Philosophy University in Australia experiences a bioterrorist attack. The production was handled by Avex Live Creative and Ace Crew Entertainment, under supervision from Capcom.
The following year, Musical Biohazard ~Voice of Gaia~ premiered in September. It was produced by Umeda Arts Theater, with direction by G2 and music by composer Shunsuke Wada.
Biohazard the Experience was the second Resident Evil play produced by Avex Live Creative and Ace Crew Entertainment. The story is set in 2015 and follows a cast of thirteen survivors who were abducted and woke up in a mansion during an outbreak.
Reception and legacy
Most of the games in the prominent Resident Evil series have been released to positive reviews. Some of the games, most notably Resident Evil, Resident Evil 2 and Resident Evil 4, have been bestowed with multiple Game of the Year honors and often placed on lists of the best video games ever made.
In 1999, Next Generation listed the Resident Evil series as number 13 on their "Top 50 Games of All Time", commenting that, "Flawless graphics, excellent music, and a top-notch storyline all combined to make a game of unparalleled atmosphere and suspense." In 2012, Complex ranked Resident Evil at number 22 on the list of the best video game franchises. That same year, G4tv called it "one of the most successful series in gaming history."
Commercial performance
By December 2022, around 135 million Resident Evil games had been sold. The first two Resident Evil games had collectively sold approximately units worldwide by March 1999. By early 2001, the series had sold units worldwide, earning more than . By 2011, it had sold about copies and was estimated to have grossed at least . It is recognized by Guinness World Records as the best-selling survival horror series, with the Resident Evil 2 remake being the best-selling survival horror game. Seven of the top ten best-selling horror games in North America are Resident Evil games.
The 2023 Resident Evil 4 remake sold more than three million copies in its first two days of release. It sold four million copies in its first two weeks, making it one of the fastest-selling Resident Evil games. In Japan, it was the best-selling retail game in its first week, selling 89,662 copies on PlayStation 5 and 85,371 on PlayStation 4.
The Resident Evil film series was the highest-grossing film series based on video games by 2012. By 2011, the films had grossed over at the box office, bringing the franchise's estimated revenue to at least in combined video game sales and box office gross up until then. To date, the films have grossed more than in box office and home video sales. The success of the video games and films has made Resident Evil the highest-grossing franchise in the horror and zombie genres.
Cultural impact
GameSpot listed the original Resident Evil as one of the fifteen most influential video games of all time. It is credited with defining and popularizing the survival horror genre of games. It is also credited with taking video games in a cinematic direction with its B-movie style cut-scenes, including live-action full-motion video (FMV) footage. Its live-action opening, however, was controversial; it became one of the first action games to receive the "Mature 17+" (M) rating from the Entertainment Software Rating Board (ESRB), despite the opening cutscene being censored in North America.
The Resident Evil franchise is credited with sparking a revival of the zombie genre in popular culture, leading to a renewed interest in zombie films during the 2000s. Resident Evil also helped redefine the zombie genre, playing an important role in its shift from supernatural themes to scientific themes by using science to explain the origins of zombies. According to Kim Newman in the book Nightmare Movies (2011), "the zombie revival began in the Far East" mainly due to the 1996 Japanese zombie games Resident Evil and The House of the Dead. George A. Romero, in 2013, said it was the video games Resident Evil and House of the Dead "more than anything else" that popularised his zombie concept in early 21st-century popular culture. In a 2015 interview with Huffington Post, screenwriter-director Alex Garland credited the Resident Evil series as a primary influence on his script for the horror film 28 Days Later (2002), and credited the first Resident Evil game for revitalizing the zombie genre. Screenwriter Edgar Wright cited Resident Evil 2 as a primary influence on his zombie comedy film Shaun of the Dead (2004), with the film's star and co-writer Simon Pegg also crediting the first game with starting the zombie revival in popular culture. The Walking Dead comic book creator Robert Kirkman cited Resident Evil as his favorite zombie game, while The Walking Dead television series director Greg Nicotero credited Resident Evil and The House of the Dead with introducing the zombie genre "to a whole generation of younger people who didn't grow up watching Night of the Living Dead and Dawn of the Dead."
The Resident Evil Apocalypse zombies were conceptualized and choreographed by Sharon B. Moore and Derek Aasland. Through script analysis and movement research, a "scientific logic" was devised for the T-virus, accounting for each zombie behaviour envisioned in Paul W. S. Anderson's script. Moore and Aasland then wrote the so-called Undead Bible, a handbook for the undead, used as the guide for the nearly 1,000 cast members under the choreographic department (stunt performers, actors, dancers, extras) to ensure the undead physicality was performed in a unified way across the picture. The stunt and core teams participated in an "Undead Bootcamp", the subject of the 2007 documentary Undead Bootcamp featuring producer Jeremy Bolt, director Alexander Witt, and choreographers Sharon B. Moore and Derek Aasland.
On the DVD featurette Resident Evil: Game Over, Apocalypse director Alexander Witt said the zombies needed to be "more aggressive and more dangerous" than in the original film, so the film's choreographers Sharon B. Moore and Derek Aasland conceived them as "liquid zombies" in terms of their relentless forward motion: unstoppable, flowing around any kind of resistance, and then rushing in on the final attack. This is also detailed in Lars Schmeink's Biopunk Dystopias: Genetic Engineering, Society, and Science Fiction (Liverpool University Press, 2016, p. 214).
Additionally, the first Resident Evil film adaptation also contributed to the revival of zombie films, with the success of the film and the games resulting in zombies achieving greater mainstream prominence and several zombie films being greenlit, such as the video game film adaptation House of the Dead (2003), the remake Dawn of the Dead (2004) and Romero's Land of the Dead (2005). The Resident Evil films, 28 Days Later and the Dawn of the Dead remake all set box office records for the zombie genre, reaching levels of commercial success not seen since the original Dawn of the Dead (1978). They were followed by other zombie films such as 28 Weeks Later (2007), Zombieland (2009), Cockneys vs Zombies (2012), and World War Z (2013), as well as zombie-themed graphic novels and television shows such as The Walking Dead and The Returned, and books such as World War Z (2006), Pride and Prejudice and Zombies (2009) and Warm Bodies (2010). The zombie revival trend remained popular across different media until the mid-2010s. Zombie films declined in popularity in the late 2010s, but zombie video games have remained popular, as seen with the commercial success of the Resident Evil 2 remake and Days Gone in 2019.
See also
Genetic engineering in fiction
List of fictional diseases
List of zombie video games
Dino Crisis, another horror series by Capcom
Dead Rising, another zombie-themed series by Capcom
Devil May Cry, another series by Capcom, initially conceived as a Resident Evil game
Onimusha, another series by Capcom with similar gameplay, initially conceived as a Resident Evil game
The Evil Within, another horror game made by Shinji Mikami
References
External links
Undead Bootcamp – Resident Evil Apocalypse (2007 documentary)
Biopunk
Capcom franchises
Experimental medical treatments in fiction
Fiction about genetic engineering
Mythopoeia
Video game franchises introduced in 1996
Human experimentation in fiction
Mutants in fiction
Fiction about bioterrorism
Science fiction franchises | Resident Evil | Engineering,Biology | 6,914 |
24,838,946 | https://en.wikipedia.org/wiki/Quantum%20LC%20circuit | An LC circuit can be quantized using the same methods as for the quantum harmonic oscillator. An LC circuit is a variety of resonant circuit, and consists of an inductor, represented by the letter L, and a capacitor, represented by the letter C. When connected together, an electric current can alternate between them at the circuit's resonant frequency:
where L is the inductance in henries, and C is the capacitance in farads. The angular frequency has units of radians per second. A capacitor stores energy in the electric field between the plates, which can be written as follows:
Where Q is the net charge on the capacitor, calculated as
Likewise, an inductor stores energy in the magnetic field depending on the current, which can be written as follows:
Where is the branch flux, defined as
Since charge and flux are canonically conjugate variables, one can use canonical quantization to rewrite the classical hamiltonian in the quantum formalism, by identifying
and enforcing the canonical commutation relation
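As a quick numerical sanity check (a sketch, not part of the original article; the 1 nH and 1 pF component values are assumed), one can represent the flux and charge operators in a truncated Fock basis and verify the commutation relation above:

```python
import numpy as np

hbar = 1.0545718e-34   # J*s
L, C = 1e-9, 1e-12     # assumed 1 nH and 1 pF (illustrative values)
Z = np.sqrt(L / C)     # characteristic impedance sqrt(L/C)

N = 20                                          # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
ad = a.conj().T                                 # creation operator

# Flux plays the role of position, charge the role of momentum:
Phi = np.sqrt(hbar * Z / 2) * (a + ad)
Q = -1j * np.sqrt(hbar / (2 * Z)) * (a - ad)

comm = Phi @ Q - Q @ Phi
# Away from the truncation edge the commutator equals i*hbar:
print(comm[0, 0] / (1j * hbar))   # -> (1+0j)
```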
One-dimensional harmonic oscillator
Hamiltonian and energy eigenstates
Like the one-dimensional harmonic oscillator problem, an LC circuit can be quantized by either solving the Schrödinger equation or using creation and annihilation operators. The energy stored in the inductor can be looked at as a "kinetic energy term" and the energy stored in the capacitor can be looked at as a "potential energy term".
The Hamiltonian of such a system is:

$$\hat{H} = \frac{\hat{\Phi}^2}{2L} + \frac{\hat{Q}^2}{2C},$$

where $\hat{Q}$ is the charge operator, and $\hat{\Phi}$ is the magnetic flux operator. The first term represents the energy stored in an inductor, and the second term represents the energy stored in a capacitor. In order to find the energy levels and the corresponding energy eigenstates, we must solve the time-independent Schrödinger equation,

$$\hat{H}|\psi\rangle = E|\psi\rangle.$$

Since an LC circuit really is an electrical analog of the harmonic oscillator, solving the Schrödinger equation yields a family of solutions related to the Hermite polynomials, with the energy levels $E_n = \hbar\omega\left(n + \tfrac{1}{2}\right)$.
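The harmonic-oscillator spectrum can be cross-checked numerically. The sketch below (in natural units with $\hbar = L = C = 1$, an assumption made purely for convenience) diagonalizes the flux-basis Hamiltonian by finite differences and recovers the ladder $E_n = \hbar\omega(n + 1/2)$:

```python
import numpy as np

# Natural units: hbar = C = L = 1, so omega = 1/sqrt(L*C) = 1
hbar = C = L = 1.0
omega = 1.0 / np.sqrt(L * C)

# Discretize the flux coordinate and build H = -hbar^2/(2C) d^2/dPhi^2 + Phi^2/(2L)
n, span = 2000, 12.0
phi, d = np.linspace(-span, span, n, retstep=True)
kinetic = -hbar**2 / (2 * C) * (
    np.diag(np.full(n - 1, 1.0), -1)
    - 2 * np.eye(n)
    + np.diag(np.full(n - 1, 1.0), 1)
) / d**2
potential = np.diag(phi**2 / (2 * L))
energies = np.linalg.eigvalsh(kinetic + potential)[:4]

print(energies)                               # ~ [0.5 1.5 2.5 3.5]
print(hbar * omega * (np.arange(4) + 0.5))    # exact harmonic-oscillator ladder
```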
Magnetic flux as a conjugate variable
A completely equivalent solution can be found using magnetic flux as the conjugate variable, where the conjugate "momentum" is equal to capacitance times the time derivative of magnetic flux. The conjugate "momentum" is really the charge:

$$Q = C\frac{d\Phi}{dt}.$$

Using Kirchhoff's junction rule, the following relationship can be obtained:

$$C\frac{dV}{dt} + \frac{\Phi}{L} = 0.$$

Since $V = \frac{d\Phi}{dt}$, the above equation can be written as follows:

$$C\frac{d^2\Phi}{dt^2} + \frac{\Phi}{L} = 0.$$

Converting this into a Hamiltonian, one can develop a Schrödinger equation as follows:

$$-\frac{\hbar^2}{2C}\frac{\partial^2\psi}{\partial\Phi^2} + \frac{\Phi^2}{2L}\psi = E\psi,$$

where $\psi(\Phi)$ is a function of magnetic flux.
Quantization of coupled LC circuits
Two inductively coupled LC circuits have a non-zero mutual inductance. This is equivalent to a pair of harmonic oscillators with a kinetic coupling term.
The Lagrangian for an inductively coupled pair of LC circuits is as follows:

$$\mathcal{L} = \frac{L_1\dot{q}_1^2}{2} + \frac{L_2\dot{q}_2^2}{2} + M\dot{q}_1\dot{q}_2 - \frac{q_1^2}{2C_1} - \frac{q_2^2}{2C_2}.$$

As usual, the Hamiltonian is obtained by a Legendre transform of the Lagrangian.

Promoting the observables to quantum mechanical operators yields the following Schrödinger equation.

One cannot proceed further using the above coordinates because of the coupled term. However, by transforming the wave function from a function of both charges to a function of the charge difference $q_d = q_1 - q_2$ and a coordinate somewhat analogous to a "center of mass", the above Hamiltonian can be solved using the separation of variables technique.
The CM coordinate is as seen below:
The Hamiltonian under the new coordinate system is as follows:
In the above equation is equal to and equals the reduced inductance.
The separation of variables technique yields two equations, one for the "CM" coordinate that is the differential equation of a free particle, and the other for the charge difference coordinate, which is the Schrödinger equation for a harmonic oscillator.
The solution for the first differential equation once the time dependence is appended resembles a plane wave, while the solution of the second differential equation is seen above.
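A minimal numerical illustration of the coupled problem (component values assumed for the example) treats the inductance matrix as the "kinetic" matrix and the inverse capacitances as the "potential" matrix; the normal-mode frequencies then follow from a generalized eigenvalue problem:

```python
import numpy as np

# Assumed, identical circuits with weak inductive coupling (illustrative values)
L1 = L2 = 1e-9      # H
C1 = C2 = 1e-12     # F
M = 0.2e-9          # H, mutual inductance (|M| < sqrt(L1*L2))

T = np.array([[L1, M], [M, L2]])          # "kinetic" matrix (inductances)
V = np.diag([1.0 / C1, 1.0 / C2])         # "potential" matrix (inverse capacitances)

# Small oscillations: V q = w^2 T q  ->  eigenvalues of T^-1 V are w^2
w2 = np.linalg.eigvals(np.linalg.solve(T, V))
freqs = np.sort(np.sqrt(w2.real))
print(freqs)                    # two normal-mode frequencies, rad/s

# Uncoupled check: both modes reduce to 1/sqrt(L*C) when M = 0
print(1 / np.sqrt(L1 * C1))
```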
Hamiltonian mechanics
Classical case
Stored energy (Hamiltonian) for the classical LC circuit:

$$H = \frac{q^2}{2C} + \frac{p^2}{2L}.$$

Hamilton's equations:

$$\dot{q} = \frac{\partial H}{\partial p} = \frac{p}{L}, \qquad \dot{p} = -\frac{\partial H}{\partial q} = -\frac{q}{C},$$

where $q$ is the stored capacitor charge (or electric flux), $p$ the magnetic momentum (magnetic flux), $v = q/C$ the capacitor voltage, $i = p/L$ the inductor current, and $t$ the time variable.

Nonzero initial conditions: $q(0) = q_0$, $p(0) = 0$.

We shall have the oscillation frequency

$$\omega_0 = \frac{1}{\sqrt{LC}},$$

and the wave impedance of the LC circuit (without dissipation):

$$\rho = \sqrt{\frac{L}{C}}.$$

Solutions of Hamilton's equations: we shall have the following values of charge, magnetic flux and energy:

$$q(t) = q_0\cos(\omega_0 t), \qquad p(t) = -q_0\,\rho\sin(\omega_0 t), \qquad H = \frac{q_0^2}{2C}.$$
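The oscillatory solutions can also be recovered by directly integrating Hamilton's equations, as in the following sketch (component values and initial conditions assumed); a symplectic integrator keeps the conserved energy honest over a full period:

```python
import numpy as np

L, C = 1e-9, 1e-12                   # assumed component values
w0 = 1 / np.sqrt(L * C)
q0, p0 = 1e-12, 0.0                  # initial charge (C) and flux (Wb)

# Symplectic (kick-drift-kick) integration of dq/dt = p/L, dp/dt = -q/C
dt = 2 * np.pi / w0 / 1000
q, p = q0, p0
for _ in range(1000):                # exactly one oscillation period
    p -= 0.5 * dt * q / C
    q += dt * p / L
    p -= 0.5 * dt * q / C

print(q / q0)                        # ~1: back to the start after one period
print((q**2 / (2*C) + p**2 / (2*L)) / (q0**2 / (2*C)))   # energy ratio ~1
```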
Definition of the phasor
In the general case the wave amplitudes can be defined in the complex space
where .
,
where – electric charge at zero time, capacitance area.
,
where – magnetic flux at zero time,
inductance area.
Note that, at the equal area elements
we shall have the following relationship for the wave impedance:
.
Wave amplitude and energy could be defined as:
.
Quantum case
In the quantum case we have the following definition for the momentum operator:

$$\hat{p} = -i\hbar\frac{\partial}{\partial q}.$$

The momentum and charge operators produce the following commutator:

$$[\hat{q}, \hat{p}] = i\hbar.$$

The amplitude operator can be defined as:

$$\hat{a} = \sqrt{\frac{\rho}{2\hbar}}\left(\hat{q} + \frac{i\hat{p}}{\rho}\right),$$

and the phasor:

$$\hat{a}(t) = \hat{a}\,e^{-i\omega_0 t}.$$

Hamilton's operator will be:

$$\hat{H} = \hbar\omega_0\left(\hat{a}^\dagger\hat{a} + \frac{1}{2}\right).$$

Amplitude commutators:

$$[\hat{a}, \hat{a}^\dagger] = 1.$$

Heisenberg uncertainty principle:

$$\Delta q\,\Delta p \geq \frac{\hbar}{2}.$$
Wave impedance of free space
When the wave impedance of the quantum LC circuit takes the value of free space,

$$\rho_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}} = 2\alpha R_K \approx 376.73\ \Omega,$$

where $e$ is the electron charge, $\alpha$ the fine-structure constant, and $R_K = h/e^2$ the von Klitzing constant,

then the "electric" and "magnetic" fluxes at the zero time point will be:

where $\Phi_0 = h/2e$ is the magnetic flux quantum.
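The numerical identity invoked here is easy to verify (the script below uses CODATA values; it is an illustration, not part of the original text):

```python
h = 6.62607015e-34        # Planck constant, J*s (exact)
e = 1.602176634e-19       # elementary charge, C (exact)
alpha = 7.2973525693e-3   # fine-structure constant (CODATA)

R_K = h / e**2            # von Klitzing constant, ~25812.807 ohm
Z0 = 2 * alpha * R_K      # impedance of free space, ~376.730 ohm

print(R_K, Z0)
```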
Quantum LC circuit paradox
General formulation
In the classical case the energy of the LC circuit will be:

$$W = W_C + W_L,$$

where $W_C = \frac{CV^2}{2}$ is the capacitance energy, and $W_L = \frac{LI^2}{2}$ the inductance energy. Furthermore, there are the following relationships between charges (electric or magnetic) and voltages or currents:

$$Q_0 = CV_0, \qquad \Phi_0 = LI_0.$$

Therefore, the maximal values of the capacitance and inductance energies will be:

$$W_C^{\max} = \frac{Q_0^2}{2C}, \qquad W_L^{\max} = \frac{\Phi_0^2}{2L}.$$

Note that the resonance frequency $\omega_0 = 1/\sqrt{LC}$ has nothing to do with the energy in the classical case. But it has the following relationship with energy in the quantum case:

$$W = \hbar\omega_0\left(n + \frac{1}{2}\right).$$

So, in the quantum case, by filling the capacitance with one electron charge,

$$Q_0 = e \qquad \text{and} \qquad W_C = \frac{e^2}{2C}.$$

The relationship between the capacitance energy and the ground state oscillator energy $W_0 = \hbar\omega_0/2$ will then be:

$$\frac{W_C}{W_0} = \frac{e^2/2C}{\hbar\omega_0/2} = \frac{e^2\rho}{\hbar} = \frac{2\pi\rho}{R_K},$$

where $\rho = \sqrt{L/C}$ is the quantum impedance of the LC circuit.

The quantum impedance of the quantum LC circuit could in practice be of two types:
So, the energy relationships will be:
and that is the main problem of the quantum LC circuit: energies stored on capacitance and inductance are not equal to the ground state energy of the quantum oscillator.
This energy problem produces the quantum LC circuit paradox (QLCCP).
Possible solution
A simple solution of the QLCCP could be found in the following way. Yakymakha (1989) (eqn. 30) proposed the following DOS quantum impedance definition:

where $\Phi$ is the magnetic flux, and

$q$ the electric flux.

So, there are no electric or magnetic charges in the quantum LC circuit, but only electric and magnetic fluxes. Therefore, not only in the DOS LC circuit, but in the other LC circuits too, there are only electromagnetic waves. Thus, the quantum LC circuit is the minimal geometrical-topological unit of the quantum waveguide, in which there are no electric or magnetic charges, but only electromagnetic waves. Now one should consider the quantum LC circuit as a "black wave box" (BWB), which has no electric or magnetic charges, but waves. Furthermore, this BWB could be "closed" (as in the Bohr atom or in the vacuum for photons), or "open" (as for the QHE and the Josephson junction). So, the quantum LC circuit should have a BWB and "input–output" supplements. The total energy balance should be calculated with consideration of the "input" and "output" devices. Without "input–output" devices, the energies "stored" on capacitances and inductances are virtual or "characteristic", as in the case of characteristic impedance (without dissipation). Approaches close to this one include Devoret (2004), who considers Josephson junctions with quantum inductance; Datta's impedance of Schrödinger waves (2008); and Tsu (2008), who considers quantum waveguides.
Explanation for DOS quantum LC circuit
As presented below, the resonance frequency for the QHE is:

where $\omega_c = eB/m^*$ is the cyclotron frequency, and

The scaling current for the QHE will be:

Therefore, the inductance energy will be:

So, for the quantum magnetic flux $\Phi_0 = h/2e$, the inductance energy is half as much as the ground state oscillation energy. This is due to the spin of the electron (there are two electrons per Landau level on the same quantum area element). Therefore, the inductance/capacitance energy considers the total Landau level energy per spin.
Explanation for "wave" quantum LC circuit
By analogy to the DOS LC circuit, we have

a value two times smaller due to the spin. But here there is a new dimensionless fundamental constant:

which considers the topological properties of the quantum LC circuit. This fundamental constant first appeared in the Bohr atom, in the Bohr radius:

$$a_0 = \frac{\lambda_0}{2\pi\alpha},$$

where $\lambda_0$ is the Compton wavelength of the electron.

Thus, the wave quantum LC circuit has no charges in it, but electromagnetic waves only. So the capacitance and inductance "characteristic energies" are

times less than the total energy of the oscillator. In other words, charges "disappear" at the "input" and are "generated" at the "output" of the wave LC circuit, adding energies to keep the balance.
Total energy of quantum LC circuit
Energy stored on the quantum capacitance:
Energy stored on the quantum inductance:
Resonance energy of the quantum LC circuit:
Thus, the total energy of the quantum LC circuit should be:
In the general case, the resonance energy could be due to the "rest mass" of the electron, the energy gap for the Bohr atom, etc.

However, the energy stored on the capacitance is due to the electric charge. Actually, for the free electron and Bohr atom LC circuits we have quantized electric fluxes, equal to the electronic charge,

$$q_0 = e.$$

Furthermore, the energy stored on the inductance is due to the magnetic momentum. Actually, for the Bohr atom we have the Bohr magneton:

$$\mu_B = \frac{e\hbar}{2m_e}.$$

In the case of the free electron, the Bohr magneton will be the same as for the Bohr atom.
Applications
Electron as LC circuit
Electron capacitance could be presented as the spherical capacitor:
where electron radius and Compton wavelength.
Note that this electron radius is consistent with the standard definition of spin. Actually, the rotational momentum of the electron is:

where the electron spin value $\hbar/2$ is considered.
Spherical inductance of electron:
Characterictic impedance of electron:
Resonance frequency of electron LC circuit:
Induced electric flux on electron capacitance:
Energy, stored on electron capacitance:
where $W_0 = m_ec^2$ is the "rest energy" of the electron. So, the induced electric flux will be:
Thus, through electron capacitance we have quantized electric flux, equal to the electron charge.
Magnetic flux through inductance:
Magnetic energy, stored on inductance:
So, induced magnetic flux will be:
where $\Phi_0 = h/2e$ is the magnetic flux quantum. Thus, the magnetic flux through the electron inductance is not quantized.
Bohr atom as LC circuit
Bohr radius:

$$a_0 = \frac{\lambda_0}{2\pi\alpha},$$

where $\lambda_0$ is the Compton wavelength of the electron, and $\alpha$ the fine-structure constant.
Bohr atomic surface:
.
Bohr inductance:
.
Bohr capacitance:
.
Bohr wave impedance:
Bohr angular frequency:
where Bohr wavelength for the first energy level.
Induced electric flux of the Bohr first energy level:
Energy, stored on the Bohr capacitance:
where is the Bohr energy. So, induced electric flux will be:
Thus, through the Bohr capacitance we have quantized electric flux, equal to the electron charge.
Magnetic flux through the Bohr inductance:
So, induced magnetic flux will be:
Thus, the magnetic flux through the Bohr inductance is not quantized.
Photon as LC circuit
Photon "resonant angular frequency":
Photon "wave impedance":
Photon "wave inductance":
Photon "wave capacitance":
Photon "magnetic flux quantum":
Photon "wave current":
Quantum Hall effect as LC circuit
In the general case the two-dimensional density of states (DOS) in a solid could be defined by the following:

$$D_{2D} = \frac{m^*}{\pi\hbar^2},$$

where $m^*$ is the current carriers' effective mass in the solid and $m_e$ the electron mass; a dimensionless parameter relating the two accounts for the band structure of the solid. So, the quantum inductance can be defined as follows:
,
where – the ‘’ideal value’’ of quantum inductance at and another ideal quantum inductance:
, (3)
where magnetic constant,
magnetic "fine-structure constant" (p. 62), fine-structure constant and Compton wavelength of electron, first defined by Yakymakha (1994) in the spectroscopic investigations of the silicon MOSFETs.
Since defined above quantum inductance is per unit area, therefore its absolute value will be in the QHE mode:
,
where the carrier concentration is:
,
and is the Planck constant. By analogically, the absolute value of the quantum capacitance will be in the QHE mode:
,
where
,
is DOS definition of the quantum capacitance according to Luryi, – quantum capacitance ‘’ideal value’’ at , and other quantum capacitance:
,
where dielectric constant, first defined by Yakymakha (1994) in the spectroscopic investigations of the silicon MOSFETs.
The standard wave impedance definition for the QHE LC circuit could be presented as:
,
where $R_K = h/e^2$ is the von Klitzing constant for resistance.
The standard resonant frequency definition for the QHE LC circuit could be presented as:
,
where $\omega_c = eB/m^*$ is the standard cyclotron frequency in the magnetic field B.
Hall scaling current quantum will be
,
where Hall angular frequency.
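For orientation, the basic QHE quantities appearing above can be evaluated directly; in the sketch below the field strength and the effective mass are assumed values chosen only for illustration:

```python
h = 6.62607015e-34        # Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg

B = 10.0                  # assumed magnetic field, tesla
m_eff = 0.19 * m_e        # assumed effective mass (Si-like; illustrative)

w_c = e * B / m_eff                          # cyclotron angular frequency, rad/s
R_K = h / e**2                               # von Klitzing constant, ohm
plateaus = [R_K / nu for nu in (1, 2, 4)]    # quantized Hall resistances

print(w_c)        # ~9.3e12 rad/s
print(plateaus)   # ~[25812.8, 12906.4, 6453.2] ohm
```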
Josephson junction as LC circuit
Electromagnetic induction (Faraday) law:

$$V = \frac{d\Phi}{dt} = L_J\frac{dI_J}{dt},$$

where $\Phi$ is the magnetic flux, $L_J$ the Josephson junction quantum inductance, and $I_J$ the Josephson junction current.

The DC Josephson equation for the current:

$$I_J = I_c\sin\phi,$$

where $I_c$ is the Josephson scale for the current (the critical current), and $\phi$ the phase difference between the superconductors.

The current derivative with respect to the time variable will be:

$$\frac{dI_J}{dt} = I_c\cos\phi\cdot\frac{d\phi}{dt}.$$

The AC Josephson equation:

$$\frac{d\phi}{dt} = \frac{2e}{\hbar}V = \frac{2\pi V}{\Phi_0},$$

where $\hbar$ is the reduced Planck constant, $\Phi_0 = h/2e$ the Josephson magnetic flux quantum, and $e$ the electron charge.

Combining the equations for the derivatives yields the junction voltage:

$$V = \frac{\hbar}{2eI_c\cos\phi}\cdot\frac{dI_J}{dt} = L_J\frac{dI_J}{dt},$$

where

$$L_J = \frac{\hbar}{2eI_c\cos\phi}$$

is the Devoret (1997) quantum inductance.

The AC Josephson equation for the angular frequency:

$$\omega_J = \frac{2eV}{\hbar}.$$

The resonance frequency for the Josephson LC circuit:

$$\omega = \frac{1}{\sqrt{L_JC_J}},$$

where $C_J$ is the Devoret quantum capacitance, which can be defined as:

Quantum wave impedance of the Josephson junction:

$$Z_J = \sqrt{\frac{L_J}{C_J}}.$$
For mV and A wave impedance will be
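Putting representative numbers into the relations above gives a feel for the scales involved; in the following sketch the critical current and junction capacitance are assumed values, not figures from the original text:

```python
import numpy as np

hbar = 1.0545718e-34      # J*s
e = 1.602176634e-19       # C

I_c = 1e-6     # assumed critical current, 1 uA
C_J = 1e-15    # assumed junction capacitance, 1 fF (illustrative)

# Josephson inductance near phi = 0: L_J = hbar / (2 e I_c cos(phi))
L_J = hbar / (2 * e * I_c)            # ~0.33 nH for these inputs
w_p = 1 / np.sqrt(L_J * C_J)          # plasma (resonance) frequency, rad/s
Z = np.sqrt(L_J / C_J)                # wave impedance, ohm

print(L_J, w_p / (2 * np.pi), Z)      # henries, hertz, ohms
```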
Flat atom as LC circuit
Quantum capacitance of flat atom (FA):
F,
where .
Quantum inductance of FA:
H.
Quantum area element of FA:
m2.
Resonance frequency of FA:
rad/s.
Characteristic impedance of FA:
where is the impedance of free space.
Total electric charge on the first energy level of FA:
,
where is the Bohr quantum area element. The first FA was discovered by Yakymakha (1994) as a very-low-frequency resonance in p-channel MOSFETs. Contrary to the spherical Bohr atom, the FA has a hyperbolic dependence on the energy level number (n).
See also
LC circuit
Harmonic oscillator
Quantum harmonic oscillator
Quantum Electromagnetic Resonator
References
Sources
W. H. Louisell, "Quantum Statistical Properties of Radiation" (Wiley, New York, 1973)
Michel H. Devoret. Quantum Fluctuations in Electrical Circuits.
Fan Hong-yi, Pan Xiao-yin. Chin. Phys. Lett. No. 9 (1998) 625.
Xu, Xing-Lei; Li, Hong-Qi; Wang, Ji-Suo. Quantum fluctuations of mesoscopic damped double resonance RLC circuit with mutual capacitance inductance coupling in thermal excitation state. Chinese Physics, vol. 16, issue 8, pp. 2462–2470 (2007).
Hong-Qi Li, Xing-Lei Xu and Ji-Suo Wang. Quantum Fluctuations of the Current and Voltage in Thermal Vacuum State for Mesoscopic Quartz Piezoelectric Crystal.
Boris Ya. Zel'dovich. Impedance and parametric excitation of oscillators. UFN, 2008, v. 178, no. 5.
Quantum models
Quantum information science | Quantum LC circuit | Physics | 3,406 |
25,133,525 | https://en.wikipedia.org/wiki/Catalytic%20oxidation | Catalytic oxidation are processes that rely on catalysts to introduce oxygen into organic and inorganic compounds. Many applications, including the focus of this article, involve oxidation by oxygen. Such processes are conducted on a large scale for the remediation of pollutants, production of valuable chemicals, and the production of energy.
Oxidations of organic compounds
Carboxylic acids, ketones, epoxides, and alcohols are often obtained by partial oxidation of alkanes and alkenes with dioxygen. These intermediates are essential to the production of consumer goods. Partial oxidation is challenging because the most favored reaction between oxygen and hydrocarbons is combustion.
Oxidations of inorganic compounds
Sulfuric acid is produced from sulfur trioxide which is obtained by oxidation of sulfur dioxide. Food-grade phosphates are generated via oxidation of white phosphorus. Carbon monoxide in automobile exhaust is converted to carbon dioxide in catalytic converters.
Examples
Industrially important examples include both inorganic and organic substrates.
Catalysts
Oxidation catalysis is conducted by both heterogeneous catalysis and homogeneous catalysis. In the heterogeneous processes, gaseous substrate and oxygen (or air) are passed over solid catalysts. Typical catalysts are platinum, and redox-active oxides of iron, vanadium, and molybdenum. In many cases, catalysts are modified with a host of additives or promoters that enhance rates or selectivities.
Important homogeneous catalysts for the oxidation of organic compounds are carboxylates of cobalt, iron, and manganese. To confer good solubility in the organic solvent, these catalysts are often derived from naphthenic acids and ethylhexanoic acid, which are highly lipophilic. These catalysts initiate radical chain reactions, autoxidation that produce organic radicals that combine with oxygen to give hydroperoxide intermediates. Generally the selectivity of oxidation is determined by bond energies. For example, benzylic C-H bonds are replaced by oxygen faster than aromatic C-H bonds.
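The effect of bond energies on selectivity can be caricatured with a simple Arrhenius ratio. In the sketch below, the 40 kJ/mol barrier difference between benzylic and aromatic C-H abstraction and the 400 K temperature are assumed values for illustration, not measured kinetics:

```python
import math

# Relative hydrogen-abstraction rates from an assumed difference in
# activation energy (illustrative; real barriers depend on the catalyst).
R = 8.314        # gas constant, J/(mol*K)
T = 400.0        # K, an assumed autoxidation temperature

# Assume abstraction from a benzylic C-H is easier than from an aromatic
# C-H by ~40 kJ/mol, loosely tracking the difference in bond strengths.
delta_Ea = 40e3  # J/mol

ratio = math.exp(delta_Ea / (R * T))
print(f"benzylic/aromatic rate ratio ~ {ratio:.1e}")   # ~1.7e5 at 400 K
```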
Fine chemicals
Many selective oxidation catalysts have been developed for producing fine chemicals of pharmaceutical or academic interest. Nobel Prize–winning examples are the Sharpless epoxidation and the Sharpless dihydroxylation.
Biological catalysis
Catalytic oxidations are common in biology, especially since aerobic life subsists on energy obtained by oxidation of organic compounds by air. In contrast to the industrial processes, which are optimized for producing chemical compounds, energy-producing biological oxidations are optimized to produce energy. Many metalloenzymes mediate these reactions.
Fuel cells, etc
Fuel cells rely on oxidation of organic compounds (or hydrogen) using catalysts. Catalytic heaters generate flameless heat from a supply of combustible fuel and oxygen from air as oxidant.
Challenges
The foremost challenge in catalytic oxidation is the conversion of methane to methanol. Most methane is stranded, i.e. not located near metropolitan areas. Consequently, it is flared (converted to carbon dioxide). One challenge is that methanol is more easily oxidized than is methane.
Catalytic oxidation with oxygen or air is a major application of green chemistry. There are however many oxidations that cannot be achieved so straightforwardly. The conversion of propylene to propylene oxide is typically effected using hydrogen peroxide, not oxygen or air.
References
External links
https://archive.today/20130626171216/https://portal.navfac.navy.mil/portal/page/portal/NAVFAC/NAVFAC_WW_PP/NAVFAC_NFESC_PP/ENVIRONMENTAL/ERB/THERMCATOX
http://www.frtr.gov/matrix2/section4/4-59.html
Catalysis | Catalytic oxidation | Chemistry | 797 |
38,206,335 | https://en.wikipedia.org/wiki/Pignora%20imperii | The pignora imperii ("pledges of rule") were objects that were supposed to guarantee the continued imperium of Ancient Rome. One late source lists seven. The sacred tokens most commonly regarded as such were:
The Palladium, the wooden image of Minerva (Greek Athena) that the Romans claimed had been rescued from the fall of Troy and was in the keeping of the Vestals;
The sacred fire of Vesta tended by the Vestals, which was never allowed to go out;
The ancilia, the twelve shields of Mars wielded by his priests, the Salii, in their processions, dating to the time of Numa Pompilius, the second king of Rome.
In the later Roman Empire, the maintenance of the Altar of Victory in the Curia took on a similar symbolic value for those such as Symmachus who were trying to preserve Rome's religious traditions in the face of Christian hegemony. The extinguishing of the fire of Vesta by the Christian emperor Theodosius I is one of the events that mark the abolition of Rome's ancestral religion and the imposition of Christianity as a state religion that excluded all others.
In late antiquity, some narratives of the founding of Constantinople claim that Constantine I, the first emperor to convert to Christianity, transferred the pignora imperii to the new capital. Though the historicity of this transferral may be in doubt, the claim indicates the symbolic value of the tokens.
Servius's list
The 4th-century scholar Servius notes in his commentary to Vergil's Aeneid that "there were seven tokens (pignora) which maintain Roman rule (imperium Romanum)," and gives the following list:
the needle of the Mother of the Gods (Acus Matris Deum), kept in the Temple of Cybele on the Palatine Hill;
the terracotta four-horse chariot brought from Veii (Quadriga Fictilis Veientanorum), supposed to have been commissioned by the last king of Rome, Tarquinius Superbus, which was displayed on the roof of the Temple of Jupiter Optimus Maximus on the Capitolium;
the ashes of Orestes (Cineres Orestis), kept at the same temple;
the scepter of Priam (Sceptrum Priami), brought to Rome by Aeneas;
the veil of Ilione (Velum Ilionae), daughter of Priam, another Trojan token attributed to Aeneas;
the Palladium, kept in the Temple of Vesta;
the Ancile, the sacred shield of Mars Gradivus given to Numa Pompilius, kept in the Regia hidden among eleven other identical copies to confuse would-be thieves. All twelve shields were ritually paraded each year through Rome by the Salii during the Agonum Martialis.
Classicist Alan Cameron notes that three of these supposed tokens were fictional (the ashes, scepter, and veil) and are not named in any other sources as sacred guarantors of Rome. The other four objects are widely attested in Latin literature but have left no archaeological trace. During his 1730 excavations of the Palatine Hill, Francesco Bianchini noted a stone matching the description of Cybele's needle. Its ultimate fate is unknown, however, and it was likely destroyed.
See also
Translatio imperii
Palladium (protective image), the general concept
References
Roman mythology
Religious objects | Pignora imperii | Physics | 728 |
72,337,786 | https://en.wikipedia.org/wiki/HD%20201772 | HD 201772, also known as HR 8104, is a yellowish-white hued star located in the southern constellation Microscopium. It has an apparent magnitude of 5.26, making it one of the brighter members of this generally faint constellation. The object is located relatively close at a distance of 111 light-years based on Gaia DR3 parallax measurements but is approaching closer with a heliocentric radial velocity of . At its current distance, HD 201772's brightness is diminished by 0.11 magnitudes due to interstellar dust.
The star has been given multiple stellar classifications over the years. It has been assigned the luminosity classes of a subgiant and main-sequence star (IV/V; IV-V) and of a dwarf (V). Most sources generally agree that it is an F5 star. Richard O. Gray and colleagues give HD 201772 a class of F6 V Fe−0.9 CH−0.5, which indicates that it is an F-type main-sequence star with an underabundance of iron and CH molecules in its spectrum.
It has 1.47 times the mass of the Sun and an enlarged radius of . It radiates 7.8 times the luminosity of the Sun from its photosphere at an effective temperature of . At an age of 2.5 billion years, HD 201772 is currently 1.33 magnitudes above the ZAMS, consistent with a star that is evolving off the main sequence. The star has an iron abundance 66% that of the Sun, making it metal deficient. It spins modestly with a projected rotational velocity of .
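For readers who want to reproduce the luminosity figure, the Stefan-Boltzmann law suffices; the radius and temperature in the sketch below are assumed placeholders (the article's own values are not preserved in this copy), chosen only to be consistent with an F-type subgiant of about 7.8 solar luminosities:

```python
import math

# Stefan-Boltzmann: L = 4*pi*R^2 * sigma * T_eff^4, expressed in solar units.
sigma = 5.670374419e-8                   # W m^-2 K^-4
R_sun, L_sun = 6.957e8, 3.828e26         # m, W

R = 2.24 * R_sun      # assumed radius (placeholder)
T_eff = 6450.0        # assumed effective temperature, K (placeholder)

L = 4 * math.pi * R**2 * sigma * T_eff**4
print(L / L_sun)      # ~7.8 for these inputs
```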
HD 201772 is suspected to be a spectroscopic binary consisting of the subgiant described above and an ordinary F6 V star with a mass of . However, no separation or orbital period has been determined for the pair, and the apparent companion may be the result of spectrum contamination, so HD 201772 is more likely to be a solitary star.
References
F-type subgiants
F-type main-sequence stars
Microscopium
Microscopii, 56
CD-39 14152
201772
104738
8104 | HD 201772 | Astronomy | 452 |
52,361,732 | https://en.wikipedia.org/wiki/IFRS%2017 | IFRS 17 is an International Financial Reporting Standard that was issued by the International Accounting Standards Board in May 2017. It will replace IFRS 4 on accounting for insurance contracts and has an effective date of 1 January 2023. The original effective date was meant to be 1 January 2021. In November 2018 the International Accounting Standards Board proposed to delay the effective date by one year to 1 January 2022. In March 2020, the International Accounting Standards Board further deferred the effective date to 1 January 2023.
List of insurance contracts to which IFRS 17 applies:
Insurance and reinsurance contracts issued by an insurer;
Reinsurance contracts held by an insurer;
Investment contracts with discretionary participation features (DPF) issued by an insurer, provided the insurer also issues insurance contracts.
Under the IFRS 17 general model, insurance contract liabilities will be calculated as the expected present value of future insurance cash flows with a provision for non-financial risk. The discount rate will reflect current time value of money adjusted for financial risk. If the risk-adjusted expected present value of future cash flows would produce a gain at the time a contract is recognized, the model would also require a "contractual service margin" to offset the day 1 gain. The contractual service margin would be released to insurance revenue over the life of the contract. There would also be a new income statement presentation for insurance contracts, including a conceptual definition of revenue, and additional disclosure requirements.
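A stylized numerical sketch of the day-1 mechanics described above; every figure is invented for illustration, the risk adjustment is taken as a given number, and the straight-line release is a simplification of the standard's coverage-unit approach:

```python
# Stylized IFRS 17 general-model sketch: fulfilment cash flows and the
# contractual service margin (CSM) at initial recognition.
# All numbers are invented; straight-line release is a simplification.
premium = 1000.0                          # received at inception
expected_claims = [300.0, 300.0, 250.0]   # expected outflows, years 1-3
discount_rate = 0.03                      # current rate
risk_adjustment = 40.0                    # provision for non-financial risk

pv_outflows = sum(c / (1 + discount_rate) ** (t + 1)
                  for t, c in enumerate(expected_claims))
fulfilment_cash_flows = pv_outflows + risk_adjustment - premium

# A negative fulfilment value is a day-1 gain; the CSM offsets it so that
# no profit is recognized at inception and it is released over coverage.
csm = max(0.0, -fulfilment_cash_flows)
annual_release = csm / len(expected_claims)
print(f"CSM at inception: {csm:.2f}; released {annual_release:.2f}/year")
```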
For short-duration insurance contracts, insurers are permitted to use a simplified method known as the Premium Allocation Approach ('PAA'). Under this simplified method, the insurance liability is similar to the unearned premium (less insurance acquisition cash flows).
Some insurance contracts include participation features whereby the entity shares the performance of underlying items with policyholders to such an extent that the remaining profit of the insurer has the character of a contractual fee. IFRS 17 has a specific accounting approach for such participating contracts, defined as ‘insurance contracts with direct participation features’. That approach is referred to as the variable fee approach (‘VFA’).
Criticism
Several features of IFRS 17 have been criticized by preparers. One example is the volatility caused by applying current rates for time value of money. IFRS 17 permits presenting the effects of changes in the discount rate under Other Comprehensive Income to eliminate the volatility from the P&L.
Former IASB chairman Hans Hoogervorst regarded the use of a current discount rate as one of the benefits of the new standard, stating that by doing otherwise "the devastating impact of the current low-interest-rate environment on long-term obligations is not nearly as visible in the insurance industry as it is in the defined benefit pension schemes of many companies." He also stated that current discount rates would "increase comparability between insurance companies and between insurance and other parts of the financial industry, such as banks and asset management." Other benefits Hoogervorst saw in the new standard were increased consistency across companies in accounting for insurance contracts and a more theoretically valid measurement of revenue.
Adoption
In November 2021 the EU adopted IFRS 17 with an exemption regarding the limitation on aggregating contracts for purposes of subsequent measurement of the contractual service margin, the so-called groups of insurance contracts; under IFRS 17, contracts may only be aggregated in groups issued not more than one year apart. Application of this limitation is optional in the EU.
2020 Amendments
On 26 June 2019, the IASB released an exposure draft proposing several amendments. Comments on the amendments were open for three months, closing on 25 September 2019. In total, 123 submissions were received. In June 2020 the IASB adopted the final set of amendments and deferred the effective date of the standard to January 1, 2023.
External links
IFRS17 text
IFRS17 on ifrs.org
References
International Financial Reporting Standards
Actuarial science
Insurance | IFRS 17 | Mathematics | 793 |
19,291,172 | https://en.wikipedia.org/wiki/Baikonur%20Cosmodrome%20Site%20200 | Site 200 at the Baikonur Cosmodrome is a launch site used by Proton rockets. It consists of two launch pads, areas 39 and 40. Area 39 is currently (as of 2021) used for Proton-M launches, including commercial flights conducted by International Launch Services. Area 40 is currently (as of 2021) inactive, as it was slated to be rebuilt as a launch site for the Angara rocket. Although the project was relocated to Site 250, Area 40 was not put back into service.
A number of planetary probes have been launched from Site 200. Venera 14, Venera 15, Vega 1, Fobos 1, the failed Mars-96, and ExoMars were launched from area 39. Venera 13, Venera 16, Vega 2, Fobos 2 were launched from Area 40. Area 39 was also the launch site for the core of the Mir space station, along with both Kvant modules, and the Kristall module. Salyut 7 and Granat were launched from Area 40.
On 13 May 2021 the pad was modified to support the launch of Nauka.
References
Baikonur Cosmodrome | Baikonur Cosmodrome Site 200 | Astronomy | 240 |
24,667,008 | https://en.wikipedia.org/wiki/CALERIE | CALERIE (Comprehensive Assessment of Long-term Effects of Reducing Intake of Energy) is a trial currently underway in the U.S. to study the effects of prolonged calorie restriction on healthy human subjects.
The CALERIE study is being carried out at the Pennington Biomedical Research Center (Baton Rouge, Louisiana), the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University (Boston, Massachusetts) and the Washington University School of Medicine (St. Louis, Missouri). It is hoped that caloric restriction reduces the incidence of cardiovascular disease and cancer and leads to a longer life, as has been demonstrated previously in numerous animal studies. CALERIE is the first study to investigate prolonged calorie restriction in healthy humans. Study subjects are selected from people who are not obese (because calorie restriction on obese people is already known to lengthen life, but possibly for different reasons).
A smaller predecessor study ended in 2006. Forty-eight subjects were randomly assigned to a control group and a treatment group; those in the treatment group were put on a 25% calorie reduction over a 6-month period. It was found that the treatment group had lower insulin resistance, lower levels of LDL cholesterol, lower body temperature and blood-insulin levels as well as less oxidative damage to their DNA.
The second, larger, phase of CALERIE began in 2007. The participants are subjected to a 25% calorie restriction over a 2-year period, and several physiological variables are regularly monitored. Participants are paid $5,000 at Tufts and Pennington and $2,400 at Washington University. As of October 2009 the study had 132 participants and was still accepting new ones.
Study subjects have to be highly motivated and organized enough to keep a detailed journal of all foods they eat. Their daily baseline calorie requirements are precisely determined before the trial: in a two-week laboratory test the rate of carbon dioxide production is measured, allowing researchers to compute the number of calories burned. The subjects are then taught a diet of low-energy-density foods, such as vegetables, fruits (especially apples), insoluble fiber and soups. Most subjects reported that they felt hungry for the first few weeks, after which they adjusted to the new diet. Complaints focused on the rigid bookkeeping scheme imposed on them.
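One conventional way to turn such gas-exchange measurements into energy expenditure is the abbreviated Weir equation; the sketch below is illustrative only, and the respiratory quotient and sample VCO2 are assumed values, not figures reported for CALERIE:

```python
# Estimate daily energy expenditure from CO2 production using the
# abbreviated Weir equation. ASSUMPTIONS: the respiratory quotient (RQ)
# and the sample VCO2 are illustrative, not CALERIE study values.
def energy_expenditure_kcal(vco2_l_per_day: float, rq: float = 0.85) -> float:
    vo2 = vco2_l_per_day / rq                     # infer O2 consumption
    return 3.941 * vo2 + 1.106 * vco2_l_per_day   # Weir equation (kcal)

print(round(energy_expenditure_kcal(420.0)))      # ~2412 kcal/day
```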
Results were posted to the clinical trial website in 2018, with a (paywalled) Lancet article published in 2019. MSN published an article based on an interview with Dr. Kraus.
Background
That calorie restriction (CR) might lengthen human lifespan was suggested by various studies on laboratory animals. However, when the studies were extended to long-lived primates (rhesus monkeys), while it was indeed found in a 2012 study by the US National Institute on Aging (NIA) that CR had benefits to immune function, motor coordination, and resistance to sarcopenia, a CR regimen implemented in rhesus monkeys of various ages did not improve survival outcomes.
The authors considered that the study suggested "a separation between health effects, morbidity and mortality", and that "study design, husbandry and diet composition may strongly affect the life-prolonging effect of CR in a long-lived non-human primate".
References
External links
CALERIE web site
Clinical Trials
Gerontology
Diets
Clinical trials | CALERIE | Biology | 685 |
72,230,055 | https://en.wikipedia.org/wiki/Neoproterozoic%20oxygenation%20event | The Neoproterozoic Oxygenation Event (NOE), also called the Second Great Oxidation Event, was a geologic time interval between around 850 and 540 million years ago during the Neoproterozoic era, which saw a very significant increase in oxygen levels in Earth's atmosphere and oceans. Taking place after the end to the Boring Billion, an euxinic period of extremely low atmospheric oxygen spanning from the Statherian period of the Paleoproterozoic era to the Tonian period of the Neoproterozoic era, the NOE was the second major increase in atmospheric and oceanic oxygen concentration on Earth, though it was not as prominent as the Great Oxidation Event (GOE) of the Neoarchean-Paleoproterozoic boundary. Unlike the GOE, it is unclear whether the NOE was a synchronous, global event or a series of asynchronous, regional oxygenation intervals with unrelated causes.
Evidence for oxygenation
Carbon isotopes
From around 850 Mya to around 720 Mya, a time interval roughly corresponding to the Late Tonian, between the end of the Boring Billion and the onset of the Cryogenian “Snowball Earth”, marine deposits record a very significant positive carbon isotope excursion. These elevated δ13C values are believed to be linked to an evolutionary radiation of eukaryotic plankton and enhanced organic burial, which in turn indicate a spike in oxygen production during this interval. Further positive carbon isotope excursions occurred during the Cryogenian. Although several negative carbon isotope excursions, associated with warming events, are known from the Late Tonian all the way up to the Proterozoic-Phanerozoic boundary, the carbon isotope record nonetheless maintains a noticeable positive trend throughout the Neoproterozoic.
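All of the isotope proxies in this and the following subsections use the standard delta notation, a ratio relative to a reference standard expressed in parts per thousand (‰); for carbon, for example:

```latex
\delta^{13}\mathrm{C} = \left(
  \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
  - 1 \right) \times 1000
```

The same construction applies to δ15N, δ34S, δ53Cr, δ98Mo and δ238U with the corresponding isotope pairs.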
Nitrogen isotopes
δ15N data from 750 to 580 million year-old marine sediments hailing from four different Neoproterozoic basins show similar nitrogen isotope ratios to modern oceans, with a mode of +4‰ and a range from −4‰ to +11‰. No significant change is observed across the Cryogenian-Ediacaran boundary, implying that oxygen was already ubiquitous in the global ocean as early as 750 Mya, during the Tonian period.
Sulfur isotopes
Seawater sulfate δ34S values, which saw a gradual increase over most of the Neoproterozoic punctuated by major drops during glaciations, show a significant positive excursion during the Ediacaran, with a corresponding decrease in pyritic δ34S. High fractionation rates between sulfate and sulfide indicate an increase in the availability of sulfate in the water column, which in turn is indicative of increased reaction of pyrite with oxygen. In addition, genetic evidence points to the occurrence of a radiation of non-photosynthetic sulfide-reducing bacteria during the Neoproterozoic. Through bacterial sulfur disproportionation, such bacteria further deplete marine sulfide of heavier sulfur isotopes. Because such bacteria require significant amounts of oxygen to survive, an oxygenation event during the Neoproterozoic raising oxygen concentrations to over 5-18% of modern levels is believed to have been a necessary prerequisite for the diversification of these microorganisms.
Strontium isotopes
δ13C can reliably indicate changes in net primary productivity and oxygenation if the rates of weathering into the oceans and carbon dioxide outgassing remain constant or increase, since a decrease in either of these could cause a positive δ13C excursion through continued preferential biological consumption of carbon-12 by existing communities while the supply of available carbon decreased, without indicating an increase in primary productivity and oxygen production. The ratio of strontium-87 to strontium-86 is used as a determinant of the relative contribution of continental weathering to the ocean's nutrient supply; an increase in this ratio, as observed throughout the Neoproterozoic and into the Cambrian until reaching a peak at the end of the Cambrian, suggests a rise in continental weathering and bolsters evidence from carbon isotope ratios for high oxygenation in this interval of time.
Chromium isotopes
Surface oxidation of Cr(III) to Cr(VI) causes isotopic fractionation of chromium; Cr(VI), typically present in the environment as either chromate or dichromate, has elevated values of δ53Cr, or the ratio of chromium-53 to chromium-52, whereas bacterial reduction of Cr(VI) to Cr(III) is associated with negative chromium isotope excursions. Following the riverine transport of oxidised chromium into the ocean, the reaction reducing Cr(VI) back into Cr(III) and subsequently oxidising ferrous iron into ferric iron is highly efficient at sequestering Cr(VI), as is the precipitation of Cr(III) with ferric oxyhydroxide, meaning that chemically precipitated chromium isotope ratios in sediments abundant in ferric iron accurately reflect seawater chromium isotope ratios at the time of deposition. Because efficient oxidation of Cr(III) to Cr(VI) is only possible in the presence of the catalyst manganese dioxide, which is only stable and abundant at high oxygen fugacities, a positive excursion of δ53Cr indicates an increase in atmospheric oxygen concentrations. Banded iron formations (BIFs) deposited during the Neoproterozoic consistently display highly positive δ53Cr values, from 0.9‰ to 4.9‰, demonstrating the era's oxygenation of the atmosphere. Oxidative chromium cycling began approximately 0.8 Ga, indicating that the rise in oxygen levels began well before the Cryogenian glaciations. Chromium isotopes also show that during the Cryogenian interglacial interval, between the Sturtian and Marinoan glaciations, oxygenation of the ocean and atmosphere was slow and subdued; this interval marked a lull in the NOE.
Molybdenum isotopes
δ98Mo values were slightly higher during the Late Ediacaran than in the Cryogenian or the Early and Middle Ediacaran. This isotopic proxy indicates the level of oxygenation of the Late Ediacaran ocean was comparable to that of Mesozoic oceanic anoxic events.
Uranium isotopes
The very low values of δ238U, commonly used as an isotopic measurement of changes in seawater oxygenation, during much of the Neoproterozoic have been interpreted to reflect progressive oxygenation punctuated by temporary, transient expansions of anoxic and euxinic waters. During the Early Ediacaran, the shift in uranium isotopes occurred in tandem with enrichment in light carbon isotopes.
Causes
Increase in nitrogen fixation
During the Boring Billion, open ocean productivity was very low compared to the Neoproterozoic and Phanerozoic as a result of the absence of planktonic nitrogen-fixing bacteria. The evolution and radiation of nitrogen-fixing bacteria and non-nitrogen-fixing picocyanobacteria capable of occupying marine planktonic niches and consequent changes to the nitrogen cycle during the Cryogenian are believed to be a culprit behind the rapid oxygenation of and removal of carbon dioxide from the atmosphere, which also helps explain the development of extremely severe glaciations that characterised this period of the Neoproterozoic.
Increase in day length
The slowdown of the Earth's rotation and corresponding increase in day length has been suggested as a possible cause of the NOE on the basis of experimental findings that cyanobacterial productivity is higher during longer periods of uninterrupted daylight compared to shorter periods more frequently interrupted by darkness.
Organic carbon burial
The Neoproterozoic saw organic carbon burial occur in large lakes with anoxic bottom waters on a massive scale. As carbon was locked away in sedimentary rock, it was unable to be oxidised, permitting a buildup of atmospheric oxygen.
Phosphorus removal
The increasing diversity of eukaryotes has been proposed as a cause of increased deep ocean oxygenation by means of phosphorus removal from the deep ocean. The evolution of large multicellular organisms led to increased amounts of organic matter sinking to the seafloor (marine snow). This, combined with the evolution of benthic filter feeders (e.g. choanoflagellates and primitive poriferans such as Otavia), is believed to have shifted oxygen demand further down in the water column, which would result in a positive feedback loop wherein phosphorus was removed from the ocean, which reduced productivity and decreased oxygen demand, which in turn led to increasing oxygenation of deep ocean water. Increasingly well-oxygenated oceans enabled further eukaryotic dispersal, which likely acted as a positive feedback loop that accelerated oxygenation.
Consequences
Glaciation
The rapid increase in organic carbon sequestration as a result of the increased rates of global photosynthesis by both cyanobacteria and eukaryotic photoautotrophs (green and red algae), occurring in conjunction with an increase in silicate weathering of continental flood basalts resulting from the breakup of the supercontinent Rodinia, is believed to have been a trigger of the Sturtian and Marinoan glaciations during the Cryogenian, the middle period of the Neoproterozoic.
Biological diversity
During the Tonian, very early multicellular organisms may have evolved and diversified in oxygen "oases" in the deep oceans, which acted as cradles in these early stages of eukaryote evolution. However, the persistence of anoxia and euxinia over the late Tonian despite some increases in oxygen content meant eukaryotic diversity overall remained low. Over the course of the Ediacaran period, the oceans gradually became better oxygenated, with the time interval immediately after the Gaskiers Glaciation displaying evidence of significantly increasing marine oxygen content. The rapid diversification of multicellular life during this geologic period has been attributed by some authors to an increase in oxygen content, enabling the iconic oxygen-consuming multicellular eukaryotes of the Ediacaran biota to become ubiquitous and widespread. Initially restricted to deeper, colder waters that possessed the most dissolved oxygen, metazoan life gradually expanded into warmer zones of the ocean as global oxygen levels rose.
See also
Great Oxidation Event
Avalon explosion
Cambrian explosion
Silurian-Devonian Terrestrial Revolution
References
Neoproterozoic events
Origin of life
Oxygen
Events in the geological history of Earth
Evolution of the biosphere
Meteorological hypotheses | Neoproterozoic oxygenation event | Biology | 2,189 |
14,285,191 | https://en.wikipedia.org/wiki/Certified%20wireless%20network%20administrator | The Certified Wireless Network Administrator (CWNA) is a foundation level certification from the CWNP that measures the ability to administer any wireless LAN. A wide range of topics focusing on the 802.11 wireless LAN technology are covered in the coursework and exam, which is vendor neutral.
Certification track
The Certified Wireless Network Administrator (CWNA) is a foundation level wireless certification for the Certified Wireless Network Professional (CWNP) program. The CWNP next offers three professional level certifications: Certified Wireless Security Professional (CWSP), Certified Wireless Analysis Professional (CWAP) and Certified Wireless Design Professional (CWDP). A candidate can only achieve the expert level CWNE certification after earning the CWNA, CWSP, CWAP and CWDP certifications. A candidate no longer has to pass an exam for the expert level Certified Wireless Network Expert (CWNE) certification. In addition to passing the CWNA, CWSP, CWAP and CWDP a candidate must also provide: 3 professional endorsements, 3 years of documented enterprise Wi-Fi experience, 2 other current valid networking certifications and documentation of 3 enterprise Wi-Fi projects the candidate has participated in or led.
CWNA requirements
The main subject areas covered by the CWNA are as follows:
Radio Technologies
Antenna Concepts
Wireless LAN hardware and software
Network Design Installation and Management
Wireless Standards and Organization
802.11 Network Architecture
Wireless LAN Security
Troubleshooting
How to perform site surveys
These subjects are covered at an introductory level in the CWNA coursework and examination. The other certifications specialize in one or more of these subjects.
Recertification
The CWNA certification is valid for three years. The certification may be renewed by retaking the CWNA exam or by passing one of the 3 professional level certification exams (CWSP, CWAP or CWDP).
See also
Professional certification (Computer technology)
References
External links
Sybex Publishing Study Guide: https://www.wiley.com/go/cwnasg
Wireless networking
Information technology qualifications | Certified wireless network administrator | Technology,Engineering | 403 |
44,567,225 | https://en.wikipedia.org/wiki/Furcellaria | Furcellaria is a genus of red algae. It is a monotypic genus, the only species being Furcellaria lumbricalis, which has commercial importance as a raw material for carrageenan production. It is mainly harvested from the waters of Denmark and Canada.
It grows on submerged rocks to a depth of about , but it can also grow in large floating mats, which are easier to harvest.
F. lumbricalis is also an important habitat-forming seaweed, forming underwater "belts" often just below those of bladderwrack. These belts provide spawning habitat for many fish species, and for this reason some governments place regulations on the harvesting of this seaweed.
Description
Furcellaria lumbricalis is a common red macroalgal species. The species has two different ecotypes – attached and loose-lying (drifting) thallus forms (previously also known as Furcellaria fastigiata f. aegagropila). The attached form of F. lumbricalis is a widely distributed sublittoral species on both sides of the North Atlantic.
The attached form grows typically as an epilith on stable hard substrates such as stony bottoms, boulder fields and rocks. It is a perennial macroalga with a life-span of up to 10 years that tolerates salinities down to 3.6 psu. Although the species has been reported to grow up to 30 m deep, its main occurrence is between 8−12 m. F. lumbricalis forms monotypic dense meadows in the central and northern Baltic Sea, where most other perennial red algae are unable to withstand the low salinity.
Over the last half a century, communities of loose-lying F. lumbricalis in the Kattegat, Denmark and Puck Lagoon, Poland have disappeared due to overharvesting or eutrophication. In other places, the species is too sparsely distributed to be harvested industrially. The drifting forms of F. lumbricalis and Coccotylus truncatus form a loose-lying algal stratum in Kassari Bay, which is the most abundant such community in the Baltic Sea. Because of its unique location and relatively high biomass, it has been used for furcellaran production since the mid-1960s and is an example of sustainable bioresource utilization.
The density of the stratum (average depth 7.5 m) appears to vary greatly from year to year (Table 1), ranging between 100,000 and 200,000 tonnes by wet weight. The variation may be a result of meteorological factors such as harsher winters or hotter summers, storms and the like.
Distribution
It is commonly found near the coasts of eastern Canada and the British Isles, and is the only widely distributed red algal species in the Baltic Sea. It is also found from northern Russia, Iceland, the Faroes and Norway to France.
Quantitative characteristics
Key quantitative characteristics of the loose-lying Furcellaria-Coccotylus community in the Kassari Bay monitored by the Estonian Marine Institute.
Biomolecules from Furcellaria lumbricalis
Due to the polysaccharides in the cell walls, F. lumbricalis is grouped with other commercially important carrageenophytes (red algae that produce carrageenans).
From F. lumbricalis a polysaccharide called furcellaran (hybrid β/κ-carrageenan) can be extracted. Furcellaran is non-stoichiometrically undersulphated κ-carrageenan, in which every 3rd or 4th 3-linked β-galactose monomer possesses a sulphate ester group at the 4th carbon position. For comparison, an ideal κ-carrageenan molecule would have a sulphate ester group at the 4th carbon of every 3-linked β-galactose monomer. Furcellaran’s physical properties (gel strength, gelling and melting temperatures) are similar to those of κ-carrageenan.
Carrageenans found within certain seaweed species and locations are not universally similar; samples collected from different locations may have variable degrees of sulphation.
Studies show that the total extraction yield is up to 31% (dry weight). However, in its unattached state, polysaccharide yields are noted to be lower, which some consider to be the result of narrower thallus filaments containing a smaller amount of galactan.
Also, phycobiliproteins can be extracted from F. lumbricalis, from which the R-phycoerythrin yield is ~0.1% by dry weight.
Industrial use
Cations need to be present to form a strong gel in an aqueous solution. It is a process that depends on the nature of the polysaccharide, polymer concentration, temperature and the ions. K+, Rb+ and Cs+ ions produce strong κ-carrageenan and furcellaran gels, whereas Ca2+ ions aid the gelling of ι-carrageenan (extracted from the cell walls of C. truncatus). An initial coil-to-helix transition has been observed as the primary change in the gelling process, which is followed by the aggregation of these helices to form a gel. These sorts of gels are thermoreversible, meaning that they gel when temperature drops and melt when the gel is heated.
The food industry depends on this natural component, which is used as an additive to give texture to foods such as candies, ice cream and puddings. When carrageenans are used as food additives in the EU, they are referred to as E407 (E407a is processed Eucheuma seaweed, in which most impurities are washed out but most of the cellulose remains). Furcellaran is also found in the pharmaceutical and cosmetic industries, where it is included in products such as foams and soluble tablets. It can also be used instead of κ-carrageenan as a beer wort fining agent.
Similar species
Polyides rotunda is similar but can be distinguished by having a discoid holdfast.
References
Red algae genera
Gigartinales
Seaweeds
Edible algae
Monotypic algae genera | Furcellaria | Biology | 1,314 |
56,571,453 | https://en.wikipedia.org/wiki/Nokia%20Steel%20HR | Nokia Steel HR is a "hybrid" smartwatch and activity/fitness tracker developed by Nokia and released in December 2017. Its design is mostly based on the Withings Steel HR. The watch is available in 36 mm and 40 mm variants, available in various colours and in silicone, leather and woven straps. It pairs with a smartphone with the Nokia Health Mate application and also relays smartphone notifications. Steel HR features a heart rate monitor and is water resistant.
It was the major smartwatch carrying the Nokia brand, until the company sold back the health division to the co-founder of Withings in September 2018.
References
External links
Nokia
Smartwatches
Smart bands | Nokia Steel HR | Technology | 135 |
36,333,997 | https://en.wikipedia.org/wiki/Rising%20moving%20average | The rising moving average is a technical indicator used in stock market trading. Most commonly found visually, the pattern is spotted with a moving average overlay on a stock chart or price series. When the moving average has been rising consecutively for a number of days, this is used as a buy signal, to indicate a rising trend forming.
While the rising moving average indicator is commonly used by investors, often without their realising it, there has been significant backtesting on historic stock data to measure its performance. Simulations have found that shorter rising averages, within the 3- to 10-day period, are more profitable overall than longer rising averages (e.g. 20 days). However, these results have only been tested on US equity stocks.
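A minimal sketch of how such a signal might be computed; the window length, the number of consecutive rising days, and the strictly-increasing test are illustrative choices rather than a standardized definition:

```python
# Flag a "rising moving average" buy signal: the n-day simple moving
# average has risen on each of the last k days. Parameters illustrative.
def sma(prices, n):
    return [sum(prices[i - n + 1:i + 1]) / n
            for i in range(n - 1, len(prices))]

def rising_ma_signal(prices, n=5, k=3):
    ma = sma(prices, n)
    if len(ma) < k + 1:
        return False          # not enough history to evaluate the signal
    return all(ma[-i] > ma[-i - 1] for i in range(1, k + 1))

prices = [10, 10.2, 10.1, 10.4, 10.6, 10.9, 11.2, 11.5]
print(rising_ma_signal(prices))   # True: the 5-day SMA rose 3 days running
```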
Notes
Mathematical finance
Time series
Technical indicators | Rising moving average | Mathematics | 157 |
739,349 | https://en.wikipedia.org/wiki/Bulletproof%20glass | Bulletproof glass, ballistic glass, transparent armor, or bullet-resistant glass is a strong and optically transparent material that is particularly resistant to penetration by projectiles, although, like any other material, it is not completely impenetrable. It is usually made from a combination of two or more types of glass, one hard and one soft. The softer layer makes the glass more elastic, so that it can flex instead of shatter. The index of refraction for all of the glasses used in the bulletproof layers must be almost the same to keep the glass transparent and allow a clear, undistorted view through the glass. Bulletproof glass varies in thickness from .
Bulletproof glass is used in windows of buildings that require such security, such as jewelry stores and embassies, and of military and private vehicles.
Construction
Bullet-resistant glass is constructed using layers of laminated glass. The more layers there are, the more protection the glass offers. When a weight reduction is needed, polycarbonate (a thermoplastic) is laminated onto the safe side to stop spall. The aim is to make a material with the appearance and clarity of standard glass but with effective protection from small arms. Polycarbonate designs usually consist of products such as Armormax, Makroclear and Cyrolon, finished with either a soft coating that heals after being scratched (such as elastomeric carbon-based polymers) or a hard coating that prevents scratching (such as silicon-based polymers).
The plastic in laminate designs also provides resistance to impact from physical assault from blunt and sharp objects. The plastic provides little in the way of bullet-resistance. The glass, which is much harder than plastic, flattens the bullet, and the plastic deforms, with the aim of absorbing the rest of the energy and preventing penetration. The ability of the polycarbonate layer to stop projectiles with varying energy is directly proportional to its thickness, and bulletproof glass of this design may be up to 3.5 inches thick.
Laminated glass layers are built from glass sheets bonded together with polyvinyl butyral, polyurethane, Sentryglas, or ethylene-vinyl acetate. When treated with chemical processes, the glass becomes much stronger. This design has been in regular use on combat vehicles since World War II. It is typically thick and is usually extremely heavy.
Representative projectile specifications for the rating levels are:
9 mm: 124 gr at 1175–1293 fps (1400–1530 fps for Level 6)
.357 Magnum: 158 gr at 1250–1375 fps
.44 Magnum: 240 gr at 1350–1485 fps
.30-06: 180 gr at 2540–2794 fps
5.56 NATO: 55 gr at 3080–3388 fps
7.62 NATO: 150 gr at 2750–3025 fps
All of these ratings use copper-jacketed lead FMJ projectiles, except the .44 Magnum (lead semi-wadcutter gas-check) and the .30-06 (lead-core soft point).
Test standards
Bullet-resistant materials are tested using a gun to fire a projectile from a set distance into the material, in a specific pattern. Levels of protection are based on the ability of the target to stop a specific type of projectile traveling at a specific speed. Experiments suggest that polycarbonate fails at lower velocities with regular shaped projectiles compared to irregular ones (like fragments), meaning that testing with regular shaped projectiles gives a conservative estimate of its resistance. When projectiles do not penetrate, the depth of the dent left by the impact can be measured and related to the projectile’s velocity and thickness of the material. Some researchers have developed mathematical models based on results of this kind of testing to help them design bulletproof glass to resist specific anticipated threats.
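As a purely hypothetical sketch of what fitting such a model might look like, one could regress measured dent depth against projectile velocity for a fixed laminate thickness; the power-law form and every data point below are invented for illustration, and published models are considerably more involved:

```python
# Hypothetical example: fit dent depth vs. projectile velocity for one
# laminate thickness with a power law d = a * v**b. All data invented.
import numpy as np
from scipy.optimize import curve_fit

velocity = np.array([300.0, 400.0, 500.0, 600.0])   # m/s (invented)
dent_depth = np.array([1.1, 2.0, 3.2, 4.7])         # mm (invented)

def power_law(v, a, b):
    return a * v ** b

(a, b), _ = curve_fit(power_law, velocity, dent_depth, p0=(1e-4, 2.0))
print(f"dent depth ~ {a:.2e} * v^{b:.2f}")
```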
Environmental effects
The properties of bullet-resistant glass can be affected by temperature and by exposure to solvents or UV radiation, usually from sunlight. If the polycarbonate layer is below a glass layer, it has some protection from UV radiation due to the glass and bonding layer. Over time the polycarbonate becomes more brittle because it is an amorphous polymer (which is necessary for it to be transparent) that moves toward thermodynamic equilibrium.
An impact on polycarbonate by a projectile at temperatures below −7 °C sometimes creates spall, pieces of polycarbonate that are broken off and become projectiles themselves. Experiments have demonstrated that the size of the spall is related to the thickness of the laminate rather than the size of the projectile. The spall starts in surface flaws caused by bending of the inner, polycarbonate layer and the cracks move “backwards” through to the impact surface. It has been suggested that a second inner layer of polycarbonate may effectively resist penetration by the spall.
2000s advances
In 2005, it was reported that U.S. military researchers were developing a class of transparent armor incorporating aluminum oxynitride (ALON) as the outside "strike plate" layer. Traditional glass/polymer was demonstrated by ALON's manufacturer to require 2.3 times more thickness than ALON's, to guard against a .50 BMG projectile. ALON is much lighter and performs much better than traditional glass/polymer laminates. Aluminum oxynitride "glass" can defeat threats like the .50 caliber armor-piercing rounds using material that is not prohibitively heavy.
Spinel ceramics
Certain types of ceramics can also be used for transparent armor due to their properties of increased density and hardness when compared to traditional glass. These types of synthetic ceramic transparent armors can allow for thinner armor with equivalent stopping power to traditional laminated glass.
See also
Transparent Armor Gun Shield
Prince Rupert's Drop
References
Glass coating and surface modification
Armour | Bulletproof glass | Chemistry | 1,179 |
45,413,683 | https://en.wikipedia.org/wiki/Scale%20%28chemistry%29 | The scale of a chemical process refers to the rough ranges in mass or volume of a chemical reaction or process that define the appropriate category of chemical apparatus and equipment required to accomplish it, and the concepts, priorities, and economies that operate at each. While the specific terms used—and limits of mass or volume that apply to them—can vary between specific industries, the concepts are used broadly across industry and the fundamental scientific fields that support them. Use of the term "scale" is unrelated to the concept of weighing; rather it is related to cognate terms in mathematics (e.g., geometric scaling, the linear transformation that enlarges or shrinks objects, and scale parameters in probability theory), and in applied areas (e.g., in the scaling of images in architecture, engineering, cartography, etc.).
Practically speaking, the scale of chemical operations also relates to the training required to carry them out, and can be broken out roughly as follows:
procedures performed at the laboratory scale, which involve the sorts of procedures used in academic teaching and research laboratories in the training of chemists and in discovery chemistry venues in industry,
operations at the pilot plant scale, e.g., carried out by process chemists, which, though at the lowest extreme of manufacturing operations, are on the order of 200- to 1000-fold larger than laboratory scale, and used to generate information on the behavior of each chemical step in the process that might be useful to design the actual chemical production facility;
intermediate bench scale sets of procedures, 10- to 200-fold larger than the discovery laboratory, sometimes inserted between the preceding two;
operations at demonstration scale and full-scale production, whose sizes are determined by the nature of the chemical product, available chemical technologies, the market for the product, and manufacturing requirements, where the aim of the first of these is literally to demonstrate operational stability of developed manufacturing procedures over extended periods (by operating the suite of manufacturing equipment at the feed rates anticipated for commercial production).
For instance, the production of the streptomycin-class of antibiotics, which combined biotechnologic and chemical operations, involved use of a 130,000 liter fermenter, an operational scale approximately one million-fold larger than the microbial shake flasks used in the early laboratory scale studies.
As noted, nomenclature can vary between manufacturing sectors; some industries use the scale terms pilot plant and demonstration plant interchangeably.
Apart from defining the category of chemical apparatus and equipment required at each scale, the concepts, priorities and economies that obtain, and the skill-sets needed by the practicing scientists at each, defining scale allows for theoretical work prior to actual plant operations (e.g., defining relevant process parameters used in the numerical simulation of large-scale production processes), and allows economic analyses that ultimately define how manufacturing will proceed.
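As one concrete, generic illustration of such a scale-dependent process parameter (the rule of thumb and all numbers below are textbook-style assumptions, not taken from this article): stirred-tank scale-up often holds power per unit volume constant, which for geometrically similar vessels in the turbulent regime (P/V ∝ N³D²) implies N2 = N1·(D1/D2)^(2/3):

```python
# Scale-up sketch: keep power-per-volume constant between geometrically
# similar stirred tanks (turbulent regime, P/V ~ N^3 * D^2).
# The rule of thumb and all numbers are generic illustrations.
def scaled_impeller_speed(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    return n1_rpm * (d1_m / d2_m) ** (2.0 / 3.0)

lab_speed_rpm, lab_impeller_m = 600.0, 0.05   # laboratory-scale stirrer
pilot_impeller_m = 0.5                        # ~1000-fold larger volume
print(round(scaled_impeller_speed(lab_speed_rpm, lab_impeller_m,
                                  pilot_impeller_m)))  # ~129 rpm
```

The larger vessel turns much more slowly at equal power per volume, which is one reason behavior observed in the laboratory cannot simply be assumed to carry over to production equipment.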
Besides the chemistry and biology expertise involved in scaling designs and decisions, varied aspects of process engineering, mathematical modeling and simulation, and operations research are involved.
See also
Medicinal chemistry
Process chemistry
Pilot plant
Chemical engineering
Process engineering
Operations research
Further reading
R. Dach, J. J. Song, F. Roschangar, W. Samstag & C.H. Senanayake, 2012, "The eight criteria defining a good chemical manufacturing process," Org. Process Res. Dev. 16:1697ff, DOI 10.1021/op300144g.
M. D. Johnson, S.A. May, J.R. Calvin, J. Remacle, J.R. Stout, W.D. Dieroad, N. Zaborenko, B.D. Haeberle, W.-M. Sun, M.T. Miller & J. Brannan, "Development and scale-up of a continuous, high-pressure, asymmetric hydrogenation reaction, workup, and isolation." Org. Process Res. Dev. 16:1017ff, DOI 10.1021/op200362h.
M. Levin, Ed., 2011, Pharmaceutical Process Scale-Up (Drugs and the Pharmaceutical Sciences), 3rd edn., London, U.K.: Informa Healthcare, .
A.A. Desai, 2011, "Sitagliptin manufacture: a compelling tale of green chemistry, process intensification, and industrial asymmetric catalysis," Angew. Chem. Int. Ed. 50:1974ff, DOI 10.1002/anie.201007051.
M. Zlokarnik, 2006, Scale-up in Chemical Engineering, 2nd edn., Weinheim, Germany:Wiley-VCH, .
M.C.M. Hensing, R.J. Rouwenhorst, J.J. Heijnen, J.R van Dijken & J.T. Pronk, 1995, "Physiological and technological aspects of large-scale heterologous-protein production with yeasts," Antonie van Leeuwenhoek 67:261-279.
Karl A. Thiel, 2004, "Biomanufacturing, from bust to boom...to bubble?," Nature Biotechnology 22:1365-1372, esp. Table 1, DOI 10.1038/nbt1104-1365, see , accessed 15 February 2015.
Maximilian Lackner, Ed., 2009, Scale-up in Combustion, Wien, Austria:Process Engineering GmbH, .
References
Chemistry
Biochemistry
Chemical engineering
Chemical synthesis
Medicinal chemistry
Organic chemistry | Scale (chemistry) | Chemistry,Engineering,Biology | 1,136 |
20,648 | https://en.wikipedia.org/wiki/Melting | Melting, or fusion, is a physical process that results in the phase transition of a substance from a solid to a liquid. This occurs when the internal energy of the solid increases, typically by the application of heat or pressure, which increases the substance's temperature to the melting point. At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid.
Substances in the molten state generally have reduced viscosity as the temperature increases. An exception to this principle is elemental sulfur, whose viscosity increases in the range of 130 °C to 190 °C due to polymerization.
Some organic compounds melt through mesophases, states of partial order between solid and liquid.
First order phase transition
From a thermodynamics point of view, at the melting point the change in the Gibbs free energy ∆G of the substance is zero, but there are non-zero changes in the enthalpy (H) and the entropy (S), known respectively as the enthalpy of fusion (or latent heat of fusion) and the entropy of fusion. Melting is therefore classified as a first-order phase transition. Melting occurs when the Gibbs free energy of the liquid becomes lower than that of the solid for that material. The temperature at which this occurs is dependent on the ambient pressure.
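Setting the free-energy change of fusion to zero at the transition links the two non-zero quantities directly to the melting temperature:

```latex
\Delta G_{\mathrm{fus}} = \Delta H_{\mathrm{fus}} - T_m \,\Delta S_{\mathrm{fus}} = 0
\quad\Longrightarrow\quad
T_m = \frac{\Delta H_{\mathrm{fus}}}{\Delta S_{\mathrm{fus}}}
```

For water ice, for example, ΔHfus ≈ 6.01 kJ/mol and ΔSfus ≈ 22.0 J/(mol·K), giving Tm ≈ 273 K.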
Low-temperature helium is the only known exception to the general rule. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. This means that, at appropriate constant pressures, heat must be removed from these substances in order to melt them.
Criteria
Among the theoretical criteria for melting, the Lindemann and Born criteria are those most frequently used as a basis to analyse the melting conditions.
The Lindemann criterion states that melting occurs because of "vibrational instability": crystals melt when the average amplitude of thermal vibrations of atoms becomes relatively high compared with interatomic distances, i.e. when ⟨δu²⟩^(1/2) > δL·Rs, where δu is the atomic displacement, the Lindemann parameter δL ≈ 0.20–0.25, and Rs is one-half of the inter-atomic distance. The "Lindemann melting criterion" is supported by experimental data both for crystalline materials and for glass-liquid transitions in amorphous materials.
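A back-of-envelope sketch of the criterion in use: combining it with the high-temperature Debye expression ⟨δu²⟩ = 9ħ²T/(m·kB·θD²) gives the estimate Tm ≈ δL²·Rs²·m·kB·θD²/(9ħ²). The copper parameters below are standard textbook values, and δL = 0.22 is an assumed value within the range quoted above; the close agreement with experiment is partly fortuitous given that assumption:

```python
# Lindemann melting-point estimate for copper, using the
# high-temperature Debye mean-square displacement.
# ASSUMPTION: delta_l = 0.22, chosen inside the quoted 0.20-0.25 range.
HBAR = 1.0546e-34        # J*s
KB = 1.3807e-23          # J/K
AMU = 1.6605e-27         # kg

m = 63.55 * AMU          # atomic mass of Cu
theta_d = 343.0          # Debye temperature of Cu, K
r_s = 1.28e-10           # half the Cu-Cu nearest-neighbour distance, m
delta_l = 0.22           # Lindemann parameter (assumed)

t_melt = (delta_l * r_s) ** 2 * m * KB * theta_d ** 2 / (9 * HBAR ** 2)
print(round(t_melt))     # ~1358 K; experimental melting point is 1358 K
```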
The Born criterion is based on a rigidity catastrophe caused by the vanishing elastic shear modulus, i.e. when the crystal no longer has sufficient rigidity to mechanically withstand the load, it becomes liquid.
Supercooling
Under a standard set of conditions, the melting point of a substance is a characteristic property. The melting point is often equal to the freezing point. However, under carefully created conditions, supercooling below the freezing point or superheating past the melting point can occur. Water on a very clean glass surface will often supercool several degrees below the freezing point without freezing. Fine emulsions of pure water have been cooled to −38 °C without nucleation to form ice. Nucleation occurs due to fluctuations in the properties of the material. If the material is kept still, there is often nothing (such as physical vibration) to trigger this change, and supercooling (or superheating) may occur. Thermodynamically, the supercooled liquid is in a metastable state with respect to the crystalline phase, and it is likely to crystallize suddenly.
Glasses
Glasses are amorphous solids, which are usually fabricated when the molten material cools very rapidly to below its glass transition temperature, without sufficient time for a regular crystal lattice to form. Solids are characterised by a high degree of connectivity between their molecules, and fluids have lower connectivity of their structural blocks. Melting of a solid material can also be considered as a percolation via broken connections between particles e.g. connecting bonds. In this approach melting of an amorphous material occurs when the broken bonds form a percolation cluster, with Tg dependent on quasi-equilibrium thermodynamic parameters of bonds e.g. on enthalpy (Hd) and entropy (Sd) of formation of bonds in a given system at given conditions:
Tg = Hd / [Sd + R ln((1 − fc)/fc)]
where fc is the percolation threshold and R is the universal gas constant.
Although Hd and Sd are not true equilibrium thermodynamic parameters and can depend on the cooling rate of a melt, they can be found from available experimental data on viscosity of amorphous materials.
Even below its melting point, quasi-liquid films can be observed on crystalline surfaces. The thickness of the film is temperature-dependent. This effect is common for all crystalline materials. This pre-melting shows its effects in e.g. frost heave, the growth of snowflakes, and, taking grain boundary interfaces into account, maybe even in the movement of glaciers.
Related concept
In ultrashort pulse physics, a so-called nonthermal melting may take place. It occurs not because of the increase of the atomic kinetic energy, but because of changes of the interatomic potential due to excitation of electrons. Since electrons are acting like a glue sticking atoms together, heating electrons by a femtosecond laser alters the properties of this "glue", which may break the bonds between the atoms and melt a material even without an increase of the atomic temperature.
In genetics, melting DNA means separating double-stranded DNA into two single strands by heating or through the use of chemical agents, as in the polymerase chain reaction.
See also
List of chemical elements providing melting points
Phase diagram
Zone melting
References
External links
Phase transitions
Materials science
Thermodynamics | Melting | Physics,Chemistry,Materials_science,Mathematics,Engineering | 1,185 |
2,908,847 | https://en.wikipedia.org/wiki/Chemical%20clock | A chemical clock (or clock reaction) is a complex mixture of reacting chemical compounds in which the onset of an observable property (discoloration or coloration) occurs after a predictable induction time due to the presence of clock species at a detectable amount.
In cases where one of the reagents has a visible color, crossing a concentration threshold can lead to an abrupt color change after a reproducible time lapse.
Types
Clock reactions may be classified into three or four types:
Substrate-depletive clock reaction
The simplest clock reaction featuring two reactions:
A → C (rate k1)
B + C → products (rate k2, fast)
When substrate (B) is present, the clock species (C) is quickly consumed in the second reaction. Only when substrate B is used up or depleted can species C build up, causing the color to change. An example of this clock reaction is the sulfite/iodate reaction or iodine clock reaction, also known as Landolt's reaction.
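A minimal numerical sketch of this mechanism (rate constants and starting concentrations are arbitrary illustrative choices): integrating the two rate laws shows the clock species C held near zero while B remains, then rising sharply once B is exhausted, which is the induction time of the clock.

```python
# Simulate the substrate-depletive clock: A -> C (k1), B + C -> products
# (k2, fast). Rate constants and concentrations are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.05, 50.0      # k2 >> k1: C is scavenged while B remains

def rates(t, y):
    a, b, c = y
    return [-k1 * a,                 # A consumed, producing C
            -k2 * b * c,             # B scavenges C
            k1 * a - k2 * b * c]     # net production of C

sol = solve_ivp(rates, (0.0, 200.0), [1.0, 0.5, 0.0],
                method="LSODA", dense_output=True)
for ti in np.linspace(0.0, 200.0, 9):
    a, b, c = sol.sol(ti)
    print(f"t={ti:5.1f}  [B]={b:.4f}  [C]={c:.4f}")
# [C] stays near zero until [B] runs out (around t ~ 14 here, when the
# amount of A consumed equals the initial [B]), then climbs rapidly.
```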
Sometimes, a clock reaction involves the production of intermediate species in three consecutive reactions.
P + Q → R
R + Q → C
P + C → 2R
Given that Q is in excess, when substrate (P) is depleted, C builds up, resulting in the change in color.
Autocatalysis-driven clock reaction
The basis of this type is similar to the substrate-depletive clock reaction, except that rate k2 is very slow, leading to the coexistence of substrates and clock species, so the substrate does not need to be depleted before the change in color is observed. An example of this clock is the pentathionate/iodate reaction.
Pseudoclock behavior
The reactions in this category behave like clock reactions; however, they are irreproducible, unpredictable and hard to control. Examples are the chlorite/thiosulfate and iodide/chlorite reactions.
Crazy clock reaction
The reaction is irreproducible from run to run due to initial inhomogeneity of the mixture, which results from variation in stirring rate, overall volume and reactor geometry. Repeating the reaction in a statistically meaningful manner leads to a reproducible cumulative probability distribution curve. An example of this clock is the iodate/arsenous acid reaction.
One reaction may fall into more than one of the classifications above, depending on the circumstances. For example, the iodate−arsenous acid reaction can behave as a substrate-depletive clock reaction, an autocatalysis-driven clock reaction or a crazy clock reaction.
Examples
One class of example is the iodine clock reactions, in which an iodine species is mixed with redox reagents in the presence of starch. After a delay, a dark blue color suddenly appears due to the formation of a triiodide-starch complex.
Additional reagents can be added to some chemical clocks to build a chemical oscillator. For example, the Briggs–Rauscher reaction is derived from an iodine clock reaction by adding perchloric acid, malonic acid and manganese sulfate.
See also
Circadian clock
Chemical oscillator
References
Chemical kinetics
Clocks
Non-equilibrium thermodynamics
Oscillation
Articles containing video clips | Chemical clock | Physics,Chemistry,Mathematics,Technology,Engineering | 668 |
13,643,525 | https://en.wikipedia.org/wiki/Norwegian%20Black%20List | The Norwegian Black List (Fremmedartslista) is an overview of alien species in Norway, with ecological risk assessments for some of the species. The Norwegian Black List was first published in 2007 by the Norwegian Biodiversity Information Centre and developed in cooperation with 18 scientific experts from six research institutions.
The 2007 Norwegian Black List is the first issue, and is compiled as a counterpart to the Norwegian Red List of 2006.
The 2007 Norwegian Black List
The 2007 Norwegian Black List contains a total of 2483 species of plants, animals and other organisms, 217 of which are risk assessed. A set of criteria has been developed to ensure a standardised assessment of the ecological consequences of alien species.
The assessed species are placed in categories according to the risk they represent.
High risk – 93 species
Unknown risk – 83 species
Low risk – 41 species
Alien species on Svalbard, Bjørnøya and Jan Mayen are not assessed.
Result
Among the 93 species found to threaten the natural local biodiversity are bacteria, macroalgae, microalgae, pseudofungi, fungi, mosses, vascular plants, comb jellies, flatworms, roundworms, crustaceans, arachnids, insects, snails, bivalves, tunicates, fishes and mammals.
Among the vascular plants with a high risk are Heracleum tromsoensis (also known as Heracleum persicum), sycamore maple (Acer pseudoplatanus) and garden lupin (Lupinus polyphyllus). Among the flatworms is Gyrodactylus salaris; among the crustaceans, the red king crab (Paralithodes camtschaticus) and American lobster (Homarus americanus). Five species of mammals are noted as high-risk species: West European hedgehog, European rabbit, southern vole, American mink and raccoon.
See also
IUCN Red List
References
External links
The 2007 Norwegian Black List – artsdatabanken.no
Nature conservation in Norway
Introduced species
Invasive species | Norwegian Black List | Biology | 422 |
42,223,602 | https://en.wikipedia.org/wiki/ResultSource | ResultSource is a San Diego–based book marketing company that conducts "bestseller campaigns" on behalf of authors. The company states "We create campaigns that reach a specific goal, like: 'On the bestsellers list', or '100,000 copies sold'." For example, for a negotiated fee ResultSource will guarantee that a book becomes a bestseller. It does this through bulk book buying programs designed to manipulate the metrics used by Nielsen BookScan and the New York Times Best Seller list, among other strategies. As a result of ResultSource's business practices, Amazon.com has stopped doing business with the company. The company was founded by Kevin Small.
The details of ResultSource's business are private and few in the publishing industry will speak openly about it. "It's no wonder few people in the industry want to talk about bestseller campaigns. Put bluntly, they allow people with enough money, contacts, and know-how to buy their way onto bestseller lists." However, some information has come to light. In 2013, author Soren Kaplan discussed the matter with The Wall Street Journal in an article titled "The Mystery of the Book Sales Spike – How Are Some Authors Landing On Best-Seller Lists? They're Buying Their Way". In 2014, the Los Angeles Times published a story titled "Can bestseller lists be bought?" It describes how author and pastor Mark Driscoll contracted with ResultSource to place his book Real Marriage on the New York Times bestseller list for a $200,000 fee. The contract was for ResultSource "to conduct a bestseller campaign for your book, 'Real Marriage' on the week of January 2, 2012. The bestseller campaign is intended to place 'Real Marriage' on the New York Times bestseller list for the Advice How-to list." To achieve this, the contract stated that "RSI will be purchasing at least 11,000 total orders in one week." This took place and, as a result, the book was successfully ranked No. 1 on the hardcover advice bestseller list dated January 22, 2012. Driscoll later published an apology letter.
ResultSource was also implicated in Handbook for Mortals' 23 hour stint at the top of the New York Times Bestseller List.
See also
Vanity award
References
Marketing companies of the United States
Book publishing in the United States
Companies based in San Diego
Ethics and statistics | ResultSource | Technology | 488 |
6,012,260 | https://en.wikipedia.org/wiki/Telechelic%20polymer | A telechelic polymer or oligomer is a prepolymer capable of entering into further polymerization or other reactions through its reactive end-groups. It can be used for example to synthesize block copolymers.
By definition, a telechelic polymer is a di-end-functional polymer in which both ends possess the same functionality. Where the chain-ends of the polymer are not of the same functionality, it is termed simply a di-end-functional polymer.
All polymers resulting from living polymerization are end-functional but may not necessarily be telechelic.
Telechelic polymers with different numbers of reactive end-groups can be termed, according to the number of end-groups, “hemi-” (one), “di-” (two) and “tri-telechelic” (three) polymers. A polymer presenting many end-groups is called “polytelechelic”.
To prepare polymers by step-growth polymerization, telechelic polymers like polymeric diols and epoxy prepolymers can be used. The main examples are:
Polyether diols;
Polyester diols;
Polycarbonate diols: Polyhexamethylene carbonate diol (PHMCD);
Polyalcadiene diols: Hydroxyl-terminated polybutadiene (HTPB)...
Other examples of telechelic polymers are the halato-telechelic polymers or halatopolymers. The end-groups of these polymers are ionic or ionizable like carboxylate or quaternary ammonium groups.
Synthesis
Telechelic polymers can be synthesized by different polymerization mechanisms. From vinyl monomers, synthetic strategies include controlled radical polymerization and anionic polymerization. In the case of olefins, which are difficult to functionalize, recent advances in insertion polymerization and post-polymerization functionalization can be used to produce telechelic polyolefins.
Application
Telechelic polymers are important in the preparation of block copolymers, acting as building blocks for the structural design of these copolymers. In particular, ABA triblock copolymers have received much industrial interest for the development of thermoplastic elastomers.
References
Polymers | Telechelic polymer | Chemistry,Materials_science | 464 |
15,420,495 | https://en.wikipedia.org/wiki/Joseph%20Gottlieb%20K%C3%B6lreuter | Joseph Gottlieb Kölreuter (27 April 1733 – 11 November 1806), also spelled Koelreuter or Kohlreuter, was a German botanist who pioneered the study of plant fertilization, hybridization and was the first to detect self-incompatibility. He was an observer as well as a rigorous experimenter who used careful crossing experiments although he did not inquire into the nature of heritability.
Biography
Kölreuter was the oldest of three sons of an apothecary in Karlsruhe, Germany, and grew up in Sulz. He took an early interest in natural history and made a collection of local insects. At the age of fifteen he went to study medicine at the University of Tübingen under the physician and botanist Johann Georg Gmelin, who had returned from St. Petersburg. Gmelin had an interest in floral biology, and he reprinted a work by Rudolf Jakob Camerarius (who also taught at Tübingen), who was the first to demonstrate sexual reproduction in plants. In his inaugural address in 1749 Gmelin spoke of the need for research on the origin of new species by hybridization. This may have had an influence on Kölreuter. Gmelin died in 1755, and Kölreuter earned his degree and received an appointment at the Imperial Academy of Sciences at St. Petersburg on 23 December 1755. Here his work included botany as well as the curation of the fish and coral collections. He stayed on until 6 June 1761. From 1759 he experimented on plant hybridization before returning to Germany. He moved to Calw in 1763 and to Karlsruhe in 1764, where he was briefly professor of natural history and director of the botanical garden at Baden. He was dismissed from the botanical garden after a dispute with the head gardener in 1783 but stayed on as a professor until his death in 1806.
Researches
Kölreuter followed the standard idea of the period that plants and nature were the design of a Creator. He expected patterns, for instance homogeneity in the male and female contributions to the progeny. He also strongly believed in epigenetic influences, which may have been derived from the teachings of C. F. Wolff. The dominant belief during his time was that an offspring was already preformed in the female or the male, that the embryo developed after sex, and that its origin decided the offspring's characteristics or similarities to a parent. Kölreuter, however, noted a mixing of characters and proposed the idea of “seed matters” (Saamenstoffe). According to Kölreuter there had to be two uniform fluids, male and female semen, which combined in the process of fertilization. He believed that equal quantities of the male and female fluid were needed, and he therefore examined how much pollen was needed to fertilize a given number of seeds. In flowers with multiple stigmas, he cut off all but one and found that pollinating it was enough to fertilize all the seeds. He examined the action of stigma fluid on pollen, described many plant species, and studied pollen and its transfer.
Kölreuter's major works were produced as four reports: Vorläufige Nachricht von einigen das Geschlecht der Pflanzen betreffenden Versuchen und Beobachtungen (1761), Fortsetzung (1763), Zweyte Fortsetzung (1764), and Dritte Fortsetzung (1766). They were reprinted in 1893 in Wilhelm Ostwald's Klassiker der exakten Wissenschaften. Kölreuter's findings are not reported in easy-to-read sections but are distributed throughout the text. Many parts have not been fully translated into English, and this has led to many of the results not being examined well. In all he conducted nearly 500 different hybridization experiments across 138 species and examined the pollen characteristics of over 1,000 plant species.
The first documentation of male sterility, in 1763, was by Kölreuter, who observed anther abortion within species and specific hybrids. Kölreuter was also the first to report self-incompatibility, in Verbascum phoeniceum plants. He also observed heterosis: that hybrids surpassed their parents. His experimental method included repetitions and controls. He wanted to test whether hybrids across species could be fertile. Buffon had used the idea of sterility of crosses as a method of testing species boundaries. Buffon used sterility versus fertility as a criterion for species, but he gave up the idea in 1753 when he found fertile hybrids in domestic animals and cagebirds. Linnaeus, through his student J. J. Hartmann, reported the possibility of new "species" arising from hybridization, but Kölreuter was skeptical of the results.
In one experiment Kölreuter sat beside a flower from dawn to dusk and shooed away all insects, finding that the flower remained unfertilized. He tested a hypothesis by Jan Swammerdam that honey was nectar that underwent fermentation in the crop of a bee: Kölreuter collected nectar from many hundreds of orange trees, kept it in vials to evaporate, and reported that it thickened and tasted like honey with time. Kölreuter produced interspecific hybrids, specifically of the tobacco plants Nicotiana rustica and Nicotiana paniculata, in 1760. The hybrids showed male sterility. He also worked on Dianthus and Verbascum. He found that reciprocal crosses produced identical results. He also pondered the commercial applications of hybridization: "I would wish that I or somebody else would be so lucky someday to produce a species hybrid of trees which, with respect to the use of its lumber, would have a large influence on the economy. Among other good properties such trees might perhaps also have the one that they would reach their full size in one half of the time of normal trees" (translated by Ernst Mayr). Although Kölreuter conducted a variety of repeated crossing experiments, much in the manner of Gregor Mendel, his interpretations were based on alchemical notions, and he did not seek to examine the nature of heritability or the particulateness of heritable traits. Following an idea from alchemy that metals were a mixture of mercury and sulphur, Kölreuter considered likewise that an equilibrium of the male and female "seed matters" had a role in deciding the qualities of hybrid offspring.
Although Kölreuter did not endorse the transmutation of species, his hybridization research influenced the development of evolutionary theory in the eighteenth century.
The genus Koelreuteria has been named in his honour.
Works
Dissertatio inauguralis medica de insectis coleopteris, nec non de plantis quibusdam rarioribus... Tubingae: litteris Erhardianis (1755)
Vorläufige Nachricht von einigen, das Geschlecht der Pflanzen betreffenden Versuchen (1761-1766)
Das entdeckte Geheimniss der Cryptogamie (1777)
References
Bibliography
External links
Digital reproductions of Kölreuter's works
1733 births
1806 deaths
18th-century German botanists
People from Sulz am Neckar
Proto-evolutionary biologists | Joseph Gottlieb Kölreuter | Biology | 1,488 |
50,446,731 | https://en.wikipedia.org/wiki/Phase%20contrast%20magnetic%20resonance%20imaging | Phase contrast magnetic resonance imaging (PC-MRI) is a specific type of magnetic resonance imaging used primarily to determine flow velocities. PC-MRI can be considered a method of Magnetic Resonance Velocimetry. It also provides a method of magnetic resonance angiography. Since modern PC-MRI is typically time-resolved, it provides a means of 4D imaging (three spatial dimensions plus time).
How it Works
Atoms with an odd number of protons or neutrons have a net nuclear spin angular momentum, which is randomly oriented in the absence of an external field. When placed in a strong magnetic field, some of these spins align with the axis of the external field, which causes a net 'longitudinal' magnetization. These spins precess about the axis of the external field at a frequency proportional to the strength of that field. Then, energy is added to the system through a radio frequency (RF) pulse to 'excite' the spins, changing the axis that the spins precess about. These spins can then be observed by receiver coils (radiofrequency coils) using Faraday's law of induction. Different tissues respond to the added energy in different ways, and imaging parameters can be adjusted to highlight desired tissues.
In the presence of applied gradient fields, each of these spins accrues a phase that depends on its motion, including its velocity.
The phase shift $\phi$ of a spin is a function of the gradient field $\mathbf{G}(t)$:

$\phi(t) = \gamma \int_0^t \mathbf{G}(\tau) \cdot \mathbf{x}(\tau)\, d\tau$

where $\gamma$ is the gyromagnetic ratio and $\mathbf{x}(\tau)$ is defined as:

$\mathbf{x}(\tau) = \mathbf{x}_0 + \mathbf{v}\tau + \tfrac{1}{2}\mathbf{a}\tau^2 + \dots$,

where $\mathbf{x}_0$ is the initial position of the spin, $\mathbf{v}$ is the spin velocity, and $\mathbf{a}$ is the spin acceleration.
If we only consider static spins and spins in the x-direction, we can rewrite the expression for the phase shift as:

$\phi(t) = \gamma x_0 \int_0^t G_x(\tau)\, d\tau + \gamma v_x \int_0^t G_x(\tau)\,\tau\, d\tau + \dots$

We then assume that acceleration and higher-order terms are negligible to simplify the expression for the phase to:

$\phi(t) = \gamma x_0 M_0 + \gamma v_x M_1$,

where $M_0 = \int_0^t G_x(\tau)\, d\tau$ is the zeroth moment of the x-gradient and $M_1 = \int_0^t G_x(\tau)\,\tau\, d\tau$ is the first moment of the x-gradient.
If we take two different acquisitions with applied magnetic gradients that are the opposite of each other (bipolar gradients), the position-dependent terms cancel, and subtracting the two results gives a change in phase that depends on the gradient first moments:

$\Delta\phi = \phi_1 - \phi_2 = \gamma v_x \Delta M_1$,

where $\Delta M_1 = M_1^{(1)} - M_1^{(2)}$.
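As a numerical illustration of these moments, the following sketch (Python with NumPy; all waveform values are hypothetical) integrates a bipolar gradient pair and shows that its zeroth moment vanishes while its first moment does not, which is why the subtracted phase encodes velocity rather than position:

```python
import numpy as np

# Hypothetical bipolar gradient: +G for a time T, then -G for a time T.
G = 10e-3    # lobe amplitude in T/m (illustrative)
T = 1e-3     # lobe duration in s (illustrative)
dt = 1e-7    # integration step in s
t = np.arange(0, 2 * T, dt)
g = np.where(t < T, G, -G)      # bipolar waveform g(t)

M0 = np.sum(g) * dt             # zeroth moment: integral of g(t) dt
M1 = np.sum(g * t) * dt         # first moment:  integral of g(t) * t dt

print(f"M0 = {M0:+.3e} T*s/m")    # ~0: static spins accrue no net phase
print(f"M1 = {M1:+.3e} T*s^2/m")  # nonzero: moving spins accrue phase ~ gamma*v*M1

# Reversing the lobe order flips the sign of M1, so the phase difference
# between the two acquisitions isolates the velocity term.
```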
The phase shift is measured and converted to a velocity according to the following equation:

$v_x = v_{\mathrm{enc}} \dfrac{\Delta\phi}{\pi}$, with $v_{\mathrm{enc}} = \dfrac{\pi}{\gamma\, \Delta M_1}$,

where $v_{\mathrm{enc}}$ is the maximum velocity that can be recorded and $\Delta\phi$ is the recorded phase shift.

The choice of $v_{\mathrm{enc}}$ defines the range of velocities visible, known as the 'dynamic range'.
A choice of $v_{\mathrm{enc}}$ below the maximum velocity in the slice will induce aliasing in the image, where a velocity just greater than $v_{\mathrm{enc}}$ will be incorrectly calculated as moving in the opposite direction. However, there is a direct trade-off between the maximum velocity that can be encoded and the signal-to-noise ratio of the velocity measurements. This can be described by:

$SNR_v = \dfrac{\pi}{\sqrt{2}}\, \dfrac{v}{v_{\mathrm{enc}}}\, SNR$,

where $SNR$ is the signal-to-noise ratio of the image (which depends on the magnetic field of the scanner, the voxel volume, and the acquisition time of the scan).
For example, setting a 'low' $v_{\mathrm{enc}}$ (below the maximum velocity expected in the scan) will allow for better visualization of slower velocities (better SNR), but any higher velocities will alias to an incorrect value. Setting a 'high' $v_{\mathrm{enc}}$ (above the maximum velocity expected in the scan) will allow for proper velocity quantification, but the larger dynamic range will obscure the smaller velocity features as well as decrease SNR. Therefore, the setting of $v_{\mathrm{enc}}$ is application dependent and care must be exercised in its selection. In order to further allow for proper velocity quantification, especially in clinical applications where the velocity dynamic range of flow is high (e.g. blood flow velocities in vessels across the thoracoabdominal cavity), a dual-echo PC-MRI (DEPC) method with dual velocity encoding in the same repetition time has been developed. The DEPC method not only allows for proper velocity quantification, but also reduces the total acquisition time (especially when applied to 4D flow imaging) compared to a single-echo single-$v_{\mathrm{enc}}$ PC-MRI acquisition carried out at two separate $v_{\mathrm{enc}}$ values.
To allow for more flexibility in selecting $v_{\mathrm{enc}}$, instantaneous phase (phase unwrapping) can be used to increase both dynamic range and SNR.
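A minimal sketch of the phase-to-velocity mapping and the aliasing it produces (Python; the $v_{\mathrm{enc}}$ value and test velocities are arbitrary choices): the scanner only observes phase wrapped into $(-\pi, \pi]$, so any velocity beyond $v_{\mathrm{enc}}$ maps to the wrong value.

```python
import numpy as np

venc = 100.0   # encoding velocity in cm/s (arbitrary choice)

def measured_phase(v_true):
    """Phase of a spin moving at v_true, wrapped into (-pi, pi]."""
    return np.angle(np.exp(1j * np.pi * v_true / venc))

def velocity_from_phase(phi):
    return venc * phi / np.pi      # v = venc * delta_phi / pi

for v in (50.0, 99.0, 120.0):      # the last exceeds venc
    v_meas = velocity_from_phase(measured_phase(v))
    print(f"true {v:6.1f} cm/s -> measured {v_meas:6.1f} cm/s")
# 120 cm/s wraps to -80 cm/s, i.e. it appears to flow backwards; along a
# smooth time series or spatial profile, np.unwrap can undo such wraps.
```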
Encoding Methods
When each dimension of velocity is calculated based on acquisitions from oppositely applied gradients, this is known as a six-point method. However, more efficient methods are also used. Two are described here:
Simple Four-point Method
Four sets of encoding gradients are used. The first is a reference and applies a negative moment in $x$, $y$, and $z$. The next applies a positive moment in $x$ and a negative moment in $y$ and $z$. The third applies a positive moment in $y$ and a negative moment in $x$ and $z$. And the last applies a positive moment in $z$ and a negative moment in $x$ and $y$.
Then, the velocities can be solved based on the phase information from the corresponding phase encodes as follows:

$v_x = \dfrac{\phi_x - \phi_0}{\gamma\,\Delta M_1}, \quad v_y = \dfrac{\phi_y - \phi_0}{\gamma\,\Delta M_1}, \quad v_z = \dfrac{\phi_z - \phi_0}{\gamma\,\Delta M_1}$,

where $\phi_0$ is the phase of the reference acquisition and $\Delta M_1$ is the first-moment difference between the positive and negative encodes along each axis.
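Under the sign convention just described, the reconstruction step might look as follows (a sketch only; the phase values and first-moment difference are hypothetical, and gamma is the proton value):

```python
import numpy as np

gamma = 2.675e8   # gyromagnetic ratio of 1H in rad/(s*T)
dM1 = 1e-8        # first-moment difference per axis in T*s^2/m (illustrative)

# Phases from the four acquisitions: the reference (all moments negative),
# then one acquisition each with the x, y, or z moment made positive.
phi_0, phi_x, phi_y, phi_z = 0.10, 0.35, 0.22, 0.55   # rad, hypothetical

v = np.array([phi_x - phi_0, phi_y - phi_0, phi_z - phi_0]) / (gamma * dM1)
print(v)   # velocity components (m/s) along x, y, z
```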
Balanced Four-Point Method
The balanced four-point method also includes four sets of encoding gradients. The first is the same as in the simple four-point method, with negative moments applied in all directions. The second has a negative moment in $x$ and a positive moment in $y$ and $z$. The third has a negative moment in $y$ and a positive moment in $x$ and $z$. The last has a negative moment in $z$ and a positive moment in $x$ and $y$.
This gives us the following system of equations:

$\phi_1 = \phi_0 - \gamma M_1 (v_x + v_y + v_z)$
$\phi_2 = \phi_0 + \gamma M_1 (-v_x + v_y + v_z)$
$\phi_3 = \phi_0 + \gamma M_1 (v_x - v_y + v_z)$
$\phi_4 = \phi_0 + \gamma M_1 (v_x + v_y - v_z)$

Then, the velocities can be calculated:

$v_x = \dfrac{-\phi_1 - \phi_2 + \phi_3 + \phi_4}{4\gamma M_1}, \quad v_y = \dfrac{-\phi_1 + \phi_2 - \phi_3 + \phi_4}{4\gamma M_1}, \quad v_z = \dfrac{-\phi_1 + \phi_2 + \phi_3 - \phi_4}{4\gamma M_1}$
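The balanced scheme amounts to a small Hadamard-like linear system in the unknowns $(\phi_0, v_x, v_y, v_z)$, which can be solved exactly; a sketch with hypothetical phases and an illustrative first-moment magnitude:

```python
import numpy as np

gamma, M1 = 2.675e8, 1e-8    # rad/(s*T) and T*s^2/m (illustrative)

# Sign of the first moment on (x, y, z) for each of the four acquisitions.
S = np.array([[-1.0, -1.0, -1.0],
              [-1.0, +1.0, +1.0],
              [+1.0, -1.0, +1.0],
              [+1.0, +1.0, -1.0]])

phi = np.array([0.10, 0.35, 0.22, 0.55])   # measured phases, hypothetical

# phi_i = phi0 + gamma*M1 * (S[i] . v)  ->  4 equations, 4 unknowns.
A = np.hstack([np.ones((4, 1)), gamma * M1 * S])
phi0, *v = np.linalg.solve(A, phi)
print(phi0, v)   # background phase (rad) and velocity components (m/s)
```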
Retrospective Cardiac and Respiratory Gating
For medical imaging, in order to get highly resolved scans in 3D space and time without motion artifacts from the heart or lungs, retrospective cardiac gating and respiratory compensation are employed. Beginning with cardiac gating, the patient's ECG signal is recorded throughout the imaging process. Similarly, the patient's respiratory patterns can be tracked throughout the scan. After the scan, the continuously collected data in k-space (temporary image space) can be assigned retrospectively to match up with the timing of the heart beat and lung motion of the patient. This means that these scans are cardiac-averaged, so the measured blood velocities are an average over multiple cardiac cycles.
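A toy sketch of the retrospective binning step (all timestamps hypothetical): each continuously acquired k-space line is assigned to a cardiac phase according to where it falls within its enclosing R-R interval.

```python
import numpy as np

n_phases = 20                                   # cardiac phases to reconstruct
r_peaks = np.array([0.00, 0.85, 1.72, 2.60])    # ECG R-peak times in s
acq_times = [0.10, 0.50, 1.00, 1.60, 2.00]      # k-space line timestamps in s

def cardiac_bin(t):
    """Index of the cardiac phase containing time t."""
    i = np.searchsorted(r_peaks, t, side="right") - 1   # enclosing R-R interval
    frac = (t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i])
    return min(int(frac * n_phases), n_phases - 1)

print([cardiac_bin(t) for t in acq_times])
# Lines landing in the same bin (same relative cardiac time, different beats)
# are combined, so the reconstructed velocities are cardiac-cycle averages.
```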
Applications
Phase contrast MRI is one of the main techniques for magnetic resonance angiography (MRA). This is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off").
Limitations
In particular, a few limitations of PC-MRI are of importance for the measured velocities:
Partial volume effects (when a voxel contains the boundary between static and moving materials) can overestimate the phase, leading to inaccurate velocities at the interface between materials or tissues.
Intravoxel phase dispersion (when velocities within a pixel are heterogeneous or in areas of turbulent flow) can produce a resultant phase that does not resolve the flow features accurately.
Assuming that acceleration and higher orders of motion are negligible can be inaccurate depending on the flow field.
Displacement artifacts (also known as misregistration and oblique flow artifacts) occur when there is a time difference between the phase and frequency encoding. These artifacts are highest when the flow direction is within the slice plane (most prominent in the heart and aorta for biological flows).
Vastly undersampled Isotropic Projection Reconstruction (VIPR)
Vastly undersampled isotropic projection reconstruction (VIPR) is a radially acquired MRI sequence which results in high-resolution MRA with significantly reduced scan times and without the need for breath-holding.
References
Magnetic resonance imaging | Phase contrast magnetic resonance imaging | Chemistry | 1,598 |
48,764,123 | https://en.wikipedia.org/wiki/Male%20warrior%20hypothesis | The male warrior hypothesis (MWH) is an evolutionary psychology hypothesis by Professor Mark van Vugt which argues that human psychology has been shaped by between-group competition and conflict. Specifically, the evolutionary history of coalitional aggression between groups of men may have resulted in sex-specific differences in the way outgroups are perceived, creating ingroup vs. outgroup tendencies that are still observable today.
Overview
Violence and warfare
Violence and aggression are universal across human societies, and have likely been features of human behavior since prehistory. Archaeologists have found mass graves dating to the late Pleistocene and early Holocene that contain primarily male skeletons showing signs of blunt force trauma, indicating that the cause of death was injuries from weapons used in combat.
Violence among humans occurs in distinct patterns, differing most obviously by sex. Ethnographic findings and modern crime data indicate that the majority of violence is both perpetrated by and targeted at males, and males are the most likely to be victims of violence. This male-male pattern of violence has been observed so repeatedly and in so many cultures that it may qualify as a human universal.
Tribal behavior
Humans are a social species with a long history of living in tribal groups. The psychological mechanisms that evolved to handle the complexities of group living have also created heuristics for quickly categorizing others as ingroup or outgroup members, with different behavioral strategies for each: treat ingroup members (those in one's own group) favorably, and react to outgroup members (those who belong to a different group) with fear and aggression. These tendencies arise with little prompting, and have been elicited over superficial group distinctions in lab studies, for example by showing paintings to participants and creating groups based on which painting each participant prefers.
The male warrior hypothesis suggests that the ease with which individuals discriminate against others is an adaptation resulting from a long history of being threatened by outgroup males, who are in competition for resources.
Sex differences in parental investment
The MWH argues that the sex differences in attitudes towards outgroup members may be a result of the different reproductive strategies used by males and females—specifically, the greater competition among males for mates. In mammals, males and females have distinct reproductive strategies based on the physiology of reproduction. Because females gestate, birth, feed, and invest more overall resources in each of their offspring, they are more selective with their mates but have greater certainty of being able to reproduce.
Males, in contrast, can mate at a very low energetic cost once they have found a partner, but are only able to attract a female if they have physical or social characteristics that can be converted into resources—e.g., territory, food resources, status, power, or influence—or the strength and alliances to coerce females to mate. As a result, there is typically much greater variability in the reproductive success of males within a species and higher competition among males for mates. The strongest, best adapted, and most powerful males may have a harem, while less fit males never reproduce.
For more details on this topic, see Trivers' theory of parental investment.
Male attitudes towards groups
The male warrior hypothesis predicts that because males may have historically remained in the groups in which they were born rather than moving away at adulthood (see patrilocality), they have a higher overall relatedness to their group than the female members, who would have moved to their new husbands’ group upon marriage. Males may have a stronger interest in defending their group, and will be more likely to act aggressively towards outgroup males they encounter who may be attempting to steal resources or weaken the group with violence.
For men at risk of never finding a mate, the fitness benefit of engaging in aggressive, violent behavior could outweigh the potential costs of fighting, especially if fighting alongside a coalition. Furthermore, groups with more individuals who formed coalitions and acted altruistically towards in-group members but aggressively towards outgroup members would prosper (see multi-level selection).
Observational evidence/studies
Sex differences
Consistent with the expectations of the male warrior hypothesis, several studies have shown more ethnocentric and xenophobic beliefs and behaviors among men (compared to women), including the more frequent use of dehumanizing speech to describe outgroup members; stronger identification with their groups; greater cooperation when faced with competition from another group; a greater desire to engage in war when presented with images of attractive (but not unattractive) members of the opposite sex; greater overall rates of male-male competition and violence (as shown in violent crime and homicide statistics); and larger body size correlating with quicker anger responses.
Studies have also tested the responses of women to outgroups, and have shown that women are most likely to fear outgroup males during the periovulatory phase of the menstrual cycle, when fertility is at its peak. Women also have more negative responses around peak fertility when the males belong to an outgroup that the woman associates with physical formidability, even if the group was constructed in the lab. Overall, women who feel most at risk of sexual coercion are more likely to fear outgroup males, which aligns with the predictions of the MWH.
Prepared learning studies
In studies of prepared learning, conditioned fear responses to images of outgroup males were far more difficult to extinguish than conditioned fear responses to outgroup females or ingroup members of either sex, as measured by skin conductance (perspiration) tests. These results held true whether the participant was male or female. Because the neural circuitry for fear responses is more developed towards stimuli that have posed a larger threat for most of human history (snakes and spiders, for example, which were dangers frequently encountered by foragers), these findings suggest that outgroup males may have been more of a threat to physical safety than outgroup females or ingroup members, supporting the male warrior hypothesis.
Sport matches
It is hypothesized that sport began as a way for men to develop the skills needed in primitive hunting and warfare, and later developed to act primarily as a lek where male athletes display and male spectators evaluate the qualities of potential allies and rivals. This hypothesis is supported by the observation that the most popular modern male sports require the skills needed for success in male-male physical competition and primitive hunting and warfare, and that champion male athletes obtain high status and thereby reproductive opportunities in ways that parallel those gained by successful primitive hunters and warriors.
There is evidence that male and female athletes generally differ in their motivation in sports, specifically their competitiveness and risk taking, in accordance with the spectator lek hypothesis.
The male warrior hypothesis proposes that men must engage in maximally effective intra-group cooperation. Post-conflict affiliation between opponents is proposed to facilitate future cooperation. Regarding sports matches as a proxy for intra-group conflict, a study found that unrelated human males are more predisposed than females to invest in post-conflict affiliation that is expected to facilitate future intra-group cooperation.
Non-human evidence
Coalitionary violence has also been observed in social species besides humans, including other primates. Chimpanzee (Pan troglodytes) males demonstrate similar violent behavior: groups of males form coalitions that patrol the borders of their territory and attack neighboring bands. Chimpanzees also have patrilocal living patterns, which aid with forming close coalitions, as all males are likely kin.
A study of 72 species of group-living mammals found that males are more involved than females in inter-group conflict where male fitness is limited by access to mates whereas female fitness is limited by access to food and safety.
See also
Challenge hypothesis
Gangs
Sex differences in humans
Sex differences in psychology
Sexual selection in humans
Shame-stroke
Tribalism
War rape
Warrior culture
References
Evolutionary psychology
Aggression
Testosterone
Masculinity
War
Gangs | Male warrior hypothesis | Biology | 1,598 |
52,855,912 | https://en.wikipedia.org/wiki/Sodium%20calcium%20edetate | Sodium calcium edetate (sodium calcium EDTA), also known as edetate calcium disodium among other names, is a medication primarily used to treat lead poisoning, including both short-term and long-term lead poisoning. Sodium calcium edetate came into medical use in the United States in 1953.
Chelation agent
Sodium calcium edetate is in the chelating agent family of medication. It is a salt of edetate with two sodium atoms and one calcium atom.
It works by binding to a number of heavy metals, which renders them almost inert and allows them to leave the body in the urine.
Edetate disodium (Endrate) is a different formulation which does not have the same effects.
Medical use
Sodium calcium edetate's primary use is to treat lead poisoning, for which it is an alternative to succimer. It is given by slow injection into a vein or into a muscle. For lead encephalopathy, sodium calcium edetate is typically used together with dimercaprol. It may also be used to treat plutonium poisoning. It does not appear to be useful for poisoning by tetra-ethyl lead.
Side effects
Common side effects include pain at the site of injection. Other side effects may include kidney problems, diarrhea, fever, muscle pains, and low blood pressure. Benefits when needed in pregnancy are likely greater than the risks.
History
Sodium calcium edetate came into medical use in the United States in 1953. It is on the World Health Organization's List of Essential Medicines.
References
Calcium compounds
Chelating agents used as drugs
Organic sodium salts
World Health Organization essential medicines | Sodium calcium edetate | Chemistry | 339 |
16,800,036 | https://en.wikipedia.org/wiki/Cramer%20Systems | Cramer Systems Group Ltd. is a British telecommunications software firm, founded in 1996 by Jon Craton, Mark Farmer and Don Gibson. The firm developed operations support systems (OSS) for telecommunications industry clients such as Vodafone, KPN Telecom, and British Telecom. In August 2006 Amdocs announced the completion of its acquisition of the company. The products developed by Cramer have since been integrated into the Amdocs product suite. The company name is a combination of three letters each from the names of Craton and Farmer.
Products
The company produced the Cramer OSS Suite, a set of applications built around Resource Manager, an inventory of a company's telecommunications network infrastructure and configuration. This includes equipment such as switches, routers, Synchronous Digital Hierarchy (SDH) and Plesiochronous Digital Hierarchy (PDH) nodes, and customer-premises equipment (CPE), but also cables, buildings, rooms, cabinets, and other such furniture. Originally developed to administer networks such as SDH and PDH, the product grew to encompass many modern telecommunications technologies. A key feature of the product was that it was 'service-aware': beyond recording what equipment is installed, the system knows what each system, module, card, interface or cable can do, which circuits or connections are configured on the network, and which customer services use these connections.
Additional modules grew to include:
Task Engine, for complex "design and assign" task automation and basic workflow management;
Delivery Engine, to manage complex change control (planning, plan execution including external workforce management system integration, and reporting);
Sync Engine, used to prevent mismatches between the information in the inventory system and the live network;
Discovery Engine, to retrieve configuration information from network devices and element managers;
Activation Engine, to provision network resources;
Service Manager & Service Catalog, to take requests for services from a catalog and convert them into requests in Delivery Engine and Task Engine;
Route Finder, to identify physically redundant paths through the network;
Class of Service Manager, used to manage IP network policies;
Site Manager, used to model and manage the physical layout of exchanges and datacenters, including their power and cooling;
IT Manager, used to model services running on servers and virtual machines;
Partition Manager, used to provide "multi-tenant"-like data security;
and numerous integration points to other OSS systems, such as Inventory Import/Export, FCAPS alarm enrichment, and others.
The latest two versions of the OSS suite used Java as middleware platform for the GUI and some interfaces or adapters between the suite and external systems.
Acquisition by Amdocs
Although Cramer was acquired by Amdocs in August 2006, development of its products continues and is still based at the technology center in Bath, England. The company continues to exist as the operations support systems (OSS) department of the much larger BSS developer Amdocs.
References
External links
Amdocs OSS homepage
Software companies of the United Kingdom
Network management
British companies established in 1996
1996 establishments in England
Amdocs | Cramer Systems | Engineering | 622 |
600,835 | https://en.wikipedia.org/wiki/Canada%20balsam | Canada balsam, also called Canada turpentine or balsam of fir, is the oleoresin of the balsam fir tree (Abies balsamea) of boreal North America. The resin, dissolved in essential oils, is a viscous, sticky, colourless or yellowish liquid that turns to a transparent yellowish mass when the essential oils have been allowed to evaporate.
Canada balsam is amorphous when dried. It has poor thermal and solvent resistance.
Uses
Due to its high optical quality and the similarity of its refractive index to that of crown glass (n = 1.55), purified and filtered Canada balsam was traditionally used in optics as an invisible-when-dry glue for glass, such as lens elements. Other optical elements can be cemented with Canada balsam, such as two prisms bonded to form a beam splitter.
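To see why the index match matters, a back-of-the-envelope sketch (Python; typical literature values of crown glass n ≈ 1.52 and balsam n ≈ 1.55 are assumed) compares the normal-incidence Fresnel reflectance of a glass-balsam joint with that of a bare glass-air surface:

```python
import math

def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel power reflectance at an n1/n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_glass, n_balsam, n_air = 1.52, 1.55, 1.0   # typical literature values

print(f"glass-balsam: {fresnel_reflectance(n_glass, n_balsam):.2e}")  # ~1e-4
print(f"glass-air:    {fresnel_reflectance(n_glass, n_air):.2e}")     # ~4e-2

# The cemented joint reflects several hundred times less light than a bare
# glass surface, which is why the balsam layer is effectively invisible.
```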
Canada balsam was also commonly used for making permanent microscope slides. From about 1830 molten Canada balsam was used for microscope slides. Canada balsam in solution was then introduced in 1843, becoming popular in the 1850s. In biology, for example, it can be used to conserve microscopic samples by sandwiching the sample between a microscope slide and a glass coverslip, using Canada balsam to glue the arrangement together and enclose the sample to conserve it.
Canada balsam dissolved in xylene is also used for preparing slide mounts. Some workers prefer terpene resin for slide mounts, as it is both less acidic and cheaper than balsam.
Another important application of Canada balsam is in the construction of the Nicol prism. A Nicol prism consists of a calcite crystal cut into two halves, with Canada balsam placed between the two layers. Calcite is an anisotropic crystal and has different refractive indices for rays polarized along directions parallel and perpendicular to its optic axis. These rays with differing refractive indices are known as the ordinary and extraordinary rays. The refractive index of Canada balsam lies between the refractive indices of calcite for the ordinary and extraordinary rays. Hence the ordinary ray, for which the balsam layer is the optically less dense medium, is totally internally reflected when it strikes the layer beyond the critical angle, while the extraordinary ray is transmitted. The emergent ray is linearly polarized, and traditionally this has been one of the popular ways of producing polarized light.
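A quick check of the Nicol prism argument with typical literature values (calcite n_o ≈ 1.658 and n_e ≈ 1.486 near 589 nm; balsam n ≈ 1.55), computing the critical angle for the ordinary ray at the calcite-balsam interface:

```python
import math

n_o, n_e = 1.658, 1.486   # calcite ordinary/extraordinary indices (~589 nm)
n_balsam = 1.55           # Canada balsam, between the two

# Critical angle for the ordinary ray at the calcite-balsam interface:
theta_c = math.degrees(math.asin(n_balsam / n_o))
print(f"critical angle ~ {theta_c:.1f} deg")   # ~69 deg

# The extraordinary ray sees n_e < n_balsam, so total internal reflection
# is impossible for it and it passes through, linearly polarized.
```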
Some other uses (traditional and current) include:
In geology, it is used as a common thin section cement and glue and for refractive-index studies and tests, such as the Becke line test;
To fix scratches in glass (car glass, for instance) as invisibly as possible;
In oil painting, to achieve glow and facilitate fusion;
In Buckley's cough syrup.
Balsam was phased out as an optical adhesive during World War II, in favour of polyester, epoxy, and urethane-based adhesives. In modern optical manufacturing, UV-cured epoxies are often used to bond lens elements. Synthetic resins have largely replaced organic balsams for use in slide mounts.
See also
Balm of Gilead, a healing compound made from the resinous gum of Commiphora gileadensis.
References
Adhesives
Resins
Microscopy mountants | Canada balsam | Physics,Chemistry | 641 |