11,295,114 | https://en.wikipedia.org/wiki/Bond%20order%20potential | Bond order potential is a class of empirical (analytical) interatomic potentials which is used in molecular dynamics and molecular statics simulations. Examples include the Tersoff potential, the EDIP potential, the Brenner potential, the Finnis–Sinclair potentials, ReaxFF, and the second-moment tight-binding potentials.
They have the advantage over conventional molecular mechanics force fields in that they can, with the same parameters, describe several different bonding states of an atom, and thus to some extent may be able to describe chemical reactions correctly. The potentials were developed partly independently of each other, but share the common idea that the strength of a chemical bond depends on the bonding environment, including the number of bonds and possibly also angles and bond lengths. It is based on the Linus Pauling bond order concept
and can be written in the form

$$V_{ij}(r_{ij}) = V_{\text{repulsive}}(r_{ij}) + b_{ijk}\, V_{\text{attractive}}(r_{ij})$$

This means that the potential is written as a simple pair potential depending on the distance between two atoms $r_{ij}$, but the strength of this bond is modified by the environment of the atom $i$ via the bond order $b_{ijk}$. $b_{ijk}$ is a function that in Tersoff-type potentials depends inversely on the number of bonds to the atom $i$, the bond angles between sets of three atoms $ijk$, and optionally on the relative bond lengths $r_{ij}$, $r_{ik}$. In the case of only one atomic bond (as in a diatomic molecule), $b_{ijk} = 1$, which corresponds to the strongest and shortest possible bond. In the other limiting case, $b_{ijk} \to 0$ for an increasingly large number of bonds within some interaction range, and the potential turns completely repulsive.
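To make this functional form concrete, the following minimal sketch evaluates a bond energy with Morse-like repulsive and attractive pair terms and a toy bond-order function that decays with coordination; the parameter values and the specific form of the bond-order function are illustrative assumptions, not a fitted potential.

```python
# A minimal sketch of a Tersoff-style bond order energy, assuming
# Morse-like repulsive/attractive pair terms and a toy bond-order
# function; all parameter values are illustrative, not a fitted model.
import numpy as np

def pair_terms(r, D=3.0, a=2.0, r0=1.5):
    """Repulsive and attractive parts of a Morse-like pair potential."""
    v_rep = D * np.exp(-2.0 * a * (r - r0))
    v_att = -2.0 * D * np.exp(-a * (r - r0))
    return v_rep, v_att

def bond_order(n_bonds, delta=0.5):
    """Toy b_ijk: equals 1 for a single bond (diatomic limit) and decays
    towards 0 as the number of bonds within the interaction range grows."""
    return float(n_bonds) ** (-delta)

def bond_energy(r, n_bonds):
    v_rep, v_att = pair_terms(r)
    return v_rep + bond_order(n_bonds) * v_att

print(bond_energy(1.5, 1))   # b = 1: strongest possible bond, E < 0
print(bond_energy(1.5, 12))  # high coordination: the bond turns repulsive, E > 0
```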
Alternatively, the potential energy can be written in the embedded atom model form

$$V_i = F(\rho_i), \qquad \rho_i = \sum_{j \neq i} \rho(r_{ij})$$

where $\rho_i$ is the electron density at the location of atom $i$ and $F$ is an embedding function. These two forms for the energy can be shown to be equivalent (in the special case that the bond-order function contains no angular dependence).
A more detailed summary of how the bond order concept can be motivated by the second-moment approximation of tight binding, and how both of these functional forms can be derived from it, can be found in the literature.
The original bond order potential concept has been developed further to include distinct bond orders for sigma bonds and pi bonds in the so-called BOP potentials.
Extending the analytical expression for the bond order of the sigma bonds to include fourth moments of the exact tight-binding bond order reveals contributions from both sigma- and pi-bond integrals between neighboring atoms. These pi-bond contributions to the sigma bond order are responsible for stabilizing the asymmetric over the symmetric (2×1) dimerized reconstruction of the Si(100) surface.
Also the ReaxFF potential can be considered a bond order potential, although the motivation of its bond order terms is different from that described here.
References
Computational chemistry
Computational physics | Bond order potential | Physics,Chemistry | 549 |
8,116,008 | https://en.wikipedia.org/wiki/Cohn%20process | The Cohn process, developed by Edwin J. Cohn, is a series of purification steps with the purpose of extracting albumin from blood plasma. The process is based on the differential solubility of albumin and other plasma proteins based on pH, ethanol concentration, temperature, ionic strength, and protein concentration. Albumin has the highest solubility and lowest isoelectric point of all the major plasma proteins. This makes it the final product to be precipitated, or separated from its solution in a solid form. Albumin was an excellent substitute for human plasma in World War Two. When administered to wounded soldiers or other patients with blood loss, it helped expand the volume of blood and led to speedier recovery. Cohn's method was gentle enough that isolated albumin protein retained its biological activity.
Process details
During the operations, the ethanol concentration changes from zero initially to 40%. The pH decreases from a neutral 7 to a more acidic 4.8 over the course of the fractionation. The temperature starts at room temperature and decreases to −5 degrees Celsius. Initially, the blood is frozen. There are five major fractions, each of which ends with a specific precipitate. These precipitates are the separate fractions.
Fractions I, II, and III are precipitated out at earlier stages. The conditions are 8% ethanol, pH 7.2, −3 °C, and 5.1% protein for Fraction I; and 25% ethanol, pH 6.9, −5 °C, and 3% protein for Fractions II and III. The albumin remains in the supernatant during the solid/liquid separation under these conditions. Fraction IV contains several unwanted proteins that need to be removed. In order to do this, the conditions are varied so as to precipitate these proteins out: the ethanol concentration is raised from 18 to 40% and the pH from 5.2 to 5.8. Finally, albumin is located in Fraction V. The precipitation of albumin is done by reducing the pH to 4.8, which is near the pI of the protein, while maintaining the ethanol concentration at 40%, with a protein concentration of 1%. Thus, only 1% of the original plasma remains in the fifth fraction.
However, albumin is lost at each process stage, with roughly 20% of the albumin lost through the precipitation stages before Fraction V. In order to purify the albumin, there is an extraction with water, with adjustment to 10% ethanol and pH 4.5 at −3 °C. Any precipitate formed here is an impurity and is removed by filtration and discarded. Reprecipitation, or repetition of the precipitation step in order to improve purity, is done by raising the ethanol concentration back to 40% from the extraction stage; the pH is 5.2 and it is conducted at −5 °C. Several variations of Cohn fractionation were created to lower cost and raise yield. Generally, if the yield is high, the purity is lowered, to roughly 85–90%.
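For reference, the stage conditions quoted above can be collected into a single configuration table; the sketch below is illustrative only (None marks parameters the text does not specify, and real protocols vary between implementations).

```python
# Fractionation conditions as quoted in the text above; None marks
# parameters the text does not specify. Illustrative only.
COHN_FRACTIONS = {
    "I":      {"ethanol_pct": 8,  "pH": 7.2, "temp_C": -3,   "protein_pct": 5.1},
    "II+III": {"ethanol_pct": 25, "pH": 6.9, "temp_C": -5,   "protein_pct": 3.0},
    "IV":     {"ethanol_pct": 40, "pH": 5.8, "temp_C": None, "protein_pct": None},
    "V":      {"ethanol_pct": 40, "pH": 4.8, "temp_C": None, "protein_pct": 1.0},
}

for fraction, conditions in COHN_FRACTIONS.items():
    print(fraction, conditions)
```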
Products other than albumin
Cohn was able to start the Plasma Fractionation Laboratory after he was given massive funding from government agencies and private pharmaceutical companies. This led to the fractionation of human plasma. Human plasma proved to have several useful components other than albumin: its fractionation yielded human serum albumin, serum gamma globulin, fibrinogen, thrombin, and blood group globulins. The fibrinogen and thrombin fractions were further combined during the War into additional products, including liquid fibrin sealant, solid fibrin foam, and a fibrin film.
Gamma globulins are found in Fractions II and III and proved to be essential in treating measles for soldiers. Gamma globulin also was useful in treatment for polio, but did not have much effect in treating mumps or scarlet fever. Most importantly, the gamma globulins were useful in modifying and preventing infectious hepatitis during the Second World War. It eventually became a treatment for children exposed to this type of hepatitis.
Liquid fibrin sealant was used in treating burn victims, including some from the attack at Pearl Harbor, to attach skin grafts with an increased success rate. It was also found helpful at re-connecting or anastomosing severed nerves. Fibrin foam and thrombin were used to control blood vessel oozing especially in liver injuries and near tumors. It also minimized bleeding from large veins as well as dealing with blood vessel malformations within the brain. Fibrin film was used to stop bleeding in various surgical applications, including neurosurgery. However, it was not useful in controlling arterial bleeding. The first fibrinogen/fibrin based product capable of stopping arterial hemorrhage was the "Fibrin Sealant Bandage" or "Hemostatic Dressing (HD)" invented by Martin MacPhee at the American Red Cross in the early 1990s, and tested in collaboration with the U.S. Army.
Process variations
The Gerlough method, developed in 1955, improved process economics by reducing the consumption of ethanol. Instead of 40% in certain steps, Gerlough used 20% ethanol for precipitation, especially for Fractions II and III. In addition, Gerlough combined these two fractions with Fraction IV into one step to reduce the number of fractionations required. While this method proved less expensive, it was not adopted by industry because this combination of Fractions II, III, and IV raised fears of mixing and high impurities.
The Hink method was developed in 1957. This method gave higher yields through recovery of some of the plasma proteins discarded in Fraction IV. The improved yields, however, were balanced by the lower purities obtained, in the 85% range.
The Mulford method, akin to the Hink method, used the Fraction II and III supernatant as the last step before finishing and heat treatment. The method combined Fractions IV and V, but in this case the albumin would not be as pure, although the yields may be higher.
Another variation was developed by Kistler and Nitschmann to provide a purer form of albumin, even though this was offset by lower yields. Similar to Gerlough, Precipitate A, which is equivalent to Cohn's Fractions II and III, was formed at a lower ethanol concentration of 19%, but the pH in this case was also lowered, to 5.85. Also similar to Gerlough and Mulford, Fraction IV was combined and precipitated at 40% ethanol, pH 5.85, and −8 degrees C. The albumin, which corresponds to Fraction V, is recovered in Precipitate C after a pH adjustment to 4.8. As in the Cohn process, the albumin is purified by extraction into water followed by precipitation of the impurities at 10% ethanol, pH 4.6, and −3 degrees C; the precipitate formed here is filtered out and discarded. Then Precipitate C (Fraction V) is reprecipitated at pH 5.2 and stored as a paste at −40 degrees C. This process has been more widely accepted because it separates the fractions and makes each stage independent of the others.
Another variation involved heat–ethanol fractionation, originally developed to inactivate the hepatitis virus. In this process, recovery of high-yield, high-purity albumin is the most important goal, while the other plasma proteins are neglected. To make sure the albumin does not denature in the heat, stabilizers such as sodium octanoate are added, allowing the albumin to tolerate higher temperatures for long periods. The plasma is heat treated at 68 degrees C with sodium octanoate and 9% ethanol at pH 6.5. This results in improved albumin recovery, with yields of 90% and purities of 100%, and it is not nearly as expensive as cold-ethanol procedures such as the Cohn process. One drawback is the presence of new antigens due to possible heat denaturation of the albumin. In addition, the other plasma proteins have practical uses, and neglecting them would not be worthwhile. Finally, the expensive heat-treatment vessels offset the lower running cost compared with cold-ethanol formats, which do not need them. For these reasons, several companies have not adopted this method even though it has the most impressive results. However, one prominent organization that uses it is the German Red Cross.
The latest variation was developed by Hao in 1979. This method is significantly simplified compared to the Cohn process; its goal is to obtain high albumin yields when albumin is the sole product. In a two-stage process, Fractions I, II, III, and IV are coprecipitated at 40% ethanol, pH 5.4 to 7.0, and −3 to −7 degrees C, and Fraction V is then precipitated at pH 4.8 and −10 degrees C. Impurities are also precipitated directly from the Fraction II and III supernatant at 42% ethanol, pH 5.8, −5 degrees C, 1.2% protein, and 0.09 ionic strength, with Fraction V again precipitated at pH 4.8. The high yields are due to a combination of a simplified process, lower losses due to coprecipitation, and the use of filtration. Higher purities of 98% were also achieved because of the higher ethanol levels, but the yields were lowered at this high purity.
More recent methods involve the use of chromatography.
Influences of Cohn process
The Cohn process was a major development in the field of blood fractionation. It has several practical uses in treating diseases such as hepatitis and polio. It was most useful during the Second World War, when soldiers recovered at a faster rate because of transfusions with albumin. The Cohn process has been modified over the years, as seen above. In addition, it has influenced other processes within the blood fractionation industry, leading to new forms of fractionation such as chromatographic plasma fractionation in ion exchange and albumin finishing processes. In general, the Cohn process and its variations have given a huge boost to, and serve as a foundation for, the fractionation industry to this day.
However, the process has not been studied well because it is archaic; most importantly, it has never been modernized by manufacturing companies. The cold-ethanol format may be too gentle to kill off certain viruses that require heat inactivation. Because the process has remained unchanged for so long, several built-in inefficiencies and inconsistencies affect its economics for pharmaceutical and manufacturing companies. One exception to this was the application in Scotland of continuous-flow processing instead of batch processing, devised at the Protein Fractionation Centre (PFC), the plasma fractionation facility of the Scottish National Blood Transfusion Service (SNBTS). This process involved in-line monitoring and control of pH and temperature, with flow control of the plasma and ethanol streams using precision gear pumps, all under computerised feedback control. As a result, Cohn Fractions I+II+III, IV, and V were produced in a few hours rather than over many days. The continuous-flow preparation of cryoprecipitate was subsequently integrated into the process upstream of Cohn fractionation.
Nevertheless, this process still serves as a major foundation for the blood industry in general and its influence can be seen as it is referred to in the development of newer methods. Although it has its drawbacks depending on the variation, the Cohn Process’ main advantage is its practical uses and its utility within pharmacological and medical industries.
References
Biochemical separation processes
Blood
Blood products
Industrial processes
Medical technology
Transfusion medicine
Fractionation | Cohn process | Chemistry,Biology | 2,513 |
19,661,214 | https://en.wikipedia.org/wiki/Lee%E2%80%93Yang%20theorem | In statistical mechanics, the Lee–Yang theorem states that if partition functions of certain models in statistical field theory with ferromagnetic interactions are considered as functions of an external field, then all zeros are purely imaginary (or on the unit circle after a change of variable). The first version was proved for the Ising model by T. D. Lee and C. N. Yang in 1952. Their result was later extended to more general models by several people. Asano in 1970 extended the Lee–Yang theorem to the Heisenberg model and provided a simpler proof using Asano contractions. Simon and Griffiths in 1973 extended the Lee–Yang theorem to certain continuous probability distributions by approximating them by a superposition of Ising models. Newman in 1974 gave a general theorem stating roughly that the Lee–Yang theorem holds for a ferromagnetic interaction provided it holds for zero interaction. Lieb and Sokal in 1981 generalized Newman's result from measures on R to measures on higher-dimensional Euclidean space.
There has been some speculation about a relationship between the Lee–Yang theorem and the Riemann hypothesis about the Riemann zeta function.
Statement
Preliminaries
In this formalization, the Hamiltonian is given by

$$H = -\sum_{jk} J_{jk} S_j S_k - \sum_j z_j S_j$$

where the $S_j$'s are spin variables and the $z_j$ are the external fields.
The system is said to be ferromagnetic if all the coefficients $J_{jk}$ in the interaction term are non-negative reals.
The partition function is given by

$$Z = \int e^{-H} \, d\mu_1(S_1) \cdots d\mu_N(S_N)$$

where each $d\mu_j$ is an even measure on the reals R decreasing at infinity so fast that all Gaussian functions are integrable, i.e.

$$\int e^{bS^2} \, |d\mu_j(S)| < \infty \quad \text{for all real } b.$$
A rapidly decreasing measure $d\mu$ on the reals is said to have the Lee–Yang property if all zeros of its Fourier transform

$$\hat\mu(h) = \int e^{ihS} \, d\mu(S)$$

are real.
Theorem
The Lee–Yang theorem states that if the Hamiltonian is ferromagnetic, all the measures $d\mu_j$ have the Lee–Yang property, and all the numbers $z_j$ have positive real part, then the partition function is non-zero.
In particular, if all the numbers $z_j$ are equal to some number $z$, then all zeros of the partition function (considered as a function of $z$) are purely imaginary.
In the original Ising model case considered by Lee and Yang, the measures all have support on the two-point set {−1, 1}, so the partition function can be considered a function of the variable $\rho = e^{\pi z}$. With this change of variable the Lee–Yang theorem says that all zeros $\rho$ lie on the unit circle.
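The circle statement is easy to check numerically for small systems. The sketch below uses a short ferromagnetic Ising chain and the convention ρ = e^{2z} for ±1 spins (one common choice of fugacity variable, slightly different from the change of variable above); it computes the zeros of the resulting partition-function polynomial and verifies that they have unit modulus.

```python
# Numerical illustration (not a proof) of the Lee-Yang circle theorem for a
# small ferromagnetic Ising chain: H = -J * sum_j S_j S_{j+1} - z * sum_j S_j,
# spins +/-1, J >= 0. With rho = exp(2z), Z(z) = exp(-N z) * P(rho) for a
# polynomial P whose zeros should all lie on the unit circle.
import itertools
import numpy as np

N, J = 6, 0.7  # chain length and (non-negative) ferromagnetic coupling

coeffs = np.zeros(N + 1)  # coeffs[n] multiplies rho^n, n = number of up spins
for spins in itertools.product([-1, 1], repeat=N):
    n_up = spins.count(1)
    bond_sum = sum(spins[j] * spins[j + 1] for j in range(N - 1))
    coeffs[n_up] += np.exp(J * bond_sum)

zeros = np.roots(coeffs[::-1])  # np.roots wants the highest-degree coefficient first
print(np.abs(zeros))            # all moduli equal 1 up to rounding error
```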
Examples
Some examples of measures with the Lee–Yang property are:
The measure of the Ising model, which has support consisting of two points (usually 1 and −1) each with weight 1/2. This is the original case considered by Lee and Yang.
The distribution of spin n/2, whose support has n+1 equally spaced points, each of weight 1/(n + 1). This is a generalization of the Ising model case.
The density of measure uniformly distributed between −1 and 1.
The density $e^{-\lambda S^4}\,dS$ for positive $\lambda$.
The density $e^{-\lambda S^4 - bS^2}\,dS$ for positive $\lambda$ and real $b$. This corresponds to the $(\varphi^4)_2$ Euclidean quantum field theory.
The density $e^{-\lambda S^6 - bS^2}\,dS$ for positive $\lambda$ does not always have the Lee–Yang property.
If $d\mu$ has the Lee–Yang property, so does $e^{bS^2}\,d\mu$ for any positive $b$.
If $d\mu$ has the Lee–Yang property, so does $Q(S)\,d\mu$ for any even polynomial $Q$ all of whose zeros are imaginary.
The convolution of two measures with the Lee-Yang property also has the Lee-Yang property.
See also
Lee–Yang theory
References
Yang Chen-Ning
Tsung-Dao Lee
Statistical mechanics theorems | Lee–Yang theorem | Physics,Mathematics | 738 |
55,632,076 | https://en.wikipedia.org/wiki/Author%20profiling | Author profiling is the analysis of a given set of texts in an attempt to uncover various characteristics of the author based on stylistic- and content-based features, or to identify the author. Characteristics analysed commonly include age and gender, though more recent studies have looked at other characteristics, like personality traits and occupation.
Author profiling is one of the three major fields in automatic authorship identification (AAI), the other two being authorship attribution and authorship verification. The process of AAI emerged at the end of the 19th century. Thomas Corwin Mendenhall, an American autodidact physicist and meteorologist, was the first to apply this process, to the works of Francis Bacon, William Shakespeare, and Christopher Marlowe. Mendenhall sought to uncover quantitative stylistic differences between these three historical figures by inspecting word lengths.
Although much progress has been made in the 21st century, the task of author profiling remains an unsolved problem due to its difficulty.
Techniques
Through the analysis of texts, various author profiling techniques can be applied to predict information about the author. For example, function words, as well as part-of-speech analysis, can be used to determine the author's gender and the veracity of a text.
The process of author profiling usually involves the following steps (a minimal code sketch follows the list):
Identifying specific features to be extracted from the text
Building an adopted, standard representation (e.g. Bag-of-words model) for the target profile
Building a classification model using a standard classifier (e.g. Support Vector Machines) for the target profile
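A minimal sketch of these three steps using scikit-learn is shown below; the corpus, the "age group" labels, and the plain bag-of-words feature choice are hypothetical toy assumptions, and production systems add richer stylistic features.

```python
# A minimal sketch of the three steps above using scikit-learn:
# feature extraction + bag-of-words representation + SVM classifier.
# The corpus and age-group labels are hypothetical toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "omg this is sooo cool!!!",
    "The committee will reconvene next quarter.",
    "lol can't wait for the weekend",
    "Kindly find the attached report for your perusal.",
]
labels = ["younger", "older", "younger", "older"]

profile_model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # bag-of-words (plus bigram) features
    LinearSVC(),                          # standard SVM classifier
)
profile_model.fit(texts, labels)
print(profile_model.predict(["sooo excited lol"]))
```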
Machine learning algorithms for author profiling have become increasingly complex over time. Algorithms used in author profiling include:
Support Vector Machines
Naive Bayes classifiers
Deep averaging networks, feed-forward networks of several layers that operate on the mean of the word embeddings within a text
Long short-term memory networks
In the past, author profiling was limited to physical documents, often in the form of books and newspaper articles. Different combinations of textual attributes belonging to the authors were identified and analyzed using author profiling, including lexical and syntactical features. Pioneering research in author profiling focused mostly on a single genre until the shift towards author profiling on social media and the Internet. While attributes such as content words and POS tags are effective in author profile predictions on physical documents, their effectiveness on digital texts varies and depends on the type of online content being analyzed.
With the advances in technology, author profiling on the Internet has become increasingly common. Digital texts, such as social media posts, blog posts and emails, are now being used. This has sparked greater research efforts because of the advantages analysing digital texts can bring to sectors like marketing and business. Author profiling on digital texts has also enabled predictions of a wider range of author characteristics such as personality, income and occupation.
The most effective attributes for author profiling on digital texts involve a combination of stylistic and content features. Author profiling on digital texts also focuses on cross-genre author profiling, whereby one genre is used for training data and another genre is used for testing data, though both need to be relatively similar for good results.
There are some problems when performing author profiling techniques on online texts. These problems include:
Wide variation in lengths of texts used
Class imbalance in data
Author profiling and the Internet
The rise of the internet in the 20th to 21st century catalysed an increase in author profiling research, since data could be mined from the web, including social media platforms, emails, and blogs. Content from the web has been analysed in author profiling tasks to identify the age, gender, geographic origins, nationality, and psychometric traits of web users. The information obtained has been used to serve various applications, including marketing and forensics.
Social media
The increased integration of social media in people's daily lives has made it a rich source of textual data for author profiling. This is mainly because users frequently upload and share content for various purposes, including self-expression, socialisation, and personal businesses. Social bots are also a frequent feature of social media platforms, especially Twitter, generating content that may be analysed for author profiling. While different platforms contain similar data, they may also contain different features depending on the format and structure of the particular platform.
There are still limitations in using social media as a data source for author profiling, because the data obtained may not always be reliable or accurate. Users sometimes provide false information about themselves or withhold information. As a result, the training of algorithms for author profiling may be impeded by less accurate data. Another limitation is the irregularity of text in social media. Features of irregularity include deviation from normal linguistic standards, such as spelling errors, unstandardised transliteration (as with the substitution of letters with numbers), shorthands, and user-created abbreviations for phrases, which may pose a challenge to author profiling. Researchers have adopted methods to overcome these limitations in training their algorithms for author profiling.
Facebook
Facebook is useful for author profiling studies as a social networking service. This is because of how a social network may be built, expanded, and used for social action on the site. In such processes, users share personal content that may be used for author profiling studies. Textual data is obtained from Facebook from users' personal posts such as 'status updates'. These are acquired to produce a corpus in the selected language(s), creating either a bilingual or multilingual database of content words that may then be used for author profiling.
In the context of Facebook, author profiling mainly involves English textual data, but also uses non-English languages including Roman Urdu, Arabic, Brazilian Portuguese, and Spanish. While author profiling studies on Facebook have been predominantly for gender and age-group identification, there have been attempts to derive attributes to predict religiosity, the IT background of users, and even basic emotions (as defined by Paul Ekman), among others.
Weibo
Sina Weibo is one of the few Asian social media platforms containing texts in Asian languages to have been analysed for author profiling. The primary content of focus for author profiling on Weibo includes classical Chinese characters, hashtags, emoticons, kaomoji, homogeneous punctuation, Latin sequences (due to the multilingualism of the text), and even poetic formats. Particularly popular Chinese expressions, POS tags, and word types are also tracked for author profiling.
Author profiling for Weibo content requires algorithms different from those used for other social media platforms, mainly due to the linguistic differences between Mandarin Chinese and Western languages. For example, Chinese emoticons involve Chinese characters describing the gesture or facial expression in brackets, such as [哈哈] 'laughter', [泪] 'tears', [偷笑] 'giggle', [爱你] 'love', and [心] 'heart'. This differs from the use of punctuation symbols for emoticons in Western languages, or the common use of Unicode emojis on other platforms such as Facebook and Instagram. Further, while there are around 161 Western emoticons, there are around 2,900 emoticons regularly used in mainland China for web content as on Weibo. To tackle these differences, author profiling algorithms have been trained on Chinese emoticons and linguistic features. For example, author profiling algorithms have been designed to detect Chinese stylistic expressions of formality and sentiment, in place of algorithms detecting English linguistic features such as capital letters.
Compared to other more popular, globalised platforms, texts on Weibo are not as commonly used in the task of author profiling. This is likely due to the concentration of Weibo's user base in mainland China, limiting its usage to predominantly Chinese nationals. Studies done for this platform have used machine learning algorithms to identify authors' age and gender. Data is acquired from the Weibo microblog posts of willing participants and used to train algorithms that build concept-based profiles of users to a certain accuracy.
Chat logs
Chat logs have been studied for author profiling as they include much textual discourse, the analysis of which has contributed to applied studies of social trends and to forensic science. Sources of chat-log data for author profiling include platforms such as Yahoo!, AIM (software), and WhatsApp. Computational systems have been devised to produce concept-based profiles listing the chat topics discussed in a single chat room or by independent users.
Blogs
Author profiling can be used to identify characteristics of blog writers, such as their age, gender, and geographical location, based on their different writing styles. This is especially useful when it comes to anonymous blogs. The choice of content words, style-based features, and topic-based features are analyzed to discover characteristics of the author.
In general, features that frequently occur in blogs include a high proportion of verbs per post and a relatively high use of pronouns. The frequencies of verbs, pronouns, and other word classes are used to profile and classify the emotions in authors' writings, as well as their gender and age. Classification models that were used on physical documents in the past, such as Support Vector Machines, have also been tested on blogs, but have proven unsuitable for the latter due to low performance.
The machine learning algorithms that work well for author profiling on blogs include:
Instance-based learning
Random Decision Forests
Email
Email has been a consistent focus for author profiling due to the rich textual data that can be found in various sections of a typical emailing platform, including the sent, inbox, spam, trash, and archived folders. Multilingual approaches to author profiling for emails have included English, Spanish, and Arabic emails as data sources, among others. Through author profiling, details of email users may be identified, such as their age, gender, geographical origin, level of education, and nationality, and even psychometric traits of personality, including neuroticism, agreeableness, conscientiousness, and extraversion/introversion from the Big Five personality traits.
In author profiling for email, content is processed for important textual data, while unimportant features such as metadata and other hypertext markup language (HTML) redundancies are excluded. Important parts of the Multipurpose Internet Mail Extensions (MIME) that contain the content of the emails are also included in the analysis. Obtained data is often parsed into various sections of content, including author text, signature text, advertisements, quoted text, and reply lines. Further analysis of email textual content in author profiling tasks involves the extraction of tone of voice, sentiment, semantics, and other linguistic features to be processed.
Applications
Author profiling has applications in various fields where there is a need to identify specific characteristics of an author of a text, with a growing importance in fields like forensics and marketing. Depending on its application, the task of author profiling can vary in terms of the characteristics to be identified, number of authors studied and number of texts available for analysis.
Although its applications have traditionally been limited to written texts, such as literary works, this has extended to online texts with the advancement of the computer and the Internet.
Forensic linguistics
In the context of forensic linguistics, author profiling is used to identify characteristics of the author of anonymous, pseudonymous or forged text, based on the author's use of the language. Through linguistic analysis, forensic linguists seek to identify the suspect's motivation and ideology, along with other class features, such as the suspect's ethnicity or profession. While this does not always lead to decisive author identification, such information can help law enforcement narrow the pool of suspects.
In most cases, author profiling in the context of forensic linguistics involves a single text problem, in which there is either no or few comparison texts available and no external evidence that points to the author. Examples of text analysed by forensic linguists include blackmailing letters, confessions, testaments, suicide letters and plagiarised writing. This has also extended to online texts as well, such as sexually explicit online chat logs between middle-aged men and underaged girls, with the increasing number of cybercrimes committed on the Internet.
One of the earliest and best-known examples of the use of author profiling is by Roger Shuy, who was asked to examine a ransom note linked to a notorious kidnapping case in 1979. Based on his analysis of the kidnapper's idiolect, Shuy was able to identify crucial elements of the kidnapper's identity from his misspellings and a dialect item: namely, that the kidnapper was well-educated and from Akron, Ohio. This eventually led to a successful arrest and confession by the suspect.
However, there are criticisms that author profiling methods lack objectivity, since these methods rely on a forensic linguist's subjective identification of crucial sociolinguistic markers. Such methods, like those adopted by literary critic Donald Wayne Foster, are said to be speculative and based entirely on one's subjective experience, and therefore cannot be tested empirically.
Bot detection
Author profiling is adopted in the identification of social bots, the most common being Twitter bots. Social bots have been deemed as a threat given their commercial, political and ideological influence, such as the 2016 United States presidential election, during which they polarised political conversations, and spread misinformation and unverified information. In the context of marketing, social bots can artificially inflate the popularity of a product by posting positive reviews, and undermine the reputation of competitive products with unfavourable reviews. Therefore, bot detection from an author profiling perspective is a task of high importance.
Made to appear as human accounts, bots can mostly be identified by information on their profiles, like their username, profile photo and time of posting. However, the task of identifying bots solely from textual data (i.e. without meta-data) is significantly more challenging, requiring author profiling techniques. This usually involves a classification task based on semantic and syntactic features.
The task of bot and gender profiling was one of four shared tasks organised in the 2019 edition of PAN, which runs a series of scientific events and shared tasks on digital text forensics and stylometry. Participating teams achieved much success, with the best results for bot detection on English and Spanish tweets at 95.95% and 93.33% respectively.
Marketing
Author profiling is also useful from a marketing viewpoint, as it allows businesses to identify the demographics of people that like or dislike their products based on an analysis of blogs, online product reviews and social media content. This is important since most individuals post their reviews on products anonymously. Author profiling techniques are helpful to business experts in making better informed strategic decisions based on the demographics of their target group. In addition, businesses can target their marketing campaigns at groups of consumers who match the demographics and profile of current customers.
Author identification and influence tracing
Author profiling techniques are used to study traditional media and literature to identify the writing style of various authors as well as the topics they wrote about. Author profiling for literature has also been done to deduce the social networks of authors and their literary influence based on their bibliographic records of co-authorship. In cases of anonymous or pseudepigraphic works, the technique has sometimes been used to attempt to identify the author or authors, or to determine which works were written by the same person.
Some examples of author profiling studies on literature and traditional media include studies on the following:
The Bible (see Authorship of the Bible)
Gospels of the New Testament
Shakespeare's works
The Federalist Papers in the 1960s and 1990s
Author profiling studies for Lithuanian Literary Texts
Primary Colors, 1996 novel whose author was for a time anonymous
A Warning, a 2019 political book whose author was for a time anonymous
Library cataloguing
Another application of author profiling is in devising strategies for cataloguing library resources based on standard attributes. In this approach, author profiling techniques may improve the efficiency of library cataloguing in which library resources are automatically classified based on the authors' bibliographic records. This was a significant issue in the early 21st century when much of library cataloguing was still done manually.
In using author profiling for library cataloguing, researchers have used machine learning for automatic processes in the library, such as Support Vector Machine algorithms (SVMs). With the use of SVMs for author profiling, bibliographic records of authors within existing databases may be identified, tracked, and updated, identifying an author based on his or her topics of literary content and expertise as indicated in the bibliographic records. In this case, author profiling uses the social structures of authors that may be derived from physical copies of published media to catalogue library resources.
In popular culture
Author profiling has been featured in popular culture. The 2017 Discovery Channel mini-series Manhunt: Unabomber is a fictionalised account of the FBI investigation surrounding the Unabomber. It features a criminal profiler who identifies defining characteristics of the Unabomber's identity based on his analysis of the Unabomber's idiolect in his published manifesto and letters. The show highlighted the importance of author profiling in criminal forensics, as it was critical in the capture of the real Unabomber culprit in 1996.
See also
Related subjects
Computational linguistics
Forensic linguistics
Native-language identification
Social bot
Stylometry
References
Authorship debates
Computational fields of study | Author profiling | Technology | 3,620 |
6,834,050 | https://en.wikipedia.org/wiki/Submucosa | The submucosa (or tela submucosa) is a thin layer of tissue in various organs of the gastrointestinal, respiratory, and genitourinary tracts. It is the layer of dense irregular connective tissue that supports the mucosa (mucous membrane) and joins it to the muscular layer, the bulk of overlying smooth muscle (fibers running circularly within a layer of longitudinal muscle).
The submucosa (sub- + mucosa) is to a mucous membrane what the subserosa (sub- + serosa) is to a serous membrane.
Structure
Blood vessels, lymphatic vessels, and nerves (all supplying the mucosa) will run through here. In the intestinal wall, tiny parasympathetic ganglia are scattered around forming the submucous plexus (or "Meissner's plexus") where preganglionic parasympathetic neurons synapse with postganglionic nerve fibers that supply the muscularis mucosae. Histologically, the wall of the alimentary canal shows four distinct layers (from the lumen moving out): mucosa, submucosa, muscularis externa, and either a serous membrane or an adventitia.
In the gastrointestinal tract and the respiratory tract the submucosa contains the submucosal glands that secrete mucus.
Clinical significance
Identification of the submucosa plays an important role in diagnostic and therapeutic endoscopy, where special fibre-optic cameras are used to perform procedures on the gastrointestinal tract. Abnormalities of the submucosa, such as gastrointestinal stromal tumors, usually show integrity of the mucosal surface.
The submucosa is also identified in endoscopic ultrasound to identify the depth of tumours and to identify other abnormalities. An injection of dye, saline, or epinephrine into the submucosa is imperative in the safe removal of certain polyps.
Endoscopic mucosal resection involves removal of the mucosal layer, and in order to be done safely, a submucosal injection of dye is performed to ensure integrity at the beginning of the procedure.
Female uterine submucosal layers are liable to develop fibroids during pregnancy and are often excised upon discovery.
Small intestinal submucosa
Small intestinal submucosa (SIS) is submucosal tissue in the small intestines of vertebrates. SIS is harvested (typically from pigs) for transplanted structural material in several clinical applications, typically biologic meshes. They have low immunogenicity. Some uses under investigation include a scaffold for intervertebral disc regeneration.
Unlike other scaffold materials, the resorbable SIS extracellular matrix (SIS-ECM) scaffold is replaced by well-organized host tissues, including differentiated skeletal muscle.
History
A scientific article published in March 2018 proposed a revision of the anatomical definition of the submucosa. Its authors first observed a non-compact tissue, presumed to be submucosa, using a technology called endomicroscopy. They hypothesised that the submucosa was not compact, as it had previously appeared on histological analysis, but instead formed a reticular pattern. To confirm their findings, they fixed samples of bile duct in a freezing medium in order to preserve the shape of the submucosa. They then performed a histological analysis and, with several staining techniques, described the submucosa as a network of collagenous bands separating open, formerly fluid-filled spaces. These spaces are bordered by CD34-positive fibroblast-like cells. However, these cells are devoid of ultrastructural features indicative of endothelial differentiation, including pinocytotic vesicles and Weibel–Palade bodies.
Additional images
References
Membrane biology
Digestive system | Submucosa | Chemistry,Biology | 825 |
267,753 | https://en.wikipedia.org/wiki/RoboCup | RoboCup is an annual international robotics competition founded in 1996 by a group of university professors (including Hiroaki Kitano, Manuela M. Veloso, Itsuki Noda and Minoru Asada). The aim of the competition is to promote robotics and AI research by offering a publicly appealing – but formidable – challenge.
The name RoboCup is a contraction of the competition's full name, "Robot World Cup Initiative" (based on the FIFA World Cup), but there are many other areas of competition such as "RoboCupRescue", "RoboCup@Home" and "RoboCupJunior". Claude Sammut is the current president of RoboCup, and has been since 2019.
The official goal of the project is:
"By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup."
RoboCup leagues
The contest currently has six major domains of competition, each with a number of leagues and sub-leagues. These include:
RoboCup Soccer
Standard Platform League (formerly Four Legged League)
Small Size League
Middle Size League
Simulation League
2D Soccer Simulation
3D Soccer Simulation
Humanoid League
RoboCup Rescue League
Rescue Robot League
Rescue Simulation League
Rapidly Manufactured Robot Challenge
RoboCup@Home, which debuted in 2006, focuses on the introduction of autonomous robots to human society
RoboCup@Home Open Platform League (formerly just RoboCup@Home)
Robocup@Home Domestic Standard Platform League
RoboCup@Home Social Standard Platform League
RoboCup Logistics League, which debuted in 2012, is an application-driven league inspired by the industrial scenario of a smart factory
RoboCup@Work, which debuted in 2016, "targets the use of robots in work-related scenarios"
RoboCup Junior
Soccer League
OnStage (formerly Dance) League
Rescue League
Rescue CoSpace League
Each team is fully autonomous in all RoboCup leagues. Once the game starts, the only input from any human is from the referee.
RoboCup editions
The formal RoboCup competition was preceded by the (often unacknowledged) first International Micro Robot World Cup Soccer Tournament (MIROSOT) held by KAIST in Taejon, Korea, in November 1996. This was won by an American team from Newton Labs, and the competition was shown on CNN.
RoboCup was canceled in 2020 due to COVID-19. The planned host location of Bordeaux will host in 2023.
RoboCup Asia-Pacific editions
European RoboCupJunior Championship
RoboCup local events
2024
RoboCup German Open, Kassel, Germany
2023
RoboCup German Open, Germany
2021
RoboCup Kazakhstan, Nur-Sultan, Kazakhstan
RoboCup Portugal Open, virtual
RoboCup Russia Open, Tomsk, Russia
RoboCup Brazil Open, virtual
2020
RoboCup Japan Open 2020, virtual
RoboCup China Open 2020, virtual
RoboCup Brazil Open 2020, virtual
Events were cancelled due to COVID-19
2019
RoboCup Portuguese Open 2019, Gondomar, Portugal
RoboCup Brazil Open 2019, Rio Grande, Brazil
RoboCup Asia Pacific 2019, Moscow, Russia
RoboCup German Open 2019, Magdeburg, Germany
RoboCup China Open 2019, Shaoxing, China
2018
RoboCup Portugal Open 2018, Torres Vedras, Portugal
RoboCup Asia Pacific 2018, Kish Island, Iran
RoboCup Iran Open 2018, Tehran, Iran
RoboCup UAE 2018, Abu Dhabi, United Arab Emirates
RoboCup German Open 2018, Magdeburg, Germany
2017
RoboCup Portugal Open 2017, Coimbra, Portugal
RoboCup Iran Open 2017, Tehran, Iran
RoboCup German Open 2017, Magdeburg, Germany
RoboCup Russia Open 2017, Tomsk, Russia
RoboCup US Open 2017, Miami, United States
RoboCup China Open 2017, Shaoxing, China
2016
RoboCup Portugal Open 2016, Bragança, Portugal
RoboCup China Open 2016, Hefei, China
RoboCup European Open 2016, Eindhoven, Netherlands
2015
RoboCup Portugal Open 2015, Vila Real, Portugal
RoboCup China Open 2015, Guiyang, China
RoboCup Iran Open 2015, Tehran, Iran
GermanOpen 2015, Magdeburg, Germany
2014
RoboCup Portugal Open 2014, Espinho, Portugal
RoboCup China Open 2014, Hefei, China
RoboCup Iran Open 2014, Tehran, Iran
RoboCup German Open, Magdeburg, Germany
2013
RoboCup Portugal Open 2013, Lisbon, Portugal
RoboCup Iran Open 2013, Tehran, Iran
RoboCup German Open, Magdeburg, Germany
2012
RoboCup Portugal Open 2012, Guimarães, Portugal
RoboCup Dutch Open, Eindhoven, Netherlands
RoboCup German Open, Magdeburg, Germany
RoboCup Iran Open, Tehran, Iran
RoboCup SSL North American Open, Vancouver, British Columbia, Canada
2011
RoboCup German Open, Magdeburg, Germany
RoboCup Portugal Open, Lisboa, Portugal
RoboCup Iran Open 2011, Tehran, Iran
2010
RoboCup Portugal Open, Leiria, Portugal
Iran Open 2010, Tehran, Iran
Latin America & Brazil Open 2010, São Bernardo do Campo, Brazil
RoboCup Mediterranean Open 2010, Rome, Italy
RoboCup German Open (unofficial all-European tournament), Magdeburg, Germany
AUT Cup 2010, Tehran, Iran
See also
Robot
Botball
FIRST
BEST Robotics
RobotCub Consortium, a humanoid robot project to study cognition via robotics
References
External links
RoboCup@Home league, aims to develop service and assistive robot technology with high relevance for future personal domestic applications.
Engineering competitions
Dance animation
Recurring events established in 1997
Articles containing video clips
Robotics competitions | RoboCup | Technology | 1,191 |
29,053,548 | https://en.wikipedia.org/wiki/Comparison%20of%20optimization%20software | Given a system transforming a set of inputs to output values, described by a mathematical function f, optimization refers to the generation and selection of the best solution from some set of available alternatives, by systematically choosing input values from within an allowed set, computing the value of the function, and recording the best value found during the process. Many real-world and theoretical problems may be modeled in this general framework. For example, the inputs can be design parameters of a motor while the output can be the power consumption. Other inputs can be business choices with the output being obtained profit. or describing the configuration of a physical system with the output being its energy.
An optimization problem can be represented in the following way
Given: a function f : A → R from some set A to the real numbers
Search for: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization").
Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. Maximization can be reduced to minimization by multiplying the function by minus one.
The use of optimization software requires that the function f is defined in a suitable programming language and linked to the optimization software. The optimization software will deliver input values in A, the software module realizing f will deliver the computed value f(x). In this manner, a clear separation of concerns is obtained: different optimization software modules can be easily tested on the same function f, or a given optimization software can be used for different functions f.
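As a minimal sketch of this separation of concerns (assuming SciPy is available; the quadratic objective is an arbitrary stand-in for a real model of the system):

```python
# The objective f is defined independently of the optimizer; the optimizer
# only queries f with candidate inputs from A and tracks the best value.
from scipy.optimize import minimize

def f(x):
    # Stand-in system model: a quadratic with its minimum at (1, -2).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

result = minimize(f, x0=[0.0, 0.0])  # any optimizer accepting f can be swapped in
print(result.x)                      # approximately [1.0, -2.0]
```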
The following tables provide a comparison of notable optimization software libraries, either specialized or general purpose libraries with significant optimization coverage.
See also
List of optimization software
References
External links
OR/MS Today: 2013 Linear Programming Software Survey
OR/MS Today: 1998 Nonlinear Programming Software Survey
Software comparisons | Comparison of optimization software | Mathematics,Technology | 388 |
11,141,813 | https://en.wikipedia.org/wiki/Peter%20L.%20Hagelstein | Peter L. Hagelstein is an associate professor of electrical engineering at the Massachusetts Institute of Technology (MIT), affiliated with the Research Laboratory of Electronics (RLE).
Hagelstein received a B.S. and M.S. in 1976 and Ph.D. in electrical engineering in 1981, from MIT.
Hagelstein began his career at the Lawrence Livermore National Laboratory, working on high-energy laser and plasma physics from 1981 to 1985. While working in the Lawrence Livermore National Laboratory, he pioneered the work that later produced the first X-ray laser, which would later become important for the US Strategic Defense Initiative, popularly referred to as the "Star Wars" program. His work on X-ray lasers was honored with the Ernest Orlando Lawrence Award in 1984. Following this time, he took up an academic appointment at MIT in 1986.
In 1989, he started investigating cold fusion (also called low-energy nuclear reactions) with the hope of making a breakthrough similar to the X-ray laser. In the period between 1989 and 2004, the field became discredited in the eyes of many scientists. Hagelstein continued his research activity in the field, chairing the Tenth International Conference on Cold Fusion in 2003. On November 14, 2017, he gave a 90-minute presentation reviewing relevant experiments and describing possible mechanisms.
Following the cold fusion episode, his primary research has shifted to solid-state physics, including the development of new thermoelectric materials. In addition, he is active in education, writing a textbook on quantum and statistical mechanics.
References
Bibliography
Hagelstein's profile at MIT
Living people
MIT School of Engineering faculty
Cold fusion
American electrical engineers
Year of birth missing (living people)
MIT School of Engineering alumni | Peter L. Hagelstein | Physics,Chemistry | 353 |
55,928,968 | https://en.wikipedia.org/wiki/List%20of%20Apple%20TV%2B%20original%20programming | Beginning in 2016, Apple Inc. began to produce and distribute its own original content. The first television show produced by Apple was Planet of the Apps, a reality competition series. Their second, released in late 2017, was Carpool Karaoke: The Series based on the popular recurring segment from The Late Late Show with James Corden. Apple also released a short film, Peanuts in Space: Secrets of Apollo 10 in May 2019 prior to the release of Apple TV+.
In June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head their newly formed worldwide video unit. By November, Apple confirmed that it was branching out into original scripted programming when announcing straight-to-series orders for two television shows: a reboot of the anthology series Amazing Stories by Steven Spielberg, and The Morning Show, a drama series starring Jennifer Aniston and Reese Witherspoon.
In 2017, Apple was reportedly planning on spending around $1 billion on original programming over the next year. Later that year, another report projected that they would spend $4.2 billion on original programming by 2022. In August 2019, it was reported that Apple had already spent over $6 billion on original programming.
On March 25, 2019, Apple announced their streaming service as Apple TV+, along with the announcement of Apple's slate of original programming. The service launched on November 1, 2019, in over 100 countries through the Apple TV app.
Original programming
Drama
Comedy
Kids & family
Animation
Adult animation
Kids & family
Non-English language scripted
Unscripted
Docuseries
Reality
Sports programming
Variety
Co-productions
These shows have been commissioned by Apple TV+ with a partner network.
Continuations
Specials
Upcoming original programming
Drama
Comedy
Animation
Kids & family
Non-English language scripted
Unscripted
Docuseries
In development
Notes
References
External links
– official site
Apple TV+ | List of Apple TV+ original programming | Technology | 381 |
1,377,775 | https://en.wikipedia.org/wiki/Prompt%20corner | In a theatre, the prompt corner or prompt box is the place where the prompter—usually the stage manager in the US or deputy stage manager in the UK—stands in order to coordinate the performance and to remind performers of their lines when required. It is traditionally located at stage left.
Location
Historically, the prompt corner was situated at stage left. Prompt side (abbreviated to PS) and opposite prompt (abbreviated to OP, sometimes called off prompt) are widely used terms for stage left and stage right respectively. However, some theatres choose to install the prompt corner in a discreet area of the auditorium. Theatres which locate their prompt corner on stage right would inform cast and crew that they are operating on a bastard prompt system.
In opera houses, the prompt box is traditionally located downstage centre; see prompter (opera).
Prompt desk
The prompt corner is usually equipped with a prompt desk to facilitate the coordination of a performance. This can vary from a small table in the wings to an elaborate installation in a dedicated booth, being equipped with all the necessary aids for the specific production and venue. The prompt desk minimally holds a carefully annotated copy of the performance script, with blocking and other stage directions and, in professional theatres:
A communications intercom headset, or 'cans', to talk to the rest of the technical team during a show;
Red and green cue lights. (In some theatres a computerised cue light system is used);
Telephones to front of house areas;
A public address system so that the stage manager and deputy stage manager (normally the person calling the show) can make announcements, or give calls, to the foyer ('front of house'), auditorium ('house'), dressing rooms or other 'back of house' areas in the theatre;
A silent fire alarm indicator, such as a strobe light; and,
Controls for the safety curtain and other emergency measures.
References
Stage terminology
Parts of a theatre | Prompt corner | Technology | 394 |
42,557,085 | https://en.wikipedia.org/wiki/Quasi-commutative%20property | In mathematics, the quasi-commutative property is an extension or generalization of the general commutative property. This property is used in specific applications with various definitions.
Applied to matrices
Two matrices $x$ and $y$ are said to have the commutative property whenever

$$xy = yx.$$

The quasi-commutative property in matrices is defined as follows. Given two non-commutable matrices $x$ and $y$, they satisfy the quasi-commutative property whenever $z = xy - yx$ satisfies the following properties:

$$xz = zx, \qquad yz = zy.$$
An example is found in the matrix mechanics introduced by Heisenberg as a version of quantum mechanics. In this mechanics, p and q are infinite matrices corresponding respectively to the momentum and position variables of a particle. These matrices are written out at Matrix mechanics#Harmonic oscillator, and z = iħ times the infinite unit matrix, where ħ is the reduced Planck constant.
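A finite numerical illustration of this example is sketched below, using truncated harmonic-oscillator matrices with ħ set to 1 for simplicity; the truncation size is an assumption, and the exact relations hold only away from the truncation corner (they are exact in the infinite-dimensional theory).

```python
# Truncated Heisenberg matrices for the harmonic oscillator (hbar = 1).
# In the exact infinite-dimensional theory z = qp - pq = i*hbar*I, which
# trivially commutes with q and p; truncation spoils this only at the corner.
import numpy as np

n = 8
a = np.diag(np.sqrt(np.arange(1, n)), k=1)  # annihilation operator
q = (a + a.T) / np.sqrt(2)                   # position matrix
p = 1j * (a.T - a) / np.sqrt(2)              # momentum matrix

z = q @ p - p @ q
print(np.round(z[:4, :4], 6))  # approximately i * identity away from the corner
# z commutes with q and p on the interior block (xz = zx, yz = zy):
print(np.allclose((q @ z - z @ q)[: n - 2, : n - 2], 0))
print(np.allclose((p @ z - z @ p)[: n - 2, : n - 2], 0))
```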
Applied to functions
A function $f : X \times Y \to X$ is said to be quasi-commutative if

$$f(f(x, y_1), y_2) = f(f(x, y_2), y_1) \quad \text{for all } x \in X,\ y_1, y_2 \in Y.$$

If $f(x, y)$ is instead denoted by $x \ast y$ then this can be rewritten as:

$$(x \ast y_1) \ast y_2 = (x \ast y_2) \ast y_1 \quad \text{for all } x \in X,\ y_1, y_2 \in Y.$$
See also
References
Mathematical relations
Properties of binary operations | Quasi-commutative property | Mathematics | 200 |
15,956,651 | https://en.wikipedia.org/wiki/SN%202004gt | SN 2004gt was a type Ic supernova that occurred in the interacting galaxy NGC 4038 on December 12, 2004. The event took place in a region of condensed matter in the western spiral arm. The progenitor was not identified in older images of the galaxy, and is either a type WC Wolf–Rayet star with a mass over 40 times that of the Sun, or a star 20 to 40 times as massive as the Sun in a binary star system.
References
External links
Light curves and spectra on the Open Supernova Catalog
Simbad
Supernova remnants
Supernovae
Corvus (constellation)
20041212 | SN 2004gt | Chemistry,Astronomy | 126 |
24,385,422 | https://en.wikipedia.org/wiki/C10H11NO2 | {{DISPLAYTITLE:C10H11NO2}}
The molecular formula C10H11NO2 (molar mass: 177.20 g/mol, exact mass: 177.0790 u) may refer to:
Acetoacetanilide
MDAI (5,6-methylenedioxy-2-aminoindane)
TDIQ | C10H11NO2 | Chemistry | 76 |
750,772 | https://en.wikipedia.org/wiki/Cooling%20tower | A cooling tower is a device that rejects waste heat to the atmosphere through the cooling of a coolant stream, usually a water stream, to a lower temperature. Cooling towers may either use the evaporation of water to remove heat and cool the working fluid to near the wet-bulb air temperature or, in the case of dry cooling towers, rely solely on air to cool the working fluid to near the dry-bulb air temperature using radiators.
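As a rough illustration of the evaporative mechanism, the sketch below estimates the water lost to evaporation from a simple energy balance; the heat load is an assumed figure, and the balance ignores the smaller sensible-heat contribution.

```python
# Rough energy-balance estimate: in an evaporative tower most of the
# rejected heat is carried away as latent heat of the evaporated water.
heat_rejected_mw = 50.0         # assumed heat load rejected to the air [MW]
latent_heat_kj_per_kg = 2260.0  # approximate latent heat of vaporisation of water

evaporation_kg_per_s = heat_rejected_mw * 1e3 / latent_heat_kj_per_kg
print(f"evaporation ~ {evaporation_kg_per_s:.1f} kg/s")  # ~22 kg/s at 50 MW
```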
Common applications include cooling the circulating water used in oil refineries, petrochemical and other chemical plants, thermal power stations, nuclear power stations and HVAC systems for cooling buildings. The classification is based on the type of air induction into the tower: the main types of cooling towers are natural draft and induced draft cooling towers.
Cooling towers vary in size from small roof-top units to very large hyperboloid or rectangular structures. Hyperboloid cooling towers are often associated with nuclear power plants, although they are also used in many coal-fired plants and to some extent in some large chemical and other industrial plants. The steam turbine is what necessitates the cooling tower. Although these large towers are very prominent, the vast majority of cooling towers are much smaller, including many units installed on or near buildings to discharge heat from air conditioning. Cooling towers are also often thought by the general public and environmental activists to emit smoke or harmful fumes, when in reality their emissions mostly do not contribute to the carbon footprint, consisting solely of water vapor.
History
Cooling towers originated in the 19th century through the development of condensers for use with the steam engine. Condensers use relatively cool water, via various means, to condense the steam coming out of the cylinders or turbines. This reduces the back pressure, which in turn reduces the steam consumption, and thus the fuel consumption, while at the same time increasing power and recycling boiler water. However, the condensers require an ample supply of cooling water, without which they are impractical. While water usage is not an issue with marine engines, it forms a significant limitation for many land-based systems.
By the turn of the 20th century, several evaporative methods of recycling cooling water were in use in areas lacking an established water supply, as well as in urban locations where municipal water mains may not be of sufficient supply, reliable in times of high demand, or otherwise adequate to meet cooling needs. In areas with available land, the systems took the form of cooling ponds; in areas with limited land, such as in cities, they took the form of cooling towers.
These early towers were positioned either on the rooftops of buildings or as free-standing structures, supplied with air by fans or relying on natural airflow. An American engineering textbook from 1911 described one design as "a circular or rectangular shell of light plate—in effect, a chimney stack much shortened vertically (20 to 40 ft. high) and very much enlarged laterally. At the top is a set of distributing troughs, to which the water from the condenser must be pumped; from these it trickles down over "mats" made of wooden slats or woven wire screens, which fill the space within the tower".
A hyperboloid cooling tower was patented by the Dutch engineers Frederik van Iterson and Gerard Kuypers in the Netherlands on August 16, 1916. The first hyperboloid reinforced concrete cooling towers were built by the Dutch State Mine (DSM) Emma in 1918 in Heerlen. The first ones in the United Kingdom were built in 1924 at Lister Drive power station in Liverpool, England. At both locations they were built to cool water used at a coal-fired electrical power station.
According to a Gas Technology Institute (GTI) report, the indirect–dew-point evaporative-cooling Maisotsenko Cycle (M-Cycle) is a theoretically sound method of reducing a working fluid to the ambient fluid’s dew point, which is lower than the ambient fluid’s wet-bulb temperature. The M-cycle utilizes the psychrometric energy (or the potential energy) available from the latent heat of water evaporating into the air. While its current manifestation is as the M-Cycle HMX for air conditioning, through engineering design this cycle could be applied as a heat- and moisture-recovery device for combustion devices, cooling towers, condensers, and other processes involving humid gas streams.
The consumption of cooling water by inland processing and power plants is estimated to reduce power availability for the majority of thermal power plants by 2040–2069.
In 2021, researchers presented a method for steam recapture. The steam is charged using an ion beam, and then captured in a wire mesh of opposite charge. The water's purity exceeded EPA potability standards.
Classification by use
Heating, ventilation and air conditioning (HVAC)
An HVAC (heating, ventilating, and air conditioning) cooling tower is used to dispose of ("reject") unwanted heat from a chiller. Liquid-cooled chillers are normally more energy efficient than air-cooled chillers due to heat rejection to tower water at or near wet-bulb temperatures. Air-cooled chillers must reject heat at the higher dry-bulb temperature, and thus have a lower average reverse–Carnot-cycle effectiveness. In hot climates, large office buildings, hospitals, and schools typically use cooling towers in their air conditioning systems. Generally, industrial cooling towers are much larger than HVAC towers.
HVAC use of a cooling tower pairs the cooling tower with a liquid-cooled chiller or liquid-cooled condenser. A ton of air-conditioning is defined as the removal of 12,000 BTU per hour (3.5 kW). The equivalent ton on the cooling tower side actually rejects about 15,000 BTU per hour due to the additional waste-heat equivalent of the energy needed to drive the chiller's compressor. This equivalent ton is defined as the heat rejection in cooling 3 U.S. gallons per minute (about 1,500 pounds per hour) of water by 10 °F, which amounts to 15,000 BTU per hour, assuming a chiller coefficient of performance (COP) of 4.0. This COP is equivalent to an energy efficiency ratio (EER) of 14.
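A minimal sketch of the arithmetic in this paragraph (the 12,000 BTU/h ton and COP of 4.0 are the figures above; the function name is illustrative): the tower must reject the cooling load plus the compressor work, i.e. load × (1 + 1/COP).

```python
# Heat rejected by the tower = building cooling load plus compressor work.
def tower_heat_rejection_btu_h(cooling_load_btu_h: float, cop: float) -> float:
    return cooling_load_btu_h * (1.0 + 1.0 / cop)

ton_btu_h = 12_000   # one ton of air-conditioning, BTU/h
cop = 4.0            # assumed chiller coefficient of performance

print(tower_heat_rejection_btu_h(ton_btu_h, cop))  # 15000.0 BTU/h
print(cop * 3.412)   # EER = COP * 3.412 ~ 13.6, i.e. roughly 14
```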
Cooling towers are also used in HVAC systems that have multiple water source heat pumps that share a common piping water loop. In this type of system, the water circulating inside the water loop removes heat from the condenser of the heat pumps whenever the heat pumps are working in the cooling mode, then the externally mounted cooling tower is used to remove heat from the water loop and reject it to the atmosphere. By contrast, when the heat pumps are working in heating mode, the condensers draw heat out of the loop water and reject it into the space to be heated. When the water loop is being used primarily to supply heat to the building, the cooling tower is normally shut down (and may be drained or winterized to prevent freeze damage), and heat is supplied by other means, usually from separate boilers.
Industrial cooling towers
Industrial cooling towers can be used to remove heat from various sources such as machinery or heated process material. The primary use of large, industrial cooling towers is to remove the heat absorbed in the circulating cooling water systems used in power plants, petroleum refineries, petrochemical plants, natural gas processing plants, food processing plants, semi-conductor plants, and for other industrial facilities such as in condensers of distillation columns, for cooling liquid in crystallization, etc. The circulation rate of cooling water in a typical 700 MWth coal-fired power plant with a cooling tower amounts to about 71,600 cubic metres an hour (315,000 US gallons per minute) and the circulating water requires a supply water make-up rate of perhaps 5 percent (i.e., 3,600 cubic metres an hour, equivalent to one cubic metre every second).
If that same plant had no cooling tower and used once-through cooling water, it would require about 100,000 cubic metres an hour. A large cooling water intake typically kills millions of fish and larvae annually, as the organisms are impinged on the intake screens. A large amount of water would have to be continuously returned to the ocean, lake or river from which it was obtained and continuously re-supplied to the plant. Furthermore, discharging large amounts of hot water may raise the temperature of the receiving river or lake to an unacceptable level for the local ecosystem. Elevated water temperatures can kill fish and other aquatic organisms (see thermal pollution), or can also cause an increase in undesirable organisms such as invasive species of zebra mussels or algae.
A cooling tower serves to dissipate the heat into the atmosphere instead, so that wind and air diffusion spreads the heat over a much larger area than hot water can distribute heat in a body of water. Evaporative cooling water cannot be used for subsequent purposes (other than rain somewhere), whereas surface-only cooling water can be re-used.
Some coal-fired and nuclear power plants located in coastal areas do make use of once-through ocean water. But even there, the offshore discharge water outlet requires very careful design to avoid environmental problems.
Petroleum refineries may also have very large cooling tower systems. A typical large refinery processing 40,000 metric tonnes of crude oil per day (roughly 300,000 barrels per day) circulates about 80,000 cubic metres of water per hour through its cooling tower system.
The world's tallest cooling tower serves the Pingshan II Power Station in Huaibei, Anhui Province, China.
Classification by build
Package type
These types of cooling towers are factory preassembled, and can be simply transported on trucks, as they are compact machines. The capacity of package type towers is limited and, for that reason, they are usually preferred by facilities with low heat rejection requirements such as food processing plants, textile plants, some chemical processing plants, or buildings like hospitals, hotels, malls, automotive factories, etc.
Due to their frequent use in or near residential areas, sound level control is a relatively more important issue for package type cooling towers.
Field-erected type
Facilities such as power plants, steel processing plants, petroleum refineries, or petrochemical plants usually install field-erected type cooling towers due to their greater capacity for heat rejection. Field-erected towers are usually much larger in size compared to the package type cooling towers.
A typical field-erected cooling tower has a pultruded fiber-reinforced plastic (FRP) structure, FRP cladding, a mechanical unit for air draft, and a drift eliminator.
Heat transfer methods
With respect to the heat transfer mechanism employed, the main types are:
Wet cooling towers, open-circuit cooling towers, or evaporative cooling towers operate on the principle of evaporative cooling. The working coolant (usually water) is the evaporated fluid, and is exposed to the elements.
Closed circuit cooling towers (also called fluid coolers) pass the working coolant through a large heat exchanger, usually a radiator, upon which clean water is sprayed and a fan-induced draft applied. The resulting heat transfer performance is close to that of a wet cooling tower, with the advantage of protecting the working fluid from environmental exposure and contamination.
Adiabatic cooling towers spray water into the incoming air or onto a cardboard pad to cool the air before it passes over an air-cooled heat exchanger. Adiabatic cooling towers use less water than other cooling towers but do not cool the fluid as close to the wet bulb temperature. Most adiabatic cooling towers are also hybrid cooling towers.
Dry cooling towers (or dry coolers) are closed circuit cooling towers which operate by heat transfer through a heat exchanger that separates the working coolant from ambient air, such as in a radiator, utilizing convective heat transfer. They do not use evaporation and are air-cooled heat exchangers.
Hybrid cooling towers or wet-dry cooling towers are closed circuit cooling towers that can switch between wet or adiabatic and dry operation. This helps balance water and energy savings across a variety of weather conditions. Some hybrid cooling towers can switch between dry, wet, and adiabatic modes. Thermal efficiencies up to 92% have been observed in hybrid cooling towers.
In a wet cooling tower (or open circuit cooling tower), the warm water can be cooled to a temperature lower than the ambient air dry-bulb temperature, if the air is relatively dry (see dew point and psychrometrics). As ambient air is drawn past a flow of water, a small portion of the water evaporates, and the energy required to evaporate that portion of the water is taken from the remaining mass of water, thus reducing its temperature. Approximately 970 BTU of heat energy is absorbed for each pound of evaporated water (about 2,260 kJ per kilogram). Evaporation results in saturated air conditions, lowering the temperature of the water processed by the tower to a value close to wet-bulb temperature, which is lower than the ambient dry-bulb temperature, the difference determined by the initial humidity of the ambient air.
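A minimal sketch of this energy balance, assuming a latent heat of vaporization of about 2,260 kJ/kg and a specific heat of liquid water of about 4.184 kJ/(kg·°C), relates the fraction of water evaporated to the temperature drop of the remaining water:

```python
# The latent heat of the evaporated fraction is drawn from the remaining
# water mass, cooling it by roughly fraction * Hv / cp.
LATENT_HEAT_KJ_KG = 2260.0   # approximate latent heat of vaporization
CP_WATER_KJ_KG_C = 4.184     # specific heat of liquid water

def temperature_drop_c(fraction_evaporated: float) -> float:
    return fraction_evaporated * LATENT_HEAT_KJ_KG / CP_WATER_KJ_KG_C

print(temperature_drop_c(0.01))  # ~5.4 degC of cooling per 1% evaporated
```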
To achieve better performance (more cooling), a medium called fill is used to increase the surface area and the time of contact between the air and water flows. Splash fill consists of material placed to interrupt the water flow causing splashing. Film fill is composed of thin sheets of material (usually PVC) upon which the water flows. Both methods create increased surface area and time of contact between the fluid (water) and the gas (air), to improve heat transfer.
Air flow generation methods
With respect to drawing air through the tower, there are three types of cooling towers:
Natural draft — Utilizes buoyancy via a tall chimney. Warm, moist air naturally rises due to the density differential compared to the dry, cooler outside air. Warm moist air is less dense than drier air at the same pressure. This moist air buoyancy produces an upwards current of air through the tower.
Mechanical draft — Uses power-driven fan motors to force or draw air through the tower.
Induced draft — A mechanical draft tower with a fan at the discharge (at the top) which pulls air up through the tower. The fan induces hot moist air out the discharge. This produces low entering and high exiting air velocities, reducing the possibility of recirculation in which discharged air flows back into the air intake. This fan/fin arrangement is also known as draw-through.
Forced draft — A mechanical draft tower with a blower type fan at the intake. The fan forces air into the tower, creating high entering and low exiting air velocities. The low exiting velocity is much more susceptible to recirculation. With the fan on the air intake, the fan is more susceptible to complications due to freezing conditions. Another disadvantage is that a forced draft design typically requires more motor horsepower than an equivalent induced draft design. The benefit of the forced draft design is its ability to work with high static pressure. Such setups can be installed in more-confined spaces and even in some indoor situations. This fan/fin geometry is also known as blow-through.
Fan assisted natural draft — A hybrid type that appears like a natural draft setup, though airflow is assisted by a fan.
Hyperboloid cooling tower
On 16 August 1916, Frederik van Iterson took out the UK patent (108,863) for Improved Construction of Cooling Towers of Reinforced Concrete. The patent was filed on 9 August 1917, and published on 11 April 1918. In 1918, DSM built the first hyperboloid natural-draft cooling tower at the Staatsmijn Emma, to his design.
Hyperboloid (sometimes incorrectly known as hyperbolic) cooling towers have become the design standard for all natural-draft cooling towers because of their structural strength and minimum usage of material. The hyperboloid shape also aids in accelerating the upward convective air flow, improving cooling efficiency. These designs are popularly associated with nuclear power plants. However, this association is misleading, as the same kind of cooling towers are often used at large coal-fired power plants and some geothermal plants as well. The steam turbine is what necessitates the cooling tower. Conversely, not all nuclear power plants have cooling towers, and some instead cool their working fluid with lake, river or ocean water.
Categorization by air-to-water flow
Crossflow
Typically lower initial and long-term cost, mostly due to pump requirements.
Crossflow is a design in which the airflow is directed perpendicular to the water flow (see diagram at left). Airflow enters one or more vertical faces of the cooling tower to meet the fill material. Water flows (perpendicular to the air) through the fill by gravity. The air continues through the fill and thus past the water flow into an open plenum volume. Lastly, a fan forces the air out into the atmosphere.
A distribution or hot water basin consisting of a deep pan with holes or nozzles in its bottom is located near the top of a crossflow tower. Gravity distributes the water through the nozzles uniformly across the fill material.
Advantages of the crossflow design:
Gravity water distribution allows smaller pumps and easier maintenance while in use.
Non-pressurized spray simplifies variable flow.
Disadvantages of the crossflow design:
More prone to freezing than counterflow designs.
Variable flow is useless in some conditions.
More prone to dirt buildup in the fill than counterflow designs, especially in dusty or sandy areas.
Counterflow
In a counterflow design, the air flow is directly opposite to the water flow (see diagram at left). Air flow first enters an open area beneath the fill media, and is then drawn up vertically. The water is sprayed through pressurized nozzles near the top of the tower, and then flows downward through the fill, opposite to the air flow.
Advantages of the counterflow design:
Spray water distribution makes the tower more freeze-resistant.
Breakup of water in spray makes heat transfer more efficient.
Disadvantages of the counterflow design:
Typically higher initial and long-term cost, primarily due to pump requirements.
Difficult to use variable water flow, as spray characteristics may be negatively affected.
Typically noisier, due to the greater water fall height from the bottom of the fill into the cold water basin
Common aspects
Common aspects of both designs:
The interactions of the air and water flow allow a partial equalization of temperature, and evaporation of water.
The air, now saturated with water vapor, is discharged from the top of the cooling tower.
A "collection basin" or "cold water basin" is used to collect and contain the cooled water after its interaction with the air flow.
Both crossflow and counterflow designs can be used in natural draft and in mechanical draft cooling towers.
Wet cooling tower material balance
Quantitatively, the material balance around a wet, evaporative cooling tower system is governed by the operational variables of make-up volumetric flow rate, evaporation and windage losses, draw-off rate, and the concentration cycles.
In the adjacent diagram, water pumped from the tower basin is the cooling water routed through the process coolers and condensers in an industrial facility. The cool water absorbs heat from the hot process streams which need to be cooled or condensed, and the absorbed heat warms the circulating water (C). The warm water returns to the top of the cooling tower and trickles downward over the fill material inside the tower. As it trickles down, it contacts ambient air rising up through the tower either by natural draft or by forced draft using large fans in the tower. That contact causes a small amount of the water to be lost as windage or drift (W) and some of the water (E) to evaporate. The heat required to evaporate the water is derived from the water itself, which cools the water back to the original basin water temperature and the water is then ready to recirculate. The evaporated water leaves its dissolved salts behind in the bulk of the water which has not been evaporated, thus raising the salt concentration in the circulating cooling water. To prevent the salt concentration of the water from becoming too high, a portion of the water is drawn off or blown down (D) for disposal. Fresh water make-up (M) is supplied to the tower basin to compensate for the loss of evaporated water, the windage loss water and the draw-off water.
Using these flow rates and concentration dimensional units:
M = make-up water flow rate (m³/h)
C = circulating water flow rate (m³/h)
D = draw-off (blow-down) water flow rate (m³/h)
E = evaporated water flow rate (m³/h)
W = windage (drift) water loss rate (m³/h)
XM = concentration of chlorides in the make-up water (ppmw)
XC = concentration of chlorides in the circulating water (ppmw)
A water balance around the entire system is then: M = E + D + W
Since the evaporated water (E) has no salts, a chloride balance around the system is: M·XM = D·XC + W·XC = XC·(D + W)
and, therefore, the cycles of concentration are: XC/XM = M/(D + W)
From a simplified heat balance around the cooling tower: E = C·ΔT·cp/HV, where HV is the latent heat of vaporization of water (about 2,260 kJ/kg), ΔT is the water temperature difference from tower top to tower bottom (°C), and cp is the specific heat of water (about 4.184 kJ/(kg·°C)).
Windage (or drift) loss (W) is the amount of total tower water flow that is entrained in the flow of air to the atmosphere. For large-scale industrial cooling towers, in the absence of manufacturer's data, it may be assumed to be:
W = 0.3 to 1.0 percent of C for a natural draft cooling tower without windage drift eliminators
W = 0.1 to 0.3 percent of C for an induced draft cooling tower without windage drift eliminators
W = about 0.005 percent of C (or less) if the cooling tower has windage drift eliminators
W = about 0.0005 percent of C (or less) if the cooling tower has windage drift eliminators and uses sea water as make-up water.
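A minimal computational sketch of the balances above (variable names follow the definitions given; the default windage fraction of 0.2% is an assumed value within the ranges just listed):

```python
# Evaporative cooling tower material balance:
#   water balance  M = E + D + W
#   cycles         = XC/XM = M / (D + W)  =>  D + W = E / (cycles - 1)
#   heat balance   E = C * dT * cp / Hv
CP = 4.184     # kJ/(kg.C), specific heat of water
HV = 2260.0    # kJ/kg, latent heat of vaporization

def material_balance(C_m3_h: float, dT_c: float, cycles: float,
                     windage_fraction: float = 0.002):
    E = C_m3_h * dT_c * CP / HV      # evaporation loss
    W = windage_fraction * C_m3_h    # windage/drift loss
    D = E / (cycles - 1.0) - W       # required draw-off (blow-down)
    M = E + D + W                    # make-up water
    return E, W, D, M

# Example: 10,000 m3/h circulation, 10 degC range, 5 cycles of concentration.
print(material_balance(C_m3_h=10_000, dT_c=10.0, cycles=5.0))
```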
Cycles of concentration
Cycle of concentration represents the accumulation of dissolved minerals in the recirculating cooling water. Discharge of draw-off (or blowdown) is used principally to control the buildup of these minerals.
The chemistry of the make-up water, including the amount of dissolved minerals, can vary widely. Make-up waters low in dissolved minerals such as those from surface water supplies (lakes, rivers etc.) tend to be aggressive to metals (corrosive). Make-up waters from ground water supplies (such as wells) are usually higher in minerals, and tend to be scaling (deposit minerals). Increasing the amount of minerals present in the water by cycling can make water less aggressive to piping; however, excessive levels of minerals can cause scaling problems.
As the cycles of concentration increase, the water may not be able to hold the minerals in solution. When the solubility of these minerals has been exceeded, they can precipitate out as mineral solids and cause fouling and heat exchange problems in the cooling tower or the heat exchangers. The temperatures of the recirculating water, piping and heat exchange surfaces determine if and where minerals will precipitate from the recirculating water. Often a professional water treatment consultant will evaluate the make-up water and the operating conditions of the cooling tower and recommend an appropriate range for the cycles of concentration. The use of water treatment chemicals, pretreatment such as water softening, pH adjustment, and other techniques can affect the acceptable range of cycles of concentration.
Concentration cycles in the majority of cooling towers usually range from 3 to 7. In the United States, many water supplies use well water which has significant levels of dissolved solids. On the other hand, one of the largest water supplies, for New York City, has a surface rainwater source quite low in minerals; thus cooling towers in that city are often allowed to concentrate to 7 or more cycles of concentration.
Since higher cycles of concentration represent less make-up water, water conservation efforts may focus on increasing cycles of concentration. Highly treated recycled water may be an effective means of reducing cooling tower consumption of potable water, in regions where potable water is scarce.
Maintenance
Clean visible dirt & debris from the cold water basin and surfaces with any visible biofilm (i.e., slime).
Disinfectant and other chemical levels in cooling towers and hot tubs should be continuously maintained and regularly monitored.
Regular checks of water quality (specifically the aerobic bacteria levels) using dipslides should be taken as the presence of other organisms can support legionella by producing the organic nutrients that it needs to thrive.
Water treatment
Besides treating the circulating cooling water in large industrial cooling tower systems to minimize scaling and fouling, the water should be filtered to remove particulates, and also be dosed with biocides and algaecides to prevent growths that could interfere with the continuous flow of the water. Under certain conditions, a biofilm of micro-organisms such as bacteria, fungi and algae can grow very rapidly in the cooling water, and can reduce the heat transfer efficiency of the cooling tower. Biofilm can be reduced or prevented by using sodium chlorite or other chlorine based chemicals. A normal industrial practice is to use two biocides, such as oxidizing and non-oxidizing types to complement each other's strengths and weaknesses, and to ensure a broader spectrum of attack. In most cases, a continual low level oxidizing biocide is used, then alternating to a periodic shock dose of non-oxidizing biocides.
Algaecides and biocides
Algaecides, as their name suggests, are intended to kill algae and other related plant-like microbes in the water. Biocides reduce other living matter that remains, keeping the water in a cooling tower clean and its usage efficient. One of the most common biocides used for cooling water is bromine.
Scale inhibitors
Among the issues that cause the most damage and strain to a cooling tower's systems is scaling. When an unwanted material or contaminant in the water builds up in a certain area, it can create deposits that grow over time. This can cause issues ranging from the narrowing of pipes to total blockages and equipment failures.
Cooling tower water consumption comes from drift, bleed-off, and evaporation loss. The water that is immediately replenished into the cooling tower to compensate for these losses is called make-up water. The function of make-up water is to keep machinery and equipment running safely and stably.
Legionnaires' disease
Another very important reason for using biocides in cooling towers is to prevent the growth of Legionella, including species that cause legionellosis or Legionnaires' disease, most notably L. pneumophila, or Mycobacterium avium. The various Legionella species are the cause of Legionnaires' disease in humans and transmission is via exposure to aerosols—the inhalation of mist droplets containing the bacteria. Common sources of Legionella include cooling towers used in open recirculating evaporative cooling water systems, domestic hot water systems, fountains, and similar disseminators that tap into a public water supply. Natural sources include freshwater ponds and creeks.
French researchers found that Legionella bacteria travelled several kilometres through the air from a large contaminated cooling tower at a petrochemical plant in Pas-de-Calais, France. That outbreak killed 21 of the 86 people who had a laboratory-confirmed infection.
Drift (or windage) is the term for water droplets of the process flow allowed to escape in the cooling tower discharge. Drift eliminators are used in order to hold drift rates typically to 0.001–0.005% of the circulating flow rate. A typical drift eliminator provides multiple directional changes of airflow to prevent the escape of water droplets. A well-designed and well-fitted drift eliminator can greatly reduce water loss and potential for Legionella or water treatment chemical exposure. Also, about every six months, inspect the conditions of the drift eliminators making sure there are no gaps to allow the free flow of dirt.
The US Centers for Disease Control and Prevention (CDC) does not recommend that health-care facilities regularly test for the Legionella pneumophila bacteria. Scheduled microbiologic monitoring for Legionella remains controversial because its presence is not necessarily evidence of a potential for causing disease. The CDC recommends aggressive disinfection measures for cleaning and maintaining devices known to transmit Legionella, but does not recommend regularly-scheduled microbiologic assays for the bacteria. However, scheduled monitoring of potable water within a hospital might be considered in certain settings where persons are highly susceptible to illness and mortality from Legionella infection (e.g. hematopoietic stem cell transplantation units, or solid organ transplant units). Also, after an outbreak of legionellosis, health officials agree that monitoring is necessary to identify the source and to evaluate the efficacy of biocides or other prevention measures.
Studies have found Legionella in 40% to 60% of cooling towers.
Terminology
Windage or drift — Water droplets that are carried out of the cooling tower with the exhaust air. Drift droplets have the same concentration of impurities as the water entering the tower. The drift rate is typically reduced by employing baffle-like devices, called drift eliminators, through which the air must travel after leaving the fill and spray zones of the tower. Drift can also be reduced by using warmer entering cooling tower temperatures.
Blow-out — Water droplets blown out of the cooling tower by wind, generally at the air inlet openings. Water may also be lost, in the absence of wind, through splashing or misting. Devices such as wind screens, louvers, splash deflectors and water diverters are used to limit these losses.
Plume — The stream of saturated exhaust air leaving the cooling tower. The plume is visible when the water vapor it contains condenses in contact with cooler ambient air, like the saturated air in one's breath fogs on a cold day. Under certain conditions, a cooling tower plume may present fogging or icing hazards to its surroundings. Note that the water evaporated in the cooling process is "pure" water, in contrast to the very small percentage of drift droplets or water blown out of the air inlets.
Draw-off or blow-down — The portion of the circulating water flow that is removed (usually discharged to a drain) in order to maintain the amount of total dissolved solids (TDS) and other impurities at an acceptably low level. Higher TDS concentration in solution may result from greater cooling tower efficiency. However the higher the TDS concentration, the greater the risk of scale, biological growth, and corrosion. The amount of blow-down is primarily regulated by measuring the electrical conductivity of the circulating water. Biological growth, scaling, and corrosion can be prevented by chemicals (respectively, biocide, sulfuric acid, corrosion inhibitor). On the other hand, the only practical way to decrease the electrical conductivity is by increasing the amount of blow-down discharge and subsequently increasing the amount of clean make-up water.
Zero bleed for cooling towers, also called zero blow-down for cooling towers, is a process for significantly reducing the need for bleeding water with residual solids from the system by enabling the water to hold more solids in solution.
Make-up — The water that must be added to the circulating water system in order to compensate for water losses such as evaporation, drift loss, blow-out, blow-down, etc.
Noise — Sound energy emitted by a cooling tower and heard (recorded) at a given distance and direction. The sound is generated by the impact of falling water, by the movement of air by fans, the fan blades moving in the structure, vibration of the structure, and the motors, gearboxes or drive belts.
Approach — The approach is the difference in temperature between the cooled-water temperature and the entering-air wet bulb temperature (twb). Since cooling towers are based on the principles of evaporative cooling, the maximum cooling tower efficiency depends on the wet bulb temperature of the air. The wet-bulb temperature is a type of temperature measurement that reflects the physical properties of a system with a mixture of a gas and a vapor, usually air and water vapor.
Range — The range is the temperature difference between the warm water inlet and the cooled water exit.
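A small sketch computing range and approach as defined above, together with a commonly used effectiveness ratio, range / (range + approach) (the ratio is a standard industry metric, not defined in this glossary; temperatures and the function name are illustrative):

```python
# Range = hot water in - cold water out; approach = cold water out - wet bulb.
def tower_metrics(hot_in_c: float, cold_out_c: float, wet_bulb_c: float):
    rng = hot_in_c - cold_out_c
    approach = cold_out_c - wet_bulb_c
    effectiveness = rng / (rng + approach)
    return rng, approach, effectiveness

print(tower_metrics(hot_in_c=40.0, cold_out_c=30.0, wet_bulb_c=25.0))
# (10.0, 5.0, 0.666...) -- range 10 degC, approach 5 degC
```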
Fill — Inside the tower, fills are added to increase contact surface as well as contact time between air and water, to provide better heat transfer. The efficiency of the tower depends on the selection and amount of fill. There are two types of fills that may be used:
Film type fill (causes water to spread into a thin film)
Splash type fill (breaks up falling stream of water and interrupts its vertical progress)
Full-flow filtration — Full-flow filtration continuously strains particulates out of the entire system flow. For example, in a 100-ton system, the flow rate would be roughly 300 gal/min. A filter would be selected to accommodate the entire 300 gal/min flow rate. In this case, the filter typically is installed after the cooling tower on the discharge side of the pump. While this is the ideal method of filtration, for higher flow systems it may be cost-prohibitive.
Side-stream filtration — Side-stream filtration, although popular and effective, does not provide complete protection. With side-stream filtration, a portion of the water is filtered continuously. This method works on the principle that continuous particle removal will keep the system clean. Manufacturers typically package side-stream filters on a skid, complete with a pump and controls. For high flow systems, this method is cost-effective. Properly sizing a side-stream filtration system is critical to obtain satisfactory filter performance, but there is some debate over how to properly size the side-stream system. Many engineers size the system to continuously filter the cooling tower basin water at a rate equivalent to 10% of the total circulation flow rate. For example, if the total flow of a system is 1,200 gal/min (a 400-ton system), a 120 gal/min side-stream system is specified.
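A sizing sketch combining the rules of thumb in these two entries (roughly 3 gal/min of circulating flow per ton, and a side-stream filter at about 10% of total flow; the function name is illustrative):

```python
# Full-flow rate from tonnage, plus the 10% side-stream sizing rule.
def filtration_flows_gpm(tons: float):
    total = 3.0 * tons          # ~3 gal/min per ton of cooling
    side_stream = 0.10 * total  # side-stream filter at 10% of total flow
    return total, side_stream

print(filtration_flows_gpm(400))  # (1200.0, 120.0), matching the example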
Cycle of concentration — Maximum allowed multiplier for the amount of miscellaneous substances in circulating water compared to the amount of those substances in make-up water.
Treated timber — A structural material for cooling towers which was largely abandoned in the early 2000s. It is still used occasionally due to its low initial costs, in spite of its short life expectancy. The life of treated timber varies a lot, depending on the operating conditions of the tower, such as frequency of shutdowns, treatment of the circulating water, etc. Under proper working conditions, the estimated life of treated timber structural members is about 10 years.
Leaching — The loss of wood preservative chemicals by the washing action of the water flowing through a wood structure cooling tower.
Pultruded FRP — A common structural material for smaller cooling towers, fibre-reinforced plastic (FRP) is known for its high corrosion-resistance capabilities. Pultruded FRP is produced using pultrusion technology, and has become the most common structural material for small cooling towers. It offers lower costs and requires less maintenance compared to reinforced concrete, which is still in use for large structures.
Fog production
Under certain ambient conditions, plumes of water vapor can be seen rising out of the discharge from a cooling tower, and can be mistaken as smoke from a fire. If the outdoor air is at or near saturation, and the tower adds more water to the air, saturated air with liquid water droplets can be discharged, which is seen as fog. This phenomenon typically occurs on cool, humid days, but is rare in many climates. Fog and clouds associated with cooling towers can be described as homogenitus, as with other clouds of man-made origin, such as contrails and ship tracks.
This phenomenon can be prevented by decreasing the relative humidity of the saturated discharge air. For that purpose, in hybrid towers, the saturated discharge air is mixed with heated, low-relative-humidity air. Some air enters the tower above the drift eliminator level, passing through heat exchangers; its relative humidity drops further as it is heated while entering the tower. The discharged mixture then has a sufficiently low relative humidity that no visible fog forms.
Cloud formation
Issues related to applied meteorology of cooling towers, including the assessment of the impact of cooling towers on cloud enhancement were considered in a series of models and experiments. One of the results by Haman's group indicated significant dynamic influences of the condensation trails on the surrounding atmosphere, manifested in temperature and humidity disturbances. The mechanism of these influences seemed to be associated either with the airflow over the trail as an obstacle or with vertical waves generated by the trail, often at a considerable altitude above it.
Salt emission pollution
When wet cooling towers with seawater make-up are installed in various industries located in or near coastal areas, the drift of fine droplets emitted from the cooling towers contain nearly 6% sodium chloride which deposits on the nearby land areas. This deposition of sodium salts on the nearby agriculture and vegetative lands can convert them into sodic saline or sodic alkaline soils depending on the nature of the soil and enhance the sodicity of ground and surface water. The salt deposition problem from such cooling towers aggravates where pollution control standards are not imposed or not implemented to minimize the drift emissions from wet cooling towers using seawater make-up.
Respirable suspended particulate matter, of less than 10 micrometers (μm) in size, can be present in the drift from cooling towers. Larger particles above 10 μm in size are generally filtered out in the nose and throat via cilia and mucus but particulate matter smaller than 10 μm, referred to as PM10, can settle in the bronchi and lungs and cause health problems. Similarly, particles smaller than 2.5 μm, (PM2.5), tend to penetrate into the gas exchange regions of the lung, and very small particles (less than 100 nanometers) may pass through the lungs to affect other organs. Though the total particulate emissions from wet cooling towers with fresh water make-up is much less, they contain more PM10 and PM2.5 than the total emissions from wet cooling towers with sea water make-up. This is due to lesser salt content in fresh water drift (below 2,000 ppm) compared to the salt content of sea water drift (60,000 ppm).
Use as a flue-gas stack
At some modern power stations equipped with flue gas purification, such as the Großkrotzenburg Power Station and the Rostock Power Station, the cooling tower is also used as a flue-gas stack (industrial chimney), thus saving the cost of a separate chimney structure. At plants without flue gas purification, problems with corrosion may occur, due to reactions of raw flue gas with water to form acids.
Sometimes, natural draft cooling towers are constructed with structural steel in place of reinforced concrete (RCC): when the construction time of a concrete tower would exceed that of the rest of the plant, when the local soil is too weak to bear the heavy weight of RCC cooling towers, or when local cement prices are high enough that a structural steel tower is cheaper.
Operation in freezing weather
Some cooling towers (such as smaller building air conditioning systems) are shut down seasonally, drained, and winterized to prevent freeze damage.
During the winter, other sites continuously operate cooling towers with water leaving the tower. Basin heaters, tower draindown, and other freeze protection methods are often employed in cold climates. Operational cooling towers with malfunctions can freeze during very cold weather. Typically, freezing starts at the corners of a cooling tower with a reduced or absent heat load. Severe freezing conditions can create growing volumes of ice, resulting in increased structural loads which can cause structural damage or collapse.
To prevent freezing, the following procedures are used:
The use of water modulating by-pass systems is not recommended during freezing weather. In such situations, the control flexibility of variable-speed motors, two-speed motors, and/or multi-cell towers with two-speed motors should be considered a requirement.
Do not operate the tower unattended. Remote sensors and alarms may be installed to monitor tower conditions.
Do not operate the tower without a heat load. Basin heaters may be used to keep the water in the tower pan at an above-freezing temperature. Heat trace ("heating tape") is a resistive heating element that is installed along water pipes to prevent freezing in cold climates.
Maintain design water flow rate over the tower fill.
Manipulate or reduce airflow to maintain water temperature above freezing point.
Fire hazard
Cooling towers constructed in whole or in part of combustible materials can support internal fire propagation. Such fires can become very intense, due to the high surface-volume ratio of the towers, and fires can be further intensified by natural convection or fan-assisted draft. The resulting damage can be sufficiently severe to require the replacement of the entire cell or tower structure. For this reason, some codes and standards recommend that combustible cooling towers be provided with an automatic fire sprinkler system. Fires can propagate internally within the tower structure when the cell is not in operation (such as for maintenance or construction), and even while the tower is in operation, especially those of the induced-draft type, because of the existence of relatively dry areas within the towers.
Structural stability
Being very large structures, cooling towers are susceptible to wind damage, and several spectacular failures have occurred in the past. At Ferrybridge power station on 1 November 1965, the station was the site of a major structural failure, when three of the cooling towers collapsed owing to vibrations in high winds. Although the structures had been built to withstand higher wind speeds, the shape of the cooling towers caused westerly winds to be funneled into the towers themselves, creating a vortex. Three out of the original eight cooling towers were destroyed, and the remaining five were severely damaged. The towers were later rebuilt and all eight cooling towers were strengthened to tolerate adverse weather conditions. Building codes were changed to include improved structural support, and wind tunnel tests were introduced to check tower structures and configuration.
See also
List of tallest cooling towers
Alkali soils
Architectural engineering
Deep lake water cooling
Evaporative cooler
Evaporative cooling
Fossil fuel power plant
Heating, ventilating and air conditioning
Hyperboloid structure
Mechanical engineering
Nuclear power plant
Power station
Spray pond
Water cooling
Willow Island disaster
References
External links
What is a cooling tower? – Cooling Technology Institute
"Cooling Towers" – includes diagrams – Virtual Nuclear Tourist
Wet cooling tower guidance for particulate matter, Environment Canada.
Striking pictures of Europe's abandoned cooling towers by Reginald Van de Velde, Lonely Planet, 15 February 2017 (see also excerpt from radio interview, World Update, BBC, 21 November 2016)
Building engineering
Heating, ventilation, and air conditioning
Nuclear power plant components
Cooling technology | Cooling tower | Engineering | 8,735 |
2,110,162 | https://en.wikipedia.org/wiki/Web%20log%20analysis%20software | Web log analysis software (also called a web log analyzer) is a kind of web analytics software that parses a server log file from a web server, and based on the values contained in the log file, derives indicators about when, how, and by whom a web server is visited. Reports are usually generated immediately, but data extracted from the log files can alternatively be stored in a database, allowing various reports to be generated on demand.
Features supported by log analysis packages may include "hit filters", which use pattern matching to examine selected log data.
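As an illustration of this kind of parsing, here is a minimal sketch in Python (the field layout assumes the Apache/NGINX Common Log Format, and the function names are illustrative) that extracts fields from log lines and derives one simple indicator:

```python
import re
from collections import Counter

# Common Log Format: host ident authuser [time] "request" status size
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def status_counts(lines):
    counts = Counter()
    for line in lines:
        m = CLF.match(line)
        if m:                      # a "hit filter" pattern test could go here
            counts[m.group("status")] += 1
    return counts

sample = ['127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 2326']
print(status_counts(sample))       # Counter({'200': 1})
```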
Common indicators
Number of visits and number of unique visitors
Visit duration and last visits
Authenticated users, and last authenticated visits
Days of week and rush hours
Domains/countries of visitors' hosts
Hosts list
Number of page views
Most viewed, entry, and exit pages
File types
OS used
Browsers used
Robots used
HTTP referrer
Search engines, key phrases and keywords used to find the analyzed web site
HTTP errors
Some of the log analyzers also report on who is on the site, conversion tracking, visit time and page navigation.
See also
List of web analytics software | Web log analysis software | Technology | 227 |
44,189 | https://en.wikipedia.org/wiki/Reciprocal%20altruism | In evolutionary biology, reciprocal altruism is a behaviour whereby an organism acts in a manner that temporarily reduces its fitness while increasing another organism's fitness, with the expectation that the other organism will act in a similar manner at a later time.
The concept was initially developed by Robert Trivers to explain the evolution of cooperation as instances of mutually altruistic acts. The concept is close to the strategy of "tit for tat" used in game theory. In 1987, Trivers presented at a symposium on reciprocity, noting that he initially titled his article "The Evolution of Delayed Return Altruism," but reviewer W. D. Hamilton suggested renaming it "The Evolution of Reciprocal Altruism." While Trivers adopted the new title, he retained the original examples, causing confusion about reciprocal altruism for decades. Rothstein and Pierotti (1988) addressed this issue at the symposium, proposing new definitions that clarified the concepts. They argued that Delayed Return Altruism was a superior term and introduced "pseudo-reciprocity" to replace it.
Theory
The concept of "reciprocal altruism", as introduced by Trivers, suggests that altruism, defined as an act of helping another individual while incurring some cost for this act, could have evolved since it might be beneficial to incur this cost if there is a chance of being in a reverse situation where the individual who was helped before may perform an altruistic act towards the individual who helped them initially. This concept finds its roots in the work of W.D. Hamilton, who developed mathematical models for predicting the likelihood of an altruistic act to be performed on behalf of one's kin.
Putting this into the form of a strategy in a repeated prisoner's dilemma would mean to cooperate unconditionally in the first period and behave cooperatively (altruistically) as long as the other agent does as well. If chances of meeting another reciprocal altruist are high enough, or if the game is repeated for a long enough amount of time, this form of altruism can evolve within a population.
This is close to the notion of "tit for tat" introduced by Anatol Rapoport, although there still seems a slight distinction in that "tit for tat" cooperates in the first period and from thereon always replicates an opponent's previous action, whereas "reciprocal altruists" stop cooperation in the first instance of non-cooperation by an opponent and stay non-cooperative from thereon. This distinction leads to the fact that in contrast to reciprocal altruism, tit for tat may be able to restore cooperation under certain conditions despite cooperation having broken down.
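The distinction can be made concrete with a small simulation sketch (strategy names and the 10% noise rate are illustrative assumptions, not taken from the literature): tit-for-tat returns to cooperation after an opponent's isolated defection, while the "grim" reading of reciprocal altruism described above never does.

```python
import random

# Both strategies cooperate ("C") in round one. Tit-for-tat then copies the
# opponent's previous move; the grim reciprocal altruist defects ("D")
# forever after the opponent's first defection.
def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

def grim(opponent_history):
    return "D" if "D" in opponent_history else "C"

def noisy_cooperator(opponent_history):
    # Mostly cooperates, but defects 10% of the time (illustrative noise).
    return "D" if random.random() < 0.1 else "C"

def play(strategy_a, strategy_b, rounds=20):
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each sees the opponent's history
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a

random.seed(1)
for strategy in (tit_for_tat, grim):
    moves = play(strategy, noisy_cooperator)
    print(strategy.__name__, "cooperated in", moves.count("C"), "of 20 rounds")
```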
Christopher Stephens shows a set of necessary and jointly sufficient conditions "... for an instance of reciprocal altruism:
the behaviour must reduce a donor's fitness relative to a selfish alternative;
the fitness of the recipient must be elevated relative to non-recipients;
the performance of the behaviour must not depend on the receipt of an immediate benefit;
conditions 1, 2, and 3 must apply to both individuals engaging in reciprocal helping.
There are two additional conditions necessary "...for reciprocal altruism to evolve:"
A mechanism for detecting 'cheaters' must exist.
A large (indefinite) number of opportunities to exchange aid must exist.
The first two conditions are necessary for altruism as such, while the third is distinguishing reciprocal altruism from simple mutualism and the fourth makes the interaction reciprocal.
Condition number five is required as otherwise non-altruists may always exploit altruistic behaviour without any consequences and therefore evolution of reciprocal altruism would not be possible. However, it is pointed out that this "conditioning device" does not need to be conscious. Condition number six is required to avoid cooperation breakdown through forward induction—a possibility suggested by game theoretical models.
In 1987, Trivers told a symposium on reciprocity that he had originally submitted his article under the title "The Evolution of Delayed Return Altruism", but reviewer W. D. Hamilton suggested that he change the title to "The Evolution of Reciprocal Altruism". Trivers changed the title, but not the examples in the manuscript, which has led to confusion about what were appropriate examples of reciprocal altruism for the last 50 years. In their contribution to that symposium, Rothstein and Pierotti (1988) addressed this issue and proposed new definitions concerning the topic of altruism, that clarified the issue created by Trivers and Hamilton. They proposed that Delayed Return Altruism was a superior concept and used the term pseudo-reciprocity in place of DRA.
Examples
The following examples could be understood as altruism. However, showing reciprocal altruism in an unambiguous way requires more evidence as will be shown later.
Cleaner fish
An example of reciprocal altruism is cleaning symbiosis, such as between cleaner fish and their hosts, though cleaners include shrimps and birds, and clients include fish, turtles, octopuses and mammals. Aside from the apparent symbiosis of the cleaner and the host during actual cleaning, which cannot be interpreted as altruism, the host displays additional behaviour that meets the criteria for delayed return altruism:
The host fish allows the cleaner fish free entrance and exit and does not eat the cleaner, even after the cleaning is done. The host signals the cleaner it is about to depart the cleaner's locality, even when the cleaner is not in its body. The host sometimes chases off possible dangers to the cleaner.
The following evidence supports the hypothesis:
The cleaning by cleaners is essential for the host. In the absence of cleaners the hosts leave the locality or suffer from injuries inflicted by ectoparasites. There is difficulty and danger in finding a cleaner. Hosts leave their element to get cleaned. Others wait no longer than 30 seconds before searching for cleaners elsewhere.
A key requirement for the establishment of reciprocal altruism is that the same two individuals must interact repeatedly, as otherwise the best strategy for the host would be to eat the cleaner as soon as cleaning was complete. This constraint imposes both a spatial and a temporal condition on the cleaner and on its host. Both individuals must remain in the same physical location, and both must have a long enough lifespan, to enable multiple interactions. There is reliable evidence that individual cleaners and hosts do indeed interact repeatedly.
This example meets some, but not all, of the criteria described in Trivers's model. In the cleaner-host system the benefit to the cleaner is always immediate. However, the evolution of reciprocal altruism is contingent on opportunities for future rewards through repeated interactions. In one study, nearby host fish observed "cheater" cleaners and subsequently avoided them. In these examples, true reciprocity is difficult to demonstrate since failure means the death of the cleaner. However, if Randall's claim that hosts sometimes chase off possible dangers to the cleaner is correct, an experiment might be constructed in which reciprocity could be demonstrated. In actuality this is one of Trivers' examples of Delayed Return Altruism as discussed by Rothstein and Pierotti 1988.
Warning calls in birds
Warning calls, although exposing a bird and putting it in danger, are frequently given by birds. An explanation in terms of altruistic behaviors given by Trivers:
It has been shown that predators learn specific localities and specialize individually on prey types and hunting techniques.
It is therefore disadvantageous for a bird to have a predator eat a conspecific, because the experienced predator may then be more likely to eat them. Alarming another bird by giving a warning call tends to prevent predators from specializing on the caller's species and locality. In this way, birds in areas in which warning calls are given will be at a selective advantage relative to birds in areas free from warning calls.
Nevertheless, this presentation lacks important elements of reciprocity. It is very hard to detect and ostracize cheaters. There is no evidence that a bird refrains from giving calls when another bird is not reciprocating, nor evidence that individuals interact repeatedly. Given the aforementioned characteristics of bird calling, a continuous bird emigration and immigration environment (true of many avian species) is most likely to be partial to cheaters, since selection against the selfish gene is unlikely.
Another explanation for warning calls is that these are not warning calls at all:
A bird, once it has detected a bird of prey, calls to signal to the bird of prey that it was detected, and that there is no use trying to attack the calling bird. Two facts support this hypothesis:
The call frequencies match the hearing range of the predator bird.
Calling birds are less attacked—predator birds attack calling birds less frequently than other birds.
Nest protecting
Red-winged blackbird males help defend neighbors' nests. There are many theories as to why males behave this way. One is that males only defend other nests which contain their extra-pair offspring. Extra-pair offspring are juveniles which may contain some of the male bird's DNA. Another is the tit-for-tat strategy of reciprocal altruism. A third theory is that males help only other closely related males. A study done by the Department of Fisheries and Wildlife provided evidence that males used a tit-for-tat strategy. The researchers tested many different nests by placing stuffed crows near them, and then observing the behavior of neighboring males. The behaviors they looked for included the number of calls, dives, and strikes. After analyzing the results, there was no significant evidence for kin selection; the presence of extra-pair offspring did not affect the probability of help in nest defense. However, males reduced the amount of defense given to neighbors when neighbor males reduced defense for their nests. This demonstrates a tit-for-tat strategy, where animals help those who previously helped them. This strategy is one type of reciprocal altruism.
Vampire bats
Vampire bats also display reciprocal altruism, as described by Wilkinson.
The bats feed each other by regurgitating blood. Since bats only feed on blood and will die after just 70 hours of not eating, this food sharing is a great benefit to the receiver and a great cost to the giver.
To qualify for reciprocal altruism, the benefit to the receiver would have to be larger than the cost to the donor. This seems to hold as these bats usually die if they do not find a blood meal two nights in a row. Also, the requirement that individuals who have behaved altruistically in the past are helped by others in the future is confirmed by the data. However, the consistency of the reciprocal behaviour, namely that a previously non-altruistic bat is refused help when it requires it, has not been demonstrated. Therefore, the bats do not seem to qualify yet as an unequivocal example of reciprocal altruism.
Primates
Grooming in primates meets the conditions for reciprocal altruism according to some studies. One study in vervet monkeys shows that among unrelated individuals, grooming induces a higher chance of attending to each other's calls for aid. However, vervet monkeys also display grooming behaviors within group members, displaying alliances. This would demonstrate vervet monkeys' grooming behavior as a part of kin selection, since the activity in this study was done between siblings. Moreover, following the criteria by Stephens, if the study is to be an example of reciprocal altruism, it must prove the mechanism for detecting cheaters.
Bacteria
Numerous species of bacteria engage in reciprocal altruistic behaviors with other species. Typically, this takes the form of bacteria providing essential nutrients for another species, while the other species provides an environment for the bacteria to live in. Reciprocal altruism is exhibited between nitrogen-fixing bacteria and plants in which they reside. Additionally, it can be observed between bacteria and some species of flies such as Bactrocera tryoni. These flies consume nutrient-producing bacteria found on the leaves of plants; in exchange, they reside within the flies' digestive system. This reciprocal altruistic behavior has been exploited by techniques designed to eliminate B. tryoni, which are fruit fly pests native to Australia.
Humans
Exceptions
Some animals seem unable to develop reciprocal altruism. For example, in a prisoner's dilemma game against a computer, pigeons defect consistently rather than responding randomly or playing tit-for-tat. This may be due to their favoring short-term over long-term gains.
Regulation by emotional disposition
In comparison to that of other animals, the human altruistic system is a sensitive and unstable one. Therefore, the tendency to give, to cheat, and the response to other's acts of giving and cheating must be regulated by a complex psychology in each individual, social structures, and cultural traditions. Individuals differ in the degree of these tendencies and responses.
According to Trivers, the following emotional dispositions and their evolution can be understood in terms of regulation of altruism.
Friendship and emotions of liking and disliking.
Moralistic aggression. A protection mechanism from cheaters acts to regulate the advantage of cheaters in selection against altruists. The moralistic altruist may want to educate or even punish a cheater.
Gratitude and sympathy. A fine regulation of altruism can be associated with gratitude and sympathy in terms of cost/benefit and the level in which the beneficiary will reciprocate.
Guilt and reparative altruism. Prevents the cheater from cheating again. The cheater shows regret to avoid paying too dearly for past acts.
Subtle cheating. A stable evolutionary equilibrium could include a low percentage of mimics in controversial support of adaptive sociopathy.
Trust and suspicion. These are regulators for cheating and subtle cheating.
Partnerships. Altruism to create friendships.
It is not known how individuals pick partners, as there has been little research on choice. Modeling indicates that altruism driven by partner choice is unlikely to evolve, as costs and benefits between multiple individuals are variable. Therefore, the time or frequency of reciprocal actions contributes more to an individual's choice of partner than the reciprocal act itself.
See also
Altruism (biology)
Collaboration
The common good
Competitive altruism
Enlightened self-interest
Evolutionary models of food sharing
Gift economy
Helping behavior
Koinophilia
Mutual Aid: A Factor of Evolution (1902)
Norm of reciprocity
Prosocial behavior
Psychological egoism
Reciprocity (social psychology)
Reciprocity (evolution)
Signalling theory
References
Evolutionary biology
Symbiosis
Evolutionary psychology
Altruism | Reciprocal altruism | Biology | 2,984 |
41,359,677 | https://en.wikipedia.org/wiki/Thermosalinograph | The thermosalinograph, or TSG, is a measuring instrument mounted near the water intake of ships to continuously measure sea surface temperature and conductivity while the ship is in motion. Various programs have been developed to assist in the collection and analysis of data from a TSG. The data can be used to calculate salinity, density, sound velocity within the water, and other parameters. Various types of thermosalinographs are available on the market today.
Background
Programs collecting TSG data
NOAA fleet
Ship of Opportunity Program (SOOP)
Global Ocean Observing System (GOOS)
GOSUD: Global Ocean Surface Underway Data (http://www.gosud.org)
Measurement devices
The thermosalinograph uses a conductivity cell to measure conductivity, which can then be converted into a value of salinity. A thermistor cell measures the temperature of the surface water; combined with the conductivity, this can be used to calculate the density of the water and the sound velocity within it.
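As an illustration of that conversion chain, the following sketch assumes the gsw Python package, an implementation of the TEOS-10 Gibbs SeaWater toolbox; the readings and ship position are made-up values, and real processing would also apply calibration and quality control.

```python
import gsw  # Python implementation of the TEOS-10 Gibbs SeaWater toolbox

# Illustrative readings from a hull-mounted TSG (values are made up):
C = 42.9   # conductivity, mS/cm (roughly the value of standard seawater at 15 degC)
t = 15.0   # in-situ temperature, degrees Celsius
p = 0.0    # sea pressure at the intake, dbar (near the surface)
lon, lat = -30.0, 45.0  # ship position, needed for absolute salinity

SP = gsw.SP_from_C(C, t, p)           # practical salinity from conductivity (~35 here)
SA = gsw.SA_from_SP(SP, p, lon, lat)  # absolute salinity, g/kg
CT = gsw.CT_from_t(SA, t, p)          # conservative temperature
rho = gsw.rho(SA, CT, p)              # in-situ density, kg/m^3
c = gsw.sound_speed(SA, CT, p)        # sound velocity, m/s
print(SP, rho, c)
```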
Types
Sources of error
The water is typically sampled at an intake in the engine room, so heat from the surrounding machinery can bias the temperature measurement.
References
Water transport
Measuring instruments | Thermosalinograph | Technology,Engineering | 238 |
2,575,805 | https://en.wikipedia.org/wiki/Cryolathe | A cryolathe is a device used for freezing and grinding human corneal tissue into different refractive powers.
See also
Epikeratophakia
Keratomileusis
Refractive surgery
References
External links
Medical equipment | Cryolathe | Biology | 46 |
463,044 | https://en.wikipedia.org/wiki/Addition%20polymer | In polymer chemistry, an addition polymer is a polymer that forms by simple linking of monomers without the co-generation of other products. Addition polymerization differs from condensation polymerization, which does co-generate a product, usually water. Addition polymers can be formed by chain polymerization, when the polymer is formed by the sequential addition of monomer units to an active site in a chain reaction, or by polyaddition, when the polymer is formed by addition reactions between species of all degrees of polymerization. Addition polymers are formed by the repeated addition of simple monomer units. The monomers are generally unsaturated compounds such as alkenes and alkynes. Addition polymerization most commonly proceeds by a free-radical mechanism, which is completed in three steps: initiation of the free radical, chain propagation, and chain termination.
Polyolefins
Many common addition polymers are formed from unsaturated monomers (usually having a C=C double bond). The most prevalent addition polymers are polyolefins, i.e. polymers derived by the conversion of olefins (alkenes) to long-chain alkanes. The stoichiometry is simple:
n RCH=CH2 → [RCH-CH2]n
This conversion can be induced by a variety of catalysts including free radicals, acids, carbanions and metal complexes.
Examples of such polyolefins are polyethenes, polypropylene, PVC, Teflon, Buna rubbers, polyacrylates, polystyrene, and PCTFE.
Copolymers
When two or more types of monomers undergo addition polymerization, the resulting polymer is an addition copolymer. Saran wrap, formed from polymerization of vinyl chloride and vinylidene chloride, is an addition copolymer.
Ring-opening polymerization
Ring-opening polymerization is an additive process that tends to give condensation-like polymers while following the stoichiometry of addition polymerization. For example, polyethylene glycol is formed by opening ethylene oxide rings:
HOCH2CH2OH + n C2H4O → HO(CH2CH2O)n+1H
Nylon 6 (developed to thwart the patent on nylon 6,6) is produced by addition polymerization, but chemically resembles typical polyamides.
Further contrasts with condensation polymers
One universal distinction between polymerization types is development of molecular weight by the different modes of propagation. Addition polymers form high molecular weight chains rapidly, with much monomer remaining. Since addition polymerization has rapidly growing chains and free monomer as its reactants, and condensation polymerization occurs in step-wise fashion between monomers, dimers, and other smaller growing chains, the effect of a polymer molecule's current size on a continuing reaction is profoundly different in these two cases. This has important effects on the distribution of molecular weights, or polydispersity, in the finished polymer.
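The standard way to quantify such a distribution is through the number-average molar mass Mn, the weight-average molar mass Mw, and their ratio, the polydispersity index (PDI = Mw/Mn). A minimal sketch of the arithmetic, using made-up chain counts rather than measured data:

```python
# Illustrative chain counts (N_i chains of molar mass M_i, g/mol); the values
# are invented to show the arithmetic, not taken from any measurement.
chains = [(1_000, 10_000), (2_000, 50_000), (500, 200_000)]  # (N_i, M_i)

Mn = sum(n * m for n, m in chains) / sum(n for n, _ in chains)          # number average
Mw = sum(n * m * m for n, m in chains) / sum(n * m for n, m in chains)  # weight average
print(f"Mn = {Mn:,.0f} g/mol, Mw = {Mw:,.0f} g/mol, PDI = {Mw / Mn:.2f}")
# -> Mn = 60,000 g/mol, Mw = 119,524 g/mol, PDI = 1.99
```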
Biodegradation
Addition polymers are generally chemically inert, involving strong C-C bonds. For this reason they are non-biodegradable and difficult to recycle. In contrast, condensation polymers tend to be more readily bio-degradable because their backbones contain weaker bonds.
History
The first useful addition polymer was made by accident in 1933 by ICI chemists Reginald Gibson and Eric Fawcett. They were carrying out a series of experiments that involved reacting organic compounds under high temperatures and high pressures. They set up an experiment to react ethene with benzaldehyde in the hope of producing a ketone. They left the reaction vessel overnight, and the next morning they found a small amount of a white waxy solid. It was shown later that this solid was polyethylene.
The term "addition polymerization" is deprecated by IUPAC (International Union of Pure and Applied Chemistry) which recommends the alternative term chain polymerization.
References
Polymer chemistry
Polymerization reactions | Addition polymer | Chemistry,Materials_science,Engineering | 836 |
5,595,981 | https://en.wikipedia.org/wiki/P21 | p21Cip1 (alternatively p21Waf1), also known as cyclin-dependent kinase inhibitor 1 or CDK-interacting protein 1, is a cyclin-dependent kinase inhibitor (CKI) that is capable of inhibiting all cyclin/CDK complexes, though is primarily associated with inhibition of CDK2. p21 represents a major target of p53 activity and thus is associated with linking DNA damage to cell cycle arrest. This protein is encoded by the CDKN1A gene located on chromosome 6 (6p21.2) in humans.
Function
CDK inhibition
p21 is a potent cyclin-dependent kinase inhibitor (CKI). The p21 (CIP1/WAF1) protein binds to and inhibits the activity of cyclin-CDK2, -CDK1, and -CDK4/6 complexes, and thus functions as a regulator of cell cycle progression at G1 and S phase. The binding of p21 to CDK complexes occurs through p21's N-terminal domain, which is homologous to the other CIP/KIP CDK inhibitors p27 and p57. Specifically, it contains a Cy1 motif in the N-terminal half and a weaker Cy2 motif in the C-terminal domain, which allow it to bind CDK in a region that blocks its ability to complex with cyclins and thus prevents CDK activation.
Experiments examining CDK2 activity within single cells have also shown p21 to be responsible for a bifurcation in CDK2 activity following mitosis: cells with high p21 enter a G0/quiescent state, while those with low p21 continue to proliferate. Follow-up work found evidence that this bistability is underpinned by double-negative feedback between p21 and CDK2, in which CDK2 inhibits p21 via ubiquitin ligase activity.
PCNA inhibition
p21 interacts with proliferating cell nuclear antigen (PCNA), a DNA polymerase accessory factor, and plays a regulatory role in S phase DNA replication and DNA damage repair. Specifically, p21 has a high affinity for the PIP-box binding region on PCNA, binding of p21 to this region is proposed to block the binding of processivity factors necessary for PCNA dependent S-phase DNA synthesis, but not PCNA dependent nucleotide excision repair (NER). As such, p21 acts as an effective inhibitor of S-phase DNA synthesis though permits NER, leading to the proposal that p21 acts to preferentially select polymerase processivity factors depending on the context of DNA synthesis.
Apoptosis inhibition
This protein was reported to be specifically cleaved by CASP3-like caspases, which leads to a dramatic activation of CDK2 and may be instrumental in the execution of apoptosis following caspase activation. However, p21 may inhibit apoptosis and does not induce cell death on its own. The ability of p21 to inhibit apoptosis in response to replication fork stress has also been reported.
Regulation
p53 dependent response
Studies of p53 dependent cell cycle arrest in response to DNA damage identified p21 as the primary mediator of downstream cell cycle arrest. Notably, El-Deiry et al. identified a protein p21 (WAF1) which was present in cells expressing wild type p53 but not those with mutant p53, moreover constitutive expression of p21 led to cell cycle arrest in a number of cell types. Dulcic et al. also found that γ-irradiation of fibroblasts induced a p53 and p21 dependent cell cycle arrest, here p21 was found bound to inactive cyclin E/CDK2 complexes. Working in mouse models, it was also shown that whilst mice lacking p21 were healthy, spontaneous tumours developed and G1 checkpoint control was compromised in cells derived from these mice. Taken together, these studies thus defined p21 as the primary mediator of p53-dependent cell cycle arrest in response to DNA damage.
Recent work exploring p21 activation in response to DNA damage at a single-cell level has demonstrated that pulsatile p53 activity leads to subsequent pulses of p21, and that the strength of p21 activation is cell cycle phase dependent. Moreover, studies of p21 levels in populations of cycling cells not exposed to DNA-damaging agents have shown that DNA damage occurring in mother-cell S-phase can induce p21 accumulation over both mother G2 and daughter G1 phases, which subsequently induces cell cycle arrest; this is responsible for the bifurcation in CDK2 activity observed by Spencer et al.
Degradation
p21 is negatively regulated by ubiquitin ligases both over the course of the cell cycle and in response to DNA damage. Specifically, over the G1/S transition it has been demonstrated that the E3 ubiquitin ligase complex SCFSkp2 induces degradation of p21. Studies have also demonstrated that the E3 ubiquitin ligase complex CRL4Cdt2 degrades p21 in a PCNA dependent manner over S-phase, necessary to prevent p21 dependent re-replication, as well as in response to UV irradiation. Recent work has now found that in human cell lines SCFSkp2 degrades p21 towards the end of G1 phase, allowing cells to exit a quiescent state, whilst CRL4Cdt2 acts to degrade p21 at a much higher rate than SCFSkp2 over the G1/S transition and subsequently maintain low levels of p21 throughout S-phase.
Clinical significance
Cytoplasmic p21 expression can be significantly correlated with lymph node metastasis, distant metastases, advanced TNM stage (a cancer staging classification based on tumor size, nearby lymph node involvement, and distant metastasis), depth of invasion, and overall survival (OS). A study of immunohistochemical markers in malignant thymic epithelial tumors showed that p21 expression negatively influenced survival and was significantly correlated with WHO (World Health Organization) type B2/B3. When combined with low p27 and high p53, disease-free survival (DFS) decreases.
p21 mediates the resistance of hematopoietic cells to an infection with HIV by complexing with the HIV integrase and thereby aborting chromosomal integration of the provirus. HIV infected individuals who naturally suppress viral replication have elevated levels of p21 and its associated mRNA. p21 expression affects at least two stages in the HIV life cycle inside CD4 T cells, significantly limiting production of new viruses.
Metastatic canine mammary tumors display increased levels of p21 in the primary tumors but also in their metastases, despite increased cell proliferation.
Mice that lack the p21 gene gain the ability to regenerate lost appendages.
Interactions
P21 has been shown to interact with:
Nrf2
BCCIP,
CIZ1,
CUL4A,
CCNE1,
CDK,
DDB1,
DTL,
GADD45A,
GADD45G,
HDAC,
PCNA,
PIM1,
TK1, and
TSG101.
References
Further reading
External links
Drosophila dacapo - The Interactive Fly
Cell cycle regulators
Tumor suppressor genes | P21 | Chemistry | 1,539 |
28,184 | https://en.wikipedia.org/wiki/Sound%20card | A sound card (also known as an audio card) is an internal expansion card that provides input and output of audio signals to and from a computer under the control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications.
Sound functionality can also be integrated into the motherboard, using components similar to those found on plug-in cards. The integrated sound system is often still referred to as a sound card. Sound processing hardware is also present on modern video cards with HDMI to output sound along with the video using that connector; previously they used an S/PDIF connection to the motherboard or sound card.
Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment (games) and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing.
General characteristics
Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format. The output signal is connected to an amplifier, headphones, or external device using standard interconnects, such as a TRS phone connector.
A common external connector is the microphone connector. Input through a microphone connector can be used, for example, by speech recognition or voice over IP applications. Most sound cards have a line in connector for an analog input from a sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter (ADC) to digitize this signal.
Some cards include a sound chip to support the production of synthesized sounds, usually for real-time generation of music and sound effects using minimal data and CPU time.
The card may use direct memory access to transfer the samples to and from main memory, from where a recording and playback software may read and write it to the hard disk for storage, editing, or further processing.
Sound channels and polyphony
An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1 (stereo and sub woofer), 5.1 (surround), or other configurations. Sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, much older sound chips could accommodate three voices, but only one output audio channel (i.e., a single mono output), requiring all voices to be mixed together. Later cards, such as the AdLib sound card, had a 9-voice polyphony combined in 1 mono output channel.
Early PC sound cards had multiple FM synthesis voices (typically 9 or 16) which were used for MIDI music. The full capabilities of advanced cards are often not fully used; only one (mono) or two (stereo) voice(s) and channel(s) are usually dedicated to playback of digital sound samples, and playing back more than one digital sound sample usually requires a software downmix at a fixed sampling rate. Modern low-cost integrated sound cards (i.e., those built into motherboards), such as audio codecs meeting the AC'97 standard, and even some lower-cost expansion sound cards still work this way. These devices may provide more than two sound output channels (typically 5.1 or 7.1 surround sound), but they usually have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed entirely in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware.
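What such a software downmix amounts to is summing sample streams that share a common, fixed sample rate and rescaling to avoid clipping. A minimal sketch follows; the 16-bit format, sample rate, and test tones are illustrative assumptions, not any particular driver's code.

```python
import numpy as np

def software_mix(voices):
    """Mix several mono int16 sample streams (already at a common sample
    rate) into one output channel, as a driver-level software mixer would."""
    stacked = np.stack(voices).astype(np.float64)
    mixed = stacked.sum(axis=0) / len(voices)  # scale down to avoid clipping
    return np.clip(mixed, -32768, 32767).astype(np.int16)

rate = 44_100                      # one fixed rate for every voice
t = np.arange(rate) / rate         # one second of samples
voice_a = (10_000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
voice_b = (10_000 * np.sin(2 * np.pi * 660 * t)).astype(np.int16)
output = software_mix([voice_a, voice_b])  # single stream sent to the DAC
```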
In the early days of wavetable synthesis, some sound card manufacturers advertised polyphony solely on the MIDI capabilities alone. In this case, typically, the card is only capable of two channels of digital sound and the polyphony specification solely applies to the number of MIDI instruments the sound card is capable of producing at once.
Modern sound cards may provide more flexible audio accelerator capabilities which can be used in support of higher levels of polyphony or other purposes such as hardware acceleration of 3D sound, positional audio and real-time DSP effects.
List of sound card standards
Color codes
Connectors on the sound cards are color-coded as per the PC System Design Guide. They may also have symbols of arrows, holes and soundwaves that are associated with each jack position.
History of sound cards for the IBM PC architecture
Sound cards for IBM PC–compatible computers were very uncommon until 1988. For the majority of IBM PC users, the internal PC speaker was the only way for early PC software to produce sound and music. The speaker hardware was typically limited to square waves. The resulting sound was generally described as "beeps and boops", which earned it the common nickname beeper. Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker, such as RealSound. The resulting audio, while functional, suffered from heavily distorted output and low volume, and usually required all other processing to be stopped while sounds were played. Other home computers of the 1980s, like the Commodore 64, included hardware support for digital sound playback or music synthesis, leaving the IBM PC at a disadvantage in multimedia applications. Early sound cards for the IBM PC platform were not designed for gaming or multimedia, but rather for specific audio applications, such as music composition with the AdLib Personal Music System, IBM Music Feature Card, and Creative Music System, or speech synthesis with the Digispeech DS201, Covox Speech Thing, and Street Electronics Echo.
In 1988, a panel of computer-game CEOs stated at the Consumer Electronics Show that the PC's limited sound capability prevented it from becoming the leading home computer, that it needed a $49–79 sound card with better capability than current products, and that once such hardware was widely installed, their companies would support it. Sierra On-Line, which had pioneered supporting EGA and VGA video, and 3-1/2" disks, promised that year to support the AdLib, IBM Music Feature, and Roland MT-32 sound cards in its games. A 1989 Computer Gaming World survey found that 18 of 25 game companies planned to support AdLib, six Roland and Covox, and seven Creative Music System/Game Blaster.
Hardware manufacturers
One of the first manufacturers of sound cards for the IBM PC was AdLib, which produced a card based on the Yamaha YM3812 sound chip, also known as the OPL2. The AdLib had two modes: A 9-voice mode where each voice could be fully programmed, and a less frequently used percussion mode with 3 regular voices producing 5 independent percussion-only voices for a total of 11.
Creative Labs also marketed a sound card called the Creative Music System (C/MS) at about the same time. Although the C/MS had twelve voices to AdLib's nine and was a stereo card while the AdLib was mono, the basic technology behind it was based on the Philips SAA1099 chip which was essentially a square-wave generator. It sounded much like twelve simultaneous PC speakers would have except for each channel having amplitude control, and failed to sell well, even after Creative renamed it the Game Blaster a year later, and marketed it through RadioShack in the US. The Game Blaster retailed for under $100 and was compatible with many popular games, such as Silpheed.
A large change in the IBM PC-compatible sound card market happened when Creative Labs introduced the Sound Blaster card. Recommended by Microsoft to developers creating software based on the Multimedia PC standard, the Sound Blaster cloned the AdLib and added a sound coprocessor for recording and playback of digital audio. The card also included a game port for adding a joystick, and the capability to interface to MIDI equipment using the game port and a special cable. With AdLib compatibility and more features at nearly the same price, most buyers chose the Sound Blaster. It eventually outsold the AdLib and dominated the market.
Roland also made sound cards in the late 1980s such as the MT-32 and LAPC-I. Roland cards sold for hundreds of dollars. Many games, such as Silpheed and Police Quest II, had music written for their cards. The cards were often poor at sound effects such as laughs, but for music were by far the best sound cards available until the mid-nineties. Some Roland cards, such as the SCC, and later versions of the MT-32 were made to be less expensive.
By 1992, one sound card vendor advertised that its product was "Sound Blaster, AdLib, Disney Sound Source and Covox Speech Thing Compatible!" Responding to readers complaining about an article on sound cards that unfavorably mentioned the Gravis Ultrasound, Computer Gaming World stated in January 1994 that, "The de facto standard in the gaming world is Sound Blaster compatibility ... It would have been unfair to have recommended anything else." The magazine that year stated that Wing Commander II was "Probably the game responsible" for making it the standard card. The Sound Blaster line of cards, together with the first inexpensive CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to video games, or even reproduce full motion video (albeit at much lower resolutions and quality in early days). The widespread decision to support the Sound Blaster design in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound had to be Sound Blaster compatible if they were to sell well. Until the early 2000s, when the AC'97 audio standard became more widespread and eventually usurped the SoundBlaster as a standard due to its low cost and integration into many motherboards, Sound Blaster compatibility was a standard that many other sound cards supported to maintain compatibility with many games and applications released.
Industry adoption
When game company Sierra On-Line opted to support add-on music hardware in addition to built-in hardware such as the PC speaker and built-in sound capabilities of the IBM PCjr and Tandy 1000, what could be done with sound and music on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and AdLib, opting to produce in-game music for King's Quest 4 that supported the MT-32 and AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since it was the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches, producing background sound effects (e.g., chirping birds, clopping horse hooves, etc.) before the Sound Blaster brought digital audio playback to the PC. Many game companies also supported the MT-32, but supported the Adlib card as an alternative because of the latter's higher market base. The adoption of the MT-32 led the way for the creation of the MPU-401, Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s.
Feature evolution
Early ISA bus sound cards were half-duplex, meaning they couldn't record and play digitized sound simultaneously. Later, ISA cards like the SoundBlaster AWE series and Plug-and-play Soundblaster clones supported simultaneous recording and playback, but at the expense of using up two IRQ and DMA channels instead of one. Conventional PCI bus cards generally do not have these limitations and are mostly full-duplex.
Sound cards have evolved in terms of digital audio sampling rate and resolution (from the 8-bit sampling of early cards to the 32-bit audio that the latest solutions support). Along the way, some cards started offering wavetable synthesis, which provides superior MIDI synthesis quality relative to the earlier Yamaha OPL-based solutions, which use FM synthesis. Some higher-end cards (such as the Sound Blaster AWE32, Sound Blaster AWE64 and Sound Blaster Live!) introduced their own RAM and processor for user-definable sound samples and MIDI instruments, as well as to offload audio processing from the CPU. Later, integrated audio (AC'97 and later HD Audio) preferred the use of a software MIDI synthesizer, for example the Microsoft GS Wavetable SW Synth in Microsoft Windows.
With some exceptions, for years, sound cards, most notably the Sound Blaster series and their compatibles, had only one or two channels of digital sound. Early games and MOD-players needing more channels than a card could support had to resort to mixing multiple channels in software. Even today, the tendency is still to mix multiple sound streams in software, except in products specifically intended for gamers or professional musicians.
Crippling of features
As of 2024, sound cards commonly ship without the audio loopback feature variously called stereo mix, wave out mix, mono mix, or What U Hear, which previously allowed users to digitally record output otherwise only accessible to the speakers.
Lenovo and other manufacturers fail to implement the feature in hardware, while other manufacturers disable the driver from supporting it. In some cases, loopback can be reinstated with driver updates. Alternatively, software such as virtual audio cable applications can be purchased to enable the functionality. According to Microsoft, the functionality was hidden by default in Windows Vista to reduce user confusion, but is still available, as long as the underlying sound card drivers and hardware support it.
Ultimately, the user can use the analog loophole and connect the line out directly to the line in on the sound card. However, in laptops, manufacturers have gradually moved from providing three separate jacks with TRS connectors (usually line in, line out/headphone out, and microphone) to a single combo jack with a TRRS connector that combines inputs and outputs.
Outputs
The number of physical sound channels has also increased. The first sound card solutions were mono. Stereo sound was introduced in the early 1980s, and quadraphonic sound came in 1989. This was shortly followed by 5.1 channel audio. The latest sound cards support up to 8 audio channels for the 7.1 speaker setup.
A few early sound cards had sufficient power to drive unpowered speakers directly (for example, two watts per channel). With the popularity of amplified speakers, sound cards no longer have a power stage, though in many cases they can adequately drive headphones.
Professional sound cards
Professional sound cards are sound cards optimized for high-fidelity, low-latency multichannel sound recording and playback. Their drivers usually follow the Audio Stream Input/Output protocol for use with professional sound engineering and music software.
Professional sound cards are usually described as audio interfaces, and sometimes have the form of external rack-mountable units using USB, FireWire, or an optical interface, to offer sufficient data rates. The emphasis in these products is, in general, on multiple input and output connectors, direct hardware support for multiple input and output sound channels, as well as higher sampling rates and fidelity as compared to the usual consumer sound card.
On the other hand, certain features of consumer sound cards such as support for 3D audio, hardware acceleration in video games, or real-time ambiance effects are secondary, nonexistent or even undesirable in professional audio interfaces.
The typical consumer-grade sound card is intended for generic home, office, and entertainment purposes with an emphasis on playback and casual use, rather than catering to the needs of audio professionals. In general, consumer-grade sound cards impose several restrictions and inconveniences that would be unacceptable to an audio professional. Consumer sound cards are also limited in the effective sampling rates and bit depths they can actually manage and have lower numbers of less flexible input channels. Professional studio recording use typically requires more than the two channels that consumer sound cards provide, and more accessible connectors, unlike the variable mixture of internal—and sometimes virtual—and external connectors found in consumer-grade sound cards.
Sound devices other than expansion cards
Integrated sound hardware on PC motherboards
In 1984, the first IBM PCjr had a rudimentary 3-voice sound synthesis chip (the SN76489) which was capable of generating three square-wave tones with variable amplitude, and a pseudo-white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially a clone of the PCjr, duplicated this functionality, with the Tandy 1000 TL/SL/RL models adding digital sound recording and playback capabilities. Many games during the 1980s that supported the PCjr's video standard (described as Tandy-compatible, Tandy graphics, or TGA) also supported PCjr/Tandy 1000 audio.
In the late 1990s, many computer manufacturers began to replace plug-in sound cards with an audio codec chip (a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC'97 specification. Others used inexpensive ACR slot accessory cards.
From around 2001, many motherboards incorporated full-featured sound cards, usually in the form of a custom chipset, providing something akin to full Sound Blaster compatibility and relatively high-quality sound. However, these features were dropped when AC'97 was superseded by Intel's HD Audio standard, which was released in 2004, again specified the use of a codec chip, and slowly gained acceptance. As of 2011, most motherboards have returned to using a codec chip, albeit an HD Audio compatible one, and the requirement for Sound Blaster compatibility relegated to history.
Integrated sound on other platforms
Many home computers have their own motherboard-integrated sound devices: Commodore 64, Amiga, PC-88, FM-7, FM Towns, Sharp X1, X68000, BBC Micro, Electron, Archimedes, Atari 8-bit computers, Atari ST, Atari Falcon, Amstrad CPC, later revisions of the ZX Spectrum, MSX, Mac, and Apple IIGS. Workstations from Sun, Silicon Graphics and NeXT do as well. In some cases, most notably those of the Macintosh, IIGS, Amiga, C64, SGI Indigo, X68000, MSX, Falcon, Archimedes, FM-7 and FM Towns, they provide very advanced capabilities (as of the time of manufacture); in others they provide only minimal capabilities. Some of these platforms have also had sound cards designed for their bus architectures that cannot be used in a standard PC.
Several Japanese computer platforms, including the MSX, X1, X68000, FM Towns and FM-7, had built-in FM synthesis sound from Yamaha by the mid-1980s. By 1989, the FM Towns computer platform featured built-in PCM sample-based sound and supported the CD-ROM format.
The custom sound chip on Amiga, named Paula, has four digital sound channels (2 for the left speaker and 2 for the right) with 8-bit resolution for each channel and a 6-bit volume control per channel. Sound playback on Amiga was done by reading directly from the chip RAM without using the main CPU.
Most arcade video games have integrated sound chips. In the 1980s it was common to have a separate microprocessor for handling communication with the sound chip.
Sound cards on other platforms
The earliest known sound card used by computers was the Gooch Synthetic Woodwind, a music device for PLATO terminals, and is widely hailed as the precursor to sound cards and MIDI. It was invented in 1972.
Certain early arcade machines made use of sound cards to achieve playback of complex audio waveforms and digital music, despite being already equipped with onboard audio. An example of a sound card used in arcade machines is the Digital Compression System card, used in games from Midway. For example, Mortal Kombat II on the Midway T-Unit hardware. The T-Unit hardware already has an onboard YM2151 OPL chip coupled with an OKI 6295 DAC, but said game uses an added-on DCS card instead. The card is also used in the arcade version of Midway and Aerosmith's Revolution X for complex looping music and speech playback.
MSX computers, while equipped with built-in sound capabilities, also relied on sound cards to produce better-quality audio. The card, known as Moonsound, uses a Yamaha OPL4 sound chip. Prior to the Moonsound, there were also sound cards called MSX Music and MSX Audio for the system, which uses OPL2 and OPL3 chipsets.
The Apple II computers, which did not have sound capabilities beyond rapidly clicking a speaker until the IIGS, could use plug-in sound cards from a variety of manufacturers. The first, in 1978, was ALF's Apple Music Synthesizer, with 3 voices; two or three cards could be used to create 6 or 9 voices in stereo. Later ALF created the Apple Music II, a 9-voice model. The most widely supported card, however, was the Mockingboard. Sweet Micro Systems sold the Mockingboard in various models. Early Mockingboard models ranged from 3 voices in mono, while some later designs had 6 voices in stereo. Some software supported use of two Mockingboard cards, which allowed 12-voice music and sound. A 12-voice, single-card clone of the Mockingboard called the Phasor was made by Applied Engineering.
The ZX Spectrum, which initially had only a beeper, had some sound cards made for it. Examples include the TurboSound, the Fuller Box, and the Zon X-81.
The Commodore 64, while having an integrated SID (Sound Interface Device) chip, also had sound cards made for it. For example, the Sound Expander, which added on an OPL FM synthesizer.
The PC-98 series of computers, like their IBM PC cousins, do not have integrated sound (contrary to popular belief), and their default configuration is a PC speaker driven by a timer. Sound cards were made for the C-Bus expansion slots that these computers had. Most used Yamaha FM and PSG chips and were made by NEC itself, although aftermarket clones could also be purchased, and Creative released a C-Bus version of the Sound Blaster line for the platform.
External sound devices
Devices such as the Covox Speech Thing could be attached to the parallel port of an IBM PC and fed 6- or 8-bit PCM sample data to produce audio. Also, many types of professional sound cards take the form of an external FireWire or USB unit, usually for convenience and improved fidelity.
Sound cards using the PC Card interface were available before laptop and notebook computers routinely had onboard sound. Most of these units were designed for mobile DJs, providing separate outputs to allow both playback and monitoring from one system; however, some also target mobile gamers.
USB sound cards
USB sound cards are external devices that plug into the computer via USB. They are often used in studios and on stage by electronic musicians including live PA performers and DJs. DJs who use DJ software typically use sound cards integrated into DJ controllers or specialized DJ sound cards. DJ sound cards sometimes have inputs with phono preamplifiers to allow turntables to be connected to the computer to control the software's playback of music files with vinyl emulation.
The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices and interfaces on the market. Mac OS X, Windows, and Linux support this standard. However, some USB sound cards do not conform to the standard and require proprietary drivers from the manufacturer.
Cards meeting the older USB 1.1 specification are capable of high-quality sound with a limited number of channels; USB 2.0 or later is more capable thanks to its higher bandwidth.
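The bandwidth constraint is simple arithmetic on the raw PCM stream; the figures below are illustrative and ignore USB protocol overhead.

```python
def pcm_bitrate(rate_hz, bit_depth, channels):
    """Raw PCM bandwidth in bits per second, before protocol overhead."""
    return rate_hz * bit_depth * channels

print(pcm_bitrate(96_000, 24, 2) / 1e6)   # 4.6 Mbit/s: fits USB 1.1 (12 Mbit/s full speed)
print(pcm_bitrate(192_000, 24, 8) / 1e6)  # 36.9 Mbit/s: needs USB 2.0 or later
```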
Uses
The main function of a sound card is to play audio, usually music, with varying formats (monophonic, stereophonic, various multiple speaker setups) and degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input. Audio may be recorded. Sometimes sound card hardware and drivers do not support recording a source that is being played.
Non-sound uses
Sound cards can be used to generate (output) arbitrary electrical waveforms, as any digital waveform played by the sound card is converted to the desired output within the bounds of its capabilities. In other words, sound cards are consumer-grade arbitrary waveform generators. A number of free and commercial software packages allow sound cards to act like function generators by generating desired waveforms from functions; there are also online services that generate audio files for any desired waveform, playable through a sound card.
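Because a waveform is just a stream of samples, generating one requires nothing beyond evaluating a function of time. The sketch below uses only Python's standard library; the frequency, rate, and amplitude are arbitrary example values, and the resulting file can be played through any sound card output.

```python
import math
import struct
import wave

rate, seconds, freq, amplitude = 48_000, 2, 1_000, 0.8

# Any function of time could be substituted here (square, sawtooth, chirp...).
samples = (amplitude * math.sin(2 * math.pi * freq * n / rate)
           for n in range(rate * seconds))

with wave.open("waveform.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(rate)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))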
Sound cards can also be used to record electrical waveforms, in the same way it records an analog audio input. The recording can be displayed by special or general-purpose audio-editing software (acting as an oscilloscope) or further transformed and analyzed. A protection circuit should be used to keep the input voltage within acceptable bounds.
As general-purpose waveform generators and analyzers, sound cards are bound by several design and physical limitations.
Sound cards have a limited sample rate, typically up to 192 kHz. Under the Nyquist–Shannon sampling theorem, this implies a maximum signal frequency (bandwidth) of half that: 96 kHz. Because of internal filtering, real sound cards tend to have a bandwidth somewhat smaller than the Nyquist limit implies.
As with all ADCs and DACs, sound cards produce distortion and noise. A typical integrated sound card, the Realtek ALC887, according to its data sheet has distortion about 80 dB below the fundamental; cards are available with distortion better than −100 dB.
Sound cards commonly suffer from some clock drift, requiring correction of measurement results.
Sound cards have been used to analyze and generate the following types of signals:
Sound equipment testing. A very-low-distortion sinewave oscillator can be used as input to equipment under test; the output is sent to a sound card's line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion. Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analyzed.
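The Fourier-transform step of such a measurement reduces to locating the harmonic bins in an FFT of the captured signal. A sketch assuming NumPy, using a synthetic tone with deliberately injected second-harmonic distortion in place of a real capture:

```python
import numpy as np

def harmonic_amplitudes(signal, rate, fundamental, n_harmonics=3):
    """Amplitude of the fundamental and its first harmonics via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    bins = [np.argmin(np.abs(freqs - k * fundamental))
            for k in range(1, n_harmonics + 1)]
    return spectrum[bins]

rate = 48_000
t = np.arange(rate) / rate
# A 1 kHz tone with a small amount of artificial 2nd-harmonic distortion:
signal = np.sin(2 * np.pi * 1_000 * t) + 0.001 * np.sin(2 * np.pi * 2_000 * t)
amps = harmonic_amplitudes(signal, rate, 1_000)
print(20 * np.log10(amps[1:] / amps[0]))  # harmonic levels in dB below the fundamental
```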
Gamma spectroscopy. A sound card can serve as a cheap multichannel analyzer for gamma spectroscopy, which allows one to distinguish different radioactive isotopes.
Longwave radio. A 192 kHz sound card can be used to receive radio signals up to 96 kHz. This bandwidth is enough for longwave time signals such as DCF77 (77.5 kHz). A coil is attached to the input side as an antenna, while special software decodes the signal. A sound card can also work in the opposite direction and generate low-power time-signal transmissions (JJY at 40 kHz, using harmonics).
Driver architecture
To use a sound card, the operating system (OS) typically requires a specific device driver, a low-level program that handles the data connections between the physical hardware and the operating system. Some operating systems include the drivers for many cards; for cards not so supported, drivers are supplied with the card, or available for download.
DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Audio Interface Libraries (AIL), the Miles Sound System, etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided terminate-and-stay-resident (TSR) drivers for their products; often such a driver was a Sound Blaster and AdLib emulator, allowing games that could only use Sound Blaster or AdLib sound to work with the card. Finally, some programs simply had driver or middleware source code for the supported sound cards incorporated into the program itself.
Microsoft Windows uses drivers generally written by the sound card manufacturers. Many device manufacturers supply the drivers on their own discs or submit them to Microsoft for inclusion on the Windows installation disc. USB audio device class support is present from Windows 98 onwards. Since Microsoft's Universal Audio Architecture (UAA) initiative, which supports the HD Audio, FireWire, and USB audio device class standards, a universal class driver by Microsoft can be used. The driver is included with Windows Vista; for Windows XP, Windows 2000, or Windows Server 2003, it can be obtained by contacting Microsoft support. Almost all manufacturer-supplied drivers for such devices also include this universal class driver.
A number of versions of UNIX make use of the portable Open Sound System (OSS). Drivers are seldom produced by the card manufacturer.
Most present-day Linux distributions make use of the Advanced Linux Sound Architecture (ALSA).
Mockingboard support on the Apple II is usually incorporated into the programs themselves, as many Apple II programs boot directly from disk. However, a TSR shipped on a disk adds instructions to Apple Basic so users can create programs that use the card, provided the TSR is loaded first.
List of notable sound card manufacturers
Asus
Advanced Gravis Computer Technology (defunct)
AdLib (defunct)
Aureal Semiconductor (defunct)
Auzentech (defunct)
C-Media
Creative Technology
E-mu (bought out by Creative)
ESS Technology
Hercules Computer Technology
HT Omega
IBM
Korg
Media Vision
M-Audio
Onkyo
Turtle Beach Systems
VIA Technologies
See also
Audio signal processing
Cross-platform Audio Creation Tool (XACT)
DirectMusic
DirectSound
EAX
OpenAL
PC System Design Guide
Sound card mixer
Notes
References
External links
Hardware acceleration | Sound card | Technology | 6,151 |
1,440,695 | https://en.wikipedia.org/wiki/Elementary%20symmetric%20polynomial | In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each positive integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables.
Definition
The elementary symmetric polynomials in n variables X_1, ..., X_n, written e_k(X_1, ..., X_n) for k = 1, ..., n, are defined by

e_1(X_1, ..., X_n) = \sum_{1 \le j \le n} X_j,
e_2(X_1, ..., X_n) = \sum_{1 \le j < k \le n} X_j X_k,
e_3(X_1, ..., X_n) = \sum_{1 \le j < k < l \le n} X_j X_k X_l,

and so forth, ending with

e_n(X_1, ..., X_n) = X_1 X_2 \cdots X_n.

In general, for k \ge 0 we define

e_k(X_1, ..., X_n) = \sum_{1 \le j_1 < j_2 < \cdots < j_k \le n} X_{j_1} \cdots X_{j_k},

so that e_k(X_1, ..., X_n) = 0 if k > n.
(Sometimes, 1 = e_0(X_1, ..., X_n) is included among the elementary symmetric polynomials, but excluding it allows generally simpler formulation of results and properties.)
Thus, for each positive integer k less than or equal to n there exists exactly one elementary symmetric polynomial of degree k in n variables. To form the one that has degree k, we take the sum of all products of k-subsets of the n variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
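That sum-over-k-subsets definition translates directly into code. A brute-force sketch in Python, adequate for small n (the number of subsets grows combinatorially):

```python
from itertools import combinations
from math import prod

def elementary_symmetric(k, xs):
    """e_k(xs): the sum, over all k-element subsets, of the product of entries."""
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5]
print([elementary_symmetric(k, xs) for k in range(4)])  # e_0..e_3: [1, 10, 31, 30]
```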
Given an integer partition (that is, a finite non-increasing sequence of positive integers) \lambda = (\lambda_1, ..., \lambda_m), one defines the symmetric polynomial e_\lambda(X_1, ..., X_n), also called an elementary symmetric polynomial, by

e_\lambda(X_1, ..., X_n) = e_{\lambda_1}(X_1, ..., X_n) \cdot e_{\lambda_2}(X_1, ..., X_n) \cdots e_{\lambda_m}(X_1, ..., X_n).

Sometimes the notation \sigma_k is used instead of e_k.
Examples
The following lists the n elementary symmetric polynomials for the first four positive values of n.

For n = 1:
e_1(X_1) = X_1.

For n = 2:
e_1(X_1, X_2) = X_1 + X_2,
e_2(X_1, X_2) = X_1 X_2.

For n = 3:
e_1(X_1, X_2, X_3) = X_1 + X_2 + X_3,
e_2(X_1, X_2, X_3) = X_1 X_2 + X_1 X_3 + X_2 X_3,
e_3(X_1, X_2, X_3) = X_1 X_2 X_3.

For n = 4:
e_1(X_1, X_2, X_3, X_4) = X_1 + X_2 + X_3 + X_4,
e_2(X_1, X_2, X_3, X_4) = X_1 X_2 + X_1 X_3 + X_1 X_4 + X_2 X_3 + X_2 X_4 + X_3 X_4,
e_3(X_1, X_2, X_3, X_4) = X_1 X_2 X_3 + X_1 X_2 X_4 + X_1 X_3 X_4 + X_2 X_3 X_4,
e_4(X_1, X_2, X_3, X_4) = X_1 X_2 X_3 X_4.
Properties
The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity

\prod_{j=1}^{n} (\lambda - X_j) = \lambda^n - e_1(X_1, ..., X_n) \lambda^{n-1} + e_2(X_1, ..., X_n) \lambda^{n-2} - \cdots + (-1)^n e_n(X_1, ..., X_n).

That is, when we substitute numerical values for the variables X_1, ..., X_n, we obtain the monic univariate polynomial (with variable \lambda) whose roots are the values substituted for X_1, ..., X_n and whose coefficients are – up to their sign – the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas.
The characteristic polynomial of a square matrix is an example of application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain – up to their sign – the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of e_1, and thus the sum of the eigenvalues. Similarly, the determinant is – up to the sign – the constant term of the characteristic polynomial, i.e. the value of e_n. Thus the determinant of a square matrix is the product of the eigenvalues.
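These relations are easy to check numerically; a short sketch assuming NumPy, with an arbitrary example matrix:

```python
import numpy as np
from itertools import combinations
from math import prod

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eig = np.linalg.eigvals(A)

e1 = sum(eig)                                    # elementary symmetric e_1 of the eigenvalues
e2 = sum(prod(c) for c in combinations(eig, 2))  # elementary symmetric e_2
print(e1, np.trace(A))        # both ~5.0: the trace equals e_1
print(e2, np.linalg.det(A))   # both ~5.0: the determinant equals e_n (here e_2)
```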
The set of elementary symmetric polynomials e_1, ..., e_n in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring ℤ[e_1(X_1, ..., X_n), ..., e_n(X_1, ..., X_n)]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For another system of symmetric polynomials with the same property see Complete homogeneous symmetric polynomials, and for a system with a similar, but slightly weaker, property see Power sum symmetric polynomial.
Fundamental theorem of symmetric polynomials
For any commutative ring A, denote the ring of symmetric polynomials in the variables X_1, ..., X_n with coefficients in A by A[X_1, ..., X_n]^{S_n}. This is a polynomial ring in the n elementary symmetric polynomials e_k(X_1, ..., X_n) for k = 1, ..., n.
This means that every symmetric polynomial P(X_1, ..., X_n) \in A[X_1, ..., X_n]^{S_n} has a unique representation

P(X_1, ..., X_n) = Q(e_1(X_1, ..., X_n), ..., e_n(X_1, ..., X_n))

for some polynomial Q \in A[Y_1, ..., Y_n]. Another way of saying the same thing is that the ring homomorphism that sends Y_k to e_k(X_1, ..., X_n) for k = 1, ..., n defines an isomorphism between A[Y_1, ..., Y_n] and A[X_1, ..., X_n]^{S_n}.
Proof sketch
The theorem may be proved for symmetric homogeneous polynomials by a double induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).
In the case n = 1 the result is trivial because every polynomial in one variable is automatically symmetric.
Assume now that the theorem has been proved for all polynomials in fewer than n variables and all symmetric polynomials in n variables with degree less than d. Every homogeneous symmetric polynomial P of degree d in A[X_1, ..., X_n]^{S_n} can be decomposed as a sum of homogeneous symmetric polynomials

P(X_1, ..., X_n) = P_{lacunary}(X_1, ..., X_n) + X_1 \cdots X_n \cdot Q(X_1, ..., X_n).

Here the "lacunary part" P_{lacunary} is defined as the sum of all monomials in P which contain only a proper subset of the n variables X_1, ..., X_n, i.e., where at least one variable X_j is missing.
Because P is symmetric, the lacunary part is determined by its terms containing only the variables X_1, ..., X_{n-1}, i.e., which do not contain X_n. More precisely: if A and B are two homogeneous symmetric polynomials in X_1, ..., X_n having the same degree, and if the coefficient of A before each monomial which contains only the variables X_1, ..., X_{n-1} equals the corresponding coefficient of B, then A and B have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables X_1, ..., X_{n-1}.)
But the terms of P which contain only the variables X_1, ..., X_{n-1} are precisely the terms that survive the operation of setting X_n to 0, so their sum equals P(X_1, ..., X_{n-1}, 0), which is a symmetric polynomial in the variables X_1, ..., X_{n-1} that we shall denote by \tilde{P}(X_1, ..., X_{n-1}). By the inductive hypothesis, this polynomial can be written as

\tilde{P}(X_1, ..., X_{n-1}) = \tilde{Q}(\sigma_{1,n-1}, ..., \sigma_{n-1,n-1})

for some \tilde{Q}. Here the doubly indexed \sigma_{j,n-1} denote the elementary symmetric polynomials in n − 1 variables.
Consider now the polynomial

R(X_1, ..., X_n) := \tilde{Q}(\sigma_{1,n}, ..., \sigma_{n-1,n}).

Then R is a symmetric polynomial in X_1, ..., X_n, of the same degree as P_{lacunary}, which satisfies

R(X_1, ..., X_{n-1}, 0) = \tilde{Q}(\sigma_{1,n-1}, ..., \sigma_{n-1,n-1}) = P(X_1, ..., X_{n-1}, 0)

(the first equality holds because setting X_n to 0 in \sigma_{j,n} gives \sigma_{j,n-1}, for all j < n). In other words, the coefficient of R before each monomial which contains only the variables X_1, ..., X_{n-1} equals the corresponding coefficient of P. As we know, this shows that the lacunary part of R coincides with that of the original polynomial P. Therefore the difference P − R has no lacunary part, and is therefore divisible by the product X_1 \cdots X_n of all variables, which equals the elementary symmetric polynomial \sigma_{n,n}. Then writing P − R = \sigma_{n,n} \cdot Q, the quotient Q is a homogeneous symmetric polynomial of degree less than d (in fact degree at most d − n) which by the inductive hypothesis can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for P − R and R one finds a polynomial representation for P.
The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials e_1, ..., e_n are algebraically independent over the ring A.) The fact that the polynomial representation is unique implies that A[Y_1, ..., Y_n] is isomorphic to A[X_1, ..., X_n]^{S_n}.
Alternative proof
The following proof is also inductive, but does not involve other polynomials than those symmetric in X_1, ..., X_n, and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables X_i lexicographically, where the individual variables are ordered X_1 > X_2 > \cdots > X_n; in other words, the dominant term of a polynomial is one with the highest occurring power of X_1, and among those the one with the highest power of X_2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) as follows by partitions of d. Order the individual elementary symmetric polynomials e_i(X_1, ..., X_n) in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition \lambda of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by e_{\lambda^t}(X_1, ..., X_n) (the t is present only because traditionally this product is associated to the transpose partition of \lambda). The essential ingredient of the proof is the following simple property, which uses multi-index notation X^\alpha = \prod_i X_i^{\alpha_i} for monomials in the variables X_i.
Lemma. The leading term of e_{\lambda^t}(X_1, ..., X_n) is X^\lambda.
Proof. The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor e_i(X_1, ..., X_n) is clearly X_1 X_2 \cdots X_i. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers of the variables; then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X^\lambda.
Now one proves by induction on the leading monomial in lexicographic order that any nonzero homogeneous symmetric polynomial P of degree d can be written as a polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X^\lambda with \lambda a partition of d. Let the coefficient of this term be c; then P − c e_{\lambda^t}(X_1, ..., X_n) is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back c e_{\lambda^t}(X_1, ..., X_n) to it, one obtains the sought polynomial expression for P. This reduction can be carried out mechanically, as in the sketch below.
The fact that this expression is unique, or equivalently that all the products (monomials) e_{\lambda^t}(X_1, ..., X_n) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the e_{\lambda^t}(X_1, ..., X_n) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables X_i) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.
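In practice the reduction can be delegated to a computer algebra system. For instance, SymPy ships a symmetrize helper that performs exactly this rewriting; the call and return format shown below are assumptions that may vary between SymPy versions.

```python
from sympy import symbols
from sympy.polys.polyfuncs import symmetrize

x, y, z = symbols("x y z")
# Express the power sum x^3 + y^3 + z^3 in the elementary symmetric polynomials:
expr, remainder, defs = symmetrize(x**3 + y**3 + z**3, x, y, z, formal=True)
print(expr)       # s1**3 - 3*s1*s2 + 3*s3
print(remainder)  # 0, since the input is already symmetric
print(defs)       # [(s1, x + y + z), (s2, x*y + x*z + y*z), (s3, x*y*z)]
```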
See also
Symmetric polynomial
Complete homogeneous symmetric polynomial
Schur polynomial
Newton's identities
Newton's inequalities
Maclaurin's inequality
MacMahon Master theorem
Symmetric function
Representation theory
References
External links
Homogeneous polynomials
Symmetric functions
Articles containing proofs | Elementary symmetric polynomial | Physics,Mathematics | 1,950 |
40,634,404 | https://en.wikipedia.org/wiki/Pingbeinine | Pingbeinine is a steroidal alkaloid isolated from Fritillaria.
External links
Two new steroidal alkaloids from Fritillaria ussuriensis
Steroidal alkaloids | Pingbeinine | Chemistry | 44 |
73,824,591 | https://en.wikipedia.org/wiki/Attachment%20Play | Attachment Play is a term created by developmental psychologist Aletha Solter and the title of one of her books. It is one aspect of her Aware Parenting approach. The term refers to nine specific kinds of parent/child play that can strengthen attachment, solve behavior problems, and help children recover from traumatic experiences. These forms of play incorporate many traditional play therapy techniques as well as some newer ones.
Research basis
The forms of play are based on attachment theory, and their effectiveness is supported by research in child development, neurobiology, and psychotherapy. For example, nondirective child-centered play has been studied for decades and has been shown to help children become less aggressive. It can also help to reduce learning difficulties while increasing social competence. Symbolic play with specific props or themes is based on exposure therapy techniques and can help children overcome traumatic experiences.
Contingency play is an important activity in helping traumatized children feel empowered, and the therapeutic value of separation games such as peek-a-boo has been recognized for decades. Playful activities with body contact can strengthen parent/child attachment and meet children's need for touch, which reduces stress while stimulating growth and healing. Cooperative games and activities (with or without touch) are especially effective in fostering cooperative behavior in children.
Laughter is an important component of several of these forms of play. In addition to strengthening parent/child attachment, laughter can help reduce anxiety and strengthen the immune system. Nonsense play (humor based on exaggeration, mistakes, or general silliness) has been shown to decrease a child's anxiety during medical interventions. Power-reversal play (such as a pillow fight in which the adult lets the child “win”) also involves laughter and can help to strengthen attachment while reducing anger and aggressive behavior.
A controlled pilot study was conducted in Australia to evaluate the effectiveness of three kinds of Attachment Play in a brief parent education program. The researchers found that the program increased parents’ feelings of self-efficacy. Another pilot study was done in Ireland to teach Attachment Play to social workers, who then trained parents to implement the approach with their children. The training helped parents engage playfully with children, strengthen attachment, enhance cooperation, reduce behavior problems, and avoid the use of punishment.
References
External links
Attachment theory
Play (activity)
Child development | Attachment Play | Biology | 465 |
41,445,100 | https://en.wikipedia.org/wiki/White-box%20cryptography | In cryptography, the white-box model refers to an extreme attack scenario, in which an adversary has full unrestricted access to a cryptographic implementation, most commonly of a block cipher such as the Advanced Encryption Standard (AES). A variety of security goals may be posed (see the section below), the most fundamental being "unbreakability", requiring that any (bounded) attacker should not be able to extract the secret key hardcoded in the implementation, while at the same time the implementation must be fully functional. In contrast, the black-box model only provides an oracle access to the analyzed cryptographic primitive (in the form of encryption and/or decryption queries). There is also a model in-between, the so-called gray-box model, which corresponds to additional information leakage from the implementation, more commonly referred to as side-channel leakage.
White-box cryptography is the practice and study of techniques for designing and attacking white-box implementations. It has many applications, including digital rights management (DRM), pay television, protection of cryptographic keys in the presence of malware, mobile payments and cryptocurrency wallets. Examples of DRM systems employing white-box implementations include CSS and Widevine.
White-box cryptography is closely related to the more general notions of obfuscation, in particular, to Black-box obfuscation, proven to be impossible, and to Indistinguishability obfuscation, constructed recently under well-founded assumptions but so far being infeasible to implement in practice.
As of January 2023, there are no publicly known unbroken white-box designs of standard symmetric encryption schemes. On the other hand, there exist many unbroken white-box implementations of dedicated block ciphers designed specifically to achieve incompressibility (see Security goals below).
Security goals
Depending on the application, different security goals may be required from a white-box implementation. Specifically, for symmetric-key algorithms the following are distinguished:
Unbreakability is the most fundamental goal requiring that a bounded attacker should not be able to recover the secret key embedded in the white-box implementation. Without this requirement, all other security goals are unreachable since a successful attacker can simply use a reference implementation of the encryption scheme together with the extracted key.
One-wayness requires that a white-box implementation of an encryption scheme can not be used by a bounded attacker to decrypt ciphertexts. This requirement essentially turns a symmetric encryption scheme into a public-key encryption scheme, where the white-box implementation plays the role of the public key associated to the embedded secret key. This idea was proposed already in the famous work of Diffie and Hellman in 1976 as a potential public-key encryption candidate.
Code lifting security is an informal requirement on the context, in which the white-box program is being executed. It demands that an attacker can not extract a functional copy of the program. This goal is particularly relevant in the DRM setting. Code obfuscation techniques are often used to achieve this goal.
A commonly used technique is to compose the white-box implementation with so-called external encodings. These are lightweight secret encodings that modify the function computed by the white-box part of an application. It is required that their effect is canceled in other parts of the application in an obscure way, using code obfuscation techniques. Alternatively, the canceling counterparts can be applied on a remote server.
Incompressibility requires that an attacker can not significantly compress a given white-box implementation. This can be seen as a way to achieve code lifting security (see above), since exfiltrating a large program from a constrained device (for example, an embedded or a mobile device) can be time-consuming and may be easy to detect by a firewall.
Examples of incompressible designs include SPACE cipher, SPNbox, WhiteKey and WhiteBlock. These ciphers use large lookup tables that can be pseudorandomly generated from a secret master key. Although this makes the recovery of the master key hard, the lookup tables themselves play the role of an equivalent secret key. Thus, unbreakability is achieved only partially.
Traceability (Traitor tracing) requires that each distributed white-box implementation contains a digital watermark allowing identification of the guilty user in case the white-box program is being leaked and distributed publicly.
History
The white-box model, with initial attempts at white-box DES and AES implementations, was first proposed by Chow, Eisen, Johnson and van Oorschot in 2003. The designs were based on representing the cipher as a network of lookup tables and obfuscating the tables by composing them with small (4- or 8-bit) random encodings. Such protection satisfied the property that each single obfuscated table individually does not contain any information about the secret key. Therefore, a potential attacker has to combine several tables in their analysis.
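The encoding idea can be illustrated with a toy sketch (a minimal illustration, not the actual Chow et al. construction; the 4-bit keyed table and the random encodings below are invented for the example). A table T is published as the composition g ∘ T ∘ f⁻¹, and consecutive encoded tables are built so that neighboring encodings cancel:

```python
# Toy illustration of white-box table encoding (all values invented).
import random

rng = random.Random(0)

def random_encoding(n=16):
    """Return a random bijection on {0,...,n-1} and its inverse."""
    perm = list(range(n))
    rng.shuffle(perm)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

T = [(x * 7 + 3) % 16 for x in range(16)]  # toy "keyed" lookup table

f, f_inv = random_encoding()   # secret input encoding
g, g_inv = random_encoding()   # secret output encoding

# Published table: g(T(f_inv(x))).  On its own it reveals T only up to
# the unknown encodings f and g.
T_enc = [g[T[f_inv[x]]] for x in range(16)]

# The next table in the network would absorb g_inv, restoring the
# original functionality end to end:
assert all(g_inv[T_enc[f[x]]] == T[x] for x in range(16))
```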
The first two schemes were broken in 2004 by Billet, Gilbert, and Ech-Chatbi using structural cryptanalysis. The attack was subsequently called "the BGE attack".
The numerous subsequent design attempts (2005-2022) were quickly broken by practical dedicated attacks.
In 2016, Bos, Hubain, Michiels and Teuwen showed that an adaptation of standard side-channel power analysis attacks can be used to efficiently and fully automatically break most existing white-box designs. This result created a new research direction about generic attacks (correlation-based, algebraic, fault injection) and protections against them.
Competitions
Four editions of the WhibOx contest were held, in 2017, 2019, 2021 and 2024. These competitions invited white-box designers both from academia and industry to submit their implementations in the form of (possibly obfuscated) C code. At the same time, everyone could attempt to attack these programs and recover the embedded secret key. Each of these competitions lasted for about 4-5 months.
WhibOx 2017 / CHES 2017 Capture the Flag Challenge targeted the standard AES block cipher. Among 94 submitted implementations, all were broken during the competition, with the strongest one staying unbroken for 28 days.
WhibOx 2019 / CHES 2019 Capture the Flag Challenge again targeted the AES block cipher. Among 27 submitted implementations, 3 programs stayed unbroken throughout the competition, but were broken after 51 days since the publication.
WhibOx 2021 / CHES 2021 Capture the Flag Challenge changed the target to ECDSA, a digital signature scheme based on elliptic curves. Among 97 submitted implementations, all were broken within at most 2 days.
WhibOx 2024 / CHES 2024 Capture the Flag Challenge again targeted ECDSA. Among 47 submitted implementations, all were broken during the competition, with the strongest one staying unbroken for almost 5 days.
See also
Black-box obfuscation, a stronger form of obfuscation proven to be impossible
Indistinguishability obfuscation, a more formal theoretic notion of obfuscation
Obfuscation (software), non-cryptographic code obfuscation
Digital rights management, a widely used application of white-box cryptography
External links
WhibOx Contests
References
Cryptography | White-box cryptography | Mathematics,Engineering | 1,474 |
68,222,807 | https://en.wikipedia.org/wiki/HD%2060150 | HD 60150 (HR 2888) is a solitary star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 6.39, placing it near the limit for naked eye visibility. Parallax measurements place the star at a distance of 738 light years and it is currently receding with a heliocentric radial velocity of .
HD 60150 has a classification of K5 III, indicating that it is a red giant. It has 1.2 times the mass of the Sun but has expanded to 41 times the Sun's girth. It radiates 329 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,007 K, giving it a reddish orange hue. HD 60150 is metal enriched, with an iron abundance 38% greater than the Sun's. It spins leisurely, with a projected rotational velocity of about .
References
Volantis, 12
060150
2888
Durchmusterung objects
K-type giants
Volans
036346 | HD 60150 | Astronomy | 212 |
1,712,211 | https://en.wikipedia.org/wiki/Fragrance%20oil | Fragrance oils, also known as aroma oils, aromatic oils, and flavor oils, are blended synthetic aroma compounds or natural essential oils that are diluted with a carrier like propylene glycol, vegetable oil, or mineral oil.
To allergic or otherwise sensitive people, synthetic fragrance oils are often less desirable than plant-derived essential oils as components of perfume. Essential oils, widely used in society, emit numerous volatile organic compounds (VOCs). Some of these VOCs are considered potentially hazardous under federal regulations. Most high-quality essential oils are extracted from natural sources such as plants, herbs, and flowers. However, synthetic versions of the compounds found in natural essential oils are usually very comparable. Furthermore, natural oils are in many cases significantly more expensive than their synthetic equivalents.
Aromatic oils are used in perfumery, candles, cosmetics, and the flavoring of food.
Some include (out of a very diverse range):
Ylang ylang
Vanilla
Sandalwood
Cedar wood
Mandarin orange
Cinnamon
Lemongrass
Rosehip
Peppermint
Frankincense
Bergamot
Patchouli
Blackcurrant
Candle fragrance oils
Scented candles are produced when fragrance oils are combined with hot wax, such as paraffin, forming a homogeneous solution. The cooled wax retains the fragrance oils like a sponge. Lighting the candle wick raises the wax temperature, gradually releasing the aroma through evaporation of the fragrance oil.
See also
Perfume
Essential oil
Aroma compound
Fragrance allergy
References
Essential oils
Perfume ingredients | Fragrance oil | Chemistry | 304 |
75,399,196 | https://en.wikipedia.org/wiki/Therese%20Flapper | Therese Flapper is an Australian environmental engineer, and was elected to be a fellow of the Australian Academy of Technological Sciences and Engineering, in 2023. She has worked with infrastructure including water, roads, waste, energy and buildings, and was past president of Engineers Australia, Canberra, and the local Landcare group.
Education
Flapper was one of two girls from her class of 120, in the Western Suburbs of Sydney, to go to university. She lived in housing commission accommodation, and needed to travel more than two hours each way to get to university.
Her first year of university involved living in the backyard of her grandmother's housing commission house. “Then my grandmother kicked my uncle out of the house so I could have his room. I was the ant’s pants to my grandmother, the bee’s knees. She would brag about me to everyone. She was incredibly supportive of me and my determination to go to uni...To me, education was a way to bust you out. But ‘busting out’ meant breaking down a lot of walls."

Flapper was awarded a Bachelor of Science (honours) from the University of New South Wales (UNSW) in 1991, and a Masters of Engineering Sciences from UNSW in 1996. She graduated with a PhD in Engineering Sciences for her thesis "Development of a fungal bioreactor to treat industrial wastewater" in 2001, also from UNSW.
Career
Flapper was a Business Group Manager at GHD from 2011 to 2015. She was deputy president of the Australian arm of the International Water Association (IWA) and is President of the New South Wales branch of the Australian Water Association. She was also president of Engineers Australia, Canberra.
Flapper has worked with the local community as a volunteer bush firefighter with the NSW Rural Fire Service, and she was also the President of the local Landcare group.
Flapper has worked on the independent audit of the Molonglo Valley Strategic Assessment, "Protecting Molonglo", and on the Strategic Assessment of the region, covering species and ecological communities protected under the Environment Protection and Biodiversity Conservation Act 1999, including the Pink-tailed Worm-lizard, the Superb and Swift parrots, and Grasslands and Box-Gum woodland.
Publications
McGill, G, Horner, R, Flapper, T. (2014) Reclaimed water for the Shoalhaven region: Expansion of the existing REMS to double the volume of available reclaimed water in the northern Shoalhaven City Council local government area. Water: Journal of the Australian Water Association: 1 September 2014.
Flapper, T. Benjamin, R. Mosse, P (2003) Online, real time measurement of COD and SS at Lower Molonglo WQCC.
Flapper, T. Campbell, B. O’Connor, N. (2010) SWF Project No 512-001 Quantification of pathogen removal in Australian activated sludge plants.
Awards
2023 – ATSE – Fellow.
2022 – NSW RFS Commissioners award – for service to NSW during the 2019/2020 fire season.
2020 – National Emergency Services Medal.
2021 – Crystal Vision Award.
2007 – Women of the Year (Nancy Mills) – Australian Water Association
2000 – National Engineers Excellence Award – Engineers Australia.
References
External links
ATSE Fellows
Living people
Year of birth missing (living people)
Environmental engineers
Women engineers
Fellows of the Australian Academy of Technological Sciences and Engineering
University of New South Wales alumni | Therese Flapper | Chemistry,Engineering | 692 |
72,545,622 | https://en.wikipedia.org/wiki/Fake%20building | A fake building (also known as a fake house, false-front house, fake façade, or transformer house in specific situations) is a government building, structure, or public utility housing that uses urban and/or suburban camouflage, specifically with the intention to disguise equipment and city infrastructure facilities that some may consider aesthetically unpleasing in non-industrial neighborhoods.
History
Post-Industrial Revolution
After the Industrial Revolution, cities in industrialized countries were required to construct and maintain infrastructure facilities to support city growth. Originally, such infrastructure facilities were not designed or intended to be concealed.
For example, the pumping stations that housed large steam engines in the 19th and early 20th centuries were intentionally built to publicly communicate a message of safety and reliability in addition to expressing functionality. Additionally, building designs inherited from beam engine buildings required strong rigid walls and raised floors to support engines, large-arched and multi-story windows to allow natural light in, and roof ventilation via structures like decorative dormers. These functional features became known as "waterworks style." More elaborate designs were also used to communicate a sacred atmosphere and highlight the critical tasks performed at facilities like sewage pumping stations. An example of simple waterworks architectural style is the Springhead Pumping Station, while a baroque eclecticist example is the Abbey Mills Pumping Station.
Other types of infrastructure facilities, such as gas supply, electrical supply, and communications buildings, developed their own styles as well. Examples include the Radialsystem (sewage pumping station in Berlin), the Kempton Park Steam Engines house, the Chestnut Hill Waterworks (Massachusetts), the Spotswood Pumping Station (Melbourne), the Palacio de Aguas Corrientes (Buenos Aires), the Sewage Plant in Bubeneč (Prague), and the R. C. Harris Water Treatment Plant (Toronto). The International Committee for the Conservation of the Industrial Heritage considers the aforementioned buildings to be heritage sites of the global water industry.
Twentieth century
One of the earliest known examples of a fake building was 58 Joralemon Street in New York. The property was acquired by the Interborough Rapid Transit Company in 1907, after which it was internally transformed into infrastructure for ventilating underground transportation. As a historic property, the local community wanted the façade to be historically appropriate and compatible with the neighborhood.
A few years later, in early 1911, substations were introduced to Toronto. Rather than being unenclosed, electronic converters were housed within fake buildings meant to imitate civic buildings such as museums and city halls.
After the end of World War II, suburban developments began to flourish all across the world. As such, electric demand grew exponentially, leading architects to figure out where to place new substations. Harold Alphonso Bodwell, a utility employee appointed as a lead designer in Toronto, introduced the idea of disemboweling unused housing to set up substations within them. Eventually, Toronto Hydro built house-shaped substations with six different base models ranging from ranch-style houses to Georgian mansions. Throughout the 20th century, the company went on to build hundreds of fake buildings in a litany of established styles.
In 1963, a property owner in Prairie Village, Kansas gave $300,000 to Johnson County Wastewater, a wastewater management authority, to build a fake building for a local sewage pumping station which would blend into the neighborhood. Very few in the neighborhood knew about it, as sewage smell was hardly reported in the area. Later, the same authority would go on to build another fake building for a pumping station.
Known locations
Municipalities across the globe have used fake buildings for numerous purposes. Pump stations and subway ventilation shafts have often been subject to concealment by fake buildings. Some also conceal the locations of secret facilities, such as chalets in Switzerland which hide military installations. Such façades can be discovered in New York City, Paris, and London. Specifically in Los Angeles, many fake buildings conceal oil rigs.
The following are further examples of fake buildings.
For ventilation
145 rue La Fayette, 10th arrondissement of Paris
Buildings along Holland Tunnel in New York City
58 Joralemon Street in Brooklyn Heights of Brooklyn
23/24 Leinster Gardens in London
For power conversion
The Strecker Memorial Laboratory on Roosevelt Island in New York City
51 W Ontario Street in Chicago
640 Millwood Road in Toronto
29 Nelson Street in Toronto
For water and wastewater management
WNY1 Solids and Floatables Screening Facility of North Hudson Sewerage Authority in West New York, New Jersey
H1 Screening and Wet Weather Pump Station at 99 Observer Highway in Hoboken, New Jersey
Belindeer Pump Station of Johnson County Wastewater at 5700 Belinder Avenue, Fairway, Kansas
Nall Avenue Holding Station of Johnson County Wastewater at 7490 Nall Avenue, Prairie Village, Kansas
Design
Most fake buildings are intended to resemble the design of surrounding buildings, with some exceptions. Some may also fail to blend in due to design flaws caused by the contained equipment, such as blacked-out windows; the lack of a roof, doorway, window panes, or some enclosed walls; gated extrusions and/or heavy fencing; warning signs; industrial doors and windows; unusually pristine landscaping; security cameras; and/or some components printed on.
In municipalities that require public consultations for the construction of public facilities, the general public can influence the designs of fake buildings. When the City of Hoboken presented an initial design of a fake building to house a new flood pump station, some criticized it for looking more like a colonial townhouse and thus dishonoring the industrial heritage of the city. Ultimately, the final design was changed completely to a modern building similar in design to a nearby transportation building.
Some fake buildings have designs that imitate other structures to match their surrounding areas. For example, an electrical substation in an urban neighborhood of Washington, D.C. was disguised as an old train station, while another substation in a mixed commercial and residential area of D.C. imitated an office building. In a more rural area of Gaithersburg, Maryland, a substation was designed to look like a large barn with a metal silo beside it.
References
External links
20th-century architecture
Architectural terminology
Urban planning
Infrastructure | Fake building | Engineering | 1,262 |
1,968,626 | https://en.wikipedia.org/wiki/Tranexamic%20acid | Tranexamic acid is a medication used to treat or prevent excessive blood loss from major trauma, postpartum bleeding, surgery, tooth removal, nosebleeds, and heavy menstruation. It is also used for hereditary angioedema. It is taken either by mouth, injection into a vein, or by intramuscular injection.
Tranexamic acid is a synthetic analog of the amino acid lysine. It serves as an antifibrinolytic by reversibly binding four to five lysine receptor sites on plasminogen. This decreases the conversion of plasminogen to plasmin, preventing fibrin degradation and preserving the framework of fibrin's matrix structure. Tranexamic acid has roughly eight times the antifibrinolytic activity of an older analogue, ε-aminocaproic acid. Tranexamic acid also directly inhibits the activity of plasmin with weak potency (IC50 = 87 mM), and it can block the active-site of urokinase plasminogen activator (uPA) with high specificity (Ki = 2 mM), one of the highest among all the serine proteases.
Side effects are rare; they include changes in color vision, seizures, blood clots, and allergic reactions. Tranexamic acid appears to be safe for use during pregnancy and breastfeeding. Tranexamic acid is an antifibrinolytic medication.
Tranexamic acid was first made in 1962 by Japanese researchers Shosuke and Utako Okamoto. It is on the World Health Organization's List of Essential Medicines. Tranexamic acid is available as a generic drug.
Uses
Medical uses
Tranexamic acid is frequently used following major trauma. Tranexamic acid is used to prevent and treat blood loss in a variety of situations, such as dental procedures, heavy menstrual bleeding, and surgeries with high risk of blood loss.
Trauma
Tranexamic acid has been found to decrease the risk of death due to any cause in people who have significant bleeding due to trauma. It is most effective if taken within the first three hours following major trauma. It also decreases the risk of death if given within the first three hours of brain injury.
Menstrual bleeding
Tranexamic acid is sometimes used to treat heavy menstrual bleeding. When taken by mouth it both safely and effectively treats regularly occurring heavy menstrual bleeding and improves quality of life. Another study demonstrated that the dose does not need to be adjusted in females who are between ages 12 and 16. In a 10-year study, tranexamic acid and other oral medicines (mefenamic acid) were found to be as effective as the levonorgestrel intrauterine coil; the same proportion of women had not had surgery for heavy bleeding and had similar improvements in their quality of life.
Childbirth
Tranexamic acid is sometimes used (often in conjunction with oxytocin) to reduce bleeding after childbirth. Death due to postpartum bleeding is reduced in women receiving tranexamic acid.
Surgery
Tranexamic acid is sometimes used in orthopedic surgery to reduce blood loss, to the extent of reducing or altogether abolishing the need for perioperative blood transfusion. It is of proven value in clearing the field of surgery and reducing blood loss when given before or after surgery. Drainage and the number of transfusions are reduced.
In surgical corrections of craniosynostosis in children it reduces the need for blood transfusions.
In spinal surgery (e.g., scoliosis correction with posterior spinal fusion using instrumentation), it is used to prevent excessive blood loss.
In cardiac surgery, both with and without cardiopulmonary bypass (e.g., coronary artery bypass surgery), it is used to prevent excessive blood loss.
Dentistry
In the United States, tranexamic acid is FDA-approved for short-term use in people with severe bleeding disorders who are about to have dental surgery. Tranexamic acid is used for a short period before and after the surgery to prevent major blood loss and decrease the need for blood transfusions.
Tranexamic acid is used in dentistry in the form of a 5% mouth rinse after extractions or surgery in patients with prolonged bleeding time; e.g., from acquired or inherited disorders.
In China, tranexamic acid is allowed in over-the-counter toothpaste, with six products using the drug. There are no limits on dosage, nor requirements for labeling the concentration. TXA at 0.05% in toothpaste is allowed over the counter in Hong Kong. Toothpaste with less than 5% TXA sold over the counter was first patented and marketed by Lion Corporation in Japan, where it is still sold. The presence of unauthorized TXA led to the Canadian recall of a Yunnan Baiyao toothpaste in 2019.
Hematology
There is not enough evidence to support the routine use of tranexamic acid to prevent bleeding in people with blood cancers. However, several trials are currently assessing this use of tranexamic acid. For people with inherited bleeding disorders (e.g. von Willebrand's disease), tranexamic acid is often given. It has also been recommended for people with acquired bleeding disorders (e.g., directly acting oral anticoagulants (DOACs)) to treat serious bleeding.
Nosebleeds
The use of tranexamic acid, applied directly to the area that is bleeding or taken by mouth, appears useful to treat nose bleeding compared to packing the nose with cotton pledgets alone. It decreases the risk of rebleeding within 10 days.
Cosmetic uses
Tranexamic acid can be used in skincare products as a cosmetic active to reduce the appearance of inflammation and hyperpigmentation. Tranexamic acid is a zwitterionic amino acid and has a low permeability coefficient in the stratum corneum. It can be combined with penetration enhancers and microneedling to overcome this limitation. Cosmetic uses may also employ lipophilic derivatives of tranexamic acid (ester prodrugs like cetyl tranexamate mesylate) that are not zwitterionic and thus have improved skin permeability.
Contraindications
Allergic to tranexamic acid
History of seizures
History of venous or arterial thromboembolism or active thromboembolic disease
Severe kidney impairment due to accumulation of the medication, dose adjustment is required in mild or moderate kidney impairment
Adverse effects
Side effects are rare. Reported adverse events include seizures, changes in color vision, blood clots, and allergic reactions such as anaphylaxis. Whether the risk of venous thromboembolism (blood clots) is increased is a matter of debate. The risk is mentioned in the product literature, and blood clots have been reported in post-marketing experience. Despite this, and despite the inhibitory effect of tranexamic acid on blood clot breakdown, large studies of the use of tranexamic acid have not shown an increase in the risk of venous or arterial thrombosis, even in people who had previously experienced thrombosis under other circumstances.
Special populations
For pregnancy, no harm has been found in animal studies.
Small amounts appear in breast milk if taken during lactation.
Society and culture
Tranexamic acid was first synthesized in 1962 by Japanese researchers Shosuke and Utako Okamoto. It is on the World Health Organization's List of Essential Medicines.
Brand names
Tranexamic acid is marketed in the US and Australia in tablet form as Lysteda and in Australia, Sweden and Jordan it is marketed in an IV form and tablet form as Cyklokapron, in the UK and Sweden as Cyclo-F. In the UK it is also marketed as Femstrual, in Asia as Transcam, in Bangladesh as Intrax & Tracid, in India as Pause, in Pakistan as Transamin, in Indonesia as Kalnex, in South America as Espercil, in Japan as Nicolda, in France, Poland, Belgium, and Romania as Exacyl and in Egypt as Kapron. In the Philippines, its capsule form is marketed as Hemostan and in Israel as Hexakapron.
Legal status
The US Food and Drug Administration (FDA) approved tranexamic acid oral tablets (brand name Lysteda) for the treatment of heavy menstrual bleeding in November 2009.
In March 2011, the status of tranexamic acid for the treatment of heavy menstrual bleeding was changed in the UK, from POM (Prescription only Medicines) to P (Pharmacy Medicines) and became available over the counter in UK pharmacies under the brand names of Cyklo-F and Femstrual.
Research
Tranexamic acid might alleviate neuroinflammation in some experimental settings.
Tranexamic acid can be used in case of postpartum hemorrhage; it can decrease the risk of death due to bleeding by one third according to the WHO.
Tentative evidence supports the use of tranexamic acid in hemoptysis.
In hereditary angioedema
In hereditary hemorrhagic telangiectasia: tranexamic acid has been shown to reduce the frequency of epistaxis in patients with severe and frequent nosebleed episodes from hereditary hemorrhagic telangiectasia.
In melasma: tranexamic acid is sometimes used in skin whitening as a topical agent, injected into a lesion, or taken by mouth, both alone and as an adjunct to laser therapy; as of 2017 its safety seemed reasonable but its efficacy for this purpose was uncertain because there had been no large scale randomized controlled studies nor long term follow-up studies. It is allowed as a quasi-drug for skin whitening in Japan.
In hyphema: tranexamic acid is effective in reducing the risk of secondary hemorrhage outcomes in people with traumatic hyphema.
In liver resection: tranexamic acid did not reduce bleeding or transfusions but did increase complications.
References
Amino acids
Antifibrinolytics
Cyclohexanecarboxylic acids
Non-proteinogenic amino acids
Wikipedia medicine articles ready to translate
Transfusion medicine
World Health Organization essential medicines
Skin whitening | Tranexamic acid | Chemistry | 2,182 |
2,148,918 | https://en.wikipedia.org/wiki/Anionic%20addition%20polymerization | In polymer chemistry, anionic addition polymerization is a form of chain-growth polymerization or addition polymerization that involves the polymerization of monomers initiated with anions. The type of reaction has many manifestations, but traditionally vinyl monomers are used. Often anionic polymerization involves living polymerizations, which allows control of structure and composition.
History
As early as 1936, Karl Ziegler proposed that anionic polymerization of styrene and butadiene by consecutive addition of monomer to an alkyllithium initiator occurred without chain transfer or termination. Twenty years later, living polymerization was demonstrated by Michael Szwarc and coworkers. In one of the breakthrough events in the field of polymer science, Szwarc elucidated that electron transfer occurred from the radical anion of sodium naphthalene to styrene. This results in the formation of an organosodium species, which rapidly adds styrene to form a "two-ended living polymer." In an important aspect of his work, Szwarc employed the aprotic solvent tetrahydrofuran. Being a physical chemist, he elucidated the kinetics and thermodynamics of the process in considerable detail. At the same time, he explored the structure-property relationships of the various ion pairs and radical ions involved. This work provided the foundations for the synthesis of polymers with improved control over molecular weight, molecular weight distribution, and architecture.
The use of alkali metals to initiate polymerization of 1,3-dienes led to the discovery by Stavely and co-workers at Firestone Tire and Rubber company of cis-1,4-polyisoprene. This sparked the development of commercial anionic polymerization processes that utilize alkyllithium initiators.
Roderic Quirk won the 2019 Charles Goodyear Medal in recognition of his contributions to anionic polymerization technology. He was introduced to the subject while working in a Phillips Petroleum lab with Henry Hsieh.
Monomer characteristics
Two broad classes of monomers are susceptible to anionic polymerization.
Vinyl monomers have the formula CH2=CHR; the most important are styrene (R = C6H5), butadiene (R = CH=CH2), and isoprene (R = C(Me)=CH2). A second major class of monomers is acrylate esters and related acrylic monomers, such as acrylonitrile, methacrylate, cyanoacrylate, and acrolein. Other vinyl monomers include vinylpyridine, vinyl sulfone, vinyl sulfoxide, and vinyl silanes.
Cyclic monomers
Many cyclic compounds are susceptible to ring-opening polymerization, including epoxides, cyclic trisiloxanes, some lactones, lactides, cyclic carbonates, and amino acid N-carboxyanhydrides.
In order for polymerization to occur with vinyl monomers, the substituents on the double bond must be able to stabilize a negative charge. Stabilization occurs through delocalization of the negative charge. Because of the nature of the carbanion propagating center, substituents that react with bases or nucleophiles either must not be present or be protected.
Initiation
Initiators are selected based on the reactivity of the monomers. Highly electrophilic monomers such as cyanoacrylates require only weakly nucleophilic initiators, such as amines, phosphines, or even halides. Less reactive monomers such as styrene require powerful nucleophiles such as butyl lithium. Reactions of intermediate strength are used for monomers of intermediate reactivity such as vinylpyridine.
The solvents used in anionic addition polymerizations are determined by the reactivity of both the initiator and nature of the propagating chain end. Anionic species with low reactivity, such as heterocyclic monomers, can use a wide range of solvents.
Initiation by electron transfer
Initiation of styrene polymerization with sodium naphthalene proceeds by electron transfer from the naphthalene radical anion to the monomer. The resulting radical dimerizes to give a disodium compound, which then functions as the initiator. Polar solvents are necessary for this type of initiation both for stability of the anion-radical and to solvate the cation species formed. The anion-radical can then transfer an electron to the monomer.
Initiation can also involve the transfer of an electron from the alkali metal to the monomer to form an anion-radical. Initiation occurs on the surface of the metal, with the reversible transfer of an electron to the adsorbed monomer.
Initiation by strong anions
Nucleophilic initiators include covalent or ionic metal amides, alkoxides, hydroxides, cyanides, phosphines, amines and organometallic compounds (alkyllithium compounds and Grignard reagents). The initiation process involves the addition of a neutral (B:) or negative (:B−) nucleophile to the monomer.
The most commercially useful of these initiators has been the alkyllithium initiators. They are primarily used for the polymerization of styrenes and dienes.
Monomers activated by strongly electronegative groups may be initiated even by weak anionic or neutral nucleophiles (i.e. amines, phosphines). The most prominent example is the curing of cyanoacrylate, which constitutes the basis for superglue. Here, only traces of basic impurities are sufficient to induce an anionic addition polymerization or zwitterionic addition polymerization, respectively.
Propagation
Propagation in anionic addition polymerization results in the complete consumption of monomer. This stage is often fast, even at low temperatures.
Living anionic polymerization
Living anionic polymerization is a living polymerization technique involving an anionic propagating species.
Living anionic polymerization was demonstrated by Szwarc and coworkers in 1956. Their initial work was based on the polymerization of styrene and dienes.
One of the remarkable features of living anionic polymerization is that the mechanism involves no formal termination step. In the absence of impurities, the carbanion would still be active and capable of adding another monomer. The chains will remain active indefinitely unless there is inadvertent or deliberate termination or chain transfer. This gave rise to two important consequences:
The number average molecular weight, Mn, of the polymer resulting from such a system could be calculated by the amount of consumed monomer and the initiator used for the polymerization, as the degree of polymerization would be the ratio of the moles of the monomer consumed to the moles of the initiator added.
Mn = Mo·[M]o/[I], where Mo = formula weight of the repeating unit, [M]o = initial concentration of the monomer, and [I] = concentration of the initiator.
All the chains are initiated at roughly the same time. The final result is that the polymer synthesis can be done in a much more controlled manner in terms of the molecular weight and molecular weight distribution (Poisson distribution).
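As a numerical illustration of the Mn relation above (a hypothetical styrene polymerization; the concentrations are chosen purely for the example):

```python
# Worked example of Mn = Mo * [M]o / [I] for a living polymerization.
# All numerical inputs below are assumed values for illustration.
Mo = 104.15   # g/mol, formula weight of the styrene repeating unit
M0 = 1.0      # mol/L, initial monomer concentration (assumed)
I  = 0.005    # mol/L, initiator concentration (assumed)

DP = M0 / I   # degree of polymerization at full monomer consumption
Mn = Mo * DP  # number average molecular weight

print(f"degree of polymerization: {DP:.0f}")  # 200
print(f"Mn = {Mn:.0f} g/mol")                 # about 20,830 g/mol
```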
The following experimental criteria have been proposed as a tool for identifying a system as living polymerization system.
Polymerization proceeds until the monomer is completely consumed, and resumes upon further monomer addition.
Constant number of active centers or propagating species.
Poisson distribution of molecular weight
Chain end functionalization can be carried out quantitatively.
However, in practice, even in the absence of terminating agents, the concentration of living anions will decrease with time due to a decay mechanism termed spontaneous termination.
Consequences of living polymerization
Block copolymers
Synthesis of block copolymers is one of the most important applications of living polymerization as it offers the best control over structure. The nucleophilicity of the resulting carbanion will govern the order of monomer addition, as the monomer forming the less nucleophilic propagating species may inhibit the addition of the more nucleophilic monomer onto the chain. An extension of the above concept is the formation of triblock copolymers where each step of such a sequence aims to prepare a block segment with predictable, known molecular weight and narrow molecular weight distribution without chain termination or transfer.
Sequential monomer addition is the dominant method, although this simple approach suffers from some limitations.
Moreover, this strategy enables synthesis of linear block copolymer structures that are not accessible via sequential monomer addition. For common A-b-B structures, sequential block copolymerization gives access to well-defined block copolymers only if the crossover reaction rate constant is significantly higher than the rate constant of the homopolymerization of the second monomer, i.e., kAB >> kBB.
End-group functionalization/termination
One of the remarkable features of living anionic polymerization is the absence of a formal termination step. In the absence of impurities, the carbanion would remain active, awaiting the addition of new monomer. Termination can occur through unintentional quenching by impurities, often present in trace amounts. Typical impurities include oxygen, carbon dioxide, and water. Intentional termination allows the introduction of tailored end groups.
Living anionic polymerization allows the incorporation of functional end-groups, usually added to quench polymerization. End-groups that have been introduced through functionalization with α-haloalkanes include hydroxide, -NH2, -OH, -SH, -CHO, -COCH3, -COOH, and epoxides.
An alternative approach for functionalizing end-groups is to begin polymerization with a functional anionic initiator. In this case, the functional groups are protected, since the end of the anionic polymer chain is a strong base. This method leads to polymers with controlled molecular weights and narrow molecular weight distributions.
Additional reading
Cowie, J.; Arrighi, V. Polymers: Chemistry and Physics of Modern Materials; CRC Press: Boca Raton, FL, 2008.
References
Polymerization reactions
| Anionic addition polymerization | Chemistry,Materials_science | 2,161 |
1,191,031 | https://en.wikipedia.org/wiki/Graviphoton | In theoretical physics and quantum physics, a graviphoton or gravivector is a hypothetical particle which emerges as an excitation of the metric tensor (i.e. gravitational field) in spacetime dimensions higher than four, as described in Kaluza–Klein theory.
However, its crucial physical properties are analogous to a (massive) photon: it induces a "vector force", sometimes dubbed a "fifth force". The electromagnetic potential Aμ emerges from an extra component of the metric tensor, gμ5, where the figure 5 labels an additional, fifth dimension.
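A minimal sketch of how this arises, using the standard Kaluza–Klein ansatz with the dilaton/radion factors suppressed for brevity (the normalization of Aμ is conventional):

```latex
% Standard Kaluza-Klein decomposition of a 5D metric (schematic):
% the off-diagonal components g_{\mu 5} supply the potential A_\mu
% of the graviphoton.
ds^2 = g_{\mu\nu}\,dx^{\mu} dx^{\nu}
     + \phi^{2}\left(dx^{5} + A_{\mu}\,dx^{\mu}\right)^{2},
\qquad A_{\mu} \propto g_{\mu 5}.
```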
In gravity theories with extended supersymmetry (extended supergravities), a graviphoton is normally a superpartner of the graviton that behaves like a photon, and is prone to couple with gravitational strength, as was appreciated in the late 1970s. Unlike the graviton, it may provide a repulsive (as well as an attractive) force, and thus, in some technical sense, a type of anti-gravity. Under special circumstances, in several natural models, often descending from five-dimensional theories mentioned, it may actually cancel the gravitational attraction in the static limit. Joël Scherk investigated semirealistic aspects of this phenomenon, stimulating searches for physical manifestations of this mechanism.
See also
Graviscalar (a.k.a. radion)
Supergravity
List of hypothetical particles
References
Supersymmetry
Bosons
Photons
Hypothetical elementary particles
Force carriers
Subatomic particles with spin 1 | Graviphoton | Physics | 306 |
22,284,234 | https://en.wikipedia.org/wiki/Arithmetical%20ring | In algebra, a commutative ring R is said to be arithmetical (or arithmetic) if any of the following equivalent conditions hold:
The localization of R at m is a uniserial ring for every maximal ideal m of R.
For all ideals a, b, and c of R, a ∩ (b + c) = (a ∩ b) + (a ∩ c).
For all ideals a, b, and c of R, a + (b ∩ c) = (a + b) ∩ (a + c).
The last two conditions both say that the lattice of all ideals of R is distributive.
An arithmetical domain is the same thing as a Prüfer domain.
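For the ring of integers Z (a Prüfer domain, hence arithmetical), sums and intersections of nonzero principal ideals are given by the gcd and lcm of the generators, so the distributivity conditions can be spot-checked numerically. A minimal sketch (the sample range is arbitrary):

```python
# Spot-check of ideal-lattice distributivity in Z, where for nonzero
# principal ideals (a) + (b) = (gcd(a, b)) and (a) ∩ (b) = (lcm(a, b)).
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

for a, b, c in product(range(1, 25), repeat=3):
    # a ∩ (b + c) = (a ∩ b) + (a ∩ c)
    assert lcm(a, gcd(b, c)) == gcd(lcm(a, b), lcm(a, c))
    # a + (b ∩ c) = (a + b) ∩ (a + c)
    assert gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
print("distributivity holds on the sampled triples")
```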
References
External links
Ring theory | Arithmetical ring | Mathematics | 102 |
64,577,329 | https://en.wikipedia.org/wiki/Committee%20for%20Veterinary%20Medicinal%20Products | The Committee for Veterinary Medicinal Products (CVMP) is the European Medicines Agency's committee responsible for elaborating the agency's opinions on all issues regarding veterinary medicines.
See also
Committee for Medicinal Products for Human Use
References
External links
Health and the European Union
Veterinary organizations
Pharmacy
Animal health
Animal husbandry
Medicated feed | Committee for Veterinary Medicinal Products | Chemistry,Biology | 66 |
30,922,663 | https://en.wikipedia.org/wiki/Toky%C5%8D%20%28architecture%29 | (also called or ) is a system of and supporting the eaves of a Japanese building, usually part of a Buddhist temple or Shinto shrine. The use of tokyō is made necessary by the extent to which the eaves protrude, a functionally essential element of Japanese Buddhist architecture. The system also has an important decorative function. The system is a localized form of the Chinese dougong that has evolved since its arrival into several original forms.
In its simplest configuration, the bracket system has a single projecting bracket and a single block, and is called hitotesaki. If the first bracket and block group support a second similar one, the whole system is called futatesaki, if three brackets are present it is called mitesaki, and so on until a maximum of six brackets as in the photo to the right.
Each supporting block in most cases supports, besides the next bracket, a U-shaped supporting bracket set at 90° to the first.
Function and structure
The roof is the most visually impressive part of a Buddhist temple, often constituting half the size of the whole edifice. The slightly curved eaves extend far beyond the walls, covering verandas. Besides being determinant to the general look of the edifice, the oversize eaves give its interior a characteristic dimness, a factor which contributes to the temple's atmosphere. Finally, the eaves have a practical function in a country where rain is a common event, because they protect the building by carrying the rain as far as possible from its walls. The roof's weight must however be supported by complex bracket systems called tokyō. The further the eaves extend, the greater and more complex must the tokyō be. An added benefit of the tokyō system is its inherent elasticity, which lessens the impact of an earthquake by acting as a shock absorber.
This bracketing system, being essential both structurally and esthetically, has been altered and refined many times since it was imported from China. It is made of a combination of weight bearing blocks (masu) and bracket arms (hijiki). The bearing block, when set directly on a post, is called daito, or "large block". When it connects two brackets, it is instead called makito. Bearing blocks installed on top of corner posts are of necessity more complex and are called onito, or "demon blocks", because of how difficult they are to make. In its simplest configuration, each tokyō includes a single outwardly-projecting bracket with a single supporting block, in which case the complex is called hitotesaki. If the first bracket and block group supports a second similar one, the whole complex is called futatesaki. The tokyō may also have three (mitesaki) or more such steps, up to six (mutesaki). The number of steps used to indicate the rank of a butsudō, with higher ranks having more, but the custom was abandoned after the Heian period. In most cases, besides the projecting bracket above it, a bearing block supports another bracket set at 90° (see schematic photo below), extending laterally the support provided by the system.
Wayō-, Zenshūyō- and Daibutsuyō-style tokyō all differ in details, the first being the simplest of the three. The Daibutsuyō style has for example a dish-shaped decoration called sarato under each block, while the Zen'yō rounds up in an arc the bracket's lower ends. Another Zenshūyō feature is the kibana, a nose-like decoration carved after the last protruding bracket. (See photo in the gallery.) Some of these features can also be found in temples of non-Zen sects.
Notable types
Sumisonae
The sumisonae or sumitokyō are the brackets at the corner of a roof, having a particularly complex structure. The regular brackets between two sumisonae are called hirazonae or hiratokyō.
Futatesaki
Very common two-step bracketing system used in a variety of structures. See in the gallery for example the photo of a belltower (shōrō).
Mitesaki
The three-step complex (mitesaki) is the most common in Wayō-style structures. Its third step is usually supported by a so-called odaruki, or "tail rafter", a cantilever set between the second and the third step (see illustration above and photo in the gallery).
Yotesaki
The yotesaki tokyō is used mainly in the top section of a tahōtō.
Mutesaki
The mutesaki tokyō (see photo above) is a six-step bracketing system whose most famous example can be seen at Tōdai-ji's Nandaimon. In that gate's case, it consists of just six projecting brackets with no brackets at right angles (see photo above).
Kumo tokyō
The kumo tokyō ("cloud bracket") is the Japanese equivalent of dieji (疊枅) in early Chinese architecture. It is a bracket system where the projecting bracket is shaped in a way thought to resemble a cloud. It is rare in extant temples, and its most important examples are found in Hōryū-ji's Kondō, five-storied pagoda and Chūmon. These bracket systems are believed to be a Japanese invention of the Asuka period, as there is no evidence they came from the Continent.
Sashihijiki
The sashihijiki is the Japanese equivalent of chagong in Chinese architecture. It is a bracket arm inserted directly into a pillar instead of resting onto a supporting block on top of a pillar, as was normal in the wayō style. Typical of the Daibutsuyō style, these brackets are clearly visible in the photo at the top of the article.
Tsumegumi
Tsumegumi are intercolumnar supporting brackets, usually futatesaki or mitesaki, installed one immediately after the other. The result is an extremely compact row of brackets. Tsumegumi are typical of the Zenshūyō style, which arrived to Japan with Zen Buddhism at the end of the 12th century.
Gallery
Footnotes
References
External links
Japanese architectural history
Buddhist
Timber framing
Shinto architecture | Tokyō (architecture) | Technology | 1,222 |
2,995,237 | https://en.wikipedia.org/wiki/Glucocorticoid%20receptor | The glucocorticoid receptor (GR or GCR) also known as NR3C1 (nuclear receptor subfamily 3, group C, member 1) is the receptor to which cortisol and other glucocorticoids bind.
The GR is expressed in almost every cell in the body and regulates genes controlling the development, metabolism, and immune response. Because the receptor gene is expressed in several forms, it has many different (pleiotropic) effects in different parts of the body.
When glucocorticoids bind to GR, its primary mechanism of action is the regulation of gene transcription. The unbound receptor resides in the cytosol of the cell. After the receptor is bound to glucocorticoid, the receptor-glucocorticoid complex can take either of two paths. The activated GR complex up-regulates the expression of anti-inflammatory proteins in the nucleus or represses the expression of pro-inflammatory proteins in the cytosol (by preventing the translocation of other transcription factors from the cytosol into the nucleus).
In humans, the GR protein is encoded by the NR3C1 gene, which is located on chromosome 5 (5q31).
Structure
Like the other steroid receptors, the glucocorticoid receptor is modular in structure and contains the following domains (labeled A - F):
A/B - N-terminal regulatory domain
C - DNA-binding domain (DBD)
D - hinge region
E - ligand-binding domain (LBD)
F - C-terminal domain
Ligand binding and response
In the absence of hormone, the glucocorticoid receptor (GR) resides in the cytosol complexed with a variety of proteins including heat shock protein 90 (hsp90), heat shock protein 70 (hsp70) and the protein FKBP4 (FK506-binding protein 4). The endogenous glucocorticoid hormone cortisol diffuses through the cell membrane into the cytoplasm and binds to the glucocorticoid receptor (GR), resulting in release of the heat shock proteins. The resulting activated form of GR has two principal mechanisms of action, transactivation and transrepression, described below.
Transactivation
A direct mechanism of action involves homodimerization of the receptor, translocation via active transport into the nucleus, and binding to specific DNA response elements activating gene transcription. This mechanism of action is referred to as transactivation. The biological response depends on the cell type.
Transrepression
In the absence of activated GR, other transcription factors such as NF-κB or AP-1 themselves are able to transactivate target genes. However activated GR can complex with these other transcription factors and prevent them from binding their target genes and hence repress the expression of genes that are normally upregulated by NF-κB or AP-1. This indirect mechanism of action is referred to as transrepression. GR transrepression via NF-κB and AP-1 is restricted only to certain cell types, and is not considered the universal mechanism for IκBα repression.
Clinical significance
The GR is abnormal in familial glucocorticoid resistance.
In central nervous system structures, the glucocorticoid receptor is gaining interest as a novel representative of neuroendocrine integration, functioning as a major component of endocrine influence - specifically the stress response - upon the brain. The receptor is now implicated in both short and long-term adaptations seen in response to stressors and may be critical to the understanding of psychological disorders, including some or all subtypes of depression and post-traumatic stress disorder (PTSD). Indeed, long-standing observations such as the mood dysregulations typical of Cushing's disease demonstrate the role of corticosteroids in regulating psychologic state; recent advances have demonstrated interactions with norepinephrine and serotonin at the neural level.
In preeclampsia (a hypertensive disorder commonly occurring in pregnant women), the level of a miRNA sequence possibly targeting this protein is elevated in the blood of the mother. Specifically, the placenta releases elevated levels of exosomes containing this miRNA, which can result in inhibition of translation of the GR molecule. The clinical significance of this information is not yet clarified.
Agonists and antagonists
Dexamethasone and other corticosteroids are agonists, while mifepristone and ketoconazole are antagonists of the GR. Anabolic steroids also prevent cortisol from binding to the glucocorticoid receptor.
Interactions
Glucocorticoid receptor has been shown to interact with:
BAG1,
CEBPB,
CREBBP,
DAP3,
DAXX,
HSP90AA1,
HNRPU,
MED1,
MED14,
Mineralocorticoid receptor,
NRIP1,
NCOR1,
NCOA1,
NCOA2,
NCOA3,
POU2F1,
RANBP9,
RELA,
SMAD3,
SMARCD1,
SMARCA4
STAT3,
STAT5B,
Thioredoxin,
TRIM28, and
YWHAH.
See also
Membrane glucocorticoid receptor
Selective glucocorticoid receptor agonist (SEGRA)
References
Further reading
External links
Human Protein Reference Database
Genome projects
3 | Glucocorticoid receptor | Biology | 1,133 |
4,415,547 | https://en.wikipedia.org/wiki/Relaxation%20%28NMR%29 | In magnetic resonance imaging (MRI) and nuclear magnetic resonance spectroscopy (NMR), an observable nuclear spin polarization (magnetization) is created by a homogeneous magnetic field. This field makes the magnetic dipole moments of the sample precess at the resonance (Larmor) frequency of the nuclei. At thermal equilibrium, nuclear spins precess randomly about the direction of the applied field. They become abruptly phase coherent when they are hit by radiofrequency (RF) pulses at the resonant frequency, created orthogonal to the field. The RF pulses cause the population of spin-states to be perturbed from their thermal equilibrium value. The generated transverse magnetization can then induce a signal in an RF coil that can be detected and amplified by an RF receiver. The return of the longitudinal component of the magnetization to its equilibrium value is termed spin-lattice relaxation while the loss of phase-coherence of the spins is termed spin-spin relaxation, which is manifest as an observed free induction decay (FID).
For spin-½ nuclei (such as 1H), the polarization due to spins oriented with the field N− relative to the spins oriented against the field N+ is given by the Boltzmann distribution:

N−/N+ = exp(ΔE/kT)
where ΔE is the energy level difference between the two populations of spins, k is the Boltzmann constant, and T is the sample temperature. At room temperature, the number of spins in the lower energy level, N−, slightly outnumbers the number in the upper level, N+. The energy gap between the spin-up and spin-down states in NMR is minute by atomic emission standards at magnetic fields conventionally used in MRI and NMR spectroscopy. Energy emission in NMR must be induced through a direct interaction of a nucleus with its external environment rather than by spontaneous emission. This interaction may be through the electrical or magnetic fields generated by other nuclei, electrons, or molecules. Spontaneous emission of energy is a radiative process involving the release of a photon and typified by phenomena such as fluorescence and phosphorescence. As stated by Abragam, the probability per unit time of the nuclear spin-1/2 transition from the + into the − state through spontaneous emission of a photon is a negligible phenomenon.
Rather, the return to equilibrium is a much slower thermal process induced by the fluctuating local magnetic fields due to molecular or electron (free radical) rotational motions that return the excess energy in the form of heat to the surroundings.
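For concreteness, the equilibrium population excess implied by the Boltzmann expression above can be evaluated numerically; the sketch below assumes protons at roughly 9.4 T (a 400 MHz spectrometer) and room temperature:

```python
# Equilibrium spin polarization for 1H; the field strength (via the
# 400 MHz Larmor frequency) and temperature are assumed for the example.
import math

h  = 6.62607015e-34   # Planck constant, J*s
k  = 1.380649e-23     # Boltzmann constant, J/K
nu = 400.0e6          # 1H Larmor frequency, Hz (~9.4 T)
T  = 298.0            # sample temperature, K

dE = h * nu                     # energy gap between the spin states
ratio = math.exp(dE / (k * T))  # N-/N+ from the Boltzmann distribution
excess = (ratio - 1.0) / (ratio + 1.0)

print(f"N-/N+ = {ratio:.8f}")               # ~1.00006441
print(f"fractional excess ~ {excess:.1e}")  # ~3e-5, a tiny polarization
```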
T1 and T2
The decay of RF-induced NMR spin polarization is characterized in terms of two separate processes, each with their own time constants. One process, called T1, is responsible for the loss of resonance intensity following pulse excitation. The other process, called T2, characterizes the width or broadness of resonances. Stated more formally, T1 is the time constant for the physical processes responsible for the relaxation of the components of the nuclear spin magnetization vector M parallel to the external magnetic field, B0 (which is conventionally designated as the z-axis). T2 relaxation affects the coherent components of M perpendicular to B0. In conventional NMR spectroscopy, T1 limits the pulse repetition rate and affects the overall time an NMR spectrum can be acquired. Values of T1 range from milliseconds to several seconds, depending on the size of the molecule, the viscosity of the solution, the temperature of the sample, and the possible presence of paramagnetic species (e.g., O2 or metal ions).
T1
The longitudinal (or spin-lattice) relaxation time T1 is the decay constant for the recovery of the z component of the nuclear spin magnetization, Mz, towards its thermal equilibrium value, Mz,eq. In general,

Mz(t) = Mz,eq − [Mz,eq − Mz(0)]·e^(−t/T1)
In specific cases:
If M has been tilted into the xy plane, then Mz(0) = 0 and the recovery is simply

Mz(t) = Mz,eq·(1 − e^(−t/T1))
i.e. the magnetization recovers to 63% of its equilibrium value after one time constant T1.
In the inversion recovery experiment, commonly used to measure T1 values, the initial magnetization is inverted, Mz(0) = −Mz,eq, and so the recovery follows

Mz(t) = Mz,eq·(1 − 2e^(−t/T1))
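To make the two recovery laws concrete, the following short sketch tabulates both curves (the T1 value and unit equilibrium magnetization are assumed for illustration):

```python
import numpy as np

T1 = 1.0                        # assumed longitudinal relaxation time, s
M_eq = 1.0                      # equilibrium magnetization (normalized)
t = np.linspace(0.0, 5.0, 6)    # sample times, s

Mz_saturation = M_eq * (1.0 - np.exp(-t / T1))        # Mz(0) = 0
Mz_inversion = M_eq * (1.0 - 2.0 * np.exp(-t / T1))   # Mz(0) = -M_eq

for ti, ms, mi in zip(t, Mz_saturation, Mz_inversion):
    print(f"t = {ti:.0f} s   saturation: {ms:+.3f}   inversion: {mi:+.3f}")
# After one T1 the first curve reaches 1 - 1/e ~ 63% of M_eq, while the
# inversion-recovery curve crosses zero at t = T1 * ln(2).
```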
T1 relaxation involves redistributing the populations of the nuclear spin states in order to reach the thermal equilibrium distribution. By definition, this is not energy conserving. Moreover, spontaneous emission is negligibly slow at NMR frequencies. Hence truly isolated nuclear spins would show negligible rates of T1 relaxation. However, a variety of relaxation mechanisms allow nuclear spins to exchange energy with their surroundings, the lattice, allowing the spin populations to equilibrate. The fact that T1 relaxation involves an interaction with the surroundings is the origin of the alternative description, spin-lattice relaxation.
Note that the rates of T1 relaxation (i.e., 1/T1) are generally strongly dependent on the NMR frequency and so vary considerably with magnetic field strength B. Small amounts of paramagnetic substances in a sample speed up relaxation considerably. By degassing, and thereby removing dissolved oxygen, the T1 and T2 of liquid samples easily rise to the order of ten seconds.
Spin saturation transfer
Especially for molecules exhibiting slowly relaxing (T1) signals, the technique spin saturation transfer (SST) provides information on chemical exchange reactions. The method is widely applicable to fluxional molecules. This magnetization transfer technique provides rates, provided that they exceed 1/T1.
T2
The transverse (or spin-spin) relaxation time T2 is the decay constant for the component of M perpendicular to B0, designated Mxy, MT, or M⊥. For instance, initial xy magnetization at time zero will decay to zero (i.e. equilibrium) as follows:

Mxy(t) = Mxy(0)·e^(−t/T2)
i.e. the transverse magnetization vector drops to 37% of its original magnitude after one time constant T2.
T2 relaxation is a complex phenomenon, but at its most fundamental level, it corresponds to a decoherence of the transverse nuclear spin magnetization. Random fluctuations of the local magnetic field lead to random variations in the instantaneous NMR precession frequency of different spins. As a result, the initial phase coherence of the nuclear spins is lost, until eventually the phases are disordered and there is no net xy magnetization. Because T2 relaxation involves only the phases of other nuclear spins it is often called "spin-spin" relaxation.
T2 values are generally much less dependent on field strength, B, than T1 values.
A Hahn echo decay experiment can be used to measure the T2 time. The size of the echo is recorded for different spacings of the two applied pulses. This reveals the decoherence which is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T2 time.
T2* and magnetic field inhomogeneity
In an idealized system, all nuclei in a given chemical environment, in a magnetic field, precess with the same frequency. However, in real systems, there are minor differences in chemical environment which can lead to a distribution of resonance frequencies around the ideal. Over time, this distribution can lead to a dispersion of the tight distribution of magnetic spin vectors, and loss of signal (free induction decay). In fact, for most magnetic resonance experiments, this "relaxation" dominates. This results in dephasing.
However, decoherence because of magnetic field inhomogeneity is not a true "relaxation" process; it is not random, but dependent on the location of the molecule in the magnet. For molecules that aren't moving, the deviation from ideal relaxation is consistent over time, and the signal can be recovered by performing a spin echo experiment.
The corresponding transverse relaxation time constant is thus T2*, which is usually much smaller than T2. The relation between them is:

1/T2* = 1/T2 + 1/T2,inhom = 1/T2 + γ·ΔB0
where γ represents gyromagnetic ratio, and ΔB0 the difference in strength of the locally varying field.
Unlike T2, T2* is influenced by magnetic field gradient irregularities. The T2* relaxation time is always shorter than the T2 relaxation time and is typically milliseconds for water samples in imaging magnets.
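A quick numerical sketch of the relation above (the T2 value and the 0.1 ppm field spread at 1.5 T are assumptions for the example):

```python
gamma = 2.675221874e8          # proton gyromagnetic ratio, rad/(s*T)
T2 = 0.080                     # assumed intrinsic T2, s
delta_B0 = 0.1e-6 * 1.5        # assumed field spread: 0.1 ppm of 1.5 T, in T

R2_star = 1.0 / T2 + gamma * delta_B0   # 1/T2* = 1/T2 + gamma * dB0
T2_star = 1.0 / R2_star

print(f"T2  = {T2 * 1e3:.1f} ms")       # 80.0 ms
print(f"T2* = {T2_star * 1e3:.1f} ms")  # ~19 ms: always shorter than T2
```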
Is T1 always longer than T2?
In NMR systems, the following relation always holds: T2 ≤ 2T1. In most situations (but not in principle) T1 is greater than T2. The cases in which T2 > T1 are rare, but not impossible.
Bloch equations
Bloch equations are used to calculate the nuclear magnetization M = (Mx, My, Mz) as a function of time when relaxation times T1 and T2 are present. Bloch equations are phenomenological equations that were introduced by Felix Bloch in 1946.
dMx(t)/dt = γ·(M(t) × B(t))x − Mx(t)/T2

dMy(t)/dt = γ·(M(t) × B(t))y − My(t)/T2

dMz(t)/dt = γ·(M(t) × B(t))z − (Mz(t) − Mz,eq)/T1

where × is the cross product, γ is the gyromagnetic ratio and B(t) = (Bx(t), By(t), B0 + Bz(t)) is the magnetic flux density experienced by the nuclei.
The z component of the magnetic flux density B is typically composed of two terms: one, B0, is constant in time, the other one, Bz(t), is time dependent. It is present in magnetic resonance imaging and helps with the spatial decoding of the NMR signal.
The equations listed above in the section on T1 and T2 relaxation are solutions of the Bloch equations.
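A minimal numerical sketch of Bloch-equation evolution after a 90° pulse is shown below. It uses operator splitting (an exact per-step rotation about z for the precession term, followed by exact exponential relaxation factors) because naive Euler integration drifts badly at Larmor-frequency precession. All parameter values are assumptions for the example:

```python
import numpy as np

gamma = 2.675221874e8      # proton gyromagnetic ratio, rad/(s*T)
B0 = 1.5                   # assumed main field, T
T1, T2 = 1.0, 0.5          # assumed relaxation times, s
M_eq_z = 1.0               # equilibrium magnetization (normalized)

omega0 = gamma * B0        # Larmor angular frequency, rad/s
dt = 1e-5                  # time step, s
c, s = np.cos(omega0 * dt), np.sin(omega0 * dt)  # per-step rotation about z
E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)      # per-step relaxation factors

Mx, My, Mz = 1.0, 0.0, 0.0      # magnetization just after a 90-degree pulse
for _ in range(int(2.0 / dt)):  # evolve for 2 s
    Mx, My = c * Mx + s * My, -s * Mx + c * My   # precession term
    Mx, My = Mx * E2, My * E2                    # transverse decay (T2)
    Mz = M_eq_z + (Mz - M_eq_z) * E1             # longitudinal recovery (T1)

print(f"|Mxy| = {np.hypot(Mx, My):.4f}, Mz = {Mz:.4f}")
# Expected: |Mxy| = exp(-2/T2) ~ 0.0183 and Mz = 1 - exp(-2/T1) ~ 0.8647.
```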
Solomon equations
Solomon equations are used to calculate the transfer of magnetization as a result of relaxation in a dipolar system. They can be employed to explain the nuclear Overhauser effect, which is an important tool in determining molecular structure.
Common relaxation time constants in human tissues
Following is a table of the approximate values of the two relaxation time constants for hydrogen nuclear spins in nonpathological human tissues.
Following is a table of the approximate values of the two relaxation time constants for chemicals that commonly show up in human brain magnetic resonance spectroscopy (MRS) studies, physiologically or pathologically.
Relaxation in the rotating frame, T1ρ
The discussion above describes relaxation of nuclear magnetization in the presence of a constant magnetic field B0. This is called relaxation in the laboratory frame.
Another technique, called relaxation in the rotating frame, is the relaxation of nuclear magnetization in the presence of the field B0 together with a time-dependent magnetic field B1. The field B1 rotates in the plane perpendicular to B0 at the Larmor frequency of the nuclei in the B0. The magnitude of B1 is typically much smaller than the magnitude of B0. Under these circumstances the relaxation of the magnetization is similar to laboratory frame relaxation in a field B1. The decay constant for the recovery of the magnetization component along B1 is called the spin-lattice relaxation time in the rotating frame and is denoted T1ρ.
Relaxation in the rotating frame is useful because it provides information on slow motions of nuclei.
Microscopic mechanisms
Relaxation of nuclear spins requires a microscopic mechanism for a nucleus to change orientation with respect to the applied magnetic field and/or interchange energy with the surroundings (called the lattice). The most common mechanism is the magnetic dipole-dipole interaction between the magnetic moment of a nucleus and the magnetic moment of another nucleus or other entity (electron, atom, ion, molecule). This interaction depends on the distance between the pair of dipoles (spins) but also on their orientation relative to the external magnetic field. Several other relaxation mechanisms also exist. The chemical shift anisotropy (CSA) relaxation mechanism arises whenever the electronic environment around the nucleus is non-spherical; the magnitude of the electronic shielding of the nucleus will then depend on the molecular orientation relative to the (fixed) external magnetic field. The spin rotation (SR) relaxation mechanism arises from an interaction between the nuclear spin and the overall molecular rotational angular momentum. Nuclei with spin I ≥ 1 have not only a nuclear dipole but also a quadrupole. The nuclear quadrupole has an interaction with the electric field gradient at the nucleus which is again orientation dependent, as with the other mechanisms described above, leading to the so-called quadrupolar relaxation mechanism.
Molecular reorientation or tumbling can then modulate these orientation-dependent spin interaction energies.
According to quantum mechanics, time-dependent interaction energies cause transitions between the nuclear spin states which result in nuclear spin relaxation. The application of time-dependent perturbation theory in quantum mechanics shows that the relaxation rates (and times) depend on spectral density functions that are the Fourier transforms of the autocorrelation function of the fluctuating magnetic dipole interactions. The form of the spectral density functions depend on the physical system, but a simple approximation called the BPP theory is widely used.
Another relaxation mechanism is the electrostatic interaction between a nucleus with an electric quadrupole moment and the electric field gradient that exists at the nuclear site due to surrounding charges. Thermal motion of a nucleus can result in fluctuating electrostatic interaction energies. These fluctuations produce transitions between the nuclear spin states in a similar manner to the magnetic dipole-dipole interaction.
BPP theory
In 1948, Nicolaas Bloembergen, Edward Mills Purcell, and Robert Pound proposed the so-called Bloembergen-Purcell-Pound theory (BPP theory) to explain the relaxation constant of a pure substance in correspondence with its state, taking into account the effect of tumbling motion of molecules on the local magnetic field disturbance. The theory agrees well with experiments on pure substances, but not for complicated environments such as the human body.
This theory makes the assumption that the autocorrelation function of the microscopic fluctuations causing the relaxation is proportional to e^(−t/τc), where τc is called the correlation time. From this theory, one can get T1 > T2 for magnetic dipolar relaxation:

1/T1 = K·[τc/(1 + ω0²τc²) + 4τc/(1 + 4ω0²τc²)]

1/T2 = (K/2)·[3τc + 5τc/(1 + ω0²τc²) + 2τc/(1 + 4ω0²τc²)],

where ω0 is the Larmor angular frequency corresponding to the strength of the main magnetic field B0, and τc is the correlation time of the molecular tumbling motion. For spin-1/2 nuclei, K = (3μ0²/160π²)·(ħ²γ⁴/r⁶) is a constant, with μ0 the magnetic permeability of free space, ħ the reduced Planck constant, γ the gyromagnetic ratio of the nuclear species, and r the distance between the two nuclei carrying the magnetic dipole moments.
Taking for example the H2O molecules in liquid phase without the contamination of oxygen-17, the value of K is 1.02×10¹⁰ s⁻² and the correlation time τc is on the order of picoseconds (1 ps = 10⁻¹² s), while hydrogen nuclei 1H (protons) at 1.5 tesla precess at a Larmor frequency of approximately 64 MHz (simplified; BPP theory in fact uses the angular frequency ω0 = 2πν0). We can then estimate, using τc = 5×10⁻¹² s:
ω0τc = 2π × 64×10⁶ Hz × 5×10⁻¹² s ≈ 2.0×10⁻³ (dimensionless)

T1 = [K·(τc/(1 + ω0²τc²) + 4τc/(1 + 4ω0²τc²))]⁻¹ ≈ (5Kτc)⁻¹ = 3.92 s

T2 = [(K/2)·(3τc + 5τc/(1 + ω0²τc²) + 2τc/(1 + 4ω0²τc²))]⁻¹ ≈ (5Kτc)⁻¹ = 3.92 s,

which is close to the experimental value, 3.6 s. Meanwhile, we can see that in this extreme case (the extreme narrowing limit, ω0τc ≪ 1), T1 equals T2.
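The arithmetic above is straightforward to reproduce; this sketch evaluates the full BPP expressions with the values quoted in the text:

```python
import math

K = 1.02e10          # s^-2, dipolar constant quoted above for water protons
tau_c = 5e-12        # s, assumed rotational correlation time
nu0 = 64e6           # Hz, proton Larmor frequency at 1.5 T
omega0 = 2 * math.pi * nu0

x = (omega0 * tau_c) ** 2    # (omega0 * tau_c)^2, tiny in extreme narrowing
R1 = K * (tau_c / (1 + x) + 4 * tau_c / (1 + 4 * x))
R2 = (K / 2) * (3 * tau_c + 5 * tau_c / (1 + x) + 2 * tau_c / (1 + 4 * x))

print(f"omega0 * tau_c = {omega0 * tau_c:.1e}")     # ~2e-3 (dimensionless)
print(f"T1 = {1 / R1:.2f} s, T2 = {1 / R2:.2f} s")  # both ~3.92 s
```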
As follows from the BPP theory, measuring the T1 times leads to internuclear distances r. One of the examples is accurate determinations of the metal – hydride (M-H) bond lengths in solutions by measurements of 1H selective and non-selective T1 times in variable-temperature relaxation experiments via the equation:
, with
where r, frequency and T1 are measured in Å, MHz and s, respectively, and IM is the spin of M.
See also
Nuclear magnetic resonance
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy of carbohydrates
Nuclear magnetic resonance spectroscopy of nucleic acids
Nuclear magnetic resonance spectroscopy of proteins
Protein dynamics
Relaxation (physics)
Relaxometry
Spin–lattice relaxation
Spin–spin relaxation
References
External links
The Basics of NMR, RIT
Relaxation in high-resolution NMR spectroscopy
Field-cycling NMR relaxometry
relax Software for the analysis of NMR dynamics
Estimation of T1 and T2 relaxation parameters in MRI
Nuclear magnetic resonance
Articles containing video clips
Magnetic resonance imaging | Relaxation (NMR) | Physics,Chemistry | 3,257 |
387,545 | https://en.wikipedia.org/wiki/Anycast | Anycast is a network addressing and routing methodology in which a single IP address is shared by devices (generally servers) in multiple locations. Routers direct packets addressed to this destination to the location nearest the sender, using their normal decision-making algorithms, typically the lowest number of BGP network hops. Anycast routing is widely used by content delivery networks such as web and name servers, to bring their content closer to end users.
History
The first documented use of anycast routing for topological load-balancing of Internet-connected services was in 1989; the technique was first formally documented in the IETF four years later. It was first applied to critical infrastructure in 2001 with the anycasting of the I-root nameserver.
Early objections
Early objections to the deployment of anycast routing centered on the perceived conflict between long-lived TCP connections and the volatility of the Internet's routed topology. In concept, a long-lived connection, such as an FTP file transfer (which can take hours to complete for large files) might be re-routed to a different anycast instance in mid-connection due to changes in network topology or routing, with the result that the server changes mid-connection, and the new server is not aware of the connection and does not possess the TCP connection state of the previous anycast instance.
In practice, such problems were not observed, and these objections dissipated by the early 2000s. Many initial anycast deployments consisted of DNS servers, using principally UDP transport. Measurements of long-term anycast flows revealed very few failures due to mid-connection instance switches, far fewer (less than 0.017% or "less than one flow per ten thousand per hour of duration" according to various sources) than were attributed to other causes of failure. Numerous mechanisms were developed to efficiently share state between anycast instances. And some TCP-based protocols, notably HTTP, incorporated "redirect" mechanisms, whereby anycast service addresses could be used to locate the nearest instance of a service, whereupon a user would be redirected to that specific instance prior to the initiation of any long-lived stateful transaction.
Internet Protocol version 4
Anycast can be implemented via Border Gateway Protocol (BGP). Multiple hosts (usually in different geographic areas) are given the same unicast IP address and different routes to the address are announced through BGP. Routers consider these to be alternative routes to the same destination, even though they are actually routes to different destinations with the same address. As usual, routers select a route by whatever distance metric is in use (the least cost, least congested, shortest). Selecting a route in this setup amounts to selecting a destination.
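The decision logic can be caricatured in a few lines of Python (hypothetical site names and hop counts; real BGP best-path selection involves many additional tie-breakers such as local preference and MED):

```python
# Several sites advertise the same anycast prefix; each candidate route is
# (site_name, path_length). All names and lengths here are invented.
routes_to_anycast_prefix = [
    ("site-frankfurt", 3),
    ("site-tokyo", 5),
    ("site-ashburn", 2),
]

# The router treats these as alternative paths to one destination and keeps
# the shortest; picking the route effectively picks which server answers.
best_site, _ = min(routes_to_anycast_prefix, key=lambda route: route[1])
print("traffic flows to:", best_site)   # site-ashburn
```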
Internet Protocol version 6
Anycast is supported explicitly in the IPv6 addressing architecture. The lowest address within an IPv6 subnet (interface identifier 0) is reserved as the "Subnet Router" anycast address. In addition, the highest 128 interface identifiers within a subnet are also reserved as anycast addresses.
Most IPv6 routers on the path of an anycast packet through the network will not distinguish it from a unicast packet, but special handling is required from the routers near the destination (that is, within the scope of the anycast address) as they are required to route an anycast packet to the "nearest" interface within that scope which has the proper anycast address, according to whatever measure of distance (hops, cost, etc.) is being used.
The method used in IPv4 of advertising multiple routes in BGP to multiply-assigned unicast addresses also still works in IPv6, and can be used to route packets to the nearest of several geographically dispersed hosts with the same address. This approach, which does not depend on anycast-aware routers, has the same use cases together with the same problems and limitations as in IPv4.
Applications
With the growth of the Internet, network services increasingly have high-availability requirements. As a result, operation of anycast services has grown in popularity among network operators.
Domain Name System
All Internet root nameservers are implemented as clusters of hosts using anycast addressing. All 13 root servers A–M exist in multiple locations, with 11 on multiple continents. (Root servers B and H exist in two U.S. locations.) The servers use anycast address announcements to provide a decentralized service. This has accelerated the deployment of physical (rather than logical) root servers outside the United States. Many commercial DNS providers have switched to an IP anycast environment to increase query performance and redundancy, and to implement load balancing.
IPv6 transition
In IPv4 to IPv6 transitioning, anycast addressing may be deployed to provide IPv6 compatibility to IPv4 hosts. This method, 6to4, uses a default gateway with the IP address 192.88.99.1. This allows multiple providers to implement 6to4 gateways without hosts having to know each individual provider's gateway addresses. 6to4 has been deprecated in response to native IPv6 becoming more prevalent.
Content delivery networks
Content delivery networks may use anycast for actual HTTP connections to their distribution centers, or for DNS. Because most HTTP connections to such networks request static content such as images and style sheets, they are generally short-lived and stateless across subsequent TCP sessions. The general stability of routes and statelessness of connections makes anycast suitable for this application, even though it uses TCP.
Connectivity between Anycast and Multicast network
An anycast rendezvous point can be used with the Multicast Source Discovery Protocol (MSDP); this application, known as Anycast RP, is an intra-domain feature that provides redundancy and load-sharing capabilities. If multiple anycast rendezvous points are used, IP routing automatically selects the topologically closest rendezvous point for each source and receiver, providing the multicast network with a degree of fault tolerance.
Security
Anycast allows any operator whose routing information is accepted by an intermediate router to hijack any packets intended for the anycast address. While this at first sight appears insecure, it is no different from the routing of ordinary IP packets, and no more or less secure. As with conventional IP routing, careful filtering of who is and is not allowed to propagate route announcements is crucial to prevent man-in-the-middle or blackhole attacks. The former can also be prevented by encrypting and authenticating messages, such as using Transport Layer Security, while the latter can be frustrated by onion routing.
Reliability
Anycast is normally highly reliable, as it can provide automatic failover without adding complexity or new potential points of failure. Anycast applications typically feature external "heartbeat" monitoring of the server's function, and withdraw the route announcement if the server fails. In some cases this is done by the actual servers announcing the anycast prefix to the router over OSPF or another IGP. If the servers die, the router will automatically withdraw the announcement. "Heartbeat" functionality is important because, if the announcement continues for a failed server, the server will act as a "black hole" for nearby clients; this is the most serious mode of failure for an anycast system. Even in this event, this kind of failure will only cause a total failure for clients that are closer to this server than any other, and will not cause a global failure. However, even the automation necessary to implement "heartbeat" routing withdrawal can itself add a potential point of failure, as seen in the 2021 Facebook outage.
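A simplified sketch of the heartbeat idea is given below (the probe target and the announce/withdraw hook are placeholders; a real deployment would drive a routing daemon such as BIRD or FRRouting rather than print a message):

```python
import socket

def service_is_healthy(host: str = "127.0.0.1", port: int = 53) -> bool:
    """Crude health probe: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def update_announcement(announce: bool) -> None:
    # Placeholder hook: in practice this would tell the local routing daemon
    # to originate or withdraw the anycast prefix.
    print("announce anycast prefix" if announce else "withdraw anycast prefix")

update_announcement(service_is_healthy())
```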
Mitigation of denial-of-service attacks
In denial-of-service attacks, a rogue network host may advertise itself as an anycast server for a vital network service, to provide false information or simply block service.
Anycast methodologies on the Internet may be exploited to distribute DDoS attacks and reduce their effectiveness: As traffic is routed to the closest node, a process over which the attacker has no control, the DDoS traffic flow will be distributed amongst the closest nodes. Thus, not all nodes might be affected. This may be a reason to deploy anycast addressing.
The effectiveness of this technique depends upon maintaining the secrecy of any unicast addresses associated with anycast service nodes, however, since an attacker in possession of the unicast addresses of individual nodes can attack them from any location, bypassing anycast addressing methods.
Local and global nodes
Some anycast deployments on the Internet distinguish between local and global nodes to benefit the local community, by addressing local nodes preferentially. An example is the Domain Name System. Local nodes are often announced with the no-export BGP community to prevent hosts from announcing them to their peers, i.e. the announcement is kept in the local area. Where both local and global nodes are deployed, the announcements from global nodes are often AS prepended (i.e. the AS is added a few more times) to make the path longer so that a local node announcement is preferred over a global node announcement.
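The effect of AS prepending on path selection is easy to illustrate (all AS numbers are invented, and shortest AS path is only one step of real BGP best-path selection):

```python
# AS paths for one anycast prefix as seen by a nearby router. The global
# node prepends its own AS number, lengthening its path so that the local
# node's announcement wins the shortest-AS-path comparison.
paths = {
    "local node": [64501],                   # short path: preferred
    "global node": [64500, 64500, 64500],    # AS 64500 prepended twice
}

preferred = min(paths, key=lambda name: len(paths[name]))
print("preferred announcement:", preferred)  # local node
```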
See also
Multihoming
Line hunting, for an equivalent system for telephones
References
External links
Best Practices in IPv4 Anycast Routing Tutorial on anycast routing configuration.
Computer network technology
Internet architecture
Multihoming
Domain Name System | Anycast | Technology | 1,900 |
37,647,174 | https://en.wikipedia.org/wiki/Kappa%20Librae | Kappa Librae, Latinized from κ Librae, is the Bayer designation for a star system in the zodiac constellation of Libra. Its apparent visual magnitude is 4.72, so it can be seen with the naked eye. The annual parallax shift of 10.57 mas indicates it is roughly 310 light years away. It is positioned 0.02 degrees south of the ecliptic.
The star shows acceleration components in its proper motion, indicating with high probability that it is an astrometric binary. The visible component is an evolved K-type giant star with a stellar classification of K5 III. It is a suspected variable star with a brightness that ranges between 4.70 and 4.75. The measured angular diameter is , which, at the estimated distance of Kappa Librae, yields a physical size of about 38 times the radius of the Sun. It radiates 296 times the solar luminosity from its outer atmosphere at an effective temperature of 3,930 K.
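The quoted distance follows directly from the parallax, and the physical radius from the angular diameter. A sketch of the arithmetic (the angular diameter used here is a placeholder chosen to reproduce the ~38 solar radii figure, since the measured value is not reproduced above):

```python
import math

PC_IN_LY = 3.26156           # light years per parsec
PC_IN_M = 3.0857e16          # metres per parsec
SUN_RADIUS_M = 6.957e8       # solar radius, m
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds to radians

parallax_mas = 10.57
distance_pc = 1000.0 / parallax_mas                    # ~94.6 pc
print(f"distance ~ {distance_pc * PC_IN_LY:.0f} ly")   # ~309, roughly 310

ang_diam_mas = 3.7           # placeholder value, not the measured figure
radius_m = 0.5 * ang_diam_mas * MAS_TO_RAD * distance_pc * PC_IN_M
print(f"radius ~ {radius_m / SUN_RADIUS_M:.0f} solar radii")   # ~38
```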
In Chinese astronomy, Kappa Librae is called 日 (Pinyin: Rì), meaning Sun, because this star stands alone as the Sun asterism within the Room mansion (see Chinese constellations).
References
K-type giants
Suspected variables
Astrometric binaries
Libra (constellation)
Librae, Kappa
BD-19 4188
5838
Librae, 43
139997
076880 | Kappa Librae | Astronomy | 289 |
68,684,091 | https://en.wikipedia.org/wiki/Nokia%20XR20 | The Nokia XR20 is a Nokia-branded smartphone that was manufactured by HMD Global.
The Nokia XR20 is also the last Nokia-branded smartphone to use Zeiss optics, as HMD Global and Zeiss mutually ended their partnership.
References
External links
XR20
Phablets
Mobile phones introduced in 2021
Mobile phones with multiple rear cameras | Nokia XR20 | Technology | 74 |
6,188,769 | https://en.wikipedia.org/wiki/Metal%20nitrosyl%20complex | Metal nitrosyl complexes are complexes that contain nitric oxide, NO, bonded to a transition metal. Many kinds of nitrosyl complexes are known, which vary both in structure and coligand.
Bonding and structure
Most complexes containing the NO ligand can be viewed as derivatives of the nitrosyl cation, NO+. The nitrosyl cation is isoelectronic with carbon monoxide, thus the bonding between a nitrosyl ligand and a metal follows the same principles as the bonding in carbonyl complexes. The nitrosyl cation serves as a two-electron donor to the metal and accepts electrons from the metal via back-bonding. The compounds Co(NO)(CO)3 and Ni(CO)4 illustrate the analogy between NO+ and CO. In an electron-counting sense, two linear NO ligands are equivalent to three CO groups. This trend is illustrated by the isoelectronic pair Fe(CO)2(NO)2 and [Ni(CO)4]. These complexes are isoelectronic and, incidentally, both obey the 18-electron rule. The formal description of nitric oxide as NO+ does not match certain measurable and calculated properties. In an alternative description, nitric oxide serves as a 3-electron donor, and the metal-nitrogen interaction is a triple bond.
Linear vs bent nitrosyl ligands
The M-N-O unit in nitrosyl complexes is usually linear, or no more than 15° from linear. In some complexes, however, especially when back-bonding is less important, the M-N-O angle can strongly deviate from 180°. Linear and bent NO ligands can be distinguished using infrared spectroscopy. Linear M-N-O groups absorb in the range 1650–1900 cm−1, whereas bent nitrosyls absorb in the range 1525–1690 cm−1. The differing vibrational frequencies reflect the differing N-O bond orders for linear (triple bond) and bent NO (double bond).
The bent NO ligand is sometimes described as the anion, NO−. Prototypes for such compounds are the organic nitroso compounds, such as nitrosobenzene. A complex with a bent NO ligand is trans-[Co(en)2(NO)Cl]+. Bent NO− bonding is also common in alkali-metal and alkaline-earth metal–NO molecules; for example, LiNO and BeNO adopt the ionic forms Li+NO− and Be+NO−.
The adoption of linear vs bent bonding can be analyzed with the Enemark-Feltham notation. In their framework, the factor that determines the bent vs linear NO ligands is the sum of electrons of pi-symmetry. Complexes with "pi-electrons" in excess of 6 tend to have bent NO ligands. Thus, [Co(en)2(NO)Cl]+, with eight electrons of pi-symmetry (six in t2g orbitals and two on NO, {CoNO}8), adopts a bent NO ligand, whereas [Fe(CN)5(NO)]2−, with six electrons of pi-symmetry, {FeNO}6), adopts a linear nitrosyl. In a further illustration, the {MNO} d-electron count of the [Cr(CN)5NO]3− anion is shown. In this example, the cyanide ligands are "innocent", i.e., they have a charge of −1 each, −5 total. To balance the fragment's overall charge, the charge on {CrNO} is thus +2 (−3 = −5 + 2). Using the neutral electron counting scheme, Cr has 6 d electrons and NO· has one electron for a total of 7. Two electrons are subtracted to take into account that fragment's overall charge of +2, to give 5. Written in the Enemark-Feltham notation, the d electron count is {CrNO}5. The results are the same if the nitrosyl ligand were considered NO+ or NO−.
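The counting rule lends itself to a one-line helper; this toy function reproduces the [Cr(CN)5NO]3− example worked through above (the function name and interface are illustrative, not an established API):

```python
def enemark_feltham(metal_d_electrons: int, n_no_ligands: int,
                    fragment_charge: int) -> int:
    """d electrons of the neutral metal, plus one electron per NO radical,
    corrected for the charge assigned to the {M(NO)x} fragment."""
    return metal_d_electrons + n_no_ligands - fragment_charge

# [Cr(CN)5NO]3-: five innocent CN- carry -5, so the {CrNO} fragment is +2;
# neutral Cr contributes 6 d electrons and NO contributes 1.
n = enemark_feltham(metal_d_electrons=6, n_no_ligands=1, fragment_charge=2)
print(f"{{CrNO}}^{n}")   # {CrNO}^5, matching the text
```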
Bridging nitrosyl ligands
Nitric oxide can also serve as a bridging ligand. In the compound [Mn3(η5-C5H5)3(μ2-NO)3(μ3-NO)], three NO groups each bridge two metal centres and one NO group bridges all three.
Isonitrosyl ligands
Usually only of transient existence, complexes of isonitrosyl ligands are known where the NO is coordinated by its oxygen atom. They can be generated by UV-irradiation of nitrosyl complexes.
Representative classes of compounds
Homoleptic nitrosyl complexes
Metal complexes containing only nitrosyl ligands are called homoleptic nitrosyls. They are rare, the premier member being Cr(NO)4. Even trinitrosyl complexes are uncommon, whereas polycarbonyl complexes are routine.
Roussin red and black salts
One of the earliest examples of a nitrosyl complex to be synthesized is Roussin's red salt, which is a sodium salt of the anion [Fe2(NO)4S2]2−. The structure of the anion can be viewed as consisting of two tetrahedra sharing an edge. Each iron atom is bonded linearly to two NO+ ligands and shares two bridging sulfide ligands with the other iron atom. Roussin's black salt has a more complex cluster structure. The anion in this species has the formula [Fe4(NO)7S3]−. It has C3v symmetry. It consists of a tetrahedron of iron atoms with sulfide ions on three faces of the tetrahedron. Three iron atoms are bonded to two nitrosyl groups. The iron atom on the threefold symmetry axis has a single nitrosyl group which also lies on that axis.
Preparation
Many nitrosyl complexes are quite stable, thus many methods can be used for their synthesis.
From NO
Nitrosyl complexes are traditionally prepared by treating metal complexes with nitric oxide. The method is mainly used with reduced precursors. Illustrative is the nitrosylation of cobalt carbonyl to give cobalt tricarbonyl nitrosyl:
Co2(CO)8 + 2NO → 2CoNO(CO)3 + 2CO
From NO+ and NOCl
Replacement of ligands by the nitrosyl cation may be accomplished using nitrosyl tetrafluoroborate. This reagent has been applied to the hexacarbonyls of molybdenum and tungsten:
M(CO)6 + 4MeCN + 2NOBF4 → [M(NO)2(MeCN)4](BF4)2
Nitrosyl chloride and molybdenum hexacarbonyl react to give [Mo(NO)2Cl2]n. Diazald is also used as an NO source.
From hydroxylamine
Hydroxylamine is a source of nitric oxide anion via a disproportionation:
K2[Ni(CN)4] + 2NH2OH + KOH → K2[Ni(CN)3(NO)] + NH3 + 2H2O + KCN
From nitric acid
Nitric acid is a source of nitric oxide complexes, although the details are obscure. Probably relevant is the conventional self-dehydration of nitric acid:
2 HNO3 ⇌ NO2+ + NO3− + H2O
Nitric acid is used in some preparations of nitroprusside from ferrocyanide:
HNO3 + [Fe(CN)6]4- → [Fe(CN)5(NO)]2- + OH− + OCN−
From nitrous acid
Some anionic nitrito complexes undergo acid-induced deoxygenation to give the linear nitrosyl complex.
[LnMNO2]− + 2H+ → [LnMNO]+ + H2O
The reaction is reversible in some cases.
Oxidation of ammine complexes
In some metal-ammine complexes, the ammonia ligand can be oxidized to nitrosyl:
H2O + [Ru(terpy)(bipy)(NH3)]+ → [Ru(terpy)(bipy)(NO)]2+ + 5H+ + 6e−
Reactions
An important reaction is the acid/base equilibrium, yielding transition metal nitrite complexes:
[LnMNO]2+ + 2OH− ⇌ LnMNO2 + H2O
This equilibrium serves to confirm that the linear nitrosyl ligand is, formally, NO+, with nitrogen in the oxidation state +3:

NO+ + 2 OH− ⇌ NO2− + H2O
Since nitrogen is more electronegative than carbon, metal-nitrosyl complexes tend to be more electrophilic than related metal carbonyl complexes. Nucleophiles often add to the nitrogen. The nitrogen atom in bent metal nitrosyls is basic, thus can be oxidized, alkylated, and protonated, e.g.:
(Ph3P)2(CO)ClOsNO + HCl → (Ph3P)2(CO)Cl2OsN(H)O
In rare cases, NO is cleaved by metal centers:
Cp2NbMe2 + NO → Cp2(Me)Nb(O)NMe
2 Cp2(Me)Nb(O)NMe → 2 Cp2Nb(O)Me + MeN=NMe
Applications
Metal nitrosyls are assumed to be intermediates in catalytic converters, which reduce the emission of nitrogen oxides (NOx) from internal combustion engines. This application has been described as "one of the most successful stories in the development of catalysts."
Metal-catalyzed reactions of NO are not often useful in organic chemistry. In biology and medicine, nitric oxide is however an important signalling molecule in nature and this fact is the basis of the most important applications of metal nitrosyls. The nitroprusside anion, [Fe(CN)5NO]2−, a mixed nitrosyl cyano complex, has pharmaceutical applications as a slow release agent for NO. The signalling function of NO is effected via its complexation to haem proteins, where it binds in the bent geometry. Nitric oxide also attacks iron-sulfur proteins giving dinitrosyl iron complexes.
Thionitrosyls
Several complexes are known with NS ligands. Like nitrosyls, thionitrosyls exist as both linear and bent geometries.
References
Oxides
Nitrogen compounds
Nitrosyl complexes
Coordination chemistry | Metal nitrosyl complex | Chemistry | 2,260 |
693,282 | https://en.wikipedia.org/wiki/Noncommutative%20logic | Noncommutative logic is an extension of linear logic that combines the commutative connectives of linear logic with the noncommutative multiplicative connectives of the Lambek calculus. Its sequent calculus relies on the structure of order varieties (a family of cyclic orders that may be viewed as a species of structure), and the correctness criterion for its proof nets is given in terms of partial permutations. It also has a denotational semantics in which formulas are interpreted by modules over some specific Hopf algebras.
Noncommutativity in logic
By extension, the term noncommutative logic is also used by a number of authors to refer to a family of substructural logics in which the exchange rule is inadmissible. The remainder of this article is devoted to a presentation of this sense of the term.
The oldest noncommutative logic is the Lambek calculus, which gave rise to the class of logics known as categorial grammars. Since the publication of Jean-Yves Girard's linear logic there have been several new noncommutative logics proposed, namely the cyclic linear logic of David Yetter, the pomset logic of Christian Retoré, and the noncommutative logics BV and NEL.
Noncommutative logic is sometimes called ordered logic, since it is possible with most proposed noncommutative logics to impose a total or partial order on the formulas in sequents. However this is not fully general since some noncommutative logics do not support such an order, such as Yetter's cyclic linear logic. Although most noncommutative logics do not allow weakening or contraction together with noncommutativity, this restriction is not necessary.
The Lambek calculus
Joachim Lambek proposed the first noncommutative logic in his 1958 paper Mathematics of Sentence Structure to model the combinatory possibilities of the syntax of natural languages. His calculus has thus become one of the fundamental formalisms of computational linguistics.
Cyclic linear logic
David N. Yetter proposed a weaker structural rule in place of the exchange rule of linear logic, yielding cyclic linear logic. Sequents of cyclic linear logic form a cycle, and so are invariant under rotation, where multipremise rules glue their cycles together at the formulas described in the rules. The calculus supports three structural modalities, a self-dual modality allowing exchange, but still linear, and the usual exponentials (? and !) of linear logic, allowing nonlinear structural rules to be used together with exchange.
Pomset logic
Pomset logic was proposed by Christian Retoré in a semantic formalism with two dual sequential operators existing together with the usual tensor product and par operators of linear logic, the first logic proposed to have both commutative and noncommutative operators. A sequent calculus for the logic was given, but it lacked a cut-elimination theorem; instead the sense of the calculus was established through a denotational semantics.
BV and NEL
Alessio Guglielmi proposed a variation of Retoré's calculus, BV, in which the two noncommutative operations are collapsed onto a single, self-dual, operator, and proposed a novel proof calculus, the calculus of structures to accommodate the calculus. The principal novelty of the calculus of structures was its pervasive use of deep inference, which it was argued is necessary for calculi combining commutative and noncommutative operators; this explanation concurs with the difficulty of designing sequent systems for pomset logic that have cut-elimination.
Lutz Straßburger devised a related system, NEL, also in the calculus of structures in which linear logic with the mix rule appears as a subsystem.
Structads
Structads are an approach to the semantics of logic that are based upon generalising the notion of sequent along the lines of Joyal's combinatorial species, allowing the treatment of more drastically nonstandard logics than those described above, where, for example, the ',' of the sequent calculus is not associative.
See also
Ordered type system, a substructural type system
Quantum logic
References
External links
Non-commutative logic I: the multiplicative fragment by V. Michele Abrusci and Paul Ruet, Annals of Pure and Applied Logic 101(1), 2000.
Logical aspects of computational linguistics by Patrick Blackburn, Marc Dymetman, Alain Lecomte, Aarne Ranta, Christian Retoré and Eric Villemonte de la Clergerie.
Papers on Commutative/Non-commutative Linear Logic in the calculus of structures: a research homepage from which the papers proposing BV and NEL are available.
Substructural logic | Noncommutative logic | Mathematics | 994 |
9,007,003 | https://en.wikipedia.org/wiki/NGC%206027 | NGC 6027 is a lenticular galaxy which is the brightest member of Seyfert's Sextet, a compact group of galaxies. It was discovered by French astronomer Édouard Stephan on 20 March 1882.
See also
NGC 6027a
NGC 6027b
NGC 6027c
NGC 6027d
NGC 6027e
Seyfert's Sextet
List of NGC objects (6001–7000)
References
External links
Lenticular galaxies
10116 NED01
056575
+04-38-008
6027
Serpens
Astronomical objects discovered in 1882
Discoveries by Édouard Stephan | NGC 6027 | Astronomy | 119 |
702,848 | https://en.wikipedia.org/wiki/Bernard%20Morin | Bernard Morin (; 3 March 1931 in Shanghai, China – 12 March 2018) was a French mathematician, specifically a topologist.
Early life and education
Morin lost his sight at the age of six due to glaucoma, but his blindness did not prevent him from having a successful career in mathematics. He received his Ph.D. in 1972 from the Centre National de la Recherche Scientifique.
Career
Morin was a member of the group that first exhibited an eversion of the sphere, i.e., a homotopy which starts with a sphere and ends with the same sphere but turned inside-out. He also discovered the Morin surface, which is a half-way model for the sphere eversion, and used it to prove a lower bound on the number of steps needed to turn a sphere inside out.
Morin discovered the first parametrization of Boy's surface (earlier used as a half-way model) in 1978. His graduate student François Apéry, in 1986, discovered another parametrization of Boy's surface, which conforms to the general method for parametrizing non-orientable surfaces.
Morin worked at the Institute for Advanced Study in Princeton, New Jersey. Most of his career, though, he spent at the University of Strasbourg.
Morin's surface.
See also
Blind mathematicians: Leonhard Euler, Nicholas Saunderson, Lev Pontryagin, Louis Antoine, Zachary Battles
References
George K. Francis & Bernard Morin (1980) "Arnold Shapiro's Eversion of the Sphere", Mathematical Intelligencer 2(4):200–3.
External links
Photos of Morin with stereolithography models of sphere eversion.
The World of Blind Mathematicians, PDF file at the American Mathematical Society's website.
1931 births
2018 deaths
French mathematicians
French blind people
Blind scholars and academics
Institute for Advanced Study visiting scholars
Topologists
Academic staff of the University of Strasbourg
Scientists from Shanghai
Educators from Shanghai
French scientists with disabilities | Bernard Morin | Mathematics | 409 |
11,422,159 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z122 | In molecular biology, Small nucleolar RNA Z122 (also known as snoR72Y) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Z122 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
Plant snoRNA Z122 was identified in a screen of Oryza sativa.
References
External links
Plant snoRNA database entry for SnoR72Y
Small nuclear RNA | Small nucleolar RNA Z122 | Chemistry | 219 |
69,461,222 | https://en.wikipedia.org/wiki/Vombatid%20gammaherpesvirus%201 | Vombatid gammaherpesvirus 1 (VoHV-1) is a species of virus in the genus Manticavirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales.
Host
It is hosted by the common wombat (Vombatus ursinus).
References
Gammaherpesvirinae | Vombatid gammaherpesvirus 1 | Biology | 74 |
6,063,405 | https://en.wikipedia.org/wiki/Gheorghe%20Vr%C4%83nceanu | Gheorghe Vrănceanu (June 30, 1900 – April 27, 1979) was a Romanian mathematician, best known for his work in differential geometry and topology. He was titular member of the Romanian Academy and vice-president of the International Mathematical Union.
Biography
He was born in 1900 in Valea Hogei, then a village in Vaslui County, now a component of Lipova commune, in Bacău County. He was the eldest of five children in his family. After attending primary school in his village and high school in Vaslui, he went to study mathematics at the University of Iași in 1919. There, he took courses with , Vera Myller, , Victor Vâlcovici, and Simion Stoilow. After graduating in 1922, he went in 1923 to the University of Göttingen, where he studied under David Hilbert. Thereafter, he went to the University of Rome, where he studied under Tullio Levi-Civita, obtaining his doctorate on November 5, 1924, with thesis Sopra una teorema di Weierstrass e le sue applicazioni alla stabilità. The thesis defense committee was composed of 11 faculty, and was headed by Vito Volterra.
Vrănceanu returned to Iași, where he was appointed a lecturer at the university. In 1927–1928, he was awarded a Rockefeller Foundation scholarship to study in France and the United States, where he was in a contact with Élie Cartan and Oswald Veblen. In 1929, he returned to Romania, and was appointed professor at the University of Cernăuți. In 1939, he moved to the University of Bucharest, where he was appointed Head of the Geometry and Topology department in 1948, a position he held until his retirement in 1970. His doctoral students include Henri Moscovici and .
Vrănceanu was elected to the Romanian Academy as a corresponding member in 1946, then as a full member in 1955. From 1964 he was president of the Mathematics Section of the Romanian Academy. Also from 1964, he was an editor of the journal Revue Roumaine de mathématiques pures et appliquées, founded that year. At the International Congress of Mathematicians held in Vancouver, Canada in 1974, he was elected vice-president of the International Mathematical Union, a position he held from 1975 to 1978. He died in Bucharest in 1979 of an intestinal obstruction and was buried at the city's Bellu Cemetery.
A high school in Bacău (Colegiul Național "Gheorghe Vrânceanu") is named after him, and so is a school in Lipova.
Research
During his career, Vrănceanu published over 300 articles in journals throughout the world. His work covers a whole range of modern geometry, from the classical theory of surfaces, to the notion of non-holonomic spaces, which he discovered.
In 1928 he gave an invited talk at the International Congress of Mathematicians in Bologna, titled Parallelisme et courbure dans une variété non holonome. In it, he introduced the notion of "non-holonomic manifolds," which are smooth manifolds provided with a smooth distribution that is generally not integrable.
Publications
Notes
References
1900 births
1979 deaths
People from Bacău County
Alexandru Ioan Cuza University alumni
Sapienza University of Rome alumni
20th-century Romanian mathematicians
Topologists
Differential geometers
Academic staff of Chernivtsi University
Academic staff of the University of Bucharest
Titular members of the Romanian Academy
Members of the Romanian Academy of Sciences
Romanian expatriates in Italy
Romanian expatriates in the United States
Romanian expatriates in Germany
Deaths from bowel obstruction
Burials at Bellu Cemetery | Gheorghe Vrănceanu | Mathematics | 761 |
9,138,400 | https://en.wikipedia.org/wiki/PX%20domain | The PX domain is a phosphoinositide-binding structural domain involved in targeting of proteins to cell membranes.
This domain was first found in the p40phox and p47phox subunits of NADPH oxidase (phox stands for phagocytic oxidase). It was also identified in many other proteins involved in membrane trafficking, including nexins, Phospholipase D, and phosphoinositide-3-kinases.
The PX domain is structurally conserved in eukaryotes, although amino acid sequences show little similarity. PX domains interact primarily with PtdIns(3)P lipids. However some of them bind to phosphatidic acid, PtdIns(3,4)P2, PtdIns(3,5)P2, PtdIns(4,5)P2, and PtdIns(3,4,5)P3. The PX-domain can also interact with other domains and proteins.
Human proteins containing this domain
Sorting nexins contain this domain. Other examples include:
HS1BP3
KIF16B (SNX23)
NCF1; NCF1C; NCF4; NISCH
PIK3C2A; PIK3C2B; PIK3C2G; PLD1; PLD2; PXK
RPS6KC1
SGK3; SH3PXD2A; SNAG1; SNX9
References
Peripheral membrane proteins
Protein domains | PX domain | Biology | 325 |
17,584,743 | https://en.wikipedia.org/wiki/Oral%20and%20maxillofacial%20pathology | Oral and maxillofacial pathology refers to the diseases of the mouth ("oral cavity" or "stoma"), jaws ("maxillae" or "gnath") and related structures such as salivary glands, temporomandibular joints, facial muscles and perioral skin (the skin around the mouth). The mouth is an important organ with many different functions. It is also prone to a variety of medical and dental disorders.
The specialty oral and maxillofacial pathology is concerned with diagnosis and study of the causes and effects of diseases affecting the oral and maxillofacial region. It is sometimes considered to be a specialty of dentistry and pathology. Sometimes the term head and neck pathology is used instead, which may indicate that the pathologist deals with otorhinolaryngologic disorders (i.e. ear, nose and throat) in addition to maxillofacial disorders. In this role there is some overlap between the expertise of head and neck pathologists and that of endocrine pathologists.
Diagnosis
The key to any diagnosis is thorough medical, dental, social and psychological history as well as assessing certain lifestyle risk factors that may be involved in disease processes. This is followed by a thorough clinical investigation including extra-oral and intra-oral hard and soft tissues.
It is sometimes the case that a diagnosis and treatment regime are possible to determine from history and examination, however it is good practice to compile a list of differential diagnoses. Differential diagnosis allows for decisions on what further investigations are needed in each case.
There are many types of investigations in the diagnosis of oral and maxillofacial diseases, including screening tests, imaging (radiographs, CBCT, CT, MRI, ultrasound) and histopathology (biopsy).
Biopsy
A biopsy is indicated when the patient's clinical presentation, past history or imaging studies do not allow a definitive diagnosis. A biopsy is a surgical procedure that involves the removal of a piece of tissue sample from the living organism for the purpose of microscopic examination. In most cases, biopsies are carried out under local anaesthesia. Some biopsies are carried out endoscopically, others under image guidance, for instance ultrasound, computed tomography (CT) or magnetic resonance imaging (MRI) in the radiology suite. Examples of the most common tissues examined by means of a biopsy include oral and sinus mucosa, bone, soft tissue, skin and lymph nodes.
Types of biopsies typically used for diagnosing oral and maxillofacial pathology are:
Excisional biopsy: A small lesion is totally excised. This method is preferred if the lesions are approximately 1 cm or less in diameter, clinically benign in appearance and surgically accessible. Large lesions which are more diffuse and dispersed in nature, or those which seem more clinically malignant, are not conducive to total removal.
Incisional biopsy: A small portion of the tissue is removed from an abnormal-looking area for examination. This method is useful in dealing with large lesions. If the abnormal region is easily accessed, the sample may be taken at the doctor's office. If the tumour is deeper inside the mouth or throat, the biopsy may need to be performed in an operating room. General anaesthesia is administered to eliminate any pain.
Exfoliative cytology: A suspected area is gently scraped to collect a sample of cells for examination. These cells are placed on a glass slide and stained with dye, so that they can be viewed under a microscope. If any cells appear abnormal, a deeper biopsy will be performed.
Diseases
Oral and maxillofacial pathology can involve many different types of tissues of the head. Different disease processes affect different tissues within this region with various outcomes. A great many diseases involve the mouth, jaws and orofacial skin. The following list is a general outline of pathologies that can affect oral and maxillofacial region; some are more common than others. This list is by no means exhaustive.
Congenital
Cleft lip and palate
Cleft lip and palate is one of the most common multifactorial congenital disorders, occurring in several forms in 1 in 500–1000 live births. The most common form is combined cleft lip and palate, which accounts for approximately 50% of cases, whereas isolated cleft lip concerns 20% of patients.
People with cleft lip and palate malformation tend to be less social and report lower self-esteem, anxiety and depression related to their facial malformation. One of the major goals in the treatment of patients with cleft is to enhance social acceptance by surgical reconstruction.
A cleft lip is an opening of the upper lip, mainly due to the failure of fusion of the medial nasal processes with the palatal processes; a cleft palate is the opening of the soft and hard palate in the mouth, which is due to the failure of the palatal shelves to fuse together.
The palate's main function is to demarcate the nasal and oral cavity, without which the patient will have problems with swallowing, eating and speech, thus affecting the quality of life and in some cases certain functions.
Some examples include food going up into the nasal cavity during swallowing as the soft palate is not present to close the cavity during the process. Speech is also affected as the nasal cavity is a source of resonance during speech and failure to manipulate spaces in the cavities will result in the lack of ability to produce certain consonants in audible language.
Macroglossia
Macroglossia is a rare condition, characterised by enlargement of the tongue, which will eventually create a crenated border in relation to the embrasures between the teeth.
Hereditary causes include vascular malformations, Down syndrome, Beckwith–Wiedemann syndrome, Duchenne muscular dystrophy, and Neurofibromatosis type I.
Acquired causes include carcinoma, lingual thyroid, myxedema, and amyloidosis.
Consequences may include noisy breathing (with airway obstruction in severe cases), drooling, difficulty eating, lisping speech, open bite, and a protruding tongue, which may ulcerate and undergo necrosis.
For mild cases, surgical treatment is not mandatory but if speech is affected, speech therapy may be useful. Reduction glossectomy may be required for severe cases.
Ankyloglossia
Ankyloglossia (also known as tongue-tie) may decrease the mobility of the tongue tip and is caused by an unusually short, thick lingual frenulum, a membrane connecting the underside of the tongue to the floor of the mouth.
Stafne defect
Stafne defect is a depression of the mandible, most commonly located on the lingual surface (the side nearest the tongue).
Torus palatinus
Torus palatinus is a bony protrusion on the palate, usually present on the midline of the hard palate.
Torus mandibularis
Torus mandibularis is a bony growth in the mandible along the surface nearest to the tongue. Mandibular tori usually are present near the premolars and above the location on the mandible of the mylohyoid muscle attachment.
Eagle syndrome
Eagle syndrome is a condition in which there is abnormal ossification of the stylohyoid ligament. This leads to an increase in the thickness and length of the stylohyoid process and the ligament. Pain is felt due to pressure applied to the internal jugular vein. Eagle syndrome occurs due to elongation of the styloid process or calcification of the stylohyoid ligament; however, the cause of the elongation is not clearly known. It may occur spontaneously or may be present from birth. The normal styloid process is usually 2.5–3 cm in length; if it is longer than 3 cm, it is classified as an elongated styloid process.
Acquired
Infective
Bacterial
(Plaque-induced) gingivitis—A common periodontal (gum) disease is gingivitis. Periodontal refers to the area the infection affects, which includes the teeth, gums, and tissues surrounding the teeth. Bacteria cause inflammation of the gums, which become red, swollen and can bleed easily. The bacteria along with mucus form a sticky colorless substance called plaque which harbours the bacteria. Plaque that is not removed by brushing and flossing hardens to form tartar that brushing does not clean. Smoking is a major risk factor. Treatment of gingivitis depends on the severity and extent of the disease's progression. If the disease is not too severe it is possible to treat it with chlorhexidine rinse and brushing with fluoride toothpaste to kill the bacteria and remove the plaque, but once the infection has progressed antibiotics may be needed to kill the bacteria.
Periodontitis—When gingivitis is not treated it can advance to periodontitis, when the gums pull away from the teeth and form pockets that harbor the bacteria. Bacterial toxins and the body's natural defenses start to break down the bone and connective tissues. The tooth may eventually become loose and have to be removed.
Scarlet fever is caused by a particular streptococcal species, Streptococcus pyogenes, and is classified as a severe form of bacterial sore throat. The condition involves the release of pyrogenic and erythrogenic exotoxins that provoke an immune response. It starts as tonsillitis and pharyngitis before involving the soft palate and the tongue. It usually occurs in children, in whom a fever occurs and an erythematous rash develops on the face and spreads to most of the body. If not treated, late stages of this condition may include a furred, raw, red tongue. Treatment options include penicillin and the prognosis is generally excellent.
Viral
Herpes simplex (infection with herpes simplex virus, or HSV) is very common in the mouth and lips. This virus can cause blisters and sores around the mouth (herpetic gingivostomatitis) and lips (herpes labialis). HSV infections tend to recur periodically. Although many people get infected with the virus, only 10% actually develop the sores. The sores may last anywhere from 3–10 days and are very infectious. Some people have recurrences either in the same location or at a nearby site. Unless the individual has an impaired immune system, e.g., owing to HIV or cancer-related immune suppression, recurrent infections tend to be mild in nature and may be brought on by stress, sun, menstrual periods, trauma or physical stress.
Mumps of the salivary glands is a viral infection of the parotid glands. This results in painful swelling at the sides of the mouth in both adults and children, which leads to a sore throat, and occasionally pain in chewing. The infection is quite contagious. Mumps is prevented through vaccination in infancy with the measles, mumps, and rubella (MMR) vaccination and subsequent boosters. There is no specific treatment for mumps except for hydration and painkillers with complete recovery ranging from 5–10 days. Sometimes mumps can cause inflammation of the brain, pancreatitis, testicular swelling or hearing loss.
Fungal
Oral candidiasis is by far the most common fungal infection that occurs in the mouth. It usually occurs in immunocompromised individuals. Individuals who have undergone a transplant, have HIV or cancer, or use corticosteroids commonly develop candidiasis of the mouth and oral cavity. Other risk factors are dentures and tongue piercing. The typical signs are a white patch that may be associated with burning, soreness, irritation or a white cheesy-like appearance. Once the diagnosis is made, candidiasis can be treated with a variety of antifungal drugs.
Traumatic
Chemical, thermal, mechanical or electrical trauma to the oral soft tissues can cause traumatic oral ulceration.
Autoimmune
Sjögren syndrome is an autoimmune chronic inflammatory disorder characterised by some of the body's own immune cells infiltrating and destroying lacrimal and salivary glands (and other exocrine glands). There are two types of Sjögren syndrome: primary and secondary. In primary Sjögren syndrome (pSS) individuals have dry eyes (keratoconjunctivitis sicca) and a dry mouth (xerostomia). Based on a meta-analysis, the prevalence of pSS worldwide is estimated at 0.06%, with 90% of the patients being female. In secondary Sjögren syndrome (sSS), individuals have a dry mouth, dry eyes and a connective tissue disorder such as rheumatoid arthritis (prevalence 7% in the UK), systemic lupus erythematosus (prevalence 6.5%–19%) or systemic sclerosis (prevalence 14%–20.5%). Additional features and symptoms include:
Erythema and lobulation of the tongue
Oral discomfort
Difficulty in swallowing and talking
Altered taste
Poor retention of dentures (if worn)
Oral fungal and bacterial infections
Salivary glands swelling
Dryness of the skin, nose, throat and vagina
Peripheral neuropathies
Pulmonary, thyroid and renal disorders
Arthralgias and myalgias
Tests used to diagnose Sjögren syndrome include:
tear break-up time and Schirmer's tests
a minor salivary gland biopsy taken from the lip
blood tests
salivary flow rate
There is no cure for Sjögren syndrome; however, there are treatments used to help with the associated symptoms.
Eye care: artificial tears, moisture chamber spectacles, punctal plugs, pilocarpine medication
Mouth care: increase oral intake, practice good oral hygiene, use sugar free gum (to increase saliva flow), regular use of mouth rinses, pilocarpine medication, reduce alcohol intake and smoking cessation. Saliva substitutes are also available as a spray, gel, gum or in the form of a medicated sweet
Dry skin: creams, moisturising soaps
Vaginal dryness: lubricant, oestrogen creams, hormonal replacement therapy
Muscle and joint pains: Non-steroidal anti-inflammatory drugs
Complications of Sjögren syndrome include ulcers that can develop on the surface of the eyes if the dryness is not treated. These ulcers can then cause more worrying issues such as loss of eyesight and life-long damage. Individuals with Sjögren syndrome have a slightly increased risk of developing non-Hodgkin lymphoma, a type of cancer. Other conditions such as peripheral neuropathy, Raynaud's phenomenon, kidney problems, underactive thyroid gland and irritable bowel syndrome have been linked to Sjögren syndrome.
Inflammatory
Angioedema
Neoplastic
Oral cancer may occur on the lips, tongue, gums, floor of the mouth or inside the cheeks. The majority of cancers of the mouth are squamous cell carcinoma. Oral cancers are usually painless in the initial stages or may appear like an ulcer. Causes of oral cancer include smoking, excessive alcohol consumption, exposure to sunlight (lip cancer), chewing tobacco, infection with human papillomavirus, and hematopoietic stem cell transplantation. The earlier the oral cancer is diagnosed, the better the chances for full recovery. Persistent suspicious masses or ulcers on the mouth should always be examined. Diagnosis is usually made with a biopsy; treatment depends on the exact type of cancer, where it is situated, and extent of spreading.
Environmental
Unknown
There are many oral and maxillofacial pathologies which are not fully understood.
Burning mouth syndrome (BMS) is a disorder where there is a burning sensation in the mouth that has no identifiable medical or dental cause. The disorder can affect anyone but tends to occur most often in middle-aged women. BMS has been hypothesized to be linked to a variety of factors such as the menopause, dry mouth (xerostomia) and allergies. BMS usually lasts for several years before disappearing for unknown reasons. Other features of this disorder include anxiety, depression and social isolation. There is no cure for this disorder and treatment includes use of hydrating agents, pain medications, vitamin supplements or the usage of antidepressants.
Aphthous stomatitis is a condition in which ulcers (canker sores) appear on the inside of the mouth, on the lips and on the tongue. Most small canker sores disappear within 10–14 days. Canker sores are most common in young and middle-aged individuals. Sometimes individuals with allergies are more prone to these sores. Besides causing an awkward sensation, these sores can also cause pain, tingling or a burning sensation. Unlike herpes sores, canker sores are always found inside the mouth and are usually less painful. Good oral hygiene helps, but topical corticosteroids may be necessary.
Migratory stomatitis is a condition that involves the tongue and other oral mucosa. The common migratory glossitis (geographic tongue) affects the anterior two thirds of the dorsal and lateral tongue mucosa of 1% to 2.5% of the population, with one report of up to 12.7% of the population. The tongue is often fissured, especially in elderly individuals. In the American population, a lower prevalence was reported among Mexican Americans (compared with Caucasians and African Americans) and cigarette smokers. When other oral mucosa, besides the dorsal and lateral tongue, are involved, the term migratory stomatitis (or ectopic geographic tongue) is preferred. In this condition, lesions infrequently also involve the ventral tongue and buccal or labial mucosa. They are rarely reported on the soft palate and floor of the mouth.
Specialty
Oral and maxillofacial pathology, previously termed oral pathology, is a speciality involved with the diagnosis and study of the causes and effects of diseases affecting the oral and maxillofacial regions (i.e. the mouth, the jaws and the face). It can be considered a speciality of dentistry and pathology. Oral pathology is a closely allied speciality with oral and maxillofacial surgery and oral medicine.
The clinical evaluation and diagnosis of oral mucosal diseases are in the scope of oral and maxillofacial pathology specialists and oral medicine practitioners, both disciplines of dentistry.
When a microscopic evaluation is needed, a biopsy is taken, and microscopically observed by a pathologist. The American Dental Association uses the term oral and maxillofacial pathology, and describes it as "the specialty of dentistry and pathology which deals with the nature, identification, and management of diseases affecting the oral and maxillofacial regions. It is a science that investigates the causes, processes and effects of these diseases."
In some parts of the world, oral and maxillofacial pathologists take on responsibilities in forensic odontology.
Geographic variation
United Kingdom
There are approximately 30 consultant oral and maxillofacial pathologists in the UK. A dental degree is mandatory, but a medical degree is not. The shortest pathway to becoming an oral pathologist in the UK is completion of two years' general professional training and then five years in a diagnostic histopathology training course. After passing the required Royal College of Pathologists exams and gaining a Certificate of Completion of Specialist Training, the trainee is entitled to apply for registration as a specialist. Many oral and maxillofacial pathologists in the UK are clinical academics, having undertaken a PhD either prior to or during training. Generally, oral and maxillofacial pathologists in the UK are employed by dental or medical schools and undertake their clinical work at university hospital departments.
New Zealand
There are five practising oral pathologists in New Zealand. Oral pathologists in New Zealand also take part in forensic evaluations.
See also
Tongue disease
Salivary gland disease
Head and neck cancer
Oral surgery
Tooth pathology
References
Further reading
External links
British Society of Oral & Maxillo-facial Pathologists Website
Academy of Oral and Maxillofacial Pathology Website
Main
Pathology | Oral and maxillofacial pathology | Biology | 4,214 |
25,066,328 | https://en.wikipedia.org/wiki/Nuclear%20factor%20I | Nuclear factor I (NF-I) is a family of closely related transcription factors. They constitutively bind as dimers to specific sequences of DNA with high affinity. Family members contain an unusual DNA binding domain that binds to the recognition sequence 5'-TTGGCXXXXXGCCAA-3'.
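As an illustration of how such a consensus sequence can be located computationally, the sketch below scans a DNA string for exact matches to the TTGGC(N)5GCCAA recognition sequence using a regular expression. It is a minimal example, not a bioinformatics tool: the function name and the toy sequence are invented for the illustration, and real binding-site searches typically allow degenerate matches.

```python
import re

# NF-I consensus: TTGGC, five unspecified bases, then GCCAA.
# The motif is palindromic, so its reverse complement matches the
# same pattern; scanning one strand suffices for exact hits.
NFI_MOTIF = re.compile(r"TTGGC[ACGT]{5}GCCAA")

def find_nfi_sites(sequence: str):
    """Return (position, site) pairs for NF-I consensus matches."""
    sequence = sequence.upper()
    return [(m.start(), m.group()) for m in NFI_MOTIF.finditer(sequence)]

dna = "GGATTGGCATGCAGCCAATC"   # toy sequence with one embedded site
print(find_nfi_sites(dna))     # [(3, 'TTGGCATGCAGCCAA')]
```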
Subtypes include:
NFIA
NFIB
NFIC
NFIX
References
Transcription factors | Nuclear factor I | Chemistry,Biology | 87 |
73,043,984 | https://en.wikipedia.org/wiki/8-Hydroxyhexahydrocannabinol | 8-Hydroxyhexahydrocannabinols (8-OH-9α-HHC and 8-OH-9β-HHC) are active primary metabolites of hexahydrocannabinol (HHC) in animals and trace phytocannabinoids. The 8-OH-HHCs are produced in notable concentrations following HHC administration in several animal species, including humans. They have drawn research interest for their role in HHC toxicology and as stereoisomeric probes of the cannabinoid drug/receptor interaction.
Like Δ9-THC and Δ8-THC, HHC is processed by cytochrome P450 enzymes (CYP3A4, CYP2C9 and CYP2C19) to a series of oxygenated derivatives, some of which maintain activity. While 11-OH-HHC and its downstream products are the major metabolites of HHC metabolism, hydroxylation at C8 plays a varyingly significant role across animal species. Metabolite ratios are also subject to interspecies variation, with one study finding that mouse hepatocytes preferentially produced 8α-OH-HHC (49/5 α/β) while hamster hepatocytes showed the opposite selectivity (20/43 α/β).
While 11-OH-HHC is quickly oxidized to the inactive, water-soluble 11-COOH-HHC, further oxidation of 8-OH instead yields the 8-oxo derivatives, which are then conjugated and excreted.
Stereoisomerism
There are four possible 8-OH-HHC metabolites arising from naturally derived HHCs: cis- and trans-8-OH-9α-HHC & cis- and trans-8-OH-9β-HHC. All four have been prepared synthetically to probe stereochemical effects on cannabinoid biological activity. In in vivo tests on rhesus macaques, Mechoulam and coworkers found the highest activity in the cis-8-OH-9β-HHC stereoisomer. All four forms are believed to be active.
References
Cannabinoids
Human drug metabolites
Recreational drug metabolites
Benzochromenes
Diols | 8-Hydroxyhexahydrocannabinol | Chemistry | 476 |
48,397 | https://en.wikipedia.org/wiki/Isocyanate | In organic chemistry, isocyanate is the functional group with the formula R−N=C=O. Organic compounds that contain an isocyanate group are referred to as isocyanates. An organic compound with two isocyanate groups is known as a diisocyanate. Diisocyanates are manufactured for the production of polyurethanes, a class of polymers.
Isocyanates should not be confused with cyanate esters and isocyanides, very different families of compounds. The cyanate (cyanate ester) functional group (R−O−C≡N) is arranged differently from the isocyanate group (R−N=C=O). Isocyanides have the connectivity R−N≡C, lacking the oxygen of the cyanate groups.
Structure and bonding
In terms of bonding, isocyanates are closely related to carbon dioxide (CO2) and carbodiimides (C(NR)2). The C−N=C=O unit that defines isocyanates is planar, and the N=C=O linkage is nearly linear. In phenyl isocyanate, the C=N and C=O distances are respectively 1.195 and 1.173 Å. The C−N=C angle is 134.9° and the N=C=O angle is 173.1°.
Production
Isocyanates are usually produced from amines by phosgenation, i.e. treating with phosgene:

RNH2 + COCl2 → RNCO + 2 HCl
These reactions proceed via the intermediacy of a carbamoyl chloride (RNHC(O)Cl). Owing to the hazardous nature of phosgene, the production of isocyanates requires special precautions. A laboratory-safe variation masks the phosgene as oxalyl chloride. Also, oxalyl chloride can be used to form acyl isocyanates from primary amides, which phosgene typically dehydrates to nitriles instead.
Another route to isocyanates entails addition of isocyanic acid to alkenes. Complementarily, alkyl isocyanates form by displacement reactions involving alkyl halides and alkali metal cyanates.
Aryl isocyanates can be synthesized from carbonylation of nitro- and nitrosoarenes; a palladium catalyst is necessary to avoid side-reactions of the nitrene intermediate.
Three rearrangement reactions involving nitrenes give isocyanates:
Schmidt reaction, a reaction where a carboxylic acid is treated with ammonia and hydrazoic acid yielding an isocyanate.
Curtius rearrangement degradation of an acyl azide to an isocyanate and nitrogen gas.
Lossen rearrangement, the conversion of a hydroxamic acid to an isocyanate via the formation of an O-acyl, sulfonyl, or phosphoryl intermediate.
An isocyanate is also the immediate product of the Hofmann rearrangement, but typically hydrolyzes under reaction conditions.
Reactivity
With nucleophiles
Isocyanates are electrophiles, and as such they are reactive toward a variety of nucleophiles including alcohols, amines, and even water having a higher reactivity compared to structurally analogous isothiocyanates.
Upon treatment with an alcohol, an isocyanate forms a urethane linkage:

RNCO + R'OH → RNHC(O)OR'
where R and R' are alkyl or aryl groups.
If a diisocyanate is treated with a compound containing two or more hydroxyl groups, such as a diol or a polyol, polymer chains are formed, which are known as polyurethanes.
Isocyanates react with water to form amines and carbon dioxide:

RNCO + H2O → RNH2 + CO2
This reaction is exploited in tandem with the production of polyurethane to give polyurethane foams. The carbon dioxide functions as a blowing agent.
Isocyanates also react with amines to give ureas:

RNCO + R'NH2 → RNHC(O)NHR'
The addition of an isocyanate to a urea gives a biuret:
Reaction between a di-isocyanate and a compound containing two or more amine groups produces long polymer chains known as polyureas.
Carbodiimides are produced by the decarboxylation of alkyl and aryl isocyanates using phosphine oxides as a catalyst:

2 RNCO → RN=C=NR + CO2
Cyclization
Isocyanates can also react with themselves. Aliphatic diisocyanates can trimerise to form substituted isocyanuric acid groups. This can be seen in the formation of polyisocyanurate resins (PIR), which are commonly used as rigid thermal insulation. Isocyanates also participate in Diels–Alder reactions, functioning as dienophiles.
Rearrangement reactions
Isocyanates are common intermediates in the synthesis of primary amines via hydrolysis:
Hofmann rearrangement, a reaction in which a primary amide is treated with a strong oxidizer such as sodium hypobromite or lead tetraacetate to form an isocyanate intermediate.
Common isocyanates
The global market for diisocyanates in the year 2000 was 4.4 million tonnes, of which 61.3% was methylene diphenyl diisocyanate (MDI), 34.1% was toluene diisocyanate (TDI), 3.4% was the total for hexamethylene diisocyanate (HDI) and isophorone diisocyanate (IPDI), and 1.2% was the total for various others. A monofunctional isocyanate of industrial significance is methyl isocyanate (MIC), which is used in the manufacture of pesticides.
Common applications
MDI is commonly used in the manufacture of rigid foams and surface coatings. Polyurethane foam boards are used in construction for insulation. TDI is commonly used in applications where flexible foams are needed, such as furniture and bedding. Both MDI and TDI are used in the making of adhesives and sealants due to their weather-resistant properties. Both are also widely used in spray-applied insulation because of the speed and flexibility of application: foams can be sprayed into structures and harden in place or retain some flexibility as required by the application. HDI is commonly utilized in high-performance surface-coating applications, including automotive paints.
Health and safety
The risks of isocyanates was brought to the world's attention with the 1984 Bhopal disaster, which caused the death of nearly 4000 people from the accidental release of methyl isocyanate. In 2008, the same chemical was involved in an explosion at a pesticide manufacturing plant in West Virginia.
LD50s for isocyanates are typically several hundred milligrams per kilogram. Despite this low acute toxicity, an extremely low short-term exposure limit (STEL) of 0.07 mg/m3 is the legal limit for all isocyanates (except methyl isocyanate: 0.02 mg/m3) in the United Kingdom. These limits are set to protect workers from chronic health effects such as occupational asthma, contact dermatitis, or irritation of the respiratory tract.
Since they are used in spraying applications, the properties of their aerosols have attracted attention. In the U.S., OSHA conducted a National Emphasis Program on isocyanates starting in 2013 to make employers and workers more aware of the health risks.
Polyurethanes have variable curing times, and the presence of free isocyanates in foams vary accordingly.
Both the US National Toxicology Program (NTP) and International Agency for Research on Cancer (IARC) have evaluated TDI as a potential human carcinogen and Group 2B "possibly carcinogenic to humans". MDI appears to be relatively safer and is unlikely a human carcinogen. The IARC evaluates MDI as Group 3 "not classifiable as to its carcinogenicity in humans".
All major producers of MDI and TDI are members of the International Isocyanate Institute, which promotes the safe handling of MDI and TDI.
Hazards
Toxicity
Isocyanates can present respiratory hazards as particulates, vapors or aerosols. Autobody shop workers are a commonly examined population for isocyanate exposure, as they are repeatedly exposed when spray painting automobiles and can be exposed when installing truck bed liners. Hypersensitivity pneumonitis has a slower onset and features chronic inflammation that can be seen on imaging of the lungs. Occupational asthma is a worrisome outcome of respiratory sensitization to isocyanates, as it can be acutely fatal. Diagnosis of occupational asthma is generally performed using pulmonary function testing (PFT) by pulmonology or occupational medicine physicians. Occupational asthma is much like asthma in that it causes episodic shortness of breath and wheezing. Both the dose and duration of exposure to isocyanates can lead to respiratory sensitization. Dermal exposure to isocyanates can also sensitize an exposed person to respiratory disease.
Dermal exposures can occur via mixing, spraying coatings, or applying and spreading coatings manually. Even when the right personal protective equipment (PPE) is used, exposures can occur on body areas not completely covered. Isocyanates can also permeate improper PPE, necessitating frequent changes of both disposable gloves and suits if they become overexposed.
Flammability
Methyl isocyanate (MIC) is highly flammable. MDI and TDI are much less flammable. Flammability of materials is a consideration in furniture design. The specific flammability hazard is noted on the safety data sheet (SDS) for specific isocyanates.
Hazard minimization
Industrial science attempts to minimize the hazards of isocyanates through multiple techniques. The EPA has sponsored ongoing research on polyurethane production without isocyanates. Where isocyanates are unavoidable but interchangeable, substituting a less hazardous isocyanate may control hazards. Ventilation and automation can also minimize worker exposure to the isocyanates used.
If human workers must enter isocyanate-contaminated areas, personal protective equipment (PPE) can reduce their intake. In general, workers wear eye protection, gloves and coveralls to reduce dermal exposure. For some autobody paint and clear-coat spraying applications, a full-face mask is required.
The US Occupational Safety and Health Administration (OSHA) requires frequent training to ensure isocyanate hazards are appropriately minimized. Moreover, OSHA requires standardized isocyanate concentration measurements to avoid violating occupational exposure limits. In the case of MDI, OSHA expects sampling with glass-fiber filters at standard air flow rates, and then liquid chromatography.
Combined industrial hygiene and medical surveillance can significantly reduce occupational asthma incidence. Biological tests exist to identify isocyanate exposure; the US Navy uses regular pulmonary function testing and screening questionnaires.
Emergency management is a complex process of preparation and should be considered in a setting where a release of bulk chemicals may threaten the well-being of the public. In the Bhopal disaster, an uncontrolled MIC release killed thousands, affected hundreds of thousands more, and spurred the development of modern disaster preparation.
Occupational exposure limits
Exposure limits can be expressed as ceiling limits, a maximal value, short-term exposure limits (STEL), a 15-minute exposure limit or an 8-hour time-weighted average limit (TWA). Below is a sampling, not exhaustive, as less common isocyanates also have specific limits within the United States, and in some regions there are limits on total isocyanate, which recognizes some of the uncertainty regarding the safety of mixtures of chemicals as compared to pure chemical exposures. For example, while there is no OEL for HDI, NIOSH has a REL of 5 ppb for an 8-hour TWA and a ceiling limit of 20 ppb, consistent with the recommendations for MDI.
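As a worked illustration of the arithmetic behind an 8-hour TWA (the concentrations and durations below are hypothetical examples, not survey data or regulatory values), the average is the exposure-weighted sum over the shift divided by 8 hours:

```python
# Illustrative 8-hour time-weighted average (TWA) calculation.
# All numbers are hypothetical, not measured or regulatory data.

def twa_8h(samples):
    """samples: list of (concentration_ppb, duration_hours) pairs.
    Time in the 8-hour shift not covered by a sample is treated
    as zero exposure."""
    exposure = sum(conc * hours for conc, hours in samples)  # ppb*h
    return exposure / 8.0

# e.g. 30 ppb for 2 h of spraying, 5 ppb for 6 h of other work
shift = [(30.0, 2.0), (5.0, 6.0)]
print(twa_8h(shift))  # 11.25 ppb, compared against the applicable TWA limit
```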
Regulation
United States
The Occupational Safety and Health Administration (OSHA) is the regulatory body covering worker safety. OSHA puts forth a permissible exposure limit (PEL) of 20 ppb for MDI and detailed technical guidance on exposure assessment.
The National Institute for Occupational Safety and Health (NIOSH) is the agency responsible for providing the research and recommendations regarding workplace safety, while OSHA is more of an enforcement body. NIOSH is responsible for producing the science that can result in recommended exposure limits (RELs), which can be lower than the PEL. OSHA is tasked with enforcement and defending the enforceable limits (PELs). In 1992, when OSHA reduced the PEL for TDI to the NIOSH REL, the PEL reduction was challenged in court, and the reduction was reversed.
The Environmental Protection Agency (EPA) is also involved in the regulation of isocyanates with regard to the environment and also non-worker persons that might be exposed.
The American Conference of Governmental Industrial Hygienists (ACGIH) is a non-government organization that publishes exposure guidance known as threshold limit values (TLVs). The TLV is not an OSHA-enforceable value, unless the PEL is the same.
European Union
The European Chemicals Agency (ECHA) provides regulatory oversight of chemicals used within the European Union. ECHA has been implementing policy aimed at limiting worker exposure through elimination by lower allowable concentrations in products and mandatory worker training, an administrative control. Within the European Union, many nations set their own occupational exposure limits for isocyanates.
International groups
The United Nations, through the World Health Organization (WHO) together with the International Labour Organization (ILO) and United Nations Environment Programme (UNEP), collaborate on the International Programme on Chemical Safety (IPCS) to publish summary documents on chemicals. The IPCS published one such document in 2000 summarizing the status of scientific knowledge on MDI.
The IARC evaluates the hazard data on chemicals and assigns a rating on the risk of carcinogenesis. In the case of TDI, the final evaluation is possibly carcinogenic to humans (Group 2B). For MDI, the final evaluation is not classifiable as to its carcinogenicity to humans (Group 3).
The International Isocyanate Institute is an international industry consortium that seeks to promote the safe utilization of isocyanates by promulgating best practices.
See also
Isothiocyanate
Polymethylene polyphenylene isocyanate
References
External links
NIOSH Safety and Health Topic: Isocyanates, from the website of the National Institute for Occupational Safety and Health (NIOSH)
Health and Safety Executive, website of the UK Health and Safety Executive, useful search terms on this site — isocyanates, MVR, asthma
International Isocyanate Institute
Safe Working Procedure for Isocyanate-Containing Products, June 200.
Isocyanates – Measurement Methodology, Exposure and Effects, Swedish National Institute for Working Life Workshop (1999)
Health and Safety Executive, Guidance Note (EH16) Isocyanates: Toxic Hazards and Precautions (1984)
The Society of the Plastics Industry – Technical Bulletin AX119 MDI-Based Polyurethane Foam Systems: Guidelines for Safe Handling and Disposal (1993)
An occupational hygiene assessment of the use and control of isocyanates in the UK by Hilary A Cowie et al. HSE Research Report RR311/2005. Prepared by the Institute of Occupational Medicine for the Health and Safety Executive
Functional groups
Commodity chemicals
Chemical hazards | Isocyanate | Chemistry | 3,291 |
19,301,037 | https://en.wikipedia.org/wiki/Coulombi%20egg%20tanker | The Coulombi egg tanker is a design that is aimed at reducing oil spills. It was approved by the International Maritime Organization (IMO) as an alternative to the double hull concept. The United States Coast Guard does not allow this design to enter US waters, effectively preventing it from being built.
Concept
The design is an enhanced mid-deck tanker and consists of a series of centre and wing tanks that are divided by horizontal bulkheads. The upper wing tanks form ballast tanks and act as emergency receiver tanks for cargo should the lower tanks be fractured. The lower tanks are connected to these ballast tanks by non-return valves.
When a lower tank is damaged, the incoming sea water pushes the oil in the damaged tank up into the ballast tank. Because of the hydrostatic pressure, there is an automatic transfer out of the damaged tank.
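The transfer can be pictured as a simple hydrostatic balance. The sketch below is a minimal illustration only, with assumed densities and an assumed breach depth rather than actual design values: water stops entering once the oil column standing above the breach is tall enough that the internal pressure equals the external seawater pressure.

```python
# Toy hydrostatic-balance estimate for the Coulombi egg principle.
# Densities and breach depth are illustrative assumptions.
RHO_SEAWATER = 1025.0  # kg/m^3
RHO_CRUDE = 850.0      # kg/m^3 (light crude, assumed)

def equilibrium_oil_column(breach_depth_m: float) -> float:
    """Height of oil above the breach at which inside and outside
    hydrostatic pressures balance: rho_sw * g * d = rho_oil * g * h."""
    return RHO_SEAWATER / RHO_CRUDE * breach_depth_m

d = 12.0  # breach 12 m below the waterline (assumed)
print(f"{equilibrium_oil_column(d):.1f} m")  # ~14.5 m: the lighter oil
# is displaced upward (here, into the ballast tanks) until balance.
```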
Advantages
The double-hull design aims at a high probability of zero outflow. In low-energy collisions, where only the outer hull is penetrated, this will be the case. However, in high-energy collisions both hulls are penetrated. As the tanks of a double-hull tanker are larger than those of MARPOL and pre-MARPOL tankers and the cargo sits higher above the water line, the resulting spill can be much larger than with these single-hull designs. In the Coulombi egg design, spillage is greatly reduced, possibly to zero.
Where a double-hull VLCC has a ballast tank coated area of about 225,000 m², in a Coulombi egg tanker this area is reduced to 66,000 m². This reduces maintenance and corrosion risks, which otherwise may result in structural failure, as was the case with the Erika and the Prestige.
References
Bibliography
External links
Ship types
Ship design
Shipbuilding | Coulombi egg tanker | Engineering | 359 |
4,023,059 | https://en.wikipedia.org/wiki/Two-dimensional%20nuclear%20magnetic%20resonance%20spectroscopy | Two-Dimensional Nuclear Magnetic Resonance (2D NMR) is an advanced spectroscopic technique that builds upon the capabilities of one-dimensional (1D) NMR by incorporating an additional frequency dimension. This extension allows for a more comprehensive analysis of molecular structures. In 2D NMR, signals are distributed across two frequency axes, providing improved resolution and separation of overlapping peaks, particularly beneficial for studying complex molecules. This technique identifies correlations between different nuclei within a molecule, facilitating the determination of connectivity, spatial proximity, and dynamic interactions.
2D NMR encompasses a variety of experiments, including COSY (Correlation Spectroscopy), TOCSY (Total Correlation Spectroscopy), NOESY (Nuclear Overhauser Effect Spectroscopy), and HSQC (Heteronuclear Single Quantum Coherence). These techniques are indispensable in fields such as structural biology, where they are pivotal in determining protein and nucleic acid structures; organic chemistry, where they aid in elucidating complex organic molecules; and materials science, where they offer insights into molecular interactions in polymers and metal-organic frameworks. By resolving signals that would typically overlap in the 1D NMR spectra of complex molecules, 2D NMR enhances the clarity of structural information. 2D NMR can provide detailed information about the chemical structure and the three-dimensional arrangement of molecules.
The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at the Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976.
Fundamental concepts
Each experiment consists of a sequence of radio frequency (RF) pulses with delay periods in between them. The timing, frequencies, and intensities of these pulses distinguish different NMR experiments from one another. Almost all two-dimensional experiments have four stages: the preparation period, where a magnetization coherence is created through a set of RF pulses; the evolution period, a determined length of time during which no pulses are delivered and the nuclear spins are allowed to freely precess (rotate); the mixing period, where the coherence is manipulated by another series of pulses into a state which will give an observable signal; and the detection period, in which the free induction decay signal from the sample is observed as a function of time, in a manner identical to one-dimensional FT-NMR.
The two dimensions of a two-dimensional NMR experiment are two frequency axes representing a chemical shift. Each frequency axis is associated with one of the two time variables, which are the length of the evolution period (the evolution time) and the time elapsed during the detection period (the detection time). They are each converted from a time series to a frequency series through a two-dimensional Fourier transform. A single two-dimensional experiment is generated as a series of one-dimensional experiments, with a different specific evolution time in successive experiments, with the entire duration of the detection period recorded in each experiment.
The end result is a plot showing an intensity value for each pair of frequency variables. The intensities of the peaks in the spectrum can be represented using a third dimension. More commonly, intensity is indicated using contour lines or different colors.
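As a rough numeric sketch of this processing (a toy calculation, not a simulation of any real pulse sequence; the frequencies, dwell times and decay constant are assumed values), the following Python/NumPy code builds a synthetic t1 × t2 data matrix, one row per evolution-time increment, and converts it to a two-dimensional spectrum with a 2D Fourier transform:

```python
import numpy as np

# Minimal numeric sketch of 2D NMR data processing: one synthetic
# signal per (t1, t2) point, then a 2D Fourier transform maps the
# two time axes onto two frequency axes.
n1, n2 = 256, 256          # t1 increments, t2 points
dt1, dt2 = 1e-3, 1e-3      # dwell times in seconds (assumed)
f1, f2 = 60.0, 140.0       # evolution/detection frequencies in Hz (assumed)
t1 = np.arange(n1)[:, None] * dt1
t2 = np.arange(n2)[None, :] * dt2

# Each row is one 1D experiment recorded with a different t1;
# the t1 modulation encodes the indirect dimension.
fid = (np.exp(2j * np.pi * f1 * t1)
       * np.exp(2j * np.pi * f2 * t2)
       * np.exp(-(t1 + t2) / 0.05))   # exponential decay

spectrum = np.fft.fftshift(np.abs(np.fft.fft2(fid)))
# A single peak appears near (f1, f2) = (60 Hz, 140 Hz); intensity
# would usually be rendered as contour lines or colors.
```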
Homonuclear through-bond correlation methods
In these methods, magnetization transfer occurs between nuclei of the same type, through J-coupling of nuclei connected by up to a few bonds.
Correlation spectroscopy (COSY)
The first and most popular two-dimensional NMR experiment is the homonuclear correlation spectroscopy (COSY) sequence, which is used to identify spins which are coupled to each other. It consists of a single RF pulse (p1) followed by the specific evolution time (t1), followed by a second pulse (p2) and a measurement period (t2).
The Correlation Spectroscopy experiment operates by correlating nuclei coupled to each other through scalar coupling, also known as J-coupling. This coupling is the interaction between nuclear spins connected by bonds, typically observed between nuclei that are 2-3 bonds apart (e.g., vicinal protons). By detecting these interactions, COSY provides vital information about the connectivity between atoms within a molecule, making it a crucial tool for structural elucidation in organic chemistry.
The COSY experiment generates a two-dimensional spectrum with chemical shifts along the x-axis (horizontal) and y-axis (vertical) and involves several key steps. First, the sample is excited using a series of radiofrequency (RF) pulses, bringing the nuclear spins into a higher energy state. After the first RF pulse, the system evolves freely for a period called t1, during which the spins precess at frequencies corresponding to their chemical shifts. The correlation between nuclei is achieved by incrementally varying the evolution time (t1) to capture indirect interactions. This series of experiments, each with a different value of t1, allows for the detection of chemical shifts from nuclei that may not be observed directly in a one-dimensional spectrum. As t1 is incremented, cross-peaks are produced in the resulting 2D spectrum, representing interactions like coupling or spatial proximity between nuclei. This approach helps map out atomic connections, providing deeper insight into molecular structure and aiding in the interpretation of complex systems.
Next, a second RF pulse is applied to allow magnetization to transfer between coupled nuclei. The resulting signal is recorded continuously during a detection period (t2) after the second RF pulse. The data are then processed through Fourier transformation along both the t1 and t2 axes, creating a 2D spectrum with peaks plotted along the diagonal and off-diagonal.
Cross peaks result from a phenomenon called magnetization transfer, and their presence indicates that two nuclei are coupled which have the two different chemical shifts that make up the cross peak's coordinates. Each coupling gives two symmetrical cross peaks above and below the diagonal. That is, a cross-peak occurs when there is a correlation between the signals of the spectrum along each of the two axes at these values.
An easy visual way to determine which couplings a cross peak represents is to find the diagonal peak which is directly above or below the cross peak, and the other diagonal peak which is directly to the left or right of the cross peak. The nuclei represented by those two diagonal peaks are coupled.
When interpreting the COSY spectrum, diagonal peaks correspond to the 1D chemical shifts of individual nuclei, similar to the standard peaks in a 1D NMR spectrum. The key feature of a COSY spectrum is the presence of cross-peaks, indicating coupling between pairs of nuclei. These cross-peaks provide crucial information about the connectivity within a molecule, showing that the two nuclei are connected by a small number of bonds, usually two or three bonds.
COSY is especially useful when dealing with complex molecules such as natural products, peptides, and proteins, where understanding the connectivity of different nuclei through bonds is crucial. While 1D NMR is more straightforward and ideal for identifying basic structural features, COSY enhances the capabilities of NMR by providing deeper insights into molecular connectivity.
The two-dimensional spectrum that results from the COSY experiment shows the frequencies for a single isotope, most commonly hydrogen (1H) along both axes. (Techniques have also been devised for generating heteronuclear correlation spectra, in which the two axes correspond to different isotopes, such as 13C and 1H.) Diagonal peaks correspond to the peaks in a 1D-NMR experiment, while the cross peaks indicate couplings between pairs of nuclei (much as multiplet splitting indicates couplings in 1D-NMR).
COSY-90 is the most common COSY experiment. In COSY-90, the p1 pulse tilts the nuclear spin by 90°. Another member of the COSY family is COSY-45. In COSY-45 a 45° pulse is used instead of a 90° pulse for the second pulse, p2. The advantage of a COSY-45 is that the diagonal-peaks are less pronounced, making it simpler to match cross-peaks near the diagonal in a large molecule. Additionally, the relative signs of the coupling constants (see J-coupling#Magnitude of J-coupling) can be elucidated from a COSY-45 spectrum. This is not possible using COSY-90. Overall, the COSY-45 offers a cleaner spectrum while the COSY-90 is more sensitive.
Another related COSY technique is double quantum filtered (DQF) COSY. DQF COSY uses a coherence selection method such as phase cycling or pulsed field gradients, which cause only signals from double-quantum coherences to give an observable signal. This has the effect of decreasing the intensity of the diagonal peaks and changing their lineshape from a broad "dispersion" lineshape to a sharper "absorption" lineshape. It also eliminates diagonal peaks from uncoupled nuclei. These all have the advantage that they give a cleaner spectrum in which the diagonal peaks are prevented from obscuring the cross peaks, which are weaker in a regular COSY spectrum.
Exclusive correlation spectroscopy (ECOSY)
Total correlation spectroscopy (TOCSY)
The TOCSY experiment is similar to the COSY experiment, in that cross peaks of coupled protons are observed. However, cross peaks are observed not only for nuclei which are directly coupled, but also between nuclei which are connected by a chain of couplings. This makes it useful for identifying the larger interconnected networks of spin couplings. This ability is achieved by inserting a repetitive series of pulses which cause isotropic mixing during the mixing period. Longer isotropic mixing times cause the polarization to spread out through an increasing number of bonds.
In the case of oligosaccharides, each sugar residue is an isolated spin system, so it is possible to differentiate all the protons of a specific sugar residue. A 1D version of TOCSY is also available, and by irradiating a single proton the rest of the spin system can be revealed. Recent advances in this technique include the 1D-CSSF (chemical shift selective filter) TOCSY experiment, which produces higher quality spectra and allows coupling constants to be reliably extracted and used to help determine stereochemistry.
TOCSY is sometimes called "homonuclear Hartmann–Hahn spectroscopy" (HOHAHA).
Incredible natural-abundance double-quantum transfer experiment (INADEQUATE)
INADEQUATE is a method often used to find 13C couplings between adjacent carbon atoms. Because the natural abundance of 13C is only about 1%, only about 0.01% of the molecules being studied will have the two nearby 13C atoms needed for a signal in this experiment. However, coherence selection methods are used (similarly to DQF COSY) to suppress signals from lone 13C atoms, so that the double-13C signals can be easily resolved. Each coupled pair of nuclei gives a pair of peaks on the INADEQUATE spectrum which both have the same vertical coordinate, which is the sum of the chemical shifts of the nuclei; the horizontal coordinate of each peak is the chemical shift of each nucleus separately.
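This peak geometry is simple enough to state programmatically; in the small illustration below the chemical shifts are arbitrary example values:

```python
# Peak positions in an INADEQUATE spectrum for a coupled 13C pair:
# horizontal = each carbon's own shift, vertical = the sum of the
# two shifts (the double-quantum frequency). Shifts in ppm.
def inadequate_peaks(shift_a: float, shift_b: float):
    dq = shift_a + shift_b
    return [(shift_a, dq), (shift_b, dq)]

print(inadequate_peaks(14.1, 22.7))  # [(14.1, 36.8), (22.7, 36.8)]
```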
Heteronuclear through-bond correlation methods
Heteronuclear correlation spectroscopy gives signal based upon coupling between nuclei of two different types. Often the two nuclei are protons and another nucleus (called a "heteronucleus"). For historical reasons, experiments which record the proton rather than the heteronucleus spectrum during the detection period are called "inverse" experiments. This is because the low natural abundance of most heteronuclei would result in the proton spectrum being overwhelmed with signals from molecules with no active heteronuclei, making it useless for observing the desired, coupled signals. With the advent of techniques for suppressing these undesired signals, inverse correlation experiments such as HSQC, HMQC, and HMBC are actually much more common today. "Normal" heteronuclear correlation spectroscopy, in which the heteronucleus spectrum is recorded, is known as HETCOR.
Heteronuclear single-quantum correlation spectroscopy (HSQC)
Heteronuclear Single Quantum Coherence (HSQC) is a 2D NMR technique utilized for the detection of interactions between different types of nuclei which are separated by one bond, particularly a proton (1H) and a heteronucleus such as carbon (13C) or nitrogen (15N). This method gives one peak per pair of coupled nuclei, whose two coordinates are the chemical shifts of the two coupled atoms.
This method plays a role in structural elucidation, particularly in analyzing organic compounds, natural products, and biomolecules such as proteins and nucleic acids. HSQC is designed to detect one-bond correlations between protons and heteronuclear atoms, providing insight into the connectivity of hydrogen and heteronuclear atoms through the transfer of magnetization.
The HSQC experiment involves a series of steps to generate a two-dimensional NMR spectrum. Initially, the sample is excited using radiofrequency (RF) pulses, bringing the nuclear spins into an excited state and preparing them for magnetization transfer. Magnetization is then transferred from the proton to the heteronucleus through a one-bond scalar coupling (J-coupling), ensuring that only directly bonded nuclei participate in the transfer. Subsequently, the system evolves during a period called t1, and the magnetization is transferred back from the heteronucleus to the proton. The final signal is detected, encoding both the proton and the heteronuclear information, and a Fourier transformation is performed to create a 2D spectrum correlating the proton and heteronuclear chemical shifts.
HSQC works by transferring magnetization from the I nucleus (usually the proton) to the S nucleus (usually the heteroatom) using the INEPT pulse sequence; this first step is done because the proton has a greater equilibrium magnetization and thus this step creates a stronger signal. The magnetization then evolves and then is transferred back to the I nucleus for observation. An extra spin echo step can then optionally be used to decouple the signal, simplifying the spectrum by collapsing multiplets to a single peak. The undesired uncoupled signals are removed by running the experiment twice with the phase of one specific pulse reversed; this reverses the signs of the desired but not the undesired peaks, so subtracting the two spectra will give only the desired peaks.
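The two-scan subtraction can be illustrated with a toy phase cycle (the signals below are synthetic stand-ins, not a simulation of the actual pulse sequence): reversing the pulse phase flips the sign of the desired component only, so subtracting the scans cancels the unwanted one.

```python
import numpy as np

# Two-step phase cycle, schematically: reversing the phase of one
# pulse flips the sign of the desired (coupled) signal but leaves
# the unwanted (uncoupled) signal unchanged, so subtraction keeps
# only the desired part.
rng = np.random.default_rng(0)
desired = np.sin(np.linspace(0, 20, 512))   # coupled-signal stand-in
unwanted = 0.5 * rng.standard_normal(512)   # uncoupled-signal stand-in

scan_a = desired + unwanted    # pulse phase +x
scan_b = -desired + unwanted   # pulse phase -x flips the desired term only
recovered = (scan_a - scan_b) / 2
print(np.allclose(recovered, desired))  # True: unwanted part cancels
```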
Interpretation of the HSQC spectrum is based on the observation of cross-peaks, which indicate direct bonding between protons and carbons or nitrogens. Each cross-peak corresponds to a specific 1H-13C or 1H-15N pair, providing direct assignment of 1H–X connectivity, where X is the heteronucleus. The HSQC technique offers several advantages, including its focus on one-bond correlations, increased sensitivity due to the direct detection of protons, and the simplification of crowded spectra by resolving overlapping signals and aiding in the analysis of complex molecules.
Heteronuclear multiple-quantum correlation spectroscopy (HMQC) gives an identical spectrum as HSQC, but using a different method. The two methods give similar quality results for small to medium-sized molecules, but HSQC is considered to be superior for larger molecules.
Heteronuclear multiple-bond correlation spectroscopy (HMBC)
HMBC detects heteronuclear correlations over longer ranges of about 2–4 bonds. The difficulty of detecting multiple-bond correlations is that the HSQC and HMQC sequences contain a specific delay time between pulses which allows detection only of a range around a specific coupling constant. This is not a problem for the single-bond methods since the coupling constants tend to lie in a narrow range, but multiple-bond coupling constants cover a much wider range and cannot all be captured in a single HSQC or HMQC experiment.
In HMBC, this difficulty is overcome by omitting one of these delays from an HMQC sequence. This increases the range of coupling constants that can be detected, and also reduces signal loss from relaxation. The cost is that this eliminates the possibility of decoupling the spectrum, and introduces phase distortions into the signal. There is a modification of the HMBC method which suppresses one-bond signals, leaving only the multiple-bond signals.
Through-space correlation methods
These methods establish correlations between nuclei which are physically close to each other regardless of whether there is a bond between them. They use the nuclear Overhauser effect (NOE) by which nearby atoms (within about 5 Å) undergo cross relaxation by a mechanism related to spin–lattice relaxation.
Nuclear Overhauser effect spectroscopy (NOESY)
In NOESY, the nuclear Overhauser cross relaxation between nuclear spins during the mixing period is used to establish the correlations. The spectrum obtained is similar to COSY, with diagonal peaks and cross peaks, however the cross peaks connect resonances from nuclei that are spatially close rather than those that are through-bond coupled to each other. NOESY spectra also contain extra axial peaks which do not provide extra information and can be eliminated through a different experiment by reversing the phase of the first pulse.
One application of NOESY is in the study of large biomolecules, such as in protein NMR, in which relationships can often be assigned using sequential walking.
The NOESY experiment can also be performed in a one-dimensional fashion by pre-selecting individual resonances. The spectra are read with the pre-selected nuclei giving a large, negative signal while neighboring nuclei are identified by weaker, positive signals. This only reveals which peaks have measurable NOEs to the resonance of interest but takes much less time than the full 2D experiment. In addition, if a pre-selected nucleus changes environment within the time scale of the experiment, multiple negative signals may be observed. This offers exchange information similar to the EXSY (exchange spectroscopy) NMR method.
NOESY experiments are an important tool for identifying the stereochemistry of a molecule in solution, whereas single-crystal XRD is used to identify the stereochemistry of a molecule in the solid state.
Heteronuclear Overhauser effect spectroscopy (HOESY)
In HOESY, much like NOESY is used for the cross relaxation between nuclear spins. However, HOESY can offer information about other NMR active nuclei in a spatially relevant manner. Examples include any nuclei X{Y} or X→Y such as 1H→13C, 19F→13C, 31P→13C, or 77Se→13C. The experiments typically observe NOEs from protons on X, X{1H}, but do not have to include protons.
Rotating-frame nuclear Overhauser effect spectroscopy (ROESY)
ROESY is similar to NOESY, except that the initial state is different. Instead of observing cross relaxation from an initial state of z-magnetization, the equilibrium magnetization is rotated onto the x axis and then spin-locked by an external magnetic field so that it cannot precess. This method is useful for certain molecules whose rotational correlation time falls in a range where the nuclear Overhauser effect is too weak to be detectable, usually molecules with a molecular weight around 1000 daltons, because ROESY has a different dependence between the correlation time and the cross-relaxation rate constant. In NOESY the cross-relaxation rate constant goes from positive to negative as the correlation time increases, giving a range where it is near zero, whereas in ROESY the cross-relaxation rate constant is always positive.
ROESY is sometimes called "cross relaxation appropriate for minimolecules emulated by locked spins" (CAMELSPIN).
Resolved-spectrum methods
Unlike correlated spectra, resolved spectra spread the peaks in a 1D-NMR experiment into two dimensions without adding any extra peaks. These methods are usually called J-resolved spectroscopy, but are sometimes also known as chemical shift resolved spectroscopy or δ-resolved spectroscopy. They are useful for analysing molecules for which the 1D-NMR spectra contain overlapping multiplets as the J-resolved spectrum vertically displaces the multiplet from each nucleus by a different amount. Each peak in the 2D spectrum will have the same horizontal coordinate that it has in a non-decoupled 1D spectrum, but its vertical coordinate will be the chemical shift of the single peak that the nucleus has in a decoupled 1D spectrum.
For the heteronuclear version, the simplest pulse sequence used is called a Müller–Kumar–Ernst (MKE) experiment, which has a single 90° pulse for the heteronucleus for the preparation period, no mixing period, and applies a decoupling signal to the proton during the detection period. There are several variants on this pulse sequence which are more sensitive and more accurate, which fall under the categories of gated decoupler methods and spin-flip methods. Homonuclear J-resolved spectroscopy uses the spin echo pulse sequence.
Higher-dimensional methods
3D and 4D experiments can also be done, sometimes by running the pulse sequences from two or three 2D experiments in series. Many of the commonly used 3D experiments, however, are triple resonance experiments; examples include the HNCA and HNCOCA experiments, which are often used in protein NMR.
See also
Two-dimensional correlation analysis
References
Nuclear magnetic resonance spectroscopy | Two-dimensional nuclear magnetic resonance spectroscopy | Physics,Chemistry | 4,462 |
1,169,761 | https://en.wikipedia.org/wiki/Diamond%20v.%20Chakrabarty | Diamond v. Chakrabarty, 447 U.S. 303 (1980), was a United States Supreme Court case dealing with whether living organisms can be patented. Writing for a five-justice majority, Chief Justice Warren E. Burger held that human-made bacteria could be patented under the patent laws of the United States because such an invention constituted a "manufacture" or "composition of matter". Justice William J. Brennan Jr., along with Justices Byron White, Thurgood Marshall, and Lewis F. Powell Jr., dissented from the Court's ruling, arguing that because Congress had not expressly authorized the patenting of biological organisms, the Court should not extend patent law to cover them.
In the decades since the Court's ruling, the case has been recognized as a landmark case for U.S. patent law, with industry and legal commentators identifying it as a turning point for the biotechnology industry.
Background
Genetic engineer Ananda Mohan Chakrabarty, working for General Electric, developed a bacterium (derived from the Pseudomonas genus and now known as Pseudomonas putida) capable of breaking down crude oil, which he proposed to use in treating oil spills. General Electric filed a patent application for the bacterium in the United States listing Chakrabarty as the inventor, but the application was rejected by a patent examiner, because under patent law at that time, living things were generally understood to not be patentable subject matter under 35 U.S.C. § 101.
General Electric and Chakrabarty appealed the examiner's decision to the Board of Patent Appeals and Interferences. The Board however agreed with the examiner that the bacterium was not patentable under current law. General Electric and Chakrabarty thereafter appealed the Board's decision to the United States Court of Customs and Patent Appeals. This time, General Electric and Chakrabarty prevailed with the court overturning the examiner's decision and holding "the fact that micro-organisms are alive is without legal significance for purposes of the patent law." The Patent Office, in the name of its Commissioner, Sidney A. Diamond, appealed this decision to the Supreme Court.
Supreme Court opinion
The Supreme Court heard oral argument from the parties on March 17, 1980 and issued its decision on June 16, 1980. In a 5–4 ruling, the Court ruled in favor of Chakrabarty and affirmed the decision of the Court of Customs and Patent Appeals.
Writing for the majority, Chief Justice Warren E. Burger began by noting that 35 U.S.C. § 101 allowed inventors to obtain patents for a "manufacture" or "composition of matter". The majority noted that while these words indicated that Congress intended for the patent laws to be given a "broad scope", this scope was not unlimited and that, under the Court's precedents, "laws of nature, physical phenomena, and abstract ideas" were not patentable. However, the Court held that these precedents were inapplicable to Chakrabarty's case as he was not trying to patent a "natural phenomena" but rather a human-made bacterium that he, himself, had developed. The majority contrasted this outcome with the one reached over three decades prior in Funk Bros. Seed Co. v. Kalo Inoculant Co. (1948), where the Court had rejected a patent application for the discovery of a naturally occurring bacterium that could be used to improve crops. Unlike the patentee in Funk Bros., the Supreme Court here held that Chakrabarty had not merely discovered the bacteria's existence; he had created it himself and adapted it to a particular purpose.
Justice William J. Brennan Jr., joined by Justices Byron White, Thurgood Marshall, and Lewis F. Powell Jr., dissented from the Court's ruling. Looking at the legislative history of the patent laws, Justice Brennan concluded that Congress had demonstrated an intent to exclude living organisms from the scope of the country's patent laws. Justice Brennan also expressed concern that the Court was extending patent protections into areas not expressly authorized by Congress and that this constituted an inappropriate extension of monopoly patent power.
Impact
In the decades following the Supreme Court's ruling, commentators have classified Diamond v. Chakrabarty as an important legal decision, particularly with respect to the patent laws and the biotechnology industry. In 2018, Time identified the decision as one of 25 important moments in American history, with Professor Gerardo Con Diaz remarking that the decision allowed "inventors at private and public institutions alike to obtain patents for genetically modified organisms — from plants and animals for laboratory research, to many foods available in supermarkets today" and allowed biotechnology firms to protect their developments in new ways. Writing for IP Watchdog on the decision's 30th anniversary, Gene Quinn called the decision a "turning point for the biotech industry" and praised the Court's ruling as "emblematic of the need for an expansive view of what is patentable subject matter." Likewise, the Biotechnology Innovation Organization praised the decision as being "instrumental in spurring the creation of a dynamic and flourishing biotech industry." Nature similarly noted that, at least according to industry participants, "without Diamond v. Chakrabarty, commercial biotechnology based on recombinant DNA technologies would not exist today."
However, the Supreme Court's ruling has also attracted some criticism from scholars who believe the Court extended patent law in a way that Congress did not authorize. Writing in the Ohio State Law Journal, Frank Darr criticized the Court's decision as containing "serious interpretive problems" and "reflect[ing] a policy choice" by the majority rather than a neutral legal analysis.
Further criticisms
George Mason University's Center for Intellectual Property and Innovation Policy has pointed out that, in the wake of Diamond v. Chakrabarty, the courts have continued to affirm the right of biotech developers to claim ownership of altered biological life, while clarifying some limits in Mayo v. Prometheus and AMP v. Myriad. The Center has expressed concern over what may be interpreted as judicial activism, with the courts moving ahead of Congress's ability to thoughtfully consider appropriate legislation.
While cases subsequent to Chakrabarty have provided some safeguards, such as forbidding the patenting of "limited DNA sequences," concerns have arisen that these safeguards do not go far enough, and that "biopiracy" of the human genome could take place, especially in an era of global health crisis demanding a rapid pharmaceutical response. A legal collaboration at the University of Pittsburgh suggests that it "is a stretch" to label such presumptuous genomic editing as outright slavery. However, as such editing in its most contemporary form may include the insertion of what has been termed by the industry as an entire "operating platform," concerns may continue.
See also
List of United States Supreme Court cases, volume 447
Genetic engineering in the United States
References
Further reading
External links
United States biotechnology case law
Genetically modified organisms
United States patent case law
United States Supreme Court cases
1980 in the environment
1980 in United States case law
United States environmental case law
Genetic engineering in the United States
General Electric litigation
United States Supreme Court cases of the Burger Court
Biological patent law
Oil spill remediation technologies
Bacteria and humans | Diamond v. Chakrabarty | Engineering,Biology | 1,487 |
23,437,360 | https://en.wikipedia.org/wiki/Main%20Avenue%20Bridge | The Main Avenue (Harold H. Burton Memorial) Bridge (alternately Main Avenue Viaduct) is a cantilever truss bridge in Cleveland, Ohio carrying Ohio State Route 2/Cleveland Memorial Shoreway over the Cuyahoga River. The bridge, completed in 1939, is in length, and was the longest elevated structure in Ohio until the 2007 completion of the Veterans' Glass City Skyway in Toledo. It was named for Harold H. Burton, 45th mayor of Cleveland, in late January 1986. The bridge replaced an 1869 bridge at the same site, and was built in conjunction with construction of the Cleveland Memorial Shoreway. The bridge received extensive renovations in 1991–1992; it subsequently received major structural repairs in 2007 and again in 2012–2013, both instances necessitating the re-routing of large vehicles.
The bridge is visible at the end of the "Cleveland Rocks" version of the opening credits of The Drew Carey Show.
In 2013, the Federal Highway Administration listed the Main Avenue Bridge as "structurally deficient" and "fracture critical".
The bridge has been designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers.
See also
List of crossings of the Cuyahoga River
References
External links
Transcription at The Cleveland Memory Project website.
Bridges completed in 1939
Bridges in Cleveland
Road bridges in Ohio
Cantilever bridges in the United States
Bridges over the Cuyahoga River
Historic Civil Engineering Landmarks | Main Avenue Bridge | Engineering | 281 |
69,479,403 | https://en.wikipedia.org/wiki/5%20Trianguli | 5 Trianguli is a solitary star located in the northern constellation Triangulum. With an apparent magnitude of 6.23, it is barely visible to the naked eye under ideal conditions. The star is located 399 light years away from the Solar System and is drifting farther away, with a radial velocity of 7.7 km/s.
5 Trianguli has a classification of A0 Vm, indicating that it is an A-type main-sequence star with unusually strong metallic lines. It has 2.22 times the mass of the Sun and 2.96 times the radius of the Sun. 5 Trianguli radiates 48 times the luminosity of the Sun from its photosphere at an effective temperature of 8,836 K, giving it the white hue of an A-type star. It has a low projected rotational velocity of 15 km/s, which is common for Am stars.
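As a consistency check (not part of the source article), the quoted luminosity follows from the Stefan–Boltzmann law applied to the stated radius and effective temperature, taking the solar effective temperature as roughly 5,772 K:

```latex
\frac{L}{L_\odot}
  = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4}
  = (2.96)^{2}\left(\frac{8836}{5772}\right)^{4} \approx 48
```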
References
A-type main-sequence stars
Am stars
Triangulum
Trianguli, 5
0634
010220
013372 | 5 Trianguli | Astronomy | 215 |
48,514,371 | https://en.wikipedia.org/wiki/Systemic%20design | Systemic design is an interdiscipline that integrates systems thinking and design practices. It is a pluralistic field, with several dialects including systems-oriented design. Influences have included critical systems thinking and second-order cybernetics. In 2021, the Design Council (UK) began advocating for a systemic design approach and embedded it in a revision of their double diamond model.
Systemic design is closely related to sustainability, as it aims to create solutions that are not only designed to have a positive environmental impact but are also socially and economically beneficial. In a systemic design approach, the system to be designed, its context and relationships, and its environment receive attention simultaneously. Systemic design's discourse has been developed through Relating Systems Thinking and Design—a series of symposia held annually since 2012.
History
1960 to 1990
Systems thinking in design has a long history with origins in the design methods movement during the 1960s and 1970s, such as the idea of wicked problems developed by Horst Rittel.
Theories of complexity support the management of entire systems, and the associated design approaches support the planning of divergent elements. Complexity theory evolved from the observation that living systems continually draw upon external sources of energy and maintain a stable state of low entropy, building on the General Systems Theory of Karl Ludwig von Bertalanffy (1968). Later rationales applied these theories to artificial systems as well: complexity models of living systems also address productive models, with their organizations and management, in which the relationships between the parts are more important than the parts themselves.
1990 to 2010
Treating productive organizations as complex adaptive systems allows for new management models that address economical, social and environmental benefits (Pisek and Wilson, 2001.) In that field, cluster theory (Porter, 1990) evolved in more environmentally sensitive theories, like industrial ecology (Frosh and Gallopoulos, 1989) and industrial symbiosis (Chertow, 2000). Design thinking offers a way to creatively and strategically reconfigure a design concept in a situation with systemic integration (Buchanan, 1992).
In 1994, Gunter Pauli and Heitor Gurgulino de Souza founded the research institute Zero Emission Research and Initiatives (ZERI), starting from the idea that progress should embed respect for the environment and natural techniques that will allow production processes to be part of the ecosystem.
Strong interdisciplinary and transdisciplinary approaches are critical during the design phase (Fuller, 1981), with the increasing involvement of different disciplines, including urban planning, public policy, business management and environmental sciences (Chertow et al., 2004). As an interdiscipline, systemic design joins systems thinking and design methodology to support human-centred and systems-oriented design academe and practice (Bistagnino, 2011; Sevaldson, 2011; Nelson and Stolterman, 2012; Jones, 2014; Toso et al., 2012).
2010 to present
Numerous design projects demonstrate systemic design in their approach, covering diverse topics such as food networks, industrial processes and water purification, revitalization of internal areas through art and tourism, circular economy, exhibitions and fairs, social inclusion, and marginalization.
Since 2014 several scholarly journals have acknowledged systemic design with special publications, and in 2022, the Systemic Design Association launched “Contexts—The Journal of Systemic Design.” The proceedings repository, Relating Systems Thinking and Design, exceeded 1000 articles in 2023.
Relating Systems Thinking and Design (RSD)
Since 2012, host organisations have held an annual symposium dedicated to systemic design, Relating Systems Thinking and Design (RSD). Proceedings are available via the searchable repository on RSDsymposium.org.
Research groups and innovation labs
Academic research groups with a focus on systemic design include:
Communication, Culture & Technology lab at Georgetown University, Washington DC, hosts of RSD12 in 2023.
Policy Lab is a part of the UK Civil Service with a "mission is to radically improve policy making through design, innovation and people-centred approaches".
Radical Methodologies Research Group at the University of Brighton, Brighton, UK, hosts of RSD11 in 2022.
Relating Systems Thinking and Design a searchable repository of articles from the proceedings of the annual symposia.
Strategic Innovation Lab (sLab) at OCADU, Toronto, Canada.
Sys—Systemic Design Lab at the Politecnico di Torino, Torino, Italy.
Systemic Design and Sustainability Research Group at Oslo Metropolitan University.
Systemic Design Association the international membership organisation.
Systems Engineering Design research group at Chalmers University of Technology, Gothenburg, Sweden.
Academic programmes
Academic programmes in systemic design include:
Systems oriented design is an example of a systemic design approach being used at the Oslo School of Architecture and Design.
Politecnico di Torino: Master of Science in Systemic Design.
The Strategic Foresight and innovation master program at OCAD University Toronto.
National Institute of Design (NID) India. Systems Thinking and Design is part of the academic programme at NID.
At the University of Montreal, the Master's degree in Applied Science in Design, Design and Complexity (DESCO).
The Kunsthochschule Kassel, in Kassel (Germany), offered the "Systemdesign" degree in the Product Design programme.
References
Design
Sys | Systemic design | Engineering | 1,073 |
3,122,052 | https://en.wikipedia.org/wiki/Schnyder%27s%20theorem | In graph theory, Schnyder's theorem is a characterization of planar graphs in terms
of the order dimension of their incidence posets. It is named after Walter Schnyder, who published its proof in 1989.
The incidence poset of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements. In this partial order, there is an order relation v < e when v is a vertex, e is an edge, and v is one of the two endpoints of e.
The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order.
Schnyder's theorem states that a graph G is planar if and only if the order dimension of its incidence poset is at most three.
Extensions
This theorem has been generalized by Brightwell and Trotter to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron, or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes, as there exist four-dimensional polytopes whose face lattices have unbounded order dimension.
Even more generally, for abstract simplicial complexes, the order dimension of the face poset of the complex is at most 1 + d, where d is the minimum dimension of a Euclidean space in which the complex has a geometric realization.
Other graphs
As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two orders on the vertices, an edge between consecutive vertices can be included in the ordering by placing it immediately following the later of its two endpoints, but no other edges can be included.
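As a concrete illustration (not from the source; the element names and orders are chosen for the example), the following sketch verifies such a two-order realizer for the incidence poset of the path a–b–c with edges e1 = {a, b} and e2 = {b, c}:

```python
def comparabilities(order):
    """All pairs (x, y) with x placed before y in a total order."""
    return {(order[i], order[j])
            for i in range(len(order))
            for j in range(i + 1, len(order))}

# Two total orders whose restrictions to the vertices {a, b, c}
# are the reverse of each other, with each edge placed immediately
# after the later of its two endpoints:
L1 = ["a", "b", "e1", "c", "e2"]
L2 = ["c", "b", "e2", "a", "e1"]

# Their intersection is exactly the incidence relations v < e,
# so these two orders form a realizer and the order dimension is 2.
incidence = {("a", "e1"), ("b", "e1"), ("b", "e2"), ("c", "e2")}
assert comparabilities(L1) & comparabilities(L2) == incidence
```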
If a graph can be colored with four colors, then its incidence poset has order dimension at most four.
The incidence poset of a complete graph on n vertices has order dimension Θ(log log n).
References
.
.
.
.
.
.
Order theory
Planar graphs
Theorems in graph theory | Schnyder's theorem | Mathematics | 520 |
40,670,179 | https://en.wikipedia.org/wiki/C10H20O3 | The molecular formula C10H20O3 may refer to:
Hydroxydecanoic acids
10-Hydroxydecanoic acid
Myrmicacin (3-hydroxydecanoic acid)
Promoxolane
Molecular formulas | C10H20O3 | Physics,Chemistry | 63 |
48,896,376 | https://en.wikipedia.org/wiki/Operator%20Toll%20Dialing | Operator Toll Dialing was a telephone call routing and toll-switching system for the Bell System and the independent telephone companies in the United States and Canada that was developed in the 1940s. It automated the switching and billing of long-distance calls. The concept and technology evolved from the General Toll Switching Plan of 1929, and gained technical merit with the cutover of a new type of crossbar switching system (No. 4XB) to commercial service in Philadelphia in August 1943. This was the first system of its kind for automated forwarding of calls between toll switching centers, but it served customers only for regional toll traffic. It established initial experience with automatic toll switching for the design of a nationwide effort that was sometimes referred to as Nationwide Operator Toll Dialing.
By the time of the first promotions of Nationwide Operator Toll Dialing to the general telecommunication industry in 1945, approximately 5% of the 2.7 million toll board calls per day were handled by the early incarnations of this system.
Operator Toll Dialing eliminated the need for intermediate operators to complete toll calls to distant central offices, where it eliminated the inward operators for call completion to the local wire line. This system involved stepwise routing from one toll center to another one logically closer to the destination to set up each circuit.
An essential aspect of the eventual success of the system was the concept of destination code routing, which required a uniform telephone numbering plan for all telephone networks across the continent. By 1947, a newly devised nationwide numbering plan established a geographic partitioning of the continent into numbering plan areas (NPAs), and designated the original North American area codes. An area code is a unique three-digit code serving as a destination routing code to a specific numbering plan area (NPA). This code was the same for all switching systems nationwide, and eliminated the need to publish specific trunk codes for each toll office to various destinations. The translation from NPA code to trunk codes was performed at each toll center without the need for outside operators to know the details. When automatic apparatus was installed for machine translation of the universal area codes to location-specific trunk codes, it freed operators from lookup of trunk codes to send the call one toll office closer to the destination telephone.
Within each NPA, central offices also received unique three-digit codes, so that each central office could be reached by a six-digit dialing prefix (NPA-XXX). Each telephone on the continent was uniquely identified by a telephone number of ten digits.
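As a minimal sketch of destination code routing (not from the source; the codes and the trunk table are invented for illustration), a toll center splits a ten-digit number and machine-translates the area code into a location-specific trunk code:

```python
# Hypothetical NPA -> outgoing trunk code table for one toll center;
# real translation tables differed from office to office.
TRUNK_TABLE = {"212": "041", "312": "077", "215": "023"}

def split_number(number):
    """Split a 10-digit number into NPA, office code, and line number."""
    assert len(number) == 10 and number.isdigit()
    return number[:3], number[3:6], number[6:]

npa, office, line = split_number("2125551234")
trunk = TRUNK_TABLE[npa]  # replaces the operator's manual code lookup
print(f"NPA {npa} -> trunk {trunk}, office {office}, line {line}")
```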
By the end of 1948, AT&T commenced the wider use of the system with the cutover of new crossbar switching systems for toll-dialing in New York and Chicago, which resulted in the handling of about ten percent of all Bell System long-distance calling by Operator Toll Dialing. Altogether, the toll networks enabled operators to place calls directly to distant telephones in some three hundred cities. On average, it took about two minutes for a long-distance call to be completed to its destination. As foreseen and stated in 1949, the target goal for call completion, after full implementation of the system across the nation, was one minute.
For entering the destination codes and telephone numbers into newly designed machine-switching equipment, long-distance operators did not use slow rotary dials, but a ten-button key set, operating at least twice as fast, which transmitted tone pulses (multi-frequency signaling) over regular voice channels to the remote switching centers. Such channels were incapable of transmitting the direct-current pulses of a rotary dial.
Operator Toll Dialing was gradually supplemented and superseded by Direct Distance Dialing (DDD) in the decades following. With DDD, customers themselves dialed an area code followed by a seven-digit telephone number to initiate long-distance calls without operator assistance. Activated first in 1951 for about ten thousand customers in Englewood, New Jersey, DDD was available in the major cities by the early 1960s, but was not fully implemented until the 1970s.
See also
Notes on the Network
Trunk prefix
Subscriber trunk dialling (STD)
References
Bell System
Public sphere
Telecommunications systems
Telephony | Operator Toll Dialing | Technology | 833 |
51,322,013 | https://en.wikipedia.org/wiki/Volixibat | Volixibat (INN; development code SHP626) is a medication under development as a possible treatment for nonalcoholic steatohepatitis (NASH), the most severe form of non-alcoholic fatty liver disease (NAFLD). No other pharmacotherapy yet exists for NASH, so there is interest in whether volixibat can prove to be both safe and effective. To encourage development and testing, the U.S. Food and Drug Administration (FDA) has granted the drug fast track designation.
Volixibat is an IBAT inhibitor, meaning that it blocks the function of the IBAT protein (ileal bile acid transporter), which is also called SLC10A2 (solute carrier family 10 member 2) or ASBT (apical sodium–bile acid transporter). IBAT is most highly expressed in the ileum, where it is found on the brush border membrane of enterocytes. It is responsible for the initial uptake of bile acids, particularly conjugated bile acids, from the intestine as part of their enterohepatic circulation.
References
Amino sugars
Anilides
Benzosulfones
Drugs acting on the gastrointestinal system and metabolism
Ethers
Secondary alcohols
Sulfate esters
Ureas
Butyl compounds | Volixibat | Chemistry | 266 |
1,286,978 | https://en.wikipedia.org/wiki/Rate%20of%20fire | Rate of fire is the frequency at which a specific weapon can fire or launch its projectiles. This can be influenced by several factors, including operator training level, mechanical limitations, ammunition availability, and weapon condition. In modern weaponry, it is usually measured in rounds per minute (RPM or round/min) or rounds per second (RPS or round/s).
There are three different measurements for the rate of fire: cyclic, sustained, and rapid. Cyclic is the maximum rate of fire given only mechanical function, not taking into account degradation of function due to heat, wear, or ammunition constraints. Sustained is the maximum efficient rate of fire given the time taken to load the weapon and keep it cool enough to operate. Finally, rapid is the maximum reasonable rate of fire in an emergency when the rate of fire need not be upheld for long periods.
Overview
For manually operated weapons such as bolt-action rifles or artillery pieces, the rate of fire is governed primarily by the training of the operator or crew, within some mechanical limitations. Rate of fire may also be affected by ergonomic factors. For rifles, ease-of-use features such as the design of the bolt or magazine release can affect the rate of fire.
For artillery pieces, a gun on a towed mount can usually achieve a higher rate of fire than the same weapon mounted within the cramped confines of a tank or self-propelled gun. This is because the crew operating in the open can move more freely and can stack ammunition where it is most convenient. Inside a vehicle, ammunition storage may not be optimized for fast handling due to other design constraints, and crew movement may be constricted. Artillery rates of fire were increased in the late 19th century by innovations including breech-loading and quick-firing guns.
For automatic weapons such as machine guns, the rate of fire is primarily a mechanical property. A high cyclic firing rate is advantageous for use against targets that are exposed to a machine gun for a limited time span, like aircraft or targets that minimize their exposure time by quickly moving from cover to cover. For targets that can be fired on by a machine gun for longer periods than just a few seconds the cyclic firing rate becomes less important.
For a third hybrid class of weapons, common in handguns and rifles, known as a semi-automatic firearm, the rate of fire is primarily governed by the ability of the operator to actively pull the trigger and, for aimed fire, the operator's shot-to-shot recovery time. No other factors significantly contribute to the rate of fire. Generally, a semi-automatic firearm automatically chambers a round using blowback energy, but does not fire the new round until the trigger is released to a reset point and actively pulled again. A semi-automatic's rate of fire is significantly different from and should not be confused with a full-automatic's rate of fire. Many full-automatic small arms have a selective fire feature that 'downgrades' them to semi-automatic mode by changing a switch.
Over time, weapons have attained higher rates of fire. A small infantry unit armed with modern rifles and machine guns can generate more firepower than much larger units equipped with older weapons. Over the 20th century, this increased firepower was due almost entirely to the higher rate of fire of modern weapons.
An example of increase in rate of fire is the Maxim machine gun that was developed in 1884 and used until World War I ended in 1918. Its performance was improved during that time mainly by advances in the field of cooling.
Measurement
There are diverse measurements of rate of fire. The speed of the fire will vary depending on the type of automatic weapon.
Cyclic rate
This measures how quickly an automatic or semi-automatic firearm can fire a single cartridge. At the end of a cycle, the weapon should be ready to fire or begin firing another round. In an open bolt simple blowback weapon, this starts with pulling the trigger to release the bolt. The bolt pushes a cartridge into the barrel from a magazine and fires it. The energy propelling the bullet also pushes the bolt rearward against the recoil spring. After the bolt is stopped by either the spring or the rear of the receiver, it is pushed forward to either fire again or catch on the sear. Typical cyclic rates of fire are 600–1100 rpm for assault rifles, 400–1400 rpm for submachine guns and machine pistols, and 600–1,500 rpm for machine guns. M134 Miniguns mounted on attack helicopters and other combat vehicles can achieve rates of fire of over 100 rounds per second (6,000 rpm).
Effective rate
This is the duration of firing that a weapon could be expected to realistically withstand or output in a realistic environment. On paper, the M134 is capable of firing up to 6,000 rpm. Realistically, firing the weapon for a continuous sixty seconds would likely melt parts of the weapon. Sustained rate-of-fire depends on several factors, including reloading, aiming, barrel changes, cartridge fired, and user expertise. Knowing the effective rate of fire for a weapon can be useful for determining ammunition reserve and resupply requirements. Machine guns are typically fired in short bursts to preserve ammunition and barrel life, reserving long strings of fire for emergencies. Sustained rate-of-fire also applies to box magazine fed assault rifles and semi-automatic rifles, although these weapons rarely expend ammunition at the same rate as light machine guns.
Sustained or rapid rate
Rapid or sustained rate of fire may be considered a weapon's absolute maximum firing rate. The term sustained refers to firing a fully-automatic weapon continuously, while rapid is limited to semi-automatic or manually operated firearms. Rapid and sustained fire are usually reserved for close-range defense against ambushes or human wave attacks. Such scenarios trade control, ammunition, and even aiming for sheer volume of fire. These fire rates push weapons and soldiers to their physical limits and cannot be sustained for long periods.
Technical limitations
The major limitation in higher rates of fire arises due to the problem of heat. Even a manually operated rifle generates heat as rounds are fired. A machine gun builds up heat so rapidly that steps must be taken to prevent overheating. Solutions include making barrels heavier so that they heat up more slowly, making barrels rapidly replaceable by the crews, or using water jackets around the barrel to cool the weapon. A modern machine gun team will carry at least one spare barrel for their weapon, which can be swapped out within a few seconds by a trained crew. Problems with overheating can range from ammunition firing unintentionally (cook-off), or, what is much worse in combat, failure to fire, or even explosion of the weapon.
Water-cooled weapons can achieve very high effective rates of fire (approaching their cyclic rate) but are very heavy and vulnerable to damage. A well-known example is the M1917 Browning machine gun, a heavy machine gun designed by John Browning and used by US forces during WWI. It became the basis of the much more common Browning M1919 machine gun, used by US forces throughout World War II, as well as the Browning M2 .50 caliber heavy machine gun, which is still in service, and many adaptations, such as the Japanese Ho-103 aircraft machine gun of World War II. Another legendarily reliable heavy machine gun is the British Vickers machine gun, based on the Maxim machine gun design, which saw service both in the air and on the ground during World War I and World War II. Due to their disadvantages, water-cooled weapons have gradually been replaced by much lighter air-cooled weapons. For weapons mounted on aircraft, no additional cooling device is necessary, as the outside air cools the weapon while the aircraft is moving. Consequently, aircraft-mounted machine guns, autocannon or Gatling-type guns can sustain fire far longer than ground-based counterparts, firing close to their cyclic rate of fire. However, due to the weight of the ammunition, sustained fire is constrained by ammunition payload, as many aircraft cannons only carry enough ammunition for a few seconds' worth of firing; for example, the F-16 Falcon and its variants carry 511 rounds of 20mm ammunition, and the F-22 Raptor carries a similar amount at 480 rounds, which equates to roughly five seconds of firing at the M61 Vulcan's 6000 rpm (100 rounds per second) cyclic rate. (Some aircraft, due to the purpose of the design, carry more, such as the GAU-8 Avenger mounted on the A-10 Thunderbolt, which carries 1,150 rounds of ammunition, sufficient for 17 seconds of firing.)
Another factor influencing rate of fire is the supply of ammunition. At 50 rps (3,000 rpm), a five-second burst from an M134 Minigun would expend some 250 rounds of 7.62 mm ammunition; this alone would make it an impractical weapon for infantry who have to carry a reasonable supply of ammunition with them. For this and other reasons, weapons with such high rates of fire are typically only found on vehicles or fixed emplacements.
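A back-of-the-envelope sketch (not from the source; the per-round mass is an assumed typical value for linked 7.62 mm ammunition) shows how burst length translates into ammunition weight:

```python
RATE_RPS = 50         # cyclic rate: 3,000 rounds per minute
BURST_SECONDS = 5
GRAMS_PER_ROUND = 25  # assumed mass of one linked 7.62 mm round

rounds = RATE_RPS * BURST_SECONDS
mass_kg = rounds * GRAMS_PER_ROUND / 1000
print(f"{rounds} rounds, about {mass_kg:.1f} kg")  # 250 rounds, ~6.2 kg
```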
See also
Multiple-barrel firearm
References
Ammunition
Firearm actions
Firing | Rate of fire | Physics | 1,862 |
18,517,738 | https://en.wikipedia.org/wiki/Ulrike%20Tillmann | Ulrike Luise Tillmann FRS is a mathematician specializing in algebraic topology, who has made important contributions to the study of the moduli space of algebraic curves. She was the president of the London Mathematical Society in the period 2021–2022.
She is titular Professor of Mathematics at the University of Oxford and a Fellow of Merton College, Oxford. In 2021 she was appointed Director of the Isaac Newton Institute at the University of Cambridge, and N.M. Rothschild & Sons Professor of Mathematical Sciences at Cambridge, but continued to hold a part-time position at Oxford.
Education
Tillmann completed her Abitur in Vreden. She received a BA from Brandeis University in 1985, followed by an MA from Stanford University in 1987. She read for a PhD under the supervision of Ralph Cohen at Stanford University, where she was awarded her doctorate in 1990. She was awarded her Habilitation in 1996 from the University of Bonn.
Awards and honours
In 2004 she was awarded the Whitehead Prize of the London Mathematical Society.
She was elected a Fellow of the Royal Society in 2008 and a Fellow of the American Mathematical Society in 2013. She has served on the council of the Royal Society and in 2018 was its vice-president. In 2017, she became a member of the German Academy of Sciences Leopoldina.
Tillmann was awarded the Bessel Prize by the Alexander von Humboldt Foundation in 2008 and was the Emmy Noether Lecturer of the German Mathematical Society in 2009.
She was elected as president-designate of the London Mathematical Society in June 2020 and took over the presidency from Jonathan Keating in November 2021. She was elected to the European Academy of Sciences (EURASC) in 2021. In October 2021 she became the director of the Isaac Newton Institute, taking up a post which lasts for five years.
Personal life
Tillmann's parents are Ewald and Marie-Luise Tillmann. In 1995 she married Jonathan Morris with whom she has had three daughters.
Publications
References
External links
Living people
People from Rhede
Women mathematicians
Fellows of the Royal Society
Female fellows of the Royal Society
Fellows of the American Mathematical Society
Members of the German National Academy of Sciences Leopoldina
Topologists
20th-century German mathematicians
Whitehead Prize winners
Brandeis University alumni
Stanford University alumni
Academics of the University of Oxford
Fellows of Merton College, Oxford
21st-century German mathematicians
Year of birth missing (living people) | Ulrike Tillmann | Mathematics | 471 |
30,413,705 | https://en.wikipedia.org/wiki/Fragility%20%28glass%20physics%29 | In glass sciences, fragility or "kinetic fragility" is a concept proposed by the Australian-American physical chemist C. Austen Angell. Fragility characterizes how rapidly the viscosity of a glass-forming liquid approaches the very large value of approximately 10¹² Pa·s during cooling. At this viscosity, the liquid is "frozen" into a solid and the corresponding temperature is known as the glass transition temperature Tg. Materials with a higher fragility show a more rapid increase in viscosity as they approach Tg, while those with a lower fragility show a slower increase. Fragility is one of the most important concepts for understanding viscous liquids and glasses. Fragility may be related to the presence of dynamical heterogeneity in glass-forming liquids, as well as to the breakdown of the usual Stokes–Einstein relationship between viscosity and diffusion. Fragility has no direct relationship with the colloquial meaning of the word "fragility", which more closely relates to the brittleness of a material.
Definition
Formally, fragility reflects the degree to which the temperature dependence of the viscosity (or relaxation time) deviates from Arrhenius behavior. This classification was originally proposed by Austen Angell. The most common definition of fragility is the "kinetic fragility index" m, which characterizes the slope of the viscosity (or relaxation time) of a material with temperature as it approaches the glass transition temperature from above:

m = d(log₁₀ η) / d(Tg/T), evaluated at T = Tg,

where η is the viscosity, Tg is the glass transition temperature, m is the fragility, and T is temperature. Glass-formers with a high fragility are called "fragile"; those with a low fragility are called "strong". For example, silica has a relatively low fragility and is called "strong", whereas some polymers have relatively high fragility and are called "fragile".
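As a worked illustration (not from the source), a purely Arrhenius liquid with η = η₀ exp(E/RT) sets the lower, "strong" limit of the fragility index; assuming the conventional η(Tg) = 10¹² Pa·s and a typical pre-exponential factor η₀ ≈ 10⁻⁴ Pa·s gives m ≈ 16:

```latex
m = \left.\frac{\mathrm{d}\log_{10}\eta}{\mathrm{d}(T_g/T)}\right|_{T=T_g}
  = \frac{E}{R\,T_g \ln 10}
  = \log_{10}\!\frac{\eta(T_g)}{\eta_0}
  \approx \log_{10}\!\frac{10^{12}}{10^{-4}} = 16
```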
Several fragility parameters have been introduced to characterise the fragility of liquids, including the Bruning–Sutton, Avramov and Doremus fragility parameters. The Bruning–Sutton fragility parameter m relies on the curvature or slope of the viscosity curves. The Avramov fragility parameter α is based on a Kohlrausch-type formula of viscosity derived for glasses: strong liquids have α ≈ 1 whereas liquids with higher α values become more fragile. Doremus indicated that practically all melts deviate from the Arrhenius behaviour, e.g. the activation energy of viscosity changes from a high QH at low temperature to a low QL at high temperature. However, asymptotically both at low and high temperatures the activation energy of viscosity becomes constant, e.g. independent of temperature. Changes that occur in the activation energy are unambiguously characterised by the ratio between the two values of activation energy at low and high temperatures, which Doremus suggested could be used as a fragility criterion: RD = QH/QL. The higher RD, the more fragile are the liquids; Doremus' fragility ratios range from 1.33 for germania to 7.26 for diopside melts.
The Doremus’ criterion of fragility can be expressed in terms of thermodynamic parameters of the defects mediating viscous flow in the oxide melts: RD=1+Hd/Hm, where Hd is the enthalpy of formation and Hm is the enthalpy of motion of such defects. Hence the fragility of oxide melts is an intrinsic thermodynamic parameter of melts which can be determined unambiguously by experiment.
The fragility can also be expressed analytically in terms of physical parameters that are related to the interatomic or intermolecular interaction potential. It is given as function of a parameter which measures the steepness of the interatomic or intermolecular repulsion, and as a function of the thermal expansion coefficient of the liquid, which, instead, is related to the attractive part of the interatomic or intermolecular potential. The analysis of various systems (from Lennard-Jones model liquids to metal alloys) has evidenced that a steeper interatomic repulsion leads to more fragile liquids, or, conversely, that soft atoms make strong liquids.
Recent synchrotron radiation X-ray diffraction experiments showed a clear link between structure evolution of the supercooled liquid on cooling, for example, intensification of Ni-P and Cu-P peaks in the radial distribution function close to the glass-transition, and liquid fragility.
Physical implications
The physical origin of the non-Arrhenius behavior of fragile glass formers is an area of active investigation in glass physics. Advances over the last decade have linked this phenomenon with the presence of locally heterogeneous dynamics in fragile glass formers; i.e. the presence of distinct (if transient) slow and fast regions within the material. This effect has also been connected to the breakdown of the Stokes–Einstein relation between diffusion and viscosity in fragile liquids.
References
Glass physics
es:Fragilidad | Fragility (glass physics) | Physics,Materials_science,Engineering | 1,084 |
65,123,578 | https://en.wikipedia.org/wiki/Pentenoic%20acid | Pentenoic acid is any of five mono-carboxylic acids whose molecule has an unbranched chain of five carbons connected by three single bonds and one double bond. That is, any compound with one of the formulas CH3–CH2–CH=CH–COOH (2-pentenoic), CH3–CH=CH–CH2–COOH (3-pentenoic), or CH2=CH–CH2–CH2–COOH (4-pentenoic). In the IUPAC-recommended nomenclature, these acids are called pent-2-enoic, pent-3-enoic, and pent-4-enoic, respectively. All these compounds have the empirical formula C5H8O2.
Pentenoic acids are technically mono-unsaturated fatty acids, although they are rare or unknown in biological lipids (fats, waxes, phospholipids, etc.). A salt or ester of such an acid is called a pentenoate.
Geometric isomers
There are actually two 2-pentenoic acids, distinguished by the conformation of the two single C–C bonds adjacent to the double bond: either on the same side of the double bond's plane (cis or Z configuration) or on opposite sides of it (trans or E configuration).
Likewise, there are two 3-pentenoic acids. On the other hand, there is only one 4-pentenoic acid, since the two hydrogen atoms on the last carbon are symmetrically placed across the double bond's plane.
The full list of pentenoic acids is, therefore:
cis-2-pentenoic or (2Z)-pent-2-enoic acid (CAS 16666-42-5, Nikkaji J97.998H, PUBchem 643793).
trans-2-pentenoic or (2E)-pent-2-enoic acid (CAS 13991-37-2, FDA 1RG66883CF, Nikkaji J97.997J, Beilstein 1720312, PUBchem 638122, JECFA 1804, FEMA 4193). MP ~10 °C; BP ~108 °C at 17 torr, ~198 °C; odor cheesy, sour. Occurs in banana, beer. Flavoring agent.
cis-3-pentenoic or (3Z)-pent-3-enoic acid (CAS 33698-87-2, Nikkaji J98.001C, PUBchem 5463134).
trans-3-pentenoic or (3E)-pent-3-enoic acid (BP ~52 °C at 4 torr) (CAS 1617-32-9, Nikkaji J98.000E, PUBchem 5282706). BP ~187 °C.
4-pentenoic or pent-4-enoic acid, 3-vinylpropionic acid (CAS 591-80-0, FDA D4S77Y29FB, Nikkaji J53.731D, Beilstein 1633696, PUBchem 61138, JECFA 314, FEMA 2843). Dens ~0.975 at 25 °C; IoR ~1.428; MP ~ -23 °C; BP ~83 °C at 12 torr, ~188 °C; sol. water, slightly; odor cheese, mustard. Toxic.
Esters
Esters of pentenoic acids include:
Ethyl cis-2-pentenoate (CAS 27805-84-1, Nikkaji J181.277G, PUBchem 11332473).
Ethyl trans-2-pentenoate (CAS 24410-84-2, Nikkaji J181.274B, PUBchem 5367761) BP ~150 °C.
Butyl 2-pentenoate (CAS 79947-84-5, Nikkaji J2.425.129B, PUBchem 5463534).
Ethyl trans-3-pentenoate (CAS 3724-66-1, Nikkaji J500.934K, PUBchem 5463078).
ethyl cis-3-pentenoate (CAS 27829-70-5, Nikkaji J746.757E, PUBchem 10997059).
isopropyl 3-pentenoate (CAS 62030-41-5, Nikkaji J746.761C, PUBchem 5463374).
Butyl 3-pentenoate (CAS 19825-93-5, PUBchem 5462980) Odor of chamomile. Flavoring agent.
Derivatives
Some derivatives of pentenoic acid include:
2-Oxopent-4-enoic acid, transient species possibly produced by Azotobacter vinelandii
2-Amino-5-chloro-4-pentenoic acid, found in the mushroom Amanita cokeri
2-Methyl-3-pentenoic acid. Some esters are berry fruit flavors.
2-Propyl-trans-2-pentenoic acid (2-en-valproic acid), major metabolite of anticonvulsant valproic acid.
cis-2-methyl-2-pentenoic acid 2-methyl-(2Z)-pent-2-enoic acid. (CAS 1617-37-4, FDA 07B8HQZ433, Nikkaji J421.583D, PUBchem 6436344) BP ~214 °C. Flavoring agent.
trans-2-Methyl-2-pentenoic acid 2-methyl-(2E)-pent-2-enoic acid. (CAS 16957-70-3, FDA 44I99E898B, Nikkaji J150.063E, PUBchem 5365909) dens ~0.987; IoR ~1.46; MP ~25 °C; BP ~124 °C at 30 torr, ~214 °C; odor fruity, strawberry. Flavoring agent.
2-Methyl-2-pentenoic acid, cis/trans mix (CAS 3142-72-1, JECFA 1210, FEMA 3195, PUBchem 18458, US patent 3976801). Dens ~0.983 at 25 °C; IoR ~1.46; MP ~25 °C; BP ~124 °C at 30 torr, ~112 °C at 12 torr; sol. water, slightly; odor acidic, fruity, sweaty. Flavoring and perfuming agent.
2-Methyl-4-pentenoic acid, 2-methyl-pent-4-enoic acid. The hexyl ester (CAS 58031-03-1, FDA MGQ3MUU64F, FEMA 3693, PUBchem 53426766, US patents 3966799, 3976801) is a flavoring agent for chewing-gum, candy, and beverages.
Gallery
See also
Valeric acid or pentanoic acid
Pentynoic acid
Pentenedioic acid
Hexenoic acid
Butenoic acid
References
Carboxylic acids | Pentenoic acid | Chemistry | 1,546 |
14,001,865 | https://en.wikipedia.org/wiki/Bluebelt | The Bluebelt is a large scale system of stormwater best management practices (BMPs) in New York City. The program originated on Staten Island in the early 1990s, but has also been implemented in Queens and the Bronx. The Bluebelt includes structural and nonstructural stormwater management control measures taken to mitigate changes to both quantity and quality of runoff caused through changes to land use.
History
The Bluebelt program was initiated in the late 1980s by New York City’s Departments of Environmental Protection and City Planning, based on a suggestion made several decades earlier by Ian McHarg, a landscape architect. Acquisition of land began in 1991 for the project, one of the Northeast United States’ most ambitious stormwater management efforts. The overall goal is to provide the necessary stormwater drainage infrastructure for a region on the southern end of the island while at the same time preserving the last freshwater wetlands in New York City. The bluebelt uses a series of carefully placed BMPs at the storm sewer/wetland interface to reduce flooding and improve water quality. Creation of a self-regulating ecosystem that is native to the region is of primary importance to the program.
BMPs used in the bluebelt include stormwater wetlands, stream restoration, outlet stilling basins, and sand filters. Ninety-two stormwater wetlands were included as part of the Staten Island project. In order to integrate the wetlands into the natural ecology, the construction process is advised by restoration specialists since general contractors are typically not trained in proper plant selection and installation. The planting design focuses on quick establishment of the preferred successional communities that will complement the surrounding landscape, before invasive species take over the site.
The performance of the Bluebelt during the storms that battered the city in the early 21st century – including Hurricane Sandy – has been described as "brilliant".
See also
Low-impact development (U.S. and Canada)
References
Notes
Further reading
"Staten Island nationally recognized for its stormwater management" Staten Island Advance
"Staten Island Bluebelt" Landscape Architecture (November 2005).
Articles on Bluebelt Clear Waters (Winter 2009) New York Water Environment Association
"Natural Security: Staten Island, New York - Natural Drainage Systems" American Rivers
Articles Stormwater Weekly (2000–2001)
External links
Staten Island Bluebelt Program - New York City Department of Environmental Protection
Center for Watershed Protection
Environmental engineering
Water pollution in the United States
Protected areas of New York City
Water infrastructure of New York City | Bluebelt | Chemistry,Engineering | 489 |
1,552,884 | https://en.wikipedia.org/wiki/Cylinder-head-sector | Cylinder-head-sector (CHS) is an early method for giving addresses to each physical block of data on a hard disk drive.
It is a 3D-coordinate system made out of a vertical coordinate head, a horizontal (or radial) coordinate cylinder, and an angular coordinate sector. Head selects a circular surface: a platter in the disk (and one of its two sides). Cylinder is a cylindrical intersection through the stack of platters in a disk, centered around the disk's spindle. Combined, cylinder and head intersect to a circular line, or more precisely: a circular strip of physical data blocks called track. Sector finally selects which data block in this track is to be addressed, as the track is subdivided into several equally-sized portions, each of which is an arc of (360/n) degrees, where n is the number of sectors in the track.
CHS addresses were exposed, instead of simple linear addresses (going from 0 to the total block count on the disk minus 1), because early hard drives did not come with an embedded disk controller that would hide the physical layout. A separate generic controller card was used, so the operating system had to know the exact physical "geometry" of the specific drive attached to the controller to correctly address data blocks. The traditional limits were 512 bytes/sector × 63 sectors/track × 255 heads (tracks/cylinder) × 1024 cylinders, resulting in a limit of 8032.5 MiB for the total capacity of a disk.
As the geometry became more complicated (for example, with the introduction of zone bit recording) and drive sizes grew over time, the CHS addressing method became restrictive. Since the late 1980s, hard drives began shipping with an embedded disk controller that had good knowledge of the physical geometry; they would however report a false geometry to the computer, e.g., a larger number of heads than actually present, to gain more addressable space. These logical CHS values would be translated by the controller, thus CHS addressing no longer corresponded to any physical attributes of the drive.
By the mid 1990s, hard drive interfaces replaced the CHS scheme with logical block addressing (LBA), but many tools for manipulating the master boot record (MBR) partition table still aligned partitions to cylinder boundaries; thus, artifacts of CHS addressing were still seen in partitioning software by the late 2000s.
In the early 2010s, the disk size limitations imposed by MBR became problematic and the GUID Partition Table (GPT) was designed as a replacement; modern computers using UEFI firmware without MBR support no longer use any notions from CHS addressing.
Definitions
CHS addressing is the process of identifying individual sectors (aka. physical block of data) on a disk by their position in a track, where the track is determined by the head and cylinder numbers. The terms are explained bottom up, for disk addressing the sector is the smallest unit. Disk controllers can introduce address translations to map logical to physical positions, e.g., zone bit recording stores fewer sectors in shorter (inner) tracks, physical disk formats are not necessarily cylindrical, and sector numbers in a track can be skewed.
Sectors
Floppy disks and controllers had used physical sector sizes of 128, 256, 512 and 1024 bytes (e.g., PC/AX), but formats with 512 bytes per physical sector became dominant in the 1980s.
The most common physical sector size for hard disks today is 512 bytes, but there have been hard disks with 520 bytes per sector as well for non-IBM compatible machines. In 2005 some Seagate custom hard disks used sector sizes of 1024 bytes per sector. Advanced Format hard disks use 4096 bytes per physical sector (4Kn) since 2010, but will also be able to emulate 512 byte sectors (512e) for a transitional period.
Magneto-optical drives use sector sizes of 512 and 1024 bytes on 5.25-inch drives and 512 and 2048 bytes on 3.5-inch drives.
In CHS addressing the sector numbers always start at 1, there is no sector 0, which can lead to confusion since logical sector addressing schemes typically start counting with 0, e.g., logical block addressing (LBA), or "relative sector addressing" used in DOS.
For physical disk geometries the maximal sector number is determined by the low level format of the disk. However, for disk access with the BIOS of IBM-PC compatible machines, the sector number was encoded in six bits, resulting in a maximal number of 111111 (63) sectors per track. This maximum is still in use for virtual CHS geometries.
Tracks
The tracks are the thin concentric circular strips of sectors. At least one head is required to read a single track. With respect to disk geometries the terms track and cylinder are closely related. For a single or double sided floppy disk track is the common term; and for more than two heads cylinder is the common term. Strictly speaking a track is a given CH combination consisting of SPT sectors, while a cylinder consists of SPT×H sectors.
Cylinders
A cylinder is a division of data in a disk drive, as used in the CHS addressing mode of a fixed-block architecture (FBA) disk or the cylinder–head–record (CCHHR) addressing mode of a CKD disk.
The concept is concentric, hollow, cylindrical slices through the physical disks (platters), collecting the respective circular tracks aligned through the stack of platters. The number of cylinders of a disk drive exactly equals the number of tracks on a single surface in the drive. It comprises the same track number on each platter, spanning all such tracks across each platter surface that is able to store data (without regard to whether or not the track is "bad"). Cylinders are vertically formed by tracks. In other words, track 12 on platter 0 plus track 12 on platter 1 etc. is cylinder 12.
Other forms of Direct Access Storage Device (DASD), such as drum memory devices or the IBM 2321 Data Cell, might give blocks addresses that include a cylinder address, although the cylinder address doesn't select a (geometric) cylindrical slice of the device.
Heads
A device called a head reads and writes data in a hard drive by manipulating the magnetic medium that composes the surface of an associated disk platter. Naturally, a platter has 2 sides and thus 2 surfaces on which data can be manipulated; usually there are 2 heads per platter, one per side. (Sometimes the term side is substituted for head, since platters might be separated from their head assemblies, as with the removable media of a floppy drive.)
The CHS addressing supported in IBM-PC compatible BIOSes code used eight bits for a maximum of 256 heads counted as head 0 up to 255 (FFh). However, a bug in all versions of Microsoft DOS/IBM PC DOS up to and including 7.10 will cause these operating systems to crash on boot when encountering volumes with 256 heads. Therefore, all compatible BIOSes will use mappings with up to 255 heads (00h..FEh) only, including in virtual 255×63 geometries.
This historical oddity can affect the maximum disk size in old BIOS INT 13h code as well as old PC DOS or similar operating systems:
(512 bytes/sector) × (63 sectors/track) × (255 heads (tracks/cylinder)) × (1024 cylinders) = 8032.5 MB, but actually 512×63×256×1024 = 8064 MB yields what is known as the 8 GB limit. In this context the definition 8 GB = 8192 MB is another incorrect limit, because it would require CHS 512×64×256 with 64 sectors per track.
Tracks and cylinders are counted from 0, i.e., track 0 is the first (outer-most) track on floppy or other cylindrical disks. Old BIOS code supported ten bits in CHS addressing with up to 1024 cylinders (1024 = 2¹⁰). Adding six bits for sectors and eight bits for heads results in the 24 bits supported by BIOS interrupt 13h. Subtracting the disallowed sector number 0 in 1024×256 tracks corresponds to 128 MB for a sector size of 512 bytes (128 MB = 1024×256×(512 bytes/sector)); and 8192 − 128 = 8064 confirms the (roughly) 8 GB limit.
CHS addressing starts at 0/0/1 with a maximal value 1023/255/63 for 24=10+8+6 bits, or 1023/254/63 for 24 bits limited to 255 heads. CHS values used to specify the geometry of a disk have to count cylinder 0 and head 0 resulting in
a maximum (1024/256/63 or) 1024/255/63 for 24 bits with (256 or) 255 heads. In CHS tuples specifying a geometry S actually means sectors per track, and where the (virtual) geometry still matches the capacity the disk contains C×H×S sectors. As larger hard disks have come into use, a cylinder has become also a logical disk structure, standardised at 16 065 sectors (16065=255×63).
CHS addressing with 28 bits (EIDE and ATA-2) permits eight bits for sectors still starting at 1, i.e., sectors 1...255, four bits for heads 0...15, and sixteen bits for cylinders 0...65535. This results in a roughly 128 GB limit; actually 65536×16×255=267386880 sectors corresponding to 130560 MB for a sector size of 512 bytes. The 28=16+4+8 bits in the ATA-2 specification are also covered by Ralf Brown's Interrupt List, and an old working draft of this now expired standard was published.
With an old BIOS limit of 1024 cylinders and the ATA limit of 16 heads the combined effect was 1024×16×63=1032192 sectors, i.e., a 504 MB limit for sector size 512. BIOS translation schemes known as ECHS and revised ECHS mitigated this limitation by using 128 or 240 instead of 16 heads, simultaneously reducing the numbers of cylinders and sectors to fit into 1024/128/63 (ECHS limit: 4032 MB) or 1024/240/63 (revised ECHS limit: 7560 MB) for the given total number of sectors on a disk.
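These limits follow directly from multiplying out the geometry bounds; a small sketch (assuming 512-byte sectors, as in the text; the names are illustrative):

```python
SECTOR_BYTES = 512

def capacity_mib(cylinders, heads, sectors_per_track):
    """Total disk capacity in MiB for a given CHS geometry."""
    return cylinders * heads * sectors_per_track * SECTOR_BYTES / 2**20

print(capacity_mib(1024, 16, 63))   # 504.0   -> combined BIOS/ATA limit
print(capacity_mib(1024, 128, 63))  # 4032.0  -> ECHS limit
print(capacity_mib(1024, 240, 63))  # 7560.0  -> revised ECHS limit
print(capacity_mib(1024, 255, 63))  # 8032.5  -> BIOS INT 13h limit
```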
Blocks and clusters
The Unix communities employ the term block to refer to a sector or group of sectors. For example, the Linux fdisk utility, before version 2.25, displayed partition sizes using 1024-byte blocks.
Clusters are allocation units for data on various file systems (FAT, NTFS, etc.), where data mainly consists of files. Clusters are not directly affected by the physical or virtual geometry of the disk, i.e., a cluster can begin at a sector near the end of a given CH track, and end in a sector on the physically or logically next CH track.
CHS to LBA mapping
In 2002 the ATA-6 specification introduced optional 48-bit Logical Block Addressing and declared CHS addressing obsolete, but still allowed implementing the ATA-5 translations. Unsurprisingly, the CHS-to-LBA translation formula given below also matches the last ATA-5 CHS translation. In the ATA-5 specification CHS support was mandatory for up to 16 514 064 sectors and optional for larger disks. The ATA-5 limit corresponds to CHS 16383 16 63 or equivalent disk capacities (16383 × 16 × 63 = 16 514 064 sectors).
CHS tuples can be mapped onto LBA addresses using the following formula:

LBA = (C × HPC + H) × SPT + (S − 1)

where LBA is the logical block address, HPC is the number of heads per cylinder, SPT is the maximum number of sectors per track, and (C, H, S) is the CHS address.
A Logical Sector Number formula in the ECMA-107 and ISO/IEC 9293:1994 (superseding ISO 9293:1987) standards for FAT file systems matches exactly the LBA formula given above: Logical Block Address and Logical Sector Number (LSN) are synonyms. The formula does not use the number of cylinders, but requires the number of heads and the number of sectors per track in the disk geometry, because the same CHS tuple addresses different logical sector numbers depending on the geometry.
Examples:
For geometry 1020 16 63 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3 × 16 + 2) × 63 + (1 − 1) = 3150;
For geometry 1008 4 255 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3 × 4 + 2) × 255 + (1 − 1) = 3570;
For geometry 64 255 63 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3 × 255 + 2) × 63 + (1 − 1) = 48321;
For geometry 2142 15 32 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3 × 15 + 2) × 32 + (1 − 1) = 1504.
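A minimal sketch of this formula in code (illustrative, not from the source; the function and variable names are invented), reproducing the four examples above:

```python
def chs_to_lba(c, h, s, heads_per_cylinder, sectors_per_track):
    """Map a CHS tuple to a logical block address (sectors count from 1)."""
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

# Four geometries that all describe the same 1,028,160-sector disk:
for hpc, spt in [(16, 63), (4, 255), (255, 63), (15, 32)]:
    print(chs_to_lba(3, 2, 1, hpc, spt))  # 3150, 3570, 48321, 1504
```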
To help visualize the sequencing of sectors into a linear LBA model, note that:
The first LBA sector is sector # zero, the same sector in a CHS model is called sector # one.
All the sectors of each head/track get counted before incrementing to the next head/track.
All the heads/tracks of the same cylinder get counted before incrementing to the next cylinder.
The outside half of a whole hard drive would be the first half of the drive.
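Going the other way, a sketch of the inverse mapping under the same assumptions shows this sequencing directly:

```python
def lba_to_chs(lba, heads_per_cylinder, sectors_per_track):
    """Recover the CHS tuple from a logical block address."""
    c, rem = divmod(lba, heads_per_cylinder * sectors_per_track)
    h, s0 = divmod(rem, sectors_per_track)
    return c, h, s0 + 1  # sector numbers start at 1, not 0

print(lba_to_chs(3150, 16, 63))  # (3, 2, 1)
```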
History
Cylinder Head Record format has been used by Count Key Data (CKD) hard disks on IBM mainframes since at least the 1960s. This is largely comparable to the Cylinder Head Sector format used by PCs, with the exception that the sector size was not fixed but could vary from track to track based on the needs of each application. In contemporary use, the disk geometry presented to the mainframe is emulated by the storage firmware, and no longer has any relation to physical disk geometry.
Earlier hard drives used in the PC, such as MFM and RLL drives, divided each cylinder into an equal number of sectors, so the CHS values matched the physical properties of the drive. A drive with a CHS tuple of 500 4 32 would have 500 tracks per side on each platter, two platters (4 heads), and 32 sectors per track, with a total of 32 768 000 bytes (31.25 MiB).
ATA/IDE drives were much more efficient at storing data and have replaced the now-obsolete MFM and RLL drives. They use zone bit recording (ZBR), where the number of sectors dividing each track varies with the location of groups of tracks on the surface of the platter. Tracks nearer to the edge of the platter contain more blocks of data than tracks close to the spindle, because there is more physical space within a given track near the edge of the platter. Thus, the CHS addressing scheme cannot correspond directly with the physical geometry of such drives, due to the varying number of sectors per track for different regions on a platter. Because of this, many drives still have a surplus of sectors (less than 1 cylinder in size) at the end of the drive, since the total number of sectors rarely, if ever, ends on a cylinder boundary.
An ATA/IDE drive can be set in the system BIOS with any configuration of cylinders, heads and sectors that do not exceed the capacity of the drive (or the BIOS), since the drive will convert any given CHS value into an actual address for its specific hardware configuration. This however can cause compatibility problems.
For operating systems such as Microsoft DOS or older versions of Windows, each partition must start and end at a cylinder boundary. Only some relatively modern operating systems (Windows XP included) may disregard this rule, but doing so can still cause compatibility issues, especially if the user wants to perform dual booting on the same drive. Microsoft has not followed this rule with internal disk partition tools since Windows Vista.
See also
CD-ROM format
Block (data storage)
Disk storage
Disk formatting
File Allocation Table
Disk partitioning
References
Notes
1.This rule is true at least for all formats where the physical sectors are named 1 upwards. However, there are a few odd floppy formats (e.g., the 640 KB format used by BBC Master 512 with DOS Plus 2.1), where the first sector in a track is named "0" not "1".
2.While computers begin counting at 0, DOS would begin counting at 1. In order to do this, DOS would add a 1 to the head count before displaying it on the screen. However, instead of converting the 8-bit unsigned integer to a larger size (such as a 16-bit integer) first, DOS just added the 1. This would overflow a head count of 255 (0xFF) into 0 (0x100 & 0xFF = 0x00) instead of the 256 that would be expected. This was fixed with DOS 8, but by then, it had become a de facto standard to not use a head value of 255.
AT Attachment
BIOS
Computer file systems
Hard disk computer storage
Rotating disc computer storage media
Computer storage devices
IBM storage devices | Cylinder-head-sector | Technology | 3,550 |
70,477,161 | https://en.wikipedia.org/wiki/Toyota%20RI%20engine | The Toyota RI is a family of prototype four-stroke 2.0-litre single-turbocharged inline-4 racing engines, developed and produced by Toyota, for the Super GT series and Super Formula under the Nippon Race Engine framework. The RI engine is fully custom-built.
Versions
The RI engine comes in two different versions for different applications; the RI4A for use in Super Formula and the RI4AG for use in Super GT.
RI4A (2014–present, also known as TRD-01F)
RI4AG (2014–2023)
RI4BG (2024–present)
Applications
Dallara SF14
Dallara SF19
Dallara SF23
Lexus RC F GT500
Lexus LC 500 GT500
Toyota GR Supra GT500
See also
Honda HR-414E/HR-417E/HR-420E engine, similar engines also developed under the Nippon Race Engine framework
Nissan NR engine, similar engine also developed under the Nippon Race Engine framework
References
Engines by model
Gasoline engines by model
Toyota engines
Four-cylinder engines
Straight-four engines
Toyota in motorsport | Toyota RI engine | Technology | 227 |
4,662,210 | https://en.wikipedia.org/wiki/Chernobyl%20liquidators | Chernobyl liquidators were the civil and military personnel who were called upon to deal with the consequences of the 1986 Chernobyl nuclear disaster in the Soviet Union on the site of the event. The liquidators are widely credited with limiting both the immediate and long-term damage from the disaster.
Surviving liquidators are qualified for significant social benefits due to their veteran status. Many liquidators were praised as heroes by the Soviet government and the press, while some struggled for years to have their participation officially recognized.
Name
The euphemism "liquidator" (, , , likvidator) originates from the Soviet official definition "участник ликвидации последствий аварии на Чернобыльской АЭС" (uchastnik likvidatsii posledstviy avarii na Chernobylʹskoy AES, literally "participant in liquidation of the Chernobyl NPP accident consequences") which was widely used to describe the liquidators' activities regarding their employment, healthcare, and retirement. This exact phrase is engraved on the Soviet medals and badges awarded to the liquidators.
Roles
Disaster management at Chernobyl included a diverse range of occupations, positions, and tasks, and in particular:
Operational personnel of the Chernobyl nuclear power plant
Firefighters who immediately responded to the reactor accident
Civil defense troops of the Soviet Armed Forces who removed contaminated materials and carried out decontamination of the reactor and all affected territories
Internal Troops and police who provided security, access control and population evacuation
Military and civil medical and sanitation personnel, including:
Groups of female janitors tasked with the cleanup of food left inside abandoned homes to prevent outbreaks of infectious diseases
Special hunting squads assigned to exterminate domestic animals left in evacuated settlements
Soviet Air Force and civil aviation units who fulfilled critical helicopter-assisted operations on the reactor building, air transportation and aerial radioactive contamination monitoring, including Mykola Melnyk, a civilian helicopter pilot who placed radiation sensors on the reactor
Various civilian scientists, engineers, and workers involved in all stages of disaster management:
Transportation workers
A team of coal miners who built a large protective foundation to prevent radioactive material from entering the aquifer below the reactor
Construction professionals
Media professionals who risked their lives to document the disaster on the ground, including photographers Igor Kostin and Volodymyr Shevchenko, who are credited with taking the most immediate and graphic pictures of the destroyed reactor, and liquidators conducting hazardous manual tasks
A small number of foreigners (mostly from the Western countries) volunteered to participate in international medicine- and science-related on-the-ground projects related to the relief operation. Technically, they may also qualify for liquidator status depending on their exact location and tasks at the time of participation.
Health effects
According to the WHO, 240,000 recovery workers were called upon in 1986 and 1987 alone. Altogether, special certificates were issued for 600,000 people recognizing them as liquidators.
Total recorded doses to individual workers in Chernobyl recovery operations during the period through 1990 ranged from less than 10 millisieverts (less than 1 rem) to more than 1 sievert (100 rems), due primarily to external radiation. The average dose is estimated to have been 120 millisieverts (12 rem) and 85% of the recorded doses were between 20 and 500 millisieverts (2 to 50 rems). There are large uncertainties in these individual doses; estimates of the size of the uncertainty range from 50% to a factor of five and dose records for military personnel are thought to be biased toward high values. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) estimates the total collective dose to the total of about 530,000 recovery operations workers as about 60,000 person-sieverts (6,000,000 person-rem).
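As a rough consistency check (simple arithmetic on the figures quoted above, not an additional source):

$$530{,}000 \text{ workers} \times 0.12 \text{ Sv} \approx 64{,}000 \text{ person-Sv},$$

which is in line with the UNSCEAR collective-dose estimate of about 60,000 person-sieverts.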
According to Vyacheslav Grishin of the Chernobyl Union, the main organization of liquidators, "25,000 of the Russian liquidators are dead and 70,000 disabled, about the same in Ukraine, and 10,000 dead in Belarus and 25,000 disabled", which makes a total of 60,000 dead (10% of the 600,000 liquidators) and 165,000 disabled. Estimates of the number of deaths potentially resulting from the accident vary enormously; the World Health Organization (WHO) suggests it could reach 4,000.
Ivanov et al. (2001) studied nearly 66,000 liquidators from Russia, and found no increase in overall mortality from cancer or non-cancer causes. However, a statistically significant dose-related excess mortality risk was found for both cancer and heart disease.
Rahu et al. (2006) studied some 10,000 liquidators from Latvia and Estonia and found no significant increase in overall cancer rate. Among specific cancer types, statistically significant increases in both thyroid and brain cancer were found, although the authors believe these may have been the result of better cancer screening among liquidators (for thyroid cancer) or a random result (for brain cancer) because of the very low overall incidence.
While there is rough agreement that a total of either 31 or 54 people died from blast trauma or acute radiation syndrome (ARS) as a direct result of the disaster, there is considerable debate concerning the accurate number of deaths due to the disaster's long-term health effects, with estimates ranging from 4,000 (per the 2005 and 2006 conclusions of a joint consortium of the United Nations and the governments of Ukraine, Belarus, and Russia), to no fewer than 93,000 (per the conflicting conclusions of various scientific, health, environmental, and survivors' organizations).
Legacy
The 20th anniversary of the Chernobyl catastrophe in 2006 was marked by a series of events and developments.
The liquidators held a rally in Kyiv to complain about deteriorated compensation and medical support. Similar rallies were held in many other cities of the former Soviet Union.
More than 4,500 Estonian residents were sent to help in the liquidation. The liquidators who reside in Estonia (some 4,200 as reported in 2006, 3,140 as of 2011) campaigned in hope of the introduction of an Estonian law for their relief. Under Estonian law, the state was obliged to provide help and relief only to citizens who are "legal descendants" of the citizens of the 1918–1940 Republic of Estonia. At the same time, Russia, Belarus and Ukraine do not provide any relief to liquidators residing abroad. The problem is tied to the fact that Chernobyl veterans are classified under the Estonian Persons Repressed by Occupying Powers Act. It was reported in 2017 that an agreement had been reached by the Estonian parliament to provide all liquidators residing in Estonia, including over 1,400 non-citizens, with a payment of €230 per year.
The most highly exposed clean-up workers were significantly more symptomatic on the somatization and posttraumatic stress disorder (PTSD) symptom scales. The workers with the greatest exposure reported more impairment than the two less-exposed groups, especially on the PTSD measures. Consistent with the findings of The Chernobyl Forum (2006) and with findings from other disasters involving radiation, the results show that the accident had a deleterious effect on mental health.
A number of military liquidators residing in Khabarovsk (Russia) were denied a certain compensation for loss of health on the grounds that they were not salaried workers but rather under military order. They had to appeal to the European Court of Human Rights (ECHR). On 29 December 2004 and 21 March 2006 the Russian government adopted ECHR rulings according to which Chernobyl victims and servicemen, including former servicemen, were to be granted either financial aid or state housing. However, an interim ECHR resolution in 2009, CM/ResDH(2009)43, indicated that the Russian government was failing to implement the policies.
Public record
The National Chernobyl Museum in Kyiv, Ukraine keeps a "Remembrance Book" (Knyha Pamyati), an online database of liquidators open to the public, featuring personal pages with a photo and brief structured information on each liquidator's contribution. Data fields include "Radiation damage suffered", "Field of liquidation activity" and "Subsequent fate". The project started in 1997 and contained over 5,000 entries as of February 2013. The database is currently available in the Ukrainian language only.
See also
Chernobyl: Abyss, a 2021 Russian film about a fictionalized liquidator
Fukushima 50, a similar group of workers from the 2011 nuclear disaster in Japan
Hibakusha, Japanese terms for a person who has been irradiated by a nuclear bomb
Nuclear labor issues
List of Chernobyl-related articles
References
External links
Pictures: "Liquidators" Endured Chernobyl 25 Years Ago - annotated set of Liquidators photos by Igor Kostin in National Geographic
A Worker Recalls the Chernobyl Disaster 2006 article by The Washington Post
Radiation health effects | Chernobyl liquidators | Chemistry,Materials_science | 1,844 |
1,440,140 | https://en.wikipedia.org/wiki/Macquarium | A Macquarium is an aquarium made from, or made to sit within, the shell of an Apple Macintosh computer. The term was coined by computer writer Andy Ihnatko as a joke at the outdated Macintosh 512K; Macquariums have since been built both by Ihnatko himself and by others.
Ihnatko originally designed his Macquarium to use the Compact Macintosh-style shell. In the early 1990s, several Mac models in this form factor (such as the Macintosh 128K, Macintosh 512K and Macintosh Plus) were becoming obsolete, and Ihnatko considered that turning one into an aquarium might be "the final upgrade", as well as an affordable way to have a color Compact Mac. Ihnatko has mentioned in interviews that he saw attempts to build Macintosh aquariums at trade shows that, among other drawbacks, suffered from noticeable water level lines across the "screen" that spoiled the illusion of a "really good screensaver". This drove him to design a version without a visible water line, and which allowed the external case of the donor Mac to remain intact.
Ihnatko's slant-front tank design, made of glass, had a nominal capacity of approximately 10 liters (about 2.2 imperial gallons or 2.6 US gallons). Some subsequent designs have utilized acrylic glass or Lexan. Because of its small capacity relative to most other aquariums, the Macquarium is considered a form of nano aquarium, which requires a higher level of diligence to maintain proper water chemistry and cleanliness.
Some of Ihnatko's Macquariums were constructed with parts from two sources located near Apple's headquarters in Cupertino, California. For these aquariums, Ihnatko used the case of the Macintosh as the tank and sealed the screen and vent holes to be watertight.
Macquariums are often stocked with 2–3 goldfish, which do not require tank heaters and are cheap. However, because goldfish grow large, have high oxygen requirements, and are messy eaters, they require much larger tanks for long-term survival. As such, Siamese fighting fish and small shrimp are better options for Macquariums.
Other Mac models have similarly been turned into aquariums, such as the Macintosh TV, the Apple Lisa, and the Power Mac G4 Cube. Various iMac models, such as the iMac G3, have been used to make "iMacquariums". By 1995, a Macquarium based on a Macintosh LC 575 appeared in a Macintosh magazine titled "Macquarium '95".
The term "Macquarium", as it refers to the Macintosh-based aquarium, is unrelated to the Atlanta, Georgia, user experience firm Macquarium Intelligent Communications.
Footnotes
External links
iMacquariums built out of G3 iMacs
Guide to MacQuarium construction, setup, and upkeep
Andy Ihnatko's original instructions for a Classic form Mac:
Macquarium construction diary with photos and tips
Aquariums
Fishkeeping
Macintosh platform | Macquarium | Technology | 639 |
57,252,492 | https://en.wikipedia.org/wiki/Notch%20regulated%20ankyrin%20repeat%20protein | NOTCH regulated ankyrin repeat protein is a protein that in humans is encoded by the NRARP gene.
References
Further reading | Notch regulated ankyrin repeat protein | Chemistry | 27 |
1,682,480 | https://en.wikipedia.org/wiki/Ormolu | Ormolu (; ) is the gilding technique of applying finely ground, high-carat gold–mercury amalgam to an object of bronze, and objects finished in this way. The mercury is driven off in a kiln, leaving behind a gold coating. The French refer to this technique as ; in English, it is known as gilt bronze. Around 1830, legislation in France outlawed the use of mercury for health reasons, though use continued to the 1900s.
Process
The manufacture of true ormolu employs a process known as mercury-gilding or fire-gilding, in which a solution of mercuric nitrate is applied to a piece of copper, brass, or bronze; followed by the application of an amalgam of gold and mercury. The item is then exposed to extreme heat until the mercury vaporizes and the gold remains, adhering to the metal object.
This process has generally been supplanted by the electroplating of gold over a nickel substrate, which is more economical and less dangerous.
Health risk
In literature, there is a 1612 reference from John Webster to the hazards faced by gilders.
After around 1830, legislation in France outlawed the use of mercury, although it continued to be commonly employed until around 1900 and was still in use in a very few workshops around 1960. Other gilding techniques, like electroplating from the mid-19th century on, were utilized instead. Ormolu techniques are essentially the same as those used on silver to produce silver-gilt (also known as vermeil).
Alternatives
A later substitute of a mixture of metals resembling ormolu was developed in France and called pomponne, though the mix of copper and zinc, sometimes with an addition of tin, is technically a type of brass. From the 19th century the term has been popularized to refer to gilt metal or imitation gold.
Gilt-bronze is found from antiquity onwards across Eurasia, and especially in Chinese art, where it was always more common than silver-gilt, the opposite of Europe.
Applications
Craftsmen principally used ormolu for the decorative mountings of furniture, clocks, lighting devices, and porcelain. The great French furniture designers and cabinetmakers, or ébénistes, of the 18th and 19th centuries made maximum use of the exquisite gilt-bronze mounts produced by fondeurs-ciseleurs (founders and finishers) such as the renowned Jacques Caffieri (1678–1755), whose finished gilt-bronze pieces were almost as fine as jewelers' work. Ormolu mountings attained their highest artistic and technical development in France.
Similarly fine results could be achieved for lighting devices, such as chandeliers and candelabras, as well as for the ornamental metal mounts applied to clock cases and to ceramic pieces. In the hands of the Parisian marchands-merciers, the precursors of decorators, ormolu or gilt-bronze sculptures were used for bright, non-oxidizing fireplace accessories or for Rococo or Neoclassical mantel-clocks or wall-mounted clock-cases – a specialty of Charles Cressent (1685–1768) – complemented by rock-crystal drops on gilt-bronze chandeliers and wall-lights.
The bronze mounts were cast by lost wax casting, and then chiseled and chased to add detail. Rococo gilt bronze tends to be finely cast, lightly chiseled, and part-burnished. Neoclassical gilt-bronze is often entirely chiseled and chased with extraordinary skill and delicacy to create finely varied surfaces.
The ormolu technique was extensively used in the French Empire mantel clocks, reaching its peak during this period.
Chinese and European porcelains mounted in gilt-bronze were luxury wares that heightened the impact of often-costly and ornamental ceramic pieces sometimes used for display. Chinese ceramics with gilt-bronze mounts were produced under the guidance of the Parisian marchands-merciers, for only they had access to the ceramics (often purchased in the Netherlands) and the ability to overleap the guild restrictions. A few surviving pieces of 16th-century Chinese porcelain subsequently mounted in contemporary European silver-gilt, or vermeil, show where the foundations of the later fashion lay.
From the late 1760s, Matthew Boulton (1728–1809) of Birmingham produced English ormolu vases and perfume-burners in the latest Neoclassical style. Though the venture never became a financial success, it produced the finest English ormolu. In the early 19th century fine English ormolu came from the workshops of Benjamin Lewis Vulliamy (1780–1854).
In France, the tradition of neoclassic ormolu to Pierre-Philippe Thomire (1751–1843) was continued by Lucien-François Feuchère. Beurdeley & Cie. produced excellent ormolu in Rococo and Neoclassical styles in Paris, and rococo gilt-bronze is characteristic of the furniture of François Linke.
Gallery
See also
Gold plating
References and sources
References
Sources
Swantje Koehler: Ormolu Dollhouse Accessories. Swantje-Köhler-Verlag, Bonn 2007. .
External links
National Pollutant Inventory – Copper and compounds fact sheet
Kevin Brown, Artist and Patrons: Court Art and Revolution in Brussels at the end of the Ancien Regime, Dutch Crossing, Taylor and Francis ( 2017)
Gilding
Artistic techniques
Gold
Metal plating
Artworks in metal
Copper alloys
Porcelain
11,311,171 | https://en.wikipedia.org/wiki/Capnodium%20footii | Capnodium footii is a sooty mold that develops in coconut leaves.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Coconut palm diseases
Leaf diseases
Capnodiaceae
Fungi described in 1849
Taxa named by Miles Joseph Berkeley
Taxa named by John Baptiste Henri Joseph Desmazières
Fungus species | Capnodium footii | Biology | 68 |
51,921,147 | https://en.wikipedia.org/wiki/Spin%20gapless%20semiconductor | Spin gapless semiconductors are a novel class of materials with unique electrical band structure for different spin channels in such a way that there is no band gap (i.e., 'gapless') for one spin channel while there is a finite gap in another spin channel.
In a spin-gapless semiconductor, conduction and valence band edges touch, so that no threshold energy is required to move electrons from occupied (valence) states to empty (conduction) states. This gives spin-gapless semiconductors unique properties: namely that their band structures are extremely sensitive to external influences (e.g., pressure or magnetic field).
Because very little energy is needed to excite electrons in an SGS, charge concentrations are very easily ‘tuneable’. For example, this can be done by introducing a new element (doping) or by application of a magnetic or electric field (gating).
A new type of SGS identified in 2017, known as Dirac-type linear spin-gapless semiconductors, has linear dispersion and is considered an ideal platform for massless and dissipationless spintronics: spin–orbit coupling opens a gap in the fully spin-polarized conduction and valence bands, so the interior of the sample becomes an insulator, while an electrical current can flow without resistance along the sample edge. This effect, the quantum anomalous Hall effect, had previously been realized only in magnetically doped topological insulators.
As well as Dirac/linear SGSs, the other major category of SGS is the parabolic spin gapless semiconductor.
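The distinction between the two categories can be summarized by the dispersion of the gapless spin channel near the band-touching point (standard textbook forms, stated here for orientation rather than taken from the cited works):

$$E(\mathbf{k}) \approx \pm \hbar v_F |\mathbf{k}| \;\; \text{(Dirac/linear SGS)}, \qquad E(\mathbf{k}) \approx \pm \frac{\hbar^2 k^2}{2m^*} \;\; \text{(parabolic SGS)},$$

where $v_F$ is the Fermi velocity and $m^*$ is the carrier effective mass.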
Electron mobility in such materials is two to four orders of magnitude higher than in classical semiconductors.
A convergence of topology and magnetism known as Chern magnetism makes SGSs ideal candidate materials for realizing room-temperature quantum anomalous Hall effect (QAHE).
SGSs are topologically non-trivial.
Prediction and discovery
The spin gapless semiconductor was first proposed as a new spintronics concept and a new class of candidate spintronic materials in 2008 in a paper by Xiaolin Wang of the University of Wollongong in Australia.
Properties and applications
The dependence of bandgap on spin direction leads to high carrier-spin-polarization, and offers promising spin-controlled electronic and magnetic properties for spintronics applications.
The spin gapless semiconductor is a promising candidate material for spintronics because its charged particles can be fully spin-polarised, so that spin can be controlled via only a small applied external energy.
References
Condensed matter physics
Semiconductors
Spintronics | Spin gapless semiconductor | Physics,Chemistry,Materials_science,Engineering | 534 |
78,591,075 | https://en.wikipedia.org/wiki/Split%20Fiction | Split Fiction is an upcoming action-adventure video game developed by Hazelight Studios and published by Electronic Arts. As a cooperative multiplayer-only game, Split Fiction is set to be released for Windows, PlayStation 5 and Xbox Series X and Series S in March 2025.
Gameplay
As with Hazelight's previous game, It Takes Two, Split Fiction was specifically designed for split-screen cooperative multiplayer, meaning it must be played with another player through either local or online play. In the game, players assume control of either Zoe or Mio, two authors who become trapped inside their own stories. The two strangers must work together to escape and overcome numerous challenges along the way, as Mio's science-fiction story interweaves with Zoe's fantasy story, putting both characters in grave danger. The game is an action-adventure game played from a third-person perspective, requiring players to complete platforming challenges and occasionally fight hostile enemies. Each stage is often accompanied by a unique, one-off gameplay mechanic. For instance, the two may command dragons in one stage and wield laser swords in another. The two characters' abilities also differ within each stage, so players must collaborate and use their respective characters' abilities to progress. There are also side stories in each stage, each featuring an accompanying gameplay mechanic.
Development
Development of the game started following the release of It Takes Two (2021). A team of 80 people built the game using Unreal Engine 5. The two characters are named after Fares' two daughters, and Fares compared the game's narrative to that of a "buddy movie", as the two characters start as complete strangers who must slowly bond with each other in order to survive. Fares wrote that one of the biggest challenges in developing a game with diverse gameplay mechanics was ensuring that each mechanic was fully polished. While a gameplay sequence utilizing a certain mechanic may last only several minutes, the team had to work on it for months to ensure its controls were sufficiently intuitive.
Split Fiction was announced by Hazelight Studios and its game director Josef Fares at The Game Awards 2024. The game is currently set to be released for Windows, PlayStation 5 and Xbox Series X and Series S on March 6, 2025. As with Hazelight's previous games, it will utilize a "Friend's Pass" system, in which the owner of the game can invite a friend to play together for free, and it will be published under Electronic Arts' EA Originals label. It will also support cross-platform play at launch.
References
External links
Asymmetrical multiplayer video games
Video games developed in Sweden
Hazelight Studios games
Electronic Arts games
Windows games
PlayStation 5 games
Xbox Series X and Series S games
Cooperative video games
Video games featuring female protagonists
Fantasy video games
Science fiction video games
Split-screen multiplayer games
Action-adventure games
Platformers
Upcoming video games scheduled for 2025
Unreal Engine 5 games | Split Fiction | Physics | 611 |
21,095,955 | https://en.wikipedia.org/wiki/NGC%202170 | NGC 2170 is a reflection nebula in the constellation Monoceros. It was discovered on October 16, 1784 by William Herschel.
References
External links
The Interactive NGC Catalog Online: NGC 2170
SIMBAD: NGC 2170
NASA/IPAC Extragalactic Database: NGC 2170
Reflection nebulae
Monoceros
2170 | NGC 2170 | Astronomy | 67 |
51,092,592 | https://en.wikipedia.org/wiki/Nandrolone%20undecanoate | Nandrolone undecanoate (NU), also known as nandrolone undecylate, and sold under the brand names Dynabolon, Dynabolin, and Psychobolan, is an androgen and anabolic steroid medication and a nandrolone ester. It was developed in the 1960s, and was previously marketed in France, Germany, Italy, and Monaco, but has since been discontinued and is now no longer known to be available. The pharmacokinetics of nandrolone undecanoate alone (Dynabolon) and in combination with other steroid esters (Trophobolene) have been studied and compared.
See also
List of androgen esters § Nandrolone esters
Estrapronicate/hydroxyprogesterone heptanoate/nandrolone undecanoate
References
Abandoned drugs
Anabolic–androgenic steroids
Nandrolone esters
Progestogens
Undecanoate esters | Nandrolone undecanoate | Chemistry | 207 |
563,977 | https://en.wikipedia.org/wiki/Filgrastim | Filgrastim, sold under the brand name Neupogen among others, is a medication used to treat low neutrophil count. Low neutrophil counts may occur with HIV/AIDS, following chemotherapy or radiation poisoning, or be of an unknown cause. It may also be used to increase white blood cells for gathering during leukapheresis. It is given either by injection into a vein or under the skin. Filgrastim is a leukocyte growth factor.
Common side effects include fever, cough, chest pain, joint pain, vomiting, and hair loss. Severe side effects include splenic rupture and allergic reactions. It is unclear if use in pregnancy is safe for the baby. Filgrastim is a recombinant form of the naturally occurring granulocyte colony-stimulating factor (G-CSF). It works by stimulating the body to increase neutrophil production.
Filgrastim was approved for medical use in the United States in 1991. It is on the World Health Organization's List of Essential Medicines. Filgrastim biosimilar medications are available.
Medical uses
Filgrastim is used to treat neutropenia; acute myeloid leukemia; nonmyeloid malignancies; leukapheresis; congenital neutropenia‚ cyclic neutropenia‚ or idiopathic neutropenia; and myelosuppressive doses of radiation.
Tbo-filgrastim (Granix) is indicated for reduction in the duration of severe neutropenia in people with non-myeloid malignancies receiving myelosuppressive anti-cancer drugs associated with a clinically significant incidence of febrile neutropenia.
Adverse effects
The most commonly observed adverse effect is mild bone pain after repeated administration, and local skin reactions at the site of injection. Other observed adverse effects include serious allergic reactions (including a rash over the whole body, shortness of breath, wheezing, dizziness, swelling around the mouth or eyes, fast pulse, and sweating), ruptured spleen (sometimes resulting in death), alveolar hemorrhage, acute respiratory distress syndrome, and hemoptysis. Severe sickle cell crises, in some cases resulting in death, have been associated with the use of filgrastim in people with sickle cell disorders.
Interactions
Increased hematopoietic activity of the bone marrow in response to growth factor therapy has been associated with transient positive bone imaging changes; this should be considered when interpreting bone-imaging results.
Mechanism of action
G-CSF is a colony stimulating factor which has been shown to have minimal direct in vivo or in vitro effects on the production of other haematopoietic cell types. Neupogen (filgrastim) is the name for recombinant methionyl human granulocyte colony stimulating factor (r-metHuG-CSF).
Society and culture
Biosimilars
In 2015, Sandoz's filgrastim-sndz (Zarxio) obtained the approval of the US Food and Drug Administration (FDA) as a biosimilar. It was the first product approved under the Biologics Price Competition and Innovation Act of 2009 (BPCI Act), enacted as part of the Affordable Care Act. The FDA notes that Zarxio was approved as a biosimilar, not as an interchangeable product; under the BPCI Act, only a biologic that has been approved as "interchangeable" may be substituted for the reference product without the intervention of the health care provider who prescribed the reference product. The FDA said its approval of Zarxio is based on review of evidence that included structural and functional characterization, animal study data, human pharmacokinetic and pharmacodynamic data, clinical immunogenicity data, and other clinical safety and effectiveness data demonstrating that Zarxio is biosimilar to Neupogen.
In 2018, filgrastim-aafi (Nivestym) was approved for use in the United States.
In September 2008, Ratiograstim, Tevagrastim, Biograstim, and Filgrastim ratiopharm were approved for use in the European Union. Filgrastim ratiopharm was withdrawn in July 2011 and Biograstim was withdrawn in December 2016.
In February 2009, Filgrastim Hexal and Zarzio were approved for use in the European Union.
In June 2010, Nivestim was approved for use in the European Union.
In October 2013, Grastofil was approved for use in the European Union.
In September 2014, Accofil was approved for use in the European Union.
In 2016, Fraven was approved for use by the Republic of Turkey Ministry of Health.
Nivestym was approved for medical use in Canada in April 2020.
In October 2021, Nypozi was approved for medical use in Canada.
In February 2022, filgrastim-ayow (Releuko) was approved for medical use in the United States.
In June 2024, filgrastim-txid (Nypozi) was approved for medical use in the United States.
In December 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Zefylti, intended for the treatment of neutropenia and the mobilization of peripheral blood progenitor cells. The applicant for this medicinal product is CuraTeQ Biologics s.r.o. Zefylti is a biosimilar medicinal product. It is highly similar to the reference product Neupogen (filgrastim), which has been authorized in various EU countries.
Economics
Shortly after it was introduced, analyses of whether filgrastim is a cost-effective way of preventing febrile neutropenia depended upon the clinical situation and the financial model used to pay for treatment. The longer-acting pegfilgrastim may in some cases be more cost-effective.
References
Further reading
Amgen
Drugs developed by Hoffmann-La Roche
Drugs developed by Novartis
Drugs developed by AbbVie
Drugs developed by Pfizer
Drugs acting on the blood and blood forming organs
Growth factors
Immunostimulants
Recombinant proteins
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Filgrastim | Chemistry,Biology | 1,358 |
43,086 | https://en.wikipedia.org/wiki/Project%20Mogul | Project Mogul (sometimes referred to as Operation Mogul) was a top secret project by the US Army Air Forces involving microphones flown on high-altitude balloons, whose primary purpose was long-distance detection of sound waves generated by Soviet atomic bomb tests. The project was carried out from 1947 until early 1949. It was a classified portion of an unclassified project by New York University (NYU) atmospheric researchers. The project was moderately successful, but was very expensive and was superseded by a network of seismic detectors and air sampling for fallout, which were cheaper, more reliable, and easier to deploy and operate.
Project Mogul was conceived by Maurice Ewing who had earlier researched the deep sound channel in the oceans and theorized that a similar sound channel existed in the upper atmosphere: a certain height where the air pressure and temperature result in minimal speed of sound, so that sound waves would propagate and stay in that channel due to refraction. The project involved arrays of balloons carrying disc microphones and radio transmitters to relay the signals to the ground. It was supervised by James Peoples, who was assisted by Albert P. Crary.
One of the requirements of the balloons was that they maintain a relatively constant altitude over a prolonged period of time. Thus instrumentation had to be developed to maintain such constant altitudes, such as pressure sensors controlling the release of ballast.
The early Mogul balloons consisted of large clusters of rubber meteorological balloons; however, these were quickly replaced by enormous balloons made of polyethylene plastic. These were more durable, leaked less helium, and were better at maintaining a constant altitude than the early rubber balloons. Constant altitude control and polyethylene balloons were the two major innovations of Project Mogul.
Subsequent programs
Project Mogul was the forerunner of the Skyhook balloon program, which started in the late 1940s, as well as two other espionage programs involving balloon overflights and photographic surveillance of the Soviet Union during the 1950s, Project Moby Dick and Project Genetrix. The spy balloon overflights raised storms of protest from the Soviets. The constant-altitude balloons also were used for scientific purposes such as cosmic ray experiments. Further development of nuclear detonation detection systems was extensive for decades afterward, culminating in worldwide systems by various countries to keep eyes and ears on detecting and verifying the others' nuclear weapon developments. There would also be fixed-wing United States aerial reconnaissance of the Soviet Union during the 1950s. Overflights would end in 1960 (once an aircraft had been shot down by SAMs), and reconnaissance would for decades afterward be handled mostly by reconnaissance satellites and to some extent by aircraft, such as the A-12 OXCART and SR-71 Blackbird (photography and radar) and RC-135U and similar aircraft (SIGINT including ELINT and COMINT).
Roswell incident
In 1947, a Project Mogul balloon, NYU Flight 4, launched on June 4, crashed in the desert near Roswell, New Mexico. The subsequent military cover-up of the true nature of the balloon and burgeoning conspiracy theories from UFO enthusiasts led to a celebrated "UFO" incident.
Unlike a weather balloon, the Project Mogul paraphernalia was massive and contained unusual types of materials, according to research conducted by The New York Times: "...squadrons of big balloons ... It was like having an elephant in your backyard and hoping that no one would notice it. ... To the untrained eye, the reflectors looked extremely odd, a geometrical hash of lightweight sticks and sharp angles made of metal foil. .. photographs of it, taken in 1947 and published in newspapers, show bits and pieces of what are obviously collapsed balloons and radar reflectors."
Legacy
Implementation of Mogul's experimental infrasound detection of nuclear tests exist today in ground-based detectors, part of so-called Geophysical MASINT (Measurement And Signal INTelligence). In 2013, this world-wide network of sound detectors picked up the large explosion of the Chelyabinsk meteor in Russia. The strength of the sound waves was used to estimate the size of the explosion.
References
External links
Obituary of the man who launched the balloon
Balloons (aeronautics)
Military projects of the United States
Roswell incident
Soviet Union–United States relations
Projects of the United States Air Force
Cold War military history of the United States
Articles containing video clips | Project Mogul | Engineering | 887 |
32,425,139 | https://en.wikipedia.org/wiki/Research%20and%20Technology%20Computing%20Center%20%28France%29 | The Research and Technology Computing Center (Centre de calcul recherche et technologie, CCRT) is a supercomputing center in Île-de-France.
The center started operation in 2003 and is part of the CEA scientific computing complex in Bruyères-le-Châtel. It operates the Tera 100 machine, which as of July 2011 was the fastest supercomputer in Europe, with a peak performance of 1.25 petaFLOPS.
See also
TOP500
National Computer Center for Higher Education (France)
Supercomputing in Europe
References
External links
Official website
Supercomputer sites
Supercomputers
Science and technology in France
2003 establishments in France | Research and Technology Computing Center (France) | Technology | 138 |
8,547,005 | https://en.wikipedia.org/wiki/Frontal%20solver | A frontal solver is an approach to solving sparse linear systems which is used extensively in finite element analysis. Algorithms of this kind are variants of Gauss elimination that automatically avoids a large number of operations involving zero terms due to the fact that the matrix is only sparse. The development of frontal solvers is usually considered as dating back to work by Bruce Irons.
A frontal solver builds a LU or Cholesky decomposition of a sparse matrix.
Frontal solvers start with one or a few diagonal entries of the matrix, then consider all of those diagonal entries that are coupled to the first set via off-diagonal entries, and so on. In the finite element context, these consecutive sets form "fronts" that march through the domain (and consequently through the matrix, if one were to permute rows and columns of the matrix in such a way that the diagonal entries are ordered by the wave they are part of). Processing the front involves dense matrix operations, which use the CPU efficiently.
Given that the elements of the matrix are only needed as the front marches through the matrix, it is possible (but not necessary) to provide matrix elements only as needed. For example, for matrices arising from the finite element method, one can structure the "assembly" of element matrices by assembling the matrix and eliminating equations only on a subset of elements at a time. This subset is called the front and it is essentially the transition region between the part of the system already finished and the part not touched yet. In this context, the whole sparse matrix is never created explicitly, though the decomposition of the matrix is stored. This approach was mainly used historically, when computers had little memory; in such implementations, only the front is in memory, while the factors in the decomposition are written into files. The element matrices are read from files or created as needed and discarded. More modern implementations, running on computers with more memory, no longer use this approach and instead store both the original matrix and its decomposition entirely in memory.
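As a minimal, illustrative sketch of this idea (not a production solver), consider a 1D chain of two-node finite elements, where unknown e becomes fully summed as soon as element e has been assembled and can be eliminated immediately; all names below are hypothetical, and real frontal codes add pivoting, variable front sizes, and out-of-core storage of the factors:

```python
import numpy as np

def frontal_solve_chain(ke_list, fe_list):
    """Frontal solve for a chain of 1D two-node elements.

    Element e couples unknowns e and e+1. Once element e is assembled,
    unknown e is fully summed; it is eliminated from the 2x2 front at
    once, and only its factor row is kept for back-substitution.
    """
    n_el = len(ke_list)
    factors = []          # (pivot, coupling, rhs) per eliminated unknown
    a, b = 0.0, 0.0       # Schur complement carried over for the shared node
    for e in range(n_el):
        ke, fe = ke_list[e], fe_list[e]
        piv = a + ke[0, 0]            # assemble element e into the front
        cpl = ke[0, 1]
        rhs = b + fe[0]
        factors.append((piv, cpl, rhs))
        m = ke[1, 0] / piv            # eliminate unknown e (one Gauss step)
        a = ke[1, 1] - m * cpl        # 1x1 front left for unknown e+1
        b = fe[1] - m * rhs
    x = np.zeros(n_el + 1)
    x[n_el] = b / a                   # solve the final 1x1 front
    for e in range(n_el - 1, -1, -1): # back-substitute the stored rows
        piv, cpl, rhs = factors[e]
        x[e] = (rhs - cpl * x[e + 1]) / piv
    return x

# Four identical SPD elements: the assembled matrix is tridiagonal with
# diagonal 2, 4, 4, 4, 2 and off-diagonals -1; the solution is all ones.
ke = np.array([[2.0, -1.0], [-1.0, 2.0]])
fe = np.array([1.0, 1.0])
print(frontal_solve_chain([ke] * 4, [fe] * 4))   # [1. 1. 1. 1. 1.]
```

At no point is the full sparse matrix formed: only the current front (here at most 2×2) and the stored factor rows are needed, which is what allowed historical implementations to keep the front in memory while writing factors to files.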
A variation of frontal solvers is the multifrontal method that originates in work of Duff and Reid. It is an improvement of the frontal solver that uses several independent fronts at the same time. The fronts can be worked on by different processors, which enables parallel computing.
See the references below for a monograph exposition.
See also
MUMPS
Skyline matrix
Banded matrix
References
Numerical linear algebra
Numerical software | Frontal solver | Mathematics | 479 |
58,709,631 | https://en.wikipedia.org/wiki/Mycoparasitism | A mycoparasite is an organism with the ability to parasitize fungi.
Mycoparasites might be biotrophic or necrotrophic, depending on the type of interaction with their host.
Types of mycoparasitic organisms
Myco-heterotrophy
Various plants may be considered mycoparasites, in that they parasitize and acquire most of their nutrition from fungi during a part or all of their life cycle. These include many orchid seedlings, as well as some plants that lack chlorophyll such as Monotropa uniflora. Mycoparasitic plants are more precisely described as myco-heterotrophs.
Mycoparasitic bacteria
Some bacteria live on or within fungal cells as parasites or symbionts.
Mycoparasitic viruses
Some viruses, called mycoviruses live on or within fungal cells as parasites or symbionts.
Mycoparasitic fungi
Many mycoparasites are fungi, though not all fungicolous fungi are parasites (some are commensals or saprobes). Biotrophic mycoparasites acquire nutrients from living host cells. Necrotrophic mycoparasites rely on dead host cells, which they might first kill with toxins or enzymes (saprophytic growth).
Kinds of mycoparasitic interactions
Biotrophic and necrotrophic mycoparasites
Biotrophic mycoparasites get nutrients from living host cells and growth of these parasites is greatly influenced by the metabolism of the host. Biotrophic mycoparasites tend to show high host specificity, and often form specialized infection structures. Necrotrophic mycoparasites can be aggressively antagonistic, invading the host fungus and killing, then digesting components of its cells. Necrotrophic parasites tend to have low host specificity, and are relatively unspecialized in their mechanism of parasitism.
Balanced and destructive mycoparasites
Balanced mycoparasites have little or no destructive effect on the host, whereas destructive mycoparasites have the opposite effect. Biotrophic mycoparasites are generally considered to be balanced mycoparasites; necrotrophic mycoparasites use toxins or enzymes to kill host cells, and are therefore usually considered destructive mycoparasites. However, in some combinations, the parasite may live as a biotroph during its early development, then kill its host and act more like a destructive mycoparasite in the late stages of parasitization.
Mechanisms of Mycoparasitism
The four main steps of mycoparasitism include target location; recognition; contact and penetration; and nutrient acquisition.
Target location
Much research indicates that hyphal growth direction, spore germination, and bud tube elongation of mycoparasitic fungi may exhibit tropism in response to detection of a potential host. This tropic recognition reaction is thought to arise from detection of signature chemicals of the host; the direction of the concentration gradient determines the growth direction of the parasite. As the mycoparasitic interaction is host-specific and not merely a contact response, it is likely that signals from the host fungus are recognized by mycoparasites such as Trichoderma and provoke transcription of mycoparasitism-related genes.
Recognition
When mycoparasites contact their fungal host, they will recognize each other. This recognition between mycoparasites and their host fungi may be related to the agglutinin on the cell surface of the mycohost. Carbohydrate residues on the cell wall of mycoparasites might bind to lectins on the surface of the host fungi to achieve mutual recognition.
Contact and penetration
Once a mycoparasitic fungus and its host recognize each other, both may exhibit changes in external form and internal structure. Different mycoparasitic fungi form different structures when interacting with their hosts. For example, the hyphae of some mycoparasitic fungi form specialized contact cells resembling haustoria on the hyphae of their hosts; others may coil around the hyphae of their host fungus, or penetrate and then grow inside host hyphae. Necrotrophic mycoparasites may kill host hyphae with toxins or enzymes before invading them.
Application
Mycoparasitic fungi can be important controls of plant disease fungi in natural systems and in agriculture, and may play a role in integrated pest management (IPM) as biological controls.
Some Trichoderma species have been developed as biocontrols of a range of commercially important diseases, and have been applied in the United States, India, Israel, New Zealand, Sweden, and other countries to control plant diseases caused by Rhizoctonia solani, Botrytis cinerea, Sclerotium rolfsii, Sclerotinia sclerotiorum, Pythium spp., and Fusarium spp. as a promising alternative to chemical pesticides.
Further study of mycoparasitism may drive discovery of more bioactive compounds, including biopesticides and biofertilizers.
References
Parasites of fungi | Mycoparasitism | Biology | 1,094 |
3,135,403 | https://en.wikipedia.org/wiki/Richard%20Scott%20Perkin | Richard Scott Perkin (1906–1969) was an American entrepreneur and one of the cofounders of Perkin-Elmer.
Life
At an early age he developed an interest in astronomy, and began making telescopes and grinding lenses and mirrors. He spent only a year in college studying chemical engineering before he began working at a brokerage firm on Wall Street.
During the 1930s, he met Charles Elmer when the latter was presenting a lecture. The two had a mutual interest in astronomy and decided to go into business together. In 1937, they founded Perkin-Elmer as an optical design and consulting company. Richard served as president of the company until 1960, then became chairman of the board.
The crater Perkin on the Moon was named after him, while Elmer was named after his business partner.
Perkin was married to Gladys Frelinghuysen Talmage who became CEO after he died. A decade later, Gladys commissioned a commemorative history to be written. One hundred copies were printed and distributed to friends.
Further reading
Fahy, Thomas P., Richard Scott Perkin and the Perkin-Elmer Corporation, 1987, Perkin-Elmer Print Shop, .
See also
List of astronomical instrument makers
References
External links
1906 births
1969 deaths
Telescope manufacturers
20th-century American businesspeople | Richard Scott Perkin | Astronomy | 258 |
6,757,195 | https://en.wikipedia.org/wiki/Straight%20skeleton | In geometry, a straight skeleton is a method of representing a polygon by a topological skeleton. It is similar in some ways to the medial axis but differs in that the skeleton is composed of straight line segments, while the medial axis of a polygon may involve parabolic curves. However, both are homotopy-equivalent to the underlying polygon.
Straight skeletons were first defined for simple polygons by , and generalized to planar straight-line graphs (PSLG) by .
In their interpretation as projections of roof surfaces, they were already discussed extensively in earlier literature.
Definition
The straight skeleton of a polygon is defined by a continuous shrinking process in which the edges of the polygon are moved inwards parallel to themselves at a constant speed. As the edges move in this way, the vertices where pairs of edges meet also move, at speeds that depend on the angle of the vertex. If one of these moving vertices collides with a nonadjacent edge, the polygon is split in two by the collision, and the process continues in each part. The straight skeleton is the set of curves traced out by the moving vertices in this process.
In the illustration the top figure shows the shrinking process and the middle figure depicts the straight skeleton in blue.
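For a convex polygon, where no split events occur, one instant of this shrinking process can be sketched by offsetting every edge line inward by the elapsed time and intersecting consecutive offset lines (an illustrative sketch; the names are made up, and a full straight-skeleton implementation must also detect edge-collapse and split events):

```python
import numpy as np

def shrink_convex(poly, t):
    """Positions of the moving vertices of a convex CCW polygon at time t.

    Each edge is translated inward by t along its normal; vertex i of the
    shrunken polygon is the intersection of offset edges i-1 and i.  As t
    grows, these vertices trace the straight skeleton.  Valid only while
    every edge of the polygon still exists.
    """
    pts = np.asarray(poly, dtype=float)
    n = len(pts)
    lines = []
    for i in range(n):
        p, q = pts[i], pts[(i + 1) % n]
        d = q - p
        inward = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])
        lines.append((p + t * inward, d))   # point on offset line, direction
    shrunk = []
    for i in range(n):
        (p1, d1), (p2, d2) = lines[i - 1], lines[i]
        # Solve p1 + s*d1 == p2 + u*d2 for the offset-line intersection.
        s, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
        shrunk.append(p1 + s * d1)
    return np.array(shrunk)

# Unit square shrunk by 0.1: the 0.8 x 0.8 square centred at (0.5, 0.5).
# The four vertices move along the square's diagonals, i.e. its skeleton.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(shrink_convex(square, 0.1))
```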
Algorithms
The straight skeleton may be computed by simulating the shrinking process by which it is defined; a number of variant algorithms for computing it have been proposed, differing in the assumptions they make on the input and in the data structures they use for detecting combinatorial changes in the input polygon as it shrinks.
The following algorithms consider an input that forms a polygon, a polygon with holes, or a PSLG. For a polygonal input we denote the number of vertices by n and the number of reflex (concave, i.e., angle greater than ) vertices by r. If the input is a PSLG then we consider the initial wavefront structure, which forms a set of polygons, and again denote by n the number of vertices and by r the number of reflex vertices w.r.t. the propagation direction. Most of the algorithms listed here are designed and analyzed in the real RAM model of computation.
Aichholzer et al. showed how to compute straight skeletons of PSLGs in time O(n³ log n), or more precisely time O((n² + f) log n), where n is the number of vertices of the input polygon and f is the number of flip events during the construction. The best known bound for f is O(n³).
An algorithm with a worst-case running time in O(nr log n), or simply O(n² log n), is given by Huber and Held, who argue that their approach is likely to run in near-linear time for many inputs.
Petr Felkel and Štěpán Obdržálek designed an algorithm for simple polygons that is said to have an efficiency of O(nr + n log r). However, it has been shown that their algorithm is incorrect.
By using data structures for the bichromatic closest pair problem, Eppstein and Erickson showed how to construct straight skeletons using a linear number of closest-pair data structure updates. A closest-pair data structure based on quadtrees provides an O(nr + n log n) time algorithm, while a significantly more complicated data structure leads to the better asymptotic time bound O(n^(1+ε) + n^(8/11+ε) r^(9/11+ε)), or, more simply, O(n^(17/11+ε)), where ε is any constant greater than zero. This remains the best worst-case time bound known for straight skeleton construction with unrestricted inputs, but it is complicated and has not been implemented.
For simple polygons in general position, the problem of straight skeleton construction is easier. Cheng, Mencel, and Vigneron showed how to compute the straight skeleton of simple polygons in time O(n log n log r + r^(4/3+ε)). In the worst case, r may be on the order of n, in which case this time bound may be simplified to O(n^(4/3+ε)). If the vertices of the input polygon have O(log n)-bit rational coordinates, their algorithm can be improved to run in O(n log n) time, even if the input polygon is not in general position.
A monotone polygon with respect to a line L is a polygon with the property that every line orthogonal to L intersects the polygon in a single interval. When the input is a monotone polygon, its straight skeleton can be constructed in time O(n log² n).
Applications
Each point within the input polygon can be lifted into three-dimensional space by using the time at which the shrinking process reaches that point as the z-coordinate of the point. The resulting three-dimensional surface has constant height on the edges of the polygon, and rises at constant slope from them except for the points of the straight skeleton itself, where surface patches at different angles meet. In this way, the straight skeleton can be used as the set of ridge lines of a building roof, based on walls in the form of the initial polygon. The bottom figure in the illustration depicts a surface formed from the straight skeleton in this way.
Demaine, Demaine and Lubiw used the straight skeleton as part of a technique for folding a sheet of paper so that a given polygon can be cut from it with a single straight cut (the fold-and-cut theorem), and related origami design problems.
Barequet et al. use straight skeletons in an algorithm for finding a three-dimensional surface that interpolates between two given polygonal chains.
Tănase and Veltkamp propose to decompose concave polygons into unions of convex regions using straight skeletons, as a preprocessing step for shape matching in image processing.
Bagheri and Razzazi use straight skeletons to guide vertex placement in a graph drawing algorithm in which the graph drawing is constrained to lie inside a polygonal boundary.
The straight skeleton can also be used to construct an offset curve of a polygon, with mitered corners, analogously to the construction of an offset curve with rounded corners formed from the medial axis. Tomoeda and Sugihara apply this idea in the design of signage, visible from wide angles, with an illusory appearance of depth. Similarly, Asente and Carr use straight skeletons to design color gradients that match letter outlines or other shapes.
As with other types of skeleton such as the medial axis, the straight skeleton can be used to collapse a two-dimensional area to a simplified one-dimensional representation of the area. For instance, Haunert and Sester describe an application of this type for straight skeletons in geographic information systems, in finding the centerlines of roads.
Every tree with no degree-two vertices can be realized as the straight skeleton of a convex polygon. The convex hull of the roof shape corresponding to this straight skeleton forms a Steinitz realization of the Halin graph formed from the tree by connecting its leaves in a cycle.
Higher dimensions
Barequet et al. defined a version of straight skeletons for three-dimensional polyhedra, described algorithms for computing it, and analyzed its complexity on several different types of polyhedron.
Huber et al. investigated metric spaces under which the corresponding Voronoi diagrams and straight skeletons coincide. For two dimensions, the characterization of such metric spaces is complete. For higher dimensions, this method can be interpreted as a generalization of straight skeletons of certain input shapes to arbitrary dimensions by means of Voronoi diagrams.
References
External links
2D Straight Skeleton in CGAL, the Computational Geometry Algorithms Library
Straight Skeleton for polygon with holes Straight Skeleton builder implemented in java.
STALGO: "STALGO is an industrial-strength C++ software package for computing straight skeletons and mitered offset-curves." by Stefan Huber.
Discrete geometry
Computational geometry | Straight skeleton | Mathematics | 1,619 |
7,830,834 | https://en.wikipedia.org/wiki/Pack%20saddle | A pack saddle is any device designed to be secured on the back of a horse, mule, or other working animal so it can carry heavy loads such as luggage, firewood, small cannons, or other things too heavy to be carried by humans.
Description
Ideally the pack saddle rests on a saddle blanket or saddle pad to spread the weight of the saddle and its burden on the pack animal's back. The underside of the pack saddle is designed to conform well to the shape of the pack animal's back. It is typically divided into two symmetrical parts separated by a gap at the top to ensure that the weight being carried does not rest on the draft animal's backbone and to provide good ventilation to promote the evaporation of sweat.
The pack saddle consists of a tree (the wooden blocks that sit on the horse's back), the half-breed (the canvas saddle cover), the breeching, often a crupper, which prevents the loaded saddle from sliding too far forward, and the breast collar, which keeps the loaded saddle from sliding too far back on the packhorse or mule. Flexible bars on some pack saddles adjust to a horse's back and offer several options for hanging panniers, manties (packs wrapped in canvas), or other loads.
There are many types of pack saddle:
Crossbuck / Sawbuck pack saddle has crossed wooden bars to attach sling ropes.
Otago pack saddle, known in military use as the British universal pack saddle, is a rideable pack saddle with two large cushioning pads to prevent injury to the animal and large hooks on each side of the metal pommel and cantle arches for hanging pack bags or crates.
Decker pack saddle has two rings for tying sling ropes.
The modern pack saddle is usually not intended to support a human rider. The upper side of the pack saddle resembles a rack to let its load rest on and be tied on with ropes, straps, a surcingle, or other devices. One historical exception was a pack saddle used in feudal Japan by non-samurai class commoners who were not allowed to use riding saddles (kura) for transportation.
See also
Backpacking with animals
Packhorse
Saddle
SCR-203 pack radio
Trail riding
References
External links
Animal equipment
Saddles | Pack saddle | Biology | 456 |
37,700,662 | https://en.wikipedia.org/wiki/Rifalazil | Rifalazil (also known as KRM-1648 and AMI-1648) is an antibiotic substance that kills bacterial cells by blocking off the β-subunit in RNA polymerase. Rifalazil is used as a treatment for many different diseases. The most common are Chlamydia infection, Clostridioides difficile associated diarrhea (CDAD), and tuberculosis (TB). Using rifalazil and the effects that coincide with taking rifalazil for treating a bacterial disease vary from person to person, as does any drug put into the human body. Food interactions and genetic variation are a few causes for the variation in side effects from the use of rifalazil. Its development was terminated in 2013 due to severe side effects.
Biological properties
Rifalazil works well alone and in conjunction with other antibiotics. In a study conducted in 2005, it was found that combining rifalazil with vancomycin increased bacterial killing by a factor of 3. Rifalazil also has a very long half-life, which allows for infrequent dosing as opposed to frequent small doses of antibiotics.
Many different studies have been conducted that have researched the effect of rifalazil on certain strains of bacterial diseases. In a study conducted in 2004, it was found that rifalazil reduced C. difficile strains when studied in vitro.
Uses
Rifalazil was developed to treat cases of tuberculosis and chlamydia. It is a very good candidate treatment for tuberculosis because rifalazil achieves a very high concentration in blood cells and in the lungs. In addition, rifalazil attracted wider interest because it has potential use against many other indications, such as HIV, TB, and MRSA. Its very long half-life allows for fewer treatments and dosages, and the drug is administered orally, which is also convenient in terms of drug administration; this made it a promising drug for tuberculosis, CDAD, and chlamydia. Although the uses for rifalazil seemed very effective, negative side effects limited its use: rifalazil interacts with other drugs and, on top of that, rapid resistance can develop.
Tested diseases for rifalazil treatment
Chlamydia infection
Clostridioides difficile associated diarrhea
Trachoma
Tuberculosis
Leprosy
Buruli ulcer
References
Abandoned drugs
Phenylpiperazines
Rifamycin antibiotics
Anti-tuberculosis drugs
Heterocyclic compounds with 6 rings
Acetate esters
Carboxamides
Isobutyl compounds | Rifalazil | Chemistry | 581 |
20,775,036 | https://en.wikipedia.org/wiki/Iron%20oxide%20cycle | The iron oxide cycle (Fe3O4/FeO) is the original two-step thermochemical cycle proposed for hydrogen production.
It is based on the reduction and subsequent oxidation of iron ions, particularly the reduction and oxidation between Fe3+ and Fe2+. The ferrite, or iron oxide, begins in the form of a spinel and, depending on the reaction conditions, dopant metals and support material, forms either wüstites or different spinels.
Process description
The thermochemical two-step water splitting process uses two redox steps. The steps of solar hydrogen production by the iron based two-step cycle are:
(1) (MxFe1−x)3O4 → 3 (MxFe1−x)O + 1/2 O2
(2) 3 (MxFe1−x)O + H2O → (MxFe1−x)3O4 + H2
where M can be any of a number of metals, often Fe itself, Co, Ni, Mn, Zn or mixtures thereof.
The endothermic reduction step (1) is carried out at high temperatures, though the "Hercynite cycle" is capable of significantly lower temperatures. The oxidative water splitting step (2) occurs at a lower temperature and regenerates the original ferrite material in addition to producing hydrogen gas. The temperature level is realized by using geothermal heat from magma or a solar power tower and a set of heliostats to collect the solar thermal energy.
Hercynite cycle
Like the traditional iron oxide cycle, the hercynite cycle is based on the oxidation and reduction of iron atoms. However, unlike the traditional cycle, the ferrite material reacts with a second metal oxide, aluminum oxide, rather than simply decomposing. For a cobalt ferrite, the reactions take place via the following two steps:
CoFe2O4 + 3 Al2O3 → CoAl2O4 + 2 FeAl2O4 + 1/2 O2
CoAl2O4 + 2 FeAl2O4 + H2O → CoFe2O4 + 3 Al2O3 + H2
The reduction step of the hercynite cycle takes place at a temperature substantially lower than that of the traditional water splitting cycle. This leads to lower radiation losses, which scale as temperature to the fourth power.
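Because radiative losses scale as the fourth power of temperature (the Stefan–Boltzmann law), even a moderate drop in reduction temperature cuts losses substantially. A minimal Python sketch; the two reduction temperatures are illustrative assumptions, not values from this article:

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_k, area_m2=1.0, emissivity=1.0):
    # Black-body radiative loss per the Stefan-Boltzmann law.
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Hypothetical reduction temperatures for comparison (assumed values).
t_traditional = 1400 + 273.15  # K
t_hercynite = 1200 + 273.15    # K

ratio = radiated_power(t_hercynite) / radiated_power(t_traditional)
print(f"relative radiative loss: {ratio:.2f}")  # ~0.60, i.e. ~40% lower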
Advantages and disadvantages
The advantages of the ferrite cycles are: they have lower reduction temperatures than other 2-step systems, no metallic gases are produced, high specific H2 production capacity, non-toxicity of the elements used, and abundance of the constituent elements.
The disadvantages of the ferrite cycles are: similar reduction and melting temperature of the spinels (except for the hercynite cycle as aluminates have very high melting temperatures), and slow rates of the oxidation, or water splitting, reaction.
See also
Cerium(IV) oxide-cerium(III) oxide cycle
Copper-chlorine cycle
Hybrid sulfur cycle
Hydrosol-2
Sulfur-iodine cycle
Zinc–zinc oxide cycle
References
External links
Solar hydrogen from iron oxide based thermochemical cycles
Chemical reactions
Hydrogen production | Iron oxide cycle | Chemistry | 538 |
4,557,120 | https://en.wikipedia.org/wiki/Centered%20tree | In the mathematical subfield of graph theory, a centered tree is a tree with only one center, and a bicentered tree is a tree with two centers.
Given a graph, the eccentricity of a vertex v is defined as the greatest distance from v to any other vertex. A center of a graph is a vertex with minimal eccentricity. A graph can have an arbitrary number of centers. However, Jordan (1869) proved that for trees, there are only two possibilities:
The tree has precisely one center (centered trees).
The tree has precisely two centers (bicentered trees). In this case, the two centers are adjacent.
A proof of this fact is given, for example, by Harary.
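This dichotomy yields a simple algorithm for locating the centers of a tree: repeatedly delete all current leaves; the last one or two surviving vertices are exactly the centers. A minimal Python sketch (the edge-list representation and function name are illustrative):

from collections import defaultdict

def tree_centers(edges):
    # Return the one or two centers of a tree given as an edge list,
    # by repeatedly stripping all current leaves.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(adj)
    leaves = [v for v in remaining if len(adj[v]) == 1]
    while len(remaining) > 2:
        remaining -= set(leaves)
        new_leaves = []
        for leaf in leaves:
            for nb in adj[leaf]:
                adj[nb].discard(leaf)
                if nb in remaining and len(adj[nb]) == 1:
                    new_leaves.append(nb)
        leaves = new_leaves
    return sorted(remaining)

# A path on five vertices is centered; a path on four is bicentered.
print(tree_centers([(1, 2), (2, 3), (3, 4), (4, 5)]))  # [3]
print(tree_centers([(1, 2), (2, 3), (3, 4)]))          # [2, 3]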
Notes
References
External links
Trees (graph theory) | Centered tree | Mathematics | 152 |
59,500,418 | https://en.wikipedia.org/wiki/SN%2035210 | SN 35210 is an arylcyclohexylamine dissociative anesthetic drug. It was derived from ketamine with the intention of producing a shorter acting agent more suitable to be used as a stand-alone drug, whereas ketamine itself generally has to be used in combination with other drugs such as midazolam to minimise the occurrence of emergence reactions due to its hallucinogenic side effects. In common with other short-acting anaesthetic drugs such as remifentanil and remimazolam, SN 35210 has had the chemical structure modified to incorporate a methyl ester group which is rapidly metabolised to a carboxylic acid, producing an inactive compound and thus rapidly terminating the effects of the drug. It was selected for development from a series of structurally related alkyl esters due to having the shortest duration of action and the most similar pharmacological profile to ketamine itself.
References
Arylcyclohexylamines
Dissociative drugs
NMDA receptor antagonists | SN 35210 | Chemistry | 221 |
39,117,324 | https://en.wikipedia.org/wiki/Soldier%20Creek%20Kilns | The Soldier Creek Kilns near Stockton, Utah were built about 1873 and were in use until about 1899. Also known as the Waterman Coking Ovens, they were listed on the National Register of Historic Places (NRHP) in 1980. The listing included 14 contributing structures.
The site includes four smelting kilns which document smelting technology brought from California and from the eastern U.S. One of the four, the best-preserved, is an eastern beehive-type, parabolic-shaped kiln that would hold more than 10 cords of wood and would be tended through two iron doors.
In 1996, it was argued that these were worth preserving.
The location of the site is not disclosed; they are listed as "Address Restricted", as is done for archeological resources that may be damaged and lose their information potential, if not protected.
See also
Lime Kilns, Eureka, Utah, NRHP-listed
Charcoal Kilns, Eureka, Utah, NRHP-listed
Frisco Charcoal Kilns, Milford, Utah, NRHP-listed
References
Archaeological sites on the National Register of Historic Places in Utah
Industrial buildings completed in 1860
Industrial buildings and structures on the National Register of Historic Places in Utah
Buildings and structures in Tooele County, Utah
Kilns
National Register of Historic Places in Tooele County, Utah | Soldier Creek Kilns | Chemistry,Engineering | 281 |
37,011,817 | https://en.wikipedia.org/wiki/HD%2022663 | HD 22663 (y Eridani) is a candidate astrometric binary star system in the equatorial constellation of Eridanus. It is visible to the naked eye with an apparent visual magnitude of 4.57. Based upon its annual parallax shift, it is located around 230 light years from the Sun. It is moving further away from the Earth with a heliocentric radial velocity of +11.5 km/s, having made its closest approach some 3.76 million years ago.
The visible component is an orange-hued giant star with a stellar classification of K1 III, having exhausted the hydrogen at its core and evolved away from the main sequence. It has an estimated 1.4 times the mass of the Sun and has expanded to 13 times the Sun's radius. At the age of 2.6 billion years, this star is radiating 96 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,660 K.
References
K-type giants
Astrometric binaries
Eridanus (constellation)
Eridani, y
Durchmusterung objects
022663
016870
1106 | HD 22663 | Astronomy | 237 |
58,451,522 | https://en.wikipedia.org/wiki/Aspergillus%20parvisclerotigenus | Aspergillus parvisclerotigenus is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 2005. A. parvisclerotigenus has been isolated in Nigeria and has been found to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, aflatrem, aflavarin, aspirochlorin, cyclopiazonic acid, kojic acid, and paspaline.
References
parvisclerotigenus
Fungi described in 2005
Fungus species | Aspergillus parvisclerotigenus | Biology | 128 |
31,967,305 | https://en.wikipedia.org/wiki/Dimethyl%20tetrachloroterephthalate | Dimethyl tetrachloroterephthalate (DCPA, with the main trade name Dacthal) is an organic compound with the formula C6Cl4(CO2CH3)2. It is the dimethyl ester of tetrachloroterephthalic acid, used as a preemergent herbicide with the ISO common name chlorthal-dimethyl. It kills annual grasses and many common weeds without killing sensitive plants such as turf grasses, flowers, fruits, vegetables, and cotton.
DCPA was first registered for use in the United States in 1958, for use on turf grasses, for the control of annual grasses such as crabgrass, and certain annual broad-leaved weeds. Production of DCPA was eventually discontinued by ISK Biosciences in 1998, but the large manufacturing company AMVAC (American Vanguard Corporation) began producing the product in 2001 for use in America. In Australia, DCPA is the active ingredient in agchem company Farmalinx's herbicide called Dynamo 750.
EPA registration for use on vegetable crops was voluntarily terminated by the manufacturer in 2005 in response to EPA concerns regarding the contamination of groundwater. In 2009, DCPA was banned for use on crops in the European Union. On August 6, 2024, the United States Environmental Protection Agency announced the emergency suspension of all registrations of the pesticide in the United States due to concerns regarding embryo-fetal toxicity. According to the EPA, when pregnant mothers are exposed to DCPA their babies could experience changes to hormone levels, and these changes are generally linked to symptoms such as low birth weight, impaired brain development, decreased IQ, and impaired motor skills later in life, some of which may be irreversible.
Synthesis
The use of DCPA as a herbicide was first described in a patent filed in 1958. The material was prepared as had been described in a 1948 research paper by treating terephthaloyl chloride with chlorine to give tetrachloroterephthaloyl chloride which was then esterified with methanol.
C6H4(COCl)2 + 4 Cl2 + Fe (cat.) → C6Cl4(COCl)2
C6Cl4(COCl)2 + 2 CH3OH → C6Cl4(CO2CH3)2
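As a quick arithmetic check on the product formula, C6Cl4(CO2CH3)2 expands to C10H6Cl4O4, and its molar mass can be totalled from standard atomic weights in a few lines of Python (a back-of-the-envelope sketch, not from the source):

MASS = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}  # g/mol

# C6Cl4(CO2CH3)2: 6 ring C + 2 ester C + 2 methyl C = 10 C;
# 2 x 3 methyl H = 6 H; 4 ring Cl; 2 x 2 ester O = 4 O.
dcpa = {"C": 10, "H": 6, "Cl": 4, "O": 4}

molar_mass = sum(MASS[el] * n for el, n in dcpa.items())
print(f"{molar_mass:.2f} g/mol")  # ~331.95 g/mol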
Contamination
DCPA is released directly into the environment during its use as a herbicide. DCPA exists in both the vapor and particulate phases when exposed to the air. In the vapor phase, DCPA should react slowly with hydroxyl radicals with an estimated half-life of 36 days. Particulate-phase DCPA may be physically removed from air by wet and dry deposition. With a high Koc of 3900, DCPA is presumably immobile in soil, and thus may strongly attach to inorganic material in soil and other environments.
In addition, its breakdown products, TPA (Tetrachloroterephthalic acid) and MTP (Monomethyl tetrachloroterephthalic acid), enter the environment after being formed through various processes. Studies have shown that DCPA can partially degrade through volatilization, as well as via photodegradation, but biodegradation is the primary route of DCPA degradation leading to MTP and TPA. Environmental Protection Agency testing in New York showed "measurable residues of DCPA and degradates" on land that had endured five years of treatment with DCPA, followed by three years of no treatment. DCPA is also prevalent in water and bioconcentration is seen in aquatic animals. DCPA accumulation was shown in fish at "several locations" in the United States. Some of these locations included the Apalachicola, Colorado, Mobile, Savannah, and Pee Dee River Basins in both bass and carp. In the study done on these river basin locations, roughly 39% of the fish tested had stored DCPA concentrations that exceeded their limit of detection, and the male fish had higher stored concentrations of DCPA than their female counterparts. Fish collected from different locations throughout the United States are often contaminated by DCPA if they are near agricultural areas that use or have used it as an herbicide.
Humans are exposed to DCPA through drinking well water, eating fish, and eating leafy and root vegetables. In some areas where agriculture is prominent, inhalation of air can be a means of exposure.
Degradation and metabolism
DCPA degrades via successive hydrolysis of the two ester linkages, first forming monomethyl tetrachloroterephthalate (MTP) then further to tetrachloroterephthalic acid (TPA). Following ingestion, TPA is formed in tissues during the metabolism of DCPA.
In the presence of sunlight, the half-life for DCPA floating on the surface of water is less than three days. In soil, the half-life in the presence of sunlight ranges from 14 to 100 days.
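Treating the breakdown as first-order decay, the quoted half-lives translate directly into residue fractions. A small Python sketch; the 30-day window is an arbitrary illustration, not a figure from the source:

def fraction_remaining(days, half_life_days):
    # First-order decay: fraction left after `days` given a half-life.
    return 0.5 ** (days / half_life_days)

# Fraction of DCPA remaining after 30 days, for the soil half-life
# range quoted above (14 to 100 days in the presence of sunlight).
for hl in (14, 100):
    print(f"half-life {hl:>3} d: {fraction_remaining(30, hl):.0%} remains")
# half-life  14 d: 23% remains
# half-life 100 d: 81% remains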
Properties of degradates
TPA and MTP are both more water-soluble than DCPA, and readily leach into groundwater wherever DCPA is used, regardless of soil composition. TPA has been observed to cause weight loss and diarrhea in laboratory rats, the same symptoms caused by DCPA, but at lower doses than necessary for DCPA. TPA does not degrade, and infiltrates soil and nearby water sources. The accumulation of TPA and its salts in areas where DCPA is widely used has prompted research on TPA, although no carcinogenicity studies have been conducted yet. There have been no standard toxicity studies identified for MTP.
Toxicology
DCPA is listed as a Group C Possible Human Carcinogen by the National Library of Medicine.
Studies show that DCPA and TPA may cause detrimental health effects in laboratory animals, mainly weight loss and diarrhea occurring at doses near 2000 mg/kg/day. There were also effects on the lungs, liver, kidney, and thyroid glands of male and female rats. The LD50, or 50% lethal dose of DCPA, is greater than 10,000 mg/kg in beagle dogs. In humans, it seems that DCPA is poorly absorbed, as 6% of a 25-mg dose and 12% of a 50-mg dose were absorbed according to metabolites in urine. Decreased motor activity and poor sight reflexes were also observed in a study on New Zealand white rabbits that were exposed to DCPA. Exposure from DCPA to pregnant rats resulted in reduced body weight in both mother and pup, as well as changes to the thyroid in pups. Additionally, rats whose mothers were exposed to DCPA during pregnancy had impaired higher-level learning test scores than those whose mothers were not exposed. Studies regarding the carcinogenicity of DCPA have produced mixed results. A study by ISK Biotech Corp. in 1993 showed DCPA leading to thyroid tumors in male and female rats, and liver tumors in female rats. Alternatively, a 1963 study using pure DCPA did not produce any negative results when administered to albino rats.
Studies have demonstrated that DCPA acts as a chemical disruptor by interfering with microtubule formation in exposed cells. This interference results in abnormal cell division. The abnormal microtubules affect cell wall formation as well as chromosome replication and division. The key difference between DCPA and other mitotic inhibitors is that it often produces multinucleate cells. It essentially kills plants by inhibiting cell division in this manner.
Exposure to DCPA has shown damaging effects in the adrenal glands, kidneys, livers, thyroids, and spleens of laboratory animals. The effects on the rabbits included decreased motor activity and poor reflexes.
According to the US EPA, when pregnant mothers are exposed to DCPA their babies could experience changes to hormone levels, and these changes are generally linked to symptoms like low birth weight, impaired brain development, decreased IQ, and impaired motor skills, which may be irreversible and lifelong.
Regulation
Australia
Products containing DCPA had their registration cancelled and usage banned by the APVMA on 10 October 2024.
Europe
DCPA has been prohibited for use on crops in the European Union since 2009.
United States
In the U.S., the Safe Drinking Water Act of 1996 has the EPA publish a list of contaminants referred to as the Contaminant Candidate List to assist in research efforts. The Safe Drinking Water Act also calls for the EPA to choose five contaminants from the list and determine whether regulation is necessary. In July 2008, the EPA determined that no regulatory action is necessary for the DCPA mono-acid (MTP) and DCPA di-acid (TPA) degradates. After multiple studies, it was determined that degradates of DCPA appear too infrequently to pose a serious health risk, so the government does not regulate DCPA or its degradates in drinking water. Public water systems are also not required to monitor DCPA, MTP, or TPA. There are standards set by some states ranging from 0.17 µg/L to 2 µg/L.
Some uses of DCPA, particularly on vegetable crops, were voluntarily terminated by the registrant in 2005, in response to the EPA's concerns regarding DCPA and TPA contamination of groundwater.
In California, DCPA products are required to be labeled with information that states that products with DCPA also contain trace amounts of Hexachlorobenzene (HCB) which is a chemical known to the State of California to cause cancer or birth defects.
On August 6, 2024, the United States Environmental Protection Agency announced the emergency suspension of all registrations of this pesticide in the United States due to embryo-fetal toxicity concerns. This was the first time in almost 40 years that the EPA had taken this type of emergency action. “DCPA is so dangerous that it needs to be removed from the market immediately,” said Assistant Administrator for the Office of Chemical Safety and Pollution Prevention Michal Freedhoff.
References
Preemergent herbicides
Chloroarenes
Methyl esters
Terephthalate esters
Reproductive toxicants | Dimethyl tetrachloroterephthalate | Chemistry | 2,093 |
5,729,303 | https://en.wikipedia.org/wiki/Analog%20high-definition%20television | Analog high-definition television has referred to a variety of analog video broadcast television systems with various display resolutions throughout history.
Before 1940
On 2 November 1936 the BBC began transmitting the world's first public regular analog "high definition" television service from the Victorian Alexandra Palace in north London. It therefore claims to be the birthplace of television broadcasting as we know it today. The UK's 405-line system introduced in 1936 was described as "high definition"; however, this was in comparison with the early 30-line (largely) experimental system from the 1920s, and would not be considered high definition by modern standards.
John Logie Baird, Philo T. Farnsworth, and Vladimir Zworykin had each developed competing TV systems, but resolution was not the issue that separated their substantially different technologies; it was patent interference lawsuits and deployment issues, given the tumultuous financial climate of the late 1920s and 1930s. Most patents were expiring by the end of World War II, leaving no worldwide standard for television. The standards introduced in the early 1950s stayed for over half a century.
French 819-line system
819-line was a monochrome TV system developed and used in France as television broadcast resumed after World War II. Transmissions started in 1949 and were active up to 1985, although limited to France, Belgium and Luxembourg. It is associated with CCIR System E and F.
Despite some attempts to create a color SECAM version of the 819-line system, France gradually abandoned the system in favor of the Europe-wide standard of 625-lines with the final 819-line transmissions taking place in Paris from the Eiffel Tower on 19 July 1983. Tele Monte Carlo in Monaco were the last broadcasters to transmit 819-line television, closing down their transmitter in 1985.
Multiple sub-Nyquist sampling encoding system (MUSE)
Japan had the earliest working HDTV system, with design efforts going back to 1979. The country began broadcasting wideband analog high-definition video signals in the late 1980s using an interlaced resolution of 1035 active lines (1035i) out of 1125 total lines, supported by the Sony HDVS line of equipment.
The Japanese system, developed by NHK Science & Technology Research Laboratories in the 1980s, employed filtering tricks to reduce the original source signal and decrease bandwidth utilization. MUSE was marketed as "Hi-Vision" by NHK. Japanese broadcast engineers rejected conventional vestigial sideband broadcasting to allow transmitting an HD signal in a tighter bandwidth. It was decided early on that MUSE would be a satellite broadcast format, as Japan economically supports satellite broadcasting.
In the typical setup, three picture elements on a line were actually derived from three separate scans. Stationary images were transmitted at full resolution. However, as MUSE lowers the horizontal and vertical resolution of material that varies greatly from frame to frame, moving images were blurred in a manner similar to using 16 mm movie film for HDTV projection. In fact, whole-camera pans would result in a loss of 50% of horizontal resolution. Shadows and multipath still plague this analog frequency modulated transmission mode.
MUSE's "1125-lines" are an analog measurement, which includes non-video "scan lines" during which a CRT's electron beam returns to the top of the screen to begin scanning the next field. Only 1035-lines have picture information. Digital signals count only the lines (rows of pixels) of the picture makeup as there are no other scanning lines (though conversion to an analogue format will introduce them), so NTSC's 525-lines become 480i, and MUSE would be 1035i.
Japan has since switched to a digital HDTV system based on ISDB; the original MUSE-based BS Satellite channel 9 (NHK BS Hi-vision) ended transmitting on November 30, 2007, moving to BS-digital channel 103.
Subsampling lives on in modern MPEG systems based on JPEG coding, as JPEG offers Chroma sub-sampling. High quality HD television has a sampling structure approximating 4:2:1 (Luma : Chroma : Saturation) for reference images (I-Frames), though 4:0.75:0.65 is probably typical for multi-channel delivery.
HD-MAC
HD-MAC was a proposed television standard by the European Commission in 1986 (MAC standard). It was an early attempt by the EEC to provide HDTV in Europe. It was a complex mix of analog signal (Multiplexed Analog Components) multiplexed with digital sound. The video signal (1,250 (1,152 visible) lines/50 frames in 16:9 aspect ratio) was encoded with a modified D2-MAC encoder.
For the 1992 Summer Olympics, experimental HD-MAC broadcasting took place. 100 HD-MAC receivers (at that time, rear-projection sets) in Europe were used to test the capabilities of the standard. This project was financed by the European Union (EU). The PAL-converted signal was used by mainstream broadcasters such as SWR, BR and 3Sat.
The HD-MAC standard was abandoned in 1993, and since then all EU and EBU efforts have focused on the DVB system (Digital Video Broadcasting), which allows both SDTV and HDTV.
See also
The analog TV systems these systems were meant to replace
SECAM
NTSC
PAL
Related standards
NICAM-like audio coding is used in the HD-MAC system.
Chroma subsampling in TV, indicated as 4:2:2, 4:1:1, etc.
Electronovision, a video tape movie production technique based on the 819-line system.
References
External links
819lignes Restore operation on a French 1951 TV set (French language only)
HDTV coverage of the Barcelona Olympic Games by M. Romero and E. Gavilan (EBU)
The HDTV demonstrations at the Expo 92 by J.L. Tejerina and F. Visintin (EBU)
European Broadcasting Union
COUNCIL DIRECTIVE 92/38/EEC of 11 May 1992.
Television technology
High-definition television
Japanese inventions | Analog high-definition television | Technology | 1,249 |
373,810 | https://en.wikipedia.org/wiki/Grassmannian | In mathematics, the Grassmannian Gr(k, V) (named in honour of Hermann Grassmann) is a differentiable manifold that parameterizes the set of all k-dimensional linear subspaces of an n-dimensional vector space V over a field K that has a differentiable structure.
For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space P(V) of one dimension lower than V.
When V is a real or complex vector space, Grassmannians are compact smooth manifolds, of dimension k(n − k). In general they have the structure of a nonsingular projective algebraic variety.
The earliest work on a non-trivial Grassmannian is due to Julius Plücker, who studied the set of projective lines in real projective 3-space, which is equivalent to Gr(2, R^4), parameterizing them by what are now called Plücker coordinates. (See below.) Hermann Grassmann later introduced the concept in general.
Notations for Grassmannians vary between authors; they include Gr(k, V), Gr_k(V), Gr(k, n), Gr_k(n) to denote the Grassmannian of k-dimensional subspaces of an n-dimensional vector space V.
Motivation
By giving a collection of subspaces of a vector space a topological structure, it is possible to talk about a continuous choice of subspaces or open and closed collections of subspaces. Giving them the further structure of a differentiable manifold, one can talk about smooth choices of subspace.
A natural example comes from tangent bundles of smooth manifolds embedded in a Euclidean space. Suppose we have a manifold M of dimension k embedded in R^n. At each point x in M, the tangent space to M can be considered as a subspace of the tangent space of R^n, which is also just R^n. The map assigning to x its tangent space defines a map from M to Gr(k, n). (In order to do this, we have to translate the tangent space at each x so that it passes through the origin rather than x, and hence defines a k-dimensional vector subspace. This idea is very similar to the Gauss map for surfaces in a 3-dimensional space.)
This can with some effort be extended to all vector bundles over a manifold M, so that every vector bundle generates a continuous map from M to a suitably generalised Grassmannian—although various embedding theorems must be proved to show this. We then find that the properties of our vector bundles are related to the properties of the corresponding maps. In particular we find that vector bundles inducing homotopic maps to the Grassmannian are isomorphic. Here the definition of homotopy relies on a notion of continuity, and hence a topology.
Low dimensions
For k = 1, the Grassmannian Gr(1, n) is the space of lines through the origin in n-space, so it is the same as the projective space of n − 1 dimensions.
For k = 2, n = 3, the Grassmannian Gr(2, 3) is the space of all 2-dimensional planes containing the origin. In Euclidean 3-space, a plane containing the origin is completely characterized by the one and only line through the origin that is perpendicular to that plane (and vice versa); hence the spaces Gr(2, 3), Gr(1, 3), and P^2 (the projective plane) may all be identified with each other.
The simplest Grassmannian that is not a projective space is Gr(2, 4).
The Grassmannian as a differentiable manifold
To endow Gr(k, V) with the structure of a differentiable manifold, choose a basis for V. This is equivalent to identifying V with K^n, with the standard basis denoted (e_1, ..., e_n), viewed as column vectors. Then for any k-dimensional subspace w ⊂ V, viewed as an element of Gr(k, V), we may choose a basis consisting of k linearly independent column vectors (W_1, ..., W_k). The homogeneous coordinates of the element w consist of the elements of the n × k maximal rank rectangular matrix W whose i-th column vector is W_i, i = 1, ..., k. Since the choice of basis is arbitrary, two such maximal rank rectangular matrices W and W~ represent the same element w if and only if
W~ = W g for some element g ∈ GL(k, K) of the general linear group of invertible k × k matrices with entries in K. This defines an equivalence relation between n × k matrices W of rank k, for which the equivalence classes are denoted [W].
We now define a coordinate atlas. For any homogeneous coordinate matrix W, we can apply elementary column operations (which amounts to multiplying W by a sequence of elements of GL(k, K)) to obtain its reduced column echelon form. If the first k rows of W are linearly independent, the result will have the form of the block matrix
[ 1_k ; A ]
consisting of the k × k identity matrix 1_k above the (n − k) × k affine coordinate matrix A,
whose entries (a_{ij}) determine w. In general, the first k rows need not be independent, but since W has maximal rank k, there exists an ordered set of integers 1 ≤ i_1 < ... < i_k ≤ n such that the submatrix whose rows are the (i_1, ..., i_k)-th rows of W is nonsingular. We may apply column operations to reduce this submatrix to the identity matrix, and the remaining entries uniquely determine w. Hence we have the following definition:
For each ordered set of integers I = (i_1, ..., i_k), let U_I be the set of elements w for which, for any choice of homogeneous coordinate matrix W, the k × k submatrix whose l-th row is the i_l-th row of W is nonsingular. The affine coordinate functions on U_I are then defined as the entries of the (n − k) × k matrix A^I whose rows are those of the matrix complementary to (i_1, ..., i_k), written in the same order. The choice of homogeneous coordinate matrix W representing w does not affect the values of the affine coordinate matrix on the coordinate neighbourhood U_I. Moreover, the coordinate matrices may take arbitrary values, and they define a diffeomorphism from U_I to the space of K-valued (n − k) × k matrices.
Denote by W^I the homogeneous coordinate matrix having the identity matrix as the k × k submatrix with rows I = (i_1, ..., i_k) and the affine coordinate matrix A^I in the consecutive complementary rows. On the overlap U_I ∩ U_J between any two such coordinate neighborhoods, the homogeneous coordinate matrices, and hence the affine coordinate matrix values A^I and A^J, are related by the transition relations
W^J = W^I (W^I_J)^{-1},
where W^I_J is the invertible k × k matrix whose l-th row is the j_l-th row of W^I. The transition functions are therefore rational in the matrix elements of A^I, and the charts {(U_I, A^I)} give an atlas for Gr(k, V) as a differentiable manifold and also as an algebraic variety.
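As a concrete numerical illustration of these charts, the following NumPy sketch (variable names are illustrative, not from the source) column-reduces a random homogeneous coordinate matrix in the chart where the first k rows are independent, and checks that the resulting affine coordinates are independent of the choice of basis:

import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2

# Random rank-k homogeneous coordinate matrix W (n x k); its columns
# span a k-dimensional subspace, i.e. a point of Gr(k, n).
W = rng.standard_normal((n, k))

# Chart where the first k rows are independent: column-reduce by the
# inverse of the top k x k block; the identity block appears on top and
# the remaining (n - k) x k block is the affine coordinate matrix A.
W_red = W @ np.linalg.inv(W[:k, :])
A = W_red[k:, :]
assert np.allclose(W_red[:k, :], np.eye(k))

# A different basis W g of the same subspace yields the same A, so the
# chart is well defined on the equivalence classes [W].
g = rng.standard_normal((k, k))
A2 = ((W @ g) @ np.linalg.inv((W @ g)[:k, :]))[k:, :]
assert np.allclose(A, A2)
print(A)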
The Grassmannian as a set of orthogonal projections
An alternative way to define a real or complex Grassmannian as a manifold is to view it as a set of orthogonal projection operators (see problem 5-C). For this, choose a positive definite real or Hermitian inner product ⟨·,·⟩ on V, depending on whether V is real or complex. A k-dimensional subspace w determines a unique orthogonal projection operator P_w : V → V whose image is w, by splitting V into the orthogonal direct sum
V = w ⊕ w^⊥
of w and its orthogonal complement w^⊥ and defining
P_w(u + u') = u for u ∈ w, u' ∈ w^⊥.
Conversely, every projection operator P of rank k defines a subspace im(P) as its image. Since the rank of an orthogonal projection operator equals its trace, we can identify the Grassmann manifold with the set of rank k orthogonal projection operators P:
Gr(k, V) ≅ { P ∈ End(V) : P = P^2 = P†, rank(P) = k }.
In particular, taking V = R^n or V = C^n this gives completely explicit equations for embedding the Grassmannians Gr(k, R^n), Gr(k, C^n) in the space of real or complex n × n matrices R^{n×n}, C^{n×n}, respectively.
Since this defines the Grassmannian as a closed subset of the sphere { X : tr(X†X) = k }, this is one way to see that the Grassmannian is a compact Hausdorff space. This construction also turns the Grassmannian into a metric space with metric
d(w, w') = ‖P_w − P_{w'}‖,
for any pair w, w' of k-dimensional subspaces, where ‖·‖ denotes the operator norm. The exact inner product used does not matter, because a different inner product will give an equivalent norm on V, and hence an equivalent metric.
For the case of real or complex Grassmannians, the following is an equivalent way to express the above construction in terms of matrices.
Grassmannians Gr(k, R^n) and Gr(k, C^n) as affine algebraic varieties
Let M(n, R) denote the space of real n × n matrices and consider the subset of matrices P that satisfy the three conditions:
P is a projection operator: P^2 = P.
P is symmetric: Pᵀ = P.
P has trace k: tr(P) = k.
There is a bijective correspondence between this subset and the Grassmannian Gr(k, R^n) of k-dimensional subspaces of R^n, given by sending P to the k-dimensional subspace of R^n spanned by its columns and, conversely, sending any element w ∈ Gr(k, R^n) to the projection matrix
P_w = Σ_{i=1}^{k} u_i u_iᵀ,
where (u_1, ..., u_k) is any orthonormal basis for w, viewed as real n-component column vectors.
An analogous construction applies to the complex Grassmannian Gr(k, C^n), identifying it bijectively with the subset of complex n × n matrices P satisfying
P is a projection operator: P^2 = P.
P is self-adjoint (Hermitian): P† = P.
P has trace k: tr(P) = k,
where the self-adjointness is with respect to the Hermitian inner product ⟨·,·⟩ in which the standard basis vectors (e_1, ..., e_n) are orthonormal. The formula for the orthogonal projection matrix onto the complex k-dimensional subspace w spanned by the orthonormal (unitary) basis vectors (u_1, ..., u_k) is
P_w = Σ_{i=1}^{k} u_i u_i†.
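A minimal numerical check of this correspondence (NumPy; names are illustrative): build the projection onto a random subspace from an orthonormal basis, verify the three defining conditions, and evaluate the projection-distance metric from the previous section:

import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2

# Orthonormal basis (u_1, ..., u_k) of a random k-dimensional subspace
# of R^n, obtained from the QR decomposition of a random n x k matrix.
U, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Orthogonal projection onto the subspace: P = sum_i u_i u_i^T = U U^T.
P = U @ U.T
assert np.allclose(P @ P, P)       # projection: P^2 = P
assert np.allclose(P.T, P)         # symmetric: P^T = P
assert np.isclose(np.trace(P), k)  # trace equals the dimension k

# Metric on Gr(k, n): operator-norm distance between the projections
# onto two subspaces; for orthogonal projections it lies in [0, 1].
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
print(np.linalg.norm(P - V @ V.T, ord=2))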
The Grassmannian as a homogeneous space
The quickest way of giving the Grassmannian a geometric structure is to express it as a homogeneous space. First, recall that the general linear group GL(V) acts transitively on the k-dimensional subspaces of V. Therefore, if we choose a subspace W_0 ⊂ V of dimension k, any element W ∈ Gr(k, V)
can be expressed as
W = g W_0
for some group element g ∈ GL(V),
where g is determined only up to right multiplication
by elements of the stabilizer of W_0:
H := { h ∈ GL(V) : h W_0 = W_0 }
under the GL(V)-action.
We may therefore identify Gr(k, V) with the quotient space
Gr(k, V) = GL(V) / H
of left cosets of H.
If the underlying field is R or C and GL(V) is considered as a Lie group, this construction makes the Grassmannian a smooth manifold under the quotient structure. More generally, over a ground field K, the group GL(V) is an algebraic group, and this construction shows that the Grassmannian is a non-singular algebraic variety. It follows from the existence of the Plücker embedding that the Grassmannian is complete as an algebraic variety. In particular, H is a parabolic subgroup of GL(V).
Over R or C it also becomes possible to use smaller groups in this construction. To do this over R, fix a Euclidean inner product q on V. The real orthogonal group O(V, q) acts transitively on the set of k-dimensional subspaces and the stabiliser of a k-space W_0 is
O(W_0) × O(W_0^⊥),
where W_0^⊥ is the orthogonal complement of W_0 in V.
This gives an identification as the homogeneous space
Gr(k, V) = O(V, q) / (O(W_0) × O(W_0^⊥)).
If we take V = R^n and W_0 = R^k (the first k components) we get the isomorphism
Gr(k, n) = O(n) / (O(k) × O(n − k)).
Over C, if we choose an Hermitian inner product h, the unitary group U(V, h) acts transitively, and we find analogously
Gr(k, V) = U(V, h) / (U(W_0) × U(W_0^⊥)),
or, for V = C^n and W_0 = C^k,
Gr(k, n) = U(n) / (U(k) × U(n − k)).
In particular, this shows that the Grassmannian is compact, and of (real or complex) dimension k(n − k).
The Grassmannian as a scheme
In the realm of algebraic geometry, the Grassmannian can be constructed as a scheme by expressing it as a representable functor.
Representable functor
Let E be a quasi-coherent sheaf on a scheme S. Fix a positive integer r. Then to each S-scheme T, the Grassmannian functor associates the set of quotient modules of
E_T := E ⊗_{O_S} O_T
locally free of rank r on T. We denote this set by Gr(r, E_T).
This functor is representable by a separated S-scheme Gr(r, E). The latter is projective if E is finitely generated. When S is the spectrum of a field K, then the sheaf E is given by a vector space V and we recover the usual Grassmannian variety of the dual space of V, namely: Gr(r, V∗).
By construction, the Grassmannian scheme is compatible with base changes: for any S-scheme S′, we have a canonical isomorphism
Gr(r, E) ×_S S′ ≅ Gr(r, E_{S′}).
In particular, for any point s of S, the canonical morphism {s} = Spec(k(s)) → S
induces an isomorphism from the fiber Gr(r, E)_s to the usual Grassmannian Gr(r, E ⊗_{O_S} k(s)) over the residue field k(s).
Universal family
Since the Grassmannian scheme represents a functor, it comes with a universal object, G, which is an object of
Gr(r, E ⊗_{O_S} O_{Gr(r, E)}),
and therefore a quotient module G of E ⊗_{O_S} O_{Gr(r, E)}, locally free of rank r over Gr(r, E). The quotient homomorphism induces a closed immersion from the projective bundle:
P(G) → P(E ⊗_{O_S} O_{Gr(r, E)}).
For any morphism of S-schemes:
T → Gr(r, E),
this closed immersion induces a closed immersion
P(G_T) → P(E_T).
Conversely, any such closed immersion comes from a surjective homomorphism of O_T-modules from E_T to a locally free module of rank r. Therefore, the elements of Gr(r, E)(T) are exactly the projective subbundles of rank r in
P(E_T).
Under this identification, when T = S is the spectrum of a field K and E is given by a vector space V, the set of rational points Gr(r, E)(K) correspond to the projective linear subspaces of dimension r − 1 in P(V), and the image of Gr(r, E)(K) in
P(Λ^r V)
is the set
{ [v_1 ∧ ⋯ ∧ v_r] ∈ P(Λ^r V) : v_1, ..., v_r ∈ V linearly independent }.
The Plücker embedding
The Plücker embedding is a natural embedding of the Grassmannian into the projectivization of the k-th exterior power of V:
ι : Gr(k, V) → P(Λ^k V).
Suppose that w is a k-dimensional subspace of the n-dimensional vector space V. To define ι(w), choose a basis (w_1, ..., w_k) for w, and let ι(w) be the projectivization of the wedge product of these basis elements:
ι(w) = [w_1 ∧ w_2 ∧ ⋯ ∧ w_k],
where [·] denotes the projective equivalence class.
A different basis for w will give a different wedge product, but the two will differ only by a non-zero scalar multiple (the determinant of the change of basis matrix). Since the right-hand side takes values in the projectivized space, ι is well-defined. To see that it is an embedding, notice that it is possible to recover w from ι(w) as the span of the set of all vectors v such that
v ∧ w_1 ∧ ⋯ ∧ w_k = 0.
Plücker coordinates and Plücker relations
The Plücker embedding of the Grassmannian satisfies a set of simple quadratic relations called the Plücker relations. These show that the Grassmannian embeds as a nonsingular projective algebraic subvariety of the projectivization of the k-th exterior power of V and give another method for constructing the Grassmannian. To state the Plücker relations, fix a basis (e_1, ..., e_n) for V, and let w be a k-dimensional subspace of V with basis (w_1, ..., w_k). Let (w_{i1}, ..., w_{in}) be the components of w_i with respect to the chosen basis of V, and (W^1, ..., W^n) the k-component column vectors forming the transpose of the corresponding homogeneous coordinate matrix:
Wᵀ = (W^1, ..., W^n).
For any ordered sequence 1 ≤ i_1 < ⋯ < i_k ≤ n of k positive integers, let w_{i_1 ⋯ i_k} be the determinant of the k × k matrix with columns (W^{i_1}, ..., W^{i_k}). The elements { w_{i_1 ⋯ i_k} } are called the Plücker coordinates of the element w of the Grassmannian (with respect to the basis (e_1, ..., e_n) of V). These are the linear coordinates of the image ι(w) of w under the Plücker map, relative to the basis of the exterior power space Λ^k V generated by the basis of V. Since a change of basis for w gives rise to multiplication of the Plücker coordinates by a nonzero constant (the determinant of the change of basis matrix), these are only defined up to projective equivalence, and hence determine a point in P(Λ^k V).
For any two ordered sequences 1 ≤ i_1 < ⋯ < i_{k−1} ≤ n and 1 ≤ j_1 < ⋯ < j_{k+1} ≤ n of k − 1 and k + 1 positive integers, respectively, the following homogeneous quadratic equations, known as the Plücker relations, or the Plücker-Grassmann relations, are valid and determine the image of Gr(k, V) under the Plücker map embedding:
Σ_{l=1}^{k+1} (−1)^l w_{i_1 ⋯ i_{k−1} j_l} w_{j_1 ⋯ ĵ_l ⋯ j_{k+1}} = 0,
where j_1 ⋯ ĵ_l ⋯ j_{k+1} denotes the sequence j_1 ⋯ j_{k+1} with the term j_l omitted. These are consistent, determining a nonsingular projective algebraic variety, but they are not algebraically independent. They are equivalent to the statement that ι(w) is the projectivization of a completely decomposable element of Λ^k V.
When dim(V) = 4 and k = 2 (the simplest Grassmannian that is not a projective space), the above reduces to a single equation. Denoting the homogeneous coordinates of the image under the Plücker map as (w_{12}, w_{13}, w_{14}, w_{23}, w_{24}, w_{34}), this single Plücker relation is
w_{12} w_{34} − w_{13} w_{24} + w_{14} w_{23} = 0.
In general, many more equations are needed to define the image of the Grassmannian in P(Λ^k V) under the Plücker embedding.
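The following NumPy sketch (0-based indices; names are illustrative) computes the six Plücker coordinates of a random element of Gr(2, 4) as 2 × 2 minors, verifies the single Plücker relation, and confirms that a change of basis merely rescales all coordinates by det(g):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Homogeneous coordinate matrix of a random element of Gr(2, 4).
W = rng.standard_normal((4, 2))

# Plücker coordinates: the 2 x 2 minors w_ij, one per row pair i < j.
p = {ij: np.linalg.det(W[list(ij), :]) for ij in combinations(range(4), 2)}

# The single Plücker relation for Gr(2, 4), in 0-based indices:
# w_01 w_23 - w_02 w_13 + w_03 w_12 = 0.
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
assert np.isclose(rel, 0.0)

# A change of basis W -> W g rescales every minor by det(g), leaving
# the projective point unchanged.
g = rng.standard_normal((2, 2))
p2 = {ij: np.linalg.det((W @ g)[list(ij), :]) for ij in p}
assert np.allclose([p2[ij] for ij in p],
                   [np.linalg.det(g) * p[ij] for ij in p])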
Duality
Every k-dimensional subspace W ⊂ V determines the (n − k)-dimensional quotient space V/W of V. This gives the natural short exact sequence:
0 → W → V → V/W → 0.
Taking the dual to each of these three spaces and the dual linear transformations yields an inclusion of (V/W)∗ in V∗ with quotient W∗:
0 → (V/W)∗ → V∗ → W∗ → 0.
Using the natural isomorphism of a finite-dimensional vector space with its double dual shows that taking the dual again recovers the original short exact sequence. Consequently there is a one-to-one correspondence between k-dimensional subspaces of V and (n − k)-dimensional subspaces of V∗. In terms of the Grassmannian, this gives a canonical isomorphism
Gr(k, V) ↔ Gr(n − k, V∗)
that associates to each subspace W its annihilator W⁰ ⊂ V∗.
Choosing an isomorphism of V with V∗ therefore determines a (non-canonical) isomorphism between Gr(k, V) and Gr(n − k, V). An isomorphism of V with V∗ is equivalent to the choice of an inner product, so with respect to the chosen inner product, this isomorphism of Grassmannians sends any k-dimensional subspace into its (n − k)-dimensional orthogonal complement.
Schubert cells
The detailed study of Grassmannians makes use of a decomposition into affine subspaces called Schubert cells, which were first applied in enumerative geometry. The Schubert cells for Gr(k, n) are defined in terms of a specified complete flag of subspaces V_1 ⊂ V_2 ⊂ ⋯ ⊂ V_n with dim(V_i) = i. For any integer partition
λ = (λ_1, ..., λ_k)
of weight
|λ| = Σ_{i=1}^{k} λ_i,
consisting of weakly decreasing non-negative integers
λ_1 ≥ ⋯ ≥ λ_k ≥ 0,
whose Young diagram fits within the rectangular one (n − k)^k, the Schubert cell X_λ(k, n) consists of those elements W ∈ Gr(k, n) whose intersections with the subspaces V_i have the following dimensions:
X_λ(k, n) = { W ∈ Gr(k, n) : dim(W ∩ V_{n−k+j−λ_j}) = j, j = 1, ..., k }.
These are affine spaces, and their closures (within the Zariski topology) are known as Schubert varieties.
As an example of the technique, consider the problem of determining the Euler characteristic χ_{n,k} of the Grassmannian of k-dimensional subspaces of R^n. Fix a 1-dimensional subspace R ⊂ R^n and consider the partition of Gr(k, R^n) into those k-dimensional subspaces of R^n that contain R and those that do not. The former is Gr(k − 1, R^{n−1}) and the latter is a rank k vector bundle over Gr(k, R^{n−1}). This gives recursive formulae:
χ_{n,k} = χ_{n−1,k−1} + (−1)^k χ_{n−1,k},   χ_{n,0} = χ_{n,n} = 1.
Solving these recursion relations gives the formula: χ_{n,k} = 0 if n is even and k is odd, and
χ_{n,k} = C(⌊n/2⌋, ⌊k/2⌋)
otherwise.
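A short script can confirm that the closed form solves the recursion over a range of cases (function names are illustrative):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chi(k, n):
    # Euler characteristic of Gr(k, R^n) via the recursion in the text.
    if k == 0 or k == n:
        return 1  # Gr(0, n) and Gr(n, n) are single points
    return chi(k - 1, n - 1) + (-1) ** k * chi(k, n - 1)

def chi_closed(k, n):
    # Closed-form solution of the recursion.
    return 0 if (n % 2 == 0 and k % 2 == 1) else comb(n // 2, k // 2)

assert all(chi(k, n) == chi_closed(k, n)
           for n in range(1, 13) for k in range(n + 1))
print(chi(2, 4))  # 2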
Cohomology ring of the complex Grassmannian
Every point in the complex Grassmann manifold Gr(k, C^n) defines a k-plane in n-space. Mapping each point in a k-plane to the point representing that plane in the Grassmannian, one obtains the vector bundle E which generalizes the tautological bundle of a projective space. Similarly the (n − k)-dimensional orthogonal complements of these planes yield an orthogonal vector bundle F. The integral cohomology of the Grassmannians is generated, as a ring, by the Chern classes of E. In particular, all of the integral cohomology is at even degree as in the case of a projective space.
These generators are subject to a set of relations, which defines the ring. The defining relations are easy to express for a larger set of generators, which consists of the Chern classes of E and F. Then the relations merely state that the direct sum of the bundles E and F is trivial. Functoriality of the total Chern classes allows one to write this relation as
c(E) c(F) = 1.
The quantum cohomology ring was calculated by Edward Witten. The generators are identical to those of the classical cohomology ring, but the top relation is changed to
c_k(E) c_{n−k}(F) = (−1)^{n−k},
reflecting the existence in the corresponding quantum field theory of an instanton with 2n fermionic zero-modes which violates the degree of the cohomology corresponding to a state by 2n units.
Associated measure
When V is an n-dimensional Euclidean space, we may define a uniform measure on Gr(k, V) in the following way. Let θ_n be the unit Haar measure on the orthogonal group O(n) and fix W ∈ Gr(k, V). Then for a set A ⊆ Gr(k, V), define
γ_{k,n}(A) = θ_n{ g ∈ O(n) : gW ∈ A }.
This measure is invariant under the action of the group O(n); that is,
γ_{k,n}(gA) = γ_{k,n}(A)
for all g ∈ O(n).
Since θ_n(O(n)) = 1, we have
γ_{k,n}(Gr(k, V)) = 1.
Moreover, γ_{k,n} is a Radon measure with respect to the metric space topology and is uniform in the sense that every ball of the same radius (with respect to this metric) is of the same measure.
Oriented Grassmannian
This is the manifold consisting of all oriented k-dimensional subspaces of R^n. It is a double cover of Gr(k, n) and is denoted by Gr~(k, n).
As a homogeneous space it can be expressed as:
Gr~(k, n) = SO(n) / (SO(k) × SO(n − k)).
Orthogonal isotropic Grassmannians
Given a real or complex nondegenerate symmetric bilinear form Q on the n-dimensional space V (i.e., a scalar product), the totally isotropic Grassmannian Gr_0(k, V) is defined as the subvariety Gr_0(k, V) ⊂ Gr(k, V) consisting of all k-dimensional subspaces w ⊂ V for which
Q(u, v) = 0 for all u, v ∈ w.
Maximal isotropic Grassmannians with respect to a real or complex scalar product are closely related to Cartan's theory of spinors.
Under the Cartan embedding, their connected components are equivariantly diffeomorphic to the projectivized minimal spinor orbit, under the spin representation, the so-called projective pure spinor variety which, similarly to the image of the Plücker map embedding, is cut out as the intersection of a number of quadrics, the
Cartan quadrics.
Applications
A key application of Grassmannians is as the "universal" embedding space for bundles with connections on compact manifolds.
Another important application is Schubert calculus, which is the enumerative geometry involved in calculating the number of points, lines, planes, etc. in a projective space that intersect a given set of points, lines, etc., using the intersection theory of Schubert varieties. Subvarieties of Schubert cells can also be used to parametrize simultaneous eigenvectors of complete sets of commuting operators in quantum integrable spin systems, such as the Gaudin model, using the Bethe ansatz method.
A further application is to the solution of hierarchies of classical completely integrable systems of partial differential equations, such as the Kadomtsev–Petviashvili equation and the associated KP hierarchy. These can be expressed in terms of abelian group flows on an infinite-dimensional Grassmann manifold. The KP equations, expressed in Hirota bilinear form in terms of the KP Tau function are equivalent to the Plücker relations.
A similar construction holds for solutions of the BKP integrable hierarchy, in terms of abelian group flows on an infinite dimensional maximal isotropic Grassmann manifold.
Finite dimensional positive Grassmann manifolds can be used to express soliton solutions of KP equations which are nonsingular for real values of the KP flow parameters.
The scattering amplitudes of subatomic particles in maximally supersymmetric super Yang-Mills theory may be calculated in the planar limit via a positive Grassmannian construct called the amplituhedron.
Grassmann manifolds have also found applications in computer vision tasks of video-based face recognition and shape recognition, and are used in the data-visualization technique known as the grand tour.
See also
Schubert calculus
For an example of the use of Grassmannians in differential geometry, see Gauss map
In projective geometry, see Plücker embedding and Plücker co-ordinates.
Flag manifolds are generalizations of Grassmannians whose elements, viewed geometrically, are nested sequences of subspaces of specified dimensions.
Stiefel manifolds are bundles of orthonormal frames over Grassmannians.
Given a distinguished class of subspaces, one can define Grassmannians of these subspaces, such as isotropic Grassmannians or Lagrangian Grassmannians.
Isotropic Grassmannian
Lagrangian Grassmannian
Grassmannians provide classifying spaces in K-theory, notably the classifying space for U(n). In the homotopy theory of schemes, the Grassmannian plays a similar role for algebraic K-theory.
Affine Grassmannian
Grassmann bundle
Grassmann graph
Notes
References
Differential geometry
Projective geometry
Algebraic homogeneous spaces
Algebraic geometry | Grassmannian | Mathematics | 4,584 |
2,028,723 | https://en.wikipedia.org/wiki/Dynel | Dynel is a trade name for a type of synthetic fiber used in fibre reinforced plastic composite materials, especially for marine applications. As it is easily dyed, it was also used to fabricate wigs. The fashion designer Pierre Cardin used Dynel fabric (which he marketed as "Cardine") to make a collection of heat-molded dresses in 1968. A copolymer of acrylonitrile and vinyl chloride, Dynel shares many properties with both polyacrylonitrile (high abrasion resistance, good tensile strength) and PVC (flame resistance). It is an acrylic resin.
Dynel was originally produced by Union Carbide corporation.
References
Synthetic fibers
Wigs
Acrylate polymers
Copolymers | Dynel | Chemistry | 160 |
47,936,812 | https://en.wikipedia.org/wiki/C13H25NO2 |
The molecular formula C13H25NO2 (molar mass: 227.34 g/mol, exact mass: 227.1885 u) may refer to:
Cyprodenate
4-Nonanoylmorpholine (MPA or MPK)
Molecular formulas | C13H25NO2 | Physics,Chemistry | 72 |
821,148 | https://en.wikipedia.org/wiki/Level%20of%20measurement | Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables. Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio. This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others. Other classifications include those by Mosteller and Tukey, and by Chrisman.
Stevens's typology
Overview
Stevens proposed his typology in a 1946 Science article titled "On the theory of scales of measurement". In that article, Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval", and "ratio", unifying both "qualitative" (which are described by his "nominal" type) and "quantitative" (to a different degree, all the rest of his scales). The concept of scale types later received the mathematical rigour that it lacked at its inception with the work of mathematical psychologists Theodore Alper (1985, 1987), Louis Narens (1981a, b), and R. Duncan Luce (1986, 1987, 2001). As Luce (1997, p. 395) wrote:
Comparison
Nominal level
A nominal scale consists only of a number of distinct classes or categories, for example: [Cat, Dog, Rabbit]. Unlike the other scales, no kind of relationship between the classes can be relied upon. Thus measuring with the nominal scale is equivalent to classifying.
Nominal measurement may differentiate between items or subjects based only on their names or (meta-)categories and other qualitative classifications they belong to. Thus it has been argued that even dichotomous data relies on a constructivist epistemology. In this case, discovery of an exception to a classification can be viewed as progress.
Numbers may be used to represent the variables but the numbers do not have numerical value or relationship: for example, a globally unique identifier.
Examples of these classifications include gender, nationality, ethnicity, language, genre, style, biological species, and form. In a university one could also use residence hall or department affiliation as examples. Other concrete examples are
in grammar, the parts of speech: noun, verb, preposition, article, pronoun, etc.
in politics, power projection: hard power, soft power, etc.
in biology, the taxonomic ranks below domains: kingdom, phylum, class, etc.
in software engineering, type of fault: specification faults, design faults, and code faults
Nominal scales were often called qualitative scales, and measurements made on qualitative scales were called qualitative data. However, the rise of qualitative research has made this usage confusing. If numbers are assigned as labels in nominal measurement, they have no specific numerical value or meaning. No form of arithmetic computation (+, −, ×, etc.) may be performed on nominal measures. The nominal level is the lowest measurement level used from a statistical point of view.
Mathematical operations
Equality and other operations that can be defined in terms of equality, such as inequality and set membership, are the only non-trivial operations that generically apply to objects of the nominal type.
Central tendency
The mode, i.e. the most common item, is allowed as the measure of central tendency for the nominal type. On the other hand, the median, i.e. the middle-ranked item, makes no sense for the nominal type of data since ranking is meaningless for the nominal type.
Ordinal scale
The ordinal type allows for rank order (1st, 2nd, 3rd, etc.) by which data can be sorted but still does not allow for a relative degree of difference between them. Examples include, on one hand, dichotomous data with dichotomous (or dichotomized) values such as "sick" vs. "healthy" when measuring health, "guilty" vs. "not-guilty" when making judgments in courts, "wrong/false" vs. "right/true" when measuring truth value, and, on the other hand, non-dichotomous data consisting of a spectrum of values, such as "completely agree", "mostly agree", "mostly disagree", "completely disagree" when measuring opinion.
The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. Rank orders represent ordinal scales and are frequently used in research relating to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10th and Ganga's position is 40th, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
Central tendency
The median, i.e. middle-ranked, item is allowed as the measure of central tendency; however, the mean (or average) as the measure of central tendency is not allowed. The mode is allowed.
In 1946, Stevens observed that psychological measurement, such as measurement of opinions, usually operates on ordinal scales; thus means and standard deviations have no validity, but they can be used to get ideas for how to improve operationalization of variables used in questionnaires. Most psychological data collected by psychometric instruments and tests, measuring cognitive and other abilities, are ordinal, although some theoreticians have argued they can be treated as interval or ratio scales. However, there is little prima facie evidence to suggest that such attributes are anything more than ordinal (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008). In particular, IQ scores reflect an ordinal scale, in which all scores are meaningful for comparison only. There is no absolute zero, and a 10-point difference may carry different meanings at different points of the scale.
Interval scale
The interval type allows for defining the degree of difference between measurements, but not the ratio between measurements. Examples include temperature scales with the Celsius scale, which has two defined points (the freezing and boiling point of water at specific conditions) and then separated into 100 intervals, date when measured from an arbitrary epoch (such as AD), location in Cartesian coordinates, and direction measured in degrees from true or magnetic north. Ratios are not meaningful since 20 °C cannot be said to be "twice as hot" as 10 °C (unlike temperature in kelvins), nor can multiplication/division be carried out between any two dates directly. However, ratios of differences can be expressed; for example, one difference can be twice another; for example, the ten-degree difference between 15 °C and 25 °C is twice the five-degree difference between 17 °C and 22 °C. Interval type variables are sometimes also called "scaled variables", but the formal mathematical term is an affine space (in this case an affine line).
Central tendency and statistical dispersion
The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
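A short numeric illustration of why origin-dependent statistics fail on interval data (the temperature values are invented for the example):

import statistics as st

celsius = [15.0, 17.0, 22.0, 25.0]
kelvin = [t + 273.15 for t in celsius]  # same data, shifted origin

# Differences and the standard deviation survive the shift unchanged...
assert abs(st.stdev(celsius) - st.stdev(kelvin)) < 1e-9

# ...but the coefficient of variation depends on the arbitrary origin,
# so it is not meaningful for interval data.
cv = lambda xs: st.stdev(xs) / st.mean(xs)
print(cv(celsius), cv(kelvin))  # two different numbers for the same data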
Ratio scale
The ratio type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit of measurement of the same kind (Michell, 1997, 1999). Most measurement in the physical sciences and engineering is done on ratio scales. Examples include mass, length, duration, plane angle, energy, and electric charge. In contrast to interval scales, values on a ratio scale can be meaningfully compared by division. Very informally, many ratio scales can be described as specifying "how much" of something (i.e. an amount or magnitude). A ratio scale is often used to express an order of magnitude, as for temperature in Orders of magnitude (temperature).
Central tendency and statistical dispersion
The geometric mean and the harmonic mean are allowed to measure the central tendency, in addition to the mode, median, and arithmetic mean. The studentized range and the coefficient of variation are allowed to measure statistical dispersion. All statistical measures are allowed because all necessary mathematical operations are defined for the ratio scale.
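An illustrative sketch (the mass values are invented) of how the multiplicative statistics permitted at the ratio level are stable under a change of unit:

```python
# Ratio scale: a true zero makes multiplicative statistics meaningful,
# and they behave sensibly under a change of unit, x -> a*x.
from statistics import geometric_mean, harmonic_mean, mean, stdev

masses_kg = [2.0, 4.0, 8.0]
masses_lb = [m * 2.20462 for m in masses_kg]  # same masses, different unit

print(geometric_mean(masses_kg))  # 4.0
print(harmonic_mean(masses_kg))   # ~3.43

# The coefficient of variation is dimensionless: identical in kg and lb.
cv = lambda xs: stdev(xs) / mean(xs)
print(cv(masses_kg), cv(masses_lb))  # ~0.65 in both units
```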
Debate on Stevens's typology
While Stevens's typology is widely adopted, it is still being challenged by other theoreticians, particularly in the cases of the nominal and ordinal types (Michell, 1986). Duncan (1986), for example, objected to the use of the word measurement in relation to the nominal type and Luce (1997) disagreed with Stevens's definition of measurement.
On the other hand, Stevens (1975) said of his own definition of measurement that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". Hand says, "Basic psychology texts often begin with Stevens's framework and the ideas are ubiquitous. Indeed, the essential soundness of his hierarchy has been established for representational measurement by mathematicians, determining the invariance properties of mappings from empirical systems to real number continua. Certainly the ideas have been revised, extended, and elaborated, but the remarkable thing is his insight given the relatively limited formal apparatus available to him and how many decades have passed since he coined them."
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such as SPSS requires the user to select the appropriate measurement class for each variable, which prevents subsequent user errors from inadvertently producing meaningless analyses (for example, a correlation analysis with a variable on a nominal level).
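The same discipline can be approximated in ordinary analysis code by declaring a variable's level up front. A minimal, hypothetical sketch using pandas ordered categoricals (the labels are invented); this mimics the idea of SPSS's measurement classes rather than reproducing SPSS itself:

```python
# Declare an ordinal variable so that invalid operations fail fast.
import pandas as pd

grade = pd.Series(
    pd.Categorical(
        ["beginner", "advanced", "intermediate", "beginner"],
        categories=["beginner", "intermediate", "advanced"],
        ordered=True,  # ordinal: comparisons allowed, arithmetic is not
    )
)

print(grade.min(), grade.max())  # order-based operations work
# grade.mean() raises TypeError: the mean is undefined for ordinal categories
```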
L. L. Thurstone made progress toward developing a justification for obtaining the interval type, based on the law of comparative judgment. A common application of the law is the analytic hierarchy process. Further progress was made by Georg Rasch (1960), who developed the probabilistic Rasch model that provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
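For reference, the dichotomous Rasch model has a standard closed form, sketched here in LaTeX: the probability of a correct response is a logistic function of the difference between a person's ability and an item's difficulty.

```latex
% Dichotomous Rasch model: probability that person n answers item i correctly,
% given person ability \beta_n and item difficulty \delta_i.
\[
  \Pr\{X_{ni} = 1\} = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
\]
```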
Other proposed typologies
Typologies aside from Stevens's typology have been proposed. For instance, Mosteller and Tukey (1977) and Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998), van den Berg (1991).
Mosteller and Tukey's typology (1977)
Mosteller and Tukey noted that the four levels are not exhaustive and proposed seven instead:
Names
Grades (ordered labels like beginner, intermediate, advanced)
Ranks (orders with 1 being the smallest or largest, 2 the next smallest or largest, and so on)
Counted fractions (bound by 0 and 1)
Counts (non-negative integers)
Amounts (non-negative real numbers)
Balances (any real number)
For example, percentages (a variation on counted fractions in the Mosteller–Tukey framework) do not fit well into Stevens's framework: no transformation is fully admissible.
Chrisman's typology (1998)
Nicholas R. Chrisman introduced an expanded list of levels of measurement to account for various measurements that do not necessarily fit with the traditional notions of levels of measurement. Measurements bound to a range and repeating (like degrees in a circle, clock time, etc.), graded membership categories, and other types of measurement do not fit Stevens's original framework, leading to the introduction of six new levels of measurement, for a total of ten:
Nominal
Gradation of membership
Ordinal
Interval
Log-interval
Extensive ratio
Cyclical ratio
Derived ratio
Counts
Absolute
While some claim that the extended levels of measurement are rarely used outside of academic geography, graded membership is central to fuzzy set theory, while absolute measurements include probabilities and the plausibility and ignorance in Dempster–Shafer theory. Cyclical ratio measurements include angles and times. Counts appear to be ratio measurements, but the scale is not arbitrary and fractional counts are commonly meaningless. Log-interval measurements are commonly displayed in stock market graphics. All of these types of measurement are commonly used outside academic geography and do not fit well into Stevens's original framework.
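The angle case makes the need for a distinct cyclical type concrete: the ordinary arithmetic mean can point in nearly the opposite direction from the data. A small illustrative Python sketch of the standard circular mean:

```python
# Cyclical data such as compass bearings need circular statistics.
import math

def circular_mean_deg(angles):
    # Average the angles as unit vectors, then take the resultant's
    # direction; the result lies in (-180, 180].
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c))

bearings = [350, 10]                  # both point close to north
print(sum(bearings) / len(bearings))  # 180.0 -- misleading
print(circular_mean_deg(bearings))    # ~0.0  -- the true mean direction
```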
Scale types and Stevens's "operational theory of measurement"
The theory of scale types is the intellectual handmaiden to Stevens's "operational theory of measurement", which was to become definitive within psychology and the behavioral sciences, despite Michell's characterization of it as being quite at odds with measurement in the natural sciences (Michell, 1999). Essentially, the operational theory of measurement was a reaction to the conclusions of a committee established in 1932 by the British Association for the Advancement of Science to investigate the possibility of genuine scientific measurement in the psychological and behavioral sciences. This committee, which became known as the Ferguson committee, published a Final Report (Ferguson, et al., 1940, p. 245) in which Stevens's sone scale (Stevens & Davis, 1938) was an object of criticism.
That is, if Stevens's sone scale genuinely measured the intensity of auditory sensations, then evidence that such sensations are quantitative attributes needed to be produced. The evidence needed was the presence of additive structure, a concept comprehensively treated by the German mathematician Otto Hölder (Hölder, 1901). Given that the physicist and measurement theorist Norman Robert Campbell dominated the Ferguson committee's deliberations, the committee concluded that measurement in the social sciences was impossible due to the lack of concatenation operations. This conclusion was later rendered false by the discovery of the theory of conjoint measurement by Debreu (1960) and independently by Luce & Tukey (1964). However, Stevens's reaction was not to conduct experiments to test for the presence of additive structure in sensations, but instead to render the conclusions of the Ferguson committee null and void by proposing a new theory of measurement.
Stevens was greatly influenced by the ideas of another Harvard academic, the Nobel laureate physicist Percy Bridgman (1927), whose doctrine of operationalism Stevens used to define measurement. In Stevens's definition, for example, it is the use of a tape measure that defines length (the object of measurement) as being measurable (and so by implication quantitative). Critics of operationalism object that it mistakes the relations between two objects or events for properties of one of those objects or events (Moyer, 1981a, b; Rogers, 1989).
The Canadian measurement theorist William Rozeboom was an early and trenchant critic of Stevens's theory of scale types.
Same variable may be different scale type depending on context
Another issue is that the same variable may be a different scale type depending on how it is measured and on the goals of the analysis. For example, hair color is usually thought of as a nominal variable, since it has no apparent ordering. However, it is possible to order colors (including hair colors) in various ways, including by hue; this is known as colorimetry. Hue is an interval-level variable.
See also
Cohen's kappa
Coherence (units of measurement)
Hume's principle
Inter-rater reliability
Logarithmic scale
Ramsey–Lewis method
Set theory
Statistical data type
Transition (linguistics)
References
Further reading
Briand, L., El Emam, K., & Morasca, S. (1995). On the Application of Measurement Theory in Software Engineering. Empirical Software Engineering, 1, 61–88. [Online] https://web.archive.org/web/20070926232755/http://www2.umassd.edu/swpi/ISERN/isern-95-04.pdf
Cliff, N. (1996). Ordinal Methods for Behavioral Data Analysis. Mahwah, NJ: Lawrence Erlbaum.
Cliff, N. & Keats, J. A. (2003). Ordinal Measurement in the Behavioral Sciences. Mahwah, NJ: Erlbaum.
Reprinted in: Haber, A., Runyon, R. P., & Badia, P. (Eds.) (1970). Readings in Statistics, Ch. 3. Reading, MA: Addison–Wesley.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison–Wesley.
Luce, R. D. (2000). Utility of uncertain gains and losses: measurement theoretic and experimental approaches. Mahwah, N.J.: Lawrence Erlbaum.
Michell, J. (1999). Measurement in Psychology – A critical history of a methodological concept. Cambridge: Cambridge University Press.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research.
Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1–49). New York: Wiley.
Stevens, S. S. (1975). Psychophysics. New York: Wiley.
Scientific method
Statistical data types
Measurement
Cognitive science
Watoga State Park is a state park located near Seebert in Pocahontas County, West Virginia. The largest of West Virginia's state parks, it covers slightly over 10,100 acres. Nearby parks include the Greenbrier River Trail, which is adjacent to the park, Beartown State Park, and Droop Mountain Battlefield State Park. Also immediately adjacent to the park is the 9,482-acre Calvin Price State Forest. The park has one of the darkest night skies of all West Virginia state parks.
History
Watoga State Park's name comes from the Cherokee word for "starry waters." The land that forms the nucleus of Watoga was originally acquired in January 1925, when the site was initially planned as a state forest. In May 1934, a decision was made to develop the site as a state park instead. Much of the development was done by the Civilian Conservation Corps (CCC), and the park first opened on July 1, 1937. Development stopped during World War II; after the war, work on the park resumed, with the first camping area opening in 1953 and eight deluxe cabins in 1956. Recreational use of the park increased during the 1960s and 1970s, requiring the addition of another camping area. Today, the park is supported by the Watoga State Park Foundation, which promotes the recreation, conservation, ecology, history, and natural resources of the park.
New Deal Resources in Watoga State Park Historic District
The New Deal Resources in Watoga State Park Historic District is a national historic district encompassing 59 contributing buildings, 35 contributing structures, 2 contributing sites, and 11 contributing objects. These include water fountains, trails, a swimming pool, a reservoir, rental cabins, and picnic shelters, as well as a former CCC camp. The park is the site of the Fred E. Brooks Memorial Arboretum, a 400-acre arboretum encompassing the drainage of Two Mile Run. Named in honor of Fred E. Brooks, a noted West Virginia naturalist who died in 1933, the arboretum was begun about 1935 and dedicated in 1938.
It was listed on the National Register of Historic Places in 2010.
Features
34 cabins
2 campgrounds with 88 total campsites (50 with electricity)
Swimming pool
Fishing lake with boat rentals
37.5 miles of hiking trails
Brooks Memorial Arboretum
Ann Bailey Lookout Tower
Greenbrier River Trail
CCC Museum
Picnic areas
Hiking Trails
Watoga State Park has many hiking trails that vary in length and difficulty. A selection of these trails includes:
Allegheny Trail
Ann Bailey Trail
Arrowhead Trail
Bearpen Trail
Brooks Memorial Arboretum Trails
Buck and Doe Trail
Burnside Ridge Trail
Honeymoon Trail
Jesse's Cove Trail
Kennison Run Trail
Lake Trail
Monongaseneka Trail
North Boundary Trail
Pine Run Trail
T. M. Cheek Trail
Ten Acre Trail
South Burnside Trail
These trails are regularly maintained by the Watoga State Park Foundation.
See also
List of West Virginia state parks
State park
References
External links
West Virginia CCC information
An entry by the International Dark-Sky Association
National Register of Historic Places in Pocahontas County, West Virginia
Historic districts in Pocahontas County, West Virginia
History of West Virginia
State parks of West Virginia
Protected areas of Pocahontas County, West Virginia
IUCN Category V
Protected areas established in 1934
Civilian Conservation Corps in West Virginia
Campgrounds in West Virginia
Parks on the National Register of Historic Places in West Virginia
Dark-sky preserves in the United States
West Virginia placenames of Native American origin